Science.gov

Sample records for adaptive wavelet collocation

  1. Parallel adaptive wavelet collocation method for PDEs

    SciTech Connect

    Nejadmalayeri, Alireza; Vezolainen, Alexei; Brown-Dymkoski, Eric; Vasilyev, Oleg V.

    2015-10-01

    A parallel adaptive wavelet collocation method for solving a large class of Partial Differential Equations is presented. The parallelization is achieved by developing an asynchronous parallel wavelet transform, which allows one to perform parallel wavelet transform and derivative calculations with only one data synchronization at the highest level of resolution. The data are stored using a tree-like structure with tree roots starting at an a priori defined level of resolution. Both static and dynamic domain partitioning approaches are developed. For the dynamic domain partitioning, trees are considered to be the minimum quanta of data to be migrated between the processes. This allows fully automated and efficient handling of non-simply connected partitioning of a computational domain. Dynamic load balancing is achieved via domain repartitioning during the grid adaptation step and reassigning trees to the appropriate processes to ensure approximately the same number of grid points on each process. The parallel efficiency of the approach is discussed based on parallel adaptive wavelet-based Coherent Vortex Simulations of homogeneous turbulence with linear forcing at effective non-adaptive resolutions up to 2048^3 using as many as 2048 CPU cores.
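
    The abstract gives no code, so the following is only a rough sketch of the dynamic load-balancing idea described above: whole trees (the minimum quanta of migrated data) are reassigned so that each process ends up with roughly the same number of grid points. The greedy heuristic, the names, and the point counts are illustrative assumptions, not the authors' implementation.

    ```python
    # Hypothetical sketch: greedy assignment of trees to processes so that each
    # process holds roughly the same number of active grid points.
    import heapq

    def balance_trees(tree_point_counts, n_procs):
        """tree_point_counts: dict mapping tree_id -> number of active grid points."""
        loads = [(0, rank) for rank in range(n_procs)]   # min-heap of (load, rank)
        heapq.heapify(loads)
        assignment = {}
        # Heaviest trees first; always hand the next tree to the least-loaded process.
        for tree_id, npoints in sorted(tree_point_counts.items(),
                                       key=lambda kv: kv[1], reverse=True):
            load, rank = heapq.heappop(loads)
            assignment[tree_id] = rank
            heapq.heappush(loads, (load + npoints, rank))
        return assignment

    # Example: 8 trees with uneven point counts distributed over 3 processes.
    counts = {0: 500, 1: 120, 2: 800, 3: 60, 4: 300, 5: 450, 6: 90, 7: 700}
    print(balance_trees(counts, n_procs=3))
    ```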

  2. Adaptive wavelet collocation method simulations of Rayleigh-Taylor instability

    NASA Astrophysics Data System (ADS)

    Reckinger, S. J.; Livescu, D.; Vasilyev, O. V.

    2010-12-01

    Numerical simulations of single-mode, compressible Rayleigh-Taylor instability are performed using the adaptive wavelet collocation method (AWCM), which utilizes wavelets for dynamic grid adaptation. Due to the physics-based adaptivity and direct error control of the method, AWCM is ideal for resolving the wide range of scales present in the development of the instability. The problem is initialized consistently with the solutions from linear stability theory. Non-reflecting boundary conditions are applied to prevent the contamination of the instability growth by pressure waves created at the interface. AWCM is used to perform direct numerical simulations that match the early-time linear growth, the terminal bubble velocity, and a reacceleration region.
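
    As a loose illustration of the single-mode setup described above, the sketch below initializes a density field with a cosine interface perturbation and a diffuse (tanh) transition between the two fluids; the domain sizes, densities, interface thickness, and amplitude are hypothetical, and the actual simulations also initialize velocity and pressure consistently with linear stability theory.

    ```python
    # Illustrative single-mode Rayleigh-Taylor initialization (parameters hypothetical).
    import numpy as np

    Lx, Lz, nx, nz = 1.0, 4.0, 128, 512
    rho_light, rho_heavy, delta, a0 = 1.0, 3.0, 0.02, 0.01
    x = np.linspace(0.0, Lx, nx, endpoint=False)
    z = np.linspace(-Lz / 2, Lz / 2, nz)
    X, Z = np.meshgrid(x, z, indexing="ij")

    k = 2 * np.pi / Lx                                  # single-mode wavenumber
    eta = a0 * np.cos(k * X)                            # perturbed interface position
    # Heavy fluid above the interface, light fluid below, smooth tanh transition.
    rho = rho_light + 0.5 * (rho_heavy - rho_light) * (1.0 + np.tanh((Z - eta) / delta))

    atwood = (rho_heavy - rho_light) / (rho_heavy + rho_light)
    print("Atwood number:", atwood, "| density field shape:", rho.shape)
    ```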

  3. Efficient Combustion Simulation via the Adaptive Wavelet Collocation Method

    NASA Astrophysics Data System (ADS)

    Lung, Kevin; Brown-Dymkoski, Eric; Guerrero, Victor; Doran, Eric; Museth, Ken; Balme, Jo; Urberger, Bob; Kessler, Andre; Jones, Stephen; Moses, Billy; Crognale, Anthony

    Rocket engine development continues to be driven by the intuition and experience of designers, progressing through extensive trial-and-error test campaigns. Extreme temperatures and pressures frustrate direct observation, while high-fidelity simulation can be impractically expensive owing to the inherent multi-scale, multi-physics nature of the problem. To address this cost, an adaptive multi-resolution PDE solver has been designed which targets the high performance, many-core architecture of GPUs. The adaptive wavelet collocation method is used to maintain a sparse-data representation of the high resolution simulation, greatly reducing the memory footprint while tightly controlling physical fidelity. The tensorial, stencil topology of wavelet-based grids lends itself to highly vectorized algorithms which are necessary to exploit the performance of GPUs. This approach permits efficient implementation of direct finite-rate kinetics, and improved resolution of steep thermodynamic gradients and the smaller mixing scales that drive combustion dynamics. Resolving these scales is crucial for accurate chemical kinetics, which are typically degraded or lost in statistical modeling approaches.
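
    The sparse-data representation rests on keeping only significant wavelet coefficients. The sketch below illustrates that idea generically with PyWavelets on a 2-D field containing a steep front; the GPU solver described above uses its own wavelet machinery, so the wavelet family, threshold, and test field here are illustrative assumptions.

    ```python
    # Generic illustration of a sparse wavelet representation: threshold the 2-D
    # wavelet coefficients of a field with a steep front, keep only the
    # significant ones, then reconstruct and measure the error.
    import numpy as np
    import pywt

    def sparsify(field, wavelet="db4", level=4, eps=1e-3):
        coeffs = pywt.wavedec2(field, wavelet, level=level)
        arr, slices = pywt.coeffs_to_array(coeffs)
        mask = np.abs(arr) >= eps * np.max(np.abs(arr))        # relative threshold
        sparse = np.where(mask, arr, 0.0)
        recon = pywt.waverec2(
            pywt.array_to_coeffs(sparse, slices, output_format="wavedec2"), wavelet)
        return recon, mask.sum() / arr.size

    xx, yy = np.meshgrid(np.linspace(0, 1, 256), np.linspace(0, 1, 256))
    field = np.tanh((xx - 0.5) / 0.02) + 0.1 * np.sin(8 * np.pi * yy)  # steep front
    recon, kept = sparsify(field)
    print(f"coefficients retained: {kept:.2%}")
    print(f"max reconstruction error: {np.abs(recon[:256, :256] - field).max():.2e}")
    ```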

  4. Spatially-Anisotropic Parallel Adaptive Wavelet Collocation Method

    NASA Astrophysics Data System (ADS)

    Vasilyev, Oleg V.; Brown-Dymkoski, Eric

    2015-11-01

    Despite the latest advancements in the development of robust wavelet-based adaptive numerical methodologies to solve partial differential equations, they all suffer from two major "curses": 1) the reliance on a rectangular domain and 2) the "curse of anisotropy" (i.e. homogeneous wavelet refinement and inability to have spatially varying aspect ratio of the mesh elements). The new method addresses both of these challenges by utilizing an adaptive anisotropic wavelet transform on curvilinear meshes that can be either algebraically prescribed or calculated on the fly using PDE-based mesh generation. In order to ensure accurate representation of spatial operators in physical space, an additional adaptation on spatial physical coordinates is also performed. It is important to note that when new nodes are added in computational space, the physical coordinates can be approximated by interpolation of the existing solution and additional local iterations to ensure that the solution of the coordinate mapping PDEs is converged on the new mesh. In contrast to traditional mesh generation approaches, the cost of adding additional nodes is minimal, mainly due to the localized nature of the iterative mesh generation PDE solver, which requires local iterations in the vicinity of newly introduced points. This work was supported by ONR MURI under grant N00014-11-1-069.

  5. Adaptive Multilevel Second-Generation Wavelet Collocation Elliptic Solver: A Cure for High Viscosity Contrasts

    NASA Astrophysics Data System (ADS)

    Kevlahan, N. N.; Vasilyev, O. V.; Yuen, D. A.

    2003-12-01

    An adaptive multilevel wavelet collocation method for solving multi-dimensional elliptic problems with localized structures is developed. The method is based on the general class of multi-dimensional second generation wavelets and is an extension of the dynamically adaptive second generation wavelet collocation method for evolution problems. Wavelet decomposition is used for grid adaptation and interpolation, while an O(N) hierarchical finite difference scheme, which takes advantage of the wavelet multilevel decomposition, is used for derivative calculations. The multilevel structure of the wavelet approximation provides a natural way to obtain the solution on a near optimal grid. In order to accelerate the convergence of the iterative solver, an iterative procedure analogous to the multigrid algorithm is developed. For problems with slowly varying viscosity, simple diagonal preconditioning works. For problems with large laterally varying viscosity contrasts, either a direct solver on shared-memory machines or a multilevel iterative solver with an incomplete LU preconditioner may be used. The method is demonstrated for the solution of a number of two-dimensional elliptic test problems with both constant and spatially varying viscosity with multiscale character.
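
    Not the wavelet solver itself, but a small stand-alone illustration of the diagonal (Jacobi) preconditioning mentioned above: a 1-D variable-viscosity elliptic problem is discretized with finite differences and solved with diagonally preconditioned conjugate gradients. The grid, viscosity profile, and solver settings are assumptions.

    ```python
    # Jacobi (diagonal) preconditioning for -(d/dx)(mu du/dx) = f on a uniform grid
    # with homogeneous Dirichlet boundaries; illustrative only, not the wavelet solver.
    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    n = 200
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)
    mu = 1.0 + 1e3 * np.exp(-((x - 0.5) / 0.05) ** 2)    # localized viscosity contrast
    mu_lo = 0.5 * (np.r_[mu[0], mu[:-1]] + mu)            # mu at the i-1/2 faces
    mu_hi = 0.5 * (mu + np.r_[mu[1:], mu[-1]])            # mu at the i+1/2 faces

    A = sp.diags([-mu_lo[1:] / h**2, (mu_lo + mu_hi) / h**2, -mu_hi[:-1] / h**2],
                 offsets=[-1, 0, 1], format="csr")
    f = np.ones(n)

    M = sp.diags(1.0 / A.diagonal())                      # Jacobi preconditioner
    u, info = spla.cg(A, f, M=M, maxiter=5000)
    print("CG info:", info, "| residual norm:", np.linalg.norm(A @ u - f))
    ```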

  6. An adaptive wavelet stochastic collocation method for irregular solutions of stochastic partial differential equations

    SciTech Connect

    Webster, Clayton G; Zhang, Guannan; Gunzburger, Max D

    2012-10-01

    Accurate predictive simulations of complex real world applications require numerical approximations that, first, oppose the curse of dimensionality and, second, converge quickly in the presence of steep gradients, sharp transitions, bifurcations or finite discontinuities in high-dimensional parameter spaces. In this paper we present a novel multi-dimensional multi-resolution adaptive (MdMrA) sparse grid stochastic collocation method that utilizes hierarchical multiscale piecewise Riesz basis functions constructed from interpolating wavelets. The basis for our non-intrusive method forms a stable multiscale splitting and thus optimal adaptation is achieved. Error estimates and numerical examples will be used to compare the efficiency of the method with several other techniques.
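
    A minimal 1-D sketch of the kind of hierarchical adaptivity underlying such methods (not the MdMrA algorithm itself): points on a dyadic grid are kept only where the hierarchical surplus of a piecewise-linear interpolating basis exceeds a tolerance, so the grid concentrates around sharp transitions. The test function, tolerance, and maximum depth are illustrative.

    ```python
    # 1-D adaptive hierarchical (interpolating-wavelet-like) collocation sketch:
    # refine a dyadic grid only where the hierarchical surplus (difference between
    # the function value and interpolation from the coarser level) is large.
    import numpy as np

    def adapt(f, max_level=12, eps=1e-3):
        pts = {0.0: f(0.0), 0.5: f(0.5), 1.0: f(1.0)}     # coarsest grid
        for level in range(2, max_level + 1):
            h = 0.5 ** level
            new = {}
            for k in range(1, 2 ** level, 2):             # odd points are new at this level
                xk = k * h
                left, right = xk - h, xk + h
                if left in pts and right in pts:           # parent cell is active
                    surplus = f(xk) - 0.5 * (pts[left] + pts[right])
                    if abs(surplus) > eps:                 # keep the point only where needed
                        new[xk] = f(xk)
            if not new:
                break
            pts.update(new)
        return pts

    f = lambda x: np.tanh((x - 0.3) / 0.01)                # steep transition at x = 0.3
    grid = adapt(f)
    print(f"{len(grid)} adaptive points vs {2**12 + 1} on a uniform level-12 grid")
    ```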

  7. Adaptive wavelet collocation methods for initial value boundary problems of nonlinear PDE's

    NASA Technical Reports Server (NTRS)

    Cai, Wei; Wang, Jian-Zhong

    1993-01-01

    We have designed a cubic spline wavelet decomposition for the Sobolev space H^2_0(I), where I is a bounded interval. Based on a special 'point-wise orthogonality' of the wavelet basis functions, a fast Discrete Wavelet Transform (DWT) is constructed. This transform maps discrete samples of a function to its wavelet expansion coefficients in O(N log N) operations. Using this transform, we propose a collocation method for the initial value boundary problem of nonlinear PDE's. Then, we test the efficiency of the DWT and apply the collocation method to solve linear and nonlinear PDE's.

  8. Adaptive wavelets and relativistic magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Hirschmann, Eric; Neilsen, David; Anderson, Matthew; Debuhr, Jackson; Zhang, Bo

    2016-03-01

    We present a method for integrating the relativistic magnetohydrodynamics equations using iterated interpolating wavelets. These wavelets provide an adaptive implementation for simulations in multiple dimensions. A measure of the local approximation error for the solution is provided by the wavelet coefficients. They place collocation points in locations naturally adapted to the flow while providing expected conservation. We present demanding 1D and 2D tests including the Kelvin-Helmholtz instability and the Rayleigh-Taylor instability. Finally, we consider an outgoing blast wave that models a GRB outflow.

  9. Adaptive Wavelet Transforms

    SciTech Connect

    Szu, H.; Hsu, C.

    1996-12-31

    Human sensor systems (HSS) may be approximately described as an adaptive or self-learning version of the Wavelet Transform (WT) that is capable of learning from several input-output associative pairs of suitable transform mother wavelets. Such an Adaptive WT (AWT) is a redundant combination of mother wavelets to either represent or classify inputs.

  10. Feasibility of using Hybrid Wavelet Collocation - Brinkman Penalization Method for Shape and Topology Optimization

    NASA Astrophysics Data System (ADS)

    Vasilyev, Oleg V.; Gazzola, Mattia; Koumoutsakos, Petros

    2009-11-01

    In this talk we discuss preliminary results for the use of a hybrid wavelet collocation - Brinkman penalization approach for shape and topology optimization of fluid flows. The adaptive wavelet collocation method tackles the problem of efficiently resolving a fluid flow on a dynamically adaptive computational grid in complex geometries (where grid resolution varies in both space and time), while Brinkman volume penalization allows easy variation of flow geometry without using body-fitted meshes by simply changing the shape of the penalization region. The use of the Brinkman volume penalization approach allows a seamless transition from shape to topology optimization by combining it with a level set approach and increasing the size of the optimization space. The approach is demonstrated for shape optimization of a variety of fluid flows by optimizing a single cost function (the time-averaged drag coefficient) using the covariance matrix adaptation (CMA) evolutionary algorithm.
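
    A minimal sketch of the Brinkman volume penalization idea mentioned above, using a 1-D Burgers-type model problem: inside a mask chi marking the obstacle, a stiff forcing term -chi/eta*u drives the velocity toward the solid value (zero here), so changing the geometry only requires changing the mask. The model equation and all parameters are illustrative assumptions.

    ```python
    # Minimal 1-D illustration of Brinkman volume penalization: a stiff penalty
    # term forces the velocity to zero inside the masked (solid) region.
    import numpy as np

    n, L, nu, eta = 400, 1.0, 1e-3, 1e-4
    dx = L / n
    x = np.linspace(0.0, L, n, endpoint=False)
    chi = ((x > 0.45) & (x < 0.55)).astype(float)     # penalized (solid) region
    u = np.sin(2 * np.pi * x)                          # initial velocity field

    dt = 0.2 * min(dx**2 / nu, eta)                    # stable step for the stiff penalty
    for _ in range(2000):
        lap = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
        adv = -u * (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)
        u += dt * (adv + nu * lap - chi / eta * u)     # Brinkman penalty term

    print("max |u| inside the obstacle:", np.abs(u[chi > 0]).max())
    ```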

  11. Shape Optimization for Drag Reduction in Linked Bodies using Evolution Strategies and the Hybrid Wavelet Collocation - Brinkman Penalization Method

    NASA Astrophysics Data System (ADS)

    Vasilyev, Oleg V.; Gazzola, Mattia; Koumoutsakos, Petros

    2010-11-01

    In this talk we discuss preliminary results for the use of a hybrid wavelet collocation - Brinkman penalization approach for shape optimization for drag reduction in flows past linked bodies. This optimization relies on the Adaptive Wavelet Collocation Method along with the Brinkman penalization technique and the Covariance Matrix Adaptation Evolution Strategy (CMA-ES). The adaptive wavelet collocation method tackles the problem of efficiently resolving a fluid flow on a dynamically adaptive computational grid, while a level set approach is used to describe the body shape and the Brinkman volume penalization allows for an easy variation of flow geometry without requiring body-fitted meshes. We perform 2D simulations of linked bodies in order to investigate whether flat geometries are optimal for drag reduction. In order to accelerate the costly cost function evaluations, we exploit the inherent parallelism of ES and extend the CMA-ES implementation to a multi-host framework. This framework allows for an easy distribution of the cost function evaluations across several parallel architectures and is not limited to a single computing facility. The resulting optimal shapes are geometrically consistent with the shapes that have been obtained in the pioneering wind tunnel experiments for drag reduction using Evolution Strategies by Ingo Rechenberg.
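
    A sketch of the optimization loop only, using the pycma package (assumed installed) in its ask/tell form; the expensive penalized-flow evaluation of the time-averaged drag is replaced here by a cheap analytic stand-in, and the shape parameterization is hypothetical.

    ```python
    # CMA-ES driving a handful of shape parameters; the drag evaluation is a toy
    # stand-in for the wavelet-collocation/Brinkman-penalization flow solve.
    import cma

    def drag_proxy(shape_params):
        # Hypothetical smooth stand-in for a time-averaged drag coefficient.
        return 0.5 + sum((p - 0.3 * i) ** 2 for i, p in enumerate(shape_params))

    es = cma.CMAEvolutionStrategy([0.5] * 4, 0.2)        # initial shape, initial step size
    while not es.stop():
        candidates = es.ask()                            # one generation of candidate shapes
        fitness = [drag_proxy(c) for c in candidates]    # evaluated in parallel in practice
        es.tell(candidates, fitness)
    print("best shape parameters found:", es.result.xbest)
    ```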

  12. Adaptive Multilinear Tensor Product Wavelets.

    PubMed

    Weiss, Kenneth; Lindstrom, Peter

    2016-01-01

    Many foundational visualization techniques including isosurfacing, direct volume rendering and texture mapping rely on piecewise multilinear interpolation over the cells of a mesh. However, there has not been much focus within the visualization community on techniques that efficiently generate and encode globally continuous functions defined by the union of multilinear cells. Wavelets provide a rich context for analyzing and processing complicated datasets. In this paper, we exploit adaptive regular refinement as a means of representing and evaluating functions described by a subset of their nonzero wavelet coefficients. We analyze the dependencies involved in the wavelet transform and describe how to generate and represent the coarsest adaptive mesh with nodal function values such that the inverse wavelet transform is exactly reproduced via simple interpolation (subdivision) over the mesh elements. This allows for an adaptive, sparse representation of the function with on-demand evaluation at any point in the domain. We focus on the popular wavelets formed by tensor products of linear B-splines, resulting in an adaptive, nonconforming but crack-free quadtree (2D) or octree (3D) mesh that allows reproducing globally continuous functions via multilinear interpolation over its cells. PMID:26529742

  13. Volumetric Rendering of Geophysical Data on Adaptive Wavelet Grid

    NASA Astrophysics Data System (ADS)

    Vezolainen, A.; Erlebacher, G.; Vasilyev, O.; Yuen, D. A.

    2005-12-01

    Numerical modeling of geological phenomena frequently involves processes across a wide range of spatial and temporal scales. In the last several years, transport phenomena governed by the Navier-Stokes equations have been simulated in wavelet space using second generation wavelets [1], and most recently on fully adaptive meshes. Our objective is to visualize this time-dependent data using volume rendering while capitalizing on the available sparse data representation. We present a technique for volumetric ray casting of multi-scale datasets in wavelet space. Rather than working with the wavelets at the finest possible resolution, we perform a partial inverse wavelet transform as a preprocessing step to obtain scaling functions on a uniform grid at a user-prescribed resolution. As a result, a function in physical space is represented by a superposition of scaling functions on a coarse regular grid and wavelets on an adaptive mesh. An efficient and accurate ray casting algorithm is based on these scaling functions alone. Additional detail is added during the ray tracing by taking an appropriate number of wavelets into account based on support overlap with the interpolation point, wavelet amplitude, and other characteristics, such as opacity accumulation (front to back ordering) and deviation from frontal viewing direction. Strategies for hardware implementation will be presented if available, inspired by the work in [2]. We will present error measures as a function of the number of scaling and wavelet functions used for interpolation. Data from mantle convection will be used to illustrate the method. [1] Vasilyev, O.V. and Bowman, C., Second Generation Wavelet Collocation Method for the Solution of Partial Differential Equations. J. Comp. Phys., 165, pp. 660-693, 2000. [2] Guthe, S., Wand, M., Gonser, J., and Straßer, W. Interactive rendering of large volume data sets. In Proceedings of the Conference on Visualization '02 (Boston, Massachusetts, October 27 - November

  14. A Haar wavelet collocation method for coupled nonlinear Schrödinger-KdV equations

    NASA Astrophysics Data System (ADS)

    Oruç, Ömer; Esen, Alaattin; Bulut, Fatih

    2016-04-01

    In this paper, to obtain accurate numerical solutions of coupled nonlinear Schrödinger-Korteweg-de Vries (KdV) equations, a Haar wavelet collocation method is proposed. An explicit time stepping scheme is used for discretization of the time derivatives, the nonlinear terms appearing in the equations are linearized by a linearization technique, and the space derivatives are discretized by Haar wavelets. In order to test the accuracy and reliability of the proposed method, L2 and L∞ error norms and conserved quantities are used. The obtained results are also compared with previous ones obtained by the finite element method, the Crank-Nicolson method and radial basis function meshless methods. An error analysis of Haar wavelets is also given.
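
    For readers unfamiliar with the building block, the sketch below constructs the Haar basis on [0, 1) at the standard collocation points and expands a test function in it; the paper combines this machinery with time stepping and linearization for the coupled system, which is not reproduced here.

    ```python
    # Haar basis evaluated at the collocation points t_l = (l + 0.5)/N, and an
    # expansion of a test function in that basis (a minimal, illustrative sketch).
    import numpy as np

    def haar_matrix(J):
        """Rows = Haar functions h_1..h_N evaluated at the N = 2**(J+1) collocation points."""
        N = 2 ** (J + 1)
        t = (np.arange(N) + 0.5) / N
        H = np.zeros((N, N))
        H[0] = 1.0                                    # scaling function
        i = 1
        for j in range(J + 1):
            m = 2 ** j
            for k in range(m):
                a, b, c = k / m, (k + 0.5) / m, (k + 1) / m
                H[i] = np.where((t >= a) & (t < b), 1.0,
                                np.where((t >= b) & (t < c), -1.0, 0.0))
                i += 1
        return H, t

    H, t = haar_matrix(J=5)                           # 64 collocation points
    f = np.sin(2 * np.pi * t)
    coeffs = np.linalg.solve(H.T, f)                  # f(t_l) = sum_i c_i h_i(t_l)
    print("expansion exact at collocation points:", np.allclose(H.T @ coeffs, f))
    ```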

  15. Eulerian Lagrangian Adaptive Fup Collocation Method for solving the conservative solute transport in heterogeneous porous media

    NASA Astrophysics Data System (ADS)

    Gotovac, Hrvoje; Srzic, Veljko

    2014-05-01

    Contaminant transport in natural aquifers is a complex, multiscale process that is frequently studied using different Eulerian, Lagrangian and hybrid numerical methods. Conservative solute transport is typically modeled using the advection-dispersion equation (ADE). Despite the large number of available numerical methods that have been developed to solve it, the accurate numerical solution of the ADE still presents formidable challenges. In particular, current numerical solutions of multidimensional advection-dominated transport in non-uniform velocity fields are affected by one or all of the following problems: numerical dispersion that introduces artificial mixing and dilution, grid orientation effects, unresolved spatial and temporal scales and unphysical numerical oscillations (e.g., Herrera et al, 2009; Bosso et al., 2012). In this work we will present the Eulerian Lagrangian Adaptive Fup Collocation Method (ELAFCM), based on Fup basis functions and a collocation approach for spatial approximation and explicit stabilized Runge-Kutta-Chebyshev temporal integration (public domain routine SERK2), which is especially well suited for stiff parabolic problems. The spatial adaptive strategy is based on Fup basis functions, which are closely related to wavelets and splines in that they are also compactly supported basis functions; they exactly describe algebraic polynomials and enable a multiresolution adaptive analysis (MRA). MRA is here performed via the Fup Collocation Transform (FCT), so that at each time step the concentration solution is decomposed using only a few significant Fup basis functions on an adaptive collocation grid with appropriate scales (frequencies) and locations, a desired level of accuracy and a near minimum computational cost. FCT adds more collocation points and higher resolution levels only in sensitive zones with sharp concentration gradients, fronts and/or narrow transition zones. According to our recent achievements there is no need for solving the large

  16. Numerical solution of fractional differential equations using cubic B-spline wavelet collocation method

    NASA Astrophysics Data System (ADS)

    Li, Xinxiu

    2012-10-01

    Physical processes with memory and hereditary properties can be best described by fractional differential equations due to the memory effect of fractional derivatives. For that reason reliable and efficient techniques for the solution of fractional differential equations are needed. Our aim is to generalize the wavelet collocation method to fractional differential equations using cubic B-spline wavelets. Analytical expressions of fractional derivatives in the Caputo sense for cubic B-spline functions are presented. The main characteristic of the approach is that it converts such problems into a system of algebraic equations which is suitable for computer programming. It not only simplifies the problem but also speeds up the computation. Numerical results demonstrate the validity and applicability of the method to solve fractional differential equations.

  17. Adapting overcomplete wavelet models to natural images

    NASA Astrophysics Data System (ADS)

    Sallee, Phil; Olshausen, Bruno A.

    2003-11-01

    Overcomplete wavelet representations have become increasingly popular for their ability to provide highly sparse and robust descriptions of natural signals. We describe a method for incorporating an overcomplete wavelet representation as part of a statistical model of images which includes a sparse prior distribution over the wavelet coefficients. The wavelet basis functions are parameterized by a small set of 2-D functions. These functions are adapted to maximize the average log-likelihood of the model for a large database of natural images. When adapted to natural images, these functions become selective to different spatial orientations, and they achieve a superior degree of sparsity on natural images as compared with traditional wavelet bases. The learned basis is similar to the Steerable Pyramid basis, and yields slightly higher SNR for the same number of active coefficients. Inference with the learned model is demonstrated for applications such as denoising, with results that compare favorably with other methods.

  18. Nonlinear adaptive wavelet analysis of electrocardiogram signals

    NASA Astrophysics Data System (ADS)

    Yang, H.; Bukkapatnam, S. T.; Komanduri, R.

    2007-08-01

    Wavelet representation can provide an effective time-frequency analysis for nonstationary signals, such as the electrocardiogram (EKG) signals, which contain both steady and transient parts. In recent years, wavelet representation has been emerging as a powerful time-frequency tool for the analysis and measurement of EKG signals. The EKG signals contain recurring, near-periodic patterns of P, QRS, T, and U waveforms, each of which can have multiple manifestations. Identification and extraction of a compact set of features from these patterns is critical for effective detection and diagnosis of various disorders. This paper presents an approach to extract a fiducial pattern of EKG based on the consideration of the underlying nonlinear dynamics. The pattern, in a nutshell, is a combination of eigenfunctions of the ensembles created from a Poincare section of EKG dynamics. The adaptation of wavelet functions to the fiducial pattern thus extracted yields a representation that is two orders of magnitude (some 95%) more compact (measured in terms of Shannon signal entropy). Such a compact representation can facilitate the extraction of features that are less sensitive to extraneous noise and other variations. The adaptive wavelet can also lead to more efficient algorithms for beat detection and QRS cancellation as well as for the extraction of multiple classical EKG signal events, such as widths of QRS complexes and QT intervals.

  19. Adaptive wavelet methods - Matrix-vector multiplication

    NASA Astrophysics Data System (ADS)

    Černá, Dana; Finěk, Václav

    2012-12-01

    The design of most adaptive wavelet methods for elliptic partial differential equations follows a general concept proposed by A. Cohen, W. Dahmen and R. DeVore in [3, 4]. The essential steps are: transformation of the variational formulation into the well-conditioned infinite-dimensional l2 problem, finding a convergent iteration process for the l2 problem, and finally deriving its finite-dimensional version which works with an inexact right hand side and approximate matrix-vector multiplications. In our contribution, we shortly review all these parts and we mainly pay attention to approximate matrix-vector multiplications. Effective approximation of matrix-vector multiplications is enabled by an off-diagonal decay of entries of the wavelet stiffness matrix. We propose here a new approach which better utilizes the actual decay of matrix entries.
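
    A toy illustration of the compression idea behind approximate matrix-vector multiplication: a model matrix with off-diagonal decay is thresholded to a sparse matrix and the resulting matvec error is measured. The decay law and tolerance are illustrative, not the scheme proposed in the contribution.

    ```python
    # Drop small entries of a matrix with off-diagonal decay and compare the
    # sparse matrix-vector product with the dense one.
    import numpy as np
    import scipy.sparse as sp

    n = 600
    i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    A = 1.0 / (1.0 + np.abs(i - j)) ** 3              # model off-diagonal decay
    x = np.random.default_rng(0).standard_normal(n)

    tol = 1e-4
    A_sparse = sp.csr_matrix(np.where(np.abs(A) >= tol, A, 0.0))
    err = np.linalg.norm(A @ x - A_sparse @ x) / np.linalg.norm(A @ x)
    nnz_fraction = A_sparse.nnz / n**2
    print(f"kept {nnz_fraction:.2%} of entries, relative matvec error {err:.1e}")
    ```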

  20. Wavelet-based adaptive numerical simulation of unsteady 3D flow around a bluff body

    NASA Astrophysics Data System (ADS)

    de Stefano, Giuliano; Vasilyev, Oleg

    2012-11-01

    The unsteady three-dimensional flow past a two-dimensional bluff body is numerically simulated using a wavelet-based method. The body is modeled by exploiting the Brinkman volume-penalization method, which results in modifying the governing equations with the addition of an appropriate forcing term inside the spatial region occupied by the obstacle. The volume-penalized incompressible Navier-Stokes equations are numerically solved by means of the adaptive wavelet collocation method, where the non-uniform spatial grid is dynamically adapted to the flow evolution. The combined approach is successfully applied to the simulation of vortex shedding flow behind a stationary prism with square cross-section. The computation is conducted at transitional Reynolds numbers, where fundamental unstable three-dimensional vortical structures exist, and the unsteady forces arising from the fluid-structure interaction are well predicted.

  1. Hierarchical Multiscale Adaptive Variable Fidelity Wavelet-based Turbulence Modeling with Lagrangian Spatially Variable Thresholding

    NASA Astrophysics Data System (ADS)

    Nejadmalayeri, Alireza

    The current work develops a wavelet-based adaptive variable fidelity approach that integrates Wavelet-based Direct Numerical Simulation (WDNS), Coherent Vortex Simulations (CVS), and Stochastic Coherent Adaptive Large Eddy Simulations (SCALES). The proposed methodology employs the notion of spatially and temporally varying wavelet thresholding combined with hierarchical wavelet-based turbulence modeling. The transition between WDNS, CVS, and SCALES regimes is achieved through two-way physics-based feedback between the modeled SGS dissipation (or other dynamically important physical quantity) and the spatial resolution. The feedback is based on spatio-temporal variation of the wavelet threshold, where the thresholding level is adjusted on the fly depending on the deviation of local significant SGS dissipation from the user prescribed level. This strategy overcomes a major limitation of all previously existing wavelet-based multi-resolution schemes: the global thresholding criterion, which does not fully utilize the spatial/temporal intermittency of the turbulent flow. Hence, the aforementioned concept of physics-based spatially variable thresholding in the context of wavelet-based numerical techniques for solving PDEs is established. The procedure consists of tracking the wavelet thresholding factor within a Lagrangian frame by exploiting a Lagrangian Path-Line Diffusive Averaging approach based on either linear averaging along characteristics or direct solution of the evolution equation. This innovative technique represents a framework of continuously variable fidelity wavelet-based space/time/model-form adaptive multiscale methodology. This methodology has been tested and has provided very promising results on a benchmark with a time-varying user prescribed level of SGS dissipation. In addition, a long-term effort to develop a novel parallel adaptive wavelet collocation method for numerical solution of PDEs has been completed during the course of the current work.
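
    The thesis text above describes the feedback qualitatively; the sketch below shows one conceivable pointwise update rule consistent with that description (threshold lowered where local SGS dissipation exceeds the prescribed level, raised where it falls short). Everything here, including the relaxation factor and bounds, is a hypothetical illustration rather than the actual Lagrangian path-line averaging procedure.

    ```python
    # Conceptual sketch of spatially variable thresholding: nudge the local wavelet
    # threshold toward the value that keeps the modeled SGS dissipation at a
    # user-prescribed target (names and constants are hypothetical).
    import numpy as np

    def update_threshold(eps, sgs_dissipation, target, relax=0.1,
                         eps_min=1e-6, eps_max=1e-1):
        """Pointwise threshold update toward the prescribed SGS dissipation level."""
        deviation = (sgs_dissipation - target) / target
        # Too much modeled dissipation -> refine (lower eps); too little -> coarsen.
        eps_new = eps * (1.0 - relax * deviation)
        return np.clip(eps_new, eps_min, eps_max)

    eps = np.full(64, 1e-3)                                # current threshold field
    sgs = 1e-2 * (1 + 0.5 * np.sin(np.linspace(0, 2 * np.pi, 64)))
    print(update_threshold(eps, sgs, target=1e-2)[:5])
    ```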

  2. Adaptive wavelets for visual object detection and classification

    NASA Astrophysics Data System (ADS)

    Aghdasi, Farzin

    1997-10-01

    We investigate the application of adaptive wavelets for the representation and classification of signals in digitized speech and medical images. A class of wavelet basis functions is used to extract features from the regions of interest. These features are then used in an artificial neural network to classify the region as containing the desired object or belonging to the background clutter. The dilation and shift parameters of the wavelet functions are not fixed. These parameters are included in the training scheme. In this way the wavelets are adaptive to the expected shape and size of the signals. The results indicate that adaptive wavelet functions may outperform the classical fixed wavelet analysis in the detection of subtle objects.

  3. 2D wavelet transform with different adaptive wavelet bases for texture defect inspection based on genetic algorithm

    NASA Astrophysics Data System (ADS)

    Liu, Hong; Mo, Yu L.

    1998-08-01

    There are many textures, such as woven fabrics, that have repeating textons. In order to handle the textural characteristics of images with defects, this paper proposes a new method based on the 2D wavelet transform. In the method, a new concept of different adaptive wavelet bases is used to match the texture pattern. The 2D wavelet transform has two different adaptive orthonormal wavelet bases for rows and columns which differ from Daubechies wavelet bases. The orthonormal wavelet bases for rows and columns are generated by a genetic algorithm. The experimental results demonstrate the ability of the different adaptive wavelet bases to characterize the texture and locate the defects in the texture.

  4. A New Adaptive Mother Wavelet for Electromagnetic Transient Analysis

    NASA Astrophysics Data System (ADS)

    Guillén, Daniel; Idárraga-Ospina, Gina; Cortes, Camilo

    2016-01-01

    The Wavelet Transform (WT) is a powerful signal processing technique, and its applications in power systems have been increasing for evaluating power system conditions such as faults, switching transients, and power quality issues, among others. Electromagnetic transients in power systems are due to changes in the network configuration, producing non-periodic signals, which have to be identified to avoid power outages in normal operation or transient conditions. In this paper a methodology to develop a new adaptive mother wavelet for electromagnetic transient analysis is proposed. Classification is carried out with an innovative technique based on adaptive wavelets, where filter bank coefficients are adapted until a discriminant criterion is optimized. The corresponding filter coefficients are then used to obtain the new mother wavelet, named wavelet ET, which allows the high-frequency information produced by different electromagnetic transients to be identified and distinguished.

  5. Wavelet approximation of correlated wave functions. II. Hyperbolic wavelets and adaptive approximation schemes

    NASA Astrophysics Data System (ADS)

    Luo, Hongjun; Kolb, Dietmar; Flad, Heinz-Jurgen; Hackbusch, Wolfgang; Koprucki, Thomas

    2002-08-01

    We have studied various aspects concerning the use of hyperbolic wavelets and adaptive approximation schemes for wavelet expansions of correlated wave functions. In order to analyze the consequences of reduced regularity of the wave function at the electron-electron cusp, we first considered a realistic exactly solvable many-particle model in one dimension. Convergence rates of wavelet expansions, with respect to L2 and H1 norms and the energy, were established for this model. We compare the performance of hyperbolic wavelets and their extensions through adaptive refinement in the cusp region, to a fully adaptive treatment based on the energy contribution of individual wavelets. Although hyperbolic wavelets show an inferior convergence behavior, they can be easily refined in the cusp region yielding an optimal convergence rate for the energy. Preliminary results for the helium atom are presented, which demonstrate the transferability of our observations to more realistic systems. We propose a contraction scheme for wavelets in the cusp region, which reduces the number of degrees of freedom and yields a favorable cost to benefit ratio for the evaluation of matrix elements.

  6. An adaptive morphological gradient lifting wavelet for detecting bearing defects

    NASA Astrophysics Data System (ADS)

    Li, Bing; Zhang, Pei-lin; Mi, Shuang-shan; Hu, Ren-xi; Liu, Dong-sheng

    2012-05-01

    This paper presents a novel wavelet decomposition scheme, named the adaptive morphological gradient lifting wavelet (AMGLW), for detecting bearing defects. The adaptability of the AMGLW lies in the scheme's ability to select between two filters, namely the average filter and the morphological gradient filter, to update the approximation signal based on the local gradient of the analyzed signal. Both a simulated signal and vibration signals acquired from bearings are employed to evaluate and compare the proposed AMGLW scheme with the traditional linear wavelet transform (LWT) and another adaptive lifting wavelet (ALW) developed in the literature. Experimental results reveal that the AMGLW clearly outperforms the LWT and ALW in detecting bearing defects. The impulsive components can be enhanced and the noise can be suppressed simultaneously by the presented AMGLW scheme. Thus the fault characteristic frequencies of the bearing can be clearly identified. Furthermore, the AMGLW has an advantage over the LWT in computational efficiency. It is quite suitable for online condition monitoring of bearings and other rotating machinery.

  7. Space-based RF signal classification using adaptive wavelet features

    SciTech Connect

    Caffrey, M.; Briles, S.

    1995-04-01

    RF signals are dispersed in frequency as they propagate through the ionosphere. For wide-band signals, this results in nonlinearly-chirped-frequency, transient signals in the VHF portion of the spectrum. This ionospheric dispersion provides a means of discriminating wide-band transients from other signals (e.g., continuous-wave carriers, burst communications, chirped-radar signals, etc.). The transient nature of these dispersed signals makes them candidates for wavelet feature selection. Rather than choosing a wavelet ad hoc, we adaptively compute an optimal mother wavelet via a neural network. Gaussian-weighted, linear frequency modulated (GLFM) wavelets are linearly combined by the network to generate our application specific mother wavelet, which is optimized for its capacity to select features that discriminate between the dispersed signals and clutter (e.g., multiple continuous-wave carriers), not for its ability to represent the dispersed signal. The resulting mother wavelet is then used to extract features for a neural network classifier. The performance of the adaptive wavelet classifier is then compared to an FFT based neural network classifier.

  8. Adaptive video compressed sampling in the wavelet domain

    NASA Astrophysics Data System (ADS)

    Dai, Hui-dong; Gu, Guo-hua; He, Wei-ji; Chen, Qian; Mao, Tian-yi

    2016-07-01

    In this work, we propose a multiscale video acquisition framework called adaptive video compressed sampling (AVCS) that involves sparse sampling and motion estimation in the wavelet domain. Implementing a combination of a binary DMD and a single-pixel detector, AVCS acquires successively finer resolution sparse wavelet representations in moving regions directly based on extended wavelet trees, and alternately uses these representations to estimate the motion in the wavelet domain. Then, we can remove the spatial and temporal redundancies and provide a method to reconstruct video sequences from compressed measurements in real time. In addition, the proposed method allows adaptive control over the reconstructed video quality. The numerical simulation and experimental results indicate that AVCS performs better than the conventional CS-based methods at the same sampling rate even under the influence of noise, and the reconstruction time and measurements required can be significantly reduced.

  9. Big data extraction with adaptive wavelet analysis (Presentation Video)

    NASA Astrophysics Data System (ADS)

    Qu, Hongya; Chen, Genda; Ni, Yiqing

    2015-04-01

    Nondestructive evaluation and sensing technology have been increasingly applied to characterize material properties and detect local damage in structures. More often than not, they generate images or data strings in which it is difficult to see any physical features without novel data extraction techniques. In the literature, popular data analysis techniques include the Short-time Fourier Transform, Wavelet Transform, and Hilbert Transform for time efficiency and adaptive recognition. In this study, a new data analysis technique is proposed and developed by introducing an adaptive central frequency of the continuous Morlet wavelet transform so that both high frequency and time resolution can be maintained in a time-frequency window of interest. The new analysis technique is referred to as Adaptive Wavelet Analysis (AWA). This paper is organized in several sections. In the first section, finite time-frequency resolution limitations in the traditional wavelet transform are introduced. Such limitations can greatly distort transformed signals whose frequency varies significantly with time. In the second section, the Short Time Wavelet Transform (STWT), similar to the Short Time Fourier Transform (STFT), is defined and developed to overcome this shortcoming of the traditional wavelet transform. In the third section, by utilizing the STWT and a time-variant central frequency of the Morlet wavelet, AWA can adapt the time-frequency resolution requirement to the signal variation over time. Finally, the advantage of the proposed AWA is demonstrated in Section 4 with a ground penetrating radar (GPR) image from a bridge deck, an analytical chirp signal with a large range sinusoidal frequency change over time, and the train-induced acceleration responses of the Tsing-Ma Suspension Bridge in Hong Kong, China. The performance of the proposed AWA will be compared with the STFT and traditional wavelet transform.
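
    A rough sketch of the frequency-adaptation step only: for each segment of a chirp, the locally dominant frequency is estimated from an FFT and a Morlet wavelet coefficient is evaluated with its scale matched to that frequency. This is an assumption-laden illustration, not the AWA or STWT algorithm from the paper.

    ```python
    # Adapt the Morlet analysis frequency to the locally dominant frequency of a
    # chirp, segment by segment (illustrative parameters throughout).
    import numpy as np

    fs, T = 1000.0, 4.0
    t = np.arange(0.0, T, 1 / fs)
    signal = np.sin(2 * np.pi * (5 * t + 10 * t ** 2))   # chirp: 5 Hz up to ~85 Hz

    def morlet_coeff(sig, t, tau, freq, w0=6.0, fs=1000.0):
        """Morlet wavelet coefficient at time tau, scale matched to `freq`."""
        s = w0 / (2 * np.pi * freq)
        u = (t - tau) / s
        psi = np.pi ** -0.25 * np.exp(1j * w0 * u) * np.exp(-u ** 2 / 2)
        return np.sum(sig * np.conj(psi)) / (np.sqrt(s) * fs)

    seg_len = 256
    for start in range(0, len(t) - seg_len, seg_len):
        seg = signal[start:start + seg_len]
        spec = np.abs(np.fft.rfft(seg * np.hanning(seg_len)))
        freqs = np.fft.rfftfreq(seg_len, 1 / fs)
        f_dom = freqs[np.argmax(spec[1:]) + 1]            # skip the DC bin
        tau = t[start + seg_len // 2]
        c = morlet_coeff(signal, t, tau, f_dom)
        print(f"t = {tau:4.2f} s  adapted frequency = {f_dom:5.1f} Hz  |W| = {abs(c):.3f}")
    ```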

  10. Data assimilation for unsaturated flow models with restart adaptive probabilistic collocation based Kalman filter

    NASA Astrophysics Data System (ADS)

    Man, Jun; Li, Weixuan; Zeng, Lingzao; Wu, Laosheng

    2016-06-01

    The ensemble Kalman filter (EnKF) has gained popularity in hydrological data assimilation problems. As a Monte Carlo based method, a sufficiently large ensemble size is usually required to guarantee the accuracy. As an alternative approach, the probabilistic collocation based Kalman filter (PCKF) employs the polynomial chaos expansion (PCE) to represent and propagate the uncertainties in parameters and states. However, PCKF suffers from the so-called "curse of dimensionality". Its computational cost increases drastically with the increasing number of parameters and system nonlinearity. Furthermore, PCKF may fail to provide accurate estimations due to the joint updating scheme for strongly nonlinear models. Motivated by recent developments in uncertainty quantification and EnKF, we propose a restart adaptive probabilistic collocation based Kalman filter (RAPCKF) for data assimilation in unsaturated flow problems. During the implementation of RAPCKF, the important parameters are identified and active PCE basis functions are adaptively selected at each assimilation step; the "restart" scheme is utilized to eliminate the inconsistency between updated model parameters and state variables. The performance of RAPCKF is systematically tested with numerical cases of unsaturated flow models. It is shown that the adaptive approach and restart scheme can significantly improve the performance of PCKF. Moreover, RAPCKF has been demonstrated to be more efficient than EnKF with the same computational cost.

  11. An Adaptive Digital Image Watermarking Algorithm Based on Morphological Haar Wavelet Transform

    NASA Astrophysics Data System (ADS)

    Huang, Xiaosheng; Zhao, Sujuan

    At present, most wavelet-based digital watermarking algorithms are based on the linear wavelet transform, and fewer on non-linear wavelet transforms. In this paper, we propose an adaptive digital image watermarking algorithm based on a non-linear wavelet transform--the Morphological Haar Wavelet Transform. In the algorithm, the original image and the watermark image are decomposed with a multi-scale morphological wavelet transform respectively. Then the watermark information is adaptively embedded into the original image at different resolutions, combining the features of the Human Visual System (HVS). The experimental results show that our method is more robust and effective than the ordinary wavelet transform algorithms.

  12. Solution of Reactive Compressible Flows Using an Adaptive Wavelet Method

    NASA Astrophysics Data System (ADS)

    Zikoski, Zachary; Paolucci, Samuel; Powers, Joseph

    2008-11-01

    This work presents numerical simulations of reactive compressible flow, including detailed multicomponent transport, using an adaptive wavelet algorithm. The algorithm allows for dynamic grid adaptation which enhances our ability to fully resolve all physically relevant scales. The thermodynamic properties, equation of state, and multicomponent transport properties are provided by CHEMKIN and TRANSPORT libraries. Results for viscous detonation in a H2:O2:Ar mixture, and other problems in multiple dimensions, are included.

  13. Adaptive wavelet-based recognition of oscillatory patterns on electroencephalograms

    NASA Astrophysics Data System (ADS)

    Nazimov, Alexey I.; Pavlov, Alexey N.; Hramov, Alexander E.; Grubov, Vadim V.; Koronovskii, Alexey A.; Sitnikova, Evgenija Y.

    2013-02-01

    The problem of automatic recognition of specific oscillatory patterns on electroencephalograms (EEG) is addressed using the continuous wavelet-transform (CWT). A possibility of improving the quality of recognition by optimizing the choice of CWT parameters is discussed. An adaptive approach is proposed to identify sleep spindles (SS) and spike wave discharges (SWD) that assumes automatic selection of CWT-parameters reflecting the most informative features of the analyzed time-frequency structures. Advantages of the proposed technique over the standard wavelet-based approaches are considered.

  14. Multiple cardiac arrhythmia recognition using adaptive wavelet network.

    PubMed

    Lin, Chia-Hung; Chen, Pei-Jarn; Chen, Yung-Fu; Lee, You-Yun; Chen, Tainsong

    2005-01-01

    This paper proposes a method for electrocardiogram (ECG) heartbeat pattern recognition using adaptive wavelet network (AWN). The ECG beat recognition can be divided into a sequence of stages, starting from feature extraction and conversion of QRS complexes, and then identifying cardiac arrhythmias based on the detected features. The discrimination method of ECG beats is a two-subnetwork architecture, consisting of a wavelet layer and a probabilistic neural network (PNN). Morlet wavelets are used to extract the features from each heartbeat, and then PNN is used to analyze the meaningful features and perform discrimination tasks. The AWN is suitable for application in a dynamic environment, with add-in and delete-off features using automatic target adjustment and parameter tuning. The experimental results obtained by testing the data of the MIT-BIH arrhythmia database demonstrate the efficiency of the proposed method. PMID:17281539

  15. Wavelet domain image restoration with adaptive edge-preserving regularization.

    PubMed

    Belge, M; Kilmer, M E; Miller, E L

    2000-01-01

    In this paper, we consider a wavelet based edge-preserving regularization scheme for use in linear image restoration problems. Our efforts build on a collection of mathematical results indicating that wavelets are especially useful for representing functions that contain discontinuities (i.e., edges in two dimensions or jumps in one dimension). We interpret the resulting theory in a statistical signal processing framework and obtain a highly flexible framework for adapting the degree of regularization to the local structure of the underlying image. In particular, we are able to adapt quite easily to scale-varying and orientation-varying features in the image while simultaneously retaining the edge preservation properties of the regularizer. We demonstrate a half-quadratic algorithm for obtaining the restorations from observed data. PMID:18255433

  16. Adaptive window-length detection of underwater transients using wavelets.

    PubMed

    Carevic, Dragana

    2005-05-01

    This paper describes a detection method that adapts to unknown characteristics of the underlying transient signal, such as location, length, and time-frequency content. It applies a set of embedded detectors tuned to a number of signal partitions. The detectors are based on the wavelet theory, whereby two different techniques are examined, one using local Fourier transform and the other using discrete wavelet transform. The detection statistics are computed so as to enable prewhitening of unknown colored noise and to allow for a constant false-alarm rate detection. An adapted segmentation of the signal is next obtained with a goal of finding the largest detection statistics within each segment of the partition. The detectors are tested using several underwater acoustic transient signals buried in ambient sea noise. PMID:15957761

  17. Solving Chemical Master Equations by an Adaptive Wavelet Method

    SciTech Connect

    Jahnke, Tobias; Galan, Steffen

    2008-09-01

    Solving chemical master equations is notoriously difficult due to the tremendous number of degrees of freedom. We present a new numerical method which efficiently reduces the size of the problem in an adaptive way. The method is based on a sparse wavelet representation and an algorithm which, in each time step, detects the essential degrees of freedom required to approximate the solution up to the desired accuracy.

  18. An efficient Bayesian inference approach to inverse problems based on an adaptive sparse grid collocation method

    NASA Astrophysics Data System (ADS)

    Ma, Xiang; Zabaras, Nicholas

    2009-03-01

    A new approach to modeling inverse problems using a Bayesian inference method is introduced. The Bayesian approach considers the unknown parameters as random variables and seeks the probabilistic distribution of the unknowns. By introducing the concept of the stochastic prior state space to the Bayesian formulation, we reformulate the deterministic forward problem as a stochastic one. The adaptive hierarchical sparse grid collocation (ASGC) method is used for constructing an interpolant to the solution of the forward model in this prior space which is large enough to capture all the variability/uncertainty in the posterior distribution of the unknown parameters. This solution can be considered as a function of the random unknowns and serves as a stochastic surrogate model for the likelihood calculation. Hierarchical Bayesian formulation is used to derive the posterior probability density function (PPDF). The spatial model is represented as a convolution of a smooth kernel and a Markov random field. The state space of the PPDF is explored using Markov chain Monte Carlo algorithms to obtain statistics of the unknowns. The likelihood calculation is performed by directly sampling the approximate stochastic solution obtained through the ASGC method. The technique is assessed on two nonlinear inverse problems: source inversion and permeability estimation in flow through porous media.

  19. Vibration suppression in cutting tools using collocated piezoelectric sensors/actuators with an adaptive control algorithm

    SciTech Connect

    Radecki, Peter P; Farinholt, Kevin M; Park, Gyuhae; Bement, Matthew T

    2008-01-01

    The machining process is very important in many engineering applications. In high precision machining, surface finish is strongly correlated with vibrations and the dynamic interactions between the part and the cutting tool. Parameters affecting these vibrations and dynamic interactions, such as spindle speed, cut depth, feed rate, and the part's material properties, can vary in real time, resulting in unexpected or undesirable effects on the surface finish of the machined product. The focus of this research is the development of an improved machining process through the use of active vibration damping. The tool holder employs a high bandwidth piezoelectric actuator with an adaptive positive position feedback control algorithm for vibration and chatter suppression. In addition, instead of using external sensors, the proposed approach investigates the use of a collocated piezoelectric sensor for measuring the dynamic responses from machining processes. The performance of this method is evaluated by comparing the surface finishes obtained with active vibration control versus baseline uncontrolled cuts. Considerable improvement in surface finish (up to 50%) was observed for applications in modern day machining.

  20. Adaptive segmentation of wavelet transform coefficients for video compression

    NASA Astrophysics Data System (ADS)

    Wasilewski, Piotr

    2000-04-01

    This paper presents a video compression algorithm suitable for inexpensive real-time hardware implementation. The algorithm utilizes the Discrete Wavelet Transform (DWT) together with a new Adaptive Spatial Segmentation Algorithm (ASSA). The algorithm was designed to obtain better or similar decompressed video quality compared to the H.263 recommendation and the MPEG standard using lower computational effort, especially at high compression rates. The algorithm was optimized for hardware implementation in low-cost Field Programmable Gate Array (FPGA) devices. The luminance and chrominance components of every frame are encoded with a 3-level Wavelet Transform with a biorthogonal filter bank. The low frequency subimage is encoded with an ADPCM algorithm. For the high frequency subimages the new Adaptive Spatial Segmentation Algorithm is applied. It divides images into rectangular blocks that may overlap each other. The width and height of the blocks are set independently. There are two kinds of blocks: Low Variance Blocks (LVB) and High Variance Blocks (HVB). The positions of the blocks and the values of the WT coefficients belonging to the HVB are encoded with modified zero-tree algorithms. LVB are encoded with the mean value. The obtained results show that the presented algorithm gives similar or better quality of decompressed images compared to H.263, with gains of up to 5 dB in PSNR.

  1. A wavelet packet adaptive filtering algorithm for enhancing manatee vocalizations.

    PubMed

    Gur, M Berke; Niezrecki, Christopher

    2011-04-01

    Approximately a quarter of all West Indian manatee (Trichechus manatus latirostris) mortalities are attributed to collisions with watercraft. A boater warning system based on the passive acoustic detection of manatee vocalizations is one possible solution to reduce manatee-watercraft collisions. The success of such a warning system depends on effective enhancement of the vocalization signals in the presence of high levels of background noise, in particular, noise emitted from watercraft. Recent research has indicated that wavelet domain pre-processing of the noisy vocalizations is capable of significantly improving the detection ranges of passive acoustic vocalization detectors. In this paper, an adaptive denoising procedure, implemented on the wavelet packet transform coefficients obtained from the noisy vocalization signals, is investigated. The proposed denoising algorithm is shown to improve the manatee detection ranges by a factor ranging from two (minimum) to sixteen (maximum) compared to high-pass filtering alone, when evaluated using real manatee vocalization and background noise signals of varying signal-to-noise ratios (SNR). Furthermore, the proposed method is also shown to outperform a previously suggested feedback adaptive line enhancer (FALE) filter by 3.4 dB on average in terms of noise suppression and 0.6 dB in terms of waveform preservation. PMID:21476661

  2. Classification of osteosarcoma T-ray responses using adaptive and rational wavelets for feature extraction

    NASA Astrophysics Data System (ADS)

    Ng, Desmond; Wong, Fu Tian; Withayachumnankul, Withawat; Findlay, David; Ferguson, Bradley; Abbott, Derek

    2007-12-01

    In this work we investigate new feature extraction algorithms on the T-ray response of normal human bone cells and human osteosarcoma cells. One of the most promising feature extraction methods is the Discrete Wavelet Transform (DWT). However, the classification accuracy is dependent on the specific wavelet basis chosen. Adaptive wavelets circumvent this problem by gradually adapting to the signal to retain optimum discriminatory information, while removing redundant information. Using adaptive wavelets, a classification accuracy of 96.88% is obtained with a quadratic Bayesian classifier based on 25 features. In addition, the potential of using rational wavelets rather than the standard dyadic wavelets in classification is explored. The advantage they have over dyadic wavelets is that they allow a better adaptation of the scale factor according to the signal. An accuracy of 91.15% is obtained through rational wavelets with 12 coefficients using a Support Vector Machine (SVM) as the classifier. These results highlight adaptive and rational wavelets as an efficient feature extraction method and the enormous potential of T-rays in cancer detection.

  3. Adaptive Wavelet-Based Direct Numerical Simulations of Rayleigh-Taylor Instability

    NASA Astrophysics Data System (ADS)

    Reckinger, Scott J.

    The compressible Rayleigh-Taylor instability (RTI) occurs when a fluid of low molar mass supports a fluid of higher molar mass against a gravity-like body force or in the presence of an accelerating front. Intrinsic to the problem are highly stratified background states, acoustic waves, and a wide range of physical scales. The objective of this thesis is to develop a specialized computational framework that addresses these challenges and to apply the advanced methodologies for direct numerical simulations of compressible RTI. Simulations are performed using the Parallel Adaptive Wavelet Collocation Method (PAWCM). Due to the physics-based adaptivity and direct error control of the method, PAWCM is ideal for resolving the wide range of scales present in RTI growth. Characteristics-based non-reflecting boundary conditions are developed for highly stratified systems to be used in conjunction with PAWCM. This combination allows for extremely long domains, which is necessary for observing the late time growth of compressible RTI. Initial conditions that minimize acoustic disturbances are also developed. The initialization is consistent with linear stability theory, where the background state consists of two diffusively mixed stratified fluids of differing molar masses. The compressibility effects on the departure from the linear growth, the onset of strong non-linear interactions, and the late-time behavior of the fluid structures are investigated. It is discovered that, for the thermal equilibrium case, the background stratification acts to suppress the instability growth when the molar mass difference is small. A reversal in this monotonic behavior is observed for large molar mass differences, where stratification enhances the bubble growth. Stratification also affects the vortex creation and the associated induced velocities. The enhancement and suppression of the RTI growth has important consequences for a detailed understanding of supernovae flame front

  4. Fault Analysis of Space Station DC Power Systems-Using Neural Network Adaptive Wavelets to Detect Faults

    NASA Technical Reports Server (NTRS)

    Momoh, James A.; Wang, Yanchun; Dolce, James L.

    1997-01-01

    This paper describes the application of neural network adaptive wavelets for fault diagnosis of the space station power system. The method combines the wavelet transform with a neural network by incorporating daughter wavelets into the weights. Therefore, the wavelet transform and neural network training procedure become one stage, which avoids the complex computation of wavelet parameters and makes the procedure more straightforward. The simulation results show that the proposed method is very efficient for the identification of fault locations.

  5. Fast Fourier and Wavelet Transforms for Wavefront Reconstruction in Adaptive Optics

    SciTech Connect

    Dowla, F U; Brase, J M; Olivier, S S

    2000-07-28

    Wavefront reconstruction techniques based on least-squares estimators are computationally quite expensive. We compare wavelet and Fourier transform techniques for addressing the computational issues of wavefront reconstruction in adaptive optics. It is shown that, because the Fourier approach is not simply a numerical approximation technique (unlike the wavelet method), it might have advantages in terms of numerical accuracy. However, strictly from a numerical computation viewpoint, the wavelet approximation method might have an advantage in terms of speed. To optimize the wavelet method, a statistical study might be necessary to select the best basis functions or "approximation tree."

  6. Morphology analysis of EKG R waves using wavelets with adaptive parameters derived from fuzzy logic

    NASA Astrophysics Data System (ADS)

    Caldwell, Max A.; Barrington, William W.; Miles, Richard R.

    1996-03-01

    Understanding of the EKG components P, QRS (R wave), and T is essential in recognizing cardiac disorders and arrhythmias. An estimation method is presented that models the R wave component of the EKG by adaptively computing wavelet parameters using fuzzy logic. The parameters are adjusted adaptively to minimize the difference between the original EKG waveform and the wavelet. The R wave estimate is derived by minimizing a combination of mean squared error (MSE), amplitude difference, spread difference, and shift difference. We show that the MSE, in both noise-free and additive-noise environments, is lower with an adaptive wavelet than with a static wavelet. Research to date has focused on the R wave component of the EKG signal. Extensions of this method to model P and T waves are discussed.
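
    A small sketch of the underlying estimation idea, assuming NumPy and SciPy; a Mexican-hat (Ricker) shape and an ordinary least-squares optimiser stand in for the paper's wavelet model and fuzzy-logic parameter adaptation, and the synthetic R-wave segment is illustrative.

      import numpy as np
      from scipy.optimize import least_squares

      def ricker(t, amplitude, shift, spread):
          """Mexican-hat wavelet with free amplitude, shift and spread parameters."""
          x = (t - shift) / spread
          return amplitude * (1.0 - x ** 2) * np.exp(-0.5 * x ** 2)

      def fit_r_wave(t, segment, guess=(1.0, 0.0, 0.01)):
          """Adjust (amplitude, shift, spread) to minimise the misfit to the R-wave segment."""
          return least_squares(lambda p: ricker(t, *p) - segment, x0=guess).x

      # Illustrative noisy R wave: amplitude 1.2, shifted by 10 ms, spread 8 ms.
      rng = np.random.default_rng(1)
      t = np.linspace(-0.05, 0.05, 200)
      segment = ricker(t, 1.2, 0.010, 0.008) + 0.05 * rng.standard_normal(t.size)

      amp, shift, spread = fit_r_wave(t, segment)
      mse = np.mean((ricker(t, amp, shift, spread) - segment) ** 2)
      print(f"amplitude={amp:.2f}, shift={shift * 1e3:.1f} ms, spread={spread * 1e3:.1f} ms, MSE={mse:.4f}")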

  7. Wavelet-Based Speech Enhancement Using Time-Adapted Noise Estimation

    NASA Astrophysics Data System (ADS)

    Lei, Sheau-Fang; Tung, Ying-Kai

    Spectral subtraction is commonly used for speech enhancement in a single channel system because of the simplicity of its implementation. However, this algorithm introduces perceptually musical noise while suppressing the background noise. We propose a wavelet-based approach in this paper for suppressing the background noise for speech enhancement in a single channel system. The wavelet packet transform, which emulates the human auditory system, is used to decompose the noisy signal into critical bands. Wavelet thresholding is then temporally adjusted with the noise power by time-adapted noise estimation. The proposed algorithm can efficiently suppress the noise while reducing speech distortion. Experimental results, including several objective measurements, show that the proposed wavelet-based algorithm outperforms spectral subtraction and other wavelet-based denoising approaches for speech enhancement for nonstationary noise environments.
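
    A rough sketch of the processing chain described above, assuming PyWavelets; a uniform level-5 wavelet packet tree replaces the critical-band tree, and the per-frame median-based noise estimate stands in for the paper's time-adapted noise estimation, so all parameter values are illustrative.

      import numpy as np
      import pywt

      def wp_denoise(noisy, wavelet="db8", level=5, frame=2048):
          """Frame-by-frame wavelet-packet soft thresholding with a per-frame noise estimate."""
          out = np.array(noisy, dtype=float)
          for start in range(0, len(noisy) - frame + 1, frame):
              x = noisy[start:start + frame]
              wp = pywt.WaveletPacket(data=x, wavelet=wavelet, mode="symmetric", maxlevel=level)
              for node in wp.get_level(level, order="natural"):
                  sigma = np.median(np.abs(node.data)) / 0.6745     # noise level for this frame and band
                  thr = sigma * np.sqrt(2.0 * np.log(frame))
                  node.data = pywt.threshold(node.data, thr, mode="soft")
              out[start:start + frame] = wp.reconstruct(update=True)[:frame]
          return out                                                # incomplete last frame left untouched

      # Illustrative usage on a synthetic noisy tone burst standing in for speech.
      rng = np.random.default_rng(2)
      fs = 16000
      t = np.arange(fs) / fs
      clean = 0.6 * np.sin(2 * np.pi * 220 * t) * np.exp(-3 * t)
      noisy = clean + 0.1 * rng.standard_normal(clean.size)
      enhanced = wp_denoise(noisy)
      snr = lambda ref, sig: 10 * np.log10(np.sum(ref ** 2) / np.sum((sig - ref) ** 2))
      print(f"input SNR {snr(clean, noisy):.1f} dB -> output SNR {snr(clean, enhanced):.1f} dB")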

  8. Multiple Adaptations and Content-Adaptive FEC Using Parameterized RD Model for Embedded Wavelet Video

    NASA Astrophysics Data System (ADS)

    Yu, Ya-Huei; Ho, Chien-Peng; Tsai, Chun-Jen

    2007-12-01

    Scalable video coding (SVC) has been an active research topic for the past decade. In the past, most SVC technologies were based on a coarse-granularity scalable model which puts many scalability constraints on the encoded bitstreams. As a result, the application scenario of adapting a preencoded bitstream multiple times along the distribution chain has not been seriously investigated before. In this paper, a model-based multiple-adaptation framework based on a wavelet video codec, MC-EZBC, is proposed. The proposed technology allows multiple adaptations on both the video data and the content-adaptive FEC protection codes. For multiple adaptations of video data, rate-distortion information must be embedded within the video bitstream in order to allow rate-distortion optimized operations for each adaptation. Experimental results show that the proposed method reduces the amount of side information by more than 50% on average when compared to the existing technique. It also reduces the number of iterations required to perform the tier-2 entropy coding by more than 64% on average. In addition, due to the nondiscrete nature of the rate-distortion model, the proposed framework also enables multiple adaptations of content-adaptive FEC protection scheme for more flexible error-resilient transmission of bitstreams.

  9. An adaptive demodulation approach for bearing fault detection based on adaptive wavelet filtering and spectral subtraction

    NASA Astrophysics Data System (ADS)

    Zhang, Yan; Tang, Baoping; Liu, Ziran; Chen, Rengxiang

    2016-02-01

    Fault diagnosis of rolling element bearings is important for improving mechanical system reliability and performance. Vibration signals contain a wealth of complex information useful for state monitoring and fault diagnosis. However, any fault-related impulses in the original signal are often severely tainted by various noises and the interfering vibrations caused by other machine elements. Narrow-band amplitude demodulation has been an effective technique to detect bearing faults by identifying bearing fault characteristic frequencies. To achieve this, the key step is to remove the corrupting noise and interference, and to enhance the weak signatures of the bearing fault. In this paper, a new method based on adaptive wavelet filtering and spectral subtraction is proposed for fault diagnosis in bearings. First, to eliminate the frequency associated with interfering vibrations, the vibration signal is bandpass filtered with a Morlet wavelet filter whose parameters (i.e. center frequency and bandwidth) are selected in separate steps. An alternative and efficient method of determining the center frequency is proposed that utilizes the statistical information contained in the production functions (PFs). The bandwidth parameter is optimized using a local ‘greedy’ scheme along with Shannon wavelet entropy criterion. Then, to further reduce the residual in-band noise in the filtered signal, a spectral subtraction procedure is elaborated after wavelet filtering. Instead of resorting to a reference signal as in the majority of papers in the literature, the new method estimates the power spectral density of the in-band noise from the associated PF. The effectiveness of the proposed method is validated using simulated data, test rig data, and vibration data recorded from the transmission system of a helicopter. The experimental results and comparisons with other methods indicate that the proposed method is an effective approach to detecting the fault-related impulses
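
    A compact sketch of the narrow-band demodulation step only, assuming NumPy and SciPy; a Gaussian frequency-domain window (the magnitude response of a Morlet-type filter) with hand-picked centre frequency and bandwidth replaces the paper's PF-based parameter selection, and the spectral-subtraction stage is omitted.

      import numpy as np
      from scipy.signal import hilbert

      def morlet_bandpass(x, fs, fc, bandwidth):
          """Band-pass x with a Gaussian window centred at fc (Hz), i.e. a Morlet-type filter."""
          freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
          gain = np.exp(-0.5 * ((freqs - fc) / bandwidth) ** 2)
          return np.fft.irfft(np.fft.rfft(x) * gain, n=len(x))

      def envelope_spectrum(x, fs):
          env = np.abs(hilbert(x))
          env -= env.mean()
          return np.fft.rfftfreq(len(env), d=1.0 / fs), np.abs(np.fft.rfft(env)) / len(env)

      # Illustrative signal: 107 Hz fault impulses exciting a 3 kHz resonance, plus noise.
      fs, f_fault, f_res = 20000, 107.0, 3000.0
      t = np.arange(fs) / fs
      rng = np.random.default_rng(3)
      impulses = (np.sin(2 * np.pi * f_fault * t) > 0.999).astype(float)
      ringing = np.sin(2 * np.pi * f_res * t[:200]) * np.exp(-400.0 * t[:200])
      signal = np.convolve(impulses, ringing, mode="same") + 0.5 * rng.standard_normal(t.size)

      filtered = morlet_bandpass(signal, fs, fc=f_res, bandwidth=500.0)
      freqs, spec = envelope_spectrum(filtered, fs)
      print("strongest envelope line at about %.0f Hz" % freqs[1 + np.argmax(spec[1:])])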

  10. Wavelet based ECG compression with adaptive thresholding and efficient coding.

    PubMed

    Alshamali, A

    2010-01-01

    This paper proposes a new wavelet-based ECG compression technique. It is based on optimized thresholds to determine significant wavelet coefficients and on efficient coding of their positions. Huffman encoding is used to enhance the compression ratio. The proposed technique is tested on several records taken from the MIT-BIH arrhythmia database. Simulation results show that it outperforms previously published schemes. PMID:20608811
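
    A minimal sketch of the threshold-and-keep part of such a scheme, assuming PyWavelets; the position coding and Huffman stage are omitted, the 'bior4.4' wavelet and energy-retention target are illustrative, and the compression ratio reported here simply counts retained coefficients.

      import numpy as np
      import pywt

      def compress_ecg(x, wavelet="bior4.4", level=5, energy_keep=0.999):
          """Zero the smallest wavelet coefficients while retaining a fixed energy fraction."""
          coeffs = pywt.wavedec(x, wavelet, level=level)
          flat, slices = pywt.coeffs_to_array(coeffs)
          order = np.argsort(np.abs(flat))[::-1]
          cum_energy = np.cumsum(flat[order] ** 2) / np.sum(flat ** 2)
          n_keep = int(np.searchsorted(cum_energy, energy_keep)) + 1
          kept = np.zeros_like(flat)
          kept[order[:n_keep]] = flat[order[:n_keep]]
          rec = pywt.waverec(pywt.array_to_coeffs(kept, slices, output_format="wavedec"), wavelet)
          return rec[:len(x)], len(flat) / n_keep      # reconstruction, naive compression ratio

      # Illustrative synthetic "ECG": a train of narrow beats plus a little noise.
      rng = np.random.default_rng(4)
      t = np.linspace(0.0, 4.0, 4096)
      ecg = sum(np.exp(-((t - c) / 0.02) ** 2) for c in np.arange(0.5, 4.0, 0.8))
      ecg += 0.01 * rng.standard_normal(t.size)
      rec, cr = compress_ecg(ecg)
      prd = 100.0 * np.sqrt(np.sum((ecg - rec) ** 2) / np.sum(ecg ** 2))
      print(f"kept-coefficient compression ratio ~{cr:.1f}:1, PRD = {prd:.2f}%")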

  11. Wavelet multiresolution analyses adapted for the fast solution of boundary value ordinary differential equations

    NASA Technical Reports Server (NTRS)

    Jawerth, Bjoern; Sweldens, Wim

    1993-01-01

    We present ideas on how to use wavelets in the solution of boundary value ordinary differential equations. Rather than using classical wavelets, we adapt their construction so that they become (bi)orthogonal with respect to the inner product defined by the operator. The stiffness matrix in a Galerkin method then becomes diagonal and can thus be trivially inverted. We show how one can construct an O(N) algorithm for various constant and variable coefficient operators.

  12. Spatially adaptive bases in wavelet-based coding of semi-regular meshes

    NASA Astrophysics Data System (ADS)

    Denis, Leon; Florea, Ruxandra; Munteanu, Adrian; Schelkens, Peter

    2010-05-01

    In this paper we present a wavelet-based coding approach for semi-regular meshes, which spatially adapts the employed wavelet basis in the wavelet transformation of the mesh. The spatially-adaptive nature of the transform requires additional information to be stored in the bit-stream in order to allow the reconstruction of the transformed mesh at the decoder side. In order to limit this overhead, the mesh is first segmented into regions of approximately equal size. For each spatial region, a predictor is selected in a rate-distortion optimal manner by using a Lagrangian rate-distortion optimization technique. When compared against the classical wavelet transform employing the butterfly subdivision filter, experiments reveal that the proposed spatially-adaptive wavelet transform significantly decreases the energy of the wavelet coefficients for all subbands. Preliminary results show also that employing the proposed transform for the lowest-resolution subband systematically yields improved compression performance at low-to-medium bit-rates. For the Venus and Rabbit test models the compression improvements add up to 1.47 dB and 0.95 dB, respectively.

  13. Adaptive Redundant Lifting Wavelet Transform Based on Fitting for Fault Feature Extraction of Roller Bearings

    PubMed Central

    Yang, Zijing; Cai, Ligang; Gao, Lixin; Wang, Huaqing

    2012-01-01

    A least-squares method based on data fitting is proposed to construct a new lifting wavelet; combined with a nonlinear scheme and a redundant algorithm, the adaptive redundant lifting transform based on fitting is first presented in this paper. By varying the combination of basis function, sample number, and basis-function dimension, a total of nine wavelets with different characteristics are constructed and are respectively adopted to perform redundant lifting wavelet transforms on the low-frequency approximation signals at each layer. The normalized lp norms of the node signals obtained through decomposition are then calculated to adaptively determine the optimal wavelet for the decomposed approximation signal. Next, the original signal is subjected to sectional power spectrum analysis to choose the node signal for single-branch reconstruction and demodulation. Experimental signals and engineering signals are used to verify the method, and the results show that bearing faults can be diagnosed more effectively by the method presented here than by either spectrum analysis or demodulation analysis alone. Meanwhile, compared with the symmetrical wavelets constructed with the Lagrange interpolation algorithm, the asymmetrical wavelets constructed by data fitting are more suitable for feature extraction from roller bearing fault signals. PMID:22666035
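
    For orientation, a tiny sketch of a single lifting step (split, predict, update) with linear predictors, assuming NumPy and periodic boundaries; the least-squares fitted predictors, the redundancy and the lp-norm wavelet selection described in the paper are not shown.

      import numpy as np

      def lifting_forward(x):
          """One level of a linear-interpolating lifting transform (CDF(2,2)-style, periodic)."""
          even, odd = x[0::2].astype(float), x[1::2].astype(float)
          detail = odd - 0.5 * (even + np.roll(even, -1))       # predict odd samples from even neighbours
          approx = even + 0.25 * (detail + np.roll(detail, 1))  # update evens to preserve the mean
          return approx, detail

      def lifting_inverse(approx, detail):
          even = approx - 0.25 * (detail + np.roll(detail, 1))
          odd = detail + 0.5 * (even + np.roll(even, -1))
          x = np.empty(even.size + odd.size)
          x[0::2], x[1::2] = even, odd
          return x

      # Perfect-reconstruction check on a random even-length signal.
      x = np.random.default_rng(6).standard_normal(64)
      a, d = lifting_forward(x)
      print("max reconstruction error:", np.max(np.abs(lifting_inverse(a, d) - x)))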

  14. Spatially adaptive Bayesian wavelet thresholding for speckle removal in medical ultrasound images

    NASA Astrophysics Data System (ADS)

    Hou, Jianhua; Xiong, Chengyi; Chen, Shaoping; He, Xiang

    2007-12-01

    In this paper, a novel spatially adaptive wavelet thresholding method based on the Bayesian maximum a posteriori (MAP) criterion is proposed for speckle removal in medical ultrasound (US) images. The method first applies a logarithmic transform to the original speckled ultrasound image, followed by a redundant wavelet transform. The Rayleigh distribution is used to model the wavelet coefficients of the speckle, and the Laplacian distribution to model the statistics of the wavelet coefficients due to the signal. A Bayesian estimator with an analytical formula is derived from MAP estimation, and the resulting formula is shown to be equivalent to soft thresholding, which makes the algorithm very simple. In order to exploit the correlation among wavelet coefficients, the parameters of the Laplacian model are assumed to be spatially correlated and can be computed from the coefficients in a neighboring window, thus making the method spatially adaptive in the wavelet domain. Theoretical analysis and simulation results show that the proposed method can effectively suppress speckle noise in medical US images while preserving important signal features and details as much as possible.
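
    An illustrative sketch of the overall pipeline (log transform, redundant 2-D wavelet transform, window-based locally adaptive shrinkage, exponentiation), assuming PyWavelets and SciPy; a generic BayesShrink-style local threshold is used in place of the Rayleigh/Laplacian MAP estimator derived in the paper, and all parameters are placeholders.

      import numpy as np
      import pywt
      from scipy.ndimage import uniform_filter

      def despeckle(img, wavelet="sym4", level=2, win=7):
          """Log-transform, stationary 2-D wavelet transform, locally adaptive soft shrinkage."""
          log_img = np.log(img + 1e-6)
          coeffs = pywt.swt2(log_img, wavelet, level=level)
          shrunk = []
          for cA, (cH, cV, cD) in coeffs:
              bands = []
              for c in (cH, cV, cD):
                  sigma_n = np.median(np.abs(c)) / 0.6745              # robust noise scale per band
                  local_var = uniform_filter(c ** 2, size=win)         # local signal+noise energy
                  sigma_x = np.sqrt(np.maximum(local_var - sigma_n ** 2, 1e-12))
                  thr = sigma_n ** 2 / sigma_x                         # locally adaptive threshold
                  bands.append(np.sign(c) * np.maximum(np.abs(c) - thr, 0.0))   # soft shrinkage
              shrunk.append((cA, tuple(bands)))
          return np.exp(pywt.iswt2(shrunk, wavelet))

      # Illustrative usage on a synthetic image with multiplicative (speckle-like) noise.
      rng = np.random.default_rng(5)
      clean = np.outer(np.hanning(128), np.hanning(128)) + 0.2
      speckled = clean * rng.rayleigh(scale=1.0, size=clean.shape)
      print("despeckled image shape:", despeckle(speckled).shape)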

  15. A wavelet approach to binary blackholes with asynchronous multitasking

    NASA Astrophysics Data System (ADS)

    Lim, Hyun; Hirschmann, Eric; Neilsen, David; Anderson, Matthew; Debuhr, Jackson; Zhang, Bo

    2016-03-01

    Highly accurate simulations of binary black holes and neutron stars are needed to address a variety of interesting problems in relativistic astrophysics. We present a new method for solving the Einstein equations (BSSN formulation) using iterated interpolating wavelets. Wavelet coefficients provide a direct measure of the local approximation error of the solution and place collocation points that naturally adapt to features of the solution. Further, they exhibit exponential convergence on unevenly spaced collocation points. The parallel implementation of the wavelet simulation framework presented here deviates from conventional practice by combining multi-threading with a form of message-driven computation sometimes referred to as asynchronous multitasking.

  16. Compression of the electrocardiogram (ECG) using an adaptive orthonormal wavelet basis architecture

    NASA Astrophysics Data System (ADS)

    Anandkumar, Janavikulam; Szu, Harold H.

    1995-04-01

    This paper deals with the compression of electrocardiogram (ECG) signals using a large library of orthonormal basis functions that are translated and dilated versions of Daubechies wavelets. The wavelet transform has been implemented using quadrature mirror filters (QMF) employed in a sub-band coding scheme. Interesting transients and notable frequencies of the ECG are captured by appropriately scaled waveforms chosen in a parallel fashion from this collection of wavelets. Since there is a choice of orthonormal basis functions for the efficient transcription of the ECG, it is possible to choose the best one by various criteria. We have imposed very stringent threshold conditions on the wavelet expansion coefficients, such as maintaining a very large percentage of the energy of the current signal segment, and this has resulted in reconstructed waveforms with negligible distortion relative to the source signal. Even without the use of any specialized quantizers and encoders, the compression ratios are encouraging, with preliminary results indicating compression ratios ranging from 40:1 to 15:1 at percentage rms distortions ranging from about 22% to 2.3%, respectively. Irrespective of the ECG lead chosen, or the signal deviations that may occur due to either noise or arrhythmias, only the wavelet family that correlates best with that particular portion of the signal is chosen. The compression is achieved mainly because the chosen mother wavelet and its variations match the shape of the ECG and are able to efficiently transcribe the source with few wavelet coefficients. The adaptive template-matching architecture that carries out a parallel search of the transform domain is described, and preliminary simulation results are discussed. The adaptivity of the architecture comes from the fine tuning of the wavelet selection process, which is based on localized constraints such as the shape of the signal and its energy.

  17. Serial identification of EEG patterns using adaptive wavelet-based analysis

    NASA Astrophysics Data System (ADS)

    Nazimov, A. I.; Pavlov, A. N.; Nazimova, A. A.; Grubov, V. V.; Koronovskii, A. A.; Sitnikova, E.; Hramov, A. E.

    2013-10-01

    The problem of recognizing specific oscillatory patterns in electroencephalograms with the continuous wavelet transform is discussed. Aiming to improve the abilities of wavelet-based tools, we propose a serial adaptive method for sequential identification of EEG patterns such as sleep spindles and spike-wave discharges. This method provides an optimal selection of parameters based on objective functions and enables extraction of the most informative features of the recognized structures. Different ways of increasing the quality of pattern recognition within the proposed serial adaptive technique are considered.

  18. Multiresolution Wavelet Based Adaptive Numerical Dissipation Control for Shock-Turbulence Computations

    NASA Technical Reports Server (NTRS)

    Sjoegreen, B.; Yee, H. C.

    2001-01-01

    The recently developed essentially fourth-order or higher low-dissipative shock-capturing scheme of Yee, Sandham and Djomehri (1999) aimed at minimizing numerical dissipation for high-speed compressible viscous flows containing shocks, shears and turbulence. To detect non-smooth behavior and control the amount of numerical dissipation to be added, Yee et al. employed the artificial compression method (ACM) of Harten (1978), but utilized it in an entirely different context than Harten originally intended. The ACM sensor involves two tuning parameters and is highly dependent on the physical problem. To minimize parameter tuning and problem dependence, new sensors with improved detection properties are proposed. The new sensors are derived from appropriate non-orthogonal wavelet basis functions and can be used to switch off the extra numerical dissipation outside shock layers. The non-dissipative spatial base scheme of arbitrarily high order of accuracy can be maintained without compromising its stability in all parts of the domain where the solution is smooth. Two types of redundant non-orthogonal wavelet basis functions are considered. One is the B-spline wavelet (Mallat & Zhong 1992) used by Gerritsen and Olsson (1996) in an adaptive mesh refinement method to determine regions where refinement should be done. The other is a modification of the multiresolution method of Harten (1995), converting it to a new, redundant, non-orthogonal wavelet. The wavelet sensor is then obtained by computing the estimated Lipschitz exponent of a chosen physical quantity (or vector) on a chosen wavelet basis function. Both wavelet sensors can be viewed as dual-purpose adaptive methods leading to dynamic numerical dissipation control and improved grid adaptation indicators. Consequently, they are useful not only for shock-turbulence computations but also for computational aeroacoustics and numerical combustion. In addition, these

  19. Mouse EEG spike detection based on the adapted continuous wavelet transform

    NASA Astrophysics Data System (ADS)

    Tieng, Quang M.; Kharatishvili, Irina; Chen, Min; Reutens, David C.

    2016-04-01

    Objective. Electroencephalography (EEG) is an important tool in the diagnosis of epilepsy. Interictal spikes on EEG are used to monitor the development of epilepsy and the effects of drug therapy. EEG recordings are generally long and the data voluminous. Thus developing a sensitive and reliable automated algorithm for analyzing EEG data is necessary. Approach. A new algorithm for detecting and classifying interictal spikes in mouse EEG recordings is proposed, based on the adapted continuous wavelet transform (CWT). The construction of the adapted mother wavelet is founded on a template obtained from a sample comprising the first few minutes of an EEG data set. Main Result. The algorithm was tested with EEG data from a mouse model of epilepsy and experimental results showed that the algorithm could distinguish EEG spikes from other transient waveforms with a high degree of sensitivity and specificity. Significance. Differing from existing approaches, the proposed approach combines wavelet denoising, to isolate transient signals, with adapted CWT-based template matching, to detect true interictal spikes. Using the adapted wavelet constructed from a predefined template, the adapted CWT is calculated on small EEG segments to fit dynamical changes in the EEG recording.
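
    A sketch of the template-matching core of such a detector, assuming NumPy; sliding normalised cross-correlation against a rescaled template family stands in for the adapted CWT built from the recording's first minutes, and the template, scales and threshold are all illustrative.

      import numpy as np

      def normalized_xcorr(x, template):
          """Sliding normalised cross-correlation with a zero-mean, unit-norm template."""
          t = template - template.mean()
          t /= np.linalg.norm(t)
          n = len(t)
          out = np.zeros(len(x) - n + 1)
          for i in range(out.size):
              w = x[i:i + n] - x[i:i + n].mean()
              norm = np.linalg.norm(w)
              out[i] = np.dot(w, t) / norm if norm > 0 else 0.0
          return out

      def detect_spikes(eeg, template, scales=(0.75, 1.0, 1.5), threshold=0.8):
          """Correlate a rescaled-template family with the trace and threshold the response."""
          hits = set()
          for s in scales:
              n = max(4, int(round(len(template) * s)))
              scaled = np.interp(np.linspace(0, len(template) - 1, n),
                                 np.arange(len(template)), template)
              r = normalized_xcorr(eeg, scaled)
              hits.update(int(i + n // 2) for i in np.flatnonzero(r > threshold))
          return sorted(hits)

      # Illustrative trace: three spikes with a known shape buried in noise.
      rng = np.random.default_rng(7)
      template = np.exp(-((np.arange(40) - 20) / 4.0) ** 2)   # "spike" shape learned from the first minutes
      eeg = 0.2 * rng.standard_normal(5000)
      for p in (800, 2300, 4100):
          eeg[p:p + 40] += template
      print("samples flagged as spikes:", detect_spikes(eeg, template)[:12])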

  20. An adaptive sparse-grid high-order stochastic collocation method for Bayesian inference in groundwater reactive transport modeling

    SciTech Connect

    Zhang, Guannan; Webster, Clayton G; Gunzburger, Max D

    2012-09-01

    Although Bayesian analysis has become vital to the quantification of prediction uncertainty in groundwater modeling, its application has been hindered due to the computational cost associated with numerous model executions needed for exploring the posterior probability density function (PPDF) of model parameters. This is particularly the case when the PPDF is estimated using Markov Chain Monte Carlo (MCMC) sampling. In this study, we develop a new approach that improves computational efficiency of Bayesian inference by constructing a surrogate system based on an adaptive sparse-grid high-order stochastic collocation (aSG-hSC) method. Unlike previous works using first-order hierarchical basis, we utilize a compactly supported higher-order hierarchical basis to construct the surrogate system, resulting in a significant reduction in the number of computational simulations required. In addition, we use hierarchical surplus as an error indicator to determine adaptive sparse grids. This allows local refinement in the uncertain domain and/or anisotropic detection with respect to the random model parameters, which further improves computational efficiency. Finally, we incorporate a global optimization technique and propose an iterative algorithm for building the surrogate system for the PPDF with multiple significant modes. Once the surrogate system is determined, the PPDF can be evaluated by sampling the surrogate system directly with very little computational cost. The developed method is evaluated first using a simple analytical density function with multiple modes and then using two synthetic groundwater reactive transport models. The groundwater models represent different levels of complexity; the first example involves coupled linear reactions and the second example simulates nonlinear uranium surface complexation. The results show that the aSG-hSC is an effective and efficient tool for Bayesian inference in groundwater modeling in comparison with conventional

  1. Non-parametric transient classification using adaptive wavelets

    NASA Astrophysics Data System (ADS)

    Varughese, Melvin M.; von Sachs, Rainer; Stephanou, Michael; Bassett, Bruce A.

    2015-11-01

    Classifying transients based on multiband light curves is a challenging but crucial problem in the era of GAIA and the Large Synoptic Survey Telescope, since the sheer volume of transients will make spectroscopic classification unfeasible. We present a non-parametric classifier that predicts the transient's class given training data. It implements two novel components: the use of the BAGIDIS wavelet methodology - a characterization of functional data using hierarchical wavelet coefficients - as well as the introduction of a ranked probability classifier on the wavelet coefficients that handles both the heteroscedasticity of the data and the potential non-representativity of the training set. The classifier is simple to implement, while a major advantage of the BAGIDIS wavelets is that they are translation invariant; hence, BAGIDIS does not need the light curves to be aligned to extract features. Further, BAGIDIS is non-parametric, so it can be used effectively in blind searches for new objects. We demonstrate the effectiveness of our classifier on the Supernova Photometric Classification Challenge, classifying supernova light curves as Type Ia or non-Ia. We train our classifier on the spectroscopically confirmed subsample (which is not representative) and show that it works well for supernovae with observed light-curve time spans greater than 100 d (roughly 55 per cent of the data set). For such data, we obtain a Ia efficiency of 80.5 per cent and a purity of 82.4 per cent, yielding a highly competitive challenge score of 0.49. This indicates that our `model-blind' approach may be particularly suitable for the general classification of astronomical transients in the era of large synoptic sky surveys.

  2. An image adaptive, wavelet-based watermarking of digital images

    NASA Astrophysics Data System (ADS)

    Agreste, Santa; Andaloro, Guido; Prestipino, Daniela; Puccio, Luigia

    2007-12-01

    In digital management, multimedia content and data can easily be used in an illegal way: copied, modified and distributed again. Copyright protection, protection of the intellectual and material rights of authors, owners, buyers and distributors, and the authenticity of content are crucial factors in solving an urgent and real problem. In this scenario digital watermarking techniques are emerging as a valid solution. In this paper, we describe an algorithm, called WM2.0, for an invisible watermark: private, strong, wavelet-based and developed for the protection and authentication of digital images. The use of the discrete wavelet transform (DWT) is motivated by its good time-frequency features and its good match with human visual system characteristics. These two combined elements are important in building an invisible and robust watermark. WM2.0 works as a dual scheme: watermark embedding and watermark detection. The watermark is embedded into high-frequency DWT components of a specific sub-image and is calculated from the image features and statistical properties. Watermark detection applies a re-synchronization between the original and the watermarked image. The correlation between the watermarked DWT coefficients and the watermark signal is calculated according to the Neyman-Pearson criterion. Experiments on a large set of different images show the watermark to be resistant to geometric, filtering and StirMark attacks with a low false-alarm rate.
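
    A bare-bones sketch of additive watermarking in a high-frequency DWT sub-band with correlation-based detection, assuming PyWavelets; WM2.0's image-adaptive embedding strength, sub-image selection, re-synchronisation step and Neyman-Pearson threshold are not reproduced, and alpha, the key and the synthetic host image are placeholders.

      import numpy as np
      import pywt

      def embed(img, key, alpha=3.0, wavelet="haar"):
          """Add a key-seeded pseudo-random pattern to the finest diagonal (HH) sub-band."""
          cA, (cH, cV, cD) = pywt.dwt2(img.astype(float), wavelet)
          wm = np.random.default_rng(key).standard_normal(cD.shape)
          return pywt.idwt2((cA, (cH, cV, cD + alpha * wm)), wavelet), wm

      def detect(img, wm, wavelet="haar"):
          """Correlation between the HH sub-band of a test image and the watermark pattern."""
          _, (_, _, cD) = pywt.dwt2(img.astype(float), wavelet)
          return float(np.corrcoef(cD.ravel(), wm.ravel())[0, 1])

      # Illustrative smooth host image with a little texture.
      rng = np.random.default_rng(8)
      x = np.linspace(0.0, 1.0, 256)
      host = 128 + 100 * np.outer(np.sin(3 * x), np.cos(2 * x)) + 5 * rng.standard_normal((256, 256))
      marked, wm = embed(host, key=1234)
      print("correlation, watermarked image:", detect(marked, wm))
      print("correlation, unmarked image   :", detect(host, wm))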

  3. Isotropic boundary adapted wavelets for coherent vorticity extraction in turbulent channel flows

    NASA Astrophysics Data System (ADS)

    Farge, Marie; Sakurai, Teluo; Yoshimatsu, Katsunori; Schneider, Kai; Morishita, Koji; Ishihara, Takashi

    2015-11-01

    We present a construction of isotropic boundary-adapted wavelets, which are orthogonal and yield a multi-resolution analysis. We analyze DNS data of turbulent channel flow computed at a friction-velocity-based Reynolds number of 395 and investigate the role of coherent vorticity. Thresholding of the wavelet coefficients allows the flow to be split into two parts, coherent and incoherent vorticity. The statistics of the former, i.e., energy and enstrophy spectra, are close to those of the total flow, and moreover the nonlinear energy budgets are well preserved. The remaining incoherent part, represented by the large majority of the weak wavelet coefficients, corresponds to a structureless, i.e., noise-like, background flow and exhibits an almost equi-distribution of energy.

  4. Adaptive inpainting algorithm based on DCT induced wavelet regularization.

    PubMed

    Li, Yan-Ran; Shen, Lixin; Suter, Bruce W

    2013-02-01

    In this paper, we propose an image inpainting optimization model whose objective function is a smoothed l1 norm of the weighted nondecimated discrete cosine transform (DCT) coefficients of the underlying image. By identifying the objective function of the proposed model as a sum of a differentiable term and a nondifferentiable term, we present a basic algorithm inspired by Beck and Teboulle's recent work on the model. Based on this basic algorithm, we propose an automatic way to determine the weights involved in the model and update them in each iteration. The DCT as an orthogonal transform is used in various applications. We view the rows of a DCT matrix as the filters associated with a multiresolution analysis. Nondecimated wavelet transforms with these filters are explored in order to analyze the images to be inpainted. Our numerical experiments verify that under the proposed framework, the filters from a DCT matrix demonstrate promise for the task of image inpainting. PMID:23060331

  5. A mesh-adaptive collocation technique for the simulation of advection-dominated single- and multiphase transport phenomena in porous media

    SciTech Connect

    Koch, M.

    1995-12-31

    A new mesh-adaptive 1D collocation technique has been developed to efficiently solve transient advection-dominated transport problems in porous media that are governed by a hyperbolic/parabolic (singularly perturbed) PDE. After spatial discretization a singularly perturbed ODE is obtained, which is solved by a modification of the COLNEW ODE-collocation code. The latter also contains an adaptive mesh procedure that has been enhanced here to resolve linear and nonlinear transport flow problems with steep fronts where regular FD and FE methods often fail. An implicit first-order backward Euler and a third-order Taylor-Donea technique are employed for the time integration. Numerical simulations of a variety of high-Peclet-number transport phenomena as they occur in realistic porous media flow situations are presented. Examples include classical linear advection-diffusion, nonlinear adsorption, two-phase Buckley-Leverett flow without and with capillary forces (Rapoport-Leas equation) and Burgers' equation for inviscid fluid flow. In most of these examples sharp fronts and/or shocks develop which are resolved in an oscillation-free manner by the present adaptive collocation method. The backward Euler method exhibits some numerical dissipation when the time steps are too large. The third-order Taylor-Donea technique is less dissipative but more prone to numerical oscillations. The simulations show that for the efficient solution of the nonlinear singularly perturbed PDEs governing flow transport, a careful balance must be struck between the optimal mesh adaptation, the nonlinear iteration method and the time-stepping procedure. More theoretical research is needed in this regard.

  6. Multi-focus image fusion algorithm based on adaptive PCNN and wavelet transform

    NASA Astrophysics Data System (ADS)

    Wu, Zhi-guo; Wang, Ming-jia; Han, Guang-liang

    2011-08-01

    Being an efficient method of information fusion, image fusion has been used in many fields such as machine vision, medical diagnosis, military applications and remote sensing. In this paper, the Pulse Coupled Neural Network (PCNN) is introduced into this research field for its interesting properties in image processing, including segmentation and target recognition, and a novel algorithm based on the PCNN and the wavelet transform for multi-focus image fusion is proposed. First, the two original images are decomposed by the wavelet transform. Then, based on the PCNN, a fusion rule in the wavelet domain is given. The algorithm uses the wavelet coefficient in each frequency band as the linking strength, so that its value can be chosen adaptively. Wavelet coefficients are mapped to the image gray-scale range, and the output threshold function attenuates towards the minimum gray level over time, so that eventually every pixel fires. The output of the PCNN at each iteration therefore consists of the wavelet coefficients that reach the threshold at that time, and the firing sequence of the wavelet coefficients represents the firing time of each neuron. The firing time of each neuron is mapped to the corresponding gray-scale range, giving a firing-time map from which it can be judged whether the features in a neuron's neighborhood are salient or not. The fusion coefficients are decided by a compare-selection operator on the firing-time gradient maps, and the fused image is reconstructed by the inverse wavelet transform. In order to sufficiently reflect the order of the firing times, the threshold adjusting constant αΘ is estimated from the appointed number of iterations, so that after the iterations are completed every wavelet coefficient has been activated. To verify the effectiveness of the proposed rules, experiments on multi-focus images are carried out. Moreover

  7. Wavelet-based acoustic emission detection method with adaptive thresholding

    NASA Astrophysics Data System (ADS)

    Menon, Sunil; Schoess, Jeffrey N.; Hamza, Rida; Busch, Darryl

    2000-06-01

    Reductions in Navy maintenance budgets and available personnel have dictated the need to transition from time-based to 'condition-based' maintenance. Achieving this will require new enabling diagnostic technologies. One such technology, the use of acoustic emission for the early detection of helicopter rotor head dynamic component faults, has been investigated by Honeywell Technology Center for its rotor acoustic monitoring system (RAMS). This ambitious, 38-month, proof-of-concept effort, which was a part of the Naval Surface Warfare Center Air Vehicle Diagnostics System program, culminated in a successful three-week flight test of the RAMS system at Patuxent River Flight Test Center in September 1997. The flight test results demonstrated that stress-wave acoustic emission technology can detect signals equivalent to small fatigue cracks in rotor head components and can do so across the rotating articulated rotor head joints and in the presence of other background acoustic noise generated during flight operation. This paper presents the results of stress wave data analysis of the flight-test dataset using wavelet-based techniques to assess background operational noise vs. machinery failure detection results.

  8. A new method for beam-damage-diagnosis using adaptive fuzzy neural structure and wavelet analysis

    NASA Astrophysics Data System (ADS)

    Nguyen, Sy Dzung; Ngo, Kieu Nhi; Tran, Quang Thinh; Choi, Seung-Bok

    2013-08-01

    In this work, we present a new beam-damage-locating (BDL) method based on an algorithm that combines an adaptive fuzzy neural structure (AFNS) with an average quantity of the wavelet transform coefficients (AQWTC) of the beam vibration signal. The AFNS is used to remember the dynamic properties of the undamaged beam, while the AQWTC is used for signal analysis. First, the beam is divided into elements and excited into vibration. The vibration signal at each element, which in this work is the displacement, is measured, filtered and wavelet-transformed with a chosen scale sheet to calculate the difference in AQWTC between two cases: the undamaged state and the state at the inspection time. A database of these differences is then used to find the elements exhibiting anomalous features in the wavelet quantitative analysis, which directly indicate signs of beam damage. The effectiveness of the proposed approach, which combines fuzzy neural structure and wavelet transform methods, is demonstrated by experiments on measured data sets from a vibrating beam-type steel frame structure.

  9. A new time-adaptive discrete bionic wavelet transform for enhancing speech from adverse noise environment

    NASA Astrophysics Data System (ADS)

    Palaniswamy, Sumithra; Duraisamy, Prakash; Alam, Mohammad Showkat; Yuan, Xiaohui

    2012-04-01

    Automatic speech processing systems are widely used in everyday life such as mobile communication, speech and speaker recognition, and for assisting the hearing impaired. In speech communication systems, the quality and intelligibility of speech is of utmost importance for ease and accuracy of information exchange. To obtain a speech signal that is intelligible and more pleasant to listen to, noise reduction is essential. In this paper a new Time Adaptive Discrete Bionic Wavelet Thresholding (TADBWT) scheme is proposed. The proposed technique uses the Daubechies mother wavelet to achieve better enhancement of speech corrupted by additive non-stationary noises which occur in real life, such as street noise and factory noise. Due to the integration of a human auditory system model into the wavelet transform, the bionic wavelet transform (BWT) has great potential for speech enhancement and may lead to a new path in speech processing. In the proposed technique, the discrete BWT is first applied to noisy speech to derive the TADBWT coefficients. Then the adaptive nature of the BWT is captured by introducing a time-varying linear factor which updates the coefficients at each scale over time. This approach has shown better performance than existing algorithms at lower input SNR due to the modified soft level-dependent thresholding applied to the time-adaptive coefficients. The objective and subjective test results confirmed the competency of the TADBWT technique. The effectiveness of the proposed technique is also evaluated for a speaker recognition task in a noisy environment. The recognition results show that the TADBWT technique yields better performance when compared to alternative methods, specifically at lower input SNR.

  10. An adaptive wavelet-based deblocking algorithm for MPEG-4 codec

    NASA Astrophysics Data System (ADS)

    Truong, Trieu-Kien; Chen, Shi-Huang; Jhang, Rong-Yi

    2005-08-01

    This paper proposes an adaptive wavelet-based deblocking algorithm for the MPEG-4 video coding standard. The novelty of the method is that the deblocking filter uses a wavelet-based threshold to detect and analyze artifacts on coded block boundaries. This threshold value is based on the difference between the wavelet transform coefficients of individual image blocks and those of the entire image, so the threshold adapts to different images and to the characteristics of the blocking artifacts. The artifacts are then attenuated by applying a filter selected according to this threshold value. It is shown that the proposed method is robust, fast, and works remarkably well for the MPEG-4 codec at low bit rates. Another advantage of the new method is that it retains sharp features in the decoded frames since it only removes artifacts. Experimental results show that the proposed method achieves significantly improved visual quality and increases the PSNR of the decoded video frames.

  11. Adaptive Threshold Neural Spike Detector Using Stationary Wavelet Transform in CMOS.

    PubMed

    Yang, Yuning; Boling, C Sam; Kamboh, Awais M; Mason, Andrew J

    2015-11-01

    Spike detection is an essential first step in the analysis of neural recordings. Detection at the frontend eases the bandwidth requirement for wireless data transfer of multichannel recordings to extra-cranial processing units. In this work, a low power digital integrated spike detector based on the lifting stationary wavelet transform is presented and developed. By monitoring the standard deviation of wavelet coefficients, the proposed detector can adaptively set a threshold value online for each channel independently without requiring user intervention. A prototype 16-channel spike detector was designed and tested in an FPGA. The method enables spike detection with nearly 90% accuracy even when the signal-to-noise ratio is as low as 2. The design was mapped to 130 nm CMOS technology and shown to occupy 0.014 mm(2) of area and dissipate 1.7 μW of power per channel, making it suitable for implantable multichannel neural recording systems. PMID:25955990
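
    A software sketch of the detection rule (the paper describes a CMOS/FPGA implementation), assuming PyWavelets; a median-based robust estimate of the coefficient spread stands in for the running standard deviation monitored by the chip, and the wavelet, level and factor k are illustrative.

      import numpy as np
      import pywt

      def swt_spike_detect(x, wavelet="sym4", level=4, k=5.0, refractory=120):
          """Flag samples whose stationary-wavelet detail coefficients exceed k times the noise spread."""
          n = (len(x) // 2 ** level) * 2 ** level           # SWT needs a multiple of 2**level samples
          coeffs = pywt.swt(np.asarray(x[:n], dtype=float), wavelet, level=level)
          mask = np.zeros(n, dtype=bool)
          for _, cD in coeffs:
              sigma = np.median(np.abs(cD)) / 0.6745        # adaptive per-band threshold, no user tuning
              mask |= np.abs(cD) > k * sigma
          spikes, last = [], -refractory
          for i in np.flatnonzero(mask):
              if i - last >= refractory:                    # simple refractory window merges nearby hits
                  spikes.append(int(i))
                  last = i
          return spikes

      # Illustrative recording: four 1 ms biphasic spikes in background noise at 24 kS/s.
      rng = np.random.default_rng(9)
      x = 0.05 * rng.standard_normal(24000)
      spike = 0.5 * np.sin(2 * np.pi * np.arange(24) / 24)
      for p in (3000, 9000, 15000, 21000):
          x[p:p + 24] += spike
      print("detected spike positions:", swt_spike_detect(x))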

  12. Automatic window size selection in Windowed Fourier Transform for 3D reconstruction using adapted mother wavelets

    NASA Astrophysics Data System (ADS)

    Fernandez, Sergio; Gdeisat, Munther A.; Salvi, Joaquim; Burton, David

    2011-06-01

    Fringe pattern analysis in coded structured light constitutes an active field of research. Techniques based on first projecting a sinusoidal pattern and then recovering the phase deviation permit the computation of the phase map and its corresponding depth map, leading to a dense acquisition of the measured object. Among these techniques, those based on time-frequency analysis permit the depth map to be extracted from a single image, and thus have potential applications in measuring moving objects. The main techniques are the Fourier Transform (FT), the Windowed Fourier Transform (WFT) and the Wavelet Transform (WT). This paper first analyzes the pros and cons of these three techniques, and then proposes a new algorithm for the automatic selection of the window size in the WFT. The algorithm is compared to the traditional WT using adapted mother wavelets on both simulated and real objects, showing the performance of the new method in quantitative and qualitative evaluations.

  13. Design of adaptive fuzzy wavelet neural sliding mode controller for uncertain nonlinear systems.

    PubMed

    Shahriari kahkeshi, Maryam; Sheikholeslam, Farid; Zekri, Maryam

    2013-05-01

    This paper proposes a novel adaptive fuzzy wavelet neural sliding mode controller (AFWN-SMC) for a class of uncertain nonlinear systems. The main contribution is the design of smooth sliding mode control (SMC) for a class of high-order nonlinear systems when the structure of the system is unknown and no prior knowledge about the uncertainty is available. The proposed scheme is composed of an Adaptive Fuzzy Wavelet Neural Controller (AFWNC) that constructs the equivalent control term and an Adaptive Proportional-Integral (A-PI) controller that implements the switching term to provide a smooth control input. Asymptotic stability of the closed-loop system is guaranteed using the Lyapunov direct method. To show the efficiency of the proposed scheme, some numerical examples are provided. To validate the results obtained by the proposed approach, other methods are adopted from the literature and applied for comparison. Simulation results show the superiority of the proposed controller in improving steady-state performance and transient response specifications while using fewer fuzzy rules and on-line adaptive parameters than the other methods. Furthermore, the control effort is considerably decreased and the chattering phenomenon is completely removed. PMID:23453235

  14. Multispectral image sharpening using a shift-invariant wavelet transform and adaptive processing of multiresolution edges

    USGS Publications Warehouse

    Lemeshewsky, G.P.

    2002-01-01

    Enhanced false color images from mid-IR, near-IR (NIR), and visible bands of the Landsat thematic mapper (TM) are commonly used for visually interpreting land cover type. Described here is a technique for sharpening or fusion of NIR with higher resolution panchromatic (Pan) that uses a shift-invariant implementation of the discrete wavelet transform (SIDWT) and a reported pixel-based selection rule to combine coefficients. There can be contrast reversals (e.g., at soil-vegetation boundaries between NIR and visible band images) and consequently degraded sharpening and edge artifacts. To improve performance for these conditions, I used a local area-based correlation technique originally reported for comparing image-pyramid-derived edges for the adaptive processing of wavelet-derived edge data. Also, using the redundant data of the SIDWT improves edge data generation. There is additional improvement because sharpened subband imagery is used with the edge-correlation process. A reported technique for sharpening three-band spectral imagery used forward and inverse intensity, hue, and saturation transforms and wavelet-based sharpening of intensity. This technique had limitations with opposite contrast data, and in this study sharpening was applied to single-band multispectral-Pan image pairs. Sharpening used simulated 30-m NIR imagery produced by degrading the spatial resolution of a higher resolution reference. Performance, evaluated by comparison between sharpened and reference image, was improved when sharpened subband data were used with the edge correlation.

  15. A wavelet-optimized, very high order adaptive grid and order numerical method

    NASA Technical Reports Server (NTRS)

    Jameson, Leland

    1996-01-01

    Differencing operators of arbitrarily high order can be constructed by interpolating a polynomial through a set of data, differentiating this polynomial, and finally evaluating the polynomial at the point where a derivative approximation is desired. Furthermore, the interpolating polynomial can be constructed from algebraic, trigonometric, or perhaps exponential polynomials. This paper begins with a comparison of such differencing operator constructions. Next, the issue of proper grids for high order polynomials is addressed. Finally, an adaptive numerical method is introduced which adapts both the numerical grid and the order of the differencing operator depending on the data. The numerical grid adaptation is performed on a Chebyshev grid. That is, at each level of refinement the grid is a Chebyshev grid, and this grid is refined locally based on wavelet analysis.

  16. An adaptive undersampling scheme of wavelet-encoded parallel MR imaging for more efficient MR data acquisition

    NASA Astrophysics Data System (ADS)

    Xie, Hua; Bosshard, John C.; Hill, Jason E.; Wright, Steven M.; Mitra, Sunanda

    2016-03-01

    Magnetic Resonance Imaging (MRI) offers noninvasive, high resolution, high contrast cross-sectional anatomic images through the body. The data in conventional MRI are collected in the spatial frequency (Fourier) domain, also known as k-space. Because there is still a great need to improve the temporal resolution of MRI, Compressed Sensing (CS) in MR imaging has been proposed to exploit the sparsity of MR images; it shows great potential to reduce the scan time significantly, but it poses its own unique problems. This paper revisits wavelet-encoded MR imaging, which replaces phase encoding in conventional MRI data acquisition with wavelet encoding by applying wavelet-shaped spatially selective radiofrequency (RF) excitation, and keeps the readout direction as frequency encoding. The practicality of wavelet-encoded MRI by itself is limited due to the SNR penalties and poor time resolution compared to conventional Fourier-based MRI. To compensate for these disadvantages, this paper first introduces an undersampling scheme, named the significance map, for sparse wavelet-encoded k-space to speed up data acquisition as well as to allow for various adaptive imaging strategies. The proposed adaptive wavelet-encoded undersampling scheme does not require prior knowledge of the subject to be scanned. Multiband (MB) parallel imaging is also incorporated with wavelet-encoded MRI by exciting multiple regions simultaneously for further reduction in scan time, desirable for medical applications. The simulation and experimental results presented show the feasibility of the proposed approach in further reducing the redundancy of the wavelet k-space data while maintaining relatively high quality.

  17. Adaptive wavelet simulation of global ocean dynamics using a new Brinkman volume penalization

    NASA Astrophysics Data System (ADS)

    Kevlahan, N. K.-R.; Dubos, T.; Aechtner, M.

    2015-12-01

    In order to easily enforce solid-wall boundary conditions in the presence of complex coastlines, we propose a new mass and energy conserving Brinkman penalization for the rotating shallow water equations. This penalization does not lead to higher wave speeds in the solid region. The error estimates for the penalization are derived analytically and verified numerically for linearized one-dimensional equations. The penalization is implemented in a conservative dynamically adaptive wavelet method for the rotating shallow water equations on the sphere with bathymetry and coastline data from NOAA's ETOPO1 database. This code could form the dynamical core for a future global ocean model. The potential of the dynamically adaptive ocean model is illustrated by using it to simulate the 2004 Indonesian tsunami and wind-driven gyres.
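
    For context, the classical (non-conservative) Brinkman volume penalization, written here for the rotating shallow water momentum equation; the mass- and energy-conserving variant proposed in the paper modifies this standard form, so the equations below are only a reference point:

      \[
      \frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u}
        + f\,\hat{\mathbf{z}}\times\mathbf{u}
        = -g\,\nabla\eta \;-\; \frac{\chi(\mathbf{x})}{\tau}\,\mathbf{u},
      \qquad
      \frac{\partial \eta}{\partial t} + \nabla\cdot\big[(h+\eta)\,\mathbf{u}\big] = 0,
      \]

    where \(\chi(\mathbf{x}) = 1\) inside the solid (land) region and \(0\) in the fluid, \(\tau \ll 1\) is the penalization time scale controlling how strongly the velocity is damped to zero in the solid, \(\eta\) is the free-surface elevation, \(h\) the resting depth, \(f\) the Coriolis parameter and \(g\) gravity.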

  18. From wavelets to adaptive approximations: time-frequency parametrization of EEG.

    PubMed

    Durka, Piotr J

    2003-01-01

    This paper presents a summary of time-frequency analysis of the electrical activity of the brain (EEG). It covers in details two major steps: introduction of wavelets and adaptive approximations. Presented studies include time-frequency solutions to several standard research and clinical problems, encountered in analysis of evoked potentials, sleep EEG, epileptic activities, ERD/ERS and pharmaco-EEG. Based upon these results we conclude that the matching pursuit algorithm provides a unified parametrization of EEG, applicable in a variety of experimental and clinical setups. This conclusion is followed by a brief discussion of the current state of the mathematical and algorithmical aspects of adaptive time-frequency approximations of signals. PMID:12605721

  19. On application of fast and adaptive periodic Battle-Lemarie wavelets to modeling of multiple lossy transmission lines

    SciTech Connect

    Zhu, Xiaojun; Lei, Guangtsai; Pan, Guangwen

    1997-04-01

    In this paper, the continuous operator is discretized into matrix form by Galerkin's procedure, using periodic Battle-Lemarie wavelets as basis/testing functions. The polynomial decomposition of wavelets is applied to the evaluation of the matrix elements, which makes the computational effort no more expensive than that of the method of moments (MoM) with conventional piecewise basis/testing functions. A new algorithm is developed employing the fast wavelet transform (FWT). Owing to the localization, cancellation, and orthogonality properties of wavelets, very sparse matrices are obtained, which are then solved by the LSQR iterative method. The algorithm is also adaptive in that one can add finer wavelet bases at will in the regions where fields vary rapidly, without any damage to the orthogonality of the wavelet basis functions. To demonstrate the effectiveness of the new algorithm, we applied it to the evaluation of the frequency-dependent resistance and inductance matrices of multiple lossy transmission lines. Numerical results agree with previously published data and laboratory measurements. The valid frequency range of the boundary integral equation results has been extended by two to three decades in comparison with the traditional MoM approach. The new algorithm has been integrated into the computer-aided design tool MagiCAD, which is used for the design and simulation of high-speed digital systems and multichip modules.

  20. Goal-based angular adaptivity applied to a wavelet-based discretisation of the neutral particle transport equation

    SciTech Connect

    Goffin, Mark A.; Buchan, Andrew G.; Dargaville, Steven; Pain, Christopher C.; Smith, Paul N.; Smedley-Stevenson, Richard P.

    2015-01-15

    A method for applying goal-based adaptive methods to the angular resolution of the neutral particle transport equation is presented. The methods are applied to an octahedral wavelet discretisation of the spherical angular domain which allows for anisotropic resolution. The angular resolution is adapted across both the spatial and energy dimensions. The spatial domain is discretised using an inner-element sub-grid scale finite element method. The goal-based adaptive methods optimise the angular discretisation to minimise the error in a specific functional of the solution. The goal-based error estimators require the solution of an adjoint system to determine the importance to the specified functional. The error estimators and the novel methods to calculate them are described. Several examples are presented to demonstrate the effectiveness of the methods. It is shown that the methods can significantly reduce the number of unknowns and computational time required to obtain a given error. The novelty of the work is the use of goal-based adaptive methods to obtain anisotropic resolution in the angular domain for solving the transport equation.
    Highlights:
    • Wavelet angular discretisation used to solve the transport equation.
    • Adaptive method developed for the wavelet discretisation.
    • Anisotropic angular resolution demonstrated through the adaptive method.
    • Adaptive method provides improvements in computational efficiency.

  1. Removal of ocular artifacts from EEG using adaptive thresholding of wavelet coefficients

    NASA Astrophysics Data System (ADS)

    Krishnaveni, V.; Jayaraman, S.; Anitha, L.; Ramadoss, K.

    2006-12-01

    Electroencephalogram (EEG) gives researchers a non-invasive way to record cerebral activity. It is a valuable tool that helps clinicians diagnose various neurological disorders and brain diseases. Blinking or moving the eyes produces large electrical potentials around the eyes, known as the electrooculogram. This non-cortical activity spreads across the scalp and contaminates the EEG recordings; the contaminating potentials are called ocular artifacts (OAs). Rejecting contaminated trials causes substantial data loss, and restricting eye movements/blinks limits the possible experimental designs and may affect the cognitive processes under investigation. In this paper, a nonlinear time-scale adaptive denoising system based on a wavelet shrinkage scheme has been used for removing OAs from the EEG. The time-scale adaptive algorithm is based on Stein's unbiased risk estimate (SURE), and a soft-like thresholding function that searches for optimal thresholds using a gradient-based adaptive algorithm is used. Denoising EEG with the proposed algorithm yields better results in terms of ocular artifact reduction and retention of background EEG activity compared to non-adaptive thresholding methods and the JADE algorithm.
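
    A toy sketch of wavelet-shrinkage OA removal, assuming PyWavelets; large coefficients in the coarsest bands (where blinks live) are clipped with a simple data-driven bound that stands in for the SURE-optimised, gradient-adapted thresholds of the paper, and the wavelet, level and synthetic epoch are illustrative.

      import numpy as np
      import pywt

      def remove_ocular_artifacts(eeg, wavelet="coif3", level=5, n_coarse=2, k=3.0):
          """Clip unusually large coefficients in the coarsest bands, where ocular artifacts live."""
          coeffs = pywt.wavedec(eeg, wavelet, level=level)
          for i in range(n_coarse):                 # coeffs[0] = approximation, coeffs[1] = coarsest detail
              sigma = np.median(np.abs(coeffs[i])) / 0.6745
              bound = k * sigma                     # data-driven bound (stands in for the SURE threshold)
              coeffs[i] = np.clip(coeffs[i], -bound, bound)
          return pywt.waverec(coeffs, wavelet)[:len(eeg)]

      # Illustrative 4 s epoch: 10 Hz alpha activity plus noise plus a large, slow eye blink.
      rng = np.random.default_rng(10)
      fs = 256
      t = np.arange(4 * fs) / fs
      eeg = 10.0 * np.sin(2 * np.pi * 10.0 * t) + 5.0 * rng.standard_normal(t.size)
      blink = 150.0 * np.exp(-((t - 2.0) / 0.15) ** 2)
      cleaned = remove_ocular_artifacts(eeg + blink)
      print("peak amplitude before: %.0f, after: %.0f (arbitrary units)" % (np.max(eeg + blink), np.max(cleaned)))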

  2. Powerline interference reduction in ECG signals using empirical wavelet transform and adaptive filtering.

    PubMed

    Singh, Omkar; Sunkaria, Ramesh Kumar

    2015-01-01

    Separating an information-bearing signal from the background noise is a general problem in signal processing. In a clinical environment, during acquisition the electrocardiogram (ECG) signal is corrupted by various noise sources such as powerline interference (PLI), baseline wander and muscle artifacts. This paper presents novel methods for the reduction of powerline interference in ECG signals using the empirical wavelet transform (EWT) and adaptive filtering. The proposed methods are compared with empirical mode decomposition (EMD) based PLI cancellation methods. A total of six methods for PLI reduction based on EMD and EWT are analysed and their results are presented. The EWT-based de-noising methods have lower computational complexity and are more efficient than the EMD-based de-noising methods. PMID:25412942

  3. Numerical Modeling of Global Atmospheric Chemical Transport with Wavelet-based Adaptive Mesh Refinement

    NASA Astrophysics Data System (ADS)

    Rastigejev, Y.; Semakin, A. N.

    2012-12-01

    In this work we present a multilevel Wavelet-based Adaptive Mesh Refinement (WAMR) method for numerical modeling of global atmospheric chemical transport problems. An accurate numerical simulation of such problems presents an enormous challenge. Atmospheric Chemical Transport Models (CTMs) combine chemical reactions with meteorologically predicted atmospheric advection and turbulent mixing. The resulting system of multi-scale advection-reaction-diffusion equations is extremely stiff, nonlinear and involves a large number of chemically interacting species. As a consequence, the need for enormous computational resources for solving these equations imposes severe limitations on the spatial resolution of CTMs implemented on uniform or quasi-uniform grids. In turn, this relatively crude spatial resolution results in significant numerical diffusion introduced into the system. This numerical diffusion is shown to noticeably distort the pollutant mixing and transport dynamics at typically used grid resolutions. The WAMR method for numerical modeling of atmospheric chemical evolution equations presented in this work provides a significant reduction in computational cost without compromising numerical accuracy, and therefore addresses the numerical difficulties described above. The WAMR method introduces a fine grid in the regions where sharp transitions occur and a coarser grid in the regions of smooth solution behavior, and therefore produces much more accurate solutions than conventional numerical methods implemented on uniform or quasi-uniform grids. The algorithm provides error estimates of the solution that are used in conjunction with appropriate threshold criteria to adapt the non-uniform grid. The method has been tested on a variety of problems including the numerical simulation of traveling pollution plumes. It was shown that pollution plumes in the remote troposphere can propagate as well-defined layered structures for two weeks or more as
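
    A one-dimensional sketch of the grid-adaptation criterion at the heart of such wavelet AMR methods, assuming NumPy; interpolating-wavelet details (the error of predicting odd points from their even neighbours) are thresholded to decide where the mesh must stay fine. The field and threshold are illustrative, not the WAMR algorithm itself.

      import numpy as np

      def interpolating_details(u):
          """Details of one level of a 1-D interpolating wavelet transform:
          the error of predicting each odd-indexed point by linear interpolation of its even neighbours."""
          predicted = 0.5 * (u[0:-2:2] + u[2::2])
          return u[1:-1:2] - predicted

      def flag_for_refinement(u, eps=1e-3):
          """Keep the fine grid only where the wavelet detail exceeds the threshold eps."""
          flags = np.zeros(u.size, dtype=bool)
          flags[1:-1:2] = np.abs(interpolating_details(u)) > eps
          return flags

      # Illustrative field: a smooth background plus a steep front (e.g. the edge of a pollution plume).
      x = np.linspace(0.0, 1.0, 513)
      u = np.tanh((x - 0.6) / 0.01) + 0.1 * np.sin(2 * np.pi * x)
      flags = flag_for_refinement(u)
      print("fraction of points kept at the finest level: %.3f" % flags.mean())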

  4. Adaptive variable-fidelity wavelet-based eddy-capturing approaches for compressible turbulence

    NASA Astrophysics Data System (ADS)

    Brown-Dymkoski, Eric; Vasilyev, Oleg V.

    2015-11-01

    Multiresolution wavelet methods have been developed for efficient simulation of compressible turbulence. They rely upon a filter to identify dynamically important coherent flow structures and adapt the mesh to resolve them. The filter threshold parameter, which can be specified globally or locally, allows for a continuous tradeoff between computational cost and fidelity, ranging seamlessly between DNS and adaptive LES. There are two main approaches to specifying the adaptive threshold parameter. It can be imposed as a numerical error bound, or alternatively, derived from real-time flow phenomena to ensure correct simulation of desired turbulent physics. As LES relies on often imprecise model formulations that require a high-quality mesh, this variable-fidelity approach offers a further tool for improving simulation by targeting deficiencies and locally increasing the resolution. Simultaneous physical and numerical criteria, derived from compressible flow physics and the governing equations, are used to identify turbulent regions and evaluate the fidelity. Several benchmark cases are considered to demonstrate the ability to capture variable density and thermodynamic effects in compressible turbulence. This work was supported by NSF under grant No. CBET-1236505.

  5. Computationally Efficient Locally Adaptive Demosaicing of Color Filter Array Images Using the Dual-Tree Complex Wavelet Packet Transform

    PubMed Central

    Aelterman, Jan; Goossens, Bart; De Vylder, Jonas; Pižurica, Aleksandra; Philips, Wilfried

    2013-01-01

    Most digital cameras use an array of alternating color filters to capture the varied colors in a scene with a single sensor chip. Reconstruction of a full color image from such a color mosaic is what constitutes demosaicing. In this paper, a technique is proposed that performs this demosaicing in a way that incurs a very low computational cost. This is done through a (dual-tree complex) wavelet interpretation of the demosaicing problem. By using a novel locally adaptive approach for demosaicing (complex) wavelet coefficients, we show that many of the common demosaicing artifacts can be avoided in an efficient way. Results demonstrate that the proposed method is competitive with respect to the current state of the art, but incurs a lower computational cost. The wavelet approach also allows for computationally effective denoising or deblurring approaches. PMID:23671575

  6. Incidental Learning of Collocation

    ERIC Educational Resources Information Center

    Webb, Stuart; Newton, Jonathan; Chang, Anna

    2013-01-01

    This study investigated the effects of repetition on the learning of collocation. Taiwanese university students learning English as a foreign language simultaneously read and listened to one of four versions of a modified graded reader that included different numbers of encounters (1, 5, 10, and 15 encounters) with a set of 18 target collocations.…

  7. A Wavelet-Based ECG Delineation Method: Adaptation to an Experimental Electrograms with Manifested Global Ischemia.

    PubMed

    Hejč, Jakub; Vítek, Martin; Ronzhina, Marina; Nováková, Marie; Kolářová, Jana

    2015-09-01

    We present a novel wavelet-based ECG delineation method with robust classification of the P wave and T wave. The work is aimed at adapting the method to long-term experimental electrograms (EGs) measured on isolated rabbit hearts and at evaluating the effect of global ischemia in experimental EGs on delineation performance. The algorithm was tested on a set of 263 rabbit EGs with established reference points and on human signals from the standard Common Standards for Quantitative Electrocardiography Database (CSEDB). On CSEDB, the standard deviation (SD) of measured errors satisfies the given criteria in each point and the results are comparable to other published works. In rabbit signals, our QRS detector reached a sensitivity of 99.87% and positive predictivity of 99.89% despite an overlap of spectral components of the QRS complex, P wave and power line noise. The algorithm shows great performance in suppressing J-point elevation and reached low overall error in both QRS onset (SD = 2.8 ms) and QRS offset (SD = 4.3 ms) delineation. The T wave offset is detected with acceptable error (SD = 12.9 ms) and sensitivity of nearly 99%. The variance of the errors during global ischemia remains relatively stable; however, more failures in detection of the T wave and P wave occur. Due to differences in spectral and timing characteristics, the parameters of the rabbit-based algorithm have to be highly adaptable and set more precisely than for human ECG signals to reach acceptable performance. PMID:26577367

  8. Space-time adaptive approach to variational data assimilation using wavelets

    NASA Astrophysics Data System (ADS)

    Souopgui, Innocent; Wieland, Scott A.; Yousuff Hussaini, M.; Vasilyev, Oleg V.

    2016-02-01

    This paper focuses on one of the main challenges of 4-dimensional variational data assimilation, namely the requirement to have a forward solution available when solving the adjoint problem. The issue is addressed by considering the time in the same fashion as the space variables, reformulating the mathematical model in the entire space-time domain, and solving the problem on a near optimal computational mesh that automatically adapts to spatio-temporal structures of the solution. The compressed form of the solution eliminates the need to save or recompute data for every time slice as it is typically done in traditional time marching approaches to 4-dimensional variational data assimilation. The reduction of the required computational degrees of freedom is achieved using the compression properties of multi-dimensional second generation wavelets. The simultaneous space-time discretization of both the forward and the adjoint models makes it possible to solve both models either concurrently or sequentially. In addition, the grid adaptation reduces the amount of saved data to the strict minimum for a given a priori controlled accuracy of the solution. The proposed approach is demonstrated for the advection diffusion problem in two space-time dimensions.

  9. An economic prediction of refinement coefficients in wavelet-based adaptive methods for electron structure calculations.

    PubMed

    Pipek, János; Nagy, Szilvia

    2013-03-01

    The wave function of a many-electron system contains inhomogeneously distributed spatial details, which makes it possible to reduce the number of fine-detail wavelets in multiresolution analysis approximations. Finding a method for decimating the unnecessary basis functions plays an essential role in avoiding an exponential increase of computational demand in wavelet-based calculations. We describe an effective prediction algorithm for the wavelet coefficients of the next resolution level, based on the approximate wave function expanded up to a given level. The prediction results in a reasonable approximation of the wave function and makes it possible to sort out the unnecessary wavelets with great reliability. PMID:23115109

  10. Mass Detection in Mammographic Images Using Wavelet Processing and Adaptive Threshold Technique.

    PubMed

    Vikhe, P S; Thool, V R

    2016-04-01

    Detection of masses in mammograms for early diagnosis of breast cancer is a significant task in the reduction of the mortality rate. However, in some cases, screening for masses is a difficult task for the radiologist, due to variation in contrast, fuzzy edges and noisy mammograms. Masses and micro-calcifications are the distinctive signs for diagnosis of breast cancer. This paper presents a method for mass enhancement in mammographic images using a piecewise linear operator in combination with wavelet processing. The method includes artifact suppression and pectoral muscle removal based on morphological operations. Finally, mass segmentation using an adaptive threshold technique is carried out to separate the mass from the background. The proposed method has been tested on 130 (45 + 85) images with 90.9 and 91% True Positive Fraction (TPF) at 2.35 and 2.1 average False Positives Per Image (FP/I) from two different databases, namely the Mammographic Image Analysis Society (MIAS) and the Digital Database for Screening Mammography (DDSM). The obtained results show that the proposed technique improves diagnosis in early breast cancer detection. PMID:26811073

  11. Three-dimensional Wavelet-based Adaptive Mesh Refinement for Global Atmospheric Chemical Transport Modeling

    NASA Astrophysics Data System (ADS)

    Rastigejev, Y.; Semakin, A. N.

    2013-12-01

    Accurate numerical simulations of global-scale three-dimensional atmospheric chemical transport models (CTMs) are essential for studies of many important atmospheric chemistry problems such as the adverse effects of air pollutants on human health, ecosystems and the Earth's climate. These simulations usually require large CPU time due to numerical difficulties associated with a wide range of spatial and temporal scales, nonlinearity and a large number of reacting species. In our previous work we have shown that in order to achieve adequate convergence rate and accuracy, the mesh spacing in numerical simulation of global synoptic-scale pollution plume transport must be decreased to a few kilometers. This resolution is difficult to achieve for global CTMs on uniform or quasi-uniform grids. To address the difficulty described above we developed a three-dimensional Wavelet-based Adaptive Mesh Refinement (WAMR) algorithm. The method employs a highly non-uniform adaptive grid with fine resolution over the areas of interest without requiring small grid spacing throughout the entire domain. The method uses a multi-grid iterative solver that naturally takes advantage of the multilevel structure of the adaptive grid. In order to represent the multilevel adaptive grid efficiently, a dynamic data structure based on indirect memory addressing has been developed. The data structure allows rapid access to individual points, fast inter-grid operations and re-gridding. The WAMR method has been implemented on parallel computer architectures. The parallel algorithm is based on a run-time partitioning and load-balancing scheme for the adaptive grid. The partitioning scheme maintains locality to reduce communications between computing nodes. The parallel scheme was found to be cost-effective. Specifically, we obtained an order of magnitude increase in computational speed for numerical simulations performed on a twelve-core single processor workstation. We have applied the WAMR method for numerical

  12. Adaptive dynamic inversion robust control for BTT missile based on wavelet neural network

    NASA Astrophysics Data System (ADS)

    Li, Chuanfeng; Wang, Yongji; Deng, Zhixiang; Wu, Hao

    2009-10-01

    A new nonlinear control strategy incorporating the dynamic inversion method with wavelet neural networks is presented for the nonlinear coupled system of a Bank-to-Turn (BTT) missile in the reentry phase. The basic control law is designed using the dynamic inversion feedback linearization method, and an online-learning wavelet neural network is used to compensate for the inversion error due to aerodynamic parameter errors, modeling imprecision and external disturbances, in view of the time-frequency localization properties of the wavelet transform. Weight adjusting laws are derived according to Lyapunov stability theory, which can guarantee the boundedness of all signals in the whole system. Furthermore, robust stability of the closed-loop system under this tracking law is proved. Finally, six degree-of-freedom (6DOF) simulation results show that the attitude angles can track the anticipated commands precisely in the presence of external disturbance and parameter uncertainty. This means that the dependence of the dynamic inversion method on the model is reduced and the robustness of the control system is enhanced by using the wavelet neural network (WNN) to reconstruct the inversion error on-line.

  13. Anatomically-adapted graph wavelets for improved group-level fMRI activation mapping.

    PubMed

    Behjat, Hamid; Leonardi, Nora; Sörnmo, Leif; Van De Ville, Dimitri

    2015-12-01

    A graph based framework for fMRI brain activation mapping is presented. The approach exploits the spectral graph wavelet transform (SGWT) for the purpose of defining an advanced multi-resolutional spatial transformation for fMRI data. The framework extends wavelet based SPM (WSPM), which is an alternative to the conventional approach of statistical parametric mapping (SPM), and is developed specifically for group-level analysis. We present a novel procedure for constructing brain graphs, with subgraphs that separately encode the structural connectivity of the cerebral and cerebellar gray matter (GM), and address the inter-subject GM variability by the use of template GM representations. Graph wavelets tailored to the convoluted boundaries of GM are then constructed as a means to implement a GM-based spatial transformation on fMRI data. The proposed approach is evaluated using real as well as semi-synthetic multi-subject data. Compared to SPM and WSPM using classical wavelets, the proposed approach shows superior type-I error control. The results on real data suggest a higher detection sensitivity as well as the capability to capture subtle, connected patterns of brain activity. PMID:26057594

  14. A wavelet-MRA-based adaptive semi-Lagrangian method for the relativistic Vlasov-Maxwell system

    SciTech Connect

    Besse, Nicolas Latu, Guillaume Ghizzo, Alain Sonnendruecker, Eric Bertrand, Pierre

    2008-08-10

    In this paper we present a new method for the numerical solution of the relativistic Vlasov-Maxwell system on a phase-space grid using an adaptive semi-Lagrangian method. The adaptivity is performed through a wavelet multiresolution analysis, which gives a powerful and natural refinement criterion based on the local measurement of the approximation error and regularity of the distribution function. The multiscale expansion of the distribution function therefore yields a sparse representation of the data and thus saves memory space and CPU time. We apply this numerical scheme to reduced Vlasov-Maxwell systems arising in laser-plasma physics. Interaction of relativistically strong laser pulses with overdense plasma slabs is investigated. These Vlasov simulations revealed a rich variety of phenomena associated with the fast particle dynamics induced by electromagnetic waves, such as electron trapping, particle acceleration, and electron plasma wavebreaking. However, the wavelet-based adaptive method that we developed here does not yield significant improvements compared to Vlasov solvers on a uniform mesh, due to the substantial overhead that the method introduces. Nonetheless it might be a first step towards more efficient adaptive solvers based on different ideas for the grid refinement or on a more efficient implementation. Here the Vlasov simulations are performed in a two-dimensional phase space where the development of thin filaments, strongly amplified by relativistic effects, requires an important increase of the total number of points of the phase-space grid as they get finer as time goes on. The adaptive method could be more useful in cases where these thin filaments that need to be resolved are a very small fraction of the hyper-volume, which arises in higher dimensions because of the surface-to-volume scaling and the essentially one-dimensional structure of the filaments. Moreover, the main way to improve the efficiency of the adaptive method is to

  15. Numerical solution of multi-dimensional compressible reactive flow using a parallel wavelet adaptive multi-resolution method

    NASA Astrophysics Data System (ADS)

    Grenga, Temistocle

    The aim of this research is to further develop a dynamically adaptive algorithm based on wavelets that is able to solve multi-dimensional compressible reactive flow problems efficiently. This work demonstrates the great potential of the method to perform direct numerical simulation (DNS) of combustion with detailed chemistry and multi-component diffusion. In particular, it addresses the performance obtained using a massively parallel implementation and demonstrates important savings in memory storage and computational time over conventional methods. In addition, fully-resolved simulations of challenging three-dimensional problems involving mixing and combustion processes are performed. These problems are particularly challenging due to their strong multiscale characteristics. For these solutions, it is necessary to combine advanced numerical techniques with modern computational resources.

  16. A new method based on Adaptive Discrete Wavelet Entropy Energy and Neural Network Classifier (ADWEENN) for recognition of urine cells from microscopic images independent of rotation and scaling.

    PubMed

    Avci, Derya; Leblebicioglu, Mehmet Kemal; Poyraz, Mustafa; Dogantekin, Esin

    2014-02-01

    The analysis and classification of urine cell counts has become an important topic for the medical diagnosis of some diseases. Therefore, in this study, we suggest a new technique based on Adaptive Discrete Wavelet Entropy Energy and Neural Network Classifier (ADWEENN) for recognition of urine cells from microscopic images independent of rotation and scaling. Digital image processing methods such as noise reduction, contrast enhancement, segmentation, and morphological processing are used in the feature extraction stage of ADWEENN. Image processing and pattern recognition have come into prominence in recent years; pattern recognition concerns the operation and design of systems that recognize patterns in data sets. In past years, a major difficulty in the classification of microscopic images was the lack of adequate characterization methods. Recently, multi-resolution image analysis methods such as Gabor filters and discrete wavelet decompositions have proven superior to classic methods for analysis of these microscopic images. The ADWEENN method is composed of four stages: a preprocessing stage, a feature extraction stage, a classification stage and a testing stage. The Discrete Wavelet Transform (DWT) together with adaptive wavelet entropy and energy is used for adaptive feature extraction in the feature extraction stage to strengthen the premium features for the Artificial Neural Network (ANN) classifier. The efficiency of the developed ADWEENN method was tested, showing that an average recognition success of 97.58% was obtained. PMID:24493072
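    A minimal sketch of the kind of wavelet entropy/energy features involved, assuming the PyWavelets package and a 1-D signal; the actual ADWEENN feature set and preprocessing are more elaborate.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_entropy_energy(signal, wavelet="db4", level=4):
    """Compute per-subband relative energies and the total wavelet entropy,
    a sketch of the kind of DWT-based features fed to a neural classifier."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    rel = energies / np.sum(energies)              # relative subband energies
    entropy = -np.sum(rel * np.log(rel + 1e-12))   # Shannon wavelet entropy
    return rel, entropy
```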

  17. The Assessment of Muscular Effort, Fatigue, and Physiological Adaptation Using EMG and Wavelet Analysis

    PubMed Central

    Graham, Ryan B.; Wachowiak, Mark P.; Gurd, Brendon J.

    2015-01-01

    Peroxisome proliferator-activated receptor gamma coactivator 1 alpha (PGC-1α) is a transcription factor co-activator that helps coordinate mitochondrial biogenesis within skeletal muscle following exercise. While evidence gleaned from submaximal exercise suggests that intracellular pathways associated with the activation of PGC-1α, as well as the expression of PGC-1α itself are activated to a greater extent following higher intensities of exercise, we have recently shown that this effect does not extend to supramaximal exercise, despite corresponding increases in muscle activation amplitude measured with electromyography (EMG). Spectral analyses of EMG data may provide a more in-depth assessment of changes in muscle electrophysiology occurring across different exercise intensities, and therefore the goal of the present study was to apply continuous wavelet transforms (CWTs) to our previous data to comprehensively evaluate: 1) differences in muscle electrophysiological properties at different exercise intensities (i.e. 73%, 100%, and 133% of peak aerobic power), and 2) muscular effort and fatigue across a single interval of exercise at each intensity, in an attempt to shed mechanistic insight into our previous observations that the increase in PGC-1α is dissociated from exercise intensity following supramaximal exercise. In general, the CWTs revealed that localized muscle fatigue was only greater than the 73% condition in the 133% exercise intensity condition, which directly matched the work rate results. Specifically, there were greater drop-offs in frequency, larger changes in burst power, as well as greater changes in burst area under this intensity, which were already observable during the first interval. As a whole, the results from the present study suggest that supramaximal exercise causes extreme localized muscular fatigue, and it is possible that the blunted PGC-1α effects observed in our previous study are the result of fatigue-associated increases in

  18. The Assessment of Muscular Effort, Fatigue, and Physiological Adaptation Using EMG and Wavelet Analysis.

    PubMed

    Graham, Ryan B; Wachowiak, Mark P; Gurd, Brendon J

    2015-01-01

    Peroxisome proliferator-activated receptor gamma coactivator 1 alpha (PGC-1α) is a transcription factor co-activator that helps coordinate mitochondrial biogenesis within skeletal muscle following exercise. While evidence gleaned from submaximal exercise suggests that intracellular pathways associated with the activation of PGC-1α, as well as the expression of PGC-1α itself are activated to a greater extent following higher intensities of exercise, we have recently shown that this effect does not extend to supramaximal exercise, despite corresponding increases in muscle activation amplitude measured with electromyography (EMG). Spectral analyses of EMG data may provide a more in-depth assessment of changes in muscle electrophysiology occurring across different exercise intensities, and therefore the goal of the present study was to apply continuous wavelet transforms (CWTs) to our previous data to comprehensively evaluate: 1) differences in muscle electrophysiological properties at different exercise intensities (i.e. 73%, 100%, and 133% of peak aerobic power), and 2) muscular effort and fatigue across a single interval of exercise at each intensity, in an attempt to shed mechanistic insight into our previous observations that the increase in PGC-1α is dissociated from exercise intensity following supramaximal exercise. In general, the CWTs revealed that localized muscle fatigue was only greater than the 73% condition in the 133% exercise intensity condition, which directly matched the work rate results. Specifically, there were greater drop-offs in frequency, larger changes in burst power, as well as greater changes in burst area under this intensity, which were already observable during the first interval. As a whole, the results from the present study suggest that supramaximal exercise causes extreme localized muscular fatigue, and it is possible that the blunted PGC-1α effects observed in our previous study are the result of fatigue-associated increases in

  19. A de-noising algorithm based on wavelet threshold-exponential adaptive window width-fitting for ground electrical source airborne transient electromagnetic signal

    NASA Astrophysics Data System (ADS)

    Ji, Yanju; Li, Dongsheng; Yu, Mingmei; Wang, Yuan; Wu, Qiong; Lin, Jun

    2016-05-01

    The ground electrical source airborne transient electromagnetic system (GREATEM) on an unmanned aircraft offers considerable prospecting depth, lateral resolution and detection efficiency, and in recent years it has become an important technical means of rapid resource exploration. However, GREATEM data are extremely vulnerable to stationary white noise and non-stationary electromagnetic noise (sferics noise, aircraft engine noise and other man-made electromagnetic noise). These noises degrade the imaging quality for data interpretation. Based on the characteristics of the GREATEM data and the major noises, we propose a de-noising algorithm utilizing a wavelet threshold method and exponential adaptive window-width fitting. Firstly, the white noise in the measured data is filtered using the wavelet threshold method. Then, the data are segmented using data windows whose step lengths follow even logarithmic intervals. Within each window, data polluted by electromagnetic noise are identified based on the discriminating principle of energy detection, and the attenuation characteristics of the data slope are extracted. Eventually, an exponential fitting algorithm is adopted to fit the attenuation curve of each window, and the data polluted by non-stationary electromagnetic noise are replaced with their fitted results, so the non-stationary electromagnetic noise can be effectively removed. The proposed algorithm is verified on synthetic and real GREATEM signals. The results show that stationary white noise and non-stationary electromagnetic noise in the GREATEM signal can be effectively filtered using the wavelet threshold-exponential adaptive window width-fitting algorithm, which enhances the imaging quality.
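    The exponential window-fitting step might look roughly like the following sketch, using scipy's curve_fit; the decay model and initial guesses are assumptions, not the authors' exact formulation.

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(t, a, tau, c):
    """Exponential decay model assumed for each logarithmic data window."""
    return a * np.exp(-t / tau) + c

def fit_window(t_win, d_win):
    """Fit one transient-decay window; the fitted curve can then replace
    samples identified as polluted by non-stationary electromagnetic noise."""
    p0 = (d_win[0] - d_win[-1], max(t_win[-1] - t_win[0], 1e-6), d_win[-1])
    params, _ = curve_fit(decay, t_win, d_win, p0=p0, maxfev=5000)
    return decay(t_win, *params), params
```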

  20. Mono-component feature extraction for mechanical fault diagnosis using modified empirical wavelet transform via data-driven adaptive Fourier spectrum segment

    NASA Astrophysics Data System (ADS)

    Pan, Jun; Chen, Jinglong; Zi, Yanyang; Li, Yueming; He, Zhengjia

    2016-05-01

    Due to the multi-modulation nature of most vibration signals, the extraction of embedded fault information from condition monitoring data for mechanical fault diagnosis is still not an easy task. Despite the reported achievements, the wavelet transform follows a dyadic partition scheme and does not allow a data-driven frequency partition. The Empirical Wavelet Transform (EWT) is therefore used to extract inherent modulation information by decomposing the signal into mono-components under an orthogonal basis and a non-dyadic partition scheme. However, the pre-defined segmentation of the Fourier spectrum, which does not depend on the analyzed signal, may result in inaccurate mono-component identification. In this paper, the modified EWT (MEWT) method via data-driven adaptive Fourier spectrum segmentation is proposed for mechanical fault identification. First, the inner product is calculated between the Fourier spectrum of the analyzed signal and a Gaussian function to obtain a scale representation. Then, adaptive spectrum segmentation is achieved by detecting local minima of the scale representation. Finally, empirical modes are obtained by adaptively merging mono-components based on their envelope spectrum similarity. The adaptively extracted empirical modes are analyzed for mechanical fault identification. A simulation experiment and two application cases are used to verify the effectiveness of the proposed method, and the results show its outstanding performance.
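    The data-driven segmentation step can be sketched as below, assuming a Gaussian smoothing width `sigma` chosen by the user; boundary handling and the subsequent merging of mono-components are omitted.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.signal import argrelmin

def adaptive_spectrum_segments(x, sigma=5.0):
    """Data-driven Fourier-spectrum segmentation in the spirit of the modified
    EWT: smooth the magnitude spectrum with a Gaussian (a simple 'scale
    representation') and place segment boundaries at its local minima."""
    spectrum = np.abs(np.fft.rfft(x))
    scale_rep = gaussian_filter1d(spectrum, sigma)
    minima = argrelmin(scale_rep)[0]              # candidate boundary bins
    boundaries = np.concatenate(([0], minima, [spectrum.size - 1]))
    return boundaries
```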

  1. Wavelets and electromagnetics

    NASA Technical Reports Server (NTRS)

    Kempel, Leo C.

    1992-01-01

    Wavelets are an exciting new topic in applied mathematics and signal processing. This paper provides a brief review of wavelets, which are also known as families of functions, with an emphasis on interpretation rather than rigor. We derive an indirect use of wavelets for the solution of integral equations based on techniques adapted from image processing. Examples for resistive strips are given illustrating the effect of these techniques as well as their promise in dramatically reducing the requirements needed to solve an integral equation for large bodies. We also present a direct implementation of wavelets to solve an integral equation. Both methods suggest future research topics and may hold promise for a variety of uses in computational electromagnetics.

  2. Learning Collocations: Do the Number of Collocates, Position of the Node Word, and Synonymy Affect Learning?

    ERIC Educational Resources Information Center

    Webb, Stuart; Kagimoto, Eve

    2011-01-01

    This study investigated the effects of three factors (the number of collocates per node word, the position of the node word, synonymy) on learning collocations. Japanese students studying English as a foreign language learned five sets of 12 target collocations. Each collocation was presented in a single glossed sentence. The number of collocates…

  3. Interlanguage Development and Collocational Clash

    ERIC Educational Resources Information Center

    Shahheidaripour, Gholamabbass

    2000-01-01

    Background: Persian English learners committed mistakes and errors which were due to insufficient knowledge of different senses of the words and collocational structures they formed. Purpose: The study reported here was conducted for a thesis submitted in partial fulfillment of the requirements for The Master of Arts degree, School of Graduate…

  4. Multi-source feature extraction and target recognition in wireless sensor networks based on adaptive distributed wavelet compression algorithms

    NASA Astrophysics Data System (ADS)

    Hortos, William S.

    2008-04-01

    Proposed distributed wavelet-based algorithms are a means to compress sensor data received at the nodes forming a wireless sensor network (WSN) by exchanging information between neighboring sensor nodes. Local collaboration among nodes compacts the measurements, yielding a reduced fused set with equivalent information at far fewer nodes. Nodes may be equipped with multiple sensor types, each capable of sensing distinct phenomena: thermal, humidity, chemical, voltage, or image signals with low or no frequency content as well as audio, seismic or video signals within defined frequency ranges. Compression of the multi-source data through wavelet-based methods, distributed at active nodes, reduces downstream processing and storage requirements along the paths to sink nodes; it also enables noise suppression and more energy-efficient query routing within the WSN. Targets are first detected by the multiple sensors; then wavelet compression and data fusion are applied to the target returns, followed by feature extraction from the reduced data; feature data are input to target recognition/classification routines; targets are tracked during their sojourns through the area monitored by the WSN. Algorithms to perform these tasks are implemented in a distributed manner, based on a partition of the WSN into clusters of nodes. In this work, a scheme of collaborative processing is applied for hierarchical data aggregation and decorrelation, based on the sensor data itself and any redundant information, enabled by a distributed, in-cluster wavelet transform with lifting that allows multiple levels of resolution. The wavelet-based compression algorithm significantly decreases RF bandwidth and other resource use in target processing tasks. Following wavelet compression, features are extracted. The objective of feature extraction is to maximize the probabilities of correct target classification based on multi-source sensor measurements, while minimizing the resource expenditures at

  5. Periodized wavelets

    SciTech Connect

    Schlossnagle, G.; Restrepo, J.M.; Leaf, G.K.

    1993-12-01

    The properties of periodized Daubechies wavelets on [0,1] are detailed and contrasted against their counterparts which form a basis for L^2(R). Numerical examples illustrate the analytical estimates for convergence and demonstrate by comparison with Fourier spectral methods the superiority of wavelet projection methods for approximations. The analytical solution to inner products of periodized wavelets and their derivatives, which are known as connection coefficients, is presented, and several tabulated values are included.

  6. Wavelets based on Hermite cubic splines

    NASA Astrophysics Data System (ADS)

    Cvejnová, Daniela; Černá, Dana; Finěk, Václav

    2016-06-01

    In 2000, W. Dahmen et al. designed biorthogonal multi-wavelets adapted to the interval [0,1] on the basis of Hermite cubic splines. In recent years, several more simple constructions of wavelet bases based on Hermite cubic splines were proposed. We focus here on wavelet bases with respect to which both the mass and stiffness matrices are sparse in the sense that the number of nonzero elements in any column is bounded by a constant. Then, a matrix-vector multiplication in adaptive wavelet methods can be performed exactly with linear complexity for any second order differential equation with constant coefficients. In this contribution, we shortly review these constructions and propose a new wavelet which leads to improved Riesz constants. Wavelets have four vanishing wavelet moments.

  7. The use of wavelet transforms in the solution of two-phase flow problems

    SciTech Connect

    Moridis, G.J.; Nikolaou, M.; You, Yong

    1994-10-01

    In this paper we present the use of wavelets to solve the nonlinear Partial Differential Equation (PDE) of two-phase flow in one dimension. The wavelet transforms allow a drastically different approach to the discretization of space. In contrast to the traditional trigonometric basis functions, wavelets approximate a function not by cancellation but by placement of wavelets at appropriate locations. When an abrupt change, such as a shock wave or a spike, occurs in a function, only local coefficients in a wavelet approximation will be affected. The unique feature of wavelets is their Multi-Resolution Analysis (MRA) property, which allows seamless investigation at any spatial resolution. The use of wavelets is tested in the solution of the one-dimensional Buckley-Leverett problem against analytical solutions and solutions obtained from standard numerical models. Two classes of wavelet bases (Daubechies and Chui-Wang) and two methods (Galerkin and collocation) are investigated. We determine that the Chui-Wang wavelets and a collocation method provide the optimum wavelet solution for this type of problem. Increasing the resolution level improves the accuracy of the solution, but the order of the basis function seems to be far less important. Our results indicate that wavelet transforms are an effective and accurate method which does not suffer from oscillations or numerical smearing in the presence of steep fronts.

  8. A Stochastic Collocation Algorithm for Uncertainty Analysis

    NASA Technical Reports Server (NTRS)

    Mathelin, Lionel; Hussaini, M. Yousuff; Zang, Thomas A. (Technical Monitor)

    2003-01-01

    This report describes a stochastic collocation method to adequately handle physically intrinsic uncertainty in the variables of a numerical simulation. For instance, while the standard Galerkin approach to Polynomial Chaos requires multi-dimensional summations over the stochastic basis functions, the stochastic collocation method makes it possible to collapse those summations into a single one-dimensional summation. This report furnishes the essential algorithmic details of the new stochastic collocation method and provides as a numerical example the solution of the Riemann problem with the stochastic collocation method used for the discretization of the stochastic parameters.

  9. Investigating ESL Learners' Lexical Collocations: The Acquisition of Verb + Noun Collocations by Japanese Learners of English

    ERIC Educational Resources Information Center

    Miyakoshi, Tomoko

    2009-01-01

    Although it is widely acknowledged that collocations play an important part in second language learning, especially at intermediate-advanced levels, learners' difficulties with collocations have not been investigated in much detail so far. The present study examines ESL learners' use of verb-noun collocations, such as "take notes," "place an…

  10. Generalized orthogonal wavelet phase reconstruction.

    PubMed

    Axtell, Travis W; Cristi, Roberto

    2013-05-01

    Phase reconstruction is used for feedback control in adaptive optics systems. To achieve performance metrics for high actuator density or with limited processing capabilities on spacecraft, a wavelet signal processing technique is advantageous. Previous derivations of this technique have been limited to the Haar wavelet. This paper derives the relationship and algorithms to reconstruct phase with O(n) computational complexity for wavelets with the orthogonal property. This has additional benefits for performance with noise in the measurements. We also provide details on how to handle the boundary condition for telescope apertures. PMID:23695316

  11. Collocation and Technicality in EAP Engineering

    ERIC Educational Resources Information Center

    Ward, Jeremy

    2007-01-01

    This article explores how collocation relates to lexical technicality, and how the relationship can be exploited for teaching EAP to second-year engineering students. First, corpus data are presented to show that complex noun phrase formation is a ubiquitous feature of engineering text, and that these phrases (or collocations) are highly…

  12. Supporting Collocation Learning with a Digital Library

    ERIC Educational Resources Information Center

    Wu, Shaoqun; Franken, Margaret; Witten, Ian H.

    2010-01-01

    Extensive knowledge of collocations is a key factor that distinguishes learners from fluent native speakers. Such knowledge is difficult to acquire simply because there is so much of it. This paper describes a system that exploits the facilities offered by digital libraries to provide a rich collocation-learning environment. The design is based on…

  13. Legendre wavelet operational matrix of fractional derivative through wavelet-polynomial transformation and its applications on non-linear system of fractional order differential equations

    NASA Astrophysics Data System (ADS)

    Isah, Abdulnasir; Chang, Phang

    2016-06-01

    In this article we propose a wavelet operational method based on shifted Legendre polynomials to obtain the numerical solutions of non-linear systems of fractional order differential equations (NSFDEs). The operational matrix of the fractional derivative, derived through a wavelet-polynomial transformation, is used together with the collocation method to turn the NSFDEs into a system of non-linear algebraic equations. Illustrative examples are given in order to demonstrate the accuracy and simplicity of the proposed techniques.

  14. Collocation and Galerkin Time-Stepping Methods

    NASA Technical Reports Server (NTRS)

    Huynh, H. T.

    2011-01-01

    We study the numerical solutions of ordinary differential equations by one-step methods where the solution at tn is known and that at t(sub n+1) is to be calculated. The approaches employed are collocation, continuous Galerkin (CG) and discontinuous Galerkin (DG). Relations among these three approaches are established. A quadrature formula using s evaluation points is employed for the Galerkin formulations. We show that with such a quadrature, the CG method is identical to the collocation method using quadrature points as collocation points. Furthermore, if the quadrature formula is the right Radau one (including t(sub n+1)), then the DG and CG methods also become identical, and they reduce to the Radau IIA collocation method. In addition, we present a generalization of DG that yields a method identical to CG and collocation with arbitrary collocation points. Thus, the collocation, CG, and generalized DG methods are equivalent, and the latter two methods can be formulated using the differential instead of integral equation. Finally, all schemes discussed can be cast as s-stage implicit Runge-Kutta methods.
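    For concreteness, a minimal sketch of the two-stage Radau IIA collocation scheme mentioned above, applied to the linear test equation y' = λy; the step size and test problem are illustrative assumptions.

```python
import numpy as np

# Two-stage Radau IIA tableau (order 3): the collocation method at the
# right-Radau points, to which both CG and DG reduce as described above.
A = np.array([[5/12, -1/12],
              [3/4,   1/4]])
b = np.array([3/4, 1/4])

def radau_iia_step(lam, y, h):
    """One implicit Radau IIA step for y' = lam*y. The linear stage system
    (I - h*lam*A) k = lam*y*1 is solved directly."""
    k = np.linalg.solve(np.eye(2) - h * lam * A, lam * y * np.ones(2))
    return y + h * b @ k

# Usage: integrate y' = -2y, y(0) = 1 over [0, 1] with 10 steps
lam, y, h = -2.0, 1.0, 0.1
for _ in range(10):
    y = radau_iia_step(lam, y, h)
# y is close to exp(-2) ~ 0.1353
```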

  15. A Wavelet Perspective on the Allan Variance.

    PubMed

    Percival, Donald B

    2016-04-01

    The origins of the Allan variance trace back 50 years ago to two seminal papers, one by Allan (1966) and the other by Barnes (1966). Since then, the Allan variance has played a leading role in the characterization of high-performance time and frequency standards. Wavelets first arose in the early 1980s in the geophysical literature, and the discrete wavelet transform (DWT) became prominent in the late 1980s in the signal processing literature. Flandrin (1992) briefly documented a connection between the Allan variance and a wavelet transform based upon the Haar wavelet. Percival and Guttorp (1994) noted that one popular estimator of the Allan variance-the maximal overlap estimator-can be interpreted in terms of a version of the DWT now widely referred to as the maximal overlap DWT (MODWT). In particular, when the MODWT is based on the Haar wavelet, the variance of the resulting wavelet coefficients-the wavelet variance-is identical to the Allan variance when the latter is multiplied by one-half. The theory behind the wavelet variance can thus deepen our understanding of the Allan variance. In this paper, we review basic wavelet variance theory with an emphasis on the Haar-based wavelet variance and its connection to the Allan variance. We then note that estimation theory for the wavelet variance offers a means of constructing asymptotically correct confidence intervals (CIs) for the Allan variance without reverting to the common practice of specifying a power-law noise type a priori. We also review recent work on specialized estimators of the wavelet variance that are of interest when some observations are missing (gappy data) or in the presence of contamination (rogue observations or outliers). It is a simple matter to adapt these estimators to become estimators of the Allan variance. Finally we note that wavelet variances based upon wavelets other than the Haar offer interesting generalizations of the Allan variance. PMID:26529757
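    The Haar connection can be checked numerically with a short sketch (overlapping estimators on synthetic data; not code from the paper): the Haar-based wavelet variance at a given scale equals one-half of the Allan variance at the corresponding averaging time.

```python
import numpy as np

def allan_variance(x, m):
    """Overlapping Allan variance at an averaging window of m samples."""
    ybar = np.convolve(x, np.ones(m) / m, mode="valid")   # running m-sample means
    d = ybar[m:] - ybar[:-m]
    return 0.5 * np.mean(d ** 2)

def haar_wavelet_variance(x, m):
    """Variance of Haar (MODWT-style) wavelet coefficients at scale m."""
    h = np.concatenate([np.ones(m), -np.ones(m)]) / (2.0 * m)
    w = np.convolve(x, h, mode="valid")
    return np.mean(w ** 2)

# Check the stated relation on synthetic random-walk noise
rng = np.random.default_rng(1)
x = np.cumsum(rng.normal(size=100_000))
m = 16
print(allan_variance(x, m), 2.0 * haar_wavelet_variance(x, m))  # the two agree
```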

  16. A multilevel stochastic collocation method for SPDEs

    SciTech Connect

    Gunzburger, Max; Jantsch, Peter; Teckentrup, Aretha; Webster, Clayton

    2015-03-10

    We present a multilevel stochastic collocation method that, as do multilevel Monte Carlo methods, uses a hierarchy of spatial approximations to reduce the overall computational complexity when solving partial differential equations with random inputs. For approximation in parameter space, a hierarchy of multi-dimensional interpolants of increasing fidelity are used. Rigorous convergence and computational cost estimates for the new multilevel stochastic collocation method are derived and used to demonstrate its advantages compared to standard single-level stochastic collocation approximations as well as multilevel Monte Carlo methods.

  17. Integrated wavelets for medical image analysis

    NASA Astrophysics Data System (ADS)

    Heinlein, Peter; Schneider, Wilfried

    2003-11-01

    Integrated wavelets are a new method for discretizing the continuous wavelet transform (CWT). Independent of the choice of discrete scale and orientation parameters they yield tight families of convolution operators. Thus these families can easily be adapted to specific problems. After presenting the fundamental ideas, we focus primarily on the construction of directional integrated wavelets and their application to medical images. We state an exact algorithm for implementing this transform and present applications from the field of digital mammography. The first application covers the enhancement of microcalcifications in digital mammograms. Further, we exploit the directional information provided by integrated wavelets for better separation of microcalcifications from similar structures.

  18. The Effect of Input Enhancement of Collocations in Reading on Collocation Learning and Retention of EFL Learners

    ERIC Educational Resources Information Center

    Goudarzi, Zahra; Moini, M. Raouf

    2012-01-01

    Collocation is one of the most problematic areas in second language learning and it seems that if one wants to improve his or her communication in another language should improve his or her collocation competence. This study attempts to determine the effect of applying three different kinds of collocation on collocation learning and retention of…

  19. The Impact of Corpus-Based Collocation Instruction on Iranian EFL Learners' Collocation Learning

    ERIC Educational Resources Information Center

    Ashouri, Shabnam; Arjmandi, Masoume; Rahimi, Ramin

    2014-01-01

    Over the past decades, studies of EFL/ESL vocabulary acquisition have identified the significance of collocations in language learning. Due to the fact that collocations have been regarded as one of the major concerns of both EFL teachers and learners for many years, the present study attempts to shed light on the impact of corpus-based…

  20. Frequency of Input and L2 Collocational Processing: A Comparison of Congruent and Incongruent Collocations

    ERIC Educational Resources Information Center

    Wolter, Brent; Gyllstad, Henrik

    2013-01-01

    This study investigated the influence of frequency effects on the processing of congruent (i.e., having an equivalent first language [L1] construction) collocations and incongruent (i.e., not having an equivalent L1 construction) collocations in a second language (L2). An acceptability judgment task was administered to native and advanced…

  1. 47 CFR 51.323 - Standards for physical collocation and virtual collocation.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 3 2011-10-01 2011-10-01 false Standards for physical collocation and virtual collocation. 51.323 Section 51.323 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES (CONTINUED) INTERCONNECTION Additional Obligations of Incumbent Local Exchange Carriers § 51.323 Standards for...

  2. 47 CFR 51.323 - Standards for physical collocation and virtual collocation.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 3 2012-10-01 2012-10-01 false Standards for physical collocation and virtual collocation. 51.323 Section 51.323 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES (CONTINUED) INTERCONNECTION Additional Obligations of Incumbent Local Exchange Carriers § 51.323 Standards for...

  3. Perceptually Lossless Wavelet Compression

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Yang, Gloria Y.; Solomon, Joshua A.; Villasenor, John

    1996-01-01

    The Discrete Wavelet Transform (DWT) decomposes an image into bands that vary in spatial frequency and orientation. It is widely used for image compression. Measures of the visibility of DWT quantization errors are required to achieve optimal compression. Uniform quantization of a single band of coefficients results in an artifact that is the sum of a lattice of random amplitude basis functions of the corresponding DWT synthesis filter, which we call DWT uniform quantization noise. We measured visual detection thresholds for samples of DWT uniform quantization noise in Y, Cb, and Cr color channels. The spatial frequency of a wavelet is r·2^(-L), where r is the display visual resolution in pixels/degree and L is the wavelet level. Amplitude thresholds increase rapidly with spatial frequency. Thresholds also increase from Y to Cr to Cb, and with orientation from low-pass to horizontal/vertical to diagonal. We propose a mathematical model for DWT noise detection thresholds that is a function of level, orientation, and display visual resolution. This allows calculation of a 'perceptually lossless' quantization matrix for which all errors are in theory below the visual threshold. The model may also be used as the basis for adaptive quantization schemes.

  4. The use of wavelet transformations in the solution of two-phase flow problems

    SciTech Connect

    Moridis, G.J.; Nikolaou, M.; You, Y.

    1995-12-31

    In this paper the authors present the use of wavelets to solve the non-linear Partial Differential Equation (PDE) of two-phase flow in one dimension. The wavelet transforms allow a drastically different approach to the discretization of space. In contrast to the traditional trigonometric basis functions, wavelets approximate a function not by cancellation but by placement of wavelets at appropriate locations. When an abrupt change, such as a shock wave or a spike, occurs in a function, only local coefficients in a wavelet approximation will be affected. The unique feature of wavelets is their Multi-Resolution Analysis (MRA) property, which allows seamless investigation at any spatial resolution. The use of wavelets is tested in the solution of the one-dimensional Buckley-Leverett problem against analytical solutions and solutions obtained from standard numerical models. Two classes of wavelet bases (Daubechies and Chui-Wang) and two methods (Galerkin and collocation) are investigated. The authors determine that the Chui-Wang wavelets and a collocation method provide the optimum wavelet solution for this type of problem. Increasing the resolution level improves the accuracy of the solution, but the order of the basis function seems to be far less important. The results indicate that wavelet transforms are an effective and accurate method which does not suffer from oscillations or numerical smearing in the presence of steep fronts.

  5. Detection of motor imagery of swallow EEG signals based on the dual-tree complex wavelet transform and adaptive model selection

    NASA Astrophysics Data System (ADS)

    Yang, Huijuan; Guan, Cuntai; Sui Geok Chua, Karen; San Chok, See; Wang, Chuan Chu; Kok Soon, Phua; Tang, Christina Ka Yin; Keng Ang, Kai

    2014-06-01

    Objective. Detection of motor imagery of hand/arm has been extensively studied for stroke rehabilitation. This paper firstly investigates the detection of motor imagery of swallow (MI-SW) and motor imagery of tongue protrusion (MI-Ton) in an attempt to find a novel solution for post-stroke dysphagia rehabilitation. Detection of MI-SW from a simple yet relevant modality such as MI-Ton is then investigated, motivated by the similarity in activation patterns between tongue movements and swallowing and there being fewer movement artifacts in performing tongue movements compared to swallowing. Approach. Novel features were extracted based on the coefficients of the dual-tree complex wavelet transform to build multiple training models for detecting MI-SW. The session-to-session classification accuracy was boosted by adaptively selecting the training model to maximize the ratio of between-classes distances versus within-class distances, using features of training and evaluation data. Main results. Our proposed method yielded averaged cross-validation (CV) classification accuracies of 70.89% and 73.79% for MI-SW and MI-Ton for ten healthy subjects, which are significantly better than the results from existing methods. In addition, averaged CV accuracies of 66.40% and 70.24% for MI-SW and MI-Ton were obtained for one stroke patient, demonstrating the detectability of MI-SW and MI-Ton from the idle state. Furthermore, averaged session-to-session classification accuracies of 72.08% and 70% were achieved for ten healthy subjects and one stroke patient using the MI-Ton model. Significance. These results and the subjectwise strong correlations in classification accuracies between MI-SW and MI-Ton demonstrated the feasibility of detecting MI-SW from MI-Ton models.

  6. Results of laser ranging collocations during 1983

    NASA Technical Reports Server (NTRS)

    Kolenkiewicz, R.

    1984-01-01

    The objective of laser ranging collocations is to compare the ability of two satellite laser ranging systems, located in the vicinity of one another, to measure the distance to an artificial Earth satellite in orbit over the sites. The similar measurement of this distance is essential before a new or modified laser system is deployed to worldwide locations in order to gather the data necessary to meet the scientific goals of the Crustal Dynamics Project. In order to be certain the laser systems are operating properly, they are periodically compared with each other. These comparisons or collocations are performed by locating the lasers side by side when they track the same satellite during the same time or pass. The data is then compared to make sure the lasers are giving essentially the same range results. Results of the three collocations performed during 1983 are given.

  7. Haar wavelet operational matrix method for solving constrained nonlinear quadratic optimal control problem

    NASA Astrophysics Data System (ADS)

    Swaidan, Waleeda; Hussin, Amran

    2015-10-01

    Most direct methods solve finite time horizon optimal control problems with a nonlinear programming solver. In this paper, we propose a numerical method for solving nonlinear optimal control problems with state and control inequality constraints. This method uses the quasilinearization technique and the Haar wavelet operational matrix to convert the nonlinear optimal control problem into a quadratic programming problem. The linear inequality constraints on the trajectory variables are converted to quadratic programming constraints by using the Haar wavelet collocation method. The proposed method has been applied to solve the optimal control of a multi-item inventory model. The accuracy of the states, controls and cost can be improved by increasing the Haar wavelet resolution.

  8. Stochastic Collocation Method for Three-dimensional Groundwater Flow

    NASA Astrophysics Data System (ADS)

    Shi, L.; Zhang, D.

    2008-12-01

    The stochastic collocation method (SCM) has recently gained extensive attention in several disciplines. The numerical implementation of SCM only requires repetitive runs of an existing deterministic solver or code, as in Monte Carlo simulation, but it is generally much more efficient than the Monte Carlo method. In this paper, the stochastic collocation method is used to efficiently quantify the uncertainty of three-dimensional groundwater flow. We introduce the basic principles of common collocation methods, i.e., the tensor product collocation method (TPCM), Smolyak collocation method (SmCM), Stroud-2 collocation method (StCM), and probability collocation method (PCM). Their accuracy, computational cost, and limitations are discussed. Illustrative examples reveal that the seamless combination of collocation techniques and existing simulators makes it possible for the new framework to efficiently handle complex stochastic problems.
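    A minimal sketch of the collocation idea for a single random parameter is given below; the `solver` stand-in and the log-normal conductivity are hypothetical, and a real groundwater flow code would replace the stand-in with one deterministic model run per collocation node.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

def solver(k):
    """Stand-in deterministic model: a hypothetical response for conductivity k.
    In practice this would be one run of an existing groundwater flow code."""
    return 1.0 / k

# Collocation for a single log-normally distributed conductivity parameter
nodes, weights = hermegauss(7)              # Gauss-Hermite (probabilists') rule
weights = weights / np.sqrt(2.0 * np.pi)    # normalize for a standard normal
mu_logk, sigma_logk = 0.0, 0.5
outputs = np.array([solver(np.exp(mu_logk + sigma_logk * z)) for z in nodes])
mean = weights @ outputs                    # collocation estimate of the mean
var = weights @ (outputs - mean) ** 2       # and of the variance
```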

  9. 3D steerable wavelets in practice.

    PubMed

    Chenouard, Nicolas; Unser, Michael

    2012-11-01

    We introduce a systematic and practical design for steerable wavelet frames in 3D. Our steerable wavelets are obtained by applying a 3D version of the generalized Riesz transform to a primary isotropic wavelet frame. The novel transform is self-reversible (tight frame) and its elementary constituents (Riesz wavelets) can be efficiently rotated in any 3D direction by forming appropriate linear combinations. Moreover, the basis functions at a given location can be linearly combined to design custom (and adaptive) steerable wavelets. The features of the proposed method are illustrated with the processing and analysis of 3D biomedical data. In particular, we show how those wavelets can be used to characterize directional patterns and to detect edges by means of a 3D monogenic analysis. We also propose a new inverse-problem formalism along with an optimization algorithm for reconstructing 3D images from a sparse set of wavelet-domain edges. The scheme results in high-quality image reconstructions which demonstrate the feature-reduction ability of the steerable wavelets as well as their potential for solving inverse problems. PMID:22752138

  10. Gauging the Effects of Exercises on Verb-Noun Collocations

    ERIC Educational Resources Information Center

    Boers, Frank; Demecheleer, Murielle; Coxhead, Averil; Webb, Stuart

    2014-01-01

    Many contemporary textbooks for English as a foreign language (EFL) and books for vocabulary study contain exercises with a focus on collocations, with verb-noun collocations (e.g. "make a mistake") being particularly popular as targets for collocation learning. Common exercise formats used in textbooks and other pedagogic materials…

  11. Corpus-Based versus Traditional Learning of Collocations

    ERIC Educational Resources Information Center

    Daskalovska, Nina

    2015-01-01

    One of the aspects of knowing a word is the knowledge of which words it is usually used with. Since knowledge of collocations is essential for appropriate and fluent use of language, learning collocations should have a central place in the study of vocabulary. There are different opinions about the best ways of learning collocations. This study…

  12. Is "Absorb Knowledge" an Improper Collocation?

    ERIC Educational Resources Information Center

    Su, Yujie

    2010-01-01

    Collocation is practically very difficult for Chinese learners of English. The main reason lies in the fact that English and Chinese belong to two distinct language systems. The deeper reason is that learners tend to develop different metaphorical concepts in accordance with distinct ways of thinking in Chinese. The paper, taking "absorb…

  13. A Collocation Method for Volterra Integral Equations

    NASA Astrophysics Data System (ADS)

    Kolk, Marek

    2010-09-01

    We propose a piecewise polynomial collocation method for solving linear Volterra integral equations of the second kind with logarithmic kernels which, in addition to a diagonal singularity, may have a singularity at the initial point of the interval of integration. An attainable order of convergence of the method is studied. We illustrate our results with a numerical example.
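
    The following sketch shows the collocation idea in its simplest form: piecewise-linear collocation with trapezoidal quadrature for a second-kind Volterra equation with a smooth kernel. The graded meshes and logarithmic kernels treated in the paper are not reproduced.

```python
import numpy as np

def volterra_collocation(f, K, T=1.0, n=200):
    """Piecewise-linear collocation (trapezoidal rule) for the second-kind
    Volterra equation u(t) = f(t) + int_0^t K(t, s) u(s) ds on [0, T].
    A simplified sketch for smooth kernels only."""
    t = np.linspace(0.0, T, n + 1)
    h = T / n
    u = np.zeros(n + 1)
    u[0] = f(t[0])
    for i in range(1, n + 1):
        w = np.full(i + 1, h)           # trapezoidal weights for int_0^{t_i}
        w[0] = w[-1] = h / 2
        rhs = f(t[i]) + np.dot(w[:-1], K(t[i], t[:i]) * u[:i])
        u[i] = rhs / (1.0 - w[-1] * K(t[i], t[i]))
    return t, u

# Test problem with known solution u(t) = exp(t):  u(t) = 1 + int_0^t u(s) ds.
t, u = volterra_collocation(lambda t: 1.0, lambda t, s: np.ones_like(s))
print(np.max(np.abs(u - np.exp(t))))   # small discretization error
```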

  14. Research of Gear Fault Detection in Morphological Wavelet Domain

    NASA Astrophysics Data System (ADS)

    Hong, Shi; Fang-jian, Shan; Bo, Cong; Wei, Qiu

    2016-02-01

    For extracting mutation information from gear fault signals and achieving a valid fault diagnosis, a gear fault diagnosis method based on the morphological mean wavelet transform was designed. The morphological mean wavelet transform is a linear wavelet in the framework of morphological wavelets. Decomposing a gear fault signal with this transform produces signal synthesis operators and detail synthesis operators. The signal synthesis operators remain close to the original signal, while the detail synthesis operators contain the fault impact signal or interference signal, which can therefore be captured. The simulation results indicate that, compared with the Fourier transform, the morphological mean wavelet transform method can perform time-frequency analysis of the original signal and effectively locate where the impact signal appears; and compared with the traditional linear wavelet transform, it has a simple structure, is easy to implement, is sensitive to local signal extrema and has high denoising ability, so it is better suited to real-time gear fault detection.

  15. Schwarz and multilevel methods for quadratic spline collocation

    SciTech Connect

    Christara, C.C.; Smith, B.

    1994-12-31

    Smooth spline collocation methods offer an alternative to Galerkin finite element methods, as well as to Hermite spline collocation methods, for the solution of linear elliptic Partial Differential Equations (PDEs). Recently, optimal-order-of-convergence spline collocation methods have been developed for certain degree splines. Convergence proofs for smooth spline collocation methods are generally more difficult than for Galerkin finite elements or Hermite spline collocation, and they require stronger assumptions and more restrictions. However, numerical tests indicate that spline collocation methods are applicable to a wider class of problems than the analysis requires, and are very competitive with finite element methods with respect to efficiency. The authors will discuss Schwarz and multilevel methods for the solution of elliptic PDEs using quadratic spline collocation, and compare these with domain decomposition methods using substructuring. Numerical tests on a variety of parallel machines will also be presented. In addition, preliminary convergence analysis using Schwarz and/or maximum principle techniques will be presented.

  16. Directional spherical multipole wavelets

    SciTech Connect

    Hayn, Michael; Holschneider, Matthias

    2009-07-15

    We construct a family of admissible analysis reconstruction pairs of wavelet families on the sphere. The construction is an extension of the isotropic Poisson wavelets. Similar to those, the directional wavelets allow a finite expansion in terms of off-center multipoles. Unlike the isotropic case, the directional wavelets are not a tight frame. However, at small scales, they almost behave like a tight frame. We give an explicit formula for the pseudodifferential operator given by the combination analysis-synthesis with respect to these wavelets. The Euclidean limit is shown to exist and an explicit formula is given. This allows us to quantify the asymptotic angular resolution of the wavelets.

  17. Evaluating techniques for multivariate classification of non-collocated spatial data.

    SciTech Connect

    McKenna, Sean Andrew

    2004-09-01

    Multivariate spatial classification schemes such as regionalized classification or principal components analysis combined with kriging rely on all variables being collocated at the sample locations. In these approaches, classification of the multivariate data into a finite number of groups is done prior to the spatial estimation. However, in some cases, the variables may be sampled at different locations with the extreme case being complete heterotopy of the data set. In these situations, it is necessary to adapt existing techniques to work with non-collocated data. Two approaches are considered: (1) kriging of existing data onto a series of 'collection points' where the classification into groups is completed and a measure of the degree of group membership is kriged to all other locations; and (2) independent kriging of all attributes to all locations after which the classification is done at each location. Calculations are conducted using an existing groundwater chemistry data set in the upper Dakota aquifer in Kansas (USA) and previously examined using regionalized classification (Bohling, 1997). This data set has all variables measured at all locations. To test the ability of the first approach for dealing with non-collocated data, each variable is reestimated at each sample location through a cross-validation process and the reestimated values are then used in the regionalized classification. The second approach for non-collocated data requires independent kriging of each attribute across the entire domain prior to classification. Hierarchical and non-hierarchical classification of all vectors is completed and a computationally less burdensome classification approach, 'sequential discrimination', is developed that constrains the classified vectors to be chosen from those with a minimal multivariate kriging variance. Resulting classification and uncertainty maps are compared between all non-collocated approaches as well as to the original collocated approach

  18. Collocation method for fractional quantum mechanics

    SciTech Connect

    Amore, Paolo; Hofmann, Christoph P.; Saenz, Ricardo A.; Fernandez, Francisco M.

    2010-12-15

    We show that it is possible to obtain numerical solutions to quantum mechanical problems involving a fractional Laplacian, using a collocation approach based on little sinc functions, which discretizes the Schroedinger equation on a uniform grid. The different boundary conditions are naturally implemented using sets of functions with the appropriate behavior. Good convergence properties are observed. A comparison with results based on a Wentzel-Kramers-Brillouin analysis is performed.

  19. The Sea of Wavelets

    NASA Astrophysics Data System (ADS)

    Jones, B. J. T.

    Wavelet analysis has become a major tool in many aspects of data handling, whether it be statistical analysis, noise removal or image reconstruction. Wavelet analysis has worked its way into fields as diverse as economics, medicine, geophysics, music and cosmology.

  20. Visibility of wavelet quantization noise

    NASA Technical Reports Server (NTRS)

    Watson, A. B.; Yang, G. Y.; Solomon, J. A.; Villasenor, J.

    1997-01-01

    The discrete wavelet transform (DWT) decomposes an image into bands that vary in spatial frequency and orientation. It is widely used for image compression. Measures of the visibility of DWT quantization errors are required to achieve optimal compression. Uniform quantization of a single band of coefficients results in an artifact that we call DWT uniform quantization noise; it is the sum of a lattice of random amplitude basis functions of the corresponding DWT synthesis filter. We measured visual detection thresholds for samples of DWT uniform quantization noise in Y, Cb, and Cr color channels. The spatial frequency of a wavelet is r·2^(-lambda), where r is display visual resolution in pixels/degree, and lambda is the wavelet level. Thresholds increase rapidly with wavelet spatial frequency. Thresholds also increase from Y to Cr to Cb, and with orientation from lowpass to horizontal/vertical to diagonal. We construct a mathematical model for DWT noise detection thresholds that is a function of level, orientation, and display visual resolution. This allows calculation of a "perceptually lossless" quantization matrix for which all errors are in theory below the visual threshold. The model may also be used as the basis for adaptive quantization schemes.

  1. Wavelet Approximation in Data Assimilation

    NASA Technical Reports Server (NTRS)

    Tangborn, Andrew; Atlas, Robert (Technical Monitor)

    2002-01-01

    Estimation of the state of the atmosphere with the Kalman filter remains a distant goal because of the high computational cost of evolving the error covariance for both linear and nonlinear systems. Wavelet approximation is presented here as a possible solution that efficiently compresses both global and local covariance information. We demonstrate the compression characteristics on the error correlation field from a global two-dimensional chemical constituent assimilation, and implement an adaptive wavelet approximation scheme on the assimilation of the one-dimensional Burgers' equation. In the former problem, we show that 99% of the error correlation can be represented by just 3% of the wavelet coefficients, with good representation of localized features. In the Burgers' equation assimilation, the discrete linearized equations (tangent linear model) and analysis covariance are projected onto a wavelet basis and truncated to just 6% of the coefficients. A nearly optimal forecast is achieved and we show that errors due to truncation of the dynamics are no greater than the errors due to covariance truncation.
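
    A rough sketch of the truncation idea, assuming the PyWavelets package and a synthetic 2-D correlation field: transform, keep only the largest few percent of coefficients, and reconstruct. The field and the retention fraction below are illustrative, not the assimilation system described above.

```python
import numpy as np
import pywt

def wavelet_compress(corr, wavelet="db4", keep=0.03):
    """Compress a 2-D error-correlation field by keeping only the largest
    `keep` fraction of wavelet coefficients and reconstructing."""
    coeffs = pywt.wavedec2(corr, wavelet)
    arr, slices = pywt.coeffs_to_array(coeffs)
    thresh = np.quantile(np.abs(arr), 1.0 - keep)
    arr = np.where(np.abs(arr) >= thresh, arr, 0.0)
    coeffs_trunc = pywt.array_to_coeffs(arr, slices, output_format="wavedec2")
    return pywt.waverec2(coeffs_trunc, wavelet)

# Synthetic, smooth correlation field with one localized feature.
x = np.linspace(-1, 1, 128)
X, Y = np.meshgrid(x, x)
corr = np.exp(-(X**2 + Y**2) / 0.1) \
     + 0.5 * np.exp(-((X - 0.5)**2 + (Y + 0.3)**2) / 0.01)
approx = wavelet_compress(corr)
print(np.linalg.norm(corr - approx) / np.linalg.norm(corr))   # relative error
```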

  2. NOKIN1D: one-dimensional neutron kinetics based on a nodal collocation method

    NASA Astrophysics Data System (ADS)

    Verdú, G.; Ginestar, D.; Miró, R.; Jambrina, A.; Barrachina, T.; Soler, Amparo; Concejal, Alberto

    2014-06-01

    The TRAC-BF1 one-dimensional kinetic model is a formulation of the neutron diffusion equation in the two-energy-group approximation, based on the analytical nodal method (ANM). The advantage compared with a zero-dimensional kinetic model is that the axial power profile may vary with time due to thermal-hydraulic parameter changes and/or actions of the control systems, but it has the disadvantage that in unusual situations it fails to converge. The nodal collocation method, developed for the neutron diffusion equation and applied to the kinetics resolution of the TRAC-BF1 thermal-hydraulics code, is an adaptation of traditional collocation methods for the discretization of partial differential equations, based on expanding the solution as a linear combination of analytical functions. A nodal collocation method based on a Legendre polynomial expansion of the neutron fluxes in each cell was chosen. The qualification is carried out by analysing the turbine trip transient from the NEA benchmark at the Peach Bottom NPP, using both the original 1D kinetics implemented in TRAC-BF1 and the 1D nodal collocation method.

  3. Data analysis using wavelets

    SciTech Connect

    Fryer, M.O.

    1997-05-01

    This paper describes the use of wavelet transform techniques to analyze typical data found in industrial applications. A way of detecting system changes using wavelet transforms is described. The results of applying this method are described for several typical applications. The wavelet technique is compared with the use of Fourier transform methods.

  4. Rotation and Scale Invariant Wavelet Feature for Content-Based Texture Image Retrieval.

    ERIC Educational Resources Information Center

    Lee, Moon-Chuen; Pun, Chi-Man

    2003-01-01

    Introduces a rotation and scale invariant log-polar wavelet texture feature for image retrieval. The underlying feature extraction process involves a log-polar transform followed by an adaptive row shift invariant wavelet packet transform. Experimental results show that this rotation and scale invariant wavelet feature is quite effective for image…

  5. Digital audio signal filtration based on the dual-tree wavelet transform

    NASA Astrophysics Data System (ADS)

    Yaseen, A. S.; Pavlov, A. N.

    2015-07-01

    A new method of digital audio signal filtration based on the dual-tree wavelet transform is described. An adaptive approach is proposed that allows the automatic adjustment of parameters of the wavelet filter to be optimized. A significant improvement of the quality of signal filtration is demonstrated in comparison to the traditionally used filters based on the discrete wavelet transform.

  6. Computational Complexity of Coherent Vortex and Adaptive Large Eddy Simulations of Three-Dimensional Homogeneous Turbulence at High Reynolds Numbers

    NASA Astrophysics Data System (ADS)

    Nejadmalayeri, Alireza; Vezolainen, Alexei; Vasilyev, Oleg V.

    2011-11-01

    With the recent development of the parallel adaptive wavelet collocation method, adaptive numerical simulations of high Reynolds number turbulent flows have become feasible. The integration of turbulence modeling of different fidelity with adaptive wavelet methods results in a hierarchical approach for modeling and simulating turbulent flows in which all or most energetic parts of coherent eddies are dynamically resolved on self-adaptive computational grids, while modeling the effect of the unresolved incoherent or less energetic modes. This talk is the first attempt to estimate how spatial modes of both Coherent Vortex Simulations (CVS) and Stochastic Coherent Adaptive Large Eddy Simulations (SCALES) scale with Reynolds number. The computational complexity studies for both CVS and SCALES of linearly forced homogeneous turbulence are performed at effective non-adaptive resolutions of 256^3, 512^3, 1024^3, and 2048^3, corresponding to approximate Re_λ of 70, 120, 190, and 320. The details of the simulations are discussed and the results of compression achieved by CVS and SCALES as well as scalability studies of the parallel algorithm for the aforementioned Taylor micro-scale Reynolds numbers are presented. This work was supported by NSF under grant No. CBET-0756046.

  7. Symplectic wavelet transformation.

    PubMed

    Fan, Hong-Yi; Lu, Hai-Liang

    2006-12-01

    Usually a wavelet transform is based on dilated-translated wavelets. We propose a symplectic-transformed-translated wavelet family ψ*_(r,s)(z−κ) (r, s are the symplectic transform parameters, |s|² − |r|² = 1, κ is a translation parameter) generated from the mother wavelet ψ, and the corresponding wavelet transformation W_ψ f(r,s;κ) = ∫ (d²z/π) f(z) ψ*_(r,s)(z−κ), with integration over the whole complex plane. This new transform possesses well-behaved properties and is related to the optical Fresnel transform in its quantum mechanical version. PMID:17099740

  8. Legendre Wavelet Operational Matrix of fractional Derivative through wavelet-polynomial transformation and its Applications in Solving Fractional Order Brusselator system

    NASA Astrophysics Data System (ADS)

    Chang, Phang; Isah, Abdulnasir

    2016-02-01

    In this paper we propose the wavelet operational method based on shifted Legendre polynomials to obtain numerical solutions of a nonlinear fractional-order chaotic system known as the fractional-order Brusselator system. The operational matrices of the fractional derivative and the collocation method turn the nonlinear fractional-order Brusselator system into a system of algebraic equations. Two illustrative examples are given in order to demonstrate the accuracy and simplicity of the proposed techniques.

  9. Analysis of chromatograph systems using orthogonal collocation

    NASA Technical Reports Server (NTRS)

    Woodrow, P. T.

    1974-01-01

    Research is generating fundamental engineering design techniques and concepts for the chromatographic separator of a chemical analysis system for an unmanned, Martian roving vehicle. A chromatograph model is developed which incorporates previously neglected transport mechanisms. The numerical technique of orthogonal collocation is studied. To establish the utility of the method, three models of increasing complexity are considered, the latter two being limiting cases of the derived model: (1) a simple, diffusion-convection model; (2) a rate of adsorption limited, inter-intraparticle model; and (3) an inter-intraparticle model with negligible mass transfer resistance.

  10. Subcell resolution in simplex stochastic collocation for spatial discontinuities

    NASA Astrophysics Data System (ADS)

    Witteveen, Jeroen A. S.; Iaccarino, Gianluca

    2013-10-01

    Subcell resolution has been used in the Finite Volume Method (FVM) to obtain accurate approximations of discontinuities in the physical space. Stochastic methods are usually based on local adaptivity for resolving discontinuities in the stochastic dimensions. However, the adaptive refinement in the probability space is ineffective in the non-intrusive uncertainty quantification framework, if the stochastic discontinuity is caused by a discontinuity in the physical space with a random location. The dependence of the discontinuity location in the probability space on the spatial coordinates then results in a staircase approximation of the statistics, which leads to first-order error convergence and an underprediction of the maximum standard deviation. To avoid these problems, we introduce subcell resolution into the Simplex Stochastic Collocation (SSC) method for obtaining a truly discontinuous representation of random spatial discontinuities in the interior of the cells discretizing the probability space. The presented SSC-SR method is based on resolving the discontinuity location in the probability space explicitly as function of the spatial coordinates and extending the stochastic response surface approximations up to the predicted discontinuity location. The applications to a linear advection problem, the inviscid Burgers' equation, a shock tube problem, and the transonic flow over the RAE 2822 airfoil show that SSC-SR resolves random spatial discontinuities with multiple stochastic and spatial dimensions accurately using a minimal number of samples.

  11. Subcell resolution in simplex stochastic collocation for spatial discontinuities

    SciTech Connect

    Witteveen, Jeroen A.S.; Iaccarino, Gianluca

    2013-10-15

    Subcell resolution has been used in the Finite Volume Method (FVM) to obtain accurate approximations of discontinuities in the physical space. Stochastic methods are usually based on local adaptivity for resolving discontinuities in the stochastic dimensions. However, the adaptive refinement in the probability space is ineffective in the non-intrusive uncertainty quantification framework, if the stochastic discontinuity is caused by a discontinuity in the physical space with a random location. The dependence of the discontinuity location in the probability space on the spatial coordinates then results in a staircase approximation of the statistics, which leads to first-order error convergence and an underprediction of the maximum standard deviation. To avoid these problems, we introduce subcell resolution into the Simplex Stochastic Collocation (SSC) method for obtaining a truly discontinuous representation of random spatial discontinuities in the interior of the cells discretizing the probability space. The presented SSC–SR method is based on resolving the discontinuity location in the probability space explicitly as function of the spatial coordinates and extending the stochastic response surface approximations up to the predicted discontinuity location. The applications to a linear advection problem, the inviscid Burgers’ equation, a shock tube problem, and the transonic flow over the RAE 2822 airfoil show that SSC–SR resolves random spatial discontinuities with multiple stochastic and spatial dimensions accurately using a minimal number of samples.

  12. Profiling the Collocation Use in ELT Textbooks and Learner Writing

    ERIC Educational Resources Information Center

    Tsai, Kuei-Ju

    2015-01-01

    The present study investigates the collocational profiles of (1) three series of graded textbooks for English as a foreign language (EFL) commonly used in Taiwan, (2) the written productions of EFL learners, and (3) the written productions of native speakers (NS) of English. These texts were examined against a purpose-built collocation list. Based…

  13. The Repetition of Collocations in EFL Textbooks: A Corpus Study

    ERIC Educational Resources Information Center

    Wang, Jui-hsin Teresa; Good, Robert L.

    2007-01-01

    The importance of repetition in the acquisition of lexical items has been widely acknowledged in single-word vocabulary research but has been relatively neglected in collocation studies. Since collocations are considered one key to achieving language fluency, and because learners spend a great amount of time interacting with their textbooks, the…

  14. The Effect of Grouping and Presenting Collocations on Retention

    ERIC Educational Resources Information Center

    Akpinar, Kadriye Dilek; Bardakçi, Mehmet

    2015-01-01

    The aim of this study is two-fold. Firstly, it attempts to determine the role of presenting collocations by organizing them based on (i) the keyword, (ii) topic related and (iii) grammatical aspect on retention of collocations. Secondly, it investigates the relationship between participants' general English proficiency and the presentation types…

  15. Collocations of High Frequency Noun Keywords in Prescribed Science Textbooks

    ERIC Educational Resources Information Center

    Menon, Sujatha; Mukundan, Jayakaran

    2012-01-01

    This paper analyses the discourse of science through the study of collocational patterns of high frequency noun keywords in science textbooks used by upper secondary students in Malaysia. Research has shown that one of the areas of difficulty in science discourse concerns lexis, especially that of collocations. This paper describes a corpus-based…

  16. New classes of Wavelets

    SciTech Connect

    Manchanda, P.; Meenakshi

    2009-07-02

    Recently Manchanda, Meenakshi and Siddiqi have studied Haar-Vilenkin wavelet and a special type of non-uniform multiresolution analysis. Haar-Vilenkin wavelet is a generalization of Haar wavelet. Motivated by the paper of Gabardo and Nashed we have introduced a class of multiresolution analysis extending the concept of classical multiresolution analysis. We present here a resume of these results. We hope that applications of these concepts to some significant real world problems could be found.

  17. Visibility of Wavelet Quantization Noise

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Yang, Gloria Y.; Solomon, Joshua A.; Villasenor, John; Null, Cynthia H. (Technical Monitor)

    1995-01-01

    The Discrete Wavelet Transform (DWT) decomposes an image into bands that vary in spatial frequency and orientation. It is widely used for image compression. Measures of the visibility of DWT quantization errors are required to achieve optimal compression. Uniform quantization of a single band of coefficients results in an artifact that is the sum of a lattice of random amplitude basis functions of the corresponding DWT synthesis filter, which we call DWT uniform quantization noise. We measured visual detection thresholds for samples of DWT uniform quantization noise in Y, Cb, and Cr color channels. The spatial frequency of a wavelet is r·2^(-L), where r is display visual resolution in pixels/degree, and L is the wavelet level. Amplitude thresholds increase rapidly with spatial frequency. Thresholds also increase from Y to Cr to Cb, and with orientation from low-pass to horizontal/vertical to diagonal. We describe a mathematical model to predict DWT noise detection thresholds as a function of level, orientation, and display visual resolution. This allows calculation of a "perceptually lossless" quantization matrix for which all errors are in theory below the visual threshold. The model may also be used as the basis for adaptive quantization schemes.

  18. Covariance modeling in geodetic applications of collocation

    NASA Astrophysics Data System (ADS)

    Barzaghi, Riccardo; Cazzaniga, Noemi; De Gaetani, Carlo; Reguzzoni, Mirko

    2014-05-01

    The collocation method is widely applied in geodesy for estimating/interpolating gravity-related functionals. The crucial problem of this approach is the correct modeling of the empirical covariance functions of the observations. Different methods for obtaining reliable covariance models have been proposed in the past by many authors. However, there are still problems in fitting the empirical values, particularly when different functionals of T are used and combined. Through suitable linear combinations of positive degree variances, a model function that properly fits the empirical values can be obtained. This kind of condition is commonly handled by solver algorithms in linear programming problems. In this work the problem of modeling covariance functions has been addressed with an innovative method based on the simplex algorithm. This requires the definition of an objective function to be minimized (or maximized), where the unknown variables or their linear combinations are subject to some constraints. The non-standard use of the simplex method consists in defining constraints on the model covariance function in order to obtain the best fit to the corresponding empirical values. Further constraints are applied so as to ensure coherence with the model degree variances and to prevent possible solutions with no physical meaning. The fitting procedure is iterative and, in each iteration, constraints are strengthened until the best possible fit between model and empirical functions is reached. The results obtained during the test phase of this new methodology show remarkable improvements with respect to the software packages available until now. Numerical tests are also presented to check the impact that improved covariance modeling has on the collocation estimate.
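
    A simplified sketch of the constrained-fitting idea, assuming SciPy's linear-programming (simplex-type) solver: empirical covariance values are fitted by a non-negative combination of Legendre polynomials, standing in for the positive degree variances, while the maximum misfit is minimized. The iterative constraint-strengthening and the coherence constraints of the actual methodology are not reproduced.

```python
import numpy as np
from numpy.polynomial.legendre import legval
from scipy.optimize import linprog

def fit_degree_variances(psi, c_emp, n_max=60):
    """Fit an empirical covariance function with a non-negative combination
    C(psi) = sum_n s_n P_n(cos psi), s_n >= 0, minimizing the maximum misfit."""
    A = np.column_stack([legval(np.cos(psi), np.eye(n_max + 1)[n])
                         for n in range(n_max + 1)])
    m, n = A.shape
    # Variables: [s_0..s_nmax, t];  minimize t  s.t.  -t <= A s - c <= t,  s >= 0.
    c = np.zeros(n + 1); c[-1] = 1.0
    A_ub = np.vstack([np.hstack([A, -np.ones((m, 1))]),
                      np.hstack([-A, -np.ones((m, 1))])])
    b_ub = np.concatenate([c_emp, -c_emp])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.x[:-1], res.x[-1]

# Hypothetical empirical covariance values on a grid of spherical distances.
psi = np.linspace(0.0, 0.2, 40)          # radians
c_emp = np.exp(-(psi / 0.05) ** 2)       # stand-in empirical values
s, max_misfit = fit_degree_variances(psi, c_emp)
print(max_misfit)
```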

  19. Research on Medical Image Enhancement Algorithm Based on GSM Model for Wavelet Coefficients

    NASA Astrophysics Data System (ADS)

    Wang, Lei; Jiang, Nian-de; Ning, Xing

    Given the complexity and application diversity of medical CT images, this article presents a medical CT image enhancement algorithm based on a Gaussian scale mixture (GSM) model for wavelet coefficients, in the framework of wavelet multi-scale analysis. The noisy image is first denoised with an adaptive Wiener filter. Secondly, through qualitative analysis and classification of the wavelet coefficients of signal and noise, the approximate distribution and statistical characteristics of the wavelet coefficients are described and combined with the GSM model for the wavelet coefficients. It is shown that this algorithm can improve the denoising result and noticeably enhance the medical CT image.

  20. Wavelet Analyses and Applications

    ERIC Educational Resources Information Center

    Bordeianu, Cristian C.; Landau, Rubin H.; Paez, Manuel J.

    2009-01-01

    It is shown how a modern extension of Fourier analysis known as wavelet analysis is applied to signals containing multiscale information. First, a continuous wavelet transform is used to analyse the spectrum of a nonstationary signal (one whose form changes in time). The spectral analysis of such a signal gives the strength of the signal in each…

  1. Source Wavelet Phase Extraction

    NASA Astrophysics Data System (ADS)

    Naghadeh, Diako Hariri; Morley, Christopher Keith

    2016-06-01

    Extraction of the propagation wavelet phase from seismic data can be conducted using first-, second-, third- and fourth-order statistics. Three new methods are introduced: (1) combination of different moments, (2) windowed continuous wavelet transform and (3) maximum correlation with a cosine function. To compare the different methods, synthetic data with and without noise were used. Results show that first-, second- and third-order statistics are not able to preserve the wavelet phase. Kurtosis can preserve the propagation wavelet phase, but the signal-to-noise ratio affects the phase extracted with this method, so for data sets with low signal-to-noise ratio it will be unstable. Using a combination of different moments to extract the phase is more robust than applying kurtosis. The improvement occurs because zero-phase wavelets with reverse polarities have equal maximum kurtosis values, hence the correct wavelet polarity cannot be identified. Zero-phase wavelets with reverse polarities have minimum and maximum values for the combination-of-different-moments method. These properties enable the technique to handle a finite data segment and to choose the correct wavelet polarity. Also, the existence of different moments can decrease sensitivity to outliers. A windowed continuous wavelet transform is more sensitive to signal-to-noise ratio than the combination-of-different-moments method; moreover, if the wavelet scale is incorrect it has more difficulty extracting the phase. When the effects of frequency bandwidth, signal-to-noise ratio and analyzing window length are considered, the results of extracting phase information from data without and with noise demonstrate that the combination of different moments is superior to the other methods introduced here.

  2. Lifting wavelet method of target detection

    NASA Astrophysics Data System (ADS)

    Han, Jun; Zhang, Chi; Jiang, Xu; Wang, Fang; Zhang, Jin

    2009-11-01

    Image target recognition plays a very important role in the areas of scientific exploration, aeronautics, space-to-ground observation, photography and topographic mapping. Image noise, blur and various kinds of interference in complex environments have always affected the stability of recognition algorithms. To address problems of real-time performance, accuracy and anti-interference ability in target detection, this paper uses a lifting-wavelet method for image target detection. First, histogram equalization and a difference method are used to obtain the target region, followed by adaptive thresholding and mathematical morphology operations to eliminate background errors. Second, a multi-channel wavelet filter is used for wavelet-transform denoising and enhancement of the original image, overcoming the noise sensitivity of general algorithms and reducing the misjudgement rate; the multi-resolution characteristics of the wavelet and the lifting framework can be used directly in the space-time region for target detection and feature extraction. The experimental results show that the designed lifting wavelet overcomes the detection difficulties caused by target movement in complex backgrounds; it can effectively suppress noise and improve the efficiency and speed of detection.

  3. Developing and Evaluating a Web-Based Collocation Retrieval Tool for EFL Students and Teachers

    ERIC Educational Resources Information Center

    Chen, Hao-Jan Howard

    2011-01-01

    The development of adequate collocational knowledge is important for foreign language learners; nonetheless, learners often have difficulties in producing proper collocations in the target language. Among the various ways of learning collocations, the DDL (data-driven learning) approach encourages independent learning of collocations and allows…

  4. The Use of Verb Noun Collocations in Writing Stories among Iranian EFL Learners

    ERIC Educational Resources Information Center

    Bazzaz, Fatemeh Ebrahimi; Samad, Arshad Abd

    2011-01-01

    An important aspect of native speakers' communicative competence is collocational competence which involves knowing which words usually come together and which do not. This paper investigates the possible relationship between knowledge of collocations and the use of verb noun collocation in writing stories because collocational knowledge…

  5. Developing and Evaluating a Chinese Collocation Retrieval Tool for CFL Students and Teachers

    ERIC Educational Resources Information Center

    Chen, Howard Hao-Jan; Wu, Jian-Cheng; Yang, Christine Ting-Yu; Pan, Iting

    2016-01-01

    The development of collocational knowledge is important for foreign language learners; unfortunately, learners often have difficulties producing proper collocations in the target language. Among the various ways of collocation learning, the DDL (data-driven learning) approach encourages the independent learning of collocations and allows learners…

  6. The Learning Burden of Collocations: The Role of Interlexical and Intralexical Factors

    ERIC Educational Resources Information Center

    Peters, Elke

    2016-01-01

    This study investigates whether congruency (+/- literal translation equivalent), collocate-node relationship (adjective-noun, verb-noun, phrasal-verb-noun collocations), and word length influence the learning burden of EFL learners' learning collocations at the initial stage of form-meaning mapping. Eighteen collocations were selected on the basis…

  7. Usability Study of Two Collocated Prototype System Displays

    NASA Technical Reports Server (NTRS)

    Trujillo, Anna C.

    2007-01-01

    Currently, most of the displays in control rooms can be categorized as status screens, alerts/procedures screens (or paper), or control screens (where the state of a component is changed by the operator). The primary focus of this line of research is to determine which pieces of information (status, alerts/procedures, and control) should be collocated. Two collocated displays were tested for ease of understanding in an automated desktop survey. This usability study was conducted as a prelude to a larger human-in-the-loop experiment in order to verify that the 2 new collocated displays were easy to learn and usable. The results indicate that while the DC display was preferred and yielded better performance than the MDO display, both collocated displays can be easily learned and used.

  8. Periodized Daubechies wavelets

    SciTech Connect

    Restrepo, J.M.; Leaf, G.K.; Schlossnagle, G.

    1996-03-01

    The properties of periodized Daubechies wavelets on [0,1] are detailed, along with their counterparts which form a basis for L{sup 2}(R). Numerical examples illustrate the analytical estimates for convergence and demonstrate, by comparison with Fourier spectral methods, the superiority of wavelet projection methods for approximations. The analytical solution for the inner products of periodized wavelets and their derivatives, which are known as connection coefficients, is presented, and their use is illustrated in the approximation of two commonly used differential operators. The periodization of the connection coefficients in Galerkin schemes is presented in detail.

  9. EEG Artifact Removal Using a Wavelet Neural Network

    NASA Technical Reports Server (NTRS)

    Nguyen, Hoang-Anh T.; Musson, John; Li, Jiang; McKenzie, Frederick; Zhang, Guangfan; Xu, Roger; Richey, Carl; Schnell, Tom

    2011-01-01

    In this paper we developed a wavelet neural network (WNN) algorithm for electroencephalogram (EEG) artifact removal without electrooculographic (EOG) recordings. The algorithm combines the universal approximation characteristics of neural networks and the time/frequency properties of wavelets. We compared the WNN algorithm with the ICA technique and a wavelet thresholding method, which was realized using Stein's unbiased risk estimate (SURE) with an adaptive gradient-based optimal threshold. Experimental results on a driving test data set show that WNN can remove EEG artifacts effectively without diminishing useful EEG information, even for very noisy data.
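
    For reference, a minimal sketch of the wavelet-thresholding baseline mentioned above, assuming PyWavelets and a universal threshold in place of the SURE-optimized one; it is not the WNN algorithm itself, and the synthetic signal is illustrative.

```python
import numpy as np
import pywt

def wavelet_denoise(eeg, wavelet="sym8", level=6):
    """Soft-threshold wavelet denoising of a single EEG channel using a
    universal threshold derived from a robust noise-scale estimate."""
    coeffs = pywt.wavedec(eeg, wavelet, level=level)
    # Noise scale from the finest detail coefficients (MAD estimate).
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2.0 * np.log(len(eeg)))
    denoised = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft")
                              for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)[: len(eeg)]

# Synthetic example: alpha-band sinusoid, a slow transient and additive noise.
t = np.arange(0, 4, 1 / 256.0)
eeg = (np.sin(2 * np.pi * 10 * t)
       + 2.0 * np.exp(-((t - 2) ** 2) / 0.1)
       + 0.5 * np.random.randn(t.size))
clean = wavelet_denoise(eeg)
```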

  10. Statistical modelling of collocation uncertainty in atmospheric thermodynamic profiles

    NASA Astrophysics Data System (ADS)

    Fassò, A.; Ignaccolo, R.; Madonna, F.; Demoz, B. B.; Franco-Villoria, M.

    2014-06-01

    The quantification of measurement uncertainty of atmospheric parameters is a key factor in assessing the uncertainty of global change estimates given by numerical prediction models. One of the critical contributions to the uncertainty budget is related to the collocation mismatch in space and time among observations made at different locations. This is particularly important for vertical atmospheric profiles obtained by radiosondes or lidar. In this paper we propose a statistical modelling approach capable of explaining the relationship between collocation uncertainty and a set of environmental factors, height and distance between imperfectly collocated trajectories. The new statistical approach is based on the heteroskedastic functional regression (HFR) model which extends the standard functional regression approach and allows a natural definition of uncertainty profiles. Along this line, a five-fold decomposition of the total collocation uncertainty is proposed, giving both a profile budget and an integrated column budget. HFR is a data-driven approach valid for any atmospheric parameter, which can be assumed smooth. It is illustrated here by means of the collocation uncertainty analysis of relative humidity from two stations involved in the GCOS reference upper-air network (GRUAN). In this case, 85% of the total collocation uncertainty is ascribed to reducible environmental error, 11% to irreducible environmental error, 3.4% to adjustable bias, 0.1% to sampling error and 0.2% to measurement error.

  11. Localized dynamic kinetic-energy-based models for stochastic coherent adaptive large eddy simulation

    NASA Astrophysics Data System (ADS)

    De Stefano, Giuliano; Vasilyev, Oleg V.; Goldstein, Daniel E.

    2008-04-01

    Stochastic coherent adaptive large eddy simulation (SCALES) is an extension of the large eddy simulation approach in which a wavelet filter-based dynamic grid adaptation strategy is employed to solve for the most "energetic" coherent structures in a turbulent field while modeling the effect of the less energetic background flow. In order to take full advantage of the ability of the method in simulating complex flows, the use of localized subgrid-scale models is required. In this paper, new local dynamic one-equation subgrid-scale models based on both eddy-viscosity and non-eddy-viscosity assumptions are proposed for SCALES. The models involve the definition of an additional field variable that represents the kinetic energy associated with the unresolved motions. This way, the energy transfer between resolved and residual flow structures is explicitly taken into account by the modeling procedure without an equilibrium assumption, as in the classical Smagorinsky approach. The wavelet-filtered incompressible Navier-Stokes equations for the velocity field, along with the additional evolution equation for the subgrid-scale kinetic energy variable, are numerically solved by means of the dynamically adaptive wavelet collocation solver. The proposed models are tested for freely decaying homogeneous turbulence at Re_λ = 72. It is shown that the SCALES results, obtained with less than 0.5% of the total nonadaptive computational nodes, closely match reference data from direct numerical simulation. In contrast to classical large eddy simulation, where the energetic small scales are poorly simulated, the agreement holds not only in terms of global statistical quantities but also in terms of spectral distribution of energy and, more importantly, enstrophy all the way down to the dissipative scales.

  12. Entanglement Renormalization and Wavelets.

    PubMed

    Evenbly, Glen; White, Steven R

    2016-04-01

    We establish a precise connection between discrete wavelet transforms and entanglement renormalization, a real-space renormalization group transformation for quantum systems on the lattice, in the context of free particle systems. Specifically, we employ Daubechies wavelets to build approximations to the ground state of the critical Ising model, then demonstrate that these states correspond to instances of the multiscale entanglement renormalization ansatz (MERA), producing the first known analytic MERA for critical systems. PMID:27104687

  13. Entanglement Renormalization and Wavelets

    NASA Astrophysics Data System (ADS)

    Evenbly, Glen; White, Steven R.

    2016-04-01

    We establish a precise connection between discrete wavelet transforms and entanglement renormalization, a real-space renormalization group transformation for quantum systems on the lattice, in the context of free particle systems. Specifically, we employ Daubechies wavelets to build approximations to the ground state of the critical Ising model, then demonstrate that these states correspond to instances of the multiscale entanglement renormalization ansatz (MERA), producing the first known analytic MERA for critical systems.

  14. Lagrange wavelets for signal processing.

    PubMed

    Shi, Z; Wei, G W; Kouri, D J; Hoffman, D K; Bao, Z

    2001-01-01

    This paper deals with the design of interpolating wavelets based on a variety of Lagrange functions, combined with novel signal processing techniques for digital imaging. Halfband Lagrange wavelets, B-spline Lagrange wavelets and Gaussian Lagrange (Lagrange distributed approximating functional (DAF)) wavelets are presented as specific examples of the generalized Lagrange wavelets. Our approach combines the perceptually dependent visual group normalization (VGN) technique and a softer logic masking (SLM) method. These are utilized to rescale the wavelet coefficients, remove perceptual redundancy and obtain good visual performance for digital image processing. PMID:18255493

  15. Daily water level forecasting using wavelet decomposition and artificial intelligence techniques

    NASA Astrophysics Data System (ADS)

    Seo, Youngmin; Kim, Sungwon; Kisi, Ozgur; Singh, Vijay P.

    2015-01-01

    Reliable water level forecasting for reservoir inflow is essential for reservoir operation. The objective of this paper is to develop and apply two hybrid models for daily water level forecasting and investigate their accuracy. These two hybrid models are the wavelet-based artificial neural network (WANN) and the wavelet-based adaptive neuro-fuzzy inference system (WANFIS). Wavelet decomposition is employed to decompose an input time series into approximation and detail components. The decomposed time series are used as inputs to artificial neural networks (ANN) and an adaptive neuro-fuzzy inference system (ANFIS) for the WANN and WANFIS models, respectively. Based on statistical performance indexes, the WANN and WANFIS models are found to produce better efficiency than the ANN and ANFIS models. WANFIS7-sym10 yields the best performance among all the models. It is found that wavelet decomposition improves the accuracy of ANN and ANFIS. This study evaluates the accuracy of the WANN and WANFIS models for different mother wavelets, including Daubechies, Symmlet and Coiflet wavelets. It is found that the model performance is dependent on input sets and mother wavelets, and that wavelet decomposition using the mother wavelet db10 can further improve the efficiency of the ANN and ANFIS models. The results obtained from this study indicate that the conjunction of wavelet decomposition and artificial intelligence models can be a useful tool for accurately forecasting daily water levels and can yield better efficiency than conventional forecasting models.
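
    A minimal WANN-style sketch, assuming PyWavelets and scikit-learn: the series is decomposed with a stationary wavelet transform and lagged values of every component feed a small multilayer perceptron. The synthetic data, wavelet choice, lags and network size are illustrative only.

```python
import numpy as np
import pywt
from sklearn.neural_network import MLPRegressor

def wann_forecast(series, wavelet="db10", level=3, lag=4):
    """Decompose the series with a stationary wavelet transform, then feed
    lagged values of every component to an MLP that predicts the next value."""
    n = len(series) - len(series) % 2**level   # SWT needs a multiple of 2^level
    x = np.asarray(series[:n], dtype=float)
    comps = [x]                                # raw series plus wavelet components
    for cA, cD in pywt.swt(x, wavelet, level=level):
        comps += [cA, cD]
    comps = np.vstack(comps)

    # Features: component values at times t-lag .. t-1; target: x at time t.
    X = np.column_stack([comps[:, i:n - lag + i].T for i in range(lag)])
    y = x[lag:]
    model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
    model.fit(X[:-100], y[:-100])              # hold out the last 100 days
    return model.score(X[-100:], y[-100:])     # R^2 on the hold-out period

# Synthetic daily water-level series (seasonal signal plus noise).
days = np.arange(3000)
levels = 5 + np.sin(2 * np.pi * days / 365) + 0.1 * np.random.randn(days.size)
print(wann_forecast(levels))
```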

  16. The multi-element probabilistic collocation method (ME-PCM): Error analysis and applications

    SciTech Connect

    Foo, Jasmine; Wan Xiaoliang; Karniadakis, George Em

    2008-11-20

    Stochastic spectral methods are numerical techniques for approximating solutions to partial differential equations with random parameters. In this work, we present and examine the multi-element probabilistic collocation method (ME-PCM), which is a generalized form of the probabilistic collocation method. In the ME-PCM, the parametric space is discretized and a collocation/cubature grid is prescribed on each element. Both full and sparse tensor product grids based on Gauss and Clenshaw-Curtis quadrature rules are considered. We prove analytically and observe in numerical tests that as the parameter space mesh is refined, the convergence rate of the solution depends on the quadrature rule of each element only through its degree of exactness. In addition, the L{sup 2} error of the tensor product interpolant is examined and an adaptivity algorithm is provided. Numerical examples demonstrating adaptive ME-PCM are shown, including low-regularity problems and long-time integration. We test the ME-PCM on two-dimensional Navier-Stokes examples and a stochastic diffusion problem with various random input distributions and up to 50 dimensions. While the convergence rate of ME-PCM deteriorates in 50 dimensions, the error in the mean and variance is two orders of magnitude lower than the error obtained with the Monte Carlo method using only a small number of samples (e.g., 100). The computational cost of ME-PCM is found to be favorable when compared to the cost of other methods including stochastic Galerkin, Monte Carlo and quasi-random sequence methods.
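
    A one-dimensional sketch of the multi-element collocation idea: the range of a Uniform(0,1) parameter is split into elements and Gauss-Legendre collocation is applied on each, with the element probabilities folded into the quadrature weights. The adaptivity, sparse grids and high-dimensional cases studied in the paper are omitted.

```python
import numpy as np

def me_pcm_mean(qoi, n_elems=4, n_pts=5):
    """Multi-element probabilistic collocation in one random dimension:
    split the range of a Uniform(0, 1) input into elements and apply
    Gauss-Legendre collocation on each; sum the weighted element means."""
    x_ref, w_ref = np.polynomial.legendre.leggauss(n_pts)   # nodes on [-1, 1]
    edges = np.linspace(0.0, 1.0, n_elems + 1)
    mean = 0.0
    for a, b in zip(edges[:-1], edges[1:]):
        x = 0.5 * (b - a) * x_ref + 0.5 * (a + b)            # map to the element
        w = 0.5 * (b - a) * w_ref                            # quadrature weights
        mean += np.dot(w, qoi(x))                            # uniform density = 1
    return mean

# Low-regularity quantity of interest (kink at xi = 0.3); exact mean is known.
qoi = lambda xi: np.abs(xi - 0.3)
print(me_pcm_mean(qoi), 0.3**2 / 2 + 0.7**2 / 2)
```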

  17. Adaptation.

    PubMed

    Broom, Donald M

    2006-01-01

    The term adaptation is used in biology in three different ways. It may refer to changes which occur at the cell and organ level, or at the individual level, or at the level of gene action and evolutionary processes. Adaptation by cells, especially nerve cells helps in: communication within the body, the distinguishing of stimuli, the avoidance of overload and the conservation of energy. The time course and complexity of these mechanisms varies. Adaptive characters of organisms, including adaptive behaviours, increase fitness so this adaptation is evolutionary. The major part of this paper concerns adaptation by individuals and its relationships to welfare. In complex animals, feed forward control is widely used. Individuals predict problems and adapt by acting before the environmental effect is substantial. Much of adaptation involves brain control and animals have a set of needs, located in the brain and acting largely via motivational mechanisms, to regulate life. Needs may be for resources but are also for actions and stimuli which are part of the mechanism which has evolved to obtain the resources. Hence pigs do not just need food but need to be able to carry out actions like rooting in earth or manipulating materials which are part of foraging behaviour. The welfare of an individual is its state as regards its attempts to cope with its environment. This state includes various adaptive mechanisms including feelings and those which cope with disease. The part of welfare which is concerned with coping with pathology is health. Disease, which implies some significant effect of pathology, always results in poor welfare. Welfare varies over a range from very good, when adaptation is effective and there are feelings of pleasure or contentment, to very poor. A key point concerning the concept of individual adaptation in relation to welfare is that welfare may be good or poor while adaptation is occurring. Some adaptation is very easy and energetically cheap and

  18. Wavelet analysis of atmospheric turbulence

    SciTech Connect

    Hudgins, L.H.

    1992-12-31

    After a brief review of the elementary properties of Fourier transforms, the Wavelet Transform is defined in Part I. Basic results are given for admissible wavelets. The Multiresolution Analysis, or MRA (a mathematical structure which unifies a large class of wavelets with Quadrature Mirror Filters), is then introduced. Some fundamental aspects of wavelet design are then explored. The Discrete Wavelet Transform is discussed and, in the context of an MRA, is seen to supply a Fast Wavelet Transform which competes with the Fast Fourier Transform for efficiency. In Part II, the Wavelet Transform is developed in terms of the scale number variable s instead of the scale length variable a, where a = 1/s. Basic results such as the admissibility condition, conservation of energy, and the reconstruction theorem are proven in this context. After reviewing some motivation for the usual Fourier power spectrum, a definition is given for the wavelet power spectrum. This 'spectral density' is then interpreted in the context of spectral estimation theory. Parseval's theorem for wavelets then leads naturally to the Wavelet Cross Spectrum, Wavelet Cospectrum, and Wavelet Quadrature Spectrum. Wavelet transforms are then applied in Part III to the analysis of atmospheric turbulence. Data collected over the ocean are examined in the wavelet transform domain for underlying structure. A brief overview of atmospheric turbulence is provided. Then the overall method of applying wavelet transform techniques to time series data is described. A trace study is included, showing some of the aspects of choosing the computational algorithm and selecting a specific analyzing wavelet. A model for generating synthetic turbulence data is developed, and is seen to yield useful results in comparison with real data for structural transitions. Results from the theory of wavelet spectral estimation and wavelet cross-transforms are applied to studying the momentum transport and the heat flux.

  19. Collocation and Pattern Recognition Effects on System Failure Remediation

    NASA Technical Reports Server (NTRS)

    Trujillo, Anna C.; Press, Hayes N.

    2007-01-01

    Previous research found that operators prefer to have status, alerts, and controls located on the same screen. Unfortunately, that research was done with displays that were not designed specifically for collocation. In this experiment, twelve subjects evaluated two displays specifically designed for collocating system information against a baseline that consisted of dial status displays, a separate alert area, and a controls panel. These displays differed in the amount of collocation, pattern matching, and parameter movement relative to display size. During the data runs, subjects kept a randomly moving target centered on a display using a left-handed joystick while scanning the system displays to find and correct a problem using the provided checklist. Results indicate that large parameter movement aided detection, and pattern recognition is then needed for diagnosis; the collocated displays centralized all the information subjects needed, which reduced workload. Therefore, the collocated display with large parameter movement may be an acceptable display after familiarization, because of the pattern recognition that may develop with training and its use.

  20. Multi-quadric collocation model of horizontal crustal movement

    NASA Astrophysics Data System (ADS)

    Chen, G.; Zeng, A. M.; Ming, F.; Jing, Y. F.

    2015-11-01

    To establish the horizontal crustal movement velocity field of the Chinese mainland, a Hardy multi-quadric fitting model and collocation are usually used, but the kernel function, nodes, and smoothing factor are difficult to determine in the Hardy function interpolation, and in the collocation model the covariance function of the stochastic signal must be carefully constructed. In this paper, a new combined estimation method for establishing the velocity field, based on collocation and multi-quadric interpolation, is presented. The crustal movement estimation simultaneously takes into consideration an Euler vector as the crustal movement trend and the local distortions as stochastic signals, and a kernel function of the multi-quadric fitting model substitutes for the covariance function of collocation. The velocities of a set of 1070 reference stations were obtained from the Crustal Movement Observation Network of China (CMONOC), and the corresponding velocity field was established using the new combined estimation method. A total of 85 reference stations were used as check points, and the precision in the north and east directions was 1.25 and 0.80 mm yr^-1, respectively. The result obtained by the new method agrees with the collocation method and multi-quadric interpolation, without requiring the covariance equation for the signals.
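
    A sketch of the trend-plus-signal idea, assuming SciPy's multiquadric radial basis function: a least-squares linear trend stands in for the Euler-vector motion and the residual signal is interpolated with a multiquadric kernel. The station data are synthetic, and the actual combined estimator's substitution of the kernel for the collocation covariance is not reproduced.

```python
import numpy as np
from scipy.interpolate import Rbf

def velocity_field(lon, lat, v_east, grid_lon, grid_lat):
    """Remove a least-squares linear trend (crude stand-in for the Euler-vector
    motion), interpolate the residual 'signal' with a multiquadric RBF, and
    add the trend back on the prediction grid."""
    A = np.column_stack([np.ones_like(lon), lon, lat])
    coef, *_ = np.linalg.lstsq(A, v_east, rcond=None)
    resid = v_east - A @ coef
    rbf = Rbf(lon, lat, resid, function="multiquadric", smooth=0.1)
    trend = coef[0] + coef[1] * grid_lon + coef[2] * grid_lat
    return trend + rbf(grid_lon, grid_lat)

# Hypothetical station velocities (mm/yr) and a prediction grid.
rng = np.random.default_rng(0)
lon, lat = rng.uniform(75, 130, 200), rng.uniform(20, 50, 200)
v_east = 30 + 0.1 * lon - 0.2 * lat + rng.normal(0, 0.5, 200)
gl, gt = np.meshgrid(np.linspace(75, 130, 56), np.linspace(20, 50, 31))
print(velocity_field(lon, lat, v_east, gl, gt).shape)
```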

  1. Wavelet Approach for Operational Gamma Spectral Peak Detection - Preliminary Assessment

    SciTech Connect


    2012-02-01

    Gamma spectroscopy for radionuclide identification typically involves locating spectral peaks and matching them with known nuclides in a knowledge base or database. Wavelet analysis, due to its ability to fit localized features, offers the potential for automatic detection of spectral peaks. Past studies of wavelet technologies for gamma spectrum analysis focused essentially on direct fitting of raw gamma spectra. Although most of those studies demonstrated the potential of peak detection using wavelets, they often failed to produce new benefits for operational adaptation in radiological surveys. This work presents a different approach, with the operational objective of detecting only the nuclides that do not exist in the environment (anomalous nuclides). With this operational objective, the raw-count spectrum collected by a detector is first converted to a count-rate spectrum, followed by background subtraction, prior to wavelet analysis. The experimental results suggest that this preprocessing is independent of detector type and background radiation and is capable of improving the peak detection rates using wavelets. This process opens the door to practical adaptation of wavelet technologies in gamma spectral surveying devices.
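
    A sketch of the preprocessing and peak search described above, assuming SciPy's wavelet-based peak finder; the per-channel background rate, live time and synthetic spectrum are illustrative.

```python
import numpy as np
from scipy.signal import find_peaks_cwt

def anomalous_peaks(counts, live_time, background_rate, widths=np.arange(2, 12)):
    """Convert raw counts to a count-rate spectrum, subtract a previously
    characterized per-channel background rate, then locate peaks with a
    continuous wavelet transform."""
    rate = counts / live_time
    net = np.clip(rate - background_rate, 0.0, None)
    return find_peaks_cwt(net, widths)

# Synthetic 1024-channel spectrum: smooth background plus one Gaussian line.
chan = np.arange(1024)
background = 50.0 * np.exp(-chan / 400.0)
peak = 20.0 * np.exp(-((chan - 300) ** 2) / (2 * 4.0**2))   # hypothetical line
counts = np.random.poisson((background + peak) * 10.0)      # 10 s acquisition
print(anomalous_peaks(counts, live_time=10.0, background_rate=background))
```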

  2. Three-dimensional compression scheme based on wavelet transform

    NASA Astrophysics Data System (ADS)

    Yang, Wu; Xu, Hui; Liao, Mengyang

    1999-03-01

    In this paper, a 3D compression method based on a separable wavelet transform is discussed in detail. The most commonly used digital modalities generate multiple slices in a single examination, which are normally anatomically or physiologically correlated to each other. 3D wavelet compression methods can achieve more efficient compression by exploiting the correlation between slices. The first step is based on a separable 3D wavelet transform. Considering the difference between pixel distances within a slice and those between slices, one biorthogonal Antonini filter bank is applied within the 2D slices and a second biorthogonal Villa4 filter bank along the slice direction. Then, the S+P transform is applied to the low-resolution wavelet components and an optimal quantizer is designed after analysis of the quantization noise. We use an optimal bit allocation algorithm which, instead of eliminating the coefficients of high-resolution components in smooth areas, minimizes the system reconstruction distortion at a given bit rate. Finally, to maintain high coding efficiency and adapt to the different properties of each component, a comprehensive entropy coding method is proposed, in which arithmetic coding is applied to the high-resolution components and adaptive Huffman coding to the low-resolution components. Our experimental results are evaluated by several image measures, and our 3D wavelet compression scheme is shown to be more efficient than 2D wavelet compression.
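
    A simplified sketch of separable 3-D wavelet compression, assuming PyWavelets: unlike the scheme above, a single biorthogonal filter is used on all three axes, and simple thresholding replaces the S+P transform, optimal bit allocation and entropy coding.

```python
import numpy as np
import pywt

def compress_volume(volume, wavelet="bior4.4", level=2, keep=0.05):
    """Separable 3-D wavelet compression: transform the volume, keep only the
    largest `keep` fraction of coefficients, and reconstruct."""
    coeffs = pywt.wavedecn(volume, wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)
    thresh = np.quantile(np.abs(arr), 1.0 - keep)
    arr = np.where(np.abs(arr) >= thresh, arr, 0.0)
    rec = pywt.waverecn(pywt.array_to_coeffs(arr, slices), wavelet)
    return rec[tuple(slice(s) for s in volume.shape)]   # trim any padding

# Synthetic stack of 32 correlated 128x128 slices.
z, y, x = np.meshgrid(np.arange(32), np.arange(128), np.arange(128), indexing="ij")
volume = np.sin(x / 10.0) * np.cos(y / 15.0) * np.exp(-z / 20.0)
recon = compress_volume(volume)
print(np.linalg.norm(volume - recon) / np.linalg.norm(volume))   # relative error
```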

  3. Wavelet differential neural network observer.

    PubMed

    Chairez, Isaac

    2009-09-01

    State estimation for uncertain systems affected by external noise is an important problem in control theory. This paper deals with a state observation problem when the dynamic model of a plant contains uncertainties or is completely unknown. A differential neural network (NN) approach is applied in this uninformative situation, but with activation functions described by wavelets. A new learning law, containing an adaptive adjustment rate, is suggested to imply the stability condition for the free parameters of the observer. Nominal weights are adjusted during the preliminary training process using the least mean square (LMS) method. Lyapunov theory is used to obtain the upper bounds for the weight dynamics as well as for the mean squared estimation error. Two numerical examples illustrate this approach: first, a nonlinear electric system governed by Chua's equation, and second, the Lorenz oscillator. Both systems are assumed to be affected by external perturbations and their parameters are unknown. PMID:19674951

  4. Statistical modelling of collocation uncertainty in atmospheric thermodynamic profiles

    NASA Astrophysics Data System (ADS)

    Fassò, A.; Ignaccolo, R.; Madonna, F.; Demoz, B. B.

    2013-08-01

    The uncertainty of important atmospheric parameters is a key factor in assessing the uncertainty of global change estimates given by numerical prediction models. One of the critical points of the uncertainty budget is the collocation mismatch in space and time among different observations. This is particularly important for vertical atmospheric profiles obtained by radiosondes or LIDAR. In this paper we consider a statistical modelling approach to understand to what extent collocation uncertainty is related to environmental factors, height and the distance between the trajectories. To do this we introduce a new statistical approach, based on the heteroskedastic functional regression (HFR) model, which extends the standard functional regression approach and allows a natural definition of uncertainty profiles. Moreover, using this modelling approach, a five-fold uncertainty decomposition is proposed. Finally, the HFR approach is illustrated by the collocation uncertainty analysis of relative humidity from two stations involved in the GCOS Reference Upper-Air Network (GRUAN).

  5. Adaptive multiresolution modeling of groundwater flow in heterogeneous porous media

    NASA Astrophysics Data System (ADS)

    Malenica, Luka; Gotovac, Hrvoje; Srzic, Veljko; Andric, Ivo

    2016-04-01

    different temporal lines and local time stepping control. A critical aspect of time integration accuracy is the construction of the spatial stencil needed for accurate calculation of spatial derivatives. Since the common approach used for wavelets and splines relies on a finite difference operator, we develop here a collocation operator that includes both solution values and the differential operator. In this way, the new improved algorithm is adaptive in space and time, enabling accurate solution of groundwater flow problems, especially in highly heterogeneous porous media with large lnK variances and different correlation length scales. In addition, differences between the collocation and finite volume approaches are discussed. Finally, results show the application of the methodology to groundwater flow problems in highly heterogeneous confined and unconfined aquifers.

  6. Comparison of Implicit Collocation Methods for the Heat Equation

    NASA Technical Reports Server (NTRS)

    Kouatchou, Jules; Jezequel, Fabienne; Zukor, Dorothy (Technical Monitor)

    2001-01-01

    We combine a high-order compact finite difference scheme to approximate spatial derivatives and collocation techniques for the time component to numerically solve the two-dimensional heat equation. We use two approaches to implement the collocation methods. The first one is based on an explicit computation of the coefficients of polynomials and the second one relies on differential quadrature. We compare them by studying their merits and analyzing their numerical performance. All our computations, based on parallel algorithms, are carried out on the CRAY SV1.

  7. Wavelets on Planar Tesselations

    SciTech Connect

    Bertram, M.; Duchaineau, M.A.; Hamann, B.; Joy, K.I.

    2000-02-25

    We present a new technique for progressive approximation and compression of polygonal objects in images. Our technique uses local parameterizations defined by meshes of convex polygons in the plane. We generalize a tensor product wavelet transform to polygonal domains to perform multiresolution analysis and compression of image regions. The advantage of our technique over conventional wavelet methods is that the domain is an arbitrary tessellation rather than, for example, a uniform rectilinear grid. We expect that this technique has many applications, including image compression, progressive transmission, radiosity, virtual reality, and image morphing.

  8. Electromagnetic spatial coherence wavelets.

    PubMed

    Castaneda, Roman; Garcia-Sucerquia, Jorge

    2006-01-01

    The recently introduced concept of spatial coherence wavelets is generalized to describe the propagation of electromagnetic fields in free space. For this aim, the spatial coherence wavelet tensor is introduced as an elementary quantity, in terms of which the formerly known quantities for this domain can be expressed. It allows for the analysis of the relationship between the spatial coherence properties and the polarization state of the electromagnetic wave. This approach is completely consistent with the recently introduced unified theory of coherence and polarization for random electromagnetic beams, but it provides further insight into the causal relationship between the polarization states at different planes along the propagation path. PMID:16478063

  9. Adapt

    NASA Astrophysics Data System (ADS)

    Bargatze, L. F.

    2015-12-01

    Active Data Archive Product Tracking (ADAPT) is a collection of software routines that permits one to generate XML metadata files to describe and register data products in support of the NASA Heliophysics Virtual Observatory VxO effort. ADAPT is also a philosophy. The ADAPT concept is to use any and all available metadata associated with scientific data to produce XML metadata descriptions in a consistent, uniform, and organized fashion to provide blanket access to the full complement of data stored on a targeted data server. In this poster, we present an application of ADAPT to describe all of the data products that are stored by using the Common Data File (CDF) format served out by the CDAWEB and SPDF data servers hosted at the NASA Goddard Space Flight Center. These data servers are the primary repositories for NASA Heliophysics data. For this purpose, the ADAPT routines have been used to generate data resource descriptions by using an XML schema named Space Physics Archive, Search, and Extract (SPASE). SPASE is the designated standard for documenting Heliophysics data products, as adopted by the Heliophysics Data and Model Consortium. The set of SPASE XML resource descriptions produced by ADAPT includes high-level descriptions of numerical data products, display data products, or catalogs and also includes low-level "Granule" descriptions. A SPASE Granule is effectively a universal access metadata resource; a Granule associates an individual data file (e.g. a CDF file) with a "parent" high-level data resource description, assigns a resource identifier to the file, and lists the corresponding access URL(s). The CDAWEB and SPDF file systems were queried to provide the input required by the ADAPT software to create an initial set of SPASE metadata resource descriptions. Then, the CDAWEB and SPDF data repositories were queried subsequently on a nightly basis and the CDF file lists were checked for any changes such as the occurrence of new, modified, or deleted

  10. Minimal multi-element stochastic collocation for uncertainty quantification of discontinuous functions

    SciTech Connect

    Jakeman, John D.; Narayan, Akil; Xiu, Dongbin

    2013-06-01

    We propose a multi-element stochastic collocation method that can be applied in high-dimensional parameter space for functions with discontinuities lying along manifolds of general geometries. The key feature of the method is that the parameter space is decomposed into multiple elements defined by the discontinuities, so only the minimal number of elements is utilized. On each of the resulting elements the function is smooth and can be approximated using high-order methods with fast convergence properties. The decomposition strategy is in direct contrast to traditional multi-element approaches, which define the sub-domains by repeated splitting of the axes in the parameter space. Such methods are more prone to the curse of dimensionality because of the fast growth of the number of elements caused by the axis-based splitting. The present method is a two-step approach. First, a discontinuity detector is used to partition the parameter space into disjoint elements in each of which the function is smooth. The detector uses an efficient combination of the high-order polynomial annihilation technique with adaptive sparse grids, which allows resolution of general discontinuities with a smaller number of points when the discontinuity manifold is low-dimensional. After partitioning, an adaptive technique based on the least orthogonal interpolant is used to construct a generalized Polynomial Chaos surrogate on each element. The adaptive technique reuses all information from the partitioning and is variance-suppressing. We present numerous numerical examples that illustrate the accuracy, efficiency, and generality of the method. When compared against standard locally-adaptive sparse grid methods, the present method uses far fewer collocation samples and is more accurate.
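
    A drastically simplified one-dimensional illustration of the two-step idea is sketched below (NumPy assumed); a crude finite-difference jump detector stands in for the polynomial-annihilation/sparse-grid detector, and an ordinary Legendre least-squares fit stands in for the least orthogonal interpolant:

        import numpy as np
        from numpy.polynomial import legendre

        def f(x):
            # Discontinuous test function on [-1, 1] (jump near x = 0.3).
            return np.where(x < 0.3, np.sin(np.pi * x), 2.0 + 0.5 * x)

        # Step 1: locate the jump on a coarse grid; the interval with the largest
        # first difference brackets the discontinuity.
        xg = np.linspace(-1.0, 1.0, 201)
        i = np.argmax(np.abs(np.diff(f(xg))))
        b_left, a_right = xg[i], xg[i + 1]           # element boundaries

        # Step 2: smooth polynomial surrogate on each element separately.
        def element_surrogate(a, b, degree=6, n_colloc=20):
            x = np.linspace(a, b, n_colloc)          # collocation samples
            return legendre.Legendre.fit(x, f(x), degree, domain=[a, b])

        left = element_surrogate(-1.0, b_left)
        right = element_surrogate(a_right, 1.0)

        # Evaluation dispatches on the element, so each surrogate only ever sees
        # a smooth piece of f and there are no Gibbs oscillations.
        xt = np.linspace(-1.0, 1.0, 7)
        approx = np.where(xt <= b_left, left(xt), right(xt))
        print(np.max(np.abs(approx - f(xt))))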

  11. Collocational Strategies of Arab Learners of English: A Study in Lexical Semantics.

    ERIC Educational Resources Information Center

    Muhammad, Raji Zughoul; Abdul-Fattah, Hussein S.

    Arab learners of English encounter a serious problem with collocational sequences. The present study purports to determine the extent to which university English language majors can use English collocations properly. A two-form translation test of 16 Arabic collocations was administered to both graduate and undergraduate students of English. The…

  12. L2 Learner Production and Processing of Collocation: A Multi-Study Perspective

    ERIC Educational Resources Information Center

    Siyanova, Anna; Schmitt, Norbert

    2008-01-01

    This article presents a series of studies focusing on L2 production and processing of adjective-noun collocations (e.g., "social services"). In Study 1, 810 adjective-noun collocations were extracted from 31 essays written by Russian learners of English. About half of these collocations appeared frequently in the British National Corpus (BNC);…

  13. An Exploratory Study of Collocational Use by ESL Students--A Task Based Approach

    ERIC Educational Resources Information Center

    Fan, May

    2009-01-01

    Collocation is an aspect of language generally considered arbitrary by nature and problematic to L2 learners who need collocational competence for effective communication. This study attempts, from the perspective of L2 learners, to have a deeper understanding of collocational use and some of the problems involved, by adopting a task based…

  14. Redefining Creativity--Analyzing Definitions, Collocations, and Consequences

    ERIC Educational Resources Information Center

    Kampylis, Panagiotis G.; Valtanen, Juri

    2010-01-01

    How holistically is human creativity defined, investigated, and understood? Until recently, most scientific research on creativity has focused on its positive side. However, creativity might not only be a desirable resource but also be a potential threat. In order to redefine creativity we need to analyze and understand definitions, collocations,…

  15. Domain identification in impedance computed tomography by spline collocation method

    NASA Technical Reports Server (NTRS)

    Kojima, Fumio

    1990-01-01

    A method for estimating an unknown domain in elliptic boundary value problems is considered. The problem is formulated as an inverse problem of integral equations of the second kind. A computational method is developed using a spline collocation scheme. The results can be applied to the inverse problem of impedance computed tomography (ICT) for image reconstruction.

  16. Collocation Method for Numerical Solution of Coupled Nonlinear Schroedinger Equation

    SciTech Connect

    Ismail, M. S.

    2010-09-30

    The coupled nonlinear Schroedinger equation models several interesting physical phenomena and serves as a model equation for optical fibers with linear birefringence. In this paper we use a collocation method to solve this equation and test the method for stability and accuracy. Numerical tests using a single soliton and the interaction of three solitons are used to assess the resulting scheme.

  17. Recent advances in (soil moisture) triple collocation analysis

    Technology Transfer Automated Retrieval System (TEKTRAN)

    To date, triple collocation (TC) analysis is one of the most important methods for the global scale evaluation of remotely sensed soil moisture data sets. In this study we review existing implementations of soil moisture TC analysis as well as investigations of the assumptions underlying the method....

  18. Beyond triple collocation: Applications to satellite soil moisture

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Triple collocation is now routinely used to resolve the exact (linear) relationships between multiple measurements and/or representations of a geophysical variable that are subject to errors. It has been utilized in the context of calibration, rescaling and error characterisation to allow comparison...
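
    The core covariance-based triple collocation estimates can be written in a few lines (NumPy assumed; the sketch assumes the three products are already rescaled to a common reference and have mutually independent errors):

        import numpy as np

        def triple_collocation_error_var(x, y, z):
            # Classical covariance-notation TC estimates of the error variances
            # of three collocated products measuring the same variable.
            c = np.cov(np.vstack([x, y, z]))
            var_x = c[0, 0] - c[0, 1] * c[0, 2] / c[1, 2]
            var_y = c[1, 1] - c[0, 1] * c[1, 2] / c[0, 2]
            var_z = c[2, 2] - c[0, 2] * c[1, 2] / c[0, 1]
            return var_x, var_y, var_z

        # Synthetic check: one truth series observed by three noisy "products".
        truth = np.random.randn(10000)
        x = truth + 0.10 * np.random.randn(10000)
        y = truth + 0.20 * np.random.randn(10000)
        z = truth + 0.30 * np.random.randn(10000)
        print(triple_collocation_error_var(x, y, z))   # roughly (0.01, 0.04, 0.09)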

  19. The Effects of Vocabulary Learning on Collocation and Meaning

    ERIC Educational Resources Information Center

    Webb, Stuart; Kagimoto, Eve

    2009-01-01

    This study investigates the effects of receptive and productive vocabulary tasks on learning collocation and meaning. Japanese English as a foreign language students learned target words in three glossed sentences and in a cloze task. To determine the effects of the treatments, four tests were used to measure receptive and productive knowledge of…

  20. Evaluation of assumptions in soil moisture triple collocation analysis

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Triple collocation analysis (TCA) enables estimation of error variances for three or more products that retrieve or estimate the same geophysical variable using mutually-independent methods. Several statistical assumptions regarding the statistical nature of errors (e.g., mutual independence and ort...

  1. Beyond Single Words: The Most Frequent Collocations in Spoken English

    ERIC Educational Resources Information Center

    Shin, Dongkwang; Nation, Paul

    2008-01-01

    This study presents a list of the highest frequency collocations of spoken English based on carefully applied criteria. In the literature, more than forty terms have been used for designating multi-word units, which are generally not well defined. To avoid this confusion, six criteria are strictly applied. The ten million word BNC spoken section…

  2. Real-time defect detection of steel wire rods using wavelet filters optimized by univariate dynamic encoding algorithm for searches.

    PubMed

    Yun, Jong Pil; Jeon, Yong-Ju; Choi, Doo-chul; Kim, Sang Woo

    2012-05-01

    We propose a new defect detection algorithm for scale-covered steel wire rods. The algorithm incorporates an adaptive wavelet filter that is designed on the basis of lattice parameterization of orthogonal wavelet bases. This approach offers the opportunity to design orthogonal wavelet filters via optimization methods. To improve the performance and the flexibility of wavelet design, we propose the use of the undecimated discrete wavelet transform, and separate design of column and row wavelet filters but with a common cost function. The coefficients of the wavelet filters are optimized by the so-called univariate dynamic encoding algorithm for searches (uDEAS), which searches the minimum value of a cost function designed to maximize the energy difference between defects and background noise. Moreover, for improved detection accuracy, we propose an enhanced double-threshold method. Experimental results for steel wire rod surface images obtained from actual steel production lines show that the proposed algorithm is effective. PMID:22561939
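
    The undecimated transform and a double-threshold detection step can be roughly illustrated with the stationary wavelet transform in PyWavelets; the wavelet, thresholds and pixel-level region growing below are placeholders rather than the optimized filters and enhanced double-threshold method of the paper:

        import numpy as np
        import pywt

        # Synthetic surface image with a thin bright defect line on a noisy
        # scale-covered background.
        img = 0.1 * np.random.randn(64, 64)
        img[30:32, 10:54] += 1.5

        # One level of the undecimated (stationary) 2-D wavelet transform; the
        # defect energy concentrates in the detail sub-bands.
        cA, (cH, cV, cD) = pywt.swt2(img, 'db2', level=1)[0]
        detail_energy = cH ** 2 + cV ** 2 + cD ** 2

        # Crude double-threshold detection: strong responses seed a detection,
        # and neighbouring responses above the lower threshold are attached.
        hi = np.percentile(detail_energy, 99.5)
        lo = np.percentile(detail_energy, 97.0)
        seeds = detail_energy > hi
        grown = seeds.copy()
        for shift, axis in [(1, 0), (-1, 0), (1, 1), (-1, 1)]:
            grown |= np.roll(seeds, shift, axis) & (detail_energy > lo)
        print(grown.sum(), "pixels flagged as defect")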

  3. L1 Influence on the Acquisition of L2 Collocations: Japanese ESL Users and EFL Learners Acquiring English Collocations

    ERIC Educational Resources Information Center

    Yamashita, Junko; Jiang, Nan

    2010-01-01

    This study investigated first language (L1) influence on the acquisition of second language (L2) collocations using a framework based on Kroll and Stewart (1994) and Jiang (2000), by comparing the performance on a phrase-acceptability judgment task among native speakers of English, Japanese English as a second language (ESL) users, and Japanese…

  4. Basis Selection for Wavelet Regression

    NASA Technical Reports Server (NTRS)

    Wheeler, Kevin R.; Lau, Sonie (Technical Monitor)

    1998-01-01

    A wavelet basis selection procedure is presented for wavelet regression. Both the basis and the threshold are selected using cross-validation. The method includes the capability of incorporating prior knowledge on the smoothness (or shape of the basis functions) into the basis selection procedure. The results of the method are demonstrated on sampled functions widely used in the wavelet regression literature. The results of the method are contrasted with other published methods.

  5. Discrete wavelet analysis of power system transients

    SciTech Connect

    Wilkinson, W.A.; Cox, M.D.

    1996-11-01

    Wavelet analysis is a new method for studying power system transients. Through wavelet analysis, transients are decomposed into a series of wavelet components, each of which is a time-domain signal that covers a specific octave frequency band. This paper presents the basic ideas of discrete wavelet analysis. A variety of actual and simulated transient signals are then analyzed using the discrete wavelet transform that help demonstrate the power of wavelet analysis.
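
    A minimal sketch of such an octave-band decomposition (PyWavelets assumed; the sampling rate, wavelet and signal are illustrative only) is:

        import numpy as np
        import pywt

        fs = 6400.0                                   # assumed sampling rate [Hz]
        t = np.arange(0.0, 0.2, 1.0 / fs)
        fundamental = np.sin(2 * np.pi * 60.0 * t)    # 60 Hz power-frequency term
        burst = np.exp(-((t - 0.1) / 0.002) ** 2) * np.sin(2 * np.pi * 1500.0 * t)
        signal = fundamental + 0.5 * burst            # transient near t = 0.1 s

        # Each detail level of the DWT covers roughly one octave frequency band:
        # d1 ~ [fs/4, fs/2], d2 ~ [fs/8, fs/4], and so on.
        coeffs = pywt.wavedec(signal, 'db4', level=5)
        for lvl, d in zip(range(5, 0, -1), coeffs[1:]):
            f_lo, f_hi = fs / 2 ** (lvl + 1), fs / 2 ** lvl
            print(f"d{lvl}: ~{f_lo:5.0f}-{f_hi:5.0f} Hz  energy = {np.sum(d ** 2):.2f}")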

  6. Weak transient fault feature extraction based on an optimized Morlet wavelet and kurtosis

    NASA Astrophysics Data System (ADS)

    Qin, Yi; Xing, Jianfeng; Mao, Yongfang

    2016-08-01

    Aimed at solving the key problem in weak transient detection, the present study proposes a new transient feature extraction approach using the optimized Morlet wavelet transform, kurtosis index and soft-thresholding. Firstly, a fast optimization algorithm based on the Shannon entropy is developed to obtain the optimized Morlet wavelet parameter. Compared to the existing Morlet wavelet parameter optimization algorithm, this algorithm has lower computation complexity. After performing the optimized Morlet wavelet transform on the analyzed signal, the kurtosis index is used to select the characteristic scales and obtain the corresponding wavelet coefficients. From the time-frequency distribution of the periodic impulsive signal, it is found that the transient signal can be reconstructed by the wavelet coefficients at several characteristic scales, rather than the wavelet coefficients at just one characteristic scale, so as to improve the accuracy of transient detection. Due to the noise influence on the characteristic wavelet coefficients, the adaptive soft-thresholding method is applied to denoise these coefficients. With the denoised wavelet coefficients, the transient signal can be reconstructed. The proposed method was applied to the analysis of two simulated signals, and the diagnosis of a rolling bearing fault and a gearbox fault. The superiority of the method over the fast kurtogram method was verified by the results of simulation analysis and real experiments. It is concluded that the proposed method is extremely suitable for extracting the periodic impulsive feature from strong background noise.
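
    The scale-selection idea can be sketched as follows (NumPy, SciPy and PyWavelets assumed); a fixed Morlet kernel replaces the entropy-optimized wavelet parameter of the paper, and summing the thresholded coefficients is only a crude surrogate for a proper reconstruction:

        import numpy as np
        import pywt
        from scipy.stats import kurtosis

        fs = 12000.0
        t = np.arange(0.0, 0.5, 1.0 / fs)
        # Periodic impulsive fault signature (one ringing burst every 25 ms)
        # buried in broadband noise.
        impulses = (np.arange(t.size) % int(0.025 * fs) == 0).astype(float)
        ringing = np.exp(-2000.0 * t[:120]) * np.sin(2 * np.pi * 3000.0 * t[:120])
        x = np.convolve(impulses, ringing, mode='same') + 0.5 * np.random.randn(t.size)

        # Morlet CWT over a range of scales.
        scales = np.arange(2, 64)
        coefs, freqs = pywt.cwt(x, scales, 'morl', sampling_period=1.0 / fs)

        # Impulsive content makes the coefficient distribution heavy-tailed, so
        # the characteristic scales are those with the largest kurtosis.
        best = np.argsort(kurtosis(coefs, axis=1))[-5:]

        # Soft-threshold the selected scales and sum them as a rough surrogate
        # for reconstructing the transient component.
        den = [pywt.threshold(coefs[i], np.std(coefs[i]), mode='soft') for i in best]
        transient_estimate = np.sum(den, axis=0)
        print("selected centre frequencies [Hz]:", np.round(freqs[best]))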

  7. Wavelets in medical imaging

    SciTech Connect

    Zahra, Noor e; Sevindir, Huliya A.; Aslan, Zafar; Siddiqi, A. H.

    2012-07-17

    The aim of this study is to provide emerging applications of wavelet methods to medical signals and images, such as the electrocardiogram, electroencephalogram, functional magnetic resonance imaging, computed tomography, X-ray and mammography. Interpretation of these signals and images is quite important. Nowadays wavelet methods have a significant impact on the science of medical imaging and on the diagnosis of disease and screening protocols. Based on our initial investigations, future directions include neurosurgical planning and improved assessment of risk for individual patients, improved assessment and strategies for the treatment of chronic pain, improved seizure localization, and improved understanding of the physiology of neurological disorders. We look ahead to these and other emerging applications as the benefits of this technology become incorporated into current and future patient care. In this chapter, by applying the Fourier transform and the wavelet transform, analysis and denoising of one of the important biomedical signals, the EEG, is carried out. The presence of rhythm, template matching, and correlation is discussed by various methods. The energy of the EEG signal is used to detect seizures in an epileptic patient. We have also performed denoising of EEG signals by SWT.

  8. Wavelets in medical imaging

    NASA Astrophysics Data System (ADS)

    Zahra, Noor e.; Sevindir, Huliya A.; Aslan, Zafar; Siddiqi, A. H.

    2012-07-01

    The aim of this study is to provide emerging applications of wavelet methods to medical signals and images, such as the electrocardiogram, electroencephalogram, functional magnetic resonance imaging, computed tomography, X-ray and mammography. Interpretation of these signals and images is quite important. Nowadays wavelet methods have a significant impact on the science of medical imaging and on the diagnosis of disease and screening protocols. Based on our initial investigations, future directions include neurosurgical planning and improved assessment of risk for individual patients, improved assessment and strategies for the treatment of chronic pain, improved seizure localization, and improved understanding of the physiology of neurological disorders. We look ahead to these and other emerging applications as the benefits of this technology become incorporated into current and future patient care. In this chapter, by applying the Fourier transform and the wavelet transform, analysis and denoising of one of the important biomedical signals, the EEG, is carried out. The presence of rhythm, template matching, and correlation is discussed by various methods. The energy of the EEG signal is used to detect seizures in an epileptic patient. We have also performed denoising of EEG signals by SWT.

  9. A Chebyshev Collocation Method for Moving Boundaries, Heat Transfer, and Convection During Directional Solidification

    NASA Technical Reports Server (NTRS)

    Zhang, Yiqiang; Alexander, J. I. D.; Ouazzani, J.

    1994-01-01

    Free and moving boundary problems require the simultaneous solution of unknown field variables and the boundaries of the domains on which these variables are defined. There are many technologically important processes that lead to moving boundary problems associated with fluid surfaces and solid-fluid boundaries. These include crystal growth, metal alloy and glass solidification, melting and flame propagation. The directional solidification of semiconductor crystals by the Bridgman-Stockbarger method is a typical example of such a complex process. A numerical model of this growth method must solve the appropriate heat, mass and momentum transfer equations and determine the location of the melt-solid interface. In this work, a Chebyshev pseudospectral collocation method is adapted to the problem of directional solidification. Implementation involves a solution algorithm that combines domain decomposition, a finite-difference preconditioned conjugate minimum residual method and a Picard-type iterative scheme.
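
    The Chebyshev collocation core (not the full moving-boundary solidification model) can be illustrated with the standard differentiation-matrix construction applied to a one-dimensional model heat-transfer problem; NumPy is assumed and the problem itself is purely illustrative:

        import numpy as np

        def cheb(n):
            # Chebyshev collocation points and first-derivative matrix
            # (standard Trefethen-style construction).
            x = np.cos(np.pi * np.arange(n + 1) / n)
            c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
            dX = x[:, None] - x[None, :]
            D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))
            return D - np.diag(D.sum(axis=1)), x

        # Model steady problem u'' = exp(x) on (-1, 1) with u(+/-1) = 0, standing
        # in for the heat-transfer sub-problem of the solidification model.
        n = 24
        D, x = cheb(n)
        D2 = D @ D
        u = np.zeros(n + 1)
        u[1:n] = np.linalg.solve(D2[1:n, 1:n], np.exp(x[1:n]))  # Dirichlet BCs
        exact = np.exp(x) - np.sinh(1.0) * x - np.cosh(1.0)
        print(np.max(np.abs(u - exact)))          # spectral accuracy expected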

  10. An iterative finite-element collocation method for parabolic problems using domain decomposition

    SciTech Connect

    Curran, M.C.

    1992-01-01

    Advection-dominated flows occur widely in the transport of groundwater contaminants, the movements of fluids in enhanced oil recovery projects, and many other contexts. In numerical models of such flows, adaptive local grid refinement is a conceptually attractive approach for resolving the sharp fronts or layers that tend to characterize the solutions. However, this approach can be difficult to implement in practice. A domain decomposition method developed by Bramble, Ewing, Pasciak, and Schatz, known as the BEPS method, overcomes many of the difficulties. We demonstrate the applicability of the iterative BEPS ideas to finite-element collocation on trial spaces of piecewise Hermite bicubics. The resulting scheme allows one to refine selected parts of a spatial grid without destroying algebraic efficiencies associated with the original coarse grid. We apply the method to two dimensional time-dependent advection-diffusion problems.

  11. An iterative finite-element collocation method for parabolic problems using domain decomposition

    SciTech Connect

    Curran, M.C.

    1992-11-01

    Advection-dominated flows occur widely in the transport of groundwater contaminants, the movements of fluids in enhanced oil recovery projects, and many other contexts. In numerical models of such flows, adaptive local grid refinement is a conceptually attractive approach for resolving the sharp fronts or layers that tend to characterize the solutions. However, this approach can be difficult to implement in practice. A domain decomposition method developed by Bramble, Ewing, Pasciak, and Schatz, known as the BEPS method, overcomes many of the difficulties. We demonstrate the applicability of the iterative BEPS ideas to finite-element collocation on trial spaces of piecewise Hermite bicubics. The resulting scheme allows one to refine selected parts of a spatial grid without destroying algebraic efficiencies associated with the original coarse grid. We apply the method to two dimensional time-dependent advection-diffusion problems.

  12. A Two-Timescale Discretization Scheme for Collocation

    NASA Technical Reports Server (NTRS)

    Desai, Prasun; Conway, Bruce A.

    2004-01-01

    The development of a two-timescale discretization scheme for collocation is presented. This scheme allows a larger discretization to be utilized for smoothly varying state variables and a second, finer discretization to be utilized for state variables having higher frequency dynamics. As such, the discretization scheme can be tailored to the dynamics of the particular state variables. In so doing, the size of the overall Nonlinear Programming (NLP) problem can be reduced significantly. Two two-timescale discretization architecture schemes are described. Comparison of results between the two-timescale method and conventional collocation shows very good agreement. Differences of less than 0.5 percent are observed. Consequently, a significant reduction (by two-thirds) in the number of NLP parameters and iterations required for convergence can be achieved without sacrificing solution accuracy.

  13. Collocation methods for distillation design. 2: Applications for distillation

    SciTech Connect

    Huss, R.S.; Westerberg, A.W.

    1996-05-01

    The authors present applications of a collocation method for modeling distillation columns that they developed in a companion paper. They discuss implementation of the model, including a discussion of the ASCEND (Advanced System for Computations in ENgineering Design) system, which enables one to create complex models with simple building blocks and interactively learn to solve them. They first investigate applying the model to compute minimum reflux for a given separation task, exactly solving nonsharp and approximately solving sharp-split minimum reflux problems. They next illustrate the use of the collocation model to optimize the design of a single column capable of carrying out a prescribed set of separation tasks. The optimization picks the best column diameter and total number of trays. It also picks the feed tray for each of the prescribed separations.

  14. Collocation and Least Residuals Method and Its Applications

    NASA Astrophysics Data System (ADS)

    Shapeev, Vasily

    2016-02-01

    The collocation and least residuals (CLR) method combines the method of collocations (CM) with the method of least residuals. Unlike the CM, in the CLR method an approximate solution of the problem is found from an overdetermined system of linear algebraic equations (SLAE). The solution of this system is sought under the requirement of minimizing a functional involving the residuals of all its equations. On the one hand, this added complication of the numerical algorithm expands the capabilities of the CM for solving boundary value problems with singularities. On the other hand, the CLR method inherits to a considerable extent some convenient features of the CM. In the present paper, the CLR capabilities are illustrated on benchmark problems for the 2D and 3D Navier-Stokes equations, the modeling of laser welding of plates of similar and dissimilar metals, problems investigating the strength of loaded parts made of composite materials, and boundary-value problems for hyperbolic equations.
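
    The overdetermined-collocation idea can be sketched on a trivial model ODE: collocate the residual at more points than there are polynomial coefficients and minimize the residual functional by linear least squares (NumPy assumed; the degree, point count and boundary-condition weight are arbitrary choices):

        import numpy as np

        # Model problem: u'(x) + u(x) = 0 on [0, 1] with u(0) = 1; exact
        # solution exp(-x). Approximate u by a degree-6 polynomial and collocate
        # the residual at more points than there are coefficients.
        deg = 6
        powers = np.arange(deg + 1)
        xc = np.linspace(0.0, 1.0, 25)[:, None]   # 25 collocation points, 7 unknowns

        # Residual rows: sum_k a_k (k x^(k-1) + x^k) = 0 at every collocation point.
        A = powers * xc ** np.clip(powers - 1, 0, None) + xc ** powers
        b = np.zeros(xc.size)

        # Boundary condition u(0) = 1 appended as an extra, more heavily
        # weighted equation of the overdetermined system.
        A = np.vstack([A, 10.0 * (0.0 ** powers)])
        b = np.hstack([b, 10.0])

        coef, *_ = np.linalg.lstsq(A, b, rcond=None)
        xt = np.linspace(0.0, 1.0, 5)
        print(np.polyval(coef[::-1], xt) - np.exp(-xt))   # error vs. exact solution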

  15. Radiation energy budget studies using collocated AVHRR and ERBE observations

    SciTech Connect

    Ackerman, S.A.; Inoue, Toshiro

    1994-03-01

    Changes in the energy balance at the top of the atmosphere are specified as a function of atmospheric and surface properties using observations from the Advanced Very High Resolution Radiometer (AVHRR) and the Earth Radiation Budget Experiment (ERBE) scanner. By collocating the observations from the two instruments, flown on NOAA-9, the authors take advantage of the remote-sensing capabilities of each instrument. The AVHRR spectral channels were selected based on regions that are strongly transparent to clear sky conditions and are therefore useful for characterizing both surface and cloud-top conditions. The ERBE instruments make broadband observations that are important for climate studies. The approach of collocating these observations in time and space is used to study the radiative energy budget of three geographic regions: oceanic, savanna, and desert. 25 refs., 8 figs.

  16. Locating CVBEM collocation points for steady state heat transfer problems

    USGS Publications Warehouse

    Hromadka, T.V., II

    1985-01-01

    The Complex Variable Boundary Element Method or CVBEM provides a highly accurate means of developing numerical solutions to steady state two-dimensional heat transfer problems. The numerical approach exactly solves the Laplace equation and satisfies the boundary conditions at specified points on the boundary by means of collocation. The accuracy of the approximation depends upon the nodal point distribution specified by the numerical analyst. In order to develop subsequent, refined approximation functions, four techniques for selecting additional collocation points are presented. The techniques are compared as to the governing theory, representation of the error of approximation on the problem boundary, the computational costs, and the ease of use by the numerical analyst. © 1985.

  17. Market turning points forecasting using wavelet analysis

    NASA Astrophysics Data System (ADS)

    Bai, Limiao; Yan, Sen; Zheng, Xiaolian; Chen, Ben M.

    2015-11-01

    Based on the system adaptation framework we previously proposed, a frequency domain based model is developed in this paper to forecast the major turning points of stock markets. This system adaptation framework has its internal model and adaptive filter to capture the slow and fast dynamics of the market, respectively. The residue of the internal model is found to contain rich information about the market cycles. In order to extract and restore its informative frequency components, we use wavelet multi-resolution analysis with time-varying parameters to decompose this internal residue. An empirical index is then proposed based on the recovered signals to forecast the market turning points. This index is successfully applied to US, UK and China markets, where all major turning points are well forecasted.

  18. Domain decomposition preconditioners for the spectral collocation method

    NASA Technical Reports Server (NTRS)

    Quarteroni, Alfio; Sacchilandriani, Giovanni

    1988-01-01

    Several block iteration preconditioners are proposed and analyzed for the solution of elliptic problems by spectral collocation methods in a region partitioned into several rectangles. It is shown that convergence is achieved with a rate which does not depend on the polynomial degree of the spectral solution. The iterative methods here presented can be effectively implemented on multiprocessor systems due to their high degree of parallelism.

  19. Multiscale quantum propagation using compact-support wavelets in space and time

    SciTech Connect

    Wang Haixiang; Acevedo, Ramiro; Molle, Heather; Mackey, Jeffrey L.; Kinsey, James L.; Johnson, Bruce R.

    2004-10-22

    Orthogonal compact-support Daubechies wavelets are employed as bases for both space and time variables in the solution of the time-dependent Schroedinger equation. Initial value conditions are enforced using special early-time wavelets analogous to edge wavelets used in boundary-value problems. It is shown that the quantum equations may be solved directly and accurately in the discrete wavelet representation, an important finding for the eventual goal of highly adaptive multiresolution Schroedinger equation solvers. While the temporal part of the basis is not sharp in either time or frequency, the Chebyshev method used for pure time-domain propagations is adapted to use in the mixed domain and is able to take advantage of Hamiltonian matrix sparseness. The orthogonal separation into different time scales is determined theoretically to persist throughout the evolution and is demonstrated numerically in a partially adaptive treatment of scattering from an asymmetric Eckart barrier.

  20. Mars Mission Optimization Based on Collocation of Resources

    NASA Technical Reports Server (NTRS)

    Chamitoff, G. E.; James, G. H.; Barker, D. C.; Dershowitz, A. L.

    2003-01-01

    This paper presents a powerful approach for analyzing Martian data and for optimizing mission site selection based on resource collocation. This approach is implemented in a program called PROMT (Planetary Resource Optimization and Mapping Tool), which provides a wide range of analysis and display functions that can be applied to raw data or imagery. Thresholds, contours, custom algorithms, and graphical editing are some of the various methods that can be used to process data. Output maps can be created to identify surface regions on Mars that meet any specific criteria. The use of this tool for analyzing data, generating maps, and collocating features is demonstrated using data from the Mars Global Surveyor and the Odyssey spacecraft. The overall mission design objective is to maximize a combination of scientific return and self-sufficiency based on utilization of local materials. Landing site optimization involves maximizing accessibility to collocated science and resource features within a given mission radius. Mission types are categorized according to duration, energy resources, and in-situ resource utilization. Optimization results are shown for a number of mission scenarios.

  1. Pseudospectral collocation methods for fourth order differential equations

    NASA Technical Reports Server (NTRS)

    Malek, Alaeddin; Phillips, Timothy N.

    1994-01-01

    Collocation schemes are presented for solving linear fourth order differential equations in one and two dimensions. The variational formulation of the model fourth order problem is discretized by approximating the integrals by a Gaussian quadrature rule generalized to include the values of the derivative of the integrand at the boundary points. Collocation schemes are derived which are equivalent to this discrete variational problem. An efficient preconditioner based on a low-order finite difference approximation to the same differential operator is presented. The corresponding multidomain problem is also considered and interface conditions are derived. Pseudospectral approximations which are C1 continuous at the interfaces are used in each subdomain to approximate the solution. The approximations are also shown to be C3 continuous at the interfaces asymptotically. A complete analysis of the collocation scheme for the multidomain problem is provided. The extension of the method to the biharmonic equation in two dimensions is discussed and results are presented for a problem defined in a nonrectangular domain.

  2. A Corpus-Based Study of the Linguistic Features and Processes Which Influence the Way Collocations Are Formed: Some Implications for the Learning of Collocations

    ERIC Educational Resources Information Center

    Walker, Crayton Phillip

    2011-01-01

    In this article I examine the collocational behaviour of groups of semantically related verbs (e.g., "head, run, manage") and nouns (e.g., "issue, factor, aspect") from the domain of business English. The results of this corpus-based study show that much of the collocational behaviour exhibited by these lexical items can be explained by examining…

  3. Finite element-wavelet hybrid algorithm for atmospheric tomography.

    PubMed

    Yudytskiy, Mykhaylo; Helin, Tapio; Ramlau, Ronny

    2014-03-01

    Reconstruction of the refractive index fluctuations in the atmosphere, or atmospheric tomography, is an underlying problem of many next generation adaptive optics (AO) systems, such as the multiconjugate adaptive optics or multiobject adaptive optics (MOAO). The dimension of the problem for the extremely large telescopes, such as the European Extremely Large Telescope (E-ELT), suggests the use of iterative schemes as an alternative to the matrix-vector multiply (MVM) methods. Recently, an algorithm based on the wavelet representation of the turbulence has been introduced in [Inverse Probl.29, 085003 (2013)] by the authors to solve the atmospheric tomography using the conjugate gradient iteration. The authors also developed an efficient frequency-dependent preconditioner for the wavelet method in a later work. In this paper we study the computational aspects of the wavelet algorithm. We introduce three new techniques, the dual domain discretization strategy, a scale-dependent preconditioner, and a ground layer multiscale method, to derive a method that is globally O(n), parallelizable, and compact with respect to memory. We present the computational cost estimates and compare the theoretical numerical performance of the resulting finite element-wavelet hybrid algorithm with the MVM. The quality of the method is evaluated in terms of an MOAO simulation for the E-ELT on the European Southern Observatory (ESO) end-to-end simulation system OCTOPUS. The method is compared to the ESO version of the Fractal Iterative Method [Proc. SPIE7736, 77360X (2010)] in terms of quality. PMID:24690653

  4. Data compression by wavelet transforms

    NASA Technical Reports Server (NTRS)

    Shahshahani, M.

    1992-01-01

    A wavelet transform algorithm is applied to image compression. It is observed that the algorithm does not suffer from the blockiness characteristic of the DCT-based algorithms at compression ratios exceeding 25:1, but the edges do not appear as sharp as they do with the latter method. Some suggestions for the improved performance of the wavelet transform method are presented.

  5. Wavelet transform based on the optimal wavelet pairs for tunable diode laser absorption spectroscopy signal processing.

    PubMed

    Li, Jingsong; Yu, Benli; Fischer, Horst

    2015-04-01

    This paper presents a novel methodology based on the discrete wavelet transform (DWT) and the choice of optimal wavelet pairs to adaptively process tunable diode laser absorption spectroscopy (TDLAS) spectra for quantitative analysis, such as molecular spectroscopy and trace gas detection. The proposed methodology aims to construct an optimal calibration model for a TDLAS spectrum, regardless of its background structural characteristics, thus facilitating the application of TDLAS as a powerful tool for analytical chemistry. The performance of the proposed method is verified using analysis of both synthetic and observed signals, characterized by different noise levels and baseline drift. In terms of fitting precision and signal-to-noise ratio, both have been improved significantly using the proposed method. PMID:25741689

  6. Spectral Laplace-Beltrami wavelets with applications in medical images.

    PubMed

    Tan, Mingzhen; Qiu, Anqi

    2015-05-01

    The spectral graph wavelet transform (SGWT) has recently been developed to compute wavelet transforms of functions defined on non-Euclidean spaces such as graphs. By capitalizing on the established framework of the SGWT, we adopt a fast and efficient computation of a discretized Laplace-Beltrami (LB) operator that allows its extension from arbitrary graphs to differentiable and closed 2-D manifolds (smooth surfaces embedded in the 3-D Euclidean space). This particular class of manifolds is widely used in bioimaging to characterize the morphology of cells, tissues, and organs. They are often discretized into triangular meshes, providing additional geometric information apart from simple nodes and weighted connections in graphs. In comparison with the SGWT, the wavelet bases constructed with the LB operator are spatially localized with a more uniform "spread" with respect to the underlying curvature of the surface. In our experiments, we first use synthetic data to show that traditional applications of wavelets in smoothing and edge detection can be done using the wavelet bases constructed with the LB operator. Second, we show that the multi-resolutional capabilities of the proposed framework are applicable to the classification of Alzheimer's patients versus normal subjects using hippocampal shapes. Wavelet transforms of the hippocampal shape deformations at finer resolutions registered higher sensitivity (96%) and specificity (90%) than the classification results obtained from the direct usage of hippocampal shape deformations. In addition, the Laplace-Beltrami method consistently requires a smaller number of principal components (to retain a fixed variance) at higher resolution as compared to the binary and weighted graph Laplacians, demonstrating the potential of the wavelet bases in adapting to the geometry of the underlying manifold. PMID:25343758

  7. Wavelet-Based Grid Generation

    NASA Technical Reports Server (NTRS)

    Jameson, Leland

    1996-01-01

    Wavelets can provide a basis set in which the basis functions are constructed by dilating and translating a fixed function known as the mother wavelet. The mother wavelet can be seen as a high pass filter in the frequency domain. The process of dilating and expanding this high-pass filter can be seen as altering the frequency range that is 'passed' or detected. The process of translation moves this high-pass filter throughout the domain, thereby providing a mechanism to detect the frequencies or scales of information at every location. This is exactly the type of information that is needed for effective grid generation. This paper provides motivation to use wavelets for grid generation in addition to providing the final product: source code for wavelet-based grid generation.

  8. Wavelet compression of medical imagery.

    PubMed

    Reiter, E

    1996-01-01

    Wavelet compression is a transform-based compression technique recently shown to provide diagnostic-quality images at compression ratios as great as 30:1. Based on a recently developed field of applied mathematics, wavelet compression has found success in compression applications from digital fingerprints to seismic data. The underlying strength of the method is attributable in large part to the efficient representation of image data by the wavelet transform. This efficient or sparse representation forms the basis for high-quality image compression by providing subsequent steps of the compression scheme with data likely to result in long runs of zero. These long runs of zero in turn compress very efficiently, allowing wavelet compression to deliver substantially better performance than existing Fourier-based methods. Although the lack of standardization has historically been an impediment to widespread adoption of wavelet compression, this situation may begin to change as the operational benefits of the technology become better known. PMID:10165355
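
    The sparsity argument can be made concrete with a small PyWavelets sketch: transform an image, zero all but the largest few percent of coefficients, and reconstruct (the hard threshold below stands in for the quantization and entropy-coding stages of a real codec):

        import numpy as np
        import pywt

        # Synthetic smooth "image" with an edge; real use would load pixel data.
        xx, yy = np.meshgrid(np.linspace(0, 1, 256), np.linspace(0, 1, 256))
        img = np.sin(6.0 * xx) * np.cos(4.0 * yy) + 0.5 * (xx > 0.5)

        # Multi-level 2-D wavelet transform: most coefficients of typical imagery
        # are close to zero, which is the sparsity the compression relies on.
        coeffs = pywt.wavedec2(img, 'bior4.4', level=4)
        arr, slices = pywt.coeffs_to_array(coeffs)

        # Keep only the largest 5% of coefficients in magnitude.
        arr_c = pywt.threshold(arr, np.percentile(np.abs(arr), 95), mode='hard')
        print("fraction of zeroed coefficients:", np.mean(arr_c == 0.0))

        recon = pywt.waverec2(
            pywt.array_to_coeffs(arr_c, slices, output_format='wavedec2'), 'bior4.4')
        print("peak reconstruction error:", np.max(np.abs(recon - img)))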

  9. A generalized wavelet extrema representation

    SciTech Connect

    Lu, Jian; Lades, M.

    1995-10-01

    The wavelet extrema representation originated by Stephane Mallat is a unique framework for low-level and intermediate-level (feature) processing. In this paper, we present a new form of wavelet extrema representation generalizing Mallat's original work. The generalized wavelet extrema representation is a feature-based multiscale representation. For a particular choice of wavelet, our scheme can be interpreted as representing a signal or image by its edges, peaks and valleys at multiple scales. Such a representation is shown to be stable -- the original signal or image can be reconstructed with very good quality. It is further shown that a signal or image can be modeled as piecewise monotonic, with all turning points between monotonic segments given by the wavelet extrema. A new projection operator is introduced to enforce piecewise monotonicity of a signal in its reconstruction. This leads to an enhancement of previously developed algorithms in preventing artifacts in the reconstructed signal.

  10. Using wavelets to solve the Burgers equation: A comparative study

    SciTech Connect

    Schult, R.L.; Wyld, H.W.

    1992-12-15

    The Burgers equation is solved for Reynolds numbers up to approximately 8000 in a representation using coarse-scale scaling functions and a subset of the wavelets at finer scales of resolution. Situations are studied in which the solution develops a shocklike discontinuity. Extra wavelets are kept for several levels of higher resolution in the neighborhood of this discontinuity. Algorithms are presented for the calculation of matrix elements of first- and second-derivative operators and a useful product operation in this truncated wavelet basis. The time evolution of the system is followed using an implicit time-stepping computer code. An adaptive algorithm is presented which allows the code to follow a moving shock front in a system with periodic boundary conditions.

  11. A Wavelet-Based Methodology for Grinding Wheel Condition Monitoring

    SciTech Connect

    Liao, T. W.; Ting, C.F.; Qu, Jun; Blau, Peter Julian

    2007-01-01

    Grinding wheel surface condition changes as more material is removed. This paper presents a wavelet-based methodology for grinding wheel condition monitoring based on acoustic emission (AE) signals. Grinding experiments in creep feed mode were conducted to grind alumina specimens with a resinoid-bonded diamond wheel using two different conditions. During the experiments, AE signals were collected when the wheel was 'sharp' and when the wheel was 'dull'. Discriminant features were then extracted from each raw AE signal segment using the discrete wavelet decomposition procedure. An adaptive genetic clustering algorithm was finally applied to the extracted features in order to distinguish different states of grinding wheel condition. The test results indicate that the proposed methodology can achieve 97% clustering accuracy for the high material removal rate condition, 86.7% for the low material removal rate condition, and 76.7% for the combined grinding conditions if the base wavelet, the decomposition level, and the GA parameters are properly selected.
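
    A hedged sketch of the feature-extraction step is given below (NumPy, SciPy and PyWavelets assumed): relative wavelet energies per decomposition level serve as the feature vector, and plain k-means stands in for the adaptive genetic clustering used in the paper; the synthetic "sharp"/"dull" signal models are invented for illustration only:

        import numpy as np
        import pywt
        from scipy.cluster.vq import kmeans2

        def wavelet_energy_features(segment, wavelet='db4', level=4):
            # Relative energy per decomposition level as the feature vector.
            coeffs = pywt.wavedec(segment, wavelet, level=level)
            energy = np.array([np.sum(c ** 2) for c in coeffs])
            return energy / energy.sum()

        # Invented AE segment models: a "sharp" wheel gives broadband bursts, a
        # "dull" wheel gives stronger low-frequency content.
        rng = np.random.default_rng(0)
        def make_segment(dull):
            s = rng.standard_normal(2048)
            return np.convolve(s, np.ones(32) / 32.0, mode='same') * 4.0 if dull else s

        X = np.array([wavelet_energy_features(make_segment(i >= 30)) for i in range(60)])

        # Plain k-means stands in for the adaptive genetic clustering of the paper.
        centroids, labels = kmeans2(X, 2, minit='++')
        print("first 30 segments:", labels[:30])
        print("last 30 segments: ", labels[30:])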

  12. Mother wavelets for complex wavelet transform derived by Einstein-Podolsky-Rosen entangled state representation.

    PubMed

    Fan, Hong-Yi; Lu, Hai-Liang

    2007-03-01

    The Einstein-Podolsky-Rosen entangled state representation is applied to studying the admissibility condition of mother wavelets for complex wavelet transforms, which leads to a family of new mother wavelets. Mother wavelets thus are classified as the Hermite-Gaussian type for real wavelet transforms and the Laguerre-Gaussian type for the complex case. PMID:17392919

  13. Wavelet periodicity detection algorithms

    NASA Astrophysics Data System (ADS)

    Benedetto, John J.; Pfander, Goetz E.

    1998-10-01

    This paper deals with the analysis of time series with respect to certain known periodicities. In particular, we shall present a fast method aimed at detecting periodic behavior inherent in noisy data. The method is composed of three steps: (1) Non-noisy data are analyzed through spectral and wavelet methods to extract specific periodic patterns of interest. (2) Using these patterns, we construct an optimal piecewise constant wavelet designed to detect the underlying periodicities. (3) We introduce a fast discretized version of the continuous wavelet transform, as well as waveletgram averaging techniques, to detect occurrence and period of these periodicities. The algorithm is formulated to provide real time implementation. Our procedure is generally applicable to detect locally periodic components in signals s which can be modeled as s(t) = A(t)F(h(t)) + N(t) for t in I, where F is a periodic signal, A is a non-negative slowly varying function, h is strictly increasing with h' slowly varying, and N denotes background activity. For example, the method can be applied in the context of epileptic seizure detection. In this case, we try to detect seizure periodics in EEG and ECoG data. In the case of ECoG data, N is essentially 1/f noise. In the case of EEG data and for t in I, N includes noise due to cranial geometry and densities. In both cases N also includes standard low frequency rhythms. Periodicity detection has other applications including ocean wave prediction, cockpit motion sickness prediction, and minefield detection.

  14. Wavelets and spacetime squeeze

    NASA Technical Reports Server (NTRS)

    Han, D.; Kim, Y. S.; Noz, Marilyn E.

    1993-01-01

    It is shown that the wavelet is the natural language for the Lorentz covariant description of localized light waves. A model for covariant superposition is constructed for light waves with different frequencies. It is therefore possible to construct a wave function for light waves carrying a covariant probability interpretation. It is shown that the time-energy uncertainty relation Δt Δω ≈ 1 for light waves is a Lorentz-invariant relation. The connection between photons and localized light waves is examined critically.

  15. An Introduction to Wavelet Theory and Analysis

    SciTech Connect

    Miner, N.E.

    1998-10-01

    This report reviews the history, theory and mathematics of wavelet analysis. Examination of the Fourier Transform and Short-Time Fourier Transform methods provides information about the evolution of the wavelet analysis technique. This overview is intended to provide readers with a basic understanding of wavelet analysis, define common wavelet terminology and describe wavelet analysis algorithms. The most common algorithms for performing efficient, discrete wavelet transforms for signal analysis and inverse discrete wavelet transforms for signal reconstruction are presented. This report is intended to be approachable by non-mathematicians, although a basic understanding of engineering mathematics is necessary.

  16. An optimal wavelet for the detection of surface waves in Marine Sediments

    NASA Astrophysics Data System (ADS)

    Kritski, A.; Vincent, A. P.; Yuen, D. A.

    2004-12-01

    We study seismic surface wave propagation in stratified shallow marine sediment media. Our goal is to predict dynamic properties (shear velocity, attenuation) and physical properties (stiffness, density) of sediments from seismoacoustic records of surface waves propagating along the water-seabed interface. To estimate and invert the propagation parameters of surface waves (group and phase velocity) into shear velocity as a function of distance and depth, we use a multiscale wavelet cross-correlation technique. The standard wavelet transform series has indeed proven very useful for imaging different surface wave modes. However, to achieve better resolution of each mode we need to develop a new wavelet transform that includes optimality and adaptivity based on the seismic data itself. Our main tool for developing such an optimal wavelet is the Karhunen-Loeve decomposition of the data series. This requires two steps: first, we calculate a set of covariance matrices from pairs of time series; second, we estimate the corresponding eigenvalues and eigenfunctions. The calculated eigenfunctions have to be further regularized to obtain a new wavelet series. This new eigenfunction basis has optimal convergence in the least-squares sense, and it is sufficient to take a small number of the eigenfunctions. They are naturally adapted to surface wave mode propagation in terms of scale values: times and periods (frequencies). Our approach makes it possible to decompose highly correlated reference data series into eigenvectors and then to use them to decompose field data records in the frequency and time domains with significant improvement of the image quality. We have processed different seismic records with surface waves. The results were compared with wavelet analysis using standard wavelet kernels ('Morlet', 'Gaussian', 'Mexican hat'). We show that our newly developed adaptive wavelet discriminates better between different surface wave modes propagating

  17. Improved statistical models for limited datasets in uncertainty quantification using stochastic collocation

    SciTech Connect

    Alwan, Aravind; Aluru, N.R.

    2013-12-15

    This paper presents a data-driven framework for performing uncertainty quantification (UQ) by choosing a stochastic model that accurately describes the sources of uncertainty in a system. This model is propagated through an appropriate response surface function that approximates the behavior of this system using stochastic collocation. Given a sample of data describing the uncertainty in the inputs, our goal is to estimate a probability density function (PDF) using the kernel moment matching (KMM) method so that this PDF can be used to accurately reproduce statistics like mean and variance of the response surface function. Instead of constraining the PDF to be optimal for a particular response function, we show that we can use the properties of stochastic collocation to make the estimated PDF optimal for a wide variety of response functions. We contrast this method with other traditional procedures that rely on the Maximum Likelihood approach, like kernel density estimation (KDE) and its adaptive modification (AKDE). We argue that this modified KMM method tries to preserve what is known from the given data and is the better approach when the available data is limited in quantity. We test the performance of these methods for both univariate and multivariate density estimation by sampling random datasets from known PDFs and then measuring the accuracy of the estimated PDFs, using the known PDF as a reference. Comparing the output mean and variance estimated with the empirical moments using the raw data sample as well as the actual moments using the known PDF, we show that the KMM method performs better than KDE and AKDE in predicting these moments with greater accuracy. This improvement in accuracy is also demonstrated for the case of UQ in electrostatic and electrothermomechanical microactuators. We show how our framework results in the accurate computation of statistics in micromechanical systems.

  18. Wavelet networks for face processing

    NASA Astrophysics Data System (ADS)

    Krüger, V.; Sommer, G.

    2002-06-01

    Wavelet networks (WNs) were introduced in 1992 as a combination of artificial neural radial basis function (RBF) networks and wavelet decomposition. Since then, however, WNs have received only a little attention. We believe that the potential of WNs has been generally underestimated. WNs have the advantage that the wavelet coefficients are directly related to the image data through the wavelet transform. In addition, the parameters of the wavelets in the WNs are subject to optimization, which results in a direct relation between the represented function and the optimized wavelets, leading to considerable data reduction (thus making subsequent algorithms much more efficient) as well as to wavelets that can be used as an optimized filter bank. In our study we analyze some WN properties and highlight their advantages for object representation purposes. We then present a series of results of experiments in which we used WNs for face tracking. We exploit the efficiency that is due to data reduction for face recognition and face-pose estimation by applying the optimized-filter-bank principle of the WNs.

  19. Simplex-stochastic collocation method with improved scalability

    NASA Astrophysics Data System (ADS)

    Edeling, W. N.; Dwight, R. P.; Cinnella, P.

    2016-04-01

    The Simplex-Stochastic Collocation (SSC) method is a robust tool used to propagate uncertain input distributions through a computer code. However, it becomes prohibitively expensive for problems with dimensions higher than 5. The main purpose of this paper is to identify the bottlenecks and to improve upon this poor scalability. In order to do so, we propose an alternative interpolation stencil technique based upon the Set-Covering problem, and we integrate the SSC method into the High-Dimensional Model-Reduction framework. In addition, we address the issue of ill-conditioned sample matrices, and we present an analytical map to facilitate uniformly-distributed simplex sampling.

  20. Collocation method for chatter avoidance of general turning operations

    NASA Astrophysics Data System (ADS)

    Urbicain, G.; Olvera, D.; Fernández, A.; Rodríguez, A.; López de Lacalle, L. N.

    2012-04-01

    An accurate prediction of the dynamic stability of a cutting system involves the implementation of tool geometry and cutting conditions on any model used for such a purpose. This study presents a dynamic cutting force model based on the collocation method with Chebyshev polynomials, taking advantage of its ability to consider tool geometry and cutting parameters. In the paper, a simple 1DOF model is used to forecast chatter vibrations due to the workpiece and tool, which are distinguished in separate sections. The proposed model is validated against experimental dynamic tests.

  1. Fourier analysis of finite element preconditioned collocation schemes

    NASA Technical Reports Server (NTRS)

    Deville, Michel O.; Mund, Ernest H.

    1990-01-01

    The spectrum of the iteration operator of some finite element preconditioned Fourier collocation schemes is investigated. The first part of the paper analyses one-dimensional elliptic and hyperbolic model problems and the advection-diffusion equation. Analytical expressions of the eigenvalues are obtained with the use of symbolic computation. The second part of the paper considers the set of one-dimensional differential equations resulting from Fourier analysis (in the transverse direction) of the 2-D Stokes problem. All results agree with previous conclusions on the numerical efficiency of finite element preconditioning schemes.

  2. Why are wavelets so effective

    SciTech Connect

    Resnikoff, H.L.

    1993-01-01

    The theory of compactly supported wavelets is now 4 yr old. In that short period, it has stimulated significant research in pure mathematics; has been the source of new numerical methods for the solution of nonlinear partial differential equations, including Navier-Stokes; and has been applied to digital signal-processing problems, ranging from signal detection and classification to signal compression for speech, audio, images, seismic signals, and sonar. Wavelet channel coding has even been proposed for code division multiple access digital telephony. In each of these applications, prototype wavelet solutions have proved to be competitive with established methods, and in many cases they are already superior.

  3. Peak finding using biorthogonal wavelets

    SciTech Connect

    Tan, C.Y.

    2000-02-01

    The authors show in this paper how they can find the peaks in the input data if the underlying signal is a sum of Lorentzians. In order to project the data into a space of Lorentzian-like functions, they show explicitly the construction of scaling functions which look like Lorentzians. From this construction, they can calculate the biorthogonal filter coefficients for both the analysis and synthesis functions. They then compare their biorthogonal wavelets to the FBI (Federal Bureau of Investigation) wavelets when used for peak finding in noisy data. They show that in this instance their filters perform much better than the FBI wavelets.

  4. The wavelet/scalar quantization compression standard for digital fingerprint images

    SciTech Connect

    Bradley, J.N.; Brislawn, C.M.

    1994-04-01

    A new digital image compression standard has been adopted by the US Federal Bureau of Investigation for use on digitized gray-scale fingerprint images. The algorithm is based on adaptive uniform scalar quantization of a discrete wavelet transform image decomposition and is referred to as the wavelet/scalar quantization standard. The standard produces archival quality images at compression ratios of around 20:1 and will allow the FBI to replace their current database of paper fingerprint cards with digital imagery.

  5. Collocating satellite-based radar and radiometer measurements - methodology and usage examples.

    NASA Astrophysics Data System (ADS)

    Holl, G.; Buehler, S. A.; Rydberg, B.; Jiménez, C.

    2010-05-01

    Collocations between two satellite sensors are occasions where both sensors observe the same place at roughly the same time. We study collocations between the Microwave Humidity Sounder (MHS) onboard NOAA-18 and the Cloud Profiling Radar (CPR) onboard the CloudSat. First, a simple method is presented to obtain those collocations. We present the statistical properties of the collocations, with particular attention to the effects of the differences in footprint size. For 2007, we find approximately two and a half million MHS measurements with CPR pixels close to its centrepoint. Most of those collocations contain at least ten CloudSat pixels and image relatively homogeneous scenes. In the second part, we present three possible applications for the collocations. Firstly, we use the collocations to validate an operational Ice Water Path (IWP) product from MHS measurements, produced by the National Environment Satellite, Data and Information System (NESDIS) in the Microwave Surface and Precipitation Products System (MSPPS). IWP values from the CloudSat CPR are found to be significantly larger than those from the MSPPS. Secondly, we compare the relationship between IWP and MHS channel 5 (190.311 GHz) brightness temperature for two datasets: the collocated dataset, and an artificial dataset. We find a larger variability in the collocated dataset. Finally, we use the collocations to train an Artificial Neural Network and describe how we can use it to develop a new MHS-based IWP product. We also study the effect of adding measurements from the High Resolution Infrared Radiation Sounder (HIRS), channels 8 (11.11 μm) and 11 (8.33 μm). This shows a small improvement in the retrieval quality. The collocations are available for public use.
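
    The "simple method" for finding collocations can be pictured as a distance-and-time matching step. The sketch below is illustrative only: the arrays, the distance and time tolerances, and the haversine helper are assumptions, not the thresholds used in the study.

      import numpy as np

      EARTH_RADIUS_KM = 6371.0

      def haversine_km(lat1, lon1, lat2, lon2):
          # Great-circle distance between points given in degrees.
          lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
          a = (np.sin((lat2 - lat1) / 2) ** 2
               + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
          return 2 * EARTH_RADIUS_KM * np.arcsin(np.sqrt(a))

      def collocate(mhs, cpr, max_km=7.5, max_s=900.0):
          """mhs, cpr: dicts holding 1-D arrays 'lat', 'lon', 'time' (seconds)."""
          pairs = []
          for i in range(len(mhs["lat"])):
              dt = np.abs(cpr["time"] - mhs["time"][i])
              d = haversine_km(mhs["lat"][i], mhs["lon"][i], cpr["lat"], cpr["lon"])
              hits = np.where((dt <= max_s) & (d <= max_km))[0]
              if hits.size:
                  pairs.append((i, hits))   # MHS index with its collocated CPR pixels
          return pairs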

  6. Collocating satellite-based radar and radiometer measurements - methodology and usage examples

    NASA Astrophysics Data System (ADS)

    Holl, G.; Buehler, S. A.; Rydberg, B.; Jiménez, C.

    2010-02-01

    Collocations between two satellite sensors are occasions where both sensors observe the same place at roughly the same time. We study collocations between the Microwave Humidity Sounder (MHS) onboard NOAA-18 and the Cloud Profiling Radar (CPR) onboard the CloudSat CPR. First, a simple method is presented to obtain those collocations and this method is compared with a more complicated approach found in literature. We present the statistical properties of the collocations, with particular attention to the effects of the differences in footprint size. For 2007, we find approximately two and a half million MHS measurements with CPR pixels close to their centrepoints. Most of those collocations contain at least ten CloudSat pixels and image relatively homogeneous scenes. In the second part, we present three possible applications for the collocations. Firstly, we use the collocations to validate an operational Ice Water Path (IWP) product from MHS measurements, produced by the National Environment Satellite, Data and Information System (NESDIS) in the Microwave Surface and Precipitation Products System (MSPPS). IWP values from the CloudSat CPR are found to be significantly larger than those from the MSPPS. Secondly, we compare the relation between IWP and MHS channel 5 (190.311 GHz) brightness temperature for two datasets: the collocated dataset, and an artificial dataset. We find a larger variability in the collocated dataset. Finally, we use the collocations to train an Artificial Neural Network and describe how we can use it to develop a new MHS-based IWP product. We also study the effect of adding measurements from the High Resolution Infrared Radiation Sounder (HIRS), channels 8 (11.11 μm) and 11 (8.33 μm). This shows a small improvement in the retrieval quality. The collocations described in the article are available for public use.

  7. Collocating satellite-based radar and radiometer measurements - methodology and usage examples

    NASA Astrophysics Data System (ADS)

    Holl, G.; Buehler, S. A.; Rydberg, B.; Jiménez, C.

    2010-06-01

    Collocations between two satellite sensors are occasions where both sensors observe the same place at roughly the same time. We study collocations between the Microwave Humidity Sounder (MHS) on-board NOAA-18 and the Cloud Profiling Radar (CPR) on-board CloudSat. First, a simple method is presented to obtain those collocations and this method is compared with a more complicated approach found in literature. We present the statistical properties of the collocations, with particular attention to the effects of the differences in footprint size. For 2007, we find approximately two and a half million MHS measurements with CPR pixels close to their centrepoints. Most of those collocations contain at least ten CloudSat pixels and image relatively homogeneous scenes. In the second part, we present three possible applications for the collocations. Firstly, we use the collocations to validate an operational Ice Water Path (IWP) product from MHS measurements, produced by the National Environment Satellite, Data and Information System (NESDIS) in the Microwave Surface and Precipitation Products System (MSPPS). IWP values from the CloudSat CPR are found to be significantly larger than those from the MSPPS. Secondly, we compare the relation between IWP and MHS channel 5 (190.311 GHz) brightness temperature for two datasets: the collocated dataset, and an artificial dataset. We find a larger variability in the collocated dataset. Finally, we use the collocations to train an Artificial Neural Network and describe how we can use it to develop a new MHS-based IWP product. We also study the effect of adding measurements from the High Resolution Infrared Radiation Sounder (HIRS), channels 8 (11.11 μm) and 11 (8.33 μm). This shows a small improvement in the retrieval quality. The collocations described in the article are available for public use.

  8. Birdsong Denoising Using Wavelets.

    PubMed

    Priyadarshani, Nirosha; Marsland, Stephen; Castro, Isabel; Punchihewa, Amal

    2016-01-01

    Automatic recording of birdsong is becoming the preferred way to monitor and quantify bird populations worldwide. Programmable recorders allow recordings to be obtained at all times of day and year for extended periods of time. Consequently, there is a critical need for robust automated birdsong recognition. One prominent obstacle to achieving this is low signal to noise ratio in unattended recordings. Field recordings are often very noisy: birdsong is only one component in a recording, which also includes noise from the environment (such as wind and rain), other animals (including insects), and human-related activities, as well as noise from the recorder itself. We describe a method of denoising using a combination of the wavelet packet decomposition and band-pass or low-pass filtering, and present experiments that demonstrate an order of magnitude improvement in noise reduction over natural noisy bird recordings. PMID:26812391
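
    A rough sketch of the wavelet-packet-plus-filtering idea using PyWavelets and SciPy is shown below. The wavelet, decomposition level, threshold rule, and pass band are illustrative assumptions, not the settings of the published method.

      import numpy as np
      import pywt
      from scipy.signal import butter, sosfiltfilt

      def denoise_birdsong(x, fs, wavelet="dmey", level=5, band=(1000.0, 8000.0)):
          wp = pywt.WaveletPacket(data=x, wavelet=wavelet, mode="symmetric",
                                  maxlevel=level)
          # Estimate the noise scale from the finest detail coefficients (MAD rule).
          detail = pywt.wavedec(x, wavelet, level=1)[-1]
          sigma = np.median(np.abs(detail)) / 0.6745
          thr = sigma * np.sqrt(2.0 * np.log(len(x)))
          # Soft-threshold every leaf node of the wavelet packet tree.
          for node in wp.get_level(level, order="natural"):
              node.data = pywt.threshold(node.data, thr, mode="soft")
          y = wp.reconstruct(update=False)[:len(x)]
          # Band-pass filter to a typical birdsong band (assumed here).
          sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
          return sosfiltfilt(sos, y)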

  9. Birdsong Denoising Using Wavelets

    PubMed Central

    Priyadarshani, Nirosha; Marsland, Stephen; Castro, Isabel; Punchihewa, Amal

    2016-01-01

    Automatic recording of birdsong is becoming the preferred way to monitor and quantify bird populations worldwide. Programmable recorders allow recordings to be obtained at all times of day and year for extended periods of time. Consequently, there is a critical need for robust automated birdsong recognition. One prominent obstacle to achieving this is low signal to noise ratio in unattended recordings. Field recordings are often very noisy: birdsong is only one component in a recording, which also includes noise from the environment (such as wind and rain), other animals (including insects), and human-related activities, as well as noise from the recorder itself. We describe a method of denoising using a combination of the wavelet packet decomposition and band-pass or low-pass filtering, and present experiments that demonstrate an order of magnitude improvement in noise reduction over natural noisy bird recordings. PMID:26812391

  10. Wavelet theory and its applications

    SciTech Connect

    Faber, V.; Bradley, JJ.; Brislawn, C.; Dougherty, R.; Hawrylycz, M.

    1996-07-01

    This is the final report of a three-year, Laboratory-Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). We investigated the theory of wavelet transforms and their relation to Laboratory applications. The investigators have had considerable success in the past applying wavelet techniques to the numerical solution of optimal control problems for distributed-parameter systems, nonlinear signal estimation, and compression of digital imagery and multidimensional data. Wavelet theory involves ideas from the fields of harmonic analysis, numerical linear algebra, digital signal processing, approximation theory, and numerical analysis, and the new computational tools arising from wavelet theory are proving to be ideal for many Laboratory applications. 10 refs.

  11. A wavelet phase filter for emission tomography

    SciTech Connect

    Olsen, E.T.; Lin, B.

    1995-07-01

    The presence of a high level of noise is a characteristic in some tomographic imaging techniques such as positron emission tomography (PET). Wavelet methods can smooth out noise while preserving significant features of images. Mallat et al. proposed a wavelet-based denoising scheme exploiting wavelet modulus maxima, but the scheme is sensitive to noise. In this study, the authors explore the properties of wavelet phase, with a focus on reconstruction of emission tomography images. Specifically, they show that the wavelet phase of regular Poisson noise under a Haar-type wavelet transform converges in distribution to a random variable uniformly distributed on [0, 2π). They then propose three wavelet-phase-based denoising schemes which exploit this property: edge tracking, local phase variance thresholding, and scale phase variation thresholding. Some numerical results are also presented. The numerical experiments indicate that wavelet phase techniques show promise for wavelet-based denoising methods.

  12. A signal invariant wavelet function selection algorithm.

    PubMed

    Garg, Girisha

    2016-04-01

    This paper addresses the problem of mother wavelet selection for wavelet signal processing in feature extraction and pattern recognition. The problem is formulated as an optimization criterion, where a wavelet library is defined using a set of parameters to find the best mother wavelet function. Analysis of variance is used to estimate the fitness function adopted to evaluate the performance of the wavelet function. A genetic algorithm is exploited to optimize the determination of the best mother wavelet function. For experimental evaluation, solutions for best mother wavelet selection are evaluated on various biomedical signal classification problems, where the solutions of the proposed algorithm are assessed and compared with manual hit-and-trial methods. The results show that the solutions of the automated mother wavelet selection algorithm are consistent with the manual selection of wavelet functions. The algorithm is found to be invariant to the type of signals used for classification. PMID:26253283
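
    A much-simplified, hypothetical sketch of the selection idea is given below: score each candidate mother wavelet by a one-way ANOVA F-statistic of a wavelet-derived feature across signal classes and keep the best-scoring wavelet. The feature (relative detail energy) and the fixed candidate list stand in for the paper's GA-driven search over a parameterized wavelet library.

      import numpy as np
      import pywt
      from scipy.stats import f_oneway

      def detail_energy(signal, wavelet):
          # Fraction of signal energy carried by the detail coefficients.
          coeffs = pywt.wavedec(signal, wavelet, level=4)
          details = np.concatenate(coeffs[1:])
          return np.sum(details ** 2) / np.sum(np.asarray(signal) ** 2)

      def best_wavelet(class_a, class_b, candidates=("db4", "sym5", "coif3", "haar")):
          scores = {}
          for w in candidates:
              fa = [detail_energy(s, w) for s in class_a]
              fb = [detail_energy(s, w) for s in class_b]
              scores[w] = f_oneway(fa, fb).statistic   # larger F = better class separation
          return max(scores, key=scores.get), scores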

  13. Accuracy and speed in computing the Chebyshev collocation derivative

    NASA Technical Reports Server (NTRS)

    Don, Wai-Sun; Solomonoff, Alex

    1991-01-01

    We study several algorithms for computing the Chebyshev spectral derivative and compare their roundoff errors. For a large number of collocation points, the elements of the Chebyshev differentiation matrix, if constructed in the usual way, are not computed accurately. A subtle cause is found to account for the poor accuracy when computing the derivative by the matrix-vector multiplication method. Methods for accurately computing the elements of the matrix are presented, and we find that if the entries of the matrix are computed accurately, the roundoff error of the matrix-vector multiplication is as small as that of the transform-recursion algorithm. Results of CPU time usage are shown for several different algorithms for computing the derivative by the Chebyshev collocation method for a wide variety of two-dimensional grid sizes on both an IBM and a Cray 2 computer. We found that which algorithm is fastest on a particular machine depends not only on the grid size but also on small details of the computer hardware. For most practical grid sizes used in computation, the even-odd decomposition algorithm is found to be faster than the transform-recursion method.
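
    For illustration, the sketch below builds the Chebyshev collocation differentiation matrix on Gauss-Lobatto points with the diagonal obtained by the "negative-sum trick" (each row sums to zero), one of the standard ways to reduce the kind of roundoff problem the abstract discusses. This is not the authors' code; the point count in the usage line is arbitrary.

      import numpy as np

      def cheb_diff_matrix(n):
          if n == 0:
              return np.zeros((1, 1)), np.array([1.0])
          x = np.cos(np.pi * np.arange(n + 1) / n)          # Chebyshev-Lobatto points
          c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
          X = np.tile(x, (n + 1, 1)).T
          dX = X - X.T
          D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))   # off-diagonal entries
          D -= np.diag(D.sum(axis=1))                       # negative-sum trick for the diagonal
          return D, x

      # Usage: differentiate f(x) = exp(x) on 33 points and check the max error.
      D, x = cheb_diff_matrix(32)
      print(np.max(np.abs(D @ np.exp(x) - np.exp(x))))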

  14. Wavelet Transform for Real-Time Detection of Action Potentials in Neural Signals

    PubMed Central

    Quotb, Adam; Bornat, Yannick; Renaud, Sylvie

    2011-01-01

    We present a study on wavelet detection methods of neuronal action potentials (APs). Our final goal is to implement the selected algorithms on custom integrated electronics for on-line processing of neural signals; therefore we take real-time computing as a hard specification and silicon area as a price to pay. Using simulated neural signals including APs, we characterize an efficient wavelet method for AP extraction by evaluating its detection rate and its implementation cost. We compare software implementations for three methods: adaptive threshold, discrete wavelet transform (DWT), and stationary wavelet transform (SWT). We evaluate detection rate and implementation cost for detection functions dynamically comparing a signal with an adaptive threshold proportional to its SD, where the signal is the raw neural signal, respectively: (i) non-processed; (ii) processed by a DWT; (iii) processed by an SWT. We also use different mother wavelets and test different data formats to set an optimal compromise between accuracy and silicon cost. Detection accuracy is evaluated together with false negative and false positive detections. Simulation results show that for on-line AP detection implemented on a configurable digital integrated circuit, APs underneath the noise level can be detected using SWT with a well-selected mother wavelet, combined with an adaptive threshold. PMID:21811455
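
    A simplified sketch of the SWT-plus-adaptive-threshold idea follows: flag samples whose detail-coefficient magnitude exceeds k times the standard deviation of the processed signal. The wavelet, level, factor k, and the synthetic trace are illustrative choices, not the paper's configuration.

      import numpy as np
      import pywt

      def detect_spikes_swt(x, wavelet="sym5", level=3, k=4.0):
          # The stationary wavelet transform needs a length divisible by 2**level.
          n = len(x) - (len(x) % (2 ** level))
          coeffs = pywt.swt(x[:n], wavelet, level=level)   # list of (cA, cD) pairs
          hits = set()
          for _, cd in coeffs:
              thr = k * np.std(cd)                         # adaptive threshold ~ SD
              hits.update(np.where(np.abs(cd) > thr)[0].tolist())
          return np.array(sorted(hits))

      # Usage on a noisy trace containing one synthetic action potential:
      rng = np.random.default_rng(0)
      sig = rng.normal(0.0, 1.0, 1024)
      sig[500:505] += np.array([5.0, 12.0, 20.0, 12.0, 5.0])
      print(detect_spikes_swt(sig))   # indices near the inserted spike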

  15. Heart Disease Detection Using Wavelets

    NASA Astrophysics Data System (ADS)

    González S., A.; Acosta P., J. L.; Sandoval M., M.

    2004-09-01

    We develop a wavelet-based method to obtain standardized gray-scale charts of both healthy hearts and hearts suffering from left ventricular hypertrophy. The hypothesis that early heart malfunction can be detected must be tested by comparing the wavelet analysis of the corresponding ECG with these limit cases. Several important parameters must be taken into account, such as age, sex, and electrolytic changes.

  16. Low-Oscillation Complex Wavelets

    NASA Astrophysics Data System (ADS)

    ADDISON, P. S.; WATSON, J. N.; FENG, T.

    2002-07-01

    In this paper we explore the use of two low-oscillation complex wavelets—Mexican hat and Morlet—as powerful feature detection tools for data analysis. These wavelets, which have been largely ignored to date in the scientific literature, allow for a decomposition which is more “temporal than spectral” in wavelet space. This is shown to be useful for the detection of small amplitude, short duration signal features which are masked by much larger fluctuations. Wavelet transform-based methods employing these wavelets (based on both wavelet ridges and modulus maxima) are developed and applied to sonic echo NDT signals used for the analysis of structural elements. A new mobility scalogram and associated reflectogram is defined for analysis of impulse response characteristics of structural elements and a novel signal compression technique is described in which the pertinent signal information is contained within a few modulus maxima coefficients. As an example of its usefulness, the signal compression method is employed as a pre-processor for a neural network classifier. The authors believe that low oscillation complex wavelets have wide applicability to other practical signal analysis problems. Their possible application to two such problems is discussed briefly—the interrogation of arrhythmic ECG signals and the detection and characterization of coherent structures in turbulent flow fields.
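
    The core computation, a continuous wavelet transform with the Mexican hat and Morlet wavelets followed by a crude modulus-maxima pick at each scale, can be sketched with PyWavelets as below. The test signal, scale range, and the simple argmax "ridge" pick are illustrative assumptions, not the authors' ridge and modulus-maxima algorithms.

      import numpy as np
      import pywt

      t = np.linspace(0.0, 1.0, 2048)
      signal = np.sin(2 * np.pi * 5 * t)                      # large slow fluctuation
      signal[1000:1016] += 0.2 * np.hanning(16)               # small short-duration feature
      signal += 0.02 * np.random.default_rng(0).normal(size=t.size)

      scales = np.arange(1, 64)
      for name in ("mexh", "morl"):                           # Mexican hat and Morlet
          coefs, _ = pywt.cwt(signal, scales, name)
          modulus = np.abs(coefs)
          ridge = modulus.argmax(axis=1)                      # strongest response per scale
          print(name, ridge[:5])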

  17. Wavelet analysis in virtual colonoscopy

    NASA Astrophysics Data System (ADS)

    Greenblum, Sharon; Li, Jiang; Huang, Adam; Summers, Ronald M.

    2006-03-01

    The computed tomographic colonography (CTC) computer aided detection (CAD) program is a new method in development to detect colon polyps in virtual colonoscopy. While high sensitivity is consistently achieved, additional features are desired to increase specificity. In this paper, a wavelet analysis was applied to CTCCAD outputs in an attempt to filter out false positive detections. 52 CTCCAD detection images were obtained using a screen capture application. 26 of these images were real polyps, confirmed by optical colonoscopy and 26 were false positive detections. A discrete wavelet transform of each image was computed with the MATLAB wavelet toolbox using the Haar wavelet at levels 1-5 in the horizontal, vertical and diagonal directions. From the resulting wavelet coefficients at levels 1-3 for all directions, a 72 feature vector was obtained for each image, consisting of descriptive statistics such as mean, variance, skew, and kurtosis at each level and orientation, as well as error statistics based on a linear predictor of neighboring wavelet coefficients. The vectors for each of the 52 images were then run through a support vector machine (SVM) classifier using ten-fold cross-validation training to determine its efficiency in distinguishing polyps from false positives. The SVM results showed 100% sensitivity and 51% specificity in correctly identifying the status of detections. If this technique were added to the filtering process of the CTCCAD polyp detection scheme, the number of false positive results could be reduced significantly.
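
    The descriptive-statistics half of the feature vector described above can be sketched as follows: Haar wavelet coefficients at three levels in the horizontal, vertical, and diagonal orientations, each summarized by mean, variance, skew, and kurtosis. The linear-predictor error statistics that complete the 72-element vector are omitted, and the random test image is purely illustrative.

      import numpy as np
      import pywt
      from scipy.stats import skew, kurtosis

      def wavelet_features(image, levels=3):
          coeffs = pywt.wavedec2(np.asarray(image, dtype=float), "haar", level=levels)
          features = []
          # coeffs[1:] holds the (cH, cV, cD) detail tuples from coarsest to finest level.
          for detail in coeffs[1:]:
              for band in detail:                  # horizontal, vertical, diagonal
                  c = band.ravel()
                  features.extend([c.mean(), c.var(), skew(c), kurtosis(c)])
          return np.array(features)                # 3 levels x 3 bands x 4 statistics

      # Example on a random 64x64 "detection image":
      print(wavelet_features(np.random.rand(64, 64)).shape)   # (36,)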

  18. Wavelet-based polarimetry analysis

    NASA Astrophysics Data System (ADS)

    Ezekiel, Soundararajan; Harrity, Kyle; Farag, Waleed; Alford, Mark; Ferris, David; Blasch, Erik

    2014-06-01

    Wavelet transformation has become a cutting edge and promising approach in the field of image and signal processing. A wavelet is a waveform of effectively limited duration that has an average value of zero. Wavelet analysis is done by breaking up the signal into shifted and scaled versions of the original signal. The key advantage of a wavelet is that it is capable of revealing smaller changes, trends, and breakdown points that are not revealed by other techniques such as Fourier analysis. The phenomenon of polarization has been studied for quite some time and is a very useful tool for target detection and tracking. Long Wave Infrared (LWIR) polarization is beneficial for detecting camouflaged objects and is a useful approach when identifying and distinguishing manmade objects from natural clutter. In addition, the Stokes Polarization Parameters, which are calculated from the 0°, 45°, 90°, 135°, right-circular, and left-circular intensity measurements, provide spatial orientations of target features and suppress natural features. In this paper, we propose a wavelet-based polarimetry analysis (WPA) method to analyze Long Wave Infrared Polarimetry Imagery to discriminate targets such as dismounts and vehicles from background clutter. These parameters can be used for image thresholding and segmentation. Experimental results show the wavelet-based polarimetry analysis is efficient and can be used in a wide range of applications such as change detection, shape extraction, target recognition, and feature-aided tracking.
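
    For reference, the Stokes-parameter computation mentioned above takes the standard form below, built from the six intensity measurements (0°, 45°, 90°, 135°, right- and left-circular). Variable names are illustrative.

      import numpy as np

      def stokes_parameters(i0, i45, i90, i135, i_rc, i_lc):
          s0 = i0 + i90                              # total intensity
          s1 = i0 - i90                              # horizontal vs vertical linear polarization
          s2 = i45 - i135                            # +45 vs -45 degree linear polarization
          s3 = i_rc - i_lc                           # right vs left circular polarization
          dolp = np.sqrt(s1 ** 2 + s2 ** 2) / s0     # degree of linear polarization
          return s0, s1, s2, s3, dolp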

  19. Wavelets in the solution of the volume integral equation: Application to eddy current modeling

    SciTech Connect

    Wang, B.; Moulder, J.C.; Basart, J.P.

    1997-05-01

    There is growing interest in the applications of wavelets as basis functions in solutions of integral equations, especially in the area of electromagnetic field problems. In this article we apply a wavelet expansion to the solution of the three-dimensional eddy current modeling problem based on the volume integral method. Although this method shows promise for eddy current modeling of three-dimensional flaws, it is restricted by the computing power required to solve a large linear system. In this article we show that applying a wavelet basis to the volume integral method can dramatically reduce the size of the linear system to be solved. In our approach, the unknown total field is expressed as a twofold summation of shifted and dilated forms of a properly chosen basis function that is often referred to as the mother wavelet. The wavelet expansion can adaptively fit itself to the total field distribution by distributing the localized functions near the flaw boundary, where the field change is large, and the more spatially diffused functions over the interior of the flaw where the total field tends to be smooth. The approach is thus best suited to modeling large three-dimensional flaws where the large number of elements used in the volume integral method requires extremely large memory space and computational capacity. The feasibility of the wavelet method is discussed in the context of the physical nature of eddy-current modeling problems. Numerical examples using both Haar wavelets and Daubechies compactly supported wavelets with periodic extension are given. The results of the wavelet method are also compared with experimental results from a cylindrical flat-bottom hole in an aluminum plate. These numerical examples and comparisons indicate that the wavelet method can greatly reduce the numerical complexity of the problem with negligible loss in accuracy. © 1997 American Institute of Physics.

  20. Triple collocation: beyond three estimates and separation of structural/non-structural errors

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This study extends the popular triple collocation method for error assessment from three source estimates to an arbitrary number of source estimates, i.e., to solve the “multiple” collocation problem. The error assessment problem is solved through Pythagorean constraints in Hilbert space, which is s...

  1. On the Effect of Gender and Years of Instruction on Iranian EFL Learners' Collocational Competence

    ERIC Educational Resources Information Center

    Ganji, Mansoor

    2012-01-01

    This study investigates Iranian EFL learners' knowledge of lexical collocations at three academic levels: freshmen, sophomores, and juniors. The participants were forty-three English majors doing their B.A. in English Translation studies at Chabahar Maritime University. They took a 50-item fill-in-the-blank test of lexical collocations. The…

  2. Reforming Triple Collocation: Beyond Three Estimates and Separation of Structural/Non-structural Errors

    NASA Astrophysics Data System (ADS)

    Pan, M.; Zhan, W.; Fisher, C. K.; Crow, W. T.; Wood, E. F.

    2014-12-01

    This study extends the popular triple collocation method for error assessment from three source estimates to an arbitrary number of source estimates, i.e., to solve the multiple collocation problem. The error assessment problem is solved through Pythagorean constraints in Hilbert space, which is slightly different from the original inner product solution but easier to extend to multiple collocation cases. The Pythagorean solution is fully equivalent to the original inner product solution for the triple collocation case. The multiple collocation problem turns out to be over-constrained, and a least-squares solution is presented. As the most critical assumption of uncorrelated errors will almost surely fail in multiple collocation problems, we propose to divide the source estimates into structural categories and treat the structural and non-structural errors separately. Such error separation allows the source estimates to have their structural errors fully correlated within the same structural category, which is much more realistic than the original assumption. A new error assessment procedure is developed which performs the collocation twice, once for each type of error, and then sums up the two types of errors. The new procedure is also fully backward compatible with the original triple collocation. Error assessment experiments are carried out for surface soil moisture data from multiple remote sensing models, land surface models, and in situ measurements.
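
    As background, classical triple collocation, the special case this paper generalizes, can be written compactly: with three estimates of the same quantity whose errors are mutually uncorrelated, each error variance follows from covariances of pairwise differences. The sketch below is an illustrative implementation (ignoring calibration biases), not the authors' multiple-collocation procedure.

      import numpy as np

      def triple_collocation(x, y, z):
          x, y, z = (a - np.mean(a) for a in (x, y, z))   # work with anomalies
          var_ex = np.mean((x - y) * (x - z))
          var_ey = np.mean((y - x) * (y - z))
          var_ez = np.mean((z - x) * (z - y))
          return var_ex, var_ey, var_ez

      # Synthetic check: one "truth" observed by three noisy systems.
      rng = np.random.default_rng(1)
      truth = rng.normal(size=20000)
      x = truth + rng.normal(scale=0.1, size=truth.size)
      y = truth + rng.normal(scale=0.2, size=truth.size)
      z = truth + rng.normal(scale=0.3, size=truth.size)
      print(triple_collocation(x, y, z))   # approximately (0.01, 0.04, 0.09)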

  3. Investigating the Viability of a Collocation List for Students of English for Academic Purposes

    ERIC Educational Resources Information Center

    Durrant, Philip

    2009-01-01

    A number of researchers are currently attempting to create listings of important collocations for students of EAP. However, so far these attempts have (1) failed to include positionally-variable collocations, and (2) not taken sufficient account of variation across disciplines. The present paper describes the creation of one listing of…

  4. Symmetrical and Asymmetrical Scaffolding of L2 Collocations in the Context of Concordancing

    ERIC Educational Resources Information Center

    Rezaee, Abbas Ali; Marefat, Hamideh; Saeedakhtar, Afsaneh

    2015-01-01

    Collocational competence is recognized to be integral to native-like L2 performance, and concordancing can be of assistance in gaining this competence. This study reports on an investigation into the effect of symmetrical and asymmetrical scaffolding on the collocational competence of Iranian intermediate learners of English in the context of…

  5. Going beyond Patterns: Involving Cognitive Analysis in the Learning of Collocations

    ERIC Educational Resources Information Center

    Liu, Dilin

    2010-01-01

    Since the late 1980s, collocations have received increasing attention in applied linguistics, especially language teaching, as is evidenced by the many publications on the topic. These works fall roughly into two lines of research (a) those focusing on the identification and use of collocations (Benson, 1989; Hunston, 2002; Hunston & Francis,…

  6. Collocational Links in the L2 Mental Lexicon and the Influence of L1 Intralexical Knowledge

    ERIC Educational Resources Information Center

    Wolter, Brent; Gyllstad, Henrik

    2011-01-01

    This article assesses the influence of L1 intralexical knowledge on the formation of L2 intralexical collocations. Two tests, a primed lexical decision task (LDT) and a test of receptive collocational knowledge, were administered to a group of non-native speakers (NNSs) (L1 Swedish), with native speakers (NSs) of English serving as controls on the…

  7. Corpora and Collocations in Chinese-English Dictionaries for Chinese Users

    ERIC Educational Resources Information Center

    Xia, Lixin

    2015-01-01

    The paper identifies the major problems of the Chinese-English dictionary in representing collocational information after an extensive survey of nine dictionaries popular among Chinese users. It is found that the Chinese-English dictionary only provides the collocation types of "v+n" and "v+n," but completely ignores those of…

  8. Towards a Learner Need-Oriented Second Language Collocation Writing Assistant

    ERIC Educational Resources Information Center

    Ramos, Margarita Alonso; Carlini, Roberto; Codina-Filbà, Joan; Orol, Ana; Vincze, Orsolya; Wanner, Leo

    2015-01-01

    The importance of collocations, i.e. idiosyncratic binary word co-occurrences in the context of second language learning has been repeatedly emphasized by scholars working in the field. Some went even so far as to argue that "vocabulary learning is collocation learning" (Hausmann, 1984, p. 395). Empirical studies confirm this…

  9. English Collocation Learning through Corpus Data: On-Line Concordance and Statistical Information

    ERIC Educational Resources Information Center

    Ohtake, Hiroshi; Fujita, Nobuyuki; Kawamoto, Takeshi; Morren, Brian; Ugawa, Yoshihiro; Kaneko, Shuji

    2012-01-01

    We developed an English Collocations On Demand system offering on-line corpus and concordance information to help Japanese researchers acquire a better command of English collocation patterns. The Life Science Dictionary Corpus consists of approximately 90,000,000 words collected from life science related research papers published in academic…

  10. Cross-Linguistic Influence: Its Impact on L2 English Collocation Production

    ERIC Educational Resources Information Center

    Phoocharoensil, Supakorn

    2013-01-01

    This research study investigated the influence of learners' mother tongue on their acquisition of English collocations. Having drawn the linguistic data from two groups of Thai EFL learners differing in English proficiency level, the researcher found that the native language (L1) plays a significant role in the participants' collocation learning…

  11. Study on the Causes and Countermeasures of the Lexical Collocation Mistakes in College English

    ERIC Educational Resources Information Center

    Yan, Hansheng

    2010-01-01

    Lexical collocation in English is an important topic in linguistic theory and a research area receiving increasing emphasis in English teaching practice in China. Learners' collocational ability determines whether they can use authentic English effectively in communication. In many years' English teaching practice,…

  12. Collocation, Semantic Prosody, and Near Synonymy: A Cross-Linguistic Perspective

    ERIC Educational Resources Information Center

    Xiao, Richard; McEnery, Tony

    2006-01-01

    This paper explores the collocational behaviour and semantic prosody of near synonyms from a cross-linguistic perspective. The importance of these concepts to language learning is well recognized. Yet while collocation and semantic prosody have recently attracted much interest from researchers studying the English language, there has been little…

  13. ABCs and fourth-order spline collocation for the solution of two-point boundary value problems over an infinite domain

    NASA Astrophysics Data System (ADS)

    Khoury, S.; Ibdah, H.; Sayfy, A.

    2013-10-01

    A mixed approach, based on cubic B-spline collocation and asymptotic boundary conditions (ABCs), is presented for the numerical solution of an extended class of two-point linear boundary value problems (BVPs) over an infinite interval as well as a system of BVPs. The condition at infinity is reduced to an asymptotic boundary condition that approaches the required value at infinity over a large finite interval. The resulting problem is handled using an adaptive spline collocation approach constructed over uniform meshes. The rate of convergence is verified numerically to be of fourth-order. The efficiency and applicability of the method are demonstrated by applying the strategy to a number of examples. The numerical solutions are compared with existing analytical solutions.

  14. Tests for Wavelets as a Basis Set

    NASA Astrophysics Data System (ADS)

    Baker, Thomas; Evenbly, Glen; White, Steven

    A wavelet transformation is a special type of filter usually reserved for image processing and other applications. We develop metrics to evaluate wavelets for general problems on test one-dimensional systems. The goal is to eventually use a wavelet basis in electronic structure calculations. We compare a variety of orthogonal wavelets such as coiflets, symlets, and Daubechies wavelets. We also evaluate a new type of orthogonal wavelet with dilation factor three, which is both symmetric and compact in real space. This work was supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences under Award #DE-SC008696.

  15. General inversion formulas for wavelet transforms

    NASA Astrophysics Data System (ADS)

    Holschneider, Matthias

    1993-09-01

    This article is the continuation of a series of articles about group theory and wavelet analysis [A. Grossmann, J. Morlet, and T. Paul, J. Math. Phys. 26, 2473 (1985)]. As is well known in the case of the affine group, the reconstruction wavelet and the analyzing wavelet need not be identical. In this article it is shown that this holds for arbitrary groups. In addition it is shown that even for nonadmissible analyzing wavelets the wavelet transform may be inverted. Accordingly the image of the wavelet transform can be characterized by many different reproducing kernels.

  16. A frequency dependent preconditioned wavelet method for atmospheric tomography

    NASA Astrophysics Data System (ADS)

    Yudytskiy, Mykhaylo; Helin, Tapio; Ramlau, Ronny

    2013-12-01

    Atmospheric tomography, i.e. the reconstruction of the turbulence in the atmosphere, is a main task for the adaptive optics systems of the next generation telescopes. For extremely large telescopes, such as the European Extremely Large Telescope, this problem becomes overly complex and an efficient algorithm is needed to reduce numerical costs. Recently, a conjugate gradient method based on wavelet parametrization of turbulence layers was introduced [5]. An iterative algorithm can only be numerically efficient when the number of iterations required for a sufficient reconstruction is low. A way to achieve this is to design an efficient preconditioner. In this paper we propose a new frequency-dependent preconditioner for the wavelet method. In the context of a multi conjugate adaptive optics (MCAO) system simulated on the official end-to-end simulation tool OCTOPUS of the European Southern Observatory we demonstrate robustness and speed of the preconditioned algorithm. We show that three iterations are sufficient for a good reconstruction.

  17. A comparison of boundary and global collocation solutions for K(I) and CMOD calibration functions

    SciTech Connect

    Sanford, R.J.; Kirk, M.T. (U.S. Navy, David W. Taylor Naval Ship Research and Development Center, Annapolis, MD)

    1991-03-01

    Global and boundary collocation solutions for K(I), CMOD, and the full-field stress patterns of a single-edge notched tension specimen were compared to determine the accuracy of each technique and the utility of each for determining solutions for the short and the deep crack case. It was demonstrated that inclusion of internal stress conditions in the collocation, i.e., performing a global rather than a boundary collocation solution, expands the range of crack lengths over which accurate results can be obtained. In particular, the global collocation approach provided accurate results for crack lengths between 10 percent and 80 percent of the specimen width for a typical specimen geometry. Comparable accuracy for boundary collocation was found only for crack lengths between 20 percent and 60 percent of the specimen width. 27 refs.

  18. Improved Compression of Wavelet-Transformed Images

    NASA Technical Reports Server (NTRS)

    Kiely, Aaron; Klimesh, Matthew

    2005-01-01

    A recently developed data-compression method is an adaptive technique for coding quantized wavelet-transformed data, nominally as part of a complete image-data compressor. Unlike some other approaches, this method admits a simple implementation and does not rely on the use of large code tables. A common data compression approach, particularly for images, is to perform a wavelet transform on the input data, and then losslessly compress a quantized version of the wavelet-transformed data. Under this compression approach, it is common for the quantized data to include long sequences, or runs, of zeros. The new coding method uses prefix-free codes for the nonnegative integers as part of an adaptive algorithm for compressing the quantized wavelet-transformed data by run-length coding. In the form of run-length coding used here, the data sequence to be encoded is parsed into strings consisting of some number (possibly 0) of zeros, followed by a nonzero value. The nonzero value and the length of the run of zeros are encoded. For a data stream that contains a sufficiently high frequency of zeros, this method is known to be more effective than using a single variable-length code to encode each symbol. The specific prefix-free codes used are from two classes of variable-length codes: a class known as Golomb codes, and a class known as exponential-Golomb codes. The codes within each class are indexed by a single integer parameter. The present method uses exponential-Golomb codes for the lengths of the runs of zeros, and Golomb codes for the nonzero values. The code parameters within each code class are determined adaptively on the fly as compression proceeds, on the basis of statistics from previously encoded values. In particular, a simple adaptive method has been devised to select the parameter identifying the particular exponential-Golomb code to use. The method tracks the average number of bits used to encode recent run lengths, and takes the difference between this average
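
    The two code families named above are standard; a hedged sketch of encoders for both is given below. Parameter choices and the bit-string representation are purely illustrative and are not the adaptive parameter-selection rule of the method itself.

      def golomb_encode(n, m):
          """Golomb code of nonnegative integer n with parameter m (m a power of two gives a Rice code)."""
          q, r = divmod(n, m)
          bits = "1" * q + "0"                    # unary quotient, terminated by a 0
          k = (m - 1).bit_length()                # bits needed for the remainder
          if m & (m - 1) == 0:                    # power of two: plain k-bit remainder
              bits += format(r, f"0{k}b") if k else ""
          else:                                   # general m: truncated binary code
              cutoff = (1 << k) - m
              bits += format(r, f"0{k-1}b") if r < cutoff else format(r + cutoff, f"0{k}b")
          return bits

      def exp_golomb_encode(n, k=0):
          """Order-k exponential-Golomb code of nonnegative integer n."""
          v = n + (1 << k)
          prefix = "0" * (v.bit_length() - 1 - k)  # leading zeros encode the length
          return prefix + format(v, "b")

      print(golomb_encode(9, 4), exp_golomb_encode(9))   # '11001' '0001010'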

  19. A corroborative study on improving pitch determination by time-frequency cepstrum decomposition using wavelets.

    PubMed

    Bahja, Fadoua; Di Martino, Joseph; Ibn Elhaj, Elhassan; Aboutajdine, Driss

    2016-01-01

    A new wavelet-based method is presented in this work for estimating and tracking the pitch period. The main idea of the proposed new approach consists of extracting the cepstrum excitation signal and applying to it a wavelet transform whose resulting approximation coefficients are smoothed, for a better pitch determination. Although the principle of the algorithms proposed has already been considered previously, the novelty of our methods lies in the use of powerful wavelet transforms well adapted to pitch determination. The wavelet transforms considered in this article are the discrete wavelet transform and the dual tree complex wavelet transform. This article, by all the provided experimental results, corroborates the idea of decomposing the cepstrum excitation by using wavelet transforms for improving pitch detection. Another interesting point of this article lies in using a simple but efficient voicing decision (which actually improves a similar voicing criterion we proposed in a preceding published study) which on one hand respects the real-time process with low latency and on the other hand allows obtaining low classification errors. The accuracy of the proposed pitch tracking algorithms has been evaluated using the international Bagshaw and the Keele databases which include male and female speakers. Our various experimental results demonstrate that the proposed methods provide important performance improvements when compared with previously published pitch determination algorithms. PMID:27213131

  20. Group-normalized wavelet packet signal processing

    NASA Astrophysics Data System (ADS)

    Shi, Zhuoer; Bao, Zheng

    1997-04-01

    Since traditional wavelet and wavelet packet coefficients do not exactly represent the strength of signal components at each time(space)-frequency tile, the group-normalized wavelet packet transform (GNWPT) is presented for nonlinear signal filtering and extraction from clutter or noise, together with a space(time)-frequency masking technique. An extended F-entropy improves the performance of the GNWPT. For perception-based imaging, soft-logic masking is emphasized to remove aliasing while preserving edges. Lawton's method for constructing complex-valued wavelets is extended to generate complex-valued, compactly supported wavelet packets for radar signal extraction. These wavelet packets are symmetric and unitarily orthogonal. Well-suited wavelet packets are chosen based on analysis of their time-frequency characteristics. For real-valued signal processing, such as images and ECG signals, compactly supported spline or biorthogonal wavelet packets are preferred for their denoising and filtering qualities.

  1. A Mellin transform approach to wavelet analysis

    NASA Astrophysics Data System (ADS)

    Alotta, Gioacchino; Di Paola, Mario; Failla, Giuseppe

    2015-11-01

    The paper proposes a fractional calculus approach to continuous wavelet analysis. Upon introducing a Mellin transform expression of the mother wavelet, it is shown that the wavelet transform of an arbitrary function f(t) can be given a fractional representation involving a suitable number of Riesz integrals of f(t), and corresponding fractional moments of the mother wavelet. This result serves as a basis for an original approach to wavelet analysis of linear systems under arbitrary excitations. In particular, using the proposed fractional representation for the wavelet transform of the excitation, it is found that the wavelet transform of the response can readily be computed by a Mellin transform expression, with fractional moments obtained from a set of algebraic equations whose coefficient matrix applies for any scale a of the wavelet transform. Robustness and computational efficiency of the proposed approach are demonstrated in the paper.

  2. Local validation of EU-DEM using Least Squares Collocation

    NASA Astrophysics Data System (ADS)

    Ampatzidis, Dimitrios; Mouratidis, Antonios; Gruber, Christian; Kampouris, Vassilios

    2016-04-01

    In the present study we evaluate the European Digital Elevation Model (EU-DEM) in a limited area covering a few kilometers. We compare EU-DEM-derived vertical information against orthometric heights obtained by classical trigonometric leveling for an area located in Northern Greece. We apply several statistical tests and we initially fit a surface model in order to quantify the existing biases and outliers. Finally, we implement a methodology for predicting orthometric heights, using Least Squares Collocation on the residuals remaining after the fitted surface is applied. Our results, taking into account cross-validation points, reveal a local consistency between EU-DEM and official heights which is better than 1.4 meters.
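
    The least-squares collocation prediction step applied to the residual heights can be sketched as below: predict the residual at new points from observed residuals using an assumed covariance function. The exponential covariance model, its parameters, and the sample coordinates are illustrative assumptions, not those used in the study.

      import numpy as np

      def lsc_predict(obs_xy, obs_res, new_xy, c0=1.0, corr_len=500.0, noise=0.01):
          def cov(a, b):
              # Exponential covariance model as a function of planar distance.
              d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
              return c0 * np.exp(-d / corr_len)
          c_pp = cov(obs_xy, obs_xy) + noise * np.eye(len(obs_xy))   # observation covariance + noise
          c_sp = cov(new_xy, obs_xy)                                  # signal-observation covariance
          return c_sp @ np.linalg.solve(c_pp, obs_res)

      # Usage with hypothetical coordinates (metres) and residuals (metres):
      obs_xy = np.array([[0.0, 0.0], [400.0, 100.0], [800.0, 300.0]])
      obs_res = np.array([0.6, -0.2, 0.9])
      print(lsc_predict(obs_xy, obs_res, np.array([[300.0, 200.0]])))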

  3. Optimization of Low-Thrust Spiral Trajectories by Collocation

    NASA Technical Reports Server (NTRS)

    Falck, Robert D.; Dankanich, John W.

    2012-01-01

    As NASA examines potential missions in the post-space-shuttle era, there has been a renewed interest in low-thrust electric propulsion for both crewed and uncrewed missions. While much progress has been made in the field of software for the optimization of low-thrust trajectories, many of the tools utilize higher-fidelity methods which, while excellent, result in extremely high run-times and poor convergence when dealing with planetocentric spiraling trajectories deep within a gravity well. Conversely, faster tools like SEPSPOT provide a reasonable solution but typically fail to account for other forces such as third-body gravitation, aerodynamic drag, and solar radiation pressure. SEPSPOT is further constrained by its solution method, which may require a very good guess to yield a converged optimal solution. Here the authors have developed an approach using collocation intended to provide solution times comparable to those given by SEPSPOT while allowing for greater robustness and extensible force models.

  4. A Jacobi collocation approximation for nonlinear coupled viscous Burgers' equation

    NASA Astrophysics Data System (ADS)

    Doha, Eid; Bhrawy, Ali; Abdelkawy, Mohamed; Hafez, Ramy

    2014-02-01

    This article presents a numerical approximation of the initial-boundary nonlinear coupled viscous Burgers' equation based on spectral methods. A Jacobi-Gauss-Lobatto collocation (J-GL-C) scheme in combination with the implicit Runge-Kutta-Nyström (IRKN) scheme is employed to obtain highly accurate approximations to the mentioned problem. This J-GL-C method, based on Jacobi polynomials and Gauss-Lobatto quadrature integration, reduces solving the nonlinear coupled viscous Burgers' equation to a system of nonlinear ordinary differential equations, which is far easier to solve. The given examples show, by selecting relatively few J-GL-C points, the accuracy of the approximations and the utility of the approach over other analytical or numerical methods. The illustrative examples demonstrate the accuracy, efficiency, and versatility of the proposed algorithm.

  5. Tensorial Basis Spline Collocation Method for Poisson's Equation

    NASA Astrophysics Data System (ADS)

    Plagne, Laurent; Berthou, Jean-Yves

    2000-01-01

    This paper aims to describe the tensorial basis spline collocation method applied to Poisson's equation. In the case of a localized 3D charge distribution in vacuum, this direct method based on a tensorial decomposition of the differential operator is shown to be competitive with both iterative BSCM and FFT-based methods. We emphasize the O(h⁴) and O(h⁶) convergence of TBSCM for cubic and quintic splines, respectively. We describe the implementation of this method on a distributed-memory parallel machine. Performance measurements on a Cray T3E are reported. Our code exhibits high performance and good scalability: as an example, a 27 Gflops performance is obtained when solving Poisson's equation on a 256³ non-uniform 3D Cartesian mesh by using 128 T3E-750 processors. This represents 215 Mflops per processor.

  6. Optimal spacecraft attitude control using collocation and nonlinear programming

    NASA Astrophysics Data System (ADS)

    Herman, A. L.; Conway, B. A.

    1992-10-01

    Direct collocation with nonlinear programming (DCNLP) is employed to find the optimal open-loop control histories for detumbling a disabled satellite. The controls are torques and forces applied to the docking arm and joint and torques applied about the body axes of the OMV. Solutions are obtained for cases in which various constraints are placed on the controls and in which the number of controls is reduced or increased from that considered in Conway and Widhalm (1986). DCNLP works well when applied to the optimal control problem of satellite attitude control. The formulation is straightforward and produces good results in a relatively small amount of time on a Cray X/MP with no a priori information about the optimal solution. The addition of joint acceleration to the controls significantly reduces the control magnitudes and optimal cost. In all cases, the torques and accelerations are modest and the optimal cost is very modest.

  7. Directional dual-tree complex wavelet packet transform.

    PubMed

    Serbes, Gorkem; Aydin, Nizamettin; Gulcur, Halil Ozcan

    2013-01-01

    Doppler ultrasound systems, which are widely used in the detection of cardiovascular disorders, have quadrature-format outputs. Various types of algorithms have been described in the literature to process quadrature Doppler signals (QDS), such as the phasing filter technique (PFT), the fast Fourier transform method, the frequency domain Hilbert transform method, and the complex continuous wavelet transform. However, for the discrete wavelet transform (DWT), which has become a common method for processing QDSs, there was no direct method to recover flow direction from quadrature signals. Traditionally, to process QDSs with the DWT, directional signals first have to be extracted and then two DWTs must be applied. Although there exists a complex DWT algorithm called the dual tree complex discrete wavelet transform (DTCWT), it does not provide directional signal decoding during analysis because of unwanted energy leaking into its negative frequency bands. The modified DTCWT, which is a combination of PFT and DTCWT, has the capability of extracting directional information while decomposing QDSs into different frequency bands, but it uses an additional Hilbert transform filter and increases the computational complexity of the whole transform. The discrete wavelet packet transform (DWPT), which is a generalization of the ordinary DWT allowing subband analysis without the constraint of dyadic decomposition, can perform an adaptive decomposition of the frequency axis. In this study, a novel complex DWPT, which maps directional information while processing QDSs, is proposed. The success of the proposed method is measured using simulated quadrature signals. PMID:24110370

  8. Wavelet correlations in the p model

    SciTech Connect

    Greiner, M. (Institut fuer Theoretische Physik, Justus Liebig Universitaet, 35392 Giessen); Lipa, P.; Carruthers, P.

    1995-03-01

    We suggest applying the concept of wavelet transforms to the study of correlations in multiparticle physics. Both the usual correlation functions as well as the wavelet transformed ones are calculated for the p model, which is a simple but tractable random cascade model. For this model, the wavelet transform decouples correlations between fluctuations defined on different scales. The advantageous properties of factorial moments are also shared by properly defined factorial wavelet correlations.

  9. On-line handwriting analysis using wavelets

    NASA Astrophysics Data System (ADS)

    Srikantan, Geetha; Srihari, Rohini K.

    1995-09-01

    Speech and Handwriting interfaces to computing devices have received increased attention recently as alternate human-computer media. Automatic recognition of unconstrained handwritten text must be provided as a capability in handwriting computer interfaces. Variation in writing styles of a single writer at different times and between multiple writers makes unconstrained on-line handwriting recognition a challenging task. On-line handwriting is recorded as a sequence of coordinates as the writer's pen moves along the recording device. Isolated character and word recognition have been addressed by several researchers. More recently, attention has been focussed on the recognition of unconstrained text streams. The dynamic changes in handwriting styles observed in everyday use requires development of methods that are adaptive to local variations. We present a novel application of wavelet-based analysis of pen position, velocity and acceleration time-sequences for segmentation and recognition of text components.

  10. Wavelet approach to accelerator problems. 2: Metaplectic wavelets

    SciTech Connect

    Fedorova, A.; Zeitlin, M.; Parsa, Z.

    1997-05-01

    This is the second part of a series of talks in which the authors present applications of wavelet analysis to polynomial approximations for a number of accelerator physics problems. According to the orbit method, and by using a construction from geometric quantization theory, they construct the symplectic and Poisson structures associated with generalized wavelets by using the metaplectic structure and the corresponding polarization. The key point is the consideration of the semidirect product of the Heisenberg group and the metaplectic group as a subgroup of the automorphism group of the dual of the symplectic space, which consists of elements acting by affine transformations.

  11. Two-dimensional quantum propagation using wavelets in space and time

    SciTech Connect

    Sparks, Douglas K.; Johnson, Bruce R.

    2006-09-21

    A recent method for solving the time-dependent Schroedinger equation has been developed using expansions in compact-support wavelet bases in both space and time [H. Wang et al., J. Chem. Phys. 121, 7647 (2004)]. This method represents an exact quantum mixed time-frequency approach, with special initial temporal wavelets used to solve the initial value problem. The present work is a first extension of the method to multiple spatial dimensions applied to a simple two-dimensional (2D) coupled anharmonic oscillator problem. A wavelet-discretized version of norm preservation for time-independent Hamiltonians discovered in the earlier one-dimensional investigation is verified to hold as well in 2D and, by implication, in higher numbers of spatial dimensions. The wavelet bases are not restricted to rectangular domains, a fact which is exploited here in a 2D adaptive version of the algorithm.

  12. Wavelet Representation of Contour Sets

    SciTech Connect

    Bertram, M; Laney, D E; Duchaineau, M A; Hansen, C D; Hamann, B; Joy, K I

    2001-07-19

    We present a new wavelet compression and multiresolution modeling approach for sets of contours (level sets). In contrast to previous wavelet schemes, our algorithm creates a parametrization of a scalar field induced by its contours and compactly stores this parametrization rather than function values sampled on a regular grid. Our representation is based on hierarchical polygon meshes with subdivision connectivity whose vertices are transformed into wavelet coefficients. From this sparse set of coefficients, every set of contours can be efficiently reconstructed at multiple levels of resolution. When applying lossy compression, introducing high quantization errors, our method preserves contour topology, in contrast to compression methods applied to the corresponding field function. We provide numerical results for scalar fields defined on planar domains. Our approach generalizes to volumetric domains, time-varying contours, and level sets of vector fields.

  13. A parallel splitting wavelet method for 2D conservation laws

    NASA Astrophysics Data System (ADS)

    Schmidt, Alex A.; Kozakevicius, Alice J.; Jakobsson, Stefan

    2016-06-01

    The current work presents a parallel formulation using the MPI protocol for an adaptive high order finite difference scheme to solve 2D conservation laws. Adaptivity is achieved at each time iteration by the application of an interpolating wavelet transform in each space dimension. High order approximations for the numerical fluxes are computed by ENO and WENO schemes. Since time evolution is made by a TVD Runge-Kutta space splitting scheme, the problem is naturally suitable for parallelization. Numerical simulations and speedup results are presented for Euler equations in gas dynamics problems.
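    A minimal sketch of the grid-adaptation step, assuming a linear interpolating wavelet transform (the details below are illustrative, not the authors' implementation): odd-indexed samples are predicted from their even-indexed neighbours, and points whose detail coefficient exceeds a threshold are retained in the adaptive grid.

    ```python
    import numpy as np

    def interpolating_details(f):
        """One level of a (linear) interpolating wavelet transform on a 1-D array.

        The detail at each odd index is the difference between the sample and the
        average of its two even-indexed neighbours; large details mark regions
        that need fine resolution.
        """
        even = f[0::2]
        odd = f[1::2]
        prediction = 0.5 * (even[:-1] + even[1:])      # linear interpolation from even neighbours
        return odd[: prediction.size] - prediction

    def adaptive_mask(f, eps=1e-3):
        """Boolean mask of grid points retained after wavelet thresholding."""
        d = interpolating_details(f)
        keep = np.ones(f.size, dtype=bool)
        # odd points with small detail can be discarded from the adaptive grid
        keep[1:1 + 2 * d.size:2] = np.abs(d) > eps
        return keep

    x = np.linspace(-1, 1, 129)
    f = np.tanh(50 * x)                                # sharp front at x = 0
    mask = adaptive_mask(f, eps=1e-3)
    print(f"kept {mask.sum()} of {mask.size} points")
    ```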

  14. WENO wavelet method for a hyperbolic model of two-phase flow in conservative form

    NASA Astrophysics Data System (ADS)

    Zeidan, Dia; Kozakevicius, Alice J.; Schmidt, Alex A.; Jakobsson, Stefan

    2016-06-01

    The current work presents a WENO wavelet adaptive method for solving multiphase flow problems. The grid adaptivity in each time step is obtained by the application of a thresholded interpolating wavelet transform, which allows the construction of a small yet effective sparse point representation of the solution. The spatial operator is solved by the Lax-Friedrich flux splitting approach in which the flux derivatives are approximated by the WENO scheme. Hyperbolic models of two-phase flow in conservative form are efficiently solved since shocks and rarefaction waves are precisely captured by the chosen methodology. Substantial computational gains are obtained through the grid reduction feature while maintaining the quality of the solutions.

  15. Recent advances in wavelet technology

    NASA Technical Reports Server (NTRS)

    Wells, R. O., Jr.

    1994-01-01

    Wavelet research has been developing rapidly over the past five years, and in particular in the academic world there has been significant activity at numerous universities. In the industrial world, there have been developments at Aware, Inc., Lockheed, Martin-Marietta, TRW, Kodak, Exxon, and many others. The government agencies supporting wavelet research and development include ARPA, ONR, AFOSR, NASA, and many other agencies. The literature of the past five years includes a book indexing citations on this subject from the past decade, containing over 1,000 references and abstracts.

  16. Application of adaptive subband coding for noisy bandlimited ECG signal processing

    NASA Astrophysics Data System (ADS)

    Aditya, Krishna; Chu, Chee-Hung H.; Szu, Harold H.

    1996-03-01

    An approach to impulsive noise suppression and background normalization of digitized bandlimited electrocardiogram signals is presented. This approach uses adaptive wavelet filters that incorporate the band-limited a priori information and the shape information of a signal to decompose the data. Empirical results show that the new algorithm has good performance in wideband impulsive noise suppression and background normalization for subsequent wave detection, when compared with subband coding using Daubechies' D4 wavelet without the bandlimited adaptive wavelet transform.

  17. An adaptive pseudospectral method for discontinuous problems

    NASA Technical Reports Server (NTRS)

    Augenbaum, Jeffrey M.

    1988-01-01

    The accuracy of adaptively chosen, mapped polynomial approximations is studied for functions with steep gradients or discontinuities. It is shown that, for steep gradient functions, one can obtain spectral accuracy in the original coordinate system by using polynomial approximations in a transformed coordinate system with substantially fewer collocation points than are necessary using polynomial expansion directly in the original, physical, coordinate system. It is also shown that one can avoid the usual Gibbs oscillation associated with steep gradient solutions of hyperbolic PDEs by approximation in suitably chosen coordinate systems. Continuous, high gradient solutions are computed with spectral accuracy (as measured in the physical coordinate system). Discontinuous solutions associated with nonlinear hyperbolic equations can be accurately computed by using an artificial viscosity chosen to smooth out the solution in the mapped, computational domain. Thus, shocks can be effectively resolved on a scale that is subgrid to the resolution available with collocation only in the physical domain. Examples with Fourier and Chebyshev collocation are given.
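    The coordinate-mapping idea can be sketched as follows (a hedged example with an assumed arctangent-type map; the paper's specific mappings are not reproduced): Chebyshev collocation points are transformed so that they cluster near the location of a steep gradient.

    ```python
    import numpy as np

    def chebyshev_points(n):
        """Chebyshev-Gauss-Lobatto points on [-1, 1]."""
        return np.cos(np.pi * np.arange(n + 1) / n)

    def mapped_points(n, x0=0.0, alpha=0.05):
        """Cluster collocation points near x0 via a tan/arctan coordinate map.

        Small alpha concentrates resolution around x0, where a steep gradient
        (or smoothed shock) is expected in the physical coordinate.
        """
        s = chebyshev_points(n)                      # computational coordinate in [-1, 1]
        a = np.arctan((-1.0 - x0) / alpha)
        b = np.arctan((1.0 - x0) / alpha)
        return x0 + alpha * np.tan(a + 0.5 * (s + 1.0) * (b - a))

    x = mapped_points(32, x0=0.3, alpha=0.02)
    spacing_near_front = np.min(np.abs(np.diff(np.sort(x))))
    print(f"finest spacing near x0: {spacing_near_front:.2e}")
    ```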

  18. Accurate, efficient, and (iso)geometrically flexible collocation methods for phase-field models

    NASA Astrophysics Data System (ADS)

    Gomez, Hector; Reali, Alessandro; Sangalli, Giancarlo

    2014-04-01

    We propose new collocation methods for phase-field models. Our algorithms are based on isogeometric analysis, a new technology that makes use of functions from computational geometry, such as, for example, Non-Uniform Rational B-Splines (NURBS). NURBS exhibit excellent approximability and controllable global smoothness, and can represent exactly most geometries encapsulated in Computer Aided Design (CAD) models. These attributes permitted us to derive accurate, efficient, and geometrically flexible collocation methods for phase-field models. The performance of our method is demonstrated by several numerical examples of phase separation modeled by the Cahn-Hilliard equation. We feel that our method successfully combines the geometrical flexibility of finite elements with the accuracy and simplicity of pseudo-spectral collocation methods, and is a viable alternative to classical collocation methods.

  19. Spectral analysis of GEOS-3 altimeter data and frequency domain collocation. [to estimate gravity anomalies

    NASA Technical Reports Server (NTRS)

    Eren, K.

    1980-01-01

    The mathematical background in spectral analysis as applied to geodetic applications is summarized. The resolution (cut-off frequency) of the GEOS 3 altimeter data is examined by determining the shortest wavelength (corresponding to the cut-off frequency) recoverable. The data from some 18 profiles are used. The total power (variance) in the sea surface topography with respect to the reference ellipsoid as well as with respect to the GEM-9 surface is computed. A fast inversion algorithm for simple and block Toeplitz matrices and its application to least squares collocation is explained. This algorithm yields a considerable gain in computer time and storage in comparison with conventional least squares collocation. Frequency domain least squares collocation techniques are also introduced and applied to estimating gravity anomalies from GEOS 3 altimeter data. These techniques substantially reduce the computer time and storage requirements associated with conventional least squares collocation. Numerical examples given demonstrate the efficiency and speed of these techniques.

  20. Foveated wavelet image quality index

    NASA Astrophysics Data System (ADS)

    Wang, Zhou; Bovik, Alan C.; Lu, Ligang; Kouloheris, Jack L.

    2001-12-01

    The human visual system (HVS) is highly non-uniform in sampling, coding, processing and understanding. The spatial resolution of the HVS is highest around the point of fixation (foveation point) and decreases rapidly with increasing eccentricity. Currently, most image quality measurement methods are designed for uniform resolution images. These methods do not correlate well with the perceived foveated image quality. Wavelet analysis delivers a convenient way to simultaneously examine localized spatial as well as frequency information. We developed a new image quality metric called foveated wavelet image quality index (FWQI) in the wavelet transform domain. FWQI considers multiple factors of the HVS, including the spatial variance of the contrast sensitivity function, the spatial variance of the local visual cut-off frequency, the variance of human visual sensitivity in different wavelet subbands, and the influence of the viewing distance on the display resolution and the HVS features. FWQI can be employed for foveated region of interest (ROI) image coding and quality enhancement. We show its effectiveness by using it as a guide for optimal bit assignment of an embedded foveated image coding system. The coding system demonstrates very good coding performance and scalability in terms of foveated objective as well as subjective quality measurement.

  1. Wavelet/scalar quantization compression standard for fingerprint images

    SciTech Connect

    Brislawn, C.M.

    1996-06-12

    The US Federal Bureau of Investigation (FBI) has recently formulated a national standard for digitization and compression of gray-scale fingerprint images. Fingerprints are scanned at a spatial resolution of 500 dots per inch, with 8 bits of gray-scale resolution. The compression algorithm for the resulting digital images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition (wavelet/scalar quantization method). The FBI standard produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. The compression standard specifies a class of potential encoders and a universal decoder with sufficient generality to reconstruct compressed images produced by any compliant encoder, allowing flexibility for future improvements in encoder technology. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations.
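    To make the wavelet/scalar quantization step concrete, here is a hedged sketch of generic uniform scalar quantization of DWT subbands (not the FBI-specified encoder, bit allocation, or bin widths), assuming NumPy and PyWavelets.

    ```python
    import numpy as np
    import pywt

    def quantize_subbands(image, wavelet="bior4.4", level=3, q=10.0):
        """Uniform scalar quantization of a wavelet subband decomposition.

        Returns the quantized integer coefficients and a lossy reconstruction.
        The bin width q controls the rate/distortion trade-off.
        """
        coeffs = pywt.wavedec2(image, wavelet, level=level)
        quantized = [np.round(coeffs[0] / q)]
        for (ch, cv, cd) in coeffs[1:]:
            quantized.append(tuple(np.round(band / q) for band in (ch, cv, cd)))
        # dequantize and invert the transform
        dq = [quantized[0] * q] + [tuple(b * q for b in bands) for bands in quantized[1:]]
        recon = pywt.waverec2(dq, wavelet)[: image.shape[0], : image.shape[1]]
        return quantized, recon

    rng = np.random.default_rng(0)
    image = rng.normal(size=(128, 128)).cumsum(axis=0).cumsum(axis=1)  # smooth-ish test image
    _, recon = quantize_subbands(image, q=5.0)
    print("max abs error:", np.max(np.abs(recon - image)))
    ```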

  2. Medical image compression algorithm based on wavelet transform

    NASA Astrophysics Data System (ADS)

    Chen, Minghong; Zhang, Guoping; Wan, Wei; Liu, Minmin

    2005-02-01

    With the rapid development of electronic imaging and multimedia technology, telemedicine is applied to modern medical services in the hospital. Digital medical images are characterized by high resolution, high precision and vast data. An optimized compression algorithm can alleviate restrictions in transmission speed and data storage. This paper describes the characteristics of the human vision system based on its physiological structure, analyses the characteristics of medical images in telemedicine, and then proposes an optimized compression algorithm based on wavelet zerotrees. After the image is smoothed, it is decomposed with the Haar filters. Then the wavelet coefficients are quantified adaptively. Therefore, we can maximize the efficiency of compression and achieve better subjective visual quality. This algorithm can be applied to image transmission in telemedicine. In the end, we examined the feasibility of this algorithm with an image transmission experiment over the network.

  3. Uncertainty Principle and Elementary Wavelet

    NASA Astrophysics Data System (ADS)

    Bliznetsov, M.

    This paper aims to define the time-and-spectrum characteristics of the elementary wavelet. An uncertainty relation between the width of a pulse amplitude spectrum and its time duration and extension in space is investigated in the paper. Analysis of the uncertainty relation is carried out for causal pulses with minimum-phase spectrum. Amplitude spectra of elementary pulses are calculated using modified Fourier spectral analysis. The modification of Fourier analysis is justified by the necessity of solving the zero frequency paradox in amplitude spectra that are calculated with the help of standard Fourier analysis. Modified Fourier spectral analysis has the same resolution along the frequency axis, excludes physically unobservable values from time-and-spectral presentations, and determines that the Heaviside unit step function has an infinitely wide spectrum equal to 1 along the whole frequency range. The Dirac delta function has the infinitely wide spectrum in the infinitely high frequency scope. The difference in propagation of wave and quasi-wave forms of energy motion is established from the analysis of the uncertainty relation. Unidirectional pulse velocity depends on the relative width of the pulse spectrum. Oscillating pulse velocity is constant in a given nondispersive medium. The elementary wavelet has the maximum relative spectrum width and minimum time duration among all the oscillating pulses whose velocity is equal to the velocity of the causal harmonic components of the pulse spectra. The relative width of the elementary wavelet spectrum with regard to the resonance frequency is the square root of 4/3, approximately equal to 1.1547.... The relative width of this wavelet spectrum with regard to the center frequency is equal to 1. The more the relative width of a unidirectional pulse spectrum exceeds the relative width of the elementary wavelet spectrum, the higher the velocity of unidirectional pulse propagation. The concept of a velocity exceeding coefficient is introduced for pulses presenting the quasi-wave form of energy

  4. Ninth order block hybrid collocation method for second order ordinary differential equations

    NASA Astrophysics Data System (ADS)

    Yap, Lee Ken; Ismail, Fudziah

    2016-02-01

    A ninth order block hybrid collocation method is proposed for solving general second order ordinary differential equations directly. The derivation involves interpolation and collocation of basic polynomial that generates the main and additional methods. These methods are applied simultaneously to provide approximate solutions at five main points and three off-step points. The stability properties of the block method are discussed. Some illustrative examples are given to demonstrate the efficiency of the method.

  5. Entropy Stable Spectral Collocation Schemes for the Navier-Stokes Equations: Discontinuous Interfaces

    NASA Technical Reports Server (NTRS)

    Carpenter, Mark H.; Fisher, Travis C.; Nielsen, Eric J.; Frankel, Steven H.

    2013-01-01

    Nonlinear entropy stability and a summation-by-parts framework are used to derive provably stable, polynomial-based spectral collocation methods of arbitrary order. The new methods are closely related to discontinuous Galerkin spectral collocation methods commonly known as DGFEM, but exhibit a more general entropy stability property. Although the new schemes are applicable to a broad class of linear and nonlinear conservation laws, emphasis herein is placed on the entropy stability of the compressible Navier-Stokes equations.

  6. Multi-element probabilistic collocation method in high dimensions

    SciTech Connect

    Foo, Jasmine; Karniadakis, George Em

    2010-03-01

    We combine multi-element polynomial chaos with analysis of variance (ANOVA) functional decomposition to enhance the convergence rate of polynomial chaos in high dimensions and in problems with low stochastic regularity. Specifically, we employ the multi-element probabilistic collocation method MEPCM and so we refer to the new method as MEPCM-A. We investigate the dependence of the convergence of MEPCM-A on two decomposition parameters, the polynomial order {mu} and the effective dimension {nu}, with {nu} <= {mu} for monotonic convergence of the method. We also employ MEPCM-A to obtain error bars for the piezometric head at the Hanford nuclear waste site under stochastic hydraulic conductivity conditions. Finally, we compare the cost of MEPCM-A against Monte Carlo in several hundred dimensions, and we find MEPCM-A to be more efficient for up to 600 dimensions for a specific multi-dimensional integration problem involving a discontinuous function.

  7. Recent advances in (soil moisture) triple collocation analysis

    NASA Astrophysics Data System (ADS)

    Gruber, A.; Su, C.-H.; Zwieback, S.; Crow, W.; Dorigo, W.; Wagner, W.

    2016-03-01

    To date, triple collocation (TC) analysis is one of the most important methods for the global-scale evaluation of remotely sensed soil moisture data sets. In this study we review existing implementations of soil moisture TC analysis as well as investigations of the assumptions underlying the method. Different notations that are used to formulate the TC problem are shown to be mathematically identical. While many studies have investigated issues related to possible violations of the underlying assumptions, only a few TC modifications have been proposed to mitigate the impact of these violations. Moreover, assumptions that are often understood as limitations unique to TC analysis are shown to be common also to other conventional performance metrics. Noteworthy advances in TC analysis have been made in the way error estimates are presented, by moving from the investigation of absolute error variance estimates to the investigation of signal-to-noise ratio (SNR) metrics. Here we review existing error presentations and propose the combined investigation of the SNR (expressed in logarithmic units), the unscaled error variances, and the soil moisture sensitivities of the data sets as an optimal strategy for the evaluation of remotely sensed soil moisture data sets.
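    The core covariance-based TC estimator is compact enough to sketch; the following example (standard textbook notation, not the authors' code) estimates the error variances of three collocated series and the SNR of the first, under the usual assumptions of linearly related signals and mutually independent errors.

    ```python
    import numpy as np

    def triple_collocation(x, y, z):
        """Covariance-based triple collocation error variance and SNR estimates.

        x, y, z : equally sampled, collocated estimates of the same geophysical
        variable with mutually independent errors (the standard TC assumptions).
        """
        c = np.cov(np.vstack([x, y, z]))
        # error variances (x taken as the scaling reference)
        err_x = c[0, 0] - c[0, 1] * c[0, 2] / c[1, 2]
        err_y = c[1, 1] - c[0, 1] * c[1, 2] / c[0, 2]
        err_z = c[2, 2] - c[0, 2] * c[1, 2] / c[0, 1]
        # signal-to-noise ratio in dB for the x data set
        snr_x_db = 10.0 * np.log10((c[0, 1] * c[0, 2] / c[1, 2]) / err_x)
        return (err_x, err_y, err_z), snr_x_db

    rng = np.random.default_rng(1)
    truth = rng.normal(size=5000)
    x = truth + 0.3 * rng.normal(size=truth.size)
    y = 0.8 * truth + 0.4 * rng.normal(size=truth.size)
    z = 1.2 * truth + 0.5 * rng.normal(size=truth.size)
    print(triple_collocation(x, y, z))
    ```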

  8. An auroral scintillation observation using precise, collocated GPS receivers

    NASA Astrophysics Data System (ADS)

    Garner, T. W.; Harris, R. B.; York, J. A.; Herbster, C. S.; Minter, C. F., III; Hampton, D. L.

    2011-02-01

    On 10 January 2009, an unusual ionospheric scintillation event was observed by a Global Positioning System (GPS) receiver station in Fairbanks, Alaska. The receiver station is part of the National Geospatial-Intelligence Agency's (NGA) Monitoring Station Network (MSN). Each MSN station runs two identical geodetic-grade, dual-frequency, full-code tracking GPS receivers that share a common antenna. At the Fairbanks station, a third separate receiver with a separate antenna is located nearby. During the 10 January event, ionospheric conditions caused two of the receivers to lose lock on a single satellite. The third receiver tracked through the scintillation. The region of scintillation was collocated with an auroral arc and a slant total electron content (TEC) increase of 5.71 TECu (TECu = 10^16/m^2). The response of the full-code tracking receivers to the scintillation is intriguing. One of these receivers lost lock, but the other receiver did not. This fact argues that a receiver's internal state dictates its reaction to scintillation. Additionally, the scintillation only affected the L2 signal. While this caused the L1 signal to be lost on the semicodeless receiver, the full-code tracking receiver only lost the L1 signal when the receiver attempted to reacquire the satellite link.

  9. Collocated Dataglyphs for large-message storage and retrieval

    NASA Astrophysics Data System (ADS)

    Motwani, Rakhi C.; Breidenbach, Jeff A.; Black, John R.

    2004-06-01

    In contrast to the security and integrity of electronic files, printed documents are vulnerable to damage and forgery due to their physical nature. Researchers at Palo Alto Research Center utilize DataGlyph technology to render digital characteristics to printed documents, which provides them with the facility of tamper-proof authentication and damage resistance. This DataGlyph document is known as GlyphSeal. Limited DataGlyph carrying capacity per printed page restricted the application of this technology to a domain of graphically simple and small-sized single-paged documents. In this paper the authors design a protocol motivated by techniques from the networking domain and back-up strategies, which extends the GlyphSeal technology to larger-sized, graphically complex, multi-page documents. This protocol provides fragmentation, sequencing and data loss recovery. The Collocated DataGlyph Protocol renders large glyph messages onto multiple printed pages and recovers the glyph data from rescanned versions of the multi-page documents, even when pages are missing, reordered or damaged. The novelty of this protocol is the application of ideas from RAID to the domain of DataGlyphs. The current revision of this protocol is capable of generating at most 255 pages, if page recovery is desired and does not provide enough data density to store highly detailed images in a reasonable amount of page space.

  10. Lunar soft landing rapid trajectory optimization using direct collocation method and nonlinear programming

    NASA Astrophysics Data System (ADS)

    Tu, Lianghui; Yuan, Jianping; Luo, Jianjun; Ning, Xin; Zhou, Ruiwu

    2007-11-01

    The direct collocation method has been widely used for trajectory optimization. In this paper, the application of a direct optimization method (direct collocation method & nonlinear programming (NLP)) to lunar probe soft-landing trajectory optimization is introduced. Firstly, the trajectory optimization control problem for lunar probe soft landing is established and the equations of motion are simplified based on some reasonable hypotheses. The performance index is selected to minimize fuel consumption. The control variables are the thrust attack angle and the engine thrust. The terminal state constraints are velocity and altitude constraints. Then, the optimal control problem is transformed into a nonlinear programming problem using the direct collocation method. The state variables and control variables at all nodes and collocation nodes are selected as optimization parameters. The parameter optimization problem is solved using the SNOPT software package. The simulation results demonstrate that the direct collocation method is not sensitive to the lunar soft landing initial conditions; they also show that fairly good optimal solutions of the trajectory optimization problem are obtained in near real time. Therefore, the direct collocation method is a viable approach to the lunar probe soft landing trajectory optimization problem.
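    A hedged sketch of the transcription step alone is shown below (trapezoidal collocation defect constraints on a toy double integrator); the lunar-descent dynamics and the SNOPT solve used in the paper are not reproduced.

    ```python
    import numpy as np

    def trapezoidal_defects(states, controls, times, dynamics):
        """Defect constraints for direct collocation with the trapezoidal rule.

        states   : (N, n_x) state values at the nodes (optimization variables)
        controls : (N, n_u) control values at the nodes (optimization variables)
        times    : (N,) node times
        dynamics : f(x, u) -> dx/dt, the equations of motion
        The NLP solver drives every defect to zero, which enforces the dynamics.
        """
        f = np.array([dynamics(x, u) for x, u in zip(states, controls)])
        h = np.diff(times)[:, None]
        return states[1:] - states[:-1] - 0.5 * h * (f[1:] + f[:-1])

    # toy double-integrator example: states (position, velocity), control = acceleration
    def double_integrator(x, u):
        return np.array([x[1], u[0]])

    times = np.linspace(0.0, 1.0, 11)
    states = np.column_stack([0.5 * times**2, times])   # exact trajectory for u = 1
    controls = np.ones((times.size, 1))
    print(np.max(np.abs(trapezoidal_defects(states, controls, times, double_integrator))))
    ```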

  11. Noise reduction of time domain electromagnetic data: Application of a combined wavelet denoising method

    NASA Astrophysics Data System (ADS)

    Ji, Yanju; Li, Dongsheng; Yuan, Guiyang; Lin, Jun; Du, Shangyu; Xie, Lijun; Wang, Yuan

    2016-06-01

    A denoising method based on wavelet analysis is presented for the removal of noise (background noise and random spikes) from time domain electromagnetic (TEM) data. This method includes two signal processing technologies: the wavelet threshold method and the stationary wavelet transform. First, the wavelet threshold method is used for the removal of background noise from the TEM data. Then, the data are divided into a series of details and approximations by using the stationary wavelet transform. The random spikes in the details are identified by zero reference data and an adaptive energy detector. Next, the corresponding details are processed to suppress the random spikes. The denoised TEM data are reconstructed via the inverse stationary wavelet transform using the processed details at each level and the approximations at the highest level. The proposed method has been verified using synthetic TEM data; the signal-to-noise ratio of the synthetic TEM data is ultimately increased from 10.97 dB to 24.37 dB. This method is also applied to the noise suppression of field data collected at Hengsha island, China. The section image results show that the noise is suppressed effectively and the resolution of the deep anomaly is clearly improved.
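    The first of the two stages, wavelet threshold denoising, can be sketched generically as follows (assumed universal-threshold settings, not the authors' parameters); the stationary-wavelet-transform spike-removal stage is not reproduced. The example assumes NumPy and PyWavelets.

    ```python
    import numpy as np
    import pywt

    def wavelet_threshold_denoise(signal, wavelet="sym8", level=5):
        """Soft-threshold wavelet denoising (the 'background noise' stage).

        The threshold follows the common universal-threshold rule with a robust
        noise estimate from the finest-scale detail coefficients.
        """
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745
        thresh = sigma * np.sqrt(2.0 * np.log(signal.size))
        denoised = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
        return pywt.waverec(denoised, wavelet)

    # synthetic decaying TEM-like transient with additive noise
    rng = np.random.default_rng(2)
    t = np.linspace(1e-5, 1e-2, 4096)
    clean = 1e-3 * t ** -0.5
    noisy = clean + 0.02 * clean.max() * rng.normal(size=t.size)
    den = wavelet_threshold_denoise(noisy)
    print("rms error before/after:",
          np.sqrt(np.mean((noisy - clean) ** 2)),
          np.sqrt(np.mean((den[:clean.size] - clean) ** 2)))
    ```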

  12. Design Methodology of a New Wavelet Basis Function for Fetal Phonocardiographic Signals

    PubMed Central

    Chourasia, Vijay S.; Tiwari, Anil Kumar

    2013-01-01

    The fetal phonocardiography (fPCG) based antenatal care system is economical and has the potential to be used for long-term monitoring due to the noninvasive nature of the system. The main limitation of this technique is that noise gets superimposed on the useful signal during its acquisition and transmission. Conventional filtering may result in loss of valuable diagnostic information from these signals. This calls for a robust, versatile, and adaptable denoising method applicable in different operative circumstances. In this work, a novel algorithm based on the wavelet transform has been developed for denoising of fPCG signals. Successful implementation of wavelet theory in denoising is heavily dependent on the selection of a suitable wavelet basis function. This work introduces a new mother wavelet basis function for denoising of fPCG signals. The performance of the newly developed wavelet is found to be better than that of existing wavelets. For this purpose, a two-channel filter bank, based on the characteristics of the fPCG signal, is designed. The resultant denoised fPCG signals retain the important diagnostic information contained in the original fPCG signal. PMID:23766693

  13. Highly efficient codec based on significance-linked connected-component analysis of wavelet coefficients

    NASA Astrophysics Data System (ADS)

    Chai, Bing-Bing; Vass, Jozsef; Zhuang, Xinhua

    1997-04-01

    Recent success in wavelet coding is mainly attributed to the recognition of the importance of data organization. Several very competitive wavelet codecs have been developed, namely, Shapiro's Embedded Zerotree Wavelets (EZW), Servetto et al.'s Morphological Representation of Wavelet Data (MRWD), and Said and Pearlman's Set Partitioning in Hierarchical Trees (SPIHT). In this paper, we propose a new image compression algorithm called Significance-Linked Connected Component Analysis (SLCCA) of wavelet coefficients. SLCCA exploits both within-subband clustering of significant coefficients and cross-subband dependency in significant fields. A so-called significance link between connected components is designed to reduce the positional overhead of MRWD. In addition, the significant coefficients' magnitudes are encoded in bit plane order to match the probability model of the adaptive arithmetic coder. Experiments show that SLCCA outperforms both EZW and MRWD, and is tied with SPIHT. Furthermore, it is observed that SLCCA generally has the best performance on images with a large portion of texture. When applied to fingerprint image compression, it outperforms the FBI's wavelet scalar quantization by about 1 dB.

  14. Portal imaging: Performance improvement in noise reduction by means of wavelet processing.

    PubMed

    González-López, Antonio; Morales-Sánchez, Juan; Larrey-Ruiz, Jorge; Bastida-Jumilla, María-Consuelo; Verdú-Monedero, Rafael

    2016-01-01

    This paper discusses the suitability, in terms of noise reduction, of various methods which can be applied to an image type often used in radiation therapy: the portal image. Among these methods, the analysis focuses on those operating in the wavelet domain. Wavelet-based methods tested on natural images--such as the thresholding of the wavelet coefficients, the minimization of the Stein unbiased risk estimator on a linear expansion of thresholds (SURE-LET), and the Bayes least-squares method using as a prior a Gaussian scale mixture (BLS-GSM method)--are compared with other methods that operate on the image domain--an adaptive Wiener filter and a nonlocal mean filter (NLM). For the assessment of the performance, the peak signal-to-noise ratio (PSNR), the structural similarity index (SSIM), the Pearson correlation coefficient, and the Spearman rank correlation (ρ) coefficient are used. The performance of the wavelet filters and the NLM method are similar, but wavelet filters outperform the Wiener filter in terms of portal image denoising. It is shown how BLS-GSM and NLM filters produce the smoothest image, while keeping soft-tissue and bone contrast. As for the computational cost, filters using a decimated wavelet transform (decimated thresholding and SURE-LET) turn out to be the most efficient, with calculation times around 1 s. PMID:26602966

  15. Optical wavelet transform for fingerprint identification

    NASA Astrophysics Data System (ADS)

    MacDonald, Robert P.; Rogers, Steven K.; Burns, Thomas J.; Fielding, Kenneth H.; Warhola, Gregory T.; Ruck, Dennis W.

    1994-03-01

    The Federal Bureau of Investigation (FBI) has recently sanctioned a wavelet fingerprint image compression algorithm developed for reducing storage requirements of digitized fingerprints. This research implements an optical wavelet transform of a fingerprint image, as the first step in an optical fingerprint identification process. Wavelet filters are created from computer-generated holograms of biorthogonal wavelets, the same wavelets implemented in the FBI algorithm. Using a detour phase holographic technique, a complex binary filter mask is created with both symmetry and linear phase. The wavelet transform is implemented with continuous shift using an optical correlation between binarized fingerprints written on a Magneto-Optic Spatial Light Modulator and the biorthogonal wavelet filters. A telescopic lens combination scales the transformed fingerprint onto the filters, providing a means of adjusting the biorthogonal wavelet filter dilation continuously. The wavelet transformed fingerprint is then applied to an optical fingerprint identification process. Comparison between normal fingerprints and wavelet transformed fingerprints shows improvement in the optical identification process, in terms of rotational invariance.

  16. A kurtosis-based wavelet algorithm for motion artifact correction of fNIRS data.

    PubMed

    Chiarelli, Antonio M; Maclin, Edward L; Fabiani, Monica; Gratton, Gabriele

    2015-05-15

    Movements are a major source of artifacts in functional Near-Infrared Spectroscopy (fNIRS). Several algorithms have been developed for motion artifact correction of fNIRS data, including Principal Component Analysis (PCA), targeted Principal Component Analysis (tPCA), Spline Interpolation (SI), and Wavelet Filtering (WF). WF is based on removing wavelets with coefficients deemed to be outliers based on their standardized scores, and it has proven to be effective on both synthesized and real data. However, when the SNR is high, it can lead to a reduction of signal amplitude. This may occur because standardized scores inherently adapt to the noise level, independently of the shape of the distribution of the wavelet coefficients. Higher-order moments of the wavelet coefficient distribution may provide a more diagnostic index of wavelet distribution abnormality than its variance. Here we introduce a new procedure that relies on eliminating wavelets that contribute to generating a large fourth moment (i.e., kurtosis) of the coefficient distribution to define "outlier" wavelets (kurtosis-based Wavelet Filtering, kbWF). We tested kbWF by comparing it with other existing procedures, using simulated functional hemodynamic responses added to real resting-state fNIRS recordings. These simulations show that kbWF is highly effective in eliminating transient noise, yielding results with higher SNR than other existing methods over a wide range of signal and noise amplitudes. This is because: (1) the procedure is iterative; and (2) kurtosis is more diagnostic than variance in identifying outliers. However, kbWF does not eliminate slow components of artifacts whose duration is comparable to the total recording time. PMID:25747916
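    A heavily simplified, hedged sketch of the kurtosis idea (not the published kbWF algorithm): detail coefficients that keep the coefficient distribution heavy-tailed are iteratively zeroed until the excess kurtosis falls below a chosen bound. The wavelet, level, and bound are assumptions.

    ```python
    import numpy as np
    import pywt

    def excess_kurtosis(c):
        c = c - c.mean()
        return np.mean(c ** 4) / (np.mean(c ** 2) ** 2 + 1e-30) - 3.0

    def kurtosis_wavelet_filter(signal, wavelet="db4", level=4, kurt_max=0.5):
        """Iteratively zero the detail coefficients that drive a heavy-tailed
        (high-kurtosis) coefficient distribution, then reconstruct the signal."""
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        for i in range(1, len(coeffs)):
            d = coeffs[i].copy()
            # remove the largest-magnitude coefficients until the distribution
            # is close to Gaussian (small excess kurtosis)
            while excess_kurtosis(d) > kurt_max and np.any(d != 0):
                d[np.argmax(np.abs(d))] = 0.0
            coeffs[i] = d
        return pywt.waverec(coeffs, wavelet)

    rng = np.random.default_rng(3)
    t = np.linspace(0, 60, 3000)
    hemo = np.sin(2 * np.pi * 0.05 * t)                  # slow hemodynamic-like signal
    artifact = np.zeros_like(t)
    artifact[1500:1520] = 5.0                            # transient motion spike
    noisy = hemo + artifact + 0.05 * rng.normal(size=t.size)
    cleaned = kurtosis_wavelet_filter(noisy)[: t.size]
    print("peak residual artifact:", np.max(np.abs(cleaned[1500:1520] - hemo[1500:1520])))
    ```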

  17. An alternative local collocation strategy for high-convergence meshless PDE solutions, using radial basis functions

    NASA Astrophysics Data System (ADS)

    Stevens, D.; Power, H.; Meng, C. Y.; Howard, D.; Cliffe, K. A.

    2013-12-01

    This work proposes an alternative decomposition for local scalable meshless RBF collocation. The proposed method operates on a dataset of scattered nodes that are placed within the solution domain and on the solution boundary, forming a small RBF collocation system around each internal node. Unlike other meshless local RBF formulations that are based on a generalised finite difference (RBF-FD) principle, in the proposed "finite collocation" method the solution of the PDE is driven entirely by collocation of PDE governing and boundary operators within the local systems. A sparse global collocation system is obtained not by enforcing the PDE governing operator, but by assembling the value of the field variable in terms of the field value at neighbouring nodes. In analogy to full-domain RBF collocation systems, communication between stencils occurs only over the stencil periphery, allowing the PDE governing operator to be collocated in an uninterrupted manner within the stencil interior. The local collocation of the PDE governing operator allows the method to operate on centred stencils in the presence of strong convective fields; the reconstruction weights assigned to nodes in the stencils being automatically adjusted to represent the flow of information as dictated by the problem physics. This "implicit upwinding" effect mitigates the need for ad-hoc upwinding stencils in convective dominant problems. Boundary conditions are also enforced within the local collocation systems, allowing arbitrary boundary operators to be imposed naturally within the solution construction. The performance of the method is assessed using a large number of numerical examples with two steady PDEs; the convection-diffusion equation, and the Lamé-Navier equations for linear elasticity. The method exhibits high-order convergence in each case tested (greater than sixth order), and the use of centred stencils is demonstrated for convective-dominant problems. In the case of linear elasticity

  18. Wavelet analysis of internal gravity waves

    NASA Astrophysics Data System (ADS)

    Hawkins, J.; Warn-Varnas, A.; Chin-Bing, S.; King, D.; Smolarkiewicsz, P.

    2005-05-01

    A series of model studies of internal gravity waves (igw) have been conducted for several regions of interest. Dispersion relations from the results have been computed using wavelet analysis as described by Meyers (1993). The wavelet transform is repeatedly applied over time and the components are evaluated with respect to their amplitude and peak position (Torrence and Compo, 1998). In this sense we have been able to compute dispersion relations from model results and from measured data. Qualitative agreement has been obtained in some cases. The results from wavelet analysis must be carefully interpreted because the igw models are fully nonlinear and wavelet analysis is fundamentally a linear technique. Nevertheless, a great deal of information describing igw propagation can be obtained from the wavelet transform. We address the domains over which wavelet analysis techniques can be applied and discuss the limits of their applicability.

  19. On the wavelet optimized finite difference method

    NASA Technical Reports Server (NTRS)

    Jameson, Leland

    1994-01-01

    When one considers the effect in the physical space, Daubechies-based wavelet methods are equivalent to finite difference methods with grid refinement in regions of the domain where small scale structure exists. Adding a wavelet basis function at a given scale and location where one has a correspondingly large wavelet coefficient is, essentially, equivalent to adding a grid point, or two, at the same location and at a grid density which corresponds to the wavelet scale. This paper introduces a wavelet optimized finite difference method which is equivalent to a wavelet method in its multiresolution approach but which does not suffer from difficulties with nonlinear terms and boundary conditions, since all calculations are done in the physical space. With this method one can obtain an arbitrarily good approximation to a conservative difference method for solving nonlinear conservation laws.

  20. Wavelet Analysis of Soil Reflectance for the Characterization of Soil Properties

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Wavelet analysis has proven to be effective in many fields including signal processing and digital image analysis. Recently, it has been adapted to spectroscopy, where the reflectance of various materials is measured with respect to wavelength (nm) or wave number (cm-1). Spectra can cover broad wave...

  1. Wavelet analysis in two-dimensional tomography

    NASA Astrophysics Data System (ADS)

    Burkovets, Dimitry N.

    2002-02-01

    The diagnostic possibilities of wavelet analysis of coherent images of connective tissue are considered for the diagnostics of its pathological changes. The effectiveness of polarization selection in obtaining the images of wavelet coefficients is also shown. The wavelet structures characterizing skin psoriasis and bone-tissue osteoporosis have been analyzed. Histological sections of physiologically normal and pathologically changed samples of connective tissue of human skin and spongy bone tissue have been analyzed.

  2. Wavelet analysis of epileptic spikes

    NASA Astrophysics Data System (ADS)

    Latka, Miroslaw; Was, Ziemowit; Kozik, Andrzej; West, Bruce J.

    2003-05-01

    Interictal spikes and sharp waves in human EEG are characteristic signatures of epilepsy. These potentials originate as a result of synchronous pathological discharge of many neurons. The reliable detection of such potentials has been a long-standing problem in EEG analysis, especially after long-term monitoring became common in the investigation of epileptic patients. The traditional definition of a spike is based on its amplitude, duration, sharpness, and emergence from its background. However, spike detection systems built solely around this definition are not reliable due to the presence of numerous transients and artifacts. We use the wavelet transform to analyze the properties of EEG manifestations of epilepsy. We demonstrate that the behavior of the wavelet transform of epileptic spikes across scales can constitute the foundation of a relatively simple yet effective detection algorithm.
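    As a hedged illustration of cross-scale behaviour (a generic continuous-wavelet-transform scheme, not the authors' detector), a candidate spike can be flagged where the wavelet magnitude is simultaneously large over most scales in a band; the scales, mother wavelet, and thresholds below are assumptions.

    ```python
    import numpy as np
    import pywt

    def detect_spikes(eeg, scales=None, k=4.0, frac=0.7):
        """Flag samples where |CWT| is simultaneously large across most scales."""
        if scales is None:
            scales = np.arange(2, 20)        # covers roughly 20-70 ms features at 256 Hz
        coefs, _ = pywt.cwt(eeg, scales, "mexh")
        mag = np.abs(coefs)
        thresh = k * np.median(mag, axis=1, keepdims=True)   # robust per-scale threshold
        votes = (mag > thresh).sum(axis=0)
        return np.flatnonzero(votes > frac * len(scales))

    fs = 256
    t = np.arange(0, 10, 1.0 / fs)
    rng = np.random.default_rng(4)
    eeg = 30 * np.sin(2 * np.pi * 10 * t) + 10 * rng.normal(size=t.size)   # alpha rhythm + noise
    eeg[1280:1295] += 120 * np.hanning(15)   # injected sharp transient near t = 5 s
    print(detect_spikes(eeg)[:10])
    ```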

  3. Wavelet transforms for optical pulse analysis.

    PubMed

    Vázquez, Javier Molina; Mazilu, Michael; Miller, Alan; Galbraith, Ian

    2005-12-01

    An exploration of wavelet transforms for ultrashort optical pulse characterization is given. Some of the most common wavelets are examined to determine the advantages of using the causal quasi-wavelet suggested in Proceedings of the LEOS 15th Annual Meeting (IEEE, 2002), Vol. 2, p. 592, in terms of pulse analysis and, in particular, chirp extraction. Owing to its ability to distinguish between past and future pulse information, the causal quasi-wavelet is found to be highly suitable for optical pulse characterization. PMID:16396051

  4. Entangled Husimi Distribution and Complex Wavelet Transformation

    NASA Astrophysics Data System (ADS)

    Hu, Li-Yun; Fan, Hong-Yi

    2010-05-01

    Similar in spirit to the preceding work (Int. J. Theor. Phys. 48:1539, 2009), where the relationship between the wavelet transformation and the Husimi distribution function is revealed, we extend this kind of relationship to the entangled case. We find that the optical complex wavelet transformation can be used to study the entangled Husimi distribution function in the phase space theory of quantum optics. We prove that, up to a Gaussian function, the entangled Husimi distribution function of a two-mode quantum state |ψ⟩ is just the modulus square of the complex wavelet transform of e^(-|η|^2/2), with ψ(η) being the mother wavelet.

  5. Wavelet Sparse Approximate Inverse Preconditioners

    NASA Technical Reports Server (NTRS)

    Chan, Tony F.; Tang, W.-P.; Wan, W. L.

    1996-01-01

    There is an increasing interest in using sparse approximate inverses as preconditioners for Krylov subspace iterative methods. Recent studies of Grote and Huckle and Chow and Saad also show that sparse approximate inverse preconditioners can be effective for a variety of matrices, e.g. Harwell-Boeing collections. Nonetheless a drawback is that rapid decay of the inverse entries is required so that a sparse approximate inverse is possible. However, for the class of matrices that come from elliptic PDE problems, this assumption may not necessarily hold. Our main idea is to look for a basis, other than the standard one, such that a sparse representation of the inverse is feasible. A crucial observation is that the kind of matrices we are interested in typically have a piecewise smooth inverse. We exploit this fact by applying wavelet techniques to construct a better sparse approximate inverse in the wavelet basis. We justify theoretically and numerically that our approach is effective for matrices with smooth inverses. We emphasize that in this paper we have only presented the idea of wavelet approximate inverses and demonstrated its potential but have not yet developed a highly refined and efficient algorithm.
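    The compressibility of such inverses in a wavelet basis can be illustrated with a small hedged experiment (an orthonormal Haar transform of the inverse of a 1-D Laplacian). Note that a practical preconditioner would be constructed directly, e.g. by Frobenius-norm minimization, rather than by thresholding an explicitly computed inverse as done here purely for illustration.

    ```python
    import numpy as np

    def haar_matrix(n):
        """Orthonormal Haar wavelet transform matrix (n must be a power of two)."""
        if n == 1:
            return np.array([[1.0]])
        h = haar_matrix(n // 2)
        top = np.kron(h, [1.0, 1.0])                   # averaging rows
        bot = np.kron(np.eye(n // 2), [1.0, -1.0])     # differencing rows
        return np.vstack([top, bot]) / np.sqrt(2.0)

    n = 64
    A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1-D Laplacian (elliptic model problem)
    W = haar_matrix(n)
    B = W @ np.linalg.inv(A) @ W.T                          # inverse expressed in the wavelet basis
    B_sparse = np.where(np.abs(B) > 1e-3 * np.abs(B).max(), B, 0.0)
    M = W.T @ B_sparse @ W                                  # approximate inverse back in the standard basis
    print("kept entries:", np.count_nonzero(B_sparse), "of", B.size)
    print("residual norm ||I - M A||:", np.linalg.norm(np.eye(n) - M @ A, 2))
    ```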

  6. A wavelet-based feature vector model for DNA clustering.

    PubMed

    Bao, J P; Yuan, R Y

    2015-01-01

    DNA data are important in the bioinformatic domain. To extract useful information from the enormous collection of DNA sequences, DNA clustering is often adopted to efficiently deal with DNA data. The alignment-free method is a very popular way of creating feature vectors from DNA sequences, which are then used to compare DNA similarities. This paper proposes a wavelet-based feature vector (WFV) model, which is also an alignment-free method. From the perspective of signal processing, a DNA sequence is a sequence of digital signals. However, most traditional alignment-free models only extract features in the time domain. The WFV model uses discrete wavelet transform to adaptively yield feature vectors with a fixed dimension based on the features in both the time and frequency domains. The level of wavelet transform is adjusted according to the length of the DNA sequence rather than a fixed manually set value. The WFV model prefers a 32-dimension feature vector, which greatly promotes system performance. We compared the WFV model with the other five alignment-free models, i.e., k-tuple, DMK, TSM, AMI, and CV, on several large-scale DNA datasets on the DNA clustering application by means of the K-means algorithm. The experimental results showed that the WFV model outperformed the other models in terms of both the clustering results and the running time. PMID:26782569
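    The pipeline (numeric encoding, DWT, fixed-length feature vector) can be sketched as follows; the 32-dimension target follows the abstract, while the base encoding, wavelet choice, and level rule are assumptions. The resulting vectors would then be fed to K-means as described.

    ```python
    import numpy as np
    import pywt

    BASE_CODE = {"A": 0.0, "C": 1.0, "G": 2.0, "T": 3.0}

    def wfv(sequence, dim=32, wavelet="haar"):
        """Fixed-length wavelet feature vector for a DNA sequence (hedged sketch).

        The sequence is mapped to a numeric signal, the decomposition level is
        chosen from the sequence length so that the approximation band has about
        `dim` coefficients, and the result is padded/truncated to exactly `dim`.
        """
        signal = np.array([BASE_CODE[b] for b in sequence.upper() if b in BASE_CODE])
        level = max(int(np.ceil(np.log2(max(signal.size, dim) / dim))), 0)
        approx = pywt.wavedec(signal, wavelet, level=level)[0] if level else signal
        vec = np.zeros(dim)
        vec[: min(dim, approx.size)] = approx[:dim]
        return vec

    rng = np.random.default_rng(5)
    seq_a = "".join(rng.choice(list("ACGT"), size=1000))
    seq_b = seq_a[:990] + "ACGTACGTAC"          # near-duplicate of seq_a
    seq_c = "".join(rng.choice(list("ACGT"), size=1000))
    va, vb, vc = wfv(seq_a), wfv(seq_b), wfv(seq_c)
    print(np.linalg.norm(va - vb), np.linalg.norm(va - vc))   # near-duplicate should be closer
    ```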

  7. THE LOSS OF ACCURACY OF STOCHASTIC COLLOCATION METHOD IN SOLVING NONLINEAR DIFFERENTIAL EQUATIONS WITH RANDOM INPUT DATA

    SciTech Connect

    Webster, Clayton G; Tran, Hoang A; Trenchea, Catalin S

    2013-01-01

    In this paper we show how the stochastic collocation method (SCM) can fail to converge for nonlinear differential equations with random coefficients. First, we consider the Navier-Stokes equation with uncertain viscosity and derive error estimates for the stochastic collocation discretization. Our analysis gives some indicators of how the nonlinearity negatively affects the accuracy of the method. The stochastic collocation method is then applied to a noisy Lorenz system. Simulation results demonstrate that the solution of a nonlinear equation can be highly irregular with respect to the random data and, in such cases, the stochastic collocation method cannot capture the correct solution.

  8. A New Adaptive Image Denoising Method Based on Neighboring Coefficients

    NASA Astrophysics Data System (ADS)

    Biswas, Mantosh; Om, Hari

    2016-03-01

    Many good techniques have been discussed for image denoising, including NeighShrink, the improved adaptive wavelet denoising method based on neighboring coefficients (IAWDMBNC), the improved wavelet shrinkage technique for image denoising (IWST), the local adaptive Wiener filter (LAWF), wavelet packet thresholding using median and Wiener filters (WPTMWF), and the adaptive image denoising method based on thresholding (AIDMT). These techniques are based on a local statistical description of the neighboring coefficients in a window. These methods, however, do not give good image quality since, due to the threshold, they cannot modify and remove too many small wavelet coefficients simultaneously. In this paper, a new image denoising method is proposed that shrinks the noisy coefficients using an adaptive threshold. Our method overcomes these drawbacks and has better performance than the NeighShrink, IAWDMBNC, IWST, LAWF, WPTMWF, and AIDMT denoising methods.
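    For context, the neighboring-coefficient idea underlying NeighShrink-style methods can be sketched as below (a generic implementation with an assumed universal threshold, not the paper's proposed adaptive threshold); each detail coefficient is shrunk according to the energy of its local window.

    ```python
    import numpy as np
    import pywt

    def neighshrink_denoise(image, wavelet="db4", level=2, win=3):
        """Neighbouring-coefficient shrinkage (NeighShrink-style) image denoising.

        Each detail coefficient is shrunk by max(0, 1 - lambda^2 / S^2), where S^2
        is the energy of the coefficients in a win x win window around it.
        """
        coeffs = pywt.wavedec2(image, wavelet, level=level)
        sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745        # noise estimate from finest HH band
        lam2 = 2.0 * sigma ** 2 * np.log(image.size)
        pad = win // 2
        shrunk = [coeffs[0]]
        for bands in coeffs[1:]:
            new_bands = []
            for d in bands:
                dp = np.pad(d, pad, mode="reflect")
                s2 = np.zeros_like(d)
                for i in range(win):                              # sliding-window energy
                    for j in range(win):
                        s2 += dp[i:i + d.shape[0], j:j + d.shape[1]] ** 2
                new_bands.append(d * np.maximum(0.0, 1.0 - lam2 / np.maximum(s2, 1e-12)))
            shrunk.append(tuple(new_bands))
        return pywt.waverec2(shrunk, wavelet)

    rng = np.random.default_rng(6)
    x, y = np.meshgrid(np.linspace(-1, 1, 128), np.linspace(-1, 1, 128))
    clean = np.cos(4 * x) * np.sin(3 * y)
    noisy = clean + 0.2 * rng.normal(size=clean.shape)
    den = neighshrink_denoise(noisy)[:128, :128]
    print("rms error before/after:",
          np.sqrt(np.mean((noisy - clean) ** 2)),
          np.sqrt(np.mean((den - clean) ** 2)))
    ```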

  9. Oncologic image compression using both wavelet and masking techniques.

    PubMed

    Yin, F F; Gao, Q

    1997-12-01

    A new algorithm has been developed to compress oncologic images using both wavelet transform and field masking methods. A compactly supported wavelet transform is used to decompose the original image into high- and low-frequency subband images. The region-of-interest (ROI) inside an image, such as an irradiated field in an electronic portal image, is identified using an image segmentation technique and is then used to generate a mask. The wavelet transform coefficients outside the mask region are then ignored so that these coefficients can be efficiently coded to minimize the image redundancy. In this study, an adaptive uniform scalar quantization method and Huffman coding with a fixed code book are employed in subsequent compression procedures. Three types of typical oncologic images are tested for compression using this new algorithm: CT, MRI, and electronic portal images with 256 x 256 matrix size and 8-bit gray levels. Peak signal-to-noise ratio (PSNR) is used to evaluate the quality of reconstructed image. Effects of masking and image quality on compression ratio are illustrated. Compression ratios obtained using wavelet transform with and without masking for the same PSNR are compared for all types of images. The addition of masking shows an increase of compression ratio by a factor of greater than 1.5. The effect of masking on the compression ratio depends on image type and anatomical site. A compression ratio of greater than 5 can be achieved for a lossless compression of various oncologic images with respect to the region inside the mask. Examples of reconstructed images with compression ratio greater than 50 are shown. PMID:9434988

  10. Daubechies wavelets for linear scaling density functional theory.

    PubMed

    Mohr, Stephan; Ratcliff, Laura E; Boulanger, Paul; Genovese, Luigi; Caliste, Damien; Deutsch, Thierry; Goedecker, Stefan

    2014-05-28

    We demonstrate that Daubechies wavelets can be used to construct a minimal set of optimized localized adaptively contracted basis functions in which the Kohn-Sham orbitals can be represented with an arbitrarily high, controllable precision. Ground state energies and the forces acting on the ions can be calculated in this basis with the same accuracy as if they were calculated directly in a Daubechies wavelets basis, provided that the amplitude of these adaptively contracted basis functions is sufficiently small on the surface of the localization region, which is guaranteed by the optimization procedure described in this work. This approach reduces the computational costs of density functional theory calculations, and can be combined with sparse matrix algebra to obtain linear scaling with respect to the number of electrons in the system. Calculations on systems of 10,000 atoms or more thus become feasible in a systematic basis set with moderate computational resources. Further computational savings can be achieved by exploiting the similarity of the adaptively contracted basis functions for closely related environments, e.g., in geometry optimizations or combined calculations of neutral and charged systems. PMID:24880269

  11. Daubechies wavelets for linear scaling density functional theory

    SciTech Connect

    Mohr, Stephan; Ratcliff, Laura E.; Genovese, Luigi; Caliste, Damien; Deutsch, Thierry; Boulanger, Paul; Goedecker, Stefan

    2014-05-28

    We demonstrate that Daubechies wavelets can be used to construct a minimal set of optimized localized adaptively contracted basis functions in which the Kohn-Sham orbitals can be represented with an arbitrarily high, controllable precision. Ground state energies and the forces acting on the ions can be calculated in this basis with the same accuracy as if they were calculated directly in a Daubechies wavelets basis, provided that the amplitude of these adaptively contracted basis functions is sufficiently small on the surface of the localization region, which is guaranteed by the optimization procedure described in this work. This approach reduces the computational costs of density functional theory calculations, and can be combined with sparse matrix algebra to obtain linear scaling with respect to the number of electrons in the system. Calculations on systems of 10 000 atoms or more thus become feasible in a systematic basis set with moderate computational resources. Further computational savings can be achieved by exploiting the similarity of the adaptively contracted basis functions for closely related environments, e.g., in geometry optimizations or combined calculations of neutral and charged systems.

  12. Six-Degree-of-Freedom Trajectory Optimization Utilizing a Two-Timescale Collocation Architecture

    NASA Technical Reports Server (NTRS)

    Desai, Prasun N.; Conway, Bruce A.

    2005-01-01

    Six-degree-of-freedom (6DOF) trajectory optimization of a reentry vehicle is solved using a two-timescale collocation methodology. This class of 6DOF trajectory problems are characterized by two distinct timescales in their governing equations, where a subset of the states have high-frequency dynamics (the rotational equations of motion) while the remaining states (the translational equations of motion) vary comparatively slowly. With conventional collocation methods, the 6DOF problem size becomes extraordinarily large and difficult to solve. Utilizing the two-timescale collocation architecture, the problem size is reduced significantly. The converged solution shows a realistic landing profile and captures the appropriate high-frequency rotational dynamics. A large reduction in the overall problem size (by 55%) is attained with the two-timescale architecture as compared to the conventional single-timescale collocation method. Consequently, optimum 6DOF trajectory problems can now be solved efficiently using collocation, which was not previously possible for a system with two distinct timescales in the governing states.

  13. Uncertainty quantification for unsaturated flow in porous media: a stochastic collocation method

    NASA Astrophysics Data System (ADS)

    Barajas-Solano, D. A.; Tartakovsky, D. M.

    2011-12-01

    We present a stochastic collocation (SC) method to quantify epistemic uncertainty in predictions of unsaturated flow in porous media. SC provides a non-intrusive framework for uncertainty propagation in models based on the non-linear Richards' equation with arbitrary constitutive laws describing soil properties (relative conductivity and retention curve). To illustrate the approach, we use the Richards' equation with the van Genuchten-Mualem model for water retention and relative conductivity to describe infiltration into an initially dry soil whose uncertain parameters are treated as random fields. These parameters are represented using a truncated Karhunen-Loève expansion; the Smolyak algorithm is used to construct a structured set of collocation points from univariate Gauss quadrature rules. A resulting deterministic problem is solved for each collocation point and, together with the collocation weights, the statistics of hydraulic head and infiltration rate are computed. The results are in agreement with Monte Carlo simulations. We demonstrate that highly heterogeneous soils (large variances of hydraulic parameters) require cubature formulas of a high degree of exactness, while their short correlation lengths increase the dimensionality of the problem. Both effects increase the number of collocation points and thus of deterministic problems to solve, affecting the overall computational cost of uncertainty quantification.
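    A hedged sketch of the collocation machinery on a toy response surface is given below; it uses a full tensor product of Gauss-Hermite nodes (a Smolyak sparse grid would replace this in higher dimensions), and the two-parameter toy model stands in for the deterministic Richards' equation solve.

    ```python
    import numpy as np
    from numpy.polynomial.hermite_e import hermegauss   # probabilists' Hermite rule

    def collocation_statistics(model, dim, n_pts):
        """Mean/variance of model(xi) for xi ~ N(0, I_dim), via tensor-product
        Gauss-Hermite collocation (a Smolyak sparse grid would replace the full
        tensor product in higher dimensions)."""
        nodes_1d, weights_1d = hermegauss(n_pts)
        weights_1d = weights_1d / np.sqrt(2.0 * np.pi)   # normalise to a probability measure
        grids = np.meshgrid(*([nodes_1d] * dim), indexing="ij")
        points = np.stack([g.ravel() for g in grids], axis=1)
        w = np.prod(np.stack(np.meshgrid(*([weights_1d] * dim), indexing="ij")), axis=0).ravel()
        vals = np.array([model(p) for p in points])      # one deterministic solve per collocation point
        mean = np.sum(w * vals)
        var = np.sum(w * (vals - mean) ** 2)
        return mean, var

    # toy surrogate for 'infiltration rate as a function of two KL modes'
    def toy_model(xi):
        return np.exp(0.3 * xi[0] + 0.1 * xi[1])         # lognormal-like response

    print(collocation_statistics(toy_model, dim=2, n_pts=7))
    ```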

  14. A space-time collocation scheme for modified anomalous subdiffusion and nonlinear superdiffusion equations

    NASA Astrophysics Data System (ADS)

    Bhrawy, A. H.

    2016-01-01

    This paper reports a new spectral collocation technique for solving the time-space modified anomalous subdiffusion equation with a nonlinear source term subject to Dirichlet and Neumann boundary conditions. This model equation governs the evolution of the probability density function that describes anomalously diffusing particles. Anomalous diffusion is ubiquitous in physical and biological systems where trapping and binding of particles can occur. A space-time Jacobi collocation scheme is investigated for solving such a problem. The main advantage of the proposed scheme is that shifted Jacobi Gauss-Lobatto collocation and shifted Jacobi Gauss-Radau collocation approximations are employed for the spatial and temporal discretizations, respectively. Thereby, the problem is successfully reduced to a system of algebraic equations. The numerical results obtained by this algorithm have been compared with those of various other numerical methods in order to demonstrate the high accuracy and efficiency of the proposed method. Indeed, for a relatively limited number of Gauss-Lobatto and Gauss-Radau collocation nodes, the absolute error in our numerical solutions is sufficiently small.

  15. Wavelet-based semi-automatic live-wire segmentation

    NASA Astrophysics Data System (ADS)

    Haenselmann, Thomas; Effelsberg, Wolfgang

    2003-06-01

    The live-wire approach is a well-known algorithm based on a graph search to locate boundaries for image segmentation. We will extend the original cost function, which is solely based on finding strong edges, so that the approach can take a large variety of boundaries into account. The cost function adapts to the local characteristics of a boundary by analyzing a user-defined sample using a continuous wavelet decomposition. We will finally extend the approach into 3D in order to segment objects in volumetric data, e.g., from medical CT and MR scans.

  16. Miniaturized Multi-Band Antenna via Element Collocation

    SciTech Connect

    Martin, R P

    2012-06-01

    The resonant frequency of a microstrip patch antenna may be reduced through the addition of slots in the radiating element. Expanding upon this concept in favor of a significant reduction in the tuned width of the radiator, nearly 60% of the antenna metallization is removed, as seen in the top view of the antenna’s radiating element (shown in red, below, left). To facilitate an increase in the gain of the antenna, the radiator is suspended over the ground plane (green) by an air substrate at a height of 0.250" while being mechanically supported by 0.030" thick Rogers RO4003 laminate in the same profile as the element. Although the entire surface of the antenna (red) provides 2.45 GHz operation with insignificant negative effects on performance after material removal, the smaller square microstrip in the middle must be isolated from the additional aperture in order to afford higher frequency operation. A low insertion loss path centered at 2.45 GHz may simultaneously provide considerable attenuation at additional frequencies through the implementation of a series-parallel, resonant reactive path. However, an inductive reactance alone will not permit lower frequency energy to propagate across the intended discontinuity. To mitigate this, a capacitance is introduced in series with the inductor, generating a resonance at 2.45 GHz with minimum forward transmission loss. Four of these reactive pairs are placed between the coplanar elements as shown. Therefore, the aperture of the lower-frequency outer segment includes the smaller radiator while the higher frequency section is isolated from the additional material. In order to avoid cross-polarization losses due to the orientation of a transmitter or receiver in reference to the antenna, circular polarization is realized by a quadrature coupler for each collocated antenna as seen in the bottom view of the antenna (right). To generate electromagnetic radiation concentrically rotating about the direction of propagation

  17. Wavelet analysis and high quality JPEG2000 compression using Daubechies wavelet

    NASA Astrophysics Data System (ADS)

    Khalid, Azra; Afsheen, Uzma; Umer Baig, Saad

    2011-10-01

    Wavelet analysis and its applications have attracted much attention in recent times. It is widely used in areas such as transient signal analysis, image processing, signal processing, and data compression, and it has gained popularity because of its multiresolution, subband coding, and feature extraction capabilities. The paper describes an efficient application of wavelet analysis to image compression, exploring the Daubechies wavelet as the basis function. Wavelets have scaling properties and are localized in time and frequency; they separate the image into different scales on the basis of frequency content. The resulting compressed image can then be easily stored or transmitted, saving crucial communication bandwidth. Because of its high-quality compression, wavelet analysis is one of the building blocks of the JPEG2000 image compression standard. The paper proposes a Daubechies wavelet analysis, quantization, and Huffman encoding scheme which results in high compression and good-quality reconstruction.
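
    As a rough sketch of the decompose-threshold-reconstruct pipeline described above, the following uses the PyWavelets package with a Daubechies basis; the wavelet name, decomposition level, and fraction of retained coefficients are illustrative choices, and the quantization and Huffman coding stages are omitted.

      import numpy as np
      import pywt

      def compress_reconstruct(image, wavelet="db4", level=3, keep=0.05):
          # Multi-level 2-D DWT separates the image into subbands by scale/frequency.
          coeffs = pywt.wavedec2(image, wavelet=wavelet, level=level)
          arr, slices = pywt.coeffs_to_array(coeffs)
          # Keep only the largest `keep` fraction of coefficients (crude lossy compression).
          thresh = np.quantile(np.abs(arr), 1.0 - keep)
          arr = pywt.threshold(arr, thresh, mode="hard")
          coeffs = pywt.array_to_coeffs(arr, slices, output_format="wavedec2")
          return pywt.waverec2(coeffs, wavelet=wavelet)[:image.shape[0], :image.shape[1]]

      image = np.random.rand(256, 256)             # stand-in for a real image
      recon = compress_reconstruct(image)
      print("RMSE:", np.sqrt(np.mean((image - recon) ** 2)))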

  18. The cross wavelet and wavelet coherence analysis of spatio-temporal rainfall-groundwater system in Pingtung plain, Taiwan

    NASA Astrophysics Data System (ADS)

    Lin, Yuan-Chien; Yu, Hwa-Lung

    2013-04-01

    The increasing frequency and intensity of extreme rainfall events has been observed recently in Taiwan. In particular, Typhoon Morakot, Typhoon Fanapi, and Typhoon Megi consecutively brought record-breaking rainfall intensity and magnitude to different locations of Taiwan within two years. However, records show the extreme rainfall events did not elevate the amount of annual rainfall accordingly; conversely, droughts have also been occurring with increasing frequency in Taiwan. Governmental agencies and scientific communities are therefore confronted with the challenge of developing effective adaptation strategies for natural disaster reduction and a sustainable environment. Groundwater has long been a reliable water source for a variety of domestic, agricultural, and industrial uses because of its stable quantity and quality; in Taiwan, groundwater accounts for the largest proportion of all water resources, about 40%. This study aims to identify and quantify the nonlinear relationship between precipitation and groundwater recharge, and to find the non-stationary time-frequency relations between the variations of rainfall and groundwater levels in order to understand the phase difference of the time series. Groundwater level data and over 50 years of hourly rainfall records obtained from 20 weather stations in Pingtung Plain, Taiwan, have been collected. The space-time patterns are extracted by the EOF method, a decomposition of a signal or data set in terms of orthogonal basis functions determined from the data for both time series and spatial patterns, to identify the important spatial patterns of groundwater recharge, and the cross wavelet and wavelet coherence methods are used to identify the relationship between rainfall and groundwater levels. Results show that the EOF method can specify the spatial-temporal patterns which represent certain geological characteristics and other mechanisms of groundwater, and the wavelet coherence method can identify general correlation between

  19. Image registration using redundant wavelet transforms

    NASA Astrophysics Data System (ADS)

    Brown, Richard K.; Claypoole, Roger L., Jr.

    2001-12-01

    Imagery is collected much faster and in significantly greater quantities today compared to a few years ago. Accurate registration of this imagery is vital for comparing the similarities and differences between multiple images. Image registration is a significant component in computer vision and other pattern recognition problems, medical applications such as Magnetic Resonance Imaging (MRI) and Positron Emission Tomography (PET), remotely sensed data for target location and identification, and super-resolution algorithms. Since human analysis is tedious and error-prone for large data sets, we require an automatic, efficient, robust, and accurate method to register images. Wavelet transforms have proven useful for a variety of signal and image processing tasks. In our research, we present a fundamentally new wavelet-based registration algorithm utilizing redundant transforms and a masking process to suppress the adverse effects of noise and improve processing efficiency. The shift-invariant wavelet transform is applied in translation estimation and a new rotation-invariant polar wavelet transform is effectively utilized in rotation estimation. We demonstrate the robustness of these redundant wavelet transforms for the registration of two images (i.e., translating or rotating an input image to a reference image), but extensions to larger data sets are feasible. We compare the registration accuracy of our redundant wavelet transforms to the critically sampled discrete wavelet transform using the Daubechies wavelet to illustrate the power of our algorithm in the presence of significant additive white Gaussian noise and strongly translated or rotated images.

  20. 2-D wavelet with position controlled resolution

    NASA Astrophysics Data System (ADS)

    Walczak, Andrzej; Puzio, Leszek

    2005-09-01

    Wavelet transformation localizes all irregularities in the scene. It is most effective when intensities in the scene have no sharp details, a case often present in medical imaging. To identify a shape, one has to extract it from the scene as a typical irregularity. When the scene does not contain sharp changes, common differential filters are not an efficient tool for shape extraction. A new 2-D wavelet for this task is proposed. The described wavelet transform is axially symmetric and has a scale that varies with the distance from the centre of wavelet symmetry. The analytical form of the wavelet is presented, as well as its application for extracting details in the scene. The most important feature of the wavelet transform is that it provides a multi-scale transformation in which, when zooming, the wavelet selectivity varies proportionally to the zoom step. As a result, the extracted shape does not change during the zoom operation. Moreover, the wavelet selectivity can be fitted to the local intensity gradient to obtain the best extraction of irregularities.

  1. On the Stability of Collocated Controllers in the Presence of Uncertain Nonlinearities and Other Perils

    NASA Technical Reports Server (NTRS)

    Joshi, S. M.

    1985-01-01

    Robustness properties are investigated for two types of controllers for large flexible space structures, which use collocated sensors and actuators. The first type is an attitude controller which uses negative definite feedback of measured attitude and rate, while the second type is a damping enhancement controller which uses only velocity (rate) feedback. It is proved that collocated attitude controllers preserve closed loop global asymptotic stability when linear actuator/sensor dynamics satisfying certain phase conditions are present, or monotonically increasing nonlinearities are present. For velocity feedback controllers, the global asymptotic stability is proved under much weaker conditions. In particular, they have 90° phase margin and can tolerate nonlinearities belonging to the (0, ∞) sector in the actuator/sensor characteristics. The results significantly enhance the viability of both types of collocated controllers, especially when the available information about the large space structure (LSS) parameters is inadequate or inaccurate.

  2. Exponential time differencing methods with Chebyshev collocation for polymers confined by interacting surfaces

    SciTech Connect

    Liu, Yi-Xin Zhang, Hong-Dong

    2014-06-14

    We present a fast and accurate numerical method for the self-consistent field theory calculations of confined polymer systems. It introduces an exponential time differencing method (ETDRK4) based on Chebyshev collocation, which exhibits fourth-order accuracy in temporal domain and spectral accuracy in spatial domain, to solve the modified diffusion equations. Similar to the approach proposed by Hur et al. [Macromolecules 45, 2905 (2012)], non-periodic boundary conditions are adopted to model the confining walls with or without preferential interactions with polymer species, avoiding the use of surface field terms and the mask technique in a conventional approach. The performance of ETDRK4 is examined in comparison with the operator splitting methods with either Fourier collocation or Chebyshev collocation. Numerical experiments show that our exponential time differencing method is more efficient than the operator splitting methods in high accuracy calculations. This method has been applied to diblock copolymers confined by two parallel flat surfaces.

  3. Accuracy of the collocated transfer standard method for wind instrument auditing

    NASA Astrophysics Data System (ADS)

    Lockhart, Thomas J.

    1989-08-01

    The application of collocated data collection for the purpose of estimating the accuracy of an operating wind instrument requires some baseline demonstrating the best agreement which can be expected. A series of data were carefully taken in 1982 from six different collocated wind instruments. The published reports of these data suggest that the best agreement from averaged wind-speed measurements will be between 0.3 and 0.5 m/s, and for wind direction it will be 4 to 6 degrees. A new analysis of the same data reduces the best expected agreement to about 0.2 m/s and 2 degrees. The several reasons for claiming the better potential accuracy for collocated measurement (auditing) with calibrated transfer standard instruments are discussed.

  4. Using wavelets to learn pattern templates

    NASA Astrophysics Data System (ADS)

    Scott, Clayton D.; Nowak, Robert D.

    2002-07-01

    Despite the success of wavelet decompositions in other areas of statistical signal and image processing, current wavelet-based image models are inadequate for modeling patterns in images, due to the presence of unknown transformations (e.g., translation, rotation, location of lighting source) inherent in most pattern observations. In this paper we introduce a hierarchical wavelet-based framework for modeling patterns in digital images. This framework takes advantage of the efficient image representations afforded by wavelets, while accounting for unknown translation and rotation. Given a trained model, we can use this framework to synthesize pattern observations. If the model parameters are unknown, we can infer them from labeled training data using TEMPLAR (Template Learning from Atomic Representations), a novel template learning algorithm with linear complexity. TEMPLAR employs minimum description length (MDL) complexity regularization to learn a template with a sparse representation in the wavelet domain. We discuss several applications, including template learning, pattern classification, and image registration.

  5. Critically sampled wavelets with composite dilations.

    PubMed

    Easley, Glenn R; Labate, Demetrio

    2012-02-01

    Wavelets with composite dilations provide a general framework for the construction of waveforms defined not only at various scales and locations, as traditional wavelets, but also at various orientations and with different scaling factors in each coordinate. As a result, they are useful to analyze the geometric information that often dominates multidimensional data much more efficiently than traditional wavelets. The shearlet system, for example, is a particularly well-known realization of this framework, which provides optimally sparse representations of images with edges. In this paper, we further investigate the constructions derived from this approach to develop critically sampled wavelets with composite dilations for the purpose of image coding. Not only do we show that many nonredundant directional constructions recently introduced in the literature can be derived within this setting, but we also introduce new critically sampled discrete transforms that achieve much better nonlinear approximation rates than traditional discrete wavelet transforms and outperform the other critically sampled multiscale transforms recently proposed. PMID:21843993

  6. A wavelet based algorithm for the identification of oscillatory event-related potential components.

    PubMed

    Aniyan, Arun Kumar; Philip, Ninan Sajeeth; Samar, Vincent J; Desjardins, James A; Segalowitz, Sidney J

    2014-08-15

    Event-related potentials (ERPs) are very feeble alterations in the ongoing electroencephalogram (EEG) and their detection is a challenging problem. Based on the unique time-based parameters derived from wavelet coefficients and the asymmetry property of wavelets, a novel algorithm to separate ERP components in single-trial EEG data is described. Though illustrated as a specific application to N170 ERP detection, the algorithm is a generalized approach that can be easily adapted to isolate different kinds of ERP components. The algorithm detected the N170 ERP component with a high level of accuracy. We demonstrate that the asymmetry method is more accurate than the matching wavelet algorithm and t-CWT method by 48.67 and 8.03 percent, respectively. This paper provides an off-line demonstration of the algorithm and considers issues related to the extension of the algorithm to real-time applications. PMID:24931710

  7. Best parameters selection for wavelet packet-based compression of magnetic resonance images.

    PubMed

    Abu-Rezq, A N; Tolba, A S; Khuwaja, G A; Foda, S G

    1999-10-01

    Transmission of compressed medical images is becoming a vital tool in telemedicine. Thus new methods are needed for efficient image compression. This study discovers the best design parameters for a data compression scheme applied to digital magnetic resonance (MR) images. The proposed technique aims at reducing the transmission cost while preserving the diagnostic information. By selecting the wavelet packet's filters, decomposition level, and subbands that are better adapted to the frequency characteristics of the image, one may achieve better image representation in the sense of lower entropy or minimal distortion. Experimental results show that the selection of the best parameters has a dramatic effect on the data compression rate of MR images. In all cases, decomposition at three or four levels with the Coiflet 5 wavelet (Coif 5) results in better compression performance than the other wavelets. Image resolution is found to have a remarkable effect on the compression rate. PMID:10529302
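
    The kind of parameter search described above (trying several wavelets and decomposition depths and ranking them by how compactly they represent the image) can be sketched with PyWavelets as follows; the candidate wavelets, the depth range, and the simple l1-norm cost used as a stand-in for the study's entropy/distortion criteria are illustrative assumptions.

      import numpy as np
      import pywt

      def packet_cost(image, wavelet, level):
          # Full 2-D wavelet-packet decomposition; the sum of absolute coefficient
          # values serves as a crude sparsity proxy (smaller means more compressible).
          wp = pywt.WaveletPacket2D(data=image, wavelet=wavelet,
                                    mode="periodization", maxlevel=level)
          return sum(np.abs(node.data).sum() for node in wp.get_level(level))

      image = np.random.rand(128, 128)             # stand-in for an MR slice
      candidates = [(w, lvl) for w in ("coif5", "db4", "bior4.4") for lvl in (3, 4)]
      best_wavelet, best_level = min(candidates, key=lambda c: packet_cost(image, *c))
      print("best parameters:", best_wavelet, best_level)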

  8. CW-THz image contrast enhancement using wavelet transform and Retinex

    NASA Astrophysics Data System (ADS)

    Chen, Lin; Zhang, Min; Hu, Qi-fan; Huang, Ying-Xue; Liang, Hua-Wei

    2015-10-01

    To enhance the contrast of continuous wave terahertz (CW-THz) scanning images and reduce their noise, a method based on the wavelet transform and Retinex theory is proposed. In this paper, the factors affecting the quality of CW-THz images were first analysed. Second, a combination of the discrete wavelet transform (DWT) and a designed nonlinear function in the wavelet domain was applied for contrast enhancement. The Retinex algorithm was then combined for further contrast enhancement. To evaluate the effectiveness of the proposed method qualitatively and quantitatively, it was compared with the adaptive histogram equalization method, the homomorphic filtering method, and the SSR (Single-Scale Retinex) method. Experimental results demonstrate that the presented algorithm can effectively enhance the contrast of CW-THz images and obtain better visual effects.
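
    A rough sketch of the two stages combined above, a nonlinear gain applied to DWT detail coefficients followed by a single-scale Retinex step, is given below; the tanh-style gain, its parameters, and the Gaussian surround scale are illustrative assumptions rather than the authors' exact design.

      import numpy as np
      import pywt
      from scipy.ndimage import gaussian_filter

      def enhance(image, wavelet="db4", level=2, gain=2.0, sigma=15.0):
          # Stage 1: boost detail (high-frequency) subbands with a saturating nonlinearity.
          coeffs = pywt.wavedec2(image, wavelet=wavelet, level=level)
          boosted = [coeffs[0]] + [
              tuple(np.tanh(gain * d) / np.tanh(gain) for d in details)
              for details in coeffs[1:]
          ]
          sharpened = pywt.waverec2(boosted, wavelet=wavelet)
          # Stage 2: single-scale Retinex, log(image) minus log of its Gaussian surround.
          positive = sharpened - sharpened.min() + 1e-6
          return np.log(positive) - np.log(gaussian_filter(positive, sigma))

      image = np.random.rand(256, 256)             # stand-in for a CW-THz scan
      enhanced = enhance(image)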

  9. [Ultrasound image de-noising based on nonlinear diffusion of complex wavelet transform].

    PubMed

    Hou, Wen; Wu, Yiquan

    2012-04-01

    Ultrasound images are easily corrupted by speckle noise, which limits their further application in medical diagnoses. An image de-noising method combining the dual-tree complex wavelet transform (DT-CWT) with nonlinear diffusion is proposed in this paper. First, an image is decomposed by the DT-CWT. Then adaptive-contrast-factor diffusion and total variation diffusion are applied to the high-frequency and low-frequency components, respectively. Finally, the image is synthesized. The experimental results are given, and the image de-noising results are compared with those of methods based on the combination of wavelet shrinkage with total variation diffusion and the combination of wavelets/multiwavelets with nonlinear diffusion. It is shown that the proposed image de-noising method based on DT-CWT and nonlinear diffusion obtains superior results: it both removes speckle noise and preserves the original edges and textural features more efficiently. PMID:22616185

  10. An Investigation of Wavelet Bases for Grid-Based Multi-Scale Simulations Final Report

    SciTech Connect

    Baty, R.S.; Burns, S.P.; Christon, M.A.; Roach, D.W.; Trucano, T.G.; Voth, T.E.; Weatherby, J.R.; Womble, D.E.

    1998-11-01

    The research summarized in this report is the result of a two-year effort that has focused on evaluating the viability of wavelet bases for the solution of partial differential equations. The primary objective for this work has been to establish a foundation for hierarchical/wavelet simulation methods based upon numerical performance, computational efficiency, and the ability to exploit the hierarchical adaptive nature of wavelets. This work has demonstrated that hierarchical bases can be effective for problems with a dominant elliptic character. However, the strict enforcement of orthogonality was found to be less desirable than weaker semi-orthogonality or bi-orthogonality for solving partial differential equations. This conclusion has led to the development of a multi-scale linear finite element based on a hierarchical change of basis. The reproducing kernel particle method has been found to yield extremely accurate phase characteristics for hyperbolic problems while providing a convenient framework for multi-scale analyses.

  11. Wavelet packets feasibility study for the design of an ECG compressor.

    PubMed

    Blanco-Velasco, Manuel; Cruz-Roldán, Fernando; Godino-Llorente, Juan Ignacio; Barner, Kenneth E

    2007-04-01

    Most of the recent electrocardiogram (ECG) compression approaches developed with the wavelet transform are implemented using the discrete wavelet transform. Conversely, wavelet packets (WP) are not extensively used, although they are an adaptive decomposition for representing signals. In this paper, we present a thresholding-based method to encode ECG signals using WP. The design of the compressor has been carried out according to two main goals: (1) the scheme should be simple to allow real-time implementation; (2) quality, i.e., the reconstructed signal should be as similar as possible to the original signal. The proposed scheme is versatile in that neither QRS detection nor a priori signal information is required; as such, it can be applied to any ECG. Results show that WP perform efficiently and can now be considered as an alternative in ECG compression applications. PMID:17405386
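
    A minimal sketch of a thresholding-based wavelet-packet encoder of the kind described above, written with PyWavelets; the basis, depth, and fraction of retained coefficients are illustrative assumptions, and the actual bit-packing of the surviving coefficients is left out. PRD (percentage root-mean-square difference) is the usual quality measure in ECG compression.

      import numpy as np
      import pywt

      def wp_threshold(signal, wavelet="db4", level=4, keep=0.10):
          # Full wavelet-packet decomposition of the ECG segment.
          wp = pywt.WaveletPacket(data=signal, wavelet=wavelet,
                                  mode="periodization", maxlevel=level)
          nodes = wp.get_level(level, order="natural")
          flat = np.concatenate([n.data for n in nodes])
          thresh = np.quantile(np.abs(flat), 1.0 - keep)   # keep the largest `keep` fraction
          for n in nodes:
              n.data = pywt.threshold(n.data, thresh, mode="hard")
          return wp.reconstruct(update=False)

      ecg = np.sin(np.linspace(0, 20 * np.pi, 2048))       # stand-in for a real ECG trace
      rec = wp_threshold(ecg)[:2048]
      prd = 100 * np.linalg.norm(ecg - rec) / np.linalg.norm(ecg)
      print("PRD (%):", prd)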

  12. Wavelet Analysis of Umbral Oscillations

    NASA Astrophysics Data System (ADS)

    Christopoulou, E. B.; Skodras, A.; Georgakilas, A. A.; Koutchmy, S.

    2003-07-01

    We study the temporal behavior of the intensity and velocity chromospheric umbral oscillations, applying wavelet analysis techniques to four sets of observations in the Hα line and one set of simultaneous observations in the Hα and the nonmagnetic Fe I (5576.099 Å) line. The wavelet and Fourier power spectra of the intensity and the velocity at chromospheric levels show both 3 and 5 minute oscillations. Oscillations in the 5 minute band are prominent in the intensity power spectra; they are significantly reduced in the velocity power spectra. We observe multiple peaks of closely spaced cospatial frequencies in the 3 minute band (5-8 mHz). Typically, there are three oscillating modes present: (1) a major one near 5.5 mHz, (2) a secondary near 6.3 mHz, and (3) oscillations with time-varying frequencies around 7.5 mHz that are present for limited time intervals. In the frame of current theories, the oscillating mode near 5.5 mHz should be considered as a fingerprint of the photospheric resonator, while the other two modes can be better explained by the chromospheric resonator. The wavelet spectra show a dynamic temporal behavior of the 3 minute oscillations. We observed (1) frequency drifts, (2) modes that are stable over a long time and then fade away or split up into two oscillation modes, and (3) suppression of frequencies for short time intervals. This behavior can be explained by the coupling between modes closely spaced in frequency or/and by long-term variations of the driving source of the resonators. Based on observations performed on the National Solar Observatory/Sacramento Peak Observatory Richard B. Dunn Solar Telescope (DST) and on the Big Bear Solar Observatory Harold Zirin Telescope.

  13. Wavelet Algorithms for Illumination Computations

    NASA Astrophysics Data System (ADS)

    Schroder, Peter

    One of the core problems of computer graphics is the computation of the equilibrium distribution of light in a scene. This distribution is given as the solution to a Fredholm integral equation of the second kind involving an integral over all surfaces in the scene. In the general case such solutions can only be numerically approximated, and are generally costly to compute, due to the geometric complexity of typical computer graphics scenes. For this computation both Monte Carlo and finite element techniques (or hybrid approaches) are typically used. A simplified version of the illumination problem is known as radiosity, which assumes that all surfaces are diffuse reflectors. For this case hierarchical techniques, first introduced by Hanrahan et al. (32), have recently gained prominence. The hierarchical approaches lead to an asymptotic improvement when only finite precision is required. The resulting algorithms have cost proportional to O(k^2 + n) versus the usual O(n^2) (k is the number of input surfaces, n the number of finite elements into which the input surfaces are meshed). Similarly a hierarchical technique has been introduced for the more general radiance problem (which allows glossy reflectors) by Aupperle et al. (6). In this dissertation we show the equivalence of these hierarchical techniques to the use of a Haar wavelet basis in a general Galerkin framework. By so doing, we come to a deeper understanding of the properties of the numerical approximations used and are able to extend the hierarchical techniques to higher orders. In particular, we show the correspondence of the geometric arguments underlying hierarchical methods to the theory of Calderon-Zygmund operators and their sparse realization in wavelet bases. The resulting wavelet algorithms for radiosity and radiance are analyzed and numerical results achieved with our implementation are reported. We find that the resulting algorithms achieve smaller and smoother errors at equivalent work.

  14. Reservoir characterization using wavelet transforms

    NASA Astrophysics Data System (ADS)

    Rivera Vega, Nestor

    Automated detection of geological boundaries and determination of cyclic events controlling deposition can facilitate stratigraphic analysis and reservoir characterization. This study applies the wavelet transformation, a recent advance in signal analysis techniques, to interpret cyclicity, determine its controlling factors, and detect zone boundaries. We tested the cyclostratigraphic assessments using well log and core data from a well in a fluvio-eolian sequence in the Ormskirk Sandstone, Irish Sea. The boundary detection technique was tested using log data from 10 wells in the Apiay field, Colombia. We processed the wavelet coefficients for each zone of the Ormskirk Formation and determined the wavelengths of the strongest cyclicities. Comparing these periodicities with Milankovitch cycles, we found a strong correspondence of the two. This suggests that climate exercised an important control on depositional cyclicity, as had been concluded in previous studies of the Ormskirk Sandstone. The wavelet coefficients from the log data in the Apiay field were combined to form features. These vectors were used in conjunction with pattern recognition techniques to perform detection in 7 boundaries. For the upper two units, the boundary was detected within 10 feet of their actual depth, in 90% of the wells. The mean detection performance in the Apiay field is 50%. We compared our method with other traditional techniques which do not focus on selecting optimal features for boundary identification. Those methods resulted in detection performances of 40% for the uppermost boundary, which lag behind the 90% performance of our method. Automated determination of geologic boundaries will expedite studies, and knowledge of the controlling deposition factors will enhance stratigraphic and reservoir characterization models. We expect that automated boundary detection and cyclicity analysis will prove to be valuable and time-saving methods for establishing correlations and their

  15. Preconditioning cubic spline collocation method by FEM and FDM for elliptic equations

    SciTech Connect

    Kim, Sang Dong

    1996-12-31

    In this talk we discuss the finite element and finite difference technique for the cubic spline collocation method. For this purpose, we consider the uniformly elliptic operator A defined by Au := -Δu + a_1 u_x + a_2 u_y + a_0 u in Ω (the unit square) with Dirichlet or Neumann boundary conditions and its discretization based on Hermite cubic spline spaces and collocation at the Gauss points. Using an interpolatory basis with support on the Gauss points, one obtains the matrix A_N (h = 1/N).

  16. On the collocation methods for singular integral equations with Hilbert kernel

    NASA Astrophysics Data System (ADS)

    Du, Jinyuan

    2009-06-01

    In the present paper, we introduce some singular integral operators, singular quadrature operators and discretization matrices of singular integral equations with Hilbert kernel. These results both improve the classical theory of singular integral equations and develop the theory of singular quadrature with Hilbert kernel. Then by using them a unified framework for various collocation methods of numerical solutions of singular integral equations with Hilbert kernel is given. Under the framework, it is very simple and obvious to obtain the coincidence theorem of collocation methods, then the existence and convergence for constructing approximate solutions are also given based on the coincidence theorem.

  17. The double exponential sinc collocation method for singular Sturm-Liouville problems

    NASA Astrophysics Data System (ADS)

    Gaudreau, P.; Slevinsky, R.; Safouhi, H.

    2016-04-01

    Sturm-Liouville problems are abundant in the numerical treatment of scientific and engineering problems. In the present contribution, we present an efficient and highly accurate method for computing eigenvalues of singular Sturm-Liouville boundary value problems. The proposed method uses the double exponential formula coupled with the sinc collocation method. This method produces a symmetric positive-definite generalized eigenvalue system and has an exponential convergence rate. Numerical examples are presented, and comparisons with the single exponential sinc collocation method clearly illustrate the advantage of using the double exponential formula.
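
    For reference, the two standard ingredients the method couples are the sinc basis on a uniform grid of step h and a double exponential variable transformation; for a problem posed on (-1, 1) a commonly used form is (an illustrative choice of map, not necessarily the exact one used in the paper)

      \[
        S(j,h)(t) = \operatorname{sinc}\!\left(\frac{t - jh}{h}\right),
        \qquad
        \operatorname{sinc}(x) = \frac{\sin(\pi x)}{\pi x},
        \qquad
        x = \phi(t) = \tanh\!\left(\frac{\pi}{2}\sinh t\right),
      \]

    so that under the change of variable x = φ(t) the transformed eigenfunctions decay double exponentially and the sinc expansion converges rapidly in the number of collocation points.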

  18. Wavelet Regularization Per Nullspace Shuttle

    NASA Astrophysics Data System (ADS)

    Charléty, J.; Nolet, G.; Sigloch, K.; Voronin, S.; Loris, I.; Simons, F. J.; Daubechies, I.; Judd, S.

    2010-12-01

    Wavelet decomposition of models in an over-parameterized Earth and L1-norm minimization in wavelet space is a promising strategy to deal with the very heterogeneous data coverage in the Earth without sacrificing detail in the solution where this is resolved (see Loris et al., abstract this session). However, L1-norm minimizations are nonlinear, and pose problems of convergence speed when applied to large data sets. In an effort to speed up computations we investigate the application of the nullspace shuttle (Deal and Nolet, GJI 1996). The nullspace shuttle is a filter that adds components from the nullspace to the minimum norm solution so as to have the model satisfy additional conditions not imposed by the data. In our case, the nullspace shuttle projects the model on a truncated basis of wavelets. The convergence of this strategy is unproven, in contrast to algorithms using Landweber iteration or one of its variants, but initial computations using a very large data base give reason for optimism. We invert 430,554 P delay times measured by cross-correlation in different frequency windows. The data are dominated by observations with US Array, leading to a major discrepancy in the resolution beneath North America and the rest of the world. This is a subset of the data set inverted by Sigloch et al (Nature Geosci, 2008), excluding only a small number of ISC delays at short distance and all amplitude data. The model is a cubed Earth model with 3,637,248 voxels spanning mantle and crust, with a resolution everywhere better than 70 km, to which 1912 event corrections are added. In each iteration we determine the optimal solution by a least squares inversion with minimal damping, after which we regularize the model in wavelet space. We then compute the residual data vector (after an intermediate scaling step), and solve for a model correction until a satisfactory chi-square fit for the truncated model is obtained. We present our final results on convergence as well as a

  19. Seamless multiresolution isosurfaces using wavelets

    SciTech Connect

    Udeshi, T.; Hudson, R.; Papka, M. E.

    2000-04-11

    Data sets that are being produced by today's simulations, such as the ones generated by DOE's ASCI program, are too large for real-time exploration and visualization. Therefore, new methods of visualizing these data sets need to be investigated. The authors present a method that combines isosurface representations of different resolutions into a seamless solution, virtually free of cracks and overlaps. The solution combines existing isosurface generation algorithms and wavelet theory to produce a real-time solution to multiple-resolution isosurfaces.

  20. The Challenge of English Language Collocation Learning in an ES/FL Environment: PRC Students in Singapore

    ERIC Educational Resources Information Center

    Ying, Yang

    2015-01-01

    This study aimed to seek an in-depth understanding about English collocation learning and the development of learner autonomy through investigating a group of English as a Second Language (ESL) learners' perspectives and practices in their learning of English collocations using an AWARE approach. A group of 20 PRC students learning English in…

  1. The Relationship between Experiential Learning Styles and the Immediate and Delayed Retention of English Collocations among EFL Learners

    ERIC Educational Resources Information Center

    Mohammadzadeh, Afsaneh

    2012-01-01

    This study was carried out to find out if there was any significant difference in learning English collocations by learning with different dominant experiential learning styles. Seventy-five participants took part in the study in which they were taught a series of English collocations. The entry knowledge of the participants with regard to…

  2. Formulaic Language and Collocations in German Essays: From Corpus-Driven Data to Corpus-Based Materials

    ERIC Educational Resources Information Center

    Krummes, Cedric; Ensslin, Astrid

    2015-01-01

    Whereas there exists a plethora of research on collocations and formulaic language in English, this article contributes towards a somewhat less developed area: the understanding and teaching of formulaic language in German as a foreign language. It analyses formulaic sequences and collocations in German writing (corpus-driven) and provides modern…

  3. An Automatic Collocation Writing Assistant for Taiwanese EFL Learners: A Case of Corpus-Based NLP Technology

    ERIC Educational Resources Information Center

    Chang, Yu-Chia; Chang, Jason S.; Chen, Hao-Jan; Liou, Hsien-Chin

    2008-01-01

    Previous work in the literature reveals that EFL learners were deficient in collocations that are a hallmark of near native fluency in learner's writing. Among different types of collocations, the verb-noun (V-N) one was found to be particularly difficult to master, and learners' first language was also found to heavily influence their collocation…

  4. Multiadaptive Bionic Wavelet Transform: Application to ECG Denoising and Baseline Wandering Reduction

    NASA Astrophysics Data System (ADS)

    Sayadi, Omid; Shamsollahi, Mohammad B.

    2007-12-01

    We present a new modified wavelet transform, called the multiadaptive bionic wavelet transform (MABWT), that can be applied to ECG signals in order to remove noise from them under a wide range of variations for noise. By using the definition of bionic wavelet transform and adaptively determining both the center frequency of each scale together with the[InlineEquation not available: see fulltext.]-function, the problem of desired signal decomposition is solved. Applying a new proposed thresholding rule works successfully in denoising the ECG. Moreover by using the multiadaptation scheme, lowpass noisy interference effects on the baseline of ECG will be removed as a direct task. The method was extensively clinically tested with real and simulated ECG signals which showed high performance of noise reduction, comparable to those of wavelet transform (WT). Quantitative evaluation of the proposed algorithm shows that the average SNR improvement of MABWT is 1.82 dB more than the WT-based results, for the best case. Also the procedure has largely proved advantageous over wavelet-based methods for baseline wandering cancellation, including both DC components and baseline drifts.

  5. Fast multi-scale edge detection algorithm based on wavelet transform

    NASA Astrophysics Data System (ADS)

    Zang, Jie; Song, Yanjun; Li, Shaojuan; Luo, Guoyun

    2011-11-01

    Traditional edge detection algorithms amplify noise to some extent, introducing large errors, so their edge detection ability is limited. When analysing the low-frequency content of an image, wavelet analysis can reduce the time resolution; for the high-frequency content, it can focus on the transient characteristics of the signal at high time resolution while reducing the frequency resolution. Because of this adaptivity to the signal, the wavelet transform can extract useful information from the edges of an image. The wavelet transform operates at various scales, and each scale provides certain edge information, hence the term multi-scale edge detection. In multi-scale edge detection, the original signal is first smoothed at different scales, and the abrupt changes of the original signal are then detected from the first or second derivative of the smoothed signal; these abrupt changes are the edges. Edge detection is thus equivalent to signal detection in different frequency bands after wavelet decomposition. This article uses this algorithm, which takes into account both the details and the profile of the image, to detect signal changes at different scales, providing the edge information needed for image analysis, target recognition, and machine vision, with good results.
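
    The multi-scale procedure described above can be sketched generically as follows; a derivative-of-Gaussian response stands in for the wavelet response at each scale (the classic example of a first-derivative-of-smoothing-kernel wavelet), and the scale set and threshold are illustrative choices.

      import numpy as np
      from scipy import ndimage

      def multiscale_edges(image, scales=(1.0, 2.0, 4.0), rel_thresh=0.2):
          # At each scale, smooth the image and take the gradient magnitude, i.e. the
          # response of a first-derivative-of-smoothing-kernel "wavelet" at that scale.
          edge_maps = []
          for s in scales:
              gx = ndimage.gaussian_filter(image, s, order=(0, 1))
              gy = ndimage.gaussian_filter(image, s, order=(1, 0))
              mag = np.hypot(gx, gy)
              edge_maps.append(mag > rel_thresh * mag.max())
          # Keep the points supported at every scale, so that both fine detail and
          # the coarse profile contribute to the final edge map.
          return np.logical_and.reduce(edge_maps)

      image = np.zeros((64, 64))
      image[:, 32:] = 1.0                          # toy step edge
      edges = multiscale_edges(image)
      print("edge pixels:", int(edges.sum()))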

  6. Directional dual-tree complex wavelet packet transforms for processing quadrature signals.

    PubMed

    Serbes, Gorkem; Gulcur, Halil Ozcan; Aydin, Nizamettin

    2016-03-01

    Quadrature signals containing in-phase and quadrature-phase components are used in many signal processing applications in every field of science and engineering. Specifically, Doppler ultrasound systems used to evaluate cardiovascular disorders noninvasively also result in quadrature format signals. In order to obtain directional blood flow information, the quadrature outputs have to be preprocessed using methods such as asymmetrical and symmetrical phasing filter techniques. These resultant directional signals can be employed in order to detect asymptomatic embolic signals caused by small emboli, which are indicators of a possible future stroke, in the cerebral circulation. Various transform-based methods such as Fourier and wavelet have frequently been used in processing embolic signals. However, in most cases, the Fourier and discrete wavelet transforms are not appropriate for the analysis of embolic signals due to the non-stationary time-frequency behavior of these signals. Alternatively, the discrete wavelet packet transform can perform an adaptive decomposition of the time-frequency axis. In this study, directional discrete wavelet packet transforms, which have the ability to map directional information while processing quadrature signals and have less computational complexity than the existing wavelet packet-based methods, are introduced. The performances of the proposed methods are examined in detail by using single-frequency, synthetic narrow-band, and embolic quadrature signals. PMID:25388779

  7. Wavelet Neural Network Using Multiple Wavelet Functions in Target Threat Assessment

    PubMed Central

    Guo, Lihong; Duan, Hong

    2013-01-01

    Target threat assessment is a key issue in the collaborative attack. To improve the accuracy and usefulness of target threat assessment in aerial combat, we propose a variant of wavelet neural networks, the MWFWNN network, to solve threat assessment. Selecting the appropriate wavelet function is difficult when constructing a wavelet neural network. This paper proposes a wavelet mother function selection algorithm with minimum mean squared error and then constructs the MWFWNN network using this algorithm. First, a wavelet function library is established; second, a wavelet neural network is constructed with each wavelet mother function in the library, and the wavelet function parameters and network weights are updated according to the relevant update formulas. The constructed wavelet neural networks are evaluated on the training set, and the wavelet function with minimum mean squared error is then chosen to build the MWFWNN network. Experimental results show that the mean squared error is 1.23 × 10⁻³, which is better than WNN, BP, and PSO_SVM. The target threat assessment model based on the MWFWNN has good predictive ability, so it can quickly and accurately complete target threat assessment. PMID:23509436

  8. Wavelet analysis of electron-density maps.

    PubMed

    Main, P; Wilson, J

    2000-05-01

    The wavelet transform is a powerful technique in signal processing and image analysis and it is shown here that wavelet analysis of low-resolution electron-density maps has the potential to increase their resolution. Like Fourier analysis, wavelet analysis expresses the image (electron density) in terms of a set of orthogonal functions. In the case of the Fourier transform, these functions are sines and cosines and each one contributes to the whole of the image. In contrast, the wavelet functions (simply called wavelets) can be quite localized and may only contribute to a small part of the image. This gives control over the amount of detail added to the map as the resolution increases. The mathematical details are outlined and an algorithm which achieves a resolution increase from 10 to 7 A using a knowledge of the wavelet-coefficient histograms, electron-density histogram and the observed structure amplitudes is described. These histograms are calculated from the electron density of known structures, but it seems likely that the histograms can be predicted, just as electron-density histograms are at high resolution. The results show that the wavelet coefficients contain the information necessary to increase the resolution of electron-density maps. PMID:10771431

  9. Application of wavelets to automatic target recognition

    NASA Astrophysics Data System (ADS)

    Stirman, Charles

    1995-03-01

    'Application of Wavelets to Automatic Target Recognition' is the second phase of a multiphase project to insert compactly supported wavelets into an existing or near-term Department of Defense system such as the Longbow fire control radar for the Apache Attack Helicopter. In this contract, we have concentrated mainly on the classifier function. During the first phase of the program ('Application of Wavelets to Radar Data Processing'), the feasibility of using wavelets to process high range resolution profile (HRRP) amplitude returns from a wide bandwidth radar system was demonstrated. This phase obtained fully polarized wide bandwidth radar HRRP amplitude returns and processed them with wavelet and wavelet packet (or best basis) transforms. Then, by mathematically defined nonlinear feature selection, we showed that significant improvements in the probability of correct classification are possible, up to 14 percentage points maximum (4 percentage points average) compared to the current classifier performance. In addition, we addressed the feasibility of using the wavelet packet best basis for target registration, man-made object rejection, clutter discrimination, and synthetic aperture radar scene speckle removal and object registration.

  10. Applications of a fast, continuous wavelet transform

    SciTech Connect

    Dress, W.B.

    1997-02-01

    A fast, continuous, wavelet transform, based on Shannon's sampling theorem in frequency space, has been developed for use with continuous mother wavelets and sampled data sets. The method differs from the usual discrete-wavelet approach and the continuous-wavelet transform in that, here, the wavelet is sampled in the frequency domain. Since Shannon's sampling theorem lets us view the Fourier transform of the data set as a continuous function in frequency space, the continuous nature of the functions is kept up to the point of sampling the scale-translation lattice, so the scale-translation grid used to represent the wavelet transform is independent of the time-domain sampling of the signal under analysis. Computational cost and nonorthogonality aside, the inherent flexibility and shift invariance of the frequency-space wavelets have advantages. The method has been applied to forensic audio reconstruction, speaker recognition/identification, and the detection of micromotions of heavy vehicles associated with ballistocardiac impulses originating from occupants' heart beats. Audio reconstruction is aided by selection of desired regions in the 2-D representation of the magnitude of the transformed signal. The inverse transform is applied to ridges and selected regions to reconstruct areas of interest, unencumbered by noise interference lying outside these regions. To separate micromotions imparted to a mass-spring system (e.g., a vehicle) by an occupant's beating heart from gross mechanical motions due to wind and traffic vibrations, a continuous frequency-space wavelet, modeled on the frequency content of a canonical ballistocardiogram, was used to analyze time series taken from geophone measurements of vehicle micromotions. By using a family of mother wavelets, such as a set of Gaussian derivatives of various orders, features such as the glottal closing rate and word and phrase segmentation may be extracted from voice data.
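
    The core of the frequency-space approach, sampling the mother wavelet directly in the frequency domain and doing one FFT of the data followed by one inverse FFT per scale, can be sketched as follows with an analytic Morlet wavelet; the wavelet choice, its centre frequency, and the scale grid are illustrative assumptions, not the specific wavelets used in the report.

      import numpy as np

      def fft_cwt(signal, scales, fs=1.0, w0=6.0):
          # Continuous wavelet transform computed scale by scale in frequency space:
          # W(a, .) = ifft( fft(x) * conj(psi_hat(a * omega)) ) * sqrt(a).
          n = len(signal)
          sig_hat = np.fft.fft(signal)
          omega = 2.0 * np.pi * np.fft.fftfreq(n, d=1.0 / fs)
          out = np.empty((len(scales), n), dtype=complex)
          for i, a in enumerate(scales):
              # Analytic Morlet wavelet sampled directly on the frequency grid.
              psi_hat = np.pi ** -0.25 * np.exp(-0.5 * (a * omega - w0) ** 2) * (omega > 0)
              out[i] = np.fft.ifft(sig_hat * np.conj(psi_hat)) * np.sqrt(a)
          return out

      fs = 100.0
      t = np.arange(0, 4.0, 1.0 / fs)
      x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 20 * t)
      freqs = np.array([5.0, 20.0])
      scales = 6.0 / (2 * np.pi * freqs)           # Morlet scale <-> frequency relation
      W = fft_cwt(x, scales, fs=fs)
      print(np.abs(W).mean(axis=1))                # response at each analysed frequency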

  11. Electroencephalographic compression based on modulated filter banks and wavelet transform.

    PubMed

    Bazán-Prieto, Carlos; Cárdenas-Barrera, Julián; Blanco-Velasco, Manuel; Cruz-Roldán, Fernando

    2011-01-01

    Due to the large volume of information generated in an electroencephalographic (EEG) study, compression is needed for storage, processing, or transmission for analysis. In this paper we evaluate and compare two lossy compression techniques applied to EEG signals, comparing the performance of compression schemes based on decomposition by filter banks or by the wavelet packet transform and seeking the best compression ratio, the best quality, and the most efficient real-time implementation. Due to specific properties of EEG signals, we propose a quantization stage adapted to the dynamic range of each band, aiming for higher quality. The results show that the filter-bank compressor performs better than the transform methods, and that quantization adapted to the dynamic range significantly enhances the quality. PMID:22255966

  12. Seismic porosity mapping in the Ekofisk Field using a new form of collocated cokriging

    SciTech Connect

    Doyen, P.M.; Boer, L.D. den; Pillet, W.R.

    1996-12-31

    An important practical problem in the geosciences is the integration of seismic attribute information in subsurface mapping applications. The aim is to utilize a more densely sampled secondary variable such as seismic impedance to guide the interpolation of a related primary variable such as porosity. The collocated cokriging technique was recently introduced to facilitate the integration process. Here we propose a simplified implementation of collocated cokriging based on a Bayesian updating rule. We demonstrate that the cokriging estimate at one point can be obtained by direct updating of the kriging estimate with the collocated secondary data. The linear update only requires knowledge of the kriging variance and the coefficient(s) of correlation between primary and secondary variables. No cokriging system need be solved and no reference to spatial cross-covariances is required. The new form of collocated cokriging is applied to predict the lateral variations of porosity in a reservoir layer of the Ekofisk Field, Norwegian North Sea. A cokriged porosity map is obtained by combining zone average porosity data at more than one hundred wells and acoustic impedance information extracted from a 3-D seismic survey. Utilization of the seismic information yields a more detailed and reliable image of the porosity distribution along the flanks of the producing structure.

  13. Collocational Differences between L1 and L2: Implications for EFL Learners and Teachers

    ERIC Educational Resources Information Center

    Sadeghi, Karim

    2009-01-01

    Collocations are one of the areas that produce problems for learners of English as a foreign language. Iranian learners of English are by no means an exception. Teaching experience at schools, private language centers, and universities in Iran suggests that a significant part of EFL learners' problems with producing the language, especially at…

  14. Your Participation Is "Greatly/Highly" Appreciated: Amplifier Collocations in L2 English

    ERIC Educational Resources Information Center

    Edmonds, Amanda; Gudmestad, Aarnes

    2014-01-01

    The current study sets out to investigate collocational knowledge for a set of 13 English amplifiers among native and nonnative speakers of English, by providing a partial replication of one of the projects reported on in Granger (1998). The project combines both phraseological and distributional approaches to research into formulaic language to…

  15. The Role of Language for Thinking and Task Selection in EFL Learners' Oral Collocational Production

    ERIC Educational Resources Information Center

    Wang, Hung-Chun; Shih, Su-Chin

    2011-01-01

    This study investigated how English as a foreign language (EFL) learners' types of language for thinking and types of oral elicitation tasks influence their lexical collocational errors in speech. Data were collected from 42 English majors in Taiwan using two instruments: (1) 3 oral elicitation tasks and (2) an inner speech questionnaire. The…

  16. Evaluating Remotely-Sensed Surface Soil Moisture Estimates Using Triple Collocation

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Recent work has demonstrated the potential of enhancing remotely-sensed surface soil moisture validation activities through the application of triple collocation techniques which compare time series of three mutually independent geophysical variable estimates in order to acquire the root-mean-square...
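
    As a reminder of the mechanics, classical triple collocation estimates the error variance of each of three mutually independent, already rescaled estimates of the same geophysical variable from covariances of their pairwise differences. The sketch below uses synthetic placeholder series; no claim is made about the datasets used in the actual validation work.

      import numpy as np

      def triple_collocation(x, y, z):
          # With mutually independent errors that are also independent of the truth,
          # Cov(x - y, x - z) isolates the error variance of x (and likewise for y, z).
          def cov(a, b):
              return np.mean(a * b) - np.mean(a) * np.mean(b)
          return cov(x - y, x - z), cov(y - x, y - z), cov(z - x, z - y)

      rng = np.random.default_rng(0)
      truth = rng.standard_normal(10_000)
      x = truth + 0.10 * rng.standard_normal(10_000)   # e.g. satellite retrieval
      y = truth + 0.20 * rng.standard_normal(10_000)   # e.g. land-surface model
      z = truth + 0.30 * rng.standard_normal(10_000)   # e.g. in situ network
      print([round(v, 4) for v in triple_collocation(x, y, z)])
      # expected error variances: about 0.01, 0.04, 0.09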

  17. Incorporating Corpus Technology to Facilitate Learning of English Collocations in a Thai University EFL Writing Course

    ERIC Educational Resources Information Center

    Chatpunnarangsee, Kwanjira

    2013-01-01

    The purpose of this study is to explore ways of incorporating web-based concordancers for the purpose of teaching English collocations. A mixed-methods design utilizing a case study strategy was employed to uncover four specific dimensions of corpus use by twenty-four students in two classroom sections of a writing course at a university in…

  18. Verb-Noun Collocations in Second Language Writing: A Corpus Analysis of Learners' English

    ERIC Educational Resources Information Center

    Laufer, Batia; Waldman, Tina

    2011-01-01

    The present study investigates the use of English verb-noun collocations in the writing of native speakers of Hebrew at three proficiency levels. For this purpose, we compiled a learner corpus that consists of about 300,000 words of argumentative and descriptive essays. For comparison purposes, we selected LOCNESS, a corpus of young adult native…

  19. Explicit and Implicit Lexical Knowledge: Acquisition of Collocations under Different Input Conditions

    ERIC Educational Resources Information Center

    Sonbul, Suhad; Schmitt, Norbert

    2013-01-01

    To date, there has been little empirical research exploring the relationship between implicit and explicit lexical knowledge (of collocations). As a first step in addressing this gap, two laboratory experiments were conducted that evaluate different conditions (enriched, enhanced, and decontextualized) under which both adult native speakers…

  20. Collocational Processing in Light of the Phraseological Continuum Model: Does Semantic Transparency Matter?

    ERIC Educational Resources Information Center

    Gyllstad, Henrik; Wolter, Brent

    2016-01-01

    The present study investigates whether two types of word combinations (free combinations and collocations) differ in terms of processing by testing Howarth's Continuum Model based on word combination typologies from a phraseological tradition. A visual semantic judgment task was administered to advanced Swedish learners of English (n = 27) and…

  1. Strategies in Translating Collocations in Religious Texts from Arabic into English

    ERIC Educational Resources Information Center

    Dweik, Bader S.; Shakra, Mariam M. Abu

    2010-01-01

    The present study investigated the strategies adopted by students in translating specific lexical and semantic collocations in three religious texts namely, the Holy Quran, the Hadith and the Bible. For this purpose, the researchers selected a purposive sample of 35 MA translation students enrolled in three different public and private Jordanian…

  2. Investigation of Native Speaker and Second Language Learner Intuition of Collocation Frequency

    ERIC Educational Resources Information Center

    Siyanova-Chanturia, Anna; Spina, Stefania

    2015-01-01

    Research into frequency intuition has focused primarily on native (L1) and, to a lesser degree, nonnative (L2) speaker intuitions about single word frequency. What remains a largely unexplored area is L1 and L2 intuitions about collocation (i.e., phrasal) frequency. To bridge this gap, the present study aimed to answer the following question: How…

  3. Frequent Collocates and Major Senses of Two Prepositions in ESL and ENL Corpora

    ERIC Educational Resources Information Center

    Nkemleke, Daniel

    2009-01-01

    This contribution assesses in quantitative terms frequent collocates and major senses of "between" and "through" in the corpus of Cameroonian English (CCE), the corpus of East-African (Kenya and Tanzania) English which is part of the International Corpus of English (ICE) project (ICE-EA), and the London Oslo/Bergen (LOB) corpus of British English.…

  4. Utilizing Lexical Data from a Web-Derived Corpus to Expand Productive Collocation Knowledge

    ERIC Educational Resources Information Center

    Wu, Shaoqun; Witten, Ian H.; Franken, Margaret

    2010-01-01

    Collocations are of great importance for second language learners, and a learner's knowledge of them plays a key role in producing language fluently (Nation, 2001: 323). In this article we describe and evaluate an innovative system that uses a Web-derived corpus and digital library software to produce a vast concordance and present it in a way…

  5. The Effect of Corpus-Based Activities on Verb-Noun Collocations in EFL Classes

    ERIC Educational Resources Information Center

    Ucar, Serpil; Yükselir, Ceyhun

    2015-01-01

    This current study sought to reveal the impacts of corpus-based activities on verb-noun collocation learning in EFL classes. This study was carried out on two groups, experimental and control, each of which consists of 15 students. The students were preparatory class students at School of Foreign Languages, Osmaniye Korkut Ata University.…

  6. A Collocation Method for Volterra Integral Equations with Diagonal and Boundary Singularities

    NASA Astrophysics Data System (ADS)

    Kolk, Marek; Pedas, Arvet; Vainikko, Gennadi

    2009-08-01

    We propose a smoothing technique associated with piecewise polynomial collocation methods for solving linear weakly singular Volterra integral equations of the second kind with kernels which, in addition to a diagonal singularity, may have a singularity at the initial point of the interval of integration.

  7. On alternative wavelet reconstruction formula: a case study of approximate wavelets.

    PubMed

    Lebedeva, Elena A; Postnikov, Eugene B

    2014-10-01

    The application of the continuous wavelet transform to the study of a wide class of physical processes with oscillatory dynamics is restricted by large central frequencies owing to the admissibility condition. We propose an alternative reconstruction formula for the continuous wavelet transform, which is applicable even if the admissibility condition is violated. The case of the transform with the standard reduced Morlet wavelet, which is an important example of such analysing functions, is discussed. PMID:26064533
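
    For context, the admissibility condition referred to above and the classical reconstruction formula that depends on it can be written, in one standard normalization for a real admissible wavelet, as

      \[
        C_\psi = \int_{-\infty}^{\infty} \frac{|\hat\psi(\omega)|^2}{|\omega|}\, d\omega < \infty,
        \qquad
        f(t) = \frac{1}{C_\psi} \int_{0}^{\infty} \int_{-\infty}^{\infty}
               W_f(a, b)\, \frac{1}{\sqrt{a}}\, \psi\!\left(\frac{t - b}{a}\right)
               \frac{db\, da}{a^{2}},
      \]

    where W_f(a, b) denotes the continuous wavelet transform of f. The reduced Morlet wavelet has a non-vanishing mean, so the integral defining C_ψ diverges and the classical formula cannot be applied directly, which is what motivates an alternative reconstruction.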

  8. Wavelet Applications for Flight Flutter Testing

    NASA Technical Reports Server (NTRS)

    Lind, Rick; Brenner, Marty; Freudinger, Lawrence C.

    1999-01-01

    Wavelets present a method for signal processing that may be useful for analyzing responses of dynamical systems. This paper describes several wavelet-based tools that have been developed to improve the efficiency of flight flutter testing. One of the tools uses correlation filtering to identify properties of several modes throughout a flight test for envelope expansion. Another tool uses features in time-frequency representations of responses to characterize nonlinearities in the system dynamics. A third tool uses modulus and phase information from a wavelet transform to estimate modal parameters that can be used to update a linear model and reduce conservatism in robust stability margins.

  9. FOPEN ultrawideband SAR imaging by wavelet interpolation

    NASA Astrophysics Data System (ADS)

    Guo, Hanwei; Liang, Diannong; Wang, Yan; Huang, Xiaotao; Dong, Zhen

    2003-09-01

    The wavenumber domain imaging algorithm can deal with the problem of foliage-penetrating ultra-wideband synthetic aperture radar (FOPEN UWB SAR) imaging. Stolt interpolation plays a key role in the imaging algorithm and is a non-uniform interpolation problem for which no fast computation algorithm exists. In this paper, a novel 4-4 tap integer wavelet filter is used as the Stolt interpolation basis function, and a fast interpolation algorithm is put forward. Wavelet interpolation requires only addition and shift operations, which are easy to realize in hardware. Real data are processed to show that wavelet interpolation is valid for FOPEN UWB SAR imaging.

  10. Wavelet frames and admissibility in higher dimensions

    SciTech Connect

    Fuehr, H.

    1996-12-01

    This paper is concerned with the relations between discrete and continuous wavelet transforms on k-dimensional Euclidean space. We start with the construction of continuous wavelet transforms with the help of square-integrable representations of certain semidirect products, thereby generalizing results of Bernier and Taylor. We then turn to frames of L^2(R^k) and to the question of when the functions occurring in a given frame are admissible for a given continuous wavelet transform. For certain frames we give a characterization which generalizes a result of Daubechies to higher dimensions. © 1996 American Institute of Physics.

  11. Transionospheric signal detection with chirped wavelets

    SciTech Connect

    Doser, A.B.; Dunham, M.E.

    1997-11-01

    Chirped wavelets are utilized to detect dispersed signals in the joint time scale domain. Specifically, pulses that become dispersed by transmission through the ionosphere and are received by satellites as nonlinear chirps are investigated. Since the dispersion greatly lowers the signal to noise ratios, it is difficult to isolate the signals in the time domain. Satellite data are examined with discrete wavelet expansions. Detection is accomplished via a template matching threshold scheme. Quantitative experimental results demonstrate that the chirped wavelet detection scheme is successful in detecting the transionospheric pulses at very low signal to noise ratios.

  12. Wavelet-based vector quantization for high-fidelity compression and fast transmission of medical images.

    PubMed

    Mitra, S; Yang, S; Kustov, V

    1998-11-01

    Compression of medical images has always been viewed with skepticism, since the loss of information involved is thought to affect diagnostic information. However, recent research indicates that some wavelet-based compression techniques may not effectively reduce the image quality, even when subjected to compression ratios up to 30:1. The performance of a recently designed wavelet-based adaptive vector quantization is compared with a well-known wavelet-based scalar quantization technique to demonstrate the superiority of the former technique at compression ratios higher than 30:1. The use of higher compression with high fidelity of the reconstructed images allows fast transmission of images over the Internet for prompt inspection by radiologists at remote locations in an emergency situation, while higher quality images follow in a progressive manner if desired. Such fast and progressive transmission can also be used for downloading large data sets such as the Visible Human at a quality desired by the users for research or education. This new adaptive vector quantization uses a neural networks-based clustering technique for efficient quantization of the wavelet-decomposed subimages, yielding minimal distortion in the reconstructed images undergoing high compression. Results of compression up to 100:1 are shown for 24-bit color and 8-bit monochrome medical images. PMID:9848058

  13. A novel approach for removing ECG interferences from surface EMG signals using a combined ANFIS and wavelet.

    PubMed

    Abbaspour, Sara; Fallah, Ali; Lindén, Maria; Gholamhosseini, Hamid

    2016-02-01

    In recent years, the removal of electrocardiogram (ECG) interferences from electromyogram (EMG) signals has been given considerable attention. Where the quality of the EMG signal is of interest, it is important to remove ECG interferences from EMG signals. In this paper, an efficient method based on a combination of an adaptive neuro-fuzzy inference system (ANFIS) and the wavelet transform is proposed to effectively eliminate ECG interferences from surface EMG signals. The proposed approach is compared with other common methods such as the high-pass filter, artificial neural network, adaptive noise canceller, wavelet transform, subtraction method and ANFIS. It is found that the performance of the proposed ANFIS-wavelet method is superior to the other methods, with a signal-to-noise ratio of 14.97 dB, a relative error of 0.02, and a significantly higher correlation coefficient (p<0.05). PMID:26643795

  14. [Spatio-Temporal Bioelectrical Brain Activity Organization during Reading Syntagmatic and Paradigmatic Collocations by Students with Different Foreign Language Proficiency].

    PubMed

    Sokolova, L V; Cherkasova, A S

    2015-01-01

    Texts or isolated words/pseudowords are often used as stimuli in research on human verbal activity. Our study focuses on the decoding of grammatical constructions consisting of two or three words, i.e., collocations. Sets of Russian and English collocations without any narrative context were presented to Russian-speaking students with different levels of English proficiency. The stimulus material contained two types of collocations: paradigmatic and syntagmatic. Thirty students (mean age 20.4 ± 0.22) took part in the study; they were divided into two equal groups according to their English proficiency (linguists/non-linguists). Brain bioelectrical activity of the cortex was recorded during reading from 12 electrodes in the alpha, beta and theta bands. The coherence function, which reflects the cooperation of different cortical areas during the reading of collocations, was analyzed. An increase in interhemispheric and diagonal connections while reading collocations in both languages in the group with lower foreign-language proficiency testifies to the importance of functional cooperation between the hemispheres. It was found that the brain bioelectrical activity of students with good foreign-language knowledge during the reading of all collocation types in Russian and English is characterized by an economization of neural resources compared to non-linguists. Selective activation of certain cortical areas, depending on the type of grammatical construction, was also observed in the non-linguist group, which is probably related to a special decoding system for the presented stimuli. Reading Russian paradigmatic constructions by non-linguists entailed an increase in connections between left cortical areas, and reading English syntagmatic collocations an increase between right ones. PMID:26859985

  15. Wavelets: the Key to Intermittent Information?

    NASA Astrophysics Data System (ADS)

    Silverman, B. W.; Vassilicos, J. C.

    2000-08-01

    In recent years there has been an explosion of interest in wavelets, in a wide range of fields in science and engineering and beyond. This book brings together contributions from researchers from disparate fields, both to demonstrate to a wide readership the current breadth of work in wavelets and to encourage cross-fertilization of ideas. It demonstrates the genuinely interdisciplinary nature of wavelet research and applications. Particular areas covered include turbulence, statistics, time series analysis, signal and image processing, the physiology of vision, astronomy, economics and acoustics. Some of the work uses standard wavelet approaches and in other cases new methodology is developed. The papers were originally presented at a Royal Society Discussion Meeting, to a large and enthusiastic audience of specialists and non-specialists.

  16. Wavelet based recognition for pulsar signals

    NASA Astrophysics Data System (ADS)

    Shan, H.; Wang, X.; Chen, X.; Yuan, J.; Nie, J.; Zhang, H.; Liu, N.; Wang, N.

    2015-06-01

    A signal from a pulsar can be decomposed into a set of features. This set is a unique signature for a given pulsar and can be used to decide whether a pulsar is newly discovered or not. Features can be constructed from the coefficients of a wavelet decomposition. Two types of wavelet-based pulsar features are proposed. The energy-based features reflect the multiscale distribution of the energy of the coefficients. The singularity-based features first classify the signals into a class with one peak and a class with two peaks by examining the number of straight wavelet modulus maxima lines perpendicular to the abscissa, and then perform further classification according to skewness and kurtosis features. Experimental results show that the wavelet-based features achieve better performance than the shape-parameter-based features, both in clustering and classification and in the error rates of the recognition tasks.
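
    The energy-based features mentioned above lend themselves to a very small illustration. The sketch below (an assumption on our part, not the authors' code) uses PyWavelets to compute the fraction of signal energy carried by each decomposition level; the 'db4' wavelet and 5 levels are arbitrary illustrative choices:

```python
# Sketch: energy-per-level features from a discrete wavelet decomposition.
# Assumptions (not from the paper): PyWavelets, a 'db4' wavelet, 5 levels.
import numpy as np
import pywt

def wavelet_energy_features(signal, wavelet="db4", level=5):
    """Return the fraction of total energy carried by each decomposition level."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)  # [cA_L, cD_L, ..., cD_1]
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    return energies / energies.sum()

# Toy usage: a noisy pulse stands in for an integrated pulsar profile.
t = np.linspace(0, 1, 1024)
profile = np.exp(-((t - 0.5) / 0.02) ** 2) + 0.05 * np.random.randn(t.size)
print(wavelet_energy_features(profile))
```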

  17. Wavelet Analysis for Acoustic Phased Array

    NASA Astrophysics Data System (ADS)

    Kozlov, Inna; Zlotnick, Zvi

    2003-03-01

    Wavelet spectrum analysis is known to be one of the most powerful tools for exploring quasistationary signals. In this paper we use wavelet techniques to develop a new Direction Finding (DF) algorithm for Acoustic Phased Array (APA) systems. Utilising multi-scale analysis over libraries of wavelets allows us to work with frequency bands instead of the individual frequencies of an acoustic source. These frequency bands can be regarded as features extracted from quasistationary signals emitted by a noisy object. For the detection, tracing and identification of a sound source in a noisy environment we develop a smart algorithm. The essential part of this algorithm is a special interaction procedure between the above-mentioned DF algorithm and the wavelet-based identification (ID) algorithm developed in [4]. Significant improvement of the basic properties of the receiving APA pattern is achieved.

  18. Wavelet-based acoustic recognition of aircraft

    SciTech Connect

    Dress, W.B.; Kercel, S.W.

    1994-09-01

    We describe a wavelet-based technique for identifying aircraft from acoustic emissions during take-off and landing. Tests show that the sensor can be a single, inexpensive hearing-aid microphone placed close to the ground. The paper describes data collection, analysis by various techniques, methods of event classification, and the extraction of certain physical parameters from wavelet subspace projections. The primary goal of this paper is to show that wavelet analysis can be used as a divide-and-conquer first step in signal processing, providing both simplification and noise filtering. The idea is to project the original signal onto the orthogonal wavelet subspaces, both details and approximations. Subsequent analysis, such as system identification, nonlinear systems analysis, and feature extraction, is then carried out on the various signal subspaces.
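
    The projection onto orthogonal wavelet subspaces described here can be sketched in a few lines. The snippet below is an illustrative reconstruction of the idea, not the authors' implementation; the 'db8' wavelet and 4 levels are assumptions:

```python
# Sketch: split a signal into its wavelet approximation and detail subspaces,
# so that later analysis can run on each subspace separately.
# Assumptions (not from the paper): PyWavelets, 'db8' wavelet, 4 levels.
import numpy as np
import pywt

def wavelet_subspace_projections(x, wavelet="db8", level=4):
    coeffs = pywt.wavedec(x, wavelet, level=level)
    projections = {}
    for i in range(len(coeffs)):
        # Keep one coefficient band, zero the rest, and reconstruct.
        kept = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
        name = f"A{level}" if i == 0 else f"D{level - i + 1}"
        projections[name] = pywt.waverec(kept, wavelet)[: len(x)]
    return projections

x = np.random.randn(4096)          # stand-in for a microphone recording
subspaces = wavelet_subspace_projections(x)
print({k: float(np.sum(v ** 2)) for k, v in subspaces.items()})
```

    For an orthogonal wavelet, the subspace projections sum back to the original signal, so each one can be analysed independently without losing information.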

  19. Velocity and Object Detection Using Quaternion Wavelets

    SciTech Connect

    Traversoni, Leonardo; Xu Yi

    2007-09-06

    Starting from stereoscopic films, we detect corresponding objects in both views and establish an epipolar geometry; corresponding moving objects are also detected and their movement described, all using quaternion wavelets and quaternion phase-space decomposition.

  20. The wavelet response as a multiscale NDT method.

    PubMed

    Le Gonidec, Y; Conil, F; Gibert, D

    2003-08-01

    We analyze interfaces by using reflected waves in the framework of the wavelet transform. First, we introduce the wavelet transform as an efficient method to detect and characterize a discontinuity in the acoustical impedance profile of a material. Synthetic examples are shown for both an isolated reflector and multiscale clusters of nearby defects. In the second part of the paper we present the wavelet response method as a natural extension of the wavelet transform when the velocity profile to be analyzed can only be remotely probed by propagating wavelets through the medium (instead of being directly convolved as in the wavelet transform). The wavelet response is constituted by the reflections of the incident wavelets on the discontinuities and we show that both transforms are equivalent when multiple scattering is neglected. We end this paper by experimentally applying the wavelet response in an acoustic tank to characterize planar reflectors with finite thicknesses. PMID:12853084

  1. Applications of a fast continuous wavelet transform

    NASA Astrophysics Data System (ADS)

    Dress, William B.

    1997-04-01

    A fast continuous wavelet transform, justified by appealing to Shannon's sampling theorem in frequency space, has been developed for use with continuous mother wavelets and sampled data sets. The method differs from the usual discrete-wavelet approach and from the standard treatment of the continuous-wavelet transform in that, here, the wavelet is sampled in the frequency domain. Since Shannon's sampling theorem lets us view the Fourier transform of the data set as representing the continuous function in frequency space, the continuous nature of the functions is kept up to the point of sampling the scale-translation lattice, so the scale-translation grid used to represent the wavelet transform is independent of the time-domain sampling of the signal under analysis. Although more computationally costly and not represented by an orthogonal basis, the inherent flexibility and shift invariance of the frequency-space wavelets are advantageous for certain applications. The method has been applied to forensic audio reconstruction, speaker recognition/identification, and the detection of micromotions of heavy vehicles associated with ballistocardiac impulses originating from occupants' heart beats. Audio reconstruction is aided by selection of desired regions in the 2D representation of the magnitude of the transformed signals. The inverse transform is applied to ridges and selected regions to reconstruct areas of interest, unencumbered by noise interference lying outside these regions. To separate micromotions imparted to a mass-spring system by an occupant's beating heart from gross mechanical motions due to wind and traffic vibrations, a continuous frequency-space wavelet, modeled on the frequency content of a canonical ballistocardiogram, was used to analyze time series taken from geophone measurements of vehicle micromotions. By using a family of mother wavelets, such as a set of Gaussian derivatives of various orders, different features may be extracted from voice
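
    The core of the frequency-sampled approach can be sketched compactly: sample the mother wavelet directly in the frequency domain, multiply it with the FFT of the data, and inverse-transform once per scale. The snippet below is a generic illustration of that idea (an analytic Morlet wavelet and a logarithmic scale grid are assumptions, and normalization conventions vary); it is not the paper's implementation:

```python
# Sketch: continuous wavelet transform evaluated by sampling the analysing
# wavelet in the frequency domain and multiplying it with the FFT of the data.
# Assumptions (not from the paper): a Morlet wavelet, logarithmic scale grid.
import numpy as np

def cwt_freq_domain(x, scales, dt=1.0, omega0=6.0):
    n = len(x)
    x_hat = np.fft.fft(x)
    omega = 2.0 * np.pi * np.fft.fftfreq(n, d=dt)        # angular frequencies
    out = np.empty((len(scales), n), dtype=complex)
    for k, a in enumerate(scales):
        # Morlet wavelet sampled directly in frequency space (analytic form).
        psi_hat = np.pi ** -0.25 * np.exp(-0.5 * (a * omega - omega0) ** 2)
        psi_hat *= (omega > 0)                            # keep the analytic part
        # (overall normalization conventions vary between references)
        out[k] = np.fft.ifft(x_hat * np.conj(psi_hat)) * np.sqrt(a)
    return out

t = np.arange(0, 10, 0.01)
sig = np.sin(2 * np.pi * 1.0 * t) + np.sin(2 * np.pi * 5.0 * t)
scales = np.geomspace(0.05, 2.0, 50)
W = cwt_freq_domain(sig, scales, dt=0.01)
print(W.shape)   # (50, 1000)
```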

  2. Trabecular bone texture classification using wavelet leaders

    NASA Astrophysics Data System (ADS)

    Zou, Zilong; Yang, Jie; Megalooikonomou, Vasileios; Jennane, Rachid; Cheng, Erkang; Ling, Haibin

    2016-03-01

    In this paper we propose to use the Wavelet Leader (WL) transformation for studying trabecular bone patterns. Given an input image, its WL transformation is defined as the cross-channel-layer maximum pooling of an underlying wavelet transformation. WL inherits the advantage of the original wavelet transformation in capturing the spatial-frequency statistics of texture images, while being more robust against scale and orientation thanks to the maximum pooling strategy. These properties make WL an attractive alternative to the wavelet transformations used for trabecular analysis in previous studies. In particular, in this paper, after extracting wavelet leader descriptors from a trabecular texture patch, we feed them into two existing statistical texture characterization methods, namely the Gray Level Co-occurrence Matrix (GLCM) and the Gray Level Run Length Matrix (GLRLM). The most discriminative features, Energy of GLCM and Gray Level Non-Uniformity of GLRLM, are retained to distinguish two populations: osteoporotic patients and control subjects. Receiver Operating Characteristic (ROC) curves are used to measure classification performance. Experimental results on a recently released benchmark dataset show that WL significantly boosts the performance of baseline wavelet transformations by 5% on average.

  3. The Continuous wavelet in airborne gravimetry

    NASA Astrophysics Data System (ADS)

    Liang, X.; Liu, L.

    2013-12-01

    Airborne gravimetry is an efficient method to recover the medium- and high-frequency band of the Earth's gravity field over any region, especially inaccessible areas. It can measure gravity with high accuracy, high resolution and broad coverage in a rapid and economical way, and it plays an important role in geoid determination and geophysical exploration. Filtering to reduce high-frequency errors is critical to the success of airborne gravimetry, because aircraft accelerations are determined from GPS. Traditional filters used in airborne gravimetry are FIR and IIR filters. This study recommends an improved continuous wavelet approach to process airborne gravity data. Here we focus on how to construct the continuous wavelet filters and show their working principle. In particular, the technical parameters (window width parameter and scale parameter) of the filters are tested. The raw airborne gravity data from the first Chinese airborne gravimetry campaign are then filtered using an FIR low-pass filter and the continuous wavelet filters to remove noise. A comparison with reference data is performed to determine the external accuracy, which shows that the continuous wavelet filters applied to airborne gravity data perform well. The advantages of the continuous wavelet filters over digital filters are also introduced, and their effectiveness for airborne gravimetry is demonstrated through real-data computation.

  4. Optimal wavelet denoising for smart biomonitor systems

    NASA Astrophysics Data System (ADS)

    Messer, Sheila R.; Agzarian, John; Abbott, Derek

    2001-03-01

    Future smart-systems promise many benefits for biomedical diagnostics. The ideal is for simple portable systems that display and interpret information from smart integrated probes or MEMS-based devices. In this paper, we will discuss a step towards this vision with a heart bio-monitor case study. An electronic stethoscope is used to record heart sounds and the problem of extracting noise from the signal is addressed via the use of wavelets and averaging. In our example of heartbeat analysis, phonocardiograms (PCGs) have many advantages in that they may be replayed and analysed for spectral and frequency information. Many sources of noise may pollute a PCG including foetal breath sounds if the subject is pregnant, lung and breath sounds, environmental noise and noise from contact between the recording device and the skin. Wavelets can be employed to denoise the PCG. The signal is decomposed by a discrete wavelet transform. Due to the efficient decomposition of heart signals, their wavelet coefficients tend to be much larger than those due to noise. Thus, coefficients below a certain level are regarded as noise and are thresholded out. The signal can then be reconstructed without significant loss of information in the signal. The questions that this study attempts to answer are which wavelet families, levels of decomposition, and thresholding techniques best remove the noise in a PCG. The use of averaging in combination with wavelet denoising is also addressed. Possible applications of the Hilbert Transform to heart sound analysis are discussed.
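
    The decompose-threshold-reconstruct loop described above is easy to illustrate. The sketch below uses PyWavelets with a 'db6' wavelet, 5 decomposition levels and the universal soft threshold; these are illustrative defaults, not the combination the study identifies as best:

```python
# Sketch: wavelet denoising of a heart-sound-like signal by soft-thresholding
# detail coefficients. The wavelet ('db6'), level (5) and universal threshold
# are illustrative choices, not the study's recommended settings.
import numpy as np
import pywt

def wavelet_denoise(x, wavelet="db6", level=5):
    coeffs = pywt.wavedec(x, wavelet, level=level)
    # Noise level estimated from the finest detail band (median absolute deviation).
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(len(x)))           # universal threshold
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(x)]

t = np.linspace(0, 1, 2000)
clean = np.sin(2 * np.pi * 40 * t) * np.exp(-((t - 0.3) / 0.05) ** 2)  # toy heart-sound burst
noisy = clean + 0.2 * np.random.randn(t.size)
denoised = wavelet_denoise(noisy)
print(np.sqrt(np.mean((denoised - clean) ** 2)))           # RMS error of the estimate
```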

  5. Multisensor Multitemporal Data Fusion Using Wavelet Transform

    NASA Astrophysics Data System (ADS)

    Ghannam, S.; Awadallah, M.; Abbott, A. L.; Wynne, R. H.

    2014-11-01

    Interest in data fusion for remote-sensing applications continues to grow due to the increasing importance of obtaining data at high resolution both spatially and temporally. Applications that will benefit from data fusion include ecosystem disturbance and recovery assessment, ecological forecasting, and others. This paper introduces a novel spatiotemporal fusion approach, the wavelet-based Spatiotemporal Adaptive Data Fusion Model (WSAD-FM). This new technique is motivated by the popular STARFM tool, which utilizes lower-resolution MODIS imagery to supplement Landsat scenes using a linear model. The novelty of WSAD-FM is twofold. First, unlike STARFM, this technique does not predict an entire new image in one linear step, but instead decomposes input images into separate "approximation" and "detail" parts. The different portions are fed into a prediction model that limits the effects of linear interpolation among images. Low-spatial-frequency components are predicted by a weighted mixture of MODIS images and low-spatial-frequency components of Landsat images that are neighbors in the temporal domain, while high-spatial-frequency components are predicted by a weighted average of high-spatial-frequency components of Landsat images alone. The second novelty is that the method has demonstrated good performance using only one input Landsat image and a pair of MODIS images. The technique has been tested using several Landsat and MODIS images for a study area in central North Carolina (WRS-2 path/row 16/35 in Landsat and H/V11/5 in MODIS), acquired in 2001. NDVI images calculated from the study area were used as input to the algorithm. The technique was tested experimentally by predicting existing Landsat images, and we obtained R2 values in the range 0.70 to 0.92 for estimated Landsat images in the red band, and 0.62 to 0.89 for estimated NDVI images.

  6. Compression of multispectral Landsat imagery using the Embedded Zerotree Wavelet (EZW) algorithm

    NASA Technical Reports Server (NTRS)

    Shapiro, Jerome M.; Martucci, Stephen A.; Czigler, Martin

    1994-01-01

    The Embedded Zerotree Wavelet (EZW) algorithm has proven to be an extremely efficient and flexible compression algorithm for low bit rate image coding. The embedding algorithm attempts to order the bits in the bit stream in numerical importance and thus a given code contains all lower rate encodings of the same algorithm. Therefore, precise bit rate control is achievable and a target rate or distortion metric can be met exactly. Furthermore, the technique is fully image adaptive. An algorithm for multispectral image compression which combines the spectral redundancy removal properties of the image-dependent Karhunen-Loeve Transform (KLT) with the efficiency, controllability, and adaptivity of the embedded zerotree wavelet algorithm is presented. Results are shown which illustrate the advantage of jointly encoding spectral components using the KLT and EZW.

  7. Multiresolution With Super-Compact Wavelets

    NASA Technical Reports Server (NTRS)

    Lee, Dohyung

    2000-01-01

    The solution data computed from large scale simulations are sometimes too big for main memory, for local disks, and possibly even for a remote storage disk, creating tremendous processing time as well as technical difficulties in analyzing the data. The excessive storage demands a corresponding huge penalty in I/O time, rendering time and transmission time between different computer systems. In this paper, a multiresolution scheme is proposed to compress field simulation or experimental data without much loss of important information in the representation. Originally, the wavelet based multiresolution scheme was introduced in image processing, for the purposes of data compression and feature extraction. Unlike photographic image data which has rather simple settings, computational field simulation data needs more careful treatment in applying the multiresolution technique. While the image data sits on a regular spaced grid, the simulation data usually resides on a structured curvilinear grid or unstructured grid. In addition to the irregularity in grid spacing, the other difficulty is that the solutions consist of vectors instead of scalar values. The data characteristics demand more restrictive conditions. In general, the photographic images have very little inherent smoothness with discontinuities almost everywhere. On the other hand, the numerical solutions have smoothness almost everywhere and discontinuities in local areas (shock, vortices, and shear layers). The wavelet bases should be amenable to the solution of the problem at hand and applicable to constraints such as numerical accuracy and boundary conditions. In choosing a suitable wavelet basis for simulation data among a variety of wavelet families, the supercompact wavelets designed by Beam and Warming provide one of the most effective multiresolution schemes. Supercompact multi-wavelets retain the compactness of Haar wavelets, are piecewise polynomial and orthogonal, and can have arbitrary order of

  8. Mining wavelet transformed boiler data sets

    NASA Astrophysics Data System (ADS)

    Letsche, Terry Lee

    Accurate combustion models provide information that allows increased boiler efficiency optimization, saving money and resources while reducing waste. Boiler combustion processes are noted for being complex, nonstationary and nonlinear. While numerous methods have been used to model boiler processes, data driven approaches reflect actual operating conditions within a particular boiler and do not depend on idealized, complex, or expensive empirical models. Boiler and combustion processes vary in time, requiring a denoising technique that preserves the temporal and frequency nature of the data. Moving average, a common technique, smoothes data---low frequency noise is not removed. This dissertation examines models built with wavelet denoising techniques that remove low and high frequency noise in both time and frequency domains. The denoising process has a number of parameters, including choice of wavelet, threshold value, level of wavelet decomposition, and disposition of attributes that appear to be significant at multiple thresholds. A process is developed to experimentally evaluate the predictive accuracy of these models and compares this result against two benchmarks. The first research hypothesis compares the performance of these wavelet denoised models to the model generated from the original data. The second research hypothesis compares the performance of the models generated with this denoising approach to the most effective model generated from a moving average process. In both experiments it was determined that the Daubechies 4 wavelet was a better choice than the more typically chosen Haar wavelet, wavelet packet decomposition outperforms other levels of wavelet decomposition, and discarding all but the lowest threshold repeating attributes produces superior results. The third research hypothesis examined using a two-dimensional wavelet transform on the data. Another parameter for handling the boundary condition was introduced. In the two-dimensional case

  9. Background Subtraction Based on Three-Dimensional Discrete Wavelet Transform

    PubMed Central

    Han, Guang; Wang, Jinkuan; Cai, Xi

    2016-01-01

    Background subtraction without a separate training phase has become a critical task, because a sufficiently long and clean training sequence is usually unavailable, and people generally thirst for immediate detection results from the first frame of a video. Without a training phase, we propose a background subtraction method based on three-dimensional (3D) discrete wavelet transform (DWT). Static backgrounds with few variations along the time axis are characterized by intensity temporal consistency in the 3D space-time domain and, hence, correspond to low-frequency components in the 3D frequency domain. Enlightened by this, we eliminate low-frequency components that correspond to static backgrounds using the 3D DWT in order to extract moving objects. Owing to the multiscale analysis property of the 3D DWT, the elimination of low-frequency components in sub-bands of the 3D DWT is equivalent to performing a pyramidal 3D filter. This 3D filter brings advantages to our method in reserving the inner parts of detected objects and reducing the ringing around object boundaries. Moreover, we make use of wavelet shrinkage to remove disturbance of intensity temporal consistency and introduce an adaptive threshold based on the entropy of the histogram to obtain optimal detection results. Experimental results show that our method works effectively in situations lacking training opportunities and outperforms several popular techniques. PMID:27043570
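
    The central step, eliminating the low-frequency (approximation) band of a 3D DWT over a short stack of frames, can be sketched as follows. This is a simplified illustration with assumed parameters (Haar wavelet, two levels, a fixed magnitude threshold) rather than the authors' wavelet shrinkage and entropy-based thresholding:

```python
# Sketch: suppress static background by zeroing the approximation (low-frequency)
# band of a 3D discrete wavelet transform over a short stack of frames.
# Assumptions (not from the paper): PyWavelets, 'haar' wavelet, 2 levels,
# a fixed magnitude threshold instead of the paper's entropy-based one.
import numpy as np
import pywt

def foreground_from_stack(frames, wavelet="haar", level=2, thr=20.0):
    """frames: array of shape (T, H, W); returns a boolean foreground mask per frame."""
    coeffs = pywt.wavedecn(frames.astype(float), wavelet, level=level)
    coeffs[0] = np.zeros_like(coeffs[0])          # drop the 3D approximation band
    highpass = pywt.waverecn(coeffs, wavelet)
    highpass = highpass[: frames.shape[0], : frames.shape[1], : frames.shape[2]]
    return np.abs(highpass) > thr

stack = np.random.randint(0, 30, size=(16, 64, 64)).astype(float)   # static-ish scene
stack[8:12, 20:30, 20:30] += 100.0                                   # a moving blob
mask = foreground_from_stack(stack)
print(mask.shape, int(mask.sum()))
```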

  11. A 1D wavelet filtering for ultrasound images despeckling

    NASA Astrophysics Data System (ADS)

    Dahdouh, Sonia; Dubois, Mathieu; Frenoux, Emmanuelle; Osorio, Angel

    2010-03-01

    The appearance of ultrasound images is characterized by speckle, shadows, signal dropout and low contrast, which make them difficult to process and lead to a very poor signal-to-noise ratio. Therefore, for most imaging applications, a denoising step is necessary before medical imaging algorithms can be applied successfully to such images. However, due to the statistics of speckle, denoising and enhancing edges in these images without introducing additional blurring is a real challenge on which usual filters often fail. To deal with such problems, a large number of papers work on B-mode images under the assumption that the noise is purely multiplicative. Making such an assertion can be misleading because of internal pre-processing, such as log compression, performed in the ultrasound device. To address these questions, we designed a novel filtering method based on the 1D radiofrequency signal. Since B-mode images are initially composed of 1D signals and since the log compression performed by ultrasound devices modifies the noise statistics, we filter the envelope of the 1D radiofrequency signal directly, before log compression and image reconstitution, in order to preserve as much information as possible. A bi-orthogonal wavelet transform is applied to the log transform of each signal, and an adaptive 1D split-and-merge-like algorithm is used to denoise the wavelet coefficients. Experiments were carried out on synthetic data sets simulated with the Field II simulator, and the results show that our filter outperforms classical speckle filtering methods such as the Lee, non-linear means and SRAD filters.

  12. Multiparameter radar analysis using wavelets

    NASA Astrophysics Data System (ADS)

    Tawfik, Ben Bella Sayed

    Multiparameter radars have been used in the interpretation of many meteorological phenomena, and rainfall estimates can be obtained from multiparameter radar measurements. Studying and analyzing the spatial variability of different rainfall algorithms, namely R(ZH), based on reflectivity; R(ZH, ZDR), based on reflectivity and differential reflectivity; R(KDP), based on specific differential phase; and R(KDP, ZDR), based on specific differential phase and differential reflectivity, is important for radar applications. The data used in this research were collected using the CSU-CHILL, CP-2, and S-POL radars. In this research, multiple objectives are addressed using wavelet analysis, namely (1) the space/time variability of various rainfall algorithms, (2) the separation of convective and stratiform storms based on reflectivity measurements, and (3) the detection of features such as bright bands. The bright band is a multiscale edge detection problem. The technique of multiscale edge detection is applied to the radar data collected with the CP-2 radar on August 23, 1991 to detect the melting layer. In the analysis of the space/time variability of rainfall algorithms, the wavelet variance gives an idea of the statistics of the radar field. In addition, multiresolution analyses of rainfall estimates based on the four algorithms, namely R(ZH), R(ZH, ZDR), R(KDP), and R(KDP, ZDR), are presented. The flood data of July 29, 1997 collected by the CSU-CHILL radar were used for this analysis, together with a set of S-POL radar data collected on May 2, 1997 at Wichita, Kansas. At each level of approximation, the detail and approximation components are analyzed, and on this basis the rainfall algorithms can be judged. From this analysis, an important result was obtained: the Z-R algorithms that are widely used do not show the full spatial variability of rainfall. In addition another intuitively obvious result

  13. Numerical approximation of Lévy-Feller fractional diffusion equation via Chebyshev-Legendre collocation method

    NASA Astrophysics Data System (ADS)

    Sweilam, N. H.; Abou Hasan, M. M.

    2016-08-01

    This paper reports a new spectral algorithm for obtaining an approximate solution for the Lévy-Feller diffusion equation depending on Legendre polynomials and Chebyshev collocation points. The Lévy-Feller diffusion equation is obtained from the standard diffusion equation by replacing the second-order space derivative with a Riesz-Feller derivative. A new formula expressing explicitly any fractional-order derivatives, in the sense of Riesz-Feller operator, of Legendre polynomials of any degree in terms of Jacobi polynomials is proved. Moreover, the Chebyshev-Legendre collocation method together with the implicit Euler method are used to reduce these types of differential equations to a system of algebraic equations which can be solved numerically. Numerical results with comparisons are given to confirm the reliability of the proposed method for the Lévy-Feller diffusion equation.

  14. Domain decomposition methods for systems of conservation laws: Spectral collocation approximations

    NASA Technical Reports Server (NTRS)

    Quarteroni, Alfio

    1989-01-01

    Hyperbolic systems of conservation laws are considered which are discretized in space by spectral collocation methods and advanced in time by finite difference schemes. At any time level, a domain decomposition method based on an iteration-by-subdomain procedure is introduced, yielding at each step a sequence of independent subproblems (one for each subdomain) that can be solved simultaneously. The method is set up for a general nonlinear problem in several space variables. The convergence analysis, however, is carried out only for a linear one-dimensional system with continuous solutions. A precise form of the error reduction factor at each iteration is derived. Although the method is applied here to the case of spectral collocation approximation only, the idea is fairly general and can be used in a different context as well. For instance, its application to space discretization by finite differences is straightforward.

  15. The convergence problem of collocation solutions in the framework of the stochastic interpretation

    NASA Astrophysics Data System (ADS)

    Sansò, F.; Venuti, G.

    2011-01-01

    The problem of the convergence of the collocation solution to the true gravity field was defined long ago (Tscherning in Boll Geod Sci Affini 39:221-252, 1978) and some results were derived, in particular by Krarup (Boll Geod Sci Affini 40:225-240, 1981). The problem is taken up again in the context of the stochastic interpretation of collocation theory and some new results are derived, showing that, when the potential T can be really continued down to a Bjerhammar sphere, we have a quite general convergence property in the noiseless case. When noise is present in data, still reasonable convergence results hold true. "Democrito che 'l mondo a caso pone" "Democritus who made the world stochastic" Dante Alighieri, La Divina Commedia, Inferno, IV - 136

  16. A space-time spectral collocation algorithm for the variable order fractional wave equation.

    PubMed

    Bhrawy, A H; Doha, E H; Alzaidy, J F; Abdelkawy, M A

    2016-01-01

    The variable order wave equation plays a major role in acoustics, electromagnetics, and fluid dynamics. In this paper, we consider the space-time variable order fractional wave equation with variable coefficients and propose an effective numerical method for solving it in a bounded domain. The shifted Jacobi polynomials are used as basis functions, and the variable-order fractional derivative is described in the Caputo sense. The proposed method combines the shifted Jacobi-Gauss-Lobatto collocation scheme for the spatial discretization with the shifted Jacobi-Gauss-Radau collocation scheme for the temporal discretization. The problem is thereby reduced to a system of easily solvable algebraic equations. Finally, numerical examples are presented to show the effectiveness of the proposed numerical method. PMID:27536504

  17. Global collocation methods for approximation and the solution of partial differential equations

    NASA Technical Reports Server (NTRS)

    Solomonoff, A.; Turkel, E.

    1986-01-01

    Polynomial interpolation methods are applied both to the approximation of functions and to the numerical solutions of hyperbolic and elliptic partial differential equations. The derivative matrix for a general sequence of the collocation points is constructed. The approximate derivative is then found by a matrix times vector multiply. The effects of several factors on the performance of these methods including the effect of different collocation points are then explored. The resolution of the schemes for both smooth functions and functions with steep gradients or discontinuities in some derivative are also studied. The accuracy when the gradients occur both near the center of the region and in the vicinity of the boundary is investigated. The importance of the aliasing limit on the resolution of the approximation is investigated in detail. Also examined is the effect of boundary treatment on the stability and accuracy of the scheme.
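
    The "derivative matrix times vector" operation described above can be illustrated with the familiar Chebyshev-Gauss-Lobatto construction (one particular choice of collocation points; the paper treats general point sequences). The sketch below builds Trefethen's classic differentiation matrix and differentiates a smooth function:

```python
# Sketch: build a spectral differentiation matrix on Chebyshev-Gauss-Lobatto
# points and apply it as a matrix-vector product (Trefethen's "cheb"
# construction; the paper considers general collocation point sequences).
import numpy as np

def cheb(N):
    """Differentiation matrix D and points x on [-1, 1], following Trefethen."""
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))     # diagonal entries via negative row sums
    return D, x

D, x = cheb(16)
err = np.max(np.abs(D @ np.exp(x) - np.exp(x)))   # derivative of exp is exp
print(err)   # spectrally small for smooth functions
```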

  18. Raman lidar profiling of atmospheric water vapor: Simultaneous measurements with two collocated systems

    NASA Technical Reports Server (NTRS)

    Goldsmith, J. E. M.; Bisson, Scott E.; Ferrare, Richard A.; Evans, Keith D.; Whiteman, David N.; Melfi, S. H.

    1994-01-01

    Raman lidar is a leading candidate for providing the detailed space- and time-resolved measurements of water vapor needed by a variety of atmospheric studies. Simultaneous measurements of atmospheric water vapor are described using two collocated Raman lidar systems. These lidar systems, developed at the NASA/Goddard Space Flight Center and Sandia National Laboratories, acquired approximately 12 hours of simultaneous water vapor data during three nights in November 1992 while the systems were collocated at the Goddard Space Flight Center. Although these lidar systems differ substantially in their design, the measured water vapor profiles agreed within 0.15 g/kg between altitudes of 1 and 5 km. Comparisons with coincident radiosondes showed all instruments agreed within 0.2 g/kg in this same altitude range. Both lidars also clearly showed the advection of water vapor in the middle troposphere and the pronounced increase in water vapor in the nocturnal boundary layer that occurred during one night.

  19. The Benard problem: A comparison of finite difference and spectral collocation eigenvalue solutions

    NASA Technical Reports Server (NTRS)

    Skarda, J. Raymond Lee; Mccaughan, Frances E.; Fitzmaurice, Nessan

    1995-01-01

    The application of spectral methods, using a Chebyshev collocation scheme, to solve hydrodynamic stability problems is demonstrated on the Benard problem. Implementation of the Chebyshev collocation formulation is described. The performance of the spectral scheme is compared with that of a 2nd order finite difference scheme. An exact solution to the Marangoni-Benard problem is used to evaluate the performance of both schemes. The error of the spectral scheme is at least seven orders of magnitude smaller than finite difference error for a grid resolution of N = 15 (number of points used). The performance of the spectral formulation far exceeded the performance of the finite difference formulation for this problem. The spectral scheme required only slightly more effort to set up than the 2nd order finite difference scheme. This suggests that the spectral scheme may actually be faster to implement than higher order finite difference schemes.

  20. Parallel Implementation of a High Order Implicit Collocation Method for the Heat Equation

    NASA Technical Reports Server (NTRS)

    Kouatchou, Jules; Halem, Milton (Technical Monitor)

    2000-01-01

    We combine a high order compact finite difference approximation and collocation techniques to numerically solve the two-dimensional heat equation. The resulting method is implicit and can be parallelized with a strategy that allows parallelization across both time and space. We compare the parallel implementation of the new method with a classical implicit method, namely the Crank-Nicolson method, where the parallelization is done across space only. Numerical experiments are carried out on the SGI Origin 2000.

  1. The rational Chebyshev of second kind collocation method for solving a class of astrophysics problems

    NASA Astrophysics Data System (ADS)

    Parand, K.; Khaleqi, S.

    2016-02-01

    The Lane-Emden equation has been used to model several phenomena in theoretical physics, mathematical physics and astrophysics, such as the theory of stellar structure. This study is an attempt to utilize the collocation method with the rational Chebyshev functions of the second kind (RCS) to solve the Lane-Emden equation over the semi-infinite interval [0,+∞). Comparison with well-known results and previous methods indicates that the method is efficient and applicable.

  2. Design and Application of a Collocated Capacitance Sensor for Magnetic Bearing Spindle

    NASA Technical Reports Server (NTRS)

    Shin, Dongwon; Liu, Seon-Jung; Kim, Jongwon

    1996-01-01

    This paper presents a collocated capacitance sensor for magnetic bearings. The main feature of the sensor is that it is made of a specific compact printed circuit board (PCB). The signal processing unit has been also developed. The results of the experimental performance evaluation on the sensitivity, resolution and frequency response of the sensor are presented. Finally, an application example of the sensor to the active control of a magnetic bearing is described.

  3. Quadratic spline collocation and parareal deferred correction method for parabolic PDEs

    NASA Astrophysics Data System (ADS)

    Liu, Jun; Wang, Yan; Li, Rongjian

    2016-06-01

    In this paper, we consider a linear parabolic PDE, use optimal quadratic spline collocation (QSC) methods for the space discretization, and apply the parareal technique in the time domain. Meanwhile, a deferred correction technique is used to improve the accuracy during the iterations. The error estimation is presented and the stability is analyzed. Numerical experiments, carried out on a parallel computer with 40 CPUs, are presented to exhibit the effectiveness of the hybrid algorithm.

  4. Spurious Modes in Spectral Collocation Methods with Two Non-Periodic Directions

    NASA Technical Reports Server (NTRS)

    Balachandar, S.; Madabhushi, Ravi K.

    1992-01-01

    A collocation implementation of the Kleiser-Schumann method in geometries with two non-periodic directions is shown to suffer from three spurious modes - line, column and checkerboard - contaminating the computed pressure field. The corner spurious modes are also present, but they do not affect the evaluation of pressure-related quantities. A simple methodology in the inversion of the influence matrix efficiently filters out these spurious modes.

  5. Collocation and integration of reprocessing and repositories: implications for aqueous flowsheets and waste management

    SciTech Connect

    Forsberg, C.; Lewis, L.

    2013-07-01

    It is an accident of history that the current model of the fuel cycle is a separate set of facilities connected by transportation. The question is whether collocation and integration of reprocessing and fuel fabrication with the repository significantly reduce the costs of a closed fuel cycle while improving system performance in terms of safety and long-term repository performance. This paper examines the question in terms of higher-level functional requirements of reprocessing systems and geological repositories.

  6. Numerical solutions of the reaction diffusion system by using exponential cubic B-spline collocation algorithms

    NASA Astrophysics Data System (ADS)

    Ersoy, Ozlem; Dag, Idris

    2015-12-01

    The solutions of the reaction-diffusion system are obtained by a collocation method based on exponential B-splines. The reaction-diffusion system thus turns into an iterative banded algebraic matrix equation, which is solved by way of the Thomas algorithm. The present methods are tested on both linear and nonlinear problems, and the results are documented and compared with some earlier studies using the L∞ and relative error norms.
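
    The banded solve mentioned above reduces, in the tridiagonal case, to the Thomas algorithm. The sketch below shows that solver on a generic tridiagonal system (the exponential B-spline collocation matrices themselves are not reproduced here):

```python
# Sketch: the Thomas algorithm for a tridiagonal system A x = d, the kind of
# banded solve the abstract mentions (the B-spline collocation setup itself
# is not reproduced here).
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system: a = sub-diagonal (len n-1), b = diagonal (len n),
    c = super-diagonal (len n-1), d = right-hand side (len n)."""
    n = len(b)
    cp = np.empty(n - 1)
    dp = np.empty(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i - 1] * cp[i - 1]
        if i < n - 1:
            cp[i] = c[i] / m
        dp[i] = (d[i] - a[i - 1] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Usage: a Poisson-like second-difference system as a stand-in for the banded matrix.
n = 50
a = np.full(n - 1, -1.0); b = np.full(n, 2.0); c = np.full(n - 1, -1.0)
d = np.ones(n)
x = thomas(a, b, c, d)
print(np.max(np.abs(np.diff(x, 2) + d[1:-1])))   # residual of the interior equations
```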

  7. Nodal collocation approximation for the multidimensional PL equations applied to transport source problems

    SciTech Connect

    Verdu, G.; Capilla, M.; Talavera, C. F.; Ginestar, D.

    2012-07-01

    PL equations are classical high order approximations to the transport equations which are based on the expansion of the angular dependence of the angular neutron flux and the nuclear cross sections in terms of spherical harmonics. A nodal collocation method is used to discretize the PL equations associated with a neutron source transport problem. The performance of the method is tested solving two 1D problems with analytical solution for the transport equation and a classical 2D problem. (authors)

  8. A fourth order spline collocation approach for a business cycle model

    NASA Astrophysics Data System (ADS)

    Sayfy, A.; Khoury, S.; Ibdah, H.

    2013-10-01

    A collocation approach, based on a fourth order cubic B-splines is presented for the numerical solution of a Kaleckian business cycle model formulated by a nonlinear delay differential equation. The equation is approximated and the nonlinearity is handled by employing an iterative scheme arising from Newton's method. It is shown that the model exhibits a conditionally dynamical stable cycle. The fourth-order rate of convergence of the scheme is verified numerically for different special cases.

  9. Single-grid spectral collocation for the Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Bernardi, Christine; Canuto, Claudio; Maday, Yvon; Metivet, Brigitte

    1988-01-01

    The aim of the paper is to study a collocation spectral method to approximate the Navier-Stokes equations: only one grid is used, which is built from the nodes of a Gauss-Lobatto quadrature formula, either of Legendre or of Chebyshev type. The convergence is proven for the Stokes problem provided with inhomogeneous Dirichlet conditions, then thoroughly analyzed for the Navier-Stokes equations. The practical implementation algorithm is presented, together with numerical results.

  10. Active vibration control of a sandwich plate by non-collocated positive position feedback

    NASA Astrophysics Data System (ADS)

    Ferrari, Giovanni; Amabili, Marco

    2015-04-01

    The active vibration control of a free rectangular sandwich plate by using the Positive Position Feedback (PPF) algorithm was experimentally investigated in a previous study. Four normal modes were controlled by four nearly collocated couples of piezoelectric sensors and actuators. The experimental results of the control showed some limitation, especially in the Multi-Input Multi-Output (MIMO) configuration. This was attributed to the specific type of sensors and their conditioning, as well as to the phase shifts present in the vibration at different points of the structure. An alternative approach is here undertaken by abandoning the configuration of quasi-perfect collocation between sensor and actuator. The positioning of the piezoelectric patches is still led by the strain energy value distribution on the plate; each couple of sensor and actuator is now placed on the same face of the plate but in two distinct positions, opposed and symmetrical with respect to the geometric center of the plate. Single-Input Single-Output (SISO) PPF is tested and the transfer function parameters of the controller are tuned according to the measured values of modal damping. Then the participation matrices necessary for the MIMO control algorithm are determined by means of a completely experimental procedure. PPF is able to mitigate the vibration of the first four natural modes, in spite of the rigid body motions due to the free boundary conditions. The amplitude reduction achieved with the non-collocated configuration is much larger than the one obtained with the nearby collocated one. The phase lags were addressed in the MIMO algorithm by correction phase delays, further increasing the performance of the controller.
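
    For reference, the PPF compensator used throughout this record has, in one common formulation, the second-order low-pass form

$$ C(s) = \frac{g\, \omega_f^2}{s^2 + 2 \zeta_f \omega_f s + \omega_f^2}, $$

    where the filter frequency $\omega_f$ is tuned near the targeted structural mode, $\zeta_f$ is the filter damping and $g$ the gain; the measured position signal is passed through this filter and fed back positively to the actuator. The specific parameter values tuned in the experiment are not reproduced here.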

  11. The simulation of far-field wavelets using frequency-domain air-gun array near-field wavelets

    NASA Astrophysics Data System (ADS)

    Song, Jian-Guo; Deng, Yong; Tong, Xin-Xin

    2013-12-01

    Air-gun arrays are used in marine seismic exploration. Far-field wavelets in subsurface media represent the stacking of single air-gun ideal wavelets. We derived single air-gun ideal wavelets using near-field wavelets recorded from near-field geophones and then synthesized them into far-field wavelets, which is critical for wavelet processing in marine seismic exploration. For this purpose, several algorithms are currently used to decompose and synthesize wavelets in the time domain. If the traveltime of single air-gun wavelets is not an integral multiple of the sampling interval, the complex and error-prone resampling of the seismic signals required by the time-domain method becomes necessary. Based on the relation between the frequency-domain phase and the time-domain time delay, we propose a method that first transforms the real near-field wavelet to the frequency domain via the Fourier transform, then decomposes and recomposes the wavelet spectrum in the frequency domain, and finally transforms it back to the time domain. Thus, the resampling problem is avoided, and single air-gun wavelets and far-field wavelets can be derived reliably. The effect of ghost reflections is also considered when decomposing the wavelet, and the ghost reflections are removed. Modeling and real data processing were used to demonstrate the feasibility of the proposed method.
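
    The relation between a time-domain delay and a frequency-domain linear phase, on which the method rests, can be illustrated directly: a non-integer-sample delay becomes a simple multiplication of the spectrum by a phase ramp, so no resampling is needed. The snippet below is a generic illustration with an assumed toy wavelet and delay, not the paper's processing flow:

```python
# Sketch: apply a non-integer-sample time delay to a wavelet as a linear phase
# ramp in the frequency domain, avoiding the resampling step needed in the
# time domain. The Ricker-like wavelet and the 10.35-sample delay are illustrative.
import numpy as np

def delay_signal(x, delay_samples):
    n = len(x)
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(n)                        # cycles per sample
    X *= np.exp(-2j * np.pi * freqs * delay_samples)  # time delay = linear phase
    return np.fft.irfft(X, n)

t = np.arange(256)
arg = np.pi * 0.05 * (t - 60)
wavelet = (1 - 2 * arg ** 2) * np.exp(-arg ** 2)      # toy Ricker-like source wavelet
shifted = delay_signal(wavelet, 10.35)                # 10.35-sample delay
print(np.argmax(wavelet), np.argmax(shifted))         # peak moves by about 10 samples
```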

  12. Wavelet analysis deformation monitoring data of high-speed railway bridge

    NASA Astrophysics Data System (ADS)

    Tang, ShiHua; Huang, Qing; Zhou, Conglin; Xu, HongWei; Liu, YinTao; Li, FeiDa

    2015-12-01

    Deformation monitoring data of high-speed railway bridges are inevitably affected by noise pollution. A deformation monitoring point on a high-speed railway bridge was measured over a long period using a Sokkia SDL30 electronic level, yielding a large number of deformation monitoring data that contain considerable noise. On the MATLAB software platform, 120 groups of deformation monitoring data were subjected to wavelet denoising analysis. The sym6 and db6 wavelet basis functions were selected to analyze and remove the noise. The original signal was decomposed into three wavelet levels, containing high-frequency and low-frequency coefficients; the high-frequency coefficients carry most of the noise. Adaptive soft- and hard-threshold methods were applied to the high-frequency coefficients, which, after most of the noise was removed, were combined with the low-frequency coefficients to reconstruct the denoised signal. The root mean square error (RMSE) and signal-to-noise ratio (SNR) were used as evaluation indices for denoising: a smaller RMSE and a larger SNR indicate better denoising. The experimental analysis supports the following conclusions: the db6 wavelet basis function with an adaptive soft-threshold method gives the best denoising result, with the minimum RMSE and the maximum SNR. Moreover, the reconstructed signal after wavelet denoising is smoother than the original signal, with the noise removed and the useful signal retained. Compared with the other three methods, this method performs well in denoising: it not only retains the useful signal in the original data but also achieves the goal of removing noise. It therefore has strong practical value in actual deformation monitoring

  13. Wavelet phase estimation using ant colony optimization algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Shangxu; Yuan, Sanyi; Ma, Ming; Zhang, Rui; Luo, Chunmei

    2015-11-01

    Eliminating the seismic wavelet is important in seismic high-resolution processing. However, artifacts may arise in seismic interpretation when the wavelet phase is inaccurately estimated. Therefore, we propose a frequency-dependent wavelet phase estimation method based on the ant colony optimization (ACO) algorithm, which has global optimization capability. The wavelet phase is optimized with the ACO algorithm by fitting nearby-well seismic traces to well-log data. Our proposed method can rapidly produce a frequency-dependent wavelet phase and optimize the seismic-to-well tie, particularly for weak signals. Synthetic examples demonstrate the effectiveness of the proposed ACO-based wavelet phase estimation method, even in the presence of colored noise. A real-data example illustrates that seismic deconvolution using an optimum mixed-phase wavelet can provide more information than that using an optimum constant-phase wavelet.

  14. Wavelet transforms as solutions of partial differential equations

    SciTech Connect

    Zweig, G.

    1997-10-01

    This is the final report of a three-year, Laboratory Directed Research and Development (LDRD) project at Los Alamos National Laboratory (LANL). Wavelet transforms are useful in representing transients whose time and frequency structure reflect the dynamics of an underlying physical system. Speech sound, pressure in turbulent fluid flow, or engine sound in automobiles are excellent candidates for wavelet analysis. This project focused on (1) methods for choosing the parent wavelet for a continuous wavelet transform in pattern recognition applications and (2) the more efficient computation of continuous wavelet transforms by understanding the relationship between discrete wavelet transforms and discretized continuous wavelet transforms. The most interesting result of this research is the finding that the generalized wave equation, on which the continuous wavelet transform is based, can be used to understand phenomena that relate to the process of hearing.

  15. A novel stochastic collocation method for uncertainty propagation in complex mechanical systems

    NASA Astrophysics Data System (ADS)

    Qi, WuChao; Tian, SuMei; Qiu, ZhiPing

    2015-02-01

    This paper presents a novel stochastic collocation method based on the equivalent weak form of a multivariate function integral to quantify and manage uncertainties in complex mechanical systems. The proposed method, which combines the advantages of the response surface method and the traditional stochastic collocation method, only sets integration points along the guide lines of the response surface. The statistics of an engineering problem with many uncertain parameters are then transformed into a linear combination of the statistics of simple functions. Furthermore, a simple way of determining the weight-factor sets is discussed in detail, and the weight-factor sets of two commonly used probability distribution types are given in table form. Studies of the computational accuracy and effort show that a good balance between accuracy and computational cost is achieved. It should be noted that the method is a non-gradient, non-intrusive algorithm with strong portability. For the sake of validating the procedure, three numerical examples concerning a mathematical function with an analytical expression, the structural design of a straight wing, and the flutter analysis of a composite wing are used to show the effectiveness of the guided stochastic collocation method.
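
    For contrast with the proposed method, the traditional stochastic collocation baseline it builds on can be sketched in a few lines: evaluate the model at quadrature nodes of the input distribution and form statistics from weighted sums. The example below assumes a single standard-normal input and uses Gauss-Hermite nodes; it is not the paper's guided, weak-form variant:

```python
# Sketch: traditional stochastic collocation for one Gaussian input parameter,
# using Gauss-Hermite nodes; the mean and variance of the response follow from
# weighted sums of model evaluations. (Textbook baseline, not the paper's
# guided weak-form method; the toy model below is an assumption.)
import numpy as np

def model(xi):
    """Toy response of a mechanical system to an uncertain stiffness-like input."""
    return 1.0 / (1.0 + 0.3 * xi + 0.05 * xi ** 2)

nodes, weights = np.polynomial.hermite_e.hermegauss(7)   # probabilists' Hermite rule
weights = weights / np.sqrt(2.0 * np.pi)                  # normalize to a standard normal
vals = model(nodes)
mean = np.sum(weights * vals)
var = np.sum(weights * (vals - mean) ** 2)
print(mean, var)
```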

  16. Polyvinylidene fluoride film sensors in collocated feedback structural control: application for suppressing impact-induced disturbances.

    PubMed

    Ma, Chien-Ching; Chuang, Kuo-Chih; Pan, Shan-Ying

    2011-12-01

    Polyvinylidene fluoride (PVDF) films are light, flexible, and have high piezoelectricity. Because of these advantages, they have been widely used as sensors in applications such as underwater investigation, nondestructive damage detection, robotics, and active vibration suppression. PVDF sensors are especially preferred over conventional strain gauges in active vibration control because the PVDF sensors are easy to cut into different sizes or shapes as piezoelectric actuators and they can then be placed as collocated pairs. In this work, to focus on demonstrating the dynamic sensing performance of the PVDF film sensor, we revisit the active vibration control problem of a cantilever beam using a collocated lead zirconate titanate (PZT) actuator/PVDF film sensor pair. Before applying active vibration control, the measurement characteristics of the PVDF film sensor are studied by simultaneous comparison with a strain gauge. The loading effect of the piezoelectric actuator on the cantilever beam is also investigated in this paper. Finally, four simple, robust active vibration controllers are employed with the collocated PZT/PVDF pair to suppress vibration of the cantilever beam subjected to impact loadings. The four controllers are the velocity feedback controller, the integral resonant controller (IRC), the resonant controller, and the positive position feedback (PPF) controller. Suppression of impact disturbances is especially suitable for the purpose of demonstrating the dynamic sensing performance of the PVDF sensor. The experimental results also provide suggestions for choosing between the previously mentioned controllers, which have been proven to be effective in suppressing impact-induced vibrations. PMID:23443690

  17. Least squares collocation applied to local gravimetric solutions from satellite gravity gradiometry data

    NASA Technical Reports Server (NTRS)

    Robbins, J. W.

    1985-01-01

    An autonomous spaceborne gravity gradiometer mission is being considered as a post Geopotential Research Mission project. The introduction of satellite gradiometry data to geodesy is expected to improve solid earth gravity models. The possibility of utilizing gradiometer data for the determination of pertinent gravimetric quantities on a local basis is explored. The analytical technique of least squares collocation is investigated for its usefulness in local solutions of this type. It is assumed, in the error analysis, that the vertical gravity gradient component of the gradient tensor is used as the raw data signal from which the corresponding reference gradients are removed to create the centered observations required in the collocation solution. The reference gradients are computed from a high degree and order geopotential model. The solution can be made in terms of mean or point gravity anomalies, height anomalies, or other useful gravimetric quantities depending on the choice of covariance types. Selected for this study were 30' x 30' mean gravity and height anomalies. Existing software and new software are utilized to implement the collocation technique. It was determined that satellite gradiometry data at an altitude of 200 km can be used successfully for the determination of 30' x 30' mean gravity anomalies to an accuracy of 9.2 mgal with this algorithm. It is shown that the resulting accuracy estimates are sensitive to gravity model coefficient uncertainties, data reduction assumptions and satellite mission parameters.
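
    For readers unfamiliar with the technique, the textbook form of the least-squares collocation predictor (a standard result, not quoted from this report) estimates a signal s from the centered observations l and the signal, observation and noise covariances:

        \hat{s} = C_{s\ell}\,\bigl(C_{\ell\ell} + C_{nn}\bigr)^{-1}\ell,
        \qquad
        E_{\hat{s}\hat{s}} = C_{ss} - C_{s\ell}\,\bigl(C_{\ell\ell} + C_{nn}\bigr)^{-1}C_{\ell s},

    where the covariance matrices are built from the chosen covariance functions; the choice of the signal-observation cross-covariance is what lets the same observations predict mean gravity anomalies, height anomalies, or other gravimetric quantities.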

  18. Wavelet-based denoising method for real phonocardiography signal recorded by mobile devices in noisy environment.

    PubMed

    Gradolewski, Dawid; Redlarski, Grzegorz

    2014-09-01

    The main obstacle in the development of intelligent autodiagnosis medical systems based on the analysis of phonocardiography (PCG) signals is noise. The noise can be caused by digestive and respiration sounds, movements or even signals from the surrounding environment, and it is characterized by a wide frequency and intensity spectrum. This spectrum overlaps the heart tone spectrum, which makes the problem of PCG signal filtering complex. The most common methods for filtering such signals are wavelet denoising algorithms. In previous studies, the disturbances were simulated by Gaussian white noise in order to determine the optimum wavelet denoising parameters. However, this paper shows that this noise has a variable character. Therefore, the purpose of this paper is the adaptation of a wavelet denoising algorithm for the filtration of real PCG signal disturbances from signals recorded by mobile devices in a noisy environment. The best results were obtained for the Coif5 wavelet at the 10th decomposition level with the use of a minimaxi threshold selection algorithm and mln rescaling function. The performance of the algorithm was tested on four pathological heart sounds: early systolic murmur, ejection click, late systolic murmur and pansystolic murmur. PMID:25038586
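
    A minimal PyWavelets sketch of this kind of pipeline is given below. It assumes the Coif5 wavelet and 10 decomposition levels mentioned in the abstract, but substitutes a per-level universal threshold for the MATLAB-style minimaxi/mln rules, so it should be read as an approximation rather than the authors' exact procedure.

        import numpy as np
        import pywt

        def denoise_pcg(signal, wavelet='coif5', level=10):
            coeffs = pywt.wavedec(signal, wavelet, level=level)
            out = [coeffs[0]]                                        # keep the approximation band
            for d in coeffs[1:]:
                sigma = np.median(np.abs(d)) / 0.6745                # per-level noise estimate
                thr = sigma * np.sqrt(2.0 * np.log(max(len(d), 2)))  # universal-threshold stand-in
                out.append(pywt.threshold(d, thr, mode='soft'))
            return pywt.waverec(out, wavelet)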

  19. Maximum Likelihood Wavelet Density Estimation With Applications to Image and Shape Matching

    PubMed Central

    Peter, Adrian M.; Rangarajan, Anand

    2010-01-01

    Density estimation for observational data plays an integral role in a broad spectrum of applications, e.g., statistical data analysis and information-theoretic image registration. Of late, wavelet-based density estimators have gained in popularity due to their ability to approximate a large class of functions, adapting well to difficult situations such as when densities exhibit abrupt changes. The decision to work with wavelet density estimators brings along with it theoretical considerations (e.g., non-negativity, integrability) and empirical issues (e.g., computation of basis coefficients) that must be addressed in order to obtain a bona fide density. In this paper, we present a new method to accurately estimate a non-negative density which directly addresses many of the problems in practical wavelet density estimation. We cast the estimation procedure in a maximum likelihood framework which estimates the square root of the density, √p, allowing us to obtain the natural non-negative density representation (√p)². Analysis of this method will bring to light a remarkable theoretical connection with the Fisher information of the density and, consequently, lead to an efficient constrained optimization procedure to estimate the wavelet coefficients. We illustrate the effectiveness of the algorithm by evaluating its performance on mutual information-based image registration, shape point set alignment, and empirical comparisons to known densities. The present method is also compared to fixed and variable bandwidth kernel density estimators. PMID:18390355

  20. Image wavelet decomposition and applications

    NASA Technical Reports Server (NTRS)

    Treil, N.; Mallat, S.; Bajcsy, R.

    1989-01-01

    The general problem of computer vision has been investigated for more than 20 years and is still one of the most challenging fields in artificial intelligence. Indeed, taking a look at the human visual system can give us an idea of the complexity of any solution to the problem of visual recognition. This general task can be decomposed into a whole hierarchy of problems ranging from pixel processing to high level segmentation and complex object recognition. Contrasting an image at different representations provides useful information such as edges. An example of low level signal and image processing using the theory of wavelets is introduced which provides the basis for multiresolution representation. Like the human brain, we use a multiorientation process which detects features independently in different orientation sectors. So, images of the same orientation but of different resolutions are contrasted to gather information about an image. An interesting image representation using energy zero crossings is developed. This representation is shown to be experimentally complete and leads to some higher level applications such as edge and corner finding, which in turn provides two basic steps to image segmentation. The possibilities of feedback between different levels of processing are also discussed.

  1. Generalizing Lifted Tensor-Product Wavelets to Irregular Polygonal Domains

    SciTech Connect

    Bertram, M.; Duchaineau, M.A.; Hamann, B.; Joy, K.I.

    2002-04-11

    We present a new construction approach for symmetric lifted B-spline wavelets on irregular polygonal control meshes defining two-manifold topologies. Polygonal control meshes are recursively refined by stationary subdivision rules and converge to piecewise polynomial limit surfaces. At every subdivision level, our wavelet transforms provide an efficient way to add geometric details that are expanded from wavelet coefficients. Both wavelet decomposition and reconstruction operations are based on local lifting steps and have linear-time complexity.

  2. Analysis of autostereoscopic three-dimensional images using multiview wavelets.

    PubMed

    Saveljev, Vladimir; Palchikova, Irina

    2016-08-10

    We propose that multiview wavelets can be used in processing multiview images. The reference functions for the synthesis/analysis of multiview images are described. The synthesized binary images were observed experimentally as three-dimensional visual images. The symmetric multiview B-spline wavelets are proposed. The locations recognized in the continuous wavelet transform correspond to the layout of the test objects. The proposed wavelets can be applied to the multiview, integral, and plenoptic images. PMID:27534470

  3. Composite wavelet representations for reconstruction of missing data

    NASA Astrophysics Data System (ADS)

    Czaja, Wojciech; Dobrosotskaya, Julia; Manning, Benjamin

    2013-05-01

    We shall introduce a novel methodology for data reconstruction and recovery, based on composite wavelet representations. These representations include shearlets and crystallographic wavelets, among others, and they allow for an increased directional sensitivity in comparison with the standard multiscale techniques. Our new approach allows us to recover missing data, due to sparsity of composite wavelet representations, especially when compared to inpainting algorithms induced by traditional wavelet representations, and also due to the flexibility of our variational approach.

  4. Undecimated Wavelet Transforms for Image De-noising

    SciTech Connect

    Gyaourova, A; Kamath, C; Fodor, I K

    2002-11-19

    A few different approaches exist for computing the undecimated wavelet transform. In this work we construct three undecimated schemes and evaluate their performance for image noise reduction. We use standard wavelet-based de-noising techniques and compare the performance of our algorithms with the original undecimated wavelet transform, as well as with the decimated wavelet transform. Our experiments show that our algorithms achieve a better noise-removal/blurring ratio.
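
    The stationary (undecimated) transform in PyWavelets gives a compact way to sketch the kind of de-noising compared in this work; the wavelet, decomposition level and fixed soft threshold below are illustrative assumptions, not the paper's three schemes.

        import pywt

        def swt_denoise(image, wavelet='db2', level=2, thr=20.0):
            # Image sides must be divisible by 2**level for the stationary transform.
            coeffs = pywt.swt2(image, wavelet, level=level)
            shrunk = [(cA, tuple(pywt.threshold(d, thr, mode='soft') for d in details))
                      for cA, details in coeffs]
            return pywt.iswt2(shrunk, wavelet)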

  5. Wavelet processing techniques for digital mammography

    NASA Astrophysics Data System (ADS)

    Laine, Andrew F.; Song, Shuwu

    1992-09-01

    This paper introduces a novel approach for accomplishing mammographic feature analysis through multiresolution representations. We show that efficient (nonredundant) representations may be identified from digital mammography and used to enhance specific mammographic features within a continuum of scale space. The multiresolution decomposition of wavelet transforms provides a natural hierarchy in which to embed an interactive paradigm for accomplishing scale space feature analysis. Similar to traditional coarse to fine matching strategies, the radiologist may first choose to look for coarse features (e.g., dominant mass) within low frequency levels of a wavelet transform and later examine finer features (e.g., microcalcifications) at higher frequency levels. In addition, features may be extracted by applying geometric constraints within each level of the transform. Choosing wavelets (or analyzing functions) that are simultaneously localized in both space and frequency results in a powerful methodology for image analysis. Multiresolution and orientation selectivity, known biological mechanisms in primate vision, are ingrained in wavelet representations and inspire the techniques presented in this paper. Our approach includes local analysis of complete multiscale representations. Mammograms are reconstructed from wavelet representations, enhanced by linear, exponential and constant weight functions through scale space. By improving the visualization of breast pathology we can improve the chances of early detection of breast cancers (improve quality) while requiring less time to evaluate mammograms for most patients (lower costs).

  6. Pseudo-Gabor wavelet for face recognition

    NASA Astrophysics Data System (ADS)

    Xie, Xudong; Liu, Wentao; Lam, Kin-Man

    2013-04-01

    An efficient face-recognition algorithm is proposed, which not only possesses the advantages of linear subspace analysis approaches, such as low computational complexity, but also has the advantage of the high recognition performance of wavelet-based algorithms. Based on the linearity of Gabor-wavelet transformation and some basic assumptions on face images, we can extract pseudo-Gabor features from the face images without performing any complex Gabor-wavelet transformations. The computational complexity can therefore be reduced while a high recognition performance is still maintained by using the principal component analysis (PCA) method. The proposed algorithm is evaluated based on the Yale database, the Caltech database, the ORL database, the AR database, and the Facial Recognition Technology database, and is compared with several different face recognition methods such as PCA, Gabor wavelets plus PCA, kernel PCA, locality preserving projection, and dual-tree complex wavelet transformation plus PCA. Experiments show that consistent and promising results are obtained.

  7. Proximity sensing with wavelet generated video

    NASA Astrophysics Data System (ADS)

    Noel, Steven E.; Szu, Harold H.

    1998-10-01

    In this paper we introduce wavelet video processing of proximity sensor signals. Proximity sensing is required for a wide range of military and commercial applications, including weapon fuzing, robotics, and automotive collision avoidance. While our proposed method temporarily increases signal dimension, it eventually performs data compression through the extraction of salient signal features. This data compression in turn reduces the necessary complexity of the remaining computational processing. We demonstrate our method of wavelet video processing via the proximity sensing of nearby objects through their Doppler shift. In doing this we perform a continuous wavelet transform on the Doppler signal, after subjecting it to a time-varying window. We then extract signal features from the resulting wavelet video, which we use as input to pattern recognition neural networks. The networks are trained to estimate the time-varying Doppler shift from the extracted features. We test the estimation performance of the networks, using different degrees of nonlinearity in the frequency shift over time and different levels of noise. We give the analytical result that the signal-to-noise enhancement of our proposed method is at least as good as the square root of the number of video frames, although more work is needed to completely quantify this. Real-time wavelet-based video processing and compression technology recently developed under the DOD WAVENET program offers an exciting opportunity to more fully investigate our proposed method.

  8. Segmentation of dermoscopy images using wavelet networks.

    PubMed

    Sadri, Amir Reza; Zekri, Maryam; Sadri, Saeed; Gheissari, Niloofar; Mokhtari, Mojgan; Kolahdouzan, Farzaneh

    2013-04-01

    This paper introduces a new approach for the segmentation of skin lesions in dermoscopic images based on a wavelet network (WN). The WN presented here is a member of fixed-grid WNs that is formed with no need of training. In this WN, after formation of the wavelet lattice, the shift and scale parameters of the wavelets are determined in two screening stages and effective wavelets are selected; the orthogonal least squares algorithm is then used to calculate the network weights and to optimize the network structure. The two screening stages increase the globality of the wavelet lattice and provide a better estimation of the function, especially at larger scales. The R, G, and B values of a dermoscopy image are taken as the network inputs for the formation of the network structure. The image is then segmented and the exact boundary of the skin lesion is determined accordingly. The segmentation algorithm was applied to 30 dermoscopic images and evaluated with 11 different metrics, using the segmentation result obtained by a skilled pathologist as the ground truth. Experimental results show that our method acts more effectively in comparison with some modern techniques that have been successfully used in many medical imaging problems. PMID:23193305

  9. Wavelet based detection of manatee vocalizations

    NASA Astrophysics Data System (ADS)

    Gur, Berke M.; Niezrecki, Christopher

    2005-04-01

    The West Indian manatee (Trichechus manatus latirostris) has become endangered partly because of watercraft collisions in Florida's coastal waterways. Several boater warning systems, based upon manatee vocalizations, have been proposed to reduce the number of collisions. Three detection methods based on the Fourier transform (threshold, harmonic content and autocorrelation methods) were previously suggested and tested. In the last decade, the wavelet transform has emerged as an alternative to the Fourier transform and has been successfully applied in various fields of science and engineering including the acoustic detection of dolphin vocalizations. As of yet, no prior research has been conducted in analyzing manatee vocalizations using the wavelet transform. Within this study, the wavelet transform is used as an alternative to the Fourier transform in detecting manatee vocalizations. The wavelet coefficients are analyzed and tested against a specified criterion to determine the existence of a manatee call. The performance of the method presented is tested on the same data previously used in the prior studies, and the results are compared. Preliminary results indicate that using the wavelet transform as a signal processing technique to detect manatee vocalizations shows great promise.
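
    In the spirit of the approach described (a wavelet transform followed by a detection criterion on the coefficients), the sketch below flags time samples whose Morlet-CWT band energy exceeds a multiple of its median; the frequency band, wavelet and threshold factor are assumptions for illustration, not the study's calibrated criterion.

        import numpy as np
        import pywt

        def detect_calls(x, fs, fmin=2000.0, fmax=5000.0, factor=5.0):
            freqs = np.linspace(fmin, fmax, 32)
            scales = pywt.central_frequency('morl') * fs / freqs       # map target frequencies to scales
            coef, _ = pywt.cwt(x, scales, 'morl', sampling_period=1.0 / fs)
            energy = np.mean(np.abs(coef) ** 2, axis=0)                # per-sample energy in the band
            return energy > factor * np.median(energy)                 # boolean detection mask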

  10. Wavelet formulation of the polarizable continuum model. II. Use of piecewise bilinear boundary elements.

    PubMed

    Bugeanu, Monica; Di Remigio, Roberto; Mozgawa, Krzysztof; Reine, Simen Sommerfelt; Harbrecht, Helmut; Frediani, Luca

    2015-12-21

    The simplicity of dielectric continuum models has made them a standard tool in almost any Quantum Chemistry (QC) package. Despite being intuitive from a physical point of view, the actual electrostatic problem at the cavity boundary is challenging: the underlying boundary integral equations depend on singular, long-range operators. The parametrization of the cavity boundary should be molecular-shaped, smooth and differentiable. Even the most advanced implementations, based on the integral equation formulation (IEF) of the polarizable continuum model (PCM), generally lead to working equations which do not guarantee convergence to the exact solution and/or might become numerically unstable in the limit of large refinement of the molecular cavity (small tesserae). This is because they generally make use of a surface parametrization with cusps (interlocking spheres) and employ collocation methods for the discretization (point charges). Wavelets on a smooth cavity are an attractive alternative to consider: for the operators involved, they lead to highly sparse matrices and precise error control. Moreover, by making use of a bilinear basis for the representation of operators and functions on the cavity boundary, all equations can be differentiated to enable the computation of geometrical derivatives. In this contribution, we present our implementation of the IEFPCM with bilinear wavelets on a smooth cavity boundary. The implementation has been carried out in our module PCMSolver and interfaced with LSDalton, demonstrating the accuracy of the method both for the electrostatic solvation energy and for linear response properties. In addition, the implementation in a module makes our framework readily available to any QC software with minimal effort. PMID:26256401

  11. Adaptive compression of image data

    NASA Astrophysics Data System (ADS)

    Hludov, Sergei; Schroeter, Claus; Meinel, Christoph

    1998-09-01

    In this paper we will introduce a method of analyzing images, a criterion to differentiate between images, a compression method for medical images in digital form based on the classification of the image bit plane, and finally an algorithm for adaptive image compression. The analysis of the image content is based on a valuation of the relative number and absolute values of the wavelet coefficients. A comparison between the original image and the decoded image is made using a difference criterion calculated from the wavelet coefficients of the original image and of the decoded image at the first and second iteration steps of the wavelet transformation. The adaptive image compression algorithm is based on a classification of digital images into three classes, followed by compression of the image with a suitable compression algorithm. Furthermore, we show that applying these classification rules to DICOM images is a very effective way to do adaptive compression. The image classification algorithm and the image compression algorithms have been implemented in JAVA.

  12. Wavelet extractor: A Bayesian well-tie and wavelet extraction program

    NASA Astrophysics Data System (ADS)

    Gunning, James; Glinsky, Michael E.

    2006-06-01

    We introduce a new open-source toolkit for the well-tie or wavelet extraction problem of estimating seismic wavelets from seismic data, time-to-depth information, and well-log suites. The wavelet extraction model is formulated as a Bayesian inverse problem, and the software will simultaneously estimate wavelet coefficients, other parameters associated with uncertainty in the time-to-depth mapping, positioning errors in the seismic imaging, and useful amplitude-variation-with-offset (AVO) related parameters in multi-stack extractions. It is capable of multi-well, multi-stack extractions, and uses continuous seismic data-cube interpolation to cope with the problem of arbitrary well paths. Velocity constraints in the form of checkshot data, interpreted markers, and sonic logs are integrated in a natural way. The Bayesian formulation allows computation of full posterior uncertainties of the model parameters, and the important problem of the uncertain wavelet span is addressed using a multi-model posterior developed from Bayesian model selection theory. The wavelet extraction tool is distributed as part of the Delivery seismic inversion toolkit. A simple log and seismic viewing tool is included in the distribution. The code is written in Java, and thus platform independent, but the Seismic Unix (SU) data model makes the inversion particularly suited to Unix/Linux environments. It is a natural companion piece of software to Delivery, having the capacity to produce maximum likelihood wavelet and noise estimates, but will also be of significant utility to practitioners wanting to produce wavelet estimates for other inversion codes or purposes. The generation of full parameter uncertainties is a crucial function for workers wishing to investigate questions of wavelet stability before proceeding to more advanced inversion studies.

  13. Wavelet Analysis of Space Solar Telescope Images

    NASA Astrophysics Data System (ADS)

    Zhu, Xi-An; Jin, Sheng-Zhen; Wang, Jing-Yu; Ning, Shu-Nian

    2003-12-01

    The scientific satellite SST (Space Solar Telescope) is an important research project strongly supported by the Chinese Academy of Sciences. Every day, SST acquires 50 GB of data (after processing), but only 10 GB can be transmitted to the ground because of the limited time of satellite passage and limited channel volume. Therefore, the data must be compressed before transmission. Wavelet analysis is a new technique developed over the last 10 years, with great potential for application. We start with a brief introduction to the essential principles of wavelet analysis, and then describe the main idea of embedded zerotree wavelet coding, used for compressing the SST images. The results show that this coding is adequate for the job.

  14. Wavelet Analysis for Wind Fields Estimation

    PubMed Central

    Leite, Gladeston C.; Ushizima, Daniela M.; Medeiros, Fátima N. S.; de Lima, Gilson G.

    2010-01-01

    Wind field analysis from synthetic aperture radar images allows the estimation of wind direction and speed based on image descriptors. In this paper, we propose a framework to automate wind direction retrieval based on wavelet decomposition associated with spectral processing. We extend existing undecimated wavelet transform approaches, by including à trous with B3 spline scaling function, in addition to other wavelet bases as Gabor and Mexican-hat. The purpose is to extract more reliable directional information, when wind speed values range from 5 to 10 ms−1. Using C-band empirical models, associated with the estimated directional information, we calculate local wind speed values and compare our results with QuikSCAT scatterometer data. The proposed approach has potential application in the evaluation of oil spills and wind farms. PMID:22219699

  15. Image encryption using the fractional wavelet transform

    NASA Astrophysics Data System (ADS)

    Vilardy, Juan M.; Useche, J.; Torres, C. O.; Mattos, L.

    2011-01-01

    In this paper a technique for the coding of digital images is developed using the Fractional Wavelet Transform (FWT) and random phase masks (RPMs). The digital image to be encrypted is transformed with the FWT; the coefficients resulting from the FWT (approximation and horizontal, vertical and diagonal details) are then each multiplied by a different, statistically independent RPM, and an Inverse Wavelet Transform (IWT) is applied to these products, yielding the encrypted digital image. The decryption technique is the same encryption technique applied in the reverse sense. This technique provides immediate security advantages over conventional techniques: the mother wavelet family and the fractional orders associated with the FWT are additional keys that make access to the information difficult for an unauthorized person (besides the RPMs used), so the level of encryption security is greatly increased. The mathematical support for the use of the FWT in the computational encryption algorithm is also developed in this work.
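
    Because the fractional wavelet transform is not available in common libraries, the sketch below illustrates only the subband/phase-mask structure of the scheme, using an ordinary single-level DWT in its place; the wavelet choice and the seed-as-key convention are assumptions.

        import numpy as np
        import pywt

        def encrypt(image, wavelet='db2', seed=1234):
            rng = np.random.default_rng(seed)                           # seed plays the role of a key
            cA, (cH, cV, cD) = pywt.dwt2(np.asarray(image, dtype=float), wavelet)
            masked = [c * np.exp(2j * np.pi * rng.random(c.shape))      # one independent RPM per subband
                      for c in (cA, cH, cV, cD)]
            return pywt.idwt2((masked[0], tuple(masked[1:])), wavelet)  # complex-valued cipher image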

  16. Wavelet transform in electrocardiography--data compression.

    PubMed

    Provazník, I; Kozumplík, J

    1997-06-01

    An application of the wavelet transform to electrocardiography is described in the paper. The transform is used as the first stage of a lossy compression algorithm for efficient coding of rest ECG signals. The proposed technique is based on the decomposition of the ECG signal into a set of basic functions covering the time-frequency domain, so the non-stationary character of ECG data is taken into account. Some of the time-frequency signal components are removed because of their low influence on signal characteristics. The remaining components are efficiently coded by quantization, composition into a sequence of coefficients, and compression by a run-length coder and an entropic Huffman coder. The proposed wavelet-based compression algorithm can compress data to an average code length of about 1 bit/sample. The algorithm can also be implemented in a real-time processing system when the wavelet transform is computed by the fast linear filters described in the paper. PMID:9291025
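
    A condensed PyWavelets sketch of the decompose/discard/quantize stages is shown below; the wavelet, level, retention fraction and quantization step are assumptions, and the run-length and Huffman entropy-coding stages are only indicated by a comment.

        import numpy as np
        import pywt

        def compress_ecg(ecg, wavelet='bior4.4', level=5, keep=0.10, q_step=0.5):
            coeffs = pywt.wavedec(ecg, wavelet, level=level)
            arr, slices = pywt.coeffs_to_array(coeffs)
            cutoff = np.quantile(np.abs(arr), 1.0 - keep)        # keep only the largest coefficients
            arr[np.abs(arr) < cutoff] = 0.0
            q = np.round(arr / q_step).astype(np.int16)          # uniform quantization
            # q (mostly zeros) would then be passed to a run-length coder and a Huffman coder.
            rec = pywt.waverec(pywt.array_to_coeffs(q * q_step, slices, output_format='wavedec'),
                               wavelet)
            return q, rec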

  17. Wavelet analysis for wind fields estimation.

    PubMed

    Leite, Gladeston C; Ushizima, Daniela M; Medeiros, Fátima N S; de Lima, Gilson G

    2010-01-01

    Wind field analysis from synthetic aperture radar images allows the estimation of wind direction and speed based on image descriptors. In this paper, we propose a framework to automate wind direction retrieval based on wavelet decomposition associated with spectral processing. We extend existing undecimated wavelet transform approaches, by including à trous with B(3) spline scaling function, in addition to other wavelet bases as Gabor and Mexican-hat. The purpose is to extract more reliable directional information, when wind speed values range from 5 to 10 ms(-1). Using C-band empirical models, associated with the estimated directional information, we calculate local wind speed values and compare our results with QuikSCAT scatterometer data. The proposed approach has potential application in the evaluation of oil spills and wind farms. PMID:22219699

  18. Nature's statistical symmetries, a characterization by wavelets.

    SciTech Connect

    Davis, A. B.

    2001-01-01

    Wavelets are the mathematical equivalent of a microscope, a means of looking at more or less detail in data. By applying wavelet transforms to remote sensing data (satellite images, atmospheric profiles, etc.), we can discover symmetries in Nature's ways of changing in time and displaying a highly variable environment at any given time. These symmetries are not exact but statistical. The most intriguing one is 'scale-invariance', which describes how spatial statistics collected over a wide range of scales (using wavelets) follow simple power laws with respect to the scale parameter. The geometrical counterparts of statistical scale-invariance are the random fractals so often observed in Nature. This wavelet-based exploration of natural symmetry will be illustrated with clouds,

  19. Wavelet-assisted volume ray casting.

    PubMed

    He, T

    1998-01-01

    Volume rendering is an important technique for computational biology. In this paper we propose a new wavelet-assisted volume ray casting algorithm. The main idea is to use the wavelet coefficients for detecting the local frequency, and to decide the appropriate sampling rate along the ray according to the maximum frequency. Our algorithm first applies the 3D discrete wavelet transform to the volume, then creates an index volume to indicate the necessary sampling distance at each voxel. During ray casting, the original volume is traversed in the spatial domain, while the index volume is used to decide the appropriate sampling distance. We demonstrate that our algorithm provides a framework for approximating the volume rendering at different levels of quality in a rapid and controlled way. PMID:9697179

  20. A wavelet Galerkin method employing B-spline bases for solid mechanics problems without the use of a fictitious domain

    NASA Astrophysics Data System (ADS)

    Tanaka, Satoyuki; Okada, Hiroshi; Okazawa, Shigenobu

    2012-07-01

    This study develops a wavelet Galerkin method (WGM) that uses B-spline wavelet bases for application to solid mechanics problems. A fictitious domain is often adopted to treat general boundaries in WGMs. In the analysis, the body is extended to its exterior but very low stiffness is applied to the exterior region. The stiffness matrix in the WGM becomes singular without the use of a fictitious domain. The problem arises from the lack of linear independence of the basis functions. A technique to remove basis functions that can be represented by the superposition of the other basis functions is proposed. The basis functions are automatically eliminated in the preconditioning step. An adaptive strategy is developed using the proposed technique. The solution is refined by superposing finer wavelet functions. Numerical examples of solid mechanics problems are presented to demonstrate the multiresolution properties of the WGM.

  1. Retinal optical coherence tomography image enhancement via shrinkage denoising using double-density dual-tree complex wavelet transform

    NASA Astrophysics Data System (ADS)

    Chitchian, Shahab; Mayer, Markus A.; Boretsky, Adam R.; van Kuijk, Frederik J.; Motamedi, Massoud

    2012-11-01

    Image enhancement of retinal structures, in optical coherence tomography (OCT) scans through denoising, has the potential to aid in the diagnosis of several eye diseases. In this paper, a locally adaptive denoising algorithm using the double-density dual-tree complex wavelet transform, a combination of the double-density wavelet transform and the dual-tree complex wavelet transform, is applied to reduce speckle noise in OCT images of the retina. The algorithm overcomes the limitations of the commonly used multiple-frame averaging technique, namely the limited number of frames that can be recorded due to eye movements, by providing comparable image quality in significantly less acquisition time, an order of magnitude less than the averaging method. In addition, improvements in image quality metrics and a 5 dB increase in the signal-to-noise ratio are attained.

  2. Retinal optical coherence tomography image enhancement via shrinkage denoising using double-density dual-tree complex wavelet transform

    PubMed Central

    Mayer, Markus A.; Boretsky, Adam R.; van Kuijk, Frederik J.; Motamedi, Massoud

    2012-01-01

    Image enhancement of retinal structures, in optical coherence tomography (OCT) scans through denoising, has the potential to aid in the diagnosis of several eye diseases. In this paper, a locally adaptive denoising algorithm using the double-density dual-tree complex wavelet transform, a combination of the double-density wavelet transform and the dual-tree complex wavelet transform, is applied to reduce speckle noise in OCT images of the retina. The algorithm overcomes the limitations of the commonly used multiple-frame averaging technique, namely the limited number of frames that can be recorded due to eye movements, by providing comparable image quality in significantly less acquisition time, an order of magnitude less than the averaging method. In addition, improvements in image quality metrics and a 5 dB increase in the signal-to-noise ratio are attained. PMID:23117804

  3. Retinal optical coherence tomography image enhancement via shrinkage denoising using double-density dual-tree complex wavelet transform.

    PubMed

    Chitchian, Shahab; Mayer, Markus A; Boretsky, Adam R; van Kuijk, Frederik J; Motamedi, Massoud

    2012-11-01

    Image enhancement of retinal structures, in optical coherence tomography (OCT) scans through denoising, has the potential to aid in the diagnosis of several eye diseases. In this paper, a locally adaptive denoising algorithm using the double-density dual-tree complex wavelet transform, a combination of the double-density wavelet transform and the dual-tree complex wavelet transform, is applied to reduce speckle noise in OCT images of the retina. The algorithm overcomes the limitations of the commonly used multiple-frame averaging technique, namely the limited number of frames that can be recorded due to eye movements, by providing comparable image quality in significantly less acquisition time, an order of magnitude less than the averaging method. In addition, improvements in image quality metrics and a 5 dB increase in the signal-to-noise ratio are attained. PMID:23117804

  4. Characterization and simulation of gunfire with wavelets

    SciTech Connect

    Smallwood, D.O.

    1998-09-01

    Gunfire is used as an example to show how the wavelet transform can be used to characterize and simulate nonstationary random events when an ensemble of events is available. The response of a structure to nearby firing of a high-firing rate gun has been characterized in several ways as a nonstationary random process. The methods all used some form of the discrete Fourier transform. The current paper will explore a simpler method to describe the nonstationary random process in terms of a wavelet transform. As was done previously, the gunfire record is broken up into a sequence of transient waveforms each representing the response to the firing of a single round. The wavelet transform is performed on each of these records. The mean and standard deviation of the resulting wavelet coefficients describe the composite characteristics of the entire waveform. It is shown that the distribution of the wavelet coefficients is approximately Gaussian with a nonzero mean and that the standard deviation of the coefficients at different times and levels are approximately independent. The gunfire is simulated by generating realizations of records of a single-round firing by computing the inverse wavelet transform from Gaussian random coefficients with the same mean and standard deviation as those estimated from the previously discussed gunfire record. The individual realizations are then assembled into a realization of a time history of many rounds firing. A second-order correction of the probability density function (pdf) is accomplished with a zero memory nonlinear (ZMNL) function. The method is straightforward, easy to implement, and produces a simulated record very much like the original measured gunfire record.
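
    The characterize-and-simulate loop described above can be sketched compactly with PyWavelets, assuming an ensemble of aligned single-round records of equal length; the wavelet and level are placeholders, and the Gaussian-coefficient and independence assumptions follow the abstract.

        import numpy as np
        import pywt

        def characterize(rounds, wavelet='db4', level=4):
            # rounds: array of shape (n_records, n_samples), one row per single-round response.
            mats = [pywt.coeffs_to_array(pywt.wavedec(r, wavelet, level=level))[0] for r in rounds]
            stack = np.vstack(mats)
            return stack.mean(axis=0), stack.std(axis=0)

        def simulate_round(mean, std, template, wavelet='db4', level=4, rng=None):
            rng = rng or np.random.default_rng()
            _, slices = pywt.coeffs_to_array(pywt.wavedec(template, wavelet, level=level))
            draw = rng.normal(mean, std)                              # Gaussian wavelet coefficients
            coeffs = pywt.array_to_coeffs(draw, slices, output_format='wavedec')
            return pywt.waverec(coeffs, wavelet)                      # one synthetic single-round record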

  5. Wavelet analysis applied to the IRAS cirrus

    NASA Technical Reports Server (NTRS)

    Langer, William D.; Wilson, Robert W.; Anderson, Charles H.

    1994-01-01

    The structure of infrared cirrus clouds is analyzed with Laplacian pyramid transforms, a form of non-orthogonal wavelets. Pyramid and wavelet transforms provide a means to decompose images into their spatial frequency components such that all spatial scales are treated in an equivalent manner. The multiscale transform analysis is applied to IRAS 100 micrometer maps of cirrus emission in the north Galactic pole region to extract features on different scales. In the maps we identify filaments, fragments and clumps by separating all connected regions. These structures are analyzed with respect to their Hausdorff dimension for evidence of the scaling relationships in the cirrus clouds.

  6. Analysis of wavelet technology for NASA applications

    NASA Technical Reports Server (NTRS)

    Wells, R. O., Jr.

    1994-01-01

    The purpose of this grant was to introduce a broad group of NASA researchers and administrators to wavelet technology and to determine its future role in research and development at NASA JSC. The activities of several briefings held between NASA JSC scientists and Rice University researchers are discussed. An attached paper, 'Recent Advances in Wavelet Technology', summarizes some aspects of these briefings. Two proposals submitted to NASA reflect the primary areas of common interest. They are image analysis and numerical solutions of partial differential equations arising in computational fluid dynamics and structural mechanics.

  7. Wavelet encoding and variable resolution progressive transmission

    NASA Technical Reports Server (NTRS)

    Blanford, Ronald P.

    1993-01-01

    Progressive transmission is a method of transmitting and displaying imagery in stages of successively improving quality. The subsampled lowpass image representations generated by a wavelet transformation suit this purpose well, but for best results the order of presentation is critical. Candidate data for transmission are best selected using dynamic prioritization criteria generated from image contents and viewer guidance. We show that wavelets are not only suitable but superior when used to encode data for progressive transmission at non-uniform resolutions. This application does not preclude additional compression using quantization of highpass coefficients, which to the contrary results in superior image approximations at low data rates.

  8. Numerical Algorithms Based on Biorthogonal Wavelets

    NASA Technical Reports Server (NTRS)

    Ponenti, Pj.; Liandrat, J.

    1996-01-01

    Wavelet bases are used to generate spaces of approximation for the resolution of bidimensional elliptic and parabolic problems. Under some specific hypotheses relating the properties of the wavelets to the order of the involved operators, it is shown that an approximate solution can be built. This approximation is then stable and converges towards the exact solution. It is designed such that fast algorithms involving biorthogonal multiresolution analyses can be used to resolve the corresponding numerical problems. Detailed algorithms are provided as well as the results of numerical tests on partial differential equations defined on the bidimensional torus.

  9. Wavelet analysis of 'double quasar' flux data

    NASA Astrophysics Data System (ADS)

    Hjorth, P. G.; Villemoes, L. F.; Teuber, J.; Florentin-Nielsen, R.

    1992-02-01

    We have used a wavelet transform method to extract time delay information from the light curves of the gravitationally lensed quasar 0957+561 A,B. The time-frequency performance of wavelet transforms is different from that of, e.g., windowed Fourier transforms in allowing a better temporal resolution and localization of the multiple scales of the signal. It is shown that the discrepancies between the time delays derived by different authors may in part be ascribed to the choice of reduction method.

  10. Lossless Video Sequence Compression Using Adaptive Prediction

    NASA Technical Reports Server (NTRS)

    Li, Ying; Sayood, Khalid

    2007-01-01

    We present an adaptive lossless video compression algorithm based on predictive coding. The proposed algorithm exploits temporal, spatial, and spectral redundancies in a backward adaptive fashion with extremely low side information. The computational complexity is further reduced by using a caching strategy. We also study the relationship between the operational domain for the coder (wavelet or spatial) and the amount of temporal and spatial redundancy in the sequence being encoded. Experimental results show that the proposed scheme provides significant improvements in compression efficiencies.

  11. Automatic fault feature extraction of mechanical anomaly on induction motor bearing using ensemble super-wavelet transform

    NASA Astrophysics Data System (ADS)

    He, Wangpeng; Zi, Yanyang; Chen, Binqiang; Wu, Feng; He, Zhengjia

    2015-03-01

    Mechanical anomaly is a major failure type of induction motors. It is of great value to detect the resulting fault feature automatically. In this paper, an ensemble super-wavelet transform (ESW) is proposed for investigating vibration features of motor bearing faults. The ESW is put forward based on the combination of the tunable Q-factor wavelet transform (TQWT) and the Hilbert transform such that fault feature adaptability is enabled. Within ESW, a parametric optimization is performed on the measured signal to obtain a quality TQWT basis that best demonstrates the hidden fault feature. TQWT is introduced as it provides a vast wavelet dictionary with time-frequency localization ability. The parametric optimization is guided according to the maximization of the fault feature ratio, which is a new quantitative measure of periodic fault signatures. The fault feature ratio is derived from the digital Hilbert demodulation analysis with an insightful quantitative interpretation. The output of ESW on the measured signal is a selected wavelet scale with indicated fault features. It is verified via numerical simulations that ESW can match the oscillatory behavior of signals without artificially specified parameters. The proposed method is applied to two engineering cases, signals of which were collected from a wind turbine and a steel temper mill, to verify its effectiveness. The processed results demonstrate that the proposed method is more effective in extracting weak fault features of induction motor bearings compared with the Fourier transform, direct Hilbert envelope spectrum, different wavelet transforms and spectral kurtosis.

  12. Mass spectrometry cancer data classification using wavelets and genetic algorithm.

    PubMed

    Nguyen, Thanh; Nahavandi, Saeid; Creighton, Douglas; Khosravi, Abbas

    2015-12-21

    This paper introduces a hybrid feature extraction method applied to mass spectrometry (MS) data for cancer classification. Haar wavelets are employed to transform MS data into orthogonal wavelet coefficients. The most prominent discriminant wavelets are then selected by genetic algorithm (GA) to form feature sets. The combination of wavelets and GA yields highly distinct feature sets that serve as inputs to classification algorithms. Experimental results show the robustness and significant dominance of the wavelet-GA against competitive methods. The proposed method therefore can be applied to cancer classification models that are useful as real clinical decision support systems for medical practitioners. PMID:26611346
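
    The feature-extraction half of the method can be sketched in a few lines with PyWavelets; the decomposition depth is an assumption, and the genetic-algorithm selection stage is only described in the trailing comment rather than implemented.

        import numpy as np
        import pywt

        def haar_features(spectrum, level=6):
            coeffs = pywt.wavedec(np.asarray(spectrum, dtype=float), 'haar', level=level)
            return np.concatenate(coeffs)          # candidate feature vector for the GA selection step

        # A GA would then search over binary masks of this vector, scoring each mask by
        # cross-validated classifier accuracy and keeping the most discriminant coefficients.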

  13. Wavelet-based detection of transients in biological signals

    NASA Astrophysics Data System (ADS)

    Mzaik, Tahsin; Jagadeesh, Jogikal M.

    1994-10-01

    This paper presents two multiresolution algorithms for detection and separation of mixed signals using the wavelet transform. The first algorithm allows one to design a mother wavelet and its associated wavelet grid that guarantees the separation of signal components if information about the expected minimum signal time and frequency separation of the individual components is known. The second algorithm expands this idea to design two mother wavelets which are then combined to achieve the required separation otherwise impossible with a single wavelet. Potential applications include many biological signals such as ECG, EKG, and retinal signals.

  14. Parallel object-oriented, denoising system using wavelet multiresolution analysis

    DOEpatents

    Kamath, Chandrika; Baldwin, Chuck H.; Fodor, Imola K.; Tang, Nu A.

    2005-04-12

    The present invention provides a data de-noising system utilizing processors and wavelet denoising techniques. Data is read and displayed in different formats. The data is partitioned into regions and the regions are distributed onto the processors. Communication requirements are determined among the processors according to the wavelet denoising technique and the partitioning of the data. The data is transformed onto different multiresolution levels with the wavelet transform according to the wavelet denoising technique and the communication requirements, the transformed data containing wavelet coefficients. The denoised data is then transformed back into its original format for reading and display.

  15. EEG analysis using wavelet-based information tools.

    PubMed

    Rosso, O A; Martin, M T; Figliola, A; Keller, K; Plastino, A

    2006-06-15

    Wavelet-based informational tools for quantitative electroencephalogram (EEG) record analysis are reviewed. Relative wavelet energies, wavelet entropies and wavelet statistical complexities are used in the characterization of scalp EEG records corresponding to secondary generalized tonic-clonic epileptic seizures. In particular, we show that the epileptic recruitment rhythm observed during seizure development is well described in terms of the relative wavelet energies. In addition, during the concomitant time-period the entropy diminishes while complexity grows. This is construed as evidence supporting the conjecture that an epileptic focus, for this kind of seizures, triggers a self-organized brain state characterized by both order and maximal complexity. PMID:16675027
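
    The two quantities named above, relative wavelet energies and the (normalized total) wavelet entropy, can be computed directly from a discrete wavelet decomposition; the wavelet and level in this sketch are assumptions.

        import numpy as np
        import pywt

        def wavelet_entropy(eeg, wavelet='db4', level=6):
            coeffs = pywt.wavedec(eeg, wavelet, level=level)
            energies = np.array([np.sum(np.square(c)) for c in coeffs])
            p = energies / energies.sum()                       # relative wavelet energies per band
            nz = p[p > 0]
            H = -np.sum(nz * np.log(nz)) / np.log(len(p))       # normalized total wavelet entropy
            return p, H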

  16. Improvement of PWF filter using wavelet thresholding for polarimetric SAR imagery

    NASA Astrophysics Data System (ADS)

    Boutarfa, S.; Smara, Y.; Fadel, H.; Bouguessa, N.

    2011-10-01

    The images acquired by polarimetric SAR radar systems are characterized by the presence of a noise named speckle. This noise, which has a multiplicative nature, corrupts both the amplitude and the phase, which complicates data interpretation, degrades the performance of segmentation and reduces target detectability. Hence the need to pretreat the images with adapted filtering methods before carrying out their analysis. In this article, we study the polarimetric whitening filter (PWF) of Novak and Burl, which processes the polarimetric covariance matrix to produce a filtered intensity image. We propose two methods to improve the PWF filter: the first integrates the Lee edge detection technique to improve the filter performance and detect fine details of the image. This method is called LSDPWF (Lee Structure Detection PWF). After detecting the edges, we filter the detected regions in the polarimetric channels with the PWF filter. The second combines wavelet-thresholding filtering with the PWF filter using the stationary wavelet transform (SWT). This method is called EPWF (Enhanced PWF). In the wavelet thresholding, we use soft thresholding, which sets to zero the coefficient amplitudes that are below a certain threshold. We thus propose to extend wavelet thresholding to polarimetric SAR images and to use the polarimetric information to calculate the threshold on the wavelet coefficients. We implemented these filters and applied them to RADARSAT-2 polarimetric images taken over the areas of Algiers, Algeria. A visual and statistical evaluation and a comparative study are performed. The performance evaluation of each filter is based on smoothing homogeneous areas and preserving edges.

  17. Comparison between wavelet and wavelet packet transform features for classification of faults in distribution system

    NASA Astrophysics Data System (ADS)

    Arvind, Pratul

    2012-11-01

    The ability to identify and classify all ten types of faults in a distribution system is an important task for protection engineers. Unlike transmission systems, distribution systems have a complex configuration and are subjected to frequent faults. In the present work, an algorithm has been developed for identifying all ten types of faults in a distribution system by collecting current samples at the substation end. The samples are subjected to the wavelet packet transform and an artificial neural network in order to yield better classification results. A comparison of results between the wavelet transform and the wavelet packet transform is also presented, showing that the features extracted from the wavelet packet transform yield more promising results. It should also be noted that the current samples are collected after simulating a 25 kV distribution system in the PSCAD software.
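
    A short PyWavelets sketch of the wavelet-packet feature extraction is given below: the energies of the frequency-ordered terminal nodes form the feature vector that would be fed to the neural network (the classifier itself is not shown, and the wavelet and depth are assumptions).

        import numpy as np
        import pywt

        def wp_energy_features(current, wavelet='db4', level=3):
            wp = pywt.WaveletPacket(data=current, wavelet=wavelet, mode='symmetric', maxlevel=level)
            nodes = wp.get_level(level, order='freq')            # frequency-ordered subbands
            return np.array([np.sum(np.square(n.data)) for n in nodes])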

  18. Quantum dynamics and electronic spectroscopy within the framework of wavelets

    NASA Astrophysics Data System (ADS)

    Toutounji, Mohamad

    2013-03-01

    This paper serves as a first-time report on formulating important aspects of electronic spectroscopy and quantum dynamics in condensed harmonic systems using the framework of wavelets, and a stepping stone to our future work on developing anharmonic wavelets. The Morlet wavelet is taken to be the mother wavelet for the initial state of the system of interest. This work reports daughter wavelets that may be used to study spectroscopy and dynamics of harmonic systems. These wavelets are shown to arise naturally upon optical electronic transition of the system of interest. Natural birth of basis (daughter) wavelets emerging on exciting an electronic two-level system coupled, both linearly and quadratically, to harmonic phonons is discussed. It is shown that this takes place through using the unitary dilation and translation operators, which happen to be part of the time evolution operator of the final electronic state. The corresponding optical autocorrelation function and linear absorption spectra are calculated to test the applicability and correctness of the herein results. The link between basis wavelets and the Liouville space generating function is established. An anharmonic mother wavelet is also proposed in the case of anharmonic electron-phonon coupling. A brief description of deriving anharmonic wavelets and the corresponding anharmonic Liouville space generating function is explored. In conclusion, a mother wavelet (be it harmonic or anharmonic) which accounts for Duschinsky mixing is suggested.

  19. Some uses of wavelets for imaging dynamic processes in live cochlear structures

    NASA Astrophysics Data System (ADS)

    Boutet de Monvel, J.

    2007-09-01

    A variety of image and signal processing algorithms based on wavelet filtering tools have been developed during the last few decades, that are well adapted to the experimental variability typically encountered in live biological microscopy. A number of processing tools are reviewed, that use wavelets for adaptive image restoration and for motion or brightness variation analysis by optical flow computation. The usefulness of these tools for biological imaging is illustrated in the context of the restoration of images of the inner ear and the analysis of cochlear motion patterns in two and three dimensions. I also report on recent work that aims at capturing fluorescence intensity changes associated with vesicle dynamics at synaptic zones of sensory hair cells. This latest application requires one to separate the intensity variations associated with the physiological process under study from the variations caused by motion of the observed structures. A wavelet optical flow algorithm for doing this is presented, and its effectiveness is demonstrated on artificial and experimental image sequences.

  20. The analysis of unsteady wind turbine data using wavelet techniques

    SciTech Connect

    Slepski, J.E.; Kirchhoff, R.H.

    1995-09-01

    Wavelet analysis employs a relatively new technique which decomposes a signal into wavelets of finite length. A wavelet map is generated showing the distribution of signal variance in both the time and frequency domain. The first section of this paper begins with an introduction to wavelet theory, contrasting it to standard Fourier analysis. Some simple applications to the processing of harmonic signals are then given. Since wind turbines operate under unsteady stochastic loads, the time series of most machine parameters are non-stationary; wavelet analysis can be applied to this problem. In the second section of this paper, wavelet methods are used to examine data from Phase 2 of the NREL Combined Experiment. Data analyzed includes airfoil surface pressure, and low speed shaft torque. In each case the wavelet map offers valuable insight that could not be made without it.

  1. Understanding wavelet analysis and filters for engineering applications

    NASA Astrophysics Data System (ADS)

    Parameswariah, Chethan Bangalore

    Wavelets are signal-processing tools that have been of interest due to their characteristics and properties. A clear understanding of wavelets and their properties is a key to successful applications. Many theoretical and application-oriented papers have been written. Yet the choice of the right wavelet for a given application is an ongoing quest that has not been satisfactorily answered. This research has successfully identified certain issues, and an effort has been made to provide an understanding of wavelets by studying the wavelet filters in terms of their pole-zero and magnitude-phase characteristics. The magnitude characteristics of these filters have flat responses in both the pass band and stop band. The phase characteristics are almost linear. It is interesting to observe that some wavelets have the exact same magnitude characteristics but their phase responses vary in the linear slopes. An application of wavelets for fast detection of the fault current in a transformer and distinguishing it from the inrush current clearly shows the advantages of the lower slope and fewer coefficients of the Daubechies wavelet D4 over D20. This research has been published in the IEEE Transactions on Power Systems and is also proposed as an innovative method for protective relaying techniques. For detecting the frequency composition of the signal being analyzed, an understanding of the energy distribution in the output wavelet decompositions is presented for different wavelet families. The wavelets with fewer coefficients in their filters have more energy leakage into adjacent bands. The frequency bandwidth characteristics display flatness in the middle of the pass band, confirming that the frequency of interest should be in the middle of the frequency band when performing a wavelet transform. Symlets exhibit good flatness with minimum ripple but their transition regions do not have a sharp cutoff. The number of wavelet levels and their frequency ranges are dependent on the two

  2. Information retrieval system utilizing wavelet transform

    DOEpatents

    Brewster, Mary E.; Miller, Nancy E.

    2000-01-01

    A method for automatically partitioning an unstructured electronically formatted natural language document into its sub-topic structure. Specifically, the document is converted to an electronic signal and a wavelet transform is then performed on the signal. The resultant signal may then be used to graphically display and interact with the sub-topic structure of the document.

  3. Wavelet based image quality self measurements

    NASA Astrophysics Data System (ADS)

    Al-Jawad, Naseer; Jassim, Sabah

    2010-04-01

    Noise is generally considered a degradation of image quality, and image quality is often assessed from the appearance and clarity of image edges. The performance of most applications is affected by image quality and by the level of the different types of degradation present, so measuring image quality and identifying the type of noise or degradation is a key factor in improving application performance; this task, however, can be very challenging. The wavelet transform is nowadays widely used in a range of applications, which mostly benefit from its localisation in the frequency domain. The coefficients of the high-frequency sub-bands in the wavelet domain are well represented by a Laplace histogram. In this paper we propose to use the Laplace distribution histogram both to measure image quality and to identify the type of degradation affecting a given image. Image quality and the level of degradation are usually measured against a reference image of reasonable quality; the Laplace distribution histogram instead provides a self-contained measurement of image quality. The measurement is based on constructing the theoretical Laplace distribution histogram of a high-frequency wavelet sub-band from the actual standard deviation of its coefficients and comparing it with the empirical histogram using the histogram intersection method. All experiments were performed using the extended Yale database.
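
    A minimal sketch of the idea follows; the wavelet, bin count, and the use of the diagonal (HH) sub-band are illustrative assumptions, not the authors' exact procedure. It fits a Laplace model to the high-frequency coefficients from their measured standard deviation and scores the image by histogram intersection between the empirical and theoretical histograms.

      # Hedged sketch: Laplace-histogram self-measurement of image quality.
      import numpy as np
      import pywt

      def laplace_intersection_score(image, wavelet='db2', bins=64):
          # Single-level 2D DWT; take the diagonal (HH) detail sub-band.
          _, (_, _, hh) = pywt.dwt2(image, wavelet)
          coeffs = hh.ravel()

          # Empirical histogram of the HH coefficients.
          lim = 4 * coeffs.std() + 1e-12
          edges = np.linspace(-lim, lim, bins + 1)
          emp, _ = np.histogram(coeffs, bins=edges, density=True)

          # Theoretical Laplace pdf with scale b = sigma / sqrt(2).
          b = coeffs.std() / np.sqrt(2)
          centres = 0.5 * (edges[:-1] + edges[1:])
          theo = np.exp(-np.abs(centres) / b) / (2 * b)

          # Histogram intersection of the two normalised histograms.
          emp /= emp.sum() + 1e-12
          theo /= theo.sum() + 1e-12
          return np.minimum(emp, theo).sum()

      score = laplace_intersection_score(np.random.rand(128, 128))
      print('self-quality score:', score)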

  4. Characterization and Simulation of Gunfire with Wavelets

    DOE PAGES

    Smallwood, David O.

    1999-01-01

    Gunfire is used as an example to show how the wavelet transform can be used to characterize and simulate nonstationary random events when an ensemble of events is available. The structural response to nearby firing of a high-firing rate gun has been characterized in several ways as a nonstationary random process. The current paper will explore a method to describe the nonstationary random process using a wavelet transform. The gunfire record is broken up into a sequence of transient waveforms each representing the response to the firing of a single round. A wavelet transform is performed on each of these records. The gunfire is simulated by generating realizations of records of a single-round firing by computing an inverse wavelet transform from Gaussian random coefficients with the same mean and standard deviation as those estimated from the previously analyzed gunfire record. The individual records are assembled into a realization of many rounds firing. A second-order correction of the probability density function is accomplished with a zero memory nonlinear function. The method is straightforward, easy to implement, and produces a simulated record much like the measured gunfire record.
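
    The sketch below illustrates the general analyse-then-synthesise procedure with synthetic stand-in records; the wavelet, decomposition level, and per-level statistics are assumptions, and the second-order zero-memory nonlinear correction described above is omitted.

      # Hedged sketch: wavelet-based characterisation and simulation of an
      # ensemble of single-round records.
      import numpy as np
      import pywt

      rng = np.random.default_rng(0)
      wavelet, level = 'db4', 4

      # Ensemble of measured single-round records (synthetic stand-ins here).
      records = [rng.standard_normal(512) * np.exp(-np.arange(512) / 100.0)
                 for _ in range(20)]

      # Estimate mean and standard deviation of the wavelet coefficients,
      # per level, over the ensemble.
      decomps = [pywt.wavedec(r, wavelet, level=level) for r in records]
      stats = [(np.mean([d[k] for d in decomps], axis=0),
                np.std([d[k] for d in decomps], axis=0))
               for k in range(level + 1)]

      # Simulate rounds from Gaussian coefficients with matched statistics,
      # then assemble a many-round realisation by concatenation.
      rounds = []
      for _ in range(10):
          coeffs = [mu + sigma * rng.standard_normal(mu.shape)
                    for mu, sigma in stats]
          rounds.append(pywt.waverec(coeffs, wavelet))
      burst = np.concatenate(rounds)
      print(burst.shape)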

  5. Wavelet transforms for detecting microcalcifications in mammograms

    SciTech Connect

    Strickland, R.N.; Hahn, H.I.

    1996-04-01

    Clusters of fine, granular microcalcifications in mammograms may be an early sign of disease. Individual grains are difficult to detect and segment due to size and shape variability and because the background mammogram texture is typically inhomogeneous. The authors develop a two-stage method based on wavelet transforms for detecting and segmenting calcifications. The first stage is based on an undecimated wavelet transform, which is simply the conventional filter bank implementation without downsampling, so that the low-low (LL), low-high (LH), high-low (HL), and high-high (HH) sub-bands remain at full size. Detection takes place in HH and the combination LH + HL. Four octaves are compared with two inter-octave voices for finer scale resolution. By appropriate selection of the wavelet basis the detection of microcalcifications in the relevant size range can be nearly optimized. In fact, the filters which transform the input image into HH and LH + HL are closely related to prewhitening matched filters for detecting Gaussian objects (idealized microcalcifications) in two common forms of Markov (background) noise. The second stage is designed to overcome the limitations of the simplistic Gaussian assumption and provides an accurate segmentation of calcification boundaries. Detected pixel sites in HH and LH + HL are dilated then weighted before computing the inverse wavelet transform. Individual microcalcifications are greatly enhanced in the output image, to the point where straightforward thresholding can be applied to segment them. FROC curves are computed from tests using a freely distributed database of digitized mammograms.
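
    A greatly simplified sketch of the two-stage pipeline follows; the wavelet, threshold, dilation amount, and weighting gain are illustrative assumptions rather than the authors' tuned values. It uses a stationary (undecimated) 2D wavelet transform so that all sub-bands stay at full size, detects in HH and the combined LH + HL channel, dilates and weights the detected sites, and inverts the transform to enhance them.

      # Hedged sketch: undecimated wavelet detection and enhancement.
      import numpy as np
      import pywt
      from scipy.ndimage import binary_dilation

      def enhance_detections(image, wavelet='db2', level=2, k=3.0, gain=4.0):
          coeffs = pywt.swt2(image, wavelet, level=level)  # full-size sub-bands
          out = []
          for cA, (cH, cV, cD) in coeffs:
              hh = cD
              lhl = cH + cV                                # combined LH + HL
              mask = (np.abs(hh) > k * hh.std()) | (np.abs(lhl) > k * lhl.std())
              mask = binary_dilation(mask, iterations=2)   # dilate detections
              w = np.where(mask, gain, 1.0)                # weight before inverse
              out.append((cA, (cH * w, cV * w, cD * w)))
          return pywt.iswt2(out, wavelet)

      img = np.random.rand(256, 256)           # stand-in for a mammogram patch
      enhanced = enhance_detections(img)
      print(enhanced.shape)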

  6. ECG Artifact Removal from Surface EMG Signal Using an Automated Method Based on Wavelet-ICA.

    PubMed

    Abbaspour, Sara; Lindén, Maria; Gholamhosseini, Hamid

    2015-01-01

    This study proposes an efficient method for automated electrocardiography (ECG) artifact removal from surface electromyography (EMG) signals recorded from upper trunk muscles. A wavelet transform is applied to a simulated data set of corrupted surface EMG signals to create a multidimensional signal. Afterward, independent component analysis (ICA) is used to separate ECG artifact components from the original EMG signal. Components that correspond to the ECG artifact are then identified by an automated detection algorithm and are subsequently removed using a conventional high-pass filter. Finally, the results of the proposed method are compared with wavelet transform, ICA, adaptive filter, and empirical mode decomposition-ICA methods. The automated artifact removal method proposed in this study successfully removes the ECG artifacts from EMG signals with a signal-to-noise ratio value of 9.38 while keeping the distortion of the original EMG to a minimum. PMID:25980853
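
    A rough sketch of such a wavelet-ICA chain is given below; the wavelet, level, cutoff frequency, and the kurtosis-based component detector are all assumptions standing in for the paper's automated detection algorithm.

      # Hedged sketch: wavelet-ICA removal of ECG artifacts from surface EMG.
      import numpy as np
      import pywt
      from sklearn.decomposition import FastICA
      from scipy.signal import butter, filtfilt

      def remove_ecg(emg, fs=1000.0, wavelet='db4', level=4, hp_cutoff=30.0):
          # Multidimensional representation from stationary-wavelet detail bands.
          swt = pywt.swt(emg, wavelet, level=level)       # list of (cA, cD)
          bands = np.array([cD for _, cD in swt])         # shape (level, n)

          ica = FastICA(n_components=level, random_state=0)
          sources = ica.fit_transform(bands.T)            # shape (n, level)

          # Crude automated detector: flag spiky (high-kurtosis) components
          # as ECG-like and zero them. Real criteria would be richer.
          kurt = ((sources - sources.mean(0)) ** 4).mean(0) / sources.var(0) ** 2
          sources[:, kurt > 6.0] = 0.0

          cleaned_bands = ica.inverse_transform(sources).T
          rec = [(cA, cleaned_bands[i]) for i, (cA, _) in enumerate(swt)]
          cleaned = pywt.iswt(rec, wavelet)

          b, a = butter(4, hp_cutoff / (fs / 2), btype='high')
          return filtfilt(b, a, cleaned)

      emg = np.random.randn(4096)               # stand-in for a corrupted EMG
      print(remove_ecg(emg).shape)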

  7. CHARACTERIZING COMPLEXITY IN SOLAR MAGNETOGRAM DATA USING A WAVELET-BASED SEGMENTATION METHOD

    SciTech Connect

    Kestener, P.; Khalil, A.; Arneodo, A.

    2010-07-10

    The multifractal nature of solar photospheric magnetic structures is studied using the two-dimensional wavelet transform modulus maxima (WTMM) method. This relies on computing partition functions from the wavelet transform skeleton defined by the WTMM method. This skeleton provides an adaptive space-scale partition of the fractal distribution under study, from which one can extract the multifractal singularity spectrum. We describe the implementation of a multiscale image processing segmentation procedure based on the partitioning of the WT skeleton, which allows the disentangling of the information concerning the multifractal properties of active regions from the surrounding quiet-Sun field. The quiet Sun exhibits an average Hölder exponent of approximately -0.75, with observed multifractal properties due to the supergranular structure. On the other hand, active region multifractal spectra exhibit an average Hölder exponent of approximately 0.38, similar to those found when studying experimental data from turbulent flows.

  8. Spectral optical layer properties of cirrus from collocated airborne measurements and simulations

    NASA Astrophysics Data System (ADS)

    Finger, Fanny; Werner, Frank; Klingebiel, Marcus; Ehrlich, André; Jäkel, Evelyn; Voigt, Matthias; Borrmann, Stephan; Spichtinger, Peter; Wendisch, Manfred

    2016-06-01

    Spectral upward and downward solar irradiances from vertically collocated measurements above and below a cirrus layer are used to derive cirrus optical layer properties such as spectral transmissivity, absorptivity, reflectivity, and cloud top albedo. The radiation measurements are complemented by in situ cirrus crystal size distribution measurements and radiative transfer simulations based on the microphysical data. The close collocation of the radiative and microphysical measurements, above, beneath, and inside the cirrus, is accomplished by using a research aircraft (Learjet 35A) in tandem with the towed sensor platform AIRTOSS (AIRcraft TOwed Sensor Shuttle). AIRTOSS can be released from and retracted back to the research aircraft by means of a cable up to a distance of 4 km. Data were collected from two field campaigns over the North Sea and the Baltic Sea in spring and late summer 2013. One measurement flight over the North Sea proved to be exemplary, and as such the results are used to illustrate the benefits of collocated sampling. The radiative transfer simulations were applied to quantify the impact of cloud particle properties such as crystal shape, effective radius reff, and optical thickness τ on cirrus spectral optical layer properties. Furthermore, the radiative effects of low-level, liquid water (warm) clouds as frequently observed beneath the cirrus are evaluated. They may cause changes in the radiative forcing of the cirrus by a factor of 2. When low-level clouds below the cirrus are not taken into account, the radiative cooling effect (caused by reflection of solar radiation) due to the cirrus in the solar (shortwave) spectral range is significantly overestimated.
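
    For orientation, layer optical properties of this kind are commonly derived from the four collocated irradiances roughly as follows (a hedged sketch with assumed notation, not quoted from the paper): with downward and upward irradiances measured above (top) and below (base) the cirrus layer,

      T = \frac{F^{\downarrow}_{\mathrm{base}}}{F^{\downarrow}_{\mathrm{top}}}, \qquad
      R = \frac{F^{\uparrow}_{\mathrm{top}} - F^{\uparrow}_{\mathrm{base}}}{F^{\downarrow}_{\mathrm{top}}}, \qquad
      A = 1 - T - R ,

    where T, R, and A denote the spectral layer transmissivity, reflectivity, and absorptivity, respectively.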

  9. A meshfree local RBF collocation method for anti-plane transverse elastic wave propagation analysis in 2D phononic crystals

    NASA Astrophysics Data System (ADS)

    Zheng, Hui; Zhang, Chuanzeng; Wang, Yuesheng; Sladek, Jan; Sladek, Vladimir

    2016-01-01

    In this paper, a meshfree or meshless local radial basis function (RBF) collocation method is proposed to calculate the band structures of two-dimensional (2D) anti-plane transverse elastic waves in phononic crystals. Three new techniques are developed for calculating the normal derivative of the field quantity required by the treatment of the boundary conditions, which improve the stability of the local RBF collocation method significantly. The general form of the local RBF collocation method for a unit-cell with periodic boundary conditions is proposed, where the continuity conditions on the interface between the matrix and the scatterer are taken into account. The band structures or dispersion relations can be obtained by solving the eigenvalue problem and sweeping the boundary of the irreducible first Brillouin zone. The proposed local RBF collocation method is verified against the corresponding results obtained with the finite element method (FEM). For different acoustic impedance ratios, various scatterer shapes, scatterer arrangements (lattice forms), and material properties, numerical examples are presented and discussed to show the performance and efficiency of the developed local RBF collocation method compared to the FEM for computing the band structures of 2D phononic crystals.
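
    To illustrate the basic mechanics of RBF collocation, the sketch below solves a 1D model problem u''(x) = f(x) with homogeneous Dirichlet conditions using a global multiquadric expansion; the paper's local formulation, periodic unit-cell conditions, and eigenvalue sweep for phononic crystals are not reproduced here, and the shape parameter is an assumption.

      # Hedged sketch: global RBF collocation for u'' = f on [0, 1].
      import numpy as np

      n, c = 40, 0.1                                  # nodes, MQ shape parameter
      x = np.linspace(0.0, 1.0, n)
      f = lambda x: -np.pi ** 2 * np.sin(np.pi * x)   # exact solution: sin(pi x)

      def phi(xi, xj):      # multiquadric basis function
          return np.sqrt((xi - xj) ** 2 + c ** 2)

      def phi_xx(xi, xj):   # its second derivative in x
          return c ** 2 / ((xi - xj) ** 2 + c ** 2) ** 1.5

      XI, XJ = np.meshgrid(x, x, indexing='ij')
      A = phi_xx(XI, XJ)            # collocate the PDE at every node...
      A[0, :] = phi(x[0], x)        # ...then enforce the boundary conditions
      A[-1, :] = phi(x[-1], x)
      b = f(x)
      b[0] = 0.0
      b[-1] = 0.0

      lam = np.linalg.solve(A, b)
      u = phi(XI, XJ) @ lam
      print('max error:', np.abs(u - np.sin(np.pi * x)).max())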

  10. Meshless collocation methods for the numerical solution of elliptic boundary value problems and the rotational shallow water equations on the sphere

    NASA Astrophysics Data System (ADS)

    Blakely, Christopher D.

    This dissertation has three main goals: (1) to explore the anatomy of meshless collocation approximation methods that have recently gained attention in the numerical analysis community; (2) to demonstrate numerically why the meshless collocation method should become an attractive alternative to standard finite-element methods, owing to the simplicity of its implementation and its high-order convergence properties; and (3) to propose a meshless collocation method for large-scale computational geophysical fluid dynamics models. We provide numerical verification and validation of the meshless collocation scheme applied to the rotational shallow-water equations on the sphere and demonstrate computationally that the proposed model can compete with existing high-performance methods for approximating the shallow-water equations, such as the spectral-element atmospheric model (SEAM) developed at NCAR. A detailed analysis of the parallel implementation of the model is given, along with the introduction of parallel algorithmic routines for its high-performance simulation. We analyze the programming and computational aspects of the model using Fortran 90 and the Message Passing Interface (MPI) library, together with software and hardware specifications and performance tests. Details of many aspects of the implementation regarding performance, optimization, and stabilization are given. To verify the mathematical correctness of the algorithms presented and to validate the performance of the meshless collocation shallow-water model, the thesis concludes with numerical experiments on standardized test cases for the shallow-water equations on the sphere using the proposed method.

  11. Reducing the entropy production in a collocated Lagrange-Remap scheme

    NASA Astrophysics Data System (ADS)

    Braeunig, Jean-Philippe

    2016-06-01

    The Eulerian scheme described in this article aims to perform efficient and accurate simulations of compressible multimaterial fluid flows. We use a second-order collocated Lagrange-Remap scheme based on the EUCCLHYD Lagrangian scheme (Maire et al., 2007, [26]), which is conservative and uses acoustic Riemann solvers. The entropy production is studied and a correction is proposed to improve accuracy in isentropic flows by adding correction fluxes; the scheme thus remains conservative in mass, momentum, and total energy. A VOF PLIC interface reconstruction is added to the scheme. Results are presented that assess the dissipation reduction.

  12. Application of collocated GPS and seismic sensors to earthquake monitoring and early warning.

    PubMed

    Li, Xingxing; Zhang, Xiaohong; Guo, Bofeng

    2013-01-01

    We explore the use of collocated GPS and seismic sensors for earthquake monitoring and early warning. The GPS and seismic data collected during the 2011 Tohoku-Oki (Japan) and the 2010 El Mayor-Cucapah (Mexico) earthquakes are analyzed by using a tightly-coupled integration. The performance of the integrated results is validated by both time and frequency domain analysis. We detect the P-wave arrival and observe small-scale features of the movement from the integrated results and locate the epicenter. Meanwhile, permanent offsets are extracted from the integrated displacements highly accurately and used for reliable fault slip inversion and magnitude estimation. PMID:24284765

  13. Energy stable, collocated high order schemes for incompressible flows on distorted grids

    NASA Astrophysics Data System (ADS)

    Reiss, Julius

    2012-09-01

    An energy-preserving finite difference scheme for incompressible, constant-density flows is presented. It builds on the idea of a skew-symmetric rewriting of the non-linear transport term. In contrast to earlier schemes, collocated grids can be used while exactly preserving energy conservation and still avoiding the odd-even decoupling of the Laplacian, and high-order derivatives can be utilized. A formulation for curvilinear grids is discussed; strict skew-symmetry and exact conservation are found for arbitrary transformations in two dimensions and for quite general, though not fully general, transformations in three dimensions.
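
    For readers unfamiliar with the skew-symmetric rewriting, a standard form of the splitting (a general illustration, not quoted from the paper) replaces the convective term by the average of its advective and divergence forms,

      (\mathbf{u}\cdot\nabla)\mathbf{u}
        \;\longrightarrow\;
        \tfrac{1}{2}\left[(\mathbf{u}\cdot\nabla)\mathbf{u}
        + \nabla\cdot(\mathbf{u}\otimes\mathbf{u})\right],

    which, for a divergence-free velocity field and skew-symmetric discrete derivative operators (e.g. central differences), yields a transport operator that does not produce or destroy kinetic energy.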

  14. Finite Differences and Collocation Methods for the Solution of the Two Dimensional Heat Equation

    NASA Technical Reports Server (NTRS)

    Kouatchou, Jules

    1999-01-01

    In this paper we combine finite difference approximations (for the spatial derivatives) with collocation techniques (for the time component) to numerically solve the two-dimensional heat equation. We employ a second-order and a fourth-order scheme, respectively, for the spatial derivatives, and the discretization gives rise to a linear system of equations. We show that the matrix of the system is non-singular. Numerical experiments carried out on serial computers show the unconditional stability of the proposed method and the high accuracy achieved by the fourth-order scheme.
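
    A minimal sketch of the second-order spatial discretisation is given below; a simple explicit Euler step stands in for the collocation-in-time treatment described above, and the grid size, diffusivity, and time step are assumptions made only for the demo.

      # Hedged sketch: five-point Laplacian for u_t = alpha * (u_xx + u_yy).
      import numpy as np

      n, alpha = 64, 1.0
      h = 1.0 / (n - 1)
      dt = 0.2 * h ** 2 / alpha            # stable explicit step
      u = np.zeros((n, n))
      u[n // 4: 3 * n // 4, n // 4: 3 * n // 4] = 1.0   # initial hot square

      for _ in range(200):
          lap = np.zeros_like(u)
          lap[1:-1, 1:-1] = (u[2:, 1:-1] + u[:-2, 1:-1] +
                             u[1:-1, 2:] + u[1:-1, :-2]
                             - 4 * u[1:-1, 1:-1]) / h ** 2
          u[1:-1, 1:-1] += dt * alpha * lap[1:-1, 1:-1]  # Dirichlet walls at 0

      print(float(u.max()))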

  15. Acoustic ranging of small arms fire using a single sensor node collocated with the target.

    PubMed

    Lo, Kam W; Ferguson, Brian G

    2015-06-01

    A ballistic model-based method, which builds upon previous work by Lo and Ferguson [J. Acoust. Soc. Am. 132, 2997-3017 (2012)], is described for ranging small arms fire using a single acoustic sensor node collocated with the target, without a priori knowledge of the muzzle speed and ballistic constant of the bullet except that they belong to a known two-dimensional parameter space. The method requires measurements of the differential time of arrival and differential angle of arrival of the muzzle blast and ballistic shock wave at the sensor node. Its performance is evaluated using both simulated and real data. PMID:26093450

  16. Legendre spectral-collocation method for solving some types of fractional optimal control problems.

    PubMed

    Sweilam, Nasser H; Al-Ajami, Tamer M

    2015-05-01

    In this paper, the Legendre spectral-collocation method was applied to obtain approximate solutions for some types of fractional optimal control problems (FOCPs). The fractional derivative was described in the Caputo sense. Two different approaches were presented, in the first approach, necessary optimality conditions in terms of the associated Hamiltonian were approximated. In the second approach, the state equation was discretized first using the trapezoidal rule for the numerical integration followed by the Rayleigh-Ritz method to evaluate both the state and control variables. Illustrative examples were included to demonstrate the validity and applicability of the proposed techniques. PMID:26257937

  17. Numerical Algorithm Based on Haar-Sinc Collocation Method for Solving the Hyperbolic PDEs

    PubMed Central

    Javadi, H. H. S.; Navidi, H. R.

    2014-01-01

    The present study investigates the Haar-Sinc collocation method for the solution of hyperbolic telegraph partial differential equations. The advantages of this technique are that the convergence rate of the Sinc approximation is exponential and the computational speed is high, owing to the use of the Haar operational matrices. The technique converts the problem to the solution of a system of linear algebraic equations by expanding the required approximation in terms of Sinc functions in space and Haar functions in time, with unknown coefficients. To analyze the efficiency, precision, and performance of the proposed method, four examples are presented through which the claims are confirmed. PMID:25485295

  18. A Fourier collocation time domain method for numerically solving Maxwell's equations

    NASA Technical Reports Server (NTRS)

    Shebalin, John V.

    1991-01-01

    A new method for solving Maxwell's equations in the time domain for arbitrary values of permittivity, conductivity, and permeability is presented. Spatial derivatives are found by a Fourier transform method and time integration is performed using a second order, semi-implicit procedure. Electric and magnetic fields are collocated on the same grid points, rather than on interleaved points, as in the Finite Difference Time Domain (FDTD) method. Numerical results are presented for the propagation of a 2-D Transverse Electromagnetic (TEM) mode out of a parallel plate waveguide and into a dielectric and conducting medium.
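
    The core of the Fourier collocation idea is that spatial derivatives are obtained by multiplication in wavenumber space. The sketch below shows this for a smooth periodic 1D field; the Maxwell field update and the semi-implicit time integration of the paper are not reproduced here.

      # Hedged sketch: spectral (Fourier collocation) derivative on a periodic grid.
      import numpy as np

      n, L = 128, 2 * np.pi
      x = np.arange(n) * L / n
      f = np.exp(np.sin(x))                         # smooth periodic test field
      k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)    # collocation wavenumbers

      df_spectral = np.real(np.fft.ifft(1j * k * np.fft.fft(f)))
      df_exact = np.cos(x) * f
      print('max error:', np.abs(df_spectral - df_exact).max())  # ~ machine precision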

  19. Numerical solution of differential-difference equations in large intervals using a Taylor collocation method

    NASA Astrophysics Data System (ADS)

    Tirani, M. Dadkhah; Sohrabi, F.; Almasieh, H.; Kajani, M. Tavassoli

    2015-10-01

    In this paper, a collocation method based on Taylor polynomials is developed for solving systems of linear differential-difference equations with variable coefficients defined on large intervals. By using Taylor polynomials and their properties to obtain operational matrices, the solution of the differential-difference system with the given conditions is reduced to the solution of a system of linear algebraic equations. We first divide the large interval into M equal subintervals and then obtain Taylor polynomial solutions in each subinterval separately. Some numerical examples are given and the results are compared with analytical solutions and other techniques from the literature to demonstrate the validity and applicability of the proposed method.

  20. A Survey of Symplectic and Collocation Integration Methods for Orbit Propagation

    NASA Technical Reports Server (NTRS)

    Jones, Brandon A.; Anderson, Rodney L.

    2012-01-01

    Demands on numerical integration algorithms for astrodynamics applications continue to increase. Common methods, like explicit Runge-Kutta, meet the orbit propagation needs of most scenarios, but more specialized scenarios require new techniques to meet both computational efficiency and accuracy needs. This paper provides an extensive survey on the application of symplectic and collocation methods to astrodynamics. Both of these methods benefit from relatively recent theoretical developments, which improve their applicability to artificial satellite orbit propagation. This paper also details their implementation, with several tests demonstrating their advantages and disadvantages.
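
    As a small illustration of the symplectic class of integrators surveyed, the sketch below propagates a two-body orbit with the velocity Verlet (leapfrog) scheme; the initial state, step size, and gravitational parameter are assumptions chosen only for the demo, and this is not a specific algorithm from the paper.

      # Hedged sketch: symplectic (velocity Verlet) two-body orbit propagation.
      import numpy as np

      mu = 398600.4418                      # Earth GM [km^3/s^2]
      r = np.array([7000.0, 0.0, 0.0])      # initial position [km]
      v = np.array([0.0, 7.546, 0.0])       # roughly circular orbit [km/s]
      dt, steps = 10.0, 6000                # ~ ten revolutions

      def accel(r):
          return -mu * r / np.linalg.norm(r) ** 3

      a = accel(r)
      for _ in range(steps):
          v_half = v + 0.5 * dt * a         # kick
          r = r + dt * v_half               # drift
          a = accel(r)
          v = v_half + 0.5 * dt * a         # kick

      energy = 0.5 * v @ v - mu / np.linalg.norm(r)
      print('specific orbital energy [km^2/s^2]:', energy)  # well conserved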