Parallel adaptive wavelet collocation method for PDEs
Nejadmalayeri, Alireza; Vezolainen, Alexei; Brown-Dymkoski, Eric; Vasilyev, Oleg V.
2015-10-01
A parallel adaptive wavelet collocation method for solving a large class of Partial Differential Equations is presented. The parallelization is achieved by developing an asynchronous parallel wavelet transform, which allows one to perform parallel wavelet transform and derivative calculations with only one data synchronization at the highest level of resolution. The data are stored using a tree-like structure with tree roots starting at an a priori defined level of resolution. Both static and dynamic domain partitioning approaches are developed. For the dynamic domain partitioning, trees are considered to be the minimum quanta of data to be migrated between the processes. This allows fully automated and efficient handling of non-simply connected partitioning of a computational domain. Dynamic load balancing is achieved via domain repartitioning during the grid adaptation step and reassigning trees to the appropriate processes to ensure approximately the same number of grid points on each process. The parallel efficiency of the approach is discussed based on parallel adaptive wavelet-based Coherent Vortex Simulations of homogeneous turbulence with linear forcing at effective non-adaptive resolutions up to 2048^3 using as many as 2048 CPU cores.
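The tree-reassignment step in the load balancing described above can be sketched with a greedy longest-processing-time heuristic; the abstract does not specify the balancing algorithm, so the heuristic, function names and point counts below are purely illustrative:

```python
import heapq

def balance_trees(tree_point_counts, n_procs):
    """Assign trees (tree index -> grid-point count) to processes so that
    each process ends up with roughly the same number of grid points
    (longest-processing-time greedy heuristic)."""
    loads = [(0, p) for p in range(n_procs)]   # min-heap of (load, process id)
    heapq.heapify(loads)
    assignment = {}
    # Place the heaviest trees first, always on the least-loaded process.
    for tree, n_pts in sorted(enumerate(tree_point_counts), key=lambda t: -t[1]):
        load, proc = heapq.heappop(loads)
        assignment[tree] = proc
        heapq.heappush(loads, (load + n_pts, proc))
    return assignment

counts = [500, 120, 80, 300, 410, 95, 240, 60]   # points per tree (made up)
assign = balance_trees(counts, 3)
per_proc = [sum(counts[t] for t, p in assign.items() if p == q) for q in range(3)]
```

Because trees, not individual points, are the migration quanta, the balance is only approximate, which matches the paper's "approximately the same number of grid points" phrasing.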
Adaptive wavelet collocation method simulations of Rayleigh-Taylor instability
NASA Astrophysics Data System (ADS)
Reckinger, S. J.; Livescu, D.; Vasilyev, O. V.
2010-12-01
Numerical simulations of single-mode, compressible Rayleigh-Taylor instability are performed using the adaptive wavelet collocation method (AWCM), which utilizes wavelets for dynamic grid adaptation. Due to the physics-based adaptivity and direct error control of the method, AWCM is ideal for resolving the wide range of scales present in the development of the instability. The problem is initialized consistent with the solutions from linear stability theory. Non-reflecting boundary conditions are applied to prevent the contamination of the instability growth by pressure waves created at the interface. AWCM is used to perform direct numerical simulations that match the early-time linear growth, the terminal bubble velocity and a reacceleration region.
Efficient Combustion Simulation via the Adaptive Wavelet Collocation Method
NASA Astrophysics Data System (ADS)
Lung, Kevin; Brown-Dymkoski, Eric; Guerrero, Victor; Doran, Eric; Museth, Ken; Balme, Jo; Urberger, Bob; Kessler, Andre; Jones, Stephen; Moses, Billy; Crognale, Anthony
Rocket engine development continues to be driven by the intuition and experience of designers, progressing through extensive trial-and-error test campaigns. Extreme temperatures and pressures frustrate direct observation, while high-fidelity simulation can be impractically expensive owing to the inherent multi-scale, multi-physics nature of the problem. To address this cost, an adaptive multi-resolution PDE solver has been designed which targets the high-performance, many-core architecture of GPUs. The adaptive wavelet collocation method is used to maintain a sparse-data representation of the high resolution simulation, greatly reducing the memory footprint while tightly controlling physical fidelity. The tensorial, stencil topology of wavelet-based grids lends itself to highly vectorized algorithms which are necessary to exploit the performance of GPUs. This approach permits efficient implementation of direct finite-rate kinetics, and improved resolution of steep thermodynamic gradients and the smaller mixing scales that drive combustion dynamics. Resolving these scales is crucial for accurate chemical kinetics, which are typically degraded or lost in statistical modeling approaches.
Spatially-Anisotropic Parallel Adaptive Wavelet Collocation Method
NASA Astrophysics Data System (ADS)
Vasilyev, Oleg V.; Brown-Dymkoski, Eric
2015-11-01
Despite the latest advancements in the development of robust wavelet-based adaptive numerical methodologies to solve partial differential equations, they all suffer from two major ``curses'': 1) the reliance on a rectangular domain and 2) the ``curse of anisotropy'' (i.e., homogeneous wavelet refinement and the inability to have a spatially varying aspect ratio of the mesh elements). The new method addresses both of these challenges by utilizing an adaptive anisotropic wavelet transform on curvilinear meshes that can be either algebraically prescribed or calculated on the fly using PDE-based mesh generation. In order to ensure accurate representation of spatial operators in physical space, an additional adaptation on the spatial physical coordinates is also performed. It is important to note that when new nodes are added in computational space, the physical coordinates can be approximated by interpolation of the existing solution, with additional local iterations to ensure that the solution of the coordinate-mapping PDEs is converged on the new mesh. In contrast to traditional mesh generation approaches, the cost of adding new nodes is minimal, mainly due to the localized nature of the iterative mesh generation PDE solver, which requires iterations only in the vicinity of newly introduced points. This work was supported by ONR MURI under grant N00014-11-1-069.
NASA Astrophysics Data System (ADS)
Kevlahan, N. N.; Vasilyev, O. V.; Yuen, D. A.
2003-12-01
An adaptive multilevel wavelet collocation method for solving multi-dimensional elliptic problems with localized structures is developed. The method is based on the general class of multi-dimensional second generation wavelets and is an extension of the dynamically adaptive second generation wavelet collocation method for evolution problems. Wavelet decomposition is used for grid adaptation and interpolation, while an O(N) hierarchical finite difference scheme, which takes advantage of the wavelet multilevel decomposition, is used for derivative calculations. The multilevel structure of the wavelet approximation provides a natural way to obtain the solution on a near-optimal grid. In order to accelerate the convergence of the iterative solver, an iterative procedure analogous to the multigrid algorithm is developed. For problems with slowly varying viscosity, simple diagonal preconditioning works. For problems with large laterally varying viscosity contrasts, either a direct solver on shared-memory machines or a multilevel iterative solver with an incomplete LU preconditioner may be used. The method is demonstrated for the solution of a number of two-dimensional elliptic test problems with both constant and spatially varying viscosity of multiscale character.
Webster, Clayton G; Zhang, Guannan; Gunzburger, Max D
2012-10-01
Accurate predictive simulations of complex real-world applications require numerical approximations that, first, counter the curse of dimensionality and, second, converge quickly in the presence of steep gradients, sharp transitions, bifurcations or finite discontinuities in high-dimensional parameter spaces. In this paper we present a novel multi-dimensional multi-resolution adaptive (MdMrA) sparse grid stochastic collocation method that utilizes hierarchical multiscale piecewise Riesz basis functions constructed from interpolating wavelets. The basis for our non-intrusive method forms a stable multiscale splitting and, thus, optimal adaptation is achieved. Error estimates and numerical examples are used to compare the efficiency of the method with several other techniques.
Adaptive wavelet collocation methods for initial value boundary problems of nonlinear PDE's
NASA Technical Reports Server (NTRS)
Cai, Wei; Wang, Jian-Zhong
1993-01-01
We have designed a cubic spline wavelet decomposition for the Sobolev space H^2_0(I), where I is a bounded interval. Based on a special 'point-wise orthogonality' of the wavelet basis functions, a fast Discrete Wavelet Transform (DWT) is constructed. This DWT maps discrete samples of a function to its wavelet expansion coefficients in O(N log N) operations. Using this transform, we propose a collocation method for the initial value boundary problem of nonlinear PDE's. Then, we test the efficiency of the DWT and apply the collocation method to solve linear and nonlinear PDE's.
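The decompose/reconstruct pattern behind such a fast DWT can be illustrated with the simplest wavelet family. The sketch below uses Haar wavelets, not the paper's cubic spline wavelets (whose transform is more involved and runs in O(N log N)); the pyramid structure is the common idea:

```python
import numpy as np

def haar_dwt(samples):
    """Pyramid Haar transform: maps N = 2^J samples to one coarsest scaling
    coefficient plus detail (wavelet) coefficients, returned coarse to fine."""
    a = np.asarray(samples, dtype=float)
    details = []
    while len(a) > 1:
        s = (a[0::2] + a[1::2]) / np.sqrt(2.0)   # smooth (scaling) part
        d = (a[0::2] - a[1::2]) / np.sqrt(2.0)   # detail (wavelet) part
        details.append(d)
        a = s
    return a, details[::-1]

def haar_idwt(a, details):
    """Inverse transform: undo the pyramid level by level."""
    for d in details:
        out = np.empty(2 * len(a))
        out[0::2] = (a + d) / np.sqrt(2.0)
        out[1::2] = (a - d) / np.sqrt(2.0)
        a = out
    return a

x = np.arange(8.0)
a0, details = haar_dwt(x)
recon = haar_idwt(a0, details)
```

Each level touches half as many samples as the previous one, so the full Haar pyramid is in fact O(N); the extra log factor in the paper's spline transform comes from the wider spline filters.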
Adaptive wavelets and relativistic magnetohydrodynamics
NASA Astrophysics Data System (ADS)
Hirschmann, Eric; Neilsen, David; Anderson, Matthew; Debuhr, Jackson; Zhang, Bo
2016-03-01
We present a method for integrating the relativistic magnetohydrodynamics equations using iterated interpolating wavelets. These provide an adaptive implementation for simulations in multiple dimensions. A measure of the local approximation error for the solution is provided by the wavelet coefficients. They place collocation points in locations naturally adapted to the flow while providing the expected conservation. We present demanding 1D and 2D tests including the Kelvin-Helmholtz instability and the Rayleigh-Taylor instability. Finally, we consider an outgoing blast wave that models a GRB outflow.
Szu, H.; Hsu, C.
1996-12-31
Human sensor systems (HSS) may be approximately described as an adaptive or self-learning version of the Wavelet Transform (WT) that is capable of learning suitable transform mother wavelets from several input-output associative pairs. Such an Adaptive WT (AWT) is a redundant combination of mother wavelets used to either represent or classify inputs.
NASA Astrophysics Data System (ADS)
Vasilyev, Oleg V.; Gazzola, Mattia; Koumoutsakos, Petros
2009-11-01
In this talk we discuss preliminary results on the use of a hybrid wavelet collocation-Brinkman penalization approach for shape and topology optimization of fluid flows. The adaptive wavelet collocation method tackles the problem of efficiently resolving a fluid flow on a dynamically adaptive computational grid in complex geometries (where grid resolution varies in both space and time), while Brinkman volume penalization allows easy variation of the flow geometry without using body-fitted meshes, by simply changing the shape of the penalization region. The Brinkman volume penalization approach also allows a seamless transition from shape to topology optimization by combining it with a level set approach and increasing the size of the optimization space. The approach is demonstrated for shape optimization of a variety of fluid flows by optimizing a single cost function (the time-averaged drag coefficient) using the covariance matrix adaptation (CMA) evolutionary algorithm.
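The Brinkman penalization idea above amounts to adding a forcing term -(chi/eta)(u - u_s) to the momentum equation, where the mask chi is 1 inside the obstacle and the permeability eta is small. A minimal 1D sketch (the level-set geometry and parameter values are illustrative, not from the talk):

```python
import numpy as np

def brinkman_forcing(u, mask, eta=1e-2, u_solid=0.0):
    """Penalization term -(chi/eta) * (u - u_solid): inside the obstacle
    (chi = 1) the velocity is driven toward the solid velocity; outside
    (chi = 0) the governing equations are untouched."""
    return -(mask / eta) * (u - u_solid)

# Changing the geometry only means changing the mask, e.g. via a level set
# phi (solid where phi < 0) -- which is what enables topology optimization.
x = np.linspace(-1.0, 1.0, 201)
phi = np.abs(x) - 0.25                 # level set for a slab |x| < 0.25
mask = (phi < 0).astype(float)
u = np.ones_like(x)
f = brinkman_forcing(u, mask, eta=1e-2)
```

Because the mask is just a field, an optimizer such as CMA can reshape or even split the solid region without any remeshing.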
NASA Astrophysics Data System (ADS)
Vasilyev, Oleg V.; Gazzola, Mattia; Koumoutsakos, Petros
2010-11-01
In this talk we discuss preliminary results on the use of a hybrid wavelet collocation-Brinkman penalization approach to shape optimization for drag reduction in flows past linked bodies. This optimization relies on the Adaptive Wavelet Collocation Method along with the Brinkman penalization technique and the Covariance Matrix Adaptation Evolution Strategy (CMA-ES). The adaptive wavelet collocation method tackles the problem of efficiently resolving a fluid flow on a dynamically adaptive computational grid, while a level set approach is used to describe the body shape and the Brinkman volume penalization allows for easy variation of the flow geometry without requiring body-fitted meshes. We perform 2D simulations of linked bodies in order to investigate whether flat geometries are optimal for drag reduction. In order to accelerate the costly cost-function evaluations, we exploit the inherent parallelism of ES and extend the CMA-ES implementation to a multi-host framework. This framework allows for easy distribution of the cost-function evaluations across several parallel architectures and is not limited to a single computing facility. The resulting optimal shapes are geometrically consistent with those obtained in the pioneering wind tunnel experiments for drag reduction using Evolution Strategies by Ingo Rechenberg.
Adaptive Multilinear Tensor Product Wavelets.
Weiss, Kenneth; Lindstrom, Peter
2016-01-01
Many foundational visualization techniques including isosurfacing, direct volume rendering and texture mapping rely on piecewise multilinear interpolation over the cells of a mesh. However, there has not been much focus within the visualization community on techniques that efficiently generate and encode globally continuous functions defined by the union of multilinear cells. Wavelets provide a rich context for analyzing and processing complicated datasets. In this paper, we exploit adaptive regular refinement as a means of representing and evaluating functions described by a subset of their nonzero wavelet coefficients. We analyze the dependencies involved in the wavelet transform and describe how to generate and represent the coarsest adaptive mesh with nodal function values such that the inverse wavelet transform is exactly reproduced via simple interpolation (subdivision) over the mesh elements. This allows for an adaptive, sparse representation of the function with on-demand evaluation at any point in the domain. We focus on the popular wavelets formed by tensor products of linear B-splines, resulting in an adaptive, nonconforming but crack-free quadtree (2D) or octree (3D) mesh that allows reproducing globally continuous functions via multilinear interpolation over its cells. PMID:26529742
Volumetric Rendering of Geophysical Data on Adaptive Wavelet Grid
NASA Astrophysics Data System (ADS)
Vezolainen, A.; Erlebacher, G.; Vasilyev, O.; Yuen, D. A.
2005-12-01
Numerical modeling of geological phenomena frequently involves processes across a wide range of spatial and temporal scales. In the last several years, transport phenomena governed by the Navier-Stokes equations have been simulated in wavelet space using second generation wavelets [1], and most recently on fully adaptive meshes. Our objective is to visualize this time-dependent data using volume rendering while capitalizing on the available sparse data representation. We present a technique for volumetric ray casting of multi-scale datasets in wavelet space. Rather than working with the wavelets at the finest possible resolution, we perform a partial inverse wavelet transform as a preprocessing step to obtain scaling functions on a uniform grid at a user-prescribed resolution. As a result, a function in physical space is represented by a superposition of scaling functions on a coarse regular grid and wavelets on an adaptive mesh. An efficient and accurate ray casting algorithm is based on these scaling functions alone. Additional detail is added during the ray tracing by taking an appropriate number of wavelets into account based on support overlap with the interpolation point, wavelet amplitude, and other characteristics, such as opacity accumulation (front-to-back ordering) and deviation from the frontal viewing direction. Strategies for hardware implementation, inspired by the work in [2], will be presented if available. We will present error measures as a function of the number of scaling and wavelet functions used for interpolation. Data from mantle convection will be used to illustrate the method. [1] Vasilyev, O.V. and Bowman, C., Second Generation Wavelet Collocation Method for the Solution of Partial Differential Equations. J. Comp. Phys., 165, pp. 660-693, 2000. [2] Guthe, S., Wand, M., Gonser, J., and Straßer, W. Interactive rendering of large volume data sets. In Proceedings of the Conference on Visualization '02 (Boston, Massachusetts, October 27 - November
A Haar wavelet collocation method for coupled nonlinear Schrödinger-KdV equations
NASA Astrophysics Data System (ADS)
Oruç, Ömer; Esen, Alaattin; Bulut, Fatih
2016-04-01
In this paper, a Haar wavelet collocation method is proposed to obtain accurate numerical solutions of the coupled nonlinear Schrödinger-Korteweg-de Vries (KdV) equations. An explicit time stepping scheme is used for the discretization of the time derivatives, the nonlinear terms appearing in the equations are linearized by a linearization technique, and the space derivatives are discretized by Haar wavelets. In order to test the accuracy and reliability of the proposed method, L2 and L∞ error norms and conserved quantities are used. The obtained results are also compared with previous ones obtained by the finite element method, the Crank-Nicolson method and radial basis function meshless methods. An error analysis of the Haar wavelets is also given.
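Haar collocation schemes of this kind rest on evaluating the Haar family at collocation points t_l = (l + 0.5)/N and assembling the resulting collocation matrix. A sketch under one common indexing convention (not necessarily the authors'):

```python
import numpy as np

def haar_matrix(J):
    """Haar collocation matrix H[i, l] = h_i(t_l) for N = 2**(J + 1)
    Haar functions evaluated at collocation points t_l = (l + 0.5)/N."""
    N = 2 ** (J + 1)
    t = (np.arange(N) + 0.5) / N
    H = np.zeros((N, N))
    H[0] = 1.0                                  # scaling function h_1 = 1
    i = 1
    for j in range(J + 1):                      # resolution levels
        m = 2 ** j
        for k in range(m):                      # translations at level j
            left, mid, right = k / m, (k + 0.5) / m, (k + 1) / m
            H[i] = np.where((t >= left) & (t < mid), 1.0,
                            np.where((t >= mid) & (t < right), -1.0, 0.0))
            i += 1
    return H

H = haar_matrix(1)
```

The rows are mutually orthogonal in the discrete inner product, which is what makes recovering expansion coefficients from collocated samples cheap and stable.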
NASA Astrophysics Data System (ADS)
Gotovac, Hrvoje; Srzic, Veljko
2014-05-01
Contaminant transport in natural aquifers is a complex, multiscale process that is frequently studied using different Eulerian, Lagrangian and hybrid numerical methods. Conservative solute transport is typically modeled using the advection-dispersion equation (ADE). Despite the large number of available numerical methods that have been developed to solve it, the accurate numerical solution of the ADE still presents formidable challenges. In particular, current numerical solutions of multidimensional advection-dominated transport in non-uniform velocity fields are affected by one or all of the following problems: numerical dispersion that introduces artificial mixing and dilution, grid orientation effects, unresolved spatial and temporal scales, and unphysical numerical oscillations (e.g., Herrera et al., 2009; Bosso et al., 2012). In this work we present the Eulerian-Lagrangian Adaptive Fup Collocation Method (ELAFCM), based on Fup basis functions and a collocation approach for spatial approximation, and explicit stabilized Runge-Kutta-Chebyshev temporal integration (public domain routine SERK2), which is especially well suited for stiff parabolic problems. The spatial adaptive strategy is based on Fup basis functions, which are closely related to wavelets and splines: they are also compactly supported, they exactly describe algebraic polynomials, and they enable a multiresolution adaptive analysis (MRA). MRA is here performed via the Fup Collocation Transform (FCT), so that at each time step the concentration solution is decomposed using only a few significant Fup basis functions on an adaptive collocation grid with appropriate scales (frequencies) and locations, a desired level of accuracy and a near-minimum computational cost. FCT adds more collocation points and higher resolution levels only in sensitive zones with sharp concentration gradients, fronts and/or narrow transition zones. According to our recent achievements there is no need for solving the large
NASA Astrophysics Data System (ADS)
Li, Xinxiu
2012-10-01
Physical processes with memory and hereditary properties can be best described by fractional differential equations, due to the memory effect of fractional derivatives. For that reason, reliable and efficient techniques for the solution of fractional differential equations are needed. Our aim is to generalize the wavelet collocation method to fractional differential equations using cubic B-spline wavelets. Analytical expressions of fractional derivatives in the Caputo sense for cubic B-spline functions are presented. The main characteristic of the approach is that it converts such problems into a system of algebraic equations, which is suitable for computer programming. It not only simplifies the problem but also speeds up the computation. Numerical results demonstrate the validity and applicability of the method to solve fractional differential equations.
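The analytical Caputo derivatives mentioned above rest on the classical formula D^alpha t^n = Gamma(n+1)/Gamma(n+1-alpha) * t^(n-alpha) for 0 < alpha < 1, with constants mapping to zero; since cubic B-splines are piecewise cubic polynomials, their Caputo derivatives reduce to sums of such terms. A sketch applying the formula term by term (the helper name is illustrative):

```python
import math

def caputo_poly(coeffs, alpha, t):
    """Caputo derivative (0 < alpha < 1) of p(t) = sum_n coeffs[n] * t^n,
    using D^alpha t^n = Gamma(n+1)/Gamma(n+1-alpha) * t^(n-alpha)."""
    total = 0.0
    for n, c in enumerate(coeffs):
        if n == 0:
            continue  # the Caputo derivative of a constant is zero
        total += c * math.gamma(n + 1) / math.gamma(n + 1 - alpha) * t ** (n - alpha)
    return total

# As alpha -> 1 this recovers the ordinary derivative: d/dt (t^2) = 2t.
val = caputo_poly([0.0, 0.0, 1.0], 0.999, 2.0)
```

Collocating such closed-form derivatives at grid points is what turns the fractional problem into the algebraic system the abstract refers to.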
Adapting overcomplete wavelet models to natural images
NASA Astrophysics Data System (ADS)
Sallee, Phil; Olshausen, Bruno A.
2003-11-01
Overcomplete wavelet representations have become increasingly popular for their ability to provide highly sparse and robust descriptions of natural signals. We describe a method for incorporating an overcomplete wavelet representation as part of a statistical model of images which includes a sparse prior distribution over the wavelet coefficients. The wavelet basis functions are parameterized by a small set of 2-D functions. These functions are adapted to maximize the average log-likelihood of the model for a large database of natural images. When adapted to natural images, these functions become selective to different spatial orientations, and they achieve a superior degree of sparsity on natural images as compared with traditional wavelet bases. The learned basis is similar to the Steerable Pyramid basis, and yields slightly higher SNR for the same number of active coefficients. Inference with the learned model is demonstrated for applications such as denoising, with results that compare favorably with other methods.
Nonlinear adaptive wavelet analysis of electrocardiogram signals
NASA Astrophysics Data System (ADS)
Yang, H.; Bukkapatnam, S. T.; Komanduri, R.
2007-08-01
Wavelet representation can provide an effective time-frequency analysis for nonstationary signals, such as electrocardiogram (EKG) signals, which contain both steady and transient parts. In recent years, wavelet representation has been emerging as a powerful time-frequency tool for the analysis and measurement of EKG signals. The EKG signals contain recurring, near-periodic patterns of P, QRS, T, and U waveforms, each of which can have multiple manifestations. Identification and extraction of a compact set of features from these patterns is critical for effective detection and diagnosis of various disorders. This paper presents an approach to extract a fiducial pattern of EKG based on the consideration of the underlying nonlinear dynamics. The pattern, in a nutshell, is a combination of eigenfunctions of the ensembles created from a Poincare section of the EKG dynamics. The adaptation of wavelet functions to the fiducial pattern thus extracted yields a representation two orders of magnitude (some 95%) more compact (measured in terms of Shannon signal entropy). Such a compact representation can facilitate the extraction of features that are less sensitive to extraneous noise and other variations. The adaptive wavelet can also lead to more efficient algorithms for beat detection and QRS cancellation as well as for the extraction of multiple classical EKG signal events, such as widths of QRS complexes and QT intervals.
Adaptive wavelet methods - Matrix-vector multiplication
NASA Astrophysics Data System (ADS)
Černá, Dana; Finěk, Václav
2012-12-01
The design of most adaptive wavelet methods for elliptic partial differential equations follows a general concept proposed by A. Cohen, W. Dahmen and R. DeVore in [3, 4]. The essential steps are: transformation of the variational formulation into a well-conditioned infinite-dimensional l2 problem, finding a convergent iteration process for the l2 problem, and finally derivation of its finite-dimensional version, which works with an inexact right-hand side and approximate matrix-vector multiplications. In our contribution, we briefly review all these parts and mainly pay attention to approximate matrix-vector multiplications. Effective approximation of matrix-vector multiplications is enabled by an off-diagonal decay of the entries of the wavelet stiffness matrix. We propose here a new approach which better utilizes the actual decay of the matrix entries.
Wavelet-based adaptive numerical simulation of unsteady 3D flow around a bluff body
NASA Astrophysics Data System (ADS)
de Stefano, Giuliano; Vasilyev, Oleg
2012-11-01
The unsteady three-dimensional flow past a two-dimensional bluff body is numerically simulated using a wavelet-based method. The body is modeled by exploiting the Brinkman volume-penalization method, which results in modifying the governing equations with the addition of an appropriate forcing term inside the spatial region occupied by the obstacle. The volume-penalized incompressible Navier-Stokes equations are numerically solved by means of the adaptive wavelet collocation method, where the non-uniform spatial grid is dynamically adapted to the flow evolution. The combined approach is successfully applied to the simulation of vortex shedding flow behind a stationary prism with square cross-section. The computation is conducted at transitional Reynolds numbers, where fundamental unstable three-dimensional vortical structures exist, and the unsteady forces arising from fluid-structure interaction are well predicted.
NASA Astrophysics Data System (ADS)
Nejadmalayeri, Alireza
The current work develops a wavelet-based adaptive variable fidelity approach that integrates Wavelet-based Direct Numerical Simulation (WDNS), Coherent Vortex Simulations (CVS), and Stochastic Coherent Adaptive Large Eddy Simulations (SCALES). The proposed methodology employs the notion of spatially and temporally varying wavelet thresholding combined with hierarchical wavelet-based turbulence modeling. The transition between the WDNS, CVS, and SCALES regimes is achieved through two-way physics-based feedback between the modeled SGS dissipation (or another dynamically important physical quantity) and the spatial resolution. The feedback is based on spatio-temporal variation of the wavelet threshold, where the thresholding level is adjusted on the fly depending on the deviation of the local significant SGS dissipation from the user-prescribed level. This strategy overcomes a major limitation of all previously existing wavelet-based multi-resolution schemes: the global thresholding criterion, which does not fully utilize the spatial/temporal intermittency of the turbulent flow. Hence, the aforementioned concept of physics-based spatially variable thresholding in the context of wavelet-based numerical techniques for solving PDEs is established. The procedure consists of tracking the wavelet thresholding-factor within a Lagrangian frame by exploiting a Lagrangian Path-Line Diffusive Averaging approach based on either linear averaging along characteristics or direct solution of the evolution equation. This innovative technique represents a framework of continuously variable fidelity wavelet-based space/time/model-form adaptive multiscale methodology. This methodology has been tested and has provided very promising results on a benchmark with a time-varying user-prescribed level of SGS dissipation. In addition, a long-running effort to develop a novel parallel adaptive wavelet collocation method for the numerical solution of PDEs has been completed during the course of the current work.
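The core operation in such spatially variable thresholding is to retain only those wavelet details whose magnitude exceeds a local, rather than global, threshold. A minimal sketch, assuming a prescribed threshold field (in the methodology above the field additionally evolves in time along Lagrangian paths, which is omitted here; all names and values are illustrative):

```python
import numpy as np

def adaptive_threshold(detail, eps_field, u_norm):
    """Retain wavelet detail coefficients where |d_k| >= eps(x_k) * ||u||,
    with a threshold eps that varies from point to point."""
    keep = np.abs(detail) >= eps_field * u_norm
    return np.where(keep, detail, 0.0), keep

d = np.array([0.50, 0.02, 0.30, 0.01, 0.20])      # wavelet details (made up)
eps = np.array([0.10, 0.10, 0.40, 0.005, 0.10])   # locally adjusted threshold
thr, kept = adaptive_threshold(d, eps, u_norm=1.0)
```

Note how the coefficient 0.30 is discarded where the local threshold is high while the much smaller 0.01 survives where it is low; a single global threshold could not produce this spatially selective behavior.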
Adaptive wavelets for visual object detection and classification
NASA Astrophysics Data System (ADS)
Aghdasi, Farzin
1997-10-01
We investigate the application of adaptive wavelets for the representation and classification of signals in digitized speech and medical images. A class of wavelet basis functions is used to extract features from the regions of interest. These features are then used in an artificial neural network to classify the region as containing the desired object or as belonging to the background clutter. The dilation and shift parameters of the wavelet functions are not fixed; these parameters are included in the training scheme. In this way the wavelets are adapted to the expected shape and size of the signals. The results indicate that adaptive wavelet functions may outperform classical fixed wavelet analysis in the detection of subtle objects.
NASA Astrophysics Data System (ADS)
Liu, Hong; Mo, Yu L.
1998-08-01
Many textures, such as woven fabrics, have repeating textons. In order to handle the textural characteristics of images with defects, this paper proposes a new method based on the 2D wavelet transform. In the method, a new concept of different adaptive wavelet bases is used to match the texture pattern. The 2D wavelet transform has two different adaptive orthonormal wavelet bases for rows and columns, which differ from Daubechies wavelet bases. The orthonormal wavelet bases for rows and columns are generated by a genetic algorithm. The experimental results demonstrate the ability of the different adaptive wavelet bases to characterize the texture and locate the defects in it.
A New Adaptive Mother Wavelet for Electromagnetic Transient Analysis
NASA Astrophysics Data System (ADS)
Guillén, Daniel; Idárraga-Ospina, Gina; Cortes, Camilo
2016-01-01
Wavelet Transform (WT) is a powerful signal processing technique whose applications in power systems have been increasing for the evaluation of power system conditions, such as faults, switching transients and power quality issues, among others. Electromagnetic transients in power systems are due to changes in the network configuration and produce non-periodic signals, which have to be identified to avoid power outages in normal operation or transient conditions. In this paper, a methodology to develop a new adaptive mother wavelet for electromagnetic transient analysis is proposed. Classification is carried out with an innovative technique based on adaptive wavelets, where the filter bank coefficients are adapted until a discriminant criterion is optimized. Then, the corresponding filter coefficients are used to obtain the new mother wavelet, named wavelet ET, which makes it possible to identify and distinguish the high-frequency information produced by different electromagnetic transients.
NASA Astrophysics Data System (ADS)
Luo, Hongjun; Kolb, Dietmar; Flad, Heinz-Jurgen; Hackbusch, Wolfgang; Koprucki, Thomas
2002-08-01
We have studied various aspects concerning the use of hyperbolic wavelets and adaptive approximation schemes for wavelet expansions of correlated wave functions. In order to analyze the consequences of reduced regularity of the wave function at the electron-electron cusp, we first considered a realistic exactly solvable many-particle model in one dimension. Convergence rates of wavelet expansions, with respect to L2 and H1 norms and the energy, were established for this model. We compare the performance of hyperbolic wavelets and their extensions through adaptive refinement in the cusp region, to a fully adaptive treatment based on the energy contribution of individual wavelets. Although hyperbolic wavelets show an inferior convergence behavior, they can be easily refined in the cusp region yielding an optimal convergence rate for the energy. Preliminary results for the helium atom are presented, which demonstrate the transferability of our observations to more realistic systems. We propose a contraction scheme for wavelets in the cusp region, which reduces the number of degrees of freedom and yields a favorable cost to benefit ratio for the evaluation of matrix elements.
An adaptive morphological gradient lifting wavelet for detecting bearing defects
NASA Astrophysics Data System (ADS)
Li, Bing; Zhang, Pei-lin; Mi, Shuang-shan; Hu, Ren-xi; Liu, Dong-sheng
2012-05-01
This paper presents a novel wavelet decomposition scheme, named the adaptive morphological gradient lifting wavelet (AMGLW), for detecting bearing defects. The adaptability of the AMGLW lies in the scheme's ability to select between two filters, namely the average filter and the morphological gradient filter, to update the approximation signal based on the local gradient of the analyzed signal. Both a simulated signal and vibration signals acquired from a bearing are employed to evaluate and compare the proposed AMGLW scheme with the traditional linear wavelet transform (LWT) and another adaptive lifting wavelet (ALW) developed in the literature. Experimental results reveal that the AMGLW clearly outperforms the LWT and ALW in detecting bearing defects. The impulsive components can be enhanced and the noise suppressed simultaneously by the presented AMGLW scheme, so the fault characteristic frequencies of the bearing can be clearly identified. Furthermore, the AMGLW has an advantage over the LWT in computational efficiency, making it quite suitable for online condition monitoring of bearings and other rotating machinery.
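The filter-switching idea can be sketched as one lifting-style decomposition level that picks the average filter in smooth regions and the morphological gradient (dilation minus erosion) where the local gradient is large. The switching rule, window size and threshold below are illustrative assumptions, not the paper's exact criterion:

```python
import numpy as np

def amglw_level(x, grad_thresh=0.5):
    """One decomposition level: each approximation sample comes from the
    average filter in smooth regions and from the morphological gradient
    where the local gradient exceeds grad_thresh."""
    approx = np.empty(len(x) // 2)
    for k in range(len(approx)):
        lo, hi = max(0, 2 * k - 1), min(len(x), 2 * k + 2)
        window = x[lo:hi]
        grad = window.max() - window.min()     # morphological gradient
        approx[k] = grad if grad > grad_thresh else window.mean()
    detail = x[1::2] - approx                  # lifting-style detail
    return approx, detail

x = np.array([0.0, 0.0, 0.0, 5.0, 0.0, 0.0, 0.0, 0.1])  # impulse at index 3
approx, detail = amglw_level(x)
```

The impulse survives in the approximation (instead of being averaged away), which is the behavior that makes bearing fault impulses visible after decomposition.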
Space-based RF signal classification using adaptive wavelet features
Caffrey, M.; Briles, S.
1995-04-01
RF signals are dispersed in frequency as they propagate through the ionosphere. For wide-band signals, this results in nonlinearly chirped, transient signals in the VHF portion of the spectrum. This ionospheric dispersion provides a means of discriminating wide-band transients from other signals (e.g., continuous-wave carriers, burst communications, chirped-radar signals, etc.). The transient nature of these dispersed signals makes them candidates for wavelet feature selection. Rather than choosing a wavelet ad hoc, we adaptively compute an optimal mother wavelet via a neural network. Gaussian-weighted, linear-frequency-modulated (GLFM) wavelets are linearly combined by the network to generate our application-specific mother wavelet, which is optimized for its capacity to select features that discriminate between the dispersed signals and clutter (e.g., multiple continuous-wave carriers), not for its ability to represent the dispersed signal. The resulting mother wavelet is then used to extract features for a neural network classifier. The performance of the adaptive wavelet classifier is then compared to that of an FFT-based neural network classifier.
Adaptive video compressed sampling in the wavelet domain
NASA Astrophysics Data System (ADS)
Dai, Hui-dong; Gu, Guo-hua; He, Wei-ji; Chen, Qian; Mao, Tian-yi
2016-07-01
In this work, we propose a multiscale video acquisition framework called adaptive video compressed sampling (AVCS) that involves sparse sampling and motion estimation in the wavelet domain. Implementing a combination of a binary DMD and a single-pixel detector, AVCS acquires successively finer resolution sparse wavelet representations in moving regions directly based on extended wavelet trees, and alternately uses these representations to estimate the motion in the wavelet domain. Then, we can remove the spatial and temporal redundancies and provide a method to reconstruct video sequences from compressed measurements in real time. In addition, the proposed method allows adaptive control over the reconstructed video quality. The numerical simulation and experimental results indicate that AVCS performs better than the conventional CS-based methods at the same sampling rate even under the influence of noise, and the reconstruction time and measurements required can be significantly reduced.
Big data extraction with adaptive wavelet analysis (Presentation Video)
NASA Astrophysics Data System (ADS)
Qu, Hongya; Chen, Genda; Ni, Yiqing
2015-04-01
Nondestructive evaluation and sensing technology have been increasingly applied to characterize material properties and detect local damage in structures. More often than not, they generate images or data strings from which it is difficult to discern any physical features without novel data extraction techniques. In the literature, popular data analysis techniques include the Short-Time Fourier Transform, Wavelet Transform, and Hilbert Transform for time efficiency and adaptive recognition. In this study, a new data analysis technique is proposed and developed by introducing an adaptive central frequency of the continuous Morlet wavelet transform so that both high frequency and time resolution can be maintained in a time-frequency window of interest. The new analysis technique is referred to as Adaptive Wavelet Analysis (AWA). This paper is organized in several sections. In the first section, the finite time-frequency resolution limitations of the traditional wavelet transform are introduced. Such limitations greatly distort transformed signals whose frequency varies significantly with time. In the second section, the Short Time Wavelet Transform (STWT), similar to the Short Time Fourier Transform (STFT), is defined and developed to overcome this shortcoming of the traditional wavelet transform. In the third section, by utilizing the STWT and a time-variant central frequency of the Morlet wavelet, AWA adapts the time-frequency resolution to the signal variation over time. Finally, the advantage of the proposed AWA is demonstrated in Section 4 with a ground penetrating radar (GPR) image from a bridge deck, an analytical chirp signal with a large sinusoidal frequency change over time, and the train-induced acceleration responses of the Tsing Ma Suspension Bridge in Hong Kong, China. The performance of the proposed AWA is compared with the STFT and the traditional wavelet transform.
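The central idea, a Morlet atom whose central frequency (and hence window width) is re-chosen at each analysis time, can be sketched as follows. The function `awa_coefficient`, the five-cycle window rule, and the chirp test signal are hypothetical stand-ins for the paper's STWT machinery, not its actual implementation.

```python
import numpy as np

def awa_coefficient(signal, t, fs, f_c, n_cycles=5.0):
    """Correlate the signal with a complex Morlet atom centred at sample t.

    f_c is the (possibly time-varying) central frequency; in an AWA-style
    analysis the caller adapts f_c over time to the local frequency content,
    so the window width (sigma) shrinks as f_c rises, keeping both time and
    frequency resolution in the window of interest."""
    n = len(signal)
    tt = (np.arange(n) - t) / fs
    sigma = n_cycles / (2 * np.pi * f_c)          # window width tracks f_c
    atom = np.exp(-0.5 * (tt / sigma) ** 2) * np.exp(2j * np.pi * f_c * tt)
    atom /= np.linalg.norm(atom)
    return np.abs(np.vdot(atom, signal))

fs = 1000.0
t_axis = np.arange(2000) / fs
# chirp sin(2*pi*(20 t + 40 t^2)): instantaneous frequency 20 + 80 t Hz
chirp = np.sin(2 * np.pi * (20 + 40 * t_axis) * t_axis)
# adapt the central frequency to the local instantaneous frequency:
early = awa_coefficient(chirp, t=300, fs=fs, f_c=44.0)    # 44 Hz at t = 0.3 s
late = awa_coefficient(chirp, t=1700, fs=fs, f_c=156.0)   # 156 Hz at t = 1.7 s
```

An atom whose central frequency matches the local instantaneous frequency yields a much larger coefficient than a mismatched one, which is the adaptation criterion in miniature.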
NASA Astrophysics Data System (ADS)
Man, Jun; Li, Weixuan; Zeng, Lingzao; Wu, Laosheng
2016-06-01
The ensemble Kalman filter (EnKF) has gained popularity in hydrological data assimilation problems. As a Monte Carlo based method, it usually requires a sufficiently large ensemble size to guarantee accuracy. As an alternative, the probabilistic collocation based Kalman filter (PCKF) employs the polynomial chaos expansion (PCE) to represent and propagate the uncertainties in parameters and states. However, PCKF suffers from the so-called "curse of dimensionality": its computational cost increases drastically with the number of parameters and the system nonlinearity. Furthermore, PCKF may fail to provide accurate estimations for strongly nonlinear models due to its joint updating scheme. Motivated by recent developments in uncertainty quantification and the EnKF, we propose a restart adaptive probabilistic collocation based Kalman filter (RAPCKF) for data assimilation in unsaturated flow problems. During the implementation of RAPCKF, the important parameters are identified and active PCE basis functions are adaptively selected at each assimilation step; the "restart" scheme is utilized to eliminate the inconsistency between updated model parameters and state variables. The performance of RAPCKF is systematically tested with numerical cases of unsaturated flow models. It is shown that the adaptive approach and restart scheme can significantly improve the performance of PCKF. Moreover, RAPCKF has been demonstrated to be more efficient than the EnKF at a comparable computational cost.
An Adaptive Digital Image Watermarking Algorithm Based on Morphological Haar Wavelet Transform
NASA Astrophysics Data System (ADS)
Huang, Xiaosheng; Zhao, Sujuan
At present, most wavelet-based digital watermarking algorithms are based on linear wavelet transforms, and fewer on non-linear wavelet transforms. In this paper, we propose an adaptive digital image watermarking algorithm based on a non-linear wavelet transform, the Morphological Haar Wavelet Transform. In the algorithm, the original image and the watermark image are each decomposed with the multi-scale morphological wavelet transform. Then the watermark information is adaptively embedded into the original image at different resolutions, exploiting features of the Human Visual System (HVS). The experimental results show that our method is more robust and effective than ordinary wavelet transform algorithms.
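For readers unfamiliar with non-linear wavelets, here is a minimal sketch of one common form of the morphological Haar analysis/synthesis pair: the approximation takes the maximum of each sample pair (a dilation, i.e. a non-linear filter) and the detail keeps the signed difference, which is enough to invert the transform exactly. This is a textbook variant, not necessarily the exact filters used in the paper.

```python
def morph_haar_analysis(x):
    """One level of a (simplified) morphological Haar decomposition:
    approximation = pairwise maximum, detail = signed pairwise difference."""
    approx = [max(a, b) for a, b in zip(x[0::2], x[1::2])]
    detail = [a - b for a, b in zip(x[0::2], x[1::2])]
    return approx, detail

def morph_haar_synthesis(approx, detail):
    """Exact inverse: recover each (even, odd) pair from its max and difference."""
    x = []
    for m, d in zip(approx, detail):
        even = m if d >= 0 else m + d   # the larger of the pair equals the max
        odd = even - d
        x.extend([even, odd])
    return x

a, d = morph_haar_analysis([3, 1, 2, 5, 0, 0])
```

Because the approximation is a max rather than an average, edges survive coarsening without blurring, which is why such transforms suit watermark embedding guided by visual features.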
Solution of Reactive Compressible Flows Using an Adaptive Wavelet Method
NASA Astrophysics Data System (ADS)
Zikoski, Zachary; Paolucci, Samuel; Powers, Joseph
2008-11-01
This work presents numerical simulations of reactive compressible flow, including detailed multicomponent transport, using an adaptive wavelet algorithm. The algorithm allows for dynamic grid adaptation which enhances our ability to fully resolve all physically relevant scales. The thermodynamic properties, equation of state, and multicomponent transport properties are provided by CHEMKIN and TRANSPORT libraries. Results for viscous detonation in a H2:O2:Ar mixture, and other problems in multiple dimensions, are included.
Adaptive wavelet-based recognition of oscillatory patterns on electroencephalograms
NASA Astrophysics Data System (ADS)
Nazimov, Alexey I.; Pavlov, Alexey N.; Hramov, Alexander E.; Grubov, Vadim V.; Koronovskii, Alexey A.; Sitnikova, Evgenija Y.
2013-02-01
The problem of automatic recognition of specific oscillatory patterns on electroencephalograms (EEG) is addressed using the continuous wavelet-transform (CWT). A possibility of improving the quality of recognition by optimizing the choice of CWT parameters is discussed. An adaptive approach is proposed to identify sleep spindles (SS) and spike wave discharges (SWD) that assumes automatic selection of CWT-parameters reflecting the most informative features of the analyzed time-frequency structures. Advantages of the proposed technique over the standard wavelet-based approaches are considered.
Multiple cardiac arrhythmia recognition using adaptive wavelet network.
Lin, Chia-Hung; Chen, Pei-Jarn; Chen, Yung-Fu; Lee, You-Yun; Chen, Tainsong
2005-01-01
This paper proposes a method for electrocardiogram (ECG) heartbeat pattern recognition using an adaptive wavelet network (AWN). ECG beat recognition can be divided into a sequence of stages, starting with feature extraction and conversion of QRS complexes, followed by identification of cardiac arrhythmias based on the detected features. The discrimination method for ECG beats is a two-subnetwork architecture, consisting of a wavelet layer and a probabilistic neural network (PNN). Morlet wavelets are used to extract the features from each heartbeat, and the PNN then analyzes the meaningful features and performs the discrimination tasks. The AWN is suitable for application in a dynamic environment, allowing features to be added or removed through automatic target adjustment and parameter tuning. The experimental results obtained by testing data from the MIT-BIH arrhythmia database demonstrate the efficiency of the proposed method. PMID:17281539
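The wavelet layer can be sketched as a small bank of Morlet correlations per heartbeat. The function name, the choice of scales, and the carrier parameter below are illustrative assumptions; the PNN stage and the AWN's adaptive tuning are omitted.

```python
import numpy as np

def morlet_features(beat, scales=(2, 4, 8), w0=5.0):
    """Hypothetical feature extractor: correlate one heartbeat segment with
    real Morlet wavelets at a few dyadic scales and return the coefficient
    magnitudes as a compact feature vector (a sketch of the wavelet layer
    of a two-subnetwork AWN, not the paper's trained network)."""
    n = len(beat)
    t = np.arange(n) - n // 2                       # centre the atom on the beat
    feats = []
    for s in scales:
        psi = np.exp(-0.5 * (t / s) ** 2) * np.cos(w0 * t / s)
        psi /= np.sqrt(s)                           # scale normalisation
        feats.append(abs(float(np.dot(psi, beat))))
    return feats

beat = np.sin(np.linspace(0, np.pi, 64))            # toy "heartbeat" segment
features = morlet_features(beat)
```

In a full system these magnitudes (one per scale) would feed the PNN, which assigns the beat to an arrhythmia class.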
Wavelet domain image restoration with adaptive edge-preserving regularization.
Belge, M; Kilmer, M E; Miller, E L
2000-01-01
In this paper, we consider a wavelet based edge-preserving regularization scheme for use in linear image restoration problems. Our efforts build on a collection of mathematical results indicating that wavelets are especially useful for representing functions that contain discontinuities (i.e., edges in two dimensions or jumps in one dimension). We interpret the resulting theory in a statistical signal processing framework and obtain a highly flexible framework for adapting the degree of regularization to the local structure of the underlying image. In particular, we are able to adapt quite easily to scale-varying and orientation-varying features in the image while simultaneously retaining the edge preservation properties of the regularizer. We demonstrate a half-quadratic algorithm for obtaining the restorations from observed data. PMID:18255433
Adaptive window-length detection of underwater transients using wavelets.
Carevic, Dragana
2005-05-01
This paper describes a detection method that adapts to unknown characteristics of the underlying transient signal, such as location, length, and time-frequency content. It applies a set of embedded detectors tuned to a number of signal partitions. The detectors are based on the wavelet theory, whereby two different techniques are examined, one using local Fourier transform and the other using discrete wavelet transform. The detection statistics are computed so as to enable prewhitening of unknown colored noise and to allow for a constant false-alarm rate detection. An adapted segmentation of the signal is next obtained with a goal of finding the largest detection statistics within each segment of the partition. The detectors are tested using several underwater acoustic transient signals buried in ambient sea noise. PMID:15957761
Solving Chemical Master Equations by an Adaptive Wavelet Method
Jahnke, Tobias; Galan, Steffen
2008-09-01
Solving chemical master equations is notoriously difficult due to the tremendous number of degrees of freedom. We present a new numerical method which efficiently reduces the size of the problem in an adaptive way. The method is based on a sparse wavelet representation and an algorithm which, in each time step, detects the essential degrees of freedom required to approximate the solution up to the desired accuracy.
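The per-step detection of essential degrees of freedom can be illustrated on a plain sparse probability vector; the paper adapts a sparse wavelet representation instead, so `adapt_support` below is only a cartoon of the adaptivity idea, with a made-up tolerance.

```python
def adapt_support(p, tol=1e-6):
    """Shrink a sparse probability vector to its essential degrees of freedom:
    drop states whose probability is below tol, then renormalise. A sketch of
    the adaptive size reduction only; the paper's method thresholds a sparse
    *wavelet* representation of the distribution at each time step."""
    kept = {state: prob for state, prob in p.items() if prob >= tol}
    total = sum(kept.values())
    return {state: prob / total for state, prob in kept.items()}

# states of a two-species system indexed by copy numbers (hypothetical values)
p = {(0, 1): 0.7, (1, 0): 0.3 - 1e-9, (5, 5): 1e-9}
p = adapt_support(p, tol=1e-6)   # the negligible state is dropped
```

Repeating this pruning after every time step keeps the problem size proportional to where the probability mass actually lives, which is the source of the method's efficiency.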
NASA Astrophysics Data System (ADS)
Ma, Xiang; Zabaras, Nicholas
2009-03-01
A new approach to modeling inverse problems using a Bayesian inference method is introduced. The Bayesian approach considers the unknown parameters as random variables and seeks the probabilistic distribution of the unknowns. By introducing the concept of the stochastic prior state space to the Bayesian formulation, we reformulate the deterministic forward problem as a stochastic one. The adaptive hierarchical sparse grid collocation (ASGC) method is used for constructing an interpolant to the solution of the forward model in this prior space which is large enough to capture all the variability/uncertainty in the posterior distribution of the unknown parameters. This solution can be considered as a function of the random unknowns and serves as a stochastic surrogate model for the likelihood calculation. Hierarchical Bayesian formulation is used to derive the posterior probability density function (PPDF). The spatial model is represented as a convolution of a smooth kernel and a Markov random field. The state space of the PPDF is explored using Markov chain Monte Carlo algorithms to obtain statistics of the unknowns. The likelihood calculation is performed by directly sampling the approximate stochastic solution obtained through the ASGC method. The technique is assessed on two nonlinear inverse problems: source inversion and permeability estimation in flow through porous media.
Radecki, Peter P; Farinholt, Kevin M; Park, Gyuhae; Bement, Matthew T
2008-01-01
The machining process is very important in many engineering applications. In high precision machining, surface finish is strongly correlated with vibrations and the dynamic interactions between the part and the cutting tool. Parameters affecting these vibrations and dynamic interactions, such as spindle speed, cut depth, feed rate, and the part's material properties can vary in real-time, resulting in unexpected or undesirable effects on the surface finish of the machining product. The focus of this research is the development of an improved machining process through the use of active vibration damping. The tool holder employs a high bandwidth piezoelectric actuator with an adaptive positive position feedback control algorithm for vibration and chatter suppression. In addition, instead of using external sensors, the proposed approach investigates the use of a collocated piezoelectric sensor for measuring the dynamic responses from machining processes. The performance of this method is evaluated by comparing the surface finishes obtained with active vibration control versus baseline uncontrolled cuts. Considerable improvement in surface finish (up to 50%) was observed for applications in modern day machining.
Adaptive segmentation of wavelet transform coefficients for video compression
NASA Astrophysics Data System (ADS)
Wasilewski, Piotr
2000-04-01
This paper presents a video compression algorithm suitable for inexpensive real-time hardware implementation. The algorithm utilizes the Discrete Wavelet Transform (DWT) with a new Adaptive Spatial Segmentation Algorithm (ASSA). It was designed to obtain decompressed video quality better than or similar to the H.263 recommendation and the MPEG standard at lower computational effort, especially at high compression rates. The algorithm was optimized for hardware implementation in low-cost Field Programmable Gate Array (FPGA) devices. The luminance and chrominance components of every frame are encoded with a 3-level wavelet transform using a biorthogonal filter bank. The low-frequency subimage is encoded with an ADPCM algorithm. For the high-frequency subimages, the new Adaptive Spatial Segmentation Algorithm is applied. It divides images into rectangular blocks that may overlap each other, with block width and height set independently. There are two kinds of blocks: Low Variance Blocks (LVB) and High Variance Blocks (HVB). The positions of the blocks and the values of the WT coefficients belonging to the HVB are encoded with modified zero-tree algorithms; LVB are encoded with their mean value. The obtained results show that the presented algorithm gives similar or better quality of decompressed images compared to H.263, by up to 5 dB in PSNR.
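The LVB/HVB split can be sketched as a variance test per block. The fixed, non-overlapping grid below is a simplification of the ASSA, whose blocks may overlap and have independently chosen widths and heights; the threshold value is likewise an assumption.

```python
def classify_blocks(coeffs, block, var_thresh):
    """Split a subband (2-D list of wavelet coefficients) into fixed-size
    blocks and label each as a Low Variance Block (LVB, coded by its mean)
    or a High Variance Block (HVB, coefficients coded explicitly).
    Returns (top-left position, kind, block mean) per block."""
    h, w = len(coeffs), len(coeffs[0])
    out = []
    for r in range(0, h, block):
        for c in range(0, w, block):
            vals = [coeffs[i][j]
                    for i in range(r, min(r + block, h))
                    for j in range(c, min(c + block, w))]
            mean = sum(vals) / len(vals)
            var = sum((v - mean) ** 2 for v in vals) / len(vals)
            kind = 'HVB' if var > var_thresh else 'LVB'
            out.append(((r, c), kind, mean))
    return out

subband = [[0, 0, 0, 0],
           [0, 0, 0, 0],
           [0, 0, 9, 0],
           [0, 0, 0, 0]]
blocks = classify_blocks(subband, block=2, var_thresh=1.0)
```

Only the single block containing the large coefficient is marked HVB; the three flat blocks collapse to their mean, which is where the bit savings come from.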
A wavelet packet adaptive filtering algorithm for enhancing manatee vocalizations.
Gur, M Berke; Niezrecki, Christopher
2011-04-01
Approximately a quarter of all West Indian manatee (Trichechus manatus latirostris) mortalities are attributed to collisions with watercraft. A boater warning system based on the passive acoustic detection of manatee vocalizations is one possible solution to reduce manatee-watercraft collisions. The success of such a warning system depends on effective enhancement of the vocalization signals in the presence of high levels of background noise, in particular, noise emitted from watercraft. Recent research has indicated that wavelet domain pre-processing of the noisy vocalizations is capable of significantly improving the detection ranges of passive acoustic vocalization detectors. In this paper, an adaptive denoising procedure, implemented on the wavelet packet transform coefficients obtained from the noisy vocalization signals, is investigated. The proposed denoising algorithm is shown to improve the manatee detection ranges by a factor ranging from two (minimum) to sixteen (maximum) compared to high-pass filtering alone, when evaluated using real manatee vocalization and background noise signals of varying signal-to-noise ratios (SNR). Furthermore, the proposed method is also shown to outperform a previously suggested feedback adaptive line enhancer (FALE) filter on average 3.4 dB in terms of noise suppression and 0.6 dB in terms of waveform preservation. PMID:21476661
NASA Astrophysics Data System (ADS)
Ng, Desmond; Wong, Fu Tian; Withayachumnankul, Withawat; Findlay, David; Ferguson, Bradley; Abbott, Derek
2007-12-01
In this work we investigate new feature extraction algorithms on the T-ray response of normal human bone cells and human osteosarcoma cells. One of the most promising feature extraction methods is the Discrete Wavelet Transform (DWT). However, the classification accuracy is dependent on the specific wavelet basis chosen. Adaptive wavelets circumvent this problem by gradually adapting to the signal to retain optimum discriminatory information while removing redundant information. Using adaptive wavelets, a classification accuracy of 96.88% is obtained with a quadratic Bayesian classifier based on 25 features. In addition, the potential of using rational wavelets rather than the standard dyadic wavelets in classification is explored. The advantage they have over dyadic wavelets is that they allow better adaptation of the scale factor to the signal. An accuracy of 91.15% is obtained with rational wavelets using 12 coefficients and a Support Vector Machine (SVM) as the classifier. These results highlight adaptive and rational wavelets as efficient feature extraction methods and the enormous potential of T-rays in cancer detection.
Adaptive Wavelet-Based Direct Numerical Simulations of Rayleigh-Taylor Instability
NASA Astrophysics Data System (ADS)
Reckinger, Scott J.
The compressible Rayleigh-Taylor instability (RTI) occurs when a fluid of low molar mass supports a fluid of higher molar mass against a gravity-like body force or in the presence of an accelerating front. Intrinsic to the problem are highly stratified background states, acoustic waves, and a wide range of physical scales. The objective of this thesis is to develop a specialized computational framework that addresses these challenges and to apply the advanced methodologies for direct numerical simulations of compressible RTI. Simulations are performed using the Parallel Adaptive Wavelet Collocation Method (PAWCM). Due to the physics-based adaptivity and direct error control of the method, PAWCM is ideal for resolving the wide range of scales present in RTI growth. Characteristics-based non-reflecting boundary conditions are developed for highly stratified systems to be used in conjunction with PAWCM. This combination allows for extremely long domains, which is necessary for observing the late time growth of compressible RTI. Initial conditions that minimize acoustic disturbances are also developed. The initialization is consistent with linear stability theory, where the background state consists of two diffusively mixed stratified fluids of differing molar masses. The compressibility effects on the departure from the linear growth, the onset of strong non-linear interactions, and the late-time behavior of the fluid structures are investigated. It is discovered that, for the thermal equilibrium case, the background stratification acts to suppress the instability growth when the molar mass difference is small. A reversal in this monotonic behavior is observed for large molar mass differences, where stratification enhances the bubble growth. Stratification also affects the vortex creation and the associated induced velocities. The enhancement and suppression of the RTI growth has important consequences for a detailed understanding of supernovae flame front
NASA Technical Reports Server (NTRS)
Momoh, James A.; Wang, Yanchun; Dolce, James L.
1997-01-01
This paper describes the application of neural network adaptive wavelets to fault diagnosis of the space station power system. The method combines the wavelet transform with a neural network by incorporating daughter wavelets into the weights. The wavelet transform and the neural network training procedure therefore become one stage, which avoids the complex computation of wavelet parameters and makes the procedure more straightforward. The simulation results show that the proposed method is very efficient for the identification of fault locations.
Fast Fourier and Wavelet Transforms for Wavefront Reconstruction in Adaptive Optics
Dowla, F U; Brase, J M; Olivier, S S
2000-07-28
Wavefront reconstruction techniques using least-squares estimators are computationally quite expensive. We compare wavelet and Fourier transform techniques for addressing the computational issues of wavefront reconstruction in adaptive optics. It is shown that because the Fourier approach, unlike the wavelet method, is not simply a numerical approximation technique, it might have advantages in terms of numerical accuracy. However, strictly from a numerical computation viewpoint, the wavelet approximation method might have an advantage in terms of speed. To optimize the wavelet method, a statistical study might be necessary to choose the best basis functions or "approximation tree."
Morphology analysis of EKG R waves using wavelets with adaptive parameters derived from fuzzy logic
NASA Astrophysics Data System (ADS)
Caldwell, Max A.; Barrington, William W.; Miles, Richard R.
1996-03-01
Understanding of the EKG components P, QRS (R wave), and T is essential in recognizing cardiac disorders and arrhythmias. An estimation method is presented that models the R wave component of the EKG by adaptively computing wavelet parameters using fuzzy logic. The parameters are adaptively adjusted to minimize the difference between the original EKG waveform and the wavelet. The R wave estimate is derived by minimizing a combination of mean squared error (MSE), amplitude difference, spread difference, and shift difference. We show that the MSE in both noise-free and additive-noise environments is lower using an adaptive wavelet than a static wavelet. Research to date has focused on the R wave component of the EKG signal. Extensions of this method to model P and T waves are discussed.
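The parameter adaptation can be mimicked with a grid search that minimizes the MSE over amplitude, spread, and shift. The Gaussian atom and the candidate grids below are illustrative assumptions; the paper drives the parameter updates with fuzzy logic rather than exhaustive search.

```python
import numpy as np

def fit_r_wave(ekg, spreads, shifts):
    """Grid-search sketch of the parameter adaptation: choose the
    Gaussian-shaped atom (a stand-in for the paper's wavelet) whose
    amplitude, spread, and shift minimise the MSE against the R wave.
    For each (spread, shift) pair, the optimal amplitude has a closed
    form via least squares."""
    n = np.arange(len(ekg))
    best = None
    for s in spreads:
        for mu in shifts:
            atom = np.exp(-0.5 * ((n - mu) / s) ** 2)
            amp = np.dot(atom, ekg) / np.dot(atom, atom)   # least-squares amplitude
            mse = float(np.mean((ekg - amp * atom) ** 2))
            if best is None or mse < best[0]:
                best = (mse, amp, s, mu)
    return best  # (mse, amplitude, spread, shift)

n = np.arange(100)
r_wave = 2.0 * np.exp(-0.5 * ((n - 50) / 3.0) ** 2)   # synthetic R wave
mse, amp, spread, shift = fit_r_wave(r_wave, spreads=[2, 3, 4], shifts=[48, 50, 52])
```

On this synthetic R wave the search recovers the generating parameters exactly, illustrating why an adaptive wavelet achieves lower MSE than any fixed one.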
Wavelet-Based Speech Enhancement Using Time-Adapted Noise Estimation
NASA Astrophysics Data System (ADS)
Lei, Sheau-Fang; Tung, Ying-Kai
Spectral subtraction is commonly used for speech enhancement in a single channel system because of the simplicity of its implementation. However, this algorithm introduces perceptually musical noise while suppressing the background noise. We propose a wavelet-based approach in this paper for suppressing the background noise for speech enhancement in a single channel system. The wavelet packet transform, which emulates the human auditory system, is used to decompose the noisy signal into critical bands. Wavelet thresholding is then temporally adjusted with the noise power by time-adapted noise estimation. The proposed algorithm can efficiently suppress the noise while reducing speech distortion. Experimental results, including several objective measurements, show that the proposed wavelet-based algorithm outperforms spectral subtraction and other wavelet-based denoising approaches for speech enhancement for nonstationary noise environments.
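The time-adapted noise estimate can be sketched per critical band as exponential smoothing of the frame power during noise-dominated frames, followed by soft thresholding. The smoothing rule, the update condition, and the constants are assumptions for illustration, not the paper's estimator.

```python
import numpy as np

def soft_threshold(c, t):
    """Standard soft thresholding: shrink magnitudes by t, zeroing small ones."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def denoise_band(frames, alpha=0.9, k=2.0):
    """Per-band sketch: track the noise level of one critical band frame by
    frame with exponential smoothing of the frame RMS (the time-adapted
    noise estimate), then soft-threshold each frame's coefficients at k
    times the current estimate. The estimate is only pulled down by
    quieter (noise-dominated) frames, so speech frames do not inflate it."""
    noise = float(np.mean(frames[0] ** 2)) ** 0.5   # initialise from first frame
    out = []
    for f in frames:
        power = float(np.mean(f ** 2)) ** 0.5
        if power < noise:                            # noise-only frame: adapt
            noise = alpha * noise + (1 - alpha) * power
        out.append(soft_threshold(f, k * noise))
    return out

frames = [np.full(8, 0.1), np.full(8, 0.1), np.r_[0.0, 0.0, 5.0, np.zeros(5)]]
cleaned = denoise_band(frames)
```

Low-level coefficients in noise-only frames are zeroed while the strong coefficient in the speech frame survives almost intact, which is the behaviour that avoids musical noise.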
NASA Astrophysics Data System (ADS)
Yu, Ya-Huei; Ho, Chien-Peng; Tsai, Chun-Jen
2007-12-01
Scalable video coding (SVC) has been an active research topic for the past decade. In the past, most SVC technologies were based on a coarse-granularity scalable model which puts many scalability constraints on the encoded bitstreams. As a result, the application scenario of adapting a preencoded bitstream multiple times along the distribution chain has not been seriously investigated before. In this paper, a model-based multiple-adaptation framework based on a wavelet video codec, MC-EZBC, is proposed. The proposed technology allows multiple adaptations on both the video data and the content-adaptive FEC protection codes. For multiple adaptations of video data, rate-distortion information must be embedded within the video bitstream in order to allow rate-distortion optimized operations for each adaptation. Experimental results show that the proposed method reduces the amount of side information by more than 50% on average when compared to the existing technique. It also reduces the number of iterations required to perform the tier-2 entropy coding by more than 64% on average. In addition, due to the nondiscrete nature of the rate-distortion model, the proposed framework also enables multiple adaptations of content-adaptive FEC protection scheme for more flexible error-resilient transmission of bitstreams.
NASA Astrophysics Data System (ADS)
Zhang, Yan; Tang, Baoping; Liu, Ziran; Chen, Rengxiang
2016-02-01
Fault diagnosis of rolling element bearings is important for improving mechanical system reliability and performance. Vibration signals contain a wealth of complex information useful for state monitoring and fault diagnosis. However, any fault-related impulses in the original signal are often severely tainted by various noises and the interfering vibrations caused by other machine elements. Narrow-band amplitude demodulation has been an effective technique to detect bearing faults by identifying bearing fault characteristic frequencies. To achieve this, the key step is to remove the corrupting noise and interference, and to enhance the weak signatures of the bearing fault. In this paper, a new method based on adaptive wavelet filtering and spectral subtraction is proposed for fault diagnosis in bearings. First, to eliminate the frequency associated with interfering vibrations, the vibration signal is bandpass filtered with a Morlet wavelet filter whose parameters (i.e. center frequency and bandwidth) are selected in separate steps. An alternative and efficient method of determining the center frequency is proposed that utilizes the statistical information contained in the production functions (PFs). The bandwidth parameter is optimized using a local ‘greedy’ scheme along with Shannon wavelet entropy criterion. Then, to further reduce the residual in-band noise in the filtered signal, a spectral subtraction procedure is elaborated after wavelet filtering. Instead of resorting to a reference signal as in the majority of papers in the literature, the new method estimates the power spectral density of the in-band noise from the associated PF. The effectiveness of the proposed method is validated using simulated data, test rig data, and vibration data recorded from the transmission system of a helicopter. The experimental results and comparisons with other methods indicate that the proposed method is an effective approach to detecting the fault-related impulses
Wavelet based ECG compression with adaptive thresholding and efficient coding.
Alshamali, A
2010-01-01
This paper proposes a new wavelet-based ECG compression technique. It is based on optimized thresholds to determine significant wavelet coefficients and an efficient coding for their positions. Huffman encoding is used to enhance the compression ratio. The proposed technique is tested using several records taken from the MIT-BIH arrhythmia database. Simulation results show that the proposed technique outperforms others obtained by previously published schemes. PMID:20608811
NASA Technical Reports Server (NTRS)
Jawerth, Bjoern; Sweldens, Wim
1993-01-01
We present ideas on how to use wavelets in the solution of boundary value ordinary differential equations. Rather than using classical wavelets, we adapt their construction so that they become (bi)orthogonal with respect to the inner product defined by the operator. The stiffness matrix in a Galerkin method then becomes diagonal and can thus be trivially inverted. We show how one can construct an O(N) algorithm for various constant and variable coefficient operators.
Spatially adaptive bases in wavelet-based coding of semi-regular meshes
NASA Astrophysics Data System (ADS)
Denis, Leon; Florea, Ruxandra; Munteanu, Adrian; Schelkens, Peter
2010-05-01
In this paper we present a wavelet-based coding approach for semi-regular meshes, which spatially adapts the wavelet basis employed in the wavelet transformation of the mesh. The spatially-adaptive nature of the transform requires additional information to be stored in the bit-stream in order to allow reconstruction of the transformed mesh at the decoder side. In order to limit this overhead, the mesh is first segmented into regions of approximately equal size. For each spatial region, a predictor is selected in a rate-distortion-optimal manner using a Lagrangian rate-distortion optimization technique. When compared against the classical wavelet transform employing the butterfly subdivision filter, experiments reveal that the proposed spatially-adaptive wavelet transform significantly decreases the energy of the wavelet coefficients for all subbands. Preliminary results also show that employing the proposed transform for the lowest-resolution subband systematically yields improved compression performance at low-to-medium bit-rates. For the Venus and Rabbit test models the compression improvements amount to 1.47 dB and 0.95 dB, respectively.
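Per-region predictor selection reduces to minimizing a Lagrangian cost J = D + λR. The predictor names and the (distortion, rate) numbers below are made up for illustration; in the codec, D would be the energy of the resulting wavelet coefficients and R the signalling overhead in bits.

```python
def select_predictor(predictors, lam):
    """Rate-distortion sketch: for one mesh region, pick the predictor
    minimising J = D + lam * R. `predictors` maps a predictor name to a
    (distortion, rate) pair measured for this region."""
    return min(predictors, key=lambda p: predictors[p][0] + lam * predictors[p][1])

# hypothetical measurements for one region: (distortion, rate in bits)
preds = {'butterfly': (10.0, 2.0), 'average': (12.0, 1.0), 'copy': (25.0, 0.5)}
choice = select_predictor(preds, lam=3.0)   # J: 16.0, 15.0, 26.5
```

Sweeping λ trades coefficient energy against side-information bits, which is how the overhead of the spatially-adaptive basis is kept in check.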
Yang, Zijing; Cai, Ligang; Gao, Lixin; Wang, Huaqing
2012-01-01
A least-squares method based on data fitting is proposed to construct a new lifting wavelet; together with the nonlinear idea and a redundant algorithm, the adaptive redundant lifting transform based on fitting is first stated in this paper. By varying combinations of basis function, sample number, and basis function dimension, a total of nine wavelets with different characteristics are constructed, which are respectively adopted to perform redundant lifting wavelet transforms on the low-frequency approximate signals at each layer. Then the normalized l^p norms of the new node signals obtained through decomposition are calculated to adaptively determine the optimal wavelet for the decomposed approximate signal. Next, the original signal is subjected to subsection power spectrum analysis to choose the node signal for single-branch reconstruction and demodulation. Experimental signals and engineering signals are used to verify the above method, and the results show that bearing faults can be diagnosed more effectively by the method presented here than by either spectrum analysis or demodulation analysis. Meanwhile, compared with the symmetrical wavelets constructed with the Lagrange interpolation algorithm, the asymmetrical wavelets constructed based on data fitting are more suitable for feature extraction from fault signals of roller bearings. PMID:22666035
Spatially adaptive Bayesian wavelet thresholding for speckle removal in medical ultrasound images
NASA Astrophysics Data System (ADS)
Hou, Jianhua; Xiong, Chengyi; Chen, Shaoping; He, Xiang
2007-12-01
In this paper, a novel spatially adaptive wavelet thresholding method based on the Bayesian maximum a posteriori (MAP) criterion is proposed for speckle removal in medical ultrasound (US) images. The method first applies a logarithmic transform to the original speckled ultrasound image, followed by a redundant wavelet transform. The proposed method uses a Rayleigh distribution for the speckle wavelet coefficients and a Laplacian distribution to model the statistics of the wavelet coefficients due to signal. A Bayesian estimator with an analytical formula is derived from MAP estimation, and the resulting formula is proven to be equivalent to soft thresholding in nature, which makes the algorithm very simple. In order to exploit the correlation among wavelet coefficients, the parameters of the Laplacian model are assumed to be spatially correlated and can be computed from the coefficients in a neighboring window, thus making our method spatially adaptive in the wavelet domain. Theoretical analysis and simulation results show that the proposed method can effectively suppress speckle noise in medical US images while preserving important signal features and details as much as possible.
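MAP estimators of this family reduce to soft thresholding with a closed-form threshold. The sketch below uses the well-known Gaussian-noise/Laplacian-signal form (BayesShrink-style, T = sqrt(2)·sigma_n²/sigma_s); the paper instead derives its rule with a Rayleigh speckle model, so treat this as a structural analogue, and the global (rather than windowed) estimate of sigma_s as a simplification.

```python
import numpy as np

def map_soft_threshold(coeffs, sigma_n):
    """BayesShrink-style sketch: with Gaussian noise of std sigma_n and a
    Laplacian signal prior, the MAP estimate is soft thresholding with
    T = sqrt(2) * sigma_n**2 / sigma_s. Here sigma_s is estimated once from
    all coefficients; a spatially adaptive variant would recompute it in a
    sliding neighborhood window."""
    sigma_s = np.sqrt(max(float(np.mean(coeffs ** 2)) - sigma_n ** 2, 1e-12))
    t = np.sqrt(2.0) * sigma_n ** 2 / sigma_s
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)

c = np.array([5.0, 0.01, -4.0, 0.02])
shrunk = map_soft_threshold(c, sigma_n=0.5)
```

Note how the threshold grows when the signal is weak (small sigma_s) and shrinks when it is strong, which is exactly the adaptivity the MAP derivation buys over a fixed threshold.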
A wavelet approach to binary blackholes with asynchronous multitasking
NASA Astrophysics Data System (ADS)
Lim, Hyun; Hirschmann, Eric; Neilsen, David; Anderson, Matthew; Debuhr, Jackson; Zhang, Bo
2016-03-01
Highly accurate simulations of binary black holes and neutron stars are needed to address a variety of interesting problems in relativistic astrophysics. We present a new method for solving the Einstein equations (BSSN formulation) using iterated interpolating wavelets. Wavelet coefficients provide a direct measure of the local approximation error for the solution and place collocation points that naturally adapt to features of the solution. Further, they exhibit exponential convergence on unevenly spaced collocation points. The parallel implementation of the wavelet simulation framework presented here deviates from conventional practice in combining multi-threading with a form of message-driven computation sometimes referred to as asynchronous multitasking.
Compression of the electrocardiogram (ECG) using an adaptive orthonormal wavelet basis architecture
NASA Astrophysics Data System (ADS)
Anandkumar, Janavikulam; Szu, Harold H.
1995-04-01
This paper deals with the compression of electrocardiogram (ECG) signals using a large library of orthonormal basis functions that are translated and dilated versions of Daubechies wavelets. The wavelet transform has been implemented using quadrature mirror filters (QMF) employed in a sub-band coding scheme. Interesting transients and notable frequencies of the ECG are captured by appropriately scaled waveforms chosen in parallel from this collection of wavelets. Since there is a choice of orthonormal basis functions for the efficient transcription of the ECG, the best one can be selected by various criteria. We have imposed very stringent threshold conditions on the wavelet expansion coefficients, such as maintaining a very large percentage of the energy of the current signal segment, and this has resulted in reconstructed waveforms with negligible distortion relative to the source signal. Even without the use of any specialized quantizers and encoders, the compression numbers are encouraging, with preliminary results indicating compression ratios ranging from 40:1 to 15:1 at percentage rms distortions ranging from about 22% to 2.3%, respectively. Irrespective of the ECG lead chosen, or of signal deviations that may occur due to noise or arrhythmias, only the one wavelet family that correlates best with that particular portion of the signal is chosen. The compression is possible mainly because the chosen mother wavelet and its variations match the shape of the ECG and are able to transcribe the source efficiently with few wavelet coefficients. The adaptive template matching architecture that carries out a parallel search of the transform domain is described, and preliminary simulation results are discussed. The adaptivity of the architecture comes from fine tuning of the wavelet selection process based on localized constraints, such as the shape of the signal and its energy.
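The energy-retention thresholding and the distortion figures quoted above can be illustrated with a small sketch (not the paper's codec): keep the largest transform coefficients until a target energy fraction is reached, and score the reconstruction by percentage rms distortion (PRD).

```python
import numpy as np

def compress_keep_energy(coeffs, energy_frac=0.99):
    """Zero the smallest coefficients while retaining the requested fraction
    of total energy; returns compressed coefficients and the count kept."""
    c = np.asarray(coeffs, dtype=float)
    order = np.argsort(np.abs(c))[::-1]               # largest first
    energy = np.cumsum(c[order] ** 2) / np.sum(c ** 2)
    k = int(np.searchsorted(energy, energy_frac) + 1)
    out = np.zeros_like(c)
    out[order[:k]] = c[order[:k]]
    return out, k

def prd(original, reconstructed):
    """Percentage rms distortion between source and reconstruction."""
    x = np.asarray(original, float)
    y = np.asarray(reconstructed, float)
    return 100.0 * np.sqrt(((x - y) ** 2).sum() / (x ** 2).sum())
```

With an orthonormal wavelet basis, energy retained in the coefficients equals energy retained in the reconstructed signal, which is why the threshold can be set in the transform domain.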
Serial identification of EEG patterns using adaptive wavelet-based analysis
NASA Astrophysics Data System (ADS)
Nazimov, A. I.; Pavlov, A. N.; Nazimova, A. A.; Grubov, V. V.; Koronovskii, A. A.; Sitnikova, E.; Hramov, A. E.
2013-10-01
The problem of recognizing specific oscillatory patterns in electroencephalograms (EEG) with the continuous wavelet transform is discussed. Aiming to improve the abilities of wavelet-based tools, we propose a serial adaptive method for sequential identification of EEG patterns such as sleep spindles and spike-wave discharges. This method provides an optimal selection of parameters based on objective functions and enables extraction of the most informative features of the recognized structures. Different ways of increasing the quality of pattern recognition within the proposed serial adaptive technique are considered.
NASA Technical Reports Server (NTRS)
Sjoegreen, B.; Yee, H. C.
2001-01-01
The recently developed essentially fourth-order or higher low-dissipative shock-capturing scheme of Yee, Sandham and Djomehri (1999) aimed at minimizing numerical dissipation for high-speed compressible viscous flows containing shocks, shears and turbulence. To detect non-smooth behavior and control the amount of numerical dissipation to be added, Yee et al. employed the artificial compression method (ACM) of Harten (1978), but utilized it in an entirely different context than Harten originally intended. The ACM sensor consists of two tuning parameters and is highly dependent on the physical problem. To minimize parameter tuning and problem dependence, new sensors with improved detection properties are proposed. The new sensors are derived from appropriate non-orthogonal wavelet basis functions, and they can be used to switch off the extra numerical dissipation outside shock layers. The non-dissipative spatial base scheme of arbitrarily high order of accuracy can be maintained without compromising its stability in all parts of the domain where the solution is smooth. Two types of redundant non-orthogonal wavelet basis functions are considered. One is the B-spline wavelet (Mallat & Zhong 1992) used by Gerritsen and Olsson (1996) in an adaptive mesh refinement method to determine regions where refinement should be done. The other is a modification of the multiresolution method of Harten (1995), converted to a new, redundant, non-orthogonal wavelet. The wavelet sensor is then obtained by computing the estimated Lipschitz exponent of a chosen physical quantity (or vector) on a chosen wavelet basis function. Both wavelet sensors can be viewed as dual-purpose adaptive methods leading to dynamic numerical dissipation control and improved grid adaptation indicators. Consequently, they are useful not only for shock-turbulence computations but also for computational aeroacoustics and numerical combustion.
Mouse EEG spike detection based on the adapted continuous wavelet transform
NASA Astrophysics Data System (ADS)
Tieng, Quang M.; Kharatishvili, Irina; Chen, Min; Reutens, David C.
2016-04-01
Objective. Electroencephalography (EEG) is an important tool in the diagnosis of epilepsy. Interictal spikes on EEG are used to monitor the development of epilepsy and the effects of drug therapy. EEG recordings are generally long and the data voluminous. Thus developing a sensitive and reliable automated algorithm for analyzing EEG data is necessary. Approach. A new algorithm for detecting and classifying interictal spikes in mouse EEG recordings is proposed, based on the adapted continuous wavelet transform (CWT). The construction of the adapted mother wavelet is founded on a template obtained from a sample comprising the first few minutes of an EEG data set. Main Result. The algorithm was tested with EEG data from a mouse model of epilepsy and experimental results showed that the algorithm could distinguish EEG spikes from other transient waveforms with a high degree of sensitivity and specificity. Significance. Differing from existing approaches, the proposed approach combines wavelet denoising, to isolate transient signals, with adapted CWT-based template matching, to detect true interictal spikes. Using the adapted wavelet constructed from a predefined template, the adapted CWT is calculated on small EEG segments to fit dynamical changes in the EEG recording.
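The adapted-wavelet idea in the abstract, namely building the analyzing waveform from a recorded template and then matching it against the signal, can be sketched as follows. This is an illustration under simplifying assumptions (single scale, plain cross-correlation, a hypothetical threshold), not the authors' full adapted-CWT pipeline with denoising.

```python
import numpy as np

def adapt_wavelet_from_template(template):
    """Turn a recorded spike template into an admissible analyzing waveform:
    zero mean (admissibility condition) and unit energy."""
    w = np.asarray(template, dtype=float)
    w = w - w.mean()
    return w / np.linalg.norm(w)

def detect_spikes(signal, wavelet, threshold):
    """Correlate the signal with the adapted wavelet; indices where the
    response magnitude exceeds the threshold are candidate spikes."""
    response = np.correlate(signal, wavelet, mode="same")
    return np.where(np.abs(response) > threshold)[0]
```

Because the template is drawn from the same recording, the matched waveform responds strongly to true interictal spikes and weakly to transients of a different shape.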
Zhang, Guannan; Webster, Clayton G; Gunzburger, Max D
2012-09-01
Although Bayesian analysis has become vital to the quantification of prediction uncertainty in groundwater modeling, its application has been hindered by the computational cost associated with the numerous model executions needed for exploring the posterior probability density function (PPDF) of model parameters. This is particularly the case when the PPDF is estimated using Markov Chain Monte Carlo (MCMC) sampling. In this study, we develop a new approach that improves the computational efficiency of Bayesian inference by constructing a surrogate system based on an adaptive sparse-grid high-order stochastic collocation (aSG-hSC) method. Unlike previous works using a first-order hierarchical basis, we utilize a compactly supported higher-order hierarchical basis to construct the surrogate system, resulting in a significant reduction in the number of computational simulations required. In addition, we use the hierarchical surplus as an error indicator to determine adaptive sparse grids. This allows local refinement in the uncertain domain and/or anisotropic detection with respect to the random model parameters, which further improves computational efficiency. Finally, we incorporate a global optimization technique and propose an iterative algorithm for building the surrogate system for a PPDF with multiple significant modes. Once the surrogate system is determined, the PPDF can be evaluated by sampling the surrogate system directly at very little computational cost. The developed method is evaluated first using a simple analytical density function with multiple modes and then using two synthetic groundwater reactive transport models. The groundwater models represent different levels of complexity; the first example involves coupled linear reactions and the second example simulates nonlinear uranium surface complexation. The results show that the aSG-hSC method is an effective and efficient tool for Bayesian inference in groundwater modeling in comparison with conventional MCMC sampling.
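The hierarchical-surplus error indicator is easiest to see in one dimension: the surplus at a new grid point is the difference between the function and the current interpolant there, and only points with a large surplus spawn children. The sketch below uses first-order hat functions on [0, 1] (the paper uses a compactly supported higher-order basis and multiple dimensions); the function names and tolerances are illustrative assumptions.

```python
import numpy as np

def hat(x, center, h):
    """First-order hierarchical hat basis supported on [center-h, center+h]."""
    return np.maximum(1.0 - np.abs(x - center) / h, 0.0)

def adaptive_interpolate(f, max_level=10, tol=1e-3):
    """1D adaptive hierarchical interpolation on [0,1] (f must vanish at the
    boundary): the surplus at each candidate point drives local refinement."""
    nodes = []                                   # (center, support, surplus)
    def interp(x):
        return sum(s * hat(x, c, h) for c, h, s in nodes)
    active = [(0.5, 0.5)]                        # the level-1 node
    for _ in range(max_level):
        next_active = []
        for c, h in active:
            s = f(c) - interp(c)                 # hierarchical surplus
            nodes.append((c, h, s))
            if abs(s) > tol:                     # refine only where needed
                next_active += [(c - h / 2, h / 2), (c + h / 2, h / 2)]
        if not next_active:
            break
        active = next_active
    return interp, len(nodes)
```

Once built, `interp` plays the role of the surrogate: it can be evaluated cheaply in place of the expensive model.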
Non-parametric transient classification using adaptive wavelets
NASA Astrophysics Data System (ADS)
Varughese, Melvin M.; von Sachs, Rainer; Stephanou, Michael; Bassett, Bruce A.
2015-11-01
Classifying transients based on multiband light curves is a challenging but crucial problem in the era of GAIA and the Large Synoptic Survey Telescope, since the sheer volume of transients will make spectroscopic classification unfeasible. We present a non-parametric classifier that predicts the transient's class given training data. It implements two novel components: the use of the BAGIDIS wavelet methodology - a characterization of functional data using hierarchical wavelet coefficients - as well as the introduction of a ranked probability classifier on the wavelet coefficients that handles both the heteroscedasticity of the data and the potential non-representativity of the training set. The classifier is simple to implement, while a major advantage of the BAGIDIS wavelets is that they are translation invariant. Hence, BAGIDIS does not need the light curves to be aligned to extract features. Further, BAGIDIS is non-parametric, so it can be used effectively in blind searches for new objects. We demonstrate the effectiveness of our classifier on the Supernova Photometric Classification Challenge, classifying supernova light curves as Type Ia or non-Ia. We train our classifier on the spectroscopically confirmed subsample (which is not representative) and show that it works well for supernovae with observed light-curve time spans greater than 100 d (roughly 55 per cent of the data set). For such data, we obtain a Ia efficiency of 80.5 per cent and a purity of 82.4 per cent, yielding a highly competitive challenge score of 0.49. This indicates that our `model-blind' approach may be particularly suitable for the general classification of astronomical transients in the era of large synoptic sky surveys.
An image adaptive, wavelet-based watermarking of digital images
NASA Astrophysics Data System (ADS)
Agreste, Santa; Andaloro, Guido; Prestipino, Daniela; Puccio, Luigia
2007-12-01
In digital management, multimedia content and data can easily be used in an illegal way - copied, modified and redistributed. Copyright protection, intellectual and material rights protection for authors, owners, buyers and distributors, and the authenticity of content are crucial factors in solving an urgent and real problem. In such a scenario, digital watermarking techniques are emerging as a valid solution. In this paper, we describe an algorithm - called WM2.0 - for an invisible watermark: private, strong, wavelet-based, and developed for the protection and authentication of digital images. The use of the discrete wavelet transform (DWT) is motivated by its good time-frequency features and its good match with human visual system directives. These two combined elements are important in building an invisible and robust watermark. WM2.0 works on a dual scheme: watermark embedding and watermark detection. The watermark is embedded into high-frequency DWT components of a specific sub-image, and it is calculated in correlation with the image features and statistical properties. Watermark detection applies a re-synchronization between the original and watermarked image. The correlation between the watermarked DWT coefficients and the watermark signal is calculated according to the Neyman-Pearson statistical criterion. Experimentation on a large set of different images has shown the algorithm to be resistant against geometric, filtering and StirMark attacks, with a low false-alarm rate.
Isotropic boundary adapted wavelets for coherent vorticity extraction in turbulent channel flows
NASA Astrophysics Data System (ADS)
Farge, Marie; Sakurai, Teluo; Yoshimatsu, Katsunori; Schneider, Kai; Morishita, Koji; Ishihara, Takashi
2015-11-01
We present a construction of isotropic boundary adapted wavelets, which are orthogonal and yield a multi-resolution analysis. We analyze DNS data of turbulent channel flow computed at a friction-velocity based Reynolds number of 395 and investigate the role of coherent vorticity. Thresholding of the wavelet coefficients allows the flow to be split into two parts, coherent and incoherent vorticity. The statistics of the former, i.e., energy and enstrophy spectra, are close to those of the total flow, and moreover the nonlinear energy budgets are well preserved. The remaining incoherent part, represented by the large majority of the weak wavelet coefficients, corresponds to a structureless, i.e., noise-like, background flow and exhibits an almost equi-distribution of energy.
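The coherent/incoherent split can be illustrated in one dimension with an ordinary orthogonal Haar transform and the universal threshold (the paper uses isotropic boundary adapted wavelets in 3D; the wavelet, dimension, and threshold choice here are simplifying assumptions).

```python
import numpy as np

def haar_fwd(x):
    """One-level orthogonal Haar transform (length must be even)."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # averages
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # details
    return a, d

def haar_inv(a, d):
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def coherent_split(omega, sigma):
    """Split a 1D vorticity sample into a coherent part (few strong detail
    coefficients) and an incoherent, noise-like remainder."""
    a, d = haar_fwd(omega)
    t = sigma * np.sqrt(2.0 * np.log(omega.size))   # universal threshold
    d_coh = np.where(np.abs(d) > t, d, 0.0)
    coherent = haar_inv(a, d_coh)
    return coherent, omega - coherent
```

By construction the two parts sum exactly to the original field, mirroring the decomposition used in coherent vorticity extraction.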
Adaptive inpainting algorithm based on DCT induced wavelet regularization.
Li, Yan-Ran; Shen, Lixin; Suter, Bruce W
2013-02-01
In this paper, we propose an image inpainting optimization model whose objective function is a smoothed l1 norm of the weighted nondecimated discrete cosine transform (DCT) coefficients of the underlying image. By identifying the objective function of the proposed model as a sum of a differentiable term and a nondifferentiable term, we present a basic algorithm inspired by Beck and Teboulle's recent work on the model. Based on this basic algorithm, we propose an automatic way to determine the weights involved in the model and update them in each iteration. The DCT as an orthogonal transform is used in various applications. We view the rows of a DCT matrix as the filters associated with a multiresolution analysis. Nondecimated wavelet transforms with these filters are explored in order to analyze the images to be inpainted. Our numerical experiments verify that under the proposed framework, the filters from a DCT matrix demonstrate promise for the task of image inpainting. PMID:23060331
Koch, M.
1995-12-31
A new mesh-adaptive 1D collocation technique has been developed to efficiently solve transient advection-dominated transport problems in porous media that are governed by a hyperbolic/parabolic (singularly perturbed) PDE. After spatial discretization, a singularly perturbed ODE is obtained which is solved by a modification of the COLNEW ODE-collocation code. The latter also contains an adaptive mesh procedure that has been enhanced here to resolve linear and nonlinear transport flow problems with steep fronts where regular FD and FE methods often fail. An implicit first-order backward Euler and a third-order Taylor-Donea technique are employed for the time integration. Numerical simulations of a variety of high Peclet-number transport phenomena as they occur in realistic porous media flow situations are presented. Examples include classical linear advection-diffusion, nonlinear adsorption, two-phase Buckley-Leverett flow without and with capillary forces (Rapoport-Leas equation) and Burgers' equation for inviscid fluid flow. In most of these examples sharp fronts and/or shocks develop which are resolved in an oscillation-free manner by the present adaptive collocation method. The backward Euler method exhibits some numerical dissipation when the time steps are too large. The third-order Taylor-Donea technique is less dissipative but is more prone to numerical oscillations. The simulations show that for the efficient solution of nonlinear singularly perturbed PDEs governing flow transport, a careful balance must be struck between the optimal mesh adaptation, the nonlinear iteration method and the time-stepping procedure. More theoretical research is needed in this regard.
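The backward Euler behavior noted above (unconditional stability, but dissipation at large time steps) is easy to demonstrate on the linear advection-diffusion model problem. The sketch below uses a periodic finite-difference grid with upwind advection rather than the paper's COLNEW collocation discretization; it is an illustration of the time integrator only.

```python
import numpy as np

def backward_euler_step(u, dt, dx, v, D):
    """One implicit (backward Euler) step for u_t + v u_x = D u_xx on a
    periodic 1D grid: solve (I + dt*A) u_new = u_old, upwind advection."""
    n = u.size
    A = np.zeros((n, n))
    for i in range(n):
        im, ip = (i - 1) % n, (i + 1) % n
        A[i, i] += v / dx + 2.0 * D / dx ** 2   # upwind + diffusion diagonal
        A[i, im] += -v / dx - D / dx ** 2
        A[i, ip] += -D / dx ** 2
    return np.linalg.solve(np.eye(n) + dt * A, u)
```

The scheme conserves total mass for any `dt` and never creates new extrema, which is exactly the oscillation-free (but dissipative) behavior the abstract describes for large time steps.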
Multi-focus image fusion algorithm based on adaptive PCNN and wavelet transform
NASA Astrophysics Data System (ADS)
Wu, Zhi-guo; Wang, Ming-jia; Han, Guang-liang
2011-08-01
Being an efficient method of information fusion, image fusion has been used in many fields such as machine vision, medical diagnosis, military applications and remote sensing. In this paper, the Pulse Coupled Neural Network (PCNN) is introduced into this research field for its interesting properties in image processing, including segmentation and target recognition, and a novel multi-focus image fusion algorithm based on PCNN and the wavelet transform is proposed. First, the two original images are decomposed by the wavelet transform. Then, a PCNN-based fusion rule in the wavelet domain is given. The algorithm uses the wavelet coefficient in each frequency band as the linking strength, so that its value is chosen adaptively. The wavelet coefficients are mapped to the image gray-scale range, and the output threshold function attenuates toward the minimum gray level over time until every pixel fires. The PCNN output at each iteration therefore consists of the wavelet coefficients that cross the threshold at that time, so the sequence of firing times of the neurons indicates whether the features at the corresponding coefficients are salient or not. Mapping each neuron's firing time back to the image gray-scale range yields a firing-time map. The fusion coefficients are decided by a compare-select operator on the firing-time gradient maps, and the fused image is reconstructed by the inverse wavelet transform. Furthermore, in order to sufficiently reflect the order of the firing times, the threshold-adjusting constant αΘ is estimated from the appointed iteration number, so that every wavelet coefficient has been activated once the iterations complete. Experiments on multi-focus images verify the effectiveness of the proposed rules.
Wavelet-based acoustic emission detection method with adaptive thresholding
NASA Astrophysics Data System (ADS)
Menon, Sunil; Schoess, Jeffrey N.; Hamza, Rida; Busch, Darryl
2000-06-01
Reductions in Navy maintenance budgets and available personnel have dictated the need to transition from time-based to 'condition-based' maintenance. Achieving this will require new enabling diagnostic technologies. One such technology, the use of acoustic emission for the early detection of helicopter rotor head dynamic component faults, has been investigated by Honeywell Technology Center for its rotor acoustic monitoring system (RAMS). This ambitious, 38-month, proof-of-concept effort, which was a part of the Naval Surface Warfare Center Air Vehicle Diagnostics System program, culminated in a successful three-week flight test of the RAMS system at Patuxent River Flight Test Center in September 1997. The flight test results demonstrated that stress-wave acoustic emission technology can detect signals equivalent to small fatigue cracks in rotor head components and can do so across the rotating articulated rotor head joints and in the presence of other background acoustic noise generated during flight operation. This paper presents the results of stress wave data analysis of the flight-test dataset using wavelet-based techniques to assess background operational noise vs. machinery failure detection results.
A new method for beam-damage-diagnosis using adaptive fuzzy neural structure and wavelet analysis
NASA Astrophysics Data System (ADS)
Nguyen, Sy Dzung; Ngo, Kieu Nhi; Tran, Quang Thinh; Choi, Seung-Bok
2013-08-01
In this work, we present a new beam-damage-locating (BDL) method based on an algorithm that combines an adaptive fuzzy neural structure (AFNS) and an average quantity solution to wavelet transform coefficients (AQWTC) of the beam vibration signal. The AFNS is used to remember the undamaged-beam dynamic properties, while the AQWTC is used for signal analysis. First, the beam is divided into elements and excited into vibration. The vibration signal at each element, which in this work is displacement, is measured, filtered, and transformed into a wavelet signal over a chosen scale sheet to calculate the corresponding difference in AQWTC between two cases: the undamaged state and the state at the inspection time. A database of these differences is then used to find the elements exhibiting anomalous features in the wavelet quantitative analysis, which directly represent signs of beam damage. The effectiveness of the proposed approach, which combines fuzzy neural structure and wavelet transform methods, is demonstrated by experiments on data sets measured from a vibrating beam-type steel frame structure.
NASA Astrophysics Data System (ADS)
Palaniswamy, Sumithra; Duraisamy, Prakash; Alam, Mohammad Showkat; Yuan, Xiaohui
2012-04-01
Automatic speech processing systems are widely used in everyday life, for example in mobile communication, speech and speaker recognition, and assistance for the hearing impaired. In speech communication systems, the quality and intelligibility of speech are of utmost importance for ease and accuracy of information exchange. To obtain a speech signal that is intelligible and more pleasant to listen to, noise reduction is essential. In this paper a new Time Adaptive Discrete Bionic Wavelet Thresholding (TADBWT) scheme is proposed. The proposed technique uses the Daubechies mother wavelet to achieve better enhancement of speech corrupted by additive non-stationary noises that occur in real life, such as street noise and factory noise. Due to the integration of a human auditory system model into the wavelet transform, the bionic wavelet transform (BWT) has great potential for speech enhancement and may open a new path in speech processing. In the proposed technique, the discrete BWT is first applied to noisy speech to derive TADBWT coefficients. Then the adaptive nature of the BWT is captured by introducing a time-varying linear factor which updates the coefficients at each scale over time. This approach has shown better performance than existing algorithms at lower input SNR, due to modified soft level-dependent thresholding of the time-adaptive coefficients. Objective and subjective test results confirmed the competency of the TADBWT technique. The effectiveness of the proposed technique is also evaluated on a speaker recognition task in a noisy environment. The recognition results show that the TADBWT technique yields better performance than alternative methods, specifically at lower input SNR.
An adaptive wavelet-based deblocking algorithm for MPEG-4 codec
NASA Astrophysics Data System (ADS)
Truong, Trieu-Kien; Chen, Shi-Huang; Jhang, Rong-Yi
2005-08-01
This paper proposes an adaptive wavelet-based deblocking algorithm for the MPEG-4 video coding standard. The novelty of this method is that the deblocking filter uses a wavelet-based threshold to detect and analyze artifacts on coded block boundaries. This threshold value is based on the difference between the wavelet transform coefficients of image blocks and the coefficients of the entire image. Therefore, the threshold value is adaptive to different images and to the characteristics of the blocking artifacts. One can then attenuate those artifacts by applying a filter selected according to the threshold value. It is shown in this paper that the proposed method is robust, fast, and works remarkably well for MPEG-4 codecs at low bit rates. Another advantage of the new method is that it retains sharp features in the decoded frames, since it only removes artifacts. Experimental results show that the proposed method can achieve significantly improved visual quality and increase the PSNR of the decoded video frames.
Adaptive Threshold Neural Spike Detector Using Stationary Wavelet Transform in CMOS.
Yang, Yuning; Boling, C Sam; Kamboh, Awais M; Mason, Andrew J
2015-11-01
Spike detection is an essential first step in the analysis of neural recordings. Detection at the frontend eases the bandwidth requirement for wireless data transfer of multichannel recordings to extra-cranial processing units. In this work, a low-power digital integrated spike detector based on the lifting stationary wavelet transform is presented. By monitoring the standard deviation of the wavelet coefficients, the proposed detector can adaptively set a threshold value online for each channel independently, without requiring user intervention. A prototype 16-channel spike detector was designed and tested in an FPGA. The method enables spike detection with nearly 90% accuracy even when the signal-to-noise ratio is as low as 2. The design was mapped to 130 nm CMOS technology and shown to occupy 0.014 mm² of area and dissipate 1.7 μW of power per channel, making it suitable for implantable multichannel neural recording systems. PMID:25955990
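The per-channel adaptive threshold idea can be sketched in software as follows. This is not the hardware design: the single-level Haar-style detail step, the robust spread estimate, and the multiplier `k` are illustrative assumptions standing in for the lifting stationary wavelet transform and its coefficient statistics.

```python
import numpy as np

def lifting_haar_detail(x):
    """Detail coefficients of a one-level lifting-style Haar step."""
    return x[1::2] - x[0::2]

def adaptive_spike_threshold(x, k=4.0):
    """Threshold set from the spread of wavelet detail coefficients; the
    median-based estimate keeps large spikes from inflating it."""
    d = lifting_haar_detail(np.asarray(x, float))
    sigma = np.median(np.abs(d)) / 0.6745
    return k * sigma

def detect(x, k=4.0):
    """Indices whose deviation from the channel mean exceeds the threshold."""
    x = np.asarray(x, float)
    t = adaptive_spike_threshold(x, k)
    return np.where(np.abs(x - x.mean()) > t)[0]
```

Because each channel computes its own statistics, no user-supplied threshold is needed, matching the online per-channel adaptation described above.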
NASA Astrophysics Data System (ADS)
Fernandez, Sergio; Gdeisat, Munther A.; Salvi, Joaquim; Burton, David
2011-06-01
Fringe pattern analysis in coded structured light constitutes an active field of research. Techniques based on first projecting a sinusoidal pattern and then recovering the phase deviation permit computation of the phase map and its corresponding depth map, leading to a dense acquisition of the measured object. Among these techniques, the ones based on time-frequency analysis permit extraction of the depth map from a single image, and thus have potential applications in measuring moving objects. The main techniques are the Fourier Transform (FT), the Windowed Fourier Transform (WFT) and the Wavelet Transform (WT). This paper first analyzes the pros and cons of these three techniques; then a new algorithm for the automatic selection of the window size in WFT is proposed. This algorithm is compared to the traditional WT using adapted mother wavelet signals, on both simulated and real objects, showing the performance of the new method in quantitative and qualitative evaluations.
Design of adaptive fuzzy wavelet neural sliding mode controller for uncertain nonlinear systems.
Shahriari kahkeshi, Maryam; Sheikholeslam, Farid; Zekri, Maryam
2013-05-01
This paper proposes a novel adaptive fuzzy wavelet neural sliding mode controller (AFWN-SMC) for a class of uncertain nonlinear systems. The main contribution of this paper is the design of a smooth sliding mode control (SMC) for a class of high-order nonlinear systems whose structure is unknown and for which no prior knowledge about the uncertainty is available. The proposed scheme is composed of an Adaptive Fuzzy Wavelet Neural Controller (AFWNC), which constructs the equivalent control term, and an Adaptive Proportional-Integral (A-PI) controller, which implements the switching term to provide a smooth control input. Asymptotic stability of the closed-loop system is guaranteed using the Lyapunov direct method. To show the efficiency of the proposed scheme, some numerical examples are provided. To validate the results obtained by the proposed approach, other methods are adopted from the literature and applied for comparison. Simulation results show the superiority and capability of the proposed controller in improving steady-state performance and transient response specifications while using fewer fuzzy rules and online adaptive parameters than other methods. Furthermore, the control effort is considerably decreased and the chattering phenomenon is completely removed. PMID:23453235
Lemeshewsky, G.P.
2002-01-01
Enhanced false color images from mid-IR, near-IR (NIR), and visible bands of the Landsat thematic mapper (TM) are commonly used for visually interpreting land cover type. Described here is a technique for sharpening or fusion of NIR with higher resolution panchromatic (Pan) that uses a shift-invariant implementation of the discrete wavelet transform (SIDWT) and a reported pixel-based selection rule to combine coefficients. There can be contrast reversals (e.g., at soil-vegetation boundaries between NIR and visible band images) and consequently degraded sharpening and edge artifacts. To improve performance for these conditions, I used a local area-based correlation technique originally reported for comparing image-pyramid-derived edges for the adaptive processing of wavelet-derived edge data. Also, using the redundant data of the SIDWT improves edge data generation. There is additional improvement because sharpened subband imagery is used with the edge-correlation process. A reported technique for sharpening three-band spectral imagery used forward and inverse intensity, hue, and saturation transforms and wavelet-based sharpening of intensity. This technique had limitations with opposite contrast data, and in this study sharpening was applied to single-band multispectral-Pan image pairs. Sharpening used simulated 30-m NIR imagery produced by degrading the spatial resolution of a higher resolution reference. Performance, evaluated by comparison between sharpened and reference image, was improved when sharpened subband data were used with the edge correlation.
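The pixel-based selection rule and the local-correlation refinement described above can be sketched as follows. This is a simplified stand-in for the SIDWT-based fusion: `fuse_select` is the standard choose-max-magnitude rule, and `fuse_with_correlation` switches between averaging and selection depending on a precomputed local correlation map (the threshold 0.5 is an illustrative assumption).

```python
import numpy as np

def fuse_select(coef_a, coef_b):
    """Pixel-based selection rule: at each position keep the coefficient
    with the larger magnitude (the stronger local edge response)."""
    pick_a = np.abs(coef_a) >= np.abs(coef_b)
    return np.where(pick_a, coef_a, coef_b)

def fuse_with_correlation(coef_a, coef_b, corr, corr_min=0.5):
    """Area-based refinement: where the local correlation between the two
    bands is high, average; where it is low (e.g. a contrast reversal at a
    soil-vegetation boundary), fall back to selection."""
    averaged = 0.5 * (coef_a + coef_b)
    return np.where(corr >= corr_min, averaged, fuse_select(coef_a, coef_b))
```

Averaging where the bands agree suppresses noise, while selecting where they disagree preserves opposite-contrast edges, which is the failure mode the correlation test is meant to handle.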
A wavelet-optimized, very high order adaptive grid and order numerical method
NASA Technical Reports Server (NTRS)
Jameson, Leland
1996-01-01
Differencing operators of arbitrarily high order can be constructed by interpolating a polynomial through a set of data, differentiating this polynomial, and finally evaluating the polynomial at the point where a derivative approximation is desired. Furthermore, the interpolating polynomial can be constructed from algebraic, trigonometric, or perhaps exponential polynomials. This paper begins with a comparison of such differencing operator constructions. Next, the issue of proper grids for high-order polynomials is addressed. Finally, an adaptive numerical method is introduced which adapts the numerical grid and the order of the differencing operator depending on the data. The numerical grid adaptation is performed on a Chebyshev grid. That is, at each level of refinement the grid is a Chebyshev grid, and this grid is refined locally based on wavelet analysis.
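The interpolate-differentiate-evaluate construction in the first sentence can be sketched directly, here with algebraic polynomials on a Chebyshev-Gauss-Lobatto grid (the grid the paper's adaptation works on). The helper names are illustrative, not from the paper.

```python
import numpy as np

def chebyshev_grid(n):
    """Chebyshev-Gauss-Lobatto points on [-1, 1] (n+1 points)."""
    return np.cos(np.pi * np.arange(n + 1) / n)

def derivative_at(xs, ys, x0):
    """Differencing operator of order len(xs)-1: interpolate a polynomial
    through the data, differentiate it, evaluate the derivative at x0."""
    p = np.polyfit(xs, ys, len(xs) - 1)
    return np.polyval(np.polyder(p), x0)
```

Clustering the nodes toward the endpoints, as the Chebyshev grid does, is what keeps high-degree interpolation stable; the same construction on equispaced points suffers from the Runge phenomenon.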
NASA Astrophysics Data System (ADS)
Xie, Hua; Bosshard, John C.; Hill, Jason E.; Wright, Steven M.; Mitra, Sunanda
2016-03-01
Magnetic Resonance Imaging (MRI) offers noninvasive, high-resolution, high-contrast cross-sectional anatomic images through the body. The data of conventional MRI are collected in the spatial-frequency (Fourier) domain, also known as k-space. Because there is still a great need to improve the temporal resolution of MRI, Compressed Sensing (CS) has been proposed for MR imaging to exploit the sparsity of MR images; it shows great potential to reduce scan time significantly, but poses its own unique problems. This paper revisits wavelet-encoded MR imaging, which replaces phase encoding in conventional MRI data acquisition with wavelet encoding, by applying wavelet-shaped spatially selective radiofrequency (RF) excitation while keeping the readout direction as frequency encoding. The practicality of wavelet-encoded MRI by itself is limited due to SNR penalties and poor time resolution compared to conventional Fourier-based MRI. To compensate for these disadvantages, this paper first introduces an undersampling scheme, named the significance map, for sparse wavelet-encoded k-space, which speeds up data acquisition and allows for various adaptive imaging strategies. The proposed adaptive wavelet-encoded undersampling scheme does not require prior knowledge of the subject to be scanned. Multiband (MB) parallel imaging is also incorporated with wavelet-encoded MRI, by exciting multiple regions simultaneously, for further reduction in scan time desirable for medical applications. Simulation and experimental results are presented, showing the feasibility of the proposed approach in further reducing the redundancy of the wavelet k-space data while maintaining relatively high quality.
Adaptive wavelet simulation of global ocean dynamics using a new Brinkman volume penalization
NASA Astrophysics Data System (ADS)
Kevlahan, N. K.-R.; Dubos, T.; Aechtner, M.
2015-12-01
In order to easily enforce solid-wall boundary conditions in the presence of complex coastlines, we propose a new mass and energy conserving Brinkman penalization for the rotating shallow water equations. This penalization does not lead to higher wave speeds in the solid region. The error estimates for the penalization are derived analytically and verified numerically for linearized one-dimensional equations. The penalization is implemented in a conservative dynamically adaptive wavelet method for the rotating shallow water equations on the sphere with bathymetry and coastline data from NOAA's ETOPO1 database. This code could form the dynamical core for a future global ocean model. The potential of the dynamically adaptive ocean model is illustrated by using it to simulate the 2004 Indonesian tsunami and wind-driven gyres.
From wavelets to adaptive approximations: time-frequency parametrization of EEG.
Durka, Piotr J
2003-01-01
This paper presents a summary of time-frequency analysis of the electrical activity of the brain (EEG). It covers in detail two major steps: the introduction of wavelets and adaptive approximations. The presented studies include time-frequency solutions to several standard research and clinical problems encountered in the analysis of evoked potentials, sleep EEG, epileptic activities, ERD/ERS and pharmaco-EEG. Based upon these results we conclude that the matching pursuit algorithm provides a unified parametrization of EEG, applicable in a variety of experimental and clinical setups. This conclusion is followed by a brief discussion of the current state of the mathematical and algorithmic aspects of adaptive time-frequency approximations of signals. PMID:12605721
Zhu, Xiaojun; Lei, Guangtsai; Pan, Guangwen
1997-04-01
In this paper, the continuous operator is discretized into matrix form by Galerkin's procedure, using periodic Battle-Lemarie wavelets as basis/testing functions. The polynomial decomposition of wavelets is applied to the evaluation of matrix elements, which makes the computational effort of the matrix elements no more expensive than that of the method of moments (MoM) with conventional piecewise basis/testing functions. A new algorithm is developed employing the fast wavelet transform (FWT). Owing to the localization, cancellation, and orthogonality properties of wavelets, very sparse matrices are obtained, which are then solved by the LSQR iterative method. This algorithm is also adaptive in that one can add at will finer wavelet bases in the regions where fields vary rapidly, without any damage to the orthogonality of the wavelet basis functions. To demonstrate the effectiveness of the new algorithm, we applied it to the evaluation of frequency-dependent resistance and inductance matrices of multiple lossy transmission lines. Numerical results agree with previously published data and laboratory measurements. The valid frequency range of the boundary integral equation results has been extended by two to three decades in comparison with the traditional MoM approach. The new algorithm has been integrated into the computer-aided design tool MagiCAD, which is used for the design and simulation of high-speed digital systems and multichip modules.
Goffin, Mark A.; Buchan, Andrew G.; Dargaville, Steven; Pain, Christopher C.; Smith, Paul N.; Smedley-Stevenson, Richard P.
2015-01-15
A method for applying goal-based adaptive methods to the angular resolution of the neutral particle transport equation is presented. The methods are applied to an octahedral wavelet discretisation of the spherical angular domain which allows for anisotropic resolution. The angular resolution is adapted across both the spatial and energy dimensions. The spatial domain is discretised using an inner-element sub-grid scale finite element method. The goal-based adaptive methods optimise the angular discretisation to minimise the error in a specific functional of the solution. The goal-based error estimators require the solution of an adjoint system to determine the importance to the specified functional. The error estimators and the novel methods to calculate them are described. Several examples are presented to demonstrate the effectiveness of the methods. It is shown that the methods can significantly reduce the number of unknowns and computational time required to obtain a given error. The novelty of the work is the use of goal-based adaptive methods to obtain anisotropic resolution in the angular domain for solving the transport equation. -- Highlights: •Wavelet angular discretisation used to solve transport equation. •Adaptive method developed for the wavelet discretisation. •Anisotropic angular resolution demonstrated through the adaptive method. •Adaptive method provides improvements in computational efficiency.
Removal of ocular artifacts from EEG using adaptive thresholding of wavelet coefficients
NASA Astrophysics Data System (ADS)
Krishnaveni, V.; Jayaraman, S.; Anitha, L.; Ramadoss, K.
2006-12-01
Electroencephalogram (EEG) gives researchers a non-invasive way to record cerebral activity. It is a valuable tool that helps clinicians to diagnose various neurological disorders and brain diseases. Blinking or moving the eyes produces a large electrical potential around the eyes known as the electrooculogram. It is a non-cortical activity which spreads across the scalp and contaminates EEG recordings. These contaminating potentials are called ocular artifacts (OAs). Rejecting contaminated trials causes substantial data loss, and restricting eye movements/blinks limits the possible experimental designs and may affect the cognitive processes under investigation. In this paper, a nonlinear time-scale adaptive denoising system based on a wavelet shrinkage scheme has been used for removing OAs from EEG. The time-scale adaptive algorithm is based on Stein's unbiased risk estimate (SURE), and a soft-like thresholding function that searches for optimal thresholds using a gradient-based adaptive algorithm is employed. Denoising EEG with the proposed algorithm yields better results in terms of ocular artifact reduction and retention of background EEG activity compared to non-adaptive thresholding methods and the JADE algorithm.
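The wavelet-shrinkage idea underlying such denoisers can be illustrated with a minimal numpy sketch. Note the hedges: the paper's SURE-optimized, gradient-searched threshold is replaced here by the simpler universal threshold, and a single-level Haar transform stands in for the full wavelet decomposition; signal parameters are invented for the example.

```python
import numpy as np

def haar_level(x):
    """One level of the orthonormal Haar wavelet transform."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)  # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)  # detail coefficients
    return a, d

def soft(c, t):
    """Soft thresholding: shrink coefficients toward zero by t."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

rng = np.random.default_rng(0)
n = 1024
t_axis = np.linspace(0, 1, n)
clean = np.sin(2 * np.pi * 5 * t_axis)          # smooth "background EEG" stand-in
noisy = clean + 0.3 * rng.standard_normal(n)    # additive artifact/noise

a, d = haar_level(noisy)
sigma = np.median(np.abs(d)) / 0.6745           # robust noise-level estimate
thr = sigma * np.sqrt(2 * np.log(n))            # universal threshold (not SURE)
d_dn = soft(d, thr)

# inverse Haar step reconstructs the denoised signal
rec = np.empty(n)
rec[0::2] = (a + d_dn) / np.sqrt(2)
rec[1::2] = (a - d_dn) / np.sqrt(2)
assert np.mean((rec - clean) ** 2) < np.mean((noisy - clean) ** 2)
```

An adaptive scheme like the paper's would replace the fixed `thr` with a per-scale value tuned by gradient descent on the SURE criterion.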
Singh, Omkar; Sunkaria, Ramesh Kumar
2015-01-01
Separating an information-bearing signal from the background noise is a general problem in signal processing. During acquisition in a clinical environment, the electrocardiogram (ECG) signal is corrupted by various noise sources such as powerline interference (PLI), baseline wander and muscle artifacts. This paper presents novel methods for the reduction of powerline interference in ECG signals using the empirical wavelet transform (EWT) and adaptive filtering. The proposed methods are compared with empirical mode decomposition (EMD) based PLI cancellation methods. A total of six methods for PLI reduction based on EMD and EWT are analysed and their results are presented in this paper. The EWT-based de-noising methods have less computational complexity and are more efficient than the EMD-based de-noising methods. PMID:25412942
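The adaptive-filtering side of PLI cancellation can be sketched with a classic two-weight LMS canceller driven by a quadrature reference at the powerline frequency. This is a generic textbook technique, not a reproduction of the paper's EWT-based methods; the sampling rate, amplitudes, and step size below are illustrative.

```python
import numpy as np

def lms_pli_cancel(signal, fs, f0=50.0, mu=0.005):
    """Two-weight LMS adaptive canceller with a sine/cosine reference
    at the powerline frequency f0; returns the cleaned signal."""
    n = np.arange(len(signal))
    ref = np.column_stack([np.sin(2 * np.pi * f0 * n / fs),
                           np.cos(2 * np.pi * f0 * n / fs)])
    w = np.zeros(2)
    out = np.empty(len(signal))
    for i in range(len(signal)):
        y = ref[i] @ w            # current estimate of the interference
        e = signal[i] - y         # error signal = cleaned sample
        w += 2 * mu * e * ref[i]  # LMS weight update
        out[i] = e
    return out

fs = 500.0
n = np.arange(2000)
ecg_like = 0.5 * np.sin(2 * np.pi * 1.2 * n / fs)    # slow, ECG-like component
pli = 0.8 * np.sin(2 * np.pi * 50.0 * n / fs + 0.7)  # powerline interference
cleaned = lms_pli_cancel(ecg_like + pli, fs)
# after the initial convergence transient, the 50 Hz component is strongly attenuated
assert np.mean((cleaned[1000:] - ecg_like[1000:]) ** 2) < 0.05 * np.mean(pli ** 2)
```

The quadrature reference makes the canceller act as a narrow adaptive notch at `f0`, leaving the low-frequency ECG content essentially untouched.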
NASA Astrophysics Data System (ADS)
Rastigejev, Y.; Semakin, A. N.
2012-12-01
In this work we present a multilevel Wavelet-based Adaptive Mesh Refinement (WAMR) method for numerical modeling of global atmospheric chemical transport problems. An accurate numerical simulation of such problems presents an enormous challenge. Atmospheric Chemical Transport Models (CTMs) combine chemical reactions with meteorologically predicted atmospheric advection and turbulent mixing. The resulting system of multi-scale advection-reaction-diffusion equations is extremely stiff and nonlinear and involves a large number of chemically interacting species. As a consequence, the need for enormous computational resources for solving these equations imposes severe limitations on the spatial resolution of CTMs implemented on uniform or quasi-uniform grids. In turn, this relatively crude spatial resolution results in significant numerical diffusion introduced into the system, which is shown to noticeably distort the pollutant mixing and transport dynamics for typically used grid resolutions. The WAMR method for numerical modeling of atmospheric chemical evolution equations presented in this work provides a significant reduction in computational cost without sacrificing numerical accuracy, and therefore addresses the numerical difficulties described above. The WAMR method introduces a fine grid in the regions where sharp transitions occur and a coarser grid in the regions of smooth solution behavior, and therefore yields much more accurate solutions than conventional numerical methods implemented on uniform or quasi-uniform grids. The algorithm provides error estimates of the solution that are used in conjunction with appropriate threshold criteria to adapt the non-uniform grid. The method has been tested on a variety of problems including numerical simulation of traveling pollution plumes. It was shown that pollution plumes in the remote troposphere can propagate as well-defined layered structures for two weeks or more.
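The threshold criterion that drives this kind of wavelet-based grid adaptation can be illustrated with a toy one-dimensional version: the "detail" at an odd grid point is its deviation from linear interpolation of its even neighbours, and points whose detail exceeds a tolerance are flagged for refinement. The threshold value and test function below are illustrative, not the paper's.

```python
import numpy as np

def refine_flags(u, eps=1e-3):
    """Interpolating-wavelet style refinement criterion on a dyadic
    1-D grid: the detail at an odd point is its deviation from linear
    interpolation of its even neighbours; points whose detail exceeds
    eps are flagged for local grid refinement."""
    detail = np.abs(u[1:-1:2] - 0.5 * (u[0:-2:2] + u[2::2]))
    return detail > eps

# a steep front at x = 0.5 (a stand-in for a sharp plume edge)
x = np.linspace(0, 1, 257)
u = np.tanh((x - 0.5) / 0.01)
flags = refine_flags(u)
# refinement is triggered only near the front, not over the smooth regions
assert 0 < flags.sum() < flags.size
```

In a full WAMR code the same test is applied recursively level by level, so the grid stays coarse wherever the solution is smooth.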
Adaptive variable-fidelity wavelet-based eddy-capturing approaches for compressible turbulence
NASA Astrophysics Data System (ADS)
Brown-Dymkoski, Eric; Vasilyev, Oleg V.
2015-11-01
Multiresolution wavelet methods have been developed for efficient simulation of compressible turbulence. They rely upon a filter to identify dynamically important coherent flow structures and adapt the mesh to resolve them. The filter threshold parameter, which can be specified globally or locally, allows for a continuous tradeoff between computational cost and fidelity, ranging seamlessly between DNS and adaptive LES. There are two main approaches to specifying the adaptive threshold parameter. It can be imposed as a numerical error bound, or alternatively, derived from real-time flow phenomena to ensure correct simulation of desired turbulent physics. As LES relies on often imprecise model formulations that require a high-quality mesh, this variable-fidelity approach offers a further tool for improving simulation by targeting deficiencies and locally increasing the resolution. Simultaneous physical and numerical criteria, derived from compressible flow physics and the governing equations, are used to identify turbulent regions and evaluate the fidelity. Several benchmark cases are considered to demonstrate the ability to capture variable density and thermodynamic effects in compressible turbulence. This work was supported by NSF under grant No. CBET-1236505.
Aelterman, Jan; Goossens, Bart; De Vylder, Jonas; Pižurica, Aleksandra; Philips, Wilfried
2013-01-01
Most digital cameras use an array of alternating color filters to capture the varied colors in a scene with a single sensor chip. Reconstruction of a full color image from such a color mosaic is what constitutes demosaicing. In this paper, a technique is proposed that performs this demosaicing in a way that incurs a very low computational cost. This is done through a (dual-tree complex) wavelet interpretation of the demosaicing problem. By using a novel locally adaptive approach for demosaicing (complex) wavelet coefficients, we show that many of the common demosaicing artifacts can be avoided in an efficient way. Results demonstrate that the proposed method is competitive with respect to the current state of the art, but incurs a lower computational cost. The wavelet approach also allows for computationally effective denoising or deblurring approaches. PMID:23671575
Incidental Learning of Collocation
ERIC Educational Resources Information Center
Webb, Stuart; Newton, Jonathan; Chang, Anna
2013-01-01
This study investigated the effects of repetition on the learning of collocation. Taiwanese university students learning English as a foreign language simultaneously read and listened to one of four versions of a modified graded reader that included different numbers of encounters (1, 5, 10, and 15 encounters) with a set of 18 target collocations.…
Hejč, Jakub; Vítek, Martin; Ronzhina, Marina; Nováková, Marie; Kolářová, Jana
2015-09-01
We present a novel wavelet-based ECG delineation method with robust classification of the P wave and T wave. The work is aimed at adapting the method to long-term experimental electrograms (EGs) measured on isolated rabbit hearts and at evaluating the effect of global ischemia in experimental EGs on delineation performance. The algorithm was tested on a set of 263 rabbit EGs with established reference points and on human signals from the standard Common Standards for Quantitative Electrocardiography Database (CSEDB). On CSEDB, the standard deviation (SD) of measured errors satisfies the given criteria at each point and the results are comparable to other published works. In rabbit signals, our QRS detector reached a sensitivity of 99.87% and a positive predictivity of 99.89% despite an overlap of the spectral components of the QRS complex, P wave and power line noise. The algorithm shows great performance in suppressing J-point elevation and reached low overall error in both QRS onset (SD = 2.8 ms) and QRS offset (SD = 4.3 ms) delineation. The T wave offset is detected with acceptable error (SD = 12.9 ms) and a sensitivity of nearly 99%. The variance of the errors during global ischemia remains relatively stable; however, more failures in the detection of the T wave and P wave occur. Due to differences in spectral and timing characteristics, the parameters of the rabbit-based algorithm have to be highly adaptable and set more precisely than for human ECG signals to reach acceptable performance. PMID:26577367
Space-time adaptive approach to variational data assimilation using wavelets
NASA Astrophysics Data System (ADS)
Souopgui, Innocent; Wieland, Scott A.; Yousuff Hussaini, M.; Vasilyev, Oleg V.
2016-02-01
This paper focuses on one of the main challenges of 4-dimensional variational data assimilation, namely the requirement to have a forward solution available when solving the adjoint problem. The issue is addressed by considering the time in the same fashion as the space variables, reformulating the mathematical model in the entire space-time domain, and solving the problem on a near optimal computational mesh that automatically adapts to spatio-temporal structures of the solution. The compressed form of the solution eliminates the need to save or recompute data for every time slice as it is typically done in traditional time marching approaches to 4-dimensional variational data assimilation. The reduction of the required computational degrees of freedom is achieved using the compression properties of multi-dimensional second generation wavelets. The simultaneous space-time discretization of both the forward and the adjoint models makes it possible to solve both models either concurrently or sequentially. In addition, the grid adaptation reduces the amount of saved data to the strict minimum for a given a priori controlled accuracy of the solution. The proposed approach is demonstrated for the advection diffusion problem in two space-time dimensions.
Pipek, János; Nagy, Szilvia
2013-03-01
The wave function of a many-electron system contains inhomogeneously distributed spatial details, which allows one to reduce the number of fine-detail wavelets in multiresolution analysis approximations. Finding a method for decimating the unnecessary basis functions plays an essential role in avoiding an exponential increase of computational demand in wavelet-based calculations. We describe an effective prediction algorithm for the wavelet coefficients of the next resolution level, based on the approximate wave function expanded up to a given level. The prediction results in a reasonable approximation of the wave function and allows one to sort out the unnecessary wavelets with great reliability. PMID:23115109
Mass Detection in Mammographic Images Using Wavelet Processing and Adaptive Threshold Technique.
Vikhe, P S; Thool, V R
2016-04-01
The detection of masses in mammograms for the early diagnosis of breast cancer plays a significant role in reducing the mortality rate. However, in some cases, screening for masses is a difficult task for the radiologist, due to variation in contrast, fuzzy edges and noisy mammograms. Masses and micro-calcifications are the distinctive signs for the diagnosis of breast cancer. This paper presents a method for mass enhancement using a piecewise linear operator in combination with wavelet processing of mammographic images. The method includes artifact suppression and pectoral muscle removal based on morphological operations. Finally, mass segmentation for detection using an adaptive threshold technique is carried out to separate the mass from the background. The proposed method has been tested on 130 (45 + 85) images with 90.9 and 91% True Positive Fraction (TPF) at 2.35 and 2.1 average False Positives Per Image (FP/I) from two different databases, namely the Mammographic Image Analysis Society (MIAS) and the Digital Database for Screening Mammography (DDSM). The obtained results show that the proposed technique improves diagnosis in early breast cancer detection. PMID:26811073
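The final adaptive-threshold segmentation step can be illustrated with a toy global variant: threshold = mean + k·std of the enhanced image. Both the value of k and the synthetic image are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def adaptive_threshold(img, k=1.5):
    """Statistics-driven threshold: a pixel is flagged as a mass
    candidate if it exceeds mean + k * std of the enhanced image.
    k is an illustrative tuning parameter."""
    t = img.mean() + k * img.std()
    return img > t

# synthetic "enhanced mammogram": uniform background with one bright blob
img = np.full((64, 64), 0.2)
yy, xx = np.mgrid[:64, :64]
blob = (yy - 32) ** 2 + (xx - 32) ** 2 < 36   # circular mass region
img[blob] = 0.9
mask = adaptive_threshold(img)
# the adaptive threshold separates exactly the bright mass from the background
assert np.array_equal(mask, blob)
```

Because the threshold is derived from the image's own statistics rather than a fixed constant, the same code adapts to images with different brightness and contrast.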
NASA Astrophysics Data System (ADS)
Rastigejev, Y.; Semakin, A. N.
2013-12-01
Accurate numerical simulations of global-scale three-dimensional atmospheric chemical transport models (CTMs) are essential for studies of many important atmospheric chemistry problems such as the adverse effects of air pollutants on human health, ecosystems and the Earth's climate. These simulations usually require large CPU time due to numerical difficulties associated with a wide range of spatial and temporal scales, nonlinearity and a large number of reacting species. In our previous work we have shown that in order to achieve an adequate convergence rate and accuracy, the mesh spacing in numerical simulations of global synoptic-scale pollution plume transport must be decreased to a few kilometers. This resolution is difficult to achieve for global CTMs on uniform or quasi-uniform grids. To address the difficulty described above, we developed a three-dimensional Wavelet-based Adaptive Mesh Refinement (WAMR) algorithm. The method employs a highly non-uniform adaptive grid with fine resolution over the areas of interest, without requiring small grid spacing throughout the entire domain. The method uses a multi-grid iterative solver that naturally takes advantage of the multilevel structure of the adaptive grid. In order to represent the multilevel adaptive grid efficiently, a dynamic data structure based on indirect memory addressing has been developed. The data structure allows rapid access to individual points, fast inter-grid operations and re-gridding. The WAMR method has been implemented on parallel computer architectures. The parallel algorithm is based on a run-time partitioning and load-balancing scheme for the adaptive grid. The partitioning scheme maintains locality to reduce communications between computing nodes, and the parallel scheme was found to be cost-effective. Specifically, we obtained an order of magnitude increase in computational speed for numerical simulations performed on a twelve-core single-processor workstation. We have applied the WAMR method for numerical
Adaptive dynamic inversion robust control for BTT missile based on wavelet neural network
NASA Astrophysics Data System (ADS)
Li, Chuanfeng; Wang, Yongji; Deng, Zhixiang; Wu, Hao
2009-10-01
A new nonlinear control strategy incorporating the dynamic inversion method with wavelet neural networks is presented for the nonlinear coupled system of a Bank-to-Turn (BTT) missile in the reentry phase. The basic control law is designed using the dynamic inversion feedback linearization method, and an online-learning wavelet neural network is used to compensate for the inversion error due to aerodynamic parameter errors, modeling imprecision and external disturbances, in view of the time-frequency localization properties of the wavelet transform. Weight-adjusting laws are derived according to Lyapunov stability theory, which guarantees the boundedness of all signals in the whole system. Furthermore, the robust stability of the closed-loop system under this tracking law is proved. Finally, six degree-of-freedom (6DOF) simulation results show that the attitude angles can track the anticipated commands precisely in the presence of external disturbances and parameter uncertainty. This means that the dependence of the dynamic inversion method on the model is reduced and the robustness of the control system is enhanced by using the wavelet neural network (WNN) to reconstruct the inversion error online.
Anatomically-adapted graph wavelets for improved group-level fMRI activation mapping.
Behjat, Hamid; Leonardi, Nora; Sörnmo, Leif; Van De Ville, Dimitri
2015-12-01
A graph based framework for fMRI brain activation mapping is presented. The approach exploits the spectral graph wavelet transform (SGWT) for the purpose of defining an advanced multi-resolutional spatial transformation for fMRI data. The framework extends wavelet based SPM (WSPM), which is an alternative to the conventional approach of statistical parametric mapping (SPM), and is developed specifically for group-level analysis. We present a novel procedure for constructing brain graphs, with subgraphs that separately encode the structural connectivity of the cerebral and cerebellar gray matter (GM), and address the inter-subject GM variability by the use of template GM representations. Graph wavelets tailored to the convoluted boundaries of GM are then constructed as a means to implement a GM-based spatial transformation on fMRI data. The proposed approach is evaluated using real as well as semi-synthetic multi-subject data. Compared to SPM and WSPM using classical wavelets, the proposed approach shows superior type-I error control. The results on real data suggest a higher detection sensitivity as well as the capability to capture subtle, connected patterns of brain activity. PMID:26057594
A wavelet-MRA-based adaptive semi-Lagrangian method for the relativistic Vlasov-Maxwell system
Besse, Nicolas Latu, Guillaume Ghizzo, Alain Sonnendruecker, Eric Bertrand, Pierre
2008-08-10
In this paper we present a new method for the numerical solution of the relativistic Vlasov-Maxwell system on a phase-space grid using an adaptive semi-Lagrangian method. The adaptivity is performed through a wavelet multiresolution analysis, which gives a powerful and natural refinement criterion based on the local measurement of the approximation error and regularity of the distribution function. The multiscale expansion of the distribution function therefore allows one to obtain a sparse representation of the data and thus save memory space and CPU time. We apply this numerical scheme to reduced Vlasov-Maxwell systems arising in laser-plasma physics. The interaction of relativistically strong laser pulses with overdense plasma slabs is investigated. These Vlasov simulations revealed a rich variety of phenomena associated with the fast particle dynamics induced by electromagnetic waves, such as electron trapping, particle acceleration, and electron plasma wavebreaking. However, the wavelet-based adaptive method that we developed here does not yield significant improvements compared to Vlasov solvers on a uniform mesh, due to the substantial overhead that the method introduces. Nonetheless it might be a first step towards more efficient adaptive solvers based on different ideas for the grid refinement or on a more efficient implementation. Here the Vlasov simulations are performed in a two-dimensional phase space where the development of thin filaments, strongly amplified by relativistic effects, requires an important increase of the total number of points of the phase-space grid as they get finer as time goes on. The adaptive method could be more useful in cases where these thin filaments that need to be resolved are a very small fraction of the hyper-volume, which arises in higher dimensions because of the surface-to-volume scaling and the essentially one-dimensional structure of the filaments. Moreover, the main way to improve the efficiency of the adaptive method is to
NASA Astrophysics Data System (ADS)
Grenga, Temistocle
The aim of this research is to further develop a dynamically adaptive algorithm based on wavelets that is able to solve multi-dimensional compressible reactive flow problems efficiently. This work demonstrates the great potential of the method to perform direct numerical simulation (DNS) of combustion with detailed chemistry and multi-component diffusion. In particular, it addresses the performance obtained using a massively parallel implementation and demonstrates important savings in memory storage and computational time over conventional methods. In addition, fully resolved simulations of challenging three-dimensional problems involving mixing and combustion processes are performed. These problems are particularly challenging due to their strong multiscale characteristics. For these solutions, it is necessary to combine advanced numerical techniques with modern computational resources.
Avci, Derya; Leblebicioglu, Mehmet Kemal; Poyraz, Mustafa; Dogantekin, Esin
2014-02-01
The analysis and classification of urine cells has become an important topic for the medical diagnosis of some diseases. Therefore, in this study, we suggest a new technique based on an Adaptive Discrete Wavelet Entropy Energy and Neural Network Classifier (ADWEENN) for the recognition of urine cells from microscopic images, independent of rotation and scaling. Digital image processing methods such as noise reduction, contrast enhancement, segmentation, and morphological processing are used in the feature extraction stage of ADWEENN. Image processing and pattern recognition concern the operation and design of systems that recognize patterns in data sets. In past years, a major difficulty in the classification of microscopic images was the lack of adequate methods for characterization. Lately, multi-resolution image analysis methods such as Gabor filters and discrete wavelet decompositions have proven superior to classic methods for the analysis of these microscopic images. The ADWEENN method is composed of four stages: preprocessing, feature extraction, classification, and testing. The Discrete Wavelet Transform (DWT) together with adaptive wavelet entropy and energy is used for adaptive feature extraction, to strengthen the premium features supplied to the Artificial Neural Network (ANN) classifier. The efficiency of the developed ADWEENN method was tested, showing that an average recognition success of 97.58% was obtained. PMID:24493072
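Wavelet entropy and energy descriptors of the kind used in such feature-extraction stages can be sketched in a few lines. This is a generic Shannon-entropy formulation; the abstract does not specify the exact ADWEENN features, so the function and its inputs are illustrative.

```python
import numpy as np

def wavelet_energy_entropy(coeffs):
    """Energy and Shannon entropy of a vector of wavelet coefficients,
    the kind of descriptors fed to a neural-network classifier."""
    e = coeffs ** 2
    energy = e.sum()
    p = e / energy                        # relative energy distribution
    entropy = -np.sum(p * np.log2(p + 1e-12))  # small offset avoids log(0)
    return energy, entropy

# four equal-magnitude coefficients -> maximal (2-bit) entropy
coeffs = np.array([1.0, -1.0, 1.0, -1.0])
energy, entropy = wavelet_energy_entropy(coeffs)
assert energy == 4.0 and abs(entropy - 2.0) < 1e-6
```

Computed per subband of a DWT decomposition, these scalars give a compact, rotation- and scale-tolerant feature vector for classification.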
Graham, Ryan B.; Wachowiak, Mark P.; Gurd, Brendon J.
2015-01-01
Peroxisome proliferator-activated receptor gamma coactivator 1 alpha (PGC-1α) is a transcription factor co-activator that helps coordinate mitochondrial biogenesis within skeletal muscle following exercise. While evidence gleaned from submaximal exercise suggests that intracellular pathways associated with the activation of PGC-1α, as well as the expression of PGC-1α itself are activated to a greater extent following higher intensities of exercise, we have recently shown that this effect does not extend to supramaximal exercise, despite corresponding increases in muscle activation amplitude measured with electromyography (EMG). Spectral analyses of EMG data may provide a more in-depth assessment of changes in muscle electrophysiology occurring across different exercise intensities, and therefore the goal of the present study was to apply continuous wavelet transforms (CWTs) to our previous data to comprehensively evaluate: 1) differences in muscle electrophysiological properties at different exercise intensities (i.e. 73%, 100%, and 133% of peak aerobic power), and 2) muscular effort and fatigue across a single interval of exercise at each intensity, in an attempt to shed mechanistic insight into our previous observations that the increase in PGC-1α is dissociated from exercise intensity following supramaximal exercise. In general, the CWTs revealed that localized muscle fatigue was only greater than the 73% condition in the 133% exercise intensity condition, which directly matched the work rate results. Specifically, there were greater drop-offs in frequency, larger changes in burst power, as well as greater changes in burst area under this intensity, which were already observable during the first interval. As a whole, the results from the present study suggest that supramaximal exercise causes extreme localized muscular fatigue, and it is possible that the blunted PGC-1α effects observed in our previous study are the result of fatigue-associated increases in
NASA Astrophysics Data System (ADS)
Ji, Yanju; Li, Dongsheng; Yu, Mingmei; Wang, Yuan; Wu, Qiong; Lin, Jun
2016-05-01
The ground electrical source airborne transient electromagnetic system (GREATEM) on an unmanned aircraft offers considerable prospecting depth, lateral resolution and detection efficiency, and in recent years it has become an important technical means of rapid resource exploration. However, GREATEM data are extremely vulnerable to stationary white noise and non-stationary electromagnetic noise (sferics noise, aircraft engine noise and other human-made electromagnetic noise). These noises degrade the imaging quality for data interpretation. Based on the characteristics of GREATEM data and the major noises, we propose a de-noising algorithm utilizing the wavelet threshold method and exponential adaptive window-width fitting. First, the white noise in the measured data is filtered using the wavelet threshold method. Then, the data are segmented using data windows whose step lengths follow even logarithmic intervals. Within each window, the data polluted by electromagnetic noise are identified based on the discriminating principle of energy detection, and the attenuation characteristics of the data slope are extracted. Eventually, an exponential fitting algorithm is adopted to fit the attenuation curve of each window, and the data polluted by non-stationary electromagnetic noise are replaced with their fitting results. Thus the non-stationary electromagnetic noise can be effectively removed. The proposed algorithm is verified on synthetic and real GREATEM signals. The results show that in GREATEM signals, stationary white noise and non-stationary electromagnetic noise can be effectively filtered using the wavelet threshold-exponential adaptive window width-fitting algorithm, which enhances the imaging quality.
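The exponential window-fitting step can be sketched as a linear least-squares fit in the log domain, a common technique for positive decay curves like TEM transients. The window layout, decay parameters, and noise model below are illustrative assumptions, not the paper's.

```python
import numpy as np

def exp_fit_window(t, y):
    """Fit y ~ A * exp(b * t) over one window by linear least squares
    on log(y); returns the fitted values used to replace noisy data."""
    b, log_a = np.polyfit(t, np.log(y), 1)
    return np.exp(log_a) * np.exp(b * t)

# synthetic decaying transient (strictly positive, as TEM decay curves are)
t = np.linspace(0.01, 1.0, 200)
decay = 2.0 * np.exp(-3.0 * t)
rng = np.random.default_rng(1)
noisy = decay * (1 + 0.05 * rng.standard_normal(t.size))  # multiplicative noise
fitted = exp_fit_window(t, noisy)
# the fitted curve tracks the underlying exponential decay closely
assert np.max(np.abs(fitted - decay) / decay) < 0.1
```

In the full algorithm this fit is repeated per logarithmically spaced window, and only windows flagged by the energy-detection test have their samples replaced by the fitted curve.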
NASA Astrophysics Data System (ADS)
Pan, Jun; Chen, Jinglong; Zi, Yanyang; Li, Yueming; He, Zhengjia
2016-05-01
Due to the multi-modulation feature of most vibration signals, the extraction of embedded fault information from condition monitoring data for mechanical fault diagnosis is still not an easy task. Despite reported achievements, the wavelet transform follows a dyadic partition scheme and does not allow a data-driven frequency partition. The Empirical Wavelet Transform (EWT) instead extracts inherent modulation information by decomposing the signal into mono-components under an orthogonal basis and a non-dyadic partition scheme. However, its pre-defined segmentation of the Fourier spectrum, made without dependence on the analyzed signal, may result in inaccurate mono-component identification. In this paper, a modified EWT (MEWT) method with data-driven adaptive Fourier spectrum segmentation is proposed for mechanical fault identification. First, the inner product between the Fourier spectrum of the analyzed signal and a Gaussian function is calculated to obtain a scale representation. Then, adaptive spectrum segmentation is achieved by detecting local minima of the scale representation. Finally, empirical modes are obtained by adaptively merging mono-components based on their envelope-spectrum similarity. The adaptively extracted empirical modes are analyzed for mechanical fault identification. A simulation experiment and two application cases verify the effectiveness of the proposed method, and the results show its outstanding performance.
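The adaptive segmentation step described above can be sketched in a few lines. This toy version (function names and the Gaussian width are our own; the mode-merging stage is omitted) smooths the Fourier magnitude with a Gaussian and takes strict local minima as candidate band boundaries:

```python
import numpy as np

def segment_spectrum(x, fs, sigma=4.0):
    """Toy MEWT-style segmentation: smooth the Fourier magnitude with a
    Gaussian (a crude 'scale representation') and return the frequencies
    of its strict local minima as candidate band boundaries."""
    mag = np.abs(np.fft.rfft(x))
    k = np.arange(-int(4 * sigma), int(4 * sigma) + 1)
    g = np.exp(-0.5 * (k / sigma) ** 2)
    smooth = np.convolve(mag, g / g.sum(), mode="same")
    i = np.arange(1, len(smooth) - 1)
    minima = i[(smooth[i] < smooth[i - 1]) & (smooth[i] < smooth[i + 1])]
    return np.fft.rfftfreq(len(x), d=1.0 / fs)[minima]

rng = np.random.default_rng(0)
fs = 1000.0
t = np.arange(0, 1, 1 / fs)
# two well-separated tones plus a little noise
x = (np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 200 * t)
     + 0.01 * rng.standard_normal(t.size))
boundaries = segment_spectrum(x, fs)
print(boundaries[:5])  # at least one boundary falls between the two tones
```

Each pair of adjacent boundaries then delimits one band from which a mono-component would be extracted.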
NASA Technical Reports Server (NTRS)
Kempel, Leo C.
1992-01-01
Wavelets are an exciting new topic in applied mathematics and signal processing. This paper provides a brief review of wavelets, which are also known as families of functions, with an emphasis on interpretation rather than rigor. We derive an indirect use of wavelets for the solution of integral equations, based on techniques adapted from image processing. Examples for resistive strips are given, illustrating the effect of these techniques as well as their promise in dramatically reducing the requirements for solving an integral equation for large bodies. We also present a direct implementation of wavelets to solve an integral equation. Both methods suggest future research topics and may hold promise for a variety of uses in computational electromagnetics.
ERIC Educational Resources Information Center
Webb, Stuart; Kagimoto, Eve
2011-01-01
This study investigated the effects of three factors (the number of collocates per node word, the position of the node word, synonymy) on learning collocations. Japanese students studying English as a foreign language learned five sets of 12 target collocations. Each collocation was presented in a single glossed sentence. The number of collocates…
Interlanguage Development and Collocational Clash
ERIC Educational Resources Information Center
Shahheidaripour, Gholamabbass
2000-01-01
Background: Persian English learners committed mistakes and errors which were due to insufficient knowledge of different senses of the words and collocational structures they formed. Purpose: The study reported here was conducted for a thesis submitted in partial fulfillment of the requirements for The Master of Arts degree, School of Graduate…
NASA Astrophysics Data System (ADS)
Hortos, William S.
2008-04-01
Proposed distributed wavelet-based algorithms are a means to compress sensor data received at the nodes forming a wireless sensor network (WSN) by exchanging information between neighboring sensor nodes. Local collaboration among nodes compacts the measurements, yielding a reduced fused set with equivalent information at far fewer nodes. Nodes may be equipped with multiple sensor types, each capable of sensing distinct phenomena: thermal, humidity, chemical, voltage, or image signals with low or no frequency content as well as audio, seismic or video signals within defined frequency ranges. Compression of the multi-source data through wavelet-based methods, distributed at active nodes, reduces downstream processing and storage requirements along the paths to sink nodes; it also enables noise suppression and more energy-efficient query routing within the WSN. Targets are first detected by the multiple sensors; then wavelet compression and data fusion are applied to the target returns, followed by feature extraction from the reduced data; feature data are input to target recognition/classification routines; targets are tracked during their sojourns through the area monitored by the WSN. Algorithms to perform these tasks are implemented in a distributed manner, based on a partition of the WSN into clusters of nodes. In this work, a scheme of collaborative processing is applied for hierarchical data aggregation and decorrelation, based on the sensor data itself and any redundant information, enabled by a distributed, in-cluster wavelet transform with lifting that allows multiple levels of resolution. The wavelet-based compression algorithm significantly decreases RF bandwidth and other resource use in target processing tasks. Following wavelet compression, features are extracted. The objective of feature extraction is to maximize the probabilities of correct target classification based on multi-source sensor measurements, while minimizing the resource expenditures at
Schlossnagle, G.; Restrepo, J.M.; Leaf, G.K.
1993-12-01
The properties of periodized Daubechies wavelets on [0,1] are detailed and contrasted against their counterparts which form a basis for L{sup 2}(R). Numerical examples illustrate the analytical estimates for convergence and demonstrate by comparison with Fourier spectral methods the superiority of wavelet projection methods for approximations. The analytical solution to inner products of periodized wavelets and their derivatives, which are known as connection coefficients, is presented, and several tabulated values are included.
Wavelets based on Hermite cubic splines
NASA Astrophysics Data System (ADS)
Cvejnová, Daniela; Černá, Dana; Finěk, Václav
2016-06-01
In 2000, W. Dahmen et al. designed biorthogonal multi-wavelets adapted to the interval [0,1] on the basis of Hermite cubic splines. In recent years, several simpler constructions of wavelet bases based on Hermite cubic splines have been proposed. We focus here on wavelet bases for which both the mass and stiffness matrices are sparse, in the sense that the number of nonzero elements in any column is bounded by a constant. A matrix-vector multiplication in adaptive wavelet methods can then be performed exactly with linear complexity for any second-order differential equation with constant coefficients. In this contribution, we briefly review these constructions and propose a new wavelet which leads to improved Riesz constants. The wavelets have four vanishing moments.
The use of wavelet transforms in the solution of two-phase flow problems
Moridis, G.J.; Nikolaou, M.; You, Yong
1994-10-01
In this paper we present the use of wavelets to solve the nonlinear Partial Differential Equation (PDE) of two-phase flow in one dimension. The wavelet transforms allow a drastically different approach in the discretization of space. In contrast to the traditional trigonometric basis functions, wavelets approximate a function not by cancellation but by placement of wavelets at appropriate locations. When an abrupt change, such as a shock wave or a spike, occurs in a function, only local coefficients in a wavelet approximation will be affected. The unique feature of wavelets is their Multi-Resolution Analysis (MRA) property, which allows seamless investigation at any spatial resolution. The use of wavelets is tested in the solution of the one-dimensional Buckley-Leverett problem against analytical solutions and solutions obtained from standard numerical models. Two classes of wavelet bases (Daubechies and Chui-Wang) and two methods (Galerkin and collocation) are investigated. We determine that the Chui-Wang wavelets and a collocation method provide the optimum wavelet solution for this type of problem. Increasing the resolution level improves the accuracy of the solution, but the order of the basis function seems to be far less important. Our results indicate that wavelet transforms are an effective and accurate method which does not suffer from oscillations or numerical smearing in the presence of steep fronts.
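The locality claim above, that an abrupt spike perturbs only local wavelet coefficients, is easy to verify numerically. A short check using the Haar wavelet (chosen here for brevity; the paper itself uses Daubechies and Chui-Wang bases):

```python
import numpy as np

def haar_coeffs(x):
    """Full orthonormal Haar decomposition of x (length a power of two)."""
    x = np.asarray(x, dtype=float)
    out = []
    while len(x) > 1:
        a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
        d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
        out.append(d)
        x = a
    out.append(x)
    return np.concatenate(out[::-1])

n = 256
smooth = np.sin(np.linspace(0, np.pi, n))
spiked = smooth.copy()
spiked[100] += 5.0                      # an abrupt, shock-like feature
changed = np.abs(haar_coeffs(spiked) - haar_coeffs(smooth)) > 1e-12
print(changed.sum())  # only one coefficient per level changes, ~log2(n) total
```

With trigonometric basis functions, the same spike would change essentially every Fourier coefficient.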
A Stochastic Collocation Algorithm for Uncertainty Analysis
NASA Technical Reports Server (NTRS)
Mathelin, Lionel; Hussaini, M. Yousuff; Zang, Thomas A. (Technical Monitor)
2003-01-01
This report describes a stochastic collocation method to adequately handle physically intrinsic uncertainty in the variables of a numerical simulation. For instance, while the standard Galerkin approach to Polynomial Chaos requires multi-dimensional summations over the stochastic basis functions, the stochastic collocation method collapses those summations to a single one-dimensional summation. This report furnishes the essential algorithmic details of the new stochastic collocation method and, as a numerical example, provides the solution of the Riemann problem with the stochastic collocation method used for the discretization of the stochastic parameters.
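As a toy illustration of that collapse to a one-dimensional summation, here is a one-parameter stochastic collocation sketch using Gauss-Hermite nodes; the "solver" is a stand-in function of our own, not the Riemann problem from the report:

```python
import numpy as np

def collocation_mean(solver, n_nodes):
    """E[solver(xi)] for xi ~ N(0,1): run the deterministic solver only at
    the Gauss-Hermite collocation nodes and combine with quadrature weights."""
    nodes, weights = np.polynomial.hermite_e.hermegauss(n_nodes)
    vals = np.array([solver(x) for x in nodes])  # independent solver runs
    return (weights @ vals) / np.sqrt(2.0 * np.pi)

# stand-in "deterministic solver": u(xi) = exp(xi), whose exact mean is sqrt(e)
mean = collocation_mean(np.exp, 10)
print(mean)
```

Each node is an ordinary deterministic run, which is what makes the approach non-intrusive; the Galerkin alternative would instead couple all stochastic modes in one large system.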
ERIC Educational Resources Information Center
Miyakoshi, Tomoko
2009-01-01
Although it is widely acknowledged that collocations play an important part in second language learning, especially at intermediate-advanced levels, learners' difficulties with collocations have not been investigated in much detail so far. The present study examines ESL learners' use of verb-noun collocations, such as "take notes," "place an…
Generalized orthogonal wavelet phase reconstruction.
Axtell, Travis W; Cristi, Roberto
2013-05-01
Phase reconstruction is used for feedback control in adaptive optics systems. To achieve performance metrics for high actuator density or with limited processing capabilities on spacecraft, a wavelet signal processing technique is advantageous. Previous derivations of this technique have been limited to the Haar wavelet. This paper derives the relationship and algorithms to reconstruct phase with O(n) computational complexity for wavelets with the orthogonal property. This has additional benefits for performance with noise in the measurements. We also provide details on how to handle the boundary condition for telescope apertures. PMID:23695316
Collocation and Technicality in EAP Engineering
ERIC Educational Resources Information Center
Ward, Jeremy
2007-01-01
This article explores how collocation relates to lexical technicality, and how the relationship can be exploited for teaching EAP to second-year engineering students. First, corpus data are presented to show that complex noun phrase formation is a ubiquitous feature of engineering text, and that these phrases (or collocations) are highly…
Supporting Collocation Learning with a Digital Library
ERIC Educational Resources Information Center
Wu, Shaoqun; Franken, Margaret; Witten, Ian H.
2010-01-01
Extensive knowledge of collocations is a key factor that distinguishes learners from fluent native speakers. Such knowledge is difficult to acquire simply because there is so much of it. This paper describes a system that exploits the facilities offered by digital libraries to provide a rich collocation-learning environment. The design is based on…
NASA Astrophysics Data System (ADS)
Isah, Abdulnasir; Chang, Phang
2016-06-01
In this article we propose a wavelet operational method based on shifted Legendre polynomials to obtain numerical solutions of non-linear systems of fractional-order differential equations (NSFDEs). The operational matrix of the fractional derivative, derived through a wavelet-polynomial transformation, is used together with the collocation method to turn the NSFDEs into a system of non-linear algebraic equations. Illustrative examples are given in order to demonstrate the accuracy and simplicity of the proposed technique.
Collocation and Galerkin Time-Stepping Methods
NASA Technical Reports Server (NTRS)
Huynh, H. T.
2011-01-01
We study the numerical solutions of ordinary differential equations by one-step methods where the solution at tn is known and that at t(sub n+1) is to be calculated. The approaches employed are collocation, continuous Galerkin (CG) and discontinuous Galerkin (DG). Relations among these three approaches are established. A quadrature formula using s evaluation points is employed for the Galerkin formulations. We show that with such a quadrature, the CG method is identical to the collocation method using quadrature points as collocation points. Furthermore, if the quadrature formula is the right Radau one (including t(sub n+1)), then the DG and CG methods also become identical, and they reduce to the Radau IIA collocation method. In addition, we present a generalization of DG that yields a method identical to CG and collocation with arbitrary collocation points. Thus, the collocation, CG, and generalized DG methods are equivalent, and the latter two methods can be formulated using the differential instead of integral equation. Finally, all schemes discussed can be cast as s-stage implicit Runge-Kutta methods.
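The Radau IIA connection above can be made concrete. A sketch of the two-stage Radau IIA method (order 3, the right-Radau collocation scheme) applied to the linear test equation, using the standard Butcher coefficients:

```python
import numpy as np

# Butcher tableau of the two-stage Radau IIA method (order 3)
A = np.array([[5.0 / 12.0, -1.0 / 12.0],
              [3.0 / 4.0,   1.0 / 4.0]])
b = np.array([3.0 / 4.0, 1.0 / 4.0])

def radau_iia_step(lam, y, h):
    """One step for the linear test equation y' = lam*y; the stage system
    k = lam*(y + h*A@k) is linear, so it is solved directly."""
    k = np.linalg.solve(np.eye(2) - h * lam * A, lam * y * np.ones(2))
    return y + h * (b @ k)

lam, h, steps = -1.0, 0.1, 10
y = 1.0
for _ in range(steps):
    y = radau_iia_step(lam, y, h)
print(abs(y - np.exp(lam * h * steps)))  # small third-order error
```

For a nonlinear right-hand side the stage system would be solved by Newton iteration instead; the collocation viewpoint reads the stages as values of an interpolating polynomial at the right-Radau points.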
A Wavelet Perspective on the Allan Variance.
Percival, Donald B
2016-04-01
The origins of the Allan variance trace back 50 years ago to two seminal papers, one by Allan (1966) and the other by Barnes (1966). Since then, the Allan variance has played a leading role in the characterization of high-performance time and frequency standards. Wavelets first arose in the early 1980s in the geophysical literature, and the discrete wavelet transform (DWT) became prominent in the late 1980s in the signal processing literature. Flandrin (1992) briefly documented a connection between the Allan variance and a wavelet transform based upon the Haar wavelet. Percival and Guttorp (1994) noted that one popular estimator of the Allan variance-the maximal overlap estimator-can be interpreted in terms of a version of the DWT now widely referred to as the maximal overlap DWT (MODWT). In particular, when the MODWT is based on the Haar wavelet, the variance of the resulting wavelet coefficients-the wavelet variance-is identical to the Allan variance when the latter is multiplied by one-half. The theory behind the wavelet variance can thus deepen our understanding of the Allan variance. In this paper, we review basic wavelet variance theory with an emphasis on the Haar-based wavelet variance and its connection to the Allan variance. We then note that estimation theory for the wavelet variance offers a means of constructing asymptotically correct confidence intervals (CIs) for the Allan variance without reverting to the common practice of specifying a power-law noise type a priori. We also review recent work on specialized estimators of the wavelet variance that are of interest when some observations are missing (gappy data) or in the presence of contamination (rogue observations or outliers). It is a simple matter to adapt these estimators to become estimators of the Allan variance. Finally we note that wavelet variances based upon wavelets other than the Haar offer interesting generalizations of the Allan variance. PMID:26529757
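The Haar/Allan relationship stated above is easy to verify numerically at unit scale, where the maximal-overlap (MODWT) Haar wavelet coefficients are half the first differences of the data:

```python
import numpy as np

rng = np.random.default_rng(1)
y = rng.standard_normal(100_000)  # white frequency noise

# overlapping Allan variance at unit averaging time
d = np.diff(y)
avar = 0.5 * np.mean(d ** 2)

# unit-scale MODWT Haar coefficients: (y[t] - y[t-1]) / 2;
# the wavelet variance is their mean square
w = 0.5 * d
wvar = np.mean(w ** 2)
print(avar, wvar)  # wvar equals one-half of avar
```

For white frequency noise the unit-time Allan variance also estimates the noise variance itself, so `avar` here should be close to 1.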
A multilevel stochastic collocation method for SPDEs
Gunzburger, Max; Jantsch, Peter; Teckentrup, Aretha; Webster, Clayton
2015-03-10
We present a multilevel stochastic collocation method that, as do multilevel Monte Carlo methods, uses a hierarchy of spatial approximations to reduce the overall computational complexity when solving partial differential equations with random inputs. For approximation in parameter space, a hierarchy of multi-dimensional interpolants of increasing fidelity are used. Rigorous convergence and computational cost estimates for the new multilevel stochastic collocation method are derived and used to demonstrate its advantages compared to standard single-level stochastic collocation approximations as well as multilevel Monte Carlo methods.
Integrated wavelets for medical image analysis
NASA Astrophysics Data System (ADS)
Heinlein, Peter; Schneider, Wilfried
2003-11-01
Integrated wavelets are a new method for discretizing the continuous wavelet transform (CWT). Independent of the choice of discrete scale and orientation parameters they yield tight families of convolution operators. Thus these families can easily be adapted to specific problems. After presenting the fundamental ideas, we focus primarily on the construction of directional integrated wavelets and their application to medical images. We state an exact algorithm for implementing this transform and present applications from the field of digital mammography. The first application covers the enhancement of microcalcifications in digital mammograms. Further, we exploit the directional information provided by integrated wavelets for better separation of microcalcifications from similar structures.
ERIC Educational Resources Information Center
Goudarzi, Zahra; Moini, M. Raouf
2012-01-01
Collocation is one of the most problematic areas in second language learning and it seems that if one wants to improve his or her communication in another language should improve his or her collocation competence. This study attempts to determine the effect of applying three different kinds of collocation on collocation learning and retention of…
The Impact of Corpus-Based Collocation Instruction on Iranian EFL Learners' Collocation Learning
ERIC Educational Resources Information Center
Ashouri, Shabnam; Arjmandi, Masoume; Rahimi, Ramin
2014-01-01
Over the past decades, studies of EFL/ESL vocabulary acquisition have identified the significance of collocations in language learning. Due to the fact that collocations have been regarded as one of the major concerns of both EFL teachers and learners for many years, the present study attempts to shed light on the impact of corpus-based…
ERIC Educational Resources Information Center
Wolter, Brent; Gyllstad, Henrik
2013-01-01
This study investigated the influence of frequency effects on the processing of congruent (i.e., having an equivalent first language [L1] construction) collocations and incongruent (i.e., not having an equivalent L1 construction) collocations in a second language (L2). An acceptability judgment task was administered to native and advanced…
47 CFR 51.323 - Standards for physical collocation and virtual collocation.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 47 Telecommunication 3 2011-10-01 2011-10-01 false Standards for physical collocation and virtual collocation. 51.323 Section 51.323 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES (CONTINUED) INTERCONNECTION Additional Obligations of Incumbent Local Exchange Carriers § 51.323 Standards for...
47 CFR 51.323 - Standards for physical collocation and virtual collocation.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 47 Telecommunication 3 2012-10-01 2012-10-01 false Standards for physical collocation and virtual collocation. 51.323 Section 51.323 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES (CONTINUED) INTERCONNECTION Additional Obligations of Incumbent Local Exchange Carriers § 51.323 Standards for...
Perceptually Lossless Wavelet Compression
NASA Technical Reports Server (NTRS)
Watson, Andrew B.; Yang, Gloria Y.; Solomon, Joshua A.; Villasenor, John
1996-01-01
The Discrete Wavelet Transform (DWT) decomposes an image into bands that vary in spatial frequency and orientation. It is widely used for image compression. Measures of the visibility of DWT quantization errors are required to achieve optimal compression. Uniform quantization of a single band of coefficients results in an artifact that is the sum of a lattice of random amplitude basis functions of the corresponding DWT synthesis filter, which we call DWT uniform quantization noise. We measured visual detection thresholds for samples of DWT uniform quantization noise in Y, Cb, and Cr color channels. The spatial frequency of a wavelet is r 2(exp -L), where r is display visual resolution in pixels/degree, and L is the wavelet level. Amplitude thresholds increase rapidly with spatial frequency. Thresholds also increase from Y to Cr to Cb, and with orientation from low-pass to horizontal/vertical to diagonal. We propose a mathematical model for DWT noise detection thresholds that is a function of level, orientation, and display visual resolution. This allows calculation of a 'perceptually lossless' quantization matrix for which all errors are in theory below the visual threshold. The model may also be used as the basis for adaptive quantization schemes.
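The level-to-frequency rule quoted above is a one-liner; for a display resolution of, say, 32 pixels/degree the first five DWT levels map to these band frequencies:

```python
import numpy as np

def dwt_band_frequency(r, levels):
    """Spatial frequency (cycles/degree) of each DWT level, f = r * 2**(-L),
    where r is the display visual resolution in pixels/degree."""
    L = np.arange(1, levels + 1)
    return r * 2.0 ** (-L)

print(dwt_band_frequency(32.0, 5))  # [16. 8. 4. 2. 1.]
```

Since thresholds rise rapidly with spatial frequency, the finest levels (largest f) tolerate the coarsest quantization.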
The use of wavelet transformations in the solution of two-phase flow problems
Moridis, G.J.; Nikolaou, M.; You, Y.
1995-12-31
In this paper the authors present the use of wavelets to solve the non-linear Partial Differential Equation (PDE) of two-phase flow in one dimension. The wavelet transforms allow a drastically different approach in the discretization of space. In contrast to the traditional trigonometric basis functions, wavelets approximate a function not by cancellation but by placement of wavelets at appropriate locations. When an abrupt change, such as a shock wave or a spike, occurs in a function, only local coefficients in a wavelet approximation will be affected. The unique feature of wavelets is their Multi-Resolution Analysis (MRA) property, which allows seamless investigation at any spatial resolution. The use of wavelets is tested in the solution of the one-dimensional Buckley-Leverett problem against analytical solutions and solutions obtained from standard numerical models. Two classes of wavelet bases (Daubechies and Chui-Wang) and two methods (Galerkin and collocation) are investigated. The authors determine that the Chui-Wang wavelets and a collocation method provide the optimum wavelet solution for this type of problem. Increasing the resolution level improves the accuracy of the solution, but the order of the basis function seems to be far less important. The results indicate that wavelet transforms are an effective and accurate method which does not suffer from oscillations or numerical smearing in the presence of steep fronts.
NASA Astrophysics Data System (ADS)
Yang, Huijuan; Guan, Cuntai; Sui Geok Chua, Karen; San Chok, See; Wang, Chuan Chu; Kok Soon, Phua; Tang, Christina Ka Yin; Keng Ang, Kai
2014-06-01
Objective. Detection of motor imagery of hand/arm has been extensively studied for stroke rehabilitation. This paper firstly investigates the detection of motor imagery of swallow (MI-SW) and motor imagery of tongue protrusion (MI-Ton) in an attempt to find a novel solution for post-stroke dysphagia rehabilitation. Detection of MI-SW from a simple yet relevant modality such as MI-Ton is then investigated, motivated by the similarity in activation patterns between tongue movements and swallowing and there being fewer movement artifacts in performing tongue movements compared to swallowing. Approach. Novel features were extracted based on the coefficients of the dual-tree complex wavelet transform to build multiple training models for detecting MI-SW. The session-to-session classification accuracy was boosted by adaptively selecting the training model to maximize the ratio of between-classes distances versus within-class distances, using features of training and evaluation data. Main results. Our proposed method yielded averaged cross-validation (CV) classification accuracies of 70.89% and 73.79% for MI-SW and MI-Ton for ten healthy subjects, which are significantly better than the results from existing methods. In addition, averaged CV accuracies of 66.40% and 70.24% for MI-SW and MI-Ton were obtained for one stroke patient, demonstrating the detectability of MI-SW and MI-Ton from the idle state. Furthermore, averaged session-to-session classification accuracies of 72.08% and 70% were achieved for ten healthy subjects and one stroke patient using the MI-Ton model. Significance. These results and the subjectwise strong correlations in classification accuracies between MI-SW and MI-Ton demonstrated the feasibility of detecting MI-SW from MI-Ton models.
Results of laser ranging collocations during 1983
NASA Technical Reports Server (NTRS)
Kolenkiewicz, R.
1984-01-01
The objective of laser ranging collocations is to compare the ability of two satellite laser ranging systems, located in the vicinity of one another, to measure the distance to an artificial Earth satellite in orbit over the sites. The similar measurement of this distance is essential before a new or modified laser system is deployed to worldwide locations in order to gather the data necessary to meet the scientific goals of the Crustal Dynamics Project. In order to be certain the laser systems are operating properly, they are periodically compared with each other. These comparisons or collocations are performed by locating the lasers side by side when they track the same satellite during the same time or pass. The data is then compared to make sure the lasers are giving essentially the same range results. Results of the three collocations performed during 1983 are given.
NASA Astrophysics Data System (ADS)
Swaidan, Waleeda; Hussin, Amran
2015-10-01
Most direct methods solve finite-time-horizon optimal control problems with a nonlinear programming solver. In this paper, we propose a numerical method for solving nonlinear optimal control problems with state and control inequality constraints. The method uses a quasilinearization technique and the Haar wavelet operational matrix to convert the nonlinear optimal control problem into a quadratic programming problem. The linear inequality constraints on the trajectory variables are converted into quadratic programming constraints using the Haar wavelet collocation method. The proposed method has been applied to solve the optimal control of a multi-item inventory model. The accuracy of the states, controls, and cost can be improved by increasing the Haar wavelet resolution.
Stochastic Collocation Method for Three-dimensional Groundwater Flow
NASA Astrophysics Data System (ADS)
Shi, L.; Zhang, D.
2008-12-01
The stochastic collocation method (SCM) has recently gained extensive attention in several disciplines. The numerical implementation of SCM only requires repetitive runs of an existing deterministic solver or code, as in Monte Carlo simulation, but it is generally much more efficient than the Monte Carlo method. In this paper, the stochastic collocation method is used to efficiently quantify the uncertainty of three-dimensional groundwater flow. We introduce the basic principles of common collocation methods, i.e., the tensor product collocation method (TPCM), Smolyak collocation method (SmCM), Stroud-2 collocation method (StCM), and probabilistic collocation method (PCM). Their accuracy, computational cost, and limitations are discussed. Illustrative examples reveal that the seamless combination of collocation techniques and existing simulators makes it possible for the new framework to efficiently handle complex stochastic problems.
3D steerable wavelets in practice.
Chenouard, Nicolas; Unser, Michael
2012-11-01
We introduce a systematic and practical design for steerable wavelet frames in 3D. Our steerable wavelets are obtained by applying a 3D version of the generalized Riesz transform to a primary isotropic wavelet frame. The novel transform is self-reversible (tight frame) and its elementary constituents (Riesz wavelets) can be efficiently rotated in any 3D direction by forming appropriate linear combinations. Moreover, the basis functions at a given location can be linearly combined to design custom (and adaptive) steerable wavelets. The features of the proposed method are illustrated with the processing and analysis of 3D biomedical data. In particular, we show how those wavelets can be used to characterize directional patterns and to detect edges by means of a 3D monogenic analysis. We also propose a new inverse-problem formalism along with an optimization algorithm for reconstructing 3D images from a sparse set of wavelet-domain edges. The scheme results in high-quality image reconstructions which demonstrate the feature-reduction ability of the steerable wavelets as well as their potential for solving inverse problems. PMID:22752138
Gauging the Effects of Exercises on Verb-Noun Collocations
ERIC Educational Resources Information Center
Boers, Frank; Demecheleer, Murielle; Coxhead, Averil; Webb, Stuart
2014-01-01
Many contemporary textbooks for English as a foreign language (EFL) and books for vocabulary study contain exercises with a focus on collocations, with verb-noun collocations (e.g. "make a mistake") being particularly popular as targets for collocation learning. Common exercise formats used in textbooks and other pedagogic materials…
Corpus-Based versus Traditional Learning of Collocations
ERIC Educational Resources Information Center
Daskalovska, Nina
2015-01-01
One of the aspects of knowing a word is the knowledge of which words it is usually used with. Since knowledge of collocations is essential for appropriate and fluent use of language, learning collocations should have a central place in the study of vocabulary. There are different opinions about the best ways of learning collocations. This study…
Is "Absorb Knowledge" an Improper Collocation?
ERIC Educational Resources Information Center
Su, Yujie
2010-01-01
Collocation is practically very tough to Chinese English learners. The main reason lies in the fact that English and Chinese belong to two distinct language systems. And the deep reason is that learners tend to develop different metaphorical concept in accordance with distinct ways of thinking in Chinese. The paper, taking "absorb…
A Collocation Method for Volterra Integral Equations
NASA Astrophysics Data System (ADS)
Kolk, Marek
2010-09-01
We propose a piecewise polynomial collocation method for solving linear Volterra integral equations of the second kind with logarithmic kernels which, in addition to a diagonal singularity, may have a singularity at the initial point of the interval of integration. An attainable order of the convergence of the method is studied. We illustrate our results with a numerical example.
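As a simple stand-in for the piecewise polynomial collocation scheme described above (which additionally handles the logarithmic and diagonal kernel singularities), here is a low-order trapezoidal marching solver for a second-kind Volterra equation with a smooth kernel; the function name and test problem are illustrative:

```python
import numpy as np

def volterra2(f, K, T, n):
    """March a trapezoidal discretization of the second-kind Volterra
    equation u(t) = f(t) + int_0^t K(t,s) u(s) ds on n uniform steps."""
    h = T / n
    t = np.linspace(0.0, T, n + 1)
    u = np.empty(n + 1)
    u[0] = f(t[0])
    for m in range(1, n + 1):
        s = h * (0.5 * K(t[m], t[0]) * u[0]
                 + sum(K(t[m], t[j]) * u[j] for j in range(1, m)))
        # the implicit endpoint term is moved to the left-hand side
        u[m] = (f(t[m]) + s) / (1.0 - 0.5 * h * K(t[m], t[m]))
    return t, u

# with K = 1 and f = 1 the exact solution is u(t) = exp(t)
t, u = volterra2(lambda t: 1.0, lambda t, s: 1.0, 1.0, 200)
print(abs(u[-1] - np.e))  # second-order accurate
```

Higher-order piecewise polynomial collocation replaces the trapezoid rule with collocation conditions at several points per subinterval, and graded meshes recover the attainable order near the singular initial point.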
Research of Gear Fault Detection in Morphological Wavelet Domain
NASA Astrophysics Data System (ADS)
Hong, Shi; Fang-jian, Shan; Bo, Cong; Wei, Qiu
2016-02-01
To extract mutation information from gear fault signals and achieve a valid fault diagnosis, a gear fault diagnosis method based on the morphological mean wavelet transform was designed. The morphological mean wavelet transform is a linear wavelet in the framework of morphological wavelets. Decomposing a gear fault signal with this transform produces signal synthesis operators and detail synthesis operators. The signal synthesis operators stay close to the original signal, while the detail synthesis operators contain the fault impact signal or interference signal, which can thus be captured. The simulation experiment results indicate that, compared with the Fourier transform, the morphological mean wavelet transform can perform time-frequency analysis of the original signal and effectively capture the positions where impact signals appear; and compared with the traditional linear wavelet transform, it has a simple structure, is easy to implement, is sensitive to local signal extrema, and has high de-noising ability, so it is better suited to real-time gear fault detection.
Schwarz and multilevel methods for quadratic spline collocation
Christara, C.C.; Smith, B.
1994-12-31
Smooth spline collocation methods offer an alternative to Galerkin finite element methods, as well as to Hermite spline collocation methods, for the solution of linear elliptic Partial Differential Equations (PDEs). Recently, spline collocation methods with optimal order of convergence have been developed for splines of certain degrees. Convergence proofs for smooth spline collocation methods are generally more difficult than for Galerkin finite elements or Hermite spline collocation, and they require stronger assumptions and more restrictions. However, numerical tests indicate that spline collocation methods are applicable to a wider class of problems than the analysis requires, and are very competitive with finite element methods with respect to efficiency. The authors will discuss Schwarz and multilevel methods for the solution of elliptic PDEs using quadratic spline collocation, and compare these with domain decomposition methods using substructuring. Numerical tests on a variety of parallel machines will also be presented. In addition, preliminary convergence analysis using Schwarz and/or maximum principle techniques will be presented.
Directional spherical multipole wavelets
Hayn, Michael; Holschneider, Matthias
2009-07-15
We construct a family of admissible analysis reconstruction pairs of wavelet families on the sphere. The construction is an extension of the isotropic Poisson wavelets. Similar to those, the directional wavelets allow a finite expansion in terms of off-center multipoles. Unlike the isotropic case, the directional wavelets are not a tight frame. However, at small scales, they almost behave like a tight frame. We give an explicit formula for the pseudodifferential operator given by the combination analysis-synthesis with respect to these wavelets. The Euclidean limit is shown to exist and an explicit formula is given. This allows us to quantify the asymptotic angular resolution of the wavelets.
Evaluating techniques for multivariate classification of non-collocated spatial data.
McKenna, Sean Andrew
2004-09-01
Multivariate spatial classification schemes such as regionalized classification or principal components analysis combined with kriging rely on all variables being collocated at the sample locations. In these approaches, classification of the multivariate data into a finite number of groups is done prior to the spatial estimation. However, in some cases, the variables may be sampled at different locations, with the extreme case being complete heterotopy of the data set. In these situations, it is necessary to adapt existing techniques to work with non-collocated data. Two approaches are considered: (1) kriging of existing data onto a series of 'collection points' where the classification into groups is completed and a measure of the degree of group membership is kriged to all other locations; and (2) independent kriging of all attributes to all locations, after which the classification is done at each location. Calculations are conducted using an existing groundwater chemistry data set in the upper Dakota aquifer in Kansas (USA) that was previously examined using regionalized classification (Bohling, 1997). This data set has all variables measured at all locations. To test the ability of the first approach for dealing with non-collocated data, each variable is reestimated at each sample location through a cross-validation process and the reestimated values are then used in the regionalized classification. The second approach for non-collocated data requires independent kriging of each attribute across the entire domain prior to classification. Hierarchical and non-hierarchical classification of all vectors is completed, and a computationally less burdensome classification approach, 'sequential discrimination', is developed that constrains the classified vectors to be chosen from those with a minimal multivariate kriging variance. Resulting classification and uncertainty maps are compared between all non-collocated approaches as well as to the original collocated approach.
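As a minimal, self-contained sketch of the kriging building block used in both approaches (illustrative only; the paper's regionalized classification and sequential discrimination are not reproduced, and the exponential covariance below is an assumption), ordinary kriging in one dimension can be written as:

```python
import numpy as np

def ordinary_kriging(xs, zs, x0, cov=lambda h: np.exp(-h)):
    """Ordinary kriging estimate at x0 from 1-D samples (illustrative sketch)."""
    xs = np.asarray(xs, float)
    n = len(xs)
    # bordered system: covariance block plus unbiasedness (Lagrange) row/column
    K = np.ones((n + 1, n + 1))
    K[n, n] = 0.0
    K[:n, :n] = cov(np.abs(np.subtract.outer(xs, xs)))
    rhs = np.append(cov(np.abs(xs - x0)), 1.0)
    w = np.linalg.solve(K, rhs)
    return float(w[:n] @ np.asarray(zs, float))
```

Because there is no nugget term here, the estimator interpolates the data exactly at the sample locations, which is a quick sanity check for the system above.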
Collocation method for fractional quantum mechanics
Amore, Paolo; Hofmann, Christoph P.; Saenz, Ricardo A.; Fernandez, Francisco M.
2010-12-15
We show that it is possible to obtain numerical solutions to quantum mechanical problems involving a fractional Laplacian, using a collocation approach based on little sinc functions, which discretizes the Schroedinger equation on a uniform grid. The different boundary conditions are naturally implemented using sets of functions with the appropriate behavior. Good convergence properties are observed. A comparison with results based on a Wentzel-Kramers-Brillouin analysis is performed.
NASA Astrophysics Data System (ADS)
Jones, B. J. T.
Wavelet analysis has become a major tool in many aspects of data handling, whether it be statistical analysis, noise removal or image reconstruction. Wavelet analysis has worked its way into fields as diverse as economics, medicine, geophysics, music and cosmology.
Visibility of wavelet quantization noise
NASA Technical Reports Server (NTRS)
Watson, A. B.; Yang, G. Y.; Solomon, J. A.; Villasenor, J.
1997-01-01
The discrete wavelet transform (DWT) decomposes an image into bands that vary in spatial frequency and orientation. It is widely used for image compression. Measures of the visibility of DWT quantization errors are required to achieve optimal compression. Uniform quantization of a single band of coefficients results in an artifact that we call DWT uniform quantization noise; it is the sum of a lattice of random amplitude basis functions of the corresponding DWT synthesis filter. We measured visual detection thresholds for samples of DWT uniform quantization noise in Y, Cb, and Cr color channels. The spatial frequency of a wavelet is r 2{sup -lambda}, where r is display visual resolution in pixels/degree, and lambda is the wavelet level. Thresholds increase rapidly with wavelet spatial frequency. Thresholds also increase from Y to Cr to Cb, and with orientation from lowpass to horizontal/vertical to diagonal. We construct a mathematical model for DWT noise detection thresholds that is a function of level, orientation, and display visual resolution. This allows calculation of a "perceptually lossless" quantization matrix for which all errors are in theory below the visual threshold. The model may also be used as the basis for adaptive quantization schemes.
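The quoted spatial-frequency relation can be written as a one-line helper (a sketch; the paper's full threshold model as a function of level, orientation, and channel is not reproduced here):

```python
def wavelet_spatial_frequency(r_pixels_per_degree, level):
    """Spatial frequency f = r * 2^(-lambda) of a level-lambda DWT band."""
    return r_pixels_per_degree * 2.0 ** (-level)

# e.g. at r = 32 pixels/degree, level 3 corresponds to 4 cycles/degree
```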
Wavelet Approximation in Data Assimilation
NASA Technical Reports Server (NTRS)
Tangborn, Andrew; Atlas, Robert (Technical Monitor)
2002-01-01
Estimation of the state of the atmosphere with the Kalman filter remains a distant goal because of the high computational cost of evolving the error covariance for both linear and nonlinear systems. Wavelet approximation is presented here as a possible solution that efficiently compresses both global and local covariance information. We demonstrate the compression characteristics on the error correlation field from a global two-dimensional chemical constituent assimilation, and implement an adaptive wavelet approximation scheme on the assimilation of the one-dimensional Burgers' equation. In the former problem, we show that 99% of the error correlation can be represented by just 3% of the wavelet coefficients, with good representation of localized features. In the Burgers' equation assimilation, the discrete linearized equations (tangent linear model) and analysis covariance are projected onto a wavelet basis and truncated to just 6% of the coefficients. A nearly optimal forecast is achieved and we show that errors due to truncation of the dynamics are no greater than the errors due to covariance truncation.
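As a toy illustration of the compression idea (not the paper's scheme; the orthonormal Haar basis and the hard-truncation rule below are assumptions), one can transform a field, keep only the largest-magnitude coefficients, and invert:

```python
import numpy as np

def haar_forward(x):
    """Orthonormal 1-D Haar transform (length must be a power of two)."""
    c = np.asarray(x, float).copy()
    n = len(c)
    while n > 1:
        a = (c[0:n:2] + c[1:n:2]) / np.sqrt(2.0)  # scaling (average) part
        d = (c[0:n:2] - c[1:n:2]) / np.sqrt(2.0)  # wavelet (detail) part
        c[: n // 2], c[n // 2 : n] = a, d
        n //= 2
    return c

def haar_inverse(c):
    """Inverse of haar_forward."""
    c = np.asarray(c, float).copy()
    n = 1
    while n < len(c):
        a, d = c[:n].copy(), c[n : 2 * n].copy()
        c[0 : 2 * n : 2] = (a + d) / np.sqrt(2.0)
        c[1 : 2 * n : 2] = (a - d) / np.sqrt(2.0)
        n *= 2
    return c

def compress(x, keep_fraction):
    """Zero all but the largest-magnitude fraction of Haar coefficients."""
    c = haar_forward(x)
    k = max(1, int(round(keep_fraction * len(c))))
    cutoff = np.sort(np.abs(c))[-k]
    c[np.abs(c) < cutoff] = 0.0
    return haar_inverse(c)
```

For a covariance matrix the same idea is applied along rows and columns; here the 1-D case keeps the sketch short.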
NOKIN1D: one-dimensional neutron kinetics based on a nodal collocation method
NASA Astrophysics Data System (ADS)
Verdú, G.; Ginestar, D.; Miró, R.; Jambrina, A.; Barrachina, T.; Soler, Amparo; Concejal, Alberto
2014-06-01
The TRAC-BF1 one-dimensional kinetic model is a formulation of the neutron diffusion equation in the two-energy-group approximation, based on the analytical nodal method (ANM). The advantage compared with a zero-dimensional kinetic model is that the axial power profile may vary with time due to thermal-hydraulic parameter changes and/or actions of the control systems, but it has the disadvantage that in unusual situations it fails to converge. The nodal collocation method developed for the neutron diffusion equation and applied to the kinetics resolution of TRAC-BF1 thermal-hydraulics is an adaptation of the traditional collocation methods for the discretization of partial differential equations, based on the development of the solution as a linear combination of analytical functions. A nodal collocation method based on an expansion of the neutron fluxes in Legendre polynomials in each cell was chosen. The qualification is carried out by the analysis of the turbine trip transient from the NEA benchmark in Peach Bottom NPP, using both the original 1D kinetics implemented in TRAC-BF1 and the 1D nodal collocation method.
Fryer, M.O.
1997-05-01
This paper describes the use of wavelet transform techniques to analyze typical data found in industrial applications. A way of detecting system changes using wavelet transforms is described. The results of applying this method are described for several typical applications. The wavelet technique is compared with the use of Fourier transform methods.
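The paper's specific detection procedure is not given in the abstract; a minimal sketch of one common wavelet-style change detector, assuming a Haar-shaped kernel and a single level shift (both assumptions), is:

```python
import numpy as np

def detect_change(x, w=8):
    """Locate an abrupt level shift via a Haar-kernel correlation (sketch)."""
    x = np.asarray(x, float)
    # Haar-shaped kernel: +1 over the first half-window, -1 over the second
    kernel = np.concatenate([np.ones(w), -np.ones(w)]) / (2.0 * w)
    score = np.correlate(x, kernel, mode="valid")
    # the response peaks where the two half-windows straddle the jump
    return int(np.argmax(np.abs(score))) + w
```

The half-window width w trades localization against noise robustness; larger w averages more samples on each side of the candidate change point.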
Rotation and Scale Invariant Wavelet Feature for Content-Based Texture Image Retrieval.
ERIC Educational Resources Information Center
Lee, Moon-Chuen; Pun, Chi-Man
2003-01-01
Introduces a rotation and scale invariant log-polar wavelet texture feature for image retrieval. The underlying feature extraction process involves a log-polar transform followed by an adaptive row shift invariant wavelet packet transform. Experimental results show that this rotation and scale invariant wavelet feature is quite effective for image…
Digital audio signal filtration based on the dual-tree wavelet transform
NASA Astrophysics Data System (ADS)
Yaseen, A. S.; Pavlov, A. N.
2015-07-01
A new method of digital audio signal filtration based on the dual-tree wavelet transform is described. An adaptive approach is proposed that allows automatic adjustment of the parameters of the wavelet filter being optimized. A significant improvement of the quality of signal filtration is demonstrated in comparison to the traditionally used filters based on the discrete wavelet transform.
NASA Astrophysics Data System (ADS)
Nejadmalayeri, Alireza; Vezolainen, Alexei; Vasilyev, Oleg V.
2011-11-01
With the recent development of parallel adaptive wavelet collocation method, adaptive numerical simulations of high Reynolds number turbulent flows have become feasible. The integration of turbulence modeling of different fidelity with adaptive wavelet methods results in a hierarchical approach for modeling and simulating turbulent flows in which all or most energetic parts of coherent eddies are dynamically resolved on self-adaptive computational grids, while modeling the effect of the unresolved incoherent or less energetic modes. This talk is the first attempt to estimate how spatial modes of both Coherent Vortex Simulations (CVS) and Stochastic Coherent Adaptive Large Eddy Simulations (SCALES) scale with Reynolds number. The computational complexity studies for both CVS and SCALES of linearly forced homogeneous turbulence are performed at effective non-adaptive resolutions of 256{sup 3}, 512{sup 3}, 1024{sup 3}, and 2048{sup 3}, corresponding to approximate Reλ of 70, 120, 190, and 320. The details of the simulations are discussed and the results of compression achieved by CVS and SCALES as well as scalability studies of the parallel algorithm for the aforementioned Taylor micro-scale Reynolds numbers are presented. This work was supported by NSF under grant No. CBET-0756046.
Symplectic wavelet transformation.
Fan, Hong-Yi; Lu, Hai-Liang
2006-12-01
Usually a wavelet transform is based on dilated-translated wavelets. We propose a symplectic-transformed-translated wavelet family ψ*_{r,s}(z − κ) (r, s are the symplectic transform parameters with |s|² − |r|² = 1, and κ is a translation parameter) generated from the mother wavelet ψ, and the corresponding wavelet transformation W_ψ f(r,s;κ) = ∫ (d²z/π) f(z) ψ*_{r,s}(z − κ). This new transform possesses well-behaved properties and is related to the optical Fresnel transform in its quantum mechanical version. PMID:17099740
NASA Astrophysics Data System (ADS)
Chang, Phang; Isah, Abdulnasir
2016-02-01
In this paper we propose the wavelet operational method based on shifted Legendre polynomial to obtain the numerical solutions of nonlinear fractional-order chaotic system known by fractional-order Brusselator system. The operational matrices of fractional derivative and collocation method turn the nonlinear fractional-order Brusselator system to a system of algebraic equations. Two illustrative examples are given in order to demonstrate the accuracy and simplicity of the proposed techniques.
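The shifted-Legendre operational matrices of the paper are not reproduced here; as a hedged, minimal example of the same collocation principle, the sketch below solves the linear test problem y' = y, y(0) = 1 on [0, 1] in a shifted Legendre basis P_j(2t − 1), collocating the residual at the roots of P_n (function names are illustrative):

```python
import math
import numpy as np
from numpy.polynomial import legendre as leg

def solve_exponential_ode(n=10):
    """Collocate y' = y, y(0) = 1 with shifted Legendre polynomials (sketch)."""
    # collocation points: roots of the shifted Legendre polynomial P_n on [0, 1]
    t = (leg.legroots([0.0] * n + [1.0]) + 1.0) / 2.0
    A = np.zeros((n + 1, n + 1))
    b = np.zeros(n + 1)
    for j in range(n + 1):
        cj = [0.0] * j + [1.0]                                   # coefficients of P_j
        phi = leg.legval(2.0 * t - 1.0, cj)
        dphi = 2.0 * leg.legval(2.0 * t - 1.0, leg.legder(cj))   # chain rule factor 2
        A[:n, j] = dphi - phi                                    # residual y' - y at nodes
        A[n, j] = leg.legval(-1.0, cj)                           # initial condition y(0) = 1
    b[n] = 1.0
    coef = np.linalg.solve(A, b)
    return lambda s: sum(
        coef[j] * leg.legval(2.0 * s - 1.0, [0.0] * j + [1.0]) for j in range(n + 1)
    )
```

The paper treats a nonlinear fractional-order system, so its algebraic system is nonlinear and involves a fractional-derivative operational matrix; the linear sketch only shows how collocation turns a differential equation into an algebraic one.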
Analysis of chromatograph systems using orthogonal collocation
NASA Technical Reports Server (NTRS)
Woodrow, P. T.
1974-01-01
Research is generating fundamental engineering design techniques and concepts for the chromatographic separator of a chemical analysis system for an unmanned, Martian roving vehicle. A chromatograph model is developed which incorporates previously neglected transport mechanisms. The numerical technique of orthogonal collocation is studied. To establish the utility of the method, three models of increasing complexity are considered, the latter two being limiting cases of the derived model: (1) a simple, diffusion-convection model; (2) a rate of adsorption limited, inter-intraparticle model; and (3) an inter-intraparticle model with negligible mass transfer resistance.
Subcell resolution in simplex stochastic collocation for spatial discontinuities
NASA Astrophysics Data System (ADS)
Witteveen, Jeroen A. S.; Iaccarino, Gianluca
2013-10-01
Subcell resolution has been used in the Finite Volume Method (FVM) to obtain accurate approximations of discontinuities in the physical space. Stochastic methods are usually based on local adaptivity for resolving discontinuities in the stochastic dimensions. However, the adaptive refinement in the probability space is ineffective in the non-intrusive uncertainty quantification framework, if the stochastic discontinuity is caused by a discontinuity in the physical space with a random location. The dependence of the discontinuity location in the probability space on the spatial coordinates then results in a staircase approximation of the statistics, which leads to first-order error convergence and an underprediction of the maximum standard deviation. To avoid these problems, we introduce subcell resolution into the Simplex Stochastic Collocation (SSC) method for obtaining a truly discontinuous representation of random spatial discontinuities in the interior of the cells discretizing the probability space. The presented SSC-SR method is based on resolving the discontinuity location in the probability space explicitly as function of the spatial coordinates and extending the stochastic response surface approximations up to the predicted discontinuity location. The applications to a linear advection problem, the inviscid Burgers' equation, a shock tube problem, and the transonic flow over the RAE 2822 airfoil show that SSC-SR resolves random spatial discontinuities with multiple stochastic and spatial dimensions accurately using a minimal number of samples.
Profiling the Collocation Use in ELT Textbooks and Learner Writing
ERIC Educational Resources Information Center
Tsai, Kuei-Ju
2015-01-01
The present study investigates the collocational profiles of (1) three series of graded textbooks for English as a foreign language (EFL) commonly used in Taiwan, (2) the written productions of EFL learners, and (3) the written productions of native speakers (NS) of English. These texts were examined against a purpose-built collocation list. Based…
The Repetition of Collocations in EFL Textbooks: A Corpus Study
ERIC Educational Resources Information Center
Wang, Jui-hsin Teresa; Good, Robert L.
2007-01-01
The importance of repetition in the acquisition of lexical items has been widely acknowledged in single-word vocabulary research but has been relatively neglected in collocation studies. Since collocations are considered one key to achieving language fluency, and because learners spend a great amount of time interacting with their textbooks, the…
The Effect of Grouping and Presenting Collocations on Retention
ERIC Educational Resources Information Center
Akpinar, Kadriye Dilek; Bardakçi, Mehmet
2015-01-01
The aim of this study is two-fold. Firstly, it attempts to determine the role of presenting collocations by organizing them based on (i) the keyword, (ii) topic related and (iii) grammatical aspect on retention of collocations. Secondly, it investigates the relationship between participants' general English proficiency and the presentation types…
Collocations of High Frequency Noun Keywords in Prescribed Science Textbooks
ERIC Educational Resources Information Center
Menon, Sujatha; Mukundan, Jayakaran
2012-01-01
This paper analyses the discourse of science through the study of collocational patterns of high frequency noun keywords in science textbooks used by upper secondary students in Malaysia. Research has shown that one of the areas of difficulty in science discourse concerns lexis, especially that of collocations. This paper describes a corpus-based…
Manchanda, P.; Meenakshi
2009-07-02
Recently Manchanda, Meenakshi and Siddiqi have studied the Haar-Vilenkin wavelet and a special type of non-uniform multiresolution analysis. The Haar-Vilenkin wavelet is a generalization of the Haar wavelet. Motivated by the paper of Gabardo and Nashed, we have introduced a class of multiresolution analyses extending the concept of classical multiresolution analysis. We present here a summary of these results. We hope that applications of these concepts to some significant real-world problems can be found.
Visibility of Wavelet Quantization Noise
NASA Technical Reports Server (NTRS)
Watson, Andrew B.; Yang, Gloria Y.; Solomon, Joshua A.; Villasenor, John; Null, Cynthia H. (Technical Monitor)
1995-01-01
The Discrete Wavelet Transform (DWT) decomposes an image into bands that vary in spatial frequency and orientation. It is widely used for image compression. Measures of the visibility of DWT quantization errors are required to achieve optimal compression. Uniform quantization of a single band of coefficients results in an artifact that is the sum of a lattice of random amplitude basis functions of the corresponding DWT synthesis filter, which we call DWT uniform quantization noise. We measured visual detection thresholds for samples of DWT uniform quantization noise in Y, Cb, and Cr color channels. The spatial frequency of a wavelet is r 2{sup -L}, where r is display visual resolution in pixels/degree, and L is the wavelet level. Amplitude thresholds increase rapidly with spatial frequency. Thresholds also increase from Y to Cr to Cb, and with orientation from low-pass to horizontal/vertical to diagonal. We describe a mathematical model to predict DWT noise detection thresholds as a function of level, orientation, and display visual resolution. This allows calculation of a "perceptually lossless" quantization matrix for which all errors are in theory below the visual threshold. The model may also be used as the basis for adaptive quantization schemes.
Covariance modeling in geodetic applications of collocation
NASA Astrophysics Data System (ADS)
Barzaghi, Riccardo; Cazzaniga, Noemi; De Gaetani, Carlo; Reguzzoni, Mirko
2014-05-01
The collocation method is widely applied in geodesy for estimating/interpolating gravity-related functionals. The crucial problem of this approach is the correct modeling of the empirical covariance functions of the observations. Different methods for obtaining reliable covariance models have been proposed in the past by many authors. However, there are still problems in fitting the empirical values, particularly when different functionals of T are used and combined. Through suitable linear combinations of positive degree variances, a model function that properly fits the empirical values can be obtained. This kind of condition is commonly handled by solver algorithms in linear programming problems. In this work the problem of modeling covariance functions has been addressed with an innovative method based on the simplex algorithm. This requires the definition of an objective function to be minimized (or maximized), where the unknown variables or their linear combinations are subject to some constraints. The non-standard use of the simplex method consists in defining constraints on the model covariance function in order to obtain the best fit on the corresponding empirical values. Further constraints are applied so as to maintain coherence with the model degree variances and prevent possible solutions with no physical meaning. The fitting procedure is iterative and, in each iteration, constraints are strengthened until the best possible fit between model and empirical functions is reached. The results obtained during the test phase of this new methodology show remarkable improvements with respect to the software packages available until now. Numerical tests are also presented to check the impact that improved covariance modeling has on the collocation estimate.
Research on Medical Image Enhancement Algorithm Based on GSM Model for Wavelet Coefficients
NASA Astrophysics Data System (ADS)
Wang, Lei; Jiang, Nian-de; Ning, Xing
Given the complexity and diverse applications of medical CT images, this article presents a medical CT image enhancement algorithm based on a Gaussian scale mixture (GSM) model for wavelet coefficients, in the framework of wavelet multi-scale analysis. The noisy image is first denoised with an adaptive Wiener filter. Second, through qualitative analysis and classification of the wavelet coefficients of signal and noise, the approximate distribution and statistical characteristics of the wavelet coefficients are described and combined with the GSM model. It is shown that this algorithm can improve the denoising result and visibly enhance the medical CT image.
Wavelet Analyses and Applications
ERIC Educational Resources Information Center
Bordeianu, Cristian C.; Landau, Rubin H.; Paez, Manuel J.
2009-01-01
It is shown how a modern extension of Fourier analysis known as wavelet analysis is applied to signals containing multiscale information. First, a continuous wavelet transform is used to analyse the spectrum of a nonstationary signal (one whose form changes in time). The spectral analysis of such a signal gives the strength of the signal in each…
Source Wavelet Phase Extraction
NASA Astrophysics Data System (ADS)
Naghadeh, Diako Hariri; Morley, Christopher Keith
2016-06-01
Extraction of the propagation wavelet phase from seismic data can be conducted using first-, second-, third- and fourth-order statistics. Three new methods are introduced: (1) combination of different moments, (2) windowed continuous wavelet transform and (3) maximum correlation with a cosine function. To compare the different methods, synthetic data with and without noise were chosen. Results show that first-, second- and third-order statistics are not able to preserve the wavelet phase. Kurtosis can preserve the propagation wavelet phase, but the signal-to-noise ratio can affect the phase extracted with this method, so it is unstable for data sets with low signal-to-noise ratio. Using a combination of different moments to extract the phase is more robust than applying kurtosis. The improvement occurs because zero-phase wavelets with reverse polarities have equal maximum kurtosis values, hence the correct wavelet polarity cannot be identified by kurtosis alone. Zero-phase wavelets with reverse polarities have distinct minimum and maximum values for the combination-of-different-moments method. These properties enable the technique to handle a finite data segment and to choose the correct wavelet polarity. Also, the use of different moments can decrease sensitivity to outliers. A windowed continuous wavelet transform is more sensitive to the signal-to-noise ratio than the combination-of-different-moments method, and if the scale for the wavelet is incorrect it encounters more problems in extracting the phase. When the effects of frequency bandwidth, signal-to-noise ratio and analyzing window length are considered, the results of extracting phase information from data without and with noise demonstrate that the combination of different moments is superior to the other methods introduced here.
Lifting wavelet method of target detection
NASA Astrophysics Data System (ADS)
Han, Jun; Zhang, Chi; Jiang, Xu; Wang, Fang; Zhang, Jin
2009-11-01
Image target recognition plays a very important role in the areas of scientific exploration, aeronautics, space-to-ground observation, photography and topographic mapping. Image noise, blur and various kinds of interference from complex environments have always affected the stability of recognition algorithms. In this paper, a lifting-wavelet target detection method is used to address problems of real-time performance, accuracy and interference rejection in target detection. First, histogram equalization and frame differencing are used to obtain the target region, and adaptive thresholding together with mathematical morphology operations then eliminates background errors. Second, multi-channel wavelet filters are used to denoise and enhance the original image, overcoming the noise sensitivity of general algorithms and reducing the false-alarm rate. The multi-resolution character of the lifting wavelet framework can be applied directly in the space-time domain for target detection and target feature extraction. The experimental results show that the designed lifting wavelet resolves the detection difficulties caused by target motion against complex backgrounds, effectively suppresses noise, and improves the efficiency and speed of detection.
Developing and Evaluating a Web-Based Collocation Retrieval Tool for EFL Students and Teachers
ERIC Educational Resources Information Center
Chen, Hao-Jan Howard
2011-01-01
The development of adequate collocational knowledge is important for foreign language learners; nonetheless, learners often have difficulties in producing proper collocations in the target language. Among the various ways of learning collocations, the DDL (data-driven learning) approach encourages independent learning of collocations and allows…
The Use of Verb Noun Collocations in Writing Stories among Iranian EFL Learners
ERIC Educational Resources Information Center
Bazzaz, Fatemeh Ebrahimi; Samad, Arshad Abd
2011-01-01
An important aspect of native speakers' communicative competence is collocational competence which involves knowing which words usually come together and which do not. This paper investigates the possible relationship between knowledge of collocations and the use of verb noun collocation in writing stories because collocational knowledge…
Developing and Evaluating a Chinese Collocation Retrieval Tool for CFL Students and Teachers
ERIC Educational Resources Information Center
Chen, Howard Hao-Jan; Wu, Jian-Cheng; Yang, Christine Ting-Yu; Pan, Iting
2016-01-01
The development of collocational knowledge is important for foreign language learners; unfortunately, learners often have difficulties producing proper collocations in the target language. Among the various ways of collocation learning, the DDL (data-driven learning) approach encourages the independent learning of collocations and allows learners…
The Learning Burden of Collocations: The Role of Interlexical and Intralexical Factors
ERIC Educational Resources Information Center
Peters, Elke
2016-01-01
This study investigates whether congruency (+/- literal translation equivalent), collocate-node relationship (adjective-noun, verb-noun, phrasal-verb-noun collocations), and word length influence the learning burden of EFL learners' learning collocations at the initial stage of form-meaning mapping. Eighteen collocations were selected on the basis…
Usability Study of Two Collocated Prototype System Displays
NASA Technical Reports Server (NTRS)
Trujillo, Anna C.
2007-01-01
Currently, most of the displays in control rooms can be categorized as status screens, alerts/procedures screens (or paper), or control screens (where the state of a component is changed by the operator). The primary focus of this line of research is to determine which pieces of information (status, alerts/procedures, and control) should be collocated. Two collocated displays were tested for ease of understanding in an automated desktop survey. This usability study was conducted as a prelude to a larger human-in-the-loop experiment in order to verify that the two new collocated displays were easy to learn and usable. The results indicate that while the DC display was preferred and yielded better performance than the MDO display, both collocated displays can be easily learned and used.
Periodized Daubechies wavelets
Restrepo, J.M.; Leaf, G.K.; Schlossnagle, G.
1996-03-01
The properties of periodized Daubechies wavelets on [0,1] are detailed, along with their counterparts which form a basis for L{sup 2}(R). Numerical examples illustrate the analytical estimates for convergence and demonstrate, by comparison with Fourier spectral methods, the superiority of wavelet projection methods for approximations. The analytical solution to inner products of periodized wavelets and their derivatives, which are known as connection coefficients, is presented, and their use is illustrated in the approximation of two commonly used differential operators. The periodization of the connection coefficients in Galerkin schemes is presented in detail.
EEG Artifact Removal Using a Wavelet Neural Network
NASA Technical Reports Server (NTRS)
Nguyen, Hoang-Anh T.; Musson, John; Li, Jiang; McKenzie, Frederick; Zhang, Guangfan; Xu, Roger; Richey, Carl; Schnell, Tom
2011-01-01
In this paper we developed a wavelet neural network (WNN) algorithm for Electroencephalogram (EEG) artifact removal without electrooculographic (EOG) recordings. The algorithm combines the universal approximation characteristics of neural networks and the time/frequency property of wavelets. We compared the WNN algorithm with the ICA technique and a wavelet thresholding method, which was realized by using Stein's unbiased risk estimate (SURE) with an adaptive gradient-based optimal threshold. Experimental results on a driving test data set show that WNN can remove EEG artifacts effectively without diminishing useful EEG information, even for very noisy data.
Statistical modelling of collocation uncertainty in atmospheric thermodynamic profiles
NASA Astrophysics Data System (ADS)
Fassò, A.; Ignaccolo, R.; Madonna, F.; Demoz, B. B.; Franco-Villoria, M.
2014-06-01
The quantification of measurement uncertainty of atmospheric parameters is a key factor in assessing the uncertainty of global change estimates given by numerical prediction models. One of the critical contributions to the uncertainty budget is related to the collocation mismatch in space and time among observations made at different locations. This is particularly important for vertical atmospheric profiles obtained by radiosondes or lidar. In this paper we propose a statistical modelling approach capable of explaining the relationship between collocation uncertainty and a set of environmental factors, height and distance between imperfectly collocated trajectories. The new statistical approach is based on the heteroskedastic functional regression (HFR) model which extends the standard functional regression approach and allows a natural definition of uncertainty profiles. Along this line, a five-fold decomposition of the total collocation uncertainty is proposed, giving both a profile budget and an integrated column budget. HFR is a data-driven approach valid for any atmospheric parameter, which can be assumed smooth. It is illustrated here by means of the collocation uncertainty analysis of relative humidity from two stations involved in the GCOS reference upper-air network (GRUAN). In this case, 85% of the total collocation uncertainty is ascribed to reducible environmental error, 11% to irreducible environmental error, 3.4% to adjustable bias, 0.1% to sampling error and 0.2% to measurement error.
Localized dynamic kinetic-energy-based models for stochastic coherent adaptive large eddy simulation
NASA Astrophysics Data System (ADS)
De Stefano, Giuliano; Vasilyev, Oleg V.; Goldstein, Daniel E.
2008-04-01
Stochastic coherent adaptive large eddy simulation (SCALES) is an extension of the large eddy simulation approach in which a wavelet filter-based dynamic grid adaptation strategy is employed to solve for the most "energetic" coherent structures in a turbulent field while modeling the effect of the less energetic background flow. In order to take full advantage of the ability of the method in simulating complex flows, the use of localized subgrid-scale models is required. In this paper, new local dynamic one-equation subgrid-scale models based on both eddy-viscosity and non-eddy-viscosity assumptions are proposed for SCALES. The models involve the definition of an additional field variable that represents the kinetic energy associated with the unresolved motions. This way, the energy transfer between resolved and residual flow structures is explicitly taken into account by the modeling procedure without an equilibrium assumption, as in the classical Smagorinsky approach. The wavelet-filtered incompressible Navier-Stokes equations for the velocity field, along with the additional evolution equation for the subgrid-scale kinetic energy variable, are numerically solved by means of the dynamically adaptive wavelet collocation solver. The proposed models are tested for freely decaying homogeneous turbulence at Reλ=72. It is shown that the SCALES results, obtained with less than 0.5% of the total nonadaptive computational nodes, closely match reference data from direct numerical simulation. In contrast to classical large eddy simulation, where the energetic small scales are poorly simulated, the agreement holds not only in terms of global statistical quantities but also in terms of spectral distribution of energy and, more importantly, enstrophy all the way down to the dissipative scales.
Entanglement Renormalization and Wavelets.
Evenbly, Glen; White, Steven R
2016-04-01
We establish a precise connection between discrete wavelet transforms and entanglement renormalization, a real-space renormalization group transformation for quantum systems on the lattice, in the context of free particle systems. Specifically, we employ Daubechies wavelets to build approximations to the ground state of the critical Ising model, then demonstrate that these states correspond to instances of the multiscale entanglement renormalization ansatz (MERA), producing the first known analytic MERA for critical systems. PMID:27104687
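The Daubechies filters underlying this construction are easy to verify numerically. A small check of the standard D4 filter pair (the coefficients are the textbook values, not anything specific to the MERA construction):

```python
import numpy as np

# Daubechies D4 analysis low-pass filter coefficients
s3 = np.sqrt(3.0)
h = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4 * np.sqrt(2.0))
# matching high-pass filter via the quadrature-mirror relation
g = ((-1) ** np.arange(4)) * h[::-1]

print(np.isclose(h.sum(), np.sqrt(2.0)))  # preserves constants
print(np.isclose(h @ h, 1.0))             # unit norm
print(np.isclose(h[:2] @ h[2:], 0.0))     # orthogonal to its even shifts
print(np.isclose(h @ g, 0.0))             # low-/high-pass orthogonality
```

These orthogonality conditions are exactly what make the discrete wavelet transform a unitary circuit, which is the property the paper exploits.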
Lagrange wavelets for signal processing.
Shi, Z; Wei, G W; Kouri, D J; Hoffman, D K; Bao, Z
2001-01-01
This paper deals with the design of interpolating wavelets based on a variety of Lagrange functions, combined with novel signal processing techniques for digital imaging. Halfband Lagrange wavelets, B-spline Lagrange wavelets and Gaussian Lagrange (Lagrange distributed approximating functional (DAF)) wavelets are presented as specific examples of the generalized Lagrange wavelets. Our approach combines the perceptually dependent visual group normalization (VGN) technique and a softer logic masking (SLM) method. These are utilized to rescale the wavelet coefficients, remove perceptual redundancy and obtain good visual performance for digital image processing. PMID:18255493
Daily water level forecasting using wavelet decomposition and artificial intelligence techniques
NASA Astrophysics Data System (ADS)
Seo, Youngmin; Kim, Sungwon; Kisi, Ozgur; Singh, Vijay P.
2015-01-01
Reliable water level forecasting for reservoir inflow is essential for reservoir operation. The objective of this paper is to develop and apply two hybrid models for daily water level forecasting and investigate their accuracy. These two hybrid models are wavelet-based artificial neural network (WANN) and wavelet-based adaptive neuro-fuzzy inference system (WANFIS). Wavelet decomposition is employed to decompose an input time series into approximation and detail components. The decomposed time series are used as inputs to artificial neural networks (ANN) and adaptive neuro-fuzzy inference system (ANFIS) for WANN and WANFIS models, respectively. Based on statistical performance indexes, the WANN and WANFIS models are found to produce better efficiency than the ANN and ANFIS models. WANFIS7-sym10 yields the best performance among all the models. It is found that wavelet decomposition improves the accuracy of ANN and ANFIS. This study evaluates the accuracy of the WANN and WANFIS models for different mother wavelets, including Daubechies, Symmlet and Coiflet wavelets. It is found that model performance is dependent on input sets and mother wavelets, and that wavelet decomposition using the mother wavelet db10 can further improve the efficiency of ANN and ANFIS models. Results obtained from this study indicate that the conjunction of wavelet decomposition and artificial intelligence models can be a useful tool for accurately forecasting daily water levels and can yield better efficiency than conventional forecasting models.
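The decompose-then-predict pipeline can be illustrated with a one-level Haar split and a linear least-squares predictor standing in for the ANN/ANFIS stage. The series, wavelet choice, and predictor below are deliberate simplifications of the paper's setup:

```python
import numpy as np

def haar_decompose(x):
    """One-level Haar analysis: approximation (scaled local means) and
    detail (scaled local differences) of an even-length series."""
    x = np.asarray(x, dtype=float)
    return (x[0::2] + x[1::2]) / np.sqrt(2.0), (x[0::2] - x[1::2]) / np.sqrt(2.0)

def haar_reconstruct(approx, detail):
    """Inverse of haar_decompose (the split is lossless)."""
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2.0)
    x[1::2] = (approx - detail) / np.sqrt(2.0)
    return x

# synthetic daily "water level": trend + seasonal cycle + noise
rng = np.random.default_rng(1)
n = 512
days = np.arange(n)
level = 0.01 * days + np.sin(2 * np.pi * days / 64) + 0.1 * rng.standard_normal(n)

a, d = haar_decompose(level)

# least-squares predictor of the next sample from the current subband pair,
# a linear stand-in for the ANN/ANFIS stage of the hybrid models
X = np.column_stack([a[:-1], d[:-1]])
y = level[2::2]                 # the sample following each decomposed pair
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
rmse = np.sqrt(np.mean((X @ coef - y) ** 2))
print(rmse)
```

The decomposed subbands carry the trend and cycle separately, which is what makes them better regression inputs than the raw lagged series.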
The multi-element probabilistic collocation method (ME-PCM): Error analysis and applications
Foo, Jasmine; Wan Xiaoliang; Karniadakis, George Em
2008-11-20
Stochastic spectral methods are numerical techniques for approximating solutions to partial differential equations with random parameters. In this work, we present and examine the multi-element probabilistic collocation method (ME-PCM), which is a generalized form of the probabilistic collocation method. In the ME-PCM, the parametric space is discretized and a collocation/cubature grid is prescribed on each element. Both full and sparse tensor product grids based on Gauss and Clenshaw-Curtis quadrature rules are considered. We prove analytically and observe in numerical tests that as the parameter space mesh is refined, the convergence rate of the solution depends on the quadrature rule of each element only through its degree of exactness. In addition, the L² error of the tensor product interpolant is examined and an adaptivity algorithm is provided. Numerical examples demonstrating adaptive ME-PCM are shown, including low-regularity problems and long-time integration. We test the ME-PCM on two-dimensional Navier-Stokes examples and a stochastic diffusion problem with various random input distributions and up to 50 dimensions. While the convergence rate of ME-PCM deteriorates in 50 dimensions, the error in the mean and variance is two orders of magnitude lower than the error obtained with the Monte Carlo method using only a small number of samples (e.g., 100). The computational cost of ME-PCM is found to be favorable when compared to the cost of other methods including stochastic Galerkin, Monte Carlo and quasi-random sequence methods.
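The element-wise Gauss collocation at the heart of ME-PCM can be sketched for a single uniform random parameter. The test function, element count, and quadrature order below are arbitrary illustrative choices, not values from the paper:

```python
import numpy as np

def element_gauss_mean(f, a, b, order):
    """Mean of f over [a, b] under a uniform density, via Gauss-Legendre
    collocation of the given order on that element."""
    x, w = np.polynomial.legendre.leggauss(order)
    nodes = 0.5 * (b - a) * x + 0.5 * (a + b)   # map [-1, 1] -> [a, b]
    return 0.5 * np.sum(w * f(nodes))           # Gauss weights sum to 2

def me_pcm_mean(f, n_elements, order, lo=-1.0, hi=1.0):
    """Multi-element collocation: discretize the parameter space into equal
    elements, collocate on each, and combine with element probabilities."""
    edges = np.linspace(lo, hi, n_elements + 1)
    return float(np.mean([element_gauss_mean(f, a, b, order)
                          for a, b in zip(edges[:-1], edges[1:])]))

exact = np.sinh(1.0)                    # E[exp(xi)] for xi ~ U(-1, 1)
approx = me_pcm_mean(np.exp, n_elements=4, order=3)
print(abs(approx - exact))
```

Refining the element mesh or raising the quadrature order both drive the error down, which mirrors the convergence behavior analyzed in the paper.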
Broom, Donald M
2006-01-01
The term adaptation is used in biology in three different ways. It may refer to changes which occur at the cell and organ level, or at the individual level, or at the level of gene action and evolutionary processes. Adaptation by cells, especially nerve cells, helps in: communication within the body, the distinguishing of stimuli, the avoidance of overload and the conservation of energy. The time course and complexity of these mechanisms vary. Adaptive characters of organisms, including adaptive behaviours, increase fitness, so this adaptation is evolutionary. The major part of this paper concerns adaptation by individuals and its relationships to welfare. In complex animals, feed forward control is widely used. Individuals predict problems and adapt by acting before the environmental effect is substantial. Much of adaptation involves brain control and animals have a set of needs, located in the brain and acting largely via motivational mechanisms, to regulate life. Needs may be for resources but are also for actions and stimuli which are part of the mechanism which has evolved to obtain the resources. Hence pigs do not just need food but need to be able to carry out actions like rooting in earth or manipulating materials which are part of foraging behaviour. The welfare of an individual is its state as regards its attempts to cope with its environment. This state includes various adaptive mechanisms including feelings and those which cope with disease. The part of welfare which is concerned with coping with pathology is health. Disease, which implies some significant effect of pathology, always results in poor welfare. Welfare varies over a range from very good, when adaptation is effective and there are feelings of pleasure or contentment, to very poor. A key point concerning the concept of individual adaptation in relation to welfare is that welfare may be good or poor while adaptation is occurring. Some adaptation is very easy and energetically cheap and
Wavelet analysis of atmospheric turbulence
Hudgins, L.H.
1992-12-31
After a brief review of the elementary properties of Fourier Transforms, the Wavelet Transform is defined in Part I. Basic results are given for admissible wavelets. The Multiresolution Analysis, or MRA (a mathematical structure which unifies a large class of wavelets with Quadrature Mirror Filters), is then introduced. Some fundamental aspects of wavelet design are then explored. The Discrete Wavelet Transform is discussed and, in the context of an MRA, is seen to supply a Fast Wavelet Transform which competes with the Fast Fourier Transform for efficiency. In Part II, the Wavelet Transform is developed in terms of the scale number variable s instead of the scale length variable a, where a = 1/s. Basic results such as the admissibility condition, conservation of energy, and the reconstruction theorem are proven in this context. After reviewing some motivation for the usual Fourier power spectrum, a definition is given for the wavelet power spectrum. This 'spectral density' is then interpreted in the context of spectral estimation theory. Parseval's theorem for wavelets then leads naturally to the Wavelet Cross Spectrum, Wavelet Cospectrum, and Wavelet Quadrature Spectrum. Wavelet Transforms are then applied in Part III to the analysis of atmospheric turbulence. Data collected over the ocean are examined in the wavelet transform domain for underlying structure. A brief overview of atmospheric turbulence is provided. Then the overall method of applying Wavelet Transform techniques to time series data is described. A trace study is included, showing some of the aspects of choosing the computational algorithm and selecting a specific analyzing wavelet. A model for generating synthetic turbulence data is developed, and is seen to yield useful results in comparison with real data for structural transitions. Results from the theory of Wavelet Spectral Estimation and Wavelet Cross-Transforms are applied to studying the momentum transport and the heat flux.
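A scale-by-scale wavelet power spectrum of the kind defined in Part II can be sketched with a real Morlet wavelet and direct convolution. The wavelet normalization, scale grid, and test signal here are illustrative choices, not the author's exact definitions:

```python
import numpy as np

def morlet(t, s, w0=6.0):
    """Real Morlet wavelet at scale s (L2-style normalization)."""
    x = t / s
    return np.exp(-0.5 * x ** 2) * np.cos(w0 * x) / np.sqrt(s)

def wavelet_power(signal, scales):
    """Wavelet 'power spectrum': mean squared transform coefficient at
    each scale, computed by direct convolution."""
    n = len(signal)
    t = np.arange(n) - n // 2
    return np.array([np.mean(np.convolve(signal, morlet(t, s), mode="same") ** 2)
                     for s in scales])

# a pure oscillation of period 64; for this wavelet the power should peak
# near scale s = period * w0 / (2 pi), i.e. around 61
n = 1024
sig = np.sin(2 * np.pi * np.arange(n) / 64.0)
scales = np.arange(10.0, 120.0, 2.0)
p = wavelet_power(sig, scales)
peak_scale = scales[np.argmax(p)]
print(peak_scale)
```

The peak of the scale-wise power recovers the dominant period, which is the basic diagnostic applied to the turbulence records in Part III.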
Collocation and Pattern Recognition Effects on System Failure Remediation
NASA Technical Reports Server (NTRS)
Trujillo, Anna C.; Press, Hayes N.
2007-01-01
Previous research found that operators prefer to have status, alerts, and controls located on the same screen. Unfortunately, that research was done with displays that were not designed specifically for collocation. In this experiment, twelve subjects evaluated two displays specifically designed for collocating system information against a baseline that consisted of dial status displays, a separate alert area, and a controls panel. These displays differed in the amount of collocation, pattern matching, and parameter movement relative to display size. During the data runs, subjects kept a randomly moving target centered on a display using a left-handed joystick while scanning the system displays to find a problem and correct it using the provided checklist. Results indicate that large parameter movement aided detection, while pattern recognition was needed for diagnosis; the collocated displays centralized all the information subjects needed, which reduced workload. Therefore, the collocated display with large parameter movement may be an acceptable display after familiarization because of the pattern recognition that may develop with training and use.
Multi-quadric collocation model of horizontal crustal movement
NASA Astrophysics Data System (ADS)
Chen, G.; Zeng, A. M.; Ming, F.; Jing, Y. F.
2015-11-01
To establish the horizontal crustal movement velocity field of the Chinese mainland, a Hardy multi-quadric fitting model and collocation are usually used, but the kernel function, nodes, and smoothing factor are difficult to determine in Hardy function interpolation, and in the collocation model the covariance function of the stochastic signal must be carefully constructed. In this paper, a new combined estimation method for establishing the velocity field, based on collocation and multi-quadric equation interpolation, is presented. The crustal movement estimation simultaneously takes into consideration an Euler vector as the crustal movement trend and the local distortions as stochastic signals, and a kernel function of the multi-quadric fitting model substitutes for the covariance function of collocation. The velocities of a set of 1070 reference stations were obtained from the Crustal Movement Observation Network of China (CMONOC), and the corresponding velocity field was established using the new combined estimation method. A total of 85 reference stations were used as check points, and the precision in the north and east directions was 1.25 and 0.80 mm yr-1, respectively. The result obtained by the new method agrees with those of the collocation method and multi-quadric interpolation, without requiring a covariance function for the signals.
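The Hardy multi-quadric interpolation that substitutes for the covariance function can be sketched on a toy scattered-data problem. The kernel shape parameter c and the synthetic "velocity" samples are invented for illustration, and no Euler-vector trend term is included:

```python
import numpy as np

def multiquadric_fit(points, values, c=0.5):
    """Coefficients of a Hardy multi-quadric interpolant with kernel
    phi(r) = sqrt(r^2 + c^2)."""
    r = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    return np.linalg.solve(np.sqrt(r ** 2 + c ** 2), values)

def multiquadric_eval(points, coeffs, query, c=0.5):
    """Evaluate the interpolant at query locations."""
    r = np.linalg.norm(query[:, None, :] - points[None, :, :], axis=-1)
    return np.sqrt(r ** 2 + c ** 2) @ coeffs

# toy scattered "velocity" samples on the unit square
rng = np.random.default_rng(2)
sites = rng.uniform(0.0, 1.0, size=(60, 2))
vel = np.sin(2 * np.pi * sites[:, 0]) + sites[:, 1] ** 2

coeffs = multiquadric_fit(sites, vel)
check = rng.uniform(0.1, 0.9, size=(20, 2))      # interior check points
truth = np.sin(2 * np.pi * check[:, 0]) + check[:, 1] ** 2
err = np.max(np.abs(multiquadric_eval(sites, coeffs, check) - truth))
print(err)
```

The interpolant reproduces the data exactly at the fitting sites, so accuracy is judged at held-out check points, analogous to the 85 check stations in the paper.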
Wavelet Approach for Operational Gamma Spectral Peak Detection - Preliminary Assessment
2012-02-01
Gamma spectroscopy for radionuclide identification typically involves locating spectral peaks and matching them with known nuclides in a knowledge base or database. Wavelet analysis, due to its ability to fit localized features, offers the potential for automatic detection of spectral peaks. Past studies of wavelet technologies for gamma spectral analysis essentially focused on direct fitting of raw gamma spectra. Although most of those studies demonstrated the potential of peak detection using wavelets, they often failed to produce new benefits for operational adaptations in radiological surveys. This work presents a different approach, with the operational objective of detecting only the nuclides that do not exist in the environment (anomalous nuclides). With this operational objective, the raw-count spectrum collected by a detector is first converted to a count-rate spectrum, followed by background subtraction prior to wavelet analysis. The experimental results suggest that this preprocessing is independent of detector type and background radiation, and is capable of improving the peak detection rates using wavelets. This process opens the door to a practical adaptation of wavelet technologies for gamma spectral surveying devices.
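The background-subtract-then-detect idea can be sketched with a Ricker (Mexican-hat) correlation on a synthetic spectrum. The background model, wavelet width, and threshold rule below are illustrative stand-ins, not the paper's procedure, and the background is assumed known here:

```python
import numpy as np

def ricker(n, width):
    """Mexican-hat (Ricker) wavelet sampled at n points."""
    t = np.arange(n) - n // 2
    x = t / width
    return (1.0 - x ** 2) * np.exp(-0.5 * x ** 2)

def wavelet_peaks(spectrum, width=4.0, nsigma=5.0):
    """Correlate with a Ricker wavelet and flag local maxima that stand
    above nsigma times a robust (MAD-based) noise estimate."""
    resp = np.convolve(spectrum, ricker(int(10 * width) | 1, width), mode="same")
    noise = 1.4826 * np.median(np.abs(resp - np.median(resp)))
    return [i for i in range(1, len(resp) - 1)
            if resp[i] > resp[i - 1] and resp[i] >= resp[i + 1]
            and resp[i] > nsigma * noise]

rng = np.random.default_rng(3)
channels = np.arange(1024)
background = 50.0 * np.exp(-channels / 400.0)               # smooth continuum
peak = 80.0 * np.exp(-0.5 * ((channels - 300) / 3.0) ** 2)  # anomalous line
counts = rng.poisson(background + peak).astype(float)

net = counts - background          # background subtraction (assumed known)
found = wavelet_peaks(net)
print(found)
```

Because the zero-mean wavelet suppresses the residual slowly varying continuum, only the localized anomalous line survives the threshold.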
Three-dimensional compression scheme based on wavelet transform
NASA Astrophysics Data System (ADS)
Yang, Wu; Xu, Hui; Liao, Mengyang
1999-03-01
In this paper, a 3D compression method based on a separable wavelet transform is discussed in detail. The most commonly used digital modalities generate multiple slices in a single examination, which are normally anatomically or physiologically correlated to each other. 3D wavelet compression methods can achieve more efficient compression by exploiting the correlation between slices. The first step is based on a separable 3D wavelet transform. Considering the difference between pixel distances within a slice and those between slices, one biorthogonal Antonini filter bank is applied within 2D slices and a second biorthogonal Villa4 filter bank in the slice direction. Then, the S+P transform is applied to the low-resolution wavelet components and an optimal quantizer is presented after analysis of the quantization noise. We use an optimal bit allocation algorithm which, instead of eliminating the coefficients of high-resolution components in smooth areas, minimizes the system reconstruction distortion at a given bit-rate. Finally, to maintain high coding efficiency and adapt to the different properties of each component, a comprehensive entropy coding method is proposed, in which an arithmetic coding method is applied to high-resolution components and an adaptive Huffman coding method to low-resolution components. Our experimental results are evaluated by several image measures and our 3D wavelet compression scheme is shown to be more efficient than 2D wavelet compression.
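The separable 3D transform in the first step can be sketched with a one-level Haar filter applied along each axis. The paper uses biorthogonal Antonini/Villa filter banks; Haar is substituted here purely for brevity, and the volume is synthetic:

```python
import numpy as np

def haar_axis(a, axis):
    """One-level Haar transform along one axis: the first half of that
    axis becomes averages, the second half differences."""
    a = np.moveaxis(a, axis, 0)
    lo = (a[0::2] + a[1::2]) / np.sqrt(2.0)
    hi = (a[0::2] - a[1::2]) / np.sqrt(2.0)
    return np.moveaxis(np.concatenate([lo, hi]), 0, axis)

def haar3d(volume):
    """Separable 3D transform: apply the 1D transform along x, y, z."""
    out = volume.astype(float)
    for ax in range(3):
        out = haar_axis(out, ax)
    return out

# a smooth "slice stack": correlated along all three axes
x, y, z = np.meshgrid(*(np.linspace(0, 1, 16),) * 3, indexing="ij")
vol = np.sin(2 * np.pi * x) * np.cos(2 * np.pi * y) + z

coeffs = haar3d(vol)
# the orthogonal transform preserves energy ...
print(np.allclose(np.sum(coeffs ** 2), np.sum(vol ** 2)))
# ... while concentrating it in the all-lowpass (LLL) corner
frac = np.sum(coeffs[:8, :8, :8] ** 2) / np.sum(coeffs ** 2)
print(frac)
```

The energy compaction into the LLL subband is what the inter-slice correlation buys: most coefficients outside that corner are near zero and cheap to code.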
Wavelet differential neural network observer.
Chairez, Isaac
2009-09-01
State estimation for uncertain systems affected by external noises is an important problem in control theory. This paper deals with a state observation problem when the dynamic model of a plant contains uncertainties or is completely unknown. A differential neural network (NN) approach is applied in this uninformative situation, but with activation functions described by wavelets. A new learning law, containing an adaptive adjustment rate, is suggested to imply the stability condition for the free parameters of the observer. Nominal weights are adjusted during the preliminary training process using the least mean square (LMS) method. Lyapunov theory is used to obtain the upper bounds for the weight dynamics as well as for the mean squared estimation error. Two numerical examples illustrate this approach: first, a nonlinear electric system governed by Chua's equation, and second, the Lorenz oscillator. Both systems are assumed to be affected by external perturbations and their parameters are unknown. PMID:19674951
Statistical modelling of collocation uncertainty in atmospheric thermodynamic profiles
NASA Astrophysics Data System (ADS)
Fassò, A.; Ignaccolo, R.; Madonna, F.; Demoz, B. B.
2013-08-01
The uncertainty of important atmospheric parameters is a key factor for assessing the uncertainty of global change estimates given by numerical prediction models. One of the critical points of the uncertainty budget is related to the collocation mismatch in space and time among different observations. This is particularly important for vertical atmospheric profiles obtained by radiosondes or LIDAR. In this paper we consider a statistical modelling approach to understand to what extent collocation uncertainty is related to environmental factors, height and distance between the trajectories. To do this we introduce a new statistical approach, based on the heteroskedastic functional regression (HFR) model, which extends the standard functional regression approach and allows a natural definition of uncertainty profiles. Moreover, using this modelling approach, a five-fold uncertainty decomposition is proposed. Finally, the HFR approach is illustrated by the collocation uncertainty analysis of relative humidity from two stations involved in the GCOS reference upper-air network (GRUAN).
Adaptive multiresolution modeling of groundwater flow in heterogeneous porous media
NASA Astrophysics Data System (ADS)
Malenica, Luka; Gotovac, Hrvoje; Srzic, Veljko; Andric, Ivo
2016-04-01
different temporal lines and local time stepping control. A critical aspect of time integration accuracy is the construction of the spatial stencil used for the accurate calculation of spatial derivatives. Since the common approach for wavelets and splines uses a finite difference operator, we developed here a collocation operator that includes solution values and the differential operator. In this way, the new improved algorithm is adaptive in space and time, enabling accurate solutions of groundwater flow problems, especially in highly heterogeneous porous media with large lnK variances and different correlation length scales. In addition, differences between the collocation and finite volume approaches are discussed. Finally, results show the application of the methodology to groundwater flow problems in highly heterogeneous confined and unconfined aquifers.
Comparison of Implicit Collocation Methods for the Heat Equation
NASA Technical Reports Server (NTRS)
Kouatchou, Jules; Jezequel, Fabienne; Zukor, Dorothy (Technical Monitor)
2001-01-01
We combine a high-order compact finite difference scheme to approximate spatial derivatives and collocation techniques for the time component to numerically solve the two-dimensional heat equation. We use two approaches to implement the collocation methods. The first one is based on an explicit computation of the coefficients of polynomials and the second one relies on differential quadrature. We compare them by studying their merits and analyzing their numerical performance. All our computations, based on parallel algorithms, are carried out on the CRAY SV1.
Wavelets on Planar Tesselations
Bertram, M.; Duchaineau, M.A.; Hamann, B.; Joy, K.I.
2000-02-25
We present a new technique for progressive approximation and compression of polygonal objects in images. Our technique uses local parameterizations defined by meshes of convex polygons in the plane. We generalize a tensor product wavelet transform to polygonal domains to perform multiresolution analysis and compression of image regions. The advantage of our technique over conventional wavelet methods is that the domain is an arbitrary tessellation rather than, for example, a uniform rectilinear grid. We expect that this technique has many applications, including image compression, progressive transmission, radiosity, virtual reality, and image morphing.
Electromagnetic spatial coherence wavelets.
Castaneda, Roman; Garcia-Sucerquia, Jorge
2006-01-01
The recently introduced concept of spatial coherence wavelets is generalized to describe the propagation of electromagnetic fields in free space. To this aim, the spatial coherence wavelet tensor is introduced as an elementary quantity, in terms of which the formerly known quantities for this domain can be expressed. It allows for the analysis of the relationship between the spatial coherence properties and the polarization state of the electromagnetic wave. This approach is completely consistent with the recently introduced unified theory of coherence and polarization for random electromagnetic beams, but it provides further insight into the causal relationship between the polarization states at different planes along the propagation path. PMID:16478063
NASA Astrophysics Data System (ADS)
Bargatze, L. F.
2015-12-01
Active Data Archive Product Tracking (ADAPT) is a collection of software routines that permits one to generate XML metadata files to describe and register data products in support of the NASA Heliophysics Virtual Observatory VxO effort. ADAPT is also a philosophy. The ADAPT concept is to use any and all available metadata associated with scientific data to produce XML metadata descriptions in a consistent, uniform, and organized fashion to provide blanket access to the full complement of data stored on a targeted data server. In this poster, we present an application of ADAPT to describe all of the data products that are stored by using the Common Data File (CDF) format served out by the CDAWEB and SPDF data servers hosted at the NASA Goddard Space Flight Center. These data servers are the primary repositories for NASA Heliophysics data. For this purpose, the ADAPT routines have been used to generate data resource descriptions by using an XML schema named Space Physics Archive, Search, and Extract (SPASE). SPASE is the designated standard for documenting Heliophysics data products, as adopted by the Heliophysics Data and Model Consortium. The set of SPASE XML resource descriptions produced by ADAPT includes high-level descriptions of numerical data products, display data products, or catalogs and also includes low-level "Granule" descriptions. A SPASE Granule is effectively a universal access metadata resource; a Granule associates an individual data file (e.g. a CDF file) with a "parent" high-level data resource description, assigns a resource identifier to the file, and lists the corresponding access URL(s). The CDAWEB and SPDF file systems were queried to provide the input required by the ADAPT software to create an initial set of SPASE metadata resource descriptions. Then, the CDAWEB and SPDF data repositories were queried subsequently on a nightly basis and the CDF file lists were checked for any changes such as the occurrence of new, modified, or deleted
Jakeman, John D.; Narayan, Akil; Xiu, Dongbin
2013-06-01
We propose a multi-element stochastic collocation method that can be applied in high-dimensional parameter space for functions with discontinuities lying along manifolds of general geometries. The key feature of the method is that the parameter space is decomposed into multiple elements defined by the discontinuities and thus only the minimal number of elements are utilized. On each of the resulting elements the function is smooth and can be approximated using high-order methods with fast convergence properties. The decomposition strategy is in direct contrast to the traditional multi-element approaches which define the sub-domains by repeated splitting of the axes in the parameter space. Such methods are more prone to the curse-of-dimensionality because of the fast growth of the number of elements caused by the axis based splitting. The present method is a two-step approach. First, a discontinuity detector is used to partition parameter space into disjoint elements in each of which the function is smooth. The detector uses an efficient combination of the high-order polynomial annihilation technique along with adaptive sparse grids, and this allows resolution of general discontinuities with a smaller number of points when the discontinuity manifold is low-dimensional. After partitioning, an adaptive technique based on the least orthogonal interpolant is used to construct a generalized Polynomial Chaos surrogate on each element. The adaptive technique reuses all information from the partitioning and is variance-suppressing. We present numerous numerical examples that illustrate the accuracy, efficiency, and generality of the method. When compared against standard locally-adaptive sparse grid methods, the present method uses far fewer collocation samples and is more accurate.
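The benefit of splitting the parameter space at a detected discontinuity, rather than approximating across it, can be sketched in one dimension. The jump detector here is a crude first-difference scan standing in for the paper's polynomial annihilation technique, and the test function is invented:

```python
import numpy as np

def cheb_interp_error(f, lo, hi, deg, ntest=200):
    """Max error of degree-`deg` interpolation of f at Chebyshev nodes
    on [lo, hi]."""
    k = np.arange(deg + 1)
    nodes = 0.5 * (lo + hi) + 0.5 * (hi - lo) * np.cos((2 * k + 1) * np.pi / (2 * deg + 2))
    poly = np.polynomial.polynomial.Polynomial.fit(nodes, f(nodes), deg)
    grid = np.linspace(lo, hi, ntest)
    return np.max(np.abs(poly(grid) - f(grid)))

# smooth on each side of a jump at x = 0.3
f = lambda x: np.where(x < 0.3, np.sin(3 * x), 2.0 + np.cos(5 * x))

# one global element: the discontinuity ruins polynomial convergence
global_err = cheb_interp_error(f, 0.0, 1.0, deg=12)

# detect the jump with a first-difference scan, then collocate per element
xs = np.linspace(0.0, 1.0, 401)
i = int(np.argmax(np.abs(np.diff(f(xs)))))
split_err = max(cheb_interp_error(f, 0.0, xs[i], deg=12),
                cheb_interp_error(f, xs[i + 1], 1.0, deg=12))
print(global_err, split_err)
```

On each discontinuity-free element the polynomial approximation converges spectrally, which is the property the multi-element decomposition restores.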
Collocational Strategies of Arab Learners of English: A Study in Lexical Semantics.
ERIC Educational Resources Information Center
Muhammad, Raji Zughoul; Abdul-Fattah, Hussein S.
Arab learners of English encounter a serious problem with collocational sequences. The present study purports to determine the extent to which university English language majors can use English collocations properly. A two-form translation test of 16 Arabic collocations was administered to both graduate and undergraduate students of English. The…
L2 Learner Production and Processing of Collocation: A Multi-Study Perspective
ERIC Educational Resources Information Center
Siyanova, Anna; Schmitt, Norbert
2008-01-01
This article presents a series of studies focusing on L2 production and processing of adjective-noun collocations (e.g., "social services"). In Study 1, 810 adjective-noun collocations were extracted from 31 essays written by Russian learners of English. About half of these collocations appeared frequently in the British National Corpus (BNC);…
An Exploratory Study of Collocational Use by ESL Students--A Task Based Approach
ERIC Educational Resources Information Center
Fan, May
2009-01-01
Collocation is an aspect of language generally considered arbitrary by nature and problematic to L2 learners who need collocational competence for effective communication. This study attempts, from the perspective of L2 learners, to have a deeper understanding of collocational use and some of the problems involved, by adopting a task based…
Redefining Creativity--Analyzing Definitions, Collocations, and Consequences
ERIC Educational Resources Information Center
Kampylis, Panagiotis G.; Valtanen, Juri
2010-01-01
How holistically is human creativity defined, investigated, and understood? Until recently, most scientific research on creativity has focused on its positive side. However, creativity might not only be a desirable resource but also be a potential threat. In order to redefine creativity we need to analyze and understand definitions, collocations,…
Domain identification in impedance computed tomography by spline collocation method
NASA Technical Reports Server (NTRS)
Kojima, Fumio
1990-01-01
A method for estimating an unknown domain in elliptic boundary value problems is considered. The problem is formulated as an inverse problem of integral equations of the second kind. A computational method is developed using a spline collocation scheme. The results can be applied to the inverse problem of impedance computed tomography (ICT) for image reconstruction.
Collocation Method for Numerical Solution of Coupled Nonlinear Schroedinger Equation
Ismail, M. S.
2010-09-30
The coupled nonlinear Schroedinger equation models several interesting physical phenomena and serves as a model equation for optical fibers with linear birefringence. In this paper we use a collocation method to solve this equation, and we test this method for stability and accuracy. Numerical tests using a single soliton and the interaction of three solitons are used to test the resulting scheme.
Recent advances in (soil moisture) triple collocation analysis
Technology Transfer Automated Retrieval System (TEKTRAN)
To date, triple collocation (TC) analysis is one of the most important methods for the global scale evaluation of remotely sensed soil moisture data sets. In this study we review existing implementations of soil moisture TC analysis as well as investigations of the assumptions underlying the method....
Beyond triple collocation: Applications to satellite soil moisture
Technology Transfer Automated Retrieval System (TEKTRAN)
Triple collocation is now routinely used to resolve the exact (linear) relationships between multiple measurements and/or representations of a geophysical variable that are subject to errors. It has been utilized in the context of calibration, rescaling and error characterisation to allow comparison...
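The covariance-based triple collocation estimator behind these studies can be sketched directly. The three synthetic "products" below, with their gains and error variances, are invented for illustration:

```python
import numpy as np

def triple_collocation_errors(x, y, z):
    """Error variances of three mutually independent, linearly related
    measurements of the same truth (each in its own units)."""
    c = np.cov(np.vstack([x, y, z]))
    ex = c[0, 0] - c[0, 1] * c[0, 2] / c[1, 2]
    ey = c[1, 1] - c[0, 1] * c[1, 2] / c[0, 2]
    ez = c[2, 2] - c[0, 2] * c[1, 2] / c[0, 1]
    return ex, ey, ez

# synthetic products: different gains and independent errors
rng = np.random.default_rng(4)
truth = rng.standard_normal(200_000)
x = truth + 0.3 * rng.standard_normal(truth.size)        # error var 0.09
y = 0.8 * truth + 0.5 * rng.standard_normal(truth.size)  # error var 0.25
z = 1.2 * truth + 0.4 * rng.standard_normal(truth.size)  # error var 0.16
ex, ey, ez = triple_collocation_errors(x, y, z)
print(ex, ey, ez)
```

The cross-covariance ratio cancels the shared truth signal, so each product's error variance is recovered without ever observing the truth; the error-independence assumptions this relies on are exactly those examined in the studies above.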
The Effects of Vocabulary Learning on Collocation and Meaning
ERIC Educational Resources Information Center
Webb, Stuart; Kagimoto, Eve
2009-01-01
This study investigates the effects of receptive and productive vocabulary tasks on learning collocation and meaning. Japanese English as a foreign language students learned target words in three glossed sentences and in a cloze task. To determine the effects of the treatments, four tests were used to measure receptive and productive knowledge of…
Evaluation of assumptions in soil moisture triple collocation analysis
Technology Transfer Automated Retrieval System (TEKTRAN)
Triple collocation analysis (TCA) enables estimation of error variances for three or more products that retrieve or estimate the same geophysical variable using mutually-independent methods. Several statistical assumptions regarding the statistical nature of errors (e.g., mutual independence and ort...
Beyond Single Words: The Most Frequent Collocations in Spoken English
ERIC Educational Resources Information Center
Shin, Dongkwang; Nation, Paul
2008-01-01
This study presents a list of the highest frequency collocations of spoken English based on carefully applied criteria. In the literature, more than forty terms have been used for designating multi-word units, which are generally not well defined. To avoid this confusion, six criteria are strictly applied. The ten million word BNC spoken section…
Yun, Jong Pil; Jeon, Yong-Ju; Choi, Doo-chul; Kim, Sang Woo
2012-05-01
We propose a new defect detection algorithm for scale-covered steel wire rods. The algorithm incorporates an adaptive wavelet filter that is designed on the basis of lattice parameterization of orthogonal wavelet bases. This approach offers the opportunity to design orthogonal wavelet filters via optimization methods. To improve the performance and the flexibility of wavelet design, we propose the use of the undecimated discrete wavelet transform, and separate design of column and row wavelet filters but with a common cost function. The coefficients of the wavelet filters are optimized by the so-called univariate dynamic encoding algorithm for searches (uDEAS), which searches the minimum value of a cost function designed to maximize the energy difference between defects and background noise. Moreover, for improved detection accuracy, we propose an enhanced double-threshold method. Experimental results for steel wire rod surface images obtained from actual steel production lines show that the proposed algorithm is effective. PMID:22561939
ERIC Educational Resources Information Center
Yamashita, Junko; Jiang, Nan
2010-01-01
This study investigated first language (L1) influence on the acquisition of second language (L2) collocations using a framework based on Kroll and Stewart (1994) and Jiang (2000), by comparing the performance on a phrase-acceptability judgment task among native speakers of English, Japanese English as a second language (ESL) users, and Japanese…
Basis Selection for Wavelet Regression
NASA Technical Reports Server (NTRS)
Wheeler, Kevin R.; Lau, Sonie (Technical Monitor)
1998-01-01
A wavelet basis selection procedure is presented for wavelet regression. Both the basis and the threshold are selected using cross-validation. The method includes the capability of incorporating prior knowledge on the smoothness (or shape of the basis functions) into the basis selection procedure. The results of the method are demonstrated on sampled functions widely used in the wavelet regression literature. The results of the method are contrasted with other published methods.
Discrete wavelet analysis of power system transients
Wilkinson, W.A.; Cox, M.D.
1996-11-01
Wavelet analysis is a new method for studying power system transients. Through wavelet analysis, transients are decomposed into a series of wavelet components, each of which is a time-domain signal that covers a specific octave frequency band. This paper presents the basic ideas of discrete wavelet analysis. A variety of actual and simulated transient signals are then analyzed using the discrete wavelet transform, helping to demonstrate the power of wavelet analysis.
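As an illustration of the octave-band decomposition described here, a minimal orthogonal Haar DWT (a generic sketch; the paper does not prescribe this particular wavelet) splits a sampled transient into detail bands plus a coarse remainder:

```python
import numpy as np

def haar_dwt_bands(signal, levels):
    """1-D Haar wavelet decomposition: returns one detail-coefficient array
    per octave band (finest first) plus the final coarse approximation."""
    a = np.asarray(signal, dtype=float)
    bands = []
    for _ in range(levels):
        even, odd = a[0::2], a[1::2]
        d = (even - odd) / np.sqrt(2.0)   # detail: upper half of current band
        a = (even + odd) / np.sqrt(2.0)   # approximation: lower half
        bands.append(d)
    bands.append(a)
    return bands
```

Because the transform is orthonormal, the energy of the transient is exactly partitioned across the bands, which is what makes per-band inspection of transients meaningful.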
Weak transient fault feature extraction based on an optimized Morlet wavelet and kurtosis
NASA Astrophysics Data System (ADS)
Qin, Yi; Xing, Jianfeng; Mao, Yongfang
2016-08-01
Aimed at solving the key problem in weak transient detection, the present study proposes a new transient feature extraction approach using the optimized Morlet wavelet transform, kurtosis index and soft-thresholding. Firstly, a fast optimization algorithm based on the Shannon entropy is developed to obtain the optimized Morlet wavelet parameter. Compared to the existing Morlet wavelet parameter optimization algorithm, this algorithm has lower computation complexity. After performing the optimized Morlet wavelet transform on the analyzed signal, the kurtosis index is used to select the characteristic scales and obtain the corresponding wavelet coefficients. From the time-frequency distribution of the periodic impulsive signal, it is found that the transient signal can be reconstructed by the wavelet coefficients at several characteristic scales, rather than the wavelet coefficients at just one characteristic scale, so as to improve the accuracy of transient detection. Due to the noise influence on the characteristic wavelet coefficients, the adaptive soft-thresholding method is applied to denoise these coefficients. With the denoised wavelet coefficients, the transient signal can be reconstructed. The proposed method was applied to the analysis of two simulated signals, and the diagnosis of a rolling bearing fault and a gearbox fault. The superiority of the method over the fast kurtogram method was verified by the results of simulation analysis and real experiments. It is concluded that the proposed method is extremely suitable for extracting the periodic impulsive feature from strong background noise.
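Two ingredients of the pipeline above, the kurtosis index and soft-thresholding, are simple to state. The sketch below shows generic versions; the paper's optimized Morlet transform and adaptive threshold selection are not reproduced here.

```python
import numpy as np

def kurtosis(x):
    """Kurtosis index: large values flag impulsive (transient-rich) scales."""
    x = np.asarray(x, dtype=float)
    m = x.mean()
    var = np.mean((x - m) ** 2)
    return float(np.mean((x - m) ** 4) / var ** 2)

def soft_threshold(coeffs, thr):
    """Soft-thresholding: shrink coefficient magnitudes toward zero by thr,
    zeroing anything smaller than thr (a standard wavelet denoising step)."""
    c = np.asarray(coeffs, dtype=float)
    return np.sign(c) * np.maximum(np.abs(c) - thr, 0.0)
```

In the scheme above, kurtosis would rank candidate scales (periodic impulses drive it far above the Gaussian value of 3), and soft-thresholding would then denoise the coefficients at the selected scales before reconstruction.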
Zahra, Noor e; Sevindir, Huliya A.; Aslan, Zafar; Siddiqi, A. H.
2012-07-17
The aim of this study is to provide emerging applications of wavelet methods to medical signals and images, such as electrocardiogram, electroencephalogram, functional magnetic resonance imaging, computer tomography, X-ray and mammography. Interpretation of these signals and images is quite important. Nowadays wavelet methods have a significant impact on the science of medical imaging and the diagnosis of disease and screening protocols. Based on our initial investigations, future directions include neurosurgical planning and improved assessment of risk for individual patients, improved assessment and strategies for the treatment of chronic pain, improved seizure localization, and improved understanding of the physiology of neurological disorders. We look ahead to these and other emerging applications as the benefits of this technology become incorporated into current and future patient care. In this chapter, by applying the Fourier transform and wavelet transform, analysis and denoising of one of the important biomedical signals, the EEG, are carried out. The presence of rhythms, template matching, and correlation is discussed using various methods. The energy of the EEG signal is used to detect seizures in an epileptic patient. We have also performed denoising of EEG signals by SWT.
NASA Technical Reports Server (NTRS)
Zhang, Yiqiang; Alexander, J. I. D.; Ouazzani, J.
1994-01-01
Free and moving boundary problems require the simultaneous solution of unknown field variables and the boundaries of the domains on which these variables are defined. There are many technologically important processes that lead to moving boundary problems associated with fluid surfaces and solid-fluid boundaries. These include crystal growth, metal alloy and glass solidification, melting and flame propagation. The directional solidification of semi-conductor crystals by the Bridgman-Stockbarger method is a typical example of such a complex process. A numerical model of this growth method must solve the appropriate heat, mass and momentum transfer equations and determine the location of the melt-solid interface. In this work, a Chebyshev pseudospectral collocation method is adapted to the problem of directional solidification. Implementation involves a solution algorithm that combines domain decomposition, a finite-difference preconditioned conjugate minimum residual method and a Picard-type iterative scheme.
An iterative finite-element collocation method for parabolic problems using domain decomposition
Curran, M.C.
1992-01-01
Advection-dominated flows occur widely in the transport of groundwater contaminants, the movements of fluids in enhanced oil recovery projects, and many other contexts. In numerical models of such flows, adaptive local grid refinement is a conceptually attractive approach for resolving the sharp fronts or layers that tend to characterize the solutions. However, this approach can be difficult to implement in practice. A domain decomposition method developed by Bramble, Ewing, Pasciak, and Schatz, known as the BEPS method, overcomes many of the difficulties. We demonstrate the applicability of the iterative BEPS ideas to finite-element collocation on trial spaces of piecewise Hermite bicubics. The resulting scheme allows one to refine selected parts of a spatial grid without destroying algebraic efficiencies associated with the original coarse grid. We apply the method to two dimensional time-dependent advection-diffusion problems.
A Two-Timescale Discretization Scheme for Collocation
NASA Technical Reports Server (NTRS)
Desai, Prasun; Conway, Bruce A.
2004-01-01
The development of a two-timescale discretization scheme for collocation is presented. This scheme allows a larger discretization to be utilized for smoothly varying state variables and a second, finer discretization to be utilized for state variables having higher frequency dynamics. As such, the discretization scheme can be tailored to the dynamics of the particular state variables. In so doing, the size of the overall Nonlinear Programming (NLP) problem can be reduced significantly. Two two-timescale discretization architecture schemes are described. Comparison of results between the two-timescale method and conventional collocation shows very good agreement. Differences of less than 0.5 percent are observed. Consequently, a significant reduction (by two-thirds) in the number of NLP parameters and iterations required for convergence can be achieved without sacrificing solution accuracy.
Collocation methods for distillation design. 2: Applications for distillation
Huss, R.S.; Westerberg, A.W.
1996-05-01
The authors present applications of a collocation method for modeling distillation columns that they developed in a companion paper. They discuss implementation of the model, including discussion of the ASCEND (Advanced System for Computations in ENgineering Design) system, which enables one to create complex models with simple building blocks and interactively learn to solve them. They first investigate applying the model to compute minimum reflux for a given separation task, exactly solving nonsharp and approximately solving sharp split minimum reflux problems. They next illustrate the use of the collocation model to optimize the design of a single column capable of carrying out a prescribed set of separation tasks. The optimization picks the best column diameter and total number of trays. It also picks the feed tray for each of the prescribed separations.
Collocation and Least Residuals Method and Its Applications
NASA Astrophysics Data System (ADS)
Shapeev, Vasily
2016-02-01
The collocation and least residuals (CLR) method combines the collocation method (CM) with a least-residuals minimization. Unlike the CM, in the CLR method an approximate solution of the problem is found from an overdetermined system of linear algebraic equations (SLAE). The solution of this system is sought under the requirement of minimizing a functional involving the residuals of all its equations. On the one hand, this added complication of the numerical algorithm expands the capabilities of the CM for solving boundary value problems with singularities. On the other hand, the CLR method inherits to a considerable extent some convenient features of the CM. In the present paper, the CLR capabilities are illustrated on benchmark problems for the 2D and 3D Navier-Stokes equations, the modeling of the laser welding of metal plates of similar and different metals, problems investigating the strength of loaded parts made of composite materials, and boundary-value problems for hyperbolic equations.
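The core idea, collocating at more points than unknowns and minimizing the residual functional, can be shown on a toy 1-D problem. This global-polynomial sketch (problem, basis, and names chosen here for brevity) is not the CLR method's local piecewise formulation:

```python
import numpy as np

def clr_poisson_1d(f, degree=8, n_colloc=40):
    """Least-residuals collocation sketch: fit u(x) = sum_j c_j x**j to the
    1-D problem u'' = f on (0, 1) with u(0) = u(1) = 0, by collocating the
    ODE at more points than unknowns and solving the overdetermined system
    in the least-squares sense."""
    xs = np.linspace(0.0, 1.0, n_colloc)
    rows, rhs = [], []
    for x in xs:                      # interior residual equations: u''(x) = f(x)
        rows.append([j * (j - 1) * x ** (j - 2) if j >= 2 else 0.0
                     for j in range(degree + 1)])
        rhs.append(f(x))
    for xb in (0.0, 1.0):             # boundary equations: u(xb) = 0
        rows.append([xb ** j for j in range(degree + 1)])
        rhs.append(0.0)
    c, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return c                          # polynomial coefficients, low degree first
```

With f(x) = -pi**2 * sin(pi*x) the exact solution is sin(pi*x), and the least-squares solve recovers it to high accuracy; adding the boundary rows to the same overdetermined system is the CLR-style alternative to enforcing them exactly.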
Radiation energy budget studies using collocated AVHRR and ERBE observations
Ackerman, S.A.; Inoue, Toshiro
1994-03-01
Changes in the energy balance at the top of the atmosphere are specified as a function of atmospheric and surface properties using observations from the Advanced Very High Resolution Radiometer (AVHRR) and the Earth Radiation Budget Experiment (ERBE) scanner. By collocating the observations from the two instruments, flown on NOAA-9, the authors take advantage of the remote-sensing capabilities of each instrument. The AVHRR spectral channels were selected based on regions that are strongly transparent to clear sky conditions and are therefore useful for characterizing both surface and cloud-top conditions. The ERBE instruments make broadband observations that are important for climate studies. The approach of collocating these observations in time and space is used to study the radiative energy budget of three geographic regions: oceanic, savanna, and desert. 25 refs., 8 figs.
Locating CVBEM collocation points for steady state heat transfer problems
Hromadka, T.V., II
1985-01-01
The Complex Variable Boundary Element Method or CVBEM provides a highly accurate means of developing numerical solutions to steady state two-dimensional heat transfer problems. The numerical approach exactly solves the Laplace equation and satisfies the boundary conditions at specified points on the boundary by means of collocation. The accuracy of the approximation depends upon the nodal point distribution specified by the numerical analyst. In order to develop subsequent, refined approximation functions, four techniques for selecting additional collocation points are presented. The techniques are compared as to the governing theory, representation of the error of approximation on the problem boundary, the computational costs, and the ease of use by the numerical analyst. © 1985.
Market turning points forecasting using wavelet analysis
NASA Astrophysics Data System (ADS)
Bai, Limiao; Yan, Sen; Zheng, Xiaolian; Chen, Ben M.
2015-11-01
Based on the system adaptation framework we previously proposed, a frequency domain based model is developed in this paper to forecast the major turning points of stock markets. This system adaptation framework has its internal model and adaptive filter to capture the slow and fast dynamics of the market, respectively. The residue of the internal model is found to contain rich information about the market cycles. In order to extract and restore its informative frequency components, we use wavelet multi-resolution analysis with time-varying parameters to decompose this internal residue. An empirical index is then proposed based on the recovered signals to forecast the market turning points. This index is successfully applied to US, UK and China markets, where all major turning points are well forecasted.
Domain decomposition preconditioners for the spectral collocation method
NASA Technical Reports Server (NTRS)
Quarteroni, Alfio; Sacchilandriani, Giovanni
1988-01-01
Several block iteration preconditioners are proposed and analyzed for the solution of elliptic problems by spectral collocation methods in a region partitioned into several rectangles. It is shown that convergence is achieved with a rate which does not depend on the polynomial degree of the spectral solution. The iterative methods here presented can be effectively implemented on multiprocessor systems due to their high degree of parallelism.
Multiscale quantum propagation using compact-support wavelets in space and time
Wang Haixiang; Acevedo, Ramiro; Molle, Heather; Mackey, Jeffrey L.; Kinsey, James L.; Johnson, Bruce R.
2004-10-22
Orthogonal compact-support Daubechies wavelets are employed as bases for both space and time variables in the solution of the time-dependent Schroedinger equation. Initial value conditions are enforced using special early-time wavelets analogous to edge wavelets used in boundary-value problems. It is shown that the quantum equations may be solved directly and accurately in the discrete wavelet representation, an important finding for the eventual goal of highly adaptive multiresolution Schroedinger equation solvers. While the temporal part of the basis is not sharp in either time or frequency, the Chebyshev method used for pure time-domain propagations is adapted to use in the mixed domain and is able to take advantage of Hamiltonian matrix sparseness. The orthogonal separation into different time scales is determined theoretically to persist throughout the evolution and is demonstrated numerically in a partially adaptive treatment of scattering from an asymmetric Eckart barrier.
Mars Mission Optimization Based on Collocation of Resources
NASA Technical Reports Server (NTRS)
Chamitoff, G. E.; James, G. H.; Barker, D. C.; Dershowitz, A. L.
2003-01-01
This paper presents a powerful approach for analyzing Martian data and for optimizing mission site selection based on resource collocation. This approach is implemented in a program called PROMT (Planetary Resource Optimization and Mapping Tool), which provides a wide range of analysis and display functions that can be applied to raw data or imagery. Thresholds, contours, custom algorithms, and graphical editing are some of the various methods that can be used to process data. Output maps can be created to identify surface regions on Mars that meet any specific criteria. The use of this tool for analyzing data, generating maps, and collocating features is demonstrated using data from the Mars Global Surveyor and the Odyssey spacecraft. The overall mission design objective is to maximize a combination of scientific return and self-sufficiency based on utilization of local materials. Landing site optimization involves maximizing accessibility to collocated science and resource features within a given mission radius. Mission types are categorized according to duration, energy resources, and in-situ resource utilization. Optimization results are shown for a number of mission scenarios.
Pseudospectral collocation methods for fourth order differential equations
NASA Technical Reports Server (NTRS)
Malek, Alaeddin; Phillips, Timothy N.
1994-01-01
Collocation schemes are presented for solving linear fourth order differential equations in one and two dimensions. The variational formulation of the model fourth order problem is discretized by approximating the integrals by a Gaussian quadrature rule generalized to include the values of the derivative of the integrand at the boundary points. Collocation schemes are derived which are equivalent to this discrete variational problem. An efficient preconditioner based on a low-order finite difference approximation to the same differential operator is presented. The corresponding multidomain problem is also considered and interface conditions are derived. Pseudospectral approximations which are C1 continuous at the interfaces are used in each subdomain to approximate the solution. The approximations are also shown to be C3 continuous at the interfaces asymptotically. A complete analysis of the collocation scheme for the multidomain problem is provided. The extension of the method to the biharmonic equation in two dimensions is discussed and results are presented for a problem defined in a nonrectangular domain.
ERIC Educational Resources Information Center
Walker, Crayton Phillip
2011-01-01
In this article I examine the collocational behaviour of groups of semantically related verbs (e.g., "head, run, manage") and nouns (e.g., "issue, factor, aspect") from the domain of business English. The results of this corpus-based study show that much of the collocational behaviour exhibited by these lexical items can be explained by examining…
Finite element-wavelet hybrid algorithm for atmospheric tomography.
Yudytskiy, Mykhaylo; Helin, Tapio; Ramlau, Ronny
2014-03-01
Reconstruction of the refractive index fluctuations in the atmosphere, or atmospheric tomography, is an underlying problem of many next generation adaptive optics (AO) systems, such as the multiconjugate adaptive optics or multiobject adaptive optics (MOAO). The dimension of the problem for the extremely large telescopes, such as the European Extremely Large Telescope (E-ELT), suggests the use of iterative schemes as an alternative to the matrix-vector multiply (MVM) methods. Recently, an algorithm based on the wavelet representation of the turbulence has been introduced in [Inverse Probl. 29, 085003 (2013)] by the authors to solve the atmospheric tomography using the conjugate gradient iteration. The authors also developed an efficient frequency-dependent preconditioner for the wavelet method in a later work. In this paper we study the computational aspects of the wavelet algorithm. We introduce three new techniques, the dual domain discretization strategy, a scale-dependent preconditioner, and a ground layer multiscale method, to derive a method that is globally O(n), parallelizable, and compact with respect to memory. We present the computational cost estimates and compare the theoretical numerical performance of the resulting finite element-wavelet hybrid algorithm with the MVM. The quality of the method is evaluated in terms of an MOAO simulation for the E-ELT on the European Southern Observatory (ESO) end-to-end simulation system OCTOPUS. The method is compared to the ESO version of the Fractal Iterative Method [Proc. SPIE 7736, 77360X (2010)] in terms of quality. PMID:24690653
Data compression by wavelet transforms
NASA Technical Reports Server (NTRS)
Shahshahani, M.
1992-01-01
A wavelet transform algorithm is applied to image compression. It is observed that the algorithm does not suffer from the blockiness characteristic of the DCT-based algorithms at compression ratios exceeding 25:1, but the edges do not appear as sharp as they do with the latter method. Some suggestions for the improved performance of the wavelet transform method are presented.
Li, Jingsong; Yu, Benli; Fischer, Horst
2015-04-01
This paper presents a novel methodology based on the discrete wavelet transform (DWT) and the choice of optimal wavelet pairs to adaptively process tunable diode laser absorption spectroscopy (TDLAS) spectra for quantitative analysis, such as molecular spectroscopy and trace gas detection. The proposed methodology aims to construct an optimal calibration model for a TDLAS spectrum, regardless of its background structural characteristics, thus facilitating the application of TDLAS as a powerful tool for analytical chemistry. The performance of the proposed method is verified using analysis of both synthetic and observed signals, characterized with different noise levels and baseline drift. In terms of fitting precision and signal-to-noise ratio, both have been improved significantly using the proposed method. PMID:25741689
Spectral Laplace-Beltrami wavelets with applications in medical images.
Tan, Mingzhen; Qiu, Anqi
2015-05-01
The spectral graph wavelet transform (SGWT) has recently been developed to compute wavelet transforms of functions defined on non-Euclidean spaces such as graphs. By capitalizing on the established framework of the SGWT, we adopt a fast and efficient computation of a discretized Laplace-Beltrami (LB) operator that allows its extension from arbitrary graphs to differentiable and closed 2-D manifolds (smooth surfaces embedded in the 3-D Euclidean space). This particular class of manifolds is widely used in bioimaging to characterize the morphology of cells, tissues, and organs. They are often discretized into triangular meshes, providing additional geometric information apart from simple nodes and weighted connections in graphs. In comparison with the SGWT, the wavelet bases constructed with the LB operator are spatially localized with a more uniform "spread" with respect to underlying curvature of the surface. In our experiments, we first use synthetic data to show that traditional applications of wavelets in smoothing and edge detection can be done using the wavelet bases constructed with the LB operator. Second, we show that the multi-resolution capabilities of the proposed framework are applicable in the classification of Alzheimer's patients with normal subjects using hippocampal shapes. Wavelet transforms of the hippocampal shape deformations at finer resolutions registered higher sensitivity (96%) and specificity (90%) than the classification results obtained from the direct usage of hippocampal shape deformations. In addition, the Laplace-Beltrami method requires consistently a smaller number of principal components (to retain a fixed variance) at higher resolution as compared to the binary and weighted graph Laplacians, demonstrating the potential of the wavelet bases in adapting to the geometry of the underlying manifold. PMID:25343758
NASA Technical Reports Server (NTRS)
Jameson, Leland
1996-01-01
Wavelets can provide a basis set in which the basis functions are constructed by dilating and translating a fixed function known as the mother wavelet. The mother wavelet can be seen as a high pass filter in the frequency domain. The process of dilating and expanding this high-pass filter can be seen as altering the frequency range that is 'passed' or detected. The process of translation moves this high-pass filter throughout the domain, thereby providing a mechanism to detect the frequencies or scales of information at every location. This is exactly the type of information that is needed for effective grid generation. This paper provides motivation to use wavelets for grid generation in addition to providing the final product: source code for wavelet-based grid generation.
Wavelet compression of medical imagery.
Reiter, E
1996-01-01
Wavelet compression is a transform-based compression technique recently shown to provide diagnostic-quality images at compression ratios as great as 30:1. Based on a recently developed field of applied mathematics, wavelet compression has found success in compression applications from digital fingerprints to seismic data. The underlying strength of the method is attributable in large part to the efficient representation of image data by the wavelet transform. This efficient or sparse representation forms the basis for high-quality image compression by providing subsequent steps of the compression scheme with data likely to result in long runs of zero. These long runs of zero in turn compress very efficiently, allowing wavelet compression to deliver substantially better performance than existing Fourier-based methods. Although the lack of standardization has historically been an impediment to widespread adoption of wavelet compression, this situation may begin to change as the operational benefits of the technology become better known. PMID:10165355
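The "sparse representation leads to long runs of zero" argument can be made concrete: keep only the largest-magnitude transform coefficients and zero the rest. The sketch below is generic (not the cited codec, and the wavelet transform itself is omitted):

```python
import numpy as np

def keep_largest(coeffs, keep_frac):
    """Zero all but the keep_frac largest-magnitude coefficients -- the
    sparsification step that makes transform coefficients compress well."""
    c = np.asarray(coeffs, dtype=float)
    flat = np.abs(c).ravel()
    k = max(1, int(keep_frac * flat.size))
    thr = np.partition(flat, -k)[-k]          # k-th largest magnitude
    return np.where(np.abs(c) >= thr, c, 0.0)
```

After this step a run-length or entropy coder exploits the long zero runs; the quality claim in the abstract rests on the wavelet transform concentrating image energy in few coefficients, so that discarding the rest loses little.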
A generalized wavelet extrema representation
Lu, Jian; Lades, M.
1995-10-01
The wavelet extrema representation originated by Stephane Mallat is a unique framework for low-level and intermediate-level (feature) processing. In this paper, we present a new form of wavelet extrema representation generalizing Mallat's original work. The generalized wavelet extrema representation is a feature-based multiscale representation. For a particular choice of wavelet, our scheme can be interpreted as representing a signal or image by its edges, and peaks and valleys at multiple scales. Such a representation is shown to be stable -- the original signal or image can be reconstructed with very good quality. It is further shown that a signal or image can be modeled as piecewise monotonic, with all turning points between monotonic segments given by the wavelet extrema. A new projection operator is introduced to enforce piecewise monotonicity of a signal in its reconstruction. This leads to an enhancement to previously developed algorithms in preventing artifacts in reconstructed signals.
Using wavelets to solve the Burgers equation: A comparative study
Schult, R.L.; Wyld, H.W.
1992-12-15
The Burgers equation is solved for Reynolds numbers ≲ 8000 in a representation using coarse-scale scaling functions and a subset of the wavelets at finer scales of resolution. Situations are studied in which the solution develops a shocklike discontinuity. Extra wavelets are kept for several levels of higher resolution in the neighborhood of this discontinuity. Algorithms are presented for the calculation of matrix elements of first- and second-derivative operators and a useful product operation in this truncated wavelet basis. The time evolution of the system is followed using an implicit time-stepping computer code. An adaptive algorithm is presented which allows the code to follow a moving shock front in a system with periodic boundary conditions.
A Wavelet-Based Methodology for Grinding Wheel Condition Monitoring
Liao, T. W.; Ting, C.F.; Qu, Jun; Blau, Peter Julian
2007-01-01
Grinding wheel surface condition changes as more material is removed. This paper presents a wavelet-based methodology for grinding wheel condition monitoring based on acoustic emission (AE) signals. Grinding experiments in creep feed mode were conducted to grind alumina specimens with a resinoid-bonded diamond wheel using two different conditions. During the experiments, AE signals were collected when the wheel was 'sharp' and when the wheel was 'dull'. Discriminant features were then extracted from each raw AE signal segment using the discrete wavelet decomposition procedure. An adaptive genetic clustering algorithm was finally applied to the extracted features in order to distinguish different states of grinding wheel condition. The test results indicate that the proposed methodology can achieve 97% clustering accuracy for the high material removal rate condition, 86.7% for the low material removal rate condition, and 76.7% for the combined grinding conditions if the base wavelet, the decomposition level, and the GA parameters are properly selected.
Fan, Hong-Yi; Lu, Hai-Liang
2007-03-01
The Einstein-Podolsky-Rosen entangled state representation is applied to studying the admissibility condition of mother wavelets for complex wavelet transforms, which leads to a family of new mother wavelets. Mother wavelets thus are classified as the Hermite-Gaussian type for real wavelet transforms and the Laguerre-Gaussian type for the complex case. PMID:17392919
Wavelet periodicity detection algorithms
NASA Astrophysics Data System (ADS)
Benedetto, John J.; Pfander, Goetz E.
1998-10-01
This paper deals with the analysis of time series with respect to certain known periodicities. In particular, we shall present a fast method aimed at detecting periodic behavior inherent in noisy data. The method is composed of three steps: (1) Non-noisy data are analyzed through spectral and wavelet methods to extract specific periodic patterns of interest. (2) Using these patterns, we construct an optimal piecewise constant wavelet designed to detect the underlying periodicities. (3) We introduce a fast discretized version of the continuous wavelet transform, as well as waveletgram averaging techniques, to detect occurrence and period of these periodicities. The algorithm is formulated to provide real-time implementation. Our procedure is generally applicable to detect locally periodic components in signals s which can be modeled as s(t) = A(t)F(h(t)) + N(t) for t in I, where F is a periodic signal, A is a non-negative slowly varying function, h is strictly increasing with h' slowly varying, and N denotes background activity. For example, the method can be applied in the context of epileptic seizure detection. In this case, we try to detect seizure periodicities in EEG and ECoG data. In the case of ECoG data, N is essentially 1/f noise. In the case of EEG data and for t in I, N includes noise due to cranial geometry and densities. In both cases N also includes standard low frequency rhythms. Periodicity detection has other applications including ocean wave prediction, cockpit motion sickness prediction, and minefield detection.
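The core of steps (2)-(3), matching a zero-mean piecewise-constant periodic template against noisy data at a set of candidate periods, can be sketched as follows. The template, candidate periods, and scoring here are illustrative stand-ins, not the paper's optimal wavelet construction:

```python
import numpy as np

def detect_period(signal, candidate_periods):
    """Score each candidate period by correlating the signal with a
    zero-mean piecewise-constant (square-wave) periodic template."""
    signal = signal - signal.mean()
    t = np.arange(len(signal))
    scores = {}
    for p in candidate_periods:
        template = np.where((t % p) < p / 2, 1.0, -1.0)
        # Best circular alignment between template and signal.
        best = max(abs(np.dot(np.roll(template, s), signal)) for s in range(p))
        scores[p] = best / len(signal)
    return max(scores, key=scores.get), scores

rng = np.random.default_rng(1)
t = np.arange(1024)
noisy = np.sign(np.sin(2 * np.pi * t / 32)) + 0.5 * rng.standard_normal(1024)
period, scores = detect_period(noisy, [16, 24, 32, 48, 64])
print(period)
```

Mismatched candidate periods average out against the data, so the true period dominates the score even at this noise level.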
Wavelets and spacetime squeeze
NASA Technical Reports Server (NTRS)
Han, D.; Kim, Y. S.; Noz, Marilyn E.
1993-01-01
It is shown that the wavelet is the natural language for the Lorentz covariant description of localized light waves. A model for covariant superposition is constructed for light waves with different frequencies. It is therefore possible to construct a wave function for light waves carrying a covariant probability interpretation. It is shown that the time-energy uncertainty relation (Delta(t))(Delta(w)) is approximately 1 for light waves is a Lorentz-invariant relation. The connection between photons and localized light waves is examined critically.
An Introduction to Wavelet Theory and Analysis
Miner, N.E.
1998-10-01
This report reviews the history, theory and mathematics of wavelet analysis. Examination of the Fourier Transform and Short-time Fourier Transform methods provides information about the evolution of the wavelet analysis technique. This overview is intended to provide readers with a basic understanding of wavelet analysis, define common wavelet terminology and describe wavelet analysis algorithms. The most common algorithms for performing efficient, discrete wavelet transforms for signal analysis and inverse discrete wavelet transforms for signal reconstruction are presented. This report is intended to be approachable by non-mathematicians, although a basic understanding of engineering mathematics is necessary.
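The forward/inverse discrete wavelet transform pair mentioned above can be illustrated in its simplest case, the Haar wavelet, where the analysis step splits the signal into averages and differences and the synthesis step inverts that split exactly. This is a generic textbook sketch, not code from the report:

```python
import numpy as np

def dwt_haar(x, levels):
    """Multi-level Haar DWT: returns [detail_1, ..., detail_L, approx_L]."""
    coeffs = []
    approx = np.asarray(x, dtype=float)
    for _ in range(levels):
        pairs = approx.reshape(-1, 2)
        coeffs.append((pairs[:, 0] - pairs[:, 1]) / np.sqrt(2.0))
        approx = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2.0)
    coeffs.append(approx)
    return coeffs

def idwt_haar(coeffs):
    """Inverse of dwt_haar: rebuilds the signal level by level."""
    approx = coeffs[-1]
    for detail in reversed(coeffs[:-1]):
        out = np.empty(2 * len(approx))
        out[0::2] = (approx + detail) / np.sqrt(2.0)
        out[1::2] = (approx - detail) / np.sqrt(2.0)
        approx = out
    return approx

x = np.random.default_rng(2).standard_normal(64)
rebuilt = idwt_haar(dwt_haar(x, 3))
print(np.max(np.abs(rebuilt - x)))  # perfect reconstruction up to round-off
```

The orthonormal 1/√2 scaling makes the transform energy-preserving, which is why the analysis/synthesis filters are simply transposes of each other here.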
An optimal wavelet for the detection of surface waves in Marine Sediments
NASA Astrophysics Data System (ADS)
Kritski, A.; Vincent, A. P.; Yuen, D. A.
2004-12-01
We study seismic surface wave propagation in stratified shallow marine sediment media. Our goal is to predict dynamic (shear velocity, attenuation) and physical properties (stiffness, density) of sediments from seismoacoustic records of surface waves propagating along the water-seabed interface. To estimate and invert propagational parameters of surface waves (group and phase velocity) into shear velocity as a function of distance and depth we are using a multiscale wavelet cross-correlation technique. Standard wavelet transforms have indeed proven very useful for imaging different surface wave modes. However, to achieve a better resolution of each mode imaging we need to develop a new wavelet transform that includes optimality and adaptivity, based on the seismic data itself. Our main tool to develop such an optimal wavelet is the Karhunen-Loeve decomposition of the data series. This requires two steps: first, we calculate a set of covariance matrices from the pairs of time series. Second, we estimate the corresponding eigenvalues and eigenfunctions. The calculated eigenfunctions have to be further regularized to obtain a new wavelet series. This new eigenfunction basis has an optimal convergence in the sense of the least squares. It is sufficient to take a small number of the above set of eigenfunctions. They are naturally adapted to surface wave mode propagation in terms of scale values: time and periods (frequencies). Our approach makes it possible to decompose highly correlated reference data series into eigenvectors and then to use it to decompose field data records in the frequency and time domains with significant improvement of the image quality. We have processed different seismic records with surface waves. The results were compared with the wavelet analysis using standard wavelet kernels ('Morlet', 'Gaussian', 'Mexican hat'). We show that our newly developed adaptive wavelet discriminates better between different surface wave modes propagating.
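The two-step Karhunen-Loeve recipe above (covariance matrix, then eigen-decomposition, then truncation to a few leading eigenfunctions) can be sketched on a synthetic ensemble. The data, mode shapes, and truncation rank here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 200)
# Synthetic ensemble of correlated records: two coherent modes plus weak noise.
amps = rng.standard_normal((50, 2))
records = np.array([
    a * np.sin(2 * np.pi * 5 * t) + b * np.sin(2 * np.pi * 9 * t)
    + 0.05 * rng.standard_normal(t.size)
    for a, b in amps
])

# Step 1: empirical covariance matrix of the ensemble.
cov = np.cov(records, rowvar=False)

# Step 2: its eigenvalues/eigenfunctions, ordered by decreasing eigenvalue.
eigvals, eigvecs = np.linalg.eigh(cov)   # eigh returns ascending order
order = np.argsort(eigvals)[::-1]
captured = eigvals[order[:2]].sum() / eigvals.sum()
print(captured)  # fraction of variance captured by the two leading KL modes
```

A small number of leading eigenfunctions captures nearly all the coherent variance, which is exactly why the truncated KL basis is optimal in the least-squares sense.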
Alwan, Aravind; Aluru, N.R.
2013-12-15
This paper presents a data-driven framework for performing uncertainty quantification (UQ) by choosing a stochastic model that accurately describes the sources of uncertainty in a system. This model is propagated through an appropriate response surface function that approximates the behavior of this system using stochastic collocation. Given a sample of data describing the uncertainty in the inputs, our goal is to estimate a probability density function (PDF) using the kernel moment matching (KMM) method so that this PDF can be used to accurately reproduce statistics like mean and variance of the response surface function. Instead of constraining the PDF to be optimal for a particular response function, we show that we can use the properties of stochastic collocation to make the estimated PDF optimal for a wide variety of response functions. We contrast this method with other traditional procedures that rely on the Maximum Likelihood approach, like kernel density estimation (KDE) and its adaptive modification (AKDE). We argue that this modified KMM method tries to preserve what is known from the given data and is the better approach when the available data is limited in quantity. We test the performance of these methods for both univariate and multivariate density estimation by sampling random datasets from known PDFs and then measuring the accuracy of the estimated PDFs, using the known PDF as a reference. Comparing the output mean and variance estimated with the empirical moments using the raw data sample as well as the actual moments using the known PDF, we show that the KMM method performs better than KDE and AKDE in predicting these moments with greater accuracy. This improvement in accuracy is also demonstrated for the case of UQ in electrostatic and electrothermomechanical microactuators. We show how our framework results in the accurate computation of statistics in micromechanical systems.
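The propagation step that the framework relies on, estimating output statistics of a response function via stochastic collocation, can be sketched with a one-dimensional Gauss-Hermite rule for a standard-normal input. The response function below is a hypothetical placeholder, not one of the paper's microactuator models:

```python
import numpy as np

# Probabilists' Gauss-Hermite rule: exact for polynomials under N(0,1).
nodes, weights = np.polynomial.hermite_e.hermegauss(8)
weights = weights / np.sqrt(2.0 * np.pi)   # normalise to a probability measure

def response(x):
    """A hypothetical smooth response surface of the system."""
    return np.exp(0.3 * x) + x ** 2

# Collocation estimates of the output mean and variance: evaluate the
# response only at the quadrature nodes and take weighted sums.
vals = response(nodes)
mean = np.sum(weights * vals)
variance = np.sum(weights * vals ** 2) - mean ** 2
print(mean, variance)
```

For a non-Gaussian input PDF estimated from data (as with the KMM method), the same weighted-sum structure applies with nodes and weights matched to that PDF instead.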
Wavelet networks for face processing
NASA Astrophysics Data System (ADS)
Krüger, V.; Sommer, G.
2002-06-01
Wavelet networks (WNs) were introduced in 1992 as a combination of artificial neural radial basis function (RBF) networks and wavelet decomposition. Since then, however, WNs have received little attention. We believe that the potential of WNs has been generally underestimated. WNs have the advantage that the wavelet coefficients are directly related to the image data through the wavelet transform. In addition, the parameters of the wavelets in the WNs are subject to optimization, which results in a direct relation between the represented function and the optimized wavelets, leading to considerable data reduction (thus making subsequent algorithms much more efficient) as well as to wavelets that can be used as an optimized filter bank. In our study we analyze some WN properties and highlight their advantages for object representation purposes. We then present a series of results of experiments in which we used WNs for face tracking. We exploit the efficiency that is due to data reduction for face recognition and face-pose estimation by applying the optimized-filter-bank principle of the WNs.
Simplex-stochastic collocation method with improved scalability
NASA Astrophysics Data System (ADS)
Edeling, W. N.; Dwight, R. P.; Cinnella, P.
2016-04-01
The Simplex-Stochastic Collocation (SSC) method is a robust tool used to propagate uncertain input distributions through a computer code. However, it becomes prohibitively expensive for problems with dimensions higher than 5. The main purpose of this paper is to identify the bottlenecks and to improve upon this poor scalability. In order to do so, we propose an alternative interpolation stencil technique based upon the Set-Covering problem, and we integrate the SSC method into the High-Dimensional Model-Reduction framework. In addition, we address the issue of ill-conditioned sample matrices, and we present an analytical map to facilitate uniformly-distributed simplex sampling.
Collocation method for chatter avoidance of general turning operations
NASA Astrophysics Data System (ADS)
Urbicain, G.; Olvera, D.; Fernández, A.; Rodríguez, A.; López de Lacalle, L. N.
2012-04-01
An accurate prediction of the dynamic stability of a cutting system involves the implementation of tool geometry and cutting conditions on any model used for such purpose. This study presents a dynamic cutting force model based on the collocation method with Chebyshev polynomials, taking advantage of its ability to consider tool geometry and cutting parameters. In the paper, a simple 1DOF model is used to forecast chatter vibrations due to the workpiece and tool, which are treated in separate sections. The proposed model is validated against experimental dynamic tests.
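The numerical backbone of Chebyshev collocation, a differentiation matrix acting on function values at Chebyshev points, can be sketched with the standard construction (as in Trefethen's `cheb`). This is a generic building block, not the paper's stability model:

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix D and points x (Trefethen's cheb)."""
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)          # Chebyshev points on [-1, 1]
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))   # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                       # diagonal: negative row sums
    return D, x

D, x = cheb(8)
# Collocation differentiation is exact for polynomials: d/dx of x^3 is 3x^2.
print(np.max(np.abs(D @ x**3 - 3 * x**2)))
```

In a stability analysis, `D` (and `D @ D`) discretizes the delay-differential dynamics at the collocation points, reducing the chatter problem to an eigenvalue computation.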
Fourier analysis of finite element preconditioned collocation schemes
NASA Technical Reports Server (NTRS)
Deville, Michel O.; Mund, Ernest H.
1990-01-01
The spectrum of the iteration operator of some finite element preconditioned Fourier collocation schemes is investigated. The first part of the paper analyses one-dimensional elliptic and hyperbolic model problems and the advection-diffusion equation. Analytical expressions of the eigenvalues are obtained with use of symbolic computation. The second part of the paper considers the set of one-dimensional differential equations resulting from Fourier analysis (in the transverse direction) of the 2-D Stokes problem. All results agree with previous conclusions on the numerical efficiency of finite element preconditioning schemes.
Resnikoff, H.L.
1993-01-01
The theory of compactly supported wavelets is now four years old. In that short period, it has stimulated significant research in pure mathematics; has been the source of new numerical methods for the solution of nonlinear partial differential equations, including Navier-Stokes; and has been applied to digital signal-processing problems, ranging from signal detection and classification to signal compression for speech, audio, images, seismic signals, and sonar. Wavelet channel coding has even been proposed for code division multiple access digital telephony. In each of these applications, prototype wavelet solutions have proved to be competitive with established methods, and in many cases they are already superior.
Peak finding using biorthogonal wavelets
Tan, C.Y.
2000-02-01
The authors show in this paper how they can find the peaks in the input data if the underlying signal is a sum of Lorentzians. In order to project the data into a space of Lorentzian-like functions, they show explicitly the construction of scaling functions which look like Lorentzians. From this construction, they can calculate the biorthogonal filter coefficients for both the analysis and synthesis functions. They then compare their biorthogonal wavelets to the FBI (Federal Bureau of Investigation) wavelets when used for peak finding in noisy data. They will show that in this instance, their filters perform much better than the FBI wavelets.
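The intuition behind projecting onto Lorentzian-like functions can be sketched with a plain matched-filter correlation: a zero-mean Lorentzian template responds most strongly where the data contain a Lorentzian peak. This is a crude stand-in for the biorthogonal construction, with synthetic data and an arbitrary width:

```python
import numpy as np

def lorentzian(x, x0, gamma):
    """Unit-height Lorentzian centered at x0 with half-width gamma."""
    return gamma ** 2 / ((x - x0) ** 2 + gamma ** 2)

rng = np.random.default_rng(5)
x = np.linspace(0.0, 10.0, 1000)
truth = 4.2
data = lorentzian(x, truth, 0.3) + 0.1 * rng.standard_normal(x.size)

# Correlate with a zero-mean Lorentzian template centered in the window.
template = lorentzian(x, 5.0, 0.3)
template -= template.mean()
score = np.correlate(data - data.mean(), template, mode="same")
peak = x[np.argmax(score)]
print(peak)
```

Because the template matches the peak shape, the correlation maximum locates the peak robustly even under noise that would defeat a naive argmax of the raw data.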
The wavelet/scalar quantization compression standard for digital fingerprint images
Bradley, J.N.; Brislawn, C.M.
1994-04-01
A new digital image compression standard has been adopted by the US Federal Bureau of Investigation for use on digitized gray-scale fingerprint images. The algorithm is based on adaptive uniform scalar quantization of a discrete wavelet transform image decomposition and is referred to as the wavelet/scalar quantization standard. The standard produces archival quality images at compression ratios of around 20:1 and will allow the FBI to replace their current database of paper fingerprint cards with digital imagery.
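The scalar-quantization stage of such a scheme can be illustrated with a uniform dead-zone quantizer applied to synthetic wavelet-like coefficients (subband coefficients are roughly Laplacian-distributed). The step size and dead-zone shape below are illustrative, not the WSQ standard's actual tables:

```python
import numpy as np

def uniform_quantize(c, step):
    """Uniform scalar quantizer with a dead zone around zero."""
    return np.sign(c) * np.floor(np.abs(c) / step)

def dequantize(q, step):
    """Reconstruct at bin midpoints; the dead zone maps back to zero."""
    return np.sign(q) * (np.abs(q) + 0.5) * step * (q != 0)

rng = np.random.default_rng(6)
coeffs = rng.laplace(scale=1.0, size=10000)   # stand-in for subband coefficients
q = uniform_quantize(coeffs, step=0.5)
rec = dequantize(q, step=0.5)
nonzero = np.count_nonzero(q) / q.size
print(nonzero, np.sqrt(np.mean((coeffs - rec) ** 2)))
```

The dead zone sends a large fraction of small coefficients to exactly zero, which is what makes the subsequent entropy coding (and hence the ~20:1 ratios) possible at modest distortion.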
Collocating satellite-based radar and radiometer measurements - methodology and usage examples.
NASA Astrophysics Data System (ADS)
Holl, G.; Buehler, S. A.; Rydberg, B.; Jiménez, C.
2010-05-01
Collocations between two satellite sensors are occasions where both sensors observe the same place at roughly the same time. We study collocations between the Microwave Humidity Sounder (MHS) onboard NOAA-18 and the Cloud Profiling Radar (CPR) onboard CloudSat. First, a simple method is presented to obtain those collocations. We present the statistical properties of the collocations, with particular attention to the effects of the differences in footprint size. For 2007, we find approximately two and a half million MHS measurements with CPR pixels close to their centrepoints. Most of those collocations contain at least ten CloudSat pixels and image relatively homogeneous scenes. In the second part, we present three possible applications for the collocations. Firstly, we use the collocations to validate an operational Ice Water Path (IWP) product from MHS measurements, produced by the National Environment Satellite, Data and Information System (NESDIS) in the Microwave Surface and Precipitation Products System (MSPPS). IWP values from the CloudSat CPR are found to be significantly larger than those from the MSPPS. Secondly, we compare the relationship between IWP and MHS channel 5 (190.311 GHz) brightness temperature for two datasets: the collocated dataset, and an artificial dataset. We find a larger variability in the collocated dataset. Finally, we use the collocations to train an Artificial Neural Network and describe how we can use it to develop a new MHS-based IWP product. We also study the effect of adding measurements from the High Resolution Infrared Radiation Sounder (HIRS), channels 8 (11.11 μm) and 11 (8.33 μm). This shows a small improvement in the retrieval quality. The collocations are available for public use.
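The basic matching criterion, "same place at roughly the same time", amounts to thresholding great-circle distance and time offset between observation pairs. The sketch below uses hypothetical thresholds and a brute-force pairwise scan (real pipelines for MHS/CPR volumes would use spatial indexing), not the paper's method:

```python
import numpy as np

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two points given in degrees."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2.0 * 6371.0 * np.arcsin(np.sqrt(a))

def collocate(obs_a, obs_b, max_km=15.0, max_seconds=900.0):
    """Index pairs (i, j) where sensor B observes roughly the same place
    at roughly the same time as sensor A; obs_* are (lat, lon, time) tuples."""
    pairs = []
    for i, (lat, lon, t) in enumerate(obs_a):
        for j, (lat2, lon2, t2) in enumerate(obs_b):
            if (abs(t - t2) <= max_seconds
                    and haversine_km(lat, lon, lat2, lon2) <= max_km):
                pairs.append((i, j))
    return pairs

a = [(60.0, 25.0, 0.0), (61.0, 25.0, 0.0)]      # sensor A footprint centres
b = [(60.05, 25.0, 600.0), (10.0, 25.0, 100.0)]  # sensor B footprint centres
print(collocate(a, b))
```

Only the first A observation pairs with the first B observation (about 5.6 km apart, 600 s offset); the others fail the distance test.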
Collocating satellite-based radar and radiometer measurements - methodology and usage examples
NASA Astrophysics Data System (ADS)
Holl, G.; Buehler, S. A.; Rydberg, B.; Jiménez, C.
2010-02-01
Collocations between two satellite sensors are occasions where both sensors observe the same place at roughly the same time. We study collocations between the Microwave Humidity Sounder (MHS) onboard NOAA-18 and the Cloud Profiling Radar (CPR) onboard CloudSat. First, a simple method is presented to obtain those collocations and this method is compared with a more complicated approach found in literature. We present the statistical properties of the collocations, with particular attention to the effects of the differences in footprint size. For 2007, we find approximately two and a half million MHS measurements with CPR pixels close to their centrepoints. Most of those collocations contain at least ten CloudSat pixels and image relatively homogeneous scenes. In the second part, we present three possible applications for the collocations. Firstly, we use the collocations to validate an operational Ice Water Path (IWP) product from MHS measurements, produced by the National Environment Satellite, Data and Information System (NESDIS) in the Microwave Surface and Precipitation Products System (MSPPS). IWP values from the CloudSat CPR are found to be significantly larger than those from the MSPPS. Secondly, we compare the relation between IWP and MHS channel 5 (190.311 GHz) brightness temperature for two datasets: the collocated dataset, and an artificial dataset. We find a larger variability in the collocated dataset. Finally, we use the collocations to train an Artificial Neural Network and describe how we can use it to develop a new MHS-based IWP product. We also study the effect of adding measurements from the High Resolution Infrared Radiation Sounder (HIRS), channels 8 (11.11 μm) and 11 (8.33 μm). This shows a small improvement in the retrieval quality. The collocations described in the article are available for public use.
Birdsong Denoising Using Wavelets.
Priyadarshani, Nirosha; Marsland, Stephen; Castro, Isabel; Punchihewa, Amal
2016-01-01
Automatic recording of birdsong is becoming the preferred way to monitor and quantify bird populations worldwide. Programmable recorders allow recordings to be obtained at all times of day and year for extended periods of time. Consequently, there is a critical need for robust automated birdsong recognition. One prominent obstacle to achieving this is low signal to noise ratio in unattended recordings. Field recordings are often very noisy: birdsong is only one component in a recording, which also includes noise from the environment (such as wind and rain), other animals (including insects), and human-related activities, as well as noise from the recorder itself. We describe a method of denoising using a combination of the wavelet packet decomposition and band-pass or low-pass filtering, and present experiments that demonstrate an order of magnitude improvement in noise reduction over natural noisy bird recordings. PMID:26812391
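The denoising idea, decompose, suppress coefficients below the noise floor, reconstruct, can be sketched with soft-thresholding of Haar detail coefficients. This substitutes a plain Haar DWT for the paper's wavelet packet decomposition and band-pass filtering, and uses a synthetic signal; thresholds are illustrative:

```python
import numpy as np

def haar_denoise(x, levels=3, k=3.0):
    """Soft-threshold Haar detail coefficients (a stand-in for the wavelet
    packet step), keeping only structure above the estimated noise floor."""
    coeffs, approx = [], np.asarray(x, dtype=float)
    for _ in range(levels):
        p = approx.reshape(-1, 2)
        coeffs.append((p[:, 0] - p[:, 1]) / np.sqrt(2.0))
        approx = (p[:, 0] + p[:, 1]) / np.sqrt(2.0)
    sigma = np.median(np.abs(coeffs[0])) / 0.6745   # robust noise estimate
    thr = k * sigma
    coeffs = [np.sign(d) * np.maximum(np.abs(d) - thr, 0.0) for d in coeffs]
    for d in reversed(coeffs):
        out = np.empty(2 * len(approx))
        out[0::2] = (approx + d) / np.sqrt(2.0)
        out[1::2] = (approx - d) / np.sqrt(2.0)
        approx = out
    return approx

rng = np.random.default_rng(4)
t = np.arange(1024)
clean = np.sin(2 * np.pi * t / 64)       # stand-in for a birdsong component
noisy = clean + 0.4 * rng.standard_normal(t.size)
denoised = haar_denoise(noisy)
print(np.std(noisy - clean), np.std(denoised - clean))
```

The median-based sigma estimate from the finest detail band is the standard robust trick for unattended recordings, where the noise level is unknown and varies between files.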
Wavelet theory and its applications
Faber, V.; Bradley, JJ.; Brislawn, C.; Dougherty, R.; Hawrylycz, M.
1996-07-01
This is the final report of a three-year, Laboratory-Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). We investigated the theory of wavelet transforms and their relation to Laboratory applications. The investigators have had considerable success in the past applying wavelet techniques to the numerical solution of optimal control problems for distributed- parameter systems, nonlinear signal estimation, and compression of digital imagery and multidimensional data. Wavelet theory involves ideas from the fields of harmonic analysis, numerical linear algebra, digital signal processing, approximation theory, and numerical analysis, and the new computational tools arising from wavelet theory are proving to be ideal for many Laboratory applications. 10 refs.