Science.gov

Sample records for adaptive wavelet collocation

  1. Parallel adaptive wavelet collocation method for PDEs

    SciTech Connect

    Nejadmalayeri, Alireza; Vezolainen, Alexei; Brown-Dymkoski, Eric; Vasilyev, Oleg V.

    2015-10-01

    A parallel adaptive wavelet collocation method for solving a large class of partial differential equations is presented. Parallelization is achieved by developing an asynchronous parallel wavelet transform, which allows one to perform parallel wavelet transform and derivative calculations with only one data synchronization at the highest level of resolution. The data are stored using a tree-like structure with tree roots starting at an a priori defined level of resolution. Both static and dynamic domain partitioning approaches are developed. For dynamic domain partitioning, trees are considered the minimum quanta of data to be migrated between processes. This allows fully automated and efficient handling of non-simply-connected partitionings of the computational domain. Dynamic load balancing is achieved by repartitioning the domain during the grid adaptation step and reassigning trees to the appropriate processes to ensure approximately the same number of grid points on each process. The parallel efficiency of the approach is discussed based on parallel adaptive wavelet-based Coherent Vortex Simulations of homogeneous turbulence with linear forcing at effective non-adaptive resolutions up to 2048³ using as many as 2048 CPU cores.
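    The compression mechanism such methods build on can be illustrated with a minimal 1D sketch: hard-threshold the wavelet coefficients of a field and keep only the significant ones, which is what concentrates grid points near sharp features. This uses the PyWavelets library as an assumption and is purely illustrative, not the authors' parallel solver.

```python
import numpy as np
import pywt

def compressed_field(u, wavelet="db4", eps=1e-4, level=4):
    """Hard-threshold the wavelet coefficients of u, then reconstruct.

    Returns the reconstructed field and the fraction of coefficients kept."""
    coeffs = pywt.wavedec(u, wavelet, level=level)
    # keep the coarsest approximation intact; threshold the detail bands
    kept = [coeffs[0]] + [pywt.threshold(c, eps, mode="hard") for c in coeffs[1:]]
    nonzero = sum(int(np.count_nonzero(c)) for c in kept)
    total = sum(c.size for c in kept)
    return pywt.waverec(kept, wavelet), nonzero / total

x = np.linspace(0.0, 1.0, 1024)
u = np.tanh(50.0 * (x - 0.5))        # steep front: resolution needed only locally
u_rec, fill = compressed_field(u)
err = np.max(np.abs(u_rec - u))      # reconstruction error stays near the threshold
```

A solver built on this idea advances the solution only at the collocation points associated with the retained coefficients, so the sharp front is resolved without a globally fine grid.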

  2. Visualizing 3D Turbulence On Temporally Adaptive Wavelet Collocation Grids

    NASA Astrophysics Data System (ADS)

    Goldstein, D. E.; Kadlec, B. J.; Yuen, D. A.; Erlebacher, G.

    2005-12-01

    Today there is an explosion of data from high-resolution computations of nonlinear phenomena in many fields, including the geo- and environmental sciences. The efficient storage and subsequent visualization of these large data sets is a trade-off between storage costs and data quality. New dynamically adaptive simulation methodologies promise significant computational cost savings and have the added benefit of producing results on adapted grids that significantly reduce storage and data manipulation costs. Yet with these adaptive simulation methodologies come new challenges in the visualization of temporally adaptive data sets. In this work, turbulence data sets from Stochastic Coherent Adaptive Large Eddy Simulations (SCALES) are visualized with the open-source tool ParaView as a challenging case study. SCALES simulations use a temporally adaptive collocation grid, defined by wavelet threshold filtering, to resolve the most energetic coherent structures in a turbulence field. A subgrid-scale model is used to account for the effect of unresolved subgrid-scale modes. The results of the SCALES simulations are saved on a thresholded dyadic wavelet collocation grid, which by its nature does not include cell information. ParaView is an open-source visualization package developed by Kitware that is based on the widely used VTK graphics toolkit. The efficient generation of cell information, required by current ParaView data formats, is explored using custom algorithms and VTK toolkit routines. Adaptive 3D visualizations using isosurfaces and volume rendering are compared with non-adaptive visualizations. To explore the localized multiscale structures in the turbulent data sets, the wavelet coefficients are also visualized, allowing the energy contained in local physical regions as well as in local wavenumber space to be examined.

  3. Efficient Combustion Simulation via the Adaptive Wavelet Collocation Method

    NASA Astrophysics Data System (ADS)

    Lung, Kevin; Brown-Dymkoski, Eric; Guerrero, Victor; Doran, Eric; Museth, Ken; Balme, Jo; Urberger, Bob; Kessler, Andre; Jones, Stephen; Moses, Billy; Crognale, Anthony

    Rocket engine development continues to be driven by the intuition and experience of designers, progressing through extensive trial-and-error test campaigns. Extreme temperatures and pressures frustrate direct observation, while high-fidelity simulation can be impractically expensive owing to the inherent multi-scale, multi-physics nature of the problem. To address this cost, an adaptive multi-resolution PDE solver has been designed which targets the high-performance, many-core architecture of GPUs. The adaptive wavelet collocation method is used to maintain a sparse-data representation of the high-resolution simulation, greatly reducing the memory footprint while tightly controlling physical fidelity. The tensorial, stencil topology of wavelet-based grids lends itself to the highly vectorized algorithms that are necessary to exploit the performance of GPUs. This approach permits efficient implementation of direct finite-rate kinetics, and improved resolution of steep thermodynamic gradients and of the smaller mixing scales that drive combustion dynamics. Resolving these scales is crucial for accurate chemical kinetics, which are typically degraded or lost in statistical modeling approaches.

  4. An adaptive wavelet stochastic collocation method for irregular solutions of stochastic partial differential equations

    SciTech Connect

    Webster, Clayton G; Zhang, Guannan; Gunzburger, Max D

    2012-10-01

    Accurate predictive simulations of complex real-world applications require numerical approximations that, first, counter the curse of dimensionality and, second, converge quickly in the presence of steep gradients, sharp transitions, bifurcations, or finite discontinuities in high-dimensional parameter spaces. In this paper we present a novel multi-dimensional multi-resolution adaptive (MdMrA) sparse grid stochastic collocation method that utilizes hierarchical multiscale piecewise Riesz basis functions constructed from interpolating wavelets. The basis for our non-intrusive method forms a stable multiscale splitting, and thus optimal adaptation is achieved. Error estimates and numerical examples are used to compare the efficiency of the method with several other techniques.
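    The refinement driver in adaptive collocation with interpolating wavelets is the hierarchical surplus: the difference between a function value at a new point and the value predicted by interpolation from coarser points. A minimal 1D Python sketch of surplus-driven refinement follows; it is illustrative only (the paper's MdMrA method operates on multi-dimensional sparse grids), and the test function and tolerance are made up.

```python
import numpy as np

def adaptive_surplus_grid(f, tol=1e-3, max_level=12):
    """Refine an interval only where the hierarchical surplus exceeds tol,
    so collocation points cluster near sharp transitions of f."""
    pts = {0.0: f(0.0), 1.0: f(1.0)}
    active = [(0.0, 1.0)]
    while active:
        a, b = active.pop()
        m = 0.5 * (a + b)
        # hierarchical surplus: actual value minus linear prediction
        surplus = f(m) - 0.5 * (pts[a] + pts[b])
        pts[m] = f(m)
        if abs(surplus) > tol and (b - a) > 2.0 ** (-max_level):
            active += [(a, m), (m, b)]
    return np.array(sorted(pts))

f = lambda y: np.tanh(40.0 * (y - 0.3))   # steep response in parameter space
grid = adaptive_surplus_grid(f)           # points concentrate near y = 0.3
```

In the stochastic collocation setting, `y` plays the role of a random parameter, and the surplus criterion places samples only where the response surface is steep.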

  5. Adaptive wavelets and relativistic magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Hirschmann, Eric; Neilsen, David; Anderson, Matthe; Debuhr, Jackson; Zhang, Bo

    2016-03-01

    We present a method for integrating the relativistic magnetohydrodynamics equations using iterated interpolating wavelets, which provide an adaptive implementation for simulations in multiple dimensions. The wavelet coefficients provide a measure of the local approximation error of the solution, and they place collocation points in locations naturally adapted to the flow while providing the expected conservation. We present demanding 1D and 2D tests including the Kelvin-Helmholtz instability and the Rayleigh-Taylor instability. Finally, we consider an outgoing blast wave that models a GRB outflow.

  6. Adaptive Multilinear Tensor Product Wavelets.

    PubMed

    Weiss, Kenneth; Lindstrom, Peter

    2016-01-01

    Many foundational visualization techniques including isosurfacing, direct volume rendering and texture mapping rely on piecewise multilinear interpolation over the cells of a mesh. However, there has not been much focus within the visualization community on techniques that efficiently generate and encode globally continuous functions defined by the union of multilinear cells. Wavelets provide a rich context for analyzing and processing complicated datasets. In this paper, we exploit adaptive regular refinement as a means of representing and evaluating functions described by a subset of their nonzero wavelet coefficients. We analyze the dependencies involved in the wavelet transform and describe how to generate and represent the coarsest adaptive mesh with nodal function values such that the inverse wavelet transform is exactly reproduced via simple interpolation (subdivision) over the mesh elements. This allows for an adaptive, sparse representation of the function with on-demand evaluation at any point in the domain. We focus on the popular wavelets formed by tensor products of linear B-splines, resulting in an adaptive, nonconforming but crack-free quadtree (2D) or octree (3D) mesh that allows reproducing globally continuous functions via multilinear interpolation over its cells.
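    The paper's central mechanism, reproducing the inverse wavelet transform by simple interpolation (subdivision) over mesh elements, can be sketched in 1D for linear B-spline wavelets: each synthesis level predicts midpoints by linear interpolation and adds the stored detail coefficients. This is a hedged illustrative sketch (levels and data are invented), not the authors' multilinear tensor-product scheme.

```python
import numpy as np

def synthesize(coarse, details):
    """Inverse linear-B-spline wavelet transform by interpolatory subdivision:
    at each level, predict midpoints by linear interpolation of neighboring
    values, then add the detail (wavelet) coefficients for that level."""
    vals = np.asarray(coarse, dtype=float)
    for d in details:
        mid = 0.5 * (vals[:-1] + vals[1:]) + d   # predict + correct
        out = np.empty(vals.size + mid.size)
        out[0::2] = vals                         # interleave old values...
        out[1::2] = mid                          # ...with new midpoints
        vals = out
    return vals

# with all detail coefficients zero, subdivision reproduces linear interpolation,
# i.e. a globally continuous piecewise-linear function
samples = synthesize([0.0, 1.0], [np.zeros(1), np.zeros(2), np.zeros(4)])
```

An adaptive representation stores only the nonzero entries of `details`; wherever a detail is zero, the function is recovered by interpolation alone, which is exactly the on-demand evaluation property the abstract describes.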

  7. A Haar wavelet collocation method for coupled nonlinear Schrödinger-KdV equations

    NASA Astrophysics Data System (ADS)

    Oruç, Ömer; Esen, Alaattin; Bulut, Fatih

    2016-04-01

    In this paper, a Haar wavelet collocation method is proposed to obtain accurate numerical solutions of the coupled nonlinear Schrödinger-Korteweg-de Vries (KdV) equations. An explicit time-stepping scheme is used to discretize the time derivatives; the nonlinear terms appearing in the equations are linearized by a linearization technique, and the space derivatives are discretized by Haar wavelets. To test the accuracy and reliability of the proposed method, L2 and L∞ error norms and conserved quantities are used. The obtained results are also compared with previous results obtained by the finite element method, the Crank-Nicolson method, and radial basis function meshless methods. An error analysis of the Haar wavelets is also given.
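    The Haar machinery underlying such collocation methods can be sketched in a few lines: build the Haar matrix on 2^J collocation points and expand a function in the Haar basis. This only illustrates the basis construction and expansion, not the authors' time-stepping scheme for the coupled Schrödinger-KdV system.

```python
import numpy as np

def haar_matrix(n):
    """Unnormalized Haar matrix on n = 2^J points: row 0 is the scaling
    function, the remaining rows are dilated/translated Haar wavelets."""
    H = np.zeros((n, n))
    H[0, :] = 1.0
    row = 1
    j = 0
    while 2 ** j < n:
        for k in range(2 ** j):
            lo = k * n // 2 ** j
            mid = lo + n // 2 ** (j + 1)
            hi = lo + n // 2 ** j
            H[row, lo:mid] = 1.0       # positive half of the wavelet
            H[row, mid:hi] = -1.0      # negative half
            row += 1
        j += 1
    return H

n = 64
x = (np.arange(n) + 0.5) / n           # standard Haar collocation points
f = np.sin(2 * np.pi * x)
H = haar_matrix(n)
c = np.linalg.solve(H.T, f)            # expansion coefficients: f = H.T @ c
f_rec = H.T @ c                        # exact reconstruction at collocation points
```

In a collocation scheme, the highest space derivative is expanded in this basis and lower derivatives are obtained by integrating the Haar functions, which is what reduces the PDE to an algebraic system at the collocation points.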

  8. Nonlinear adaptive wavelet analysis of electrocardiogram signals

    NASA Astrophysics Data System (ADS)

    Yang, H.; Bukkapatnam, S. T.; Komanduri, R.

    2007-08-01

    Wavelet representation can provide an effective time-frequency analysis for nonstationary signals, such as electrocardiogram (EKG) signals, which contain both steady and transient parts. In recent years, wavelet representation has been emerging as a powerful time-frequency tool for the analysis and measurement of EKG signals. EKG signals contain recurring, near-periodic patterns of P, QRS, T, and U waveforms, each of which can have multiple manifestations. Identification and extraction of a compact set of features from these patterns is critical for effective detection and diagnosis of various disorders. This paper presents an approach to extract a fiducial pattern of the EKG based on consideration of the underlying nonlinear dynamics. The pattern, in a nutshell, is a combination of eigenfunctions of the ensembles created from a Poincaré section of the EKG dynamics. The adaptation of wavelet functions to the fiducial pattern thus extracted yields a two-orders-of-magnitude (some 95%) more compact representation (measured in terms of Shannon signal entropy). Such a compact representation can facilitate the extraction of features that are less sensitive to extraneous noise and other variations. The adaptive wavelet can also lead to more efficient algorithms for beat detection and QRS cancellation, as well as for the extraction of multiple classical EKG signal events, such as widths of QRS complexes and QT intervals.

  9. Adaptive wavelet Wiener filtering of ECG signals.

    PubMed

    Smital, Lukáš; Vítek, Martin; Kozumplík, Jiří; Provazník, Ivo

    2013-02-01

    In this study, we focused on the reduction of broadband myopotentials (EMG) in ECG signals using wavelet Wiener filtering with noise-free signal estimation. We used the dyadic stationary wavelet transform (SWT) both in the Wiener filter and in estimating the noise-free signal. Our goal was to find a suitable filter bank and to choose the other parameters of the Wiener filter with respect to the signal-to-noise ratio (SNR) obtained. Testing was performed on artificially noised signals from the standard CSE database sampled at 500 Hz. When creating the artificial interference, we started from generated white Gaussian noise whose power spectrum was modified according to a model of the power spectrum of an EMG signal. To improve the filtering performance, we set the filtering parameters adaptively according to the level of interference in the input signal. We were able to increase the average SNR of the whole test database by about 10.6 dB. The proposed algorithm provides better results than the classic wavelet Wiener filter.
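    The wavelet-domain Wiener shrinkage at the heart of such methods can be sketched as follows, assuming the PyWavelets library. The filter bank ("sym4"), decomposition depth, and median-based noise estimate are illustrative assumptions, not the authors' tuned design with a separate pilot estimate.

```python
import numpy as np
import pywt

def swt_wiener(noisy, wavelet="sym4", level=3):
    """Empirical Wiener shrinkage of stationary-wavelet detail coefficients."""
    coeffs = pywt.swt(noisy, wavelet, level=level)   # ordered coarsest first
    # robust noise estimate from the finest detail band
    sigma = np.median(np.abs(coeffs[-1][1])) / 0.6745
    shrunk = []
    for cA, cD in coeffs:
        gain = cD ** 2 / (cD ** 2 + sigma ** 2)      # per-coefficient Wiener gain
        shrunk.append((cA, cD * gain))
    return pywt.iswt(shrunk, wavelet)

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 1024, endpoint=False)
clean = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 12 * t)
noisy = clean + 0.4 * rng.standard_normal(t.size)
denoised = swt_wiener(noisy)
```

Because the SWT is undecimated, the shrinkage is shift-invariant, which is one reason the stationary transform is preferred over the decimated DWT for ECG denoising.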

  11. Adaptive wavelet simulation of global ocean dynamics

    NASA Astrophysics Data System (ADS)

    Kevlahan, N. K.-R.; Dubos, T.; Aechtner, M.

    2015-07-01

    In order to easily enforce solid-wall boundary conditions in the presence of complex coastlines, we propose a new mass- and energy-conserving Brinkman penalization for the rotating shallow water equations. This penalization does not lead to higher wave speeds in the solid region. Error estimates for the penalization are derived analytically and verified numerically for the linearized one-dimensional equations. The penalization is implemented in a conservative, dynamically adaptive wavelet method for the rotating shallow water equations on the sphere, with bathymetry and coastline data from NOAA's ETOPO1 database. This code could form the dynamical core of a future global ocean model. The potential of the dynamically adaptive ocean model is illustrated by using it to simulate the 2004 Indonesian tsunami and wind-driven gyres.

  12. A New Adaptive Mother Wavelet for Electromagnetic Transient Analysis

    NASA Astrophysics Data System (ADS)

    Guillén, Daniel; Idárraga-Ospina, Gina; Cortes, Camilo

    2016-01-01

    Wavelet Transform (WT) is a powerful signal processing technique whose applications in power systems have been increasing for evaluating system conditions such as faults, switching transients, and power quality issues. Electromagnetic transients in power systems are due to changes in the network configuration and produce non-periodic signals, which have to be identified to avoid power outages in normal operation or transient conditions. In this paper, a methodology to develop a new adaptive mother wavelet for electromagnetic transient analysis is proposed. Classification is carried out with an innovative technique based on adaptive wavelets, in which filter bank coefficients are adapted until a discriminant criterion is optimized. The corresponding filter coefficients are then used to obtain the new mother wavelet, named wavelet ET, which allows the high-frequency information produced by different electromagnetic transients to be identified and distinguished.

  13. Wavelet approximation of correlated wave functions. II. Hyperbolic wavelets and adaptive approximation schemes

    NASA Astrophysics Data System (ADS)

    Luo, Hongjun; Kolb, Dietmar; Flad, Heinz-Jurgen; Hackbusch, Wolfgang; Koprucki, Thomas

    2002-08-01

    We have studied various aspects concerning the use of hyperbolic wavelets and adaptive approximation schemes for wavelet expansions of correlated wave functions. In order to analyze the consequences of the reduced regularity of the wave function at the electron-electron cusp, we first considered a realistic, exactly solvable many-particle model in one dimension. Convergence rates of wavelet expansions, with respect to the L2 and H1 norms and the energy, were established for this model. We compare the performance of hyperbolic wavelets and their extensions through adaptive refinement in the cusp region to a fully adaptive treatment based on the energy contribution of individual wavelets. Although hyperbolic wavelets show inferior convergence behavior, they can be easily refined in the cusp region, yielding an optimal convergence rate for the energy. Preliminary results for the helium atom are presented, which demonstrate the transferability of our observations to more realistic systems. We propose a contraction scheme for wavelets in the cusp region, which reduces the number of degrees of freedom and yields a favorable cost-to-benefit ratio for the evaluation of matrix elements.

  14. An adaptive morphological gradient lifting wavelet for detecting bearing defects

    NASA Astrophysics Data System (ADS)

    Li, Bing; Zhang, Pei-lin; Mi, Shuang-shan; Hu, Ren-xi; Liu, Dong-sheng

    2012-05-01

    This paper presents a novel wavelet decomposition scheme, named the adaptive morphological gradient lifting wavelet (AMGLW), for detecting bearing defects. The adaptability of the AMGLW lies in the fact that the scheme can select between two filters, namely the average filter and the morphological gradient filter, to update the approximation signal based on the local gradient of the analyzed signal. Both a simulated signal and vibration signals acquired from bearings are employed to evaluate the proposed AMGLW scheme against the traditional linear wavelet transform (LWT) and another adaptive lifting wavelet (ALW) developed in the literature. Experimental results reveal that the AMGLW obviously outperforms the LWT and ALW for detecting bearing defects. The impulsive components can be enhanced and the noise suppressed simultaneously by the presented AMGLW scheme, so the fault characteristic frequencies of the bearing can be clearly identified. Furthermore, the AMGLW has an advantage over the LWT in computational efficiency, making it quite suitable for online condition monitoring of bearings and other rotating machinery.

  15. On the Use of Adaptive Wavelet-based Methods for Ocean Modeling and Data Assimilation Problems

    NASA Astrophysics Data System (ADS)

    Vasilyev, Oleg V.; Yousuff Hussaini, M.; Souopgui, Innocent

    2014-05-01

    Recent advancements in parallel wavelet-based numerical methodologies for the solution of partial differential equations, combined with the unique ability of wavelet analysis to unambiguously identify and isolate localized, dynamically dominant flow structures, make it feasible to start developing integrated approaches for ocean modeling and data assimilation problems that take advantage of temporally and spatially varying meshes. In this talk, the Parallel Adaptive Wavelet Collocation Method with spatially and temporally varying thresholding is presented, and the feasibility and potential advantages of its use for ocean modeling are discussed. The second half of the talk focuses on the recently developed Simultaneous Space-time Adaptive approach, which addresses one of the main challenges of variational data assimilation, namely the requirement to have the forward solution available when solving the adjoint problem. The issue is addressed by concurrently solving the forward and adjoint problems in the entire space-time domain on a near-optimal adaptive computational mesh that automatically adapts to the spatio-temporal structures of the solution. The compressed space-time form of the solution eliminates the need to save or recompute the forward solution for every time slice, as is typically done in traditional time-marching variational data assimilation approaches. The simultaneous spatio-temporal discretization of the forward and adjoint problems makes it possible to solve both concurrently on the same space-time adaptive computational mesh, reducing the amount of saved data to the strict minimum for a given a priori controlled accuracy of the solution. The simultaneous space-time adaptive approach to variational data assimilation is demonstrated for the advection-diffusion problem in 1D-t and 2D-t dimensions.

  16. Adaptive video compressed sampling in the wavelet domain

    NASA Astrophysics Data System (ADS)

    Dai, Hui-dong; Gu, Guo-hua; He, Wei-ji; Chen, Qian; Mao, Tian-yi

    2016-07-01

    In this work, we propose a multiscale video acquisition framework called adaptive video compressed sampling (AVCS) that involves sparse sampling and motion estimation in the wavelet domain. Implementing a combination of a binary DMD and a single-pixel detector, AVCS acquires successively finer resolution sparse wavelet representations in moving regions directly based on extended wavelet trees, and alternately uses these representations to estimate the motion in the wavelet domain. Then, we can remove the spatial and temporal redundancies and provide a method to reconstruct video sequences from compressed measurements in real time. In addition, the proposed method allows adaptive control over the reconstructed video quality. The numerical simulation and experimental results indicate that AVCS performs better than the conventional CS-based methods at the same sampling rate even under the influence of noise, and the reconstruction time and measurements required can be significantly reduced.

  17. Big data extraction with adaptive wavelet analysis (Presentation Video)

    NASA Astrophysics Data System (ADS)

    Qu, Hongya; Chen, Genda; Ni, Yiqing

    2015-04-01

    Nondestructive evaluation and sensing technology have been increasingly applied to characterize material properties and detect local damage in structures. More often than not, they generate images or data strings in which physical features are difficult to discern without novel data extraction techniques. In the literature, popular data analysis techniques include the Short-time Fourier Transform, Wavelet Transform, and Hilbert Transform, valued for their time efficiency and adaptive recognition. In this study, a new data analysis technique is proposed and developed by introducing an adaptive central frequency into the continuous Morlet wavelet transform, so that both high frequency resolution and high time resolution can be maintained in a time-frequency window of interest. The new analysis technique is referred to as Adaptive Wavelet Analysis (AWA). This paper is organized in several sections. In the first section, the finite time-frequency resolution limitations of the traditional wavelet transform are introduced; such limitations can greatly distort transformed signals whose frequency varies significantly with time. In the second section, the Short Time Wavelet Transform (STWT), analogous to the Short Time Fourier Transform (STFT), is defined and developed to overcome this shortcoming of the traditional wavelet transform. In the third section, by utilizing the STWT and a time-variant central frequency of the Morlet wavelet, AWA adapts the time-frequency resolution to the signal variation over time. Finally, the advantage of the proposed AWA is demonstrated in Section 4 with a ground penetrating radar (GPR) image from a bridge deck, an analytical chirp signal with a large sinusoidal frequency change over time, and the train-induced acceleration responses of the Tsing Ma Suspension Bridge in Hong Kong, China. The performance of the proposed AWA is compared with the STFT and the traditional wavelet transform.

  18. Data assimilation for unsaturated flow models with restart adaptive probabilistic collocation based Kalman filter

    NASA Astrophysics Data System (ADS)

    Man, Jun; Li, Weixuan; Zeng, Lingzao; Wu, Laosheng

    2016-06-01

    The ensemble Kalman filter (EnKF) has gained popularity in hydrological data assimilation problems. As a Monte Carlo based method, it usually requires a sufficiently large ensemble size to guarantee accuracy. As an alternative, the probabilistic collocation based Kalman filter (PCKF) employs a polynomial chaos expansion (PCE) to represent and propagate the uncertainties in parameters and states. However, PCKF suffers from the so-called "curse of dimensionality": its computational cost increases drastically with the number of parameters and the system nonlinearity. Furthermore, PCKF may fail to provide accurate estimations for strongly nonlinear models due to its joint updating scheme. Motivated by recent developments in uncertainty quantification and the EnKF, we propose a restart adaptive probabilistic collocation based Kalman filter (RAPCKF) for data assimilation in unsaturated flow problems. In the implementation of RAPCKF, the important parameters are identified and active PCE basis functions are adaptively selected at each assimilation step; the "restart" scheme is utilized to eliminate the inconsistency between updated model parameters and state variables. The performance of RAPCKF is systematically tested with numerical cases of unsaturated flow models. It is shown that the adaptive approach and the restart scheme can significantly improve the performance of PCKF. Moreover, RAPCKF has been demonstrated to be more efficient than the EnKF at the same computational cost.

  19. An Adaptive Digital Image Watermarking Algorithm Based on Morphological Haar Wavelet Transform

    NASA Astrophysics Data System (ADS)

    Huang, Xiaosheng; Zhao, Sujuan

    At present, most wavelet-based digital watermarking algorithms are based on linear wavelet transforms, and fewer are based on non-linear wavelet transforms. In this paper, we propose an adaptive digital image watermarking algorithm based on a non-linear wavelet transform, the Morphological Haar Wavelet Transform. In the algorithm, the original image and the watermark image are each decomposed with a multi-scale morphological wavelet transform. The watermark information is then adaptively embedded into the original image at different resolutions, exploiting features of the Human Visual System (HVS). Experimental results show that our method is more robust and effective than ordinary wavelet transform algorithms.

  20. Adaptive wavelet-based recognition of oscillatory patterns on electroencephalograms

    NASA Astrophysics Data System (ADS)

    Nazimov, Alexey I.; Pavlov, Alexey N.; Hramov, Alexander E.; Grubov, Vadim V.; Koronovskii, Alexey A.; Sitnikova, Evgenija Y.

    2013-02-01

    The problem of automatic recognition of specific oscillatory patterns on electroencephalograms (EEG) is addressed using the continuous wavelet transform (CWT). The possibility of improving the quality of recognition by optimizing the choice of the CWT parameters is discussed. An adaptive approach is proposed to identify sleep spindles (SS) and spike-wave discharges (SWD) that assumes automatic selection of the CWT parameters reflecting the most informative features of the analyzed time-frequency structures. Advantages of the proposed technique over standard wavelet-based approaches are considered.
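    The kind of CWT-based pattern detection described here can be sketched with PyWavelets: flag samples where band-limited wavelet energy crosses a threshold. The sampling rate, frequency band, scales, and threshold rule below are illustrative assumptions, not the adaptive parameter selection the paper proposes.

```python
import numpy as np
import pywt

# synthetic EEG-like record: noise plus a 12 Hz "spindle" burst between 4 s and 5 s
fs = 200.0
t = np.arange(0.0, 10.0, 1 / fs)
rng = np.random.default_rng(0)
sig = 0.3 * rng.standard_normal(t.size)
burst = (t > 4.0) & (t < 5.0)
sig[burst] += np.sin(2 * np.pi * 12.0 * t[burst])

# Morlet CWT over an assumed 9-15 Hz spindle band
freqs = np.arange(9.0, 16.0)
fc = pywt.central_frequency("morl")
scales = fc * fs / freqs                     # scale <-> frequency conversion
coefs, _ = pywt.cwt(sig, scales, "morl", sampling_period=1 / fs)

# band energy per sample; flag where it exceeds a simple median-based threshold
energy = np.sum(np.abs(coefs) ** 2, axis=0)
detected = energy > 3.0 * np.median(energy)
```

An adaptive scheme like the one in the abstract would instead tune the wavelet parameters (and hence the band and threshold) to the time-frequency signature of each pattern class.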

  1. Solution of Reactive Compressible Flows Using an Adaptive Wavelet Method

    NASA Astrophysics Data System (ADS)

    Zikoski, Zachary; Paolucci, Samuel; Powers, Joseph

    2008-11-01

    This work presents numerical simulations of reactive compressible flow, including detailed multicomponent transport, using an adaptive wavelet algorithm. The algorithm allows for dynamic grid adaptation which enhances our ability to fully resolve all physically relevant scales. The thermodynamic properties, equation of state, and multicomponent transport properties are provided by CHEMKIN and TRANSPORT libraries. Results for viscous detonation in a H2:O2:Ar mixture, and other problems in multiple dimensions, are included.

  2. A stable interface element scheme for the p-adaptive lifting collocation penalty formulation

    NASA Astrophysics Data System (ADS)

    Cagnone, J. S.; Nadarajah, S. K.

    2012-02-01

    This paper presents a procedure for adaptive polynomial refinement in the context of the lifting collocation penalty (LCP) formulation. The LCP scheme is a high-order unstructured discretization method unifying the discontinuous Galerkin, spectral volume, and spectral difference schemes in a single differential formulation. Due to the differential nature of the scheme, the treatment of inter-cell fluxes for spatially varying polynomial approximations is not straightforward. Specially designed elements are proposed to handle non-conforming polynomial approximations. These elements are constructed such that a conforming interface between polynomial approximations of different degrees is recovered. The stability and conservation properties of the scheme are analyzed, and various inviscid compressible flow calculations are performed to demonstrate the potential of the proposed approach.

  3. Solving Chemical Master Equations by an Adaptive Wavelet Method

    SciTech Connect

    Jahnke, Tobias; Galan, Steffen

    2008-09-01

    Solving chemical master equations is notoriously difficult due to the tremendous number of degrees of freedom. We present a new numerical method which efficiently reduces the size of the problem in an adaptive way. The method is based on a sparse wavelet representation and an algorithm which, in each time step, detects the essential degrees of freedom required to approximate the solution up to the desired accuracy.

  4. Vibration suppression in cutting tools using collocated piezoelectric sensors/actuators with an adaptive control algorithm

    SciTech Connect

    Radecki, Peter P; Farinholt, Kevin M; Park, Gyuhae; Bement, Matthew T

    2008-01-01

    The machining process is very important in many engineering applications. In high-precision machining, surface finish is strongly correlated with vibrations and the dynamic interactions between the part and the cutting tool. Parameters affecting these vibrations and dynamic interactions, such as spindle speed, cut depth, feed rate, and the part's material properties, can vary in real time, resulting in unexpected or undesirable effects on the surface finish of the machined product. The focus of this research is the development of an improved machining process through the use of active vibration damping. The tool holder employs a high-bandwidth piezoelectric actuator with an adaptive positive position feedback control algorithm for vibration and chatter suppression. In addition, instead of using external sensors, the proposed approach investigates the use of a collocated piezoelectric sensor for measuring the dynamic responses from machining processes. The performance of this method is evaluated by comparing the surface finishes obtained with active vibration control against baseline uncontrolled cuts. Considerable improvement in surface finish (up to 50%) was observed for applications in modern-day machining.

  5. [Adaptive de-noising of ECG signal based on stationary wavelet transform].

    PubMed

    Dong, Hong-sheng; Zhang, Ai-hua; Hao, Xiao-hong

    2009-03-01

    Given the limitations of the wavelet threshold de-noising method, we propose an algorithm combining the stationary wavelet transform with an adaptive filter. The stationary wavelet transform can effectively suppress the Gibbs phenomena of the traditional DWT, and the adaptive filter is introduced at the high-scale wavelet coefficients of the stationary wavelet transform. The method removes baseline wander while preserving the shape of the low-frequency, low-amplitude P wave, T wave, and ST segment of the ECG signal, which is important for analyzing other feature information of the ECG signal.

  6. Wavelet methods in multi-conjugate adaptive optics

    NASA Astrophysics Data System (ADS)

    Helin, T.; Yudytskiy, M.

    2013-08-01

    Next-generation ground-based telescopes rely heavily on adaptive optics to overcome the limitations of atmospheric turbulence. In future adaptive optics modalities, such as multi-conjugate adaptive optics (MCAO), atmospheric tomography is the major mathematical and computational challenge. In this severely ill-posed problem, a fast and stable reconstruction algorithm is needed that can take into account many real-life phenomena of telescope imaging. We introduce a novel reconstruction method for the atmospheric tomography problem and demonstrate its performance and flexibility in the context of MCAO. Our method is based on the locality properties of compactly supported wavelets, in both the spatial and frequency domains. The reconstruction in the atmospheric tomography problem is obtained by solving for the Bayesian MAP estimator with a conjugate-gradient-based algorithm. An accelerated algorithm with preconditioning is also introduced. Numerical performance is demonstrated on OCTOPUS, the official end-to-end simulation tool of the European Southern Observatory.

  7. A wavelet packet adaptive filtering algorithm for enhancing manatee vocalizations.

    PubMed

    Gur, M Berke; Niezrecki, Christopher

    2011-04-01

Approximately a quarter of all West Indian manatee (Trichechus manatus latirostris) mortalities are attributed to collisions with watercraft. A boater warning system based on the passive acoustic detection of manatee vocalizations is one possible solution to reduce manatee-watercraft collisions. The success of such a warning system depends on effective enhancement of the vocalization signals in the presence of high levels of background noise, in particular, noise emitted from watercraft. Recent research has indicated that wavelet domain pre-processing of the noisy vocalizations is capable of significantly improving the detection ranges of passive acoustic vocalization detectors. In this paper, an adaptive denoising procedure, implemented on the wavelet packet transform coefficients obtained from the noisy vocalization signals, is investigated. The proposed denoising algorithm is shown to improve the manatee detection ranges by a factor ranging from two (minimum) to sixteen (maximum) compared to high-pass filtering alone, when evaluated using real manatee vocalization and background noise signals of varying signal-to-noise ratios (SNR). Furthermore, the proposed method is also shown to outperform a previously suggested feedback adaptive line enhancer (FALE) filter by an average of 3.4 dB in terms of noise suppression and 0.6 dB in terms of waveform preservation. PMID:21476661

  9. Adaptive segmentation of wavelet transform coefficients for video compression

    NASA Astrophysics Data System (ADS)

    Wasilewski, Piotr

    2000-04-01

This paper presents a video compression algorithm suitable for inexpensive real-time hardware implementation. The algorithm utilizes the Discrete Wavelet Transform (DWT) with a new Adaptive Spatial Segmentation Algorithm (ASSA). It was designed to obtain decompressed video quality better than or similar to the H.263 recommendation and the MPEG standard at lower computational effort, especially at high compression rates. The algorithm was optimized for hardware implementation in low-cost Field Programmable Gate Array (FPGA) devices. The luminance and chrominance components of every frame are encoded with a 3-level wavelet transform using a biorthogonal filter bank. The low-frequency subimage is encoded with an ADPCM algorithm. For the high-frequency subimages the new Adaptive Spatial Segmentation Algorithm is applied. It divides images into rectangular blocks that may overlap each other. The width and height of the blocks are set independently. There are two kinds of blocks: Low Variance Blocks (LVB) and High Variance Blocks (HVB). The positions of the blocks and the values of the WT coefficients belonging to the HVB are encoded with modified zero-tree algorithms. LVB are encoded with their mean value. The obtained results show that the presented algorithm gives quality of decompressed images similar to or better than H.263, by up to 5 dB in PSNR.
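A toy version of the subband decomposition and LVB/HVB classification can be sketched as follows; note it uses a one-level orthonormal Haar transform instead of the paper's 3-level biorthogonal bank, and the variance threshold is arbitrary:

```python
import numpy as np

def haar2d(img):
    """One-level 2-D orthonormal Haar transform -> (LL, LH, HL, HH) subbands."""
    a = img.astype(float)
    lo = (a[:, 0::2] + a[:, 1::2]) / np.sqrt(2)   # pair adjacent columns
    hi = (a[:, 0::2] - a[:, 1::2]) / np.sqrt(2)
    ll = (lo[0::2] + lo[1::2]) / np.sqrt(2)       # then adjacent rows
    lh = (lo[0::2] - lo[1::2]) / np.sqrt(2)
    hl = (hi[0::2] + hi[1::2]) / np.sqrt(2)
    hh = (hi[0::2] - hi[1::2]) / np.sqrt(2)
    return ll, lh, hl, hh

def classify_blocks(subband, size=8, var_thresh=1.0):
    """Split a subband into size x size blocks; flag the high-variance ones (HVB)."""
    h, w = subband.shape
    flags = np.zeros((h // size, w // size), dtype=bool)
    for i in range(h // size):
        for j in range(w // size):
            block = subband[i*size:(i+1)*size, j*size:(j+1)*size]
            flags[i, j] = block.var() > var_thresh
    return flags

rng = np.random.default_rng(0)
img = np.zeros((64, 64))
img[23:39, 23:39] = 100.0          # a sharp square: its corners light up HH
img += rng.standard_normal(img.shape)
ll, lh, hl, hh = haar2d(img)
flags = classify_blocks(hh, size=8, var_thresh=5.0)
```

Because the transform is orthonormal, the total energy of the four subbands equals that of the image; only the flagged blocks would need detailed (zero-tree) coding.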

  10. Adaptive wavelet detection of transients using the bootstrap

    NASA Astrophysics Data System (ADS)

    Hewer, Gary A.; Kuo, Wei; Peterson, Lawrence A.

    1996-03-01

A Daubechies wavelet-based bootstrap detection strategy based on the research of Carmona was applied to a set of test signals. The detector was a function of the d-scales. The adaptive detection statistics were derived using Efron's bootstrap methodology, which freed us from having to make parametric assumptions about the underlying noise and offered a method of overcoming the constraints of modeling the detector statistics. The test set of signals used to evaluate the Daubechies/bootstrap pulse detector was generated with a Hewlett-Packard Fast Agile Signal Simulator (FASS). These video pulses, with varying signal-to-noise ratios (SNRs), included unmodulated, linear-chirp, and Barker phase-coded intermediate-frequency (IF) video pulses mixed with additive white Gaussian noise. Simulated examples illustrating the bootstrap methodology are presented, along with a complete set of constant false alarm rate (CFAR) detection statistics for the test signals. The CFAR curves clearly show that the wavelet bootstrap can adaptively detect transient pulses at low SNRs.
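The bootstrap thresholding step can be sketched generically: resample a noise-only window of wavelet coefficients to estimate the distribution of a maximum statistic, then set a CFAR threshold from its quantile. Everything below (noise model, window size, statistic) is illustrative, not the paper's detector:

```python
import numpy as np

rng = np.random.default_rng(0)

# noise-only reference window of wavelet detail coefficients
noise_coeffs = rng.standard_normal(256)

# bootstrap the distribution of the max-magnitude statistic
B = 2000
n = noise_coeffs.size
boot_max = np.empty(B)
for b in range(B):
    resample = rng.choice(noise_coeffs, size=n, replace=True)
    boot_max[b] = np.abs(resample).max()

# CFAR threshold at the (1 - alpha) quantile; no Gaussian assumption is used,
# which is the point of the bootstrap approach
alpha = 0.05
threshold = np.quantile(boot_max, 1 - alpha)

# a transient whose coefficient magnitude clears the threshold is declared
pulse_coeff = 6.0
detected = abs(pulse_coeff) > threshold
```

The same recipe works for any detection statistic computed from the d-scale coefficients; only the resampled quantity changes.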

  11. Adaptive directional wavelet transform based on directional prefiltering.

    PubMed

    Tanaka, Yuichi; Hasegawa, Madoka; Kato, Shigeo; Ikehara, Masaaki; Nguyen, Truong Q

    2010-04-01

This paper proposes an efficient approach to the adaptive directional wavelet transform (WT) based on directional prefiltering. Although the adaptive directional WT is able to transform an image along diagonal orientations as well as the traditional horizontal and vertical directions, it sacrifices computation speed for good image coding performance. We present two efficient methods for finding the best transform directions by prefiltering, using either a 2-D filter bank or a 1-D directional WT along two fixed directions. The proposed direction calculation methods achieve image coding performance comparable to the conventional method with less complexity. Furthermore, the transform direction data of the proposed method can be used in content-based image retrieval to increase the retrieval ratio. PMID:20028625

  12. Adaptive Wavelet-Based Direct Numerical Simulations of Rayleigh-Taylor Instability

    NASA Astrophysics Data System (ADS)

    Reckinger, Scott J.

    The compressible Rayleigh-Taylor instability (RTI) occurs when a fluid of low molar mass supports a fluid of higher molar mass against a gravity-like body force or in the presence of an accelerating front. Intrinsic to the problem are highly stratified background states, acoustic waves, and a wide range of physical scales. The objective of this thesis is to develop a specialized computational framework that addresses these challenges and to apply the advanced methodologies for direct numerical simulations of compressible RTI. Simulations are performed using the Parallel Adaptive Wavelet Collocation Method (PAWCM). Due to the physics-based adaptivity and direct error control of the method, PAWCM is ideal for resolving the wide range of scales present in RTI growth. Characteristics-based non-reflecting boundary conditions are developed for highly stratified systems to be used in conjunction with PAWCM. This combination allows for extremely long domains, which is necessary for observing the late time growth of compressible RTI. Initial conditions that minimize acoustic disturbances are also developed. The initialization is consistent with linear stability theory, where the background state consists of two diffusively mixed stratified fluids of differing molar masses. The compressibility effects on the departure from the linear growth, the onset of strong non-linear interactions, and the late-time behavior of the fluid structures are investigated. It is discovered that, for the thermal equilibrium case, the background stratification acts to suppress the instability growth when the molar mass difference is small. A reversal in this monotonic behavior is observed for large molar mass differences, where stratification enhances the bubble growth. Stratification also affects the vortex creation and the associated induced velocities. The enhancement and suppression of the RTI growth has important consequences for a detailed understanding of supernovae flame front

  13. Fault Analysis of Space Station DC Power Systems-Using Neural Network Adaptive Wavelets to Detect Faults

    NASA Technical Reports Server (NTRS)

    Momoh, James A.; Wang, Yanchun; Dolce, James L.

    1997-01-01

This paper describes the application of neural network adaptive wavelets to fault diagnosis of the space station power system. The method combines the wavelet transform with a neural network by incorporating daughter wavelets into the weights. The wavelet transform and the neural network training procedure thus become a single stage, which avoids the complex computation of wavelet parameters and makes the procedure more straightforward. Simulation results show that the proposed method is very efficient for identifying fault locations.

  14. Fast Fourier and Wavelet Transforms for Wavefront Reconstruction in Adaptive Optics

    SciTech Connect

    Dowla, F U; Brase, J M; Olivier, S S

    2000-07-28

Wavefront reconstruction techniques using least-squares estimators are computationally quite expensive. We compare wavelet- and Fourier-transform techniques in addressing the computational issues of wavefront reconstruction in adaptive optics. It is shown that because the Fourier approach, unlike the wavelet method, is not simply a numerical approximation technique, it might have advantages in terms of numerical accuracy. Strictly from a numerical-computation viewpoint, however, the wavelet approximation method might have an advantage in terms of speed. To optimize the wavelet method, a statistical study might be necessary to choose the best basis functions or ''approximation tree.''
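As a minimal illustration of the Fourier route (a periodic toy phase screen with spectrally generated slopes, not a realistic adaptive optics geometry), the least-squares Fourier reconstructor inverts the slope operator in the frequency domain:

```python
import numpy as np

N = 64
x = np.arange(N) * 2 * np.pi / N
X, Y = np.meshgrid(x, x, indexing="ij")
phi = np.sin(3 * X) * np.cos(2 * Y) + 0.5 * np.cos(5 * Y)   # "true" wavefront

k = np.fft.fftfreq(N) * N                 # integer wavenumbers
KX, KY = np.meshgrid(k, k, indexing="ij")

# measured slopes (generated spectrally here, so they are exact)
phi_hat = np.fft.fft2(phi)
sx = np.real(np.fft.ifft2(1j * KX * phi_hat))
sy = np.real(np.fft.ifft2(1j * KY * phi_hat))

# least-squares Fourier reconstructor:
#   phi_hat = (-i kx Sx_hat - i ky Sy_hat) / (kx^2 + ky^2), piston undefined
Sx, Sy = np.fft.fft2(sx), np.fft.fft2(sy)
K2 = KX**2 + KY**2
K2[0, 0] = 1.0                            # avoid division by zero at DC
rec_hat = (-1j * KX * Sx - 1j * KY * Sy) / K2
rec_hat[0, 0] = 0.0                       # remove the (unobservable) piston mode
rec = np.real(np.fft.ifft2(rec_hat))

err = rec - (phi - phi.mean())
```

Each FFT costs O(N^2 log N), which is the computational advantage over explicit least-squares matrix inversion that the paper weighs against the wavelet approach.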

  15. A framework for evaluating wavelet based watermarking for scalable coded digital item adaptation attacks

    NASA Astrophysics Data System (ADS)

    Bhowmik, Deepayan; Abhayaratne, Charith

    2009-02-01

A framework for evaluating wavelet based watermarking schemes against scalable coded visual media content adaptation attacks is presented. The framework, Watermark Evaluation Bench for Content Adaptation Modes (WEBCAM), aims to facilitate controlled evaluation of wavelet based watermarking schemes under MPEG-21 part-7 digital item adaptations (DIA). WEBCAM accommodates all major wavelet based watermarking schemes in a single generalised framework by considering a global parameter space from which the optimum parameters for a specific algorithm may be chosen. WEBCAM considers the traversal of media content along various links and the required content adaptations at various nodes of media supply chains. In this paper, content adaptation is emulated by JPEG2000 coded bit stream extraction for various spatial resolutions and quality levels of the content. The proposed framework is beneficial not only as an evaluation tool but also as a design tool for new wavelet based watermarking algorithms, allowing picking and mixing of available tools and finding of the optimum design parameters.

  16. Haar wavelet processor for adaptive on-line image compression

    NASA Astrophysics Data System (ADS)

    Diaz, F. Javier; Buron, Angel M.; Solana, Jose M.

    2005-06-01

An image coding processing scheme based on a variant of the Haar Wavelet Transform that uses only addition and subtraction is presented. After computing the transform, the selection and coding of the coefficients is performed using a methodology optimized to attain the lowest hardware implementation complexity. Coefficients are sorted in groups according to the number of pixels used in their computation. The idea behind it is to use a different threshold for each group of coefficients; these thresholds are obtained recurrently from an initial one. Parameter values used to achieve the desired compression level are established "on-line", adapting their values to each image, which leads to an improvement in the quality obtained for a preset compression level. Despite its adaptive characteristic, the coding scheme presented leads to a hardware implementation of markedly low circuit complexity. The compression reached for images of 512x512 pixels (256 grey levels) is over 22:1 (~0.4 bits/pixel) with an rmse of 8-10%. An image processor (excluding memory) prototype designed to compute the proposed transform has been implemented using FPGA chips. The processor for images of 256x256 pixels has been implemented using only one general-purpose low-cost FPGA chip, thus proving the design reliability and its relative simplicity.
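The addition/subtraction-only idea can be illustrated with the integer Haar (S) transform, a generic lifting scheme rather than the authors' specific variant; it needs only adds, subtracts, and shifts, and is exactly invertible:

```python
def haar_forward(pairs):
    """Integer Haar via lifting: detail d = a - b, scaled sum s = b + (d >> 1)."""
    out = []
    for a, b in pairs:
        d = a - b
        s = b + (d >> 1)      # arithmetic shift = floor division, stays integer
        out.append((s, d))
    return out

def haar_inverse(pairs):
    """Undo the lifting steps in reverse order; reconstruction is exact."""
    out = []
    for s, d in pairs:
        b = s - (d >> 1)
        a = d + b
        out.append((a, b))
    return out

samples = [(12, 10), (255, 0), (7, 7), (-3, 100)]
roundtrip = haar_inverse(haar_forward(samples))
```

Because both directions use the same floor-divided term, rounding cancels and no multiplier is needed, which is what makes this family of transforms attractive for small FPGAs.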

  17. Morphology analysis of EKG R waves using wavelets with adaptive parameters derived from fuzzy logic

    NASA Astrophysics Data System (ADS)

    Caldwell, Max A.; Barrington, William W.; Miles, Richard R.

    1996-03-01

Understanding of the EKG components P, QRS (R wave), and T is essential in recognizing cardiac disorders and arrhythmias. An estimation method is presented that models the R wave component of the EKG by adaptively computing wavelet parameters using fuzzy logic. The parameters are adaptively adjusted to minimize the difference between the original EKG waveform and the wavelet. The R wave estimate is derived by minimizing the combination of mean squared error (MSE), amplitude difference, spread difference, and shift difference. We show that the MSE in both noise-free and additive-noise environments is lower with an adaptive wavelet than with a static wavelet. Research to date has focused on the R wave component of the EKG signal. Extensions of this method to model P and T waves are discussed.

  18. Wavelet-Based Speech Enhancement Using Time-Adapted Noise Estimation

    NASA Astrophysics Data System (ADS)

    Lei, Sheau-Fang; Tung, Ying-Kai

    Spectral subtraction is commonly used for speech enhancement in a single channel system because of the simplicity of its implementation. However, this algorithm introduces perceptually musical noise while suppressing the background noise. We propose a wavelet-based approach in this paper for suppressing the background noise for speech enhancement in a single channel system. The wavelet packet transform, which emulates the human auditory system, is used to decompose the noisy signal into critical bands. Wavelet thresholding is then temporally adjusted with the noise power by time-adapted noise estimation. The proposed algorithm can efficiently suppress the noise while reducing speech distortion. Experimental results, including several objective measurements, show that the proposed wavelet-based algorithm outperforms spectral subtraction and other wavelet-based denoising approaches for speech enhancement for nonstationary noise environments.

  19. An adaptive demodulation approach for bearing fault detection based on adaptive wavelet filtering and spectral subtraction

    NASA Astrophysics Data System (ADS)

    Zhang, Yan; Tang, Baoping; Liu, Ziran; Chen, Rengxiang

    2016-02-01

    Fault diagnosis of rolling element bearings is important for improving mechanical system reliability and performance. Vibration signals contain a wealth of complex information useful for state monitoring and fault diagnosis. However, any fault-related impulses in the original signal are often severely tainted by various noises and the interfering vibrations caused by other machine elements. Narrow-band amplitude demodulation has been an effective technique to detect bearing faults by identifying bearing fault characteristic frequencies. To achieve this, the key step is to remove the corrupting noise and interference, and to enhance the weak signatures of the bearing fault. In this paper, a new method based on adaptive wavelet filtering and spectral subtraction is proposed for fault diagnosis in bearings. First, to eliminate the frequency associated with interfering vibrations, the vibration signal is bandpass filtered with a Morlet wavelet filter whose parameters (i.e. center frequency and bandwidth) are selected in separate steps. An alternative and efficient method of determining the center frequency is proposed that utilizes the statistical information contained in the production functions (PFs). The bandwidth parameter is optimized using a local ‘greedy’ scheme along with Shannon wavelet entropy criterion. Then, to further reduce the residual in-band noise in the filtered signal, a spectral subtraction procedure is elaborated after wavelet filtering. Instead of resorting to a reference signal as in the majority of papers in the literature, the new method estimates the power spectral density of the in-band noise from the associated PF. The effectiveness of the proposed method is validated using simulated data, test rig data, and vibration data recorded from the transmission system of a helicopter. The experimental results and comparisons with other methods indicate that the proposed method is an effective approach to detecting the fault-related impulses
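The Morlet bandpass and demodulation steps can be sketched with a frequency-domain Gaussian filter; here the center frequency and bandwidth are fixed by hand, whereas the paper selects them adaptively from the production functions:

```python
import numpy as np

fs = 8192
t = np.arange(fs) / fs                                   # 1 s of signal
sig = np.sin(2*np.pi*1000*t) + np.sin(2*np.pi*100*t)     # in-band + out-of-band tones

def morlet_bandpass_envelope(x, fs, fc, sigma_f):
    """One-sided Gaussian bandpass in the frequency domain; |result| is the envelope."""
    n = x.size
    f = np.fft.fftfreq(n, 1/fs)
    H = np.exp(-0.5 * ((f - fc) / sigma_f) ** 2)  # Gaussian passband around +fc
    H[f < 0] = 0.0                                # keep positive freqs -> analytic signal
    analytic = np.fft.ifft(np.fft.fft(x) * 2 * H)
    return np.abs(analytic)

env = morlet_bandpass_envelope(sig, fs, fc=1000.0, sigma_f=100.0)
```

The 100 Hz interference is suppressed by roughly exp(-40.5) while the 1000 Hz component passes at unit gain, so the envelope is essentially flat at the in-band amplitude; in a bearing application the envelope's spectrum would then be searched for the fault characteristic frequency.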

  20. Detailed resolution of the nonlinear Schrodinger equation using the full adaptive wavelet transform

    NASA Astrophysics Data System (ADS)

    Stedham, Mark A.; Banerjee, Partha P.

    2000-04-01

The propagation of optical pulses in nonlinear optical fibers is described by the nonlinear Schrodinger (NLS) equation. This equation can generally be solved exactly using the inverse scattering method or, for more detailed analysis, through the use of numerical techniques. Perhaps the best known numerical technique for solving the NLS equation is the split-step Fourier method, which effects a solution by assuming that the dispersion and nonlinear effects act independently during pulse propagation along the fiber. In this paper we describe an alternative numerical solution to the NLS equation using an adaptive wavelet transform technique, done entirely in the wavelet domain. This technique differs from previous work involving wavelet solutions to the NLS equation in that these previous works used a 'split-step wavelet' method in which the linear analysis was performed in the wavelet domain while the nonlinear portion was done in the space domain. Our method takes full advantage of the set of wavelet coefficients, thus allowing the flexibility to investigate pulse propagation entirely in either the wavelet or the space domain. Additionally, this method is fully adaptive in that it is capable of accurately tracking steep gradients which may occur during the numerical simulation.
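For comparison, the split-step Fourier method mentioned above can be sketched for the focusing NLS equation i u_z + u_tt/2 + |u|^2 u = 0, whose fundamental soliton sech(t) e^(iz/2) should propagate with unchanged modulus (grid and step sizes here are arbitrary choices):

```python
import numpy as np

N, L = 256, 40.0
t = (np.arange(N) - N // 2) * (L / N)        # grid on [-20, 20)
w = 2 * np.pi * np.fft.fftfreq(N, d=L / N)   # angular frequencies

u = 1 / np.cosh(t)                           # fundamental soliton at z = 0
dz, steps = 0.01, 100                        # propagate to z = 1

half_linear = np.exp(-0.25j * w**2 * dz)     # dispersion over dz/2: u_hat *= e^{-i w^2 dz/4}
for _ in range(steps):                       # Strang splitting: L/2, N, L/2
    u = np.fft.ifft(half_linear * np.fft.fft(u))
    u = u * np.exp(1j * np.abs(u)**2 * dz)   # exact nonlinear phase rotation
    u = np.fft.ifft(half_linear * np.fft.fft(u))

modulus_error = np.max(np.abs(np.abs(u) - 1 / np.cosh(t)))
```

The nonlinear step is exact because |u| is invariant under a pure phase rotation; all of the splitting error comes from interleaving it with the dispersive step.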

  1. Wavelet multiresolution analyses adapted for the fast solution of boundary value ordinary differential equations

    NASA Technical Reports Server (NTRS)

    Jawerth, Bjoern; Sweldens, Wim

    1993-01-01

    We present ideas on how to use wavelets in the solution of boundary value ordinary differential equations. Rather than using classical wavelets, we adapt their construction so that they become (bi)orthogonal with respect to the inner product defined by the operator. The stiffness matrix in a Galerkin method then becomes diagonal and can thus be trivially inverted. We show how one can construct an O(N) algorithm for various constant and variable coefficient operators.

  2. A wavelet approach to binary blackholes with asynchronous multitasking

    NASA Astrophysics Data System (ADS)

    Lim, Hyun; Hirschmann, Eric; Neilsen, David; Anderson, Matthew; Debuhr, Jackson; Zhang, Bo

    2016-03-01

Highly accurate simulations of binary black holes and neutron stars are needed to address a variety of interesting problems in relativistic astrophysics. We present a new method for solving the Einstein equations (BSSN formulation) using iterated interpolating wavelets. Wavelet coefficients provide a direct measure of the local approximation error for the solution and place collocation points that naturally adapt to features of the solution. Further, they exhibit exponential convergence on unevenly spaced collocation points. The parallel implementation of the wavelet simulation framework presented here deviates from conventional practice in combining multi-threading with a form of message-driven computation sometimes referred to as asynchronous multitasking.

  3. Compression of the electrocardiogram (ECG) using an adaptive orthonormal wavelet basis architecture

    NASA Astrophysics Data System (ADS)

    Anandkumar, Janavikulam; Szu, Harold H.

    1995-04-01

This paper deals with the compression of electrocardiogram (ECG) signals using a large library of orthonormal basis functions that are translated and dilated versions of Daubechies wavelets. The wavelet transform has been implemented using quadrature mirror filters (QMF) employed in a sub-band coding scheme. Interesting transients and notable frequencies of the ECG are captured by appropriately scaled waveforms chosen in a parallel fashion from this collection of wavelets. Since there is a choice of orthonormal basis functions for the efficient transcription of the ECG, it is then possible to choose the best one by various criteria. We have imposed very stringent threshold conditions on the wavelet expansion coefficients, such as maintaining a very large percentage of the energy of the current signal segment, and this has resulted in reconstructed waveforms with negligible distortion relative to the source signal. Even without the use of any specialized quantizers and encoders, the compression ratio numbers look encouraging, with preliminary results indicating compression ratios ranging from 40:1 to 15:1 at percentage rms distortions ranging from about 22% to 2.3%, respectively. Irrespective of the ECG lead chosen, or the signal deviations that may occur due to either noise or arrhythmias, only the one wavelet family that correlates best with that particular portion of the signal is chosen. The main reason for the compression is that the chosen mother wavelet and its variations match the shape of the ECG and are able to efficiently transcribe the source with few wavelet coefficients. The adaptive template matching architecture that carries out a parallel search of the transform domain is described, and preliminary simulation results are discussed. The adaptivity of the architecture comes from the fine tuning of the wavelet selection process, which is based on localized constraints such as the shape of the signal and its energy.

  4. Serial identification of EEG patterns using adaptive wavelet-based analysis

    NASA Astrophysics Data System (ADS)

    Nazimov, A. I.; Pavlov, A. N.; Nazimova, A. A.; Grubov, V. V.; Koronovskii, A. A.; Sitnikova, E.; Hramov, A. E.

    2013-10-01

The problem of recognizing specific oscillatory patterns in electroencephalograms with the continuous wavelet transform is discussed. Aiming to improve the abilities of wavelet-based tools, we propose a serial adaptive method for the sequential identification of EEG patterns such as sleep spindles and spike-wave discharges. This method provides an optimal selection of parameters based on objective functions and enables extraction of the most informative features of the recognized structures. Different ways of increasing the quality of pattern recognition within the proposed serial adaptive technique are considered.

  5. Multiresolution Wavelet Based Adaptive Numerical Dissipation Control for Shock-Turbulence Computations

    NASA Technical Reports Server (NTRS)

    Sjoegreen, B.; Yee, H. C.

    2001-01-01

The recently developed essentially fourth-order or higher low dissipative shock-capturing scheme of Yee, Sandham and Djomehri (1999) aimed at minimizing numerical dissipation for high speed compressible viscous flows containing shocks, shears and turbulence. To detect non-smooth behavior and control the amount of numerical dissipation to be added, Yee et al. employed an artificial compression method (ACM) of Harten (1978) but utilized it in an entirely different context than Harten originally intended. The ACM sensor consists of two tuning parameters and is highly physical problem dependent. To minimize the tuning of parameters and physical problem dependence, new sensors with improved detection properties are proposed. The new sensors are derived from utilizing appropriate non-orthogonal wavelet basis functions and they can be used to completely switch off the extra numerical dissipation outside shock layers. The non-dissipative spatial base scheme of arbitrarily high order of accuracy can be maintained without compromising its stability at all parts of the domain where the solution is smooth. Two types of redundant non-orthogonal wavelet basis functions are considered. One is the B-spline wavelet (Mallat & Zhong 1992) used by Gerritsen and Olsson (1996) in an adaptive mesh refinement method, to determine regions where refinement should be done. The other is the modification of the multiresolution method of Harten (1995) by converting it to a new, redundant, non-orthogonal wavelet. The wavelet sensor is then obtained by computing the estimated Lipschitz exponent of a chosen physical quantity (or vector) to be sensed on a chosen wavelet basis function. Both wavelet sensors can be viewed as dual-purpose adaptive methods leading to dynamic numerical dissipation control and improved grid adaptation indicators. Consequently, they are useful not only for shock-turbulence computations but also for computational aeroacoustics and numerical combustion.
In addition, these

  6. An adaptive sparse-grid high-order stochastic collocation method for Bayesian inference in groundwater reactive transport modeling

    SciTech Connect

    Zhang, Guannan; Webster, Clayton G; Gunzburger, Max D

    2012-09-01

Although Bayesian analysis has become vital to the quantification of prediction uncertainty in groundwater modeling, its application has been hindered due to the computational cost associated with numerous model executions needed for exploring the posterior probability density function (PPDF) of model parameters. This is particularly the case when the PPDF is estimated using Markov Chain Monte Carlo (MCMC) sampling. In this study, we develop a new approach that improves computational efficiency of Bayesian inference by constructing a surrogate system based on an adaptive sparse-grid high-order stochastic collocation (aSG-hSC) method. Unlike previous works using first-order hierarchical basis, we utilize a compactly supported higher-order hierarchical basis to construct the surrogate system, resulting in a significant reduction in the number of computational simulations required. In addition, we use hierarchical surplus as an error indicator to determine adaptive sparse grids. This allows local refinement in the uncertain domain and/or anisotropic detection with respect to the random model parameters, which further improves computational efficiency. Finally, we incorporate a global optimization technique and propose an iterative algorithm for building the surrogate system for the PPDF with multiple significant modes. Once the surrogate system is determined, the PPDF can be evaluated by sampling the surrogate system directly with very little computational cost. The developed method is evaluated first using a simple analytical density function with multiple modes and then using two synthetic groundwater reactive transport models. The groundwater models represent different levels of complexity; the first example involves coupled linear reactions and the second example simulates nonlinear uranium surface complexation.
The results show that the aSG-hSC is an effective and efficient tool for Bayesian inference in groundwater modeling in comparison with conventional

  7. Mouse EEG spike detection based on the adapted continuous wavelet transform

    NASA Astrophysics Data System (ADS)

    Tieng, Quang M.; Kharatishvili, Irina; Chen, Min; Reutens, David C.

    2016-04-01

    Objective. Electroencephalography (EEG) is an important tool in the diagnosis of epilepsy. Interictal spikes on EEG are used to monitor the development of epilepsy and the effects of drug therapy. EEG recordings are generally long and the data voluminous. Thus developing a sensitive and reliable automated algorithm for analyzing EEG data is necessary. Approach. A new algorithm for detecting and classifying interictal spikes in mouse EEG recordings is proposed, based on the adapted continuous wavelet transform (CWT). The construction of the adapted mother wavelet is founded on a template obtained from a sample comprising the first few minutes of an EEG data set. Main Result. The algorithm was tested with EEG data from a mouse model of epilepsy and experimental results showed that the algorithm could distinguish EEG spikes from other transient waveforms with a high degree of sensitivity and specificity. Significance. Differing from existing approaches, the proposed approach combines wavelet denoising, to isolate transient signals, with adapted CWT-based template matching, to detect true interictal spikes. Using the adapted wavelet constructed from a predefined template, the adapted CWT is calculated on small EEG segments to fit dynamical changes in the EEG recording.
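Reduced to its essence, the detection stage is template matching by correlation. The sketch below uses a hypothetical spike template and synthetic data (the paper instead builds an adapted mother wavelet from the first minutes of the recording):

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 200                                     # Hz, hypothetical sampling rate
n = 4000
eeg = 0.2 * rng.standard_normal(n)           # background activity (noise stand-in)

# spike template: sharp biphasic transient, ~100 ms wide, unit norm
tt = np.arange(-10, 11) / fs
template = tt * np.exp(-(tt * fs / 3.0) ** 2)
template /= np.linalg.norm(template)

true_locs = [500, 1500, 3200]
for loc in true_locs:
    eeg[loc - 10:loc + 11] += 3.0 * template     # embed spikes in the record

# correlate the record against the template and threshold
corr = np.correlate(eeg, template, mode="same")
thresh = 5 * np.std(corr)                        # crude, noise-dominated scale
det = np.where(np.abs(corr) > thresh)[0]
```

Matched filtering maximizes SNR for a known shape in white noise; the adapted-wavelet machinery of the paper generalizes this to shapes learned from the data and to colored background activity.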

  8. Research of fetal ECG extraction using wavelet analysis and adaptive filtering.

    PubMed

    Wu, Shuicai; Shen, Yanni; Zhou, Zhuhuang; Lin, Lan; Zeng, Yanjun; Gao, Xiaofeng

    2013-10-01

Extracting clean fetal electrocardiogram (ECG) signals is very important in fetal monitoring. In this paper, we propose a new method for fetal ECG extraction based on wavelet analysis, the least mean square (LMS) adaptive filtering algorithm, and the spatially selective noise filtration (SSNF) algorithm. First, abdominal signals and thoracic signals were processed by the stationary wavelet transform (SWT), and the wavelet coefficients at each scale were obtained. For each scale, the detail coefficients were processed by the LMS algorithm. The coefficient of the abdominal signal was taken as the original input of the LMS adaptive filtering system, and the coefficient of the thoracic signal as the reference input. Then, correlations of the processed wavelet coefficients were computed. The threshold was set and noise components were removed with the SSNF algorithm. Finally, the processed wavelet coefficients were reconstructed by inverse SWT to obtain the fetal ECG. Twenty cases of simulated data and 12 cases of clinical data were used. Experimental results showed that the proposed method outperforms the LMS algorithm: (1) it shows improvement in cases where the R-peaks of the fetal and maternal ECG are superimposed; (2) noise disturbance is eliminated by incorporating the SSNF algorithm and the extracted waveform is more stable; and (3) the performance is proven quantitatively by SNR calculation. The results indicate that the proposed algorithm can be used for extracting fetal ECG from abdominal signals.
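The LMS stage on its own can be sketched with synthetic signals, the thoracic (maternal-only) signal serving as the reference input and the abdominal mixture as the primary input; in the paper this runs on SWT detail coefficients rather than raw signals, and the sinusoids below are stand-ins, not ECG morphology:

```python
import numpy as np

fs, n = 250, 5000
t = np.arange(n) / fs
maternal = np.sin(2 * np.pi * 1.2 * t)            # stand-in maternal component
fetal = 0.3 * np.sin(2 * np.pi * 2.5 * t)         # weaker, faster fetal component
primary = fetal + 0.8 * maternal                  # abdominal = fetal + maternal leak
reference = maternal                              # thoracic = maternal only

def lms_cancel(primary, reference, taps=4, mu=0.02):
    """Classic LMS noise canceller: the error signal is the cleaned output."""
    w = np.zeros(taps)
    out = np.zeros_like(primary)
    for i in range(taps, len(primary)):
        x = reference[i - taps:i][::-1]           # most recent samples first
        y = w @ x                                 # interference estimate
        e = primary[i] - y                        # error = primary minus estimate
        w += 2 * mu * e * x                       # LMS weight update
        out[i] = e
    return out

cleaned = lms_cancel(primary, reference)
tail = slice(n // 2, None)                        # judge after convergence
err_before = np.mean((primary[tail] - fetal[tail]) ** 2)
err_after = np.mean((cleaned[tail] - fetal[tail]) ** 2)
```

Because the fetal component is uncorrelated with the reference, the filter converges to cancel only the maternal leakage, leaving the fetal signal in the error output.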

  9. An image adaptive, wavelet-based watermarking of digital images

    NASA Astrophysics Data System (ADS)

    Agreste, Santa; Andaloro, Guido; Prestipino, Daniela; Puccio, Luigia

    2007-12-01

    In digital management, multimedia content and data can easily be used in an illegal way: copied, modified, and redistributed. Copyright protection, the intellectual and material rights of authors, owners, buyers, and distributors, and the authenticity of content are crucial factors in solving an urgent and real problem. In such a scenario, digital watermarking techniques are emerging as a valid solution. In this paper, we describe an algorithm, called WM2.0, for an invisible watermark: private, strong, wavelet-based, and developed for the protection and authentication of digital images. The use of the discrete wavelet transform (DWT) is motivated by its good time-frequency features and its good match with human visual system directives. These two combined elements are important in building an invisible and robust watermark. WM2.0 works on a dual scheme: watermark embedding and watermark detection. The watermark is embedded into the high-frequency DWT components of a specific sub-image, and it is calculated in correlation with the image features and statistical properties. Watermark detection applies a re-synchronization between the original and watermarked images. The correlation between the watermarked DWT coefficients and the watermark signal is calculated according to the Neyman-Pearson statistical criterion. Experimentation on a large set of different images has shown the watermark to be resistant to geometric, filtering, and StirMark attacks, with a low false-alarm rate.

  10. Baseline Adaptive Wavelet Thresholding Technique for sEMG Denoising

    NASA Astrophysics Data System (ADS)

    Bartolomeo, L.; Zecca, M.; Sessa, S.; Lin, Z.; Mukaeda, Y.; Ishii, H.; Takanishi, Atsuo

    2011-06-01

    The surface electromyography (sEMG) signal is affected by different sources of noise: current technology is considerably robust to power line interference and cable motion artifacts, but there are still many limitations with baseline and movement artifact noise. In particular, these sources have frequency spectra that overlap the low-frequency components of the sEMG spectrum; therefore, standard all-bandwidth filtering could alter important information. Wavelet denoising has been demonstrated to be a powerful solution for processing white Gaussian noise in biological signals. In this paper we introduce a new technique for denoising the sEMG signal: using the baseline of the signal before the task, we estimate the thresholds to apply in the wavelet thresholding procedure. The experiments were performed on ten healthy subjects, by placing the electrodes on the Extensor Carpi Ulnaris and Triceps Brachii of the right forearm and upper arm, and performing a flexion and extension of the right wrist. An Inertial Measurement Unit, developed in our group, was used to recognize the hand movements in order to segment the exercise and the pre-task baseline. Finally, we show better performance of the proposed method in terms of noise cancellation and signal distortion, quantified by a newly proposed indicator of denoising quality, compared to the standard Donoho technique.
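    The core idea, estimating the wavelet threshold from a quiet pre-task baseline segment, can be sketched in numpy with a one-level Haar transform standing in for the authors' wavelet (a hedged illustration only: the robust sigma estimate and universal-threshold form below are common defaults, not necessarily the paper's choices):

```python
import numpy as np

def haar_level1(x):
    """One-level orthonormal Haar transform: approximation and detail bands."""
    x = x[:len(x) // 2 * 2]
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def inverse_haar_level1(a, d):
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def baseline_threshold_denoise(signal, baseline):
    """Soft-threshold the detail band with a threshold estimated from
    the quiet pre-task baseline (universal-threshold form)."""
    _, d_base = haar_level1(baseline)
    sigma = np.median(np.abs(d_base)) / 0.6745         # robust noise scale
    thr = sigma * np.sqrt(2 * np.log(len(signal)))
    a, d = haar_level1(signal)
    d = np.sign(d) * np.maximum(np.abs(d) - thr, 0.0)  # soft thresholding
    return inverse_haar_level1(a, d)

rng = np.random.default_rng(2)
baseline = 0.2 * rng.standard_normal(1024)             # pre-task rest segment
clean = np.repeat(np.sin(np.linspace(0, 6 * np.pi, 512)), 2)  # slow activity
noisy = clean + 0.2 * rng.standard_normal(1024)
denoised = baseline_threshold_denoise(noisy, baseline)
```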

  11. Isotropic boundary adapted wavelets for coherent vorticity extraction in turbulent channel flows

    NASA Astrophysics Data System (ADS)

    Farge, Marie; Sakurai, Teluo; Yoshimatsu, Katsunori; Schneider, Kai; Morishita, Koji; Ishihara, Takashi

    2015-11-01

    We present a construction of isotropic boundary-adapted wavelets, which are orthogonal and yield a multi-resolution analysis. We analyze DNS data of turbulent channel flow computed at a friction-velocity-based Reynolds number of 395 and investigate the role of coherent vorticity. Thresholding of the wavelet coefficients allows the flow to be split into two parts: coherent and incoherent vorticity. The statistics of the former, i.e., the energy and enstrophy spectra, are close to those of the total flow, and moreover the nonlinear energy budgets are well preserved. The remaining incoherent part, represented by the large majority of the weak wavelet coefficients, corresponds to a structureless, i.e., noise-like, background flow and exhibits an almost equi-distribution of energy.
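    Coefficient-magnitude splitting of this kind can be illustrated in one dimension with a plain multi-level Haar transform standing in for the boundary-adapted 3-D wavelets (a toy sketch: the robust threshold rule and the synthetic "vortex" below are generic illustrations, not the paper's construction):

```python
import numpy as np

def haar_analysis(x, levels):
    """Multi-level Haar decomposition: detail bands plus final approximation."""
    details = []
    a = x.astype(float)
    for _ in range(levels):
        d = (a[0::2] - a[1::2]) / np.sqrt(2)
        a = (a[0::2] + a[1::2]) / np.sqrt(2)
        details.append(d)
    return a, details

def haar_synthesis(a, details):
    for d in reversed(details):
        up = np.empty(2 * len(a))
        up[0::2] = (a + d) / np.sqrt(2)
        up[1::2] = (a - d) / np.sqrt(2)
        a = up
    return a

def split_coherent(x, levels=5, thr_factor=3.0):
    """Split x into a coherent part (strong coefficients) and the
    incoherent remainder (weak, noise-like coefficients)."""
    a, details = haar_analysis(x, levels)
    all_d = np.concatenate(details)
    thr = thr_factor * np.median(np.abs(all_d)) / 0.6745
    kept = [np.where(np.abs(d) > thr, d, 0.0) for d in details]
    coherent = haar_synthesis(a, kept)
    return coherent, x - coherent

rng = np.random.default_rng(3)
burst = np.zeros(1024)
burst[500:520] = np.hanning(20) * 5.0          # localized "coherent structure"
signal = burst + 0.1 * rng.standard_normal(1024)
coherent, incoherent = split_coherent(signal)
```

    The few strong coefficients capture nearly all of the localized structure, while the discarded weak coefficients carry a low-energy, noise-like remainder, mirroring the coherent/incoherent decomposition described above.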

  12. Adapted waveform analysis, wavelet packets, and local cosine libraries as a tool for image processing

    NASA Astrophysics Data System (ADS)

    Coifman, Ronald R.; Woog, Lionel J.

    1995-09-01

    Adapted waveform analysis refers to a collection of FFT-like adapted transform algorithms. Given an image, these methods provide specially matched collections of templates (orthonormal bases) enabling an efficient coding of the image. Perhaps the closest well-known example of such a coding method is provided by musical notation, where each segment of music is represented by a musical score made up of notes (templates) characterized by their duration, pitch, location, and amplitude; our method corresponds to transcribing the music in as few notes as possible. The extension to images and video is straightforward: we describe the image by collections of oscillatory patterns (paint-brush strokes) of various sizes, locations, and amplitudes using a variety of orthogonal bases. These basis functions are chosen from predefined libraries of oscillatory localized functions (trigonometric and wavelet-packet waveforms) so as to optimize the number of parameters needed to describe the object. The algorithms are of complexity N log N, opening the door to a large range of applications in signal and image processing, such as compression, feature extraction, denoising, and enhancement. In particular, we describe a class of special-purpose compressions for fingerprint images, as well as denoising tools for texture and noise extraction. We start by relating traditional Fourier methods to wavelet and wavelet-packet based algorithms using a recent refinement of the windowed sine and cosine transforms. We then derive an adapted local sine transform, show its relation to wavelet and wavelet-packet analysis, and describe an analysis toolkit illustrating the merits of different adaptive and nonadaptive schemes.

  13. Wavelet-based adaptive denoising and baseline correction for MALDI TOF MS.

    PubMed

    Shin, Hyunjin; Sampat, Mehul P; Koomen, John M; Markey, Mia K

    2010-06-01

    Proteomic profiling by MALDI TOF mass spectrometry (MS) is an effective method for identifying biomarkers from human serum/plasma, but the process is complicated by the presence of noise in the spectra. In MALDI TOF MS, the major noise source is chemical noise, which is defined as the interference from matrix material and its clusters. Because chemical noise is nonstationary and nonwhite, wavelet-based denoising is more effective than conventional noise reduction schemes based on Fourier analysis. However, current wavelet-based denoising methods for mass spectrometry do not fully consider the characteristics of chemical noise. In this article, we propose new wavelet-based high-frequency noise reduction and baseline correction methods that were designed based on the discrete stationary wavelet transform. The high-frequency noise reduction algorithm adaptively estimates the time-varying threshold for each frequency subband from multiple realizations of chemical noise and removes noise from mass spectra of samples using the estimated thresholds. The baseline correction algorithm computes the monotonically decreasing baseline in the highest approximation of the wavelet domain. The experimental results demonstrate that our algorithms effectively remove artifacts in mass spectra that are due to chemical noise while preserving informative features as compared to commonly used denoising methods.
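    The baseline-correction idea, a monotonically decreasing baseline computed in a coarse approximation band, can be sketched as follows (a simplified stand-in using Haar-style averaging and a running minimum; the authors' discrete stationary wavelet transform formulation differs in detail, and the toy spectrum is entirely synthetic):

```python
import numpy as np

def coarse_approximation(x, levels=4):
    """Coarse (approximation-band) version of x via repeated pairwise
    averaging, upsampled back to full length by repetition."""
    a = x.astype(float)
    for _ in range(levels):
        a = (a[0::2] + a[1::2]) / 2.0
    return np.repeat(a, 2 ** levels)[:len(x)]

def remove_decreasing_baseline(spectrum, levels=4):
    """Estimate a monotonically decreasing baseline from the coarse
    approximation and subtract it from the spectrum."""
    approx = coarse_approximation(spectrum, levels)
    baseline = np.minimum.accumulate(approx)   # enforce monotone decrease
    return spectrum - baseline, baseline

# Toy spectrum: decaying matrix baseline plus a few peaks.
m_z = np.arange(2048)
baseline_true = 50.0 * np.exp(-m_z / 400.0)
peaks = np.zeros(2048)
for centre in (300, 700, 1200):
    peaks += 30.0 * np.exp(-0.5 * ((m_z - centre) / 5.0) ** 2)
spectrum = baseline_true + peaks
corrected, baseline_est = remove_decreasing_baseline(spectrum)
```

    Because the running minimum cannot rise at a peak, the estimated baseline passes under the peaks and the subtraction preserves their heights.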

  14. A study of interceptor attitude control based on adaptive wavelet neural networks

    NASA Astrophysics Data System (ADS)

    Li, Da; Wang, Qing-chao

    2005-12-01

    This paper studies the 3-DOF attitude control problem of a kinetic interceptor. When the kinetic interceptor enters terminal guidance it has to maneuver with large angles, and the interceptor attitude system is nonlinear, strongly coupled, and MIMO. An inverse control approach based on adaptive wavelet neural networks is proposed in this paper. Instead of using one complex neural network as the controller, the nonlinear dynamics of the interceptor are first approximated by three independent subsystems through exact feedback linearization, and controllers for each subsystem are then designed using adaptive wavelet neural networks. This method avoids computing a large number of weights and biases in one massive neural network, and the control parameters can be adapted online. Simulation results show that the proposed controller performs remarkably well.

  15. Adaptive inpainting algorithm based on DCT induced wavelet regularization.

    PubMed

    Li, Yan-Ran; Shen, Lixin; Suter, Bruce W

    2013-02-01

    In this paper, we propose an image inpainting optimization model whose objective function is a smoothed l1 norm of the weighted nondecimated discrete cosine transform (DCT) coefficients of the underlying image. By identifying the objective function of the proposed model as a sum of a differentiable term and a nondifferentiable term, we present a basic algorithm inspired by Beck and Teboulle's recent work on the model. Based on this basic algorithm, we propose an automatic way to determine the weights involved in the model and update them in each iteration. The DCT as an orthogonal transform is used in various applications. We view the rows of a DCT matrix as the filters associated with a multiresolution analysis. Nondecimated wavelet transforms with these filters are explored in order to analyze the images to be inpainted. Our numerical experiments verify that under the proposed framework, the filters from a DCT matrix demonstrate promise for the task of image inpainting.
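    Viewing the DCT rows as analysis filters and soft-thresholding the coefficients can be illustrated with a small 1-D inpainting sketch (generic iterative thresholding with data consistency, not the paper's weighted smoothed-l1 algorithm; the signal, mask, threshold, and iteration count are all illustrative assumptions):

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix; its rows are the DCT filters."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * k * (2 * i + 1) / (2 * n))
    C[0] *= 1.0 / np.sqrt(2.0)
    return C

def inpaint_1d(observed, mask, n_iter=200, thr=0.05):
    """Fill missing samples by iterating: soft-threshold the DCT
    coefficients, transform back, then re-impose the observed samples."""
    n = len(observed)
    C = dct_matrix(n)
    x = np.where(mask, observed, 0.0)
    for _ in range(n_iter):
        c = C @ x
        c = np.sign(c) * np.maximum(np.abs(c) - thr, 0.0)  # soft threshold
        x = C.T @ c
        x[mask] = observed[mask]          # keep known samples fixed
    return x

rng = np.random.default_rng(4)
n = 256
t = np.arange(n)
signal = np.cos(2 * np.pi * 3 * t / n) + 0.5 * np.cos(2 * np.pi * 7 * t / n)
mask = rng.random(n) > 0.4               # roughly 60% of samples observed
filled = inpaint_1d(signal.copy(), mask)
```

    The signal is nearly sparse in the DCT basis, so the alternation between thresholding (sparsity) and re-imposing known samples (data consistency) recovers the missing entries.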

  16. Multi-focus image fusion algorithm based on adaptive PCNN and wavelet transform

    NASA Astrophysics Data System (ADS)

    Wu, Zhi-guo; Wang, Ming-jia; Han, Guang-liang

    2011-08-01

    Being an efficient method of information fusion, image fusion has been used in many fields such as machine vision, medical diagnosis, military applications, and remote sensing. In this paper, the Pulse Coupled Neural Network (PCNN) is introduced into this research field for its interesting properties in image processing, including segmentation and target recognition, and a novel algorithm based on the PCNN and the wavelet transform for multi-focus image fusion is proposed. First, the two original images are decomposed by the wavelet transform. Then, based on the PCNN, a fusion rule in the wavelet domain is given. The algorithm uses the wavelet coefficient in each frequency band as the linking strength, so that its value can be chosen adaptively. The wavelet coefficients are mapped to the image gray-scale range, and the output threshold function attenuates to the minimum gray level over time, so that all pixels of the image eventually fire. The output of the PCNN at each iteration is thus the set of wavelet coefficients exceeding the threshold at that time, and the firing sequence of the wavelet coefficients represents the firing time of each neuron. The firing times of the neurons are mapped to the corresponding image gray-scale range, yielding a firing-time map from which it can be judged whether the features at a neuron are salient or not. The fusion coefficients are decided by a compare-and-select operator on the firing-time gradient maps, and the fused image is reconstructed by the inverse wavelet transform. Furthermore, in order to sufficiently reflect the order of the firing times, the threshold adjusting constant αΘ is estimated from the appointed iteration number, so that after the iterations are completed every wavelet coefficient has been activated. To verify the effectiveness of the proposed rules, experiments on multi-focus images were performed.

  17. Wavelet-based acoustic emission detection method with adaptive thresholding

    NASA Astrophysics Data System (ADS)

    Menon, Sunil; Schoess, Jeffrey N.; Hamza, Rida; Busch, Darryl

    2000-06-01

    Reductions in Navy maintenance budgets and available personnel have dictated the need to transition from time-based to 'condition-based' maintenance. Achieving this will require new enabling diagnostic technologies. One such technology, the use of acoustic emission for the early detection of helicopter rotor head dynamic component faults, has been investigated by Honeywell Technology Center for its rotor acoustic monitoring system (RAMS). This ambitious, 38-month, proof-of-concept effort, which was a part of the Naval Surface Warfare Center Air Vehicle Diagnostics System program, culminated in a successful three-week flight test of the RAMS system at Patuxent River Flight Test Center in September 1997. The flight test results demonstrated that stress-wave acoustic emission technology can detect signals equivalent to small fatigue cracks in rotor head components and can do so across the rotating articulated rotor head joints and in the presence of other background acoustic noise generated during flight operation. This paper presents the results of stress wave data analysis of the flight-test dataset using wavelet-based techniques to assess background operational noise vs. machinery failure detection results.

  18. Refinement trajectory and determination of eigenstates by a wavelet based adaptive method

    SciTech Connect

    Pipek, Janos; Nagy, Szilvia

    2006-11-07

    The detail structure of the wave function is analyzed at various refinement levels using the methods of wavelet analysis. The eigenvalue problem of a model system is solved in granular Hilbert spaces, and the trajectory of the eigenstates is traced in terms of the resolution. An adaptive method is developed for identifying the fine structure localization regions, where further refinement of the wave function is necessary.

  19. A new time-adaptive discrete bionic wavelet transform for enhancing speech from adverse noise environment

    NASA Astrophysics Data System (ADS)

    Palaniswamy, Sumithra; Duraisamy, Prakash; Alam, Mohammad Showkat; Yuan, Xiaohui

    2012-04-01

    Automatic speech processing systems are widely used in everyday life, for example in mobile communication, speech and speaker recognition, and assistance for the hearing impaired. In speech communication systems, the quality and intelligibility of speech are of utmost importance for ease and accuracy of information exchange. To obtain a speech signal that is intelligible and more pleasant to listen to, noise reduction is essential. In this paper a new Time Adaptive Discrete Bionic Wavelet Thresholding (TADBWT) scheme is proposed. The proposed technique uses a Daubechies mother wavelet to achieve better enhancement of speech corrupted by additive non-stationary noises which occur in real life, such as street noise and factory noise. Due to the integration of a human auditory system model into the wavelet transform, the bionic wavelet transform (BWT) has great potential for speech enhancement and may lead to a new path in speech processing. In the proposed technique, the discrete BWT is first applied to noisy speech to derive the TADBWT coefficients. The adaptive nature of the BWT is then captured by introducing a time-varying linear factor which updates the coefficients at each scale over time. This approach shows better performance than existing algorithms at lower input SNR due to modified soft level-dependent thresholding of the time-adaptive coefficients. The objective and subjective test results confirmed the competency of the TADBWT technique. Its effectiveness is also evaluated for a speaker recognition task in a noisy environment. The recognition results show that the TADBWT technique yields better performance than alternative methods, specifically at lower input SNR.

  20. A method of adaptive wavelet filtering of the peripheral blood flow oscillations under stationary and non-stationary conditions.

    PubMed

    Tankanag, Arina V; Chemeris, Nikolay K

    2009-10-01

    The paper describes an original method for the analysis of peripheral blood flow oscillations measured with the laser Doppler flowmetry (LDF) technique. The method is based on the continuous wavelet transform and adaptive wavelet theory and applies adaptive wavelet filtering to the LDF data. The method allows one to examine the dynamics of amplitude oscillations in a wide frequency range (from 0.007 to 2 Hz) and to process both stationary and non-stationary short (6 min) signals. The capabilities of the method are demonstrated by analyzing LDF signals registered at rest and upon humeral occlusion. The paper shows that the main advantage of the proposed method is a significant reduction of 'border effects' compared to traditional wavelet analysis. It was found that the low-frequency amplitudes obtained by adaptive wavelets are significantly higher than those obtained by non-adaptive ones. The method would be useful for the analysis of low-frequency components of short-lived transitional processes under the conditions of functional tests. Adaptive wavelet filtering can be used to process stationary and non-stationary biomedical signals (cardiograms, encephalograms, myograms, etc), as well as signals studied in other fields of science and engineering.

  1. Adaptive Threshold Neural Spike Detector Using Stationary Wavelet Transform in CMOS.

    PubMed

    Yang, Yuning; Boling, C Sam; Kamboh, Awais M; Mason, Andrew J

    2015-11-01

    Spike detection is an essential first step in the analysis of neural recordings. Detection at the frontend eases the bandwidth requirement for wireless data transfer of multichannel recordings to extra-cranial processing units. In this work, a low power digital integrated spike detector based on the lifting stationary wavelet transform is presented and developed. By monitoring the standard deviation of wavelet coefficients, the proposed detector can adaptively set a threshold value online for each channel independently without requiring user intervention. A prototype 16-channel spike detector was designed and tested in an FPGA. The method enables spike detection with nearly 90% accuracy even when the signal-to-noise ratio is as low as 2. The design was mapped to 130 nm CMOS technology and shown to occupy 0.014 mm(2) of area and dissipate 1.7 μW of power per channel, making it suitable for implantable multichannel neural recording systems.
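    The adaptive-threshold idea, deriving a per-channel threshold from the spread of first-level wavelet coefficients, can be sketched in numpy (a software illustration of the principle only: the chip monitors the standard deviation of lifting-SWT coefficients, while this sketch uses a Haar detail band and the common robust median estimate; the signal and spike shape are synthetic):

```python
import numpy as np

def adaptive_spike_threshold(x, k=5.0):
    """Channel-adaptive threshold from a robust spread estimate of
    first-level (Haar) detail coefficients:
    sigma = median(|d|) / 0.6745, threshold = k * sigma."""
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    sigma = np.median(np.abs(d)) / 0.6745
    return k * sigma

def detect_spikes(x, thr):
    """Indices where |x| first crosses the threshold (rising edges only,
    so each spike is counted once)."""
    above = np.abs(x) > thr
    return np.flatnonzero(above[1:] & ~above[:-1]) + 1

rng = np.random.default_rng(5)
x = 0.5 * rng.standard_normal(20000)            # one noisy channel
spike = np.array([0.5, 2.0, 6.0, 2.0, 0.5])     # spike-like waveform
spike_times = [2000, 8000, 15000]
for s in spike_times:
    x[s:s + 5] += spike

thr = adaptive_spike_threshold(x)
hits = detect_spikes(x, thr)
```

    Because the threshold is derived from the channel's own noise statistics, no user-set value is needed, which is the property the abstract highlights for multichannel operation.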

  3. Adaptive Wavelet Techniques, Wigner Distributions and the Direct Simulation of the Vlasov Equation

    NASA Astrophysics Data System (ADS)

    Afeyan, Bedros; Douglas, Melissa; Spielman, Rick

    2000-10-01

    The formal analogy between the quantum Liouville equation satisfied by the Wigner function in quantum mechanics and the Vlasov equation satisfied by the single-particle distribution function in plasma physics is exploited in order to study the long-term evolution of nonlinear electrostatic wave phenomena dictated by the Vlasov-Poisson equations. Adaptive wavelet techniques are used to tile phase space in an optimal manner so as to minimize computational domain sizes and simultaneously retain accuracy over disparate scales. Traditional MHD calculations will also be analyzed with our wavelet techniques to show the favorable data compression and feature extraction capabilities of multiresolution analysis. Specifically, Z51 and Z179 will be compared to show the nature of the improvements of double wire array (Z179) implosions on Z over those obtained with a single wire array (Z51).

  4. Design of adaptive fuzzy wavelet neural sliding mode controller for uncertain nonlinear systems.

    PubMed

    Shahriari kahkeshi, Maryam; Sheikholeslam, Farid; Zekri, Maryam

    2013-05-01

    This paper proposes a novel adaptive fuzzy wavelet neural sliding mode controller (AFWN-SMC) for a class of uncertain nonlinear systems. The main contribution of this paper is the design of a smooth sliding mode control (SMC) for a class of high-order nonlinear systems when the structure of the system is unknown and no prior knowledge about the uncertainty is available. The proposed scheme is composed of an Adaptive Fuzzy Wavelet Neural Controller (AFWNC), which constructs the equivalent control term, and an Adaptive Proportional-Integral (A-PI) controller, which implements the switching term to provide a smooth control input. Asymptotic stability of the closed-loop system is guaranteed using the Lyapunov direct method. To show the efficiency of the proposed scheme, some numerical examples are provided. To validate the results obtained by the proposed approach, other methods are adopted from the literature and applied for comparison. Simulation results show the superiority and capability of the proposed controller in improving the steady-state performance and transient response specifications while using fewer fuzzy rules and online adaptive parameters than other methods. Furthermore, the control effort is considerably decreased and the chattering phenomenon is completely removed.

  5. Multispectral image sharpening using a shift-invariant wavelet transform and adaptive processing of multiresolution edges

    USGS Publications Warehouse

    Lemeshewsky, G.P.; Rahman, Z.-U.; Schowengerdt, R.A.; Reichenbach, S.E.

    2002-01-01

    Enhanced false color images from mid-IR, near-IR (NIR), and visible bands of the Landsat thematic mapper (TM) are commonly used for visually interpreting land cover type. Described here is a technique for the sharpening, or fusion, of NIR with higher-resolution panchromatic (Pan) imagery that uses a shift-invariant implementation of the discrete wavelet transform (SIDWT) and a reported pixel-based selection rule to combine coefficients. There can be contrast reversals (e.g., at soil-vegetation boundaries between NIR and visible band images) and, consequently, degraded sharpening and edge artifacts. To improve performance under these conditions, a local area-based correlation technique, originally reported for comparing image-pyramid-derived edges, was used for the adaptive processing of wavelet-derived edge data. Using the redundant data of the SIDWT also improves edge data generation, and there is additional improvement because sharpened subband imagery is used with the edge-correlation process. A previously reported technique for sharpening three-band spectral imagery used forward and inverse intensity, hue, and saturation transforms and wavelet-based sharpening of intensity; that technique had limitations with opposite-contrast data, so in this study sharpening was applied to single-band multispectral-Pan image pairs. Sharpening used simulated 30-m NIR imagery produced by degrading the spatial resolution of a higher-resolution reference. Performance, evaluated by comparison between the sharpened and reference images, was improved when sharpened subband data were used with the edge correlation.

  6. A wavelet-optimized, very high order adaptive grid and order numerical method

    NASA Technical Reports Server (NTRS)

    Jameson, Leland

    1996-01-01

    Differencing operators of arbitrarily high order can be constructed by interpolating a polynomial through a set of data, differentiating this polynomial, and finally evaluating the polynomial at the point where a derivative approximation is desired. Furthermore, the interpolating polynomial can be constructed from algebraic, trigonometric, or perhaps exponential polynomials. This paper begins with a comparison of such differencing operator constructions. Next, the issue of proper grids for high-order polynomials is addressed. Finally, an adaptive numerical method is introduced which adapts the numerical grid and the order of the differencing operator depending on the data. The numerical grid adaptation is performed on a Chebyshev grid: at each level of refinement the grid is a Chebyshev grid, and this grid is refined locally based on wavelet analysis.
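    The first idea above, building a high-order differencing operator by interpolating a polynomial through the data, differentiating it, and evaluating at the target point, can be sketched in a few lines of numpy, here on Chebyshev-Gauss-Lobatto nodes where high-degree interpolation stays well conditioned (an illustrative sketch with an arbitrary test function, not the paper's adaptive method):

```python
import numpy as np

def poly_derivative_at(x_nodes, f_nodes, x0):
    """High-order derivative approximation: interpolate a polynomial
    through the data, differentiate it, evaluate at x0."""
    coeffs = np.polyfit(x_nodes, f_nodes, len(x_nodes) - 1)
    return np.polyval(np.polyder(coeffs), x0)

# Chebyshev-Gauss-Lobatto nodes on [-1, 1]: clustered near the ends,
# which keeps high-degree polynomial interpolation well conditioned.
n = 12
nodes = np.cos(np.pi * np.arange(n + 1) / n)
f = np.exp(nodes) * np.sin(2 * nodes)

x0 = 0.3
approx = poly_derivative_at(nodes, f, x0)
exact = np.exp(x0) * (np.sin(2 * x0) + 2 * np.cos(2 * x0))
```

    With only 13 Chebyshev nodes the derivative of this smooth function is already accurate to many digits; the same construction on equispaced nodes would suffer from the Runge phenomenon at this degree.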

  7. An adaptive undersampling scheme of wavelet-encoded parallel MR imaging for more efficient MR data acquisition

    NASA Astrophysics Data System (ADS)

    Xie, Hua; Bosshard, John C.; Hill, Jason E.; Wright, Steven M.; Mitra, Sunanda

    2016-03-01

    Magnetic Resonance Imaging (MRI) offers noninvasive, high-resolution, high-contrast cross-sectional anatomic images through the body. The data of conventional MRI are collected in the spatial frequency (Fourier) domain, also known as k-space. Because there is still a great need to improve the temporal resolution of MRI, Compressed Sensing (CS) in MR imaging has been proposed to exploit the sparsity of MR images, showing great potential to reduce scan time significantly; however, it poses its own unique problems. This paper revisits wavelet-encoded MR imaging, which replaces phase encoding in conventional MRI data acquisition with wavelet encoding by applying wavelet-shaped spatially selective radiofrequency (RF) excitation, and keeps the readout direction as frequency encoding. The practicality of wavelet-encoded MRI by itself is limited due to SNR penalties and poor time resolution compared to conventional Fourier-based MRI. To compensate for these disadvantages, this paper first introduces an undersampling scheme, named the significance map, for sparse wavelet-encoded k-space to speed up data acquisition as well as to allow for various adaptive imaging strategies. The proposed adaptive wavelet-encoded undersampling scheme does not require prior knowledge of the subject to be scanned. Multiband (MB) parallel imaging is also incorporated with wavelet-encoded MRI by exciting multiple regions simultaneously, for further reduction in scan time desirable for medical applications. Simulation and experimental results are presented, showing the feasibility of the proposed approach in further reducing the redundancy of the wavelet k-space data while maintaining relatively high image quality.

  8. Adaptive wavelet simulation of global ocean dynamics using a new Brinkman volume penalization

    NASA Astrophysics Data System (ADS)

    Kevlahan, N. K.-R.; Dubos, T.; Aechtner, M.

    2015-12-01

    In order to easily enforce solid-wall boundary conditions in the presence of complex coastlines, we propose a new mass and energy conserving Brinkman penalization for the rotating shallow water equations. This penalization does not lead to higher wave speeds in the solid region. The error estimates for the penalization are derived analytically and verified numerically for linearized one-dimensional equations. The penalization is implemented in a conservative dynamically adaptive wavelet method for the rotating shallow water equations on the sphere with bathymetry and coastline data from NOAA's ETOPO1 database. This code could form the dynamical core for a future global ocean model. The potential of the dynamically adaptive ocean model is illustrated by using it to simulate the 2004 Indonesian tsunami and wind-driven gyres.

  9. A wavelet-MRA-based adaptive semi-Lagrangian method for the relativistic Vlasov Maxwell system

    NASA Astrophysics Data System (ADS)

    Besse, Nicolas; Latu, Guillaume; Ghizzo, Alain; Sonnendrücker, Eric; Bertrand, Pierre

    2008-08-01

    In this paper we present a new method for the numerical solution of the relativistic Vlasov-Maxwell system on a phase-space grid using an adaptive semi-Lagrangian method. The adaptivity is performed through a wavelet multiresolution analysis, which gives a powerful and natural refinement criterion based on the local measurement of the approximation error and of the regularity of the distribution function. The multiscale expansion of the distribution function therefore yields a sparse representation of the data, saving memory and CPU time. We apply this numerical scheme to reduced Vlasov-Maxwell systems arising in laser-plasma physics. The interaction of relativistically strong laser pulses with overdense plasma slabs is investigated. These Vlasov simulations revealed a rich variety of phenomena associated with the fast particle dynamics induced by electromagnetic waves, such as electron trapping, particle acceleration, and electron plasma wave breaking. However, the wavelet-based adaptive method developed here does not yield significant improvements over Vlasov solvers on a uniform mesh, due to the substantial overhead that the method introduces. Nonetheless, it might be a first step towards more efficient adaptive solvers based on different ideas for grid refinement or on a more efficient implementation. Here the Vlasov simulations are performed in a two-dimensional phase space where the development of thin filaments, strongly amplified by relativistic effects, requires an important increase in the total number of points of the phase-space grid as the filaments become finer over time. The adaptive method could be more useful in cases where the thin filaments that need to be resolved are a very small fraction of the hyper-volume, which arises in higher dimensions because of the surface-to-volume scaling and the essentially one-dimensional structure of the filaments. Moreover, the main way to improve the efficiency of the adaptive method is to

  10. Goal-based angular adaptivity applied to a wavelet-based discretisation of the neutral particle transport equation

    SciTech Connect

    Goffin, Mark A.; Buchan, Andrew G.; Dargaville, Steven; Pain, Christopher C.; Smith, Paul N.; Smedley-Stevenson, Richard P.

    2015-01-15

    A method for applying goal-based adaptive methods to the angular resolution of the neutral particle transport equation is presented. The methods are applied to an octahedral wavelet discretisation of the spherical angular domain which allows for anisotropic resolution. The angular resolution is adapted across both the spatial and energy dimensions. The spatial domain is discretised using an inner-element sub-grid scale finite element method. The goal-based adaptive methods optimise the angular discretisation to minimise the error in a specific functional of the solution. The goal-based error estimators require the solution of an adjoint system to determine the importance to the specified functional. The error estimators and the novel methods to calculate them are described. Several examples are presented to demonstrate the effectiveness of the methods. It is shown that the methods can significantly reduce the number of unknowns and computational time required to obtain a given error. The novelty of the work is the use of goal-based adaptive methods to obtain anisotropic resolution in the angular domain for solving the transport equation.

    Highlights:
    • Wavelet angular discretisation used to solve the transport equation.
    • Adaptive method developed for the wavelet discretisation.
    • Anisotropic angular resolution demonstrated through the adaptive method.
    • Adaptive method provides improvements in computational efficiency.

  11. Powerline interference reduction in ECG signals using empirical wavelet transform and adaptive filtering.

    PubMed

    Singh, Omkar; Sunkaria, Ramesh Kumar

    2015-01-01

    Separating an information-bearing signal from the background noise is a general problem in signal processing. In a clinical environment, the electrocardiogram (ECG) signal is corrupted during acquisition by various noise sources such as powerline interference (PLI), baseline wander and muscle artifacts. This paper presents novel methods for the reduction of powerline interference in ECG signals using the empirical wavelet transform (EWT) and adaptive filtering. The proposed methods are compared with empirical mode decomposition (EMD) based PLI cancellation methods. A total of six methods for PLI reduction based on EMD and EWT are analysed and their results are presented. The EWT-based de-noising methods have lower computational complexity and are more efficient than the EMD-based de-noising methods. PMID:25412942
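
    The adaptive-filtering half of such a PLI canceller can be sketched in a few lines. The following is a minimal pure-Python illustration, not the paper's method: a two-weight LMS filter with a sine/cosine reference at the powerline frequency tracks the interference's amplitude and phase, and the error signal is the cleaned ECG. All names and parameters are illustrative.

```python
import math

def lms_pli_cancel(noisy, fs, f0=50.0, mu=0.01):
    """Cancel powerline interference with a two-weight LMS filter.

    A sine/cosine pair at the powerline frequency f0 serves as the
    reference input; the adapted weights track the interference's
    amplitude and phase, and the error signal is the cleaned output.
    """
    w_s, w_c = 0.0, 0.0              # adaptive weights
    cleaned = []
    for n, d in enumerate(noisy):
        xs = math.sin(2 * math.pi * f0 * n / fs)
        xc = math.cos(2 * math.pi * f0 * n / fs)
        y = w_s * xs + w_c * xc      # current interference estimate
        e = d - y                    # error = signal estimate
        w_s += mu * e * xs           # LMS weight updates
        w_c += mu * e * xc
        cleaned.append(e)
    return cleaned

# Demo: a slow "ECG-like" wave corrupted by a 50 Hz interference tone.
fs = 500.0
sig = [math.sin(2 * math.pi * 1.0 * n / fs) for n in range(5000)]
pli = [0.5 * math.sin(2 * math.pi * 50.0 * n / fs + 0.3) for n in range(5000)]
noisy = [s + p for s, p in zip(sig, pli)]
out = lms_pli_cancel(noisy, fs)
# After convergence the residual interference is small.
tail_err = max(abs(o - s) for o, s in zip(out[-500:], sig[-500:]))
```

    Because the reference is a pure sinusoid, this canceller behaves as a narrow adaptive notch at f0, leaving the low-frequency ECG content essentially untouched.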

  12. Adaptive variable-fidelity wavelet-based eddy-capturing approaches for compressible turbulence

    NASA Astrophysics Data System (ADS)

    Brown-Dymkoski, Eric; Vasilyev, Oleg V.

    2015-11-01

    Multiresolution wavelet methods have been developed for efficient simulation of compressible turbulence. They rely upon a filter to identify dynamically important coherent flow structures and adapt the mesh to resolve them. The filter threshold parameter, which can be specified globally or locally, allows for a continuous tradeoff between computational cost and fidelity, ranging seamlessly between DNS and adaptive LES. There are two main approaches to specifying the adaptive threshold parameter. It can be imposed as a numerical error bound, or alternatively, derived from real-time flow phenomena to ensure correct simulation of desired turbulent physics. As LES relies on often imprecise model formulations that require a high-quality mesh, this variable-fidelity approach offers a further tool for improving simulation by targeting deficiencies and locally increasing the resolution. Simultaneous physical and numerical criteria, derived from compressible flow physics and the governing equations, are used to identify turbulent regions and evaluate the fidelity. Several benchmark cases are considered to demonstrate the ability to capture variable density and thermodynamic effects in compressible turbulence. This work was supported by NSF under grant No. CBET-1236505.

  13. Incidental Learning of Collocation

    ERIC Educational Resources Information Center

    Webb, Stuart; Newton, Jonathan; Chang, Anna

    2013-01-01

    This study investigated the effects of repetition on the learning of collocation. Taiwanese university students learning English as a foreign language simultaneously read and listened to one of four versions of a modified graded reader that included different numbers of encounters (1, 5, 10, and 15 encounters) with a set of 18 target collocations.…

  14. A Wavelet-Based ECG Delineation Method: Adaptation to an Experimental Electrograms with Manifested Global Ischemia.

    PubMed

    Hejč, Jakub; Vítek, Martin; Ronzhina, Marina; Nováková, Marie; Kolářová, Jana

    2015-09-01

    We present a novel wavelet-based ECG delineation method with robust classification of the P wave and T wave. The work aims to adapt the method to long-term experimental electrograms (EGs) measured on isolated rabbit hearts and to evaluate the effect of global ischemia in experimental EGs on delineation performance. The algorithm was tested on a set of 263 rabbit EGs with established reference points and on human signals from the Common Standards for Quantitative Electrocardiography Standard Database (CSEDB). On CSEDB, the standard deviation (SD) of measured errors satisfies the given criteria at each point and the results are comparable to other published works. In rabbit signals, our QRS detector reached a sensitivity of 99.87% and a positive predictivity of 99.89%, despite an overlap of the spectral components of the QRS complex, P wave and power line noise. The algorithm performs well in suppressing J-point elevation and reached a low overall error in both QRS onset (SD = 2.8 ms) and QRS offset (SD = 4.3 ms) delineation. The T wave offset is detected with acceptable error (SD = 12.9 ms) and a sensitivity of nearly 99%. The variance of the errors during global ischemia remains relatively stable, although more failures in the detection of the T wave and P wave occur. Due to differences in spectral and timing characteristics, the parameters of the rabbit-based algorithm must be highly adaptable and set more precisely than for human ECG signals to reach acceptable performance. PMID:26577367

  15. Accurate single-trial detection of movement intention made possible using adaptive wavelet transform.

    PubMed

    Chamanzar, Alireza; Malekmohammadi, Alireza; Bahrani, Masih; Shabany, Mahdi

    2015-01-01

    The outlook for brain-computer interfacing (BCI) is very bright. The real-time, accurate detection of a motor movement task is critical in BCI systems. The poor signal-to-noise ratio (SNR) of EEG signals and the ambiguity of noise sources in the brain render this task quite challenging. In this paper, we demonstrate a novel algorithm for precise detection of the onset of a motor movement through identification of event-related desynchronization (ERD) patterns. Using an adaptive matched filter technique, implemented via an optimized continuous wavelet transform with an appropriately selected basis, we can detect single-trial ERDs. Moreover, we use a maximum-likelihood (ML) electrooculography (EOG) artifact removal method to remove eye-related artifacts and significantly improve detection performance. We have applied this technique to our locally recorded Emotiv® data set of 6 healthy subjects, achieving an average detection selectivity of 85 ± 6% and sensitivity of 88 ± 7.7%, with a temporal precision in the range of -1250 to 367 ms for single-trial onset detections.
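
    The matched-filter idea behind such onset detection can be sketched in a simplified form. The code below is an illustrative toy, not the paper's algorithm: it slides a template over a signal and reports positions where the normalized correlation exceeds a threshold, which is the core of matched-filter pattern detection.

```python
def matched_filter_detect(signal, template, threshold=0.8):
    """Slide the template over the signal and return sample indices where
    the mean-removed normalized correlation exceeds the threshold
    (candidate onset positions)."""
    m = len(template)
    t_mean = sum(template) / m
    t = [v - t_mean for v in template]
    t_norm = sum(v * v for v in t) ** 0.5
    hits = []
    for i in range(len(signal) - m + 1):
        seg = signal[i:i + m]
        s_mean = sum(seg) / m
        s = [v - s_mean for v in seg]
        s_norm = sum(v * v for v in s) ** 0.5 or 1e-12
        corr = sum(a * b for a, b in zip(s, t)) / (s_norm * t_norm)
        if corr >= threshold:
            hits.append(i)
    return hits

# Demo: the template is buried in a flat signal at index 40.
template = [0.0, 1.0, 2.0, 1.0, 0.0, -1.0, -2.0, -1.0]
signal = [0.1] * 40 + template + [0.1] * 40
hits = matched_filter_detect(signal, template)
```

    In the ERD setting the "template" would itself be derived from a wavelet basis matched to the expected desynchronization waveform.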

  16. Three-dimensional Wavelet-based Adaptive Mesh Refinement for Global Atmospheric Chemical Transport Modeling

    NASA Astrophysics Data System (ADS)

    Rastigejev, Y.; Semakin, A. N.

    2013-12-01

    Accurate numerical simulation of global-scale three-dimensional atmospheric chemical transport models (CTMs) is essential for studies of many important atmospheric chemistry problems, such as the adverse effects of air pollutants on human health, ecosystems and the Earth's climate. These simulations usually require large CPU time due to numerical difficulties associated with a wide range of spatial and temporal scales, nonlinearity and the large number of reacting species. In our previous work we have shown that, in order to achieve an adequate convergence rate and accuracy, the mesh spacing in numerical simulation of global synoptic-scale pollution plume transport must be decreased to a few kilometers. This resolution is difficult to achieve for global CTMs on uniform or quasi-uniform grids. To address this difficulty we developed a three-dimensional Wavelet-based Adaptive Mesh Refinement (WAMR) algorithm. The method employs a highly non-uniform adaptive grid with fine resolution over the areas of interest, without requiring small grid spacing throughout the entire domain. It uses a multigrid iterative solver that naturally takes advantage of the multilevel structure of the adaptive grid. In order to represent the multilevel adaptive grid efficiently, a dynamic data structure based on indirect memory addressing has been developed. The data structure allows rapid access to individual points, fast inter-grid operations and re-gridding. The WAMR method has been implemented on parallel computer architectures. The parallel algorithm is based on a run-time partitioning and load-balancing scheme for the adaptive grid. The partitioning scheme maintains locality to reduce communication between computing nodes, and was found to be cost-effective. Specifically, we obtained an order-of-magnitude increase in computational speed for numerical simulations performed on a twelve-core single-processor workstation. We have applied the WAMR method for numerical
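
    The wavelet refinement criterion at the heart of WAMR-type methods can be illustrated with a toy one-dimensional sketch (not the authors' implementation): one level of Haar analysis produces detail coefficients, and cells whose detail magnitude exceeds a threshold eps are flagged for refinement. The threshold eps is an assumed free parameter.

```python
def refine_flags(field, eps):
    """One level of Haar analysis: pairwise averages form the coarse
    field; pairwise half-differences are the details. Cells whose detail
    magnitude exceeds eps are flagged for refinement, mimicking a
    wavelet-based AMR criterion."""
    coarse, flags = [], []
    for i in range(0, len(field) - 1, 2):
        avg = 0.5 * (field[i] + field[i + 1])
        det = 0.5 * (field[i] - field[i + 1])
        coarse.append(avg)
        flags.append(abs(det) > eps)
    return coarse, flags

# Smooth background with one sharp front: only the front is flagged.
field = [0.0] * 7 + [5.0] * 9
coarse, flags = refine_flags(field, eps=0.1)
```

    On a real plume-transport field the same test, applied level by level, concentrates grid points along the sharp plume edges while the smooth background stays coarse.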

  17. Surface quality monitoring for process control by on-line vibration analysis using an adaptive spline wavelet algorithm

    NASA Astrophysics Data System (ADS)

    Luo, G. Y.; Osypiw, D.; Irle, M.

    2003-05-01

    The dynamic behaviour of wood machining processes affects the surface finish quality of machined workpieces. In order to meet the requirements of increased production efficiency and improved product quality, surface quality information is needed for enhanced process control. However, current methods, which rely on expensive devices or sophisticated designs, may not be suitable for industrial real-time application. This paper presents a novel approach to surface quality evaluation by on-line vibration analysis using an adaptive spline wavelet algorithm, which is based on the excellent time-frequency localization of B-spline wavelets. A series of experiments was performed to extract the feature of interest: the correlation between amplitude changes in the relevant vibration frequency band(s) and surface quality. The experimental results demonstrate that the change of amplitude in selected frequency bands with variable resolution (linear and non-linear) reflects the quality of the surface finish, and that the root sum square of the wavelet power spectrum is a good indicator of surface quality. Thus, surface quality can be estimated and quantified at an average level in real time. The results can be used to regulate and optimize the machine's feed speed while maintaining a constant spindle motor speed during cutting. This will lead to higher-level control and machining rates while keeping dimensional integrity and surface finish within specification.

  18. Adaptive approach for variable noise suppression on laser-induced breakdown spectroscopy responses using stationary wavelet transform.

    PubMed

    Schlenke, Jan; Hildebrand, Lars; Moros, Javier; Laserna, J Javier

    2012-11-19

    Spectral signals are often corrupted by noise during their acquisition and transmission. Signal processing refers to a variety of operations that can be carried out on measurements in order to enhance the quality of information. In this sense, signal denoising is used to reduce noise distortions while keeping alterations of the important signal features to a minimum. The minimization of noise is a highly critical task since, in many cases, there is no prior knowledge of the signal or of the noise. In this context, the wavelet transform has become a valuable denoising tool. The present paper proposes a technique for suppressing noise in laser-induced breakdown spectroscopy (LIBS) signals using the wavelet transform. An extension of Donoho's scheme is suggested, which uses a redundant (stationary) form of the wavelet transform and an adaptive threshold estimation method. Capabilities and results of the denoising process are presented for artificial signals and actual spectroscopic data, both corrupted by noise of varying intensity. To consolidate the gains achieved by the proposed strategy, a comparison with alternative approaches, as well as with traditional techniques, is also made.
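
    The baseline Donoho scheme that the paper extends can be sketched compactly. The code below is a minimal illustration under simplifying assumptions (orthogonal Haar transform rather than the paper's stationary transform; signal length a power of two): soft thresholding with the universal threshold sigma*sqrt(2 ln N), where sigma is estimated from the median absolute deviation of the finest-scale details.

```python
import math
import random

def haar_fwd(x):
    """Single-level orthonormal Haar transform (len(x) must be even)."""
    s = 1 / math.sqrt(2)
    approx = [s * (x[i] + x[i + 1]) for i in range(0, len(x), 2)]
    detail = [s * (x[i] - x[i + 1]) for i in range(0, len(x), 2)]
    return approx, detail

def haar_inv(approx, detail):
    s = 1 / math.sqrt(2)
    x = []
    for a, d in zip(approx, detail):
        x.extend([s * (a + d), s * (a - d)])
    return x

def soft(v, t):
    """Soft-thresholding (shrinkage) operator."""
    return math.copysign(max(abs(v) - t, 0.0), v)

def denoise(x, levels=3):
    """Donoho-style denoising: Haar analysis, soft thresholding with the
    universal threshold sigma*sqrt(2 ln N), then Haar synthesis. sigma is
    the MAD estimate from the finest-scale details."""
    details, approx = [], list(x)
    for _ in range(levels):
        approx, d = haar_fwd(approx)
        details.append(d)
    mad = sorted(abs(v) for v in details[0])[len(details[0]) // 2]
    sigma = mad / 0.6745
    thr = sigma * math.sqrt(2 * math.log(len(x)))
    details = [[soft(v, thr) for v in d] for d in details]
    for d in reversed(details):
        approx = haar_inv(approx, d)
    return approx

# Demo: a noisy step "spectrum"; denoising lowers the squared error.
random.seed(0)
clean = [1.0 if i < 64 else 3.0 for i in range(128)]
noisy = [c + random.gauss(0, 0.3) for c in clean]
den = denoise(noisy)
err_noisy = sum((a - b) ** 2 for a, b in zip(noisy, clean))
err_den = sum((a - b) ** 2 for a, b in zip(den, clean))
```

    The stationary (undecimated) transform used by the paper removes the Haar transform's shift dependence at the cost of redundancy; the thresholding logic is the same.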

  19. Spectral collocation methods

    NASA Technical Reports Server (NTRS)

    Hussaini, M. Y.; Kopriva, D. A.; Patera, A. T.

    1987-01-01

    This review covers the theory and application of spectral collocation methods. Section 1 describes the fundamentals, and summarizes results pertaining to spectral approximations of functions. Some stability and convergence results are presented for simple elliptic, parabolic, and hyperbolic equations. Applications of these methods to fluid dynamics problems are discussed in Section 2.

  20. Adaptive dynamic inversion robust control for BTT missile based on wavelet neural network

    NASA Astrophysics Data System (ADS)

    Li, Chuanfeng; Wang, Yongji; Deng, Zhixiang; Wu, Hao

    2009-10-01

    A new nonlinear control strategy combining the dynamic inversion method with wavelet neural networks is presented for the nonlinear coupled dynamics of a bank-to-turn (BTT) missile in the reentry phase. The basic control law is designed using dynamic inversion feedback linearization, and an online-learning wavelet neural network is used to compensate for the inversion error caused by aerodynamic parameter errors, modeling imprecision and external disturbances, exploiting the time-frequency localization properties of the wavelet transform. Weight-adjustment laws are derived according to Lyapunov stability theory, which guarantees the boundedness of all signals in the whole system. Furthermore, robust stability of the closed-loop system under this tracking law is proved. Finally, six-degree-of-freedom (6DOF) simulation results show that the attitude angles track the commanded values precisely in the presence of external disturbances and parameter uncertainty. The dependence of the dynamic inversion method on an accurate model is thus reduced, and the robustness of the control system enhanced, by using the wavelet neural network (WNN) to reconstruct the inversion error on-line.
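
    The structure of such a wavelet neural network can be sketched in a few lines. The paper derives its weight-adjustment laws from Lyapunov theory; the toy below substitutes a plain gradient (delta-rule) update purely to illustrate the wavelon structure y(x) = sum_i w_i psi((x - b_i)/a_i). All names, centers and learning rates are illustrative assumptions.

```python
import math

def mexican_hat(t):
    """Mexican-hat mother wavelet psi(t) = (1 - t^2) * exp(-t^2 / 2)."""
    return (1.0 - t * t) * math.exp(-0.5 * t * t)

class WaveletNet:
    """y(x) = sum_i w_i * psi((x - b_i) / a); only the output weights are
    adapted online, here with a simple delta-rule update."""
    def __init__(self, centers, scale=1.0):
        self.b = list(centers)
        self.a = scale
        self.w = [0.0] * len(centers)

    def forward(self, x):
        return sum(w * mexican_hat((x - b) / self.a)
                   for w, b in zip(self.w, self.b))

    def train_step(self, x, target, lr=0.05):
        err = target - self.forward(x)
        for i, b in enumerate(self.b):
            self.w[i] += lr * err * mexican_hat((x - b) / self.a)
        return err

# Demo: learn sin(x) on [-3, 3] as a stand-in "inversion error" surface.
xs = [-3.0 + 0.25 * k for k in range(25)]
net = WaveletNet(centers=[-3.0 + 0.5 * k for k in range(13)], scale=0.5)
before = sum((math.sin(x) - net.forward(x)) ** 2 for x in xs) / len(xs)
for _ in range(200):
    for x in xs:
        net.train_step(x, math.sin(x))
after = sum((math.sin(x) - net.forward(x)) ** 2 for x in xs) / len(xs)
```

    In the paper's setting the target is the online inversion error rather than a known function, and the update gains come from the Lyapunov analysis so that boundedness can be guaranteed.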

  1. Anatomically-adapted graph wavelets for improved group-level fMRI activation mapping.

    PubMed

    Behjat, Hamid; Leonardi, Nora; Sörnmo, Leif; Van De Ville, Dimitri

    2015-12-01

    A graph-based framework for fMRI brain activation mapping is presented. The approach exploits the spectral graph wavelet transform (SGWT) to define an advanced multi-resolution spatial transformation for fMRI data. The framework extends wavelet-based SPM (WSPM), an alternative to the conventional approach of statistical parametric mapping (SPM), and is developed specifically for group-level analysis. We present a novel procedure for constructing brain graphs, with subgraphs that separately encode the structural connectivity of the cerebral and cerebellar gray matter (GM), and address inter-subject GM variability through the use of template GM representations. Graph wavelets tailored to the convoluted boundaries of GM are then constructed as a means to implement a GM-based spatial transformation on fMRI data. The proposed approach is evaluated using real as well as semi-synthetic multi-subject data. Compared to SPM and to WSPM using classical wavelets, the proposed approach shows superior type-I error control. The results on real data suggest a higher detection sensitivity as well as the capability to capture subtle, connected patterns of brain activity.

  2. Collocations in Language Learning: Corpus-Based Automatic Compilation of Collocations and Bilingual Collocation Concordancer.

    ERIC Educational Resources Information Center

    Kita, Kenji; Ogata, Hiroaki

    1997-01-01

    Presents an efficient method for extracting collocations from corpora, which uses the cost criteria measure and a tree-based data structure. Proposes a bilingual collocation concordancer, a tool that provides language learners with collocation correspondences between a native and foreign language. (Eight references) (Author/CK)

  3. An Adaptive Wavelet-Based Denoising Algorithm for Enhancing Speech in Non-stationary Noise Environment

    NASA Astrophysics Data System (ADS)

    Wang, Kun-Ching

    Traditional wavelet-based speech enhancement algorithms are ineffective in the presence of highly non-stationary noise because of the difficulty of accurately estimating the local noise spectrum. In this paper, a simple method of noise estimation employing a voice activity detector (VAD) is proposed. We can improve the output of a wavelet-based speech enhancement algorithm in the presence of random noise bursts according to the results of the VAD decision. The noisy speech is first preprocessed using bark-scale wavelet packet decomposition (BSWPD) to convert the noisy signal into wavelet coefficients (WCs). A VAD based on a bark-scale spectral entropy parameter, called BS-Entropy, is found to be superior to energy-based approaches, especially under variable noise levels. The wavelet coefficient threshold (WCT) of each subband is then temporally adjusted according to the VAD result. A speech-dominated frame is categorized as either voiced or unvoiced. A voiced frame possesses a strong tone-like spectrum in the lower subbands, so the WCs of the lower bands must be preserved. Conversely, the WCT is increased in the lower bands if the speech is categorized as unvoiced. In a noise-dominated frame, the background noise can be almost completely removed by increasing the WCT. Objective and subjective experimental results are used to evaluate the proposed system. The experiments show that this algorithm performs well under various noise conditions, especially colored and non-stationary noise.
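
    The spectral-entropy idea behind the BS-Entropy VAD can be illustrated without the bark-scale machinery. The toy below (an assumption-laden sketch, not the paper's detector) treats a frame's normalized power spectrum as a probability distribution: a tone-like voiced frame concentrates power in few bins and gives low entropy, while a noise-like frame gives entropy near 1.

```python
import math
import random

def spectral_entropy(frame):
    """Normalized spectral entropy of one frame, computed via a direct
    DFT: low for tone-like frames, near 1 for flat noise-like frames."""
    n = len(frame)
    power = []
    for k in range(1, n // 2):  # skip the DC bin
        re = sum(frame[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(frame[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        power.append(re * re + im * im)
    total = sum(power) or 1e-12
    p = [v / total for v in power]
    ent = -sum(v * math.log(v) for v in p if v > 0)
    return ent / math.log(len(p))   # normalize to [0, 1]

# A pure tone frame vs. a white-noise frame.
n = 64
tone = [math.sin(2 * math.pi * 8 * t / n) for t in range(n)]
random.seed(1)
noise = [random.uniform(-0.5, 0.5) for _ in range(n)]
e_tone = spectral_entropy(tone)
e_noise = spectral_entropy(noise)
```

    The paper computes this entropy over bark-scale subbands (from the BSWPD) rather than raw DFT bins, which makes the decision more robust to the noise level.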

  4. A wavelet-based Projector Augmented-Wave (PAW) method: Reaching frozen-core all-electron precision with a systematic, adaptive and localized wavelet basis set

    NASA Astrophysics Data System (ADS)

    Rangel, T.; Caliste, D.; Genovese, L.; Torrent, M.

    2016-11-01

    We present a Projector Augmented-Wave (PAW) method based on a wavelet basis set. We implemented our wavelet-PAW method as a PAW library in the ABINIT package [http://www.abinit.org] and in BigDFT [http://www.bigdft.org]. We test our implementation on prototypical systems to illustrate the potential usage of our code. By using the wavelet-PAW method, we can simulate charged systems and systems with special boundary conditions with frozen-core all-electron precision. Furthermore, our work paves the way to large-scale and potentially order-N simulations within a PAW method.

  5. A wavelet-MRA-based adaptive semi-Lagrangian method for the relativistic Vlasov-Maxwell system

    SciTech Connect

    Besse, Nicolas; Latu, Guillaume; Ghizzo, Alain; Sonnendruecker, Eric; Bertrand, Pierre

    2008-08-10

    In this paper we present a new method for the numerical solution of the relativistic Vlasov-Maxwell system on a phase-space grid using an adaptive semi-Lagrangian method. The adaptivity is performed through a wavelet multiresolution analysis, which gives a powerful and natural refinement criterion based on the local measurement of the approximation error and regularity of the distribution function. The multiscale expansion of the distribution function thus yields a sparse representation of the data, saving memory and CPU time. We apply this numerical scheme to reduced Vlasov-Maxwell systems arising in laser-plasma physics. The interaction of relativistically strong laser pulses with overdense plasma slabs is investigated. These Vlasov simulations revealed a rich variety of phenomena associated with the fast particle dynamics induced by electromagnetic waves, such as electron trapping, particle acceleration, and electron plasma wavebreaking. However, the wavelet-based adaptive method developed here does not yield significant improvements over Vlasov solvers on a uniform mesh, due to the substantial overhead that the method introduces. Nonetheless, it may be a first step towards more efficient adaptive solvers based on different grid-refinement ideas or on a more efficient implementation. Here the Vlasov simulations are performed in a two-dimensional phase space, where the development of thin filaments, strongly amplified by relativistic effects, requires a substantial increase in the total number of phase-space grid points as the filaments become finer over time. The adaptive method could be more useful in cases where the thin filaments that need to be resolved occupy a very small fraction of the hyper-volume, which arises in higher dimensions because of the surface-to-volume scaling and the essentially one-dimensional structure of the filaments. Moreover, the main way to improve the efficiency of the adaptive method is to

  6. Numerical solution of multi-dimensional compressible reactive flow using a parallel wavelet adaptive multi-resolution method

    NASA Astrophysics Data System (ADS)

    Grenga, Temistocle

    The aim of this research is to further develop a dynamically adaptive algorithm, based on wavelets, that can efficiently solve multi-dimensional compressible reactive flow problems. This work demonstrates the great potential of the method for direct numerical simulation (DNS) of combustion with detailed chemistry and multi-component diffusion. In particular, it addresses the performance obtained using a massively parallel implementation and demonstrates important savings in memory storage and computational time over conventional methods. In addition, fully resolved simulations of challenging three-dimensional problems involving mixing and combustion processes are performed. These problems are particularly challenging due to their strong multiscale character, and their solution requires combining advanced numerical techniques with modern computational resources.

  7. A new method based on Adaptive Discrete Wavelet Entropy Energy and Neural Network Classifier (ADWEENN) for recognition of urine cells from microscopic images independent of rotation and scaling.

    PubMed

    Avci, Derya; Leblebicioglu, Mehmet Kemal; Poyraz, Mustafa; Dogantekin, Esin

    2014-02-01

    The analysis and classification of urine cells has become an important topic in the medical diagnosis of several diseases. In this study, we therefore propose a new technique based on an Adaptive Discrete Wavelet Entropy Energy and Neural Network Classifier (ADWEENN) for the recognition of urine cells from microscopic images, independent of rotation and scaling. Digital image processing methods such as noise reduction, contrast enhancement, segmentation, and morphological processing are used in the feature-extraction stage of the ADWEENN. Image processing and pattern recognition concern the operation and design of systems that recognize patterns in data sets. In the past, a major difficulty in the classification of microscopic images was the lack of adequate characterization methods. Multi-resolution image analysis methods, such as Gabor filters and discrete wavelet decompositions, have since proven superior to classic methods for analysing such microscopic images. The ADWEENN method comprises four stages: preprocessing, feature extraction, classification, and testing. The Discrete Wavelet Transform (DWT) together with adaptive wavelet entropy and energy is used for adaptive feature extraction, strengthening the inputs to the Artificial Neural Network (ANN) classifier. In tests, the developed ADWEENN method achieved an average recognition success rate of 97.58%.
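
    Wavelet entropy and energy features of the kind the abstract describes can be sketched generically. The code below is an illustrative toy (not the ADWEENN pipeline): given wavelet coefficient subbands from any decomposition, it returns the per-band relative energies plus the wavelet entropy of that energy distribution as a compact feature vector.

```python
import math

def wavelet_energy_entropy(subbands):
    """Given wavelet coefficient subbands, return per-band relative
    energies plus the wavelet entropy of that energy distribution,
    a compact feature vector tolerant of rotation and scaling."""
    energies = [sum(c * c for c in band) for band in subbands]
    total = sum(energies) or 1e-12
    rel = [e / total for e in energies]
    entropy = -sum(p * math.log(p) for p in rel if p > 0)
    return rel + [entropy]

# Toy subbands: energy concentrated in one band (structured content)
# versus spread evenly across bands (texture-like content).
structured = [[4.0, 4.0], [0.1, 0.1], [0.1, 0.1]]
textured = [[1.0, 1.0], [1.0, 1.0], [1.0, 1.0]]
f1 = wavelet_energy_entropy(structured)
f2 = wavelet_energy_entropy(textured)
```

    Because the features are ratios of energies rather than raw coefficients, they are insensitive to overall image scaling, which is part of what the abstract's rotation/scaling independence claim relies on.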

  8. The Assessment of Muscular Effort, Fatigue, and Physiological Adaptation Using EMG and Wavelet Analysis

    PubMed Central

    Graham, Ryan B.; Wachowiak, Mark P.; Gurd, Brendon J.

    2015-01-01

    Peroxisome proliferator-activated receptor gamma coactivator 1 alpha (PGC-1α) is a transcription factor co-activator that helps coordinate mitochondrial biogenesis within skeletal muscle following exercise. While evidence gleaned from submaximal exercise suggests that intracellular pathways associated with the activation of PGC-1α, as well as the expression of PGC-1α itself, are activated to a greater extent following higher intensities of exercise, we have recently shown that this effect does not extend to supramaximal exercise, despite corresponding increases in muscle activation amplitude measured with electromyography (EMG). Spectral analyses of EMG data may provide a more in-depth assessment of changes in muscle electrophysiology occurring across different exercise intensities. The goal of the present study was therefore to apply continuous wavelet transforms (CWTs) to our previous data to comprehensively evaluate: 1) differences in muscle electrophysiological properties at different exercise intensities (i.e. 73%, 100%, and 133% of peak aerobic power), and 2) muscular effort and fatigue across a single interval of exercise at each intensity, in an attempt to shed mechanistic insight into our previous observation that the increase in PGC-1α is dissociated from exercise intensity following supramaximal exercise. In general, the CWTs revealed that localized muscle fatigue was greater than in the 73% condition only at the 133% exercise intensity, which directly matched the work rate results. Specifically, there were greater drop-offs in frequency, larger changes in burst power, and greater changes in burst area at this intensity, already observable during the first interval. As a whole, the results of the present study suggest that supramaximal exercise causes extreme localized muscular fatigue, and it is possible that the blunted PGC-1α effects observed in our previous study are the result of fatigue-associated increases in

  9. Modeling and control of nonlinear systems using novel fuzzy wavelet networks: The output adaptive control approach

    NASA Astrophysics Data System (ADS)

    Mousavi, Seyyed Hossein; Noroozi, Navid; Safavi, Ali Akbar; Ebadat, Afrooz

    2011-09-01

    This paper proposes an observer-based, self-structuring, robust adaptive fuzzy wave-net (FWN) controller for a class of nonlinear uncertain multi-input multi-output systems. The control signal comprises two parts. The first part arises from an adaptive fuzzy wave-net based controller that approximates the system's structural uncertainties. The second part comes from a robust H∞-based controller that is used to attenuate the effect of function approximation error and disturbance. Moreover, a new self-structuring algorithm is proposed to determine the locations of the basis functions. Simulation results are provided for a two-DOF robot to show the effectiveness of the proposed method.

  10. A de-noising algorithm based on wavelet threshold-exponential adaptive window width-fitting for ground electrical source airborne transient electromagnetic signal

    NASA Astrophysics Data System (ADS)

    Ji, Yanju; Li, Dongsheng; Yu, Mingmei; Wang, Yuan; Wu, Qiong; Lin, Jun

    2016-05-01

    The ground electrical source airborne transient electromagnetic (GREATEM) system on an unmanned aircraft offers considerable prospecting depth, lateral resolution and detection efficiency, and in recent years it has become an important technique for rapid resource exploration. However, GREATEM data are extremely vulnerable to stationary white noise and non-stationary electromagnetic noise (sferics, aircraft engine noise and other man-made electromagnetic noise). These noises degrade the imaging quality available for data interpretation. Based on the characteristics of GREATEM data and the major noise sources, we propose a de-noising algorithm combining the wavelet threshold method with exponential adaptive window-width fitting. First, white noise is filtered from the measured data using the wavelet threshold method. The data are then segmented using windows whose step lengths follow even logarithmic intervals. Within each window, data polluted by electromagnetic noise are identified using an energy-detection criterion, and the attenuation characteristics of the data slope are extracted. Finally, an exponential fitting algorithm fits the attenuation curve of each window, and data polluted by non-stationary electromagnetic noise are replaced with their fitted values, effectively removing the non-stationary electromagnetic noise. The proposed algorithm is verified on synthetic and real GREATEM signals. The results show that both stationary white noise and non-stationary electromagnetic noise in GREATEM signals can be effectively filtered using the wavelet threshold-exponential adaptive window-width-fitting algorithm, which enhances imaging quality.
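
    The exponential window-fitting step can be sketched with a log-linear least-squares fit. The code below is an illustrative toy, not the authors' algorithm: it fits y = A*exp(-k*t) to the unpolluted samples of a window and replaces a burst-contaminated sample with the fitted value. All sample values and the burst position are invented for the demo.

```python
import math

def fit_exponential(t, y):
    """Least-squares fit of y = A * exp(-k * t) via log-linear
    regression (requires y > 0); returns (A, k)."""
    n = len(t)
    ly = [math.log(v) for v in y]
    st, sy = sum(t), sum(ly)
    stt = sum(v * v for v in t)
    sty = sum(a * b for a, b in zip(t, ly))
    slope = (n * sty - st * sy) / (n * stt - st * st)
    intercept = (sy - slope * st) / n
    return math.exp(intercept), -slope

# A window of decaying transient-EM-like samples, one hit by a burst.
t = [0.1 * k for k in range(10)]
clean = [2.0 * math.exp(-3.0 * v) for v in t]
data = list(clean)
data[4] *= 5.0                      # simulated sferics burst
good = [i for i in range(10) if i != 4]
A, k = fit_exponential([t[i] for i in good], [data[i] for i in good])
data[4] = A * math.exp(-k * t[4])   # replace the polluted sample
```

    In the actual algorithm the polluted samples are located automatically by the energy-detection criterion rather than being known in advance.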

  11. Multiscale viscoacoustic waveform inversion with the second generation wavelet transform and adaptive time-space domain finite-difference method

    NASA Astrophysics Data System (ADS)

    Ren, Zhiming; Liu, Yang; Zhang, Qunshan

    2014-05-01

    Full waveform inversion (FWI) has the potential to provide superior subsurface model parameters. The main barrier to its application to real seismic data is its heavy computational cost. Numerical modelling methods are involved in both forward modelling and backpropagation of wavefield residuals, and account for most of the computational time in FWI. We develop a time-space-domain finite-difference (FD) method with an adaptive variable-length spatial-operator scheme for numerical simulation of the viscoacoustic equation, and extend it to viscoacoustic FWI. Compared with conventional FD methods, different operator lengths are adopted for different velocities and quality factors, which reduces the amount of computation without reducing accuracy. Inversion algorithms also play a significant role in FWI. Conventional single-scale methods are likely to converge to local minima, especially when the initial model is far from the true model. To tackle this problem, we introduce the second-generation wavelet transform to implement multiscale FWI. Compared with other multiscale methods, ours is easier to implement and has better local time-frequency analysis ability. The L2 norm is widely used in FWI but gives invalid model estimates when the data are contaminated with strong non-uniform noise. We apply the L1-norm and Huber-norm criteria in time-domain FWI to improve its robustness to noise. Our strategies have been successfully applied in synthetic experiments to both onshore and offshore reflection seismic data. The results of the viscoacoustic Marmousi example indicate that our new FWI scheme consumes fewer computational resources. In addition, the viscoacoustic Overthrust example shows better convergence and more reasonable velocity and quality-factor structures. All these results demonstrate that our method can improve the inversion accuracy and computational efficiency of FWI.
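
    Second-generation wavelet transforms are built by lifting rather than by Fourier-domain filter design. The sketch below is a minimal one-level (2,2) lifting scheme with periodic boundaries, an illustrative stand-in for the transform the paper uses, not its implementation: split into even/odd samples, predict each odd sample from its even neighbors, then update the evens to preserve the running mean.

```python
def lifting_fwd(x):
    """One level of the (2,2) lifting scheme (a second-generation wavelet
    construction). Periodic boundaries; len(x) must be even."""
    even, odd = x[0::2], x[1::2]
    m = len(even)
    # Predict: detail = odd sample minus the average of its even neighbors.
    d = [odd[i] - 0.5 * (even[i] + even[(i + 1) % m]) for i in range(m)]
    # Update: coarse = even sample plus a quarter of neighboring details.
    s = [even[i] + 0.25 * (d[i - 1] + d[i]) for i in range(m)]
    return s, d

def lifting_inv(s, d):
    """Invert by undoing the lifting steps in reverse order."""
    m = len(s)
    even = [s[i] - 0.25 * (d[i - 1] + d[i]) for i in range(m)]
    odd = [d[i] + 0.5 * (even[i] + even[(i + 1) % m]) for i in range(m)]
    x = []
    for e, o in zip(even, odd):
        x.extend([e, o])
    return x

# Perfect reconstruction on arbitrary data; near-zero details on smooth data.
x = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0]
s, d = lifting_fwd(x)
rec = lifting_inv(s, d)
ramp_s, ramp_d = lifting_fwd([float(i) for i in range(16)])
```

    For multiscale FWI, inverting from the coarse subband first supplies the long-wavelength model updates that keep the inversion away from local minima.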

  12. Wavelets and electromagnetics

    NASA Technical Reports Server (NTRS)

    Kempel, Leo C.

    1992-01-01

    Wavelets are an exciting new topic in applied mathematics and signal processing. This paper provides a brief review of wavelets, also known as families of functions, with an emphasis on interpretation rather than rigor. We derive an indirect use of wavelets for the solution of integral equations, based on techniques adapted from image processing. Examples for resistive strips are given, illustrating the effect of these techniques as well as their promise in dramatically reducing the resources required to solve an integral equation for large bodies. We also present a direct implementation of wavelets to solve an integral equation. Both methods suggest future research topics and may hold promise for a variety of uses in computational electromagnetics.

  13. Learning Collocations: Do the Number of Collocates, Position of the Node Word, and Synonymy Affect Learning?

    ERIC Educational Resources Information Center

    Webb, Stuart; Kagimoto, Eve

    2011-01-01

    This study investigated the effects of three factors (the number of collocates per node word, the position of the node word, synonymy) on learning collocations. Japanese students studying English as a foreign language learned five sets of 12 target collocations. Each collocation was presented in a single glossed sentence. The number of collocates…

  14. Collocations: A Neglected Variable in EFL.

    ERIC Educational Resources Information Center

    Farghal, Mohammed; Obiedat, Hussein

    1995-01-01

    Addresses the issue of collocations as an important and neglected variable in English-as-a-Foreign-Language classes. Two questionnaires, in English and Arabic, involving common collocations relating to food, color, and weather were administered to English majors and English language teachers. Results show both groups deficient in collocations. (36…

  15. Interlanguage Development and Collocational Clash

    ERIC Educational Resources Information Center

    Shahheidaripour, Gholamabbass

    2000-01-01

    Background: Persian English learners committed mistakes and errors which were due to insufficient knowledge of different senses of the words and collocational structures they formed. Purpose: The study reported here was conducted for a thesis submitted in partial fulfillment of the requirements for The Master of Arts degree, School of Graduate…

  16. Mr. Stockdale's Dictionary of Collocations.

    ERIC Educational Resources Information Center

    Stockdale, Joseph Gagen, III

    This dictionary of collocations was compiled by an English-as-a-Second-Language (ESL) teacher in Saudi Arabia who teaches adult, native speakers of Arabic. The dictionary is practical in teaching English because it helps to focus on everyday events and situations. The dictionary works as follows: the teacher looks up a word, such as "talk"; next…

  17. Multi-source feature extraction and target recognition in wireless sensor networks based on adaptive distributed wavelet compression algorithms

    NASA Astrophysics Data System (ADS)

    Hortos, William S.

    2008-04-01

    Proposed distributed wavelet-based algorithms are a means to compress sensor data received at the nodes forming a wireless sensor network (WSN) by exchanging information between neighboring sensor nodes. Local collaboration among nodes compacts the measurements, yielding a reduced fused set with equivalent information at far fewer nodes. Nodes may be equipped with multiple sensor types, each capable of sensing distinct phenomena: thermal, humidity, chemical, voltage, or image signals with low or no frequency content as well as audio, seismic or video signals within defined frequency ranges. Compression of the multi-source data through wavelet-based methods, distributed at active nodes, reduces downstream processing and storage requirements along the paths to sink nodes; it also enables noise suppression and more energy-efficient query routing within the WSN. Targets are first detected by the multiple sensors; then wavelet compression and data fusion are applied to the target returns, followed by feature extraction from the reduced data; feature data are input to target recognition/classification routines; targets are tracked during their sojourns through the area monitored by the WSN. Algorithms to perform these tasks are implemented in a distributed manner, based on a partition of the WSN into clusters of nodes. In this work, a scheme of collaborative processing is applied for hierarchical data aggregation and decorrelation, based on the sensor data itself and any redundant information, enabled by a distributed, in-cluster wavelet transform with lifting that allows multiple levels of resolution. The wavelet-based compression algorithm significantly decreases RF bandwidth and other resource use in target processing tasks. Following wavelet compression, features are extracted. The objective of feature extraction is to maximize the probabilities of correct target classification based on multi-source sensor measurements, while minimizing the resource expenditures at
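The in-cluster wavelet transform with lifting mentioned above can be illustrated in its simplest form, a Haar predict/update pair; this is a generic sketch, not the authors' distributed implementation:

```python
def haar_lift_forward(x):
    """One level of the lifting-based Haar transform:
    split into even/odd samples, predict odds from evens (detail),
    then update evens with the details (smooth averages)."""
    even, odd = x[0::2], x[1::2]
    detail = [o - e for o, e in zip(odd, even)]            # predict step
    smooth = [e + d / 2.0 for e, d in zip(even, detail)]   # update step
    return smooth, detail

def haar_lift_inverse(smooth, detail):
    """Undo update, undo predict, then interleave: perfect reconstruction."""
    even = [s - d / 2.0 for s, d in zip(smooth, detail)]
    odd = [d + e for d, e in zip(detail, even)]
    out = []
    for e, o in zip(even, odd):
        out.extend([e, o])
    return out
```

In a WSN setting, the predict and update steps become exchanges between neighboring nodes holding the even and odd samples; here both live in one array.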

  18. The use of wavelet transforms in the solution of two-phase flow problems

    SciTech Connect

    Moridis, G.J.; Nikolaou, M.; You, Yong

    1994-10-01

    In this paper we present the use of wavelets to solve the nonlinear Partial Differential Equation (PDE) of two-phase flow in one dimension. The wavelet transforms allow a drastically different approach in the discretization of space. In contrast to the traditional trigonometric basis functions, wavelets approximate a function not by cancellation but by placement of wavelets at appropriate locations. When an abrupt change, such as a shock wave or a spike, occurs in a function, only local coefficients in a wavelet approximation will be affected. The unique feature of wavelets is their Multi-Resolution Analysis (MRA) property, which allows seamless investigation at any spatial resolution. The use of wavelets is tested in the solution of the one-dimensional Buckley-Leverett problem against analytical solutions and solutions obtained from standard numerical models. Two classes of wavelet bases (Daubechies and Chui-Wang) and two methods (Galerkin and collocation) are investigated. We determine that the Chui-Wang wavelets and a collocation method provide the optimum wavelet solution for this type of problem. Increasing the resolution level improves the accuracy of the solution, but the order of the basis function seems to be far less important. Our results indicate that wavelet transforms are an effective and accurate method which does not suffer from oscillations or numerical smearing in the presence of steep fronts.
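The locality property described above (an abrupt change affects only local wavelet coefficients) can be verified with a toy orthonormal Haar transform; this sketch is ours, not the paper's solver:

```python
import math

def haar_transform(x):
    """Full orthonormal Haar wavelet transform; len(x) must be a power of 2."""
    x = list(x)
    out = []
    while len(x) > 1:
        smooth = [(x[i] + x[i + 1]) / math.sqrt(2) for i in range(0, len(x), 2)]
        detail = [(x[i] - x[i + 1]) / math.sqrt(2) for i in range(0, len(x), 2)]
        out = detail + out          # finer details go to the back
        x = smooth
    return x + out                  # [coarsest smooth, coarse..fine details]

def changed_coefficients(f, g, tol=1e-12):
    """Count how many wavelet coefficients differ between two signals."""
    return sum(1 for a, b in zip(haar_transform(f), haar_transform(g))
               if abs(a - b) > tol)

# A localized spike perturbs only O(log N) Haar coefficients, whereas it
# would perturb every coefficient of a trigonometric expansion.
n = 16
base = [0.0] * n
spiked = list(base)
spiked[7] = 1.0
```

A spike in a length-16 signal perturbs only log2(16) + 1 = 5 of the 16 Haar coefficients (one detail per level plus the coarsest smooth coefficient).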

  19. A Stochastic Collocation Algorithm for Uncertainty Analysis

    NASA Technical Reports Server (NTRS)

    Mathelin, Lionel; Hussaini, M. Yousuff; Zang, Thomas A. (Technical Monitor)

    2003-01-01

    This report describes a stochastic collocation method to adequately handle physically intrinsic uncertainty in the variables of a numerical simulation. For instance, while the standard Galerkin approach to Polynomial Chaos requires multi-dimensional summations over the stochastic basis functions, the stochastic collocation method enables one to collapse those summations into a single one-dimensional summation. This report furnishes the essential algorithmic details of the new stochastic collocation method and provides, as a numerical example, the solution of the Riemann problem with the stochastic collocation method used for the discretization of the stochastic parameters.
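For a single random dimension, the collocation idea reduces to evaluating the deterministic solver at quadrature nodes and forming one weighted sum. A minimal sketch with a 3-point Gauss-Legendre rule (our illustration, not the report's algorithm):

```python
import math

# 3-point Gauss-Legendre rule on [-1, 1]: collocation nodes and weights
NODES = [-math.sqrt(3.0 / 5.0), 0.0, math.sqrt(3.0 / 5.0)]
WEIGHTS = [5.0 / 9.0, 8.0 / 9.0, 5.0 / 9.0]

def collocation_moments(solver):
    """Run the deterministic solver once per collocation point, then combine
    the samples with quadrature weights: a single one-dimensional summation.
    Assumes one random input uniformly distributed on [-1, 1] (density 1/2)."""
    samples = [solver(xi) for xi in NODES]
    mean = sum(w * s for w, s in zip(WEIGHTS, samples)) / 2.0
    second = sum(w * s * s for w, s in zip(WEIGHTS, samples)) / 2.0
    return mean, second - mean * mean

# Toy "solver": the solution depends on the random input as exp(xi)
mean, var = collocation_moments(math.exp)
```

For exp(ξ) with ξ uniform on [-1, 1], the three-point rule reproduces the exact mean (e - 1/e)/2 to within about 1e-4.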

  20. Investigating ESL Learners' Lexical Collocations: The Acquisition of Verb + Noun Collocations by Japanese Learners of English

    ERIC Educational Resources Information Center

    Miyakoshi, Tomoko

    2009-01-01

    Although it is widely acknowledged that collocations play an important part in second language learning, especially at intermediate-advanced levels, learners' difficulties with collocations have not been investigated in much detail so far. The present study examines ESL learners' use of verb-noun collocations, such as "take notes," "place an…

  1. Collocation and Technicality in EAP Engineering

    ERIC Educational Resources Information Center

    Ward, Jeremy

    2007-01-01

    This article explores how collocation relates to lexical technicality, and how the relationship can be exploited for teaching EAP to second-year engineering students. First, corpus data are presented to show that complex noun phrase formation is a ubiquitous feature of engineering text, and that these phrases (or collocations) are highly…

  2. Supporting Collocation Learning with a Digital Library

    ERIC Educational Resources Information Center

    Wu, Shaoqun; Franken, Margaret; Witten, Ian H.

    2010-01-01

    Extensive knowledge of collocations is a key factor that distinguishes learners from fluent native speakers. Such knowledge is difficult to acquire simply because there is so much of it. This paper describes a system that exploits the facilities offered by digital libraries to provide a rich collocation-learning environment. The design is based on…

  3. Collocation and Galerkin Time-Stepping Methods

    NASA Technical Reports Server (NTRS)

    Huynh, H. T.

    2011-01-01

    We study the numerical solution of ordinary differential equations by one-step methods, where the solution at t_n is known and that at t_(n+1) is to be calculated. The approaches employed are collocation, continuous Galerkin (CG), and discontinuous Galerkin (DG). Relations among these three approaches are established. A quadrature formula using s evaluation points is employed for the Galerkin formulations. We show that with such a quadrature, the CG method is identical to the collocation method using the quadrature points as collocation points. Furthermore, if the quadrature formula is the right Radau one (including t_(n+1)), then the DG and CG methods also become identical, and they reduce to the Radau IIA collocation method. In addition, we present a generalization of DG that yields a method identical to CG and collocation with arbitrary collocation points. Thus, the collocation, CG, and generalized DG methods are equivalent, and the latter two methods can be formulated using the differential instead of the integral equation. Finally, all schemes discussed can be cast as s-stage implicit Runge-Kutta methods.
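The Radau IIA collocation scheme that CG and DG reduce to can be sketched for the scalar linear test equation, where the implicit stage system is just a 2x2 linear solve (a minimal sketch; names are ours):

```python
import math

def radau_iia_step(lam, y, h):
    """One step of the 2-stage Radau IIA method (order 3) for y' = lam * y.
    Butcher tableau: c = (1/3, 1); A = [[5/12, -1/12], [3/4, 1/4]]; b = (3/4, 1/4).
    The stage equations k = lam * (y + h * A @ k) reduce to a 2x2 linear solve."""
    m11 = 1.0 - h * lam * (5.0 / 12.0)
    m12 = h * lam / 12.0
    m21 = -h * lam * 0.75
    m22 = 1.0 - h * lam * 0.25
    det = m11 * m22 - m12 * m21
    r = lam * y                       # right-hand side of both stage equations
    k1 = r * (m22 - m12) / det        # Cramer's rule on (I - h*lam*A) k = r*[1,1]
    k2 = r * (m11 - m21) / det
    return y + h * (0.75 * k1 + 0.25 * k2)

# Integrate y' = -y from y(0) = 1 to t = 1 in 10 implicit steps
y, h = 1.0, 0.1
for _ in range(10):
    y = radau_iia_step(-1.0, y, h)
```

Ten steps of size 0.1 on y' = -y reproduce e^(-1) to well within the method's third-order accuracy.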

  4. The Effect of Input Enhancement of Collocations in Reading on Collocation Learning and Retention of EFL Learners

    ERIC Educational Resources Information Center

    Goudarzi, Zahra; Moini, M. Raouf

    2012-01-01

    Collocation is one of the most problematic areas in second language learning and it seems that if one wants to improve his or her communication in another language should improve his or her collocation competence. This study attempts to determine the effect of applying three different kinds of collocation on collocation learning and retention of…

  5. The Impact of Corpus-Based Collocation Instruction on Iranian EFL Learners' Collocation Learning

    ERIC Educational Resources Information Center

    Ashouri, Shabnam; Arjmandi, Masoume; Rahimi, Ramin

    2014-01-01

    Over the past decades, studies of EFL/ESL vocabulary acquisition have identified the significance of collocations in language learning. Due to the fact that collocations have been regarded as one of the major concerns of both EFL teachers and learners for many years, the present study attempts to shed light on the impact of corpus-based…

  6. Perceptually Lossless Wavelet Compression

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Yang, Gloria Y.; Solomon, Joshua A.; Villasenor, John

    1996-01-01

    The Discrete Wavelet Transform (DWT) decomposes an image into bands that vary in spatial frequency and orientation. It is widely used for image compression. Measures of the visibility of DWT quantization errors are required to achieve optimal compression. Uniform quantization of a single band of coefficients results in an artifact that is the sum of a lattice of random amplitude basis functions of the corresponding DWT synthesis filter, which we call DWT uniform quantization noise. We measured visual detection thresholds for samples of DWT uniform quantization noise in Y, Cb, and Cr color channels. The spatial frequency of a wavelet is r·2^(-L), where r is the display visual resolution in pixels/degree and L is the wavelet level. Amplitude thresholds increase rapidly with spatial frequency. Thresholds also increase from Y to Cr to Cb, and with orientation from low-pass to horizontal/vertical to diagonal. We propose a mathematical model for DWT noise detection thresholds that is a function of level, orientation, and display visual resolution. This allows calculation of a 'perceptually lossless' quantization matrix for which all errors are in theory below the visual threshold. The model may also be used as the basis for adaptive quantization schemes.
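The level-to-frequency relation and the overall shape of such a threshold model (a parabola in log spatial frequency) can be sketched as follows; the numeric parameter values are illustrative placeholders, not the paper's fitted values:

```python
import math

def wavelet_spatial_frequency(r, level):
    """f = r * 2**(-L): spatial frequency (cycles/degree) of DWT level L
    at display visual resolution r (pixels/degree)."""
    return r * 2.0 ** (-level)

def detection_threshold(f, a=0.5, k=0.47, f0=0.4, g_theta=1.0):
    """Threshold model shaped as a parabola in log spatial frequency:
    log10 T = log10 a + k * (log10 f - log10(g_theta * f0))**2.
    The parameter values here are placeholders for illustration, not the
    fitted values from the paper; g_theta shifts the minimum per orientation."""
    return a * 10.0 ** (k * (math.log10(f) - math.log10(g_theta * f0)) ** 2)

# Thresholds rise rapidly with spatial frequency, i.e. at low wavelet levels
freqs = [wavelet_spatial_frequency(32.0, L) for L in (1, 2, 3, 4)]
```

At r = 32 pixels/degree, levels 1 through 4 correspond to 16, 8, 4, and 2 cycles/degree, and the modeled threshold grows with frequency above the parabola's minimum.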

  7. Results of laser ranging collocations during 1983

    NASA Technical Reports Server (NTRS)

    Kolenkiewicz, R.

    1984-01-01

    The objective of laser ranging collocations is to compare the ability of two satellite laser ranging systems, located in the vicinity of one another, to measure the distance to an artificial Earth satellite in orbit over the sites. The similar measurement of this distance is essential before a new or modified laser system is deployed to worldwide locations in order to gather the data necessary to meet the scientific goals of the Crustal Dynamics Project. In order to be certain the laser systems are operating properly, they are periodically compared with each other. These comparisons or collocations are performed by locating the lasers side by side when they track the same satellite during the same time or pass. The data is then compared to make sure the lasers are giving essentially the same range results. Results of the three collocations performed during 1983 are given.

  8. Detection of motor imagery of swallow EEG signals based on the dual-tree complex wavelet transform and adaptive model selection

    NASA Astrophysics Data System (ADS)

    Yang, Huijuan; Guan, Cuntai; Sui Geok Chua, Karen; San Chok, See; Wang, Chuan Chu; Kok Soon, Phua; Tang, Christina Ka Yin; Keng Ang, Kai

    2014-06-01

    Objective. Detection of motor imagery of hand/arm has been extensively studied for stroke rehabilitation. This paper firstly investigates the detection of motor imagery of swallow (MI-SW) and motor imagery of tongue protrusion (MI-Ton) in an attempt to find a novel solution for post-stroke dysphagia rehabilitation. Detection of MI-SW from a simple yet relevant modality such as MI-Ton is then investigated, motivated by the similarity in activation patterns between tongue movements and swallowing and there being fewer movement artifacts in performing tongue movements compared to swallowing. Approach. Novel features were extracted based on the coefficients of the dual-tree complex wavelet transform to build multiple training models for detecting MI-SW. The session-to-session classification accuracy was boosted by adaptively selecting the training model to maximize the ratio of between-classes distances versus within-class distances, using features of training and evaluation data. Main results. Our proposed method yielded averaged cross-validation (CV) classification accuracies of 70.89% and 73.79% for MI-SW and MI-Ton for ten healthy subjects, which are significantly better than the results from existing methods. In addition, averaged CV accuracies of 66.40% and 70.24% for MI-SW and MI-Ton were obtained for one stroke patient, demonstrating the detectability of MI-SW and MI-Ton from the idle state. Furthermore, averaged session-to-session classification accuracies of 72.08% and 70% were achieved for ten healthy subjects and one stroke patient using the MI-Ton model. Significance. These results and the subjectwise strong correlations in classification accuracies between MI-SW and MI-Ton demonstrated the feasibility of detecting MI-SW from MI-Ton models.

  9. Gauging the Effects of Exercises on Verb-Noun Collocations

    ERIC Educational Resources Information Center

    Boers, Frank; Demecheleer, Murielle; Coxhead, Averil; Webb, Stuart

    2014-01-01

    Many contemporary textbooks for English as a foreign language (EFL) and books for vocabulary study contain exercises with a focus on collocations, with verb-noun collocations (e.g. "make a mistake") being particularly popular as targets for collocation learning. Common exercise formats used in textbooks and other pedagogic materials…

  10. Corpus-Based versus Traditional Learning of Collocations

    ERIC Educational Resources Information Center

    Daskalovska, Nina

    2015-01-01

    One of the aspects of knowing a word is the knowledge of which words it is usually used with. Since knowledge of collocations is essential for appropriate and fluent use of language, learning collocations should have a central place in the study of vocabulary. There are different opinions about the best ways of learning collocations. This study…

  11. Schwarz and multilevel methods for quadratic spline collocation

    SciTech Connect

    Christara, C.C.; Smith, B.

    1994-12-31

    Smooth spline collocation methods offer an alternative to Galerkin finite element methods, as well as to Hermite spline collocation methods, for the solution of linear elliptic Partial Differential Equations (PDEs). Recently, spline collocation methods with optimal order of convergence have been developed for splines of certain degrees. Convergence proofs for smooth spline collocation methods are generally more difficult than for Galerkin finite elements or Hermite spline collocation, and they require stronger assumptions and more restrictions. However, numerical tests indicate that spline collocation methods are applicable to a wider class of problems than the analysis requires, and are very competitive with finite element methods with respect to efficiency. The authors will discuss Schwarz and multilevel methods for the solution of elliptic PDEs using quadratic spline collocation, and compare these with domain decomposition methods using substructuring. Numerical tests on a variety of parallel machines will also be presented. In addition, preliminary convergence analysis using Schwarz and/or maximum principle techniques will be presented.

  12. Evaluating techniques for multivariate classification of non-collocated spatial data.

    SciTech Connect

    McKenna, Sean Andrew

    2004-09-01

    Multivariate spatial classification schemes such as regionalized classification or principal components analysis combined with kriging rely on all variables being collocated at the sample locations. In these approaches, classification of the multivariate data into a finite number of groups is done prior to the spatial estimation. However, in some cases, the variables may be sampled at different locations, with the extreme case being complete heterotopy of the data set. In these situations, it is necessary to adapt existing techniques to work with non-collocated data. Two approaches are considered: (1) kriging of existing data onto a series of 'collection points' where the classification into groups is completed and a measure of the degree of group membership is kriged to all other locations; and (2) independent kriging of all attributes to all locations, after which the classification is done at each location. Calculations are conducted using an existing groundwater chemistry data set from the upper Dakota aquifer in Kansas (USA), previously examined using regionalized classification (Bohling, 1997). This data set has all variables measured at all locations. To test the ability of the first approach to deal with non-collocated data, each variable is reestimated at each sample location through a cross-validation process and the reestimated values are then used in the regionalized classification. The second approach for non-collocated data requires independent kriging of each attribute across the entire domain prior to classification. Hierarchical and non-hierarchical classification of all vectors is completed, and a computationally less burdensome classification approach, 'sequential discrimination', is developed that constrains the classified vectors to be chosen from those with a minimal multivariate kriging variance. Resulting classification and uncertainty maps are compared between all non-collocated approaches as well as to the original collocated approach.
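The first approach's leave-one-out reestimation step can be sketched with inverse-distance weighting standing in for kriging (a real implementation would fit a variogram model; all names here are ours):

```python
def idw_estimate(target, samples, power=2):
    """Inverse-distance-weighted estimate at `target` from (location, value)
    pairs; a simple stand-in for the kriging estimator used in the paper."""
    num = den = 0.0
    for loc, val in samples:
        d2 = sum((a - b) ** 2 for a, b in zip(target, loc))
        if d2 == 0.0:
            return val                      # exact hit: honor the datum
        w = 1.0 / d2 ** (power / 2.0)       # weight = 1 / distance**power
        num += w * val
        den += w
    return num / den

def leave_one_out(samples):
    """Reestimate each variable at each sample location from all the others,
    mimicking the cross-validation reestimation described above."""
    return [(loc, idw_estimate(loc, samples[:i] + samples[i + 1:]))
            for i, (loc, _) in enumerate(samples)]
```

The reestimated values, rather than the measured ones, would then feed the regionalized classification, emulating the non-collocated setting.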

  13. Collocation method for fractional quantum mechanics

    SciTech Connect

    Amore, Paolo; Hofmann, Christoph P.; Saenz, Ricardo A.; Fernandez, Francisco M.

    2010-12-15

    We show that it is possible to obtain numerical solutions to quantum mechanical problems involving a fractional Laplacian, using a collocation approach based on little sinc functions, which discretizes the Schroedinger equation on a uniform grid. The different boundary conditions are naturally implemented using sets of functions with the appropriate behavior. Good convergence properties are observed. A comparison with results based on a Wentzel-Kramers-Brillouin analysis is performed.

  14. Visibility of wavelet quantization noise

    NASA Technical Reports Server (NTRS)

    Watson, A. B.; Yang, G. Y.; Solomon, J. A.; Villasenor, J.

    1997-01-01

    The discrete wavelet transform (DWT) decomposes an image into bands that vary in spatial frequency and orientation. It is widely used for image compression. Measures of the visibility of DWT quantization errors are required to achieve optimal compression. Uniform quantization of a single band of coefficients results in an artifact that we call DWT uniform quantization noise; it is the sum of a lattice of random amplitude basis functions of the corresponding DWT synthesis filter. We measured visual detection thresholds for samples of DWT uniform quantization noise in Y, Cb, and Cr color channels. The spatial frequency of a wavelet is r·2^(-lambda), where r is the display visual resolution in pixels/degree and lambda is the wavelet level. Thresholds increase rapidly with wavelet spatial frequency. Thresholds also increase from Y to Cr to Cb, and with orientation from lowpass to horizontal/vertical to diagonal. We construct a mathematical model for DWT noise detection thresholds that is a function of level, orientation, and display visual resolution. This allows calculation of a "perceptually lossless" quantization matrix for which all errors are in theory below the visual threshold. The model may also be used as the basis for adaptive quantization schemes.

  15. Wavelet Approximation in Data Assimilation

    NASA Technical Reports Server (NTRS)

    Tangborn, Andrew; Atlas, Robert (Technical Monitor)

    2002-01-01

    Estimation of the state of the atmosphere with the Kalman filter remains a distant goal because of the high computational cost of evolving the error covariance for both linear and nonlinear systems. Wavelet approximation is presented here as a possible solution that efficiently compresses both global and local covariance information. We demonstrate the compression characteristics on the error correlation field from a global two-dimensional chemical constituent assimilation, and implement an adaptive wavelet approximation scheme on the assimilation of the one-dimensional Burgers' equation. In the former problem, we show that 99% of the error correlation can be represented by just 3% of the wavelet coefficients, with good representation of localized features. In the Burgers' equation assimilation, the discrete linearized equations (tangent linear model) and analysis covariance are projected onto a wavelet basis and truncated to just 6% of the coefficients. A nearly optimal forecast is achieved and we show that errors due to truncation of the dynamics are no greater than the errors due to covariance truncation.
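The compression claim (a few percent of wavelet coefficients representing nearly all of the correlation) can be reproduced on a synthetic smooth field with a full Haar transform; this toy sketch is ours, not the assimilation code:

```python
import math

def haar(x):
    """Full orthonormal Haar transform; len(x) must be a power of 2."""
    x = list(x)
    out = []
    while len(x) > 1:
        s = [(x[i] + x[i + 1]) / math.sqrt(2) for i in range(0, len(x), 2)]
        d = [(x[i] - x[i + 1]) / math.sqrt(2) for i in range(0, len(x), 2)]
        out = d + out
        x = s
    return x + out

def retained_energy(x, keep_frac):
    """Energy fraction captured by the largest-magnitude wavelet coefficients."""
    coeffs = haar(x)
    keep = max(1, int(keep_frac * len(coeffs)))
    mags = sorted((abs(c) for c in coeffs), reverse=True)
    total = sum(m * m for m in mags)
    return sum(m * m for m in mags[:keep]) / total

# Smooth correlation-like field with a localized bump
field = [math.exp(-((i - 100) / 40.0) ** 2) for i in range(256)]
frac = retained_energy(field, 0.05)   # keep only 5% of the coefficients
```

For this smooth field, keeping 5% of the coefficients retains well over 95% of the energy, consistent with the compression rates quoted above.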

  16. Review of wavelet transforms for pattern recognitions

    NASA Astrophysics Data System (ADS)

    Szu, Harold H.

    1996-03-01

    After relating the adaptive wavelet transform to the human visual and hearing systems, we exploit the synergism between such smart sensor processing and brain-style neural network computing. The freedom to choose an appropriate kernel of a linear transform, given to us by the recent mathematical foundation of the wavelet transform, is exploited fully and is generally called the adaptive wavelet transform (WT). However, there are several levels of adaptivity: (1) optimum coefficients: adjustable transform coefficients chosen with respect to a fixed mother kernel for better invariant signal representation; (2) super-mother: grouping different scales of daughter wavelets of the same or different mother wavelets at different shift locations into a new family called a superposition mother kernel for better speech signal classification; (3) variational calculus to determine ab initio a constrained-optimization mother for a specific task. The tradeoff among the mathematical rigor of complete orthonormality, order-N speed, and adaptive flexibility is ultimately up to the user's needs. Then, to illustrate (1), a new invariant optoelectronic architecture of a wedge-shaped filter in the WT domain is given for scale-invariant signal classification by neural networks.

  17. NOKIN1D: one-dimensional neutron kinetics based on a nodal collocation method

    NASA Astrophysics Data System (ADS)

    Verdú, G.; Ginestar, D.; Miró, R.; Jambrina, A.; Barrachina, T.; Soler, Amparo; Concejal, Alberto

    2014-06-01

    The TRAC-BF1 one-dimensional kinetic model is a formulation of the neutron diffusion equation in the two-energy-group approximation, based on the analytical nodal method (ANM). Its advantage over a zero-dimensional kinetic model is that the axial power profile may vary with time due to thermal-hydraulic parameter changes and/or actions of the control systems, but it has the disadvantage that in unusual situations it fails to converge. The nodal collocation method, developed for the neutron diffusion equation and applied here to the kinetics resolution of the TRAC-BF1 thermal-hydraulics, is an adaptation of traditional collocation methods for the discretization of partial differential equations, based on expanding the solution as a linear combination of analytical functions. We chose a nodal collocation method based on a Legendre-polynomial expansion of the neutron fluxes in each cell. Qualification is carried out by analyzing the turbine trip transient from the NEA benchmark for the Peach Bottom NPP, using both the original 1D kinetics implemented in TRAC-BF1 and the 1D nodal collocation method.

  18. Digital audio signal filtration based on the dual-tree wavelet transform

    NASA Astrophysics Data System (ADS)

    Yaseen, A. S.; Pavlov, A. N.

    2015-07-01

    A new method of digital audio signal filtration based on the dual-tree wavelet transform is described. An adaptive approach is proposed that allows the parameters of the wavelet filter to be adjusted automatically for optimal filtration. A significant improvement in filtration quality is demonstrated in comparison with traditionally used filters based on the discrete wavelet transform.

  19. Rotation and Scale Invariant Wavelet Feature for Content-Based Texture Image Retrieval.

    ERIC Educational Resources Information Center

    Lee, Moon-Chuen; Pun, Chi-Man

    2003-01-01

    Introduces a rotation and scale invariant log-polar wavelet texture feature for image retrieval. The underlying feature extraction process involves a log-polar transform followed by an adaptive row shift invariant wavelet packet transform. Experimental results show that this rotation and scale invariant wavelet feature is quite effective for image…

  20. Wavelet based image visibility enhancement of IR images

    NASA Astrophysics Data System (ADS)

    Jiang, Qin; Owechko, Yuri; Blanton, Brendan

    2016-05-01

    Enhancing the visibility of infrared images obtained in a degraded visibility environment is very important for many applications such as surveillance, visual navigation in bad weather, and helicopter landing in brownout conditions. In this paper, we present an IR image visibility enhancement system based on adaptively modifying the wavelet coefficients of the images. In our proposed system, input images are first filtered by a histogram-based dynamic range filter in order to remove sensor noise and convert the input images into 8-bit dynamic range for efficient processing and display. By utilizing a wavelet transformation, we modify the image intensity distribution and enhance image edges simultaneously. In the wavelet domain, low frequency wavelet coefficients contain original image intensity distribution while high frequency wavelet coefficients contain edge information for the original images. To modify the image intensity distribution, an adaptive histogram equalization technique is applied to the low frequency wavelet coefficients while to enhance image edges, an adaptive edge enhancement technique is applied to the high frequency wavelet coefficients. An inverse wavelet transformation is applied to the modified wavelet coefficients to obtain intensity images with enhanced visibility. Finally, a Gaussian filter is used to remove blocking artifacts introduced by the adaptive techniques. Since wavelet transformation uses down-sampling to obtain low frequency wavelet coefficients, histogram equalization of low-frequency coefficients is computationally more efficient than histogram equalization of the original images. We tested the proposed system with degraded IR images obtained from a helicopter landing in brownout conditions. Our experimental results show that the proposed system is effective for enhancing the visibility of degraded IR images.
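The core of such a pipeline (modify the low-frequency band for intensity, scale the high-frequency bands for edges) can be sketched with a one-level 2D Haar transform; the simple range stretch below stands in for adaptive histogram equalization, and everything here is an illustration rather than the authors' system:

```python
def haar2d(img):
    """One-level 2D Haar analysis into LL, LH, HL, HH (even dimensions assumed)."""
    lo = [[(r[i] + r[i + 1]) / 2.0 for i in range(0, len(r), 2)] for r in img]
    hi = [[(r[i] - r[i + 1]) / 2.0 for i in range(0, len(r), 2)] for r in img]
    def col_pass(m, sign):
        return [[(m[j][i] + sign * m[j + 1][i]) / 2.0 for i in range(len(m[0]))]
                for j in range(0, len(m), 2)]
    return col_pass(lo, 1), col_pass(lo, -1), col_pass(hi, 1), col_pass(hi, -1)

def ihaar2d(LL, LH, HL, HH):
    """Exact inverse of haar2d: undo the column pass, then the row pass."""
    def col_merge(lo, hi):
        out = []
        for j in range(len(lo)):
            out.append([lo[j][i] + hi[j][i] for i in range(len(lo[0]))])
            out.append([lo[j][i] - hi[j][i] for i in range(len(lo[0]))])
        return out
    lo = col_merge(LL, LH)
    hi = col_merge(HL, HH)
    return [[v for i in range(len(lo[0])) for v in (lo[j][i] + hi[j][i],
                                                    lo[j][i] - hi[j][i])]
            for j in range(len(lo))]

def enhance(img, edge_gain=1.5):
    """Stretch the LL band (intensity distribution) and amplify the detail
    bands (edges); a range stretch replaces adaptive histogram equalization."""
    LL, LH, HL, HH = haar2d(img)
    flat = [v for row in LL for v in row]
    lo_v, hi_v = min(flat), max(flat)
    scale = 255.0 / (hi_v - lo_v) if hi_v > lo_v else 1.0
    LL = [[(v - lo_v) * scale for v in row] for row in LL]
    boost = lambda band: [[v * edge_gain for v in row] for row in band]
    return ihaar2d(LL, boost(LH), boost(HL), boost(HH))
```

Because the LL band is a down-sampled image, equalizing it is cheaper than equalizing the full-resolution input, which is the efficiency point made in the abstract.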

  1. Legendre Wavelet Operational Matrix of fractional Derivative through wavelet-polynomial transformation and its Applications in Solving Fractional Order Brusselator system

    NASA Astrophysics Data System (ADS)

    Chang, Phang; Isah, Abdulnasir

    2016-02-01

    In this paper we propose a wavelet operational method based on shifted Legendre polynomials to obtain the numerical solutions of the nonlinear fractional-order chaotic system known as the fractional-order Brusselator system. The operational matrices of fractional derivatives together with a collocation method turn the nonlinear fractional-order Brusselator system into a system of algebraic equations. Two illustrative examples are given to demonstrate the accuracy and simplicity of the proposed techniques.

  2. Profiling the Collocation Use in ELT Textbooks and Learner Writing

    ERIC Educational Resources Information Center

    Tsai, Kuei-Ju

    2015-01-01

    The present study investigates the collocational profiles of (1) three series of graded textbooks for English as a foreign language (EFL) commonly used in Taiwan, (2) the written productions of EFL learners, and (3) the written productions of native speakers (NS) of English. These texts were examined against a purpose-built collocation list. Based…

  3. Collocations of High Frequency Noun Keywords in Prescribed Science Textbooks

    ERIC Educational Resources Information Center

    Menon, Sujatha; Mukundan, Jayakaran

    2012-01-01

    This paper analyses the discourse of science through the study of collocational patterns of high frequency noun keywords in science textbooks used by upper secondary students in Malaysia. Research has shown that one of the areas of difficulty in science discourse concerns lexis, especially that of collocations. This paper describes a corpus-based…

  4. Subcell resolution in simplex stochastic collocation for spatial discontinuities

    SciTech Connect

    Witteveen, Jeroen A.S.; Iaccarino, Gianluca

    2013-10-15

    Subcell resolution has been used in the Finite Volume Method (FVM) to obtain accurate approximations of discontinuities in the physical space. Stochastic methods are usually based on local adaptivity for resolving discontinuities in the stochastic dimensions. However, the adaptive refinement in the probability space is ineffective in the non-intrusive uncertainty quantification framework, if the stochastic discontinuity is caused by a discontinuity in the physical space with a random location. The dependence of the discontinuity location in the probability space on the spatial coordinates then results in a staircase approximation of the statistics, which leads to first-order error convergence and an underprediction of the maximum standard deviation. To avoid these problems, we introduce subcell resolution into the Simplex Stochastic Collocation (SSC) method for obtaining a truly discontinuous representation of random spatial discontinuities in the interior of the cells discretizing the probability space. The presented SSC–SR method is based on resolving the discontinuity location in the probability space explicitly as function of the spatial coordinates and extending the stochastic response surface approximations up to the predicted discontinuity location. The applications to a linear advection problem, the inviscid Burgers’ equation, a shock tube problem, and the transonic flow over the RAE 2822 airfoil show that SSC–SR resolves random spatial discontinuities with multiple stochastic and spatial dimensions accurately using a minimal number of samples.

  5. Collocation methods for distillation design. 1: Model description and testing

    SciTech Connect

    Huss, R.S.; Westerberg, A.W.

    1996-05-01

    Fast and accurate distillation design requires a model that significantly reduces the problem size while accurately approximating a full-order distillation column model. This collocation model builds on the concepts of past collocation models for design of complex real-world separation systems. Two variable transformations make this method unique. Polynomials cannot accurately fit trajectories which flatten out. In columns, flat sections occur in the middle of large column sections or where concentrations go to 0 or 1. With an exponential transformation of the tray number which maps zero to an infinite number of trays onto the range 0--1, four collocation trays can accurately simulate a large column section. With a hyperbolic tangent transformation of the mole fractions, the model can simulate columns which reach high purities. Furthermore, this model uses multiple collocation elements for a column section, which is more accurate than a single high-order collocation section.
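
    The two transformations can be sketched as follows. These are illustrative stand-ins, not the paper's exact formulas: an exponential transform maps tray number n in [0, inf) onto [0, 1), flattening long column sections so low-order polynomials can fit them, and a tanh-type transform stretches mole fractions near the 0/1 purity limits (the scaling constant alpha is an assumption):

```python
import numpy as np

alpha = 10.0  # assumed scaling constant (trays)

def tray_transform(n):
    # n = 0 maps to 0; n -> infinity maps to 1
    return 1.0 - np.exp(-n / alpha)

def mole_fraction_transform(x, eps=1e-12):
    # inverse-tanh stretch: high purities near 0 and 1 become well separated
    x = np.clip(x, eps, 1.0 - eps)
    return np.arctanh(2.0 * x - 1.0)

n = np.array([0.0, 5.0, 20.0, 100.0])
print(tray_transform(n))           # monotone, approaching 1 for large tray counts

x = np.array([1e-6, 0.5, 1.0 - 1e-6])
print(mole_fraction_transform(x))  # large magnitudes near the purity limits
```

    On the transformed coordinates, a handful of collocation trays per section can then represent behavior that would otherwise require fitting a flat, slowly varying profile over many trays.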

  6. A Study on the Phenomenon of Collocations: Methodology of Teaching English and German Collocations to Russian Students

    ERIC Educational Resources Information Center

    Varlamova, Elena V.; Naciscione, Anita; Tulusina, Elena A.

    2016-01-01

    Relevance of the issue stated in the article is determined by the fact that there is a lack of research devoted to the methods of teaching English and German collocations. The aim of our work is to determine methods of teaching English and German collocations to Russian university students studying foreign languages through experimental testing.…

  7. Visibility of Wavelet Quantization Noise

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Yang, Gloria Y.; Solomon, Joshua A.; Villasenor, John; Null, Cynthia H. (Technical Monitor)

    1995-01-01

    The Discrete Wavelet Transform (DWT) decomposes an image into bands that vary in spatial frequency and orientation. It is widely used for image compression. Measures of the visibility of DWT quantization errors are required to achieve optimal compression. Uniform quantization of a single band of coefficients results in an artifact that is the sum of a lattice of random amplitude basis functions of the corresponding DWT synthesis filter, which we call DWT uniform quantization noise. We measured visual detection thresholds for samples of DWT uniform quantization noise in Y, Cb, and Cr color channels. The spatial frequency of a wavelet is r·2^(-L), where r is display visual resolution in pixels/degree, and L is the wavelet level. Amplitude thresholds increase rapidly with spatial frequency. Thresholds also increase from Y to Cr to Cb, and with orientation from low-pass to horizontal/vertical to diagonal. We describe a mathematical model to predict DWT noise detection thresholds as a function of level, orientation, and display visual resolution. This allows calculation of a "perceptually lossless" quantization matrix for which all errors are in theory below the visual threshold. The model may also be used as the basis for adaptive quantization schemes.
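
    The level-to-frequency relation is simple to compute. In this sketch the display resolution and the threshold-model constants are assumptions for illustration, not the paper's fitted values; only the relation f = r·2^(-L) comes from the abstract:

```python
import numpy as np

r = 32.0                      # assumed display resolution, pixels per degree
L = np.arange(1, 6)           # wavelet levels 1..5
f = r * 2.0 ** (-L)           # spatial frequency of each level, cycles/degree

# Hedged stand-in for a threshold model: log-threshold quadratic in log
# frequency, rising away from an assumed most-sensitive frequency f0.
def log_threshold(f, a=0.5, f0=4.0, k=0.2):
    return a + k * (np.log2(f) - np.log2(f0)) ** 2

print(f)                      # [16.  8.  4.  2.  1.]
print(log_threshold(f))
```

    A per-band quantization matrix would then be built by inverting such a threshold model at each (level, orientation, channel) combination.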

  8. Orthogonal collocation of the nonlinear Boltzmann equation

    NASA Astrophysics Data System (ADS)

    Morin, T. J.; Hawley, M. C.

    1985-07-01

    A numerical solution to the nonlinear Boltzmann equation for Maxwell molecules, including the momentum-conserving kernel, by the method of orthogonal collocation is presented and compared with the similarity solution of Krupp (1967), Bobylev (1975), and Krook and Wu (1976) (KBKW). Excellent agreement is found between the two for KBKW initial values. The calculations of the evolution of a distribution function from non-KBKW initial conditions are examined. The correlation of the non-KBKW trajectories with the presence of a robust unstable manifold in the eigenspace of the linearized Boltzmann equation is considered. The results of a linear analysis are compared with the work of Wang Chang and Uhlenbeck (1952). The implications of the results for the relaxation of nonequilibrium distribution functions are discussed.

  9. Covariance modeling in geodetic applications of collocation

    NASA Astrophysics Data System (ADS)

    Barzaghi, Riccardo; Cazzaniga, Noemi; De Gaetani, Carlo; Reguzzoni, Mirko

    2014-05-01

    Collocation method is widely applied in geodesy for estimating/interpolating gravity related functionals. The crucial problem of this approach is the correct modeling of the empirical covariance functions of the observations. Different methods for getting reliable covariance models have been proposed in the past by many authors. However, there are still problems in fitting the empirical values, particularly when different functionals of T are used and combined. Through suitable linear combinations of positive degree variances, a model function that properly fits the empirical values can be obtained. This kind of condition is commonly handled by solver algorithms in linear programming problems. In this work the problem of modeling covariance functions is dealt with using an innovative method based on the simplex algorithm. This requires the definition of an objective function to be minimized (or maximized), where the unknown variables or their linear combinations are subject to some constraints. The non-standard use of the simplex method consists in defining constraints on the model covariance function in order to obtain the best fit on the corresponding empirical values. Further constraints are applied so as to ensure coherence with model degree variances and to prevent possible solutions with no physical meaning. The fitting procedure is iterative and, in each iteration, constraints are strengthened until the best possible fit between model and empirical functions is reached. The results obtained during the test phase of this new methodology show remarkable improvements with respect to the software packages available until now. Numerical tests are also presented to check the impact that improved covariance modeling has on the collocation estimate.
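
    The flavor of the fitting problem can be sketched with a nonnegative least-squares stand-in (not the paper's simplex/linear-programming implementation): an empirical covariance curve is approximated by a nonnegative combination of basis functions playing the role of positive degree variances. All names and numbers below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
psi = np.linspace(0.0, 1.0, 50)   # spherical distance, arbitrary units
# Assumed basis curves standing in for positive degree-variance blocks
basis = np.array([np.exp(-psi / s) for s in (0.05, 0.2, 0.8)]).T

true_w = np.array([0.3, 1.0, 0.5])
emp_cov = basis @ true_w + 0.01 * rng.standard_normal(psi.size)  # "empirical" values

# Projected-gradient iteration keeps the weights nonnegative, mimicking the
# positivity constraints imposed on the degree variances.
w = np.zeros(3)
step = 1.0 / np.linalg.norm(basis.T @ basis, 2)
for _ in range(20000):
    w = np.maximum(0.0, w - step * basis.T @ (basis @ w - emp_cov))

print(np.round(w, 2))   # fitted weights, nonnegative by construction
```

    The paper's actual machinery replaces this with an LP objective and iteratively strengthened constraints, but the positivity requirement on the combination is the same.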

  10. Wavelet Analyses and Applications

    ERIC Educational Resources Information Center

    Bordeianu, Cristian C.; Landau, Rubin H.; Paez, Manuel J.

    2009-01-01

    It is shown how a modern extension of Fourier analysis known as wavelet analysis is applied to signals containing multiscale information. First, a continuous wavelet transform is used to analyse the spectrum of a nonstationary signal (one whose form changes in time). The spectral analysis of such a signal gives the strength of the signal in each…

  11. Lifting wavelet method of target detection

    NASA Astrophysics Data System (ADS)

    Han, Jun; Zhang, Chi; Jiang, Xu; Wang, Fang; Zhang, Jin

    2009-11-01

    Image target recognition plays an important role in scientific exploration, aeronautics, space-to-ground observation, photography and topographic mapping. Image noise, blur and various kinds of interference in complex environments have long affected the stability of recognition algorithms. To address the real-time performance, accuracy and anti-interference problems of target detection, this paper applies a lifting-wavelet detection method. First, histogram equalization and frame differencing are used to obtain the target region, followed by adaptive thresholding and mathematical morphology operations to eliminate background errors. Second, a multi-channel wavelet filter is applied to denoise and enhance the original image, overcoming the noise sensitivity of general algorithms and reducing the false-alarm rate; the multiresolution character of wavelets and the lifting framework, which can be designed directly in the spatial domain, are exploited for target detection and feature extraction. The experimental results show that the designed lifting wavelet overcomes the detection difficulties caused by target motion against complex backgrounds, effectively suppresses noise, and improves detection efficiency and speed.
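
    The lifting scheme itself is compact. The paper does not specify its predict/update filters, so this sketch uses the simplest instance, the Haar lifting pair, to show the split/predict/update structure and its exact invertibility:

```python
import numpy as np

def haar_lift_forward(x):
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    d = odd - even            # predict: detail = odd minus prediction from even
    s = even + d / 2.0        # update: approximation preserves the running mean
    return s, d

def haar_lift_inverse(s, d):
    even = s - d / 2.0        # undo update
    odd = even + d            # undo predict
    x = np.empty(2 * s.size)
    x[0::2], x[1::2] = even, odd
    return x

x = np.array([2.0, 4.0, 6.0, 8.0, 5.0, 5.0, 1.0, 3.0])
s, d = haar_lift_forward(x)
print(np.allclose(haar_lift_inverse(s, d), x))   # True: perfect reconstruction
```

    Because every lifting step is trivially invertible in place, the transform is fast and memory-light, which is what makes it attractive for real-time detection pipelines.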

  12. The Learning Burden of Collocations: The Role of Interlexical and Intralexical Factors

    ERIC Educational Resources Information Center

    Peters, Elke

    2016-01-01

    This study investigates whether congruency (+/- literal translation equivalent), collocate-node relationship (adjective-noun, verb-noun, phrasal-verb-noun collocations), and word length influence the learning burden of EFL learners' learning collocations at the initial stage of form-meaning mapping. Eighteen collocations were selected on the basis…

  13. Developing and Evaluating a Chinese Collocation Retrieval Tool for CFL Students and Teachers

    ERIC Educational Resources Information Center

    Chen, Howard Hao-Jan; Wu, Jian-Cheng; Yang, Christine Ting-Yu; Pan, Iting

    2016-01-01

    The development of collocational knowledge is important for foreign language learners; unfortunately, learners often have difficulties producing proper collocations in the target language. Among the various ways of collocation learning, the DDL (data-driven learning) approach encourages the independent learning of collocations and allows learners…

  14. The Use of Verb Noun Collocations in Writing Stories among Iranian EFL Learners

    ERIC Educational Resources Information Center

    Bazzaz, Fatemeh Ebrahimi; Samad, Arshad Abd

    2011-01-01

    An important aspect of native speakers' communicative competence is collocational competence which involves knowing which words usually come together and which do not. This paper investigates the possible relationship between knowledge of collocations and the use of verb noun collocation in writing stories because collocational knowledge…

  15. Usability Study of Two Collocated Prototype System Displays

    NASA Technical Reports Server (NTRS)

    Trujillo, Anna C.

    2007-01-01

    Currently, most of the displays in control rooms can be categorized as status screens, alerts/procedures screens (or paper), or control screens (where the state of a component is changed by the operator). The primary focus of this line of research is to determine which pieces of information (status, alerts/procedures, and control) should be collocated. Two collocated displays were tested for ease of understanding in an automated desktop survey. This usability study was conducted as a prelude to a larger human-in-the-loop experiment in order to verify that the 2 new collocated displays were easy to learn and usable. The results indicate that while the DC display was preferred and yielded better performance than the MDO display, both collocated displays can be easily learned and used.

  16. Periodized Daubechies wavelets

    SciTech Connect

    Restrepo, J.M.; Leaf, G.K.; Schlossnagle, G.

    1996-03-01

    The properties of periodized Daubechies wavelets on [0,1] are detailed and contrasted with their counterparts which form a basis for L{sup 2}(R). Numerical examples illustrate the analytical estimates for convergence and demonstrate, by comparison with Fourier spectral methods, the superiority of wavelet projection methods for approximations. The analytical solution to inner products of periodized wavelets and their derivatives, which are known as connection coefficients, is presented, and their use is illustrated in the approximation of two commonly used differential operators. The periodization of the connection coefficients in Galerkin schemes is presented in detail.
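
    Periodization itself amounts to wrapping the filter indices modulo the signal length. A minimal sketch of a one-level periodized Daubechies D4 transform (the filter coefficients are the standard D4 values; everything else is illustrative) shows that orthonormality, and hence Parseval's identity, survives the wrap-around:

```python
import numpy as np

sq3 = np.sqrt(3.0)
# Standard orthonormal Daubechies D4 low-pass filter
h = np.array([1 + sq3, 3 + sq3, 3 - sq3, 1 - sq3]) / (4.0 * np.sqrt(2.0))
g = np.array([h[3], -h[2], h[1], -h[0]])      # quadrature mirror high-pass

def periodized_dwt(x):
    N = x.size
    # mod-N indexing realizes the periodization on [0,1]
    idx = (2 * np.arange(N // 2)[:, None] + np.arange(4)[None, :]) % N
    return x[idx] @ h, x[idx] @ g             # approximation, detail

x = np.sin(2 * np.pi * np.arange(16) / 16.0)
a, d = periodized_dwt(x)
print(np.allclose(a @ a + d @ d, x @ x))      # True: energy preserved
```

    The connection coefficients discussed in the abstract are the analogous periodized inner products of these basis functions with their derivatives.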

  17. Statistical modelling of collocation uncertainty in atmospheric thermodynamic profiles

    NASA Astrophysics Data System (ADS)

    Fassò, A.; Ignaccolo, R.; Madonna, F.; Demoz, B. B.; Franco-Villoria, M.

    2014-06-01

    The quantification of measurement uncertainty of atmospheric parameters is a key factor in assessing the uncertainty of global change estimates given by numerical prediction models. One of the critical contributions to the uncertainty budget is related to the collocation mismatch in space and time among observations made at different locations. This is particularly important for vertical atmospheric profiles obtained by radiosondes or lidar. In this paper we propose a statistical modelling approach capable of explaining the relationship between collocation uncertainty and a set of environmental factors, height and distance between imperfectly collocated trajectories. The new statistical approach is based on the heteroskedastic functional regression (HFR) model which extends the standard functional regression approach and allows a natural definition of uncertainty profiles. Along this line, a five-fold decomposition of the total collocation uncertainty is proposed, giving both a profile budget and an integrated column budget. HFR is a data-driven approach valid for any atmospheric parameter, which can be assumed smooth. It is illustrated here by means of the collocation uncertainty analysis of relative humidity from two stations involved in the GCOS reference upper-air network (GRUAN). In this case, 85% of the total collocation uncertainty is ascribed to reducible environmental error, 11% to irreducible environmental error, 3.4% to adjustable bias, 0.1% to sampling error and 0.2% to measurement error.

  18. EEG Artifact Removal Using a Wavelet Neural Network

    NASA Technical Reports Server (NTRS)

    Nguyen, Hoang-Anh T.; Musson, John; Li, Jiang; McKenzie, Frederick; Zhang, Guangfan; Xu, Roger; Richey, Carl; Schnell, Tom

    2011-01-01

    In this paper we developed a wavelet neural network (WNN) algorithm for electroencephalogram (EEG) artifact removal without electrooculographic (EOG) recordings. The algorithm combines the universal approximation characteristics of neural networks and the time-frequency property of wavelets. We compared the WNN algorithm with the ICA technique and a wavelet thresholding method, which was realized by using Stein's unbiased risk estimate (SURE) with an adaptive gradient-based optimal threshold. Experimental results on a driving test data set show that WNN can remove EEG artifacts effectively without diminishing useful EEG information, even for very noisy data.
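
    The wavelet-thresholding baseline can be sketched briefly. The paper uses a SURE-based adaptive threshold; for brevity this stand-in uses the simpler universal threshold sigma*sqrt(2 ln N) with a one-level Haar transform, and the signal and noise level are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 1024
t = np.linspace(0, 1, N)
clean = np.sin(2 * np.pi * 5 * t)
noisy = clean + 0.3 * rng.standard_normal(N)

# One-level orthonormal Haar analysis
a = (noisy[0::2] + noisy[1::2]) / np.sqrt(2.0)
d = (noisy[0::2] - noisy[1::2]) / np.sqrt(2.0)

sigma = np.median(np.abs(d)) / 0.6745               # robust noise estimate
thr = sigma * np.sqrt(2.0 * np.log(N))              # universal threshold
d = np.sign(d) * np.maximum(np.abs(d) - thr, 0.0)   # soft thresholding

# Haar synthesis
den = np.empty(N)
den[0::2] = (a + d) / np.sqrt(2.0)
den[1::2] = (a - d) / np.sqrt(2.0)

print(np.mean((den - clean) ** 2) < np.mean((noisy - clean) ** 2))  # True
```

    The WNN of the paper replaces this fixed shrinkage rule with a learned mapping, which is how it avoids discarding useful EEG detail along with the artifact.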

  19. Wavelets meet genetic imaging

    NASA Astrophysics Data System (ADS)

    Wang, Yu-Ping

    2005-08-01

    Genetic image analysis is an interdisciplinary area, which combines microscope image processing techniques with the use of biochemical probes for the detection of genetic aberrations responsible for cancers and genetic diseases. Recent years have witnessed parallel and significant progress in both image processing and genetics. On one hand, revolutionary multiscale wavelet techniques have been developed in signal processing and applied mathematics in the last decade, providing sophisticated tools for genetic image analysis. On the other hand, reaping the fruit of genome sequencing, high resolution genetic probes have been developed to facilitate accurate detection of subtle and cryptic genetic aberrations. In the meantime, however, they bring about computational challenges for image analysis. In this paper, we review the fruitful interaction between wavelets and genetic imaging. We show how wavelets offer a perfect tool to address a variety of chromosome image analysis problems. In fact, the same word "subband" has been used in the nomenclature of cytogenetics to describe the multiresolution banding structure of the chromosome, even before its appearance in the wavelet literature. The application of wavelets to chromosome analysis holds great promise in addressing several computational challenges in genetics. A variety of real world examples such as the chromosome image enhancement, compression, registration and classification will be demonstrated. These examples are drawn from fluorescence in situ hybridization (FISH) and microarray (gene chip) imaging experiments, which indicate the impact of wavelets on the diagnosis, treatments and prognosis of cancers and genetic diseases.

  20. Continuous Groundwater Monitoring Collocated at USGS Streamgages

    NASA Astrophysics Data System (ADS)

    Constantz, J. E.; Eddy-Miller, C.; Caldwell, R.; Wheeer, J.; Barlow, J.

    2012-12-01

    USGS Office of Groundwater funded a 2-year pilot study collocating groundwater wells for monitoring water level and temperature at several existing continuous streamgages in Montana and Wyoming, while U.S. Army Corps of Engineers funded enhancement to streamgages in Mississippi. To increase spatial relevance within a given watershed, study sites were selected where near-stream groundwater was in connection with an appreciable aquifer, and where logistics and cost of well installations were considered representative. After each well installation and surveying, groundwater level and temperature were easily either radio-transmitted or hardwired to the existing data acquisition system located in the streamgaging shelter. Since USGS field personnel regularly visit streamgages during routine streamflow measurements and streamgage maintenance, the close proximity of observation wells resulted in minimum extra time to verify electronically transmitted measurements. After field protocol was tuned, stream and nearby groundwater information were concurrently acquired at streamgages and transmitted to satellite from seven pilot-study sites extending over nearly 2,000 miles (3,200 km) of the central US from October 2009 until October 2011, for evaluating the scientific and engineering add-on value of the enhanced streamgage design. Examination of the four-parameter transmission from the seven pilot study groundwater gaging stations reveals an internally consistent, dynamic data suite of continuous groundwater elevation and temperature in tandem with ongoing stream stage and temperature data. Qualitatively, the graphical information provides appreciation of seasonal trends in stream exchanges with shallow groundwater, as well as thermal issues of concern for topics ranging from ice hazards to suitability of fish refugia, while quantitatively this information provides a means for estimating flux exchanges through the streambed via heat-based inverse-type groundwater modeling. In June

  1. The multi-element probabilistic collocation method (ME-PCM): Error analysis and applications

    SciTech Connect

    Foo, Jasmine; Wan Xiaoliang; Karniadakis, George Em

    2008-11-20

    Stochastic spectral methods are numerical techniques for approximating solutions to partial differential equations with random parameters. In this work, we present and examine the multi-element probabilistic collocation method (ME-PCM), which is a generalized form of the probabilistic collocation method. In the ME-PCM, the parametric space is discretized and a collocation/cubature grid is prescribed on each element. Both full and sparse tensor product grids based on Gauss and Clenshaw-Curtis quadrature rules are considered. We prove analytically and observe in numerical tests that as the parameter space mesh is refined, the convergence rate of the solution depends on the quadrature rule of each element only through its degree of exactness. In addition, the L{sup 2} error of the tensor product interpolant is examined and an adaptivity algorithm is provided. Numerical examples demonstrating adaptive ME-PCM are shown, including low-regularity problems and long-time integration. We test the ME-PCM on two-dimensional Navier-Stokes examples and a stochastic diffusion problem with various random input distributions and up to 50 dimensions. While the convergence rate of ME-PCM deteriorates in 50 dimensions, the error in the mean and variance is two orders of magnitude lower than the error obtained with the Monte Carlo method using only a small number of samples (e.g., 100). The computational cost of ME-PCM is found to be favorable when compared to the cost of other methods including stochastic Galerkin, Monte Carlo and quasi-random sequence methods.
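
    The basic mechanics can be sketched in one dimension. This illustrative example (the response function, element edges, and point counts are assumptions, not from the paper) splits the parameter space of a uniform random input into elements, places a Gauss-Legendre grid on each, and assembles the mean from deterministic evaluations at the collocation points:

```python
import numpy as np

def me_pcm_mean(u, edges, npts):
    x, w = np.polynomial.legendre.leggauss(npts)    # reference rule on [-1,1]
    mean = 0.0
    for a, b in zip(edges[:-1], edges[1:]):
        xi = 0.5 * (b - a) * x + 0.5 * (a + b)      # map points into the element
        mean += 0.5 * (b - a) * np.sum(w * u(xi))   # element contribution
    return mean / (edges[-1] - edges[0])            # uniform density 1/(b-a)

u = np.exp                                          # model response u(xi)
mean = me_pcm_mean(u, edges=np.array([-1.0, 0.0, 1.0]), npts=5)
print(abs(mean - np.sinh(1.0)) < 1e-9)              # True: matches E[exp(xi)]
```

    The convergence statement in the abstract corresponds to the fact that, as the element mesh is refined, the error is governed by the degree of exactness of the per-element quadrature rule used here.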

  2. Daily water level forecasting using wavelet decomposition and artificial intelligence techniques

    NASA Astrophysics Data System (ADS)

    Seo, Youngmin; Kim, Sungwon; Kisi, Ozgur; Singh, Vijay P.

    2015-01-01

    Reliable water level forecasting for reservoir inflow is essential for reservoir operation. The objective of this paper is to develop and apply two hybrid models for daily water level forecasting and investigate their accuracy. These two hybrid models are wavelet-based artificial neural network (WANN) and wavelet-based adaptive neuro-fuzzy inference system (WANFIS). Wavelet decomposition is employed to decompose an input time series into approximation and detail components. The decomposed time series are used as inputs to artificial neural networks (ANN) and adaptive neuro-fuzzy inference system (ANFIS) for WANN and WANFIS models, respectively. Based on statistical performance indexes, the WANN and WANFIS models are found to produce better efficiency than the ANN and ANFIS models. WANFIS7-sym10 yields the best performance among all other models. It is found that wavelet decomposition improves the accuracy of ANN and ANFIS. This study evaluates the accuracy of the WANN and WANFIS models for different mother wavelets, including Daubechies, Symmlet and Coiflet wavelets. It is found that the model performance is dependent on input sets and mother wavelets, and the wavelet decomposition using mother wavelet, db10, can further improve the efficiency of ANN and ANFIS models. Results obtained from this study indicate that the conjunction of wavelet decomposition and artificial intelligence models can be a useful tool for accurately forecasting daily water levels and can yield better efficiency than conventional forecasting models.
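
    The preprocessing step can be sketched with a minimal two-level decomposition. The study uses Daubechies/Symmlet/Coiflet mother wavelets; this sketch uses Haar averaging/differencing (and toy numbers) purely to show how the approximation and detail subseries replace the raw series as model inputs:

```python
import numpy as np

def haar_decompose(x):
    a = (x[0::2] + x[1::2]) / 2.0     # approximation (low-pass) subseries
    d = (x[0::2] - x[1::2]) / 2.0     # detail (high-pass) subseries
    return a, d

x = np.array([12., 14., 11., 9., 15., 17., 16., 14.])  # toy water levels
details = []
for _ in range(2):                     # two decomposition levels
    x, d = haar_decompose(x)
    details.append(d)

# Coarse approximation plus all detail levels become the ANN/ANFIS inputs
features = np.concatenate([x] + details[::-1])
print(features.shape)                  # (8,): same length as the input series
```

    Feeding the smooth approximation and the band-limited details separately lets the downstream model fit slow trends and fast fluctuations with different effective weights, which is the usual explanation for the accuracy gain reported here.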

  3. Collocation and Pattern Recognition Effects on System Failure Remediation

    NASA Technical Reports Server (NTRS)

    Trujillo, Anna C.; Press, Hayes N.

    2007-01-01

    Previous research found that operators prefer to have status, alerts, and controls located on the same screen. Unfortunately, that research was done with displays that were not designed specifically for collocation. In this experiment, twelve subjects evaluated two displays specifically designed for collocating system information against a baseline that consisted of dial status displays, a separate alert area, and a controls panel. These displays differed in the amount of collocation, pattern matching, and parameter movement compared to display size. During the data runs, subjects kept a randomly moving target centered on a display using a left-handed joystick while scanning system displays to find a problem and correct it using the provided checklist. Results indicate that large parameter movement aided detection, that pattern recognition was needed for diagnosis, and that the collocated displays centralized all the information subjects needed, which reduced workload. Therefore, the collocated display with large parameter movement may be an acceptable display after familiarization because of the possible pattern recognition developed with training and its use.

  4. Multi-quadric collocation model of horizontal crustal movement

    NASA Astrophysics Data System (ADS)

    Chen, Gang; Zeng, Anmin; Ming, Feng; Jing, Yifan

    2016-05-01

    To establish the horizontal crustal movement velocity field of the Chinese mainland, a Hardy multi-quadric fitting model and collocation are usually used. However, the kernel function, nodes, and smoothing factor are difficult to determine in the Hardy function interpolation. Furthermore, the covariance function of the stochastic signal must be carefully constructed in the collocation model, which is not trivial. In this paper, a new combined estimation method for establishing the velocity field, based on collocation and multi-quadric equation interpolation, is presented. The crustal movement estimation simultaneously takes into consideration an Euler vector as the crustal movement trend and the local distortions as the stochastic signals, and a kernel function of the multi-quadric fitting model substitutes for the covariance function of collocation. The velocities of a set of 1070 reference stations were obtained from the Crustal Movement Observation Network of China, and the corresponding velocity field was established using the new combined estimation method. A total of 85 reference stations were used as checkpoints, and the precision in the north and east components was 1.25 and 0.80 mm yr-1, respectively. The results obtained by the new method are consistent with those of the collocation method and multi-quadric interpolation, without requiring the covariance function for the signals.
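
    The multi-quadric kernel step can be sketched as follows. This is a bare interpolation sketch with made-up station coordinates and a toy linear velocity field; the smoothing factor eps is an assumption, and the paper's Euler-vector trend / stochastic-signal split is simplified away:

```python
import numpy as np

rng = np.random.default_rng(2)
stations = rng.uniform(0.0, 10.0, size=(30, 2))            # station coordinates
v = 1.5 + 0.1 * stations[:, 0] - 0.05 * stations[:, 1]     # toy velocity component

eps = 1.0                                                   # assumed smoothing factor
def mq(a, b):
    # Hardy multi-quadric kernel sqrt(||p - q||^2 + eps^2) between point sets
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.sqrt(d2 + eps ** 2)

c = np.linalg.solve(mq(stations, stations), v)              # interpolation coefficients
checkpoints = rng.uniform(1.0, 9.0, size=(5, 2))
v_pred = mq(checkpoints, stations) @ c
v_true = 1.5 + 0.1 * checkpoints[:, 0] - 0.05 * checkpoints[:, 1]

print(np.allclose(mq(stations, stations) @ c, v))           # True: exact at stations
print(np.max(np.abs(v_pred - v_true)))                      # checkpoint misfit
```

    In the paper's combined estimator, this kernel matrix plays the role that the signal covariance matrix plays in classical collocation, which is why no covariance function needs to be constructed.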

  5. Statistical modelling of collocation uncertainty in atmospheric thermodynamic profiles

    NASA Astrophysics Data System (ADS)

    Fassò, A.; Ignaccolo, R.; Madonna, F.; Demoz, B. B.

    2013-08-01

    The uncertainty of important atmospheric parameters is a key factor for assessing the uncertainty of global change estimates given by numerical prediction models. One of the critical points of the uncertainty budget is related to the collocation mismatch in space and time among different observations. This is particularly important for vertical atmospheric profiles obtained by radiosondes or LIDAR. In this paper we consider a statistical modelling approach to understand to what extent collocation uncertainty is related to environmental factors, height and distance between the trajectories. To do this we introduce a new statistical approach, based on the heteroskedastic functional regression (HFR) model, which extends the standard functional regression approach and allows a natural definition of uncertainty profiles. Moreover, using this modelling approach, a five-fold uncertainty decomposition is proposed. Finally, the HFR approach is illustrated by the collocation uncertainty analysis of relative humidity from two stations involved in the GCOS reference upper-air network (GRUAN).

  6. Application of wavelet analysis in laser Doppler vibration signal denoising

    NASA Astrophysics Data System (ADS)

    Lan, Yu-fei; Xue, Hui-feng; Li, Xin-liang; Liu, Dan

    2010-10-01

    Many experiments show that external disturbances, excessive roughness of the measured surface and other factors leave the vibration signal detected by the laser Doppler technique carrying complex information at low SNR, so that the Doppler frequency shift cannot be measured and the Doppler phase cannot be demodulated. This paper first analyzes the laser Doppler signal model and its features in vibration testing, and then studies the three most commonly used wavelet denoising techniques: the modulus-maxima wavelet denoising method, the spatial correlation denoising method and the wavelet threshold denoising method. We apply the three methods to vibration signals in MATLAB simulations. The processing results show that the wavelet modulus-maxima method has an advantage at low SNR for signals mixed with white noise that contain many singularities; the spatial correlation method is better suited to denoising laser Doppler vibration signals whose noise level is not very high, and has better edge reconstruction capability; and the wavelet threshold method has wide adaptability, computational efficiency and a good denoising effect. Specifically, in the wavelet threshold method, we estimate the original noise variance by the spatial correlation method and use an adaptive threshold, with certain amendments made in practice. Tests show that, compared with conventional threshold denoising, this method extracts the features of the laser Doppler vibration signal more effectively.

  7. Wavelet Approach for Operational Gamma Spectral Peak Detection - Preliminary Assessment

    SciTech Connect


    2012-02-01

    Gamma spectroscopy for radionuclide identification typically involves locating spectral peaks and matching the spectral peaks with known nuclides in the knowledge base or database. Wavelet analysis, due to its ability to fit localized features, offers the potential for automatic detection of spectral peaks. Past studies of wavelet technologies for gamma spectra analysis essentially focused on direct fitting of raw gamma spectra. Although most of those studies demonstrated the potential of peak detection using wavelets, they often failed to produce operational benefits for radiological surveys. This work presents a different approach, with the operational objective being to detect only the nuclides that do not exist in the environment (anomalous nuclides). With this operational objective, the raw-count spectrum collected by a detector is first converted to a count-rate spectrum, followed by background subtraction prior to wavelet analysis. The experimental results suggest that this preprocessing is independent of detector type and background radiation, and is capable of improving the peak detection rates using wavelets. This process opens the door to practical adaptation of wavelet technologies for gamma spectral surveying devices.
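
    The described pipeline can be sketched end to end. The spectrum, background shape, live time, and wavelet width below are all illustrative assumptions; only the order of operations (counts to count rate, background subtraction, then wavelet correlation) comes from the abstract:

```python
import numpy as np

def ricker(width, n):
    # "Mexican hat" wavelet: responds strongly to localized, peak-like features
    t = np.arange(n) - n // 2
    a = t / width
    return (1 - a ** 2) * np.exp(-a ** 2 / 2.0)

channels = np.arange(512)
background = 200.0 * np.exp(-channels / 300.0)                 # assumed background
peak = 80.0 * np.exp(-0.5 * ((channels - 310) / 4.0) ** 2)     # anomalous line
live_time = 60.0                                               # seconds (assumed)
counts = np.random.default_rng(3).poisson(background + peak)

rate = counts / live_time                                      # counts -> count rate
net = rate - background / live_time                            # background subtraction
response = np.convolve(net, ricker(4.0, 41), mode="same")
print(int(np.argmax(response)))                                # near channel 310
```

    Because the background is removed before the wavelet stage, the correlation response is dominated by the anomalous line, which is the preprocessing benefit the abstract reports.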

  8. Comparison of Implicit Collocation Methods for the Heat Equation

    NASA Technical Reports Server (NTRS)

    Kouatchou, Jules; Jezequel, Fabienne; Zukor, Dorothy (Technical Monitor)

    2001-01-01

    We combine a high-order compact finite difference scheme to approximate spatial derivatives and collocation techniques for the time component to numerically solve the two-dimensional heat equation. We use two approaches to implement the collocation methods. The first one is based on an explicit computation of the coefficients of polynomials and the second one relies on differential quadrature. We compare them by studying their merits and analyzing their numerical performance. All our computations, based on parallel algorithms, are carried out on the CRAY SV1.
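
    The differential-quadrature ingredient can be sketched directly. On Chebyshev points, a dense matrix D maps nodal values to nodal derivative values, which is how collocation turns a derivative into algebra; this is the standard Chebyshev differentiation matrix, not the paper's specific implementation:

```python
import numpy as np

def cheb_diff_matrix(n):
    # Chebyshev points and differentiation matrix on [-1, 1]
    x = np.cos(np.pi * np.arange(n + 1) / n)
    c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
    X = np.tile(x, (n + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))
    D -= np.diag(D.sum(axis=1))        # diagonal fixed so rows sum to zero
    return D, x

D, x = cheb_diff_matrix(16)
u = np.exp(x)                          # test function with derivative exp(x)
print(np.max(np.abs(D @ u - np.exp(x))) < 1e-8)   # True: spectral accuracy
```

    Applying such a matrix in the time direction yields the implicit collocation systems compared in the paper.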

  9. Adaptive multiresolution modeling of groundwater flow in heterogeneous porous media

    NASA Astrophysics Data System (ADS)

    Malenica, Luka; Gotovac, Hrvoje; Srzic, Veljko; Andric, Ivo

    2016-04-01

    different temporal lines and local time stepping control. A critical aspect of time integration accuracy is the construction of the spatial stencil for accurate calculation of spatial derivatives. Since the common approach applied for wavelets and splines uses a finite difference operator, we develop here a collocation operator that includes solution values and the differential operator. In this way, the new, improved algorithm is adaptive in space and time, enabling accurate solution of groundwater flow problems, especially in highly heterogeneous porous media with large lnK variances and different correlation length scales. In addition, differences between the collocation and finite volume approaches are discussed. Finally, results show the application of the methodology to groundwater flow problems in highly heterogeneous confined and unconfined aquifers.

  10. Minimal multi-element stochastic collocation for uncertainty quantification of discontinuous functions

    SciTech Connect

    Jakeman, John D.; Narayan, Akil; Xiu, Dongbin

    2013-06-01

    We propose a multi-element stochastic collocation method that can be applied in high-dimensional parameter space for functions with discontinuities lying along manifolds of general geometries. The key feature of the method is that the parameter space is decomposed into multiple elements defined by the discontinuities and thus only the minimal number of elements is utilized. On each of the resulting elements the function is smooth and can be approximated using high-order methods with fast convergence properties. The decomposition strategy is in direct contrast to the traditional multi-element approaches which define the sub-domains by repeated splitting of the axes in the parameter space. Such methods are more prone to the curse-of-dimensionality because of the fast growth of the number of elements caused by the axis based splitting. The present method is a two-step approach. First, a discontinuity detector is used to partition parameter space into disjoint elements in each of which the function is smooth. The detector uses an efficient combination of the high-order polynomial annihilation technique along with adaptive sparse grids, and this allows resolution of general discontinuities with a smaller number of points when the discontinuity manifold is low-dimensional. After partitioning, an adaptive technique based on the least orthogonal interpolant is used to construct a generalized Polynomial Chaos surrogate on each element. The adaptive technique reuses all information from the partitioning and is variance-suppressing. We present numerous numerical examples that illustrate the accuracy, efficiency, and generality of the method. When compared against standard locally-adaptive sparse grid methods, the present method uses far fewer collocation samples and is more accurate.

  11. Collocational Strategies of Arab Learners of English: A Study in Lexical Semantics.

    ERIC Educational Resources Information Center

    Muhammad, Raji Zughoul; Abdul-Fattah, Hussein S.

    Arab learners of English encounter a serious problem with collocational sequences. The present study purports to determine the extent to which university English language majors can use English collocations properly. A two-form translation test of 16 Arabic collocations was administered to both graduate and undergraduate students of English. The…

  12. Beyond Single Words: The Most Frequent Collocations in Spoken English

    ERIC Educational Resources Information Center

    Shin, Dongkwang; Nation, Paul

    2008-01-01

    This study presents a list of the highest frequency collocations of spoken English based on carefully applied criteria. In the literature, more than forty terms have been used for designating multi-word units, which are generally not well defined. To avoid this confusion, six criteria are strictly applied. The ten million word BNC spoken section…

  13. Domain identification in impedance computed tomography by spline collocation method

    NASA Technical Reports Server (NTRS)

    Kojima, Fumio

    1990-01-01

    A method for estimating an unknown domain in elliptic boundary value problems is considered. The problem is formulated as an inverse problem of integral equations of the second kind. A computational method is developed using a spline collocation scheme. The results can be applied to the inverse problem of impedance computed tomography (ICT) for image reconstruction.

  14. Redefining Creativity--Analyzing Definitions, Collocations, and Consequences

    ERIC Educational Resources Information Center

    Kampylis, Panagiotis G.; Valtanen, Juri

    2010-01-01

    How holistically is human creativity defined, investigated, and understood? Until recently, most scientific research on creativity has focused on its positive side. However, creativity might not only be a desirable resource but also be a potential threat. In order to redefine creativity we need to analyze and understand definitions, collocations,…

  15. Evaluation of assumptions in soil moisture triple collocation analysis

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Triple collocation analysis (TCA) enables estimation of error variances for three or more products that retrieve or estimate the same geophysical variable using mutually-independent methods. Several statistical assumptions regarding the statistical nature of errors (e.g., mutual independence and ort...
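    The core covariance-based estimator behind TCA is compact enough to sketch. Assuming three collocated products with zero-mean random errors that are mutually independent and uncorrelated with the truth (the assumptions the abstract refers to), the error variances follow directly from the pairwise sample covariances. The function and variable names below are illustrative, not from a specific TCA package.

```python
import numpy as np

def triple_collocation(x, y, z):
    """Covariance form of triple collocation: estimate the random-error
    variance of each of three collocated measurements of the same
    variable, assuming mutually independent errors uncorrelated with
    the truth."""
    q = np.cov(np.vstack([x, y, z]))      # 3x3 sample covariance matrix
    err_x = q[0, 0] - q[0, 1] * q[0, 2] / q[1, 2]
    err_y = q[1, 1] - q[0, 1] * q[1, 2] / q[0, 2]
    err_z = q[2, 2] - q[0, 2] * q[1, 2] / q[0, 1]
    return err_x, err_y, err_z

# Synthetic check: one "truth" observed by three independently noisy products
rng = np.random.default_rng(0)
truth = rng.standard_normal(100_000)
x = truth + 0.1 * rng.standard_normal(truth.size)
y = truth + 0.2 * rng.standard_normal(truth.size)
z = truth + 0.3 * rng.standard_normal(truth.size)
ex, ey, ez = triple_collocation(x, y, z)
# ex, ey, ez ≈ 0.01, 0.04, 0.09, the prescribed error variances
```

    When the independence or orthogonality assumptions are violated, the cross-covariance terms are biased and so are the recovered error variances, which is exactly what the evaluation in this record examines.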

  16. Beyond triple collocation: Applications to satellite soil moisture

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Triple collocation is now routinely used to resolve the exact (linear) relationships between multiple measurements and/or representations of a geophysical variable that are subject to errors. It has been utilized in the context of calibration, rescaling and error characterisation to allow comparison...

  17. Collocation Method for Numerical Solution of Coupled Nonlinear Schroedinger Equation

    SciTech Connect

    Ismail, M. S.

    2010-09-30

    The coupled nonlinear Schroedinger equation models several interesting physical phenomena and serves as a model equation for optical fibers with linear birefringence. In this paper we use a collocation method to solve this equation and test the resulting scheme for stability and accuracy. Numerical tests using a single soliton and the interaction of three solitons are presented.

  18. Recent advances in (soil moisture) triple collocation analysis

    Technology Transfer Automated Retrieval System (TEKTRAN)

    To date, triple collocation (TC) analysis is one of the most important methods for the global scale evaluation of remotely sensed soil moisture data sets. In this study we review existing implementations of soil moisture TC analysis as well as investigations of the assumptions underlying the method....

  19. Adapt

    NASA Astrophysics Data System (ADS)

    Bargatze, L. F.

    2015-12-01

    Active Data Archive Product Tracking (ADAPT) is a collection of software routines that permits one to generate XML metadata files to describe and register data products in support of the NASA Heliophysics Virtual Observatory VxO effort. ADAPT is also a philosophy. The ADAPT concept is to use any and all available metadata associated with scientific data to produce XML metadata descriptions in a consistent, uniform, and organized fashion, providing blanket access to the full complement of data stored on a targeted data server. In this poster, we present an application of ADAPT to describe all of the data products stored in the Common Data Format (CDF) and served out by the CDAWEB and SPDF data servers hosted at the NASA Goddard Space Flight Center. These data servers are the primary repositories for NASA Heliophysics data. For this purpose, the ADAPT routines have been used to generate data resource descriptions by using an XML schema named Space Physics Archive, Search, and Extract (SPASE). SPASE is the designated standard for documenting Heliophysics data products, as adopted by the Heliophysics Data and Model Consortium. The set of SPASE XML resource descriptions produced by ADAPT includes high-level descriptions of numerical data products, display data products, and catalogs, and also includes low-level "Granule" descriptions. A SPASE Granule is effectively a universal access metadata resource; a Granule associates an individual data file (e.g. a CDF file) with a "parent" high-level data resource description, assigns a resource identifier to the file, and lists the corresponding access URL(s). The CDAWEB and SPDF file systems were queried to provide the input required by the ADAPT software to create an initial set of SPASE metadata resource descriptions. The CDAWEB and SPDF data repositories were then queried nightly and the CDF file lists were checked for any changes such as the occurrence of new, modified, or deleted

  20. L1 Influence on the Acquisition of L2 Collocations: Japanese ESL Users and EFL Learners Acquiring English Collocations

    ERIC Educational Resources Information Center

    Yamashita, Junko; Jiang, Nan

    2010-01-01

    This study investigated first language (L1) influence on the acquisition of second language (L2) collocations using a framework based on Kroll and Stewart (1994) and Jiang (2000), by comparing the performance on a phrase-acceptability judgment task among native speakers of English, Japanese English as a second language (ESL) users, and Japanese…

  1. Temi firthiani di linguistica applicata: "Restricted Languages" e "Collocation" (Firthian Themes in Applied Linguistics: "Restricted Languages" and "Collocation")

    ERIC Educational Resources Information Center

    Leonardi, Magda

    1977-01-01

    Discusses the importance of two Firthian themes for language teaching. The first theme, "Restricted Languages," concerns the "microlanguages" of every language (e.g., literary language, scientific, etc.). The second theme, "Collocation," shows that equivalent words in two languages rarely have the same position in both languages. (Text is in…

  2. Iterative PET Image Reconstruction Using Translation Invariant Wavelet Transform

    PubMed Central

    Zhou, Jian; Senhadji, Lotfi; Coatrieux, Jean-Louis; Luo, Limin

    2009-01-01

    The present work describes a Bayesian maximum a posteriori (MAP) method using a statistical multiscale wavelet prior model. Rather than using the orthogonal discrete wavelet transform (DWT), this prior is built on the translation invariant wavelet transform (TIWT). The statistical modeling of wavelet coefficients relies on the generalized Gaussian distribution. Image reconstruction is performed in spatial domain with a fast block sequential iteration algorithm. We study theoretically the TIWT MAP method by analyzing the Hessian of the prior function to provide some insights on noise and resolution properties of image reconstruction. We adapt the key concept of local shift invariance and explore how the TIWT MAP algorithm behaves with different scales. It is also shown that larger support wavelet filters do not offer better performance in contrast recovery studies. These theoretical developments are confirmed through simulation studies. The results show that the proposed method is more attractive than other MAP methods using either the conventional Gibbs prior or the DWT-based wavelet prior. PMID:21869846

  3. Basis Selection for Wavelet Regression

    NASA Technical Reports Server (NTRS)

    Wheeler, Kevin R.; Lau, Sonie (Technical Monitor)

    1998-01-01

    A wavelet basis selection procedure is presented for wavelet regression. Both the basis and the threshold are selected using cross-validation. The method includes the capability of incorporating prior knowledge on the smoothness (or shape of the basis functions) into the basis selection procedure. The results of the method are demonstrated on sampled functions widely used in the wavelet regression literature. The results of the method are contrasted with other published methods.
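    As a rough illustration of cross-validated basis and threshold selection (not the authors' exact procedure), the sketch below scores each candidate wavelet and threshold by two-fold cross-validation: denoise the even-indexed samples and score against the odd-indexed ones, then swap. It assumes the PyWavelets package; the candidate list and threshold grid are arbitrary choices.

```python
import numpy as np
import pywt

def denoise(y, wavelet, thresh):
    # soft-threshold all detail coefficients of a DWT, then reconstruct
    coeffs = pywt.wavedec(y, wavelet, mode='periodization')
    coeffs = [coeffs[0]] + [pywt.threshold(c, thresh, mode='soft')
                            for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet, mode='periodization')

def select_basis_and_threshold(y, wavelets=('db2', 'db4', 'sym8'), n_thresh=20):
    """Pick (wavelet, threshold) by two-fold cross-validation on the
    even/odd subsampled signal (a simplified wavelet CV scheme)."""
    best = (None, None, np.inf)
    for w in wavelets:
        for t in np.linspace(0.0, np.std(y), n_thresh):
            score = 0.0
            for fold in (0, 1):
                train, test = y[fold::2], y[1 - fold::2]
                rec = denoise(train, w, t)[:len(test)]
                score += np.mean((rec - test) ** 2)
            if score < best[2]:
                best = (w, t, score)
    return best

# Demo on a noisy chirp-like test function
rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 512)
clean = np.sin(4 * np.pi * x ** 2)
y = clean + 0.3 * rng.standard_normal(512)
w, t, _ = select_basis_and_threshold(y)
rec = denoise(y, w, t)
```

    Prior knowledge about smoothness, as in the paper, would correspond to restricting or reweighting the candidate wavelet list before running the cross-validation.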

  4. Wavelets in medical imaging

    NASA Astrophysics Data System (ADS)

    Zahra, Noor e.; Sevindir, Huliya A.; Aslan, Zafar; Siddiqi, A. H.

    2012-07-01

    The aim of this study is to present emerging applications of wavelet methods to medical signals and images, such as electrocardiogram, electroencephalogram, functional magnetic resonance imaging, computed tomography, X-ray and mammography. Interpretation of these signals and images is quite important. Nowadays wavelet methods have a significant impact on the science of medical imaging and on diagnosis and screening protocols. Based on our initial investigations, future directions include neurosurgical planning and improved assessment of risk for individual patients, improved assessment and strategies for the treatment of chronic pain, improved seizure localization, and improved understanding of the physiology of neurological disorders. We look ahead to these and other emerging applications as the benefits of this technology become incorporated into current and future patient care. In this chapter, analysis and denoising of EEG, one of the most important biomedical signals, are carried out by applying the Fourier transform and the wavelet transform. The presence of rhythm, template matching, and correlation is discussed by various methods. The energy of the EEG signal is used to detect seizure in an epileptic patient. We have also performed denoising of EEG signals by SWT.

  5. Wavelets in medical imaging

    SciTech Connect

    Zahra, Noor e; Sevindir, Huliya A.; Aslan, Zafar; Siddiqi, A. H.

    2012-07-17

    The aim of this study is to present emerging applications of wavelet methods to medical signals and images, such as electrocardiogram, electroencephalogram, functional magnetic resonance imaging, computed tomography, X-ray and mammography. Interpretation of these signals and images is quite important. Nowadays wavelet methods have a significant impact on the science of medical imaging and on diagnosis and screening protocols. Based on our initial investigations, future directions include neurosurgical planning and improved assessment of risk for individual patients, improved assessment and strategies for the treatment of chronic pain, improved seizure localization, and improved understanding of the physiology of neurological disorders. We look ahead to these and other emerging applications as the benefits of this technology become incorporated into current and future patient care. In this chapter, analysis and denoising of EEG, one of the most important biomedical signals, are carried out by applying the Fourier transform and the wavelet transform. The presence of rhythm, template matching, and correlation is discussed by various methods. The energy of the EEG signal is used to detect seizure in an epileptic patient. We have also performed denoising of EEG signals by SWT.

  6. Weak transient fault feature extraction based on an optimized Morlet wavelet and kurtosis

    NASA Astrophysics Data System (ADS)

    Qin, Yi; Xing, Jianfeng; Mao, Yongfang

    2016-08-01

    Aimed at solving the key problem in weak transient detection, the present study proposes a new transient feature extraction approach using the optimized Morlet wavelet transform, kurtosis index and soft-thresholding. Firstly, a fast optimization algorithm based on the Shannon entropy is developed to obtain the optimized Morlet wavelet parameter. Compared to the existing Morlet wavelet parameter optimization algorithm, this algorithm has lower computation complexity. After performing the optimized Morlet wavelet transform on the analyzed signal, the kurtosis index is used to select the characteristic scales and obtain the corresponding wavelet coefficients. From the time-frequency distribution of the periodic impulsive signal, it is found that the transient signal can be reconstructed by the wavelet coefficients at several characteristic scales, rather than the wavelet coefficients at just one characteristic scale, so as to improve the accuracy of transient detection. Due to the noise influence on the characteristic wavelet coefficients, the adaptive soft-thresholding method is applied to denoise these coefficients. With the denoised wavelet coefficients, the transient signal can be reconstructed. The proposed method was applied to the analysis of two simulated signals, and the diagnosis of a rolling bearing fault and a gearbox fault. The superiority of the method over the fast kurtogram method was verified by the results of simulation analysis and real experiments. It is concluded that the proposed method is extremely suitable for extracting the periodic impulsive feature from strong background noise.
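    A minimal sketch of the scale-selection idea follows (not the authors' Shannon-entropy parameter optimization or their exact reconstruction): compute a continuous wavelet transform with a standard Morlet wavelet, rank scales by the kurtosis of their coefficients, then soft-threshold and sum the coefficients at the top-ranked scales as a crude proxy for the reconstructed transient. It assumes PyWavelets and SciPy; the signal, amplitudes, and scale range are illustrative.

```python
import numpy as np
import pywt
from scipy.stats import kurtosis

# Simulated periodic impulsive fault signature buried in noise
rng = np.random.default_rng(2)
n = 2048
impulses = np.zeros(n)
impulses[::256] = 10.0                       # impulse train, period 256
noisy = impulses + rng.standard_normal(n)

# CWT with a standard (real) Morlet wavelet over a range of scales
scales = np.arange(1, 33)
coefs, _ = pywt.cwt(noisy, scales, 'morl')

# Kurtosis is largest at scales dominated by the sparse impulses
k = kurtosis(coefs, axis=1)
top = np.argsort(k)[-3:]                     # several characteristic scales

# Soft-threshold those scales' coefficients (MAD-based universal
# threshold) and sum them as a crude stand-in for reconstruction
rec = np.zeros(n)
for i in top:
    c = coefs[i]
    t = np.median(np.abs(c)) / 0.6745 * np.sqrt(2.0 * np.log(n))
    rec += pywt.threshold(c, t, mode='soft')
```

    Using several characteristic scales rather than one, as the abstract argues, keeps impulse energy that any single scale would miss.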

  7. A Chebyshev Collocation Method for Moving Boundaries, Heat Transfer, and Convection During Directional Solidification

    NASA Technical Reports Server (NTRS)

    Zhang, Yiqiang; Alexander, J. I. D.; Ouazzani, J.

    1994-01-01

    Free and moving boundary problems require the simultaneous solution of unknown field variables and the boundaries of the domains on which these variables are defined. There are many technologically important processes that lead to moving boundary problems associated with fluid surfaces and solid-fluid boundaries. These include crystal growth, metal alloy and glass solidification, melting and flame propagation. The directional solidification of semiconductor crystals by the Bridgman-Stockbarger method is a typical example of such a complex process. A numerical model of this growth method must solve the appropriate heat, mass and momentum transfer equations and determine the location of the melt-solid interface. In this work, a Chebyshev pseudospectral collocation method is adapted to the problem of directional solidification. Implementation involves a solution algorithm that combines domain decomposition, a finite-difference preconditioned conjugate minimum residual method, and a Picard-type iterative scheme.
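    The basic machinery of Chebyshev pseudospectral collocation is easy to demonstrate on a model problem. The sketch below builds the standard Chebyshev differentiation matrix (Trefethen's well-known `cheb` construction) and solves a two-point boundary value problem by collocation; it is a toy analogue, not the paper's solidification solver.

```python
import numpy as np

def cheb(n):
    """Chebyshev differentiation matrix D and Chebyshev points x."""
    if n == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(n + 1) / n)
    c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
    X = np.tile(x, (n + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))
    D -= np.diag(D.sum(axis=1))          # fix the diagonal via row sums
    return D, x

# Solve u'' = exp(x), u(-1) = u(1) = 0, by collocation at interior points
n = 16
D, x = cheb(n)
D2 = D @ D
A = D2[1:-1, 1:-1]                       # Dirichlet BCs imposed by deletion
u = np.zeros(n + 1)
u[1:-1] = np.linalg.solve(A, np.exp(x[1:-1]))
exact = np.exp(x) - x * np.sinh(1.0) - np.cosh(1.0)
# spectral accuracy: the error is near machine precision already at n = 16
```

    In the moving-boundary setting the same differentiation matrices are applied on each subdomain of the decomposition, with the interface location entering through the mapping of each subdomain to [-1, 1].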

  8. An iterative finite-element collocation method for parabolic problems using domain decomposition

    SciTech Connect

    Curran, M.C.

    1992-01-01

    Advection-dominated flows occur widely in the transport of groundwater contaminants, the movements of fluids in enhanced oil recovery projects, and many other contexts. In numerical models of such flows, adaptive local grid refinement is a conceptually attractive approach for resolving the sharp fronts or layers that tend to characterize the solutions. However, this approach can be difficult to implement in practice. A domain decomposition method developed by Bramble, Ewing, Pasciak, and Schatz, known as the BEPS method, overcomes many of the difficulties. We demonstrate the applicability of the iterative BEPS ideas to finite-element collocation on trial spaces of piecewise Hermite bicubics. The resulting scheme allows one to refine selected parts of a spatial grid without destroying algebraic efficiencies associated with the original coarse grid. We apply the method to two dimensional time-dependent advection-diffusion problems.

  9. An iterative finite-element collocation method for parabolic problems using domain decomposition

    SciTech Connect

    Curran, M.C.

    1992-11-01

    Advection-dominated flows occur widely in the transport of groundwater contaminants, the movements of fluids in enhanced oil recovery projects, and many other contexts. In numerical models of such flows, adaptive local grid refinement is a conceptually attractive approach for resolving the sharp fronts or layers that tend to characterize the solutions. However, this approach can be difficult to implement in practice. A domain decomposition method developed by Bramble, Ewing, Pasciak, and Schatz, known as the BEPS method, overcomes many of the difficulties. We demonstrate the applicability of the iterative BEPS ideas to finite-element collocation on trial spaces of piecewise Hermite bicubics. The resulting scheme allows one to refine selected parts of a spatial grid without destroying algebraic efficiencies associated with the original coarse grid. We apply the method to two dimensional time-dependent advection-diffusion problems.

  10. A Two-Timescale Discretization Scheme for Collocation

    NASA Technical Reports Server (NTRS)

    Desai, Prasun; Conway, Bruce A.

    2004-01-01

    The development of a two-timescale discretization scheme for collocation is presented. This scheme allows a coarser discretization to be utilized for smoothly varying state variables and a second, finer discretization to be utilized for state variables having higher frequency dynamics. As such, the discretization scheme can be tailored to the dynamics of the particular state variables. In so doing, the size of the overall Nonlinear Programming (NLP) problem can be reduced significantly. Two two-timescale discretization architectures are described. Comparison of results between the two-timescale method and conventional collocation shows very good agreement: differences of less than 0.5 percent are observed. Consequently, a significant reduction (by two-thirds) in the number of NLP parameters and iterations required for convergence can be achieved without sacrificing solution accuracy.

  11. Collocation methods for distillation design. 2: Applications for distillation

    SciTech Connect

    Huss, R.S.; Westerberg, A.W.

    1996-05-01

    The authors present applications of a collocation method for modeling distillation columns that they developed in a companion paper. They discuss implementation of the model, including the ASCEND (Advanced System for Computations in ENgineering Design) system, which enables one to create complex models from simple building blocks and interactively learn to solve them. They first apply the model to compute minimum reflux for a given separation task, solving nonsharp split minimum reflux problems exactly and sharp split problems approximately. They next illustrate the use of the collocation model to optimize the design of a single column capable of carrying out a prescribed set of separation tasks. The optimization picks the best column diameter and total number of trays; it also picks the feed tray for each of the prescribed separations.

  12. Locating CVBEM collocation points for steady state heat transfer problems

    USGS Publications Warehouse

    Hromadka, T.V.

    1985-01-01

    The Complex Variable Boundary Element Method or CVBEM provides a highly accurate means of developing numerical solutions to steady state two-dimensional heat transfer problems. The numerical approach exactly solves the Laplace equation and satisfies the boundary conditions at specified points on the boundary by means of collocation. The accuracy of the approximation depends upon the nodal point distribution specified by the numerical analyst. In order to develop subsequent, refined approximation functions, four techniques for selecting additional collocation points are presented. The techniques are compared as to the governing theory, representation of the error of approximation on the problem boundary, the computational costs, and the ease of use by the numerical analyst. © 1985.

  13. Domain decomposition preconditioners for the spectral collocation method

    NASA Technical Reports Server (NTRS)

    Quarteroni, Alfio; Sacchilandriani, Giovanni

    1988-01-01

    Several block iteration preconditioners are proposed and analyzed for the solution of elliptic problems by spectral collocation methods in a region partitioned into several rectangles. It is shown that convergence is achieved at a rate which does not depend on the polynomial degree of the spectral solution. The iterative methods presented here can be effectively implemented on multiprocessor systems due to their high degree of parallelism.

  14. Mars Mission Optimization Based on Collocation of Resources

    NASA Technical Reports Server (NTRS)

    Chamitoff, G. E.; James, G. H.; Barker, D. C.; Dershowitz, A. L.

    2003-01-01

    This paper presents a powerful approach for analyzing Martian data and for optimizing mission site selection based on resource collocation. This approach is implemented in a program called PROMT (Planetary Resource Optimization and Mapping Tool), which provides a wide range of analysis and display functions that can be applied to raw data or imagery. Thresholds, contours, custom algorithms, and graphical editing are some of the various methods that can be used to process data. Output maps can be created to identify surface regions on Mars that meet any specific criteria. The use of this tool for analyzing data, generating maps, and collocating features is demonstrated using data from the Mars Global Surveyor and the Odyssey spacecraft. The overall mission design objective is to maximize a combination of scientific return and self-sufficiency based on utilization of local materials. Landing site optimization involves maximizing accessibility to collocated science and resource features within a given mission radius. Mission types are categorized according to duration, energy resources, and in-situ resource utilization. Optimization results are shown for a number of mission scenarios.
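    The thresholding-and-collocation idea reduces to a few lines of array code. The sketch below is not PROMT (whose internals are not described here); it thresholds two hypothetical co-registered raster layers and marks every grid cell from which both feature types are reachable within a mission radius, using binary dilation from SciPy. The layer names, thresholds, and radius are invented for illustration.

```python
import numpy as np
from scipy.ndimage import binary_dilation

# Two hypothetical co-registered raster layers (random placeholders for,
# e.g., a resource-abundance map and a science-interest map)
rng = np.random.default_rng(3)
resource = rng.random((200, 200))
science = rng.random((200, 200))

# Threshold each layer to a boolean "meets criteria" mask
res_mask = resource > 0.8
sci_mask = science > 0.8

# Disk-shaped structuring element: everything within the mission radius
radius = 10                                   # in pixels
yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
disk = (xx ** 2 + yy ** 2) <= radius ** 2

# A cell is a candidate landing site if both kinds of features lie
# within the mission radius of that cell
near_res = binary_dilation(res_mask, structure=disk)
near_sci = binary_dilation(sci_mask, structure=disk)
candidate_sites = near_res & near_sci
```

    Weighting the masks before combining them, instead of a hard logical AND, would correspond to the paper's trade-off between scientific return and self-sufficiency.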

  15. Spatial optimum collocation model of urban land and its algorithm

    NASA Astrophysics Data System (ADS)

    Kong, Xiangqiang; Li, Xinyun

    2007-06-01

    Optimal allocation of urban land lays out and positions the various types of land use in space so as to maximize the overall benefits of urban space (economic, social, and environmental) using appropriate methods and techniques. Two problems must be dealt with in optimizing the allocation of urban land: one is the quantitative structure, the other is the spatial structure. To address these problems, and following the principle of spatial coordination, a new optimum collocation model for urban land is put forward in this paper. The model consists of an objective function and a set of "soft" constraint conditions, in which the area proportions of the various land-use types are restricted to their allowed ranges. A spatial genetic algorithm, whose three basic operations of reproduction, crossover and mutation all operate on the spatial layout, is used to search the space of urban land configurations and gradually approach the optimum spatial collocation scheme. Taking the built-up areas of Jinan as an example, we carried out a spatial optimum collocation experiment for urban land; the spatial aggregation of the various land-use types is good, and satisfactory results were obtained.

  16. A Collocation Method for Numerical Solutions of Coupled Burgers' Equations

    NASA Astrophysics Data System (ADS)

    Mittal, R. C.; Tripathi, A.

    2014-09-01

    In this paper, we propose a collocation-based numerical scheme to obtain approximate solutions of coupled Burgers' equations. The scheme employs collocation of modified cubic B-spline functions. We have used modified cubic B-spline functions for the unknown dependent variables u and v and their derivatives with respect to the space variable x. Collocation forms of the partial differential equations result in systems of first-order ordinary differential equations (ODEs). In this scheme, we did not use any transformation or linearization method to handle the nonlinearity. The resulting system of ODEs has been solved by a strong-stability-preserving Runge-Kutta method. The proposed scheme needs less storage space and execution time. Test problems considered in the literature are discussed to demonstrate the strength and utility of the scheme. The computed numerical solutions are in good agreement with the exact solutions and compare well with those available in earlier studies. The scheme is simple as well as easy to implement, and it provides approximate solutions not only at the grid points but at any point in the solution range.
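    A heavily simplified sketch of the method-of-lines idea behind such schemes: for the single (not coupled) Burgers' equation, spatial derivatives at the collocation points are taken from an interpolating cubic spline (SciPy's `CubicSpline`, rather than the paper's modified cubic B-spline basis), and the resulting ODE system is advanced with the Shu-Osher strong-stability-preserving RK3 scheme. Grid and parameters are illustrative.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Burgers' equation u_t + u u_x = nu * u_xx on [0, 1], periodic BCs
def rhs(u, x, nu):
    u = u.copy()
    u[-1] = u[0]                          # keep the periodic seam exact
    cs = CubicSpline(x, u, bc_type='periodic')
    return -u * cs(x, 1) + nu * cs(x, 2)  # spline first and second derivatives

def ssp_rk3_step(u, dt, x, nu):
    # Shu-Osher strong-stability-preserving third-order Runge-Kutta
    u1 = u + dt * rhs(u, x, nu)
    u2 = 0.75 * u + 0.25 * (u1 + dt * rhs(u1, x, nu))
    return u / 3.0 + (2.0 / 3.0) * (u2 + dt * rhs(u2, x, nu))

x = np.linspace(0.0, 1.0, 129)
u = np.sin(2 * np.pi * x)                 # smooth initial profile
nu, dt = 0.01, 2e-4
for _ in range(500):                      # advance to t = 0.1
    u = ssp_rk3_step(u, dt, x, nu)
```

    Because the spline is defined everywhere, the solution can be evaluated between grid points as well, mirroring the last property claimed in the abstract.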

  17. Pseudospectral collocation methods for fourth order differential equations

    NASA Technical Reports Server (NTRS)

    Malek, Alaeddin; Phillips, Timothy N.

    1994-01-01

    Collocation schemes are presented for solving linear fourth order differential equations in one and two dimensions. The variational formulation of the model fourth order problem is discretized by approximating the integrals by a Gaussian quadrature rule generalized to include the values of the derivative of the integrand at the boundary points. Collocation schemes are derived which are equivalent to this discrete variational problem. An efficient preconditioner based on a low-order finite difference approximation to the same differential operator is presented. The corresponding multidomain problem is also considered and interface conditions are derived. Pseudospectral approximations which are C1 continuous at the interfaces are used in each subdomain to approximate the solution. The approximations are also shown to be C3 continuous at the interfaces asymptotically. A complete analysis of the collocation scheme for the multidomain problem is provided. The extension of the method to the biharmonic equation in two dimensions is discussed and results are presented for a problem defined in a nonrectangular domain.

  18. A Corpus-Based Study of the Linguistic Features and Processes Which Influence the Way Collocations Are Formed: Some Implications for the Learning of Collocations

    ERIC Educational Resources Information Center

    Walker, Crayton Phillip

    2011-01-01

    In this article I examine the collocational behaviour of groups of semantically related verbs (e.g., "head, run, manage") and nouns (e.g., "issue, factor, aspect") from the domain of business English. The results of this corpus-based study show that much of the collocational behaviour exhibited by these lexical items can be explained by examining…

  19. Market turning points forecasting using wavelet analysis

    NASA Astrophysics Data System (ADS)

    Bai, Limiao; Yan, Sen; Zheng, Xiaolian; Chen, Ben M.

    2015-11-01

    Based on the system adaptation framework we previously proposed, a frequency domain based model is developed in this paper to forecast the major turning points of stock markets. This system adaptation framework has its internal model and adaptive filter to capture the slow and fast dynamics of the market, respectively. The residue of the internal model is found to contain rich information about the market cycles. In order to extract and restore its informative frequency components, we use wavelet multi-resolution analysis with time-varying parameters to decompose this internal residue. An empirical index is then proposed based on the recovered signals to forecast the market turning points. This index is successfully applied to US, UK and China markets, where all major turning points are well forecasted.
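    The decomposition step can be sketched with an off-the-shelf discrete wavelet multiresolution analysis (the paper uses time-varying parameters and a proposed forecasting index, neither reproduced here). Assuming PyWavelets, the helper below splits a series into additive per-scale components, from which a slow cycle component can be read off.

```python
import numpy as np
import pywt

def wavelet_bands(series, wavelet='db4', level=5):
    """Split a series into additive multiresolution components
    (smooth trend plus one component per detail level); their sum
    reconstructs the input exactly."""
    coeffs = pywt.wavedec(series, wavelet, level=level, mode='periodization')
    bands = []
    for i in range(len(coeffs)):
        keep = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
        bands.append(pywt.waverec(keep, wavelet, mode='periodization'))
    return bands   # bands[0] = trend; bands[1:] = details, coarse to fine

# Synthetic "internal-model residue": slow cycle + fast cycle + noise
rng = np.random.default_rng(4)
t = np.arange(1024)
slow = np.sin(2 * np.pi * t / 256)
fast = 0.5 * np.sin(2 * np.pi * t / 16)
residue = slow + fast + 0.1 * rng.standard_normal(t.size)

bands = wavelet_bands(residue)
cycle = bands[0]    # the coarse band tracks the slow cycle component
```

    Turning points of the slow component are then candidate market turning points; the paper's empirical index is built on such recovered signals.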

  20. Wavelet-Based Grid Generation

    NASA Technical Reports Server (NTRS)

    Jameson, Leland

    1996-01-01

    Wavelets can provide a basis set in which the basis functions are constructed by dilating and translating a fixed function known as the mother wavelet. The mother wavelet can be seen as a high pass filter in the frequency domain. The process of dilating and expanding this high-pass filter can be seen as altering the frequency range that is 'passed' or detected. The process of translation moves this high-pass filter throughout the domain, thereby providing a mechanism to detect the frequencies or scales of information at every location. This is exactly the type of information that is needed for effective grid generation. This paper provides motivation to use wavelets for grid generation in addition to providing the final product: source code for wavelet-based grid generation.
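    The idea can be sketched in a few lines of Python (the paper's own source code is not reproduced here): take a DWT of function samples, spread the detail-coefficient magnitudes back onto the sample locations, and use the result as a grid-density indicator that peaks where fine-scale information lives. Assumes PyWavelets; the wavelet, level, and test function are arbitrary.

```python
import numpy as np
import pywt

def grid_density(f_vals, wavelet='db2', level=4):
    """Crude grid-density indicator: large detail coefficients at any
    scale mark locations where the sampled function has fine-scale
    content and therefore needs a denser grid."""
    coeffs = pywt.wavedec(f_vals, wavelet, level=level, mode='periodization')
    n = len(f_vals)
    indicator = np.zeros(n)
    for d in coeffs[1:]:                    # detail levels, coarse to fine
        # spread each coefficient's magnitude over the samples it covers
        indicator += np.repeat(np.abs(d), n // len(d))
    return indicator / indicator.max()

# A smooth function with one localized fine-scale feature
x = np.linspace(-1.0, 1.0, 512)
f = np.exp(-200.0 * x ** 2)                 # narrow bump at x = 0
dens = grid_density(f)
# dens peaks near the bump, where a generated grid should cluster points
```

    Mapping `dens` to a local grid spacing (small spacing where the indicator is large) then yields the wavelet-informed grid the paper advocates.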

  1. A generalized wavelet extrema representation

    SciTech Connect

    Lu, Jian; Lades, M.

    1995-10-01

    The wavelet extrema representation originated by Stephane Mallat is a unique framework for low-level and intermediate-level (feature) processing. In this paper, we present a new form of wavelet extrema representation generalizing Mallat's original work. The generalized wavelet extrema representation is a feature-based multiscale representation. For a particular choice of wavelet, our scheme can be interpreted as representing a signal or image by its edges, peaks, and valleys at multiple scales. Such a representation is shown to be stable: the original signal or image can be reconstructed with very good quality. It is further shown that a signal or image can be modeled as piecewise monotonic, with all turning points between monotonic segments given by the wavelet extrema. A new projection operator is introduced to enforce piecewise monotonicity of a signal in its reconstruction. This leads to an enhancement of previously developed algorithms in preventing artifacts in the reconstructed signal.

  2. Finite element-wavelet hybrid algorithm for atmospheric tomography.

    PubMed

    Yudytskiy, Mykhaylo; Helin, Tapio; Ramlau, Ronny

    2014-03-01

    Reconstruction of the refractive index fluctuations in the atmosphere, or atmospheric tomography, is an underlying problem of many next generation adaptive optics (AO) systems, such as the multiconjugate adaptive optics or multiobject adaptive optics (MOAO). The dimension of the problem for the extremely large telescopes, such as the European Extremely Large Telescope (E-ELT), suggests the use of iterative schemes as an alternative to the matrix-vector multiply (MVM) methods. Recently, an algorithm based on the wavelet representation of the turbulence has been introduced in [Inverse Probl. 29, 085003 (2013)] by the authors to solve the atmospheric tomography using the conjugate gradient iteration. The authors also developed an efficient frequency-dependent preconditioner for the wavelet method in a later work. In this paper we study the computational aspects of the wavelet algorithm. We introduce three new techniques, the dual domain discretization strategy, a scale-dependent preconditioner, and a ground layer multiscale method, to derive a method that is globally O(n), parallelizable, and compact with respect to memory. We present the computational cost estimates and compare the theoretical numerical performance of the resulting finite element-wavelet hybrid algorithm with the MVM. The quality of the method is evaluated in terms of an MOAO simulation for the E-ELT on the European Southern Observatory (ESO) end-to-end simulation system OCTOPUS. The method is compared to the ESO version of the Fractal Iterative Method [Proc. SPIE 7736, 77360X (2010)] in terms of quality.

  4. Wavelet transform based on the optimal wavelet pairs for tunable diode laser absorption spectroscopy signal processing.

    PubMed

    Li, Jingsong; Yu, Benli; Fischer, Horst

    2015-04-01

    This paper presents a novel methodology based on the discrete wavelet transform (DWT) and the choice of optimal wavelet pairs to adaptively process tunable diode laser absorption spectroscopy (TDLAS) spectra for quantitative analysis, such as molecular spectroscopy and trace gas detection. The proposed methodology aims to construct an optimal calibration model for a TDLAS spectrum, regardless of its background structural characteristics, thus facilitating the application of TDLAS as a powerful tool for analytical chemistry. The performance of the proposed method is verified by analysis of both synthetic and observed signals, characterized by different noise levels and baseline drift. In terms of fitting precision and signal-to-noise ratio, both have been improved significantly using the proposed method.
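
    The basic DWT-thresholding step that such a methodology builds on can be sketched with a single-level Haar transform and soft thresholding. The wavelet choice, the threshold value, and the synthetic "absorption peak" below are assumptions for illustration, not the paper's optimal wavelet pairs.

```python
import numpy as np

def haar_dwt(s):
    s = np.asarray(s, dtype=float)
    a = (s[0::2] + s[1::2]) / np.sqrt(2.0)   # approximation coefficients
    d = (s[0::2] - s[1::2]) / np.sqrt(2.0)   # detail coefficients
    return a, d

def haar_idwt(a, d):
    out = np.empty(2 * len(a))
    out[0::2] = (a + d) / np.sqrt(2.0)
    out[1::2] = (a - d) / np.sqrt(2.0)
    return out

def soft_threshold(d, t):
    return np.sign(d) * np.maximum(np.abs(d) - t, 0.0)

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 256)
clean = np.exp(-0.5 * ((t - 0.5) / 0.05) ** 2)     # absorption-peak stand-in
noisy = clean + 0.05 * rng.standard_normal(t.size)
a, d = haar_dwt(noisy)
denoised = haar_idwt(a, soft_threshold(d, 0.1))
print(np.mean((denoised - clean) ** 2) < np.mean((noisy - clean) ** 2))
```

    Thresholding the detail band removes most of the high-frequency noise while the smooth peak, carried mainly by the approximation band, survives; a practical method iterates this over several levels.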

  5. Spectral Laplace-Beltrami wavelets with applications in medical images.

    PubMed

    Tan, Mingzhen; Qiu, Anqi

    2015-05-01

    The spectral graph wavelet transform (SGWT) has recently been developed to compute wavelet transforms of functions defined on non-Euclidean spaces such as graphs. By capitalizing on the established framework of the SGWT, we adopt a fast and efficient computation of a discretized Laplace-Beltrami (LB) operator that allows its extension from arbitrary graphs to differentiable and closed 2-D manifolds (smooth surfaces embedded in the 3-D Euclidean space). This particular class of manifolds is widely used in bioimaging to characterize the morphology of cells, tissues, and organs. They are often discretized into triangular meshes, providing additional geometric information apart from simple nodes and weighted connections in graphs. In comparison with the SGWT, the wavelet bases constructed with the LB operator are spatially localized with a more uniform "spread" with respect to underlying curvature of the surface. In our experiments, we first use synthetic data to show that traditional applications of wavelets in smoothing and edge detection can be done using the wavelet bases constructed with the LB operator. Second, we show that multi-resolutional capabilities of the proposed framework are applicable in the classification of Alzheimer's patients versus normal subjects using hippocampal shapes. Wavelet transforms of the hippocampal shape deformations at finer resolutions registered higher sensitivity (96%) and specificity (90%) than the classification results obtained from the direct usage of hippocampal shape deformations. In addition, the Laplace-Beltrami method requires consistently a smaller number of principal components (to retain a fixed variance) at higher resolution as compared to the binary and weighted graph Laplacians, demonstrating the potential of the wavelet bases in adapting to the geometry of the underlying manifold.
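
    The SGWT construction that the paper extends can be illustrated on an ordinary graph: a wavelet operator is obtained by applying a band-pass kernel g to the eigenvalues of the graph Laplacian. The kernel g(s*lam) = s*lam*exp(-s*lam) and the path graph below are illustrative choices, not the paper's LB discretization.

```python
import numpy as np

def graph_wavelet_operator(W, scale):
    """Wavelet operator on a graph: apply a band-pass kernel g to the
    eigenvalues of the combinatorial graph Laplacian L = D - W."""
    L = np.diag(W.sum(axis=1)) - W
    lam, U = np.linalg.eigh(L)
    g = scale * lam * np.exp(-scale * lam)   # band-pass kernel with g(0) = 0
    return U @ np.diag(g) @ U.T

# Path graph on 6 nodes; column j of Psi is the wavelet centered at node j.
W = np.zeros((6, 6))
for i in range(5):
    W[i, i + 1] = W[i + 1, i] = 1.0
Psi = graph_wavelet_operator(W, scale=1.0)
print(np.allclose(Psi.sum(axis=0), 0.0))  # zero mean, since g(0) = 0
```

    Because g vanishes at the zero eigenvalue, every column has zero mean, which is exactly the admissibility property expected of a wavelet; replacing the graph Laplacian with a discretized LB operator on a triangular mesh is the extension the paper develops.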

  6. A Wavelet-Based Methodology for Grinding Wheel Condition Monitoring

    SciTech Connect

    Liao, T. W.; Ting, C.F.; Qu, Jun; Blau, Peter Julian

    2007-01-01

    Grinding wheel surface condition changes as more material is removed. This paper presents a wavelet-based methodology for grinding wheel condition monitoring based on acoustic emission (AE) signals. Grinding experiments in creep feed mode were conducted to grind alumina specimens with a resinoid-bonded diamond wheel using two different conditions. During the experiments, AE signals were collected when the wheel was 'sharp' and when the wheel was 'dull'. Discriminant features were then extracted from each raw AE signal segment using the discrete wavelet decomposition procedure. An adaptive genetic clustering algorithm was finally applied to the extracted features in order to distinguish different states of grinding wheel condition. The test results indicate that the proposed methodology can achieve 97% clustering accuracy for the high material removal rate condition, 86.7% for the low material removal rate condition, and 76.7% for the combined grinding conditions if the base wavelet, the decomposition level, and the GA parameters are properly selected.
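
    A typical discriminant feature vector of the kind described -- relative energy per wavelet decomposition level -- can be sketched as follows. The Haar wavelet, the four levels, and the synthetic "sharp"/"dull" stand-in signals are assumptions for illustration, not the paper's AE data or base-wavelet selection.

```python
import numpy as np

def haar_level(s):
    a = (s[0::2] + s[1::2]) / np.sqrt(2.0)
    d = (s[0::2] - s[1::2]) / np.sqrt(2.0)
    return a, d

def wavelet_energy_features(signal, levels=4):
    """Relative energy carried by each detail band plus the final
    approximation band: a common feature vector for AE signals."""
    a = np.asarray(signal, dtype=float)
    energies = []
    for _ in range(levels):
        a, d = haar_level(a)
        energies.append(np.sum(d ** 2))
    energies.append(np.sum(a ** 2))
    e = np.array(energies)
    return e / e.sum()

rng = np.random.default_rng(2)
n = 1024
t = np.arange(n)
sharp = np.sin(2 * np.pi * 0.25 * t) + 0.1 * rng.standard_normal(n)  # high-frequency content
dull = np.sin(2 * np.pi * 0.01 * t) + 0.1 * rng.standard_normal(n)   # low-frequency content
print(wavelet_energy_features(sharp)[0] > wavelet_energy_features(dull)[0])
```

    Feature vectors of this kind are what the clustering stage (here, the paper's adaptive genetic algorithm) would then separate into "sharp" and "dull" wheel states.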

  7. Wavelet Leaders: A new method to estimate the multifractal singularity spectra

    NASA Astrophysics Data System (ADS)

    Serrano, E.; Figliola, A.

    2009-07-01

    Wavelet Leaders is a novel wavelet-based alternative for estimating the multifractal spectrum. It was proposed by Jaffard and co-workers as an improvement on the usual wavelet methods. In this work, we analyze and compare it with the well-known Multifractal Detrended Fluctuation Analysis, a comprehensible and well-adapted method for natural and weakly stationary signals. Wavelet Leaders, by contrast, exploits the self-similarity structure of the wavelet coefficients combined with the Multiresolution Analysis scheme. We give a brief introduction to the multifractal formalism and the particular implementation of the above methods, and we compare their effectiveness. We examine several cases: Cantor measures, binomial multiplicative cascades, and natural series from a tonic-clonic epileptic seizure. We analyze the results and draw conclusions.

  8. An Introduction to Wavelet Theory and Analysis

    SciTech Connect

    Miner, N.E.

    1998-10-01

    This report reviews the history, theory and mathematics of wavelet analysis. Examination of the Fourier Transform and Short-time Fourier Transform methods provides information about the evolution of the wavelet analysis technique. This overview is intended to provide readers with a basic understanding of wavelet analysis, define common wavelet terminology and describe wavelet analysis algorithms. The most common algorithms for performing efficient, discrete wavelet transforms for signal analysis and inverse discrete wavelet transforms for signal reconstruction are presented. This report is intended to be approachable by non-mathematicians, although a basic understanding of engineering mathematics is necessary.
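
    The "most common algorithm" referred to -- a discrete wavelet transform realized as a decimating filter bank, with an inverse transform for reconstruction -- can be sketched with the Daubechies-2 filter and periodic boundary handling. The filter values are the standard published ones; the function names are illustrative.

```python
import numpy as np

SQ3 = np.sqrt(3.0)
# Daubechies-2 low-pass filter and its quadrature mirror high-pass filter,
# g[n] = (-1)^n h[L-1-n].
H = np.array([1 + SQ3, 3 + SQ3, 3 - SQ3, 1 - SQ3]) / (4 * np.sqrt(2.0))
G = np.array([H[3], -H[2], H[1], -H[0]])

def dwt_periodic(s, h=H, g=G):
    """One level of an orthogonal DWT with periodic boundary handling:
    correlate with each filter and keep every second output."""
    s = np.asarray(s, dtype=float)
    N, L = len(s), len(h)
    idx = (2 * np.arange(N // 2)[:, None] + np.arange(L)[None, :]) % N
    return s[idx] @ h, s[idx] @ g

def idwt_periodic(a, d, h=H, g=G):
    """Inverse transform: the adjoint of the (orthogonal) analysis operator."""
    N, L = 2 * len(a), len(h)
    s = np.zeros(N)
    for k in range(len(a)):
        pos = (2 * k + np.arange(L)) % N
        s[pos] += a[k] * h + d[k] * g
    return s

x = np.random.default_rng(3).standard_normal(64)
a, d = dwt_periodic(x)
print(np.allclose(idwt_periodic(a, d), x))  # perfect reconstruction
```

    Because the shifted filters form an orthonormal basis, synthesis is simply the adjoint of analysis, which is why the round trip reconstructs the signal exactly.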

  9. Improved statistical models for limited datasets in uncertainty quantification using stochastic collocation

    SciTech Connect

    Alwan, Aravind; Aluru, N.R.

    2013-12-15

    This paper presents a data-driven framework for performing uncertainty quantification (UQ) by choosing a stochastic model that accurately describes the sources of uncertainty in a system. This model is propagated through an appropriate response surface function that approximates the behavior of this system using stochastic collocation. Given a sample of data describing the uncertainty in the inputs, our goal is to estimate a probability density function (PDF) using the kernel moment matching (KMM) method so that this PDF can be used to accurately reproduce statistics like mean and variance of the response surface function. Instead of constraining the PDF to be optimal for a particular response function, we show that we can use the properties of stochastic collocation to make the estimated PDF optimal for a wide variety of response functions. We contrast this method with other traditional procedures that rely on the Maximum Likelihood approach, like kernel density estimation (KDE) and its adaptive modification (AKDE). We argue that this modified KMM method tries to preserve what is known from the given data and is the better approach when the available data is limited in quantity. We test the performance of these methods for both univariate and multivariate density estimation by sampling random datasets from known PDFs and then measuring the accuracy of the estimated PDFs, using the known PDF as a reference. Comparing the output mean and variance estimated with the empirical moments using the raw data sample as well as the actual moments using the known PDF, we show that the KMM method performs better than KDE and AKDE in predicting these moments with greater accuracy. This improvement in accuracy is also demonstrated for the case of UQ in electrostatic and electrothermomechanical microactuators. We show how our framework results in the accurate computation of statistics in micromechanical systems.
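
    The stochastic-collocation step that propagates an input PDF through a response surface can be sketched for the simplest case: a Gaussian input and Gauss-Hermite collocation nodes. This illustrates only the propagation mechanism; the paper's KMM density estimation is not reproduced, and the function names are illustrative.

```python
import numpy as np

def collocation_stats(response, mu, sigma, n_points=10):
    """Mean and variance of response(X) for X ~ N(mu, sigma^2), computed by
    evaluating the response only at Gauss-Hermite collocation nodes."""
    # Probabilists' Hermite quadrature (weight exp(-x^2/2)).
    nodes, weights = np.polynomial.hermite_e.hermegauss(n_points)
    w = weights / weights.sum()        # normalize to a probability weighting
    y = response(mu + sigma * nodes)
    mean = np.sum(w * y)
    var = np.sum(w * (y - mean) ** 2)
    return mean, var

# For X ~ N(0, 1): E[X^2] = 1 and Var[X^2] = 2, recovered exactly because
# the integrand is polynomial and the quadrature is exact to degree 2n-1.
mean, var = collocation_stats(lambda x: x ** 2, mu=0.0, sigma=1.0)
print(mean, var)
```

    The point the paper exploits is that the output statistics depend on the input PDF only through such weighted sums, so an estimated PDF can be tuned to reproduce them for a whole family of response functions.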

  10. Fourier analysis of finite element preconditioned collocation schemes

    NASA Technical Reports Server (NTRS)

    Deville, Michel O.; Mund, Ernest H.

    1990-01-01

    The spectrum of the iteration operator of some finite element preconditioned Fourier collocation schemes is investigated. The first part of the paper analyses one-dimensional elliptic and hyperbolic model problems and the advection-diffusion equation. Analytical expressions of the eigenvalues are obtained with use of symbolic computation. The second part of the paper considers the set of one-dimensional differential equations resulting from Fourier analysis (in the transverse direction) of the 2-D Stokes problem. All results agree with previous conclusions on the numerical efficiency of finite element preconditioning schemes.

  11. Simplex-stochastic collocation method with improved scalability

    NASA Astrophysics Data System (ADS)

    Edeling, W. N.; Dwight, R. P.; Cinnella, P.

    2016-04-01

    The Simplex-Stochastic Collocation (SSC) method is a robust tool used to propagate uncertain input distributions through a computer code. However, it becomes prohibitively expensive for problems with dimensions higher than 5. The main purpose of this paper is to identify the bottlenecks and to improve upon this poor scalability. In order to do so, we propose an alternative interpolation stencil technique based upon the Set-Covering problem, and we integrate the SSC method in the High-Dimensional Model-Reduction framework. In addition, we address the issue of ill-conditioned sample matrices, and we present an analytical map to facilitate uniformly-distributed simplex sampling.

  12. Peak finding using biorthogonal wavelets

    SciTech Connect

    Tan, C.Y.

    2000-02-01

    The authors show in this paper how they can find the peaks in the input data if the underlying signal is a sum of Lorentzians. In order to project the data into a space of Lorentzian like functions, they show explicitly the construction of scaling functions which look like Lorentzians. From this construction, they can calculate the biorthogonal filter coefficients for both the analysis and synthesis functions. They then compare their biorthogonal wavelets to the FBI (Federal Bureau of Investigations) wavelets when used for peak finding in noisy data. They will show that in this instance, their filters perform much better than the FBI wavelets.
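
    The biorthogonal construction itself is not reproduced here, but the underlying idea -- projecting the data onto Lorentzian-like functions to find peaks -- can be approximated by a matched filter: correlate with a unit-norm Lorentzian template and keep thresholded local maxima of the score. All parameters below are illustrative assumptions.

```python
import numpy as np

def lorentzian(x, x0, gamma):
    return gamma ** 2 / ((x - x0) ** 2 + gamma ** 2)

rng = np.random.default_rng(4)
x = np.linspace(0.0, 10.0, 1000)
signal = lorentzian(x, 3.0, 0.2) + 0.6 * lorentzian(x, 7.0, 0.3)
noisy = signal + 0.05 * rng.standard_normal(x.size)

# Correlate with a unit-norm Lorentzian template (a matched filter), then
# keep local maxima of the score above half its global maximum.
template = lorentzian(np.linspace(-1.0, 1.0, 201), 0.0, 0.2)
template /= np.linalg.norm(template)
score = np.convolve(noisy, template[::-1], mode="same")
thresh = 0.5 * score.max()
peaks = [i for i in range(1, len(score) - 1)
         if score[i - 1] < score[i] >= score[i + 1] and score[i] > thresh]
found = sorted({round(x[i], 1) for i in peaks})
print(found)
```

    Correlating with the template strongly suppresses white noise, which is the same reason a wavelet basis shaped like the expected peak outperforms a generic one on this task.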

  13. Why are wavelets so effective

    SciTech Connect

    Resnikoff, H.L.

    1993-01-01

    The theory of compactly supported wavelets is now 4 yr old. In that short period, it has stimulated significant research in pure mathematics; has been the source of new numerical methods for the solution of nonlinear partial differential equations, including Navier-Stokes; and has been applied to digital signal-processing problems, ranging from signal detection and classification to signal compression for speech, audio, images, seismic signals, and sonar. Wavelet channel coding has even been proposed for code division multiple access digital telephony. In each of these applications, prototype wavelet solutions have proved to be competitive with established methods, and in many cases they are already superior.

  14. Birdsong Denoising Using Wavelets

    PubMed Central

    Priyadarshani, Nirosha; Marsland, Stephen; Castro, Isabel; Punchihewa, Amal

    2016-01-01

    Automatic recording of birdsong is becoming the preferred way to monitor and quantify bird populations worldwide. Programmable recorders allow recordings to be obtained at all times of day and year for extended periods of time. Consequently, there is a critical need for robust automated birdsong recognition. One prominent obstacle to achieving this is low signal to noise ratio in unattended recordings. Field recordings are often very noisy: birdsong is only one component in a recording, which also includes noise from the environment (such as wind and rain), other animals (including insects), and human-related activities, as well as noise from the recorder itself. We describe a method of denoising using a combination of the wavelet packet decomposition and band-pass or low-pass filtering, and present experiments that demonstrate an order of magnitude improvement in noise reduction over natural noisy bird recordings. PMID:26812391

  15. Wavelet entropy of stochastic processes

    NASA Astrophysics Data System (ADS)

    Zunino, L.; Pérez, D. G.; Garavaglia, M.; Rosso, O. A.

    2007-06-01

    We compare two different definitions for the wavelet entropy associated to stochastic processes. The first one, the normalized total wavelet entropy (NTWS) family [S. Blanco, A. Figliola, R.Q. Quiroga, O.A. Rosso, E. Serrano, Time-frequency analysis of electroencephalogram series, III. Wavelet packets and information cost function, Phys. Rev. E 57 (1998) 932-940; O.A. Rosso, S. Blanco, J. Yordanova, V. Kolev, A. Figliola, M. Schürmann, E. Başar, Wavelet entropy: a new tool for analysis of short duration brain electrical signals, J. Neurosci. Method 105 (2001) 65-75] and a second introduced by Tavares and Lucena [Physica A 357(1) (2005) 71-78]. In order to understand their advantages and disadvantages, exact results obtained for fractional Gaussian noise (-1 < α < 1) and fractional Brownian motion (1 < α < 3) are assessed. We find that the NTWS family performs better as a characterization method for these stochastic processes.
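
    The NTWS definition compared in the paper can be sketched as the Shannon entropy of the relative wavelet energy per decomposition level, scaled into [0, 1]. The Haar transform and six-level depth used below are illustrative choices.

```python
import numpy as np

def detail_energies(signal, levels):
    """Energy of the Haar detail coefficients at each decomposition level."""
    a = np.asarray(signal, dtype=float)
    energies = []
    for _ in range(levels):
        d = (a[0::2] - a[1::2]) / np.sqrt(2.0)
        a = (a[0::2] + a[1::2]) / np.sqrt(2.0)
        energies.append(np.sum(d ** 2))
    return np.array(energies)

def ntws(signal, levels=6):
    """Normalized total wavelet entropy: Shannon entropy of the relative
    energy distribution across levels, normalized by log(levels)."""
    p = detail_energies(signal, levels)
    p = p / p.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)) / np.log(levels))

rng = np.random.default_rng(5)
white = rng.standard_normal(4096)                   # energy spread over all scales
tone = np.sin(2 * np.pi * 0.25 * np.arange(4096))   # energy confined to few scales
print(ntws(white) > ntws(tone))
```

    A broadband process spreads energy across levels and so has high entropy; a near-periodic signal concentrates it in a few levels and scores low, which is the discriminating behavior the entropy families are designed to capture.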

  16. Wavelet theory and its applications

    SciTech Connect

    Faber, V.; Bradley, JJ.; Brislawn, C.; Dougherty, R.; Hawrylycz, M.

    1996-07-01

    This is the final report of a three-year, Laboratory-Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). We investigated the theory of wavelet transforms and their relation to Laboratory applications. The investigators have had considerable success in the past applying wavelet techniques to the numerical solution of optimal control problems for distributed- parameter systems, nonlinear signal estimation, and compression of digital imagery and multidimensional data. Wavelet theory involves ideas from the fields of harmonic analysis, numerical linear algebra, digital signal processing, approximation theory, and numerical analysis, and the new computational tools arising from wavelet theory are proving to be ideal for many Laboratory applications. 10 refs.

  17. The wavelet/scalar quantization compression standard for digital fingerprint images

    SciTech Connect

    Bradley, J.N.; Brislawn, C.M.

    1994-04-01

    A new digital image compression standard has been adopted by the US Federal Bureau of Investigation for use on digitized gray-scale fingerprint images. The algorithm is based on adaptive uniform scalar quantization of a discrete wavelet transform image decomposition and is referred to as the wavelet/scalar quantization standard. The standard produces archival quality images at compression ratios of around 20:1 and will allow the FBI to replace their current database of paper fingerprint cards with digital imagery.
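
    The uniform scalar quantization at the heart of the standard can be sketched in miniature: a uniform bin width Q with a wider zero bin ("dead zone") Z, plus midpoint dequantization. The parameter values below are illustrative and are not the standard's per-subband bin allocations.

```python
import numpy as np

def quantize(c, Q, Z):
    """Dead-zone uniform scalar quantizer: coefficients within +-Z/2 map to
    zero; beyond that, uniform bins of width Q."""
    out = np.zeros_like(c, dtype=int)
    pos = c > Z / 2
    neg = c < -Z / 2
    out[pos] = np.floor((c[pos] - Z / 2) / Q).astype(int) + 1
    out[neg] = -(np.floor((-c[neg] - Z / 2) / Q).astype(int) + 1)
    return out

def dequantize(q, Q, Z):
    """Reconstruct each coefficient at the midpoint of its bin."""
    c = np.zeros_like(q, dtype=float)
    c[q > 0] = (q[q > 0] - 0.5) * Q + Z / 2
    c[q < 0] = -((-q[q < 0] - 0.5) * Q + Z / 2)
    return c

coeffs = np.array([-3.7, -0.2, 0.0, 0.4, 1.3, 8.9])
q = quantize(coeffs, Q=1.0, Z=1.0)
err = np.max(np.abs(dequantize(q, 1.0, 1.0) - coeffs))
print(q, err <= 0.55)
```

    The dead zone is what zeroes out the many near-zero wavelet coefficients, producing the long zero runs that the subsequent entropy coder compresses so effectively.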

  18. Accuracy and speed in computing the Chebyshev collocation derivative

    NASA Technical Reports Server (NTRS)

    Don, Wai-Sun; Solomonoff, Alex

    1991-01-01

    We study several algorithms for computing the Chebyshev spectral derivative and compare their roundoff errors. For a large number of collocation points, the elements of the Chebyshev differentiation matrix, if constructed in the usual way, are not computed accurately. A subtle cause is found to account for the poor accuracy when computing the derivative by the matrix-vector multiplication method. Methods for accurately computing the elements of the matrix are presented, and we find that if the entries of the matrix are computed accurately, the roundoff error of the matrix-vector multiplication is as small as that of the transform-recursion algorithm. Results of CPU time usage are shown for several different algorithms for computing the derivative by the Chebyshev collocation method for a wide variety of two-dimensional grid sizes on both an IBM and a Cray 2 computer. We found that which algorithm is fastest on a particular machine depends not only on the grid size, but also on small details of the computer hardware as well. For most practical grid sizes used in computation, the even-odd decomposition algorithm is found to be faster than the transform-recursion method.
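
    The accurate construction discussed can be illustrated with the standard Gauss-Lobatto differentiation matrix, using the negative-row-sum trick for the diagonal, one of the known remedies for the cancellation errors mentioned in the abstract (this is the textbook construction, not necessarily the authors' exact variant).

```python
import numpy as np

def cheb_diff_matrix(n):
    """Chebyshev collocation differentiation matrix on the n+1 Gauss-Lobatto
    points x_j = cos(pi*j/n). The diagonal is set to the negative row sum,
    which avoids the cancellation errors of the explicit diagonal formula."""
    x = np.cos(np.pi * np.arange(n + 1) / n)
    c = np.ones(n + 1)
    c[0] = c[-1] = 2.0
    c *= (-1.0) ** np.arange(n + 1)
    dX = x[:, None] - x[None, :]
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))  # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                      # negative-sum diagonal
    return D, x

D, x = cheb_diff_matrix(16)
u = np.exp(x)                  # d/dx exp(x) = exp(x)
err = np.max(np.abs(D @ u - u))
print(err < 1e-9)
```

    With only 17 points the derivative of a smooth function is already accurate to roughly machine precision, which is the spectral accuracy that makes the roundoff behavior of the matrix entries matter so much.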

  19. Acoustic scattering by multiple elliptical cylinders using collocation multipole method

    NASA Astrophysics Data System (ADS)

    Lee, Wei-Ming

    2012-05-01

    This paper presents the collocation multipole method for the acoustic scattering induced by multiple elliptical cylinders subjected to an incident plane sound wave. To satisfy the Helmholtz equation in the elliptical coordinate system, the scattered acoustic field is formulated in terms of angular and radial Mathieu functions which also satisfy the radiation condition at infinity. The sound-soft or sound-hard boundary condition is satisfied by uniformly collocating points on the boundaries. For the sound-hard or Neumann conditions, the normal derivative of the acoustic pressure is determined by using the appropriate directional derivative without requiring the addition theorem of Mathieu functions. By truncating the multipole expansion, a finite linear algebraic system is derived and the scattered field can then be determined according to the given incident acoustic wave. Once the total field is calculated as the sum of the incident field and the scattered field, the near field acoustic pressure along the scatterers and the far field scattering pattern can be determined. For the acoustic scattering of one elliptical cylinder, the proposed results match well with the analytical solutions. The proposed scattered fields induced by two and three elliptical-cylindrical scatterers are critically compared with those provided by the boundary element method to validate the present method. Finally, the effects of the convexity of an elliptical scatterer, the separation between scatterers and the incident wave number and angle on the acoustic scattering are investigated.

  20. Wavelet multiscale processing of remote sensing data

    NASA Astrophysics Data System (ADS)

    Bagmanov, Valeriy H.; Kharitonov, Svyatoslav V.; Meshkov, Ivan K.; Sultanov, Albert H.

    2008-12-01

    A comparative analysis of methods for estimating the Hurst index (index of self-similarity) is offered, together with a comparison of wavelet types used for image decomposition. Five types of wavelets are compared: Haar wavelets, Daubechies wavelets, discrete Meyer wavelets, symlets, and coiflets. The Meyer and Haar wavelets give the best quality of the restored image, since they are characterized by minimal reconstruction errors. However, the compression ratio for these types is smaller than for the Daubechies wavelets, symlets, and coiflets, which in turn yield lower decompression precision. Since the complexity of implementing a wavelet transformation on digital signal processors (DSP) must also be taken into consideration, the Haar wavelet transformation is the simplest method.

  1. Wavelet-based polarimetry analysis

    NASA Astrophysics Data System (ADS)

    Ezekiel, Soundararajan; Harrity, Kyle; Farag, Waleed; Alford, Mark; Ferris, David; Blasch, Erik

    2014-06-01

    Wavelet transformation has become a cutting-edge and promising approach in the field of image and signal processing. A wavelet is a waveform of effectively limited duration that has an average value of zero. Wavelet analysis is done by breaking up the signal into shifted and scaled versions of the original signal. The key advantage of a wavelet is that it is capable of revealing smaller changes, trends, and breakdown points that are not revealed by other techniques such as Fourier analysis. The phenomenon of polarization has been studied for quite some time and is a very useful tool for target detection and tracking. Long Wave Infrared (LWIR) polarization is beneficial for detecting camouflaged objects and is a useful approach when identifying and distinguishing manmade objects from natural clutter. In addition, the Stokes Polarization Parameters, which are calculated from 0°, 45°, 90°, 135°, right circular, and left circular intensity measurements, provide spatial orientations of target features and suppress natural features. In this paper, we propose a wavelet-based polarimetry analysis (WPA) method to analyze Long Wave Infrared Polarimetry Imagery to discriminate targets such as dismounts and vehicles from background clutter. These parameters can be used for image thresholding and segmentation. Experimental results show the wavelet-based polarimetry analysis is efficient and can be used in a wide range of applications such as change detection, shape extraction, target recognition, and feature-aided tracking.
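
    The Stokes parameters named in the abstract follow directly from the six intensity measurements; the helper name below is illustrative.

```python
import numpy as np

def stokes_parameters(I0, I45, I90, I135, Irc, Ilc):
    """Stokes vector from the six polarimetric intensity measurements."""
    S0 = I0 + I90     # total intensity
    S1 = I0 - I90     # 0-degree vs. 90-degree linear polarization
    S2 = I45 - I135   # +45-degree vs. -45-degree linear polarization
    S3 = Irc - Ilc    # right vs. left circular polarization
    return np.array([S0, S1, S2, S3])

# Ideal horizontally polarized light splits evenly between the 45/135-degree
# and circular channels:
S = stokes_parameters(1.0, 0.5, 0.0, 0.5, 0.5, 0.5)
print(S)  # -> [1. 1. 0. 0.]
```

    In the WPA setting these four parameter images are what the wavelet decomposition is then applied to.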

  2. Low-Oscillation Complex Wavelets

    NASA Astrophysics Data System (ADS)

    ADDISON, P. S.; WATSON, J. N.; FENG, T.

    2002-07-01

    In this paper we explore the use of two low-oscillation complex wavelets—Mexican hat and Morlet—as powerful feature detection tools for data analysis. These wavelets, which have been largely ignored to date in the scientific literature, allow for a decomposition which is more “temporal than spectral” in wavelet space. This is shown to be useful for the detection of small amplitude, short duration signal features which are masked by much larger fluctuations. Wavelet transform-based methods employing these wavelets (based on both wavelet ridges and modulus maxima) are developed and applied to sonic echo NDT signals used for the analysis of structural elements. A new mobility scalogram and associated reflectogram is defined for analysis of impulse response characteristics of structural elements and a novel signal compression technique is described in which the pertinent signal information is contained within a few modulus maxima coefficients. As an example of its usefulness, the signal compression method is employed as a pre-processor for a neural network classifier. The authors believe that low oscillation complex wavelets have wide applicability to other practical signal analysis problems. Their possible application to two such problems is discussed briefly—the interrogation of arrhythmic ECG signals and the detection and characterization of coherent structures in turbulent flow fields.
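
    The detection of a small, short-duration feature masked by a much larger fluctuation can be demonstrated with the (real-valued) Mexican hat wavelet at a matched scale; the synthetic signal and the scale choice below are illustrative, not the paper's sonic-echo data.

```python
import numpy as np

def ricker(t, s):
    """Mexican hat (Ricker) wavelet at scale s, unnormalized."""
    u = t / s
    return (1.0 - u ** 2) * np.exp(-u ** 2 / 2.0)

def cwt_row(signal, s, half_width=50):
    """One row of a continuous wavelet transform at a fixed scale."""
    w = ricker(np.arange(-half_width, half_width + 1, dtype=float), s)
    return np.convolve(signal, w, mode="same")

n = 1000
slow = 5.0 * np.sin(2 * np.pi * np.arange(n) / n)        # large slow fluctuation
blip = np.exp(-0.5 * ((np.arange(n) - 600) / 3.0) ** 2)  # small short-duration feature
row = cwt_row(slow + blip, s=3)
print(abs(int(np.argmax(np.abs(row))) - 600) <= 2)
```

    Because the wavelet has zero mean, the large slow component is almost entirely rejected, and the transform responds most strongly at the location of the small matched-scale feature.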

  3. Corpora and Collocations in Chinese-English Dictionaries for Chinese Users

    ERIC Educational Resources Information Center

    Xia, Lixin

    2015-01-01

    The paper identifies the major problems of the Chinese-English dictionary in representing collocational information after an extensive survey of nine dictionaries popular among Chinese users. It is found that the Chinese-English dictionary only provides the collocation types of "v+n" and "v+n," but completely ignores those of…

  4. Cross-Linguistic Influence: Its Impact on L2 English Collocation Production

    ERIC Educational Resources Information Center

    Phoocharoensil, Supakorn

    2013-01-01

    This research study investigated the influence of learners' mother tongue on their acquisition of English collocations. Having drawn the linguistic data from two groups of Thai EFL learners differing in English proficiency level, the researcher found that the native language (L1) plays a significant role in the participants' collocation learning…

  5. Towards a Learner Need-Oriented Second Language Collocation Writing Assistant

    ERIC Educational Resources Information Center

    Ramos, Margarita Alonso; Carlini, Roberto; Codina-Filbà, Joan; Orol, Ana; Vincze, Orsolya; Wanner, Leo

    2015-01-01

    The importance of collocations, i.e. idiosyncratic binary word co-occurrences in the context of second language learning has been repeatedly emphasized by scholars working in the field. Some went even so far as to argue that "vocabulary learning is collocation learning" (Hausmann, 1984, p. 395). Empirical studies confirm this…

  6. Going beyond Patterns: Involving Cognitive Analysis in the Learning of Collocations

    ERIC Educational Resources Information Center

    Liu, Dilin

    2010-01-01

    Since the late 1980s, collocations have received increasing attention in applied linguistics, especially language teaching, as is evidenced by the many publications on the topic. These works fall roughly into two lines of research (a) those focusing on the identification and use of collocations (Benson, 1989; Hunston, 2002; Hunston & Francis,…

  7. Triple collocation: beyond three estimates and separation of structural/non-structural errors

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This study extends the popular triple collocation method for error assessment from three source estimates to an arbitrary number of source estimates, i.e., to solve the “multiple” collocation problem. The error assessment problem is solved through Pythagorean constraints in Hilbert space, which is s...
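
    The classic three-estimate case that this study generalizes can be sketched directly: given three estimates of the same truth with mutually independent errors, each error variance follows from the pairwise covariances (the covariance-notation form of triple collocation). The synthetic data and function name are illustrative.

```python
import numpy as np

def triple_collocation(x, y, z):
    """Error variance of each of three estimates with independent errors:
    err_x^2 = Var(x) - Cov(x,y) * Cov(x,z) / Cov(y,z), and cyclically."""
    c = np.cov(np.vstack([x, y, z]))
    ex = c[0, 0] - c[0, 1] * c[0, 2] / c[1, 2]
    ey = c[1, 1] - c[0, 1] * c[1, 2] / c[0, 2]
    ez = c[2, 2] - c[0, 2] * c[1, 2] / c[0, 1]
    return ex, ey, ez

rng = np.random.default_rng(6)
truth = rng.standard_normal(400_000)
x = truth + 0.1 * rng.standard_normal(truth.size)   # true error variance 0.01
y = truth + 0.2 * rng.standard_normal(truth.size)   # true error variance 0.04
z = truth + 0.3 * rng.standard_normal(truth.size)   # true error variance 0.09
ex, ey, ez = triple_collocation(x, y, z)
print(np.allclose([ex, ey, ez], [0.01, 0.04, 0.09], atol=0.02))
```

    Each pairwise covariance estimates the variance of the shared truth, so subtracting the cross-term isolates the individual error variance; the extension described in the abstract recasts these constraints for more than three sources.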

  8. Study on the Causes and Countermeasures of the Lexical Collocation Mistakes in College English

    ERIC Educational Resources Information Center

    Yan, Hansheng

    2010-01-01

    The lexical collocation in English is an important content in the linguistics theory, and also a research topic which is more and more emphasized in English teaching practice of China. The collocation ability of English decides whether learners could masterly use real English in effective communication. In many years' English teaching practice,…

  9. Symmetrical and Asymmetrical Scaffolding of L2 Collocations in the Context of Concordancing

    ERIC Educational Resources Information Center

    Rezaee, Abbas Ali; Marefat, Hamideh; Saeedakhtar, Afsaneh

    2015-01-01

    Collocational competence is recognized to be integral to native-like L2 performance, and concordancing can be of assistance in gaining this competence. This study reports on an investigation into the effect of symmetrical and asymmetrical scaffolding on the collocational competence of Iranian intermediate learners of English in the context of…

  10. Collocational Links in the L2 Mental Lexicon and the Influence of L1 Intralexical Knowledge

    ERIC Educational Resources Information Center

    Wolter, Brent; Gyllstad, Henrik

    2011-01-01

    This article assesses the influence of L1 intralexical knowledge on the formation of L2 intralexical collocations. Two tests, a primed lexical decision task (LDT) and a test of receptive collocational knowledge, were administered to a group of non-native speakers (NNSs) (L1 Swedish), with native speakers (NSs) of English serving as controls on the…

  11. Collocation, Semantic Prosody, and Near Synonymy: A Cross-Linguistic Perspective

    ERIC Educational Resources Information Center

    Xiao, Richard; McEnery, Tony

    2006-01-01

    This paper explores the collocational behaviour and semantic prosody of near synonyms from a cross-linguistic perspective. The importance of these concepts to language learning is well recognized. Yet while collocation and semantic prosody have recently attracted much interest from researchers studying the English language, there has been little…

  12. English Collocation Learning through Corpus Data: On-Line Concordance and Statistical Information

    ERIC Educational Resources Information Center

    Ohtake, Hiroshi; Fujita, Nobuyuki; Kawamoto, Takeshi; Morren, Brian; Ugawa, Yoshihiro; Kaneko, Shuji

    2012-01-01

    We developed an English Collocations On Demand system offering on-line corpus and concordance information to help Japanese researchers acquire a better command of English collocation patterns. The Life Science Dictionary Corpus consists of approximately 90,000,000 words collected from life science related research papers published in academic…

  13. Wavelet Transform for Real-Time Detection of Action Potentials in Neural Signals

    PubMed Central

    Quotb, Adam; Bornat, Yannick; Renaud, Sylvie

    2011-01-01

    We present a study on wavelet detection methods of neuronal action potentials (APs). Our final goal is to implement the selected algorithms on custom integrated electronics for on-line processing of neural signals; therefore we take real-time computing as a hard specification and silicon area as a price to pay. Using simulated neural signals including APs, we characterize an efficient wavelet method for AP extraction by evaluating its detection rate and its implementation cost. We compare software implementation for three methods: adaptive threshold, discrete wavelet transform (DWT), and stationary wavelet transform (SWT). We evaluate detection rate and implementation cost for detection functions dynamically comparing a signal with an adaptive threshold proportional to its SD, where the signal is the raw neural signal, respectively: (i) non-processed; (ii) processed by a DWT; (iii) processed by a SWT. We also use different mother wavelets and test different data formats to set an optimal compromise between accuracy and silicon cost. Detection accuracy is evaluated together with false negative and false positive detections. Simulation results show that for on-line AP detection implemented on a configurable digital integrated circuit, APs underneath the noise level can be detected using SWT with a well-selected mother wavelet, combined to an adaptive threshold. PMID:21811455
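
    The simplest of the three compared detectors -- an adaptive threshold proportional to a robust estimate of the signal SD, applied to the raw signal -- can be sketched as follows. The MAD-based SD estimate, the factor k, and the refractory window are illustrative choices, not the paper's hardware parameters.

```python
import numpy as np

def detect_spikes(signal, k=5.5, refractory=30):
    """Adaptive amplitude threshold: k times a robust (median-based) noise SD
    estimate; crossings closer than the refractory window are grouped."""
    sigma = np.median(np.abs(signal)) / 0.6745   # MAD-based noise SD estimate
    above = np.flatnonzero(np.abs(signal) > k * sigma)
    spikes, last = [], -refractory
    for i in above:
        if i - last >= refractory:   # new AP rather than the same one
            spikes.append(i)
        last = i
    return spikes

rng = np.random.default_rng(7)
sig = 0.1 * rng.standard_normal(5000)        # background noise, SD 0.1
for pos in (1000, 2500, 4000):               # three synthetic APs
    sig[pos:pos + 10] += np.hanning(10)
print(detect_spikes(sig))
```

    The SWT variants compared in the paper run the same thresholding on wavelet-transformed rather than raw samples, which is what lets them pull APs out from underneath the noise level.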

  15. Wavelet-based compression of medical images: filter-bank selection and evaluation.

    PubMed

    Saffor, A; bin Ramli, A R; Ng, K H

    2003-06-01

    Wavelet-based image coding algorithms (lossy and lossless) use a fixed perfect-reconstruction filter bank built into the algorithm for coding and decoding of images. However, no systematic study has been performed to evaluate the coding performance of wavelet filters on medical images. We evaluated which types of filters are best suited to medical images in providing low bit rate and low computational complexity. In this study a variety of wavelet filters were used to compress and decompress computed tomography (CT) brain and abdomen images. We applied two-dimensional wavelet decomposition, quantization and reconstruction using several families of filter banks to a set of CT images. The Discrete Wavelet Transform (DWT), which provides an efficient multi-resolution frequency framework, was used. Compression was accomplished by applying threshold values to the wavelet coefficients. Statistical indices such as mean square error (MSE), maximum absolute error (MAE) and peak signal-to-noise ratio (PSNR) were used to quantify the effect of wavelet compression on the selected images. The code was written using the wavelet and image processing toolboxes of MATLAB (version 6.1). The results show that no specific wavelet filter performs uniformly better than the others, except the Daubechies and biorthogonal filters, which are the best overall. MAE values achieved by these filters were 5 x 10(-14) to 12 x 10(-14) for both CT brain and abdomen images at different decomposition levels, indicating that a very small error (approximately 7 x 10(-14)) can be achieved between the original and the filtered image. The PSNR values obtained were higher for the brain than for the abdomen images. For both lossy and lossless compression, the 'most appropriate' wavelet filter should be chosen adaptively depending on the statistical properties of the image being coded, to achieve a higher compression ratio. PMID:12956184
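
    The pipeline evaluated here — wavelet decomposition, coefficient thresholding, reconstruction, and MSE/PSNR evaluation — can be illustrated with a minimal one-dimensional Haar sketch. The study itself used 2D decompositions of CT images in MATLAB; the filter, threshold, and test signal below are illustrative assumptions.

```python
import numpy as np

def haar_fwd(x):
    """One level of an orthonormal Haar DWT (even-length input)."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)  # approximation band
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)  # detail band
    return a, d

def haar_inv(a, d):
    """Exact inverse of haar_fwd."""
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def compress(x, thresh):
    """Zero out small detail coefficients, then reconstruct."""
    a, d = haar_fwd(x)
    return haar_inv(a, np.where(np.abs(d) >= thresh, d, 0.0))

def mse(orig, rec):
    return float(np.mean((orig - rec) ** 2))

def psnr(orig, rec, peak=255.0):
    """Peak signal-to-noise ratio in dB."""
    return 10.0 * np.log10(peak ** 2 / mse(orig, rec))
```

    On a smooth signal most detail coefficients fall below the threshold, so few coefficients need to be stored while the PSNR stays high — the trade-off the abstract quantifies per filter family.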

  16. Tests for Wavelets as a Basis Set

    NASA Astrophysics Data System (ADS)

    Baker, Thomas; Evenbly, Glen; White, Steven

    A wavelet transformation is a special type of filter usually reserved for image processing and other applications. We develop metrics to evaluate wavelets for general problems on one-dimensional test systems. The goal is to eventually use a wavelet basis in electronic structure calculations. We compare a variety of orthogonal wavelets such as coiflets, symlets, and Daubechies wavelets. We also evaluate a new type of orthogonal wavelet with dilation factor three which is both symmetric and compact in real space. This work was supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences under Award #DE-SC008696.

  17. A frequency dependent preconditioned wavelet method for atmospheric tomography

    NASA Astrophysics Data System (ADS)

    Yudytskiy, Mykhaylo; Helin, Tapio; Ramlau, Ronny

    2013-12-01

    Atmospheric tomography, i.e. the reconstruction of the turbulence in the atmosphere, is a main task for the adaptive optics systems of the next generation of telescopes. For extremely large telescopes, such as the European Extremely Large Telescope, this problem becomes overly complex and an efficient algorithm is needed to reduce numerical costs. Recently, a conjugate gradient method based on a wavelet parametrization of the turbulence layers was introduced [5]. An iterative algorithm can only be numerically efficient when the number of iterations required for a sufficient reconstruction is low. One way to achieve this is to design an efficient preconditioner. In this paper we propose a new frequency-dependent preconditioner for the wavelet method. In the context of a multi-conjugate adaptive optics (MCAO) system simulated on OCTOPUS, the official end-to-end simulation tool of the European Southern Observatory, we demonstrate the robustness and speed of the preconditioned algorithm. We show that three iterations are sufficient for a good reconstruction.
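
    The role of a preconditioner in cutting conjugate-gradient iteration counts can be illustrated with a generic sketch. The paper's preconditioner is frequency-dependent and wavelet-specific; the Jacobi (diagonal) preconditioner below is a stand-in chosen only to show the mechanism, and the test matrix is synthetic.

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, max_iter=500):
    """Preconditioned conjugate gradients for SPD A; M_inv applies an
    approximation of A^{-1} to a residual vector."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for it in range(1, max_iter + 1):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            return x, it
        z = M_inv(r)          # precondition the new residual
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, max_iter
```

    A good preconditioner clusters the spectrum of the preconditioned operator, so the same tolerance is reached in far fewer matrix-vector products — the quantity that dominates cost in atmospheric tomography.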

  18. Improved Compression of Wavelet-Transformed Images

    NASA Technical Reports Server (NTRS)

    Kiely, Aaron; Klimesh, Matthew

    2005-01-01

    A recently developed data-compression method is an adaptive technique for coding quantized wavelet-transformed data, nominally as part of a complete image-data compressor. Unlike some other approaches, this method admits a simple implementation and does not rely on the use of large code tables. A common data-compression approach, particularly for images, is to perform a wavelet transform on the input data, and then losslessly compress a quantized version of the wavelet-transformed data. Under this compression approach, it is common for the quantized data to include long sequences, or runs, of zeros. The new coding method uses prefix-free codes for the nonnegative integers as part of an adaptive algorithm for compressing the quantized wavelet-transformed data by run-length coding. In the form of run-length coding used here, the data sequence to be encoded is parsed into strings consisting of some number (possibly 0) of zeros, followed by a nonzero value. The nonzero value and the length of the run of zeros are encoded. For a data stream that contains a sufficiently high frequency of zeros, this method is known to be more effective than using a single variable-length code to encode each symbol. The specific prefix-free codes used are from two classes of variable-length codes: a class known as Golomb codes, and a class known as exponential-Golomb codes. The codes within each class are indexed by a single integer parameter. The present method uses exponential-Golomb codes for the lengths of the runs of zeros, and Golomb codes for the nonzero values. The code parameters within each code class are determined adaptively on the fly as compression proceeds, on the basis of statistics from previously encoded values. In particular, a simple adaptive method has been devised to select the parameter identifying the particular exponential-Golomb code to use. The method tracks the average number of bits used to encode recent run lengths, and takes the difference between this average
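
    The coding building blocks named above — Golomb codes (here in their power-of-two Rice form) and exponential-Golomb codes, applied to (zero-run, nonzero value) pairs — can be sketched as follows. The on-the-fly parameter adaptation is omitted, and the parameters k_run and k_val are illustrative, not the method's adaptive choices.

```python
def rice(n, k):
    """Golomb code with m = 2**k (a Rice code) for n >= 0:
    unary-coded quotient, then k remainder bits."""
    q, r = n >> k, n & ((1 << k) - 1)
    return '1' * q + '0' + (format(r, 'b').zfill(k) if k else '')

def exp_golomb(n, k=0):
    """Order-k exponential-Golomb code for a nonnegative integer n."""
    b = format(n + (1 << k), 'b')
    return '0' * (len(b) - 1 - k) + b

def encode_runs(coeffs, k_run=0, k_val=2):
    """Parse a coefficient stream into (zero-run, nonzero value) pairs;
    exp-Golomb codes the run lengths, a Rice code the value indices."""
    bits, run = [], 0
    for c in coeffs:
        if c == 0:
            run += 1
            continue
        # map signed nonzero values to a nonnegative index: 1,-1,2,-2,... -> 0,1,2,3,...
        idx = 2 * (abs(c) - 1) + (1 if c < 0 else 0)
        bits.append(exp_golomb(run, k_run) + rice(idx, k_val))
        run = 0
    return ''.join(bits)
```

    Both code families are prefix-free, so the concatenated bitstring is uniquely decodable without separators, and each is indexed by the single integer parameter the abstract mentions.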

  19. Optimization of Low-Thrust Spiral Trajectories by Collocation

    NASA Technical Reports Server (NTRS)

    Falck, Robert D.; Dankanich, John W.

    2012-01-01

    As NASA examines potential missions in the post-space-shuttle era, there has been a renewed interest in low-thrust electric propulsion for both crewed and uncrewed missions. While much progress has been made in the field of software for the optimization of low-thrust trajectories, many of the tools utilize higher-fidelity methods which, while excellent, result in extremely long run times and poor convergence when dealing with planetocentric spiraling trajectories deep within a gravity well. Conversely, faster tools like SEPSPOT provide a reasonable solution but typically fail to account for other forces such as third-body gravitation, aerodynamic drag, and solar radiation pressure. SEPSPOT is further constrained by its solution method, which may require a very good initial guess to yield a converged optimal solution. Here the authors have developed an approach using collocation intended to provide solution times comparable to those of SEPSPOT while allowing for greater robustness and extensible force models.

  20. Tensorial Basis Spline Collocation Method for Poisson's Equation

    NASA Astrophysics Data System (ADS)

    Plagne, Laurent; Berthou, Jean-Yves

    2000-01-01

    This paper describes the tensorial basis spline collocation method (TBSCM) applied to Poisson's equation. In the case of a localized 3D charge distribution in vacuum, this direct method, based on a tensorial decomposition of the differential operator, is shown to be competitive with both iterative BSCM and FFT-based methods. We emphasize the O(h^4) and O(h^6) convergence of TBSCM for cubic and quintic splines, respectively. We describe the implementation of this method on a distributed-memory parallel machine. Performance measurements on a Cray T3E are reported. Our code exhibits high performance and good scalability: as an example, 27 Gflops is obtained when solving Poisson's equation on a 256^3 non-uniform 3D Cartesian mesh using 128 T3E-750 processors, i.e., 215 Mflops per processor.

  1. The chain collocation method: A spectrally accurate calculus of forms

    NASA Astrophysics Data System (ADS)

    Rufat, Dzhelil; Mason, Gemma; Mullen, Patrick; Desbrun, Mathieu

    2014-01-01

    Preserving in the discrete realm the underlying geometric, topological, and algebraic structures at stake in partial differential equations has proven to be a fruitful guiding principle for numerical methods in a variety of fields such as elasticity, electromagnetism, or fluid mechanics. However, structure-preserving methods have traditionally used spaces of piecewise polynomial basis functions for differential forms. Yet, in many problems where solutions are smoothly varying in space, a spectral numerical treatment is called for. In an effort to provide structure-preserving numerical tools with spectral accuracy on logically rectangular grids over periodic or bounded domains, we present a spectral extension of the discrete exterior calculus (DEC), with resulting computational tools extending well-known collocation-based spectral methods. Its efficient implementation using fast Fourier transforms is provided as well.
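
    A minimal instance of the FFT-accelerated, collocation-based spectral machinery such methods extend is differentiation of periodic data in Fourier space, sketched here (this is the generic spectral derivative, not the paper's discrete exterior calculus operators):

```python
import numpy as np

def spectral_derivative(u, L=2.0 * np.pi):
    """Differentiate samples of a smooth L-periodic function by
    multiplying by i*k in Fourier space (spectrally accurate)."""
    n = u.size
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)  # angular wavenumbers
    return np.fft.ifft(1j * k * np.fft.fft(u)).real
```

    For band-limited data the error is at machine-precision level with very few points, which is the accuracy advantage that motivates a spectral extension of structure-preserving schemes.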

  2. Probabilistic collocation for simulation-based robust concept exploration

    NASA Astrophysics Data System (ADS)

    Rippel, Markus; Choi, Seung-Kyum; Allen, Janet K.; Mistree, Farrokh

    2012-08-01

    In the early stages of an engineering design process it is necessary to explore the design space to find a feasible range that satisfies design requirements. When robustness of the system is among the requirements, the robust concept exploration method can be used. In this method, a global metamodel, such as a global response surface of the design space, is used to evaluate robustness. However, for large design spaces, this is computationally expensive and may be relatively inaccurate for some local regions. In this article, a method is developed for successively generating local response models at points of interest as the design space is explored. This approach is based on the probabilistic collocation method. Although the focus of this article is on the method, it is demonstrated using an artificial performance function and a linear cellular alloy heat exchanger. For these problems, this approach substantially reduces computation time while maintaining accuracy.
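
    The underlying idea of the probabilistic collocation method — evaluating the model only at quadrature nodes of the input distribution and recovering statistics from weighted sums — can be sketched in one dimension. This is a generic textbook form under a standard-normal input assumption, not the authors' local response models.

```python
import numpy as np

def pcm_moments(model, n_pts=5):
    """Mean and variance of model(xi) for xi ~ N(0, 1) by collocation at
    probabilists' Gauss-Hermite nodes; exact for polynomial models of
    degree <= 2*n_pts - 1."""
    nodes, weights = np.polynomial.hermite_e.hermegauss(n_pts)
    w = weights / weights.sum()  # normalize weights to a probability measure
    vals = np.array([model(x) for x in nodes])
    mean = float(w @ vals)
    var = float(w @ (vals - mean) ** 2)
    return mean, var
```

    The computational saving is that only n_pts model evaluations are needed, rather than the thousands a sampling approach would require for the same smooth problem.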

  3. Group-normalized wavelet packet signal processing

    NASA Astrophysics Data System (ADS)

    Shi, Zhuoer; Bao, Zheng

    1997-04-01

    Since traditional wavelet and wavelet packet coefficients do not exactly represent the strength of signal components at each time(space)-frequency tiling, the group-normalized wavelet packet transform (GNWPT) is presented for nonlinear signal filtering and extraction from clutter or noise, together with a space(time)-frequency masking technique. An extended F-entropy improves the performance of the GNWPT. For perception-based images, soft-logic masking is emphasized to remove aliasing while preserving edges. Lawton's method for constructing complex-valued wavelets is extended to generate complex-valued compactly supported wavelet packets for radar signal extraction. These wavelet packets are symmetric and unitarily orthogonal. Well-suited wavelet packets are chosen by analyzing their time-frequency characteristics. For real-valued signal processing, such as images and ECG signals, compactly supported spline or biorthogonal wavelet packets are preferred for their denoising and filtering qualities.

  4. A Mellin transform approach to wavelet analysis

    NASA Astrophysics Data System (ADS)

    Alotta, Gioacchino; Di Paola, Mario; Failla, Giuseppe

    2015-11-01

    The paper proposes a fractional calculus approach to continuous wavelet analysis. Upon introducing a Mellin transform expression of the mother wavelet, it is shown that the wavelet transform of an arbitrary function f(t) can be given a fractional representation involving a suitable number of Riesz integrals of f(t), and corresponding fractional moments of the mother wavelet. This result serves as a basis for an original approach to wavelet analysis of linear systems under arbitrary excitations. In particular, using the proposed fractional representation for the wavelet transform of the excitation, it is found that the wavelet transform of the response can readily be computed by a Mellin transform expression, with fractional moments obtained from a set of algebraic equations whose coefficient matrix applies for any scale a of the wavelet transform. Robustness and computational efficiency of the proposed approach are shown in the paper.

  5. Wavelet-based Evapotranspiration Forecasts

    NASA Astrophysics Data System (ADS)

    Bachour, R.; Maslova, I.; Ticlavilca, A. M.; McKee, M.; Walker, W.

    2012-12-01

    Providing a reliable short-term forecast of evapotranspiration (ET) could be a valuable element for improving the efficiency of irrigation water delivery systems. In the last decade, wavelet transform has become a useful technique for analyzing the frequency domain of hydrological time series. This study shows how wavelet transform can be used to access statistical properties of evapotranspiration. The objective of the research reported here is to use wavelet-based techniques to forecast ET up to 16 days ahead, which corresponds to the LANDSAT 7 overpass cycle. The properties of the ET time series, both physical and statistical, are examined in the time and frequency domains. We use the information about the energy decomposition in the wavelet domain to extract meaningful components that are used as inputs for ET forecasting models. Seasonal autoregressive integrated moving average (SARIMA) and multivariate relevance vector machine (MVRVM) models are coupled with the wavelet-based multiresolution analysis (MRA) results and used to generate short-term ET forecasts. Accuracy of the models is estimated and model robustness is evaluated using the bootstrap approach.

  6. Validation of significant wave height product from Envisat ASAR using triple collocation

    NASA Astrophysics Data System (ADS)

    Wang, H.; Shi, C. Y.; Zhu, J. H.; Huang, X. Q.; Chen, C. T.

    2014-03-01

    Nowadays, spaceborne Synthetic Aperture Radar (SAR) has become a powerful tool for providing significant wave height. Traditionally, validation of SAR-derived ocean wave height has been carried out against buoy measurements or model outputs, which yields only an inter-comparison, not an 'absolute' validation. In this study, the triple collocation error model is introduced in the validation of Envisat ASAR level 2 data. Significant wave height data from ASAR were validated against in situ buoy data and wave model hindcast results from WaveWatch III, covering a period of six years. The impact of the collocation distance on the error of ASAR wave height is discussed. From the triple collocation validation analysis, it is found that the error of the Envisat ASAR significant wave height product is linear in the collocation distance and decreases with decreasing collocation distance. Using a linear regression fit, the absolute error of the Envisat ASAR wave height was extrapolated to zero collocation distance, giving an absolute wave height error of 0.49 m in the deep, open ocean.
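
    The covariance form of the triple collocation error model used in such validations can be sketched as follows. This is the standard textbook formulation under the usual assumptions (three collocated data sets measuring the same truth with mutually independent, truth-independent errors), not the authors' exact processing chain.

```python
import numpy as np

def triple_collocation(x, y, z):
    """Covariance-notation TC: estimate the error variance of each of three
    collocated data sets from their pairwise sample covariances."""
    c = np.cov(np.vstack([x, y, z]))
    ex = c[0, 0] - c[0, 1] * c[0, 2] / c[1, 2]
    ey = c[1, 1] - c[0, 1] * c[1, 2] / c[0, 2]
    ez = c[2, 2] - c[0, 2] * c[1, 2] / c[0, 1]
    return ex, ey, ez
```

    No data set has to be treated as an error-free reference, which is exactly what lifts the method above a plain inter-comparison against buoys or a model.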

  7. Wavelet Representation of Contour Sets

    SciTech Connect

    Bertram, M; Laney, D E; Duchaineau, M A; Hansen, C D; Hamann, B; Joy, K I

    2001-07-19

    We present a new wavelet compression and multiresolution modeling approach for sets of contours (level sets). In contrast to previous wavelet schemes, our algorithm creates a parametrization of a scalar field induced by its contours and compactly stores this parametrization rather than function values sampled on a regular grid. Our representation is based on hierarchical polygon meshes with subdivision connectivity whose vertices are transformed into wavelet coefficients. From this sparse set of coefficients, every set of contours can be efficiently reconstructed at multiple levels of resolution. Even when applying lossy compression that introduces high quantization errors, our method preserves contour topology, in contrast to compression methods applied to the corresponding field function. We provide numerical results for scalar fields defined on planar domains. Our approach generalizes to volumetric domains, time-varying contours, and level sets of vector fields.

  8. Wavelets for sign language translation

    NASA Astrophysics Data System (ADS)

    Wilson, Beth J.; Anspach, Gretel

    1993-10-01

    Wavelet techniques are applied to help extract the relevant parameters of sign language from video images of a person communicating in American Sign Language or Signed English. The compression and edge detection features of two-dimensional wavelet analysis are exploited to enhance the algorithms under development to classify the hand motion, hand location with respect to the body, and handshape. These three parameters have different processing requirements and complexity issues. The results are described for applying various quadrature mirror filter designs to a filterbank implementation of the desired wavelet transform. The overall project is to develop a system that will translate sign language to English to facilitate communication between deaf and hearing people.

  9. Recent advances in wavelet technology

    NASA Technical Reports Server (NTRS)

    Wells, R. O., Jr.

    1994-01-01

    Wavelet research has been developing rapidly over the past five years, and in particular in the academic world there has been significant activity at numerous universities. In the industrial world, there have been developments at Aware, Inc., Lockheed, Martin-Marietta, TRW, Kodak, Exxon, and many others. The government agencies supporting wavelet research and development include ARPA, ONR, AFOSR, NASA, and many other agencies. The recent literature includes a book indexing citations on this subject from the past decade, containing over 1,000 references and abstracts.

  10. A parallel splitting wavelet method for 2D conservation laws

    NASA Astrophysics Data System (ADS)

    Schmidt, Alex A.; Kozakevicius, Alice J.; Jakobsson, Stefan

    2016-06-01

    The current work presents a parallel formulation using the MPI protocol for an adaptive high order finite difference scheme to solve 2D conservation laws. Adaptivity is achieved at each time iteration by the application of an interpolating wavelet transform in each space dimension. High order approximations for the numerical fluxes are computed by ENO and WENO schemes. Since time evolution is made by a TVD Runge-Kutta space splitting scheme, the problem is naturally suitable for parallelization. Numerical simulations and speedup results are presented for Euler equations in gas dynamics problems.
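
    The wavelet-based adaptivity step used in such schemes — thresholding interpolating-wavelet details to decide which grid points to keep at each time iteration — can be sketched in 1D with a linear interpolating wavelet. This is an illustrative reduction of the scheme, with the stencil and threshold as assumptions.

```python
import numpy as np

def interp_details(f_vals):
    """One level of a linear interpolating wavelet transform: the detail at
    each odd-indexed sample is its deviation from the average of its two
    even-indexed neighbours (zero wherever f is locally linear)."""
    return f_vals[1:-1:2] - 0.5 * (f_vals[0:-2:2] + f_vals[2::2])

def adaptive_mask(f_vals, eps):
    """Keep only the odd samples whose detail magnitude exceeds eps --
    the essence of wavelet-thresholded grid adaptation."""
    return np.abs(interp_details(f_vals)) > eps
```

    Away from sharp features the details vanish and the corresponding points can be dropped, which is how the grid concentrates resolution near shocks or steep gradients.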

  11. Spectral analysis of GEOS-3 altimeter data and frequency domain collocation. [to estimate gravity anomalies

    NASA Technical Reports Server (NTRS)

    Eren, K.

    1980-01-01

    The mathematical background in spectral analysis as applied to geodetic applications is summarized. The resolution (cut-off frequency) of the GEOS 3 altimeter data is examined by determining the shortest wavelength (corresponding to the cut-off frequency) recoverable. Data from some 18 profiles are used. The total power (variance) in the sea surface topography is computed with respect to the reference ellipsoid as well as with respect to the GEM-9 surface. A fast inversion algorithm for simple and block Toeplitz matrices and its application to least squares collocation is explained. This algorithm yields a considerable gain in computer time and storage in comparison with conventional least squares collocation. Frequency-domain least squares collocation techniques are also introduced and applied to estimating gravity anomalies from GEOS 3 altimeter data. These techniques substantially reduce the computer time and storage requirements associated with conventional least squares collocation. Numerical examples demonstrate the efficiency and speed of these techniques.
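
    The kind of speedup obtained by moving to the frequency domain rests on the fact that the DFT diagonalizes circulant systems (Toeplitz systems can be embedded in or approximated by circulant ones). A minimal sketch of that mechanism, not the paper's algorithm:

```python
import numpy as np

def solve_circulant(c, b):
    """Solve C x = b for a circulant matrix C with first column c.
    The DFT diagonalizes C, so the solve costs O(n log n), not O(n^3)."""
    return np.fft.ifft(np.fft.fft(b) / np.fft.fft(c)).real

def circulant(c):
    """Dense circulant matrix built from its first column (for checking)."""
    n = len(c)
    return np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)],
                    dtype=float)
```

    The same diagonalization is what makes storage cheap: only the first column (equivalently, the spectrum) of the matrix needs to be kept.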

  12. Wavelet library for constrained devices

    NASA Astrophysics Data System (ADS)

    Ehlers, Johan Hendrik; Jassim, Sabah A.

    2007-04-01

    The wavelet transform is a powerful tool for image and video processing, useful in a range of applications. This paper is concerned with the efficiency of a certain fast-wavelet-transform (FWT) implementation and several wavelet filters more suitable for constrained devices. Such constraints are typically found on mobile (cell) phones or personal digital assistants (PDAs), and can be a combination of limited memory, slow floating-point operations (compared to integer operations, most often as a result of missing hardware support), and limited local storage. Yet these devices are burdened with demanding tasks such as processing a live video or audio signal through on-board capturing sensors. In this paper we present a new wavelet software library, HeatWave, that can be used efficiently for image/video processing and analysis tasks on mobile phones and PDAs. We demonstrate that HeatWave is suitable for real-time applications, with fine control and range to suit transform demands, and present experimental results to substantiate these claims. Finally, since this library is intended for real use, we considered several well-known differences among common embedded operating system platforms, such as a lack of common routines or functions and stack limitations. This makes HeatWave suitable for a range of applications and research projects.
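
    A representative integer-only transform for such constrained devices is the LeGall 5/3 lifting wavelet (the reversible path of JPEG 2000). The sketch below is not taken from the HeatWave library, and it uses simple boundary clamping rather than the symmetric extension a production codec would use; its point is that every step is integer arithmetic, so no floating-point hardware is needed.

```python
def fwd53(x):
    """Forward integer 5/3 lifting wavelet on an even-length sequence.
    Returns (approximation, detail) lists of integers."""
    s, d = list(x[0::2]), list(x[1::2])
    n = len(d)
    # predict: detail = odd sample minus floor-average of its even neighbours
    d = [d[i] - ((s[i] + s[min(i + 1, len(s) - 1)]) >> 1) for i in range(n)]
    # update: even sample plus rounded quarter-sum of neighbouring details
    s = [s[i] + ((d[max(i - 1, 0)] + d[min(i, n - 1)] + 2) >> 2)
         for i in range(len(s))]
    return s, d

def inv53(s, d):
    """Exact integer inverse: undo the update step, then the predict step."""
    n = len(d)
    s = [s[i] - ((d[max(i - 1, 0)] + d[min(i, n - 1)] + 2) >> 2)
         for i in range(len(s))]
    d = [d[i] + ((s[i] + s[min(i + 1, len(s) - 1)]) >> 1) for i in range(n)]
    x = [0] * (len(s) + len(d))
    x[0::2], x[1::2] = s, d
    return x
```

    Because the inverse re-evaluates the identical integer expressions, reconstruction is bit-exact regardless of rounding — the property that makes lifting schemes attractive on integer-only processors.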

  13. Interframe vector wavelet coding technique

    NASA Astrophysics Data System (ADS)

    Wus, John P.; Li, Weiping

    1997-01-01

    Wavelet coding is often used to divide an image into multi-resolution wavelet coefficients, which are quantized and coded. By 'vectorizing' scalar wavelet coding and combining this with vector quantization (VQ), vector wavelet coding (VWC) can be implemented. Using a finite number of states, finite-state vector quantization (FSVQ) takes advantage of the similarity between frames by incorporating memory into the video coding system. Lattice VQ eliminates the potential mismatch that could occur using pre-trained VQ codebooks. It also eliminates the need for codebook storage in the VQ process, thereby creating a more robust coding system. Therefore, by using the VWC coding method in conjunction with the FSVQ system and lattice VQ, the formulation of a high-quality, very low bit rate coding system is proposed. A coding system using a simple FSVQ scheme, where the current state is determined by the previous channel symbol only, is developed. To achieve a higher degree of compression, a tree-like FSVQ system is implemented. The groupings are done in this tree-like structure from the lower subbands to the higher subbands in order to exploit the parent-child relationship inherent in subband analysis. Class A and Class B video sequences from the MPEG-IV testing evaluations are used in the evaluation of this coding method.

  14. Numerical solution of differential-algebraic equations using the spline collocation-variation method

    NASA Astrophysics Data System (ADS)

    Bulatov, M. V.; Rakhvalov, N. P.; Solovarova, L. S.

    2013-03-01

    Numerical methods for solving initial value problems for differential-algebraic equations are proposed. The approximate solution is represented as a continuous vector spline whose coefficients are found using the collocation conditions stated for a subgrid with the number of collocation points less than the degree of the spline and the minimality condition for the norm of this spline in the corresponding spaces. Numerical results for some model problems are presented.

  15. Entropy Stable Spectral Collocation Schemes for the Navier-Stokes Equations: Discontinuous Interfaces

    NASA Technical Reports Server (NTRS)

    Carpenter, Mark H.; Fisher, Travis C.; Nielsen, Eric J.; Frankel, Steven H.

    2013-01-01

    Nonlinear entropy stability and a summation-by-parts framework are used to derive provably stable, polynomial-based spectral collocation methods of arbitrary order. The new methods are closely related to discontinuous Galerkin spectral collocation methods commonly known as DGFEM, but exhibit a more general entropy stability property. Although the new schemes are applicable to a broad class of linear and nonlinear conservation laws, emphasis herein is placed on the entropy stability of the compressible Navier-Stokes equations.

  16. Wavelet/scalar quantization compression standard for fingerprint images

    SciTech Connect

    Brislawn, C.M.

    1996-06-12

    The US Federal Bureau of Investigation (FBI) has recently formulated a national standard for digitization and compression of gray-scale fingerprint images. Fingerprints are scanned at a spatial resolution of 500 dots per inch, with 8 bits of gray-scale resolution. The compression algorithm for the resulting digital images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition (wavelet/scalar quantization method). The FBI standard produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. The compression standard specifies a class of potential encoders and a universal decoder with sufficient generality to reconstruct compressed images produced by any compliant encoder, allowing flexibility for future improvements in encoder technology. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations.
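
    The uniform scalar quantization at the heart of such wavelet/scalar quantization coders typically widens the zero bin (a 'deadzone') relative to the outer bins, so that small subband coefficients collapse to zero and run-length code well. The sketch below is a generic deadzone quantizer with illustrative parameters q and z, not the FBI specification's exact formulas.

```python
def quantize(c, q, z):
    """Deadzone uniform quantizer: zero bin of width z centred at 0,
    outer bins of width q; returns a signed integer bin index."""
    if abs(c) <= z / 2.0:
        return 0
    sign = 1 if c > 0 else -1
    return sign * (int((abs(c) - z / 2.0) // q) + 1)

def dequantize(p, q, z, r=0.5):
    """Reconstruct at fraction r into the bin (bin midpoint by default)."""
    if p == 0:
        return 0.0
    sign = 1 if p > 0 else -1
    return sign * (z / 2.0 + (abs(p) - 1 + r) * q)
```

    The step sizes are chosen per subband in an adaptive coder; widening z trades a slightly larger worst-case error near zero for many more zero symbols, and hence a higher compression ratio.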

  17. Independent component analysis (ICA) using wavelet subband orthogonality

    NASA Astrophysics Data System (ADS)

    Szu, Harold H.; Hsu, Charles C.; Yamakawa, Takeshi

    1998-03-01

    There are two kinds of RRP: (1) invertible ones, such as the global Fourier transform (FT), local wavelet transform (WT), and adaptive wavelet transform (AWT); and (2) non-invertible ones, e.g. ICA, including global principal component analysis (PCA). The invertible FT and WT can be related to the non-invertible ICA when the continuous transforms are approximated in discrete matrix-vector operations. The landmark accomplishment of ICA is to obtain, by an unsupervised learning algorithm, the edge map as the image feature, as shown by Helsinki researchers using fourth-order statistics -- the kurtosis K(u). This information-theoretic first-principles result is augmented by the orthogonality property of the DWT subbands necessarily used in standard image compression. If we take advantage of the subband decorrelation, we have potentially an efficient utilization of a pair of communication channels if we could send several more mixed subband images through the pair of channels.

  18. An adaptive pseudospectral method for discontinuous problems

    NASA Technical Reports Server (NTRS)

    Augenbaum, Jeffrey M.

    1988-01-01

    The accuracy of adaptively chosen, mapped polynomial approximations is studied for functions with steep gradients or discontinuities. It is shown that, for steep-gradient functions, one can obtain spectral accuracy in the original coordinate system by using polynomial approximations in a transformed coordinate system with substantially fewer collocation points than are necessary using polynomial expansion directly in the original, physical, coordinate system. It is also shown that one can avoid the usual Gibbs oscillation associated with steep-gradient solutions of hyperbolic PDEs by approximation in suitably chosen coordinate systems. Continuous, high-gradient solutions are computed with spectral accuracy (as measured in the physical coordinate system). Discontinuous solutions associated with nonlinear hyperbolic equations can be accurately computed by using an artificial viscosity chosen to smooth out the solution in the mapped, computational domain. Thus, shocks can be effectively resolved on a scale that is subgrid to the resolution available with collocation only in the physical domain. Examples with Fourier and Chebyshev collocation are given.
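
    The collocation machinery underlying such pseudospectral methods can be sketched with the standard Chebyshev differentiation matrix (Trefethen's classic construction); the adaptive coordinate mapping itself is not shown here.

```python
import numpy as np

def cheb(N):
    """Chebyshev collocation points x_j = cos(j*pi/N) and the (N+1)x(N+1)
    differentiation matrix D, so that D @ f differentiates samples of f."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    dX = x[:, None] - x[None, :]
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))  # 'negative sum trick' for the diagonal
    return D, x
```

    Applying D in a mapped coordinate amounts to multiplying by the metric of the map, which is how the adaptive method concentrates points where the solution has a steep gradient.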

  19. Multi-element probabilistic collocation method in high dimensions

    SciTech Connect

    Foo, Jasmine; Karniadakis, George Em

    2010-03-01

    We combine multi-element polynomial chaos with analysis of variance (ANOVA) functional decomposition to enhance the convergence rate of polynomial chaos in high dimensions and in problems with low stochastic regularity. Specifically, we employ the multi-element probabilistic collocation method MEPCM, and so we refer to the new method as MEPCM-A. We investigate the dependence of the convergence of MEPCM-A on two decomposition parameters, the polynomial order {mu} and the effective dimension {nu}, with {nu} <= {mu} for monotonic convergence of the method. We also employ MEPCM-A to obtain error bars for the piezometric head at the Hanford nuclear waste site under stochastic hydraulic conductivity conditions. Finally, we compare the cost of MEPCM-A against Monte Carlo in several hundred dimensions, and we find MEPCM-A to be more efficient for up to 600 dimensions for a specific multi-dimensional integration problem involving a discontinuous function.

  20. Uncertainty Quantification in State Estimation using the Probabilistic Collocation Method

    SciTech Connect

    Lin, Guang; Zhou, Ning; Ferryman, Thomas A.; Tuffner, Francis K.

    2011-03-23

    In this study, a new efficient uncertainty quantification technique, the probabilistic collocation method (PCM) on sparse grid points, is employed to enable the evaluation of uncertainty in state estimation. The PCM allows us to use just a small number of ensembles to quantify the uncertainty in estimating the state variables of power systems. By using sparse grid points, the PCM approach can handle a large number of uncertain parameters in power systems at relatively low computational cost compared with classic Monte Carlo (MC) simulations. The algorithm and procedure are outlined, and the capability of the sparse-grid PCM approach is demonstrated on uncertainty quantification in state estimation of the IEEE 14-bus model as an example. MC simulations have also been conducted to verify the accuracy of the PCM approach. Comparing the mean and standard deviation of the uncertain parameters obtained from MC simulations with the PCM results shows that the PCM approach is computationally more efficient than MC simulations.

  1. Recent advances in (soil moisture) triple collocation analysis

    NASA Astrophysics Data System (ADS)

    Gruber, A.; Su, C.-H.; Zwieback, S.; Crow, W.; Dorigo, W.; Wagner, W.

    2016-03-01

    To date, triple collocation (TC) analysis is one of the most important methods for the global-scale evaluation of remotely sensed soil moisture data sets. In this study we review existing implementations of soil moisture TC analysis as well as investigations of the assumptions underlying the method. Different notations that are used to formulate the TC problem are shown to be mathematically identical. While many studies have investigated issues related to possible violations of the underlying assumptions, only a few TC modifications have been proposed to mitigate the impact of these violations. Moreover, assumptions that are often understood as limitations unique to TC analysis are shown to be common to other conventional performance metrics as well. Noteworthy advances in TC analysis have been made in the way error estimates are presented, moving from absolute error variance estimates to signal-to-noise ratio (SNR) metrics. Here we review existing error presentations and propose the combined investigation of the SNR (expressed in logarithmic units), the unscaled error variances, and the soil moisture sensitivities of the data sets as an optimal strategy for the evaluation of remotely sensed soil moisture data sets.

  2. Application of adaptive subband coding for noisy bandlimited ECG signal processing

    NASA Astrophysics Data System (ADS)

    Aditya, Krishna; Chu, Chee-Hung H.; Szu, Harold H.

    1996-03-01

    An approach to impulsive noise suppression and background normalization of digitized bandlimited electrocardiogram signals is presented. This approach uses adaptive wavelet filters that incorporate the bandlimited a priori information and the shape information of a signal to decompose the data. Empirical results show that the new algorithm has good performance in wideband impulsive noise suppression and background normalization for subsequent wave detection, when compared with subband coding using Daubechies' D4 wavelet without the bandlimited adaptive wavelet transform.

  3. Optical wavelet transform for fingerprint identification

    NASA Astrophysics Data System (ADS)

    MacDonald, Robert P.; Rogers, Steven K.; Burns, Thomas J.; Fielding, Kenneth H.; Warhola, Gregory T.; Ruck, Dennis W.

    1994-03-01

    The Federal Bureau of Investigation (FBI) has recently sanctioned a wavelet fingerprint image compression algorithm developed for reducing storage requirements of digitized fingerprints. This research implements an optical wavelet transform of a fingerprint image, as the first step in an optical fingerprint identification process. Wavelet filters are created from computer-generated holograms of biorthogonal wavelets, the same wavelets implemented in the FBI algorithm. Using a detour phase holographic technique, a complex binary filter mask is created with both symmetry and linear phase. The wavelet transform is implemented with continuous shift using an optical correlation between binarized fingerprints written on a Magneto-Optic Spatial Light Modulator and the biorthogonal wavelet filters. A telescopic lens combination scales the transformed fingerprint onto the filters, providing a means of adjusting the biorthogonal wavelet filter dilation continuously. The wavelet transformed fingerprint is then applied to an optical fingerprint identification process. Comparison between normal fingerprints and wavelet transformed fingerprints shows improvement in the optical identification process, in terms of rotational invariance.

  4. Analysis of photonic Doppler velocimetry data based on the continuous wavelet transform

    SciTech Connect

    Liu Shouxian; Wang Detian; Li Tao; Chen Guanghua; Li Zeren; Peng Qixian

    2011-02-15

    The short-time Fourier transform (STFT) cannot resolve rapid velocity changes in most photonic Doppler velocimetry (PDV) data. A practical analysis method based on the continuous wavelet transform (CWT) was presented to overcome this difficulty. The adaptability of the wavelet family means that the continuous wavelet transform uses an adaptive time window to estimate the instantaneous frequency of signals. The local frequencies of the signal are accurately determined by finding the ridge in the spectrogram of the CWT and are then converted to target velocity according to the Doppler effect. A performance comparison between the CWT and STFT is demonstrated using data from a plate-impact experiment. The results illustrate that the new method is automatic and adequate for the analysis of PDV data.
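The ridge-following step can be sketched on a synthetic linear chirp standing in for a PDV beat signal; the Morlet parameter `w0` and the scale grid below are illustrative choices, not values from the paper:

```python
import numpy as np

# Morlet CWT of a linear chirp; the instantaneous frequency is read off
# as the scale maximizing |W(a, b)| at each time (the "ridge").
fs = 1000.0
t = np.arange(0, 1, 1 / fs)
f_inst = 50 + 100 * t                       # true instantaneous frequency (Hz)
x = np.cos(2 * np.pi * (50 * t + 50 * t**2))

w0 = 6.0                                    # Morlet center frequency (rad)
freqs = np.linspace(20, 200, 120)           # candidate frequencies (Hz)
scales = w0 / (2 * np.pi * freqs)           # scale <-> frequency map

W = np.empty((len(scales), len(t)), dtype=complex)
for i, a in enumerate(scales):
    tw = np.arange(-4 * a, 4 * a, 1 / fs)   # wavelet support +-4a seconds
    psi = np.pi**-0.25 * np.exp(1j * w0 * tw / a) * np.exp(-(tw / a)**2 / 2)
    W[i] = np.convolve(x, np.conj(psi[::-1]), mode="same") / np.sqrt(a)

ridge = np.abs(W).argmax(axis=0)            # best scale at each time
f_est = freqs[ridge]
mid = len(t) // 2
print(f_est[mid], f_inst[mid])              # ~100 Hz at t = 0.5 s
```

In actual PDV analysis the ridge frequency would then be converted to velocity via the Doppler relation v = λ f / 2.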

  5. New fuzzy wavelet network for modeling and control: The modeling approach

    NASA Astrophysics Data System (ADS)

    Ebadat, Afrooz; Noroozi, Navid; Safavi, Ali Akbar; Mousavi, Seyyed Hossein

    2011-08-01

    In this paper, a fuzzy wavelet network is proposed to approximate arbitrary nonlinear functions based on the theory of multiresolution analysis (MRA) of the wavelet transform and fuzzy concepts. The presented network combines TSK fuzzy models with the wavelet transform and the ROLS learning algorithm while still preserving the property of linearity in parameters. In order to reduce the number of fuzzy rules, fuzzy clustering is invoked. In the clustering algorithm, those wavelets that are closer to each other in the sense of the Euclidean norm are placed in a group and are used in the consequent part of a fuzzy rule. Antecedent parts of the rules are Gaussian membership functions. Determination of the deviation parameter is performed with the help of the gold partition method. Here, the mean of each function is derived by averaging the centers of all wavelets that are related to that particular rule. The overall developed fuzzy wavelet network is called fuzzy wave-net, and simulation results show superior performance over previous networks. The present work is complemented by a second part, which focuses on the control aspects and is to be published in this journal [17]. That paper proposes an observer-based self-structuring robust adaptive fuzzy wave-net (FWN) controller for a class of nonlinear uncertain multi-input multi-output systems.

  6. Portal imaging: Performance improvement in noise reduction by means of wavelet processing.

    PubMed

    González-López, Antonio; Morales-Sánchez, Juan; Larrey-Ruiz, Jorge; Bastida-Jumilla, María-Consuelo; Verdú-Monedero, Rafael

    2016-01-01

    This paper discusses the suitability, in terms of noise reduction, of various methods which can be applied to an image type often used in radiation therapy: the portal image. Among these methods, the analysis focuses on those operating in the wavelet domain. Wavelet-based methods tested on natural images, such as the thresholding of the wavelet coefficients, the minimization of the Stein unbiased risk estimator on a linear expansion of thresholds (SURE-LET), and the Bayes least-squares method using as a prior a Gaussian scale mixture (BLS-GSM), are compared with methods that operate in the image domain: an adaptive Wiener filter and a nonlocal means filter (NLM). For the assessment of the performance, the peak signal-to-noise ratio (PSNR), the structural similarity index (SSIM), the Pearson correlation coefficient, and the Spearman rank correlation (ρ) coefficient are used. The performance of the wavelet filters and the NLM method is similar, but the wavelet filters outperform the Wiener filter in terms of portal image denoising. It is shown how the BLS-GSM and NLM filters produce the smoothest images while keeping soft-tissue and bone contrast. As for the computational cost, filters using a decimated wavelet transform (decimated thresholding and SURE-LET) turn out to be the most efficient, with calculation times around 1 s. PMID:26602966
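Two of the full-reference metrics used in the assessment are easy to state concretely; the sketch below computes PSNR and the Pearson coefficient for a synthetic 8-bit image and a noisy copy (the data are made up, and SSIM is omitted since it needs a windowed local computation):

```python
import numpy as np

# PSNR and Pearson correlation between a reference image and a noisy copy.
rng = np.random.default_rng(5)
ref = rng.integers(0, 256, size=(64, 64)).astype(float)   # toy 8-bit image
noisy = np.clip(ref + rng.normal(0, 10, ref.shape), 0, 255)

mse = np.mean((ref - noisy) ** 2)
psnr = 10 * np.log10(255.0**2 / mse)        # peak value 255 for 8-bit data
pearson = np.corrcoef(ref.ravel(), noisy.ravel())[0, 1]
print(round(psnr, 1), round(pearson, 3))
```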

  7. Highly efficient codec based on significance-linked connected-component analysis of wavelet coefficients

    NASA Astrophysics Data System (ADS)

    Chai, Bing-Bing; Vass, Jozsef; Zhuang, Xinhua

    1997-04-01

    Recent success in wavelet coding is mainly attributed to the recognition of the importance of data organization. Several very competitive wavelet codecs have been developed, namely, Shapiro's Embedded Zerotree Wavelets (EZW), Servetto et al.'s Morphological Representation of Wavelet Data (MRWD), and Said and Pearlman's Set Partitioning in Hierarchical Trees (SPIHT). In this paper, we propose a new image compression algorithm called Significance-Linked Connected-Component Analysis (SLCCA) of wavelet coefficients. SLCCA exploits both within-subband clustering of significant coefficients and cross-subband dependency in significant fields. A so-called significance link between connected components is designed to reduce the positional overhead of MRWD. In addition, the magnitudes of the significant coefficients are encoded in bit-plane order to match the probability model of the adaptive arithmetic coder. Experiments show that SLCCA outperforms both EZW and MRWD, and performs on par with SPIHT. Furthermore, it is observed that SLCCA generally has the best performance on images with a large portion of texture. When applied to fingerprint image compression, it outperforms the FBI's wavelet scalar quantization by about 1 dB.

  8. Robust rate-control for wavelet-based image coding via conditional probability models.

    PubMed

    Gaubatz, Matthew D; Hemami, Sheila S

    2007-03-01

    Real-time rate-control for wavelet image coding requires characterization of the rate required to code quantized wavelet data. An ideal robust solution can be used with any wavelet coder and any quantization scheme. A large number of wavelet quantization schemes (perceptual and otherwise) are based on scalar dead-zone quantization of wavelet coefficients. A key to performing rate-control is, thus, fast, accurate characterization of the relationship between rate and quantization step size, the R-Q curve. A solution is presented that uses two invocations of the coder to estimate the slope of each R-Q curve via probability modeling. The method is robust to choices of probability models, quantization schemes, and wavelet coders. Because of this extreme robustness to probability modeling, a fast approximation to spatially adaptive probability modeling can be used in the solution as well. With respect to achieving a target rate, the proposed approach and the associated fast approximation yield average percentage errors around 0.5% and 1.0% on images in the test set. By comparison, two-coding-pass rho-domain modeling yields errors around 2.0%, and post-compression rate-distortion optimization yields average errors of around 1.0% at rates below 0.5 bits per pixel (bpp), decreasing to about 0.5% at 1.0 bpp; both methods exhibit more competitive performance on the larger images. The proposed method and fast approximation are also similar in speed to the other state-of-the-art methods. In addition to possessing speed and accuracy, the proposed method does not require any training and can maintain precise control over wavelet step sizes, which adds flexibility to a wavelet-based image-coding system.
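The R-Q relationship at the heart of this scheme can be illustrated directly (this is not the authors' coder, just the underlying idea): dead-zone quantize a Laplacian-like subband at several step sizes and approximate the rate by the empirical entropy of the quantizer indices.

```python
import numpy as np

# Rate vs. quantization step for scalar dead-zone quantization of
# synthetic Laplacian "wavelet coefficients".
rng = np.random.default_rng(1)
coeffs = rng.laplace(scale=2.0, size=100_000)   # stand-in wavelet subband

def deadzone_rate(c, step):
    # dead-zone quantizer indices: sign(c) * floor(|c| / step)
    q = np.sign(c) * np.floor(np.abs(c) / step)
    _, counts = np.unique(q, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()              # bits per coefficient

steps = [0.25, 0.5, 1.0, 2.0, 4.0]
rates = [deadzone_rate(coeffs, s) for s in steps]
print(dict(zip(steps, np.round(rates, 2))))     # rate falls as step grows
```

Rate-control then amounts to inverting this monotone curve: pick the step size whose predicted rate meets the target.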

  9. Iterated oversampled filter banks and wavelet frames

    NASA Astrophysics Data System (ADS)

    Selesnick, Ivan W.; Sendur, Levent

    2000-12-01

    This paper takes up the design of wavelet tight frames that are analogous to Daubechies orthonormal wavelets, that is, the design of minimal-length wavelet filters satisfying certain polynomial properties, but now in the oversampled case. The oversampled dyadic DWT considered in this paper is based on a single scaling function and two distinct wavelets. Having more wavelets than necessary gives a closer spacing between adjacent wavelets within the same scale. As a result, the transform is nearly shift-invariant and can be used to improve denoising. Because the associated time-frequency lattice preserves the dyadic structure of the critically sampled DWT, it can be used with tree-based denoising algorithms that exploit parent-child correlation.

  10. Wavelet analysis in two-dimensional tomography

    NASA Astrophysics Data System (ADS)

    Burkovets, Dimitry N.

    2002-02-01

    The diagnostic possibilities of wavelet analysis of coherent images of connective tissue for the diagnosis of its pathological changes are investigated. The effectiveness of polarization selection in obtaining wavelet-coefficient images is also shown. The wavelet structures characterizing skin psoriasis and bone-tissue osteoporosis have been analyzed. The histological sections of physiologically normal and pathologically changed samples of connective tissue of human skin and spongy bone tissue have been analyzed.

  11. Wavelet analysis of epileptic spikes

    NASA Astrophysics Data System (ADS)

    Latka, Miroslaw; Was, Ziemowit; Kozik, Andrzej; West, Bruce J.

    2003-05-01

    Interictal spikes and sharp waves in human EEG are characteristic signatures of epilepsy. These potentials originate as a result of synchronous pathological discharge of many neurons. The reliable detection of such potentials has been a long-standing problem in EEG analysis, especially after long-term monitoring became common in the investigation of epileptic patients. The traditional definition of a spike is based on its amplitude, duration, sharpness, and emergence from its background. However, spike detection systems built solely around this definition are not reliable due to the presence of numerous transients and artifacts. We use the wavelet transform to analyze the properties of EEG manifestations of epilepsy. We demonstrate that the behavior of the wavelet transform of epileptic spikes across scales can constitute the foundation of a relatively simple yet effective detection algorithm.
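The across-scales idea can be sketched as follows (a hedged illustration, not the authors' algorithm): a genuine transient produces a large wavelet response that persists over several scales, while background noise does not. Here a Mexican-hat (Ricker) wavelet is applied to a synthetic "EEG" of noise plus one sharp spike.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 2048
eeg = rng.standard_normal(n)                   # background "EEG"
spike_loc = 1200
eeg[spike_loc - 2:spike_loc + 3] += np.array([2.0, 6.0, 10.0, 6.0, 2.0])

def ricker(points, a):
    # Mexican-hat wavelet sampled on `points` samples at width parameter a
    t = np.arange(points) - (points - 1) / 2
    return (1 - (t / a)**2) * np.exp(-(t / a)**2 / 2)

scales = [2, 4, 8]
resp = np.vstack([
    np.convolve(eeg, ricker(10 * a + 1, a), mode="same") for a in scales
])

# Flag samples whose normalized response is large at *all* scales considered.
z = np.abs(resp) / resp.std(axis=1, keepdims=True)
candidates = np.flatnonzero((z > 4).all(axis=0))
print(candidates)                              # indices near the true spike
```

Requiring the threshold to be exceeded jointly across scales is what suppresses the single-scale transients and artifacts mentioned above.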

  12. Haar Wavelet Analysis of Climatic Time Series

    NASA Astrophysics Data System (ADS)

    Zhang, Zhihua; Moore, John; Grinsted, Aslak

    2014-05-01

    In order to extract the intrinsic information of climatic time series from background red noise, we first give an analytic formula for the distribution of Haar wavelet power spectra of red noise in a rigorous statistical framework. The relation between scale a and Fourier period T for the Morlet wavelet is a = 0.97T. However, for the Haar wavelet, the corresponding formula is a = 0.37T. Since for any time series of time step δt and total length Nδt, the range of scales in wavelet-based time series analysis runs from the smallest resolvable scale 2δt to the largest scale Nδt, Haar wavelet analysis can extract more low-frequency intrinsic information. Finally, we use our method to analyze the Arctic Oscillation (AO), which is a key aspect of climate variability in the Northern Hemisphere, and discover a great change in fundamental properties of the AO, commonly called a regime shift or tipping point. Our partial results have been published as follows: [1] Z. Zhang, J.C. Moore and A. Grinsted, Haar wavelet analysis of climatic time series, Int. J. Wavelets, Multiresol. & Inf. Process., in press, 2013; [2] Z. Zhang, J.C. Moore, Comment on "Significance tests for the wavelet power and the wavelet power spectrum", Ann. Geophys., 30:12, 2012.
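A minimal sketch of the Haar analysis step behind these power spectra, applied to an AR(1) "red noise" surrogate (the AR coefficient and lengths are illustrative): pairwise averages carry the approximation, pairwise differences the detail, and the mean squared detail at each level is the scale-wise power.

```python
import numpy as np

def haar_step(x):
    # One level of the orthonormal Haar transform.
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail
    return a, d

def haar_power_spectrum(x, levels):
    powers, a = [], x
    for _ in range(levels):
        a, d = haar_step(a)
        powers.append(np.mean(d**2))       # power at this scale
    return powers

rng = np.random.default_rng(3)
# AR(1) red-noise stand-in for a climatic background series:
noise = np.empty(1024)
noise[0] = rng.standard_normal()
for i in range(1, 1024):
    noise[i] = 0.7 * noise[i - 1] + rng.standard_normal()

powers = haar_power_spectrum(noise, 5)
print(powers)                              # power rises toward coarse scales
```

The rise of power toward coarse scales is exactly the red-noise background against which the paper's analytic significance test is framed.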

  13. Entangled Husimi Distribution and Complex Wavelet Transformation

    NASA Astrophysics Data System (ADS)

    Hu, Li-Yun; Fan, Hong-Yi

    2010-05-01

    Similar in spirit to the preceding work (Int. J. Theor. Phys. 48:1539, 2009), where the relationship between the wavelet transformation and the Husimi distribution function is revealed, we extend this kind of relationship to the entangled case. We find that the optical complex wavelet transformation can be used to study the entangled Husimi distribution function in the phase space theory of quantum optics. We prove that, up to a Gaussian function, the entangled Husimi distribution function of a two-mode quantum state |ψ⟩ is just the modulus square of the complex wavelet transform of e^{-|η|²/2} with ψ(η) being the mother wavelet.

  14. Wavelet Analysis of Soil Reflectance for the Characterization of Soil Properties

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Wavelet analysis has proven to be effective in many fields including signal processing and digital image analysis. Recently, it has been adapted to spectroscopy, where the reflectance of various materials is measured with respect to wavelength (nm) or wave number (cm-1). Spectra can cover broad wave...

  15. Wavelet Sparse Approximate Inverse Preconditioners

    NASA Technical Reports Server (NTRS)

    Chan, Tony F.; Tang, W.-P.; Wan, W. L.

    1996-01-01

    There is an increasing interest in using sparse approximate inverses as preconditioners for Krylov subspace iterative methods. Recent studies of Grote and Huckle and of Chow and Saad also show that sparse approximate inverse preconditioners can be effective for a variety of matrices, e.g. the Harwell-Boeing collection. Nonetheless, a drawback is that this approach requires rapid decay of the inverse entries for a sparse approximate inverse to be possible. However, for the class of matrices that come from elliptic PDE problems, this assumption may not necessarily hold. Our main idea is to look for a basis, other than the standard one, in which a sparse representation of the inverse is feasible. A crucial observation is that the kind of matrices we are interested in typically have a piecewise smooth inverse. We exploit this fact by applying wavelet techniques to construct a better sparse approximate inverse in the wavelet basis. We justify theoretically and numerically that our approach is effective for matrices with smooth inverse. We emphasize that in this paper we have only presented the idea of wavelet approximate inverses and demonstrated its potential but have not yet developed a highly refined and efficient algorithm.
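A toy numerical illustration of the idea (sizes and thresholds are arbitrary): the inverse of a 1-D Laplacian is dense but piecewise smooth, so after an orthonormal Haar change of basis far fewer of its entries exceed a given drop tolerance.

```python
import numpy as np

n = 64
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1-D Laplacian
Ainv = np.linalg.inv(A)                                 # dense, smooth entries

def haar_matrix(n):
    # Orthonormal multilevel Haar transform matrix, n a power of two.
    W = np.eye(n)
    m = n
    while m > 1:
        h = np.zeros((m, m))
        for i in range(m // 2):
            h[i, 2 * i] = h[i, 2 * i + 1] = 1 / np.sqrt(2)       # averages
            h[m // 2 + i, 2 * i] = 1 / np.sqrt(2)                # differences
            h[m // 2 + i, 2 * i + 1] = -1 / np.sqrt(2)
        step = np.eye(n)
        step[:m, :m] = h
        W = step @ W
        m //= 2
    return W

W = haar_matrix(n)
B = W @ Ainv @ W.T                      # the inverse in the Haar basis
tol = 1e-3 * np.abs(Ainv).max()
nnz_std = int((np.abs(Ainv) > tol).sum())
nnz_haar = int((np.abs(B) > tol).sum())
print(nnz_std, nnz_haar)                # far fewer large entries in Haar basis
```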

  16. Denoising seismic data using wavelet methods: a comparison study

    NASA Astrophysics Data System (ADS)

    Hloupis, G.; Vallianatos, F.

    2009-04-01

    In order to derive onset times, amplitudes or other useful characteristics from a seismogram, the usual denoising procedure involves the use of a linear pass-band filter. This family of filters is zero-phase, which is desirable, but their efficiency is reduced when transients exist near the seismic signal. The alternative solution is the Wiener filter, which focuses on minimizing the mean square error between the recorded and expected signals. Its main disadvantage is the assumption that signal and noise are stationary. This assumption does not hold for seismic signals, motivating denoising solutions that do not assume stationarity. Solutions based on the wavelet transform (WT) have proved effective for denoising problems across several areas. Here we present recent WT denoising methods (WDM) that will later be applied to seismic sequences of the Seismological Network of Crete. Wavelet denoising schemes have proved to be well adapted to several types of signals. For non-stationary signals, such as seismograms, the use of linear and non-linear wavelet denoising methods seems promising. The contribution of this study is a comparison of wavelet denoising methods suitable for seismic signals, which previous studies have shown to be superior to appropriate conventional filtering techniques. The importance of wavelet denoising methods relies on two facts: they recover the seismic signals with fewer artifacts than conventional filters (for high-SNR seismograms), and at the same time they can provide satisfactory representations (for detecting the earthquake's primary arrival) for low-SNR seismograms or microearthquakes. The latter is very important for a possible development of an automatic procedure for the regular daily detection of small or non-regional earthquakes, especially when the number of stations is quite large.
    Initially, their performance is measured over a database of synthetic seismic signals in order to evaluate the better wavelet
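A minimal wavelet-denoising sketch in the spirit of the methods compared (not any specific scheme from the study): multilevel Haar transform, soft-thresholding of the detail coefficients with the universal threshold, inverse transform. The "seismogram" is a synthetic damped oscillation, not Crete network data.

```python
import numpy as np

def haar_fwd(x, levels):
    a, details = np.asarray(x, float), []
    for _ in range(levels):
        a, d = (a[0::2] + a[1::2]) / np.sqrt(2), (a[0::2] - a[1::2]) / np.sqrt(2)
        details.append(d)
    return a, details

def haar_inv(a, details):
    for d in reversed(details):
        up = np.empty(2 * len(a))
        up[0::2], up[1::2] = (a + d) / np.sqrt(2), (a - d) / np.sqrt(2)
        a = up
    return a

rng = np.random.default_rng(0)
n = 4096
t = np.arange(n) / 100.0
clean = 3 * np.exp(-0.2 * (t - 10)) * np.sin(2 * np.pi * 1.5 * t) * (t > 10)
noisy = clean + rng.normal(0.0, 0.2, n)

a, details = haar_fwd(noisy, 6)
thr = 0.2 * np.sqrt(2 * np.log(n))          # universal threshold (known sigma)
details = [np.sign(d) * np.maximum(np.abs(d) - thr, 0.0) for d in details]
denoised = haar_inv(a, details)

mse = lambda u: np.mean((u - clean)**2)
print(mse(noisy), mse(denoised))            # thresholding lowers the MSE
```

In practice the noise level would be estimated from the finest-scale coefficients (e.g. via the median absolute deviation) rather than assumed known.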

  17. THE LOSS OF ACCURACY OF STOCHASTIC COLLOCATION METHOD IN SOLVING NONLINEAR DIFFERENTIAL EQUATIONS WITH RANDOM INPUT DATA

    SciTech Connect

    Webster, Clayton G; Tran, Hoang A; Trenchea, Catalin S

    2013-01-01

    In this paper we show how the stochastic collocation method (SCM) can fail to converge for nonlinear differential equations with random coefficients. First, we consider the Navier-Stokes equations with uncertain viscosity and derive error estimates for the stochastic collocation discretization. Our analysis gives some indicators of how the nonlinearity negatively affects the accuracy of the method. The stochastic collocation method is then applied to the noisy Lorenz system. Simulation results demonstrate that the solution of a nonlinear equation can be highly irregular with respect to the random data, and in such cases the stochastic collocation method cannot capture the correct solution.
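For contrast, here is plain stochastic collocation on a problem where it works well, because the solution is smooth in the random parameter; the paper's point is that chaotic dynamics such as the Lorenz system destroy exactly this smoothness. The toy problem is u' = -νu, u(0) = 1, with ν uniform on [0.5, 1.5] (an illustrative choice, not the paper's setup).

```python
import numpy as np

# Gauss-Legendre collocation in the random parameter nu ~ U[0.5, 1.5].
nodes, weights = np.polynomial.legendre.leggauss(5)   # nodes on [-1, 1]
nu = 1.0 + 0.5 * nodes                                # map to [0.5, 1.5]
w = weights / 2.0                                     # uniform probability weights

u_T = np.exp(-nu * 1.0)            # exact deterministic solve at each point
sc_mean = np.sum(w * u_T)          # collocation estimate of E[u(1)]

exact = np.exp(-0.5) - np.exp(-1.5)   # E[exp(-nu)] for nu ~ U[0.5, 1.5]
print(sc_mean, exact)
```

Five collocation points already match the exact mean to many digits here; for a solution that is irregular in the random data, no moderate number of points would suffice.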

  18. Wavelet-based pavement image compression and noise reduction

    NASA Astrophysics Data System (ADS)

    Zhou, Jian; Huang, Peisen S.; Chiang, Fu-Pen

    2005-08-01

    For any automated distress inspection system, typically a huge number of pavement images are collected. Use of an appropriate image compression algorithm can save disk space, reduce the saving time, increase the inspection distance, and increase the processing speed. In this research, a modified EZW (Embedded Zero-tree Wavelet) coding method, which is an improved version of the widely used EZW coding method, is proposed. This method, unlike the two-pass approach used in the original EZW method, uses only one pass to encode both the coordinates and magnitudes of wavelet coefficients. An adaptive arithmetic encoding method is also implemented to encode four symbols assigned by the modified EZW into binary bits. By applying a thresholding technique to terminate the coding process, the modified EZW coding method can compress the image and reduce noise simultaneously. The new method is much simpler and faster. Experimental results also show that the compression ratio was increased one and one-half times compared to the EZW coding method. The compressed and de-noised data can be used to reconstruct wavelet coefficients for off-line pavement image processing such as distress classification and quantification.

  19. Six-Degree-of-Freedom Trajectory Optimization Utilizing a Two-Timescale Collocation Architecture

    NASA Technical Reports Server (NTRS)

    Desai, Prasun N.; Conway, Bruce A.

    2005-01-01

    Six-degree-of-freedom (6DOF) trajectory optimization of a reentry vehicle is solved using a two-timescale collocation methodology. This class of 6DOF trajectory problems is characterized by two distinct timescales in their governing equations, where a subset of the states have high-frequency dynamics (the rotational equations of motion) while the remaining states (the translational equations of motion) vary comparatively slowly. With conventional collocation methods, the 6DOF problem size becomes extraordinarily large and difficult to solve. Utilizing the two-timescale collocation architecture, the problem size is reduced significantly. The converged solution shows a realistic landing profile and captures the appropriate high-frequency rotational dynamics. A large reduction in the overall problem size (by 55%) is attained with the two-timescale architecture as compared to the conventional single-timescale collocation method. Consequently, optimum 6DOF trajectory problems can now be solved efficiently using collocation, which was not previously possible for a system with two distinct timescales in the governing states.

  20. A space-time collocation scheme for modified anomalous subdiffusion and nonlinear superdiffusion equations

    NASA Astrophysics Data System (ADS)

    Bhrawy, A. H.

    2016-01-01

    This paper reports a new spectral collocation technique for solving the time-space modified anomalous subdiffusion equation with a nonlinear source term subject to Dirichlet and Neumann boundary conditions. This model equation governs the evolution of the probability density function that describes anomalously diffusing particles. Anomalous diffusion is ubiquitous in physical and biological systems where trapping and binding of particles can occur. A space-time Jacobi collocation scheme is investigated for solving such problems. The main advantage of the proposed scheme is that shifted Jacobi Gauss-Lobatto and shifted Jacobi Gauss-Radau collocation approximations are employed for the spatial and temporal discretizations, respectively. Thereby, the problem is successfully reduced to a system of algebraic equations. The numerical results obtained by this algorithm have been compared with those of various other methods in order to demonstrate the high accuracy and efficiency of the proposed method. Indeed, even for a relatively limited number of Gauss-Lobatto and Gauss-Radau collocation nodes, the absolute error in our numerical solutions is sufficiently small.
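The generic spectral-collocation mechanism, reducing a differential equation to an algebraic system on Gauss-Lobatto nodes, can be sketched with the classical Chebyshev differentiation matrix (Trefethen's construction; note this uses Chebyshev rather than the shifted Jacobi nodes of the paper) on the model problem u'' = f, u(±1) = 0:

```python
import numpy as np

def cheb(N):
    # Chebyshev differentiation matrix and Gauss-Lobatto nodes (Trefethen).
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))          # diagonal via negative row sums
    return D, x

N = 24
D, x = cheb(N)
D2 = D @ D                               # second-derivative collocation matrix
f = -np.pi**2 * np.sin(np.pi * x)        # chosen so that u_exact = sin(pi x)
u = np.zeros(N + 1)
u[1:-1] = np.linalg.solve(D2[1:-1, 1:-1], f[1:-1])   # impose u(+-1) = 0

err = np.abs(u - np.sin(np.pi * x)).max()
print(err)                               # spectral accuracy: tiny error
```

The "limited number of nodes" claim in the abstract is the same phenomenon: for smooth solutions the collocation error decays exponentially in N.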

  1. Miniaturized Multi-Band Antenna via Element Collocation

    SciTech Connect

    Martin, R P

    2012-06-01

    The resonant frequency of a microstrip patch antenna may be reduced through the addition of slots in the radiating element. Expanding upon this concept in favor of a significant reduction in the tuned width of the radiator, nearly 60% of the antenna metallization is removed, as seen in the top view of the antenna’s radiating element (shown in red, below, left). To facilitate an increase in the gain of the antenna, the radiator is suspended over the ground plane (green) by an air substrate at a height of 0.250" while being mechanically supported by 0.030" thick Rogers RO4003 laminate in the same profile as the element. Although the entire surface of the antenna (red) provides 2.45 GHz operation with insignificant negative effects on performance after material removal, the smaller square microstrip in the middle must be isolated from the additional aperture in order to afford higher frequency operation. A low insertion loss path centered at 2.45 GHz may simultaneously provide considerable attenuation at additional frequencies through the implementation of a series-parallel, resonant reactive path. However, an inductive reactance alone will not permit lower frequency energy to propagate across the intended discontinuity. To mitigate this, a capacitance is introduced in series with the inductor, generating a resonance at 2.45 GHz with minimum forward transmission loss. Four of these reactive pairs are placed between the coplanar elements as shown. Therefore, the aperture of the lower-frequency outer segment includes the smaller radiator while the higher frequency section is isolated from the additional material. In order to avoid cross-polarization losses due to the orientation of a transmitter or receiver in reference to the antenna, circular polarization is realized by a quadrature coupler for each collocated antenna as seen in the bottom view of the antenna (right). To generate electromagnetic radiation concentrically rotating about the direction of propagation

  2. Daubechies wavelets for linear scaling density functional theory.

    PubMed

    Mohr, Stephan; Ratcliff, Laura E; Boulanger, Paul; Genovese, Luigi; Caliste, Damien; Deutsch, Thierry; Goedecker, Stefan

    2014-05-28

    We demonstrate that Daubechies wavelets can be used to construct a minimal set of optimized localized adaptively contracted basis functions in which the Kohn-Sham orbitals can be represented with an arbitrarily high, controllable precision. Ground state energies and the forces acting on the ions can be calculated in this basis with the same accuracy as if they were calculated directly in a Daubechies wavelets basis, provided that the amplitude of these adaptively contracted basis functions is sufficiently small on the surface of the localization region, which is guaranteed by the optimization procedure described in this work. This approach reduces the computational costs of density functional theory calculations, and can be combined with sparse matrix algebra to obtain linear scaling with respect to the number of electrons in the system. Calculations on systems of 10,000 atoms or more thus become feasible in a systematic basis set with moderate computational resources. Further computational savings can be achieved by exploiting the similarity of the adaptively contracted basis functions for closely related environments, e.g., in geometry optimizations or combined calculations of neutral and charged systems. PMID:24880269

  3. Daubechies wavelets for linear scaling density functional theory

    SciTech Connect

    Mohr, Stephan; Ratcliff, Laura E.; Genovese, Luigi; Caliste, Damien; Deutsch, Thierry; Boulanger, Paul; Goedecker, Stefan

    2014-05-28

    We demonstrate that Daubechies wavelets can be used to construct a minimal set of optimized localized adaptively contracted basis functions in which the Kohn-Sham orbitals can be represented with an arbitrarily high, controllable precision. Ground state energies and the forces acting on the ions can be calculated in this basis with the same accuracy as if they were calculated directly in a Daubechies wavelets basis, provided that the amplitude of these adaptively contracted basis functions is sufficiently small on the surface of the localization region, which is guaranteed by the optimization procedure described in this work. This approach reduces the computational costs of density functional theory calculations, and can be combined with sparse matrix algebra to obtain linear scaling with respect to the number of electrons in the system. Calculations on systems of 10 000 atoms or more thus become feasible in a systematic basis set with moderate computational resources. Further computational savings can be achieved by exploiting the similarity of the adaptively contracted basis functions for closely related environments, e.g., in geometry optimizations or combined calculations of neutral and charged systems.

  4. A new wavelet-based thin plate element using B-spline wavelet on the interval

    NASA Astrophysics Data System (ADS)

    Jiawei, Xiang; Xuefeng, Chen; Zhengjia, He; Yinghong, Zhang

    2008-01-01

    By combining wavelet theory from mathematics with the variational principle of the finite element method, a class of wavelet-based plate elements is constructed. In the construction of the wavelet-based plate element, the element displacement field, represented by the coefficients of wavelet expansions in wavelet space, is transformed into physical degrees of freedom in finite element space via the corresponding two-dimensional C1-type transformation matrix. Then, based on the generalized potential energy functional of thin plate bending and vibration problems, the scaling functions of the B-spline wavelet on the interval (BSWI) at different scales are employed directly to form the multiscale finite element approximation basis, so as to construct the BSWI plate element via the variational principle. The BSWI plate element combines the approximation accuracy of B-spline functions with the adaptability of wavelet-based elements for structural analysis. Some static and dynamic numerical examples are studied to demonstrate the performance of the present element.

  5. On the Stability of Collocated Controllers in the Presence of Uncertain Nonlinearities and Other Perils

    NASA Technical Reports Server (NTRS)

    Joshi, S. M.

    1985-01-01

    Robustness properties are investigated for two types of controllers for large flexible space structures, both of which use collocated sensors and actuators. The first type is an attitude controller which uses negative definite feedback of measured attitude and rate, while the second type is a damping enhancement controller which uses only velocity (rate) feedback. It is proved that collocated attitude controllers preserve closed-loop global asymptotic stability when linear actuator/sensor dynamics satisfying certain phase conditions are present, or when monotonically increasing nonlinearities are present. For velocity feedback controllers, global asymptotic stability is proved under much weaker conditions. In particular, they have 90° phase margin and can tolerate nonlinearities belonging to the (0, ∞) sector in the actuator/sensor characteristics. The results significantly enhance the viability of both types of collocated controllers, especially when the available information about the large space structure (LSS) parameters is inadequate or inaccurate.
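The robustness of collocated rate feedback can be illustrated numerically (modal data here are made up for the demonstration): with a collocated actuator/sensor pair, the feedback u = -k·(rate) extracts energy for any k > 0, even through a memoryless sector nonlinearity such as a tanh saturation.

```python
import numpy as np

omega = np.array([1.0, 3.7])          # modal frequencies (rad/s), illustrative
k = 0.8                               # rate-feedback gain

def energy(q, v):
    # total vibrational energy of the undamped modes
    return 0.5 * np.sum(v**2) + 0.5 * np.sum((omega * q)**2)

q = np.array([1.0, -0.5])             # modal displacements
v = np.array([0.0, 0.2])              # modal rates
dt = 1e-3
E0 = energy(q, v)
for _ in range(20_000):               # integrate 20 s
    y = np.sum(v)                     # collocated rate measurement (b = [1, 1])
    u = -k * np.tanh(y)               # (0, inf)-sector actuator nonlinearity
    a = -omega**2 * q + u             # same influence on both modes (collocated)
    v = v + dt * a                    # semi-implicit (symplectic) Euler
    q = q + dt * v
print(E0, energy(q, v))               # vibrational energy decays
```

The dissipation argument is one line: the power delivered is u·y = -k·y·tanh(y) ≤ 0, independent of the modal data, which is why the controller tolerates the parameter uncertainty described above.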

  6. Fast Spectral Collocation Method for Surface Integral Equations of Potential Problems in a Spheroid

    PubMed Central

    Xu, Zhenli; Cai, Wei

    2009-01-01

    This paper proposes a new technique to speed up the computation of the matrix of spectral collocation discretizations of surface single and double layer operators over a spheroid. The layer densities are approximated by a spectral expansion of spherical harmonics and the spectral collocation method is then used to solve surface integral equations of potential problems in a spheroid. With the proposed technique, the computation cost of collocation matrix entries is reduced from 𝒪(M²N⁴) to 𝒪(MN⁴), where N² is the number of spherical harmonics (i.e., the size of the matrix) and M is the number of one-dimensional integration quadrature points. Numerical results demonstrate the spectral accuracy of the method. PMID:20414359

  7. Exponential time differencing methods with Chebyshev collocation for polymers confined by interacting surfaces

    SciTech Connect

    Liu, Yi-Xin Zhang, Hong-Dong

    2014-06-14

    We present a fast and accurate numerical method for the self-consistent field theory calculations of confined polymer systems. It introduces an exponential time differencing method (ETDRK4) based on Chebyshev collocation, which exhibits fourth-order accuracy in temporal domain and spectral accuracy in spatial domain, to solve the modified diffusion equations. Similar to the approach proposed by Hur et al. [Macromolecules 45, 2905 (2012)], non-periodic boundary conditions are adopted to model the confining walls with or without preferential interactions with polymer species, avoiding the use of surface field terms and the mask technique in a conventional approach. The performance of ETDRK4 is examined in comparison with the operator splitting methods with either Fourier collocation or Chebyshev collocation. Numerical experiments show that our exponential time differencing method is more efficient than the operator splitting methods in high accuracy calculations. This method has been applied to diblock copolymers confined by two parallel flat surfaces.
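The ETDRK4 scheme the abstract refers to can be illustrated on a scalar toy problem. The following is a minimal sketch using the standard Cox–Matthews coefficients on a scalar ODE u' = Lu + N(u); it is not the authors' Chebyshev-collocation SCFT solver, and the test equation is an illustrative choice:

```python
import math

def etdrk4_step(u, h, L, N):
    # One Cox-Matthews ETDRK4 step for the scalar ODE u' = L*u + N(u).
    # (For very small |L*h| the phi-coefficients below suffer cancellation
    # and would need a series or contour-integral evaluation.)
    z = L * h
    E, E2 = math.exp(z), math.exp(z / 2.0)
    Q = (E2 - 1.0) / L                      # L^{-1}(e^{z/2} - 1)
    a = E2 * u + Q * N(u)
    b = E2 * u + Q * N(a)
    c = E2 * a + Q * (2.0 * N(b) - N(u))
    f1 = h * (-4.0 - z + E * (4.0 - 3.0 * z + z * z)) / z**3
    f2 = h * (2.0 + z + E * (-2.0 + z)) / z**3
    f3 = h * (-4.0 - 3.0 * z - z * z + E * (4.0 - z)) / z**3
    return E * u + f1 * N(u) + 2.0 * f2 * (N(a) + N(b)) + f3 * N(c)

# Toy problem: u' = -2u + u^2, u(0) = 1, exact solution u(t) = 2/(e^{2t} + 1).
u, h = 1.0, 0.01
for _ in range(100):                        # integrate to t = 1
    u = etdrk4_step(u, h, -2.0, lambda v: v * v)
exact = 2.0 / (math.exp(2.0) + 1.0)
print(abs(u - exact))                       # fourth-order accurate: error well below 1e-6
```

The stiff linear part is integrated exactly through the exponentials, which is why such schemes tolerate much larger time steps than explicit Runge–Kutta on the modified diffusion equation.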

  8. Exponential time differencing methods with Chebyshev collocation for polymers confined by interacting surfaces

    NASA Astrophysics Data System (ADS)

    Liu, Yi-Xin; Zhang, Hong-Dong

    2014-06-01

    We present a fast and accurate numerical method for the self-consistent field theory calculations of confined polymer systems. It introduces an exponential time differencing method (ETDRK4) based on Chebyshev collocation, which exhibits fourth-order accuracy in temporal domain and spectral accuracy in spatial domain, to solve the modified diffusion equations. Similar to the approach proposed by Hur et al. [Macromolecules 45, 2905 (2012)], non-periodic boundary conditions are adopted to model the confining walls with or without preferential interactions with polymer species, avoiding the use of surface field terms and the mask technique in a conventional approach. The performance of ETDRK4 is examined in comparison with the operator splitting methods with either Fourier collocation or Chebyshev collocation. Numerical experiments show that our exponential time differencing method is more efficient than the operator splitting methods in high accuracy calculations. This method has been applied to diblock copolymers confined by two parallel flat surfaces.

  9. A New Adaptive Image Denoising Method Based on Neighboring Coefficients

    NASA Astrophysics Data System (ADS)

    Biswas, Mantosh; Om, Hari

    2016-03-01

Many effective techniques have been proposed for image denoising, including NeighShrink, the improved adaptive wavelet denoising method based on neighboring coefficients (IAWDMBNC), the improved wavelet shrinkage technique for image denoising (IWST), the local adaptive Wiener filter (LAWF), wavelet packet thresholding using median and Wiener filters (WPTMWF), and the adaptive image denoising method based on thresholding (AIDMT). These techniques rely on a local statistical description of the neighboring coefficients in a window. However, they often fail to produce high-quality images because their thresholds cannot modify and remove many small wavelet coefficients simultaneously. In this paper, a new image denoising method is proposed that shrinks the noisy coefficients using an adaptive threshold. Our method overcomes these drawbacks and outperforms the NeighShrink, IAWDMBNC, IWST, LAWF, WPTMWF, and AIDMT denoising methods.
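The neighborhood-based shrinkage shared by these methods can be sketched in a few lines. The following is a generic NeighShrink-style rule (universal threshold, 3×3 window) applied to a synthetic coefficient array; it is an illustration of the family of methods discussed, not the paper's proposed adaptive threshold:

```python
import numpy as np

def neighshrink(detail, sigma, win=3):
    # Shrink each wavelet detail coefficient by the energy S^2 of its
    # win x win neighbourhood: d -> max(0, 1 - lambda^2/S^2) * d,
    # with lambda the universal threshold sigma*sqrt(2*log n).
    n = detail.size
    lam2 = 2.0 * sigma**2 * np.log(n)
    pad = win // 2
    padded = np.pad(detail, pad, mode="reflect")
    S2 = np.zeros_like(detail, dtype=float)
    for di in range(win):
        for dj in range(win):
            S2 += padded[di:di + detail.shape[0], dj:dj + detail.shape[1]] ** 2
    beta = np.maximum(0.0, 1.0 - lam2 / np.maximum(S2, 1e-12))
    return beta * detail

rng = np.random.default_rng(0)
clean = np.zeros((32, 32)); clean[8:12, 8:12] = 10.0   # a few large "signal" coefficients
noisy = clean + rng.normal(0.0, 1.0, clean.shape)
denoised = neighshrink(noisy, sigma=1.0)
print(np.abs(denoised - clean).mean() < np.abs(noisy - clean).mean())  # True: noise suppressed
```

Isolated small (noise) coefficients sit in low-energy neighborhoods and are zeroed, while clustered large (signal) coefficients survive nearly unshrunk, which is exactly the behavior the window-based statistics are meant to capture.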

  10. Fast fractal image compression with triangulation wavelets

    NASA Astrophysics Data System (ADS)

    Hebert, D. J.; Soundararajan, Ezekiel

    1998-10-01

We address the problem of improving the performance of wavelet based fractal image compression by applying efficient triangulation methods. We construct iterative function systems (IFS) in the tradition of Barnsley and Jacquin, using non-uniform triangular range and domain blocks instead of uniform rectangular ones. We search for matching domain blocks in the manner of Zhang and Chen, performing a fast wavelet transform on the blocks and eliminating low resolution mismatches to gain speed. We obtain further improvements by the efficiencies of binary triangulations (including the elimination of affine and symmetry calculations and reduced parameter storage), and by pruning the binary tree before construction of the IFS. Our wavelets are triangular Haar wavelets and 'second generation' interpolation wavelets as suggested by Sweldens' recent work.

  11. On the anomaly of velocity-pressure decoupling in collocated mesh solutions

    NASA Technical Reports Server (NTRS)

    Kim, Sang-Wook; Vanoverbeke, Thomas

    1991-01-01

    The use of various pressure correction algorithms originally developed for fully staggered meshes can yield a velocity-pressure decoupled solution for collocated meshes. The mechanism that causes velocity-pressure decoupling is identified. It is shown that the use of a partial differential equation for the incremental pressure eliminates such a mechanism and yields a velocity-pressure coupled solution. Example flows considered are a three dimensional lid-driven cavity flow and a laminar flow through a 90 deg bend square duct. Numerical results obtained using the collocated mesh are in good agreement with the measured data and other numerical results.
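The decoupling mechanism identified in the abstract can be seen in one dimension. This illustrative sketch (not the paper's algorithm) shows that a central-difference pressure gradient on a collocated mesh is blind to a checkerboard pressure field:

```python
import numpy as np

# On a collocated grid the discrete pressure gradient at node i is typically
# (p[i+1] - p[i-1]) / (2*dx), which skips node i itself.  A checkerboard
# field p_i = (-1)^i therefore produces a zero gradient everywhere, so the
# momentum equations cannot "see" (or damp) this spurious pressure mode.
n, dx = 16, 1.0
p = np.array([(-1.0) ** i for i in range(n)])
grad = (np.roll(p, -1) - np.roll(p, 1)) / (2.0 * dx)   # periodic central difference
print(np.allclose(grad, 0.0))   # True: the checkerboard mode is invisible
```

A pressure-correction scheme whose discrete operators admit this null mode can converge to an oscillatory, decoupled pressure; solving a genuine partial differential equation for the incremental pressure, as the paper proposes, removes the null mode.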

  12. Preconditioning cubic spline collocation method by FEM and FDM for elliptic equations

    SciTech Connect

    Kim, Sang Dong

    1996-12-31

In this talk we discuss finite element and finite difference techniques for the cubic spline collocation method. For this purpose, we consider the uniformly elliptic operator A defined by Au := -Δu + a_1 u_x + a_2 u_y + a_0 u in Ω (the unit square) with Dirichlet or Neumann boundary conditions, and its discretization based on Hermite cubic spline spaces and collocation at the Gauss points. Using an interpolatory basis with support on the Gauss points, one obtains the matrix A_N (h = 1/N).

  13. An Investigation of Wavelet Bases for Grid-Based Multi-Scale Simulations Final Report

    SciTech Connect

    Baty, R.S.; Burns, S.P.; Christon, M.A.; Roach, D.W.; Trucano, T.G.; Voth, T.E.; Weatherby, J.R.; Womble, D.E.

    1998-11-01

    The research summarized in this report is the result of a two-year effort that has focused on evaluating the viability of wavelet bases for the solution of partial differential equations. The primary objective for this work has been to establish a foundation for hierarchical/wavelet simulation methods based upon numerical performance, computational efficiency, and the ability to exploit the hierarchical adaptive nature of wavelets. This work has demonstrated that hierarchical bases can be effective for problems with a dominant elliptic character. However, the strict enforcement of orthogonality was found to be less desirable than weaker semi-orthogonality or bi-orthogonality for solving partial differential equations. This conclusion has led to the development of a multi-scale linear finite element based on a hierarchical change of basis. The reproducing kernel particle method has been found to yield extremely accurate phase characteristics for hyperbolic problems while providing a convenient framework for multi-scale analyses.

  14. CW-THz image contrast enhancement using wavelet transform and Retinex

    NASA Astrophysics Data System (ADS)

    Chen, Lin; Zhang, Min; Hu, Qi-fan; Huang, Ying-Xue; Liang, Hua-Wei

    2015-10-01

To enhance the contrast of continuous wave terahertz (CW-THz) scanning images and to denoise them, a method based on the wavelet transform and Retinex theory was proposed. First, the factors affecting the quality of CW-THz images were analysed. Second, the discrete wavelet transform (DWT) was combined with a designed nonlinear function in the wavelet domain to enhance contrast. The Retinex algorithm was then applied for further contrast enhancement. To evaluate the effectiveness of the proposed method qualitatively and quantitatively, it was compared with the adaptive histogram equalization method, the homomorphic filtering method, and the single-scale Retinex (SSR) method. Experimental results demonstrated that the presented algorithm can effectively enhance the contrast of CW-THz images and obtain better visual effects.
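The Retinex stage of such a pipeline can be sketched compactly. Below is a minimal single-scale Retinex (SSR) in numpy only; the Gaussian scale and the synthetic test image are illustrative assumptions, and the paper's designed wavelet-domain nonlinear function is not reproduced here:

```python
import numpy as np

def gaussian_blur(img, sigma):
    # Separable Gaussian blur built from numpy's 1-D convolution.
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2)); k /= k.sum()
    blur = np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 0, img)
    return np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 1, blur)

def single_scale_retinex(img, sigma=8.0):
    # R = log(I) - log(G_sigma * I): compresses slowly varying illumination
    # while keeping local reflectance detail (log1p avoids log(0)).
    return np.log1p(img) - np.log1p(gaussian_blur(img, sigma))

# Synthetic test: a small bright detail sitting on a strong illumination ramp.
x = np.linspace(0.0, 1.0, 64)
illumination = 10.0 + 90.0 * x[None, :].repeat(64, axis=0)
detail = np.zeros((64, 64)); detail[30:34, 30:34] = 40.0
out = single_scale_retinex(illumination + detail)
# The ramp is strongly compressed relative to the local detail.
print(out[32, 32] > out[32, 10] and out[32, 32] > out[32, 55])
```

Subtracting the log of the blurred image removes the multiplicative illumination component, so the local detail stands out regardless of where it sits on the ramp.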

  15. Formulaic Language and Collocations in German Essays: From Corpus-Driven Data to Corpus-Based Materials

    ERIC Educational Resources Information Center

    Krummes, Cedric; Ensslin, Astrid

    2015-01-01

    Whereas there exists a plethora of research on collocations and formulaic language in English, this article contributes towards a somewhat less developed area: the understanding and teaching of formulaic language in German as a foreign language. It analyses formulaic sequences and collocations in German writing (corpus-driven) and provides modern…

  16. An Automatic Collocation Writing Assistant for Taiwanese EFL Learners: A Case of Corpus-Based NLP Technology

    ERIC Educational Resources Information Center

    Chang, Yu-Chia; Chang, Jason S.; Chen, Hao-Jan; Liou, Hsien-Chin

    2008-01-01

    Previous work in the literature reveals that EFL learners were deficient in collocations that are a hallmark of near native fluency in learner's writing. Among different types of collocations, the verb-noun (V-N) one was found to be particularly difficult to master, and learners' first language was also found to heavily influence their collocation…

  17. The Challenge of English Language Collocation Learning in an ES/FL Environment: PRC Students in Singapore

    ERIC Educational Resources Information Center

    Ying, Yang

    2015-01-01

    This study aimed to seek an in-depth understanding about English collocation learning and the development of learner autonomy through investigating a group of English as a Second Language (ESL) learners' perspectives and practices in their learning of English collocations using an AWARE approach. A group of 20 PRC students learning English in…

  18. Multiadaptive Bionic Wavelet Transform: Application to ECG Denoising and Baseline Wandering Reduction

    NASA Astrophysics Data System (ADS)

    Sayadi, Omid; Shamsollahi, Mohammad B.

    2007-12-01

We present a new modified wavelet transform, called the multiadaptive bionic wavelet transform (MABWT), that can be applied to ECG signals in order to remove noise from them under a wide range of noise variations. By using the definition of the bionic wavelet transform and adaptively determining both the center frequency of each scale and the corresponding adaptation function, the problem of desired signal decomposition is solved. Applying a newly proposed thresholding rule works successfully in denoising the ECG. Moreover, by using the multiadaptation scheme, lowpass noisy interference effects on the baseline of the ECG are removed as a direct task. The method was extensively tested clinically with real and simulated ECG signals, showing high noise-reduction performance, comparable to that of the wavelet transform (WT). Quantitative evaluation of the proposed algorithm shows that the average SNR improvement of MABWT is 1.82 dB more than the WT-based results in the best case. The procedure has also proved largely advantageous over wavelet-based methods for baseline wandering cancellation, including both DC components and baseline drifts.

  19. Application of Conjunctive Nonlinear Model Based on Wavelet Transforms and Artificial Neural Networks to Drought Forecasting

    NASA Astrophysics Data System (ADS)

    Abrishamchi, A.; Mehdikhani, H.; Tajrishy, M.; Marino, M. A.; Abrishamchi, A.

    2007-12-01

Drought forecasting plays an important role in mitigating the economic, environmental, and social impacts of drought. Traditional statistical time series methods have a limited ability to capture non-stationarities and nonlinearities in data. The Artificial Neural Network (ANN), a highly flexible function estimator with self-learning and self-adaptive features, has shown great ability in forecasting nonlinear and nonstationary time series in hydrology. Recently, wavelet transforms have become a common tool for analyzing local variation in time series. Wavelet transforms provide a useful decomposition of a signal, or time series; therefore, hybrid models have been proposed that forecast a time series after a wavelet-transform preprocessing step. Wavelet-transformed data improve the ability of forecasting models by diagnosing a signal's main frequency components and abstracting local information of the original time series on various resolution levels. This paper presents a conjunctive nonlinear model using wavelet transforms and an Artificial Neural Network. Application of the model in the Zayandeh-Rood River basin (Iran) shows that the conjunctive model significantly improves the ability of artificial neural networks to forecast EDI (effective drought index) time series 1, 3, 6, and 9 months ahead. Improved forecasts allow water resources decision makers to develop drought preparedness plans far in advance.
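The wavelet preprocessing step that such hybrid models rely on can be sketched with a one-level Haar transform (an illustrative choice; the abstract does not specify the mother wavelet or decomposition depth):

```python
import numpy as np

def haar_dwt(x):
    # One-level Haar decomposition: approximation (low-pass) and detail
    # (high-pass) coefficients from non-overlapping sample pairs.
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def haar_idwt(a, d):
    # Inverse transform: reconstruct and interleave the sample pairs.
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

# A trend plus a seasonal oscillation: the approximation band isolates the
# slow component a forecasting model would be trained on; the detail band
# carries the fast fluctuations, each modeled at its own resolution level.
t = np.arange(64)
series = 0.1 * t + np.sin(2 * np.pi * t / 8.0)
a, d = haar_dwt(series)
print(np.allclose(haar_idwt(a, d), series))   # True: perfect reconstruction
```

In a hybrid forecaster each subband is predicted separately (e.g., by an ANN) and the forecasts are recombined through the inverse transform, which is why the decomposition must reconstruct exactly.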

  20. Directional dual-tree complex wavelet packet transforms for processing quadrature signals.

    PubMed

    Serbes, Gorkem; Gulcur, Halil Ozcan; Aydin, Nizamettin

    2016-03-01

Quadrature signals containing in-phase and quadrature-phase components are used in many signal processing applications in every field of science and engineering. Specifically, Doppler ultrasound systems used to evaluate cardiovascular disorders noninvasively also produce quadrature-format signals. In order to obtain directional blood flow information, the quadrature outputs have to be preprocessed using methods such as asymmetrical and symmetrical phasing filter techniques. These resultant directional signals can be employed to detect asymptomatic embolic signals caused by small emboli, which are indicators of a possible future stroke, in the cerebral circulation. Various transform-based methods such as Fourier and wavelet have frequently been used in processing embolic signals. However, most of the time the Fourier and discrete wavelet transforms are not appropriate for the analysis of embolic signals due to their non-stationary time-frequency behavior. Alternatively, the discrete wavelet packet transform can perform an adaptive decomposition of the time-frequency axis. In this study, directional discrete wavelet packet transforms, which have the ability to map directional information while processing quadrature signals and have less computational complexity than the existing wavelet packet-based methods, are introduced. The performances of the proposed methods are examined in detail by using single-frequency, synthetic narrow-band, and embolic quadrature signals.

  1. Fast multi-scale edge detection algorithm based on wavelet transform

    NASA Astrophysics Data System (ADS)

    Zang, Jie; Song, Yanjun; Li, Shaojuan; Luo, Guoyun

    2011-11-01

Traditional edge detection algorithms amplify noise to some extent, introducing large errors, so their edge detection ability is limited. Wavelet analysis can reduce the time resolution when analysing the low-frequency content of an image and, for the high-frequency content, can focus on the transient characteristics of the signal at high time resolution by reducing the frequency resolution. Because the wavelet transform adapts to the signal, it can extract useful information from image edges. The wavelet transform operates at various scales, and the transform at each scale provides certain edge information; this is called multi-scale edge detection. In multi-scale edge detection, the original signal is first smoothed at different scales, and the mutations of the original signal are then detected from the first or second derivative of the smoothed signal; these mutations are the edges. Edge detection is thus equivalent to signal detection in the different frequency bands of a wavelet decomposition. This article uses this algorithm, which takes into account both the details and the profile of an image, to detect signal mutations at different scales, providing the edge information necessary for image analysis, target recognition, and machine vision, and achieving good results.
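The smooth-then-differentiate scheme described above can be sketched in one dimension with a derivative-of-Gaussian kernel at several scales (an illustrative sketch, not the article's exact algorithm; the signal and scales are assumptions):

```python
import numpy as np

def smoothed_derivative(signal, sigma):
    # Correlate with the first derivative of a Gaussian: smoothing at scale
    # sigma and differentiation in a single pass.
    r = int(4 * sigma)
    x = np.arange(-r, r + 1, dtype=float)
    g = np.exp(-x**2 / (2.0 * sigma**2))
    dg = -x / sigma**2 * g                         # derivative of the Gaussian
    padded = np.pad(signal, r, mode="edge")        # suppress border artifacts
    return np.convolve(padded, dg[::-1], mode="valid")  # flipped kernel => correlation

# A noisy step edge at index 100: the response magnitude peaks at the edge
# at every scale, while responses to noise weaken as the scale grows.
rng = np.random.default_rng(1)
signal = np.where(np.arange(200) >= 100, 1.0, 0.0) + 0.05 * rng.normal(size=200)
for sigma in (1.0, 2.0, 4.0):
    peak = int(np.argmax(np.abs(smoothed_derivative(signal, sigma))))
    print(abs(peak - 100) <= 3)
```

An extremum that persists across scales marks a true edge; maxima that appear only at the finest scale are typically noise, which is the rationale for combining the scales.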

  2. Bayesian wavelet-based image denoising using the Gauss-Hermite expansion.

    PubMed

    Rahman, S M Mahbubur; Ahmad, M Omair; Swamy, M N S

    2008-10-01

The probability density functions (PDFs) of the wavelet coefficients play a key role in many wavelet-based image processing algorithms, such as denoising. The conventional PDFs usually have a limited number of parameters that are calculated from the first few moments only. Consequently, such PDFs cannot be made to fit very well with the empirical PDF of the wavelet coefficients of an image. As a result, the shrinkage function utilizing any of these density functions provides a substandard denoising performance. In order for the probabilistic model of the image wavelet coefficients to incorporate an appropriate number of parameters that are dependent on the higher order moments, a PDF using a series expansion in terms of the Hermite polynomials that are orthogonal with respect to the standard Gaussian weight function is introduced. A modification in the series function is introduced so that only a finite number of terms can be used to model the image wavelet coefficients, ensuring at the same time that the resulting PDF is non-negative. It is shown that the proposed PDF matches the empirical one better than some of the standard ones, such as the generalized Gaussian or Bessel K-form PDF. A Bayesian image denoising technique is then proposed, wherein the new PDF is exploited to statistically model the subband as well as the local neighboring image wavelet coefficients. Experimental results on several test images demonstrate that the proposed denoising method, both in the subband-adaptive and locally adaptive conditions, provides a performance better than that of most of the methods that use PDFs with a limited number of parameters.

  3. Imaging system of wavelet optics described by the Gaussian linear frequency-modulated complex wavelet.

    PubMed

    Tan, Liying; Ma, Jing; Wang, Guangming

    2005-12-01

The image formation and the point-spread function of an optical system are analyzed by use of the wavelet basis function. The image described by a wavelet is no longer an indivisible whole image. It is, rather, a complex image consisting of many wavelet subimages, which come from the changes of different (scale) parameters a and c, while parameters b and d give the positions of the wavelet subimages under different scales. A Gaussian frequency-modulated complex-valued wavelet function is introduced to express the point-spread function of an optical system and used to describe the image formation. The analysis, for the case of illumination with a monochromatic plane light wave, shows that using the theory of wavelet optics to describe the image formation of an optical system is feasible.

  4. Imaging system of wavelet optics described by the Gaussian linear frequency-modulated complex wavelet

    NASA Astrophysics Data System (ADS)

    Tan, Liying; Ma, Jing; Wang, Guangming

    2005-12-01

The image formation and the point-spread function of an optical system are analyzed by use of the wavelet basis function. The image described by a wavelet is no longer an indivisible whole image. It is, rather, a complex image consisting of many wavelet subimages, which come from the changes of different (scale) parameters a and c, while parameters b and d give the positions of the wavelet subimages under different scales. A Gaussian frequency-modulated complex-valued wavelet function is introduced to express the point-spread function of an optical system and used to describe the image formation. The analysis, for the case of illumination with a monochromatic plane light wave, shows that using the theory of wavelet optics to describe the image formation of an optical system is feasible.

  5. Applications of a fast, continuous wavelet transform

    SciTech Connect

    Dress, W.B.

    1997-02-01

A fast, continuous wavelet transform, based on Shannon's sampling theorem in frequency space, has been developed for use with continuous mother wavelets and sampled data sets. The method differs from the usual discrete-wavelet approach and the continuous-wavelet transform in that, here, the wavelet is sampled in the frequency domain. Since Shannon's sampling theorem lets us view the Fourier transform of the data set as a continuous function in frequency space, the continuous nature of the functions is kept up to the point of sampling the scale-translation lattice, so the scale-translation grid used to represent the wavelet transform is independent of the time-domain sampling of the signal under analysis. Computational cost and nonorthogonality aside, the inherent flexibility and shift invariance of the frequency-space wavelets have advantages. The method has been applied to forensic audio reconstruction, speaker recognition/identification, and the detection of micromotions of heavy vehicles associated with ballistocardiac impulses originating from occupants' heart beats. Audio reconstruction is aided by selection of desired regions in the 2-D representation of the magnitude of the transformed signal. The inverse transform is applied to ridges and selected regions to reconstruct areas of interest, unencumbered by noise interference lying outside these regions. To separate micromotions imparted to a mass-spring system (e.g., a vehicle) by an occupant's beating heart from gross mechanical motions due to wind and traffic vibrations, a continuous frequency-space wavelet, modeled on the frequency content of a canonical ballistocardiogram, was used to analyze time series taken from geophone measurements of vehicle micromotions. By using a family of mother wavelets, such as a set of Gaussian derivatives of various orders, features such as the glottal closing rate and word and phrase segmentation may be extracted from voice data.
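The frequency-domain sampling idea can be sketched with a Morlet wavelet whose Fourier transform is evaluated directly on the FFT grid. This is a minimal sketch under assumptions of my own (a Morlet mother wavelet, L2 normalization, a pure-tone test signal), not the report's implementation:

```python
import numpy as np

def cwt_freq(signal, scales, w0=6.0):
    # Continuous wavelet transform computed entirely in frequency space:
    # the analytic Morlet mother wavelet is sampled in the frequency domain
    # as psi_hat(s * omega), then multiplied against the signal spectrum.
    n = signal.size
    omega = 2.0 * np.pi * np.fft.fftfreq(n)          # angular frequency grid
    X = np.fft.fft(signal)
    out = np.empty((len(scales), n), dtype=complex)
    for i, s in enumerate(scales):
        psi_hat = np.pi**-0.25 * np.exp(-0.5 * (s * omega - w0) ** 2) * (omega > 0)
        out[i] = np.fft.ifft(X * np.conj(psi_hat) * np.sqrt(s))
    return out

# A pure tone at frequency f0 (cycles/sample): the scale whose centre
# frequency w0/(2*pi*s) matches f0 should carry the most energy.
n, f0 = 1024, 0.05
x = np.cos(2 * np.pi * f0 * np.arange(n))
scales = np.arange(5, 40, dtype=float)
W = cwt_freq(x, scales)
best = scales[np.argmax(np.abs(W).mean(axis=1))]
print(abs(6.0 / (2 * np.pi * best) - f0) < 0.01)     # matched scale ~ w0/(2*pi*f0)
```

Because the scale grid only enters through `psi_hat(s * omega)`, it is chosen independently of the time-domain sampling of the signal, which is the flexibility the abstract highlights.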

  6. Seismic porosity mapping in the Ekofisk Field using a new form of collocated cokriging

    SciTech Connect

    Doyen, P.M.; Boer, L.D. den; Pillet, W.R.

    1996-12-31

    An important practical problem in the geosciences is the integration of seismic attribute information in subsurface mapping applications. The aim is to utilize a more densely sampled secondary variable such as seismic impedance to guide the interpolation of a related primary variable such as porosity. The collocated cokriging technique was recently introduced to facilitate the integration process. Here we propose a simplified implementation of collocated cokriging based on a Bayesian updating rule. We demonstrate that the cokriging estimate at one point can be obtained by direct updating of the kriging estimate with the collocated secondary data. The linear update only requires knowledge of the kriging variance and the coefficient(s) of correlation between primary and secondary variables. No cokriging system need be solved and no reference to spatial cross-covariances is required. The new form of collocated cokriging is applied to predict the lateral variations of porosity in a reservoir layer of the Ekofisk Field, Norwegian North Sea. A cokriged porosity map is obtained by combining zone average porosity data at more than one hundred wells and acoustic impedance information extracted from a 3-D seismic survey. Utilization of the seismic information yields a more detailed and reliable image of the porosity distribution along the flanks of the producing structure.
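The Bayesian updating rule can be sketched as a product of Gaussians: treat the kriging estimate and variance as a prior and update with the collocated standardized secondary datum through the correlation coefficient. This is a sketch under standardization and Markov-type assumptions, not necessarily the authors' exact formulation:

```python
def collocated_update(m_k, var_k, y, rho):
    # Bayesian update of a kriging estimate (prior N(m_k, var_k)) with a
    # collocated secondary datum y, assuming standardized variables with
    # y | z ~ N(rho * z, 1 - rho**2).  Product of two Gaussians in z:
    prec = 1.0 / var_k + rho**2 / (1.0 - rho**2)
    mean = (m_k / var_k + rho * y / (1.0 - rho**2)) / prec
    return mean, 1.0 / prec

# Kriging alone gives a (standardized) porosity estimate N(0.2, 0.04); a
# collocated impedance datum y = 1.0 with rho = 0.7 pulls the estimate
# toward it and reduces the uncertainty.  (Numbers are illustrative.)
mean, var = collocated_update(0.2, 0.04, 1.0, 0.7)
print(0.2 < mean < 1.0, var < 0.04)   # True True: shifted toward y, variance reduced
```

As the abstract notes, only the kriging variance and the primary–secondary correlation are needed: no cokriging system is solved and no spatial cross-covariance model is referenced.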

  7. 47 CFR Appendix B to Part 1 - Nationwide Programmatic Agreement for the Collocation of Wireless Antennas

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... categories described in the FCC's rules (47 CFR 1.1307), including situations which may affect historical... CFR 800.14(b)), allows for programmatic agreements to streamline and tailor the Section 106 review... Collocation Programmatic Agreement in accordance with 36 CFR 800.14(b) to address the Section 106...

  8. 47 CFR Appendix B to Part 1 - Nationwide Programmatic Agreement for the Collocation of Wireless Antennas

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... categories described in the FCC's rules (47 CFR 1.1307), including situations which may affect historical... CFR 800.14(b)), allows for programmatic agreements to streamline and tailor the Section 106 review... Collocation Programmatic Agreement in accordance with 36 CFR 800.14(b) to address the Section 106...

  9. 47 CFR Appendix B to Part 1 - Nationwide Programmatic Agreement for the Collocation of Wireless Antennas

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... categories described in the FCC's rules (47 CFR 1.1307), including situations which may affect historical... CFR 800.14(b)), allows for programmatic agreements to streamline and tailor the Section 106 review... Collocation Programmatic Agreement in accordance with 36 CFR 800.14(b) to address the Section 106...

  10. 47 CFR Appendix B to Part 1 - Nationwide Programmatic Agreement for the Collocation of Wireless Antennas

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... categories described in the FCC's rules (47 CFR 1.1307), including situations which may affect historical... CFR 800.14(b)), allows for programmatic agreements to streamline and tailor the Section 106 review... Collocation Programmatic Agreement in accordance with 36 CFR 800.14(b) to address the Section 106...

  11. 47 CFR Appendix B to Part 1 - Nationwide Programmatic Agreement for the Collocation of Wireless Antennas

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... categories described in the FCC's rules (47 CFR 1.1307), including situations which may affect historical... CFR 800.14(b)), allows for programmatic agreements to streamline and tailor the Section 106 review... Collocation Programmatic Agreement in accordance with 36 CFR 800.14(b) to address the Section 106...

  12. Your Participation Is "Greatly/Highly" Appreciated: Amplifier Collocations in L2 English

    ERIC Educational Resources Information Center

    Edmonds, Amanda; Gudmestad, Aarnes

    2014-01-01

    The current study sets out to investigate collocational knowledge for a set of 13 English amplifiers among native and nonnative speakers of English, by providing a partial replication of one of the projects reported on in Granger (1998). The project combines both phraseological and distributional approaches to research into formulaic language to…

  13. Frequent Collocates and Major Senses of Two Prepositions in ESL and ENL Corpora

    ERIC Educational Resources Information Center

    Nkemleke, Daniel

    2009-01-01

This contribution assesses in quantitative terms frequent collocates and major senses of "between" and "through" in the corpus of Cameroonian English (CCE), the corpus of East-African (Kenya and Tanzania) English which is part of the International Corpus of English (ICE) project (ICE-EA), and the Lancaster-Oslo/Bergen (LOB) corpus of British English.…

  14. Investigation of Native Speaker and Second Language Learner Intuition of Collocation Frequency

    ERIC Educational Resources Information Center

    Siyanova-Chanturia, Anna; Spina, Stefania

    2015-01-01

    Research into frequency intuition has focused primarily on native (L1) and, to a lesser degree, nonnative (L2) speaker intuitions about single word frequency. What remains a largely unexplored area is L1 and L2 intuitions about collocation (i.e., phrasal) frequency. To bridge this gap, the present study aimed to answer the following question: How…

  15. The Effect of Corpus-Based Activities on Verb-Noun Collocations in EFL Classes

    ERIC Educational Resources Information Center

    Ucar, Serpil; Yükselir, Ceyhun

    2015-01-01

    This current study sought to reveal the impacts of corpus-based activities on verb-noun collocation learning in EFL classes. This study was carried out on two groups--experimental and control groups- each of which consists of 15 students. The students were preparatory class students at School of Foreign Languages, Osmaniye Korkut Ata University.…

  16. Shape Control of Plates with Piezo Actuators and Collocated Position/Rate Sensors

    NASA Technical Reports Server (NTRS)

    Balakrishnan, A. V.

    1994-01-01

    This paper treats the control problem of shaping the surface deformation of a circular plate using embedded piezo-electric actuators and collocated rate sensors. An explicit Linear Quadratic Gaussian (LQG) optimizer stability augmentation compensator is derived as well as the optimal feed-forward control. Corresponding performance evaluation formulas are also derived.

  17. Shape Control of Plates with Piezo Actuators and Collocated Position/Rate Sensors

    NASA Technical Reports Server (NTRS)

    Balakrishnan, A. V.

    1994-01-01

    This paper treats the control problem of shaping the surface deformation of a circular plate using embedded piezo-electric actuator and collocated rate sensors. An explicit Linear Quadratic Gaussian (LQG) optimizer stability augmentation compensator is derived as well as the optimal feed-forward control. Corresponding performance evaluation formulas are also derived.

  18. Collocational Processing in Light of the Phraseological Continuum Model: Does Semantic Transparency Matter?

    ERIC Educational Resources Information Center

    Gyllstad, Henrik; Wolter, Brent

    2016-01-01

    The present study investigates whether two types of word combinations (free combinations and collocations) differ in terms of processing by testing Howarth's Continuum Model based on word combination typologies from a phraseological tradition. A visual semantic judgment task was administered to advanced Swedish learners of English (n = 27) and…

  19. The Role of Language for Thinking and Task Selection in EFL Learners' Oral Collocational Production

    ERIC Educational Resources Information Center

    Wang, Hung-Chun; Shih, Su-Chin

    2011-01-01

    This study investigated how English as a foreign language (EFL) learners' types of language for thinking and types of oral elicitation tasks influence their lexical collocational errors in speech. Data were collected from 42 English majors in Taiwan using two instruments: (1) 3 oral elicitation tasks and (2) an inner speech questionnaire. The…

  20. Strategies in Translating Collocations in Religious Texts from Arabic into English

    ERIC Educational Resources Information Center

    Dweik, Bader S.; Shakra, Mariam M. Abu

    2010-01-01

    The present study investigated the strategies adopted by students in translating specific lexical and semantic collocations in three religious texts namely, the Holy Quran, the Hadith and the Bible. For this purpose, the researchers selected a purposive sample of 35 MA translation students enrolled in three different public and private Jordanian…

  1. Verb-Noun Collocations in Second Language Writing: A Corpus Analysis of Learners' English

    ERIC Educational Resources Information Center

    Laufer, Batia; Waldman, Tina

    2011-01-01

    The present study investigates the use of English verb-noun collocations in the writing of native speakers of Hebrew at three proficiency levels. For this purpose, we compiled a learner corpus that consists of about 300,000 words of argumentative and descriptive essays. For comparison purposes, we selected LOCNESS, a corpus of young adult native…

  2. Collocational Differences between L1 and L2: Implications for EFL Learners and Teachers

    ERIC Educational Resources Information Center

    Sadeghi, Karim

    2009-01-01

    Collocations are one of the areas that produce problems for learners of English as a foreign language. Iranian learners of English are by no means an exception. Teaching experience at schools, private language centers, and universities in Iran suggests that a significant part of EFL learners' problems with producing the language, especially at…

  3. Action Research: Applying a Bilingual Parallel Corpus Collocational Concordancer to Taiwanese Medical School EFL Academic Writing

    ERIC Educational Resources Information Center

    Reynolds, Barry Lee

    2016-01-01

    Lack of knowledge of the conventional usage of collocations in one's respective field of expertise causes Taiwanese students to produce academic writing that is markedly different from more competent writing. This is because Taiwanese students are first and foremost English as a Foreign Language (EFL) readers and may have difficulties picking up on…

  4. Wavelet-based multispectral face recognition

    NASA Astrophysics Data System (ADS)

    Liu, Dian-Ting; Zhou, Xiao-Dan; Wang, Cheng-Wen

    2008-09-01

    This paper proposes a novel wavelet-based face recognition method using thermal infrared (IR) and visible-light face images. The method applies a combination of Gabor filters and the Fisherfaces method to the reconstructed IR and visible images derived from wavelet frequency subbands. Our objective is to search for the subbands that are insensitive to variations in expression and illumination. The classification performance is improved by combining the multispectral information from the subbands that individually attain a low equal error rate. Experimental results on the Notre Dame face database show that the proposed wavelet-based algorithm outperforms previous multispectral image fusion methods as well as monospectral methods.

  5. Wavelet Applications for Flight Flutter Testing

    NASA Technical Reports Server (NTRS)

    Lind, Rick; Brenner, Marty; Freudinger, Lawrence C.

    1999-01-01

    Wavelets present a method for signal processing that may be useful for analyzing responses of dynamical systems. This paper describes several wavelet-based tools that have been developed to improve the efficiency of flight flutter testing. One of the tools uses correlation filtering to identify properties of several modes throughout a flight test for envelope expansion. Another tool uses features in time-frequency representations of responses to characterize nonlinearities in the system dynamics. A third tool uses modulus and phase information from a wavelet transform to estimate modal parameters that can be used to update a linear model and reduce conservatism in robust stability margins.

  6. Application of wavelet analysis on thin-film wideband monitoring system

    NASA Astrophysics Data System (ADS)

    Han, Jun; Wang, Song; Shang, Xiaoyan; An, Yuying

    2009-05-01

    In a thin-film thickness wideband monitoring system, accurate measurement of the spectral intensity signal is critical to improving coating precision. Because of the electron gun, the ion source and baking, the vacuum chamber is a complex environment with background light; together with the inherent noise of the linear CCD and the quantization noise of A/D conversion, these are the main factors affecting accurate measurement of spectral intensity. Using the time-frequency multi-resolution properties of the wavelet transform, an adaptive threshold adjustment method is designed. According to the different characteristics of signal and random noise under the wavelet transform at different scales, a fine adjustment factor is added when the threshold is determined: on the one hand, it decreases the adaptive threshold for wavelet coefficients with positive Lipschitz exponents, which helps preserve the wavelet coefficients of real signals; on the other hand, it increases the threshold for coefficients with negative Lipschitz exponents, which helps filter out noise. With this method, both the probability of rejecting true signals and the probability of false alarms are reduced, the random noise is suppressed effectively, a very good filtering result is achieved, and the analysis accuracy of the spectral signal and the precision of the system's decisions are improved.
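
    The scale-dependent soft-thresholding strategy this abstract describes can be illustrated with a minimal sketch. The code below applies a standard universal soft threshold to each detail band using PyWavelets; the paper's Lipschitz-exponent-based fine adjustment factor is not reproduced, and the wavelet choice (`db4`) and decomposition level are illustrative assumptions.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_denoise(signal, wavelet="db4", level=4):
    """Soft-threshold wavelet detail coefficients (universal threshold)."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Robust noise estimate from the finest detail band (MAD estimator).
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(signal.size))
    denoised = [coeffs[0]] + [pywt.threshold(d, thr, mode="soft")
                              for d in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)[: signal.size]

t = np.linspace(0.0, 1.0, 1024)
clean = np.sin(2 * np.pi * 5 * t)
noisy = clean + 0.3 * np.random.default_rng(0).standard_normal(t.size)
rec = wavelet_denoise(noisy)
```

    A refinement in the spirit of the paper would make `thr` depend on the regularity of each coefficient across scales rather than using one global value.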

  7. A novel approach for removing ECG interferences from surface EMG signals using a combined ANFIS and wavelet.

    PubMed

    Abbaspour, Sara; Fallah, Ali; Lindén, Maria; Gholamhosseini, Hamid

    2016-02-01

    In recent years, the removal of electrocardiogram (ECG) interference from electromyogram (EMG) signals has received considerable attention. Where the quality of the EMG signal is of interest, it is important to remove ECG interference. In this paper, an efficient method based on a combination of an adaptive neuro-fuzzy inference system (ANFIS) and the wavelet transform is proposed to effectively eliminate ECG interference from surface EMG signals. The proposed approach is compared with other common methods such as the high-pass filter, artificial neural network, adaptive noise canceller, wavelet transform, subtraction method and ANFIS. The performance of the proposed ANFIS-wavelet method is superior to the other methods, with a signal-to-noise ratio and relative error of 14.97 dB and 0.02, respectively, and a significantly higher correlation coefficient (p < 0.05). PMID:26643795

  8. [Spatio-Temporal Bioelectrical Brain Activity Organization during Reading Syntagmatic and Paradigmatic Collocations by Students with Different Foreign Language Proficiency].

    PubMed

    Sokolova, L V; Cherkasova, A S

    2015-01-01

    Texts or isolated words/pseudowords are often used as stimuli in research on human verbal activity. Our study focuses on the decoding of grammatical constructions consisting of two to three words (collocations). Sets of Russian and English collocations without any narrative context were presented to Russian-speaking students with different levels of English proficiency. The stimulus material contained two types of collocations: paradigmatic and syntagmatic. Thirty students (average age 20.4 ± 0.22) took part in the study; they were divided into two equal groups depending on their English proficiency (linguists/nonlinguists). During reading, bioelectrical activity of the cortex was recorded from 12 electrodes in the alpha, beta and theta bands. The coherence function, reflecting the cooperation of different cortical areas during the reading of collocations, was analyzed. An increase in interhemispheric and diagonal connections while reading collocations in both languages in the group of students with low foreign-language proficiency testifies to the importance of functional cooperation between the hemispheres. Brain bioelectrical activity of students with good foreign-language proficiency during the reading of all collocation types in Russian and English is characterized by an economization of neural resources compared with nonlinguists. Selective activation of certain cortical areas, depending on the type of grammatical construction, was also observed in the nonlinguist group, probably reflecting a special decoding system that processes the presented stimuli. Reading Russian paradigmatic constructions by nonlinguists entailed an increase in connections between left cortical areas, and reading English syntagmatic collocations, between right ones.

  9. Velocity and Object Detection Using Quaternion Wavelets

    SciTech Connect

    Traversoni, Leonardo; Xu Yi

    2007-09-06

    Starting from stereoscopic films, we detect corresponding objects in both views and establish an epipolar geometry; corresponding moving objects are also detected and their movement described, all using quaternion wavelets and quaternion phase-space decomposition.

  10. Wavelet analysis for characterizing human electroencephalogram signals

    NASA Astrophysics Data System (ADS)

    Li, Bai-Lian; Wu, Hsin-i.

    1995-04-01

    Wavelet analysis is a recently developed mathematical theory and computational method for decomposing a nonstationary signal into components that have good localization properties in both the time and frequency domains and hierarchical structures. The wavelet transform provides local information and a multiresolution decomposition of a signal that cannot be obtained using traditional methods such as Fourier transforms and distribution-based statistical methods. Hence changes in complex biological signals can be detected. We use wavelet analysis as an innovative method for identifying and characterizing multiscale electroencephalogram signals in this paper. We develop a wavelet-based stationary phase transition method to extract instantaneous frequencies of the signal that vary in time. The results under different clinical situations show that the brain triggers small bursts of either low or high frequency immediately prior to a global-scale change to that behavior. This information could be used as a diagnostic for detecting the onset of an epileptic seizure.

  11. Wavelet-based acoustic recognition of aircraft

    SciTech Connect

    Dress, W.B.; Kercel, S.W.

    1994-09-01

    We describe a wavelet-based technique for identifying aircraft from acoustic emissions during take-off and landing. Tests show that the sensor can be a single, inexpensive hearing-aid microphone placed close to the ground. The paper describes data collection, analysis by various techniques, methods of event classification, and extraction of certain physical parameters from wavelet subspace projections. The primary goal of this paper is to show that wavelet analysis can be used as a divide-and-conquer first step in signal processing, providing both simplification and noise filtering. The idea is to project the original signal onto the orthogonal wavelet subspaces, both details and approximations. Subsequent analysis, such as system identification, nonlinear systems analysis, and feature extraction, is then carried out on the various signal subspaces.

  12. Applications of a fast continuous wavelet transform

    NASA Astrophysics Data System (ADS)

    Dress, William B.

    1997-04-01

    A fast, continuous, wavelet transform, justified by appealing to Shannon's sampling theorem in frequency space, has been developed for use with continuous mother wavelets and sampled data sets. The method differs from the usual discrete-wavelet approach and from the standard treatment of the continuous-wavelet transform in that, here, the wavelet is sampled in the frequency domain. Since Shannon's sampling theorem lets us view the Fourier transform of the data set as representing the continuous function in frequency space, the continuous nature of the functions is kept up to the point of sampling the scale-translation lattice, so the scale-translation grid used to represent the wavelet transform is independent of the time-domain sampling of the signal under analysis. Although more computationally costly and not represented by an orthogonal basis, the inherent flexibility and shift invariance of the frequency-space wavelets are advantageous for certain applications. The method has been applied to forensic audio reconstruction, speaker recognition/identification, and the detection of micromotions of heavy vehicles associated with ballistocardiac impulses originating from occupants' heart beats. Audio reconstruction is aided by selection of desired regions in the 2D representation of the magnitude of the transformed signals. The inverse transform is applied to ridges and selected regions to reconstruct areas of interest, unencumbered by noise interference lying outside these regions. To separate micromotions imparted to a mass- spring system by an occupant's beating heart from gross mechanical motions due to wind and traffic vibrations, a continuous frequency-space wavelet, modeled on the frequency content of a canonical ballistocardiogram, was used to analyze time series taken from geophone measurements of vehicle micromotions. 
By using a family of mother wavelets, such as a set of Gaussian derivatives of various orders, different features may be extracted from voice
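
    The frequency-space sampling idea described above, in which the mother wavelet is sampled in the frequency domain so that the scale-translation grid is independent of the time-domain sampling, can be sketched as a generic FFT-based continuous wavelet transform. This is a minimal illustration with a Morlet wavelet, not the authors' forensic pipeline; the center frequency `w0` and the scale-to-frequency mapping are standard Morlet conventions.

```python
import numpy as np

def fft_cwt(signal, scales, fs, w0=6.0):
    """Continuous wavelet transform with the wavelet sampled in frequency space.

    The Morlet mother wavelet is evaluated directly on the FFT frequency
    grid at each scale, so the scale grid is decoupled from the
    time-domain sampling of the signal."""
    n = signal.size
    omega = 2.0 * np.pi * np.fft.fftfreq(n, d=1.0 / fs)  # angular frequencies
    sig_hat = np.fft.fft(signal)
    out = np.empty((len(scales), n), dtype=complex)
    for i, s in enumerate(scales):
        # Analytic Morlet wavelet in the frequency domain (positive freqs only)
        psi_hat = np.sqrt(s) * np.pi ** -0.25 * np.exp(-0.5 * (s * omega - w0) ** 2)
        out[i] = np.fft.ifft(sig_hat * psi_hat * (omega > 0))
    return out

fs = 256.0
t = np.arange(0.0, 2.0, 1.0 / fs)
sig = np.sin(2 * np.pi * 10 * t)                 # 10 Hz tone
test_freqs = np.array([5.0, 10.0, 20.0, 40.0])
scales = 6.0 / (2 * np.pi * test_freqs)          # Morlet scale <-> frequency
power = (np.abs(fft_cwt(sig, scales, fs)) ** 2).mean(axis=1)
```

    Swapping in a family of mother wavelets, such as Gaussian derivatives of various orders, only changes the `psi_hat` expression.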

  13. Wavelet neural networks for stock trading

    NASA Astrophysics Data System (ADS)

    Zheng, Tianxing; Fataliyev, Kamaladdin; Wang, Lipo

    2013-05-01

    This paper explores the application of a wavelet neural network (WNN), whose hidden layer is comprised of neurons with adjustable wavelets as activation functions, to stock prediction. We discuss some basic rationales behind technical analysis, and based on which, inputs of the prediction system are carefully selected. This system is tested on Istanbul Stock Exchange National 100 Index and compared with traditional neural networks. The results show that the WNN can achieve very good prediction accuracy.

  14. The Continuous wavelet in airborne gravimetry

    NASA Astrophysics Data System (ADS)

    Liang, X.; Liu, L.

    2013-12-01

    Airborne gravimetry is an efficient method to recover the medium- and high-frequency bands of the Earth's gravity field over any region, especially inaccessible areas. It can measure gravity with high accuracy, high resolution and broad coverage in a rapid and economical way, and it will play an important role in geoid determination and geophysical exploration. Filtering to reduce high-frequency errors is critical to the success of airborne gravimetry, because aircraft acceleration is determined from GPS. Traditional filters used in airborne gravimetry are FIR and IIR filters. This study recommends an improved continuous wavelet approach to process airborne gravity data. Here we focus on how to construct the continuous wavelet filters and show their working principle. In particular, the technical parameters (window width parameter and scale parameter) of the filters are tested. The raw airborne gravity data from the first Chinese airborne gravimetry campaign are then filtered using an FIR low-pass filter and the continuous wavelet filters to remove noise. A comparison with reference data is performed to determine the external accuracy, which shows that the continuous wavelet filters applied to airborne gravity data in this study perform well. The advantages of the continuous wavelet filters over digital filters are also introduced. The effectiveness of the continuous wavelet filters for airborne gravimetry is demonstrated through real-data computation.

  15. Multiresolution With Super-Compact Wavelets

    NASA Technical Reports Server (NTRS)

    Lee, Dohyung

    2000-01-01

    The solution data computed from large scale simulations are sometimes too big for main memory, for local disks, and possibly even for a remote storage disk, creating tremendous processing time as well as technical difficulties in analyzing the data. The excessive storage demands a corresponding huge penalty in I/O time, rendering time and transmission time between different computer systems. In this paper, a multiresolution scheme is proposed to compress field simulation or experimental data without much loss of important information in the representation. Originally, the wavelet based multiresolution scheme was introduced in image processing, for the purposes of data compression and feature extraction. Unlike photographic image data which has rather simple settings, computational field simulation data needs more careful treatment in applying the multiresolution technique. While the image data sits on a regularly spaced grid, the simulation data usually resides on a structured curvilinear grid or unstructured grid. In addition to the irregularity in grid spacing, the other difficulty is that the solutions consist of vectors instead of scalar values. The data characteristics demand more restrictive conditions. In general, the photographic images have very little inherent smoothness with discontinuities almost everywhere. On the other hand, the numerical solutions have smoothness almost everywhere and discontinuities in local areas (shock, vortices, and shear layers). The wavelet bases should be amenable to the solution of the problem at hand and applicable to constraints such as numerical accuracy and boundary conditions. In choosing a suitable wavelet basis for simulation data among a variety of wavelet families, the supercompact wavelets designed by Beam and Warming provide one of the most effective multiresolution schemes. Supercompact multi-wavelets retain the compactness of Haar wavelets, are piecewise polynomial and orthogonal, and can have arbitrary order of accuracy.
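
    As a toy illustration of the multiresolution compression idea (using plain orthonormal Haar wavelets rather than the Beam-Warming supercompact multi-wavelets the abstract recommends), the sketch below decomposes a 1D field, zeroes small detail coefficients, and reconstructs; the field, level count, and threshold are illustrative assumptions.

```python
import numpy as np

def haar_decompose(v, levels):
    """Orthonormal Haar multiresolution decomposition of a 1D field."""
    a, details = np.asarray(v, dtype=float), []
    for _ in range(levels):
        a, d = (a[0::2] + a[1::2]) / np.sqrt(2), (a[0::2] - a[1::2]) / np.sqrt(2)
        details.append(d)
    return a, details[::-1]  # coarsest detail first

def haar_reconstruct(a, details):
    """Exact inverse of haar_decompose."""
    for d in details:
        up = np.empty(2 * a.size)
        up[0::2] = (a + d) / np.sqrt(2)
        up[1::2] = (a - d) / np.sqrt(2)
        a = up
    return a

f = np.sin(np.linspace(0.0, 2.0 * np.pi, 64))
a, details = haar_decompose(f, 3)
rec = haar_reconstruct(a, details)                       # lossless round trip
compressed = [np.where(np.abs(d) > 0.05, d, 0.0) for d in details]
rec_c = haar_reconstruct(a, compressed)                  # lossy reconstruction
```

    Because the basis is orthonormal, the pointwise reconstruction error after thresholding is bounded by the sizes of the discarded coefficients.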

  16. Compression of multispectral Landsat imagery using the Embedded Zerotree Wavelet (EZW) algorithm

    NASA Technical Reports Server (NTRS)

    Shapiro, Jerome M.; Martucci, Stephen A.; Czigler, Martin

    1994-01-01

    The Embedded Zerotree Wavelet (EZW) algorithm has proven to be an extremely efficient and flexible compression algorithm for low bit rate image coding. The embedding algorithm attempts to order the bits in the bit stream in numerical importance and thus a given code contains all lower rate encodings of the same algorithm. Therefore, precise bit rate control is achievable and a target rate or distortion metric can be met exactly. Furthermore, the technique is fully image adaptive. An algorithm for multispectral image compression which combines the spectral redundancy removal properties of the image-dependent Karhunen-Loeve Transform (KLT) with the efficiency, controllability, and adaptivity of the embedded zerotree wavelet algorithm is presented. Results are shown which illustrate the advantage of jointly encoding spectral components using the KLT and EZW.
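
    The spectral-decorrelation half of the scheme, the image-dependent KLT across bands, can be sketched as follows; the zerotree coder itself is omitted. This is a generic eigendecomposition-based KLT with illustrative array shapes, not the authors' exact implementation.

```python
import numpy as np

def klt_spectral(cube):
    """Image-dependent KLT across the spectral axis of a (bands, rows, cols) cube."""
    b = cube.shape[0]
    x = cube.reshape(b, -1).astype(float)
    x = x - x.mean(axis=1, keepdims=True)
    cov = x @ x.T / x.shape[1]                 # band-by-band covariance
    evals, evecs = np.linalg.eigh(cov)         # ascending eigenvalues
    order = np.argsort(evals)[::-1]            # sort by decreasing variance
    comps = evecs[:, order].T @ x
    return comps.reshape(cube.shape), evals[order]

rng = np.random.default_rng(1)
scene = rng.standard_normal((32, 32))
# Six strongly correlated bands: a shared scene plus small per-band noise
cube = np.stack([scene + 0.1 * rng.standard_normal((32, 32)) for _ in range(6)])
comps, evals = klt_spectral(cube)
```

    For correlated bands most of the variance concentrates in the first component, which is what makes subsequent zerotree coding of the components efficient.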

  17. Microarray image enhancement by denoising using stationary wavelet transform.

    PubMed

    Wang, X H; Istepanian, Robert S H; Song, Yong Hua

    2003-12-01

    Microarray imaging is considered an important tool for large-scale analysis of gene expression. The accuracy of the gene expression measurements depends on the experiment itself and on further image processing. It is well known that the noise introduced during the experiment will greatly affect the accuracy of the gene expression measurements. How to eliminate the effect of the noise constitutes a challenging problem in microarray analysis. Traditionally, statistical methods are used to estimate the noise while the microarray images are being processed. In this paper, we present a new approach to deal with the noise inherent in the microarray image processing procedure: denoising the images before further processing using the stationary wavelet transform (SWT). The time-invariant characteristic of SWT is particularly useful in image denoising. Testing on sample microarray images has shown an enhanced image quality. The results also show superior performance to the conventional discrete wavelet transform and the widely used adaptive Wiener filter in this procedure.
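
    A minimal sketch of SWT-based image denoising in the spirit of this abstract, using PyWavelets' `swt2`/`iswt2` with a universal soft threshold; the wavelet, level, and noise estimator are illustrative assumptions, not the authors' exact settings.

```python
import numpy as np
import pywt  # PyWavelets

def swt_denoise(image, wavelet="haar", level=2):
    """Denoise with the shift-invariant stationary wavelet transform."""
    # Robust noise estimate from horizontal first differences (MAD).
    sigma = np.median(np.abs(np.diff(image, axis=1))) / (0.6745 * np.sqrt(2))
    thr = sigma * np.sqrt(2.0 * np.log(image.size))   # universal threshold
    coeffs = pywt.swt2(image, wavelet, level=level)
    shrunk = [(cA, tuple(pywt.threshold(d, thr, mode="soft") for d in det))
              for cA, det in coeffs]
    return pywt.iswt2(shrunk, wavelet)

rng = np.random.default_rng(2)
xx, yy = np.meshgrid(np.linspace(0, np.pi, 64), np.linspace(0, np.pi, 64))
clean = np.sin(2 * xx) * np.sin(2 * yy)       # smooth stand-in for a spot image
noisy = clean + 0.3 * rng.standard_normal(clean.shape)
den = swt_denoise(noisy)
```

    The redundancy of the SWT (no downsampling) is what avoids the blocking and ringing that decimated-DWT thresholding can introduce.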

  18. Background Subtraction Based on Three-Dimensional Discrete Wavelet Transform

    PubMed Central

    Han, Guang; Wang, Jinkuan; Cai, Xi

    2016-01-01

    Background subtraction without a separate training phase has become a critical task, because a sufficiently long and clean training sequence is usually unavailable, and people generally thirst for immediate detection results from the first frame of a video. Without a training phase, we propose a background subtraction method based on three-dimensional (3D) discrete wavelet transform (DWT). Static backgrounds with few variations along the time axis are characterized by intensity temporal consistency in the 3D space-time domain and, hence, correspond to low-frequency components in the 3D frequency domain. Enlightened by this, we eliminate low-frequency components that correspond to static backgrounds using the 3D DWT in order to extract moving objects. Owing to the multiscale analysis property of the 3D DWT, the elimination of low-frequency components in sub-bands of the 3D DWT is equivalent to performing a pyramidal 3D filter. This 3D filter brings advantages to our method in reserving the inner parts of detected objects and reducing the ringing around object boundaries. Moreover, we make use of wavelet shrinkage to remove disturbance of intensity temporal consistency and introduce an adaptive threshold based on the entropy of the histogram to obtain optimal detection results. Experimental results show that our method works effectively in situations lacking training opportunities and outperforms several popular techniques. PMID:27043570
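
    The core idea, that static backgrounds occupy the temporal low-frequency band and can be removed by discarding the approximation coefficients of a wavelet transform taken along the time axis, can be sketched in simplified form. This is a one-axis illustration using PyWavelets with a Haar wavelet, not the paper's full 3D DWT pipeline with wavelet shrinkage and entropy-based thresholding.

```python
import numpy as np
import pywt  # PyWavelets

def moving_components(video, wavelet="haar", level=2):
    """Zero the temporal approximation (low-frequency) band and reconstruct.

    Static voxels are constant along time, so their Haar detail
    coefficients vanish and they reconstruct to (near) zero."""
    coeffs = pywt.wavedec(video, wavelet, level=level, axis=0)
    coeffs[0] = np.zeros_like(coeffs[0])  # remove temporally static content
    return pywt.waverec(coeffs, wavelet, axis=0)

video = np.ones((16, 8, 8))          # static background
video[7:, 2:4, 2:4] += 5.0           # object appearing at frame 7
fg = moving_components(video)
```

    In the full method the transform is applied in all three dimensions, so eliminating low-frequency sub-bands acts as the pyramidal 3D filter described above.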

  19. Wavelet transforms for electroencephalographic spike and seizure detection

    NASA Astrophysics Data System (ADS)

    Schiff, Steven J.; Milton, John G.

    1993-11-01

    The application of wavelet transforms (WT) to experimental data from the nervous system has been hindered by the lack of a straightforward method to handle noise. A noise reduction technique, developed recently for use in wavelet cluster analysis in cosmology and astronomy, is here adapted for electroencephalographic (EEG) time-series data. Noise is filtered using control surrogate data sets generated from randomized aspects of the original time-series. In this study, WT were applied to EEG data from human patients undergoing brain mapping with implanted subdural electrodes for the localization of epileptic seizure foci. EEG data in 1D were analyzed from individual electrodes, and 2D data from electrode grids. These techniques are a powerful means to identify epileptic spikes in such data, and offer a method to identify the onset and spatial extent of epileptic seizure foci. The method is readily applied to the detection of structure in stationary and non-stationary time-series from a variety of physical systems.

  20. Background Subtraction Based on Three-Dimensional Discrete Wavelet Transform.

    PubMed

    Han, Guang; Wang, Jinkuan; Cai, Xi

    2016-01-01

    Background subtraction without a separate training phase has become a critical task, because a sufficiently long and clean training sequence is usually unavailable, and people generally thirst for immediate detection results from the first frame of a video. Without a training phase, we propose a background subtraction method based on three-dimensional (3D) discrete wavelet transform (DWT). Static backgrounds with few variations along the time axis are characterized by intensity temporal consistency in the 3D space-time domain and, hence, correspond to low-frequency components in the 3D frequency domain. Enlightened by this, we eliminate low-frequency components that correspond to static backgrounds using the 3D DWT in order to extract moving objects. Owing to the multiscale analysis property of the 3D DWT, the elimination of low-frequency components in sub-bands of the 3D DWT is equivalent to performing a pyramidal 3D filter. This 3D filter brings advantages to our method in reserving the inner parts of detected objects and reducing the ringing around object boundaries. Moreover, we make use of wavelet shrinkage to remove disturbance of intensity temporal consistency and introduce an adaptive threshold based on the entropy of the histogram to obtain optimal detection results. Experimental results show that our method works effectively in situations lacking training opportunities and outperforms several popular techniques. PMID:27043570

  1. Multiparameter radar analysis using wavelets

    NASA Astrophysics Data System (ADS)

    Tawfik, Ben Bella Sayed

    Multiparameter radars have been used in the interpretation of many meteorological phenomena. Rainfall estimates can be obtained from multiparameter radar measurements. Studying and analyzing the spatial variability of different rainfall algorithms, namely R(ZH), based on reflectivity; R(ZH, ZDR), based on reflectivity and differential reflectivity; R(KDP), based on specific differential phase; and R(KDP, ZDR), based on specific differential phase and differential reflectivity, is important for radar applications. The data used in this research were collected using the CSU-CHILL, CP-2, and S-POL radars. In this research multiple objectives are addressed using wavelet analysis, namely: (1) space/time variability of various rainfall algorithms, (2) separation of convective and stratiform storms based on reflectivity measurements, and (3) detection of features such as bright bands. The bright band is a multiscale edge detection problem. In this research, the technique of multiscale edge detection is applied to the radar data collected using the CP-2 radar on August 23, 1991 to detect the melting layer. In the analysis of space/time variability of rainfall algorithms, the wavelet variance gives an idea of the statistics of the radar field. In addition, multiresolution analyses of different rainfall estimates based on the four algorithms, namely R(ZH), R(ZH, ZDR), R(KDP), and R(KDP, ZDR), are performed. The flood data of July 29, 1997 collected by the CSU-CHILL radar were used for this analysis. Another set of S-POL radar data collected on May 2, 1997 at Wichita, Kansas was used as well. At each level of approximation, the detail and the approximation components are analyzed. Based on this analysis, the rainfall algorithms can be judged. From this analysis, an important result was obtained: the Z-R algorithms that are widely used do not show the full spatial variability of rainfall.
In addition another intuitively obvious result

  2. A multi-dimensional Smolyak collocation method in curvilinear coordinates for computing vibrational spectra

    SciTech Connect

    Avila, Gustavo; Carrington, Tucker

    2015-12-07

    In this paper, we improve the collocation method for computing vibrational spectra that was presented in Avila and Carrington, Jr. [J. Chem. Phys. 139, 134114 (2013)]. Using an iterative eigensolver, energy levels and wavefunctions are determined from values of the potential on a Smolyak grid. The kinetic energy matrix-vector product is evaluated by transforming a vector labelled with (nondirect product) grid indices to a vector labelled by (nondirect product) basis indices. Both the transformation and application of the kinetic energy operator (KEO) scale favorably. Collocation facilitates dealing with complicated KEOs because it obviates the need to calculate integrals of coordinate dependent coefficients of differential operators. The ideas are tested by computing energy levels of HONO using a KEO in bond coordinates.

  3. The Benard problem: A comparison of finite difference and spectral collocation eigenvalue solutions

    NASA Technical Reports Server (NTRS)

    Skarda, J. Raymond Lee; Mccaughan, Frances E.; Fitzmaurice, Nessan

    1995-01-01

    The application of spectral methods, using a Chebyshev collocation scheme, to solve hydrodynamic stability problems is demonstrated on the Benard problem. Implementation of the Chebyshev collocation formulation is described. The performance of the spectral scheme is compared with that of a 2nd order finite difference scheme. An exact solution to the Marangoni-Benard problem is used to evaluate the performance of both schemes. The error of the spectral scheme is at least seven orders of magnitude smaller than finite difference error for a grid resolution of N = 15 (number of points used). The performance of the spectral formulation far exceeded the performance of the finite difference formulation for this problem. The spectral scheme required only slightly more effort to set up than the 2nd order finite difference scheme. This suggests that the spectral scheme may actually be faster to implement than higher order finite difference schemes.
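
    The comparison reported here is easy to reproduce in miniature. Below, a standard Chebyshev collocation differentiation matrix (Trefethen's construction) differentiates e^x and is compared against a 2nd-order finite difference on the same number of points; the test function is an illustrative choice, not the Marangoni-Benard eigenproblem itself.

```python
import numpy as np

def cheb(n):
    """Chebyshev collocation points and differentiation matrix (Trefethen)."""
    x = np.cos(np.pi * np.arange(n + 1) / n)             # Chebyshev points
    c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
    dX = x[:, None] - x[None, :]
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))
    D -= np.diag(D.sum(axis=1))                          # negative-sum trick
    return D, x

D, x = cheb(15)                                          # 16 collocation points
err_spec = np.max(np.abs(D @ np.exp(x) - np.exp(x)))     # d/dx e^x = e^x

xu = np.linspace(-1.0, 1.0, 16)                          # uniform grid, same N
err_fd = np.max(np.abs(np.gradient(np.exp(xu), xu[1] - xu[0]) - np.exp(xu)))
```

    For a smooth function like e^x the spectral error at N = 15 is many orders of magnitude below the finite difference error, mirroring the abstract's observation.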

  4. Domain decomposition methods for systems of conservation laws: Spectral collocation approximations

    NASA Technical Reports Server (NTRS)

    Quarteroni, Alfio

    1989-01-01

    Hyperbolic systems of conservation laws are considered which are discretized in space by spectral collocation methods and advanced in time by finite difference schemes. At any time level, a domain decomposition method based on an iteration-by-subdomain procedure is introduced, yielding at each step a sequence of independent subproblems (one for each subdomain) that can be solved simultaneously. The method is set out for a general nonlinear problem in several space variables. The convergence analysis, however, is carried out only for a linear one-dimensional system with continuous solutions. A precise form of the error reduction factor at each iteration is derived. Although the method is applied here to the case of spectral collocation approximation only, the idea is fairly general and can be used in a different context as well. For instance, its application to space discretization by finite differences is straightforward.

  5. A multi-dimensional Smolyak collocation method in curvilinear coordinates for computing vibrational spectra

    NASA Astrophysics Data System (ADS)

    Avila, Gustavo; Carrington, Tucker

    2015-12-01

    In this paper, we improve the collocation method for computing vibrational spectra that was presented in Avila and Carrington, Jr. [J. Chem. Phys. 139, 134114 (2013)]. Using an iterative eigensolver, energy levels and wavefunctions are determined from values of the potential on a Smolyak grid. The kinetic energy matrix-vector product is evaluated by transforming a vector labelled with (nondirect product) grid indices to a vector labelled by (nondirect product) basis indices. Both the transformation and application of the kinetic energy operator (KEO) scale favorably. Collocation facilitates dealing with complicated KEOs because it obviates the need to calculate integrals of coordinate dependent coefficients of differential operators. The ideas are tested by computing energy levels of HONO using a KEO in bond coordinates.

  6. A multi-dimensional Smolyak collocation method in curvilinear coordinates for computing vibrational spectra.

    PubMed

    Avila, Gustavo; Carrington, Tucker

    2015-12-01

    In this paper, we improve the collocation method for computing vibrational spectra that was presented in Avila and Carrington, Jr. [J. Chem. Phys. 139, 134114 (2013)]. Using an iterative eigensolver, energy levels and wavefunctions are determined from values of the potential on a Smolyak grid. The kinetic energy matrix-vector product is evaluated by transforming a vector labelled with (nondirect product) grid indices to a vector labelled by (nondirect product) basis indices. Both the transformation and application of the kinetic energy operator (KEO) scale favorably. Collocation facilitates dealing with complicated KEOs because it obviates the need to calculate integrals of coordinate dependent coefficients of differential operators. The ideas are tested by computing energy levels of HONO using a KEO in bond coordinates. PMID:26646870

  7. The convergence problem of collocation solutions in the framework of the stochastic interpretation

    NASA Astrophysics Data System (ADS)

    Sansò, F.; Venuti, G.

    2011-01-01

    The problem of the convergence of the collocation solution to the true gravity field was defined long ago (Tscherning in Boll Geod Sci Affini 39:221-252, 1978) and some results were derived, in particular by Krarup (Boll Geod Sci Affini 40:225-240, 1981). The problem is taken up again in the context of the stochastic interpretation of collocation theory and some new results are derived, showing that, when the potential T can be really continued down to a Bjerhammar sphere, we have a quite general convergence property in the noiseless case. When noise is present in data, still reasonable convergence results hold true. "Democrito che 'l mondo a caso pone" "Democritus who made the world stochastic" Dante Alighieri, La Divina Commedia, Inferno, IV - 136

  8. A space-time spectral collocation algorithm for the variable order fractional wave equation.

    PubMed

    Bhrawy, A H; Doha, E H; Alzaidy, J F; Abdelkawy, M A

    2016-01-01

The variable order wave equation plays a major role in acoustics, electromagnetics, and fluid dynamics. In this paper, we consider the space-time variable order fractional wave equation with variable coefficients. We propose an effective numerical method for solving the aforementioned problem in a bounded domain. The shifted Jacobi polynomials are used as basis functions, and the variable-order fractional derivative is described in the Caputo sense. The proposed method is a combination of the shifted Jacobi-Gauss-Lobatto collocation scheme for the spatial discretization and the shifted Jacobi-Gauss-Radau collocation scheme for the temporal discretization. The aforementioned problem is then reduced to a system of easily solvable algebraic equations. Finally, numerical examples are presented to show the effectiveness of the proposed numerical method. PMID:27536504

  9. Global collocation methods for approximation and the solution of partial differential equations

    NASA Technical Reports Server (NTRS)

    Solomonoff, A.; Turkel, E.

    1986-01-01

Polynomial interpolation methods are applied both to the approximation of functions and to the numerical solution of hyperbolic and elliptic partial differential equations. The derivative matrix for a general sequence of collocation points is constructed. The approximate derivative is then found by a matrix-vector multiply. The effects of several factors on the performance of these methods, including the choice of collocation points, are then explored. The resolution of the schemes for both smooth functions and functions with steep gradients or discontinuities in some derivative is also studied, as is the accuracy when the gradients occur both near the center of the region and in the vicinity of the boundary. The importance of the aliasing limit on the resolution of the approximation is investigated in detail. Also examined is the effect of boundary treatment on the stability and accuracy of the scheme.
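
The derivative-matrix construction described above can be sketched directly. The code below builds the first-derivative matrix for an arbitrary set of distinct collocation points from barycentric weights (one standard construction; the paper's exact formulation may differ) and applies it as a matrix-vector multiply; the Chebyshev-Gauss-Lobatto nodes are an illustrative choice.

```python
import numpy as np

def diff_matrix(x):
    """First-derivative collocation matrix for arbitrary distinct nodes x.

    D @ f(x) approximates f'(x) and is exact for polynomials of
    degree <= len(x) - 1.
    """
    n = len(x)
    # Barycentric weights: w_j = 1 / prod_{k != j} (x_j - x_k).
    diff = x[:, None] - x[None, :] + np.eye(n)   # 1 on the diagonal
    w = 1.0 / diff.prod(axis=1)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                D[i, j] = (w[j] / w[i]) / (x[i] - x[j])
        D[i, i] = -D[i].sum()                    # rows sum to zero
    return D

# Chebyshev-Gauss-Lobatto points cluster near the boundary, helping
# resolve steep gradients there.
x = np.cos(np.pi * np.arange(9) / 8)
D = diff_matrix(x)
deriv = D @ x**3                                 # exact: 3 x^2
```

The zero row sums encode the fact that the derivative of a constant vanishes, a useful sanity check on any differentiation matrix.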

  10. Raman lidar profiling of atmospheric water vapor: Simultaneous measurements with two collocated systems

    NASA Technical Reports Server (NTRS)

    Goldsmith, J. E. M.; Bisson, Scott E.; Ferrare, Richard A.; Evans, Keith D.; Whiteman, David N.; Melfi, S. H.

    1994-01-01

Raman lidar is a leading candidate for providing the detailed space- and time-resolved measurements of water vapor needed by a variety of atmospheric studies. Simultaneous measurements of atmospheric water vapor are described using two collocated Raman lidar systems. These lidar systems, developed at the NASA/Goddard Space Flight Center and Sandia National Laboratories, acquired approximately 12 hours of simultaneous water vapor data during three nights in November 1992 while the systems were collocated at the Goddard Space Flight Center. Although these lidar systems differ substantially in their design, measured water vapor profiles agreed to within 0.15 g/kg between altitudes of 1 and 5 km. Comparisons with coincident radiosondes showed all instruments agreed within 0.2 g/kg in this same altitude range. Both lidars also clearly showed the advection of water vapor in the middle troposphere and the pronounced increase in water vapor in the nocturnal boundary layer that occurred during one night.

  11. Parallel Implementation of a High Order Implicit Collocation Method for the Heat Equation

    NASA Technical Reports Server (NTRS)

    Kouatchou, Jules; Halem, Milton (Technical Monitor)

    2000-01-01

We combine a high order compact finite difference approximation and collocation techniques to numerically solve the two dimensional heat equation. The resulting method is implicit and can be parallelized with a strategy that allows parallelization across both time and space. We compare the parallel implementation of the new method with a classical implicit method, namely the Crank-Nicolson method, where the parallelization is done across space only. Numerical experiments are carried out on the SGI Origin 2000.
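
For reference, the classical comparison scheme mentioned above can be sketched in one space dimension. This is a minimal Crank-Nicolson solver for u_t = u_xx with homogeneous Dirichlet boundaries, not the paper's high-order collocation method; grid sizes and time step are illustrative.

```python
import numpy as np

def crank_nicolson(u0, dx, dt, steps):
    """Crank-Nicolson for u_t = u_xx on interior points, u = 0 at the ends."""
    n = len(u0)
    r = dt / (2.0 * dx**2)
    # Implicit (A) and explicit (B) tridiagonal operators.
    A = (np.diag(np.full(n, 1.0 + 2.0 * r))
         + np.diag(np.full(n - 1, -r), 1) + np.diag(np.full(n - 1, -r), -1))
    B = (np.diag(np.full(n, 1.0 - 2.0 * r))
         + np.diag(np.full(n - 1, r), 1) + np.diag(np.full(n - 1, r), -1))
    u = u0.copy()
    for _ in range(steps):
        u = np.linalg.solve(A, B @ u)      # one linear solve per time step
    return u

x = np.linspace(0.0, 1.0, 101)
u0 = np.sin(np.pi * x)[1:-1]               # interior values; boundaries are 0
u = crank_nicolson(u0, x[1] - x[0], 1e-3, 100)
# Exact solution at t = 0.1: sin(pi x) * exp(-pi^2 * 0.1).
```

Because each step couples all of space through a linear solve, parallelizing such implicit schemes across time as well as space (as the paper does) is the nontrivial part.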

  12. Nodal collocation approximation for the multidimensional PL equations applied to transport source problems

    SciTech Connect

    Verdu, G.; Capilla, M.; Talavera, C. F.; Ginestar, D.

    2012-07-01

    PL equations are classical high order approximations to the transport equations which are based on the expansion of the angular dependence of the angular neutron flux and the nuclear cross sections in terms of spherical harmonics. A nodal collocation method is used to discretize the PL equations associated with a neutron source transport problem. The performance of the method is tested solving two 1D problems with analytical solution for the transport equation and a classical 2D problem. (authors)

  13. Quadratic spline collocation and parareal deferred correction method for parabolic PDEs

    NASA Astrophysics Data System (ADS)

    Liu, Jun; Wang, Yan; Li, Rongjian

    2016-06-01

In this paper, we consider a linear parabolic PDE, using optimal quadratic spline collocation (QSC) methods for the space discretization and the parareal technique on the time domain. Meanwhile, a deferred correction technique is used to improve the accuracy during the iterations. The error estimation is presented and the stability is analyzed. Numerical experiments, carried out on a parallel computer with 40 CPUs, exhibit the effectiveness of the hybrid algorithm.

  14. Single-grid spectral collocation for the Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Bernardi, Christine; Canuto, Claudio; Maday, Yvon; Metivet, Brigitte

    1988-01-01

    The aim of the paper is to study a collocation spectral method to approximate the Navier-Stokes equations: only one grid is used, which is built from the nodes of a Gauss-Lobatto quadrature formula, either of Legendre or of Chebyshev type. The convergence is proven for the Stokes problem provided with inhomogeneous Dirichlet conditions, then thoroughly analyzed for the Navier-Stokes equations. The practical implementation algorithm is presented, together with numerical results.

  15. Wavelet transforms as solutions of partial differential equations

    SciTech Connect

    Zweig, G.

    1997-10-01

    This is the final report of a three-year, Laboratory Directed Research and Development (LDRD) project at Los Alamos National Laboratory (LANL). Wavelet transforms are useful in representing transients whose time and frequency structure reflect the dynamics of an underlying physical system. Speech sound, pressure in turbulent fluid flow, or engine sound in automobiles are excellent candidates for wavelet analysis. This project focused on (1) methods for choosing the parent wavelet for a continuous wavelet transform in pattern recognition applications and (2) the more efficient computation of continuous wavelet transforms by understanding the relationship between discrete wavelet transforms and discretized continuous wavelet transforms. The most interesting result of this research is the finding that the generalized wave equation, on which the continuous wavelet transform is based, can be used to understand phenomena that relate to the process of hearing.

  16. Polyvinylidene fluoride film sensors in collocated feedback structural control: application for suppressing impact-induced disturbances.

    PubMed

    Ma, Chien-Ching; Chuang, Kuo-Chih; Pan, Shan-Ying

    2011-12-01

    Polyvinylidene fluoride (PVDF) films are light, flexible, and have high piezoelectricity. Because of these advantages, they have been widely used as sensors in applications such as underwater investigation, nondestructive damage detection, robotics, and active vibration suppression. PVDF sensors are especially preferred over conventional strain gauges in active vibration control because the PVDF sensors are easy to cut into different sizes or shapes as piezoelectric actuators and they can then be placed as collocated pairs. In this work, to focus on demonstrating the dynamic sensing performance of the PVDF film sensor, we revisit the active vibration control problem of a cantilever beam using a collocated lead zirconate titanate (PZT) actuator/PVDF film sensor pair. Before applying active vibration control, the measurement characteristics of the PVDF film sensor are studied by simultaneous comparison with a strain gauge. The loading effect of the piezoelectric actuator on the cantilever beam is also investigated in this paper. Finally, four simple, robust active vibration controllers are employed with the collocated PZT/PVDF pair to suppress vibration of the cantilever beam subjected to impact loadings. The four controllers are the velocity feedback controller, the integral resonant controller (IRC), the resonant controller, and the positive position feedback (PPF) controller. Suppression of impact disturbances is especially suitable for the purpose of demonstrating the dynamic sensing performance of the PVDF sensor. The experimental results also provide suggestions for choosing between the previously mentioned controllers, which have been proven to be effective in suppressing impact-induced vibrations.
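
The simplest of the four controllers above, velocity feedback, can be illustrated on a single-mode model of the beam tip. The sketch below is a hypothetical toy model, not the experimental setup: an impact is modelled as an initial velocity, the feedback adds damping proportional to the measured velocity, and all numerical values are made up for the demo.

```python
import numpy as np

def residual_energy(gain, w=2 * np.pi * 10, zeta=0.005, dt=1e-4, steps=20000):
    """Vibration energy left after 2 s for x'' + 2*zeta*w*x' + w^2*x = -gain*x'.

    Velocity feedback enters purely as extra damping on the measured velocity.
    """
    x, v = 0.0, 1.0                                # impact -> initial velocity
    for _ in range(steps):
        a = -2 * zeta * w * v - w**2 * x - gain * v
        v += a * dt                                # semi-implicit Euler
        x += v * dt
    return 0.5 * v**2 + 0.5 * w**2 * x**2          # remaining energy

open_loop = residual_energy(gain=0.0)              # lightly damped beam mode
closed_loop = residual_energy(gain=5.0)            # velocity feedback on
```

The closed-loop energy decays far faster, which is why velocity feedback is a natural baseline against which IRC, resonant, and PPF controllers are compared.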

  17. Stable coupling between vector and scalar variables for the IDO scheme on collocated grids

    NASA Astrophysics Data System (ADS)

    Imai, Yohsuke; Aoki, Takayuki

    2006-06-01

    The Interpolated Differential Operator (IDO) scheme on collocated grids provides fourth-order discretizations for all the terms of the fluid flow equations. However, computations of fluid flows on collocated grids are not guaranteed to produce accurate solutions because of the poor coupling between velocity vector and scalar variables. A stable coupling method for the IDO scheme on collocated grids is proposed, where a new representation of first-order derivatives is adopted. It is important in deriving the representation to refer to the variables at neighboring grid points, keeping fourth-order truncation error. It is clear that accuracy and stability are drastically improved for shallow water equations in comparison with the conventional IDO scheme. The effects of the stable coupling are confirmed in incompressible flow calculations for DNS of turbulence and a driven cavity problem. The introduction of a rational function into the proposed method makes it possible to calculate shock waves with the initial conditions of extreme density and pressure jumps.

  18. Spectral optical layer properties of cirrus from collocated airborne measurements - a feasibility study

    NASA Astrophysics Data System (ADS)

    Finger, F.; Werner, F.; Klingebiel, M.; Ehrlich, A.; Jäkel, E.; Voigt, M.; Borrmann, S.; Spichtinger, P.; Wendisch, M.

    2015-07-01

Spectral optical layer properties of cirrus are derived from simultaneous and vertically collocated measurements of spectral upward and downward solar irradiance above and below the cloud layer and concurrent in situ microphysical sampling. From the irradiance data, spectral transmissivity, absorptivity, reflectivity, and cloud top albedo of the observed cirrus layer are obtained. At the same time, microphysical properties of the cirrus were sampled. The close collocation of the radiative and microphysical measurements, above, beneath and inside the cirrus, is obtained by using a research aircraft (Learjet 35A) in tandem with a towed platform called AIRTOSS (AIRcraft TOwed Sensor Shuttle). AIRTOSS can be released from and retracted back to the research aircraft by means of a cable up to a distance of 4 km. Data were collected in two field campaigns above the North and Baltic Sea in spring and late summer 2013. Exemplary results from one measurement flight are discussed to illustrate the benefits of collocated sampling. Based on the measured cirrus microphysical properties, radiative transfer simulations were applied to quantify the impact of cloud particle properties such as crystal shape, effective radius reff, and optical thickness τ on cirrus optical layer properties. The effects of clouds beneath the cirrus are also evaluated. They cause differences in the layer properties of the cirrus by a factor of 2 to 3, and for cirrus radiative forcing by up to a factor of 4. If low-level clouds below cirrus are not considered, the solar cooling due to the cirrus is significantly overestimated.

  19. A novel stochastic collocation method for uncertainty propagation in complex mechanical systems

    NASA Astrophysics Data System (ADS)

    Qi, WuChao; Tian, SuMei; Qiu, ZhiPing

    2015-02-01

This paper presents a novel stochastic collocation method based on the equivalent weak form of the multivariate function integral to quantify and manage uncertainties in complex mechanical systems. The proposed method, which combines the advantages of the response surface method and the traditional stochastic collocation method, sets integral points only at the guide lines of the response surface. The statistics of an engineering problem with many uncertain parameters are then transformed into a linear combination of the statistics of simple functions. Furthermore, a simple method for determining the weight-factor sets is discussed in detail, and the weight-factor sets of two commonly used probabilistic distribution types are given in table form. Studies of computational accuracy and effort show that the method strikes a good balance between the two. It should be noted that the algorithm is non-gradient and non-intrusive, with strong portability. For the sake of validating the procedure, three numerical examples, concerning a mathematical function with an analytical expression, the structural design of a straight wing, and the flutter analysis of a composite wing, are used to show the effectiveness of the guided stochastic collocation method.
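
The core idea of stochastic collocation, turning statistics of a response into a weighted sum of deterministic model evaluations at collocation points, can be sketched for a single Gaussian input. The example below uses plain Gauss-Hermite quadrature; it does not reproduce the paper's guided point placement, and the response function is an arbitrary illustration.

```python
import numpy as np

def gauss_hermite_stats(g, n_points=8):
    """Mean and variance of g(X), X ~ N(0,1), via Gauss-Hermite collocation.

    hermgauss gives nodes/weights for the weight e^{-x^2}, so nodes are
    rescaled by sqrt(2) and weights by 1/sqrt(pi) for a standard normal.
    """
    nodes, weights = np.polynomial.hermite.hermgauss(n_points)
    x = np.sqrt(2.0) * nodes
    w = weights / np.sqrt(np.pi)
    mean = np.sum(w * g(x))
    var = np.sum(w * g(x) ** 2) - mean**2
    return mean, var

# For g(x) = x^2: E[X^2] = 1 and Var[X^2] = 2 exactly.
mean, var = gauss_hermite_stats(lambda x: x**2)
```

With many uncertain parameters the number of tensor-product points explodes, which motivates the reduced point sets (guide lines of the response surface) proposed in the paper.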

  1. Least squares collocation applied to local gravimetric solutions from satellite gravity gradiometry data

    NASA Technical Reports Server (NTRS)

    Robbins, J. W.

    1985-01-01

An autonomous spaceborne gravity gradiometer mission is being considered as a post Geopotential Research Mission project. The introduction of satellite gradiometry data to geodesy is expected to improve solid earth gravity models. The possibility of utilizing gradiometer data for the determination of pertinent gravimetric quantities on a local basis is explored. The analytical technique of least squares collocation is investigated for its usefulness in local solutions of this type. It is assumed, in the error analysis, that the vertical gravity gradient component of the gradient tensor is used as the raw data signal from which the corresponding reference gradients are removed to create the centered observations required in the collocation solution. The reference gradients are computed from a high degree and order geopotential model. The solution can be made in terms of mean or point gravity anomalies, height anomalies, or other useful gravimetric quantities depending on the choice of covariance types. Selected for this study were 30' x 30' mean gravity and height anomalies. Existing software and new software are utilized to implement the collocation technique. It was determined that satellite gradiometry data at an altitude of 200 km can be used successfully for the determination of 30' x 30' mean gravity anomalies to an accuracy of 9.2 mgal from this algorithm. It is shown that the resulting accuracy estimates are sensitive to gravity model coefficient uncertainties, data reduction assumptions and satellite mission parameters.
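
The least squares collocation prediction step has a compact linear-algebra form: with signal covariance C_ss, noise covariance D, and cross-covariance C_ps between prediction and observation points, the estimate is s_hat = C_ps (C_ss + D)^{-1} l for centered observations l. The sketch below illustrates this with a generic Gaussian covariance function and synthetic 1-D data; the gravity-field covariance models of the report are not reproduced.

```python
import numpy as np

def lsc_predict(x_obs, l, x_new, sigma2=1.0, corr_len=1.0, noise=1e-4):
    """Least squares collocation: s_hat = C_ps (C_ss + D)^{-1} l."""
    cov = lambda a, b: sigma2 * np.exp(
        -((a[:, None] - b[None, :]) / corr_len) ** 2)
    C_ss = cov(x_obs, x_obs) + noise * np.eye(len(x_obs))  # + noise cov D
    C_ps = cov(x_new, x_obs)
    return C_ps @ np.linalg.solve(C_ss, l)

x_obs = np.linspace(0.0, 5.0, 20)
l = np.sin(x_obs)                          # centered observations (synthetic)
s_hat = lsc_predict(x_obs, l, np.array([2.5]))
```

Choosing the covariance types is what selects the predicted quantity (mean anomalies, height anomalies, etc.) in the geodetic setting.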

  2. Image wavelet decomposition and applications

    NASA Technical Reports Server (NTRS)

    Treil, N.; Mallat, S.; Bajcsy, R.

    1989-01-01

The general problem of computer vision has been investigated for more than 20 years and is still one of the most challenging fields in artificial intelligence. Indeed, taking a look at the human visual system can give us an idea of the complexity of any solution to the problem of visual recognition. This general task can be decomposed into a whole hierarchy of problems ranging from pixel processing to high level segmentation and complex objects recognition. Contrasting an image at different representations provides useful information such as edges. An example of low level signal and image processing using the theory of wavelets is introduced which provides the basis for multiresolution representation. Like the human brain, we use a multiorientation process which detects features independently in different orientation sectors. So, images of the same orientation but of different resolutions are contrasted to gather information about an image. An interesting image representation using energy zero crossings is developed. This representation is shown to be experimentally complete and leads to some higher level applications such as edge and corner finding, which in turn provides two basic steps to image segmentation. The possibilities of feedback between different levels of processing are also discussed.

  3. Wavelet analysis deformation monitoring data of high-speed railway bridge

    NASA Astrophysics Data System (ADS)

    Tang, ShiHua; Huang, Qing; Zhou, Conglin; Xu, HongWei; Liu, YinTao; Li, FeiDa

    2015-12-01

Deformation monitoring data of high-speed railway bridges are inevitably affected by noise pollution. A deformation monitoring point on a high-speed railway bridge was measured over a long period with a Sokkia SDL30 electronic level, yielding a large number of deformation monitoring data that contain substantial noise. On the MATLAB software platform, 120 groups of deformation monitoring data were subjected to wavelet denoising, with the sym6 and db6 wavelet basis functions selected for analysis and noise removal. The original signal was decomposed into three wavelet levels of high-frequency and low-frequency coefficients; most of the noise resides in the high-frequency coefficients, which were processed with adaptive soft- and hard-threshold methods and then recombined with the low-frequency coefficients to reconstruct the denoised wavelet signal. Root mean square error (RMSE) and signal-to-noise ratio (SNR) were used as evaluation indices for denoising: the smaller the RMSE and the greater the SNR, the better the denoising effect. The experimental analysis supports several conclusions: the db6 wavelet basis function with the adaptive soft-threshold method gives the best denoising, with minimum RMSE and maximum SNR, and the reconstructed signal is smoother than the original. Compared to the other three methods, this method not only retains the useful part of the original signal but also achieves the goal of removing noise. It therefore has strong practical value in actual deformation monitoring.
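
The pipeline described above (multi-level decomposition, soft-thresholding of the high-frequency coefficients, reconstruction, RMSE/SNR scoring) can be sketched end to end. For brevity the sketch uses a hand-rolled Haar transform instead of the sym6/db6 bases of the study, and the signal and noise level are synthetic.

```python
import numpy as np

def haar_step(s):
    """One level of the orthonormal Haar transform: (approx, detail)."""
    return (s[0::2] + s[1::2]) / np.sqrt(2), (s[0::2] - s[1::2]) / np.sqrt(2)

def haar_inv(a, d):
    s = np.empty(2 * len(a))
    s[0::2] = (a + d) / np.sqrt(2)
    s[1::2] = (a - d) / np.sqrt(2)
    return s

def denoise(signal, threshold, levels=3):
    a, details = signal, []
    for _ in range(levels):
        a, d = haar_step(a)
        # Soft threshold: shrink detail (high-frequency) coefficients
        # toward zero, killing the small, mostly-noise ones.
        details.append(np.sign(d) * np.maximum(np.abs(d) - threshold, 0.0))
    for d in reversed(details):
        a = haar_inv(a, d)
    return a

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 512)
clean = np.sin(2 * np.pi * 4 * t)
noisy = clean + 0.2 * rng.standard_normal(512)
denoised = denoise(noisy, threshold=0.3)
rmse = np.sqrt(np.mean((denoised - clean) ** 2))
snr = 10 * np.log10(np.sum(clean**2) / np.sum((denoised - clean) ** 2))
```

With a wavelet library the same structure would use multi-level decomposition and per-level thresholding of the detail coefficients, exactly as the abstract describes.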

  4. Analysis of autostereoscopic three-dimensional images using multiview wavelets.

    PubMed

    Saveljev, Vladimir; Palchikova, Irina

    2016-08-10

    We propose that multiview wavelets can be used in processing multiview images. The reference functions for the synthesis/analysis of multiview images are described. The synthesized binary images were observed experimentally as three-dimensional visual images. The symmetric multiview B-spline wavelets are proposed. The locations recognized in the continuous wavelet transform correspond to the layout of the test objects. The proposed wavelets can be applied to the multiview, integral, and plenoptic images. PMID:27534470

  5. Wavelet based detection of manatee vocalizations

    NASA Astrophysics Data System (ADS)

    Gur, Berke M.; Niezrecki, Christopher

    2005-04-01

    The West Indian manatee (Trichechus manatus latirostris) has become endangered partly because of watercraft collisions in Florida's coastal waterways. Several boater warning systems, based upon manatee vocalizations, have been proposed to reduce the number of collisions. Three detection methods based on the Fourier transform (threshold, harmonic content and autocorrelation methods) were previously suggested and tested. In the last decade, the wavelet transform has emerged as an alternative to the Fourier transform and has been successfully applied in various fields of science and engineering including the acoustic detection of dolphin vocalizations. As of yet, no prior research has been conducted in analyzing manatee vocalizations using the wavelet transform. Within this study, the wavelet transform is used as an alternative to the Fourier transform in detecting manatee vocalizations. The wavelet coefficients are analyzed and tested against a specified criterion to determine the existence of a manatee call. The performance of the method presented is tested on the same data previously used in the prior studies, and the results are compared. Preliminary results indicate that using the wavelet transform as a signal processing technique to detect manatee vocalizations shows great promise.

  6. Wavelet formulation of the polarizable continuum model. II. Use of piecewise bilinear boundary elements.

    PubMed

    Bugeanu, Monica; Di Remigio, Roberto; Mozgawa, Krzysztof; Reine, Simen Sommerfelt; Harbrecht, Helmut; Frediani, Luca

    2015-12-21

    The simplicity of dielectric continuum models has made them a standard tool in almost any Quantum Chemistry (QC) package. Despite being intuitive from a physical point of view, the actual electrostatic problem at the cavity boundary is challenging: the underlying boundary integral equations depend on singular, long-range operators. The parametrization of the cavity boundary should be molecular-shaped, smooth and differentiable. Even the most advanced implementations, based on the integral equation formulation (IEF) of the polarizable continuum model (PCM), generally lead to working equations which do not guarantee convergence to the exact solution and/or might become numerically unstable in the limit of large refinement of the molecular cavity (small tesserae). This is because they generally make use of a surface parametrization with cusps (interlocking spheres) and employ collocation methods for the discretization (point charges). Wavelets on a smooth cavity are an attractive alternative to consider: for the operators involved, they lead to highly sparse matrices and precise error control. Moreover, by making use of a bilinear basis for the representation of operators and functions on the cavity boundary, all equations can be differentiated to enable the computation of geometrical derivatives. In this contribution, we present our implementation of the IEFPCM with bilinear wavelets on a smooth cavity boundary. The implementation has been carried out in our module PCMSolver and interfaced with LSDalton, demonstrating the accuracy of the method both for the electrostatic solvation energy and for linear response properties. In addition, the implementation in a module makes our framework readily available to any QC software with minimal effort. PMID:26256401

  7. [Wavelet entropy analysis of spontaneous EEG signals in Alzheimer's disease].

    PubMed

    Zhang, Meiyun; Zhang, Benshu; Chen, Ying

    2014-08-01

Wavelet entropy is a quantitative index to describe the complexity of signals. A continuous wavelet transform method was employed to analyze the spontaneous electroencephalogram (EEG) signals of mild, moderate and severe Alzheimer's disease (AD) patients and normal elderly control people in this study. Wavelet power spectra of the EEG signals were calculated from the wavelet coefficients. Wavelet entropies of mild, moderate and severe AD patients were compared with those of normal controls, and the correlation between wavelet entropy and MMSE score was analyzed. There existed significant differences in wavelet entropy among mild, moderate, severe AD patients and normal controls (P<0.01). Group comparisons showed that wavelet entropy for mild, moderate, and severe AD patients was significantly lower than that for normal controls, which was related to the narrow distribution of their wavelet power spectra; the statistical difference was significant (P<0.05). Further studies showed that the wavelet entropy of EEG and the MMSE score were significantly correlated (r = 0.601-0.799, P<0.01). Wavelet entropy is a quantitative indicator describing the complexity of EEG signals and is likely to be an electrophysiological index for AD diagnosis and severity assessment.
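
Wavelet entropy as used above is the Shannon entropy of the normalized wavelet power across scales: power spread evenly over scales gives the maximum, log(number of scales), while power concentrated at few scales (as reported for the AD groups) gives a low value. A minimal sketch, with made-up power values:

```python
import numpy as np

def wavelet_entropy(powers):
    """Shannon entropy of the normalized wavelet power spectrum."""
    p = np.asarray(powers, dtype=float)
    p = p / p.sum()              # relative wavelet energy per scale
    p = p[p > 0]                 # convention: 0 * log(0) = 0
    return -np.sum(p * np.log(p))

broad = wavelet_entropy([1.0, 1.0, 1.0, 1.0])     # power spread over 4 scales
narrow = wavelet_entropy([100.0, 1.0, 0.5, 0.1])  # power concentrated
```

Here `broad` equals log(4) and `narrow` is far smaller, mirroring the normal-control vs. AD-patient contrast the study reports.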

  8. Wavelet-based learning vector quantization for automatic target recognition

    NASA Astrophysics Data System (ADS)

    Chan, Lipchen A.; Nasrabadi, Nasser M.; Mirelli, Vincent

    1996-06-01

    An automatic target recognition classifier is constructed that uses a set of dedicated vector quantizers (VQs). The background pixels in each input image are properly clipped out by a set of aspect windows. The extracted target area for each aspect window is then enlarged to a fixed size, after which a wavelet decomposition splits the enlarged extraction into several subbands. A dedicated VQ codebook is generated for each subband of a particular target class at a specific range of aspects. Thus, each codebook consists of a set of feature templates that are iteratively adapted to represent a particular subband of a given target class at a specific range of aspects. These templates are then further trained by a modified learning vector quantization (LVQ) algorithm that enhances their discriminatory characteristics. A recognition rate of 69.0 percent is achieved on a highly cluttered test set.
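
The template-sharpening step above relies on the classic LVQ1 update: the nearest prototype moves toward a training vector of its own class and away from vectors of other classes. The sketch below applies that rule to synthetic 2-D clusters; the wavelet-subband feature extraction and codebook design of the paper are not reproduced.

```python
import numpy as np

def lvq1_train(X, y, protos, proto_labels, lr=0.1, epochs=20):
    """LVQ1: attract the winning prototype on a match, repel on a mismatch."""
    protos = protos.copy()
    for _ in range(epochs):
        for x, label in zip(X, y):
            i = np.argmin(np.linalg.norm(protos - x, axis=1))  # winner
            step = lr * (x - protos[i])
            protos[i] += step if proto_labels[i] == label else -step
    return protos

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (50, 2)),   # class 0 around (0, 0)
               rng.normal(3.0, 0.3, (50, 2))])  # class 1 around (3, 3)
y = np.array([0] * 50 + [1] * 50)
protos = lvq1_train(X, y, np.array([[0.5, 0.5], [2.5, 2.5]]),
                    np.array([0, 1]))
```

After training, each prototype sits near its class cluster, so nearest-prototype classification separates the classes; in the paper the "prototypes" are subband feature templates per target class and aspect range.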

  9. Simultaneous registration and segmentation of images in wavelet domain

    NASA Astrophysics Data System (ADS)

    Yoshida, Hiroyuki

    1999-10-01

A novel method for simultaneous registration and segmentation is developed. The method is designed to register two similar images while a region with significant difference is adaptively segmented. This is achieved by minimization of a non-linear functional that models the statistical properties of the subtraction of the two images. Minimization is performed in the wavelet domain by a coarse-to-fine approach to yield a mapping that yields the registration and the boundary that yields the segmentation. The new method was applied to the registration of the left and the right lung regions in chest radiographs for extraction of lung nodules while the normal anatomic structures such as ribs are removed. A preliminary result shows that our method is very effective in reducing the number of false detections obtained with our computer-aided diagnosis scheme for detection of lung nodules in chest radiographs.

  10. Wavelet Analysis of Space Solar Telescope Images

    NASA Astrophysics Data System (ADS)

    Zhu, Xi-An; Jin, Sheng-Zhen; Wang, Jing-Yu; Ning, Shu-Nian

    2003-12-01

The scientific satellite SST (Space Solar Telescope) is an important research project strongly supported by the Chinese Academy of Sciences. Every day, SST acquires 50 GB of data (after processing) but only 10 GB can be transmitted to the ground because of the limited time of satellite passage and limited channel volume. Therefore, the data must be compressed before transmission. Wavelet analysis is a new technique developed over the last 10 years, with great potential of application. We start with a brief introduction to the essential principles of wavelet analysis, and then describe the main idea of embedded zerotree wavelet coding, used for compressing the SST images. The results show that this coding is adequate for the job.

  11. Wavelet analysis for wind fields estimation.

    PubMed

    Leite, Gladeston C; Ushizima, Daniela M; Medeiros, Fátima N S; de Lima, Gilson G

    2010-01-01

    Wind field analysis from synthetic aperture radar images allows the estimation of wind direction and speed based on image descriptors. In this paper, we propose a framework to automate wind direction retrieval based on wavelet decomposition associated with spectral processing. We extend existing undecimated wavelet transform approaches, by including à trous with B(3) spline scaling function, in addition to other wavelet bases as Gabor and Mexican-hat. The purpose is to extract more reliable directional information, when wind speed values range from 5 to 10 ms(-1). Using C-band empirical models, associated with the estimated directional information, we calculate local wind speed values and compare our results with QuikSCAT scatterometer data. The proposed approach has potential application in the evaluation of oil spills and wind farms.
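
The à trous ("with holes") transform named above is an undecimated wavelet decomposition: at level j the B3 spline kernel [1, 4, 6, 4, 1]/16 is dilated by inserting 2^j - 1 zeros between taps, and each wavelet plane is the difference of successive smoothings. The sketch below shows the 1-D version with synthetic data; the paper applies the 2-D analogue to SAR imagery.

```python
import numpy as np

def a_trous(signal, levels=3):
    """Undecimated (a trous) decomposition with the B3 spline kernel.

    Returns the wavelet planes and the final smooth array; the planes
    plus the smooth array reconstruct the input exactly.
    """
    kernel = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0   # B3 spline
    c = signal.astype(float)
    planes = []
    for j in range(levels):
        dilated = np.zeros(4 * 2**j + 1)
        dilated[:: 2**j] = kernel            # insert 2^j - 1 zeros per gap
        smooth = np.convolve(c, dilated, mode="same")
        planes.append(c - smooth)            # detail at scale j
        c = smooth
    return planes, c

x = (np.sin(np.linspace(0.0, 6.0, 256))
     + np.random.default_rng(2).normal(0.0, 0.1, 256))
planes, residual = a_trous(x)
reconstructed = sum(planes) + residual       # telescoping sum: exact
```

Because no decimation occurs, every plane has the signal's full length, which is what makes the transform convenient for directional analysis of images.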

  12. Component identification of nonstationary signals using wavelets

    SciTech Connect

Otaduy, P.J.; Georgevich, V.

    1993-01-01

Fourier analysis is based on the decomposition of a signal into a linear combination of integral dilations of the base function e^(ix), i.e., of sinusoidal waves. The larger the dilation, the higher the frequency of the sinusoidal component. Each frequency component is of constant magnitude along the signal length; localized features are averaged over the signal's length, so time localization is absent. Wavelet analysis is based on the decomposition of a signal into a linear combination of binary dilations and dyadic translations of a base function with compact support, i.e., a basic wavelet. A basic wavelet function can be, with basic restrictions, any function suitable to serve as a window in both the time and frequency domains.

  15. Characterization and simulation of gunfire with wavelets

    SciTech Connect

    Smallwood, D.O.

    1998-09-01

    Gunfire is used as an example to show how the wavelet transform can be used to characterize and simulate nonstationary random events when an ensemble of events is available. The response of a structure to nearby firing of a high-firing rate gun has been characterized in several ways as a nonstationary random process. The methods all used some form of the discrete Fourier transform. The current paper will explore a simpler method to describe the nonstationary random process in terms of a wavelet transform. As was done previously, the gunfire record is broken up into a sequence of transient waveforms, each representing the response to the firing of a single round. The wavelet transform is performed on each of these records. The mean and standard deviation of the resulting wavelet coefficients describe the composite characteristics of the entire waveform. It is shown that the distribution of the wavelet coefficients is approximately Gaussian with a nonzero mean and that the coefficients at different times and levels are approximately independent. The gunfire is simulated by generating realizations of records of a single-round firing by computing the inverse wavelet transform from Gaussian random coefficients with the same mean and standard deviation as those estimated from the previously discussed gunfire record. The individual realizations are then assembled into a realization of a time history of many rounds firing. A second-order correction of the probability density function (pdf) is accomplished with a zero memory nonlinear (ZMNL) function. The method is straightforward, easy to implement, and produces a simulated record very much like the original measured gunfire record.
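The simulation recipe described above can be sketched with a one-level Haar transform standing in for the full wavelet decomposition: estimate the mean and standard deviation of each coefficient over an ensemble of single-round records, synthesize new rounds from Gaussian coefficients with the same statistics, and concatenate them. The "measured" ensemble below is a toy decaying transient, not gunfire data, and the second-order ZMNL correction is omitted.

```python
# Hedged sketch of wavelet-domain simulation of a nonstationary event.
import math, random

def haar(x):
    s = 1 / math.sqrt(2)
    return ([(a + b) * s for a, b in zip(x[0::2], x[1::2])],
            [(a - b) * s for a, b in zip(x[0::2], x[1::2])])

def inverse_haar(approx, detail):
    s = 1 / math.sqrt(2)
    out = []
    for a, d in zip(approx, detail):
        out.extend([(a + d) * s, (a - d) * s])
    return out

random.seed(0)
# Toy "measured" ensemble of single-round responses (noisy decaying transients).
rounds = [[math.exp(-0.5 * t) * (1 + 0.1 * random.gauss(0, 1))
           for t in range(8)] for _ in range(200)]

coeffs = [haar(r) for r in rounds]

def stats(seqs):
    n = len(seqs)
    means = [sum(col) / n for col in zip(*seqs)]
    stds = [math.sqrt(sum((v - m) ** 2 for v in col) / n)
            for col, m in zip(zip(*seqs), means)]
    return means, stds

a_mean, a_std = stats([a for a, _ in coeffs])
d_mean, d_std = stats([d for _, d in coeffs])

def simulate_round():
    a = [random.gauss(m, s) for m, s in zip(a_mean, a_std)]
    d = [random.gauss(m, s) for m, s in zip(d_mean, d_std)]
    return inverse_haar(a, d)

burst = [v for _ in range(5) for v in simulate_round()]   # five simulated rounds
```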

  16. Analysis of wavelet technology for NASA applications

    NASA Technical Reports Server (NTRS)

    Wells, R. O., Jr.

    1994-01-01

    The purpose of this grant was to introduce a broad group of NASA researchers and administrators to wavelet technology and to determine its future role in research and development at NASA JSC. The activities of several briefings held between NASA JSC scientists and Rice University researchers are discussed. An attached paper, 'Recent Advances in Wavelet Technology', summarizes some aspects of these briefings. Two proposals submitted to NASA reflect the primary areas of common interest. They are image analysis and numerical solutions of partial differential equations arising in computational fluid dynamics and structural mechanics.

  17. Numerical Algorithms Based on Biorthogonal Wavelets

    NASA Technical Reports Server (NTRS)

    Ponenti, Pj.; Liandrat, J.

    1996-01-01

    Wavelet bases are used to generate spaces of approximation for the resolution of bidimensional elliptic and parabolic problems. Under some specific hypotheses relating the properties of the wavelets to the order of the involved operators, it is shown that an approximate solution can be built. This approximation is then stable and converges towards the exact solution. It is designed such that fast algorithms involving biorthogonal multi resolution analyses can be used to resolve the corresponding numerical problems. Detailed algorithms are provided as well as the results of numerical tests on partial differential equations defined on the bidimensional torus.

  18. Wavelet analysis applied to the IRAS cirrus

    NASA Technical Reports Server (NTRS)

    Langer, William D.; Wilson, Robert W.; Anderson, Charles H.

    1994-01-01

    The structure of infrared cirrus clouds is analyzed with Laplacian pyramid transforms, a form of non-orthogonal wavelets. Pyramid and wavelet transforms provide a means to decompose images into their spatial frequency components such that all spatial scales are treated in an equivalent manner. The multiscale transform analysis is applied to IRAS 100 micrometer maps of cirrus emission in the north Galactic pole region to extract features on different scales. In the maps we identify filaments, fragments and clumps by separating all connected regions. These structures are analyzed with respect to their Hausdorff dimension for evidence of the scaling relationships in the cirrus clouds.

  19. Wavelet-based detection of transients in biological signals

    NASA Astrophysics Data System (ADS)

    Mzaik, Tahsin; Jagadeesh, Jogikal M.

    1994-10-01

    This paper presents two multiresolution algorithms for detection and separation of mixed signals using the wavelet transform. The first algorithm allows one to design a mother wavelet and its associated wavelet grid that guarantees the separation of signal components if information about the expected minimum signal time and frequency separation of the individual components is known. The second algorithm expands this idea to design two mother wavelets which are then combined to achieve the required separation otherwise impossible with a single wavelet. Potential applications include many biological signals such as ECG, EEG, and retinal signals.

  20. EEG analysis using wavelet-based information tools.

    PubMed

    Rosso, O A; Martin, M T; Figliola, A; Keller, K; Plastino, A

    2006-06-15

    Wavelet-based informational tools for quantitative electroencephalogram (EEG) record analysis are reviewed. Relative wavelet energies, wavelet entropies and wavelet statistical complexities are used in the characterization of scalp EEG records corresponding to secondary generalized tonic-clonic epileptic seizures. In particular, we show that the epileptic recruitment rhythm observed during seizure development is well described in terms of the relative wavelet energies. In addition, during the concomitant time period the entropy diminishes while complexity grows. This is construed as evidence supporting the conjecture that an epileptic focus, for this kind of seizure, triggers a self-organized brain state characterized by both order and maximal complexity.
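The quantifiers named above reduce to simple formulas once subband energies are in hand: relative wavelet energies p_j = E_j / E_total per resolution level, and a total wavelet entropy H = -sum p_j ln p_j (normalized here by ln N so H is in [0, 1], a common convention). The subband energies below are toy numbers, not EEG data.

```python
# Minimal sketch of relative wavelet energies and normalized wavelet entropy.
import math

def relative_energies(band_energies):
    total = sum(band_energies)
    return [e / total for e in band_energies]

def wavelet_entropy(band_energies):
    """Shannon entropy of the relative energies, normalized by ln(N)."""
    p = relative_energies(band_energies)
    return -sum(pj * math.log(pj) for pj in p if pj > 0) / math.log(len(p))

flat = [1.0, 1.0, 1.0, 1.0]         # energy spread evenly  -> H = 1 (disorder)
peaked = [100.0, 0.01, 0.01, 0.01]  # recruitment-like ordering -> H near 0

h_flat = wavelet_entropy(flat)
h_peaked = wavelet_entropy(peaked)
```

The drop from h_flat to h_peaked mirrors the entropy decrease the abstract reports during seizure development, when energy concentrates in a few scales.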

  1. Parallel object-oriented, denoising system using wavelet multiresolution analysis

    DOEpatents

    Kamath, Chandrika; Baldwin, Chuck H.; Fodor, Imola K.; Tang, Nu A.

    2005-04-12

    The present invention provides a data de-noising system utilizing processors and wavelet denoising techniques. Data is read and displayed in different formats. The data is partitioned into regions and the regions are distributed onto the processors. Communication requirements are determined among the processors according to the wavelet denoising technique and the partitioning of the data. The data is transformed onto different multiresolution levels with the wavelet transform according to the wavelet denoising technique and the communication requirements, the transformed data containing the wavelet coefficients. The denoised data is then transformed back into its original reading and display format.

  2. Automatic fault feature extraction of mechanical anomaly on induction motor bearing using ensemble super-wavelet transform

    NASA Astrophysics Data System (ADS)

    He, Wangpeng; Zi, Yanyang; Chen, Binqiang; Wu, Feng; He, Zhengjia

    2015-03-01

    Mechanical anomaly is a major failure type of induction motor. It is of great value to detect the resulting fault feature automatically. In this paper, an ensemble super-wavelet transform (ESW) is proposed for investigating vibration features of motor bearing faults. The ESW is put forward based on the combination of tunable Q-factor wavelet transform (TQWT) and Hilbert transform such that fault feature adaptability is enabled. Within ESW, a parametric optimization is performed on the measured signal to obtain a quality TQWT basis that best demonstrates the hidden fault feature. TQWT is introduced as it provides a vast wavelet dictionary with time-frequency localization ability. The parametric optimization is guided according to the maximization of fault feature ratio, which is a new quantitative measure of periodic fault signatures. The fault feature ratio is derived from the digital Hilbert demodulation analysis with an insightful quantitative interpretation. The output of ESW on the measured signal is a selected wavelet scale with indicated fault features. It is verified via numerical simulations that ESW can match the oscillatory behavior of signals without it being artificially specified. The proposed method is applied to two engineering cases, whose signals were collected from a wind turbine and a steel temper mill, to verify its effectiveness. The processed results demonstrate that the proposed method is more effective in extracting weak fault features of induction motor bearings compared with Fourier transform, direct Hilbert envelope spectrum, different wavelet transforms and spectral kurtosis.

  3. Wilkinson Microwave Anisotropy Probe 7-yr constraints on f_NL with a fast wavelet estimator

    NASA Astrophysics Data System (ADS)

    Casaponsa, B.; Barreiro, R. B.; Curto, A.; Martínez-González, E.; Vielva, P.

    2011-11-01

    A new method to constrain the local non-linear coupling parameter f_NL based on a fast wavelet decomposition is presented. Using a multiresolution wavelet adapted to the HEALPix pixelization, we have developed a method that is ˜ 10^2 times faster than previous estimators based on isotropic wavelets and ˜ 10^3 times faster than the KSW bispectrum estimator, at the resolution of the Wilkinson Microwave Anisotropy Probe (WMAP) data. The method has been applied to the WMAP 7-yr V+W combined map, imposing constraints on f_NL of -69 < f_NL < 65 at the 95 per cent CL. This result has been obtained after correcting for the contribution of the residual point sources, which has been estimated to be Δ f_NL = 7 ± 6. In addition, a Gaussianity analysis of the data has been carried out using the third order moments of the wavelet coefficients, finding consistency with Gaussianity. Although the constraints imposed on f_NL are less stringent than those found with optimal estimators, we believe that a very fast method, such as the one proposed in this work, can be very useful, especially bearing in mind the large amount of data that will be provided by future experiments, such as the Planck satellite. Moreover, the localisation of wavelets allows one to carry out analyses on different regions of the sky. As an application, we have separately analysed the two hemispheres defined by the dipolar modulation proposed by Hoftuft et al. (2009, ApJ, 699, 985). We do not find any significant asymmetry regarding the estimated value of f_NL in those hemispheres.

  4. Real-time nondestructive structural health monitoring using support vector machines and wavelets

    NASA Astrophysics Data System (ADS)

    Bulut, Ahmet; Singh, Ambuj K.; Shin, Peter; Fountain, Tony; Jasso, Hector; Yan, Linjun; Elgamal, Ahmed

    2005-05-01

    We present an alternative to visual inspection for detecting damage to civil infrastructure. We describe a real-time decision support system for nondestructive health monitoring. The system is instrumented by an integrated network of wireless sensors mounted on civil infrastructures such as bridges, highways, and commercial and industrial facilities. To address scalability and power consumption issues related to sensor networks, we propose a three-tier system that uses wavelets to adaptively reduce the streaming data spatially and temporally. At the sensor level, measurement data is temporally compressed before being sent upstream to intermediate communication nodes. There, correlated data from multiple sensors is combined and sent to the operation center for further reduction and interpretation. At each level, the compression ratio can be adaptively changed via wavelets. This multi-resolution approach is useful in optimizing total resources in the system. At the operation center, Support Vector Machines (SVMs) are used to detect the location of potential damage from the reduced data. We demonstrate that the SVM is a robust classifier in the presence of noise and that wavelet-based compression gracefully degrades its classification accuracy. We validate the effectiveness of our approach using a finite element model of the Humboldt Bay Bridge. We envision that our approach will prove novel and useful in the design of scalable nondestructive health monitoring systems.

  5. Understanding wavelet analysis and filters for engineering applications

    NASA Astrophysics Data System (ADS)

    Parameswariah, Chethan Bangalore

    Wavelets are signal-processing tools that have been of interest due to their characteristics and properties. A clear understanding of wavelets and their properties is key to successful applications. Many theoretical and application-oriented papers have been written. Yet the choice of the right wavelet for a given application is an ongoing quest that has not been satisfactorily answered. This research has successfully identified certain issues, and an effort has been made to provide an understanding of wavelets by studying the wavelet filters in terms of their pole-zero and magnitude-phase characteristics. The magnitude characteristics of these filters have flat responses in both the pass band and stop band. The phase characteristics are almost linear. It is interesting to observe that some wavelets have the exact same magnitude characteristics but their phase responses vary in their linear slopes. An application of wavelets for fast detection of the fault current in a transformer, distinguishing it from the inrush current, clearly shows the advantages of the lower phase slope and fewer coefficients of the Daubechies wavelet D4 over D20. This research has been published in the IEEE Transactions on Power Systems and is also proposed as an innovative method for protective relaying techniques. For detecting the frequency composition of the signal being analyzed, an understanding of the energy distribution in the output wavelet decompositions is presented for different wavelet families. The wavelets with fewer coefficients in their filters have more energy leakage into adjacent bands. The frequency bandwidth characteristics display flatness in the middle of the pass band, confirming that the frequency of interest should be in the middle of the frequency band when performing a wavelet transform. Symlets exhibit good flatness with minimum ripple, but the transition regions do not have a sharper cut-off. The number of wavelet levels and their frequency ranges are dependent on the two
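The kind of filter inspection described above can be reproduced directly from the filter coefficients: the sketch below evaluates the magnitude response of the Daubechies D4 scaling (low-pass) filter and checks the flat pass band at ω = 0 and the zero at ω = π. The four D4 coefficients are standard; the spot checks are this sketch's own choices.

```python
# Magnitude response of the Daubechies D4 low-pass filter from its coefficients.
import cmath, math

r3 = math.sqrt(3)
h = [(1 + r3), (3 + r3), (3 - r3), (1 - r3)]
h = [c / (4 * math.sqrt(2)) for c in h]     # D4 scaling filter coefficients

def magnitude(w):
    """|H(e^{jw})| evaluated directly from the filter taps."""
    return abs(sum(c * cmath.exp(-1j * w * k) for k, c in enumerate(h)))

dc_gain = magnitude(0.0)           # sqrt(2) for an orthonormal scaling filter
nyquist_gain = magnitude(math.pi)  # 0: D4 has a double zero at w = pi
```

Plotting `magnitude` over [0, π] for D4 versus D20 would show the sharper cut-off (and longer filter) of D20, the trade-off the study discusses.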

  6. Lossless Video Sequence Compression Using Adaptive Prediction

    NASA Technical Reports Server (NTRS)

    Li, Ying; Sayood, Khalid

    2007-01-01

    We present an adaptive lossless video compression algorithm based on predictive coding. The proposed algorithm exploits temporal, spatial, and spectral redundancies in a backward adaptive fashion with extremely low side information. The computational complexity is further reduced by using a caching strategy. We also study the relationship between the operational domain for the coder (wavelet or spatial) and the amount of temporal and spatial redundancy in the sequence being encoded. Experimental results show that the proposed scheme provides significant improvements in compression efficiencies.

  7. Information retrieval system utilizing wavelet transform

    DOEpatents

    Brewster, Mary E.; Miller, Nancy E.

    2000-01-01

    A method for automatically partitioning an unstructured electronically formatted natural language document into its sub-topic structure. Specifically, the document is converted to an electronic signal and a wavelet transform is then performed on the signal. The resultant signal may then be used to graphically display and interact with the sub-topic structure of the document.

  8. Characterization and Simulation of Gunfire with Wavelets

    DOE PAGES

    Smallwood, David O.

    1999-01-01

    Gunfire is used as an example to show how the wavelet transform can be used to characterize and simulate nonstationary random events when an ensemble of events is available. The structural response to nearby firing of a high-firing rate gun has been characterized in several ways as a nonstationary random process. The current paper will explore a method to describe the nonstationary random process using a wavelet transform. The gunfire record is broken up into a sequence of transient waveforms each representing the response to the firing of a single round. A wavelet transform is performed on each of these records. The gunfire is simulated by generating realizations of records of a single-round firing by computing an inverse wavelet transform from Gaussian random coefficients with the same mean and standard deviation as those estimated from the previously analyzed gunfire record. The individual records are assembled into a realization of many rounds firing. A second-order correction of the probability density function is accomplished with a zero memory nonlinear function. The method is straightforward, easy to implement, and produces a simulated record much like the measured gunfire record.

  9. Cosmic Ray elimination using the Wavelet Transform

    NASA Astrophysics Data System (ADS)

    Orozco-Aguilera, M. T.; Cruz, J.; Altamirano, L.; Serrano, A.

    2009-11-01

    In this work, we present a method for the automatic cosmic ray elimination in a single CCD exposure using the Wavelet Transform. The proposed method can eliminate cosmic rays of any shape or size. With this method we can eliminate over 95% of cosmic rays in a spectral image.

  10. Climate wavelet spectrum estimation under chronology uncertainties

    NASA Astrophysics Data System (ADS)

    Lenoir, G.; Crucifix, M.

    2012-04-01

    Several approaches to estimating the chronology of palaeoclimate records exist in the literature: simple interpolation between the tie points, orbital tuning, alignment on other data, etc. These techniques generate a single estimate of the chronology. More recently, statistical generators of chronologies have appeared (e.g. OXCAL, BCHRON), allowing the construction of thousands of chronologies given the tie points and their uncertainties. These techniques are based on advanced statistical methods. They allow one to take into account the uncertainty in the timing of each climatic event recorded in the core. On the other hand, when interpreting the data, scientists often rely on time series analysis, and especially on spectral analysis. Given that paleo-data span a large spectrum of frequencies, are non-stationary, and are highly noisy, the continuous wavelet transform turns out to be a suitable tool for analysing them. The wavelet periodogram, in particular, is helpful for visually interpreting the time-frequency behaviour of the data. Here, we combine statistical methods to generate chronologies with the power of the continuous wavelet transform. Some interesting applications then come up: comparison of time-frequency patterns between two proxies (extracted from different cores), between a proxy and a statistical dynamical model, and statistical estimation of the phase lag between two filtered signals. All these applications consider explicitly the uncertainty in the chronology. The poster presents mathematical developments on wavelet spectrum estimation under chronology uncertainties as well as some applications to Quaternary data based on marine and ice cores.

  11. Spectral optical layer properties of cirrus from collocated airborne measurements and simulations

    NASA Astrophysics Data System (ADS)

    Finger, Fanny; Werner, Frank; Klingebiel, Marcus; Ehrlich, André; Jäkel, Evelyn; Voigt, Matthias; Borrmann, Stephan; Spichtinger, Peter; Wendisch, Manfred

    2016-06-01

    Spectral upward and downward solar irradiances from vertically collocated measurements above and below a cirrus layer are used to derive cirrus optical layer properties such as spectral transmissivity, absorptivity, reflectivity, and cloud top albedo. The radiation measurements are complemented by in situ cirrus crystal size distribution measurements and radiative transfer simulations based on the microphysical data. The close collocation of the radiative and microphysical measurements, above, beneath, and inside the cirrus, is accomplished by using a research aircraft (Learjet 35A) in tandem with the towed sensor platform AIRTOSS (AIRcraft TOwed Sensor Shuttle). AIRTOSS can be released from and retracted back to the research aircraft by means of a cable up to a distance of 4 km. Data were collected from two field campaigns over the North Sea and the Baltic Sea in spring and late summer 2013. One measurement flight over the North Sea proved to be exemplary, and as such the results are used to illustrate the benefits of collocated sampling. The radiative transfer simulations were applied to quantify the impact of cloud particle properties such as crystal shape, effective radius reff, and optical thickness τ on cirrus spectral optical layer properties. Furthermore, the radiative effects of low-level, liquid water (warm) clouds as frequently observed beneath the cirrus are evaluated. They may cause changes in the radiative forcing of the cirrus by a factor of 2. When low-level clouds below the cirrus are not taken into account, the radiative cooling effect (caused by reflection of solar radiation) due to the cirrus in the solar (shortwave) spectral range is significantly overestimated.
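The layer properties named above follow from simple energy bookkeeping once the four irradiances are measured. In a simplified (non-spectral) form, with downward/upward irradiances F above (top) and below (bottom) the layer: transmissivity t = F_dn_bot / F_dn_top, reflectivity r = (F_up_top - F_up_bot) / F_dn_top (the cloud-added upward flux), and absorptivity a = 1 - t - r by closure. The numbers below are made up for illustration; the paper's definitions are spectral and this closure form is an assumption of the sketch.

```python
# Simplified layer-property bookkeeping from tandem irradiance measurements.
def layer_properties(f_dn_top, f_up_top, f_dn_bot, f_up_bot):
    t = f_dn_bot / f_dn_top                  # transmissivity
    r = (f_up_top - f_up_bot) / f_dn_top     # reflectivity (cloud-added upward flux)
    a = 1.0 - t - r                          # absorptivity by energy closure
    return t, r, a

# Made-up irradiances (W m^-2 nm^-1): above-cloud down/up, below-cloud down/up.
t, r, a = layer_properties(1.20, 0.30, 0.95, 0.10)
```

Note that a equals the normalized net-flux divergence across the layer, (F_dn_top - F_up_top - F_dn_bot + F_up_bot) / F_dn_top, so the three properties sum to one by construction.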

  12. Well-conditioned fractional collocation methods using fractional Birkhoff interpolation basis

    NASA Astrophysics Data System (ADS)

    Jiao, Yujian; Wang, Li-Lian; Huang, Can

    2016-01-01

    The purpose of this paper is twofold. Firstly, we provide explicit and compact formulas for computing both Caputo and (modified) Riemann-Liouville (RL) fractional pseudospectral differentiation matrices (F-PSDMs) of any order at general Jacobi-Gauss-Lobatto (JGL) points. We show that in the Caputo case, it suffices to compute F-PSDM of order μ ∈ (0 , 1) to compute that of any order k + μ with integer k ≥ 0, while in the modified RL case, it is only necessary to evaluate a fractional integral matrix of order μ ∈ (0 , 1). Secondly, we introduce suitable fractional JGL Birkhoff interpolation problems leading to new interpolation polynomial basis functions with remarkable properties: (i) the matrix generated from the new basis yields the exact inverse of F-PSDM at "interior" JGL points; (ii) the matrix of the highest fractional derivative in a collocation scheme under the new basis is diagonal; and (iii) the resulting linear system is well-conditioned in the Caputo case, while in the modified RL case, the eigenvalues of the coefficient matrix are highly concentrated. In both cases, the linear systems of the collocation schemes using the new basis can be solved by an iterative solver within a few iterations. Notably, the inverse can be computed in a very stable manner, so this offers optimal preconditioners for usual fractional collocation methods for fractional differential equations (FDEs). It is also noteworthy that the choice of certain special JGL points with parameters related to the order of the equations can ease the implementation. We highlight that the use of Bateman's fractional integral formulas and of fast transforms between Jacobi polynomials with different parameters is essential for our algorithm development.

  13. The radial basis function finite collocation approach for capturing sharp fronts in time dependent advection problems

    NASA Astrophysics Data System (ADS)

    Stevens, D.; Power, H.

    2015-10-01

    We propose a node-based local meshless method for advective transport problems that is capable of operating on centrally defined stencils and is suitable for shock-capturing purposes. High spatial convergence rates can be achieved; in excess of eighth-order in some cases. Strongly-varying smooth profiles may be captured at infinite Péclet number without instability, and for discontinuous profiles the solution exhibits neutrally stable oscillations that can be damped by introducing a small artificial diffusion parameter, allowing a good approximation to the shock-front to be maintained for long travel times without introducing spurious oscillations. The proposed method is based on local collocation with radial basis functions (RBFs) in a "finite collocation" configuration. In this approach the PDE governing and boundary equations are enforced directly within the local RBF collocation systems, rather than being reconstructed from fixed interpolating functions as is typical of finite difference, finite volume or finite element methods. In this way the interpolating basis functions naturally incorporate information from the governing PDE, including the strength and direction of the convective velocity field. By using these PDE-enhanced interpolating functions an "implicit upwinding" effect is achieved, whereby the flow of information naturally respects the specifics of the local convective field. This implicit upwinding effect allows high-convergence solutions to be obtained on centred stencils for advection problems. The method is formulated using a high-convergence implicit timestepping algorithm based on Richardson extrapolation. The spatial and temporal convergence of the proposed approach is demonstrated using smooth functions with large gradients. The capture of discontinuities is then investigated, showing how the addition of a dynamic stabilisation parameter can damp the neutrally stable oscillations with limited smearing of the shock front.

  14. A time domain collocation method for studying the aeroelasticity of a two dimensional airfoil with a structural nonlinearity

    NASA Astrophysics Data System (ADS)

    Dai, Honghua; Yue, Xiaokui; Yuan, Jianping; Atluri, Satya N.

    2014-08-01

    A time domain collocation method for the study of the motion of a two dimensional aeroelastic airfoil with a cubic structural nonlinearity is presented. This method first transforms the governing ordinary differential equations into a system of nonlinear algebraic equations (NAEs), which are then solved by a Jacobian-inverse-free NAE solver. Using the aeroelastic airfoil as a prototypical system, the time domain collocation method is shown here to be mathematically equivalent to the well known high dimensional harmonic balance method. Based on the fact that the high dimensional harmonic balance method is essentially a collocation method in disguise, we clearly explain the aliasing phenomenon of the high dimensional harmonic balance method. On the other hand, the conventional harmonic balance method is also applied. Previous studies show that the harmonic balance method does not produce aliasing in the framework of solving the Duffing equation. However, we demonstrate that a mathematical type of aliasing occurs in the harmonic balance method for the present self-excited nonlinear dynamical system. In addition, a parameter marching procedure is used to eliminate the effects of aliasing in the time domain collocation method. Moreover, the accuracy of the time domain collocation method is compared with that of the harmonic balance method.
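The "ODE into algebraic system" reduction that both the time domain collocation and harmonic balance methods perform can be sketched on the cubic Duffing oscillator x'' + x + eps*x^3 = F*cos(w*t) rather than the full aeroelastic airfoil: substituting the single-harmonic ansatz x = A*cos(w*t) and balancing the cos(w*t) terms gives the algebraic equation (1 - w^2)*A + (3/4)*eps*A^3 = F, solved numerically below. The parameter values and the bisection solver are this sketch's own choices.

```python
# Single-harmonic harmonic balance for the Duffing oscillator (illustrative).
def residual(amp, eps, w, force):
    """Residual of the cos(w*t) balance: (1 - w^2)*A + 0.75*eps*A^3 - F."""
    return (1 - w * w) * amp + 0.75 * eps * amp ** 3 - force

def solve_amplitude(eps, w, force, lo=0.0, hi=10.0):
    """Bisection on the harmonic-balance residual (assumes one root in [lo, hi])."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if residual(lo, eps, w, force) * residual(mid, eps, w, force) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

amp = solve_amplitude(eps=0.1, w=0.5, force=0.3)   # steady-state amplitude
```

A time domain collocation formulation would instead enforce the ODE at sample times within one period, producing an equivalent (and, with too few collocation points, aliased) algebraic system in the harmonic coefficients.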

  15. Meshless collocation methods for the numerical solution of elliptic boundary value problems and the rotational shallow-water equations on the sphere

    NASA Astrophysics Data System (ADS)

    Blakely, Christopher D.

    This dissertation has three main goals: (1) to explore the anatomy of meshless collocation approximation methods that have recently gained attention in the numerical analysis community; (2) to demonstrate numerically why the meshless collocation method should become an attractive alternative to standard finite-element methods, due to the simplicity of its implementation and its high-order convergence properties; and (3) to propose a meshless collocation method for large scale computational geophysical fluid dynamics models. We provide numerical verification and validation of the meshless collocation scheme applied to the rotational shallow-water equations on the sphere and demonstrate computationally that the proposed model can compete with existing high performance methods for approximating the shallow-water equations such as the SEAM (spectral-element atmospheric model) developed at NCAR. A detailed analysis of the parallel implementation of the model, along with the introduction of parallel algorithmic routines for the high-performance simulation of the model, will be given. We analyze the programming and computational aspects of the model using Fortran 90 and the message passing interface (MPI) library along with software and hardware specifications and performance tests. Details from many aspects of the implementation in regards to performance, optimization, and stabilization will be given. In order to verify the mathematical correctness of the algorithms presented and to validate the performance of the meshless collocation shallow-water model, we conclude the thesis with numerical experiments on some standardized test cases for the shallow-water equations on the sphere using the proposed method.

  16. A meshfree local RBF collocation method for anti-plane transverse elastic wave propagation analysis in 2D phononic crystals

    NASA Astrophysics Data System (ADS)

    Zheng, Hui; Zhang, Chuanzeng; Wang, Yuesheng; Sladek, Jan; Sladek, Vladimir

    2016-01-01

    In this paper, a meshfree or meshless local radial basis function (RBF) collocation method is proposed to calculate the band structures of two-dimensional (2D) anti-plane transverse elastic waves in phononic crystals. Three new techniques are developed for calculating the normal derivative of the field quantity required by the treatment of the boundary conditions, which improve the stability of the local RBF collocation method significantly. The general form of the local RBF collocation method for a unit-cell with periodic boundary conditions is proposed, where the continuity conditions on the interface between the matrix and the scatterer are taken into account. The band structures or dispersion relations can be obtained by solving the eigenvalue problem and sweeping the boundary of the irreducible first Brillouin zone. The proposed local RBF collocation method is verified by using the corresponding results obtained with the finite element method. For different acoustic impedance ratios, various scatterer shapes, scatterer arrangements (lattice forms) and material properties, numerical examples are presented and discussed to show the performance and the efficiency of the developed local RBF collocation method compared to the FEM for computing the band structures of 2D phononic crystals.

  17. Application of collocated GPS and seismic sensors to earthquake monitoring and early warning.

    PubMed

    Li, Xingxing; Zhang, Xiaohong; Guo, Bofeng

    2013-10-24

    We explore the use of collocated GPS and seismic sensors for earthquake monitoring and early warning. The GPS and seismic data collected during the 2011 Tohoku-Oki (Japan) and the 2010 El Mayor-Cucapah (Mexico) earthquakes are analyzed using a tightly-coupled integration. The performance of the integrated results is validated by both time- and frequency-domain analysis. We detect the P-wave arrival, observe small-scale features of the movement from the integrated results, and locate the epicenter. Meanwhile, permanent offsets are extracted from the integrated displacements with high accuracy and used for reliable fault slip inversion and magnitude estimation.

  18. A Fourier collocation time domain method for numerically solving Maxwell's equations

    NASA Technical Reports Server (NTRS)

    Shebalin, John V.

    1991-01-01

    A new method for solving Maxwell's equations in the time domain for arbitrary values of permittivity, conductivity, and permeability is presented. Spatial derivatives are found by a Fourier transform method and time integration is performed using a second order, semi-implicit procedure. Electric and magnetic fields are collocated on the same grid points, rather than on interleaved points, as in the Finite Difference Time Domain (FDTD) method. Numerical results are presented for the propagation of a 2-D Transverse Electromagnetic (TEM) mode out of a parallel plate waveguide and into a dielectric and conducting medium.
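
The Fourier part of the method — taking spatial derivatives in transform space — can be sketched in a few lines; the semi-implicit time integrator and the Maxwell field update are omitted here, so this shows only the derivative step on a periodic grid.

```python
import numpy as np

# Spectral derivative of a periodic field: transform, multiply by i*k,
# transform back. Accuracy is limited only by machine precision for
# band-limited data.
N = 64
L = 2 * np.pi
x = np.arange(N) * (L / N)
u = np.sin(x)

k = np.fft.fftfreq(N, d=L / N) * 2 * np.pi    # wavenumber array
du = np.fft.ifft(1j * k * np.fft.fft(u)).real # du/dx in physical space

err = np.max(np.abs(du - np.cos(x)))          # compare with exact cos(x)
```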

  19. Solute transport via alternating-direction collocation using the modified method of characteristics

    NASA Astrophysics Data System (ADS)

    Allen, Myron B.; Khosravani, Azar

    We present a finite-element collocation method for modeling underground solute transport in two space dimensions when advection is dominant. The scheme uses a modified method of characteristics to approximate the advective terms, thereby reducing the temporal truncation error and allowing accurate transport of solute by the velocity field. In conjunction with this approach, we employ an alternating-direction technique, yielding a highly parallelizable algorithm that solves two-dimensional problems as sequences of simpler problems with one-dimensional matrix structure.
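
A minimal 1D sketch of the characteristic-tracking idea, with hypothetical parameters (the paper couples this with alternating-direction collocation in 2D): each step, the new solution is the old solution evaluated at the foot of the characteristic x - v*dt, obtained by interpolation, so advection imposes no CFL-type restriction on the time step.

```python
import numpy as np

# Semi-Lagrangian update for u_t + v u_x = 0 on a periodic domain:
# trace characteristics back by v*dt and interpolate the old solution.
N, L, v, dt = 400, 1.0, 1.0, 0.00425
x = np.arange(N) * (L / N)
u = np.exp(-200.0 * (x - 0.3) ** 2)            # initial Gaussian profile

nsteps = 40                                    # total advection distance 0.17
for _ in range(nsteps):
    u = np.interp(x - v * dt, x, u, period=L)  # foot of the characteristic

exact = np.exp(-200.0 * (np.mod(x - 0.17, L) - 0.3) ** 2)
err = np.max(np.abs(u - exact))
```

Linear interpolation introduces a small numerical diffusion per step; higher-order interpolants reduce it, which is the role of the finite-element collocation spaces in the paper.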

  20. Finite Differences and Collocation Methods for the Solution of the Two Dimensional Heat Equation

    NASA Technical Reports Server (NTRS)

    Kouatchou, Jules

    1999-01-01

    In this paper we combine finite difference approximations (for the spatial derivatives) and collocation techniques (for the time component) to numerically solve the two-dimensional heat equation. We employ second-order and fourth-order schemes for the spatial derivatives, and the discretization gives rise to a linear system of equations. We show that the matrix of the system is non-singular. Numerical experiments carried out on serial computers show the unconditional stability of the proposed method and the high accuracy achieved by the fourth-order scheme.
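
A sketch of the spatial part only: the second-order 5-point Laplacian on the unit square with homogeneous Dirichlet data. For brevity the time component below uses plain explicit Euler rather than the paper's collocation-in-time, so the stability restriction dt <= h^2/4 applies here.

```python
import numpy as np

# Heat equation u_t = u_xx + u_yy on [0,1]^2; the single Fourier mode
# sin(pi x) sin(pi y) decays as exp(-2 pi^2 t).
n = 41
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")
u = np.sin(np.pi * X) * np.sin(np.pi * Y)

dt = 0.2 * h * h                              # satisfies dt <= h^2/4
t_end = 0.02
for _ in range(int(round(t_end / dt))):
    lap = np.zeros_like(u)
    lap[1:-1, 1:-1] = (u[2:, 1:-1] + u[:-2, 1:-1] +
                       u[1:-1, 2:] + u[1:-1, :-2] -
                       4.0 * u[1:-1, 1:-1]) / h**2
    u = u + dt * lap                          # boundary values stay zero

exact = np.exp(-2 * np.pi**2 * t_end) * np.sin(np.pi * X) * np.sin(np.pi * Y)
err = np.max(np.abs(u - exact))
```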

  2. Numerical algorithm based on Haar-Sinc collocation method for solving the hyperbolic PDEs.

    PubMed

    Pirkhedri, A; Javadi, H H S; Navidi, H R

    2014-01-01

    The present study investigates the Haar-Sinc collocation method for the solution of hyperbolic partial telegraph equations. The advantages of this technique are that the convergence rate of the Sinc approximation is exponential and that the computational speed is high, due to the use of the Haar operational matrices. The technique converts the problem to the solution of linear algebraic equations by expanding the required approximation in terms of Sinc functions in space and Haar functions in time, with unknown coefficients. To analyze the efficiency, precision, and performance of the proposed method, we present four examples through which our claim is confirmed. PMID:25485295
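
A sketch of the Sinc half of the approximation, assuming nothing from the paper beyond the cardinal-series definition: a function is expanded as f(x) ≈ Σ f(kh) sinc((x − kh)/h), and the truncation error decays exponentially for smooth, rapidly decaying functions. The step h = 0.5 and index range are illustrative.

```python
import numpy as np

# Whittaker cardinal (Sinc) expansion from uniform samples; np.sinc is
# the normalized sinc, sin(pi x) / (pi x).
h = 0.5
k = np.arange(-12, 13)                        # truncated index range
f = lambda x: np.exp(-x * x)                  # fast-decaying test function

x = np.linspace(-2.0, 2.0, 101)
approx = np.array([np.sum(f(k * h) * np.sinc((xi - k * h) / h)) for xi in x])
err = np.max(np.abs(approx - f(x)))           # exponentially small in 1/h
```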

  3. ECG Artifact Removal from Surface EMG Signal Using an Automated Method Based on Wavelet-ICA.

    PubMed

    Abbaspour, Sara; Lindén, Maria; Gholamhosseini, Hamid

    2015-01-01

    This study proposes an efficient method for automated electrocardiography (ECG) artifact removal from surface electromyography (EMG) signals recorded from upper-trunk muscles. A wavelet transform is applied to a simulated data set of corrupted surface EMG signals to create a multidimensional signal. Afterward, independent component analysis (ICA) is used to separate the ECG artifact components from the original EMG signal. Components that correspond to the ECG artifact are then identified by an automated detection algorithm and subsequently removed using a conventional high-pass filter. Finally, the results of the proposed method are compared with wavelet transform, ICA, adaptive filter, and empirical mode decomposition-ICA methods. The automated artifact removal method proposed in this study successfully removes ECG artifacts from EMG signals with a signal-to-noise ratio of 9.38, while keeping the distortion of the original EMG to a minimum. PMID:25980853

  4. CHARACTERIZING COMPLEXITY IN SOLAR MAGNETOGRAM DATA USING A WAVELET-BASED SEGMENTATION METHOD

    SciTech Connect

    Kestener, P.; Khalil, A.; Arneodo, A.

    2010-07-10

    The multifractal nature of solar photospheric magnetic structures is studied using the two-dimensional wavelet transform modulus maxima (WTMM) method. This relies on computing partition functions from the wavelet transform skeleton defined by the WTMM method. This skeleton provides an adaptive space-scale partition of the fractal distribution under study, from which one can extract the multifractal singularity spectrum. We describe the implementation of a multiscale image-processing segmentation procedure based on the partitioning of the WT skeleton, which allows the information concerning the multifractal properties of active regions to be disentangled from the surrounding quiet-Sun field. The quiet Sun exhibits an average Hölder exponent of approximately -0.75, with the observed multifractal properties due to the supergranular structure. On the other hand, active-region multifractal spectra exhibit an average Hölder exponent of approximately 0.38, similar to those found when studying experimental data from turbulent flows.

  5. When Joy Matters: The Importance of Hedonic Stimulation in Collocated Collaboration with Large-Displays

    NASA Astrophysics Data System (ADS)

    Novak, Jasminko; Schmidt, Susanne

    Hedonic aspects are increasingly considered an important factor in user acceptance of information systems, especially for activities with high self-fulfilling value for the users. In this paper we report the results of an experiment investigating the hedonic qualities of an interactive large-display workspace for collocated collaboration in sales-oriented travel advisory. The results show a higher hedonic stimulation quality for a touch-based large-display travel advisory workspace than for a traditional workspace with catalogues. Together with the feedback of both customers and travel agents, this suggests the adequacy of touch-based large displays with visual workspaces for supporting the hedonic stimulation of user experience in collocated collaboration settings. The relation of a high perception of hedonic quality to positive emotional attitudes towards the use of a large-display workspace indicates that even in utilitarian activities (e.g. reaching sales goals for travel agents) hedonic aspects can play an important role. This calls for reconsidering the traditional divide between hedonic and utilitarian systems in the current literature, in favor of a more balanced view of systems that provide both utilitarian and hedonic sources of value to the user.

  6. Estimated variability of National Atmospheric Deposition Program/Mercury Deposition Network measurements using collocated samplers

    USGS Publications Warehouse

    Wetherbee, G.A.; Gay, D.A.; Brunette, R.C.; Sweet, C.W.

    2007-01-01

    The National Atmospheric Deposition Program/Mercury Deposition Network (MDN) provides long-term, quality-assured records of mercury in wet deposition in the USA and Canada. Interpretation of spatial and temporal trends in the MDN data requires quantification of the variability of the MDN measurements. Variability is quantified for MDN data from collocated samplers at MDN sites in two states, one in Illinois and one in Washington. Median absolute differences in the collocated sampler data for total mercury concentration are approximately 11% of the median mercury concentration for all valid 1999-2004 MDN data. Median absolute differences are between 3.0% and 14% of the median MDN value for collector catch (sample volume) and between 6.0% and 15% of the median MDN value for mercury wet deposition. The overall measurement errors are sufficiently low to resolve NADP/MDN measurements to within ±2 ng·L-1 and ±2 µg·m-2·year-1, which are the contour intervals used to display the data on NADP isopleth maps for concentration and deposition, respectively. © Springer Science+Business Media B.V. 2007.

  7. Geographic analysis of the feasibility of collocating algal biomass production with wastewater treatment plants.

    PubMed

    Fortier, Marie-Odile P; Sturm, Belinda S M

    2012-10-16

    Resource demand analyses indicate that algal biodiesel production would require unsustainable amounts of freshwater and fertilizer supplies. Alternatively, municipal wastewater effluent can be used, but this restricts production of algae to areas near wastewater treatment plants (WWTPs), and to date, there has been no geospatial analysis of the feasibility of collocating large algal ponds with WWTPs. The goals of this analysis were to determine the available areas by land cover type within radial extents (REs) up to 1.5 miles from WWTPs; to determine the limiting factor for algal production using wastewater; and to investigate the potential algal biomass production at urban, near-urban, and rural WWTPs in Kansas. Over 50% and 87% of the land around urban and rural WWTPs, respectively, was found to be potentially available for algal production. The analysis highlights a trade-off between urban WWTPs, which are generally land-limited but have excess wastewater effluent, and rural WWTPs, which are generally water-limited but have 96% of the total available land. Overall, commercial-scale algae production collocated with WWTPs is feasible; 29% of the Kansas liquid fuel demand could be met with implementation of ponds within 1 mile of all WWTPs and supplementation of water and nutrients when these are limited. PMID:22970803

  8. Stable discontinuous grid implementation for collocated-grid finite-difference seismic wave modelling

    NASA Astrophysics Data System (ADS)

    Zhang, Zhenguo; Zhang, Wei; Li, Hong; Chen, Xiaofei

    2013-03-01

    Simulating seismic waves on a uniform grid in heterogeneous, high-velocity-contrast media requires a small grid spacing determined by the global minimum velocity, which leads to a huge number of grid points and a small time step. To reduce the computational cost, discontinuous grids that use a finer grid in the shallow low-velocity region and a coarser grid in high-velocity regions are needed. In this paper, we present a discontinuous grid implementation for collocated-grid finite-difference (FD) methods to increase the efficiency of seismic wave modelling. The grid spacing ratio n can be an arbitrary integer n ≥ 2. To downsample the wavefield from the finer grid to the coarser grid, our implementation can simply take the values on the finer grid without employing a downsampling filter for grid spacing ratio n = 2, and still achieve stable results in long-time simulations. For grid spacing ratios n ≥ 3, a Gaussian filter should be used as the downsampling filter to obtain a stable simulation. To interpolate the wavefield from the coarse grid to the finer grid, trilinear interpolation is used. Combining the efficiency of a discontinuous grid with the flexibility of the collocated-grid FD method on curvilinear grids, our method can simulate large-scale, high-frequency strong ground motion of real earthquakes with consideration of surface topography.
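
A 1D analogue of the two grid-transfer steps (the paper works in 3D with trilinear interpolation): fine-to-coarse by direct sub-sampling, and coarse-to-fine by linear interpolation. The anti-aliasing Gaussian filter the paper requires for ratios n ≥ 3 is deliberately omitted from this sketch.

```python
import numpy as np

# Grid-spacing ratio n = 3; a linear field is chosen so that linear
# interpolation reproduces it exactly on the round trip.
n = 3
x_fine = np.linspace(0.0, 1.0, 3 * 20 + 1)
u_fine = 2.0 * x_fine + 1.0

x_coarse = x_fine[::n]                           # fine -> coarse: sub-sample
u_coarse = u_fine[::n]

u_back = np.interp(x_fine, x_coarse, u_coarse)   # coarse -> fine: interpolate
err = np.max(np.abs(u_back - u_fine))
```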

  9. Interpolation of Superconducting Gravity Observations Using Least-Squares Collocation Method

    NASA Astrophysics Data System (ADS)

    Habel, Branislav; Janak, Juraj

    2014-05-01

    Pre-processing of the gravity data measured by a superconducting gravimeter involves removing spikes, offsets and gaps. Their presence in the observations can limit the data analysis and degrade the quality of the results. Short data gaps are filled with a theoretical signal in order to obtain continuous records of gravity. This requires an accurate tidal model and, possibly, atmospheric pressure at the observation site. The poster presents the design of an algorithm for the interpolation of gravity observations with a sampling rate of 1 min. The novel approach is based on least-squares collocation, which combines adjustment of trend parameters, filtering of noise and prediction. It allows the interpolation of missing data up to a few hours long without the need for any other information. Appropriate parameters for the covariance function are found using Bayes' theorem in a modified optimization process. The accuracy of the method is improved by rejecting outliers before interpolation. For filling longer gaps, the collocation model is combined with the theoretical tidal signal for the rigid Earth. Finally, the proposed method was tested on superconducting gravity observations at several selected stations of the Global Geodynamics Project. The tests demonstrate its reliability and offer results comparable with the standard approach implemented in the ETERNA software package, without the need for an accurate tidal model.
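
A toy sketch of the least-squares collocation predictor s_hat = C_sp (C_pp + D)^(-1) l used for gap filling. The squared-exponential covariance, its correlation length, and the noise level below are illustrative stand-ins for the study's tuned covariance function.

```python
import numpy as np

# Fill a short gap in a smooth record by collocation prediction.
def cov(a, b, corr_len=2.0):
    """Squared-exponential covariance between two sets of epochs."""
    return np.exp(-((a[:, None] - b[None, :]) / corr_len) ** 2)

t = np.arange(0.0, 20.0, 0.25)
signal = np.sin(2 * np.pi * t / 10.0)

keep = (t < 9.0) | (t > 11.0)                 # simulate a gap at t in [9, 11]
t_obs, l = t[keep], signal[keep]
t_gap = t[~keep]

D = 1e-6 * np.eye(t_obs.size)                 # observation-noise covariance
s_hat = cov(t_gap, t_obs) @ np.linalg.solve(cov(t_obs, t_obs) + D, l)
err = np.max(np.abs(s_hat - np.sin(2 * np.pi * t_gap / 10.0)))
```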

  10. Entropy Stable Staggered Grid Spectral Collocation for the Burgers' and Compressible Navier-Stokes Equations

    NASA Technical Reports Server (NTRS)

    Carpenter, Mark H.; Parsani, Matteo; Fisher, Travis C.; Nielsen, Eric J.

    2015-01-01

    Staggered-grid, entropy stable discontinuous spectral collocation operators of any order are developed for Burgers' and the compressible Navier-Stokes equations on unstructured hexahedral elements. This generalization of previous entropy stable spectral collocation work [1, 2] extends the applicable set of points from tensor-product Legendre-Gauss-Lobatto (LGL) points to a combination of tensor-product Legendre-Gauss (LG) and LGL points. The new semi-discrete operators discretely conserve mass, momentum and energy, and satisfy a mathematical entropy inequality for both Burgers' and the compressible Navier-Stokes equations in three spatial dimensions. They are valid for smooth as well as discontinuous flows. The staggered LG and conventional LGL point formulations are compared on several challenging test problems. The staggered LG operators are significantly more accurate, although more costly to implement. The LG and LGL operators exhibit similar robustness, as is demonstrated using test problems known to be problematic for operators that lack a nonlinear stability proof for the compressible Navier-Stokes equations (e.g., discontinuous Galerkin, spectral difference, or flux reconstruction operators).

  11. Improving Assimilation of Microwave Radiances in Cloudy Situations with Collocated High Resolution Imager Cloud Mask

    NASA Astrophysics Data System (ADS)

    Han, H.; Li, J.; Goldberg, M.; Wang, P.; Li, Z.

    2014-12-01

    Tropical cyclones (TCs), accompanied by heavy rainfall and strong wind, are high-impact weather systems, often causing extensive property damage and even fatalities when they make landfall. Better prediction of TCs can lead to a substantial reduction of social and economic damage, and there is growing interest in enhanced satellite data assimilation for improving TC forecasts. Accurate cloud detection is one of the most important factors in satellite data assimilation due to the uncertainties of cloud properties and their impacts on satellite-observed radiances. To enhance the accuracy of cloud detection and improve TC forecasting, microwave measurements are collocated with a high-spatial-resolution imager cloud mask. The collocated advanced microwave sounder measurements are assimilated for forecasts of hurricane Sandy (2012) and typhoon Haiyan (2013) using the Weather Research and Forecasting (WRF) model and the 3DVAR-based Gridpoint Statistical Interpolation (GSI) data assimilation system. Experiments are carried out to determine a cloud-cover threshold that distinguishes cloud-affected from cloud-unaffected footprints. The results indicate that the use of the high-spatial-resolution imager cloud mask can improve the accuracy of TC forecasts by eliminating cloud-contaminated pixels. The methodology used in this study is applicable to advanced microwave sounders and high-spatial-resolution imagers, such as ATMS/VIIRS onboard NPP and JPSS, and IASI/AVHRR from Metop, for improved TC track and intensity forecasts.

  12. Feature selection using Haar wavelet power spectrum

    PubMed Central

    Subramani, Prabakaran; Sahu, Rajendra; Verma, Shekhar

    2006-01-01

    Background: Feature selection is an approach to overcoming the 'curse of dimensionality' in complex research areas such as disease classification using microarrays. Statistical methods are the most widely used in this domain, but most of them do not fit a wide range of datasets. Transform-oriented signal processing techniques have not been probed much here, even though fields like image and video processing use them well. Wavelets, one such technique, have the potential to be utilized in feature selection. The aim of this paper is to assess the capability of the Haar wavelet power spectrum in the problem of clustering and gene selection based on expression data, in the context of disease classification, and to propose a method based on the Haar wavelet power spectrum. Results: Haar wavelet power spectra of genes were analysed and observed to differ between diagnostic categories. This difference in the trend and magnitude of the spectrum may be utilized in gene selection. Most of the genes selected by earlier, more complex methods were also selected by the very simple present method. Earlier works showed that only a few genes are enough to approach the classification problem [1]; hence the present method may be tried in conjunction with other classification methods. The technique was applied without removing the noise in the data, to validate the robustness of the method against noise and outliers. No special software or complex implementation is needed. The quality of the genes selected by the present method was analysed through their gene expression data. Most of them were observed to be relevant to the classification task, since they were dominant in the diagnostic category of the dataset for which they were selected as features. Conclusion: In the present paper, the problem of feature selection for microarray gene expression data was considered. We analyzed the wavelet power spectrum of genes and proposed a clustering and feature selection method useful for
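
A minimal sketch of a Haar wavelet power spectrum of the kind the paper uses as a feature vector: an orthonormal Haar transform computed by repeated pairwise averaging/differencing, with the energy at each scale recorded. The example signal is made up for illustration.

```python
import numpy as np

def haar_power_spectrum(x):
    """Per-scale energies of the orthonormal Haar transform of x
    (length of x should be a power of two)."""
    x = np.asarray(x, dtype=float)
    energies = []
    while x.size > 1:
        a = (x[0::2] + x[1::2]) / np.sqrt(2)  # approximation coefficients
        d = (x[0::2] - x[1::2]) / np.sqrt(2)  # detail coefficients
        energies.append(np.sum(d * d))        # power at this scale
        x = a
    energies.append(np.sum(x * x))            # coarsest approximation energy
    return np.array(energies)

g = np.array([2.0, 4.0, 6.0, 8.0, 10.0, 8.0, 6.0, 4.0])  # toy "expression profile"
spec = haar_power_spectrum(g)
total = np.sum(g * g)   # orthonormality => spectrum sums to the total energy
```

Because the transform is orthonormal, the spectrum is a lossless redistribution of the signal's energy across scales, which is what makes per-scale comparisons between diagnostic categories meaningful.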

  13. On the Gibbs phenomenon 5: Recovering exponential accuracy from collocation point values of a piecewise analytic function

    NASA Technical Reports Server (NTRS)

    Gottlieb, David; Shu, Chi-Wang

    1994-01-01

    The paper presents a method to recover exponential accuracy at all points (including at the discontinuities themselves), from the knowledge of an approximation to the interpolation polynomial (or trigonometrical polynomial). We show that if we are given the collocation point values (or a highly accurate approximation) at the Gauss or Gauss-Lobatto points, we can reconstruct a uniform exponentially convergent approximation to the function f(x) in any sub-interval of analyticity. The proof covers the cases of Fourier, Chebyshev, Legendre, and more general Gegenbauer collocation methods.
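
The premise the paper builds on — that collocation-point values of an analytic function carry exponentially accurate information — can be checked numerically for the smooth case. The sketch below shows only this smooth baseline, not the paper's Gegenbauer reconstruction across discontinuities.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Interpolate exp(x) at Chebyshev collocation points; for an analytic
# function the max error decays exponentially with the degree.
deg = 12
coeffs = C.chebinterpolate(np.exp, deg)       # values at Chebyshev points

x = np.linspace(-1.0, 1.0, 201)
err = np.max(np.abs(C.chebval(x, coeffs) - np.exp(x)))
```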

  14. Wavelet analysis and applications to some dynamical systems

    NASA Astrophysics Data System (ADS)

    Bendjoya, Ph.; Slezak, E.

    1993-05-01

    The main properties of the wavelet transform as a new time-frequency method which is particularly well suited for detecting and localizing discontinuities and scaling behavior in signals are reviewed. Particular attention is given to first applications of the wavelet transform to dynamical systems including solution of partial differential equations, fractal and turbulence characterization, and asteroid family determination from cluster analysis. Advantages of the wavelet transform over classical analysis methods are summarized.
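
The discontinuity-localization property mentioned above has a two-line demonstration: the Haar detail coefficients of a step signal vanish everywhere except at the sample pair that straddles the jump.

```python
import numpy as np

# Step signal with the jump placed between samples 32 and 33, i.e.
# inside pair number 16 of the one-level Haar split.
x = np.zeros(64)
x[33:] = 1.0

d = (x[0::2] - x[1::2]) / np.sqrt(2)          # one-level Haar details
loc = int(np.argmax(np.abs(d)))               # index of the jump's pair
```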

  15. Embedded wavelet packet transform technique for texture compression

    NASA Astrophysics Data System (ADS)

    Li, Jin; Cheng, Po-Yuen; Kuo, C.-C. Jay

    1995-09-01

    A highly efficient texture compression scheme is proposed in this research. With this scheme, energy compaction of texture images is first achieved by the wavelet packet transform, and an embedding approach is then adopted for the coding of the wavelet packet transform coefficients. By comparing the proposed algorithm with the JPEG standard, FBI wavelet/scalar quantization standard and the EZW scheme with extensive experimental results, we observe a significant improvement in the rate-distortion performance and visual quality.

  16. Variability of Solar Irradiances Using Wavelet Analysis

    NASA Technical Reports Server (NTRS)

    Pesnell, William D.

    2007-01-01

    We have used wavelets to analyze the sunspot number, F10.7 (the solar irradiance at a wavelength of approximately 10.7 cm), and Ap (a geomagnetic activity index). Three different wavelets are compared, showing how each selects either temporal or scale resolution. Our goal is an envelope of solar activity that better bounds the large-amplitude fluctuations from solar minimum to maximum. We show that the 11-year cycle does not disappear at solar minimum; minimum is simply the other part of the solar cycle. Power in the fluctuations of solar-activity-related indices may peak during solar maximum, but the solar cycle itself is always present. The Ap index has a peak after solar maximum that appears to be better correlated with the current solar cycle than with the following cycle.

  17. Wavelets for full reconfigurable ECG acquisition system

    NASA Astrophysics Data System (ADS)

    Morales, D. P.; García, A.; Castillo, E.; Meyer-Baese, U.; Palma, A. J.

    2011-06-01

    This paper presents the use of wavelet cores in a fully reconfigurable electrocardiogram (ECG) acquisition system. The system is composed of two reconfigurable devices, an FPGA and an FPAA. The FPAA is in charge of ECG signal acquisition, since this device is a versatile and reconfigurable analog front-end for biosignals. The FPGA is in charge of FPAA configuration, digital signal processing, and information extraction, such as the heart beat rate and other parameters. Wavelet analysis has become a powerful tool for ECG signal processing, since it fits the ECG signal shape well. The wavelet cores have been integrated into the LabVIEW FPGA module development tool, which makes it possible to employ VHDL cores within the usual LabVIEW graphical programming environment, thus freeing the designer from the tedious and time-consuming design of communication interfaces. This enables rapid testing and graphical representation of results.

  18. Wavelet packet entropy for heart murmurs classification.

    PubMed

    Safara, Fatemeh; Doraisamy, Shyamala; Azman, Azreen; Jantan, Azrul; Ranga, Sri

    2012-01-01

    Heart murmurs are the first signs of cardiac valve disorders. Several studies have been conducted in recent years to automatically differentiate normal heart sounds from heart sounds with murmurs using various types of audio features. Entropy has been used successfully as a feature to distinguish different heart sounds. In this paper, a new entropy measure is introduced for analyzing heart sounds, and the feasibility of using this entropy in the classification of five types of heart sounds and murmurs is shown. The entropy measure was previously introduced to analyze mammograms. Four common murmurs were considered: aortic regurgitation, mitral regurgitation, aortic stenosis, and mitral stenosis. The wavelet packet transform was employed for heart sound analysis, and the entropy was calculated to derive feature vectors. Five types of classification were performed to evaluate the discriminatory power of the generated features. The best results were achieved by BayesNet, with 96.94% accuracy. These promising results substantiate the effectiveness of the proposed wavelet packet entropy for heart sound classification.

  19. Wavelets and their applications past and future

    NASA Astrophysics Data System (ADS)

    Coifman, Ronald R.

    2009-04-01

    As this is a conference on mathematical tools for defense, I would like to dedicate this talk to the memory of Louis Auslander, who, through his insights and visionary leadership, brought powerful new mathematics into DARPA and provided the main impetus for the development and insertion of wavelet-based processing in defense. My goal here is to describe the evolution of a stream of ideas in Harmonic Analysis, ideas which in the past have mostly been applied to the analysis and extraction of information from physical data, and which are now increasingly applied to organize and extract information and knowledge from any set of digital documents, from text to music to questionnaires. This form of signal processing on digital data is part of the future of wavelet analysis.

  20. Wavelet Denoising of Mobile Radiation Data

    SciTech Connect

    Campbell, D; Lanier, R

    2007-10-29

    The investigation of wavelet analysis techniques as a means of filtering the gross-count signal obtained from radiation detectors has shown promise. These signals are contaminated with high frequency statistical noise and significantly varying background radiation levels. Wavelet transforms allow a signal to be split into its constituent frequency components without losing relative timing information. Initial simulations and an injection study have been performed. Additionally, acquisition and analysis software has been written which allowed the technique to be evaluated in real-time under more realistic operating conditions. The technique performed well when compared to more traditional triggering techniques with its performance primarily limited by false alarms due to prominent features in the signal. An initial investigation into the potential rejection and classification of these false alarms has also shown promise.
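
A sketch of wavelet-threshold filtering applied to a noisy count-rate-like trace, under illustrative assumptions (one-level orthonormal Haar split, known noise level, universal soft threshold); the fielded system described above is considerably more elaborate.

```python
import numpy as np

# Slowly varying background plus statistical noise.
rng = np.random.default_rng(0)
n = 1024
t = np.arange(n)
clean = 50.0 + 10.0 * np.sin(2 * np.pi * t / 256.0)
noisy = clean + rng.normal(0.0, 3.0, n)

# One-level orthonormal Haar split.
a = (noisy[0::2] + noisy[1::2]) / np.sqrt(2)
d = (noisy[0::2] - noisy[1::2]) / np.sqrt(2)

sigma = 3.0                                   # assumed known noise level
thr = sigma * np.sqrt(2 * np.log(n))          # universal threshold
d = np.sign(d) * np.maximum(np.abs(d) - thr, 0.0)   # soft thresholding

# Exact inverse of the Haar split.
rec = np.empty(n)
rec[0::2] = (a + d) / np.sqrt(2)
rec[1::2] = (a - d) / np.sqrt(2)

rmse_noisy = np.sqrt(np.mean((noisy - clean) ** 2))
rmse_rec = np.sqrt(np.mean((rec - clean) ** 2))
```

The relative timing of features is preserved because the transform is invertible, which is the property the abstract highlights for gross-count triggering.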

  1. Wavelet analysis of the impedance cardiogram waveforms

    NASA Astrophysics Data System (ADS)

    Podtaev, S.; Stepanov, R.; Dumler, A.; Chugainov, S.; Tziberkin, K.

    2012-12-01

    Impedance cardiography has been used for diagnosing atrial and ventricular dysfunctions, valve disorders, aortic stenosis, and vascular diseases. Almost all applications of impedance cardiography require determination of some of the characteristic points of the ICG waveform. The ICG waveform has a set of characteristic points known as A, B, E ((dZ/dt)max), X, Y, O and Z. These points are related to distinct physiological events in the cardiac cycle. The objective of this work is the validation of a new method for processing and interpreting impedance cardiogram waveforms using wavelet analysis. A method of computer thoracic tetrapolar polyrheocardiography is used for the hemodynamic recordings. An original wavelet differentiation algorithm makes it possible to combine filtering with calculation of the derivatives of the rheocardiogram. The proposed approach can be used in clinical practice for early diagnosis of cardiovascular system remodelling in the course of different pathologies.

  2. Orthogonal wavelet moments and their multifractal invariants

    NASA Astrophysics Data System (ADS)

    Uchaev, Dm. V.; Uchaev, D. V.; Malinnikov, V. A.

    2015-02-01

    This paper introduces a new family of moments, namely orthogonal wavelet moments (OWMs), which are an orthogonal realization of wavelet moments (WMs). In contrast to WMs with a nonorthogonal kernel function, these moments can be used for multiresolution image representation and image reconstruction. The paper also introduces multifractal invariants (MIs) of OWMs, which can be used in place of the OWMs themselves. Reconstruction tests performed with noise-free and noisy images demonstrate that MIs of OWMs can also be used for image smoothing, sharpening and denoising. It is established that the reconstruction quality for MIs of OWMs can be better than that of the corresponding orthogonal moments (OMs), and reduces to the OM reconstruction quality when the zero scale level is used.

  3. Propagating unstable wavelets in cardiac tissue

    NASA Astrophysics Data System (ADS)

    Boyle, Patrick M.; Madhavan, Adarsh; Reid, Matthew P.; Vigmond, Edward J.

    2012-01-01

    Solitonlike propagating modes have been proposed for excitable tissue, but have never been measured in cardiac tissue. In this study, we simulate an experimental protocol to elicit these propagating unstable wavelets (PUWs) in a detailed three-dimensional ventricular wedge preparation. PUWs appear as fixed-shape wavelets that propagate only in the direction of cardiac fibers, with conduction velocity approximately 40% slower than normal action potential excitation. We investigate their properties, demonstrating that PUWs are not true solitons. The range of stimuli for which PUWs were elicited was very narrow (several orders of magnitude lower than the stimulus strength itself), but increased with reduced sodium conductance and reduced coupling in nonlongitudinal directions. We show that the phenomenon does not depend on the particular membrane representation used or the shape of the stimulating electrode.

  4. Development of wavelet analysis tools for turbulence

    NASA Technical Reports Server (NTRS)

    Bertelrud, A.; Erlebacher, G.; Dussouillez, PH.; Liandrat, M. P.; Liandrat, J.; Bailly, F. Moret; Tchamitchian, PH.

    1992-01-01

    Presented here is the general framework and the initial results of a joint effort to derive novel research tools and easy to use software to analyze and model turbulence and transition. Given here is a brief review of the issues, a summary of some basic properties of wavelets, and preliminary results. Technical aspects of the implementation, the physical conclusions reached at this time, and current developments are discussed.

  5. Multiscale peak detection in wavelet space.

    PubMed

    Zhang, Zhi-Min; Tong, Xia; Peng, Ying; Ma, Pan; Zhang, Ming-Jin; Lu, Hong-Mei; Chen, Xiao-Qing; Liang, Yi-Zeng

    2015-12-01

    Accurate peak detection is essential for analyzing high-throughput datasets generated by analytical instruments. Derivatives with noise reduction and matched filtration are frequently used, but they are sensitive to baseline variations, random noise and deviations in peak shape. Continuous wavelet transform (CWT)-based methods are more practical and popular in this situation, since they increase accuracy and reliability by identifying peaks across scales in wavelet space while implicitly removing noise and baseline. However, their computational load is relatively high, and the estimated peak features may be inaccurate for overlapping, dense or weak peaks. In this study, we present multi-scale peak detection (MSPD), which takes full advantage of additional information in wavelet space, including ridges, valleys, and zero-crossings. It achieves high accuracy by thresholding each detected peak with the maximum of its ridge. The method has been comprehensively evaluated on MALDI-TOF spectra in proteomics, the CAMDA 2006 SELDI dataset, and the Romanian database of Raman spectra; the results indicate that MSPD is particularly suitable for detecting peaks in high-throughput analytical signals. Receiver operating characteristic (ROC) curves show that MSPD detects more true peaks while keeping the false discovery rate lower than the MassSpecWavelet and MALDIquant methods. Superior results on Raman spectra suggest that MSPD may be a more universal method for peak detection. MSPD has been designed and implemented efficiently in Python and Cython. It is available as an open source package at .
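    The ridge-based strategy described in this record can be sketched with SciPy's generic `find_peaks_cwt`, which performs CWT ridge-line peak picking. This is a hedged illustration, not the authors' MSPD code, and the three-peak signal is hypothetical:

    ```python
    import numpy as np
    from scipy.signal import find_peaks_cwt

    # Hypothetical noisy signal with three Gaussian peaks at known centers.
    x = np.arange(1000)
    clean = sum(np.exp(-0.5 * ((x - c) / 8.0) ** 2) for c in (200, 500, 800))
    rng = np.random.default_rng(0)
    noisy = clean + 0.05 * rng.standard_normal(x.size)

    # Ridge-line peak detection over a range of wavelet scales.
    peaks = find_peaks_cwt(noisy, widths=np.arange(4, 20))
    print(peaks)
    ```

    The `widths` range should roughly bracket the expected peak widths; MSPD additionally exploits valleys and zero-crossings, which this sketch omits.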

  6. Wavelet features in motion data classification

    NASA Astrophysics Data System (ADS)

    Szczesna, Agnieszka; Świtoński, Adam; Słupik, Janusz; Josiński, Henryk; Wojciechowski, Konrad

    2016-06-01

    The paper deals with the problem of motion data classification based on the results of multiresolution analysis implemented in the form of a quaternion lifting scheme. The scheme operates directly on time series of rotations coded as a unit quaternion signal. In this work, new features derived from wavelet energy and entropy are proposed. To validate the approach, a gait database containing data from 30 different humans is used. The obtained results are satisfactory: the classification achieves over 91% accuracy.
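    A minimal sketch of wavelet energy and entropy features, using a plain Haar step in NumPy rather than the quaternion lifting scheme of the paper (the test signal and level count are illustrative assumptions):

    ```python
    import numpy as np

    def haar_step(s):
        """One level of the orthonormal Haar transform: approximation, detail."""
        s = np.asarray(s, dtype=float)
        a = (s[0::2] + s[1::2]) / np.sqrt(2.0)
        d = (s[0::2] - s[1::2]) / np.sqrt(2.0)
        return a, d

    def wavelet_features(signal, levels=3):
        """Relative energy per subband plus the wavelet energy entropy."""
        details = []
        a = np.asarray(signal, dtype=float)
        for _ in range(levels):
            a, d = haar_step(a)
            details.append(d)
        energies = np.array([np.sum(d ** 2) for d in details] + [np.sum(a ** 2)])
        p = energies / energies.sum()            # relative subband energies
        entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
        return p, entropy

    t = np.linspace(0.0, 1.0, 256, endpoint=False)
    p, H = wavelet_features(np.sin(2 * np.pi * 8 * t), levels=3)
    print(p.round(3), round(H, 3))
    ```

    The feature vector (relative energies and entropy) is what would be handed to a classifier; the signal length must be divisible by 2^levels.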

  7. Correlation Filtering of Modal Dynamics using the Laplace Wavelet

    NASA Technical Reports Server (NTRS)

    Freudinger, Lawrence C.; Lind, Rick; Brenner, Martin J.

    1997-01-01

    Wavelet analysis allows processing of transient response data commonly encountered in vibration health monitoring tasks such as aircraft flutter testing. The Laplace wavelet is formulated as the impulse response of a single-mode system, so as to be similar to data features commonly encountered in these health monitoring tasks. A correlation filtering approach is introduced using the Laplace wavelet to decompose a signal into impulse responses of single-mode subsystems. Applications using responses from flutter testing of aeroelastic systems demonstrate that modal parameters and stability estimates can be obtained by correlation filtering free-decay data with a set of Laplace wavelets.

  8. Wavelet variance analysis for random fields on a regular lattice.

    PubMed

    Mondal, Debashis; Percival, Donald B

    2012-02-01

    There has been considerable recent interest in using wavelets to analyze time series and images that can be regarded as realizations of certain 1-D and 2-D stochastic processes on a regular lattice. Wavelets give rise to the concept of the wavelet variance (or wavelet power spectrum), which decomposes the variance of a stochastic process on a scale-by-scale basis. The wavelet variance has been applied to a variety of time series, and a statistical theory for estimators of this variance has been developed. While there have been applications of the wavelet variance in the 2-D context (in particular, in works by Unser in 1995 on wavelet-based texture analysis for images and by Lark and Webster in 2004 on analysis of soil properties), a formal statistical theory for such analysis has been lacking. In this paper, we develop the statistical theory by generalizing and extending some of the approaches developed for time series, thus leading to a large-sample theory for estimators of 2-D wavelet variances. We apply our theory to simulated data from Gaussian random fields with exponential covariances and from fractional Brownian surfaces. We demonstrate that the wavelet variance is potentially useful for texture discrimination. We also use our methodology to analyze images of four types of clouds observed over the southeast Pacific Ocean.
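    The scale-by-scale variance decomposition underlying the wavelet variance can be illustrated with an orthonormal Haar DWT in NumPy. This is a 1-D sketch under simplifying assumptions, not the authors' 2-D lattice estimator:

    ```python
    import numpy as np

    def haar_dwt_energies(x, levels):
        """Detail-band energies of an orthonormal Haar DWT. Because the
        transform preserves energy, dividing by N gives a scale-by-scale
        decomposition of the sample variance of a zero-mean series."""
        a = np.asarray(x, dtype=float)
        energies = []
        for _ in range(levels):
            d = (a[0::2] - a[1::2]) / np.sqrt(2.0)
            a = (a[0::2] + a[1::2]) / np.sqrt(2.0)
            energies.append(np.sum(d ** 2))
        return np.array(energies), np.sum(a ** 2)

    rng = np.random.default_rng(1)
    x = rng.standard_normal(1024)
    x -= x.mean()                      # zero-mean, as the decomposition assumes
    det, approx = haar_dwt_energies(x, levels=5)
    print((det / x.size).round(4))     # variance contribution per scale
    ```

    For white noise the contributions are roughly flat across scales; a correlated process concentrates variance at particular scales, which is what makes the wavelet variance useful for texture discrimination.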

  9. REVIEWS OF TOPICAL PROBLEMS: Wavelets and their uses

    NASA Astrophysics Data System (ADS)

    Dremin, Igor M.; Ivanov, Oleg V.; Nechitailo, Vladimir A.

    2001-05-01

    This review paper is intended to give a useful guide for those who want to apply the discrete wavelet transform in practice. The notion of wavelets and their use in practical computing and various applications are briefly described, but rigorous proofs of mathematical statements are omitted, and the reader is just referred to the corresponding literature. The multiresolution analysis and fast wavelet transform have become a standard procedure for dealing with discrete wavelets. The proper choice of a wavelet and use of nonstandard matrix multiplication are often crucial for the achievement of a goal. Analysis of various functions with the help of wavelets allows one to reveal fractal structures, singularities etc. The wavelet transform of operator expressions helps solve some equations. In practical applications one often deals with the discretized functions, and the problem of stability of the wavelet transform and corresponding numerical algorithms becomes important. After discussing all these topics we turn to practical applications of the wavelet machinery. They are so numerous that we have to limit ourselves to a few examples only. The authors would be grateful for any comments which would move us closer to the goal proclaimed in the first phrase of the abstract.

  10. Coresident sensor fusion and compression using the wavelet transform

    SciTech Connect

    Yocky, D.A.

    1996-03-11

    Imagery from coresident sensor platforms, such as unmanned aerial vehicles, can be combined using multiresolution decomposition of the sensor images by means of the two-dimensional wavelet transform. The wavelet approach uses the combination of spatial/spectral information at multiple scales to create a fused image. This can be done in either an ad hoc or a model-based approach. We compare results from commercial "fusion" software and the ad hoc wavelet approach. Results show the wavelet approach outperforms the commercial algorithms and also supports efficient compression of the fused image.
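    Wavelet fusion by coefficient selection can be sketched with a single-level 2-D Haar transform: average the approximations and keep the larger-magnitude detail coefficients. This is an illustrative NumPy sketch, not the software compared in the record:

    ```python
    import numpy as np

    def haar2d(img):
        """One level of the 2-D Haar transform: rows, then columns."""
        lo = (img[:, 0::2] + img[:, 1::2]) / np.sqrt(2.0)
        hi = (img[:, 0::2] - img[:, 1::2]) / np.sqrt(2.0)
        def cols(b):
            return ((b[0::2] + b[1::2]) / np.sqrt(2.0),
                    (b[0::2] - b[1::2]) / np.sqrt(2.0))
        (ll, lh), (hl, hh) = cols(lo), cols(hi)
        return ll, lh, hl, hh

    def ihaar2d(ll, lh, hl, hh):
        """Invert one Haar level (columns, then rows)."""
        def icols(a, d):
            out = np.empty((2 * a.shape[0], a.shape[1]))
            out[0::2] = (a + d) / np.sqrt(2.0)
            out[1::2] = (a - d) / np.sqrt(2.0)
            return out
        lo, hi = icols(ll, lh), icols(hl, hh)
        out = np.empty((lo.shape[0], 2 * lo.shape[1]))
        out[:, 0::2] = (lo + hi) / np.sqrt(2.0)
        out[:, 1::2] = (lo - hi) / np.sqrt(2.0)
        return out

    def fuse(img_a, img_b):
        """Average the approximations, keep larger-magnitude details."""
        ca, cb = haar2d(img_a), haar2d(img_b)
        ll = (ca[0] + cb[0]) / 2.0
        dets = [np.where(np.abs(da) >= np.abs(db), da, db)
                for da, db in zip(ca[1:], cb[1:])]
        return ihaar2d(ll, *dets)

    rng = np.random.default_rng(7)
    img_a = rng.random((16, 16))
    img_b = rng.random((16, 16))
    fused = fuse(img_a, img_b)
    print(fused.shape)  # (16, 16)
    ```

    The max-magnitude rule keeps the sharper edge response from either sensor at each location; practical systems iterate this over several decomposition levels.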

  11. Wavelet-based moment invariants for pattern recognition

    NASA Astrophysics Data System (ADS)

    Chen, Guangyi; Xie, Wenfang

    2011-07-01

    Moment invariants have received a lot of attention as features for identification and inspection of two-dimensional shapes. In this paper, two sets of novel moments are proposed by using the auto-correlation of wavelet functions and the dual-tree complex wavelet functions. It is well known that the wavelet transform lacks the property of shift invariance: a small shift in the input signal can cause very different output wavelet coefficients. The auto-correlation of wavelet functions and the dual-tree complex wavelet functions, on the other hand, are shift-invariant, which is very important in pattern recognition. Rotation invariance is the major concern in this paper, while translation invariance and scale invariance can be achieved by standard normalization techniques. Gaussian white noise is added to the noise-free images, with noise levels varying over different signal-to-noise ratios. Experimental results show that the proposed wavelet-based moments outperform Zernike moments and the Fourier-wavelet descriptor for pattern recognition under different rotation angles and noise levels, and perform well even when the noise levels are very high.

  12. Improved total variation algorithms for wavelet-based denoising

    NASA Astrophysics Data System (ADS)

    Easley, Glenn R.; Colonna, Flavia

    2007-04-01

    Many improvements of wavelet-based restoration techniques suggest the use of the total variation (TV) algorithm. The concept of combining wavelet and total variation methods seems effective, but the reasons for the success of this combination have so far been poorly understood. We propose a variation of the total variation method that is designed to avoid artifacts such as oil-painting effects and is better suited than standard TV techniques for implementation with wavelet-based estimates. We then illustrate the effectiveness of this new TV-based method using some of the latest wavelet transforms, such as contourlets and shearlets.

  13. A Wavelet Packets Approach to Electrocardiograph Baseline Drift Cancellation

    PubMed Central

    Mozaffary, Behzad

    2006-01-01

    Baseline wander elimination is considered a classical problem. In electrocardiography (ECG) signals, baseline drift can influence the accurate diagnosis of heart diseases such as ischemia and arrhythmia. We present a wavelet-transform- (WT-) based search algorithm that uses the energy of the signal at different scales to isolate baseline wander from the ECG signal. The algorithm computes wavelet packet coefficients, and the energy of the signal is then calculated in each scale. A comparison is made, and the branch of the wavelet binary tree corresponding to the higher-energy wavelet spaces is chosen. The algorithm was tested using data records from the MIT/BIH database, with excellent results. PMID:23165064
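    The branch-selection idea can be approximated with a plain dyadic Haar decomposition: treat the deepest approximation as the low-frequency baseline branch and reconstruct without it. This is a simplified sketch, not the authors' wavelet-packet energy search, and the ECG-like signal is synthetic:

    ```python
    import numpy as np

    def remove_baseline(x, levels=6):
        """Zero the deepest Haar approximation (the low-frequency branch)
        and reconstruct from the detail branches only."""
        a = np.asarray(x, dtype=float)
        details = []
        for _ in range(levels):
            d = (a[0::2] - a[1::2]) / np.sqrt(2.0)
            a = (a[0::2] + a[1::2]) / np.sqrt(2.0)
            details.append(d)
        a = np.zeros_like(a)  # discard the baseline branch
        for d in reversed(details):
            out = np.empty(2 * a.size)
            out[0::2] = (a + d) / np.sqrt(2.0)
            out[1::2] = (a - d) / np.sqrt(2.0)
            a = out
        return a

    # Hypothetical ECG-like trace: sharp beats riding on slow baseline drift.
    x = np.arange(2048)
    drift = 0.8 * np.sin(2 * np.pi * x / 1024.0)
    beats = sum(np.exp(-0.5 * ((x - c) / 3.0) ** 2) for c in range(128, 2048, 256))
    corrected = remove_baseline(drift + beats)
    base_mse = np.mean(drift ** 2)                 # error if drift is left in place
    corr_mse = np.mean((corrected - beats) ** 2)
    print(base_mse, corr_mse)
    ```

    The level count sets the cut-off between "baseline" and "signal" bands; the packet-tree search in the record chooses this split adaptively by comparing branch energies.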

  14. Wavelet and Multiresolution Analysis for Finite Element Networking Paradigms

    NASA Technical Reports Server (NTRS)

    Kurdila, Andrew J.; Sharpley, Robert C.

    1999-01-01

    This paper presents a final report on Wavelet and Multiresolution Analysis for Finite Element Networking Paradigms. The focus of this research is to derive and implement: 1) Wavelet based methodologies for the compression, transmission, decoding, and visualization of three dimensional finite element geometry and simulation data in a network environment; 2) methodologies for interactive algorithm monitoring and tracking in computational mechanics; and 3) Methodologies for interactive algorithm steering for the acceleration of large scale finite element simulations. Also included in this report are appendices describing the derivation of wavelet based Particle Image Velocity algorithms and reduced order input-output models for nonlinear systems by utilizing wavelet approximations.

  15. Scope and applications of translation invariant wavelets to image registration

    NASA Technical Reports Server (NTRS)

    Chettri, Samir; LeMoigne, Jacqueline; Campbell, William

    1997-01-01

    The first part of this article introduces the notion of translation invariance in wavelets and discusses several wavelets that have this property. The second part discusses the possible applications of such wavelets to image registration. In the case of registration of affinely transformed images, we would conclude that the notion of translation invariance is not really necessary. What is needed is affine invariance and one way to do this is via the method of moment invariants. Wavelets or, in general, pyramid processing can then be combined with the method of moment invariants to reduce the computational load.

  16. Wavelet-based verification of the quantitative precipitation forecast

    NASA Astrophysics Data System (ADS)

    Yano, Jun-Ichi; Jakubiak, Bogumil

    2016-06-01

    This paper explores the use of wavelets for spatial verification of quantitative precipitation forecasts (QPF), and especially the capacity of wavelets to provide both localization and scale information. Two 24-h forecast experiments using the two versions of the Coupled Ocean/Atmosphere Mesoscale Prediction System (COAMPS) on 22 August 2010 over Poland are used to illustrate the method. Strong spatial localizations and associated intermittency of the precipitation field make verification of QPF difficult using standard statistical methods. The wavelet becomes an attractive alternative, because it is specifically designed to extract spatially localized features. The wavelet modes are characterized by the two indices for the scale and the localization. Thus, these indices can simply be employed for characterizing the performance of QPF in scale and localization without any further elaboration or tunable parameters. Furthermore, spatially-localized features can be extracted in wavelet space in a relatively straightforward manner with only a weak dependence on a threshold. Such a feature may be considered an advantage of the wavelet-based method over more conventional "object" oriented verification methods, as the latter tend to represent strong threshold sensitivities. The present paper also points out limits of the so-called "scale separation" methods based on wavelets. Our study demonstrates how these wavelet-based QPF verifications can be performed straightforwardly. Possibilities for further developments of the wavelet-based methods, especially towards a goal of identifying a weak physical process contributing to forecast error, are also pointed out.

  17. Bayesian Wavelet Shrinkage of the Haar-Fisz Transformed Wavelet Periodogram

    PubMed Central

    2015-01-01

    It is increasingly being realised that many real world time series are not stationary and exhibit evolving second-order autocovariance or spectral structure. This article introduces a Bayesian approach for modelling the evolving wavelet spectrum of a locally stationary wavelet time series. Our new method works by combining the advantages of a Haar-Fisz transformed spectrum with a simple, but powerful, Bayesian wavelet shrinkage method. Our new method produces excellent and stable spectral estimates and this is demonstrated via simulated data and on differenced infant electrocardiogram data. A major additional benefit of the Bayesian paradigm is that we obtain rigorous and useful credible intervals of the evolving spectral structure. We show how the Bayesian credible intervals provide extra insight into the infant electrocardiogram data. PMID:26381141

  18. Denoising solar radiation data using coiflet wavelets

    SciTech Connect

    Karim, Samsul Ariffin Abdul Janier, Josefina B. Muthuvalu, Mohana Sundaram; Hasan, Mohammad Khatim; Sulaiman, Jumat; Ismail, Mohd Tahir

    2014-10-24

    Signal denoising and smoothing play an important role in processing signals obtained from experiments or from data collection through observation. Collected data are usually a mixture of the true data and some error or noise, which may come from the measuring or recording apparatus or from human error in handling the data. Normally, this unwanted noise needs to be filtered out before the data are used for further processing. One efficient method for filtering the data is the wavelet transform. Because received solar radiation data fluctuate over time, they contain unwanted oscillations (noise) that must be filtered out before the data are used to develop a mathematical model. To apply wavelet-transform (WT) denoising, the thresholding values need to be calculated. In this paper a new thresholding approach is proposed, using the coiflet2 wavelet with variation diminishing 4. Numerical results show clearly that the new thresholding approach gives better results than the existing approach based on a global thresholding value.
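    Since the record does not give the authors' coiflet2 threshold formula, here is a generic sketch of wavelet threshold denoising with a Haar transform and the universal (global) threshold; both the wavelet and the threshold rule are assumptions for illustration:

    ```python
    import numpy as np

    def haar_fwd(x, levels):
        a = np.asarray(x, dtype=float)
        details = []
        for _ in range(levels):
            d = (a[0::2] - a[1::2]) / np.sqrt(2.0)
            a = (a[0::2] + a[1::2]) / np.sqrt(2.0)
            details.append(d)
        return a, details

    def haar_inv(a, details):
        for d in reversed(details):
            out = np.empty(2 * a.size)
            out[0::2] = (a + d) / np.sqrt(2.0)
            out[1::2] = (a - d) / np.sqrt(2.0)
            a = out
        return a

    def denoise(x, levels=4):
        """Soft-threshold all detail bands at the universal threshold."""
        a, details = haar_fwd(x, levels)
        sigma = np.median(np.abs(details[0])) / 0.6745  # noise scale (MAD)
        thr = sigma * np.sqrt(2.0 * np.log(x.size))     # universal threshold
        details = [np.sign(d) * np.maximum(np.abs(d) - thr, 0.0) for d in details]
        return haar_inv(a, details)

    rng = np.random.default_rng(2)
    clean = np.repeat([0.0, 4.0, -2.0, 3.0], 256)  # piecewise-constant test signal
    noisy = clean + 0.5 * rng.standard_normal(clean.size)
    den = denoise(noisy)
    print(np.mean((noisy - clean) ** 2), np.mean((den - clean) ** 2))
    ```

    The noise scale is estimated from the finest detail band via the median absolute deviation; the paper's contribution is precisely a different way of choosing the threshold.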

  19. Wavelet Technique Applications in Planetary Nebulae Images

    NASA Astrophysics Data System (ADS)

    Leal Ferreira, M. L.; Rabaça, C. R.; Cuisinier, F.; Epitácio Pereira, D. N.

    2009-05-01

    Through the application of the wavelet technique to a planetary nebula image, we are able to identify structures of different scale sizes in its wavelet coefficient decompositions. In a multiscale vision model, an object is defined as a hierarchical set of these structures. We can then use this model to independently reconstruct the different objects that compose the nebula. The result is the separation and identification of superposed objects, some of them with very low surface brightness, which makes them, in general, very difficult to see in the original images due to the presence of noise. This allows us to make a more detailed analysis of the brightness distribution in these sources. In this project, we use this method to perform a detailed morphological study of some planetary nebulae and to investigate whether one of them indeed shows internal temperature fluctuations. We have also conducted a series of tests concerning the reliability of the method and the confidence level of the detected objects. The wavelet code used in this project is called OV_WAV and was developed by the UFRJ Astronomy Department team.

  20. Wavelet analysis of radon time series

    NASA Astrophysics Data System (ADS)

    Barbosa, Susana; Pereira, Alcides; Neves, Luis

    2013-04-01

    Radon is a radioactive noble gas with a half-life of 3.8 days ubiquitous in both natural and indoor environments. Being produced in uranium-bearing materials by decay from radium, radon can be easily and accurately measured by nuclear methods, making it an ideal proxy for time-varying geophysical processes. Radon time series exhibit a complex temporal structure and large variability on multiple scales. Wavelets are therefore particularly suitable for the analysis on a scale-by-scale basis of time series of radon concentrations. In this study continuous and discrete wavelet analysis is applied to describe the variability structure of hourly radon time series acquired both indoors and on a granite site in central Portugal. A multi-resolution decomposition is performed for extraction of sub-series associated to specific scales. The high-frequency components are modeled in terms of stationary autoregressive / moving average (ARMA) processes. The amplitude and phase of the periodic components are estimated and tidal features of the signals are assessed. Residual radon concentrations (after removal of periodic components) are further examined and the wavelet spectrum is used for estimation of the corresponding Hurst exponent. The results for the several radon time series considered in the present study are very heterogeneous in terms of both high-frequency and long-term temporal structure indicating that radon concentrations are very site-specific and heavily influenced by local factors.
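    Estimating the Hurst exponent from the wavelet spectrum, as done for the residual radon series, can be sketched as a regression of log2 detail energy on scale (Haar DWT in NumPy; the slope-to-H relation assumes a stationary, fGn-like process):

    ```python
    import numpy as np

    def hurst_wavelet(x, levels=6):
        """Estimate H from the slope of log2 mean Haar detail energy versus
        scale, using E[d_j^2] ~ 2^{j(2H-1)} for stationary fGn-like series."""
        a = np.asarray(x, dtype=float)
        js, log_energy = [], []
        for j in range(1, levels + 1):
            d = (a[0::2] - a[1::2]) / np.sqrt(2.0)
            a = (a[0::2] + a[1::2]) / np.sqrt(2.0)
            js.append(j)
            log_energy.append(np.log2(np.mean(d ** 2)))
        slope = np.polyfit(js, log_energy, 1)[0]
        return (slope + 1.0) / 2.0

    rng = np.random.default_rng(3)
    white = rng.standard_normal(2 ** 14)
    print(round(hurst_wavelet(white), 2))  # white noise: close to 0.5
    ```

    Periodic (e.g. tidal) components should be removed before the regression, as in the record, since they distort the detail energies at their own scales.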

  1. Multispectral multisensor image fusion using wavelet transforms

    USGS Publications Warehouse

    Lemeshewsky, George P.

    1999-01-01

    Fusion techniques can be applied to multispectral and higher spatial resolution panchromatic images to create a composite image that is easier to interpret than the individual images. Wavelet transform-based multisensor, multiresolution fusion (a type of band sharpening) was applied to Landsat thematic mapper (TM) multispectral and coregistered higher-resolution SPOT panchromatic images. The objective was to obtain increased spatial resolution, false color composite products to support the interpretation of land cover types wherein the spectral characteristics of the imagery are preserved to provide the spectral clues needed for interpretation. Since the fusion process should not introduce artifacts, a shift-invariant implementation of the discrete wavelet transform (SIDWT) was used. These results were compared with those using the shift-variant discrete wavelet transform (DWT). Overall, the process includes a hue, saturation, and value color space transform to minimize color changes, and a reported point-wise maximum selection rule to combine transform coefficients. The performance of fusion based on the SIDWT and DWT was evaluated with a simulated TM 30-m spatial resolution test image and a higher-resolution reference. Simulated imagery was made by blurring higher-resolution color-infrared photography with the TM sensors' point spread function. The SIDWT-based technique produced imagery with fewer artifacts and lower error between the fused images and the full-resolution reference. Image examples with TM and 10-m SPOT panchromatic imagery illustrate the reduction in artifacts due to SIDWT-based fusion.

  2. Spectral Data Reduction via Wavelet Decomposition

    NASA Technical Reports Server (NTRS)

    Kaewpijit, S.; LeMoigne, J.; El-Ghazawi, T.; Rood, Richard (Technical Monitor)

    2002-01-01

    The greatest advantage gained from hyperspectral imagery is that narrow spectral features can be used to give more information about materials than was previously possible with broad-band multispectral imagery. For many applications, however, the larger data volumes from such hyperspectral sensors present a challenge for traditional processing techniques. For example, the identification of each ground surface pixel by its corresponding reflecting spectral signature is still one of the most difficult challenges in the exploitation of this advanced technology, because of the immense volume of data collected. Conventional classification methods therefore require a preprocessing step of dimension reduction to overcome the so-called "curse of dimensionality." Spectral data reduction using wavelet decomposition is useful here, as it not only reduces the data volume but also preserves the distinctions between spectral signatures. This characteristic is related to the intrinsic property of wavelet transforms of preserving high- and low-frequency features during signal decomposition, thereby preserving the peaks and valleys found in typical spectra. Comparing with the most widespread dimension reduction technique, Principal Component Analysis (PCA), at the same compression rate, we show that wavelet reduction yields better classification accuracy for hyperspectral data processed with a conventional supervised classifier such as the maximum likelihood method.
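    Keeping only wavelet approximation coefficients as a spectral dimension-reduction step can be sketched in a few lines (Haar averaging; the 256-band spectrum is hypothetical, and real sensors may need padding to a power-of-two band count):

    ```python
    import numpy as np

    def reduce_spectrum(spectrum, levels):
        """Keep only the Haar approximation coefficients: each level halves
        the number of spectral bands while preserving the smooth shape."""
        a = np.asarray(spectrum, dtype=float)
        for _ in range(levels):
            a = (a[0::2] + a[1::2]) / np.sqrt(2.0)
        return a

    # Hypothetical smooth 256-band reflectance spectrum.
    spectrum = np.exp(-((np.arange(256) - 90) ** 2) / 500.0)
    reduced = reduce_spectrum(spectrum, levels=3)
    print(reduced.size)  # 32
    ```

    Because the approximation is a low-pass projection, broad absorption features survive the reduction, which is the property the record contrasts with PCA.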

  3. A new approach to pre-processing digital image for wavelet-based watermark

    NASA Astrophysics Data System (ADS)

    Agreste, Santa; Andaloro, Guido

    2008-11-01

    The growth of the Internet has increased the phenomenon of digital piracy, in multimedia objects, like software, image, video, audio and text. Therefore it is strategic to individualize and to develop methods and numerical algorithms, which are stable and have low computational cost, that will allow us to find a solution to these problems. We describe a digital watermarking algorithm for color image protection and authenticity: robust, not blind, and wavelet-based. The use of Discrete Wavelet Transform is motivated by good time-frequency features and a good match with Human Visual System directives. These two combined elements are important for building an invisible and robust watermark. Moreover our algorithm can work with any image, thanks to the step of pre-processing of the image that includes resize techniques that adapt to the size of the original image for Wavelet transform. The watermark signal is calculated in correlation with the image features and statistic properties. In the detection step we apply a re-synchronization between the original and watermarked image according to the Neyman-Pearson statistic criterion. Experimentation on a large set of different images has been shown to be resistant against geometric, filtering, and StirMark attacks with a low rate of false alarm.

  4. A Supervised Wavelet Transform Algorithm for R Spike Detection in Noisy ECGs

    NASA Astrophysics Data System (ADS)

    de Lannoy, G.; de Decker, A.; Verleysen, M.

    The wavelet transform is a widely used pre-filtering step for subsequent R spike detection by thresholding of the coefficients. The time-frequency decomposition is indeed a powerful tool for analyzing non-stationary signals. Still, current methods use consecutive wavelet scales in an a priori restricted range and may therefore lack adaptivity. This paper introduces a supervised learning algorithm which learns the optimal scales for each dataset using the annotations provided by physicians on a small training set. For each record, this method allows a specific set of nonconsecutive scales to be selected, based on the record's characteristics. The selected scales are then used for the decomposition of the original long-term ECG signal recording, and a hard thresholding rule is applied to the derivative of the wavelet coefficients to label the R spikes. This algorithm has been tested on the MIT-BIH arrhythmia database and obtains an average sensitivity of 99.7% and an average positive predictivity of 99.7%.
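    Single-scale wavelet thresholding for spike labeling can be sketched as follows. The scale is fixed here as if already learned; this is not the supervised scale-selection algorithm itself, and the ECG-like trace is synthetic:

    ```python
    import numpy as np

    def ricker(n, a):
        """Mexican-hat wavelet sampled at n points, width parameter a."""
        t = np.arange(n) - (n - 1) / 2.0
        return (1.0 - (t / a) ** 2) * np.exp(-(t ** 2) / (2.0 * a ** 2))

    def detect_r_spikes(sig, scale=6.0, frac=0.5):
        """Single-scale CWT response + hard threshold + local maxima."""
        c = np.convolve(sig, ricker(int(10 * scale), scale), mode="same")
        thr = frac * c.max()
        return [i for i in range(1, len(c) - 1)
                if c[i] >= thr and c[i] >= c[i - 1] and c[i] > c[i + 1]]

    # Hypothetical ECG-like trace: Gaussian "R spikes" plus noise.
    rng = np.random.default_rng(4)
    x = np.arange(2000)
    ecg = sum(np.exp(-0.5 * ((x - c) / 4.0) ** 2) for c in (250, 750, 1250, 1750))
    ecg = ecg + 0.05 * rng.standard_normal(x.size)
    print(detect_r_spikes(ecg))
    ```

    The paper's contribution is learning which (possibly nonconsecutive) scales to combine per record; this sketch shows only the threshold-and-label step at one scale.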

  5. Hybrid wavelet-support vector machine approach for modelling rainfall-runoff process.

    PubMed

    Komasi, Mehdi; Sharghi, Soroush

    2016-01-01

    Because of the importance of water resources management, the need for accurate modeling of the rainfall-runoff process has grown rapidly in the past decades. Recently, the support vector machine (SVM) approach has been used by hydrologists for rainfall-runoff modeling and in other fields of hydrology. Like other artificial intelligence models, such as the artificial neural network (ANN) and the adaptive neural fuzzy inference system, the SVM model is based on autoregressive properties. In this paper, wavelet analysis was linked to the SVM model concept for modeling the rainfall-runoff process of the Aghchai and Eel River watersheds. The main time series of the two variables, rainfall and runoff, were decomposed into multiple time series at different frequencies by wavelet theory; these series were then used as input data for the SVM model to predict the runoff discharge one day ahead. The obtained results show that the wavelet-SVM model can predict both short- and long-term runoff discharges by considering the seasonality effects. The proposed hybrid model is also relatively more appropriate than classical autoregressive models such as ANN and SVM because it uses the multi-scale time series of rainfall and runoff data in the modeling process. PMID:27120649
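    The decomposition step, splitting a series into frequency-band components that are fed to the model, can be sketched as an additive Haar multiresolution analysis (illustrative; the record does not specify the mother wavelet, and the runoff-like series is synthetic):

    ```python
    import numpy as np

    def haar_mra(x, levels=3):
        """Additive multiresolution decomposition: reconstruct each detail
        band (and the final approximation) separately; by linearity the
        components sum back to the original series."""
        def fwd(a, L):
            dets = []
            for _ in range(L):
                d = (a[0::2] - a[1::2]) / np.sqrt(2.0)
                a = (a[0::2] + a[1::2]) / np.sqrt(2.0)
                dets.append(d)
            return a, dets
        def inv(a, dets):
            for d in reversed(dets):
                out = np.empty(2 * a.size)
                out[0::2] = (a + d) / np.sqrt(2.0)
                out[1::2] = (a - d) / np.sqrt(2.0)
                a = out
            return a
        a, dets = fwd(np.asarray(x, dtype=float), levels)
        comps = []
        for j in range(levels):
            zeroed = [np.zeros_like(d) for d in dets]
            zeroed[j] = dets[j]
            comps.append(inv(np.zeros_like(a), zeroed))
        comps.append(inv(a, [np.zeros_like(d) for d in dets]))
        return comps

    rng = np.random.default_rng(6)
    series = rng.standard_normal(256).cumsum()  # hypothetical daily runoff-like series
    comps = haar_mra(series, levels=3)
    print(len(comps))  # 4
    ```

    Each component (three detail bands plus the approximation) would become one input channel to the downstream regression model.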

  6. Denoising of X-ray pulsar observed profile in the undecimated wavelet domain

    NASA Astrophysics Data System (ADS)

    Xue, Meng-fan; Li, Xiao-ping; Fu, Ling-zhong; Liu, Xiu-ping; Sun, Hai-feng; Shen, Li-rong

    2016-01-01

    The low intensity of the X-ray pulsar signal and the strong X-ray background radiation lead to low signal-to-noise ratio (SNR) of the X-ray pulsar observed profile obtained through epoch folding, especially when the observation time is not long enough. This signifies the necessity of denoising of the observed profile. In this paper, the statistical characteristics of the X-ray pulsar signal are studied, and a signal-dependent noise model is established for the observed profile. Based on this, a profile noise reduction method by performing a local linear minimum mean square error filtering in the undecimated wavelet domain is developed. The detail wavelet coefficients are rescaled by multiplying their amplitudes by a locally adaptive factor, which is the local variance ratio of the noiseless coefficients to the noisy ones. All the nonstationary statistics needed in the algorithm are calculated from the observed profile, without a priori information. The results of experiments, carried out on simulated data obtained by the ground-based simulation system and real data obtained by the Rossi X-Ray Timing Explorer satellite, indicate that the proposed method is excellent in both noise suppression and preservation of peak sharpness, and it also clearly outperforms four widely accepted and used wavelet denoising methods, in terms of SNR, Pearson correlation coefficient and root mean square error.
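    The locally adaptive rescaling factor can be sketched for a single detail band: each coefficient is scaled by the local ratio of estimated signal variance to total variance. This is an illustrative NumPy sketch of local linear MMSE shrinkage with a known noise level, not the authors' undecimated, signal-dependent implementation:

    ```python
    import numpy as np

    def llmmse_shrink(d, sigma_n, win=9):
        """Rescale each detail coefficient by the local variance ratio of
        the (estimated) noiseless coefficients to the noisy ones."""
        local_var = np.convolve(d ** 2, np.ones(win) / win, mode="same")
        gain = np.maximum(local_var - sigma_n ** 2, 0.0) / np.maximum(local_var, 1e-12)
        return gain * d

    # Hypothetical detail band: a few strong pulse features in unit noise.
    rng = np.random.default_rng(5)
    clean = np.zeros(512)
    clean[100:110] = 5.0
    clean[300:310] = -4.0
    noisy = clean + rng.standard_normal(clean.size)
    shrunk = llmmse_shrink(noisy, sigma_n=1.0)
    print(np.mean((noisy - clean) ** 2), np.mean((shrunk - clean) ** 2))
    ```

    Where the band is locally all noise the gain approaches zero; near the pulse edges the gain approaches one, which is why peak sharpness is preserved.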

  8. Lexical Collocations and Their Impact on the Online Writing of Taiwanese College English Majors and Non-English Majors

    ERIC Educational Resources Information Center

    Hsu, Jeng-yih

    2007-01-01

    The present study investigates the use of English lexical collocations and their relation to the online writing of Taiwanese college English majors and non-English majors. Data for the study were collected from 41 English majors and 21 non-English majors at a national university of science and technology in southern Taiwan. Each student was asked…

  9. Idiomobile for Learners of English: A Study of Learners' Usage of a Mobile Learning Application for Learning Idioms and Collocations

    ERIC Educational Resources Information Center

    Amer, Mahmoud Atiah

    2010-01-01

    This study explored how four groups of English learners used a mobile software application developed by the researcher for learning idiomatic expressions and collocations. A total of 45 learners in the study used the application for a period of one week. Data for this study was collected from a questionnaire, the application, and follow-up…

  10. A Study of Learners' Usage of a Mobile Learning Application for Learning Idioms and Collocations

    ERIC Educational Resources Information Center

    Amer, Mahmoud

    2014-01-01

    This study explored how four groups of language learners used a mobile software application developed by the researcher for learning idiomatic expressions and collocations. A total of 45 participants in the study used the application for a period of one week. Data for this study was collected from the application, a questionnaire, and follow-up…

  11. Stretched Verb Collocations with "Give": Their Use and Translation into Spanish Using the BNC and CREA Corpora

    ERIC Educational Resources Information Center

    Molina-Plaza, Silvia; de Gregorio-Godeo, Eduardo

    2010-01-01

    Within the context of on-going research, this paper explores the pedagogical implications of contrastive analyses of multiword units in English and Spanish based on electronic corpora as a CALL resource. The main tenets of collocations from a contrastive perspective--and the points of contact and departure between both languages--are discussed…

  12. A Corpus-Driven Investigation of Chinese English Learners' Performance of Verb-Noun Collocation: A Case Study of "Ability"

    ERIC Educational Resources Information Center

    Xia, Lixin

    2013-01-01

    The paper presents a contrastive study of the verb-noun collocations produced by Chinese EFL learners, based on the CLEC, ICLE and BNC. First, all concordance lines containing the token "ability" in the CLEC were collected and analyzed. Then, they were tagged manually in order to sort out the sentences in the verb-noun collocation…

  13. A Computational Approach to Detecting Collocation Errors in the Writing of Non-Native Speakers of English

    ERIC Educational Resources Information Center

    Futagi, Yoko; Deane, Paul; Chodorow, Martin; Tetreault, Joel

    2008-01-01

    This paper describes the first prototype of an automated tool for detecting collocation errors in texts written by non-native speakers of English. Candidate strings are extracted by pattern matching over POS-tagged text. Since learner texts often contain spelling and morphological errors, the tool attempts to automatically correct them in order to…
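    The candidate-extraction step described above, pattern matching over POS-tagged text, can be sketched in a few lines. The tagset, window size, and helper below are illustrative assumptions, not the tool's actual implementation.

```python
# Sketch: extract verb-noun collocation candidates from POS-tagged text.
# The Penn-style tags, the 3-token window, and this helper are
# illustrative assumptions, not the tool described in the abstract.

def extract_verb_noun_candidates(tagged_tokens):
    """Return (verb, noun) pairs where a verb is followed, within a short
    window, by a noun (skipping determiners and adjectives)."""
    candidates = []
    skippable = {"DT", "JJ"}            # determiners, adjectives
    for i, (word, tag) in enumerate(tagged_tokens):
        if not tag.startswith("VB"):
            continue
        for j in range(i + 1, min(i + 4, len(tagged_tokens))):
            w2, t2 = tagged_tokens[j]
            if t2.startswith("NN"):
                candidates.append((word.lower(), w2.lower()))
                break
            if t2 not in skippable:
                break
    return candidates

tagged = [("She", "PRP"), ("made", "VBD"), ("a", "DT"), ("big", "JJ"),
          ("mistake", "NN"), ("and", "CC"), ("took", "VBD"),
          ("the", "DT"), ("bus", "NN")]
print(extract_verb_noun_candidates(tagged))
# → [('made', 'mistake'), ('took', 'bus')]
```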

  14. The Effect of Form versus Meaning-Focused Tasks on the Development of Collocations among Iranian Intermediate EFL Learners

    ERIC Educational Resources Information Center

    Pishghadam, Reza; Khodadady, Ebrahim; Rad, Naeemeh Daliry

    2011-01-01

    This study comprehensively investigates the effect of form-focused versus meaning-focused tasks on the development of collocations among Iranian intermediate EFL learners. To this end, 65 students from Mashhad high schools in Iran were selected as participants. A general language proficiency test of Nelson (book 2, Intermediate 200A) was used…

  15. A method for stochastic constrained optimization using derivative-free surrogate pattern search and collocation

    SciTech Connect

    Sankaran, Sethuraman; Audet, Charles; Marsden, Alison L.

    2010-06-20

    Recent advances in coupling novel optimization methods to large-scale computing problems have opened the door to tackling a diverse set of physically realistic engineering design problems. A large computational overhead is associated with computing the cost function for most practical problems involving complex physical phenomena. Such problems are also plagued with uncertainties in a diverse set of parameters. We present a novel stochastic derivative-free optimization approach for tackling such problems. Our method extends the previously developed surrogate management framework (SMF) to allow for uncertainties in both simulation parameters and design variables. The stochastic collocation scheme is employed for stochastic variables, whereas Kriging-based surrogate functions are used for the cost function. This approach is tested on four numerical optimization problems and is shown to yield a significant improvement in efficiency over traditional Monte Carlo schemes. Problems with multiple probabilistic constraints are also discussed.
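    The stochastic collocation idea invoked above, evaluating the model at a small set of quadrature nodes in the random parameter rather than at random Monte Carlo samples, can be sketched as follows; the cost function is a placeholder, not the paper's simulation model.

```python
import numpy as np

# Sketch of stochastic collocation: approximate E[f(x, theta)] for
# theta ~ N(0, 1) by evaluating f at Gauss-Hermite quadrature nodes
# instead of random Monte Carlo samples. The cost function f is an
# illustrative placeholder.

def cost(x, theta):
    return (x - 1.0) ** 2 + 0.5 * np.sin(x + theta) ** 2

def expected_cost(x, n_nodes=9):
    # Nodes/weights for weight e^{-t^2}; substitute theta = sqrt(2)*t and
    # divide by sqrt(pi) to integrate against the standard normal density.
    t, w = np.polynomial.hermite.hermgauss(n_nodes)
    return np.sum(w * cost(x, np.sqrt(2.0) * t)) / np.sqrt(np.pi)

# Nine deterministic evaluations match a 200,000-sample Monte Carlo run.
rng = np.random.default_rng(0)
mc = np.mean(cost(1.2, rng.standard_normal(200_000)))
sc = expected_cost(1.2)
print(abs(sc - mc) < 0.01)   # True: collocation agrees with Monte Carlo
```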

  16. An ADI extrapolated Crank-Nicolson orthogonal spline collocation method for nonlinear reaction-diffusion systems

    NASA Astrophysics Data System (ADS)

    Fernandes, Ryan I.; Fairweather, Graeme

    2012-08-01

    An alternating direction implicit (ADI) orthogonal spline collocation (OSC) method is described for the approximate solution of a class of nonlinear reaction-diffusion systems. Its efficacy is demonstrated on the solution of well-known examples of such systems, specifically the Brusselator, Gray-Scott, Gierer-Meinhardt and Schnakenberg models, and comparisons are made with other numerical techniques considered in the literature. The new ADI method is based on an extrapolated Crank-Nicolson OSC method and is algebraically linear. It is efficient, requiring at each time level only O(N) operations where N is the number of unknowns. Moreover, it is shown to produce approximations which are of optimal global accuracy in various norms, and to possess superconvergence properties.

  17. MULTILEVEL ACCELERATION OF STOCHASTIC COLLOCATION METHODS FOR PDE WITH RANDOM INPUT DATA

    SciTech Connect

    Webster, Clayton G; Jantsch, Peter A; Teckentrup, Aretha L; Gunzburger, Max D

    2013-01-01

    Stochastic Collocation (SC) methods for stochastic partial differential equations (SPDEs) suffer from the curse of dimensionality, whereby increases in the stochastic dimension cause an explosion of computational effort. To combat these challenges, multilevel approximation methods seek to decrease computational complexity by balancing spatial and stochastic discretization errors. As a form of variance reduction, multilevel techniques have been successfully applied to Monte Carlo (MC) methods, but may be extended to accelerate other methods for SPDEs in which the stochastic and spatial degrees of freedom are decoupled. This article presents general convergence and computational complexity analysis of a multilevel method for SPDEs, demonstrating its advantages with regard to standard, single-level approximation. The numerical results will highlight conditions under which multilevel sparse grid SC is preferable to the more traditional MC and SC approaches.

  18. A Bivariate Chebyshev Spectral Collocation Quasilinearization Method for Nonlinear Evolution Parabolic Equations

    PubMed Central

    Motsa, S. S.; Magagula, V. M.; Sibanda, P.

    2014-01-01

    This paper presents a new method for solving higher order nonlinear evolution partial differential equations (NPDEs). The method combines quasilinearisation, the Chebyshev spectral collocation method, and bivariate Lagrange interpolation. In this paper, we use the method to solve several nonlinear evolution equations, such as the modified KdV-Burgers equation, highly nonlinear modified KdV equation, Fisher's equation, Burgers-Fisher equation, Burgers-Huxley equation, and the Fitzhugh-Nagumo equation. The results are compared with known exact analytical solutions from literature to confirm accuracy, convergence, and effectiveness of the method. There is congruence between the numerical results and the exact solutions to a high order of accuracy. Tables were generated to present the order of accuracy of the method; convergence graphs to verify convergence of the method and error graphs are presented to show the excellent agreement between the results from this study and the known results from literature. PMID:25254252
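    The Chebyshev spectral collocation ingredient named above represents derivatives by a dense differentiation matrix on the Chebyshev-Gauss-Lobatto points. A minimal sketch of that standard construction (generic spectral collocation, not the paper's bivariate quasilinearization scheme):

```python
import numpy as np

# Standard Chebyshev differentiation matrix on the Gauss-Lobatto points
# (generic spectral collocation, not the paper's bivariate scheme).

def cheb(n):
    """Return (D, x): differentiation matrix and n+1 Chebyshev points."""
    if n == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(n + 1) / n)
    c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
    dX = x[:, None] - x[None, :]
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))
    D -= np.diag(D.sum(axis=1))       # negative-sum trick for the diagonal
    return D, x

# Differentiating a smooth function is spectrally accurate: with only 17
# points the derivative of exp(x) is correct to near round-off.
D, x = cheb(16)
err = np.max(np.abs(D @ np.exp(x) - np.exp(x)))
print(err < 1e-10)   # True
```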

  19. A bivariate Chebyshev spectral collocation quasilinearization method for nonlinear evolution parabolic equations.

    PubMed

    Motsa, S S; Magagula, V M; Sibanda, P

    2014-01-01

    This paper presents a new method for solving higher order nonlinear evolution partial differential equations (NPDEs). The method combines quasilinearisation, the Chebyshev spectral collocation method, and bivariate Lagrange interpolation. In this paper, we use the method to solve several nonlinear evolution equations, such as the modified KdV-Burgers equation, highly nonlinear modified KdV equation, Fisher's equation, Burgers-Fisher equation, Burgers-Huxley equation, and the Fitzhugh-Nagumo equation. The results are compared with known exact analytical solutions from literature to confirm accuracy, convergence, and effectiveness of the method. There is congruence between the numerical results and the exact solutions to a high order of accuracy. Tables were generated to present the order of accuracy of the method; convergence graphs to verify convergence of the method and error graphs are presented to show the excellent agreement between the results from this study and the known results from literature.

  20. Probabilistic collocation method for strongly nonlinear problems: 3. Transform by time

    NASA Astrophysics Data System (ADS)

    Liao, Qinzhuo; Zhang, Dongxiao

    2016-03-01

    The probabilistic collocation method (PCM) has drawn wide attention for stochastic analysis recently. Its results may become inaccurate in case of a strongly nonlinear relation between random parameters and model responses. To tackle this problem, we proposed a location-based transformed PCM (xTPCM) and a displacement-based transformed PCM (dTPCM) in previous parts of this series. Making use of the transform between response and space, the above two methods, however, have certain limitations. In this study, we introduce a time-based transformed PCM (tTPCM) employing the transform between response and time. We conduct numerical experiments to investigate its performance in uncertainty quantification. The results show that the tTPCM greatly improves the accuracy of the traditional PCM in a cost-effective manner and is more general and convenient than the xTPCM/dTPCM.

  2. Absorption of Solar Radiation by the Cloudy Atmosphere Interpretations of Collocated Aircraft Measurements

    NASA Technical Reports Server (NTRS)

    Valero, Francisco P. J.; Cess, Robert D.; Zhang, Minghua; Pope, Shelly K.; Bucholtz, Anthony; Bush, Brett; Vitko, John, Jr.

    1997-01-01

    As part of the Atmospheric Radiation Measurement (ARM) Enhanced Shortwave Experiment (ARESE), we have obtained and analyzed measurements made from collocated aircraft of the absorption of solar radiation within the atmospheric column between the two aircraft. The measurements were taken during October 1995 at the ARM site in Oklahoma. Relative to a theoretical radiative transfer model, we find no evidence for excess solar absorption in the clear atmosphere and significant evidence for its existence in the cloudy atmosphere. This excess cloud solar absorption appears to occur in both visible (0.224-0.68 microns) and near-infrared (0.68-3.30 microns) spectral regions, although not at 0.5 microns for the visible contribution, and it is shown to be true absorption rather than an artifact of sampling errors caused by measuring three-dimensional clouds.

  3. An explicit solution to the optimal LQG problem for flexible structures with collocated rate sensors

    NASA Technical Reports Server (NTRS)

    Balakrishnan, A. V.

    1993-01-01

    We present a class of compensators in explicit form (not requiring numerical computer calculations) for stabilizing flexible structures with collocated rate sensors. They are based on the explicit solution, valid for both Continuum and FEM Models, of the LQG problem for minimizing mean square rate. They are robust with respect to system stability (will not destabilize modes even with mismatch of parameters), can be instrumented in state space form suitable for digital controllers, and can be specified directly from the structure modes and mode 'signature' (displacement vectors at sensor locations). Some simulation results are presented for the NASA LaRC Phase-Zero Evolutionary Model - a modal truss model with 86 modes - showing damping ratios attainable as a function of compensator design parameters and complexity.
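    The robustness property claimed above is easy to see on a single structural mode: collocated rate feedback can only extract energy, whatever the (possibly mismatched) mode frequency. A toy simulation, with all parameters illustrative:

```python
# Toy illustration of collocated rate feedback on one undamped structural
# mode q'' + w^2 q = u with u = -k q'. The feedback adds damping for any
# frequency w, which is why mismatched parameters cannot destabilize the
# mode. All parameters are illustrative.

def simulate(w=2.0, k=0.5, q0=1.0, dt=1e-3, t_end=20.0):
    q, v = q0, 0.0
    for _ in range(int(t_end / dt)):
        a = -w * w * q - k * v        # rate feedback contributes -k*v
        v += a * dt                   # semi-implicit (symplectic) Euler
        q += v * dt
    return abs(q)

print(simulate(k=0.0) > 0.5)    # True: undamped mode keeps ringing
print(simulate(k=0.5) < 1e-2)   # True: rate feedback damps it out
```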

  4. Wavelet based free-form deformations for nonrigid registration

    NASA Astrophysics Data System (ADS)

    Sun, Wei; Niessen, Wiro J.; Klein, Stefan

    2014-03-01

    In nonrigid registration, deformations may take place on the coarse and fine scales. For the conventional B-splines based free-form deformation (FFD) registration, these coarse- and fine-scale deformations are all represented by basis functions of a single scale. Meanwhile, wavelets have been proposed as a signal representation suitable for multi-scale problems. Wavelet analysis leads to a unique decomposition of a signal into its coarse- and fine-scale components. Potentially, this could therefore be useful for image registration. In this work, we investigate whether a wavelet-based FFD model has advantages for nonrigid image registration. We use a B-splines based wavelet, as defined by Cai and Wang [1]. This wavelet is expressed as a linear combination of B-spline basis functions. Derived from the original B-spline function, this wavelet is smooth, differentiable, and compactly supported. The basis functions of this wavelet are orthogonal across scales in Sobolev space. This wavelet was previously used for registration in computer vision, in 2D optical flow problems [2], but it was not compared with the conventional B-spline FFD in medical image registration problems. An advantage of choosing this B-splines based wavelet model is that the space of allowable deformation is exactly equivalent to that of the traditional B-spline. The wavelet transformation is essentially a (linear) reparameterization of the B-spline transformation model. Experiments on 10 CT lung and 18 T1-weighted MRI brain datasets show that wavelet based registration leads to smoother deformation fields than traditional B-splines based registration, while achieving better accuracy.

  5. A GENERALIZED STOCHASTIC COLLOCATION APPROACH TO CONSTRAINED OPTIMIZATION FOR RANDOM DATA IDENTIFICATION PROBLEMS

    SciTech Connect

    Webster, Clayton G; Gunzburger, Max D

    2013-01-01

    We present a scalable, parallel mechanism for stochastic identification/control for problems constrained by partial differential equations with random input data. Several identification objectives will be discussed that either minimize the expectation of a tracking cost functional or minimize the difference of desired statistical quantities in the appropriate $L^p$ norm, and the distributed parameters/control can be either deterministic or stochastic. Given an objective we prove the existence of an optimal solution, establish the validity of the Lagrange multiplier rule and obtain a stochastic optimality system of equations. The modeling process may describe the solution in terms of high dimensional spaces, particularly in the case when the input data (coefficients, forcing terms, boundary conditions, geometry, etc.) are affected by a large amount of uncertainty. For higher accuracy, the computer simulation must increase the number of random variables (dimensions), and expend more effort approximating the quantity of interest in each individual dimension. Hence, we introduce a novel stochastic parameter identification algorithm that integrates an adjoint-based deterministic algorithm with the sparse grid stochastic collocation FEM approach. This allows for decoupled, moderately high dimensional, parameterized computations of the stochastic optimality system, where at each collocation point, deterministic analysis and techniques can be utilized. The advantage of our approach is that it allows for the optimal identification of statistical moments (mean value, variance, covariance, etc.) or even the whole probability distribution of the input random fields, given the probability distribution of some responses of the system (quantities of physical interest). Our rigorously derived error estimates for the fully discrete problems will be described and used to compare the efficiency of the method with several other techniques. Numerical examples illustrate the theoretical…

  6. Tropospheric refractivity and zenith path delays from least-squares collocation of meteorological and GNSS data

    NASA Astrophysics Data System (ADS)

    Wilgan, Karina; Hurter, Fabian; Geiger, Alain; Rohm, Witold; Bosy, Jarosław

    2016-08-01

    Precise positioning requires an accurate a priori troposphere model to enhance the solution quality. Several empirical models are available, but they may not properly characterize the state of troposphere, especially in severe weather conditions. Another possible solution is to use regional troposphere models based on real-time or near-real time measurements. In this study, we present the total refractivity and zenith total delay (ZTD) models based on a numerical weather prediction (NWP) model, Global Navigation Satellite System (GNSS) data and ground-based meteorological observations. We reconstruct the total refractivity profiles over the western part of Switzerland and the total refractivity profiles as well as ZTDs over Poland using the least-squares collocation software COMEDIE (Collocation of Meteorological Data for Interpretation and Estimation of Tropospheric Pathdelays) developed at ETH Zürich. In these two case studies, profiles of the total refractivity and ZTDs are calculated from different data sets. For Switzerland, the data set with the best agreement with the reference radiosonde (RS) measurements is the combination of ground-based meteorological observations and GNSS ZTDs. Introducing the horizontal gradients does not improve the vertical interpolation, and results in slightly larger biases and standard deviations. For Poland, the data set based on meteorological parameters from the NWP Weather Research and Forecasting (WRF) model and from a combination of the NWP model and GNSS ZTDs shows the best agreement with the reference RS data. In terms of ZTD, the combined NWP-GNSS observations and GNSS-only data set exhibit the best accuracy with an average bias (from all stations) of 3.7 mm and average standard deviations of 17.0 mm w.r.t. the reference GNSS stations.
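    Least-squares collocation, the interpolation engine behind COMEDIE, predicts the field at new points from noisy observations through covariance functions. A minimal 1-D sketch with an assumed Gaussian covariance (all parameters illustrative, not the actual COMEDIE model):

```python
import numpy as np

# Minimal 1-D least-squares collocation: estimate the signal at new
# points from noisy observations l via  s_hat = C_sx (C_xx + C_nn)^-1 l.
# The Gaussian covariance and all parameters are illustrative
# assumptions, not the COMEDIE model.

def lsc_predict(x_obs, l_obs, x_new, sigma_s=1.0, corr_len=1.0, sigma_n=0.1):
    def cov(a, b):
        d = a[:, None] - b[None, :]
        return sigma_s**2 * np.exp(-d**2 / (2 * corr_len**2))
    C_xx = cov(x_obs, x_obs) + sigma_n**2 * np.eye(len(x_obs))  # obs + noise
    C_sx = cov(x_new, x_obs)                                    # signal-obs
    return C_sx @ np.linalg.solve(C_xx, l_obs)

x_obs = np.array([0.0, 1.0, 2.0, 3.0])
l_obs = np.sin(x_obs)                 # a smooth field sampled at 4 points
pred = lsc_predict(x_obs, l_obs, np.array([1.5]))[0]
print(round(pred, 2))                 # ≈ 1.0, near the peak of sin
```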

  7. Estimating error cross-correlations in soil moisture data sets using extended collocation analysis

    NASA Astrophysics Data System (ADS)

    Gruber, A.; Su, C.-H.; Crow, W. T.; Zwieback, S.; Dorigo, W. A.; Wagner, W.

    2016-02-01

    Global soil moisture records are essential for studying the role of hydrologic processes within the larger Earth system. Various studies have shown the benefit of assimilating satellite-based soil moisture data into water balance models or merging multisource soil moisture retrievals into a unified data set. However, this requires an appropriate parameterization of the error structures of the underlying data sets. While triple collocation (TC) analysis has been widely recognized as a powerful tool for estimating random error variances of coarse-resolution soil moisture data sets, the estimation of error cross covariances remains an unresolved challenge. Here we propose a method, referred to as extended collocation (EC) analysis, for estimating error cross-correlations by generalizing the TC method to an arbitrary number of data sets and relaxing its assumption of zero error cross-correlation for certain data set combinations. A synthetic experiment shows that EC analysis is able to reliably recover true error cross-correlation levels. Applied to real soil moisture retrievals from Advanced Microwave Scanning Radiometer-EOS (AMSR-E) C-band and X-band observations together with advanced scatterometer (ASCAT) retrievals, modeled data from Global Land Data Assimilation System (GLDAS)-Noah and in situ measurements drawn from the International Soil Moisture Network, EC yields reasonable and strong nonzero error cross-correlations between the two AMSR-E products. Against expectation, nonzero error cross-correlations are also found between ASCAT and AMSR-E. We conclude that the proposed EC method represents an important step toward a fully parameterized error covariance matrix for coarse-resolution soil moisture data sets, which is vital for any rigorous data assimilation framework or data merging scheme.
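    Classical triple collocation, which the EC analysis above generalizes, recovers each product's random error variance from the pairwise covariances of three collocated data sets with mutually independent errors. A minimal sketch (unit sensitivities assumed):

```python
import numpy as np

# Classical triple collocation (the method EC generalizes): given three
# collocated measurements of the same truth with mutually independent,
# unit-sensitivity errors, each error variance follows from pairwise
# covariances, e.g.  var(e_x) = C_xx - C_xy * C_xz / C_yz.

def triple_collocation(x, y, z):
    C = np.cov(np.vstack([x, y, z]))
    ex = C[0, 0] - C[0, 1] * C[0, 2] / C[1, 2]
    ey = C[1, 1] - C[0, 1] * C[1, 2] / C[0, 2]
    ez = C[2, 2] - C[0, 2] * C[1, 2] / C[0, 1]
    return ex, ey, ez

# Synthetic check: three products with error std 0.3, 0.5, 0.7.
rng = np.random.default_rng(42)
truth = rng.standard_normal(100_000)
x = truth + 0.3 * rng.standard_normal(truth.size)
y = truth + 0.5 * rng.standard_normal(truth.size)
z = truth + 0.7 * rng.standard_normal(truth.size)
print([round(v, 2) for v in triple_collocation(x, y, z)])
# close to the true error variances [0.09, 0.25, 0.49]
```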

  8. A Study on the pH and conductivity of rural rainfall employing two collocated samplers

    NASA Astrophysics Data System (ADS)

    Sequeira, R.; Lai, C. C.; Peart, M. R.

    1999-02-01

    A set of about 100 daily rainfall samples was collected over a period of about one year during 1995-1996 using two collocated, automated samplers placed ~4 m apart at the rural Kadoorie Agricultural Research Centre (KARC) in Hong Kong. The pH and conductivity of the rainwater were measured immediately after sample collection. There is a strong correlation between the two free hydrogen ion concentrations (R² ≈ 0.92) and an even stronger one between the conductivities (R² ≈ 0.99). Statistically, there is no difference at the 0.05 level of significance between the means of either the two free hydrogen ion concentrations or the two conductivities. The conductivity results suggest that the total dissolved solids in the two samplers are probably quite similar in magnitude. No relationship is observed between the free acid content and daily rainfall volume in either sampler, a result similar to that obtained in previous studies involving bulk fall at the KARC and wet fall in urban Hong Kong as a whole. A weak hyperbolic relationship exists between the rainfall volume and the conductivity, and their log-log plot indicates only a somewhat weak inverse linear relationship, with correlation coefficients of -0.60 and -0.61 for the two samplers considered individually. Finally, the unbiased estimates of the product of rainfall volume and conductivity for the collocated samples suggest that the microscale variability (≳4 m) of the mean wet mass flux of total dissolved material in rural Hong Kong rainfall is negligible.

  9. Compression of echocardiographic scan line data using wavelet packet transform

    NASA Technical Reports Server (NTRS)

    Hang, X.; Greenberg, N. L.; Qin, J.; Thomas, J. D.

    2001-01-01

    An efficient compression strategy is indispensable for digital echocardiography. Previous work has suggested improved results utilizing wavelet transforms in the compression of 2D echocardiographic images. Set partitioning in hierarchical trees (SPIHT) was modified to compress echocardiographic scanline data based on the wavelet packet transform. A compression ratio of at least 94:1 resulted in preserved image quality.
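    The wavelet packet transform underlying the coder above differs from the plain DWT in that it recursively splits both the approximation and the detail bands. A self-contained Haar sketch (Haar is chosen for brevity; the paper's filter bank is not specified here):

```python
import numpy as np

# Sketch of a full wavelet packet decomposition using the Haar filter
# (chosen for brevity; the paper's filter bank may differ). Unlike the
# plain DWT, *both* halves of every band are split recursively.

def haar_packet(signal, levels):
    bands = [np.asarray(signal, dtype=float)]
    for _ in range(levels):
        new = []
        for b in bands:
            new.append((b[0::2] + b[1::2]) / np.sqrt(2))   # low-pass half
            new.append((b[0::2] - b[1::2]) / np.sqrt(2))   # high-pass half
        bands = new
    return bands                                           # 2**levels bands

x = np.arange(8, dtype=float)
bands = haar_packet(x, 3)
print(len(bands), [b.size for b in bands])    # 8 bands of one sample each
# The transform is orthonormal, so signal energy is preserved:
print(np.isclose(sum((b**2).sum() for b in bands), (x**2).sum()))  # True
```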

  10. Schrödinger like equation for wavelets

    NASA Astrophysics Data System (ADS)

    Zúñiga-Segundo, A.; Moya-Cessa, H. M.; Soto-Eguibar, F.

    2016-01-01

    An explicit phase space representation of the wave function is built from a wavelet transformation. The wavelet transformation allows us to understand the relationship between the s-ordered Wigner function (the Wigner function when s = 0) and the Torres-Vega-Frederick wave functions. This relationship is necessary to find a general solution of the Schrödinger equation in phase space.

  11. Wavelet based feature extraction and visualization in hyperspectral tissue characterization

    PubMed Central

    Denstedt, Martin; Bjorgan, Asgeir; Milanič, Matija; Randeberg, Lise Lyngsnes

    2014-01-01

    Hyperspectral images of tissue contain extensive and complex information relevant for clinical applications. In this work, wavelet decomposition is explored for feature extraction from such data. Wavelet methods are simple and computationally effective, and can be implemented in real-time. The aim of this study was to correlate results from wavelet decomposition in the spectral domain with physical parameters (tissue oxygenation, blood and melanin content). Wavelet decomposition was tested on Monte Carlo simulations, measurements of a tissue phantom and hyperspectral data from a human volunteer during an occlusion experiment. Reflectance spectra were decomposed, and the coefficients were correlated to tissue parameters. This approach was used to identify wavelet components that can be utilized to map levels of blood, melanin and oxygen saturation. The results show a significant correlation (p <0.02) between the chosen tissue parameters and the selected wavelet components. The tissue parameters could be mapped using a subset of the calculated components due to redundancy in spectral information. Vessel structures are well visualized. Wavelet analysis appears as a promising tool for extraction of spectral features in skin. Future studies will aim at developing quantitative mapping of optical properties based on wavelet decomposition. PMID:25574437

  12. Wavelet Analysis of Satellite Images for Coastal Watch

    NASA Technical Reports Server (NTRS)

    Liu, Antony K.; Peng, Chich Y.; Chang, Steve Y.-S.

    1997-01-01

    The two-dimensional wavelet transform is a very efficient bandpass filter, which can be used to separate various scales of processes and show their relative phase/location. In this paper, algorithms and techniques for automated detection and tracking of mesoscale features from satellite imagery employing wavelet analysis are developed. The wavelet transform has been applied to satellite images, such as those from synthetic aperture radar (SAR), advanced very-high-resolution radiometer (AVHRR), and coastal zone color scanner (CZCS) for feature extraction. The evolution of mesoscale features such as oil slicks, fronts, eddies, and ship wakes can be tracked by the wavelet analysis using satellite data from repeating paths. Several examples of the wavelet analysis applied to various satellite images demonstrate the feasibility of this technique for coastal monitoring.
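    One level of the 2-D wavelet transform used above splits an image into an approximation band and three orientation-selective detail (bandpass) bands. A Haar-based sketch (Haar is an illustrative choice, not necessarily the paper's wavelet):

```python
import numpy as np

# One level of a separable 2-D Haar wavelet transform: row filtering
# followed by column filtering yields an approximation (LL) band plus
# three orientation-selective detail bands -- the band-pass outputs used
# for feature detection. Haar is an illustrative choice of wavelet.

def haar2d(img):
    img = np.asarray(img, dtype=float)
    # filter along rows (horizontal direction)
    lo = (img[:, 0::2] + img[:, 1::2]) / np.sqrt(2)
    hi = (img[:, 0::2] - img[:, 1::2]) / np.sqrt(2)
    # filter along columns (vertical direction)
    LL = (lo[0::2] + lo[1::2]) / np.sqrt(2)
    LH = (lo[0::2] - lo[1::2]) / np.sqrt(2)   # responds to horizontal edges
    HL = (hi[0::2] + hi[1::2]) / np.sqrt(2)   # responds to vertical edges
    HH = (hi[0::2] - hi[1::2]) / np.sqrt(2)   # responds to diagonal detail
    return LL, LH, HL, HH

# A vertical step edge excites only the horizontally high-pass band.
img = np.zeros((8, 8))
img[:, 3:] = 1.0
LL, LH, HL, HH = haar2d(img)
print(np.round([np.abs(LH).max(), np.abs(HL).max(), np.abs(HH).max()], 3))
# [0. 1. 0.]
```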

  13. Combining Wavelet Transform and Hidden Markov Models for ECG Segmentation

    NASA Astrophysics Data System (ADS)

    Andreão, Rodrigo Varejão; Boudy, Jérôme

    2006-12-01

    This work aims at providing new insights on the electrocardiogram (ECG) segmentation problem using wavelets. The wavelet transform has been originally combined with a hidden Markov model (HMM) framework in order to carry out beat segmentation and classification. A group of five continuous wavelet functions commonly used in ECG analysis has been implemented and compared using the same framework. All experiments were performed on the QT database, which is composed of a representative number of ambulatory recordings of several individuals and is supplied with manual labels made by a physician. Our main contribution relies on the consistent set of experiments performed. Moreover, the results obtained in terms of beat segmentation and premature ventricular contraction (PVC) detection are comparable to other works reported in the literature, independently of the type of the wavelet. Finally, through an original concept of combining two wavelet functions in the segmentation stage, we achieve our best performances.

  14. Application of harmonic wavelet to filtering of rockbolt detecting signal

    NASA Astrophysics Data System (ADS)

    Zhao, Yucheng; Liu, Hongyan; Wang, Jiyan; Miao, Xiexing

    2008-11-01

    The harmonic wavelet has an explicit functional expression, flexible time-frequency division, a simple transform algorithm, and finer frequency refinement than other wavelets. In this paper, based on the frequency distribution of the nondestructive testing signal from a rockbolt support system, discrete harmonic wavelet transform theory is used to remove the low- and high-frequency components from the initial signal. A reconstruction algorithm for the harmonic wavelet is also put forward to recover the signal without the unwanted frequency bands. Finally, a numerical signal and a real signal that demonstrate the superiority of the harmonic wavelet in filtering are presented, and the transform results show that it makes the detection of the quality of the rockbolt support system more precise and stable.
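    Because harmonic wavelets occupy ideal frequency bands, the filtering described above reduces to octave-band selection in the FFT domain: transform, zero the unwanted coefficient bands, inverse-transform. A minimal sketch of that idea (signal and band edges are illustrative):

```python
import numpy as np

# Harmonic-wavelet-style band filtering: harmonic wavelets occupy ideal
# frequency bands, so removing low- and high-frequency content reduces
# to zeroing FFT coefficients outside the band of interest. The signal
# and band edges below are illustrative.

def harmonic_bandpass(signal, f_lo, f_hi, fs):
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spec[(freqs < f_lo) | (freqs > f_hi)] = 0.0   # keep only the band
    return np.fft.irfft(spec, len(signal))

fs = 1000.0
t = np.arange(1000) / fs
# a 50 Hz component between a 5 Hz drift and a 300 Hz disturbance
x = (np.sin(2 * np.pi * 50 * t) + 2 * np.sin(2 * np.pi * 5 * t)
     + 0.5 * np.sin(2 * np.pi * 300 * t))
y = harmonic_bandpass(x, 30.0, 80.0, fs)
err = np.max(np.abs(y - np.sin(2 * np.pi * 50 * t)))
print(err < 1e-9)   # True: the 50 Hz component is recovered exactly
```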

  15. A multiresolution wavelet representation in two or more dimensions

    NASA Technical Reports Server (NTRS)

    Bromley, B. C.

    1992-01-01

    In the multiresolution approximation, a signal is examined on a hierarchy of resolution scales by projection onto sets of smoothing functions. Wavelets are used to carry the detail information connecting adjacent sets in the resolution hierarchy. An algorithm has been implemented to perform a multiresolution decomposition in n ≥ 2 dimensions based on wavelets generated from products of 1-D wavelets and smoothing functions. The functions are chosen so that an n-D wavelet may be associated with a single resolution scale and orientation. The algorithm enables complete reconstruction of a high resolution signal from decomposition coefficients. The signal may be oversampled to accommodate non-orthogonal wavelet systems, or to provide approximate translational invariance in the decomposition arrays.

  16. A Multiscale Wavelet Solver with O(n) Complexity

    NASA Astrophysics Data System (ADS)

    Williams, John R.; Amaratunga, Kevin

    1995-11-01

    In this paper, we use the biorthogonal wavelets recently constructed by Dahlke and Weinreich to implement a highly efficient procedure for solving a certain class of one-dimensional problems, (∂^{2l}/∂x^{2l})u = f, l ∈ Z, l > 0. For these problems, the discrete biorthogonal wavelet transform allows us to set up a system of wavelet-Galerkin equations in which the scales are uncoupled, so that a true multiscale solution procedure may be formulated. We prove that the resulting stiffness matrix is in fact an almost perfectly diagonal matrix (the original aim of the construction was to achieve a block diagonal structure) and we show that this leads to an algorithm whose cost is O(n). We also present numerical results which demonstrate that the multiscale biorthogonal wavelet algorithm is superior to the more conventional single scale orthogonal wavelet approach both in terms of speed and in terms of convergence.

  17. An image fusion method based on biorthogonal wavelet

    NASA Astrophysics Data System (ADS)

    Li, Jianlin; Yu, Jiancheng; Sun, Shengli

    2008-03-01

    Image fusion processes and combines source images carrying complementary information in order to achieve a more objective and complete understanding of the same object. Recently, image fusion has been extensively applied in many fields such as medical imaging, microphotographic imaging, remote sensing, and computer vision as well as robotics. Various methods have been proposed in past years, such as pyramid decomposition and wavelet transform algorithms. Owing to its multi-resolution property, the wavelet transform has been applied successfully in image processing. Another advantage of the wavelet transform is that it can be realized much more easily in hardware, because its data format is very simple; it can therefore save considerable resources and, to some extent, solve the real-time problem of huge-data image fusion. However, because the orthogonal filters of the wavelet transform do not have linear phase, phase distortion leads to distortion of image edges. To make up for this shortcoming, the biorthogonal wavelet is introduced here. Thus, a novel image fusion scheme based on biorthogonal wavelet decomposition is presented in this paper. For the low-frequency and high-frequency wavelet decomposition coefficients, a local-area-energy-weighted-coefficient fusion rule is adopted, and different thresholds are set for the low-frequency and high-frequency bands. Based on the biorthogonal wavelet transform and the traditional pyramid decomposition algorithm, an MMW image and a visible image are fused in the experiment. Compared with traditional pyramid decomposition, the biorthogonal-wavelet-based fusion scheme is better able to retain and extract image information and to compensate for edge distortion. It therefore has wide application potential.
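    The fusion rule described above, keeping per location the wavelet coefficients of whichever source image carries more energy, can be sketched with a one-level Haar transform. This is a simplification: coefficient-wise magnitude comparison stands in for the paper's local-area-energy-weighted rule, and the biorthogonal filter bank is replaced by Haar.

```python
import numpy as np

# Simplified wavelet-domain fusion: decompose both images one Haar level,
# average the approximation bands, and keep the larger-magnitude detail
# coefficient at each location (a coefficient-wise stand-in for the
# paper's local-area-energy-weighted rule; Haar stands in for the
# biorthogonal filter bank).

SQ2 = np.sqrt(2.0)

def haar2d(img):
    """One level of an orthonormal 2-D Haar transform."""
    img = np.asarray(img, dtype=float)
    lo = (img[:, 0::2] + img[:, 1::2]) / SQ2
    hi = (img[:, 0::2] - img[:, 1::2]) / SQ2
    return ((lo[0::2] + lo[1::2]) / SQ2, (lo[0::2] - lo[1::2]) / SQ2,
            (hi[0::2] + hi[1::2]) / SQ2, (hi[0::2] - hi[1::2]) / SQ2)

def ihaar2d(LL, LH, HL, HH):
    """Exact inverse of haar2d."""
    m, n = LL.shape
    lo = np.empty((2 * m, n)); hi = np.empty((2 * m, n))
    lo[0::2], lo[1::2] = (LL + LH) / SQ2, (LL - LH) / SQ2
    hi[0::2], hi[1::2] = (HL + HH) / SQ2, (HL - HH) / SQ2
    img = np.empty((2 * m, 2 * n))
    img[:, 0::2], img[:, 1::2] = (lo + hi) / SQ2, (lo - hi) / SQ2
    return img

def fuse(img_a, img_b):
    """Average approximations; keep max-magnitude detail coefficients."""
    A, B = haar2d(img_a), haar2d(img_b)
    LL = (A[0] + B[0]) / 2.0
    details = [np.where(np.abs(a) >= np.abs(b), a, b)
               for a, b in zip(A[1:], B[1:])]
    return ihaar2d(LL, *details)

rng = np.random.default_rng(1)
x = rng.random((8, 8))
print(np.allclose(ihaar2d(*haar2d(x)), x))   # True: perfect reconstruction
print(np.allclose(fuse(x, x), x))            # True: self-fusion is identity
```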

  18. Image encryption in the wavelet domain

    NASA Astrophysics Data System (ADS)

    Bao, Long; Zhou, Yicong; Chen, C. L. Philip

    2013-05-01

    Most existing image encryption algorithms transform the original image into a noise-like image, an apparent visual sign of the presence of encrypted content. Motivated by data hiding technologies, this paper proposes a novel concept of image encryption: transforming an encrypted original image into another meaningful image that is the final encrypted result and is visually identical to a cover image, overcoming the problem mentioned above. Using this concept, we introduce a new image encryption algorithm based on wavelet decomposition. Simulations and security analysis show the excellent performance of the proposed concept and algorithm.
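
    The "meaningful ciphertext" idea can be sketched as hiding an already-encrypted payload in the detail coefficients of a wavelet transform of a cover signal, so the result stays visually close to the cover. The one-level Haar transform, the scaling factor, and the toy XOR "cipher" below are illustrative assumptions, not the paper's algorithm.

```python
import math

KEY = 0x5A  # toy XOR key (assumption, stands in for a real cipher)

def haar_forward(x):
    s = [(x[2*i] + x[2*i+1]) / math.sqrt(2) for i in range(len(x) // 2)]
    d = [(x[2*i] - x[2*i+1]) / math.sqrt(2) for i in range(len(x) // 2)]
    return s, d

def haar_inverse(s, d):
    out = []
    for si, di in zip(s, d):
        out += [(si + di) / math.sqrt(2), (si - di) / math.sqrt(2)]
    return out

def embed(cover, payload, alpha=0.01):
    """Replace the cover's detail band with the scaled encrypted payload."""
    s, _ = haar_forward(cover)
    enc = [p ^ KEY for p in payload]           # toy "encryption" step
    d = [alpha * (e - 128) for e in enc]       # small, so stego ~ cover
    return haar_inverse(s, d)

def extract(stego, alpha=0.01):
    _, d = haar_forward(stego)
    enc = [round(di / alpha) + 128 for di in d]
    return [e ^ KEY for e in enc]
```

    Because the payload enters only through small detail coefficients, the stego signal deviates from the cover by less than `alpha * 128 / sqrt(2)` per sample while the payload is recovered exactly.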

  19. Lung tissue classification using wavelet frames.

    PubMed

    Depeursinge, Adrien; Sage, Daniel; Hidki, Asmâa; Platon, Alexandra; Poletti, Pierre-Alexandre; Unser, Michael; Müller, Henning

    2007-01-01

    We describe a texture classification system that identifies lung tissue patterns from high-resolution computed tomography (HRCT) images of patients affected with interstitial lung diseases (ILD). This pattern recognition task is part of an image-based diagnostic aid system for ILDs. Five lung tissue patterns (healthy, emphysema, ground glass, fibrosis and micronodules) selected from a multimedia database are classified using the overcomplete discrete wavelet frame decomposition combined with grey-level histogram features. The overall multiclass accuracy reaches 92.5% of correct matches when combining the two types of features, which are found to be complementary. PMID:18003452

  20. Spike detection using the continuous wavelet transform.

    PubMed

    Nenadic, Zoran; Burdick, Joel W

    2005-01-01

    This paper combines wavelet transforms with basic detection theory to develop a new unsupervised method for robustly detecting and localizing spikes in noisy neural recordings. The method does not require the construction of templates, or the supervised setting of thresholds. We present extensive Monte Carlo simulations, based on actual extracellular recordings, to show that this technique surpasses other commonly used methods in a wide variety of recording conditions. We further demonstrate that the false positives of our method resemble actual spikes more closely than those of other techniques such as amplitude thresholding. Moreover, the simplicity of the method allows for nearly real-time execution. PMID:15651566
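
    The detection pipeline above can be sketched by correlating the signal with Ricker ("Mexican hat") wavelets at a few scales and flagging samples whose coefficient magnitude clears a noise-scaled threshold. The median-based threshold rule is a simple stand-in for the paper's detection-theoretic criterion.

```python
import math

def ricker(t, a):
    """Ricker wavelet at time t, scale a (unnormalized)."""
    u = t / a
    return (1 - u * u) * math.exp(-u * u / 2)

def cwt_row(x, a, half=10):
    """One row of the CWT: correlate x with the scale-a wavelet."""
    out = []
    for n in range(len(x)):
        acc = 0.0
        for k in range(-half, half + 1):
            if 0 <= n + k < len(x):
                acc += x[n + k] * ricker(k, a)
        out.append(acc / math.sqrt(a))
    return out

def detect_spikes(x, scales=(1.0, 2.0, 3.0), factor=4.0):
    best = [0.0] * len(x)
    for a in scales:                 # max coefficient magnitude over scales
        best = [max(b, abs(r)) for b, r in zip(best, cwt_row(x, a))]
    noise = sorted(best)[len(best) // 2]     # crude noise-floor estimate
    thr = factor * noise
    return [i for i, v in enumerate(best) if v > thr]
```

    On a low-amplitude background with a single injected spike, only samples near the spike exceed the threshold.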

  2. Simulation-based design using wavelets

    NASA Astrophysics Data System (ADS)

    Williams, John R.; Amaratunga, Kevin S.

    1994-03-01

    The design of large-scale systems requires methods of analysis which have the flexibility to provide a fast interactive simulation capability, while retaining the ability to provide high-order solution accuracy when required. This suggests that a hierarchical solution procedure is required that allows us to trade off accuracy for solution speed in a rational manner. In this paper, we examine the properties of the biorthogonal wavelets recently constructed by Dahlke and Weinreich and show how they can be used to implement a highly efficient multiscale solution procedure for solving a certain class of one-dimensional problems.

  3. Generalized Morse wavelets for the phase evaluation of projected fringe pattern

    NASA Astrophysics Data System (ADS)

    Kocahan Yılmaz, Özlem; Coşkun, Emre; Özder, Serhat

    2014-10-01

    Generalized Morse wavelets are proposed to evaluate the phase information from projected fringe pattern with the spatial carrier frequency in the x direction. The height profile of the object is determined through the phase change distribution by using the phase of the continuous wavelet transform. The choice of an appropriate mother wavelet is an important step for the calculation of phase. As a mother wavelet, zero order generalized Morse wavelet is chosen because of the flexible spatial and frequency localization property, and it is exactly analytic. Experimental results for the Morlet and Paul wavelets are compared with the results of generalized Morse wavelets analysis.
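
    For reference, the frequency-domain form of the lowest-order (k = 0) generalized Morse wavelet, sketched here from the standard definitions in the literature (Lilly and Olhede) rather than from this paper, is:

```latex
\Psi_{\beta,\gamma}(\omega) = U(\omega)\, a_{\beta,\gamma}\, \omega^{\beta} e^{-\omega^{\gamma}},
\qquad
a_{\beta,\gamma} = 2 \left( \frac{e\gamma}{\beta} \right)^{\beta/\gamma},
\qquad
\omega_{\beta,\gamma} = \left( \frac{\beta}{\gamma} \right)^{1/\gamma}
```

    Here U(ω) is the unit step, which makes the wavelet exactly analytic, a_{β,γ} is a normalizing constant, and ω_{β,γ} is the peak frequency; the parameters β and γ give the flexible time-frequency localization mentioned in the abstract.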

  4. Variability analysis of device-level photonics using stochastic collocation (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Xing, Yufei; Spina, Domenico; Li, Ang; Dhaene, Tom; Bogaerts, Wim

    2016-05-01

    Integrated photonics, and especially silicon photonics, has rapidly expanded its catalog of building blocks and functionalities. It is now maturing fast towards circuit-level integration to serve more complex applications in industry. However, performance variability due to the fabrication process and operational conditions can limit the yield of large-scale circuits. It is essential to assess this impact at the design level with an efficient variability analysis: how variations in geometrical, electrical and optical parameters propagate into component performance. In particular, when implementing wavelength-selective filters, many primary functional parameters are affected by fabrication-induced variability. The key functional parameters that we assess in this paper are the waveguide propagation constant (the effective index, essential to define the exact length of a delay line) and the coupling coefficients in coupling structures (necessary to set the power distribution over different delay lines). The Monte Carlo (MC) method is the standard method for variability analysis, thanks to its accuracy and easy implementation. However, due to its slow convergence, it requires a large set of samples (simulations or measurements), making it computationally or experimentally expensive. More efficient methods to assess such variability can be used, such as generalized polynomial chaos (gPC) expansion or stochastic collocation. In this paper, we demonstrate stochastic collocation (SC) as an efficient alternative to MC or gPC to characterize photonic devices under the effect of uncertainty. The idea of SC is to interpolate stochastic solutions in the random space by interpolation polynomials. After sampling the deterministic problem at a pre-defined set of nodes in random space, the interpolation is constructed. SC drastically reduces computation and measurement cost. Also, like the MC method, sampling-based SC is easy to implement. Its computation cost can be
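
    The core of stochastic collocation for a single uncertain parameter can be sketched as sampling the deterministic model at quadrature nodes and reading statistics off the quadrature rule. The 3-point Gauss-Legendre rule and the toy smooth "model" below are illustrative assumptions, not the paper's setup.

```python
import math

# 3-point Gauss-Legendre nodes and weights on [-1, 1]
NODES = [-math.sqrt(3.0 / 5.0), 0.0, math.sqrt(3.0 / 5.0)]
WEIGHTS = [5.0 / 9.0, 8.0 / 9.0, 5.0 / 9.0]

def collocation_mean(model):
    """Mean of model(x) for x ~ Uniform(-1, 1) via collocation.

    Each call to model() is one deterministic solve; the density 1/2
    of Uniform(-1, 1) is folded into the quadrature weights.
    """
    return sum(w * model(x) for x, w in zip(NODES, WEIGHTS)) / 2.0

# toy "deterministic solver": a smooth response to the random input
estimate = collocation_mean(math.exp)
exact = (math.e - 1.0 / math.e) / 2.0   # analytic mean of exp(U(-1,1))
```

    With only three model evaluations the estimate is within about 1e-4 of the exact mean, where plain Monte Carlo would need thousands of samples for comparable accuracy; this fast convergence for smooth responses is the advantage claimed for SC over MC.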

  5. Facial Feature Extraction Based on Wavelet Transform

    NASA Astrophysics Data System (ADS)

    Hung, Nguyen Viet

    Facial feature extraction is one of the most important processes in face recognition, expression recognition and face detection. The aims of facial feature extraction are eye location, the shape of the eyes, eyebrows, mouth, head boundary, face boundary, chin and so on. The purpose of this paper is to develop an automatic facial feature extraction system which is able to identify the eye location, the detailed shape of the eyes and mouth, the chin and the inner boundary from facial images. This system not only extracts the location information of the eyes, but also estimates four important points in each eye, which helps us to rebuild the eye shape. To model the mouth shape, mouth extraction gives us the mouth location, the two mouth corners, and the top and bottom lips. From the inner boundary and the chin, we obtain the face boundary. Based on wavelet features, we can reduce the noise in the input image and detect edge information. In order to extract the eyes, mouth and inner boundary, we combine wavelet features and facial characteristics to design algorithms for finding the midpoint, eye coordinates, four important eye points, mouth coordinates, four important mouth points, the chin coordinate and then the inner boundary. The developed system is tested on Yale Faces and Pedagogy students' faces.

  6. Wavelet Denoising of Mobile Radiation Data

    SciTech Connect

    Campbell, D B

    2008-10-31

    The FY08 phase of this project investigated the merits of video fusion as a method for mitigating the false alarms encountered by vehicle borne detection systems in an effort to realize performance gains associated with wavelet denoising. The fusion strategy exploited the significant correlations which exist between data obtained from radiation detectors and video systems with coincident fields of view. The additional information provided by optical systems can greatly increase the capabilities of these detection systems by reducing the burden of false alarms and through the generation of actionable information. The investigation into the use of wavelet analysis techniques as a means of filtering the gross-counts signal obtained from moving radiation detectors showed promise for vehicle borne systems. However, the applicability of these techniques to man-portable systems is limited due to minimal gains in performance over the rapid feedback available to system operators under walking conditions. Furthermore, the fusion of video holds significant promise for systems operating from vehicles or systems organized into stationary arrays; however, the added complexity and hardware required by this technique renders it infeasible for man-portable systems.

  7. Continuous wavelet transform in quantum field theory

    NASA Astrophysics Data System (ADS)

    Altaisky, M. V.; Kaputkina, N. E.

    2013-07-01

    We describe the application of the continuous wavelet transform to calculation of the Green functions in quantum field theory: scalar ϕ4 theory, quantum electrodynamics, and quantum chromodynamics. The method of continuous wavelet transform in quantum field theory, presented by Altaisky [Phys. Rev. D 81, 125003 (2010)] for the scalar ϕ4 theory, consists in substitution of the local fields ϕ(x) by those dependent on both the position x and the resolution a. The substitution of the action S[ϕ(x)] by the action S[ϕa(x)] makes the local theory into a nonlocal one and implies the causality conditions related to the scale a, the region causality [J. D. Christensen and L. Crane, J. Math. Phys. (N.Y.) 46, 122502 (2005)]. These conditions make the Green functions G(x1,a1,…,xn,an)=⟨ϕa1(x1)…ϕan(xn)⟩ finite for any given set of regions by means of an effective cutoff scale A=min⁡(a1,…,an).

  8. Chebyshev collocation spectral lattice Boltzmann method for simulation of low-speed flows.

    PubMed

    Hejranfar, Kazem; Hajihassanpour, Mahya

    2015-01-01

    In this study, the Chebyshev collocation spectral lattice Boltzmann method (CCSLBM) is developed and assessed for the computation of low-speed flows. Both steady and unsteady flows are considered here. The discrete Boltzmann equation with the Bhatnagar-Gross-Krook approximation based on the pressure distribution function is considered and the space discretization is performed by the Chebyshev collocation spectral method to achieve a highly accurate flow solver. To provide accurate unsteady solutions, the time integration of the temporal term in the lattice Boltzmann equation is made by the fourth-order Runge-Kutta scheme. To achieve numerical stability and accuracy, physical boundary conditions based on the spectral solution of the governing equations implemented on the boundaries are used. An iterative procedure is applied to provide consistent initial conditions for the distribution function and the pressure field for the simulation of unsteady flows. The main advantage of using the CCSLBM over other high-order accurate lattice Boltzmann method (LBM)-based flow solvers is the decay of the error at exponential rather than at polynomial rates. Note also that the CCSLBM applied does not need any numerical dissipation or filtering for the solution to be stable, leading to highly accurate solutions. Three two-dimensional (2D) test cases are simulated herein that are a regularized cavity, the Taylor vortex problem, and doubly periodic shear layers. The results obtained for these test cases are thoroughly compared with the analytical and available numerical results and show excellent agreement. The computational efficiency of the proposed solution methodology based on the CCSLBM is also examined by comparison with those of the standard streaming-collision (classical) LBM and two finite-difference LBM solvers. The study indicates that the CCSLBM provides more accurate and efficient solutions than these LBM solvers in terms of CPU and memory usage and an exponential
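
    The Chebyshev collocation machinery underlying the CCSLBM can be sketched by building the standard first-derivative matrix on Chebyshev-Gauss-Lobatto points; collocation differentiation is exact (to rounding) for polynomials up to the grid order, which is the root of the exponential convergence claimed above. This is a generic sketch of Chebyshev collocation, not the authors' solver.

```python
import math

def cheb_diff(n):
    """(n+1)x(n+1) differentiation matrix on x_j = cos(pi j / n)."""
    x = [math.cos(math.pi * j / n) for j in range(n + 1)]
    c = [2.0 if j in (0, n) else 1.0 for j in range(n + 1)]
    d = [[0.0] * (n + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        for j in range(n + 1):
            if i != j:
                d[i][j] = (c[i] / c[j]) * (-1) ** (i + j) / (x[i] - x[j])
    for i in range(n + 1):
        # diagonal via negative row sum (derivative of a constant is 0)
        d[i][i] = -sum(d[i][j] for j in range(n + 1) if j != i)
    return d, x

def apply_matrix(d, f):
    return [sum(dij * fj for dij, fj in zip(row, f)) for row in d]
```

    Differentiating f(x) = x^3 on a 9-point grid reproduces 3x^2 to machine precision.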

  9. Chebyshev collocation spectral lattice Boltzmann method for simulation of low-speed flows

    NASA Astrophysics Data System (ADS)

    Hejranfar, Kazem; Hajihassanpour, Mahya

    2015-01-01

    In this study, the Chebyshev collocation spectral lattice Boltzmann method (CCSLBM) is developed and assessed for the computation of low-speed flows. Both steady and unsteady flows are considered here. The discrete Boltzmann equation with the Bhatnagar-Gross-Krook approximation based on the pressure distribution function is considered and the space discretization is performed by the Chebyshev collocation spectral method to achieve a highly accurate flow solver. To provide accurate unsteady solutions, the time integration of the temporal term in the lattice Boltzmann equation is made by the fourth-order Runge-Kutta scheme. To achieve numerical stability and accuracy, physical boundary conditions based on the spectral solution of the governing equations implemented on the boundaries are used. An iterative procedure is applied to provide consistent initial conditions for the distribution function and the pressure field for the simulation of unsteady flows. The main advantage of using the CCSLBM over other high-order accurate lattice Boltzmann method (LBM)-based flow solvers is the decay of the error at exponential rather than at polynomial rates. Note also that the CCSLBM applied does not need any numerical dissipation or filtering for the solution to be stable, leading to highly accurate solutions. Three two-dimensional (2D) test cases are simulated herein that are a regularized cavity, the Taylor vortex problem, and doubly periodic shear layers. The results obtained for these test cases are thoroughly compared with the analytical and available numerical results and show excellent agreement. The computational efficiency of the proposed solution methodology based on the CCSLBM is also examined by comparison with those of the standard streaming-collision (classical) LBM and two finite-difference LBM solvers. The study indicates that the CCSLBM provides more accurate and efficient solutions than these LBM solvers in terms of CPU and memory usage and an exponential

  10. Upper atmospheric processes as measured by collocated Lidar, infrasound, radiometer and airglow measurements

    NASA Astrophysics Data System (ADS)

    Le Pichon, A.; Blanc, E.; Assink, J. D.; Ceranna, L.; Pilger, C.; Ross, O.; Keckhut, P.; Hauchecorne, A.; Schmidt, C.; Bittner, M.; Wuest, S.; Rüfenacht, R.; Kaempfer, N.; Smets, P.

    2013-12-01

    To better initialize weather forecasting systems, a key challenge is to understand stratosphere-resolving climate models. The ARISE project (http://arise-project.eu/) aims to design a novel infrastructure integrating different atmospheric observation networks to accurately recover the vertical structure of wind and temperature from the ground to the mesosphere. This network includes Lidar and mesospheric airglow observations, complemented by continuous infrasound measurements. Together with additional ground-based wind radar systems, such complementary techniques help to better describe the interaction between atmospheric layers from the ground to the mesosphere and the influence of large-scale waves on atmospheric dynamics. Systematic comparisons between these observations and the ECMWF upper wind and temperature models (http://www.ecmwf.int/) have been performed at the OHP site (Haute-Provence Observatory, France). The main results are outlined below. - Systematic comparisons between Lidar soundings (NDACC, http://ndacc-lidar.org/) and ECMWF highlight differences increasing with altitude; below 50 km altitude, temperature differences are as large as 20 K. On average, the wind appears to be overestimated by ~5 m/s in the stratosphere and underestimated by ~10 m/s at the mesopause. - Comparisons with collocated infrasound measurements provide additional useful integrated information about the structure of the stratospheric waveguide. Below 0.5 Hz, most infrasound signals originate from ocean swells in the North Atlantic region. As expected, since most long-range propagating signals travel in the stratospheric waveguide, improved detection capability occurs downwind. Deviations from this trend are either related to short time-scale variability of the atmosphere (e.g., large-scale planetary waves, stratospheric warming effects), or can be explained by changes in the nature of the source.
We investigate possible correlation between unexpected propagation paths and

  11. An Analysis of Peak Wind Speed Data from Collocated Mechanical and Ultrasonic Anemometers

    NASA Technical Reports Server (NTRS)

    Short, David A.; Wells, Leonard; Merceret, Francis J.; Roeder, William P.

    2007-01-01

    This study compared peak wind speeds reported by mechanical and ultrasonic anemometers at Cape Canaveral Air Force Station and Kennedy Space Center (CCAFS/KSC) on the east central coast of Florida and Vandenberg Air Force Base (VAFB) on the central coast of California. Launch Weather Officers, forecasters, and Range Safety analysts need to understand the performance of wind sensors at CCAFS/KSC and VAFB for weather warnings, watches, advisories, special ground processing operations, launch pad exposure forecasts, user Launch Commit Criteria (LCC) forecasts and evaluations, and toxic dispersion support. The legacy CCAFS/KSC and VAFB weather tower wind instruments are being changed from propeller-and-vane (CCAFS/KSC) and cup-and-vane (VAFB) sensors to ultrasonic sensors under the Range Standardization and Automation (RSA) program. Mechanical and ultrasonic wind measuring techniques are known to cause differences in the statistics of peak wind speed as shown in previous studies. The 45th Weather Squadron (45 WS) and the 30th Weather Squadron (30 WS) requested the Applied Meteorology Unit (AMU) to compare data between the RSA ultrasonic and legacy mechanical sensors to determine if there are significant differences. Note that the instruments were sited outdoors under naturally varying conditions and that this comparison was not designed to verify either technology. Approximately 3 weeks of mechanical and ultrasonic wind data from each range from May and June 2005 were used in this study. The CCAFS/KSC data spanned the full diurnal cycle, while the VAFB data were confined to 1000-1600 local time. The sample of 1-minute data from numerous levels on five different towers on each range totaled more than 500,000 minutes of data (482,979 minutes of data after quality control). The ten towers were instrumented at several levels, ranging from 12 ft to 492 ft above ground level. The ultrasonic sensors were collocated at the same vertical levels as the mechanical sensors and

  13. A Comprehensive Noise Robust Speech Parameterization Algorithm Using Wavelet Packet Decomposition-Based Denoising and Speech Feature Representation Techniques

    NASA Astrophysics Data System (ADS)

    Kotnik, Bojan; Kačič, Zdravko

    2007-12-01

    This paper concerns the problem of automatic speech recognition in noise-intense and adverse environments. The main goal of the proposed work is the definition, implementation, and evaluation of a novel noise robust speech signal parameterization algorithm. The proposed procedure is based on time-frequency speech signal representation using wavelet packet decomposition. A new modified soft thresholding algorithm based on time-frequency adaptive threshold determination was developed to efficiently reduce the level of additive noise in the input noisy speech signal. A two-stage Gaussian mixture model (GMM)-based classifier was developed to perform speech/nonspeech as well as voiced/unvoiced classification. The adaptive topology of the wavelet packet decomposition tree based on voiced/unvoiced detection was introduced to separately analyze voiced and unvoiced segments of the speech signal. The main feature vector consists of a combination of log-root compressed wavelet packet parameters, and autoregressive parameters. The final output feature vector is produced using a two-staged feature vector postprocessing procedure. In the experimental framework, the noisy speech databases Aurora 2 and Aurora 3 were applied together with corresponding standardized acoustical model training/testing procedures. The automatic speech recognition performance achieved using the proposed noise robust speech parameterization procedure was compared to the standardized mel-frequency cepstral coefficient (MFCC) feature extraction procedures ETSI ES 201 108 and ETSI ES 202 050.
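
    The band-adaptive thresholding idea can be sketched with a two-level wavelet packet split (both the approximation and the detail branch are split again) and a separate soft threshold per terminal band, with the pure-approximation band left untouched. The Haar filters and the per-band median threshold are stand-ins for the paper's filters and its time-frequency adaptive rule.

```python
import math

def split(x):
    s = [(x[2*i] + x[2*i+1]) / math.sqrt(2) for i in range(len(x) // 2)]
    d = [(x[2*i] - x[2*i+1]) / math.sqrt(2) for i in range(len(x) // 2)]
    return s, d

def merge(s, d):
    out = []
    for si, di in zip(s, d):
        out += [(si + di) / math.sqrt(2), (si - di) / math.sqrt(2)]
    return out

def soft(v, t):
    return math.copysign(max(abs(v) - t, 0.0), v)

def packet_denoise(x, k=3.0):
    s, d = split(x)
    ss, sd = split(s)     # packet split: approximation branch too
    ds, dd = split(d)
    cleaned = [ss]        # keep the pure-approximation band as is
    for band in (sd, ds, dd):
        med = sorted(abs(v) for v in band)[len(band) // 2]
        cleaned.append([soft(v, k * med) for v in band])
    return merge(merge(cleaned[0], cleaned[1]),
                 merge(cleaned[2], cleaned[3]))
```

    With k = 0 the thresholds vanish and the transform round-trips exactly; a constant signal passes through unchanged for any k, since its energy lives entirely in the approximation band.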

  14. Wavelet Methods Developed to Detect and Control Compressor Stall

    NASA Technical Reports Server (NTRS)

    Le, Dzu K.

    1997-01-01

    A "wavelet" is, by definition, an amplitude-varying, short waveform with a finite bandwidth (e.g., that shown in the first two graphs). Naturally, wavelets are more effective than the sinusoids of Fourier analysis for matching and reconstructing signal features. In wavelet transformation and inversion, all transient or periodic data features (as in compressor-inlet pressures) can be detected and reconstructed by stretching or contracting a single wavelet to generate the matching building blocks. Consequently, wavelet analysis provides many flexible and effective ways to reduce noise and extract signals which surpass classical techniques - making it very attractive for data analysis, modeling, and active control of stall and surge in high-speed turbojet compressors. Therefore, fast and practical wavelet methods are being developed in-house at the NASA Lewis Research Center to assist in these tasks. This includes establishing user-friendly links between some fundamental wavelet analysis ideas and the classical theories (or practices) of system identification, data analysis, and processing.

  15. On-Line Loss of Control Detection Using Wavelets

    NASA Technical Reports Server (NTRS)

    Brenner, Martin J. (Technical Monitor); Thompson, Peter M.; Klyde, David H.; Bachelder, Edward N.; Rosenthal, Theodore J.

    2005-01-01

    Wavelet transforms are used for on-line detection of aircraft loss of control. Wavelet transforms are compared with Fourier transform methods and shown to more rapidly detect changes in the vehicle dynamics. This faster response is due to a time window that decreases in length as the frequency increases. New wavelets are defined that further decrease the detection time by skewing the shape of the envelope. The wavelets are used for power spectrum and transfer function estimation. Smoothing is used to tradeoff the variance of the estimate with detection time. Wavelets are also used as front-end to the eigensystem reconstruction algorithm. Stability metrics are estimated from the frequency response and models, and it is these metrics that are used for loss of control detection. A Matlab toolbox was developed for post-processing simulation and flight data using the wavelet analysis methods. A subset of these methods was implemented in real time and named the Loss of Control Analysis Tool Set or LOCATS. A manual control experiment was conducted using a hardware-in-the-loop simulator for a large transport aircraft, in which the real time performance of LOCATS was demonstrated. The next step is to use these wavelet analysis tools for flight test support.

  16. Low-complexity wavelet filter design for image compression

    NASA Technical Reports Server (NTRS)

    Majani, E.

    1994-01-01

    Image compression algorithms based on the wavelet transform are an increasingly attractive and flexible alternative to other algorithms based on block orthogonal transforms. While the design of orthogonal wavelet filters has been studied in significant depth, the design of nonorthogonal wavelet filters, such as linear-phase (LP) filters, has not yet reached that point. Of particular interest are wavelet transforms with low complexity at the encoder. In this article, we present known and new parameterizations of the two families of LP perfect reconstruction (PR) filters. The first family is that of all PR LP filters with finite impulse response (FIR), with equal complexity at the encoder and decoder. The second family is one of LP PR filters, which are FIR at the encoder and infinite impulse response (IIR) at the decoder, i.e., with controllable encoder complexity. These parameterizations are used to optimize the subband/wavelet transform coding gain, as defined for nonorthogonal wavelet transforms. Optimal LP wavelet filters are given for low levels of encoder complexity, as well as their corresponding integer approximations, to allow for applications limited to using integer arithmetic. These optimal LP filters yield larger coding gains than orthogonal filters with an equivalent complexity. The parameterizations described in this article can be used for the optimization of any other appropriate objective function.
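
    A concrete example of a low-complexity linear-phase PR wavelet with an integer approximation is the LeGall 5/3 transform in lifting form (the reversible filter of JPEG 2000, used here as an illustration of the filter class the article discusses, not as one of its optimized designs). Even length is assumed and edge handling is simplified by clamping indices.

```python
def fwd53(x):
    """Integer 5/3 lifting: predict odds from evens, then update evens."""
    even, odd = x[0::2], x[1::2]
    n = len(odd)
    d = [odd[i] - (even[i] + even[min(i + 1, n - 1)]) // 2
         for i in range(n)]
    s = [even[i] + (d[max(i - 1, 0)] + d[i] + 2) // 4
         for i in range(n)]
    return s, d

def inv53(s, d):
    """Exact inverse: undo the lifting steps in reverse order."""
    n = len(d)
    even = [s[i] - (d[max(i - 1, 0)] + d[i] + 2) // 4
            for i in range(n)]
    odd = [d[i] + (even[i] + even[min(i + 1, n - 1)]) // 2
           for i in range(n)]
    out = []
    for e, o in zip(even, odd):
        out += [e, o]
    return out
```

    Because each lifting step is undone exactly in integer arithmetic, perfect reconstruction holds bit-for-bit, which is what makes this family attractive for low-complexity encoders.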

  17. Modified foreground segmentation for object tracking using wavelets in a tensor framework

    NASA Astrophysics Data System (ADS)

    Kapoor, Rajiv; Rohilla, Rajesh

    2015-09-01

    Subspace-based techniques have become important in behaviour analysis, appearance modelling and tracking. Various vector and tensor subspace learning techniques are already known that perform their operations in an offline as well as an online manner. In this work, we have improved upon tensor-based subspace learning by using fourth-order decomposition and wavelets, so as to obtain an advanced adaptive algorithm for robust and efficient background modelling and tracking in colour video sequences. The proposed algorithm, known as the fourth-order incremental tensor subspace learning algorithm, uses spatio-colour-temporal information through adaptive online updates of the means and the eigenbasis for each unfolding matrix, using tensor decomposition applied to fourth-order image tensors. The proposed method employs the wavelet transform to an optimum decomposition level in order to reduce the computational complexity, by working on the approximate counterpart of the original scenes, and also reduces noise in the given scene. Our tracking method is an unscented particle filter that utilises appearance knowledge and estimates the new state of the intended object. Various experiments demonstrate the promising nature of the proposed method, which performs better than existing methods.

  18. Wavelet-Based Artifact Identification and Separation Technique for EEG Signals during Galvanic Vestibular Stimulation

    PubMed Central

    Adib, Mani; Cretu, Edmond

    2013-01-01

    We present a new method for removing artifacts in electroencephalography (EEG) records during Galvanic Vestibular Stimulation (GVS). The main challenge in exploiting GVS is to understand how the stimulus acts as an input to the brain. We used EEG to monitor the brain and elicit the GVS reflexes. However, the GVS current distribution throughout the scalp generates an artifact on EEG signals. We need to eliminate this artifact to be able to analyze the EEG signals during GVS. We propose a novel method to estimate the contribution of the GVS current to the EEG signal at each electrode by combining time-series regression methods with wavelet decomposition methods. We use the wavelet transform to project the recorded EEG signal into various frequency bands and then estimate the GVS current distribution in each frequency band. The proposed method was optimized using simulated signals, and its performance was compared to well-accepted artifact removal methods such as ICA-based methods and adaptive filters. The results show that the proposed method performs better in removing GVS artifacts than the others: a higher signal-to-artifact ratio of −1.625 dB was achieved, outperforming ICA-based methods, regression methods, and adaptive filters. PMID:23956786
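
    The combination of wavelet decomposition and regression can be sketched as follows: split both the recording and the stimulus reference into bands, fit a scalar artifact gain per band by least squares, and subtract the fitted contribution. The single-level Haar split and the scalar gain per band are simplifying assumptions relative to the paper's multi-band, time-series regression.

```python
import math

def split(x):
    s = [(x[2*i] + x[2*i+1]) / math.sqrt(2) for i in range(len(x) // 2)]
    d = [(x[2*i] - x[2*i+1]) / math.sqrt(2) for i in range(len(x) // 2)]
    return s, d

def merge(s, d):
    out = []
    for si, di in zip(s, d):
        out += [(si + di) / math.sqrt(2), (si - di) / math.sqrt(2)]
    return out

def fit_gain(rec, ref):
    """Least-squares scalar gain of ref inside rec."""
    den = sum(g * g for g in ref)
    return sum(r * g for r, g in zip(rec, ref)) / den if den else 0.0

def remove_artifact(rec, ref):
    rs, rd = split(rec)
    gs, gd = split(ref)
    a_low, a_high = fit_gain(rs, gs), fit_gain(rd, gd)
    return merge([r - a_low * g for r, g in zip(rs, gs)],
                 [r - a_high * g for r, g in zip(rd, gd)])
```

    When the "brain" signal and the stimulus reference occupy different bands, the fitted gain recovers the injected artifact amplitude and the cleaned signal matches the brain signal.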

  19. The analysis of a sparse grid stochastic collocation method for partial differential equations with high-dimensional random input data.

    SciTech Connect

    Webster, Clayton; Tempone, Raul; Nobile, Fabio

    2007-12-01

    This work describes the convergence analysis of a Smolyak-type sparse grid stochastic collocation method for the approximation of statistical quantities related to the solution of partial differential equations with random coefficients and forcing terms (input data of the model). To compute solution statistics, the sparse grid stochastic collocation method uses approximate solutions, produced here by finite elements, corresponding to a deterministic set of points in the random input space. This naturally requires solving uncoupled deterministic problems and, as such, the derived strong error estimates for the fully discrete solution are used to compare the computational efficiency of the proposed method with the Monte Carlo method. Numerical examples illustrate the theoretical results and are used to compare this approach with several others, including the standard Monte Carlo.

  20. Using the Stochastic Collocation Method for the Uncertainty Quantification of Drug Concentration Due to Depot Shape Variability

    PubMed Central

    Preston, J. Samuel; Tasdizen, Tolga; Terry, Christi M.; Cheung, Alfred K.

    2010-01-01

    Numerical simulations entail modeling assumptions that impact outcomes. Therefore, characterizing, in a probabilistic sense, the relationship between the variability of model selection and the variability of outcomes is important. Under certain assumptions, the stochastic collocation method offers a computationally feasible alternative to traditional Monte Carlo approaches for assessing the impact of model and parameter variability. We propose a framework that combines component shape parameterization with the stochastic collocation method to study the effect of drug depot shape variability on the outcome of drug diffusion simulations in a porcine model. We use realistic geometries segmented from MR images and employ level-set techniques to create two alternative univariate shape parameterizations. We demonstrate that once the underlying stochastic process is characterized, quantification of the introduced variability is quite straightforward and provides an important step in the validation and verification process. PMID:19272865

  1. On the Daubechies-based wavelet differentiation matrix

    NASA Technical Reports Server (NTRS)

    Jameson, Leland

    1993-01-01

    The differentiation matrix for a Daubechies-based wavelet basis is constructed and superconvergence is proven. That is, it will be proven that, under the assumption of periodic boundary conditions, the differentiation matrix is accurate to order 2M, even though the approximation subspace can represent exactly only polynomials up to degree M-1, where M is the number of vanishing moments of the associated wavelet. It is illustrated that Daubechies-based wavelet methods are equivalent to finite difference methods with grid refinement in regions of the domain where small-scale structure is present.
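
    The order of accuracy of any banded periodic differentiation matrix can be verified empirically by halving the grid spacing. The sketch below uses the standard 4th-order central stencil as a stand-in (computing actual Daubechies connection coefficients is beyond a short snippet); the same ratio test would expose the order 2M behavior of a wavelet-derived matrix.

```python
import numpy as np

def periodic_diff_matrix(n, h):
    """Circulant differentiation matrix from the 4th-order central stencil
    f'(x) ~ (f(x-2h) - 8 f(x-h) + 8 f(x+h) - f(x+2h)) / (12 h)."""
    D = np.zeros((n, n))
    coeffs = {-2: 1.0, -1: -8.0, 1: 8.0, 2: -1.0}
    for k, c in coeffs.items():
        for i in range(n):
            D[i, (i + k) % n] = c / (12.0 * h)
    return D

def max_error(n):
    """Max pointwise error differentiating sin(x) on a periodic grid of n points."""
    x = 2 * np.pi * np.arange(n) / n
    D = periodic_diff_matrix(n, 2 * np.pi / n)
    return np.max(np.abs(D @ np.sin(x) - np.cos(x)))

# Halving h should reduce the error by roughly 2**4 for a 4th-order operator.
ratio = max_error(32) / max_error(64)
```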

  2. Medical image fusion by wavelet transform modulus maxima

    NASA Astrophysics Data System (ADS)

    Guihong, Qu; Dali, Zhang; Pingfan, Yan

    2001-08-01

    Medical image fusion has been used to derive useful information from multimodality medical image data. In this research, we propose a novel method for multimodality medical image fusion. Using the wavelet transform, we construct a fusion scheme. A fusion rule is proposed and used for calculating the wavelet transform modulus maxima of input images at different bandwidths and levels. To evaluate the fusion result, a metric based on mutual information (MI) is presented for measuring the fusion effect. The performances of two other methods of image fusion based on wavelet transform are briefly described for comparison. The experimental results demonstrate the effectiveness of the fusion scheme.
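
    A coefficient-magnitude fusion rule of this general kind might be sketched as follows, assuming a one-level Haar transform and a simple take-the-larger-magnitude rule per coefficient; the paper's modulus-maxima rule is more selective than this illustration.

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar transform (rows then columns)."""
    def step(a, axis):
        a = np.moveaxis(a, axis, 0)
        lo = (a[0::2] + a[1::2]) / np.sqrt(2.0)
        hi = (a[0::2] - a[1::2]) / np.sqrt(2.0)
        return np.moveaxis(np.concatenate([lo, hi]), 0, axis)
    return step(step(img.astype(float), 0), 1)

def ihaar2d(coef):
    """Inverse of haar2d."""
    def istep(a, axis):
        a = np.moveaxis(a, axis, 0)
        n = a.shape[0] // 2
        lo, hi = a[:n], a[n:]
        out = np.empty_like(a)
        out[0::2] = (lo + hi) / np.sqrt(2.0)
        out[1::2] = (lo - hi) / np.sqrt(2.0)
        return np.moveaxis(out, 0, axis)
    return istep(istep(coef, 1), 0)

def fuse(img_a, img_b):
    """Keep, at each position, the wavelet coefficient of larger magnitude."""
    ca, cb = haar2d(img_a), haar2d(img_b)
    return ihaar2d(np.where(np.abs(ca) >= np.abs(cb), ca, cb))
```

    Picking the larger-magnitude coefficient tends to carry the sharper detail from whichever modality resolves a given structure better, which is the intuition behind modulus-maxima fusion.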

  3. Wavelet analysis and scaling properties of time series

    NASA Astrophysics Data System (ADS)

    Manimaran, P.; Panigrahi, Prasanta K.; Parikh, Jitendra C.

    2005-10-01

    We propose a wavelet based method for the characterization of the scaling behavior of nonstationary time series. It makes use of the built-in ability of the wavelets for capturing the trends in a data set, in variable window sizes. Discrete wavelets from the Daubechies family are used to illustrate the efficacy of this procedure. After studying binomial multifractal time series with the present and earlier approaches of detrending for comparison, we analyze the time series of averaged spin density in the 2D Ising model at the critical temperature, along with several experimental data sets possessing multifractal behavior.

  4. EEG seizure identification by using optimized wavelet decomposition.

    PubMed

    Pinzon-Morales, R D; Orozco-Gutierrez, A; Castellanos-Dominguez, G

    2011-01-01

    A methodology for wavelet synthesis based on the lifting scheme and genetic algorithms is presented. Often, wavelet synthesis is reduced to choosing a wavelet function from an existing library, which may not be specially designed for the application at hand. The task under consideration is the identification of epileptic seizures in electroencephalogram recordings. Although basic classifiers are employed, the results show that the proposed methodology is successful in the considered study, achieving classification rates similar to those reported in the literature. PMID:22254892
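
    A single lifting step (split, predict, update) is the building block such a genetic search would parameterize. The sketch below uses the fixed Haar predict/update coefficients, whereas the paper evolves the coefficients to fit the seizure-detection task.

```python
import numpy as np

def haar_lift(x):
    """Haar transform via lifting: split, predict, update."""
    s, d = x[0::2].astype(float), x[1::2].astype(float)
    d = d - s            # predict: estimate odd samples from even neighbours
    s = s + d / 2.0      # update: preserve the running mean in the coarse band
    return s, d          # s = pairwise means, d = pairwise differences

def haar_unlift(s, d):
    """Exact inverse: undo the lifting steps in reverse order."""
    s = s - d / 2.0
    d = d + s
    x = np.empty(2 * s.size)
    x[0::2], x[1::2] = s, d
    return x
```

    Because each lifting step is trivially invertible, any predict/update coefficients a genetic algorithm proposes still yield perfect reconstruction, which is what makes lifting attractive for wavelet synthesis.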

  5. A comparison of spectral decorrelation techniques and performance evaluation metrics for a wavelet-based, multispectral data compression algorithm

    NASA Technical Reports Server (NTRS)

    Matic, Roy M.; Mosley, Judith I.

    1994-01-01

    Future space-based, remote sensing systems will have data transmission requirements that exceed available downlinks, necessitating the use of lossy compression techniques for multispectral data. In this paper, we describe several algorithms for lossy compression of multispectral data which combine spectral decorrelation techniques with an adaptive, wavelet-based, image compression algorithm to exploit both spectral and spatial correlation. We compare the performance of several different spectral decorrelation techniques including wavelet transformation in the spectral dimension. The performance of each technique is evaluated at compression ratios ranging from 4:1 to 16:1. Performance measures used are visual examination, conventional distortion measures, and multispectral classification results. We also introduce a family of distortion metrics that are designed to quantify and predict the effect of compression artifacts on multispectral classification of the reconstructed data.
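
    Spectral decorrelation along the band axis can be sketched with a Karhunen-Loève (PCA) transform; this is a generic illustration of the decorrelation step, not one of the paper's specific operators.

```python
import numpy as np

def spectral_decorrelate(cube):
    """KLT along the band axis of a (bands, height, width) cube.
    Returns the decorrelated cube plus the transform and mean for inversion."""
    bands = cube.reshape(cube.shape[0], -1).astype(float)
    mean = bands.mean(axis=1, keepdims=True)
    cov = np.cov(bands)                       # inter-band covariance
    _, vecs = np.linalg.eigh(cov)             # eigenvectors diagonalize it
    out = vecs.T @ (bands - mean)             # rotated bands are uncorrelated
    return out.reshape(cube.shape), vecs, mean
```

    After this rotation the bands are mutually uncorrelated, so a spatial wavelet coder can be applied to each output band independently with little loss from ignoring inter-band redundancy.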

  6. SYSTEM IDENTIFICATION OF THE LINAC RF SYSTEM USING A WAVELET METHOD AND ITS APPLICATIONS IN THE SNS LLRF CONTROL SYSTEM

    SciTech Connect

    Y. WANG; S. KWON; ET AL

    2001-06-01

    For a pulsed LINAC such as the SNS, an adaptive feed-forward algorithm plays an important role in reducing the repetitive disturbance caused by the pulsed operation conditions. In most modern feed-forward control algorithms, accurate real time system identification is required to make the algorithm more effective. In this paper, an efficient wavelet method is applied to the system identification in which the Haar function is used as the base wavelet. The advantage of this method is that the Fourier transform of the Haar function in the time domain is a sinc function in the frequency domain. Thus we can directly obtain the system transfer function in the frequency domain from the coefficients of the time domain system response.

  7. A self-consistent estimate for linear viscoelastic polycrystals with internal variables inferred from the collocation method

    NASA Astrophysics Data System (ADS)

    Vu, Q. H.; Brenner, R.; Castelnau, O.; Moulinec, H.; Suquet, P.

    2012-03-01

    The correspondence principle is customarily used with the Laplace-Carson transform technique to tackle the homogenization of linear viscoelastic heterogeneous media. The main drawback of this method lies in the fact that the whole stress and strain histories have to be considered to compute the mechanical response of the material during a given macroscopic loading. Following a remark of Mandel (1966 Mécanique des Milieux Continus (Paris, France: Gauthier-Villars)), Ricaud and Masson (2009 Int. J. Solids Struct. 46 1599-1606) have shown the equivalence between the collocation method used to invert Laplace-Carson transforms and an internal variables formulation. In this paper, this new method is developed for the case of polycrystalline materials with general anisotropic properties for local and macroscopic behavior. Applications are provided for the case of constitutive relations accounting for glide of dislocations on particular slip systems. It is shown that the method yields accurate results that perfectly match the standard collocation method and reference full-field results obtained with a FFT numerical scheme. The formulation is then extended to the case of time- and strain-dependent viscous properties, leading to the incremental collocation method (ICM) that can be solved efficiently by a step-by-step procedure. Specifically, the introduction of isotropic and kinematic hardening at the slip system scale is considered.

  8. Composite multi-modal vibration control for a stiffened plate using non-collocated acceleration sensor and piezoelectric actuator

    NASA Astrophysics Data System (ADS)

    Li, Shengquan; Li, Juan; Mo, Yueping; Zhao, Rong

    2014-01-01

    A novel active method for multi-mode vibration control of an all-clamped stiffened plate (ACSP) is proposed in this paper, using the extended-state-observer (ESO) approach based on non-collocated acceleration sensors and piezoelectric actuators. Exploiting the capacity of the ESO to simultaneously estimate the system state variables, the output superposition and control coupling of other modes, external excitation, and model uncertainties, a composite control method, i.e., the ESO-based vibration control scheme, is employed to ensure rejection of the lumped disturbances and uncertainties in the closed-loop system. The phenomenon of phase hysteresis and time delay, caused by non-collocated sensor/actuator pairs, degrades the performance of the control system, even inducing instability. To solve this problem, a simple proportional differential (PD) controller and acceleration feed-forward with an output predictor design produce the control law for each vibration mode. The modal frequencies, phase hysteresis loops and phase lag values due to non-collocated placement of the acceleration sensor and piezoelectric patch actuator are experimentally obtained, and the phase lag is compensated by using the Smith predictor technique. In order to improve the vibration control performance, the chaos optimization method based on logistic mapping is employed to auto-tune the parameters of the feedback channel. The experimental control system for the ACSP is tested using the dSPACE real-time simulation platform. Experimental results demonstrate that the proposed composite active control algorithm is an effective approach for suppressing multi-modal vibrations.

  9. Semi-active damping with negative stiffness for multi-mode cable vibration mitigation: approximate collocated control solution

    NASA Astrophysics Data System (ADS)

    Weber, F.; Distl, H.

    2015-11-01

    This paper derives an approximate collocated control solution for the mitigation of multi-mode cable vibration by semi-active damping with negative stiffness based on the control force characteristics of the clipped linear quadratic regulator (LQR). The control parameters are derived from optimal modal viscous damping and corrected in order to guarantee that both the equivalent viscous damping coefficient and the equivalent stiffness coefficient of the semi-active cable damper force are equal to their desired counterparts. The collocated control solution with corrected control parameters is numerically validated by free decay tests of the first four cable modes and combinations of these modes. The results of the single-harmonic tests demonstrate that the novel approach yields 1.86 times more cable damping than optimal modal viscous damping and 1.87 to 2.33 times more damping compared to a passive oil damper whose viscous damper coefficient is optimally tuned to the targeted mode range of the first four modes. The improvement in the case of the multi-harmonic vibration tests, i.e. when modes 1 and 3 and modes 2 and 4 are vibrating at the same time, is between 1.55 and 3.81. The results also show that these improvements are obtained almost independent of the cable anti-node amplitude. Thus, the proposed approximate real-time applicable collocated semi-active control solution which can be realized by magnetorheological dampers represents a promising tool for the efficient mitigation of stay cable vibrations.
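
    The clipping at the heart of clipped LQR can be sketched in a few lines. The sign convention (a dissipative damper force must oppose the relative velocity across the device) and the function name are assumptions for illustration; real MR-damper models add a residual force floor.

```python
import numpy as np

def clipped_damper_force(f_desired, velocity, f_max):
    """Clip an LQR-desired force to what a semi-active damper can realize:
    the device can only dissipate energy, so the applied force must oppose
    the relative velocity; non-dissipative requests are clipped to zero."""
    if f_desired * velocity < 0.0:                 # dissipative request
        return np.clip(f_desired, -f_max, f_max)   # bounded by device capacity
    return 0.0
```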

  10. Optimum wavelet based masking for the contrast enhancement of medical images using enhanced cuckoo search algorithm.

    PubMed

    Daniel, Ebenezer; Anitha, J

    2016-04-01

    Unsharp masking techniques are a prominent approach in contrast enhancement. The generalized masking formulation has static scale value selection, which limits the gain of contrast. In this paper, we propose an Optimum Wavelet Based Masking (OWBM) using an Enhanced Cuckoo Search Algorithm (ECSA) for the contrast improvement of medical images. The ECSA can automatically adjust the ratio of nest rebuilding, using genetic operators such as adaptive crossover and mutation. First, the proposed contrast enhancement approach is validated quantitatively using Brain Web and MIAS database images. Later, the conventional nest rebuilding of cuckoo search optimization is modified using Adaptive Rebuilding of Worst Nests (ARWN). Experimental results are analyzed using various performance metrics, and our OWBM shows improved results compared with other results reported in the literature. PMID:26945462
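
    Basic unsharp masking with a tunable scale — the quantity an optimizer such as the ECSA would tune — can be sketched as follows. The box blur, circular edge handling, and function names are illustrative simplifications, not the paper's wavelet-based formulation.

```python
import numpy as np

def box_blur(img, k=3):
    """Simple k-by-k mean filter via shifted copies (circular edges; sketch only)."""
    acc = np.zeros_like(img, dtype=float)
    r = k // 2
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            acc += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return acc / k**2

def unsharp_mask(img, scale=0.5):
    """img + scale * (img - blur): 'scale' is the static value whose
    fixed choice limits contrast gain, motivating its optimization."""
    return np.clip(img + scale * (img - box_blur(img)), 0.0, 1.0)
```

    A larger scale amplifies the high-pass detail more aggressively; the point of the paper is that one static value cannot suit every image, hence the search for an optimum.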

  11. Highly scalable differential JPEG 2000 wavelet video codec for Internet video streaming

    NASA Astrophysics Data System (ADS)

    Zhao, Lifeng; Kim, JongWon; Bao, Yiliang; Kuo, C.-C. Jay

    2000-12-01

    A highly scalable wavelet video codec is proposed for Internet video streaming applications based on the simplified JPEG-2000 compression core. Most existing video coding solutions utilize a fixed temporal grouping structure, resulting in quality degradation due to structural mismatch with inherent motion and scene change. Thus, by adopting an adaptive frame grouping scheme based on fast scene change detection, a flexible temporal grouping is proposed according to motion activities. To provide good temporal scalability regardless of packet loss, the dependency structure inside a temporal group is simplified by referencing only the initial intra-frame in telescopic motion estimation at the cost of coding efficiency. In addition, predictive-frames in a temporal group are prioritized according to their relative motion and coding cost. Finally, the joint spatio-temporal scalability support of the proposed video solution is demonstrated in terms of the network adaptation capability.

  13. Feedback control of acoustic musical instruments: collocated control using physical analogs.

    PubMed

    Berdahl, Edgar; Smith, Julius O; Niemeyer, Günter

    2012-01-01

    Traditionally, the average professional musician has owned numerous acoustic musical instruments, many of them having distinctive acoustic qualities. However, a modern musician could prefer to have a single musical instrument whose acoustics are programmable by feedback control, where acoustic variables are estimated from sensor measurements in real time and then fed back in order to influence the controlled variables. In this paper, theory is presented that describes stable feedback control of an acoustic musical instrument. The presentation should be accessible to members of the musical acoustics community who may have limited or no experience with feedback control. First, the only control strategy guaranteed to be stable subject to any musical instrument mobility is described: the sensors and actuators must be collocated, and the controller must emulate a physical analog system. Next, the most fundamental feedback controllers and the corresponding physical analog systems are presented. The effects that these controllers have on acoustic musical instruments are described. Finally, practical design challenges are discussed. A proof explains why changing the resonance frequency of a musical resonance requires much more control power than changing the decay time of the resonance.

  14. Estimates of Mode-S EHS aircraft-derived wind observation errors using triple collocation

    NASA Astrophysics Data System (ADS)

    de Haan, Siebren

    2016-08-01

    Information on the accuracy of meteorological observation is essential to assess the applicability of the measurements. In general, accuracy information is difficult to obtain in operational situations, since the truth is unknown. One method to determine this accuracy is by comparison with the model equivalent of the observation. The advantage of this method is that all measured parameters can be evaluated, from 2 m temperature observation to satellite radiances. The drawback is that these comparisons also contain the (unknown) model error. By applying the so-called triple-collocation method to two independent observations at the same location in space and time, combined with model output, and assuming uncorrelated observation errors, the three error variances can be estimated. This method is applied in this study to estimate wind observation errors from aircraft, obtained using information from air traffic control surveillance radar with Selective Mode Enhanced Surveillance capabilities (Mode-S EHS). Radial wind measurements from Doppler weather radar and wind vector measurements from sodar, together with equivalents from a non-hydrostatic numerical weather prediction model, are used to assess the accuracy of the Mode-S EHS wind observations. The Mode-S EHS wind (zonal and meridional) observation error is estimated to be less than 1.4 ± 0.1 m s-1 near the surface and around 1.1 ± 0.3 m s-1 at 500 hPa.
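
    The triple-collocation estimator itself is compact. A sketch under the standard assumptions (additive, mutually uncorrelated errors and a common calibration across the three systems):

```python
import numpy as np

def triple_collocation_errors(x, y, z):
    """Estimate the error variance of each of three collocated measurement
    systems observing the same truth, assuming mutually uncorrelated,
    signal-independent additive errors."""
    x, y, z = (np.asarray(a) - np.mean(a) for a in (x, y, z))
    var_ex = np.mean((x - y) * (x - z))   # truth cancels in both differences
    var_ey = np.mean((y - x) * (y - z))
    var_ez = np.mean((z - x) * (z - y))
    return var_ex, var_ey, var_ez
```

    The truth cancels in each difference, so the cross-covariance of the two differences isolates one system's error variance without ever knowing the truth, which is exactly what makes the method usable operationally.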

  15. Evaluation of Direct Collocation Optimal Control Problem Formulations for Solving the Muscle Redundancy Problem.

    PubMed

    De Groote, Friedl; Kinney, Allison L; Rao, Anil V; Fregly, Benjamin J

    2016-10-01

    Estimation of muscle forces during motion involves solving an indeterminate problem (more unknown muscle forces than joint moment constraints), frequently via optimization methods. When the dynamics of muscle activation and contraction are modeled for consistency with muscle physiology, the resulting optimization problem is dynamic and challenging to solve. This study sought to identify a robust and computationally efficient formulation for solving these dynamic optimization problems using direct collocation optimal control methods. Four problem formulations were investigated for walking based on both two- and three-dimensional models. Formulations differed in the use of either an explicit or implicit representation of contraction dynamics with either muscle length or tendon force as a state variable. The implicit representations introduced additional controls defined as the time derivatives of the states, allowing the nonlinear equations describing contraction dynamics to be imposed as algebraic path constraints, simplifying their evaluation. Problem formulation affected computational speed and robustness to the initial guess. The formulation that used explicit contraction dynamics with muscle length as a state failed to converge in most cases. In contrast, the two formulations that used implicit contraction dynamics converged to an optimal solution in all cases for all initial guesses, with tendon force as a state generally being the fastest. Future work should focus on comparing the present approach to other approaches for computing muscle forces. The present approach lacks some of the major limitations of established methods such as static optimization and computed muscle control while remaining computationally efficient.
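
    The implicit trick — treating the state derivative as an extra control and imposing the dynamics as an algebraic path constraint — can be shown on a toy linear-quadratic problem where trapezoidal direct collocation reduces to a single KKT solve. The muscle problem itself is nonlinear and needs an NLP solver; everything below (problem, names, discretization size) is an illustrative assumption.

```python
import numpy as np

# Toy problem: minimize the trapezoidal integral of u^2 subject to
# xdot = u, x(0) = 0, x(1) = 1.  The state derivative v (= xdot) is
# introduced as an extra control and the dynamics become the algebraic
# path constraint v - u = 0, mirroring the implicit formulations.
N = 10                          # mesh intervals
h = 1.0 / N
nx = N + 1                      # nodes for x, v, and u alike
nv = 3 * nx                     # decision vector z = [x; v; u]
w = np.full(nx, h); w[0] = w[-1] = h / 2        # trapezoidal quadrature weights

H = np.zeros((nv, nv))
H[2 * nx:, 2 * nx:] = 2.0 * np.diag(w)          # objective (1/2) z'Hz = sum w_k u_k^2

rows, rhs = [], []
for k in range(N):              # defect constraints: x_{k+1} - x_k = h/2 (v_k + v_{k+1})
    r = np.zeros(nv); r[k + 1], r[k] = 1.0, -1.0
    r[nx + k] = r[nx + k + 1] = -h / 2
    rows.append(r); rhs.append(0.0)
for k in range(nx):             # algebraic path constraints: v_k - u_k = 0
    r = np.zeros(nv); r[nx + k], r[2 * nx + k] = 1.0, -1.0
    rows.append(r); rhs.append(0.0)
for idx, val in ((0, 0.0), (N, 1.0)):           # boundary conditions on x
    r = np.zeros(nv); r[idx] = 1.0
    rows.append(r); rhs.append(val)

A, b = np.array(rows), np.array(rhs)
m = A.shape[0]
KKT = np.block([[H, A.T], [A, np.zeros((m, m))]])
sol = np.linalg.solve(KKT, np.concatenate([np.zeros(nv), b]))
x, v, u = sol[:nx], sol[nx:2 * nx], sol[2 * nx:3 * nx]
# Analytic optimum: u(t) = 1, x(t) = t.
```

    Because the dynamics enter only through sparse linear (here) or algebraic (in general) path constraints rather than through an embedded integrator, the transcribed problem stays sparse and cheap to differentiate, which is the computational advantage the study exploits.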

  16. Parallel iterative solution of the Hermite Collocation equations on GPUs II

    NASA Astrophysics Data System (ADS)

    Vilanakis, N.; Mathioudakis, E.

    2014-03-01

    Hermite Collocation is a high order finite element method for Boundary Value Problems modelling applications in several fields of science and engineering. Application of this integration free numerical solver for the solution of linear BVPs results in a large and sparse general system of algebraic equations, suggesting the usage of an efficient iterative solver especially for realistic simulations. In part I of this work an efficient parallel algorithm of the Schur complement method coupled with Bi-Conjugate Gradient Stabilized (BiCGSTAB) iterative solver has been designed for multicore computing architectures with a Graphics Processing Unit (GPU). In the present work the proposed algorithm has been extended for high performance computing environments consisting of multiprocessor machines with multiple GPUs. Since this is a distributed GPU and shared CPU memory parallel architecture, a hybrid memory treatment is needed for the development of the parallel algorithm. The realization of the algorithm took place on a multiprocessor machine HP SL390 with Tesla M2070 GPUs using the OpenMP and OpenACC standards. Execution time measurements reveal the efficiency of the parallel implementation.
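
    The serial core of the Schur complement method (block elimination, reduced solve, back-substitution) can be sketched with dense blocks; in the parallel collocation solver these operations are distributed across GPUs and the reduced system is solved iteratively with BiCGSTAB rather than directly as below.

```python
import numpy as np

def schur_solve(A11, A12, A21, A22, b1, b2):
    """Solve [[A11, A12], [A21, A22]] [x1; x2] = [b1; b2] by eliminating
    the first block: form S = A22 - A21 A11^{-1} A12, solve for x2,
    then back-substitute for x1."""
    A11_inv_A12 = np.linalg.solve(A11, A12)
    A11_inv_b1 = np.linalg.solve(A11, b1)
    S = A22 - A21 @ A11_inv_A12                  # Schur complement
    x2 = np.linalg.solve(S, b2 - A21 @ A11_inv_b1)
    x1 = A11_inv_b1 - A11_inv_A12 @ x2
    return x1, x2
```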

  17. Twomey effect observed from collocated microphysical and remote sensing measurements over shallow cumulus

    NASA Astrophysics Data System (ADS)

    Werner, F.; Ditas, F.; Siebert, H.; Simmel, M.; Wehner, B.; Pilewskie, P.; Schmeissner, T.; Shaw, R. A.; Hartmann, S.; Wex, H.; Roberts, G. C.; Wendisch, M.

    2014-02-01

    Clear experimental evidence of the Twomey effect for shallow trade wind cumuli near Barbados is presented. Effective droplet radius (reff) and cloud optical thickness (τ), retrieved from helicopter-borne spectral cloud-reflected radiance measurements, and spectral cloud reflectivity (γλ) are correlated with collocated in situ observations of the number concentration of aerosol particles from the subcloud layer (N). N denotes the concentration of particles larger than 80 nm in diameter and represents particles in the activation mode. In situ cloud microphysical and aerosol parameters were sampled by the Airborne Cloud Turbulence Observation System (ACTOS). Spectral cloud-reflected radiance data were collected by the Spectral Modular Airborne Radiation measurement sysTem (SMART-HELIOS). With increasing N a shift in the probability density functions of τ and γλ toward larger values is observed, while the mean values and observed ranges of retrieved reff decrease. The relative susceptibilities (RS) of reff, τ, and γλ to N are derived for bins of constant liquid water path. The resulting values of RS are in the range of 0.35 for reff and τ, and 0.27 for γλ. These results are close to the maximum susceptibility possible from theory. Overall, the shallow cumuli sampled near Barbados show characteristics of homogeneous, plane-parallel clouds. Comparisons of RS derived from in situ measured reff and from a microphysical parcel model are in close agreement.
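
    The relative susceptibility is a logarithmic derivative, commonly estimated as the slope of a log-log fit; a sketch with synthetic numbers (not the campaign data):

```python
import numpy as np

def relative_susceptibility(N, X):
    """RS = d ln X / d ln N, estimated as the least-squares slope of
    ln X against ln N (computed per liquid-water-path bin in the study)."""
    slope, _ = np.polyfit(np.log(N), np.log(X), 1)
    return slope
```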

  18. Optimal momentum interpolation operators for simulations of turbulence with the incompressible collocated mesh scheme

    NASA Astrophysics Data System (ADS)

    Felten, Frederic; Lund, Thomas

    2001-11-01

    The incompressible collocated mesh is often preferred over the staggered mesh scheme for turbulence simulation due to its slightly simpler form in curvilinear coordinates. Many researchers have used an upwind interpolation for the momentum, citing problems with numerical oscillations if centered interpolations are used. Analysis reveals that second order centered interpolations result in a kinetic energy conservation error, which can act as a source for numerical oscillations. Analysis also shows that a simple first order centered interpolation does not produce a kinetic energy conservation error. Various momentum interpolation operators are used in an inviscid simulation of the flow over an airfoil, as well as for simulations of turbulent channel flow. In the case of the airfoil, oscillations are present with the second order centered interpolation, but are absent for both the first order centered and the second order upwind schemes. The dissipative effects of the upwind interpolations degrade the results of the channel flow simulations, while both the first and second order centered interpolations yield good results. This work suggests that numerical oscillations can be controlled with a non-dissipative algorithm through the proper choice of the interpolation scheme.

  19. A boundary collocation meshfree method for the treatment of Poisson problems with complex morphologies

    NASA Astrophysics Data System (ADS)

    Soghrati, Soheil; Mai, Weijie; Liang, Bowen; Buchheit, Rudolph G.

    2015-01-01

    A new meshfree method based on a discrete transformation of Green's basis functions is introduced to simulate Poisson problems with complex morphologies. The proposed Green's Discrete Transformation Method (GDTM) uses source points that are located along a virtual boundary outside the problem domain to construct the basis functions needed to approximate the field. The optimal number of Green's functions source points and their relative distances with respect to the problem boundaries are evaluated to obtain the best approximation of the partition of unity condition. A discrete transformation technique together with the boundary point collocation method is employed to evaluate the unknown coefficients of the solution series via satisfying the problem boundary conditions. A comprehensive convergence study is presented to investigate the accuracy and convergence rate of the GDTM. We will also demonstrate the application of this meshfree method for simulating the conductive heat transfer in a heterogeneous materials system and the dissolved aluminum ions concentration in the electrolyte solution formed near a passive corrosion pit.
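
    The two ingredients — source points on a virtual boundary outside the domain and boundary collocation for the expansion coefficients — can be sketched for the 2-D Laplace equation. This generic fundamental-solutions setup is an illustration of the idea, not the paper's specific discrete transformation; the Dirichlet data x^2 - y^2 is chosen harmonic so the exact interior solution is known.

```python
import numpy as np

n_src, n_col = 40, 80
ts = 2 * np.pi * np.arange(n_src) / n_src
src = 2.0 * np.column_stack([np.cos(ts), np.sin(ts)])   # sources on a virtual circle r = 2
tc = 2 * np.pi * np.arange(n_col) / n_col
col = np.column_stack([np.cos(tc), np.sin(tc)])          # collocation points on the unit circle

def basis(points):
    """Green's-function basis ln|p - s_j| evaluated at the given points."""
    d = points[:, None, :] - src[None, :, :]
    return np.log(np.linalg.norm(d, axis=2))

g = col[:, 0]**2 - col[:, 1]**2                          # Dirichlet boundary data
coef, *_ = np.linalg.lstsq(basis(col), g, rcond=None)    # satisfy BCs in least squares

p = np.array([[0.3, 0.2]])
u = basis(p) @ coef          # should approximate the exact value 0.3^2 - 0.2^2 = 0.05
```

    Each basis function is harmonic inside the domain by construction, so the PDE is satisfied identically and only the boundary conditions need to be enforced, which is what makes pure boundary collocation possible.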

  20. Visual Typo Correction by Collocative Optimization: A Case Study on Merchandize Images.

    PubMed

    Wei, Xiao-Yong; Yang, Zhen-Qun; Ngo, Chong-Wah; Zhang, Wei

    2014-02-01

    Near-duplicate retrieval (NDR) in merchandize images is of great importance to many online applications on e-Commerce websites. In those applications where the requirement of response time is critical, however, the conventional techniques developed for general-purpose NDR are limited, because expensive post-processing like spatial verification or hashing is usually employed to compensate for the quantization errors among the visual words used for the images. In this paper, we argue that most of the errors are introduced because of the quantization process where the visual words are considered individually, which ignores the contextual relations among words. We propose a "spelling or phrase correction" like process for NDR, which extends the concept of collocations to the visual domain for modeling the contextual relations. Binary quadratic programming is used to enforce the contextual consistency of words selected for an image, so that the errors (typos) are eliminated and the quality of the quantization process is improved. The experimental results show that the proposed method can improve the efficiency of NDR by reducing the vocabulary size by a factor of 1,000, and under the scenario of merchandize image NDR, the expensive local interest point feature used in conventional approaches can be replaced by a color-moment feature, which reduces the time cost by 92.02% while maintaining comparable performance to the state-of-the-art methods. PMID:26270906