Science.gov

Sample records for wavelet-based density estimation

  1. Value-at-risk estimation with wavelet-based extreme value theory: Evidence from emerging markets

    NASA Astrophysics Data System (ADS)

    Cifter, Atilla

    2011-06-01

    This paper introduces wavelet-based extreme value theory (EVT) for univariate value-at-risk estimation. Wavelets and EVT are combined for volatility forecasting to estimate a hybrid model. In the first stage, wavelets are used to set the threshold of the generalized Pareto distribution, and in the second stage, EVT is applied with the wavelet-based threshold. This new model is applied to two major emerging stock markets: the Istanbul Stock Exchange (ISE) and the Budapest Stock Exchange (BUX). The relative performance of wavelet-based EVT is benchmarked against the Riskmetrics-EWMA, ARMA-GARCH, generalized Pareto distribution, and conditional generalized Pareto distribution models. The empirical results show that wavelet-based extreme value theory increases the predictive performance of financial forecasting according to the number of violations and tail-loss tests. The superior forecasting performance of the wavelet-based EVT model is also consistent with Basel II requirements, so the new model can be used by financial institutions as well.
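
    The hybrid model itself is not reproduced here; the sketch below is only a minimal peaks-over-threshold (POT) value-at-risk estimate in which a generalized Pareto distribution is fitted to losses above a threshold. The empirical 95% quantile stands in for the wavelet-based threshold described in the abstract, and the synthetic Student-t losses are purely illustrative.

```python
import numpy as np
from scipy.stats import genpareto

def pot_var(losses, q=0.99, u_quantile=0.95):
    """Peaks-over-threshold VaR: fit a GPD to exceedances over a threshold u."""
    losses = np.asarray(losses)
    u = np.quantile(losses, u_quantile)            # stand-in for the wavelet-based threshold
    exceed = losses[losses > u] - u
    xi, _, sigma = genpareto.fit(exceed, floc=0)   # shape xi, scale sigma
    n, n_u = losses.size, exceed.size
    # Standard POT quantile: VaR_q = u + (sigma/xi) * ((n/n_u * (1 - q))**(-xi) - 1)
    return u + sigma / xi * ((n / n_u * (1.0 - q)) ** (-xi) - 1.0)

rng = np.random.default_rng(1)
daily_losses = rng.standard_t(df=4, size=2500) * 0.01   # synthetic heavy-tailed returns
print(f"99% one-day VaR: {pot_var(daily_losses):.4f}")
```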

  2. Wavelet-Based Speech Enhancement Using Time-Adapted Noise Estimation

    NASA Astrophysics Data System (ADS)

    Lei, Sheau-Fang; Tung, Ying-Kai

    Spectral subtraction is commonly used for speech enhancement in a single channel system because of the simplicity of its implementation. However, this algorithm introduces perceptually musical noise while suppressing the background noise. We propose a wavelet-based approach in this paper for suppressing the background noise for speech enhancement in a single channel system. The wavelet packet transform, which emulates the human auditory system, is used to decompose the noisy signal into critical bands. Wavelet thresholding is then temporally adjusted with the noise power by time-adapted noise estimation. The proposed algorithm can efficiently suppress the noise while reducing speech distortion. Experimental results, including several objective measurements, show that the proposed wavelet-based algorithm outperforms spectral subtraction and other wavelet-based denoising approaches for speech enhancement for nonstationary noise environments.
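
    The paper's perceptually motivated, time-adapted thresholding is not reproduced here; the PyWavelets sketch below only illustrates the generic skeleton it builds on: a wavelet-packet decomposition into subbands followed by soft thresholding with a per-subband noise estimate (critical-band grouping and temporal adaptation omitted).

```python
import numpy as np
import pywt

def wp_denoise(x, wavelet="db8", level=4):
    """Wavelet-packet soft thresholding with a crude per-subband noise estimate."""
    wp = pywt.WaveletPacket(data=x, wavelet=wavelet, mode="symmetric", maxlevel=level)
    for node in wp.get_level(level, order="natural"):
        sigma = np.median(np.abs(node.data)) / 0.6745        # robust noise estimate
        thr = sigma * np.sqrt(2.0 * np.log(node.data.size))  # universal threshold
        node.data = pywt.threshold(node.data, thr, mode="soft")
    return wp.reconstruct(update=True)[: len(x)]

fs = 8000
t = np.arange(fs) / fs
clean = np.sin(2 * np.pi * 440 * t)                  # stand-in for a speech frame
noisy = clean + 0.3 * np.random.default_rng(0).standard_normal(fs)
enhanced = wp_denoise(noisy)
```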

  3. Estimation of Modal Parameters Using a Wavelet-Based Approach

    NASA Technical Reports Server (NTRS)

    Lind, Rick; Brenner, Marty; Haley, Sidney M.

    1997-01-01

    Modal stability parameters are extracted directly from aeroservoelastic flight test data by decomposition of accelerometer response signals into time-frequency atoms. Logarithmic sweeps and sinusoidal pulses are used to generate DAST closed loop excitation data. Novel wavelets constructed to extract modal damping and frequency explicitly from the data are introduced. The so-called Haley and Laplace wavelets are used to track time-varying modal damping and frequency in a matching pursuit algorithm. Estimation of the trend to aeroservoelastic instability is demonstrated successfully from analysis of the DAST data.

  4. Measuring mass density and ultrasonic wave velocity: A wavelet-based method applied in ultrasonic reflection mode.

    PubMed

    Metwally, Khaled; Lefevre, Emmanuelle; Baron, Cécile; Zheng, Rui; Pithioux, Martine; Lasaygues, Philippe

    2016-02-01

    When assessing ultrasonic measurements of material parameters, the signal processing is an important part of the inverse problem. Measurements of thickness, ultrasonic wave velocity and mass density are required for such assessments. This study investigates the feasibility and the robustness of a wavelet-based processing (WBP) method based on a Jaffard-Meyer algorithm for calculating these parameters simultaneously and independently, using one single ultrasonic signal in the reflection mode. The appropriate transmitted incident wave, correlated with the mathematical properties of the wavelet decomposition, was determined using an adapted identification procedure to build a mathematically equivalent model for the electro-acoustic system. The method was tested on three groups of samples (polyurethane resin, bone and wood) using one 1-MHz transducer. For thickness and velocity measurements, the WBP method gave a relative error lower than 1.5%. The relative errors in the mass density measurements ranged between 0.70% and 2.59%. Despite discrepancies between manufactured and biological samples, the results obtained on the three groups of samples using the WBP method in the reflection mode were remarkably consistent, indicating that it is a reliable and efficient means of simultaneously assessing the thickness and the velocity of the ultrasonic wave propagating in the medium, and the apparent mass density of the material. PMID:26403278
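
    The WBP method itself is not shown here; the sketch below only illustrates the underlying pulse-echo relations once echo arrival times and the front-wall reflection coefficient have been extracted from the signal, with the sample thickness assumed known from an independent measurement. All numerical values are hypothetical.

```python
# Hypothetical values assumed already extracted from one pulse-echo signal
c_water, rho_water = 1480.0, 1000.0          # coupling medium: m/s, kg/m^3
t_front, t_back = 40.0e-6, 47.7e-6           # front- and back-wall echo times, s
R = 0.28                                     # front-wall amplitude reflection coefficient
d = 8.9e-3                                   # sample thickness, m (assumed known here)

c_sample = 2.0 * d / (t_back - t_front)      # wave speed in the sample (two-way path)

# Amplitude reflection at the water/sample interface: R = (Z2 - Z1) / (Z2 + Z1)
Z1 = rho_water * c_water
Z2 = Z1 * (1.0 + R) / (1.0 - R)
rho_sample = Z2 / c_sample                   # apparent mass density, kg/m^3
print(f"velocity ~ {c_sample:.0f} m/s, density ~ {rho_sample:.0f} kg/m^3")
```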

  5. Fetal QRS detection and heart rate estimation: a wavelet-based approach.

    PubMed

    Almeida, Rute; Gonçalves, Hernâni; Bernardes, João; Rocha, Ana Paula

    2014-08-01

    Fetal heart rate monitoring is used for pregnancy surveillance in obstetric units all over the world but in spite of recent advances in analysis methods, there are still inherent technical limitations that bound its contribution to the improvement of perinatal indicators. In this work, a previously published wavelet transform based QRS detector, validated over standard electrocardiogram (ECG) databases, is adapted to fetal QRS detection over abdominal fetal ECG. Maternal ECG waves were first located using the original detector and afterwards a version with parameters adapted for fetal physiology was applied to detect fetal QRS, excluding signal singularities associated with maternal heartbeats. Single lead (SL) based marks were combined in a single annotator with post processing rules (SLR) from which fetal RR and fetal heart rate (FHR) measures can be computed. Data from PhysioNet with reference fetal QRS locations was considered for validation, with SLR outperforming SL including ICA based detections. The error in estimated FHR using SLR was lower than 20 bpm for more than 80% of the processed files. The median error in 1 min based FHR estimation was 0.13 bpm, with a correlation between reference and estimated FHR of 0.48, which increased to 0.73 when considering only records for which estimated FHR > 110 bpm. This allows us to conclude that the proposed methodology is able to provide a clinically useful estimation of the FHR. PMID:25070210
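
    The authors' validated single-lead detector and SLR post-processing are not reproduced here; the sketch below only shows the generic idea of wavelet-based QRS detection on a clean synthetic trace: band-limit the signal with one DWT detail band, pick peaks with a refractory period, and convert RR intervals to a heart rate. Real abdominal recordings additionally require maternal QRS cancellation.

```python
import numpy as np
import pywt
from scipy.signal import find_peaks

def wavelet_qrs_rate(ecg, fs, level=4, min_rr_s=0.25):
    """Band-limit the ECG to one DWT detail band, pick peaks, return beats/min."""
    coeffs = pywt.wavedec(ecg, "db4", level=level)
    kept = [np.zeros_like(c) for c in coeffs]
    kept[1] = coeffs[1]                          # keep only the coarsest detail band
    band = pywt.waverec(kept, "db4")[: len(ecg)]
    env = np.abs(band)
    peaks, _ = find_peaks(env, height=0.3 * env.max(),
                          distance=int(min_rr_s * fs))   # refractory period
    rr = np.diff(peaks) / fs                     # RR intervals in seconds
    return 60.0 / np.median(rr), peaks

fs = 500
t = np.arange(10 * fs) / fs
ecg = np.maximum(np.sin(2 * np.pi * 2.2 * t), 0.0) ** 63   # crude pulse train, ~132 bpm
fhr, beats = wavelet_qrs_rate(ecg, fs)
print(f"estimated FHR ~ {fhr:.0f} bpm")
```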

  6. Real-time wavelet based blur estimation on cell BE platform

    NASA Astrophysics Data System (ADS)

    Lukic, Nemanja; Platiša, Ljiljana; Pižurica, Aleksandra; Philips, Wilfried; Temerinac, Miodrag

    2010-01-01

    We propose a real-time system for blur estimation using wavelet decomposition. The system is based on an emerging multi-core microprocessor architecture (Cell Broadband Engine, Cell BE) known to outperform any available general purpose or DSP processor in the domain of real-time advanced video processing solutions. We start from a recent wavelet domain blur estimation algorithm which uses histograms of a local regularity measure called average cone ratio (ACR). This approach has shown very good potential for assessing the level of blur in an image, yet some important aspects remain to be addressed before the method becomes practically usable. Some of these aspects are explored in our work. Furthermore, we develop an efficient real-time implementation of the new metric and integrate it into a system that captures live video. The proposed system estimates blur extent and renders the results to the remote user in real-time.

  7. Direct Density Derivative Estimation.

    PubMed

    Sasaki, Hiroaki; Noh, Yung-Kyun; Niu, Gang; Sugiyama, Masashi

    2016-06-01

    Estimating the derivatives of probability density functions is an essential step in statistical data analysis. A naive approach to estimate the derivatives is to first perform density estimation and then compute its derivatives. However, this approach can be unreliable because a good density estimator does not necessarily mean a good density derivative estimator. To cope with this problem, in this letter, we propose a novel method that directly estimates density derivatives without going through density estimation. The proposed method provides computationally efficient estimation for the derivatives of any order on multidimensional data with a hyperparameter tuning method and achieves the optimal parametric convergence rate. We further discuss an extension of the proposed method by applying regularized multitask learning and a general framework for density derivative estimation based on Bregman divergences. Applications of the proposed method to nonparametric Kullback-Leibler divergence approximation and bandwidth matrix selection in kernel density estimation are also explored. PMID:27140943
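
    For contrast with the direct estimator proposed in the letter, the sketch below implements the naive two-step baseline it argues against: estimate a 1-D Gaussian KDE and differentiate it analytically on a grid.

```python
import numpy as np

def kde_and_derivative(samples, grid, bandwidth):
    """Gaussian KDE and its analytic first derivative on a grid (naive two-step route)."""
    x = np.asarray(samples)[None, :]            # shape (1, n)
    g = np.asarray(grid)[:, None]               # shape (m, 1)
    z = (g - x) / bandwidth
    k = np.exp(-0.5 * z**2) / np.sqrt(2.0 * np.pi)
    p = k.mean(axis=1) / bandwidth              # density estimate at each grid point
    dp = (-z * k).mean(axis=1) / bandwidth**2   # d/dx of the kernel sum
    return p, dp

rng = np.random.default_rng(0)
xs = rng.normal(size=500)
grid = np.linspace(-3, 3, 200)
p_hat, dp_hat = kde_and_derivative(xs, grid, bandwidth=0.35)
```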

  8. Information geometric density estimation

    NASA Astrophysics Data System (ADS)

    Sun, Ke; Marchand-Maillet, Stéphane

    2015-01-01

    We investigate kernel density estimation where the kernel function varies from point to point. Density estimation in the input space means to find a set of coordinates on a statistical manifold. This novel perspective helps to combine efforts from information geometry and machine learning to spawn a family of density estimators. We present example models with simulations. We discuss the principle and theory of such density estimation.

  9. Wavelet-based polarimetry analysis

    NASA Astrophysics Data System (ADS)

    Ezekiel, Soundararajan; Harrity, Kyle; Farag, Waleed; Alford, Mark; Ferris, David; Blasch, Erik

    2014-06-01

    Wavelet transformation has become a cutting edge and promising approach in the field of image and signal processing. A wavelet is a waveform of effectively limited duration that has an average value of zero. Wavelet analysis is done by breaking up the signal into shifted and scaled versions of the original signal. The key advantage of a wavelet is that it is capable of revealing smaller changes, trends, and breakdown points that are not revealed by other techniques such as Fourier analysis. The phenomenon of polarization has been studied for quite some time and is a very useful tool for target detection and tracking. Long Wave Infrared (LWIR) polarization is beneficial for detecting camouflaged objects and is a useful approach when identifying and distinguishing manmade objects from natural clutter. In addition, the Stokes Polarization Parameters, which are calculated from 0°, 45°, 90°, 135°, right circular, and left circular intensity measurements, provide spatial orientations of target features and suppress natural features. In this paper, we propose a wavelet-based polarimetry analysis (WPA) method to analyze Long Wave Infrared Polarimetry Imagery to discriminate targets such as dismounts and vehicles from background clutter. These parameters can be used for image thresholding and segmentation. Experimental results show the wavelet-based polarimetry analysis is efficient and can be used in a wide range of applications such as change detection, shape extraction, target recognition, and feature-aided tracking.
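
    The Stokes computation cited in the abstract is standard; a small sketch, assuming six co-registered intensity images (the arrays below are hypothetical stand-ins), is:

```python
import numpy as np

def stokes_from_intensities(I0, I45, I90, I135, Irc, Ilc):
    """Stokes parameters from 0/45/90/135-degree and circular intensity images."""
    S0 = I0 + I90                     # total intensity
    S1 = I0 - I90                     # horizontal vs. vertical linear polarization
    S2 = I45 - I135                   # +45 vs. -45 degree linear polarization
    S3 = Irc - Ilc                    # right vs. left circular polarization
    dolp = np.sqrt(S1**2 + S2**2) / np.maximum(S0, 1e-12)   # degree of linear polarization
    return S0, S1, S2, S3, dolp

rng = np.random.default_rng(0)
frames = [rng.random((128, 128)) for _ in range(6)]   # hypothetical LWIR frames
S0, S1, S2, S3, dolp = stokes_from_intensities(*frames)
```

    The degree-of-linear-polarization image is one of the quantities that could then be passed to the wavelet decomposition for thresholding and segmentation.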

  10. Wavelet-Based Grid Generation

    NASA Technical Reports Server (NTRS)

    Jameson, Leland

    1996-01-01

    Wavelets can provide a basis set in which the basis functions are constructed by dilating and translating a fixed function known as the mother wavelet. The mother wavelet can be seen as a high pass filter in the frequency domain. The process of dilating and expanding this high-pass filter can be seen as altering the frequency range that is 'passed' or detected. The process of translation moves this high-pass filter throughout the domain, thereby providing a mechanism to detect the frequencies or scales of information at every location. This is exactly the type of information that is needed for effective grid generation. This paper provides motivation to use wavelets for grid generation in addition to providing the final product: source code for wavelet-based grid generation.

  11. Maximum Likelihood Wavelet Density Estimation With Applications to Image and Shape Matching

    PubMed Central

    Peter, Adrian M.; Rangarajan, Anand

    2010-01-01

    Density estimation for observational data plays an integral role in a broad spectrum of applications, e.g., statistical data analysis and information-theoretic image registration. Of late, wavelet-based density estimators have gained in popularity due to their ability to approximate a large class of functions, adapting well to difficult situations such as when densities exhibit abrupt changes. The decision to work with wavelet density estimators brings along with it theoretical considerations (e.g., non-negativity, integrability) and empirical issues (e.g., computation of basis coefficients) that must be addressed in order to obtain a bona fide density. In this paper, we present a new method to accurately estimate a non-negative density which directly addresses many of the problems in practical wavelet density estimation. We cast the estimation procedure in a maximum likelihood framework which estimates the square root of the density, √p, allowing us to obtain the natural non-negative density representation (√p)². Analysis of this method will bring to light a remarkable theoretical connection with the Fisher information of the density and, consequently, lead to an efficient constrained optimization procedure to estimate the wavelet coefficients. We illustrate the effectiveness of the algorithm by evaluating its performance on mutual information-based image registration, shape point set alignment, and empirical comparisons to known densities. The present method is also compared to fixed and variable bandwidth kernel density estimators. PMID:18390355
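
    A minimal 1-D sketch of the square-root parameterization, assuming a Haar-type box basis rather than the paper's full wavelet expansion and Fisher-information machinery: the coefficients of √p are constrained to the unit sphere so that the resulting density integrates to one, and are fitted by projected gradient ascent on the log-likelihood.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.beta(2.0, 5.0, size=4000)                  # samples supported on (0, 1)

J = 4
nbins = 2 ** J
h = 1.0 / nbins                                    # each box basis function equals 1/sqrt(h) on one bin

def basis(pts):
    """Evaluate all nbins box basis functions at pts -> array of shape (len(pts), nbins)."""
    idx = np.minimum((pts / h).astype(int), nbins - 1)
    out = np.zeros((pts.size, nbins))
    out[np.arange(pts.size), idx] = 1.0 / np.sqrt(h)
    return out

P = basis(x)
c = np.full(nbins, 1.0 / np.sqrt(nbins))           # coefficients of sqrt(p), on the unit sphere

# Maximize the mean log-likelihood of p = (c . phi)^2 by projected gradient ascent;
# renormalizing c keeps the integral of p equal to one.
for _ in range(300):
    s = P @ c                                      # sqrt-density evaluated at the samples
    c = c + 0.05 * 2.0 * (P / s[:, None]).mean(axis=0)
    c = c / np.linalg.norm(c)

# For this box basis the closed-form ML answer is c_j = sqrt(n_j / n) (a root-histogram),
# which the iteration converges to; genuine wavelet bases have no such shortcut.
density = (basis(np.linspace(0.0, 1.0, 400, endpoint=False)) @ c) ** 2
```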

  12. Density Estimation with Mercer Kernels

    NASA Technical Reports Server (NTRS)

    Macready, William G.

    2003-01-01

    We present a new method for density estimation based on Mercer kernels. The density estimate can be understood as the density induced on a data manifold by a mixture of Gaussians fit in a feature space. As is usual, the feature space and data manifold are defined with any suitable positive-definite kernel function. We modify the standard EM algorithm for mixtures of Gaussians to infer the parameters of the density. One benefit of the approach is its conceptual simplicity and uniform applicability over many different types of data. Preliminary results are presented for a number of simple problems.

  13. A family of orthonormal wavelet bases with dilation factor 4

    NASA Astrophysics Data System (ADS)

    Karoui, Abderrazek

    2006-05-01

    In this paper, we study a method for the construction of orthonormal wavelet bases with dilation factor 4. More precisely, for any integer M>0, we construct an orthonormal scaling filter m_M(ξ) that generates a mother scaling function φ_M, associated with the dilation factor 4. The computation of the different coefficients of |m_M(ξ)|² is done by the use of a simple iterative method. Also, this work shows how this construction method provides us with a whole family of compactly supported orthonormal wavelet bases with arbitrarily high regularity. A first estimate of α(M), the asymptotic regularity of φ_M, is given by α(M) ~ 0.25M. Examples are provided to illustrate the results of this work.

  14. Estimation of coastal density gradients

    NASA Astrophysics Data System (ADS)

    Howarth, M. J.; Palmer, M. R.; Polton, J. A.; O'Neill, C. K.

    2012-04-01

    Density gradients in coastal regions with significant freshwater input are large and variable and are a major control of nearshore circulation. However, their measurement is difficult, especially where the gradients are largest close to the coast, with significant uncertainties because of a variety of factors - spatial and time scales are small, tidal currents are strong and water depths shallow. Whilst temperature measurements are relatively straightforward, measurements of salinity (the dominant control of spatial variability) can be less reliable in turbid coastal waters. Liverpool Bay has strong tidal mixing and receives fresh water principally from the Dee, Mersey, Ribble and Conwy estuaries, each with different catchment influences. Horizontal and vertical density gradients are variable both in space and time. The water column stratifies intermittently. A Coastal Observatory has been operational since 2002 with regular (quasi monthly) CTD surveys on a 9 km grid, an in situ station, an instrumented ferry travelling between Birkenhead and Dublin and a shore-based HF radar system measuring surface currents and waves. These measurements are complementary, each having different space-time characteristics. For coastal gradients the ferry is particularly useful since measurements are made right from the mouth of the Mersey. From measurements at the in situ site alone density gradients can only be estimated from the tidal excursion. A suite of coupled physical, wave and ecological models is run in association with these measurements. The models, here on a 1.8 km grid, enable detailed estimation of nearshore density gradients, provided appropriate river run-off data are available. Examples are presented of the density gradients estimated from the different measurements and models, together with accuracies and uncertainties, showing that systematic time series measurements within a few kilometres of the coast are a high priority. (Here gliders are an exciting prospect for

  15. Wavelet based recognition for pulsar signals

    NASA Astrophysics Data System (ADS)

    Shan, H.; Wang, X.; Chen, X.; Yuan, J.; Nie, J.; Zhang, H.; Liu, N.; Wang, N.

    2015-06-01

    A signal from a pulsar can be decomposed into a set of features. This set is a unique signature for a given pulsar. It can be used to decide whether a pulsar is newly discovered or not. Features can be constructed from coefficients of a wavelet decomposition. Two types of wavelet based pulsar features are proposed. The energy based features reflect the multiscale distribution of the energy of coefficients. The singularity based features first classify the signals into a class with one peak and a class with two peaks by exploring the number of the straight wavelet modulus maxima lines perpendicular to the abscissa, and then implement further classification according to the features of skewness and kurtosis. Experimental results show that the wavelet based features can gain comparatively better performance over the shape parameter based features not only in the clustering and classification, but also in the error rates of the recognition tasks.

  16. Wavelets based on Hermite cubic splines

    NASA Astrophysics Data System (ADS)

    Cvejnová, Daniela; Černá, Dana; Finěk, Václav

    2016-06-01

    In 2000, W. Dahmen et al. designed biorthogonal multi-wavelets adapted to the interval [0,1] on the basis of Hermite cubic splines. In recent years, several simpler constructions of wavelet bases based on Hermite cubic splines have been proposed. We focus here on wavelet bases with respect to which both the mass and stiffness matrices are sparse in the sense that the number of nonzero elements in any column is bounded by a constant. Then, a matrix-vector multiplication in adaptive wavelet methods can be performed exactly with linear complexity for any second order differential equation with constant coefficients. In this contribution, we briefly review these constructions and propose a new wavelet which leads to improved Riesz constants. The proposed wavelets have four vanishing moments.

  17. Image denoising via Bayesian estimation of local variance with Maxwell density prior

    NASA Astrophysics Data System (ADS)

    Kittisuwan, Pichid

    2015-10-01

    The need for efficient image denoising methods has grown with the massive production of digital images and movies of all kinds. The distortion of images by additive white Gaussian noise (AWGN) is common during their processing and transmission. This paper is concerned with dual-tree complex wavelet-based image denoising using Bayesian techniques. Indeed, one of the cruxes of the Bayesian image denoising algorithms is to estimate the local variance of the image. Here, we employ maximum a posteriori (MAP) estimation to estimate the local observed variance, with a Maxwell density prior on the local variance and a Gaussian distribution for the noisy wavelet coefficients. Evidently, our selection of prior distribution is motivated by analytical and computational tractability. The experimental results show that the proposed method yields good denoising results.

  18. Dependence and risk assessment for oil prices and exchange rate portfolios: A wavelet based approach

    NASA Astrophysics Data System (ADS)

    Aloui, Chaker; Jammazi, Rania

    2015-10-01

    In this article, we propose a wavelet-based approach to accommodate the stylized facts and complex structure of financial data, caused by frequent and abrupt market changes and noise. Specifically, we show how the combination of both continuous and discrete wavelet transforms with traditional financial models helps improve the market risk assessment of portfolios. In the empirical stage, three wavelet-based models (wavelet-EGARCH with dynamic conditional correlations, wavelet-copula, and wavelet-extreme value) are considered and applied to crude oil price and US dollar exchange rate data. Our findings show that the wavelet-based approach provides an effective and powerful tool for detecting extreme moments and improving the accuracy of VaR and Expected Shortfall estimates of oil-exchange rate portfolios after noise is removed from the original data.

  19. Wavelet-based ultrasound image denoising: performance analysis and comparison.

    PubMed

    Rizi, F Yousefi; Noubari, H Ahmadi; Setarehdan, S K

    2011-01-01

    Ultrasound images are generally affected by multiplicative speckle noise, which is mainly due to the coherent nature of the scattering phenomenon. Speckle noise filtering is thus a critical pre-processing step in medical ultrasound imaging provided that the diagnostic features of interest are not lost. A comparative study of the performance of alternative wavelet based ultrasound image denoising methods is presented in this article. In particular, the contourlet and curvelet techniques with dual tree complex and real and double density wavelet transform denoising methods were applied to real ultrasound images and results were quantitatively compared. The results show that the curvelet-based method performs better than the other methods and can effectively reduce most of the speckle noise content of a given image. PMID:22255196

  20. Hydrologic regionalization using wavelet-based multiscale entropy method

    NASA Astrophysics Data System (ADS)

    Agarwal, A.; Maheswaran, R.; Sehgal, V.; Khosa, R.; Sivakumar, B.; Bernhofer, C.

    2016-07-01

    Catchment regionalization is an important step in estimating hydrologic parameters of ungaged basins. This paper proposes a multiscale entropy method using a wavelet transform and k-means based hybrid approach for clustering of hydrologic catchments. Multi-resolution wavelet transform of a time series reveals structure, which is often obscured in streamflow records, by permitting gross and fine features of a signal to be separated. Wavelet-based Multiscale Entropy (WME) is a measure of randomness of the given time series at different timescales. In this study, streamflow records observed during 1951-2002 at 530 selected catchments throughout the United States are used to test the proposed regionalization framework. Further, based on the pattern of entropy across multiple scales, each cluster is given an entropy signature that provides an approximation of the entropy pattern of the streamflow data in each cluster. The tests for homogeneity reveal that the proposed approach works very well in regionalization.
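
    A compact sketch of the two stages described above, under simplifying assumptions: the wavelet-based multiscale entropy is taken here as the Shannon entropy of the normalized detail-coefficient energies at each scale, and the resulting signatures are clustered with k-means. The paper's exact entropy definition, pre-processing, and hybrid clustering may differ, and the streamflow records below are synthetic.

```python
import numpy as np
import pywt
from sklearn.cluster import KMeans

def multiscale_entropy_signature(series, wavelet="db4", level=6):
    """Shannon entropy of the normalized coefficient energies in each detail band."""
    coeffs = pywt.wavedec(series, wavelet, level=level)
    sig = []
    for d in coeffs[1:]:                      # detail bands, coarse to fine
        e = d**2
        p = e / e.sum()
        sig.append(-(p * np.log(p + 1e-15)).sum())
    return np.array(sig)

rng = np.random.default_rng(0)
streamflows = [rng.gamma(2.0, 1.0, size=1024) for _ in range(30)]   # toy records
X = np.vstack([multiscale_entropy_signature(q) for q in streamflows])
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
```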

  1. Wavelet-based acoustic recognition of aircraft

    SciTech Connect

    Dress, W.B.; Kercel, S.W.

    1994-09-01

    We describe a wavelet-based technique for identifying aircraft from acoustic emissions during take-off and landing. Tests show that the sensor can be a single, inexpensive hearing-aid microphone placed close to the ground. The paper describes data collection, analysis by various techniques, methods of event classification, and extraction of certain physical parameters from wavelet subspace projections. The primary goal of this paper is to show that wavelet analysis can be used as a divide-and-conquer first step in signal processing, providing both simplification and noise filtering. The idea is to project the original signal onto the orthogonal wavelet subspaces, both details and approximations. Subsequent analysis, such as system identification, nonlinear systems analysis, and feature extraction, is then carried out on the various signal subspaces.

  2. Wavelet-based approach to character skeleton.

    PubMed

    You, Xinge; Tang, Yuan Yan

    2007-05-01

    Character skeleton plays a significant role in character recognition. The strokes of a character may consist of two regions, i.e., singular and regular regions. The intersections and junctions of the strokes belong to the singular region, while the straight and smooth parts of the strokes belong to the regular region. Therefore, a skeletonization method requires two different processes to treat the skeletons in these two different regions. All traditional skeletonization algorithms are based on the symmetry analysis technique. The major problems of these methods are as follows. 1) The computation of the primary skeleton in the regular region is indirect, so that its implementation is sophisticated and costly. 2) The extracted skeleton cannot be exactly located on the central line of the stroke. 3) The captured skeleton in the singular region may be distorted by artifacts and branches. To overcome these problems, a novel scheme for extracting the skeleton of a character based on the wavelet transform is presented in this paper. This scheme consists of two main steps, namely: a) extraction of primary skeleton in the regular region and b) amendment processing of the primary skeletons and connection of them in the singular region. A direct technique is used in the first step, where a new wavelet-based symmetry analysis is developed for finding the central line of the stroke directly. A novel method called smooth interpolation is designed in the second step, where a smooth operation is applied to the primary skeleton, and, thereafter, the interpolation compensation technique is proposed to link the primary skeleton, so that the skeleton in the singular region can be produced. Experiments are conducted and positive results are achieved, which show that the proposed skeletonization scheme is applicable not only to binary images but also to gray-level images, and the skeleton is robust against noise and affine transforms. PMID:17491454

  3. Wavelet-based analysis of circadian behavioral rhythms.

    PubMed

    Leise, Tanya L

    2015-01-01

    The challenging problems presented by noisy biological oscillators have led to the development of a great variety of methods for accurately estimating rhythmic parameters such as period and amplitude. This chapter focuses on wavelet-based methods, which can be quite effective for assessing how rhythms change over time, particularly if time series are at least a week in length. These methods can offer alternative views to complement more traditional methods of evaluating behavioral records. The analytic wavelet transform can estimate the instantaneous period and amplitude, as well as the phase of the rhythm at each time point, while the discrete wavelet transform can extract the circadian component of activity and measure the relative strength of that circadian component compared to those in other frequency bands. Wavelet transforms do not require the removal of noise or trend, and can, in fact, be effective at removing noise and trend from oscillatory time series. The Fourier periodogram and spectrogram are reviewed, followed by descriptions of the analytic and discrete wavelet transforms. Examples illustrate application of each method and their prior use in chronobiology is surveyed. Issues such as edge effects, frequency leakage, and implications of the uncertainty principle are also addressed. PMID:25662453

  4. ESTIMATES OF BIOMASS DENSITY FOR TROPICAL FORESTS

    EPA Science Inventory

    An accurate estimation of the biomass density in forests is a necessary step in understanding the global carbon cycle and production of other atmospheric trace gases from biomass burning. In this paper the authors summarize the various approaches that have been developed for estimating...

  5. Nonparametric entropy estimation using kernel densities.

    PubMed

    Lake, Douglas E

    2009-01-01

    The entropy of experimental data from the biological and medical sciences provides additional information over summary statistics. Calculating entropy involves estimates of probability density functions, which can be effectively accomplished using kernel density methods. Kernel density estimation has been widely studied and a univariate implementation is readily available in MATLAB. The traditional definition of Shannon entropy is part of a larger family of statistics, called Renyi entropy, which are useful in applications that require a measure of the Gaussianity of data. Of particular note is the quadratic entropy which is related to the Friedman-Tukey (FT) index, a widely used measure in the statistical community. One application where quadratic entropy is very useful is the detection of abnormal cardiac rhythms, such as atrial fibrillation (AF). Asymptotic and exact small-sample results for optimal bandwidth and kernel selection to estimate the FT index are presented and lead to improved methods for entropy estimation. PMID:19897106
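
    The quadratic (Rényi order-2) entropy mentioned above has a convenient closed form under a Gaussian KDE, because the integral of the squared density reduces to pairwise kernel evaluations at bandwidth σ√2. A minimal sketch follows (not the paper's optimal-bandwidth results; the RR-interval series is hypothetical).

```python
import numpy as np

def renyi2_entropy(samples, sigma):
    """Quadratic (Renyi order-2) entropy of a 1-D Gaussian KDE, in nats."""
    x = np.asarray(samples)
    d = x[:, None] - x[None, :]
    # integral of p_hat^2 = average pairwise Gaussian kernel with variance 2*sigma^2
    ip = np.exp(-d**2 / (4.0 * sigma**2)).mean() / np.sqrt(4.0 * np.pi * sigma**2)
    return -np.log(ip)

rng = np.random.default_rng(0)
rr_intervals = rng.normal(0.8, 0.05, size=400)     # hypothetical RR-interval series, seconds
print(f"quadratic entropy: {renyi2_entropy(rr_intervals, sigma=0.02):.3f} nats")
```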

  6. A wavelet-based baseline drift correction method for grounded electrical source airborne transient electromagnetic signals

    NASA Astrophysics Data System (ADS)

    Wang, Yuan; Ji, Yanju; Li, Suyi; Lin, Jun; Zhou, Fengdao; Yang, Guihong

    2013-09-01

    A grounded electrical source airborne transient electromagnetic (GREATEM) system on an airship enjoys high depth of prospecting and spatial resolution, as well as outstanding detection efficiency and easy flight control. However, the movement and swing of the front-fixed receiving coil can cause severe baseline drift, leading to inferior resistivity image formation. Consequently, the reduction of baseline drift of GREATEM is of vital importance for inversion interpretation. To correct the baseline drift, a traditional interpolation method estimates the baseline 'envelope' using the linear interpolation between the calculated start and end points of all cycles, and obtains the corrected signal by subtracting the envelope from the original signal. However, the effectiveness and efficiency of the removal is found to be low. Considering the characteristics of the baseline drift in GREATEM data, this study proposes a wavelet-based method based on multi-resolution analysis. The optimal wavelet basis and decomposition level are determined through iterative trial-and-error comparison. This application uses the sym8 wavelet with 10 decomposition levels, obtains the approximation at level 10 as the baseline drift, and then obtains the corrected signal by removing the estimated baseline drift from the original signal. To examine the performance of our proposed method, we establish a dipping sheet model and calculate the theoretical response. Through simulations, we compare the signal-to-noise ratio, signal distortion, and processing speed of the wavelet-based method and those of the interpolation method. Simulation results show that the wavelet-based method outperforms the interpolation method. We also use field data to evaluate the methods, comparing the depth section images of apparent resistivity using the original signal, the interpolation-corrected signal and the wavelet-corrected signal, respectively. The results confirm that our proposed wavelet-based method is an
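
    The core correction described above reduces to taking the level-10 approximation of a sym8 decomposition as the baseline and subtracting it; the PyWavelets sketch below applies that step to a synthetic drifting signal (GREATEM-specific cycle handling omitted).

```python
import numpy as np
import pywt

def remove_baseline(signal, wavelet="sym8", level=10):
    """Estimate baseline drift as the level-N approximation and subtract it."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    coeffs_baseline = [coeffs[0]] + [np.zeros_like(d) for d in coeffs[1:]]
    baseline = pywt.waverec(coeffs_baseline, wavelet)[: len(signal)]
    return signal - baseline, baseline

fs = 2000
t = np.arange(20 * fs) / fs
drift = 0.5 * np.sin(2 * np.pi * 0.03 * t) + 0.02 * t      # slow swing-induced drift
transient = 0.8 * np.exp(-((t % 1.0) * 40.0))               # repeated decay transients
corrected, est_drift = remove_baseline(transient + drift)
```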

  7. Estimating animal population density using passive acoustics.

    PubMed

    Marques, Tiago A; Thomas, Len; Martin, Stephen W; Mellinger, David K; Ward, Jessica A; Moretti, David J; Harris, Danielle; Tyack, Peter L

    2013-05-01

    Reliable estimation of the size or density of wild animal populations is very important for effective wildlife management, conservation and ecology. Currently, the most widely used methods for obtaining such estimates involve either sighting animals from transect lines or some form of capture-recapture on marked or uniquely identifiable individuals. However, many species are difficult to sight, and cannot be easily marked or recaptured. Some of these species produce readily identifiable sounds, providing an opportunity to use passive acoustic data to estimate animal density. In addition, even for species for which other visually based methods are feasible, passive acoustic methods offer the potential for greater detection ranges in some environments (e.g. underwater or in dense forest), and hence potentially better precision. Automated data collection means that surveys can take place at times and in places where it would be too expensive or dangerous to send human observers. Here, we present an overview of animal density estimation using passive acoustic data, a relatively new and fast-developing field. We review the types of data and methodological approaches currently available to researchers and we provide a framework for acoustics-based density estimation, illustrated with examples from real-world case studies. We mention moving sensor platforms (e.g. towed acoustics), but then focus on methods involving sensors at fixed locations, particularly hydrophones to survey marine mammals, as acoustic-based density estimation research to date has been concentrated in this area. Primary among these are methods based on distance sampling and spatially explicit capture-recapture. The methods are also applicable to other aquatic and terrestrial sound-producing taxa. We conclude that, despite being in its infancy, density estimation based on passive acoustic data likely will become an important method for surveying a number of diverse taxa, such as sea mammals, fish, birds
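
    One of the simplest estimators in the family reviewed here is a fixed-sensor cue-count estimator from distance sampling. As a worked example with made-up numbers (not taken from the paper), density follows from the cue count, the monitored area, the detection probability, the monitoring time, the cue production rate, and the false-positive proportion.

```python
import math

# Hypothetical survey quantities (illustrative only)
n_cues = 320          # detected cues over the survey
false_pos = 0.05      # estimated proportion of false detections
K = 4                 # number of fixed sensors
w = 6000.0            # truncation radius around each sensor, m
p_det = 0.35          # mean probability of detecting a cue within radius w
T = 24.0 * 3600.0     # monitoring time, s
cue_rate = 0.008      # cues per animal per second

area = K * math.pi * w**2                                        # total monitored area, m^2
animals_per_m2 = n_cues * (1 - false_pos) / (area * p_det * T * cue_rate)
print(f"density ~ {animals_per_m2 * 1e9:.2f} animals per 1000 km^2")
```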

  8. Estimating animal population density using passive acoustics

    PubMed Central

    Marques, Tiago A; Thomas, Len; Martin, Stephen W; Mellinger, David K; Ward, Jessica A; Moretti, David J; Harris, Danielle; Tyack, Peter L

    2013-01-01

    Reliable estimation of the size or density of wild animal populations is very important for effective wildlife management, conservation and ecology. Currently, the most widely used methods for obtaining such estimates involve either sighting animals from transect lines or some form of capture-recapture on marked or uniquely identifiable individuals. However, many species are difficult to sight, and cannot be easily marked or recaptured. Some of these species produce readily identifiable sounds, providing an opportunity to use passive acoustic data to estimate animal density. In addition, even for species for which other visually based methods are feasible, passive acoustic methods offer the potential for greater detection ranges in some environments (e.g. underwater or in dense forest), and hence potentially better precision. Automated data collection means that surveys can take place at times and in places where it would be too expensive or dangerous to send human observers. Here, we present an overview of animal density estimation using passive acoustic data, a relatively new and fast-developing field. We review the types of data and methodological approaches currently available to researchers and we provide a framework for acoustics-based density estimation, illustrated with examples from real-world case studies. We mention moving sensor platforms (e.g. towed acoustics), but then focus on methods involving sensors at fixed locations, particularly hydrophones to survey marine mammals, as acoustic-based density estimation research to date has been concentrated in this area. Primary among these are methods based on distance sampling and spatially explicit capture-recapture. The methods are also applicable to other aquatic and terrestrial sound-producing taxa. We conclude that, despite being in its infancy, density estimation based on passive acoustic data likely will become an important method for surveying a number of diverse taxa, such as sea mammals, fish, birds

  9. Density Estimation for Projected Exoplanet Quantities

    NASA Astrophysics Data System (ADS)

    Brown, Robert A.

    2011-05-01

    Exoplanet searches using radial velocity (RV) and microlensing (ML) produce samples of "projected" mass and orbital radius, respectively. We present a new method for estimating the probability density distribution (density) of the unprojected quantity from such samples. For a sample of n data values, the method involves solving n simultaneous linear equations to determine the weights of delta functions for the raw, unsmoothed density of the unprojected quantity that cause the associated cumulative distribution function (CDF) of the projected quantity to exactly reproduce the empirical CDF of the sample at the locations of the n data values. We smooth the raw density using nonparametric kernel density estimation with a normal kernel of bandwidth σ. We calibrate the dependence of σ on n by Monte Carlo experiments performed on samples drawn from a theoretical density, in which the integrated square error is minimized. We scale this calibration to the ranges of real RV samples using the Normal Reference Rule. The resolution and amplitude accuracy of the estimated density improve with n. For typical RV and ML samples, we expect the fractional noise at the PDF peak to be approximately 80 n^(-log 2). For illustrations, we apply the new method to 67 RV values given a similar treatment by Jorissen et al. in 2001, and to the 308 RV values listed at exoplanets.org on 2010 October 20. In addition to analyzing observational results, our methods can be used to develop measurement requirements—particularly on the minimum sample size n—for future programs, such as the microlensing survey of Earth-like exoplanets recommended by the Astro 2010 committee.

  10. Enhancing Hyperspectral Data Throughput Utilizing Wavelet-Based Fingerprints

    SciTech Connect

    I. W. Ginsberg

    1999-09-01

    Multiresolutional decompositions known as spectral fingerprints are often used to extract spectral features from multispectral/hyperspectral data. In this study, the authors investigate the use of wavelet-based algorithms for generating spectral fingerprints. The wavelet-based algorithms are compared to the currently used method, traditional convolution with first-derivative Gaussian filters. The comparison analysis consists of two parts: (a) the computational expense of the new method is compared with the computational costs of the current method and (b) the outputs of the wavelet-based methods are compared with those of the current method to determine any practical differences in the resulting spectral fingerprints. The results show that the wavelet-based algorithms can greatly reduce the computational expense of generating spectral fingerprints, while practically no differences exist in the resulting fingerprints. The analysis is conducted on a database of hyperspectral signatures, namely, Hyperspectral Digital Image Collection Experiment (HYDICE) signatures. The reduction in computational expense is by a factor of about 30, and the average Euclidean distance between resulting fingerprints is on the order of 0.02.

  11. 3D Wavelet-Based Filter and Method

    DOEpatents

    Moss, William C.; Haase, Sebastian; Sedat, John W.

    2008-08-12

    A 3D wavelet-based filter for visualizing and locating structural features of a user-specified linear size in 2D or 3D image data. The only input parameter is a characteristic linear size of the feature of interest, and the filter output contains only those regions that are correlated with the characteristic size, thus denoising the image.

  12. Density Estimation Framework for Model Error Assessment

    NASA Astrophysics Data System (ADS)

    Sargsyan, K.; Liu, Z.; Najm, H. N.; Safta, C.; VanBloemenWaanders, B.; Michelsen, H. A.; Bambha, R.

    2014-12-01

    In this work we highlight the importance of model error assessment in physical model calibration studies. Conventional calibration methods often assume the model is perfect and account for data noise only. Consequently, the estimated parameters typically have biased values that implicitly compensate for model deficiencies. Moreover, improving the amount and the quality of data may not improve the parameter estimates since the model discrepancy is not accounted for. In state-of-the-art methods model discrepancy is explicitly accounted for by enhancing the physical model with a synthetic statistical additive term, which allows appropriate parameter estimates. However, these statistical additive terms do not increase the predictive capability of the model because they are tuned for particular output observables and may even violate physical constraints. We introduce a framework in which model errors are captured by allowing variability in specific model components and parameterizations for the purpose of achieving meaningful predictions that are both consistent with the data spread and appropriately disambiguate model and data errors. Here we cast model parameters as random variables, embedding the calibration problem within a density estimation framework. Further, we calibrate for the parameters of the joint input density. The likelihood function for the associated inverse problem is degenerate, therefore we use Approximate Bayesian Computation (ABC) to build prediction-constraining likelihoods and illustrate the strengths of the method on synthetic cases. We also apply the ABC-enhanced density estimation to the TransCom 3 CO2 intercomparison study (Gurney, K. R., et al., Tellus, 55B, pp. 555-579, 2003) and calibrate 15 transport models for regional carbon sources and sinks given atmospheric CO2 concentration measurements.

  13. Directional wavelet based features for colonic polyp classification.

    PubMed

    Wimmer, Georg; Tamaki, Toru; Tischendorf, J J W; Häfner, Michael; Yoshida, Shigeto; Tanaka, Shinji; Uhl, Andreas

    2016-07-01

    In this work, various wavelet based methods like the discrete wavelet transform, the dual-tree complex wavelet transform, the Gabor wavelet transform, curvelets, contourlets and shearlets are applied for the automated classification of colonic polyps. The methods are tested on 8 HD-endoscopic image databases, where each database is acquired using different imaging modalities (Pentax's i-Scan technology combined with or without staining the mucosa), 2 NBI high-magnification databases and one database with chromoscopy high-magnification images. To evaluate the suitability of the wavelet based methods with respect to the classification of colonic polyps, the classification performances of 3 wavelet transforms and the more recent curvelets, contourlets and shearlets are compared using a common framework. Wavelet transforms have already been applied often and successfully to the classification of colonic polyps, whereas curvelets, contourlets and shearlets have not been used for this purpose so far. We apply different feature extraction techniques to extract the information of the subbands of the wavelet based methods. Most of the 25 approaches were already published in different texture classification contexts. Thus, the aim is also to assess and compare their classification performance using a common framework. Three of the 25 approaches are novel. These three approaches extract Weibull features from the subbands of curvelets, contourlets and shearlets. Additionally, 5 state-of-the-art non wavelet based methods are applied to our databases so that we can compare their results with those of the wavelet based methods. It turned out that extracting Weibull distribution parameters from the subband coefficients generally leads to high classification accuracy, especially for the dual-tree complex wavelet transform, the Gabor wavelet transform and the Shearlet transform. These three wavelet based transforms in combination with Weibull features even outperform the state-of-the-art methods.
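
    The Weibull-feature idea can be illustrated with an ordinary 2-D DWT: fit a two-parameter Weibull distribution to the absolute coefficients of each detail subband and use the fitted (shape, scale) pairs as a feature vector. The paper's directional transforms (curvelets, contourlets, shearlets) and its classifier are not reproduced here, and the image patch below is random.

```python
import numpy as np
import pywt
from scipy.stats import weibull_min

def weibull_subband_features(image, wavelet="db2", level=2):
    """Fit Weibull (shape, scale) to |coefficients| of each detail subband."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    feats = []
    for detail_triplet in coeffs[1:]:
        for band in detail_triplet:                       # horizontal, vertical, diagonal
            mag = np.abs(band).ravel() + 1e-12
            shape, _, scale = weibull_min.fit(mag, floc=0)
            feats.extend([shape, scale])
    return np.array(feats)

rng = np.random.default_rng(0)
patch = rng.random((128, 128))          # stand-in for an endoscopic image patch
features = weibull_subband_features(patch)
```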

  14. Bird population density estimated from acoustic signals

    USGS Publications Warehouse

    Dawson, D.K.; Efford, M.G.

    2009-01-01

    Many animal species are detected primarily by sound. Although songs, calls and other sounds are often used for population assessment, as in bird point counts and hydrophone surveys of cetaceans, there are few rigorous methods for estimating population density from acoustic data. 2. The problem has several parts - distinguishing individuals, adjusting for individuals that are missed, and adjusting for the area sampled. Spatially explicit capture-recapture (SECR) is a statistical methodology that addresses jointly the second and third parts of the problem. We have extended SECR to use uncalibrated information from acoustic signals on the distance to each source. 3. We applied this extension of SECR to data from an acoustic survey of ovenbird Seiurus aurocapilla density in an eastern US deciduous forest with multiple four-microphone arrays. We modelled average power from spectrograms of ovenbird songs measured within a window of 0.7 s duration and frequencies between 4200 and 5200 Hz. 4. The resulting estimates of the density of singing males (0.19 ha^-1, SE 0.03 ha^-1) were consistent with estimates of the adult male population density from mist-netting (0.36 ha^-1, SE 0.12 ha^-1). The fitted model predicts sound attenuation of 0.11 dB m^-1 (SE 0.01 dB m^-1) in excess of losses from spherical spreading. 5. Synthesis and applications. Our method for estimating animal population density from acoustic signals fills a gap in the census methods available for visually cryptic but vocal taxa, including many species of bird and cetacean. The necessary equipment is simple and readily available; as few as two microphones may provide adequate estimates, given spatial replication. The method requires that individuals detected at the same place are acoustically distinguishable and all individuals vocalize during the recording interval, or that the per capita rate of vocalization is known. We believe these requirements can be met, with suitable field methods, for a significant

  15. Estimation of large-scale dimension densities.

    PubMed

    Raab, C; Kurths, J

    2001-07-01

    We propose a technique to calculate large-scale dimension densities in both higher-dimensional spatio-temporal systems and low-dimensional systems from only a few data points, where known methods usually have an unsatisfactory scaling behavior. This is mainly due to boundary and finite-size effects. With our rather simple method, we normalize boundary effects and get a significant correction of the dimension estimate. This straightforward approach is based on rather general assumptions. So even weak coherent structures obtained from small spatial couplings can be detected with this method, which is impossible by using the Lyapunov-dimension density. We demonstrate the efficiency of our technique for coupled logistic maps, coupled tent maps, the Lorenz attractor, and the Roessler attractor. PMID:11461376

  16. Estimation of large-scale dimension densities

    NASA Astrophysics Data System (ADS)

    Raab, Corinna; Kurths, Jürgen

    2001-07-01

    We propose a technique to calculate large-scale dimension densities in both higher-dimensional spatio-temporal systems and low-dimensional systems from only a few data points, where known methods usually have an unsatisfactory scaling behavior. This is mainly due to boundary and finite-size effects. With our rather simple method, we normalize boundary effects and get a significant correction of the dimension estimate. This straightforward approach is based on rather general assumptions. So even weak coherent structures obtained from small spatial couplings can be detected with this method, which is impossible by using the Lyapunov-dimension density. We demonstrate the efficiency of our technique for coupled logistic maps, coupled tent maps, the Lorenz attractor, and the Roessler attractor.

  17. Regularized Multitask Learning for Multidimensional Log-Density Gradient Estimation.

    PubMed

    Yamane, Ikko; Sasaki, Hiroaki; Sugiyama, Masashi

    2016-07-01

    Log-density gradient estimation is a fundamental statistical problem and possesses various practical applications such as clustering and measuring nongaussianity. A naive two-step approach of first estimating the density and then taking its log gradient is unreliable because an accurate density estimate does not necessarily lead to an accurate log-density gradient estimate. To cope with this problem, a method to directly estimate the log-density gradient without density estimation has been explored and demonstrated to work much better than the two-step method. The objective of this letter is to improve the performance of this direct method in multidimensional cases. Our idea is to regard the problem of log-density gradient estimation in each dimension as a task and apply regularized multitask learning to the direct log-density gradient estimator. We experimentally demonstrate the usefulness of the proposed multitask method in log-density gradient estimation and mode-seeking clustering. PMID:27171983

  18. Wavelet-based audio embedding and audio/video compression

    NASA Astrophysics Data System (ADS)

    Mendenhall, Michael J.; Claypoole, Roger L., Jr.

    2001-12-01

    Watermarking, traditionally used for copyright protection, is used in a new and exciting way. An efficient wavelet-based watermarking technique embeds audio information into a video signal. Several effective compression techniques are applied to compress the resulting audio/video signal in an embedded fashion. This wavelet-based compression algorithm incorporates bit-plane coding, index coding, and Huffman coding. To demonstrate the potential of this audio embedding and audio/video compression algorithm, we embed an audio signal into a video signal and then compress. Results show that overall compression rates of 15:1 can be achieved. The video signal is reconstructed with a median PSNR of nearly 33 dB. Finally, the audio signal is extracted from the compressed audio/video signal without error.

  19. Wavelet based ECG compression with adaptive thresholding and efficient coding.

    PubMed

    Alshamali, A

    2010-01-01

    This paper proposes a new wavelet-based ECG compression technique. It is based on optimized thresholds to determine significant wavelet coefficients and an efficient coding for their positions. Huffman encoding is used to enhance the compression ratio. The proposed technique is tested using several records taken from the MIT-BIH arrhythmia database. Simulation results show that the proposed technique outperforms others obtained by previously published schemes. PMID:20608811
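
    A minimal sketch of the generic pipeline named above: keep only a fixed fraction of the largest DWT coefficients and report the compression ratio and percent-RMS difference. The paper's optimized thresholds, position coding, and Huffman stage are not reproduced, and the test waveform is synthetic rather than an MIT-BIH record.

```python
import numpy as np
import pywt

def compress_ecg(ecg, wavelet="db4", level=5, keep_fraction=0.10):
    """Zero all but the largest DWT coefficients; report CR and percent-RMS difference."""
    coeffs = pywt.wavedec(ecg, wavelet, level=level)
    flat, slices = pywt.coeffs_to_array(coeffs)
    k = max(1, int(keep_fraction * flat.size))
    thr = np.sort(np.abs(flat))[-k]                         # keep the k largest magnitudes
    kept = np.where(np.abs(flat) >= thr, flat, 0.0)
    rec = pywt.waverec(pywt.array_to_coeffs(kept, slices, output_format="wavedec"),
                       wavelet)[: len(ecg)]
    cr = flat.size / np.count_nonzero(kept)                 # crude compression ratio
    prd = 100.0 * np.linalg.norm(ecg - rec) / np.linalg.norm(ecg)
    return rec, cr, prd

fs = 360
t = np.arange(10 * fs) / fs
ecg = np.sin(2 * np.pi * 1.2 * t) + 0.4 * np.sin(2 * np.pi * 15 * t) ** 5
rec, cr, prd = compress_ecg(ecg)
print(f"CR ~ {cr:.1f}, PRD ~ {prd:.2f}%")
```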

  20. Fast wavelet based algorithms for linear evolution equations

    NASA Technical Reports Server (NTRS)

    Engquist, Bjorn; Osher, Stanley; Zhong, Sifen

    1992-01-01

    A class of fast wavelet-based algorithms was devised for linear evolution equations whose coefficients are time independent. The method draws on the work of Beylkin, Coifman, and Rokhlin which they applied to general Calderon-Zygmund type integral operators. A modification of their idea is applied to linear hyperbolic and parabolic equations, with spatially varying coefficients. A significant speedup over standard methods is obtained when applied to hyperbolic equations in one space dimension and parabolic equations in multidimensions.

  1. Density Estimations in Laboratory Debris Flow Experiments

    NASA Astrophysics Data System (ADS)

    Queiroz de Oliveira, Gustavo; Kulisch, Helmut; Malcherek, Andreas; Fischer, Jan-Thomas; Pudasaini, Shiva P.

    2016-04-01

    Bulk density and its variation are important physical quantities for estimating the solid-liquid fractions in two-phase debris flows. Here we present mass and flow depth measurements for experiments performed in a large-scale laboratory setup. Once the mixture is released and moves down the inclined channel, measurements allow us to determine the bulk density evolution throughout the debris flow. Flow depths are determined by ultrasonic pulse reflection, and the mass is measured with a total normal force sensor. The data were obtained at 50 Hz. The initial two-phase material was composed of 350 kg of debris with a water content of 40%. A very fine pebble with a mean particle diameter of 3 mm, particle density of 2760 kg/m³ and bulk density of 1400 kg/m³ in dry condition was chosen as the solid material. Measurements reveal that the debris bulk density remains high from the head to the middle of the debris body whereas it drops substantially at the tail. This indicates lower water content at the tail, compared to the head and the middle portion of the debris body. This means that the solid and fluid fractions vary strongly in a non-linear manner along the flow path, and from the head to the tail of the debris mass. Importantly, this spatial-temporal density variation plays a crucial role in determining the impact forces associated with the dynamics of the flow. Our setup allows for investigating different two-phase material compositions, including large fluid fractions, at high resolution. The considered experimental setup may enable us to transfer the observed phenomena to natural large-scale events. Furthermore, the measurement data allow evaluation of the results of numerical two-phase mass flow simulations. These experiments are part of the project avaflow.org, which intends to develop a GIS-based open-source computational tool to describe a wide spectrum of rapid geophysical mass flows, including avalanches and real two-phase debris flows down complex natural

  2. Wavelet-based verification of the quantitative precipitation forecast

    NASA Astrophysics Data System (ADS)

    Yano, Jun-Ichi; Jakubiak, Bogumil

    2016-06-01

    This paper explores the use of wavelets for spatial verification of quantitative precipitation forecasts (QPF), and especially the capacity of wavelets to provide both localization and scale information. Two 24-h forecast experiments using the two versions of the Coupled Ocean/Atmosphere Mesoscale Prediction System (COAMPS) on 22 August 2010 over Poland are used to illustrate the method. Strong spatial localizations and associated intermittency of the precipitation field make verification of QPF difficult using standard statistical methods. The wavelet becomes an attractive alternative, because it is specifically designed to extract spatially localized features. The wavelet modes are characterized by the two indices for the scale and the localization. Thus, these indices can simply be employed for characterizing the performance of QPF in scale and localization without any further elaboration or tunable parameters. Furthermore, spatially-localized features can be extracted in wavelet space in a relatively straightforward manner with only a weak dependence on a threshold. Such a feature may be considered an advantage of the wavelet-based method over more conventional "object" oriented verification methods, as the latter tend to represent strong threshold sensitivities. The present paper also points out limits of the so-called "scale separation" methods based on wavelets. Our study demonstrates how these wavelet-based QPF verifications can be performed straightforwardly. Possibilities for further developments of the wavelet-based methods, especially towards a goal of identifying a weak physical process contributing to forecast error, are also pointed out.

  3. Kernel density estimation using graphical processing unit

    NASA Astrophysics Data System (ADS)

    Sunarko; Su'ud, Zaki

    2015-09-01

    Kernel density estimation for particles distributed over a 2-dimensional space is calculated using a single graphical processing unit (GTX 660Ti GPU) and CUDA-C language. Parallel calculations are done for particles having a bivariate normal distribution and by assigning calculations for equally-spaced node points to each scalar processor in the GPU. The numbers of particles, blocks and threads are varied to identify a favorable configuration. Comparisons are obtained by performing the same calculation using 1, 2 and 4 processors on a 3.0 GHz CPU using MPICH 2.0 routines. Speedups attained with the GPU are in the range of 88 to 349 times compared to the multiprocessor CPU. Blocks of 128 threads are found to be the optimum configuration for this case.
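
    The per-node computation that the GPU parallelizes is, at each grid node, a sum of Gaussian kernel contributions from all particles. The vectorized NumPy version below (single CPU, no CUDA) computes the same bivariate KDE and is the kind of reference against which such speedups are measured.

```python
import numpy as np

def grid_kde_2d(particles, grid_x, grid_y, bandwidth):
    """Bivariate Gaussian KDE evaluated at every node of a rectangular grid."""
    gx, gy = np.meshgrid(grid_x, grid_y, indexing="ij")
    nodes = np.column_stack([gx.ravel(), gy.ravel()])               # (m, 2) node coordinates
    d2 = ((nodes**2).sum(axis=1)[:, None] + (particles**2).sum(axis=1)[None, :]
          - 2.0 * nodes @ particles.T)                              # squared node-particle distances
    k = np.exp(-0.5 * d2 / bandwidth**2) / (2.0 * np.pi * bandwidth**2)
    return k.mean(axis=1).reshape(gx.shape)                         # density at each node

rng = np.random.default_rng(0)
particles = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.4], [0.4, 0.5]], size=2000)
xs = ys = np.linspace(-3.0, 3.0, 50)
density = grid_kde_2d(particles, xs, ys, bandwidth=0.3)
```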

  4. Review of some results in bivariate density estimation

    NASA Technical Reports Server (NTRS)

    Scott, D. W.

    1982-01-01

    Results are reviewed for choosing smoothing parameters for some bivariate density estimators. Experience gained in comparing the effects of smoothing parameters on probability density estimators for univariate and bivariate data is summarized.

  5. ESTIMATING MICROORGANISM DENSITIES IN AEROSOLS FROM SPRAY IRRIGATION OF WASTEWATER

    EPA Science Inventory

    This document summarizes current knowledge about estimating the density of microorganisms in the air near wastewater management facilities, with emphasis on spray irrigation sites. One technique for modeling microorganism density in air is provided and an aerosol density estimati...

  6. Traffic characterization and modeling of wavelet-based VBR encoded video

    SciTech Connect

    Yu Kuo; Jabbari, B.; Zafar, S.

    1997-07-01

    Wavelet-based video codecs provide a hierarchical structure for the encoded data, which can cater to a wide variety of applications such as multimedia systems. The characteristics of such an encoder and its output, however, have not been well examined. In this paper, the authors investigate the output characteristics of a wavelet-based video codec and develop a composite model to capture the traffic behavior of its output video data. Wavelet decomposition transforms the input video into a hierarchical structure with a number of subimages at different resolutions and scales. The top-level wavelet in this structure contains most of the signal energy. They first describe the characteristics of traffic generated by each subimage and the effect of dropping various subimages at the encoder on the signal-to-noise ratio at the receiver. They then develop an N-state Markov model to describe the traffic behavior of the top wavelet. The behavior of the remaining wavelets is then obtained through estimation, based on the correlations between the subimages at the same level of resolution and the wavelets located at the immediately higher level. In this paper, a three-state Markov model is developed. The resulting traffic behavior, described by various statistical properties such as moments and correlations, is then used to validate their model.
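
    A minimal sketch of generating a synthetic bit-rate trace from a three-state Markov chain, the kind of model fitted to the top-level wavelet traffic in the paper, is given below. The transition matrix and per-state rate statistics are invented placeholders; the paper estimates such quantities from encoded video.

        # Sketch: synthetic VBR trace from a 3-state Markov model (all numbers are placeholders).
        import numpy as np

        P = np.array([[0.90, 0.08, 0.02],     # low-activity state
                      [0.10, 0.80, 0.10],     # medium-activity state
                      [0.05, 0.15, 0.80]])    # high-activity state
        rate_mean = np.array([0.5, 1.5, 4.0])   # Mbit/s per state (illustrative)
        rate_std = np.array([0.1, 0.3, 0.8])

        def simulate_trace(n_frames, rng=np.random.default_rng(2)):
            state, trace = 0, []
            for _ in range(n_frames):
                trace.append(rng.normal(rate_mean[state], rate_std[state]))
                state = rng.choice(3, p=P[state])   # next state from the transition matrix
            return np.array(trace)

        trace = simulate_trace(1000)
        print(trace.mean(), trace.std())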

  7. Adaptive wavelet-based recognition of oscillatory patterns on electroencephalograms

    NASA Astrophysics Data System (ADS)

    Nazimov, Alexey I.; Pavlov, Alexey N.; Hramov, Alexander E.; Grubov, Vadim V.; Koronovskii, Alexey A.; Sitnikova, Evgenija Y.

    2013-02-01

    The problem of automatic recognition of specific oscillatory patterns on electroencephalograms (EEG) is addressed using the continuous wavelet-transform (CWT). A possibility of improving the quality of recognition by optimizing the choice of CWT parameters is discussed. An adaptive approach is proposed to identify sleep spindles (SS) and spike wave discharges (SWD) that assumes automatic selection of CWT-parameters reflecting the most informative features of the analyzed time-frequency structures. Advantages of the proposed technique over the standard wavelet-based approaches are considered.

  8. Wavelet based hierarchical coding scheme for radar image compression

    NASA Astrophysics Data System (ADS)

    Sheng, Wen; Jiao, Xiaoli; He, Jifeng

    2007-12-01

    This paper presents a wavelet-based hierarchical coding scheme for radar image compression. The radar signal is first quantized to a digital signal and reorganized as a raster-scanned image according to the radar's repetition frequency. After reorganization, the reformed image is decomposed into image blocks of different frequency bands by a 2-D wavelet transformation, and each block is quantized and coded with a Huffman coding scheme. A demonstration system was developed, showing that under real-time processing requirements the compression ratio can be very high, with no significant loss of target signal in the restored radar image.

  9. Characterizing cerebrovascular dynamics with the wavelet-based multifractal formalism

    NASA Astrophysics Data System (ADS)

    Pavlov, A. N.; Abdurashitov, A. S.; Sindeeva, O. A.; Sindeev, S. S.; Pavlova, O. N.; Shihalov, G. M.; Semyachkina-Glushkovskaya, O. V.

    2016-01-01

    Using the wavelet-transform modulus maxima (WTMM) approach we study the dynamics of cerebral blood flow (CBF) in rats aiming to reveal responses of macro- and microcerebral circulations to changes in the peripheral blood pressure. We show that the wavelet-based multifractal formalism allows quantifying essentially different reactions in the CBF-dynamics at the level of large and small cerebral vessels. We conclude that unlike the macrocirculation that is nearly insensitive to increased peripheral blood pressure, the microcirculation is characterized by essential changes of the CBF-complexity.

  10. Perceptually lossless wavelet-based compression for medical images

    NASA Astrophysics Data System (ADS)

    Lin, Nai-wen; Yu, Tsaifa; Chan, Andrew K.

    1997-05-01

    In this paper, we present a wavelet-based medical image compression scheme so that images displayed on different devices are perceptually lossless. Since human visual sensitivity varies across subbands, we apply a perceptually lossless criterion to quantize the wavelet transform coefficients of each subband such that visual distortions are reduced to unnoticeable levels. Following this, we use a high-compression-ratio hierarchical tree to code these coefficients. Experimental results indicate that our perceptually lossless coder achieves a compression ratio 2-5 times higher than typical lossless compression schemes while producing perceptually identical image content on the target display device.

  11. Template-free wavelet-based detection of local symmetries.

    PubMed

    Puspoki, Zsuzsanna; Unser, Michael

    2015-10-01

    Our goal is to detect and group different kinds of local symmetries in images in a scale- and rotation-invariant way. We propose an efficient wavelet-based method to determine the order of local symmetry at each location. Our algorithm relies on circular harmonic wavelets which are used to generate steerable wavelet channels corresponding to different symmetry orders. To give a measure of local symmetry, we use the F-test to examine the distribution of the energy across different channels. We provide experimental results on synthetic images, biological micrographs, and electron-microscopy images to demonstrate the performance of the algorithm. PMID:26011883

  12. EEG analysis using wavelet-based information tools.

    PubMed

    Rosso, O A; Martin, M T; Figliola, A; Keller, K; Plastino, A

    2006-06-15

    Wavelet-based informational tools for quantitative electroencephalogram (EEG) record analysis are reviewed. Relative wavelet energies, wavelet entropies and wavelet statistical complexities are used in the characterization of scalp EEG records corresponding to secondary generalized tonic-clonic epileptic seizures. In particular, we show that the epileptic recruitment rhythm observed during seizure development is well described in terms of the relative wavelet energies. In addition, during the concomitant time-period the entropy diminishes while complexity grows. This is construed as evidence supporting the conjecture that an epileptic focus, for this kind of seizures, triggers a self-organized brain state characterized by both order and maximal complexity. PMID:16675027
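
    The relative wavelet energies and the wavelet entropy mentioned above can be sketched as follows, assuming NumPy and PyWavelets; the test signal, wavelet family, and decomposition depth are illustrative and not the EEG protocol of the paper.

        # Sketch: relative wavelet energies and wavelet (Shannon) entropy of a signal segment.
        import numpy as np
        import pywt

        def wavelet_entropy(signal, wavelet="db4", level=5):
            coeffs = pywt.wavedec(signal, wavelet, level=level)
            energies = np.array([np.sum(c ** 2) for c in coeffs])
            p = energies / energies.sum()              # relative wavelet energies
            entropy = -np.sum(p * np.log(p + 1e-12))   # wavelet entropy
            return p, entropy

        t = np.linspace(0, 4, 1024)
        rng = np.random.default_rng(3)
        seg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.normal(size=t.size)  # stand-in "EEG" segment
        p, S = wavelet_entropy(seg)
        print("relative energies:", np.round(p, 3), "entropy:", round(S, 3))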

  13. Wavelet based characterization of ex vivo vertebral trabecular bone structure with 3T MRI compared to microCT

    SciTech Connect

    Krug, R; Carballido-Gamio, J; Burghardt, A; Haase, S; Sedat, J W; Moss, W C; Majumdar, S

    2005-04-11

    Trabecular bone structure and bone density contribute to the strength of bone and are important in the study of osteoporosis. Wavelets are a powerful tool to characterize and quantify texture in an image. In this study the thickness of trabecular bone was analyzed in 8 cylindrical cores of the vertebral spine. Images were obtained from 3 Tesla (T) magnetic resonance imaging (MRI) and micro-computed tomography (µCT). Results from the wavelet-based analysis of trabecular bone were compared with standard two-dimensional structural parameters (analogous to bone histomorphometry) obtained using mean intercept length (MR images) and direct 3D distance transformation methods (µCT images). Additionally, the bone volume fraction was determined from MR images. We conclude that the wavelet-based analysis delivers results comparable to the established MR histomorphometric measurements. The average deviation in trabecular thickness was less than one pixel size between the wavelet and the standard approach for both MR and µCT analysis. Since the wavelet-based method is less sensitive to image noise, we see an advantage of wavelet analysis of trabecular bone for MR imaging when going to higher resolutions.

  14. Concrete density estimation by rebound hammer method

    NASA Astrophysics Data System (ADS)

    Ismail, Mohamad Pauzi bin; Jefri, Muhamad Hafizie Bin; Abdullah, Mahadzir Bin; Masenwat, Noor Azreen bin; Sani, Suhairy bin; Mohd, Shukri; Isa, Nasharuddin bin; Mahmud, Mohamad Haniza bin

    2016-01-01

    Concrete is the most common and cheapest material for radiation shielding. Compressive strength is the main parameter checked for determining concrete quality. However, for shielding purposes, density is the parameter that needs to be considered. X- and gamma-radiation are effectively absorbed by a material with high atomic number and high density such as concrete. High strength normally implies higher density in concrete, but this is not always true. This paper explains and discusses the correlation between rebound hammer testing and density for concrete containing hematite aggregates. A comparison is also made with normal concrete, i.e. concrete containing crushed granite.

  15. Wavelet-based moment invariants for pattern recognition

    NASA Astrophysics Data System (ADS)

    Chen, Guangyi; Xie, Wenfang

    2011-07-01

    Moment invariants have received a lot of attention as features for identification and inspection of two-dimensional shapes. In this paper, two sets of novel moments are proposed by using the auto-correlation of wavelet functions and the dual-tree complex wavelet functions. It is well known that the wavelet transform lacks the property of shift invariance. A little shift in the input signal will cause very different output wavelet coefficients. The autocorrelation of wavelet functions and the dual-tree complex wavelet functions, on the other hand, are shift-invariant, which is very important in pattern recognition. Rotation invariance is the major concern in this paper, while translation invariance and scale invariance can be achieved by standard normalization techniques. The Gaussian white noise is added to the noise-free images and the noise levels vary with different signal-to-noise ratios. Experimental results conducted in this paper show that the proposed wavelet-based moments outperform Zernike's moments and the Fourier-wavelet descriptor for pattern recognition under different rotation angles and different noise levels. It can be seen that the proposed wavelet-based moments can do an excellent job even when the noise levels are very high.

  16. A New Wavelet Based Approach to Assess Hydrological Models

    NASA Astrophysics Data System (ADS)

    Adamowski, J. F.; Rathinasamy, M.; Khosa, R.; Nalley, D.

    2014-12-01

    In this study, a new wavelet-based multi-scale performance measure (Multiscale Nash-Sutcliffe Criteria and Multiscale Normalized Root Mean Square Error) for hydrological model comparison was developed and tested. The new measure provides a quantitative measure of model performance across different timescales. Model and observed time series are decomposed using the à trous wavelet transform, and performance measures of the model are obtained at each time scale. The usefulness of the new measure was tested using real as well as synthetic case studies. The real case studies included simulation results from the Soil Water Assessment Tool (SWAT), as well as statistical models (the Coupled Wavelet-Volterra (WVC), Artificial Neural Network (ANN), and Auto Regressive Moving Average (ARMA) methods). Data from India and Canada were used. The synthetic case studies included different kinds of errors (e.g., timing error, as well as under- and over-prediction of high and low flows) in outputs from a hydrologic model. It was found that the proposed wavelet-based performance measures (i.e., MNSC and MNRMSE) are more reliable measures than traditional performance measures such as the Nash-Sutcliffe Criteria, Root Mean Square Error, and Normalized Root Mean Square Error. It was shown that the new measure can be used to compare different hydrological models, as well as help in model calibration.
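
    A minimal sketch of a multiscale Nash-Sutcliffe criterion is given below. It uses PyWavelets' stationary wavelet transform (swt) as a stand-in for the à trous transform named in the abstract, and synthetic flows; the wavelet, depth, and per-scale summary are illustrative assumptions rather than the paper's exact definition of MNSC.

        # Sketch: Nash-Sutcliffe efficiency computed per wavelet scale on detail components.
        import numpy as np
        import pywt

        def nse(obs, sim):
            return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

        def multiscale_nse(obs, sim, wavelet="haar", level=3):
            dec_obs = pywt.swt(obs, wavelet, level=level)   # pairs ordered coarsest to finest
            dec_sim = pywt.swt(sim, wavelet, level=level)
            return {f"band_{j}": nse(do[1], ds[1])
                    for j, (do, ds) in enumerate(zip(dec_obs, dec_sim), start=1)}

        rng = np.random.default_rng(4)
        obs = np.sin(np.linspace(0, 20 * np.pi, 512)) + 0.2 * rng.normal(size=512)
        sim = obs + 0.3 * rng.normal(size=512)              # stand-in model output
        print("overall NSE:", round(nse(obs, sim), 3))
        print(multiscale_nse(obs, sim))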

  17. A wavelet-based approach to fall detection.

    PubMed

    Palmerini, Luca; Bagalà, Fabio; Zanetti, Andrea; Klenk, Jochen; Becker, Clemens; Cappello, Angelo

    2015-01-01

    Falls among older people are a widely documented public health problem. Automatic fall detection has recently gained huge importance because it could allow for the immediate communication of falls to medical assistance. The aim of this work is to present a novel wavelet-based approach to fall detection, focusing on the impact phase and using a dataset of real-world falls. Since recorded falls result in a non-stationary signal, a wavelet transform was chosen to examine fall patterns. The idea is to consider the average fall pattern as the "prototype fall". In order to detect falls, every acceleration signal can be compared to this prototype through wavelet analysis. The similarity of the recorded signal with the prototype fall is a feature that can be used in order to determine the difference between falls and daily activities. The discriminative ability of this feature is evaluated on real-world data. It outperforms other features that are commonly used in fall detection studies, with an Area Under the Curve of 0.918. This result suggests that the proposed wavelet-based feature is promising and future studies could use this feature (in combination with others considering different fall phases) in order to improve the performance of fall detection algorithms. PMID:26007719

  18. A Wavelet-Based Approach to Fall Detection

    PubMed Central

    Palmerini, Luca; Bagalà, Fabio; Zanetti, Andrea; Klenk, Jochen; Becker, Clemens; Cappello, Angelo

    2015-01-01

    Falls among older people are a widely documented public health problem. Automatic fall detection has recently gained huge importance because it could allow for the immediate communication of falls to medical assistance. The aim of this work is to present a novel wavelet-based approach to fall detection, focusing on the impact phase and using a dataset of real-world falls. Since recorded falls result in a non-stationary signal, a wavelet transform was chosen to examine fall patterns. The idea is to consider the average fall pattern as the “prototype fall”. In order to detect falls, every acceleration signal can be compared to this prototype through wavelet analysis. The similarity of the recorded signal with the prototype fall is a feature that can be used in order to determine the difference between falls and daily activities. The discriminative ability of this feature is evaluated on real-world data. It outperforms other features that are commonly used in fall detection studies, with an Area Under the Curve of 0.918. This result suggests that the proposed wavelet-based feature is promising and future studies could use this feature (in combination with others considering different fall phases) in order to improve the performance of fall detection algorithms. PMID:26007719

  19. Wavelet-based image analysis system for soil texture analysis

    NASA Astrophysics Data System (ADS)

    Sun, Yun; Long, Zhiling; Jang, Ping-Rey; Plodinec, M. John

    2003-05-01

    Soil texture is defined as the relative proportion of clay, silt and sand found in a given soil sample. It is an important physical property of soil that affects such phenomena as plant growth and agricultural fertility. Traditional methods used to determine soil texture are either time consuming (hydrometer), or subjective and experience-demanding (field tactile evaluation). Considering that textural patterns observed at soil surfaces are uniquely associated with soil textures, we propose an innovative approach to soil texture analysis, in which wavelet frames-based features representing texture contents of soil images are extracted and categorized by applying a maximum likelihood criterion. The soil texture analysis system has been tested successfully with an accuracy of 91% in classifying soil samples into one of three general categories of soil textures. In comparison with the common methods, this wavelet-based image analysis approach is convenient, efficient, fast, and objective.

  20. A Wavelet-Based Methodology for Grinding Wheel Condition Monitoring

    SciTech Connect

    Liao, T. W.; Ting, C.F.; Qu, Jun; Blau, Peter Julian

    2007-01-01

    Grinding wheel surface condition changes as more material is removed. This paper presents a wavelet-based methodology for grinding wheel condition monitoring based on acoustic emission (AE) signals. Grinding experiments in creep feed mode were conducted to grind alumina specimens with a resinoid-bonded diamond wheel using two different conditions. During the experiments, AE signals were collected when the wheel was 'sharp' and when the wheel was 'dull'. Discriminant features were then extracted from each raw AE signal segment using the discrete wavelet decomposition procedure. An adaptive genetic clustering algorithm was finally applied to the extracted features in order to distinguish different states of grinding wheel condition. The test results indicate that the proposed methodology can achieve 97% clustering accuracy for the high material removal rate condition, 86.7% for the low material removal rate condition, and 76.7% for the combined grinding conditions if the base wavelet, the decomposition level, and the GA parameters are properly selected.

  1. Wavelet-based multifractal analysis of laser biopsy imagery

    NASA Astrophysics Data System (ADS)

    Jagtap, Jaidip; Ghosh, Sayantan; Panigrahi, Prasanta K.; Pradhan, Asima

    2012-03-01

    In this work, we report a wavelet-based multi-fractal study of images of dysplastic and neoplastic HE-stained human cervical tissues captured in the transmission mode when illuminated by a laser light (He-Ne 632.8 nm laser). It is well known that the morphological changes occurring during the progression of diseases like cancer manifest in their optical properties, which can be probed for differentiating the various stages of cancer. Here, we use the multi-resolution properties of the wavelet transform to analyze the optical changes. For this, we have used a novel laser imagery technique which provides us with a composite image of the absorption by the different cellular organelles. As the disease progresses, due to the growth of new cells, the ratio of the organelle to cellular volume changes, manifesting in the laser imagery of such tissues. In order to develop a metric that can quantify the changes in such systems, we make use of wavelet-based fluctuation analysis. The changing self-similarity during disease progression can be well characterized by the Hurst exponent and the scaling exponent. Due to the use of the Daubechies family of wavelet kernels, we can extract polynomial trends of different orders, which help us characterize the underlying processes effectively. In this study, we observe that the Hurst exponent decreases as the cancer progresses. This measure could be used to differentiate between different stages of cancer, which could lead to the development of a novel non-invasive method for cancer detection and characterization.

  2. Wavelet based free-form deformations for nonrigid registration

    NASA Astrophysics Data System (ADS)

    Sun, Wei; Niessen, Wiro J.; Klein, Stefan

    2014-03-01

    In nonrigid registration, deformations may take place on the coarse and fine scales. For the conventional B-splines based free-form deformation (FFD) registration, these coarse- and fine-scale deformations are all represented by basis functions of a single scale. Meanwhile, wavelets have been proposed as a signal representation suitable for multi-scale problems. Wavelet analysis leads to a unique decomposition of a signal into its coarse- and fine-scale components. Potentially, this could therefore be useful for image registration. In this work, we investigate whether a wavelet-based FFD model has advantages for nonrigid image registration. We use a B-splines based wavelet, as defined by Cai and Wang [1]. This wavelet is expressed as a linear combination of B-spline basis functions. Derived from the original B-spline function, this wavelet is smooth, differentiable, and compactly supported. The basis functions of this wavelet are orthogonal across scales in Sobolev space. This wavelet was previously used for registration in computer vision, in 2D optical flow problems [2], but it was not compared with the conventional B-spline FFD in medical image registration problems. An advantage of choosing this B-splines based wavelet model is that the space of allowable deformation is exactly equivalent to that of the traditional B-spline. The wavelet transformation is essentially a (linear) reparameterization of the B-spline transformation model. Experiments on 10 CT lung and 18 T1-weighted MRI brain datasets show that wavelet based registration leads to smoother deformation fields than traditional B-splines based registration, while achieving better accuracy.

  3. Density estimation using the trapping web design: A geometric analysis

    USGS Publications Warehouse

    Link, W.A.; Barker, R.J.

    1994-01-01

    Population densities for small mammal and arthropod populations can be estimated using capture frequencies for a web of traps. A conceptually simple geometric analysis that avoids the need to estimate a point on a density function is proposed. This analysis incorporates data from the outermost rings of traps, explaining large capture frequencies in these rings rather than truncating them from the analysis.

  4. Nonparametric estimation of plant density by the distance method

    USGS Publications Warehouse

    Patil, S.A.; Burnham, K.P.; Kovner, J.L.

    1979-01-01

    A relation between the plant density and the probability density function of the nearest neighbor distance (squared) from a random point is established under fairly broad conditions. Based upon this relationship, a nonparametric estimator for the plant density is developed and presented in terms of order statistics. Consistency and asymptotic normality of the estimator are discussed. An interval estimator for the density is obtained. The modifications of this estimator and its variance are given when the distribution is truncated. Simulation results are presented for regular, random and aggregated populations to illustrate the nonparametric estimator and its variance. A numerical example from field data is given. Merits and deficiencies of the estimator are discussed with regard to its robustness and variance.

  5. 2D wavelet transform with different adaptive wavelet bases for texture defect inspection based on genetic algorithm

    NASA Astrophysics Data System (ADS)

    Liu, Hong; Mo, Yu L.

    1998-08-01

    Many textures, such as woven fabrics, have repeating textons. In order to handle the textural characteristics of images with defects, this paper proposes a new method based on the 2D wavelet transform. In the method, a new concept of different adaptive wavelet bases is used to match the texture pattern. The 2D wavelet transform has two different adaptive orthonormal wavelet bases for rows and columns, which differ from Daubechies wavelet bases. The orthonormal wavelet bases for rows and columns are generated by a genetic algorithm. The experimental results demonstrate the ability of the different adaptive wavelet bases to characterize the texture and locate the defects in the texture.

  6. Estimating Geometric Dislocation Densities in Polycrystalline Materials from Orientation Imaging Microscopy

    SciTech Connect

    Man, Chi-Sing; Gao, Xiang; Godefroy, Scott; Kenik, Edward A

    2010-01-01

    Herein we consider polycrystalline materials which can be taken as statistically homogeneous and whose grains can be adequately modeled as rigid-plastic. Our objective is to obtain, from orientation imaging microscopy (OIM), estimates of geometrically necessary dislocation (GND) densities.

  7. Atmospheric density estimation using satellite precision orbit ephemerides

    NASA Astrophysics Data System (ADS)

    Arudra, Anoop Kumar

    The current atmospheric density models are not capable enough to accurately model the atmospheric density, which varies continuously in the upper atmosphere mainly due to the changes in solar and geomagnetic activity. Inaccurate atmospheric modeling results in erroneous density values that are not accurate enough to calculate the drag estimates acting on a satellite, thus leading to errors in the prediction of satellite orbits. This research utilized precision orbit ephemerides (POE) data from satellites in an orbit determination process to make corrections to existing atmospheric models, thus resulting in improved density estimates. The work done in this research made corrections to the Jacchia family atmospheric models and Mass Spectrometer Incoherent Scatter (MSIS) family atmospheric models using POE data from the Ice, Cloud and Land Elevation Satellite (ICESat) and the Terra Synthetic Aperture Radar-X Band (TerraSAR-X) satellite. The POE data obtained from these satellites was used in an orbit determination scheme which performs a sequential filter/smoother process to the measurements and generates corrections to the atmospheric models to estimate density. This research considered several days from the year 2001 to 2008 encompassing all levels of solar and geomagnetic activity. Density and ballistic coefficient half-lives with values of 1.8, 18, and 180 minutes were used in this research to observe the effect of these half-life combinations on density estimates. This research also examined the consistency of densities derived from the accelerometers of the Challenging Mini Satellite Payload (CHAMP) and Gravity Recovery and Climate Experiment (GRACE) satellites by Eric Sutton, from the University of Colorado. The accelerometer densities derived by Sutton were compared with those derived by Sean Bruinsma from CNES, Department of Terrestrial and Planetary Geodesy, France. The Sutton densities proved to be nearly identical to the Bruinsma densities for all the

  8. Optimum nonparametric estimation of population density based on ordered distances

    USGS Publications Warehouse

    Patil, S.A.; Kovner, J.L.; Burnham, Kenneth P.

    1982-01-01

    The asymptotic mean and error mean square are determined for the nonparametric estimator of plant density by distance sampling proposed by Patil, Burnham and Kovner (1979, Biometrics 35, 597-604). On the basis of these formulae, a bias-reduced version of this estimator is given, and its specific form is determined which gives minimum mean square error under varying assumptions about the true probability density function of the sampled data. Extension is given to line-transect sampling.

  9. Complex wavelet based speckle reduction using multiple ultrasound images

    NASA Astrophysics Data System (ADS)

    Uddin, Muhammad Shahin; Tahtali, Murat; Pickering, Mark R.

    2014-04-01

    Ultrasound imaging is a dominant tool for diagnosis and evaluation in medical imaging systems. However, its major limitation is that the images it produces suffer from low quality due to the presence of speckle noise; to provide better clinical diagnoses, reducing this noise is essential. The key purpose of a speckle reduction algorithm is to obtain a speckle-free high-quality image whilst preserving important anatomical features, such as sharp edges. As this can be better achieved using multiple ultrasound images rather than a single image, we introduce a complex wavelet-based algorithm for the speckle reduction and sharp edge preservation of two-dimensional (2D) ultrasound images using multiple ultrasound images. The proposed algorithm does not rely on straightforward averaging of multiple images; rather, in each scale, overlapped wavelet detail coefficients are weighted using dynamic threshold values and then reconstructed by averaging. Validation of the proposed algorithm is carried out using simulated and real images with synthetic speckle noise and phantom data consisting of multiple ultrasound images, with the experimental results demonstrating that speckle noise is significantly reduced whilst sharp edges are preserved without discernible distortions. The proposed approach performs better both qualitatively and quantitatively than previous existing approaches.

  10. A wavelet-based feature vector model for DNA clustering.

    PubMed

    Bao, J P; Yuan, R Y

    2015-01-01

    DNA data are important in the bioinformatic domain. To extract useful information from the enormous collection of DNA sequences, DNA clustering is often adopted to efficiently deal with DNA data. The alignment-free method is a very popular way of creating feature vectors from DNA sequences, which are then used to compare DNA similarities. This paper proposes a wavelet-based feature vector (WFV) model, which is also an alignment-free method. From the perspective of signal processing, a DNA sequence is a sequence of digital signals. However, most traditional alignment-free models only extract features in the time domain. The WFV model uses discrete wavelet transform to adaptively yield feature vectors with a fixed dimension based on the features in both the time and frequency domains. The level of wavelet transform is adjusted according to the length of the DNA sequence rather than a fixed manually set value. The WFV model prefers a 32-dimension feature vector, which greatly promotes system performance. We compared the WFV model with the other five alignment-free models, i.e., k-tuple, DMK, TSM, AMI, and CV, on several large-scale DNA datasets on the DNA clustering application by means of the K-means algorithm. The experimental results showed that the WFV model outperformed the other models in terms of both the clustering results and the running time. PMID:26782569
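
    A minimal sketch of the pipeline described above, turning DNA sequences into fixed-length wavelet feature vectors and clustering them, is given below. It assumes NumPy, PyWavelets and SciPy's k-means; the base-to-number mapping, the way the transform level is tied to sequence length, and the 32-coefficient target follow the spirit of the abstract but are illustrative, not the exact WFV definition.

        # Sketch: DNA sequences -> fixed-length wavelet feature vectors -> k-means clustering.
        import numpy as np
        import pywt
        from scipy.cluster.vq import kmeans2

        BASE = {"A": 1.0, "C": 2.0, "G": 3.0, "T": 4.0}   # illustrative numeric mapping

        def wavelet_feature_vector(seq, target_dim=32, wavelet="haar"):
            x = np.array([BASE.get(b, 0.0) for b in seq.upper()])
            # Choose the level so the approximation has roughly target_dim samples.
            level = max(1, int(np.floor(np.log2(len(x) / target_dim))))
            approx = pywt.wavedec(x, wavelet, level=level)[0]
            out = np.zeros(target_dim)                     # pad or truncate to target_dim
            out[:min(target_dim, len(approx))] = approx[:target_dim]
            return out

        rng = np.random.default_rng(5)
        seqs = ["".join(rng.choice(list("ACGT"), size=rng.integers(200, 400)))
                for _ in range(20)]                        # stand-in sequences
        features = np.vstack([wavelet_feature_vector(s) for s in seqs])
        centroids, labels = kmeans2(features, k=3, minit="points")
        print(labels)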

  11. Wavelet-based face verification for constrained platforms

    NASA Astrophysics Data System (ADS)

    Sellahewa, Harin; Jassim, Sabah A.

    2005-03-01

    Human identification based on facial images is one of the most challenging tasks in comparison to identification based on other biometric features such as fingerprints, palm prints or iris. Facial recognition is the most natural and suitable method of identification for security related applications. This paper is concerned with wavelet-based schemes for efficient face verification suitable for implementation on devices that are constrained in memory size and computational power such as PDAs and smartcards. Besides minimal storage requirements, we should apply as few as possible of the pre-processing procedures that are often needed to deal with variation in recording conditions. We propose the LL-coefficients of wavelet-transformed face images as the feature vectors for face verification, and compare their performance with that of PCA applied in the LL-subband at levels 3, 4 and 5. We shall also compare the performance of various versions of our scheme with those of well-established PCA face verification schemes on the BANCA database as well as the ORL database. In many cases, the wavelet-only feature vector scheme has the best performance while maintaining efficacy and requiring minimal pre-processing steps. The significance of these results is their efficiency and suitability for platforms of constrained computational power and storage capacity (e.g. smartcards). Moreover, working at or beyond the level 3 LL-subband results in robustness against high-rate compression and noise interference.
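
    The core of such an LL-subband scheme can be sketched as below, assuming NumPy and PyWavelets: the depth-3 approximation (LL) coefficients of a face image serve as its feature vector, and a probe is verified against an enrolled template by Euclidean distance. The random stand-in images, wavelet choice, and acceptance threshold are placeholders, not the paper's configuration.

        # Sketch: depth-3 LL-subband coefficients as a face feature vector, distance-based verification.
        import numpy as np
        import pywt

        def ll_feature(image, wavelet="haar", depth=3):
            coeffs = pywt.wavedec2(image, wavelet, level=depth)
            return coeffs[0].ravel()          # LL subband at the requested depth

        def verify(probe, template_feature, threshold=50.0):
            d = np.linalg.norm(ll_feature(probe) - template_feature)
            return d <= threshold, d

        rng = np.random.default_rng(6)
        enrolled = rng.normal(size=(64, 64))                  # stand-in face image
        template = ll_feature(enrolled)
        probe = enrolled + 0.05 * rng.normal(size=(64, 64))   # slightly perturbed probe
        print(verify(probe, template))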

  12. An image adaptive, wavelet-based watermarking of digital images

    NASA Astrophysics Data System (ADS)

    Agreste, Santa; Andaloro, Guido; Prestipino, Daniela; Puccio, Luigia

    2007-12-01

    In digital management, multimedia content and data can easily be used in an illegal way--being copied, modified and distributed again. Copyright protection, intellectual and material rights protection for authors, owners, buyers, distributors and the authenticity of content are crucial factors in solving an urgent and real problem. In such a scenario, digital watermark techniques are emerging as a valid solution. In this paper, we describe an algorithm--called WM2.0--for an invisible watermark: private, strong, wavelet-based and developed for digital image protection and authenticity. The use of the discrete wavelet transform (DWT) is motivated by its good time-frequency features and its good match with human visual system directives. These two combined elements are important in building an invisible and robust watermark. WM2.0 works on a dual scheme: watermark embedding and watermark detection. The watermark is embedded into high-frequency DWT components of a specific sub-image and it is calculated in correlation with the image features and statistical properties. Watermark detection applies a re-synchronization between the original and watermarked image. The correlation between the watermarked DWT coefficients and the watermark signal is calculated according to the Neyman-Pearson statistical criterion. Experimentation on a large set of different images has shown the algorithm to be resistant against geometric, filtering and StirMark attacks with a low rate of false alarm.
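
    A generic additive DWT watermark with correlation-based detection can be sketched as below, assuming NumPy and PyWavelets. This is a textbook-style illustration only, not the WM2.0 algorithm itself (which adapts the embedding strength to image features and applies a Neyman-Pearson detection threshold); the strength alpha and image sizes are placeholders.

        # Sketch: additive watermark in high-frequency DWT coefficients, correlation detector.
        import numpy as np
        import pywt

        def embed(image, watermark, alpha=0.05, wavelet="haar"):
            cA, (cH, cV, cD) = pywt.dwt2(image, wavelet)
            cD_marked = cD + alpha * np.abs(cD) * watermark   # strengthen where detail is large
            return pywt.idwt2((cA, (cH, cV, cD_marked)), wavelet)

        def detect(image, watermark, wavelet="haar"):
            _, (_, _, cD) = pywt.dwt2(image, wavelet)
            return float(np.mean(cD * watermark))             # correlation statistic

        rng = np.random.default_rng(7)
        img = rng.normal(size=(128, 128))                     # stand-in image
        wm = rng.choice([-1.0, 1.0], size=(64, 64))           # pseudo-random watermark
        marked = embed(img, wm)
        print("marked:", detect(marked, wm), " clean:", detect(img, wm))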

  13. Wavelet-based acoustic emission detection method with adaptive thresholding

    NASA Astrophysics Data System (ADS)

    Menon, Sunil; Schoess, Jeffrey N.; Hamza, Rida; Busch, Darryl

    2000-06-01

    Reductions in Navy maintenance budgets and available personnel have dictated the need to transition from time-based to 'condition-based' maintenance. Achieving this will require new enabling diagnostic technologies. One such technology, the use of acoustic emission for the early detection of helicopter rotor head dynamic component faults, has been investigated by Honeywell Technology Center for its rotor acoustic monitoring system (RAMS). This ambitious, 38-month, proof-of-concept effort, which was a part of the Naval Surface Warfare Center Air Vehicle Diagnostics System program, culminated in a successful three-week flight test of the RAMS system at Patuxent River Flight Test Center in September 1997. The flight test results demonstrated that stress-wave acoustic emission technology can detect signals equivalent to small fatigue cracks in rotor head components and can do so across the rotating articulated rotor head joints and in the presence of other background acoustic noise generated during flight operation. This paper presents the results of stress wave data analysis of the flight-test dataset using wavelet-based techniques to assess background operational noise vs. machinery failure detection results.

  14. Coarse-to-fine wavelet-based airport detection

    NASA Astrophysics Data System (ADS)

    Li, Cheng; Wang, Shuigen; Pang, Zhaofeng; Zhao, Baojun

    2015-10-01

    Airport detection in optical remote sensing images has attracted great interest in applications of military optical reconnaissance and traffic control. However, most of the popular techniques for airport detection from optical remote sensing images have three weaknesses: 1) due to the characteristics of optical images, the detection results are often affected by imaging conditions, such as weather and imaging distortion; 2) optical images contain comprehensive information about targets, so that it is difficult to extract robust features (e.g., intensity and textural information) to represent the airport area; and 3) the high resolution results in a large data volume, which limits real-time processing. Most of the previous works mainly focus on solving one of these problems, and thus the previous methods cannot achieve a balance of performance and complexity. In this paper, we propose a novel coarse-to-fine airport detection framework to solve the aforementioned three issues using wavelet coefficients. The framework includes two stages: 1) an efficient wavelet-based feature extraction is adopted for multi-scale textural feature representation, and a support vector machine (SVM) is exploited for classifying and coarsely deciding airport candidate regions; and then 2) refined line segment detection is used to obtain the runway and landing field of the airport. Finally, airport recognition is achieved by applying the fine runway positioning to the candidate regions. Experimental results show that the proposed approach outperforms the existing algorithms in terms of detection accuracy and processing efficiency.

  15. Wavelet-based multiresolution analysis of Wivenhoe Dam water temperatures

    NASA Astrophysics Data System (ADS)

    Percival, D. B.; Lennox, S. M.; Wang, Y.-G.; Darnell, R. E.

    2011-05-01

    Water temperature measurements from Wivenhoe Dam offer a unique opportunity for studying fluctuations of temperatures in a subtropical dam as a function of time and depth. Cursory examination of the data indicate a complicated structure across both time and depth. We propose simplifying the task of describing these data by breaking the time series at each depth into physically meaningful components that individually capture daily, subannual, and annual (DSA) variations. Precise definitions for each component are formulated in terms of a wavelet-based multiresolution analysis. The DSA components are approximately pairwise uncorrelated within a given depth and between different depths. They also satisfy an additive property in that their sum is exactly equal to the original time series. Each component is based upon a set of coefficients that decomposes the sample variance of each time series exactly across time and that can be used to study both time-varying variances of water temperature at each depth and time-varying correlations between temperatures at different depths. Each DSA component is amenable for studying a certain aspect of the relationship between the series at different depths. The daily component in general is weakly correlated between depths, including those that are adjacent to one another. The subannual component quantifies seasonal effects and in particular isolates phenomena associated with the thermocline, thus simplifying its study across time. The annual component can be used for a trend analysis. The descriptive analysis provided by the DSA decomposition is a useful precursor to a more formal statistical analysis.
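
    The additive property described above, where wavelet-based components sum exactly to the original series, can be sketched as below with NumPy and PyWavelets by reconstructing each wavelet level separately. The synthetic hourly series, wavelet choice, and depth are illustrative; grouping levels into daily, subannual, and annual (DSA) bands would depend on the actual sampling interval and is only indicated in comments.

        # Sketch: additive multiresolution decomposition of a temperature-like series.
        import numpy as np
        import pywt

        def additive_components(x, wavelet="db4", level=6):
            coeffs = pywt.wavedec(x, wavelet, level=level)
            comps = []
            for i in range(len(coeffs)):
                kept = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
                comps.append(pywt.waverec(kept, wavelet)[: len(x)])
            return comps   # comps[0] = smooth trend, comps[1:] = details, coarse to fine

        rng = np.random.default_rng(8)
        n = 2048
        t = np.arange(n)
        series = (np.sin(2 * np.pi * t / 24)            # "daily" cycle for hourly sampling
                  + 0.5 * np.sin(2 * np.pi * t / 720)   # slower, "subannual"-like cycle
                  + 0.1 * rng.normal(size=n))
        comps = additive_components(series)
        print(np.allclose(np.sum(comps, axis=0), series))   # additive property holds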

  16. Wavelet-based laser-induced ultrasonic inspection in pipes

    NASA Astrophysics Data System (ADS)

    Baltazar-López, Martín E.; Suh, Steve; Chona, Ravinder; Burger, Christian P.

    2006-02-01

    The feasibility of detecting localized defects in tubing using wavelet-based laser-induced ultrasonic guided waves as an inspection method is examined. Ultrasonic guided waves initiated and propagating in hollow cylinders (pipes and/or tubes) are studied as an alternative, robust nondestructive in situ inspection method. Contrary to other traditional methods for pipe inspection, in which contact transducers (electromagnetic, piezoelectric) and/or coupling media (submersion liquids) are used, this method is characterized by its non-contact nature. This characteristic is particularly important in applications involving Nondestructive Evaluation (NDE) of materials because the signal being detected corresponds only to the induced wave. Cylindrical guided waves are generated using a Q-switched Nd:YAG laser and a Fiber Tip Interferometry (FTI) system is used to acquire the waves. Guided wave experimental techniques are developed for the measurement of phase velocities to determine elastic properties of the material and the location and geometry of flaws, including inclusions, voids, and cracks in hollow cylinders. As compared to traditional bulk wave methods, the use of guided waves offers several important potential advantages, including better inspection efficiency, applicability to in-situ tube inspection, and fewer evaluation fluctuations with increased reliability.

  17. Mean thermospheric density estimation derived from satellite constellations

    NASA Astrophysics Data System (ADS)

    Li, Alan; Close, Sigrid

    2015-10-01

    This paper defines a method to estimate the mean neutral density of the thermosphere given many satellites of the same form factor travelling in similar regions of space. A priori information for the estimation scheme includes ranging measurements and a general knowledge of the onboard ADACS, although precise measurements are not required for the latter. The estimation procedure seeks to utilize order statistics to estimate the probability of the minimum drag coefficient achievable, and amalgamating all measurements across multiple time periods allows estimation of the probability density of the ballistic factor itself. The model does not depend on prior models of the atmosphere; instead we require estimation of the minimum achievable drag coefficient, which is based upon physics models of simple shapes in free molecular flow. From the statistics of the minimum, error statistics on the estimated atmospheric density can be calculated. Barring measurement errors from the ranging procedure itself, it is shown that with a constellation of 10 satellites, we can achieve a standard deviation of roughly 4% on the estimated mean neutral density. As more satellites are added to the constellation, the result converges towards the lower limit of the achievable drag coefficient, and accuracy becomes limited by the quality of the ranging measurements and the probability of the accommodation coefficient. Comparisons are made to existing atmospheric models such as NRLMSISE-00 and JB2006.

  18. Conditional Density Estimation with HMM Based Support Vector Machines

    NASA Astrophysics Data System (ADS)

    Hu, Fasheng; Liu, Zhenqiu; Jia, Chunxin; Chen, Dechang

    Conditional density estimation is very important in financial engineering, risk management, and other engineering computing problems. However, most regression models have a latent assumption that the probability density is a Gaussian distribution, which is not necessarily true in many real life applications. In this paper, we give a framework to estimate or predict the conditional density mixture dynamically. By combining the Input-Output HMM with SVM regression and building an SVM model in each state of the HMM, we can estimate a conditional density mixture instead of a single Gaussian. With an SVM in each node, this model can be applied not only to regression but to classification as well. We applied this model to denoising ECG data. The proposed method has the potential to be applied to other time series such as stock market return predictions.

  19. The estimation of body density in rugby union football players.

    PubMed Central

    Bell, W

    1995-01-01

    The general regression equation of Durnin and Womersley for estimating body density from skinfold thicknesses in young men was examined by comparing the estimated density from this equation with the measured density of a group of 45 rugby union players of similar age. Body density was measured by hydrostatic weighing with simultaneous measurement of residual volume. Additional measurements included stature, body mass and skinfold thicknesses at the biceps, triceps, subscapular and suprailiac sites. The estimated density was significantly different from the measured density (P < 0.001), equivalent to a mean overestimation of relative fat of approximately 4%. A new set of prediction equations for estimating density was formulated from linear regression using the logarithm of single and sums of skinfold thicknesses. Equations were derived from a validation sample (n = 22) and tested on a cross-validation sample (n = 23). The standard error of the estimate (s.e.e.) of the equations ranged from 0.0058 to 0.0062 g ml^-1. The derived equations were successfully cross-validated. Differences between measured and estimated densities were not significant (P > 0.05), with total errors ranging from 0.0067 to 0.0092 g ml^-1. An exploratory assessment was also made of the effect of fatness and aerobic fitness on the prediction equations. The equations should be applied to players of similar age and playing ability, and for the purpose of identifying group characteristics. Application of the equations to individuals may give rise to errors of between -3.9% and +2.5% total body fat in two-thirds of cases. PMID:7788218
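
    The general form of such skinfold equations, density = c - m * log10(sum of four skinfolds), followed by conversion to percent body fat with the Siri two-compartment equation, can be sketched as below. The intercept and slope coefficients are illustrative placeholders, not the coefficients derived in this paper.

        # Sketch: skinfold-based body density (generic log-sum form) and percent fat via Siri.
        import math

        C, M = 1.1610, 0.0632   # illustrative intercept/slope (g ml^-1 per log10 mm), placeholders

        def body_density(biceps, triceps, subscapular, suprailiac):
            total = biceps + triceps + subscapular + suprailiac   # skinfolds in mm
            return C - M * math.log10(total)

        def percent_fat(density):
            return 495.0 / density - 450.0   # Siri two-compartment equation

        d = body_density(5.0, 8.0, 12.0, 15.0)
        print(round(d, 4), "g/ml ->", round(percent_fat(d), 1), "% fat")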

  20. Improving 3D Wavelet-Based Compression of Hyperspectral Images

    NASA Technical Reports Server (NTRS)

    Klimesh, Matthew; Kiely, Aaron; Xie, Hua; Aranki, Nazeeh

    2009-01-01

    Two methods of increasing the effectiveness of three-dimensional (3D) wavelet-based compression of hyperspectral images have been developed. (As used here, images signifies both images and digital data representing images.) The methods are oriented toward reducing or eliminating detrimental effects of a phenomenon, referred to as spectral ringing, that is described below. In 3D wavelet-based compression, an image is represented by a multiresolution wavelet decomposition consisting of several subbands obtained by applying wavelet transforms in the two spatial dimensions corresponding to the two spatial coordinate axes of the image plane, and by applying wavelet transforms in the spectral dimension. Spectral ringing is named after the more familiar spatial ringing (spurious spatial oscillations) that can be seen parallel to and near edges in ordinary images reconstructed from compressed data. These ringing phenomena are attributable to effects of quantization. In hyperspectral data, the individual spectral bands play the role of edges, causing spurious oscillations to occur in the spectral dimension. In the absence of such corrective measures as the present two methods, spectral ringing can manifest itself as systematic biases in some reconstructed spectral bands and can reduce the effectiveness of compression of spatially-low-pass subbands. One of the two methods is denoted mean subtraction. The basic idea of this method is to subtract mean values from spatial planes of spatially low-pass subbands prior to encoding, because (a) such spatial planes often have mean values that are far from zero and (b) zero-mean data are better suited for compression by methods that are effective for subbands of two-dimensional (2D) images. In this method, after the 3D wavelet decomposition is performed, mean values are computed for and subtracted from each spatial plane of each spatially-low-pass subband. The resulting data are converted to sign-magnitude form and compressed in a
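
    The mean-subtraction step, removing the mean of each spatial plane of a spatially low-pass subband before encoding, can be sketched as below with NumPy. The subband here is random stand-in data; the sign-magnitude conversion and the actual entropy coder mentioned in the abstract are omitted.

        # Sketch: subtract the mean of each spatial plane of a spatially low-pass subband.
        import numpy as np

        def subtract_plane_means(subband):
            """subband has shape (bands, rows, cols); one spatial plane per spectral band."""
            means = subband.mean(axis=(1, 2), keepdims=True)
            return subband - means, means.squeeze()   # the means must be sent to the decoder

        rng = np.random.default_rng(9)
        low_pass = rng.normal(loc=120.0, scale=5.0, size=(32, 16, 16))  # stand-in subband
        zero_mean, plane_means = subtract_plane_means(low_pass)
        print(plane_means[:3], zero_mean.mean(axis=(1, 2))[:3])         # ~0 after subtraction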

  1. Wavelet-based ground vehicle recognition using acoustic signals

    NASA Astrophysics Data System (ADS)

    Choe, Howard C.; Karlsen, Robert E.; Gerhart, Grant R.; Meitzler, Thomas J.

    1996-03-01

    We present, in this paper, a wavelet-based acoustic signal analysis to remotely recognize military vehicles using their sound intercepted by acoustic sensors. Since expedited signal recognition is imperative in many military and industrial situations, we developed an algorithm that provides an automated, fast signal recognition once implemented in a real-time hardware system. This algorithm consists of wavelet preprocessing, feature extraction and compact signal representation, and a simple but effective statistical pattern matching. The current status of the algorithm does not require any training. The training is replaced by human selection of reference signals (e.g., squeak or engine exhaust sound) distinctive to each individual vehicle based on human perception. This allows a fast archiving of any new vehicle type in the database once the signal is collected. The wavelet preprocessing provides time-frequency multiresolution analysis using discrete wavelet transform (DWT). Within each resolution level, feature vectors are generated from statistical parameters and energy content of the wavelet coefficients. After applying our algorithm on the intercepted acoustic signals, the resultant feature vectors are compared with the reference vehicle feature vectors in the database using statistical pattern matching to determine the type of vehicle from where the signal originated. Certainly, statistical pattern matching can be replaced by an artificial neural network (ANN); however, the ANN would require training data sets and time to train the net. Unfortunately, this is not always possible for many real world situations, especially collecting data sets from unfriendly ground vehicles to train the ANN. Our methodology using wavelet preprocessing and statistical pattern matching provides robust acoustic signal recognition. We also present an example of vehicle recognition using acoustic signals collected from two different military ground vehicles. In this paper, we will
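
    A minimal sketch of the pipeline described above, wavelet preprocessing into per-level feature vectors (statistics plus energy) followed by a simple distance-based match against reference vehicle vectors, is given below. It assumes NumPy and PyWavelets; the synthetic signals, wavelet, depth, and nearest-reference rule are illustrative stand-ins for the paper's signals and statistical pattern matching.

        # Sketch: DWT feature extraction per level, then nearest-reference matching.
        import numpy as np
        import pywt

        def dwt_features(signal, wavelet="db4", level=4):
            feats = []
            for c in pywt.wavedec(signal, wavelet, level=level):
                feats += [c.mean(), c.std(), np.sum(c ** 2)]   # statistics + energy per level
            return np.array(feats)

        def classify(signal, references):
            f = dwt_features(signal)
            dists = {name: np.linalg.norm(f - ref) for name, ref in references.items()}
            return min(dists, key=dists.get), dists

        rng = np.random.default_rng(10)
        t = np.linspace(0, 1, 4096)
        vehicle_a = np.sin(2 * np.pi * 60 * t) + 0.3 * rng.normal(size=t.size)
        vehicle_b = np.sin(2 * np.pi * 180 * t) + 0.3 * rng.normal(size=t.size)
        refs = {"vehicle_a": dwt_features(vehicle_a), "vehicle_b": dwt_features(vehicle_b)}
        probe = np.sin(2 * np.pi * 60 * t) + 0.3 * rng.normal(size=t.size)
        print(classify(probe, refs)[0])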

  2. Embedded wavelet-based face recognition under variable position

    NASA Astrophysics Data System (ADS)

    Cotret, Pascal; Chevobbe, Stéphane; Darouich, Mehdi

    2015-02-01

    For several years, face recognition has been a hot topic in the image processing field: this technique is applied in several domains such as CCTV and electronic device unlocking. In this context, this work studies the efficiency of a wavelet-based face recognition method in terms of subject position robustness and performance on various systems. The use of the wavelet transform has a limited impact on the position robustness of PCA-based face recognition. This work shows, for a well-known database (Yale face database B*), that the subject position in 3D space can vary by up to 10% of the original ROI size without decreasing recognition rates. Face recognition is performed on the approximation coefficients of the image wavelet transform: results are still satisfying after 3 levels of decomposition. Furthermore, the face database size can be divided by a factor of 64 (2^(2K) with K = 3). In the context of ultra-embedded vision systems, memory footprint is one of the key points to be addressed; that is the reason why compression techniques such as the wavelet transform are interesting. Furthermore, it leads to a low-complexity face detection stage compliant with the limited computation resources available on such systems. The approach described in this work is tested on three platforms, from a standard x86-based computer to nanocomputers such as RaspberryPi and SECO boards. For K = 3 and a database with 40 faces, the mean execution time for one frame is 0.64 ms on an x86-based computer, 9 ms on a SECO board and 26 ms on a RaspberryPi (B model).

  3. A wavelet-based approach to face verification/recognition

    NASA Astrophysics Data System (ADS)

    Jassim, Sabah; Sellahewa, Harin

    2005-10-01

    Face verification/recognition is a tough challenge in comparison to identification based on other biometrics such as iris or fingerprints. Yet, due to its unobtrusive nature, the face is naturally suitable for security related applications. The face verification process relies on feature extraction from face images. Current schemes are either geometric-based or template-based. In the latter, the face image is statistically analysed to obtain a set of feature vectors that best describe it. Performance of a face verification system is affected by image variations due to illumination, pose, occlusion, expressions and scale. This paper extends our recent work on face verification for constrained platforms, where the feature vector of a face image is the coefficients in the wavelet transformed LL-subbands at depth 3 or more. It was demonstrated that the wavelet-only feature vector scheme has a performance comparable to sophisticated state-of-the-art schemes when tested on two benchmark databases (ORL and BANCA). The significance of those results stems from the fact that the size of the k-th LL-subband is 1/4^k of the original image size. Here, we investigate the use of wavelet coefficients in various subbands at level 3 or 4 using various wavelet filters. We shall compare the performance of the wavelet-based scheme for different filters at different subbands with a number of state-of-the-art face verification/recognition schemes on two benchmark databases, namely ORL and the control section of BANCA. We shall demonstrate that our schemes have comparable performance to (or outperform) the best performing other schemes.

  4. Wavelet-based multicomponent matching pursuit trace interpolation

    NASA Astrophysics Data System (ADS)

    Choi, Jihun; Byun, Joongmoo; Seol, Soon Jee; Kim, Young

    2016-09-01

    Typically, seismic data are sparsely and irregularly sampled due to limitations in the survey environment, and these cause problems for key seismic processing steps such as surface-related multiple elimination or wave-equation-based migration. Various interpolation techniques have been developed to alleviate the problems caused by sparse and irregular sampling. Among many interpolation techniques, matching pursuit interpolation is a robust tool to interpolate regularly sampled data with large receiver separation, such as crossline data in marine seismic acquisition, when both pressure and particle velocity data are used. Multicomponent matching pursuit methods have generally used the sinusoidal basis function, which has been shown to be effective for interpolating multicomponent marine seismic data in the crossline direction. In this paper, we report the use of wavelet basis functions, which further enhance the de-aliasing performance of matching pursuit methods compared with sinusoidal basis functions. We also found that the range of the peak wavenumber of the wavelet is critical to the stability of the interpolation results and the de-aliasing performance, and that the range should be determined based on the Nyquist criterion. In addition, we reduced the computational cost by adopting the inner product of the wavelet and the input data to find the parameters of the wavelet basis function instead of using L-2 norm minimization. Using synthetic data, we illustrate that, for aliased data, wavelet-based matching pursuit interpolation yields more stable results than the sinusoidal-function-based approach, both when we use pressure data only and when we use pressure and particle velocity together.

  5. Wavelet-based multicomponent matching pursuit trace interpolation

    NASA Astrophysics Data System (ADS)

    Choi, Jihun; Byun, Joongmoo; Seol, Soon Jee; Kim, Young

    2016-06-01

    Typically, seismic data are sparsely and irregularly sampled due to limitations in the survey environment, and these cause problems for key seismic processing steps such as surface-related multiple elimination or wave-equation based migration. Various interpolation techniques have been developed to alleviate the problems caused by sparse and irregular sampling. Among many interpolation techniques, matching pursuit interpolation is a robust tool to interpolate regularly sampled data with large receiver separation, such as crossline data in marine seismic acquisition, when both pressure and particle velocity data are used. Multi-component matching pursuit methods have generally used the sinusoidal basis function, which has been shown to be effective for interpolating multi-component marine seismic data in the crossline direction. In this paper, we report the use of wavelet basis functions, which further enhance the de-aliasing performance of matching pursuit methods compared with sinusoidal basis functions. We also found that the range of the peak wavenumber of the wavelet is critical to the stability of the interpolation results and the de-aliasing performance, and that the range should be determined based on the Nyquist criterion. In addition, we reduced the computational cost by adopting the inner product of the wavelet and the input data to find the parameters of the wavelet basis function instead of using L-2 norm minimization. Using synthetic data, we illustrate that, for aliased data, wavelet-based matching pursuit interpolation yields more stable results than the sinusoidal-function-based approach, both when we use pressure data only and when we use pressure and particle velocity together.

  6. Non-local crime density estimation incorporating housing information

    PubMed Central

    Woodworth, J. T.; Mohler, G. O.; Bertozzi, A. L.; Brantingham, P. J.

    2014-01-01

    Given a discrete sample of event locations, we wish to produce a probability density that models the relative probability of events occurring in a spatial domain. Standard density estimation techniques do not incorporate priors informed by spatial data. Such methods can result in assigning significant positive probability to locations where events cannot realistically occur. In particular, when modelling residential burglaries, standard density estimation can predict residential burglaries occurring where there are no residences. Incorporating the spatial data can inform the valid region for the density. When modelling very few events, additional priors can help to correctly fill in the gaps. Learning and enforcing correlation between spatial data and event data can yield better estimates from fewer events. We propose a non-local version of maximum penalized likelihood estimation based on the H^1 Sobolev seminorm regularizer that computes non-local weights from spatial data to obtain more spatially accurate density estimates. We evaluate this method in application to a residential burglary dataset from San Fernando Valley with the non-local weights informed by housing data or a satellite image. PMID:25288817
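
    The estimator proposed here (non-local maximum penalized likelihood with an H^1 seminorm and weights learned from housing or satellite data) is more involved than can be shown in a few lines. The sketch below is only a baseline illustrating the role of a spatial validity prior: a kernel density estimate is zeroed outside a "valid" (e.g. residential) region and renormalized. The grid, bandwidth and mask are hypothetical, and this is not the authors' method.

        import numpy as np
        from scipy.stats import gaussian_kde

        def masked_kde(events_xy, valid_mask, extent, n=128):
            # events_xy: (2, N) event coordinates; valid_mask: (n, n) boolean grid
            xmin, xmax, ymin, ymax = extent
            xs = np.linspace(xmin, xmax, n)
            ys = np.linspace(ymin, ymax, n)
            gx, gy = np.meshgrid(xs, ys)
            dens = gaussian_kde(events_xy)(np.vstack([gx.ravel(), gy.ravel()])).reshape(n, n)
            dens = np.where(valid_mask, dens, 0.0)           # no mass where events cannot occur
            cell = (xs[1] - xs[0]) * (ys[1] - ys[0])
            return dens / (dens.sum() * cell)                # renormalize to a proper density

        # toy usage: 200 events and a hypothetical "residential" mask on half the domain
        rng = np.random.default_rng(0)
        events = rng.uniform(0.0, 0.5, size=(2, 200))
        mask = np.zeros((128, 128), dtype=bool)
        mask[:, :64] = True
        density = masked_kde(events, mask, extent=(0.0, 1.0, 0.0, 1.0))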

  7. Atmospheric Density Corrections Estimated from Fitted Drag Coefficients

    NASA Astrophysics Data System (ADS)

    McLaughlin, C. A.; Lechtenberg, T. F.; Mance, S. R.; Mehta, P.

    2010-12-01

    Fitted drag coefficients estimated using GEODYN, the NASA Goddard Space Flight Center Precision Orbit Determination and Geodetic Parameter Estimation Program, are used to create density corrections. The drag coefficients were estimated for Stella, Starlette and GFZ using satellite laser ranging (SLR) measurements; and for GEOSAT Follow-On (GFO) using SLR, Doppler, and altimeter crossover measurements. The data analyzed covers years ranging from 2000 to 2004 for Stella and Starlette, 2000 to 2002 and 2005 for GFO, and 1995 to 1997 for GFZ. The drag coefficient was estimated every eight hours. The drag coefficients over the course of a year show a consistent variation about the theoretical and yearly average values that primarily represents a semi-annual/seasonal error in the atmospheric density models used. The atmospheric density models examined were NRLMSISE-00 and MSIS-86. The annual structure of the major variations was consistent among all the satellites for a given year and consistent among all the years examined. The fitted drag coefficients can be converted into density corrections every eight hours along the orbit of the satellites. In addition, drag coefficients estimated more frequently can provide a higher frequency of density correction.
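
    One way to read the conversion step described above: since drag acceleration scales with the product of the drag coefficient and the atmospheric density, a fitted coefficient that absorbs density-model error implies a multiplicative density correction equal to the ratio of the fitted to the reference coefficient. That mapping is an assumption made for this sketch, and the numbers are hypothetical.

        # hypothetical 8-hour fitted drag coefficients and an assumed reference value
        cd_fitted = [2.31, 2.18, 2.54, 2.47]       # one value per 8-hour fit span
        cd_reference = 2.30                        # theoretical / yearly-average coefficient

        # drag acceleration ~ Cd * rho, so the fitted Cd implies a density scale factor
        corrections = [cd / cd_reference for cd in cd_fitted]
        # e.g. rho_corrected(t) = corrections[i] * rho_model(t) within the i-th fit span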

  8. NONPARAMETRIC ESTIMATION OF MULTIVARIATE CONVEX-TRANSFORMED DENSITIES

    PubMed Central

    Seregin, Arseni; Wellner, Jon A.

    2011-01-01

    We study estimation of multivariate densities p of the form p(x) = h(g(x)) for x ∈ ℝ^d and for a fixed monotone function h and an unknown convex function g. The canonical example is h(y) = e^{-y} for y ∈ ℝ; in this case, the resulting class of densities P(e^{-y}) = {p = exp(−g) : g is convex} is well known as the class of log-concave densities. Other functions h allow for classes of densities with heavier tails than the log-concave class. We first investigate when the maximum likelihood estimator p̂ exists for the class P(h) for various choices of monotone transformations h, including decreasing and increasing functions h. The resulting models for increasing transformations h extend the classes of log-convex densities studied previously in the econometrics literature, corresponding to h(y) = exp(y). We then establish consistency of the maximum likelihood estimator for fairly general functions h, including the log-concave class P(e^{-y}) and many others. In a final section, we provide asymptotic minimax lower bounds for the estimation of p and its vector of derivatives at a fixed point x_0 under natural smoothness hypotheses on h and g. The proofs rely heavily on results from convex analysis. PMID:21423877

  9. Wavelet-based multiscale performance analysis: An approach to assess and improve hydrological models

    NASA Astrophysics Data System (ADS)

    Rathinasamy, Maheswaran; Khosa, Rakesh; Adamowski, Jan; ch, Sudheer; Partheepan, G.; Anand, Jatin; Narsimlu, Boini

    2014-12-01

    The temporal dynamics of hydrological processes are spread across different time scales and, as such, the performance of hydrological models cannot be estimated reliably from global performance measures that assign a single number to the fit of a simulated time series to an observed reference series. Accordingly, it is important to analyze model performance at different time scales. Wavelets have been used extensively in the area of hydrological modeling for multiscale analysis, and have been shown to be very reliable and useful in understanding dynamics across time scales and as these evolve in time. In this paper, a wavelet-based multiscale performance measure for hydrological models is proposed and tested (i.e., Multiscale Nash-Sutcliffe Criteria and Multiscale Normalized Root Mean Square Error). The main advantage of this method is that it provides a quantitative measure of model performance across different time scales. In the proposed approach, model and observed time series are decomposed using the Discrete Wavelet Transform (known as the à trous wavelet transform), and performance measures of the model are obtained at each time scale. The applicability of the proposed method was explored using various case studies, both real and synthetic. The synthetic case studies included various kinds of errors (e.g., timing error, under- and over-prediction of high and low flows) in outputs from a hydrologic model. The real-world case studies included simulation results of both the process-based Soil Water Assessment Tool (SWAT) model and statistical models, namely the Coupled Wavelet-Volterra (WVC), Artificial Neural Network (ANN), and Auto Regressive Moving Average (ARMA) methods. For the SWAT model, data from the Wainganga and Sind Basins (India) were used, while for the Wavelet-Volterra, ANN and ARMA models, data from the Cauvery River Basin (India) and Fraser River (Canada) were used. The study also explored the effect of the
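
    As a rough illustration of the per-scale scoring described above, the sketch below decomposes observed and simulated series with the stationary (undecimated) wavelet transform in PyWavelets, used here as a stand-in for the à trous transform, and computes a Nash-Sutcliffe efficiency for the detail coefficients at each level. The wavelet, the number of levels, and the synthetic flow series are assumptions.

        import numpy as np
        import pywt

        def multiscale_nse(obs, sim, wavelet="haar", level=4):
            # per-scale Nash-Sutcliffe efficiency from an undecimated wavelet transform
            n = (len(obs) // 2 ** level) * 2 ** level     # SWT needs length % 2**level == 0
            obs = np.asarray(obs[:n], float)
            sim = np.asarray(sim[:n], float)
            obs_c = pywt.swt(obs, wavelet, level=level)   # [(cA_L, cD_L), ..., (cA_1, cD_1)]
            sim_c = pywt.swt(sim, wavelet, level=level)
            nse = {}
            for lvl, ((_, o_d), (_, s_d)) in zip(range(level, 0, -1), zip(obs_c, sim_c)):
                num = np.sum((s_d - o_d) ** 2)
                den = np.sum((o_d - o_d.mean()) ** 2)
                nse[f"detail level {lvl}"] = 1.0 - num / den
            return nse

        # synthetic daily flows (hypothetical); the simulated series is slightly biased and shifted
        t = np.arange(1024)
        observed = 10 + 3 * np.sin(2 * np.pi * t / 365) + np.random.default_rng(1).normal(0, 0.5, t.size)
        simulated = 10 + 2.7 * np.sin(2 * np.pi * t / 365 + 0.1)
        print(multiscale_nse(observed, simulated))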

  10. Quantiles, parametric-select density estimation, and bi-information parameter estimators

    NASA Technical Reports Server (NTRS)

    Parzen, E.

    1982-01-01

    A quantile-based approach to statistical analysis and probability modeling of data is presented which formulates statistical inference problems as functional inference problems in which the parameters to be estimated are density functions. Density estimators can be non-parametric (computed independently of model identified) or parametric-select (approximated by finite parametric models that can provide standard models whose fit can be tested). Exponential models and autoregressive models are approximating densities which can be justified as maximum entropy for respectively the entropy of a probability density and the entropy of a quantile density. Applications of these ideas are outlined to the problems of modeling: (1) univariate data; (2) bivariate data and tests for independence; and (3) two samples and likelihood ratios. It is proposed that bi-information estimation of a density function can be developed by analogy to the problem of identification of regression models.

  11. A new approach for estimating the density of liquids.

    PubMed

    Sakagami, T; Fuchizaki, K; Ohara, K

    2016-10-01

    We propose a novel approach with which to estimate the density of liquids. The approach is based on the assumption that the systems would be structurally similar when viewed at around the length scale (inverse wavenumber) of the first peak of the structure factor, unless their thermodynamic states differ significantly. The assumption was implemented via a similarity transformation to the radial distribution function to extract the density from the structure factor of a reference state with a known density. The method was first tested using two model liquids, and could predict the densities within an error of several percent unless the state in question differed significantly from the reference state. The method was then applied to related real liquids, and satisfactory results were obtained for predicted densities. The possibility of applying the method to amorphous materials is discussed. PMID:27494268

  12. An Infrastructureless Approach to Estimate Vehicular Density in Urban Environments

    PubMed Central

    Sanguesa, Julio A.; Fogue, Manuel; Garrido, Piedad; Martinez, Francisco J.; Cano, Juan-Carlos; Calafate, Carlos T.; Manzoni, Pietro

    2013-01-01

    In Vehicular Networks, communication success usually depends on the density of vehicles, since a higher density allows having shorter and more reliable wireless links. Thus, knowing the density of vehicles in a vehicular communications environment is important, as better opportunities for wireless communication can show up. However, vehicle density is highly variable in time and space. This paper deals with the importance of predicting the density of vehicles in vehicular environments to take decisions for enhancing the dissemination of warning messages between vehicles. We propose a novel mechanism to estimate the vehicular density in urban environments. Our mechanism uses as input parameters the number of beacons received per vehicle, and the topological characteristics of the environment where the vehicles are located. Simulation results indicate that, unlike previous proposals solely based on the number of beacons received, our approach is able to accurately estimate the vehicular density, and therefore it could support more efficient dissemination protocols for vehicular environments, as well as improve previously proposed schemes. PMID:23435054

  13. ENSO forecast using a wavelet-based decomposition

    NASA Astrophysics Data System (ADS)

    Deliège, Adrien; Nicolay, Samuel; Fettweis, Xavier

    2015-04-01

    The aim of this work is to introduce a new method for forecasting major El Niño/La Niña events with the use of a wavelet-based mode decomposition. These major events are related to sea surface temperature anomalies in the tropical Pacific Ocean: anomalous warmings are known as El Niño events, while excessive coolings are referred to as La Niña episodes. These climatological phenomena are of primary importance since they are involved in many teleconnections; predicting them long before they occur is therefore a crucial concern. First, we perform a wavelet transform (WT) of the monthly sampled El Niño Southern Oscillation 3.4 index (from 1950 to present) and compute the associated scale spectrum, which can be seen as the energy carried in the WT as a function of scale. The spectrum exhibits five peaks, corresponding to time scales of about 7, 20, 31, 43 and 61 months respectively. Therefore, the Niño 3.4 signal can be decomposed into five dominant oscillating components with time-varying amplitudes, the latter being given by the modulus of the WT at the associated pseudo-periods. The reconstruction of the index based on these five components is accurate, since more than 93% of the El Niño/La Niña events of the last 60 years are recovered and no major event is erroneously predicted. Then, the components are smoothly extrapolated using polynomials and added together, giving forecasts of the Niño 3.4 index several years ahead. In order to increase the reliability of the forecasts, we perform hindcasts of several months (i.e. retroactive forecasts) which can be validated against the existing data. It turns out that most of the major events can be accurately predicted up to three years in advance, which makes our methodology competitive for such forecasts. Finally, we discuss the El Niño conditions currently underway and give indications about the next La Niña event.
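
    A schematic version of the pipeline described above is sketched below: a continuous wavelet transform of the index, an energy-per-scale spectrum, extraction of the transform rows at the spectrum's largest local maxima as the dominant oscillating components, and polynomial extrapolation of those components. The Morlet wavelet, the scale range, the polynomial degree and the synthetic stand-in for the Niño 3.4 series are all assumptions; amplitude normalization of the extracted components is glossed over.

        import numpy as np
        import pywt

        def dominant_components(index, scales, n_peaks=5, wavelet="morl"):
            # CWT rows at the n_peaks largest local maxima of the scale spectrum
            coefs, _ = pywt.cwt(np.asarray(index, float), scales, wavelet)
            spectrum = (coefs ** 2).mean(axis=1)          # energy carried at each scale
            peaks = [i for i in range(1, len(scales) - 1)
                     if spectrum[i] > spectrum[i - 1] and spectrum[i] > spectrum[i + 1]]
            peaks = sorted(peaks, key=lambda i: spectrum[i], reverse=True)[:n_peaks]
            return [coefs[i] for i in peaks]

        def extrapolate(component, months_ahead=24, fit_window=120, degree=3):
            # fit a low-order polynomial to the recent part of a component and extend it
            t = np.arange(len(component))
            p = np.polyfit(t[-fit_window:], component[-fit_window:], degree)
            return np.polyval(p, np.arange(len(component), len(component) + months_ahead))

        # hypothetical monthly index (replace with the real Nino 3.4 series)
        rng = np.random.default_rng(2)
        months = np.arange(780)
        nino34 = np.sin(2 * np.pi * months / 43) + 0.5 * rng.normal(size=months.size)
        components = dominant_components(nino34, scales=np.arange(2, 120))
        forecast = sum(extrapolate(c) for c in components)    # crude multi-year outlook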

  14. Double sampling to estimate density and population trends in birds

    USGS Publications Warehouse

    Bart, Jonathan; Earnst, Susan L.

    2002-01-01

    We present a method for estimating density of nesting birds based on double sampling. The approach involves surveying a large sample of plots using a rapid method such as uncorrected point counts, variable circular plot counts, or the recently suggested double-observer method. A subsample of those plots is also surveyed using intensive methods to determine actual density. The ratio of the mean count on those plots (using the rapid method) to the mean actual density (as determined by the intensive searches) is used to adjust results from the rapid method. The approach works well when results from the rapid method are highly correlated with actual density. We illustrate the method with three years of shorebird surveys from the tundra in northern Alaska. In the rapid method, surveyors covered ~10 ha h^-1 and surveyed each plot a single time. The intensive surveys involved three thorough searches, required ~3 h ha^-1, and took 20% of the study effort. Surveyors using the rapid method detected an average of 79% of birds present. That detection ratio was used to convert the index obtained in the rapid method into an essentially unbiased estimate of density. Trends estimated from several years of data would also be essentially unbiased. Other advantages of double sampling are that (1) the rapid method can be changed as new methods become available, (2) domains can be compared even if detection rates differ, (3) total population size can be estimated, and (4) valuable ancillary information (e.g. nest success) can be obtained on intensive plots with little additional effort. We suggest that double sampling be used to test the assumption that rapid methods, such as variable circular plot and double-observer methods, yield density estimates that are essentially unbiased. The feasibility of implementing double sampling in a range of habitats needs to be evaluated.
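
    In code, the adjustment described above is a simple ratio estimator; the plot counts below are hypothetical.

        import numpy as np

        # intensive subsample: rapid-method counts and actual densities on the same plots
        rapid_on_intensive = np.array([6.0, 9.0, 4.0, 7.0, 11.0])   # birds counted rapidly
        actual_density = np.array([8.0, 11.0, 5.0, 9.0, 14.0])      # birds found by intensive search

        detection_ratio = rapid_on_intensive.mean() / actual_density.mean()   # ~0.79 in the study

        # all plots surveyed with the rapid method only
        rapid_all = np.array([5.0, 7.0, 3.0, 8.0, 10.0, 6.0, 9.0, 4.0])
        adjusted_density = rapid_all.mean() / detection_ratio    # essentially unbiased estimate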

  15. Comparison of neuron selection algorithms of wavelet-based neural network

    NASA Astrophysics Data System (ADS)

    Mei, Xiaodan; Sun, Sheng-He

    2001-09-01

    Wavelet networks have increasingly received considerable attention in various fields such as signal processing, pattern recognition, robotics and automatic control. Recently, researchers have become interested in employing wavelet functions as activation functions and have obtained satisfying results in approximating and localizing signals. However, function estimation becomes more and more complex as the input dimension grows. The hidden neurons contribute to minimizing the approximation error, so it is important to study suitable algorithms for neuron selection. An exhaustive search procedure is clearly unsatisfactory when the number of neurons is large. The study in this paper focuses on which type of selection algorithm has faster convergence and lower error for signal approximation. Therefore, the Genetic Algorithm (GA) and the Tabu Search Algorithm (TSA) are studied and compared in a series of experiments. This paper first presents the structure of the wavelet-based neural network, then introduces the two selection algorithms, discusses their properties and learning processes, and analyzes the experiments and results. Two wavelet functions were used to test the algorithms. The experiments show that the Tabu Search selection algorithm performs better than the Genetic selection algorithm: TSA has a faster convergence rate than GA under the same stopping criterion.

  16. Multibaseline polarimetric synthetic aperture radar tomography of forested areas using wavelet-based distribution compressive sensing

    NASA Astrophysics Data System (ADS)

    Liang, Lei; Li, Xinwu; Gao, Xizhang; Guo, Huadong

    2015-01-01

    The three-dimensional (3-D) structure of forests, especially the vertical structure, is an important parameter of forest ecosystem modeling for monitoring ecological change. Synthetic aperture radar tomography (TomoSAR) provides scene reflectivity estimation of vegetation along elevation coordinates. Due to the advantages of super-resolution imaging and a small number of measurements, distribution compressive sensing (DCS) inversion techniques for polarimetric SAR tomography were successfully developed and applied. This paper addresses the 3-D imaging of forested areas based on the framework of DCS using fully polarimetric (FP) multibaseline SAR interferometric (MB-InSAR) tomography at the P-band. A new DCS-based FP TomoSAR method is proposed: a new wavelet-based distributed compressive sensing FP TomoSAR method (FP-WDCS TomoSAR method). The method takes advantage of the joint sparsity between polarimetric channel signals in the wavelet domain to jointly inverse the reflectivity profiles in each channel. The method not only allows high accuracy and super-resolution imaging with a low number of acquisitions, but can also obtain the polarization information of the vertical structure of forested areas. The effectiveness of the techniques for polarimetric SAR tomography is demonstrated using FP P-band airborne datasets acquired by the ONERA SETHI airborne system over a test site in Paracou, French Guiana.

  17. Resolution enhancement of composite spectra using wavelet-based derivative spectrometry.

    PubMed

    Kharintsev, S S; Kamalova, D I; Salakhov, M Kh; Sevastianov, A A

    2005-01-01

    An approach based on the use of the continuous wavelet transform (CWT) in derivative spectrometry (DS) is considered. Within this framework we develop a numerical differentiation algorithm with continuous wavelets for improving the resolution of composite spectra. The wavelet-based derivative spectrometry (WDS) method yields better contrast in the differential curves than the conventional derivative spectrometry method. A main advantage is that, unlike DS, WDS gives stable estimates of the derivative in the wavelet domain without requiring regularization. The wavelet shape and the information redundancy are of the greatest importance when the continuous wavelet transform is used. As an appropriate wavelet, we propose using the nth derivative of a component with an a priori known shape. The distribution of energy across scales allows one to determine a unique wavelet projection and thereby avoid information redundancy. A comparative study of WDS and DS with the statistical regularization method (SRM) is made; in particular, the limits of applicability of each are given. Examples of the application of both DS and WDS to improving the resolution of synthetic composite bands and of real-world composite bands from molecular spectroscopy are given. PMID:15556433
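
    The authors take the nth derivative of a component with a priori known shape as the analyzing wavelet; as a generic stand-in, the sketch below uses the derivative-of-Gaussian wavelets built into PyWavelets to obtain a smoothed second-derivative curve of a composite band without explicit regularization. The band shapes, the scale and the derivative order are hypothetical.

        import numpy as np
        import pywt

        def wavelet_derivative(spectrum, order=2, scale=8.0):
            # smoothed nth-derivative curve via a CWT with a 'gausN' (derivative-of-Gaussian) wavelet
            coefs, _ = pywt.cwt(np.asarray(spectrum, float), [scale], f"gaus{order}")
            return coefs[0]

        # composite band: two strongly overlapping Gaussian components (synthetic)
        x = np.linspace(-10.0, 10.0, 1000)
        band = np.exp(-(x - 1.0) ** 2 / 2.0) + 0.8 * np.exp(-(x + 1.5) ** 2 / 1.2)
        d2 = wavelet_derivative(band, order=2)   # sharper contrast between the overlapped peaks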

  18. Extracting galactic structure parameters from multivariated density estimation

    NASA Technical Reports Server (NTRS)

    Chen, B.; Creze, M.; Robin, A.; Bienayme, O.

    1992-01-01

    Multivariate statistical analysis, including cluster analysis (unsupervised classification), discriminant analysis (supervised classification) and principal component analysis (a dimensionality reduction method), together with nonparametric density estimation, has been successfully used to search for meaningful associations in the 5-dimensional space of observables between observed points and sets of simulated points generated from a synthetic approach to galaxy modelling. These methodologies can be applied as new tools to obtain information about hidden structure that would otherwise be unrecognizable, and to place important constraints on the space distribution of the various stellar populations in the Milky Way. In this paper, we concentrate on illustrating how nonparametric density estimation can be used to substitute for the true densities of both the simulated sample and the real sample in the five-dimensional space. In order to fit the model-predicted densities to reality, we derive a system of n equations (where n is the total number of observed points) in m unknown parameters (where m is the number of predefined groups). A least-squares estimation then allows us to determine the density law of the different groups and components in the Galaxy. The output from our software, which can be used in many research fields, also gives, via a Bayes rule, the systematic error between the model and the observations.
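
    The fitting step described above can be sketched as follows: kernel density estimates stand in for the unknown densities of each simulated group and of the observed sample, and the group weights are obtained by non-negative least squares over one equation per observed point. The data, the group definitions and the choice of Gaussian kernels are synthetic assumptions for illustration only.

        import numpy as np
        from scipy.optimize import nnls
        from scipy.stats import gaussian_kde

        rng = np.random.default_rng(3)

        # synthetic "model" samples for m = 2 groups in a 5-dimensional observable space
        group_samples = [rng.normal(0.0, 1.0, size=(5, 400)),
                         rng.normal(3.0, 1.5, size=(5, 400))]
        # synthetic "observed" sample, actually a 3:1 mixture of the two groups
        observed = np.hstack([rng.normal(0.0, 1.0, size=(5, 300)),
                              rng.normal(3.0, 1.5, size=(5, 100))])

        # one equation per observed point:  sum_j w_j * f_j(x_i)  ~=  f_obs(x_i)
        A = np.column_stack([gaussian_kde(g)(observed) for g in group_samples])
        b = gaussian_kde(observed)(observed)
        weights, _ = nnls(A, b)      # relative contributions of the predefined groups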

  19. Face Value: Towards Robust Estimates of Snow Leopard Densities

    PubMed Central

    2015-01-01

    When densities of large carnivores fall below certain thresholds, dramatic ecological effects can follow, leading to oversimplified ecosystems. Understanding the population status of such species remains a major challenge as they occur in low densities and their ranges are wide. This paper describes the use of non-invasive data collection techniques combined with recent spatial capture-recapture methods to estimate the density of snow leopards Panthera uncia. It also investigates the influence of environmental and human activity indicators on their spatial distribution. A total of 60 camera traps were systematically set up during a three-month period over a 480 km^2 study area in Qilianshan National Nature Reserve, Gansu Province, China. We recorded 76 separate snow leopard captures over 2,906 trap-days, representing an average capture success of 2.62 captures/100 trap-days. We identified a total of 20 unique individuals from photographs and estimated snow leopard density at 3.31 (SE = 1.01) individuals per 100 km^2. Results of our simulation exercise indicate that our estimates from the spatial capture-recapture models were not optimal with respect to bias and precision (RMSEs for density parameters less than or equal to 0.87). Our results underline the critical challenge of achieving sufficient sample sizes of snow leopard captures and recaptures. Possible performance improvements are discussed, principally optimising effective camera capture and photographic data quality. PMID:26322682

  20. Density estimation in tiger populations: combining information for strong inference

    USGS Publications Warehouse

    Gopalaswamy, Arjun M.; Royle, J. Andrew; Delampady, Mohan; Nichols, James D.; Karanth, K. Ullas; Macdonald, David W.

    2012-01-01

    A productive way forward in studies of animal populations is to efficiently make use of all the information available, either as raw data or as published sources, on critical parameters of interest. In this study, we demonstrate two approaches to the use of multiple sources of information on a parameter of fundamental interest to ecologists: animal density. The first approach produces estimates simultaneously from two different sources of data. The second approach was developed for situations in which initial data collection and analysis are followed up by subsequent data collection and prior knowledge is updated with new data using a stepwise process. Both approaches are used to estimate density of a rare and elusive predator, the tiger, by combining photographic and fecal DNA spatial capture–recapture data. The model, which combined information, provided the most precise estimate of density (8.5 ± 1.95 tigers/100 km2 [posterior mean ± SD]) relative to a model that utilized only one data source (photographic, 12.02 ± 3.02 tigers/100 km2 and fecal DNA, 6.65 ± 2.37 tigers/100 km2). Our study demonstrates that, by accounting for multiple sources of available information, estimates of animal density can be significantly improved.

  1. Inversion Theorem Based Kernel Density Estimation for the Ordinary Least Squares Estimator of a Regression Coefficient

    PubMed Central

    Wang, Dongliang; Hutson, Alan D.

    2016-01-01

    The traditional confidence interval associated with the ordinary least squares estimator of a linear regression coefficient is sensitive to non-normality of the underlying distribution. In this article, we develop a novel kernel density estimator for the ordinary least squares estimator by utilizing well-defined inversion-based kernel smoothing techniques to estimate the conditional probability density of the dependent random variable. Simulation results show that, given a small sample size, our method significantly increases the power as compared with Wald-type confidence intervals. The proposed approach is illustrated via an application to a classic small data set originally from Graybill (1961). PMID:26924882

  2. Ionospheric electron density profile estimation using commercial AM broadcast signals

    NASA Astrophysics Data System (ADS)

    Yu, De; Ma, Hong; Cheng, Li; Li, Yang; Zhang, Yufeng; Chen, Wenjun

    2015-08-01

    A new method for estimating the bottom electron density profile by using commercial AM broadcast signals as non-cooperative signals is presented in this paper. Without requiring any dedicated transmitters, the required input data are the measured elevation angles of signals transmitted from the known locations of broadcast stations. The input data are inverted for the QPS model parameters depicting the electron density profile of the signal's reflection area by using a probabilistic inversion technique. This method has been validated on synthesized data and used with the real data provided by an HF direction-finding system situated near the city of Wuhan. The estimated parameters obtained by the proposed method have been compared with vertical ionosonde data and have been used to locate the Shijiazhuang broadcast station. The simulation and experimental results indicate that the proposed ionospheric sounding method is feasible for obtaining useful electron density profiles.

  3. Density estimates for deep-sea gastropod assemblages

    NASA Astrophysics Data System (ADS)

    Rex, Michael A.; Etter, Ron J.; Nimeskern, Phillip W.

    1990-04-01

    Extensive boxcore sampling in the Atlantic Continental Slope and Rise study permitted the first precise measurement of gastropod density in the bathyal region of the deep sea. Gastropod density decreases significantly and exponentially with depth (250-3494 m), and density-depth regression lines do not differ significantly in either slope or elevation over horizontal scales of approximately 1000 km. The subclasses Prosobranchia and Opisthobranchia both show significant decreases in density with depth. Predatory taxa (neogastropods and opisthobranchs) exhibit significantly steeper declines in density with depth than do taxa dominated by deposit feeders (archaeogastropods and mesogastropods). Members of upper trophic levels may be more sensitive to the reduction in nutrient input with increased depth because of the energy loss between trophic levels in the food chain. A comparison of density estimates of gastropods from boxcore, grab and anchor-dredge samples taken in the same region revealed no significant differences in density-depth relationships among the sampling methods. A synthesis of data from 777 boxcore samples collected from the Atlantic, Caribbean and Pacific over a depth range of 250-7298 m indicates that the decline in gastropod density with depth is a global trend with only moderate inter-regional variation.
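
    The exponential density-depth relationship reported above is the usual log-linear fit; a minimal sketch with hypothetical station data:

        import numpy as np

        depth = np.array([250.0, 500.0, 1000.0, 1500.0, 2000.0, 2500.0, 3000.0, 3500.0])  # m
        density = np.array([12.0, 9.1, 5.2, 3.0, 1.9, 1.1, 0.7, 0.4])  # individuals per sample unit

        # density ~ a * exp(b * depth)  <=>  ln(density) = ln(a) + b * depth
        b, log_a = np.polyfit(depth, np.log(density), 1)
        halving_depth = np.log(2.0) / -b     # depth increase over which density halves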

  4. Evaluating lidar point densities for effective estimation of aboveground biomass

    USGS Publications Warehouse

    Wu, Zhuoting; Dye, Dennis G.; Stoker, Jason; Vogel, John M.; Velasco, Miguel G.; Middleton, Barry R.

    2016-01-01

    The U.S. Geological Survey (USGS) 3D Elevation Program (3DEP) was recently established to provide airborne lidar data coverage on a national scale. As part of a broader research effort of the USGS to develop an effective remote sensing-based methodology for the creation of an operational biomass Essential Climate Variable (Biomass ECV) data product, we evaluated the performance of airborne lidar data at various pulse densities against Landsat 8 satellite imagery in estimating aboveground biomass for forests and woodlands in a study area in east-central Arizona, U.S. High point density airborne lidar data were randomly sampled to produce five lidar datasets with reduced densities ranging from 0.5 to 8 point(s)/m^2, corresponding to the point density range of 3DEP to provide national lidar coverage over time. Lidar-derived aboveground biomass estimate errors showed an overall decreasing trend as lidar point density increased from 0.5 to 8 points/m^2. Landsat 8-based aboveground biomass estimates produced errors larger than the lowest lidar point density of 0.5 point/m^2, and therefore Landsat 8 observations alone were ineffective relative to airborne lidar for generating a Biomass ECV product, at least for the forest and woodland vegetation types of the Southwestern U.S. While a national Biomass ECV product with optimal accuracy could potentially be achieved with 3DEP data at 8 points/m^2, our results indicate that even lower density lidar data could be sufficient to provide a national Biomass ECV product with accuracies significantly higher than that from Landsat observations alone.

  5. Estimating Density Gradients and Drivers from 3D Ionospheric Imaging

    NASA Astrophysics Data System (ADS)

    Datta-Barua, S.; Bust, G. S.; Curtis, N.; Reynolds, A.; Crowley, G.

    2009-12-01

    The transition regions at the edges of the ionospheric storm-enhanced density (SED) are important for a detailed understanding of the mid-latitude physical processes occurring during major magnetic storms. At the boundary, the density gradients are evidence of the drivers that link the larger processes of the SED, with its connection to the plasmasphere and prompt-penetration electric fields, to the smaller irregularities that result in scintillations. For this reason, we present our estimates of both the plasma variation with horizontal and vertical spatial scale of 10 - 100 km and the plasma motion within and along the edges of the SED. To estimate the density gradients, we use Ionospheric Data Assimilation Four-Dimensional (IDA4D), a mature data assimilation algorithm that has been developed over several years and applied to investigations of polar cap patches and space weather storms [Bust and Crowley, 2007; Bust et al., 2007]. We use the density specification produced by IDA4D with a new tool for deducing ionospheric drivers from 3D time-evolving electron density maps, called Estimating Model Parameters from Ionospheric Reverse Engineering (EMPIRE). The EMPIRE technique has been tested on simulated data from TIMEGCM-ASPEN and on IDA4D-based density estimates with ongoing validation from Arecibo ISR measurements [Datta-Barua et al., 2009a; 2009b]. We investigate the SED that formed during the geomagnetic super storm of November 20, 2003. We run IDA4D at low-resolution continent-wide, and then re-run it at high (~10 km horizontal and ~5-20 km vertical) resolution locally along the boundary of the SED, where density gradients are expected to be highest. We input the high-resolution estimates of electron density to EMPIRE to estimate the ExB drifts and field-aligned plasma velocities along the boundaries of the SED. We expect that these drivers contribute to the density structuring observed along the SED during the storm. Bust, G. S. and G. Crowley (2007

  6. Estimation of Enceladus Plume Density Using Cassini Flight Data

    NASA Technical Reports Server (NTRS)

    Wang, Eric K.; Lee, Allan Y.

    2011-01-01

    The Cassini spacecraft was launched on October 15, 1997 by a Titan 4B launch vehicle. After an interplanetary cruise of almost seven years, it arrived at Saturn on June 30, 2004. In 2005, Cassini completed three flybys of Enceladus, a small, icy satellite of Saturn. Observations made during these flybys confirmed the existence of water vapor plumes in the south polar region of Enceladus. Five additional low-altitude flybys of Enceladus were successfully executed in 2008-9 to better characterize these watery plumes. During some of these Enceladus flybys, the spacecraft attitude was controlled by a set of three reaction wheels. When the disturbance torque imparted on the spacecraft was predicted to exceed the control authority of the reaction wheels, thrusters were used to control the spacecraft attitude. Using telemetry data of reaction wheel rates or thruster on-times collected from four low-altitude Enceladus flybys (in 2008-10), one can reconstruct the time histories of the Enceladus plume jet density. The 1 sigma uncertainty of the estimated density is 5.9-6.7% (depending on the density estimation methodology employed). These plume density estimates could be used to confirm measurements made by other onboard science instruments and to support the modeling of Enceladus plume jets.

  7. Quantitative volumetric breast density estimation using phase contrast mammography

    NASA Astrophysics Data System (ADS)

    Wang, Zhentian; Hauser, Nik; Kubik-Huch, Rahel A.; D'Isidoro, Fabio; Stampanoni, Marco

    2015-05-01

    Phase contrast mammography using a grating interferometer is an emerging technology for breast imaging. It provides complementary information to the conventional absorption-based methods. Additional diagnostic values could be further obtained by retrieving quantitative information from the three physical signals (absorption, differential phase and small-angle scattering) yielded simultaneously. We report a non-parametric quantitative volumetric breast density estimation method by exploiting the ratio (dubbed the R value) of the absorption signal to the small-angle scattering signal. The R value is used to determine breast composition and the volumetric breast density (VBD) of the whole breast is obtained analytically by deducing the relationship between the R value and the pixel-wise breast density. The proposed method is tested by a phantom study and a group of 27 mastectomy samples. In the clinical evaluation, the estimated VBD values from both cranio-caudal (CC) and anterior-posterior (AP) views are compared with the ACR scores given by radiologists to the pre-surgical mammograms. The results show that the estimated VBD results using the proposed method are consistent with the pre-surgical ACR scores, indicating the effectiveness of this method in breast density estimation. A positive correlation is found between the estimated VBD and the diagnostic ACR score for both the CC view (p = 0.033) and AP view (p = 0.001). A linear regression between the results of the CC view and AP view showed a correlation coefficient γ = 0.77, which indicates the robustness of the proposed method and the quantitative character of the additional information obtained with our approach.

  8. Wavelet Based Analytical Expressions to Steady State Biofilm Model Arising in Biochemical Engineering.

    PubMed

    Padma, S; Hariharan, G

    2016-06-01

    In this paper, we have developed an efficient wavelet-based approximation method for a steady-state biofilm model arising in enzyme kinetics. A Chebyshev wavelet-based approximation method is successfully introduced for solving the nonlinear steady-state biofilm reaction model. To the best of our knowledge, no rigorous wavelet-based solution has previously been reported for the proposed model. Analytical solutions for the substrate concentration have been derived for all values of the parameters δ and S_L. The power of this tractable method is confirmed. Some numerical examples are presented to demonstrate the validity and applicability of the wavelet method. Moreover, the use of Chebyshev wavelets is found to be simple, efficient, flexible, convenient, computationally inexpensive and attractive. PMID:26661721

  9. Scatterer Number Density Considerations in Reference Phantom Based Attenuation Estimation

    PubMed Central

    Rubert, Nicholas; Varghese, Tomy

    2014-01-01

    Attenuation estimation and imaging has the potential to be a valuable tool for tissue characterization, particularly for indicating the extent of thermal ablation therapy in the liver. Often the performance of attenuation estimation algorithms is characterized with numerical simulations or tissue mimicking phantoms containing a high scatterer number density (SND). This ensures an ultrasound signal with a Rayleigh distributed envelope and an SNR approaching 1.91. However, biological tissue often fails to exhibit Rayleigh scattering statistics. For example, across 1,647 ROIs in 5 ex vivo bovine livers we find an envelope SNR of 1.10 ± 0.12 when imaged with the VFX 9L4 linear array transducer at a center frequency of 6.0 MHz on a Siemens S2000 scanner. In this article we examine attenuation estimation in numerical phantoms, TM phantoms with variable SNDs, and ex vivo bovine liver prior to and following thermal coagulation. We find that reference phantom based attenuation estimation is robust to small deviations from Rayleigh statistics. However, in tissue with low SND, large deviations in envelope SNR from 1.91 lead to subsequently large increases in attenuation estimation variance. At the same time, low SND is not found to be a significant source of bias in the attenuation estimate. For example, we find the standard deviation of attenuation slope estimates increases from 0.07 dB/cm MHz to 0.25 dB/cm MHz as the envelope SNR decreases from 1.78 to 1.01 when estimating attenuation slope in TM phantoms with a large estimation kernel size (16 mm axially by 15 mm laterally). Meanwhile, the bias in the attenuation slope estimates is found to be negligible (< 0.01 dB/cm MHz). We also compare results obtained with reference phantom based attenuation estimates in ex vivo bovine liver and thermally coagulated bovine liver. PMID:24726800
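
    A quick check of whether backscatter follows Rayleigh statistics is the envelope SNR quoted above (about 1.91 for fully developed speckle); a minimal sketch, assuming access to a beamformed RF segment:

        import numpy as np
        from scipy.signal import hilbert

        def envelope_snr(rf_segment):
            # envelope SNR = mean/std of the envelope; sqrt(pi/(4-pi)) ~ 1.91 for Rayleigh speckle
            env = np.abs(hilbert(np.asarray(rf_segment, float)))
            return env.mean() / env.std()

        # Gaussian white noise has a Rayleigh envelope, so this prints a value close to 1.91
        rng = np.random.default_rng(4)
        print(envelope_snr(rng.normal(size=4096)))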

  10. Can modeling improve estimation of desert tortoise population densities?

    USGS Publications Warehouse

    Nussear, K.E.; Tracy, C.R.

    2007-01-01

    The federally listed desert tortoise (Gopherus agassizii) is currently monitored using distance sampling to estimate population densities. Distance sampling, as with many other techniques for estimating population density, assumes that it is possible to quantify the proportion of animals available to be counted in any census. Because desert tortoises spend much of their life in burrows, and the proportion of tortoises in burrows at any time can be extremely variable, this assumption is difficult to meet. This proportion of animals available to be counted is used as a correction factor (g0) in distance sampling and has been estimated from daily censuses of small populations of tortoises (6-12 individuals). These censuses are costly and produce imprecise estimates of g0 due to small sample sizes. We used data on tortoise activity from a large (N = 150) experimental population to model activity as a function of the biophysical attributes of the environment, but these models did not improve the precision of estimates from the focal populations. Thus, to evaluate how much of the variance in tortoise activity is apparently not predictable, we assessed whether activity on any particular day can predict activity on subsequent days with essentially identical environmental conditions. Tortoise activity was only weakly correlated on consecutive days, indicating that behavior was not repeatable or consistent among days with similar physical environments. © 2007 by the Ecological Society of America.

  11. Density Estimation of Comet 103P/Hartley 2

    NASA Astrophysics Data System (ADS)

    Bowling, T.; Richardson, J.; Melosh, J.; Thomas, P.

    2011-10-01

    Our analysis was constrained to the region of the neck that was directly imaged and well illuminated during the encounter. A homogeneous density and a rotation period of 18.34 hours are assumed. Only rotation about the principal axis was accounted for. The principal rotation period was likely shorter on timescales effective for surface modification [2]. Additionally, spin components about minor axes introduce a further degree of error. A global minimum is found for a bulk density ρ = 220 kg m^-3 (one sigma = 130-620 kg m^-3), which corresponds to a comet mass of m = 1.84 x 10^11 kg (one sigma = 1.51-5.18 x 10^11 kg). This is lower than, but within error ranges of, previous comet density estimates (sec. 4.2 of [3]).

  12. Estimating black bear density using DNA data from hair snares

    USGS Publications Warehouse

    Gardner, B.; Royle, J. Andrew; Wegan, M.T.; Rainbolt, R.E.; Curtis, P.D.

    2010-01-01

    DNA-based mark-recapture has become a methodological cornerstone of research focused on bear species. The objective of such studies is often to estimate population size; however, doing so is frequently complicated by movement of individual bears. Movement affects the probability of detection and the assumption of closure of the population required in most models. To mitigate the bias caused by movement of individuals, population size and density estimates are often adjusted using ad hoc methods, including buffering the minimum polygon of the trapping array. We used a hierarchical, spatial capture-recapture model that contains explicit components for the spatial point process that governs the distribution of individuals and their exposure to (via movement), and detection by, traps. We modeled detection probability as a function of each individual's distance to the trap and an indicator variable for previous capture to account for possible behavioral responses. We applied our model to a 2006 hair-snare study of a black bear (Ursus americanus) population in northern New York, USA. Based on the microsatellite marker analysis of collected hair samples, 47 individuals were identified. We estimated mean density at 0.20 bears/km^2. A positive estimate of the indicator variable suggests that bears are attracted to baited sites; therefore, including a trap-dependence covariate is important when using bait to attract individuals. Bayesian analysis of the model was implemented in WinBUGS, and we provide the model specification. The model can be applied to any spatially organized trapping array (hair snares, camera traps, mist nets, etc.) to estimate density and can also account for heterogeneity and covariate information at the trap or individual level. © The Wildlife Society.
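
    The detection model described above (distance to trap plus a previous-capture indicator) can be sketched as a half-normal encounter probability with a behavioral-response term; the parameter values below are hypothetical, and in the actual analysis this sits inside a Bayesian spatial capture-recapture likelihood (fit in WinBUGS by the authors).

        import numpy as np

        def detection_prob(dist_km, prev_capture, p0=0.05, sigma=2.0, beta=0.8):
            # half-normal decline with distance; beta shifts the baseline (on the logit
            # scale) for individuals already captured at a baited snare
            logit_p0 = np.log(p0 / (1.0 - p0)) + beta * prev_capture
            baseline = 1.0 / (1.0 + np.exp(-logit_p0))
            return baseline * np.exp(-dist_km ** 2 / (2.0 * sigma ** 2))

        # activity centre 1.2 km from a snare: first visit vs. after a prior capture
        p_first = detection_prob(1.2, prev_capture=0)
        p_after = detection_prob(1.2, prev_capture=1)   # larger, i.e. attraction to bait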

  13. Density estimators in particle hydrodynamics. DTFE versus regular SPH

    NASA Astrophysics Data System (ADS)

    Pelupessy, F. I.; Schaap, W. E.; van de Weygaert, R.

    2003-05-01

    We present the results of a study comparing density maps reconstructed by the Delaunay Tessellation Field Estimator (DTFE) and by regular SPH kernel-based techniques. The density maps are constructed from the outcome of an SPH particle hydrodynamics simulation of a multiphase interstellar medium. The comparison between the two methods clearly demonstrates the superior performance of the DTFE with respect to conventional SPH methods, in particular at locations where SPH appears to fail. Filamentary and sheetlike structures form telling examples. The DTFE is a fully self-adaptive technique for reconstructing continuous density fields from discrete particle distributions, and is based upon the corresponding Delaunay tessellation. Its principal asset is its complete independence of arbitrary smoothing functions and parameters specifying the properties of these. As a result it manages to faithfully reproduce the anisotropies of the local particle distribution and through its adaptive and local nature proves to be optimally suited for uncovering the full structural richness in the density distribution. Through the improvement in local density estimates, calculations invoking the DTFE will yield a much better representation of physical processes which depend on density. This will be crucial in the case of feedback processes, which play a major role in galaxy and star formation. The presented results form an encouraging step towards the application and insertion of the DTFE in astrophysical hydrocodes. We describe an outline for the construction of a particle hydrodynamics code in which the DTFE replaces kernel-based methods. Further discussion addresses the issue and possibilities for a moving grid-based hydrocode invoking the DTFE, and Delaunay tessellations, in an attempt to combine the virtues of the Eulerian and Lagrangian approaches.
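
    For reference, the DTFE density at a particle in d dimensions is (d + 1) times the particle mass divided by the volume of its contiguous Voronoi region, i.e. of all Delaunay simplices sharing that particle; a minimal 2-D sketch with SciPy and unit masses (not the hydrocode implementation discussed in the paper):

        import numpy as np
        from scipy.spatial import Delaunay

        def dtfe_density_2d(points):
            # rho_i = (d + 1) / (total area of the Delaunay triangles touching point i)
            tri = Delaunay(points)
            contiguous_area = np.zeros(len(points))
            for simplex in tri.simplices:
                a, b, c = points[simplex]
                area = 0.5 * abs((b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]))
                contiguous_area[simplex] += area
            d = points.shape[1]
            return (d + 1) / contiguous_area

        rng = np.random.default_rng(5)
        particles = rng.random((500, 2))
        rho = dtfe_density_2d(particles)   # adaptive, parameter-free density per particle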

  14. Structural Reliability Using Probability Density Estimation Methods Within NESSUS

    NASA Technical Reports Server (NTRS)

    Chamis, Chrisos C. (Technical Monitor); Godines, Cody Ric

    2003-01-01

    A reliability analysis studies a mathematical model of a physical system taking into account uncertainties of design variables; common results are estimations of a response density, which also implies estimations of its parameters. Some common density parameters include the mean value, the standard deviation, and specific percentile(s) of the response, which are measures of central tendency, variation, and probability regions, respectively. Reliability analyses are important since the results can lead to different designs by calculating the probability of observing safe responses in each of the proposed designs. All of this is done at the expense of added computational time as compared to a single deterministic analysis, which will result in one value of the response out of many that make up the density of the response. Sampling methods, such as Monte Carlo (MC) and Latin hypercube sampling (LHS), can be used to perform reliability analyses and can compute nonlinear response density parameters even if the response is dependent on many random variables. Hence, both methods are very robust; however, they are computationally expensive to use in the estimation of the response density parameters. Both methods are 2 of 13 stochastic methods that are contained within the Numerical Evaluation of Stochastic Structures Under Stress (NESSUS) program. NESSUS is a probabilistic finite element analysis (FEA) program that was developed through funding from NASA Glenn Research Center (GRC). It has the additional capability of being linked to other analysis programs; therefore, probabilistic fluid dynamics, fracture mechanics, and heat transfer are only a few of what is possible with this software. The LHS method is the newest addition to the stochastic methods within NESSUS. Part of this work was to enhance NESSUS with the LHS method. The new LHS module is complete, has been successfully integrated with NESSUS, and has been used to study four different test cases that have been
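
    As a concrete illustration of the sampling methods mentioned above, here is a small hand-rolled Latin hypercube sampler and the response-density parameters it yields for a toy response function; the response and the variable distributions are hypothetical, not the NESSUS test cases.

        import numpy as np
        from scipy.stats import norm

        def latin_hypercube(n_samples, n_vars, rng):
            # one uniform draw per stratum, with strata permuted independently per variable
            strata = np.stack([rng.permutation(n_samples) for _ in range(n_vars)], axis=1)
            return (strata + rng.random((n_samples, n_vars))) / n_samples

        rng = np.random.default_rng(6)
        u = latin_hypercube(1000, 2, rng)
        x = norm.ppf(u, loc=[10.0, 5.0], scale=[2.0, 1.0])   # map strata to normal design variables
        response = x[:, 0] ** 2 / x[:, 1]                    # toy response of the "structure"
        mean, std = response.mean(), response.std(ddof=1)
        p99 = np.percentile(response, 99)                    # a percentile of the response density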

  15. Wavelet-based cross-correlation analysis of structure scaling in turbulent clouds

    NASA Astrophysics Data System (ADS)

    Arshakian, Tigran G.; Ossenkopf, Volker

    2016-01-01

    Aims: We propose a statistical tool to compare the scaling behaviour of turbulence in pairs of molecular cloud maps. Using artificial maps with well-defined spatial properties, we calibrate the method and test its limitations to apply it ultimately to a set of observed maps. Methods: We develop the wavelet-based weighted cross-correlation (WWCC) method to study the relative contribution of structures of different sizes and their degree of correlation in two maps as a function of spatial scale, and the mutual displacement of structures in the molecular cloud maps. Results: We test the WWCC for circular structures having a single prominent scale and fractal structures showing a self-similar behaviour without prominent scales. Observational noise and a finite map size limit the scales on which the cross-correlation coefficients and displacement vectors can be reliably measured. For fractal maps containing many structures on all scales, the limitation from observational noise is negligible for signal-to-noise ratios ≳5. We propose an approach for the identification of correlated structures in the maps, which allows us to localize individual correlated structures and recognize their shapes and suggest a recipe for recovering enhanced scales in self-similar structures. The application of the WWCC to the observed line maps of the giant molecular cloud G 333 allows us to add specific scale information to the results obtained earlier using the principal component analysis. The WWCC confirms the chemical and excitation similarity of 13CO and C18O on all scales, but shows a deviation of HCN at scales of up to 7 pc. This can be interpreted as a chemical transition scale. The largest structures also show a systematic offset along the filament, probably due to a large-scale density gradient. Conclusions: The WWCC can compare correlated structures in different maps of molecular clouds identifying scales that represent structural changes, such as chemical and phase transitions

  16. Combining Ratio Estimation for Low Density Parity Check (LDPC) Coding

    NASA Technical Reports Server (NTRS)

    Mahmoud, Saad; Hi, Jianjun

    2012-01-01

    The Low Density Parity Check (LDPC) code decoding algorithm makes use of a scaled receive signal derived from maximizing the log-likelihood ratio of the received signal. The scaling factor (often called the combining ratio) in an AWGN channel is a ratio between signal amplitude and noise variance. Accurately estimating this ratio has shown as much as 0.6 dB of decoding performance gain. This presentation briefly describes three methods for estimating the combining ratio: a Pilot-Guided estimation method, a Blind estimation method, and a Simulation-Based Look-Up table. The Pilot-Guided estimation method has shown that the maximum likelihood estimate of the signal amplitude is the mean inner product of the received sequence and the known sequence, the attached synchronization marker (ASM), and the signal variance is the difference between the mean of the squared received sequence and the square of the signal amplitude. This method has the advantage of simplicity at the expense of latency, since several frames' worth of ASMs must be accumulated. The Blind estimation method's maximum likelihood estimator is the average of the product of the received signal with the hyperbolic tangent of the product of the combining ratio and the received signal. The root of this equation can be determined by an iterative binary search between 0 and 1 after normalizing the received sequence. This method has the benefit of requiring one frame of data to estimate the combining ratio, which is good for faster-changing channels compared with the previous method; however, it is computationally expensive. The final method uses a look-up table based on prior simulated results to determine signal amplitude and noise variance. In this method the received mean signal strength is controlled to a constant soft decision value. The magnitude of the deviation is averaged over a predetermined number of samples. This value is referenced in a look-up table to determine the combining ratio that prior simulation associated with the average magnitude of
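
    The pilot-guided estimator is described explicitly above and is easy to sketch: the amplitude is the mean inner product of the received ASM samples with the known ASM symbols, and the noise variance is the mean square of the received samples minus the squared amplitude. The BPSK mapping, frame length and SNR below are assumptions, and the blind fixed-point search is not reproduced here.

        import numpy as np

        def pilot_guided_ratio(received_asm, known_asm):
            # amplitude = mean inner product with the known +/-1 ASM symbols;
            # variance  = mean of the squared received samples minus amplitude^2
            r = np.asarray(received_asm, float)
            s = np.asarray(known_asm, float)
            amplitude = np.mean(r * s)
            variance = np.mean(r ** 2) - amplitude ** 2
            return amplitude / variance        # "combining ratio" as defined in the abstract

        # toy frame: a 64-symbol BPSK ASM received in additive white Gaussian noise
        rng = np.random.default_rng(7)
        asm = rng.choice([-1.0, 1.0], size=64)
        received = 0.8 * asm + rng.normal(0.0, 1.0, size=64)
        print(pilot_guided_ratio(received, asm))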

  17. Distributed density estimation in sensor networks based on variational approximations

    NASA Astrophysics Data System (ADS)

    Safarinejadian, Behrooz; Menhaj, Mohammad B.

    2011-09-01

    This article presents a peer-to-peer (P2P) distributed variational Bayesian (P2PDVB) algorithm for density estimation and clustering in sensor networks. It is assumed that measurements of the nodes can be statistically modelled by a common Gaussian mixture model. The variational approach allows the simultaneous estimate of the component parameters and the model complexity. In this algorithm, each node independently calculates local sufficient statistics first by using local observations. A P2P averaging approach is then used to diffuse local sufficient statistics to neighbours and estimate global sufficient statistics in each node. Finally, each sensor node uses the estimated global sufficient statistics to estimate the model order as well as the parameters of this model. Because the P2P averaging approach only requires that each node communicate with its neighbours, the P2PDVB algorithm is scalable and robust. Diffusion speed and convergence of the proposed algorithm are also studied. Finally, simulated and real data sets are used to verify the remarkable performance of proposed algorithm.

  18. A Projection and Density Estimation Method for Knowledge Discovery

    PubMed Central

    Stanski, Adam; Hellwich, Olaf

    2012-01-01

    A key ingredient to modern data analysis is probability density estimation. However, it is well known that the curse of dimensionality prevents a proper estimation of densities in high dimensions. The problem is typically circumvented by using a fixed set of assumptions about the data, e.g., by assuming partial independence of features, data on a manifold or a customized kernel. These fixed assumptions limit the applicability of a method. In this paper we propose a framework that uses a flexible set of assumptions instead. It allows a model to be tailored to various problems by means of 1d-decompositions. The approach achieves a fast runtime and is not limited by the curse of dimensionality as all estimations are performed in 1d-space. The wide range of applications is demonstrated on two very different real-world examples. The first is a data-mining software package that allows the fully automatic discovery of patterns. The software is publicly available for evaluation. As a second example an image segmentation method is realized. It achieves state-of-the-art performance on a benchmark dataset although it uses only a fraction of the training data and very simple features. PMID:23049675

  19. Effect of packing density on strain estimation by Fry method

    NASA Astrophysics Data System (ADS)

    Srivastava, Deepak; Ojha, Arun

    2015-04-01

    The Fry method is a graphical technique that uses the relative movement of material points, typically the grain centres or centroids, and yields the finite strain ellipse as the central vacancy of a point distribution. Application of the Fry method assumes an anticlustered and isotropic grain-centre distribution in undistorted samples. This assumption is, however, difficult to test in practice. As an alternative, the sedimentological degree of sorting is routinely used as an approximation for the degree of clustering and anisotropy. The effect of sorting on the Fry method has already been explored by earlier workers. This study tests the effect of the tightness of packing, the packing density, which equals the ratio of the area occupied by all the grains to the total area of the sample. A practical advantage of using the degree of sorting or the packing density is that these parameters, unlike the degree of clustering or anisotropy, do not vary during a constant-volume homogeneous distortion. Using computer graphics simulations and programming, we approach the issue of packing density in four steps: (i) generation of several sets of random point distributions such that each set has the same degree of sorting but differs from the other sets with respect to packing density; (ii) two-dimensional homogeneous distortion of each point set by various known strain ratios and orientations; (iii) estimation of strain in each distorted point set by the Fry method; and (iv) error estimation by comparing the known strain with that given by the Fry method. Both the absolute errors and the relative root mean squared errors give consistent results. For a given degree of sorting, the Fry method gives better results in samples having greater than 30% packing density. This is because the grain-centre distributions show stronger clustering and a greater degree of anisotropy as the packing density decreases. As compared to the degree of sorting alone, a
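
    A minimal sketch of the two quantities discussed above: the Fry point cloud (all pairwise displacement vectors between grain centres, whose central vacancy outlines the strain ellipse) and the packing density (total grain area divided by sample area). The grain centres and radii are synthetic.

        import numpy as np

        def fry_points(centres):
            # all pairwise displacement vectors between grain centres; plotted about the
            # origin, their central vacancy approximates the finite strain ellipse
            c = np.asarray(centres, float)
            diffs = c[:, None, :] - c[None, :, :]
            keep = ~np.eye(len(c), dtype=bool)
            return diffs[keep]

        def packing_density(grain_areas, sample_area):
            # fraction of the sample area occupied by grains
            return np.sum(grain_areas) / sample_area

        rng = np.random.default_rng(8)
        centres = rng.random((300, 2)) * 100.0                    # synthetic grain centroids
        radii = rng.uniform(1.0, 2.0, size=300)
        fry = fry_points(centres)                                 # shape (300 * 299, 2)
        phi = packing_density(np.pi * radii ** 2, 100.0 * 100.0)  # ~0.2 for this toy sample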

  20. Dust-cloud density estimation using a single wavelength lidar

    NASA Astrophysics Data System (ADS)

    Youmans, Douglas G.; Garner, Richard C.; Petersen, Kent R.

    1994-09-01

    The passage of commercial and military aircraft through invisible fresh volcanic ash clouds has caused damage to many airplanes. On December 15, 1989, all four engines of a KLM Boeing 747 were temporarily extinguished during a flight over Alaska, resulting in $80 million in repairs. Similar aircraft damage to control systems, FLIR/EO windows, wind screens, radomes, aircraft leading edges, and aircraft data systems was reported in Operation Desert Storm during combat flights through high-explosive and naturally occurring desert dusts. The Defense Nuclear Agency is currently developing a compact and rugged lidar under the Aircraft Sensors Program to detect and estimate the mass density of nuclear-explosion-produced dust clouds, high-explosive-produced dust clouds, and fresh volcanic dust clouds at horizontal distances of up to 40 km from an aircraft. Given this mass density information, the pilot has the option of avoiding or flying through the upcoming cloud.

  1. Estimation of Volumetric Breast Density from Digital Mammograms

    NASA Astrophysics Data System (ADS)

    Alonzo-Proulx, Olivier

    Mammographic breast density (MBD) is a strong risk factor for developing breast cancer. MBD is typically estimated by manually selecting the area occupied by the dense tissue on a mammogram. There is interest in measuring the volume of dense tissue, or volumetric breast density (VBD), as it could potentially be a stronger risk factor. This dissertation presents and validates an algorithm to measure the VBD from digital mammograms. The algorithm is based on an empirical calibration of the mammography system, supplemented by physical modeling of x-ray imaging that includes the effects of beam polychromaticity, scattered radiation, the anti-scatter grid, and detector glare. It also includes a method to estimate the compressed breast thickness as a function of the compression force, and a method to estimate the thickness of the breast outside of the compressed region. The algorithm was tested on 26 simulated mammograms obtained from computed tomography images, themselves deformed to mimic the effects of compression. This allowed the determination of the baseline accuracy of the algorithm. The algorithm was also used on 55,087 clinical digital mammograms, which allowed for the determination of the general characteristics of VBD and breast volume, as well as their variation as a function of age and time. The algorithm was also validated against a set of 80 magnetic resonance images, and compared against the area method on 2,688 images. A preliminary study comparing the association of breast cancer risk with VBD and MBD was also performed, indicating that VBD is a stronger risk factor. The algorithm was found to be accurate, generating quantitative density measurements rapidly and automatically. It can be extended to any digital mammography system, provided that the compression thickness of the breast can be determined accurately.

  2. Multivariate mixtures of Erlangs for density estimation under censoring.

    PubMed

    Verbelen, Roel; Antonio, Katrien; Claeskens, Gerda

    2016-07-01

    Multivariate mixtures of Erlang distributions form a versatile, yet analytically tractable, class of distributions making them suitable for multivariate density estimation. We present a flexible and effective fitting procedure for multivariate mixtures of Erlangs, which iteratively uses the EM algorithm, by introducing a computationally efficient initialization and adjustment strategy for the shape parameter vectors. We furthermore extend the EM algorithm for multivariate mixtures of Erlangs to be able to deal with randomly censored and fixed truncated data. The effectiveness of the proposed algorithm is demonstrated on simulated as well as real data sets. PMID:26340888
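
    For orientation only, the sketch below evaluates the density of a univariate mixture of Erlang distributions with a common scale; the multivariate EM fitting with censoring and truncation developed in the cited paper is not reproduced, and the weights, shapes, and scale used here are arbitrary illustrative values.

```python
import numpy as np
from scipy.stats import erlang

def erlang_mixture_pdf(x, weights, shapes, scale):
    """Density of a univariate mixture of Erlangs with a common scale.

    weights : mixture weights (should sum to 1)
    shapes  : integer shape parameters, one per component
    scale   : common scale parameter
    """
    x = np.asarray(x, dtype=float)
    return sum(w * erlang.pdf(x, a=r, scale=scale)
               for w, r in zip(weights, shapes))

# Example: three-component mixture evaluated on a grid
x = np.linspace(0.0, 30.0, 200)
pdf = erlang_mixture_pdf(x, weights=[0.3, 0.5, 0.2], shapes=[1, 3, 8], scale=1.5)
```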

  3. Nonparametric estimation of multivariate scale mixtures of uniform densities

    PubMed Central

    Pavlides, Marios G.; Wellner, Jon A.

    2012-01-01

    Suppose that U = (U1, … , Ud) has a Uniform([0, 1]^d) distribution, that Y = (Y1, … , Yd) has the distribution G on R_+^d, and let X = (X1, … , Xd) = (U1Y1, … , UdYd). The resulting class of distributions of X (as G varies over all distributions on R_+^d) is called the Scale Mixture of Uniforms class of distributions, and the corresponding class of densities on R_+^d is denoted by F_SMU(d). We study maximum likelihood estimation in the family F_SMU(d). We prove existence of the MLE, establish Fenchel characterizations, and prove strong consistency of the almost surely unique maximum likelihood estimator (MLE) in F_SMU(d). We also provide an asymptotic minimax lower bound for estimating the functional f ↦ f(x) under reasonable differentiability assumptions on f ∈ F_SMU(d) in a neighborhood of x. We conclude the paper with discussion, conjectures and open problems pertaining to global and local rates of convergence of the MLE. PMID:22485055
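
    The defining construction X = (U1Y1, … , UdYd) is easy to simulate. The sketch below, written for this summary rather than taken from the paper, draws from the SMU(d) class given a user-supplied sampler for an assumed mixing distribution G.

```python
import numpy as np

def sample_smu(n, d, sample_G, rng=None):
    """Draw n samples X = (U1*Y1, ..., Ud*Yd) from the SMU(d) class.

    sample_G : function(n, d, rng) returning an (n, d) array of draws
               from the mixing distribution G on R_+^d
    """
    rng = np.random.default_rng(rng)
    U = rng.uniform(0.0, 1.0, size=(n, d))   # Uniform([0, 1]^d)
    Y = sample_G(n, d, rng)                  # Y ~ G
    return U * Y                             # componentwise product

# Example with a hypothetical mixing distribution G (independent exponentials)
X = sample_smu(1000, d=2,
               sample_G=lambda n, d, rng: rng.exponential(2.0, size=(n, d)))
```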

  4. Hierarchical Multiscale Adaptive Variable Fidelity Wavelet-based Turbulence Modeling with Lagrangian Spatially Variable Thresholding

    NASA Astrophysics Data System (ADS)

    Nejadmalayeri, Alireza

    The current work develops a wavelet-based adaptive variable fidelity approach that integrates Wavelet-based Direct Numerical Simulation (WDNS), Coherent Vortex Simulations (CVS), and Stochastic Coherent Adaptive Large Eddy Simulations (SCALES). The proposed methodology employs the notion of spatially and temporally varying wavelet thresholding combined with hierarchical wavelet-based turbulence modeling. The transition between the WDNS, CVS, and SCALES regimes is achieved through two-way physics-based feedback between the modeled SGS dissipation (or another dynamically important physical quantity) and the spatial resolution. The feedback is based on spatio-temporal variation of the wavelet threshold, where the thresholding level is adjusted on the fly depending on the deviation of the local significant SGS dissipation from the user-prescribed level. This strategy overcomes a major limitation of all previously existing wavelet-based multi-resolution schemes: the global thresholding criterion, which does not fully utilize the spatial/temporal intermittency of the turbulent flow. Hence, the aforementioned concept of physics-based spatially variable thresholding in the context of wavelet-based numerical techniques for solving PDEs is established. The procedure consists of tracking the wavelet thresholding factor within a Lagrangian frame by exploiting a Lagrangian Path-Line Diffusive Averaging approach based on either linear averaging along characteristics or direct solution of the evolution equation. This innovative technique represents a framework of continuously variable fidelity wavelet-based space/time/model-form adaptive multiscale methodology. This methodology has been tested and has provided very promising results on a benchmark with a time-varying user-prescribed level of SGS dissipation. In addition, a long-term effort to develop a novel parallel adaptive wavelet collocation method for the numerical solution of PDEs has been completed during the course of the current work.

  5. Probability Density and CFAR Threshold Estimation for Hyperspectral Imaging

    SciTech Connect

    Clark, G A

    2004-09-21

    The work reported here shows the proof of principle (using a small data set) for a suite of algorithms designed to estimate the probability density function of hyperspectral background data and compute the appropriate Constant False Alarm Rate (CFAR) matched filter decision threshold for a chemical plume detector. Future work will provide a thorough demonstration of the algorithms and their performance with a large data set. The LASI (Large Aperture Search Initiative) Project involves instrumentation and image processing for hyperspectral images of chemical plumes in the atmosphere. The work reported here involves research and development on algorithms for reducing the false alarm rate in chemical plume detection and identification algorithms operating on hyperspectral image cubes. The chemical plume detection algorithms to date have used matched filters designed using generalized maximum likelihood ratio hypothesis testing algorithms [1, 2, 5, 6, 7, 12, 10, 11, 13]. One of the key challenges in hyperspectral imaging research is the high false alarm rate that often results from the plume detector [1, 2]. The overall goal of this work is to extend the classical matched filter detector to apply Constant False Alarm Rate (CFAR) methods to reduce the false alarm rate, or probability of false alarm P_FA, of the matched filter [4, 8, 9, 12]. A detector designer is interested in minimizing the probability of false alarm while simultaneously maximizing the probability of detection P_D. This is summarized by the Receiver Operating Characteristic (ROC) curve [10, 11], which is actually a family of curves depicting P_D vs. P_FA, parameterized by varying levels of signal-to-noise (or clutter) ratio (SNR or SCR). Often, it is advantageous to be able to specify a desired P_FA and develop a ROC curve (P_D vs. decision threshold r_0) for that case. That is the purpose of this work. Specifically, this work develops a set of algorithms and MATLAB
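
    To make the P_FA-to-threshold step concrete, the sketch below shows one generic way to pick a decision threshold r_0 from an estimated background density so that the estimated tail probability equals a desired P_FA. It is a minimal illustration using a Gaussian kernel density estimate, not the algorithm suite of the cited report, and the synthetic scores stand in for matched-filter outputs.

```python
import numpy as np
from scipy.stats import gaussian_kde

def cfar_threshold(background_scores, p_fa, grid_size=4096):
    """Pick a decision threshold r0 so that P(score > r0) ~= p_fa
    under a kernel density estimate of the background filter output."""
    kde = gaussian_kde(background_scores)
    lo, hi = background_scores.min(), background_scores.max()
    span = hi - lo
    grid = np.linspace(lo - 0.5 * span, hi + 0.5 * span, grid_size)
    pdf = kde(grid)
    cdf = np.cumsum(pdf)
    cdf /= cdf[-1]                      # normalise the discrete CDF
    return grid[np.searchsorted(cdf, 1.0 - p_fa)]

# Example: threshold for a desired false-alarm probability of 1e-3
rng = np.random.default_rng(1)
scores = rng.normal(0.0, 1.0, size=20000)   # stand-in for background outputs
r0 = cfar_threshold(scores, p_fa=1e-3)
```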

  6. Change-in-ratio density estimator for feral pigs is less biased than closed mark-recapture estimates

    USGS Publications Warehouse

    Hanson, L.B.; Grand, J.B.; Mitchell, M.S.; Jolley, D.B.; Sparklin, B.D.; Ditchkoff, S.S.

    2008-01-01

    Closed-population capture-mark-recapture (CMR) methods can produce biased density estimates for species with low or heterogeneous detection probabilities. In an attempt to address such biases, we developed a density-estimation method based on the change in ratio (CIR) of survival between two populations where survival, calculated using an open-population CMR model, is known to differ. We used our method to estimate density for a feral pig (Sus scrofa) population on Fort Benning, Georgia, USA. To assess its validity, we compared it to an estimate of the minimum density of pigs known to be alive and two estimates based on closed-population CMR models. Comparison of the density estimates revealed that the CIR estimator produced a density estimate with low precision that was reasonable with respect to minimum known density. By contrast, density point estimates using the closed-population CMR models were less than the minimum known density, consistent with biases created by low and heterogeneous capture probabilities for species like feral pigs that may occur in low density or are difficult to capture. Our CIR density estimator may be useful for tracking broad-scale, long-term changes in species, such as large cats, for which closed CMR models are unlikely to work. © CSIRO 2008.

  7. Matrix Methods for Estimating the Coherence Functions from Estimates of the Cross-Spectral Density Matrix

    DOE PAGESBeta

    Smallwood, D. O.

    1996-01-01

    It is shown that the usual method for estimating the coherence functions (ordinary, partial, and multiple) for a general multiple-input/multiple-output problem can be expressed as a modified form of Cholesky decomposition of the cross-spectral density matrix of the input and output records. The results can be equivalently obtained using singular value decomposition (SVD) of the cross-spectral density matrix. Using SVD suggests a new form of fractional coherence. The formulation as an SVD problem also suggests a way to order the inputs when a natural physical order of the inputs is absent.
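
    A minimal sketch of the starting point, assuming an already-estimated cross-spectral density (CSD) matrix at one frequency: the ordinary coherence between each pair of records, plus the SVD of the same matrix that the abstract points to. The modified-Cholesky route to partial and multiple coherence, and the fractional coherence defined in the paper, are not reproduced here.

```python
import numpy as np

def ordinary_coherence(S):
    """Ordinary coherence functions from a cross-spectral density matrix.

    S : (n, n) Hermitian CSD matrix at a single frequency
    Returns the (n, n) matrix gamma_ij^2 = |S_ij|^2 / (S_ii * S_jj).
    """
    d = np.real(np.diag(S))
    return np.abs(S) ** 2 / np.outer(d, d)

# Example CSD matrix for two records at one frequency
S = np.array([[2.0, 0.8 + 0.3j],
              [0.8 - 0.3j, 1.5]])
coh = ordinary_coherence(S)

# The same matrix can also be factored by singular value decomposition,
# which the paper uses to motivate a notion of fractional coherence.
U, singular_values, Vh = np.linalg.svd(S)
```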

  8. Estimating tropical-forest density profiles from multibaseline interferometric SAR

    NASA Technical Reports Server (NTRS)

    Treuhaft, Robert; Chapman, Bruce; dos Santos, Joao Roberto; Dutra, Luciano; Goncalves, Fabio; da Costa Freitas, Corina; Mura, Jose Claudio; de Alencastro Graca, Paulo Mauricio

    2006-01-01

    Vertical profiles of forest density are potentially robust indicators of forest biomass, fire susceptibility and ecosystem function. Tropical forests, which are among the most dense and complicated targets for remote sensing, contain about 45% of the world's biomass. Remote sensing of tropical forest structure is therefore an important component of global biomass and carbon monitoring. This paper shows preliminary results of a multibaseline interferometric SAR (InSAR) experiment over primary, secondary, and selectively logged forests at La Selva Biological Station in Costa Rica. The profile shown results from inverse Fourier transforming 8 of the 18 baselines acquired, and is compared to lidar and field measurements. Results are highly preliminary and for qualitative assessment only. Parameter estimation will eventually replace Fourier inversion as the means of producing profiles.

  9. The effectiveness of tape playbacks in estimating Black Rail densities

    USGS Publications Warehouse

    Legare, M.; Eddleman, W.R.; Buckley, P.A.; Kelly, C.

    1999-01-01

    Tape playback is often the only efficient technique to survey for secretive birds. We measured the vocal responses and movements of radio-tagged black rails (Laterallus jamaicensis; 26 M, 17 F) to playback of vocalizations at 2 sites in Florida during the breeding seasons of 1992-95. We used coefficients from logistic regression equations to model the probability of a response conditional on the birds' sex, nesting status, distance to the playback source, and time of survey. With a probability of 0.811, nonnesting male black rails were most likely to respond to playback, while nesting females were the least likely to respond (probability = 0.189). We used linear regression to determine daily, monthly and annual variation in response from weekly playback surveys along a fixed route during the breeding seasons of 1993-95. Significant sources of variation in the regression model were month (F(3,48) = 3.89, P = 0.014), year (F(2,48) = 9.37, P < 0.001), temperature (F(1,48) = 5.44, P = 0.024), and month × year (F(5,48) = 2.69, P = 0.031). The model was highly significant (P < 0.001) and explained 54% of the variation of mean response per survey period (r² = 0.54). We combined response probability data from radio-tagged black rails with playback survey route data to provide a density estimate of 0.25 birds/ha for the St. Johns National Wildlife Refuge. The relation between the number of black rails heard during playback surveys and the actual number present was influenced by a number of variables. We recommend caution when making density estimates from tape playback surveys.

  10. Cortical cell and neuron density estimates in one chimpanzee hemisphere.

    PubMed

    Collins, Christine E; Turner, Emily C; Sawyer, Eva Kille; Reed, Jamie L; Young, Nicole A; Flaherty, David K; Kaas, Jon H

    2016-01-19

    The density of cells and neurons in the neocortex of many mammals varies across cortical areas and regions. This variability is, perhaps, most pronounced in primates. Nonuniformity in the composition of cortex suggests regions of the cortex have different specializations. Specifically, regions with densely packed neurons contain smaller neurons that are activated by relatively few inputs, thereby preserving information, whereas regions that are less densely packed have larger neurons that have more integrative functions. Here we present the numbers of cells and neurons for 742 discrete locations across the neocortex in a chimpanzee. Using isotropic fractionation and flow fractionation methods for cell and neuron counts, we estimate that neocortex of one hemisphere contains 9.5 billion cells and 3.7 billion neurons. Primary visual cortex occupies 35 cm(2) of surface, 10% of the total, and contains 737 million densely packed neurons, 20% of the total neurons contained within the hemisphere. Other areas of high neuron packing include secondary visual areas, somatosensory cortex, and prefrontal granular cortex. Areas of low levels of neuron packing density include motor and premotor cortex. These values reflect those obtained from more limited samples of cortex in humans and other primates. PMID:26729880

  11. Cortical cell and neuron density estimates in one chimpanzee hemisphere

    PubMed Central

    Collins, Christine E.; Turner, Emily C.; Sawyer, Eva Kille; Reed, Jamie L.; Young, Nicole A.; Flaherty, David K.; Kaas, Jon H.

    2016-01-01

    The density of cells and neurons in the neocortex of many mammals varies across cortical areas and regions. This variability is, perhaps, most pronounced in primates. Nonuniformity in the composition of cortex suggests regions of the cortex have different specializations. Specifically, regions with densely packed neurons contain smaller neurons that are activated by relatively few inputs, thereby preserving information, whereas regions that are less densely packed have larger neurons that have more integrative functions. Here we present the numbers of cells and neurons for 742 discrete locations across the neocortex in a chimpanzee. Using isotropic fractionation and flow fractionation methods for cell and neuron counts, we estimate that neocortex of one hemisphere contains 9.5 billion cells and 3.7 billion neurons. Primary visual cortex occupies 35 cm2 of surface, 10% of the total, and contains 737 million densely packed neurons, 20% of the total neurons contained within the hemisphere. Other areas of high neuron packing include secondary visual areas, somatosensory cortex, and prefrontal granular cortex. Areas of low levels of neuron packing density include motor and premotor cortex. These values reflect those obtained from more limited samples of cortex in humans and other primates. PMID:26729880

  12. Hierarchical wavelet-based image model for pattern analysis and synthesis

    NASA Astrophysics Data System (ADS)

    Scott, Clayton D.; Nowak, Robert D.

    2000-12-01

    Despite their success in other areas of statistical signal processing, current wavelet-based image models are inadequate for modeling patterns in images, due to the presence of unknown transformations inherent in most pattern observations. In this paper we introduce a hierarchical wavelet-based framework for modeling patterns in digital images. This framework takes advantage of the efficient image representations afforded by wavelets, while accounting for unknown pattern transformations. Given a trained model, we can use this framework to synthesize pattern observations. If the model parameters are unknown, we can infer them from labeled training data using TEMPLAR, a novel template learning algorithm with linear complexity. TEMPLAR employs minimum description length complexity regularization to learn a template with a sparse representation in the wavelet domain. We illustrate template learning with examples, and discuss how TEMPLAR applies to pattern classification and denoising from multiple, unaligned observations.

  13. Application of Wavelet Based Denoising for T-Wave Alternans Analysis in High Resolution ECG Maps

    NASA Astrophysics Data System (ADS)

    Janusek, D.; Kania, M.; Zaczek, R.; Zavala-Fernandez, H.; Zbieć, A.; Opolski, G.; Maniewski, R.

    2011-01-01

    T-wave alternans (TWA) allows for identification of patients at an increased risk of ventricular arrhythmia. A stress test, which increases heart rate in a controlled manner, is used for TWA measurement. However, TWA detection and analysis are often disturbed by muscular interference. An evaluation of wavelet-based denoising methods was performed to find the optimal algorithm for TWA analysis. ECG signals recorded in twelve patients with cardiac disease were analyzed. In seven of them a significant T-wave alternans magnitude was detected. The application of a wavelet-based denoising method in the pre-processing stage increases the T-wave alternans magnitude as well as the number of BSPM signals in which TWA was detected.

  14. Joint wavelet-based coding and packetization for video transport over packet-switched networks

    NASA Astrophysics Data System (ADS)

    Lee, Hung-ju

    1996-02-01

    In recent years, wavelet theory applied to image, audio, and video compression has been extensively studied. However, pursuing compression ratio alone without considering the underlying networking systems is unrealistic, especially for multimedia applications over networks. In this paper, we present an integrated approach that attempts to preserve the advantages of wavelet-based image coding while providing a degree of robustness to lost packets over packet-switched networks. Two different packetization schemes, called the intrablock-oriented (IAB) and interblock-oriented (IRB) schemes, in conjunction with wavelet-based coding, are presented. Our approach is evaluated under two different packet loss models with various packet loss probabilities through simulations driven by real video sequences.

  15. Wavelet-based surrogate time series for multiscale simulation of heterogeneous catalysis

    DOE PAGESBeta

    Savara, Aditya Ashi; Daw, C. Stuart; Xiong, Qingang; Gur, Sourav; Danielson, Thomas L.; Hin, Celine N.; Pannala, Sreekanth; Frantziskonis, George N.

    2016-01-28

    We propose a wavelet-based scheme that encodes the essential dynamics of discrete microscale surface reactions in a form that can be coupled with continuum macroscale flow simulations with high computational efficiency. This makes it possible to simulate the dynamic behavior of reactor-scale heterogeneous catalysis without requiring detailed concurrent simulations at both the surface and continuum scales using different models. Our scheme is based on the application of wavelet-based surrogate time series that encode the essential temporal and/or spatial fine-scale dynamics at the catalyst surface. The encoded dynamics are then used to generate statistically equivalent, randomized surrogate time series, which can be linked to the continuum scale simulation. We illustrate an application of this approach using two different kinetic Monte Carlo simulations with different characteristic behaviors typical of heterogeneous chemical reactions.

  16. Estimating Foreign-Object-Debris Density from Photogrammetry Data

    NASA Technical Reports Server (NTRS)

    Long, Jason; Metzger, Philip; Lane, John

    2013-01-01

    Within the first few seconds after launch of STS-124, debris traveling vertically near the vehicle was captured on two 16-mm film cameras surrounding the launch pad. One particular piece of debris caught the attention of engineers investigating the release of the flame trench fire bricks. The question to be answered was if the debris was a fire brick, and if it represented the first bricks that were ejected from the flame trench wall, or was the object one of the pieces of debris normally ejected from the vehicle during launch. If it was typical launch debris, such as SRB throat plug foam, why was it traveling vertically and parallel to the vehicle during launch, instead of following its normal trajectory, flying horizontally toward the north perimeter fence? By utilizing the Runge-Kutta integration method for velocity and the Verlet integration method for position, a method that suppresses trajectory computational instabilities due to noisy position data was obtained. This combination of integration methods provides a means to extract the best estimate of drag force and drag coefficient under the non-ideal conditions of limited position data. This integration strategy leads immediately to the best possible estimate of object density, within the constraints of unknown particle shape. These types of calculations do not exist in readily available off-the-shelf simulation software, especially where photogrammetry data is needed as an input.

  17. Research of the wavelet based ECW remote sensing image compression technology

    NASA Astrophysics Data System (ADS)

    Zhang, Lan; Gu, Xingfa; Yu, Tao; Dong, Yang; Hu, Xinli; Xu, Hua

    2007-11-01

    This paper studies wavelet-based ECW remote sensing image compression technology. Compared with the traditional JPEG compression technology and the newer wavelet-based JPEG2000, the ER Mapper Compressed Wavelet (ECW) format shows significant advantages when compressing very large remote sensing images. The paper also discusses how to use the ECW SDK and shows that it is a fast and effective way to compress China-Brazil Earth Resource Satellite (CBERS) imagery.

  18. Recognition of short-term changes in physiological signals with the wavelet-based multifractal formalism

    NASA Astrophysics Data System (ADS)

    Pavlov, Alexey N.; Sindeeva, Olga A.; Sindeev, Sergey S.; Pavlova, Olga N.; Rybalova, Elena V.; Semyachkina-Glushkovskaya, Oxana V.

    2016-03-01

    In this paper we address the problem of revealing and recognizing transitions between distinct physiological states using quite short fragments of experimental recordings. Using wavelet-based multifractal analysis, we characterize changes in the complexity and correlation properties of the stress-induced dynamics of arterial blood pressure in rats. We propose an approach for associating the revealed changes with distinct physiological regulatory mechanisms and for quantifying the influence of each mechanism.

  19. Wavelet-Based Real-Time Diagnosis of Complex Systems

    NASA Technical Reports Server (NTRS)

    Gulati, Sandeep; Mackey, Ryan

    2003-01-01

    A new method of robust, autonomous real-time diagnosis of a time-varying complex system (e.g., a spacecraft, an advanced aircraft, or a process-control system) is presented here. It is based upon the characterization and comparison of (1) the execution of software, as reported by discrete data, and (2) data from sensors that monitor the physical state of the system, such as performance sensors or similar quantitative time-varying measurements. By taking account of the relationship between execution of, and the responses to, software commands, this method satisfies a key requirement for robust autonomous diagnosis, namely, ensuring that control is maintained and followed. Such monitoring of control software requires that estimates of the state of the system, as represented within the control software itself, are representative of the physical behavior of the system. In this method, data from sensors and discrete command data are analyzed simultaneously and compared to determine their correlation. If the sensed physical state of the system differs from the software estimate (see figure) or if the system fails to perform a transition as commanded by software, or such a transition occurs without the associated command, the system has experienced a control fault. This method provides a means of detecting such divergent behavior and automatically generating an appropriate warning.

  20. Application of wavelet-based multiple linear regression model to rainfall forecasting in Australia

    NASA Astrophysics Data System (ADS)

    He, X.; Guan, H.; Zhang, X.; Simmons, C.

    2013-12-01

    In this study, a wavelet-based multiple linear regression model is applied to forecast monthly rainfall in Australia by using monthly historical rainfall data and climate indices as inputs. The wavelet-based model is constructed by incorporating the multi-resolution analysis (MRA) with the discrete wavelet transform and multiple linear regression (MLR) model. The standardized monthly rainfall anomaly and large-scale climate index time series are decomposed using MRA into a certain number of component subseries at different temporal scales. The hierarchical lag relationship between the rainfall anomaly and each potential predictor is identified by cross correlation analysis with a lag time of at least one month at different temporal scales. The components of predictor variables with known lag times are then screened with a stepwise linear regression algorithm to be selectively included into the final forecast model. The MRA-based rainfall forecasting method is examined with 255 stations over Australia, and compared to the traditional multiple linear regression model based on the original time series. The models are trained with data from the 1959-1995 period and then tested in the 1996-2008 period for each station. The performance is compared with observed rainfall values, and evaluated by common statistics of relative absolute error and correlation coefficient. The results show that the wavelet-based regression model provides considerably more accurate monthly rainfall forecasts for all of the selected stations over Australia than the traditional regression model.
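
    The general MRA-plus-regression idea described above can be sketched as follows: decompose a predictor series into additive wavelet subseries, lag them, and fit a multiple linear regression by least squares. This is an illustrative simplification written for this summary (using PyWavelets and synthetic data), not the cited model; in particular, the stepwise screening of lagged components and the actual climate indices are omitted.

```python
import numpy as np
import pywt

def mra_components(x, wavelet="db8", level=3):
    """Split a 1-D series into additive wavelet components (one approximation
    plus `level` detail subseries), each the same length as the input."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    comps = []
    for i in range(len(coeffs)):
        kept = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
        comps.append(pywt.waverec(kept, wavelet)[: len(x)])
    return comps  # comps[0] is the approximation, comps[1:] are the details

def lagged_regression(y, predictors, lag=1):
    """Least-squares regression of y on predictor subseries lagged by `lag`
    months (a stand-in for the stepwise selection used in the paper)."""
    X = np.column_stack([p[:-lag] for p in predictors])
    X = np.column_stack([np.ones(len(X)), X])        # intercept column
    beta, *_ = np.linalg.lstsq(X, y[lag:], rcond=None)
    return beta

# Example with synthetic monthly anomaly series
rng = np.random.default_rng(2)
rain_anomaly = rng.normal(size=600)
climate_index = rng.normal(size=600)
components = mra_components(climate_index, wavelet="db8", level=3)
coefficients = lagged_regression(rain_anomaly, components, lag=1)
```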

  1. Usefulness of wavelet-based features as global descriptors of VHR satellite images

    NASA Astrophysics Data System (ADS)

    Pyka, Krystian; Drzewiecki, Wojciech; Bernat, Katarzyna; Wawrzaszek, Anna; Krupiński, Michal

    2014-10-01

    In this paper we present the results of research carried out to assess the usefulness of wavelet-based measures of image texture for classification of panchromatic VHR satellite image content. The study is based on images obtained from the EROS-A satellite. Wavelet-based features are calculated according to two approaches. In the first, the wavelet energy is calculated for each component from every level of decomposition using the Haar wavelet. In the second, the variance and kurtosis are calculated as mean values of the detail components with filters belonging to the D, LA, and MB groups of various lengths. The results indicate that both approaches are useful and complement one another. The most useful wavelet-based features include not only those calculated with short or long filters, but also those calculated with filters of intermediate length. Using filters of different type and length, as well as different statistical parameters (variance, kurtosis) calculated as means for each decomposition level, improved the discriminative properties of the feature vector, which initially consisted of the wavelet energies of each component.

  2. Wavelet-based vector quantization for high-fidelity compression and fast transmission of medical images.

    PubMed

    Mitra, S; Yang, S; Kustov, V

    1998-11-01

    Compression of medical images has always been viewed with skepticism, since the loss of information involved is thought to affect diagnostic information. However, recent research indicates that some wavelet-based compression techniques may not effectively reduce the image quality, even when subjected to compression ratios up to 30:1. The performance of a recently designed wavelet-based adaptive vector quantization is compared with a well-known wavelet-based scalar quantization technique to demonstrate the superiority of the former technique at compression ratios higher than 30:1. The use of higher compression with high fidelity of the reconstructed images allows fast transmission of images over the Internet for prompt inspection by radiologists at remote locations in an emergency situation, while higher quality images follow in a progressive manner if desired. Such fast and progressive transmission can also be used for downloading large data sets such as the Visible Human at a quality desired by the users for research or education. This new adaptive vector quantization uses a neural networks-based clustering technique for efficient quantization of the wavelet-decomposed subimages, yielding minimal distortion in the reconstructed images undergoing high compression. Results of compression up to 100:1 are shown for 24-bit color and 8-bit monochrome medical images. PMID:9848058

  3. Comparative study of different wavelet based neural network models for rainfall-runoff modeling

    NASA Astrophysics Data System (ADS)

    Shoaib, Muhammad; Shamseldin, Asaad Y.; Melville, Bruce W.

    2014-07-01

    The use of wavelet transformation in rainfall-runoff modeling has become popular because of its ability to deal simultaneously with both the spectral and the temporal information contained within time series data. The selection of an appropriate wavelet function plays a crucial role in the successful implementation of wavelet-based rainfall-runoff artificial neural network models, as it can lead to further enhancement of model performance. The present study is therefore conducted to evaluate the effects of 23 mother wavelet functions on the performance of hybrid wavelet-based artificial neural network rainfall-runoff models. The hybrid Multilayer Perceptron Neural Network (MLPNN) and Radial Basis Function Neural Network (RBFNN) models are developed in this study using both the continuous and the discrete wavelet transformation types. The performances of the 92 developed wavelet-based neural network models with all 23 mother wavelet functions are compared with those of neural network models developed without wavelet transformations. It is found that, among all the models tested, the discrete wavelet transform multilayer perceptron neural network (DWTMLPNN) and the discrete wavelet transform radial basis function (DWTRBFNN) models at decomposition level nine with the db8 wavelet function have the best performance. The results also show that pre-processing of the input rainfall data by wavelet transformation can significantly increase the performance of the MLPNN and RBFNN rainfall-runoff models.

  4. Numerical Modeling of Global Atmospheric Chemical Transport with Wavelet-based Adaptive Mesh Refinement

    NASA Astrophysics Data System (ADS)

    Rastigejev, Y.; Semakin, A. N.

    2012-12-01

    In this work we present a multilevel Wavelet-based Adaptive Mesh Refinement (WAMR) method for numerical modeling of global atmospheric chemical transport problems. An accurate numerical simulation of such problems presents an enormous challenge. Atmospheric Chemical Transport Models (CTMs) combine chemical reactions with meteorologically predicted atmospheric advection and turbulent mixing. The resulting system of multi-scale advection-reaction-diffusion equations is extremely stiff, nonlinear and involves a large number of chemically interacting species. As a consequence, the need for enormous computational resources for solving these equations imposes severe limitations on the spatial resolution of CTMs implemented on uniform or quasi-uniform grids. In turn, this relatively crude spatial resolution results in significant numerical diffusion introduced into the system. This numerical diffusion is shown to noticeably distort the pollutant mixing and transport dynamics for typically used grid resolutions. The WAMR method for numerical modeling of atmospheric chemical evolution equations presented in this work provides a significant reduction in the computational cost without sacrificing numerical accuracy, and therefore addresses the numerical difficulties described above. The WAMR method introduces a fine grid in the regions where sharp transitions occur and a coarser grid in regions of smooth solution behavior, and therefore yields much more accurate solutions than conventional numerical methods implemented on uniform or quasi-uniform grids. The algorithm provides error estimates of the solution that are used in conjunction with appropriate threshold criteria to adapt the non-uniform grid. The method has been tested on a variety of problems including numerical simulation of traveling pollution plumes. It was shown that pollution plumes in the remote troposphere can propagate as well-defined layered structures for two weeks or more as

  5. Wavelet-based stereo images reconstruction using depth images

    NASA Astrophysics Data System (ADS)

    Jovanov, Ljubomir; Pižurica, Aleksandra; Philips, Wilfried

    2007-09-01

    It is believed by many that three-dimensional (3D) television will be the next logical development toward a more natural and vivid home entertainment experience. While the classical 3D approach requires the transmission of two video streams, one for each view, 3D TV systems based on depth-image-based rendering (DIBR) require a single stream of monoscopic images and a second stream of associated images usually termed depth images or depth maps, which contain per-pixel depth information. A depth map is a two-dimensional function that contains information about the distance from the camera to a certain point of the object as a function of the image coordinates. By using this depth information and the original image it is possible to reconstruct a virtual image of a nearby viewpoint by projecting the pixels of the available image to their locations in 3D space and finding their position in the desired view plane. One of the most significant advantages of DIBR is that depth maps can be coded more efficiently than two streams corresponding to the left and right views of the scene, thereby reducing the bandwidth required for transmission, which makes it possible to reuse existing transmission channels for 3D TV. This technique can also be applied to other 3D technologies such as multimedia systems. In this paper we propose an advanced wavelet-domain scheme for the reconstruction of stereoscopic images, which solves some of the shortcomings of the existing methods discussed above. We perform the wavelet transform of both the luminance and depth images in order to obtain significant geometric features, which enable a more sensible reconstruction of the virtual view. Motion estimation employed in our approach uses a Markov random field smoothness prior for regularization of the estimated motion field. The evaluation of the proposed reconstruction method is done on two video sequences which are typically used for comparison of stereo reconstruction algorithms. The results demonstrate

  6. Atmospheric turbulence mitigation using complex wavelet-based fusion.

    PubMed

    Anantrasirichai, Nantheera; Achim, Alin; Kingsbury, Nick G; Bull, David R

    2013-06-01

    Restoring a scene distorted by atmospheric turbulence is a challenging problem in video surveillance. The effect, caused by random, spatially varying, perturbations, makes a model-based solution difficult and in most cases, impractical. In this paper, we propose a novel method for mitigating the effects of atmospheric distortion on observed images, particularly airborne turbulence which can severely degrade a region of interest (ROI). In order to extract accurate detail about objects behind the distorting layer, a simple and efficient frame selection method is proposed to select informative ROIs only from good-quality frames. The ROIs in each frame are then registered to further reduce offsets and distortions. We solve the space-varying distortion problem using region-level fusion based on the dual tree complex wavelet transform. Finally, contrast enhancement is applied. We further propose a learning-based metric specifically for image quality assessment in the presence of atmospheric distortion. This is capable of estimating quality in both full- and no-reference scenarios. The proposed method is shown to significantly outperform existing methods, providing enhanced situational awareness in a range of surveillance scenarios. PMID:23475359

  7. Wavelet-based localization of oscillatory sources from magnetoencephalography data.

    PubMed

    Lina, J M; Chowdhury, R; Lemay, E; Kobayashi, E; Grova, C

    2014-08-01

    Transient brain oscillatory activities recorded with electroencephalography (EEG) or magnetoencephalography (MEG) are characteristic features in physiological and pathological processes. This study is aimed at describing, evaluating, and illustrating with clinical data a new method for localizing the sources of oscillatory cortical activity recorded by MEG. The method combines time-frequency representation and an entropic regularization technique in a common framework, assuming that brain activity is sparse in time and space. Spatial sparsity relies on the assumption that brain activity is organized among cortical parcels. Sparsity in time is achieved by transposing the inverse problem to the wavelet representation, for both data and sources. We propose an estimator of the wavelet coefficients of the sources based on the maximum entropy on the mean (MEM) principle. The full dynamics of the sources is obtained from the inverse wavelet transform, and principal component analysis of the reconstructed time courses is applied to extract oscillatory components. This methodology is evaluated using realistic simulations of single-trial signals, combining fast and sudden discharges (spikes) along with bursts of oscillating activity. The method is finally illustrated with a clinical application using MEG data acquired on a patient with a right orbitofrontal epilepsy. PMID:22410322

  8. Wavelets based algorithm for the evaluation of enhanced liver areas

    NASA Astrophysics Data System (ADS)

    Alvarez, Matheus; Rodrigues de Pina, Diana; Giacomini, Guilherme; Gomes Romeiro, Fernando; Barbosa Duarte, Sérgio; Yamashita, Seizo; de Arruda Miranda, José Ricardo

    2014-03-01

    Hepatocellular carcinoma (HCC) is a primary tumor of the liver. After local therapies, tumor evaluation is based on the mRECIST criteria, which involve the measurement of the maximum diameter of the viable lesion. This paper describes a computational methodology to measure, through the contrasted area of the lesions, the maximum diameter of the tumor. 63 computed tomography (CT) slices from 23 patients were assessed. Noncontrasted liver and typical HCC nodules were evaluated, and a virtual phantom was developed for this purpose. Optimization of the detection and quantification algorithm was performed using the virtual phantom. We then compared the algorithm's estimates of the maximum diameter of the target lesions against radiologist measurements. Computed results for the maximum diameter are in good agreement with the results obtained by radiologist evaluation, indicating that the algorithm was able to detect the tumor limits properly. A comparison of the maximum diameter estimated by the radiologist versus the algorithm revealed differences on the order of 0.25 cm for large-sized tumors (diameter > 5 cm), whereas agreement to better than 1.0 cm was found for small-sized tumors. Differences between the algorithm and radiologist measurements were small for small-sized tumors, with a trend toward a small increase for tumors greater than 5 cm. Therefore, traditional methods for measuring lesion diameter should be complemented with non-subjective measurement methods, which would allow a more correct evaluation of the contrast-enhanced areas of HCC according to the mRECIST criteria.

  9. Wavelet-based coherence measures of global seismic noise properties

    NASA Astrophysics Data System (ADS)

    Lyubushin, A. A.

    2015-04-01

    The coherent behavior of four parameters characterizing the global field of low-frequency (periods from 2 to 500 min) seismic noise is studied. These parameters include the generalized Hurst exponent, the multifractal singularity spectrum support width, the normalized entropy of variance, and kurtosis. The analysis is based on data from 229 broadband stations of the GSN, GEOSCOPE, and GEOFON networks for a 17-year period from the beginning of 1997 to the end of 2013. The entire set of stations is subdivided into eight groups, which, taken together, provide full coverage of the Earth. The daily median values of the studied noise parameters are calculated in each group. This procedure yields four 8-dimensional time series with a time step of 1 day and a length of 6209 samples in each scalar component. For each of the four 8-dimensional time series, a multiple correlation measure is estimated, based on computing robust canonical correlations for the Haar wavelet coefficients at the first detail level within a moving time window of length 365 days. These correlation measures for each noise property show a substantial increase beginning in 2007-2008 that continued until the end of 2013. Taking into account the well-known phenomenon of noise correlation increasing before catastrophes, this increase in seismic noise synchronization is interpreted as an indicator of the activation of the strongest (magnitude 8.5 and above) earthquakes observed since the Sumatra mega-earthquake of 26 December 2004. This synchronization continues to grow up to the end of the studied period (2013), which can be interpreted as a probable precursor of a further increase in the intensity of the strongest earthquakes worldwide.

  10. Probability Distribution Extraction from TEC Estimates based on Kernel Density Estimation

    NASA Astrophysics Data System (ADS)

    Demir, Uygar; Toker, Cenk; Çenet, Duygu

    2016-07-01

    Statistical analysis of the ionosphere, specifically the Total Electron Content (TEC), may reveal important information about its temporal and spatial characteristics. One of the core metrics that express the statistical properties of a stochastic process is its Probability Density Function (pdf). Furthermore, statistical parameters such as mean, variance and kurtosis, which can be derived from the pdf, may provide information about the spatial uniformity or clustering of the electron content. For example, the variance differentiates between a quiet ionosphere and a disturbed one, whereas kurtosis differentiates between a geomagnetic storm and an earthquake. Therefore, valuable information about the state of the ionosphere (and the natural phenomena that cause the disturbance) can be obtained by looking at the statistical parameters. In the literature, there are publications which try to fit the histogram of TEC estimates to some well-known pdfs such as Gaussian, Exponential, etc. However, constraining a histogram to fit a function with a fixed shape will increase estimation error, and all the information extracted from such a pdf will continue to contain this error. With such techniques, it is highly likely that artificial characteristics will appear in the estimated pdf which are not present in the original data. In the present study, we use the Kernel Density Estimation (KDE) technique to estimate the pdf of the TEC. KDE is a non-parametric approach which does not impose a specific form on the TEC. As a result, better pdf estimates that almost perfectly fit the observed TEC values can be obtained as compared to the techniques mentioned above. KDE is particularly good at representing tail probabilities and outliers. We also calculate the mean, variance and kurtosis of the measured TEC values. The technique is applied to the ionosphere over Turkey where the TEC values are estimated from the GNSS measurement from the TNPGN-Active (Turkish National Permanent
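
    A minimal sketch of the non-parametric approach described above, using a Gaussian kernel density estimate together with the mean, variance, and kurtosis of a sample. The data here are synthetic stand-ins for TEC values; the bandwidth choice and geographic processing of the cited study are not reproduced.

```python
import numpy as np
from scipy.stats import gaussian_kde, kurtosis

def tec_statistics(tec_values, grid_size=512):
    """Kernel density estimate of TEC samples plus basic moments."""
    tec_values = np.asarray(tec_values, dtype=float)
    kde = gaussian_kde(tec_values)          # bandwidth by Scott's rule (default)
    grid = np.linspace(tec_values.min(), tec_values.max(), grid_size)
    pdf = kde(grid)
    stats = {
        "mean": tec_values.mean(),
        "variance": tec_values.var(ddof=1),
        "kurtosis": kurtosis(tec_values),   # excess kurtosis
    }
    return grid, pdf, stats

# Example with synthetic, TEC-like data (in TECU)
rng = np.random.default_rng(3)
tec = rng.gamma(shape=9.0, scale=2.0, size=5000)
grid, pdf, stats = tec_statistics(tec)
```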

  11. Robust location and spread measures for nonparametric probability density function estimation.

    PubMed

    López-Rubio, Ezequiel

    2009-10-01

    Robustness against outliers is a desirable property of any unsupervised learning scheme. In particular, probability density estimators benefit from incorporating this feature. A possible strategy to achieve this goal is to substitute the sample mean and the sample covariance matrix by more robust location and spread estimators. Here we use the L1-median to develop a nonparametric probability density function (PDF) estimator. We prove its most relevant properties, and we show its performance in density estimation and classification applications. PMID:19885963
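
    For reference, the L1-median mentioned above (also called the spatial or geometric median) is commonly computed with the Weiszfeld iteration, sketched below under that assumption; the full robust PDF estimator of the paper is not reproduced here.

```python
import numpy as np

def l1_median(X, max_iter=200, tol=1e-8):
    """Weiszfeld iteration for the L1-median (geometric median) of the rows of X."""
    m = X.mean(axis=0)                       # start from the ordinary mean
    for _ in range(max_iter):
        d = np.linalg.norm(X - m, axis=1)
        d = np.where(d < tol, tol, d)        # guard against division by zero
        w = 1.0 / d
        m_new = (w[:, None] * X).sum(axis=0) / w.sum()
        if np.linalg.norm(m_new - m) < tol:
            break
        m = m_new
    return m

# Example: the L1-median resists the outlier that drags the sample mean
rng = np.random.default_rng(4)
data = np.vstack([rng.normal(0, 1, size=(200, 2)), [[50.0, 50.0]]])
robust_centre = l1_median(data)
```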

  12. A generalized model for estimating the energy density of invertebrates

    USGS Publications Warehouse

    James, Daniel A.; Csargo, Isak J.; Von Eschen, Aaron; Thul, Megan D.; Baker, James M.; Hayer, Cari-Ann; Howell, Jessica; Krause, Jacob; Letvin, Alex; Chipps, Steven R.

    2012-01-01

    Invertebrate energy density (ED) values are traditionally measured using bomb calorimetry. However, many researchers rely on a few published literature sources to obtain ED values because of time and sampling constraints on measuring ED with bomb calorimetry. Literature values often do not account for spatial or temporal variability associated with invertebrate ED. Thus, these values can be unreliable for use in models and other ecological applications. We evaluated the generality of the relationship between invertebrate ED and the proportion of dry-to-wet mass (pDM). We then developed and tested a regression model to predict ED from pDM based on a taxonomically, spatially, and temporally diverse sample of invertebrates representing 28 orders in aquatic (freshwater, estuarine, and marine) and terrestrial (temperate and arid) habitats from 4 continents and 2 oceans. Samples included invertebrates collected in all seasons over the last 19 y. Evaluation of these data revealed a significant relationship between ED and pDM (r² = 0.96, p < 0.0001), where ED (as J/g wet mass) was estimated from pDM as ED = 22,960 × pDM − 174.2. Model evaluation showed that nearly all (98.8%) of the variability between observed and predicted values for invertebrate ED could be attributed to residual error in the model. Regression of observed on predicted values revealed that the 97.5% joint confidence region included the intercept of 0 (−103.0 ± 707.9) and slope of 1 (1.01 ± 0.12). Use of this model requires that only dry and wet mass measurements be obtained, resulting in significant time, sample size, and cost savings compared to traditional bomb calorimetry approaches. This model should prove useful for a wide range of ecological studies because it is unaffected by taxonomic, seasonal, or spatial variability.
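
    The published regression turns into a one-line conversion from paired mass measurements; the sketch below simply applies the equation quoted above, with the input masses chosen only for illustration.

```python
def energy_density_from_mass(dry_mass_g, wet_mass_g):
    """Invertebrate energy density (J/g wet mass) from the regression
    ED = 22,960 * pDM - 174.2, where pDM is the dry-to-wet mass ratio."""
    p_dm = dry_mass_g / wet_mass_g
    return 22960.0 * p_dm - 174.2

# Example: 0.08 g dry mass from a 0.40 g wet sample -> pDM = 0.20
ed = energy_density_from_mass(0.08, 0.40)   # ~4417.8 J/g wet mass
```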

  13. Estimation of density of mongooses with capture-recapture and distance sampling

    USGS Publications Warehouse

    Corn, J.L.; Conroy, M.J.

    1998-01-01

    We captured mongooses (Herpestes javanicus) in live traps arranged in trapping webs in Antigua, West Indies, and used capture-recapture and distance sampling to estimate density. Distance estimation and program DISTANCE were used to provide estimates of density from the trapping-web data. Mean density based on trapping webs was 9.5 mongooses/ha (range, 5.9-10.2/ha); estimates had coefficients of variation ranging from 29.82-31.58% (mean = 30.46%). Mark-recapture models were used to estimate abundance, which was converted to density using estimates of effective trap area. Tests of model assumptions provided by CAPTURE indicated pronounced heterogeneity in capture probabilities and some indication of behavioral response and variation over time. Mean estimated density was 1.80 mongooses/ha (range, 1.37-2.15/ha) with estimated coefficients of variation of 4.68-11.92% (mean = 7.46%). Estimates of density based on mark-recapture data depended heavily on assumptions about animal home ranges; variances of densities also may be underestimated, leading to unrealistically narrow confidence intervals. Estimates based on trap webs require fewer assumptions, and estimated variances may be a more realistic representation of sampling variation. Because trap webs are established easily and provide adequate data for estimation in a few sample occasions, the method should be efficient and reliable for estimating densities of mongooses.

  14. Nonparametric estimation of population density for line transect sampling using FOURIER series

    USGS Publications Warehouse

    Crain, B.R.; Burnham, K.P.; Anderson, D.R.; Lake, J.L.

    1979-01-01

    A nonparametric, robust density estimation method is explored for the analysis of right-angle distances from a transect line to the objects sighted. The method is based on the FOURIER series expansion of a probability density function over an interval. With only mild assumptions, a general population density estimator of wide applicability is obtained.
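
    For illustration only, the sketch below implements a standard cosine (Fourier) series density estimator on a truncated interval [0, w] for perpendicular transect distances. It follows the generic orthogonal-series form rather than the specific estimator and term-selection rule of the cited paper, and the simulated distances are placeholders.

```python
import numpy as np

def fourier_series_pdf(distances, w, m=4):
    """Cosine-series estimate of the detection-distance density on [0, w].

    distances : perpendicular (right-angle) distances, truncated at w
    m         : number of cosine terms retained
    Returns a function f_hat(y) evaluating the estimated density.
    """
    y = np.asarray(distances, dtype=float)
    n = len(y)
    a = np.array([2.0 / (n * w) * np.cos(k * np.pi * y / w).sum()
                  for k in range(1, m + 1)])

    def f_hat(x):
        x = np.asarray(x, dtype=float)
        series = sum(a_k * np.cos(k * np.pi * x / w)
                     for k, a_k in enumerate(a, start=1))
        return 1.0 / w + series

    return f_hat

# Example: estimate the density at the line (y = 0) from simulated distances
rng = np.random.default_rng(5)
d = np.abs(rng.normal(0.0, 10.0, size=300))
d = d[d <= 25.0]                 # truncate at w = 25
f0 = fourier_series_pdf(d, w=25.0, m=4)(0.0)
```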

  15. Etalon-photometric method for estimation of tissues density at x-ray images

    NASA Astrophysics Data System (ADS)

    Buldakov, Nicolay S.; Buldakova, Tatyana I.; Suyatinov, Sergey I.

    2016-04-01

    An etalon-photometric method for quantitative estimation of the physical density of pathological entities is considered. The method consists in using an etalon (reference object) during the registration and estimation of the photometric characteristics of objects. An algorithm for estimating physical density in X-ray images is presented.

  16. Serial identification of EEG patterns using adaptive wavelet-based analysis

    NASA Astrophysics Data System (ADS)

    Nazimov, A. I.; Pavlov, A. N.; Nazimova, A. A.; Grubov, V. V.; Koronovskii, A. A.; Sitnikova, E.; Hramov, A. E.

    2013-10-01

    The problem of recognizing specific oscillatory patterns in electroencephalograms with the continuous wavelet transform is discussed. Aiming to improve the abilities of wavelet-based tools, we propose a serial adaptive method for sequential identification of EEG patterns such as sleep spindles and spike-wave discharges. This method provides an optimal selection of parameters based on objective functions and enables extraction of the most informative features of the recognized structures. Different ways of increasing the quality of pattern recognition within the proposed serial adaptive technique are considered.

  17. Optimal block boundary pre/postfiltering for wavelet-based image and video compression.

    PubMed

    Liang, Jie; Tu, Chengjie; Tran, Trac D

    2005-12-01

    This paper presents a pre/postfiltering framework to reduce the reconstruction errors near block boundaries in wavelet-based image and video compression. Two algorithms are developed to obtain the optimal filter, based on boundary filter bank and polyphase structure, respectively. A low-complexity structure is employed to approximate the optimal solution. Performances of the proposed method in the removal of JPEG 2000 tiling artifact and the jittering artifact of three-dimensional wavelet video coding are reported. Comparisons with other methods demonstrate the advantages of our pre/postfiltering framework. PMID:16370467

  18. ICER-3D: A Progressive Wavelet-Based Compressor for Hyperspectral Images

    NASA Technical Reports Server (NTRS)

    Kiely, A.; Klimesh, M.; Xie, H.; Aranki, N.

    2005-01-01

    ICER-3D is a progressive, wavelet-based compressor for hyperspectral images. ICER-3D is derived from the ICER image compressor. ICER-3D can provide lossless and lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The three-dimensional wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of hyperspectral data sets, while facilitating elimination of spectral ringing artifacts. Correlation is further exploited by a context modeler that effectively exploits spectral dependencies in the wavelet-transformed hyperspectral data. Performance results illustrating the benefits of these features are presented.

  19. Automatic quantitative analysis of ultrasound tongue contours via wavelet-based functional mixed models.

    PubMed

    Lancia, Leonardo; Rausch, Philip; Morris, Jeffrey S

    2015-02-01

    This paper illustrates the application of wavelet-based functional mixed models to automatic quantification of differences between tongue contours obtained through ultrasound imaging. The reliability of this method is demonstrated through the analysis of tongue positions recorded from a female and a male speaker at the onset of the vowels /a/ and /i/ produced in the context of the consonants /t/ and /k/. The proposed method allows detection of significant differences between configurations of the articulators that are visible in ultrasound images during the production of different speech gestures and is compatible with statistical designs containing both fixed and random terms. PMID:25698047

  20. Information passage from acoustic impedance to seismogram: Perspectives from wavelet-based multiscale analysis

    NASA Astrophysics Data System (ADS)

    Li, Chun-Feng

    2004-07-01

    Traditional seismic interpretation of surface seismic data is focused primarily on seismic oscillation. Rich singularity information carried by, but deeply buried in, seismic data is often ignored. We show that wavelet-based singularity analysis reveals generic singularity information conducted from acoustic impedance to the seismogram. The singularity exponents (known as Hölder exponents α) calculated from seismic data are independent of amplitude and robust to phase changes and noise. These unique properties of α offer potentially important applications in many fields, especially in seismic data interpretation, processing, inversion, and the study of wave attenuation.

  1. A novel 3D wavelet based filter for visualizing features in noisy biological data

    SciTech Connect

    Moss, W C; Haase, S; Lyle, J M; Agard, D A; Sedat, J W

    2005-01-05

    We have developed a 3D wavelet-based filter for visualizing structural features in volumetric data. The only variable parameter is a characteristic linear size of the feature of interest. The filtered output contains only those regions that are correlated with the characteristic size, thus denoising the image. We demonstrate the use of the filter by applying it to 3D data from a variety of electron microscopy samples including low contrast vitreous ice cryogenic preparations, as well as 3D optical microscopy specimens.

  2. An economic prediction of refinement coefficients in wavelet-based adaptive methods for electron structure calculations.

    PubMed

    Pipek, János; Nagy, Szilvia

    2013-03-01

    The wave function of a many-electron system contains inhomogeneously distributed spatial details, which makes it possible to reduce the number of fine-detail wavelets in multiresolution analysis approximations. Finding a method for decimating the unnecessary basis functions plays an essential role in avoiding an exponential increase of computational demand in wavelet-based calculations. We describe an effective prediction algorithm for the wavelet coefficients of the next resolution level, based on the approximate wave function expanded up to a given level. The prediction results in a reasonable approximation of the wave function and allows the unnecessary wavelets to be sorted out with great reliability. PMID:23115109

  3. Rigorous home range estimation with movement data: a new autocorrelated kernel density estimator.

    PubMed

    Fleming, C H; Fagan, W F; Mueller, T; Olson, K A; Leimgruber, P; Calabrese, J M

    2015-05-01

    Quantifying animals' home ranges is a key problem in ecology and has important conservation and wildlife management applications. Kernel density estimation (KDE) is a workhorse technique for range delineation problems that is both statistically efficient and nonparametric. KDE assumes that the data are independent and identically distributed (IID). However, animal tracking data, which are routinely used as inputs to KDEs, are inherently autocorrelated and violate this key assumption. As we demonstrate, using realistically autocorrelated data in conventional KDEs results in grossly underestimated home ranges. We further show that the performance of conventional KDEs actually degrades as data quality improves, because autocorrelation strength increases as movement paths become more finely resolved. To remedy these flaws with the traditional KDE method, we derive an autocorrelated KDE (AKDE) from first principles to use autocorrelated data, making it perfectly suited for movement data sets. We illustrate the vastly improved performance of AKDE using analytical arguments, relocation data from Mongolian gazelles, and simulations based upon the gazelle's observed movement process. By yielding better minimum area estimates for threatened wildlife populations, we believe that future widespread use of AKDE will have significant impact on ecology and conservation biology. PMID:26236833
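
    The AKDE derivation itself is not reproduced here, but the underestimation problem it addresses is easy to probe numerically. The hedged sketch below simulates a range-resident Ornstein-Uhlenbeck track and compares a conventional KDE 95% area computed from the full, autocorrelated track with one computed from a coarsely thinned, roughly independent subsample; all parameters are illustrative and this is not the AKDE estimator.

    ```python
    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(0)

    def ou_track(n=5000, dt=0.01, tau=1.0, sigma=1.0):
        """Simulate a 2-D Ornstein-Uhlenbeck position process (range-resident movement)."""
        x = np.zeros((n, 2))
        a = np.exp(-dt / tau)
        sd = sigma * np.sqrt(1.0 - a**2)
        for i in range(1, n):
            x[i] = a * x[i - 1] + sd * rng.standard_normal(2)
        return x

    def kde_area(points, level=0.95, grid_n=120, extent=4.0):
        """Area of the smallest region holding `level` of the conventional KDE's mass."""
        kde = gaussian_kde(points.T)
        xs = np.linspace(-extent, extent, grid_n)
        X, Y = np.meshgrid(xs, xs)
        dens = kde(np.vstack([X.ravel(), Y.ravel()]))
        cell = (xs[1] - xs[0]) ** 2
        order = np.argsort(dens)[::-1]
        cum = np.cumsum(dens[order]) * cell
        return (np.searchsorted(cum, level) + 1) * cell

    track = ou_track()
    print("95% area, full autocorrelated track :", kde_area(track))
    print("95% area, thinned (~independent)    :", kde_area(track[::100]))
    ```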

  4. A maximum entropy kernel density estimator with applications to function interpolation and texture segmentation

    NASA Astrophysics Data System (ADS)

    Balakrishnan, Nikhil; Schonfeld, Dan

    2006-02-01

    In this paper, we develop a new algorithm to estimate an unknown probability density function given a finite data sample using a tree-shaped kernel density estimator. The algorithm formulates an integrated squared error based cost function which minimizes the quadratic divergence between the kernel density and the Parzen density estimate. The cost function reduces to a quadratic programming problem which is minimized within the maximum entropy framework. The maximum entropy principle acts as a regularizer which yields a smooth solution. A smooth density estimate enables better generalization to unseen data and offers distinct advantages in high dimensions and cases where there is limited data. We demonstrate applications of the hierarchical kernel density estimator for function interpolation and texture segmentation problems. When applied to function interpolation, the kernel density estimator improves performance considerably in situations where the posterior conditional density of the dependent variable is multimodal. The kernel density estimator allows flexible non-parametric modeling of textures, which improves performance in texture segmentation algorithms. We demonstrate the algorithm on a text labeling problem, which shows its performance in high dimensions. The hierarchical nature of the density estimator enables multiresolution solutions depending on the complexity of the data. The algorithm is fast and has at most quadratic scaling in the number of kernels.
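
    A minimal sketch in the same spirit (weighted kernels fitted to a Parzen reference by constrained quadratic minimization with an entropy regularizer) is given below. It is not the paper's tree-shaped, quadratic-programming implementation; the grid discretization, bandwidth, kernel centers, and regularization weight are assumptions.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import gaussian_kde, norm

    rng = np.random.default_rng(1)
    data = np.concatenate([rng.normal(-2, 0.5, 200), rng.normal(1, 1.0, 300)])

    grid = np.linspace(-5, 5, 400)
    dx = grid[1] - grid[0]
    parzen = gaussian_kde(data)(grid)                 # reference Parzen estimate

    centers = np.linspace(-4, 4, 15)                  # small set of kernel centers
    K = norm.pdf(grid[:, None], loc=centers[None, :], scale=0.6)   # (grid, centers)

    def objective(w, lam=1e-3):
        ise = np.sum((K @ w - parzen) ** 2) * dx      # integrated squared error (Riemann sum)
        entropy = -np.sum(w * np.log(w + 1e-12))      # entropy of the kernel weights
        return ise - lam * entropy                    # maximizing entropy regularizes the fit

    cons = ({"type": "eq", "fun": lambda w: np.sum(w) - 1.0},)
    w0 = np.full(len(centers), 1.0 / len(centers))
    res = minimize(objective, w0, bounds=[(0.0, 1.0)] * len(centers),
                   constraints=cons, method="SLSQP")
    print("kernel weights:", np.round(res.x, 3))
    ```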

  5. Evaluation of Effectiveness of Wavelet Based Denoising Schemes Using ANN and SVM for Bearing Condition Classification

    PubMed Central

    G. S., Vijay; H. S., Kumar; Pai P., Srinivasa; N. S., Sriram; Rao, Raj B. K. N.

    2012-01-01

    Wavelet-based denoising has proven its ability to denoise bearing vibration signals by improving the signal-to-noise ratio (SNR) and reducing the root-mean-square error (RMSE). In this paper, seven wavelet-based denoising schemes have been evaluated based on the performance of the Artificial Neural Network (ANN) and the Support Vector Machine (SVM) for bearing condition classification. The work consists of two parts. In the first part, a synthetic signal simulating the defective bearing vibration signal with Gaussian noise was subjected to these denoising schemes, and the best scheme based on the SNR and the RMSE was identified. In the second part, the vibration signals collected from a customized Rolling Element Bearing (REB) test rig for four bearing conditions were subjected to these denoising schemes. Several time- and frequency-domain features were extracted from the denoised signals, out of which a few sensitive features were selected using Fisher's Criterion (FC). Extracted features were used to train and test the ANN and the SVM. The best denoising scheme identified, based on the classification performances of the ANN and the SVM, was found to be the same as the one obtained using the synthetic signal. PMID:23213323
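
    A hedged sketch of the first part of such a study is shown below: a synthetic impulsive "bearing fault" signal with additive Gaussian noise is denoised by soft wavelet thresholding, and the SNR and RMSE of the result are computed. The signal model, wavelet, decomposition level, and threshold rule are illustrative choices, not the seven schemes evaluated in the paper.

    ```python
    import numpy as np
    import pywt

    rng = np.random.default_rng(0)

    # Synthetic "defective bearing" signal: periodic impulses exciting a decaying
    # resonance, buried in Gaussian noise (a common simulation, not the paper's model).
    fs = 12_000
    t = np.arange(0, 1.0, 1 / fs)
    clean = np.exp(-800 * (t % 0.02)) * np.sin(2 * np.pi * 3000 * t)
    noisy = clean + 0.3 * rng.standard_normal(len(t))

    def wavelet_denoise(x, wavelet="db8", level=5):
        coeffs = pywt.wavedec(x, wavelet, level=level)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745            # noise estimate, finest level
        thr = sigma * np.sqrt(2 * np.log(len(x)))                 # universal threshold
        coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
        return pywt.waverec(coeffs, wavelet)[: len(x)]

    den = wavelet_denoise(noisy)
    snr = 10 * np.log10(np.sum(clean**2) / np.sum((den - clean) ** 2))
    rmse = np.sqrt(np.mean((den - clean) ** 2))
    print(f"SNR = {snr:.1f} dB, RMSE = {rmse:.4f}")
    ```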

  6. An adaptive wavelet-based deblocking algorithm for MPEG-4 codec

    NASA Astrophysics Data System (ADS)

    Truong, Trieu-Kien; Chen, Shi-Huang; Jhang, Rong-Yi

    2005-08-01

    This paper proposes an adaptive wavelet-based deblocking algorithm for the MPEG-4 video coding standard. The novelty of this method is that the deblocking filter uses a wavelet-based threshold to detect and analyze artifacts on coded block boundaries. This threshold value is based on the difference between the wavelet transform coefficients of image blocks and the coefficients of the entire image. Therefore, the threshold value is made adaptive to different images and characteristics of blocking artifacts. One can then attenuate those artifacts by applying a selected filter based on the above threshold value. It is shown in this paper that the proposed method is robust, fast, and works remarkably well for the MPEG-4 codec at low bit rates. Another advantage of the new method is that it retains sharp features in the decoded frames since it only removes artifacts. Experimental results show that the proposed method can achieve significantly improved visual quality and increase the PSNR of the decoded video frames.

  7. Wavelet-based neural network analysis of internal carotid arterial Doppler signals.

    PubMed

    Ubeyli, Elif Derya; Güler, Inan

    2006-06-01

    In this study, internal carotid arterial Doppler signals recorded from 130 subjects, where 45 of them suffered from internal carotid artery stenosis, 44 of them suffered from internal carotid artery occlusion and the rest of them were healthy subjects, were classified using wavelet-based neural network. Wavelet-based neural network model, employing the multilayer perceptron, was used for analysis of the internal carotid arterial Doppler signals. Multi-layer perceptron neural network (MLPNN) trained with the Levenberg-Marquardt algorithm was used to detect stenosis and occlusion in internal carotid arteries. In order to determine the MLPNN inputs, spectral analysis of the internal carotid arterial Doppler signals was performed using wavelet transform (WT). The MLPNN was trained, cross validated, and tested with training, cross validation, and testing sets, respectively. All these data sets were obtained from internal carotid arteries of healthy subjects, subjects suffering from internal carotid artery stenosis and occlusion. The correct classification rate was 96% for healthy subjects, 96.15% for subjects having internal carotid artery stenosis and 96.30% for subjects having internal carotid artery occlusion. The classification results showed that the MLPNN trained with the Levenberg-Marquardt algorithm was effective to detect internal carotid artery stenosis and occlusion. PMID:16848135

  8. A wavelet-based data pre-processing analysis approach in mass spectrometry.

    PubMed

    Li, Xiaoli; Li, Jin; Yao, Xin

    2007-04-01

    Recently, mass spectrometry analysis has become an effective and rapid approach for detecting early-stage cancer. To identify proteomic patterns in serum that discriminate cancer patients from normal individuals, machine-learning methods, such as feature selection and classification, have already been applied to the analysis of mass spectrometry (MS) data with some success. However, the performance of existing machine learning methods for MS data analysis still needs improving. The study in this paper proposes a wavelet-based pre-processing approach to MS data analysis. The approach applies wavelet-based transforms to MS data with the aim of de-noising data that are potentially contaminated during acquisition. The effects of the selection of wavelet function and decomposition level on the de-noising performance have also been investigated in this study. Our comparative experimental results demonstrate that the proposed de-noising pre-processing approach has the potential to remove noise embedded in MS data, which can lead to improved performance for existing machine learning methods in cancer detection. PMID:16982045

  9. Curve Fitting of the Corporate Recovery Rates: The Comparison of Beta Distribution Estimation and Kernel Density Estimation

    PubMed Central

    Chen, Rongda; Wang, Ze

    2013-01-01

    Recovery rate is essential to the estimation of a portfolio's loss and economic capital. Neglecting the randomness of the distribution of the recovery rate may underestimate the risk. This study introduces two models of the distribution, Beta distribution estimation and kernel density estimation, to simulate the distribution of recovery rates of corporate loans and bonds. Models based on the Beta distribution are in common use, for example in CreditMetrics by J.P. Morgan, Portfolio Manager by KMV and LossCalc by Moody's. However, the Beta distribution has a fundamental defect: it cannot fit bimodal or multimodal distributions, such as the recovery rates of corporate loans and bonds that Moody's new data show. To overcome this flaw, kernel density estimation is introduced, and the simulation results from the histogram, Beta distribution estimation and kernel density estimation are compared, leading to the conclusion that the Gaussian kernel density estimate better imitates the distribution of bimodal or multimodal data samples of corporate loans and bonds. Finally, a Chi-square test of the Gaussian kernel density estimate shows that it fits the curve of recovery rates of loans and bonds. Using the kernel density estimate to precisely delineate the bimodal recovery rates of bonds is therefore optimal in credit risk management. PMID:23874558
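
    The comparison is straightforward to reproduce in spirit. The hedged sketch below fits a single Beta distribution and a Gaussian kernel density estimate to stylized bimodal "recovery rate" data and scores both against a fine histogram; the data and scoring are illustrative, not Moody's data or the paper's tests.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)

    # Stylized bimodal recovery rates on [0, 1]: one hump of near-total losses and
    # one of near-full recoveries (illustrative data only).
    recoveries = np.concatenate([rng.beta(2, 8, 600), rng.beta(9, 2, 400)])

    # Single Beta fit, constrained to the unit interval.
    a, b, loc, scale = stats.beta.fit(recoveries, floc=0, fscale=1)

    # Gaussian kernel density estimate.
    kde = stats.gaussian_kde(recoveries)

    # Compare both fits against a fine histogram used as a crude reference density:
    # a unimodal Beta cannot track both humps, whereas the KDE can.
    hist, edges = np.histogram(recoveries, bins=50, range=(0, 1), density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    print("Beta mean squared error:", np.mean((stats.beta(a, b).pdf(centers) - hist) ** 2))
    print("KDE mean squared error :", np.mean((kde(centers) - hist) ** 2))
    ```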

  11. EXACT MINIMAX ESTIMATION OF THE PREDICTIVE DENSITY IN SPARSE GAUSSIAN MODELS

    PubMed Central

    Mukherjee, Gourab; Johnstone, Iain M.

    2015-01-01

    We consider estimating the predictive density under Kullback–Leibler loss in an ℓ0 sparse Gaussian sequence model. Explicit expressions of the first order minimax risk along with its exact constant, asymptotically least favorable priors and optimal predictive density estimates are derived. Compared to the sparse recovery results involving point estimation of the normal mean, new decision theoretic phenomena are seen. Suboptimal performance of the class of plug-in density estimates reflects the predictive nature of the problem and optimal strategies need diversification of the future risk. We find that minimax optimal strategies lie outside the Gaussian family but can be constructed with threshold predictive density estimates. Novel minimax techniques involving simultaneous calibration of the sparsity adjustment and the risk diversification mechanisms are used to design optimal predictive density estimates. PMID:26448678

  12. Effects of LiDAR point density and landscape context on estimates of urban forest biomass

    NASA Astrophysics Data System (ADS)

    Singh, Kunwar K.; Chen, Gang; McCarter, James B.; Meentemeyer, Ross K.

    2015-03-01

    Light Detection and Ranging (LiDAR) data is being increasingly used as an effective alternative to conventional optical remote sensing to accurately estimate aboveground forest biomass ranging from individual tree to stand levels. Recent advancements in LiDAR technology have resulted in higher point densities and improved data accuracies accompanied by challenges for procuring and processing voluminous LiDAR data for large-area assessments. Reducing point density lowers data acquisition costs and overcomes computational challenges for large-area forest assessments. However, how does lower point density impact the accuracy of biomass estimation in forests containing a great level of anthropogenic disturbance? We evaluate the effects of LiDAR point density on the biomass estimation of remnant forests in the rapidly urbanizing region of Charlotte, North Carolina, USA. We used multiple linear regression to establish a statistical relationship between field-measured biomass and predictor variables derived from LiDAR data with varying densities. We compared the estimation accuracies between a general Urban Forest type and three Forest Type models (evergreen, deciduous, and mixed) and quantified the degree to which landscape context influenced biomass estimation. The explained biomass variance of the Urban Forest model, using adjusted R2, was consistent across the reduced point densities, with the highest difference of 11.5% between the 100% and 1% point densities. The combined estimates of Forest Type biomass models outperformed the Urban Forest models at the representative point densities (100% and 40%). The Urban Forest biomass model with development density of 125 m radius produced the highest adjusted R2 (0.83 and 0.82 at 100% and 40% LiDAR point densities, respectively) and the lowest RMSE values, highlighting a distance impact of development on biomass estimation. Our evaluation suggests that reducing LiDAR point density is a viable solution to regional

  13. Constructing valid density matrices on an NMR quantum information processor via maximum likelihood estimation

    NASA Astrophysics Data System (ADS)

    Singh, Harpreet; Arvind; Dorai, Kavita

    2016-09-01

    Estimation of quantum states is an important step in any quantum information processing experiment. A naive reconstruction of the density matrix from experimental measurements can often give density matrices which are not positive, and hence not physically acceptable. How do we ensure that at all stages of reconstruction, we keep the density matrix positive? Recently a method has been suggested based on maximum likelihood estimation, wherein the density matrix is guaranteed to be positive definite. We experimentally implement this protocol on an NMR quantum information processor. We discuss several examples and compare with the standard method of state estimation.
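
    The abstract does not give the algorithm, but a standard way to keep the density matrix positive during maximum-likelihood reconstruction is to optimize over a Cholesky-like parametrization ρ = T†T / Tr(T†T). The single-qubit sketch below uses that parametrization with a simple least-squares cost as a stand-in for the likelihood; the "measured" expectation values are synthetic and deliberately inconsistent with any physical state, so a naive linear reconstruction would not be positive.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Pauli operators and a set of synthetic "measured" expectation values.
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    ops = [sx, sy, sz]
    measured = np.array([0.55, -0.30, 0.85])   # slightly inconsistent / noisy data

    def rho_from_params(p):
        """Cholesky-like parametrization guaranteeing a positive semidefinite, unit-trace rho."""
        T = np.array([[p[0], 0.0],
                      [p[2] + 1j * p[3], p[1]]], dtype=complex)
        M = T.conj().T @ T
        return M / np.trace(M).real

    def cost(p):
        rho = rho_from_params(p)
        pred = np.array([np.trace(rho @ O).real for O in ops])
        return np.sum((pred - measured) ** 2)   # least-squares stand-in for the likelihood

    res = minimize(cost, x0=np.array([1.0, 1.0, 0.0, 0.0]), method="Nelder-Mead")
    rho_hat = rho_from_params(res.x)
    print(np.round(rho_hat, 3))
    print("eigenvalues:", np.round(np.linalg.eigvalsh(rho_hat), 3))  # all >= 0 by construction
    ```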

  14. Density estimation and population growth of mosquitofish (Gambusia affinis) in rice fields.

    PubMed

    Stewart, R J; Miura, T

    1985-03-01

    Mark-release-recapture estimation of population density was performed with mosquitofish in a rice field habitat. Regression analysis showed a relationship between absolute density and mean number of fish per trap. Trap counts were converted to density estimates with data from several fields and growth curves were calculated to describe seasonal growth of mosquitofish populations at three different initial stocking rates. The calculated curves showed a good correspondence to field populations of mosquitofish. PMID:2906659

  15. Estimated global nitrogen deposition using NO2 column density

    USGS Publications Warehouse

    Lu, Xuehe; Jiang, Hong; Zhang, Xiuying; Liu, Jinxun; Zhang, Zhen; Jin, Jiaxin; Wang, Ying; Xu, Jianhui; Cheng, Miaomiao

    2013-01-01

    Global nitrogen deposition has increased over the past 100 years. Monitoring and simulation studies of nitrogen deposition have evaluated nitrogen deposition at both the global and regional scale. With the development of remote-sensing instruments, tropospheric NO2 column density retrieved from Global Ozone Monitoring Experiment (GOME) and Scanning Imaging Absorption Spectrometer for Atmospheric Chartography (SCIAMACHY) sensors now provides us with a new opportunity to understand changes in reactive nitrogen in the atmosphere. The concentration of NO2 in the atmosphere has a significant effect on atmospheric nitrogen deposition. According to the general nitrogen deposition calculation method, we use the principal component regression method to evaluate global nitrogen deposition based on global NO2 column density and meteorological data. From the accuracy of the simulation, about 70% of the land area of the Earth passed a significance test of regression. In addition, NO2 column density has a significant influence on regression results over 44% of global land. The simulated results show that global average nitrogen deposition was 0.34 g m⁻² yr⁻¹ from 1996 to 2009 and is increasing at about 1% per year. Our simulated results show that China, Europe, and the USA are the three hotspots of nitrogen deposition according to previous research findings. In this study, Southern Asia was found to be another hotspot of nitrogen deposition (about 1.58 g m⁻² yr⁻¹ and maintaining a high growth rate). As nitrogen deposition increases, the number of regions threatened by high nitrogen deposits is also increasing. With N emissions continuing to increase in the future, areas whose ecosystems are affected by high levels of nitrogen deposition will increase.

  16. Probabilistic Analysis and Density Parameter Estimation Within Nessus

    NASA Technical Reports Server (NTRS)

    Godines, Cody R.; Manteufel, Randall D.; Chamis, Christos C. (Technical Monitor)

    2002-01-01

    This NASA educational grant has the goal of promoting probabilistic analysis methods to undergraduate and graduate UTSA engineering students. Two undergraduate-level and one graduate-level course were offered at UTSA providing a large number of students exposure to and experience in probabilistic techniques. The grant provided two research engineers from Southwest Research Institute the opportunity to teach these courses at UTSA, thereby exposing a large number of students to practical applications of probabilistic methods and state-of-the-art computational methods. In classroom activities, students were introduced to the NESSUS computer program, which embodies many algorithms in probabilistic simulation and reliability analysis. Because the NESSUS program is used at UTSA in both student research projects and selected courses, a student version of a NESSUS manual has been revised and improved, with additional example problems being added to expand the scope of the example application problems. This report documents two research accomplishments in the integration of a new sampling algorithm into NESSUS and in the testing of the new algorithm. The new Latin Hypercube Sampling (LHS) subroutines use the latest NESSUS input file format and specific files for writing output. The LHS subroutines are called out early in the program so that no unnecessary calculations are performed. Proper correlation between sets of multidimensional coordinates can be obtained by using NESSUS' LHS capabilities. Finally, two types of correlation are written to the appropriate output file. The program enhancement was tested by repeatedly estimating the mean, standard deviation, and 99th percentile of four different responses using Monte Carlo (MC) and LHS. These test cases, put forth by the Society of Automotive Engineers, are used to compare probabilistic methods. For all test cases, it is shown that LHS has a lower estimation error than MC when used to estimate the mean, standard deviation
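
    A hedged sketch of the kind of comparison described (variance of the mean estimate under plain Monte Carlo versus Latin Hypercube Sampling) is given below; the toy response function and sample sizes are assumptions and are not the SAE test cases used in the report.

    ```python
    import numpy as np
    from scipy.stats import qmc, norm

    rng = np.random.default_rng(7)

    def response(x):
        """Toy response with two standard-normal inputs (not one of the SAE test cases)."""
        return np.exp(0.3 * x[:, 0]) + 0.5 * x[:, 1] ** 2

    n, dim, reps = 200, 2, 500
    mc_means, lhs_means = [], []
    sampler = qmc.LatinHypercube(d=dim, seed=7)

    for _ in range(reps):
        # Plain Monte Carlo.
        x_mc = rng.standard_normal((n, dim))
        mc_means.append(response(x_mc).mean())
        # Latin Hypercube sample mapped to standard normals through the inverse CDF.
        u = sampler.random(n)
        x_lhs = norm.ppf(u)
        lhs_means.append(response(x_lhs).mean())

    print("MC  std of mean estimate:", np.std(mc_means))
    print("LHS std of mean estimate:", np.std(lhs_means))
    ```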

  18. Conjugate Event Study of Geomagnetic ULF Pulsations with Wavelet-based Indices

    NASA Astrophysics Data System (ADS)

    Xu, Z.; Clauer, C. R.; Kim, H.; Weimer, D. R.; Cai, X.

    2013-12-01

    The interactions between the solar wind and the geomagnetic field produce a variety of space weather phenomena, which can impact the advanced technology systems of modern society including, for example, power systems, communication systems, and navigation systems. One type of phenomenon is geomagnetic ULF pulsations observed by ground-based or in-situ satellite measurements. Here, we describe a wavelet-based index and apply it to study the geomagnetic ULF pulsations observed by Antarctic and Greenland magnetometer arrays. The wavelet indices computed from these data provide spectral, correlation, and magnitude information about the geomagnetic pulsations. The results show that the geomagnetic field at conjugate locations responds differently according to the frequency of pulsations. The index is effective for identifying pulsation events and measures important characteristics of the pulsations. It could be a useful tool for monitoring geomagnetic pulsations.

  19. Corrosion in Reinforced Concrete Panels: Wireless Monitoring and Wavelet-Based Analysis

    PubMed Central

    Qiao, Guofu; Sun, Guodong; Hong, Yi; Liu, Tiejun; Guan, Xinchun

    2014-01-01

    To realize the efficient data capture and accurate analysis of pitting corrosion of the reinforced concrete (RC) structures, we first design and implement a wireless sensor and network (WSN) to monitor the pitting corrosion of RC panels, and then, we propose a wavelet-based algorithm to analyze the corrosion state with the corrosion data collected by the wireless platform. We design a novel pitting corrosion-detecting mote and a communication protocol such that the monitoring platform can sample the electrochemical emission signals of corrosion process with a configured period, and send these signals to a central computer for the analysis. The proposed algorithm, based on the wavelet domain analysis, returns the energy distribution of the electrochemical emission data, from which close observation and understanding can be further achieved. We also conducted test-bed experiments based on RC panels. The results verify the feasibility and efficiency of the proposed WSN system and algorithms. PMID:24556673
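
    A minimal sketch of the wavelet-domain energy-distribution idea is shown below: decompose a signal into wavelet levels and report the fraction of energy carried by each level. The wavelet, depth, and synthetic test signal are illustrative; the paper applies this kind of analysis to the electrochemical emission signals collected by the WSN.

    ```python
    import numpy as np
    import pywt

    def wavelet_energy_distribution(signal, wavelet="db4", level=6):
        """Fraction of signal energy carried by each wavelet decomposition level,
        ordered [approximation, coarsest detail, ..., finest detail]."""
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        energies = np.array([np.sum(c**2) for c in coeffs])
        return energies / energies.sum()

    rng = np.random.default_rng(3)
    sig = rng.standard_normal(4096) + 0.2 * np.sin(2 * np.pi * 0.01 * np.arange(4096))
    print(np.round(wavelet_energy_distribution(sig), 3))
    ```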

  20. Wavelet-based adaptive numerical simulation of unsteady 3D flow around a bluff body

    NASA Astrophysics Data System (ADS)

    de Stefano, Giuliano; Vasilyev, Oleg

    2012-11-01

    The unsteady three-dimensional flow past a two-dimensional bluff body is numerically simulated using a wavelet-based method. The body is modeled by exploiting the Brinkman volume-penalization method, which results in modifying the governing equations with the addition of an appropriate forcing term inside the spatial region occupied by the obstacle. The volume-penalized incompressible Navier-Stokes equations are numerically solved by means of the adaptive wavelet collocation method, where the non-uniform spatial grid is dynamically adapted to the flow evolution. The combined approach is successfully applied to the simulation of vortex shedding flow behind a stationary prism with square cross-section. The computation is conducted at transitional Reynolds numbers, where fundamental unstable three-dimensional vortical structures exist, by well-predicting the unsteady forces arising from fluid-structure interaction.

  1. Wavelet-based enhancement for detection of left ventricular myocardial boundaries in magnetic resonance images.

    PubMed

    Fu, J C; Chai, J W; Wong, S T

    2000-11-01

    MRI is noninvasive and generates clear images, giving it great potential as a diagnostic instrument. However, current methods of image analysis are too time-consuming for dynamic systems such as the cardiovascular system. Since dynamic imaging generates a huge number of images, a computer-aided machine vision diagnostic tool is essential for implementing MRI-based measurement. In this paper, a wavelet-based image technique is applied to enhance left ventricular endocardial and epicardial profiles as the preprocessor for a dynamic programming-based automatic border detection algorithm. Statistical tests are conducted to verify the performance of the enhancement technique by comparing manually drawn borders with (1) borders generated from the enhanced images and (2) borders generated from the original images. PMID:11118768

  2. FAST TRACK COMMUNICATION: From cardinal spline wavelet bases to highly coherent dictionaries

    NASA Astrophysics Data System (ADS)

    Andrle, Miroslav; Rebollo-Neira, Laura

    2008-05-01

    Wavelet families arise by scaling and translations of a prototype function, called the mother wavelet. The construction of wavelet bases for cardinal spline spaces is generally carried out within the multi-resolution analysis scheme. Thus, the usual way of increasing the dimension of the multi-resolution subspaces is by augmenting the scaling factor. We show here that, when working on a compact interval, the identical effect can be achieved without changing the wavelet scale but reducing the translation parameter. By such a procedure we generate a redundant frame, called a dictionary, spanning the same spaces as a wavelet basis but with wavelets of broader support. We characterize the correlation of the dictionary elements by measuring their 'coherence' and produce examples illustrating the relevance of highly coherent dictionaries to problems of sparse signal representation.
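
    The notion of coherence used here is simple to compute. The sketch below builds a toy redundant dictionary of overlapping hat (linear B-spline) atoms on a compact interval, with a translation step much smaller than the atom support, and evaluates its mutual coherence, i.e. the largest absolute inner product between distinct normalized atoms; the hat atoms are illustrative stand-ins for the paper's cardinal spline wavelets.

    ```python
    import numpy as np

    def coherence(D):
        """Mutual coherence: largest absolute inner product between distinct,
        l2-normalized dictionary atoms (columns of D)."""
        Dn = D / np.linalg.norm(D, axis=0, keepdims=True)
        G = np.abs(Dn.T @ Dn)
        np.fill_diagonal(G, 0.0)
        return G.max()

    # Toy dictionary on [0, 1]: translates of a hat function whose translation step
    # (0.02) is much smaller than its support (half-width 0.08), mimicking the
    # broader-support redundant frames discussed above.
    x = np.linspace(0, 1, 512)

    def hat(c, w):                       # triangular atom centered at c, half-width w
        return np.clip(1 - np.abs(x - c) / w, 0, None)

    atoms = np.column_stack([hat(c, 0.08) for c in np.arange(0.05, 0.96, 0.02)])
    print("coherence of the redundant dictionary:", round(coherence(atoms), 3))
    ```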

  3. An Investigation of Wavelet Bases for Grid-Based Multi-Scale Simulations Final Report

    SciTech Connect

    Baty, R.S.; Burns, S.P.; Christon, M.A.; Roach, D.W.; Trucano, T.G.; Voth, T.E.; Weatherby, J.R.; Womble, D.E.

    1998-11-01

    The research summarized in this report is the result of a two-year effort that has focused on evaluating the viability of wavelet bases for the solution of partial differential equations. The primary objective for this work has been to establish a foundation for hierarchical/wavelet simulation methods based upon numerical performance, computational efficiency, and the ability to exploit the hierarchical adaptive nature of wavelets. This work has demonstrated that hierarchical bases can be effective for problems with a dominant elliptic character. However, the strict enforcement of orthogonality was found to be less desirable than weaker semi-orthogonality or bi-orthogonality for solving partial differential equations. This conclusion has led to the development of a multi-scale linear finite element based on a hierarchical change of basis. The reproducing kernel particle method has been found to yield extremely accurate phase characteristics for hyperbolic problems while providing a convenient framework for multi-scale analyses.

  4. Optimum wavelet based masking for the contrast enhancement of medical images using enhanced cuckoo search algorithm.

    PubMed

    Daniel, Ebenezer; Anitha, J

    2016-04-01

    Unsharp masking techniques are a prominent approach in contrast enhancement. Generalized masking formulation has static scale value selection, which limits the gain of contrast. In this paper, we propose an Optimum Wavelet Based Masking (OWBM) using Enhanced Cuckoo Search Algorithm (ECSA) for the contrast improvement of medical images. The ECSA can automatically adjust the ratio of nest rebuilding, using genetic operators such as adaptive crossover and mutation. First, the proposed contrast enhancement approach is validated quantitatively using Brain Web and MIAS database images. Later, the conventional nest rebuilding of cuckoo search optimization is modified using Adaptive Rebuilding of Worst Nests (ARWN). Experimental results are analyzed using various performance matrices, and our OWBM shows improved results as compared with other reported literature. PMID:26945462

  5. A linear quality control design for high efficient wavelet-based ECG data compression.

    PubMed

    Hung, King-Chu; Tsai, Chin-Feng; Ku, Cheng-Tung; Wang, Huan-Sheng

    2009-05-01

    In ECG data compression, maintaining reconstructed signal with desired quality is crucial for clinical application. In this paper, a linear quality control design based on the reversible round-off non-recursive discrete periodized wavelet transform (RRO-NRDPWT) is proposed for high efficient ECG data compression. With the advantages of error propagation resistance and octave coefficient normalization, RRO-NRDPWT enables the non-linear quantization control to obtain an approximately linear distortion by using a single control variable. Based on the linear programming, a linear quantization scale prediction model is presented for the quality control of reconstructed ECG signal. Following the use of the MIT-BIH arrhythmia database, the experimental results show that the proposed system, with lower computational complexity, can obtain much better quality control performance than that of other wavelet-based systems. PMID:19070935

  6. Wavelet-based Poisson Solver for use in Particle-In-CellSimulations

    SciTech Connect

    Terzic, B.; Mihalcea, D.; Bohn, C.L.; Pogorelov, I.V.

    2005-05-13

    We report on a successful implementation of a wavelet-based Poisson solver for use in 3D particle-in-cell (PIC) simulations. One new aspect of our algorithm is its ability to treat general (inhomogeneous) Dirichlet boundary conditions (BCs). The solver harnesses advantages afforded by the wavelet formulation, such as sparsity of operators and data sets, existence of effective preconditioners, and the ability simultaneously to remove numerical noise and further compress relevant data sets. Having tested our method as a stand-alone solver on two model problems, we merged it into IMPACT-T to obtain a fully functional serial PIC code. We present and discuss preliminary results of application of the new code to the modeling of the Fermilab/NICADD and AES/JLab photoinjectors.

  7. RADIATION PRESSURE DETECTION AND DENSITY ESTIMATE FOR 2011 MD

    SciTech Connect

    Micheli, Marco; Tholen, David J.; Elliott, Garrett T. E-mail: tholen@ifa.hawaii.edu

    2014-06-10

    We present our astrometric observations of the small near-Earth object 2011 MD (H ∼ 28.0), obtained after its very close fly-by to Earth in 2011 June. Our set of observations extends the observational arc to 73 days, and, together with the published astrometry obtained around the Earth fly-by, allows a direct detection of the effect of radiation pressure on the object, with a confidence of 5σ. The detection can be used to put constraints on the density of the object, pointing to either an unexpectedly low value of ρ = (640 ± 330) kg m⁻³ (68% confidence interval) if we assume a typical probability distribution for the unknown albedo, or to an unusually high reflectivity of its surface. This result may have important implications both in terms of impact hazard from small objects and in light of a possible retrieval of this target.

  8. Neuromagnetic correlates of developmental changes in endogenous high-frequency brain oscillations in children: a wavelet-based beamformer study.

    PubMed

    Xiang, Jing; Liu, Yang; Wang, Yingying; Kotecha, Rupesh; Kirtman, Elijah G; Chen, Yangmei; Huo, Xiaolin; Fujiwara, Hisako; Hemasilpin, Nat; DeGrauw, Ton; Rose, Douglas

    2009-06-01

    Recent studies have found that the brain generates very fast oscillations. The objective of the present study was to investigate the spectral, spatial and coherent features of high-frequency brain oscillations in the developing brain. Sixty healthy children and 20 healthy adults were studied using a 275-channel magnetoencephalography (MEG) system. MEG data were digitized at 12,000 Hz. The frequency characteristics of neuromagnetic signals in 0.5-2000 Hz were quantitatively determined with Morlet wavelet transform. The magnetic sources were volumetrically estimated with wavelet-based beamformer at 2.5 mm resolution. The neural networks of endogenous brain oscillations were analyzed with coherent imaging. Neuromagnetic activities in 8-12 Hz and 800-900 Hz were found to be the most reliable frequency bands in healthy children. The neuromagnetic signals were localized in the occipital, temporal and frontal cortices. The activities in the occipital and temporal cortices were strongly correlated in 8-12 Hz but not in 800-900 Hz. In comparison to adults, children had brain oscillations in intermingled frequency bands. Developmental changes in children were identified for both low- and high-frequency brain activities. The results of the present study suggest that the development of the brain is associated with spatial and coherent changes of endogenous brain activities in both low- and high-frequency ranges. Analysis of high-frequency neuromagnetic oscillation may provide novel insights into cerebral mechanisms of brain function. The noninvasive measurement of neuromagnetic brain oscillations in the developing brain may open a new window for analysis of brain function. PMID:19362072

  9. Wavelet-based functional linear mixed models: an application to measurement error–corrected distributed lag models

    PubMed Central

    Malloy, Elizabeth J.; Morris, Jeffrey S.; Adar, Sara D.; Suh, Helen; Gold, Diane R.; Coull, Brent A.

    2010-01-01

    Frequently, exposure data are measured over time on a grid of discrete values that collectively define a functional observation. In many applications, researchers are interested in using these measurements as covariates to predict a scalar response in a regression setting, with interest focusing on the most biologically relevant time window of exposure. One example is in panel studies of the health effects of particulate matter (PM), where particle levels are measured over time. In such studies, there are many more values of the functional data than observations in the data set so that regularization of the corresponding functional regression coefficient is necessary for estimation. Additional issues in this setting are the possibility of exposure measurement error and the need to incorporate additional potential confounders, such as meteorological or co-pollutant measures, that themselves may have effects that vary over time. To accommodate all these features, we develop wavelet-based linear mixed distributed lag models that incorporate repeated measures of functional data as covariates into a linear mixed model. A Bayesian approach to model fitting uses wavelet shrinkage to regularize functional coefficients. We show that, as long as the exposure error induces fine-scale variability in the functional exposure profile and the distributed lag function representing the exposure effect varies smoothly in time, the model corrects for the exposure measurement error without further adjustment. Both these conditions are likely to hold in the environmental applications we consider. We examine properties of the method using simulations and apply the method to data from a study examining the association between PM, measured as hourly averages for 1–7 days, and markers of acute systemic inflammation. We use the method to fully control for the effects of confounding by other time-varying predictors, such as temperature and co-pollutants. PMID:20156988

  10. Estimation of a k-monotone density: characterizations, consistency and minimax lower bounds

    PubMed Central

    Balabdaoui, Fadoua; Wellner, Jon A.

    2010-01-01

    The classes of monotone or convex (and necessarily monotone) densities on ℝ+ can be viewed as special cases of the classes of k-monotone densities on ℝ+. These classes bridge the gap between the classes of monotone (1-monotone) and convex decreasing (2-monotone) densities for which asymptotic results are known, and the class of completely monotone (∞-monotone) densities on ℝ+. In this paper we consider non-parametric maximum likelihood and least squares estimators of a k-monotone density g0. We prove existence of the estimators and give characterizations. We also establish consistency properties, and show that the estimators are splines of degree k − 1 with simple knots. We further provide asymptotic minimax risk lower bounds for estimating the derivatives g0^(j)(x0), j = 0, …, k − 1, at a fixed point x0 under the assumption that (−1)^k g0^(k)(x0) > 0. PMID:20436949

  11. A comparison of 2 techniques for estimating deer density

    USGS Publications Warehouse

    Robbins, C.S.

    1977-01-01

    We applied mark-resight and area-conversion methods to estimate deer abundance at a 2,862-ha area in and surrounding the Gettysburg National Military Park and Eisenhower National Historic Site during 1987-1991. One observer in each of 11 compartments counted marked and unmarked deer during 65-75 minutes at dusk during 3 counts in each of April and November. Use of radio-collars and vinyl collars provided a complete inventory of marked deer in the population prior to the counts. We sighted 54% of the marked deer during April 1987 and 1988, and 43% of the marked deer during November 1987 and 1988. Mean number of deer counted increased from 427 in April 1987 to 582 in April 1991, and increased from 467 in November 1987 to 662 in November 1990. Herd size during April, based on the mark-resight method, increased from approximately 700-1,400 from 1987-1991, whereas the estimates for November indicated an increase from 983 for 1987 to 1,592 for 1990. Given the large proportion of open area and the extensive road system throughout the study area, we concluded that the sighting probability for marked and unmarked deer was fairly similar. We believe that the mark-resight method was better suited to our study than the area-conversion method because deer were not evenly distributed between areas suitable and unsuitable for sighting within open and forested areas. The assumption of equal distribution is required by the area-conversion method. Deer marked for the mark-resight method also helped reduce double counting during the dusk surveys.

  12. An Efficient Acoustic Density Estimation Method with Human Detectors Applied to Gibbons in Cambodia.

    PubMed

    Kidney, Darren; Rawson, Benjamin M; Borchers, David L; Stevenson, Ben C; Marques, Tiago A; Thomas, Len

    2016-01-01

    Some animal species are hard to see but easy to hear. Standard visual methods for estimating population density for such species are often ineffective or inefficient, but methods based on passive acoustics show more promise. We develop spatially explicit capture-recapture (SECR) methods for territorial vocalising species, in which humans act as an acoustic detector array. We use SECR and estimated bearing data from a single-occasion acoustic survey of a gibbon population in northeastern Cambodia to estimate the density of calling groups. The properties of the estimator are assessed using a simulation study, in which a variety of survey designs are also investigated. We then present a new form of the SECR likelihood for multi-occasion data which accounts for the stochastic availability of animals. In the context of gibbon surveys this allows model-based estimation of the proportion of groups that produce territorial vocalisations on a given day, thereby enabling the density of groups, instead of the density of calling groups, to be estimated. We illustrate the performance of this new estimator by simulation. We show that it is possible to estimate density reliably from human acoustic detections of visually cryptic species using SECR methods. For gibbon surveys we also show that incorporating observers' estimates of bearings to detected groups substantially improves estimator performance. Using the new form of the SECR likelihood we demonstrate that estimates of availability, in addition to population density and detection function parameters, can be obtained from multi-occasion data, and that the detection function parameters are not confounded with the availability parameter. This acoustic SECR method provides a means of obtaining reliable density estimates for territorial vocalising species. It is also efficient in terms of data requirements since it only requires routine survey data. We anticipate that the low-tech field requirements will make this method

  14. Impact of Building Heights on 3d Urban Density Estimation from Spaceborne Stereo Imagery

    NASA Astrophysics Data System (ADS)

    Peng, Feifei; Gong, Jianya; Wang, Le; Wu, Huayi; Yang, Jiansi

    2016-06-01

    In urban planning and design applications, visualization of built-up areas in three dimensions (3D) is critical for understanding building density, but the accurate building heights required for 3D density calculation are not always available. To solve this problem, spaceborne stereo imagery is often used to estimate building heights; however, estimated building heights might include errors. These errors vary between local areas within a study area and are related to the heights of the buildings themselves, distorting 3D density estimation. The impact of building height accuracy on 3D density estimation must therefore be determined across and within a study area. In our research, accurate planar information from city authorities is used during 3D density estimation as reference data, to avoid the errors inherent to planar information extracted from remotely sensed imagery. Our experimental results show that underestimation of building heights is correlated with underestimation of the Floor Area Ratio (FAR). In local areas, experimental results show that land use blocks with low FAR values often have small errors due to small building height errors for low buildings in the blocks, and blocks with high FAR values often have large errors due to large building height errors for high buildings in the blocks. Our study reveals that the accuracy of 3D density estimated from spaceborne stereo imagery is correlated with the heights of buildings in a scene; therefore, building heights must be considered when spaceborne stereo imagery is used to estimate 3D density, to improve precision.
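
    The core quantity, the Floor Area Ratio, is a simple arithmetic check. The sketch below computes FAR for one hypothetical block from building footprints and two sets of heights, showing how underestimated heights translate directly into an underestimated FAR; every number is made up for illustration.

    ```python
    # Illustrative Floor Area Ratio (FAR) calculation for one land-use block;
    # every number is hypothetical.
    FLOOR_HEIGHT = 3.0        # assumed storey height (m)
    BLOCK_AREA = 10_000.0     # block area (m^2)

    # (footprint area m^2, reference height m, stereo-estimated height m)
    buildings = [(400.0, 45.0, 36.0), (250.0, 21.0, 18.0), (600.0, 9.0, 9.0)]

    def far(height_index):
        """Gross floor area (footprint x estimated floor count) over block area."""
        gross_floor_area = sum(b[0] * round(b[height_index] / FLOOR_HEIGHT) for b in buildings)
        return gross_floor_area / BLOCK_AREA

    print("FAR from reference heights :", far(1))   # 0.955
    print("FAR from estimated heights :", far(2))   # 0.81  (height underestimation lowers FAR)
    ```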

  15. Novel and simple non-parametric methods of estimating the joint and marginal densities

    NASA Astrophysics Data System (ADS)

    Alghalith, Moawia

    2016-07-01

    We introduce very simple non-parametric methods that overcome key limitations of the existing literature on both the joint and marginal density estimation. In doing so, we do not assume any form of the marginal distribution or joint distribution a priori. Furthermore, our method circumvents the bandwidth selection problems. We compare our method to the kernel density method.

  16. In-Shell Bulk Density as an Estimator of Farmers Stock Grade Factors

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The objective of this research was to determine whether or not bulk density can be used to accurately estimate farmer stock grade factors such as total sound mature kernels and other kernels. Physical properties including bulk density, pod size and kernel size distributions are measured as part of t...

  17. Item Response Theory with Estimation of the Latent Population Distribution Using Spline-Based Densities

    ERIC Educational Resources Information Center

    Woods, Carol M.; Thissen, David

    2006-01-01

    The purpose of this paper is to introduce a new method for fitting item response theory models with the latent population distribution estimated from the data using splines. A spline-based density estimation system provides a flexible alternative to existing procedures that use a normal distribution, or a different functional form, for the…

  18. Item Response Theory with Estimation of the Latent Density Using Davidian Curves

    ERIC Educational Resources Information Center

    Woods, Carol M.; Lin, Nan

    2009-01-01

    Davidian-curve item response theory (DC-IRT) is introduced, evaluated with simulations, and illustrated using data from the Schedule for Nonadaptive and Adaptive Personality Entitlement scale. DC-IRT is a method for fitting unidimensional IRT models with maximum marginal likelihood estimation, in which the latent density is estimated,…

  19. Body Density Estimates from Upper-Body Skinfold Thicknesses Compared to Air-Displacement Plethysmography

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Technical Summary Objectives: Determine the effect of body mass index (BMI) on the accuracy of body density (Db) estimated with skinfold thickness (SFT) measurements compared to air displacement plethysmography (ADP) in adults. Subjects/Methods: We estimated Db with SFT and ADP in 131 healthy men an...

  20. Sensitivity of fish density estimates to standard analytical procedures applied to Great Lakes hydroacoustic data

    USGS Publications Warehouse

    Kocovsky, Patrick M.; Rudstam, Lars G.; Yule, Daniel L.; Warner, David M.; Schaner, Ted; Pientka, Bernie; Deller, John W.; Waterfield, Holly A.; Witzel, Larry D.; Sullivan, Patrick J.

    2013-01-01

    Standardized methods of data collection and analysis ensure quality and facilitate comparisons among systems. We evaluated the importance of three recommendations from the Standard Operating Procedure for hydroacoustics in the Laurentian Great Lakes (GLSOP) on density estimates of target species: noise subtraction; setting volume backscattering strength (Sv) thresholds from user-defined minimum target strength (TS) of interest (TS-based Sv threshold); and calculations of an index for multiple targets (Nv index) to identify and remove biased TS values. Eliminating noise had the predictable effect of decreasing density estimates in most lakes. Using the TS-based Sv threshold decreased fish densities in the middle and lower layers in the deepest lakes with abundant invertebrates (e.g., Mysis diluviana). Correcting for biased in situ TS increased measured density up to 86% in the shallower lakes, which had the highest fish densities. The current recommendations by the GLSOP significantly influence acoustic density estimates, but the degree of importance is lake dependent. Applying GLSOP recommendations, whether in the Laurentian Great Lakes or elsewhere, will improve our ability to compare results among lakes. We recommend further development of standards, including minimum TS and analytical cell size, for reducing the effect of biased in situ TS on density estimates.

  1. Statistical Analysis of Photopyroelectric Signals using Histogram and Kernel Density Estimation for differentiation of Maize Seeds

    NASA Astrophysics Data System (ADS)

    Rojas-Lima, J. E.; Domínguez-Pacheco, A.; Hernández-Aguilar, C.; Cruz-Orea, A.

    2016-09-01

    Considering the necessity of photothermal alternative approaches for characterizing nonhomogeneous materials like maize seeds, the objective of this research work was to analyze statistically the amplitude variations of photopyroelectric signals, by means of nonparametric techniques such as the histogram and the kernel density estimator, and the probability density function of the amplitude variations of two genotypes of maize seeds with different pigmentations and structural components: crystalline and floury. To determine if the probability density function had a known parametric form, the histogram was determined, which did not present a known parametric form, so the kernel density estimator using the Gaussian kernel, with an efficiency of 95% in density estimation, was used to obtain the probability density function. The results obtained indicated that maize seeds could be differentiated in terms of the statistical values for floury and crystalline seeds such as the mean (93.11, 159.21), variance (1.64 × 10³, 1.48 × 10³), and standard deviation (40.54, 38.47) obtained from the amplitude variations of photopyroelectric signals in the case of the histogram approach. For the case of the kernel density estimator, seeds can be differentiated in terms of the kernel bandwidth or smoothing constant h of 9.85 and 6.09 for floury and crystalline seeds, respectively.

  2. Sea ice density estimation in the Bohai Sea using the hyperspectral remote sensing technology

    NASA Astrophysics Data System (ADS)

    Liu, Chengyu; Shao, Honglan; Xie, Feng; Wang, Jianyu

    2014-11-01

    Sea ice density is one of the significant physical properties of sea ice and one of the input parameters in the estimation of engineering mechanical strength and aerodynamic drag coefficients; it is also an important indicator of ice age. Sea ice in the Bohai Sea is a solid, liquid and gas-phase mixture composed of pure ice, brine pockets and bubbles, the density of which is mainly affected by the amount of brine pockets and bubbles: the more brine pockets it contains, the greater the sea ice density; the more bubbles it contains, the smaller the sea ice density. The reflectance spectrum in 350~2500 nm and the density of sea ice of different thicknesses and ages were measured in the Liaodong Bay of the Bohai Sea during the glacial maximum in the winter of 2012-2013. According to the measured sea ice density and reflectance spectrum, the characteristic bands that reflect the sea ice density variation were found, and the sea ice density spectrum index (SIDSI) of the sea ice in the Bohai Sea was constructed. An inversion model of sea ice density in the Bohai Sea, which refers to the layer from the surface to the depth of penetration by the light, was then proposed. The sea ice density in the Bohai Sea was estimated using the proposed model from a Hyperion image, which is a hyperspectral image. The results show that the error of the sea ice density inversion model is about 0.0004 g·cm⁻³. The sea ice density can thus be estimated from hyperspectral remote sensing images, which provides data support for related marine science research and applications.

  3. Investigation of Aerosol Surface Area Estimation from Number and Mass Concentration Measurements: Particle Density Effect

    PubMed Central

    Ku, Bon Ki; Evans, Douglas E.

    2015-01-01

    For nanoparticles with nonspherical morphologies, e.g., open agglomerates or fibrous particles, it is expected that the actual density of agglomerates may be significantly different from the bulk material density. It is further expected that using the material density may upset the relationship between surface area and mass when a method for estimating aerosol surface area from number and mass concentrations (referred to as “Maynard’s estimation method”) is used. Therefore, it is necessary to quantitatively investigate how much the Maynard’s estimation method depends on particle morphology and density. In this study, aerosol surface area estimated from number and mass concentration measurements was evaluated and compared with values from two reference methods: a method proposed by Lall and Friedlander for agglomerates and a mobility based method for compact nonspherical particles using well-defined polydisperse aerosols with known particle densities. Polydisperse silver aerosol particles were generated by an aerosol generation facility. Generated aerosols had a range of morphologies, count median diameters (CMD) between 25 and 50 nm, and geometric standard deviations (GSD) between 1.5 and 1.8. The surface area estimates from number and mass concentration measurements correlated well with the two reference values when gravimetric mass was used. The aerosol surface area estimates from the Maynard’s estimation method were comparable to the reference method for all particle morphologies within the surface area ratios of 3.31 and 0.19 for assumed GSDs 1.5 and 1.8, respectively, when the bulk material density of silver was used. The difference between the Maynard’s estimation method and surface area measured by the reference method for fractal-like agglomerates decreased from 79% to 23% when the measured effective particle density was used, while the difference for nearly spherical particles decreased from 30% to 24%. The results indicate that the use of
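
    The estimation method referred to above converts number and mass concentrations into a surface area estimate by assuming a lognormal size distribution with a fixed GSD. The hedged sketch below implements such a conversion for spherical particles using the standard Hatch-Choate relations; the example numbers and the use of the bulk silver density are illustrative assumptions, not the study's measurements.

    ```python
    import numpy as np

    def surface_area_estimate(number_conc, mass_conc, gsd, density):
        """Aerosol surface area concentration from number and mass concentrations,
        assuming a lognormal size distribution of spherical particles with known GSD
        (Hatch-Choate relations).

        number_conc : particles per m^3
        mass_conc   : kg per m^3
        gsd         : geometric standard deviation (dimensionless)
        density     : particle density, kg per m^3
        Returns surface area concentration in m^2 per m^3.
        """
        ln2s = np.log(gsd) ** 2
        # Mass:    M = N * (pi/6) * rho * CMD^3 * exp(4.5 * ln^2 GSD)
        cmd = (6.0 * mass_conc / (np.pi * density * number_conc * np.exp(4.5 * ln2s))) ** (1.0 / 3.0)
        # Surface: S = N * pi * CMD^2 * exp(2 * ln^2 GSD)
        return number_conc * np.pi * cmd**2 * np.exp(2.0 * ln2s)

    # Example: 5e10 particles/m^3, 10 ug/m^3, GSD 1.6, bulk silver density 10490 kg/m^3.
    print(surface_area_estimate(5e10, 10e-9, 1.6, 10490.0), "m^2/m^3")
    ```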

  4. A wavelet-based approach to detecting liveness in fingerprint scanners

    NASA Astrophysics Data System (ADS)

    Abhyankar, Aditya S.; Schuckers, Stephanie C.

    2004-08-01

    In this work, a method for fingerprint vitality authentication is introduced to reduce the vulnerability of fingerprint identification systems to spoofing. The method aims at detecting 'liveness' in fingerprint scanners by using the physiological phenomenon of perspiration. A wavelet-based approach is used that concentrates on the changing coefficients, exploiting the zoom-in property of wavelets. Multiresolution analysis and wavelet packet analysis are used to extract information from the low-frequency and high-frequency content of the images, respectively. A Daubechies wavelet is designed and implemented to perform the wavelet analysis. A threshold is applied to the first difference of the information in all the sub-bands. The energy content of the changing coefficients is used as a quantified measure to perform the desired classification, as these coefficients reflect a perspiration pattern. A data set of approximately 30 live, 30 spoof, and 14 cadaver fingerprint images was divided, with the first half used as training data and the other half as testing data. The proposed algorithm was applied to the training data set and was able to completely separate 'live' fingers from 'not live' fingers, thus providing a method for enhanced security and improved spoof protection.
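
    The following is a minimal sketch of the idea described above: wavelet-packet sub-bands are computed for two captures of the same finger taken a few seconds apart, the first difference of the coefficients is thresholded, and the energy of the surviving ("changing") coefficients is used as the liveness feature. The wavelet family, decomposition level, and threshold are illustrative assumptions, not the authors' settings.

```python
import numpy as np
import pywt

def changing_coefficient_energy(img_t0, img_t1, wavelet="db4", level=2, thresh=5.0):
    """Energy of wavelet-packet coefficients that change between two captures."""
    wp0 = pywt.WaveletPacket2D(img_t0.astype(float), wavelet, maxlevel=level)
    wp1 = pywt.WaveletPacket2D(img_t1.astype(float), wavelet, maxlevel=level)
    energy = 0.0
    for node in wp0.get_level(level):                        # terminal sub-bands
        diff = wp1[node.path].data - wp0[node.path].data     # first difference in time
        diff[np.abs(diff) < thresh] = 0.0                    # keep only "changing" coefficients
        energy += np.sum(diff ** 2)
    return energy

# Live fingers show a spreading perspiration pattern, so their changing-coefficient
# energy is expected to exceed that of spoof or cadaver captures.
```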

  5. Performance evaluation of wavelet-based face verification on a PDA recorded database

    NASA Astrophysics Data System (ADS)

    Sellahewa, Harin; Jassim, Sabah A.

    2006-05-01

    The rise of international terrorism and the rapid increase in fraud and identity theft have added urgency to the task of developing biometric-based person identification as a reliable alternative to conventional authentication methods. Human identification based on face images is a tough challenge in comparison with identification based on fingerprints or iris recognition. Yet, due to its unobtrusive nature, face recognition is the preferred method of identification for security-related applications. The success of such systems will depend on the support of massive infrastructures. Current mobile communication devices (3G smart phones) and PDAs are equipped with a camera, which can capture both still images and streaming video clips, and a touch-sensitive display panel. Besides convenience, such devices provide an adequate secure infrastructure for sensitive and financial transactions by protecting against fraud and repudiation while ensuring accountability. Biometric authentication systems for mobile devices would have obvious advantages in conflict scenarios, when communication from beyond enemy lines is essential to save soldier and civilian lives, and in areas of conflict or disaster where fixed infrastructure is unavailable or destroyed. In this paper, we present a wavelet-based face verification scheme that has been specifically designed and implemented on a currently available PDA. We report on its performance on the benchmark audio-visual BANCA database and on a newly developed PDA-recorded audio-visual database that includes indoor and outdoor recordings.

  6. Finding the multipath propagation of multivariable crude oil prices using a wavelet-based network approach

    NASA Astrophysics Data System (ADS)

    Jia, Xiaoliang; An, Haizhong; Sun, Xiaoqi; Huang, Xuan; Gao, Xiangyun

    2016-04-01

    The globalization and regionalization of crude oil trade inevitably give rise to differences in crude oil prices. Understanding the pattern of mutual propagation among crude oil prices is essential for analyzing the development of the global oil trade. Previous research has focused mainly on the fuzzy long- or short-term one-to-one propagation of bivariate oil prices, generally ignoring the various patterns of periodic multivariate propagation. This study presents a wavelet-based network approach to help uncover the multipath propagation of multivariable crude oil prices in a joint time-frequency framework. The weekly oil spot prices of the OPEC member states from June 1999 to March 2011 are adopted as the sample data. First, we used wavelet analysis to obtain subseries at an optimal decomposition scale, describing the periodic features of the original oil price time series. Second, a complex network model was constructed, based on an optimal threshold selection, to describe the structural features of the multivariable oil prices. Third, Bayesian network analysis (BNA) was conducted to find probabilistic causal relationships based on the periodic structural features and to describe the various patterns of periodic multivariable propagation. Finally, the significance of the leading and intermediary oil prices is discussed. These findings are beneficial for the implementation of periodically targeted pricing policies and investment strategies.
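
    A hedged sketch of the first two steps described above: each price series is decomposed with a discrete wavelet transform to isolate a periodic subseries, and a threshold on the correlation between subseries defines the edges of the network. The wavelet family, decomposition level, and threshold are illustrative choices, not the paper's.

```python
import numpy as np
import pywt

def detail_subseries(prices, wavelet="db4", level=3):
    """Reconstruct the coarsest detail subseries of one price series."""
    coeffs = pywt.wavedec(prices, wavelet, level=level)
    keep = [np.zeros_like(c) for c in coeffs]
    keep[1] = coeffs[1]                          # keep only the level-3 detail band
    return pywt.waverec(keep, wavelet)[: len(prices)]

def threshold_network(price_matrix, threshold=0.6):
    """Adjacency matrix linking markets whose subseries correlate above the threshold."""
    subs = np.array([detail_subseries(p) for p in price_matrix])
    corr = np.corrcoef(subs)
    adj = (np.abs(corr) >= threshold).astype(int)
    np.fill_diagonal(adj, 0)
    return adj

# Example with synthetic weekly prices for 5 markets over ~12 years.
prices = np.cumsum(np.random.randn(5, 620), axis=1) + 60.0
print(threshold_network(prices))
```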

  7. A wavelet-based image quality metric for the assessment of 3D synthesized views

    NASA Astrophysics Data System (ADS)

    Bosc, Emilie; Battisti, Federica; Carli, Marco; Le Callet, Patrick

    2013-03-01

    In this paper we present a novel image quality assessment technique for evaluating virtual synthesized views in the context of multi-view video. In particular, Free Viewpoint Videos are generated from uncompressed color views and their compressed associated depth maps by means of the View Synthesis Reference Software provided by MPEG. Prior to the synthesis step, the original depth maps are encoded with different coding algorithms, thus leading to the creation of additional artifacts in the synthesized views. The core of the proposed wavelet-based metric lies in the registration procedure performed to align the synthesized view with the original one, and in a skin detection step, applied because the same distortion is more annoying when visible on human subjects than on other parts of the scene. The effectiveness of the metric is evaluated by analyzing the correlation of the scores obtained with the proposed metric with Mean Opinion Scores collected by means of subjective tests. The achieved results are also compared against those of well-known objective quality metrics. The experimental results confirm the effectiveness of the proposed metric.

  8. Online Epileptic Seizure Prediction Using Wavelet-Based Bi-Phase Correlation of Electrical Signals Tomography.

    PubMed

    Vahabi, Zahra; Amirfattahi, Rasoul; Shayegh, Farzaneh; Ghassemi, Fahimeh

    2015-09-01

    Considerable efforts have been made to predict seizures; among the proposed methods, those that quantify synchronization between brain areas are the most important. However, to date, a practically acceptable result has not been reported. In this paper, we use a synchronization measurement method derived from the ability of the bi-spectrum to capture the nonlinear properties of a system. In this method, first, the temporal variation of the bi-spectrum of different electrocorticography (ECoG) channels is obtained via an extended wavelet-based time-frequency analysis method; then, to compare different channels, the bi-phase correlation measure is introduced. Because the temporal variation of the amount of nonlinear coupling between brain regions, which had not previously been considered, is taken into account, the results are more reliable than conventional phase-synchronization measures. It is shown that, for 21 patients of the FSPEEG database, bi-phase correlation can discriminate the pre-ictal and ictal states with very low false positive rates (FPRs) (average: 0.078/h) and high sensitivity (100%). However, the proposed seizure predictor still cannot significantly outperform a random predictor for all patients. PMID:26126613

  9. Wavelet-Based ECG Steganography for Protecting Patient Confidential Information in Point-of-Care Systems.

    PubMed

    Ibaida, Ayman; Khalil, Ibrahim

    2013-12-01

    With a growing aging population, a significant portion of which suffers from cardiac diseases, it is conceivable that remote ECG patient monitoring systems will be widely used as point-of-care (PoC) applications in hospitals around the world. Huge amounts of ECG data collected by body sensor networks from remote patients at home will therefore be transmitted, along with other physiological readings such as blood pressure, temperature and glucose level, and diagnosed by those remote patient monitoring systems. It is critically important that patient confidentiality is protected while data are transmitted over the public network as well as when they are stored in hospital servers used by remote monitoring systems. In this paper, a wavelet-based steganography technique is introduced which combines encryption and scrambling techniques to protect patient confidential data. The proposed method allows the ECG signal to hide the corresponding patient's confidential data and other physiological information, thus binding the ECG to the rest of the data. To evaluate the effect of the proposed technique on the ECG signal, two distortion metrics have been used: the percentage residual difference and the wavelet-weighted PRD. It is found that the proposed technique provides high security for patient data with low (less than 1%) distortion, and the ECG remains diagnosable both after watermarking (i.e., hiding the patient's confidential data) and after the watermarks (i.e., the hidden data) are removed from the watermarked signal. PMID:23708767
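
    The sketch below illustrates the basic mechanism of hiding bits in ECG wavelet coefficients, assuming a simple odd/even (parity) quantization rule on the finest detail sub-band; the paper's actual scheme additionally encrypts and scrambles the payload, which is omitted here, and the wavelet, level, and quantization step are illustrative.

```python
import numpy as np
import pywt

def embed_bits(ecg, bits, wavelet="db4", level=5, step=0.02):
    """Hide a bit string via the parity of quantized coefficients in the finest detail band.
    Assumes len(ecg) is divisible by 2**level so the periodized DWT is exactly invertible."""
    coeffs = pywt.wavedec(ecg, wavelet, level=level, mode="periodization")
    d = coeffs[-1].copy()
    for i, b in enumerate(bits):
        q = int(np.round(d[i] / step))
        if (q & 1) != b:                 # force the parity of the quantized value
            q += 1
        d[i] = q * step
    coeffs[-1] = d
    return pywt.waverec(coeffs, wavelet, mode="periodization")

def extract_bits(stego, n_bits, wavelet="db4", level=5, step=0.02):
    d = pywt.wavedec(stego, wavelet, level=level, mode="periodization")[-1]
    return [int(np.round(d[i] / step)) & 1 for i in range(n_bits)]

stego = embed_bits(np.random.randn(1024), [1, 0, 1, 1])
print(extract_bits(stego, 4))            # -> [1, 0, 1, 1]
```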

  10. Application of wavelet-based neural network on DNA microarray data.

    PubMed

    Lee, Jack; Zee, Benny

    2008-01-01

    The advantage of using DNA microarray data when investigating human cancer gene expression is its ability to generate enormous amounts of information from a single assay, thereby speeding up the scientific evaluation process. The large number of variables in gene expression data, coupled with a comparatively small number of samples, creates new challenges for scientists and statisticians. In particular, the problems include an enormous degree of collinearity among gene expressions, likely violations of model assumptions, and a high level of noise with potential outliers. To deal with these problems, we propose a block wavelet shrinkage principal component analysis (BWSPCA) method to optimize the information retained during the noise reduction process. This paper first uses the National Cancer Institute database (NCI60) as an illustration and shows a significant improvement in dimension reduction. Secondly, we combine BWSPCA with an artificial neural network-based gene minimization strategy to establish a Block Wavelet-based Neural Network (BWNN) model for robust and accurate cancer classification. Our extensive experiments on six public cancer datasets have shown that the BWNN method for tumor classification performed well, especially on some difficult instances with large-class (more than two) expression data. The proposed method is extremely useful for data denoising and is competitive with other methods such as BagBoost, RandomForest (RanFor), Support Vector Machines (SVM), K-Nearest Neighbor (KNN) and Artificial Neural Network (ANN). PMID:19255638

  11. Radiation dose reduction in digital radiography using wavelet-based image processing methods

    NASA Astrophysics Data System (ADS)

    Watanabe, Haruyuki; Tsai, Du-Yih; Lee, Yongbum; Matsuyama, Eri; Kojima, Katsuyuki

    2011-03-01

    In this paper, we investigate the effect of wavelet-transform-based image processing on radiation dose reduction in computed radiography (CR) by measuring various physical characteristics of the wavelet-transformed images. Moreover, we propose a wavelet-based method that offers the possibility of reducing radiation dose while maintaining a clinically acceptable image quality. The proposed method integrates the advantages of a previously proposed technique, a sigmoid-type transfer curve for wavelet coefficient weighting adjustment, with a wavelet soft-thresholding technique. The former improves the contrast and spatial resolution of CR images; the latter reduces image noise. In the investigation of physical characteristics, the modulation transfer function, noise power spectrum, and contrast-to-noise ratio of CR images processed by the proposed method and by other methods were measured and compared. Furthermore, visual evaluation was performed using Scheffé's paired comparison method. Experimental results showed that the proposed method could improve overall image quality compared with the other methods. Our visual evaluation showed that an approximately 40% reduction in exposure dose might be achieved in hip joint radiography by using the proposed method.
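
    A hedged sketch of the two ingredients named above, combined in one pass over a 2-D wavelet decomposition: a sigmoid-type weighting of the detail coefficients (contrast and sharpness boost) followed by soft thresholding (noise suppression). The wavelet, gain, midpoint, width, and threshold values are illustrative assumptions, not the paper's settings.

```python
import numpy as np
import pywt

def sigmoid_weight(c, gain=1.5, mid=10.0, width=5.0):
    """Sigmoid-type transfer curve: coefficients well below `mid` are attenuated,
    coefficients well above it are amplified by up to `gain`."""
    return c * gain / (1.0 + np.exp(-(np.abs(c) - mid) / width))

def enhance(image, wavelet="db2", level=3, threshold=1.0):
    """Weight detail coefficients with the sigmoid curve, then soft-threshold them."""
    coeffs = pywt.wavedec2(image.astype(float), wavelet, level=level)
    out = [coeffs[0]]                    # approximation band is left untouched
    for detail in coeffs[1:]:
        out.append(tuple(pywt.threshold(sigmoid_weight(d), threshold, mode="soft")
                         for d in detail))
    return pywt.waverec2(out, wavelet)

enhanced = enhance(np.random.rand(128, 128) * 100.0)
print(enhanced.shape)
```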

  12. Wavelet Based Method for Congestive Heart Failure Recognition by Three Confirmation Functions

    PubMed Central

    Daqrouq, K.; Dobaie, A.

    2016-01-01

    An investigation of electrocardiogram (ECG) signals and arrhythmia characterization by wavelet energy is proposed. This study employs a wavelet-based feature extraction method for congestive heart failure (CHF), based on the percentage energy (PE) of the terminal wavelet packet transform (WPT) subsignals. In addition, an average framing percentage energy (AFE) technique, termed WAFE, is proposed. A new classification method is introduced using three confirmation functions. The confirmation methods are based on three concepts: percentage root mean square difference error (PRD), logarithmic difference signal ratio (LDSR), and correlation coefficient (CC). The proposed method was shown to be a potentially effective discriminator for recognizing this clinical syndrome. ECG signals taken from the MIT-BIH arrhythmia dataset and other databases are utilized to analyze different arrhythmias and normal ECGs. Several known methods were studied for comparison. The best recognition rate was obtained with WAFE, with an accuracy of 92.60%. The receiver operating characteristic curve, a common tool for evaluating diagnostic accuracy, was presented and indicated that the tests are reliable. The performance of the presented system was also investigated in an additive white Gaussian noise (AWGN) environment, where the recognition rate was 81.48% at 5 dB. PMID:26949412
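
    The following is a minimal sketch of the percentage-energy feature described above: the energy in each terminal wavelet-packet sub-band of an ECG segment, expressed as a share of the total energy. The wavelet family and decomposition depth are illustrative assumptions; framing the signal and averaging the per-frame vectors would give an AFE-style feature.

```python
import numpy as np
import pywt

def percentage_energy(ecg, wavelet="db4", level=4):
    """Percentage energy of the terminal wavelet-packet sub-bands of one ECG segment."""
    wp = pywt.WaveletPacket(ecg, wavelet, maxlevel=level)
    energies = np.array([np.sum(node.data ** 2)
                         for node in wp.get_level(level, order="freq")])
    return 100.0 * energies / energies.sum()

features = percentage_energy(np.random.randn(4096))
print(features.shape)        # (16,) terminal sub-bands at level 4
```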

  13. Image-based scene representation using wavelet-based interval morphing

    NASA Astrophysics Data System (ADS)

    Bao, Paul; Xu, Dan

    1999-07-01

    Scene appearance over a continuous range of viewpoints can be represented by a discrete set of images via image morphing. In this paper, we present a new robust image morphing scheme based on the 2D wavelet transform and interval field interpolation. Traditional mesh-based and field-based morphing algorithms, usually designed in the spatial image domain, suffer from very high time complexity and are therefore impractical for real-time virtual environment applications. Compared with traditional morphing methods, the proposed wavelet-based interval morphing scheme performs interval interpolation in both the frequency and spatial domains. First, the images of the scene can be significantly compressed in the frequency domain with little degradation in visual quality, so the complexity of the scene is significantly reduced. Second, since a feature point in the image may correspond to a neighborhood of a subband image in the wavelet domain, we define feature intervals for the wavelet-transformed images to allow accurate feature matching between the morphing images. Based on the feature intervals, we employ interval field interpolation to morph the images progressively in a coarse-to-fine process. Finally, we use a post-warping procedure to transform the interpolated views to their desired positions. A nice feature of the wavelet transform is its multiresolution representation, which enables progressive morphing of the scene.

  14. Selective error detection for error-resilient wavelet-based image coding.

    PubMed

    Karam, Lina J; Lam, Tuyet-Trang

    2007-12-01

    This paper introduces the concept of a similarity check function for error-resilient multimedia data transmission. The proposed similarity check function provides information about the effects of corrupted data on the quality of the reconstructed image. The degree of data corruption is measured by the similarity check function at the receiver, without explicit knowledge of the original source data. The design of a perceptual similarity check function is presented for wavelet-based coders such as the JPEG2000 standard, and it is used with a proposed "progressive similarity-based ARQ" (ProS-ARQ) scheme to significantly decrease the retransmission rate of corrupted data while maintaining very good visual quality of images transmitted over noisy channels. Simulation results with JPEG2000-coded images transmitted over the binary symmetric channel show that the proposed ProS-ARQ scheme significantly reduces the number of retransmissions compared with conventional ARQ-based schemes. The results also show that, for the same number of retransmitted data packets, the proposed ProS-ARQ scheme achieves significantly higher PSNR and better visual quality than the selective-repeat ARQ scheme. PMID:18092593

  15. Wavelet-based multifractal analysis of dynamic infrared thermograms to assist in early breast cancer diagnosis

    PubMed Central

    Gerasimova, Evgeniya; Audit, Benjamin; Roux, Stephane G.; Khalil, André; Gileva, Olga; Argoul, Françoise; Naimark, Oleg; Arneodo, Alain

    2014-01-01

    Breast cancer is the most common type of cancer among women and, despite recent advances in the medical field, there are still some inherent limitations in the currently used screening techniques. The radiological interpretation of screening X-ray mammograms often leads to over-diagnosis and, as a consequence, to unnecessary traumatic and painful biopsies. Here we propose a computer-aided multifractal analysis of dynamic infrared (IR) imaging as an efficient method for identifying women at risk of breast cancer. Using a wavelet-based multi-scale method to analyze the temporal fluctuations of breast skin temperature collected from a panel of patients with diagnosed breast cancer and from female volunteers with healthy breasts, we show that the multifractal complexity of temperature fluctuations observed in healthy breasts is lost in mammary glands with a malignant tumor. Beyond the potential clinical impact, these results open new perspectives in the investigation of physiological changes that may precede anatomical alterations in breast cancer development. PMID:24860510

  16. Incipient interturn fault diagnosis in induction machines using an analytic wavelet-based optimized Bayesian inference.

    PubMed

    Seshadrinath, Jeevanand; Singh, Bhim; Panigrahi, Bijaya Ketan

    2014-05-01

    Interturn fault diagnosis of induction machines has been discussed using various neural network-based techniques. The main challenges in such methods are the computational complexity due to the huge size of the network and the need to prune a large number of parameters. In this paper, a nearly shift-insensitive complex wavelet-based probabilistic neural network (PNN) model, which has only a single parameter to be optimized, is proposed for interturn fault detection. The algorithm consists of two parts and runs iteratively. In the first part, the PNN structure is determined by finding the optimum size of the network using an orthogonal least-squares regression algorithm, thereby reducing its size. In the second part, Bayesian classifier fusion is recommended as an effective solution for deciding the machine condition. The testing accuracy, sensitivity, and specificity values are highest for the product rule-based fusion scheme, obtained under load, supply, and frequency variations. The point of overfitting of the PNN is determined, which reduces the network size without compromising performance. Moreover, a comparative evaluation against a traditional discrete wavelet transform-based method is presented for performance evaluation and to put the obtained results in context. PMID:24808044

  17. A Wavelet-Based ECG Delineation Method: Adaptation to an Experimental Electrograms with Manifested Global Ischemia.

    PubMed

    Hejč, Jakub; Vítek, Martin; Ronzhina, Marina; Nováková, Marie; Kolářová, Jana

    2015-09-01

    We present a novel wavelet-based ECG delineation method with robust classification of the P wave and T wave. The work is aimed at adapting the method to long-term experimental electrograms (EGs) measured on isolated rabbit hearts and at evaluating the effect of global ischemia in experimental EGs on delineation performance. The algorithm was tested on a set of 263 rabbit EGs with established reference points and on human signals from the Common Standards for Quantitative Electrocardiography database (CSEDB). On the CSEDB, the standard deviation (SD) of the measured errors satisfies the given criteria at each point and the results are comparable to other published works. In rabbit signals, our QRS detector reached a sensitivity of 99.87% and a positive predictivity of 99.89%, despite the overlap of spectral components of the QRS complex, P wave and power line noise. The algorithm performs well in suppressing J-point elevation and reached low overall errors in both QRS onset (SD = 2.8 ms) and QRS offset (SD = 4.3 ms) delineation. The T wave offset is detected with acceptable error (SD = 12.9 ms) and a sensitivity of nearly 99%. The variance of the errors during global ischemia remains relatively stable; however, more failures in T wave and P wave detection occur. Due to differences in spectral and timing characteristics, the parameters of the rabbit-based algorithm have to be highly adaptable and set more precisely than for human ECG signals to reach acceptable performance. PMID:26577367

  18. Wavelet-based unsupervised learning method for electrocardiogram suppression in surface electromyograms.

    PubMed

    Niegowski, Maciej; Zivanovic, Miroslav

    2016-03-01

    We present a novel approach aimed at removing electrocardiogram (ECG) perturbation from single-channel surface electromyogram (EMG) recordings by means of unsupervised learning of wavelet-based intensity images. The general idea is to combine the suitability of certain wavelet decomposition bases which provide sparse electrocardiogram time-frequency representations, with the capacity of non-negative matrix factorization (NMF) for extracting patterns from images. In order to overcome convergence problems which often arise in NMF-related applications, we design a novel robust initialization strategy which ensures proper signal decomposition in a wide range of ECG contamination levels. Moreover, the method can be readily used because no a priori knowledge or parameter adjustment is needed. The proposed method was evaluated on real surface EMG signals against two state-of-the-art unsupervised learning algorithms and a singular spectrum analysis based method. The results, expressed in terms of high-to-low energy ratio, normalized median frequency, spectral power difference and normalized average rectified value, suggest that the proposed method enables better ECG-EMG separation quality than the reference methods. PMID:26774422

  19. Wavelet Based Method for Congestive Heart Failure Recognition by Three Confirmation Functions.

    PubMed

    Daqrouq, K; Dobaie, A

    2016-01-01

    An investigation of electrocardiogram (ECG) signals and arrhythmia characterization by wavelet energy is proposed. This study employs a wavelet-based feature extraction method for congestive heart failure (CHF), based on the percentage energy (PE) of the terminal wavelet packet transform (WPT) subsignals. In addition, an average framing percentage energy (AFE) technique, termed WAFE, is proposed. A new classification method is introduced using three confirmation functions. The confirmation methods are based on three concepts: percentage root mean square difference error (PRD), logarithmic difference signal ratio (LDSR), and correlation coefficient (CC). The proposed method was shown to be a potentially effective discriminator for recognizing this clinical syndrome. ECG signals taken from the MIT-BIH arrhythmia dataset and other databases are utilized to analyze different arrhythmias and normal ECGs. Several known methods were studied for comparison. The best recognition rate was obtained with WAFE, with an accuracy of 92.60%. The receiver operating characteristic curve, a common tool for evaluating diagnostic accuracy, was presented and indicated that the tests are reliable. The performance of the presented system was also investigated in an additive white Gaussian noise (AWGN) environment, where the recognition rate was 81.48% at 5 dB. PMID:26949412

  20. A new approach to pre-processing digital image for wavelet-based watermark

    NASA Astrophysics Data System (ADS)

    Agreste, Santa; Andaloro, Guido

    2008-11-01

    The growth of the Internet has increased the phenomenon of digital piracy of multimedia objects such as software, images, video, audio and text. It is therefore strategic to identify and develop methods and numerical algorithms that are stable and have low computational cost to address these problems. We describe a digital watermarking algorithm for color image protection and authenticity: robust, non-blind, and wavelet-based. The use of the discrete wavelet transform is motivated by its good time-frequency features and its good match with human visual system directives. These two combined elements are important for building an invisible and robust watermark. Moreover, our algorithm can work with any image, thanks to a pre-processing step that resizes the original image to a size suitable for the wavelet transform. The watermark signal is calculated in correlation with the image features and statistical properties. In the detection step, we apply a re-synchronization between the original and watermarked images according to the Neyman-Pearson statistical criterion. Experiments on a large set of different images show the method to be resistant to geometric, filtering, and StirMark attacks with a low false-alarm rate.

  1. Optimal sensor placement for time-domain identification using a wavelet-based genetic algorithm

    NASA Astrophysics Data System (ADS)

    Mahdavi, Seyed Hossein; Razak, Hashim Abdul

    2016-06-01

    This paper presents a wavelet-based genetic algorithm (GA) strategy for optimal sensor placement (OSP) effective for time-domain structural identification. Initially, the GA-based fitness evaluation is significantly improved by using adaptive wavelet functions. Then, a multi-species decimal GA coding system is modified to be suitable for an efficient search around local optima. In this regard, a local mutation operation is introduced in addition to regeneration and reintroduction operators. It is concluded that different characteristics of the applied force influence the features of structural responses, and therefore the accuracy of time-domain structural identification is directly affected. Thus, a reliable OSP strategy prior to time-domain identification is achieved by methods that minimize the distance between the simulated responses of the entire system and of the condensed system while considering force effects. Numerical and experimental verification demonstrates the considerably high computational performance of the proposed OSP strategy in terms of computational cost and identification accuracy. It is deduced that the robustness of the proposed OSP algorithm lies in precise and fast fitness evaluation at larger sampling rates, which results in optimal evaluation of the GA-based exploration and exploitation phases towards the global optimum solution.

  2. A real-time wavelet-based video decoder using SIMD technology

    NASA Astrophysics Data System (ADS)

    Klepko, Robert; Wang, Demin

    2008-02-01

    This paper presents a fast implementation of a wavelet-based video codec. The codec consists of motion-compensated temporal filtering (MCTF), a 2-D spatial wavelet transform, and SPIHT for wavelet coefficient coding. It offers compression efficiency that is competitive with H.264. The codec is implemented in software running on a general-purpose PC, using the C programming language and Streaming SIMD Extensions (SSE) intrinsics, without assembly language. This high-level software implementation allows the codec to be ported to other general-purpose computing platforms. Testing with a Pentium 4 HT at 3.6 GHz (running under Linux and using the GCC compiler, version 4) shows that the software decoder is able to decode 4CIF video in real time, over two times faster than software written only in C. This paper describes the structure of the codec, the fast algorithms chosen for the most computationally intensive elements in the codec, and the use of SIMD to implement these algorithms.

  3. Wavelet-Based Visible and Infrared Image Fusion: A Comparative Study

    PubMed Central

    Sappa, Angel D.; Carvajal, Juan A.; Aguilera, Cristhian A.; Oliveira, Miguel; Romero, Dennis; Vintimilla, Boris X.

    2016-01-01

    This paper evaluates different wavelet-based cross-spectral image fusion strategies adopted to merge visible and infrared images. The objective is to find the best setup independently of the evaluation metric used to measure the performance. Quantitative performance results are obtained with state-of-the-art approaches together with adaptations proposed in the current work. The options evaluated here result from the combination of different setups in the wavelet image decomposition stage with different fusion strategies for the final merging stage that generates the resulting representation. Most approaches evaluate results according to the application for which they are intended; sometimes a human observer is selected to judge the quality of the obtained results. In the current work, quantitative values are considered in order to find correlations between setups and the performance of the obtained results; these correlations can be used to define a criterion for selecting the best fusion strategy for a given pair of cross-spectral images. The whole procedure is evaluated with a large set of correctly registered visible and infrared image pairs, including both Near InfraRed (NIR) and Long Wave InfraRed (LWIR). PMID:27294938

  4. Wavelet-Based Visible and Infrared Image Fusion: A Comparative Study.

    PubMed

    Sappa, Angel D; Carvajal, Juan A; Aguilera, Cristhian A; Oliveira, Miguel; Romero, Dennis; Vintimilla, Boris X

    2016-01-01

    This paper evaluates different wavelet-based cross-spectral image fusion strategies adopted to merge visible and infrared images. The objective is to find the best setup independently of the evaluation metric used to measure the performance. Quantitative performance results are obtained with state-of-the-art approaches together with adaptations proposed in the current work. The options evaluated here result from the combination of different setups in the wavelet image decomposition stage with different fusion strategies for the final merging stage that generates the resulting representation. Most approaches evaluate results according to the application for which they are intended; sometimes a human observer is selected to judge the quality of the obtained results. In the current work, quantitative values are considered in order to find correlations between setups and the performance of the obtained results; these correlations can be used to define a criterion for selecting the best fusion strategy for a given pair of cross-spectral images. The whole procedure is evaluated with a large set of correctly registered visible and infrared image pairs, including both Near InfraRed (NIR) and Long Wave InfraRed (LWIR). PMID:27294938

  5. Spatially adaptive bases in wavelet-based coding of semi-regular meshes

    NASA Astrophysics Data System (ADS)

    Denis, Leon; Florea, Ruxandra; Munteanu, Adrian; Schelkens, Peter

    2010-05-01

    In this paper we present a wavelet-based coding approach for semi-regular meshes which spatially adapts the employed wavelet basis in the wavelet transformation of the mesh. The spatially adaptive nature of the transform requires additional information to be stored in the bit-stream in order to allow the reconstruction of the transformed mesh at the decoder side. In order to limit this overhead, the mesh is first segmented into regions of approximately equal size. For each spatial region, a predictor is selected in a rate-distortion-optimal manner using a Lagrangian rate-distortion optimization technique. When compared against the classical wavelet transform employing the butterfly subdivision filter, experiments reveal that the proposed spatially adaptive wavelet transform significantly decreases the energy of the wavelet coefficients in all subbands. Preliminary results also show that employing the proposed transform for the lowest-resolution subband systematically yields improved compression performance at low-to-medium bit-rates. For the Venus and Rabbit test models the compression improvements amount to 1.47 dB and 0.95 dB, respectively.

  6. The Analysis of Surface EMG Signals with the Wavelet-Based Correlation Dimension Method

    PubMed Central

    Zhang, Yanyan; Wang, Jue

    2014-01-01

    Many attempts have been made to improve prosthetic systems controlled by the classification of surface electromyographic (SEMG) signals, but the development of methodologies to extract effective features remains a primary challenge. Previous studies have demonstrated that SEMG signals have nonlinear characteristics. In this study, by combining nonlinear time series analysis with time-frequency domain methods, we propose a wavelet-based correlation dimension method to extract effective features of SEMG signals. The SEMG signals were first analyzed by the wavelet transform, and the correlation dimension was then calculated to obtain the features of the SEMG signals. These features were used as the input vectors of a Gustafson-Kessel clustering classifier to discriminate four types of forearm movements. Our results showed that there are four separate clusters corresponding to the different forearm movements at the third resolution level and that the resulting classification accuracy was 100% when two channels of SEMG signals were used. This indicates that the proposed approach can provide important insight into the nonlinear characteristics and the time-frequency domain features of SEMG signals and is suitable for classifying different types of forearm movements. Compared with other existing methods, the proposed method exhibited greater robustness and higher classification accuracy. PMID:24868240

  7. Segmentation of complementary DNA microarray images by wavelet-based Markov random field model.

    PubMed

    Athanasiadis, Emmanouil I; Cavouras, Dionisis A; Glotsos, Dimitris Th; Georgiadis, Pantelis V; Kalatzis, Ioannis K; Nikiforidis, George C

    2009-11-01

    A wavelet-based modification of the Markov random field (WMRF) model is proposed for segmenting complementary DNA (cDNA) microarray images. For evaluation purposes, five simulated and five real microarray images were used. The one-level stationary wavelet transform (SWT) of each microarray image was used to form two images: a denoised image, obtained with a hard-thresholding filter, and a magnitude image, formed from the amplitudes of the horizontal and vertical SWT components. Elements from these two images were suitably combined to form the WMRF model for segmenting spots from their background. The WMRF was compared against the conventional MRF and the fuzzy C-means (FCM) algorithms on simulated and real microarray images, and their performances were evaluated by means of the segmentation matching factor (SMF) and the coefficient of determination (r²). Additionally, the WMRF was compared against SPOT and SCANALYZE, with performance evaluated by the mean absolute error (MAE) and the coefficient of variation (CV). The WMRF performed more accurately than the MRF and FCM (SMF: 92.66, 92.15, and 89.22; r²: 0.92, 0.90, and 0.84, respectively) and achieved higher reproducibility than the MRF, SPOT, and SCANALYZE (MAE: 497, 1215, 1180, and 503; CV: 0.88, 1.15, 0.93, and 0.90, respectively). PMID:19783509

  8. Fourier-, Hilbert- and wavelet-based signal analysis: are they really different approaches?

    PubMed

    Bruns, Andreas

    2004-08-30

    Spectral signal analysis constitutes one of the most important and most commonly used analytical tools for the evaluation of neurophysiological signals. It is not only the spectral parameters per se (amplitude and phase) which are of interest; there is also a variety of measures derived from them, including important coupling measures like coherence or phase synchrony. After reviewing some of these measures in order to underline the widespread relevance of spectral analysis, this report compares the three classical spectral analysis approaches: the Fourier, Hilbert and wavelet transforms. Recently, there seems to be increasing acceptance of the notion that Hilbert- or wavelet-based analyses are in some way superior to Fourier-based analyses. The present article counters such views by demonstrating that the three techniques are in fact formally (i.e. mathematically) equivalent when using the class of wavelets that is typically applied in spectral analyses. Moreover, spectral amplitude serves as an example to show that Fourier, Hilbert and wavelet analysis also yield equivalent results in practical applications to neuronal signals. PMID:15262077
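
    As a small numerical illustration of the equivalence claim above (not the author's own demonstration), the amplitude envelope from the Hilbert transform of a band-pass filtered signal can be compared with the magnitude of a complex Morlet wavelet convolution at the matching frequency; the filter order, bandwidth, and number of wavelet cycles below are illustrative choices.

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

fs, f0 = 1000.0, 10.0                         # sampling rate and analysis frequency (Hz)
t = np.arange(0, 5, 1 / fs)
x = np.sin(2 * np.pi * f0 * t) * (1 + 0.5 * np.sin(2 * np.pi * 0.3 * t)) \
    + 0.2 * np.random.randn(t.size)

# Hilbert route: band-pass around f0, then analytic-signal amplitude.
b, a = butter(4, [8 / (fs / 2), 12 / (fs / 2)], btype="band")
amp_hilbert = np.abs(hilbert(filtfilt(b, a, x)))

# Wavelet route: convolution with a complex Morlet wavelet centred on f0.
n_cycles = 7
sigma_t = n_cycles / (2 * np.pi * f0)
tw = np.arange(-4 * sigma_t, 4 * sigma_t, 1 / fs)
morlet = np.exp(2j * np.pi * f0 * tw) * np.exp(-tw**2 / (2 * sigma_t**2))
morlet /= np.sum(np.abs(morlet)) / 2          # rough amplitude normalisation
amp_wavelet = np.abs(np.convolve(x, morlet, mode="same"))

# Away from the edges, the two envelopes are typically very highly correlated.
print(np.corrcoef(amp_hilbert[500:-500], amp_wavelet[500:-500])[0, 1])
```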

  9. Wavelet-based detection of abrupt changes in natural frequencies of time-variant systems

    NASA Astrophysics Data System (ADS)

    Dziedziech, K.; Staszewski, W. J.; Basu, B.; Uhl, T.

    2015-12-01

    Detection of abrupt changes in natural frequencies from vibration responses of time-variant systems is a challenging task due to the complex nature of physics involved. It is clear that the problem needs to be analysed in the combined time-frequency domain. The paper proposes an application of the input-output wavelet-based Frequency Response Function for this analysis. The major focus and challenge relate to ridge extraction of the above time-frequency characteristics. It is well known that classical ridge extraction procedures lead to ridges that are smooth. However, this property is not desired when abrupt changes in the dynamics are considered. The methods presented in the paper are illustrated using simulated and experimental multi-degree-of-freedom systems. The results are compared with the classical Frequency Response Function and with the output only analysis based on the wavelet auto-power response spectrum. The results show that the proposed method captures correctly the dynamics of the analysed time-variant systems.

  10. Performance evaluation of wavelet-based ECG compression algorithms for telecardiology application over CDMA network.

    PubMed

    Kim, Byung S; Yoo, Sun K

    2007-09-01

    The use of wireless networks is of great practical importance for the instantaneous transmission of ECG signals during movement. In this paper, three typical wavelet-based ECG compression algorithms, Rajoub (RA), Embedded Zerotree Wavelet (EZ), and Wavelet Transform Higher-Order Statistics Coding (WH), were evaluated to find an appropriate ECG compression algorithm for scalable and reliable wireless telecardiology applications, particularly over a CDMA network. The short-term and long-term performance characteristics of the three algorithms were analyzed using normal, abnormal, and measurement noise-contaminated ECG signals from the MIT-BIH database. In addition to processing delay measurements, compression efficiency and reconstruction sensitivity to error were also evaluated via simulation models including a noise-free channel model, a random noise channel model, and a CDMA channel model, as well as over an actual CDMA network currently operating in Korea. This study found that the EZ algorithm achieves the best compression efficiency in a low-noise environment, and that the WH algorithm is competitive for use in high-error environments, although its short-term performance degrades with abnormal or contaminated ECG signals. PMID:17701824
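
    A hedged sketch of two figures of merit commonly used to compare such codecs: the compression ratio implied by the share of retained wavelet coefficients, and the percentage RMS difference (PRD) of the reconstruction. Simple magnitude thresholding below stands in for the actual RA/EZW/WH coding stages, and the wavelet, level, and retention ratio are illustrative.

```python
import numpy as np
import pywt

def compress_and_score(ecg, keep_ratio=0.10, wavelet="db4", level=5):
    """Keep the largest coefficients, reconstruct, and report (compression ratio, PRD)."""
    coeffs = pywt.wavedec(ecg, wavelet, level=level, mode="periodization")
    flat, slices = pywt.coeffs_to_array(coeffs)
    k = max(1, int(keep_ratio * flat.size))
    thr = np.sort(np.abs(flat))[-k]
    kept = np.where(np.abs(flat) >= thr, flat, 0.0)
    recon = pywt.waverec(pywt.array_to_coeffs(kept, slices, output_format="wavedec"),
                         wavelet, mode="periodization")[: len(ecg)]
    prd = 100.0 * np.linalg.norm(ecg - recon) / np.linalg.norm(ecg - ecg.mean())
    cr = flat.size / np.count_nonzero(kept)
    return cr, prd

cr, prd = compress_and_score(np.cumsum(np.random.randn(2048)))
print(f"CR ~ {cr:.1f}:1, PRD ~ {prd:.2f}%")
```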

  11. Estimation of mechanical properties of panels based on modal density and mean mobility measurements

    NASA Astrophysics Data System (ADS)

    Elie, Benjamin; Gautier, François; David, Bertrand

    2013-11-01

    The mechanical characteristics of the wood panels used by instrument makers depend on numerous factors, including the species of the wood and the characteristics of the wood sample (fiber direction, microstructure). This leads to variations in Young's modulus, the mass density, and the damping coefficients. Existing methods for estimating these parameters are not suitable for instrument makers, mainly because they require expensive experimental setups or complicated protocols that are not adapted to daily practice in a workshop. In this paper, a method for estimating Young's modulus, the mass density, and the modal loss factors of flat panels, requiring only a few measurement points and an affordable experimental setup, is presented. It is based on the estimation of two characteristic quantities: the modal density and the mean mobility. The modal density is computed from the modal frequencies estimated by the subspace method ESPRIT (Estimation of Signal Parameters via Rotational Invariance Techniques), associated with the signal enumeration technique ESTER (ESTimation of ERror). This modal identification technique proves robust in the low- and mid-frequency domains, i.e. when the modal overlap factor does not exceed 1. The estimation of the modal parameters also enables the computation of the modal loss factors in the low- and mid-frequency domains. Fitting the measurements to the theoretical expressions for the modal density and the mean mobility enables an accurate estimation of Young's modulus and the mass density of flat panels. A numerical and an experimental study show that the method is robust and that it requires only a few measurement points.
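
    A hedged sketch of the final inversion step described above, using the classical asymptotic formulas for a thin flat panel, n(f) = (S/2)·sqrt(m/D) for the modal density and Y = 1/(8·sqrt(D·m)) for the mean (infinite-plate) point mobility, where m = rho·h is the surface mass density and D = E·h³/(12(1−ν²)) the bending stiffness. The abstract does not give these formulas explicitly, and the numerical inputs and Poisson ratio below are invented placeholders.

```python
import numpy as np

def identify_panel(n_modal, Y_mean, S, h, nu=0.35):
    """Return (E, rho) from a measured modal density (modes/Hz) and mean mobility (m/(N*s))."""
    a = 2.0 * n_modal / S          # a = sqrt(m / D)
    b = 1.0 / (8.0 * Y_mean)       # b = sqrt(m * D)
    m = a * b                      # surface mass density rho*h  [kg/m^2]
    D = b / a                      # bending stiffness           [N*m]
    rho = m / h
    E = 12.0 * D * (1.0 - nu**2) / h**3
    return E, rho

# Example: a 0.40 m x 0.25 m panel, 3 mm thick, with made-up measured quantities.
E, rho = identify_panel(n_modal=0.02, Y_mean=0.05, S=0.40 * 0.25, h=3e-3)
print(f"E ~ {E / 1e9:.1f} GPa, rho ~ {rho:.0f} kg/m^3")
```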

  12. Estimating bulk density of compacted grains in storage bins and modifications of Janssen's load equations as affected by bulk density.

    PubMed

    Haque, Ekramul

    2013-03-01

    Janssen created a classical theory based on calculus to estimate static vertical and horizontal pressures within beds of bulk corn. Even today, his equations are widely used to calculate the static loadings imposed by granular materials stored in bins. Many standards, such as American Concrete Institute (ACI) 313, American Society of Agricultural and Biological Engineers EP 433, German DIN 1055, the Canadian Farm Building Code (CFBC), the European Code (ENV 1991-4), and Australian Code AS 3774, incorporate Janssen's equations as the standard for static load calculations on bins. One of the main drawbacks of Janssen's equations is the assumption that the bulk density of the stored product remains constant throughout the entire bin. While for all practical purposes this is true for small bins, in modern commercial-size bins the bulk density of grain increases substantially with depth due to compressive and hoop stresses. Overpressure factors are applied to Janssen loadings to cover practical situations such as dynamic loads due to bin filling and emptying, but there are few theoretical methods available that include the effects of increased bulk density on the grain loads transmitted to the storage structures. This article develops a mathematical equation expressing the specific weight as a function of location and other material and storage variables. It was found that the bulk density of stored granular materials increases with depth according to a mathematical equation relating the two variables, and, applying this bulk-density function, Janssen's equations for vertical and horizontal pressures were modified as presented in this article. The validity of this specific weight function was tested using the principles of mathematics. As expected, calculations of loads based on the modified equations were consistently higher than the Janssen loadings based on noncompacted bulk densities for all grain depths and types accounting for the effects of increased bulk densities
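
    For reference, the classical Janssen relation mentioned above, with a constant bulk density, is sketched below; the bin geometry, specific weight, wall friction coefficient, and lateral pressure ratio are illustrative values, not figures from the article. The article's contribution, a depth-dependent bulk density, would replace the constant specific weight in this expression.

```python
import numpy as np

def janssen_vertical_pressure(z, gamma, D, mu, k):
    """Static vertical pressure (Pa) at depth z (m) in a circular bin of diameter D (m).
    gamma: specific weight of the grain (N/m^3), mu: wall friction coefficient,
    k: ratio of lateral to vertical pressure."""
    R = D / 4.0                                   # hydraulic radius of a circular bin
    return (gamma * R / (mu * k)) * (1.0 - np.exp(-mu * k * z / R))

# Example: wheat-like bulk material (about 800 kg/m^3) in a 6 m diameter bin.
z = np.array([2.0, 5.0, 10.0, 20.0])              # depths below the grain surface (m)
p_v = janssen_vertical_pressure(z, gamma=8000.0, D=6.0, mu=0.4, k=0.5)
p_h = 0.5 * p_v                                    # lateral pressure = k * vertical
print(np.round(p_v), np.round(p_h))
```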

  13. Estimating population density and connectivity of American mink using spatial capture-recapture

    USGS Publications Warehouse

    Fuller, Angela K.; Sutherland, Christopher S.; Royle, Andy; Hare, Matthew P.

    2016-01-01

    Estimating the abundance or density of populations is fundamental to the conservation and management of species, and as landscapes become more fragmented, maintaining landscape connectivity has become one of the most important challenges for biodiversity conservation. Yet these two issues have never been formally integrated in a model that simultaneously estimates abundance while accounting for the connectivity of a landscape. We demonstrate an application of capture–recapture to develop a model of animal density using a least-cost path model for individual encounter probability that accounts for non-Euclidean connectivity in a highly structured network. We utilized scat detection dogs (Canis lupus familiaris) as a means of collecting non-invasive genetic samples of American mink (Neovison vison) individuals and used spatial capture–recapture models (SCR) to gain inferences about mink population density and connectivity. Density of mink was not constant across the landscape, but rather increased with increasing distance from city, town, or village centers, and mink activity was associated with water. The SCR model allowed us to estimate the density and spatial distribution of individuals across a 388 km² area. The model was used to investigate patterns of space usage and to evaluate covariate effects on encounter probabilities, including differences between sexes. This study provides an application of capture–recapture models based on ecological distance, allowing us to directly estimate landscape connectivity. This approach should be widely applicable to provide simultaneous direct estimates of density, space usage, and landscape connectivity for many species.

  14. Estimating population density and connectivity of American mink using spatial capture-recapture.

    PubMed

    Fuller, Angela K; Sutherland, Chris S; Royle, J Andrew; Hare, Matthew P

    2016-06-01

    Estimating the abundance or density of populations is fundamental to the conservation and management of species, and as landscapes become more fragmented, maintaining landscape connectivity has become one of the most important challenges for biodiversity conservation. Yet these two issues have never been formally integrated in a model that simultaneously estimates abundance while accounting for the connectivity of a landscape. We demonstrate an application of capture-recapture to develop a model of animal density using a least-cost path model for individual encounter probability that accounts for non-Euclidean connectivity in a highly structured network. We utilized scat detection dogs (Canis lupus familiaris) as a means of collecting non-invasive genetic samples of American mink (Neovison vison) individuals and used spatial capture-recapture models (SCR) to gain inferences about mink population density and connectivity. Density of mink was not constant across the landscape, but rather increased with increasing distance from city, town, or village centers, and mink activity was associated with water. The SCR model allowed us to estimate the density and spatial distribution of individuals across a 388 km² area. The model was used to investigate patterns of space usage and to evaluate covariate effects on encounter probabilities, including differences between sexes. This study provides an application of capture-recapture models based on ecological distance, allowing us to directly estimate landscape connectivity. This approach should be widely applicable to provide simultaneous direct estimates of density, space usage, and landscape connectivity for many species. PMID:27509753

  15. Spatial capture-recapture models for jointly estimating population density and landscape connectivity

    USGS Publications Warehouse

    Royle, J. Andrew; Chandler, Richard B.; Gazenski, Kimberly D.; Graves, Tabitha A.

    2013-01-01

    Population size and landscape connectivity are key determinants of population viability, yet no methods exist for simultaneously estimating density and connectivity parameters. Recently developed spatial capture–recapture (SCR) models provide a framework for estimating density of animal populations but thus far have not been used to study connectivity. Rather, all applications of SCR models have used encounter probability models based on the Euclidean distance between traps and animal activity centers, which implies that home ranges are stationary, symmetric, and unaffected by landscape structure. In this paper we devise encounter probability models based on “ecological distance,” i.e., the least-cost path between traps and activity centers, which is a function of both Euclidean distance and animal movement behavior in resistant landscapes. We integrate least-cost path models into a likelihood-based estimation scheme for spatial capture–recapture models in order to estimate population density and parameters of the least-cost encounter probability model. Therefore, it is possible to make explicit inferences about animal density, distribution, and landscape connectivity as it relates to animal movement from standard capture–recapture data. Furthermore, a simulation study demonstrated that ignoring landscape connectivity can result in negatively biased density estimators under the naive SCR model.
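
    The sketch below illustrates the "ecological distance" idea described above: a least-cost path distance computed on a gridded resistance surface replaces Euclidean distance inside a half-normal detection function. The grid, cost values, p0, and sigma are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.csgraph import dijkstra

def lattice_cost_graph(cost, cell=1.0):
    """4-neighbour graph whose edge weights average the resistance of the two cells."""
    nr, nc = cost.shape
    g = lil_matrix((nr * nc, nr * nc))
    for r in range(nr):
        for c in range(nc):
            i = r * nc + c
            for dr, dc in ((0, 1), (1, 0)):
                rr, cc = r + dr, c + dc
                if rr < nr and cc < nc:
                    j = rr * nc + cc
                    g[i, j] = g[j, i] = cell * 0.5 * (cost[r, c] + cost[rr, cc])
    return g.tocsr()

def encounter_prob(cost, trap_idx, activity_idx, p0=0.2, sigma=20.0):
    """Half-normal detection probability using least-cost (ecological) distance."""
    d = dijkstra(lattice_cost_graph(cost), indices=[activity_idx])[0, trap_idx]
    return p0 * np.exp(-d**2 / (2.0 * sigma**2))

# Example: a 20x20 landscape where a water corridor (cost 1) is easier to move
# through than upland cells (cost 5); trap at cell 25, activity centre at cell 390.
cost = np.full((20, 20), 5.0)
cost[:, 8:12] = 1.0
print(encounter_prob(cost, trap_idx=25, activity_idx=390))
```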

  16. An analytic model of toroidal half-wave oscillations: Implication on plasma density estimates

    NASA Astrophysics Data System (ADS)

    Bulusu, Jayashree; Sinha, A. K.; Vichare, Geeta

    2015-06-01

    The analytic model developed for toroidal oscillations under an infinitely conducting ionosphere ("Rigid-end") has been extended to the "Free-end" case, in which the conjugate ionospheres are infinitely resistive. The present direct analytic model (DAM) is the only analytic model that provides the field line structures of the electric and magnetic field oscillations associated with the "Free-end" toroidal wave for a generalized plasma distribution characterized by the power law ρ = ρ₀(r₀/r)^m, where m is the density index and r is the geocentric distance to the position of interest on the field line. This is important because different regions in the magnetosphere are characterized by different m. Significant improvement over the standard WKB solution and excellent agreement with the numerical exact solution (NES) affirm the validity and advancement of the DAM. In addition, we estimate the equatorial ion number density (assuming the H+ atom to be the only species) using the DAM, the NES, and the standard WKB method for the Rigid-end as well as the Free-end case and illustrate their respective implications for computing ion number density. It is seen that the WKB method overestimates the equatorial ion density under the Rigid-end condition and underestimates it under the Free-end condition. The density estimates obtained through the DAM are far more accurate than those computed through WKB. Earlier analytic estimates of ion number density were restricted to m = 6, whereas the DAM can account for a generalized m while reproducing the density for m = 6 as envisaged by earlier models.

  17. Variability of dental cone beam CT grey values for density estimations

    PubMed Central

    Pauwels, R; Nackaerts, O; Bellaiche, N; Stamatakis, H; Tsiklakis, K; Walker, A; Bosmans, H; Bogaerts, R; Jacobs, R; Horner, K

    2013-01-01

    Objective The aim of this study was to investigate the use of dental cone beam CT (CBCT) grey values for density estimations by calculating the correlation with multislice CT (MSCT) values and the grey value error after recalibration. Methods A polymethyl methacrylate (PMMA) phantom was developed containing inserts of different density: air, PMMA, hydroxyapatite (HA) 50 mg cm−3, HA 100, HA 200 and aluminium. The phantom was scanned on 13 CBCT devices and 1 MSCT device. Correlation between CBCT grey values and CT numbers was calculated, and the average error of the CBCT values was estimated in the medium-density range after recalibration. Results Pearson correlation coefficients ranged between 0.7014 and 0.9996 in the full-density range and between 0.5620 and 0.9991 in the medium-density range. The average error of CBCT voxel values in the medium-density range was between 35 and 1562. Conclusion Even though most CBCT devices showed a good overall correlation with CT numbers, large errors can be seen when using the grey values in a quantitative way. Although it could be possible to obtain pseudo-Hounsfield units from certain CBCTs, alternative methods of assessing bone tissue should be further investigated. Advances in knowledge The suitability of dental CBCT for density estimations was assessed, involving a large number of devices and protocols. The possibility for grey value calibration was thoroughly investigated. PMID:23255537
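
    As a minimal illustration of the recalibration step mentioned above, a linear map from device grey values to reference CT numbers can be fitted on the phantom inserts and the residual error evaluated on the medium-density inserts; the numbers below are invented placeholders, not measurements from the study.

```python
import numpy as np

# Hypothetical grey values for the six inserts (air, PMMA, HA 50, HA 100, HA 200, aluminium)
cbct_grey = np.array([-980.0, 60.0, 180.0, 310.0, 620.0, 1850.0])   # one CBCT device
msct_hu   = np.array([-1000.0, 120.0, 230.0, 370.0, 700.0, 2000.0]) # reference MSCT numbers

slope, intercept = np.polyfit(cbct_grey, msct_hu, 1)                 # linear recalibration
recalibrated = slope * cbct_grey + intercept

medium = slice(1, 5)   # PMMA and the three HA inserts (medium-density range)
print(np.mean(np.abs(recalibrated[medium] - msct_hu[medium])))       # average grey-value error
```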

  18. Estimating detection and density of the Andean cat in the high Andes

    USGS Publications Warehouse

    Reppucci, J.; Gardner, B.; Lucherini, M.

    2011-01-01

    The Andean cat (Leopardus jacobita) is one of the most endangered, yet least known, felids. Although the Andean cat is considered at risk of extinction, rigorous quantitative population studies are lacking. Because physical observations of the Andean cat are difficult to make in the wild, we used a camera-trapping array to photo-capture individuals. The survey was conducted in northwestern Argentina at an elevation of approximately 4,200 m during October-December 2006 and April-June 2007. In each year we deployed 22 pairs of camera traps, which were strategically placed. To estimate detection probability and density we applied models for spatial capture-recapture using a Bayesian framework. Estimated densities were 0.07 and 0.12 individuals/km² for 2006 and 2007, respectively. Mean baseline detection probability was estimated at 0.07. By comparison, densities of the Pampas cat (Leopardus colocolo), another poorly known felid that shares its habitat with the Andean cat, were estimated at 0.74-0.79 individuals/km² in the same study area for 2006 and 2007, and its detection probability was estimated at 0.02. Despite having greater detectability, the Andean cat is rarer in the study region than the Pampas cat. Properly accounting for the detection probability is important in making reliable estimates of density, a key parameter in conservation and management decisions for any species. © 2011 American Society of Mammalogists.

  19. Estimating detection and density of the Andean cat in the high Andes

    USGS Publications Warehouse

    Reppucci, Juan; Gardner, Beth; Lucherini, Mauro

    2011-01-01

    The Andean cat (Leopardus jacobita) is one of the most endangered, yet least known, felids. Although the Andean cat is considered at risk of extinction, rigorous quantitative population studies are lacking. Because physical observations of the Andean cat are difficult to make in the wild, we used a camera-trapping array to photo-capture individuals. The survey was conducted in northwestern Argentina at an elevation of approximately 4,200 m during October–December 2006 and April–June 2007. In each year we deployed 22 pairs of camera traps, which were strategically placed. To estimate detection probability and density we applied models for spatial capture–recapture using a Bayesian framework. Estimated densities were 0.07 and 0.12 individuals/km² for 2006 and 2007, respectively. Mean baseline detection probability was estimated at 0.07. By comparison, densities of the Pampas cat (Leopardus colocolo), another poorly known felid that shares its habitat with the Andean cat, were estimated at 0.74–0.79 individuals/km² in the same study area for 2006 and 2007, and its detection probability was estimated at 0.02. Despite having greater detectability, the Andean cat is rarer in the study region than the Pampas cat. Properly accounting for the detection probability is important in making reliable estimates of density, a key parameter in conservation and management decisions for any species.

  20. Estimation of tiger densities in India using photographic captures and recaptures

    USGS Publications Warehouse

    Karanth, U.; Nichols, J.D.

    1998-01-01

    Previously applied methods for estimating tiger (Panthera tigris) abundance using total counts based on tracks have proved unreliable. In this paper we use a field method proposed by Karanth (1995), combining camera-trap photography to identify individual tigers based on stripe patterns, with capture-recapture estimators. We developed a sampling design for camera-trapping and used the approach to estimate tiger population size and density in four representative tiger habitats in different parts of India. The field method worked well and provided data suitable for analysis using closed capture-recapture models. The results suggest the potential for applying this methodology for estimating abundances, survival rates and other population parameters in tigers and other low-density, secretive animal species with distinctive coat patterns or other external markings. Estimated probabilities of photo-capturing tigers present in the study sites ranged from 0.75 to 1.00. The estimated mean tiger densities ranged from 4.1 (SE = 1.31) to 11.7 (SE = 1.93) tigers/100 km2. The results support the previous suggestions of Karanth and Sunquist (1995) that densities of tigers and other large felids may be primarily determined by prey community structure at a given site.
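
    For readers unfamiliar with closed-population capture-recapture, the hedged Python sketch below uses a simple two-sample Chapman estimator as a stand-in for the closed-population models applied in the study, followed by a naive density calculation. The capture counts and the effective sampled area are hypothetical, and the actual analysis used full closed capture-recapture models rather than this two-occasion shortcut.

```python
# Illustrative sketch only: Chapman-corrected Lincoln-Petersen estimator as a
# stand-in for closed-population capture-recapture, with a naive density step.
n1 = 12   # tigers photo-captured in sampling occasion 1 (hypothetical)
n2 = 10   # tigers photo-captured in sampling occasion 2 (hypothetical)
m2 = 7    # tigers in occasion 2 already identified in occasion 1 (by stripe pattern)

N_hat = (n1 + 1) * (n2 + 1) / (m2 + 1) - 1          # Chapman abundance estimate
var_N = ((n1 + 1) * (n2 + 1) * (n1 - m2) * (n2 - m2)
         / ((m2 + 1) ** 2 * (m2 + 2)))
se_N = var_N ** 0.5

effective_area_km2 = 250.0                           # hypothetical effective sampled area
density_per_100km2 = 100.0 * N_hat / effective_area_km2
print(f"N_hat = {N_hat:.1f} (SE {se_N:.1f}), density = {density_per_100km2:.1f} / 100 km^2")
```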

  1. An automatic iris occlusion estimation method based on high-dimensional density estimation.

    PubMed

    Li, Yung-Hui; Savvides, Marios

    2013-04-01

    Iris masks play an important role in iris recognition. They indicate which part of the iris texture map is useful and which part is occluded or contaminated by noisy image artifacts such as eyelashes, eyelids, eyeglasses frames, and specular reflections. The accuracy of the iris mask is extremely important: the performance of the iris recognition system decreases dramatically when the iris mask is inaccurate, even when the best recognition algorithm is used. Traditionally, rule-based algorithms have been used to estimate iris masks from iris images, but the accuracy of the masks generated this way is questionable. In this work, we propose to use Figueiredo and Jain's Gaussian Mixture Models (FJ-GMMs) to model the underlying probabilistic distributions of both valid and invalid regions on iris images. We also explored possible features and found that a Gabor Filter Bank (GFB) provides the most discriminative information for our goal. Finally, we applied the Simulated Annealing (SA) technique to optimize the parameters of the GFB in order to achieve the best recognition rate. Experimental results show that the masks generated by the proposed algorithm increase the iris recognition rate on both the ICE2 and UBIRIS datasets, verifying the effectiveness and importance of the proposed method for iris occlusion estimation. PMID:22868651
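
    A hedged sketch of the general idea follows, using scikit-learn's standard EM-fitted GaussianMixture in place of the FJ-GMMs and a hand-rolled Gabor kernel in place of the paper's optimized filter bank; the image, labels, and parameter values are synthetic stand-ins.

```python
# Illustrative sketch, not the FJ-GMM pipeline from the paper: a plain Gaussian
# mixture per class, fitted on Gabor-filter features, labels each pixel of a
# synthetic "iris" image as valid texture or occlusion.
import numpy as np
from scipy.ndimage import convolve
from sklearn.mixture import GaussianMixture

def gabor_kernel(freq, theta, size=15, sigma=3.0):
    """Real part of a simple Gabor kernel (assumed parametrisation)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr)

rng = np.random.default_rng(0)
image = rng.random((64, 256))                      # stand-in for an unwrapped iris map
features = np.stack([convolve(image, gabor_kernel(f, t))
                     for f in (0.1, 0.2) for t in (0.0, np.pi / 2)], axis=-1)
X = features.reshape(-1, features.shape[-1])

# One mixture per class, trained on (hypothetical) hand-labelled pixels
labels = (rng.random(X.shape[0]) < 0.2).astype(int)   # 1 = occluded, toy labels
gmm_valid = GaussianMixture(n_components=3, random_state=0).fit(X[labels == 0])
gmm_occl = GaussianMixture(n_components=3, random_state=0).fit(X[labels == 1])

# Per-pixel mask: pick the class with the higher log-likelihood
mask = (gmm_occl.score_samples(X) > gmm_valid.score_samples(X)).reshape(image.shape)
print("estimated occluded fraction:", mask.mean())
```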

  2. [Estimation of Hunan forest carbon density based on spectral mixture analysis of MODIS data].

    PubMed

    Yan, En-ping; Lin, Hui; Wang, Guang-xing; Chen, Zhen-xiong

    2015-11-01

    With the fast development of remote sensing technology, combining forest inventory sample plot data with remotely sensed images has become a widely used approach to mapping forest carbon density. However, the existence of mixed pixels often impedes improvement in forest carbon density mapping, especially when low-spatial-resolution images such as MODIS are used. In this study, MODIS images and national forest inventory sample plot data were used to estimate forest carbon density. Linear spectral mixture analysis with and without constraint, and nonlinear spectral mixture analysis, were compared for deriving the fractions of different land use and land cover (LULC) types. A sequential Gaussian co-simulation algorithm, with and without the fraction images from the spectral mixture analyses, was then employed to estimate the forest carbon density of Hunan Province. Results showed that 1) constrained linear spectral mixture analysis, with a mean RMSE of 0.002, estimated the fractions of LULC types more accurately than unconstrained linear and nonlinear spectral mixture analyses; 2) integrating the spectral mixture analysis model with the sequential Gaussian co-simulation algorithm increased the estimation accuracy of forest carbon density from 74.1% to 81.5% and decreased the RMSE from 7.26 to 5.18; and 3) the mean forest carbon density for the province was 30.06 t·hm−2, ranging from 0.00 to 67.35 t·hm−2. This implies that spectral mixture analysis offers great potential for increasing the estimation accuracy of forest carbon density at regional and global levels. PMID:26915200
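
    The constrained linear unmixing step can be illustrated with a short Python sketch: non-negativity is enforced by an NNLS solver, and the sum-to-one constraint by an appended, heavily weighted row of ones. The endmember spectra and mixed pixel below are invented, and this is only one of several ways to impose the constraints, not necessarily the authors' implementation.

```python
# Illustrative sketch of fully constrained linear spectral unmixing for a single
# (synthetic) MODIS-like pixel; endmember signatures are made up.
import numpy as np
from scipy.optimize import nnls

# Endmember signatures, bands x classes (e.g. forest, cropland, built-up, water)
E = np.array([[0.05, 0.10, 0.20, 0.02],
              [0.30, 0.25, 0.22, 0.04],
              [0.45, 0.30, 0.25, 0.03],
              [0.25, 0.40, 0.30, 0.05]])
pixel = 0.6 * E[:, 0] + 0.3 * E[:, 1] + 0.1 * E[:, 2]   # synthetic mixed pixel

# Sum-to-one via a heavily weighted row of ones; non-negativity via NNLS itself
w = 1e3
A = np.vstack([E, w * np.ones(E.shape[1])])
b = np.append(pixel, w)
fractions, _ = nnls(A, b)

rmse = np.sqrt(np.mean((E @ fractions - pixel) ** 2))
print("fractions:", np.round(fractions, 3), "RMSE:", round(rmse, 4))
```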

  3. Fast and accurate probability density estimation in large high dimensional astronomical datasets

    NASA Astrophysics Data System (ADS)

    Gupta, Pramod; Connolly, Andrew J.; Gardner, Jeffrey P.

    2015-01-01

    Astronomical surveys will generate measurements of hundreds of attributes (e.g. color, size, shape) on hundreds of millions of sources. Analyzing these large, high-dimensional data sets will require efficient algorithms for data analysis. An example is probability density estimation, which is at the heart of many classification problems such as the separation of stars and quasars based on their colors. Popular density estimation techniques use binning or kernel density estimation. Kernel density estimation has a small memory footprint but often requires large computational resources. Binning has small computational requirements, but it is usually implemented with multi-dimensional arrays, which leads to memory requirements that scale exponentially with the number of dimensions. Hence neither technique scales well to large data sets in high dimensions. We present an alternative approach: binning implemented with hash tables (BASH tables). This approach exploits the sparseness of data in the high-dimensional space to keep the memory requirements small. However, hashing requires some extra computation, so a priori it is not clear whether the reduction in memory requirements will lead to increased computational requirements. Through an implementation of BASH tables in C++ we show that the additional computational requirements of hashing are negligible, so the approach has both small memory and small computational requirements. We apply our density estimation technique to photometric selection of quasars using non-parametric Bayesian classification and show that the accuracy of the classification is the same as that of earlier approaches. Since the BASH table approach is one to three orders of magnitude faster than the earlier approaches, it may be useful in various other applications of density estimation in astrostatistics.
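
    A minimal Python sketch of the hash-based binning idea follows. The study's BASH tables are a C++ implementation; here a dictionary keyed by bin indices plays the role of the hash table, and the data are synthetic.

```python
# Minimal sketch of hash-based sparse binning for density estimation: only
# occupied bins are ever stored, so memory grows with the data, not with the
# number of possible bins in the high-dimensional space.
import numpy as np
from collections import defaultdict

def build_bash_table(points, bin_width):
    """Count points per occupied bin; empty bins are simply never stored."""
    table = defaultdict(int)
    for p in points:
        key = tuple(np.floor(p / bin_width).astype(int))
        table[key] += 1
    return table

def density_estimate(table, query, bin_width, n_total):
    """Approximate probability density at a query point from its bin's count."""
    key = tuple(np.floor(query / bin_width).astype(int))
    volume = bin_width ** len(query)
    return table.get(key, 0) / (n_total * volume)

rng = np.random.default_rng(1)
data = rng.normal(size=(100_000, 6))          # high-dimensional sample (6 attributes)
table = build_bash_table(data, bin_width=0.5)

print("occupied bins:", len(table))
print("density near the origin:", density_estimate(table, np.zeros(6), 0.5, len(data)))
```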

  4. Trap Array Configuration Influences Estimates and Precision of Black Bear Density and Abundance

    PubMed Central

    Wilton, Clay M.; Puckett, Emily E.; Beringer, Jeff; Gardner, Beth; Eggert, Lori S.; Belant, Jerrold L.

    2014-01-01

    Spatial capture-recapture (SCR) models have advanced our ability to estimate population density for wide ranging animals by explicitly incorporating individual movement. Though these models are more robust to various spatial sampling designs, few studies have empirically tested different large-scale trap configurations using SCR models. We investigated how extent of trap coverage and trap spacing affects precision and accuracy of SCR parameters, implementing models using the R package secr. We tested two trapping scenarios, one spatially extensive and one intensive, using black bear (Ursus americanus) DNA data from hair snare arrays in south-central Missouri, USA. We also examined the influence that adding a second, lower barbed-wire strand to snares had on quantity and spatial distribution of detections. We simulated trapping data to test bias in density estimates of each configuration under a range of density and detection parameter values. Field data showed that using multiple arrays with intensive snare coverage produced more detections of more individuals than extensive coverage. Consequently, density and detection parameters were more precise for the intensive design. Density was estimated as 1.7 bears per 100 km2 and was 5.5 times greater than that under extensive sampling. Abundance was 279 (95% CI = 193–406) bears in the 16,812 km2 study area. Excluding detections from the lower strand resulted in the loss of 35 detections, 14 unique bears, and the largest recorded movement between snares. All simulations showed low bias for density under both configurations. Results demonstrated that in low density populations with non-uniform distribution of population density, optimizing the tradeoff among snare spacing, coverage, and sample size is of critical importance to estimating parameters with high precision and accuracy. With limited resources, allocating available traps to multiple arrays with intensive trap spacing increased the amount of information

  5. Analysis of Scattering Components from Fully Polarimetric SAR Images for Improving Accuracies of Urban Density Estimation

    NASA Astrophysics Data System (ADS)

    Susaki, J.

    2016-06-01

    In this paper, we analyze probability density functions (PDFs) of scatterings derived from fully polarimetric synthetic aperture radar (SAR) images for improving the accuracies of estimated urban density. We have reported a method for estimating urban density that uses an index Tv+c obtained by normalizing the sum of volume and helix scatterings Pv+c. Validation results showed that estimated urban densities have a high correlation with building-to-land ratios (Kajimoto and Susaki, 2013b; Susaki et al., 2014). While the method is found to be effective for estimating urban density, it is not clear why Tv+c is more effective than indices derived from other scatterings, such as surface or double-bounce scatterings, observed in urban areas. In this research, we focus on PDFs of scatterings derived from fully polarimetric SAR images in terms of scattering normalization. First, we introduce a theoretical PDF that assumes that image pixels have scatterers showing random backscattering. We then generate PDFs of scatterings derived from observations of concrete blocks with different orientation angles, and from a satellite-based fully polarimetric SAR image. The analysis of the PDFs and the derived statistics reveals that the curves of the PDFs of Pv+c are the most similar to the normal distribution among all the scatterings derived from fully polarimetric SAR images. It was found that Tv+c works most effectively because of its similarity to the normal distribution.
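
    One simple, hedged way to quantify the "similarity to the normal distribution" discussed above is to standardize a scattering component and compare it with a fitted normal via skewness, excess kurtosis, and a Kolmogorov-Smirnov statistic. The Python sketch below does this on synthetic gamma-distributed stand-ins rather than real polarimetric SAR decompositions.

```python
# Illustrative sketch (synthetic data, not the SAR decompositions themselves):
# quantify how close each scattering component's distribution is to a normal.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Stand-ins for per-pixel scattering power of two components
p_vc = rng.gamma(shape=8.0, scale=1.0, size=50_000)      # volume + helix (toy)
p_db = rng.gamma(shape=1.5, scale=4.0, size=50_000)      # double bounce (toy)

for name, sample in [("P_v+c", p_vc), ("P_db", p_db)]:
    z = (sample - sample.mean()) / sample.std()
    ks = stats.kstest(z, "norm").statistic
    print(f"{name}: skew={stats.skew(sample):+.2f}, "
          f"excess kurtosis={stats.kurtosis(sample):+.2f}, KS vs normal={ks:.3f}")
```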

  6. A Statistical Analysis for Estimating Fish Number Density with the Use of a Multibeam Echosounder

    NASA Astrophysics Data System (ADS)

    Schroth-Miller, Madeline L.

    Fish number density can be estimated from the normalized second moment of acoustic backscatter intensity [Denbigh et al., J. Acoust. Soc. Am. 90, 457-469 (1991)]. This method assumes that the distribution of fish scattering amplitudes is known and that the fish are randomly distributed following a Poisson volume distribution within regions of constant density. It is most useful at low fish densities, relative to the resolution of the acoustic device being used, since the estimators quickly become noisy as the number of fish per resolution cell increases. New models that include noise contributions are considered. The methods were applied to an acoustic assessment of juvenile Atlantic Bluefin Tuna, Thunnus thynnus. The data were collected using a 400 kHz multibeam echo sounder during the summer months of 2009 in Cape Cod, MA. Due to the high resolution of the multibeam system used, the large size (approx. 1.5 m) of the tuna, and the spacing of the fish in the school, we expect there to be low fish densities relative to the resolution of the multibeam system. Results of the fish number density based on the normalized second moment of acoustic intensity are compared to fish packing density estimated using aerial imagery that was collected simultaneously.
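
    A hedged sketch of the estimator's core idea: for Poisson-distributed, random-phase scatterers the normalized second moment of intensity is commonly written as M2 = ⟨I²⟩/⟨I⟩² = 2 + β/n, with β = ⟨a⁴⟩/⟨a²⟩² fixed by the assumed scattering-amplitude distribution, so the mean number of scatterers per resolution cell n can be recovered by inversion. The simulation below uses Rayleigh amplitudes (β = 2) and invented parameter values; it is not the processing chain used in the thesis.

```python
# Hedged sketch: estimate scatterers per resolution cell from the normalized
# second moment of echo intensity, assuming M2 = 2 + beta/n (Poisson-distributed
# random-phase scatterers) with beta set by the amplitude distribution.
import numpy as np

rng = np.random.default_rng(3)

def simulate_intensity(n_mean, n_cells, amp_scale=1.0):
    """Echo intensity for Poisson-many random-phase scatterers per cell."""
    counts = rng.poisson(n_mean, n_cells)
    out = np.empty(n_cells)
    for i, k in enumerate(counts):
        amps = rng.rayleigh(amp_scale, k)
        phases = rng.uniform(0.0, 2 * np.pi, k)
        out[i] = np.abs(np.sum(amps * np.exp(1j * phases))) ** 2
    return out

intensity = simulate_intensity(n_mean=0.8, n_cells=20_000)
m2 = np.mean(intensity**2) / np.mean(intensity) ** 2

beta = 2.0   # <a^4>/<a^2>^2 for Rayleigh-distributed amplitudes
n_hat = beta / (m2 - 2.0)
print(f"normalized second moment = {m2:.2f}, estimated scatterers/cell = {n_hat:.2f}")
```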

  7. Trapping Elusive Cats: Using Intensive Camera Trapping to Estimate the Density of a Rare African Felid.

    PubMed

    Brassine, Eléanor; Parker, Daniel

    2015-01-01

    Camera trapping studies have become increasingly popular to produce population estimates of individually recognisable mammals. Yet, monitoring techniques for rare species which occur at extremely low densities are lacking. Additionally, species which have unpredictable movements may make obtaining reliable population estimates challenging due to low detectability. Our study explores the effectiveness of intensive camera trapping for estimating cheetah (Acinonyx jubatus) numbers. Using both a more traditional, systematic grid approach and pre-determined, targeted sites for camera placement, the cheetah population of the Northern Tuli Game Reserve, Botswana was sampled between December 2012 and October 2013. Placement of cameras in a regular grid pattern yielded very few (n = 9) cheetah images and these were insufficient to estimate cheetah density. However, pre-selected cheetah scent-marking posts provided 53 images of seven adult cheetahs (0.61 ± 0.18 cheetahs/100 km²). While increasing the length of the camera trapping survey from 90 to 130 days increased the total number of cheetah images obtained (from 53 to 200), no new individuals were recorded and the estimated population density remained stable. Thus, our study demonstrates that targeted camera placement (irrespective of survey duration) is necessary for reliably assessing cheetah densities where populations are naturally very low or dominated by transient individuals. Significantly our approach can easily be applied to other rare predator species. PMID:26698574

  8. Trapping Elusive Cats: Using Intensive Camera Trapping to Estimate the Density of a Rare African Felid

    PubMed Central

    Brassine, Eléanor; Parker, Daniel

    2015-01-01

    Camera trapping studies have become increasingly popular to produce population estimates of individually recognisable mammals. Yet, monitoring techniques for rare species which occur at extremely low densities are lacking. Additionally, species which have unpredictable movements may make obtaining reliable population estimates challenging due to low detectability. Our study explores the effectiveness of intensive camera trapping for estimating cheetah (Acinonyx jubatus) numbers. Using both a more traditional, systematic grid approach and pre-determined, targeted sites for camera placement, the cheetah population of the Northern Tuli Game Reserve, Botswana was sampled between December 2012 and October 2013. Placement of cameras in a regular grid pattern yielded very few (n = 9) cheetah images and these were insufficient to estimate cheetah density. However, pre-selected cheetah scent-marking posts provided 53 images of seven adult cheetahs (0.61 ± 0.18 cheetahs/100 km²). While increasing the length of the camera trapping survey from 90 to 130 days increased the total number of cheetah images obtained (from 53 to 200), no new individuals were recorded and the estimated population density remained stable. Thus, our study demonstrates that targeted camera placement (irrespective of survey duration) is necessary for reliably assessing cheetah densities where populations are naturally very low or dominated by transient individuals. Significantly our approach can easily be applied to other rare predator species. PMID:26698574

  9. A hierarchical model for estimating density in camera-trap studies

    USGS Publications Warehouse

    Royle, J. Andrew; Nichols, J.D.; Karanth, K.U.; Gopalaswamy, A.M.

    2009-01-01

    1. Estimating animal density using capture–recapture data from arrays of detection devices such as camera traps has been problematic due to the movement of individuals and heterogeneity in capture probability among them induced by differential exposure to trapping. 2. We develop a spatial capture–recapture model for estimating density from camera-trapping data which contains explicit models for the spatial point process governing the distribution of individuals and their exposure to and detection by traps. 3. We adopt a Bayesian approach to analysis of the hierarchical model using the technique of data augmentation. 4. The model is applied to photographic capture–recapture data on tigers Panthera tigris in Nagarahole reserve, India. Using this model, we estimate the density of tigers to be 14.3 animals per 100 km2 during 2004. 5. Synthesis and applications. Our modelling framework largely overcomes several weaknesses in conventional approaches to the estimation of animal density from trap arrays. It effectively deals with key problems such as individual heterogeneity in capture probabilities, movement of traps, presence of potential 'holes' in the array and ad hoc estimation of sample area. The formulation thus greatly enhances flexibility in the conduct of field surveys as well as in the analysis of data from studies that may involve physical, photographic or DNA-based 'captures' of individual animals.
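
    A hedged Python sketch of the detection component of a spatial capture–recapture model follows: activity centres drawn from a uniform point process and a half-normal decline of detection probability with distance to each trap. The paper fits the full hierarchical model with Bayesian data augmentation; the sketch only simulates encounter histories under assumed parameter values.

```python
# Illustrative sketch of the spatial capture-recapture detection model: activity
# centres on a square state space, half-normal detection at an array of traps.
import numpy as np

rng = np.random.default_rng(4)

def simulate_scr(n_animals, traps, p0, sigma, n_occasions):
    """Binary encounter histories y[i, j, k] for animal i, trap j, occasion k."""
    centres = rng.uniform(0.0, 10.0, size=(n_animals, 2))          # 10 x 10 km state space
    d2 = ((centres[:, None, :] - traps[None, :, :]) ** 2).sum(-1)  # squared distances
    p = p0 * np.exp(-d2 / (2.0 * sigma**2))                        # half-normal detection
    return rng.random((n_animals, traps.shape[0], n_occasions)) < p[..., None]

# 5 x 5 grid of traps in the centre of the state space (assumed layout)
traps = np.stack(np.meshgrid(np.linspace(3, 7, 5), np.linspace(3, 7, 5)), -1).reshape(-1, 2)
y = simulate_scr(n_animals=40, traps=traps, p0=0.07, sigma=0.8, n_occasions=60)

detected = y.any(axis=(1, 2))
print("animals detected at least once:", int(detected.sum()), "of", y.shape[0])
print("total detections:", int(y.sum()))
```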

  10. Hierarchical models for estimating density from DNA mark-recapture studies

    USGS Publications Warehouse

    Gardner, B.; Royle, J. Andrew; Wegan, M.T.

    2009-01-01

    Genetic sampling is increasingly used as a tool by wildlife biologists and managers to estimate abundance and density of species. Typically, DNA is used to identify individuals captured in an array of traps (e.g., baited hair snares) from which individual encounter histories are derived. Standard methods for estimating the size of a closed population can be applied to such data. However, due to the movement of individuals on and off the trapping array during sampling, the area over which individuals are exposed to trapping is unknown, and so obtaining unbiased estimates of density has proved difficult. We propose a hierarchical spatial capture-recapture model which contains explicit models for the spatial point process governing the distribution of individuals and their exposure to (via movement) and detection by traps. Detection probability is modeled as a function of each individual's distance to the trap. We applied this model to a black bear (Ursus americanus) study conducted in 2006 using a hair-snare trap array in the Adirondack region of New York, USA. We estimated the density of bears to be 0.159 bears/km2, which is lower than the estimated density (0.410 bears/km2) based on standard closed population techniques. A Bayesian analysis of the model is fully implemented in the software program WinBUGS.