Sample records for wavelet-based density estimation

  1. Wavelet-based density estimation and application to process monitoring

    SciTech Connect

    Safavi, A.A.; Chen, J.; Romagnoli, J.A. [Univ. of Sydney, New South Wales (Australia)]

    1997-05-01

    The on-line monitoring and diagnosis of the process operating performance are extremely important parts of the strategies aimed at improving a process and the quality of its products in the long term. An application of wavelets and multiresolution analysis to density estimation and process monitoring is presented. Wavelet-based density-estimation techniques are developed as an alternative and superior method to other common density-estimation techniques. Also shown is the effectiveness of wavelet estimators when the observations are dependent. The resulting density estimators are then used in defining a normal operating region for the process under study so that any abnormal behavior by the process can be monitored. Results of applying these techniques to a typical multivariate chemical process are also presented.

  2. Wavelet-based density estimation for noise reduction in plasma simulations using particles

    SciTech Connect

    Nguyen van yen, Romain [Laboratoire de Meteorologie Dynamique-CNRL, Ecole Normale Superieure]; Del-Castillo-Negrete, Diego B [ORNL]; Schneider, Kai [Universite d'Aix-Marseille]; Farge, Marie [Laboratoire de Meteorologie Dynamique-CNRL, Ecole Normale Superieure]; Chen, Guangye [ORNL]

    2010-01-01

    For given computational resources, one of the main limitations in the accuracy of plasma simulations using particles comes from the noise due to limited statistical sampling in the reconstruction of the particle distribution function. A method based on wavelet multiresolution analysis is proposed and tested to reduce this noise. The method, known as wavelet based density estimation (WBDE), was previously introduced in the statistical literature to estimate probability densities given a finite number of independent measurements. Its novel application to plasma simulations can be viewed as a natural extension of the finite size particles (FSP) approach, with the advantage of estimating more accurately distribution functions that have localized sharp features. The proposed method preserves the moments of the particle distribution function to a good level of accuracy, has no constraints on the dimensionality of the system, does not require an a priori selection of a global smoothing scale, and it is able to adapt locally to the smoothness of the density based on the given discrete particle data. Most importantly, the computational cost of the denoising stage is of the same order as one timestep of a FSP simulation. The method is compared with a recently proposed proper orthogonal decomposition based method, and it is tested with particle data corresponding to strongly collisional, weakly collisional, and collisionless plasma simulations.
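
    A minimal sketch of the WBDE idea summarized above: bin the particle data into a fine histogram, threshold its wavelet coefficients, and invert. The wavelet family, bin count, and universal-threshold rule are illustrative choices (using the PyWavelets package), not the paper's exact settings.

      import numpy as np
      import pywt

      def wbde(samples, nbins=1024, wavelet="db4", level=6):
          """Estimate a 1-D density from samples via wavelet shrinkage."""
          hist, edges = np.histogram(samples, bins=nbins, density=True)
          coeffs = pywt.wavedec(hist, wavelet, level=level)
          # Robust noise scale from the finest details, then the universal threshold.
          sigma = np.median(np.abs(coeffs[-1])) / 0.6745
          thr = sigma * np.sqrt(2.0 * np.log(nbins))
          den = [coeffs[0]] + [pywt.threshold(c, thr, mode="hard") for c in coeffs[1:]]
          dens = np.clip(pywt.waverec(den, wavelet)[:nbins], 0.0, None)
          centers = 0.5 * (edges[:-1] + edges[1:])
          return centers, dens / np.trapz(dens, centers)  # renormalize after clipping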

  3. Wavelet Based Estimation for Univariate Stable Laws

    E-print Network

    Gonçalves, Paulo

    Wavelet Based Estimation for Univariate Stable Laws. Anestis Antoniadis, Laboratoire IMAG. … to implement. This article describes a fast, wavelet-based, regression-type method for estimating the parameters of a stable distribution. Fourier domain representations, combined with a wavelet multiresolution …

  4. Wavelet-based texture measures for semicontinuous stand density estimation from very high resolution optical imagery

    NASA Astrophysics Data System (ADS)

    van Coillie, Frieke M. B.; Verbeke, Lieven P. C.; de Wulf, Robert R.

    2011-01-01

    Stand density, expressed as the number of trees per unit area, is an important forest management parameter. It is used by foresters to evaluate regeneration, to assess the effect of forest management measures, or as an indicator variable for other stand parameters like age, basal area, and volume. In this work, a new density estimation procedure is proposed based on wavelet analysis of very high resolution optical imagery. Wavelet coefficients are related to reference densities on a per segment basis, using an artificial neural network. The method was evaluated on artificial imagery and two very high resolution datasets covering forests in Heverlee, Belgium and Les Beaux de Provence, France. Whenever possible, the method was compared with the well-known local maximum filter. Results show good correspondence between predicted and true stand densities. The average absolute error and the correlation between predicted and true density were 149 trees/ha and 0.91 for the artificial dataset, 100 trees/ha and 0.85 for the Heverlee site, and 49 trees/ha and 0.78 for the Les Beaux de Provence site. The local maximum filter consistently yielded lower accuracies, as it is essentially a tree localization tool, rather than a density estimator.

  5. Wavelet-Based Transistor Parameter Estimation

    Microsoft Academic Search

    Sudipta Majumdar; Harish Parthasarathy

    2010-01-01

    In this paper a wavelet-based parameter estimation method has been proposed for the common emitter transistor amplifier circuit and compared with the least squares method. As the maximal precision of simulation requires the modeling of electronic circuits in terms of device parameters and circuit components, the Volterra model of the common emitter amplifier circuit derived using the Ebers–Moll model and …

  6. Wavelet Based Transistor Parameter Estimation Using Second Order Volterra Model

    Microsoft Academic Search

    Sudipta Majumdar; Harish Parthasarathy

    In this paper, we present a wavelet based parameter estimation technique to estimate the transistor parameter in a common emitter amplifier circuit. The method uses the closed form expression of the second order Volterra model of a common emitter amplifier circuit derived using a perturbation technique and the Ebers–Moll model. Simulations show that the proposed method gives more accurate parameter estimation …

  7. Value-at-risk estimation with wavelet-based extreme value theory: Evidence from emerging markets

    NASA Astrophysics Data System (ADS)

    Cifter, Atilla

    2011-06-01

    This paper introduces wavelet-based extreme value theory (EVT) for univariate value-at-risk estimation. Wavelets and EVT are combined for volatility forecasting to estimate a hybrid model. In the first stage, wavelets are used as a threshold in generalized Pareto distribution, and in the second stage, EVT is applied with a wavelet-based threshold. This new model is applied to two major emerging stock markets: the Istanbul Stock Exchange (ISE) and the Budapest Stock Exchange (BUX). The relative performance of wavelet-based EVT is benchmarked against the Riskmetrics-EWMA, ARMA-GARCH, generalized Pareto distribution, and conditional generalized Pareto distribution models. The empirical results show that the wavelet-based extreme value theory increases the predictive performance of financial forecasting according to the number of violations and tail-loss tests. The superior forecasting performance of the wavelet-based EVT model is also consistent with Basel II requirements, and this new model can be used by financial institutions as well.
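
    For concreteness, a hedged sketch of the peaks-over-threshold step behind an EVT value-at-risk model: fit a generalized Pareto distribution to the losses exceeding a threshold and invert the tail formula. The paper's wavelet-based threshold selection is replaced here by a plain empirical quantile, so this shows the generic construction, not the hybrid model itself.

      import numpy as np
      from scipy.stats import genpareto

      def evt_var(losses, p=0.99, u_quantile=0.95):
          """One-day VaR at level p from a peaks-over-threshold GPD fit."""
          u = np.quantile(losses, u_quantile)        # threshold (plain quantile here)
          exc = losses[losses > u] - u               # excesses over the threshold
          xi, _, beta = genpareto.fit(exc, floc=0.0)
          n, nu = len(losses), len(exc)
          return u + (beta / xi) * (((n / nu) * (1.0 - p)) ** (-xi) - 1.0)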

  8. Wavelet-based image estimation: an empirical Bayes approach using Jeffreys' noninformative prior.

    PubMed

    Figueiredo, M T; Nowak, R D

    2001-01-01

    The sparseness and decorrelation properties of the discrete wavelet transform have been exploited to develop powerful denoising methods. However, most of these methods have free parameters which have to be adjusted or estimated. In this paper, we propose a wavelet-based denoising technique without any free parameters; it is, in this sense, a "universal" method. Our approach uses empirical Bayes estimation based on a Jeffreys' noninformative prior; it is a step toward objective Bayesian wavelet-based denoising. The result is a remarkably simple fixed nonlinear shrinkage/thresholding rule which performs better than other more computationally demanding methods. PMID:18255547
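
    A hedged sketch of a shrinkage rule of the kind described: each noisy wavelet coefficient y is mapped to max(y^2 - 3*sigma^2, 0) / y, where sigma is the noise standard deviation (this is the amplitude-scale-invariant form associated with this line of work; the wavelet family and the robust noise estimate below are my choices, using PyWavelets).

      import numpy as np
      import pywt

      def abe_shrink(y, sigma):
          """Shrink coefficients y to (y^2 - 3 sigma^2)_+ / y (0 where y == 0)."""
          with np.errstate(divide="ignore", invalid="ignore"):
              out = np.maximum(y * y - 3.0 * sigma * sigma, 0.0) / y
          return np.where(y == 0.0, 0.0, out)

      def denoise(signal, wavelet="db8", level=5):
          coeffs = pywt.wavedec(signal, wavelet, level=level)
          sigma = np.median(np.abs(coeffs[-1])) / 0.6745   # robust noise estimate
          shrunk = [coeffs[0]] + [abe_shrink(c, sigma) for c in coeffs[1:]]
          return pywt.waverec(shrunk, wavelet)[: len(signal)]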

  9. A wavelet based image denoising using statistical sampler for Bayesian estimator

    Microsoft Academic Search

    B. A. Kumar; M. Srinivasan; S. Annadurai

    2003-01-01

    This paper presents a new wavelet based image denoising method, which includes a Bayesian framework and classical thresholding methods. The main goal here is computing for each wavelet coefficient the probability of being sufficiently clean. The three main novelties of our approach are: (1) estimating local regularity of an image and distinguishing between useful edges and noise; (2) initializing the

  10. Wavelet-based image estimation: an empirical Bayes approach using Jeffreys' noninformative prior

    Microsoft Academic Search

    Mário A. T. Figueiredo; Robert D. Nowak

    2001-01-01

    The sparseness and decorrelation properties of the discrete wavelet transform have been exploited to develop powerful denoising methods. However, most of these methods have free parameters which have to be adjusted or estimated. In this paper, we propose a wavelet-based denoising technique without any free parameters; it is, in this sense, a "universal" method. …

  11. Wavelet-Based Semiblind Channel Estimation for Ultrawideband OFDM Systems

    Microsoft Academic Search

    S. M. S. Sadough; Mahieddine M. Ichir; Pierre Duhamel; Emmanuel Jaffrot

    2009-01-01

    Ultrawideband (UWB) communications involve very sparse channels, because the bandwidth increase results in a better time resolution. This property is used in this paper to propose an efficient algorithm that jointly estimates the channel and the transmitted symbols. More precisely, this paper introduces an expectation-maximization (EM) algorithm within a wavelet-domain Bayesian framework for semiblind channel estimation of multiband orthogonal frequency

  12. Multiscale Poisson Intensity and Density Estimation

    Microsoft Academic Search

    Rebecca M. Willett; Robert D. Nowak

    2007-01-01

    The nonparametric Poisson intensity and density estimation methods studied in this paper offer near minimax convergence rates for broad classes of densities and intensities with arbitrary levels of smoothness. The methods and theory presented here share many of the desirable features associated with wavelet-based estimators: computational speed, spatial adaptivity, and the capability of detecting discontinuities and …

  13. Fetal QRS detection and heart rate estimation: a wavelet-based approach.

    PubMed

    Almeida, Rute; Gonçalves, Hernâni; Bernardes, João; Rocha, Ana Paula

    2014-08-01

    Fetal heart rate monitoring is used for pregnancy surveillance in obstetric units all over the world but in spite of recent advances in analysis methods, there are still inherent technical limitations that bound its contribution to the improvement of perinatal indicators. In this work, a previously published wavelet transform based QRS detector, validated over standard electrocardiogram (ECG) databases, is adapted to fetal QRS detection over abdominal fetal ECG. Maternal ECG waves were first located using the original detector and afterwards a version with parameters adapted for fetal physiology was applied to detect fetal QRS, excluding signal singularities associated with maternal heartbeats. Single lead (SL) based marks were combined in a single annotator with post processing rules (SLR) from which fetal RR and fetal heart rate (FHR) measures can be computed. Data from PhysioNet with reference fetal QRS locations was considered for validation, with SLR outperforming SL including ICA based detections. The error in estimated FHR using SLR was lower than 20 bpm for more than 80% of the processed files. The median error in 1 min based FHR estimation was 0.13 bpm, with a correlation between reference and estimated FHR of 0.48, which increased to 0.73 when considering only records for which estimated FHR > 110 bpm. This allows us to conclude that the proposed methodology is able to provide a clinically useful estimation of the FHR. PMID:25070210

  14. Wavelet-based analysis of simulated network traffic

    Microsoft Academic Search

    Mitko Gospodinov; Evgeniya Gospodinova

    2006-01-01

    In this paper, a wavelet-based Hurst parameter estimator is applied to the analysis of simulated network traffic based on fractional Gaussian noise. A comparative analysis is made between the results of the wavelet-based estimator and widely used estimators such as the R/S statistic, the variance-time plot, and the periodogram. The Hurst parameter obtained with the wavelet-based estimator has the smallest relative inaccuracy compared …
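
    Wavelet Hurst estimators of this kind are commonly implemented as a logscale diagram: regress log2 of the detail-coefficient variance on the octave index; for fractional Gaussian noise the slope is 2H - 1. A short sketch under that standard formulation (wavelet choice and level count are mine, using PyWavelets):

      import numpy as np
      import pywt

      def hurst_wavelet(x, wavelet="db3", level=8):
          details = pywt.wavedec(x, wavelet, level=level)[1:]   # coarsest detail first
          octaves = np.arange(level, 0, -1)                     # octave j = level .. 1
          logvar = np.log2([np.mean(d * d) for d in details])
          slope = np.polyfit(octaves, logvar, 1)[0]             # slope = 2H - 1
          return (slope + 1.0) / 2.0

      # Sanity check: white noise should give H close to 0.5.
      print(hurst_wavelet(np.random.default_rng(1).standard_normal(2**14)))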

  15. Bounds for the covariance of functions of infinite variance stable random variables with applications to central limit theorems and wavelet-based estimation

    E-print Network

    Pipiras, Vladas; Abry, Patrice

    2007-01-01

    We establish bounds for the covariance of a large class of functions of infinite variance stable random variables, including unbounded functions such as the power function and the logarithm. These bounds involve measures of dependence between the stable variables, some of which are new. The bounds are also used to deduce the central limit theorem for unbounded functions of stable moving average time series. This result extends the earlier results of Tailen Hsing and the authors on central limit theorems for bounded functions of stable moving averages. It can be used to show asymptotic normality of wavelet-based estimators of the self-similarity parameter in fractional stable motions.

  16. Wavelet-based analysis and power law classification of C/NOFS high-resolution electron density data

    NASA Astrophysics Data System (ADS)

    Rino, C. L.; Carrano, C. S.; Roddy, Patrick

    2014-08-01

    This paper applies new wavelet-based analysis procedures to low Earth-orbiting satellite measurements of equatorial ionospheric structure. The analysis was applied to high-resolution data from 285 Communications/Navigation Outage Forecasting System (C/NOFS) satellite orbits sampling the postsunset period at geomagnetic equatorial latitudes. The data were acquired during a period of progressively intensifying equatorial structure. The sampled altitude range varied from 400 to 800 km. The varying scan velocity remained within 20° of the cross-field direction. Time-to-space interpolation generated uniform samples at approximately 8 m. A maximum segmentation length that supports stochastic structure characterization was identified. A two-component inverse power law model was fit to scale spectra derived from each segment together with a goodness-of-fit measure. Inverse power law parameters derived from the scale spectra were used to classify the scale spectra by type. The largest category was characterized by a single inverse power law with a mean spectral index somewhat larger than 2. No systematic departure from the inverse power law was observed to scales greater than 100 km. A small subset of the most highly disturbed passes at the lowest sampled altitudes could be categorized by two-component power law spectra with a range of break scales from less than 100 m to several kilometers. The results are discussed within the context of other analyses of in situ data and spectral characteristics used for scintillation analyses.
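
    A hedged sketch of the kind of two-component inverse power law fit described: grid-search the break scale and fit separate log-log slopes on either side, keeping the break that minimizes the squared residuals. This is a generic piecewise fit, not the paper's estimation procedure or its goodness-of-fit measure.

      import numpy as np

      def two_component_fit(k, S):
          """Fit S(k) ~ k^(-p) piecewise in log-log space with one break."""
          logk, logS = np.log10(k), np.log10(S)
          best = None
          for b in range(3, len(k) - 3):                 # candidate break indices
              fit_lo = np.polyfit(logk[:b], logS[:b], 1)
              fit_hi = np.polyfit(logk[b:], logS[b:], 1)
              r = np.sum((np.polyval(fit_lo, logk[:b]) - logS[:b]) ** 2) \
                + np.sum((np.polyval(fit_hi, logk[b:]) - logS[b:]) ** 2)
              if best is None or r < best[0]:
                  best = (r, k[b], -fit_lo[0], -fit_hi[0])   # break scale, two indices
          return best[1:]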

  17. Locally Adaptive Wavelet-Based Image Denoising using the Gram-Charlier Prior Function

    Microsoft Academic Search

    S. M. Mahbubur Rahman; M. Omair Ahmad; M. N. S. Swamy

    2007-01-01

    Statistical estimation techniques for wavelet-based image denoising use suitable probability density functions (PDFs) as prior functions for the image coefficients. Due to the intrascale dependency of the local neighboring image wavelet coefficients, the prior functions are assumed to be stationary. In this paper, it is shown that the stationary Gram-Charlier (GC) PDF models the image coefficients …

  18. A New Wavelet Based Image Denoising Method

    Microsoft Academic Search

    Jin Quan; William G. Wee; Chia Y. Han

    2012-01-01

    This paper proposes a new wavelet based image denoising method by using linear elementary parameterized denoising functions in the form of derivatives of Gaussian of a set of estimated wavelet coefficients. These coefficients are derived from an improved context modeling procedure in terms of mean square error estimation combining inter- and intra-sub band data. The denoising method results in a

  19. Template Learning from Atomic Representations: A Wavelet-based

    E-print Network

    Scott, Clayton

    … and image processing, current wavelet-based image models are inadequate for modeling patterns in images … to devise powerful compression, denoising and estimation methods [1]. Although wavelets provide sparse representations for many real world images, it is difficult to develop a wavelet-based statistical model for a given …

  20. Information geometric density estimation

    NASA Astrophysics Data System (ADS)

    Sun, Ke; Marchand-Maillet, Stéphane

    2015-01-01

    We investigate kernel density estimation where the kernel function varies from point to point. Density estimation in the input space means to find a set of coordinates on a statistical manifold. This novel perspective helps to combine efforts from information geometry and machine learning to spawn a family of density estimators. We present example models with simulations. We discuss the principle and theory of such density estimation.

  1. Wavelet Based Image Denoising Technique

    Microsoft Academic Search

    Dharmpal D Doye; Sachin D Ruikar

    2011-01-01

    This paper proposes different approaches to wavelet based image denoising. The search for efficient image denoising methods is still a valid challenge at the crossing of functional analysis and statistics. In spite of the sophistication of the recently proposed methods, most algorithms have not yet attained a desirable level of applicability. Wavelet algorithms are useful tools for signal processing …

  2. A Wavelet-Based Image Denoising Technique Using Spatial Priors

    Microsoft Academic Search

    Aleksandra Pizurica; Wilfried Philips; Ignace Lemahieu; Marc Acheroy

    2000-01-01

    We propose a new wavelet-based method for image denoising that applies the Bayesian framework, using prior knowledge about the spatial clustering of the wavelet coefficients. Local spatial interactions of the wavelet coefficients are modeled by adopting a Markov Random Field model. An iterative updating technique known as iterated conditional modes (ICM) is applied to estimate the binary masks containing the

  3. PARAMETER ESTIMATION FROM TIME-SERIES DATA WITH CORRELATED ERRORS: A WAVELET-BASED METHOD AND ITS APPLICATION TO TRANSIT LIGHT CURVES

    SciTech Connect

    Carter, Joshua A.; Winn, Joshua N., E-mail: carterja@mit.edu, E-mail: jwinn@mit.edu [Department of Physics and Kavli Institute for Astrophysics and Space Research, Massachusetts Institute of Technology, Cambridge, MA 02139 (United States)]

    2009-10-10

    We consider the problem of fitting a parametric model to time-series data that are afflicted by correlated noise. The noise is represented by a sum of two stationary Gaussian processes: one that is uncorrelated in time, and another that has a power spectral density varying as 1/f{sup gamma}. We present an accurate and fast [O(N)] algorithm for parameter estimation based on computing the likelihood in a wavelet basis. The method is illustrated and tested using simulated time-series photometry of exoplanetary transits, with particular attention to estimating the mid-transit time. We compare our method to two other methods that have been used in the literature, the time-averaging method and the residual-permutation method. For noise processes that obey our assumptions, the algorithm presented here gives more accurate results for mid-transit times and truer estimates of their uncertainties.
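
    A simplified sketch of the wavelet-basis likelihood idea: transform the residuals with a DWT and evaluate an independent Gaussian likelihood whose detail variance grows geometrically toward coarse octaves for 1/f{sup gamma} noise. The variance parameterization below (band variance proportional to 2^(gamma*m) with octave m increasing toward coarse scales; approximation coefficients ignored) is a first-principles simplification, not the paper's exact calibration.

      import numpy as np
      import pywt

      def wavelet_loglike(resid, sigma_w, sigma_r, gamma, wavelet="haar"):
          coeffs = pywt.wavedec(resid, wavelet)
          level = len(coeffs) - 1
          ll = 0.0
          for i, c in enumerate(coeffs[1:], start=1):
              m = level - i + 1               # octave: m = level (coarse) .. 1 (fine)
              var = sigma_r**2 * 2.0 ** (gamma * m) + sigma_w**2
              ll += -0.5 * np.sum(c * c / var + np.log(2.0 * np.pi * var))
          return ll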

  4. Wavelet-based polarimetry analysis

    NASA Astrophysics Data System (ADS)

    Ezekiel, Soundararajan; Harrity, Kyle; Farag, Waleed; Alford, Mark; Ferris, David; Blasch, Erik

    2014-06-01

    Wavelet transformation has become a cutting edge and promising approach in the field of image and signal processing. A wavelet is a waveform of effectively limited duration that has an average value of zero. Wavelet analysis is done by breaking up the signal into shifted and scaled versions of the original signal. The key advantage of a wavelet is that it is capable of revealing smaller changes, trends, and breakdown points that are not revealed by other techniques such as Fourier analysis. The phenomenon of polarization has been studied for quite some time and is a very useful tool for target detection and tracking. Long Wave Infrared (LWIR) polarization is beneficial for detecting camouflaged objects and is a useful approach when identifying and distinguishing manmade objects from natural clutter. In addition, the Stokes Polarization Parameters, which are calculated from 0°, 45°, 90°, 135°, right circular, and left circular intensity measurements, provide spatial orientations of target features and suppress natural features. In this paper, we propose a wavelet-based polarimetry analysis (WPA) method to analyze Long Wave Infrared Polarimetry Imagery to discriminate targets such as dismounts and vehicles from background clutter. These parameters can be used for image thresholding and segmentation. Experimental results show the wavelet-based polarimetry analysis is efficient and can be used in a wide range of applications such as change detection, shape extraction, target recognition, and feature-aided tracking.
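
    The Stokes parameters named in this abstract follow from the six intensity measurements in the standard textbook way (a general relation, not code from the paper):

      import numpy as np

      def stokes(i0, i45, i90, i135, irc, ilc):
          s0 = i0 + i90                  # total intensity
          s1 = i0 - i90                  # 0 deg vs 90 deg linear preference
          s2 = i45 - i135                # +45 deg vs -45 deg linear preference
          s3 = irc - ilc                 # right vs left circular preference
          dolp = np.hypot(s1, s2) / s0   # degree of linear polarization
          return s0, s1, s2, s3, dolp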

  5. Maximum Likelihood Wavelet Density Estimation With Applications to Image and Shape Matching

    PubMed Central

    Peter, Adrian M.; Rangarajan, Anand

    2010-01-01

    Density estimation for observational data plays an integral role in a broad spectrum of applications, e.g., statistical data analysis and information-theoretic image registration. Of late, wavelet-based density estimators have gained in popularity due to their ability to approximate a large class of functions, adapting well to difficult situations such as when densities exhibit abrupt changes. The decision to work with wavelet density estimators brings along with it theoretical considerations (e.g., non-negativity, integrability) and empirical issues (e.g., computation of basis coefficients) that must be addressed in order to obtain a bona fide density. In this paper, we present a new method to accurately estimate a non-negative density which directly addresses many of the problems in practical wavelet density estimation. We cast the estimation procedure in a maximum likelihood framework which estimates the square root of the density, √p, allowing us to obtain the natural non-negative density representation (√p)². Analysis of this method will bring to light a remarkable theoretical connection with the Fisher information of the density and, consequently, lead to an efficient constrained optimization procedure to estimate the wavelet coefficients. We illustrate the effectiveness of the algorithm by evaluating its performance on mutual information-based image registration, shape point set alignment, and empirical comparisons to known densities. The present method is also compared to fixed and variable bandwidth kernel density estimators. PMID:18390355
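
    The square-root trick is the heart of the method: expand the square root of the density in an orthonormal basis, so the unit-norm constraint on the coefficients makes (√p)² a bona fide density automatically. A toy sketch of maximum likelihood on that coefficient sphere, with a cosine basis standing in for the paper's wavelet expansion:

      import numpy as np

      def basis(x, K, a, b):
          """Orthonormal cosine basis on [a, b] (stand-in for a wavelet basis)."""
          t = (np.asarray(x) - a) / (b - a)
          rows = [np.ones_like(t) / np.sqrt(b - a)]
          for k in range(1, K):
              rows.append(np.sqrt(2.0 / (b - a)) * np.cos(np.pi * k * t))
          return np.array(rows)                      # shape (K, len(x))

      def fit_sqrt_density(samples, K=12, steps=500, lr=0.02):
          a, b = samples.min() - 1.0, samples.max() + 1.0
          B = basis(samples, K, a, b)
          c = np.zeros(K); c[0] = 1.0                # start at the uniform density
          for _ in range(steps):
              s = c @ B                              # sqrt(p) at the sample points
              safe = np.where(np.abs(s) < 1e-8, 1e-8, s)
              c = c + lr * 2.0 * (B / safe).mean(axis=1)   # ascend mean log(s^2)
              c /= np.linalg.norm(c)                 # project back to the unit sphere
          return lambda x: (c @ basis(x, K, a, b)) ** 2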

  6. Wavelet-based morphological correlation

    NASA Astrophysics Data System (ADS)

    Wang, Qu; Chen, Li; Lei, Liang; Wang, Bo

    2010-10-01

    A wavelet-based morphological correlation (WBMC) is proposed as a new architecture to improve the properties of the classical morphological correlation (MC). For the WBMC, a dilated wavelet intensity function is introduced to filter the joint power spectrum (JPS) of the MC before final inverse Fourier transform. Computer simulation results show that, as compared with the linear correlation (LC), the conventional MC and the joint wavelet transform correlation (JWTC), the WBMC provides better discrimination capability with sharp and unmistakable correlation signal and its performance metrics are more stable under input outlier noise (salt-and-pepper noise). Although the WBMC loses illumination-invariance when input illumination factor is larger than unity, considerable discrimination capability is still maintained.

  7. Wavelet Based Color Image Compression and Mathematical Analysis of Sign Entropy Coding

    E-print Network

    Paris-Sud XI, Université de

    Wavelet Based Color Image Compression and Mathematical Analysis of Sign Entropy Coding. … One of the advantages of the Discrete Wavelet Transform (DWT) compared to Fourier Transform (e…

  8. Wavelet Based Methods in Image Processing

    E-print Network

    Broughton, S. Allen

    Wavelet Based Methods in Image Processing … · image processing course co-taught with Ed Doering, Fall 1997 · attempt to make graduate level … Outline of Talks · Lecture 1: Introduction to Image Processing, restoration, denoising, edge …

  9. An Overview of Wavelet Based Multiresolution Analyses

    Microsoft Academic Search

    Björn Jawerth; Wim Sweldens

    1993-01-01

    In this paper we give an overview of some wavelet based multiresolution analyses. First, we briefly discuss the continuous wavelet transform in its simplest form. Then we give the definition of multiresolution analysis and show how wavelets fit into it. We take a closer look at orthogonal, biorthogonal and semiorthogonal wavelets. The fast wavelet transform, wavelets on closed sets (boundary wavelets), multidimensional wavelets and …

  10. Estimation of coastal density gradients

    NASA Astrophysics Data System (ADS)

    Howarth, M. J.; Palmer, M. R.; Polton, J. A.; O'Neill, C. K.

    2012-04-01

    Density gradients in coastal regions with significant freshwater input are large and variable and are a major control of nearshore circulation. However their measurement is difficult, especially where the gradients are largest close to the coast, with significant uncertainties because of a variety of factors: spatial and time scales are small, tidal currents are strong and water depths shallow. Whilst temperature measurements are relatively straightforward, measurements of salinity (the dominant control of spatial variability) can be less reliable in turbid coastal waters. Liverpool Bay has strong tidal mixing and receives fresh water principally from the Dee, Mersey, Ribble and Conwy estuaries, each with different catchment influences. Horizontal and vertical density gradients are variable both in space and time. The water column stratifies intermittently. A Coastal Observatory has been operational since 2002 with regular (quasi monthly) CTD surveys on a 9 km grid, an in situ station, an instrumented ferry travelling between Birkenhead and Dublin and a shore-based HF radar system measuring surface currents and waves. These measurements are complementary, each having different space-time characteristics. For coastal gradients the ferry is particularly useful since measurements are made right from the mouth of the Mersey. From measurements at the in situ site alone density gradients can only be estimated from the tidal excursion. A suite of coupled physical, wave and ecological models are run in association with these measurements. The models, here on a 1.8 km grid, enable detailed estimation of nearshore density gradients, provided appropriate river run-off data are available. Examples are presented of the density gradients estimated from the different measurements and models, together with accuracies and uncertainties, showing that systematic time series measurements within a few kilometres of the coast are a high priority. (Here gliders are an exciting prospect for detailed regular measurements to fill this gap.) The consequences for and sensitivity of circulation estimates are presented using both numerical and analytic models.

  11. Adaptive wavelet-based deconvolution method for remote sensing imaging.

    PubMed

    Zhang, Wei; Zhao, Ming; Wang, Zhile

    2009-08-20

    Fourier-based deconvolution (FoD) techniques, such as modulation transfer function compensation, are commonly employed in remote sensing. However, the noise is strongly amplified by FoD and is colored, thus producing poor visual quality. We propose an adaptive wavelet-based deconvolution algorithm for remote sensing called wavelet denoise after Laplacian-regularized deconvolution (WDALRD) to overcome the colored noise and to preserve the textures of the restored image. This algorithm adaptively denoises the FoD result on a wavelet basis. The term "adaptive" means that the wavelet-based denoising procedure requires no parameter to be estimated or empirically set, and thus the inhomogeneous Laplacian prior and the Jeffreys hyperprior are proposed. Maximum a posteriori estimation based on such a prior and hyperprior leads us to an adaptive and efficient nonlinear thresholding estimator, and therefore WDALRD is computationally inexpensive and fast. Experimentally, textures and edges of the restored image are well preserved and sharp, while the homogeneous regions remain noise free, so WDALRD gives satisfactory visual quality. PMID:19696869
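
    A hedged sketch of the two-stage structure described (Fourier deconvolution with a Laplacian penalty, then wavelet-domain denoising). The fixed regularization weight and the soft-threshold rule below are simple stand-ins for the paper's adaptive, parameter-free estimators.

      import numpy as np
      import pywt

      def fourier_deconv(img, psf, lam=0.01):
          """Laplacian-regularized deconvolution; psf: same shape as img, centered."""
          H = np.fft.fft2(np.fft.ifftshift(psf))
          lap = np.zeros(img.shape)
          lap[0, 0] = 4
          lap[0, 1] = lap[1, 0] = lap[0, -1] = lap[-1, 0] = -1
          L = np.fft.fft2(lap)
          X = np.conj(H) * np.fft.fft2(img) / (np.abs(H) ** 2 + lam * np.abs(L) ** 2)
          return np.real(np.fft.ifft2(X))

      def deconv_then_denoise(img, psf):
          deconv = fourier_deconv(img, psf)
          coeffs = pywt.wavedec2(deconv, "db4", level=3)
          sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745   # finest diagonal band
          thr = sigma * np.sqrt(2.0 * np.log(img.size))
          den = [coeffs[0]] + [tuple(pywt.threshold(d, thr, "soft") for d in lvl)
                               for lvl in coeffs[1:]]
          return pywt.waverec2(den, "db4")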

  12. Wavelet based recognition for pulsar signals

    NASA Astrophysics Data System (ADS)

    Shan, H.; Wang, X.; Chen, X.; Yuan, J.; Nie, J.; Zhang, H.; Liu, N.; Wang, N.

    2015-06-01

    A signal from a pulsar can be decomposed into a set of features. This set is a unique signature for a given pulsar. It can be used to decide whether a pulsar is newly discovered or not. Features can be constructed from coefficients of a wavelet decomposition. Two types of wavelet based pulsar features are proposed. The energy based features reflect the multiscale distribution of the energy of coefficients. The singularity based features first classify the signals into a class with one peak and a class with two peaks by exploring the number of the straight wavelet modulus maxima lines perpendicular to the abscissa, and then implement further classification according to the features of skewness and kurtosis. Experimental results show that the wavelet based features achieve better performance than the shape parameter based features, not only in clustering and classification, but also in the error rates of the recognition tasks.

  13. Bayesian Wavelet-Based Image Denoising Using the Gauss-Hermite Expansion

    Microsoft Academic Search

    S. M. Mahbubur Rahman; M. Omair Ahmad; M. N. S. Swamy

    2008-01-01

    The probability density functions (PDFs) of the wavelet coefficients play a key role in many wavelet-based image processing algorithms, such as denoising. The conventional PDFs usually have a limited number of parameters that are calculated from the first few moments only. Consequently, such PDFs cannot be made to fit very well with the empirical PDF of the wavelet coefficients of

  14. GSAShrink: A Novel Iterative Approach for Wavelet-Based Image Denoising

    Microsoft Academic Search

    Alexandre L. M. Levada; Alberto Tannús; Nelson D. A. Mascarenhas

    2009-01-01

    In this paper we propose a novel iterative algorithm for wavelet-based image denoising following a Maximum a Posteriori (MAP) approach. The wavelet shrinkage problem is modeled according to the Bayesian paradigm, providing a strong and extremely flexible framework for solving general image denoising problems. To approximate the MAP estimator, we propose GSAShrink, a modified version of a known combi…

  15. Wavelet Based Image Denoising with A Mixture of Gaussian Distributions with Local Parameters

    Microsoft Academic Search

    H. Rabbani; M. Vafadoost; I. Selesnick

    2006-01-01

    The performance of various estimators, such as maximum a posteriori (MAP) is strongly dependent on correctness of the proposed model for noise-free data distribution. Therefore, the selection of a proper model for distribution of wavelet coefficients is very important in the wavelet based image denoising. This paper presents a new image denoising algorithm based on the modeling of wavelet coefficients

  16. WAVELET BASED IMAGE DENOISING BASED ON A MIXTURE OF LAPLACE DISTRIBUTIONS

    Microsoft Academic Search

    H. RABBANI; M. VAFADOOST

    The performance of various estimators, such as maximum a posteriori (MAP), strongly depends on correctness of the proposed model for distribution of noise-free data. Therefore, the selection of a proper model for the distribution of wavelet coefficients is very important in wavelet based image denoising. This paper presents a new image denoising algorithm based on the modeling of wavelet coefficients

  17. A wavelet based technique for suppression of EMG noise and motion artifact in ambulatory ECG

    Microsoft Academic Search

    P. Mithun; Prem C. Pandey; Toney Sebastian; Prashant Mishra; Vinod K. Pandey

    2011-01-01

    A wavelet-based denoising technique is investigated for suppressing EMG noise and motion artifact in ambulatory ECG. EMG noise is reduced by thresholding the wavelet coefficients using an improved thresholding function combining the features of hard and soft thresholding. Motion artifact is reduced by limiting the wavelet coefficients. Thresholds for both the denoising steps are estimated using the statistics of the
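
    An improved threshold combining hard and soft behavior is commonly written as a "firm" rule: zero below a lower threshold, identity above an upper one, and a linear transition in between. A sketch of that general shape (the paper's exact function and its statistics-based threshold choices may differ):

      import numpy as np

      def firm_threshold(w, t1, t2):
          """Hard/soft hybrid: 0 below t1, identity above t2, linear in between."""
          a = np.abs(w)
          mid = t2 * (a - t1) / (t2 - t1)       # linear transition on [t1, t2]
          return np.sign(w) * np.where(a <= t1, 0.0, np.where(a >= t2, a, mid))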

  18. Wavelet-based ultrasound image denoising: performance analysis and comparison.

    PubMed

    Rizi, F Yousefi; Noubari, H Ahmadi; Setarehdan, S K

    2011-01-01

    Ultrasound images are generally affected by multiplicative speckle noise, which is mainly due to the coherent nature of the scattering phenomenon. Speckle noise filtering is thus a critical pre-processing step in medical ultrasound imaging provided that the diagnostic features of interest are not lost. A comparative study of the performance of alternative wavelet based ultrasound image denoising methods is presented in this article. In particular, the contourlet and curvelet techniques with dual tree complex and real and double density wavelet transform denoising methods were applied to real ultrasound images and results were quantitatively compared. The results show that curvelet-based method performs superior as compared to other methods and can effectively reduce most of the speckle noise content of a given image. PMID:22255196

  19. Image Denoising via Bayesian Estimation of Statistical Parameter Using Generalized Gamma Density Prior in Gaussian Noise Model

    NASA Astrophysics Data System (ADS)

    Kittisuwan, Pichid

    2015-03-01

    The application of image processing in industry has shown remarkable success over the last decade, for example, in security and telecommunication systems. The denoising of natural image corrupted by Gaussian noise is a classical problem in image processing. So, image denoising is an indispensable step during image processing. This paper is concerned with dual-tree complex wavelet-based image denoising using Bayesian techniques. One of the cruxes of the Bayesian image denoising algorithms is to estimate the statistical parameter of the image. Here, we employ maximum a posteriori (MAP) estimation to calculate local observed variance with generalized Gamma density prior for local observed variance and Laplacian or Gaussian distribution for noisy wavelet coefficients. Evidently, our selection of prior distribution is motivated by efficient and flexible properties of generalized Gamma density. The experimental results show that the proposed method yields good denoising results.

  20. Improved Astronomical Inferences via Nonparametric Density Estimation

    E-print Network

    Wolfe, Patrick J.

    Improved Astronomical Inferences via Nonparametric Density Estimation. Chad M. Schafer, InCA Group. Motivation: theory predicts the distribution of observables as a function of cosmological … difficult to justify. Nonparametric density estimation drops these restrictions …

  1. Adaptively wavelet-based image denoising algorithm with edge preserving

    NASA Astrophysics Data System (ADS)

    Tan, Yihua; Tian, Jinwen; Liu, Jian

    2006-02-01

    A new wavelet-based image denoising algorithm, which exploits the edge information hidden in the corrupted image, is presented. First, a Canny-like edge detector identifies the edges in each subband. Second, wavelet coefficients in neighboring scales are multiplied to suppress the noise while magnifying the edge information, and the result is used to exclude fake edges. Isolated edge pixels are also identified as noise. Unlike thresholding methods, we then use a local window filter in the wavelet domain to remove noise, in which the variance estimation is elaborated to utilize the edge information. This method is adaptive to local image details, and can achieve better performance than state-of-the-art methods.

  2. QUANTIFYING DEMOCRACY OF WAVELET BASES IN LORENTZ SPACES

    E-print Network

    Hernández, Eugenio

    QUANTIFYING DEMOCRACY OF WAVELET BASES IN LORENTZ SPACES. EUGENIO HERNÁNDEZ, JOSÉ MARÍA MARTELL. … it is interesting to ask how far wavelet bases are from being democratic in L^{p,q}(R^d), p ≠ q. To quantify democracy …

  3. Performance Factors Analysis of a Wavelet-based Watermarking Method

    Microsoft Academic Search

    Chaw-seng Woo; Jiang Du; Binh Pham

    2005-01-01

    The essential performance metrics of a robust watermark include robustness, imperceptibility, watermark capacity and security. In addition, computational cost is important for practicality. Wavelet-based image watermarking methods exploit the frequency information and spatial information of the transformed data in multiple resolutions to gain robustness. Although the Human Visual System (HVS) model offers imperceptibility in wavelet-based watermarking, it suffers high computational

  4. Wavelet-Based Bootstrapping for Non-Gaussian Time Series

    E-print Network

    Percival, Don

    Wavelet-Based Bootstrapping for Non-Gaussian Time Series. Don Percival, Applied Physics Laboratory. … time series) · review one wavelet-based approach to bootstrapping (Percival, Sardy and Davison, 2001) … non-Gaussian time series · demonstrate methodology on time series related to BMW stock · conclude with some remarks

  5. Wavelet-Based Bootstrapping for Non-Gaussian Time Series

    E-print Network

    Percival, Don

    Wavelet-Based Bootstrapping for Non-Gaussian Time Series. Don Percival, Applied Physics Laboratory. … the sampling variability in statistics computed from a time series X0, X1, . . . , XN-1? · start with some correlated time series … · review one wavelet-based approach to bootstrapping (Percival, Sardy and Davison …

  6. Wavelet-Based Bootstrapping for Non-Gaussian Time Series

    E-print Network

    Percival, Don

    Wavelet-Based Bootstrapping for Non-Gaussian Time Series. Don Percival, Applied Physics Laboratory. … the sampling variability in statistics computed from a time series X0, X1, . . . , XN-1? · start with some correlated time series … · review previously proposed wavelet-based approach to bootstrapping (Percival …

  7. AN OVERVIEW OF WAVELET BASED MULTIRESOLUTION ANALYSES \\Lambda

    E-print Network

    Sweldens, Wim

    AN OVERVIEW OF WAVELET BASED MULTIRESOLUTION ANALYSES. BJÖRN JAWERTH AND WIM SWELDENS. Abstract: In this paper we present an overview of wavelet based multiresolution analyses. First, we briefly discuss the continuous wavelet transform in its simplest form. Then we give the definition of a multiresolution analysis and show how wavelets fit into it. We take a closer look …

  8. Discrete Probability Density Estimation Using Multirate DSP Models P. P. Vaidyanathan and Byung-Jun Yoon

    E-print Network

    Yoon, Byung-Jun

    … from wavelet based approaches is also indicated where appropriate. In the final form, the probability … the probability density function (pdf) of a continuous random variable v from measurements has been … on multirate filters and filter banks and demonstrate their advantages. The analogy to continuous models …

  9. Comparative Analysis of Wavelet-Based Scale-Invariant Feature Extraction Using Different Wavelet Bases

    Microsoft Academic Search

    Joohyun Lim; Youngouk Kim; Joonki Paik

    2009-01-01

    In this paper, we present comparative analysis of scale-invariant feature extraction using different wavelet bases. The main advantage of the wavelet transform is the multi-resolution analysis. Furthermore, wavelets enable localization in both space and frequency domains and high-frequency salient feature detection. Wavelet transforms can use various basis functions. This research aims at comparative analysis of Daubechies, Haar and Gabor wavelets …

  10. Density Estimation with Replicate Heteroscedastic Measurements.

    PubMed

    McIntyre, Julie; Stefanski, Leonard A

    2011-02-01

    We present a deconvolution estimator for the density function of a random variable from a set of independent replicate measurements. We assume that measurements are made with normally distributed errors having unknown and possibly heterogeneous variances. The estimator generalizes the deconvoluting kernel density estimator of Stefanski and Carroll (1990), with error variances estimated from the replicate observations. We derive expressions for the integrated mean squared error and examine its rate of convergence as n → ∞ and the number of replicates is fixed. We investigate the finite-sample performance of the estimator through a simulation study and an application to real data. PMID:21311734
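
    A hedged sketch of a deconvoluting kernel density estimator in this spirit: the kernel's Fourier transform is divided by the (normal) error characteristic function, with a per-subject error variance estimated from the replicates. The kernel with characteristic function (1 - t^2)^3 and the numerical integration grid are illustrative choices, not the paper's construction.

      import numpy as np

      def deconv_kde(xgrid, reps, h):
          """reps: array (n, r) of r replicate measurements per subject."""
          w = reps.mean(axis=1)                          # subject means
          s2 = reps.var(axis=1, ddof=1) / reps.shape[1]  # error variance of each mean
          t = np.linspace(-1.0, 1.0, 401)
          phi_k = (1.0 - t**2) ** 3                      # kernel characteristic function
          dens = np.zeros_like(xgrid, dtype=float)
          for wj, s2j in zip(w, s2):
              u = (xgrid - wj) / h
              integrand = phi_k * np.exp(s2j * t**2 / (2.0 * h**2))  # deconvolution
              dens += np.trapz(np.cos(np.outer(u, t)) * integrand, t, axis=1)
          return dens / (2.0 * np.pi * len(w) * h)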

  11. Embedded wavelet-based compression of hyperspectral imagery using tarp coding

    Microsoft Academic Search

    Yonghui Wang; Justin T. Rucker; James E. Fowler

    2003-01-01

    An embedded wavelet-based coder for the compression of hyperspectral imagery is described. The proposed coder, 3D tarp, employs an explicit estimate of the probability of coefficient significance to drive a nonadaptive arithmetic coder, resulting in a simple implementation suited to resource-limited on-board processing. The performance of the proposed 3D tarp coder is compared to that of other …

  12. Topics in global convergence of density estimates

    NASA Technical Reports Server (NTRS)

    Devroye, L.

    1982-01-01

    The problem of estimating a density f on R^d from a sample X(1),...,X(n) of independent identically distributed random vectors is critically examined, and some recent results in the field are reviewed. The following statements are qualified: (1) for any sequence of density estimates f(n), an arbitrarily slow rate of convergence to 0 is possible for E(∫|f(n) - f|); (2) in theoretical comparisons of density estimates, ∫|f(n) - f| should be used and not ∫|f(n) - f|^p, p > 1; and (3) for most reasonable nonparametric density estimates, either there is convergence of ∫|f(n) - f| (and then the convergence is in the strongest possible sense for all f), or there is no convergence (even in the weakest possible sense for a single f). There is no intermediate situation.

  13. Wavelet-based fingerprint image retrieval

    NASA Astrophysics Data System (ADS)

    Montoya Zegarra, Javier A.; Leite, Neucimar J.; da Silva Torres, Ricardo

    2009-05-01

    This paper presents a novel approach for personal identification based on a wavelet-based fingerprint retrieval system which encompasses three image retrieval tasks, namely, feature extraction, similarity measurement, and feature indexing. We propose the use of different types of Wavelets for representing and describing the textural information presented in fingerprint images in a compact way. For that purpose, the feature vectors used to characterize the fingerprints are obtained by computing the mean and the standard deviation of the decomposed images in the wavelet domain. These feature vectors are used both to retrieve the most similar fingerprints, given a query image, and their indexation is used to reduce the search spaces of candidate images. The different types of Wavelets used in our study include: Gabor wavelets, tree-structured wavelet decomposition using both orthogonal and bi-orthogonal filter banks, as well as the steerable wavelets. To evaluate the retrieval accuracy of the proposed approach, a total number of eight different data sets were considered. We also took into account different combinations of the above wavelets with six similarity measures. The results show that the Gabor wavelets combined with the Square Chord similarity measure achieves the best retrieval effectiveness.

  14. Comparative Analysis of Wavelet-Based Scale-Invariant Feature Extraction Using Different Wavelet Bases

    NASA Astrophysics Data System (ADS)

    Lim, Joohyun; Kim, Youngouk; Paik, Joonki

    In this paper, we present comparative analysis of scale-invariant feature extraction using different wavelet bases. The main advantage of the wavelet transform is the multi-resolution analysis. Furthermore, wavelets enable localization in both space and frequency domains and high-frequency salient feature detection. Wavelet transforms can use various basis functions. This research aims at comparative analysis of Daubechies, Haar and Gabor wavelets for scale-invariant feature extraction. Experimental results show that Gabor wavelets outperform Daubechies and Haar wavelets in the sense of both objective and subjective measures.

  15. Wavelet-based analysis of circadian behavioral rhythms.

    PubMed

    Leise, Tanya L

    2015-01-01

    The challenging problems presented by noisy biological oscillators have led to the development of a great variety of methods for accurately estimating rhythmic parameters such as period and amplitude. This chapter focuses on wavelet-based methods, which can be quite effective for assessing how rhythms change over time, particularly if time series are at least a week in length. These methods can offer alternative views to complement more traditional methods of evaluating behavioral records. The analytic wavelet transform can estimate the instantaneous period and amplitude, as well as the phase of the rhythm at each time point, while the discrete wavelet transform can extract the circadian component of activity and measure the relative strength of that circadian component compared to those in other frequency bands. Wavelet transforms do not require the removal of noise or trend, and can, in fact, be effective at removing noise and trend from oscillatory time series. The Fourier periodogram and spectrogram are reviewed, followed by descriptions of the analytic and discrete wavelet transforms. Examples illustrate application of each method and their prior use in chronobiology is surveyed. Issues such as edge effects, frequency leakage, and implications of the uncertainty principle are also addressed. PMID:25662453

  16. ESTIMATES OF BIOMASS DENSITY FOR TROPICAL FORESTS

    EPA Science Inventory

    An accurate estimation of the biomass density in forests is a necessary step in understanding the global carbon cycle and production of other atmospheric trace gases from biomass burning. In this paper the authors summarize the various approaches that have been developed for estimating…

  17. A closed form solution of MMSE using multivariate radial-exponential priors for wavelet-based image denoising

    Microsoft Academic Search

    P. Kittisuwan; W. Asdornwised

    2008-01-01

    The performance of various estimators, such as minimum mean square error (MMSE), is strongly dependent on correctness of the proposed model for original data distribution. Therefore, the selection of a proper model for the distribution of wavelet coefficients is important in wavelet based image denoising. This paper presents a new image denoising algorithm based on the modeling of wavelet coefficients in …

  18. A wavelet-based baseline drift correction method for grounded electrical source airborne transient electromagnetic signals

    NASA Astrophysics Data System (ADS)

    Wang, Yuan; Ji, Yanju; Li, Suyi; Lin, Jun; Zhou, Fengdao; Yang, Guihong

    2013-09-01

    A grounded electrical source airborne transient electromagnetic (GREATEM) system on an airship enjoys high depth of prospecting and spatial resolution, as well as outstanding detection efficiency and easy flight control. However, the movement and swing of the front-fixed receiving coil can cause severe baseline drift, leading to inferior resistivity image formation. Consequently, the reduction of baseline drift of GREATEM is of vital importance to inversion explanation. To correct the baseline drift, a traditional interpolation method estimates the baseline `envelope' using the linear interpolation between the calculated start and end points of all cycles, and obtains the corrected signal by subtracting the envelope from the original signal. However, the effectiveness and efficiency of the removal is found to be low. Considering the characteristics of the baseline drift in GREATEM data, this study proposes a wavelet-based method based on multi-resolution analysis. The optimal wavelet basis and decomposition levels are determined through the iterative comparison of trial and error. This application uses the sym8 wavelet with 10 decomposition levels, and obtains the approximation at level-10 as the baseline drift, then gets the corrected signal by removing the estimated baseline drift from the original signal. To examine the performance of our proposed method, we establish a dipping sheet model and calculate the theoretical response. Through simulations, we compare the signal-to-noise ratio, signal distortion, and processing speed of the wavelet-based method and those of the interpolation method. Simulation results show that the wavelet-based method outperforms the interpolation method. We also use field data to evaluate the methods, compare the depth section images of apparent resistivity using the original signal, the interpolation-corrected signal and the wavelet-corrected signal, respectively. The results confirm that our proposed wavelet-based method is an effective, practical method to remove the baseline drift of GREATEM signals and its performance is significantly superior to the interpolation method.
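
    The correction described here reduces to a few lines with a discrete wavelet transform library: decompose with sym8 to 10 levels, reconstruct the level-10 approximation alone as the baseline, and subtract it. A minimal sketch (using PyWavelets, and assuming the record is long enough to support 10 decomposition levels):

      import numpy as np
      import pywt

      def remove_baseline(sig, wavelet="sym8", level=10):
          coeffs = pywt.wavedec(sig, wavelet, level=level)
          approx_only = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
          baseline = pywt.waverec(approx_only, wavelet)[: len(sig)]
          return sig - baseline, baseline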

  19. Quantum statistical inference for density estimation

    SciTech Connect

    Silver, R.N.; Martz, H.F.; Wallstrom, T.

    1993-11-01

    A new penalized likelihood method for non-parametric density estimation is proposed, which is based on a mathematical analogy to quantum statistical physics. The mathematical procedure for density estimation is related to maximum entropy methods for inverse problems; the penalty function is a convex information divergence enforcing global smoothing toward default models, positivity, extensivity and normalization. The novel feature is the replacement of classical entropy by quantum entropy, so that local smoothing may be enforced by constraints on the expectation values of differential operators. Although the hyperparameters, covariance, and linear response to perturbations can be estimated by a variety of statistical methods, we develop the Bayesian interpretation. The linear response of the MAP estimate is proportional to the covariance. The hyperparameters are estimated by type-II maximum likelihood. The method is demonstrated on standard data sets.

  20. Wavelet-based approach to character skeleton.

    PubMed

    You, Xinge; Tang, Yuan Yan

    2007-05-01

    Character skeleton plays a significant role in character recognition. The strokes of a character may consist of two regions, i.e., singular and regular regions. The intersections and junctions of the strokes belong to the singular region, while the straight and smooth parts of the strokes are categorized into the regular region. Therefore, a skeletonization method requires two different processes to treat the skeletons in these two different regions. All traditional skeletonization algorithms are based on the symmetry analysis technique. The major problems of these methods are as follows. 1) The computation of the primary skeleton in the regular region is indirect, so that its implementation is sophisticated and costly. 2) The extracted skeleton cannot be exactly located on the central line of the stroke. 3) The captured skeleton in the singular region may be distorted by artifacts and branches. To overcome these problems, a novel scheme of extracting the skeleton of character based on wavelet transform is presented in this paper. This scheme consists of two main steps, namely: a) extraction of the primary skeleton in the regular region and b) amendment processing of the primary skeletons and connection of them in the singular region. A direct technique is used in the first step, where a new wavelet-based symmetry analysis is developed for finding the central line of the stroke directly. A novel method called smooth interpolation is designed in the second step, where a smooth operation is applied to the primary skeleton, and, thereafter, the interpolation compensation technique is proposed to link the primary skeleton, so that the skeleton in the singular region can be produced. Experiments are conducted and positive results are achieved, which show that the proposed skeletonization scheme is applicable not only to binary images but also to gray-level images, and the skeleton is robust against noise and affine transform. PMID:17491454

  1. Density estimation by maximum quantum entropy

    SciTech Connect

    Silver, R.N.; Wallstrom, T.; Martz, H.F.

    1993-11-01

    A new Bayesian method for non-parametric density estimation is proposed, based on a mathematical analogy to quantum statistical physics. The mathematical procedure is related to maximum entropy methods for inverse problems and image reconstruction. The information divergence enforces global smoothing toward default models, convexity, positivity, extensivity and normalization. The novel feature is the replacement of classical entropy by quantum entropy, so that local smoothing is enforced by constraints on differential operators. The linear response of the estimate is proportional to the covariance. The hyperparameters are estimated by type-II maximum likelihood (evidence). The method is demonstrated on textbook data sets.

  2. Estimating density of Florida Key deer 

    E-print Network

    Roberts, Clay Walton

    2006-08-16

    Florida Key deer (Odocoileus virginianus clavium) were listed as endangered by the U.S. Fish and Wildlife Service (USFWS) in 1967. A variety of survey methods have been used in estimating deer density and/or changes in population trends...

  3. Estimating density of Florida Key deer

    E-print Network

    Roberts, Clay Walton

    2006-08-16

    Florida Key deer (Odocoileus virginianus clavium) were listed as endangered by the U.S. Fish and Wildlife Service (USFWS) in 1967. A variety of survey methods have been used in estimating deer density and/or changes in population trends...

  4. Quantum Computation Based Probability Density Function Estimation

    Microsoft Academic Search

    Ferenc Balázs; Sándor Imre

    2004-01-01

    Signal processing techniques will lean on blind methods in the near future, where no redundant, resource-allocating information will be transmitted through the channel. To achieve a proper decision, however, it is essential to know at least the probability density function (pdf), whose estimation is classically a time-consuming and/or less accurate task, which may make decisions to …

  5. Estimating animal population density using passive acoustics

    PubMed Central

    Marques, Tiago A; Thomas, Len; Martin, Stephen W; Mellinger, David K; Ward, Jessica A; Moretti, David J; Harris, Danielle; Tyack, Peter L

    2013-01-01

    Reliable estimation of the size or density of wild animal populations is very important for effective wildlife management, conservation and ecology. Currently, the most widely used methods for obtaining such estimates involve either sighting animals from transect lines or some form of capture-recapture on marked or uniquely identifiable individuals. However, many species are difficult to sight, and cannot be easily marked or recaptured. Some of these species produce readily identifiable sounds, providing an opportunity to use passive acoustic data to estimate animal density. In addition, even for species for which other visually based methods are feasible, passive acoustic methods offer the potential for greater detection ranges in some environments (e.g. underwater or in dense forest), and hence potentially better precision. Automated data collection means that surveys can take place at times and in places where it would be too expensive or dangerous to send human observers. Here, we present an overview of animal density estimation using passive acoustic data, a relatively new and fast-developing field. We review the types of data and methodological approaches currently available to researchers and we provide a framework for acoustics-based density estimation, illustrated with examples from real-world case studies. We mention moving sensor platforms (e.g. towed acoustics), but then focus on methods involving sensors at fixed locations, particularly hydrophones to survey marine mammals, as acoustic-based density estimation research to date has been concentrated in this area. Primary among these are methods based on distance sampling and spatially explicit capture-recapture. The methods are also applicable to other aquatic and terrestrial sound-producing taxa. We conclude that, despite being in its infancy, density estimation based on passive acoustic data likely will become an important method for surveying a number of diverse taxa, such as sea mammals, fish, birds, amphibians, and insects, especially in situations where inferences are required over long periods of time. There is considerable work ahead, with several potentially fruitful research areas, including the development of (i) hardware and software for data acquisition, (ii) efficient, calibrated, automated detection and classification systems, and (iii) statistical approaches optimized for this application. Further, survey design will need to be developed, and research is needed on the acoustic behaviour of target species. Fundamental research on vocalization rates and group sizes, and the relation between these and other factors such as season or behaviour state, is critical. Evaluation of the methods under known density scenarios will be important for empirically validating the approaches presented here. PMID:23190144
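
    The cue-counting estimators surveyed in this literature typically take the schematic form below: detected cues, corrected for false positives, divided by the monitored area, the average detection probability, the monitoring time, and the per-animal cue rate. The function and its argument names are illustrative of that general form, not the paper's exact estimator.

      import math

      def cue_density(n_cues, false_pos, k_sensors, w_km, det_prob, t_hours, cue_rate):
          """Animals per km^2 from cue counts at k sensors of detection radius w_km."""
          area = k_sensors * math.pi * w_km**2          # monitored area, km^2
          return n_cues * (1.0 - false_pos) / (area * det_prob * t_hours * cue_rate)

      # e.g. 2000 cues in 24 h at 10 sensors of 4 km radius, 5% false positives,
      # detection probability 0.3, and 25 cues per animal-hour:
      print(cue_density(2000, 0.05, 10, 4.0, 0.3, 24.0, 25.0))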

  6. Estimating animal population density using passive acoustics.

    PubMed

    Marques, Tiago A; Thomas, Len; Martin, Stephen W; Mellinger, David K; Ward, Jessica A; Moretti, David J; Harris, Danielle; Tyack, Peter L

    2013-05-01

    Reliable estimation of the size or density of wild animal populations is very important for effective wildlife management, conservation and ecology. Currently, the most widely used methods for obtaining such estimates involve either sighting animals from transect lines or some form of capture-recapture on marked or uniquely identifiable individuals. However, many species are difficult to sight, and cannot be easily marked or recaptured. Some of these species produce readily identifiable sounds, providing an opportunity to use passive acoustic data to estimate animal density. In addition, even for species for which other visually based methods are feasible, passive acoustic methods offer the potential for greater detection ranges in some environments (e.g. underwater or in dense forest), and hence potentially better precision. Automated data collection means that surveys can take place at times and in places where it would be too expensive or dangerous to send human observers. Here, we present an overview of animal density estimation using passive acoustic data, a relatively new and fast-developing field. We review the types of data and methodological approaches currently available to researchers and we provide a framework for acoustics-based density estimation, illustrated with examples from real-world case studies. We mention moving sensor platforms (e.g. towed acoustics), but then focus on methods involving sensors at fixed locations, particularly hydrophones to survey marine mammals, as acoustic-based density estimation research to date has been concentrated in this area. Primary among these are methods based on distance sampling and spatially explicit capture-recapture. The methods are also applicable to other aquatic and terrestrial sound-producing taxa. We conclude that, despite being in its infancy, density estimation based on passive acoustic data likely will become an important method for surveying a number of diverse taxa, such as sea mammals, fish, birds, amphibians, and insects, especially in situations where inferences are required over long periods of time. There is considerable work ahead, with several potentially fruitful research areas, including the development of (i) hardware and software for data acquisition, (ii) efficient, calibrated, automated detection and classification systems, and (iii) statistical approaches optimized for this application. Further, survey design will need to be developed, and research is needed on the acoustic behaviour of target species. Fundamental research on vocalization rates and group sizes, and the relation between these and other factors such as season or behaviour state, is critical. Evaluation of the methods under known density scenarios will be important for empirically validating the approaches presented here. PMID:23190144
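
    In the simplest fixed-sensor case, the overview above reduces to a cue-counting estimator: detected cues corrected for false positives, divided by the monitored area, detection probability, recording time, and cue rate. The sketch below shows that arithmetic only; all names and numbers are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of a cue-counting density estimator of the general form used in
# fixed-sensor passive acoustic surveys. All arguments are illustrative; real
# surveys estimate det_prob and cue_rate from auxiliary data (e.g. tagged animals).
import math

def cue_density(n_cues, false_pos_rate, n_sensors, max_range_m,
                det_prob, duration_s, cue_rate_per_s):
    area_m2 = n_sensors * math.pi * max_range_m ** 2   # total monitored area
    good_cues = n_cues * (1.0 - false_pos_rate)        # remove false detections
    return good_cues / (area_m2 * det_prob * duration_s * cue_rate_per_s)

# hypothetical survey: 10 hydrophones, 4 km detection radius, 5 days of recording
d = cue_density(n_cues=2940, false_pos_rate=0.05, n_sensors=10, max_range_m=4000,
                det_prob=0.32, duration_s=5 * 24 * 3600, cue_rate_per_s=0.5 / 60)
print(d * 1e6, "animals per km^2")
```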

  7. DENSITY ESTIMATION FOR PROJECTED EXOPLANET QUANTITIES

    SciTech Connect

    Brown, Robert A., E-mail: rbrown@stsci.edu [Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218 (United States)

    2011-05-20

    Exoplanet searches using radial velocity (RV) and microlensing (ML) produce samples of 'projected' mass and orbital radius, respectively. We present a new method for estimating the probability density distribution (density) of the unprojected quantity from such samples. For a sample of n data values, the method involves solving n simultaneous linear equations to determine the weights of delta functions for the raw, unsmoothed density of the unprojected quantity that cause the associated cumulative distribution function (CDF) of the projected quantity to exactly reproduce the empirical CDF of the sample at the locations of the n data values. We smooth the raw density using nonparametric kernel density estimation with a normal kernel of bandwidth σ. We calibrate the dependence of σ on n by Monte Carlo experiments performed on samples drawn from a theoretical density, in which the integrated square error is minimized. We scale this calibration to the ranges of real RV samples using the Normal Reference Rule. The resolution and amplitude accuracy of the estimated density improve with n. For typical RV and ML samples, we expect the fractional noise at the PDF peak to be approximately 80 n^(-log 2). For illustrations, we apply the new method to 67 RV values given a similar treatment by Jorissen et al. in 2001, and to the 308 RV values listed at exoplanets.org on 2010 October 20. In addition to analyzing observational results, our methods can be used to develop measurement requirements, particularly on the minimum sample size n, for future programs, such as the microlensing survey of Earth-like exoplanets recommended by the Astro 2010 committee.
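
    As a sketch of the smoothing stage only (the deconvolution step, solving the n linear equations for the delta-function weights, is omitted), the following shows Gaussian-kernel density estimation with a Normal Reference Rule bandwidth. The lognormal sample is a stand-in, not the published RV data.

```python
# Minimal sketch: Gaussian-kernel KDE with a Normal Reference Rule bandwidth.
import numpy as np

def nrr_bandwidth(x):
    """Normal Reference Rule: sigma_hat * (4 / (3 n))**(1/5)."""
    n = len(x)
    return np.std(x, ddof=1) * (4.0 / (3.0 * n)) ** 0.2

def kde(x, grid, bw):
    """Gaussian-kernel density estimate evaluated on `grid`."""
    z = (grid[:, None] - x[None, :]) / bw
    return np.exp(-0.5 * z ** 2).sum(axis=1) / (len(x) * bw * np.sqrt(2 * np.pi))

rng = np.random.default_rng(0)
sample = rng.lognormal(mean=0.0, sigma=0.8, size=308)  # stand-in for a projected sample
grid = np.linspace(sample.min(), sample.max(), 400)
density = kde(sample, grid, nrr_bandwidth(sample))
```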

  8. Stochastic model for estimation of environmental density

    SciTech Connect

    Janardan, K.G.; Uppuluri, V.R.R.

    1984-01-01

    Environmental density has been defined as the value of a habitat expressing its unfavorableness for settling by an individual that has a strong anti-social tendency toward other individuals in an environment. Morisita studied the anti-social behavior of ant-lions (Glemuroides japanicus) and provided a recurrence relation, without an explicit solution, for the probability distribution of individuals settling in each of two habitats in terms of the environmental densities and the numbers of individuals introduced. In this paper the recurrence relation is explicitly solved; certain interesting properties of the distribution are discussed, including the estimation of the parameters. 4 references, 1 table.

  9. Quantum Computation Based Probability Density Function Estimation

    E-print Network

    Balázs, F; Bal\\'azs, Ferenc

    2004-01-01

    Signal processing techniques will lean on blind methods in the near future, where no redundant, resource-allocating information will be transmitted through the channel. To achieve a proper decision, however, it is essential to know at least the probability density function (pdf), whose estimation is classically a time-consuming and/or less accurate task that may cause decisions to fail. This paper describes the design of a quantum-assisted pdf estimation method, illustrated with an example, which promises to achieve the exact pdf in a very fast way given a proper setting of its parameters.

  10. Quantum Computation Based Probability Density Function Estimation

    E-print Network

    Ferenc Balázs; Sándor Imre

    2004-09-06

    Signal processing techniques will lean on blind methods in the near future, where no redundant, resource-allocating information will be transmitted through the channel. To achieve a proper decision, however, it is essential to know at least the probability density function (pdf), whose estimation is classically a time-consuming and/or less accurate task that may cause decisions to fail. This paper describes the design of a quantum-assisted pdf estimation method, illustrated with an example, which promises to achieve the exact pdf in a very fast way given a proper setting of its parameters.

  11. Density Estimation Framework for Model Error Assessment

    NASA Astrophysics Data System (ADS)

    Sargsyan, K.; Liu, Z.; Najm, H. N.; Safta, C.; VanBloemenWaanders, B.; Michelsen, H. A.; Bambha, R.

    2014-12-01

    In this work we highlight the importance of model error assessment in physical model calibration studies. Conventional calibration methods often assume the model is perfect and account for data noise only. Consequently, the estimated parameters typically have biased values that implicitly compensate for model deficiencies. Moreover, improving the amount and the quality of data may not improve the parameter estimates since the model discrepancy is not accounted for. In state-of-the-art methods model discrepancy is explicitly accounted for by enhancing the physical model with a synthetic statistical additive term, which allows appropriate parameter estimates. However, these statistical additive terms do not increase the predictive capability of the model because they are tuned for particular output observables and may even violate physical constraints. We introduce a framework in which model errors are captured by allowing variability in specific model components and parameterizations for the purpose of achieving meaningful predictions that are both consistent with the data spread and appropriately disambiguate model and data errors. Here we cast model parameters as random variables, embedding the calibration problem within a density estimation framework. Further, we calibrate for the parameters of the joint input density. The likelihood function for the associated inverse problem is degenerate, therefore we use Approximate Bayesian Computation (ABC) to build prediction-constraining likelihoods and illustrate the strengths of the method on synthetic cases. We also apply the ABC-enhanced density estimation to the TransCom 3 CO2 intercomparison study (Gurney, K. R., et al., Tellus, 55B, pp. 555-579, 2003) and calibrate 15 transport models for regional carbon sources and sinks given atmospheric CO2 concentration measurements.
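
    A minimal rejection-ABC loop conveys the basic mechanism behind such likelihood-free calibration; the paper's prediction-constraining likelihoods are more elaborate, and the one-parameter forward model here is a hypothetical toy.

```python
# Minimal rejection-ABC sketch (generic; not the paper's prediction-constraining
# likelihood): accept parameter draws whose simulated output falls within a
# tolerance of the observed value.
import numpy as np

def simulate(theta, rng):
    # hypothetical forward model: observable = theta + measurement noise
    return theta + rng.normal(0.0, 0.1)

def abc_rejection(observed, prior_sampler, tol, n_draws, seed=0):
    rng = np.random.default_rng(seed)
    accepted = []
    for _ in range(n_draws):
        theta = prior_sampler(rng)
        if abs(simulate(theta, rng) - observed) < tol:
            accepted.append(theta)
    return np.array(accepted)   # samples from the ABC posterior

posterior = abc_rejection(observed=1.3,
                          prior_sampler=lambda rng: rng.uniform(0, 3),
                          tol=0.05, n_draws=20000)
```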

  12. Wavelet-based ultrasound image denoising: Performance analysis and comparison

    Microsoft Academic Search

    F. Yousefi Rizi; H. Ahmadi Noubari; S. K. Setarehdan

    2011-01-01

    Ultrasound images are generally affected by multiplicative speckle noise, which is mainly due to the coherent nature of the scattering phenomenon. Speckle noise filtering is thus a critical pre-processing step in medical ultrasound imaging provided that the diagnostic features of interest are not lost. A comparative study of the performance of alternative wavelet based ultrasound image denoising methods is presented

  13. Adaptively wavelet-based image denoising algorithm with edge preserving

    Microsoft Academic Search

    Yihua Tan; Jinwen Tian; Jian Liu

    2006-01-01

    A new wavelet-based image denoising algorithm, which exploits the edge information hidden in the corrupted image, is presented. Firstly, a canny-like edge detector identifies the edges in each subband. Secondly, multiplying the wavelet coefficients in neighboring scales is implemented to suppress the noise while magnifying the edge information, and the result is utilized to exclude the fake edges. The isolated

  14. Efficient Signal Transmission and Wavelet-based Compression

    E-print Network

    Kelly, Susan

    Efficient Signal Transmission and Wavelet-based Compression Susan E. Kelly Department of Wisconsin­Eau Claire Abstract In this paper we describe how digital signals can be transmitted by analog carrier signals. This is the method of pulse code modulation. Transmission by pulse code modulation

  15. Perceptually lossless wavelet-based compression for medical images

    Microsoft Academic Search

    Nai-Wen Lin; Tsaifa Yu; Andrew K. Chan

    1997-01-01

    In this paper, we present a wavelet-based medical image compression scheme so that images displayed on different devices are perceptually lossless. Since visual sensitivity of human varies with different subbands, we apply the perceptual lossless criteria to quantize the wavelet transform coefficients of each subband such that visual distortions are reduced to unnoticeable. Following this, we use a high compression

  16. An EM algorithm for wavelet-based image restoration

    Microsoft Academic Search

    Mário A. T. Figueiredo; Robert D. Nowak

    2003-01-01

    Abstract: This paper introduces an expectation-maximization (EM) algorithm for image restoration (deconvolution) based on a penalized likelihood formulated in the wavelet domain. Regularization is achieved by promoting a reconstruction with low complexity, expressed in the wavelet coefficients, taking advantage of the well-known sparsity of wavelet representations. Previous works have investigated wavelet-based restoration but, except for certain special cases, the resulting criteria are solved...

  17. Wavelet-Based Transformations for Nonlinear Signal Processing

    E-print Network

    Nowak, Robert

    Robert D. Nowak (www.dsp.rice.edu). Submitted to IEEE Transactions on Signal Processing, February 1997; revised June 1998. EDICS: SP-2. ... in the analysis and processing of real-world signals. In this paper, we introduce two new structures for nonlinear ...

  18. Wavelet-Based Multiresolution Analysis of Wivenhoe Dam Water Temperatures

    E-print Network

    Percival, Don

    Don Percival. Slide fragments: a monitoring program recently upgraded with permanent installation of vertical profilers at Lake Wivenhoe dam; examines water temperature in a subtropical dam as a function of time and depth; concentrates on a 600+ day segment of temperature fluctuations.

  19. Wavelet-based feature extraction technique for fruit shape classification

    Microsoft Academic Search

    Slamet Riyadi; A. J. Ishak; M. M. Mustafa; A. Hussain

    2008-01-01

    For export, papaya fruit should be free of defects and damages. Abnormality in papaya fruit shape represents a defective fruit and is used as one of the main criteria to determine suitability of the fruit to be exported. This paper describes a wavelet-based technique used to perform feature extraction to extract unique features which are then used in the classification

  20. A NEW WAVELET-BASED EDGE DETECTOR VIA CONSTRAINED OPTIMIZATION

    E-print Network

    Chen, Sheng-Wei

    ..., a continuous wavelet has to be converted into the discrete form. For this purpose, the format of the discrete wavelet transform has to be developed. Since the proposed edge filter is wavelet-based, the inherent multiresolution nature of the wavelet transform provides more flexibility in the analysis of images. Also

  1. A new wavelet based blind audio source separation using Kurtosis

    Microsoft Academic Search

    M. R. Mirarab; M. A. Sobhani; A. A. Nasiri

    2010-01-01

    We consider the problem of blind audio source separation. A method to solve this problem is blind source separation (BSS) using independent component analysis (ICA). ICA exploits the non-Gaussianity of source in the mixtures. In this paper we propose a new wavelet based ICA method using Kurtosis for blind audio source separation. In this method, the observations are transformed into

  2. Wavelet-Based Synthesis of a Multifractional Process

    Microsoft Academic Search

    Zhao-rui Wang; Shan-wei Lü; Taketsune Nakamura

    2008-01-01

    The multifractional process is introduced as a natural extension of the traditional monofractional process. The selling point of the multifractional process is that its Hölder regularity is allowed to vary from point to point, which makes it a promising model for stochastic processes whose regularity evolves in time. A new wavelet-based algorithm to synthesize a realization of a multifractional process is

  3. Wavelet-based Prediction Measures for Lossy Image Set Compression

    E-print Network

    Cheng, Howard

    Keywords: compression, wavelet transform, prediction measures. 1. Introduction. Traditional image compression algorithms ... image set compression algorithms have been proposed in the literature. A key component of these algorithms ... of a compression algorithm. Since most of these image set compression algorithms use wavelet-based image ...

  4. Wavelet Based Instantaneous Power Analysis for Induction Machine Fault Diagnosis

    Microsoft Academic Search

    Shahin Hedayati Kia; A. Mpanda Mabwe; H. Henao; G.-A. Capolino

    2006-01-01

    The aim of this paper is to present a wavelet based approach to detect broken bar faults in squirrel-cage induction machines. This approach uses instantaneous power as a medium for fault detection. A multi-resolution instantaneous power decomposition based on wavelet transform provides the different frequency bands whose energies are affected directly by the broken bar fault. Actually, the instantaneous power

  5. A tree projection algorithm for wavelet-based sparse approximation

    E-print Network

    Thompson, Andrew

    A tree projection algorithm for wavelet-based sparse approximation. Andrew Thompson (Duke University, North Carolina, USA), joint work with Coralia Cartis (University of Edinburgh). Discrete wavelet transforms (DWTs) have an inherent tree structure.

  6. Wavelet-based analysis of blood pressure dynamics in rats

    NASA Astrophysics Data System (ADS)

    Pavlov, A. N.; Anisimov, A. A.; Semyachkina-Glushkovskaya, O. V.; Berdnikova, V. A.; Kuznecova, A. S.; Matasova, E. G.

    2009-02-01

    Using a wavelet-based approach, we study stress-induced reactions in the blood pressure dynamics in rats. Further, we consider how the level of the nitric oxide (NO) influences the heart rate variability. Clear distinctions for male and female rats are reported.

  7. Bird population density estimated from acoustic signals

    USGS Publications Warehouse

    Dawson, D.K.; Efford, M.G.

    2009-01-01

    1. Many animal species are detected primarily by sound. Although songs, calls and other sounds are often used for population assessment, as in bird point counts and hydrophone surveys of cetaceans, there are few rigorous methods for estimating population density from acoustic data. 2. The problem has several parts: distinguishing individuals, adjusting for individuals that are missed, and adjusting for the area sampled. Spatially explicit capture-recapture (SECR) is a statistical methodology that addresses jointly the second and third parts of the problem. We have extended SECR to use uncalibrated information from acoustic signals on the distance to each source. 3. We applied this extension of SECR to data from an acoustic survey of ovenbird Seiurus aurocapilla density in an eastern US deciduous forest with multiple four-microphone arrays. We modelled average power from spectrograms of ovenbird songs measured within a window of 0.7 s duration and frequencies between 4200 and 5200 Hz. 4. The resulting estimates of the density of singing males (0.19 ha^-1, SE 0.03 ha^-1) were consistent with estimates of the adult male population density from mist-netting (0.36 ha^-1, SE 0.12 ha^-1). The fitted model predicts sound attenuation of 0.11 dB m^-1 (SE 0.01 dB m^-1) in excess of losses from spherical spreading. 5. Synthesis and applications. Our method for estimating animal population density from acoustic signals fills a gap in the census methods available for visually cryptic but vocal taxa, including many species of bird and cetacean. The necessary equipment is simple and readily available; as few as two microphones may provide adequate estimates, given spatial replication. The method requires that individuals detected at the same place are acoustically distinguishable and all individuals vocalize during the recording interval, or that the per capita rate of vocalization is known. We believe these requirements can be met, with suitable field methods, for a significant number of songbird species. © 2009 British Ecological Society.

  8. A comparative evaluation of wavelet-based methods for hypothesis testing of brain activation maps.

    PubMed

    Fadili, M J; Bullmore, E T

    2004-11-01

    Wavelet-based methods for hypothesis testing are described and their potential for activation mapping of human functional magnetic resonance imaging (fMRI) data is investigated. In this approach, we emphasise convergence between methods of wavelet thresholding or shrinkage and the problem of hypothesis testing in both classical and Bayesian contexts. Specifically, our interest will be focused on the trade-off between type I probability error control and power dissipation, estimated by the area under the ROC curve. We describe a technique for controlling the false discovery rate at an arbitrary level of error in testing multiple wavelet coefficients generated by a 2D discrete wavelet transform (DWT) of spatial maps of fMRI time series statistics. We also describe and apply change-point detection with recursive hypothesis testing methods that can be used to define a threshold unique to each level and orientation of the 2D-DWT, and Bayesian methods, incorporating a formal model for the anticipated sparseness of wavelet coefficients representing the signal or true image. The sensitivity and type I error control of these algorithms are comparatively evaluated by analysis of "null" images (acquired with the subject at rest) and an experimental data set acquired from five normal volunteers during an event-related finger movement task. We show that all three wavelet-based algorithms have good type I error control (the FDR method being most conservative) and generate plausible brain activation maps (the Bayesian method being most powerful). We also generalise the formal connection between wavelet-based methods for simultaneous multiresolution denoising/hypothesis testing and methods based on monoresolution Gaussian smoothing followed by statistical testing of brain activation maps. PMID:15528111

  9. Wavelet-based adaptive image denoising with edge preservation

    Microsoft Academic Search

    Charles Q. Zhan; Lina J. Kararn

    2003-01-01

    This paper presents a state-of-the-art adaptive wavelet-based denoising method with edge preservation. More specifically, a redundant discrete dyadic wavelet transform (DDWT) is performed on the noisy image to get the wavelet frame decomposition at different scales. Based on the Lipschitz regularity theory, correlation analysis across scales is performed to detect the significant coefficients from the signal and the insignificant coefficients

  10. Wavelet based motion artifact removal for Functional Near Infrared Spectroscopy

    Microsoft Academic Search

    Behnam Molavi; Guy A. Dumont

    2010-01-01

    Functional Near Infrared Spectroscopy (fNIRS) enables researchers to conduct studies in situations where use of other functional imaging methods is impossible. An important shortcoming of fNIRS is the sensitivity to motion artifacts. We propose a new wavelet based algorithm for removing movement artifacts from fNIRS signals. We tested the method on simulated and experimental fNIRS data. The results show an

  11. Fast wavelet based algorithms for linear evolution equations

    NASA Technical Reports Server (NTRS)

    Engquist, Bjorn; Osher, Stanley; Zhong, Sifen

    1992-01-01

    A class of fast wavelet-based algorithms was devised for linear evolution equations whose coefficients are time independent. The method draws on the work of Beylkin, Coifman, and Rokhlin, which they applied to general Calderon-Zygmund type integral operators. A modification of their idea is applied to linear hyperbolic and parabolic equations with spatially varying coefficients. A significant speedup over standard methods is obtained when applied to hyperbolic equations in one space dimension and parabolic equations in multidimensions.
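
    The gist of such methods is that a smooth operator becomes approximately sparse in a wavelet basis. Below is a rough sketch, assuming PyWavelets, of compressing a dense kernel matrix by thresholding its 2D wavelet transform; this is a simplification of the Beylkin-Coifman-Rokhlin construction, not the authors' algorithm.

```python
# Minimal sketch of wavelet operator compression: transform a dense matrix into
# a wavelet basis and zero out small entries, yielding a sparse representation
# that can be applied cheaply at each time step.
import numpy as np
import pywt

def wavelet_compress(A, wavelet="db4", thresh=1e-4):
    coeffs = pywt.wavedec2(A, wavelet)              # 2D DWT of the matrix entries
    arr, slices = pywt.coeffs_to_array(coeffs)      # pack coefficients into one array
    arr[np.abs(arr) < thresh * np.abs(arr).max()] = 0.0
    sparsity = (arr == 0).mean()                    # fraction of discarded entries
    return arr, slices, sparsity

n = 256
x = np.linspace(0, 1, n)
A = 1.0 / (1.0 + 50.0 * np.abs(x[:, None] - x[None, :]))  # smooth toy kernel
arr, slices, sparsity = wavelet_compress(A)
print(f"fraction of zero entries: {sparsity:.2%}")
```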

  12. Wavelet-based snake model for image segmentation

    NASA Astrophysics Data System (ADS)

    Zhang, Hong-wei; Liu, Zheng-guang

    2005-10-01

    Although the snake model has been widely used and has obtained quite good results, some key difficulties remain: a narrow capture range and an inability to move into boundary concavities. A newer snake model, the Gradient Vector Flow (GVF) snake, can overcome these difficulties. The GVF snake creates its own external force field, the GVF force field, which makes it insensitive to initialization and able to move into concave boundary regions. However, the GVF snake requires a large amount of computation and is easily disturbed by noise. The wavelet-based GVF snake model lessens the amount of computation thanks to the multi-scale character of the wavelet transform. Because signal and noise have different singularities, the modulus local maxima of their wavelet coefficients evolve differently across resolutions, so noise can also be distinguished from signal in the wavelet-based GVF snake model. The wavelet-based GVF snake model is faster and more robust than the traditional snake model.

  13. Nonparametric volatility density estimation for discrete time models

    Microsoft Academic Search

    Bert Van Es; Peter Spreij; Harry Van Zanten

    2005-01-01

    We consider discrete time models for asset prices with a stationary volatility process. We aim at estimating the multivariate density of this process at a set of consecutive time instants. A Fourier-type deconvolution kernel density estimator based on the logarithm of the squared process is proposed to estimate the volatility density. Expansions of the bias and bounds on the variance

  14. ESTIMATING MICROORGANISM DENSITIES IN AEROSOLS FROM SPRAY IRRIGATION OF WASTEWATER

    EPA Science Inventory

    This document summarizes current knowledge about estimating the density of microorganisms in the air near wastewater management facilities, with emphasis on spray irrigation sites. One technique for modeling microorganism density in air is provided, and an aerosol density estimation...

  15. A Wavelet-Based Assessment of Topographic-Isostatic Reductions for GOCE Gravity Gradients

    NASA Astrophysics Data System (ADS)

    Grombein, Thomas; Luo, Xiaoguang; Seitz, Kurt; Heck, Bernhard

    2014-07-01

    Gravity gradient measurements from ESA's satellite mission Gravity field and steady-state Ocean Circulation Explorer (GOCE) contain significant high- and mid-frequency signal components, which are primarily caused by the attraction of the Earth's topographic and isostatic masses. In order to mitigate the resulting numerical instability of a harmonic downward continuation, the observed gradients can be smoothed with respect to topographic-isostatic effects using a remove-compute-restore technique. For this reason, topographic-isostatic reductions are calculated by forward modeling that employs the advanced Rock-Water-Ice methodology. The basis of this approach is a three-layer decomposition of the topography with variable density values and a modified Airy-Heiskanen isostatic concept incorporating a depth model of the Mohorovičić discontinuity. Moreover, tesseroid bodies are utilized for mass discretization and arranged on an ellipsoidal reference surface. To evaluate the degree of smoothing via topographic-isostatic reduction of GOCE gravity gradients, a wavelet-based assessment is presented in this paper and compared with statistical inferences in the space domain. Using the Morlet wavelet, continuous wavelet transforms are applied to measured GOCE gravity gradients before and after reducing topographic-isostatic signals. By analyzing a representative data set in the Himalayan region, we show that employing the reductions leads to significantly smoothed gradients. In addition, smoothing effects that are invisible in the space domain can be detected in wavelet scalograms, making a wavelet-based spectral analysis a powerful tool.

  16. Multiresolution seismic data fusion with a generalized wavelet-based method to derive subseabed acoustic properties

    NASA Astrophysics Data System (ADS)

    Ker, S.; Le Gonidec, Y.; Gibert, D.

    2013-11-01

    In the context of multiscale seismic analysis of complex reflectors, which benefits from broad-band frequency considerations, we apply a wavelet-based method to merge multiresolution seismic sources based on generalized Lévy alpha-stable functions. The frequency bandwidth limitation of individual seismic sources induces distortions in wavelet responses (WRs), and we show that Gaussian fractional derivative functions are optimal wavelets to fully correct for these distortions in the merged frequency range. The efficiency of the method also rests on a new wavelet parametrization, the breadth of the wavelet, in which the dominant dilation is adapted to the wavelet formalism. As a first demonstration of merging multiresolution seismic sources, we perform the source correction with the high and very high resolution seismic sources of the SYSIF deep-towed device and show that both can now be perfectly merged into an equivalent seismic source with a broad-band frequency bandwidth (220-2200 Hz). Taking advantage of this new multiresolution seismic data fusion, the generalized wavelet-based method allows reconstructing the acoustic impedance profile of the subseabed, based on the inverse wavelet transform properties extended to the source-corrected WR. We highlight that the fusion of seismic sources improves the resolution of the impedance profile and that the density structure of the subseabed can be assessed assuming spatially homogeneous large-scale features of the subseabed physical properties.

  17. Wavelet based characterization of ex vivo vertebral trabecular bone structure with 3T MRI compared to microCT

    SciTech Connect

    Krug, R; Carballido-Gamio, J; Burghardt, A; Haase, S; Sedat, J W; Moss, W C; Majumdar, S

    2005-04-11

    Trabecular bone structure and bone density contribute to the strength of bone and are important in the study of osteoporosis. Wavelets are a powerful tool to characterize and quantify texture in an image. In this study the thickness of trabecular bone was analyzed in 8 cylindrical cores of the vertebral spine. Images were obtained from 3 Tesla (T) magnetic resonance imaging (MRI) and micro-computed tomography (µCT). Results from the wavelet based analysis of trabecular bone were compared with standard two-dimensional structural parameters (analogous to bone histomorphometry) obtained using mean intercept length (MR images) and direct 3D distance transformation methods (µCT images). Additionally, the bone volume fraction was determined from MR images. We conclude that the wavelet based analysis delivers comparable results to the established MR histomorphometric measurements. The average deviation in trabecular thickness was less than one pixel size between the wavelet and the standard approach for both MR and µCT analysis. Since the wavelet based method is less sensitive to image noise, we see an advantage of wavelet analysis of trabecular bone for MR imaging when going to higher resolution.

  18. Estimates of lightning ground flash density using optical transient density

    Microsoft Academic Search

    William A. Chisholm

    2003-01-01

    The NASA optical transient detector (OTD) project has recently concluded. Several changes to data processing have improved the agreement between OTD values and ground flash density (GFD) in South America while preserving agreement with strong ground flash density trends observed in North America, South Africa and other regions. A revised relationship between OTD and GFD values is recommended.

  19. Template-Free Wavelet-Based Detection of Local Symmetries.

    PubMed

    Puspoki, Zsuzsanna; Unser, Michael

    2015-10-01

    Our goal is to detect and group different kinds of local symmetries in images in a scale- and rotation-invariant way. We propose an efficient wavelet-based method to determine the order of local symmetry at each location. Our algorithm relies on circular harmonic wavelets which are used to generate steerable wavelet channels corresponding to different symmetry orders. To give a measure of local symmetry, we use the F-test to examine the distribution of the energy across different channels. We provide experimental results on synthetic images, biological micrographs, and electron-microscopy images to demonstrate the performance of the algorithm. PMID:26011883

  20. Modified wavelet based morphological correlation for rotation-invariant recognition

    NASA Astrophysics Data System (ADS)

    Wang, Qu; Chen, Li; Zhou, Jinyun; Lin, Qinghua

    2011-11-01

    A wavelet-based rotation-invariant morphological correlation (WBRIMC) is proposed as a new architecture to improve the properties of the classical rotation-invariant morphological correlation (RIMC). For the WBRIMC, the joint power spectrum (JPS) of the RIMC is filtered by an appropriately dilated power-spectrum function of a wavelet. Simulation results confirm that the WBRIMC has higher discrimination capability, with sharp and intense correlation signals, and is more tolerant to salt-and-pepper noise and additive white Gaussian noise than the circular harmonic filter (CHF), the phase-only CHF (POCHF), and the RIMC.

  1. Force Estimation and Prediction from Time-Varying Density Images

    E-print Network

    Ratilal, Purnima

    We present methods for estimating forces which drive motion observed in density image sequences. Using these forces, we also present methods for predicting velocity and density evolution. To do this, we formulate and apply ...

  2. A wavelet-based approach to fall detection.

    PubMed

    Palmerini, Luca; Bagalà, Fabio; Zanetti, Andrea; Klenk, Jochen; Becker, Clemens; Cappello, Angelo

    2015-01-01

    Falls among older people are a widely documented public health problem. Automatic fall detection has recently gained huge importance because it could allow for the immediate communication of falls to medical assistance. The aim of this work is to present a novel wavelet-based approach to fall detection, focusing on the impact phase and using a dataset of real-world falls. Since recorded falls result in a non-stationary signal, a wavelet transform was chosen to examine fall patterns. The idea is to consider the average fall pattern as the "prototype fall". In order to detect falls, every acceleration signal can be compared to this prototype through wavelet analysis. The similarity of the recorded signal with the prototype fall is a feature that can be used in order to determine the difference between falls and daily activities. The discriminative ability of this feature is evaluated on real-world data. It outperforms other features that are commonly used in fall detection studies, with an Area Under the Curve of 0.918. This result suggests that the proposed wavelet-based feature is promising and future studies could use this feature (in combination with others considering different fall phases) in order to improve the performance of fall detection algorithms. PMID:26007719
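
    A minimal sketch of the prototype-comparison idea follows, assuming PyWavelets and synthetic accelerometer segments; the authors' exact feature, sampling rate, and dataset are not reproduced here.

```python
# Minimal sketch: compare an acceleration segment to an average "prototype fall"
# in the continuous-wavelet domain and use their correlation as a similarity feature.
import numpy as np
import pywt

def cwt_mag(sig, scales=np.arange(1, 33), wavelet="morl", fs=100.0):
    coef, _ = pywt.cwt(sig, scales, wavelet, sampling_period=1.0 / fs)
    return np.abs(coef)

def fall_similarity(segment, prototype, fs=100.0):
    a = cwt_mag(segment, fs=fs).ravel()
    b = cwt_mag(prototype, fs=fs).ravel()
    return np.corrcoef(a, b)[0, 1]   # higher => more fall-like

# usage with synthetic data (placeholders for real accelerometer segments)
t = np.linspace(0, 2, 200)
prototype = np.exp(-((t - 1.0) ** 2) / 0.005)               # sharp impact-like peak
candidate = np.exp(-((t - 1.1) ** 2) / 0.006) + 0.05 * np.random.randn(200)
print(fall_similarity(candidate, prototype))
```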

  3. A Wavelet-Based Approach to Fall Detection

    PubMed Central

    Palmerini, Luca; Bagalà, Fabio; Zanetti, Andrea; Klenk, Jochen; Becker, Clemens; Cappello, Angelo

    2015-01-01

    Falls among older people are a widely documented public health problem. Automatic fall detection has recently gained huge importance because it could allow for the immediate communication of falls to medical assistance. The aim of this work is to present a novel wavelet-based approach to fall detection, focusing on the impact phase and using a dataset of real-world falls. Since recorded falls result in a non-stationary signal, a wavelet transform was chosen to examine fall patterns. The idea is to consider the average fall pattern as the “prototype fall”. In order to detect falls, every acceleration signal can be compared to this prototype through wavelet analysis. The similarity of the recorded signal with the prototype fall is a feature that can be used in order to determine the difference between falls and daily activities. The discriminative ability of this feature is evaluated on real-world data. It outperforms other features that are commonly used in fall detection studies, with an Area Under the Curve of 0.918. This result suggests that the proposed wavelet-based feature is promising and future studies could use this feature (in combination with others considering different fall phases) in order to improve the performance of fall detection algorithms. PMID:26007719

  4. Wavelet-Based Poisson Solver for Use in Particle-in-Cell Simulations

    E-print Network

    Terzić, Balša

    We present an implementation of a wavelet-based Poisson solver for use in 3D particle-in-cell simulations. Our method harnesses advantages afforded by the wavelet formulation, such as sparsity of operators and data sets ...

  5. Wavelet based multiresolution expectation maximization image reconstruction algorithm for positron emission tomography

    E-print Network

    Raheja, Amar

    This work transforms the MGEM and MREM algorithms to a Wavelet-based Multiresolution EM (WMREM) algorithm by performing a 2D wavelet transform on the acquired tube data, which is used to reconstruct images at different

  6. Wavelet-based spatial and temporal multiscaling: Bridging the atomistic and continuum space and time scales

    E-print Network

    Deymier, Pierre

    Received 4 April 2003; published 25 July 2003. A wavelet-based multiscale methodology is presented that combines atomistic-continuum models and wavelet analysis. An atomistic one-dimensional harmonic crystal is coupled

  7. A WAVELET-BASED IMAGE DENOISING TECHNIQUE USING SPATIAL PRIORS

    E-print Network

    Pizurica, Aleksandra

    Aleksandra Pizurica, Wilfried ..., Belgium. Abstract: We propose a new wavelet-based method for image denoising that applies the Bayesian framework, using prior knowledge about the spatial clustering of the wavelet coefficients. Local spatial ...

  8. Wavelet-Based Neural Pattern Analyzer for Behaviorally Significant Burst Pattern Recognition

    E-print Network

    Bhunia, Swarup

    ... bandwidth link. We present a novel wavelet-based approach for detecting spikes, grouping them as bursts ... multi-resolution wavelet analysis [4] of recorded signals to de-noise the data, detect and sort spikes, and then determine

  9. WAVELET-BASED DENOISING USING HIDDEN MARKOV MODELS

    E-print Network

    M. Jaber Borran and Robert D. Nowak. ... in a wide variety of wavelet-based statistical signal processing applications. Typically, Gaussian mixture distributions are used to model the wavelet coefficients and the correlation between

  10. Democracy functions of wavelet bases in general Lorentz spaces

    E-print Network

    Hernández, Eugenio

    Gustavo Garrigós, Eugenio Hernández, Maria de Natividade. Abstract: We compute the democracy functions associated with wavelet bases in general Lorentz spaces, that is, the lower and upper democracy functions h_\ell(N) = \inf_{\#\Gamma=N} \| \sum_{Q\in\Gamma} \psi_Q/\|\psi_Q\|_X \|_X and h_r(N) = \sup_{\#\Gamma=N} \| \sum_{Q\in\Gamma} \psi_Q/\|\psi_Q\|_X \|_X, (1.2) where {Q

  11. Wavelet-based index of magnetic storm activity

    E-print Network

    Kokoszka, Piotr

    A wavelet-based method of computing an index of storm activity is put forward. The new index can be computed ... of geomagnetic storm events and requires only the most recent magnetogram records, e.g., the 2 months including

  12. ADAPTIVE WAVELET-BASED FAMILY TREE QUANTIZATION FOR DIGITAL IMAGE WATERMARKING

    E-print Network

    Qi, Xiaojun

    This paper presents an adaptive wavelet-based blind digital watermarking scheme for copyright protection ... are grouped into family trees. The watermark is embedded by quantizing the family trees. The trees

  13. Category Theory Approach to Fusion of Wavelet-Based Features

    E-print Network

    Deloach, Scott A.

    Scott A. DeLoach, Air Force Institute of Technology, Department of Electrical and Computer Engineering, Wright-Patterson AFB, Ohio 45433. ... the application of category theory to a wavelet-based multisensor target recognition system, the Automatic

  14. Category Theory Approach to Fusion of Wavelet-Based Features

    E-print Network

    Kokar, Mieczyslaw M.

    Scott A. DeLoach (Air Force Institute of Technology, Scott.DeLoach@afit.af.mil) and Mieczyslaw M. Kokar (Northeastern University, Department of Electrical ...). ... theory, the paper investigates the application of category theory to a wavelet-based multisensor target

  15. Estimating delayed density-dependent mortality

    E-print Network

    Myers, Ransom A.

    We used a meta-analytic approach to test for delayed density dependence using 34 time series of sockeye data. We found no consistent evidence ...

  16. Density estimation using the trapping web design: A geometric analysis

    USGS Publications Warehouse

    Link, W.A.; Barker, R.J.

    1994-01-01

    Population densities for small mammal and arthropod populations can be estimated using capture frequencies for a web of traps. A conceptually simple geometric analysis that avoids the need to estimate a point on a density function is proposed. This analysis incorporates data from the outermost rings of traps, explaining large capture frequencies in these rings rather than truncating them from the analysis.

  17. A Wavelet-Based Methodology for Grinding Wheel Condition Monitoring

    SciTech Connect

    Liao, T. W. [Louisiana State University; Ting, C.F. [Louisiana State University; Qu, Jun [ORNL; Blau, Peter Julian [ORNL

    2007-01-01

    Grinding wheel surface condition changes as more material is removed. This paper presents a wavelet-based methodology for grinding wheel condition monitoring based on acoustic emission (AE) signals. Grinding experiments in creep feed mode were conducted to grind alumina specimens with a resinoid-bonded diamond wheel using two different conditions. During the experiments, AE signals were collected when the wheel was 'sharp' and when the wheel was 'dull'. Discriminant features were then extracted from each raw AE signal segment using the discrete wavelet decomposition procedure. An adaptive genetic clustering algorithm was finally applied to the extracted features in order to distinguish different states of grinding wheel condition. The test results indicate that the proposed methodology can achieve 97% clustering accuracy for the high material removal rate condition, 86.7% for the low material removal rate condition, and 76.7% for the combined grinding conditions if the base wavelet, the decomposition level, and the GA parameters are properly selected.
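
    The feature-extraction step described above can be sketched as relative energies of the discrete wavelet decomposition bands. In the sketch below, assuming PyWavelets and SciPy, plain k-means stands in for the paper's adaptive genetic clustering, and all signals are synthetic.

```python
# Minimal sketch: relative energy per wavelet-decomposition band of an AE signal
# segment, followed by unsupervised clustering into "sharp" vs "dull" states.
import numpy as np
import pywt
from scipy.cluster.vq import kmeans2

def band_energies(segment, wavelet="db4", level=5):
    coeffs = pywt.wavedec(segment, wavelet, level=level)
    e = np.array([np.sum(c ** 2) for c in coeffs])
    return e / e.sum()                          # relative energy of each band

np.random.seed(1)
# toy segments: odd-indexed ones get extra oscillatory content
segments = [np.random.randn(1024) + (i % 2) * np.sin(np.linspace(0, 500, 1024))
            for i in range(20)]
features = np.array([band_energies(s) for s in segments])
centroids, labels = kmeans2(features, 2, minit="points")
```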

  18. A second generation wavelet based finite elements on triangulations

    NASA Astrophysics Data System (ADS)

    Quraishi, S. M.; Sandeep, K.

    2011-08-01

    In this paper we have developed a second generation wavelet based finite element method for solving elliptic PDEs on two dimensional triangulations using customized operator-dependent wavelets. The wavelets derived from a Courant element are tailored in the second generation framework to decouple some elliptic PDE operators. Starting from a primitive hierarchical basis, the wavelets are lifted (enhanced) to achieve local scale-orthogonality with respect to the operator of the PDE. The lifted wavelets are used in a Galerkin-type discretization of the PDE, which results in a block-diagonal, sparse multiscale stiffness matrix. The blocks corresponding to different resolutions are completely decoupled, which makes the implementation of the new wavelet finite element very simple and efficient. The solution is enriched adaptively and incrementally using finer-scale wavelets. The new procedure completely eliminates the wastage of resources associated with classical finite element refinement. Finally some numerical experiments are conducted to analyze the performance of this method.
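
    The lifting idea itself can be illustrated with the simplest case, a 1D Haar predict/update step; the paper's wavelets are instead operator-adapted Courant-element wavelets on triangulations, so the following is only a schematic of the mechanism.

```python
# Minimal sketch of one lifting step (plain Haar lifting): predict the odd
# samples from the even ones, then update the evens to preserve the mean.
import numpy as np

def haar_lift(signal):
    s = signal[0::2].astype(float).copy()   # even samples (coarse)
    d = signal[1::2].astype(float).copy()   # odd samples (detail)
    d -= s                                  # predict: detail = odd - even
    s += d / 2.0                            # update: coarse = running average
    return s, d

def haar_unlift(s, d):
    s = s - d / 2.0                         # undo update
    d = d + s                               # undo predict
    out = np.empty(s.size + d.size)
    out[0::2], out[1::2] = s, d
    return out

x = np.arange(8, dtype=float)
s, d = haar_lift(x)
assert np.allclose(haar_unlift(s, d), x)    # perfect reconstruction
```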

  19. Wavelet-based image analysis system for soil texture analysis

    NASA Astrophysics Data System (ADS)

    Sun, Yun; Long, Zhiling; Jang, Ping-Rey; Plodinec, M. John

    2003-05-01

    Soil texture is defined as the relative proportion of clay, silt and sand found in a given soil sample. It is an important physical property of soil that affects such phenomena as plant growth and agricultural fertility. Traditional methods used to determine soil texture are either time consuming (hydrometer), or subjective and experience-demanding (field tactile evaluation). Considering that textural patterns observed at soil surfaces are uniquely associated with soil textures, we propose an innovative approach to soil texture analysis, in which wavelet frames-based features representing texture contents of soil images are extracted and categorized by applying a maximum likelihood criterion. The soil texture analysis system has been tested successfully with an accuracy of 91% in classifying soil samples into one of three general categories of soil textures. In comparison with the common methods, this wavelet-based image analysis approach is convenient, efficient, fast, and objective.

  20. Force estimation and prediction from time-varying density images.

    PubMed

    Jagannathan, Srinivasan; Horn, Berthold Klaus Paul; Ratilal, Purnima; Makris, Nicholas Constantine

    2011-06-01

    We present methods for estimating forces which drive motion observed in density image sequences. Using these forces, we also present methods for predicting velocity and density evolution. To do this, we formulate and apply a Minimum Energy Flow (MEF) method which is capable of estimating both incompressible and compressible flows from time-varying density images. Both the MEF and force-estimation techniques are applied to experimentally obtained density images, spanning spatial scales from micrometers to several kilometers. Using density image sequences describing cell splitting, for example, we show that cell division is driven by gradients in apparent pressure within a cell. Using density image sequences of fish shoals, we also quantify 1) intershoal dynamics such as coalescence of fish groups over tens of kilometers, 2) fish mass flow between different parts of a large shoal, and 3) the stresses acting on large fish shoals. PMID:20921583

  1. Density estimation and random variate generation using multilayer networks.

    PubMed

    Magdon-Ismail, M; Atiya, A

    2002-01-01

    In this paper we consider two important topics: density estimation and random variate generation. We present a framework that is easily implemented using the familiar multilayer neural network. First, we develop two new methods for density estimation, a stochastic method and a related deterministic method. Both methods are based on approximating the distribution function, the density being obtained by differentiation. In the second part of the paper, we develop new random number generation methods. Our methods do not suffer from some of the restrictions of existing methods in that they can be used to generate numbers from any density provided that certain smoothness conditions are satisfied. One of our methods is based on an observed inverse relationship between the density estimation process and random number generation. We present two variants of this method, a stochastic, and a deterministic version. We propose a second method that is based on a novel control formulation of the problem, where a "controller network" is trained to shape a given density into the desired density. We justify the use of all the methods that we propose by providing theoretical convergence results. In particular, we prove that the L_infinity convergence to the true density for both the density estimation and random variate generation techniques occurs at a rate O((log log N / N)^((1-epsilon)/2)), where N is the number of data points and epsilon can be made arbitrarily small for sufficiently smooth target densities. This bound is very close to the optimally achievable convergence rate under similar smoothness conditions. Also, for comparison, the L_2 root mean square (rms) convergence rate of a positive kernel density estimator is O(N^(-2/5)) when the optimal kernel width is used. We present numerical simulations to illustrate the performance of the proposed density estimation and random variate generation methods. In addition, we present an extended introduction and bibliography that serves as an overview and reference for the practitioner. PMID:18244452
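
    A loose sketch of the distribution-function idea, assuming PyTorch: fit a small network to the empirical CDF and differentiate it to read off a density. The network, training schedule, and absence of an explicit monotonicity constraint are all simplifications relative to the paper's methods.

```python
# Minimal sketch: approximate the empirical CDF with a network, then obtain the
# density as the derivative of the fitted CDF via autograd.
import torch

torch.manual_seed(0)
x = torch.sort(torch.randn(500)).values                 # sorted sample
ecdf = torch.arange(1, 501, dtype=torch.float32) / 500  # empirical CDF targets

net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 1), torch.nn.Sigmoid())
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
for _ in range(2000):
    opt.zero_grad()
    loss = ((net(x[:, None]).squeeze(1) - ecdf) ** 2).mean()
    loss.backward()
    opt.step()

grid = torch.linspace(-3, 3, 200, requires_grad=True)
F = net(grid[:, None]).sum()                 # each output depends on its own grid point
density = torch.autograd.grad(F, grid)[0]    # dF/dx: the density estimate on the grid
```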

  2. DENSITY ESTIMATION AND RANDOM VARIATE GENERATION USING MULTILAYER NETWORKS \\Lambda

    E-print Network

    Magdon-Ismail, Malik

    ... obtained by differentiation. In the second part of the paper, we develop new random number generation methods ... We present two variants of this method, a stochastic and a deterministic version ... convergence to the true density for both the density estimation and random variate generation techniques ...

  3. On pixel count based crowd density estimation for visual surveillance

    Microsoft Academic Search

    Ruihua MA; L. Li; W. Huang; Q. Tian

    2004-01-01

    Surveillance systems for public security are going beyond conventional CCTV. A new generation of systems relies on image processing and computer vision techniques, delivers more ready-to-use information, and provides assistance for early detection of unusual events. Crowd density is a useful source of information because unusual crowdedness is often related to unusual events. Previous works on crowd density estimation
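
    At their simplest, pixel-count methods reduce to counting foreground pixels with a perspective correction. The sketch below is a toy illustration with invented calibration numbers, not the paper's pipeline.

```python
# Minimal sketch: weight foreground pixels by a per-row perspective factor, then
# divide by the calibrated pixel count of one person at the reference depth.
import numpy as np

def crowd_estimate(foreground_mask, pixels_per_person_ref, row_weights):
    weighted = (foreground_mask * row_weights[:, None]).sum()
    return weighted / pixels_per_person_ref

h, w = 240, 320
mask = np.zeros((h, w))
mask[150:200, 100:180] = 1                     # toy foreground blob
weights = np.linspace(2.0, 1.0, h)             # distant rows count for more
print(crowd_estimate(mask, pixels_per_person_ref=4000.0, row_weights=weights))
```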

  4. Estimates of transition densities for Brownian motion on nested fractals

    Microsoft Academic Search

    Takashi Kumagai

    1993-01-01

    Summary: We obtain upper and lower bounds for the transition densities of Brownian motion on nested fractals. Compared with the estimate on the Sierpinski gasket, the results require the introduction of a new exponent, d_J, related to the "shortest path metric" and "chemical exponent" on nested fractals. Further, Hölder orders of the resolvent densities, sample paths and local times are obtained.

  5. Posterior consistency of logistic Gaussian process priors in density estimation

    Microsoft Academic Search

    Surya T. Tokdar; Jayanta K. Ghosh

    2007-01-01

    We establish weak and strong posterior consistency of Gaussian process priors studied by Lenk [1988. The logistic normal distribution for Bayesian, nonparametric, predictive densities. J. Amer. Statist. Assoc. 83 (402), 509–516] for density estimation. Weak consistency is related to the support of a Gaussian process in the sup-norm topology which is explicitly identified for many covariance kernels. In fact we

  6. Biorthogonal Box Spline Wavelet Bases

    E-print Network

    Stephan Dahlke; Karlheinz Gröchenig; Vera Latour

  7. Wavelet-based multispectral face recognition

    E-print Network

    Shyu, Mei-Ling

    LIU Dian-ting; ZHOU Xiao-dan; WANG Cheng-wen. ... Thermal infrared images have been used as complements of visible light images for face recognition. Long-wave

  8. Evaluation of Gabor-Wavelet-Based Facial Action Unit Recognition in Image Sequences of Increasing Complexity

    Microsoft Academic Search

    Ying-li Tian; Takeo Kanade; Jeffrey F. Cohn

    2002-01-01

    Previous work suggests that Gabor-wavelet-based methods can achieve high sensitivity and specificity for emotion-specified expressions (e.g., happy, sad) and single action units (AUs) of the Facial Action Coding System (FACS). This paper evaluates a Gabor-wavelet-based method to recognize AUs in image sequences of increasing complexity. A recognition rate of 83% is obtained for three single AUs when

  9. Wavelet-based representations for the 1/f family of fractal processes

    Microsoft Academic Search

    G. W. Wornell

    1993-01-01

    It is demonstrated that 1/f fractal processes are, in a broad sense, optimally represented in terms of orthonormal wavelet bases. Specifically, via a useful frequency-domain characterization for 1/f processes, the wavelet expansion's role as a Karhunen-Loeve-type expansion for 1/f processes is developed. As an illustration of potential, it is shown that wavelet-based representations naturally lead to highly efficient solutions to

  10. Non-iterative wavelet-based deconvolution for sparse aperture system

    NASA Astrophysics Data System (ADS)

    Xu, Wenhai; Zhao, Ming; Li, Hongshu

    2013-05-01

    Optical sparse aperture imaging is a promising technology for obtaining high resolution with a significant reduction in size and weight, achieved by minimizing the total light collection area. However, as the collection area decreases, the OTF is also greatly attenuated, and thus the direct imaging quality of a sparse aperture system is very poor. In this paper, we focus on post-processing methods for sparse aperture systems and propose a non-iterative wavelet-based deconvolution algorithm. The algorithm adaptively denoises the Fourier-based deconvolution result in a wavelet basis. We set up a Golay-3 sparse-aperture imaging system and performed imaging and deconvolution experiments on natural scenes. The experiments demonstrate that the proposed method greatly improves the imaging quality of the Golay-3 sparse-aperture system and produces satisfactory visual quality. Furthermore, our experimental results also indicate that the sparse aperture system has the potential to reach higher resolution with the help of better post-processing deconvolution techniques.
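
    A rough sketch of the two-stage idea, assuming PyWavelets and a toy box PSF (a Tikhonov-regularized inverse filter followed by soft wavelet thresholding; not the authors' exact algorithm or the Golay-3 transfer function):

```python
# Minimal sketch: regularized Fourier deconvolution, then wavelet-domain denoising.
import numpy as np
import pywt

def fourier_deconvolve(blurred, psf, eps=1e-2):
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.fft.fft2(blurred)
    F = G * np.conj(H) / (np.abs(H) ** 2 + eps)    # Tikhonov-regularized inverse
    return np.real(np.fft.ifft2(F))

def wavelet_denoise(img, thr, wavelet="sym8", level=3):
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    out = [coeffs[0]] + [tuple(pywt.threshold(d, thr, "soft") for d in det)
                         for det in coeffs[1:]]
    return pywt.waverec2(out, wavelet)

# toy example: blur a square with a small box PSF, add noise, then restore
rng = np.random.default_rng(3)
scene = np.zeros((128, 128)); scene[48:80, 48:80] = 1.0
psf = np.ones((5, 5)) / 25.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(scene) * np.fft.fft2(psf, s=scene.shape)))
noisy = blurred + rng.normal(0.0, 0.01, scene.shape)
restored = wavelet_denoise(fourier_deconvolve(noisy, psf), thr=0.02)
```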

  11. Wavelet-based laser-induced ultrasonic inspection in pipes

    NASA Astrophysics Data System (ADS)

    Baltazar-López, Martín E.; Suh, Steve; Chona, Ravinder; Burger, Christian P.

    2006-02-01

    The feasibility of detecting localized defects in tubing using wavelet-based laser-induced ultrasonic guided waves as an inspection method is examined. Ultrasonic guided waves initiated and propagating in hollow cylinders (pipes and/or tubes) are studied as an alternative, robust nondestructive in situ inspection method. Contrary to other traditional methods for pipe inspection, in which contact transducers (electromagnetic, piezoelectric) and/or coupling media (submersion liquids) are used, this method is characterized by its non-contact nature. This characteristic is particularly important in applications involving Nondestructive Evaluation (NDE) of materials because the signal being detected corresponds only to the induced wave. Cylindrical guided waves are generated using a Q-switched Nd:YAG laser, and a Fiber Tip Interferometry (FTI) system is used to acquire the waves. Guided wave experimental techniques are developed for the measurement of phase velocities to determine elastic properties of the material and the location and geometry of flaws, including inclusions, voids, and cracks in hollow cylinders. Compared to traditional bulk wave methods, the use of guided waves offers several important potential advantages, including better inspection efficiency, applicability to in-situ tube inspection, and fewer evaluation fluctuations with increased reliability.

  12. Wavelet-based noise-model driven denoising algorithm for differential phase contrast mammography.

    PubMed

    Arboleda, Carolina; Wang, Zhentian; Stampanoni, Marco

    2013-05-01

    Traditional mammography can be positively complemented by phase contrast and scattering x-ray imaging, because they can detect subtle differences in the electron density of a material and measure the local small-angle scattering power generated by the microscopic density fluctuations in the specimen, respectively. The grating-based x-ray interferometry technique can produce absorption, differential phase contrast (DPC) and scattering signals of the sample, in parallel, and works well with conventional X-ray sources; thus, it constitutes a promising method for more reliable breast cancer screening and diagnosis. Recently, our team proved that this novel technology can provide images superior to conventional mammography. This new technology was used to image whole native breast samples directly after mastectomy. The images acquired show high potential, but the noise level associated to the DPC and scattering signals is significant, so it is necessary to remove it in order to improve image quality and visualization. The noise models of the three signals have been investigated and the noise variance can be computed. In this work, a wavelet-based denoising algorithm using these noise models is proposed. It was evaluated with both simulated and experimental mammography data. The outcomes demonstrated that our method offers a good denoising quality, while simultaneously preserving the edges and important structural features. Therefore, it can help improve diagnosis and implement further post-processing techniques such as fusion of the three signals acquired. PMID:23669913
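
    A generic sketch of noise-model-driven shrinkage follows, assuming PyWavelets: the threshold is set from a known (model-derived) noise standard deviation rather than estimated from the data. The paper's DPC- and scattering-specific noise models are not reproduced here.

```python
# Minimal sketch: soft-threshold wavelet shrinkage with a threshold driven by a
# known noise standard deviation (universal threshold sigma * sqrt(2 log n)).
import numpy as np
import pywt

def denoise(img, sigma, wavelet="db8", level=3):
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    thr = sigma * np.sqrt(2.0 * np.log(img.size))
    out = [coeffs[0]]                                 # keep approximation band
    for detail in coeffs[1:]:
        out.append(tuple(pywt.threshold(d, thr, mode="soft") for d in detail))
    return pywt.waverec2(out, wavelet)

rng = np.random.default_rng(2)
clean = np.zeros((128, 128)); clean[32:96, 32:96] = 1.0
noisy = clean + rng.normal(0.0, 0.2, clean.shape)     # known sigma = 0.2
restored = denoise(noisy, sigma=0.2)
```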

  13. Ultrasonic velocity for estimating density of structural ceramics

    NASA Technical Reports Server (NTRS)

    Klima, S. J.; Watson, G. K.; Herbell, T. P.; Moore, T. J.

    1981-01-01

    The feasibility of using ultrasonic velocity as a measure of the bulk density of sintered alpha silicon carbide was investigated. The material studied was either in the as-sintered condition or hot isostatically pressed in the temperature range from 1850 to 2050 °C. Densities varied from approximately 2.8 to 3.2 g/cm³. Results show that the bulk, nominal density of structural grade silicon carbide articles can be estimated from ultrasonic velocity measurements to within 1 percent using 20 MHz longitudinal waves and a commercially available ultrasonic time intervalometer. The ultrasonic velocity measurement technique shows promise for screening out material with unacceptably low density levels.

  14. A Morpho-Density Approach to Estimating Neural Connectivity

    PubMed Central

    Tarigan, Bernadetta; van Pelt, Jaap; van Ooyen, Arjen; de Gunst, Mathisca

    2014-01-01

    Neuronal signal integration and information processing in cortical neuronal networks critically depend on the organization of synaptic connectivity. Because of the challenges involved in measuring a large number of neurons, synaptic connectivity is difficult to determine experimentally. Current computational methods for estimating connectivity typically rely on the juxtaposition of experimentally available neurons and the application of mathematical techniques to compute estimates of neural connectivity. However, since the number of available neurons is very limited, these connectivity estimates may be subject to large uncertainties. We use a morpho-density field approach applied to a vast ensemble of model-generated neurons. A morpho-density field (MDF) describes the distribution of neural mass in the space around the neural soma. The estimated axonal and dendritic MDFs are derived from 100,000 model neurons that are generated by a stochastic phenomenological model of neurite outgrowth. These MDFs are then used to estimate the connectivity between pairs of neurons as a function of their inter-soma displacement. Compared with other density-field methods, our approach to estimating synaptic connectivity uses fewer restrictive assumptions and produces connectivity estimates with a lower standard deviation. An important requirement is that the model-generated neurons accurately reflect the morphology, and the variation in morphology, of the experimental neurons used for optimizing the model parameters. As such, the method remains subject to the uncertainties caused by the limited number of neurons in the experimental data set and by the quality of the model and of the assumptions used in creating the MDFs and in calculating the connectivity estimates. In summary, MDFs are a powerful tool for visualizing the spatial distribution of axonal and dendritic densities, for estimating the number of potential synapses between neurons with low standard deviation, and for obtaining a greater understanding of the relationship between neural morphology and network connectivity. PMID:24489738
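
    As a hedged illustration of the MDF idea (not the paper's estimator): with discretized axonal and dendritic mass fields on a common voxel grid, an overlap sum at a given inter-soma displacement gives a proxy for the expected number of potential synapses. The proportionality constant and grids are assumptions for illustration.

    ```python
    import numpy as np

    def expected_contacts(axonal_mdf, dendritic_mdf, shift, kappa=1.0):
        """Overlap of neuron A's axonal MDF with neuron B's dendritic MDF.

        axonal_mdf, dendritic_mdf: 3-D arrays of neural mass per voxel on a
        common grid centred on each soma; shift: B's inter-soma displacement
        in voxels; kappa: contacts per unit of overlapping mass (assumed).
        """
        shifted = np.roll(dendritic_mdf, shift, axis=(0, 1, 2))
        return kappa * np.sum(axonal_mdf * shifted)
    ```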

  15. Tractable multivariate binary density estimation and the restricted Boltzmann forest.

    PubMed

    Larochelle, Hugo; Bengio, Yoshua; Turian, Joseph

    2010-09-01

    We investigate the problem of estimating the density function of multivariate binary data. In particular, we focus on models for which computing the estimated probability of any data point is tractable. In such a setting, previous work has mostly concentrated on mixture modeling approaches. We argue that for the problem of tractable density estimation, the restricted Boltzmann machine (RBM) provides a competitive framework for multivariate binary density modeling. With this in mind, we also generalize the RBM framework and present the restricted Boltzmann forest (RBForest), which replaces the binary variables in the hidden layer of RBMs with groups of tree-structured binary variables. This extension allows us to obtain models that have more modeling capacity but remain tractable. In experiments on several data sets, we demonstrate the competitiveness of this approach and study some of its properties. PMID:20569177
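
    A small sketch to make the notion of "tractable" concrete: the free energy of a binary RBM gives the unnormalized log probability of a data point, and here the partition function is computed by brute-force enumeration of visible states, feasible only for tiny dimensions. The RBForest of the paper achieves tractability by structuring the hidden layer instead; this code is only an illustration.

    ```python
    import itertools
    import numpy as np

    def free_energy(v, W, b, c):
        # F(v) = -b.v - sum_j log(1 + exp(c_j + (v W)_j)) for a binary RBM
        return -v @ b - np.sum(np.logaddexp(0.0, c + v @ W))

    def rbm_log_density(v, W, b, c):
        # Brute-force partition function over all visible states: feasible
        # only for a handful of visible units.
        dim = len(b)
        log_z = np.logaddexp.reduce([
            -free_energy(np.array(u, dtype=float), W, b, c)
            for u in itertools.product([0, 1], repeat=dim)
        ])
        return -free_energy(v, W, b, c) - log_z
    ```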

  16. Nonparametric probability density estimation by optimization theoretic techniques

    NASA Technical Reports Server (NTRS)

    Scott, D. W.

    1976-01-01

    Two nonparametric probability density estimators are considered. The first is the kernel estimator. The problem of choosing the kernel scaling factor based solely on a random sample is addressed. An interactive mode is discussed and an algorithm proposed to choose the scaling factor automatically. The second nonparametric probability estimate uses penalty function techniques with the maximum likelihood criterion. A discrete maximum penalized likelihood estimator is proposed and is shown to be consistent in the mean square error. A numerical implementation technique for the discrete solution is discussed and examples displayed. An extensive simulation study compares the integrated mean square error of the discrete and kernel estimators. The robustness of the discrete estimator is demonstrated graphically.
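
    As a concrete illustration of choosing the kernel scaling factor from the sample alone, here is a hedged sketch that maximizes the leave-one-out log-likelihood of a Gaussian kernel estimator over a bandwidth grid; this is one standard automatic rule, not necessarily the algorithm proposed in the paper.

    ```python
    import numpy as np

    def kde(x, data, h):
        u = (x[:, None] - data[None, :]) / h
        return np.exp(-0.5 * u**2).sum(axis=1) / (len(data) * h * np.sqrt(2 * np.pi))

    def loo_bandwidth(data, grid=np.logspace(-2, 1, 50)):
        n = len(data)
        best_h, best_ll = grid[0], -np.inf
        for h in grid:
            u = (data[:, None] - data[None, :]) / h
            k = np.exp(-0.5 * u**2) / (h * np.sqrt(2 * np.pi))
            np.fill_diagonal(k, 0.0)      # leave each point out of its own fit
            ll = np.sum(np.log(k.sum(axis=1) / (n - 1) + 1e-300))
            if ll > best_ll:
                best_h, best_ll = h, ll
        return best_h

    rng = np.random.default_rng(0)
    data = rng.normal(size=200)
    h = loo_bandwidth(data)
    fhat = kde(np.linspace(-4, 4, 200), data, h)   # automatic-bandwidth estimate
    ```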

  17. Embedded wavelet-based face recognition under variable position

    NASA Astrophysics Data System (ADS)

    Cotret, Pascal; Chevobbe, Stéphane; Darouich, Mehdi

    2015-02-01

    For several years, face recognition has been a hot topic in the image processing field: the technique is applied in several domains such as CCTV and the unlocking of electronic devices. In this context, this work studies the efficiency of a wavelet-based face recognition method in terms of robustness to subject position and performance on various systems. The use of the wavelet transform has a limited impact on the position robustness of PCA-based face recognition. This work shows, for a well-known database (Yale face database B*), that the subject position in 3D space can vary by up to 10% of the original ROI size without decreasing recognition rates. Face recognition is performed on the approximation coefficients of the image wavelet transform: results are still satisfying after 3 levels of decomposition. Furthermore, the face database size can be divided by a factor of 64 (2^(2K) with K = 3). In the context of ultra-embedded vision systems, memory footprint is one of the key points to be addressed; that is the reason why compression techniques such as the wavelet transform are interesting. Furthermore, it leads to a low-complexity face detection stage compliant with the limited computational resources available on such systems. The approach described in this work is tested on three platforms, from a standard x86-based computer to nanocomputers such as the RaspberryPi and SECO boards. For K = 3 and a database with 40 faces, the mean execution time for one frame is 0.64 ms on an x86-based computer, 9 ms on a SECO board and 26 ms on a RaspberryPi (model B).
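
    The storage-reduction step can be checked with a few lines of PyWavelets: keeping only the level-K approximation of a 2-D wavelet transform shrinks the template by roughly 2^(2K), since each level halves both image dimensions. The Haar wavelet and image size below are illustrative choices, and the PCA matching stage is omitted.

    ```python
    import numpy as np
    import pywt

    def face_template(image, wavelet="haar", K=3):
        coeffs = pywt.wavedec2(image, wavelet, level=K)
        approx = coeffs[0]          # approximation subband at level K
        return approx.ravel()       # roughly image.size / 4**K values

    img = np.random.rand(128, 128)                  # stand-in for a face ROI
    print(img.size / face_template(img).size)       # 64.0 for K = 3
    ```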

  18. ENSO forecast using a wavelet-based decomposition

    NASA Astrophysics Data System (ADS)

    Deliège, Adrien; Nicolay, Samuel; Fettweis, Xavier

    2015-04-01

    The aim of this work is to introduce a new method for forecasting major El Niño/ La Niña events with the use of a wavelet-based mode decomposition. These major events are related to sea surface temperature anomalies in the tropical Pacific Ocean: anomalous warmings are known as El Niño events, while excessive coolings are referred as La Niña episodes. These climatological phenomena are of primary importance since they are involved in many teleconnections; predicting them long before they occur is therefore a crucial concern. First, we perform a wavelet transform (WT) of the monthly sampled El Niño Southern Oscillation 3.4 index (from 1950 to present) and compute the associated scale spectrum, which can be seen as the energy carried in the WT as a function of the scale. It can be observed that the spectrum reaches five peaks, corresponding to time scales of about 7, 20, 31, 43 and 61 months respectively. Therefore, the Niño 3.4 signal can be decomposed into five dominant oscillating components with time-varying amplitudes, these latter being given by the modulus of the WT at the associated pseudo-periods. The reconstruction of the index based on these five components is accurate since more than 93% of the El Niño/ La Niña events of the last 60 years are recovered and no major event is erroneously predicted. Then, the components are smoothly extrapolated using polynomials and added together, giving so several years forecasts of the Niño 3.4 index. In order to increase the reliability of the forecasts, we perform several months hindcasts (i.e. retroactive probing forecasts) which can be validated with the existing data. It turns out that most of the major events can be accurately predicted up to three years in advance, which makes our methodology competitive for such forecasts. Finally, we discuss the El Niño conditions currently undergone and give indications about the next La Niña event.
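
    The first step, a continuous wavelet transform and its scale spectrum, can be sketched with PyWavelets; the Morlet wavelet and the scale grid below are illustrative choices, not necessarily the authors'.

    ```python
    import numpy as np
    import pywt

    def scale_spectrum(index, scales=np.arange(2, 128)):
        # CWT of the monthly index; energy per scale reveals dominant modes.
        coef, _ = pywt.cwt(index, scales, "morl", sampling_period=1.0)
        energy = np.sum(np.abs(coef) ** 2, axis=1)
        return scales, energy
    # Peaks of `energy` would sit near the ~7, 20, 31, 43 and 61 month scales
    # reported above; each component is then the CWT modulus at the matching
    # pseudo-period, smoothly extrapolated to produce the forecast.
    ```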

  19. NONPARAMETRIC ESTIMATION OF MULTIVARIATE CONVEX-TRANSFORMED DENSITIES.

    PubMed

    Seregin, Arseni; Wellner, Jon A

    2010-12-01

    We study estimation of multivariate densities p of the form p(x) = h(g(x)) for x ∈ ℝ^d, for a fixed monotone function h and an unknown convex function g. The canonical example is h(y) = e^{-y} for y ∈ ℝ; in this case, the resulting class of densities P(e^{-y}) is well known as the class of log-concave densities. Other functions h allow for classes of densities with heavier tails than the log-concave class. We first investigate when the maximum likelihood estimator p̂ exists for the class P(h) for various choices of monotone transformations h, including decreasing and increasing functions h. The resulting models for increasing transformations h extend the classes of log-convex densities studied previously in the econometrics literature, corresponding to h(y) = exp(y). We then establish consistency of the maximum likelihood estimator for fairly general functions h, including the log-concave class P(e^{-y}) and many others. In a final section, we provide asymptotic minimax lower bounds for the estimation of p and its vector of derivatives at a fixed point x_0 under natural smoothness hypotheses on h and g. The proofs rely heavily on results from convex analysis. PMID:21423877

  20. Statistical Properties of Parasite Density Estimators in Malaria

    PubMed Central

    Hammami, Imen; Nuel, Grégory; Garcia, André

    2013-01-01

    Malaria is a global health problem responsible for nearly one million deaths every year, around 85% of which concern children younger than five years old in Sub-Saharan Africa. In addition, millions of clinical cases are declared every year. The level of infection, expressed as parasite density, is classically defined as the number of asexual parasites per microliter of blood. Microscopy of Giemsa-stained thick blood films is the gold standard for parasite enumeration. Parasite density estimation methods usually involve threshold values: either the number of white blood cells counted or the number of high power fields read. However, the statistical properties of the parasite density estimators generated by these methods have largely been overlooked. Here, we studied the statistical properties (mean error, coefficient of variation, false negative rates) of parasite density estimators of commonly used threshold-based counting techniques for variable threshold values. We also assessed the influence of the thresholds on the cost-effectiveness of parasite density estimation methods. In addition, we give further insight into the behavior of measurement errors as threshold values vary, and into what the optimal threshold values that minimize this variability should be. PMID:23516389
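
    The WBC-threshold counting scheme described above, in code: count parasites until a threshold number of white blood cells has been seen, then scale by an assumed WBC concentration. The conventional 8000 WBC/µL figure and the names below are illustrative assumptions.

    ```python
    def parasite_density(parasites_counted, wbc_counted, wbc_per_ul=8000):
        """Parasites per microliter of blood, WBC-threshold counting."""
        if wbc_counted == 0:
            raise ValueError("no white blood cells counted")
        return parasites_counted * wbc_per_ul / wbc_counted

    print(parasite_density(125, 500))   # 2000 parasites/uL
    ```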

  1. Estimate of snow density knowing grain shape and hardness

    NASA Astrophysics Data System (ADS)

    Valt, Mauro; Cianfarra, Paola; Cagnati, Anselmo; Chiambretti, Igor; Moro, Daniele

    2010-05-01

    Alpine avalanche warning services produce snow profiles weekly. Usually such profiles are made in horizontal snow fields, homogeneously distributed by altitude and climatic micro-area. Such profiles allow identification of grain shape, dimension and hardness (hand test). Horizontal coring of each layer allows measurement of the snow density. These data allow avalanche hazard evaluation and an estimation of the Snow Water Equivalent (SWE). Nevertheless, measuring the density of very thin layers (less than 5 cm thick) by coring is very difficult, and such layers are usually not measured by snow technicians. To bypass this problem, a statistical analysis was performed to assign density values to layers which cannot be measured. Knowing each layer's thickness and density, this system allows the SWE to be correctly estimated. This paper presents typical snow density values for given snow hardness values and grain types for the Eastern Italian Alps. The study is based on 2500 snow profiles with 17000 sampled snow layers from the Dolomites and Venetian Prealps (Eastern Alps). The table of typical snow density values for each grain type is used by the YETI software, which processes snow profiles and automatically evaluates SWE. This method allows better use of Avalanche Warning Services datasets for SWE estimation and local evaluation of yearly SWE trends for each snow field.
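
    A minimal sketch of the SWE computation this enables: assign each layer a density (measured, or looked up from grain type and hand hardness when the layer is too thin to core) and integrate over the profile. The lookup values below are invented placeholders, not the paper's table.

    ```python
    DENSITY_TABLE = {("rounded", 4): 280.0, ("faceted", 2): 180.0}  # kg/m3

    def swe_mm(layers):
        """layers: (thickness_m, grain_type, hand_hardness, density_or_None)."""
        total = 0.0
        for thickness, grain, hardness, density in layers:
            rho = density if density is not None else DENSITY_TABLE[(grain, hardness)]
            total += thickness * rho   # kg/m2 is numerically mm of water
        return total

    print(swe_mm([(0.30, "rounded", 4, None), (0.02, "faceted", 2, None)]))
    ```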

  2. A Wavelet-Based Noise Reduction Algorithm and Its Clinical Evaluation in Cochlear Implants

    PubMed Central

    Ye, Hua; Deng, Guang; Mauger, Stefan J.; Hersbach, Adam A.; Dawson, Pam W.; Heasman, John M.

    2013-01-01

    Noise reduction is often essential for cochlear implant (CI) recipients to achieve acceptable speech perception in noisy environments. Most noise reduction algorithms applied to audio signals are based on time-frequency representations of the input, such as the Fourier transform. Algorithms based on other representations may also be able to provide comparable or improved speech perception and listening quality improvements. In this paper, a noise reduction algorithm for CI sound processing is proposed based on the wavelet transform. The algorithm uses a dual-tree complex discrete wavelet transform followed by shrinkage of the wavelet coefficients based on a statistical estimation of the variance of the noise. The proposed noise reduction algorithm was evaluated by comparing its performance to those of many existing wavelet-based algorithms. The speech transmission index (STI) of the proposed algorithm is significantly better than that of the other tested algorithms for speech-weighted noise at different signal-to-noise ratios. The effectiveness of the proposed system was clinically evaluated with CI recipients. A significant improvement in speech perception of 1.9 dB was found on average in speech-weighted noise. PMID:24086605

  3. An Infrastructureless Approach to Estimate Vehicular Density in Urban Environments

    PubMed Central

    Sanguesa, Julio A.; Fogue, Manuel; Garrido, Piedad; Martinez, Francisco J.; Cano, Juan-Carlos; Calafate, Carlos T.; Manzoni, Pietro

    2013-01-01

    In Vehicular Networks, communication success usually depends on the density of vehicles, since a higher density allows shorter and more reliable wireless links. Thus, knowing the density of vehicles in a vehicular communications environment is important, as better opportunities for wireless communication can show up. However, vehicle density is highly variable in time and space. This paper deals with the importance of predicting the density of vehicles in vehicular environments in order to take decisions for enhancing the dissemination of warning messages between vehicles. We propose a novel mechanism to estimate the vehicular density in urban environments. Our mechanism uses as input parameters the number of beacons received per vehicle and the topological characteristics of the environment where the vehicles are located. Simulation results indicate that, unlike previous proposals based solely on the number of beacons received, our approach is able to accurately estimate the vehicular density, and therefore it could support more efficient dissemination protocols for vehicular environments, as well as improve previously proposed schemes. PMID:23435054
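
    A deliberately simple, hypothetical illustration of the idea only: combine the beacon count observed by a vehicle with a topology-dependent correction factor for the local street layout. The correction value is an invented placeholder; the paper's actual mechanism is calibrated from simulation.

    ```python
    import math

    def estimate_density(beacons_received, radio_range_m, topology_factor=1.0):
        """Rough vehicles per square km within radio range."""
        area_km2 = math.pi * (radio_range_m / 1000.0) ** 2
        return topology_factor * beacons_received / area_km2

    print(estimate_density(beacons_received=42, radio_range_m=400,
                           topology_factor=1.3))
    ```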

  4. Estimating Rio Grande wild turkey densities in Texas 

    E-print Network

    Locke, Shawn Lee

    2009-06-02

    the effectiveness of distance sampling from the air and ground to estimate wild turkey densities in the Edwards Plateau Ecoregion of Texas. Based on the literature review and the decision matrix, I determined two methods for field evaluation (i.e., infrared camera...

  5. Estimation of the Space Density of Low Surface Brightness Galaxies

    Microsoft Academic Search

    F. H. Briggs

    1997-01-01

    The space density of low surface brightness and tiny gas-rich dwarf galaxies is estimated for two recent catalogs: the Arecibo Survey of Northern Dwarf and Low Surface Brightness Galaxies and the Catalog of Low Surface Brightness Galaxies, List II. The goals are (1) to evaluate the additions to the completeness of the Fisher & Tully 10 Mpc sample and (2)

  6. Probability density estimation with tunable kernels using orthogonal forward regression.

    PubMed

    Chen, Sheng; Hong, Xia; Harris, Chris J

    2010-08-01

    A generalized or tunable-kernel model is proposed for probability density function estimation based on an orthogonal forward regression procedure. Each stage of the density estimation process determines a tunable kernel, namely, its center vector and diagonal covariance matrix, by minimizing a leave-one-out test criterion. The kernel mixing weights of the constructed sparse density estimate are finally updated using the multiplicative nonnegative quadratic programming algorithm to ensure the nonnegativity and unity constraints, and this weight-updating process additionally has the desired ability to further reduce the model size. The proposed tunable-kernel model has advantages, in terms of model generalization capability and model sparsity, over the standard fixed-kernel model that restricts kernel centers to the training data points and employs a single common kernel variance for every kernel. On the other hand, it does not optimize all the model parameters together and thus avoids the problems of high-dimensional ill-conditioned nonlinear optimization associated with the conventional finite mixture model. Several examples are included to demonstrate the ability of the proposed tunable-kernel model to construct very compact and accurate density estimates. PMID:20007052

  7. MDL Histogram Density Estimation Petri Kontkanen, Petri Myllymaki

    E-print Network

    Myllymäki, Petri

    Our approach is based on information theory, more specifically on the minimum description length (MDL) principle, which can be applied to tasks such as data clustering, density estimation, and image denoising; histogram density estimation is treated as a model selection problem.
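
    A rough illustration of the idea: score each candidate bin count by the code length of the data under the histogram plus a model-cost term, and keep the minimizer. For brevity this uses a BIC-like two-part code as a stand-in for the exact NML-based criterion of the paper.

    ```python
    import numpy as np

    def mdl_histogram(data, max_bins=50):
        n = len(data)
        best_cost, best_k = np.inf, 1
        for k in range(1, max_bins + 1):
            counts, edges = np.histogram(data, bins=k)
            widths = np.diff(edges)
            nz = counts > 0
            loglik = np.sum(counts[nz] * np.log(counts[nz] / (n * widths[nz])))
            cost = -loglik + 0.5 * (k - 1) * np.log(n)   # data code + model code
            if cost < best_cost:
                best_cost, best_k = cost, k
        return best_k

    rng = np.random.default_rng(0)
    print(mdl_histogram(rng.normal(size=1000)))
    ```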

  8. Estimation of wind energy potential using different probability density functions

    Microsoft Academic Search

    Tian Pau Chang

    2011-01-01

    In addition to the probability density function (pdf) derived with maximum entropy principle (MEP), several kinds of mixture probability functions have already been applied to estimate wind energy potential in scientific literature, such as the bimodal Weibull function (WW) and truncated Normal Weibull function (NW). In this paper, two other mixture functions are proposed for the first time to wind
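
    A short worked example of how a fitted pdf yields a wind energy figure: for a Weibull pdf with scale c and shape k, the mean cube speed is E[v^3] = c^3 Γ(1 + 3/k), so the mean power density is 0.5 ρ c^3 Γ(1 + 3/k). Mixture pdfs such as WW or NW would be integrated numerically instead; the site values below are illustrative.

    ```python
    from math import gamma

    def weibull_power_density(c, k, rho=1.225):
        """Mean wind power density in W/m^2 for a Weibull(c, k) wind speed."""
        return 0.5 * rho * c**3 * gamma(1.0 + 3.0 / k)

    print(weibull_power_density(c=7.0, k=2.0))   # around 280 W/m^2
    ```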

  9. Density estimation in tiger populations: combining information for strong inference.

    PubMed

    Gopalaswamy, Arjun M; Royle, J Andrew; Delampady, Mohan; Nichols, James D; Karanth, K Ullas; Macdonald, David W

    2012-07-01

    A productive way forward in studies of animal populations is to efficiently make use of all the information available, either as raw data or as published sources, on critical parameters of interest. In this study, we demonstrate two approaches to the use of multiple sources of information on a parameter of fundamental interest to ecologists: animal density. The first approach produces estimates simultaneously from two different sources of data. The second approach was developed for situations in which initial data collection and analysis are followed up by subsequent data collection and prior knowledge is updated with new data using a stepwise process. Both approaches are used to estimate density of a rare and elusive predator, the tiger, by combining photographic and fecal DNA spatial capture-recapture data. The model, which combined information, provided the most precise estimate of density (8.5 +/- 1.95 tigers/100 km2 [posterior mean +/- SD]) relative to a model that utilized only one data source (photographic, 12.02 +/- 3.02 tigers/100 km2 and fecal DNA, 6.65 +/- 2.37 tigers/100 km2). Our study demonstrates that, by accounting for multiple sources of available information, estimates of animal density can be significantly improved. PMID:22919919
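
    Not the paper's spatial capture-recapture model, just a hedged toy: inverse-variance weighting of two independent estimates shows why pooling the photographic and fecal-DNA data tightens the result. The inputs are the means and SDs quoted above.

    ```python
    def combine(est1, sd1, est2, sd2):
        w1, w2 = 1.0 / sd1**2, 1.0 / sd2**2
        mean = (w1 * est1 + w2 * est2) / (w1 + w2)
        sd = (w1 + w2) ** -0.5
        return mean, sd

    print(combine(12.02, 3.02, 6.65, 2.37))   # about (8.7, 1.9) tigers/100 km2
    ```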

  10. Extracting galactic structure parameters from multivariated density estimation

    NASA Technical Reports Server (NTRS)

    Chen, B.; Creze, M.; Robin, A.; Bienayme, O.

    1992-01-01

    Multivariate statistical analysis, including cluster analysis (unsupervised classification), discriminant analysis (supervised classification) and principal component analysis (a dimensionality reduction method), together with nonparametric density estimation, has been successfully used to search for meaningful associations, in the 5-dimensional space of observables, between observed points and sets of simulated points generated from a synthetic approach to galaxy modelling. These methodologies can be applied as new tools to obtain information about hidden structure that is otherwise unrecognizable, and to place important constraints on the space distribution of various stellar populations in the Milky Way. In this paper, we concentrate on illustrating how to use nonparametric density estimation to substitute for the true densities of both the simulated and the real samples in the five-dimensional space. In order to fit model-predicted densities to reality, we derive a set of n equations (where n is the total number of observed points) in m unknown parameters (where m is the number of predefined groups). A least-squares estimation then allows us to determine the density law of the different groups and components in the Galaxy. The output of our software, which can be used in many research fields, also quantifies the systematic error between the model and the observations via a Bayes rule.
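
    A hedged sketch of the fitting step: with a nonparametric density estimate of the observed sample evaluated at n points, and model densities for m predefined groups at the same points, solve for nonnegative group weights by least squares. All names are illustrative.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    def fit_group_weights(observed_density, group_densities):
        """observed_density: (n,) values of a nonparametric density estimate;
        group_densities: (n, m) model densities for m predefined groups at
        the same n points. Returns nonnegative weights and the residual."""
        weights, residual = nnls(group_densities, observed_density)
        return weights, residual
    ```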

  11. Ionospheric electron density profile estimation using commercial AM broadcast signals

    NASA Astrophysics Data System (ADS)

    Yu, De; Ma, Hong; Cheng, Li; Li, Yang; Zhang, Yufeng; Chen, Wenjun

    2015-08-01

    A new method for estimating the bottomside electron density profile by using commercial AM broadcast signals as non-cooperative signals is presented in this paper. Without requiring any dedicated transmitters, the required input data are the measured elevation angles of signals transmitted from the known locations of broadcast stations. The input data are inverted for the QPS (quasi-parabolic segment) model parameters describing the electron density profile of the signal's reflection area by using a probabilistic inversion technique. This method has been validated on synthesized data and used with real data provided by an HF direction-finding system situated near the city of Wuhan. The estimated parameters obtained by the proposed method have been compared with vertical ionosonde data and have been used to locate the Shijiazhuang broadcast station. The simulation and experimental results indicate that the proposed ionospheric sounding method is feasible for obtaining useful electron density profiles.

  12. A Simple Deconvolving Kernel Density Estimator when Noise Is Gaussian

    Microsoft Academic Search

    Isabel Proença

    2005-01-01

    Deconvolving kernel estimators when noise is Gaussian entail heavy calculations. In order to obtain the density estimates, numerical evaluation of a specific integral is needed. This work proposes an
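
    The integral in question, evaluated naively as a hedged sketch: a deconvolving kernel density estimate under Gaussian measurement error divides the kernel's Fourier transform by the error characteristic function exp(-sigma^2 t^2 / 2) before inverting. The compactly supported kernel transform phi_k(s) = (1 - s^2)^3 on [-1, 1] is one common choice; everything here is illustrative, not this work's estimator.

    ```python
    import numpy as np

    def deconv_kde(x_grid, data, h, sigma):
        t = np.linspace(-1.0 / h, 1.0 / h, 2001)
        phi_k = (1.0 - (t * h) ** 2) ** 3                 # FT of kernel, |th| <= 1
        phi_n = np.mean(np.exp(1j * t[:, None] * data[None, :]), axis=1)
        phi_eps = np.exp(-0.5 * sigma**2 * t**2)          # Gaussian error char. fn
        integrand = phi_n * phi_k / phi_eps
        est = [np.trapz(np.exp(-1j * t * x) * integrand, t).real / (2 * np.pi)
               for x in x_grid]
        return np.maximum(np.array(est), 0.0)             # clip small negatives
    ```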

  13. A comparison of minirhizotron techniques for estimating root length density in soils of different bulk densities

    Microsoft Academic Search

    K. M. Volkmar

    1993-01-01

    Flexible- and rigid-walled minirhizotron techniques were compared for estimating the root length density of 14- to 28-day-old Pinto bean (Phaseolus vulgaris L.) and spring wheat (Triticum aestivum L.) plants in soil boxes under controlled environment conditions at three soil bulk densities (1.3, 1.5 and 1.7 g cm-3). The flexible-tube system consisted of bicycle inner tubes inflated inside augered access holes and

  14. Estimation of Enceladus Plume Density Using Cassini Flight Data

    NASA Technical Reports Server (NTRS)

    Wang, Eric K.; Lee, Allan Y.

    2011-01-01

    The Cassini spacecraft was launched on October 15, 1997 by a Titan 4B launch vehicle. After an interplanetary cruise of almost seven years, it arrived at Saturn on June 30, 2004. In 2005, Cassini completed three flybys of Enceladus, a small, icy satellite of Saturn. Observations made during these flybys confirmed the existence of water vapor plumes in the south polar region of Enceladus. Five additional low-altitude flybys of Enceladus were successfully executed in 2008-9 to better characterize these watery plumes. During some of these Enceladus flybys, the spacecraft attitude was controlled by a set of three reaction wheels. When the disturbance torque imparted on the spacecraft was predicted to exceed the control authority of the reaction wheels, thrusters were used to control the spacecraft attitude. Using telemetry data of reaction wheel rates or thruster on-times collected from four low-altitude Enceladus flybys (in 2008-10), one can reconstruct the time histories of the Enceladus plume jet density. The 1 sigma uncertainty of the estimated density is 5.9-6.7% (depending on the density estimation methodology employed). These plume density estimates could be used to confirm measurements made by other onboard science instruments and to support the modeling of Enceladus plume jets.

  15. Scatterer Number Density Considerations in Reference Phantom Based Attenuation Estimation

    PubMed Central

    Rubert, Nicholas; Varghese, Tomy

    2014-01-01

    Attenuation estimation and imaging have the potential to be a valuable tool for tissue characterization, particularly for indicating the extent of thermal ablation therapy in the liver. Often the performance of attenuation estimation algorithms is characterized using numerical simulations or tissue-mimicking (TM) phantoms containing a high scatterer number density (SND). This ensures an ultrasound signal with a Rayleigh-distributed envelope and an SNR approaching 1.91. However, biological tissue often fails to exhibit Rayleigh scattering statistics. For example, across 1,647 ROIs in 5 ex vivo bovine livers we find an envelope SNR of 1.10 ± 0.12 when imaged with the VFX 9L4 linear array transducer at a center frequency of 6.0 MHz on a Siemens S2000 scanner. In this article we examine attenuation estimation in numerical phantoms, TM phantoms with variable SNDs, and ex vivo bovine liver prior to and following thermal coagulation. We find that reference phantom based attenuation estimation is robust to small deviations from Rayleigh statistics. However, in tissue with low SND, large deviations in envelope SNR from 1.91 lead to subsequently large increases in attenuation estimation variance. At the same time, low SND is not found to be a significant source of bias in the attenuation estimate. For example, we find the standard deviation of attenuation slope estimates increases from 0.07 dB/cm MHz to 0.25 dB/cm MHz as the envelope SNR decreases from 1.78 to 1.01 when estimating attenuation slope in TM phantoms with a large estimation kernel size (16 mm axially by 15 mm laterally). Meanwhile, the bias in the attenuation slope estimates is found to be negligible (< 0.01 dB/cm MHz). We also compare results obtained with reference phantom based attenuation estimates in ex vivo bovine liver and thermally coagulated bovine liver. PMID:24726800
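
    The envelope-SNR statistic above is easy to verify numerically: a Rayleigh-distributed envelope (fully developed speckle from high SND) has mean/std = sqrt(pi/(4 - pi)) ≈ 1.91.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    envelope = rng.rayleigh(scale=1.0, size=200_000)
    snr = envelope.mean() / envelope.std()
    print(round(snr, 2))   # ~1.91, i.e. sqrt(pi / (4 - pi))
    ```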

  16. Estimating electric current densities in solar active regions

    E-print Network

    Wheatland, M S

    2015-01-01

    Electric currents in solar active regions are thought to provide the energy released via magnetic reconnection in solar flares. Vertical electric current densities $J_z$ at the photosphere may be estimated from vector magnetogram data, subject to substantial uncertainties. The values provide boundary conditions for nonlinear force-free modelling of active region magnetic fields. A method is presented for estimating values of $J_z$ that takes into account uncertainties in vector magnetogram field values and minimizes $J_z^2$ across the active region. The method is demonstrated using the boundary values of the field for a force-free twisted bipole, with the addition of noise at randomly chosen locations.
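
    The standard finite-difference estimate underlying such studies (not the paper's uncertainty-aware fit) is $J_z = (1/\mu_0)(\partial B_y/\partial x - \partial B_x/\partial y)$; a minimal sketch:

    ```python
    import numpy as np

    MU0 = 4e-7 * np.pi   # vacuum permeability, SI units

    def vertical_current_density(bx, by, dx, dy):
        """bx, by: horizontal field components in tesla on a regular grid;
        dx, dy: pixel sizes in metres. Returns J_z in A/m^2."""
        dby_dx = np.gradient(by, dx, axis=1)
        dbx_dy = np.gradient(bx, dy, axis=0)
        return (dby_dx - dbx_dy) / MU0
    ```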

  17. An improved method of estimating ionisation density using TLDs

    Microsoft Academic Search

    M. Puchalska; P. Bilski

    2008-01-01

    A new method is proposed to determine the ‘effective’ linear energy transfer (LET) in mixed radiation fields by analysing the ionisation density dependence of the area of peak 8 in the thermoluminescence glow curves of MTS-7 (7LiF:Mg,Ti) detectors. The dependence of the ratio of the peak 8 area to the peak 5 area on the ‘effective’ LET allows the estimation of the LET to

  18. Estimation of the Space Density of Low Surface Brightness Galaxies

    Microsoft Academic Search

    F. H. Briggs

    1997-01-01

    The space density of low surface brightness and tiny gas-rich dwarf galaxies is estimated for two recent catalogs: The Arecibo Survey of Northern Dwarf and Low Surface Brightness Galaxies (Schneider, Thuan, Magri & Wadiak 1990) and The Catalog of Low Surface Brightness Galaxies, List II (Schombert, Bothun, Schneider & McGaugh 1992). The goals are (1) to evaluate the additions to

  19. A SIMPLE AND EFFICIENT WAVELET-BASED DENOISING ALGORITHM USING JOINT INTER-AND INTRASCALE STATISTICS ADAPTIVELY

    E-print Network

    Mirchandani, Gagan

    The University of Vermont, Burlington, Vermont 05405, USA. Abstract: We propose a simple and efficient image denoising algorithm ... of thresholding in signal denoising. The general procedure for wavelet-based denoising algorithms consists

  20. Thermospheric atomic oxygen density estimates using the EISCAT Svalbard Radar

    NASA Astrophysics Data System (ADS)

    Vickers, H.; Kosch, M. J.; Sutton, E. K.; Ogawa, Y.; La Hoz, C.

    2012-12-01

    The unique coupling of the ionized and neutral atmosphere through particle collisions allows an indirect study of the neutral atmosphere through measurements of ionospheric plasma parameters. We estimate the neutral density of the upper thermosphere above ~250 km with the EISCAT Svalbard Radar (ESR), using year-long operations during the first year of the International Polar Year (IPY), from March 2007 to February 2008. The simplified momentum equation for atomic oxygen ions is used for field-aligned motion in the steady state, taking into account the opposing forces of the plasma pressure gradient and gravity only. This restricts the technique to quiet geomagnetic periods, which applies to most of the IPY during the recent very quiet solar minimum. Comparison with the MSIS model shows that at 250 km, close to the F-layer peak, the ESR estimates of the atomic oxygen density are typically a factor of 1.2 smaller than the MSIS model when data are averaged over the IPY. Differences between MSIS and ESR estimates are also found to depend on both season and magnetic disturbance, with the largest discrepancies noted during winter months. At 350 km, very close agreement with the MSIS model is achieved, with no evidence of seasonal dependence. This altitude was also close to the orbital altitude of the CHAMP satellite during the IPY, allowing a comparison of in-situ measurements and radar estimates of the neutral density. Using a total of 10 in-situ passes by the CHAMP satellite above Svalbard, we show that the estimates made using this technique fall within the error bars of the measurements. We show that the method works best in the height range ~300-400 km, where our assumptions are satisfied, and we anticipate that the technique should be suitable for future thermospheric studies related to geomagnetic storm activity and long-term climate change.

  1. Estimating black bear density using DNA data from hair snares

    USGS Publications Warehouse

    Gardner, B.; Royle, J.A.; Wegan, M.T.; Rainbolt, R.E.; Curtis, P.D.

    2010-01-01

    DNA-based mark-recapture has become a methodological cornerstone of research focused on bear species. The objective of such studies is often to estimate population size; however, doing so is frequently complicated by movement of individual bears. Movement affects the probability of detection and the assumption of closure of the population required in most models. To mitigate the bias caused by movement of individuals, population size and density estimates are often adjusted using ad hoc methods, including buffering the minimum polygon of the trapping array. We used a hierarchical, spatial capture-recapture model that contains explicit components for the spatial point process that governs the distribution of individuals and their exposure to (via movement), and detection by, traps. We modeled detection probability as a function of each individual's distance to the trap and an indicator variable for previous capture to account for possible behavioral responses. We applied our model to a 2006 hair-snare study of a black bear (Ursus americanus) population in northern New York, USA. Based on the microsatellite marker analysis of collected hair samples, 47 individuals were identified. We estimated mean density at 0.20 bears/km2. A positive estimate of the indicator variable suggests that bears are attracted to baited sites; therefore, including a trap-dependence covariate is important when using bait to attract individuals. Bayesian analysis of the model was implemented in WinBUGS, and we provide the model specification. The model can be applied to any spatially organized trapping array (hair snares, camera traps, mist nets, etc.) to estimate density and can also account for heterogeneity and covariate information at the trap or individual level. © The Wildlife Society.
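
    One building block of such models, written out as a hedged fragment: detection probability declines with distance from the latent activity centre (a half-normal form is common), with a behavioural-response term for previous capture. Parameter names are generic, not the paper's specification.

    ```python
    import numpy as np

    def detection_prob(dist, p0, sigma, prev_capture, beta):
        """Half-normal detection with a trap-response (behavioral) effect."""
        logit_p0 = np.log(p0 / (1.0 - p0)) + beta * prev_capture
        baseline = 1.0 / (1.0 + np.exp(-logit_p0))
        return baseline * np.exp(-dist**2 / (2.0 * sigma**2))
    ```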

  2. Some Bayesian statistical techniques useful in estimating frequency and density

    USGS Publications Warehouse

    Johnson, D.H.

    1977-01-01

    This paper presents some elementary applications of Bayesian statistics to problems faced by wildlife biologists. Bayesian confidence limits for frequency of occurrence are shown to be generally superior to classical confidence limits. Population density can be estimated from frequency data if the species is sparsely distributed relative to the size of the sample plot. For other situations, limits are developed based on the normal distribution and prior knowledge that the density is non-negative, which ensures that the lower confidence limit is non-negative. Conditions are described under which Bayesian confidence limits are superior to those calculated with classical methods; examples are also given on how prior knowledge of the density can be used to sharpen inferences drawn from a new sample.
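
    A worked example of the Bayesian frequency estimate: with a uniform Beta(1, 1) prior and x occupied plots out of n, the posterior for the frequency of occurrence is Beta(x + 1, n - x + 1), and credible limits come straight from its quantiles. The counts below are illustrative.

    ```python
    from scipy.stats import beta

    x, n = 7, 50                       # species found on 7 of 50 sample plots
    posterior = beta(x + 1, n - x + 1) # Beta(1, 1) prior updated with the data
    print(posterior.interval(0.95))    # 95% Bayesian confidence limits
    ```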

  3. Volume estimation of multi-density nodules with thoracic CT

    NASA Astrophysics Data System (ADS)

    Gavrielides, Marios A.; Li, Qin; Zeng, Rongping; Myers, Kyle J.; Sahiner, Berkman; Petrick, Nicholas

    2014-03-01

    The purpose of this work was to quantify the effect of surrounding density on the volumetric assessment of lung nodules in a phantom CT study. Eight synthetic multidensity nodules were manufactured by enclosing spherical cores in larger spheres of double the diameter and with a different uniform density. Different combinations of outer/inner diameters (20/10mm, 10/5mm) and densities (100HU/-630HU, 10HU/- 630HU, -630HU/100HU, -630HU/-10HU) were created. The nodules were placed within an anthropomorphic phantom and scanned with a 16-detector row CT scanner. Ten repeat scans were acquired using exposures of 20, 100, and 200mAs, slice collimations of 16x0.75mm and 16x1.5mm, and pitch of 1.2, and were reconstructed with varying slice thicknesses (three for each collimation) using two reconstruction filters (medium and standard). The volumes of the inner nodule cores were estimated from the reconstructed CT data using a matched-filter approach with templates modeling the characteristics of the multi-density objects. Volume estimation of the inner nodule was assessed using percent bias (PB) and the standard deviation of percent error (SPE). The true volumes of the inner nodules were measured using micro CT imaging. Results show PB values ranging from -12.4 to 2.3% and SPE values ranging from 1.8 to 12.8%. This study indicates that the volume of multi-density nodules can be measured with relatively small percent bias (on the order of +/-12% or less) when accounting for the properties of surrounding densities. These findings can provide valuable information for understanding bias and variability in clinical measurements of nodules that also include local biological changes such as inflammation and necrosis.

  4. Combining Ratio Estimation for Low Density Parity Check (LDPC) Coding

    NASA Technical Reports Server (NTRS)

    Mahmoud, Saad; Hi, Jianjun

    2012-01-01

    The Low Density Parity Check (LDPC) code decoding algorithm makes use of a scaled received signal derived from maximizing the log-likelihood ratio of the received signal. The scaling factor (often called the combining ratio) in an AWGN channel is a ratio between signal amplitude and noise variance. Accurately estimating this ratio has shown as much as 0.6 dB decoding performance gain. This presentation briefly describes three methods for estimating the combining ratio: a Pilot-Guided estimation method, a Blind estimation method, and a simulation-based look-up table. In the Pilot-Guided estimation method, the maximum likelihood estimate of the signal amplitude is the mean inner product of the received sequence and the known sequence, the attached synchronization marker (ASM), and the signal variance is the difference between the mean of the squared received sequence and the square of the signal amplitude. This method has the advantage of simplicity at the expense of latency, since several frames' worth of ASMs are needed. The Blind estimation method's maximum likelihood estimator is the average of the product of the received signal with the hyperbolic tangent of the product of the combining ratio and the received signal. The root of this equation can be determined by an iterative binary search between 0 and 1 after normalizing the received sequence. This method has the benefit of requiring only one frame of data to estimate the combining ratio, which is good for faster-changing channels compared to the previous method; however, it is computationally expensive. The final method uses a look-up table based on prior simulation results to determine signal amplitude and noise variance. In this method the received mean signal strength is controlled to a constant soft-decision value. The magnitude of the deviation is averaged over a predetermined number of samples. This value is referenced in a look-up table to determine the combining ratio that prior simulation associated with the average magnitude of the deviation. This method is more complicated than the Pilot-Guided method due to the gain control circuitry, but does not have the real-time computational complexity of the Blind estimation method. Each of these methods can provide an accurate estimate of the combining ratio, and the final selection of the estimation method depends on other design constraints.
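
    A hedged reconstruction of the blind method as described: after normalizing the received sequence to unit power, binary-search r in (0, 1) for a fixed point of the amplitude estimator A = mean(y * tanh(A * y / var)) with var = 1 - A^2. The exact equation used in the presentation may differ; this is only one plausible reading.

    ```python
    import numpy as np

    def blind_amplitude(y, iters=50):
        y = y / np.sqrt(np.mean(y**2))        # normalize to unit power
        lo, hi = 1e-6, 1.0 - 1e-6
        for _ in range(iters):                # binary search between 0 and 1
            r = 0.5 * (lo + hi)
            g = np.mean(y * np.tanh(r * y / (1.0 - r**2)))
            if g > r:
                lo = r
            else:
                hi = r
        return r                              # amplitude; noise variance 1 - r^2
    ```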

  5. Wavelet-based fMRI analysis: 3-D denoising, signal separation, and validation metrics.

    PubMed

    Khullar, Siddharth; Michael, Andrew; Correa, Nicolle; Adali, Tulay; Baum, Stefi A; Calhoun, Vince D

    2011-02-14

    We present a novel integrated wavelet-domain based framework (w-ICA) for 3-D denoising functional magnetic resonance imaging (fMRI) data followed by source separation analysis using independent component analysis (ICA) in the wavelet domain. We propose the idea of a 3-D wavelet-based multi-directional denoising scheme where each volume in a 4-D fMRI data set is sub-sampled using the axial, sagittal and coronal geometries to obtain three different slice-by-slice representations of the same data. The filtered intensity value of an arbitrary voxel is computed as an expected value of the denoised wavelet coefficients corresponding to the three viewing geometries for each sub-band. This results in a robust set of denoised wavelet coefficients for each voxel. Given the de-correlated nature of these denoised wavelet coefficients, it is possible to obtain more accurate source estimates using ICA in the wavelet domain. The contributions of this work can be realized as two modules: First, in the analysis module we combine a new 3-D wavelet denoising approach with signal separation properties of ICA in the wavelet domain. This step helps obtain an activation component that corresponds closely to the true underlying signal, which is maximally independent with respect to other components. Second, we propose and describe two novel shape metrics for post-ICA comparisons between activation regions obtained through different frameworks. We verified our method using simulated as well as real fMRI data and compared our results against the conventional scheme (Gaussian smoothing+spatial ICA: s-ICA). The results show significant improvements based on two important features: (1) preservation of shape of the activation region (shape metrics) and (2) receiver operating characteristic curves. It was observed that the proposed framework was able to preserve the actual activation shape in a consistent manner even for very high noise levels in addition to significant reduction in false positive voxels. PMID:21034833

  6. EFFECTIVE WAVELET-BASED REGULARIZATION OF DIVERGENCE-FREE FRACTIONAL BROWNIAN MOTION

    E-print Network

    Paris-Sud XI, Université de

    Effective Wavelet-Based Regularization of Divergence-Free Fractional Brownian Motion. P. Héas, S. ... divergence-free fractional Brownian motion (fBm). The method is based on fractional Laplacian and divergence-free wavelet ... fBm priors, by simply sampling wavelet coefficients according to Gaussian white noise. Fractional Laplacians

  7. Wavelet-Based Combined Signal Filtering and Olivier Renaud, Jean-Luc Starck, and Fionn Murtagh

    E-print Network

    Murtagh, Fionn

    Wavelet-Based Combined Signal Filtering and Prediction. Olivier Renaud, Jean-Luc Starck, and Fionn Murtagh. Abstract: We survey a number of applications of the wavelet transform in time series prediction ... Through experimental assessment, we demonstrate the power of this methodology. Index terms: wavelet transform

  8. Wavelet-Based Piecewise Approximation of Steady-State Waveforms for

    E-print Network

    Tse, Chi K. "Michael"

    Wavelet-Based Piecewise Approximation of Steady-State Waveforms for Power Electronics Circuits. The Hong Kong Polytechnic University, Hong Kong. http://chaos.eie.polyu.edu.hk Abstract: Wavelet transform ... to maximize computational efficiency. In this paper, instead of applying one wavelet approximation

  9. Image denoising using fractal and wavelet-based methods K. U. Barthel*

    E-print Network

    Marpe, Detlev

    Image denoising using fractal and wavelet-based methods. K. U. Barthel, H. L. Cycon. ... free image. The inverse wavelet transform of the fractal collage leads to the denoised image. Keywords: fractal image compression, fractal denoising, wavelet image denoising, image restoration.

  10. An elliptically contoured exponential mixture model for wavelet based image denoising

    Microsoft Academic Search

    Fei Shi; Ivan W. Selesnick

    2007-01-01

    An elliptically contoured exponential distribution is developed as a generalization of the univariate Laplacian distribution to multi-dimensions. A mixture of this model is used as the wavelet coefficient prior for Bayesian wavelet based image denoising. The mixture model has a small number of parameters yet fits the marginal distribution of wavelet coefficients well. Despite being a stationary probability model, it

  11. Wavelet-based image denoising using non-stationary stochastic geometrical image priors

    E-print Network

    Genève, Université de

    Wavelet-based image denoising using non-stationary stochastic geometrical image priors. Sviatoslav ... and its superior performance in image denoising applications is demonstrated. The proposed model exploits non-overlapping regions with distinctive statistics. A closed-form analytical solution of the image denoising problem

  12. Discrete directional wavelet bases and frames for image compression and denoising

    E-print Network

    Dragotti, Pier Luigi

    Discrete directional wavelet bases and frames for image compression and denoising. Pier Luigi Dragotti. Keywords: wavelets, denoising, non-linear approximation. At the heart of many image processing tasks ... The application of the wavelet transform in image processing is most frequently based on a separable construction

  13. Iterative Regularization and Nonlinear Inverse Scale Space Applied to Wavelet Based Denoising

    E-print Network

    Soatto, Stefano

    Iterative regularization and the inverse scale space method, recently developed for total variation-based image restoration, are applied to wavelet-based denoising. Wavelet shrinkage methods ([1]-[5]) are among the most useful techniques for signal and image denoising. The relations between them have been

  14. A Thresholded Landweber Algorithm for Wavelet-based Sparse Poisson Deconvolution

    E-print Network

    Kingsbury, Nick

    A Thresholded Landweber Algorithm for Wavelet-based Sparse Poisson Deconvolution. Ganchi Zhang ... a new iterative deconvolution algorithm for noisy Poisson images based on wavelet sparse regularization, combining the denoising stage proposed in [1] and a thresholded Landweber step [2], [3]. The key steps of our algorithm
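
    The classical template the paper builds on, as a hedged sketch (not its Poisson-adapted variant): alternate a Landweber gradient step on the data term with wavelet-domain soft thresholding.

    ```python
    import numpy as np
    import pywt

    def thresholded_landweber(y, H, Ht, lam, wavelet="db2", level=3, iters=50):
        """y: observed image; H, Ht: callables for the blur operator and its
        adjoint, assumed scaled so the operator norm is at most one."""
        x = Ht(y)
        for _ in range(iters):
            x = x + Ht(y - H(x))                       # Landweber gradient step
            coeffs = pywt.wavedec2(x, wavelet, level=level)
            coeffs = [coeffs[0]] + [
                tuple(pywt.threshold(b, lam, mode="soft") for b in bands)
                for bands in coeffs[1:]
            ]
            x = pywt.waverec2(coeffs, wavelet)         # sparsity-promoting step
        return x
    ```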

  15. Efficient scrambling of wavelet-based compressed images: a comparison between simple techniques for mobile applications

    Microsoft Academic Search

    Giaime Ginesu; Tatiana Onali; Daniele D. Giusto

    2006-01-01

    Image scrambling is a fundamental task for several applications, from secure digital content transmission to mutual visual authentication. This paper proposes and evaluates several simple techniques with the aim of being efficient in respect to wavelet compression algorithms and fit for mobile applications. Then, the proposed methods are meant to comply with the average structure of wavelet-based coders and require

  16. Diagnostically lossless medical image compression via wavelet-based

    E-print Network

    Qi, Xiaojun

    ... are essential in archival and communication of medical images. In this paper, an automated wavelet-based ... Keywords: compression, wavelet transform modulus maxima, convex hull, noise. The trend in medical imaging

  17. ISI/ICI COMPARISON OF DMT AND WAVELET BASED MCM SCHEMES FOR TIMEINVARIANT CHANNELS

    E-print Network

    Pfander, Götz

    ISI/ICI Comparison of DMT and Wavelet-Based MCM Schemes for Time-Invariant Channels. Maria Charina ... environments. Currently used FFT-based MCM schemes (DMT) outperform those based on wavelets regardless of ... DMT, a variant of OFDM, is standardized for the asymmetrical transmission over digital subscriber line (ADSL) systems

  18. Embedded Wavelet-Based Compression of Hyperspectral Imagery Using Tarp Coding

    E-print Network

    Fowler, James E.

    Embedded Wavelet-Based Compression of Hyperspectral Imagery Using Tarp Coding. Yonghui Wang, Justin ... is compared to that of other prominent coders for the compression of hyperspectral imagery, including state-of-the-art methodologies for the compression of hyperspectral imagery. However, given the success of embedded wavelet

  19. Wavelet-Based Decompositions for Nonlinear Signal Processing Robert D. Nowak and Richard G. Baraniuk

    E-print Network

    Wavelet-Based Decompositions for Nonlinear Signal Processing. Robert D. Nowak and Richard G. Baraniuk. This paper develops new signal decompositions for the nonlinear analysis and processing of real-world signals. ... The nonlinear signal decompositions are also applied to signal processing

  20. Wavelet-Based Transformations for Nonlinear Signal Processing

    E-print Network

    Wavelet-Based Transformations for Nonlinear Signal Processing. Robert D. Nowak, Member, IEEE, and Richard G. Baraniuk, Senior Member, IEEE. ... real-world signals. In this paper, we introduce two new structures for nonlinear signal processing. The new

  1. On the phase condition and its solution for Hilbert transform pairs of wavelet bases

    Microsoft Academic Search

    Huseyin Ozkaramanli; Runyi Yu

    2003-01-01

    In this paper, the phase condition on the scaling filters of two wavelet bases that renders the corresponding wavelets as Hilbert transform pairs is studied. An alternative and equivalent phase condition is derived. With the equivalent condition and using Fourier series expansions, we show that the solution for which the corresponding scaling filters are offset from one another by a

  2. Multiresolution analysis on zero-dimensional Abelian groups and wavelets bases

    SciTech Connect

    Lukomskii, Sergei F [Saratov State University, Saratov (Russian Federation)

    2010-06-29

    For a locally compact zero-dimensional group (G, +̇), we build a multiresolution analysis and put forward an algorithm for constructing orthogonal wavelet bases. A special case is indicated in which a wavelet basis is generated from a single function through contractions, translations and exponentiations. Bibliography: 19 titles.

  3. Wavelet-based methods for the prognosis of mechanical and electrical failures in electric motors

    Microsoft Academic Search

    Wesley G. Zanardelli; Elias G. Strangas; Hassan K. Khalil; John M. Miller

    2005-01-01

    The ability to give a prognosis for failure of a system is a valuable tool and can be applied to electric motors. In this paper, three wavelet-based methods have been developed that achieve this goal. Wavelet and filter bank theory, the nearest-neighbour rule, and linear discriminant functions are reviewed. A framework for the development of a fault detection and classification

  4. Computer-aided Diagnosis of Melanoma Using Border and Wavelet-based Texture Analysis

    E-print Network

    Bailey, James

    Computer-aided Diagnosis of Melanoma Using Border and Wavelet-based Texture Analysis. Rahil ... presents a novel computer-aided diagnosis system for melanoma. The novelty lies in the optimised selection ... of the melanoma lesion. The texture features are derived using wavelet decomposition, and the border features

  5. Wavelet-Based Focus Measure and 3-D Surface Reconstruction Method for Microscopy Images

    Microsoft Academic Search

    Hui Xie; Weibin Rong; Lining Sun

    2006-01-01

    Microscopy imaging cannot achieve both high resolution and a wide image space simultaneously. Autofocusing and 3-D surface reconstruction techniques are of fundamental importance to automated micromanipulation, providing high-level task understanding, task planning and real-time control. In this paper, a new wavelet-based focus measure is developed, which provides significantly better depth resolution, accuracy, and robustness than previous ones.

  6. Revisiting multifractality of high-resolution temporal rainfall using a wavelet-based formalism

    Microsoft Academic Search

    V. Venugopal; Stéphane G. Roux; Efi Foufoula-Georgiou; Alain Arneodo

    2006-01-01

    We reexamine the scaling structure of temporal rainfall using wavelet-based methodologies which, as we demonstrate, offer important advantages compared to the more traditional multifractal approaches such as box counting and structure function techniques. In particular, we explore two methods based on the Continuous Wavelet Transform (CWT) and the Wavelet Transform Modulus Maxima (WTMM): the partition function method and the newer

  7. A projection and density estimation method for knowledge discovery.

    PubMed

    Stanski, Adam; Hellwich, Olaf

    2012-01-01

    A key ingredient to modern data analysis is probability density estimation. However, it is well known that the curse of dimensionality prevents a proper estimation of densities in high dimensions. The problem is typically circumvented by using a fixed set of assumptions about the data, e.g., by assuming partial independence of features, data on a manifold or a customized kernel. These fixed assumptions limit the applicability of a method. In this paper we propose a framework that uses a flexible set of assumptions instead. It allows to tailor a model to various problems by means of 1d-decompositions. The approach achieves a fast runtime and is not limited by the curse of dimensionality as all estimations are performed in 1d-space. The wide range of applications is demonstrated at two very different real world examples. The first is a data mining software that allows the fully automatic discovery of patterns. The software is publicly available for evaluation. As a second example an image segmentation method is realized. It achieves state of the art performance on a benchmark dataset although it uses only a fraction of the training data and very simple features. PMID:23049675

  9. Thermospheric atomic oxygen density estimates using the EISCAT Svalbard Radar

    NASA Astrophysics Data System (ADS)

    Vickers, H.; Kosch, M. J.; Sutton, E.; Ogawa, Y.; La Hoz, C.

    2013-03-01

    Coupling between the ionized and neutral atmosphere through particle collisions allows an indirect study of the neutral atmosphere through measurements of ionospheric plasma parameters. We estimate the neutral density of the upper thermosphere above ~250 km with the European Incoherent Scatter Svalbard Radar (ESR) using the year-long operations of the International Polar Year from March 2007 to February 2008. The simplified momentum equation for atomic oxygen ions is used for field-aligned motion in the steady state, taking into account the opposing forces of plasma pressure gradients and gravity only. This restricts the technique to quiet geomagnetic periods, which applies to most of the International Polar Year during the recent very quiet solar minimum. The method works best in the height range ~300-400 km where our assumptions are satisfied. Differences between Mass Spectrometer and Incoherent Scatter and ESR estimates are found to vary with altitude, season, and magnetic disturbance, with the largest discrepancies during the winter months. A total of 9 out of 10 in situ passes by the CHAMP satellite above Svalbard at 350 km altitude agree with the ESR neutral density estimates to within the error bars of the measurements during quiet geomagnetic periods.

  10. Multivariate density estimation via copulas

    E-print Network

    Hoff, Peter

    Slide outline residue: Introduction to Copulas; Parameterization of Copulas; Parameter estimation; Example: Imputation of Pima diabetes data; Discussion.

  11. A wavelet-based multisensor data fusion algorithm

    Microsoft Academic Search

    Lijun Xu; Jian Qiu Zhang; Yong Yan

    2004-01-01

    This paper presents a wavelet transform-based data fusion algorithm for multisensor systems. With this algorithm, the optimum estimate of a measurand can be obtained in terms of minimum mean square error (MMSE). The variance of the optimum estimate is not only smaller than that of each observation sequence but also smaller than the arithmetic average estimate. To implement this algorithm,
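
    The MMSE fusion the abstract refers to reduces, for independent unbiased observations, to classic inverse-variance weighting; the fused variance is then necessarily smaller than that of each observation and of the arithmetic average. A minimal numpy sketch (the wavelet decomposition stage of the paper is omitted):

    ```python
    import numpy as np

    def fuse_mmse(estimates, variances):
        """Inverse-variance (MMSE) fusion of independent unbiased estimates.
        The fused variance is below every input variance, and below that of
        the arithmetic average."""
        w = 1.0 / np.asarray(variances, dtype=float)
        fused = np.sum(w * np.asarray(estimates, dtype=float)) / np.sum(w)
        return fused, 1.0 / np.sum(w)

    est, var = fuse_mmse([10.2, 9.8, 10.5], [0.4, 0.1, 0.9])
    print(est, var)        # fused variance < 0.1, the best single sensor
    ```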

  12. Effect of packing density on strain estimation by Fry method

    NASA Astrophysics Data System (ADS)

    Srivastava, Deepak; Ojha, Arun

    2015-04-01

    The Fry method is a graphical technique that uses the relative movement of material points, typically grain centres or centroids, and yields the finite strain ellipse as the central vacancy of a point distribution. Application of the Fry method assumes an anticlustered and isotropic grain centre distribution in undistorted samples. This assumption is, however, difficult to test in practice. As an alternative, the sedimentological degree of sorting is routinely used as an approximation for the degree of clustering and anisotropy. The effect of sorting on the Fry method has already been explored by earlier workers. This study tests the effect of the tightness of packing, the packing density, which equals the ratio of the area occupied by all the grains to the total area of the sample. A practical advantage of using the degree of sorting or the packing density is that these parameters, unlike the degree of clustering or anisotropy, do not vary during a constant-volume homogeneous distortion. Using computer graphics simulations and programming, we approach the issue of packing density in four steps: (i) generation of several sets of random point distributions such that each set has the same degree of sorting but differs from the other sets with respect to packing density; (ii) two-dimensional homogeneous distortion of each point set by various known strain ratios and orientations; (iii) estimation of strain in each distorted point set by the Fry method; and (iv) error estimation by comparing the known strain with that given by the Fry method. Both the absolute errors and the relative root mean squared errors give consistent results. For a given degree of sorting, the Fry method gives better results in samples having greater than 30% packing density. This is because grain centre distributions show stronger clustering and a greater degree of anisotropy with decreasing packing density. As compared to the degree of sorting alone, a combination of the degree of sorting and the packing density is a more useful proxy for testing the degree of anisotropy and clustering in a point distribution.
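
    The four-step experiment is easy to reproduce in outline. A minimal numpy sketch, with a jittered grid standing in for a well-sorted anticlustered texture and a deliberately crude reading of the central vacancy (all parameters are invented for the demo):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # (i) An anticlustered point set: a jittered grid stands in for a
    # well-sorted grain-centre texture.
    gx, gy = np.meshgrid(np.arange(20.0), np.arange(20.0))
    pts = np.column_stack([gx.ravel(), gy.ravel()]) + rng.uniform(-0.3, 0.3, (400, 2))

    # (ii) Constant-area homogeneous distortion with strain ratio Rs = 2.
    Rs = 2.0
    D = np.diag([np.sqrt(Rs), 1.0 / np.sqrt(Rs)])
    distorted = pts @ D.T

    # (iii) Fry plot: all pairwise separation vectors; the empty central
    # vacancy approximates the finite strain ellipse.
    diffs = (distorted[:, None, :] - distorted[None, :, :]).reshape(-1, 2)
    r = np.hypot(diffs[:, 0], diffs[:, 1])
    ang = np.arctan2(diffs[:, 1], diffs[:, 0])
    ok = r > 1e-9                                     # drop self-pairs

    # (iv) Crude read-out of the vacancy: nearest point to the origin per
    # angular bin; the max/min of those radii estimates the strain ratio.
    bins = np.linspace(-np.pi, np.pi, 37)
    rmin = np.array([r[ok & (ang >= a) & (ang < b)].min()
                     for a, b in zip(bins[:-1], bins[1:])])
    print("estimated strain ratio ~", round(rmin.max() / rmin.min(), 2))  # near 2
    ```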

  13. Estimation of probability densities using scale-free field theories.

    PubMed

    Kinney, Justin B

    2014-07-01

    The question of how best to estimate a continuous probability density from finite data is an intriguing open problem at the interface of statistics and physics. Previous work has argued that this problem can be addressed in a natural way using methods from statistical field theory. Here I describe results that allow this field-theoretic approach to be rapidly and deterministically computed in low dimensions, making it practical for use in day-to-day data analysis. Importantly, this approach does not impose a privileged length scale for smoothness of the inferred probability density, but rather learns a natural length scale from the data due to the tradeoff between goodness of fit and an Occam factor. Open source software implementing this method in one and two dimensions is provided. PMID:25122244

  14. Estimation of probability densities using scale-free field theories

    NASA Astrophysics Data System (ADS)

    Kinney, Justin B.

    2014-07-01

    The question of how best to estimate a continuous probability density from finite data is an intriguing open problem at the interface of statistics and physics. Previous work has argued that this problem can be addressed in a natural way using methods from statistical field theory. Here I describe results that allow this field-theoretic approach to be rapidly and deterministically computed in low dimensions, making it practical for use in day-to-day data analysis. Importantly, this approach does not impose a privileged length scale for smoothness of the inferred probability density, but rather learns a natural length scale from the data due to the tradeoff between goodness of fit and an Occam factor. Open source software implementing this method in one and two dimensions is provided.

  15. Effect of Random Clustering on Surface Damage Density Estimates

    SciTech Connect

    Matthews, M J; Feit, M D

    2007-10-29

    Identification and spatial registration of laser-induced damage relative to incident fluence profiles is often required to characterize the damage properties of laser optics near damage threshold. Of particular interest in inertial confinement laser systems are large aperture beam damage tests (>1 cm²) where the number of initiated damage sites for φ > 14 J/cm² can approach 10⁵-10⁶, requiring automatic microscopy counting to locate and register individual damage sites. However, as was shown for the case of bacteria counting in biology decades ago, random overlapping or 'clumping' prevents accurate counting of Poisson-distributed objects at high densities, and must be accounted for if the underlying statistics are to be understood. In this work we analyze the effect of random clumping on damage initiation density estimates at fluences above damage threshold. The parameter ψ = aρ = ρ/ρ₀, where a = 1/ρ₀ is the mean damage site area and ρ is the mean number density, is used to characterize the onset of clumping, and approximations based on a simple model are used to derive an expression for clumped damage density vs. fluence and damage site size. The influence of the uncorrected ρ vs. φ curve on damage initiation probability predictions is also discussed.
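
    The clumping effect itself is straightforward to reproduce by Monte Carlo: place Poisson-distributed sites, merge any whose discs overlap, and compare the counted density to the true one. A sketch assuming scipy is available (this illustrates the phenomenon, not the paper's analytic correction):

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def observed_density(rho_true, radius, box=100.0, seed=0):
        """Counted (clumped) site density when overlapping circular damage
        sites of one radius merge into single observed sites."""
        rng = np.random.default_rng(seed)
        n = rng.poisson(rho_true * box * box)
        pts = rng.uniform(0.0, box, (n, 2))
        pairs = cKDTree(pts).query_pairs(2.0 * radius)   # overlapping discs
        parent = list(range(n))
        def find(i):
            while parent[i] != i:
                parent[i] = parent[parent[i]]
                i = parent[i]
            return i
        for i, j in pairs:
            parent[find(i)] = find(j)
        return len({find(i) for i in range(n)}) / (box * box)

    a = np.pi * 0.5 ** 2                     # mean site area (radius 0.5)
    for rho in (0.05, 0.2, 0.8):
        psi = a * rho
        print(f"psi={psi:.2f}  true rho={rho:.2f}  "
              f"counted rho={observed_density(rho, 0.5):.2f}")
    ```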

  16. Estimation of the Space Density of Low Surface Brightness Galaxies

    E-print Network

    F. H. Briggs

    1997-02-24

    The space densities of low surface brightness and tiny gas-rich dwarf galaxies are estimated for two recent catalogs: The Arecibo Survey of Northern Dwarf and Low Surface Brightness Galaxies (Schneider, Thuan, Magri & Wadiak 1990) and The Catalog of Low Surface Brightness Galaxies, List II (Schombert, Bothun, Schneider & McGaugh 1992). The goals are (1) to evaluate the additions to the completeness of the Fisher and Tully (1981) 10 Mpc Sample and (2) to estimate whether the density of galaxies contained in the new catalogs adds a significant amount of neutral gas mass to the inventory of HI already identified in the nearby, present-epoch universe. Although tiny dwarf galaxies (M_HI < ~10^7 solar masses) may be the most abundant type of extragalactic stellar system in the nearby Universe, if the new catalogs are representative, the LSB and dwarf populations they contain make only a small addition (<10%) to the total HI content of the local Universe and probably constitute even smaller fractions of its luminous and dynamical mass.

  17. An Efficient Adaptive Thresholding Technique for Wavelet Based Image Denoising

    Microsoft Academic Search

    D. Gnanadurai; V. Sadasivam

    2006-01-01

    This framework describes a computationally more efficient and adaptive threshold estimation method for image denoising in the wavelet domain based on Generalized Gaussian Distribution (GGD) modeling of subband coefficients. In this proposed method, the choice of the threshold estimation is carried out by analysing the statistical parameters of the wavelet subband coefficients like standard deviation, arithmetic mean and geometrical
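
    The abstract is truncated, but the GGD-based family it belongs to is well known; the BayesShrink rule, which sets the subband threshold to T = σ_n²/σ_x, is a representative member and is sketched below with PyWavelets (the cited paper's exact threshold formula may differ):

    ```python
    import numpy as np
    import pywt

    def bayes_shrink(img, wavelet="db4", level=3):
        """Subband-adaptive soft thresholding under a GGD signal prior: the
        noise sigma comes from the finest diagonal subband (MAD estimate),
        and each subband gets the BayesShrink threshold T = sigma_n^2/sigma_x."""
        coeffs = pywt.wavedec2(img, wavelet, level=level)
        sigma_n = np.median(np.abs(coeffs[-1][-1])) / 0.6745
        out = [coeffs[0]]
        for band in coeffs[1:]:
            shrunk = []
            for c in band:
                sigma_x = np.sqrt(max(c.var() - sigma_n ** 2, 1e-12))
                shrunk.append(pywt.threshold(c, sigma_n ** 2 / sigma_x, "soft"))
            out.append(tuple(shrunk))
        return pywt.waverec2(out, wavelet)

    rng = np.random.default_rng(1)
    clean = np.outer(np.hanning(128), np.hanning(128))
    noisy = clean + 0.05 * rng.standard_normal((128, 128))
    den = bayes_shrink(noisy)[:128, :128]
    print(np.mean((noisy - clean) ** 2), np.mean((den - clean) ** 2))
    ```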

  18. Wavelet-based image denoising using generalized cross validation

    Microsoft Academic Search

    Maarten Jansen; Adhemar Bultheel

    1997-01-01

    De-noising algorithms based on wavelet thresholding replace small wavelet coefficients by zero and keep or shrink the coefficients with absolute value above the threshold. The optimal threshold minimizes the error of the result as compared to the unknown, exact data. To estimate this optimal threshold, we use generalized cross validation. This procedure does not require an estimation for the noise
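
    The GCV criterion can be minimized by direct search over candidate thresholds, with no noise estimate required. A sketch under the assumption of soft thresholding of the pooled detail coefficients (the method as published works per subband):

    ```python
    import numpy as np
    import pywt

    def gcv_threshold(details):
        """Pick a soft threshold t minimizing the generalized cross validation
        score GCV(t) = mean((w - w_t)^2) / (N0/N)^2, where N0 is the number of
        coefficients zeroed by t. No noise-level estimate is needed."""
        w = np.concatenate([d.ravel() for d in details])
        N = w.size
        best_t, best_gcv = 0.0, np.inf
        for t in np.linspace(1e-6, np.abs(w).max(), 200):
            wt = pywt.threshold(w, t, "soft")
            n0 = np.count_nonzero(wt == 0)
            if n0 == 0:
                continue
            gcv = np.mean((w - wt) ** 2) / (n0 / N) ** 2
            if gcv < best_gcv:
                best_t, best_gcv = t, gcv
        return best_t

    rng = np.random.default_rng(2)
    signal = np.repeat([0.0, 4.0, -2.0, 3.0], 256)
    noisy = signal + rng.standard_normal(signal.size)
    cA, *details = pywt.wavedec(noisy, "db2", level=4)
    t = gcv_threshold(details)
    den = pywt.waverec([cA] + [pywt.threshold(d, t, "soft") for d in details], "db2")
    print(f"threshold {t:.2f}, MSE {np.mean((den[:signal.size] - signal) ** 2):.4f}")
    ```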

  19. Ten years of probabilistic estimates of biocrystal solvent content: new insights via nonparametric kernel density estimate.

    PubMed

    Weichenberger, Christian X; Rupp, Bernhard

    2014-06-01

    The probabilistic estimate of the solvent content (Matthews probability) was first introduced in 2003. Given that the Matthews probability is based on prior information, revisiting the empirical foundation of this widely used solvent-content estimate is appropriate. The parameter set for the original Matthews probability distribution function employed in MATTPROB has been updated after ten years of rapid PDB growth. A new nonparametric kernel density estimator has been implemented to calculate the Matthews probabilities directly from empirical solvent-content data, thus avoiding the need to revise the multiple parameters of the original binned empirical fit function. The influence and dependency of other possible parameters determining the solvent content of protein crystals have been examined. Detailed analysis showed that resolution is the primary and dominating model parameter correlated with solvent content. Modifications of protein specific density for low molecular weight have no practical effect, and there is no correlation with oligomerization state. A weak, and in practice irrelevant, dependency on symmetry and molecular weight is present, but cannot be satisfactorily explained by simple linear or categorical models. The Bayesian argument that the observed resolution represents only a lower limit for the true diffraction potential of the crystal is maintained. The new kernel density estimator is implemented as the primary option in the MATTPROB web application at http://www.ruppweb.org/mattprob/. PMID:24914969
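
    The quantities involved are simple to compute: the Matthews coefficient V_M = V_cell/(Z·MW) converts to a solvent fraction via the conventional 1 - 1.23/V_M rule, and a kernel density estimator over empirical solvent contents, stratified by resolution, replaces the binned fit. A sketch with synthetic stand-in data (the published estimator is fit to PDB-wide statistics):

    ```python
    import numpy as np
    from scipy.stats import gaussian_kde

    def solvent_fraction(v_cell_A3, z, mw_da):
        """Matthews coefficient V_M = V_cell/(Z*MW) in A^3/Da; the standard
        rule of thumb converts it to a solvent fraction of 1 - 1.23/V_M."""
        vm = v_cell_A3 / (z * mw_da)
        return 1.0 - 1.23 / vm

    # Hypothetical stand-ins for empirical PDB solvent contents, split by
    # resolution; the real estimator uses tens of thousands of entries.
    rng = np.random.default_rng(3)
    kde_hi = gaussian_kde(rng.beta(8, 10, 2000))    # high resolution: drier
    kde_lo = gaussian_kde(rng.beta(10, 7, 2000))    # low resolution: wetter

    s = solvent_fraction(v_cell_A3=5.0e5, z=4, mw_da=48000.0)
    print(f"solvent fraction {s:.2f}; density hi-res {kde_hi(s)[0]:.2f},"
          f" lo-res {kde_lo(s)[0]:.2f}")
    ```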

  20. Estimating tropical-forest density profiles from multibaseline interferometric SAR

    NASA Technical Reports Server (NTRS)

    Treuhaft, Robert; Chapman, Bruce; dos Santos, Joao Roberto; Dutra, Luciano; Goncalves, Fabio; da Costa Freitas, Corina; Mura, Jose Claudio; de Alencastro Graca, Paulo Mauricio

    2006-01-01

    Vertical profiles of forest density are potentially robust indicators of forest biomass, fire susceptibility and ecosystem function. Tropical forests, which are among the most dense and complicated targets for remote sensing, contain about 45% of the world's biomass. Remote sensing of tropical forest structure is therefore an important component of global biomass and carbon monitoring. This paper shows preliminary results of a multibaseline interferometric SAR (InSAR) experiment over primary, secondary, and selectively logged forests at La Selva Biological Station in Costa Rica. The profile shown results from inverse Fourier transforming 8 of the 18 acquired baselines and is compared to lidar and field measurements. Results are highly preliminary and for qualitative assessment only. Parameter estimation will eventually replace Fourier inversion as the means of producing profiles.

  1. Comparison of two methods of estimation of low density lipoprotein cholesterol, the direct versus friedewald estimation

    Microsoft Academic Search

    Suchanda Sahu; Rajinder Chawla; Bharti Uppal

    2005-01-01

    Current recommendations of the Adult Treatment Panel and Adolescents Treatment Panel of the National Cholesterol Education Program make the low-density lipoprotein cholesterol (LDL-C) levels in serum the basis of classification and management of hypercholesterolemia. A number of direct homogenous assays based on surfactant/solubility principles have evolved in the recent past. This has made LDL-C estimation less cumbersome than the earlier used

  2. Total variation versus wavelet-based methods for image denoising in fluorescence lifetime imaging microscopy.

    PubMed

    Chang, Ching-Wei; Mycek, Mary-Ann

    2012-05-01

    We report the first application of wavelet-based denoising (noise removal) methods to time-domain box-car fluorescence lifetime imaging microscopy (FLIM) images and compare the results to novel total variation (TV) denoising methods. Methods were tested first on artificial images and then applied to low-light live-cell images. Relative to undenoised images, TV methods could improve lifetime precision up to 10-fold in artificial images, while preserving the overall accuracy of lifetime and amplitude values of a single-exponential decay model and improving local lifetime fitting in live-cell images. Wavelet-based methods were at least 4-fold faster than TV methods, but could introduce significant inaccuracies in recovered lifetime values. The denoising methods discussed can potentially enhance a variety of FLIM applications, including live-cell, in vivo animal, or endoscopic imaging studies, especially under challenging imaging conditions such as low-light or fast video-rate imaging. PMID:22415891

  3. Wavelet Based Image Denoising Using Intra Scale Dependency

    Microsoft Academic Search

    HYUN-YOUNG GO; JI-BUM LEE; HYUNG-HWA KO

    2006-01-01

    This paper focuses on exploiting the intra-scale dependency of the wavelet coefficients of natural images. Wavelet transform (WT) coefficients have statistical dependency; in particular, they have dependency between local coefficients (intra-scale). This paper uses the dependency between child coefficients to estimate coefficients corrupted by noise. For this purpose, we derive a bivariate model using the correlation between coefficients and a shrinkage function for denoising.

  4. Wavelet-Based Functional Mixed Models

    E-print Network

    Morris, Jeffrey S.

    quantities, we can perform pointwise or joint Bayesian inference or prediction on the quantities of the model framework. It yields nonparametric estimates of the fixed and random effects functions as well ... developing methodology to perform inference is needed. The complexity and high dimensionality

  5. Estimating Foreign-Object-Debris Density from Photogrammetry Data

    NASA Technical Reports Server (NTRS)

    Long, Jason; Metzger, Philip; Lane, John

    2013-01-01

    Within the first few seconds after launch of STS-124, debris traveling vertically near the vehicle was captured on two 16-mm film cameras surrounding the launch pad. One particular piece of debris caught the attention of engineers investigating the release of the flame trench fire bricks. The question to be answered was whether the debris was a fire brick, and if it represented the first bricks that were ejected from the flame trench wall, or whether the object was one of the pieces of debris normally ejected from the vehicle during launch. If it was typical launch debris, such as SRB throat plug foam, why was it traveling vertically and parallel to the vehicle during launch, instead of following its normal trajectory, flying horizontally toward the north perimeter fence? By utilizing the Runge-Kutta integration method for velocity and the Verlet integration method for position, a method that suppresses trajectory computational instabilities due to noisy position data was obtained. This combination of integration methods provides a means to extract the best estimate of drag force and drag coefficient under the non-ideal conditions of limited position data. This integration strategy leads immediately to the best possible estimate of object density, within the constraints of unknown particle shape. These types of calculations do not exist in readily available off-the-shelf simulation software, especially where photogrammetry data are needed as an input.
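
    The estimation chain, positions to derivatives to a drag parameter to a density bound, can be sketched with plain finite differences standing in for the Runge-Kutta/Verlet machinery of the article. All numbers below are invented for the demo, and the 1D treatment is a simplification of the two-camera geometry:

    ```python
    import numpy as np

    G, DT = 9.81, 1.0 / 64.0                     # gravity (m/s^2), 64 fps film

    def simulate(v0, beta, n):
        """Ground truth: dv/dt = -G - beta*v|v| (upward positive), where
        beta = rho_air*Cd*A/(2m) lumps the drag properties together."""
        z, v, zs = 0.0, v0, []
        for _ in range(n):
            v += (-G - beta * v * abs(v)) * DT
            z += v * DT
            zs.append(z)
        return np.array(zs)

    rng = np.random.default_rng(4)
    beta_true = 0.02
    z = simulate(30.0, beta_true, 120) + rng.normal(0, 0.02, 120)  # noisy tracking

    # Smooth, then differentiate twice; the moving average stands in for the
    # integration strategy that suppresses noise-driven instabilities.
    zs = np.convolve(z, np.ones(9) / 9.0, mode="valid")
    v = np.gradient(zs, DT)
    a = np.gradient(v, DT)

    # Least-squares fit of a = -G - beta*v|v|  =>  beta from the drag residual.
    x, y = v * np.abs(v), -(a + G)
    beta_est = np.sum(x * y) / np.sum(x * x)
    print(f"beta: true {beta_true:.3f}, estimated {beta_est:.3f}")
    # For a sphere of diameter d, beta = 3*rho_air*Cd/(4*rho_obj*d), so fixing
    # Cd and d turns beta into an object-density estimate.
    ```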

  6. A wavelet-based image denoising using least squares support vector machine

    Microsoft Academic Search

    Xiang-Yang Wang; Zhong-Kai Fu

    2010-01-01

    The least squares support vector machine (LS-SVM) is a modified version of SVM, which uses the equality constraints to replace the original convex quadratic programming problem. Consequently, the global minimizer is much easier to obtain in LS-SVM by solving the set of linear equation. LS-SVM has shown to exhibit excellent classification performance in many applications. In this paper, a wavelet-based

  7. MULTIVARIATE QUASI-LAPLACIAN MIXTURE MODELS FOR WAVELET-BASED IMAGE DENOISING

    Microsoft Academic Search

    Fei Shi; Ivan W. Selesnick

    In this paper we introduce a class of multivariate quasi-Laplacian models as a generalization of the single-variable Laplacian distribution to multi-dimensions. A mixture model is used as the wavelet coefficient prior for the wavelet-based Bayesian image denoising algorithm. As a multivariate probability model, it is able to capture the intra-scale or inter-scale dependencies among wavelet coefficients. Two special

  8. Simulating 2D Waves Propagation in Elastic Solid Media Using Wavelet Based Adaptive Method

    Microsoft Academic Search

    H. Yousefi; A. Noorzad; J. Farjoodi

    2010-01-01

    In this study, an improved wavelet-based adaptive-grid method is presented for solving the second order hyperbolic Partial Differential Equations (PDEs) for describing the waves propagation in elastic solid media. In this method, the multiresolution adaptive threshold-based approach is incorporated with smoothing splines as denoiser of spurious oscillations. This smoothing method is fast, stable, less sensitive to noise, and directly applicable

  9. Wavelet-based Bayesian fusion of multispectral and hyperspectral images using Gaussian scale mixture model

    Microsoft Academic Search

    Yifan Zhang

    2012-01-01

    In this article, a wavelet-based Bayesian fusion framework is presented, in which a low spatial resolution hyperspectral (HS) image is fused with a high spatial resolution multispectral (MS) image by accounting for the joint statistics. Particularly, a zero-mean heavy-tailed model, Gaussian scale mixture model, is employed as the prior, which is believed to be capable of modelling the distribution of

  10. Wavelet-based Bayesian fusion of multispectral and hyperspectral images using Gaussian scale mixture model

    Microsoft Academic Search

    Yifan Zhang

    2011-01-01

    In this article, a wavelet-based Bayesian fusion framework is presented, in which a low spatial resolution hyperspectral (HS) image is fused with a high spatial resolution multispectral (MS) image by accounting for the joint statistics. Particularly, a zero-mean heavy-tailed model, Gaussian scale mixture model, is employed as the prior, which is believed to be capable of modelling the distribution of

  11. Target Identification Using Wavelet-based Feature Extraction and Neural Network Classifiers

    Microsoft Academic Search

    Jose E. Lopez; Hung Han Chen; Jennifer Saulnier

    Classification of combat vehicle types based on acoustic and seismic signals remains a challenging task due to temporal and frequency variability that exists in these passively collected vehicle indicators. This paper presents the results of exploiting the wavelet characteristic of projecting signal dynamics to an efficient temporal/scale (i.e. frequency) decomposition and extracting from that process a set of wavelet-based features

  12. Wavelet-based acoustic emission analysis of material fatigue behavior: Bone cement

    NASA Astrophysics Data System (ADS)

    Ng, Eng-Teik

    2000-12-01

    A methodology developed in this dissertation, based on the time-frequency analysis of the acoustic emission (AE) signal generated by the cyclic loading of bone cement specimens, is presented. The discrete wavelet transform is utilized. The advantages of this method are that it eliminates the noise from the AE signal and provides multi-resolution analysis. To demonstrate the capability of this proposed method, the Palacos R bone cement is selected as an example. The compact tension specimens are prepared by hand mixing (HM) and vacuum mixing (VM) methods. The AE signal is decomposed into different wavelet levels by the Daubechies discrete wavelet transform. The D3 and A3 wavelet levels are chosen for wavelet analysis. The spectral frequencies in levels D3 and A3 are 180 kHz and 110 kHz, respectively. A ratio of reconstructed energy to total energy of over 90% is used to identify noise and insignificant components in the signal. The noise-free AE signal is used to determine the coefficients of the relationship between the wavelet-based AE energy rate, dE/dN, and the stress intensity factor range, ΔK_I, for both HM and VM specimens. Based on the statistical analysis, the results show that the VM method does not significantly affect the slope (τ) of the wavelet-based AE model. However, the wavelet-based AE energy rate of the VM specimen is one order of magnitude less than that of the HM specimen. In other words, the VM method significantly reduces the fatigue crack propagation rate of bone cement. Moreover, a fatigue life prediction model based on the wavelet transform is developed to determine the residual fatigue life of the material. In summary, the wavelet-based AE technique can distinguish the difference in intercepts between HM and VM specimens and provide accurate results. Therefore, this proposed method provides an efficient tool to study the fatigue crack propagation behavior of materials.

  13. Wavelet-based motion artifact removal for functional near-infrared spectroscopy

    Microsoft Academic Search

    Behnam Molavi; Guy A Dumont

    2012-01-01

    Functional near-infrared spectroscopy (fNIRS) is a powerful tool for monitoring brain functional activities. Due to its non-invasive and non-restraining nature, fNIRS has found broad applications in brain functional studies. However, for fNIRS to work well, it is important to reduce its sensitivity to motion artifacts. We propose a new wavelet-based method for removing motion artifacts from fNIRS signals. The method

  14. Efficiency analysis of parallelized wavelet-based FDTD model for simulating high-index optical devices

    NASA Astrophysics Data System (ADS)

    Ren, Rong; Wang, Jin; Jiang, Xiyan; Lu, Yunqing; Xu, Ji

    2014-10-01

    The finite-difference time-domain (FDTD) method, which solves time-dependent Maxwell's curl equations numerically, has been proved to be a highly efficient technique for numerous applications in electromagnetics. Despite the simplicity of the FDTD method, this technique suffers from serious limitations in cases where substantial computer resources are required to solve electromagnetic problems with medium or large computational dimensions, for example in high-index optical devices. In our work, an efficient wavelet-based FDTD model has been implemented and extended in a parallel computation environment, to analyze high-index optical devices. This model is based on Daubechies compactly supported orthogonal wavelets and Deslauriers-Dubuc interpolating functions as biorthogonal wavelet bases, and thus is a very efficient algorithm to solve differential equations numerically. This wavelet-based FDTD model is in fact a high-spatial-order FDTD. Because of the highly linear numerical dispersion properties of this high-spatial-order FDTD, the required discretization can be coarser than that required in the standard FDTD method. In our work, this wavelet-based FDTD model achieved a significant reduction in the number of cells, i.e., in memory used. Also, as different segments of the optical device can be computed simultaneously, there was a significant gain in computation time. Substantially, we achieved speed-up factors higher than 30 in comparison to using a single processor. Furthermore, the efficiency of the parallelized computation, such as the influence of the discretization and the load sharing between different processors, was analyzed. In conclusion, this parallel-computing model is promising for analyzing more complicated optical devices with large dimensions.

  15. ICER3D: A Progressive Wavelet-Based Compressor for Hyperspectral Images

    Microsoft Academic Search

    A. Kiely; M. Klimesh; H. Xie; N. Aranki

    2006-01-01

    ICER-3D is a progressive, wavelet-based compressor for hyperspectral images. ICER-3D is derived from the ICER image compressor. ICER-3D can provide lossless and lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The three-dimensional wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of hyperspectral data sets, while facilitating

  16. Lithological discrimination using a Wavelet Based Fractal Analysis at the Teapot Dome Field, Wyoming-USA

    NASA Astrophysics Data System (ADS)

    García, Alejandro; Aldana, Milagrosa; Cabrera, Ana

    2013-04-01

    In this work, we have applied a Wavelet Based Fractal Analysis (WBFA) to well logs and seismic data at the Teapot Dome Field, Natrona County, Wyoming, USA, trying to characterize a reservoir using fractal parameters, such as the intercept (b), slope (m) and fractal dimension (D), and to correlate them with the sedimentation processes and/or the lithological characteristics of the area. The WBFA was first applied to the available logs (Gamma Ray, Spontaneous Potential, Density, Neutron Porosity and Deep Resistivity) from 20 wells located at sectors 27, 28, 33 and 34 of the 3D seismic of the Teapot Dome field. The WBFA was also applied to the calculated water saturation (Sw) curve. In a second step, the method was used to analyze a set of seismic traces close to the studied wells, extracted from the 3D seismic data. Maps of the fractal parameters were obtained. A spectral analysis of the seismic data was also performed in order to identify seismic facies and to establish a possible correlation with the fractal results. The WBFA results obtained for the well logs indicate a correlation between the fractal parameters and the lithological content in the studied interval (i.e., top to base of the Frontier Formation). In particular, for the Gamma Ray logs the fractal dimension D can be correlated with the sand-shale content: values of D lower than 0.9 are observed for those wells with more sand content (sandy wells); values of D between 0.9 and 1.1 correspond to wells where the sand packs present numerous inter-bedded shale layers (sandy-shale wells); finally, wells with more shale content (shaly wells) have D values greater than 1.1. The analysis of the seismic traces allowed the discrimination of shaly from sandy zones. The D map generated for the seismic traces indicates that this value can be associated with the shale content in the area. The iso-frequency maps obtained from the seismic spectral analysis show trends associated with the lithology of the field. These trends are similar to those observed in the maps of the fractal parameters, indicating that both analyses respond to lithological and/or sedimentation features in the area.
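
    One common way to realize a WBFA of a log is to regress the log2 mean energy of the detail coefficients on decomposition level; the slope and intercept are then the fractal parameters, and under an fBm-type model the slope maps to a fractal dimension. A sketch along those lines (the paper's exact calibration of D may differ):

    ```python
    import numpy as np
    import pywt

    def wbfa(curve, wavelet="db4"):
        """Wavelet-based fractal analysis of a 1D log: regress the log2
        mean energy of the detail coefficients on level. Slope (m) and
        intercept (b) are the fractal parameters; under an fBm-type model
        the slope gives a Hurst exponent H = (m - 1)/2 and a profile
        fractal dimension D = 2 - H."""
        coeffs = pywt.wavedec(curve, wavelet)
        details = coeffs[:0:-1]                    # finest ... coarsest
        levels = np.arange(1, len(details) + 1)
        log_e = [np.log2(np.mean(d ** 2)) for d in details]
        m, b = np.polyfit(levels, log_e, 1)
        H = (m - 1.0) / 2.0
        return m, b, 2.0 - H

    rng = np.random.default_rng(5)
    gamma_ray = np.cumsum(rng.standard_normal(4096))   # Brownian-like log, H ~ 0.5
    m, b, D = wbfa(gamma_ray)
    print(f"slope {m:.2f}  intercept {b:.2f}  D {D:.2f}")   # D near 1.5
    ```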

  17. Three-dimensional medical image modeling of scattered data by using wavelet-based criterion

    NASA Astrophysics Data System (ADS)

    Lee, Kun

    2001-11-01

    Computerized tomography medical images contain significant low-intensity black regions along image boundaries. This paper discusses a visualization method using a wavelet-based, data-dependent criterion that minimizes the difference of wavelet coefficients across the common face. In the first procedure, the important data are selected based on the wavelet coefficients of the cuberille data. The marching cubes method is not appropriate when the data are scattered. Data-dependent tetrahedrization is one of the pre-processing steps for trivariate scattered data interpolation. The quality of an interpolation depends not only on the distribution of the data points in 3D space, but also on the data values. We apply the wavelet-based criterion to each tetrahedron. To obtain a smooth transition, we minimize the difference between adjacent surfaces, achieving nearly C1 continuity. Minimizing the difference of wavelet coefficients is thus tied to achieving a smooth transition. A simulated annealing algorithm is employed to achieve the global optimum for a wide class of optimization criteria. The results of trivariate scattered data interpolation are visualized through iso-surface rendering. The visualization algorithm of this study was implemented on an O2 workstation of Silicon Graphics Systems.

  18. Comparative study of different wavelet based neural network models for rainfall-runoff modeling

    NASA Astrophysics Data System (ADS)

    Shoaib, Muhammad; Shamseldin, Asaad Y.; Melville, Bruce W.

    2014-07-01

    The use of wavelet transformation in rainfall-runoff modeling has become popular because of its ability to simultaneously deal with both the spectral and the temporal information contained within time series data. The selection of an appropriate wavelet function plays a crucial role in the successful implementation of wavelet-based rainfall-runoff artificial neural network models, as it can lead to further enhancement in model performance. The present study is therefore conducted to evaluate the effects of 23 mother wavelet functions on the performance of hybrid wavelet-based artificial neural network rainfall-runoff models. The hybrid Multilayer Perceptron Neural Network (MLPNN) and the Radial Basis Function Neural Network (RBFNN) models are developed in this study using both the continuous wavelet and the discrete wavelet transformation types. The performances of the 92 developed wavelet-based neural network models with all 23 mother wavelet functions are compared with the neural network models developed without wavelet transformations. It is found that among all the models tested, the discrete wavelet transform multilayer perceptron neural network (DWTMLPNN) and the discrete wavelet transform radial basis function (DWTRBFNN) models at decomposition level nine with the db8 wavelet function have the best performance. The results also show that pre-processing the input rainfall data by the wavelet transformation can significantly increase the performance of the MLPNN and the RBFNN rainfall-runoff models.
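
    The preprocessing itself is a small amount of code: decompose the rainfall series, reconstruct one same-length component per subband, and feed the stack to the network in place of the raw input. A sketch with PyWavelets and scikit-learn, using a shallow level and synthetic data for brevity (the study used decompositions up to level nine):

    ```python
    import numpy as np
    import pywt
    from sklearn.neural_network import MLPRegressor

    def dwt_features(x, wavelet="db8", level=3):
        """Reconstruct one same-length component per subband; the stacked
        components replace the raw series as network input. (Decomposing the
        full series at once is common in this literature, though it mixes
        past and future information.)"""
        coeffs = pywt.wavedec(x, wavelet, level=level)
        parts = []
        for i in range(len(coeffs)):
            sel = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
            parts.append(pywt.waverec(sel, wavelet)[: len(x)])
        return np.column_stack(parts)

    rng = np.random.default_rng(6)
    rain = rng.gamma(0.6, 4.0, 2000)                                 # synthetic rainfall
    runoff = np.convolve(rain, np.exp(-np.arange(20) / 5.0))[:2000]  # toy catchment

    X = dwt_features(rain)
    model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    model.fit(X[:1500], runoff[:1500])
    print("held-out R^2:", round(model.score(X[1500:], runoff[1500:]), 3))
    ```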

  19. WaVPeak: picking NMR peaks through wavelet-based smoothing and volume-based filtering

    PubMed Central

    Liu, Zhi; Abbas, Ahmed; Jing, Bing-Yi; Gao, Xin

    2012-01-01

    Motivation: Nuclear magnetic resonance (NMR) has been widely used as a powerful tool to determine the 3D structures of proteins in vivo. However, the post-spectra processing stage of NMR structure determination usually involves a tremendous amount of time and expert knowledge, which includes peak picking, chemical shift assignment and structure calculation steps. Detecting accurate peaks from the NMR spectra is a prerequisite for all following steps, and thus remains a key problem in automatic NMR structure determination. Results: We introduce WaVPeak, a fully automatic peak detection method. WaVPeak first smoothes the given NMR spectrum by wavelets. The peaks are then identified as the local maxima. The false positive peaks are filtered out efficiently by considering the volume of the peaks. WaVPeak has two major advantages over the state-of-the-art peak-picking methods. First, through wavelet-based smoothing, WaVPeak does not eliminate any data point in the spectra. Therefore, WaVPeak is able to detect weak peaks that are embedded in the noise level. NMR spectroscopists need the most help isolating these weak peaks. Second, WaVPeak estimates the volume of the peaks to filter the false positives. This is more reliable than intensity-based filters that are widely used in existing methods. We evaluate the performance of WaVPeak on the benchmark set proposed by PICKY (Alipanahi et al., 2009), one of the most accurate methods in the literature. The dataset comprises 32 2D and 3D spectra from eight different proteins. Experimental results demonstrate that WaVPeak achieves an average of 96%, 91%, 88%, 76% and 85% recall on 15N-HSQC, HNCO, HNCA, HNCACB and CBCA(CO)NH, respectively. When the same number of peaks are considered, WaVPeak significantly outperforms PICKY. Availability: WaVPeak is an open source program. The source code and two test spectra of WaVPeak are available at http://faculty.kaust.edu.sa/sites/xingao/Pages/Publications.aspx. The online server is under construction. Contact: statliuzhi@xmu.edu.cn; ahmed.abbas@kaust.edu.sa; majing@ust.hk; xin.gao@kaust.edu.sa PMID:22328784
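
    The smooth-then-pick-then-filter pipeline can be imitated compactly: wavelet shrinkage for smoothing, a maximum filter for local maxima, and a box-sum "volume" for ranking candidates. A rough sketch, not WaVPeak's exact algorithm (all parameters are invented):

    ```python
    import numpy as np
    import pywt
    from scipy.ndimage import maximum_filter

    def pick_peaks(spectrum, wavelet="sym6", shrink=0.1):
        """Wavelet-smooth a 2D spectrum, take local maxima, then keep only
        candidates with a large integrated volume in a 5x5 neighbourhood."""
        coeffs = pywt.wavedec2(spectrum, wavelet, level=3)
        out = [coeffs[0]]
        for band in coeffs[1:]:
            out.append(tuple(pywt.threshold(c, shrink * np.abs(c).max(), "soft")
                             for c in band))
        h, w = spectrum.shape
        smooth = pywt.waverec2(out, wavelet)[:h, :w]

        is_max = (smooth == maximum_filter(smooth, size=5)) & (smooth > 0)
        ys, xs = np.nonzero(is_max)
        vols = np.array([smooth[max(y - 2, 0):y + 3, max(x - 2, 0):x + 3].sum()
                         for y, x in zip(ys, xs)])
        sel = vols >= 0.5 * vols.max()          # volume, not height, filters
        return sorted(zip(ys[sel], xs[sel]))

    rng = np.random.default_rng(7)
    yy, xx = np.mgrid[0:128, 0:128]
    spec = rng.normal(0.0, 0.1, (128, 128))
    for cy, cx in [(30, 40), (90, 100)]:        # two synthetic peaks
        spec += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / 8.0)
    print(pick_peaks(spec))   # expect maxima near (30, 40) and (90, 100)
    ```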

  20. Wavelet-based Adaptive Mesh Refinement Method for Global Atmospheric Chemical Transport Modeling

    NASA Astrophysics Data System (ADS)

    Rastigejev, Y.

    2011-12-01

    Numerical modeling of global atmospheric chemical transport presents enormous computational difficulties, associated with simulating a wide range of time and spatial scales. The described difficulties are exacerbated by the fact that hundreds of chemical species and thousands of chemical reactions typically are used for chemical kinetic mechanism description. These computational requirements very often force researchers to use relatively crude quasi-uniform numerical grids with inadequate spatial resolution that introduces significant numerical diffusion into the system. It was shown that this spurious diffusion significantly distorts the pollutant mixing and transport dynamics for typically used grid resolutions. The described numerical difficulties have to be systematically addressed, considering that the demand for fast, high-resolution chemical transport models will be exacerbated over the next decade by the need to interpret satellite observations of tropospheric ozone and related species. In this study we offer a dynamically adaptive multilevel Wavelet-based Adaptive Mesh Refinement (WAMR) method for numerical modeling of atmospheric chemical evolution equations. The adaptive mesh refinement is performed by adding and removing finer levels of resolution in the locations of fine scale development and in the locations of smooth solution behavior, respectively. The algorithm is based on the mathematically well established wavelet theory. This allows us to provide error estimates of the solution that are used in conjunction with appropriate threshold criteria to adapt the non-uniform grid. Other essential features of the numerical algorithm include: an efficient wavelet spatial discretization that minimizes the number of degrees of freedom for a prescribed accuracy, a fast algorithm for computing wavelet amplitudes, and efficient and accurate derivative approximations on an irregular grid. The method has been tested for a variety of benchmark problems including numerical simulation of transpacific traveling pollution plumes. The generated pollution plumes are diluted due to turbulent mixing as they are advected downwind. Despite this dilution, it was recently discovered that pollution plumes in the remote troposphere can preserve their identity as well-defined structures for two weeks or more as they circle the globe. Present Global Chemical Transport Models (CTMs) implemented on quasi-uniform grids are incapable of reproducing these layered structures due to high numerical plume dilution caused by numerical diffusion combined with the non-uniformity of atmospheric flow. It is shown that WAMR solutions of accuracy comparable to conventional numerical techniques are obtained with more than an order of magnitude reduction in the number of grid points; the adaptive algorithm is therefore capable of producing accurate results at a relatively low computational cost. The numerical simulations demonstrate that the WAMR algorithm applied to the traveling plume problem accurately reproduces the plume dynamics, unlike conventional numerical methods that utilize quasi-uniform numerical grids.
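
    The refinement criterion is the essential ingredient: grid points are kept wherever wavelet detail coefficients exceed a threshold, so resolution follows sharp features automatically. A 1D toy sketch of one such adaptation pass (the actual WAMR machinery is multilevel and multidimensional):

    ```python
    import numpy as np
    import pywt

    def adapt_grid(u, x, wavelet="db2", eps=1e-3):
        """One adaptation pass in 1D: retain grid points wherever a wavelet
        detail coefficient (any level) exceeds the threshold, so resolution
        concentrates around sharp solution features."""
        coeffs = pywt.wavedec(u, wavelet)
        keep = np.zeros(len(x), dtype=bool)
        keep[:: len(x) // 16] = True                 # coarse backbone everywhere
        for d in coeffs[1:]:                         # details, coarse to fine
            stride = max(len(x) // len(d), 1)
            for k, c in enumerate(d):
                if abs(c) > eps * np.abs(u).max():
                    i = min(k * stride, len(x) - 1)
                    keep[max(i - stride, 0): i + stride] = True
        return x[keep]

    x = np.linspace(-1.0, 1.0, 1024)
    u = np.tanh(50 * x)                              # steep front at x = 0
    xa = adapt_grid(u, x)
    print(f"kept {len(xa)} of {len(x)} points;",
          f"fraction within |x|<0.2: {np.mean(np.abs(xa) < 0.2):.2f}")
    ```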

  1. Wavelet-based image denoising using variance field diffusion

    NASA Astrophysics Data System (ADS)

    Liu, Zhenyu; Tian, Jing; Chen, Li; Wang, Yongtao

    2012-04-01

    Wavelet shrinkage is an image restoration technique based on the concept of thresholding the wavelet coefficients. The key challenge of wavelet shrinkage is to find an appropriate threshold value, which is typically controlled by the signal variance. To tackle this challenge, a new image restoration approach is proposed in this paper by using a variance field diffusion, which can provide more accurate variance estimation. Experimental results are provided to demonstrate the superior performance of the proposed approach.

  2. Multispectral Remote Sensing Image Classification Using Wavelet Based Features

    Microsoft Academic Search

    Saroj K. Meher; Bhavan Uma Shankar; Ashish Ghosh

    Multispectral remotely sensed images contain information over a large range of frequencies, and these frequencies change over different regions (irregular or frequency-variant behavior of the signal), which needs to be estimated properly for improved classification [1, 2, 3]. Multispectral remote sensing (RS) image data are basically complex in nature, which have both spectral features with

  3. Wavelet based analysis of rotational motion in digital image sequences

    Microsoft Academic Search

    Mingqi Kong; J.-P. Leduc; B. K. Ghosh; J. Corbett; V. M. Wickerhauser

    1998-01-01

    This paper addresses the problem of estimating, analyzing and tracking objects moving with spatio-temporal rotational motion (spin or orbit). It is assumed that the digital signals of interest are acquired from a camera and structured as digital image sequences. The trajectories in the signal are two-dimensional spatial projections in time of motion taking place in a three-dimensional space. The purpose

  4. Wavelet-Based Real-Time Diagnosis of Complex Systems

    NASA Technical Reports Server (NTRS)

    Gulati, Sandeep; Mackey, Ryan

    2003-01-01

    A new method of robust, autonomous real-time diagnosis of a time-varying complex system (e.g., a spacecraft, an advanced aircraft, or a process-control system) is presented here. It is based upon the characterization and comparison of (1) the execution of software, as reported by discrete data, and (2) data from sensors that monitor the physical state of the system, such as performance sensors or similar quantitative time-varying measurements. By taking account of the relationship between execution of, and the responses to, software commands, this method satisfies a key requirement for robust autonomous diagnosis, namely, ensuring that control is maintained and followed. Such monitoring of control software requires that estimates of the state of the system, as represented within the control software itself, are representative of the physical behavior of the system. In this method, data from sensors and discrete command data are analyzed simultaneously and compared to determine their correlation. If the sensed physical state of the system differs from the software estimate (see figure) or if the system fails to perform a transition as commanded by software, or such a transition occurs without the associated command, the system has experienced a control fault. This method provides a means of detecting such divergent behavior and automatically generating an appropriate warning.

  5. Hidden Markov models for wavelet-based blind source separation.

    PubMed

    Ichir, Mahieddine M; Mohammad-Djafari, Ali

    2006-07-01

    In this paper, we consider the problem of blind source separation in the wavelet domain. We propose a Bayesian estimation framework for the problem where different models of the wavelet coefficients are considered: the independent Gaussian mixture model, the hidden Markov tree model, and the contextual hidden Markov field model. For each of the three models, we give expressions of the posterior laws and propose appropriate Markov chain Monte Carlo algorithms in order to perform unsupervised joint blind separation of the sources and estimation of the mixing matrix and hyperparameters of the problem. Indeed, in order to achieve efficient joint separation and denoising procedures in the case of high noise levels in the data, a slight modification of the exposed models is presented: the Bernoulli-Gaussian mixture model, which is equivalent to a hard thresholding rule in denoising problems. A number of simulations are presented in order to highlight the performances of the aforementioned approach: 1) in both high and low signal-to-noise ratios and 2) comparing the results with respect to the choice of the wavelet basis decomposition. PMID:16830910

  6. Learning Multisensory Integration and Coordinate Transformation via Density Estimation

    PubMed Central

    Sabes, Philip N.

    2013-01-01

    Sensory processing in the brain includes three key operations: multisensory integration—the task of combining cues into a single estimate of a common underlying stimulus; coordinate transformations—the change of reference frame for a stimulus (e.g., retinotopic to body-centered) effected through knowledge about an intervening variable (e.g., gaze position); and the incorporation of prior information. Statistically optimal sensory processing requires that each of these operations maintains the correct posterior distribution over the stimulus. Elements of this optimality have been demonstrated in many behavioral contexts in humans and other animals, suggesting that the neural computations are indeed optimal. That the relationships between sensory modalities are complex and plastic further suggests that these computations are learned—but how? We provide a principled answer, by treating the acquisition of these mappings as a case of density estimation, a well-studied problem in machine learning and statistics, in which the distribution of observed data is modeled in terms of a set of fixed parameters and a set of latent variables. In our case, the observed data are unisensory-population activities, the fixed parameters are synaptic connections, and the latent variables are multisensory-population activities. In particular, we train a restricted Boltzmann machine with the biologically plausible contrastive-divergence rule to learn a range of neural computations not previously demonstrated under a single approach: optimal integration; encoding of priors; hierarchical integration of cues; learning when not to integrate; and coordinate transformation. The model makes testable predictions about the nature of multisensory representations. PMID:23637588
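
    A restricted Boltzmann machine trained with contrastive divergence is compact enough to sketch in full. The toy below trains CD-1 on binary "population" vectors driven by a shared latent stimulus, standing in for the unisensory populations of the paper (data and sizes are invented):

    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Toy binary data: 16 visible units driven by one latent stimulus,
    # mimicking two unisensory populations that share a common cause.
    stim = rng.random(5000)
    V = (np.repeat(stim[:, None], 16, axis=1)
         + 0.2 * rng.standard_normal((5000, 16))
         > np.linspace(0.1, 0.9, 16)).astype(float)

    nv, nh, lr = 16, 12, 0.05
    W = 0.01 * rng.standard_normal((nv, nh))
    b, c = np.zeros(nv), np.zeros(nh)

    for epoch in range(30):
        for i in range(0, 5000, 50):                  # mini-batches
            v0 = V[i:i + 50]
            ph0 = sigmoid(v0 @ W + c)                 # positive phase
            h0 = (rng.random(ph0.shape) < ph0).astype(float)
            pv1 = sigmoid(h0 @ W.T + b)               # one Gibbs step (CD-1)
            v1 = (rng.random(pv1.shape) < pv1).astype(float)
            ph1 = sigmoid(v1 @ W + c)
            W += lr * (v0.T @ ph0 - v1.T @ ph1) / 50  # contrastive divergence
            b += lr * (v0 - v1).mean(axis=0)
            c += lr * (ph0 - ph1).mean(axis=0)

    # Reconstruction error as a crude check that the density model improved.
    pv = sigmoid(sigmoid(V @ W + c) @ W.T + b)
    print("mean reconstruction error:", np.mean((V - pv) ** 2))
    ```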

  7. Learning multisensory integration and coordinate transformation via density estimation.

    PubMed

    Makin, Joseph G; Fellows, Matthew R; Sabes, Philip N

    2013-04-01

    Sensory processing in the brain includes three key operations: multisensory integration-the task of combining cues into a single estimate of a common underlying stimulus; coordinate transformations-the change of reference frame for a stimulus (e.g., retinotopic to body-centered) effected through knowledge about an intervening variable (e.g., gaze position); and the incorporation of prior information. Statistically optimal sensory processing requires that each of these operations maintains the correct posterior distribution over the stimulus. Elements of this optimality have been demonstrated in many behavioral contexts in humans and other animals, suggesting that the neural computations are indeed optimal. That the relationships between sensory modalities are complex and plastic further suggests that these computations are learned-but how? We provide a principled answer, by treating the acquisition of these mappings as a case of density estimation, a well-studied problem in machine learning and statistics, in which the distribution of observed data is modeled in terms of a set of fixed parameters and a set of latent variables. In our case, the observed data are unisensory-population activities, the fixed parameters are synaptic connections, and the latent variables are multisensory-population activities. In particular, we train a restricted Boltzmann machine with the biologically plausible contrastive-divergence rule to learn a range of neural computations not previously demonstrated under a single approach: optimal integration; encoding of priors; hierarchical integration of cues; learning when not to integrate; and coordinate transformation. The model makes testable predictions about the nature of multisensory representations. PMID:23637588

  8. A wavelet-based quadtree driven stereo image coding

    NASA Astrophysics Data System (ADS)

    Bensalma, Rafik; Larabi, Mohamed-Chaker

    2009-02-01

    In this work, a new stereo image coding technique is proposed. The new approach integrates the coding of the residual image with the disparity map, the latter computed in the wavelet transform domain. The motivation behind using this transform is that it imitates some properties of the human visual system (HVS), particularly the decomposition into perceptual channels. Therefore, using the wavelet transform allows for better perceptual image quality preservation. In order to estimate the disparity map, we used a quadtree segmentation in each wavelet frequency band. This segmentation has the advantage of minimizing the entropy. Dyadic squares in the subbands of the target image that are not matched with others in the reference image constitute the residuals, which are coded using an arithmetic codec. The obtained results are evaluated using the SSIM and PSNR criteria.

  9. A novel ultrasound methodology for estimating spine mineral density.

    PubMed

    Conversano, Francesco; Franchini, Roberto; Greco, Antonio; Soloperto, Giulia; Chiriacò, Fernanda; Casciaro, Ernesto; Aventaggiato, Matteo; Renna, Maria Daniela; Pisani, Paola; Di Paola, Marco; Grimaldi, Antonella; Quarta, Laura; Quarta, Eugenio; Muratore, Maurizio; Laugier, Pascal; Casciaro, Sergio

    2015-01-01

    We investigated the possible clinical feasibility and accuracy of an innovative ultrasound (US) method for diagnosis of osteoporosis of the spine. A total of 342 female patients (aged 51-60 y) underwent spinal dual X-ray absorptiometry and abdominal echographic scanning of the lumbar spine. Recruited patients were subdivided into a reference database used for US spectral model construction and a study population for repeatability and accuracy evaluation. US images and radiofrequency signals were analyzed via a new fully automatic algorithm that performed a series of spectral and statistical analyses, providing a novel diagnostic parameter called the osteoporosis score (O.S.). If dual X-ray absorptiometry is assumed to be the gold standard reference, the accuracy of O.S.-based diagnoses was 91.1%, with k = 0.859 (p < 0.0001). Significant correlations were also found between O.S.-estimated bone mineral densities and corresponding dual X-ray absorptiometry values, with r(2) values up to 0.73 and a root mean square error of 6.3%-9.3%. The results obtained suggest that the proposed method has the potential for future routine application in US-based diagnosis of osteoporosis. PMID:25438845

  10. Wavelet-based stereo images reconstruction using depth images

    NASA Astrophysics Data System (ADS)

    Jovanov, Ljubomir; Pižurica, Aleksandra; Philips, Wilfried

    2007-09-01

    It is believed by many that three-dimensional (3D) television will be the next logical development toward a more natural and vivid home entertainment experience. While the classical 3D approach requires the transmission of two video streams, one for each view, 3D TV systems based on depth-image-based rendering (DIBR) require a single stream of monoscopic images and a second stream of associated images, usually termed depth images or depth maps, that contain per-pixel depth information. A depth map is a two-dimensional function that contains information about the distance from the camera to a certain point of the object as a function of the image coordinates. By using this depth information and the original image it is possible to reconstruct a virtual image of a nearby viewpoint by projecting the pixels of the available image to their locations in 3D space and finding their position in the desired view plane. One of the most significant advantages of DIBR is that depth maps can be coded more efficiently than two streams corresponding to the left and right views of the scene, thereby reducing the bandwidth required for transmission, which makes it possible to reuse existing transmission channels for 3D TV. This technique can also be applied to other 3D technologies such as multimedia systems. In this paper we propose an advanced wavelet domain scheme for the reconstruction of stereoscopic images, which solves some of the shortcomings of the existing methods discussed above. We perform the wavelet transform of both the luminance and depth images in order to obtain significant geometric features, which enable a more sensible reconstruction of the virtual view. The motion estimation employed in our approach uses a Markov random field smoothness prior for regularization of the estimated motion field. The evaluation of the proposed reconstruction method is done on two video sequences which are typically used for comparison of stereo reconstruction algorithms. The results demonstrate advantages of the proposed approach with respect to the state-of-the-art methods, in terms of both objective and subjective performance measures.

  11. ENVIRONMENTAL AUDITING: Demonstration of Line Transect Methodologies to Estimate Urban Gray Squirrel Density

    PubMed

    Hein

    1997-11-01

    Because studies estimating density of gray squirrels (Sciurus carolinensis) have been labor intensive and costly, I demonstrate the use of line transect surveys to estimate gray squirrel density and determine the costs of conducting surveys to achieve precise estimates. Density estimates are based on four transects that were surveyed five times from 30 June to 9 July 1994. Using the program DISTANCE, I estimated there were 4.7 (95% CI = 1.86-11.92) gray squirrels/ha on the Clemson University campus. Eleven additional surveys would have decreased the percent coefficient of variation from 30% to 20% and would have cost approximately $114. Estimating urban gray squirrel density using line transect surveys is cost effective and can provide unbiased estimates of density, provided that none of the assumptions of distance sampling theory are violated. KEY WORDS: Bias; Density; Distance sampling; Gray squirrel; Line transect; Sciurus carolinensis. PMID:9336490
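
    The distance-sampling arithmetic behind such estimates is short: with a half-normal detection function, the effective strip half-width is ESW = s·sqrt(pi/2), where s² is estimated by the mean squared perpendicular distance, and D = n/(2·L·ESW). A sketch with invented sightings (the program DISTANCE fits and compares many detection models; this hard-codes one):

    ```python
    import numpy as np

    def halfnormal_density_per_ha(perp_dist_m, total_effort_km):
        """Distance sampling with a half-normal detection g(x)=exp(-x^2/(2s^2)):
        MLE s^2 = mean(x^2); effective strip half-width ESW = s*sqrt(pi/2);
        density D = n / (2 * L * ESW)."""
        x = np.asarray(perp_dist_m, dtype=float)
        s = np.sqrt(np.mean(x ** 2))
        esw_m = s * np.sqrt(np.pi / 2.0)
        L_m = total_effort_km * 1000.0
        return x.size / (2.0 * L_m * esw_m) * 1e4       # per hectare

    # Invented sightings: 60 perpendicular distances (m) pooled over
    # 4 transects x 5 visits of 0.5 km each (10 km of total effort).
    rng = np.random.default_rng(9)
    dists = np.abs(rng.normal(0.0, 8.0, 60))
    print(f"{halfnormal_density_per_ha(dists, 10.0):.2f} squirrels/ha")
    ```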

  12. Joint discrepancy evaluation of an existing steel bridge using time-frequency and wavelet-based approach

    NASA Astrophysics Data System (ADS)

    Walia, Suresh Kumar; Patel, Raj Kumar; Vinayak, Hemant Kumar; Parti, Raman

    2013-12-01

    The objective of this study is to bring out errors introduced during construction that are overlooked during the physical verification of the bridge. Such errors can be pointed out if the symmetry of the structure is challenged. This paper thus presents a study of the downstream and upstream trusses of a newly constructed steel bridge using time-frequency and wavelet-based approaches. The variation in the behavior of the truss joints of the bridge with variation in vehicle speed has been worked out to determine their flexibility. The testing on the steel bridge was carried out with the same instrument setup on both the upstream and downstream trusses at two different speeds with the same moving vehicle. The nodal flexibility investigation is carried out using power spectral density, short-time Fourier transform, and wavelet packet transform with respect to both trusses and speeds. The results obtained have shown that the joints of the upstream and downstream trusses of the bridge behave in different manners, even though designed for the same loading, due to constructional variations and vehicle movement, despite the fact that the analytical models present a simplistic model for analysis and design. The difficulty of modal parameter extraction of the particular bridge under study increased with the increase in speed due to decreased excitation time.

  13. Demonstration of line transect methodologies to estimate urban gray squirrel density

    SciTech Connect

    Hein, E.W. [Los Alamos National Lab., NM (United States)] [Los Alamos National Lab., NM (United States)

    1997-11-01

    Because studies estimating density of gray squirrels (Sciurus carolinensis) have been labor intensive and costly, I demonstrate the use of line transect surveys to estimate gray squirrel density and determine the costs of conducting surveys to achieve precise estimates. Density estimates are based on four transects that were surveyed five times from 30 June to 9 July 1994. Using the program DISTANCE, I estimated there were 4.7 (95% CI = 1.86-11.92) gray squirrels/ha on the Clemson University campus. Eleven additional surveys would have decreased the percent coefficient of variation from 30% to 20% and would have cost approximately $114. Estimating urban gray squirrel density using line transect surveys is cost effective and can provide unbiased estimates of density, provided that none of the assumptions of distance sampling theory are violated.

  14. Tropical forests and the global carbon cycle: Estimating state and change in biomass density. Book chapter

    SciTech Connect

    Brown, S.

    1996-07-01

    This chapter discusses estimating the biomass density of forest vegetation. Data from inventories of tropical Asia and America were used to estimate biomass densities. Efforts to quantify forest disturbance suggest that population density, at subnational scales, can be used as a surrogate index to encompass all the anthropogenic activities (logging, slash-and-burn agriculture, grazing) that lead to degradation of tropical forest biomass density.

  15. An economic prediction of refinement coefficients in wavelet-based adaptive methods for electron structure calculations.

    PubMed

    Pipek, János; Nagy, Szilvia

    2013-03-01

    The wave function of a many electron system contains inhomogeneously distributed spatial details, which makes it possible to reduce the number of fine-detail wavelets in multiresolution analysis approximations. Finding a method for decimating the unnecessary basis functions plays an essential role in avoiding an exponential increase of computational demand in wavelet-based calculations. We describe an effective prediction algorithm for the wavelet coefficients of the next resolution level, based on the approximate wave function expanded up to a given level. The prediction results in a reasonable approximation of the wave function and makes it possible to sort out the unnecessary wavelets with great reliability. PMID:23115109

  16. A novel 3D wavelet based filter for visualizing features in noisy biological data

    SciTech Connect

    Moss, W C; Haase, S; Lyle, J M; Agard, D A; Sedat, J W

    2005-01-05

    We have developed a 3D wavelet-based filter for visualizing structural features in volumetric data. The only variable parameter is a characteristic linear size of the feature of interest. The filtered output contains only those regions that are correlated with the characteristic size, thus denoising the image. We demonstrate the use of the filter by applying it to 3D data from a variety of electron microscopy samples including low contrast vitreous ice cryogenic preparations, as well as 3D optical microscopy specimens.

  17. Supervised and Reinforcement Evolutionary Learning for Wavelet-based Neuro-fuzzy Networks

    Microsoft Academic Search

    Cheng-jian Lin; Yong-cheng Liu; Chi-yung Lee

    2008-01-01

    This study presents a wavelet-based neuro-fuzzy network (WNFN). The proposed WNFN model combines the traditional Takagi–Sugeno–Kang (TSK) fuzzy model and wavelet neural networks (WNN). This study adopts non-orthogonal and compactly supported functions as wavelet neural network bases. A novel supervised evolutionary learning, called WNFN-S, is proposed to tune the adjustable parameters of the WNFN model. The proposed WNFN-S

  18. Nonparametric Density Estimation, Prediction, and Regression for Markov Sequences

    Microsoft Academic Search

    Sidney J. Yakowitz

    1985-01-01

    Let {X_i} be a stationary Markov sequence having a transition probability density function f(y | x) giving the pdf of X_{i+1} | (X_i = x). In this study, nonparametric density and regression techniques are employed to infer f(y | x) and m(x) = E[X_{i+1} | X_i = x]. It is seen that under certain regularity and Markovian

  19. Asymptotic Equivalence of Density Estimation and Gaussian White Noise

    E-print Network

    Nussbaum, Michael

    the analogous problem for the experiment given by n i.i.d. observations having density f on the unit interval: the experiment with density f is globally asymptotically equivalent to a white noise experiment with drift f^(1/2) and variance (1/4)n^(-1). This represents a nonparametric analog of Le Cam's heteroscedastic Gaussian approximation

  20. Curve Fitting of the Corporate Recovery Rates: The Comparison of Beta Distribution Estimation and Kernel Density Estimation

    PubMed Central

    Chen, Rongda; Wang, Ze

    2013-01-01

    Recovery rate is essential to the estimation of the portfolio's loss and economic capital. Neglecting the randomness of the distribution of the recovery rate may underestimate the risk. This study introduces two models of distribution, Beta distribution estimation and kernel density estimation, to simulate the distribution of recovery rates of corporate loans and bonds. Models based on the Beta distribution are in common daily use, such as CreditMetrics by J.P. Morgan, Portfolio Manager by KMV, and LossCalc by Moody's. However, the Beta distribution has a fatal defect: it cannot fit bimodal or multimodal distributions, such as the recovery rates of corporate loans and bonds as Moody's new data show. In order to overcome this flaw, kernel density estimation is introduced, and we compare the simulation results from the histogram, Beta distribution estimation, and kernel density estimation to reach the conclusion that the Gaussian kernel density estimate better imitates the distribution of bimodal or multimodal data samples of corporate loans and bonds. Finally, a Chi-square test of the Gaussian kernel density estimate shows that it fits the curve of recovery rates of loans and bonds. Using the kernel density estimate to delineate the bimodal recovery rates of bonds is therefore the better choice in credit risk management. PMID:23874558
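
    A minimal sketch of the comparison described above, on made-up bimodal data: a single Beta fit smooths the two modes into one, while a Gaussian kernel density estimate follows both. The mixture parameters are illustrative only.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        # Bimodal "recovery rate" sample on (0, 1): low and high recovery regimes.
        rates = np.concatenate([rng.beta(2, 8, 300), rng.beta(9, 2, 200)])

        a, b, _, _ = stats.beta.fit(rates, floc=0, fscale=1)   # single Beta fit
        kde = stats.gaussian_kde(rates)                        # Gaussian KDE

        grid = np.linspace(0.01, 0.99, 99)
        beta_pdf = stats.beta.pdf(grid, a, b)
        kde_pdf = kde(grid)
        # The Beta pdf is unimodal here; the KDE preserves the bimodality.
        print("Beta peak:", grid[beta_pdf.argmax()], " KDE peak:", grid[kde_pdf.argmax()])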

  2. Robust scale estimate for the generalized Gaussian Probability Density Function

    Microsoft Academic Search

    Rozenn Dahyot; Simon Wilson

    This article proposes a robust way to estimate the scale parameter of a generalised centered Gaussian mixture. The principle relies on the association of samples of this mixture to generate samples of a new variable that shows relevant distribution properties to estimate the unknown parameter. In fact, the distribution of this new variable shows a maximum that

  3. A wavelet-based bootstrap method applied to inertial sensor stochastic error modelling using the Allan variance

    NASA Astrophysics Data System (ADS)

    Sabatini, Angelo Maria

    2006-11-01

    A wavelet-based bootstrap method is proposed to generate surrogate data from inertial sensor noise time series and to construct bootstrap-based confidence intervals of selected parameters which are used to characterize their noise performance. The Allan variance, its links with wavelets and the whitening action of wavelet decompositions applied to long-memory stochastic processes are considered in developing the theory behind the proposed method. The conditions for the wavelet-based bootstrap method to work are discussed in the face of idiosyncrasies of inertial sensors, especially microelectromechanical systems (MEMS) inertial sensors. Computer simulation experiments demonstrate the validity of the method and its power in performing statistical inference from a moderately small dataset; additionally, the wavelet-based bootstrap method is applied to the task of stochastic error characterization for a MEMS orientation sensor, which integrates a tri-axis gyro and a tri-axis accelerometer.
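
    The Allan variance at the core of this method takes only a few lines to estimate; the sketch below gives a minimal non-overlapping estimator (not the paper's bootstrap machinery) on a simulated white-noise gyro record with an assumed sampling rate and noise level.

        import numpy as np

        def allan_variance(y, fs, m):
            """Non-overlapping Allan variance at averaging time tau = m / fs."""
            M = y.size // m                            # number of complete clusters
            means = y[:M * m].reshape(M, m).mean(axis=1)
            return 0.5 * np.mean(np.diff(means) ** 2)

        fs = 100.0
        y = 0.02 * np.random.randn(int(fs * 3600))     # 1 h of white rate noise
        for m in (1, 10, 100, 1000):
            print(m / fs, "s :", allan_variance(y, fs, m))
        # For white noise the Allan deviation falls as tau**-0.5, the classic
        # angle-random-walk slope used to characterize MEMS gyros.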

  4. Wavelet-Based Method for Instability Analysis in Boiling Water Reactors

    SciTech Connect

    Espinosa-Paredes, Gilberto; Prieto-Guerrero, Alfonso; Nunez-Carrera, Alejandro; Amador-Garcia, Rodolfo

    2005-09-15

    This paper introduces a wavelet-based method to analyze instability events in a boiling water reactor (BWR) during transient phenomena. The methodology to analyze BWR signals includes the following: (a) the short-time Fourier transform (STFT) analysis, (b) decomposition using the continuous wavelet transform (CWT), and (c) application of multiresolution analysis (MRA) using discrete wavelet transform (DWT). STFT analysis permits the study, in time, of the spectral content of analyzed signals. The CWT provides information about ruptures, discontinuities, and fractal behavior. To detect these important features in the signal, a mother wavelet has to be chosen and applied at several scales to obtain optimum results. MRA allows fast implementation of the DWT. Features like important frequencies, discontinuities, and transients can be detected with analysis at different levels of detail coefficients. The STFT was used to provide a comparison between a classic method and the wavelet-based method. The damping ratio, which is an important stability parameter, was calculated as a function of time. The transient behavior can be detected by analyzing the maximum contained in detail coefficients at different levels in the signal decomposition. This method allows analysis of both stationary signals and highly nonstationary signals in the timescale plane. This methodology has been tested with the benchmark power instability event of Laguna Verde nuclear power plant (NPP) Unit 1, which is a BWR-5 NPP.
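
    A minimal sketch of steps (a) through (c) on a synthetic nonstationary signal; the BWR benchmark data are not reproduced here, so a frequency-switching test signal stands in and every parameter is illustrative.

        import numpy as np
        from scipy import signal
        import pywt

        fs = 100.0
        t = np.arange(0, 40, 1 / fs)
        # Oscillation frequency jumps at t = 20 s, mimicking a nonstationary transient.
        x = np.where(t < 20, np.sin(2 * np.pi * 1.0 * t), np.sin(2 * np.pi * 3.0 * t))

        # (a) STFT: spectral content as a function of time.
        f, tt, Zxx = signal.stft(x, fs=fs, nperseg=512)

        # (b) CWT with a Morlet mother wavelet at several scales.
        coefs, freqs = pywt.cwt(x, np.arange(1, 128), 'morl', sampling_period=1 / fs)
        mid = x.size // 2
        print("dominant frequency before:", freqs[np.abs(coefs[:, mid // 2]).argmax()])
        print("dominant frequency after: ", freqs[np.abs(coefs[:, mid + mid // 2]).argmax()])

        # (c) DWT multiresolution analysis: detail coefficients level by level.
        coeffs = pywt.wavedec(x, 'db4', level=5)       # [cA5, cD5, ..., cD1]
        for i, d in enumerate(coeffs[1:], start=1):
            print("detail band", i, "max |coef| =", float(np.abs(d).max()))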

  5. Evaluation of Effectiveness of Wavelet Based Denoising Schemes Using ANN and SVM for Bearing Condition Classification

    PubMed Central

    G. S., Vijay; H. S., Kumar; Pai P., Srinivasa; N. S., Sriram; Rao, Raj B. K. N.

    2012-01-01

    Wavelet-based denoising has proven its ability to denoise bearing vibration signals by improving the signal-to-noise ratio (SNR) and reducing the root-mean-square error (RMSE). In this paper, seven wavelet-based denoising schemes are evaluated based on the performance of an Artificial Neural Network (ANN) and a Support Vector Machine (SVM) for bearing condition classification. The work consists of two parts. In the first part, a synthetic signal simulating a defective bearing vibration signal with Gaussian noise was subjected to these denoising schemes, and the best scheme based on the SNR and the RMSE was identified. In the second part, vibration signals collected from a customized Rolling Element Bearing (REB) test rig for four bearing conditions were subjected to the same schemes. Several time- and frequency-domain features were extracted from the denoised signals, out of which a few sensitive features were selected using Fisher's Criterion (FC). The extracted features were used to train and test the ANN and the SVM. The best denoising scheme identified from the classification performance of the ANN and the SVM was the same as the one obtained using the synthetic signal. PMID:23213323

  6. Weak convergence of local random fields of kernel density estimators

    E-print Network

    Nishiyama, Yoichi

    elementary that we may safely omit a bibliographical review. However, since the density f is originally defined … does not intrinsically make sense. Thus, an assertion in some functional sense is preferable in order for, e

  7. Effects of LiDAR point density and landscape context on estimates of urban forest biomass

    NASA Astrophysics Data System (ADS)

    Singh, Kunwar K.; Chen, Gang; McCarter, James B.; Meentemeyer, Ross K.

    2015-03-01

    Light Detection and Ranging (LiDAR) data is being increasingly used as an effective alternative to conventional optical remote sensing to accurately estimate aboveground forest biomass ranging from individual tree to stand levels. Recent advancements in LiDAR technology have resulted in higher point densities and improved data accuracies accompanied by challenges for procuring and processing voluminous LiDAR data for large-area assessments. Reducing point density lowers data acquisition costs and overcomes computational challenges for large-area forest assessments. However, how does lower point density impact the accuracy of biomass estimation in forests containing a great level of anthropogenic disturbance? We evaluate the effects of LiDAR point density on the biomass estimation of remnant forests in the rapidly urbanizing region of Charlotte, North Carolina, USA. We used multiple linear regression to establish a statistical relationship between field-measured biomass and predictor variables derived from LiDAR data with varying densities. We compared the estimation accuracies between a general Urban Forest type and three Forest Type models (evergreen, deciduous, and mixed) and quantified the degree to which landscape context influenced biomass estimation. The explained biomass variance of the Urban Forest model, using adjusted R2, was consistent across the reduced point densities, with the highest difference of 11.5% between the 100% and 1% point densities. The combined estimates of Forest Type biomass models outperformed the Urban Forest models at the representative point densities (100% and 40%). The Urban Forest biomass model with development density of 125 m radius produced the highest adjusted R2 (0.83 and 0.82 at 100% and 40% LiDAR point densities, respectively) and the lowest RMSE values, highlighting a distance impact of development on biomass estimation. Our evaluation suggests that reducing LiDAR point density is a viable solution to regional-scale forest assessment without compromising the accuracy of biomass estimates, and these estimates can be further improved using development density.

  8. Some Results Regarding the Estimation of Densities and Random Variate Generation Using Neural Networks

    E-print Network

    Our methods do not suffer from some of the restrictions of existing random number generation methods. … the density estimation process and the random number generation process. We present two variants. Keywords: density estimation, random number generation, distribution function, multilayer network, neural

  9. Characterization of a maximum-likelihood nonparametric density estimator of kernel type

    NASA Technical Reports Server (NTRS)

    Geman, S.; Mcclure, D. E.

    1982-01-01

    Kernel-type density estimators calculated by the method of sieves are considered. Proofs are presented for the characterization theorem: Let x_1, x_2, ..., x_n be a random sample from a population with density f_0. Let sigma > 0 and consider estimators f of f_0 defined by (1).

  10. A NOVEL ESTIMATION METHOD FOR PREDICTING SPATIAL DENSITY OF FERAL SWINE USING ECOLOGICAL DATA

    Microsoft Academic Search

    S. Rollo; L. D. Highfield; M. P. Ward

    Summary: Using social structure, animal behavior, and ecological landscape variables, a population density distribution of feral swine was created for use in a wildlife disease simulation model. Hydrology and land-use databases were combined into a continuous matrix, and aggregated estimates of feral swine densities were spatially distributed based on these variables in proportion to their estimated use. The

  11. We report on a successful implementation of a wavelet-based Poisson solver for use in 3D particle-in-cell (PIC) simulations. One new aspect of

    E-print Network

    Terzić, Balša

    CONSTRAINED PRECONDITIONED CONJUGATE GRADIENT (CPCG) WAVELET-BASED POISSON SOLVER. The continuous wavelet transform … is applied to the modelling of the Fermilab/NICADD and the AES/JLab photoinjectors. We report on a successful implementation of a wavelet-based Poisson solver for use in 3D particle-in-cell (PIC) simulations.

  12. Wavelet Based SAR Speckle Reduction and Image Compression J. E. Odegard, H. Guo, M. Lang, C. S. Burrus, R. O. Wells, Jr.

    E-print Network

    This paper evaluates the performance of the recently published wavelet-based algorithm for speckle reduction of SAR images at Lincoln Laboratory (LL). The LL benchmarks show that the SAR imagery is significantly enhanced perceptually

  13. Density-ratio robustness in dynamic state estimation Alessio Benavoli and Marco Zaffalon

    E-print Network

    Zaffalon, Marco

    closed convex set of probabilities that is known by the name of density ratio class, or constant odds-ratio class, of distributions. Second, after revising the properties of the density ratio class in the context of parametric

  14. An Evaluation of the Accuracy of Kernel Density Estimators for Home Range Analysis

    Microsoft Academic Search

    D. Erran Seaman; Roger A. Powell

    2008-01-01

    Abstract. Kernel density estimators are becoming more widely used, particularly as home range estimators. Despite extensive interest in their theoretical properties, little empirical research has been done to investigate their performance as home range estimators. We used computer simulations to compare the area and shape of kernel density estimates to the true area and shape of multimodal two-dimensional distributions. The fixed kernel gave area estimates with very little bias when least squares cross validation was used to select

  15. A New Computational Approach to Density Estimation with ...

    E-print Network

    2003-12-19

    tractable procedure to determine parameters such as bandwidth or smoothness weight. … g can be done easily by grid search and nonlinear programming techniques [12, 28]. … The logarithmic barrier function is a convex function whose value diverges as X approaches … line) is close to the true density (

  16. An Asymmetric Kernel Estimator of Density Function for Stationary Associated Sequences

    Microsoft Academic Search

    Yogendra P. Chaubey; Isha Dewan; Jun Li

    2012-01-01

    Here, we apply the smoothing technique proposed by Chaubey et al. (2007) for the empirical survival function studied in Bagai and Prakasa Rao (1991) for a sequence of stationary non-negative associated random variables. The derivative of this estimator in turn is used to propose a nonparametric density estimator. The asymptotic properties of the resulting estimators are studied and contrasted with some

  17. The Spectral Density Estimation of Stationary Time Series with Missing Data

    Microsoft Academic Search

    Jian Huang; Finbarr O'Sullivan

    2002-01-01

    The spectral estimation of unevenly sampled data has been widely investigated in astronomical and medical areas. However the investigations are usually carried out in the context of periodicity detection and deterministic signal. Here we consider estimating the spectral density of stationary time series with missing data. An asymptotically unbiased estimation approach is proposed. The simulations are used to compare

  18. Density estimation implications of increasing ambient noise on

    E-print Network

    Thomas, Len

    …and classification. Tiago A. Marques, Jessica Ward, Susan Jarvis, David Moretti, Ronald Morrissey, Nancy Di… (University of St. Andrews, St. Andrews, Scotland, United Kingdom; Naval Undersea Warfare Center Division). … estimate the mean detection probability of detecting the animals or cues of interest. This is often done

  19. Wavelet-based functional linear mixed models: an application to measurement error–corrected distributed lag models

    PubMed Central

    Malloy, Elizabeth J.; Morris, Jeffrey S.; Adar, Sara D.; Suh, Helen; Gold, Diane R.; Coull, Brent A.

    2010-01-01

    Frequently, exposure data are measured over time on a grid of discrete values that collectively define a functional observation. In many applications, researchers are interested in using these measurements as covariates to predict a scalar response in a regression setting, with interest focusing on the most biologically relevant time window of exposure. One example is in panel studies of the health effects of particulate matter (PM), where particle levels are measured over time. In such studies, there are many more values of the functional data than observations in the data set so that regularization of the corresponding functional regression coefficient is necessary for estimation. Additional issues in this setting are the possibility of exposure measurement error and the need to incorporate additional potential confounders, such as meteorological or co-pollutant measures, that themselves may have effects that vary over time. To accommodate all these features, we develop wavelet-based linear mixed distributed lag models that incorporate repeated measures of functional data as covariates into a linear mixed model. A Bayesian approach to model fitting uses wavelet shrinkage to regularize functional coefficients. We show that, as long as the exposure error induces fine-scale variability in the functional exposure profile and the distributed lag function representing the exposure effect varies smoothly in time, the model corrects for the exposure measurement error without further adjustment. Both these conditions are likely to hold in the environmental applications we consider. We examine properties of the method using simulations and apply the method to data from a study examining the association between PM, measured as hourly averages for 1–7 days, and markers of acute systemic inflammation. We use the method to fully control for the effects of confounding by other time-varying predictors, such as temperature and co-pollutants. PMID:20156988

  20. Wavelet-based Joint Estimation and Encoding of Depth-Image-based Representations for

    E-print Network

    Do, Minh N.

    …lies in general in a seven-dimensional space. Each light ray travels along a line, which is described by a point (three dimensions), an angular orientation (two dimensions) and a time instant (one dimension). … the third spatial dimension via stereoscopy but also by allowing them to move inside the 3D video freely.

  1. 886 IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 46, NO. 4, APRIL 1998 Wavelet-Based Statistical Signal Processing

    E-print Network

    Wavelet-based statistical signal processing techniques such as denoising and detection typically model the wavelet coefficients as independent. … The wavelet domain provides a natural setting for many applications involving real-world signals and image processing.

  2. 2744 IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 50, NO. 11, NOVEMBER 2002 Bivariate Shrinkage Functions for Wavelet-Based

    E-print Network

    Selesnick, Ivan

    … statistical modeling of the wavelet transform coefficients of natural images and its application to the image denoising problem; the method is also applied to the dual-tree complex wavelet coefficients. Index Terms: bivariate shrinkage, image denoising … (Levent Şendur, Student Member, IEEE).

  3. Comparison Between Geometry-Based and Gabor-Wavelets-Based Facial Expression Recognition Using Multi-Layer Perceptron

    E-print Network

    Lyons, Michael J.

    …human-computer interface. An automatic FER system needs to solve the following problems: detection and location of faces … coefficients at each point through image convolution. Compared with face recognition, there is relatively

  4. Computerized Medical Imaging and Graphics 31 (2007) 18 Wavelet-based medical image compression with adaptive prediction

    E-print Network

    Chang, Pao-Chi

    2007-01-01

    Computerized Medical Imaging and Graphics 31 (2007) 1–8. Wavelet-based medical image compression with adaptive prediction. Keywords: medical image; selection of predictor variables; adaptive arithmetic coding; multicollinearity problem. Medical images are a special category of images in their characteristics and purposes. Medical

  5. Wavelet-based multi-resolution analysis and artificial neural networks for forecasting temperature and thermal power consumption

    E-print Network

    Boyer, Edmond

    The current European energy context reveals that the building sector is one of the largest sectors … (excessive energy consumption), managing energy demand, promoting renewable energy and finding ways …

  6. DIVERGENCE-FREE WAVELET BASES ON THE HYPERCUBE: FREE-SLIP BOUNDARY CONDITIONS, AND APPLICATIONS FOR SOLVING THE

    E-print Network

    Stevenson, Rob

    … at the boundary. We give a simultaneous space-time variational formulation of the instationary Stokes equations. By equipping these Bochner spaces with tensor products of temporal and divergence-free spatial wavelets

  7. Wavelet-based local region-of-interest reconstruction for synchrotron radiation x-ray microtomography

    SciTech Connect

    Li Lingqi; Toda, Hiroyuki; Ohgaki, Tomomi; Kobayashi, Masakazu; Kobayashi, Toshiro; Uesugi, Kentaro; Suzuki, Yoshio [Department of Production Systems Engineering, Toyohashi University of Technology, Toyohashi, Aichi 441-8580 (Japan); Japan Synchrotron Radiation Research Institute, Sayo-gun, Hyougo 679-5198 (Japan)

    2007-12-01

    Synchrotron radiation x-ray microtomography is becoming a uniquely powerful method to nondestructively access three-dimensional internal microstructure in biological and engineering materials, with a resolution of 1 μm or less. The tiny field of view of the detector, however, requires that the sample has to be strictly small, which would limit the practical applications of the method such as in situ experiments. In this paper, a wavelet-based local tomography algorithm is proposed to recover a small region of interest inside a large object only using the local projections, which is motivated by the localization property of wavelet transform. Local tomography experiment for an Al-Cu alloy is carried out at SPring-8, the third-generation synchrotron radiation facility in Japan. The proposed method readily enables the high-resolution observation for a large specimen, by which the applicability of the current microtomography would be promoted to a large extent.

  8. Wavelet-based image restoration for compact X-ray microscopy.

    PubMed

    Stollberg, H; Boutet de Monvel, J; Holmberg, A; Hertz, H M

    2003-08-01

    Compact water-window X-ray microscopy with short exposure times will always be limited on photons owing to sources of limited power in combination with low-efficiency X-ray optics. Thus, it is important to investigate methods for improving the signal-to-noise ratio in the images. We show that a wavelet-based denoising procedure significantly improves the quality and contrast of compact X-ray microscopy images. A non-decimated, discrete wavelet transform (DWT) is applied to the original, noisy images. After applying a thresholding procedure to the finest scales of the DWT, by setting to zero all wavelet coefficients of magnitude below a prescribed value, the inverse DWT of the thresholded DWT produces denoised images. It is concluded that the denoising procedure has the potential to reduce the exposure time by a factor of 2 without loss of relevant image information. PMID:12887709
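
    The procedure described above maps directly onto a stationary (non-decimated) wavelet transform with hard thresholding. The sketch below shows it in 1D with an assumed wavelet, level, and threshold, not the authors' exact 2D pipeline.

        import numpy as np
        import pywt

        rng = np.random.default_rng(1)
        clean = np.sin(np.linspace(0, 8 * np.pi, 1024))
        noisy = clean + 0.3 * rng.standard_normal(1024)

        level = 4
        coeffs = pywt.swt(noisy, 'sym8', level=level)    # non-decimated DWT

        thr = 3 * 0.3                                    # prescribed threshold (assumed)
        denoised_coeffs = []
        for j, (cA, cD) in enumerate(coeffs):
            if j >= level - 2:                           # last entries = finest scales
                cD = pywt.threshold(cD, thr, mode='hard')  # zero small detail coefficients
            denoised_coeffs.append((cA, cD))

        denoised = pywt.iswt(denoised_coeffs, 'sym8')
        print("RMSE before:", float(np.sqrt(np.mean((noisy - clean) ** 2))),
              "after:", float(np.sqrt(np.mean((denoised - clean) ** 2))))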

  9. Design of wavelet-based ECG detector for implantable cardiac pacemakers.

    PubMed

    Min, Young-Jae; Kim, Hoon-Ki; Kang, Yu-Ri; Kim, Gil-Su; Park, Jongsun; Kim, Soo-Won

    2013-08-01

    A wavelet Electrocardiogram (ECG) detector for low-power implantable cardiac pacemakers is presented in this paper. The proposed wavelet-based ECG detector consists of a wavelet decomposer with wavelet filter banks, a QRS complex detector of hypothesis testing with wavelet-demodulated ECG signals, and a noise detector with zero-crossing points. In order to achieve high detection accuracy with low power consumption, a multi-scaled product algorithm and soft-threshold algorithm are efficiently exploited in our ECG detector implementation. Our algorithmic and architectural level approaches have been implemented and fabricated in a standard 0.35 μm CMOS technology. The testchip including a low-power analog-to-digital converter (ADC) shows a low detection error-rate of 0.196% and low power consumption of 19.02 μW with a 3 V supply voltage. PMID:23893202
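
    The multi-scaled product idea referenced above can be sketched in software: detail coefficients at adjacent dyadic scales are multiplied, so QRS-like transients that persist across scales reinforce each other while noise does not. The synthetic ECG, wavelet choice, and threshold below are all assumptions; the paper's filter banks are dedicated hardware.

        import numpy as np
        import pywt

        fs = 256
        t = np.arange(0, 4, 1 / fs)
        ecg = np.zeros_like(t)
        ecg[np.arange(4) * fs + fs // 2] = 1.0        # idealized R peaks at 1 Hz
        ecg = np.convolve(ecg, np.hanning(9), mode='same') + 0.05 * np.random.randn(t.size)

        swt = pywt.swt(ecg, 'db2', level=3)           # undecimated transform
        details = [cD for _, cD in reversed(swt)]     # detail bands, finest first

        p = details[1] * details[2]                   # product across scales 2 and 3
        thr = 4 * np.median(np.abs(p))                # ad hoc data-driven threshold (assumed)
        peaks = np.where(p > thr)[0]
        print("candidate QRS sample indices:", peaks[:10])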

  10. Corrosion in Reinforced Concrete Panels: Wireless Monitoring and Wavelet-Based Analysis

    PubMed Central

    Qiao, Guofu; Sun, Guodong; Hong, Yi; Liu, Tiejun; Guan, Xinchun

    2014-01-01

    To realize efficient data capture and accurate analysis of pitting corrosion in reinforced concrete (RC) structures, we first design and implement a wireless sensor network (WSN) to monitor the pitting corrosion of RC panels, and then we propose a wavelet-based algorithm to analyze the corrosion state from the corrosion data collected by the wireless platform. We design a novel pitting-corrosion-detecting mote and a communication protocol such that the monitoring platform can sample the electrochemical emission signals of the corrosion process at a configured period and send these signals to a central computer for analysis. The proposed algorithm, based on wavelet-domain analysis, returns the energy distribution of the electrochemical emission data, from which closer observation and understanding can be achieved. We also conducted test-bed experiments based on RC panels. The results verify the feasibility and efficiency of the proposed WSN system and algorithms. PMID:24556673

  11. Leg Motion Classification with Artificial Neural Networks Using Wavelet-Based Features of Gyroscope Signals

    PubMed Central

    Ayrulu-Erdem, Birsel; Barshan, Billur

    2011-01-01

    We extract the informative features of gyroscope signals using the discrete wavelet transform (DWT) decomposition and provide them as input to multi-layer feed-forward artificial neural networks (ANNs) for leg motion classification. Since the DWT is based on correlating the analyzed signal with a prototype wavelet function, selection of the wavelet type can influence the performance of wavelet-based applications significantly. We also investigate the effect of selecting different wavelet families on classification accuracy and ANN complexity and provide a comparison between them. The maximum classification accuracy of 97.7% is achieved with the Daubechies wavelet of order 16 and the reverse bi-orthogonal (RBO) wavelet of order 3.1, both with similar ANN complexity. However, the RBO 3.1 wavelet is preferable because of its lower computational complexity in the DWT decomposition and reconstruction. PMID:22319378

  12. Wavelet-based correlations of impedance cardiography signals and heart rate variability

    NASA Astrophysics Data System (ADS)

    Podtaev, Sergey; Dumler, Andrew; Stepanov, Rodion; Frick, Peter; Tziberkin, Kirill

    2010-04-01

    The wavelet-based correlation analysis is employed to study impedance cardiography signals (variation in the impedance of the thorax z(t) and the time derivative of the thoracic impedance (-dz/dt)) and heart rate variability (HRV). A method of computer thoracic tetrapolar polyrheocardiography is used for hemodynamic registrations. The modulus of the wavelet-correlation function shows the level of correlation, and the phase indicates the mean phase shift of oscillations at the given scale (frequency). Significant correlations essentially exceeding the values obtained for noise signals are defined within two spectral ranges, which correspond to respiratory activity (0.14-0.5 Hz) and to endothelial-related metabolic activity and neuroendocrine rhythms (0.0095-0.02 Hz). Probably, the phase shift of oscillations in all frequency ranges is related to the peculiarities of parasympathetic and neuro-humoral regulation of the cardiovascular system.

  13. A new algorithm for wavelet-based heart rate variability analysis

    E-print Network

    García, Constantino A; Vila, Xosé; Márquez, David G

    2014-01-01

    One of the most promising non-invasive markers of the activity of the autonomic nervous system is Heart Rate Variability (HRV). HRV analysis toolkits often provide spectral analysis techniques using the Fourier transform, which assumes that the heart rate series is stationary. To overcome this issue, the Short Time Fourier Transform is often used (STFT). However, the wavelet transform is thought to be a more suitable tool for analyzing non-stationary signals than the STFT. Given the lack of support for wavelet-based analysis in HRV toolkits, such analysis must be implemented by the researcher. This has made this technique underutilized. This paper presents a new algorithm to perform HRV power spectrum analysis based on the Maximal Overlap Discrete Wavelet Packet Transform (MODWPT). The algorithm calculates the power in any spectral band with a given tolerance for the band's boundaries. The MODWPT decomposition tree is pruned to avoid calculating unnecessary wavelet coefficients, thereby optimizing execution t...

  15. Probabilistic Analysis and Density Parameter Estimation Within Nessus

    NASA Technical Reports Server (NTRS)

    Godines, Cody R.; Manteufel, Randall D.; Chamis, Christos C. (Technical Monitor)

    2002-01-01

    This NASA educational grant has the goal of promoting probabilistic analysis methods to undergraduate and graduate UTSA engineering students. Two undergraduate-level and one graduate-level course were offered at UTSA providing a large number of students exposure to and experience in probabilistic techniques. The grant provided two research engineers from Southwest Research Institute the opportunity to teach these courses at UTSA, thereby exposing a large number of students to practical applications of probabilistic methods and state-of-the-art computational methods. In classroom activities, students were introduced to the NESSUS computer program, which embodies many algorithms in probabilistic simulation and reliability analysis. Because the NESSUS program is used at UTSA in both student research projects and selected courses, a student version of a NESSUS manual has been revised and improved, with additional example problems being added to expand the scope of the example application problems. This report documents two research accomplishments in the integration of a new sampling algorithm into NESSUS and in the testing of the new algorithm. The new Latin Hypercube Sampling (LHS) subroutines use the latest NESSUS input file format and specific files for writing output. The LHS subroutines are called out early in the program so that no unnecessary calculations are performed. Proper correlation between sets of multidimensional coordinates can be obtained by using NESSUS' LHS capabilities. Finally, two types of correlation are written to the appropriate output file. The program enhancement was tested by repeatedly estimating the mean, standard deviation, and 99th percentile of four different responses using Monte Carlo (MC) and LHS. These test cases, put forth by the Society of Automotive Engineers, are used to compare probabilistic methods. For all test cases, it is shown that LHS has a lower estimation error than MC when used to estimate the mean, standard deviation, and 99th percentile of the four responses at the 50 percent confidence level and using the same number of response evaluations for each method. In addition, LHS requires fewer calculations than MC in order to be 99.7 percent confident that a single mean, standard deviation, or 99th percentile estimate will be within at most 3 percent of the true value of the each parameter. Again, this is shown for all of the test cases studied. For that reason it can be said that NESSUS is an important reliability tool that has a variety of sound probabilistic methods a user can employ; furthermore, the newest LHS module is a valuable new enhancement of the program.
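
    The MC-versus-LHS behavior reported here can be reproduced in spirit with scipy's quasi-Monte Carlo module; the toy two-input response below stands in for the SAE test-case responses, which are not reproduced.

        import numpy as np
        from scipy.stats import norm, qmc

        def response(u):
            """Toy response: a smooth function of two standard-normal inputs."""
            x = norm.ppf(u)                       # map uniforms to N(0, 1) inputs
            return x[:, 0] ** 2 + 0.5 * x[:, 1]

        n, reps, rng = 200, 500, np.random.default_rng(2)
        mc_means, lhs_means = [], []
        for _ in range(reps):
            u_mc = rng.random((n, 2))                            # plain Monte Carlo
            u_lhs = qmc.LatinHypercube(d=2, seed=rng).random(n)  # stratified sample
            mc_means.append(response(u_mc).mean())
            lhs_means.append(response(u_lhs).mean())

        # LHS stratification typically shrinks the spread of the mean estimate.
        print("MC  std of mean:", np.std(mc_means))
        print("LHS std of mean:", np.std(lhs_means))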

  16. Online clutter estimation using a Gaussian kernel density estimator for target tracking

    Microsoft Academic Search

    X. Chen; R. Tharmarasa; T. Kirubarajan; M. Pelletier

    2011-01-01

    In this paper, based on non-homogeneous Poisson point processes (NHPP), a kernel clutter spatial intensity estimation method is proposed. Here, the clutter spatial intensity estimation problem is decomposed into two parts: (1) estimate the probability distribution of the clutter number per scan; (2) estimate the spatial variation of the clutter intensity in the measurement space. Under the NHPP assumption,

  17. Parameterization of density estimation in full waveform well-to-well tomography

    NASA Astrophysics Data System (ADS)

    Teranishi, K.; Mikada, H.; Goto, T. N.; Takekawa, J.

    2014-12-01

    Seismic full-waveform inversion (FWI) is a method for estimating mainly the velocity structure of the subsurface. As wave propagation is influenced by the elastic parameters Vp, Vs, and density, it is necessary to include these parameters in the modeling and in the inversion (Virieux and Operto, 2009). On the other hand, multi-parameter full-waveform inversion is a challenging problem because the parameters are coupled with each other, and the coupling effects prevent the appropriate estimation of the elastic parameters. Estimating density is especially difficult because inverting for several elastic parameters, including density, increases the dimension of the solution space, so that any minimization could be trapped in local minima. Therefore, density is usually estimated using an empirical formula such as Gardner's relationship (Gardner et al., 1974) or is fixed to a constant value. Almost all elastic FWI studies have neglected the inversion of the density parameter because of this difficulty. Since density appears directly in the elastic wave equation, it is necessary to see whether its value can be estimated exactly. Moreover, Gardner's relationship is an empirical equation and does not always capture the exact relation between Vp and density, for example in media such as salt domes; pre-salt exploration conducted in recent decades could accordingly be affected. The objective of this study is to investigate the feasibility of estimating the density structure when inverting together with the other elastic parameters, and to see whether density is separable from them. We perform 2D numerical simulations to identify the most important factor in the inversion of density structure as well as Vp and Vs. We first apply a P-S separation scheme to obtain P and S wavefields, then apply our waveform inversion scheme to estimate Vp and density distributions simultaneously; we similarly estimate Vs and density distributions. We show the effect of inversion strategies with different parameterizations on the estimation of density structures.

  18. A comparison of 2 techniques for estimating deer density

    USGS Publications Warehouse

    Storm, G.L.; Cottam, D.F.; Yahner, R.H.; Nichols, J.D.

    1977-01-01

    We applied mark-resight and area-conversion methods to estimate deer abundance at a 2,862-ha area in and surrounding the Gettysburg National Military Park and Eisenhower National Historic Site during 1987-1991. One observer in each of 11 compartments counted marked and unmarked deer during 65-75 minutes at dusk during 3 counts in each of April and November. Use of radio-collars and vinyl collars provided a complete inventory of marked deer in the population prior to the counts. We sighted 54% of the marked deer during April 1987 and 1988, and 43% of the marked deer during November 1987 and 1988. Mean number of deer counted increased from 427 in April 1987 to 582 in April 1991, and increased from 467 in November 1987 to 662 in November 1990. Herd size during April, based on the mark-resight method, increased from approximately 700-1,400 from 1987-1991, whereas the estimates for November indicated an increase from 983 for 1987 to 1,592 for 1990. Given the large proportion of open area and the extensive road system throughout the study area, we concluded that the sighting probability for marked and unmarked deer was fairly similar. We believe that the mark-resight method was better suited to our study than the area-conversion method because deer were not evenly distributed between areas suitable and unsuitable for sighting within open and forested areas. The assumption of equal distribution is required by the area-conversion method. Deer marked for the mark-resight method also helped reduce double counting during the dusk surveys.

  19. Density estimation of Yangtze finless porpoises using passive acoustic sensors and automated click train detection.

    PubMed

    Kimura, Satoko; Akamatsu, Tomonari; Li, Songhai; Dong, Shouyue; Dong, Lijun; Wang, Kexiong; Wang, Ding; Arai, Nobuaki

    2010-09-01

    A method is presented to estimate the density of finless porpoises using stationed passive acoustic monitoring. The number of click trains detected by stereo acoustic data loggers (A-tags) was converted to an estimate of porpoise density. First, an automated off-line filter was developed to detect click trains among noise, and the detection and false-alarm rates were calculated. Second, a density estimation model was proposed. The cue-production rate was measured by biologging experiments. The probability of detecting a cue and the size of the monitored area were calculated from the source level, beam patterns, and a sound-propagation model. The effect of group size on the cue-detection rate was examined. Third, the proposed model was applied to estimate the density of finless porpoises at four locations from the Yangtze River to the inside of Poyang Lake. The estimated mean daily density of porpoises decreased from the main stream to the lake. Long-term monitoring over 466 days from June 2007 to May 2009 showed densities varying from 0 to 4.79 porpoises/km2; however, the density was below 1 porpoise/km2 during 94% of the period. These results suggest a potential gap and seasonal migration of the population at the bottleneck of Poyang Lake. PMID:20815477
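
    The conversion from detected click trains to animal density follows standard cue-counting logic: correct the cue count for false detections, then divide by monitored area, detection probability, monitoring time, and cue rate. Every number below is invented; the paper's detection radius, cue rate, and false-alarm rate differ.

        import math

        n_cues = 480.0     # click trains detected over the monitoring period
        c_false = 0.10     # estimated false-alarm proportion (off-line filter)
        P_det = 0.35       # mean probability of detecting a cue within radius w
        w = 0.3            # effective detection radius (km)
        T_hours = 24.0     # monitoring duration
        cue_rate = 60.0    # cues per animal per hour (from biologging)

        area = math.pi * w ** 2                        # monitored area (km^2)
        density = n_cues * (1 - c_false) / (area * P_det * T_hours * cue_rate)
        print(round(density, 2), "porpoises per km^2")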

  20. Density meter algorithm and system for estimating sampling/mixing uncertainty

    SciTech Connect

    Shine, E P

    1986-01-01

    The Laboratories Department at the Savannah River Plant (SRP) has installed a six-place density meter with an automatic sampling device. This paper describes the statistical software developed to analyze the density of uranyl nitrate solutions using this automated system. The purpose of this software is twofold: to estimate the sampling/mixing and measurement uncertainties in the process and to provide a measurement control program for the density meter. Non-uniformities in density are analyzed both analytically and graphically. The mean density and its limit of error are estimated. Quality control standards are analyzed concurrently with process samples and used to control the density meter measurement error. The analyses are corrected for concentration due to evaporation of samples waiting to be analyzed. The results of this program have been successful in identifying sampling/mixing problems and controlling the quality of analyses.

  2. IMPROVING ESTIMATES OF BIRD DENSITY USING MULTIPLE-COVARIATE DISTANCE SAMPLING

    E-print Network

    Buckland, Steve

    … detectability and therefore density. In the standard method, we model the probability of detecting a bird … line-transect survey of Hawaii Amakihi (Hemignathus virens). Received 2 June 2006, accepted 2 November 2006. Key words

  3. A PATIENT-SPECIFIC CORONARY DENSITY ESTIMATE R. Shahzad 1,2

    E-print Network

    van Vliet, Lucas J.

    A reliable density estimate for the position of the coronary arteries in Computed Tomography (CT) data … in CT and CT angiography (CTA). The proposed method constructs a patient-specific coronary density estimate from a non-enhanced native CT scan and a high-resolution contrast-enhanced CTA scan. The native scan is used for calcium …

  4. The energy density of jellyfish: Estimates from bomb-calorimetry and proximate-composition

    E-print Network

    Hays, Graeme

    … scyphozoan jellyfish (Cyanea capillata, Rhizostoma octopus and Chrysaora hysoscella). First, bomb-calorimetry … These proximate data were subsequently converted to energy densities. The two techniques (bomb-calorimetry …

  5. A Branch and Bound Algorithm for Finding the Modes in Kernel Density Estimates

    Microsoft Academic Search

    Oliver Wirjadi; Thomas M. Breuel

    2009-01-01

    Kernel density estimators are established tools in non-parametric statistics. Due to their flexibility and ease of use, these methods are popular in computer vision and pattern recognition for tasks such as object tracking in video or image segmentation. The most frequently used algorithm for finding the modes in such densities (the mean shift) is a gradient ascent rule, which
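
    For context, the mean-shift rule that the paper's branch and bound method replaces is only a few lines: gradient ascent on a Gaussian KDE by iterated kernel-weighted averaging. The data and bandwidth below are invented, and the branch and bound search itself is not reproduced.

        import numpy as np

        rng = np.random.default_rng(3)
        data = np.concatenate([rng.normal(-2, 0.5, 150), rng.normal(1.5, 0.7, 150)])
        h = 0.3                                      # kernel bandwidth (assumed)

        def mean_shift(x, data, h, iters=100):
            """Move x uphill on the KDE by repeated kernel-weighted averaging."""
            for _ in range(iters):
                w = np.exp(-0.5 * ((data - x) / h) ** 2)
                x = np.sum(w * data) / np.sum(w)     # weighted mean = one ascent step
            return x

        # Different starting points converge to the nearest of the two modes.
        print(mean_shift(-3.0, data, h), mean_shift(2.0, data, h))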

  6. Early estimation of defect density using an in-process Haskell metrics model

    Microsoft Academic Search

    Mark Sherriff; Nachiappan Nagappan; Laurie Williams; Mladen Vouk

    2005-01-01

    Early estimation of defect density of a product is an important step towards the remediation of the problem associated with affordably guiding corrective actions in the software development process. This paper presents a suite of in-process metrics that leverages the software testing effort to create a defect density prediction model for use throughout the software development process. A case study

  7. Item Response Theory with Estimation of the Latent Density Using Davidian Curves

    ERIC Educational Resources Information Center

    Woods, Carol M.; Lin, Nan

    2009-01-01

    Davidian-curve item response theory (DC-IRT) is introduced, evaluated with simulations, and illustrated using data from the Schedule for Nonadaptive and Adaptive Personality Entitlement scale. DC-IRT is a method for fitting unidimensional IRT models with maximum marginal likelihood estimation, in which the latent density is estimated,…

  8. A bound for the smoothing parameter in certain well-known nonparametric density estimators

    NASA Technical Reports Server (NTRS)

    Terrell, G. R.

    1980-01-01

    Two classes of nonparametric density estimators, the histogram and the kernel estimator, both require a choice of smoothing parameter, or 'window width'. The optimum choice of this parameter is in general very difficult. An upper bound to the choices that depends only on the standard deviation of the distribution is described.
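
    Bounds of this kind are easy to apply: they depend only on the sample standard deviation. The constants below are the oversmoothing values quoted in the later Terrell-Scott literature and may differ from the report's exact bounds.

        import numpy as np

        x = np.random.default_rng(4).normal(size=500)
        sigma, n = x.std(ddof=1), x.size

        h_hist = 3.729 * sigma * n ** (-1 / 3)   # upper bound, histogram bin width
        h_kern = 1.144 * sigma * n ** (-1 / 5)   # upper bound, Gaussian-kernel bandwidth
        print("bin width <=", h_hist, "; bandwidth <=", h_kern)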

  9. Body Density Estimates from Upper-Body Skinfold Thicknesses Compared to Air-Displacement Plethysmography

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Technical Summary Objectives: Determine the effect of body mass index (BMI) on the accuracy of body density (Db) estimated with skinfold thickness (SFT) measurements compared to air displacement plethysmography (ADP) in adults. Subjects/Methods: We estimated Db with SFT and ADP in 131 healthy men an...

  10. Estimations of bulk geometrically necessary dislocation density using high resolution EBSD.

    PubMed

    Ruggles, T J; Fullwood, D T

    2013-10-01

    Characterizing the content of geometrically necessary dislocations (GNDs) in crystalline materials is crucial to understanding plasticity. Electron backscatter diffraction (EBSD) effectively recovers local crystal orientation, which is used to estimate the lattice distortion, components of the Nye dislocation density tensor (α), and subsequently the local bulk GND density of a material. This paper presents a complementary estimate of bulk GND density using measurements of local lattice curvature and strain gradients from more recent high resolution EBSD (HR-EBSD) methods. A continuum adaptation of classical equations for the distortion around a dislocation is developed and used to simulate random GND fields to validate the various available approximations of GND content. PMID:23751207

  11. Sensitivity of fish density estimates to standard analytical procedures applied to Great Lakes hydroacoustic data

    USGS Publications Warehouse

    Kocovsky, Patrick M.; Rudstam, Lars G.; Yule, Daniel L.; Warner, David M.; Schaner, Ted; Pientka, Bernie; Deller, John W.; Waterfield, Holly A.; Witzel, Larry D.; Sullivan, Patrick J.

    2013-01-01

    Standardized methods of data collection and analysis ensure quality and facilitate comparisons among systems. We evaluated the importance of three recommendations from the Standard Operating Procedure for hydroacoustics in the Laurentian Great Lakes (GLSOP) on density estimates of target species: noise subtraction; setting volume backscattering strength (Sv) thresholds from user-defined minimum target strength (TS) of interest (TS-based Sv threshold); and calculations of an index for multiple targets (Nv index) to identify and remove biased TS values. Eliminating noise had the predictable effect of decreasing density estimates in most lakes. Using the TS-based Sv threshold decreased fish densities in the middle and lower layers in the deepest lakes with abundant invertebrates (e.g., Mysis diluviana). Correcting for biased in situ TS increased measured density up to 86% in the shallower lakes, which had the highest fish densities. The current recommendations by the GLSOP significantly influence acoustic density estimates, but the degree of importance is lake dependent. Applying GLSOP recommendations, whether in the Laurentian Great Lakes or elsewhere, will improve our ability to compare results among lakes. We recommend further development of standards, including minimum TS and analytical cell size, for reducing the effect of biased in situ TS on density estimates.

  12. Density estimation of Bemisia tabaci (Hemiptera: Aleyrodidae) in a greenhouse using sticky traps in conjunction with an image processing system

    Microsoft Academic Search

    Mu Qiao; Jaehong Lim; Chang Woo Ji; Bu-Keun Chung; Hwang-Yong Kim; Ki-Baik Uhm; Cheol Soo Myung; Jongman Cho; Tae-Soo Chon

    2008-01-01

    Accurate forecasting of pest density is essential for effective pest management. In this study, a simple image processing system that automatically estimated the density of whiteflies on sticky traps was developed. The estimated densities of samples in a laboratory and a greenhouse were in accordance with the actual values. The detection system was especially efficient when the whitefly densities were

  13. Sea ice density estimation in the Bohai Sea using the hyperspectral remote sensing technology

    NASA Astrophysics Data System (ADS)

    Liu, Chengyu; Shao, Honglan; Xie, Feng; Wang, Jianyu

    2014-11-01

    Sea ice density is one of the significant physical properties of sea ice and one of the input parameters in the estimation of engineering mechanical strength and aerodynamic drag coefficients; it is also an important indicator of ice age. The sea ice in the Bohai Sea is a solid, liquid, and gas-phase mixture composed of pure ice, brine pockets, and bubbles, and its density is mainly affected by the amounts of brine pockets and bubbles: the more brine pockets it contains, the greater the sea ice density; the more bubbles, the smaller the density. The reflectance spectrum in 350-2500 nm and the density of sea ice of different thicknesses and ages were measured in the Liaodong Bay of the Bohai Sea during the glacial maximum in the winter of 2012-2013. From the measured densities and reflectance spectra, the characteristic bands that reflect sea ice density variation were found, and a sea ice density spectrum index (SIDSI) for the Bohai Sea was constructed. An inversion model for the density of Bohai Sea ice, referring to the layer from the surface down to the depth of light penetration, was then proposed. The sea ice density in the Bohai Sea was estimated with the proposed model from a Hyperion image, which is a hyperspectral image. The results show that the error of the sea ice density inversion model is about 0.0004 g/cm3. Sea ice density can thus be estimated from hyperspectral remote sensing images, which provides data support for related marine science research and applications.

  14. Radio Science,Volume31, Number 1, Pages51-65, January-February1996 Wavelet-based methods for the nonlinear inverse

    E-print Network

    Willsky, Alan S.

    In this paper, we present an approach to the nonlinear inverse scattering problem using the extended Born approximation (EBA). The EBA provides a simple functional relationship between …

  15. Estimating bulk density of compacted grains in storage bins and modifications of Janssen's load equations as affected by bulk density.

    PubMed

    Haque, Ekramul

    2013-03-01

    Janssen created a classical theory based on calculus to estimate static vertical and horizontal pressures within beds of bulk corn. Even today, his equations are widely used to calculate static loadings imposed by granular materials stored in bins. Many standards such as American Concrete Institute (ACI) 313, American Society of Agricultural and Biological Engineers EP 433, German DIN 1055, Canadian Farm Building Code (CFBC), European Code (ENV 1991-4), and Australian Code AS 3774 incorporated Janssen's equations as the standards for static load calculations on bins. One of the main drawbacks of Janssen's equations is the assumption that the bulk density of the stored product remains constant throughout the entire bin. While for all practical purposes, this is true for small bins; in modern commercial-size bins, bulk density of grains substantially increases due to compressive and hoop stresses. Over pressure factors are applied to Janssen loadings to satisfy practical situations such as dynamic loads due to bin filling and emptying, but there are limited theoretical methods available that include the effects of increased bulk density on the loadings of grain transmitted to the storage structures. This article develops a mathematical equation relating the specific weight as a function of location and other variables of materials and storage. It was found that the bulk density of stored granular materials increased with the depth according to a mathematical equation relating the two variables, and applying this bulk-density function, Janssen's equations for vertical and horizontal pressures were modified as presented in this article. The validity of this specific weight function was tested by using the principles of mathematics. As expected, calculations of loads based on the modified equations were consistently higher than the Janssen loadings based on noncompacted bulk densities for all grain depths and types accounting for the effects of increased bulk densities with the bed heights. PMID:24804024
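
    The modification described above amounts to integrating Janssen's vertical pressure balance, dPv/dz = gamma(z) - (mu*k/R)*Pv, with a depth-dependent specific weight. The sketch below does this numerically under a hypothetical linear compaction law; the wall friction, pressure ratio, bin radius, and density law are all assumptions, not the article's fitted values.

        # Forward-Euler integration of Janssen's ODE with depth-varying bulk density.
        mu, k = 0.36, 0.5        # wall friction and lateral pressure ratio (assumed)
        R = 2.0                  # hydraulic radius of the bin (m), assumed
        gamma0 = 7500.0          # loose bulk specific weight (N/m^3), assumed

        def gamma(z):
            return gamma0 * (1 + 0.01 * z)   # hypothetical compaction with depth

        z, dz, pv = 0.0, 0.01, 0.0
        while z < 30.0:                      # 30-m-deep bin
            pv += dz * (gamma(z) - mu * k / R * pv)
            z += dz
        print("Pv at bottom: %.0f Pa, Ph: %.0f Pa" % (pv, k * pv))
        # With gamma constant this reproduces Janssen's closed form
        # Pv = gamma*R/(mu*k) * (1 - exp(-mu*k*z/R)); depth-varying gamma raises it.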

  16. Estimating detection and density of the Andean cat in the high Andes

    USGS Publications Warehouse

    Reppucci, J.; Gardner, B.; Lucherini, M.

    2011-01-01

    The Andean cat (Leopardus jacobita) is one of the most endangered, yet least known, felids. Although the Andean cat is considered at risk of extinction, rigorous quantitative population studies are lacking. Because physical observations of the Andean cat are difficult to make in the wild, we used a camera-trapping array to photo-capture individuals. The survey was conducted in northwestern Argentina at an elevation of approximately 4,200 m during October-December 2006 and April-June 2007. In each year we deployed 22 pairs of camera traps, which were strategically placed. To estimate detection probability and density we applied models for spatial capture-recapture using a Bayesian framework. Estimated densities were 0.07 and 0.12 individual/km2 for 2006 and 2007, respectively. Mean baseline detection probability was estimated at 0.07. By comparison, densities of the Pampas cat (Leopardus colocolo), another poorly known felid that shares its habitat with the Andean cat, were estimated at 0.74-0.79 individual/km2 in the same study area for 2006 and 2007, and its detection probability was estimated at 0.02. Despite having greater detectability, the Andean cat is rarer in the study region than the Pampas cat. Properly accounting for the detection probability is important in making reliable estimates of density, a key parameter in conservation and management decisions for any species. © 2011 American Society of Mammalogists.

  18. A New Wavelet-based image denoising using undecimated discrete wavelet transform and least squares support vector machine

    Microsoft Academic Search

    Xiang-Yang Wang; Hong-Ying Yang; Zhong-Kai Fu

    2010-01-01

    Image denoising is an important image processing task, both in its own right and as a preprocessing step in image processing pipelines. The least squares support vector machine (LS-SVM) has been shown to exhibit excellent classification performance in many applications. Based on the undecimated discrete wavelet transform, a new wavelet-based image denoising method using LS-SVM is proposed in this paper. Firstly, the noisy image is decomposed
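
    A hedged sketch of the transform stage only, with plain soft thresholding standing in for the paper's LS-SVM coefficient classifier so the example stays self-contained. PyWavelets is assumed, and the image side lengths must be divisible by 2**level.

        import numpy as np
        import pywt

        def swt_soft_denoise(img, wavelet="db4", level=2):
            # Undecimated (stationary) 2-D wavelet transform, as in the paper.
            coeffs = pywt.swt2(img, wavelet, level=level)
            # Robust noise estimate from the finest diagonal detail band.
            sigma = np.median(np.abs(coeffs[-1][1][2])) / 0.6745
            thr = sigma * np.sqrt(2.0 * np.log(img.size))  # universal threshold
            den = [(a, tuple(pywt.threshold(d, thr, mode="soft") for d in details))
                   for a, details in coeffs]
            return pywt.iswt2(den, wavelet)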

  19. On the design and multiplierless realization of perfect reconstruction triplet-based FIR filter banks and wavelet bases

    Microsoft Academic Search

    S. C. Chan; K. S. Yeung

    2004-01-01

    This paper proposes new methods for the efficient design and realization of perfect reconstruction (PR) two-channel finite-impulse response (FIR) triplet filter banks (FBs) and wavelet bases. It extends the linear-phase FIR triplet FBs of Ansari et al. to include FIR triplet FBs with lower system delay and a prescribed order of K regularity. The design problem using either the minimax

  20. Construction of compactly supported biorthogonal wavelet based on Human Visual System

    NASA Astrophysics Data System (ADS)

    Hu, Haiping; Hou, Weidong; Liu, Hong; Mo, Yu L.

    2000-11-01

    As an important analysis tool, the wavelet transform has seen great development in image compression coding since Daubechies constructed a family of compactly supported orthogonal wavelets and Mallat presented a fast pyramid algorithm for wavelet decomposition and reconstruction. In order to raise the compression ratio and improve the visual quality of the reconstruction, it is very important to find a wavelet basis that fits the human visual system (HVS). The Marr wavelet is known to match the HVS well, but it is not compactly supported, so it is not suitable for practical image compression coding. In this paper, a new method is provided to construct a compactly supported biorthogonal wavelet based on the human visual system: we employ a genetic algorithm to construct compactly supported biorthogonal wavelets that approximate the modulation transfer function of the HVS. The newly constructed wavelet is applied to image compression coding in our experiments. The experimental results indicate that the visual quality of the reconstruction with the new wavelet is equivalent to that of other compactly supported biorthogonal wavelets at the same bit rate. It performs well in reconstruction, especially in texture image compression coding.

  1. Matrix-free application of Hamiltonian operators in Coifman wavelet bases

    NASA Astrophysics Data System (ADS)

    Acevedo, Ramiro; Lombardini, Richard; Johnson, Bruce R.

    2010-06-01

    A means of evaluating the action of Hamiltonian operators on functions expanded in orthogonal compact support wavelet bases is developed, avoiding the direct construction and storage of operator matrices that complicate extension to coupled multidimensional quantum applications. Application of a potential energy operator is accomplished by simple multiplication of the two sets of expansion coefficients without any convolution. The errors of this coefficient product approximation are quantified and lead to use of particular generalized coiflet bases, derived here, that maximize the number of moment conditions satisfied by the scaling function. This is at the expense of the number of vanishing moments of the wavelet function (approximation order), which appears to be a disadvantage but is shown to be surmountable. In particular, application of the kinetic energy operator, which is accomplished through the use of one-dimensional (1D) [or at most two-dimensional (2D)] differentiation filters, then degrades in accuracy if the standard choice is made. However, it is determined that use of high-order finite-difference filters yields strongly reduced absolute errors. Eigensolvers that ordinarily use only matrix-vector multiplications, such as the Lanczos algorithm, can then be used with this more efficient procedure. Applications are made to anharmonic vibrational problems: a 1D Morse oscillator, a 2D model of proton transfer, and three-dimensional vibrations of nitrosyl chloride on a global potential energy surface.

  2. Wave propagation analysis in carbon nanotube embedded composite using wavelet based spectral finite elements

    NASA Astrophysics Data System (ADS)

    Mitra, Mira; Gopalakrishnan, S.

    2006-02-01

    In this paper, elastic wave propagation is studied in a nanocomposite reinforced with multiwall carbon nanotubes (CNTs). Analysis is performed on a representative volume element of square cross section. The frequency content of the exciting signal is at the terahertz level. Here, the composite is modeled as a higher order shear deformable beam using layerwise theory, to account for partial shear stress transfer between the CNTs and the matrix. The walls of the multiwall CNTs are considered to be connected throughout their length by distributed springs, whose stiffness is governed by the van der Waals force acting between the walls of nanotubes. The analyses in both the frequency and time domains are done using the wavelet-based spectral finite element method (WSFEM). The method uses the Daubechies wavelet basis approximation in time to reduce the governing PDE to a set of ODEs. These transformed ODEs are solved using a finite element (FE) technique by deriving an exact interpolating function in the transformed domain to obtain the exact dynamic stiffness matrix. Numerical analyses are performed to study the spectrum and dispersion relations for different matrix materials and also for different beam models. The effects of partial shear stress transfer between CNTs and matrix on the frequency response function (FRF) and the time response due to broadband impulse loading are investigated for different matrix materials. The simultaneous existence of four coupled propagating modes in a double-walled CNT-composite is also captured using modulated sinusoidal excitation.

  3. Wavelet-based double-difference seismic tomography with sparsity regularization

    NASA Astrophysics Data System (ADS)

    Fang, Hongjian; Zhang, Haijiang

    2014-11-01

    We have developed a wavelet-based double-difference (DD) seismic tomography method. Instead of solving for the velocity model itself, the new method inverts for its wavelet coefficients in the wavelet domain. This method takes advantage of the multiscale property of the wavelet representation and solves the model at different scales. A sparsity constraint is applied to the inversion system to make the set of wavelet coefficients of the velocity model sparse. This considers the fact that the background velocity variation is generally smooth and the inversion proceeds in a multiscale way with larger scale features resolved first and finer scale features resolved later, which naturally leads to the sparsity of the wavelet coefficients of the model. The method is both data- and model-adaptive because wavelet coefficients are non-zero in the regions where the model changes abruptly when they are well sampled by ray paths and the model is resolved from coarser to finer scales. An iteratively reweighted least squares procedure is adopted to solve the inversion system with the sparsity regularization. A synthetic test for an idealized fault zone model shows that the new method can better resolve the discontinuous boundaries of the fault zone and the velocity values are also better recovered compared to the original DD tomography method that uses the first-order Tikhonov regularization.
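
    The reweighting step can be sketched generically for a linear system G m = d with an L1 penalty on the wavelet-domain model m. This toy dense-matrix version only illustrates the iteratively reweighted least squares idea; the actual tomography operates on large sparse systems built from DD traveltime data.

        import numpy as np

        def irls_l1(G, d, lam, n_iter=20, eps=1e-6):
            # Solve min ||G m - d||^2 + lam*||m||_1 by replacing |m_i| with
            # m_i^2 / |m_i^(k)| and re-solving the resulting weighted system.
            m = np.linalg.lstsq(G, d, rcond=None)[0]
            for _ in range(n_iter):
                W = np.diag(0.5 * lam / (np.abs(m) + eps))
                m = np.linalg.solve(G.T @ G + W, G.T @ d)
            return m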

  4. Adaptive Audio Watermarking via the Optimization Point of View on the Wavelet-Based Entropy

    E-print Network

    Chen, Shuo-Tsung; Chen, Chur-Jen

    2011-01-01

    This study presents an adaptive audio watermarking method based on ideas from wavelet-based entropy (WBE). The method converts the low-frequency coefficients of the discrete wavelet transform (DWT) into the WBE domain, followed by calculation of the mean WBE value of each audio signal and derivation of some essential properties of the WBE. A characteristic curve relating the WBE to the DWT coefficients is also presented. The embedding process is founded on the approximately invariant property demonstrated by the mean of each audio signal and on the characteristic curve. In addition, the quality of the watermarked audio is optimized. In the detecting process, the watermark can be extracted using only the values of the WBE. Finally, the performance of the proposed watermarking method is analyzed in terms of signal-to-noise ratio, mean opinion score, and robustness. Experimental results confirm that the embedded data are robust against common attacks such as re-sampling, MP3 compression, low-pass filtering, and amplitude scaling.
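
    A hedged reading of the wavelet-based entropy itself: the Shannon entropy of the normalized energies of the DWT approximation (low-frequency) coefficients. The paper's exact segmentation, characteristic curve, and embedding rules are not reproduced here.

        import numpy as np
        import pywt

        def wavelet_based_entropy(signal, wavelet="db4", level=4):
            # Energy distribution of the approximation-band DWT coefficients.
            approx = pywt.wavedec(signal, wavelet, level=level)[0]
            e = approx**2 / np.sum(approx**2)
            e = e[e > 0]
            return -np.sum(e * np.log2(e))  # entropy in bits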

  5. Wavelet Based ECG Steganography for Protecting Patient Confidential Information in Point-of-Care Systems.

    PubMed

    Ibaida, Ayman; Khalil, Ibrahim

    2013-05-21

    With a growing aging population, a significant portion of which suffers from cardiac disease, remote ECG patient monitoring systems are expected to be widely used as Point-of-Care (PoC) applications in hospitals around the world. Huge amounts of ECG data collected by Body Sensor Networks (BSNs) from remote patients at home will therefore be transmitted, along with other physiological readings such as blood pressure, temperature, and glucose level, and diagnosed by those remote patient monitoring systems. It is critically important that patient confidentiality be protected while data are transmitted over the public network as well as when they are stored in the hospital servers used by remote monitoring systems. In this paper, a wavelet-based steganography technique is introduced that combines encryption and scrambling techniques to protect patient confidential data. The proposed method allows an ECG signal to hide the corresponding patient's confidential data and other physiological information, thus keeping the ECG and the rest of the readings integrated. To evaluate the effect of the proposed technique on the ECG signal, two distortion measurement metrics are used: the Percentage Residual Difference (PRD) and the Wavelet Weighted PRD (WWPRD). The proposed technique is found to provide strong protection for patient data with low (less than 1%) distortion, and the ECG remains diagnosable both after watermarking (i.e., hiding the patient's confidential data) and after the watermark (i.e., the hidden data) is removed from the watermarked signal. PMID:23708767
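
    A toy wavelet-domain hiding step, assuming LSB substitution in scaled detail coefficients. The paper layers encryption and scrambling on top of its embedding, and practical schemes use integer-to-integer lifting transforms so that extraction is bit-exact; this floating-point version is illustrative only.

        import numpy as np
        import pywt

        def embed_bits(ecg, bits, wavelet="db4", level=5, scale=2**10):
            coeffs = pywt.wavedec(ecg, wavelet, level=level)
            d = np.round(coeffs[-1] * scale).astype(np.int64)  # finest detail band
            d[:len(bits)] = (d[:len(bits)] & ~1) | np.asarray(bits, dtype=np.int64)
            coeffs[-1] = d.astype(float) / scale
            return pywt.waverec(coeffs, wavelet)  # watermarked (stego) ECG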

  6. Incipient interturn fault diagnosis in induction machines using an analytic wavelet-based optimized Bayesian inference.

    PubMed

    Seshadrinath, Jeevanand; Singh, Bhim; Panigrahi, Bijaya Ketan

    2014-05-01

    Interturn fault diagnosis of induction machines has been discussed using various neural network-based techniques. The main challenge in such methods is the computational complexity due to the huge size of the network, and in pruning a large number of parameters. In this paper, a nearly shift-insensitive complex wavelet-based probabilistic neural network (PNN) model, which has only a single parameter to be optimized, is proposed for interturn fault detection. The algorithm consists of two parts and runs in an iterative way. In the first part, the PNN structure determination is discussed, which finds the optimum size of the network using an orthogonal least squares regression algorithm, thereby reducing its size. In the second part, a Bayesian classifier fusion is recommended as an effective solution for deciding the machine condition. The testing accuracy, sensitivity, and specificity values, obtained under load, supply, and frequency variations, are highest for the product rule-based fusion scheme. The point of overfitting of the PNN is determined, which reduces the size without compromising the performance. Moreover, a comparative evaluation with a traditional discrete wavelet transform-based method is presented for performance evaluation and to place the obtained results in context. PMID:24808044

  7. A new approach to pre-processing digital image for wavelet-based watermark

    NASA Astrophysics Data System (ADS)

    Agreste, Santa; Andaloro, Guido

    2008-11-01

    The growth of the Internet has increased the phenomenon of digital piracy of multimedia objects such as software, images, video, audio, and text. It is therefore strategic to develop methods and numerical algorithms that are stable, have low computational cost, and allow these problems to be addressed. We describe a digital watermarking algorithm for color image protection and authenticity: robust, non-blind, and wavelet-based. The use of the Discrete Wavelet Transform is motivated by its good time-frequency features and good match with Human Visual System directives. These two combined elements are important for building an invisible and robust watermark. Moreover, our algorithm can work with any image, thanks to a pre-processing step that includes resize techniques adapting the size of the original image for the wavelet transform. The watermark signal is calculated in correlation with the image features and statistical properties. In the detection step we apply a re-synchronization between the original and watermarked image according to the Neyman-Pearson statistical criterion. Experiments on a large set of different images show resistance against geometric, filtering, and StirMark attacks, with a low false-alarm rate.

  8. Wavelet-based multifractal analysis of dynamic infrared thermograms to assist in early breast cancer diagnosis

    PubMed Central

    Gerasimova, Evgeniya; Audit, Benjamin; Roux, Stephane G.; Khalil, André; Gileva, Olga; Argoul, Françoise; Naimark, Oleg; Arneodo, Alain

    2014-01-01

    Breast cancer is the most common type of cancer among women and despite recent advances in the medical field, there are still some inherent limitations in the currently used screening techniques. The radiological interpretation of screening X-ray mammograms often leads to over-diagnosis and, as a consequence, to unnecessary traumatic and painful biopsies. Here we propose a computer-aided multifractal analysis of dynamic infrared (IR) imaging as an efficient method for identifying women with risk of breast cancer. Using a wavelet-based multi-scale method to analyze the temporal fluctuations of breast skin temperature collected from a panel of patients with diagnosed breast cancer and some female volunteers with healthy breasts, we show that the multifractal complexity of temperature fluctuations observed in healthy breasts is lost in mammary glands with malignant tumor. Besides potential clinical impact, these results open new perspectives in the investigation of physiological changes that may precede anatomical alterations in breast cancer development. PMID:24860510

  9. Online Epileptic Seizure Prediction Using Wavelet-Based Bi-Phase Correlation of Electrical Signals Tomography.

    PubMed

    Vahabi, Zahra; Amirfattahi, Rasoul; Shayegh, Farzaneh; Ghassemi, Fahimeh

    2015-09-01

    Considerable efforts have been made to predict seizures. Among these methods, the ones that quantify synchronization between brain areas are the most important. However, to date, a practically acceptable result has not been reported. In this paper, we use a synchronization measurement method that is derived from the ability of the bi-spectrum to determine the nonlinear properties of a system. In this method, the temporal variation of the bi-spectrum of different channels of electrocorticography (ECoG) signals is first obtained via an extended wavelet-based time-frequency analysis method; then, to compare different channels, the bi-phase correlation measure is introduced. Because the temporal variation of the amount of nonlinear coupling between brain regions, which had not previously been considered, is taken into account, the results are more reliable than conventional phase-synchronization measures. It is shown that, for the 21 patients of the FSPEEG database, bi-phase correlation can discriminate the pre-ictal and ictal states with a very low false positive rate (FPR) (average: 0.078/h) and high sensitivity (100%). However, the proposed seizure predictor still cannot significantly outperform a random predictor for all patients. PMID:26126613

  10. REVISION OF RELATIVE DENSITY AND ESTIMATION OF LIQUEFACTION STRENGTH OF SANDY SOIL WITH FINE CONTENT

    NASA Astrophysics Data System (ADS)

    Nakazawa, Hiroshi; Haradah, Kenji

    It is generally known that the liquefaction strength obtained from undrained cyclic triaxial tests is influenced by various factors such as relative density, fines content, grain size distribution, and plasticity index. However, it is difficult to estimate the liquefaction strength of different soil types from the same physical properties. In order to estimate the liquefaction strength of soil types such as silts, silty sands, and clean sands, this study presents a method to revise the relative density of sandy soil containing more than 15% fines, together with the correlation between the revised relative density and the void ratio range obtained from the maximum and minimum void ratios. The relationships between void ratio ranges and liquefaction strengths reported in other studies were then considered. As a result, the difference in liquefaction strength between reconstituted and undisturbed samples was recognized from correlations with the relative density revised using void ratio ranges and fines content.
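
    For reference, the standard relative density that the study revises for fines content is, in the usual geotechnical notation (our transcription; the paper's revised form is not reproduced here):

        D_r = \frac{e_{max} - e}{e_{max} - e_{min}}

    where e is the in-situ void ratio and e_max, e_min are the maximum and minimum void ratios; the void ratio range e_max - e_min is the quantity the study correlates with liquefaction strength.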

  11. Effects of tissue heterogeneity on the optical estimate of breast density.

    PubMed

    Taroni, Paola; Pifferi, Antonio; Quarto, Giovanna; Spinelli, Lorenzo; Torricelli, Alessandro; Abbate, Francesca; Balestreri, Nicola; Ganino, Serena; Menna, Simona; Cassano, Enrico; Cubeddu, Rinaldo

    2012-10-01

    Breast density is a recognized strong and independent risk factor for developing breast cancer. At present, breast density is assessed based on the radiological appearance of breast tissue, thus relying on the use of ionizing radiation. We have previously obtained encouraging preliminary results with our portable instrument for time domain optical mammography performed at 7 wavelengths (635-1060 nm). In that case, information was averaged over four images (cranio-caudal and oblique views of both breasts) available for each subject. In the present work, we tested the effectiveness of just one or few point measurements, to investigate if tissue heterogeneity significantly affects the correlation between optically derived parameters and mammographic density. Data show that parameters estimated through a single optical measurement correlate strongly with mammographic density estimated by using BIRADS categories. A central position is optimal for the measurement, but its exact location is not critical. PMID:23082283

  12. Estimation of Low-Concentration Magnetic Fluid Density with Gmr Sensor

    NASA Astrophysics Data System (ADS)

    Yamada, S.; Gooneratne, C.; Chomsuwan, K.; Iwahara, M.; Kakikawa, M.

    2008-02-01

    This paper describes a new application of a spin-valve type giant magnetoresistance sensor in the biomedical field. The hyperthermia treatment, based on the hysteresis loss of magnetite under external ac fields, requires determination of the content density of magnetite injected inside the body to control the heat capacity. We propose a low-invasive methodology to estimate the density of magnetite by measuring magnetic fields inside the cavity. For this purpose, we investigated the relationship between the density of magnetite and the magnetic fields, and developed a needle-type magnetic probe with a giant magnetoresistance sensor for low-invasive measurement. The experimental results demonstrate the possibility of estimating the low-concentration density of magnetite injected into the body.

  13. Distributed Noise Generation for Density Estimation Based Clustering without Trusted Third Party

    NASA Astrophysics Data System (ADS)

    Su, Chunhua; Bao, Feng; Zhou, Jianying; Takagi, Tsuyoshi; Sakurai, Kouichi

    The rapid growth of the Internet provides people with tremendous opportunities for data collection, knowledge discovery and cooperative computation. However, it also brings the problem of sensitive information leakage. Both individuals and enterprises may suffer from massive data collection and information retrieval by distrusted parties. In this paper, we propose a privacy-preserving protocol for distributed kernel density estimation-based clustering. Our scheme applies the random data perturbation (RDP) technique and verifiable secret sharing to solve the security problem of the distributed kernel density estimation in [4], which assumed a mediating party to help in the computation.

  14. Kernel estimates for one- and two-dimensional ion channel dwell-time densities.

    PubMed Central

    Rosales, Rafael A; Fitzgerald, William J; Hladky, Stephen B

    2002-01-01

    In this paper, we compare nonparametric kernel estimates with smoothed histograms as methods for displaying logarithmically transformed dwell-time distributions. Kernel density plots provide a simpler means for producing estimates of the probability density function (pdf) and they have the advantage of being smoothed in a well-specified, carefully controlled manner. Smoothing is essential for multidimensional plots because, with realistic amounts of data, the number of counts per bin is small. Examples are presented for a 2-dimensional pdf and its associated dependency-difference plot that display the correlations between successive dwell times. PMID:11751293
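
    A minimal sketch of such a display using SciPy's Gaussian kernel density estimator on log-transformed dwell times; the data are synthetic and the paper's kernel and bandwidth choices may differ.

        import numpy as np
        from scipy.stats import gaussian_kde

        rng = np.random.default_rng(1)
        dwell = rng.exponential(scale=2.0, size=5000)  # synthetic dwell times (s)
        x = np.log10(dwell)                            # log transformation
        grid = np.linspace(x.min() - 0.5, x.max() + 0.5, 400)
        pdf = gaussian_kde(x)(grid)                    # smoothed estimate of the pdf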

  15. New density estimates of a threatened sifaka species (Propithecus coquereli) in Ankarafantsika National Park.

    PubMed

    Kun-Rodrigues, Célia; Salmona, Jordi; Besolo, Aubin; Rasolondraibe, Emmanuel; Rabarivola, Clément; Marques, Tiago A; Chikhi, Lounès

    2014-06-01

    Propithecus coquereli is one of the last sifaka species for which no reliable and extensive density estimates are yet available. Despite its endangered conservation status [IUCN, 2012] and recognition as a flagship species of the northwestern dry forests of Madagascar, its population in its last main refugium, the Ankarafantsika National Park (ANP), is still poorly known. Using line transect distance sampling surveys we estimated population density and abundance in the ANP. Furthermore, we investigated the effects of road, forest edge, river proximity and group size on sighting frequencies, and density estimates. We provide here the first population density estimates throughout the ANP. We found that density varied greatly among surveyed sites (from 5 to ~100 ind/km2), which could result from significant (negative) effects of road and forest edge, and/or a (positive) effect of river proximity. Our results also suggest that the population size may be ~47,000 individuals in the ANP, hinting that the population likely underwent a strong decline in some parts of the Park in recent decades, possibly caused by habitat loss from fires and charcoal production and by poaching. We suggest community-based conservation actions for the largest remaining population of Coquerel's sifaka which will (i) maintain forest connectivity; (ii) implement alternatives to deforestation through charcoal production, logging, and grass fires; (iii) reduce poaching; and (iv) enable long-term monitoring of the population in collaboration with local authorities and researchers. PMID:24443250

  16. Trap Array Configuration Influences Estimates and Precision of Black Bear Density and Abundance

    PubMed Central

    Wilton, Clay M.; Puckett, Emily E.; Beringer, Jeff; Gardner, Beth; Eggert, Lori S.; Belant, Jerrold L.

    2014-01-01

    Spatial capture-recapture (SCR) models have advanced our ability to estimate population density for wide ranging animals by explicitly incorporating individual movement. Though these models are more robust to various spatial sampling designs, few studies have empirically tested different large-scale trap configurations using SCR models. We investigated how extent of trap coverage and trap spacing affects precision and accuracy of SCR parameters, implementing models using the R package secr. We tested two trapping scenarios, one spatially extensive and one intensive, using black bear (Ursus americanus) DNA data from hair snare arrays in south-central Missouri, USA. We also examined the influence that adding a second, lower barbed-wire strand to snares had on quantity and spatial distribution of detections. We simulated trapping data to test bias in density estimates of each configuration under a range of density and detection parameter values. Field data showed that using multiple arrays with intensive snare coverage produced more detections of more individuals than extensive coverage. Consequently, density and detection parameters were more precise for the intensive design. Density was estimated as 1.7 bears per 100 km2 and was 5.5 times greater than that under extensive sampling. Abundance was 279 (95% CI = 193-406) bears in the 16,812 km2 study area. Excluding detections from the lower strand resulted in the loss of 35 detections, 14 unique bears, and the largest recorded movement between snares. All simulations showed low bias for density under both configurations. Results demonstrated that in low density populations with non-uniform distribution of population density, optimizing the tradeoff among snare spacing, coverage, and sample size is of critical importance to estimating parameters with high precision and accuracy. With limited resources, allocating available traps to multiple arrays with intensive trap spacing increased the amount of information available to estimate parameters with high precision. PMID:25350557

  17. Hierarchical models for estimating density from DNA mark-recapture studies

    USGS Publications Warehouse

    Gardner, B.; Royle, J.A.; Wegan, M.T.

    2009-01-01

    Genetic sampling is increasingly used as a tool by wildlife biologists and managers to estimate abundance and density of species. Typically, DNA is used to identify individuals captured in an array of traps ( e. g., baited hair snares) from which individual encounter histories are derived. Standard methods for estimating the size of a closed population can be applied to such data. However, due to the movement of individuals on and off the trapping array during sampling, the area over which individuals are exposed to trapping is unknown, and so obtaining unbiased estimates of density has proved difficult. We propose a hierarchical spatial capture-recapture model which contains explicit models for the spatial point process governing the distribution of individuals and their exposure to (via movement) and detection by traps. Detection probability is modeled as a function of each individual's distance to the trap. We applied this model to a black bear (Ursus americanus) study conducted in 2006 using a hair-snare trap array in the Adirondack region of New York, USA. We estimated the density of bears to be 0.159 bears/km2, which is lower than the estimated density (0.410 bears/km2) based on standard closed population techniques. A Bayesian analysis of the model is fully implemented in the software program WinBUGS.

  18. Estimating food portions. Influence of unit number, meal type and energy density

    PubMed Central

    Almiron-Roig, Eva; Solis-Trapala, Ivonne; Dodd, Jessica; Jebb, Susan A.

    2013-01-01

    Estimating how much is appropriate to consume can be difficult, especially for foods presented in multiple units, those with ambiguous energy content and for snacks. This study tested the hypothesis that the number of units (single vs. multi-unit), meal type and food energy density disrupts accurate estimates of portion size. Thirty-two healthy weight men and women attended the laboratory on 3 separate occasions to assess the number of portions contained in 33 foods or beverages of varying energy density (1.7–26.8 kJ/g). Items included 12 multi-unit and 21 single unit foods; 13 were labelled “meal”, 4 “drink” and 16 “snack”. Departures in portion estimates from reference amounts were analysed with negative binomial regression. Overall participants tended to underestimate the number of portions displayed. Males showed greater errors in estimation than females (p = 0.01). Single unit foods and those labelled as ‘meal’ or ‘beverage’ were estimated with greater error than multi-unit and ‘snack’ foods (p = 0.02 and p < 0.001 respectively). The number of portions of high energy density foods was overestimated while the number of portions of beverages and medium energy density foods were underestimated by 30–46%. In conclusion, participants tended to underestimate the reference portion size for a range of food and beverages, especially single unit foods and foods of low energy density and, unexpectedly, overestimated the reference portion of high energy density items. There is a need for better consumer education of appropriate portion sizes to aid adherence to a healthy diet. PMID:23932948

  19. Autocorrelation-based estimate of particle image density in particle image velocimetry

    NASA Astrophysics Data System (ADS)

    Warner, Scott O.

    In Particle Image Velocimetry (PIV), the number of particle images per interrogation region, or particle image density, impacts the strength of the correlation and, as a result, the number of valid vectors and the measurement uncertainty. Therefore, any a priori estimate of the accuracy and uncertainty of PIV requires knowledge of the particle image density. An autocorrelation-based method for estimating the local, instantaneous particle image density is presented. Synthetic images were used to develop an empirical relationship based on how the autocorrelation peak magnitude varies with particle image density, particle image diameter, illumination intensity, interrogation region size, and background noise. This relationship was then tested using images from two experimental setups with different seeding densities and flow media. The experimental results were compared to image densities obtained using a local maximum method as well as manual particle counts and were found to be robust. The effect of varying particle image intensities was also investigated and is found to affect the particle image density.
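
    The statistic behind such a method can be sketched as the magnitude of the autocorrelation peak of a background-subtracted interrogation window, computed via the FFT; the normalization is our choice, not necessarily the one used to build the empirical relationship.

        import numpy as np

        def autocorr_peak(window):
            # Wiener-Khinchin: the autocorrelation is the inverse FFT of the
            # power spectrum; its peak grows with the number of particle images.
            w = window - window.mean()
            F = np.fft.fft2(w)
            acorr = np.real(np.fft.ifft2(F * np.conj(F)))
            return acorr.max() / w.size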

  20. Breast percent density estimation from 3D reconstructed digital breast tomosynthesis images

    NASA Astrophysics Data System (ADS)

    Bakic, Predrag R.; Kontos, Despina; Carton, Ann-Katherine; Maidment, Andrew D. A.

    2008-03-01

    Breast density is an independent factor of breast cancer risk. In mammograms breast density is quantitatively measured as percent density (PD), the percentage of dense (non-fatty) tissue. To date, clinical estimates of PD have varied significantly, in part due to the projective nature of mammography. Digital breast tomosynthesis (DBT) is a 3D imaging modality in which cross-sectional images are reconstructed from a small number of projections acquired at different x-ray tube angles. Preliminary studies suggest that DBT is superior to mammography in tissue visualization, since superimposed anatomical structures present in mammograms are filtered out. We hypothesize that DBT could also provide a more accurate breast density estimation. In this paper, we propose to estimate PD from reconstructed DBT images using a semi-automated thresholding technique. Preprocessing is performed to exclude the image background and the area of the pectoral muscle. Threshold values are selected manually from a small number of reconstructed slices; a combination of these thresholds is applied to each slice throughout the entire reconstructed DBT volume. The proposed method was validated using images of women with recently detected abnormalities or with biopsy-proven cancers; only contralateral breasts were analyzed. The Pearson correlation and kappa coefficients between the breast density estimates from DBT and the corresponding digital mammogram indicate moderate agreement between the two modalities, comparable with our previous results from 2D DBT projections. Percent density appears to be a robust measure for breast density assessment in both 2D and 3D x-ray breast imaging modalities using thresholding.
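
    The thresholding step reduces to a voxel count. A minimal sketch, with a single global threshold simplifying the per-slice threshold combination described above:

        import numpy as np

        def percent_density(volume, breast_mask, threshold):
            # PD = 100 * (dense voxels) / (all breast voxels), computed after the
            # image background and pectoral muscle have been masked out.
            voxels = volume[breast_mask]
            return 100.0 * np.count_nonzero(voxels >= threshold) / voxels.size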

  1. Server-Side Prediction of Source IP Addresses Using Density Estimation

    Microsoft Academic Search

    Markus Goldstein; Matthias Reif; Armin Stahl; Thomas M. Breuel

    2009-01-01

    Source IP addresses are often used as a major feature for user modeling in computer networks. Particularly in the field of Distributed Denial of Service (DDoS) attack detection and mitigation traffic models make extensive use of source IP addresses for detecting anomalies. Typically the real IP address distribution is strongly undersampled due to a small amount of observations. Density estimation

  2. Density Estimation for Protein Conformation Angles Using a Bivariate von Mises Distribution

    E-print Network

    Vannucci, Marina

    Kristin P. Lennox, David B. Dahl, Marina Vannucci, and Jerry W. Tsai. Interest in predicting protein backbone conformational angles has prompted the development of modeling and inference procedures… for sampling from the posterior predictive distribution. We show how our density estimation method makes

  3. Mixture Kalman Filter Based Highway Congestion Mode and Vehicle Density Estimator and its Application

    E-print Network

    Horowitz, Roberto

    In today's metropolitan areas, highway traffic congestion occurs regularly during rush hours. In addition, it causes inefficient operation of highways, waste of resources, increased air pollution, and intensified

  4. COMBINING BREEDING BIRD SURVEY AND DISTANCE SAMPLING TO ESTIMATE DENSITY OF MIGRANT AND BREEDING BIRDS

    Microsoft Academic Search

    Scott G. Somershoe; Daniel J. Twedt; Bruce Reid

    2006-01-01

    We combined Breeding Bird Survey point count protocol and distance sampling to survey spring migrant and breeding birds in Vicksburg National Military Park on 33 days between March and June of 2003 and 2004. For 26 of 106 detected species, we used program DISTANCE to estimate detection probabilities and densities

  5. Dictionary-based probability density function estimation for high-resolution SAR data

    E-print Network

    Paris-Sud XI, Université de

    Dictionary-based probability density function estimation for high-resolution synthetic aperture radar (SAR) images is addressed. This method is an extension of a previously existing method for lower-resolution images. The proposed dictionary consists of eight state-of-the-art SAR-specific pdfs: Nakagami, log

  6. Estimating cetacean density from passive acoustic arrays Tiago A. Marques and Len Thomas

    E-print Network

    Marques, Tiago A.

    Visual surveys have shortcomings (see box "Visual vs. Acoustic surveys"), and acoustic methods might be a better alternative in some settings. Using towed acoustic arrays (see box "Types of passive acoustic devices") rather than

  7. The minimum description length principle for probability density estimation by regular histograms

    Microsoft Academic Search

    François Chapeau-Blondeau; David Rousseau

    2009-01-01

    The minimum description length principle is a general methodology for statistical modeling and inference that selects the best explanation for observed data as the one allowing the shortest description of them. Application of this principle to the important task of probability density estimation by histograms was previously proposed. We review this approach and provide additional illustrative examples and an application

  8. Sparse probability density function estimation using the minimum integrated square error

    E-print Network

    Chen, Sheng

    The mixture model [1] is a general approach to the probability density function (PDF) estimation problem… requires extensive computation, but for the Gaussian mixture model, the EM algorithm can be derived in an explicit form… to apply resampling techniques [11-14]. In general, the correct number of mixture components is unknown

  9. Early Estimation of Defect Density Using an In-Process Haskell Metrics Model

    E-print Network

    Sherriff, Mark S.

    …of the problem associated with affordably guiding corrective actions in the software development process. This paper presents a suite of in-process metrics that leverages the software testing effort to create

  10. Estimating the effect of Earth elasticity and variable water density on tsunami speeds

    E-print Network

    Tsai, Victor C.

    The speed of tsunami… comparisons of tsunami arrival times from the 11 March 2011 tsunami suggest, however, that the standard

  11. A hybrid approach to crowd density estimation using statistical learning and texture classification

    NASA Astrophysics Data System (ADS)

    Li, Yin; Zhou, Bowen

    2013-12-01

    Crowd density estimation is a hot topic in the computer vision community. Established algorithms for crowd density estimation mainly focus on moving crowds, employing background modeling to obtain crowd blobs. However, people's motion is not obvious in many settings, such as the waiting hall of an airport or the lobby of a railway station. Moreover, conventional algorithms for crowd density estimation cannot yield desirable results for all levels of crowding due to occlusion and clutter. We propose a hybrid method to address these problems. First, statistical learning is introduced for background subtraction, comprising a training phase and a test phase. The crowd images are gridded into small blocks which denote foreground or background. Then HOG features are extracted and fed into a binary SVM for each block. Hence, crowd blobs can be obtained from the classification results of the trained classifier. Second, the crowd images are treated as texture images, so the estimation problem can be formulated as texture classification, and the density level can be derived from the classification results. We validate the proposed algorithm on real scenarios where the crowd motion is not obvious. Experimental results demonstrate that our approach can obtain the foreground crowd blobs accurately and works well for different levels of crowding.
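
    A sketch of the first (statistical learning) stage under stated assumptions: 32 x 32 blocks, scikit-image HOG features, and a scikit-learn SVM trained offline on labelled blocks; the image dimensions must be divisible by the block size.

        import numpy as np
        from skimage.feature import hog
        from skimage.util import view_as_blocks
        from sklearn.svm import SVC

        def block_foreground_mask(gray_image, clf, block=(32, 32)):
            # Label each block crowd (1) or background (0) from its HOG features.
            blocks = view_as_blocks(gray_image, block)
            h, w = blocks.shape[:2]
            feats = [hog(blocks[i, j], pixels_per_cell=(8, 8), cells_per_block=(2, 2))
                     for i in range(h) for j in range(w)]
            return clf.predict(np.asarray(feats)).reshape(h, w)

        # Training happens offline, e.g.:
        # clf = SVC(kernel="rbf").fit(training_features, training_labels)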

  12. Granularity Adaptive Density Estimation and on Demand Clustering of Concept-Drifting Data Streams

    E-print Network

    Pei, Jian

    This paper studies granularity-adaptive density estimation and on-demand clustering of concept-drifting data streams. In order to characterize concept-drifting data streams, we propose… clustering of concept-drifting data streams, which is illustrated in the following example. Example 1

  13. Estimating the density of a possibly missing response variable in nonlinear regression

    E-print Network

    Mueller, Uschi

    Estimating the density of a possibly missing response variable in nonlinear regression is particularly relevant in situations…

  14. Evaluating field-scale sampling methods for the estimation of mean plant densities of weeds

    Microsoft Academic Search

    N. COLBACH

    2000-01-01

    Summary The weed flora (comprising seven species) of a field continuously grown with soyabean was simulated for 4 years, using semivariograms established from previous field observations. Various sampling methods were applied and compared for accurately estimating mean plant densities, for differing weed species and years. The tested methods were based on (a) random selection wherein samples were chosen either

  15. Estimating group size and population density of Eurasian badgers Meles meles by quantifying latrine use

    Microsoft Academic Search

    F. A. M. TUYTTENS; B. LONG; T. FAWCETT; A. SKINNER; J. A. BROWN; C. L. CHEESEMAN; A. W. RODDAM; D. W. MACDONALD

    Summary 1. Conservation issues and a potential role in disease transmission generate the continued need to census Eurasian badgers Meles meles , but direct counts and sett counts present difficulties. The feasibility of estimating social group size and population density of badgers by quantifying their use of latrines was evaluated. 2. The number of latrines, or preferably the number of

  16. Did the middle class shrink during the 1980s? UK evidence from kernel density estimates

    Microsoft Academic Search

    Stephen P. Jenkins

    1995-01-01

    This paper proposes using kernel density estimation methods to investigate the shrinking middle class hypothesis. The approach reveals striking new evidence of changes in the concentration of middle incomes in the United Kingdom during the 1980s. Breakdowns by family economic status demonstrate that a major cause of the aggregate changes was a moving apart of the income distributions for working

  17. The Spectral Density Estimation of Stationary Time Series with Missing Data

    E-print Network

    Schellekens, Michel P.

    In climatological studies, the anomaly data can usually be modeled as stationary time series (after removing some trend and seasonal components). Missing observations are common in climatological data.

  18. Probability Density Estimation using Isocontours and Isosurfaces: Application to Information Theoretic

    E-print Network

    Banerjee, Arunava

    The method is compared with partial volume interpolation, Parzen windows, etc., under fine intensity quantization for affine image registration. Our approach requires the selection of only an image interpolant. The method neither…

  19. Density estimation via cross-validation: Model selection point of view

    E-print Network

    Extensively used in practice, cross-validation (CV) remains poorly understood… A theoretical assessment of the CV performance is carried out thanks to two oracle inequalities. Keywords: density estimation, cross-validation, model selection, leave-p-out, random penalty, oracle inequality

  20. USING AERIAL HYPERSPECTRAL REMOTE SENSING IMAGERY TO ESTIMATE CORN PLANT STAND DENSITY

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Since corn plant stand density is important for optimizing crop yield, several researchers have recently developed ground-based systems for automatic measurement of this crop growth parameter. Our objective was to use data from such a system to assess the potential for estimation of corn plant stan...

  1. Brain tumor cell density estimation from multi-modal MR images based on a synthetic tumor growth model

    E-print Network

    Prastawa, Marcel

    This paper proposes to employ a detailed tumor growth model to synthesize labelled images which can then be used to train an efficient data-driven machine learning tumor predictor. Our MR image…

  2. Breast Percent Density: Estimation on Digital Mammograms and Central Tomosynthesis Projections

    PubMed Central

    Bakic, Predrag R.; Carton, Ann-Katherine; Kontos, Despina; Zhang, Cuiping; Troxel, Andrea B.; Maidment, Andrew D. A.

    2009-01-01

    Purpose: To evaluate inter- and intrareader agreement in breast percent density (PD) estimation on clinical digital mammograms and central digital breast tomosynthesis (DBT) projection images. Materials and Methods: This HIPAA-compliant study had institutional review board approval; all patients provided informed consent. Breast PD estimation was performed on the basis of anonymized digital mammograms and central DBT projections in 39 women (mean age, 51 years; range, 31–80 years). All women had recently detected abnormalities or biopsy-proved cancers. PD was estimated by three experienced readers on the mediolateral oblique views of the contralateral breasts by using software; each reader repeated the estimation after 2 months. Spearman correlations of inter- and intrareader and intermodality PD estimates, as well as κ statistics between categorical PD estimates, were computed. Results: High correlation (ρ = 0.91) was observed between PD estimates on digital mammograms and those on central DBT projections, averaged over all estimations; the corresponding κ coefficient (0.79) indicated substantial agreement. Mean interreader agreement for PD estimation on central DBT projections (ρ = 0.85 ± 0.05 [standard deviation]) was significantly higher (P < .01) than that for PD estimation on digital mammograms (ρ = 0.75 ± 0.05); the corresponding κ coefficients indicated substantial (κ = 0.65 ± 0.12) and moderate (κ = 0.55 ± 0.14) agreement for central DBT projections and digital mammograms, respectively. Conclusion: High correlation between PD estimates on digital mammograms and those on central DBT projections suggests the latter could be used until a method for PD estimation based on three-dimensional reconstructed images is introduced. Moreover, clinical PD estimation is possible with reduced radiation dose, as each DBT projection was acquired by using about 22% of the dose for a single mammographic projection. © RSNA, 2009 PMID:19420321
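
    The agreement statistics quoted above are standard and straightforward to reproduce; a toy sketch with made-up reader scores (not the study's data):

        from scipy.stats import spearmanr
        from sklearn.metrics import cohen_kappa_score

        reader1 = [1, 2, 2, 3, 4, 2, 3, 1]  # categorical PD scores, reader 1
        reader2 = [1, 2, 3, 3, 4, 2, 2, 1]  # same cases, reader 2
        rho = spearmanr(reader1, reader2).correlation  # rank correlation
        kappa = cohen_kappa_score(reader1, reader2)    # chance-corrected agreement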

  3. Wavelet-based neural network with fuzzy-logic adaptivity for nuclear image restoration

    SciTech Connect

    Qian, W.; Clarke, L.P. [Univ. of South Florida, Tampa, FL (United States)]

    1996-10-01

    A novel wavelet-based neural network with fuzzy-logic adaptivity (WNNFA) is proposed for image restoration using a nuclear medicine gamma camera based on the measured system point spread function. The objective is to restore image degradation due to photon scattering and collimator photon penetration with the gamma camera and allow improved quantitative external measurements of radionuclides in vivo. The specific clinical model proposed is the imaging of bremsstrahlung radiation using {sup 32}P and {sup 90}Y because of the enhanced image degradation effects of photon scattering, photon penetration and poor signal-to-noise ratio (SNR) in measurements of this type with the gamma camera. The theoretical basis for four-channel multiresolution wavelet decomposition of the nuclear image into different subimages is developed with the objective of isolating the signal from noise. A fuzzy rule is generated to train a membership function using least mean squares (LMS) to obtain an optimal balance between image restoration and the stability of the neural network (NN), while maintaining a linear response for the camera to radioactivity dose. A multichannel modified Hopfield neural network (HNN) architecture is then proposed for multichannel image restoration using the dominant signal subimages. This algorithm model avoids the common inverse problem associated with other image restoration filters such as the Wiener filter. The relative performance of the WNNFA for image restoration is compared with a previously reported order statistic neural network hybrid (OSNNH) filter by these investigators, a traditional Wiener filter, and a modified HNN using simulated degraded images with different noise levels. Quantitative metrics such as the normalized mean square error (NMSE) and SNR are used to compare filter performance.

  4. On the Use of Adaptive Wavelet-based Methods for Ocean Modeling and Data Assimilation Problems

    NASA Astrophysics Data System (ADS)

    Vasilyev, Oleg V.; Yousuff Hussaini, M.; Souopgui, Innocent

    2014-05-01

    Latest advancements in parallel wavelet-based numerical methodologies for the solution of partial differential equations, combined with the unique properties of wavelet analysis to unambiguously identify and isolate localized dynamically dominant flow structures, make it feasible to start developing integrated approaches for ocean modeling and data assimilation problems that take advantage of temporally and spatially varying meshes. In this talk the Parallel Adaptive Wavelet Collocation Method with spatially and temporally varying thresholding is presented and the feasibility/potential advantages of its use for ocean modeling are discussed. The second half of the talk focuses on the recently developed Simultaneous Space-time Adaptive approach that addresses one of the main challenges of variational data assimilation, namely the requirement to have a forward solution available when solving the adjoint problem. The issue is addressed by concurrently solving forward and adjoint problems in the entire space-time domain on a near optimal adaptive computational mesh that automatically adapts to spatio-temporal structures of the solution. The compressed space-time form of the solution eliminates the need to save or recompute the forward solution for every time slice, as is typically done in traditional time marching variational data assimilation approaches. The simultaneous spatio-temporal discretization of both the forward and the adjoint problems makes it possible to solve both of them concurrently on the same space-time adaptive computational mesh, reducing the amount of saved data to the strict minimum for a given a priori controlled accuracy of the solution. The simultaneous space-time adaptive approach of variational data assimilation is demonstrated for the advection diffusion problem in 1D-t and 2D-t dimensions.

  5. An Undecimated Wavelet-based Method for Cochlear Implant Speech Processing.

    PubMed

    Hajiaghababa, Fatemeh; Kermani, Saeed; Marateb, Hamid R

    2014-10-01

    A cochlear implant is an implanted electronic device used to provide a sensation of hearing to a person who is hard of hearing. The cochlear implant is often referred to as a bionic ear. This paper presents an undecimated wavelet-based speech coding strategy for cochlear implants, which gives a novel speech processing strategy. The undecimated wavelet packet transform (UWPT) is computed like the wavelet packet transform except that it does not down-sample the output at each level. The speech data used for the current study consist of 30 consonants, sampled at 16 kHz. The performance of our proposed UWPT method was compared to that of an infinite impulse response (IIR) filter bank in terms of mean opinion score (MOS), the short-time objective intelligibility (STOI) measure, and segmental signal-to-noise ratio (SNR). The undecimated wavelet gave better segmental SNR for about 96% of the input speech data. The MOS of the proposed method was twice that of the IIR filter bank. The statistical analysis revealed that the UWPT-based N-of-M strategy significantly improved the MOS, STOI and segmental SNR (P < 0.001) compared with those obtained with the IIR filter-bank based strategies. The advantage of the UWPT is that it is shift-invariant, which gives a dense approximation to the continuous wavelet transform. Thus, the information loss is minimal, which is why the UWPT performed better than traditional filter-bank strategies in speech recognition tests. Results showed that the UWPT could be a promising method for speech coding in cochlear implants, although its computational complexity is higher than that of traditional filter-banks. PMID:25426428

  6. Bioenergetics estimate of the effects of stocking density on hatchery production of smallmouth bass fingerlings

    USGS Publications Warehouse

    Robel, G.L.; Fisher, W.L.

    1999-01-01

    Production of and consumption by hatchery-reared fingerling (age-0) smallmouth bass Micropterus dolomieu at various simulated stocking densities were estimated with a bioenergetics model. Fish growth rates and pond water temperatures during the 1996 growing season at two hatcheries in Oklahoma were used in the model. Fish growth and simulated consumption and production differed greatly between the two hatcheries, probably because of differences in pond fertilization and mortality rates. Our results suggest that appropriate stocking density depends largely on prey availability as affected by pond fertilization and on fingerling mortality rates. The bioenergetics model provided a useful tool for estimating production at various stocking density rates. However, verification of physiological parameters for age-0 fish of hatchery-reared species is needed.

  7. Density estimation of small-mammal populations using a trapping web and distance sampling methods

    USGS Publications Warehouse

    Anderson, David R.; Burnham, Kenneth P.; White, Gary C.; Otis, David L.

    1983-01-01

    Distance sampling methodology is adapted to enable animal density (number per unit of area) to be estimated from capture-recapture and removal data. A trapping web design provides the link between capture data and distance sampling theory. The estimator of density is D = M_{t+1} f(0), where M_{t+1} is the number of individuals captured and f(0) is computed from the M_{t+1} distances from the web center to the traps in which those individuals were first captured. It is possible to check qualitatively the critical assumption on which the web design and the estimator are based. This is a conceptual paper outlining a new methodology, not a definitive investigation of the best specific way to implement this method. Several alternative sampling and analysis methods are possible within the general framework of distance sampling theory; a few alternatives are discussed and an example is given.

  8. Biases in velocity and Q estimates from 3D density structure

    NASA Astrophysics Data System (ADS)

    Płonka, Agnieszka; Fichtner, Andreas

    2015-04-01

    We propose to develop a seismic tomography technique that directly inverts for density, using complete seismograms rather than arrival times of certain waves only. The first task in this challenge is to systematically study the imprints of density on synthetic seismograms. To compute the full seismic wavefield in a 3D heterogeneous medium without making significant approximations, we use numerical wave propagation based on a spectral-element discretization of the seismic wave equation. We consider a 2000 by 1000 km wide and 500 km deep spherical section, with the 1D Earth model PREM (with 40 km crust thickness) as a background. Onto this (in the uppermost 40 km) we superimpose 3D randomly generated velocity and density heterogeneities of various magnitudes and correlation lengths. We use different random realizations of heterogeneity distribution. We compare the synthetic seismograms for 3D velocity and density structure with 3D velocity structure and with the 1D background, calculating relative amplitude differences and timeshifts as functions of time and frequency. For 3D density variations of 7 % relative to PREM, the biggest time shifts reach 2.5 s, and the biggest relative amplitude differences approach 90 %. Based on the experimental changes in arrival times and amplitudes, we quantify the biases introduced in velocity and Q estimates when 3D density is not taken into account. For real data the effects may be more severe, given that commonly observed crustal velocity variations of 10-20 % suggest density variations of around 15 % in the upper crust. Our analyses indicate that reasonably sized density variations within the crust can leave a strong imprint on both traveltimes and amplitudes. While this can produce significant biases in velocity and Q estimates, the positive conclusion is that seismic waveform inversion for density may become feasible.

  9. On the estimation of E-region density profiles using IDA4D and COSMIC occultations

    NASA Astrophysics Data System (ADS)

    Nicolls, M. J.; Rodrigues, F. S.; Bust, G. S.; Crowley, G.

    2008-12-01

    The E-region density is one of the key elements in the development of equatorial spread F. The linear growth rate of the generalized Rayleigh-Taylor instability depends heavily on the flux-tube integrated E-region Pedersen conductivity (e.g., Sultan, 1996). Therefore, good estimates of E-region densities are necessary for a better understanding of ESF phenomenology. In this study, we investigate the estimation of E-region density profiles obtained with the Ionospheric Data Assimilation Four-Dimensional (IDA4D) algorithm. The profiles are obtained (a) by direct assimilation of radio occultation data into IDA4D and (b) by using IDA4D F-region density results to assist the inversion of E-region profiles from occultation measurements. The results are compared with independent measurements of E-region density profiles made by the bistatic coherent scatter radar experiment in Peru (e.g., Hysell and Chau, 2001). Additionally, we present an analysis of the accuracy and variability of the E-region density predictions of the numerical models used to generate the background (initial state) ionosphere for IDA4D inversions (TIMEGCM and IRI).

  10. Quantitative analysis for breast density estimation in low dose chest CT scans.

    PubMed

    Moon, Woo Kyung; Lo, Chung-Ming; Goo, Jin Mo; Bae, Min Sun; Chang, Jung Min; Huang, Chiun-Sheng; Chen, Jeon-Hor; Ivanova, Violeta; Chang, Ruey-Feng

    2014-03-01

    A computational method was developed to measure breast density from chest computed tomography (CT) images and to assess its correlation with mammographic density. Sixty-nine asymptomatic Asian women (138 breasts) were studied. With the lung area and pectoralis muscle line marked in a template slice, the demons registration algorithm was applied to the consecutive CT slices to automatically generate the defined breast area. The breast area was then analyzed using fuzzy c-means clustering to separate fibroglandular tissue from fat tissue. The fibroglandular clusters obtained from all CT slices were summed and then divided by the total breast area to calculate the percent density for CT. The results were compared with the density estimated from mammographic images. For CT breast density, the coefficients of variation of intraoperator and interoperator measurements were 3.00% (0.59%-8.52%) and 3.09% (0.20%-6.98%), respectively. Breast density measured from CT (22 ± 0.6%) was lower than that from mammography (34 ± 1.9%), with a Pearson correlation coefficient of r = 0.88. The results suggest that breast density measured from chest CT images correlates well with that from mammography. Reproducible 3D information on breast density can be obtained with the proposed CT-based quantification methods. PMID:24643751
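
    A rough sketch of the percent-density step, with ordinary two-cluster k-means standing in for the paper's fuzzy c-means and invented Hounsfield-unit values in place of real CT voxels:

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    def percent_density(breast_voxels):
        """Cluster breast-area CT intensities (HU) into fat vs. fibroglandular
        tissue and return percent density = fibroglandular / total * 100.

        Two-cluster k-means is a simplification of the paper's fuzzy c-means;
        fibroglandular tissue is the higher-attenuation cluster.
        """
        hu = np.asarray(breast_voxels, dtype=float).reshape(-1, 1)
        labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(hu)
        dense_label = int(hu[labels == 1].mean() > hu[labels == 0].mean())
        return 100.0 * np.mean(labels == dense_label)

    # Hypothetical voxel intensities pooled over all CT slices of one breast.
    rng = np.random.default_rng(0)
    voxels = np.concatenate([rng.normal(-100, 20, 7800),   # fat
                             rng.normal(40, 20, 2200)])    # fibroglandular
    print(f"CT percent density: {percent_density(voxels):.1f} %")
    ```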

  11. Non-Gaussian probabilistic MEG source localisation based on kernel density estimation

    PubMed Central

    Mohseni, Hamid R.; Kringelbach, Morten L.; Woolrich, Mark W.; Baker, Adam; Aziz, Tipu Z.; Probert-Smith, Penny

    2014-01-01

    There is strong evidence to suggest that data recorded from magnetoencephalography (MEG) follows a non-Gaussian distribution. However, existing standard methods for source localisation model the data using only second order statistics, and therefore rely on the inherent assumption of a Gaussian distribution. In this paper, we present a new general method for non-Gaussian source estimation of stationary signals for localising brain activity from MEG data. By providing a Bayesian formulation for MEG source localisation, we show that the source probability density function (pdf), which is not necessarily Gaussian, can be estimated using multivariate kernel density estimators. In the case of Gaussian data, the solution of the method is equivalent to that of the widely used linearly constrained minimum variance (LCMV) beamformer. The method is also extended to handle data with highly correlated sources using the marginal distribution of the estimated joint distribution, which, in the case of Gaussian measurements, corresponds to the null-beamformer. The proposed non-Gaussian source localisation approach is shown to give better spatial estimates than the LCMV beamformer, both in simulations incorporating non-Gaussian signals, and in real MEG measurements of auditory and visual evoked responses, where the highly correlated sources are known to be difficult to estimate. PMID:24055702
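
    To illustrate the central ingredient, the sketch below fits a multivariate kernel density estimator (scipy's gaussian_kde) to a synthetic, clearly non-Gaussian two-dimensional sample. It is not the paper's beamforming pipeline, only the density-estimation step it relies on:

    ```python
    import numpy as np
    from scipy.stats import gaussian_kde

    # Hypothetical stand-in for reconstructed source time-courses: a bimodal,
    # clearly non-Gaussian 2-D distribution.
    rng = np.random.default_rng(1)
    samples = np.concatenate([rng.normal(-2.0, 0.5, (500, 2)),
                              rng.normal(+2.0, 0.5, (500, 2))])

    # gaussian_kde expects an array of shape (n_dims, n_samples).
    kde = gaussian_kde(samples.T)

    # Evaluate the estimated pdf at a few points; a Gaussian (second-order)
    # model would wrongly place most mass near the origin, the KDE does not.
    points = np.array([[0.0, 0.0], [-2.0, -2.0], [2.0, 2.0]]).T
    print(kde(points))
    ```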

  12. Density!

    NSDL National Science Digital Library

    Miss Witcher

    2011-10-06

    What is Density? Density is the amount of "stuff" in a given "space". In science terms that means the amount of "mass" per unit "volume". Using units that means the amount of "grams" per "centimeters cubed". Check out the following links and learn about density through song! Density Beatles Style Density Chipmunk Style Density Rap Enjoy! ...

  13. Density estimation in a wolverine population using spatial capture-recapture models

    USGS Publications Warehouse

    Royle, J. Andrew; Magoun, Audrey J.; Gardner, Beth; Valkenbury, Patrick; Lowell, Richard E.

    2011-01-01

    Classical closed-population capture-recapture models do not accommodate the spatial information inherent in encounter history data obtained from camera-trapping studies. As a result, individual heterogeneity in encounter probability is induced, and it is not possible to estimate density objectively because trap arrays do not have a well-defined sample area. We applied newly developed capture-recapture models that accommodate the spatial attribute inherent in capture-recapture data to a population of wolverines (Gulo gulo) in Southeast Alaska in 2008. We used camera-trapping data collected from 37 cameras in a 2,140-km2 area of forested and open habitats largely enclosed by ocean and glacial icefields. We detected 21 unique individuals 115 times. Wolverines exhibited a strong positive trap response, with an increased tendency to revisit previously visited traps. Under the trap-response model, we estimated wolverine density at 9.7 individuals/1,000 km2 (95% Bayesian CI: 5.9-15.0). Our model provides a formal statistical framework for estimating density from wolverine camera-trapping studies that accounts for a behavioral response due to baited traps. Further, our model-based estimator does not have strict requirements about the spatial configuration of traps or length of trapping sessions, providing considerable operational flexibility in the development of field studies.

  14. RS-Forest: A Rapid Density Estimator for Streaming Anomaly Detection

    PubMed Central

    Wu, Ke; Zhang, Kun; Fan, Wei; Edwards, Andrea; Yu, Philip S.

    2015-01-01

    Anomaly detection in streaming data is of high interest in numerous application domains. In this paper, we propose a novel one-class semi-supervised algorithm to detect anomalies in streaming data. Underlying the algorithm is a fast and accurate density estimator implemented by multiple fully randomized space trees (RS-Trees), named RS-Forest. The piecewise constant density estimate of each RS-tree is defined on the tree node into which an instance falls. Each incoming instance in a data stream is scored by the density estimates averaged over all trees in the forest. Two strategies, statistical attribute range estimation with a high-probability guarantee and dual node profiles for rapid model update, are seamlessly integrated into RS-Forest to systematically address the ever-evolving nature of data streams. We derive the theoretical upper bound for the proposed algorithm and analyze its asymptotic properties via bias-variance decomposition. Empirical comparisons to the state-of-the-art methods on multiple benchmark datasets demonstrate that the proposed method features a high detection rate, fast response, and insensitivity to most parameter settings. Algorithm implementations and datasets are available upon request. PMID:25685112
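
    The sketch below illustrates the core density-estimation idea under simplifying assumptions: a forest of fully randomized space trees whose leaves carry piecewise-constant density estimates, averaged to score a query point. The streaming-specific machinery (attribute-range estimation, dual node profiles) is omitted, and all names and parameters are invented:

    ```python
    import numpy as np

    class RandomSpaceTree:
        """A fully randomized binary partition of a box to a fixed depth.

        Each leaf stores a piecewise-constant density estimate:
        (fraction of points in the leaf) / (leaf volume).
        """

        def __init__(self, X, lo, hi, depth, rng):
            self.n = len(X)
            self.root = self._build(X, lo.copy(), hi.copy(), depth, rng)

        def _build(self, X, lo, hi, depth, rng):
            if depth == 0:
                volume = np.prod(hi - lo)
                return {"leaf": True, "density": len(X) / (self.n * volume)}
            dim = rng.integers(len(lo))            # random split dimension
            split = rng.uniform(lo[dim], hi[dim])  # random split point
            mask = X[:, dim] < split
            left_hi, right_lo = hi.copy(), lo.copy()
            left_hi[dim] = split
            right_lo[dim] = split
            return {"leaf": False, "dim": dim, "split": split,
                    "left": self._build(X[mask], lo, left_hi, depth - 1, rng),
                    "right": self._build(X[~mask], right_lo, hi, depth - 1, rng)}

        def density(self, x):
            node = self.root
            while not node["leaf"]:
                node = node["left"] if x[node["dim"]] < node["split"] else node["right"]
            return node["density"]

    # Score = density averaged over the forest; low scores flag anomalies.
    rng = np.random.default_rng(2)
    X = rng.normal(0.0, 1.0, (2000, 2))
    lo, hi = X.min(axis=0), X.max(axis=0)
    forest = [RandomSpaceTree(X, lo, hi, depth=6, rng=rng) for _ in range(25)]
    score = lambda x: np.mean([t.density(x) for t in forest])
    print(score(np.array([0.0, 0.0])), score(np.array([4.0, 4.0])))
    ```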

  15. Mammographic density and estimation of breast cancer risk in intermediate risk population.

    PubMed

    Tesic, Vanja; Kolaric, Branko; Znaor, Ariana; Kuna, Sanja Kusacic; Brkljacic, Boris

    2013-01-01

    It is not clear to what extent mammographic density represents a risk factor for breast cancer among women at moderate risk of the disease. We conducted a population-based study to estimate the independent effect of breast density on breast cancer risk and to evaluate the potential of breast density as a marker of risk in an intermediate-risk population. From November 2006 to April 2009, data that included American College of Radiology Breast Imaging Reporting and Data System (BI-RADS) breast density categories and risk information were collected on 52,752 women aged 50-69 years without previously diagnosed breast cancer who underwent screening mammography. A total of 257 screen-detected breast cancers were identified. Logistic regression was used to assess the effect of breast density on breast carcinoma risk and to control for other risk factors. Risk increased with density: the odds ratio for breast cancer among women with dense breasts (heterogeneously and extremely dense) was 1.9 (95% confidence interval, 1.3-2.8) compared with women with almost entirely fatty breasts, after adjustment for age, body mass index, age at menarche, age at menopause, age at first childbirth, number of live births, use of oral contraceptives, family history of breast cancer, prior breast procedures, and hormone replacement therapy use, all of which were significantly related to breast density (p < 0.001). In the multivariate model, breast cancer risk increased with age, body mass index, family history of breast cancer, prior breast procedures, and breast density, and decreased with number of live births. Our finding that mammographic density is an independent risk factor for breast cancer indicates the importance of breast density measurements for breast cancer risk assessment in moderate-risk populations as well. PMID:23173778
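
    As a worked illustration of how such an odds ratio and confidence interval arise, the snippet below computes a crude (unadjusted) odds ratio from a hypothetical 2x2 table. The study itself used logistic regression with covariate adjustment, and these counts are not its data:

    ```python
    import numpy as np

    # Hypothetical 2x2 table (not the study's data): cases / non-cases
    # among women with dense vs. almost entirely fatty breasts.
    a, b = 120, 20000   # dense: cases, non-cases
    c, d = 40, 12000    # fatty: cases, non-cases

    log_or = np.log((a * d) / (b * c))          # log odds ratio
    se = np.sqrt(1/a + 1/b + 1/c + 1/d)         # Woolf standard error
    ci = np.exp([log_or - 1.96 * se, log_or + 1.96 * se])
    print(f"OR = {np.exp(log_or):.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
    ```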

  16. A comparison of selected parametric and imputation methods for estimating snag density and snag quality attributes

    USGS Publications Warehouse

    Eskelson, Bianca N.I.; Hagar, Joan; Temesgen, Hailemariam

    2012-01-01

    Snags (standing dead trees) are an essential structural component of forests. Because wildlife use of snags depends on size and decay stage, snag density estimation without any information about snag quality attributes is of little value for wildlife management decision makers. Little work has been done to develop models that allow multivariate estimation of snag density by snag quality class. Using climate, topography, Landsat TM data, stand age, and forest type collected for 2356 forested Forest Inventory and Analysis plots in western Washington and western Oregon, we evaluated two multivariate techniques for their abilities to estimate density of snags by three decay classes. The density of live trees and snags in three decay classes (D1: recently dead, little decay; D2: decay, without top, some branches and bark missing; D3: extensive decay, missing bark and most branches) with diameter at breast height (DBH) ≥ 12.7 cm was estimated using a nonparametric random forest nearest neighbor imputation technique (RF) and a parametric two-stage model (QPORD), for which the number of trees per hectare was estimated with a quasi-Poisson model in the first stage and the probability of belonging to a tree status class (live, D1, D2, D3) was estimated with an ordinal regression model in the second stage. The presence of large snags with DBH ≥ 50 cm was predicted using logistic regression and RF imputation. Because of the more homogeneous conditions on private forest lands, snag density by decay class was predicted with higher accuracies on private forest lands than on public lands, while presence of large snags was more accurately predicted on public lands, owing to the higher prevalence of large snags on public lands. RF outperformed the QPORD model in terms of percent accurate predictions, while QPORD provided smaller root mean square errors in predicting snag density by decay class. The logistic regression model achieved more accurate presence/absence classification of large snags than the RF imputation approach. Adjusting the decision threshold to account for unequal sizes of the presence and absence classes is more straightforward for the logistic regression than for the RF imputation approach. Overall, model accuracies were poor in this study, which can be attributed to the poor predictive quality of the explanatory variables and the large range of forest types and geographic conditions observed in the data.
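
    A minimal sketch of the first stage of such a two-stage model, assuming statsmodels' GLM with a Pearson chi-square dispersion estimate as the quasi-Poisson fit. The covariate and data are invented, and the ordinal second stage is omitted:

    ```python
    import numpy as np
    import statsmodels.api as sm

    # Invented plot-level data: snag counts with a single covariate (stand
    # age); the real model used climate, topography, Landsat and forest-type
    # covariates, plus a second-stage ordinal model for decay class.
    rng = np.random.default_rng(3)
    stand_age = rng.uniform(20, 200, 300)
    snag_count = rng.poisson(np.exp(0.5 + 0.008 * stand_age))

    X = sm.add_constant(stand_age)
    # scale='X2' estimates dispersion from the Pearson chi-square statistic,
    # turning the plain Poisson GLM into a quasi-Poisson fit.
    fit = sm.GLM(snag_count, X, family=sm.families.Poisson()).fit(scale='X2')
    print(fit.params)   # intercept and stand-age coefficient
    print(fit.scale)    # estimated dispersion (1.0 would be pure Poisson)
    ```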

  17. Estimation and Modeling of Enceladus Plume Jet Density Using Reaction Wheel Control Data

    NASA Technical Reports Server (NTRS)

    Lee, Allan Y.; Wang, Eric K.; Pilinski, Emily B.; Macala, Glenn A.; Feldman, Antonette

    2010-01-01

    The Cassini spacecraft was launched on October 15, 1997 by a Titan 4B launch vehicle. After an interplanetary cruise of almost seven years, it arrived at Saturn on June 30, 2004. In 2005, Cassini completed three flybys of Enceladus, a small, icy satellite of Saturn. Observations made during these flybys confirmed the existence of a water vapor plume in the south polar region of Enceladus. Five additional low-altitude flybys of Enceladus were successfully executed in 2008-2009 to better characterize these watery plumes. The first of these flybys was the 50-km Enceladus-3 (E3) flyby executed on March 12, 2008. During the E3 flyby, the spacecraft attitude was controlled by a set of three reaction wheels. During the flyby, multiple plume jets imparted disturbance torques on the spacecraft, resulting in small but visible attitude control errors. Using the known and unique transfer function between the disturbance torque and the attitude control error, the collected attitude control error telemetry could be used to estimate the disturbance torque. The effectiveness of this methodology is confirmed using the E3 telemetry data. Given good estimates of the spacecraft's projected area, center of pressure location, and spacecraft velocity, the time history of the Enceladus plume density is reconstructed accordingly. The 1-sigma uncertainty of the estimated density is 7.7%. Next, we modeled the density due to each plume jet as a function of both the radial and angular distances of the spacecraft from the plume source. We also conjecture that the total plume density experienced by the spacecraft is the sum of the component plume densities. By comparing the time history of the reconstructed E3 plume density with that predicted by the plume model, values of the plume model parameters are determined. Results obtained are compared with those determined by other Cassini science instruments.
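
    The reconstruction step can be illustrated with a back-of-envelope free-molecular drag model, in which the aerodynamic torque is tau = 0.5 * rho * v^2 * Cd * A * d. Every number below is an invented placeholder, not a mission value:

    ```python
    # Illustrative reconstruction of plume mass density from an estimated
    # disturbance torque, assuming free-molecular drag:
    #   torque = 0.5 * rho * v**2 * Cd * A * d
    # where d is the offset between the center of pressure and the center
    # of mass. All values below are made up for illustration.
    torque = 2.0e-3   # N*m, estimated from attitude-control error telemetry
    v = 14.0e3        # m/s, spacecraft velocity relative to the plume
    Cd = 2.2          # free-molecular drag coefficient (assumed)
    A = 20.0          # m^2, projected spacecraft area (assumed)
    d = 1.5           # m, center-of-pressure offset (assumed)

    rho = 2.0 * torque / (Cd * A * v**2 * d)
    print(f"plume mass density ~ {rho:.2e} kg/m^3")
    ```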

  18. Probability Density Estimation Using Isocontours and Isosurfaces: Application to Information-Theoretic Image Registration

    PubMed Central

    Rajwade, Ajit; Banerjee, Arunava; Rangarajan, Anand

    2010-01-01

    We present a new geometric approach for determining the probability density of the intensity values in an image. We drop the notion of an image as a set of discrete pixels and assume a piecewise-continuous representation. The probability density can then be regarded as being proportional to the area between two nearby isocontours of the image surface. Our paper extends this idea to joint densities of image pairs. We demonstrate the application of our method to affine registration between two or more images using information-theoretic measures such as mutual information. We show cases where our method outperforms existing methods such as simple histograms, histograms with partial volume interpolation, Parzen windows, etc., under fine intensity quantization for affine image registration under significant image noise. Furthermore, we demonstrate results on simultaneous registration of multiple images, as well as for pairs of volume data sets, and show some theoretical properties of our density estimator. Our approach requires the selection of only an image interpolant. The method neither requires any kind of kernel functions (as in Parzen windows), which are unrelated to the structure of the image in itself, nor does it rely on any form of sampling for density estimation. PMID:19147876

  19. Combining Breeding Bird Survey and distance sampling to estimate density of migrant and breeding birds

    USGS Publications Warehouse

    Somershoe, S.G.; Twedt, D.J.; Reid, B.

    2006-01-01

    We combined Breeding Bird Survey point count protocol and distance sampling to survey spring migrant and breeding birds in Vicksburg National Military Park on 33 days between March and June of 2003 and 2004. For 26 of 106 detected species, we used program DISTANCE to estimate detection probabilities and densities from 660 3-min point counts in which detections were recorded within four distance annuli. For most species, estimates of detection probability, and thereby density estimates, were improved through incorporation of the proportion of forest cover at point count locations as a covariate. Our results suggest Breeding Bird Surveys would benefit from the use of distance sampling and a quantitative characterization of habitat at point count locations. During spring migration, we estimated that the most common migrant species accounted for a population of 5000-9000 birds in Vicksburg National Military Park (636 ha). Species with average populations of 300 individuals during migration were: Blue-gray Gnatcatcher (Polioptila caerulea), Cedar Waxwing (Bombycilla cedrorum), White-eyed Vireo (Vireo griseus), Indigo Bunting (Passerina cyanea), and Ruby-crowned Kinglet (Regulus calendula). Of 56 species that bred in Vicksburg National Military Park, we estimated that the most common 18 species accounted for 8150 individuals. The six most abundant breeding species, Blue-gray Gnatcatcher, White-eyed Vireo, Summer Tanager (Piranga rubra), Northern Cardinal (Cardinalis cardinalis), Carolina Wren (Thryothorus ludovicianus), and Brown-headed Cowbird (Molothrus ater), accounted for 5800 individuals.

  1. Bayesian nonparametric regression and density estimation using integrated nested Laplace approximations

    PubMed Central

    Wang, Xiao-Feng

    2013-01-01

    Integrated nested Laplace approximations (INLA) are a recently proposed approximate Bayesian approach for fitting structured additive regression models with a latent Gaussian field. The INLA method, as an alternative to Markov chain Monte Carlo techniques, provides accurate approximations to posterior marginals and avoids time-consuming sampling. We show here that two classical nonparametric smoothing problems, nonparametric regression and density estimation, can be addressed using INLA. Simulated examples and R functions are provided to illustrate the use of the methods. Some potential applications of INLA are also discussed in the paper. PMID:24416633

  2. Technical Factors Influencing Cone Packing Density Estimates in Adaptive Optics Flood Illuminated Retinal Images

    PubMed Central

    Lombardo, Marco; Serrao, Sebastiano; Lombardo, Giuseppe

    2014-01-01

    Purpose To investigate the influence of various technical factors on the variation of cone packing density estimates in adaptive optics flood illuminated retinal images. Methods Adaptive optics images of the photoreceptor mosaic were obtained in fifteen healthy subjects. The cone density and Voronoi diagrams were assessed in sampling windows of 320×320 µm, 160×160 µm and 64×64 µm at 1.5 degree temporal and superior eccentricity from the preferred locus of fixation (PRL). The technical factors that have been analyzed included the sampling window size, the corrected retinal magnification factor (RMFcorr), the conversion from radial to linear distance from the PRL, the displacement between the PRL and foveal center and the manual checking of cone identification algorithm. Bland-Altman analysis was used to assess the agreement between cone density estimated within the different sampling window conditions. Results The cone density declined with decreasing sampling area and data between areas of different size showed low agreement. A high agreement was found between sampling areas of the same size when comparing density calculated with or without using individual RMFcorr. The agreement between cone density measured at radial and linear distances from the PRL and between data referred to the PRL or the foveal center was moderate. The percentage of Voronoi tiles with hexagonal packing arrangement was comparable between sampling areas of different size. The boundary effect, presence of any retinal vessels, and the manual selection of cones missed by the automated identification algorithm were identified as the factors influencing variation of cone packing arrangements in Voronoi diagrams. Conclusions The sampling window size is the main technical factor that influences variation of cone density. Clear identification of each cone in the image and the use of a large buffer zone are necessary to minimize factors influencing variation of Voronoi diagrams of the cone mosaic. PMID:25203681

  3. Multiscale seismic characterization of marine sediments by using a wavelet-based approach

    NASA Astrophysics Data System (ADS)

    Ker, Stephan; Le Gonidec, Yves; Gibert, Dominique

    2015-04-01

    We propose a wavelet-based method to characterize acoustic impedance discontinuities from a multiscale analysis of reflected seismic waves. The method is developed in the framework of the wavelet response (WR), where dilated wavelets are used to sound a complex seismic reflector defined by a multiscale impedance structure. In the context of seismic imaging, we use the WR as a set of multiscale seismic attributes, in particular ridge functions, which contain most of the information quantifying the complex geometry of the reflector. We extend this approach to the analysis of seismic data acquired with broadband but frequency-limited source signals. The band-pass filter associated with such actual sources distorts the WR; to remove these effects, we develop an original processing based on fractional derivatives of Lévy alpha-stable distributions in the formalism of the continuous wavelet transform (CWT). We demonstrate that the CWT of a seismic trace involving such a finite frequency bandwidth can be made equivalent to the CWT of the impulse response of the subsurface, defined for a reduced range of dilations controlled by the seismic source signal. In this dilation range, the multiscale seismic attributes are corrected for distortions, and we can thus merge multiresolution seismic sources to increase the frequency range of the multiscale analysis. As a first demonstration, we perform the source correction with the high and very high resolution seismic sources of the SYSIF deep-towed seismic device and show that both can be merged into an equivalent seismic source with an improved frequency bandwidth (220-2200 Hz). Such multiresolution seismic data fusion allows reconstructing the acoustic impedance of the subseabed based on the inverse wavelet transform properties extended to the source-corrected WR. We illustrate the potential of this approach with deep-water seismic data acquired during the ERIG3D cruise and compare the results with the multiscale analysis performed on synthetic seismic data based on ground-truth measurements.
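
    As a minimal illustration of the underlying operation, the snippet below computes the continuous wavelet transform of a synthetic trace with PyWavelets. The source-correction and ridge-extraction machinery of the paper is not reproduced, and all signal parameters are invented:

    ```python
    import numpy as np
    import pywt

    # Continuous wavelet transform of a synthetic "seismic trace" -- the
    # basic operation behind the wavelet response analysis described above.
    dt = 5.0e-4                           # 0.5 ms sampling (illustrative)
    t = np.arange(0.0, 0.5, dt)
    trace = np.exp(-((t - 0.25) / 0.01) ** 2) * np.sin(2 * np.pi * 400 * t)

    scales = np.arange(1, 64)
    coeffs, freqs = pywt.cwt(trace, scales, 'morl', sampling_period=dt)
    # coeffs has shape (len(scales), len(trace)); ridge functions can be
    # extracted by following local maxima of |coeffs| across scales.
    print(coeffs.shape, freqs.min(), freqs.max())
    ```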

  4. Wavelet-based multiscale window transform and energy and vorticity analysis

    NASA Astrophysics Data System (ADS)

    Liang, Xiang San

    A new methodology, Multiscale Energy and Vorticity Analysis (MS-EVA), is developed to investigate sub-meso-scale, meso-scale, and large-scale dynamical interactions in geophysical fluid flows which are intermittent in space and time. The development begins with the construction of a wavelet-based functional analysis tool, the multiscale window transform (MWT), which is local, orthonormal, self-similar, and windowed on scale. The MWT is first built over the real line and then modified onto a finite domain. Its properties are explored, the most important being the property of marginalization, which brings together a quadratic quantity in physical space with its phase space representation. Based on the MWT, the MS-EVA is developed. Energy and enstrophy equations for the large-, meso-, and sub-meso-scale windows are derived and their terms interpreted. The processes thus represented are classified into four categories: transport, transfer, conversion, and dissipation/diffusion. The separation of transport from transfer is made possible with the introduction of the concept of perfect transfer. By the property of marginalization, the classical energetic analysis proves to be a particular case of the MS-EVA. The MS-EVA is validated with classical instability problems, in two steps. First, it is established that barotropic and baroclinic instabilities are indicated by the spatial averages of certain transfer term interaction analyses. Then calculations of these indicators are made with an Eady model and a Kuo model. The results agree precisely with what is expected from their analytical solutions, and the energetics reproduced reveal a consistent and important aspect of the unknown dynamic structure of instability processes. As an application, the MS-EVA is used to investigate the Iceland-Faeroe frontal (IFF) variability. An MS-EVA-ready dataset is first generated through a forecasting study with the Harvard Ocean Prediction System, using the data gathered during the 1993 NRV Alliance cruise. The application starts with a determination of the scale window bounds, which characterize a double-peak structure in either the time wavelet spectrum or the space wavelet spectrum. The resulting energetics, when locally averaged, reveal that a clear baroclinic instability occurs around the cold tongue intrusion observed in the forecast. Moreover, an interaction analysis shows that the energy released by the instability indeed goes to the meso-scale window and fuels the growth of the intrusion. The sensitivity study shows that, in this case, the key to a successful application is a correct decomposition of the large-scale window from the meso-scale window.

  5. Wavelet-based multiscale resolution analysis of real and simulated time-series of earthquakes

    NASA Astrophysics Data System (ADS)

    Enescu, Bogdan; Ito, Kiyoshi; Struzik, Zbigniew R.

    2006-01-01

    This study introduces a new approach (based on the Continuous Wavelet Transform Modulus Maxima method) to describe qualitatively and quantitatively the complex temporal patterns of seismicity, in particular their multifractal and clustering properties. Firstly, we analyse the temporal characteristics of intermediate-depth seismic activity (M ≥ 2.6 events) in the Vrancea region, Romania, from 1974 to 2002. The second case studied is the shallow, crustal seismicity (M ≥ 1.5 events) which occurred from 1976 to 1995 in a relatively large region surrounding the epicentre of the 1995 Kobe earthquake (Mw = 6.9). In both cases we declustered the earthquake catalogue and selected only the events with M ≥ Mc (where Mc is the magnitude of completeness) before analysis. The results obtained for the Vrancea region show that, for a relatively large range of scales, the process is nearly monofractal and random (does not display correlations). For the second case, two scaling regions can be readily noticed. At small scales (hours to days) the series display multifractal behaviour, while at larger scales (days to several years) we observe monofractal scaling. The Hölder exponent for the monofractal region is around 0.8, which would indicate the presence of long-range dependence (LRD). This result might be the consequence of the complex oscillatory or power-law trends of the analysed time-series. In order to clarify the interpretation of the above results, we consider two 'artificial' earthquake sequences. Firstly, we generate a 'low productivity' earthquake catalogue using the epidemic-type aftershock sequence (ETAS) model. The results, as expected, show no significant LRD for this simulated process. We also generate an event sequence by considering a 70 km long and 17.5 km deep fault, which is divided into square cells with dimensions of 550 m and is embedded in a 3-D elastic half-space. The simulated catalogue of this study is identical to case (A) described by Eneva & Ben-Zion. The series display clear quasi-periodic behaviour, as revealed by simple statistical tests. The wavelet-based multifractal analysis shows several distinct scaling domains. We speculate that each scaling range corresponds to a different periodic trend of the time-series.

  6. Vehicle density and communication load estimation in mobile radio local area networks (MR-LANs)

    Microsoft Academic Search

    W. Kremer; Wolfgang Kremer; Ford Werke

    1992-01-01

    The communication bandwidth of MR-LANs required to serve cooperative driving applications, which are under development in several European projects, is estimated. The bandwidth capacity is found to depend strongly on the road configurations and the related vehicle densities. The values range between 100 kb/s and 3 Mb/s at a carrier frequency of 60 GHz. If one switches to a carrier …

  7. Monitoring landscape metrics by point sampling: accuracy in estimating Shannon’s diversity and edge density

    Microsoft Academic Search

    Habib Ramezani; Sören Holm; Anna Allard; Göran Ståhl

    2010-01-01

    Environmental monitoring of landscapes is of increasing interest. To quantify landscape patterns, a number of metrics are used, of which Shannon's diversity, edge length, and density are studied here. As an alternative to complete mapping, point sampling was applied to estimate the metrics for already mapped landscapes selected from the National Inventory of Landscapes in Sweden (NILS). Monte-Carlo simulation was …

  8. Use of spatial capture-recapture modeling and DNA data to estimate densities of elusive animals

    USGS Publications Warehouse

    Kery, Marc; Gardner, Beth; Stoeckle, Tabea; Weber, Darius; Royle, J. Andrew

    2011-01-01

    Assessment of abundance, survival, recruitment rates, and density (i.e., population assessment) is especially challenging for elusive species most in need of protection (e.g., rare carnivores). Individual identification methods, such as DNA sampling, provide ways of studying such species efficiently and noninvasively. Additionally, statistical methods that correct for undetected animals and account for locations where animals are captured are available to efficiently estimate density and other demographic parameters. We collected hair samples of European wildcat (Felis silvestris) from cheek-rub lure sticks, extracted DNA from the samples, and identified each animal's genotype. To estimate the density of wildcats, we used Bayesian inference in a spatial capture-recapture model. We used WinBUGS to fit a model that accounted for differences in detection probability among individuals and seasons and between two lure arrays. We detected 21 individual wildcats (including possible hybrids) 47 times. Wildcat density was estimated at 0.29/km2 (SE 0.06), and 95% of the activity of wildcats was estimated to occur within 1.83 km from their home-range center. Lures located systematically were associated with a greater number of detections than lures placed in a cell on the basis of expert opinion. Detection probability of individual cats was greatest in late March. Our model is a generalized linear mixed model; hence, it can be easily extended, for instance, to incorporate trap- and individual-level covariates. We believe that the combined use of noninvasive sampling techniques and spatial capture-recapture models will improve population assessments, especially for rare and elusive animals.

  9. Density estimation in high and ultra high dimensions, regularization, and the L1 asymptotics

    Microsoft Academic Search

    Anirban DasGupta; S. N. Lahiri

    2012-01-01

    This article gives a theoretical treatment of the asymptotics of the L1 error of a model-based estimate of a density f(x|θ) on a finite-dimensional Euclidean space R^k. The dimension p of the parameter vector θ is considered arbitrary but fixed in Section 2. Two theorems in Section 2 lay out the weak limits of a suitably scaled L1 …

  10. Uncertainty quantification techniques for population density estimates derived from sparse open source data

    NASA Astrophysics Data System (ADS)

    Stewart, Robert; White, Devin; Urban, Marie; Morton, April; Webster, Clayton; Stoyanov, Miroslav; Bright, Eddie; Bhaduri, Budhendra L.

    2013-05-01

    The Population Density Tables (PDT) project at Oak Ridge National Laboratory (www.ornl.gov) is developing population density estimates for specific human activities under normal patterns of life based largely on information available in open source. Currently, activity-based density estimates are based on simple summary data statistics such as range and mean. Researchers are interested in improving activity estimation and uncertainty quantification by adopting a Bayesian framework that considers both data and sociocultural knowledge. Under a Bayesian approach, knowledge about population density may be encoded through the process of expert elicitation. Due to the scale of the PDT effort which considers over 250 countries, spans 50 human activity categories, and includes numerous contributors, an elicitation tool is required that can be operationalized within an enterprise data collection and reporting system. Such a method would ideally require that the contributor have minimal statistical knowledge, require minimal input by a statistician or facilitator, consider human difficulties in expressing qualitative knowledge in a quantitative setting, and provide methods by which the contributor can appraise whether their understanding and associated uncertainty was well captured. This paper introduces an algorithm that transforms answers to simple, non-statistical questions into a bivariate Gaussian distribution as the prior for the Beta distribution. Based on geometric properties of the Beta distribution parameter feasibility space and the bivariate Gaussian distribution, an automated method for encoding is developed that responds to these challenging enterprise requirements. Though created within the context of population density, this approach may be applicable to a wide array of problem domains requiring informative priors for the Beta distribution.

  11. Uncertainty Quantification Techniques for Population Density Estimates Derived from Sparse Open Source Data

    SciTech Connect

    Stewart, Robert N [ORNL; White, Devin A [ORNL; Urban, Marie L [ORNL; Morton, April M [ORNL; Webster, Clayton G [ORNL; Stoyanov, Miroslav K [ORNL; Bright, Eddie A [ORNL; Bhaduri, Budhendra L [ORNL

    2013-01-01

    The Population Density Tables (PDT) project at the Oak Ridge National Laboratory (www.ornl.gov) is developing population density estimates for specific human activities under normal patterns of life based largely on information available in open source. Currently, activity based density estimates are based on simple summary data statistics such as range and mean. Researchers are interested in improving activity estimation and uncertainty quantification by adopting a Bayesian framework that considers both data and sociocultural knowledge. Under a Bayesian approach knowledge about population density may be encoded through the process of expert elicitation. Due to the scale of the PDT effort which considers over 250 countries, spans 40 human activity categories, and includes numerous contributors, an elicitation tool is required that can be operationalized within an enterprise data collection and reporting system. Such a method would ideally require that the contributor have minimal statistical knowledge, require minimal input by a statistician or facilitator, consider human difficulties in expressing qualitative knowledge in a quantitative setting, and provide methods by which the contributor can appraise whether their understanding and associated uncertainty was well captured. This paper introduces an algorithm that transforms answers to simple, non-statistical questions into a bivariate Gaussian distribution as the prior for the Beta distribution. Based on geometric properties of the Beta distribution parameter feasibility space and the bivariate Gaussian distribution, an automated method for encoding is developed that responds to these challenging enterprise requirements. Though created within the context of population density, this approach may be applicable to a wide array of problem domains requiring informative priors for the Beta distribution.
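
    A simpler stand-in for the encoding step is moment matching: an elicited mean and standard deviation determine a Beta prior directly, subject to the same feasibility constraint on the Beta parameter space mentioned above. The sketch below is not the paper's bivariate-Gaussian algorithm:

    ```python
    def beta_from_elicitation(mean, sd):
        """Moment-match a Beta(alpha, beta) prior to an elicited mean and
        standard deviation (a simpler stand-in for the bivariate-Gaussian
        encoding described in the paper).

        Requires sd**2 < mean * (1 - mean), the feasibility constraint of
        the Beta parameter space.
        """
        if not 0.0 < mean < 1.0:
            raise ValueError("mean must lie strictly between 0 and 1")
        var = sd ** 2
        if var >= mean * (1.0 - mean):
            raise ValueError("sd too large for a Beta distribution")
        nu = mean * (1.0 - mean) / var - 1.0   # nu = alpha + beta
        return mean * nu, (1.0 - mean) * nu

    # A contributor believes ~30% of a population engages in an activity,
    # give or take 10 percentage points (illustrative numbers only).
    alpha, beta = beta_from_elicitation(0.30, 0.10)
    print(f"Beta({alpha:.1f}, {beta:.1f})")
    ```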

  12. Density

    NSDL National Science Digital Library

    Mr. Hansen

    2010-10-26

    What is density? Density is a relationship between mass (usually in grams or kilograms) and volume (usually in L, mL, or cm^3). Below are several sites to help you further understand the concept of density. Click the following link to review the concept of density. Be sure to read each slide and watch each video: Chemistry Review: Density Watch the following video: Pop density video The following is a fun interactive site you can use to review density. Your job is #1, to play and #2 to calculate the density of the ...

  13. Density Estimation for Statistics and Data Analysis

    E-print Network

    Masci, Frank

    Density Estimation for Statistics and Data Analysis. B.W. Silverman, School of Mathematics, University of Bath. Published in Monographs on Statistics and Applied Probability, London: Chapman and Hall, 1986.

  14. Kernel density estimation applied to bond length, bond angle, and torsion angle distributions.

    PubMed

    McCabe, Patrick; Korb, Oliver; Cole, Jason

    2014-05-27

    We describe the method of kernel density estimation (KDE) and apply it to molecular structure data. KDE is a quite general nonparametric statistical method suitable even for multimodal data. The method generates smooth probability density function (PDF) representations and finds application in diverse fields such as signal processing and econometrics. KDE appears to have been under-utilized as a method in molecular geometry analysis, chemo-informatics, and molecular structure optimization. The resulting probability densities have advantages over histograms and, importantly, are also suitable for gradient-based optimization. To illustrate KDE, we describe its application to chemical bond length, bond valence angle, and torsion angle distributions and show the ability of the method to model arbitrary torsion angle distributions. PMID:24746022
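
    A minimal example of the technique on synthetic torsion-angle data, using scipy's gaussian_kde (the paper's own implementation details may differ); note the closing comment on periodicity:

    ```python
    import numpy as np
    from scipy.stats import gaussian_kde

    # Synthetic torsion-angle sample (degrees) with two conformer modes --
    # stand-in data; real distributions come from crystal-structure surveys.
    rng = np.random.default_rng(4)
    angles = np.concatenate([rng.normal(60, 8, 400), rng.normal(180, 12, 600)])

    kde = gaussian_kde(angles)          # bandwidth: Scott's rule by default
    grid = np.linspace(0, 360, 721)
    pdf = kde(grid)
    print(f"dominant mode near {grid[pdf.argmax()]:.0f} degrees")
    # Note: a production treatment would use a periodic (e.g., von Mises)
    # kernel so density flows smoothly across the 0/360-degree boundary.
    ```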

  15. Examining the impact of the precision of address geocoding on estimated density of crime locations

    NASA Astrophysics Data System (ADS)

    Harada, Yutaka; Shimada, Takahito

    2006-10-01

    This study examines the impact of the precision of address geocoding on the estimated density of crime locations in a large urban area of Japan. The data consist of two separate sets of the same Penal Code offenses known to the police that occurred during a nine-month period of April 1, 2001 through December 31, 2001 in the central 23 wards of Tokyo. These two data sets are derived from the older and newer recording systems of the Tokyo Metropolitan Police Department (TMPD), which revised its crime reporting system in that year so that more precise location information than in previous years could be recorded. Each of these data sets was address-geocoded onto a large-scale digital map using our hierarchical address-geocoding schema, and we examined how such differences in the precision of address information, and the resulting differences in geocoded incident locations, affect the patterns in kernel density maps. An analysis using 11,096 pairs of incidents of residential burglary (each pair consists of the same incident geocoded using older and newer address information, respectively) indicates that kernel density estimation with a cell size of 25×25 m and a bandwidth of 500 m may work quite well in absorbing the poorer precision of geocoded locations based on data from the older recording system, whereas in several areas where the older recording system resulted in very poor precision, the inaccuracy of incident locations may produce artifactual and potentially misleading patterns in kernel density maps.

  16. Optimal diffusion MRI acquisition for fiber orientation density estimation: an analytic approach.

    PubMed

    White, Nathan S; Dale, Anders M

    2009-11-01

    An important challenge in the design of diffusion MRI experiments is how to optimize statistical efficiency, i.e., the accuracy with which parameters can be estimated from the diffusion data in a given amount of imaging time. In model-based spherical deconvolution analysis, the quantity of interest is the fiber orientation density (FOD). Here, we demonstrate how the spherical harmonics (SH) can be used to form an explicit analytic expression for the efficiency of the minimum variance (maximally efficient) linear unbiased estimator of the FOD. Using this expression, we calculate optimal b-values for maximum FOD estimation efficiency with SH expansion orders of L = 2, 4, 6, and 8 to be approximately b = 1,500, 3,000, 4,600, and 6,200 s/mm², respectively. However, the arrangement of diffusion directions and scanner-specific hardware limitations also play a role in determining the realizable efficiency of the FOD estimator that can be achieved in practice. We show how some commonly used methods for selecting diffusion directions are sometimes inefficient, and propose a new method for selecting diffusion directions in MRI based on maximizing the statistical efficiency. We further demonstrate how scanner-specific hardware limitations generally lead to optimal b-values that are slightly lower than the ideal b-values. In summary, the analytic expression for the statistical efficiency of the unbiased FOD estimator provides important insight into the fundamental tradeoff between angular resolution, b-value, and FOD estimation accuracy. PMID:19603409

  17. Phase-space structures - I. A comparison of 6D density estimators

    NASA Astrophysics Data System (ADS)

    Maciejewski, M.; Colombi, S.; Alard, C.; Bouchet, F.; Pichon, C.

    2009-03-01

    In the framework of particle-based Vlasov systems, this paper reviews and analyses different methods recently proposed in the literature to identify neighbours in 6D space and estimate the corresponding phase-space density. Specifically, it compares smoothed particle hydrodynamics (SPH) methods based on tree partitioning to 6D Delaunay tessellation. This comparison is carried out on statistical and dynamical realizations of single halo profiles, paying particular attention to the unknown scaling, SG, used to relate the spatial dimensions to the velocity dimensions. It is found that, in practice, the methods with local adaptive metric provide the best phase-space estimators. They make use of a Shannon entropy criterion combined with a binary tree partitioning and with subsequent SPH interpolation using 10-40 nearest neighbours. We note that the local scaling SG implemented by such methods, which enforces local isotropy of the distribution function, can vary by about one order of magnitude in different regions within the system. It presents a bimodal distribution, in which one component is dominated by the main part of the halo and the other one is dominated by the substructures of the halo. While potentially better than SPH techniques, since it yields an optimal estimate of the local softening volume (and therefore the local number of neighbours required to perform the interpolation), the Delaunay tessellation in fact generally poorly estimates the phase-space distribution function. Indeed, it requires, prior to its implementation, the choice of a global scaling SG. We propose two simple but efficient methods to estimate SG that yield a good global compromise. However, the Delaunay interpolation still remains quite sensitive to local anisotropies in the distribution. To emphasize the advantages of 6D analysis versus traditional 3D analysis, we also compare realistic 6D phase-space density estimation with the proxy proposed earlier in the literature, Q = ρ/σ³, where ρ is the local 3D (projected) density and 3σ² is the local 3D velocity dispersion. We show that Q only corresponds to a rough approximation of the true phase-space density, and is not able to capture all the details of the distribution in phase space, ignoring, in particular, filamentation and tidal streams.

  18. Density

    NSDL National Science Digital Library

    Mrs. Petersen

    2013-10-28

    Students will explain the concept of and be able to calculate density based on given volumes and masses. Throughout today's assignment, you will need to calculate density. You can find a density calculator at this site. Make sure that you enter the correct units. For most of the problems, grams and cubic centimeters will lead you to the correct answer: Density Calculator What is Density? Visit the following website to answer questions ...

  19. Multiscale estimation of GPS velocity fields

    Microsoft Academic Search

    Carl Tape; Pablo Musé; Mark Simons; Danan Dong; Frank Webb

    2009-01-01

    We present a spherical wavelet-based multiscale approach for estimating a spatial velocity field on the sphere from a set of irregularly spaced geodetic displacement observations. Because the adopted spherical wavelets are analytically differentiable, spatial gradient tensor quantities such as dilatation rate, strain rate, and rotation rate can be directly computed using the same coefficients. In a series of synthetic and …

  20. Density Estimation and Random Variate Generation (IEEE Transactions on Neural Networks, Vol. 13, No. 3, May 2002)

    E-print Network

    Atiya, Amir

    … process and random number generation. We present two variants of this method, a stochastic … the density being obtained by differentiation. In the second part of the paper, we develop new random number … we prove the convergence to the true density for both the density estimation and random variate …

  1. A New Robust Approach for Highway Traffic Density Estimation

    E-print Network

    Université de Paris-Sud XI

    A New Robust Approach for Highway Traffic Density Estimation. Fabio Morbidi, Luis León Ojeda, Carlos Canudas de Wit, Iker Bellicot. … for the uncertain graph-constrained Switching Mode Model (SMM), which we use to describe the highway traffic … density reconstruction via a switching observer, in an instrumented 2.2 km highway section of Grenoble …

  2. Heterogeneous occupancy and density estimates of the pathogenic fungus Batrachochytrium dendrobatidis in waters of North America.

    PubMed

    Chestnut, Tara; Anderson, Chauncey; Popa, Radu; Blaustein, Andrew R; Voytek, Mary; Olson, Deanna H; Kirshtein, Julie

    2014-01-01

    Biodiversity losses are occurring worldwide due to a combination of stressors. For example, by one estimate, 40% of amphibian species are vulnerable to extinction, and disease is one threat to amphibian populations. The emerging infectious disease chytridiomycosis, caused by the aquatic fungus Batrachochytrium dendrobatidis (Bd), is a contributor to amphibian declines worldwide. Bd research has focused on the dynamics of the pathogen in its amphibian hosts, with little emphasis on investigating the dynamics of free-living Bd. Therefore, we investigated patterns of Bd occupancy and density in amphibian habitats using occupancy models, powerful tools for estimating site occupancy and detection probability. Occupancy models have been used to investigate diseases where the focus was on pathogen occurrence in the host. We applied occupancy models to investigate free-living Bd in North American surface waters to determine Bd seasonality, relationships between Bd site occupancy and habitat attributes, and probability of detection from water samples as a function of the number of samples, sample volume, and water quality. We also report on the temporal patterns of Bd density from a 4-year case study of a Bd-positive wetland. We provide evidence that Bd occurs in the environment year-round. Bd exhibited temporal and spatial heterogeneity in density, but did not exhibit seasonality in occupancy. Bd was detected in all months, typically at less than 100 zoospores per liter. The highest density observed was approximately 3 million zoospores per liter. We detected Bd in 47% of sites sampled, but estimated that Bd occupied 61% of sites, highlighting the importance of accounting for imperfect detection. When Bd was present, there was a 95% chance of detecting it with four samples of 600 mL of water or five samples of 60 mL. Our findings provide important baseline information to advance the study of Bd disease ecology, and advance our understanding of amphibian exposure to free-living Bd in aquatic habitats over time. PMID:25222122
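
    The sampling recommendation can be checked with the standard detection-probability identity P(detect) = 1 - (1 - p)^n for an occupied site; the arithmetic below simply inverts the abstract's 95%-with-four-samples figure:

    ```python
    import numpy as np

    # With per-sample detection probability p, the chance of detecting Bd
    # at an occupied site in at least one of n water samples is
    #   P(detect) = 1 - (1 - p)**n.
    # The abstract's "95% chance with four 600 mL samples" thus implies a
    # per-sample probability of about 1 - 0.05**(1/4) ~ 0.53.
    p_per_sample = 1.0 - 0.05 ** (1.0 / 4.0)
    print(f"implied per-sample detection probability: {p_per_sample:.2f}")

    n = np.arange(1, 9)
    print(np.round(1.0 - (1.0 - p_per_sample) ** n, 3))  # P(detect) vs. n
    ```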

  3. Estimates of density, detection probability, and factors influencing detection of burrowing owls in the Mojave Desert

    USGS Publications Warehouse

    Crowe, D.E.; Longshore, K.M.

    2010-01-01

    We estimated relative abundance and density of Western Burrowing Owls (Athene cunicularia hypugaea) at two sites in the Mojave Desert (2003-2004). We made modifications to previously established Burrowing Owl survey techniques for use in desert shrublands and evaluated several factors that might influence the detection of owls. We tested the effectiveness of the call-broadcast technique for surveying this species, the efficiency of this technique at early and late breeding stages, and the effectiveness of various numbers of vocalization intervals during broadcasting sessions. Only 1 (3%) of 31 initial (new) owl responses was detected during passive-listening sessions. We found that surveying early in the nesting season was more likely to produce new owl detections compared to surveying later in the nesting season. New owls detected during each of the three vocalization intervals (each consisting of 30 sec of vocalizations followed by 30 sec of silence) of our broadcasting session were similar (37%, 40%, and 23%; n = 30). We used a combination of detection trials (sighting probability) and the double-observer method to estimate the components of detection probability, i.e., availability and perception. Availability for all sites and years, as determined by detection trials, ranged from 46.1-58.2%. Relative abundance, measured as frequency of occurrence and defined as the proportion of surveys with at least one owl, ranged from 19.2-32.0% for both sites and years. Density at our eastern Mojave Desert site was estimated at 0.09 ± 0.01 (SE) owl territories/km2 and 0.16 ± 0.02 (SE) owl territories/km2 during 2003 and 2004, respectively. In our southern Mojave Desert site, density estimates were 0.09 ± 0.02 (SE) owl territories/km2 and 0.08 ± 0.02 (SE) owl territories/km2 during 2004 and 2005, respectively. © 2010 The Raptor Research Foundation, Inc.

  4. Estimating black bear population density and genetic diversity at Tensas River, Louisiana using microsatellite DNA markers

    USGS Publications Warehouse

    Boersen, M.R.; Clark, J.D.; King, T.L.

    2003-01-01

    The Recovery Plan for the federally threatened Louisiana black bear (Ursus americanus luteolus) mandates that remnant populations be estimated and monitored. In 1999 we obtained genetic material with barbed-wire hair traps to estimate bear population size and genetic diversity at the 329-km2 Tensas River Tract, Louisiana. We constructed and monitored 122 hair traps, which produced 1,939 hair samples. Of those, we randomly selected 116 subsamples for genetic analysis and used up to 12 microsatellite DNA markers to obtain multilocus genotypes for 58 individuals. We used Program CAPTURE to compute estimates of population size using multiple mark-recapture models. The area of study was almost entirely circumscribed by agricultural land, thus the population was geographically closed. Also, study-area boundaries were biologically discrete, enabling us to accurately estimate population density. Using model Chao Mh to account for possible effects of individual heterogeneity in capture probabilities, we estimated the population size to be 119 (SE = 29.4) bears, or 0.36 bears/km2. We were forced to examine a substantial number of loci to differentiate between some individuals because of low genetic variation. Despite the probable introduction of genes from Minnesota bears in the 1960s, the isolated population at Tensas exhibited characteristics consistent with inbreeding and genetic drift. Consequently, the effective population size at Tensas may be as few as 32, which warrants continued monitoring or possibly genetic augmentation.

  5. Monitoring landscape metrics by point sampling: accuracy in estimating Shannon's diversity and edge density.

    PubMed

    Ramezani, Habib; Holm, Sören; Allard, Anna; Ståhl, Göran

    2010-05-01

    Environmental monitoring of landscapes is of increasing interest. To quantify landscape patterns, a number of metrics are used, of which Shannon's diversity, edge length, and density are studied here. As an alternative to complete mapping, point sampling was applied to estimate the metrics for already mapped landscapes selected from the National Inventory of Landscapes in Sweden (NILS). Monte-Carlo simulation was applied to study the performance of different designs. Random and systematic samplings were applied for four sample sizes and five buffer widths. The latter feature was relevant for edge length, since length was estimated through the number of points falling in buffer areas around edges. In addition, two landscape complexities were tested by applying two classification schemes with seven or 20 land cover classes to the NILS data. As expected, the root mean square error (RMSE) of the estimators decreased with increasing sample size. The estimators of both metrics were slightly biased, but the bias of Shannon's diversity estimator was shown to decrease when sample size increased. In the edge length case, an increasing buffer width resulted in larger bias due to the increased impact of boundary conditions; this effect was shown to be independent of sample size. However, we also developed adjusted estimators that eliminate the bias of the edge length estimator. The rates of decrease of RMSE with increasing sample size and buffer width were quantified by a regression model. Finally, indicative cost-accuracy relationships were derived showing that point sampling could be a competitive alternative to complete wall-to-wall mapping. PMID:19415517
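
    A toy Monte-Carlo version of the point-sampling idea for Shannon's diversity, on an invented categorical grid rather than NILS data:

    ```python
    import numpy as np

    def shannon(proportions):
        """Shannon's diversity H = -sum(p_i * ln p_i) over non-empty classes."""
        p = proportions[proportions > 0]
        return -np.sum(p * np.log(p))

    # Synthetic "mapped landscape": a 500 x 500 grid of 7 land-cover classes.
    rng = np.random.default_rng(5)
    landscape = rng.choice(7, size=(500, 500),
                           p=[0.4, 0.2, 0.15, 0.1, 0.08, 0.05, 0.02])

    # True metric from the complete map.
    true_H = shannon(np.bincount(landscape.ravel()) / landscape.size)

    # Point-sampling estimate: class proportions at m random points.
    m = 400
    rows, cols = rng.integers(0, 500, m), rng.integers(0, 500, m)
    sampled = landscape[rows, cols]
    est_H = shannon(np.bincount(sampled, minlength=7) / m)
    print(f"true H = {true_H:.3f}, point-sample estimate = {est_H:.3f}")
    ```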

  6. Wavelet series method for reconstruction and spectral estimation of laser Doppler velocimetry data

    NASA Astrophysics Data System (ADS)

    Jaunet, Vincent; Collin, Erwan; Bonnet, Jean-Paul

    2012-01-01

    Many techniques have been developed to obtain the spectral density function from randomly sampled data, such as the computation of a slotted autocovariance function. Nevertheless, one may be interested in obtaining more information from laser Doppler signals than spectral content alone, using more or less complex computations that can be easily conducted with an evenly sampled signal. That is the reason why reconstructing an evenly sampled signal from the original LDV data is of interest. The ability of a wavelet-based technique to reconstruct the signal with respect to the statistical properties of the original one is explored, and the spectral content of the reconstructed signal is given and compared with the estimated spectral density function obtained through the classical slotting technique. Furthermore, LDV signals taken from a screeching jet are reconstructed in order to perform spectral and bispectral analysis, showing the ability of the technique to recover accurate information with only a few LDV samples.

  7. A maximum volume density estimator generalised over a proper motion limited sample

    E-print Network

    Lam, M C; Hambly, N C

    2015-01-01

    The traditional Schmidt density estimator has been proven to be unbiased and effective in a magnitude limited sample. Previously, efforts have been made to generalise it for populations with non-uniform density and proper motion limited cases. This work shows that the then-good assumptions for a proper motion limited sample are no longer sufficient to cope with modern data. Populations with larger differences in the kinematics as compared to the Local Standard of Rest are most severely affected. We show that this systematic bias can be removed by treating the discovery fraction as inseparable from the generalised maximum volume integrand. The treatment can be applied to any proper motion limited sample with good knowledge of the kinematics. This work demonstrates the method through application to a mock catalogue of a white dwarf-only solar neighbourhood for various scenarios and compares it against the traditional treatment using a survey with Pan-STARRS-like characteristics.

  8. A maximum volume density estimator generalized over a proper motion-limited sample

    NASA Astrophysics Data System (ADS)

    Lam, Marco C.; Rowell, Nicholas; Hambly, Nigel C.

    2015-07-01

    The traditional Schmidt density estimator has been proven to be unbiased and effective in a magnitude-limited sample. Previously, efforts have been made to generalize it for populations with non-uniform density and proper motion-limited cases. This work shows that the then-good assumptions for a proper motion-limited sample are no longer sufficient to cope with modern data. Populations with larger differences in the kinematics as compared to the local standard of rest are most severely affected. We show that this systematic bias can be removed by treating the discovery fraction as inseparable from the generalized maximum volume integrand. The treatment can be applied to any proper motion-limited sample with good knowledge of the kinematics. This work demonstrates the method through application to a mock catalogue of a white dwarf-only solar neighbourhood for various scenarios and compares it against the traditional treatment using a survey with Pan-STARRS-like characteristics.
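
    For reference, a minimal sketch of the classical Schmidt 1/Vmax estimator that both versions of this work generalize: each object contributes the reciprocal of the maximum volume within which it would still satisfy the survey limits. The magnitude limit, solid angle, and sample below are illustrative assumptions; the paper's correction additionally folds the proper-motion discovery fraction into the volume integrand, which this sketch omits.

    ```python
    import numpy as np

    # Classical Schmidt 1/Vmax estimator for a purely magnitude-limited
    # sample (no proper-motion limit); all values are illustrative.
    m_lim = 21.0                  # assumed survey faint magnitude limit
    omega = 0.5                   # assumed survey solid angle, steradians

    # Hypothetical sample: absolute magnitudes of detected white dwarfs.
    M_abs = np.array([12.0, 13.5, 14.2, 15.0, 15.8])

    d_max_pc = 10 ** ((m_lim - M_abs + 5.0) / 5.0)   # max detection distance
    v_max = (omega / 3.0) * d_max_pc ** 3            # max accessible volume, pc^3

    density = np.sum(1.0 / v_max)                    # objects per cubic parsec
    print(f"space density ~ {density:.2e} pc^-3")
    ```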

  9. Estimation of dislocation density from precession electron diffraction data using the Nye tensor.

    PubMed

    Leff, A C; Weinberger, C R; Taheri, M L

    2015-06-01

    The Nye tensor offers a means to estimate the geometrically necessary dislocation density of a crystalline sample based on measurements of the orientation changes within individual crystal grains. In this paper, the Nye tensor theory is applied to precession electron diffraction automated crystallographic orientation mapping (PED-ACOM) data acquired using a transmission electron microscope (TEM). The resulting dislocation density values are mapped in order to visualize the dislocation structures present in a quantitative manner. These density maps are compared with other related methods of approximating local strain dependencies in dislocation-based microstructural transitions from orientation data. The effect of acquisition parameters on density measurements is examined. By decreasing the step size and spot size during data acquisition, an increasing fraction of the dislocation content becomes accessible. Finally, the method described herein is applied to the measurement of dislocation emission during in situ annealing of Cu in TEM in order to demonstrate the utility of the technique for characterizing microstructural dynamics. PMID:25697461

  10. Bayesian semiparametric power spectral density estimation in gravitational wave data analysis

    E-print Network

    Edwards, Matthew C; Christensen, Nelson

    2015-01-01

    The standard noise model in gravitational wave (GW) data analysis assumes detector noise is stationary and Gaussian distributed, with a known power spectral density (PSD) that is usually estimated using clean off-source data. Real GW data often depart from these assumptions, and misspecified parametric models of the PSD could result in misleading inferences. We propose a Bayesian semiparametric approach to improve this. We use a nonparametric Bernstein polynomial prior on the PSD, with weights attained via a Dirichlet process distribution, and update this using the Whittle likelihood. Posterior samples are obtained using a Metropolis-within-Gibbs sampler. We simultaneously estimate the reconstruction parameters of a rotating core collapse supernova GW burst that has been embedded in simulated Advanced LIGO noise. We also discuss an approach to deal with non-stationary data by breaking longer data streams into smaller and locally stationary components.

  11. Bayesian semiparametric power spectral density estimation in gravitational wave data analysis

    E-print Network

    Matthew C. Edwards; Renate Meyer; Nelson Christensen

    2015-05-31

    The standard noise model in gravitational wave (GW) data analysis assumes detector noise is stationary and Gaussian distributed, with a known power spectral density (PSD) that is usually estimated using clean off-source data. Real GW data often depart from these assumptions, and misspecified parametric models of the PSD could result in misleading inferences. We propose a Bayesian semiparametric approach to improve this. We use a nonparametric Bernstein polynomial prior on the PSD, with weights attained via a Dirichlet process distribution, and update this using the Whittle likelihood. Posterior samples are obtained using a Metropolis-within-Gibbs sampler. We simultaneously estimate the reconstruction parameters of a rotating core collapse supernova GW burst that has been embedded in simulated Advanced LIGO noise. We also discuss an approach to deal with non-stationary data by breaking longer data streams into smaller and locally stationary components.
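
    The Whittle likelihood that updates the PSD prior is simple to write down: up to a constant, ln L is approximately -sum_k [ln S(f_k) + I(f_k)/S(f_k)], where I is the periodogram and S the modelled PSD. The sketch below evaluates it for a white-noise series under a correct and a misspecified PSD model; both models are stand-ins, not the paper's Bernstein-polynomial construction.

    ```python
    import numpy as np

    # Minimal sketch of the Whittle log-likelihood used to score a PSD model
    # against data; the "model PSDs" below are illustrative stand-ins.

    def whittle_loglik(x, psd_model):
        """x: real time series; psd_model: PSD evaluated at rfft frequencies."""
        n = len(x)
        periodogram = np.abs(np.fft.rfft(x)) ** 2 / n
        # drop the DC and Nyquist bins, as is common for the Whittle form
        I = periodogram[1:-1]
        S = psd_model[1:-1]
        return -np.sum(np.log(S) + I / S)

    rng = np.random.default_rng(0)
    x = rng.standard_normal(1024)                 # white noise example
    freqs = np.fft.rfftfreq(1024)
    flat = np.ones_like(freqs)                    # correct model: flat PSD
    tilted = 1.0 / (0.1 + freqs)                  # misspecified model
    print(whittle_loglik(x, flat) > whittle_loglik(x, tilted))  # True, usually
    ```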

  12. Axonal and dendritic density field estimation from incomplete single-slice neuronal reconstructions

    PubMed Central

    van Pelt, Jaap; van Ooyen, Arjen; Uylings, Harry B. M.

    2014-01-01

    Neuronal information processing in cortical networks critically depends on the organization of synaptic connectivity. Synaptic connections can form when axons and dendrites come in close proximity to each other. The spatial innervation of neuronal arborizations can be described by their axonal and dendritic density fields. Recently we showed that potential locations of synapses between neurons can be estimated from their overlapping axonal and dendritic density fields. However, deriving density fields from single-slice neuronal reconstructions is hampered by incompleteness because of cut branches. Here, we describe a method for recovering the lost axonal and dendritic mass. This so-called completion method is based on an estimation of the mass inside the slice and an extrapolation to the space outside the slice, assuming axial symmetry in the mass distribution. We validated the method using a set of neurons generated with our NETMORPH simulator. The model-generated neurons were artificially sliced and subsequently recovered by the completion method. Depending on slice thickness and arbor extent, branches that have lost their outside parents (orphan branches) may occur inside the slice. Not connected anymore to the contiguous structure of the sliced neuron, orphan branches result in an underestimation of neurite mass. For 300 μm thick slices, however, the validation showed a full recovery of dendritic and an almost full recovery of axonal mass. The completion method was applied to three experimental data sets of reconstructed rat cortical L2/3 pyramidal neurons. The results showed that in 300 μm thick slices intracortical axons lost about 50% and dendrites about 16% of their mass. The completion method can be applied to single-slice reconstructions as long as axial symmetry can be assumed in the mass distribution. This opens up the possibility of using incomplete neuronal reconstructions from open-access databases to determine population mean mass density fields. PMID:25009472

  13. Constrained Kalman Filtering Via Density Function Truncation for Turbofan Engine Health Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Dan; Simon, Donald L.

    2006-01-01

    Kalman filters are often used to estimate the state variables of a dynamic system. However, in the application of Kalman filters some known signal information is often either ignored or dealt with heuristically. For instance, state variable constraints (which may be based on physical considerations) are often neglected because they do not fit easily into the structure of the Kalman filter. This paper develops an analytic method of incorporating state variable inequality constraints in the Kalman filter. The resultant filter truncates the PDF (probability density function) of the Kalman filter estimate at the known constraints and then computes the constrained filter estimate as the mean of the truncated PDF. The incorporation of state variable constraints increases the computational effort of the filter but significantly improves its estimation accuracy. The improvement is demonstrated via simulation results obtained from a turbofan engine model. The turbofan engine model contains 3 state variables, 11 measurements, and 10 component health parameters. It is also shown that the truncated Kalman filter may be a more accurate way of incorporating inequality constraints than other constrained filters (e.g., the projection approach to constrained filtering).
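
    The core of the truncation step is a standard truncated-normal moment calculation. The one-dimensional sketch below, with illustrative numbers, replaces an unconstrained estimate by the mean of its Gaussian PDF truncated to the constraint interval; the paper applies the same idea to the multivariate filter state.

    ```python
    from math import sqrt, pi, exp, erf

    # One-dimensional sketch of the PDF-truncation idea: replace the Kalman
    # estimate by the mean of its Gaussian PDF truncated to [a, b].
    # Standard truncated-normal formulas; numbers are illustrative.

    def phi(z):                      # standard normal pdf
        return exp(-0.5 * z * z) / sqrt(2.0 * pi)

    def Phi(z):                      # standard normal cdf
        return 0.5 * (1.0 + erf(z / sqrt(2.0)))

    def truncate_estimate(mu, sigma, a, b):
        alpha, beta = (a - mu) / sigma, (b - mu) / sigma
        z = Phi(beta) - Phi(alpha)   # probability mass inside the constraints
        return mu + sigma * (phi(alpha) - phi(beta)) / z

    # Unconstrained filter says -0.3, but the parameter must lie in [0, 1].
    print(truncate_estimate(mu=-0.3, sigma=0.5, a=0.0, b=1.0))  # ~0.29
    ```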

  14. Estimates of Leaf Vein Density Are Scale Dependent

    PubMed Central

    Price, Charles A.; Munro, Peter R.T.; Weitz, Joshua S.

    2014-01-01

    Leaf vein density (LVD) has garnered considerable attention of late, with numerous studies linking it to the physiology, ecology, and evolution of land plants. Despite this increased attention, little consideration has been given to the effects of measurement methods on estimation of LVD. Here, we focus on the relationship between measurement methods and estimates of LVD. We examine the dependence of LVD on magnification, field of view (FOV), and image resolution. We first show that estimates of LVD increase with increasing image magnification and resolution. We then demonstrate that estimates of LVD are higher with higher variance at small FOV, approaching asymptotic values as the FOV increases. We demonstrate that these effects arise due to three primary factors: (1) the tradeoff between FOV and magnification; (2) geometric effects of lattices at small scales; and (3) the hierarchical nature of leaf vein networks. Our results help to explain differences in previously published studies and highlight the importance of using consistent magnification and scale, when possible, when comparing LVD and other quantitative measures of venation structure across leaves. PMID:24259686

  15. Design, implementation and comparison of two wavelet based methods for the detection of broken rotor bars in three phase induction motors

    Microsoft Academic Search

    R. Salehi Arashloo; A. Jalilian

    2010-01-01

    Preventive maintenance of induction motors has an important role in reducing expensive shutdowns due to motor faults. Motor current signature analysis (MCSA) provides a simple way to evaluate the health of a machine. In this paper, two different wavelet-based methods are proposed to detect broken rotor bar faults in three phase induction motors. The methods are in addition examined
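
    A hedged sketch of the general wavelet-based MCSA idea described above, using the PyWavelets package: decompose a simulated stator current into sub-bands and compare their energies between a healthy case and one carrying a synthetic broken-rotor-bar sideband. The sampling rate, slip, and sideband amplitude are illustrative assumptions, not values from the paper.

    ```python
    import numpy as np
    import pywt  # PyWavelets

    # Simulated stator current, healthy vs. faulty; broken rotor bars add
    # sidebands at (1 +/- 2s) * f0, where s is the slip (assumed 0.03 here).
    fs, f0, s = 1000.0, 50.0, 0.03
    t = np.arange(0, 2.0, 1.0 / fs)

    healthy = np.sin(2 * np.pi * f0 * t)
    faulty = healthy + 0.05 * np.sin(2 * np.pi * (1 - 2 * s) * f0 * t)

    # Wavelet decomposition; compare energies of the detail sub-bands.
    for name, sig in (("healthy", healthy), ("faulty", faulty)):
        coeffs = pywt.wavedec(sig, "db6", level=5)
        energies = [float(np.sum(c ** 2)) for c in coeffs[1:]]  # detail bands
        print(name, [f"{e:.2f}" for e in energies])
    ```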

  16. Wavelet-Based Multiresolution Analysis of Irregular Surface Meshes

    E-print Network

    Paris-Sud XI, Université de

    Sébastien Valette and Rémy Prost (CREATIS, Lyon, France). Abstract: This paper extends Lounsbery's wavelet-based multiresolution analysis theory to irregular surface meshes. Keywords: meshes, wavelets, multiresolution.

  17. An assessment study of the wavelet-based index of magnetic storm activity (WISA) and its comparison to the Dst index

    E-print Network

    Kokoszka, Piotr

    An assessment study of the wavelet-based index of magnetic storm activity (WISA) and its comparison to the Dst index, by Zhonghua Xu, Lie Zhu, Jan Sojka, et al. Keywords: geomagnetic indices, ring current, magnetic storms, wavelet transform. Abstract: A wavelet-based index of storm activity (WISA) has been recently developed [Jach, A., Kokoszka, P., Sojka, L., Zhu, L., 2006].

  18. Wavelet-based compression with ROI coding support for mobile access to DICOM images over heterogeneous radio networks.

    PubMed

    Maglogiannis, Ilias; Doukas, Charalampos; Kormentzas, George; Pliakas, Thomas

    2009-07-01

    Most commercial medical image viewers do not provide scalability in image compression and/or region of interest (ROI) encoding/decoding. Furthermore, these viewers do not take into consideration the special requirements and needs of a heterogeneous radio setting composed of different access technologies [e.g., general packet radio services (GPRS)/universal mobile telecommunications system (UMTS), wireless local area network (WLAN), and digital video broadcasting (DVB-H)]. This paper discusses a medical application that contains a viewer for digital imaging and communications in medicine (DICOM) images as a core module. The proposed application enables scalable wavelet-based compression, retrieval, and decompression of DICOM medical images and also supports ROI coding/decoding. Furthermore, the presented application is appropriate for use by mobile devices operating in heterogeneous radio settings. In this context, performance issues regarding the usage of the proposed application in the case of a prototype heterogeneous system setup are also discussed. PMID:19586812

  19. Heart Rate Variability and Wavelet-based Studies on ECG Signals from Smokers and Non-smokers

    NASA Astrophysics Data System (ADS)

    Pal, K.; Goel, R.; Champaty, B.; Samantray, S.; Tibarewala, D. N.

    2013-12-01

    The current study deals with heart rate variability (HRV) and wavelet-based ECG signal analysis of smokers and non-smokers. The HRV results indicated a dominance of sympathetic nervous system activity in smokers. The heart rate was found to be higher in smokers than in non-smokers (p < 0.05). The frequency domain analysis showed an increase in the LF and LF/HF components with a subsequent decrease in the HF component. The HRV features were analyzed for classification of the smokers from the non-smokers. The results indicated that when the RMSSD, SD1 and RR-mean features were used concurrently, a classification efficiency of >90% was achieved. The wavelet decomposition of the ECG signal was done using the Daubechies (db6) wavelet family. No difference was observed between the smokers and non-smokers, which suggests that smoking does not affect the conduction pathway of the heart.

  20. A Recursive Wavelet-based Strategy for Real-Time Cochlear Implant Speech Processing on PDA Platforms

    PubMed Central

    Gopalakrishna, Vanishree; Kehtarnavaz, Nasser; Loizou, Philipos C.

    2011-01-01

    This paper presents a wavelet-based speech coding strategy for cochlear implants. In addition, it describes the real-time implementation of this strategy on a PDA platform. Three wavelet packet decomposition tree structures are considered and their performance in terms of computational complexity, spectral leakage, fixed-point accuracy, and real-time processing are compared to other commonly used strategies in cochlear implants. A real-time mechanism is introduced for updating the wavelet coefficients recursively. It is shown that the proposed strategy achieves higher analysis rates than the existing strategies while being able to run in real-time on a PDA platform. In addition, it is shown that this strategy leads to a lower amount of spectral leakage. The PDA implementation is made interactive to allow users to easily manipulate the parameters involved and study their effects. PMID:20403778

  1. A comparison of spectral decorrelation techniques and performance evaluation metrics for a wavelet-based, multispectral data compression algorithm

    NASA Technical Reports Server (NTRS)

    Matic, Roy M.; Mosley, Judith I.

    1994-01-01

    Future space-based, remote sensing systems will have data transmission requirements that exceed available downlink capacity, necessitating the use of lossy compression techniques for multispectral data. In this paper, we describe several algorithms for lossy compression of multispectral data which combine spectral decorrelation techniques with an adaptive, wavelet-based, image compression algorithm to exploit both spectral and spatial correlation. We compare the performance of several different spectral decorrelation techniques including wavelet transformation in the spectral dimension. The performance of each technique is evaluated at compression ratios ranging from 4:1 to 16:1. Performance measures used are visual examination, conventional distortion measures, and multispectral classification results. We also introduce a family of distortion metrics that are designed to quantify and predict the effect of compression artifacts on multispectral classification of the reconstructed data.

  2. A wavelet-based single-view reconstruction approach for cone beam x-ray luminescence tomography imaging

    PubMed Central

    Liu, Xin; Wang, Hongkai; Xu, Mantao; Nie, Shengdong; Lu, Hongbing

    2014-01-01

    Single-view x-ray luminescence computed tomography (XLCT) imaging has a short data collection time that allows fast, non-invasive resolution of the three-dimensional (3-D) distribution of x-ray-excitable nanophosphors within small animals in vivo. However, single-view reconstruction suffers from a severely ill-posed problem because data from only one angle are used in the reconstruction. To alleviate the ill-posedness, in this paper we propose a wavelet-based reconstruction approach, which is achieved by applying a wavelet transformation to the acquired single-view measurements. To evaluate the performance of the proposed method, an in vivo experiment was performed on a cone beam XLCT imaging system. The experimental results demonstrate that the proposed method can not only use the full set of measurements produced by the CCD, but also accelerate image reconstruction while preserving the spatial resolution of the reconstruction. Hence, it is suitable for dynamic XLCT imaging studies. PMID:25426315

  3. A shift-invariant wavelet-based method to quantify and identify slight power quality disturbances in power systems

    NASA Astrophysics Data System (ADS)

    Chen, Xiangxun

    2003-11-01

    Amplitude deviation (AD), frequency deviation (FD) and phase deviation (PD) are important components of power quality disturbance (PQD). To analyze PQD deeply, this paper introduces a wavelet-based method for separating slight AD, FD and PD from a combined PQD, then quantifying and identifying them. The method is based on the fact that a linear-phase complex wavelet necessarily has an even real part and an odd imaginary part, or vice versa. The distinctive characteristics of the method are: a complex biorthogonal wavelet with the shortest smoothing filter (Haar filter), a shift-invariant wavelet transform (WT) at a few scales, simple relationships between the WT coefficients and the magnitudes of AD, FD and PD, and a simple binary feature vector with a binary-decimal conversion identifying process. These make the method simple, correct and fast.

  4. A wavelet-based single-view reconstruction approach for cone beam x-ray luminescence tomography imaging.

    PubMed

    Liu, Xin; Wang, Hongkai; Xu, Mantao; Nie, Shengdong; Lu, Hongbing

    2014-11-01

    Single-view x-ray luminescence computed tomography (XLCT) imaging has a short data collection time that allows fast, non-invasive resolution of the three-dimensional (3-D) distribution of x-ray-excitable nanophosphors within small animals in vivo. However, single-view reconstruction suffers from a severely ill-posed problem because data from only one angle are used in the reconstruction. To alleviate the ill-posedness, in this paper we propose a wavelet-based reconstruction approach, which is achieved by applying a wavelet transformation to the acquired single-view measurements. To evaluate the performance of the proposed method, an in vivo experiment was performed on a cone beam XLCT imaging system. The experimental results demonstrate that the proposed method can not only use the full set of measurements produced by the CCD, but also accelerate image reconstruction while preserving the spatial resolution of the reconstruction. Hence, it is suitable for dynamic XLCT imaging studies. PMID:25426315

  5. Accuracy of estimated geometric parameters of trees depending on the LIDAR data density

    NASA Astrophysics Data System (ADS)

    Hadas, Edyta; Estornell, Javier

    2015-04-01

    The estimation of dendrometric variables has become important for spatial planning and agriculture projects. Because classical field measurements are time consuming and inefficient, airborne LiDAR (Light Detection and Ranging) measurements are successfully used in this area. Point clouds acquired for relatively large areas allow the structure of forestry and agriculture areas and the geometrical parameters of individual trees to be determined. In this study two LiDAR datasets with different densities were used: a sparse one with an average density of 0.5 pt/m2 and a dense one with a density of 4 pt/m2. 25 olive trees were selected, and field measurements of tree height, crown bottom height, length of crown diameters and tree position were performed. To determine the tree geometric parameters from LiDAR data, two independent strategies were developed that utilize the ArcGIS, ENVI and FUSION software. Strategy a) was based on slicing the canopy surface model (CSM) at 0.5 m height, and in strategy b) minimum bounding polygons were created as tree crown areas around each detected tree centroid. The individual steps were designed so that they can also be applied in automatic processing. To assess the performance of each strategy with both point clouds, the differences between the measured and estimated geometric parameters of the trees were analyzed. As expected, tree heights were underestimated for both strategies (RMSE=0.7m for the dense dataset and RMSE=1.5m for the sparse one) and crown bottom heights were overestimated (RMSE=0.4m and RMSE=0.7m for the dense and sparse datasets respectively). For the dense dataset, strategy b) determined crown diameters more accurately (RMSE=0.5m) than strategy a) (RMSE=0.8m), and for the sparse dataset, only strategy a) proved adequate (RMSE=1.0m). The dependence of each strategy's accuracy on tree size was also examined. For the dense dataset, the larger the tree (height or longer crown diameter), the higher the error of the estimated tree height; for the sparse dataset, the larger the tree, the higher the error of the estimated crown bottom height. Finally, the spatial distribution of points inside the tree crown was analyzed by creating a normalized tree crown. This confirmed a high concentration of LiDAR points in the central part of the crown.

  6. Delaunay Tessellation Field Estimator analysis of the PSCz local Universe: density field and cosmic flow

    NASA Astrophysics Data System (ADS)

    Romano-Díaz, Emilio; van de Weygaert, Rien

    2007-11-01

    We apply the Delaunay Tessellation Field Estimator (DTFE) to reconstruct and analyse the matter distribution and cosmic velocity flows in the local Universe on the basis of the PSCz galaxy survey. The prime objective of this study is the production of optimal resolution 3D maps of the volume-weighted velocity and density fields throughout the nearby universe, the basis for a detailed study of the structure and dynamics of the cosmic web at each level probed by the underlying galaxy sample. Fully volume-covering 3D maps of the density and (volume-weighted) velocity fields in the cosmic vicinity, out to a distance of 150h-1Mpc, are presented. Based on the Voronoi and Delaunay tessellations defined by the spatial galaxy sample, DTFE involves estimating density values on the basis of the volume of the related Delaunay tetrahedra and the subsequent use of the Delaunay tessellation as a natural multidimensional (linear) interpolation grid for the corresponding density and velocity fields throughout the sample volume. The linearized model of the spatial galaxy distribution and the corresponding peculiar velocities of the PSCz galaxy sample, produced by Branchini et al., forms the input sample for the DTFE study. The DTFE maps reproduce the high-density supercluster regions in optimal detail, both their internal structure as well as their elongated or flattened shape. The corresponding velocity flows trace the bulk and shear flows marking the region extending from the Pisces-Perseus supercluster, via the Local Supercluster, towards the Hydra-Centaurus and the Shapley concentration. The most outstanding and unique feature of the DTFE maps is the sharply defined radial outflow regions in and around underdense voids, marking the dynamical importance of voids in the local Universe. The maximum expansion rate of voids defines a sharp cut-off in the DTFE velocity divergence probability distribution function. We found that on the basis of this cut-off DTFE manages to consistently reproduce the value of Ωm ~ 0.35 underlying the linearized velocity data set.

  7. The subauroral electron density trough: Comparison between satellite observations and IRI-2007 model estimates

    NASA Astrophysics Data System (ADS)

    Xiong, C.; Lühr, H.; Ma, S. Y.

    2013-02-01

    We compare electron density predictions of the International Reference Ionosphere (IRI-2007) model with in situ measurements from the CHAMP and GRACE satellites for the years 2005 to 2010 over the subauroral regions. Electron densities between 58° and 68° Mlat are considered. The trough region Ne peaks during local summer and reaches its minimum during local winter. Around -100°E and 60°E, two sectors of enhanced electron density can be seen in both hemispheres during all three seasons, attributed to electron density extending from middle latitudes into the trough region. From 2005 to the beginning of 2010, the model overestimates the trough region Ne by 20% on average, and the decrease of Ne in this region during the last solar minimum can also be seen. In the southern hemisphere, the model predictions are quite consistent with the observations during all three seasons, while the large difference between observations and model estimates in the northern hemisphere implies that the IRI-2007 model needs significant improvement to better predict the trough region there.

  8. A Bayesian Hierarchical Model for Estimation of Abundance and Spatial Density of Aedes aegypti

    PubMed Central

    Villela, Daniel A. M.; Codeço, Claudia T.; Figueiredo, Felipe; Garcia, Gabriela A.; Maciel-de-Freitas, Rafael; Struchiner, Claudio J.

    2015-01-01

    Strategies to minimize dengue transmission commonly rely on vector control, which aims to maintain Ae. aegypti density below a theoretical threshold. Mosquito abundance is traditionally estimated from mark-release-recapture (MRR) experiments, which lack proper analysis regarding accurate vector spatial distribution and population density. Recently proposed strategies to control vector-borne diseases involve replacing the susceptible wild population by genetically modified individuals refractory to infection by the pathogen. Accurate measurements of mosquito abundance in time and space are required to optimize the success of such interventions. In this paper, we present a hierarchical probabilistic model for the estimation of population abundance and spatial distribution from typical mosquito MRR experiments, with direct application to the planning of these new control strategies. We perform a Bayesian analysis using the model and data from two MRR experiments performed in a neighborhood of Rio de Janeiro, Brazil, during both low- and high-dengue transmission seasons. The hierarchical model indicates that mosquito spatial distribution is clustered during the winter (0.99 mosquitoes/premise, 95% CI: 0.80–1.23) and more homogeneous during the high abundance period (5.2 mosquitoes/premise, 95% CI: 4.3–5.9). The hierarchical model also performed better than the commonly used Fisher-Ford method when applied to simulated data. The proposed model provides a formal treatment of the sources of uncertainty associated with the estimation of mosquito abundance imposed by the sampling design. Our approach is useful in strategies such as population suppression or the displacement of wild vector populations by refractory Wolbachia-infected mosquitoes, since the invasion dynamics have been shown to follow threshold conditions dictated by mosquito abundance. The presence of spatially distributed abundance hotspots is also formally addressed under this modeling framework and its knowledge is deemed crucial to predict the fate of transmission control strategies based on the replacement of vector populations. PMID:25906323

  9. How Does Spatial Study Design Influence Density Estimates from Spatial Capture-Recapture Models?

    PubMed Central

    Sollmann, Rahel; Gardner, Beth; Belant, Jerrold L.

    2012-01-01

    When estimating population density from data collected on non-invasive detector arrays, recently developed spatial capture-recapture (SCR) models present an advance over non-spatial models by accounting for individual movement. While these models should be more robust to changes in trapping designs, they have not been well tested. Here we investigate how the spatial arrangement and size of the trapping array influence parameter estimates for SCR models. We analysed black bear data collected with 123 hair snares using an SCR model accounting for differences in detection and movement between sexes and across trapping occasions. To see how the size of the trap array and trap dispersion influence parameter estimates, we repeated the analysis for data from subsets of traps: 50% chosen at random, 50% in the centre of the array, and 20% in the south of the array. Additionally, we simulated and analysed data under a suite of trap designs and home range sizes. In the black bear study, we found that results were similar across trap arrays, except when only 20% of the array was used. Black bear density was approximately 10 individuals per 100 km2. Our simulation study showed that SCR models performed well as long as the extent of the trap array was similar to or larger than the extent of individual movement during the study period, and movement was at least half the distance between traps. SCR models performed well across a range of spatial trap setups and animal movements. Contrary to non-spatial capture-recapture models, they do not require the trapping grid to cover an area several times the average home range of the studied species. This renders SCR models more appropriate for the study of wide-ranging mammals and more flexible for designing studies targeting multiple species. PMID:22539949
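
    The movement-aware detection model at the heart of SCR is compact: detection probability decays with distance from an individual's activity centre, commonly via a half-normal form p(x) = p0 exp(-d(x,s)^2 / (2 sigma^2)). The toy forward simulation below, with illustrative parameters rather than the study's estimates, draws one capture history on a 5x5 trap grid.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Half-normal SCR detection model: an individual with activity centre s
    # is detected at trap x with probability p0 * exp(-d(x, s)^2 / (2 sigma^2)).
    # p0 and sigma below are illustrative, not fitted values.
    p0, sigma = 0.2, 1.5
    traps = np.array([[i, j] for i in range(5) for j in range(5)], float)

    centre = rng.uniform(0, 4, size=2)          # one bear's activity centre
    d2 = np.sum((traps - centre) ** 2, axis=1)  # squared distances to traps
    p = p0 * np.exp(-d2 / (2 * sigma ** 2))

    captures = rng.random(traps.shape[0]) < p   # one occasion's capture history
    print(f"centre={centre.round(2)}, detected at {captures.sum()} of 25 traps")
    ```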

  10. Integration of Self-Organizing Map (SOM) and Kernel Density Estimation (KDE) for network intrusion detection

    NASA Astrophysics Data System (ADS)

    Cao, Yuan; He, Haibo; Man, Hong; Shen, Xiaoping

    2009-09-01

    This paper proposes an approach integrating the self-organizing map (SOM) and kernel density estimation (KDE) techniques for an anomaly-based network intrusion detection (ABNID) system to monitor network traffic and capture potential abnormal behaviors. With the continuous development of network technology, information security has become a major concern for cyber system research. In modern net-centric and tactical warfare networks, it is even more critical to provide real-time protection for the availability, confidentiality, and integrity of networked information. To this end, in this work we propose to explore the learning capabilities of SOM and integrate it with KDE for network intrusion detection. KDE is used to estimate the distributions of the observed random variables that describe the network system and to determine whether the network traffic is normal or abnormal. Meanwhile, the learning and clustering capabilities of SOM are employed to obtain well-defined data clusters that reduce the computational cost of the KDE. The principle of learning in SOM is to self-organize the network of neurons to seek similar properties for certain input patterns. Therefore, SOM can form a compact approximation of the distribution of the input space, reduce the number of terms in the kernel density estimator, and thus improve the efficiency of intrusion detection. We test the proposed algorithm on real-world data sets obtained from the Integrated Network Based Ohio University's Network Detective Service (INBOUNDS) system to show the effectiveness and efficiency of this method.
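
    For concreteness, a minimal one-dimensional Gaussian KDE, the density-estimation half of the proposed SOM+KDE scheme. In the paper the kernel centres would be the SOM neurons rather than the raw observations used in this sketch, which is exactly what reduces the number of kernel terms; the data below are synthetic.

    ```python
    import numpy as np

    # Minimal 1-D Gaussian kernel density estimate with bandwidth h.
    def gaussian_kde(x_eval, centres, h):
        z = (x_eval[:, None] - centres[None, :]) / h
        return np.exp(-0.5 * z ** 2).sum(axis=1) / (
            len(centres) * h * np.sqrt(2 * np.pi))

    rng = np.random.default_rng(0)
    # Synthetic bimodal "traffic feature" data, stand-in for real observations.
    data = np.concatenate([rng.normal(-2, 0.5, 400), rng.normal(1, 1.0, 600)])
    grid = np.linspace(-4, 4, 9)
    print(np.round(gaussian_kde(grid, data, h=0.3), 3))
    ```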

  11. Seismic Hazard Analysis Using the Adaptive Kernel Density Estimation Technique for Chennai City

    NASA Astrophysics Data System (ADS)

    Ramanna, C. K.; Dodagoudar, G. R.

    2012-01-01

    The conventional method of probabilistic seismic hazard analysis (PSHA) using the Cornell-McGuire approach requires identification of homogeneous source zones as the first step. This criterion brings along many issues, and hence several alternative methods of hazard estimation have emerged in recent years. Methods such as zoneless (zone-free) estimation and modelling of the Earth's crust using numerical methods with finite element analysis have been proposed. Delineating a homogeneous source zone in regions of distributed and/or diffuse seismicity is a rather difficult task. In this study, the zone-free method, using the adaptive kernel technique for hazard estimation, is explored for regions having distributed and diffuse seismicity. Chennai city lies in such a region of low to moderate seismicity, so it is used as a case study. The adaptive kernel technique is statistically superior to the fixed kernel technique primarily because the bandwidth of the kernel is varied spatially depending on the clustering or sparseness of the epicentres. Although the fixed kernel technique has proven to work well in general density estimation cases, it fails to perform in the case of multimodal and long-tailed distributions. In such situations the adaptive kernel technique serves the purpose and is more relevant in earthquake engineering, as the activity rate probability density surface is multimodal in nature. The peak ground acceleration (PGA) obtained from all three approaches (the Cornell-McGuire approach, fixed kernel and adaptive kernel techniques) for 10% probability of exceedance in 50 years is around 0.087 g. The uniform hazard spectra (UHS) are also provided for different structural periods.
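
    A sketch of the adaptive-bandwidth idea, here in Abramson's square-root form (one common choice): a fixed-bandwidth pilot estimate assigns each point a local bandwidth that widens where epicentres are sparse and narrows where they cluster. The synthetic "epicentre" coordinates are illustrative, not a real catalogue.

    ```python
    import numpy as np

    def fixed_kde(x_eval, pts, h):
        """Fixed-bandwidth 1-D Gaussian KDE, used here as the pilot estimate."""
        z = (x_eval[:, None] - pts[None, :]) / h
        return np.exp(-0.5 * z ** 2).sum(axis=1) / (len(pts) * h * np.sqrt(2 * np.pi))

    def adaptive_kde(x_eval, pts, h):
        pilot = fixed_kde(pts, pts, h)           # pilot density at each point
        g = np.exp(np.mean(np.log(pilot)))       # geometric mean of pilot values
        h_i = h * np.sqrt(g / pilot)             # Abramson local bandwidths
        z = (x_eval[:, None] - pts[None, :]) / h_i[None, :]
        k = np.exp(-0.5 * z ** 2) / (h_i[None, :] * np.sqrt(2 * np.pi))
        return k.mean(axis=1)

    rng = np.random.default_rng(3)
    # Synthetic epicentre positions: a tight cluster plus a diffuse region.
    pts = np.concatenate([rng.normal(0, 0.2, 300), rng.normal(5, 2.0, 50)])
    print(np.round(adaptive_kde(np.linspace(-1, 9, 6), pts, h=0.3), 3))
    ```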

  12. A Bayesian Hierarchical Model for Estimation of Abundance and Spatial Density of Aedes aegypti.

    PubMed

    Villela, Daniel A M; Codeço, Claudia T; Figueiredo, Felipe; Garcia, Gabriela A; Maciel-de-Freitas, Rafael; Struchiner, Claudio J

    2015-01-01

    Strategies to minimize dengue transmission commonly rely on vector control, which aims to maintain Ae. aegypti density below a theoretical threshold. Mosquito abundance is traditionally estimated from mark-release-recapture (MRR) experiments, which lack proper analysis regarding accurate vector spatial distribution and population density. Recently proposed strategies to control vector-borne diseases involve replacing the susceptible wild population by genetically modified individuals refractory to infection by the pathogen. Accurate measurements of mosquito abundance in time and space are required to optimize the success of such interventions. In this paper, we present a hierarchical probabilistic model for the estimation of population abundance and spatial distribution from typical mosquito MRR experiments, with direct application to the planning of these new control strategies. We perform a Bayesian analysis using the model and data from two MRR experiments performed in a neighborhood of Rio de Janeiro, Brazil, during both low- and high-dengue transmission seasons. The hierarchical model indicates that mosquito spatial distribution is clustered during the winter (0.99 mosquitoes/premise, 95% CI: 0.80-1.23) and more homogeneous during the high abundance period (5.2 mosquitoes/premise, 95% CI: 4.3-5.9). The hierarchical model also performed better than the commonly used Fisher-Ford method when applied to simulated data. The proposed model provides a formal treatment of the sources of uncertainty associated with the estimation of mosquito abundance imposed by the sampling design. Our approach is useful in strategies such as population suppression or the displacement of wild vector populations by refractory Wolbachia-infected mosquitoes, since the invasion dynamics have been shown to follow threshold conditions dictated by mosquito abundance. The presence of spatially distributed abundance hotspots is also formally addressed under this modeling framework and its knowledge is deemed crucial to predict the fate of transmission control strategies based on the replacement of vector populations. PMID:25906323

  13. Comparison of breast percent density estimation from raw versus processed digital mammograms

    NASA Astrophysics Data System (ADS)

    Li, Diane; Gavenonis, Sara; Conant, Emily; Kontos, Despina

    2011-03-01

    We compared breast percent density (PD%) measures obtained from raw and post-processed digital mammographic (DM) images. Bilateral raw and post-processed medio-lateral oblique (MLO) images from 81 screening studies were retrospectively analyzed. Image acquisition was performed with a GE Healthcare DS full-field DM system. Image post-processing was performed using the PremiumViewTM algorithm (GE Healthcare). Area-based breast PD% was estimated by a radiologist using a semi-automated image thresholding technique (Cumulus, Univ. Toronto). Comparison of breast PD% between raw and post-processed DM images was performed using the Pearson correlation (r), linear regression, and Student's t-test. Intra-reader variability was assessed with a repeat read on the same dataset. Our results show that breast PD% measurements from raw and post-processed DM images have a high correlation (r=0.98, R2=0.95, p<0.001). Paired t-test comparison of breast PD% between the raw and the post-processed images showed a statistically significant difference equal to 1.2% (p = 0.006). Our results suggest that the relatively small magnitude of the absolute difference in PD% between raw and post-processed DM images is unlikely to be clinically significant in breast cancer risk stratification. Therefore, it may be feasible to use post-processed DM images for breast PD% estimation in clinical settings. Since most breast imaging clinics routinely use and store only the post-processed DM images, breast PD% estimation from post-processed data may accelerate the integration of breast density in breast cancer risk assessment models used in clinical practice.
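
    The statistical comparison reported above is easy to replicate with scipy on paired PD% readings; the synthetic values below merely stand in for the 81 paired studies and are not the authors' data.

    ```python
    import numpy as np
    from scipy import stats

    # Synthetic paired PD% readings: raw images vs. post-processed images
    # with an assumed small systematic shift, mimicking the reported setup.
    rng = np.random.default_rng(7)
    pd_raw = rng.uniform(5, 60, size=81)                 # percent density, raw
    pd_proc = pd_raw + rng.normal(1.2, 3.0, size=81)     # processed readings

    r, _ = stats.pearsonr(pd_raw, pd_proc)               # correlation
    t, p = stats.ttest_rel(pd_proc, pd_raw)              # paired t-test
    slope, intercept = np.polyfit(pd_raw, pd_proc, 1)    # linear regression
    print(f"r={r:.3f}, paired t p={p:.4f}, fit: y={slope:.2f}x+{intercept:.2f}")
    ```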

  14. SOMKE: kernel density estimation over data streams by sequences of self-organizing maps.

    PubMed

    Cao, Yuan; He, Haibo; Man, Hong

    2012-08-01

    In this paper, we propose SOMKE, a novel method for kernel density estimation (KDE) over data streams based on sequences of self-organizing maps (SOMs). In many stream data mining applications, the traditional KDE methods are infeasible because of the high computational cost, processing time, and memory requirement. To reduce the time and space complexity, we propose a SOM structure in this paper to obtain well-defined data clusters to estimate the underlying probability distributions of incoming data streams. The main idea of this paper is to build a series of SOMs over the data streams via two operations, that is, creating and merging the SOM sequences. The creation phase produces the SOM sequence entries for windows of the data, which obtains clustering information of the incoming data streams. The size of the SOM sequences can be further reduced by combining consecutive entries in the sequence based on the Kullback-Leibler divergence. Finally, the probability density functions over arbitrary time periods along the data streams can be estimated using such SOM sequences. We compare SOMKE with two other KDE methods for data streams, the M-kernel approach and the cluster kernel approach, in terms of accuracy and processing time for various stationary data streams. Furthermore, we also investigate the use of SOMKE over nonstationary (evolving) data streams, including a synthetic nonstationary data stream, a real-world financial data stream and a group of network traffic data streams. The simulation results illustrate the effectiveness and efficiency of the proposed approach. PMID:24807522

  15. Evaluation of the orbit altitude electron density estimation and its effect on the Abel inversion from radio occultation measurements

    Microsoft Academic Search

    Xinan Yue; William S. Schreiner; Christian Rocken; Ying-Hwa Kuo

    2011-01-01

    In this paper, the observations from CHAMP radio occultation (RO) and Planar Langmuir Probe (PLP) during 2002–2008 and Constellation Observing System for Meteorology Ionosphere and Climate (COSMIC) observations during 2007.090–2007.120 are used to evaluate the orbit altitude electron density estimation and its effect on the Abel inversion from RO measurements. Comparison between PLP observed and RO estimated orbit electron density

  16. SAR amplitude probability density function estimation based on a generalized Gaussian model.

    PubMed

    Moser, Gabriele; Zerubia, Josiane; Serpico, Sebastiano B

    2006-06-01

    In the context of remotely sensed data analysis, an important problem is the development of accurate models for the statistics of the pixel intensities. Focusing on synthetic aperture radar (SAR) data, this modeling process turns out to be a crucial task, for instance, for classification or for denoising purposes. In this paper, an innovative parametric estimation methodology for SAR amplitude data is proposed that adopts a generalized Gaussian (GG) model for the complex SAR backscattered signal. A closed-form expression for the corresponding amplitude probability density function (PDF) is derived and a specific parameter estimation algorithm is developed in order to deal with the proposed model. Specifically, the recently proposed "method-of-log-cumulants" (MoLC) is applied, which stems from the adoption of the Mellin transform (instead of the usual Fourier transform) in the computation of characteristic functions and from the corresponding generalization of the concepts of moment and cumulant. For the developed GG-based amplitude model, the resulting MoLC estimates turn out to be numerically feasible and are also analytically proved to be consistent. The proposed parametric approach was validated by using several real ERS-1, XSAR, E-SAR, and NASA/JPL airborne SAR images, and the experimental results prove that the method models the amplitude PDF better than several previously proposed parametric models for backscattering phenomena. PMID:16764268

  17. Estimate of density-of-states changes with strain in A15 Nb3Sn superconductors

    NASA Astrophysics Data System (ADS)

    Qiao, Li; Yang, Lin; Song, Jie

    2015-07-01

    We analyze experimental datasets which show that the bare density of states N(EF) changes dramatically, as does the superconducting transition temperature Tc, in Nb3Sn samples strained in different states and to different levels. By taking into account the strain-induced change in the electron-phonon coupling strength, the density of states as a function of strain is estimated via a formula deduced from the strong-coupling modifications to the theory of type-II superconductivity. The results of the analysis indicate that (i) as the Nb3Sn material undergoes external axial strain ε, the value of N(EF) decreases by 15% as Tc varies from ~17.4 to ~16.6 K; (ii) the N(EF)-ε curve exhibits a changing asymmetry of shape, in qualitative agreement with recent first-principles calculations; (iii) the relationship between the density of states and the superconducting transition temperature in strained A15 Nb3Sn strands differs significantly between tensile and compressive loads, while the trend of the strain-induced drop in electron-phonon coupling strength versus Tc of the distorted Nb3Sn sample is consistent over a wide strain range under different stress conditions. A general model for characterizing the effect of strain states on N(EF) in A15 Nb3Sn superconductors is suggested, and the density-of-states behavior in different modes of deformation can be well described with the modeling formalism. The present results are useful for understanding the origin of the strain sensitivity of the superconducting properties of the A15 Nb3Sn superconductor, and for developing a comprehensive theory describing the strain-tensor-dependent superconducting behavior of A15 Nb3Sn strands.

  18. Adaptive observer for traffic density estimation

    E-print Network

    Horowitz, Roberto

    By Luis Alvarez-Icaza, Laura Munoz, Xiaotian Sun and Roberto Horowitz. Abstract: A traffic density scheme to be used for real-time on-ramp metering control

  19. The carbon content characteristics of tropical peats in Central Kalimantan, Indonesia: Estimating their spatial variability in density

    Microsoft Academic Search

    Sawahiko Shimada; Hidenori Takahashi; Akira Haraguchi; Masami Kaneko

    2001-01-01

    Clarification of the carbon content characteristics of tropical peatlands, particularly their spatial variability in density, is needed for more accurate estimates of C pools and a more detailed understanding of the C cycle. In this study, the C density characteristics of different peatland types and at various depths within tropical peats in Central Kalimantan were analyzed. The peatland types and the land cover

  20. TreeCol: a novel approach to estimating column densities in astrophysical simulations

    NASA Astrophysics Data System (ADS)

    Clark, Paul C.; Glover, Simon C. O.; Klessen, Ralf S.

    2012-02-01

    We present TreeCol, a new and efficient tree-based scheme to calculate column densities in numerical simulations. Knowing the column density in any direction at any location in space is a prerequisite for modelling the propagation of radiation through the computational domain. TreeCol therefore forms the basis for a fast, approximate method for modelling the attenuation of radiation within large numerical simulations. It constructs a HEALPIX sphere at any desired location and accumulates the column density by walking the tree and by adding up the contributions from all tree nodes whose line of sight contributes to the pixel under consideration. In particular, when combined with widely-used tree-based gravity solvers, the new scheme requires little additional computational cost. In a simulation with N resolution elements, the computational cost of TreeCol scales as NlogN, instead of the N5/3 scaling of most other radiative transfer schemes. TreeCol is naturally adaptable to arbitrary density distributions and is easy to implement and to parallelize, particularly if a tree structure is already in place for calculating the gravitational forces. We describe our new method and its implementation into the smoothed particle hydrodynamics (SPH) code GADGET2 (although note that the scheme is not limited to particle-based fluid dynamics). We discuss its accuracy and performance characteristics for the examples of a spherical protostellar core and for the turbulent interstellar medium. We find that the column density estimates provided by TreeCol are on average accurate to better than 10 per cent. In another application, we compute the dust temperatures for solar neighbourhood conditions and compare with the result of a full-fledged Monte Carlo radiation-transfer calculation. We find that both methods give similar answers. We conclude that TreeCol provides a fast, easy to use and sufficiently accurate method of calculating column densities that comes with little additional computational cost when combined with an existing tree-based gravity solver.

  1. A New Estimate of the Star Formation Rate Density in the HDFN

    NASA Astrophysics Data System (ADS)

    Massarotti, M.; Iovino, A.

    We measured the evolution of the SFRD in the HDFN by comparing the available multi-color information on galaxy SEDs with a library of model fluxes, provided by the codes of Bruzual & Charlot (1993, ApJ 405, 538) and Leitherer et al. (1999, ApJS 123, 3). For each HDFN galaxy the best fitting template was used to estimate the redshift, the amount of dust obscuration and the un-reddened UV density at 1500 Å. The results are plotted in the figure, where a realistic estimate of the errors was obtained by considering the effects of field-to-field variations (Fontana et al. 1999, MNRAS, 310L). We did not correct for sample incompleteness, and the corrections for dust absorption in the estimates of Connolly et al. (1997, ApJ 486, 11L; C97) and Madau et al. (1998, ApJ 498, 106; M98) were calculated according to Steidel et al. (1999, ApJ 519, 1; S99). Our measured points show a peak at z ~ 3, consistent with those measured, in the same z interval, from rest-frame FIR emission (Barger et al. 2000, AJ 119, 2092; SCUBA). We did correct for dust obscuration by estimating the reddening object by object, and not by considering a mean value of E(B-V) as in S99. Such correction does not depend linearly on E(B-V): we found a ratio of ~14 between un-reddened and reddened SFRD, ~3 times greater than in S99, despite getting a mean value of color excess <E(B-V)> = 0.14 as in S99. Since we did not take into account sample incompleteness and surface brightness dimming effects, the decline of the SFRD at z ~ 4 could be questionable.

  2. Estimation of material fluxes in an estuarine cross section: A critical analysis of spatial measurement density and errors

    Microsoft Academic Search

    Björn Kjerfve; L. Harold Stevenson; Jeffrey A. Proehl; Thomas H. Chrzanowski; Wiley M. Kitchens

    1981-01-01

    Estuarine budget studies often suffer from uncertainties of net flux estimates in view of large temporal and spatial variabilities. Optimum spatial measurement density and material flux errors for a reasonably well mixed estuary were estimated by sampling 10 stations from surface to bottom simultaneously every hour for two tidal cycles in a 320-m-wide cross section in North Inlet, South Carolina.

  3. Density

    NSDL National Science Digital Library

    Targeting a middle and high school population, this web page provides an introduction to the concept of density. It is an appendix of a larger site called MathMol (Mathematics and Molecules), designed as an introduction to molecular modeling.

  4. Extraordinarily low density of hepatitis C virus estimated by sucrose density gradient centrifugation and the polymerase chain reaction

    Microsoft Academic Search

    Hideaki Miyamoto; Hiroaki Okamoto; Koei Sato; Takeshi Tanaka; Shunji Mishiro

    1992-01-01

    The genomic RNA of hepatitis C virus (HCV) in the plasma of volunteer blood donors was detected by using the polymerase chain reaction in a fraction of density 1.08 g/ml from sucrose density gradient equilibrium centrifugation. When the fraction was treated with the detergent NP40 and recentrifuged in sucrose, the HCV RNA banded at 1.25 g/ml. Assuming that NP40 removed a

  5. Density estimates of Panamanian owl monkeys (Aotus zonalis) in three habitat types.

    PubMed

    Svensson, Magdalena S; Samudio, Rafael; Bearder, Simon K; Nekaris, K Anne-Isola

    2010-02-01

    The resolution of the ambiguity surrounding the taxonomy of Aotus means data on newly classified species are urgently needed for conservation efforts. We conducted a study on the Panamanian owl monkey (Aotus zonalis) between May and July 2008 at three localities in Chagres National Park, located east of the Panama Canal, using the line transect method to quantify abundance and distribution. Vegetation surveys were also conducted to provide a baseline quantification of the three habitat types. We observed 33 individuals within 16 groups in two of the three sites. Population density was highest in Campo Chagres, with 19.7 individuals/km2, and intermediate densities of 14.3 individuals/km2 were observed at Cerro Azul. At La Llana, A. zonalis was not found to be present. The presence of A. zonalis in Chagres National Park, albeit at seemingly low abundance, is encouraging. A longer-term study will be necessary to validate the abundance estimates gained in this pilot study in order to make conservation policy decisions. PMID:19852005

  6. Simultaneous Estimation of Depth, Density, and Water Equivalent of Snow using a Mobile GPR Setup

    NASA Astrophysics Data System (ADS)

    Jonas, T.; Griessinger, N.; Gindraux, S.

    2014-12-01

    Terrestrial and airborne laser scanning of snow have significantly improved our understanding of the spatial variability of snow depth. However, methods to provide corresponding datasets of snow water equivalent of similar quality are unavailable to date. Like laser scanning technology, ground-penetrating radar (GPR) has become more accessible to snow researchers and is currently used successfully in snow hydrological studies. GPR systems can be used and set up in different ways to measure snow properties. In this study we elaborate on a mobile GPR system that allows simultaneous estimation of snow depth, density, and water equivalent in a snow survey setting. For this purpose we built a GPR platform around a sledge system with four antenna pairs set up as a common-midpoint array and a separate fifth antenna pair dedicated to analyzing the frequency change of the radar signal as it propagates through the snowpack. Liquid water content can be accounted for by assessing the frequency-dependent attenuation of the radar signal. We will present data from field campaigns carried out in 2013 and 2014 to test the ability of our GPR system to estimate bulk snow properties along several test transects. Along with the results, we will discuss system configuration and post-processing issues.

  7. The minimum description length principle for probability density estimation by regular histograms

    NASA Astrophysics Data System (ADS)

    Chapeau-Blondeau, François; Rousseau, David

    2009-09-01

    The minimum description length principle is a general methodology for statistical modeling and inference that selects the best explanation for observed data as the one allowing the shortest description of them. Application of this principle to the important task of probability density estimation by histograms was previously proposed. We review this approach and provide additional illustrative examples and an application to real-world data, with a presentation emphasizing intuition and concrete arguments. We also consider alternative ways of measuring the description lengths that can be more suitable in this context. We explicitly exhibit, analyze, and compare the complete forms of the description lengths, with formulas involving the information entropy and redundancy of the data that are not given elsewhere. Histogram estimation as performed here naturally extends to multidimensional data and offers flexible and optimal subquantization schemes for them. The framework can be very useful for modeling and reduction of complexity of observed data, based on a general principle from statistical information theory, and placed within a unifying informational perspective.
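
    A hedged sketch of the bin-selection idea: score each candidate number of equal-width bins by a two-part description length (negative maximized log-likelihood plus a parameter cost) and keep the minimizer. The BIC-style penalty below is a simplified stand-in for the exact stochastic-complexity formulas the paper analyzes.

    ```python
    import numpy as np

    # Two-part code length for a K-bin equal-width histogram of data x:
    # -max log-likelihood + 0.5 * (K - 1) * log(n). This is a simplified,
    # BIC-flavoured proxy for the paper's exact MDL criterion.
    def description_length(x, k):
        a, b = x.min(), x.max()
        counts, _ = np.histogram(x, bins=k, range=(a, b))
        n = len(x)
        nz = counts[counts > 0]
        loglik = np.sum(nz * np.log(nz * k / (n * (b - a))))
        return -loglik + 0.5 * (k - 1) * np.log(n)

    rng = np.random.default_rng(11)
    # Synthetic bimodal sample; the selected bin count adapts to its structure.
    x = np.concatenate([rng.normal(0, 1, 500), rng.normal(4, 0.5, 500)])
    best = min(range(2, 60), key=lambda k: description_length(x, k))
    print("selected number of bins:", best)
    ```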

  8. New Estimates on the EKB Dust Density using the Student Dust Counter

    NASA Astrophysics Data System (ADS)

    Szalay, J.; Horanyi, M.; Poppe, A. R.

    2013-12-01

    The Student Dust Counter (SDC) is an impact dust detector on board the New Horizons mission to Pluto. SDC was designed to resolve the mass of dust grains in the range 10^-12 < m < 10^-9 g, covering an approximate size range of 0.5-10 μm in particle radius. The measurements can be directly compared to the predictions of a grain-trajectory-tracing model of dust originating from the Edgeworth-Kuiper Belt (EKB). SDC's results, as well as data taken by the Pioneer 10 dust detector, are compared to our model to derive estimates for the mass production rate and the ejecta mass distribution power-law exponent. Contrary to previous studies, the assumption that all impacts are generated by grains on circular Keplerian orbits is removed, allowing for a more accurate determination of the EKB mass production rate. With these estimates, the speed and mass distribution of EKB grains entering the atmospheres of outer solar system bodies can be calculated. By December 2013, the New Horizons spacecraft had reached approximately 28 AU, enabling SDC to map the dust density distribution of the solar system farther out than any previous dust detector.

  9. Monte Carlo mesh tallies based on a Kernel Density Estimator approach using integrated particle tracks

    SciTech Connect

    Dunn, K. L.; Wilson, P. P. H. [Department of Engineering Physics, University of Wisconsin - Madison, 1500 Engineering Drive, Madison, WI 53706 (United States)

    2013-07-01

    A new Monte Carlo mesh tally based on a Kernel Density Estimator (KDE) approach using integrated particle tracks is presented. We first derive the KDE integral-track estimator and present a brief overview of its implementation as an alternative to the MCNP fmesh tally. To facilitate a valid quantitative comparison between these two tallies for verification purposes, there are two key issues that must be addressed. The first of these issues involves selecting a good data transfer method to convert the nodal-based KDE results into their cell-averaged equivalents (or vice versa with the cell-averaged MCNP results). The second involves choosing an appropriate resolution of the mesh, since if it is too coarse this can introduce significant errors into the reference MCNP solution. After discussing both of these issues in some detail, we present the results of a convergence analysis that shows the KDE integral-track and MCNP fmesh tallies are indeed capable of producing equivalent results for some simple 3D transport problems. In all cases considered, there was clear convergence from the KDE results to the reference MCNP results as the number of particle histories was increased. (authors)
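    For intuition, the sketch below scores a KDE-style tally at mesh nodes from discrete particle events. It is a simplified point-collision estimator, not the integral-track variant derived in the paper, and the kernel choice, dimensions, and synthetic data are all illustrative.

```python
import numpy as np

def kde_nodal_tally(events, weights, nodes, h):
    """KDE-style tally scored at mesh nodes from discrete particle events.

    Epanechnikov kernel in 1D for clarity; `events` are collision sites and
    `weights` the particle weights. The integral-track estimator instead
    integrates the kernel along each track segment.
    """
    u = (nodes[:, None] - events[None, :]) / h
    kern = np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u**2), 0.0)
    return (kern * weights[None, :]).sum(axis=1) / h

rng = np.random.default_rng(1)
events = rng.exponential(2.0, size=5000)   # synthetic collision sites
nodes = np.linspace(0.0, 10.0, 21)         # mesh node positions
scores = kde_nodal_tally(events, np.ones_like(events), nodes, h=0.5)
```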

  10. Methods for Estimating Population Density in Data-Limited Areas: Evaluating Regression and Tree-Based Models in Peru

    PubMed Central

    Anderson, Weston; Guikema, Seth; Zaitchik, Ben; Pan, William

    2014-01-01

    Obtaining accurate small area estimates of population is essential for policy and health planning but is often difficult in countries with limited data. In lieu of available population data, small area estimate models draw information from previous time periods or from similar areas. This study focuses on model-based methods for estimating population when no direct samples are available in the area of interest. To explore the efficacy of tree-based models for estimating population density, we compare six different model structures including Random Forest and Bayesian Additive Regression Trees. Results demonstrate that without information from prior time periods, non-parametric tree-based models produced more accurate predictions than did conventional regression methods. Improving estimates of population density in non-sampled areas is important for regions with incomplete census data and has implications for economic, health and development policies. PMID:24992657
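    A minimal version of the tree-based approach can be sketched with scikit-learn; the covariates and synthetic response below are placeholders, not the Peruvian census data used in the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((500, 4))   # stand-in covariates (e.g. nightlights, elevation)
y = np.exp(2.0 * X[:, 0] - X[:, 1]) + rng.normal(0.0, 0.1, 500)  # synthetic density

# Non-parametric tree ensemble scored by cross-validation, as one of the
# model structures the paper compares against regression baselines.
model = RandomForestRegressor(n_estimators=300, random_state=0)
print(cross_val_score(model, X, y, cv=5, scoring="r2").mean())
```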

  11. Wavelet-based reconstruction of fossil-fuel CO2 emissions from sparse measurements

    NASA Astrophysics Data System (ADS)

    McKenna, S. A.; Ray, J.; Yadav, V.; Van Bloemen Waanders, B.; Michalak, A. M.

    2012-12-01

    We present a method to estimate spatially resolved fossil-fuel CO2 (ffCO2) emissions from sparse measurements of time-varying CO2 concentrations. It is based on wavelet modeling of the strongly non-stationary spatial distribution of ffCO2 emissions. The dimensionality of the wavelet model is first reduced using images of nightlights, which identify regions of human habitation. Since wavelets are a multiresolution basis set, most of the reduction is accomplished by removing fine-scale wavelets in regions with low nightlight radiances. The (reduced) wavelet model of emissions is propagated through an atmospheric transport model (WRF) to predict CO2 concentrations at a handful of measurement sites. The estimation of the wavelet model of emissions, i.e., inferring the wavelet weights, is performed by fitting to observations at the measurement sites. This is done using Stagewise Orthogonal Matching Pursuit (StOMP), which first identifies (and sets to zero) the wavelet coefficients that cannot be estimated from the observations, before estimating the remaining coefficients. This model sparsification and fitting is performed simultaneously, allowing us to explore multiple wavelet models of differing complexity. The technique is borrowed from the field of compressive sensing, where it is generally used in image and video processing. We test this approach using synthetic observations generated from emissions from the Vulcan database, with 35 sensor sites chosen over the USA. FfCO2 emissions, averaged over 8-day periods, are estimated at 1-degree spatial resolution. We find that only about 40% of the wavelets in the emission model can be estimated from the data; however, the mix of coefficients that are estimated changes with time. Total US emissions can be reconstructed with errors of about 5%. The inferred emissions, if aggregated monthly, have a correlation of 0.9 with Vulcan fluxes. We find that the estimated emissions in the Northeast US are the most accurate. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
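    The fitting step can be mimicked with plain Orthogonal Matching Pursuit from scikit-learn (the paper uses the stagewise variant, StOMP, which thresholds many coefficients per iteration); the toy operator and dimensions below are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

# y = A @ w: observations at sensor sites, A the transport model acting on
# the (reduced) wavelet basis, w the sparse vector of wavelet weights.
rng = np.random.default_rng(0)
n_obs, n_wavelets, n_active = 80, 400, 15
A = rng.normal(size=(n_obs, n_wavelets))
w_true = np.zeros(n_wavelets)
w_true[rng.choice(n_wavelets, n_active, replace=False)] = rng.normal(size=n_active)
y = A @ w_true + 0.01 * rng.normal(size=n_obs)

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_active).fit(A, y)
print(np.count_nonzero(omp.coef_), np.linalg.norm(omp.coef_ - w_true))
```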

  12. Wavelet-based method for the suppression of ringing on force data taken during the extensional rheology of a non-Newtonian fluid polymer

    NASA Astrophysics Data System (ADS)

    Mackey, Jeffrey R.; Salari, Ezzatollah

    2002-02-01

    This paper describes wavelet-based methods employed to denoise a force-transducer signal. The signal was extracted during the extensional deformation of a non-Newtonian polymer fluid, which was deformed with an exponentially increasing velocity profile corresponding to a specific strain rate. Since the motion was stopped quickly (deceleration time was below 50 ms for a complete stop), a serious problem of ringing occurred for approximately one second after the motion had ceased. The ringing manifested itself as a damped harmonic oscillation overriding the relaxation characteristics of the molecular structure within the Boger fluid. Our goal was to suppress the damped harmonic oscillatory signal while preserving the relaxation characteristics (a decaying exponential signal) of the force data. Several wavelet-based techniques provided acceptable noise suppression while preserving the signal of interest.
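    One of several ways to realize such a scheme is soft thresholding of the fine-scale coefficients while leaving the coarse approximation (which carries the slow exponential relaxation) untouched; the wavelet, level, and threshold rule below are assumptions, not necessarily the paper's exact choices.

```python
import numpy as np
import pywt

def suppress_ringing(force, wavelet="db8", level=6, k=3.0):
    """Soft-threshold fine-scale wavelet coefficients of a force record."""
    coeffs = pywt.wavedec(force, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745   # robust noise scale
    coeffs[1:] = [pywt.threshold(c, k * sigma, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(force)]

# Synthetic stand-in: slow relaxation plus a fast damped 60 Hz oscillation.
t = np.linspace(0.0, 2.0, 4096)
force = np.exp(-t / 0.5) + 0.3 * np.exp(-t / 0.05) * np.sin(2 * np.pi * 60 * t)
cleaned = suppress_ringing(force)
```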

  13. Noise-Resistant Wavelet-Based Bayesian Fusion of Multispectral and Hyperspectral Images

    Microsoft Academic Search

    Yifan Zhang; Steve De Backer; Paul Scheunders

    2009-01-01

    In this paper, a technique is presented for the fusion of multispectral (MS) and hyperspectral (HS) images to enhance the spatial resolution of the latter. The technique works in the wavelet domain and is based on a Bayesian estimation of the HS image, assuming a joint normal model for the images and an additive noise imaging model for the HS

  14. A WAVELET-BASED DATA IMPUTATION APPROACH TO SPECTROGRAM RECONSTRUCTION FOR ROBUST SPEECH RECOGNITION

    E-print Network

    Rose, Richard

    A novel approach is presented for propagating prior spectrographic mask probabilities to serve as oracle masks. Clean speech spectral components are estimated using a Bayesian framework. The role of a spectrographic mask in a missing-feature framework is to determine the spectral components that have been

  15. A wavelet-based method for multifractal image analysis. I. Methodology and test applications on isotropic and anisotropic random rough surfaces

    Microsoft Academic Search

    A. Arnéodo; N. Decoster; S. G. Roux

    2000-01-01

    We generalize the so-called wavelet transform modulus maxima (WTMM) method to multifractal image analysis. We show that the implementation of this method provides very efficient numerical techniques to characterize statistically the roughness fluctuations of fractal surfaces. We emphasize the wide range of potential applications of this wavelet-based image processing method in fundamental as well as applied sciences. This paper

  16. Estimation of ocelot density in the pantanal using capture-recapture analysis of camera-trapping data

    USGS Publications Warehouse

    Trolle, M.; Kery, M.

    2003-01-01

    Neotropical felids such as the ocelot (Leopardus pardalis) are secretive, and it is difficult to estimate their populations using conventional methods such as radiotelemetry or sign surveys. We show that recognition of individual ocelots from camera-trapping photographs is possible, and we use camera-trapping results combined with closed-population capture-recapture models to estimate the density of ocelots in the Brazilian Pantanal. We estimated the area from which animals were camera trapped at 17.71 km². A model with constant capture probability yielded an estimate of 10 independent ocelots in our study area, which translates to a density of 2.82 independent individuals per 5 km² (SE 1.00).
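    The final conversion is simple arithmetic and can be checked directly; only the figures quoted in the abstract are used.

```python
n_hat, area_km2 = 10, 17.71           # estimated abundance and effective trapping area
density_per_km2 = n_hat / area_km2    # individuals per km^2
print(round(density_per_km2 * 5, 2))  # -> 2.82 independent ocelots per 5 km^2
```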

  17. Estimation of forest biophysical parameters using small-footprint lidar with low density in a coniferous forest

    Microsoft Academic Search

    Qisheng He; Hanwei Xu; Youjing Zhang

    2011-01-01

    This study aimed to estimate forest stand variables, such as mean height, mean crown diameter, mean diameter at breast height (DBH), basal area, tree density, and aboveground biomass, in a coniferous Picea crassifolia stand in the Qilian Mountains, western China, using low-density small-footprint airborne LiDAR data. Firstly, LiDAR points were classified into ground points and vegetation points. Then

  18. Estimation of the Concentration of Low-Density Lipoprotein Cholesterol in Plasma, Without Use of the Preparative Ultracentrifuge

    Microsoft Academic Search

    William T. Friedewald; Robert I. Levy; Donald S. Fredrickson

    1972-01-01

    A method for estimating the cholesterol content of the serum low-density lipoprotein fraction (Sf 0-20) is presented. The method involves measurements of fasting plasma total cholesterol, triglyceride, and high-density lipoprotein cholesterol concentrations, none of which requires the use of the preparative ultracentrifuge. Comparison of this suggested procedure with the more direct procedure, in which the ultracentrifuge is used, yielded
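    The resulting estimate is the well-known Friedewald equation, easy to state in code; units are mg/dL and the TG/5 term is the standard approximation of VLDL cholesterol.

```python
def ldl_friedewald(total_chol, hdl, triglycerides):
    """Friedewald estimate of LDL cholesterol, all values in mg/dL.

    LDL = TC - HDL - TG/5, where TG/5 approximates VLDL cholesterol.
    Not valid for triglycerides above about 400 mg/dL.
    """
    return total_chol - hdl - triglycerides / 5.0

print(ldl_friedewald(200, 50, 150))  # -> 120.0
```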

  19. Enhancement of Tropical Land Cover Mapping with Wavelet-Based Fusion and Unsupervised Clustering of SAR and Landsat Image Data

    NASA Technical Reports Server (NTRS)

    LeMoigne, Jacqueline; Laporte, Nadine; Netanyahuy, Nathan S.; Zukor, Dorothy (Technical Monitor)

    2001-01-01

    The characterization and mapping of land cover/land use of forest areas, such as the Central African rainforest, is a very complex task. This complexity is mainly due to the extent of such areas and, as a consequence, to the lack of full and continuous cloud-free coverage of those large regions by any single remote sensing instrument. In order to provide improved vegetation maps of Central Africa and to develop forest monitoring techniques for applications at the local and regional scales, we propose to utilize multi-sensor remote sensing observations coupled with in-situ data. Fusion and clustering of multi-sensor data are the first steps towards the development of such a forest monitoring system. In this paper, we describe preliminary experiments involving the fusion of SAR and Landsat image data of the Lope Reserve in Gabon. As in previous fusion studies, our fusion method is wavelet-based. The fusion provides a new image data set which contains more detailed texture features and preserves the large homogeneous regions that are observed by the Thematic Mapper sensor. The fusion step is followed by unsupervised clustering and provides a vegetation map of the area.
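    A common wavelet fusion rule, shown below as one plausible reading of such schemes (the abstract does not specify the exact rule), keeps the optical image's coarse approximation and, detail by detail, the coefficient of larger magnitude, which favors SAR texture.

```python
import numpy as np
import pywt

def wavelet_fuse(img_optical, img_sar, wavelet="db2", level=3):
    """Fuse two co-registered images in the wavelet domain."""
    ca = pywt.wavedec2(img_optical, wavelet, level=level)
    cb = pywt.wavedec2(img_sar, wavelet, level=level)
    fused = [ca[0]]  # keep the approximation of the optical image
    for da, db in zip(ca[1:], cb[1:]):
        # per detail sub-band, keep the coefficient of larger magnitude
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(da, db)))
    return pywt.waverec2(fused, wavelet)

rng = np.random.default_rng(0)
out = wavelet_fuse(rng.random((128, 128)), rng.random((128, 128)))
```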

  20. Automatic check method of vehicle digital dashboard based on wavelet-based multi-scale GVF snake

    NASA Astrophysics Data System (ADS)

    Zhang, Hong-wei; Zhang, Jian-wei; Cao, Jian; Wang, Wu-lin; Gong, Jin-feng; Wang, Xu

    2007-12-01

    The high accuracy of the vehicle digital dashboard makes it difficult to check its error in real time. Taking advantage of its production conditions, digital image processing can be used to check the dashboard's precision automatically. Image edge detection is the key to our check method. The snake model is widely used today; the GVF snake overcomes the traditional snake model's shortcomings, having a large capture range and being able to move into boundary concavities. However, it still requires a large amount of computation and is easily disturbed by noise. The wavelet-based multi-scale GVF snake takes advantage of both the wavelet transform and the GVF model. At lower resolution there are fewer wavelet coefficients, so the GVF snake deforms to the contour with little computation and is less affected by noise. At higher resolution, initializing from the contour found at the preceding resolution saves much of the computation. Experiments show that this method can detect the position of the pointer automatically and accurately.

  1. Wavelet-based image denoising using NeighShrink and BiShrink threshold functions

    Microsoft Academic Search

    P. Kittisuwan; W. Asdornwised

    2008-01-01

    This paper presents image-denoising methods performed in the wavelet domain by incorporating neighboring coefficients, namely NeighShrink (G.Y. Chen et al., 2004), while at the same time denoising the image with a bivariate shrinkage function. The idea of the bivariate shrinkage function (BiShrink; L. Sendur and I.W. Selesnick, 2002) is to model the signal based on a MAP estimation approach. In fact, signal can
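    The NeighShrink rule itself is compact: each coefficient is scaled by max(0, 1 - λ²/S²), with S² the sum of squared coefficients over a small neighborhood and λ the universal threshold. The sketch below applies it to one detail sub-band; the window size and threshold convention follow Chen et al. as commonly cited, so treat them as assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def neighshrink(band, sigma, win=3):
    """NeighShrink-style shrinkage of one wavelet detail sub-band.

    Scale each coefficient by max(0, 1 - lam2 / S2), where S2 sums the
    squared coefficients in a win x win neighborhood and lam2 is the
    squared universal threshold 2 * sigma^2 * log(n).
    """
    n = band.size
    lam2 = 2.0 * sigma**2 * np.log(n)
    s2 = uniform_filter(band**2, size=win) * win**2   # neighborhood sum
    beta = np.maximum(0.0, 1.0 - lam2 / np.maximum(s2, 1e-12))
    return band * beta
```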

  2. Measuring and Modeling Fault Density for Plume-Fault Encounter Probability Estimation

    SciTech Connect

    Jordan, P.D.; Oldenburg, C.M.; Nicot, J.-P.

    2011-05-15

    Emission of carbon dioxide from fossil-fueled power generation stations contributes to global climate change. Storage of this carbon dioxide within the pores of geologic strata (geologic carbon storage) is one approach to mitigating the climate change that would otherwise occur. The large storage volume needed for this mitigation requires injection into brine-filled pore space in reservoir strata overlain by cap rocks. One of the main concerns with storage in such rocks is leakage via faults. In the early stages of site selection, site-specific fault coverages are often not available. This necessitates a method for using available fault data to estimate the likelihood of injected carbon dioxide encountering and migrating up a fault, primarily due to buoyancy. Fault population statistics provide one of the main inputs for calculating the encounter probability. Previous fault population statistics work is shown to be applicable to areal fault density statistics. This result is applied to a case study in the southern portion of the San Joaquin Basin, with the result that a carbon dioxide plume from a previously planned injection had a 3% chance of encountering a fully seal-offsetting fault.
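    In the simplest reading, areal fault density feeds an encounter probability through a homogeneous Poisson model; the sketch below shows that reduced calculation only (the paper's analysis also conditions on fault length, offset, and seal thickness), and the inputs are invented.

```python
import numpy as np

def encounter_probability(fault_density_per_km2, plume_area_km2):
    """P(plume encounters >= 1 fault), faults as a homogeneous Poisson process."""
    lam = fault_density_per_km2 * plume_area_km2   # expected faults under plume
    return 1.0 - np.exp(-lam)

print(encounter_probability(0.002, 15.0))  # hypothetical inputs
```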

  3. How massive is Saturn's B ring? Clues from cryptic density waves

    NASA Astrophysics Data System (ADS)

    Hedman, Matthew M.; Nicholson, Philip D.

    2015-05-01

    The B ring is the brightest and most opaque of Saturn's rings, but it is also among the least well understood, because basic parameters like its surface mass density are still poorly constrained. Elsewhere in the rings, spiral density waves driven by resonances with Saturn's various moons provide precise and robust mass density estimates, but for most of the B ring, extremely high opacities and strong stochastic optical depth variations obscure the signal from these wave patterns. We have developed a new wavelet-based technique that combines data from multiple stellar occultations (observed by the Visual and Infrared Mapping Spectrometer (VIMS) instrument onboard the Cassini spacecraft) and that has allowed us to identify signals that may be due to waves generated by three of the strongest resonances in the central and outer B ring. These wave signatures yield new estimates of the B ring's mass density and indicate that the B ring's total mass could be quite low, perhaps a fraction of the mass of Saturn's moon Mimas.

  4. Wavelet-Based Artifact Identification and Separation Technique for EEG Signals during Galvanic Vestibular Stimulation

    PubMed Central

    Adib, Mani; Cretu, Edmond

    2013-01-01

    We present a new method for removing artifacts in electroencephalography (EEG) records during Galvanic Vestibular Stimulation (GVS). The main challenge in exploiting GVS is to understand how the stimulus acts as an input to the brain. We used EEG to monitor the brain and elicit the GVS reflexes. However, the GVS current distribution throughout the scalp generates an artifact on EEG signals. We need to eliminate this artifact to be able to analyze the EEG signals during GVS. We propose a novel method to estimate the contribution of the GVS current to the EEG signals at each electrode by combining time-series regression methods with wavelet decomposition methods. We use the wavelet transform to project the recorded EEG signal into various frequency bands and then estimate the GVS current distribution in each frequency band. The proposed method was optimized using simulated signals, and its performance was compared to well-accepted artifact removal methods such as ICA-based methods and adaptive filters. The results show that the proposed method performs better in removing GVS artifacts. Using the proposed method, a higher signal-to-artifact ratio of -1.625 dB was achieved, which outperformed other methods such as ICA-based methods, regression methods, and adaptive filters. PMID:23956786

  5. Wavelet-based artifact identification and separation technique for EEG signals during galvanic vestibular stimulation.

    PubMed

    Adib, Mani; Cretu, Edmond

    2013-01-01

    We present a new method for removing artifacts in electroencephalography (EEG) records during Galvanic Vestibular Stimulation (GVS). The main challenge in exploiting GVS is to understand how the stimulus acts as an input to the brain. We used EEG to monitor the brain and elicit the GVS reflexes. However, the GVS current distribution throughout the scalp generates an artifact on EEG signals. We need to eliminate this artifact to be able to analyze the EEG signals during GVS. We propose a novel method to estimate the contribution of the GVS current to the EEG signals at each electrode by combining time-series regression methods with wavelet decomposition methods. We use the wavelet transform to project the recorded EEG signal into various frequency bands and then estimate the GVS current distribution in each frequency band. The proposed method was optimized using simulated signals, and its performance was compared to well-accepted artifact removal methods such as ICA-based methods and adaptive filters. The results show that the proposed method performs better in removing GVS artifacts. Using the proposed method, a higher signal-to-artifact ratio of -1.625 dB was achieved, which outperformed other methods such as ICA-based methods, regression methods, and adaptive filters. PMID:23956786

  6. An Air Traffic Prediction Model based on Kernel Density Estimation Yi Cao,1 Lingsong Zhang,2 and Dengfeng Sun3

    E-print Network

    Sun, Dengfeng

    through sophisticated flight dynamics [1]. However, for the Air Traffic Control System Command Center at an Air Route Traffic Control Center (simply denoted as Center hereafter) level [2]. It forecasts aircraft

  7. Anomaly detection in sea traffic - A comparison of the Gaussian Mixture Model and the Kernel Density Estimator

    Microsoft Academic Search

    Rikard Laxhammar; G. Falkman; E. Sviestins

    2009-01-01

    This paper presents a first attempt to evaluate two previously proposed methods for statistical anomaly detection in sea traffic, namely the Gaussian mixture model (GMM) and the adaptive kernel density estimator (KDE). A novel performance measure related to anomaly detection, together with an intermediate performance measure related to normalcy modeling, is proposed and evaluated using recorded AIS data of vessel
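    Both detectors can be prototyped in a few lines with scikit-learn: fit each model on normal traffic features and flag points with low log-likelihood. The feature layout, threshold choice, and synthetic data below are placeholders, not the recorded AIS data used in the paper.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(0)
normal_tracks = rng.normal(0, 1, size=(2000, 4))     # stand-in (lat, lon, speed, course)
test_points = np.vstack([rng.normal(0, 1, (5, 4)),   # normal behavior
                         rng.normal(6, 1, (5, 4))])  # anomalous behavior

gmm = GaussianMixture(n_components=8, random_state=0).fit(normal_tracks)
kde = KernelDensity(bandwidth=0.5).fit(normal_tracks)

# Flag anything whose log-likelihood falls below a threshold fit on the
# training data (here: its 1st percentile).
for name, model in [("GMM", gmm), ("KDE", kde)]:
    thr = np.percentile(model.score_samples(normal_tracks), 1)
    print(name, model.score_samples(test_points) < thr)
```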

  8. Estimating ink density from colour camera RGB values by the local kernel ridge regression

    Microsoft Academic Search

    Antanas Verikas; Marija Bacauskiene

    2008-01-01

    We present an approach to CCD colour-camera-based ink density measurement in newspaper printing. To solve the task, a reflectance spectrum is first reconstructed from the CCD colour camera RGB values, and then a well-known relation between ink density and the reflectance spectrum of the sample being measured is used to compute the density. To achieve an acceptable spectral

  9. Dynamics of photosynthetic photon flux density (PPFD) and estimates in coastal northern California

    NASA Astrophysics Data System (ADS)

    Ge, Shaokui; Smith, Richard G.; Jacovides, Constantinos P.; Kramer, Marc G.; Carruthers, Raymond I.

    2011-08-01

    Plants require solar radiation for photosynthesis, and their growth is directly related to the amount received, assuming that other environmental parameters are not limiting. Precise estimation of photosynthetically active radiation (PAR) is therefore necessary to enhance the overall accuracy of plant growth models. This study explored the PAR radiant flux in the San Francisco Bay Area of northern California. During the growing season (March through August) of the two years 2007-2008, on-site photosynthetic photon flux densities (PPFD) were measured and processed at both hourly and daily time scales. Combined with global solar radiation (R_S) and simulated extraterrestrial solar radiation, five PAR-related quantities were developed: flux-density-based PAR (PPFD), energy-based PAR (PARE), the flux-to-energy conversion efficiency (fFEC), the fraction of PAR energy in the global solar radiation (fE), and a newly developed indicator, the lost-PARE percentage (LPR), describing losses as solar radiation penetrates from the top of the atmosphere to the ground. These quantities showed significant diurnal variation, with high values occurring at midday and low values in the morning and afternoon hours. Over the entire experimental season, the overall mean hourly value of fFEC was 2.17 μmol J-1, while the respective fE value was 0.49. The monthly averages of hourly fFEC and fE at solar noon ranged from 2.15 μmol J-1 in March to 2.39 μmol J-1 in August, and from 0.47 in March to 0.52 in July, respectively. The monthly average daily values were relatively constant and exhibited only weak seasonal variation, ranging from 2.02 mol MJ-1 and 0.45 (March) to 2.19 mol MJ-1 and 0.48 (June). The mean daily values of fFEC and fE at solar noon were 2.16 mol MJ-1 and 0.47 across the entire growing season, respectively. Both PPFD and LPR, reported here for the first time, showed strong diurnal patterns with opposite trends: PPFD was high around noon, yielding low LPR values during the same period. Both were highly correlated with global solar radiation R_S, solar elevation angle h, and the clearness index K_t. Using best-subset variable selection, two parametric models were developed for estimating PPFD and LPR that can easily be applied at radiometric sites recording only global solar radiation. These models involve only the commonly measured global solar radiation (R_S) and two large-scale geometric parameters, extraterrestrial solar radiation and solar elevation, and are therefore insensitive to local weather conditions such as temperature. In particular, with two test data sets collected in the USA and Greece, it was verified that the models could be extended across different geographical areas, where they performed well. These two hourly based models can thus provide the precise PAR-related values required for developing accurate vegetation growth models.

  10. The EM Method in a Probabilistic Wavelet-Based MRI Denoising.

    PubMed

    Martin-Fernandez, Marcos; Villullas, Sergio

    2015-01-01

    Human body heat emission and other external causes can interfere with magnetic resonance image acquisition and produce noise. In this kind of image, the noise, when no signal is present, is Rayleigh distributed, and its wavelet coefficients can be approximately modeled by a Gaussian distribution. Noiseless magnetic resonance images can be modeled by a Laplacian distribution in the wavelet domain. This paper proposes a new magnetic resonance image denoising method built on these facts. The method performs shrinkage of wavelet coefficients based on the conditional probability of their being noise or detail. The parameters involved in this filtering approach are calculated by means of the expectation-maximization (EM) method, which avoids the need for an estimator of the noise variance. The efficiency of the proposed filter is studied and compared with other important filtering techniques, such as Nowak's, Donoho-Johnstone's, Awate-Whitaker's, and nonlocal means filters, in different 2D and 3D images. PMID:26089959

  11. The EM Method in a Probabilistic Wavelet-Based MRI Denoising

    PubMed Central

    2015-01-01

    Human body heat emission and other external causes can interfere with magnetic resonance image acquisition and produce noise. In this kind of image, the noise, when no signal is present, is Rayleigh distributed, and its wavelet coefficients can be approximately modeled by a Gaussian distribution. Noiseless magnetic resonance images can be modeled by a Laplacian distribution in the wavelet domain. This paper proposes a new magnetic resonance image denoising method built on these facts. The method performs shrinkage of wavelet coefficients based on the conditional probability of their being noise or detail. The parameters involved in this filtering approach are calculated by means of the expectation-maximization (EM) method, which avoids the need for an estimator of the noise variance. The efficiency of the proposed filter is studied and compared with other important filtering techniques, such as Nowak's, Donoho-Johnstone's, Awate-Whitaker's, and nonlocal means filters, in different 2D and 3D images.
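    The EM machinery can be illustrated on a simplified stand-in model: a zero-mean two-component Gaussian mixture over the coefficients of one sub-band (the paper pairs Gaussian noise with a Laplacian signal prior, but the E-step/M-step alternation is analogous), with each coefficient finally weighted by its posterior probability of being detail.

```python
import numpy as np

def em_shrink(coeffs, iters=50):
    """Shrink sub-band coefficients by the posterior probability of 'detail'."""
    c = np.asarray(coeffs, dtype=float)
    pi, v_n, v_d = 0.5, 0.1 * c.var(), 2.0 * c.var()  # crude initialization
    for _ in range(iters):
        # E-step: responsibility of the wide ('detail') component
        p_d = pi * np.exp(-c**2 / (2 * v_d)) / np.sqrt(v_d)
        p_n = (1 - pi) * np.exp(-c**2 / (2 * v_n)) / np.sqrt(v_n)
        r = p_d / (p_d + p_n + 1e-300)
        # M-step: update mixture weight and the two variances
        pi = r.mean()
        v_d = (r * c**2).sum() / max(r.sum(), 1e-12)
        v_n = ((1 - r) * c**2).sum() / max((1 - r).sum(), 1e-12)
    return c * r  # coefficients weighted by P(detail | coefficient)

rng = np.random.default_rng(0)
band = np.where(rng.random(4096) < 0.1,
                rng.normal(0, 5.0, 4096),   # sparse large 'detail' coefficients
                rng.normal(0, 0.5, 4096))   # dense small 'noise' coefficients
denoised = em_shrink(band)
```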

  12. Density functional estimations of Heisenberg exchange constants in oligonuclear magnetic compounds: Assessment of density functional theory versus ab initio

    NASA Astrophysics Data System (ADS)

    Zein, Samir; Poor Kalhor, Mahboubeh; Chibotaru, Liviu F.; Chermette, Henry

    2009-12-01

    Modern density functionals were assessed for the calculation of magnetic exchange constants of academic hydrogen oligomer systems. Full-configuration-interaction magnetic exchange constants and wavefunctions are taken as references for several Hn model systems with different geometrical distributions, from Ciofini et al. [Chem. Phys. 309, 133 (2005)]. Regression analyses indicate that hybrid functionals (B3LYP, O3LYP, and PBE0) rank among the best, with a slope of typically 0.5, i.e., 100% overestimation, and a standard error of about 50 cm⁻¹. The efficiency of the highly ranked functionals for predicting the correct "exact states" (after diagonalization of the Heisenberg Hamiltonian) is validated, and a statistical standard error is assigned to each functional. The singular value decomposition approach is used to treat the overdetermination of the system of equations when the number of magnetic centers is greater than 3. Further discussion, particularly of the fortuitous success of the Becke00-x-only functional for treating hydrogenic models, is presented.
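    The overdetermined mapping from spin-configuration energies to exchange constants is a small linear-algebra exercise. The sketch below solves the Ising-mapped system E(s) = E0 - Σ J_ij s_i s_j by SVD for a hypothetical four-center case; the J values are made up purely to verify the recovery.

```python
import numpy as np

n = 4
pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
J_true = dict(zip(pairs, [-150.0, -20.0, -5.0, -80.0, -10.0, -120.0]))  # cm^-1, invented

# All spin configurations with the first spin fixed (a global flip is redundant).
spins = np.array([[1, a, b, c] for a in (1, -1) for b in (1, -1) for c in (1, -1)])
A = np.column_stack([np.ones(len(spins))] +
                    [-spins[:, i] * spins[:, j] for i, j in pairs])
energies = A @ np.concatenate(([0.0], [J_true[p] for p in pairs]))

U, s, Vt = np.linalg.svd(A, full_matrices=False)   # SVD handles the
coeffs = Vt.T @ ((U.T @ energies) / s)             # overdetermined system
print(dict(zip(pairs, np.round(coeffs[1:], 6))))   # recovers J_true
```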

  13. [Estimation of age-related features of acoustic density and biometric relations of lens based on combined ultrasound scanning].

    PubMed

    Avetisov, K S; Markosian, A G

    2013-01-01

    Results of combined ultrasound scanning for estimation of the acoustic density of the lens and of biometric relations between the lens and other eye structures are presented. A group of 124 patients (189 eyes) was studied, subdivided by age and by the length of the anteroposterior axis of the eye. An examination algorithm was developed that allows selective estimation of the acoustic density of different lens zones and biometric measurements, including volumetric ones. An age-related increase in the acoustic density of different lens zones was revealed, which indirectly supports the method's validity. Biometric studies showed nearly identical volumetric lens measurements in "normal" and "short" eyes, despite a significantly thicker central zone in the latter. A significantly lower correlation between anterior chamber volume and the width of its angle was revealed in "short" eyes than in "normal" and "long" eyes (correlation coefficients 0.37, 0.68, and 0.63, respectively). PMID:23879017

  14. Effect of the distribution and density of benthic target organisms on manta tow estimates of their abundance

    NASA Astrophysics Data System (ADS)

    Fernandes, L.

    1990-12-01

    The perception biases associated with manta tow estimates of the abundance of benthic organisms were investigated using artificial targets whose density, distribution, and availability could be controlled. The proportion of targets counted by a manta-towed observer (sightability) and the precision of the resultant abundance estimates decreased as the targets were distributed more widely over a reef slope. The sightability of targets was enhanced when they were arranged at high density or in relatively large groups of 9-11, or when they were located directly under the manta-towed observer rather than at the edges of the observer's visual field. Limiting the search width of a manta-towed observer to about 9 m should improve manta tow estimates of target organisms. In practice, however, this would be difficult for reef organisms such as Acanthaster planci because of the extreme three-dimensionality of the reef surface relative to the depth of the water.

  15. Estimation of refractive index and density of lubricants under high pressure by Brillouin scattering

    NASA Astrophysics Data System (ADS)

    Nakamura, Y.; Fujishiro, I.; Kawakami, H.

    1994-07-01

    Employing a diamond-anvil cell, Brillouin scattering spectra at 90° and 180° scattering angles were measured for synthetic lubricants (paraffinic and naphthenic oils), and the sound velocity, density, and refractive index under high pressure were obtained. The density obtained from the thermodynamic relation was compared with that from the Lorentz-Lorenz formula. The density was also compared with Dowson's density-pressure equation for lubricants, and the density-pressure characteristics of the paraffinic and naphthenic oils were described in terms of the molecular structure of the solidified lubricants. The effect of such physical properties on the elastohydrodynamic lubrication of ball bearings, gears, and traction drives was considered.

  16. Wavelet-based statistical approach for speckle reduction in medical ultrasound images.

    PubMed

    Gupta, S; Chauhan, R C; Sexana, S C

    2004-03-01

    A novel speckle-reduction method is introduced, based on soft thresholding of the wavelet coefficients of a logarithmically transformed medical ultrasound image. The method rests on generalised-Gaussian-distributed (GGD) modelling of the sub-band coefficients. It is a variant of the recently published BayesShrink method by Chang and Vetterli, derived in the Bayesian framework for denoising natural images, and is scale adaptive because the parameters required for estimating the threshold depend on the scale and sub-band data. The threshold is computed as Kσ²/σ_x, where σ and σ_x are the standard deviations of the noise and of the sub-band data of the noise-free image, respectively, and K is a scale parameter. Experimental results showed that the proposed method outperformed the median filter and the homomorphic Wiener filter by 29% in terms of the coefficient of correlation and by 4% in terms of the edge preservation parameter. The numerical values of these quantitative parameters indicate the good feature-preservation performance of the algorithm, as desired for better diagnosis in medical image processing. PMID:15125148
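    The threshold computation is easy to reproduce. The sketch below estimates σ robustly from the finest diagonal sub-band (median/0.6745) and applies the σ²/σ_x rule per sub-band with pywt; the wavelet, level count, and omission of the K factor are assumptions, and the input image is synthetic.

```python
import numpy as np
import pywt

def bayes_threshold(band, sigma_noise):
    """Sub-band threshold sigma^2 / sigma_x, with sigma_x^2 = max(var - sigma^2, 0)."""
    var_x = max(band.var() - sigma_noise**2, 1e-12)
    return sigma_noise**2 / np.sqrt(var_x)

rng = np.random.default_rng(0)
img_log = np.log1p(np.abs(rng.normal(100.0, 10.0, (128, 128))))  # log-transformed image
coeffs = pywt.wavedec2(img_log, "db4", level=3)
sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745   # robust noise estimate (HH1)
den = [coeffs[0]] + [
    tuple(pywt.threshold(b, bayes_threshold(b, sigma), "soft") for b in d)
    for d in coeffs[1:]
]
restored = np.expm1(pywt.waverec2(den, "db4"))  # undo the log transform
```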

  17. A wavelet-based spatially adaptive method for mammographic contrast enhancement.

    PubMed

    Sakellaropoulos, P; Costaridou, L; Panayiotakis, G

    2003-03-21

    A method aimed at minimizing image noise while optimizing contrast of image features is presented. The method is generic and it is based on local modification of multiscale gradient magnitude values provided by the redundant dyadic wavelet transform. Denoising is accomplished by a spatially adaptive thresholding strategy, taking into account local signal and noise standard deviation. Noise standard deviation is estimated from the background of the mammogram. Contrast enhancement is accomplished by applying a local linear mapping operator on denoised wavelet magnitude values. The operator normalizes local gradient magnitude maxima to the global maximum of the first scale magnitude subimage. Coefficient mapping is controlled by a local gain limit parameter. The processed image is derived by reconstruction from the modified wavelet coefficients. The method is demonstrated with a simulated image with added Gaussian noise, while an initial quantitative performance evaluation using 22 images from the DDSM database was performed. Enhancement was applied globally to each mammogram, using the same local gain limit value. Quantitative contrast and noise metrics were used to evaluate the quality of processed image regions containing verified lesions. Results suggest that the method offers significantly improved performance over conventional and previously reported global wavelet contrast enhancement methods. The average contrast improvement, noise amplification and contrast-to-noise ratio improvement indices were measured as 9.04, 4.86 and 3.04, respectively. In addition, in a pilot preference study, the proposed method demonstrated the highest ranking, among the methods compared. The method was implemented in C++ and integrated into a medical image visualization tool. PMID:12699195

  18. Correlating defect density with carrier mobility in large-scaled graphene films: Raman spectral signatures for the estimation of defect density.

    PubMed

    Hwang, Jeong-Yuan; Kuo, Chun-Chiang; Chen, Li-Chyong; Chen, Kuei-Hsien

    2010-11-19

    We report a correlation between carrier mobility and defect density in large-scale graphene films prepared by chemical vapor deposition (CVD). Raman spectroscopy is used to investigate the layer number and crystal quality of the graphene films, and the defect density is estimated from the intensity ratio of the D and G peaks. By carefully controlling the growth parameters, especially the H2/CH4 ratio during growth, and employing H2 during cooling, monolayer-dominant graphene films can be obtained with different D peak intensities in the Raman spectra, which correspond well with the carrier mobilities obtained by Hall measurements. A progressive shift of the neutrality point to more negative gate voltages is also observed as the defect density increases. Both the dependence of carrier mobility on defect density and the negative shift of the neutrality point are observed for the first time in CVD-grown graphene films. With the best growth conditions, a cm-scale graphene film with carrier mobility of ~1350 cm² V⁻¹ s⁻¹ (p-type in air) can be obtained. PMID:20972312

  19. Correlating defect density with carrier mobility in large-scaled graphene films: Raman spectral signatures for the estimation of defect density

    NASA Astrophysics Data System (ADS)

    Hwang, Jeong-Yuan; Kuo, Chun-Chiang; Chen, Li-Chyong; Chen, Kuei-Hsien

    2010-11-01

    We report a correlation between carrier mobility and defect density in large-scale graphene films prepared by chemical vapor deposition (CVD). Raman spectroscopy is used to investigate the layer number and crystal quality of the graphene films, and the defect density is estimated from the intensity ratio of the D and G peaks. By carefully controlling the growth parameters, especially the H2/CH4 ratio during growth, and employing H2 during cooling, monolayer-dominant graphene films can be obtained with different D peak intensities in the Raman spectra, which correspond well with the carrier mobilities obtained by Hall measurements. A progressive shift of the neutrality point to more negative gate voltages is also observed as the defect density increases. Both the dependence of carrier mobility on defect density and the negative shift of the neutrality point are observed for the first time in CVD-grown graphene films. With the best growth conditions, a cm-scale graphene film with carrier mobility of ~1350 cm² V⁻¹ s⁻¹ (p-type in air) can be obtained.

  20. A field comparison of nested grid and trapping web density estimators

    USGS Publications Warehouse

    Jett, D.A.; Nichols, J.D.

    1987-01-01

    The usefulness of capture-recapture estimators in any field study will depend largely on underlying model assumptions and on how closely these assumptions approximate the actual field situation. Evaluation of estimator performance under real-world field conditions is often a difficult matter, although several approaches are possible. Perhaps the best approach involves use of the estimation method on a population with known parameters.

  1. Estimates of volumetric bone density from projectional measurements improve the discriminatory capability of dual X-ray absorptiometry

    NASA Technical Reports Server (NTRS)

    Jergas, M.; Breitenseher, M.; Gluer, C. C.; Yu, W.; Genant, H. K.

    1995-01-01

    To determine whether estimates of volumetric bone density from projectional scans of the lumbar spine have weaker associations with height and weight and stronger associations with prevalent vertebral fractures than standard projectional bone mineral density (BMD) and bone mineral content (BMC), we obtained posteroanterior (PA) dual X-ray absorptiometry (DXA), lateral supine DXA (Hologic QDR 2000), and quantitative computed tomography (QCT, GE 9800 scanner) in 260 postmenopausal women enrolled in two trials of treatment for osteoporosis. In 223 women, all vertebral levels, i.e., L2-L4 in the DXA scan and L1-L3 in the QCT scan, could be evaluated. Fifty-five women were diagnosed as having at least one mild fracture (age 67.9 ± 6.5 years) and 168 women did not have any fractures (age 62.3 ± 6.9 years). We derived three estimates of "volumetric bone density" from PA DXA (BMAD, BMAD*, and BMD*) and three from paired PA and lateral DXA (WA BMD, WA BMDHol, and eVBMD). While PA BMC and PA BMD were significantly correlated with height (r = 0.49 and r = 0.28) or weight (r = 0.38 and r = 0.37), QCT and the volumetric bone density estimates from paired PA and lateral scans were not (r = -0.083 to r = 0.050). BMAD, BMAD*, and BMD* correlated with weight but not height. The associations with vertebral fracture were stronger for QCT (odds ratio [OR] = 3.17; 95% confidence interval [CI] = 1.90-5.27), eVBMD (OR = 2.87; CI 1.80-4.57), WA BMDHol (OR = 2.86; CI 1.80-4.55) and WA BMD (OR = 2.77; CI 1.75-4.39) than for BMAD*/BMD* (OR = 2.03; CI 1.32-3.12), BMAD (OR = 1.68; CI 1.14-2.48), lateral BMD (OR = 1.88; CI 1.28-2.77), standard PA BMD (OR = 1.47; CI 1.02-2.13) or PA BMC (OR = 1.22; CI 0.86-1.74). The areas under the receiver operating characteristic (ROC) curves for QCT and all estimates of volumetric BMD were significantly higher compared with standard PA BMD and PA BMC. We conclude that, like QCT, estimates of volumetric bone density from paired PA and lateral scans are unaffected by height and weight and are more strongly associated with vertebral fracture than standard PA BMD or BMC, or estimates of volumetric density that are solely based on PA DXA scans.

  2. Multiscale Estimation of GPS Velocity Fields

    Microsoft Academic Search

    P. Muse; C. Tape; M. Simons

    2008-01-01

    We present a spherical-wavelet-based multiscale representation for three-component velocities on the earth's surface, as a tool to facilitate analysis of geodetic observations in dense GPS networks. We design an efficient inverse problem to determine a set of wavelet coefficients that describe the irregularly distributed observations. Once the velocity field is estimated, we readily compute spatial derivative quantities, such as the

  3. Signal to noise ratio scaling and density limit estimates in longitudinal magnetic recording

    Microsoft Academic Search

    H. N. Bertram; H. Zhou; R. Gustafson

    1998-01-01

    A simplified general expression is given for SNR for digital magnetic recording for transition noise dominant systems. High density media are assumed in which the transition parameter scales with the in-plane grain diameter. At a fixed normalized code density, the SNR varies as the square of the bit spacing times the read track width divided by the grain diameter cubed.
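    Stated as a formula (the symbols are my own shorthand for the quantities named in the abstract: B the bit spacing, W the read track width, D the in-plane grain diameter):

```latex
\mathrm{SNR} \;\propto\; \frac{B^{2}\, W}{D^{3}}
```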

  4. HIGH DENSITY AIRBORNE LIDAR ESTIMATION OF DISRUPTED TREES INDUCED BY LANDSLIDES

    Microsoft Academic Search

    Khamarrul Azahari Razak; Alexander Bucksch; Menno Straatsma; Cees J. Van Westen; Rabieahtul Abu Bakar; Steven M. de Jong

    2013-01-01

    Airborne laser scanning (ALS) data have revolutionized landslide assessment in rugged, vegetated terrain, enabling the parameterization of the morphology and vegetation of unstable slopes. Vegetation characteristics are far less investigated, because of limits in the accuracy and density of currently available ALS data and a paucity of field validation data. We utilized high-density ALS (HDALS) data with 170

  5. Predicting fluctuations of reintroduced ibex populations: the importance of density dependence, environmental stochasticity and uncertain population estimates.

    PubMed

    Saether, Bernt-Erik; Lillegård, Magnar; Grøtan, Vidar; Filli, Flurin; Engen, Steinar

    2007-03-01

    1. Development of population projections requires estimates of observation error, parameters characterizing expected dynamics such as the specific population growth rate and the form of density regulation, the influence of stochastic factors on population dynamics, and quantification of the uncertainty in the parameter estimates. 2. Here we construct a Population Prediction Interval (PPI) based on Bayesian state space modelling of future population growth of 28 reintroduced ibex populations in Switzerland that have been censused for up to 68 years. Our aim is to examine whether the interpopulation variation in the precision of the population projections is related to differences in the parameters characterizing the expected dynamics, in the effects of environmental stochasticity, in the magnitude of uncertainty in the population parameters, or in the observation error. 3. The error in the population censuses was small. The median coefficient of variation in the estimates across populations was 5.1%. 4. Significant density regulation was present in 53.6% of the populations, but was in general weak. 5. The width of the PPI calculated for a period of 5 years showed large variation among populations, and was explained by differences in the impact of environmental stochasticity on population dynamics. 6. In spite of the high accuracy in population estimates, the uncertainty in the parameter estimates was still large. This uncertainty affected the precision in the population predictions, but it decreased with increasing length of study period, mainly due to higher precision in the estimates of the environmental variance in the longer time-series. 7. These analyses reveal that predictions of future population fluctuations of weakly density-regulated populations such as the ibex often become uncertain. Credible population predictions require that this uncertainty is properly quantified. PMID:17302840
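    A stripped-down version of such a projection can be simulated directly: propagate abundance forward under weak density regulation plus environmental noise and read off Monte Carlo quantiles. The sketch below ignores parameter uncertainty and observation error, which the paper's Bayesian state-space treatment propagates as well; all parameter values are invented.

```python
import numpy as np

def population_prediction_interval(n0, r, K, sigma_e, years=5, sims=10000):
    """95% Monte Carlo PPI for a weakly density-regulated population.

    Theta-logistic growth (theta = 1) on the log scale with environmental
    noise of standard deviation sigma_e.
    """
    rng = np.random.default_rng(0)
    n = np.full(sims, float(n0))
    for _ in range(years):
        growth = r * (1.0 - n / K) + rng.normal(0.0, sigma_e, sims)
        n = n * np.exp(growth)
    return np.percentile(n, [2.5, 97.5])

print(population_prediction_interval(n0=200, r=0.15, K=400, sigma_e=0.1))
```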

  6. Hydrological parameter estimations from a conservative tracer test with variable-density effects at the Boise Hydrogeophysical Research Site

    NASA Astrophysics Data System (ADS)

    Dafflon, B.; Barrash, W.; Cardiff, M.; Johnson, T. C.

    2011-12-01

    Reliable predictions of groundwater flow and solute transport require an estimation of the detailed distribution of the parameters (e.g., hydraulic conductivity, effective porosity) controlling these processes. However, such parameters are difficult to estimate because of the inaccessibility and complexity of the subsurface. In this regard, developments in parameter estimation techniques and investigations of field experiments are still challenging and necessary to improve our understanding and the prediction of hydrological processes. Here we analyze a conservative tracer test conducted at the Boise Hydrogeophysical Research Site in 2001 in a heterogeneous unconfined fluvial aquifer. Some relevant characteristics of this test include: variable-density (sinking) effects because of the injection concentration of the bromide tracer, the relatively small size of the experiment, and the availability of various sources of geophysical and hydrological information. The information contained in this experiment is evaluated through several parameter estimation approaches, including a grid-search-based strategy, stochastic simulation of hydrological property distributions, and deterministic inversion using regularization and pilot-point techniques. Doing this allows us to investigate hydraulic conductivity and effective porosity distributions and to compare the effects of assumptions from several methods and parameterizations. Our results provide new insights into the understanding of variable-density transport processes and the hydrological relevance of incorporating various sources of information in parameter estimation approaches. Among others, the variable-density effect and the effective porosity distribution, as well as their coupling with the hydraulic conductivity structure, are seen to be significant in the transport process. The results also show that assumed prior information can strongly influence the estimated distributions of hydrological properties.

  7. Application of multiresolution analyses to electron density maps of small molecules: Critical point representations for molecular superposition

    Microsoft Academic Search

    Laurence Leherte

    2001-01-01

    Three different methods are applied to generate low resolution molecular electron density (ED) distribution functions: a crystallography-based formalism, an analytical approach which allows the calculation of a promolecular ED distribution in terms of a weighted summation over atomic ED distributions, and a wavelet-based multiresolution analysis approach. Critical point graph representations of the molecular ED distributions are then generated by locating

  8. Population Indices Versus Correlated Density Estimates of Black-Footed Ferret Abundance

    Microsoft Academic Search

    Martin B. Grenier; Steven W. Buskirk; Richard Anderson-Sprecher

    2009-01-01

    Estimating abundance of carnivore populations is problematic because individuals typically are elusive, nocturnal, and dispersed across the landscape. Rare or endangered carnivore populations are even more difficult to estimate because of small sample sizes. Considering behavioral ecology of the target species can drastically improve survey efficiency and effectiveness. Previously, abundance of the black-footed ferret (Mustela nigripes) was monitored by spotlighting

  9. Crowd flow estimation using multiple visual features for scenes with changing crowd densities

    Microsoft Academic Search

    Satyam Srivastava; Ka Ki Ng; Edward J. Delp

    2011-01-01

    Crowd estimation and monitoring is an important surveillance task. We address the problem of estimating the "flow," that is, the number of persons passing a designated region per unit time. We designate an area of the scene as a virtual trip wire and accumulate the total number of foreground pixels (in the trip wire) over a chosen time period.
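    A bare-bones version of the pixel-accumulation idea can be written with OpenCV; the video path, trip-wire rectangle, and pixels-per-person calibration constant below are placeholders that would have to be fit to the scene.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("scene.mp4")                       # placeholder path
subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
x0, y0, x1, y1 = 100, 300, 540, 320   # trip-wire rectangle (placeholder)
pixels_per_person = 900.0             # calibration constant (placeholder)
total = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)                        # foreground mask
    total += int(np.count_nonzero(mask[y0:y1, x0:x1]))    # pixels in trip wire
print("estimated flow:", total / pixels_per_person)
```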

  10. A H-infinity Fault Detection and Diagnosis Scheme for Discrete Nonlinear System Using Output Probability Density Estimation

    SciTech Connect

    Zhang Yumin; Lum, Kai-Yew [Temasek Laboratories, National University of Singapore, Singapore 117508 (Singapore); Wang Qingguo [Depa. Electrical and Computer Engineering, National University of Singapore, Singapore 117576 (Singapore)

    2009-03-05

    In this paper, an H-infinity fault detection and diagnosis (FDD) scheme for a class of discrete nonlinear systems, using output probability density estimation, is presented. Unlike classical FDD problems, the measured output of the system is viewed as a stochastic process, and its square-root probability density function (PDF) is modeled with B-spline functions, which leads to a deterministic space-time dynamic model including nonlinearities and uncertainties. A weighted mean value is defined as an integral of the square-root PDF along the space direction, which yields a function of time only that can be used to construct a residual signal. Thus, the classical nonlinear filter approach can be used to detect and diagnose faults in the system. A feasible detection criterion is obtained first, and a new H-infinity adaptive fault diagnosis algorithm is then investigated to estimate the fault. A simulation example demonstrates the effectiveness of the proposed approaches.

  11. Estimation of the density of Martian soil from radiophysical measurements in the 3-centimeter range

    NASA Technical Reports Server (NTRS)

    Krupenio, N. N.

    1977-01-01

    The density of the Martian soil is evaluated to a depth of up to one meter using the results of radar measurements at λ0 = 3.8 cm and polarized radio astronomical measurements at λ0 = 3.4 cm conducted onboard the automatic interplanetary stations Mars 3 and Mars 5. The average soil density over all measurements is ρ = 1.37 ± 0.33 g/cm³. A map of the distribution of the permittivity and soil density is derived, drawn up according to radiophysical data in the 3-centimeter range.

  12. Estimating wild boar ( Sus scrofa ) abundance and density using capture–resights in Canton of Geneva, Switzerland

    Microsoft Academic Search

    C. Hebeisen; J. Fattebert; E. Baubet; C. Fischer

    2008-01-01

    We estimated wild boar abundance and density using capture-resight methods in the western part of the Canton of Geneva (Switzerland) in the early summer from 2004 to 2006. Ear-tag numbers and transmitter frequencies enabled us to identify individuals during each of the counting sessions. We used resights generated by self-triggered camera traps as recaptures. Program Noremark provided Minta-Mangel and Bowden's

  13. Some Interesting Facts about Correlation Between Gravity Anomalies and Heights with Implications Towards the Correction Density Estimation

    NASA Astrophysics Data System (ADS)

    Mikuška, J.; Marušiak, I.; Zahorec, P.; Pap?o, J.; Pasteka, R.; Bielik, M.

    2014-12-01

    It has been well known that free-air anomalies and gravitational effects of the topographic masses are mutually proportional, at least in general. It is rather intriguing, however, that this feature is more remarkable in elevated mountainous areas than in lowlands or flat regions, as we demonstrate with practical examples. Further, since the times of Pierre Bouguer we have known that the gravitational effect of the topographic masses is station-height-dependent. In our presentation we show that the respective contributions to this height dependence, although nonzero, are less significant for both the nearest masses and the more remote ones, while the contribution of the masses within hundreds to thousands of meters from the gravity station is dominant. We also illustrate that, surprisingly, the gravitational effects of the non-near topographic masses can be apparently independent of their respective volumes, while still being well proportional to the gravity station heights. On the other hand, for interpretational reasons, the Bouguer anomaly should not correlate strongly with the heights of the measuring points or, more specifically, with the gravitational effect of the topographic masses. Standard practice is to estimate a suitable (uniform) reduction or correction density within the study area in order to minimize such an undesired correlation; vice versa, the minimum correlation is often used as a criterion for estimating that density. Our main objective is to point out, from the perspective of correction density estimation, that the contributions of the topographic masses should be viewed differently depending on the particular distances of the respective portions of those masses from the gravity station. We have tested the majority of the existing methods of such density estimation and developed a new one that takes the facts mentioned above into consideration. This work was supported by the Slovak Research and Development Agency under the contracts APVV-0827-12 and APVV-0194-10.

  14. Estimation of density and population size and recommendations for monitoring trends of Bahama parrots on Great Abaco and Great Inagua

    USGS Publications Warehouse

    Rivera-Milan, F. F.; Collazo, J.A.; Stahala, C.; Moore, W.J.; Davis, A.; Herring, G.; Steinkamp, M.; Pagliaro, R.; Thompson, J.L.; Bracey, W.

    2005-01-01

    Once abundant and widely distributed, the Bahama parrot (Amazona leucocephala bahamensis) currently inhabits only the Great Abaco and Great Inagua Islands of the Bahamas. In January 2003 and May 2002-2004, we conducted point-transect surveys (a type of distance sampling) to estimate density and population size and make recommendations for monitoring trends. Density ranged from 0.061 (SE = 0.013) to 0.085 (SE = 0.018) parrots/ha, and population size ranged from 1,600 (SE = 354) to 2,386 (SE = 508) parrots when extrapolated to the 26,154 ha and 28,162 ha covered by surveys on Abaco in May 2002 and 2003, respectively. Density was 0.183 (SE = 0.049) and 0.153 (SE = 0.042) parrots/ha, and population size was 5,344 (SE = 1,431) and 4,450 (SE = 1,435) parrots when extrapolated to the 29,174 ha covered by surveys on Inagua in May 2003 and 2004, respectively. Because parrot distribution was clumped, we would need to survey 213-882 points on Abaco and 258-1,659 points on Inagua to obtain a CV of 10-20% for estimated density. Cluster size and its variability and clumping increased in wintertime, making surveys imprecise and cost-ineffective. Surveys were reasonably precise and cost-effective in springtime, and we recommend conducting them when parrots are pairing and selecting nesting sites. Survey data should be collected yearly as part of an integrated monitoring strategy to estimate density and other key demographic parameters and improve our understanding of the ecological dynamics of these geographically isolated parrot populations at risk of extinction.

  15. Density estimation and survey validation for swift fox Vulpes velox in Oklahoma

    Microsoft Academic Search

    Marc A. Criffield; Eric C. Hellgren; David M. LESLIE Jr

    2010-01-01

    The swift fox Vulpes velox Say, 1823, a small canid native to shortgrass prairie ecosystems of North America, has been the subject of enhanced conservation and research interest because of restricted distribution and low densities. Previous studies have described distributions of the species in the southern Great Plains, but data on density are required to evaluate indices of relative abundance

  16. TreeCol: a novel approach to estimating column densities in astrophysical simulations

    Microsoft Academic Search

    Paul C. Clark; Simon C. O. Glover; Ralf S. Klessen

    2011-01-01

    We present TreeCol, a new and efficient tree-based scheme to calculate column densities in numerical simulations. Knowing the column density in any direction at any location in space is a prerequisite for modelling the propagation of radiation through the computational domain. TreeCol therefore forms the basis for a fast, approximate method for modelling the attenuation of radiation within large numerical

  17. Estimating the population density of the Asian tapir (Tapirus indicus) in a selectively logged forest in Peninsular Malaysia.

    PubMed

    Rayan, D Mark; Mohamad, Shariff Wan; Dorward, Leejiah; Aziz, Sheema Abdul; Clements, Gopalasamy Reuben; Christopher, Wong Chai Thiam; Traeholt, Carl; Magintan, David

    2012-12-01

    The endangered Asian tapir (Tapirus indicus) is threatened by large-scale habitat loss, forest fragmentation and increased hunting pressure. Conservation planning for this species, however, is hampered by a severe paucity of information on its ecology and population status. We present the first Asian tapir population density estimate from a camera trapping study targeting tigers in a selectively logged forest within Peninsular Malaysia, using a spatially explicit capture-recapture maximum likelihood based framework. With a trap effort of 2,496 nights, 17 individuals were identified, corresponding to a density (standard error) estimate of 9.49 (2.55) adult tapirs/100 km². Although our results include several caveats, we believe that our density estimate still serves as an important baseline to facilitate the monitoring of tapir population trends in Peninsular Malaysia. Our study also highlights the potential of extracting vital ecological and population information for other cryptic, individually identifiable animals from tiger-centric studies, especially with the use of a spatially explicit capture-recapture maximum likelihood based framework. PMID:23253368
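    A minimal sketch of the half-normal detection function that underlies most spatially explicit capture-recapture likelihoods of this kind; the g0 and sigma values are illustrative assumptions, not estimates from this study.

      import numpy as np

      def halfnormal_detection(dist_m, g0=0.05, sigma_m=2000.0):
          """Probability that a camera trap detects an animal whose activity
          centre lies dist_m away; g0 is detection at distance zero and sigma
          controls how quickly detectability decays with distance."""
          return g0 * np.exp(-dist_m**2 / (2.0 * sigma_m**2))

      print(halfnormal_detection(1000.0))   # ~0.044 for the assumed parameters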

  18. Diagnosis of Broken Bar Fault in Induction Machines Using Discrete Wavelet Transform without Slip Estimation

    Microsoft Academic Search

    Shahin Hedayati Kia; Humberto Henao; G.-A. Capolino

    2007-01-01

    The aim of this paper is to present a wavelet-based method for broken bar fault detection in induction machines. The frequency-domain methods that are commonly used need speed information or an accurate slip estimate to localize frequency components in the spectrum. Nevertheless, the fault frequency bandwidth can be well defined for any induction machine thanks to numerous previous investigations. The proposed
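    The appeal of the wavelet route is that it inspects the energy in fixed frequency bands rather than locating individual slip-dependent spectral lines. A minimal sketch, assuming a synthetic stator-current signal and PyWavelets for the decomposition; the sampling rate, sideband amplitude, and band interpretation are illustrative assumptions, not the paper's algorithm.

      import numpy as np
      import pywt

      fs = 2000.0                                    # sampling frequency [Hz]
      t = np.arange(0, 2.0, 1.0 / fs)
      current = np.sin(2 * np.pi * 50 * t)           # 50 Hz supply component
      current += 0.05 * np.sin(2 * np.pi * 46 * t)   # synthetic bar-fault sideband

      # Detail level d_k covers roughly [fs/2^(k+1), fs/2^k]; with fs = 2 kHz,
      # d5 spans ~31-62 Hz and brackets the (1 +/- 2s)*50 Hz sideband region,
      # so its energy rises with a fault regardless of the exact slip s.
      coeffs = pywt.wavedec(current, "db8", level=6)
      band_energies = [float(np.sum(c**2)) for c in coeffs[1:]]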

  19. Testing and Estimating Shape-Constrained Nonparametric Density and Regression in the Presence of Measurement Error

    PubMed Central

    Carroll, Raymond J.; Delaigle, Aurore; Hall, Peter

    2011-01-01

    In many applications we can expect that, or are interested to know if, a density function or a regression curve satisfies some specific shape constraints. For example, when the explanatory variable, X, represents the value taken by a treatment or dosage, the conditional mean of the response, Y , is often anticipated to be a monotone function of X. Indeed, if this regression mean is not monotone (in the appropriate direction) then the medical or commercial value of the treatment is likely to be significantly curtailed, at least for values of X that lie beyond the point at which monotonicity fails. In the case of a density, common shape constraints include log-concavity and unimodality. If we can correctly guess the shape of a curve, then nonparametric estimators can be improved by taking this information into account. Addressing such problems requires a method for testing the hypothesis that the curve of interest satisfies a shape constraint, and, if the conclusion of the test is positive, a technique for estimating the curve subject to the constraint. Nonparametric methodology for solving these problems already exists, but only in cases where the covariates are observed precisely. However in many problems, data can only be observed with measurement errors, and the methods employed in the error-free case typically do not carry over to this error context. In this paper we develop a novel approach to hypothesis testing and function estimation under shape constraints, which is valid in the context of measurement errors. Our method is based on tilting an estimator of the density or the regression mean until it satisfies the shape constraint, and we take as our test statistic the distance through which it is tilted. Bootstrap methods are used to calibrate the test. The constrained curve estimators that we develop are also based on tilting, and in that context our work has points of contact with methodology in the error-free case. PMID:21687809

  20. Bayesian wavelet-based image deconvolution: a GEM algorithm exploiting a class of heavy-tailed priors.

    PubMed

    Bioucas-Dias, José M

    2006-04-01

    Image deconvolution is formulated in the wavelet domain under the Bayesian framework. The well-known sparsity of the wavelet coefficients of real-world images is modeled by heavy-tailed priors belonging to the Gaussian scale mixture (GSM) class; i.e., priors given by a linear (finite or infinite) combination of Gaussian densities. This class includes, among others, the generalized Gaussian, the Jeffreys, and the Gaussian mixture priors. Necessary and sufficient conditions are stated under which the prior induced by a thresholding/shrinking denoising rule is a GSM. This result is then used to show that the prior induced by the "nonnegative garrote" thresholding/shrinking rule, herein termed the garrote prior, is a GSM. To compute the maximum a posteriori estimate, we propose a new generalized expectation maximization (GEM) algorithm, where the missing variables are the scale factors of the GSM densities. The maximization step of the underlying expectation maximization algorithm is replaced with a linear stationary second-order iterative method. The result is a GEM algorithm of O(N log N) computational complexity. In a series of benchmark tests, the proposed approach outperforms or performs similarly to state-of-the-art methods, demanding comparable (in some cases, much less) computational complexity. PMID:16579380
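    A minimal sketch of the nonnegative garrote rule applied coefficient-wise in a wavelet decomposition. The use of PyWavelets, the db4 wavelet, and the fixed threshold are assumptions made for illustration; this is plain garrote denoising, not the paper's GEM deconvolution machinery.

      import numpy as np
      import pywt

      def garrote(x, t):
          """Nonnegative garrote: x - t^2/x where |x| > t, and 0 otherwise."""
          x = np.asarray(x, dtype=float)
          safe = np.where(x == 0.0, 1.0, x)          # avoid division by zero
          return np.where(np.abs(x) > t, x - t**2 / safe, 0.0)

      def denoise(signal, wavelet="db4", level=4, t=0.1):
          coeffs = pywt.wavedec(signal, wavelet, level=level)
          # Shrink only the detail coefficients; keep the coarse approximation.
          coeffs = [coeffs[0]] + [garrote(c, t) for c in coeffs[1:]]
          return pywt.waverec(coeffs, wavelet)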

  1. A wavelet-based multi-scale spatiotemporal filtering approach for monitoring Earth surface deformation using space geodesy

    NASA Astrophysics Data System (ADS)

    Liu, Z.; Lundgren, P.; Rosen, P. A.; Agram, P.

    2013-12-01

    Accurate imaging of deformation processes in plate boundary zones at various space-time scales is crucial to advancing our knowledge of plate boundary tectonics and volcano dynamics. Space-borne geodetic measurements such as interferometric synthetic aperture radar (InSAR) and continuous GPS (CGPS) provide complementary measurements of surface deformation. InSAR provides line-of-sight measurements that are spatially dense but temporally coarse, while point-based GPS measurements provide 3-D displacement components at sub-daily to daily intervals but, depending on station distribution and spacing, are limited in resolving fine-scale deformation processes. The large volume of SAR data from existing satellite platforms and future SAR missions, together with GPS time series from large-scale CGPS networks (e.g., EarthScope/PBO), calls for efficient approaches that integrate the two data types to maximally extract the signal of interest and image time-variable deformation processes. We present a wavelet-based spatiotemporal filtering approach that integrates InSAR and GPS data at multiple scales in space and time. The approach consists of a series of InSAR noise correction modules, based on wavelet multi-resolution analysis (MRA), that correct the major noise components in InSAR images, and an InSAR time series analysis that combines MRA and small-baseline least-squares inversion with temporal filtering (wavelet or Kalman filter based) to remove turbulent troposphere noise. It also exploits, in a novel way, the temporal correlation between InSAR and GPS time series at multiple scales to reconstruct surface deformation measurements with dense spatial and temporal sampling. Compared to other approaches, this approach does not require a priori parameterization of temporal behaviour and provides a general way to discover signals of interest at different spatiotemporal scales. We present test cases in which known signals with realistic noise components are synthesized for analysis and comparison. We are in the process of improving the approach and generalizing it to real-world applications.
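    A minimal sketch of the multi-resolution analysis step: a deformation time series is split into additive per-scale components so that individual scales (e.g. short-period tropospheric noise) can be filtered or compared across sensors. The wavelet choice, level count, and boundary mode are assumptions for illustration.

      import numpy as np
      import pywt

      def mra_components(series, wavelet="sym8", level=4):
          """Return per-scale components that sum back to the input series
          (series length ideally a power of two for exact reconstruction)."""
          coeffs = pywt.wavedec(series, wavelet, level=level, mode="periodization")
          comps = []
          for i in range(len(coeffs)):
              # Reconstruct from one coefficient band at a time, zeroing the rest.
              kept = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
              comps.append(pywt.waverec(kept, wavelet, mode="periodization"))
          return comps   # comps[0] = smooth trend, comps[-1] = finest detail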

  2. Pattern recognition algorithms for density estimation of asphalt pavement during compaction: a simulation study

    NASA Astrophysics Data System (ADS)

    Shangguan, Pengcheng; Al-Qadi, Imad L.; Lahouar, Samer

    2014-08-01

    This paper presents the application of artificial neural network (ANN) based pattern recognition to extract the density information of asphalt pavement from simulated ground penetrating radar (GPR) signals. This study is part of research efforts into the application of GPR to monitor asphalt pavement density during compaction. The main challenge is to eliminate the effect of roller-sprayed water on GPR signals during compaction and to extract density information accurately. A calibration of the excitation function was conducted to provide an accurate match between the simulated signal and the real signal. A modified electromagnetic mixing model was then used to calculate the dielectric constant of asphalt mixture with water. A large database of GPR responses was generated from pavement models having different air void contents and various surface moisture contents using finite-difference time-domain simulation. Feature extraction was performed to extract density-related features from the simulated GPR responses. Air void contents were divided into five classes representing different compaction statuses. An ANN-based pattern recognition system was trained using the extracted features as inputs and the air void content classes as target outputs. The accuracy of the system was then evaluated on a separate test data set. Classification of air void contents using the developed algorithm was found to be highly accurate, indicating the effectiveness of this method for predicting asphalt concrete density.
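    A minimal sketch of the pattern recognition stage using a small feed-forward network; the random feature matrix, the five class labels, and the scikit-learn model dimensions stand in for the paper's extracted GPR features and ANN, and are purely illustrative.

      import numpy as np
      from sklearn.neural_network import MLPClassifier
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      X = rng.normal(size=(500, 8))        # 8 density-related features per trace
      y = rng.integers(0, 5, size=500)     # 5 compaction-status classes

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
      clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
      clf.fit(X_tr, y_tr)
      print("test accuracy:", clf.score(X_te, y_te))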

  3. Estimation of Neutral Density in Edge Plasma with Double Null Configuration in EAST

    NASA Astrophysics Data System (ADS)

    Zhang, Ling; Xu, Guosheng; Ding, Siye; Gao, Wei; Wu, Zhenwei; Chen, Yingjie; Huang, Juan; Liu, Xiaoju; Zang, Qing; Chang, Jiafeng; Zhang, Wei; Li, Yingying; Qian, Jinping

    2011-08-01

    In this work, population coefficients of hydrogen's n = 3 excited state from the hydrogen collisional-radiative (CR) model, taken from the data file of DEGAS 2, are used to calculate the photon emissivity coefficients (PECs) of hydrogen Balmer-α (n = 3 → n = 2) (Hα). The results are compared with the PECs from the Atomic Data and Analysis Structure (ADAS) database, and a good agreement is found. A magnetic surface-averaged neutral density profile of a typical double-null (DN) plasma in EAST is obtained using FRANTIC, the 1.5-D fluid transport code. It is found that the sum of the integrated Dα and Hα emission intensities calculated via the neutral density agrees with the measured results obtained with the absolutely calibrated multi-channel poloidal photodiode array systems viewing the lower divertor at the last closed flux surface (LCFS). It is revealed that the typical magnetic surface-averaged neutral density at the LCFS is about 3.5 × 10¹⁶ m⁻³.

  4. Fourier and Wavelet Based Characterisation of the Ionospheric Response to the Solar Eclipse of August, the 11th, 1999, Measured Through 1-minute Vertical Ionospheric Sounding

    NASA Astrophysics Data System (ADS)

    Sauli, P.; Abry, P.; Boska, J.

    2004-05-01

    The aim of the present work is to study the ionospheric response induced by the solar eclipse of August 11th, 1999. We provide Fourier and wavelet based characterisations of the propagation of the acoustic-gravity waves induced by the solar eclipse. The analysed data consist of profiles of electron concentration, derived from 1-minute vertical incidence ionospheric sounding measurements performed at the Pruhonice observatory (Czech Republic, 49.9N, 14.5E). The 1-minute sampling rate was chosen specifically to resolve modes below the acoustic cut-off period. The August period was characterised by a solar flux F10.7 = 128, steady solar wind, quiet magnetospheric conditions and low geomagnetic activity (the Dst index varied from -10 nT to -20 nT; the ΣKp index reached a value of 12+). The eclipse was also exceptional in that the solar disk was uniform. These conditions, and the fact that the culmination of the solar eclipse over central Europe occurred at local noon, are such that the observed ionospheric response is mainly that of the solar eclipse. We provide a full characterisation of the propagation of the waves in terms of times of occurrence, group and phase velocities, propagation direction, characteristic period and lifetime of each wave structure; note that the vertical sounding technique only yields the vertical component of each characteristic. Parameters are estimated by combining Fourier and wavelet analysis. Our conclusions confirm earlier theoretical and experimental findings regarding the generation and propagation of gravity waves, reported in [Altadill et al., 2001; Farges et al., 2001; Muller-Wodarg et al., 1998], and provide complementary characterisation using wavelet approaches. We also report new evidence for the generation and propagation of acoustic waves induced by the solar eclipse through the ionospheric F region. To our knowledge, this is the first time that acoustic waves have been demonstrated from ionospheric measurements and analysis. We report similarities in the generation and occurrence of acoustic and gravity modes in the eclipsed region. Our analysis techniques enable us to locate wave bursts at particular heights in the ionosphere, to specify the source region, and to characterise acoustic and gravity wave motion through the ionosphere. Altadill, D., J.G. Sole, E.M. Apostolov: Vertical structure of a gravity wave like oscillation in the ionosphere generated by the solar eclipse of August 11, 1999, J. Geophys. Res.-Space Phys., 106 (A10), 21419-21428, 2001. Farges, T., J.C. Jodogne, R. Bamford, Y. Le Roux, F. Gauthier, P.M. Villa, D. Altadill, J.G. Sole, G. Miro: Disturbances of the western European ionosphere during the total solar eclipse of 11 August 1999 measured by a wide ionosonde and radar network, J. Atmos. Solar-Terr. Phys., 63 (9), 915-924, 2001. Muller-Wodarg, I.C.F., A.D. Aylward, M. Lockwood: Effects of a Mid-Latitude Solar Eclipse on the Thermosphere and Ionosphere - A Modelling Study, Geophys. Res. Letters, 25 (20), 3787-3790, 1998.
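    A minimal sketch of the wavelet side of such an analysis: a continuous wavelet transform localises wave bursts jointly in time and in characteristic period. The synthetic series, Morlet wavelet, and scale range are assumptions; only the 1-minute sampling cadence is taken from the study.

      import numpy as np
      import pywt

      dt_min = 1.0                                            # 1-minute sampling
      series = np.random.default_rng(1).normal(size=512)      # stand-in for an
                                                              # electron-density series
      scales = np.arange(2, 128)
      coefs, freqs = pywt.cwt(series, scales, "morl", sampling_period=dt_min)

      periods_min = 1.0 / freqs                # characteristic periods [minutes]
      power = np.abs(coefs) ** 2               # scalogram, shape (scale, time)
      dominant_period = periods_min[np.argmax(power, axis=0)]   # per time step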

  5. Improvement of ionospheric electron density estimation with GPSMET occultations using Abel inversion and VTEC information

    Microsoft Academic Search

    M. Garcia-Fernandez; M. Hernandez-Pajares; M. Juan; J. Sanz

    2003-01-01

    As is known, the Abel transform has proven to be a useful tool that offers the possibility of obtaining a vertical description of the ionospheric electron density (among other neutral atmosphere parameters) through inversion of Global Positioning System (GPS) observations gathered by Low Earth Orbiters (LEO). This work is focused on extending the application of this technique to GPSMET data

  6. Dynamics of photosynthetic photon flux density (PPFD) and estimates in coastal northern California

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The seasonal trends and diurnal patterns of Photosynthetically Active Radiation (PAR) were investigated in the San Francisco Bay Area of Northern California from March through August in 2007 and 2008. During these periods, the daily values of PAR flux density (PFD), energy loading with PAR (PARE), a...

  7. High-dimensional probability density estimation with randomized ensembles of tree structured Bayesian networks

    E-print Network

    Wehenkel, Louis

    Let {X1, . . . , Xn} be a finite set of discrete random variables, and D = (x1, · · · , xd) be a data-set (sample) of joint observations. The approach aims at modeling the joint density of the set of random variables from a random sample of joint observations of the data-set. In this framework, ensembles of weakly fitted randomized models have been studied intensively

  8. Solar flux estimated from electron density and ion composition measurements in the lower thermosphere

    Microsoft Academic Search

    P. Chakrabarty; A. K. Saha; D. K. Chakrabarty

    1977-01-01

    Appropriate models of solar flux in X-rays and extreme ultraviolet (EUV) bands are presented in the light of the current status of ion chemistry in the region from 90 to 130 km and of reliable measurements of reaction rates, electron density, and ion composition. It is found that the EUV flux of Schmidtke (1976) and the X-ray flux of Manson

  9. PELLET COUNT INDICES COMPARED TO MARK-RECAPTURE ESTIMATES FOR EVALUATING SNOWSHOE HARE DENSITY

    Microsoft Academic Search

    L. SCOTT MILLS; KAREN E. HODGES

    Snowshoe hares (Lepus americanus) undergo remarkable cycles and are the primary prey base of Canada lynx (Lynx canadensis), a carnivore recently listed as threatened in the contiguous United States. Efforts to evaluate hare densities using pellets have traditionally been based on regression equations developed in the Yukon, Canada. In western Montana, we evaluated whether or not local regression equations

  10. TreeCol: a novel approach to estimating column densities in astrophysical simulations

    E-print Network

    Clark, Paul C; Klessen, Ralf S

    2011-01-01

    We present TreeCol, a new and efficient tree-based scheme to calculate column densities in numerical simulations. Knowing the column density in any direction at any location in space is a prerequisite for modelling the propagation of radiation through the computational domain. TreeCol therefore forms the basis for a fast, approximate method for modelling the attenuation of radiation within large numerical simulations. It constructs a HEALPix sphere at any desired location and accumulates the column density by walking the tree and adding up the contributions from all tree nodes that contribute along the line of sight of the pixel under consideration. In particular, when combined with widely used tree-based gravity solvers, the new scheme requires little additional computational cost. In a simulation with $N$ resolution elements, the computational cost of TreeCol scales as $N \log N$, instead of the $N^{5/3}$ scaling of most other radiative transfer schemes. TreeCol is naturally adaptable to arbitrary density distribu...
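    A minimal sketch of the accumulation step TreeCol describes: each tree node's contribution is binned into the HEALPix pixel its direction falls in. The flat node list and healpy-based pixelisation are illustrative assumptions; the real scheme reuses the gravity tree walk rather than iterating over a precomputed list.

      import numpy as np
      import healpy as hp

      def column_density_map(nodes, nside=4):
          """nodes: iterable of (theta, phi, sigma) tuples, where (theta, phi)
          is the node's direction as seen from the point of interest and sigma
          is its projected column-density contribution along that line of sight."""
          sigma_map = np.zeros(hp.nside2npix(nside))
          for theta, phi, sigma in nodes:
              sigma_map[hp.ang2pix(nside, theta, phi)] += sigma
          return sigma_map   # one column density per HEALPix pixel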

  11. THE POTENTIAL OF DISCRETE RETURN, SMALL FOOTPRINT AIRBORNE LASER SCANNING DATA FOR VEGETATION DENSITY ESTIMATION

    Microsoft Academic Search

    Felix Morsdorf; Benjamin Koetz; Erich Meier; K. I. Itten; Britta Allg

    2005-01-01

    We evaluate the potential of deriving a vegetation leaf area index (LAI) from small footprint airborne laser scanning data. Based on findings from large area histograms of discrete laser returns for two contrasting plots, LAI is estimated from the fraction of first to last and single returns inside the canopy. The canopy returns are classified using thresholding of LIDAR raw

  12. Band gap estimation for a triaminotrinitrobenzene molecular crystal by the density-functional method

    Microsoft Academic Search

    K. F. Grebenkin; A. L. Kutepov

    2000-01-01

    Numerical estimates show the crystalline triaminotrinitrobenzene explosive to be a wide-gap semiconductor with a band gap of 2–4 eV under normal conditions and 1.5–2.0 eV under pressures of 10–20 GPa, which are typical of shock-wave-initiated detonation.

  13. SAR amplitude probability density function estimation based on a generalized Gaussian model

    Microsoft Academic Search

    Gabriele Moser; Josiane Zerubia; Sebastiano B. Serpico

    2006-01-01

    In the context of remotely sensed data analysis, an important problem is the development of accurate models for the statistics of the pixel intensities. Focusing on synthetic aperture radar (SAR) data, this modeling process turns out to be a crucial task, for instance, for classification or for denoising purposes. In this paper, an innovative parametric estimation methodology for SAR amplitude

  14. Estimation of forest biophysical parameters using small-footprint lidar with low density in a coniferous forest

    NASA Astrophysics Data System (ADS)

    He, Qisheng; Xu, Hanwei; Zhang, Youjing

    2011-10-01

    This study aimed to estimate forest stand variables, such as mean height, mean crown diameter, mean diameter at breast height (DBH), basal area, tree density, and aboveground biomass, for the coniferous species Picea crassifolia in the Qilian Mountains, western China, using low-density small-footprint airborne LiDAR data. First, LiDAR points were classified into ground points and vegetation points. Statistics of the vegetation points, including height quantiles, mean height, and fractional cover, were then calculated. Stepwise multiple regression models were used to develop equations relating these statistics to field inventory data and field-based estimates of biomass for each sample plot. The results show that mean height, biomass and basal area are estimated with higher accuracy (R² of 0.830, 0.736 and 0.657, respectively), while mean DBH, crown diameter and tree density are estimated with lower accuracy (R² of 0.491, 0.356 and 0.403, respectively). Finally, spatial maps of the forest stand variables were produced using the stepwise multiple regression equations. Such maps are very useful for updating and modifying forest base maps and forest registers.
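    A minimal sketch of the regression step relating point-cloud statistics to a field-measured stand variable. The synthetic plot data and the single ordinary-least-squares fit stand in for the study's stepwise multiple regression, so everything here is an illustrative assumption.

      import numpy as np
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(2)
      h75 = rng.uniform(5, 25, 60)        # 75th height percentile per plot [m]
      cover = rng.uniform(0.2, 0.9, 60)   # fractional canopy cover per plot
      # Synthetic "field-measured" biomass with a known linear relationship:
      biomass = 8.0 * h75 + 40.0 * cover + rng.normal(0, 5, 60)

      X = np.column_stack([h75, cover])
      model = LinearRegression().fit(X, biomass)
      print("R^2 =", model.score(X, biomass))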

  15. Estimates of maximum energy density of cosmological gravitational-wave backgrounds

    NASA Astrophysics Data System (ADS)

    Giblin, John T.; Thrane, Eric

    2014-11-01

    The recent claim by BICEP2 of evidence for primordial gravitational waves has focused interest on the potential for early-Universe cosmology using gravitational waves. In addition to cosmic microwave background detectors, efforts are underway to carry out gravitational-wave astronomy with pulsar timing arrays, space-based detectors, and terrestrial detectors. These efforts will probe a wide range of times in the early Universe, during which backgrounds may have been produced through processes such as phase transitions or preheating. We derive a rule of thumb (not so strong as an upper limit) governing the maximum energy density of cosmological backgrounds. For most scenarios, we expect the energy density spectrum to peak at values of Ωgw(f) ≲ 10^(-12±2). We discuss the applicability of this rule and the implications for gravitational-wave astronomy.

  16. Comparison of volumetric breast density estimations from mammography and thorax CT.

    PubMed

    Geeraert, N; Klausz, R; Cockmartin, L; Muller, S; Bosmans, H; Bloch, I

    2014-08-01

    Breast density has become an important issue in current breast cancer screening, both as a recognized risk factor for breast cancer and because it decreases screening efficiency through the masking effect. Different qualitative and quantitative methods have been proposed to evaluate area-based breast density and volumetric breast density (VBD). We propose a validation method comparing the computation of VBD obtained from digital mammographic images (VBDMX) with the computation of VBD from thorax CT images (VBDCT). We computed VBDMX by applying a conversion function to the pixel values in the mammographic images, based on models determined from images of breast equivalent material. VBDCT is computed from the average Hounsfield unit (HU) over the manually delineated breast volume in the CT images. This average HU is then compared to the HU of adipose and fibroglandular tissues from patient images. The VBDMX method was applied to 663 mammographic patient images taken on two Siemens Inspiration systems (hospL) and one GE Senographe Essential (hospJ). For the comparison study, we collected images from patients who had a thorax CT and a mammography screening exam within the same year. In total, thorax CT images corresponding to 40 breasts (hospL) and 47 breasts (hospJ) were retrieved. Averaged over the 663 mammographic images, the median VBDMX was 14.7%. The density distribution and the inverse correlation between VBDMX and breast thickness were found as expected. The average difference between VBDMX and VBDCT is smaller for hospJ (4%) than for hospL (10%). This study shows that it is possible to compare VBDMX with the VBD from thorax CT exams, without additional examinations. In spite of the limitations caused by poorly defined breast limits, the calibration of mammographic images to local VBD provides opportunities for further quantitative evaluations. PMID:25049219
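    A minimal sketch of the HU-to-VBD conversion implied by the two-tissue mixture described above; the reference HU values for adipose and fibroglandular tissue are illustrative assumptions (the paper derives them from patient images).

      def vbd_from_hu(mean_hu, hu_adipose=-100.0, hu_fibroglandular=40.0):
          """Volumetric breast density [%] from the mean HU over the breast
          volume, assuming a linear two-tissue (adipose/fibroglandular) mixture."""
          frac = (mean_hu - hu_adipose) / (hu_fibroglandular - hu_adipose)
          return 100.0 * min(max(frac, 0.0), 1.0)   # clamp to [0, 100] %

      print(vbd_from_hu(-65.0))   # ~25% fibroglandular by volume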

  17. Comparison of volumetric breast density estimations from mammography and thorax CT

    NASA Astrophysics Data System (ADS)

    Geeraert, N.; Klausz, R.; Cockmartin, L.; Muller, S.; Bosmans, H.; Bloch, I.

    2014-08-01

    Breast density has become an important issue in current breast cancer screening, both as a recognized risk factor for breast cancer and because it decreases screening efficiency through the masking effect. Different qualitative and quantitative methods have been proposed to evaluate area-based breast density and volumetric breast density (VBD). We propose a validation method comparing the computation of VBD obtained from digital mammographic images (VBDMX) with the computation of VBD from thorax CT images (VBDCT). We computed VBDMX by applying a conversion function to the pixel values in the mammographic images, based on models determined from images of breast equivalent material. VBDCT is computed from the average Hounsfield unit (HU) over the manually delineated breast volume in the CT images. This average HU is then compared to the HU of adipose and fibroglandular tissues from patient images. The VBDMX method was applied to 663 mammographic patient images taken on two Siemens Inspiration systems (hospL) and one GE Senographe Essential (hospJ). For the comparison study, we collected images from patients who had a thorax CT and a mammography screening exam within the same year. In total, thorax CT images corresponding to 40 breasts (hospL) and 47 breasts (hospJ) were retrieved. Averaged over the 663 mammographic images, the median VBDMX was 14.7%. The density distribution and the inverse correlation between VBDMX and breast thickness were found as expected. The average difference between VBDMX and VBDCT is smaller for hospJ (4%) than for hospL (10%). This study shows that it is possible to compare VBDMX with the VBD from thorax CT exams, without additional examinations. In spite of the limitations caused by poorly defined breast limits, the calibration of mammographic images to local VBD provides opportunities for further quantitative evaluations.

  18. Quantitative estimates of root densities at minirhizotrons differ from those in the bulk soil

    Microsoft Academic Search

    Rose-Marie Rytter; Lars Rytter

    Aims  A key issue related to the usefulness of the minirhizotron technique is whether root presence and behaviour in the soil zone at the minirhizotron interface are consistent with those in the bulk soil. We wanted to test the null hypotheses that there were no differences in root densities or specific root length (SRL) between those positions. The effects of different

  19. Density and population estimate of gibbons ( Hylobates albibarbis) in the Sabangau catchment, Central Kalimantan, Indonesia

    Microsoft Academic Search

    Susan M. Cheyne; Claire J. H. Thompson; Abigail C. Phillips; Robyn M. C. Hill; Suwido H. Limin

    2008-01-01

    We demonstrate that although auditory sampling is a useful tool, this method alone will not provide a truly accurate indication of population size, density and distribution of gibbons in an area. If auditory sampling alone is employed, we show that data collection must take place over a sufficient period to account for variation in calling patterns across seasons. The population

  20. Solar flux estimated from electron density and ion composition measurements in the lower thermosphere

    Microsoft Academic Search

    P. Chakrabarty; D. K. Chakrabarty; A. K. Saha

    1977-01-01

    Appropriate models of solar flux in X-ray and extreme ultraviolet (XEUV) bands are presented in the light of the present status of ion chemistry in the region from 90 to 130 km and of reliable measurements of reaction rates, electron density, and ion composition. It was found that the EUV flux of Schmidtke (1976) and the X-ray flux of Manson (1976) give