Science.gov

Sample records for wavelet-based density estimation

  1. Wavelet-based density estimation for noise reduction in plasma simulations using particles

    SciTech Connect

    Nguyen van yen, Romain; Del-Castillo-Negrete, Diego B; Schneider, Kai; Farge, Marie; Chen, Guangye

    2010-01-01

    For given computational resources, one of the main limitations in the accuracy of plasma simulations using particles comes from the noise due to limited statistical sampling in the reconstruction of the particle distribution function. A method based on wavelet multiresolution analysis is proposed and tested to reduce this noise. The method, known as wavelet-based density estimation (WBDE), was previously introduced in the statistical literature to estimate probability densities given a finite number of independent measurements. Its novel application to plasma simulations can be viewed as a natural extension of the finite size particles (FSP) approach, with the advantage of estimating more accurately distribution functions that have localized sharp features. The proposed method preserves the moments of the particle distribution function to a good level of accuracy, has no constraints on the dimensionality of the system, does not require an a priori selection of a global smoothing scale, and is able to adapt locally to the smoothness of the density based on the given discrete particle data. Most importantly, the computational cost of the denoising stage is of the same order as one timestep of an FSP simulation. The method is compared with a recently proposed proper orthogonal decomposition based method, and it is tested with particle data corresponding to strongly collisional, weakly collisional, and collisionless plasma simulations.
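
    A minimal sketch of the WBDE idea described above, for the one-dimensional case: bin the particle positions, wavelet-transform the empirical density, threshold the detail coefficients, and reconstruct. The wavelet choice, universal threshold, and renormalization below are illustrative assumptions, not the exact recipe of the paper.

```python
import numpy as np
import pywt

def wbde_1d(particles, n_bins=256, wavelet="db4", level=5):
    # Empirical (noisy) density from the particle positions
    hist, edges = np.histogram(particles, bins=n_bins, density=True)
    coeffs = pywt.wavedec(hist, wavelet, level=level)
    # Noise scale from the finest detail coefficients, universal threshold
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(hist.size))
    denoised = [coeffs[0]] + [pywt.threshold(c, thr, mode="hard") for c in coeffs[1:]]
    density = pywt.waverec(denoised, wavelet)[:hist.size]
    density = np.clip(density, 0.0, None)                  # enforce nonnegativity
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, density / np.trapz(density, centers)   # renormalize to unit mass

# Example: denoise the sampled density of a bimodal particle distribution
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2.0, 0.3, 2000), rng.normal(1.0, 1.0, 3000)])
centers, f_hat = wbde_1d(x)
```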

  2. Value-at-risk estimation with wavelet-based extreme value theory: Evidence from emerging markets

    NASA Astrophysics Data System (ADS)

    Cifter, Atilla

    2011-06-01

    This paper introduces wavelet-based extreme value theory (EVT) for univariate value-at-risk estimation. Wavelets and EVT are combined for volatility forecasting to estimate a hybrid model. In the first stage, wavelets are used to set the threshold of the generalized Pareto distribution, and in the second stage, EVT is applied with this wavelet-based threshold. This new model is applied to two major emerging stock markets: the Istanbul Stock Exchange (ISE) and the Budapest Stock Exchange (BUX). The relative performance of wavelet-based EVT is benchmarked against the RiskMetrics-EWMA, ARMA-GARCH, generalized Pareto distribution, and conditional generalized Pareto distribution models. The empirical results show that wavelet-based extreme value theory increases the predictive performance of financial forecasting according to the number of violations and tail-loss tests. The superior forecasting performance of the wavelet-based EVT model is also consistent with Basel II requirements, and this new model can be used by financial institutions as well.
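
    A minimal two-stage sketch in the spirit of the abstract, assuming a simple coupling in which a wavelet approximation of the loss series sets the exceedance threshold and a generalized Pareto distribution is fitted to the exceedances; the paper's exact construction may differ.

```python
import numpy as np
import pywt
from scipy.stats import genpareto

def wavelet_evt_var(returns, q=0.99, wavelet="db4", level=4):
    losses = -np.asarray(returns, float)
    coeffs = pywt.wavedec(losses, wavelet, level=level)
    # Stage 1: smooth (approximation-only) reconstruction sets the threshold
    smooth = pywt.waverec([coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]],
                          wavelet)[:losses.size]
    u = np.quantile(smooth, 0.95)
    # Stage 2: fit a GPD to the exceedances over the wavelet-based threshold
    exceed = losses[losses > u] - u
    xi, _, beta = genpareto.fit(exceed, floc=0)        # shape, loc (fixed), scale
    n, n_u = losses.size, exceed.size
    # Standard peaks-over-threshold value-at-risk formula
    return u + (beta / xi) * (((n / n_u) * (1.0 - q)) ** (-xi) - 1.0)
```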

  3. Wavelet-Based Speech Enhancement Using Time-Adapted Noise Estimation

    NASA Astrophysics Data System (ADS)

    Lei, Sheau-Fang; Tung, Ying-Kai

    Spectral subtraction is commonly used for speech enhancement in single-channel systems because of the simplicity of its implementation. However, this algorithm introduces perceptually musical noise while suppressing the background noise. In this paper, we propose a wavelet-based approach for suppressing the background noise for speech enhancement in a single-channel system. The wavelet packet transform, which emulates the human auditory system, is used to decompose the noisy signal into critical bands. Wavelet thresholding is then temporally adjusted with the noise power by time-adapted noise estimation. The proposed algorithm can efficiently suppress the noise while reducing speech distortion. Experimental results, including several objective measurements, show that the proposed wavelet-based algorithm outperforms spectral subtraction and other wavelet-based denoising approaches for speech enhancement in nonstationary noise environments.
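
    A hedged sketch of the wavelet-packet denoising step: decompose into subbands, estimate a per-band noise level, and soft-threshold. The perceptual (critical-band) tree layout and the time-adaptive noise tracking of the paper are simplified here to a single noise estimate per band taken from an assumed noise-only lead-in segment.

```python
import numpy as np
import pywt

def wp_denoise(noisy, fs, wavelet="db8", level=5, noise_seconds=0.25):
    n_noise = int(noise_seconds * fs)                  # assumed noise-only lead-in
    wp = pywt.WaveletPacket(noisy, wavelet, mode="symmetric", maxlevel=level)
    wp_noise = pywt.WaveletPacket(noisy[:n_noise], wavelet, mode="symmetric", maxlevel=level)
    for node, noise_node in zip(wp.get_level(level, order="freq"),
                                wp_noise.get_level(level, order="freq")):
        sigma = np.median(np.abs(noise_node.data)) / 0.6745   # per-band noise scale
        thr = sigma * np.sqrt(2.0 * np.log(max(node.data.size, 2)))
        node.data = pywt.threshold(node.data, thr, mode="soft")
    return wp.reconstruct(update=True)[:len(noisy)]
```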

  4. Wavelet-based Poisson rate estimation using the Skellam distribution

    NASA Astrophysics Data System (ADS)

    Hirakawa, Keigo; Baqai, Farhan; Wolfe, Patrick J.

    2009-02-01

    Owing to the stochastic nature of discrete processes such as photon counts in imaging, real-world data measurements often exhibit heteroscedastic behavior. In particular, time series components and other measurements may frequently be assumed to be non-iid Poisson random variables, whose rate parameter is proportional to the underlying signal of interest; witness the literature in digital communications, signal processing, astronomy, and magnetic resonance imaging applications. In this work, we show that certain wavelet and filterbank transform coefficients corresponding to vector-valued measurements of this type are distributed as sums and differences of independent Poisson counts, taking the so-called Skellam distribution. While exact estimates rarely admit analytical forms, we present Skellam mean estimators under both frequentist and Bayes models, as well as computationally efficient approximations and shrinkage rules, that may be interpreted as Poisson rate estimation performed in certain wavelet/filterbank transform domains. This indicates a promising approach for denoising of Poisson counts in the above-mentioned applications.
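
    A small numerical check of the key observation: an (unnormalized) Haar difference of two independent Poisson counts is Skellam-distributed, with mean equal to the rate difference and variance equal to the rate sum. This is a sanity check, not the frequentist/Bayes estimators developed in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
lam1, lam2 = 7.0, 3.0
x1 = rng.poisson(lam1, 200_000)
x2 = rng.poisson(lam2, 200_000)
d = x1 - x2                          # Haar detail, up to normalization
print(d.mean(), "~", lam1 - lam2)    # Skellam mean, approximately 4
print(d.var(),  "~", lam1 + lam2)    # Skellam variance, approximately 10
```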

  5. Estimation of Modal Parameters Using a Wavelet-Based Approach

    NASA Technical Reports Server (NTRS)

    Lind, Rick; Brenner, Marty; Haley, Sidney M.

    1997-01-01

    Modal stability parameters are extracted directly from aeroservoelastic flight test data by decomposition of accelerometer response signals into time-frequency atoms. Logarithmic sweeps and sinusoidal pulses are used to generate DAST closed loop excitation data. Novel wavelets constructed to extract modal damping and frequency explicitly from the data are introduced. The so-called Haley and Laplace wavelets are used to track time-varying modal damping and frequency in a matching pursuit algorithm. Estimation of the trend to aeroservoelastic instability is demonstrated successfully from analysis of the DAST data.

  6. Measuring mass density and ultrasonic wave velocity: A wavelet-based method applied in ultrasonic reflection mode.

    PubMed

    Metwally, Khaled; Lefevre, Emmanuelle; Baron, Cécile; Zheng, Rui; Pithioux, Martine; Lasaygues, Philippe

    2016-02-01

    When assessing ultrasonic measurements of material parameters, the signal processing is an important part of the inverse problem. Measurements of thickness, ultrasonic wave velocity and mass density are required for such assessments. This study investigates the feasibility and the robustness of a wavelet-based processing (WBP) method based on a Jaffard-Meyer algorithm for calculating these parameters simultaneously and independently, using a single ultrasonic signal in the reflection mode. The appropriate transmitted incident wave, correlated with the mathematical properties of the wavelet decomposition, was determined using an adapted identification procedure to build a mathematically equivalent model for the electro-acoustic system. The method was tested on three groups of samples (polyurethane resin, bone and wood) using one 1-MHz transducer. For thickness and velocity measurements, the WBP method gave a relative error lower than 1.5%. The relative errors in the mass density measurements ranged between 0.70% and 2.59%. Despite discrepancies between manufactured and biological samples, the results obtained on the three groups of samples using the WBP method in the reflection mode were remarkably consistent, indicating that it is a reliable and efficient means of simultaneously assessing the thickness and the velocity of the ultrasonic wave propagating in the medium, and the apparent mass density of the material. PMID:26403278

  7. Wavelet-based time-delay estimation for time-resolved turbulent flow analysis

    SciTech Connect

    Jakubowski, M.; Fonck, R. J.; Fenzi, C.; McKee, G. R.

    2001-01-01

    A wavelet-transform-based spectral analysis is examined for application to beam emission spectroscopy (BES) data to extract poloidal rotation velocity fluctuations from the density turbulence data. Frequency transfer functions for a wavelet cross-phase extraction method are calculated. Numerical noise is reduced by shifting the data to give an average zero time delay, and the applicable frequency range is extended by numerical oversampling of the measured density fluctuations. This approach offers potential for direct measurements of turbulent transport and detection of zonal flows in tokamak plasma turbulence.
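
    A hedged sketch of wavelet cross-phase time-delay estimation between two separated channels: take complex Morlet continuous wavelet transforms, form a time-averaged cross-spectrum, and convert the cross-phase at each frequency into a delay. The wavelet parameters, scales, and averaging are illustrative assumptions rather than the BES processing chain itself.

```python
import numpy as np
import pywt

def wavelet_time_delay(sig1, sig2, dt, scales=None, wavelet="cmor1.5-1.0"):
    scales = np.arange(4, 128) if scales is None else scales
    W1, freqs = pywt.cwt(sig1, scales, wavelet, sampling_period=dt)
    W2, _ = pywt.cwt(sig2, scales, wavelet, sampling_period=dt)
    cross = (np.conj(W1) * W2).mean(axis=1)        # time-averaged cross-spectrum per scale
    phase = np.angle(cross)
    return freqs, phase / (2.0 * np.pi * freqs)    # delay (s) as a function of frequency
```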

  8. Wavelet based sparse source imaging technique.

    PubMed

    Ding, Lei; Zhu, Min; Liao, Ke

    2013-01-01

    The present study proposed a novel multi-resolution wavelet to efficiently compress cortical current densities on the highly convoluted cortical surface. The basis function of the proposed wavelet is supported on triangular faces of the cortical mesh, and it is thus named the face-based wavelet to distinguish it from other vertex-based wavelets. The proposed face-based wavelet was used as a transform to obtain a sparse representation of cortical sources and was then integrated into the framework of L1-norm regularizations with the purpose of improving the performance of sparse source imaging (SSI) in solving EEG/MEG inverse problems. Monte Carlo simulations were conducted with multiple extended sources (up to ten) at random locations. Experimental MEG data from an auditory-induced language task were further adopted to evaluate the performance of the proposed wavelet-based SSI technique. The present results indicated that the face-based wavelet can efficiently compress cortical current densities and has better performance than the vertex-based wavelet in helping inverse source reconstructions in terms of estimation accuracies in source localization and source extent. Experimental results further indicated improved detection performance of the face-based wavelet as compared with the vertex-based wavelet in the framework of SSI. This suggests that the proposed wavelet-based SSI can become a promising tool in studying brain functions and networks. PMID:24110961

  9. Airborne Crowd Density Estimation

    NASA Astrophysics Data System (ADS)

    Meynberg, O.; Kuschk, G.

    2013-10-01

    This paper proposes a new method for estimating human crowd densities from aerial imagery. Applications benefiting from an accurate crowd monitoring system are mainly found in the security sector. Normally, crowd density estimation is done with in-situ camera systems mounted at elevated locations, although this is not appropriate in the case of very large crowds with thousands of people. Using airborne camera systems in these scenarios is a new research topic. Our method uses a preliminary filtering of the whole image space by suitable and fast interest point detection, resulting in a number of image regions possibly containing human crowds. Validation of these candidates is done by transforming the corresponding image patches into a low-dimensional and discriminative feature space and classifying the results using a support vector machine (SVM). The feature space is spanned by texture features computed by applying a Gabor filter bank with varying scale and orientation to the image patches. For evaluation, we use 5 different image datasets acquired by the 3K+ aerial camera system of the German Aerospace Center during real mass events like concerts or football games. To evaluate the robustness and generality of our method, these datasets are taken from different flight heights between 800 m and 1500 m above ground (keeping a fixed focal length) and varying daylight and shadow conditions. The results of our crowd density estimation are evaluated against a reference data set obtained by manually labeling tens of thousands of individual persons in the corresponding datasets, and show that our method is able to estimate human crowd densities in challenging realistic scenarios.
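
    A sketch of the candidate-validation stage, assuming a conventional Gabor filter bank plus SVM pipeline; the filter frequencies, orientations, and classifier settings below are illustrative and not the exact configuration used with the 3K+ imagery.

```python
import numpy as np
from scipy import ndimage
from skimage.filters import gabor_kernel
from sklearn.svm import SVC

def gabor_features(patch, frequencies=(0.1, 0.2, 0.3), n_orient=4):
    feats = []
    for f in frequencies:
        for theta in np.arange(n_orient) * np.pi / n_orient:
            k = np.real(gabor_kernel(f, theta=theta))
            resp = ndimage.convolve(patch.astype(float), k, mode="reflect")
            feats += [resp.mean(), resp.var()]          # texture energy statistics
    return np.array(feats)

def train_crowd_classifier(patches, labels):
    # patches: candidate regions from the interest-point stage; labels: 1 = crowd, 0 = background
    X = np.vstack([gabor_features(p) for p in patches])
    return SVC(kernel="rbf", C=10.0, gamma="scale").fit(X, labels)
```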

  10. Wavelet-based polarimetry analysis

    NASA Astrophysics Data System (ADS)

    Ezekiel, Soundararajan; Harrity, Kyle; Farag, Waleed; Alford, Mark; Ferris, David; Blasch, Erik

    2014-06-01

    Wavelet transformation has become a cutting-edge and promising approach in the field of image and signal processing. A wavelet is a waveform of effectively limited duration that has an average value of zero. Wavelet analysis is done by breaking up the signal into shifted and scaled versions of the original signal. The key advantage of a wavelet is that it is capable of revealing smaller changes, trends, and breakdown points that are not revealed by other techniques such as Fourier analysis. The phenomenon of polarization has been studied for quite some time and is a very useful tool for target detection and tracking. Long Wave Infrared (LWIR) polarization is beneficial for detecting camouflaged objects and is a useful approach when identifying and distinguishing manmade objects from natural clutter. In addition, the Stokes polarization parameters, which are calculated from 0°, 45°, 90°, 135°, right circular, and left circular intensity measurements, provide spatial orientations of target features and suppress natural features. In this paper, we propose a wavelet-based polarimetry analysis (WPA) method to analyze Long Wave Infrared polarimetry imagery to discriminate targets such as dismounts and vehicles from background clutter. These parameters can be used for image thresholding and segmentation. Experimental results show that the wavelet-based polarimetry analysis is efficient and can be used in a wide range of applications such as change detection, shape extraction, target recognition, and feature-aided tracking.
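
    A minimal sketch of the Stokes-parameter computation referred to above, followed by a single-level 2D wavelet decomposition of the degree of linear polarization; the thresholding and segmentation steps of the WPA method are omitted, the circular components are dropped, and the Haar choice is an assumption.

```python
import numpy as np
import pywt

def stokes_and_dolp(I0, I45, I90, I135):
    S0 = I0 + I90                     # total intensity
    S1 = I0 - I90
    S2 = I45 - I135
    dolp = np.sqrt(S1**2 + S2**2) / np.maximum(S0, 1e-12)   # degree of linear polarization
    return (S0, S1, S2), dolp

def wavelet_decompose_dolp(dolp):
    # Single-level 2D DWT of the DoLP image: approximation plus three detail subbands
    cA, (cH, cV, cD) = pywt.dwt2(dolp, "haar")
    return cA, cH, cV, cD
```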

  11. Density Estimation with Mercer Kernels

    NASA Technical Reports Server (NTRS)

    Macready, William G.

    2003-01-01

    We present a new method for density estimation based on Mercer kernels. The density estimate can be understood as the density induced on a data manifold by a mixture of Gaussians fit in a feature space. As usual, the feature space and data manifold are defined with any suitable positive-definite kernel function. We modify the standard EM algorithm for mixtures of Gaussians to infer the parameters of the density. One benefit of the approach is its conceptual simplicity and uniform applicability over many different types of data. Preliminary results are presented for a number of simple problems.

  12. Estimation of coastal density gradients

    NASA Astrophysics Data System (ADS)

    Howarth, M. J.; Palmer, M. R.; Polton, J. A.; O'Neill, C. K.

    2012-04-01

    Density gradients in coastal regions with significant freshwater input are large and variable and are a major control of nearshore circulation. However, their measurement is difficult, especially where the gradients are largest close to the coast, with significant uncertainties arising from a variety of factors: spatial and time scales are small, tidal currents are strong and water depths shallow. Whilst temperature measurements are relatively straightforward, measurements of salinity (the dominant control of spatial variability) can be less reliable in turbid coastal waters. Liverpool Bay has strong tidal mixing and receives fresh water principally from the Dee, Mersey, Ribble and Conwy estuaries, each with different catchment influences. Horizontal and vertical density gradients are variable both in space and time. The water column stratifies intermittently. A Coastal Observatory has been operational since 2002, with regular (quasi-monthly) CTD surveys on a 9 km grid, an in situ station, an instrumented ferry travelling between Birkenhead and Dublin, and a shore-based HF radar system measuring surface currents and waves. These measurements are complementary, each having different space-time characteristics. For coastal gradients the ferry is particularly useful, since measurements are made right from the mouth of the Mersey. From measurements at the in situ site alone, density gradients can only be estimated from the tidal excursion. A suite of coupled physical, wave and ecological models is run in association with these measurements. The models, here on a 1.8 km grid, enable detailed estimation of nearshore density gradients, provided appropriate river run-off data are available. Examples are presented of the density gradients estimated from the different measurements and models, together with accuracies and uncertainties, showing that systematic time series measurements within a few kilometres of the coast are a high priority. (Here gliders are an exciting prospect for detailed regular measurements to fill this gap.) The consequences for, and sensitivity of, circulation estimates are presented using both numerical and analytic models.

  13. Dependence and risk assessment for oil prices and exchange rate portfolios: A wavelet based approach

    NASA Astrophysics Data System (ADS)

    Aloui, Chaker; Jammazi, Rania

    2015-10-01

    In this article, we propose a wavelet-based approach to accommodate the stylized facts and complex structure of financial data caused by frequent and abrupt changes of markets and noises. Specifically, we show how the combination of both continuous and discrete wavelet transforms with traditional financial models helps improve portfolio market risk assessment. In the empirical stage, three wavelet-based models (wavelet-EGARCH with dynamic conditional correlations, wavelet-copula, and wavelet-extreme value) are considered and applied to crude oil price and US dollar exchange rate data. Our findings show that the wavelet-based approach provides an effective and powerful tool for detecting extreme moments and improving the accuracy of VaR and Expected Shortfall estimates of oil-exchange rate portfolios after noise is removed from the original data.

  14. Wavelet-based digital image watermarking.

    PubMed

    Wang, H J; Su, P C; Kuo, C C

    1998-12-01

    A wavelet-based watermark casting scheme and a blind watermark retrieval technique are investigated in this research. An adaptive watermark casting method is developed to first determine significant wavelet subbands and then select a couple of significant wavelet coefficients in these subbands to embed watermarks. A blind watermark retrieval technique that can detect the embedded watermark without the help from the original image is proposed. Experimental results show that the embedded watermark is robust against various signal processing and compression attacks. PMID:19384400
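
    A hedged sketch of watermark embedding in significant wavelet coefficients of one detail subband; the adaptive subband selection and the blind retrieval procedure described in the abstract are not reproduced, and the multiplicative marking rule is an assumption.

```python
import numpy as np
import pywt

def embed_watermark(image, bits, alpha=0.05, wavelet="haar"):
    bits = np.asarray(bits)                               # 0/1 watermark bits
    cA, (cH, cV, cD) = pywt.dwt2(image.astype(float), wavelet)
    flat = cH.flatten()
    idx = np.argsort(np.abs(flat))[::-1][:bits.size]      # most significant coefficients
    flat[idx] *= 1.0 + alpha * (2.0 * bits - 1.0)         # +/- multiplicative mark
    return pywt.idwt2((cA, (flat.reshape(cH.shape), cV, cD)), wavelet)
```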

  15. Image denoising via Bayesian estimation of local variance with Maxwell density prior

    NASA Astrophysics Data System (ADS)

    Kittisuwan, Pichid

    2015-10-01

    The need for efficient image denoising methods has grown with the massive production of digital images and movies of all kinds. The distortion of images by additive white Gaussian noise (AWGN) is common during their processing and transmission. This paper is concerned with dual-tree complex wavelet-based image denoising using Bayesian techniques. Indeed, one of the cruxes of Bayesian image denoising algorithms is to estimate the local variance of the image. Here, we employ maximum a posteriori (MAP) estimation to calculate the local observed variance, with a Maxwell density prior for the local observed variance and a Gaussian distribution for the noisy wavelet coefficients. Evidently, our selection of prior distribution is motivated by analytical and computational tractability. The experimental results show that the proposed method yields good denoising results.
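
    Stated compactly, and under the simplifying assumption that the noisy coefficients in a local neighbourhood are modelled as zero-mean Gaussian with a common standard deviation, the MAP problem sketched above reads

$$
\hat{\sigma} \;=\; \arg\max_{\sigma>0}\Big[\sum_{i\in\mathcal{N}}\log \mathcal{N}(y_i;\,0,\,\sigma^2) \;+\; \log p_{\mathrm{Maxwell}}(\sigma)\Big],
\qquad
p_{\mathrm{Maxwell}}(\sigma)=\sqrt{\tfrac{2}{\pi}}\,\frac{\sigma^{2}}{a^{3}}\,e^{-\sigma^{2}/(2a^{2})},
$$

    where the $y_i$ are the observed coefficients in the neighbourhood $\mathcal{N}$ and $a$ is the Maxwell scale parameter; the paper's exact likelihood, which also accounts for the AWGN variance, is not reproduced here.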

  16. Wavelet-based fractal image compression

    NASA Astrophysics Data System (ADS)

    Zhang, Yang; Zhai, Guangtao

    2003-09-01

    In this paper, a wavelet-based fractal image coding algorithm is proposed. The conventional fractal image coding in the spatial domain is extended to the wavelet domain by taking advantage of the self-similarities among different wavelet subtrees through proper affine transformations. This method is based on the combination of the theory of multi-resolution analysis with iterated function systems, introducing some effective block-classification schemes. The original image is first transformed into the wavelet domain, in which fractal compression and arithmetic coding are performed. By classifying the D blocks and R blocks in this domain, the approach can significantly reduce the computational complexity and encoding time. Meanwhile, the hybrid image compression algorithm obtains much better coding performance in terms of PSNR with error modification. This is the main advantage of this method. A set of experiments and simulations shows the potential of using these classification techniques in the wavelet domain for further improvements.

  17. Wavelet-based acoustic recognition of aircraft

    SciTech Connect

    Dress, W.B.; Kercel, S.W.

    1994-09-01

    We describe a wavelet-based technique for identifying aircraft from acoustic emissions during take-off and landing. Tests show that the sensor can be a single, inexpensive hearing-aid microphone placed close to the ground. The paper describes data collection, analysis by various techniques, methods of event classification, and extraction of certain physical parameters from wavelet subspace projections. The primary goal of this paper is to show that wavelet analysis can be used as a divide-and-conquer first step in signal processing, providing both simplification and noise filtering. The idea is to project the original signal onto the orthogonal wavelet subspaces, both details and approximations. Subsequent analysis, such as system identification, nonlinear systems analysis, and feature extraction, is then carried out on the various signal subspaces.

  18. Varying kernel density estimation on ℝ+

    PubMed Central

    Mnatsakanov, Robert; Sarkisian, Khachatur

    2015-01-01

    In this article a new nonparametric density estimator based on a sequence of asymmetric kernels is proposed. This method is natural when estimating an unknown density function of a positive random variable. The rates of the Mean Squared Error, the Mean Integrated Squared Error, and the L1-consistency are investigated. Simulation studies are conducted to compare the new estimator and its modified version with the traditional kernel density construction. PMID:26740729
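
    As a generic illustration of the asymmetric-kernel idea on the positive half-line, the sketch below uses gamma kernels whose shape varies with the evaluation point; the specific varying-kernel construction and optimal bandwidth rules of the paper may differ.

```python
import numpy as np
from scipy.stats import gamma

def gamma_kde(x_grid, data, b=0.05):
    # Asymmetric (gamma) kernel density estimate on the positive half-line:
    # the kernel shape depends on the evaluation point, so no mass leaks below zero.
    data = np.asarray(data, float)
    est = np.empty_like(np.asarray(x_grid, float))
    for i, x in enumerate(x_grid):
        est[i] = gamma.pdf(data, a=x / b + 1.0, scale=b).mean()
    return est
```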

  19. Image Denoising via Bayesian Estimation of Statistical Parameter Using Generalized Gamma Density Prior in Gaussian Noise Model

    NASA Astrophysics Data System (ADS)

    Kittisuwan, Pichid

    2015-03-01

    The application of image processing in industry has shown remarkable success over the last decade, for example, in security and telecommunication systems. The denoising of natural images corrupted by Gaussian noise is a classical problem in image processing, and image denoising is an indispensable step during image processing. This paper is concerned with dual-tree complex wavelet-based image denoising using Bayesian techniques. One of the cruxes of Bayesian image denoising algorithms is to estimate the statistical parameters of the image. Here, we employ maximum a posteriori (MAP) estimation to calculate the local observed variance, with a generalized Gamma density prior for the local observed variance and a Laplacian or Gaussian distribution for the noisy wavelet coefficients. Evidently, our selection of prior distribution is motivated by the efficient and flexible properties of the generalized Gamma density. The experimental results show that the proposed method yields good denoising results.

  20. Wavelet based fractal analysis of DNA sequences

    NASA Astrophysics Data System (ADS)

    Arneodo, A.; d'Aubenton-Carafa, Y.; Bacry, E.; Graves, P. V.; Muzy, J. F.; Thermes, C.

    The fractal scaling properties of DNA sequences are analyzed using the wavelet transform. Mapping nucleotide sequences onto a "DNA walk" produces fractal landscapes that can be studied quantitatively by applying the so-called wavelet transform modulus maxima method. This method provides a natural generalization of the classical box-counting techniques to fractal signals, the wavelets playing the role of "generalized oscillating boxes". From the scaling behavior of partition functions that are defined from the wavelet transform modulus maxima, this method allows us to determine the singularity spectrum of the considered signal and thereby to achieve a complete multifractal analysis. Moreover, by considering analyzing wavelets that make the "wavelet transform microscope" blind to the "patches" of different nucleotide composition that are observed in genomic sequences, we demonstrate and quantify the existence of long-range correlations in the noncoding regions. Although the fluctuations in the patchy landscape of the DNA walks reconstructed from both noncoding and (protein) coding regions are found to be homogeneous with Gaussian statistics, our wavelet-based analysis allows us to discriminate unambiguously the fluctuations of the former, which behave like fractional Brownian motions, from those of the latter, which cannot be distinguished from uncorrelated random Brownian walks. We discuss the robustness of these results with respect to various legitimate codings of the DNA sequences. Finally, we comment on the possible understanding of the origin of the observed long-range correlations in noncoding DNA sequences in terms of the nonequilibrium dynamical processes that produce the "isochore structure" of the genome.

  1. Wavelet-based analysis of circadian behavioral rhythms.

    PubMed

    Leise, Tanya L

    2015-01-01

    The challenging problems presented by noisy biological oscillators have led to the development of a great variety of methods for accurately estimating rhythmic parameters such as period and amplitude. This chapter focuses on wavelet-based methods, which can be quite effective for assessing how rhythms change over time, particularly if time series are at least a week in length. These methods can offer alternative views to complement more traditional methods of evaluating behavioral records. The analytic wavelet transform can estimate the instantaneous period and amplitude, as well as the phase of the rhythm at each time point, while the discrete wavelet transform can extract the circadian component of activity and measure the relative strength of that circadian component compared to those in other frequency bands. Wavelet transforms do not require the removal of noise or trend, and can, in fact, be effective at removing noise and trend from oscillatory time series. The Fourier periodogram and spectrogram are reviewed, followed by descriptions of the analytic and discrete wavelet transforms. Examples illustrate application of each method and their prior use in chronobiology is surveyed. Issues such as edge effects, frequency leakage, and implications of the uncertainty principle are also addressed. PMID:25662453
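
    A hedged sketch of extracting a circadian-band component from an activity record with the discrete wavelet transform: reconstruct only the detail levels whose pseudo-period brackets 24 h. The sampling interval, wavelet, and scale-to-period mapping are illustrative assumptions.

```python
import numpy as np
import pywt

def circadian_component(activity, dt_minutes=6.0, wavelet="db6"):
    n_levels = pywt.dwt_max_level(len(activity), pywt.Wavelet(wavelet).dec_len)
    coeffs = pywt.wavedec(activity, wavelet, level=n_levels)
    kept = [np.zeros_like(coeffs[0])]                # drop the trend (approximation)
    for j, d in enumerate(coeffs[1:], start=1):
        level = n_levels - j + 1                     # coeffs[1] is the coarsest detail
        pseudo_period_h = (2 ** level) * dt_minutes / 60.0   # rough dyadic-scale mapping
        kept.append(d if 16.0 <= pseudo_period_h <= 32.0 else np.zeros_like(d))
    return pywt.waverec(kept, wavelet)[:len(activity)]
```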

  2. A wavelet-based two-stage near-lossless coder.

    PubMed

    Yea, Sehoon; Pearlman, William A

    2006-11-01

    In this paper, we present a two-stage near-lossless compression scheme. It belongs to the class of "lossy plus residual coding" and consists of a wavelet-based lossy layer followed by arithmetic coding of the quantized residual to guarantee a given L-infinity error bound in the pixel domain. We focus on the selection of the optimum bit rate for the lossy layer to achieve the minimum total bit rate. Unlike other similar lossy-plus-lossless approaches using a wavelet-based lossy layer, the proposed method does not require iteration of decoding and inverse discrete wavelet transform in succession to locate the optimum bit rate. We propose a simple method to estimate the optimal bit rate, with a theoretical justification based on the critical rate argument from rate-distortion theory and the independence of the residual error. PMID:17076407

  3. A wavelet-based baseline drift correction method for grounded electrical source airborne transient electromagnetic signals

    NASA Astrophysics Data System (ADS)

    Wang, Yuan; Ji, Yanju; Li, Suyi; Lin, Jun; Zhou, Fengdao; Yang, Guihong

    2013-09-01

    A grounded electrical source airborne transient electromagnetic (GREATEM) system on an airship offers a high depth of prospecting and spatial resolution, as well as outstanding detection efficiency and easy flight control. However, the movement and swing of the front-fixed receiving coil can cause severe baseline drift, leading to inferior resistivity image formation. Consequently, the reduction of baseline drift in GREATEM data is of vital importance to inversion and interpretation. To correct the baseline drift, a traditional interpolation method estimates the baseline 'envelope' using linear interpolation between the calculated start and end points of all cycles, and obtains the corrected signal by subtracting the envelope from the original signal. However, the effectiveness and efficiency of this removal are found to be low. Considering the characteristics of the baseline drift in GREATEM data, this study proposes a wavelet-based method built on multi-resolution analysis. The optimal wavelet basis and decomposition level are determined iteratively by trial and error. This application uses the sym8 wavelet with 10 decomposition levels, takes the level-10 approximation as the baseline drift, and obtains the corrected signal by removing the estimated baseline drift from the original signal. To examine the performance of the proposed method, we establish a dipping-sheet model and calculate the theoretical response. Through simulations, we compare the signal-to-noise ratio, signal distortion, and processing speed of the wavelet-based method with those of the interpolation method. Simulation results show that the wavelet-based method outperforms the interpolation method. We also use field data to evaluate the methods, comparing the depth-section images of apparent resistivity computed from the original signal, the interpolation-corrected signal and the wavelet-corrected signal, respectively. The results confirm that the proposed wavelet-based method is an effective, practical way to remove the baseline drift of GREATEM signals and that its performance is significantly superior to the interpolation method.
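
    A direct sketch of the correction described above, assuming the same sym8 wavelet and 10 decomposition levels: the level-10 approximation is taken as the baseline drift and subtracted from the signal.

```python
import numpy as np
import pywt

def remove_baseline_drift(signal, wavelet="sym8", level=10):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Keep only the level-10 approximation: this slowly varying part is the drift
    baseline_coeffs = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
    baseline = pywt.waverec(baseline_coeffs, wavelet)[:len(signal)]
    return signal - baseline, baseline
```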

  4. Topics in global convergence of density estimates

    NASA Technical Reports Server (NTRS)

    Devroye, L.

    1982-01-01

    The problem of estimating a density f on R^d from a sample X_1, ..., X_n of independent identically distributed random vectors is critically examined, and some recent results in the field are reviewed. The following statements are qualified: (1) For any sequence of density estimates f_n, any arbitrarily slow rate of convergence to 0 is possible for E ∫|f_n - f|; (2) In theoretical comparisons of density estimates, ∫|f_n - f| should be used and not ∫|f_n - f|^p, p > 1; and (3) For most reasonable nonparametric density estimates, either there is convergence of ∫|f_n - f| (and then the convergence is in the strongest possible sense for all f), or there is no convergence (even in the weakest possible sense for a single f). There is no intermediate situation.

  5. A wavelet based investigation of long memory in stock returns

    NASA Astrophysics Data System (ADS)

    Tan, Pei P.; Galagedera, Don U. A.; Maharaj, Elizabeth A.

    2012-04-01

    Using a wavelet-based maximum likelihood fractional integration estimator, we test for long memory (return predictability) in returns at the market, industry and firm level. In an analysis of emerging market daily returns over the full sample period, we find that long memory is not present at the market level, while in approximately twenty percent of the 175 stocks there is evidence of long memory. The absence of long memory in the market returns may be a consequence of contemporaneous aggregation of stock returns. However, when the analysis is carried out with rolling windows, evidence of long memory is observed in certain time frames. These results are largely consistent with those of detrended fluctuation analysis. A test of firm-level information in explaining stock return predictability using a logistic regression model reveals that returns of large firms are more likely to possess the long memory feature than returns of small firms. There is no evidence to suggest that turnover, earnings per share, book-to-market ratio, systematic risk or abnormal return with respect to the market model is associated with return predictability. However, the degree of long-range dependence appears to be associated positively with earnings per share, systematic risk and abnormal return, and negatively with the book-to-market ratio.

  6. Wavelet-based compression of pathological images for telemedicine applications

    NASA Astrophysics Data System (ADS)

    Chen, Chang W.; Jiang, Jianfei; Zheng, Zhiyong; Wu, Xue G.; Yu, Lun

    2000-05-01

    In this paper, we present the performance evaluation of wavelet-based coding techniques as applied to the compression of pathological images for application in an Internet-based telemedicine system. We first study how well suited the wavelet-based coding is as it applies to the compression of pathological images, since these images often contain fine textures that are often critical to the diagnosis of potential diseases. We compare the wavelet-based compression with the DCT-based JPEG compression in the DICOM standard for medical imaging applications. Both objective and subjective measures have been studied in the evaluation of compression performance. These studies are performed in close collaboration with expert pathologists who have conducted the evaluation of the compressed pathological images and communication engineers and information scientists who designed the proposed telemedicine system. These performance evaluations have shown that the wavelet-based coding is suitable for the compression of various pathological images and can be integrated well with the Internet-based telemedicine systems. A prototype of the proposed telemedicine system has been developed in which the wavelet-based coding is adopted for the compression to achieve bandwidth efficient transmission and therefore speed up the communications between the remote terminal and the central server of the telemedicine system.

  7. ESTIMATES OF BIOMASS DENSITY FOR TROPICAL FORESTS

    EPA Science Inventory

    An accurate estimation of the biomass density in forests is a necessary step in understanding the global carbon cycle and the production of other atmospheric trace gases from biomass burning. In this paper the authors summarize the various approaches that have been developed for estimating...

  8. Nonparametric entropy estimation using kernel densities.

    PubMed

    Lake, Douglas E

    2009-01-01

    The entropy of experimental data from the biological and medical sciences provides additional information over summary statistics. Calculating entropy involves estimates of probability density functions, which can be effectively accomplished using kernel density methods. Kernel density estimation has been widely studied and a univariate implementation is readily available in MATLAB. The traditional definition of Shannon entropy is part of a larger family of statistics, called Renyi entropy, which are useful in applications that require a measure of the Gaussianity of data. Of particular note is the quadratic entropy which is related to the Friedman-Tukey (FT) index, a widely used measure in the statistical community. One application where quadratic entropy is very useful is the detection of abnormal cardiac rhythms, such as atrial fibrillation (AF). Asymptotic and exact small-sample results for optimal bandwidth and kernel selection to estimate the FT index are presented and lead to improved methods for entropy estimation. PMID:19897106
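
    A short sketch of quadratic (Renyi order-2) entropy estimation from a Gaussian kernel density estimate: for a Gaussian kernel, the "information potential" integral of the squared density has a closed form over sample pairs. The rule-of-thumb bandwidth below is an assumption, not the optimal small-sample rules discussed in the abstract.

```python
import numpy as np
from scipy.stats import norm

def quadratic_entropy(x, bandwidth=None):
    x = np.asarray(x, float)
    n = x.size
    h = bandwidth if bandwidth is not None else 1.06 * x.std() * n ** (-0.2)
    diffs = x[:, None] - x[None, :]
    info_potential = norm.pdf(diffs, scale=np.sqrt(2.0) * h).mean()   # estimate of the integral of f^2
    return -np.log(info_potential)                                    # Renyi entropy of order 2
```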

  9. Estimation of neural firing rate: the wavelet density estimation approach.

    PubMed

    Khorasani, Abed; Daliri, Mohammad Reza

    2013-08-01

    The computation of neural firing rates based on spike sequences has been introduced as a useful tool for the extraction of an animal's behavior. Different methods for estimating such neural firing rates have been developed by neuroscientists, and among these methods, time histogram and kernel estimators have been used more than other approaches. In this paper, the problem of estimating firing rates using wavelet density estimators is considered. The results of a simulation study on the estimation of underlying rates, based on spike sequences sampled from two different variable firing rates, show that the proposed wavelet density method provides a better and more accurate estimation of firing rates, with smooth results, compared to the two other classical approaches. Furthermore, the performance of a different family of wavelet density estimators in the estimation of the underlying firing rate of biological data has been compared with the results of both time histogram and kernel estimators. All in all, the results show that the proposed method can be useful in the estimation of the firing rate of neural spike trains. PMID:23924519

  10. Enhancing Hyperspectral Data Throughput Utilizing Wavelet-Based Fingerprints

    SciTech Connect

    I. W. Ginsberg

    1999-09-01

    Multiresolutional decompositions known as spectral fingerprints are often used to extract spectral features from multispectral/hyperspectral data. In this study, the authors investigate the use of wavelet-based algorithms for generating spectral fingerprints. The wavelet-based algorithms are compared to the currently used method, traditional convolution with first-derivative Gaussian filters. The comparison analysis consists of two parts: (a) the computational expense of the new method is compared with the computational cost of the current method, and (b) the outputs of the wavelet-based methods are compared with those of the current method to determine any practical differences in the resulting spectral fingerprints. The results show that the wavelet-based algorithms can greatly reduce the computational expense of generating spectral fingerprints, while practically no differences exist in the resulting fingerprints. The analysis is conducted on a database of hyperspectral signatures, namely, Hyperspectral Digital Image Collection Experiment (HYDICE) signatures. The reduction in computational expense is by a factor of about 30, and the average Euclidean distance between resulting fingerprints is on the order of 0.02.

  11. 3D Wavelet-Based Filter and Method

    SciTech Connect

    Moss, William C.; Haase, Sebastian; Sedat, John W.

    2008-08-12

    A 3D wavelet-based filter for visualizing and locating structural features of a user-specified linear size in 2D or 3D image data. The only input parameter is a characteristic linear size of the feature of interest, and the filter output contains only those regions that are correlated with the characteristic size, thus denoising the image.

  12. Density estimation by maximum quantum entropy

    SciTech Connect

    Silver, R.N.; Wallstrom, T.; Martz, H.F.

    1993-11-01

    A new Bayesian method for non-parametric density estimation is proposed, based on a mathematical analogy to quantum statistical physics. The mathematical procedure is related to maximum entropy methods for inverse problems and image reconstruction. The information divergence enforces global smoothing toward default models, convexity, positivity, extensivity and normalization. The novel feature is the replacement of classical entropy by quantum entropy, so that local smoothing is enforced by constraints on differential operators. The linear response of the estimate is proportional to the covariance. The hyperparameters are estimated by type-II maximum likelihood (evidence). The method is demonstrated on textbook data sets.

  13. Directional wavelet based features for colonic polyp classification.

    PubMed

    Wimmer, Georg; Tamaki, Toru; Tischendorf, J J W; Häfner, Michael; Yoshida, Shigeto; Tanaka, Shinji; Uhl, Andreas

    2016-07-01

    In this work, various wavelet based methods like the discrete wavelet transform, the dual-tree complex wavelet transform, the Gabor wavelet transform, curvelets, contourlets and shearlets are applied for the automated classification of colonic polyps. The methods are tested on 8 HD-endoscopic image databases, where each database is acquired using different imaging modalities (Pentax's i-Scan technology combined with or without staining the mucosa), 2 NBI high-magnification databases and one database with chromoscopy high-magnification images. To evaluate the suitability of the wavelet based methods with respect to the classification of colonic polyps, the classification performances of 3 wavelet transforms and the more recent curvelets, contourlets and shearlets are compared using a common framework. Wavelet transforms were already often and successfully applied to the classification of colonic polyps, whereas curvelets, contourlets and shearlets have not been used for this purpose so far. We apply different feature extraction techniques to extract the information of the subbands of the wavelet based methods. Most of the in total 25 approaches were already published in different texture classification contexts. Thus, the aim is also to assess and compare their classification performance using a common framework. Three of the 25 approaches are novel. These three approaches extract Weibull features from the subbands of curvelets, contourlets and shearlets. Additionally, 5 state-of-the-art non wavelet based methods are applied to our databases so that we can compare their results with those of the wavelet based methods. It turned out that extracting Weibull distribution parameters from the subband coefficients generally leads to high classification results, especially for the dual-tree complex wavelet transform, the Gabor wavelet transform and the Shearlet transform. These three wavelet based transforms in combination with Weibull features even outperform the state-of-the-art methods on most of the databases. We will also show that the Weibull distribution is better suited to model the subband coefficient distribution than other commonly used probability distributions like the Gaussian distribution and the generalized Gaussian distribution. So this work gives a reasonable summary of wavelet based methods for colonic polyp classification and the huge amount of endoscopic polyp databases used for our experiments assures a high significance of the achieved results. PMID:26948110
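
    A sketch of the Weibull feature extraction credited above with the strongest results, using a plain 2D DWT in place of the directional transforms (Gabor, curvelet, shearlet) for brevity: fit a two-parameter Weibull distribution to the absolute coefficients of each detail subband and use the (shape, scale) pairs as texture features.

```python
import numpy as np
import pywt
from scipy.stats import weibull_min

def weibull_subband_features(image, wavelet="db2", level=2):
    feats = []
    coeffs = pywt.wavedec2(image.astype(float), wavelet, level=level)
    for detail_triplet in coeffs[1:]:                 # (horizontal, vertical, diagonal) per level
        for band in detail_triplet:
            mags = np.abs(band).ravel() + 1e-8        # avoid exact zeros for the fit
            shape, _, scale = weibull_min.fit(mags, floc=0)
            feats += [shape, scale]
    return np.array(feats)
```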

  14. Estimating animal population density using passive acoustics.

    PubMed

    Marques, Tiago A; Thomas, Len; Martin, Stephen W; Mellinger, David K; Ward, Jessica A; Moretti, David J; Harris, Danielle; Tyack, Peter L

    2013-05-01

    Reliable estimation of the size or density of wild animal populations is very important for effective wildlife management, conservation and ecology. Currently, the most widely used methods for obtaining such estimates involve either sighting animals from transect lines or some form of capture-recapture on marked or uniquely identifiable individuals. However, many species are difficult to sight, and cannot be easily marked or recaptured. Some of these species produce readily identifiable sounds, providing an opportunity to use passive acoustic data to estimate animal density. In addition, even for species for which other visually based methods are feasible, passive acoustic methods offer the potential for greater detection ranges in some environments (e.g. underwater or in dense forest), and hence potentially better precision. Automated data collection means that surveys can take place at times and in places where it would be too expensive or dangerous to send human observers. Here, we present an overview of animal density estimation using passive acoustic data, a relatively new and fast-developing field. We review the types of data and methodological approaches currently available to researchers and we provide a framework for acoustics-based density estimation, illustrated with examples from real-world case studies. We mention moving sensor platforms (e.g. towed acoustics), but then focus on methods involving sensors at fixed locations, particularly hydrophones to survey marine mammals, as acoustic-based density estimation research to date has been concentrated in this area. Primary among these are methods based on distance sampling and spatially explicit capture-recapture. The methods are also applicable to other aquatic and terrestrial sound-producing taxa. We conclude that, despite being in its infancy, density estimation based on passive acoustic data likely will become an important method for surveying a number of diverse taxa, such as sea mammals, fish, birds, amphibians, and insects, especially in situations where inferences are required over long periods of time. There is considerable work ahead, with several potentially fruitful research areas, including the development of (i) hardware and software for data acquisition, (ii) efficient, calibrated, automated detection and classification systems, and (iii) statistical approaches optimized for this application. Further, survey design will need to be developed, and research is needed on the acoustic behaviour of target species. Fundamental research on vocalization rates and group sizes, and the relation between these and other factors such as season or behaviour state, is critical. Evaluation of the methods under known density scenarios will be important for empirically validating the approaches presented here. PMID:23190144

  16. Conditional Density Estimation in Measurement Error Problems

    PubMed Central

    Wang, Xiao-Feng; Ye, Deping

    2014-01-01

    This paper is motivated by a wide range of background correction problems in gene array data analysis, where the raw gene expression intensities are measured with error. Estimating a conditional density function from the contaminated expression data is a key aspect of statistical inference and visualization in these studies. We propose re-weighted deconvolution kernel methods to estimate the conditional density function in an additive error model, when the error distribution is known as well as when it is unknown. Theoretical properties of the proposed estimators are investigated with respect to the mean absolute error from a “double asymptotic” view. Practical rules are developed for the selection of smoothing-parameters. Simulated examples and an application to an Illumina bead microarray study are presented to illustrate the viability of the methods. PMID:25284902

  17. Selection of optimal wavelet bases for image compression using SPIHT algorithm

    NASA Astrophysics Data System (ADS)

    Rehman, Maria; Touqir, Imran; Batool, Wajiha

    2015-02-01

    This paper presents the performance of several wavelet bases in SPIHT coding. Two types of wavelet bases are tested for the SPIHT algorithm, i.e. orthogonal and biorthogonal wavelet bases. The results of using the coefficients of these bases are compared on the basis of Compression Ratio and Peak Signal to Noise Ratio. The paper shows that the use of biorthogonal wavelet bases is better than that of orthogonal wavelet bases. Of the biorthogonal wavelets, bior4.4 shows good results in SPIHT coding.

  18. Stochastic model for estimation of environmental density

    SciTech Connect

    Janardan, K.G.; Uppuluri, V.R.R.

    1984-01-01

    The environmental density has been defined as the value of a habitat expressing its unfavorableness for settling of an individual which has a strong anti-social tendency towards other individuals in an environment. Morisita studied the anti-social behavior of ant-lions (Glemuroides japanicus) and provided a recurrence relation, without an explicit solution, for the probability distribution of individuals settling in each of two habitats in terms of the environmental densities and the numbers of individuals introduced. In this paper the recurrence relation is explicitly solved; certain interesting properties of the distribution are discussed, including the estimation of the parameters. 4 references, 1 table.

  19. Density Estimation Framework for Model Error Assessment

    NASA Astrophysics Data System (ADS)

    Sargsyan, K.; Liu, Z.; Najm, H. N.; Safta, C.; VanBloemenWaanders, B.; Michelsen, H. A.; Bambha, R.

    2014-12-01

    In this work we highlight the importance of model error assessment in physical model calibration studies. Conventional calibration methods often assume the model is perfect and account for data noise only. Consequently, the estimated parameters typically have biased values that implicitly compensate for model deficiencies. Moreover, improving the amount and the quality of data may not improve the parameter estimates since the model discrepancy is not accounted for. In state-of-the-art methods model discrepancy is explicitly accounted for by enhancing the physical model with a synthetic statistical additive term, which allows appropriate parameter estimates. However, these statistical additive terms do not increase the predictive capability of the model because they are tuned for particular output observables and may even violate physical constraints. We introduce a framework in which model errors are captured by allowing variability in specific model components and parameterizations for the purpose of achieving meaningful predictions that are both consistent with the data spread and appropriately disambiguate model and data errors. Here we cast model parameters as random variables, embedding the calibration problem within a density estimation framework. Further, we calibrate for the parameters of the joint input density. The likelihood function for the associated inverse problem is degenerate, therefore we use Approximate Bayesian Computation (ABC) to build prediction-constraining likelihoods and illustrate the strengths of the method on synthetic cases. We also apply the ABC-enhanced density estimation to the TransCom 3 CO2 intercomparison study (Gurney, K. R., et al., Tellus, 55B, pp. 555-579, 2003) and calibrate 15 transport models for regional carbon sources and sinks given atmospheric CO2 concentration measurements.

  20. Bird population density estimated from acoustic signals

    USGS Publications Warehouse

    Dawson, D.K.; Efford, M.G.

    2009-01-01

    1. Many animal species are detected primarily by sound. Although songs, calls and other sounds are often used for population assessment, as in bird point counts and hydrophone surveys of cetaceans, there are few rigorous methods for estimating population density from acoustic data. 2. The problem has several parts - distinguishing individuals, adjusting for individuals that are missed, and adjusting for the area sampled. Spatially explicit capture-recapture (SECR) is a statistical methodology that addresses jointly the second and third parts of the problem. We have extended SECR to use uncalibrated information from acoustic signals on the distance to each source. 3. We applied this extension of SECR to data from an acoustic survey of ovenbird Seiurus aurocapilla density in an eastern US deciduous forest with multiple four-microphone arrays. We modelled average power from spectrograms of ovenbird songs measured within a window of 0.7 s duration and frequencies between 4200 and 5200 Hz. 4. The resulting estimates of the density of singing males (0.19 ha-1, SE 0.03 ha-1) were consistent with estimates of the adult male population density from mist-netting (0.36 ha-1, SE 0.12 ha-1). The fitted model predicts sound attenuation of 0.11 dB m-1 (SE 0.01 dB m-1) in excess of losses from spherical spreading. 5. Synthesis and applications. Our method for estimating animal population density from acoustic signals fills a gap in the census methods available for visually cryptic but vocal taxa, including many species of bird and cetacean. The necessary equipment is simple and readily available; as few as two microphones may provide adequate estimates, given spatial replication. The method requires that individuals detected at the same place are acoustically distinguishable and all individuals vocalize during the recording interval, or that the per capita rate of vocalization is known. We believe these requirements can be met, with suitable field methods, for a significant number of songbird species. © 2009 British Ecological Society.

  1. Adaptive wavelet-based framework for aeroelastic simulations

    NASA Astrophysics Data System (ADS)

    Nair, Raj; Vasilyev, Oleg

    2014-11-01

    This study presents a novel adaptive wavelet-based framework for modeling fluid-structure interaction. The approach uses the adaptive wavelet collocation method to solve the linear-elastic structural deformation equations inside the solid obstacle and the compressible Navier-Stokes equations in the outer fluid region. The method combines two mathematical approaches: volume penalization, which creates the fluid-structure coupling by specifying a traction condition on the solid boundary and enforcing no-slip velocity conditions consistent with the rate of structural deformation on the obstacle boundary, and a level-set method, which dynamically tracks the solid-fluid interface. The method is applied to a two-dimensional aeroelastic flow and preliminary results are discussed. This work serves as the basis for the continuing development of a robust adaptive wavelet-based fluid-structure interaction model to accurately capture the effects of unsteady aerodynamic loads in aeroelastic problems.

  2. Wavelet-based image registration with JPEG2000 compressed imagery

    NASA Astrophysics Data System (ADS)

    Campbell, Derrick S.; Reynolds, William D., Jr.

    2008-04-01

    This paper describes a registration algorithm for aligning large frame imagery compressed with the JPEG2000 compression standard. The images are registered in the compressed domain using wavelet-based techniques. Unlike traditional approaches, our proposed method eliminates the need to reconstruct the full image prior to performing registration. The proposed method is highly scalable allowing registration to be performed on selectable resolution levels, quality layers, and regions of interest. The use of the hierarchical nature of the wavelet transform also allows for the trade-off between registration accuracy and processing speed. We present the results from our simulations to demonstrate the feasibility of the proposed technique in real-world scenarios with streaming sources. The wavelet-based approach maintains compatibility with JPEG2000 and enables additional features not offered by traditional approaches.

  3. Analysis of a wavelet-based robust hash algorithm

    NASA Astrophysics Data System (ADS)

    Meixner, Albert; Uhl, Andreas

    2004-06-01

    This paper presents a quantitative evaluation of a wavelet-based, robust authentication hashing algorithm. Based on the results of a series of robustness and tampering sensitivity tests, we describe possible shortcomings and propose various modifications to the algorithm to improve its performance. The second part of the paper describes an attack against the scheme, which allows an attacker to modify a tampered image such that its hash value closely matches the hash value of the original.

  4. Fast wavelet based algorithms for linear evolution equations

    NASA Technical Reports Server (NTRS)

    Engquist, Bjorn; Osher, Stanley; Zhong, Sifen

    1992-01-01

    A class of fast wavelet-based algorithms was devised for linear evolution equations whose coefficients are time independent. The method draws on the work of Beylkin, Coifman, and Rokhlin, which they applied to general Calderon-Zygmund type integral operators. A modification of their idea is applied to linear hyperbolic and parabolic equations with spatially varying coefficients. A significant speedup over standard methods is obtained when the approach is applied to hyperbolic equations in one space dimension and parabolic equations in multiple dimensions.

  5. Wavelet-based verification of the quantitative precipitation forecast

    NASA Astrophysics Data System (ADS)

    Yano, Jun-Ichi; Jakubiak, Bogumil

    2016-06-01

    This paper explores the use of wavelets for spatial verification of quantitative precipitation forecasts (QPF), and especially the capacity of wavelets to provide both localization and scale information. Two 24-h forecast experiments using two versions of the Coupled Ocean/Atmosphere Mesoscale Prediction System (COAMPS) on 22 August 2010 over Poland are used to illustrate the method. Strong spatial localizations and associated intermittency of the precipitation field make verification of QPF difficult using standard statistical methods. The wavelet becomes an attractive alternative, because it is specifically designed to extract spatially localized features. The wavelet modes are characterized by two indices, for scale and localization. Thus, these indices can simply be employed for characterizing the performance of QPF in scale and localization without any further elaboration or tunable parameters. Furthermore, spatially-localized features can be extracted in wavelet space in a relatively straightforward manner with only a weak dependence on a threshold. Such a feature may be considered an advantage of the wavelet-based method over more conventional "object" oriented verification methods, as the latter tend to exhibit strong threshold sensitivities. The present paper also points out limits of the so-called "scale separation" methods based on wavelets. Our study demonstrates how these wavelet-based QPF verifications can be performed straightforwardly. Possibilities for further developments of the wavelet-based methods, especially towards the goal of identifying weak physical processes contributing to forecast error, are also pointed out.

  6. A Wavelet-Based Assessment of Topographic-Isostatic Reductions for GOCE Gravity Gradients

    NASA Astrophysics Data System (ADS)

    Grombein, Thomas; Luo, Xiaoguang; Seitz, Kurt; Heck, Bernhard

    2014-07-01

    Gravity gradient measurements from ESA's satellite mission Gravity field and steady-state Ocean Circulation Explorer (GOCE) contain significant high- and mid-frequency signal components, which are primarily caused by the attraction of the Earth's topographic and isostatic masses. In order to mitigate the resulting numerical instability of a harmonic downward continuation, the observed gradients can be smoothed with respect to topographic-isostatic effects using a remove-compute-restore technique. For this reason, topographic-isostatic reductions are calculated by forward modeling that employs the advanced Rock-Water-Ice methodology. The basis of this approach is a three-layer decomposition of the topography with variable density values and a modified Airy-Heiskanen isostatic concept incorporating a depth model of the Mohorovičić discontinuity. Moreover, tesseroid bodies are utilized for mass discretization and arranged on an ellipsoidal reference surface. To evaluate the degree of smoothing via topographic-isostatic reduction of GOCE gravity gradients, a wavelet-based assessment is presented in this paper and compared with statistical inferences in the space domain. Using the Morlet wavelet, continuous wavelet transforms are applied to measured GOCE gravity gradients before and after reducing topographic-isostatic signals. By analyzing a representative data set in the Himalayan region, an employment of the reductions leads to significantly smoothed gradients. In addition, smoothing effects that are invisible in the space domain can be detected in wavelet scalograms, making a wavelet-based spectral analysis a powerful tool.

  7. Traffic characterization and modeling of wavelet-based VBR encoded video

    SciTech Connect

    Yu Kuo; Jabbari, B.; Zafar, S.

    1997-07-01

    Wavelet-based video codecs provide a hierarchical structure for the encoded data, which can cater to a wide variety of applications such as multimedia systems. The characteristics of such an encoder and its output, however, have not been well examined. In this paper, the authors investigate the output characteristics of a wavelet-based video codec and develop a composite model to capture the traffic behavior of its output video data. Wavelet decomposition transforms the input video into a hierarchical structure with a number of subimages at different resolutions and scales. The top-level wavelet in this structure contains most of the signal energy. They first describe the characteristics of traffic generated by each subimage and the effect of dropping various subimages at the encoder on the signal-to-noise ratio at the receiver. They then develop an N-state Markov model to describe the traffic behavior of the top wavelet. The behavior of the remaining wavelets is then obtained through estimation, based on the correlations between these subimages at the same level of resolution and those wavelets located at an immediately higher level. In this paper, a three-state Markov model is developed. The resulting traffic behavior, described by various statistical properties such as moments and correlations, is then utilized to validate their model.

  8. Kernel density estimation using graphical processing unit

    NASA Astrophysics Data System (ADS)

    Sunarko, Su'ud, Zaki

    2015-09-01

    Kernel density estimation for particles distributed over a 2-dimensional space is calculated using a single graphical processing unit (GTX 660Ti GPU) and the CUDA-C language. Parallel calculations are done for particles having a bivariate normal distribution by assigning the calculations for equally-spaced node points to each scalar processor in the GPU. The number of particles, blocks and threads is varied to identify a favorable configuration. Comparisons are obtained by performing the same calculation using 1, 2 and 4 processors on a 3.0 GHz CPU using MPICH 2.0 routines. Speedups attained with the GPU are in the range of 88 to 349 times compared to the multiprocessor CPU. Blocks of 128 threads are found to be the optimum configuration for this case.
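
    A CPU reference sketch of the grid-node kernel density estimate described above, written in NumPy; the bivariate normal particle cloud, the Gaussian kernel bandwidth, and the grid size are illustrative choices, and in the GPU version each node point's kernel sum would be assigned to one scalar processor.

```python
import numpy as np

rng = np.random.default_rng(1)
particles = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.3], [0.3, 0.5]], size=20000)

h = 0.2                                    # Gaussian kernel bandwidth (illustrative)
xs = np.linspace(-4, 4, 64)
ys = np.linspace(-4, 4, 64)
norm = 1.0 / (2.0 * np.pi * h ** 2 * len(particles))

density = np.zeros((ys.size, xs.size))
for i, y in enumerate(ys):                 # on the GPU, one thread per (x, y) node point
    for j, x in enumerate(xs):
        d2 = (particles[:, 0] - x) ** 2 + (particles[:, 1] - y) ** 2
        density[i, j] = norm * np.exp(-0.5 * d2 / h ** 2).sum()

print("density integrates to ~", density.sum() * (xs[1] - xs[0]) * (ys[1] - ys[0]))
```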

  9. Template-free wavelet-based detection of local symmetries.

    PubMed

    Puspoki, Zsuzsanna; Unser, Michael

    2015-10-01

    Our goal is to detect and group different kinds of local symmetries in images in a scale- and rotation-invariant way. We propose an efficient wavelet-based method to determine the order of local symmetry at each location. Our algorithm relies on circular harmonic wavelets which are used to generate steerable wavelet channels corresponding to different symmetry orders. To give a measure of local symmetry, we use the F-test to examine the distribution of the energy across different channels. We provide experimental results on synthetic images, biological micrographs, and electron-microscopy images to demonstrate the performance of the algorithm. PMID:26011883

  10. EEG analysis using wavelet-based information tools.

    PubMed

    Rosso, O A; Martin, M T; Figliola, A; Keller, K; Plastino, A

    2006-06-15

    Wavelet-based informational tools for quantitative electroencephalogram (EEG) record analysis are reviewed. Relative wavelet energies, wavelet entropies and wavelet statistical complexities are used in the characterization of scalp EEG records corresponding to secondary generalized tonic-clonic epileptic seizures. In particular, we show that the epileptic recruitment rhythm observed during seizure development is well described in terms of the relative wavelet energies. In addition, during the concomitant time-period the entropy diminishes while complexity grows. This is construed as evidence supporting the conjecture that an epileptic focus, for this kind of seizure, triggers a self-organized brain state characterized by both order and maximal complexity. PMID:16675027
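
    A minimal sketch of the relative wavelet energies and total wavelet entropy used above, computed with PyWavelets on a synthetic signal; the wavelet family (db4), the decomposition level, and the test signal are illustrative choices, not those of the EEG study.

```python
import numpy as np
import pywt

fs = 256.0
t = np.arange(0, 8, 1.0 / fs)
signal = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.default_rng(2).normal(size=t.size)

coeffs = pywt.wavedec(signal, "db4", level=5)        # [cA5, cD5, ..., cD1]
energies = np.array([np.sum(c ** 2) for c in coeffs])
p = energies / energies.sum()                        # relative wavelet energies
wavelet_entropy = -np.sum(p * np.log(p))             # total wavelet entropy

print("relative energies:", np.round(p, 3))
print("wavelet entropy:", round(wavelet_entropy, 3))
```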

  11. Perceptually lossless wavelet-based compression for medical images

    NASA Astrophysics Data System (ADS)

    Lin, Nai-wen; Yu, Tsaifa; Chan, Andrew K.

    1997-05-01

    In this paper, we present a wavelet-based medical image compression scheme so that images displayed on different devices are perceptually lossless. Since human visual sensitivity varies across subbands, we apply a perceptually lossless criterion to quantize the wavelet transform coefficients of each subband such that visual distortions become unnoticeable. Following this, we use a high-compression-ratio hierarchical tree to code these coefficients. Experimental results indicate that our perceptually lossless coder achieves a compression ratio 2-5 times higher than typical lossless compression schemes while producing perceptually identical image content on the target display device.

  12. Wavelet based hierarchical coding scheme for radar image compression

    NASA Astrophysics Data System (ADS)

    Sheng, Wen; Jiao, Xiaoli; He, Jifeng

    2007-12-01

    This paper presents a wavelet-based hierarchical coding scheme for radar image compression. The radar signal is first quantized to a digital signal and reorganized as a raster-scanned image according to the radar's pulse repetition frequency. After reorganization, the reformed image is decomposed into image blocks of different frequency bands by a 2-D wavelet transformation, and each block is quantized and coded by the Huffman coding scheme. A demonstration system is developed, showing that under the requirement of real-time processing the compression ratio can be very high, with no significant loss of target signal in the restored radar image.

  13. Wavelet-based image compression using fixed residual value

    NASA Astrophysics Data System (ADS)

    Muzaffar, Tanzeem; Choi, Tae-Sun

    2000-12-01

    Wavelet-based compression is becoming popular due to its promising compaction properties at low bitrates. The zerotree wavelet image coding scheme efficiently exploits the multi-level redundancy present in transformed data to minimize coding bits. In this paper, a new technique is proposed to achieve high compression by adding new zerotree and significant symbols to the original EZW coder. Contrary to the four symbols present in the basic EZW scheme, the modified algorithm uses eight symbols to generate fewer bits for given data. The subordinate pass of EZW is eliminated and replaced with transmission of a fixed residual value for easy implementation. This modification simplifies the coding technique, speeds up the process, and retains the property of embeddedness.

  14. Characterizing cerebrovascular dynamics with the wavelet-based multifractal formalism

    NASA Astrophysics Data System (ADS)

    Pavlov, A. N.; Abdurashitov, A. S.; Sindeeva, O. A.; Sindeev, S. S.; Pavlova, O. N.; Shihalov, G. M.; Semyachkina-Glushkovskaya, O. V.

    2016-01-01

    Using the wavelet-transform modulus maxima (WTMM) approach we study the dynamics of cerebral blood flow (CBF) in rats aiming to reveal responses of macro- and microcerebral circulations to changes in the peripheral blood pressure. We show that the wavelet-based multifractal formalism allows quantifying essentially different reactions in the CBF-dynamics at the level of large and small cerebral vessels. We conclude that unlike the macrocirculation that is nearly insensitive to increased peripheral blood pressure, the microcirculation is characterized by essential changes of the CBF-complexity.

  15. Wavelet based characterization of ex vivo vertebral trabecular bone structure with 3T MRI compared to microCT

    SciTech Connect

    Krug, R; Carballido-Gamio, J; Burghardt, A; Haase, S; Sedat, J W; Moss, W C; Majumdar, S

    2005-04-11

    Trabecular bone structure and bone density contribute to the strength of bone and are important in the study of osteoporosis. Wavelets are a powerful tool to characterize and quantify texture in an image. In this study the thickness of trabecular bone was analyzed in 8 cylindrical cores of the vertebral spine. Images were obtained from 3 Tesla (T) magnetic resonance imaging (MRI) and micro-computed tomography (µCT). Results from the wavelet-based analysis of trabecular bone were compared with standard two-dimensional structural parameters (analogous to bone histomorphometry) obtained using mean intercept length (MR images) and direct 3D distance transformation methods (µCT images). Additionally, the bone volume fraction was determined from MR images. We conclude that the wavelet-based analysis delivers results comparable to the established MR histomorphometric measurements. The average deviation in trabecular thickness was less than one pixel size between the wavelet and the standard approach for both MR and µCT analysis. Since the wavelet-based method is less sensitive to image noise, we see an advantage of wavelet analysis of trabecular bone for MR imaging when going to higher resolutions.

  16. Majorization-minimization algorithms for wavelet-based image restoration.

    PubMed

    Figueiredo, Mário A T; Bioucas-Dias, José M; Nowak, Robert D

    2007-12-01

    Standard formulations of image/signal deconvolution under wavelet-based priors/regularizers lead to very high-dimensional optimization problems involving the following difficulties: the non-Gaussian (heavy-tailed) wavelet priors lead to objective functions which are nonquadratic, usually nondifferentiable, and sometimes even nonconvex; the presence of the convolution operator destroys the separability which underlies the simplicity of wavelet-based denoising. This paper presents a unified view of several recently proposed algorithms for handling this class of optimization problems, placing them in a common majorization-minimization (MM) framework. One of the classes of algorithms considered (when using quadratic bounds on nondifferentiable log-priors) shares the infamous "singularity issue" (SI) of "iteratively reweighted least squares" (IRLS) algorithms: the possibility of having to handle infinite weights, which may cause both numerical and convergence issues. In this paper, we prove several new results which strongly support the claim that the SI does not compromise the usefulness of this class of algorithms. Exploiting the unified MM perspective, we introduce a new algorithm, resulting from using l1 bounds for nonconvex regularizers; the experiments confirm the superior performance of this method, when compared to the one based on quadratic majorization. Finally, an experimental comparison of the several algorithms reveals their relative merits for different standard types of scenarios. PMID:18092597
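
    A minimal iterative soft-thresholding sketch, one instance of the MM family discussed above (quadratic majorization of the data term plus an l1 wavelet prior), applied to 1-D deconvolution; the blur kernel, wavelet, step size, and threshold are illustrative assumptions, not the paper's settings.

```python
import numpy as np
import pywt

rng = np.random.default_rng(3)
n = 512
x_true = np.zeros(n)
x_true[100:140] = 1.0
x_true[300:310] = -0.7
kernel = np.ones(9) / 9.0                           # symmetric blur, so H is self-adjoint

def H(x):
    return np.convolve(x, kernel, mode="same")

y = H(x_true) + 0.01 * rng.normal(size=n)           # blurred, noisy observation

lam, step = 0.02, 1.0                               # threshold weight and step size
x = np.zeros(n)
for _ in range(100):
    z = x + step * H(y - H(x))                      # gradient step on the data term
    coeffs = pywt.wavedec(z, "db4", level=4)
    coeffs = [pywt.threshold(c, lam * step, mode="soft") for c in coeffs]
    x = pywt.waverec(coeffs, "db4")[:n]             # proximal step: wavelet soft-thresholding

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print("relative reconstruction error:", round(rel_err, 3))
```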

  17. A wavelet-based approach to fall detection.

    PubMed

    Palmerini, Luca; Bagalà, Fabio; Zanetti, Andrea; Klenk, Jochen; Becker, Clemens; Cappello, Angelo

    2015-01-01

    Falls among older people are a widely documented public health problem. Automatic fall detection has recently gained huge importance because it could allow for the immediate communication of falls to medical assistance. The aim of this work is to present a novel wavelet-based approach to fall detection, focusing on the impact phase and using a dataset of real-world falls. Since recorded falls result in a non-stationary signal, a wavelet transform was chosen to examine fall patterns. The idea is to consider the average fall pattern as the "prototype fall". In order to detect falls, every acceleration signal can be compared to this prototype through wavelet analysis. The similarity of the recorded signal with the prototype fall is a feature that can be used in order to determine the difference between falls and daily activities. The discriminative ability of this feature is evaluated on real-world data. It outperforms other features that are commonly used in fall detection studies, with an Area Under the Curve of 0.918. This result suggests that the proposed wavelet-based feature is promising and future studies could use this feature (in combination with others considering different fall phases) in order to improve the performance of fall detection algorithms. PMID:26007719

  18. A Wavelet-Based Approach to Fall Detection

    PubMed Central

    Palmerini, Luca; Bagalà, Fabio; Zanetti, Andrea; Klenk, Jochen; Becker, Clemens; Cappello, Angelo

    2015-01-01

    Falls among older people are a widely documented public health problem. Automatic fall detection has recently gained huge importance because it could allow for the immediate communication of falls to medical assistance. The aim of this work is to present a novel wavelet-based approach to fall detection, focusing on the impact phase and using a dataset of real-world falls. Since recorded falls result in a non-stationary signal, a wavelet transform was chosen to examine fall patterns. The idea is to consider the average fall pattern as the “prototype fall”. In order to detect falls, every acceleration signal can be compared to this prototype through wavelet analysis. The similarity of the recorded signal with the prototype fall is a feature that can be used in order to determine the difference between falls and daily activities. The discriminative ability of this feature is evaluated on real-world data. It outperforms other features that are commonly used in fall detection studies, with an Area Under the Curve of 0.918. This result suggests that the proposed wavelet-based feature is promising and future studies could use this feature (in combination with others considering different fall phases) in order to improve the performance of fall detection algorithms. PMID:26007719

  19. A New Wavelet Based Approach to Assess Hydrological Models

    NASA Astrophysics Data System (ADS)

    Adamowski, J. F.; Rathinasamy, M.; Khosa, R.; Nalley, D.

    2014-12-01

    In this study, a new wavelet-based multi-scale performance measure (Multiscale Nash-Sutcliffe Criteria and Multiscale Normalized Root Mean Square Error) for hydrological model comparison was developed and tested. The new measure provides a quantitative assessment of model performance across different timescales. Model and observed time series are decomposed using the à trous wavelet transform, and performance measures of the model are obtained at each time scale. The usefulness of the new measure was tested using real as well as synthetic case studies. The real case studies included simulation results from the Soil Water Assessment Tool (SWAT), as well as statistical models (the Coupled Wavelet-Volterra (WVC), Artificial Neural Network (ANN), and Auto Regressive Moving Average (ARMA) methods). Data from India and Canada were used. The synthetic case studies included different kinds of errors (e.g., timing errors, as well as under- and over-prediction of high and low flows) in outputs from a hydrologic model. It was found that the proposed wavelet-based performance measures (i.e., MNSC and MNRMSE) are more reliable than traditional performance measures such as the Nash-Sutcliffe Criteria, Root Mean Square Error, and Normalized Root Mean Square Error. It was shown that the new measure can be used to compare different hydrological models, as well as help in model calibration.
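
    A sketch of a multiscale Nash-Sutcliffe criterion in the spirit of the measure above; PyWavelets' stationary wavelet transform is used here as a stand-in for the à trous transform, and the synthetic flows, wavelet, and decomposition level are illustrative assumptions.

```python
import numpy as np
import pywt

def nse(obs, sim):
    # Nash-Sutcliffe efficiency for one scale component.
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

rng = np.random.default_rng(4)
n = 512                                              # length must be divisible by 2**level
obs = np.sin(np.linspace(0, 20 * np.pi, n)) + 0.3 * rng.normal(size=n)
sim = np.sin(np.linspace(0, 20 * np.pi, n) + 0.2) + 0.3 * rng.normal(size=n)

level = 4
obs_coeffs = pywt.swt(obs, "haar", level=level)      # [(cA, cD)] pairs, coarsest first
sim_coeffs = pywt.swt(sim, "haar", level=level)

print("overall NSE:", round(nse(obs, sim), 3))
for k, ((_, d_obs), (_, d_sim)) in enumerate(zip(obs_coeffs, sim_coeffs)):
    print(f"detail level {level - k}: NSE = {nse(d_obs, d_sim):.3f}")
```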

  20. Mammographic Density Estimation with Automated Volumetric Breast Density Measurement

    PubMed Central

    Ko, Su Yeon; Kim, Eun-Kyung; Kim, Min Jung

    2014-01-01

    Objective To compare automated volumetric breast density measurement (VBDM) with radiologists' evaluations based on the Breast Imaging Reporting and Data System (BI-RADS), and to identify the factors associated with technical failure of VBDM. Materials and Methods In this study, 1129 women aged 19-82 years who underwent mammography from December 2011 to January 2012 were included. Breast density evaluations by radiologists based on BI-RADS and by VBDM (Volpara Version 1.5.1) were compared. The agreement in interpreting breast density between radiologists and VBDM was determined based on four density grades (D1, D2, D3, and D4) and a binary classification of fatty (D1-2) vs. dense (D3-4) breast using kappa statistics. The association between technical failure of VBDM and patient age, total breast volume, fibroglandular tissue volume, history of partial mastectomy, the frequency of mass > 3 cm, and breast density was analyzed. Results The agreement between breast density evaluations by radiologists and VBDM was fair (k value = 0.26) when the four density grades (D1/D2/D3/D4) were used and moderate (k value = 0.47) for the binary classification (D1-2/D3-4). Twenty-seven women (2.4%) showed failure of VBDM. Small total breast volume, history of partial mastectomy, and high breast density were significantly associated with technical failure of VBDM (p = 0.001 to 0.015). Conclusion There is fair or moderate agreement in breast density evaluation between radiologists and VBDM. Technical failure of VBDM may be related to small total breast volume, a history of partial mastectomy, and high breast density. PMID:24843235

  1. Passive microrheology of soft materials with atomic force microscopy: A wavelet-based spectral analysis

    NASA Astrophysics Data System (ADS)

    Martinez-Torres, C.; Arneodo, A.; Streppa, L.; Argoul, P.; Argoul, F.

    2016-01-01

    Compared to active microrheology where a known force or modulation is periodically imposed to a soft material, passive microrheology relies on the spectral analysis of the spontaneous motion of tracers inherent or external to the material. Passive microrheology studies of soft or living materials with atomic force microscopy (AFM) cantilever tips are rather rare because, in the spectral densities, the rheological response of the materials is hardly distinguishable from other sources of random or periodic perturbations. To circumvent this difficulty, we propose here a wavelet-based decomposition of AFM cantilever tip fluctuations and we show that when applying this multi-scale method to soft polymer layers and to living myoblasts, the structural damping exponents of these soft materials can be retrieved.

  2. Toward Estimating Current Densities in Magnetohydrodynamic Generators

    NASA Astrophysics Data System (ADS)

    Bokil, V. A.; Gibson, N. L.; McGregor, D. A.; Woodside, C. R.

    2015-09-01

    We investigate the idea of reconstructing current densities in a magnetohydrodynamic (MHD) generator channel from external magnetic flux density measurements in order to determine the existence and location of damaging arcs. We model the induced fields, which are usually neglected in low magnetic Reynolds number flows, using a natural fixed point iteration. Further, we present a sensitivity analysis of induced fields to current density profiles in a 3D, yet simplified model.

  3. Concrete density estimation by rebound hammer method

    NASA Astrophysics Data System (ADS)

    Ismail, Mohamad Pauzi bin; Jefri, Muhamad Hafizie Bin; Abdullah, Mahadzir Bin; Masenwat, Noor Azreen bin; Sani, Suhairy bin; Mohd, Shukri; Isa, Nasharuddin bin; Mahmud, Mohamad Haniza bin

    2016-01-01

    Concrete is the most common and cheapest material for radiation shielding. Compressive strength is the main parameter checked when determining concrete quality. However, for shielding purposes, density is the parameter that needs to be considered. X-rays and gamma rays are effectively absorbed by a material with high atomic number and high density, such as concrete. High strength normally implies higher density in concrete, but this is not always true. This paper explains and discusses the correlation between rebound hammer testing and density for concrete containing hematite aggregates. A comparison is also made with normal concrete, i.e. concrete containing crushed granite.

  4. An Adaptive Wavelet-Based Denoising Algorithm for Enhancing Speech in Non-stationary Noise Environment

    NASA Astrophysics Data System (ADS)

    Wang, Kun-Ching

    Traditional wavelet-based speech enhancement algorithms are ineffective in the presence of highly non-stationary noise because of the difficulty of accurately estimating the local noise spectrum. In this paper, a simple method of noise estimation employing a voice activity detector (VAD) is proposed. We can improve the output of a wavelet-based speech enhancement algorithm in the presence of random noise bursts according to the VAD decision. The noisy speech is first preprocessed using bark-scale wavelet packet decomposition (BSWPD) to convert the noisy signal into wavelet coefficients (WCs). It is found that a VAD based on the bark-scale spectral entropy parameter, called BS-Entropy, is superior to other energy-based approaches, especially under variable noise levels. The wavelet coefficient threshold (WCT) of each subband is then temporally adjusted according to the VAD result. In a speech-dominated frame, the speech is categorized into either a voiced frame or an unvoiced frame. A voiced frame possesses a strong tone-like spectrum in the lower subbands, so the WCs of the lower bands must be preserved. On the contrary, the WCT tends to increase in the lower bands if the speech is categorized as unvoiced. In a noise-dominated frame, the background noise can be almost completely removed by increasing the WCT. Objective and subjective experimental results are then used to evaluate the proposed system. The experiments show that this algorithm is effective under various noise conditions, especially for colored noise and non-stationary noise.
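
    A heavily simplified sketch of subband wavelet-packet thresholding along the lines described above; it omits the bark-scale grouping and the VAD, and it assumes a known noise-only segment for the threshold estimate, so it illustrates only the denoising step rather than the full algorithm.

```python
import numpy as np
import pywt

rng = np.random.default_rng(5)
fs = 8000
t = np.arange(0, 1.0, 1.0 / fs)
clean = np.sin(2 * np.pi * 440.0 * t) * (t > 0.3)          # "speech" starts at 0.3 s
noisy = clean + 0.2 * rng.normal(size=t.size)

wp = pywt.WaveletPacket(noisy, wavelet="db8", maxlevel=4)
noise_wp = pywt.WaveletPacket(noisy[: int(0.3 * fs)], wavelet="db8", maxlevel=4)

# Soft-threshold each terminal subband with a noise level estimated from the
# (assumed known) noise-only segment, a stand-in for the VAD-driven update.
for node, noise_node in zip(wp.get_level(4), noise_wp.get_level(4)):
    sigma = np.median(np.abs(noise_node.data)) / 0.6745     # robust noise estimate
    thr = sigma * np.sqrt(2.0 * np.log(len(node.data)))     # universal threshold
    node.data = pywt.threshold(node.data, thr, mode="soft")

enhanced = wp.reconstruct(update=False)[: t.size]
snr_in = 10 * np.log10(np.sum(clean ** 2) / np.sum((noisy - clean) ** 2))
snr_out = 10 * np.log10(np.sum(clean ** 2) / np.sum((enhanced - clean) ** 2))
print("input SNR:", round(snr_in, 1), "dB  output SNR:", round(snr_out, 1), "dB")
```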

  5. Wavelet-based image analysis system for soil texture analysis

    NASA Astrophysics Data System (ADS)

    Sun, Yun; Long, Zhiling; Jang, Ping-Rey; Plodinec, M. John

    2003-05-01

    Soil texture is defined as the relative proportion of clay, silt and sand found in a given soil sample. It is an important physical property of soil that affects such phenomena as plant growth and agricultural fertility. Traditional methods used to determine soil texture are either time consuming (hydrometer), or subjective and experience-demanding (field tactile evaluation). Considering that textural patterns observed at soil surfaces are uniquely associated with soil textures, we propose an innovative approach to soil texture analysis, in which wavelet frames-based features representing texture contents of soil images are extracted and categorized by applying a maximum likelihood criterion. The soil texture analysis system has been tested successfully with an accuracy of 91% in classifying soil samples into one of three general categories of soil textures. In comparison with the common methods, this wavelet-based image analysis approach is convenient, efficient, fast, and objective.

  6. Wavelet-based lossless compression of coronary angiographic images.

    PubMed

    Munteanu, A; Cornelis, J; Cristea, P

    1999-03-01

    The final diagnosis in coronary angiography has to be performed on a large set of original images. Therefore, lossless compression schemes play a key role in medical database management and telediagnosis applications. This paper proposes a wavelet-based compression scheme that is able to operate in the lossless mode. The quantization module implements a new way of coding of the wavelet coefficients that is more effective than the classical zerotree coding. The experimental results obtained on a set of 20 angiograms show that the algorithm outperforms the embedded zerotree coder, combined with the integer wavelet transform, by 0.38 bpp, the set partitioning coder by 0.21 bpp, and the lossless JPEG coder by 0.71 bpp. The scheme is a good candidate for radiological applications such as teleradiology and picture archiving and communications systems (PACS's). PMID:10363705

  7. Wavelet-based Image Compression using Subband Threshold

    NASA Astrophysics Data System (ADS)

    Muzaffar, Tanzeem; Choi, Tae-Sun

    2002-11-01

    Wavelet-based image compression has been a focus of research in recent years. In this paper, we propose a compression technique based on a modification of the original EZW coding. In this lossy technique, we discard less significant information in the image data in order to achieve further compression with minimal effect on output image quality. The algorithm calculates the weight of each subband and finds the subband with minimum weight in every level. This minimum-weight subband in each level, which contributes least to image reconstruction, undergoes a threshold process to eliminate low-valued data in it. Zerotree coding is then applied to the resultant output for compression. Different threshold values were applied during experiments to see the effect on compression ratio and reconstructed image quality. The proposed method results in a further increase in compression ratio with negligible loss in image quality.

  8. Wavelet-based scalable L-infinity-oriented compression.

    PubMed

    Alecu, Alin; Munteanu, Adrian; Cornelis, Jan P H; Schelkens, Peter

    2006-09-01

    Among the different classes of coding techniques proposed in literature, predictive schemes have proven their outstanding performance in near-lossless compression. However, these schemes are incapable of providing embedded L(infinity)-oriented compression, or, at most, provide a very limited number of potential L(infinity) bit-stream truncation points. We propose a new multidimensional wavelet-based L(infinity)-constrained scalable coding framework that generates a fully embedded L(infinity)-oriented bit stream and that retains the coding performance and all the scalability options of state-of-the-art L2-oriented wavelet codecs. Moreover, our codec instantiation of the proposed framework clearly outperforms JPEG2000 in L(infinity) coding sense. PMID:16948297

  9. Optimal Wavelet-Based Compression of PIV Images

    NASA Astrophysics Data System (ADS)

    Naguib, Ahmed; Humphreys, William

    2000-11-01

    Wavelet-based image compression/de-compression algorithms were developed for minimizing the size of stored PIV images while maximizing image fidelity. A common characteristic of multi-level wavelet analysis is the attainment of the same amount of compression using different distributions of the discarded wavelet coefficients among the multiple levels of analysis. It was found, however, that only one of these distributions corresponds to maximum retained image energy. To determine the optimal distribution of discarded wavelet coefficients in an automated, time-efficient manner, simplified models of typical PIV image histograms were utilized. Interrogation of compressed/de-compressed standard PIV images using the developed algorithms was conducted. Results show that compression levels of at least 1:4 can be achieved without sacrificing the accuracy of the calculated vector fields.

  10. Dynamic wavelet-based tool for gearbox diagnosis

    NASA Astrophysics Data System (ADS)

    Omar, Farag K.; Gaouda, A. M.

    2012-01-01

    This paper proposes a novel wavelet-based technique for detecting and localizing gear tooth defects in a noisy environment. The proposed technique utilizes a dynamic windowing process while analyzing gearbox vibration signals in the wavelet domain. The gear vibration signal is processed through a dynamic Kaiser's window of varying parameters. The window size, shape, and sliding rate are modified towards increasing the similarity between the non-stationary vibration signal and the selected mother wavelet. The window parameters are continuously modified until they provide maximum wavelet coefficients localized at the defected tooth. The technique is applied on laboratory data corrupted with high noise level. The technique has shown accurate results in detecting and localizing gear tooth fracture with different damage severity.

  11. Wavelet based free-form deformations for nonrigid registration

    NASA Astrophysics Data System (ADS)

    Sun, Wei; Niessen, Wiro J.; Klein, Stefan

    2014-03-01

    In nonrigid registration, deformations may take place on the coarse and fine scales. For the conventional B-splines based free-form deformation (FFD) registration, these coarse- and fine-scale deformations are all represented by basis functions of a single scale. Meanwhile, wavelets have been proposed as a signal representation suitable for multi-scale problems. Wavelet analysis leads to a unique decomposition of a signal into its coarse- and fine-scale components. Potentially, this could therefore be useful for image registration. In this work, we investigate whether a wavelet-based FFD model has advantages for nonrigid image registration. We use a B-splines based wavelet, as defined by Cai and Wang [1]. This wavelet is expressed as a linear combination of B-spline basis functions. Derived from the original B-spline function, this wavelet is smooth, differentiable, and compactly supported. The basis functions of this wavelet are orthogonal across scales in Sobolev space. This wavelet was previously used for registration in computer vision, in 2D optical flow problems [2], but it was not compared with the conventional B-spline FFD in medical image registration problems. An advantage of choosing this B-splines based wavelet model is that the space of allowable deformation is exactly equivalent to that of the traditional B-spline. The wavelet transformation is essentially a (linear) reparameterization of the B-spline transformation model. Experiments on 10 CT lung and 18 T1-weighted MRI brain datasets show that wavelet-based registration leads to smoother deformation fields than traditional B-splines based registration, while achieving better accuracy.

  12. Density estimation using the trapping web design: A geometric analysis

    USGS Publications Warehouse

    Link, W.A.; Barker, R.J.

    1994-01-01

    Population densities for small mammal and arthropod populations can be estimated using capture frequencies for a web of traps. A conceptually simple geometric analysis that avoids the need to estimate a point on a density function is proposed. This analysis incorporates data from the outermost rings of traps, explaining large capture frequencies in these rings rather than truncating them from the analysis.

  13. Nonparametric estimation of plant density by the distance method

    USGS Publications Warehouse

    Patil, S.A.; Burnham, K.P.; Kovner, J.L.

    1979-01-01

    A relation between the plant density and the probability density function of the nearest neighbor distance (squared) from a random point is established under fairly broad conditions. Based upon this relationship, a nonparametric estimator for the plant density is developed and presented in terms of order statistics. Consistency and asymptotic normality of the estimator are discussed. An interval estimator for the density is obtained. The modifications of this estimator and its variance are given when the distribution is truncated. Simulation results are presented for regular, random and aggregated populations to illustrate the nonparametric estimator and its variance. A numerical example from field data is given. Merits and deficiencies of the estimator are discussed with regard to its robustness and variance.
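
    To illustrate the distance-method idea only, the sketch below uses the classical Poisson-based maximum-likelihood estimator lambda = n / (pi * sum r_i^2) from point-to-nearest-plant distances; this is simpler than, and not the same as, the order-statistics estimator developed in the paper, and the simulated plant pattern and sampling design are assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)
true_density = 50.0                                  # plants per unit area
side = 10.0
n_plants = rng.poisson(true_density * side * side)
plants = rng.uniform(0, side, size=(n_plants, 2))

# Random sample points kept away from the plot edges to limit edge effects.
sample_points = rng.uniform(1, side - 1, size=(200, 2))
d2 = ((plants[None, :, :] - sample_points[:, None, :]) ** 2).sum(axis=2)
r2 = d2.min(axis=1)                                  # squared nearest-plant distances

lambda_hat = len(sample_points) / (np.pi * r2.sum())
print("estimated density:", round(lambda_hat, 1), " true density:", true_density)
```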

  14. Coarse-to-fine wavelet-based airport detection

    NASA Astrophysics Data System (ADS)

    Li, Cheng; Wang, Shuigen; Pang, Zhaofeng; Zhao, Baojun

    2015-10-01

    Airport detection in optical remote sensing images has attracted great interest in applications such as military reconnaissance and traffic control. However, most of the popular techniques for airport detection from optical remote sensing images have three weaknesses: 1) due to the characteristics of optical images, the detection results are often affected by imaging conditions, such as weather and imaging distortion; 2) optical images contain comprehensive information about targets, so it is difficult to extract robust features (e.g., intensity and textural information) to represent the airport area; and 3) the high resolution results in a large data volume, which limits real-time processing. Most previous works focus on solving only one of these problems and thus cannot achieve a balance of performance and complexity. In this paper, we propose a novel coarse-to-fine airport detection framework that addresses all three issues using wavelet coefficients. The framework includes two stages: 1) an efficient wavelet-based feature extraction is adopted for multi-scale textural feature representation, and a support vector machine (SVM) is exploited to classify and coarsely decide airport candidate regions; and then 2) refined line segment detection is used to obtain the runway and landing field of the airport. Finally, airport recognition is achieved by applying the fine runway positioning to the candidate regions. Experimental results show that the proposed approach outperforms existing algorithms in terms of detection accuracy and processing efficiency.

  15. Feature-oriented multiple description wavelet-based image coding.

    PubMed

    Liu, Yilong; Oraintara, Soontorn

    2007-01-01

    We address the problem of resilient image coding over error-prone networks where packet losses occur. Recent literature highlights the multiple description coding (MDC) as a promising approach to solve this problem. In this paper, we introduce a novel wavelet-based multiple description image coder, referred to as the feature-oriented MDC (FO-MDC). The proposed multiple description (MD) coder exploits the statistics of the wavelet coefficients and identifies the subsets of samples that are sensitive to packet loss. A joint optimization between tree-pruning and quantizer selection in the rate-distortion sense is used in order to allocate more bits to these sensitive coefficients. When compared with the state-of-the-art MD scalar quantization coder, the proposed FO-MDC yields a more efficient central-side distortion tradeoff control mechanism. Furthermore, it proves to be more robust for image transmission even with high packet loss ratios, which makes it suitable for protecting multimedia streams over packet-erasure channels. PMID:17283771

  16. A wavelet-based feature vector model for DNA clustering.

    PubMed

    Bao, J P; Yuan, R Y

    2015-01-01

    DNA data are important in the bioinformatic domain. To extract useful information from the enormous collection of DNA sequences, DNA clustering is often adopted to efficiently deal with DNA data. The alignment-free method is a very popular way of creating feature vectors from DNA sequences, which are then used to compare DNA similarities. This paper proposes a wavelet-based feature vector (WFV) model, which is also an alignment-free method. From the perspective of signal processing, a DNA sequence is a sequence of digital signals. However, most traditional alignment-free models only extract features in the time domain. The WFV model uses discrete wavelet transform to adaptively yield feature vectors with a fixed dimension based on the features in both the time and frequency domains. The level of wavelet transform is adjusted according to the length of the DNA sequence rather than a fixed manually set value. The WFV model prefers a 32-dimension feature vector, which greatly promotes system performance. We compared the WFV model with the other five alignment-free models, i.e., k-tuple, DMK, TSM, AMI, and CV, on several large-scale DNA datasets on the DNA clustering application by means of the K-means algorithm. The experimental results showed that the WFV model outperformed the other models in terms of both the clustering results and the running time. PMID:26782569
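
    One plausible reading of the fixed-dimension wavelet feature vector idea above (not necessarily the authors' exact WFV construction): map bases to a numeric signal, choose the DWT level from the sequence length, and keep a 32-dimensional approximation-band feature; the base encoding, wavelet choice, and resampling step are assumptions.

```python
import numpy as np
import pywt

BASE_CODE = {"A": 0.0, "C": 1.0, "G": 2.0, "T": 3.0}   # assumed numeric encoding

def wavelet_feature_vector(seq, target_dim=32, wavelet="haar"):
    signal = np.array([BASE_CODE[b] for b in seq if b in BASE_CODE])
    # Pick the decomposition level from the sequence length so the approximation
    # band has roughly target_dim coefficients.
    level = max(1, int(np.floor(np.log2(len(signal) / target_dim))))
    approx = pywt.wavedec(signal, wavelet, level=level)[0]
    # Resample to exactly target_dim so sequences of any length are comparable.
    idx = np.linspace(0, len(approx) - 1, target_dim)
    return np.interp(idx, np.arange(len(approx)), approx)

rng = np.random.default_rng(9)
seq = "".join(rng.choice(list("ACGT"), size=5000))
print(wavelet_feature_vector(seq).shape)   # (32,)
```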

  17. Geometric methods for wavelet-based image compression

    NASA Astrophysics Data System (ADS)

    Wakin, Michael B.; Romberg, Justin K.; Choi, Hyeokho; Baraniuk, Richard G.

    2003-11-01

    Natural images can be viewed as combinations of smooth regions, textures, and geometry. Wavelet-based image coders, such as the space-frequency quantization (SFQ) algorithm, provide reasonably efficient representations for smooth regions (using zerotrees, for example) and textures (using scalar quantization) but do not properly exploit the geometric regularity imposed on wavelet coefficients by features such as edges. In this paper, we develop a representation for wavelet coefficients in geometric regions based on the wedgelet dictionary, a collection of geometric atoms that construct piecewise-linear approximations to contours. Our wedgeprint representation implicitly models the coherency among geometric wavelet coefficients. We demonstrate that a simple compression algorithm combining wedgeprints with zerotrees and scalar quantization can achieve near-optimal rate-distortion performance D(R) ~ (log R)^2 / R^2 for the class of piecewise-smooth images containing smooth C^2 regions separated by smooth C^2 discontinuities. Finally, we extend this simple algorithm and propose a complete compression framework for natural images using a rate-distortion criterion to balance the three representations. Our Wedgelet-SFQ (WSFQ) coder outperforms SFQ in terms of visual quality and mean-square error.

  18. An image adaptive, wavelet-based watermarking of digital images

    NASA Astrophysics Data System (ADS)

    Agreste, Santa; Andaloro, Guido; Prestipino, Daniela; Puccio, Luigia

    2007-12-01

    In digital management, multimedia content and data can easily be used in an illegal way, being copied, modified and distributed again. Copyright protection, intellectual and material rights protection for authors, owners, buyers and distributors, and the authenticity of content are crucial factors in solving an urgent and real problem. In such a scenario, digital watermark techniques are emerging as a valid solution. In this paper, we describe an algorithm, called WM2.0, for an invisible watermark: private, strong, wavelet-based and developed for digital image protection and authenticity. The use of the discrete wavelet transform (DWT) is motivated by its good time-frequency features and its good match with human visual system characteristics. These two combined elements are important in building an invisible and robust watermark. WM2.0 works on a dual scheme: watermark embedding and watermark detection. The watermark is embedded into high-frequency DWT components of a specific sub-image, and it is calculated in correlation with the image features and statistical properties. Watermark detection applies a re-synchronization between the original and the watermarked image. The correlation between the watermarked DWT coefficients and the watermark signal is calculated according to the Neyman-Pearson statistical criterion. Experimentation on a large set of different images has shown the method to be resistant against geometric, filtering and StirMark attacks, with a low rate of false alarms.

  19. Wavelet-based multiresolution analysis of Wivenhoe Dam water temperatures

    NASA Astrophysics Data System (ADS)

    Percival, D. B.; Lennox, S. M.; Wang, Y.-G.; Darnell, R. E.

    2011-05-01

    Water temperature measurements from Wivenhoe Dam offer a unique opportunity for studying fluctuations of temperatures in a subtropical dam as a function of time and depth. Cursory examination of the data indicate a complicated structure across both time and depth. We propose simplifying the task of describing these data by breaking the time series at each depth into physically meaningful components that individually capture daily, subannual, and annual (DSA) variations. Precise definitions for each component are formulated in terms of a wavelet-based multiresolution analysis. The DSA components are approximately pairwise uncorrelated within a given depth and between different depths. They also satisfy an additive property in that their sum is exactly equal to the original time series. Each component is based upon a set of coefficients that decomposes the sample variance of each time series exactly across time and that can be used to study both time-varying variances of water temperature at each depth and time-varying correlations between temperatures at different depths. Each DSA component is amenable for studying a certain aspect of the relationship between the series at different depths. The daily component in general is weakly correlated between depths, including those that are adjacent to one another. The subannual component quantifies seasonal effects and in particular isolates phenomena associated with the thermocline, thus simplifying its study across time. The annual component can be used for a trend analysis. The descriptive analysis provided by the DSA decomposition is a useful precursor to a more formal statistical analysis.
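
    A sketch of an additive wavelet multiresolution decomposition into daily-, subannual- and annual-style components; the paper's analysis is MODWT-based, whereas this sketch uses the ordinary decimated DWT from PyWavelets, and the grouping of levels into the three components is purely illustrative, as is the synthetic hourly temperature series.

```python
import numpy as np
import pywt

rng = np.random.default_rng(7)
n_days, per_day = 365, 24
t = np.arange(n_days * per_day)
temp = (20 + 5 * np.sin(2 * np.pi * t / (n_days * per_day))   # annual cycle
        + 1.5 * np.sin(2 * np.pi * t / per_day)               # daily cycle
        + 0.3 * rng.normal(size=t.size))

level = 10
coeffs = pywt.wavedec(temp, "db4", level=level)

def keep(indices):
    """Reconstruct using only the listed coefficient arrays (others zeroed)."""
    kept = [c if i in indices else np.zeros_like(c) for i, c in enumerate(coeffs)]
    return pywt.waverec(kept, "db4")[: t.size]

daily     = keep(range(level - 4, level + 1))   # finest detail levels
subannual = keep(range(1, level - 4))           # intermediate detail levels
annual    = keep([0])                           # coarsest approximation

# By linearity of the reconstruction, the three components sum to the series.
print("max additivity error:", np.abs(temp - (daily + subannual + annual)).max())
```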

  20. Density Ratio Estimation: A New Versatile Tool for Machine Learning

    NASA Astrophysics Data System (ADS)

    Sugiyama, Masashi

    A new general framework of statistical data processing based on the ratio of probability densities has been proposed recently and gathers a great deal of attention in the machine learning and data mining communities [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17]. This density ratio framework includes various statistical data processing tasks such as non-stationarity adaptation [18,1,2,4,13], outlier detection [19,20,21,6], and conditional density estimation [22,23,24,15]. Furthermore, mutual information—which plays a central role in information theory [25]—can also be estimated via density ratio estimation. Since mutual information is a measure of statistical independence between random variables [26,27,28], density ratio estimation can be used also for variable selection [29,7,11], dimensionality reduction [30,16], and independent component analysis [31,12].

  1. Efficient backward-propagation using wavelet-based filtering for fiber backward-propagation.

    PubMed

    Goldfarb, Gilad; Li, Guifang

    2009-05-25

    With the goal of reducing the number of operations required for digital backward-propagation used for fiber impairment compensation, wavelet-based filtering is presented. The wavelet-based design relies on signal decomposition using time-limited basis functions and hence is more compatible with the dispersion operator, which is also time-limited. This is in contrast with an inverse-Fourier filter design, which by definition is not time-limited due to the use of harmonic basis functions for signal decomposition. Artificial, after-the-fact windowing may be employed in this case; however, only a limited saving in the number of operations can be achieved compared to the wavelet-based filter design. The wavelet-based filter design procedure and numerical simulations that validate this approach are presented in this paper. PMID:19466131

  2. Improving 3D Wavelet-Based Compression of Hyperspectral Images

    NASA Technical Reports Server (NTRS)

    Klimesh, Matthew; Kiely, Aaron; Xie, Hua; Aranki, Nazeeh

    2009-01-01

    Two methods of increasing the effectiveness of three-dimensional (3D) wavelet-based compression of hyperspectral images have been developed. (As used here, "images" signifies both images and digital data representing images.) The methods are oriented toward reducing or eliminating detrimental effects of a phenomenon, referred to as spectral ringing, that is described below. In 3D wavelet-based compression, an image is represented by a multiresolution wavelet decomposition consisting of several subbands obtained by applying wavelet transforms in the two spatial dimensions corresponding to the two spatial coordinate axes of the image plane, and by applying wavelet transforms in the spectral dimension. Spectral ringing is named after the more familiar spatial ringing (spurious spatial oscillations) that can be seen parallel to and near edges in ordinary images reconstructed from compressed data. These ringing phenomena are attributable to effects of quantization. In hyperspectral data, the individual spectral bands play the role of edges, causing spurious oscillations to occur in the spectral dimension. In the absence of corrective measures such as the present two methods, spectral ringing can manifest itself as systematic biases in some reconstructed spectral bands and can reduce the effectiveness of compression of spatially low-pass subbands. One of the two methods is denoted mean subtraction. The basic idea of this method is to subtract mean values from spatial planes of spatially low-pass subbands prior to encoding, because (a) such spatial planes often have mean values that are far from zero and (b) zero-mean data are better suited for compression by methods that are effective for subbands of two-dimensional (2D) images. In this method, after the 3D wavelet decomposition is performed, mean values are computed for and subtracted from each spatial plane of each spatially low-pass subband. The resulting data are converted to sign-magnitude form and compressed in a manner similar to that of a baseline hyperspectral-image-compression method. The mean values are encoded in the compressed bit stream and added back to the data at the appropriate decompression step. The overhead incurred by encoding the mean values (only a few bits per spectral band) is negligible with respect to the huge size of a typical hyperspectral data set. The other method is denoted modified decomposition. This method is so named because it involves a modified version of a commonly used multiresolution wavelet decomposition, known in the art as the 3D Mallat decomposition, in which (a) the first of multiple stages of a 3D wavelet transform is applied to the entire dataset and (b) subsequent stages are applied only to the horizontally-, vertically-, and spectrally-low-pass subband from the preceding stage. In the modified decomposition, in stages after the first, not only is the spatially low-pass, spectrally low-pass subband further decomposed, but spatially low-pass, spectrally high-pass subbands are also further decomposed spatially. Either method can be used alone to improve the quality of a reconstructed image. Alternatively, the two methods can be combined by first performing the modified decomposition, then subtracting the mean values from spatial planes of spatially low-pass subbands.
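
    A minimal sketch of the mean-subtraction step described above: remove the per-spectral-plane mean of a spatially low-pass subband before encoding and add it back after decoding; the subband array shape and values are synthetic placeholders, and the actual encoding of the zero-mean data and the means is omitted.

```python
import numpy as np

rng = np.random.default_rng(8)
# Synthetic spatially low-pass subband with shape (spectral bands, rows, cols)
# and plane means that are far from zero.
low_pass_subband = rng.normal(loc=120.0, scale=5.0, size=(64, 32, 32))

plane_means = low_pass_subband.mean(axis=(1, 2))            # one mean per spectral plane
zero_mean = low_pass_subband - plane_means[:, None, None]   # better suited for 2-D coders

# ... encode zero_mean with the 2-D subband coder and store plane_means
#     (a few bits per spectral band) in the compressed bit stream ...

restored = zero_mean + plane_means[:, None, None]           # decoder adds the means back
assert np.allclose(restored, low_pass_subband)
print("max plane-mean magnitude removed:", round(np.abs(plane_means).max(), 1))
```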

  3. Embedded wavelet-based face recognition under variable position

    NASA Astrophysics Data System (ADS)

    Cotret, Pascal; Chevobbe, Stéphane; Darouich, Mehdi

    2015-02-01

    For several years, face recognition has been a hot topic in the image processing field: the technique is applied in several domains, such as CCTV and the unlocking of electronic devices. In this context, this work studies the efficiency of a wavelet-based face recognition method in terms of subject position robustness and performance on various systems. The use of the wavelet transform has a limited impact on the position robustness of PCA-based face recognition. This work shows, for a well-known database (Yale face database B*), that the subject position in a 3D space can vary up to 10% of the original ROI size without decreasing recognition rates. Face recognition is performed on the approximation coefficients of the image wavelet transform: results are still satisfying after 3 levels of decomposition. Furthermore, the face database size can be divided by a factor of 64 (2^(2K) with K = 3). In the context of ultra-embedded vision systems, memory footprint is one of the key points to be addressed; that is the reason why compression techniques such as the wavelet transform are interesting. Furthermore, it leads to a low-complexity face detection stage compliant with the limited computation resources available on such systems. The approach described in this work is tested on three platforms, from a standard x86-based computer to nanocomputers such as RaspberryPi and SECO boards. For K = 3 and a database with 40 faces, the mean execution time for one frame is 0.64 ms on an x86-based computer, 9 ms on a SECO board and 26 ms on a RaspberryPi (model B).

  4. A wavelet-based approach to face verification/recognition

    NASA Astrophysics Data System (ADS)

    Jassim, Sabah; Sellahewa, Harin

    2005-10-01

    Face verification/recognition is a tough challenge in comparison to identification based on other biometrics such as iris or fingerprints. Yet, due to its unobtrusive nature, the face is naturally suitable for security-related applications. The face verification process relies on feature extraction from face images. Current schemes are either geometric-based or template-based. In the latter, the face image is statistically analysed to obtain a set of feature vectors that best describe it. The performance of a face verification system is affected by image variations due to illumination, pose, occlusion, expression and scale. This paper extends our recent work on face verification for constrained platforms, where the feature vector of a face image consists of the coefficients in the wavelet-transformed LL-subbands at depth 3 or more. It was demonstrated that the wavelet-only feature vector scheme has performance comparable to sophisticated state-of-the-art schemes when tested on two benchmark databases (ORL and BANCA). The significance of those results stems from the fact that the size of the k-th LL-subband is 1/4^k of the original image size. Here, we investigate the use of wavelet coefficients in various subbands at level 3 or 4 using various wavelet filters. We shall compare the performance of the wavelet-based scheme for different filters at different subbands with a number of state-of-the-art face verification/recognition schemes on two benchmark databases, namely ORL and the control section of BANCA. We shall demonstrate that our schemes have performance comparable to (or outperform) the best performing other schemes.

  5. Nonparametric probability density estimation for data analysis in several dimensions

    NASA Technical Reports Server (NTRS)

    Scott, D. W.

    1983-01-01

    It is shown that nonparametric probability density estimates, in particular the corresponding contour curves, are a useful adjunct to scatter diagrams when performing a preliminary examination of a set of random data in several dimensions.

  6. Estimating Geometric Dislocation Densities in Polycrystalline Materials from Orientation Imaging Microscopy

    SciTech Connect

    Man, Chi-Sing; Gao, Xiang; Godefroy, Scott; Kenik, Edward A

    2010-01-01

    Herein we consider polycrystalline materials which can be taken as statistically homogeneous and whose grains can be adequately modeled as rigid-plastic. Our objective is to obtain, from orientation imaging microscopy (OIM), estimates of geometrically necessary dislocation (GND) densities.

  7. Atmospheric density estimation using satellite precision orbit ephemerides

    NASA Astrophysics Data System (ADS)

    Arudra, Anoop Kumar

    The current atmospheric density models are not capable enough to accurately model the atmospheric density, which varies continuously in the upper atmosphere mainly due to the changes in solar and geomagnetic activity. Inaccurate atmospheric modeling results in erroneous density values that are not accurate enough to calculate the drag estimates acting on a satellite, thus leading to errors in the prediction of satellite orbits. This research utilized precision orbit ephemerides (POE) data from satellites in an orbit determination process to make corrections to existing atmospheric models, thus resulting in improved density estimates. The work done in this research made corrections to the Jacchia family atmospheric models and Mass Spectrometer Incoherent Scatter (MSIS) family atmospheric models using POE data from the Ice, Cloud and Land Elevation Satellite (ICESat) and the Terra Synthetic Aperture Radar-X Band (TerraSAR-X) satellite. The POE data obtained from these satellites was used in an orbit determination scheme which performs a sequential filter/smoother process to the measurements and generates corrections to the atmospheric models to estimate density. This research considered several days from the year 2001 to 2008 encompassing all levels of solar and geomagnetic activity. Density and ballistic coefficient half-lives with values of 1.8, 18, and 180 minutes were used in this research to observe the effect of these half-life combinations on density estimates. This research also examined the consistency of densities derived from the accelerometers of the Challenging Mini Satellite Payload (CHAMP) and Gravity Recovery and Climate Experiment (GRACE) satellites by Eric Sutton, from the University of Colorado. The accelerometer densities derived by Sutton were compared with those derived by Sean Bruinsma from CNES, Department of Terrestrial and Planetary Geodesy, France. The Sutton densities proved to be nearly identical to the Bruinsma densities for all the cases considered in this research, thus suggesting that Sutton densities can be used as a substitute for Bruinsma densities in validating the POE density estimates for future work. Density estimates were found using the ICESat and TerraSAR-X POE data by generating corrections to the CIRA-72 and NRLMSISE-00 atmospheric density models. The ICESat and TerraSAR-X POE density estimates obtained were examined and studied by comparing them with the density estimates obtained using CHAMP and GRACE POE data. The trends in how POE density estimates varied for all four satellites were found to be the same or similar. The comparisons were made for different baseline atmospheric density models, different density and ballistic coefficient correlated half-lives, and for varying levels of solar and geomagnetic activity. The comparisons in this research help in understanding the variation of density estimates for various satellites with different altitudes and orbits.

  8. Optimum nonparametric estimation of population density based on ordered distances

    USGS Publications Warehouse

    Patil, S.A.; Kovner, J.L.; Burnham, Kenneth P.

    1982-01-01

    The asymptotic mean and error mean square are determined for the nonparametric estimator of plant density by distance sampling proposed by Patil, Burnham and Kovner (1979, Biometrics 35, 597-604). On the basis of these formulae, a bias-reduced version of this estimator is given, and the specific form is determined which gives minimum mean square error under varying assumptions about the true probability density function of the sampled data. An extension is given to line-transect sampling.

  9. Wavelet-based AR-SVM for health monitoring of smart structures

    NASA Astrophysics Data System (ADS)

    Kim, Yeesock; Chong, Jo Woon; Chon, Ki H.; Kim, JungMi

    2013-01-01

    This paper proposes a novel structural health monitoring framework for damage detection of smart structures. The framework is developed through the integration of the discrete wavelet transform, an autoregressive (AR) model, damage-sensitive features, and a support vector machine (SVM). The steps of the method are the following: (1) the wavelet-based AR (WAR) model estimates vibration signals obtained from both the undamaged and damaged smart structures under a variety of random signals; (2) a new damage-sensitive feature is formulated in terms of the AR parameters estimated from the structural velocity responses; and then (3) the SVM is applied to each group of damaged and undamaged data sets in order to optimally separate them into either damaged or healthy groups. To demonstrate the effectiveness of the proposed structural health monitoring framework, a three-story smart building equipped with a magnetorheological (MR) damper under artificial earthquake signals is studied. It is shown from the simulation that the proposed health monitoring scheme is effective in detecting damage of the smart structures in an efficient way.
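
    A minimal sketch of the wavelet-based AR plus SVM pipeline described above is given below: each response is wavelet-filtered, an AR model is fitted to the filtered signal, and the AR coefficients feed an SVM classifier. The synthetic signals, AR order, wavelet choice and SVM settings are illustrative assumptions rather than the paper's setup.

```python
# Minimal WAR-SVM sketch: wavelet filter -> AR coefficients -> SVM damage classifier.
# Signal generation, AR order and SVM settings are illustrative assumptions.
import numpy as np
import pywt
from sklearn.svm import SVC

def ar_coefficients(x, order=4):
    """Least-squares AR(order) fit: x[t] ~ sum_i a_i * x[t-i]."""
    X = np.column_stack([x[order - i - 1:len(x) - i - 1] for i in range(order)])
    y = x[order:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    return a

def war_features(signal, wavelet="db4", level=3, order=4):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # keep the approximation, zero the detail bands (a crude wavelet filter)
    coeffs = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
    filtered = pywt.waverec(coeffs, wavelet)
    return ar_coefficients(filtered, order)

rng = np.random.default_rng(0)
def response(damaged, n=1024):
    t = np.arange(n)
    freq = 0.05 if not damaged else 0.04          # damage shifts the dominant frequency
    return np.sin(2 * np.pi * freq * t) + 0.3 * rng.standard_normal(n)

X = np.array([war_features(response(d)) for d in [0] * 20 + [1] * 20])
y = np.array([0] * 20 + [1] * 20)
clf = SVC(kernel="rbf").fit(X[::2], y[::2])       # train on every other sample
print("test accuracy:", clf.score(X[1::2], y[1::2]))
```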

  10. Evaluation of wolf density estimation from radiotelemetry data

    USGS Publications Warehouse

    Burch, J.W.; Adams, L.G.; Follmann, E.H.; Rexstad, E.A.

    2005-01-01

    Density estimation of wolves (Canis lupus) requires a count of individuals and an estimate of the area those individuals inhabit. With radiomarked wolves, the count is straightforward but estimation of the area is more difficult and often given inadequate attention. The population area, based on the mosaic of pack territories, is influenced by sampling intensity similar to the estimation of individual home ranges. If sampling intensity is low, population area will be underestimated and wolf density will be inflated. Using data from studies in Denali National Park and Preserve, Alaska, we investigated these relationships using Monte Carlo simulation to evaluate effects of radiolocation effort and number of marked packs on density estimation. As the number of adjoining pack home ranges increased, fewer relocations were necessary to define a given percentage of population area. We present recommendations for monitoring wolves via radiotelemetry.

  11. Wavelet-based multiscale performance analysis: An approach to assess and improve hydrological models

    NASA Astrophysics Data System (ADS)

    Rathinasamy, Maheswaran; Khosa, Rakesh; Adamowski, Jan; ch, Sudheer; Partheepan, G.; Anand, Jatin; Narsimlu, Boini

    2014-12-01

    The temporal dynamics of hydrological processes are spread across different time scales and, as such, the performance of hydrological models cannot be estimated reliably from global performance measures that assign a single number to the fit of a simulated time series to an observed reference series. Accordingly, it is important to analyze model performance at different time scales. Wavelets have been used extensively in the area of hydrological modeling for multiscale analysis, and have been shown to be very reliable and useful in understanding dynamics across time scales and as these evolve in time. In this paper, a wavelet-based multiscale performance measure for hydrological models is proposed and tested (i.e., Multiscale Nash-Sutcliffe Criteria and Multiscale Normalized Root Mean Square Error). The main advantage of this method is that it provides a quantitative measure of model performance across different time scales. In the proposed approach, modeled and observed time series are decomposed using the Discrete Wavelet Transform (known as the à trous wavelet transform), and performance measures of the model are obtained at each time scale. The applicability of the proposed method was explored using various case studies, both real and synthetic. The synthetic case studies included various kinds of errors (e.g., timing error, under- and over-prediction of high and low flows) in outputs from a hydrologic model. The real case studies investigated in this study included simulation results of both the process-based Soil and Water Assessment Tool (SWAT) model and statistical models, namely the Coupled Wavelet-Volterra (WVC), Artificial Neural Network (ANN), and Auto Regressive Moving Average (ARMA) methods. For the SWAT model, data from the Wainganga and Sind basins (India) were used, while for the Wavelet-Volterra, ANN and ARMA models, data from the Cauvery River Basin (India) and Fraser River (Canada) were used. The study also explored the effect of the choice of wavelet on multiscale model evaluation. It was found that the proposed wavelet-based performance measures, namely the MNSC (Multiscale Nash-Sutcliffe Criteria) and MNRMSE (Multiscale Normalized Root Mean Square Error), are more reliable measures than traditional performance measures such as the Nash-Sutcliffe Criteria (NSC), Root Mean Square Error (RMSE), and Normalized Root Mean Square Error (NRMSE). Further, the proposed methodology can be used to: i) compare different hydrological models (both physical and statistical models), and ii) help in model calibration.
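
    The sketch below illustrates the multiscale idea with a simple stand-in: both the observed and simulated series are decomposed with an à trous-style transform and a Nash-Sutcliffe score is computed per scale, so a timing error in the fast dynamics shows up only at the fine scales. The B3-spline kernel, number of levels and synthetic series are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch of a wavelet-based multiscale skill measure (scale-wise Nash-Sutcliffe).
# Kernel, number of levels and the synthetic series are illustrative assumptions.
import numpy as np

def atrous_decompose(x, levels=4):
    """Return [d1, ..., dJ, smooth]: detail signals per scale plus the final smooth."""
    kernel = np.array([1, 4, 6, 4, 1], dtype=float) / 16.0
    smooth, details = x.astype(float), []
    for j in range(levels):
        dilated = np.zeros(len(kernel) + (len(kernel) - 1) * (2 ** j - 1))
        dilated[::2 ** j] = kernel                     # insert 2^j - 1 zeros between taps
        new_smooth = np.convolve(smooth, dilated, mode="same")
        details.append(smooth - new_smooth)
        smooth = new_smooth
    return details + [smooth]

def nse(obs, sim):
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

rng = np.random.default_rng(0)
t = np.arange(512)
obs = np.sin(2 * np.pi * t / 64) + 0.5 * np.sin(2 * np.pi * t / 8) + 0.1 * rng.standard_normal(512)
sim = np.sin(2 * np.pi * t / 64) + 0.5 * np.sin(2 * np.pi * (t - 2) / 8)  # good slow dynamics, lagged fast ones

for k, (o, s) in enumerate(zip(atrous_decompose(obs), atrous_decompose(sim)), start=1):
    label = f"scale {k}" if k <= 4 else "residual smooth"
    print(f"{label}: NSE = {nse(o, s):.2f}")
print("overall NSE =", round(nse(obs, sim), 2))  # a single number hides the scale-dependent behaviour
```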

  12. Conditional Density Estimation with HMM Based Support Vector Machines

    NASA Astrophysics Data System (ADS)

    Hu, Fasheng; Liu, Zhenqiu; Jia, Chunxin; Chen, Dechang

    Conditional density estimation is very important in financial engineering, risk management, and other engineering computing problems. However, most regression models carry the implicit assumption that the probability density is a Gaussian distribution, which is not necessarily true in many real-life applications. In this paper, we give a framework to estimate or predict the conditional density mixture dynamically. By combining the Input-Output HMM with SVM regression and building an SVM model in each state of the HMM, we can estimate a conditional density mixture instead of a single Gaussian. With an SVM in each node, this model can be applied not only to regression but to classification as well. We applied this model to denoise ECG data. The proposed method has the potential to be applied to other time series, such as stock market return prediction.
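
    A minimal sketch of the state-dependent idea follows: an HMM segments the series into regimes and a separate SVM regressor is fitted per regime, so the prediction depends on the state rather than on a single global Gaussian model. The use of hmmlearn and scikit-learn, the two-regime toy series and all parameters are illustrative assumptions, not the paper's Input-Output HMM implementation.

```python
# Minimal sketch: HMM regime segmentation plus one SVM regressor per regime.
# hmmlearn/scikit-learn usage and all parameters are illustrative assumptions.
import numpy as np
from hmmlearn.hmm import GaussianHMM
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n = 600
# Two regimes with different dynamics: calm (small noise) and volatile (large noise).
regimes = (np.sin(2 * np.pi * np.arange(n) / 200) > 0).astype(int)
x = np.zeros(n)
for t in range(1, n):
    phi, sigma = (0.9, 0.05) if regimes[t] == 0 else (0.3, 0.4)
    x[t] = phi * x[t - 1] + sigma * rng.standard_normal()

hmm = GaussianHMM(n_components=2, covariance_type="full", n_iter=50, random_state=0)
hmm.fit(x.reshape(-1, 1))
states = hmm.predict(x.reshape(-1, 1))

# One SVR per inferred state, predicting x[t] from x[t-1] (a 1-lag input-output mapping).
X_lag, y_lag, s_lag = x[:-1].reshape(-1, 1), x[1:], states[1:]
models = {s: SVR(kernel="rbf", C=1.0).fit(X_lag[s_lag == s], y_lag[s_lag == s])
          for s in np.unique(s_lag)}

t0 = 300
pred = models[states[t0]].predict([[x[t0 - 1]]])[0]
print(f"state at t={t0}: {states[t0]}, prediction: {pred:.3f}, actual: {x[t0]:.3f}")
```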

  13. The estimation of body density in rugby union football players.

    PubMed Central

    Bell, W

    1995-01-01

    The general regression equation of Durnin and Womersley for estimating body density from skinfold thicknesses in young men was examined by comparing the estimated density from this equation with the measured density of a group of 45 rugby union players of similar age. Body density was measured by hydrostatic weighing with simultaneous measurement of residual volume. Additional measurements included stature, body mass and skinfold thicknesses at the biceps, triceps, subscapular and suprailiac sites. The estimated density was significantly different from the measured density (P < 0.001), equivalent to a mean overestimation of relative fat of approximately 4%. A new set of prediction equations for estimating density was formulated from linear regression using the logarithm of single and sums of skinfold thicknesses. Equations were derived from a validation sample (n = 22) and tested on a cross-validation sample (n = 23). The standard error of the estimate (s.e.e.) of the equations ranged from 0.0058 to 0.0062 g/ml. The derived equations were successfully cross-validated. Differences between measured and estimated densities were not significant (P > 0.05), total errors ranging from 0.0067 to 0.0092 g/ml. An exploratory assessment was also made of the effect of fatness and aerobic fitness on the prediction equations. The equations should be applied to players of similar age and playing ability, and for the purpose of identifying group characteristics. Application of the equations to individuals may give rise to errors of between -3.9% and +2.5% total body fat in two-thirds of cases. PMID:7788218
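
    A small numerical sketch of the form of prediction equation used here (body density regressed on the logarithm of the sum of skinfolds) is given below. The coefficients and data are made up for illustration; they are not the equations derived in the paper.

```python
# Illustrative fit of D = a + b * log10(sum of skinfolds); data and coefficients are made up.
import numpy as np

# Hypothetical validation sample: sum of four skinfolds (mm) and measured density (g/ml).
skinfold_sum = np.array([25., 32., 40., 48., 55., 63., 72., 85.])
density      = np.array([1.082, 1.075, 1.068, 1.062, 1.058, 1.053, 1.048, 1.043])

A = np.column_stack([np.ones_like(skinfold_sum), np.log10(skinfold_sum)])
coef, *_ = np.linalg.lstsq(A, density, rcond=None)   # least-squares fit
a, b = coef
print(f"fitted equation: D = {a:.4f} + ({b:.4f}) * log10(sum of skinfolds)")

# Standard error of the estimate (s.e.e.), the figure quoted in the abstract in g/ml.
pred = A @ coef
see = np.sqrt(np.sum((density - pred) ** 2) / (len(density) - 2))
print(f"s.e.e. = {see:.4f} g/ml")
```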

  14. The estimation of body density in rugby union football players.

    PubMed

    Bell, W

    1995-03-01

    The general regression equation of Durnin and Womersley for estimating body density from skinfold thicknesses in young men was examined by comparing the estimated density from this equation with the measured density of a group of 45 rugby union players of similar age. Body density was measured by hydrostatic weighing with simultaneous measurement of residual volume. Additional measurements included stature, body mass and skinfold thicknesses at the biceps, triceps, subscapular and suprailiac sites. The estimated density was significantly different from the measured density (P < 0.001), equivalent to a mean overestimation of relative fat of approximately 4%. A new set of prediction equations for estimating density was formulated from linear regression using the logarithm of single and sums of skinfold thicknesses. Equations were derived from a validation sample (n = 22) and tested on a cross-validation sample (n = 23). The standard error of the estimate (s.e.e.) of the equations ranged from 0.0058 to 0.0062 g/ml. The derived equations were successfully cross-validated. Differences between measured and estimated densities were not significant (P > 0.05), total errors ranging from 0.0067 to 0.0092 g/ml. An exploratory assessment was also made of the effect of fatness and aerobic fitness on the prediction equations. The equations should be applied to players of similar age and playing ability, and for the purpose of identifying group characteristics. Application of the equations to individuals may give rise to errors of between -3.9% and +2.5% total body fat in two-thirds of cases. PMID:7788218

  15. Ultrasonic velocity for estimating density of structural ceramics

    NASA Technical Reports Server (NTRS)

    Klima, S. J.; Watson, G. K.; Herbell, T. P.; Moore, T. J.

    1981-01-01

    The feasibility of using ultrasonic velocity as a measure of the bulk density of sintered alpha silicon carbide was investigated. The material studied was either in the as-sintered condition or hot isostatically pressed in the temperature range from 1850 to 2050 °C. Densities varied from approximately 2.8 to 3.2 g/cu cm. Results show that the bulk, nominal density of structural-grade silicon carbide articles can be estimated from ultrasonic velocity measurements to within 1 percent using 20 MHz longitudinal waves and a commercially available ultrasonic time intervalometer. The ultrasonic velocity measurement technique shows promise for screening out material with unacceptably low density levels.

  16. Multibaseline polarimetric synthetic aperture radar tomography of forested areas using wavelet-based distribution compressive sensing

    NASA Astrophysics Data System (ADS)

    Liang, Lei; Li, Xinwu; Gao, Xizhang; Guo, Huadong

    2015-01-01

    The three-dimensional (3-D) structure of forests, especially the vertical structure, is an important parameter of forest ecosystem modeling for monitoring ecological change. Synthetic aperture radar tomography (TomoSAR) provides scene reflectivity estimation of vegetation along elevation coordinates. Due to the advantages of super-resolution imaging and a small number of measurements, distributed compressive sensing (DCS) inversion techniques for polarimetric SAR tomography were successfully developed and applied. This paper addresses the 3-D imaging of forested areas based on the framework of DCS using fully polarimetric (FP) multibaseline SAR interferometric (MB-InSAR) tomography at the P-band. A new DCS-based FP TomoSAR method is proposed: a wavelet-based distributed compressive sensing FP TomoSAR method (the FP-WDCS TomoSAR method). The method takes advantage of the joint sparsity between polarimetric channel signals in the wavelet domain to jointly invert the reflectivity profiles in each channel. The method not only allows high-accuracy and super-resolution imaging with a low number of acquisitions, but can also obtain the polarization information of the vertical structure of forested areas. The effectiveness of the techniques for polarimetric SAR tomography is demonstrated using FP P-band airborne datasets acquired by the ONERA SETHI airborne system over a test site in Paracou, French Guiana.

  17. Wavelet-based enhancement of signal-averaged electrocardiograms for late potential detection.

    PubMed

    Rakotomamonjy, A; Coast, D; Marché, P

    1999-11-01

    An optimal wavelet filter to improve the signal-to-noise ratio (SNR) of the signal-averaged electrocardiogram is described. As the averaging technique leads to the best unbiased estimator, the challenge is to attenuate the noise while preserving the low amplitude signals that are usually embedded in it. An optimal, in the mean-square sense, wavelet-based filter has been derived from the model of the signal. However, such a filter needs exact knowledge of the noise statistic and the noise-free signal. Hence, to implement such a filter, a method based on successive sub-averaging and wavelet filtering is proposed. Its performance was evaluated using simulated and real ECGs. An improvement in SNR of between 6 and 10 dB can be achieved compared to a classical averaging technique which uses an ensemble of 64 simulated ECG beats. Tests on real ECGs demonstrate the utility of the method as it has been shown that by using fewer beats in the filtered ensemble average, one can achieve the same noise reduction. Clinical use of this technique would reduce the ensemble needed for averaging while obtaining the same diagnostic result. PMID:10723883
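
    The sketch below illustrates the sub-averaging plus wavelet-filtering idea on a synthetic beat ensemble: sub-averages of beats are wavelet-thresholded and then combined, and the SNR is compared with a plain 64-beat average. The simulated beats, wavelet choice, universal threshold and the decision to threshold only the finest detail bands are illustrative assumptions, not the optimal filter derived in the paper.

```python
# Minimal sketch of sub-averaging plus wavelet filtering for an averaged ECG.
# Beat simulation, wavelet, threshold rule and band selection are illustrative assumptions.
import numpy as np
import pywt

rng = np.random.default_rng(0)
n, n_beats = 512, 64
t = np.linspace(0, 1, n)
# Stand-in beat: a broad QRS-like bump plus a smaller, low-amplitude late component.
clean = np.exp(-((t - 0.3) / 0.03) ** 2) + 0.3 * np.exp(-((t - 0.55) / 0.05) ** 2)
beats = clean + 0.2 * rng.standard_normal((n_beats, n))

def wavelet_filter(x, wavelet="sym8", level=5, n_fine=3):
    coeffs = pywt.wavedec(x, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745     # noise scale from the finest detail band
    thr = sigma * np.sqrt(2 * np.log(len(x)))           # universal threshold
    for i in range(len(coeffs) - n_fine, len(coeffs)):  # threshold only the finest detail bands
        coeffs[i] = pywt.threshold(coeffs[i], thr, mode="hard")
    return pywt.waverec(coeffs, wavelet)[: len(x)]

# Sub-average groups of 8 beats, wavelet-filter each sub-average, then combine.
sub_averages = beats.reshape(8, 8, n).mean(axis=1)
combined = np.mean([wavelet_filter(s) for s in sub_averages], axis=0)

def snr_db(estimate):                                   # SNR against the known clean template
    return 10 * np.log10(np.sum(clean ** 2) / np.sum((estimate - clean) ** 2))

print(f"plain 64-beat average   : {snr_db(beats.mean(axis=0)):.1f} dB")
print(f"sub-averaging + wavelet : {snr_db(combined):.1f} dB")  # should come out several dB higher
```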

  18. Comparison of neuron selection algorithms of wavelet-based neural network

    NASA Astrophysics Data System (ADS)

    Mei, Xiaodan; Sun, Sheng-He

    2001-09-01

    Wavelet networks have received increasing attention in various fields such as signal processing, pattern recognition, robotics and automatic control. Recently, researchers have become interested in employing wavelet functions as activation functions and have obtained some satisfying results in approximating and localizing signals. However, function estimation becomes more and more complex with the growth of the input dimension. The hidden neurons contribute to minimizing the approximation error, so it is important to study suitable algorithms for neuron selection. Clearly, an exhaustive search procedure is not satisfactory when the number of neurons is large. The study in this paper focuses on which type of selection algorithm has faster convergence and lower error for signal approximation. Therefore, the Genetic Algorithm and the Tabu Search algorithm are studied and compared through experiments. This paper first presents the structure of the wavelet-based neural network, then introduces the two selection algorithms, discusses their properties and learning processes, and analyzes the experiments and results. We used two wavelet functions to test the two algorithms. The experiments show that the Tabu Search selection algorithm performs better than the Genetic selection algorithm: TSA has a faster convergence rate than GA under the same stopping criterion.

  19. Fast wavelet-based image characterization for highly adaptive image retrieval.

    PubMed

    Quellec, Gwénolé; Lamard, Mathieu; Cazuguel, Guy; Cochener, Béatrice; Roux, Christian

    2012-04-01

    Adaptive wavelet-based image characterizations have been proposed in previous works for content-based image retrieval (CBIR) applications. In these applications, the same wavelet basis was used to characterize each query image: This wavelet basis was tuned to maximize the retrieval performance in a training data set. We take it one step further in this paper: A different wavelet basis is used to characterize each query image. A regression function, which is tuned to maximize the retrieval performance in the training data set, is used to estimate the best wavelet filter, i.e., in terms of expected retrieval performance, for each query image. A simple image characterization, which is based on the standardized moments of the wavelet coefficient distributions, is presented. An algorithm is proposed to compute this image characterization almost instantly for every possible separable or nonseparable wavelet filter. Therefore, using a different wavelet basis for each query image does not considerably increase computation times. On the other hand, significant retrieval performance increases were obtained in a medical image data set, a texture data set, a face recognition data set, and an object picture data set. This additional flexibility in wavelet adaptation paves the way to relevance feedback on image characterization itself and not simply on the way image characterizations are combined. PMID:22194244
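
    A minimal sketch of a moment-based wavelet signature follows: for each detail subband, a few standardised moments of the coefficient distribution are kept, and retrieval is a nearest-neighbour search over those vectors. A fixed db2 filter is used here for simplicity, whereas the paper's point is precisely to pick the filter per query; all data and parameters are illustrative.

```python
# Minimal sketch of a wavelet-signature characterisation for retrieval.
# Wavelet, level, chosen moments, distance and the texture data are illustrative.
import numpy as np
import pywt
from scipy.stats import skew, kurtosis

def wavelet_signature(img, wavelet="db2", level=3):
    feats = []
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    for detail_level in coeffs[1:]:                    # (cH, cV, cD) triplet per level
        for band in detail_level:
            c = band.ravel()
            feats += [c.std(), skew(c), kurtosis(c)]   # standardised moments of the distribution
    return np.array(feats)

rng = np.random.default_rng(0)
def texture(freq, size=64):
    x, y = np.meshgrid(np.arange(size), np.arange(size))
    pattern = np.sin(2 * np.pi * freq * x / size) * np.cos(2 * np.pi * freq * y / size)
    return pattern + 0.1 * rng.standard_normal((size, size))

database = {f"texture_f{f}": wavelet_signature(texture(f)) for f in (2, 4, 8, 16)}
query = wavelet_signature(texture(8))                  # a fresh sample of the freq-8 texture
best = min(database, key=lambda k: np.linalg.norm(database[k] - query))
print("best match:", best)                             # expected: texture_f8
```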

  20. Resistivity and density logs key to fluid pressure estimates

    SciTech Connect

    Stein, N.

    1985-04-08

    A mathematical model is developed here to estimate the fluid pressure in a sand having shales above and below it. Estimates are based on resistivity and density-log data for the shale. The procedure should be adaptable to the use of data from commercially available measurement-while-drilling packages. It follows that the model and calculation procedure can be of special help when considering drilling programs for exploration wells in geopressured areas. No previously made correlation is needed when using resistivity and density-log data to estimate fluid pressures. An interval of shales which is normally pressured is not required in order to estimate fluid pressures in overpressured intervals. Narrow intervals of shale are preferred for each group of data to be analyzed for overpressure. Estimates of pressure must be based on average properties if long intervals of shale are considered.

  1. Atmospheric Density Corrections Estimated from Fitted Drag Coefficients

    NASA Astrophysics Data System (ADS)

    McLaughlin, C. A.; Lechtenberg, T. F.; Mance, S. R.; Mehta, P.

    2010-12-01

    Fitted drag coefficients estimated using GEODYN, the NASA Goddard Space Flight Center Precision Orbit Determination and Geodetic Parameter Estimation Program, are used to create density corrections. The drag coefficients were estimated for Stella, Starlette and GFZ using satellite laser ranging (SLR) measurements; and for GEOSAT Follow-On (GFO) using SLR, Doppler, and altimeter crossover measurements. The data analyzed covers years ranging from 2000 to 2004 for Stella and Starlette, 2000 to 2002 and 2005 for GFO, and 1995 to 1997 for GFZ. The drag coefficient was estimated every eight hours. The drag coefficients over the course of a year show a consistent variation about the theoretical and yearly average values that primarily represents a semi-annual/seasonal error in the atmospheric density models used. The atmospheric density models examined were NRLMSISE-00 and MSIS-86. The annual structure of the major variations was consistent among all the satellites for a given year and consistent among all the years examined. The fitted drag coefficients can be converted into density corrections every eight hours along the orbit of the satellites. In addition, drag coefficients estimated more frequently can provide a higher frequency of density correction.

  2. Non-local crime density estimation incorporating housing information

    PubMed Central

    Woodworth, J. T.; Mohler, G. O.; Bertozzi, A. L.; Brantingham, P. J.

    2014-01-01

    Given a discrete sample of event locations, we wish to produce a probability density that models the relative probability of events occurring in a spatial domain. Standard density estimation techniques do not incorporate priors informed by spatial data. Such methods can result in assigning significant positive probability to locations where events cannot realistically occur. In particular, when modelling residential burglaries, standard density estimation can predict residential burglaries occurring where there are no residences. Incorporating the spatial data can inform the valid region for the density. When modelling very few events, additional priors can help to correctly fill in the gaps. Learning and enforcing correlation between spatial data and event data can yield better estimates from fewer events. We propose a non-local version of maximum penalized likelihood estimation based on the H1 Sobolev seminorm regularizer that computes non-local weights from spatial data to obtain more spatially accurate density estimates. We evaluate this method in application to a residential burglary dataset from San Fernando Valley with the non-local weights informed by housing data or a satellite image. PMID:25288817

  3. Open-cluster density profiles derived using a kernel estimator

    NASA Astrophysics Data System (ADS)

    Seleznev, Anton F.

    2016-03-01

    Surface and spatial radial density profiles in open clusters are derived using a kernel estimator method. Formulae are obtained for the contribution of every star into the spatial density profile. The evaluation of spatial density profiles is tested against open-cluster models from N-body experiments with N = 500. Surface density profiles are derived for seven open clusters (NGC 1502, 1960, 2287, 2516, 2682, 6819 and 6939) using Two-Micron All-Sky Survey data and for different limiting magnitudes. The selection of an optimal kernel half-width is discussed. It is shown that open-cluster radius estimates hardly depend on the kernel half-width. Hints of stellar mass segregation and structural features indicating cluster non-stationarity in the regular force field are found. A comparison with other investigations shows that the data on open-cluster sizes are often underestimated. The existence of an extended corona around the open cluster NGC 6939 was confirmed. A combined function composed of the King density profile for the cluster core and the uniform sphere for the cluster corona is shown to be a better approximation of the surface radial density profile. The King function alone does not reproduce surface density profiles of sample clusters properly. The number of stars, the cluster masses and the tidal radii in the Galactic gravitational field for the sample clusters are estimated. It is shown that NGC 6819 and 6939 are extended beyond their tidal surfaces.

  4. Density estimation using KNN and a potential model

    NASA Astrophysics Data System (ADS)

    Lu, Yonggang; Qiao, Jiangang; Liao, Li; Yang, Wuyang

    2013-10-01

    Density-based clustering methods are usually more adaptive than other classical methods in that they can identify clusters of various shapes and can handle noisy data. A novel density estimation method is proposed using both the k-nearest neighbor (KNN) graph and a hypothetical potential field of the data points to capture the local and global data distribution information, respectively. An initial density score computed using KNN is used as the mass of the data point in computing the potential values. Then the computed potential is used as the new density estimation, from which the final clustering result is derived. All the parameters used in the proposed method are determined from the input data automatically. The new clustering method is evaluated by comparing with K-means++, DBSCAN, and CSPV. The experimental results show that the proposed method can determine the number of clusters automatically while producing competitive clustering results compared to the other three methods.
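
    A minimal sketch of the two-stage idea follows: a kNN density gives each point a mass, and a potential summed over all points using those masses gives the final score, combining local and global information. The exponential potential, its scale and the toy data are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch: kNN density as point mass, then a potential field as the final score.
# The potential kernel, its scale and the toy data are illustrative assumptions.
import numpy as np
from math import gamma, pi
from scipy.spatial import cKDTree

def knn_density(points, k=10):
    tree = cKDTree(points)
    dist, _ = tree.query(points, k=k + 1)       # k+1: the nearest "neighbour" is the point itself
    r_k = dist[:, -1]                           # distance to the k-th true neighbour
    d = points.shape[1]
    ball_volume = pi ** (d / 2) / gamma(d / 2 + 1) * r_k ** d
    return k / (len(points) * ball_volume)      # classical kNN density estimate (local information)

def potential_scores(points, mass, scale=0.5):
    r = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    return (mass[None, :] * np.exp(-r / scale)).sum(axis=1)   # exponential potential (global information)

rng = np.random.default_rng(0)
pts = np.vstack([rng.normal([0, 0], 0.3, size=(100, 2)),
                 rng.normal([3, 3], 0.3, size=(100, 2)),
                 rng.uniform(-1, 4, size=(20, 2))])            # two clusters plus background noise

mass = knn_density(pts, k=10)
score = potential_scores(pts, mass)             # final density score used for clustering decisions
print("mean score, cluster points:", round(score[:200].mean(), 2))
print("mean score, noise points  :", round(score[200:].mean(), 2))
```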

  5. Nonparametric probability density estimation by optimization theoretic techniques

    NASA Technical Reports Server (NTRS)

    Scott, D. W.

    1976-01-01

    Two nonparametric probability density estimators are considered. The first is the kernel estimator. The problem of choosing the kernel scaling factor based solely on a random sample is addressed. An interactive mode is discussed and an algorithm proposed to choose the scaling factor automatically. The second nonparametric probability estimate uses penalty function techniques with the maximum likelihood criterion. A discrete maximum penalized likelihood estimator is proposed and is shown to be consistent in the mean square error. A numerical implementation technique for the discrete solution is discussed and examples displayed. An extensive simulation study compares the integrated mean square error of the discrete and kernel estimators. The robustness of the discrete estimator is demonstrated graphically.
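
    As a pointer to the kernel-estimator half of the abstract, the sketch below uses scipy's gaussian_kde, whose default bandwidth follows Scott's rule, i.e. the scaling factor is chosen automatically from the sample itself. The data and evaluation grid are illustrative.

```python
# Minimal kernel density estimate with an automatically chosen scaling factor (Scott's rule).
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
sample = np.concatenate([rng.normal(-2, 0.5, 300), rng.normal(1, 1.0, 700)])

kde = gaussian_kde(sample)            # default bandwidth: Scott's rule, n**(-1/(d+4)) * data scale
grid = np.linspace(-4, 4, 9)
for x, p in zip(grid, kde(grid)):
    print(f"f({x:+.1f}) ~= {p:.3f}")
```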

  6. Wavelet-based analogous phase scintillation index for high latitudes

    NASA Astrophysics Data System (ADS)

    Ahmed, A.; Tiwari, R.; Strangeways, H. J.; Dlay, S.; Johnsen, M. G.

    2015-08-01

    Global Positioning System (GPS) performance at high latitudes can be severely affected by ionospheric scintillation due to the presence of small-scale, time-varying electron density irregularities. In this paper, an improved analogous phase scintillation index derived using a wavelet-transform-based filtering technique is presented to represent the effects of scintillation regionally at European high latitudes. The improved analogous phase index is then compared with the original analogous phase index and the phase scintillation index using 1 year of data from Trondheim, Norway (63.41°N, 10.4°E). This index provides samples at a 1 min rate for the prediction of phase scintillation, using raw total electron content (TEC) data at 1 Hz, in contrast to scintillation monitoring receivers (such as NovAtel Global Navigation Satellite Systems Ionospheric Scintillation and TEC Monitor receivers) which operate at a 50 Hz rate and are thus rather computationally intensive. The estimation of phase scintillation effects using high sample rate data makes the improved analogous phase index a suitable candidate for use in regional geodetic dual-frequency GPS receivers to efficiently update the tracking loop parameters based on tracking jitter variance.

  7. Wavelet Based Analytical Expressions to Steady State Biofilm Model Arising in Biochemical Engineering.

    PubMed

    Padma, S; Hariharan, G

    2016-06-01

    In this paper, we have developed an efficient wavelet-based approximation method for the steady-state biofilm model arising in enzyme kinetics. A Chebyshev wavelet-based approximation method is successfully introduced for solving the nonlinear steady-state biofilm reaction model. To the best of our knowledge, no rigorous wavelet-based solution has so far been reported for the proposed model. Analytical solutions for substrate concentration have been derived for all values of the parameters δ and SL. The power of this manageable method is confirmed. Some numerical examples are presented to demonstrate the validity and applicability of the wavelet method. Moreover, the use of Chebyshev wavelets is found to be simple, efficient, flexible, convenient, computationally inexpensive and attractive. PMID:26661721

  8. Estimating Density Using Precision Satellite Orbits from Multiple Satellites

    NASA Astrophysics Data System (ADS)

    McLaughlin, Craig A.; Lechtenberg, Travis; Fattig, Eric; Krishna, Dhaval Mysore

    2012-06-01

    This article examines atmospheric densities estimated using precision orbit ephemerides (POE) from several satellites including CHAMP, GRACE, and TerraSAR-X. The results of the calibration of atmospheric densities along the CHAMP and GRACE-A orbits derived using POEs with those derived using accelerometers are compared for various levels of solar and geomagnetic activity to examine the consistency in calibration between the two satellites. Densities from CHAMP and GRACE are compared when GRACE is orbiting nearly directly above CHAMP. In addition, the densities derived simultaneously from CHAMP, GRACE-A, and TerraSAR-X are compared to the Jacchia 1971 and NRLMSISE-00 model densities to observe altitude effects and consistency in the offsets from the empirical models among all three satellites.

  9. An Infrastructureless Approach to Estimate Vehicular Density in Urban Environments

    PubMed Central

    Sanguesa, Julio A.; Fogue, Manuel; Garrido, Piedad; Martinez, Francisco J.; Cano, Juan-Carlos; Calafate, Carlos T.; Manzoni, Pietro

    2013-01-01

    In Vehicular Networks, communication success usually depends on the density of vehicles, since a higher density allows having shorter and more reliable wireless links. Thus, knowing the density of vehicles in a vehicular communications environment is important, as better opportunities for wireless communication can show up. However, vehicle density is highly variable in time and space. This paper deals with the importance of predicting the density of vehicles in vehicular environments to take decisions for enhancing the dissemination of warning messages between vehicles. We propose a novel mechanism to estimate the vehicular density in urban environments. Our mechanism uses as input parameters the number of beacons received per vehicle, and the topological characteristics of the environment where the vehicles are located. Simulation results indicate that, unlike previous proposals solely based on the number of beacons received, our approach is able to accurately estimate the vehicular density, and therefore it could support more efficient dissemination protocols for vehicular environments, as well as improve previously proposed schemes. PMID:23435054

  10. An infrastructureless approach to estimate vehicular density in urban environments.

    PubMed

    Sanguesa, Julio A; Fogue, Manuel; Garrido, Piedad; Martinez, Francisco J; Cano, Juan-Carlos; Calafate, Carlos T; Manzoni, Pietro

    2013-01-01

    In Vehicular Networks, communication success usually depends on the density of vehicles, since a higher density allows having shorter and more reliable wireless links. Thus, knowing the density of vehicles in a vehicular communications environment is important, as better opportunities for wireless communication can show up. However, vehicle density is highly variable in time and space. This paper deals with the importance of predicting the density of vehicles in vehicular environments to take decisions for enhancing the dissemination of warning messages between vehicles. We propose a novel mechanism to estimate the vehicular density in urban environments. Our mechanism uses as input parameters the number of beacons received per vehicle, and the topological characteristics of the environment where the vehicles are located. Simulation results indicate that, unlike previous proposals solely based on the number of beacons received, our approach is able to accurately estimate the vehicular density, and therefore it could support more efficient dissemination protocols for vehicular environments, as well as improve previously proposed schemes. PMID:23435054

  11. Double sampling to estimate density and population trends in birds

    USGS Publications Warehouse

    Bart, Jonathan; Earnst, Susan L.

    2002-01-01

    We present a method for estimating density of nesting birds based on double sampling. The approach involves surveying a large sample of plots using a rapid method such as uncorrected point counts, variable circular plot counts, or the recently suggested double-observer method. A subsample of those plots is also surveyed using intensive methods to determine actual density. The ratio of the mean count on those plots (using the rapid method) to the mean actual density (as determined by the intensive searches) is used to adjust results from the rapid method. The approach works well when results from the rapid method are highly correlated with actual density. We illustrate the method with three years of shorebird surveys from the tundra in northern Alaska. In the rapid method, surveyors covered ~10 ha/h and surveyed each plot a single time. The intensive surveys involved three thorough searches, required ~3 h/ha, and took 20% of the study effort. Surveyors using the rapid method detected an average of 79% of birds present. That detection ratio was used to convert the index obtained in the rapid method into an essentially unbiased estimate of density. Trends estimated from several years of data would also be essentially unbiased. Other advantages of double sampling are that (1) the rapid method can be changed as new methods become available, (2) domains can be compared even if detection rates differ, (3) total population size can be estimated, and (4) valuable ancillary information (e.g. nest success) can be obtained on intensive plots with little additional effort. We suggest that double sampling be used to test the assumption that rapid methods, such as variable circular plot and double-observer methods, yield density estimates that are essentially unbiased. The feasibility of implementing double sampling in a range of habitats needs to be evaluated.
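
    The core adjustment is a simple ratio estimate, sketched numerically below: the ratio of rapid-method counts to intensive-plot densities on the subsample calibrates the rapid counts on all plots. The simulated plot counts, detection probability and sample sizes are made-up illustrative values, not the Alaska survey data.

```python
# Minimal numerical sketch of the double-sampling ratio adjustment; all numbers are made up.
import numpy as np

rng = np.random.default_rng(0)
n_plots, n_intensive = 120, 24
true_count = rng.poisson(8.0, size=n_plots)              # actual birds per plot (unknown in practice)
rapid_count = rng.binomial(true_count, 0.79)             # rapid survey detects roughly 79% of birds

intensive_idx = rng.choice(n_plots, size=n_intensive, replace=False)
# On the intensive subsample, thorough searches are assumed to recover the actual counts.
detection_ratio = rapid_count[intensive_idx].mean() / true_count[intensive_idx].mean()

adjusted_density = rapid_count.mean() / detection_ratio  # essentially unbiased density estimate
print(f"raw rapid-method mean     : {rapid_count.mean():.2f} birds/plot")
print(f"estimated detection ratio : {detection_ratio:.2f}")
print(f"adjusted density estimate : {adjusted_density:.2f} (true mean {true_count.mean():.2f})")
```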

  12. Face Value: Towards Robust Estimates of Snow Leopard Densities

    PubMed Central

    2015-01-01

    When densities of large carnivores fall below certain thresholds, dramatic ecological effects can follow, leading to oversimplified ecosystems. Understanding the population status of such species remains a major challenge as they occur at low densities and their ranges are wide. This paper describes the use of non-invasive data collection techniques combined with recent spatial capture-recapture methods to estimate the density of snow leopards Panthera uncia. It also investigates the influence of environmental and human activity indicators on their spatial distribution. A total of 60 camera traps were systematically set up during a three-month period over a 480 km2 study area in Qilianshan National Nature Reserve, Gansu Province, China. We recorded 76 separate snow leopard captures over 2,906 trap-days, representing an average capture success of 2.62 captures/100 trap-days. We identified a total of 20 unique individuals from photographs and estimated snow leopard density at 3.31 (SE = 1.01) individuals per 100 km2. Results of our simulation exercise indicate that our estimates from the Spatial Capture Recapture models were not optimal with respect to bias and precision (RMSEs for density parameters less than or equal to 0.87). Our results underline the critical challenge in achieving sufficient sample sizes of snow leopard captures and recaptures. Possible performance improvements are discussed, principally by optimising effective camera capture and photographic data quality. PMID:26322682

  13. Extracting galactic structure parameters from multivariated density estimation

    NASA Technical Reports Server (NTRS)

    Chen, B.; Creze, M.; Robin, A.; Bienayme, O.

    1992-01-01

    Multivariate statistical analysis, which includes cluster analysis (unsupervised classification), discriminant analysis (supervised classification) and principal component analysis (a dimensionality reduction method), together with nonparametric density estimation, has been successfully used to search for meaningful associations in the 5-dimensional space of observables between observed points and sets of simulated points generated from a synthetic approach to galaxy modelling. These methodologies can be applied as new tools to obtain information about hidden structure otherwise unrecognizable, and to place important constraints on the space distribution of various stellar populations in the Milky Way. In this paper, we concentrate on illustrating how to use nonparametric density estimation to substitute for the true densities of both the simulated sample and the real sample in the five-dimensional space. In order to fit model-predicted densities to reality, we derive a system comprising n equations (where n is the total number of observed points) and m unknown parameters (where m is the number of predefined groups). A least-squares estimation allows us to determine the density law of the different groups and components in the Galaxy. The output from our software, which can be used in many research fields, will also give the systematic error between the model and the observations via a Bayes rule.

  14. Density estimation in tiger populations: combining information for strong inference

    USGS Publications Warehouse

    Gopalaswamy, Arjun M.; Royle, J. Andrew; Delampady, Mohan; Nichols, James D.; Karanth, K. Ullas; Macdonald, David W.

    2012-01-01

    A productive way forward in studies of animal populations is to efficiently make use of all the information available, either as raw data or as published sources, on critical parameters of interest. In this study, we demonstrate two approaches to the use of multiple sources of information on a parameter of fundamental interest to ecologists: animal density. The first approach produces estimates simultaneously from two different sources of data. The second approach was developed for situations in which initial data collection and analysis are followed up by subsequent data collection and prior knowledge is updated with new data using a stepwise process. Both approaches are used to estimate density of a rare and elusive predator, the tiger, by combining photographic and fecal DNA spatial capture–recapture data. The model, which combined information, provided the most precise estimate of density (8.5 ± 1.95 tigers/100 km2 [posterior mean ± SD]) relative to a model that utilized only one data source (photographic, 12.02 ± 3.02 tigers/100 km2 and fecal DNA, 6.65 ± 2.37 tigers/100 km2). Our study demonstrates that, by accounting for multiple sources of available information, estimates of animal density can be significantly improved.

  15. Ionospheric electron density profile estimation using commercial AM broadcast signals

    NASA Astrophysics Data System (ADS)

    Yu, De; Ma, Hong; Cheng, Li; Li, Yang; Zhang, Yufeng; Chen, Wenjun

    2015-08-01

    A new method for estimating the bottom electron density profile by using commercial AM broadcast signals as non-cooperative signals is presented in this paper. Without requiring any dedicated transmitters, the required input data are the measured elevation angles of signals transmitted from the known locations of broadcast stations. The input data are inverted for the QPS model parameters depicting the electron density profile of the signal's reflection area by using a probabilistic inversion technique. This method has been validated on synthesized data and used with the real data provided by an HF direction-finding system situated near the city of Wuhan. The estimated parameters obtained by the proposed method have been compared with vertical ionosonde data and have been used to locate the Shijiazhuang broadcast station. The simulation and experimental results indicate that the proposed ionospheric sounding method is feasible for obtaining useful electron density profiles.

  16. Estimation of Enceladus Plume Density Using Cassini Flight Data

    NASA Technical Reports Server (NTRS)

    Wang, Eric K.; Lee, Allan Y.

    2011-01-01

    The Cassini spacecraft was launched on October 15, 1997 by a Titan 4B launch vehicle. After an interplanetary cruise of almost seven years, it arrived at Saturn on June 30, 2004. In 2005, Cassini completed three flybys of Enceladus, a small, icy satellite of Saturn. Observations made during these flybys confirmed the existence of water vapor plumes in the south polar region of Enceladus. Five additional low-altitude flybys of Enceladus were successfully executed in 2008-9 to better characterize these watery plumes. During some of these Enceladus flybys, the spacecraft attitude was controlled by a set of three reaction wheels. When the disturbance torque imparted on the spacecraft was predicted to exceed the control authority of the reaction wheels, thrusters were used to control the spacecraft attitude. Using telemetry data of reaction wheel rates or thruster on-times collected from four low-altitude Enceladus flybys (in 2008-10), one can reconstruct the time histories of the Enceladus plume jet density. The 1 sigma uncertainty of the estimated density is 5.9-6.7% (depending on the density estimation methodology employed). These plume density estimates could be used to confirm measurements made by other onboard science instruments and to support the modeling of Enceladus plume jets.

  17. Wavelet-based cross-correlation analysis of structure scaling in turbulent clouds

    NASA Astrophysics Data System (ADS)

    Arshakian, Tigran G.; Ossenkopf, Volker

    2016-01-01

    Aims: We propose a statistical tool to compare the scaling behaviour of turbulence in pairs of molecular cloud maps. Using artificial maps with well-defined spatial properties, we calibrate the method and test its limitations to apply it ultimately to a set of observed maps. Methods: We develop the wavelet-based weighted cross-correlation (WWCC) method to study the relative contribution of structures of different sizes and their degree of correlation in two maps as a function of spatial scale, and the mutual displacement of structures in the molecular cloud maps. Results: We test the WWCC for circular structures having a single prominent scale and fractal structures showing a self-similar behaviour without prominent scales. Observational noise and a finite map size limit the scales on which the cross-correlation coefficients and displacement vectors can be reliably measured. For fractal maps containing many structures on all scales, the limitation from observational noise is negligible for signal-to-noise ratios ≳5. We propose an approach for the identification of correlated structures in the maps, which allows us to localize individual correlated structures and recognize their shapes and suggest a recipe for recovering enhanced scales in self-similar structures. The application of the WWCC to the observed line maps of the giant molecular cloud G 333 allows us to add specific scale information to the results obtained earlier using the principal component analysis. The WWCC confirms the chemical and excitation similarity of 13CO and C18O on all scales, but shows a deviation of HCN at scales of up to 7 pc. This can be interpreted as a chemical transition scale. The largest structures also show a systematic offset along the filament, probably due to a large-scale density gradient. Conclusions: The WWCC can compare correlated structures in different maps of molecular clouds identifying scales that represent structural changes, such as chemical and phase transitions and prominent or enhanced dimensions.
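
    The sketch below illustrates a scale-wise cross-correlation in the spirit of the WWCC: each map is decomposed into dyadic scale planes (here with differences of Gaussian smoothings as a stand-in for the wavelet actually used) and the two maps are correlated scale by scale, so shared large-scale structure and uncorrelated small-scale noise separate cleanly. The synthetic maps and all parameters are illustrative assumptions.

```python
# Minimal sketch of a scale-wise cross-correlation between two maps.
# The Gaussian-difference decomposition and synthetic maps are illustrative stand-ins.
import numpy as np
from scipy.ndimage import gaussian_filter

def scale_decompose(img, n_scales=4):
    """Return per-scale detail planes plus the final smooth map (a-trous style)."""
    planes, smooth = [], img.astype(float)
    for j in range(n_scales):
        new_smooth = gaussian_filter(smooth, sigma=2.0 ** j)
        planes.append(smooth - new_smooth)
        smooth = new_smooth
    return planes + [smooth]

def correlation(a, b):
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

rng = np.random.default_rng(0)
x, y = np.meshgrid(np.linspace(0, 1, 128), np.linspace(0, 1, 128))
large_scale = np.exp(-((x - 0.5) ** 2 + (y - 0.5) ** 2) / 0.05)        # shared big structure
map_a = large_scale + 0.3 * rng.standard_normal((128, 128))            # independent small-scale noise
map_b = large_scale + 0.3 * rng.standard_normal((128, 128))

for j, (pa, pb) in enumerate(zip(scale_decompose(map_a), scale_decompose(map_b))):
    label = f"scale {j}" if j < 4 else "smooth"
    print(f"{label}: correlation = {correlation(pa, pb):+.2f}")
# The fine scales (independent noise) decorrelate, while the shared large-scale
# structure keeps the coarse planes strongly correlated.
```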

  18. Estimating podocyte number and density using a single histologic section.

    PubMed

    Venkatareddy, Madhusudan; Wang, Su; Yang, Yan; Patel, Sanjeevkumar; Wickman, Larysa; Nishizono, Ryuzoh; Chowdhury, Mahboob; Hodgin, Jeffrey; Wiggins, Paul A; Wiggins, Roger C

    2014-05-01

    The reduction in podocyte density to levels below a threshold value drives glomerulosclerosis and progression to ESRD. However, technical demands prohibit high-throughput application of conventional morphometry for estimating podocyte density. We evaluated a method for estimating podocyte density using single paraffin-embedded formalin-fixed sections. Podocyte nuclei were imaged using indirect immunofluorescence detection of antibodies against Wilms' tumor-1 or transducin-like enhancer of split 4. To account for the large size of podocyte nuclei in relation to section thickness, we derived a correction factor given by the equation CF=1/(D/T+1), where T is the tissue section thickness and D is the mean caliper diameter of podocyte nuclei. Normal values for D were directly measured in thick tissue sections and in 3- to 5-μm sections using calibrated imaging software. D values were larger for human podocyte nuclei than for rat or mouse nuclei (P<0.01). In addition, D did not vary significantly between human kidney biopsies at the time of transplantation, 3-6 months after transplantation, or with podocyte depletion associated with transplant glomerulopathy. In rat models, D values also did not vary with podocyte depletion, but increased approximately 10% with old age and in postnephrectomy kidney hypertrophy. A spreadsheet with embedded formulas was created to facilitate individualized podocyte density estimation upon input of measured values. The correction factor method was validated by comparison with other methods, and provided data comparable with prior data for normal human kidney transplant donors. This method for estimating podocyte density is applicable to high-throughput laboratory and clinical use. PMID:24357669
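
    A worked numerical example of the correction factor CF = 1/(D/T + 1) from the abstract follows, showing one way a raw per-section count of nuclear profiles can be converted into a podocyte density. The counts, areas and nuclear diameter are made-up illustrative values, not the paper's measurements.

```python
# Worked example of CF = 1/(D/T + 1); all numerical values are illustrative, not the paper's data.
T = 3.0          # tissue section thickness (um)
D = 9.0          # mean caliper diameter of podocyte nuclei (um), measured separately
CF = 1.0 / (D / T + 1.0)                  # discounts profiles from nuclei centred outside the section
print(f"correction factor CF = {CF:.3f}")

profiles_counted = 12                     # nuclear profiles seen in one glomerular cross-section
tuft_area = 8000.0                        # glomerular tuft area on the same section (um^2)
corrected_count = profiles_counted * CF   # nuclei attributed to the section volume (illustrative use)
density_per_volume = corrected_count / (tuft_area * T)          # podocytes per um^3
print(f"estimated podocyte density = {density_per_volume * 1e6:.1f} per 10^6 um^3")
```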

  19. Quantitative volumetric breast density estimation using phase contrast mammography

    NASA Astrophysics Data System (ADS)

    Wang, Zhentian; Hauser, Nik; Kubik-Huch, Rahel A.; D'Isidoro, Fabio; Stampanoni, Marco

    2015-05-01

    Phase contrast mammography using a grating interferometer is an emerging technology for breast imaging. It provides complementary information to the conventional absorption-based methods. Additional diagnostic values could be further obtained by retrieving quantitative information from the three physical signals (absorption, differential phase and small-angle scattering) yielded simultaneously. We report a non-parametric quantitative volumetric breast density estimation method by exploiting the ratio (dubbed the R value) of the absorption signal to the small-angle scattering signal. The R value is used to determine breast composition and the volumetric breast density (VBD) of the whole breast is obtained analytically by deducing the relationship between the R value and the pixel-wise breast density. The proposed method is tested by a phantom study and a group of 27 mastectomy samples. In the clinical evaluation, the estimated VBD values from both cranio-caudal (CC) and anterior-posterior (AP) views are compared with the ACR scores given by radiologists to the pre-surgical mammograms. The results show that the estimated VBD results using the proposed method are consistent with the pre-surgical ACR scores, indicating the effectiveness of this method in breast density estimation. A positive correlation is found between the estimated VBD and the diagnostic ACR score for both the CC view (p = 0.033) and AP view (p = 0.001). A linear regression between the results of the CC view and AP view showed a correlation coefficient γ = 0.77, which indicates the robustness of the proposed method and the quantitative character of the additional information obtained with our approach.

  20. The Effect of Lidar Point Density on LAI Estimation

    NASA Astrophysics Data System (ADS)

    Cawse-Nicholson, K.; van Aardt, J. A.; Romanczyk, P.; Kelbe, D.; Bandyopadhyay, M.; Yao, W.; Krause, K.; Kampe, T. U.

    2013-12-01

    Leaf Area Index (LAI) is an important measure of forest health, biomass and carbon exchange, and is most commonly defined as the ratio of the leaf area to ground area. LAI is understood over large spatial scales and describes leaf properties over an entire forest, thus airborne imagery is ideal for capturing such data. Spectral metrics such as the normalized difference vegetation index (NDVI) have been used in the past for LAI estimation, but these metrics may saturate for high LAI values. Light detection and ranging (lidar) is an active remote sensing technology that emits light (most often at the wavelength 1064nm) and uses the return time to calculate the distance to intercepted objects. This yields information on three-dimensional structure and shape, which has been shown in recent studies to yield more accurate LAI estimates than NDVI. However, although lidar is a promising alternative for LAI estimation, minimum acquisition parameters (e.g. point density) required for accurate LAI retrieval are not yet well known. The objective of this study was to determine the minimum number of points per square meter that are required to describe the LAI measurements taken in-field. As part of a larger data collect, discrete lidar data were acquired by Kucera International Inc. over the Hemlock-Canadice State Forest, NY, USA in September 2012. The Leica ALS60 obtained a point density of 12 points per square meter and an effective ground sampling distance (GSD) of 0.15 m. Up to three returns with intensities were recorded per pulse. As part of the same experiment, an AccuPAR LP-80 was used to collect LAI estimates at 25 sites on the ground. Sites were spaced approximately 80m apart and nine measurements were made in a grid pattern within a 20 x 20m site. Dominant species include Hemlock, Beech, Sugar Maple and Oak. This study has the benefit of very high-density data, which will enable a detailed map of intra-forest LAI. Understanding LAI at fine scales may be particularly useful in forest inventory applications and tree health evaluations. However, such high-density data is often not available over large areas. In this study we progressively downsampled the high-density discrete lidar data and evaluated the effect on LAI estimation. The AccuPAR data were used as validation and results were compared to existing LAI metrics. This will enable us to determine the minimum point density required for airborne lidar LAI retrieval. Preliminary results show that the data may be substantially thinned to estimate site-level LAI. More detailed results will be presented at the conference.

  1. Wall-resolved adaptive simulation with spatially-anisotropic wavelet-based refinement

    NASA Astrophysics Data System (ADS)

    de Stefano, Giuliano; Brown-Dymkoski, Eric; Vasilyev, Oleg V.

    2015-11-01

    In the wavelet-based adaptive multi-resolution approach to turbulence simulation, the separation between resolved energetic structures and unresolved flow is achieved through wavelet threshold filtering. Depending on the thresholding level, the effect of residual motions can be either neglected or modeled, leading to wavelet-based adaptive DNS or LES. Due to the ability to identify and efficiently represent energetic dynamically important flow structures, these methods have been proven reliable and effective for the computational modeling of wall-bounded turbulence. The wall-resolved adaptive approach however necessitates the use of high spatial resolution in the wall region, which practically limits the application to moderate Reynolds numbers. In order to address this issue, a new method that makes use of a spatially-anisotropic adaptive wavelet transform on curvilinear grids is introduced. In contrast to all known adaptive wavelet-based approaches that suffer from the "curse of anisotropy," i.e., isotropic wavelet refinement and inability to have spatially varying aspect ratio of the mesh elements, this approach utilizes spatially-anisotropic wavelet-based refinement. The method is tested for the turbulent flow past a rectangular cylinder at moderately high Reynolds number. This work was supported by NSF under grant No. CBET-1236505.

  2. Wavelet-Based Image Enhancement in X-Ray Imaging and Tomography

    NASA Astrophysics Data System (ADS)

    Bronnikov, Andrei V.; Duifhuis, Gerrit

    1998-07-01

    We consider an application of the wavelet transform to image processing in x-ray imaging and three-dimensional (3-D) tomography aimed at industrial inspection. Our experimental setup works in two operational modes: digital radiography and 3-D cone-beam tomographic data acquisition. Although the x-ray images measured have a large dynamic range and good spatial resolution, their noise properties and contrast are often not optimal. To enhance the images, we suggest applying digital image processing by using wavelet-based algorithms and consider the wavelet-based multiscale edge representation in the framework of the Mallat and Zhong approach [IEEE Trans. Pattern Anal. Mach. Intell. 14, 710 (1992)]. A contrast-enhancement method by use of equalization of the multiscale edges is suggested. Several denoising algorithms based on modifying the modulus and the phase of the multiscale gradients and several contrast-enhancement techniques applying linear and nonlinear multiscale edge stretching are described and compared by use of experimental data. We propose the use of a filter bank of wavelet-based reconstruction filters for the filtered-backprojection reconstruction algorithm. Experimental results show a considerable increase in the performance of the whole x-ray imaging system for both radiographic and tomographic modes in the case of the application of the wavelet-based image-processing algorithms.

  3. Robust rate-control for wavelet-based image coding via conditional probability models.

    PubMed

    Gaubatz, Matthew D; Hemami, Sheila S

    2007-03-01

    Real-time rate-control for wavelet image coding requires characterization of the rate required to code quantized wavelet data. An ideal robust solution can be used with any wavelet coder and any quantization scheme. A large number of wavelet quantization schemes (perceptual and otherwise) are based on scalar dead-zone quantization of wavelet coefficients. A key to performing rate-control is, thus, fast, accurate characterization of the relationship between rate and quantization step size, the R-Q curve. A solution is presented that uses two invocations of the coder to estimate the slope of each R-Q curve via probability modeling. The method is robust to choices of probability models, quantization schemes and wavelet coders. Because of extreme robustness to probability modeling, a fast approximation to spatially adaptive probability modeling can be used in the solution as well. With respect to achieving a target rate, the proposed approach and associated fast approximation yield average percentage errors around 0.5% and 1.0%, respectively, on images in the test set. By comparison, 2-coding-pass rho-domain modeling yields errors around 2.0%, and post-compression rate-distortion optimization yields average errors of around 1.0% at rates below 0.5 bits-per-pixel (bpp) that decrease to about 0.5% at 1.0 bpp; both methods exhibit more competitive performance on the larger images. The proposed method and fast approximation approach are also similar in speed to the other state-of-the-art methods. In addition to possessing speed and accuracy, the proposed method does not require any training and can maintain precise control over wavelet step sizes, which adds flexibility to a wavelet-based image-coding system. PMID:17357726

  4. Can modeling improve estimation of desert tortoise population densities?

    USGS Publications Warehouse

    Nussear, K.E.; Tracy, C.R.

    2007-01-01

    The federally listed desert tortoise (Gopherus agassizii) is currently monitored using distance sampling to estimate population densities. Distance sampling, as with many other techniques for estimating population density, assumes that it is possible to quantify the proportion of animals available to be counted in any census. Because desert tortoises spend much of their life in burrows, and the proportion of tortoises in burrows at any time can be extremely variable, this assumption is difficult to meet. This proportion of animals available to be counted is used as a correction factor (g0) in distance sampling and has been estimated from daily censuses of small populations of tortoises (6-12 individuals). These censuses are costly and produce imprecise estimates of g0 due to small sample sizes. We used data on tortoise activity from a large (N = 150) experimental population to model activity as a function of the biophysical attributes of the environment, but these models did not improve the precision of estimates from the focal populations. Thus, to evaluate how much of the variance in tortoise activity is apparently not predictable, we assessed whether activity on any particular day can predict activity on subsequent days with essentially identical environmental conditions. Tortoise activity was only weakly correlated on consecutive days, indicating that behavior was not repeatable or consistent among days with similar physical environments. © 2007 by the Ecological Society of America.

  5. Some Bayesian statistical techniques useful in estimating frequency and density

    USGS Publications Warehouse

    Johnson, D.H.

    1977-01-01

    This paper presents some elementary applications of Bayesian statistics to problems faced by wildlife biologists. Bayesian confidence limits for frequency of occurrence are shown to be generally superior to classical confidence limits. Population density can be estimated from frequency data if the species is sparsely distributed relative to the size of the sample plot. For other situations, limits are developed based on the normal distribution and prior knowledge that the density is non-negative, which ensures that the lower confidence limit is non-negative. Conditions are described under which Bayesian confidence limits are superior to those calculated with classical methods; examples are also given on how prior knowledge of the density can be used to sharpen inferences drawn from a new sample.
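
    A common modern analogue of the Bayesian frequency-of-occurrence interval discussed here is a Beta posterior for the occupancy probability. The sketch below is a minimal illustration, not the paper's derivation; the uniform Beta(1, 1) prior, the 100 sample plots, and the 23 occupied plots are assumed values chosen only for the example.

        from scipy.stats import beta

        # x occupied plots out of n sampled; uniform Beta(1, 1) prior on the frequency.
        n, x = 100, 23
        posterior = beta(1 + x, 1 + n - x)

        lower, upper = posterior.ppf(0.025), posterior.ppf(0.975)
        print(f"posterior mean frequency: {posterior.mean():.3f}")
        print(f"95% Bayesian credible interval: ({lower:.3f}, {upper:.3f})")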

  6. A contact algorithm for density-based load estimation.

    PubMed

    Bona, Max A; Martin, Larry D; Fischer, Kenneth J

    2006-01-01

    An algorithm, which includes contact interactions within a joint, has been developed to estimate the dominant loading patterns in joints based on the density distribution of bone. The algorithm is applied to the proximal femur of a chimpanzee, gorilla and grizzly bear and is compared to the results obtained in a companion paper that uses a non-contact (linear) version of the density-based load estimation method. Results from the contact algorithm are consistent with those from the linear method. While the contact algorithm is substantially more complex than the linear method, it has some added benefits. First, since contact between the two interacting surfaces is incorporated into the load estimation method, the pressure distributions selected by the method are more likely indicative of those found in vivo. Thus, the pressure distributions predicted by the algorithm are more consistent with the in vivo loads that were responsible for producing the given distribution of bone density. Additionally, the relative positions of the interacting bones are known for each pressure distribution selected by the algorithm. This should allow the pressure distributions to be related to specific types of activities. The ultimate goal is to develop a technique that can predict dominant joint loading patterns and relate these loading patterns to specific types of locomotion and/or activities. PMID:16439233

  7. Estimating black bear density using DNA data from hair snares

    USGS Publications Warehouse

    Gardner, B.; Royle, J. Andrew; Wegan, M.T.; Rainbolt, R.E.; Curtis, P.D.

    2010-01-01

    DNA-based mark-recapture has become a methodological cornerstone of research focused on bear species. The objective of such studies is often to estimate population size; however, doing so is frequently complicated by movement of individual bears. Movement affects the probability of detection and the assumption of closure of the population required in most models. To mitigate the bias caused by movement of individuals, population size and density estimates are often adjusted using ad hoc methods, including buffering the minimum polygon of the trapping array. We used a hierarchical, spatial capture-recapture model that contains explicit components for the spatial point process that governs the distribution of individuals and their exposure to (via movement), and detection by, traps. We modeled detection probability as a function of each individual's distance to the trap and an indicator variable for previous capture to account for possible behavioral responses. We applied our model to a 2006 hair-snare study of a black bear (Ursus americanus) population in northern New York, USA. Based on the microsatellite marker analysis of collected hair samples, 47 individuals were identified. We estimated mean density at 0.20 bears/km2. A positive estimate of the indicator variable suggests that bears are attracted to baited sites; therefore, including a trap-dependence covariate is important when using bait to attract individuals. Bayesian analysis of the model was implemented in WinBUGS, and we provide the model specification. The model can be applied to any spatially organized trapping array (hair snares, camera traps, mist nets, etc.) to estimate density and can also account for heterogeneity and covariate information at the trap or individual level. © The Wildlife Society.
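
    The detection model described above, with probability declining with distance from the trap and an indicator for previous capture, can be written compactly. The sketch below shows only that detection function, not the full hierarchical WinBUGS model; the baseline probability p0, spatial scale sigma_km, and behavioral effect beta_behav are illustrative assumptions.

        import numpy as np

        def detection_prob(dist_km, prev_capture, p0=0.3, sigma_km=1.5, beta_behav=0.5):
            """Half-normal detection probability with a trap-dependence (behavioral) term:
            the logit-scale baseline is shifted by beta_behav after a previous capture,
            then scaled by a Gaussian kernel of the distance to the trap."""
            logit_p0 = np.log(p0 / (1.0 - p0)) + beta_behav * prev_capture
            baseline = 1.0 / (1.0 + np.exp(-logit_p0))
            return baseline * np.exp(-dist_km**2 / (2.0 * sigma_km**2))

        d = np.array([0.0, 0.5, 1.0, 2.0, 4.0])
        print("never captured:     ", np.round(detection_prob(d, prev_capture=0), 3))
        print("previously captured:", np.round(detection_prob(d, prev_capture=1), 3))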

  8. Volume estimation of multi-density nodules with thoracic CT

    NASA Astrophysics Data System (ADS)

    Gavrielides, Marios A.; Li, Qin; Zeng, Rongping; Myers, Kyle J.; Sahiner, Berkman; Petrick, Nicholas

    2014-03-01

    The purpose of this work was to quantify the effect of surrounding density on the volumetric assessment of lung nodules in a phantom CT study. Eight synthetic multi-density nodules were manufactured by enclosing spherical cores in larger spheres of double the diameter and with a different uniform density. Different combinations of outer/inner diameters (20/10 mm, 10/5 mm) and densities (100HU/-630HU, 10HU/-630HU, -630HU/100HU, -630HU/-10HU) were created. The nodules were placed within an anthropomorphic phantom and scanned with a 16-detector row CT scanner. Ten repeat scans were acquired using exposures of 20, 100, and 200 mAs, slice collimations of 16x0.75 mm and 16x1.5 mm, and a pitch of 1.2, and were reconstructed with varying slice thicknesses (three for each collimation) using two reconstruction filters (medium and standard). The volumes of the inner nodule cores were estimated from the reconstructed CT data using a matched-filter approach with templates modeling the characteristics of the multi-density objects. Volume estimation of the inner nodule was assessed using percent bias (PB) and the standard deviation of percent error (SPE). The true volumes of the inner nodules were measured using micro CT imaging. Results show PB values ranging from -12.4 to 2.3% and SPE values ranging from 1.8 to 12.8%. This study indicates that the volume of multi-density nodules can be measured with relatively small percent bias (on the order of +/-12% or less) when accounting for the properties of surrounding densities. These findings can provide valuable information for understanding bias and variability in clinical measurements of nodules that also include local biological changes such as inflammation and necrosis.

  9. Structural Reliability Using Probability Density Estimation Methods Within NESSUS

    NASA Technical Reports Server (NTRS)

    Chamis, Chrisos C. (Technical Monitor); Godines, Cody Ric

    2003-01-01

    A reliability analysis studies a mathematical model of a physical system, taking into account uncertainties in the design variables; common results are estimates of a response density, which also implies estimates of its parameters. Some common density parameters include the mean value, the standard deviation, and specific percentile(s) of the response, which are measures of central tendency, variation, and probability regions, respectively. Reliability analyses are important since the results can lead to different designs by calculating the probability of observing safe responses in each of the proposed designs. All of this is done at the expense of added computational time as compared to a single deterministic analysis, which results in one value of the response out of the many that make up the density of the response. Sampling methods, such as Monte Carlo (MC) and Latin hypercube sampling (LHS), can be used to perform reliability analyses and can compute nonlinear response density parameters even if the response depends on many random variables. Hence, both methods are very robust; however, they are computationally expensive to use in the estimation of the response density parameters. Both methods are two of the 13 stochastic methods contained within the Numerical Evaluation of Stochastic Structures Under Stress (NESSUS) program. NESSUS is a probabilistic finite element analysis (FEA) program that was developed through funding from NASA Glenn Research Center (GRC). It has the additional capability of being linked to other analysis programs; therefore, probabilistic fluid dynamics, fracture mechanics, and heat transfer are only a few of the analyses possible with this software. The LHS method is the newest addition to the stochastic methods within NESSUS. Part of this work was to enhance NESSUS with the LHS method. The new LHS module is complete, has been successfully integrated with NESSUS, and has been used to study four different test cases proposed by the Society of Automotive Engineers (SAE). The test cases compare different probabilistic methods within NESSUS because it is important that a user can have confidence that estimates of the stochastic parameters of a response will be within an acceptable error limit. For each response, the mean, standard deviation, and 0.99 percentile are repeatedly estimated, which allows confidence statements to be made for each parameter estimated and for each method. Thus, the ability of several stochastic methods to efficiently and accurately estimate density parameters is compared using four valid test cases. While all of the reliability methods used performed quite well, the new LHS module within NESSUS was found to have a lower estimation error than MC when both were used to estimate the mean, standard deviation, and 0.99 percentile of the four different stochastic responses. Also, LHS required fewer calculations than MC to obtain low-error answers with a high degree of confidence. It can therefore be stated that NESSUS is an important reliability tool that offers a variety of sound probabilistic methods, and the new LHS module is a valuable enhancement of the program.
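
    The comparison between Monte Carlo and Latin hypercube sampling for estimating response density parameters can be reproduced in miniature, independently of NESSUS. The sketch below uses SciPy's Latin hypercube sampler to estimate the mean, standard deviation, and 0.99 percentile of a simple nonlinear response; the toy response function and sample size are assumptions made for the example.

        import numpy as np
        from scipy.stats import norm, qmc

        def response(u):
            """Toy nonlinear response of two standard-normal design variables."""
            x = norm.ppf(u)                    # map uniform samples to standard normals
            return x[:, 0] ** 2 + 0.5 * x[:, 1]

        n = 2000
        rng = np.random.default_rng(0)
        u_mc = rng.random((n, 2))                          # plain Monte Carlo
        u_lhs = qmc.LatinHypercube(d=2, seed=0).random(n)  # Latin hypercube sample

        for name, u in [("MC ", u_mc), ("LHS", u_lhs)]:
            y = response(u)
            print(f"{name}: mean={y.mean():.3f}  std={y.std(ddof=1):.3f}  "
                  f"p99={np.quantile(y, 0.99):.3f}")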

  10. Combining Ratio Estimation for Low Density Parity Check (LDPC) Coding

    NASA Technical Reports Server (NTRS)

    Mahmoud, Saad; Hi, Jianjun

    2012-01-01

    The Low Density Parity Check (LDPC) code decoding algorithm makes use of a scaled received signal derived from maximizing the log-likelihood ratio of the received signal. The scaling factor (often called the combining ratio) in an AWGN channel is the ratio between the signal amplitude and the noise variance. Accurately estimating this ratio has been shown to provide as much as 0.6 dB of decoding performance gain. This presentation briefly describes three methods for estimating the combining ratio: a Pilot-Guided estimation method, a Blind estimation method, and a Simulation-Based Look-Up Table. The Pilot-Guided estimation method has shown that the maximum likelihood estimate of the signal amplitude is the mean inner product of the received sequence and the known sequence, the attached synchronization marker (ASM), and that the signal variance is the difference between the mean of the squared received sequence and the square of the signal amplitude. This method has the advantage of simplicity at the expense of latency, since several frames' worth of ASMs are required. The Blind estimation method's maximum likelihood estimator is the average of the product of the received signal with the hyperbolic tangent of the product of the combining ratio and the received signal. The root of this equation can be determined by an iterative binary search between 0 and 1 after normalizing the received sequence. This method has the benefit of requiring only one frame of data to estimate the combining ratio, which is good for faster-changing channels compared to the previous method; however, it is computationally expensive. The final method uses a look-up table based on prior simulation results to determine the signal amplitude and noise variance. In this method the received mean signal strength is controlled to a constant soft-decision value. The magnitude of the deviation is averaged over a predetermined number of samples. This value is referenced in a look-up table to determine the combining ratio that prior simulation associated with that average magnitude of deviation. This method is more complicated than the Pilot-Guided method due to the gain control circuitry, but does not have the real-time computational complexity of the Blind estimation method. Each of these methods can be used to provide an accurate estimate of the combining ratio, and the final selection of the estimation method depends on other design constraints.
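
    The pilot-guided and blind estimators can be sketched directly from the description above. The code below is an illustrative reconstruction under a BPSK/AWGN assumption, not the flight implementation: the frame sizes, noise level, and bisection tolerance are assumptions, and the bisection is carried out on the signal amplitude over (0, RMS) rather than on a normalized sequence over (0, 1), which is equivalent up to a scale factor.

        import numpy as np

        def pilot_guided(rx_asm, asm):
            """Amplitude = mean inner product with the known ASM symbols; noise variance =
            mean of the squared received sequence minus the squared amplitude."""
            amp = np.mean(rx_asm * asm)
            var = np.mean(rx_asm**2) - amp**2
            return amp / var                           # combining ratio

        def blind(rx, tol=1e-8):
            """Bisect the ML amplitude equation a = mean(y * tanh(a * y / var(a))) with
            var(a) = mean(y^2) - a^2, then return the combining ratio a / var(a)."""
            power = np.mean(rx**2)
            g = lambda a: np.mean(rx * np.tanh(a * rx / (power - a**2))) - a
            lo, hi = 1e-9, np.sqrt(power) - 1e-9
            while hi - lo > tol:
                mid = 0.5 * (lo + hi)
                lo, hi = (mid, hi) if g(mid) > 0 else (lo, mid)
            a = 0.5 * (lo + hi)
            return a / (power - a**2)

        rng = np.random.default_rng(1)
        amp, sigma = 1.0, 0.8                          # assumed BPSK amplitude and noise std
        asm = rng.choice([-1.0, 1.0], size=256)        # known attached sync marker symbols
        data = rng.choice([-1.0, 1.0], size=4096)      # unknown data symbols
        rx_asm = amp * asm + sigma * rng.standard_normal(asm.size)
        rx_data = amp * data + sigma * rng.standard_normal(data.size)

        print("true combining ratio :", amp / sigma**2)
        print("pilot-guided estimate:", round(float(pilot_guided(rx_asm, asm)), 3))
        print("blind estimate       :", round(float(blind(rx_data)), 3))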

  11. Effect of packing density on strain estimation by Fry method

    NASA Astrophysics Data System (ADS)

    Srivastava, Deepak; Ojha, Arun

    2015-04-01

    The Fry method is a graphical technique that uses the relative movement of material points, typically the grain centres or centroids, and yields the finite strain ellipse as the central vacancy of a point distribution. Application of the Fry method assumes an anticlustered and isotropic grain centre distribution in undistorted samples. This assumption is, however, difficult to test in practice. As an alternative, the sedimentological degree of sorting is routinely used as an approximation for the degree of clustering and anisotropy. The effect of sorting on the Fry method has already been explored by earlier workers. This study tests the effect of the tightness of packing, the packing density, which equals the ratio of the area occupied by all the grains to the total area of the sample. A practical advantage of using the degree of sorting or the packing density is that these parameters, unlike the degree of clustering or anisotropy, do not vary during a constant-volume homogeneous distortion. Using computer graphics simulations and programming, we approach the issue of packing density in four steps: (i) generation of several sets of random point distributions such that each set has the same degree of sorting but differs from the other sets with respect to the packing density, (ii) two-dimensional homogeneous distortion of each point set by various known strain ratios and orientations, (iii) estimation of strain in each distorted point set by the Fry method, and (iv) error estimation by comparing the known strain with that given by the Fry method. Both the absolute errors and the relative root mean squared errors give consistent results. For a given degree of sorting, the Fry method gives better results in samples having greater than 30% packing density. This is because the grain centre distributions show stronger clustering and a greater degree of anisotropy as the packing density decreases. Compared to the degree of sorting alone, a combination of the degree of sorting and the packing density is a more useful proxy for testing the degree of anisotropy and clustering in a point distribution.

  12. Hierarchical Multiscale Adaptive Variable Fidelity Wavelet-based Turbulence Modeling with Lagrangian Spatially Variable Thresholding

    NASA Astrophysics Data System (ADS)

    Nejadmalayeri, Alireza

    The current work develops a wavelet-based adaptive variable fidelity approach that integrates Wavelet-based Direct Numerical Simulation (WDNS), Coherent Vortex Simulations (CVS), and Stochastic Coherent Adaptive Large Eddy Simulations (SCALES). The proposed methodology employs the notion of spatially and temporally varying wavelet thresholding combined with hierarchical wavelet-based turbulence modeling. The transition between WDNS, CVS, and SCALES regimes is achieved through two-way physics-based feedback between the modeled SGS dissipation (or other dynamically important physical quantity) and the spatial resolution. The feedback is based on spatio-temporal variation of the wavelet threshold, where the thresholding level is adjusted on the fly depending on the deviation of the local significant SGS dissipation from the user-prescribed level. This strategy overcomes a major limitation of all previously existing wavelet-based multi-resolution schemes: the global thresholding criterion, which does not fully utilize the spatial/temporal intermittency of the turbulent flow. Hence, the aforementioned concept of physics-based spatially variable thresholding in the context of wavelet-based numerical techniques for solving PDEs is established. The procedure consists of tracking the wavelet thresholding factor within a Lagrangian frame by exploiting a Lagrangian Path-Line Diffusive Averaging approach based on either linear averaging along characteristics or direct solution of the evolution equation. This innovative technique represents a framework of continuously variable fidelity wavelet-based space/time/model-form adaptive multiscale methodology. This methodology has been tested and has provided very promising results on a benchmark with a time-varying user-prescribed level of SGS dissipation. In addition, a long-running effort to develop a novel parallel adaptive wavelet collocation method for the numerical solution of PDEs has been completed during the course of the current work. Scalability and speedup studies of this powerful parallel PDE solver are performed on various architectures. Furthermore, the Reynolds scaling of active spatial modes of both CVS and SCALES of linearly forced homogeneous turbulence at high Reynolds numbers is investigated for the first time. This computational complexity study, by demonstrating a very promising slope for the Reynolds scaling of SCALES even at a constant level of fidelity for SGS dissipation, supports the argument that SCALES, as a dynamically adaptive turbulence modeling technique, can offer a plethora of flexibilities in hierarchical multiscale space/time adaptive variable-fidelity simulations of high Reynolds number turbulent flows.

  13. Effect of Random Clustering on Surface Damage Density Estimates

    SciTech Connect

    Matthews, M J; Feit, M D

    2007-10-29

    Identification and spatial registration of laser-induced damage relative to incident fluence profiles is often required to characterize the damage properties of laser optics near damage threshold. Of particular interest in inertial confinement laser systems are large aperture beam damage tests (>1 cm²) where the number of initiated damage sites for φ > 14 J/cm² can approach 10⁵-10⁶, requiring automatic microscopy counting to locate and register individual damage sites. However, as was shown for the case of bacteria counting in biology decades ago, random overlapping or 'clumping' prevents accurate counting of Poisson-distributed objects at high densities, and must be accounted for if the underlying statistics are to be understood. In this work we analyze the effect of random clumping on damage initiation density estimates at fluences above damage threshold. The parameter ψ = aρ = ρ/ρ₀, where a = 1/ρ₀ is the mean damage site area and ρ is the mean number density, is used to characterize the onset of clumping, and approximations based on a simple model are used to derive an expression for clumped damage density vs. fluence and damage site size. The influence of the uncorrected ρ vs. φ curve on damage initiation probability predictions is also discussed.
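
    The onset of undercounting due to random overlap can be illustrated with a small Monte Carlo experiment: scatter damage sites of a fixed size in a test region and count how many merged clumps an automated counter would register. The sketch below is a schematic of the clumping effect only, not the paper's analytical model; the site diameter, region size, and density values are assumptions.

        import numpy as np
        from scipy.spatial import cKDTree
        from scipy.sparse import coo_matrix
        from scipy.sparse.csgraph import connected_components

        def counted_clumps(n_sites, box=1.0, site_diam=0.02, seed=0):
            """Scatter n_sites circular damage sites in a box-by-box region and count the
            merged clumps an automated counter would register (sites are merged when
            their centres are closer than one site diameter)."""
            rng = np.random.default_rng(seed)
            pts = rng.uniform(0.0, box, size=(n_sites, 2))
            pairs = np.array(sorted(cKDTree(pts).query_pairs(r=site_diam)), dtype=int).reshape(-1, 2)
            adj = coo_matrix((np.ones(len(pairs)), (pairs[:, 0], pairs[:, 1])),
                             shape=(n_sites, n_sites))
            n_clumps, _ = connected_components(adj, directed=False)
            return n_clumps

        for true_sites in (100, 500, 1000, 2000, 4000):
            print(f"true sites: {true_sites:5d}   counted clumps: {counted_clumps(true_sites):5d}")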

  14. Estimation of Volumetric Breast Density from Digital Mammograms

    NASA Astrophysics Data System (ADS)

    Alonzo-Proulx, Olivier

    Mammographic breast density (MBD) is a strong risk factor for developing breast cancer. MBD is typically estimated by manually selecting the area occupied by the dense tissue on a mammogram. There is interest in measuring the volume of dense tissue, or volumetric breast density (VBD), as it could potentially be a stronger risk factor. This dissertation presents and validates an algorithm to measure the VBD from digital mammograms. The algorithm is based on an empirical calibration of the mammography system, supplemented by physical modeling of x-ray imaging that includes the effects of beam polychromaticity, scattered radiation, anti-scatter grid and detector glare. It also includes a method to estimate the compressed breast thickness as a function of the compression force, and a method to estimate the thickness of the breast outside of the compressed region. The algorithm was tested on 26 simulated mammograms obtained from computed tomography images, themselves deformed to mimic the effects of compression. This allowed the determination of the baseline accuracy of the algorithm. The algorithm was also used on 55 087 clinical digital mammograms, which allowed for the determination of the general characteristics of VBD and breast volume, as well as their variation as a function of age and time. The algorithm was also validated against a set of 80 magnetic resonance images, and compared against the area method on 2688 images. A preliminary study comparing the association of breast cancer risk with VBD and MBD was also performed, indicating that VBD is a stronger risk factor. The algorithm was found to be accurate, generating quantitative density measurements rapidly and automatically. It can be extended to any digital mammography system, provided that the compression thickness of the breast can be determined accurately.

  15. New temporal filtering scheme to reduce delay in wavelet-based video coding.

    PubMed

    Seran, Vidhya; Kondi, Lisimachos P

    2007-12-01

    Scalability is an important desirable property of video codecs. Wavelet-based motion-compensated temporal filtering provides the most powerful scheme for scalable video coding and provides high compression efficiency that competes with the current state-of-the-art codecs. However, the delay introduced by the temporal filtering schemes is sometimes very high, which makes them unsuitable for many real-time applications. In this paper, we propose a new temporal filter set to minimize delay in 3-D wavelet-based video coding. The new filter set gives performance on par with existing longer filters. The length of the filter can vary from two to any number of frames depending on delay requirements. If the frames are processed as separate groups of frames (GOFs), the proposed filter set will not have any boundary effects at the GOF boundaries. Experimental results are presented and conclusions are drawn. PMID:18092592

  16. Wavelet-based surrogate time series for multiscale simulation of heterogeneous catalysis

    SciTech Connect

    Savara, Aditya Ashi; Daw, C. Stuart; Xiong, Qingang; Gur, Sourav; Danielson, Thomas L.; Hin, Celine N.; Pannala, Sreekanth; Frantziskonis, George N.

    2016-01-01

    We propose a wavelet-based scheme that encodes the essential dynamics of discrete microscale surface reactions in a form that can be coupled with continuum macroscale flow simulations with high computational efficiency. This makes it possible to simulate the dynamic behavior of reactor-scale heterogeneous catalysis without requiring detailed concurrent simulations at both the surface and continuum scales using different models. Our scheme is based on the application of wavelet-based surrogate time series that encodes the essential temporal and/or spatial fine-scale dynamics at the catalyst surface. The encoded dynamics are then used to generate statistically equivalent, randomized surrogate time series, which can be linked to the continuum scale simulation. As a result, we illustrate an application of this approach using two different kinetic Monte Carlo simulations with different characteristic behaviors typical for heterogeneous chemical reactions.

  17. Wavelet-based surrogate time series for multiscale simulation of heterogeneous catalysis

    DOE PAGESBeta

    Savara, Aditya Ashi; Daw, C. Stuart; Xiong, Qingang; Gur, Sourav; Danielson, Thomas L.; Hin, Celine N.; Pannala, Sreekanth; Frantziskonis, George N.

    2016-01-28

    We propose a wavelet-based scheme that encodes the essential dynamics of discrete microscale surface reactions in a form that can be coupled with continuum macroscale flow simulations with high computational efficiency. This makes it possible to simulate the dynamic behavior of reactor-scale heterogeneous catalysis without requiring detailed concurrent simulations at both the surface and continuum scales using different models. Our scheme is based on the application of wavelet-based surrogate time series that encodes the essential temporal and/or spatial fine-scale dynamics at the catalyst surface. The encoded dynamics are then used to generate statistically equivalent, randomized surrogate time series, which can be linked to the continuum scale simulation. As a result, we illustrate an application of this approach using two different kinetic Monte Carlo simulations with different characteristic behaviors typical for heterogeneous chemical reactions.

  18. Wavelet-Based Real-Time Diagnosis of Complex Systems

    NASA Technical Reports Server (NTRS)

    Gulati, Sandeep; Mackey, Ryan

    2003-01-01

    A new method of robust, autonomous real-time diagnosis of a time-varying complex system (e.g., a spacecraft, an advanced aircraft, or a process-control system) is presented here. It is based upon the characterization and comparison of (1) the execution of software, as reported by discrete data, and (2) data from sensors that monitor the physical state of the system, such as performance sensors or similar quantitative time-varying measurements. By taking account of the relationship between execution of, and the responses to, software commands, this method satisfies a key requirement for robust autonomous diagnosis, namely, ensuring that control is maintained and followed. Such monitoring of control software requires that estimates of the state of the system, as represented within the control software itself, are representative of the physical behavior of the system. In this method, data from sensors and discrete command data are analyzed simultaneously and compared to determine their correlation. If the sensed physical state of the system differs from the software estimate (see figure) or if the system fails to perform a transition as commanded by software, or such a transition occurs without the associated command, the system has experienced a control fault. This method provides a means of detecting such divergent behavior and automatically generating an appropriate warning.

  19. The analysis of VF and VT with wavelet-based Tsallis information measure [rapid communication]

    NASA Astrophysics Data System (ADS)

    Huang, Hai; Xie, Hongbo; Wang, Zhizhong

    2005-03-01

    We undertake the study of ventricular fibrillation and ventricular tachycardia by recourse to wavelet-based multiresolution analysis. Compared with conventional Shannon entropy analysis of the signal, we propose a new application of Tsallis entropy analysis. It is shown that, as a criterion for discriminating between ventricular fibrillation and ventricular tachycardia, Tsallis' multiresolution entropy (MRET) provides better discrimination power than Shannon's multiresolution entropy (MRE).
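
    The wavelet-based Tsallis entropy can be computed from the relative energies of the wavelet subbands. The sketch below is a generic illustration using PyWavelets, not the authors' MRET implementation; the wavelet family, decomposition depth, entropic index q, and the quasi-periodic and broadband surrogate signals standing in for VT and VF are assumptions.

        import numpy as np
        import pywt

        def subband_energies(x, wavelet="db4", level=5):
            """Relative wavelet energy per decomposition level (a discrete 'probability')."""
            coeffs = pywt.wavedec(x, wavelet, level=level)
            e = np.array([np.sum(c**2) for c in coeffs])
            return e / e.sum()

        def shannon_entropy(p):
            p = p[p > 0]
            return float(-np.sum(p * np.log(p)))

        def tsallis_entropy(p, q=1.5):
            return float((1.0 - np.sum(p**q)) / (q - 1.0))

        rng = np.random.default_rng(0)
        t = np.linspace(0.0, 4.0, 4096)
        vt_like = np.sin(2 * np.pi * 5 * t)        # quasi-periodic surrogate for VT
        vf_like = rng.standard_normal(t.size)      # broadband surrogate for VF
        for name, sig in [("VT-like", vt_like), ("VF-like", vf_like)]:
            p = subband_energies(sig)
            print(f"{name}: Shannon={shannon_entropy(p):.3f}  Tsallis(q=1.5)={tsallis_entropy(p):.3f}")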

  20. Wavelet-based image compression using shuffling and bit plane correlation

    NASA Astrophysics Data System (ADS)

    Kim, Seungjong; Jeong, Jechang

    2000-12-01

    In this paper, we propose a wavelet-based image compression method using shuffling and bit plane correlation. The proposed method improves coding performance in two steps: (1) removing the sign bit plane by a shuffling process on the quantized coefficients, and (2) choosing the arithmetic coding context according to the maximum correlation direction. The experimental results are comparable to those of existing coders, and superior for some images with low correlation.

  1. Research of the wavelet based ECW remote sensing image compression technology

    NASA Astrophysics Data System (ADS)

    Zhang, Lan; Gu, Xingfa; Yu, Tao; Dong, Yang; Hu, Xinli; Xu, Hua

    2007-11-01

    This paper mainly studies wavelet-based ECW remote sensing image compression technology. Comparing it with the traditional compression technology JPEG and the newer wavelet-based compression technology JPEG2000, we find that the ER Mapper Compressed Wavelet (ECW) format has significant advantages when compressing very large remote sensing images. The use of the ECW SDK is also discussed, and it is shown to be an effective and fast way to compress China-Brazil Earth Resource Satellite (CBERS) images.

  2. Analysis of wavelet-based denoising techniques as applied to a radar signal pulse

    NASA Astrophysics Data System (ADS)

    Steinbrunner, Lori A.; Scarpino, Frank F.

    1999-09-01

    The purpose of the research is to study the effects of three wavelet-based denoising techniques on the structure of a radar signal pulse. The radar signal pulse is 50 microseconds in duration with 2.0 MHz of Linear Frequency Modulation on Pulse. The Signal-to-Noise Ratio of the signal is fixed at 0.7. The comparison is accomplished in the time domain and the FFT domain. In addition, the output from an FM Demodulator is examined. The comparisons are performed based upon MSE calculations and a visual inspection of the resulting signals. A comparison between the results outlined above and an ideal bandpass filter is also performed. A final comparison is discussed which compares the wavelet-based results outlined above and the results obtained from a bandpass filter that is offset in center frequency. The wavelet-based techniques can be shown to provide an advantage in visually detecting the radar signal pulse in low SNR environments over the results obtained from a bandpass filter approach in which the ideal filter characteristics are not known. All work is accomplished in MATLAB.

  3. Wavelet-based nearest-regularized subspace for noise-robust hyperspectral image classification

    NASA Astrophysics Data System (ADS)

    Li, Wei; Liu, Kui; Su, Hongjun

    2014-01-01

    A wavelet-based nearest-regularized-subspace classifier is proposed for noise-robust hyperspectral image (HSI) classification. The nearest-regularized subspace, coupling the nearest-subspace classification with a distance-weighted Tikhonov regularization, was designed to consider only the original spectral bands. Recent research has found that the multiscale wavelet features [e.g., extracted by redundant discrete wavelet transformation (RDWT)] of each hyperspectral pixel are potentially very useful and less sensitive to noise. An integration of wavelet-based features and the nearest-regularized-subspace classifier to improve the classification performance in noisy environments is proposed. Specifically, the wealth of noise-robust features provided by RDWT based on the hyperspectral spectrum is employed in a decision-fusion system or as preprocessing for the nearest-regularized-subspace (NRS) classifier. Improved performance of the proposed method over conventional approaches, such as the support vector machine, is shown by testing on several HSIs. For example, the NRS classifier achieved an accuracy of 65.38% for the AVIRIS Indian Pines data with 75 training samples per class under noisy conditions (signal-to-noise ratio = 36.87 dB), while the wavelet-based classifier obtained an accuracy of 71.60%, an improvement of approximately 6%.

  4. Wavelet-based stereo images reconstruction using depth images

    NASA Astrophysics Data System (ADS)

    Jovanov, Ljubomir; Pižurica, Aleksandra; Philips, Wilfried

    2007-09-01

    It is believed by many that three-dimensional (3D) television will be the next logical development toward a more natural and vivid home entertainment experience. While the classical 3D approach requires the transmission of two video streams, one for each view, 3D TV systems based on depth-image-based rendering (DIBR) require a single stream of monoscopic images and a second stream of associated images, usually termed depth images or depth maps, that contain per-pixel depth information. A depth map is a two-dimensional function that contains information about the distance from the camera to a certain point of the object as a function of the image coordinates. By using this depth information and the original image it is possible to reconstruct a virtual image of a nearby viewpoint by projecting the pixels of the available image to their locations in 3D space and finding their position in the desired view plane. One of the most significant advantages of DIBR is that depth maps can be coded more efficiently than two streams corresponding to the left and right views of the scene, thereby reducing the bandwidth required for transmission, which makes it possible to reuse existing transmission channels for the transmission of 3D TV. This technique can also be applied to other 3D technologies such as multimedia systems. In this paper we propose an advanced wavelet-domain scheme for the reconstruction of stereoscopic images, which solves some of the shortcomings of existing methods. We perform the wavelet transform of both the luminance and depth images in order to obtain significant geometric features, which enable a more sensible reconstruction of the virtual view. Motion estimation employed in our approach uses a Markov random field smoothness prior for regularization of the estimated motion field. The evaluation of the proposed reconstruction method is done on two video sequences which are typically used for comparison of stereo reconstruction algorithms. The results demonstrate advantages of the proposed approach with respect to the state-of-the-art methods, in terms of both objective and subjective performance measures.

  5. Numerical Modeling of Global Atmospheric Chemical Transport with Wavelet-based Adaptive Mesh Refinement

    NASA Astrophysics Data System (ADS)

    Rastigejev, Y.; Semakin, A. N.

    2012-12-01

    In this work we present a multilevel Wavelet-based Adaptive Mesh Refinement (WAMR) method for numerical modeling of global atmospheric chemical transport problems. An accurate numerical simulation of such problems presents an enormous challenge. Atmospheric Chemical Transport Models (CTMs) combine chemical reactions with meteorologically predicted atmospheric advection and turbulent mixing. The resulting system of multi-scale advection-reaction-diffusion equations is extremely stiff, nonlinear and involves a large number of chemically interacting species. As a consequence, the need for enormous computational resources for solving these equations imposes severe limitations on the spatial resolution of the CTMs implemented on uniform or quasi-uniform grids. In turn, this relatively crude spatial resolution results in significant numerical diffusion introduced into the system. This numerical diffusion is shown to noticeably distort the pollutant mixing and transport dynamics for typically used grid resolutions. The WAMR method for numerical modeling of atmospheric chemical evolution equations developed in this work provides a significant reduction in the computational cost without compromising numerical accuracy, and therefore addresses the numerical difficulties described above. The WAMR method introduces a fine grid in the regions where sharp transitions occur and a coarser grid in the regions of smooth solution behavior. Therefore WAMR produces much more accurate solutions than conventional numerical methods implemented on uniform or quasi-uniform grids. The algorithm allows one to provide error estimates of the solution that are used in conjunction with appropriate threshold criteria to adapt the non-uniform grid. The method has been tested on a variety of problems including numerical simulation of traveling pollution plumes. It was shown that pollution plumes in the remote troposphere can propagate as well-defined layered structures for two weeks or more as they circle the globe. Recently, it was demonstrated that present global CTMs implemented on quasi-uniform grids are incapable of reproducing these layered structures due to high numerical plume dilution caused by numerical diffusion combined with the non-uniformity of atmospheric flow. On the contrary, the adaptive wavelet technique is shown to produce highly accurate numerical solutions at a relatively low computational cost. It is demonstrated that the developed WAMR method has significant advantages over conventional non-adaptive computational techniques in terms of accuracy and computational cost for numerical calculations of atmospheric chemical transport. The simulations show the excellent ability of the algorithm to adapt the computational grid to a solution containing different scales at different spatial locations, so as to produce accurate results at a relatively low computational cost. This work is supported by a grant from the National Science Foundation under Award No. HRD-1036563.

  6. Atmospheric turbulence mitigation using complex wavelet-based fusion.

    PubMed

    Anantrasirichai, Nantheera; Achim, Alin; Kingsbury, Nick G; Bull, David R

    2013-06-01

    Restoring a scene distorted by atmospheric turbulence is a challenging problem in video surveillance. The effect, caused by random, spatially varying, perturbations, makes a model-based solution difficult and in most cases, impractical. In this paper, we propose a novel method for mitigating the effects of atmospheric distortion on observed images, particularly airborne turbulence which can severely degrade a region of interest (ROI). In order to extract accurate detail about objects behind the distorting layer, a simple and efficient frame selection method is proposed to select informative ROIs only from good-quality frames. The ROIs in each frame are then registered to further reduce offsets and distortions. We solve the space-varying distortion problem using region-level fusion based on the dual tree complex wavelet transform. Finally, contrast enhancement is applied. We further propose a learning-based metric specifically for image quality assessment in the presence of atmospheric distortion. This is capable of estimating quality in both full- and no-reference scenarios. The proposed method is shown to significantly outperform existing methods, providing enhanced situational awareness in a range of surveillance scenarios. PMID:23475359

  7. Wavelet-based coherence measures of global seismic noise properties

    NASA Astrophysics Data System (ADS)

    Lyubushin, A. A.

    2015-04-01

    The coherent behavior of four parameters characterizing the global field of low-frequency (periods from 2 to 500 min) seismic noise is studied. These parameters include the generalized Hurst exponent, the multifractal singularity spectrum support width, the normalized entropy of variance, and kurtosis. The analysis is based on data from 229 broadband stations of the GSN, GEOSCOPE, and GEOFON networks for a 17-year period from the beginning of 1997 to the end of 2013. The entire set of stations is subdivided into eight groups, which, taken together, provide full coverage of the Earth. The daily median values of the studied noise parameters are calculated in each group. This procedure yields four 8-dimensional time series with a time step of 1 day and a length of 6209 samples in each scalar component. For each of the four 8-dimensional time series, a multiple correlation measure is estimated, based on computing robust canonical correlations of the Haar wavelet coefficients at the first detail level within a moving time window of length 365 days. These correlation measures for each noise property show a substantial increase beginning in 2007-2008 that continued until the end of 2013. Taking into account the well-known phenomenon of noise correlation increasing before catastrophes, this increase in seismic noise synchronization is interpreted as an indicator of the activation of the strongest earthquakes (magnitudes of at least 8.5) observed since the Sumatra mega-earthquake of 26 Dec 2004. This synchronization continued growing up to the end of the studied period (2013), which can be interpreted as a probable precursor of a further increase in the intensity of the strongest earthquakes all over the world.

  8. Matrix Methods for Estimating the Coherence Functions from Estimates of the Cross-Spectral Density Matrix

    DOE PAGESBeta

    Smallwood, D. O.

    1996-01-01

    It is shown that the usual method for estimating the coherence functions (ordinary, partial, and multiple) for a general multiple-input/multiple-output problem can be expressed as a modified form of Cholesky decomposition of the cross-spectral density matrix of the input and output records. The results can be equivalently obtained using singular value decomposition (SVD) of the cross-spectral density matrix. Using SVD suggests a new form of fractional coherence. The formulation as an SVD problem also suggests a way to order the inputs when a natural physical order of the inputs is absent.
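
    For the single-input/single-output case, the ordinary coherence reduces to |Gxy|^2/(Gxx*Gyy), which the matrix formulation generalizes. The sketch below is a minimal numerical check using SciPy's Welch-based spectral estimators on a synthetic input/output pair, not the paper's Cholesky/SVD machinery; the signal model, filter, and window length are assumptions.

        import numpy as np
        from scipy import signal

        fs = 1000.0
        rng = np.random.default_rng(0)
        x = rng.standard_normal(20000)                         # input record
        b, a = signal.butter(4, 0.2)                           # output = filtered input + noise
        y = signal.lfilter(b, a, x) + 0.5 * rng.standard_normal(x.size)

        f, gxx = signal.welch(x, fs=fs, nperseg=1024)
        _, gyy = signal.welch(y, fs=fs, nperseg=1024)
        _, gxy = signal.csd(x, y, fs=fs, nperseg=1024)
        coh_manual = np.abs(gxy) ** 2 / (gxx * gyy)            # ordinary coherence from CSD entries
        _, coh_scipy = signal.coherence(x, y, fs=fs, nperseg=1024)

        print("max |difference| vs scipy.signal.coherence:",
              float(np.max(np.abs(coh_manual - coh_scipy))))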

  9. Change-in-ratio density estimator for feral pigs is less biased than closed mark-recapture estimates

    USGS Publications Warehouse

    Hanson, L.B.; Grand, J.B.; Mitchell, M.S.; Jolley, D.B.; Sparklin, B.D.; Ditchkoff, S.S.

    2008-01-01

    Closed-population capture-mark-recapture (CMR) methods can produce biased density estimates for species with low or heterogeneous detection probabilities. In an attempt to address such biases, we developed a density-estimation method based on the change in ratio (CIR) of survival between two populations where survival, calculated using an open-population CMR model, is known to differ. We used our method to estimate density for a feral pig (Sus scrofa) population on Fort Benning, Georgia, USA. To assess its validity, we compared it to an estimate of the minimum density of pigs known to be alive and two estimates based on closed-population CMR models. Comparison of the density estimates revealed that the CIR estimator produced a density estimate with low precision that was reasonable with respect to minimum known density. By contrast, density point estimates using the closed-population CMR models were less than the minimum known density, consistent with biases created by low and heterogeneous capture probabilities for species like feral pigs that may occur in low density or are difficult to capture. Our CIR density estimator may be useful for tracking broad-scale, long-term changes in species, such as large cats, for which closed CMR models are unlikely to work. © CSIRO 2008.

  10. Estimating tropical-forest density profiles from multibaseline interferometric SAR

    NASA Technical Reports Server (NTRS)

    Treuhaft, Robert; Chapman, Bruce; dos Santos, Joao Roberto; Dutra, Luciano; Goncalves, Fabio; da Costa Freitas, Corina; Mura, Jose Claudio; de Alencastro Graca, Paulo Mauricio

    2006-01-01

    Vertical profiles of forest density are potentially robust indicators of forest biomass, fire susceptibility and ecosystem function. Tropical forests, which are among the most dense and complicated targets for remote sensing, contain about 45% of the world's biomass. Remote sensing of tropical forest structure is therefore an important component of global biomass and carbon monitoring. This paper shows preliminary results of a multibaseline interferometric SAR (InSAR) experiment over primary, secondary, and selectively logged forests at La Selva Biological Station in Costa Rica. The profile shown results from inverse Fourier transforming 8 of the 18 baselines acquired. A profile is shown compared to lidar and field measurements. Results are highly preliminary and for qualitative assessment only. Parameter estimation will eventually replace Fourier inversion as the means of producing profiles.

  11. An Adaptive Background Subtraction Method Based on Kernel Density Estimation

    PubMed Central

    Lee, Jeisung; Park, Mignon

    2012-01-01

    In this paper, a pixel-based background modeling method, which uses nonparametric kernel density estimation, is proposed. To reduce the burden of image storage, we modify the original KDE method by using the first frame to initialize it and update it subsequently at every frame by controlling the learning rate according to the situations. We apply an adaptive threshold method based on image changes to effectively subtract the dynamic backgrounds. The devised scheme allows the proposed method to automatically adapt to various environments and effectively extract the foreground. The method presented here exhibits good performance and is suitable for dynamic background environments. The algorithm is tested on various video sequences and compared with other state-of-the-art background subtraction methods so as to verify its performance.
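
    A per-pixel Gaussian kernel density background model of the kind described above can be sketched in a few lines. The code below is a simplified illustration with grayscale frames, a fixed kernel bandwidth, and a single global threshold, not the authors' adaptive-learning-rate implementation; the buffer size and parameter values are assumptions.

        import numpy as np

        def foreground_mask(frame, samples, bandwidth=10.0, threshold=1e-3):
            """Per-pixel Gaussian KDE over a buffer of recent frames; a pixel is foreground
            when its estimated background density falls below the threshold.

            frame:   (H, W) grayscale image
            samples: (N, H, W) buffer of recent background frames
            """
            z = (samples - frame[None, :, :]) / bandwidth
            density = np.mean(np.exp(-0.5 * z**2), axis=0) / (bandwidth * np.sqrt(2.0 * np.pi))
            return density < threshold

        rng = np.random.default_rng(0)
        h, w, n = 48, 64, 20
        buffer = 100.0 + 5.0 * rng.standard_normal((n, h, w))  # noisy static background
        frame = 100.0 + 5.0 * rng.standard_normal((h, w))
        frame[20:30, 30:40] += 80.0                            # synthetic moving object
        mask = foreground_mask(frame, buffer)
        print("foreground pixels detected:", int(mask.sum()), "of", h * w)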

  12. The effectiveness of tape playbacks in estimating Black Rail densities

    USGS Publications Warehouse

    Legare, M.; Eddleman, W.R.; Buckley, P.A.; Kelly, C.

    1999-01-01

    Tape playback is often the only efficient technique to survey for secretive birds. We measured the vocal responses and movements of radio-tagged black rails (Laterallus jamaicensis; 26 M, 17 F) to playback of vocalizations at 2 sites in Florida during the breeding seasons of 1992-95. We used coefficients from logistic regression equations to model the probability of a response conditional on the birds' sex, nesting status, distance to playback source, and time of survey. With a probability of 0.811, nonnesting male black rails were most likely to respond to playback, while nesting females were the least likely to respond (probability = 0.189). We used linear regression to determine daily, monthly and annual variation in response from weekly playback surveys along a fixed route during the breeding seasons of 1993-95. Significant sources of variation in the regression model were month (F3,48 = 3.89, P = 0.014), year (F2,48 = 9.37, P < 0.001), temperature (F1,48 = 5.44, P = 0.024), and month × year (F5,48 = 2.69, P = 0.031). The model was highly significant (P < 0.001) and explained 54% of the variation in mean response per survey period (r2 = 0.54). We combined response probability data from radio-tagged black rails with playback survey route data to provide a density estimate of 0.25 birds/ha for the St. Johns National Wildlife Refuge. The relation between the number of black rails heard during playback surveys and the actual number present was influenced by a number of variables. We recommend caution when making density estimates from tape playback surveys.

  13. Cortical cell and neuron density estimates in one chimpanzee hemisphere.

    PubMed

    Collins, Christine E; Turner, Emily C; Sawyer, Eva Kille; Reed, Jamie L; Young, Nicole A; Flaherty, David K; Kaas, Jon H

    2016-01-19

    The density of cells and neurons in the neocortex of many mammals varies across cortical areas and regions. This variability is, perhaps, most pronounced in primates. Nonuniformity in the composition of cortex suggests regions of the cortex have different specializations. Specifically, regions with densely packed neurons contain smaller neurons that are activated by relatively few inputs, thereby preserving information, whereas regions that are less densely packed have larger neurons that have more integrative functions. Here we present the numbers of cells and neurons for 742 discrete locations across the neocortex in a chimpanzee. Using isotropic fractionation and flow fractionation methods for cell and neuron counts, we estimate that neocortex of one hemisphere contains 9.5 billion cells and 3.7 billion neurons. Primary visual cortex occupies 35 cm² of surface, 10% of the total, and contains 737 million densely packed neurons, 20% of the total neurons contained within the hemisphere. Other areas of high neuron packing include secondary visual areas, somatosensory cortex, and prefrontal granular cortex. Areas of low levels of neuron packing density include motor and premotor cortex. These values reflect those obtained from more limited samples of cortex in humans and other primates. PMID:26729880

  14. Estimating Foreign-Object-Debris Density from Photogrammetry Data

    NASA Technical Reports Server (NTRS)

    Long, Jason; Metzger, Philip; Lane, John

    2013-01-01

    Within the first few seconds after launch of STS-124, debris traveling vertically near the vehicle was captured on two 16-mm film cameras surrounding the launch pad. One particular piece of debris caught the attention of engineers investigating the release of the flame trench fire bricks. The question to be answered was if the debris was a fire brick, and if it represented the first bricks that were ejected from the flame trench wall, or was the object one of the pieces of debris normally ejected from the vehicle during launch. If it was typical launch debris, such as SRB throat plug foam, why was it traveling vertically and parallel to the vehicle during launch, instead of following its normal trajectory, flying horizontally toward the north perimeter fence? By utilizing the Runge-Kutta integration method for velocity and the Verlet integration method for position, a method that suppresses trajectory computational instabilities due to noisy position data was obtained. This combination of integration methods provides a means to extract the best estimate of drag force and drag coefficient under the non-ideal conditions of limited position data. This integration strategy leads immediately to the best possible estimate of object density, within the constraints of unknown particle shape. These types of calculations do not exist in readily available off-the-shelf simulation software, especially where photogrammetry data is needed as an input.
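
    The position-Verlet update that underlies the trajectory reconstruction can be written compactly. The sketch below integrates a vertically launched object with quadratic drag, as a generic illustration rather than the STS-124 analysis; the lumped drag parameter, initial velocity, and time step are assumptions.

        import numpy as np

        def simulate_debris(z0, v0, drag_per_m=0.02, dt=1e-3, t_end=5.0, g=9.81):
            """Position-Verlet integration of vertical motion with quadratic drag.

            drag_per_m: lumped drag parameter (rho * Cd * A / (2 * m)), units 1/m (assumed)."""
            n = int(t_end / dt)
            z = np.empty(n)
            z[0] = z0
            z[1] = z0 + v0 * dt                          # seed the two-step recursion
            for i in range(1, n - 1):
                v = (z[i] - z[i - 1]) / dt               # backward-difference velocity
                a = -g - drag_per_m * v * abs(v)         # gravity plus quadratic drag
                z[i + 1] = 2.0 * z[i] - z[i - 1] + a * dt**2
            return np.arange(n) * dt, z

        t, z = simulate_debris(z0=0.0, v0=40.0)
        print(f"apex height: {z.max():.1f} m at t = {t[z.argmax()]:.2f} s")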

  15. ICER-3D: A Progressive Wavelet-Based Compressor for Hyperspectral Images

    NASA Technical Reports Server (NTRS)

    Kiely, A.; Klimesh, M.; Xie, H.; Aranki, N.

    2005-01-01

    ICER-3D is a progressive, wavelet-based compressor for hyperspectral images. ICER-3D is derived from the ICER image compressor. ICER-3D can provide lossless and lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The three-dimensional wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of hyperspectral data sets, while facilitating elimination of spectral ringing artifacts. Correlation is further exploited by a context modeler that effectively exploits spectral dependencies in the wavelet-transformed hyperspectral data. Performance results illustrating the benefits of these features are presented.

  16. Optimal block boundary pre/postfiltering for wavelet-based image and video compression.

    PubMed

    Liang, Jie; Tu, Chengjie; Tran, Trac D

    2005-12-01

    This paper presents a pre/postfiltering framework to reduce the reconstruction errors near block boundaries in wavelet-based image and video compression. Two algorithms are developed to obtain the optimal filter, based on boundary filter bank and polyphase structure, respectively. A low-complexity structure is employed to approximate the optimal solution. Performances of the proposed method in the removal of JPEG 2000 tiling artifact and the jittering artifact of three-dimensional wavelet video coding are reported. Comparisons with other methods demonstrate the advantages of our pre/postfiltering framework. PMID:16370467

  17. Wavelet-based speckle noise reduction in ultrasound B-scan images.

    PubMed

    Rakotomamonjy, A; Deforge, P; Marché, P

    2000-04-01

    Speckle noise is known to be signal-dependent in ultrasound imaging. Hence, separating noise from signal becomes a difficult task. This paper describes a wavelet-based method for reducing speckle noise. We derive from the model of the displayed ultrasound image the optimal wavelet-domain filter in the least mean-square sense. Simulations on synthetic data have been carried out in order to assess the performance of the proposed filter with regards to the classical wavelet shrinkage scheme, while phantom and tissue images have been used for testing it on real data. The results show that the filter effectively reduces the speckle noise while preserving resolvable details. PMID:11061460

  18. Iterated denoising and fusion to improve the image quality of wavelet-based coding

    NASA Astrophysics Data System (ADS)

    Song, Beibei

    2011-06-01

    An iterated denoising and fusion method is presented to improve the image quality of wavelet-based coding. Firstly, iterated image denoising is used to reduce ringing and staircase noise along curving edges and improve edge regularity. Then, we adopt wavelet fusion method to enhance image edges, protect non-edge regions and decrease blurring artifacts during the process of denoising. Experimental results have shown that the proposed scheme is capable of improving both the subjective and the objective performance of wavelet decoders, such as JPEG2000 and SPIHT.

  19. ICER-3D: A Progressive Wavelet-Based Compressor for Hyperspectral Images

    NASA Astrophysics Data System (ADS)

    Kiely, A.; Klimesh, M.; Xie, H.; Aranki, N.

    2006-02-01

    ICER-3D is a progressive, wavelet-based compressor for hyperspectral images. ICER-3D is derived from the ICER image compressor. ICER-3D can provide lossless and lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The three-dimensional wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of hyperspectral data sets, while facilitating elimination of spectral ringing artifacts. Correlation is further exploited by a context modeler that effectively exploits spectral dependencies in the wavelet-transformed hyperspectral data. Performance results illustrating the benefits of these features are presented.

  20. A novel 3D wavelet based filter for visualizing features in noisy biological data

    SciTech Connect

    Moss, W C; Haase, S; Lyle, J M; Agard, D A; Sedat, J W

    2005-01-05

    We have developed a 3D wavelet-based filter for visualizing structural features in volumetric data. The only variable parameter is a characteristic linear size of the feature of interest. The filtered output contains only those regions that are correlated with the characteristic size, thus denoising the image. We demonstrate the use of the filter by applying it to 3D data from a variety of electron microscopy samples including low contrast vitreous ice cryogenic preparations, as well as 3D optical microscopy specimens.

  1. Evaluation of Effectiveness of Wavelet Based Denoising Schemes Using ANN and SVM for Bearing Condition Classification

    PubMed Central

    Vijay, G. S.; Kumar, H. S.; Srinivasa Pai, P.; Sriram, N. S.; Rao, Raj B. K. N.

    2012-01-01

    Wavelet-based denoising has proven its ability to denoise bearing vibration signals by improving the signal-to-noise ratio (SNR) and reducing the root-mean-square error (RMSE). In this paper, seven wavelet-based denoising schemes are evaluated on the basis of the classification performance of an Artificial Neural Network (ANN) and a Support Vector Machine (SVM) for bearing condition classification. The work consists of two parts. In the first part, a synthetic signal simulating a defective bearing vibration signal with Gaussian noise was subjected to the denoising schemes, and the best scheme according to the SNR and the RMSE was identified. In the second part, vibration signals collected from a customized Rolling Element Bearing (REB) test rig for four bearing conditions were subjected to the same schemes. Several time- and frequency-domain features were extracted from the denoised signals, from which a few sensitive features were selected using Fisher's Criterion (FC). The extracted features were used to train and test the ANN and the SVM. The best denoising scheme identified from the classification performances of the ANN and the SVM was the same as the one obtained using the synthetic signal. PMID:23213323
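
    When a clean reference is available, as in the synthetic part of the study above, the SNR and RMSE ranking criteria reduce to two short functions; a minimal sketch (names are illustrative):

        import numpy as np

        def snr_db(clean, denoised):
            """Signal-to-noise ratio of the denoised signal, in dB."""
            noise = np.asarray(clean) - np.asarray(denoised)
            return 10.0 * np.log10(np.sum(np.asarray(clean) ** 2) / np.sum(noise ** 2))

        def rmse(clean, denoised):
            """Root-mean-square error between the clean and denoised signals."""
            return np.sqrt(np.mean((np.asarray(clean) - np.asarray(denoised)) ** 2))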

  2. Wavelet-based multiscale anisotropic diffusion for speckle reduction and edge enhancement

    NASA Astrophysics Data System (ADS)

    Wang, Yi; Niu, Ruiqing; Wu, Ke; Yu, Xin

    2009-10-01

    In order to improve the signal-to-noise ratio (SNR) and image quality, this paper introduces a wavelet-based multiscale anisotropic diffusion algorithm to remove speckle noise and enhance edges. In our algorithm, we use the wavelet transform to construct a linear scale-space for the speckle image. Owing to the smoothing property of the scaling function, the wavelet-based multiscale representation of the speckle image is much more stationary than the raw speckle image. Noise is mostly located in the finest scale and tends to decrease as the scale increases. Furthermore, a robust speckle reducing anisotropic diffusion (SRAD) is proposed, and the improved SRAD is performed on the stationary scale-space rather than on the raw speckle image domain. Qualitative experiments based on a speckled synthetic aperture radar (SAR) image show the elegant edge-preserving filtering characteristics of the method compared with traditional adaptive filters. Quantitative analyses, based on first-order statistics and the Equivalent Number of Looks, confirm the validity and effectiveness of the proposed algorithm.

  3. Wavelet-Based Method for Instability Analysis in Boiling Water Reactors

    SciTech Connect

    Espinosa-Paredes, Gilberto; Prieto-Guerrero, Alfonso; Nunez-Carrera, Alejandro; Amador-Garcia, Rodolfo

    2005-09-15

    This paper introduces a wavelet-based method to analyze instability events in a boiling water reactor (BWR) during transient phenomena. The methodology to analyze BWR signals includes the following: (a) the short-time Fourier transform (STFT) analysis, (b) decomposition using the continuous wavelet transform (CWT), and (c) application of multiresolution analysis (MRA) using discrete wavelet transform (DWT). STFT analysis permits the study, in time, of the spectral content of analyzed signals. The CWT provides information about ruptures, discontinuities, and fractal behavior. To detect these important features in the signal, a mother wavelet has to be chosen and applied at several scales to obtain optimum results. MRA allows fast implementation of the DWT. Features like important frequencies, discontinuities, and transients can be detected with analysis at different levels of detail coefficients. The STFT was used to provide a comparison between a classic method and the wavelet-based method. The damping ratio, which is an important stability parameter, was calculated as a function of time. The transient behavior can be detected by analyzing the maximum contained in detail coefficients at different levels in the signal decomposition. This method allows analysis of both stationary signals and highly nonstationary signals in the timescale plane. This methodology has been tested with the benchmark power instability event of Laguna Verde nuclear power plant (NPP) Unit 1, which is a BWR-5 NPP.
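
    A minimal sketch of the three analysis steps (STFT, CWT, and DWT-based multiresolution analysis) applied to a one-dimensional reactor signal, assuming SciPy and PyWavelets; the window length, scales, and wavelet choices are assumptions, not the paper's settings:

        import numpy as np
        import pywt
        from scipy.signal import stft

        def analyze_bwr_signal(x, fs, wavelet="db4", level=5):
            """Illustrative time-frequency and multiresolution analysis of a signal."""
            x = np.asarray(x, dtype=float)
            # (a) Short-time Fourier transform: spectral content as a function of time.
            f, t, Zxx = stft(x, fs=fs, nperseg=256)
            # (b) Continuous wavelet transform (Morlet) for ruptures and discontinuities.
            cwt_coeffs, _ = pywt.cwt(x, scales=np.arange(1, 64), wavelet="morl")
            # (c) Multiresolution analysis via the DWT: maxima of the detail
            #     coefficients at each level flag transients in the signal.
            details = pywt.wavedec(x, wavelet, level=level)[1:]
            detail_maxima = [float(np.max(np.abs(d))) for d in details]
            return (f, t, np.abs(Zxx)), cwt_coeffs, detail_maxima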

  4. Creating wavelet-based models for real-time synthesis of perceptually convincing environmental sounds

    NASA Astrophysics Data System (ADS)

    Miner, Nadine Elizabeth

    1998-09-01

    This dissertation presents a new wavelet-based method for synthesizing perceptually convincing, dynamic sounds using parameterized sound models. The sound synthesis method is applicable to a variety of applications including Virtual Reality (VR), multi-media, entertainment, and the World Wide Web (WWW). A unique contribution of this research is the modeling of the stochastic, or non-pitched, sound components. This stochastic-based modeling approach leads to perceptually compelling sound synthesis. Two preliminary studies conducted provide data on multi-sensory interaction and audio-visual synchronization timing. These results contributed to the design of the new sound synthesis method. The method uses a four-phase development process, including analysis, parameterization, synthesis and validation, to create the wavelet-based sound models. A patent is pending for this dynamic sound synthesis method, which provides perceptually-realistic, real-time sound generation. This dissertation also presents a battery of perceptual experiments developed to verify the sound synthesis results. These experiments are applicable for validation of any sound synthesis technique.

  5. Evaluation of effectiveness of wavelet based denoising schemes using ANN and SVM for bearing condition classification.

    PubMed

    Vijay, G S; Kumar, H S; Srinivasa Pai, P; Sriram, N S; Rao, Raj B K N

    2012-01-01

    Wavelet-based denoising has proven its ability to denoise bearing vibration signals by improving the signal-to-noise ratio (SNR) and reducing the root-mean-square error (RMSE). In this paper, seven wavelet-based denoising schemes are evaluated on the basis of the classification performance of an Artificial Neural Network (ANN) and a Support Vector Machine (SVM) for bearing condition classification. The work consists of two parts. In the first part, a synthetic signal simulating a defective bearing vibration signal with Gaussian noise was subjected to the denoising schemes, and the best scheme according to the SNR and the RMSE was identified. In the second part, vibration signals collected from a customized Rolling Element Bearing (REB) test rig for four bearing conditions were subjected to the same schemes. Several time- and frequency-domain features were extracted from the denoised signals, from which a few sensitive features were selected using Fisher's Criterion (FC). The extracted features were used to train and test the ANN and the SVM. The best denoising scheme identified from the classification performances of the ANN and the SVM was the same as the one obtained using the synthetic signal. PMID:23213323

  6. Estimation of density of mongooses with capture-recapture and distance sampling

    USGS Publications Warehouse

    Corn, J.L.; Conroy, M.J.

    1998-01-01

    We captured mongooses (Herpestes javanicus) in live traps arranged in trapping webs in Antigua, West Indies, and used capture-recapture and distance sampling to estimate density. Distance estimation and program DISTANCE were used to provide estimates of density from the trapping-web data. Mean density based on trapping webs was 9.5 mongooses/ha (range, 5.9-10.2/ha); estimates had coefficients of variation ranging from 29.82-31.58% (x̄ = 30.46%). Mark-recapture models were used to estimate abundance, which was converted to density using estimates of effective trap area. Tests of model assumptions provided by CAPTURE indicated pronounced heterogeneity in capture probabilities and some indication of behavioral response and variation over time. Mean estimated density was 1.80 mongooses/ha (range, 1.37-2.15/ha) with estimated coefficients of variation of 4.68-11.92% (x̄ = 7.46%). Estimates of density based on mark-recapture data depended heavily on assumptions about animal home ranges; variances of densities also may be underestimated, leading to unrealistically narrow confidence intervals. Estimates based on trap webs require fewer assumptions, and estimated variances may be a more realistic representation of sampling variation. Because trap webs are established easily and provide adequate data for estimation in a few sample occasions, the method should be efficient and reliable for estimating densities of mongooses.
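
    The mark-recapture route to density described above amounts to converting an abundance estimate into animals per unit effective trap area; a simplified sketch that uses the Chapman-modified Lincoln-Petersen estimator as a stand-in for the CAPTURE models (all names and values are illustrative):

        def lincoln_petersen(marked, caught_second, recaptured):
            """Chapman-modified Lincoln-Petersen abundance estimate (illustration only)."""
            return (marked + 1) * (caught_second + 1) / (recaptured + 1) - 1

        def density_from_abundance(n_hat, effective_area_ha):
            """Density as abundance divided by the effective trapped area (per ha)."""
            return n_hat / effective_area_ha

        # Example: 40 marked animals, 55 caught on a later occasion, 12 of them marked,
        # with an assumed effective trap area of 25 ha.
        n_hat = lincoln_petersen(40, 55, 12)
        print(density_from_abundance(n_hat, 25.0))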

  7. Nonparametric estimation of population density for line transect sampling using FOURIER series

    USGS Publications Warehouse

    Crain, B.R.; Burnham, K.P.; Anderson, D.R.; Lake, J.L.

    1979-01-01

    A nonparametric, robust density estimation method is explored for the analysis of right-angle distances from a transect line to the objects sighted. The method is based on the FOURIER series expansion of a probability density function over an interval. With only mild assumptions, a general population density estimator of wide applicability is obtained.
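
    The estimator referred to above expands the probability density of perpendicular distances on [0, w] in cosine terms and evaluates it at zero; a minimal sketch of that textbook form (in practice the number of terms m is chosen by a stopping rule):

        import numpy as np

        def fourier_f0(distances, w, m=4):
            """Fourier-series estimate of the detection density at zero distance, f(0)."""
            x = np.asarray(distances, dtype=float)
            n = x.size
            a = np.array([2.0 / (n * w) * np.sum(np.cos(k * np.pi * x / w))
                          for k in range(1, m + 1)])
            return 1.0 / w + a.sum()            # cos(0) = 1 for every term

        def line_transect_density(distances, w, total_line_length, m=4):
            """Population density estimate D = n * f(0) / (2 L)."""
            n = len(distances)
            return n * fourier_f0(distances, w, m) / (2.0 * total_line_length)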

  8. An Enhanced Resolution Quikscat Derived Antarctic Melt Record (1999-2009): Development and Evaluation of Wavelet-Based Methods

    NASA Astrophysics Data System (ADS)

    Steiner, N.; Tedesco, M.

    2011-12-01

    We present results concerning the spatio-temporal variability of melting over Antarctica at a spatial resolution of ~2.5 km, estimated from a spatially enhanced QuikSCAT dataset distributed by the NASA Scatterometer Climate Record Pathfinder (SCP) at Brigham Young University (Utah, USA). We report melting trends at both regional and continental scales as well as results over selected regions of Antarctica (e.g., the Peninsula). Estimates of the date of melt onset (MO) and melt duration (MD) are obtained from either a spatio-temporally dynamic method or a fixed-threshold method. The latter assumes that melting occurs when backscatter values decrease below the average wintertime value minus a fixed threshold of 3 dB. In the dynamic approach, a continuous wavelet transform is applied to the time series of seasonal backscatter at each pixel. As opposed to other studies reported in the literature making use of wavelet-based thresholds, no a priori information on the expected change in backscatter over a melt season is used in our approach. Measurement noise and short-duration, non-melting-related backscatter perturbations are isolated to fine dyadic scales, while sustained melt-related changes in backscatter produce wavelet coefficient maxima that increase in absolute magnitude and extend to larger dyadic scales. We compare the outputs of the above-mentioned algorithms with those obtained from the analysis of surface temperature (10 m) provided by automated weather stations (AWS - AMRC, SSEC, UW-Madison). We also compare the results derived from QuikSCAT with those obtained from passive microwave observations (SSM/I). We finally illustrate the linkages between large-scale atmospheric circulation patterns (e.g., the Southern Annular Mode, SAM, and the El Niño-Southern Oscillation, ENSO) and Antarctic melt extent and duration at the enhanced spatial resolution.
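
    The fixed-threshold variant described above reduces to comparing each backscatter sample against the winter mean; a minimal sketch (threshold and variable names are illustrative):

        import numpy as np

        def melt_onset_and_duration(backscatter_db, winter_mean_db, threshold_db=3.0):
            """Flag a day as melting when backscatter drops more than threshold_db
            below the winter mean; return the first melt-day index and the melt-day count."""
            sigma0 = np.asarray(backscatter_db, dtype=float)
            melting = sigma0 < (winter_mean_db - threshold_db)
            onset = int(np.argmax(melting)) if melting.any() else None
            duration = int(melting.sum())
            return onset, duration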

  9. On the analysis of wavelet-based approaches for print grain artifacts

    NASA Astrophysics Data System (ADS)

    Eid, Ahmed H.; Cooper, Brian E.; Rippetoe, Edward E.

    2013-01-01

    Grain is one of several attributes described in ISO/IEC TS 24790, a technical specification for the measurement of image quality for monochrome printed output. It defines grain as aperiodic fluctuations of lightness greater than 0.4 cycles per millimeter, a definition inherited from the latest official standard on printed image quality, ISO/IEC 13660. Since this definition places no bounds on the upper frequency range, higher-frequency fluctuations (such as those from the printer's halftone pattern) could contribute significantly to the measurement of grain artifacts. In a previous publication, we introduced a modification to the ISO/IEC 13660 grain measurement algorithm that includes a band-pass, wavelet-based, filtering step to limit the contribution of high-frequency fluctuations. This modification improves the algorithm's correlation with the subjective evaluation of experts who rated the severity of printed grain artifacts. Seeking to improve upon the grain algorithm in ISO/IEC 13660, the ISO/IEC TS 24790 committee evaluated several graininess metrics. This led to the selection of the above wavelet-based approach as the top candidate algorithm for inclusion in a future ISO/IEC standard. Our recent experimental results showed r² correlation of 0.9278 between the wavelet-based approach and the subjective evaluation conducted by the ISO committee members based upon 26 samples covering a variety of printed grain artifacts. On the other hand, our experiments on the same data set showed much lower correlation (r² = 0.3555) between the ISO/IEC 13660 approach and the same subjective evaluation of the ISO committee members. In addition, we introduce an alternative approach for measuring grain defects based on spatial frequency analysis of wavelet-filtered images. Our goal is to establish a link between the spatial-based grain (ISO/IEC TS 24790) approach and its equivalent frequency-based one in light of Parseval's theorem. Our experimental results showed r² correlation near 0.99 between the spatial and frequency-based approaches.

  10. Rigorous home range estimation with movement data: a new autocorrelated kernel density estimator.

    PubMed

    Fleming, C H; Fagan, W F; Mueller, T; Olson, K A; Leimgruber, P; Calabrese, J M

    2015-05-01

    Quantifying animals' home ranges is a key problem in ecology and has important conservation and wildlife management applications. Kernel density estimation (KDE) is a workhorse technique for range delineation problems that is both statistically efficient and nonparametric. KDE assumes that the data are independent and identically distributed (IID). However, animal tracking data, which are routinely used as inputs to KDEs, are inherently autocorrelated and violate this key assumption. As we demonstrate, using realistically autocorrelated data in conventional KDEs results in grossly underestimated home ranges. We further show that the performance of conventional KDEs actually degrades as data quality improves, because autocorrelation strength increases as movement paths become more finely resolved. To remedy these flaws with the traditional KDE method, we derive an autocorrelated KDE (AKDE) from first principles to use autocorrelated data, making it perfectly suited for movement data sets. We illustrate the vastly improved performance of AKDE using analytical arguments, relocation data from Mongolian gazelles, and simulations based upon the gazelle's observed movement process. By yielding better minimum area estimates for threatened wildlife populations, we believe that future widespread use of AKDE will have significant impact on ecology and conservation biology. PMID:26236833

  11. Curve Fitting of the Corporate Recovery Rates: The Comparison of Beta Distribution Estimation and Kernel Density Estimation

    PubMed Central

    Chen, Rongda; Wang, Ze

    2013-01-01

    Recovery rate is essential to the estimation of a portfolio's loss and economic capital. Neglecting the randomness of the distribution of recovery rate may underestimate the risk. This study introduces two kinds of distribution models, Beta distribution estimation and kernel density estimation, to simulate the distribution of recovery rates of corporate loans and bonds. Models based on the Beta distribution are common in daily usage, such as CreditMetrics by J.P. Morgan, Portfolio Manager by KMV and LossCalc by Moody's. However, the Beta distribution has a fatal defect: it cannot fit bimodal or multimodal distributions, such as the recovery rates of corporate loans and bonds that Moody's new data show. In order to overcome this flaw, kernel density estimation is introduced, and we compare the simulation results of the histogram, Beta distribution estimation and kernel density estimation to reach the conclusion that the Gaussian kernel density estimate better imitates the distribution of bimodal or multimodal data samples of corporate loans and bonds. Finally, a Chi-square test of the Gaussian kernel density estimate shows that it can fit the curve of recovery rates of loans and bonds. Using the kernel density estimate to delineate the bimodal recovery rates of bonds is therefore preferable in credit risk management. PMID:23874558
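
    A minimal sketch of the comparison described above, fitting a Beta distribution and a Gaussian kernel density estimate to the same bimodal sample with SciPy (the synthetic sample and all parameter choices are illustrative, not Moody's data):

        import numpy as np
        from scipy import stats

        def fit_recovery_rate_models(recovery_rates):
            """Return a Beta pdf and a Gaussian KDE fitted to rates in (0, 1)."""
            r = np.asarray(recovery_rates, dtype=float)
            a, b, loc, scale = stats.beta.fit(r, floc=0.0, fscale=1.0)  # support fixed to [0, 1]
            beta_pdf = lambda x: stats.beta.pdf(x, a, b, loc=loc, scale=scale)
            kde = stats.gaussian_kde(r)        # adapts to bimodal/multimodal shapes
            return beta_pdf, kde

        # Example: a bimodal sample that a single Beta density cannot represent well.
        rng = np.random.default_rng(0)
        sample = np.concatenate([rng.beta(2, 8, 500), rng.beta(8, 2, 500)])
        beta_pdf, kde = fit_recovery_rate_models(sample)
        grid = np.linspace(0.01, 0.99, 99)
        # beta_pdf(grid) and kde(grid) can now be compared against a histogram of the sample.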

  12. Curve fitting of the corporate recovery rates: the comparison of Beta distribution estimation and kernel density estimation.

    PubMed

    Chen, Rongda; Wang, Ze

    2013-01-01

    Recovery rate is essential to the estimation of a portfolio's loss and economic capital. Neglecting the randomness of the distribution of recovery rate may underestimate the risk. This study introduces two kinds of distribution models, Beta distribution estimation and kernel density estimation, to simulate the distribution of recovery rates of corporate loans and bonds. Models based on the Beta distribution are common in daily usage, such as CreditMetrics by J.P. Morgan, Portfolio Manager by KMV and LossCalc by Moody's. However, the Beta distribution has a fatal defect: it cannot fit bimodal or multimodal distributions, such as the recovery rates of corporate loans and bonds that Moody's new data show. In order to overcome this flaw, kernel density estimation is introduced, and we compare the simulation results of the histogram, Beta distribution estimation and kernel density estimation to reach the conclusion that the Gaussian kernel density estimate better imitates the distribution of bimodal or multimodal data samples of corporate loans and bonds. Finally, a Chi-square test of the Gaussian kernel density estimate shows that it can fit the curve of recovery rates of loans and bonds. Using the kernel density estimate to delineate the bimodal recovery rates of bonds is therefore preferable in credit risk management. PMID:23874558

  13. A linear quality control design for high efficient wavelet-based ECG data compression.

    PubMed

    Hung, King-Chu; Tsai, Chin-Feng; Ku, Cheng-Tung; Wang, Huan-Sheng

    2009-05-01

    In ECG data compression, maintaining the reconstructed signal at a desired quality is crucial for clinical application. In this paper, a linear quality control design based on the reversible round-off non-recursive discrete periodized wavelet transform (RRO-NRDPWT) is proposed for highly efficient ECG data compression. With the advantages of error propagation resistance and octave coefficient normalization, RRO-NRDPWT enables non-linear quantization control to obtain an approximately linear distortion by using a single control variable. Based on linear programming, a linear quantization scale prediction model is presented for the quality control of the reconstructed ECG signal. Using the MIT-BIH arrhythmia database, the experimental results show that the proposed system, with lower computational complexity, can obtain much better quality control performance than that of other wavelet-based systems. PMID:19070935

  14. An efficient wavelet-based approximation method to gene propagation model arising in population biology.

    PubMed

    Rajaraman, R; Hariharan, G

    2014-07-01

    In this paper, we have applied an efficient wavelet-based approximation method for solving the Fisher's type and the fractional Fisher's type equations arising in the biological sciences. To the best of our knowledge, no rigorous wavelet solution has previously been reported for the Fisher's and fractional Fisher's equations. The highest derivative in the differential equation is expanded into a Legendre series; this approximation is integrated while the boundary conditions are applied using integration constants. With the help of Legendre wavelet operational matrices, the Fisher's equation and the fractional Fisher's equation are converted into a system of algebraic equations. Block-pulse functions are used to investigate the Legendre wavelet coefficient vectors of the nonlinear terms. The convergence of the proposed methods is proved. Finally, we give some numerical examples to demonstrate the validity and applicability of the method. PMID:24908255

  15. Corrosion in Reinforced Concrete Panels: Wireless Monitoring and Wavelet-Based Analysis

    PubMed Central

    Qiao, Guofu; Sun, Guodong; Hong, Yi; Liu, Tiejun; Guan, Xinchun

    2014-01-01

    To realize efficient data capture and accurate analysis of pitting corrosion in reinforced concrete (RC) structures, we first design and implement a wireless sensor network (WSN) to monitor the pitting corrosion of RC panels, and then we propose a wavelet-based algorithm to analyze the corrosion state from the corrosion data collected by the wireless platform. We design a novel pitting-corrosion-detecting mote and a communication protocol such that the monitoring platform can sample the electrochemical emission signals of the corrosion process with a configured period and send these signals to a central computer for analysis. The proposed algorithm, based on wavelet-domain analysis, returns the energy distribution of the electrochemical emission data, from which closer observation and understanding can be achieved. We also conducted test-bed experiments based on RC panels. The results verify the feasibility and efficiency of the proposed WSN system and algorithms. PMID:24556673
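
    The wavelet energy distribution mentioned above can be computed as the fraction of signal energy carried by each decomposition level; a minimal sketch assuming PyWavelets (wavelet and level choices are illustrative):

        import numpy as np
        import pywt

        def wavelet_energy_distribution(signal, wavelet="db4", level=6):
            """Relative energy per wavelet level: [approximation, detail_L, ..., detail_1]."""
            coeffs = pywt.wavedec(np.asarray(signal, dtype=float), wavelet, level=level)
            energies = np.array([np.sum(c ** 2) for c in coeffs])
            return energies / energies.sum()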

  16. Evaluation of a new wavelet-based compression algorithm for synthetic aperture radar images

    NASA Astrophysics Data System (ADS)

    Tian, Jun; Guo, Haitao; Wells, Raymond O., Jr.; Burrus, C. Sidney; Odegard, Jan E.

    1996-06-01

    In this paper we discuss the performance of a new wavelet-based embedded compression algorithm on synthetic aperture radar (SAR) image data. The new algorithm uses index coding on the indices of the discrete wavelet transform of the image data and provides an embedded code to successively approximate it. Results on compressing still images, medical images and seismic traces indicate that the new algorithm performs quite competitively with other image compression algorithms. Its evaluation for SAR image compression is presented in this paper. One advantage of the new algorithm is that the compressed data are encoded in such a way as to facilitate processing in the compressed wavelet domain, which is a significant aspect considering the rate at which SAR data are collected and the desire to process the data in 'near real time'.

  17. Wavelet-based Poisson Solver for use in Particle-In-Cell Simulations

    SciTech Connect

    Terzic, B.; Mihalcea, D.; Bohn, C.L.; Pogorelov, I.V.

    2005-05-13

    We report on a successful implementation of a wavelet based Poisson solver for use in 3D particle-in-cell (PIC) simulations. One new aspect of our algorithm is its ability to treat the general (inhomogeneous) Dirichlet boundary conditions (BCs). The solver harnesses advantages afforded by the wavelet formulation, such as sparsity of operators and data sets, existence of effective preconditioners, and the ability simultaneously to remove numerical noise and further compress relevant data sets. Having tested our method as a stand-alone solver on two model problems, we merged it into IMPACT-T to obtain a fully functional serial PIC code. We present and discuss preliminary results of application of the new code to the modeling of the Fermilab/NICADD and AES/JLab photoinjectors.

  18. Multichannel EEG compression: wavelet-based image and volumetric coding approach.

    PubMed

    Srinivasan, K; Dauwels, J; Ramasubba, M R

    2013-01-01

    In this paper, lossless and near-lossless compression algorithms for multichannel electroencephalogram (EEG) signals are presented based on image and volumetric coding. Multichannel EEG signals have significant correlation among spatially adjacent channels; moreover, EEG signals are also correlated across time. Suitable representations are proposed to utilize those correlations effectively. In particular, multichannel EEG is represented either in the form of an image (matrix) or volumetric data (tensor); next, a wavelet transform is applied to those EEG representations. The compression algorithms are designed following the principle of lossy plus residual coding, consisting of a wavelet-based lossy coding layer followed by arithmetic coding on the residual. Such an approach guarantees a specifiable maximum error between the original and reconstructed signals. The compression algorithms are applied to three different EEG datasets, each with a different sampling rate and resolution. The proposed multichannel compression algorithms achieve attractive compression ratios compared to algorithms that compress individual channels separately. PMID:22510952
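
    A minimal single-channel sketch of the lossy-plus-residual principle described above: a coarsely quantized wavelet layer plus a uniformly quantized residual whose step size bounds the maximum reconstruction error. Entropy/arithmetic coding of both layers is omitted and every parameter is an illustrative assumption:

        import numpy as np
        import pywt

        def near_lossless_layers(x, max_error, wavelet="db4", level=4, q_step=8.0):
            """Lossy wavelet layer + residual layer with |x - reconstruction| <= max_error
            (max_error must be > 0)."""
            x = np.asarray(x, dtype=float)
            coeffs = pywt.wavedec(x, wavelet, level=level)
            # Lossy layer: coarse uniform quantization of the wavelet coefficients.
            q_coeffs = [np.round(c / q_step) * q_step for c in coeffs]
            lossy = pywt.waverec(q_coeffs, wavelet)[: x.size]
            # Residual layer: a uniform quantizer with step 2*max_error bounds the error.
            step = 2.0 * max_error
            residual_q = np.round((x - lossy) / step)
            reconstruction = lossy + residual_q * step
            return q_coeffs, residual_q, reconstruction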

  19. Optimum wavelet based masking for the contrast enhancement of medical images using enhanced cuckoo search algorithm.

    PubMed

    Daniel, Ebenezer; Anitha, J

    2016-04-01

    Unsharp masking techniques are a prominent approach in contrast enhancement. The generalized masking formulation has a static scale value selection, which limits the gain in contrast. In this paper, we propose an Optimum Wavelet Based Masking (OWBM) using an Enhanced Cuckoo Search Algorithm (ECSA) for the contrast improvement of medical images. The ECSA can automatically adjust the ratio of nest rebuilding, using genetic operators such as adaptive crossover and mutation. First, the proposed contrast enhancement approach is validated quantitatively using Brain Web and MIAS database images. Then, the conventional nest rebuilding of cuckoo search optimization is modified using Adaptive Rebuilding of Worst Nests (ARWN). Experimental results are analyzed using various performance metrics, and our OWBM shows improved results as compared with other reported literature. PMID:26945462

  20. Corrosion in reinforced concrete panels: wireless monitoring and wavelet-based analysis.

    PubMed

    Qiao, Guofu; Sun, Guodong; Hong, Yi; Liu, Tiejun; Guan, Xinchun

    2014-01-01

    To realize efficient data capture and accurate analysis of pitting corrosion in reinforced concrete (RC) structures, we first design and implement a wireless sensor network (WSN) to monitor the pitting corrosion of RC panels, and then we propose a wavelet-based algorithm to analyze the corrosion state from the corrosion data collected by the wireless platform. We design a novel pitting-corrosion-detecting mote and a communication protocol such that the monitoring platform can sample the electrochemical emission signals of the corrosion process with a configured period and send these signals to a central computer for analysis. The proposed algorithm, based on wavelet-domain analysis, returns the energy distribution of the electrochemical emission data, from which closer observation and understanding can be achieved. We also conducted test-bed experiments based on RC panels. The results verify the feasibility and efficiency of the proposed WSN system and algorithms. PMID:24556673

  1. A wavelet-based watermarking algorithm for ownership verification of digital images.

    PubMed

    Wang, Yiwei; Doherty, John F; Van Dyck, Robert E

    2002-01-01

    Access to multimedia data has become much easier due to the rapid growth of the Internet. While this is usually considered an improvement of everyday life, it also makes unauthorized copying and distributing of multimedia data much easier, therefore presenting a challenge in the field of copyright protection. Digital watermarking, which is inserting copyright information into the data, has been proposed to solve the problem. In this paper, we first discuss the features that a practical digital watermarking system for ownership verification requires. Besides perceptual invisibility and robustness, we claim that the private control of the watermark is also very important. Second, we present a novel wavelet-based watermarking algorithm. Experimental results and analysis are then given to demonstrate that the proposed algorithm is effective and can be used in a practical system. PMID:18244614

  2. An Investigation of Wavelet Bases for Grid-Based Multi-Scale Simulations Final Report

    SciTech Connect

    Baty, R.S.; Burns, S.P.; Christon, M.A.; Roach, D.W.; Trucano, T.G.; Voth, T.E.; Weatherby, J.R.; Womble, D.E.

    1998-11-01

    The research summarized in this report is the result of a two-year effort that has focused on evaluating the viability of wavelet bases for the solution of partial differential equations. The primary objective for this work has been to establish a foundation for hierarchical/wavelet simulation methods based upon numerical performance, computational efficiency, and the ability to exploit the hierarchical adaptive nature of wavelets. This work has demonstrated that hierarchical bases can be effective for problems with a dominant elliptic character. However, the strict enforcement of orthogonality was found to be less desirable than weaker semi-orthogonality or bi-orthogonality for solving partial differential equations. This conclusion has led to the development of a multi-scale linear finite element based on a hierarchical change of basis. The reproducing kernel particle method has been found to yield extremely accurate phase characteristics for hyperbolic problems while providing a convenient framework for multi-scale analyses.

  3. Design of wavelet-based ECG detector for implantable cardiac pacemakers.

    PubMed

    Min, Young-Jae; Kim, Hoon-Ki; Kang, Yu-Ri; Kim, Gil-Su; Park, Jongsun; Kim, Soo-Won

    2013-08-01

    A wavelet Electrocardiogram (ECG) detector for low-power implantable cardiac pacemakers is presented in this paper. The proposed wavelet-based ECG detector consists of a wavelet decomposer with wavelet filter banks, a QRS complex detector of hypothesis testing with wavelet-demodulated ECG signals, and a noise detector with zero-crossing points. In order to achieve high detection accuracy with low power consumption, a multi-scaled product algorithm and soft-threshold algorithm are efficiently exploited in our ECG detector implementation. Our algorithmic and architectural level approaches have been implemented and fabricated in a standard 0.35 μm CMOS technology. The testchip including a low-power analog-to-digital converter (ADC) shows a low detection error-rate of 0.196% and low power consumption of 19.02 μW with a 3 V supply voltage. PMID:23893202
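
    A software sketch of the multiscale-product idea mentioned above (the paper realizes it in low-power hardware): detail coefficients of an undecimated wavelet transform are multiplied across scales so that QRS complexes, which persist at every scale, are reinforced relative to noise. The threshold and refractory period below are illustrative assumptions:

        import numpy as np
        import pywt
        from scipy.signal import find_peaks

        def qrs_candidates(ecg, fs, wavelet="db4", levels=3):
            """Return candidate QRS sample indices from a multiscale product."""
            x = np.asarray(ecg, dtype=float)
            pad = (-len(x)) % (2 ** levels)            # swt needs a multiple of 2**levels
            xp = np.pad(x, (0, pad), mode="edge")
            details = [d for _, d in pywt.swt(xp, wavelet, level=levels)]
            product = np.abs(np.prod(details, axis=0))[: len(x)]
            height = product.mean() + 3.0 * product.std()     # crude adaptive threshold
            peaks, _ = find_peaks(product, height=height, distance=int(0.2 * fs))
            return peaks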

  4. Conjugate Event Study of Geomagnetic ULF Pulsations with Wavelet-based Indices

    NASA Astrophysics Data System (ADS)

    Xu, Z.; Clauer, C. R.; Kim, H.; Weimer, D. R.; Cai, X.

    2013-12-01

    The interactions between the solar wind and the geomagnetic field produce a variety of space weather phenomena, which can impact the advanced technology systems of modern society including, for example, power systems, communication systems, and navigation systems. One type of phenomenon is the geomagnetic ULF pulsation observed by ground-based or in-situ satellite measurements. Here, we describe a wavelet-based index and apply it to study geomagnetic ULF pulsations observed by the Antarctic and Greenland magnetometer arrays. The wavelet indices computed from these data provide spectral, correlation, and magnitude information about the geomagnetic pulsations. The results show that the geomagnetic field at conjugate locations responds differently according to the frequency of the pulsations. The index is effective for identifying pulsation events and measures important characteristics of the pulsations. It could be a useful tool for monitoring geomagnetic pulsations.

  5. Exact minimax estimation of the predictive density in sparse Gaussian models

    PubMed Central

    Mukherjee, Gourab; Johnstone, Iain M.

    2015-01-01

    We consider estimating the predictive density under Kullback–Leibler loss in an ℓ0 sparse Gaussian sequence model. Explicit expressions of the first order minimax risk along with its exact constant, asymptotically least favorable priors and optimal predictive density estimates are derived. Compared to the sparse recovery results involving point estimation of the normal mean, new decision theoretic phenomena are seen. Suboptimal performance of the class of plug-in density estimates reflects the predictive nature of the problem and optimal strategies need diversification of the future risk. We find that minimax optimal strategies lie outside the Gaussian family but can be constructed with threshold predictive density estimates. Novel minimax techniques involving simultaneous calibration of the sparsity adjustment and the risk diversification mechanisms are used to design optimal predictive density estimates. PMID:26448678

  6. Effects of LiDAR point density and landscape context on estimates of urban forest biomass

    NASA Astrophysics Data System (ADS)

    Singh, Kunwar K.; Chen, Gang; McCarter, James B.; Meentemeyer, Ross K.

    2015-03-01

    Light Detection and Ranging (LiDAR) data is being increasingly used as an effective alternative to conventional optical remote sensing to accurately estimate aboveground forest biomass ranging from individual tree to stand levels. Recent advancements in LiDAR technology have resulted in higher point densities and improved data accuracies accompanied by challenges for procuring and processing voluminous LiDAR data for large-area assessments. Reducing point density lowers data acquisition costs and overcomes computational challenges for large-area forest assessments. However, how does lower point density impact the accuracy of biomass estimation in forests containing a great level of anthropogenic disturbance? We evaluate the effects of LiDAR point density on the biomass estimation of remnant forests in the rapidly urbanizing region of Charlotte, North Carolina, USA. We used multiple linear regression to establish a statistical relationship between field-measured biomass and predictor variables derived from LiDAR data with varying densities. We compared the estimation accuracies between a general Urban Forest type and three Forest Type models (evergreen, deciduous, and mixed) and quantified the degree to which landscape context influenced biomass estimation. The explained biomass variance of the Urban Forest model, using adjusted R2, was consistent across the reduced point densities, with the highest difference of 11.5% between the 100% and 1% point densities. The combined estimates of Forest Type biomass models outperformed the Urban Forest models at the representative point densities (100% and 40%). The Urban Forest biomass model with development density of 125 m radius produced the highest adjusted R2 (0.83 and 0.82 at 100% and 40% LiDAR point densities, respectively) and the lowest RMSE values, highlighting a distance impact of development on biomass estimation. Our evaluation suggests that reducing LiDAR point density is a viable solution to regional-scale forest assessment without compromising the accuracy of biomass estimates, and these estimates can be further improved using development density.

  7. Characterization of a maximum-likelihood nonparametric density estimator of kernel type

    NASA Technical Reports Server (NTRS)

    Geman, S.; Mcclure, D. E.

    1982-01-01

    Kernel-type density estimators are calculated by the method of sieves. Proofs are presented for the characterization theorem: Let x(1), x(2), ..., x(n) be a random sample from a population with density f(0). Let sigma 0 and consider estimators f of f(0) defined by (1).

  8. Dose-volume histogram prediction using density estimation.

    PubMed

    Skarpman Munter, Johanna; Sjölund, Jens

    2015-09-01

    Knowledge of what dose-volume histograms can be expected for a previously unseen patient could increase consistency and quality in radiotherapy treatment planning. We propose a machine learning method that uses previous treatment plans to predict such dose-volume histograms. The key to the approach is the framing of dose-volume histograms in a probabilistic setting. The training consists of estimating, from the patients in the training set, the joint probability distribution of some predictive features and the dose. The joint distribution immediately provides an estimate of the conditional probability of the dose given the values of the predictive features. The prediction consists of estimating, from the new patient, the distribution of the predictive features and marginalizing the conditional probability from the training over this. Integrating the resulting probability distribution for the dose yields an estimate of the dose-volume histogram. To illustrate how the proposed method relates to previously proposed methods, we use the signed distance to the target boundary as a single predictive feature. As a proof-of-concept, we predicted dose-volume histograms for the brainstems of 22 acoustic schwannoma patients treated with stereotactic radiosurgery, and for the lungs of 9 lung cancer patients treated with stereotactic body radiation therapy. Comparing with two previous attempts at dose-volume histogram prediction we find that, given the same input data, the predictions are similar. In summary, we propose a method for dose-volume histogram prediction that exploits the intrinsic probabilistic properties of dose-volume histograms. We argue that the proposed method makes up for some deficiencies in previously proposed methods, thereby potentially increasing ease of use, flexibility and ability to perform well with small amounts of training data. PMID:26305670
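
    A coarse, histogram-based sketch of the probabilistic recipe described above: estimate p(dose | feature) from training voxels, marginalize it over the new patient's feature distribution, and accumulate the result into a cumulative DVH. Binning and variable names are illustrative assumptions:

        import numpy as np

        def predict_dvh(train_features, train_doses, new_features, dose_grid):
            """Predicted cumulative DVH (fraction of volume receiving at least each dose)."""
            f_edges = np.histogram_bin_edges(
                np.concatenate([train_features, new_features]), bins=30)
            d_edges = np.concatenate([dose_grid, [dose_grid[-1] + np.diff(dose_grid)[-1]]])
            joint, _, _ = np.histogram2d(train_features, train_doses, bins=[f_edges, d_edges])
            cond = joint / np.clip(joint.sum(axis=1, keepdims=True), 1, None)  # p(dose | feature bin)
            feat_hist, _ = np.histogram(new_features, bins=f_edges)
            feat_dist = feat_hist / feat_hist.sum()                            # new patient's p(feature)
            dose_dist = feat_dist @ cond                                       # marginal p(dose)
            return dose_dist[::-1].cumsum()[::-1]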

  9. Estimated global nitrogen deposition using NO2 column density

    USGS Publications Warehouse

    Lu, Xuehe; Jiang, Hong; Zhang, Xiuying; Liu, Jinxun; Zhang, Zhen; Jin, Jiaxin; Wang, Ying; Xu, Jianhui; Cheng, Miaomiao

    2013-01-01

    Global nitrogen deposition has increased over the past 100 years. Monitoring and simulation studies have evaluated nitrogen deposition at both the global and regional scale. With the development of remote-sensing instruments, tropospheric NO2 column density retrieved from the Global Ozone Monitoring Experiment (GOME) and the Scanning Imaging Absorption Spectrometer for Atmospheric Chartography (SCIAMACHY) sensors now provides a new opportunity to understand changes in reactive nitrogen in the atmosphere. The concentration of NO2 in the atmosphere has a significant effect on atmospheric nitrogen deposition. Following the general nitrogen deposition calculation method, we use principal component regression to evaluate global nitrogen deposition from global NO2 column density and meteorological data. In terms of simulation accuracy, about 70% of the Earth's land area passed a significance test of the regression, and NO2 column density has a significant influence on the regression results over 44% of global land. The simulated results show that global average nitrogen deposition was 0.34 g m−2 yr−1 from 1996 to 2009 and is increasing at about 1% per year. China, Europe, and the USA are the three hotspots of nitrogen deposition, consistent with previous research findings. In this study, southern Asia was found to be another hotspot of nitrogen deposition (about 1.58 g m−2 yr−1 and maintaining a high growth rate). As nitrogen deposition increases, the number of regions threatened by high nitrogen deposition is also increasing. With N emissions continuing to increase in the future, areas whose ecosystems are affected by high levels of nitrogen deposition will expand.

  10. An adaptive technique for estimating the atmospheric density profile during the AE mission

    NASA Technical Reports Server (NTRS)

    Argentiero, P.

    1973-01-01

    A technique is presented for processing accelerometer data obtained during the AE missions in order to estimate the atmospheric density profile. A minimum variance, adaptive filter is utilized. The trajectory of the probe and probe parameters are in a consider mode where their estimates are unimproved but their associated uncertainties are permitted an impact on filter behavior. Simulations indicate that the technique is effective in estimating a density profile to within a few percentage points.

  11. Probabilistic Analysis and Density Parameter Estimation Within Nessus

    NASA Technical Reports Server (NTRS)

    Godines, Cody R.; Manteufel, Randall D.; Chamis, Christos C. (Technical Monitor)

    2002-01-01

    This NASA educational grant has the goal of promoting probabilistic analysis methods to undergraduate and graduate UTSA engineering students. Two undergraduate-level and one graduate-level course were offered at UTSA providing a large number of students exposure to and experience in probabilistic techniques. The grant provided two research engineers from Southwest Research Institute the opportunity to teach these courses at UTSA, thereby exposing a large number of students to practical applications of probabilistic methods and state-of-the-art computational methods. In classroom activities, students were introduced to the NESSUS computer program, which embodies many algorithms in probabilistic simulation and reliability analysis. Because the NESSUS program is used at UTSA in both student research projects and selected courses, a student version of a NESSUS manual has been revised and improved, with additional example problems being added to expand the scope of the example application problems. This report documents two research accomplishments in the integration of a new sampling algorithm into NESSUS and in the testing of the new algorithm. The new Latin Hypercube Sampling (LHS) subroutines use the latest NESSUS input file format and specific files for writing output. The LHS subroutines are called out early in the program so that no unnecessary calculations are performed. Proper correlation between sets of multidimensional coordinates can be obtained by using NESSUS' LHS capabilities. Finally, two types of correlation are written to the appropriate output file. The program enhancement was tested by repeatedly estimating the mean, standard deviation, and 99th percentile of four different responses using Monte Carlo (MC) and LHS. These test cases, put forth by the Society of Automotive Engineers, are used to compare probabilistic methods. For all test cases, it is shown that LHS has a lower estimation error than MC when used to estimate the mean, standard deviation, and 99th percentile of the four responses at the 50 percent confidence level and using the same number of response evaluations for each method. In addition, LHS requires fewer calculations than MC in order to be 99.7 percent confident that a single mean, standard deviation, or 99th percentile estimate will be within at most 3 percent of the true value of the each parameter. Again, this is shown for all of the test cases studied. For that reason it can be said that NESSUS is an important reliability tool that has a variety of sound probabilistic methods a user can employ; furthermore, the newest LHS module is a valuable new enhancement of the program.
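
    The comparison described above can be reproduced in miniature by estimating the mean of a simple response with plain Monte Carlo and with Latin Hypercube Sampling at the same sample size; a minimal sketch using SciPy's qmc module (the response function below is illustrative, not one of the SAE test cases):

        import numpy as np
        from scipy.stats import norm, qmc

        def estimate_mean(response, dim, n_samples, method="mc", seed=0):
            """Mean and sample standard deviation of response(x) for standard-normal inputs."""
            if method == "lhs":
                u = qmc.LatinHypercube(d=dim, seed=seed).random(n_samples)  # stratified [0,1)^d
                x = norm.ppf(u)                                             # map to standard normals
            else:
                x = np.random.default_rng(seed).standard_normal((n_samples, dim))
            y = np.apply_along_axis(response, 1, x)
            return y.mean(), y.std(ddof=1)

        resp = lambda v: v[0] ** 2 + 3.0 * v[1]       # simple nonlinear response
        print(estimate_mean(resp, dim=2, n_samples=200, method="mc"))
        print(estimate_mean(resp, dim=2, n_samples=200, method="lhs"))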

  12. RADIATION PRESSURE DETECTION AND DENSITY ESTIMATE FOR 2011 MD

    SciTech Connect

    Micheli, Marco; Tholen, David J.; Elliott, Garrett T. E-mail: tholen@ifa.hawaii.edu

    2014-06-10

    We present our astrometric observations of the small near-Earth object 2011 MD (H ∼ 28.0), obtained after its very close fly-by of Earth in 2011 June. Our set of observations extends the observational arc to 73 days and, together with the published astrometry obtained around the Earth fly-by, allows a direct detection of the effect of radiation pressure on the object, with a confidence of 5σ. The detection can be used to put constraints on the density of the object, pointing to either an unexpectedly low value of ρ = (640 ± 330) kg m−3 (68% confidence interval), if we assume a typical probability distribution for the unknown albedo, or to an unusually high reflectivity of its surface. This result may have important implications both in terms of impact hazard from small objects and in light of a possible retrieval of this target.

  13. A comparison of 2 techniques for estimating deer density

    USGS Publications Warehouse

    Robbins, C.S.

    1977-01-01

    We applied mark-resight and area-conversion methods to estimate deer abundance at a 2,862-ha area in and surrounding the Gettysburg National Military Park and Eisenhower National Historic Site during 1987-1991. One observer in each of 11 compartments counted marked and unmarked deer during 65-75 minutes at dusk during 3 counts in each of April and November. Use of radio-collars and vinyl collars provided a complete inventory of marked deer in the population prior to the counts. We sighted 54% of the marked deer during April 1987 and 1988, and 43% of the marked deer during November 1987 and 1988. The mean number of deer counted increased from 427 in April 1987 to 582 in April 1991, and from 467 in November 1987 to 662 in November 1990. Herd size during April, based on the mark-resight method, increased from approximately 700 in 1987 to 1,400 in 1991, whereas the estimates for November indicated an increase from 983 in 1987 to 1,592 in 1990. Given the large proportion of open area and the extensive road system throughout the study area, we concluded that the sighting probability for marked and unmarked deer was fairly similar. We believe that the mark-resight method was better suited to our study than the area-conversion method because deer were not evenly distributed between areas suitable and unsuitable for sighting within open and forested areas; the assumption of equal distribution is required by the area-conversion method. Deer marked for the mark-resight method also helped reduce double counting during the dusk surveys.

  14. An Efficient Acoustic Density Estimation Method with Human Detectors Applied to Gibbons in Cambodia.

    PubMed

    Kidney, Darren; Rawson, Benjamin M; Borchers, David L; Stevenson, Ben C; Marques, Tiago A; Thomas, Len

    2016-01-01

    Some animal species are hard to see but easy to hear. Standard visual methods for estimating population density for such species are often ineffective or inefficient, but methods based on passive acoustics show more promise. We develop spatially explicit capture-recapture (SECR) methods for territorial vocalising species, in which humans act as an acoustic detector array. We use SECR and estimated bearing data from a single-occasion acoustic survey of a gibbon population in northeastern Cambodia to estimate the density of calling groups. The properties of the estimator are assessed using a simulation study, in which a variety of survey designs are also investigated. We then present a new form of the SECR likelihood for multi-occasion data which accounts for the stochastic availability of animals. In the context of gibbon surveys this allows model-based estimation of the proportion of groups that produce territorial vocalisations on a given day, thereby enabling the density of groups, instead of the density of calling groups, to be estimated. We illustrate the performance of this new estimator by simulation. We show that it is possible to estimate density reliably from human acoustic detections of visually cryptic species using SECR methods. For gibbon surveys we also show that incorporating observers' estimates of bearings to detected groups substantially improves estimator performance. Using the new form of the SECR likelihood we demonstrate that estimates of availability, in addition to population density and detection function parameters, can be obtained from multi-occasion data, and that the detection function parameters are not confounded with the availability parameter. This acoustic SECR method provides a means of obtaining reliable density estimates for territorial vocalising species. It is also efficient in terms of data requirements since it only requires routine survey data. We anticipate that the low-tech field requirements will make this method an attractive option in many situations where populations can be surveyed acoustically by humans. PMID:27195799

  15. Wavelet Based Method for Congestive Heart Failure Recognition by Three Confirmation Functions.

    PubMed

    Daqrouq, K; Dobaie, A

    2016-01-01

    An investigation of electrocardiogram (ECG) signals and arrhythmia characterization by wavelet energy is presented. This study employs a wavelet-based feature extraction method for congestive heart failure (CHF) based on the percentage energy (PE) of terminal wavelet packet transform (WPT) subsignals. In addition, the average framing percentage energy (AFE) technique, termed WAFE, is proposed. A new classification method is introduced using three confirmation functions. The confirmation methods are based on three concepts: percentage root mean square difference error (PRD), logarithmic difference signal ratio (LDSR), and correlation coefficient (CC). The proposed method was shown to be a potentially effective discriminator for recognizing this clinical syndrome. ECG signals taken from the MIT-BIH arrhythmia dataset and other databases are used to analyze different arrhythmias and normal ECGs. Several known methods were studied for comparison. The best recognition rate was obtained with WAFE, with an accuracy of 92.60%. The Receiver Operating Characteristic curve, a common tool for evaluating diagnostic accuracy, indicated that the tests are reliable. The performance of the presented system was also investigated in an additive white Gaussian noise (AWGN) environment, where the recognition rate was 81.48% at 5 dB. PMID:26949412
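
    The percentage-energy feature described above is the share of signal energy in each terminal wavelet packet node; a minimal sketch assuming PyWavelets (wavelet and decomposition level are illustrative):

        import numpy as np
        import pywt

        def wpt_percentage_energy(signal, wavelet="db4", level=4):
            """Percentage energy of the terminal wavelet packet subsignals."""
            wp = pywt.WaveletPacket(data=np.asarray(signal, dtype=float),
                                    wavelet=wavelet, maxlevel=level)
            nodes = wp.get_level(level, order="freq")       # terminal nodes, frequency order
            energies = np.array([np.sum(np.asarray(n.data) ** 2) for n in nodes])
            return 100.0 * energies / energies.sum()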

  16. Wavelet-based unsupervised learning method for electrocardiogram suppression in surface electromyograms.

    PubMed

    Niegowski, Maciej; Zivanovic, Miroslav

    2016-03-01

    We present a novel approach aimed at removing electrocardiogram (ECG) perturbation from single-channel surface electromyogram (EMG) recordings by means of unsupervised learning of wavelet-based intensity images. The general idea is to combine the suitability of certain wavelet decomposition bases which provide sparse electrocardiogram time-frequency representations, with the capacity of non-negative matrix factorization (NMF) for extracting patterns from images. In order to overcome convergence problems which often arise in NMF-related applications, we design a novel robust initialization strategy which ensures proper signal decomposition in a wide range of ECG contamination levels. Moreover, the method can be readily used because no a priori knowledge or parameter adjustment is needed. The proposed method was evaluated on real surface EMG signals against two state-of-the-art unsupervised learning algorithms and a singular spectrum analysis based method. The results, expressed in terms of high-to-low energy ratio, normalized median frequency, spectral power difference and normalized average rectified value, suggest that the proposed method enables better ECG-EMG separation quality than the reference methods. PMID:26774422

  17. Optimal sensor placement for time-domain identification using a wavelet-based genetic algorithm

    NASA Astrophysics Data System (ADS)

    Mahdavi, Seyed Hossein; Razak, Hashim Abdul

    2016-06-01

    This paper presents a wavelet-based genetic algorithm strategy for optimal sensor placement (OSP) effective for time-domain structural identification. Initially, the GA-based fitness evaluation is significantly improved by using adaptive wavelet functions. Later, a multi-species decimal GA coding system is modified to be suitable for an efficient search around the local optima. In this regard, a local operation of mutation is introduced in addition with regeneration and reintroduction operators. It is concluded that different characteristics of applied force influence the features of structural responses, and therefore the accuracy of time-domain structural identification is directly affected. Thus, the reliable OSP strategy prior to the time-domain identification will be achieved by those methods dealing with minimizing the distance of simulated responses for the entire system and condensed system considering the force effects. The numerical and experimental verification on the effectiveness of the proposed strategy demonstrates the considerably high computational performance of the proposed OSP strategy, in terms of computational cost and the accuracy of identification. It is deduced that the robustness of the proposed OSP algorithm lies in the precise and fast fitness evaluation at larger sampling rates which result in the optimum evaluation of the GA-based exploration and exploitation phases towards the global optimum solution.

  18. Radiation dose reduction in digital radiography using wavelet-based image processing methods

    NASA Astrophysics Data System (ADS)

    Watanabe, Haruyuki; Tsai, Du-Yih; Lee, Yongbum; Matsuyama, Eri; Kojima, Katsuyuki

    2011-03-01

    In this paper, we investigate the effect of the use of wavelet transform for image processing on radiation dose reduction in computed radiography (CR), by measuring various physical characteristics of the wavelet-transformed images. Moreover, we propose a wavelet-based method for offering a possibility to reduce radiation dose while maintaining a clinically acceptable image quality. The proposed method integrates the advantages of a previously proposed technique, i.e., sigmoid-type transfer curve for wavelet coefficient weighting adjustment technique, as well as a wavelet soft-thresholding technique. The former can improve contrast and spatial resolution of CR images, the latter is able to improve the performance of image noise. In the investigation of physical characteristics, modulation transfer function, noise power spectrum, and contrast-to-noise ratio of CR images processed by the proposed method and other different methods were measured and compared. Furthermore, visual evaluation was performed using Scheffe's pair comparison method. Experimental results showed that the proposed method could improve overall image quality as compared to other methods. Our visual evaluation showed that an approximately 40% reduction in exposure dose might be achieved in hip joint radiography by using the proposed method.
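
    A minimal sketch combining the two ingredients described above, a sigmoid-type weighting of detail coefficients followed by soft thresholding, assuming PyWavelets; the transfer-curve parameters are illustrative, not the paper's values:

        import numpy as np
        import pywt

        def enhance_cr_image(img, wavelet="db2", level=3, gain=2.0, slope=0.1, sigma=5.0):
            """Boost large detail coefficients with a sigmoid weight, then soft-threshold."""
            coeffs = pywt.wavedec2(np.asarray(img, dtype=float), wavelet, level=level)
            out = [coeffs[0]]
            for detail in coeffs[1:]:
                bands = []
                for c in detail:
                    # Weight rises smoothly from 1 toward `gain` as |c| exceeds sigma.
                    w = 1.0 + (gain - 1.0) / (1.0 + np.exp(-slope * (np.abs(c) - sigma)))
                    bands.append(pywt.threshold(c * w, sigma, mode="soft"))
                out.append(tuple(bands))
            return pywt.waverec2(out, wavelet)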

  19. A new approach to pre-processing digital image for wavelet-based watermark

    NASA Astrophysics Data System (ADS)

    Agreste, Santa; Andaloro, Guido

    2008-11-01

    The growth of the Internet has increased the phenomenon of digital piracy of multimedia objects such as software, images, video, audio and text. It is therefore strategic to develop methods and numerical algorithms, stable and of low computational cost, that can address these problems. We describe a digital watermarking algorithm for color image protection and authenticity: robust, non-blind, and wavelet-based. The use of the Discrete Wavelet Transform is motivated by its good time-frequency features and good match with Human Visual System directives. These two combined elements are important for building an invisible and robust watermark. Moreover, our algorithm can work with any image, thanks to a pre-processing step that includes resizing techniques which adapt the original image size to the wavelet transform. The watermark signal is calculated in correlation with the image features and statistical properties. In the detection step we apply a re-synchronization between the original and watermarked images according to the Neyman-Pearson statistical criterion. Experimentation on a large set of different images has shown the watermark to be resistant against geometric, filtering, and StirMark attacks with a low false-alarm rate.

  20. Finding the multipath propagation of multivariable crude oil prices using a wavelet-based network approach

    NASA Astrophysics Data System (ADS)

    Jia, Xiaoliang; An, Haizhong; Sun, Xiaoqi; Huang, Xuan; Gao, Xiangyun

    2016-04-01

    The globalization and regionalization of crude oil trade inevitably give rise to differences in crude oil prices. Understanding the pattern of the mutual propagation of crude oil prices is essential for analyzing the development of global oil trade. Previous research has focused mainly on the fuzzy long- or short-term one-to-one propagation between pairs of oil prices, generally ignoring the various patterns of periodical multivariate propagation. This study presents a wavelet-based network approach to help uncover the multipath propagation of multivariable crude oil prices in a joint time-frequency period. The weekly oil spot prices of the OPEC member states from June 1999 to March 2011 are adopted as the sample data. First, we used wavelet analysis to obtain subseries, based on an optimal decomposition scale, that describe the periodical features of the original oil price time series. Second, a complex network model was constructed, based on an optimal threshold selection, to describe the structural features of the multivariable oil prices. Third, Bayesian network analysis (BNA) was conducted to find probabilistic causal relationships based on the periodical structural features, describing the various patterns of periodical multivariable propagation. Finally, the significance of the leading and intermediary oil prices is discussed. These findings are beneficial for the implementation of periodical target-oriented pricing policies and investment strategies.
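
    A minimal sketch of the first two stages, wavelet smoothing of each price series and a correlation network thresholded on the absolute correlation, is given below; the wavelet, decomposition level, threshold, and synthetic series are illustrative assumptions, and the Bayesian-network stage of the paper is not reproduced.

    ```python
    # Sketch: wavelet-smoothed price subseries, then a thresholded correlation
    # network. Parameters and data are placeholders, not the study's settings.
    import numpy as np
    import pywt

    def wavelet_smooth(x, wavelet="db4", level=3):
        """Keep only the approximation at the chosen scale (periodical component)."""
        coeffs = pywt.wavedec(x, wavelet, level=level)
        coeffs = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
        return pywt.waverec(coeffs, wavelet)[: len(x)]

    def price_network(prices, threshold=0.8):
        """prices: dict {name: 1-D array}; returns list of edges (a, b, rho)."""
        names = list(prices)
        smooth = {k: wavelet_smooth(v) for k, v in prices.items()}
        edges = []
        for i, a in enumerate(names):
            for b in names[i + 1:]:
                rho = np.corrcoef(smooth[a], smooth[b])[0, 1]
                if abs(rho) >= threshold:
                    edges.append((a, b, rho))
        return edges

    rng = np.random.default_rng(0)
    base = np.cumsum(rng.normal(size=600))
    prices = {c: base + rng.normal(scale=2.0, size=600) for c in ["SA", "IR", "VE", "NG"]}
    print(price_network(prices))
    ```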

  1. Wavelet-based multifractal analysis of dynamic infrared thermograms to assist in early breast cancer diagnosis.

    PubMed

    Gerasimova, Evgeniya; Audit, Benjamin; Roux, Stephane G; Khalil, André; Gileva, Olga; Argoul, Françoise; Naimark, Oleg; Arneodo, Alain

    2014-01-01

    Breast cancer is the most common type of cancer among women, and despite recent advances in the medical field, there are still some inherent limitations in the currently used screening techniques. The radiological interpretation of screening X-ray mammograms often leads to over-diagnosis and, as a consequence, to unnecessary traumatic and painful biopsies. Here we propose a computer-aided multifractal analysis of dynamic infrared (IR) imaging as an efficient method for identifying women at risk of breast cancer. Using a wavelet-based multi-scale method to analyze the temporal fluctuations of breast skin temperature collected from a panel of patients with diagnosed breast cancer and from female volunteers with healthy breasts, we show that the multifractal complexity of temperature fluctuations observed in healthy breasts is lost in mammary glands with malignant tumors. Besides its potential clinical impact, these results open new perspectives in the investigation of physiological changes that may precede anatomical alterations in breast cancer development. PMID:24860510

  2. Wavelet Based Method for Congestive Heart Failure Recognition by Three Confirmation Functions

    PubMed Central

    Daqrouq, K.; Dobaie, A.

    2016-01-01

    An investigation of electrocardiogram (ECG) signals and arrhythmia characterization by wavelet energy is proposed. This study employs a wavelet-based feature extraction method for congestive heart failure (CHF) based on the percentage energy (PE) of the terminal wavelet packet transform (WPT) subsignals. In addition, an average framing percentage energy (AFE) technique, termed WAFE, is proposed. A new classification method is introduced using three confirmation functions. The confirmation methods are based on three concepts: the percentage root mean square difference error (PRD), the logarithmic difference signal ratio (LDSR), and the correlation coefficient (CC). The proposed method proved to be a potentially effective discriminator for recognizing this clinical syndrome. ECG signals taken from the MIT-BIH arrhythmia dataset and other databases are utilized to analyze different arrhythmias and normal ECGs. Several known methods were studied for comparison. The best recognition rate was obtained with WAFE, reaching an accuracy of 92.60%. The receiver operating characteristic curve, a common tool for evaluating diagnostic accuracy, indicated that the tests are reliable. The performance of the presented system was also investigated in an additive white Gaussian noise (AWGN) environment, where the recognition rate was 81.48% at 5 dB. PMID:26949412
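
    The percentage-energy feature of the terminal wavelet packet nodes can be sketched with PyWavelets as below; the wavelet, decomposition depth, and synthetic signal are assumptions for illustration, and the framing step and confirmation functions are not shown.

    ```python
    # Sketch of the percentage-energy (PE) feature: energy of each terminal
    # wavelet packet node divided by the total energy.
    import numpy as np
    import pywt

    def wpt_percentage_energy(ecg, wavelet="db4", maxlevel=4):
        wp = pywt.WaveletPacket(data=ecg, wavelet=wavelet, maxlevel=maxlevel)
        nodes = wp.get_level(maxlevel, order="freq")          # terminal subsignals
        energies = np.array([np.sum(np.square(n.data)) for n in nodes])
        return 100.0 * energies / energies.sum()              # percentage energy

    rng = np.random.default_rng(1)
    ecg = np.sin(2 * np.pi * 1.2 * np.arange(2048) / 360.0) + 0.1 * rng.normal(size=2048)
    pe = wpt_percentage_energy(ecg)
    print(pe.round(2))
    ```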

  3. Application of wavelet-based neural network on DNA microarray data.

    PubMed

    Lee, Jack; Zee, Benny

    2008-01-01

    The advantage of using DNA microarray data when investigating human cancer gene expression is its ability to generate enormous amounts of information from a single assay, speeding up the scientific evaluation process. The large number of variables in the gene expression data, coupled with a comparatively small number of samples, creates new challenges for scientists and statisticians. In particular, the problems include an enormous degree of collinearity among gene expressions, likely violation of model assumptions, and a high level of noise with potential outliers. To deal with these problems, we propose a block wavelet shrinkage principal component analysis (BWSPCA) method to optimize the information retained during the noise reduction process. This paper first uses the National Cancer Institute database (NCI60) as an illustration and shows a significant improvement in dimension reduction. Second, we combine BWSPCA with an artificial neural network-based gene minimization strategy to establish a Block Wavelet-based Neural Network model (BWNN) for a robust and accurate cancer classification process. Our extensive experiments on six public cancer datasets have shown that the BWNN method for tumor classification performed well, especially on some difficult instances with large-class (more than two) expression data. The proposed method is extremely useful for data denoising and is competitive with other methods such as BagBoost, RandomForest (RanFor), Support Vector Machines (SVM), K-Nearest Neighbor (KNN), and Artificial Neural Networks (ANN). PMID:19255638
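
    The overall flow, wavelet shrinkage of each expression profile followed by principal component analysis, might look roughly like the following sketch; the wavelet, the universal soft threshold, and the absence of the block structure are simplifying assumptions, not the authors' exact BWSPCA procedure.

    ```python
    # Sketch: denoise each gene-expression profile by wavelet shrinkage, then
    # run PCA (via SVD) on the denoised matrix.
    import numpy as np
    import pywt

    def wavelet_shrink(row, wavelet="sym8", level=3):
        coeffs = pywt.wavedec(row, wavelet, level=level)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745            # noise scale estimate
        thr = sigma * np.sqrt(2 * np.log(len(row)))               # universal threshold
        coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
        return pywt.waverec(coeffs, wavelet)[: len(row)]

    def shrink_then_pca(X, n_components=5):
        """X: samples x genes expression matrix."""
        Xd = np.apply_along_axis(wavelet_shrink, 1, X)
        Xd = Xd - Xd.mean(axis=0)
        U, s, Vt = np.linalg.svd(Xd, full_matrices=False)         # PCA via SVD
        return Xd @ Vt[:n_components].T                           # component scores

    rng = np.random.default_rng(2)
    X = rng.normal(size=(60, 1024))                               # placeholder data
    print(shrink_then_pca(X).shape)
    ```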

  4. Performance evaluation of wavelet-based ECG compression algorithms for telecardiology application over CDMA network.

    PubMed

    Kim, Byung S; Yoo, Sun K

    2007-09-01

    The use of wireless networks is of great practical importance for the instantaneous transmission of ECG signals during movement. In this paper, three typical wavelet-based ECG compression algorithms, Rajoub (RA), Embedded Zerotree Wavelet (EZ), and Wavelet Transform Higher-Order Statistics Coding (WH), were evaluated to find an appropriate ECG compression algorithm for scalable and reliable wireless tele-cardiology applications, particularly over a CDMA network. The short-term and long-term performance characteristics of the three algorithms were analyzed using normal, abnormal, and measurement-noise-contaminated ECG signals from the MIT-BIH database. In addition to the processing delay measurement, compression efficiency and reconstruction sensitivity to error were also evaluated via simulation models, including a noise-free channel model, a random noise channel model, and a CDMA channel model, as well as over an actual CDMA network currently operating in Korea. This study found that the EZ algorithm achieves the best compression efficiency in a low-noise environment, and that the WH algorithm is competitive for use in high-error environments, although its short-term performance degrades with abnormal or contaminated ECG signals. PMID:17701824

  5. Selective error detection for error-resilient wavelet-based image coding.

    PubMed

    Karam, Lina J; Lam, Tuyet-Trang

    2007-12-01

    This paper introduces the concept of a similarity check function for error-resilient multimedia data transmission. The proposed similarity check function provides information about the effects of corrupted data on the quality of the reconstructed image. The degree of data corruption is measured by the similarity check function at the receiver, without explicit knowledge of the original source data. The design of a perceptual similarity check function is presented for wavelet-based coders such as the JPEG2000 standard, and used with a proposed "progressive similarity-based ARQ" (ProS-ARQ) scheme to significantly decrease the retransmission rate of corrupted data while maintaining very good visual quality of images transmitted over noisy channels. Simulation results with JPEG2000-coded images transmitted over the Binary Symmetric Channel, show that the proposed ProS-ARQ scheme significantly reduces the number of retransmissions as compared to conventional ARQ-based schemes. The presented results also show that, for the same number of retransmitted data packets, the proposed ProS-ARQ scheme can achieve significantly higher PSNR and better visual quality as compared to the selective-repeat ARQ scheme. PMID:18092593

  6. Wavelet-based multifractal analysis of dynamic infrared thermograms to assist in early breast cancer diagnosis

    PubMed Central

    Gerasimova, Evgeniya; Audit, Benjamin; Roux, Stephane G.; Khalil, André; Gileva, Olga; Argoul, Françoise; Naimark, Oleg; Arneodo, Alain

    2014-01-01

    Breast cancer is the most common type of cancer among women, and despite recent advances in the medical field, there are still some inherent limitations in the currently used screening techniques. The radiological interpretation of screening X-ray mammograms often leads to over-diagnosis and, as a consequence, to unnecessary traumatic and painful biopsies. Here we propose a computer-aided multifractal analysis of dynamic infrared (IR) imaging as an efficient method for identifying women at risk of breast cancer. Using a wavelet-based multi-scale method to analyze the temporal fluctuations of breast skin temperature collected from a panel of patients with diagnosed breast cancer and from female volunteers with healthy breasts, we show that the multifractal complexity of temperature fluctuations observed in healthy breasts is lost in mammary glands with malignant tumors. Besides its potential clinical impact, these results open new perspectives in the investigation of physiological changes that may precede anatomical alterations in breast cancer development. PMID:24860510

  7. Performance evaluation of wavelet-based face verification on a PDA recorded database

    NASA Astrophysics Data System (ADS)

    Sellahewa, Harin; Jassim, Sabah A.

    2006-05-01

    The rise of international terrorism and the rapid increase in fraud and identity theft have added urgency to the task of developing biometric-based person identification as a reliable alternative to conventional authentication methods. Human identification based on face images is a tough challenge in comparison to identification based on fingerprints or iris recognition. Yet, due to its unobtrusive nature, face recognition is the preferred method of identification for security-related applications. The success of such systems will depend on the support of massive infrastructures. Current mobile communication devices (3G smart phones) and PDAs are equipped with a camera that can capture both still images and streaming video clips, and with a touch-sensitive display panel. Besides convenience, such devices can provide an adequately secure infrastructure for sensitive and financial transactions by protecting against fraud and repudiation while ensuring accountability. Biometric authentication systems for mobile devices would have obvious advantages in conflict scenarios, when communication from beyond enemy lines is essential to save soldier and civilian lives. In areas of conflict or disaster, the luxury of fixed infrastructure is not available or has been destroyed. In this paper, we present a wavelet-based face verification scheme that has been specifically designed and implemented on a currently available PDA. We report on its performance on the benchmark audio-visual BANCA database and on a newly developed PDA-recorded audio-visual database that includes indoor and outdoor recordings.

  8. WCRP: a software development system for efficient wavelet-based image codec design

    NASA Astrophysics Data System (ADS)

    Bao, Yiliang; Wang, Houng-Jyh M.; Kuo, C.-C. Jay

    1998-10-01

    A wavelet-based image codec compresses an image with three major steps: discrete wavelet transform, quantization and entropy coding. There are many variants in each step. In this research, we consider a versatile software development system called the wavelet compression research platform (WCRP). WCRP provides a framework to host components of all compression steps. For each compression stage, multiple components are developed and they are contained in WCRP. They include a selection of floating-point and integer filter sets, different transform strategies, a set of quantizers and two different arithmetic coders. A codec can be easily formed by picking up components in different stages. WCRP provides an excellent tool to test the performance of various image codec designs. In addition, WCRP is an extensible system, i.e., new components available in the future can be easily incorporated and quickly tested. It makes the development of new algorithms much easier. WCRP has been used in developing a family of new quantization algorithms that are based on the concept of Binary Description of multi-level wavelet coding objects. These quantization schemes can serve different applications, such as progressive fidelity coding, lossless coding and low complexity coding. Both progressive fidelity coding and lossless coding performance of our codec are among the best in its class. A codec of low implementational complexity is made possible by our memory-scalable quantization scheme.

  9. Online Epileptic Seizure Prediction Using Wavelet-Based Bi-Phase Correlation of Electrical Signals Tomography.

    PubMed

    Vahabi, Zahra; Amirfattahi, Rasoul; Shayegh, Farzaneh; Ghassemi, Fahimeh

    2015-09-01

    Considerable efforts have been made to predict seizures. Among these, the methods that quantify synchronization between brain areas are the most important. However, to date, a practically acceptable result has not been reported. In this paper, we use a synchronization measurement method that is derived from the ability of the bi-spectrum to determine the nonlinear properties of a system. In this method, first, the temporal variations of the bi-spectrum of different channels of electrocorticography (ECoG) signals are obtained via an extended wavelet-based time-frequency analysis method; then, to compare different channels, the bi-phase correlation measure is introduced. Since, in this way, the temporal variation of the amount of nonlinear coupling between brain regions, which had not been considered before, is taken into account, the results are more reliable than conventional phase-synchronization measures. It is shown that, for 21 patients of the FSPEEG database, bi-phase correlation can discriminate the pre-ictal and ictal states with very low false positive rates (FPRs) (average: 0.078/h) and high sensitivity (100%). However, the proposed seizure predictor still cannot significantly outperform a random predictor for all patients. PMID:26126613

  10. An enhanced wavelet-based scheme for near lossless satellite image compression

    NASA Astrophysics Data System (ADS)

    Lin, Tsung-Ching; Chen, Chien-Wen; Chen, Shi-Huang; Truong, Trieu-Kien

    2009-08-01

    An enhanced wavelet-based compression scheme for satellite images is proposed in this paper. The Consultative Committee for Space Data Systems (CCSDS) presented a recommendation that utilizes the wavelet transform and a bit plane coder for satellite image compression. The bit plane coder used in the CCSDS recommendation encodes the coefficient blocks of the bit planes one by one and then truncates the unnecessary bit plane coefficient blocks. In this way, the contexts of the bit planes are not treated as redundant embedded data that could be compressed further. The proposed scheme uses a bit plane extractor to parse the differences between the original image data and its wavelet-transformed coefficients. The output of the bit plane extractor is encoded by a run-length coder and sent over the communication channel together with the CCSDS compressed data. Compared with the CCSDS recommendation, and at reasonable complexity, the subjective quality of the image is maintained or even improved. In addition, the bit rate can be further decreased to between 85% and 95% of that of the CCSDS image compression recommendation at a similar objective quality level. By using lower-bit-rate lossy compression together with bit plane compensation, it is possible to obtain a lower bit rate and a higher-quality image than higher-bit-rate lossy compression alone can achieve.

  11. Diagnostically lossless medical image compression via wavelet-based background noise removal

    NASA Astrophysics Data System (ADS)

    Qi, Xiaojun; Tyler, John M.; Pianykh, Oleg S.

    2000-04-01

    Diagnostically lossless compression techniques are essential in the archival and communication of medical images. In this paper, an automated wavelet-based background noise removal method, i.e., a diagnostically lossless compression method, is proposed. First, the wavelet transform modulus maxima procedure produces the modulus maxima image, which contains the sharp changes in intensity that are used to locate the edges of the image. Then the Graham scan algorithm is used to determine the convex hull of the wavelet modulus maxima image and to extract the foreground of the image, which contains the entire diagnostic region. Histogram analyses are applied to the non-diagnostic region, which is approximated by the part of the image outside the convex hull. After setting all pixels in the non-diagnostic region to zero intensity, a higher compression ratio, without any loss of data used for diagnosis, is achieved with the UNIX utilities compress and pack and with lossless JPEG. Furthermore, a smaller rectangular image containing the entire diagnostic region is constructed to further improve the achieved compression ratio.
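
    A much simplified sketch of the background-removal idea is shown below: strong wavelet detail responses stand in for the modulus maxima, their convex hull (tested via a Delaunay triangulation rather than a Graham scan) defines the foreground, and everything outside is set to zero. The wavelet, percentile threshold, and synthetic image are illustrative assumptions, not the paper's procedure.

    ```python
    # Sketch: zero out pixels outside the convex hull of strong wavelet edges.
    import numpy as np
    import pywt
    from scipy.spatial import Delaunay

    def zero_background(img, wavelet="haar", q=99.0):
        cA, (cH, cV, cD) = pywt.dwt2(img.astype(float), wavelet)
        mag = np.sqrt(cH**2 + cV**2 + cD**2)
        ys, xs = np.nonzero(mag > np.percentile(mag, q))     # strong edge responses
        pts = np.column_stack([ys, xs]) * 2                  # back to image coordinates
        hull = Delaunay(pts)                                  # triangulation covers the hull
        yy, xx = np.mgrid[0:img.shape[0], 0:img.shape[1]]
        inside = hull.find_simplex(np.column_stack([yy.ravel(), xx.ravel()])) >= 0
        out = img.copy()
        out[~inside.reshape(img.shape)] = 0                   # non-diagnostic region -> 0
        return out

    rng = np.random.default_rng(3)
    img = np.zeros((128, 128))
    img[30:100, 40:90] = 200 + rng.normal(0, 5, (70, 50))     # synthetic "diagnostic" region
    cleaned = zero_background(img)
    ```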

  12. A Wavelet-Based ECG Delineation Method: Adaptation to an Experimental Electrograms with Manifested Global Ischemia.

    PubMed

    Hejč, Jakub; Vítek, Martin; Ronzhina, Marina; Nováková, Marie; Kolářová, Jana

    2015-09-01

    We present a novel wavelet-based ECG delineation method with robust classification of the P wave and T wave. The work aims to adapt the method to long-term experimental electrograms (EGs) measured on isolated rabbit hearts and to evaluate the effect of global ischemia in experimental EGs on the delineation performance. The algorithm was tested on a set of 263 rabbit EGs with established reference points and on human signals from the Common Standards for Quantitative Electrocardiography Database (CSEDB). On CSEDB, the standard deviation (SD) of the measured errors satisfies the given criteria at each point, and the results are comparable to other published works. In rabbit signals, our QRS detector reached a sensitivity of 99.87% and a positive predictivity of 99.89%, despite the overlap of the spectral components of the QRS complex, P wave, and power line noise. The algorithm performs well in suppressing J-point elevation and reached a low overall error in both QRS onset (SD = 2.8 ms) and QRS offset (SD = 4.3 ms) delineation. The T wave offset is detected with acceptable error (SD = 12.9 ms) and a sensitivity of nearly 99%. The variance of the errors during global ischemia remains relatively stable; however, more failures in the detection of the T wave and P wave occur. Due to differences in spectral and timing characteristics, the parameters of the rabbit-based algorithm have to be highly adaptable and set more precisely than for human ECG signals to reach acceptable performance. PMID:26577367

  13. Incipient interturn fault diagnosis in induction machines using an analytic wavelet-based optimized Bayesian inference.

    PubMed

    Seshadrinath, Jeevanand; Singh, Bhim; Panigrahi, Bijaya Ketan

    2014-05-01

    Interturn fault diagnosis of induction machines has been discussed using various neural network-based techniques. The main challenge in such methods is the computational complexity due to the huge size of the network, and the need to prune a large number of parameters. In this paper, a nearly shift-insensitive complex wavelet-based probabilistic neural network (PNN) model, which has only a single parameter to be optimized, is proposed for interturn fault detection. The algorithm consists of two parts and runs in an iterative way. In the first part, the PNN structure determination is discussed, which finds the optimum size of the network using an orthogonal least squares regression algorithm, thereby reducing its size. In the second part, a Bayesian classifier fusion is recommended as an effective solution for deciding the machine condition. The testing accuracy, sensitivity, and specificity values are highest for the product-rule-based fusion scheme, which is obtained under load, supply, and frequency variations. The point of overfitting of the PNN is determined, which reduces its size without compromising performance. Moreover, a comparative evaluation against a traditional discrete wavelet transform-based method is presented to put the obtained results in context. PMID:24808044

  14. A wavelet-based image quality metric for the assessment of 3D synthesized views

    NASA Astrophysics Data System (ADS)

    Bosc, Emilie; Battisti, Federica; Carli, Marco; Le Callet, Patrick

    2013-03-01

    In this paper we present a novel image quality assessment technique for evaluating virtual synthesized views in the context of multi-view video. In particular, Free Viewpoint Videos are generated from uncompressed color views and their compressed associated depth maps by means of the View Synthesis Reference Software, provided by MPEG. Prior to the synthesis step, the original depth maps are encoded with different coding algorithms thus leading to the creation of additional artifacts in the synthesized views. The core of proposed wavelet-based metric is in the registration procedure performed to align the synthesized view and the original one, and in the skin detection that has been applied considering that the same distortion is more annoying if visible on human subjects rather than on other parts of the scene. The effectiveness of the metric is evaluated by analyzing the correlation of the scores obtained with the proposed metric with Mean Opinion Scores collected by means of subjective tests. The achieved results are also compared against those of well known objective quality metrics. The experimental results confirm the effectiveness of the proposed metric.

  15. Wavelet-based detection of abrupt changes in natural frequencies of time-variant systems

    NASA Astrophysics Data System (ADS)

    Dziedziech, K.; Staszewski, W. J.; Basu, B.; Uhl, T.

    2015-12-01

    Detection of abrupt changes in natural frequencies from vibration responses of time-variant systems is a challenging task due to the complex nature of physics involved. It is clear that the problem needs to be analysed in the combined time-frequency domain. The paper proposes an application of the input-output wavelet-based Frequency Response Function for this analysis. The major focus and challenge relate to ridge extraction of the above time-frequency characteristics. It is well known that classical ridge extraction procedures lead to ridges that are smooth. However, this property is not desired when abrupt changes in the dynamics are considered. The methods presented in the paper are illustrated using simulated and experimental multi-degree-of-freedom systems. The results are compared with the classical Frequency Response Function and with the output only analysis based on the wavelet auto-power response spectrum. The results show that the proposed method captures correctly the dynamics of the analysed time-variant systems.

  16. Wavelet-based double-difference seismic tomography with sparsity regularization

    NASA Astrophysics Data System (ADS)

    Fang, Hongjian; Zhang, Haijiang

    2014-11-01

    We have developed a wavelet-based double-difference (DD) seismic tomography method. Instead of solving for the velocity model itself, the new method inverts for its wavelet coefficients in the wavelet domain. This method takes advantage of the multiscale property of the wavelet representation and solves the model at different scales. A sparsity constraint is applied to the inversion system to make the set of wavelet coefficients of the velocity model sparse. This considers the fact that the background velocity variation is generally smooth and the inversion proceeds in a multiscale way with larger scale features resolved first and finer scale features resolved later, which naturally leads to the sparsity of the wavelet coefficients of the model. The method is both data- and model-adaptive because wavelet coefficients are non-zero in the regions where the model changes abruptly when they are well sampled by ray paths and the model is resolved from coarser to finer scales. An iteratively reweighted least squares procedure is adopted to solve the inversion system with the sparsity regularization. A synthetic test for an idealized fault zone model shows that the new method can better resolve the discontinuous boundaries of the fault zone and the velocity values are also better recovered compared to the original DD tomography method that uses the first-order Tikhonov regularization.
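
    As a toy illustration of the sparsity-regularized inversion in a wavelet domain (not the actual double-difference kernels or the authors' implementation), the sketch below inverts for the orthonormal Haar-wavelet coefficients of a blocky 1-D model with an iteratively reweighted least squares (IRLS) approximation to the L1 penalty; the operator G and all parameters are placeholders.

    ```python
    # Toy 1-D sketch: solve min ||G m - d||^2 + lam*||c||_1 with m = W^T c,
    # using IRLS over the wavelet coefficients c.
    import numpy as np

    def haar_matrix(n):
        """Orthonormal Haar analysis matrix, n a power of two."""
        if n == 1:
            return np.array([[1.0]])
        H = haar_matrix(n // 2)
        top = np.kron(H, [1.0, 1.0])
        bottom = np.kron(np.eye(n // 2), [1.0, -1.0])
        return np.vstack([top, bottom]) / np.sqrt(2.0)

    n = 64
    W = haar_matrix(n)                                    # analysis; synthesis is W.T
    m_true = np.zeros(n); m_true[20:28] = 1.0             # blocky (fault-zone-like) model
    rng = np.random.default_rng(4)
    G = rng.normal(size=(40, n))                          # toy "ray path" operator
    d = G @ m_true + 0.05 * rng.normal(size=40)

    A = G @ W.T                                           # acts on wavelet coefficients
    lam, c = 0.1, np.zeros(n)
    for _ in range(30):                                   # IRLS iterations
        R = np.diag(1.0 / (np.abs(c) + 1e-6))
        c = np.linalg.solve(A.T @ A + lam * R, A.T @ d)
    m_est = W.T @ c
    print(np.round(np.abs(m_est - m_true).max(), 3))
    ```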

  17. The Analysis of Surface EMG Signals with the Wavelet-Based Correlation Dimension Method

    PubMed Central

    Zhang, Yanyan; Wang, Jue

    2014-01-01

    Many attempts have been made to effectively improve a prosthetic system controlled by the classification of surface electromyographic (SEMG) signals. The development of methodologies to extract effective features still remains a primary challenge. Previous studies have demonstrated that SEMG signals have nonlinear characteristics. In this study, by combining nonlinear time series analysis and time-frequency domain methods, we propose the wavelet-based correlation dimension method to extract effective features from SEMG signals. The SEMG signals were first analyzed by the wavelet transform, and the correlation dimension was then calculated to obtain the features of the SEMG signals. These features were used as the input vectors of a Gustafson-Kessel clustering classifier to discriminate four types of forearm movements. Our results showed that there are four separate clusters corresponding to the different forearm movements at the third resolution level, and the resulting classification accuracy was 100% when two channels of SEMG signals were used. This indicates that the proposed approach can provide important insight into the nonlinear characteristics and the time-frequency domain features of SEMG signals and is suitable for classifying different types of forearm movements. Compared with other existing methods, the proposed method exhibited more robustness and higher classification accuracy. PMID:24868240

  18. The analysis of surface EMG signals with the wavelet-based correlation dimension method.

    PubMed

    Wang, Gang; Zhang, Yanyan; Wang, Jue

    2014-01-01

    Many attempts have been made to effectively improve a prosthetic system controlled by the classification of surface electromyographic (SEMG) signals. The development of methodologies to extract effective features still remains a primary challenge. Previous studies have demonstrated that SEMG signals have nonlinear characteristics. In this study, by combining nonlinear time series analysis and time-frequency domain methods, we propose the wavelet-based correlation dimension method to extract effective features from SEMG signals. The SEMG signals were first analyzed by the wavelet transform, and the correlation dimension was then calculated to obtain the features of the SEMG signals. These features were used as the input vectors of a Gustafson-Kessel clustering classifier to discriminate four types of forearm movements. Our results showed that there are four separate clusters corresponding to the different forearm movements at the third resolution level, and the resulting classification accuracy was 100% when two channels of SEMG signals were used. This indicates that the proposed approach can provide important insight into the nonlinear characteristics and the time-frequency domain features of SEMG signals and is suitable for classifying different types of forearm movements. Compared with other existing methods, the proposed method exhibited more robustness and higher classification accuracy. PMID:24868240

  19. Wavelet-based decomposition and analysis of structural patterns in astronomical images

    NASA Astrophysics Data System (ADS)

    Mertens, Florent; Lobanov, Andrei

    2015-02-01

    Context. Images of spatially resolved astrophysical objects contain a wealth of morphological and dynamical information, and effectively extracting this information is of paramount importance for understanding the physics and evolution of these objects. The algorithms and methods currently employed for this purpose (such as Gaussian model fitting) often use simplified approaches to describe the structure of resolved objects. Aims: Automated (unsupervised) methods for structure decomposition and tracking of structural patterns are needed for this purpose to be able to treat the complexity of structure and large amounts of data involved. Methods: We developed a new wavelet-based image segmentation and evaluation (WISE) method for multiscale decomposition, segmentation, and tracking of structural patterns in astronomical images. Results: The method was tested against simulated images of relativistic jets and applied to data from long-term monitoring of parsec-scale radio jets in 3C 273 and 3C 120. Working at its coarsest resolution, WISE reproduces the previous results of a model-fitting evaluation of the structure and kinematics in these jets exceptionally well. Extending the WISE structure analysis to fine scales provides the first robust measurements of two-dimensional velocity fields in these jets and indicates that the velocity fields probably reflect the evolution of Kelvin-Helmholtz instabilities that develop in the flow.

  20. Wavelet-Based ECG Steganography for Protecting Patient Confidential Information in Point-of-Care Systems.

    PubMed

    Ibaida, Ayman; Khalil, Ibrahim

    2013-12-01

    With a growing aging population, a significant portion of which suffers from cardiac diseases, it is conceivable that remote ECG patient monitoring systems will be widely used as point-of-care (PoC) applications in hospitals around the world. Therefore, huge amounts of ECG signals collected by body sensor networks from remote patients at home will be transmitted along with other physiological readings, such as blood pressure, temperature, and glucose level, and diagnosed by those remote patient monitoring systems. It is vitally important that patient confidentiality is protected while data are being transmitted over the public network as well as when they are stored in the hospital servers used by remote monitoring systems. In this paper, a wavelet-based steganography technique is introduced that combines encryption and a scrambling technique to protect patient confidential data. The proposed method allows the ECG signal to hide its corresponding patient confidential data and other physiological information, thus guaranteeing that the ECG and the other data stay integrated. To evaluate the effect of the proposed technique on the ECG signal, two distortion measurement metrics have been used: the percentage residual difference and the wavelet weighted PRD. It is found that the proposed technique provides high-security protection for patient data with low (less than 1%) distortion, and the ECG data remain diagnosable after watermarking (i.e., hiding the patient confidential data) as well as after the watermarks (i.e., the hidden data) are removed from the watermarked data. PMID:23708767
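
    The two distortion metrics named above can be sketched as follows; the subband-energy weighting used here for the wavelet weighted PRD is an assumption for illustration and may differ from the exact definition used in the paper.

    ```python
    # Sketch: percentage residual difference (PRD) and a wavelet-weighted PRD
    # in which each subband's PRD is weighted by its share of signal energy.
    import numpy as np
    import pywt

    def prd(x, y):
        return 100.0 * np.sqrt(np.sum((x - y) ** 2) / np.sum(x ** 2))

    def wavelet_weighted_prd(x, y, wavelet="db4", level=5):
        cx = pywt.wavedec(x, wavelet, level=level)
        cy = pywt.wavedec(y, wavelet, level=level)
        energies = np.array([np.sum(c ** 2) for c in cx])
        weights = energies / energies.sum()
        return float(sum(w * prd(a, b) for w, a, b in zip(weights, cx, cy)))

    rng = np.random.default_rng(5)
    ecg = np.sin(2 * np.pi * np.arange(4096) / 360.0)
    watermarked = ecg + 0.002 * rng.normal(size=ecg.size)     # stego-induced distortion
    print(prd(ecg, watermarked), wavelet_weighted_prd(ecg, watermarked))
    ```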

  1. Dimensionality reduction for density ratio estimation in high-dimensional spaces.

    PubMed

    Sugiyama, Masashi; Kawanabe, Motoaki; Chui, Pui Ling

    2010-01-01

    The ratio of two probability density functions is becoming a quantity of interest these days in the machine learning and data mining communities since it can be used for various data processing tasks such as non-stationarity adaptation, outlier detection, and feature selection. Recently, several methods have been developed for directly estimating the density ratio without going through density estimation and were shown to work well in various practical problems. However, these methods still perform rather poorly when the dimensionality of the data domain is high. In this paper, we propose to incorporate a dimensionality reduction scheme into a density-ratio estimation procedure and experimentally show that the estimation accuracy in high-dimensional cases can be improved. PMID:19631506
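
    For orientation, a direct least-squares density-ratio fit with Gaussian basis functions (in the spirit of estimators such as uLSIF, not the authors' specific procedure, and without the proposed dimensionality-reduction step) could look like the sketch below; the kernel width, regularization, and synthetic data are assumptions.

    ```python
    # Sketch: direct estimation of w(x) = p_nu(x) / p_de(x) by a regularized
    # least-squares fit over Gaussian basis functions.
    import numpy as np

    def fit_density_ratio(x_nu, x_de, n_basis=50, sigma=1.0, lam=0.1, seed=0):
        rng = np.random.default_rng(seed)
        centers = x_nu[rng.choice(len(x_nu), size=min(n_basis, len(x_nu)), replace=False)]

        def kernel(x):
            d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
            return np.exp(-d2 / (2.0 * sigma ** 2))

        Phi_de, Phi_nu = kernel(x_de), kernel(x_nu)
        H = Phi_de.T @ Phi_de / len(x_de)
        h = Phi_nu.mean(axis=0)
        alpha = np.linalg.solve(H + lam * np.eye(len(centers)), h)
        return np.maximum(Phi_nu @ alpha, 0.0)                # ratio values, clipped at 0

    rng = np.random.default_rng(6)
    x_nu = rng.normal(0.0, 1.0, size=(500, 2))                # numerator samples
    x_de = rng.normal(0.5, 1.5, size=(500, 2))                # denominator samples
    print(fit_density_ratio(x_nu, x_de)[:5])
    ```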

  2. In-Shell Bulk Density as an Estimator of Farmers Stock Grade Factors

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The objective of this research was to determine whether or not bulk density can be used to accurately estimate farmer stock grade factors such as total sound mature kernels and other kernels. Physical properties including bulk density, pod size and kernel size distributions are measured as part of t...

  3. Body Density Estimates from Upper-Body Skinfold Thicknesses Compared to Air-Displacement Plethysmography

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Technical Summary Objectives: Determine the effect of body mass index (BMI) on the accuracy of body density (Db) estimated with skinfold thickness (SFT) measurements compared to air displacement plethysmography (ADP) in adults. Subjects/Methods: We estimated Db with SFT and ADP in 131 healthy men an...

  4. Item Response Theory with Estimation of the Latent Density Using Davidian Curves

    ERIC Educational Resources Information Center

    Woods, Carol M.; Lin, Nan

    2009-01-01

    Davidian-curve item response theory (DC-IRT) is introduced, evaluated with simulations, and illustrated using data from the Schedule for Nonadaptive and Adaptive Personality Entitlement scale. DC-IRT is a method for fitting unidimensional IRT models with maximum marginal likelihood estimation, in which the latent density is estimated,…

  5. Item Response Theory with Estimation of the Latent Density Using Davidian Curves

    ERIC Educational Resources Information Center

    Woods, Carol M.; Lin, Nan

    2009-01-01

    Davidian-curve item response theory (DC-IRT) is introduced, evaluated with simulations, and illustrated using data from the Schedule for Nonadaptive and Adaptive Personality Entitlement scale. DC-IRT is a method for fitting unidimensional IRT models with maximum marginal likelihood estimation, in which the latent density is estimated,

  6. Nonparametric maximum likelihood estimation of probability densities by penalty function methods

    NASA Technical Reports Server (NTRS)

    Demontricher, G. F.; Tapia, R. A.; Thompson, J. R.

    1974-01-01

    Unless it is known a priori exactly to which finite-dimensional manifold the probability density function giving rise to a set of samples belongs, the parametric maximum likelihood estimation procedure leads to poor estimates and is unstable, while the nonparametric maximum likelihood procedure is undefined. A very general theory of maximum penalized likelihood estimation, which should avoid many of these difficulties, is presented. It is demonstrated that each reproducing kernel Hilbert space leads, in a very natural way, to a maximum penalized likelihood estimator, and that a well-known class of reproducing kernel Hilbert spaces gives polynomial splines as the nonparametric maximum penalized likelihood estimates.

  7. Sensitivity of fish density estimates to standard analytical procedures applied to Great Lakes hydroacoustic data

    USGS Publications Warehouse

    Kocovsky, Patrick M.; Rudstam, Lars G.; Yule, Daniel L.; Warner, David M.; Schaner, Ted; Pientka, Bernie; Deller, John W.; Waterfield, Holly A.; Witzel, Larry D.; Sullivan, Patrick J.

    2013-01-01

    Standardized methods of data collection and analysis ensure quality and facilitate comparisons among systems. We evaluated the importance of three recommendations from the Standard Operating Procedure for hydroacoustics in the Laurentian Great Lakes (GLSOP) on density estimates of target species: noise subtraction; setting volume backscattering strength (Sv) thresholds from user-defined minimum target strength (TS) of interest (TS-based Sv threshold); and calculations of an index for multiple targets (Nv index) to identify and remove biased TS values. Eliminating noise had the predictable effect of decreasing density estimates in most lakes. Using the TS-based Sv threshold decreased fish densities in the middle and lower layers in the deepest lakes with abundant invertebrates (e.g., Mysis diluviana). Correcting for biased in situ TS increased measured density up to 86% in the shallower lakes, which had the highest fish densities. The current recommendations by the GLSOP significantly influence acoustic density estimates, but the degree of importance is lake dependent. Applying GLSOP recommendations, whether in the Laurentian Great Lakes or elsewhere, will improve our ability to compare results among lakes. We recommend further development of standards, including minimum TS and analytical cell size, for reducing the effect of biased in situ TS on density estimates.

  8. Wavelet-based clustering of resting state MRI data in the rat.

    PubMed

    Medda, Alessio; Hoffmann, Lukas; Magnuson, Matthew; Thompson, Garth; Pan, Wen-Ju; Keilholz, Shella

    2016-01-01

    While functional connectivity has typically been calculated over the entire length of the scan (5-10min), interest has been growing in dynamic analysis methods that can detect changes in connectivity on the order of cognitive processes (seconds). Previous work with sliding window correlation has shown that changes in functional connectivity can be observed on these time scales in the awake human and in anesthetized animals. This exciting advance creates a need for improved approaches to characterize dynamic functional networks in the brain. Previous studies were performed using sliding window analysis on regions of interest defined based on anatomy or obtained from traditional steady-state analysis methods. The parcellation of the brain may therefore be suboptimal, and the characteristics of the time-varying connectivity between regions are dependent upon the length of the sliding window chosen. This manuscript describes an algorithm based on wavelet decomposition that allows data-driven clustering of voxels into functional regions based on temporal and spectral properties. Previous work has shown that different networks have characteristic frequency fingerprints, and the use of wavelets ensures that both the frequency and the timing of the BOLD fluctuations are considered during the clustering process. The method was applied to resting state data acquired from anesthetized rats, and the resulting clusters agreed well with known anatomical areas. Clusters were highly reproducible across subjects. Wavelet cross-correlation values between clusters from a single scan were significantly higher than the values from randomly matched clusters that shared no temporal information, indicating that wavelet-based analysis is sensitive to the relationship between areas. PMID:26481903
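
    One way to make the wavelet-based voxel clustering concrete is the toy sketch below: each voxel time course is reduced to its per-scale wavelet energy fractions and the voxels are grouped with k-means. The wavelet, level, clustering algorithm (plain k-means rather than the authors' approach), and synthetic data are all assumptions for illustration.

    ```python
    # Sketch: cluster voxel time courses on wavelet per-scale energy features.
    import numpy as np
    import pywt
    from scipy.cluster.vq import kmeans2

    def wavelet_energy_features(ts, wavelet="db4", level=4):
        coeffs = pywt.wavedec(ts, wavelet, level=level)
        e = np.array([np.sum(c ** 2) for c in coeffs])
        return e / e.sum()                                    # per-scale energy fractions

    rng = np.random.default_rng(7)
    slow = np.sin(2 * np.pi * 0.02 * np.arange(300))          # two synthetic "networks"
    fast = np.sin(2 * np.pi * 0.10 * np.arange(300))
    voxels = np.vstack([slow + 0.3 * rng.normal(size=300) for _ in range(50)] +
                       [fast + 0.3 * rng.normal(size=300) for _ in range(50)])
    features = np.vstack([wavelet_energy_features(v) for v in voxels])
    centroids, labels = kmeans2(features, k=2, minit="++")
    print(labels[:50].sum(), labels[50:].sum())               # the two groups should separate
    ```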

  9. An Undecimated Wavelet-based Method for Cochlear Implant Speech Processing

    PubMed Central

    Hajiaghababa, Fatemeh; Kermani, Saeed; Marateb, Hamid R.

    2014-01-01

    A cochlear implant is an implanted electronic device used to provide a sensation of hearing to a person who is hard of hearing; it is often referred to as a bionic ear. This paper presents an undecimated wavelet-based speech coding strategy for cochlear implants, which gives a novel speech processing strategy. The undecimated wavelet packet transform (UWPT) is computed like the wavelet packet transform except that it does not down-sample the output at each level. The speech data used for the current study consist of 30 consonants, sampled at 16 kbps. The performance of the proposed UWPT method was compared to that of an infinite impulse response (IIR) filter-bank in terms of the mean opinion score (MOS), the short-time objective intelligibility (STOI) measure, and the segmental signal-to-noise ratio (SNR). The undecimated wavelet gave better segmental SNR for about 96% of the input speech data, and the MOS of the proposed method was twice that of the IIR filter-bank. The statistical analysis revealed that the UWPT-based N-of-M strategy significantly improved the MOS, STOI, and segmental SNR (P < 0.001) compared with those obtained with the IIR filter-bank-based strategies. The advantage of the UWPT is that it is shift-invariant, which gives a dense approximation to the continuous wavelet transform. Thus, the information loss is minimal, which is why the UWPT performed better than traditional filter-bank strategies in speech recognition tests. The results showed that the UWPT could be a promising method for speech coding in cochlear implants, although its computational complexity is higher than that of traditional filter-banks. PMID:25426428

  10. Wavelet-based compression of medical images: filter-bank selection and evaluation.

    PubMed

    Saffor, A; bin Ramli, A R; Ng, K H

    2003-06-01

    Wavelet-based image coding algorithms (lossy and lossless) use a fixed perfect-reconstruction filter-bank built into the algorithm for coding and decoding of images. However, no systematic study has been performed to evaluate the coding performance of wavelet filters on medical images. We evaluated which types of filters are best suited to medical images in terms of providing a low bit rate and low computational complexity. In this study a variety of wavelet filters were used to compress and decompress computed tomography (CT) brain and abdomen images. We applied two-dimensional wavelet decomposition, quantization, and reconstruction using several families of filter banks to a set of CT images. The Discrete Wavelet Transform (DWT), which provides an efficient framework for multi-resolution frequency analysis, was used. Compression was accomplished by applying threshold values to the wavelet coefficients. Statistical indices such as the mean square error (MSE), maximum absolute error (MAE), and peak signal-to-noise ratio (PSNR) were used to quantify the effect of wavelet compression on the selected images. The code was written using the wavelet and image processing toolboxes of MATLAB (version 6.1). The results show that no specific wavelet filter performs uniformly better than the others, except for the Daubechies and biorthogonal filters, which are the best among all. MAE values achieved by these filters were 5 x 10(-14) to 12 x 10(-14) for both CT brain and abdomen images at different decomposition levels, indicating that with these filters a very small error (approximately 7 x 10(-14)) can be achieved between the original and the filtered image. The PSNR values obtained were higher for the brain than for the abdomen images. For both lossy and lossless compression, the 'most appropriate' wavelet filter should be chosen adaptively, depending on the statistical properties of the image being coded, to achieve a higher compression ratio. PMID:12956184
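
    The evaluation loop described above can be sketched with PyWavelets as follows: decompose, threshold, reconstruct, and score with MSE, MAE, and PSNR for several filters. The synthetic image, filter list, and keep-fraction threshold are placeholders, not the study's settings.

    ```python
    # Sketch: compare wavelet filters by thresholded compression quality.
    import numpy as np
    import pywt

    def compress_and_score(img, wavelet, level=3, keep=0.05):
        coeffs = pywt.wavedec2(img, wavelet, level=level)
        arr, slices = pywt.coeffs_to_array(coeffs)
        thr = np.quantile(np.abs(arr), 1.0 - keep)            # keep the largest 5% of coeffs
        arr = pywt.threshold(arr, thr, mode="hard")
        rec = pywt.waverec2(pywt.array_to_coeffs(arr, slices, output_format="wavedec2"),
                            wavelet)[: img.shape[0], : img.shape[1]]
        mse = np.mean((img - rec) ** 2)
        mae = np.max(np.abs(img - rec))                       # maximum absolute error
        psnr = 10 * np.log10(img.max() ** 2 / mse) if mse > 0 else np.inf
        return mse, mae, psnr

    rng = np.random.default_rng(8)
    img = rng.normal(100, 20, size=(256, 256))                # stand-in for a CT slice
    for w in ["db4", "bior4.4", "sym8", "coif3"]:
        print(w, compress_and_score(img, w))
    ```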

  11. An Undecimated Wavelet-based Method for Cochlear Implant Speech Processing.

    PubMed

    Hajiaghababa, Fatemeh; Kermani, Saeed; Marateb, Hamid R

    2014-10-01

    A cochlear implant is an implanted electronic device used to provide a sensation of hearing to a person who is hard of hearing. The cochlear implant is often referred to as a bionic ear. This paper presents an undecimated wavelet-based speech coding strategy for cochlear implants, which gives a novel speech processing strategy. The undecimated wavelet packet transform (UWPT) is computed like the wavelet packet transform except that it does not down-sample the output at each level. The speech data used for the current study consists of 30 consonants, sampled at 16 kbps. The performance of our proposed UWPT method was compared to that of infinite impulse response (IIR) filter in terms of mean opinion score (MOS), short-time objective intelligibility (STOI) measure and segmental signal-to-noise ratio (SNR). Undecimated wavelet had better segmental SNR in about 96% of the input speech data. The MOS of the proposed method was twice in comparison with that of the IIR filter-bank. The statistical analysis revealed that the UWT-based N-of-M strategy significantly improved the MOS, STOI and segmental SNR (P < 0.001) compared with what obtained with the IIR filter-bank based strategies. The advantage of UWPT is that it is shift-invariant which gives a dense approximation to continuous wavelet transform. Thus, the information loss is minimal and that is why the UWPT performance was better than that of traditional filter-bank strategies in speech recognition tests. Results showed that the UWPT could be a promising method for speech coding in cochlear implants, although its computational complexity is higher than that of traditional filter-banks. PMID:25426428

  12. On the Use of Adaptive Wavelet-based Methods for Ocean Modeling and Data Assimilation Problems

    NASA Astrophysics Data System (ADS)

    Vasilyev, Oleg V.; Yousuff Hussaini, M.; Souopgui, Innocent

    2014-05-01

    Latest advancements in parallel wavelet-based numerical methodologies for the solution of partial differential equations, combined with the unique properties of wavelet analysis to unambiguously identify and isolate localized, dynamically dominant flow structures, make it feasible to start developing integrated approaches for ocean modeling and data assimilation problems that take advantage of temporally and spatially varying meshes. In this talk the Parallel Adaptive Wavelet Collocation Method with spatially and temporally varying thresholding is presented, and the feasibility and potential advantages of its use for ocean modeling are discussed. The second half of the talk focuses on the recently developed Simultaneous Space-time Adaptive approach, which addresses one of the main challenges of variational data assimilation, namely the requirement to have a forward solution available when solving the adjoint problem. The issue is addressed by concurrently solving the forward and adjoint problems in the entire space-time domain on a near-optimal adaptive computational mesh that automatically adapts to the spatio-temporal structures of the solution. The compressed space-time form of the solution eliminates the need to save or recompute the forward solution for every time slice, as is typically done in traditional time-marching variational data assimilation approaches. The simultaneous spatio-temporal discretization of both the forward and the adjoint problems makes it possible to solve them concurrently on the same space-time adaptive computational mesh, reducing the amount of saved data to the strict minimum for a given a priori controlled accuracy of the solution. The simultaneous space-time adaptive approach to variational data assimilation is demonstrated for the advection-diffusion problem in 1D-t and 2D-t dimensions.

  13. Sea ice density estimation in the Bohai Sea using the hyperspectral remote sensing technology

    NASA Astrophysics Data System (ADS)

    Liu, Chengyu; Shao, Honglan; Xie, Feng; Wang, Jianyu

    2014-11-01

    Sea ice density is one of the significant physical properties of sea ice and an input parameter in the estimation of engineering mechanical strength and aerodynamic drag coefficients; it is also an important indicator of ice age. Sea ice in the Bohai Sea is a solid, liquid, and gas-phase mixture composed of pure ice, brine pockets, and bubbles, and its density is mainly affected by the amounts of brine pockets and bubbles: the more brine pockets it contains, the greater the sea ice density; the more bubbles it contains, the smaller the density. The reflectance spectra in the 350-2500 nm range and the density of sea ice of different thicknesses and ages were measured in the Liaodong Bay of the Bohai Sea during the period of maximum ice cover in the winter of 2012-2013. From the measured sea ice density and reflectance spectra, the characteristic bands that reflect the sea ice density variation were found, and a sea ice density spectrum index (SIDSI) for the sea ice in the Bohai Sea was constructed. Finally, an inversion model of sea ice density in the Bohai Sea, referring to the layer from the surface down to the light penetration depth, was proposed. The sea ice density in the Bohai Sea was then estimated with the proposed model from a Hyperion hyperspectral image. The results show that the error of the sea ice density inversion model is about 0.0004 g·cm-3. Sea ice density can thus be estimated from hyperspectral remote sensing images, which provides data support for related marine science research and applications.

  14. Cetacean population density estimation from single fixed sensors using passive acoustics.

    PubMed

    Küsel, Elizabeth T; Mellinger, David K; Thomas, Len; Marques, Tiago A; Moretti, David; Ward, Jessica

    2011-06-01

    Passive acoustic methods are increasingly being used to estimate animal population density. Most density estimation methods are based on estimates of the probability of detecting calls as a function of distance. Typically these are obtained using receivers capable of localizing calls or from studies of tagged animals. However, both approaches are expensive to implement. The approach described here uses a Monte Carlo model to estimate the probability of detecting calls from single sensors. The passive sonar equation is used to predict the signal-to-noise ratios (SNRs) of received clicks, which are then combined with a detector characterization that predicts the probability of detection as a function of SNR. Input distributions for source level, beam pattern, and whale depth are obtained from the literature. Acoustic propagation modeling is used to estimate transmission loss. Other inputs for density estimation are the call rate, obtained from the literature, and the false positive rate, obtained from manual analysis of a data sample. The method is applied to estimate the density of Blainville's beaked whales over a 6-day period around a single hydrophone located in the Tongue of the Ocean, Bahamas. The results are consistent with those from previous analyses, which used additional tag data. PMID:21682386
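
    The Monte Carlo step can be sketched as follows: draw source level, range, and noise level, compute the received SNR from the passive sonar equation with simple spherical spreading, and average a detector characteristic to get the probability of detecting a click at a single sensor. All input distributions and the logistic detector curve are placeholders, not the study's measured inputs.

    ```python
    # Sketch: single-sensor detection probability via the passive sonar equation.
    import numpy as np

    rng = np.random.default_rng(9)
    n = 100_000
    source_level = rng.normal(200.0, 5.0, n)          # dB re 1 uPa at 1 m (assumed)
    noise_level = rng.normal(70.0, 3.0, n)            # band noise level in dB (assumed)
    r = rng.uniform(100.0, 8000.0, n)                 # slant range in metres (assumed)
    transmission_loss = 20.0 * np.log10(r)            # spherical spreading only
    snr = source_level - transmission_loss - noise_level

    def p_detect(snr_db, snr50=12.0, slope=0.8):
        """Placeholder detector characterization: detection probability vs SNR."""
        return 1.0 / (1.0 + np.exp(-slope * (snr_db - snr50)))

    print("mean P(detect click) =", p_detect(snr).mean())
    ```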

  15. Spatial capture-recapture models for jointly estimating population density and landscape connectivity

    USGS Publications Warehouse

    Royle, J. Andrew; Chandler, Richard B.; Gazenski, Kimberly D.; Graves, Tabitha A.

    2013-01-01

    Population size and landscape connectivity are key determinants of population viability, yet no methods exist for simultaneously estimating density and connectivity parameters. Recently developed spatial capture–recapture (SCR) models provide a framework for estimating density of animal populations but thus far have not been used to study connectivity. Rather, all applications of SCR models have used encounter probability models based on the Euclidean distance between traps and animal activity centers, which implies that home ranges are stationary, symmetric, and unaffected by landscape structure. In this paper we devise encounter probability models based on “ecological distance,” i.e., the least-cost path between traps and activity centers, which is a function of both Euclidean distance and animal movement behavior in resistant landscapes. We integrate least-cost path models into a likelihood-based estimation scheme for spatial capture–recapture models in order to estimate population density and parameters of the least-cost encounter probability model. Therefore, it is possible to make explicit inferences about animal density, distribution, and landscape connectivity as it relates to animal movement from standard capture–recapture data. Furthermore, a simulation study demonstrated that ignoring landscape connectivity can result in negatively biased density estimators under the naive SCR model.
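
    The "ecological distance" ingredient can be illustrated with a small sketch: least-cost path lengths from a trap to every cell of a resistance raster, computed with Dijkstra's algorithm on a 4-neighbour grid graph. The resistance surface and the edge-cost rule (mean of the two cell resistances) are assumptions for illustration; the SCR likelihood itself is not shown.

    ```python
    # Sketch: least-cost ("ecological") distances over a resistance raster.
    import numpy as np
    from scipy.sparse import lil_matrix
    from scipy.sparse.csgraph import dijkstra

    def least_cost_distances(resistance, trap_rc):
        nr, nc = resistance.shape
        idx = lambda r, c: r * nc + c
        g = lil_matrix((nr * nc, nr * nc))
        for r in range(nr):
            for c in range(nc):
                for dr, dc in ((0, 1), (1, 0)):
                    rr, cc = r + dr, c + dc
                    if rr < nr and cc < nc:
                        w = 0.5 * (resistance[r, c] + resistance[rr, cc])
                        g[idx(r, c), idx(rr, cc)] = w
                        g[idx(rr, cc), idx(r, c)] = w
        d = dijkstra(g.tocsr(), indices=idx(*trap_rc))
        return d.reshape(nr, nc)

    resistance = np.ones((30, 30)); resistance[:, 15] = 10.0   # a costly "ridge"
    ecodist = least_cost_distances(resistance, trap_rc=(5, 5))
    print(ecodist[5, 25], ecodist[25, 5])                      # crossing the ridge costs more
    ```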

  16. Effect of compression paddle tilt correction on volumetric breast density estimation

    NASA Astrophysics Data System (ADS)

    Kallenberg, Michiel G. J.; van Gils, Carla H.; Lokate, Mariëtte; den Heeten, Gerard J.; Karssemeijer, Nico

    2012-08-01

    For the acquisition of a mammogram, a breast is compressed between a compression paddle and a support table. When compression is applied with a flexible compression paddle, the upper plate may be tilted, which results in variation in breast thickness from the chest wall to the breast margin. Paddle tilt has been recognized as a major problem in volumetric breast density estimation methods. In previous work, we developed a fully automatic method to correct the image for the effect of compression paddle tilt. In this study, we investigated in three experiments the effect of paddle tilt and its correction on volumetric breast density estimation. Results showed that paddle tilt considerably affected accuracy of volumetric breast density estimation, but that effect could be reduced by tilt correction. By applying tilt correction, a significant increase in correspondence between mammographic density estimates and measurements on MRI was established. We argue that in volumetric breast density estimation, tilt correction is both feasible and essential when mammographic images are acquired with a flexible compression paddle.

  17. An analytic model of toroidal half-wave oscillations: Implication on plasma density estimates

    NASA Astrophysics Data System (ADS)

    Bulusu, Jayashree; Sinha, A. K.; Vichare, Geeta

    2015-06-01

    The developed analytic model for toroidal oscillations under infinitely conducting ionosphere ("Rigid-end") has been extended to "Free-end" case when the conjugate ionospheres are infinitely resistive. The present direct analytic model (DAM) is the only analytic model that provides the field line structures of electric and magnetic field oscillations associated with the "Free-end" toroidal wave for generalized plasma distribution characterized by the power law ρ = ρo(ro/r)m, where m is the density index and r is the geocentric distance to the position of interest on the field line. This is important because different regions in the magnetosphere are characterized by different m. Significant improvement over standard WKB solution and an excellent agreement with the numerical exact solution (NES) affirms validity and advancement of DAM. In addition, we estimate the equatorial ion number density (assuming H+ atom as the only species) using DAM, NES, and standard WKB for Rigid-end as well as Free-end case and illustrate their respective implications in computing ion number density. It is seen that WKB method overestimates the equatorial ion density under Rigid-end condition and underestimates the same under Free-end condition. The density estimates through DAM are far more accurate than those computed through WKB. The earlier analytic estimates of ion number density were restricted to m = 6, whereas DAM can account for generalized m while reproducing the density for m = 6 as envisaged by earlier models.

  18. Estimation of tiger densities in India using photographic captures and recaptures

    USGS Publications Warehouse

    Karanth, U.; Nichols, J.D.

    1998-01-01

    Previously applied methods for estimating tiger (Panthera tigris) abundance using total counts based on tracks have proved unreliable. In this paper we use a field method proposed by Karanth (1995), combining camera-trap photography to identify individual tigers based on stripe patterns with capture-recapture estimators. We developed a sampling design for camera-trapping and used the approach to estimate tiger population size and density in four representative tiger habitats in different parts of India. The field method worked well and provided data suitable for analysis using closed capture-recapture models. The results suggest the potential for applying this methodology to estimating abundances, survival rates, and other population parameters in tigers and other low-density, secretive animal species with distinctive coat patterns or other external markings. Estimated probabilities of photo-capturing tigers present in the study sites ranged from 0.75 to 1.00. The estimated mean tiger densities ranged from 4.1 (SE = 1.31) to 11.7 (SE = 1.93) tigers/100 km2. The results support the previous suggestions of Karanth and Sunquist (1995) that densities of tigers and other large felids may be primarily determined by prey community structure at a given site.
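
    For intuition only, the capture-recapture logic can be illustrated with the simplest closed-population estimator (the Chapman form of Lincoln-Petersen) applied to two photo-capture occasions, followed by a naive density; the capture counts and trapping area below are made up, and the real analysis used multi-occasion closed-population models rather than this two-sample estimator.

    ```python
    # Sketch: Chapman estimator of abundance from two capture occasions.
    def chapman_estimate(n1, n2, m2):
        """n1, n2: animals caught on occasions 1 and 2; m2: recaptures on occasion 2."""
        n_hat = (n1 + 1) * (n2 + 1) / (m2 + 1) - 1
        var = ((n1 + 1) * (n2 + 1) * (n1 - m2) * (n2 - m2)
               / ((m2 + 1) ** 2 * (m2 + 2)))
        return n_hat, var ** 0.5

    n_hat, se = chapman_estimate(n1=12, n2=10, m2=5)          # made-up capture counts
    effective_area_km2 = 200.0                                 # assumed trapping area
    print(round(n_hat, 1), round(se, 1),
          round(100 * n_hat / effective_area_km2, 2), "tigers/100 km2")
    ```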

  19. Estimating detection and density of the Andean cat in the high Andes

    USGS Publications Warehouse

    Reppucci, Juan; Gardner, Beth; Lucherini, Mauro

    2011-01-01

    The Andean cat (Leopardus jacobita) is one of the most endangered, yet least known, felids. Although the Andean cat is considered at risk of extinction, rigorous quantitative population studies are lacking. Because physical observations of the Andean cat are difficult to make in the wild, we used a camera-trapping array to photo-capture individuals. The survey was conducted in northwestern Argentina at an elevation of approximately 4,200 m during October–December 2006 and April–June 2007. In each year we deployed 22 pairs of camera traps, which were strategically placed. To estimate detection probability and density we applied models for spatial capture–recapture using a Bayesian framework. Estimated densities were 0.07 and 0.12 individual/km2 for 2006 and 2007, respectively. Mean baseline detection probability was estimated at 0.07. By comparison, densities of the Pampas cat (Leopardus colocolo), another poorly known felid that shares its habitat with the Andean cat, were estimated at 0.74–0.79 individual/km2 in the same study area for 2006 and 2007, and its detection probability was estimated at 0.02. Despite having greater detectability, the Andean cat is rarer in the study region than the Pampas cat. Properly accounting for the detection probability is important in making reliable estimates of density, a key parameter in conservation and management decisions for any species.

  20. Estimating detection and density of the Andean cat in the high Andes

    USGS Publications Warehouse

    Reppucci, J.; Gardner, B.; Lucherini, M.

    2011-01-01

    The Andean cat (Leopardus jacobita) is one of the most endangered, yet least known, felids. Although the Andean cat is considered at risk of extinction, rigorous quantitative population studies are lacking. Because physical observations of the Andean cat are difficult to make in the wild, we used a camera-trapping array to photo-capture individuals. The survey was conducted in northwestern Argentina at an elevation of approximately 4,200 m during October-December 2006 and April-June 2007. In each year we deployed 22 pairs of camera traps, which were strategically placed. To estimate detection probability and density we applied models for spatial capture-recapture using a Bayesian framework. Estimated densities were 0.07 and 0.12 individuals/km2 for 2006 and 2007, respectively. Mean baseline detection probability was estimated at 0.07. By comparison, densities of the Pampas cat (Leopardus colocolo), another poorly known felid that shares its habitat with the Andean cat, were estimated at 0.74-0.79 individuals/km2 in the same study area for 2006 and 2007, and its detection probability was estimated at 0.02. Despite having greater detectability, the Andean cat is rarer in the study region than the Pampas cat. Properly accounting for the detection probability is important in making reliable estimates of density, a key parameter in conservation and management decisions for any species. © 2011 American Society of Mammalogists.

  1. [Estimation of Hunan forest carbon density based on spectral mixture analysis of MODIS data].

    PubMed

    Yan, En-ping; Lin, Hui; Wang, Guang-xing; Chen, Zhen-xiong

    2015-11-01

    With the fast development of remote sensing technology, combining forest inventory sample plot data with remotely sensed images has become a widely used method to map forest carbon density. However, the existence of mixed pixels often impedes the improvement of forest carbon density mapping, especially when low spatial resolution images such as MODIS are used. In this study, MODIS images and national forest inventory sample plot data were used to estimate forest carbon density. Linear spectral mixture analysis with and without constraint, and nonlinear spectral mixture analysis, were compared to derive the fractions of different land use and land cover (LULC) types. Then a sequential Gaussian co-simulation algorithm with and without the fraction images from the spectral mixture analyses was employed to estimate the forest carbon density of Hunan Province. Results showed that 1) linear spectral mixture analysis with constraint, leading to a mean RMSE of 0.002, estimated the fractions of LULC types more accurately than unconstrained linear and nonlinear spectral mixture analyses; 2) integrating the spectral mixture analysis model with the sequential Gaussian co-simulation algorithm increased the estimation accuracy of forest carbon density from 74.1% to 81.5% and decreased the RMSE from 7.26 to 5.18; and 3) the mean forest carbon density for the province was 30.06 t · hm(-2), ranging from 0.00 to 67.35 t · hm(-2). This implies that spectral mixture analysis has great potential to increase the estimation accuracy of forest carbon density at regional and global levels. PMID:26915200
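
    The constrained unmixing step can be sketched as a small least-squares problem: endmember spectra explain each pixel's reflectance with non-negative fractions that sum to one. The sketch below enforces the sum-to-one condition through a heavily weighted extra equation and non-negativity through NNLS; the endmember and pixel spectra are made-up numbers, not MODIS values.

        # Sketch of constrained linear spectral unmixing: sum-to-one enforced by a
        # weighted extra row, non-negativity by NNLS.  Spectra are illustrative.
        import numpy as np
        from scipy.optimize import nnls

        def constrained_unmix(endmembers, pixel, weight=1e3):
            """endmembers: (bands, classes) matrix; pixel: (bands,) spectrum."""
            bands, classes = endmembers.shape
            a = np.vstack([endmembers, weight * np.ones((1, classes))])
            b = np.concatenate([pixel, [weight]])
            fractions, _ = nnls(a, b)          # non-negative least squares
            return fractions

        endmembers = np.array([[0.05, 0.30, 0.20],   # band 1 reflectance of 3 LULC types
                               [0.10, 0.35, 0.25],
                               [0.40, 0.15, 0.30],
                               [0.45, 0.10, 0.35]])
        pixel = 0.6 * endmembers[:, 0] + 0.3 * endmembers[:, 1] + 0.1 * endmembers[:, 2]
        print(constrained_unmix(endmembers, pixel))  # ~ [0.6, 0.3, 0.1]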

  2. Fast and accurate probability density estimation in large high dimensional astronomical datasets

    NASA Astrophysics Data System (ADS)

    Gupta, Pramod; Connolly, Andrew J.; Gardner, Jeffrey P.

    2015-01-01

    Astronomical surveys will generate measurements of hundreds of attributes (e.g. color, size, shape) on hundreds of millions of sources. Analyzing these large, high dimensional data sets will require efficient algorithms for data analysis. An example of this is probability density estimation, which is at the heart of many classification problems such as the separation of stars and quasars based on their colors. Popular density estimation techniques use binning or kernel density estimation. Kernel density estimation has a small memory footprint but often requires large computational resources. Binning has small computational requirements, but binning is usually implemented with multi-dimensional arrays, which leads to memory requirements that scale exponentially with the number of dimensions. Hence neither technique scales well to large data sets in high dimensions. We present an alternative approach of binning implemented with hash tables (BASH tables). This approach uses the sparseness of data in the high dimensional space to ensure that the memory requirements are small. However, hashing requires some extra computation, so a priori it is not clear whether the reduction in memory requirements will lead to increased computational requirements. Through an implementation of BASH tables in C++ we show that the additional computational requirements of hashing are negligible. Hence this approach has small memory and computational requirements. We apply our density estimation technique to photometric selection of quasars using non-parametric Bayesian classification and show that the accuracy of the classification is the same as that of earlier approaches. Since the BASH table approach is one to three orders of magnitude faster than the earlier approaches, it may be useful in various other applications of density estimation in astrostatistics.
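
    A minimal sketch of the hash-table binning idea (the paper's implementation is in C++): only occupied bins are stored in a dictionary, so memory grows with the number of occupied cells rather than exponentially with dimension. The data below are random stand-ins for multi-dimensional photometric colors.

        # Hash-table binning density estimator: only occupied bins are stored.
        import numpy as np
        from collections import defaultdict

        class HashBinDensity:
            def __init__(self, bin_width):
                self.h = bin_width
                self.counts = defaultdict(int)   # maps bin index tuple -> count
                self.n = 0

            def fit(self, points):
                for p in np.asarray(points, dtype=float):
                    self.counts[tuple(np.floor(p / self.h).astype(int))] += 1
                    self.n += 1
                return self

            def density(self, x):
                """Estimated probability density at point x."""
                x = np.asarray(x, dtype=float)
                key = tuple(np.floor(x / self.h).astype(int))
                cell_volume = self.h ** x.size
                return self.counts.get(key, 0) / (self.n * cell_volume)

        rng = np.random.default_rng(0)
        colors = rng.normal(size=(10000, 4))        # stand-in for 4-d photometric colors
        est = HashBinDensity(bin_width=0.5).fit(colors)
        print(est.density(np.zeros(4)))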

  3. Trap array configuration influences estimates and precision of black bear density and abundance.

    PubMed

    Wilton, Clay M; Puckett, Emily E; Beringer, Jeff; Gardner, Beth; Eggert, Lori S; Belant, Jerrold L

    2014-01-01

    Spatial capture-recapture (SCR) models have advanced our ability to estimate population density for wide ranging animals by explicitly incorporating individual movement. Though these models are more robust to various spatial sampling designs, few studies have empirically tested different large-scale trap configurations using SCR models. We investigated how extent of trap coverage and trap spacing affects precision and accuracy of SCR parameters, implementing models using the R package secr. We tested two trapping scenarios, one spatially extensive and one intensive, using black bear (Ursus americanus) DNA data from hair snare arrays in south-central Missouri, USA. We also examined the influence that adding a second, lower barbed-wire strand to snares had on quantity and spatial distribution of detections. We simulated trapping data to test bias in density estimates of each configuration under a range of density and detection parameter values. Field data showed that using multiple arrays with intensive snare coverage produced more detections of more individuals than extensive coverage. Consequently, density and detection parameters were more precise for the intensive design. Density was estimated as 1.7 bears per 100 km2 and was 5.5 times greater than that under extensive sampling. Abundance was 279 (95% CI = 193-406) bears in the 16,812 km2 study area. Excluding detections from the lower strand resulted in the loss of 35 detections, 14 unique bears, and the largest recorded movement between snares. All simulations showed low bias for density under both configurations. Results demonstrated that in low density populations with non-uniform distribution of population density, optimizing the tradeoff among snare spacing, coverage, and sample size is of critical importance to estimating parameters with high precision and accuracy. With limited resources, allocating available traps to multiple arrays with intensive trap spacing increased the amount of information needed to inform parameters with high precision. PMID:25350557

  4. Trap Array Configuration Influences Estimates and Precision of Black Bear Density and Abundance

    PubMed Central

    Wilton, Clay M.; Puckett, Emily E.; Beringer, Jeff; Gardner, Beth; Eggert, Lori S.; Belant, Jerrold L.

    2014-01-01

    Spatial capture-recapture (SCR) models have advanced our ability to estimate population density for wide ranging animals by explicitly incorporating individual movement. Though these models are more robust to various spatial sampling designs, few studies have empirically tested different large-scale trap configurations using SCR models. We investigated how extent of trap coverage and trap spacing affects precision and accuracy of SCR parameters, implementing models using the R package secr. We tested two trapping scenarios, one spatially extensive and one intensive, using black bear (Ursus americanus) DNA data from hair snare arrays in south-central Missouri, USA. We also examined the influence that adding a second, lower barbed-wire strand to snares had on quantity and spatial distribution of detections. We simulated trapping data to test bias in density estimates of each configuration under a range of density and detection parameter values. Field data showed that using multiple arrays with intensive snare coverage produced more detections of more individuals than extensive coverage. Consequently, density and detection parameters were more precise for the intensive design. Density was estimated as 1.7 bears per 100 km2 and was 5.5 times greater than that under extensive sampling. Abundance was 279 (95% CI = 193–406) bears in the 16,812 km2 study area. Excluding detections from the lower strand resulted in the loss of 35 detections, 14 unique bears, and the largest recorded movement between snares. All simulations showed low bias for density under both configurations. Results demonstrated that in low density populations with non-uniform distribution of population density, optimizing the tradeoff among snare spacing, coverage, and sample size is of critical importance to estimating parameters with high precision and accuracy. With limited resources, allocating available traps to multiple arrays with intensive trap spacing increased the amount of information needed to inform parameters with high precision. PMID:25350557

  5. Mid-latitude Ionospheric Storms Density Gradients, Winds, and Drifts Estimated from GPS TEC Imaging

    NASA Astrophysics Data System (ADS)

    Datta-Barua, S.; Bust, G. S.

    2012-12-01

    Ionospheric storm processes at mid-latitudes stand in stark contrast to the typical quiescent behavior. Storm enhanced density (SED) on the dayside affects continent-sized regions horizontally and is often associated with a plume that extends poleward and upward into the nightside. One proposed cause of this behavior is the sub-auroral polarization stream (SAPS) acting on the SED. The electric field and its effect connecting mid-latitude and polar regions are just beginning to be understood and modeled. Another possible coupling effect is due to neutral winds, particularly those generated at high latitudes by joule heating. Of particular interest are electric fields and winds along the boundaries of the SED and plume, because these may be at least partly a cause of sharp horizontal electron density gradients. Thus, it is important to understand what bearing the drifts and winds, and any spatial variations in them (e.g., shear), have on the structure of the enhancement, particularly at its boundaries. Imaging techniques based on GPS TEC play a significant role in the study of storm dynamics, particularly at mid-latitudes, where sampling of the ionosphere with ground-based GPS lines of sight is most dense. Ionospheric Data Assimilation 4-Dimensional (IDA4D) is a plasma density estimation algorithm that has been used in a number of scientific investigations over several years. Recently, efforts to estimate drivers of the mid-latitude ionosphere, focusing on electric-field-induced drifts and neutral winds, based on high-resolution GPS TEC imaging have shown promise. Estimating Ionospheric Parameters from Ionospheric Reverse Engineering (EMPIRE) is a tool developed to address this kind of investigation. In this work, electron density and driver estimates are presented for an ionospheric storm using IDA4D in conjunction with EMPIRE. The IDA4D estimates resolve F-region electron densities at 1-degree resolution in the region of passage of the SED and associated plume. High-resolution imaging is used in conjunction with EMPIRE to deduce the dominant drivers. Starting with a baseline Weimer 2001 electric potential model, adjustments to the Weimer model are estimated for the given storm based on the IDA4D-derived densities to show electric fields associated with the plume. These regional densities and drivers are compared to proximal CHAMP and DMSP data for validation. Gradients in electron density are numerically computed over the 1-degree region. These density gradients are correlated with the drift estimates to identify a possible causal relationship in the formation of the boundaries of the SED.
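
    The final step, computing horizontal density gradients on the gridded estimates and relating them to drift estimates, reduces to finite differences and a correlation. A minimal sketch, with random arrays standing in for IDA4D/EMPIRE output on a 1-degree grid:

        # Finite-difference gradients of a gridded density map and their correlation
        # with co-located drift estimates; both grids are random stand-ins.
        import numpy as np

        rng = np.random.default_rng(6)
        ne = rng.random((20, 30))                    # electron density on a lat x lon grid
        drift = rng.random((20, 30))                 # drift magnitude on the same grid

        dlat_km, dlon_km = 111.0, 85.0               # approx. 1-degree spacing at mid-latitudes
        d_ne_dlat, d_ne_dlon = np.gradient(ne, dlat_km, dlon_km)
        grad_mag = np.hypot(d_ne_dlat, d_ne_dlon)

        corr = np.corrcoef(grad_mag.ravel(), drift.ravel())[0, 1]
        print(f"gradient/drift correlation: {corr:.2f}")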

  6. New density estimates of a threatened sifaka species (Propithecus coquereli) in Ankarafantsika National Park.

    PubMed

    Kun-Rodrigues, Célia; Salmona, Jordi; Besolo, Aubin; Rasolondraibe, Emmanuel; Rabarivola, Clément; Marques, Tiago A; Chikhi, Lounès

    2014-06-01

    Propithecus coquereli is one of the last sifaka species for which no reliable and extensive density estimates are yet available. Despite its endangered conservation status [IUCN, 2012] and recognition as a flagship species of the northwestern dry forests of Madagascar, its population in its last main refugium, the Ankarafantsika National Park (ANP), is still poorly known. Using line transect distance sampling surveys we estimated population density and abundance in the ANP. Furthermore, we investigated the effects of road, forest edge, river proximity and group size on sighting frequencies, and density estimates. We provide here the first population density estimates throughout the ANP. We found that density varied greatly among surveyed sites (from 5 to ∼100 ind/km2) which could result from significant (negative) effects of road, and forest edge, and/or a (positive) effect of river proximity. Our results also suggest that the population size may be ∼47,000 individuals in the ANP, hinting that the population likely underwent a strong decline in some parts of the Park in recent decades, possibly caused by habitat loss from fires and charcoal production and by poaching. We suggest community-based conservation actions for the largest remaining population of Coquerel's sifaka which will (i) maintain forest connectivity; (ii) implement alternatives to deforestation through charcoal production, logging, and grass fires; (iii) reduce poaching; and (iv) enable long-term monitoring of the population in collaboration with local authorities and researchers. PMID:24443250

  7. Evidence of Temporal Variation of Titan Atmosphere Density Estimated Using Cassini Thruster Telemetry Data

    NASA Astrophysics Data System (ADS)

    Lee, A. Y.; Lim, R. S.

    2012-12-01

    One of the major science objectives of the Cassini mission is an investigation of Titan's atmosphere constituent abundances. During low-altitude Titan flybys, the spacecraft attitude is controlled by eight reaction thrusters. Thrusters are fired to counter the torque imparted on the spacecraft by the Titan atmosphere. The denser Titan's atmosphere is, the higher the duty cycles of the thruster firings. Therefore, thruster firing telemetry data collected during a passage through the Titan atmosphere can be used to estimate the atmospheric torques imparted on the spacecraft. Since there is a known relation between the atmospheric torque imparted on the spacecraft and Titan's atmospheric density, the estimated atmospheric torques were used to reconstruct the Titan atmospheric density. In 2004-2012, forty-six low-altitude Titan flybys were executed. The altitudes of these flybys at Titan Closest Approach (TCA) range from 878 to 1174 km. The estimated Titan atmospheric densities, as functions of the spacecraft's Titan-relative altitude, were reconstructed. Results obtained are compared with those measured by the HASI (Huygens Atmospheric Structure Instrument) instrument on the Huygens probe. When the logarithm of the estimated density is plotted against the corresponding altitude, the data sets produce straight lines with negative slopes. This suggests that the atmospheric density (ρ_Titan) is related to the altitude (h) as follows: ρ_Titan(h) = ρ0 exp(-h/h0). In this equation, both ρ_Titan and ρ0 have units of kg/m3, and both h and h0 (scale height) have units of km. The least-square fit parameters [ρ0, h0] for the density estimates of the forty-six low-altitude Titan flybys are given in this paper. There is an observed temporal variation of the Titan atmospheric density estimated using telemetry data of flybys executed in 2004-2012. The observed temporal variation of Titan atmospheric density is significant and cannot be explained by the estimation uncertainty (5.8%, 1σ) of the density reconstruction methodology. For example, the estimated Titan atmospheric densities at a constant altitude of 1,080 km are 3.68, 2.58, 3.13, 1.86, 1.48, 2.07, and 1.48e-10 kg/m3 based on flyby data collected in the years 2005, 2006, 2007, 2008, 2009, 2010, and 2012, respectively. Note that the Titan atmospheric density first decreased with time from 2005 to 2009, then increased with time from 2009 to 2012. Factors that contributed to this temporal variation are unknown. On the other hand, there is no noticeable dependence of the Titan atmospheric density on the TCA latitudes of the flybys (from 82 deg. South to 85 deg. North). The estimated atmospheric density data will help scientists to better understand the density structure of the Titan atmosphere.
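
    The scale-height fit described above is a straight-line fit of log density against altitude. A minimal sketch with illustrative altitude/density pairs (not actual Cassini reconstructions):

        # Fit rho(h) = rho0 * exp(-h / h0) by a linear fit of log(rho) vs altitude.
        import numpy as np

        alt_km = np.array([950.0, 1000.0, 1050.0, 1100.0, 1150.0])
        rho = np.array([1.2e-9, 6.5e-10, 3.4e-10, 1.9e-10, 1.0e-10])   # kg/m^3 (made up)

        slope, intercept = np.polyfit(alt_km, np.log(rho), 1)
        h0 = -1.0 / slope               # scale height (km)
        rho0 = np.exp(intercept)        # density extrapolated to h = 0 (kg/m^3)
        print(f"h0 ~ {h0:.0f} km, rho0 ~ {rho0:.2e} kg/m^3")
        print("rho at 1080 km ~", rho0 * np.exp(-1080.0 / h0))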

  8. Distributed Noise Generation for Density Estimation Based Clustering without Trusted Third Party

    NASA Astrophysics Data System (ADS)

    Su, Chunhua; Bao, Feng; Zhou, Jianying; Takagi, Tsuyoshi; Sakurai, Kouichi

    The rapid growth of the Internet provides people with tremendous opportunities for data collection, knowledge discovery and cooperative computation. However, it also brings the problem of sensitive information leakage. Both individuals and enterprises may suffer from massive data collection and information retrieval by distrusted parties. In this paper, we propose a privacy-preserving protocol for distributed kernel density estimation-based clustering. Our scheme applies the random data perturbation (RDP) technique and verifiable secret sharing to solve the security problem of the distributed kernel density estimation in [4], which assumed an intermediary party to help in the computation.

  9. A Wiener-Wavelet-Based filter for de-noising satellite soil moisture retrievals

    NASA Astrophysics Data System (ADS)

    Massari, Christian; Brocca, Luca; Ciabatta, Luca; Moramarco, Tommaso; Su, Chun-Hsu; Ryu, Dongryeol; Wagner, Wolfgang

    2014-05-01

    The reduction of noise in microwave satellite soil moisture (SM) retrievals is of paramount importance for practical applications, especially for those associated with the study of climate changes, droughts, floods and other related hydrological processes. So far, Fourier based methods have been used for de-noising satellite SM retrievals by filtering either the observed emissivity time series (Du, 2012) or the retrieved SM observations (Su et al. 2013). This contribution introduces an alternative approach based on a Wiener-Wavelet-Based filtering (WWB) technique, which uses the Entropy-Based Wavelet de-noising method developed by Sang et al. (2009) to design both a causal and a non-causal version of the filter. WWB is used as a post-retrieval processing tool to enhance the quality of observations derived from i) the Advanced Microwave Scanning Radiometer for the Earth observing system (AMSR-E), ii) the Advanced SCATterometer (ASCAT), and iii) the Soil Moisture and Ocean Salinity (SMOS) satellite. The method is tested on three pilot sites located in Spain (Remedhus Network), in Greece (Hydrological Observatory of Athens) and in Australia (Oznet network), respectively. Different quantitative criteria are used to judge the goodness of the de-noising technique. Results show that WWB i) is able to improve both the correlation and the root mean squared differences between satellite retrievals and in situ soil moisture observations, and ii) effectively separates random noise from deterministic components of the retrieved signals. Moreover, the use of WWB de-noised data in place of raw observations within a hydrological application confirms the usefulness of the proposed filtering technique. References: Du, J. (2012), A method to improve satellite soil moisture retrievals based on Fourier analysis, Geophys. Res. Lett., 39, L15404, doi:10.1029/2012GL052435. Su, C.-H., D. Ryu, A. W. Western, and W. Wagner (2013), De-noising of passive and active microwave satellite soil moisture time series, Geophys. Res. Lett., 40, 3624-3630, doi:10.1002/grl.50695. Sang, Y.-F., D. Wang, J.-C. Wu, Q.-P. Zhu, and L. Wang (2009), Entropy-Based Wavelet De-noising Method for Time Series Analysis, Entropy, 11, pp. 1123-1148, doi:10.3390/e11041123.
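
    A generic wavelet-thresholding de-noising pass (not the specific entropy-based WWB filter of the study) conveys the basic mechanism: decompose, shrink the detail coefficients, reconstruct. The sketch below uses the PyWavelets package on a synthetic soil-moisture-like series.

        # Generic wavelet-thresholding de-noising on a synthetic series (PyWavelets).
        import numpy as np
        import pywt

        rng = np.random.default_rng(1)
        t = np.arange(365)
        truth = 0.25 + 0.10 * np.sin(2 * np.pi * t / 365)        # smooth seasonal signal
        noisy = truth + rng.normal(scale=0.03, size=t.size)      # noisy "retrievals"

        coeffs = pywt.wavedec(noisy, 'db4', level=4)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745           # noise scale from finest level
        thresh = sigma * np.sqrt(2 * np.log(noisy.size))         # universal threshold
        denoised_coeffs = [coeffs[0]] + [pywt.threshold(c, thresh, mode='soft')
                                         for c in coeffs[1:]]
        denoised = pywt.waverec(denoised_coeffs, 'db4')[:noisy.size]

        print("RMSE noisy   :", np.sqrt(np.mean((noisy - truth) ** 2)))
        print("RMSE denoised:", np.sqrt(np.mean((denoised - truth) ** 2)))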

  10. Multiscale seismic characterization of marine sediments by using a wavelet-based approach

    NASA Astrophysics Data System (ADS)

    Ker, Stephan; Le Gonidec, Yves; Gibert, Dominique

    2015-04-01

    We propose a wavelet-based method to characterize acoustic impedance discontinuities from a multiscale analysis of reflected seismic waves. This method is developed in the framework of the wavelet response (WR), where dilated wavelets are used to sound a complex seismic reflector defined by a multiscale impedance structure. In the context of seismic imaging, we use the WR as a set of multiscale seismic attributes, in particular ridge functions, which contain most of the information that quantifies the complex geometry of the reflector. We extend this approach by considering its application to the analysis of seismic data acquired with broadband but frequency-limited source signals. The band-pass filter associated with such actual sources distorts the WR: in order to remove these effects, we develop an original processing based on fractional derivatives of Lévy alpha-stable distributions in the formalism of the continuous wavelet transform (CWT). We demonstrate that the CWT of a seismic trace involving such a finite frequency bandwidth can be made equivalent to the CWT of the impulse response of the subsurface and is defined for a reduced range of dilations, controlled by the seismic source signal. In this dilation range, the multiscale seismic attributes are corrected for distortions and we can thus merge multiresolution seismic sources to increase the frequency range of the multiscale analysis. As a first demonstration, we perform the source correction with the high and very high resolution seismic sources of the SYSIF deep-towed seismic device and we show that both can now be perfectly merged into an equivalent seismic source with an improved frequency bandwidth (220-2200 Hz). Such multiresolution seismic data fusion allows reconstructing the acoustic impedance of the subseabed based on the inverse wavelet transform properties extended to the source-corrected WR. We illustrate the potential of this approach with deep-water seismic data acquired during the ERIG3D cruise and we compare the results with the multiscale analysis performed on synthetic seismic data based on ground truth measurements.

  11. Hierarchical models for estimating density from DNA mark-recapture studies

    USGS Publications Warehouse

    Gardner, B.; Royle, J. Andrew; Wegan, M.T.

    2009-01-01

    Genetic sampling is increasingly used as a tool by wildlife biologists and managers to estimate abundance and density of species. Typically, DNA is used to identify individuals captured in an array of traps ( e. g., baited hair snares) from which individual encounter histories are derived. Standard methods for estimating the size of a closed population can be applied to such data. However, due to the movement of individuals on and off the trapping array during sampling, the area over which individuals are exposed to trapping is unknown, and so obtaining unbiased estimates of density has proved difficult. We propose a hierarchical spatial capture-recapture model which contains explicit models for the spatial point process governing the distribution of individuals and their exposure to (via movement) and detection by traps. Detection probability is modeled as a function of each individual's distance to the trap. We applied this model to a black bear (Ursus americanus) study conducted in 2006 using a hair-snare trap array in the Adirondack region of New York, USA. We estimated the density of bears to be 0.159 bears/km2, which is lower than the estimated density (0.410 bears/km2) based on standard closed population techniques. A Bayesian analysis of the model is fully implemented in the software program WinBUGS.
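
    The key model component, a detection probability that declines with the distance between an individual's activity centre and a trap, can be written down in a few lines. A minimal sketch with a half-normal detection function and illustrative parameter values (not the estimates from the Adirondack study):

        # Half-normal detection model p = p0 * exp(-d^2 / (2 sigma^2)); values illustrative.
        import numpy as np

        def detection_prob(centre, trap, p0=0.1, sigma=2.0):
            """Detection probability for an activity centre and a trap (distances in km)."""
            d = np.linalg.norm(np.asarray(centre) - np.asarray(trap))
            return p0 * np.exp(-d ** 2 / (2 * sigma ** 2))

        traps = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 3.0], [3.0, 3.0]])   # km
        centre = np.array([1.0, 1.5])                                        # activity centre
        probs = [detection_prob(centre, t) for t in traps]
        occasions = 8
        expected_captures = occasions * np.sum(probs)     # expected detections over the survey
        print(np.round(probs, 3), expected_captures)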

  12. Trapping Elusive Cats: Using Intensive Camera Trapping to Estimate the Density of a Rare African Felid.

    PubMed

    Brassine, Eléanor; Parker, Daniel

    2015-01-01

    Camera trapping studies have become increasingly popular to produce population estimates of individually recognisable mammals. Yet, monitoring techniques for rare species which occur at extremely low densities are lacking. Additionally, species which have unpredictable movements may make obtaining reliable population estimates challenging due to low detectability. Our study explores the effectiveness of intensive camera trapping for estimating cheetah (Acinonyx jubatus) numbers. Using both a more traditional, systematic grid approach and pre-determined, targeted sites for camera placement, the cheetah population of the Northern Tuli Game Reserve, Botswana was sampled between December 2012 and October 2013. Placement of cameras in a regular grid pattern yielded very few (n = 9) cheetah images and these were insufficient to estimate cheetah density. However, pre-selected cheetah scent-marking posts provided 53 images of seven adult cheetahs (0.61 ± 0.18 cheetahs/100km²). While increasing the length of the camera trapping survey from 90 to 130 days increased the total number of cheetah images obtained (from 53 to 200), no new individuals were recorded and the estimated population density remained stable. Thus, our study demonstrates that targeted camera placement (irrespective of survey duration) is necessary for reliably assessing cheetah densities where populations are naturally very low or dominated by transient individuals. Significantly our approach can easily be applied to other rare predator species. PMID:26698574

  13. A hierarchical model for estimating density in camera-trap studies

    USGS Publications Warehouse

    Royle, J. Andrew; Nichols, J.D.; Karanth, K.U.; Gopalaswamy, A.M.

    2009-01-01

    1. Estimating animal density using capture-recapture data from arrays of detection devices such as camera traps has been problematic due to the movement of individuals and heterogeneity in capture probability among them induced by differential exposure to trapping. 2. We develop a spatial capture-recapture model for estimating density from camera-trapping data which contains explicit models for the spatial point process governing the distribution of individuals and their exposure to and detection by traps. 3. We adopt a Bayesian approach to analysis of the hierarchical model using the technique of data augmentation. 4. The model is applied to photographic capture-recapture data on tigers Panthera tigris in Nagarahole reserve, India. Using this model, we estimate the density of tigers to be 14.3 animals per 100 km2 during 2004. 5. Synthesis and applications. Our modelling framework largely overcomes several weaknesses in conventional approaches to the estimation of animal density from trap arrays. It effectively deals with key problems such as individual heterogeneity in capture probabilities, movement of traps, presence of potential 'holes' in the array and ad hoc estimation of sample area. The formulation, thus, greatly enhances flexibility in the conduct of field surveys as well as in the analysis of data, from studies that may involve physical, photographic or DNA-based 'captures' of individual animals.

  14. Productivity and population density estimates of the dengue vector mosquito Aedes aegypti (Stegomyia aegypti) in Australia.

    PubMed

    Williams, C R; Johnson, P H; Ball, T S; Ritchie, S A

    2013-09-01

    New mosquito control strategies centred on the modifying of populations require knowledge of existing population densities at release sites and an understanding of breeding site ecology. Using a quantitative pupal survey method, we investigated production of the dengue vector Aedes aegypti (L.) (Stegomyia aegypti) (Diptera: Culicidae) in Cairns, Queensland, Australia, and found that garden accoutrements represented the most common container type. Deliberately placed 'sentinel' containers were set at seven houses and sampled for pupae over 10 weeks during the wet season. Pupal production was approximately constant; tyres and buckets represented the most productive container types. Sentinel tyres produced the largest female mosquitoes, but were relatively rare in the field survey. We then used field-collected data to make estimates of per premises population density using three different approaches. Estimates of female Ae. aegypti abundance per premises made using the container-inhabiting mosquito simulation (CIMSiM) model [95% confidence interval (CI) 18.5-29.1 females] concorded reasonably well with estimates obtained using a standing crop calculation based on pupal collections (95% CI 8.8-22.5) and using BG-Sentinel traps and a sampling rate correction factor (95% CI 6.2-35.2). By first describing local Ae. aegypti productivity, we were able to compare three separate population density estimates which provided similar results. We anticipate that this will provide researchers and health officials with several tools with which to make estimates of population densities. PMID:23205694

  15. Trapping Elusive Cats: Using Intensive Camera Trapping to Estimate the Density of a Rare African Felid

    PubMed Central

    Brassine, Eléanor; Parker, Daniel

    2015-01-01

    Camera trapping studies have become increasingly popular to produce population estimates of individually recognisable mammals. Yet, monitoring techniques for rare species which occur at extremely low densities are lacking. Additionally, species which have unpredictable movements may make obtaining reliable population estimates challenging due to low detectability. Our study explores the effectiveness of intensive camera trapping for estimating cheetah (Acinonyx jubatus) numbers. Using both a more traditional, systematic grid approach and pre-determined, targeted sites for camera placement, the cheetah population of the Northern Tuli Game Reserve, Botswana was sampled between December 2012 and October 2013. Placement of cameras in a regular grid pattern yielded very few (n = 9) cheetah images and these were insufficient to estimate cheetah density. However, pre-selected cheetah scent-marking posts provided 53 images of seven adult cheetahs (0.61 ± 0.18 cheetahs/100km²). While increasing the length of the camera trapping survey from 90 to 130 days increased the total number of cheetah images obtained (from 53 to 200), no new individuals were recorded and the estimated population density remained stable. Thus, our study demonstrates that targeted camera placement (irrespective of survey duration) is necessary for reliably assessing cheetah densities where populations are naturally very low or dominated by transient individuals. Significantly our approach can easily be applied to other rare predator species. PMID:26698574

  16. Hierarchical models for estimating density from DNA mark-recapture studies.

    PubMed

    Gardner, Beth; Royle, J Andrew; Wegan, Michael T

    2009-04-01

    Genetic sampling is increasingly used as a tool by wildlife biologists and managers to estimate abundance and density of species. Typically, DNA is used to identify individuals captured in an array of traps (e.g., baited hair snares) from which individual encounter histories are derived. Standard methods for estimating the size of a closed population can be applied to such data. However, due to the movement of individuals on and off the trapping array during sampling, the area over which individuals are exposed to trapping is unknown, and so obtaining unbiased estimates of density has proved difficult. We propose a hierarchical spatial capture-recapture model which contains explicit models for the spatial point process governing the distribution of individuals and their exposure to (via movement) and detection by traps. Detection probability is modeled as a function of each individual's distance to the trap. We applied this model to a black bear (Ursus americanus) study conducted in 2006 using a hair-snare trap array in the Adirondack region of New York, USA. We estimated the density of bears to be 0.159 bears/km2, which is lower than the estimated density (0.410 bears/km2) based on standard closed population techniques. A Bayesian analysis of the model is fully implemented in the software program WinBUGS. PMID:19449704

  17. Estimating lung burdens based on individual particle density estimated from scanning electron microscopy and cascade impactor samples.

    PubMed

    Miller, Frederick J; Kaczmar, Swiatoslav W; Danzeisen, Ruth; Moss, Owen R

    2013-12-01

    Workplace air is monitored for overall dust levels and for specific components of the dust to determine compliance with occupational and workplace standards established by regulatory bodies for worker health protection. Exposure monitoring studies were conducted by the International Copper Association (ICA) at various industrial facilities around the world working with copper. Individual cascade impactor stages were weighed to determine the total amount of dust collected on the stage, and then the amounts of soluble and insoluble copper and other metals on each stage were determined; speciation was not determined. Filter samples were also collected for scanning electron microscope analysis. Retrospectively, there was an interest in obtaining estimates of alveolar lung burdens of copper in workers engaged in tasks requiring different levels of exertion, as reflected by their minute ventilation. However, mechanistic lung dosimetry models estimate alveolar lung burdens based on particle Stokes diameter. In order to use these dosimetry models, the measured mass-based aerodynamic diameter distribution had to be transformed into a distribution of Stokes diameters, which requires an estimate of individual particle density. This density value was estimated by using cascade impactor data together with scanning electron microscopy data from filter samples. The developed method was applied to ICA monitoring data sets, and then the multiple path particle dosimetry (MPPD) model was used to determine the copper alveolar lung burdens for workers with different functional residual capacities engaged in activities requiring a range of minute ventilation levels. PMID:24304308
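
    The diameter transformation mentioned above can be illustrated with the simplest form of the relation between aerodynamic and Stokes (volume-equivalent) diameter for a spherical particle, ignoring slip correction and dynamic shape factor: d_aero = d_stokes * sqrt(rho_particle / rho_unit). The particle density and impactor cut points below are made-up values, not ICA data.

        # Simplified aerodynamic-to-Stokes diameter rescaling (spherical particle,
        # no slip correction or shape factor); numbers are illustrative only.
        import numpy as np

        RHO_UNIT = 1000.0                       # unit density, kg/m^3

        def stokes_from_aero(d_aero_um, rho_particle):
            return d_aero_um * np.sqrt(RHO_UNIT / rho_particle)

        impactor_cutpoints_um = np.array([0.5, 1.0, 2.5, 5.0, 10.0])   # aerodynamic diameters
        rho_estimated = 4500.0                  # hypothetical copper-rich dust density
        print(stokes_from_aero(impactor_cutpoints_um, rho_estimated))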

  18. Scent Lure Effect on Camera-Trap Based Leopard Density Estimates

    PubMed Central

    Braczkowski, Alexander Richard; Balme, Guy Andrew; Dickman, Amy; Fattebert, Julien; Johnson, Paul; Dickerson, Tristan; Macdonald, David Whyte; Hunter, Luke

    2016-01-01

    Density estimates for large carnivores derived from camera surveys often have wide confidence intervals due to low detection rates. Such estimates are of limited value to authorities, which require precise population estimates to inform conservation strategies. Using lures can potentially increase detection, improving the precision of estimates. However, by altering the spatio-temporal patterning of individuals across the camera array, lures may violate closure, a fundamental assumption of capture-recapture. Here, we test the effect of scent lures on the precision and veracity of density estimates derived from camera-trap surveys of a protected African leopard population. We undertook two surveys (a ‘control’ and ‘treatment’ survey) on Phinda Game Reserve, South Africa. Survey design remained consistent except that a scent lure was applied at camera-trap stations during the treatment survey. Lures did not affect the maximum movement distances (p = 0.96) or temporal activity of female (p = 0.12) or male leopards (p = 0.79), and the assumption of geographic closure was met for both surveys (p >0.05). The numbers of photographic captures were also similar for the control and treatment surveys (p = 0.90). Accordingly, density estimates were comparable between surveys, although estimates derived using non-spatial methods (7.28–9.28 leopards/100km2) were considerably higher than estimates from spatially-explicit methods (3.40–3.65 leopards/100km2). The precision of estimates from the control and treatment surveys was also comparable, and this applied to both non-spatial and spatial methods of estimation. Our findings suggest that, at least in the context of leopard research in productive habitats, the use of lures is not warranted. PMID:27050816

  19. Scent Lure Effect on Camera-Trap Based Leopard Density Estimates.

    PubMed

    Braczkowski, Alexander Richard; Balme, Guy Andrew; Dickman, Amy; Fattebert, Julien; Johnson, Paul; Dickerson, Tristan; Macdonald, David Whyte; Hunter, Luke

    2016-01-01

    Density estimates for large carnivores derived from camera surveys often have wide confidence intervals due to low detection rates. Such estimates are of limited value to authorities, which require precise population estimates to inform conservation strategies. Using lures can potentially increase detection, improving the precision of estimates. However, by altering the spatio-temporal patterning of individuals across the camera array, lures may violate closure, a fundamental assumption of capture-recapture. Here, we test the effect of scent lures on the precision and veracity of density estimates derived from camera-trap surveys of a protected African leopard population. We undertook two surveys (a 'control' and 'treatment' survey) on Phinda Game Reserve, South Africa. Survey design remained consistent except that a scent lure was applied at camera-trap stations during the treatment survey. Lures did not affect the maximum movement distances (p = 0.96) or temporal activity of female (p = 0.12) or male leopards (p = 0.79), and the assumption of geographic closure was met for both surveys (p >0.05). The numbers of photographic captures were also similar for the control and treatment surveys (p = 0.90). Accordingly, density estimates were comparable between surveys, although estimates derived using non-spatial methods (7.28-9.28 leopards/100km2) were considerably higher than estimates from spatially-explicit methods (3.40-3.65 leopards/100km2). The precision of estimates from the control and treatment surveys was also comparable, and this applied to both non-spatial and spatial methods of estimation. Our findings suggest that, at least in the context of leopard research in productive habitats, the use of lures is not warranted. PMID:27050816

  20. Estimation of localized current anomalies in polymer electrolyte fuel cells from magnetic flux density measurements

    NASA Astrophysics Data System (ADS)

    Nara, Takaaki; Koike, Masanori; Ando, Shigeru; Gotoh, Yuji; Izumi, Masaaki

    2016-05-01

    In this paper, we propose novel inversion methods to estimate defects or localized current anomalies in membrane electrode assemblies (MEAs) in polymer electrolyte fuel cells (PEFCs). One method is an imaging approach with L1-norm regularization, which is better suited than Tikhonov regularization to estimating focal anomalies. The second is a complex-analysis-based method in which multiple pointwise current anomalies can be identified directly and algebraically from the measured magnetic flux density.
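
    The first approach can be sketched as a sparse linear inversion: the measured flux densities are modelled as a linear map of the in-plane current anomaly, and an L1 penalty favours a focal solution over the smeared one that a Tikhonov (L2) penalty would give. In the sketch below the sensitivity matrix is a random stand-in rather than a Biot-Savart model of an actual cell, and scikit-learn's Lasso provides the L1-regularised solver.

        # L1-regularised inversion of a linear forward model (sensitivity matrix is
        # a random stand-in, not a magnetic model of a real PEFC).
        import numpy as np
        from sklearn.linear_model import Lasso

        rng = np.random.default_rng(2)
        n_sensors, n_cells = 60, 400
        G = rng.normal(size=(n_sensors, n_cells))      # stand-in sensitivity matrix

        true_anomaly = np.zeros(n_cells)
        true_anomaly[[57, 311]] = [1.0, -0.7]          # two localized current anomalies
        b = G @ true_anomaly + 0.01 * rng.normal(size=n_sensors)   # "measured" flux density

        model = Lasso(alpha=0.02, fit_intercept=False, max_iter=10000).fit(G, b)
        recovered = model.coef_
        print(np.nonzero(np.abs(recovered) > 0.1)[0])  # indices of recovered anomalies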

  1. Estimation of Density-Dependent Mortality of Juvenile Bivalves in the Wadden Sea

    PubMed Central

    Andresen, Henrike; Strasser, Matthias; van der Meer, Jaap

    2014-01-01

    We investigated density-dependent mortality within the early months of life of the bivalves Macoma balthica (Baltic tellin) and Cerastoderma edule (common cockle) in the Wadden Sea. Mortality is thought to be density-dependent in juvenile bivalves, because there is no proportional relationship between the size of the reproductive adult stocks and the numbers of recruits for both species. It is not known however, when exactly density dependence in the pre-recruitment phase occurs and how prevalent it is. The magnitude of recruitment determines year class strength in bivalves. Thus, understanding pre-recruit mortality will improve the understanding of population dynamics. We analyzed count data from three years of temporal sampling during the first months after bivalve settlement at ten transects in the Sylt-Rømø-Bay in the northern German Wadden Sea. Analyses of density dependence are sensitive to bias through measurement error. Measurement error was estimated by bootstrapping, and residual deviances were adjusted by adding process error. With simulations the effect of these two types of error on the estimate of the density-dependent mortality coefficient was investigated. In three out of eight time intervals density dependence was detected for M. balthica, and in zero out of six time intervals for C. edule. Biological or environmental stochastic processes dominated over density dependence at the investigated scale. PMID:25105293

  2. A pseudo wavelet-based method for accurate tagline tracing on tagged MR images of the tongue

    NASA Astrophysics Data System (ADS)

    Yuan, Xiaohui; Ozturk, Cengizhan; Chi-Fishman, Gloria

    2006-03-01

    In this paper, we present a pseudo wavelet-based tagline detection method. The tagged MR image is transformed to the wavelet domain, and the prominent tagline coefficients are retained while others are eliminated. Significant stripes, which are mixtures of taglines and line-like anatomical boundaries, are extracted via segmentation. A refinement step follows, in which broken lines and isolated points are grouped or eliminated. Without assumptions on tag models, our method extracts taglines automatically regardless of their width and spacing. In addition, founded on multi-resolution wavelet analysis, our method reconstructs taglines precisely and shows great robustness to various types of taglines.

  3. Identification of the monitoring point density needed to reliably estimate contaminant mass fluxes

    NASA Astrophysics Data System (ADS)

    Liedl, R.; Liu, S.; Fraser, M.; Barker, J.

    2005-12-01

    Plume monitoring frequently relies on the evaluation of point-scale measurements of concentration at observation wells which are located at control planes or 'fences' perpendicular to groundwater flow. Depth-specific concentration values are used to estimate the total mass flux of individual contaminants through the fence. Results of this approach, which is based on spatial interpolation, obviously depend on the density of the measurement points. Our contribution relates the accuracy of mass flux estimation to the point density and, in particular, allows identification of a minimum point density needed to achieve a specified accuracy. In order to establish this relationship, concentration data from fences installed in the coal tar creosote plume at the Borden site are used. These fences are characterized by a rather high density of about 7 points/m2 and it is reasonable to assume that the true mass flux is obtained with this point density. This mass flux is then compared with results for less dense grids down to about 0.1 points/m2. Mass flux estimates obtained for this range of point densities are analyzed by the moving window method in order to reduce purely random fluctuations. For each position of the moving window the mass flux is estimated and the coefficient of variation (CV) is calculated to quantify the variability of the results. Thus, the CV provides a relative measure of accuracy in the estimated fluxes. By applying this approach to the Borden naphthalene plume at different times, it is found that the point density changes from sufficient to insufficient due to the temporally decreasing mass flux. By comparing the results for naphthalene and phenol at the same fence and at the same time, we can see that the same grid density might be sufficient for one compound but not for another. If a rather strict CV criterion of 5% is used, a grid of 7 points/m2 is shown to allow for reliable estimates of the true mass fluxes only in the beginning of plume development, when mass fluxes are high. Long-term data exhibit a very high variation, attributed to the decreasing flux, and a much denser grid would be required to reflect the decreasing mass flux with the same high accuracy. However, a less strict CV criterion of 50% may be acceptable due to uncertainties generally associated with other hydrogeologic parameters. In this case, a point density between 1 and 2 points/m2 is found to be sufficient for a set of five tested chemicals.

  4. Estimating food portions. Influence of unit number, meal type and energy density.

    PubMed

    Almiron-Roig, Eva; Solis-Trapala, Ivonne; Dodd, Jessica; Jebb, Susan A

    2013-12-01

    Estimating how much is appropriate to consume can be difficult, especially for foods presented in multiple units, those with ambiguous energy content and for snacks. This study tested the hypothesis that the number of units (single vs. multi-unit), meal type and food energy density disrupts accurate estimates of portion size. Thirty-two healthy weight men and women attended the laboratory on 3 separate occasions to assess the number of portions contained in 33 foods or beverages of varying energy density (1.7-26.8 kJ/g). Items included 12 multi-unit and 21 single unit foods; 13 were labelled "meal", 4 "drink" and 16 "snack". Departures in portion estimates from reference amounts were analysed with negative binomial regression. Overall participants tended to underestimate the number of portions displayed. Males showed greater errors in estimation than females (p=0.01). Single unit foods and those labelled as 'meal' or 'beverage' were estimated with greater error than multi-unit and 'snack' foods (p=0.02 and p<0.001 respectively). The number of portions of high energy density foods was overestimated while the number of portions of beverages and medium energy density foods were underestimated by 30-46%. In conclusion, participants tended to underestimate the reference portion size for a range of food and beverages, especially single unit foods and foods of low energy density and, unexpectedly, overestimated the reference portion of high energy density items. There is a need for better consumer education of appropriate portion sizes to aid adherence to a healthy diet. PMID:23932948

  5. Estimation of effective x-ray tissue attenuation differences for volumetric breast density measurement

    NASA Astrophysics Data System (ADS)

    Chen, Biao; Ruth, Chris; Jing, Zhenxue; Ren, Baorui; Smith, Andrew; Kshirsagar, Ashwini

    2014-03-01

    Breast density has been identified to be a risk factor of developing breast cancer and an indicator of lesion diagnostic obstruction due to masking effect. Volumetric density measurement evaluates fibro-glandular volume, breast volume, and breast volume density measures that have potential advantages over area density measurement in risk assessment. One class of volume density computing methods is based on the finding of the relative fibro-glandular tissue attenuation with regards to the reference fat tissue, and the estimation of the effective x-ray tissue attenuation differences between the fibro-glandular and fat tissue is key to volumetric breast density computing. We have modeled the effective attenuation difference as a function of actual x-ray skin entrance spectrum, breast thickness, fibro-glandular tissue thickness distribution, and detector efficiency. Compared to other approaches, our method has threefold advantages: (1) avoids the system calibration-based creation of effective attenuation differences which may introduce tedious calibrations for each imaging system and may not reflect the spectrum change and scatter induced overestimation or underestimation of breast density; (2) obtains the system specific separate and differential attenuation values of fibroglandular and fat for each mammographic image; and (3) further reduces the impact of breast thickness accuracy to volumetric breast density. A quantitative breast volume phantom with a set of equivalent fibro-glandular thicknesses has been used to evaluate the volume breast density measurement with the proposed method. The experimental results have shown that the method has significantly improved the accuracy of estimating breast density.

  6. USING AERIAL HYPERSPECTRAL REMOTE SENSING IMAGERY TO ESTIMATE CORN PLANT STAND DENSITY

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Since corn plant stand density is important for optimizing crop yield, several researchers have recently developed ground-based systems for automatic measurement of this crop growth parameter. Our objective was to use data from such a system to assess the potential for estimation of corn plant stan...

  7. ESTIMATION OF SOYBEAN ROOT LENGTH DENSITY DISTRIBUTION WITH DIRECT AND SENSOR BASED MEASUREMENTS OF CLAYPAN MORPHOLOGY

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Hydrologic and morphological properties of claypan landscapes cause variability in soybean root and shoot biomass. This study was conducted to develop predictive models of soybean root length density distribution (RLDd) using direct measurements and sensor based estimators of claypan morphology. A c...

  8. A hybrid approach to crowd density estimation using statistical learning and texture classification

    NASA Astrophysics Data System (ADS)

    Li, Yin; Zhou, Bowen

    2013-12-01

    Crowd density estimation is a hot topic in the computer vision community. Established algorithms for crowd density estimation mainly focus on moving crowds, employing background modeling to obtain crowd blobs. However, people's motion is not obvious in many settings, such as the waiting hall of an airport or the lobby of a railway station. Moreover, conventional algorithms for crowd density estimation cannot yield desirable results for all levels of crowding due to occlusion and clutter. We propose a hybrid method to address the aforementioned problems. First, statistical learning is introduced for background subtraction, which comprises a training phase and a test phase. The crowd images are gridded into small blocks which denote foreground or background. Then HOG features are extracted and fed into a binary SVM for each block. Hence, crowd blobs can be obtained from the classification results of the trained classifier. Second, the crowd images are treated as texture images. Therefore, the estimation problem can be formulated as texture classification. The density level can be derived according to the classification results. We validate the proposed algorithm on some real scenarios where the crowd motion is not obvious. Experimental results demonstrate that our approach can obtain the foreground crowd blobs accurately and works well for different levels of crowding.
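
    The per-block foreground/background step described above reduces to extracting HOG features from each block and classifying them with a binary SVM. A minimal sketch with random arrays standing in for labelled training blocks (requires scikit-image and scikit-learn):

        # HOG features per block + binary SVM; training data are random stand-ins.
        import numpy as np
        from skimage.feature import hog
        from sklearn.svm import SVC

        def block_features(block):
            return hog(block, orientations=9, pixels_per_cell=(8, 8),
                       cells_per_block=(2, 2), feature_vector=True)

        rng = np.random.default_rng(3)
        blocks = rng.random((40, 64, 64))                 # 40 hypothetical 64x64 blocks
        labels = rng.integers(0, 2, size=40)              # 1 = crowd, 0 = background (dummy)

        X = np.array([block_features(b) for b in blocks])
        clf = SVC(kernel='rbf').fit(X, labels)

        new_block = rng.random((64, 64))
        print("crowd block" if clf.predict([block_features(new_block)])[0] else "background")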

  9. Wavelet-based Time Series Bootstrap Approach for Multidecadal Hydrologic Projections Using Observed and Paleo Data of Climate Indicators

    NASA Astrophysics Data System (ADS)

    Erkyihun, S. T.

    2013-12-01

    Understanding streamflow variability and the ability to generate realistic scenarios at multi-decadal time scales are important for robust water resources planning and management in any river basin - more so in the Colorado River Basin, with its semi-arid climate and highly stressed water resources. It is increasingly evident that large scale climate forcings such as the El Nino Southern Oscillation (ENSO), the Pacific Decadal Oscillation (PDO) and the Atlantic Multi-decadal Oscillation (AMO) modulate the Colorado River Basin hydrology at multi-decadal time scales. Thus, modeling these large scale climate indicators is important for then conditionally modeling the multi-decadal streamflow variability. To this end, we developed a simulation model that combines a wavelet-based time series method, Wavelet Auto Regressive Moving Average (WARMA), with a K-nearest neighbor (K-NN) bootstrap approach. In this, for a given time series (climate forcings), dominant periodicities/frequency bands that pass the 90% significance test are identified from the wavelet spectrum. The time series is filtered at these frequencies in each band to create 'components'; the components are orthogonal and, when added to the residual (i.e., noise), result in the original time series. The components, being smooth, are easily modeled using parsimonious Auto Regressive Moving Average (ARMA) time series models. The fitted ARMA models are used to simulate the individual components, which are added to obtain a simulation of the original series. The WARMA approach is applied to all the climate forcing indicators, which are used to simulate multi-decadal sequences of these forcings. For the current year, the simulated forcings are considered the 'feature vector' and its K nearest neighbors are identified; one of the neighbors (i.e., one of the historical years) is resampled using a weighted probability metric (with more weight to the nearest neighbor and least to the farthest) and the corresponding streamflow is the simulated value for the current year. We applied this simulation approach to the climate indicators and streamflow at Lees Ferry, AZ, a key gauge on the river in the Colorado River Basin, using data from the observational and paleo periods together spanning 1650 - 2005. A suite of distributional statistics, such as the probability density function (PDF), mean, variance, skew and lag-1 autocorrelation, along with higher order and multi-decadal statistics such as spectra and drought and surplus statistics, are computed to check the performance of the flow simulation in capturing the variability of the historic and paleo periods. Our results indicate that this approach is able to robustly reproduce all of the above-mentioned statistical properties. This offers an attractive alternative for near-term (interannual to multi-decadal) flow simulation, which is critical for water resources planning.
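
    A stripped-down sketch of the WARMA idea: split an index series into band-limited wavelet components, fit a low-order autoregressive model to each smooth component, simulate the components, and add them back together. For brevity the sketch fixes the AR order at 2 and uses a synthetic index; the actual method selects significant bands from the wavelet spectrum, uses ARMA models, and couples the simulated forcings to streamflow through the K-NN resampler. Requires PyWavelets.

        # Wavelet band decomposition + per-component AR(2) simulation (simplified).
        import numpy as np
        import pywt

        def wavelet_components(x, wavelet='db4', level=4):
            coeffs = pywt.wavedec(x, wavelet, level=level)
            comps = []
            for i in range(len(coeffs)):
                keep = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
                comps.append(pywt.waverec(keep, wavelet)[:len(x)])
            return comps                      # components sum back to x

        def fit_ar2(x):
            y, X = x[2:], np.column_stack([x[1:-1], x[:-2]])
            phi, *_ = np.linalg.lstsq(X, y, rcond=None)
            return phi, (y - X @ phi).std()

        def simulate_ar2(phi, sigma, x0, x1, n, rng):
            out = [x0, x1]
            for _ in range(n - 2):
                out.append(phi[0] * out[-1] + phi[1] * out[-2] + rng.normal(scale=sigma))
            return np.array(out)

        rng = np.random.default_rng(4)
        t = np.arange(356)
        index = np.sin(2 * np.pi * t / 64) + 0.5 * np.sin(2 * np.pi * t / 16) \
                + 0.3 * rng.normal(size=t.size)      # synthetic AMO/PDO-like index

        sim = np.zeros_like(index)
        for comp in wavelet_components(index):
            phi, sigma = fit_ar2(comp)
            sim += simulate_ar2(phi, sigma, comp[0], comp[1], len(comp), rng)
        print("observed std %.2f, simulated std %.2f" % (index.std(), sim.std()))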

  10. Unbiased Estimate of Dark Energy Density from Type Ia Supernova Data

    NASA Astrophysics Data System (ADS)

    Wang, Yun; Lovelace, Geoffrey

    2001-12-01

    Type Ia supernovae (SNe Ia) are currently the best probes of the dark energy in the universe. To constrain the nature of dark energy, we assume a flat universe and that the weak energy condition is satisfied, and we allow the density of dark energy, ρ_X(z), to be an arbitrary function of redshift. Using simulated data from a space-based SN pencil-beam survey, we find that by optimizing the number of parameters used to parameterize the dimensionless dark energy density, f(z) = ρ_X(z)/ρ_X(z=0), we can obtain an unbiased estimate of both f(z) and the fractional matter density of the universe, Ω_m. A plausible SN pencil-beam survey (with a square degree field of view and for an observational duration of 1 yr) can yield about 2000 SNe Ia with 0 <= z <= 2. Such a survey in space would yield SN peak luminosities with a combined intrinsic and observational dispersion of σ(m_int) = 0.16 mag. We find that for such an idealized survey, Ω_m can be measured to 10% accuracy, and the dark energy density can be estimated to ~20% to z~1.5, and ~20%-40% to z~2, depending on the time dependence of the true dark energy density. Dark energy densities that vary more slowly can be more accurately measured. For the anticipated Supernova/Acceleration Probe (SNAP) mission, Ω_m can be measured to 14% accuracy, and the dark energy density can be estimated to ~20% to z~1.2. Our results suggest that SNAP may gain much sensitivity to the time dependence of the dark energy density and Ω_m by devoting more observational time to the central pencil-beam fields to obtain more SNe Ia at z>1.2. We use both a maximum likelihood analysis and a Monte Carlo analysis (when appropriate) to determine the errors of estimated parameters. We find that the Monte Carlo analysis gives a more accurate estimate of the dark energy density than the maximum likelihood analysis.

  11. A likelihood approach to estimating animal density from binary acoustic transects.

    PubMed

    Horrocks, Julie; Hamilton, David C; Whitehead, Hal

    2011-09-01

    We propose an approximate maximum likelihood method for estimating animal density and abundance from binary passive acoustic transects, when both the probability of detection and the range of detection are unknown. The transect survey is purposely designed so that successive data points are dependent, and this dependence is exploited to simultaneously estimate density, range of detection, and probability of detection. The data are assumed to follow a homogeneous Poisson process in space, and a second-order Markov approximation to the likelihood is used. Simulations show that this method has small bias under the assumptions used to derive the likelihood, although it performs better when the probability of detection is close to 1. The effects of violations of these assumptions are also investigated, and the approach is found to be sensitive to spatial trends in density and clustering. The method is illustrated using real acoustic data from a survey of sperm and humpback whales. PMID:21039393

  12. Bioenergetics estimate of the effects of stocking density on hatchery production of smallmouth bass fingerlings

    USGS Publications Warehouse

    Robel, G.L.; Fisher, W.L.

    1999-01-01

    Production of and consumption by hatchery-reared fingerling (age-0) smallmouth bass Micropterus dolomieu at various simulated stocking densities were estimated with a bioenergetics model. Fish growth rates and pond water temperatures during the 1996 growing season at two hatcheries in Oklahoma were used in the model. Fish growth and simulated consumption and production differed greatly between the two hatcheries, probably because of differences in pond fertilization and mortality rates. Our results suggest that appropriate stocking density depends largely on prey availability as affected by pond fertilization and on fingerling mortality rates. The bioenergetics model provided a useful tool for estimating production at various stocking densities. However, verification of physiological parameters for age-0 fish of hatchery-reared species is needed.

  13. On the use of the noncentral chi-square density function for the distribution of helicopter spectral estimates

    NASA Technical Reports Server (NTRS)

    Garber, Donald P.

    1993-01-01

    A probability density function for the variability of ensemble averaged spectral estimates from helicopter acoustic signals in Gaussian background noise was evaluated. Numerical methods for calculating the density function and for determining confidence limits were explored. Density functions were predicted for both synthesized and experimental data and compared with observed spectral estimate variability.
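
    The noncentral chi-square density is available directly in SciPy, so the kind of variability calculation described above can be sketched as follows; the scaling of the averaged spectral estimate and all parameter values are illustrative assumptions, not the report's parameterization.

      import numpy as np
      from scipy.stats import ncx2

      # Ensemble average of K periodogram estimates of a tone in Gaussian noise.
      # Assume the scaled average 2*K*P/sigma2 follows a noncentral chi-square with
      # 2*K degrees of freedom and noncentrality lam (values below are illustrative).
      K, sigma2, lam = 16, 1.0, 128.0
      df = 2 * K

      # Density of the averaged spectral estimate P (change of variables from ncx2).
      p = np.linspace(0.5, 10.0, 400)
      pdf_p = ncx2.pdf(2 * K * p / sigma2, df, lam) * 2 * K / sigma2

      # 95% confidence limits for the averaged estimate.
      lo, hi = ncx2.ppf([0.025, 0.975], df, lam) * sigma2 / (2 * K)
      print(f"mode near P = {p[pdf_p.argmax()]:.2f}, 95% limits [{lo:.2f}, {hi:.2f}]")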

  14. Quantitative analysis for breast density estimation in low dose chest CT scans.

    PubMed

    Moon, Woo Kyung; Lo, Chung-Ming; Goo, Jin Mo; Bae, Min Sun; Chang, Jung Min; Huang, Chiun-Sheng; Chen, Jeon-Hor; Ivanova, Violeta; Chang, Ruey-Feng

    2014-03-01

    A computational method was developed for measuring breast density from chest computed tomography (CT) images, and its correlation with mammographic density was evaluated. Sixty-nine asymptomatic Asian women (138 breasts) were studied. With the lung area and pectoralis muscle line marked in a template slice, the demons algorithm was applied to the consecutive CT slices to automatically generate the defined breast area. The breast area was then analyzed using fuzzy c-means clustering to separate fibroglandular tissue from fat tissue. The fibroglandular clusters obtained from all CT slices were summed and then divided by the summed total breast area to calculate the percent density for CT. The results were compared with the density estimated from mammographic images. For CT breast density, the coefficients of variation of intraoperator and interoperator measurement were 3.00 % (0.59 %-8.52 %) and 3.09 % (0.20 %-6.98 %), respectively. Breast density measured from CT (22 ± 0.6 %) was lower than that from mammography (34 ± 1.9 %), with a Pearson correlation coefficient of r=0.88. The results suggest that breast density measured from chest CT images correlates well with that from mammography. Reproducible 3D information on breast density can be obtained with the proposed CT-based quantification method. PMID:24643751
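
    A minimal Python sketch of the clustering and percent-density step is given below, using a plain two-cluster fuzzy c-means on voxel intensities; the synthetic intensities, cluster count, and function names are assumptions for illustration and are not the authors' implementation.

      import numpy as np

      def fuzzy_cmeans_1d(x, c=2, m=2.0, n_iter=100, seed=0):
          """Plain fuzzy c-means on 1-D intensities; returns cluster centers and memberships."""
          rng = np.random.default_rng(seed)
          u = rng.random((len(x), c))
          u /= u.sum(axis=1, keepdims=True)
          for _ in range(n_iter):
              um = u ** m
              centers = (um * x[:, None]).sum(axis=0) / um.sum(axis=0)
              d = np.abs(x[:, None] - centers[None, :]) + 1e-12
              u = 1.0 / (d ** (2 / (m - 1)) * (1.0 / d ** (2 / (m - 1))).sum(axis=1, keepdims=True))
          return centers, u

      # Illustrative breast-voxel intensities (HU): fat around -100, fibroglandular around 40.
      rng = np.random.default_rng(1)
      voxels = np.concatenate([rng.normal(-100, 20, 7000), rng.normal(40, 20, 3000)])
      centers, u = fuzzy_cmeans_1d(voxels)
      fibro = np.argmax(centers)                           # cluster with the higher HU center
      percent_density = 100.0 * (np.argmax(u, axis=1) == fibro).mean()
      print(f"percent density ~ {percent_density:.1f}%")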

  15. Population density estimated from locations of individuals on a passive detector array

    USGS Publications Warehouse

    Efford, Murray G.; Dawson, Deanna K.; Borchers, David L.

    2009-01-01

    The density of a closed population of animals occupying stable home ranges may be estimated from detections of individuals on an array of detectors, using newly developed methods for spatially explicit capture–recapture. Likelihood-based methods provide estimates for data from multi-catch traps or from devices that record presence without restricting animal movement ("proximity" detectors such as camera traps and hair snags). As originally proposed, these methods require multiple sampling intervals. We show that equally precise and unbiased estimates may be obtained from a single sampling interval, using only the spatial pattern of detections. This considerably extends the range of possible applications, and we illustrate the potential by estimating density from simulated detections of bird vocalizations on a microphone array. Acoustic detection can be defined as occurring when received signal strength exceeds a threshold. We suggest detection models for binary acoustic data, and for continuous data comprising measurements of all signals above the threshold. While binary data are often sufficient for density estimation, modeling signal strength improves precision when the microphone array is small.
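
    A minimal sketch of the kind of signal-strength detection model suggested above: the expected received level falls off with distance, and detection occurs when the level plus Gaussian noise exceeds a threshold. The linear fall-off and all parameter values are illustrative assumptions.

      import numpy as np
      from scipy.stats import norm

      def p_detect(distance, beta0=110.0, beta1=-0.3, sigma=5.0, threshold=90.0):
          """Probability that received signal strength exceeds the detection threshold.

          Expected level (dB) falls off linearly with distance; measurement noise is
          Gaussian with s.d. sigma. All parameter values are illustrative.
          """
          mu = beta0 + beta1 * distance
          return 1.0 - norm.cdf(threshold, loc=mu, scale=sigma)

      for d in (0, 25, 50, 75, 100):
          print(d, round(p_detect(d), 3))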

  16. Surface estimates of the Atlantic overturning in density space in an eddy-permitting ocean model

    NASA Astrophysics Data System (ADS)

    Grist, Jeremy P.; Josey, Simon A.; Marsh, Robert

    2012-06-01

    A method to estimate the variability of the Atlantic meridional overturning circulation (AMOC) from surface observations is investigated using an eddy-permitting ocean-only model (ORCA-025). The approach is based on the estimate of dense water formation from surface density fluxes. Analysis using 78 years of two repeat forcing model runs reveals that the surface forcing-based estimate accounts for over 60% of the interannual AMOC variability in σ0 coordinates between 37N and 51N. The analysis provides correlations between surface-forced and actual overturning that exceed those obtained in an earlier analysis of a coarser-resolution coupled model. Our results indicate that, in accordance with theoretical considerations behind the method, it provides a better estimate of the overturning in density coordinates than in z coordinates in subpolar latitudes. By considering shorter segments of the model run, it is shown that correlations are particularly enhanced by the method's ability to capture large decadal scale AMOC fluctuations. The inclusion of the anomalous Ekman transport increases the amount of variance explained by an average 16% throughout the North Atlantic and provides the greatest potential for estimating the variability of the AMOC in density space between 33N and 54N. In that latitude range, 70-84% of the variance is explained and the root-mean-square difference is less than 1 Sv when the full run is considered.

  17. Population density estimated from locations of individuals on a passive detector array.

    PubMed

    Efford, Murray G; Dawson, Deanna K; Borchers, David L

    2009-10-01

    The density of a closed population of animals occupying stable home ranges may be estimated from detections of individuals on an array of detectors, using newly developed methods for spatially explicit capture-recapture. Likelihood-based methods provide estimates for data from multi-catch traps or from devices that record presence without restricting animal movement ("proximity" detectors such as camera traps and hair snags). As originally proposed, these methods require multiple sampling intervals. We show that equally precise and unbiased estimates may be obtained from a single sampling interval, using only the spatial pattern of detections. This considerably extends the range of possible applications, and we illustrate the potential by estimating density from simulated detections of bird vocalizations on a microphone array. Acoustic detection can be defined as occurring when received signal strength exceeds a threshold. We suggest detection models for binary acoustic data, and for continuous data comprising measurements of all signals above the threshold. While binary data are often sufficient for density estimation, modeling signal strength improves precision when the microphone array is small. PMID:19886477

  18. Electron Density Profiles in the Ionospheric D-Region Estimated from MF Radio Wave Absorption

    NASA Astrophysics Data System (ADS)

    Nagano, I.; Okada, T.

    Electron density measurements in the lower ionosphere were carried out more than six times between 1975 and 1992 using sounding rockets launched at KSC (Kagoshima Space Center in Japan). Low electron densities were estimated from the absorption of the characteristic mode of ground-based radio signals (17.4 kHz and 873 kHz) in the lower ionosphere, measured by onboard receivers. Two kinds of methods, VLF mode absorption and MF absorption, were developed to estimate the D-region electron density by comparing the observed wave intensity with that calculated by a full wave treatment. In this paper, both absorption methods are introduced with attention to their capability for low electron density measurement. In particular, the S-310-18 rocket experiment is discussed in detail, in which the D-region electron density profile derived from the altitude variation of MF radio wave intensity is presented. Finally, the lower ionospheric electron density profiles measured so far by these methods at mid-latitude in Japan are compared with those of the IRI-95 model.

  19. Density estimation of small-mammal populations using a trapping web and distance sampling methods

    USGS Publications Warehouse

    Anderson, David R.; Burnham, Kenneth P.; White, Gary C.; Otis, David L.

    1983-01-01

    Distance sampling methodology is adapted to enable animal density (number per unit of area) to be estimated from capture-recapture and removal data. A trapping web design provides the link between capture data and distance sampling theory. The estimator of density is D = M_(t+1) f(0), where M_(t+1) is the number of individuals captured and f(0) is computed from the M_(t+1) distances from the web center to the traps in which those individuals were first captured. It is possible to check qualitatively the critical assumption on which the web design and the estimator are based. This is a conceptual paper outlining a new methodology, not a definitive investigation of the best specific way to implement this method. Several alternative sampling and analysis methods are possible within the general framework of distance sampling theory; a few alternatives are discussed and an example is given.

  20. Estimating along-track plasma drift speed from electron density measurements by the three Swarm satellites

    NASA Astrophysics Data System (ADS)

    Park, J.; Lühr, H.; Stolle, C.; Malhotra, G.; Baker, J. B. H.; Buchert, S.; Gill, R.

    2015-07-01

    Plasma convection in the high-latitude ionosphere provides important information about magnetosphere-ionosphere-thermosphere coupling. In this study we estimate the along-track component of plasma convection within and around the polar cap, using electron density profiles measured by the three Swarm satellites. The velocity values estimated from the two different satellite pairs agree with each other. In both hemispheres the estimated velocity is generally anti-sunward, especially for higher speeds. The obtained velocity is in qualitative agreement with Super Dual Auroral Radar Network data. Our method can supplement currently available instruments for ionospheric plasma velocity measurements, especially in cases where these traditional instruments suffer from their inherent limitations. Also, the method can be generalized to other satellite constellations carrying electron density probes.

  1. Multivariate Granger causality: an estimation framework based on factorization of the spectral density matrix

    PubMed Central

    Wen, Xiaotong; Rangarajan, Govindan; Ding, Mingzhou

    2013-01-01

    Granger causality is increasingly being applied to multi-electrode neurophysiological and functional imaging data to characterize directional interactions between neurons and brain regions. For a multivariate dataset, one might be interested in different subsets of the recorded neurons or brain regions. According to the current estimation framework, for each subset, one conducts a separate autoregressive model fitting process, introducing the potential for unwanted variability and uncertainty. In this paper, we propose a multivariate framework for estimating Granger causality. It is based on spectral density matrix factorization and offers the advantage that the estimation of such a matrix needs to be done only once for the entire multivariate dataset. For any subset of recorded data, Granger causality can be calculated through factorizing the appropriate submatrix of the overall spectral density matrix. PMID:23858479

  2. Estimations of population density for selected periods between the Neolithic and AD 1800.

    PubMed

    Zimmermann, Andreas; Hilpert, Johanna; Wendt, Karl Peter

    2009-04-01

    We describe a combination of methods applied to obtain reliable estimations of population density using archaeological data. The combination is based on a hierarchical model of scale levels. The necessary data and methods used to obtain the results are chosen so as to define transfer functions from one scale level to another. We apply our method to data sets from western Germany that cover early Neolithic, Iron Age, Roman, and Merovingian times as well as historical data from AD 1800. Error margins and natural and historical variability are discussed. Our results for nonstate societies are always lower than conventional estimations compiled from the literature, and we discuss the reasons for this finding. At the end, we compare the calculated local and global population densities with other estimations from different parts of the world. PMID:19943751

  3. Estimating effective data density in a satellite retrieval or an objective analysis

    NASA Technical Reports Server (NTRS)

    Purser, R. J.; Huang, H.-L.

    1993-01-01

    An attempt is made to formulate consistent objective definitions of the concept of 'effective data density' applicable both in the context of satellite soundings and more generally in objective data analysis. The definitions based upon various forms of Backus-Gilbert 'spread' functions are found to be seriously misleading in satellite soundings where the model resolution function (expressing the sensitivity of retrieval or analysis to changes in the background error) features sidelobes. Instead, estimates derived by smoothing the trace components of the model resolution function are proposed. The new estimates are found to be more reliable and informative in simulated satellite retrieval problems and, for the special case of uniformly spaced perfect observations, agree exactly with their actual density. The new estimates integrate to the 'degrees of freedom for signal', a diagnostic that is invariant to changes of units or coordinates used.
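
    The proposal can be illustrated with a small numerical sketch: build a toy model resolution matrix, smooth its trace components (diagonal), and check that the resulting density integrates to the degrees of freedom for signal. The Gaussian resolution kernel and the running-mean smoother are assumptions made for illustration.

      import numpy as np

      # Toy model resolution matrix for a 1-D grid: a Gaussian averaging kernel.
      n, dx = 100, 1.0
      x = np.arange(n) * dx
      A = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 3.0) ** 2)
      A /= A.sum(axis=1, keepdims=True)                # rows average the true state

      # Effective data density: smooth the trace components (diagonal) of A and
      # divide by the grid spacing; the 5-point running mean is an assumed smoother.
      diag = np.diag(A)
      window = np.ones(5) / 5.0
      density = np.convolve(diag, window, mode="same") / dx

      # Apart from edge effects of the smoother, the density integrates (sums)
      # to the degrees of freedom for signal, trace(A).
      print(np.trace(A), density.sum() * dx)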

  4. Wavelet-based time-dependent travel time tomography method and its application in imaging the Etna volcano in Italy

    NASA Astrophysics Data System (ADS)

    Zhang, Xin; Zhang, Haijiang

    2015-10-01

    It has been a challenge to image velocity changes in real time by seismic travel time tomography. If more seismic events are included in the tomographic system, the inverted velocity models do not have the necessary time resolution to resolve velocity changes. But if fewer events are used for real-time tomography, the system is less stable and the inverted model may contain artifacts, so the resolved velocity changes may not be real. To mitigate these issues, we propose a wavelet-based time-dependent double-difference (DD) tomography method. The new method combines the multiscale property of wavelet representation and the fast-converging property of the simultaneous algebraic reconstruction technique to solve the velocity models at multiple scales for sequential time segments. We first test the new method using synthetic data constructed from the real event and station distribution for Mount Etna volcano in Italy. Then we show its effectiveness in determining velocity changes for the 2001 and 2002 eruptions of Mount Etna volcano. Compared to standard DD tomography that uses seismic events from a longer time period, wavelet-based time-dependent tomography better resolves velocity changes that may be caused by fracture closure and opening as well as fluid migration before and after volcano eruptions.
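
    The multiscale parameterization at the heart of the method can be illustrated with PyWavelets: decompose a 2-D velocity-change model into wavelet scales and reconstruct it coarse to fine. The sketch below uses an arbitrary synthetic anomaly and a db4 wavelet; it illustrates the representation only, not the authors' tomographic inversion.

      import numpy as np
      import pywt

      # Synthetic 2-D velocity-change model (per cent) with a localized anomaly.
      nx, nz = 64, 64
      model = np.zeros((nz, nx))
      model[20:30, 28:40] = -3.0                 # low-velocity patch, e.g. fluid intrusion

      # Multilevel 2-D DWT: the inversion can solve for these coefficients scale by scale.
      coeffs = pywt.wavedec2(model, "db4", level=3)

      # Coarse-to-fine reconstructions: zero the detail coefficients above a given scale.
      for keep in range(len(coeffs)):
          partial = [coeffs[0]] + [
              tuple(c if i < keep else np.zeros_like(c) for c in det)
              for i, det in enumerate(coeffs[1:], start=0)
          ]
          recon = pywt.waverec2(partial, "db4")
          print(f"detail scales kept: {keep}, max |anomaly| = {np.abs(recon).max():.2f}")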

  5. Wavelet-based approaches for multiple hypothesis testing in activation mapping of functional magnetic resonance images of the human brain

    NASA Astrophysics Data System (ADS)

    Fadili, Jalal M.; Bullmore, Edward T.

    2003-11-01

    Wavelet-based methods for multiple hypothesis testing are described and their potential for activation mapping of human functional magnetic resonance imaging (fMRI) data is investigated. In this approach, we emphasize convergence between methods of wavelet thresholding or shrinkage and the problem of multiple hypothesis testing in both classical and Bayesian contexts. Specifically, our interest is focused on ensuring a trade-off between type I error control and power dissipation. We describe a technique for controlling the false discovery rate at an arbitrary level of type I error in testing multiple wavelet coefficients generated by a 2D discrete wavelet transform (DWT) of spatial maps of fMRI time series statistics. We also describe and apply recursive testing methods that can be used to define a threshold unique to each level and orientation of the 2D-DWT. Bayesian methods, incorporating a formal model for the anticipated sparseness of wavelet coefficients representing the signal or true image, are also tractable. These methods are comparatively evaluated by analysis of "null" images (acquired with the subject at rest), in which case the number of positive tests should be exactly as predicted under the null hypothesis, and an experimental dataset acquired from 5 normal volunteers during an event-related finger movement task. We show that all three wavelet-based methods of multiple hypothesis testing have good type I error control (the FDR method being most conservative) and generate plausible brain activation maps.
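
    A simplified stand-in for the FDR procedure described above is sketched below: compute a 2-D DWT of a statistic map, convert coefficients to p-values under an assumed Gaussian null with a robustly estimated scale, apply the Benjamini-Hochberg rule across all coefficients, and reconstruct. The noise model, wavelet choice, and thresholding across pooled levels are illustrative simplifications.

      import numpy as np
      import pywt
      from scipy.stats import norm

      def fdr_threshold(pvals, q=0.05):
          """Benjamini-Hochberg: largest p-value cutoff controlling FDR at level q."""
          p = np.sort(pvals)
          k = np.arange(1, len(p) + 1)
          passed = p <= q * k / len(p)
          return p[passed].max() if passed.any() else 0.0

      rng = np.random.default_rng(0)
      stat_map = rng.normal(size=(64, 64))
      stat_map[24:40, 24:40] += 1.5                      # "activation" embedded in noise

      coeffs = pywt.wavedec2(stat_map, "db2", level=3)
      arr, slices = pywt.coeffs_to_array(coeffs)

      sigma = np.median(np.abs(arr)) / 0.6745            # robust noise-scale estimate
      pvals = 2 * norm.sf(np.abs(arr) / sigma)           # two-sided p-values per coefficient
      cut = fdr_threshold(pvals.ravel(), q=0.05)

      arr[pvals > cut] = 0.0                             # keep only FDR-significant coefficients
      denoised = pywt.waverec2(pywt.array_to_coeffs(arr, slices, output_format="wavedec2"), "db2")
      print("surviving coefficients:", int((pvals <= cut).sum()))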

  6. A secure distribution method for digitized image scan using a two-step wavelet-based technique: A Telemedicine Case.

    PubMed

    Yee Lau, Phooi; Ozawa, Shinji

    2005-01-01

    The objective of this paper is to present a secure method for distributing healthcare records (e.g. video streams and digitized image scans). The availability of prompt and expert medical care can meaningfully improve health care services in understaffed rural and remote areas, the sharing of available facilities, and medical records referral. Here, a secure method is developed for distributing healthcare records using a two-step wavelet-based technique: first, a 2-level db8 wavelet transform for textual elimination, and then a 4-level db8 wavelet transform for digital watermarking. The first transform is used to detect and eliminate textual information found on images, protecting data privacy and confidentiality. The second imposes imperceptible marks that identify the owner, track authorized users, or detect malicious tampering of documents. Experiments were performed on different digitized image scans. The experimental results illustrate that both wavelet-based steps are conceptually simple and able to effectively detect textual information, while the watermarking technique is robust to noise and compression. PMID:17282675
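
    The watermarking half of such a scheme can be sketched with PyWavelets as below: a pseudorandom watermark is added to a coarse detail subband of a 4-level db8 decomposition and later detected by correlation. The subband choice, embedding strength, and correlation detector are assumptions for illustration, not the authors' exact algorithm.

      import numpy as np
      import pywt

      def embed_watermark(image, key=42, alpha=2.0):
          """Add a pseudorandom +-1 watermark to a level-4 db8 detail subband."""
          coeffs = pywt.wavedec2(image, "db8", level=4)
          cH4, cV4, cD4 = coeffs[1]                       # coarsest detail subbands
          rng = np.random.default_rng(key)
          wm = rng.choice([-1.0, 1.0], size=cH4.shape)
          coeffs[1] = (cH4 + alpha * wm, cV4, cD4)
          return pywt.waverec2(coeffs, "db8"), wm

      def detect_watermark(image, wm):
          """Correlate the same subband of a test image against the known watermark."""
          cH4 = pywt.wavedec2(image, "db8", level=4)[1][0]
          return float(np.corrcoef(cH4.ravel(), wm.ravel())[0, 1])

      yy, xx = np.mgrid[0:256, 0:256]
      scan = 100 + 40 * np.sin(xx / 25.0) + 20 * np.cos(yy / 40.0)   # smooth stand-in image
      marked, wm = embed_watermark(scan)
      print("marked image correlation:  ", detect_watermark(marked, wm))
      print("unmarked image correlation:", detect_watermark(scan, wm))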

  7. Three-dimensional Wavelet-based Adaptive Mesh Refinement for Global Atmospheric Chemical Transport Modeling

    NASA Astrophysics Data System (ADS)

    Rastigejev, Y.; Semakin, A. N.

    2013-12-01

    Accurate numerical simulations of global scale three-dimensional atmospheric chemical transport models (CTMs) are essential for studies of many important atmospheric chemistry problems such as the adverse effects of air pollutants on human health, ecosystems and the Earth's climate. These simulations usually require large CPU time due to numerical difficulties associated with a wide range of spatial and temporal scales, nonlinearity and a large number of reacting species. In our previous work we have shown that in order to achieve adequate convergence rate and accuracy, the mesh spacing in numerical simulation of global synoptic-scale pollution plume transport must be decreased to a few kilometers. This resolution is difficult to achieve for global CTMs on uniform or quasi-uniform grids. To address the difficulty described above, we developed a three-dimensional Wavelet-based Adaptive Mesh Refinement (WAMR) algorithm. The method employs a highly non-uniform adaptive grid with fine resolution over the areas of interest without requiring small grid-spacing throughout the entire domain. The method uses a multi-grid iterative solver that naturally takes advantage of the multilevel structure of the adaptive grid. In order to represent the multilevel adaptive grid efficiently, a dynamic data structure based on indirect memory addressing has been developed. The data structure allows rapid access to individual points, fast inter-grid operations and re-gridding. The WAMR method has been implemented on parallel computer architectures. The parallel algorithm is based on a run-time partitioning and load-balancing scheme for the adaptive grid. The partitioning scheme maintains locality to reduce communications between computing nodes. The parallel scheme was found to be cost-effective. Specifically, we obtained an order of magnitude increase in computational speed for numerical simulations performed on a twelve-core single processor workstation. We have applied the WAMR method to numerical simulation of several benchmark problems, including simulation of traveling three-dimensional reactive and inert transpacific pollution plumes. It was shown earlier that conventionally used global CTMs implemented on stationary grids are incapable of reproducing these plumes' dynamics due to excessive numerical diffusion caused by limitations in the grid resolution. It has been shown that the WAMR algorithm allows us to use grids one to two orders of magnitude finer than static grid techniques in the regions of fine spatial scales without significantly increasing CPU time. Therefore the developed WAMR method has significant advantages over conventional fixed-resolution computational techniques in terms of accuracy and/or computational cost and allows accurate simulation of important multi-scale chemical transport problems that cannot be simulated with the standard static grid techniques currently utilized by the majority of global atmospheric chemistry models. This work is supported by a grant from the National Science Foundation under Award No. HRD-1036563.
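
    The refinement criterion typical of wavelet-based AMR can be sketched in a few lines: compute detail coefficients of the solution field and flag cells whose coefficients exceed a tolerance. The single-level 2-D decomposition, db4 wavelet, and tolerance below are illustrative assumptions rather than the WAMR implementation.

      import numpy as np
      import pywt

      # Solution field on the current grid: a smooth background plus a sharp plume.
      n = 128
      x, y = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n))
      field = np.exp(-((x - 0.3) ** 2 + (y - 0.6) ** 2) / 0.002) + 0.1 * np.sin(4 * np.pi * x)

      # One level of a 2-D DWT: large detail coefficients mark poorly resolved regions.
      cA, (cH, cV, cD) = pywt.dwt2(field, "db4")
      detail = np.sqrt(cH ** 2 + cV ** 2 + cD ** 2)

      eps = 1e-3 * np.abs(field).max()          # refinement tolerance (an assumed choice)
      refine = detail > eps                     # coarse cells flagged for refinement

      print(f"flagged {refine.sum()} of {refine.size} coarse cells "
            f"({100 * refine.mean():.1f}%) for refinement")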

  8. Non-Gaussian probabilistic MEG source localisation based on kernel density estimation.

    PubMed

    Mohseni, Hamid R; Kringelbach, Morten L; Woolrich, Mark W; Baker, Adam; Aziz, Tipu Z; Probert-Smith, Penny

    2014-02-15

    There is strong evidence to suggest that data recorded from magnetoencephalography (MEG) follows a non-Gaussian distribution. However, existing standard methods for source localisation model the data using only second order statistics, and therefore use the inherent assumption of a Gaussian distribution. In this paper, we present a new general method for non-Gaussian source estimation of stationary signals for localising brain activity from MEG data. By providing a Bayesian formulation for MEG source localisation, we show that the source probability density function (pdf), which is not necessarily Gaussian, can be estimated using multivariate kernel density estimators. In the case of Gaussian data, the solution of the method is equivalent to that of widely used linearly constrained minimum variance (LCMV) beamformer. The method is also extended to handle data with highly correlated sources using the marginal distribution of the estimated joint distribution, which, in the case of Gaussian measurements, corresponds to the null-beamformer. The proposed non-Gaussian source localisation approach is shown to give better spatial estimates than the LCMV beamformer, both in simulations incorporating non-Gaussian signals, and in real MEG measurements of auditory and visual evoked responses, where the highly correlated sources are known to be difficult to estimate. PMID:24055702
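
    The core ingredient, a multivariate kernel density estimate of a non-Gaussian source distribution, can be sketched with SciPy as follows; the bimodal synthetic data stand in for reconstructed source amplitudes, and the example does not reproduce the beamforming formulation itself.

      import numpy as np
      from scipy.stats import gaussian_kde

      rng = np.random.default_rng(0)

      # Stand-in for reconstructed source amplitudes at two locations: a bimodal
      # (clearly non-Gaussian) mixture, the kind of data a Gaussian model mishandles.
      n = 5000
      comp = rng.random(n) < 0.5
      samples = np.where(comp[:, None],
                         rng.normal([-2.0, 1.0], 0.5, size=(n, 2)),
                         rng.normal([2.0, -1.0], 0.7, size=(n, 2)))

      kde = gaussian_kde(samples.T)             # multivariate KDE of the joint pdf

      # Evaluate the estimated density on a grid and locate its mode.
      grid = np.mgrid[-4:4:80j, -3:3:60j].reshape(2, -1)
      pdf = kde(grid).reshape(80, 60)
      i, j = np.unravel_index(pdf.argmax(), pdf.shape)
      print("estimated mode near:", grid.reshape(2, 80, 60)[:, i, j])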

  9. Mammographic density and estimation of breast cancer risk in intermediate risk population.

    PubMed

    Tesic, Vanja; Kolaric, Branko; Znaor, Ariana; Kuna, Sanja Kusacic; Brkljacic, Boris

    2013-01-01

    It is not clear to what extent mammographic density represents a risk factor for breast cancer among women with moderate risk for disease. We conducted a population-based study to estimate the independent effect of breast density on breast cancer risk and to evaluate the potential of breast density as a marker of risk in an intermediate risk population. From November 2006 to April 2009, data that included American College of Radiology Breast Imaging Reporting and Data System (BI-RADS) breast density categories and risk information were collected on 52,752 women aged 50-69 years without previously diagnosed breast cancer who underwent screening mammography examination. A total of 257 screen-detected breast cancers were identified. Logistic regression was used to assess the effect of breast density on breast carcinoma risk and to control for other risk factors. The risk increased with density and the odds ratio for breast cancer among women with dense breast (heterogeneously and extremely dense breast), was 1.9 (95% confidence interval, 1.3-2.8) compared with women with almost entirely fat breasts, after adjustment for age, body mass index, age at menarche, age at menopause, age at first childbirth, number of live births, use of oral contraceptive, family history of breast cancer, prior breast procedures, and hormone replacement therapy use that were all significantly related to breast density (p < 0.001). In multivariate model, breast cancer risk increased with age, body mass index, family history of breast cancer, prior breast procedure and breast density and decreased with number of live births. Our finding that mammographic density is an independent risk factor for breast cancer indicates the importance of breast density measurements for breast cancer risk assessment also in moderate risk populations. PMID:23173778

  10. A method to estimate the neutral atmospheric density near the ionospheric main peak of Mars

    NASA Astrophysics Data System (ADS)

    Zou, Hong; Ye, Yu Guang; Wang, Jin Song; Nielsen, Erling; Cui, Jun; Wang, Xiao Dong

    2016-04-01

    A method to estimate the neutral atmospheric density near the ionospheric main peak of Mars is introduced in this study. The neutral densities at 130 km can be derived from the ionospheric and atmospheric measurements of the Radio Science experiment on board Mars Global Surveyor (MGS). The derived neutral densities cover a large longitude range in northern high latitudes from summer to late autumn during 3 Martian years, which fills the gap of the previous observations for the upper atmosphere of Mars. The simulations of the Laboratoire de Météorologie Dynamique Mars global circulation model can be corrected with a simple linear equation to fit the neutral densities derived from the first MGS/RS (Radio Science) data sets (EDS1). The corrected simulations with the same correction parameters as for EDS1 match the derived neutral densities from two other MGS/RS data sets (EDS2 and EDS3) very well. The derived neutral density from EDS3 shows a dust storm effect, which is in accord with the Mars Express (MEX) Spectroscopy for Investigation of Characteristics of the Atmosphere of Mars measurement. The neutral density derived from the MGS/RS measurements can be used to validate the Martian atmospheric models. The method presented in this study can be applied to other radio occultation measurements, such as the result of the Radio Science experiment on board MEX.

  11. Fracture density estimates in glaciogenic deposits from P-wave velocity reductions

    SciTech Connect

    Karaman, A.; Carpenter, P.J.

    1997-01-01

    Subsidence-induced fracturing of glaciogenic deposits over coal mines in the southern Illinois basin alters hydraulic properties of drift aquifers and exposes these aquifers to surface contaminants. In this study, refraction tomography surveys were used in conjunction with a generalized form of a seismic fracture density model to estimate the vertical and lateral extent of fracturing in a 12-m thick overburden of loess, clay, glacial till, and outwash above a longwall coal mine at 90 m depth. This generalized model accurately predicted fracture trends and densities from azimuthal P-wave velocity variations over unsaturated single- and dual-parallel fractures exposed at the surface. These fractures extended at least 6 m and exhibited 10-15 cm apertures at the surface. The pre- and postsubsidence velocity ratios were converted into fracture densities that exhibited qualitative agreement with the observed surface and inferred subsurface fracture distribution. Velocity reductions as large as 25% were imaged over the static tension zone of the mine where fracturing may extend to depths of 10-15 m. Finally, the seismically derived fracture density estimates were plotted as a function of subsidence-induced drawdown across the panel to estimate the average specific storage of the sand and gravel lower drift aquifer. This value was at least 20 times higher than the presubsidence (unfractured) specific storage for the same aquifer.

  12. Density estimation in a wolverine population using spatial capture-recapture models

    USGS Publications Warehouse

    Royle, J. Andrew; Magoun, Audrey J.; Gardner, Beth; Valkenbury, Patrick; Lowell, Richard E.

    2011-01-01

    Classical closed-population capture-recapture models do not accommodate the spatial information inherent in encounter history data obtained from camera-trapping studies. As a result, individual heterogeneity in encounter probability is induced, and it is not possible to estimate density objectively because trap arrays do not have a well-defined sample area. We applied newly-developed, capture-recapture models that accommodate the spatial attribute inherent in capture-recapture data to a population of wolverines (Gulo gulo) in Southeast Alaska in 2008. We used camera-trapping data collected from 37 cameras in a 2,140-km2 area of forested and open habitats largely enclosed by ocean and glacial icefields. We detected 21 unique individuals 115 times. Wolverines exhibited a strong positive trap response, with an increased tendency to revisit previously visited traps. Under the trap-response model, we estimated wolverine density at 9.7 individuals/1,000-km2(95% Bayesian CI: 5.9-15.0). Our model provides a formal statistical framework for estimating density from wolverine camera-trapping studies that accounts for a behavioral response due to baited traps. Further, our model-based estimator does not have strict requirements about the spatial configuration of traps or length of trapping sessions, providing considerable operational flexibility in the development of field studies.

  13. GPU Acceleration of Mean Free Path Based Kernel Density Estimators for Monte Carlo Neutronics Simulations

    SciTech Connect

    Burke, Timothy P.; Kiedrowski, Brian C.; Martin, William R.; Brown, Forrest B.

    2015-11-19

    Kernel Density Estimators (KDEs) are a non-parametric density estimation technique that has recently been applied to Monte Carlo radiation transport simulations. Kernel density estimators are an alternative to histogram tallies for obtaining global solutions in Monte Carlo tallies. With KDEs, a single event, either a collision or particle track, can contribute to the score at multiple tally points with the uncertainty at those points being independent of the desired resolution of the solution. Thus, KDEs show potential for obtaining estimates of a global solution with reduced variance when compared to a histogram. Previously, KDEs have been applied to neutronics for one-group reactor physics problems and fixed source shielding applications. However, little work was done to obtain reaction rates using KDEs. This paper introduces a new form of the MFP KDE that is capable of handling general geometries. Furthermore, extending the MFP KDE to 2-D problems in continuous energy introduces inaccuracies to the solution. An ad-hoc solution to these inaccuracies is introduced that produces errors smaller than 4% at material interfaces.
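
    The general idea of a KDE tally, each sampled collision contributing a kernel rather than a histogram increment, can be sketched as below. A fixed Gaussian bandwidth in one dimension is used for simplicity; the paper's estimator instead ties the kernel width to the local mean free path, so this is an illustration of the tally concept only.

      import numpy as np

      def kde_tally(collision_x, weights, tally_x, bandwidth):
          """Score each collision to all tally points with a Gaussian kernel.

          A fixed bandwidth is used here for simplicity; the MFP KDE of the paper
          ties the kernel width to the local mean free path instead.
          """
          z = (tally_x[None, :] - collision_x[:, None]) / bandwidth
          k = np.exp(-0.5 * z ** 2) / (np.sqrt(2 * np.pi) * bandwidth)
          return (weights[:, None] * k).sum(axis=0) / weights.sum()

      rng = np.random.default_rng(0)
      collisions = rng.exponential(scale=2.0, size=20000)      # synthetic collision sites
      w = np.ones_like(collisions)
      x_tally = np.linspace(0.0, 10.0, 50)
      print(kde_tally(collisions, w, x_tally, bandwidth=0.25)[:5])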

  14. Estimation of the density of the clay-organic complex in soil

    NASA Astrophysics Data System (ADS)

    Czyż, Ewa A.; Dexter, Anthony R.

    2016-01-01

    Soil bulk density was investigated as a function of soil contents of clay and organic matter in arable agricultural soils at a range of locations. The contents of clay and organic matter were used in an algorithmic procedure to calculate the amounts of clay-organic complex in the soils. Values of soil bulk density as a function of soil organic matter content were used to estimate the amount of pore space occupied by unit amount of complex. These estimations show that the effective density of the clay-organic matter complex is very low with a mean value of 0.17 ± 0.04 g ml^-1 in arable soils. This value is much smaller than the soil bulk density and smaller than any of the other components of the soil considered separately (with the exception of the gas content). This low value suggests that the clay-soil complex has an extremely porous and open structure. When the complex is considered as a separate phase in soil, it can account for the observed reduction of bulk density with increasing content of organic matter.

  15. Estimation and Modeling of Enceladus Plume Jet Density Using Reaction Wheel Control Data

    NASA Technical Reports Server (NTRS)

    Lee, Allan Y.; Wang, Eric K.; Pilinski, Emily B.; Macala, Glenn A.; Feldman, Antonette

    2010-01-01

    The Cassini spacecraft was launched on October 15, 1997 by a Titan 4B launch vehicle. After an interplanetary cruise of almost seven years, it arrived at Saturn on June 30, 2004. In 2005, Cassini completed three flybys of Enceladus, a small, icy satellite of Saturn. Observations made during these flybys confirmed the existence of a water vapor plume in the south polar region of Enceladus. Five additional low-altitude flybys of Enceladus were successfully executed in 2008-9 to better characterize these watery plumes. The first of these flybys was the 50-km Enceladus-3 (E3) flyby executed on March 12, 2008. During the E3 flyby, the spacecraft attitude was controlled by a set of three reaction wheels. During the flyby, multiple plume jets imparted disturbance torque on the spacecraft resulting in small but visible attitude control errors. Using the known and unique transfer function between the disturbance torque and the attitude control error, the collected attitude control error telemetry could be used to estimate the disturbance torque. The effectiveness of this methodology is confirmed using the E3 telemetry data. Given good estimates of spacecraft's projected area, center of pressure location, and spacecraft velocity, the time history of the Enceladus plume density is reconstructed accordingly. The 1 sigma uncertainty of the estimated density is 7.7%. Next, we modeled the density due to each plume jet as a function of both the radial and angular distances of the spacecraft from the plume source. We also conjecture that the total plume density experienced by the spacecraft is the sum of the component plume densities. By comparing the time history of the reconstructed E3 plume density with that predicted by the plume model, values of the plume model parameters are determined. Results obtained are compared with those determined by other Cassini science instruments.

  16. Estimation and Modeling of Enceladus Plume Jet Density Using Reaction Wheel Control Data

    NASA Technical Reports Server (NTRS)

    Lee, Allan Y.; Wang, Eric K.; Pilinski, Emily B.; Macala, Glenn A.; Feldman, Antonette

    2010-01-01

    The Cassini spacecraft was launched on October 15, 1997 by a Titan 4B launch vehicle. After an interplanetary cruise of almost seven years, it arrived at Saturn on June 30, 2004. In 2005, Cassini completed three flybys of Enceladus, a small, icy satellite of Saturn. Observations made during these flybys confirmed the existence of a water vapor plume in the south polar region of Enceladus. Five additional low-altitude flybys of Enceladus were successfully executed in 2008-9 to better characterize these watery plumes. The first of these flybys was the 50-km Enceladus-3 (E3) flyby executed on March 12, 2008. During the E3 flyby, the spacecraft attitude was controlled by a set of three reaction wheels. During the flyby, multiple plume jets imparted disturbance torque on the spacecraft resulting in small but visible attitude control errors. Using the known and unique transfer function between the disturbance torque and the attitude control error, the collected attitude control error telemetry could be used to estimate the disturbance torque. The effectiveness of this methodology is confirmed using the E3 telemetry data. Given good estimates of spacecraft's projected area, center of pressure location, and spacecraft velocity, the time history of the Enceladus plume density is reconstructed accordingly. The 1-sigma uncertainty of the estimated density is 7.7%. Next, we modeled the density due to each plume jet as a function of both the radial and angular distances of the spacecraft from the plume source. We also conjecture that the total plume density experienced by the spacecraft is the sum of the component plume densities. By comparing the time history of the reconstructed E3 plume density with that predicted by the plume model, values of the plume model parameters are determined. Results obtained are compared with those determined by other Cassini science instruments.

  17. A comparison of selected parametric and imputation methods for estimating snag density and snag quality attributes

    USGS Publications Warehouse

    Eskelson, Bianca N.I.; Hagar, Joan; Temesgen, Hailemariam

    2012-01-01

    Snags (standing dead trees) are an essential structural component of forests. Because wildlife use of snags depends on size and decay stage, snag density estimation without any information about snag quality attributes is of little value for wildlife management decision makers. Little work has been done to develop models that allow multivariate estimation of snag density by snag quality class. Using climate, topography, Landsat TM data, stand age and forest type collected for 2356 forested Forest Inventory and Analysis plots in western Washington and western Oregon, we evaluated two multivariate techniques for their abilities to estimate density of snags by three decay classes. The density of live trees and snags in three decay classes (D1: recently dead, little decay; D2: decay, without top, some branches and bark missing; D3: extensive decay, missing bark and most branches) with diameter at breast height (DBH) ≥ 12.7 cm was estimated using a nonparametric random forest nearest neighbor imputation technique (RF) and a parametric two-stage model (QPORD), for which the number of trees per hectare was estimated with a Quasipoisson model in the first stage and the probability of belonging to a tree status class (live, D1, D2, D3) was estimated with an ordinal regression model in the second stage. The presence of large snags with DBH ≥ 50 cm was predicted using a logistic regression and RF imputation. Because of the more homogenous conditions on private forest lands, snag density by decay class was predicted with higher accuracies on private forest lands than on public lands, while presence of large snags was more accurately predicted on public lands, owing to the higher prevalence of large snags on public lands. RF outperformed the QPORD model in terms of percent accurate predictions, while QPORD provided smaller root mean square errors in predicting snag density by decay class. The logistic regression model achieved more accurate presence/absence classification of large snags than the RF imputation approach. Adjusting the decision threshold to account for unequal size for presence and absence classes is more straightforward for the logistic regression than for the RF imputation approach. Overall, model accuracies were poor in this study, which can be attributed to the poor predictive quality of the explanatory variables and the large range of forest types and geographic conditions observed in the data.

  18. Pyroclastic density current volume estimation after the 2010 Merapi volcano eruption using X-band SAR

    NASA Astrophysics Data System (ADS)

    Bignami, Christian; Ruch, Joel; Chini, Marco; Neri, Marco; Buongiorno, Maria Fabrizia; Hidayati, Sri; Sayudi, Dewi Sri; Surono

    2013-07-01

    Pyroclastic density current deposits remobilized by water during periods of heavy rainfall trigger lahars (volcanic mudflows) that affect inhabited areas at considerable distance from volcanoes, even years after an eruption. Here we present an innovative approach to detect and estimate the thickness and volume of pyroclastic density current (PDC) deposits as well as erosional versus depositional environments. We use SAR interferometry to compare an airborne digital surface model (DSM) acquired in 2004 to a post-eruption 2010 DSM created using COSMO-SkyMed satellite data to estimate the volume of 2010 Merapi eruption PDC deposits along the Gendol river (Kali Gendol, KG). Results show PDC thicknesses of up to 75 m in canyons and a volume of about 40 × 10^6 m^3, mainly along KG, and at distances of up to 16 km from the volcano summit. This volume estimate corresponds mainly to the 2010 pyroclastic deposits along the KG - material that is potentially available to produce lahars. Our volume estimate is approximately twice that estimated by field studies, a difference we consider acceptable given the uncertainties involved in both satellite- and field-based methods. Our technique can be used to rapidly evaluate volumes of PDC deposits at active volcanoes, in remote settings and where continuous activity may prevent field observations.
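
    The DSM-differencing volume computation can be sketched as follows: positive elevation differences between co-registered pre- and post-eruption surfaces, multiplied by the cell area and optionally masked to the channel. The synthetic grids and the 10 m cell size are illustrative assumptions.

      import numpy as np

      def deposit_volume(dsm_pre, dsm_post, cell_area, channel_mask=None):
          """Volume of material deposited between two co-registered DSMs (m^3).

          Positive differences are treated as deposition, negative as erosion; the
          optional mask restricts the sum to the channel of interest.
          """
          dz = dsm_post - dsm_pre
          if channel_mask is not None:
              dz = np.where(channel_mask, dz, 0.0)
          deposition = np.clip(dz, 0.0, None).sum() * cell_area
          erosion = -np.clip(dz, None, 0.0).sum() * cell_area
          return deposition, erosion

      # Synthetic 10 m grids: a 60 m thick deposit filling part of a canyon.
      pre = np.zeros((500, 500))
      post = pre.copy()
      post[200:260, 100:400] += 60.0
      dep, ero = deposit_volume(pre, post, cell_area=10.0 * 10.0)
      print(f"deposited volume: {dep / 1e6:.1f} x 10^6 m^3")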

  19. Combining Breeding Bird Survey and distance sampling to estimate density of migrant and breeding birds

    USGS Publications Warehouse

    Somershoe, S.G.; Twedt, D.J.; Reid, B.

    2006-01-01

    We combined Breeding Bird Survey point count protocol and distance sampling to survey spring migrant and breeding birds in Vicksburg National Military Park on 33 days between March and June of 2003 and 2004. For 26 of 106 detected species, we used program DISTANCE to estimate detection probabilities and densities from 660 3-min point counts in which detections were recorded within four distance annuli. For most species, estimates of detection probability, and thereby density estimates, were improved through incorporation of the proportion of forest cover at point count locations as a covariate. Our results suggest Breeding Bird Surveys would benefit from the use of distance sampling and a quantitative characterization of habitat at point count locations. During spring migration, we estimated that the most common migrant species accounted for a population of 5000-9000 birds in Vicksburg National Military Park (636 ha). Species with average populations of 300 individuals during migration were: Blue-gray Gnatcatcher (Polioptila caerulea), Cedar Waxwing (Bombycilla cedrorum), White-eyed Vireo (Vireo griseus), Indigo Bunting (Passerina cyanea), and Ruby-crowned Kinglet (Regulus calendula). Of 56 species that bred in Vicksburg National Military Park, we estimated that the most common 18 species accounted for 8150 individuals. The six most abundant breeding species, Blue-gray Gnatcatcher, White-eyed Vireo, Summer Tanager (Piranga rubra), Northern Cardinal (Cardinalis cardinalis), Carolina Wren (Thryothorus ludovicianus), and Brown-headed Cowbird (Molothrus ater), accounted for 5800 individuals.

  20. Nearest neighbor density ratio estimation for large-scale applications in astronomy

    NASA Astrophysics Data System (ADS)

    Kremer, J.; Gieseke, F.; Steenstrup Pedersen, K.; Igel, C.

    2015-09-01

    In astronomical applications of machine learning, the distribution of objects used for building a model is often different from the distribution of the objects the model is later applied to. This is known as sample selection bias, which is a major challenge for statistical inference as one can no longer assume that the labeled training data are representative. To address this issue, one can re-weight the labeled training patterns to match the distribution of unlabeled data that are available already in the training phase. There are many examples in practice where this strategy yielded good results, but estimating the weights reliably from a finite sample is challenging. We consider an efficient nearest neighbor density ratio estimator that can exploit large samples to increase the accuracy of the weight estimates. To solve the problem of choosing the right neighborhood size, we propose to use cross-validation on a model selection criterion that is unbiased under covariate shift. The resulting algorithm is our method of choice for density ratio estimation when the feature space dimensionality is small and sample sizes are large. The approach is simple and, because of the model selection, robust. We empirically find that it is on a par with established kernel-based methods on relatively small regression benchmark datasets. However, when applied to large-scale photometric redshift estimation, our approach outperforms the state-of-the-art.
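
    A minimal version of a nearest neighbor density ratio estimator can be written with scikit-learn as below: both densities are approximated with the classical k-NN estimator, so the importance weight reduces to a ratio of k-th neighbor distances. The fixed choice of k stands in for the unbiased cross-validation selection described above.

      import numpy as np
      from sklearn.neighbors import NearestNeighbors

      def knn_density_ratio(X_train, X_test, k=50):
          """Importance weights w(x) ~ p_test(x) / p_train(x) at the training points.

          Each density is estimated as p(x) ~ k / (n * volume of the k-th neighbor ball),
          so the ratio reduces to (n_train / n_test) * (r_train_k(x) / r_test_k(x)) ** d.
          """
          d = X_train.shape[1]
          r_test = NearestNeighbors(n_neighbors=k).fit(X_test).kneighbors(X_train)[0][:, -1]
          # k+1 on the training side so that the point itself is not counted as a neighbor.
          r_train = NearestNeighbors(n_neighbors=k + 1).fit(X_train).kneighbors(X_train)[0][:, -1]
          return (len(X_train) / len(X_test)) * (r_train / r_test) ** d

      rng = np.random.default_rng(0)
      X_train = rng.normal(0.0, 1.0, size=(2000, 2))        # labeled sample
      X_test = rng.normal(0.5, 1.0, size=(3000, 2))         # shifted unlabeled sample
      w = knn_density_ratio(X_train, X_test)
      print("mean weight:", w.mean(), " (should be near 1 for a mild shift)")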

  1. Density estimation of small-mammal populations using a trapping web and distance sampling methods

    SciTech Connect

    Anderson, D.R.; Burnham, K.P.; White, G.C.; Otis, D.L.

    1983-01-01

    Distance sampling methodology is adapted to enable animal density (number per unit of area) to be estimated from capture-recapture and removal data. A trapping web design provides the link between capture data and distance sampling theory. It is possible to check qualitatively the critical assumption on which the web design and the estimator are based. This is a conceptual paper outlining a new methodology, not a definitive investigation of the best specific way to implement this method. Several alternative sampling and analysis methods are possible within the general framework of distance sampling theory; a few alternatives are discussed and an example is given.

  2. Estimation of the local density of states on a quantum computer

    SciTech Connect

    Emerson, Joseph; Cory, David; Lloyd, Seth; Poulin, David

    2004-05-01

    We report an efficient quantum algorithm for estimating the local density of states (LDOS) on a quantum computer. The LDOS describes the redistribution of energy levels of a quantum system under the influence of a perturbation. Sometimes known as the 'strength function' from nuclear spectroscopy experiments, the shape of the LDOS is directly related to the survival probability of unperturbed eigenstates, and has recently been related to the fidelity decay (or 'Loschmidt echo') under imperfect motion reversal. For quantum systems that can be simulated efficiently on a quantum computer, the LDOS estimation algorithm enables an exponential speedup over direct classical computation.

  3. Moment series for moment estimators of the parameters of a Weibull density

    SciTech Connect

    Bowman, K.O.; Shenton, L.R.

    1982-01-01

    Taylor series for the first four moments of the coefficients of variation in sampling from a 2-parameter Weibull density are given: they are taken as far as the coefficient of n^-24. From these a four moment approximating distribution is set up using summatory techniques on the series. The shape parameter is treated in a similar way, but here the moment equations are no longer explicit estimators, and terms only as far as those in n^-12 are given. The validity of assessed moments and percentiles of the approximating distributions is studied. Consideration is also given to properties of the moment estimator for 1/c.
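
    The moment estimator being analyzed can be illustrated numerically: the sample coefficient of variation determines the Weibull shape c through CV^2 = Gamma(1+2/c)/Gamma(1+1/c)^2 - 1, which the sketch below solves with a root finder. The series corrections studied in the report are not reproduced.

      import numpy as np
      from scipy.optimize import brentq
      from scipy.special import gamma

      def weibull_shape_from_cv(cv):
          """Solve CV^2 = Gamma(1+2/c)/Gamma(1+1/c)^2 - 1 for the shape parameter c."""
          f = lambda c: gamma(1 + 2 / c) / gamma(1 + 1 / c) ** 2 - 1 - cv ** 2
          return brentq(f, 0.1, 50.0)

      # Moment estimates from a sample drawn from a Weibull with shape 2.0, scale 1.0.
      rng = np.random.default_rng(0)
      x = rng.weibull(2.0, size=5000)
      cv_hat = x.std(ddof=1) / x.mean()
      c_hat = weibull_shape_from_cv(cv_hat)
      scale_hat = x.mean() / gamma(1 + 1 / c_hat)
      print(f"estimated shape {c_hat:.3f}, scale {scale_hat:.3f}")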

  4. A method for estimating the height of a mesospheric density level using meteor radar

    NASA Astrophysics Data System (ADS)

    Younger, J. P.; Reid, I. M.; Vincent, R. A.; Murphy, D. J.

    2015-07-01

    A new technique for determining the height of a constant density surface at altitudes of 78-85 km is presented. The first results are derived from a decade of observations by a meteor radar located at Davis Station in Antarctica and are compared with observations from the Microwave Limb Sounder instrument aboard the Aura satellite. The density of the neutral atmosphere in the mesosphere/lower thermosphere region around 70-110 km is an essential parameter for interpreting airglow-derived atmospheric temperatures, planning atmospheric entry maneuvers of returning spacecraft, and understanding the response of climate to different stimuli. This region is not well characterized, however, due to inaccessibility combined with a lack of consistent strong atmospheric radar scattering mechanisms. Recent advances in the analysis of detection records from high-performance meteor radars provide new opportunities to obtain atmospheric density estimates at high time resolutions in the MLT region using the durations and heights of faint radar echoes from meteor trails. Previous studies have indicated that the expected increase in underdense meteor radar echo decay times with decreasing altitude is reversed in the lower part of the meteor ablation region due to the neutralization of meteor plasma. The height at which the gradient of meteor echo decay times reverses is found to occur at a fixed atmospheric density. Thus, the gradient reversal height of meteor radar diffusion coefficient profiles can be used to infer the height of a constant density level, enabling the observation of mesospheric density variations using meteor radar.

  5. Pedotransfer functions for Irish soils - estimation of bulk density (ρb) per horizon type

    NASA Astrophysics Data System (ADS)

    Reidy, B.; Simo, I.; Sills, P.; Creamer, R. E.

    2016-01-01

    Soil bulk density is a key property in defining soil characteristics. It describes the packing structure of the soil and is also essential for the measurement of soil carbon stock and nutrient assessment. In many older surveys this property was neglected, and in many modern surveys it is omitted because of the cost in both laboratory analysis and labour, or because the core method cannot be applied. To overcome these oversights, pedotransfer functions are applied that use other known soil properties to estimate bulk density. Pedotransfer functions have been derived from large international data sets across many studies, each with its own inherent biases, and many ignoring horizonation and depth variances. Initially, pedotransfer functions from the literature were used to predict bulk densities for different horizon types using locally known bulk density data sets. The best-performing pedotransfer functions were then recalibrated and validated again using the known data. The predicted coefficient of determination was 0.5 or greater in 12 of the 17 horizon types studied. These new equations allowed gap filling where bulk density data were missing in part or whole soil profiles. This in turn allowed the development of an indicative soil bulk density map for Ireland at 0-30 and 30-50 cm horizon depths. In general, the horizons with the largest known data sets had the best predictions using the recalibrated and validated pedotransfer functions.

  6. Electron density estimation in cold magnetospheric plasmas with the Cluster Active Archive

    NASA Astrophysics Data System (ADS)

    Masson, A.; Pedersen, A.; Taylor, M. G.; Escoubet, C. P.; Laakso, H. E.

    2009-12-01

    Electron density is a key physical quantity for characterizing any plasma medium. Its measurement is thus essential to understand the various physical processes occurring in the environment of a magnetized planet. However, any magnetosphere in the solar system is far from being a homogeneous medium with a constant electron density and temperature. For instance, the Earth's magnetosphere is composed of a variety of regions with densities and temperatures spanning at least 6 decades of magnitude. For this reason, different types of scientific instruments are usually carried onboard a magnetospheric spacecraft to estimate in situ, by different means, the electron density of the various plasma regions crossed. In the case of the European Space Agency Cluster mission, five different instruments on each of its four identical spacecraft can be used to estimate it: two particle instruments, a DC electric field instrument, a relaxation sounder and a high-time resolution passive wave receiver. Each of these instruments has its pros and cons depending on the plasma conditions. The focus of this study is the accurate estimation of the electron density in cold plasma regions of the magnetosphere, including the magnetotail lobes (Ne ≤ 0.01 e-/cc, Te ~ 100 eV) and the plasmasphere (Ne > 10 e-/cc, Te < 10 eV). In these regions, particle instruments can be blind to low energy ions outflowing from the ionosphere or may measure only a portion of the particles' energy range because of photoelectrons. This often results in an underestimation of the bulk density. Measurements from a relaxation sounder enable accurate estimation of the bulk electron density above a fraction of 1 e-/cc but require careful calibration of the resonances and/or the cutoffs detected. On Cluster, active soundings allow precise density estimates between 0.2 and 80 e-/cc to be derived every minute or two. Spacecraft-to-probe difference potential measurements from a double probe electric field experiment can be calibrated against the above-mentioned types of measurements to derive bulk electron densities with a time resolution below 1 s. Such an in-flight calibration procedure has been performed successfully on past magnetospheric missions such as GEOS, ISEE-1, Viking, Geotail, CRRES or FAST. We will first present the outcome of this calibration procedure for the Cluster mission for plasma conditions encountered in the plasmasphere, the magnetotail lobes and the polar caps. This study is based on the use of the Cluster Active Archive (CAA) for data collected in the plasmasphere. The CAA offers the unique possibility of easily accessing the best calibrated data collected by all experiments on the Cluster satellites over their several years in orbit. In particular, this has made it possible to take the impact of solar activity into account in the calibration procedure. Recent science nuggets based on these calibrated data will then be presented, showing in particular the outcome of three-dimensional (3D) electron density mapping of the magnetotail lobes over several years.

  7. Uncertainty quantification techniques for population density estimates derived from sparse open source data

    NASA Astrophysics Data System (ADS)

    Stewart, Robert; White, Devin; Urban, Marie; Morton, April; Webster, Clayton; Stoyanov, Miroslav; Bright, Eddie; Bhaduri, Budhendra L.

    2013-05-01

    The Population Density Tables (PDT) project at Oak Ridge National Laboratory (www.ornl.gov) is developing population density estimates for specific human activities under normal patterns of life based largely on information available in open source. Currently, activity-based density estimates are based on simple summary data statistics such as range and mean. Researchers are interested in improving activity estimation and uncertainty quantification by adopting a Bayesian framework that considers both data and sociocultural knowledge. Under a Bayesian approach, knowledge about population density may be encoded through the process of expert elicitation. Due to the scale of the PDT effort which considers over 250 countries, spans 50 human activity categories, and includes numerous contributors, an elicitation tool is required that can be operationalized within an enterprise data collection and reporting system. Such a method would ideally require that the contributor have minimal statistical knowledge, require minimal input by a statistician or facilitator, consider human difficulties in expressing qualitative knowledge in a quantitative setting, and provide methods by which the contributor can appraise whether their understanding and associated uncertainty was well captured. This paper introduces an algorithm that transforms answers to simple, non-statistical questions into a bivariate Gaussian distribution as the prior for the Beta distribution. Based on geometric properties of the Beta distribution parameter feasibility space and the bivariate Gaussian distribution, an automated method for encoding is developed that responds to these challenging enterprise requirements. Though created within the context of population density, this approach may be applicable to a wide array of problem domains requiring informative priors for the Beta distribution.
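
    The paper's bivariate-Gaussian encoding is not reproduced here, but the flavor of turning non-statistical answers into an informative Beta prior can be sketched with simple moment matching; the question wording, coverage level, and function below are illustrative assumptions only.

      from scipy import stats

      def beta_from_elicitation(typical, low, high, coverage=0.95):
          """Turn non-statistical answers into Beta(a, b) parameters by moment matching.

          `typical` is read as the mean and (high - low) as an approximate central
          `coverage` interval, giving a rough standard deviation. This is a simple
          stand-in, not the bivariate-Gaussian encoding described in the abstract.
          """
          z = stats.norm.ppf(0.5 + coverage / 2)
          mean, sd = typical, (high - low) / (2 * z)
          var = min(sd ** 2, mean * (1 - mean) * 0.999)       # keep parameters feasible
          common = mean * (1 - mean) / var - 1
          return mean * common, (1 - mean) * common

      # "Around 30% of people are at this activity, very likely between 10% and 55%."
      a, b = beta_from_elicitation(0.30, 0.10, 0.55)
      print(f"Beta({a:.2f}, {b:.2f}), 95% interval:",
            [round(q, 3) for q in stats.beta.ppf([0.025, 0.975], a, b)])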

  8. Georadar-derived estimates of firn density in the percolation zone, western Greenland ice sheet

    NASA Astrophysics Data System (ADS)

    Brown, Joel; Bradford, John; Harper, Joel; Pfeffer, W. Tad; Humphrey, Neil; Mosley-Thompson, Ellen

    2012-01-01

    Greater understanding of variations in firn densification is needed to distinguish between dynamic and melt-driven elevation changes on the Greenland ice sheet. This is especially true in Greenland's percolation zone, where firn density profiles are poorly documented because few ice cores are extracted in regions with surface melt. We used georadar to investigate firn density variations with depth along a ˜70 km transect through a portion of the accumulation area in western Greenland that partially melts. We estimated electromagnetic wave velocity by inverting reflection traveltimes picked from common midpoint gathers. We followed a procedure designed to find the simplest velocity versus depth model that describes the data within estimated uncertainty. On the basis of the velocities, we estimated 13 depth-density profiles of the upper 80 m using a petrophysical model based on the complex refractive index method equation. At the highest elevation site, our density profile is consistent with nearby core data acquired in the same year. Our profiles at the six highest elevation sites match an empirically based densification model for dry firn, indicating relatively minor amounts of water infiltration and densification by melt and refreeze in this higher region of the percolation zone. At the four lowest elevation sites our profiles reach ice densities at substantially shallower depths, implying considerable meltwater infiltration and ice layer development in this lower region of the percolation zone. The separation between these two regions is 8 km and spans 60 m of elevation, which suggests that the balance between dry-firn and melt-induced densification processes is sensitive to minor changes in melt.
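    A minimal sketch of the dry-firn case of such a petrophysical conversion is given below: a two-phase (ice plus air) CRIM mixing law turns an electromagnetic interval velocity into a density estimate. The constants are nominal textbook values, and the three-phase treatment needed where liquid water is present is omitted.

```python
# Hedged sketch: two-phase CRIM conversion from radar velocity to dry-firn density.
C_VACUUM = 2.998e8     # speed of light in vacuum [m/s]
EPS_ICE = 3.15         # assumed relative permittivity of pure ice
RHO_ICE = 917.0        # density of pure ice [kg/m^3]

def firn_density_from_velocity(v_mps: float) -> float:
    """Density [kg/m^3] implied by EM velocity v [m/s] under the CRIM mixing law."""
    sqrt_eps = C_VACUUM / v_mps                                # sqrt of effective permittivity
    ice_fraction = (sqrt_eps - 1.0) / (EPS_ICE ** 0.5 - 1.0)   # volume fraction of ice
    return ice_fraction * RHO_ICE

print(firn_density_from_velocity(2.0e8))   # ~590 kg/m^3 for a typical firn velocity
```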

  9. Uncertainty Quantification Techniques for Population Density Estimates Derived from Sparse Open Source Data

    SciTech Connect

    Stewart, Robert N; White, Devin A; Urban, Marie L; Morton, April M; Webster, Clayton G; Stoyanov, Miroslav K; Bright, Eddie A; Bhaduri, Budhendra L

    2013-01-01

    The Population Density Tables (PDT) project at the Oak Ridge National Laboratory (www.ornl.gov) is developing population density estimates for specific human activities under normal patterns of life based largely on information available in open source. Currently, activity based density estimates are based on simple summary data statistics such as range and mean. Researchers are interested in improving activity estimation and uncertainty quantification by adopting a Bayesian framework that considers both data and sociocultural knowledge. Under a Bayesian approach knowledge about population density may be encoded through the process of expert elicitation. Due to the scale of the PDT effort which considers over 250 countries, spans 40 human activity categories, and includes numerous contributors, an elicitation tool is required that can be operationalized within an enterprise data collection and reporting system. Such a method would ideally require that the contributor have minimal statistical knowledge, require minimal input by a statistician or facilitator, consider human difficulties in expressing qualitative knowledge in a quantitative setting, and provide methods by which the contributor can appraise whether their understanding and associated uncertainty was well captured. This paper introduces an algorithm that transforms answers to simple, non-statistical questions into a bivariate Gaussian distribution as the prior for the Beta distribution. Based on geometric properties of the Beta distribution parameter feasibility space and the bivariate Gaussian distribution, an automated method for encoding is developed that responds to these challenging enterprise requirements. Though created within the context of population density, this approach may be applicable to a wide array of problem domains requiring informative priors for the Beta distribution.

  10. Estimation of high-resolution dust column density maps. Empirical model fits

    NASA Astrophysics Data System (ADS)

    Juvela, M.; Montillaud, J.

    2013-09-01

    Context. Sub-millimetre dust emission is an important tracer of column density N of dense interstellar clouds. One has to combine surface brightness information at different spatial resolutions, and specific methods are needed to derive N at a resolution higher than the lowest resolution of the observations. Some methods have been discussed in the literature, including a method (in the following, method B) that constructs the N estimate in stages, where the smallest spatial scales are derived using only the shortest wavelength maps. Aims: We propose simple model fitting as a flexible way to estimate high-resolution column density maps. Our goal is to evaluate the accuracy of this procedure and to determine whether it is a viable alternative for making these maps. Methods: The new method consists of fitting model maps of column density (or intensity at a reference wavelength) and colour temperature. The model is fitted using Markov chain Monte Carlo methods, comparing model predictions with observations at their native resolution. We analyse simulated surface brightness maps and compare the accuracy of the new method with that of method B and with the results that would be obtained using high-resolution observations without noise. Results: The new method is able to produce reliable column density estimates at a resolution significantly higher than the lowest resolution of the input maps. Compared to method B, it is relatively resilient against the effects of noise. The method is computationally more demanding, but is feasible even in the analysis of large Herschel maps. Conclusions: The proposed empirical modelling method E is demonstrated to be a good alternative for calculating high-resolution column density maps, even with considerable super-resolution. Both methods E and B include the potential for further improvements, e.g., in the form of better a priori constraints.
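    The sketch below illustrates the underlying emission model with a per-pixel modified-blackbody fit at a few far-infrared bands; the opacity law, bands and noise level are assumed for illustration, and the ordinary least-squares fit is a simplified stand-in for the MCMC map fitting of method E.

```python
# Hedged sketch: fit I_nu = B_nu(T) * kappa_nu * Sigma (optically thin) at four bands.
import numpy as np
from scipy.optimize import curve_fit

H, KB, C = 6.626e-34, 1.381e-23, 2.998e8
KAPPA0, NU0, BETA = 0.1, 1.0e12, 2.0        # assumed opacity: 0.1 m^2/kg at 1 THz, beta = 2

def planck(nu, temp):
    return 2.0 * H * nu**3 / C**2 / np.expm1(H * nu / (KB * temp))

def model_mjy_sr(nu, temp, sigma):
    """Intensity in MJy/sr for dust temperature temp [K] and surface density sigma [kg/m^2]."""
    return planck(nu, temp) * KAPPA0 * (nu / NU0) ** BETA * sigma / 1.0e-20

wavelengths_um = np.array([160.0, 250.0, 350.0, 500.0])     # Herschel-like bands (assumed)
nu = C / (wavelengths_um * 1.0e-6)
rng = np.random.default_rng(1)
observed = model_mjy_sr(nu, 15.0, 0.05) * (1.0 + 0.03 * rng.standard_normal(nu.size))

(temp_fit, sigma_fit), _ = curve_fit(model_mjy_sr, nu, observed, p0=(20.0, 0.01))
print(temp_fit, sigma_fit)                                  # recovers ~15 K, ~0.05 kg/m^2
```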

  11. Use of spatial capture-recapture modeling and DNA data to estimate densities of elusive animals

    USGS Publications Warehouse

    Kery, Marc; Gardner, Beth; Stoeckle, Tabea; Weber, Darius; Royle, J. Andrew

    2011-01-01

    Assessment of abundance, survival, recruitment rates, and density (i.e., population assessment) is especially challenging for elusive species most in need of protection (e.g., rare carnivores). Individual identification methods, such as DNA sampling, provide ways of studying such species efficiently and noninvasively. Additionally, statistical methods that correct for undetected animals and account for locations where animals are captured are available to efficiently estimate density and other demographic parameters. We collected hair samples of European wildcat (Felis silvestris) from cheek-rub lure sticks, extracted DNA from the samples, and identified each animal's genotype. To estimate the density of wildcats, we used Bayesian inference in a spatial capture-recapture model. We used WinBUGS to fit a model that accounted for differences in detection probability among individuals and seasons and between two lure arrays. We detected 21 individual wildcats (including possible hybrids) 47 times. Wildcat density was estimated at 0.29/km2 (SE 0.06), and 95% of the activity of wildcats was estimated to occur within 1.83 km from their home-range center. Lures located systematically were associated with a greater number of detections than lures placed in a cell on the basis of expert opinion. Detection probability of individual cats was greatest in late March. Our model is a generalized linear mixed model; hence, it can be easily extended, for instance, to incorporate trap- and individual-level covariates. We believe that the combined use of noninvasive sampling techniques and spatial capture-recapture models will improve population assessments, especially for rare and elusive animals.

  12. Joint range-angle beamforming with application to estimation of target density functions

    NASA Astrophysics Data System (ADS)

    Emre, Erol

    1993-10-01

    A new technique is developed to achieve focusing in range and direction (angle) (joint range-angle beamforming) on the received signals simultaneously, transmitting one wave-beamform. It is shown how this can be utilized to estimate target density functions in the range-direction coordinates in three-dimensional space. It is also shown that range-angle focusing can be achieved using only one sensor via transmitting several wave-beamforms.

  13. New Density Estimation Methods for Charged Particle Beams With Applications to Microbunching Instability

    SciTech Connect

    Balsa Terzic, Gabriele Bassi

    2011-07-01

    In this paper we discuss representations of charged particle densities in particle-in-cell (PIC) simulations, analyze the sources and profiles of the intrinsic numerical noise, and present efficient methods for their removal. We devise two alternative estimation methods for charged particle distributions which represent significant improvement over the Monte Carlo cosine expansion used in the 2D code of Bassi, designed to simulate coherent synchrotron radiation (CSR) in charged particle beams. The improvement is achieved by employing an alternative beam density estimation to the Monte Carlo cosine expansion. The representation is first binned onto a finite grid, after which two grid-based methods are employed to approximate particle distributions: (i) truncated fast cosine transform (TFCT); and (ii) thresholded wavelet transform (TWT). We demonstrate that these alternative methods represent a staggering upgrade over the original Monte Carlo cosine expansion in terms of efficiency, while the TWT approximation also provides an appreciable improvement in accuracy. The improvement in accuracy comes from a judicious removal of the numerical noise enabled by the wavelet formulation. The TWT method is then integrated into Bassi's CSR code, and benchmarked against the original version. We show that the new density estimation method provides a superior performance in terms of efficiency and spatial resolution, thus enabling high-fidelity simulations of CSR effects, including microbunching instability.
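    A minimal sketch of the thresholded-wavelet-transform idea is shown below, using illustrative choices of wavelet family, decomposition level and threshold rule rather than those of the actual code: particles are binned onto a grid, the detail coefficients of a 2D wavelet decomposition are soft-thresholded, and the grid is reconstructed.

```python
# Hedged sketch: denoise a binned particle density with a thresholded wavelet transform.
import numpy as np
import pywt

rng = np.random.default_rng(0)
particles = rng.normal(0.0, 1.0, size=(100_000, 2))          # synthetic 2D beam sample
hist, _, _ = np.histogram2d(particles[:, 0], particles[:, 1],
                            bins=128, range=[[-4, 4], [-4, 4]])

coeffs = pywt.wavedec2(hist, "db4", level=4)
sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745            # noise scale from finest diagonal band
thresh = sigma * np.sqrt(2.0 * np.log(hist.size))             # universal threshold (assumed rule)
denoised_coeffs = [coeffs[0]] + [
    tuple(pywt.threshold(c, thresh, mode="soft") for c in detail) for detail in coeffs[1:]
]
density = pywt.waverec2(denoised_coeffs, "db4")               # denoised density on the grid
```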

  14. Conditional density estimation with dimensionality reduction via squared-loss conditional entropy minimization.

    PubMed

    Tangkaratt, Voot; Xie, Ning; Sugiyama, Masashi

    2015-01-01

    Regression aims at estimating the conditional mean of output given input. However, regression is not informative enough if the conditional density is multimodal, heteroskedastic, and asymmetric. In such a case, estimating the conditional density itself is preferable, but conditional density estimation (CDE) is challenging in high-dimensional space. A naive approach to coping with high dimensionality is to first perform dimensionality reduction (DR) and then execute CDE. However, a two-step process does not perform well in practice because the error incurred in the first DR step can be magnified in the second CDE step. In this letter, we propose a novel single-shot procedure that performs CDE and DR simultaneously in an integrated way. Our key idea is to formulate DR as the problem of minimizing a squared-loss variant of conditional entropy, and this is solved using CDE. Thus, an additional CDE step is not needed after DR. We demonstrate the usefulness of the proposed method through extensive experiments on various data sets, including humanoid robot transition and computer art. PMID:25380340

  15. Examining the impact of the precision of address geocoding on estimated density of crime locations

    NASA Astrophysics Data System (ADS)

    Harada, Yutaka; Shimada, Takahito

    2006-10-01

    This study examines the impact of the precision of address geocoding on the estimated density of crime locations in a large urban area of Japan. The data consist of two separate sets of the same Penal Code offenses known to the police that occurred during the nine-month period of April 1, 2001 through December 31, 2001 in the central 23 wards of Tokyo. These two data sets are derived from the older and newer recording systems of the Tokyo Metropolitan Police Department (TMPD), which revised its crime reporting system in that year so that more precise location information than in previous years could be recorded. Each of these data sets was address-geocoded onto a large-scale digital map using our hierarchical address-geocoding schema, and we examined how such differences in the precision of address information, and the resulting differences in geocoded incident locations, affect the patterns in kernel density maps. An analysis using 11,096 pairs of incidents of residential burglary (each pair consisting of the same incident geocoded using older and newer address information, respectively) indicates that kernel density estimation with a cell size of 25 × 25 m and a bandwidth of 500 m may work quite well in absorbing the poorer precision of locations geocoded from the older recording system, whereas in several areas where the older recording system yielded very poor precision, the inaccuracy of incident locations may produce artifactual and potentially misleading patterns in kernel density maps.
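    The following snippet sketches the kind of kernel density surface described above: a Gaussian kernel with a 500 m bandwidth evaluated on a 25 m grid, here over synthetic, metrically projected incident locations rather than the TMPD data.

```python
# Hedged sketch: kernel density surface of point incidents (bandwidth 500 m, 25 m grid).
import numpy as np
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(0)
xy = rng.normal(loc=[5000.0, 5000.0], scale=800.0, size=(1000, 2))  # synthetic incidents [m]

kde = KernelDensity(kernel="gaussian", bandwidth=500.0).fit(xy)

xs = np.arange(0.0, 10_000.0, 25.0)
ys = np.arange(0.0, 10_000.0, 25.0)
gx, gy = np.meshgrid(xs, ys)
grid = np.column_stack([gx.ravel(), gy.ravel()])
density = np.exp(kde.score_samples(grid)).reshape(gx.shape)
# density is a normalized probability density per m^2; multiply by the number of
# incidents to obtain an intensity surface comparable across data sets.
```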

  16. Estimated carbon dioxide emissions from tropical deforestation improved by carbon-density maps

    NASA Astrophysics Data System (ADS)

    Baccini, A.; Goetz, S. J.; Walker, W. S.; Laporte, N. T.; Sun, M.; Sulla-Menashe, D.; Hackler, J.; Beck, P. S. A.; Dubayah, R.; Friedl, M. A.; Samanta, S.; Houghton, R. A.

    2012-03-01

    Deforestation contributes 6-17% of global anthropogenic CO2 emissions to the atmosphere. Large uncertainties in emission estimates arise from inadequate data on the carbon density of forests and the regional rates of deforestation. Consequently there is an urgent need for improved data sets that characterize the global distribution of aboveground biomass, especially in the tropics. Here we use multi-sensor satellite data to estimate aboveground live woody vegetation carbon density for pan-tropical ecosystems with unprecedented accuracy and spatial resolution. Results indicate that the total amount of carbon held in tropical woody vegetation is 228.7 Pg C, which is 21% higher than the amount reported in the Global Forest Resources Assessment 2010. At the national level, Brazil and Indonesia contain 35% of the total carbon stored in tropical forests and produce the largest emissions from forest loss. Combining estimates of aboveground carbon stocks with regional deforestation rates, we estimate the total net emission of carbon from tropical deforestation and land use to be 1.0 Pg C yr-1 over the period 2000-2010, based on the carbon bookkeeping model. These new data sets of aboveground carbon stocks will enable tropical nations to meet their emissions reporting requirements (that is, United Nations Framework Convention on Climate Change Tier 3) with greater accuracy.

  17. [Nonparametric estimation of 1-dimensional continuous distribution density functions using the continuous LOLINREG approximation].

    PubMed

    Schmerling, S; Peil, J; Kupper, H

    1984-01-01

    A nonparametric method for the estimation of one-dimensional continuous probability distribution functions is presented. Procedures for calculating estimates of the unknown distribution function and the distribution density are discussed with regard to their application. Two questions are addressed: what type of weight function should be chosen for the proposed local-linear continuous approximation of the empirical distribution function by the least squares method (LOLINREG), and on what value of the bandwidth or smoothing parameter one should optimally settle. The latter problem is of great practical importance for the quality of the estimation results. Examples of simulated measurements, drawn as random numbers from a standardized normal distribution, serve to demonstrate the mode of operation, the advantages, as well as the limits of the presented continuous LOLINREG approximation. PMID:6530126

  18. Improved estimation of density of states for Monte Carlo sampling via MBAR.

    PubMed

    Xu, Yuanwei; Rodger, P Mark

    2015-10-13

    We present a new method to calculate the density of states using the multistate Bennett acceptance ratio (MBAR) estimator. We use a combination of parallel tempering (PT) and multicanonical simulation to demonstrate the efficiency of our method in a statistical model of sampling from a two-dimensional normal mixture and also in a physical model of aggregation of lattice polymers. While MBAR has been commonly used for final estimation of thermodynamic properties, our numerical results show that the efficiency of estimation with our new approach, which uses MBAR as an intermediate step, often improves upon conventional use of MBAR. We also demonstrate that it can be beneficial in our method to use full PT samples for MBAR calculations in cases where simulation data exhibit long correlation. PMID:26574248

  19. Estimation of scattering phase function utilizing laser Doppler power density spectra.

    PubMed

    Wojtkiewicz, S; Liebert, A; Rix, H; Sawosz, P; Maniewski, R

    2013-02-21

    A new method for the estimation of the light scattering phase function of particles is presented. The method allows us to measure the light scattering phase function of particles of any shape in the full angular range (0°-180°) and is based on the analysis of laser Doppler (LD) power density spectra. The theoretical background of the method and results of its validation using data from Monte Carlo simulations will be presented. For the estimation of the scattering phase function, a phantom measurement setup is proposed containing a LD measurement system and a simple model in which a liquid sample flows through a glass tube fixed in an optically turbid material. The scattering phase function estimation error was thoroughly investigated in relation to the light scattering anisotropy factor g. The error of g estimation is lower than 10% for anisotropy factors larger than 0.5 and decreases with increase of the anisotropy factor (e.g. for g = 0.98, the error of estimation is 0.01%). The analysis of influence of the noise in the measured LD spectrum showed that the g estimation error is lower than 1% for signal to noise ratio higher than 50 dB. PMID:23340453

  20. Pedotransfer functions for Irish soils - estimation of bulk density (ρb) per horizon type

    NASA Astrophysics Data System (ADS)

    Reidy, B.; Simo, I.; Sills, P.; Creamer, R. E.

    2015-10-01

    Soil bulk density is a key property in defining soil characteristics. It describes the packing structure of the soil and is also essential for the measurement of soil carbon stock and nutrient assessment. In many older surveys this property was neglected, and in many modern surveys it is omitted due to the cost of both laboratory analysis and labour, and in cases where the core method cannot be applied. To overcome these gaps, pedotransfer functions are applied, using other known soil properties to estimate bulk density. Pedotransfer functions have been derived from large international datasets across many studies, with their own inherent biases, many ignoring horizonation and depth variances. Initially, pedotransfer functions from the literature were used to predict bulk density for different horizon types using local known bulk density datasets. The best-performing pedotransfer functions were then selected, recalibrated, and validated again using the known data. The coefficient of determination of the predictions was 0.5 or greater in 12 of the 17 horizon types studied. These new equations allowed gap filling where bulk density data were missing in part of or whole soil profiles. This in turn allowed the development of an indicative soil bulk density map for Ireland at 0-30 and 30-50 cm horizon depths. In general, the horizons with the largest known datasets had the best predictions using the recalibrated and validated pedotransfer functions.
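    As a hedged illustration of what such a pedotransfer function can look like in practice, the snippet below fits a per-horizon linear regression of bulk density on two hypothetical predictors (organic carbon and clay content) and reports its coefficient of determination; the predictors, data and coefficients are synthetic, not those of the Irish dataset.

```python
# Hedged sketch: a simple pedotransfer-style regression for one horizon type.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
org_c = rng.uniform(0.5, 12.0, 200)          # organic carbon [%], synthetic
clay = rng.uniform(5.0, 45.0, 200)           # clay content [%], synthetic
bulk_density = 1.6 - 0.05 * org_c - 0.003 * clay + rng.normal(0.0, 0.05, 200)  # [g/cm^3]

X = np.column_stack([org_c, clay])
model = LinearRegression().fit(X, bulk_density)
print(model.coef_, model.intercept_)
print(model.score(X, bulk_density))          # coefficient of determination (R^2)
```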

  1. An efficient Legendre wavelet-based approximation method for a few Newell-Whitehead and Allen-Cahn equations.

    PubMed

    Hariharan, G

    2014-05-01

    In this paper, a wavelet-based approximation method is introduced for solving the Newell-Whitehead (NW) and Allen-Cahn (AC) equations. To the best of our knowledge, no rigorous Legendre wavelet solution has been reported for the NW and AC equations until now. The highest derivative in the differential equation is expanded into a Legendre series; this approximation is integrated, and the boundary conditions are applied using the integration constants. With the help of Legendre wavelet operational matrices, the aforesaid equations are converted into an algebraic system. Block pulse functions are used to investigate the Legendre wavelet coefficient vectors of the nonlinear terms. The convergence of the proposed methods is proved. Finally, we give some numerical examples to demonstrate the validity and applicability of the method. PMID:24599524

  2. Wavelet-based multifractal analysis of earthquakes temporal distribution in Mammoth Mountain volcano, Mono County, Eastern California

    NASA Astrophysics Data System (ADS)

    Zamani, Ahmad; Kolahi Azar, Amir; Safavi, Ali

    2014-06-01

    This paper presents a wavelet-based multifractal approach to characterize the statistical properties of the temporal distribution of the 1982-2012 seismic activity at Mammoth Mountain volcano. The fractal analysis of the time-occurrence series of seismicity was carried out in relation to the seismic swarm associated with a magmatic intrusion beneath the volcano on 4 May 1989. We used the wavelet transform modulus maxima based multifractal formalism to obtain the multifractal characteristics of seismicity before, during, and after the unrest. The results revealed that the earthquake sequences across the study area show time-scaling features. It is clearly perceived that the multifractal characteristics are not constant in different periods and that there are differences among the seismicity sequences. The attributes of the singularity spectrum were used to determine the complexity of seismicity for each period. The findings show that the temporal distribution of earthquakes for the swarm period was simpler than for the pre- and post-swarm periods.

  3. Automatic quality control for wavelet-based compression of volumetric medical images using distortion-constrained adaptive vector quantization.

    PubMed

    Miaou, Shaou-Gang; Chen, Shih-Tse

    2004-11-01

    The enormous data of volumetric medical images (VMI) bring a transmission and storage problem that can be solved by using a compression technique. For the lossy compression of a very long VMI sequence, automatically maintaining the diagnosis features in reconstructed images is essential. The proposed wavelet-based adaptive vector quantizer incorporates a distortion-constrained codebook replenishment (DCCR) mechanism to meet a user-defined quality demand in peak signal-to-noise ratio. Combining a codebook updating strategy and the well-known set partitioning in hierarchical trees (SPIHT) technique, the DCCR mechanism provides an excellent coding gain. Experimental results show that the proposed approach is superior to the pure SPIHT and the JPEG2000 algorithms in terms of coding performance. We also propose an iterative fast searching algorithm to find the desired signal quality along an energy-quality curve instead of a traditional rate-distortion curve. The algorithm performs the quality control quickly, smoothly, and reliably. PMID:15554129

  4. A comparison of spectral decorrelation techniques and performance evaluation metrics for a wavelet-based, multispectral data compression algorithm

    NASA Technical Reports Server (NTRS)

    Matic, Roy M.; Mosley, Judith I.

    1994-01-01

    Future space-based, remote sensing systems will have data transmission requirements that exceed available downlinks, necessitating the use of lossy compression techniques for multispectral data. In this paper, we describe several algorithms for lossy compression of multispectral data which combine spectral decorrelation techniques with an adaptive, wavelet-based, image compression algorithm to exploit both spectral and spatial correlation. We compare the performance of several different spectral decorrelation techniques including wavelet transformation in the spectral dimension. The performance of each technique is evaluated at compression ratios ranging from 4:1 to 16:1. Performance measures used are visual examination, conventional distortion measures, and multispectral classification results. We also introduce a family of distortion metrics that are designed to quantify and predict the effect of compression artifacts on multispectral classification of the reconstructed data.

  5. A wavelet-based evaluation of time-varying long memory of equity markets: A paradigm in crisis

    NASA Astrophysics Data System (ADS)

    Tan, Pei P.; Chin, Cheong W.; Galagedera, Don U. A.

    2014-09-01

    This study uses a wavelet-based method to investigate the dynamics of long memory in the returns and volatility of equity markets. In a sample of five developed and five emerging markets, we find that the daily return series from January 1988 to June 2013 may be considered as a mix of weak long memory and mean-reverting processes. In the case of the volatility of returns, there is evidence of long memory, which is stronger in emerging markets than in developed markets. We find that although the long memory parameter may vary during crisis periods (the 1997 Asian financial crisis, the 2001 US recession and the 2008 subprime crisis), the direction of change may not be consistent across all equity markets. The degree of return predictability is likely to diminish during crisis periods. The robustness of the results is checked with a detrended fluctuation analysis approach.

  6. Wavelet-based compression with ROI coding support for mobile access to DICOM images over heterogeneous radio networks.

    PubMed

    Maglogiannis, Ilias; Doukas, Charalampos; Kormentzas, George; Pliakas, Thomas

    2009-07-01

    Most of the commercial medical image viewers do not provide scalability in image compression and/or region of interest (ROI) encoding/decoding. Furthermore, these viewers do not take into consideration the special requirements and needs of a heterogeneous radio setting that is constituted by different access technologies [e.g., general packet radio services (GPRS)/ universal mobile telecommunications system (UMTS), wireless local area network (WLAN), and digital video broadcasting (DVB-H)]. This paper discusses a medical application that contains a viewer for digital imaging and communications in medicine (DICOM) images as a core module. The proposed application enables scalable wavelet-based compression, retrieval, and decompression of DICOM medical images and also supports ROI coding/decoding. Furthermore, the presented application is appropriate for use by mobile devices activating in heterogeneous radio settings. In this context, performance issues regarding the usage of the proposed application in the case of a prototype heterogeneous system setup are also discussed. PMID:19586812

  7. Heterogeneous Occupancy and Density Estimates of the Pathogenic Fungus Batrachochytrium dendrobatidis in Waters of North America

    PubMed Central

    Chestnut, Tara; Anderson, Chauncey; Popa, Radu; Blaustein, Andrew R.; Voytek, Mary; Olson, Deanna H.; Kirshtein, Julie

    2014-01-01

    Biodiversity losses are occurring worldwide due to a combination of stressors. For example, by one estimate, 40% of amphibian species are vulnerable to extinction, and disease is one threat to amphibian populations. The emerging infectious disease chytridiomycosis, caused by the aquatic fungus Batrachochytrium dendrobatidis (Bd), is a contributor to amphibian declines worldwide. Bd research has focused on the dynamics of the pathogen in its amphibian hosts, with little emphasis on investigating the dynamics of free-living Bd. Therefore, we investigated patterns of Bd occupancy and density in amphibian habitats using occupancy models, powerful tools for estimating site occupancy and detection probability. Occupancy models have been used to investigate diseases where the focus was on pathogen occurrence in the host. We applied occupancy models to investigate free-living Bd in North American surface waters to determine Bd seasonality, relationships between Bd site occupancy and habitat attributes, and probability of detection from water samples as a function of the number of samples, sample volume, and water quality. We also report on the temporal patterns of Bd density from a 4-year case study of a Bd-positive wetland. We provide evidence that Bd occurs in the environment year-round. Bd exhibited temporal and spatial heterogeneity in density, but did not exhibit seasonality in occupancy. Bd was detected in all months, typically at less than 100 zoospores L−1. The highest density observed was ∼3 million zoospores L−1. We detected Bd in 47% of sites sampled, but estimated that Bd occupied 61% of sites, highlighting the importance of accounting for imperfect detection. When Bd was present, there was a 95% chance of detecting it with four samples of 600 ml of water or five samples of 60 mL. Our findings provide important baseline information to advance the study of Bd disease ecology, and advance our understanding of amphibian exposure to free-living Bd in aquatic habitats over time. PMID:25222122
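    The reported sampling effort can be checked with a one-line calculation: if four 600 mL samples give a 95% chance of detection at an occupied site, the implied per-sample detection probability p follows from 1 - (1 - p)^4 = 0.95.

```python
# Back-of-the-envelope check of the stated detection probabilities (illustrative only).
p = 1.0 - (1.0 - 0.95) ** 0.25      # per-sample detection probability for a 600 mL sample
print(round(p, 3))                  # ~0.527
print(1.0 - (1.0 - p) ** 4)         # 0.95 across four samples, matching the abstract
```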

  8. Heterogeneous occupancy and density estimates of the pathogenic fungus Batrachochytrium dendrobatidis in waters of North America.

    PubMed

    Chestnut, Tara; Anderson, Chauncey; Popa, Radu; Blaustein, Andrew R; Voytek, Mary; Olson, Deanna H; Kirshtein, Julie

    2014-01-01

    Biodiversity losses are occurring worldwide due to a combination of stressors. For example, by one estimate, 40% of amphibian species are vulnerable to extinction, and disease is one threat to amphibian populations. The emerging infectious disease chytridiomycosis, caused by the aquatic fungus Batrachochytrium dendrobatidis (Bd), is a contributor to amphibian declines worldwide. Bd research has focused on the dynamics of the pathogen in its amphibian hosts, with little emphasis on investigating the dynamics of free-living Bd. Therefore, we investigated patterns of Bd occupancy and density in amphibian habitats using occupancy models, powerful tools for estimating site occupancy and detection probability. Occupancy models have been used to investigate diseases where the focus was on pathogen occurrence in the host. We applied occupancy models to investigate free-living Bd in North American surface waters to determine Bd seasonality, relationships between Bd site occupancy and habitat attributes, and probability of detection from water samples as a function of the number of samples, sample volume, and water quality. We also report on the temporal patterns of Bd density from a 4-year case study of a Bd-positive wetland. We provide evidence that Bd occurs in the environment year-round. Bd exhibited temporal and spatial heterogeneity in density, but did not exhibit seasonality in occupancy. Bd was detected in all months, typically at less than 100 zoospores L(-1). The highest density observed was ∼3 million zoospores L(-1). We detected Bd in 47% of sites sampled, but estimated that Bd occupied 61% of sites, highlighting the importance of accounting for imperfect detection. When Bd was present, there was a 95% chance of detecting it with four samples of 600 ml of water or five samples of 60 mL. Our findings provide important baseline information to advance the study of Bd disease ecology, and advance our understanding of amphibian exposure to free-living Bd in aquatic habitats over time. PMID:25222122

  9. Heterogeneous occupancy and density estimates of the pathogenic fungus Batrachochytrium dendrobatidis in waters of North America

    USGS Publications Warehouse

    Chestnut, Tara E.; Anderson, Chauncey; Popa, Radu; Blaustein, Andrew R.; Voytek, Mary; Olson, Deanna H.; Kirshtein, Julie

    2014-01-01

    Biodiversity losses are occurring worldwide due to a combination of stressors. For example, by one estimate, 40% of amphibian species are vulnerable to extinction, and disease is one threat to amphibian populations. The emerging infectious disease chytridiomycosis, caused by the aquatic fungus Batrachochytrium dendrobatidis (Bd), is a contributor to amphibian declines worldwide. Bd research has focused on the dynamics of the pathogen in its amphibian hosts, with little emphasis on investigating the dynamics of free-living Bd. Therefore, we investigated patterns of Bd occupancy and density in amphibian habitats using occupancy models, powerful tools for estimating site occupancy and detection probability. Occupancy models have been used to investigate diseases where the focus was on pathogen occurrence in the host. We applied occupancy models to investigate free-living Bd in North American surface waters to determine Bd seasonality, relationships between Bd site occupancy and habitat attributes, and probability of detection from water samples as a function of the number of samples, sample volume, and water quality. We also report on the temporal patterns of Bd density from a 4-year case study of a Bd-positive wetland. We provide evidence that Bd occurs in the environment year-round. Bd exhibited temporal and spatial heterogeneity in density, but did not exhibit seasonality in occupancy. Bd was detected in all months, typically at less than 100 zoospores L−1. The highest density observed was ∼3 million zoospores L−1. We detected Bd in 47% of sites sampled, but estimated that Bd occupied 61% of sites, highlighting the importance of accounting for imperfect detection. When Bd was present, there was a 95% chance of detecting it with four samples of 600 ml of water or five samples of 60 mL. Our findings provide important baseline information to advance the study of Bd disease ecology, and advance our understanding of amphibian exposure to free-living Bd in aquatic habitats over time.

  10. Estimates of density, detection probability, and factors influencing detection of burrowing owls in the Mojave Desert

    USGS Publications Warehouse

    Crowe, D.E.; Longshore, K.M.

    2010-01-01

    We estimated relative abundance and density of Western Burrowing Owls (Athene cunicularia hypugaea) at two sites in the Mojave Desert (2003-04). We made modifications to previously established Burrowing Owl survey techniques for use in desert shrublands and evaluated several factors that might influence the detection of owls. We tested the effectiveness of the call-broadcast technique for surveying this species, the efficiency of this technique at early and late breeding stages, and the effectiveness of various numbers of vocalization intervals during broadcasting sessions. Only 1 (3%) of 31 initial (new) owl responses was detected during passive-listening sessions. We found that surveying early in the nesting season was more likely to produce new owl detections compared to surveying later in the nesting season. New owls detected during each of the three vocalization intervals (each consisting of 30 sec of vocalizations followed by 30 sec of silence) of our broadcasting session were similar (37%, 40%, and 23%; n = 30). We used a combination of detection trials (sighting probability) and the double-observer method to estimate the components of detection probability, i.e., availability and perception. Availability for all sites and years, as determined by detection trials, ranged from 46.1-58.2%. Relative abundance, measured as frequency of occurrence and defined as the proportion of surveys with at least one owl, ranged from 19.2-32.0% for both sites and years. Density at our eastern Mojave Desert site was estimated at 0.09 ± 0.01 (SE) owl territories/km2 and 0.16 ± 0.02 (SE) owl territories/km2 during 2003 and 2004, respectively. In our southern Mojave Desert site, density estimates were 0.09 ± 0.02 (SE) owl territories/km2 and 0.08 ± 0.02 (SE) owl territories/km2 during 2004 and 2005, respectively. © 2010 The Raptor Research Foundation, Inc.

  11. An Analytical Planning Model to Estimate the Optimal Density of Charging Stations for Electric Vehicles.

    PubMed

    Ahn, Yongjun; Yeo, Hwasoo

    2015-01-01

    The charging infrastructure location problem is becoming more significant due to the extensive adoption of electric vehicles. Efficient charging station planning can solve deeply rooted problems, such as driving-range anxiety and the stagnation of new electric vehicle consumers. In the initial stage of introducing electric vehicles, the allocation of charging stations is difficult to determine due to the uncertainty of candidate sites and unidentified charging demands, which are determined by diverse variables. This paper introduces the Estimating the Required Density of EV Charging (ERDEC) stations model, which is an analytical approach to estimating the optimal density of charging stations for certain urban areas, which are subsequently aggregated to city level planning. The optimal charging station's density is derived to minimize the total cost. A numerical study is conducted to obtain the correlations among the various parameters in the proposed model, such as regional parameters, technological parameters and coefficient factors. To investigate the effect of technological advances, the corresponding changes in the optimal density and total cost are also examined by various combinations of technological parameters. Daejeon city in South Korea is selected for the case study to examine the applicability of the model to real-world problems. With real taxi trajectory data, the optimal density map of charging stations is generated. These results can provide the optimal number of chargers for driving without driving-range anxiety. In the initial planning phase of installing charging infrastructure, the proposed model can be applied to a relatively extensive area to encourage the usage of electric vehicles, especially areas that lack information, such as exact candidate sites for charging stations and other data related with electric vehicles. The methods and results of this paper can serve as a planning guideline to facilitate the extensive adoption of electric vehicles. PMID:26575845

  12. An Analytical Planning Model to Estimate the Optimal Density of Charging Stations for Electric Vehicles

    PubMed Central

    Ahn, Yongjun; Yeo, Hwasoo

    2015-01-01

    The charging infrastructure location problem is becoming more significant due to the extensive adoption of electric vehicles. Efficient charging station planning can solve deeply rooted problems, such as driving-range anxiety and the stagnation of new electric vehicle consumers. In the initial stage of introducing electric vehicles, the allocation of charging stations is difficult to determine due to the uncertainty of candidate sites and unidentified charging demands, which are determined by diverse variables. This paper introduces the Estimating the Required Density of EV Charging (ERDEC) stations model, which is an analytical approach to estimating the optimal density of charging stations for certain urban areas, which are subsequently aggregated to city level planning. The optimal charging station’s density is derived to minimize the total cost. A numerical study is conducted to obtain the correlations among the various parameters in the proposed model, such as regional parameters, technological parameters and coefficient factors. To investigate the effect of technological advances, the corresponding changes in the optimal density and total cost are also examined by various combinations of technological parameters. Daejeon city in South Korea is selected for the case study to examine the applicability of the model to real-world problems. With real taxi trajectory data, the optimal density map of charging stations is generated. These results can provide the optimal number of chargers for driving without driving-range anxiety. In the initial planning phase of installing charging infrastructure, the proposed model can be applied to a relatively extensive area to encourage the usage of electric vehicles, especially areas that lack information, such as exact candidate sites for charging stations and other data related with electric vehicles. The methods and results of this paper can serve as a planning guideline to facilitate the extensive adoption of electric vehicles. PMID:26575845

  13. Detection and density estimation of goblet cells in confocal endoscopy for the evaluation of celiac disease.

    PubMed

    Boschetto, D; Mirzaei, H; Leong, R W L; Grisan, E

    2015-08-01

    Celiac Disease (CD) is an immune-mediated enteropathy, diagnosed in clinical practice by intestinal biopsy and the concomitant presence of a positive celiac serology. Confocal Laser Endomicroscopy (CLE) allows skilled and trained experts to potentially perform in vivo virtual histology of small-bowel mucosa. In particular, it allows the qualitative evaluation of mucosal alterations such as a decrease in goblet cell density, the presence of villous atrophy or crypt hypertrophy. We present a semi-automatic computer-based method for the detection of goblet cells from confocal endoscopy images, whose density changes in case of pathological tissue. After a manual selection of a suitable region of interest, the candidate columnar and goblet cells' centers are first detected and the cellular architecture is estimated from their position using a Voronoi diagram. The region within each Voronoi cell is then analyzed and classified as goblet cell or other. The results suggest that our method is able to detect and label goblet cells immersed in a columnar epithelium in a fast, reliable and automatic way. Accepting 0.44 false positives per image, we obtain a sensitivity value of 90.3%. Furthermore, estimated and real goblet cell densities are comparable (error: 9.7 ± 16.9%, correlation: 87.2%, R2 = 76%). PMID:26737720

  14. Estimating black bear population density and genetic diversity at Tensas River, Louisiana using microsatellite DNA markers

    USGS Publications Warehouse

    Boersen, Mark R.; Clark, Joseph D.; King, Tim L.

    2003-01-01

    The Recovery Plan for the federally threatened Louisiana black bear (Ursus americanus luteolus) mandates that remnant populations be estimated and monitored. In 1999 we obtained genetic material with barbed-wire hair traps to estimate bear population size and genetic diversity at the 329-km2 Tensas River Tract, Louisiana. We constructed and monitored 122 hair traps, which produced 1,939 hair samples. Of those, we randomly selected 116 subsamples for genetic analysis and used up to 12 microsatellite DNA markers to obtain multilocus genotypes for 58 individuals. We used Program CAPTURE to compute estimates of population size using multiple mark-recapture models. The area of study was almost entirely circumscribed by agricultural land, thus the population was geographically closed. Also, study-area boundaries were biologically discrete, enabling us to accurately estimate population density. Using model Chao Mh to account for possible effects of individual heterogeneity in capture probabilities, we estimated the population size to be 119 (SE=29.4) bears, or 0.36 bears/km2. We were forced to examine a substantial number of loci to differentiate between some individuals because of low genetic variation. Despite the probable introduction of genes from Minnesota bears in the 1960s, the isolated population at Tensas exhibited characteristics consistent with inbreeding and genetic drift. Consequently, the effective population size at Tensas may be as few as 32, which warrants continued monitoring or possibly genetic augmentation.

  15. On the method of logarithmic cumulants for parametric probability density function estimation.

    PubMed

    Krylov, Vladimir A; Moser, Gabriele; Serpico, Sebastiano B; Zerubia, Josiane

    2013-10-01

    Parameter estimation of probability density functions is one of the major steps in the area of statistical image and signal processing. In this paper we explore several properties and limitations of the recently proposed method of logarithmic cumulants (MoLC) parameter estimation approach which is an alternative to the classical maximum likelihood (ML) and method of moments (MoM) approaches. We derive the general sufficient condition for a strong consistency of the MoLC estimates which represents an important asymptotic property of any statistical estimator. This result enables the demonstration of the strong consistency of MoLC estimates for a selection of widely used distribution families originating from (but not restricted to) synthetic aperture radar image processing. We then derive the analytical conditions of applicability of MoLC to samples for the distribution families in our selection. Finally, we conduct various synthetic and real data experiments to assess the comparative properties, applicability and small sample performance of MoLC notably for the generalized gamma and K families of distributions. Supervised image classification experiments are considered for medical ultrasound and remote-sensing SAR imagery. The obtained results suggest that MoLC is a feasible and computationally fast yet not universally applicable alternative to MoM. MoLC becomes especially useful when the direct ML approach turns out to be unfeasible. PMID:23799694
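    As a concrete illustration of the MoLC approach (here for the ordinary two-parameter gamma distribution rather than the generalized gamma or K families studied in the paper), the sketch below matches the first two sample log-cumulants to psi(k) + ln(theta) and psi'(k) and inverts the trigamma function numerically.

```python
# Hedged sketch: method of log-cumulants for a gamma(shape k, scale theta) sample.
import numpy as np
from scipy.special import polygamma
from scipy.optimize import brentq
from scipy.stats import gamma

rng = np.random.default_rng(0)
x = gamma.rvs(a=3.0, scale=2.0, size=50_000, random_state=rng)   # synthetic sample

k1 = np.mean(np.log(x))                 # first sample log-cumulant
k2 = np.var(np.log(x))                  # second sample log-cumulant

# MoLC equations: k1 = psi(k) + ln(theta), k2 = psi'(k); invert the trigamma function.
shape = brentq(lambda k: polygamma(1, k) - k2, 1e-3, 1e3)
scale = np.exp(k1 - polygamma(0, shape))
print(shape, scale)                     # close to the true values (3.0, 2.0)
```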

  16. Density-based load estimation using two-dimensional finite element models: a parametric study.

    PubMed

    Bona, Max A; Martin, Larry D; Fischer, Kenneth J

    2006-08-01

    A parametric investigation was conducted to determine the effects on the load estimation method of varying: (1) the thickness of back-plates used in the two-dimensional finite element models of long bones, (2) the number of columns of nodes in the outer medial and lateral sections of the diaphysis to which the back-plate multipoint constraints are applied and (3) the region of bone used in the optimization procedure of the density-based load estimation technique. The study is performed using two-dimensional finite element models of the proximal femora of a chimpanzee, gorilla, lion and grizzly bear. It is shown that the density-based load estimation can be made more efficient and accurate by restricting the stimulus optimization region to the metaphysis/epiphysis. In addition, a simple method, based on the variation of diaphyseal cortical thickness, is developed for assigning the thickness to the back-plate. It is also shown that the number of columns of nodes used as multipoint constraints does not have a significant effect on the method. PMID:17132530

  17. Hotspot Analysis of Spatial Environmental Pollutants Using Kernel Density Estimation and Geostatistical Techniques

    PubMed Central

    Lin, Yu-Pin; Chu, Hone-Jay; Wu, Chen-Fa; Chang, Tsun-Kuo; Chen, Chiu-Yang

    2011-01-01

    Concentrations of four heavy metals (Cr, Cu, Ni, and Zn) were measured at 1,082 sampling sites in Changhua county of central Taiwan. A hazard zone is defined in the study as a place where the content of each heavy metal exceeds the corresponding control standard. This study examines the use of spatial analysis for identifying multiple soil pollution hotspots in the study area. In a preliminary investigation, kernel density estimation (KDE) was a technique used for hotspot analysis of soil pollution from a set of observed occurrences of hazards. In addition, the study estimates the hazardous probability of each heavy metal using geostatistical techniques such as the sequential indicator simulation (SIS) and indicator kriging (IK). Results show that there are multiple hotspots for these four heavy metals and they are strongly correlated to the locations of industrial plants and irrigation systems in the study area. Moreover, the pollution hotspots detected using the KDE are the almost same to those estimated using IK or SIS. Soil pollution hotspots and polluted sampling densities are clearly defined using the KDE approach based on contaminated point data. Furthermore, the risk of hazards is explored by these techniques such as KDE and geostatistical approaches and the hotspot areas are captured without requiring exhaustive sampling anywhere. PMID:21318015

  18. Kernel density estimation-based real-time prediction for respiratory motion

    NASA Astrophysics Data System (ADS)

    Ruan, Dan

    2010-03-01

    Effective delivery of adaptive radiotherapy requires locating the target with high precision in real time. System latency caused by data acquisition, streaming, processing and delivery control necessitates prediction. Prediction is particularly challenging for highly mobile targets such as thoracic and abdominal tumors undergoing respiration-induced motion. The complexity of the respiratory motion makes it difficult to build and justify explicit models. In this study, we honor the intrinsic uncertainties in respiratory motion and propose a statistical treatment of the prediction problem. Instead of asking for a deterministic covariate-response map and a unique estimate value for future target position, we aim to obtain a distribution of the future target position (response variable) conditioned on the observed historical sample values (covariate variable). The key idea is to estimate the joint probability distribution (pdf) of the covariate and response variables using an efficient kernel density estimation method. Then, the problem of identifying the distribution of the future target position reduces to identifying the section in the joint pdf based on the observed covariate. Subsequently, estimators are derived based on this estimated conditional distribution. This probabilistic perspective has some distinctive advantages over existing deterministic schemes: (1) it is compatible with potentially inconsistent training samples, i.e., when close covariate variables correspond to dramatically different response values; (2) it is not restricted by any prior structural assumption on the map between the covariate and the response; (3) the two-stage setup allows much freedom in choosing statistical estimates and provides a full nonparametric description of the uncertainty for the resulting estimate. We evaluated the prediction performance on ten patient RPM traces, using the root mean squared difference between the prediction and the observed value normalized by the standard deviation of the observed data as the error metric. Furthermore, we compared the proposed method with two benchmark methods: most recent sample and an adaptive linear filter. The kernel density estimation-based prediction results demonstrate universally significant improvement over the alternatives and are especially valuable for long lookahead time, when the alternative methods fail to produce useful predictions.
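    A toy version of the conditional-density idea is sketched below: with a product Gaussian-kernel joint KDE, the conditional-mean predictor reduces to a Nadaraya-Watson weighted average of past responses. The single-sample covariate, bandwidth and synthetic trace are simplifications of the multi-sample covariate and patient data used in the study.

```python
# Hedged sketch: conditional-mean prediction implied by a Gaussian-kernel joint KDE.
import numpy as np

def conditional_mean_predict(cov_train, resp_train, cov_new, h_cov=0.1):
    # For a product Gaussian-kernel joint KDE, the conditional mean reduces to a
    # Nadaraya-Watson weighted average; the response-kernel bandwidth drops out.
    weights = np.exp(-0.5 * ((cov_train - cov_new) / h_cov) ** 2)
    return np.sum(weights * resp_train) / np.sum(weights)

t = np.arange(0.0, 60.0, 0.2)                       # 0.2 s sampling, synthetic
trace = np.sin(2.0 * np.pi * t / 4.0)               # respiratory-like periodic motion
covariates, responses = trace[:-2], trace[2:]       # predict 0.4 s ahead from the current sample

print(conditional_mean_predict(covariates, responses, trace[-1]))
```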

  19. Estimating the neutrally buoyant energy density of a Rankine-cycle/fuel-cell underwater propulsion system

    NASA Astrophysics Data System (ADS)

    Waters, Daniel F.; Cadou, Christopher P.

    2014-02-01

    A unique requirement of underwater vehicles' power/energy systems is that they remain neutrally buoyant over the course of a mission. Previous work published in the Journal of Power Sources reported gross as opposed to neutrally-buoyant energy densities of an integrated solid oxide fuel cell/Rankine-cycle based power system based on the exothermic reaction of aluminum with seawater. This paper corrects this shortcoming by presenting a model for estimating system mass and using it to update the key findings of the original paper in the context of the neutral buoyancy requirement. It also presents an expanded sensitivity analysis to illustrate the influence of various design and modeling assumptions. While energy density is very sensitive to turbine efficiency (sensitivity coefficient in excess of 0.60), it is relatively insensitive to all other major design parameters (sensitivity coefficients < 0.15) like compressor efficiency, inlet water temperature, scaling methodology, etc. The neutral buoyancy requirement introduces a significant (15%) energy density penalty but overall the system still appears to offer factors of five to eight improvements in energy density (i.e., vehicle range/endurance) over present battery-based technologies.

  20. Comparison of Mars Atmospheric Density Estimates from Models to Measurements from Mars Global Surveyor (MGS) Data

    NASA Technical Reports Server (NTRS)

    Justh, Hilary L.; Justus, C. G.

    2009-01-01

    A recent study (Desai, 2008) has shown that the actual landing sites of Mars Pathfinder, the Mars Exploration Rovers (Spirit and Opportunity) and the Phoenix Mars Lander have been further downrange than predicted by models prior to landing. Desai's reconstruction of their entries into the Martian atmosphere showed that the models consistently predicted higher densities than those found upon entry, descent and landing. Desai's results have raised a question as to whether there is a systemic problem within Mars atmospheric models. The proposal is to compare Mars atmospheric density estimates from Mars atmospheric models to measurements made by Mars Global Surveyor (MGS). The comparison study requires the completion of several tasks that would result in a greater understanding of the reasons behind the discrepancy found during recent landings on Mars and possible solutions to this problem.

  1. A maximum volume density estimator generalized over a proper motion-limited sample

    NASA Astrophysics Data System (ADS)

    Lam, Marco C.; Rowell, Nicholas; Hambly, Nigel C.

    2015-07-01

    The traditional Schmidt density estimator has been proven to be unbiased and effective in a magnitude-limited sample. Previously, efforts have been made to generalize it for populations with non-uniform density and proper motion-limited cases. This work shows that the then-good assumptions for a proper motion-limited sample are no longer sufficient to cope with modern data. Populations with larger differences in the kinematics as compared to the local standard of rest are most severely affected. We show that this systematic bias can be removed by treating the discovery fraction inseparable from the generalized maximum volume integrand. The treatment can be applied to any proper motion-limited sample with good knowledge of the kinematics. This work demonstrates the method through application to a mock catalogue of a white dwarf-only solar neighbourhood for various scenarios and compared against the traditional treatment using a survey with Pan-STARRS-like characteristics.
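    For context, a bare-bones 1/Vmax estimator for a purely magnitude-limited sample is sketched below; the proper-motion-limited generalization and the kinematic discovery-fraction treatment that are the subject of the paper are deliberately omitted, and the survey parameters are invented.

```python
# Hedged sketch: classical 1/Vmax space-density estimate for a magnitude-limited survey.
import numpy as np

def vmax_density(abs_mag, m_limit=19.0, omega=4.0 * np.pi):
    """Space density [pc^-3] from absolute magnitudes over solid angle omega [sr]."""
    d_max_pc = 10.0 ** (0.2 * (m_limit - abs_mag) + 1.0)   # distance at which m = m_limit
    v_max = (omega / 3.0) * d_max_pc ** 3                  # survey volume per object
    return np.sum(1.0 / v_max)

rng = np.random.default_rng(0)
sample_abs_mag = rng.normal(12.0, 0.5, 300)                # synthetic white-dwarf-like magnitudes
print(vmax_density(sample_abs_mag))
```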

  2. Magnetic fields, plasma densities, and plasma beta parameters estimated from high-frequency zebra fine structures

    NASA Astrophysics Data System (ADS)

    Karlický, M.; Jiricka, K.

    2002-10-01

    Using the recent model of the radio zebra fine structures (Ledenev et al. 2001), the magnetic fields, plasma densities, and plasma beta parameters are estimated from high-frequency zebra fine structures. It was found that in the flare radio source of high-frequency (1-2 GHz) zebras the densities and magnetic fields vary in the intervals of (1-4)×10^10 cm^-3 and 40-230 G, respectively. Assuming then a flare temperature of about 10^7 K, the plasma beta parameters in the zebra radio sources are in the interval 0.05-0.81. Thus the plasma pressure effects in such radio sources, especially in those with many zebra lines, are not negligible.
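    The quoted densities can be checked against the emission frequency with the standard plasma-frequency relation n_e ≈ (f_pe[Hz]/8980)^2 cm^-3, as in the short snippet below; the assumption that the zebra emission occurs near the plasma/upper-hybrid frequency follows the cited model.

```python
# Consistency check: electron density implied by an emission frequency near f_pe.
def density_from_plasma_frequency(f_hz: float) -> float:
    return (f_hz / 8980.0) ** 2          # electron density in cm^-3

for f in (1.0e9, 2.0e9):                 # the 1-2 GHz zebra band maps to ~(1.2-5)e10 cm^-3
    print(f"{f/1e9:.0f} GHz -> {density_from_plasma_frequency(f):.2e} cm^-3")
```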

  3. Simple method to estimate MOS oxide-trap, interface-trap, and border-trap densities

    SciTech Connect

    Fleetwood, D.M.; Shaneyfelt, M.R.; Schwank, J.R.

    1993-09-01

    Recent work has shown that near-interfacial oxide traps that communicate with the underlying Si ("border traps") can play a significant role in determining MOS radiation response and long-term reliability. Thermally-stimulated-current, 1/f noise, and frequency-dependent charge-pumping measurements have been used to estimate border-trap densities in MOS structures. These methods all require high-precision, low-noise measurements that are often difficult to perform and interpret. In this summary, we describe a new dual-transistor method to separate bulk-oxide-trap, interface-trap, and border-trap densities in irradiated MOS transistors that requires only standard threshold-voltage and high-frequency charge-pumping measurements.

  4. New density estimation methods for charged particle beams with applications to microbunching instability

    NASA Astrophysics Data System (ADS)

    Terzić, Balša; Bassi, Gabriele

    2011-07-01

    In this paper we discuss representations of charged particle densities in particle-in-cell simulations, analyze the sources and profiles of the intrinsic numerical noise, and present efficient methods for their removal. We devise two alternative estimation methods for the charged particle distribution which represent a significant improvement over the Monte Carlo cosine expansion used in the 2D code of Bassi et al. [G. Bassi, J. A. Ellison, K. Heinemann, and R. Warnock, Phys. Rev. ST Accel. Beams 12, 080704 (2009); G. Bassi and B. Terzić, in Proceedings of the 23rd Particle Accelerator Conference, Vancouver, Canada, 2009 (IEEE, Piscataway, NJ, 2009), TH5PFP043], designed to simulate coherent synchrotron radiation (CSR) in charged particle beams. The improvement is achieved by employing an alternative beam density estimation to the Monte Carlo cosine expansion. The representation is first binned onto a finite grid, after which two grid-based methods are employed to approximate particle distributions: (i) truncated fast cosine transform; and (ii) thresholded wavelet transform (TWT). We demonstrate that these alternative methods represent a staggering upgrade over the original Monte Carlo cosine expansion in terms of efficiency, while the TWT approximation also provides an appreciable improvement in accuracy. The improvement in accuracy comes from a judicious removal of the numerical noise enabled by the wavelet formulation. The TWT method is then integrated into the CSR code [G. Bassi, J. A. Ellison, K. Heinemann, and R. Warnock, Phys. Rev. ST Accel. Beams 12, 080704 (2009)], and benchmarked against the original version. We show that the new density estimation method provides a superior performance in terms of efficiency and spatial resolution, thus enabling high-fidelity simulations of CSR effects, including microbunching instability.
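    A minimal sketch of the TWT idea, assuming a 1D particle coordinate, a PyWavelets decomposition and a simple quantile-based hard threshold (the paper's actual grid, wavelet family and thresholding rule may differ): bin the particles onto a grid, discard small wavelet coefficients, and reconstruct a denoised density.

    ```python
    import numpy as np
    import pywt

    def twt_density(particles, grid=256, wavelet="db4", level=5, keep=0.02):
        """Thresholded-wavelet-transform (TWT) style density estimate sketch."""
        hist, edges = np.histogram(particles, bins=grid, density=True)
        coeffs = pywt.wavedec(hist, wavelet, level=level)
        flat = np.concatenate([np.abs(c) for c in coeffs])
        thresh = np.quantile(flat, 1.0 - keep)          # keep only the largest ~2% of coefficients
        coeffs = [pywt.threshold(c, thresh, mode="hard") for c in coeffs]
        density = pywt.waverec(coeffs, wavelet)[:grid]
        return np.clip(density, 0.0, None), edges

    # Example: 1e5 samples from a bimodal beam profile
    rng = np.random.default_rng(0)
    samples = np.concatenate([rng.normal(-1, 0.3, 50_000), rng.normal(1, 0.2, 50_000)])
    rho, edges = twt_density(samples)
    ```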

  5. A reliable simple method to estimate density of nitroaliphatics, nitrate esters and nitramines.

    PubMed

    Keshavarz, Mohammad Hossein; Pouretedal, Hamid Reza

    2009-09-30

    In this work, a new simple method is presented to estimate the crystal density of three important classes of explosives: nitroaliphatics, nitrate esters and nitramines. This method allows reliable prediction of detonation performance for the above compounds. It uses a new general correlation containing important explosive parameters such as the number of carbon, hydrogen and nitrogen atoms and two other structural parameters. The predicted results are compared to the results of the best available methods for different families of energetic compounds. This method is also tested for various explosives with complex molecular structures. It is shown that the predicted results are more reliable than those of the best well-developed simple methods. PMID:19442437

  6. Efficient 3D movement-based kernel density estimator and application to wildlife ecology

    USGS Publications Warehouse

    Tracey-PR, Jeff; Sheppard, James K.; Lockwood, Glenn K.; Chourasia, Amit; Tatineni, Mahidhar; Fisher, Robert N.; Sinkovits, Robert S.

    2014-01-01

    We describe an efficient implementation of a 3D movement-based kernel density estimator for determining animal space use from discrete GPS measurements. This new method provides more accurate results, particularly for species that make large excursions in the vertical dimension. The downside of this approach is that it is much more computationally expensive than simpler, lower-dimensional models. Through a combination of code restructuring, parallelization and performance optimization, we were able to reduce the time to solution by up to a factor of 1000, thereby greatly improving the applicability of the method.
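    As a point of reference for the dimensionality involved, the snippet below evaluates a plain (stationary) 3D Gaussian kernel density estimate over synthetic GPS fixes with scipy. It is not the movement-based estimator of the paper, and the grid extents and bandwidth rule are assumptions, but it illustrates why evaluating a density over a full 3D grid quickly becomes expensive.

    ```python
    import numpy as np
    from scipy.stats import gaussian_kde

    # Synthetic GPS fixes: easting, northing, altitude (all in metres)
    rng = np.random.default_rng(1)
    fixes = np.vstack([rng.normal(0, 500, 2000),
                       rng.normal(0, 500, 2000),
                       rng.normal(50, 20, 2000)])

    kde = gaussian_kde(fixes)                          # bandwidth from Scott's rule
    grid = np.mgrid[-1000:1000:40j, -1000:1000:40j, 0:120:20j]
    density = kde(grid.reshape(3, -1)).reshape(grid.shape[1:])
    # The 95% utilization distribution would be the smallest region holding 95% of this mass.
    ```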

  7. Estimation of Tidal Stream Power Density in Narrow Passages and Setonaikai in Japan

    NASA Astrophysics Data System (ADS)

    Sengoku, Arata; Ishida, Yuzo; Sugio, Tsuyoshi

    Tidal stream power is a renewable energy source that is free from hydrocarbon consumption and CO2 emission. In Japan, we have several narrow passages with strong tidal currents, more than 10 knots in some cases, and tidal stream power may be utilized in the future, which would contribute to the reduction of greenhouse gas emissions. In this paper, we estimate the distribution of tidal stream power density in several narrow passages and Setonaikai in Japan from the amplitudes of the M2 and S2 constituents of the tidal stream.
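    The kinetic power density of a tidal stream follows P = 0.5*rho*v^3 per unit cross-sectional area. A rough sketch, assuming a current synthesized from only the M2 and S2 amplitudes and a nominal seawater density, is shown below; the amplitudes in the example are hypothetical.

    ```python
    import numpy as np

    RHO_SEAWATER = 1025.0    # kg/m^3 (assumed nominal value)

    def power_density(speed_ms):
        """Instantaneous kinetic power density of a tidal stream, P = 0.5*rho*v^3 (W/m^2)."""
        return 0.5 * RHO_SEAWATER * speed_ms**3

    def mean_power_density_from_constituents(u_m2, u_s2, n=10_000):
        """Mean of 0.5*rho*|v|^3 for a current built from M2 and S2 amplitudes (m/s)."""
        t = np.linspace(0.0, 30 * 24 * 3600.0, n)        # roughly one month of samples
        w_m2 = 2 * np.pi / (12.4206 * 3600.0)            # M2 angular frequency
        w_s2 = 2 * np.pi / (12.0000 * 3600.0)            # S2 angular frequency
        v = u_m2 * np.cos(w_m2 * t) + u_s2 * np.cos(w_s2 * t)
        return power_density(np.abs(v)).mean()

    # Hypothetical example: 2.5 m/s M2 and 1.0 m/s S2 amplitudes in a narrow passage
    print(mean_power_density_from_constituents(2.5, 1.0))    # W/m^2
    ```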

  8. Bayesian semiparametric power spectral density estimation with applications in gravitational wave data analysis

    NASA Astrophysics Data System (ADS)

    Edwards, Matthew C.; Meyer, Renate; Christensen, Nelson

    2015-09-01

    The standard noise model in gravitational wave (GW) data analysis assumes detector noise is stationary and Gaussian distributed, with a known power spectral density (PSD) that is usually estimated using clean off-source data. Real GW data often depart from these assumptions, and misspecified parametric models of the PSD could result in misleading inferences. We propose a Bayesian semiparametric approach to improve this. We use a nonparametric Bernstein polynomial prior on the PSD, with weights attained via a Dirichlet process distribution, and update this using the Whittle likelihood. Posterior samples are obtained using a blocked Metropolis-within-Gibbs sampler. We simultaneously estimate the reconstruction parameters of a rotating core collapse supernova GW burst that has been embedded in simulated Advanced LIGO noise. We also discuss an approach to deal with nonstationary data by breaking longer data streams into smaller and locally stationary components.

  9. Validation tests of an improved kernel density estimation method for identifying disease clusters

    SciTech Connect

    Cai, Qiang; Rushton, Gerald; Bhaduri, Budhendra L

    2011-01-01

    The spatial filter method, which belongs to the class of kernel density estimation methods, has been used to make morbidity and mortality maps in several recent studies. We propose improvements in the method that include a spatial basis of support designed to give a constant standard error for the standardized mortality/morbidity rate; a stair-case weight method for weighting observations to reduce estimation bias; and a method for selecting parameters to control three measures of performance of the method: sensitivity, specificity and false discovery rate. We test the performance of the method using Monte Carlo simulations of hypothetical disease clusters over a test area of four counties in Iowa. The simulations include different types of spatial disease patterns and high resolution population distribution data. Results confirm that the new features of the spatial filter method do substantially improve its performance in realistic situations comparable to those where the method is likely to be used.

  10. Constrained Kalman Filtering Via Density Function Truncation for Turbofan Engine Health Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Dan; Simon, Donald L.

    2006-01-01

    Kalman filters are often used to estimate the state variables of a dynamic system. However, in the application of Kalman filters some known signal information is often either ignored or dealt with heuristically. For instance, state variable constraints (which may be based on physical considerations) are often neglected because they do not fit easily into the structure of the Kalman filter. This paper develops an analytic method of incorporating state variable inequality constraints in the Kalman filter. The resultant filter truncates the PDF (probability density function) of the Kalman filter estimate at the known constraints and then computes the constrained filter estimate as the mean of the truncated PDF. The incorporation of state variable constraints increases the computational effort of the filter but significantly improves its estimation accuracy. The improvement is demonstrated via simulation results obtained from a turbofan engine model. The turbofan engine model contains 3 state variables, 11 measurements, and 10 component health parameters. It is also shown that the truncated Kalman filter may be a more accurate way of incorporating inequality constraints than other constrained filters (e.g., the projection approach to constrained filtering).
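    The central operation, replacing an unconstrained Gaussian estimate by the moments of its truncated counterpart, can be sketched in one dimension with scipy's truncated normal. The state, bounds and numbers below are illustrative assumptions, not the turbofan model of the paper.

    ```python
    import numpy as np
    from scipy.stats import truncnorm

    def truncate_estimate(x_hat, sigma, lower, upper):
        """Replace the unconstrained estimate (mean x_hat, std sigma) by the mean and
        variance of the same Gaussian truncated to the constraint interval."""
        a, b = (lower - x_hat) / sigma, (upper - x_hat) / sigma
        dist = truncnorm(a, b, loc=x_hat, scale=sigma)
        return dist.mean(), dist.var()

    # Hypothetical case: the filter says a health parameter is -0.02 +/- 0.05, but it must be >= 0
    x_c, p_c = truncate_estimate(-0.02, 0.05, 0.0, np.inf)
    print(x_c, p_c)    # constrained estimate and its reduced variance
    ```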

  11. Exploring neural directed interactions with transfer entropy based on an adaptive kernel density estimator.

    PubMed

    Zuo, K; Bellanger, J J; Yang, C; Shu, H; Le Bouquin Jeannés, R

    2013-01-01

    This paper aims at estimating causal relationships between signals to detect flow propagation in autoregressive and physiological models. The main challenge of the ongoing work is to discover whether neural activity in a given structure of the brain influences activity in another area during epileptic seizures. This question refers to the concept of effective connectivity in neuroscience, i.e. to the identification of information flows and oriented propagation graphs. Past efforts to determine effective connectivity were rooted in Wiener's definition of causality, adapted in a practical form by Granger with autoregressive models. A number of studies argue against such a linear approach when nonlinear dynamics are suspected in the relationship between signals. Consequently, nonlinear nonparametric approaches, such as transfer entropy (TE), have been introduced to overcome the limitations of linear methods and promoted in many studies dealing with electrophysiological signals. Until now, even though many TE estimators have been developed, further improvement can be expected. In this paper, we investigate a new strategy by introducing an adaptive kernel density estimator to improve TE estimation. PMID:24110694

  12. Direct density-ratio estimation with dimensionality reduction via least-squares hetero-distributional subspace search.

    PubMed

    Sugiyama, Masashi; Yamada, Makoto; von Bünau, Paul; Suzuki, Taiji; Kanamori, Takafumi; Kawanabe, Motoaki

    2011-03-01

    Methods for directly estimating the ratio of two probability density functions have been actively explored recently since they can be used for various data processing tasks such as non-stationarity adaptation, outlier detection, and feature selection. In this paper, we develop a new method which incorporates dimensionality reduction into a direct density-ratio estimation procedure. Our key idea is to find a low-dimensional subspace in which densities are significantly different and perform density-ratio estimation only in this subspace. The proposed method, D(3)-LHSS (Direct Density-ratio estimation with Dimensionality reduction via Least-squares Hetero-distributional Subspace Search), is shown to overcome the limitation of baseline methods. PMID:21059481

  13. A more appropriate white blood cell count for estimating malaria parasite density in Plasmodium vivax patients in northeastern Myanmar.

    PubMed

    Liu, Huaie; Feng, Guohua; Zeng, Weilin; Li, Xiaomei; Bai, Yao; Deng, Shuang; Ruan, Yonghua; Morris, James; Li, Siman; Yang, Zhaoqing; Cui, Liwang

    2016-04-01

    The conventional method of estimating parasite densities employs an assumption of 8000 white blood cells (WBCs)/μl. However, due to leucopenia in malaria patients, this number appears to overestimate parasite densities. In this study, we assessed the accuracy of parasite densities estimated using this assumed WBC count in eastern Myanmar, where Plasmodium vivax has become increasingly prevalent. From 256 patients with uncomplicated P. vivax malaria, we estimated parasite density and counted WBCs by using an automated blood cell counter. It was found that WBC counts were not significantly different between patients of different gender, axillary temperature, and body mass index levels, whereas they were significantly different between age groups of patients and the time points of measurement. The median parasite densities calculated with the actual WBC counts (1903/μl) and the assumed WBC count of 8000/μl (2570/μl) were significantly different. We demonstrated that using the assumed WBC count of 8000 cells/μl to estimate parasite densities of P. vivax malaria patients in this area would lead to an overestimation. For P. vivax patients aged five years and older, an assumed WBC count of 5500/μl best estimated parasite densities. This study provides more realistic assumed WBC counts for estimating parasite densities in P. vivax patients from low-endemicity areas of Southeast Asia. PMID:26802490
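    The arithmetic behind the reported overestimation is the standard thick-film conversion, parasites/μl = (parasites counted / WBCs counted) × assumed WBC count. The sketch below uses a hypothetical reading to show how the conventional 8000/μl assumption compares with the study's suggested 5500/μl.

    ```python
    def parasite_density(parasites_counted, wbc_counted, assumed_wbc_per_ul):
        """Parasites per microlitre from a thick-film count read against WBCs."""
        return parasites_counted * assumed_wbc_per_ul / wbc_counted

    # Hypothetical thick-film reading: 80 parasites per 200 WBCs
    print(parasite_density(80, 200, 8000))   # conventional assumption -> 3200 /ul
    print(parasite_density(80, 200, 5500))   # study's suggested value  -> 2200 /ul
    ```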

  14. Similarities between line fishing and baited stereo-video estimations of length-frequency: novel application of Kernel Density Estimates.

    PubMed

    Langlois, Timothy J; Fitzpatrick, Benjamin R; Fairclough, David V; Wakefield, Corey B; Hesp, S Alex; McLean, Dianne L; Harvey, Euan S; Meeuwig, Jessica J

    2012-01-01

    Age structure data is essential for single species stock assessments but length-frequency data can provide complementary information. In south-western Australia, the majority of these data for exploited species are derived from line caught fish. However, baited remote underwater stereo-video systems (stereo-BRUVS) surveys have also been found to provide accurate length measurements. Given that line fishing tends to be biased towards larger fish, we predicted that stereo-BRUVS would yield length-frequency data with a smaller mean length, skewed towards smaller fish, compared with that collected by fisheries-independent line fishing. To assess the biases and selectivity of stereo-BRUVS and line fishing we compared the length-frequencies obtained for three commonly fished species, using a novel application of the Kernel Density Estimate (KDE) method and the established Kolmogorov-Smirnov (KS) test. The shape of the length-frequency distribution obtained for the labrid Choerodon rubescens by stereo-BRUVS and line fishing did not differ significantly but, as predicted, the mean length estimated from stereo-BRUVS was 17% smaller. Contrary to our predictions, the mean length and shape of the length-frequency distribution for the epinephelid Epinephelides armatus did not differ significantly between line fishing and stereo-BRUVS. For the sparid Pagrus auratus, the length-frequency distribution derived from the stereo-BRUVS method was bi-modal, while that from line fishing was uni-modal. However, the location of the first modal length class for P. auratus observed by each sampling method was similar. No differences were found between the results of the KS and KDE tests; however, KDE provided a data-driven method for approximating length-frequency data to a probability function and a useful way of describing and testing any differences between length-frequency samples. This study found the overall size selectivity of line fishing and stereo-BRUVS were unexpectedly similar. PMID:23209547

  15. HIRDLS observations of global gravity wave absolute momentum fluxes: A wavelet based approach

    NASA Astrophysics Data System (ADS)

    John, Sherine Rachel; Kishore Kumar, Karanam

    2016-02-01

    Using wavelet technique for detection of height varying vertical and horizontal wavelengths of gravity waves, the absolute values of gravity wave momentum fluxes are estimated from High Resolution Dynamics Limb Sounder (HIRDLS) temperature measurements. Two years of temperature measurements (2005 December-2007 November) from HIRDLS onboard EOS-Aura satellite over the globe are used for this purpose. The least square fitting method is employed to extract the 0-6 zonal wavenumber planetary wave amplitudes, which are removed from the instantaneous temperature profiles to extract gravity wave fields. The vertical and horizontal wavelengths of the prominent waves are computed using wavelet and cross correlation techniques respectively. The absolute momentum fluxes are then estimated using prominent gravity wave perturbations and their vertical and horizontal wavelengths. The momentum fluxes obtained from HIRDLS are compared with the fluxes obtained from ground based Rayleigh LIDAR observations over a low latitude station, Gadanki (13.5°N, 79.2°E) and are found to be in good agreement. After validation, the absolute gravity wave momentum fluxes over the entire globe are estimated. It is found that the winter hemisphere has the maximum momentum flux magnitudes over the high latitudes with a secondary maximum over the summer hemispheric low-latitudes. The significance of the present study lies in introducing the wavelet technique for estimating the height varying vertical and horizontal wavelengths of gravity waves and validating space based momentum flux estimations using ground based lidar observations.
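    A commonly used expression for the absolute momentum flux from temperature amplitudes is |F| ≈ 0.5 · ρ · (λz/λh) · (g/N)² · (T'/T̄)². The sketch below evaluates it for illustrative stratospheric values; the exact formulation, constants and buoyancy frequency used in the HIRDLS analysis are assumptions here, and the numbers are hypothetical.

    ```python
    G = 9.81          # gravitational acceleration, m/s^2

    def absolute_momentum_flux(rho, lambda_z, lambda_h, t_amp, t_mean, N=0.02):
        """Absolute gravity wave momentum flux (Pa) from temperature amplitudes:
            |F| ~ 0.5 * rho * (lambda_z/lambda_h) * (g/N)^2 * (T'/Tbar)^2
        rho in kg/m^3, wavelengths in consistent units, N = buoyancy frequency (1/s)."""
        return 0.5 * rho * (lambda_z / lambda_h) * (G / N) ** 2 * (t_amp / t_mean) ** 2

    # Illustrative values: 10 km vertical / 500 km horizontal wavelength,
    # 2 K amplitude on a 220 K background, density 0.02 kg/m^3 near 40 km altitude
    print(absolute_momentum_flux(0.02, 10e3, 500e3, 2.0, 220.0))   # ~4 mPa
    ```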

  16. Accuracy of estimated geometric parameters of trees depending on the LIDAR data density

    NASA Astrophysics Data System (ADS)

    Hadas, Edyta; Estornell, Javier

    2015-04-01

    The estimation of dendrometric variables has become important for spatial planning and agriculture projects. Because classical field measurements are time consuming and inefficient, airborne LiDAR (Light Detection and Ranging) measurements are successfully used in this area. Point clouds acquired for relatively large areas allow the structure of forestry and agricultural areas and the geometrical parameters of individual trees to be determined. In this study two LiDAR datasets with different densities were used: a sparse one with an average density of 0.5 pt/m2 and a dense one with a density of 4 pt/m2. 25 olive trees were selected and field measurements of tree height, crown bottom height, length of crown diameters and tree position were performed. To determine the tree geometric parameters from LiDAR data, two independent strategies were developed that utilize the ArcGIS, ENVI and FUSION software. Strategy a) was based on slicing the canopy surface model (CSM) at 0.5 m height, and in strategy b) minimum bounding polygons were created around the detected tree centroid as the tree crown area. The individual steps were developed so that they could also be applied in automatic processing. To assess the performance of each strategy with both point clouds, the differences between the measured and estimated geometric parameters of the trees were analyzed. As expected, tree heights were underestimated for both strategies (RMSE=0.7 m for the dense dataset and RMSE=1.5 m for the sparse one) and tree crown heights were overestimated (RMSE=0.4 m and RMSE=0.7 m for the dense and sparse datasets respectively). For the dense dataset, strategy b) allowed more accurate crown diameters to be determined (RMSE=0.5 m) than strategy a) (RMSE=0.8 m), and for the sparse dataset, only strategy a) proved relevant (RMSE=1.0 m). The accuracy of the strategies was also examined for its dependency on tree size. For the dense dataset, the larger the tree (height or longer crown diameter), the higher the error of the estimated tree height, and for the sparse dataset, the larger the tree, the higher the error of the estimated crown bottom height. Finally, the spatial distribution of points inside the tree crown was analyzed by creating a normalized tree crown. This confirmed a high concentration of LiDAR points inside the central part of a tree.

  17. Effective dysphonia detection using feature dimension reduction and kernel density estimation for patients with Parkinson's disease.

    PubMed

    Yang, Shanshan; Zheng, Fang; Luo, Xin; Cai, Suxian; Wu, Yunfeng; Liu, Kaizhi; Wu, Meihong; Chen, Jian; Krishnan, Sridhar

    2014-01-01

    Detection of dysphonia is useful for monitoring the progression of phonatory impairment for patients with Parkinson's disease (PD), and also helps assess the disease severity. This paper describes the statistical pattern analysis methods to study different vocal measurements of sustained phonations. The feature dimension reduction procedure was implemented by using the sequential forward selection (SFS) and kernel principal component analysis (KPCA) methods. Four selected vocal measures were projected by the KPCA onto the bivariate feature space, in which the class-conditional feature densities can be approximated with the nonparametric kernel density estimation technique. In the vocal pattern classification experiments, Fisher's linear discriminant analysis (FLDA) was applied to perform the linear classification of voice records for healthy control subjects and PD patients, and the maximum a posteriori (MAP) decision rule and support vector machine (SVM) with radial basis function kernels were employed for the nonlinear classification tasks. Based on the KPCA-mapped feature densities, the MAP classifier successfully distinguished 91.8% of voice records, with a sensitivity rate of 0.986, a specificity rate of 0.708, and an area value of 0.94 under the receiver operating characteristic (ROC) curve. The diagnostic performance provided by the MAP classifier was superior to those of the FLDA and SVM classifiers. In addition, the classification results indicated that gender is insensitive to dysphonia detection, and the sustained phonations of PD patients with minimal functional disability are more difficult to identify correctly. PMID:24586406

  18. Local diagnostics to estimate density-induced sea level variations over topography and along coastlines

    NASA Astrophysics Data System (ADS)

    Bingham, R. J.; Hughes, C. W.

    2012-01-01

    In the open ocean, sea level variability is primarily steric in origin. Steric sea level is given by the depth integral of the density field, raising the question of how tide gauges, which are situated in very shallow water, feel deep ocean variability. Here this question is examined in a high-resolution global ocean model. By considering a series of assumptions we show that if we wish to reconstruct coastal sea level using only local density information, then the best assumption we can make is one of no horizontal pressure gradient, and therefore no geostrophic flow, at the seafloor. Coastal sea level can then be determined using density at the ocean's floor. When attempting to discriminate between mass and volume components of sea level measured by tide gauges, the conventional approach is to take steric height at deep-ocean sites close to the tide gauges as an estimate of the steric component. We find that with steric height computed at 3000 m this approach only works well in the equatorial band of the Atlantic and Pacific eastern boundaries. In most cases the steric correction can be improved by calculating steric height closer to shore, with the best results obtained in the depth range 500-1000 m. Yet, for western boundaries, large discrepancies remain. Our results therefore suggest that on time scales up to about 5 years, and perhaps longer, the presence of boundary currents means that the conventional steric correction to tide gauges may not be valid in many places.
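    Steric height is just the depth integral of the density anomaly scaled by a reference density. A minimal sketch, assuming a uniform reference density and a simple trapezoidal integration, is given below; the profile values are hypothetical.

    ```python
    import numpy as np

    RHO0 = 1025.0   # reference density, kg/m^3 (assumed)

    def steric_height(rho, z):
        """Steric sea level anomaly (m) from a density profile.

        rho : in-situ density profile (kg/m^3)
        z   : corresponding depths (m), increasing downward
        """
        rho_anom = rho - RHO0
        # trapezoidal depth integral of the density anomaly
        integral = np.sum(0.5 * (rho_anom[:-1] + rho_anom[1:]) * np.diff(z))
        return -integral / RHO0

    # Example: a 1000 m column uniformly 0.2 kg/m^3 lighter than the reference
    z = np.linspace(0.0, 1000.0, 101)
    rho = np.full_like(z, RHO0 - 0.2)
    print(steric_height(rho, z))   # ~ +0.195 m of steric sea level
    ```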

  19. Wavelet-based reconstruction of fossil-fuel CO2 emissions from sparse measurements

    NASA Astrophysics Data System (ADS)

    McKenna, S. A.; Ray, J.; Yadav, V.; Van Bloemen Waanders, B.; Michalak, A. M.

    2012-12-01

    We present a method to estimate spatially resolved fossil-fuel CO2 (ffCO2) emissions from sparse measurements of time-varying CO2 concentrations. It is based on wavelet modeling of the strongly non-stationary spatial distribution of ffCO2 emissions. The dimensionality of the wavelet model is first reduced using images of nightlights, which identify regions of human habitation. Since wavelets are a multiresolution basis set, most of the reduction is accomplished by removing fine-scale wavelets in the regions with low nightlight radiances. The (reduced) wavelet model of emissions is propagated through an atmospheric transport model (WRF) to predict CO2 concentrations at a handful of measurement sites. The estimation of the wavelet model of emissions, i.e., inferring the wavelet weights, is performed by fitting to observations at the measurement sites. This is done using Staggered Orthogonal Matching Pursuit (StOMP), which first identifies (and sets to zero) the wavelet coefficients that cannot be estimated from the observations, before estimating the remaining coefficients. This model sparsification and fitting is performed simultaneously, allowing us to explore multiple wavelet models of differing complexity. This technique is borrowed from the field of compressive sensing, and is generally used in image and video processing. We test this approach using synthetic observations generated from emissions from the Vulcan database. 35 sensor sites are chosen over the USA. FfCO2 emissions, averaged over 8-day periods, are estimated at a 1 degree spatial resolution. We find that only about 40% of the wavelets in the emission model can be estimated from the data; however, the mix of coefficients that are estimated changes with time. Total US emissions can be reconstructed with errors of about 5%. The inferred emissions, if aggregated monthly, have a correlation of 0.9 with Vulcan fluxes. We find that the estimated emissions in the Northeast US are the most accurate. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  20. Bayes and empirical Bayes estimators of abundance and density from spatial capture-recapture data.

    PubMed

    Dorazio, Robert M

    2013-01-01

    In capture-recapture and mark-resight surveys, movements of individuals both within and between sampling periods can alter the susceptibility of individuals to detection over the region of sampling. In these circumstances spatially explicit capture-recapture (SECR) models, which incorporate the observed locations of individuals, allow population density and abundance to be estimated while accounting for differences in detectability of individuals. In this paper I propose two Bayesian SECR models, one for the analysis of recaptures observed in trapping arrays and another for the analysis of recaptures observed in area searches. In formulating these models I used distinct submodels to specify the distribution of individual home-range centers and the observable recaptures associated with these individuals. This separation of ecological and observational processes allowed me to derive a formal connection between Bayes and empirical Bayes estimators of population abundance that has not been established previously. I showed that this connection applies to every Poisson point-process model of SECR data and provides theoretical support for a previously proposed estimator of abundance based on recaptures in trapping arrays. To illustrate results of both classical and Bayesian methods of analysis, I compared Bayes and empirical Bayes estimates of abundance and density using recaptures from simulated and real populations of animals. Real populations included two iconic datasets: recaptures of tigers detected in camera-trap surveys and recaptures of lizards detected in area-search surveys. In the datasets I analyzed, classical and Bayesian methods provided similar - and often identical - inferences, which is not surprising given the sample sizes and the noninformative priors used in the analyses. PMID:24386325

  1. A Bayesian Hierarchical Model for Estimation of Abundance and Spatial Density of Aedes aegypti

    PubMed Central

    Villela, Daniel A. M.; Codeço, Claudia T.; Figueiredo, Felipe; Garcia, Gabriela A.; Maciel-de-Freitas, Rafael; Struchiner, Claudio J.

    2015-01-01

    Strategies to minimize dengue transmission commonly rely on vector control, which aims to maintain Ae. aegypti density below a theoretical threshold. Mosquito abundance is traditionally estimated from mark-release-recapture (MRR) experiments, which lack proper analysis regarding accurate vector spatial distribution and population density. Recently proposed strategies to control vector-borne diseases involve replacing the susceptible wild population by genetically modified individuals refractory to the infection by the pathogen. Accurate measurements of mosquito abundance in time and space are required to optimize the success of such interventions. In this paper, we present a hierarchical probabilistic model for the estimation of population abundance and spatial distribution from typical mosquito MRR experiments, with direct application to the planning of these new control strategies. We perform a Bayesian analysis using the model and data from two MRR experiments performed in a neighborhood of Rio de Janeiro, Brazil, during both low- and high-dengue transmission seasons. The hierarchical model indicates that mosquito spatial distribution is clustered during the winter (0.99 mosquitoes/premise 95% CI: 0.80–1.23) and more homogeneous during the high abundance period (5.2 mosquitoes/premise 95% CI: 4.3–5.9). The hierarchical model also performed better than the commonly used Fisher-Ford’s method, when using simulated data. The proposed model provides a formal treatment of the sources of uncertainty associated with the estimation of mosquito abundance imposed by the sampling design. Our approach is useful in strategies such as population suppression or the displacement of wild vector populations by refractory Wolbachia-infected mosquitoes, since the invasion dynamics have been shown to follow threshold conditions dictated by mosquito abundance. The presence of spatially distributed abundance hotspots is also formally addressed under this modeling framework and its knowledge deemed crucial to predict the fate of transmission control strategies based on the replacement of vector populations. PMID:25906323

  2. Integration of Self-Organizing Map (SOM) and Kernel Density Estimation (KDE) for network intrusion detection

    NASA Astrophysics Data System (ADS)

    Cao, Yuan; He, Haibo; Man, Hong; Shen, Xiaoping

    2009-09-01

    This paper proposes an approach to integrate the self-organizing map (SOM) and kernel density estimation (KDE) techniques for the anomaly-based network intrusion detection (ABNID) system to monitor the network traffic and capture potential abnormal behaviors. With the continuous development of network technology, information security has become a major concern for the cyber system research. In the modern net-centric and tactical warfare networks, the situation is more critical to provide real-time protection for the availability, confidentiality, and integrity of the networked information. To this end, in this work we propose to explore the learning capabilities of SOM, and integrate it with KDE for the network intrusion detection. KDE is used to estimate the distributions of the observed random variables that describe the network system and determine whether the network traffic is normal or abnormal. Meanwhile, the learning and clustering capabilities of SOM are employed to obtain well-defined data clusters to reduce the computational cost of the KDE. The principle of learning in SOM is to self-organize the network of neurons to seek similar properties for certain input patterns. Therefore, SOM can form an approximation of the distribution of input space in a compact fashion, reduce the number of terms in a kernel density estimator, and thus improve the efficiency for the intrusion detection. We test the proposed algorithm over the real-world data sets obtained from the Integrated Network Based Ohio University's Network Detective Service (INBOUNDS) system to show the effectiveness and efficiency of this method.

  3. Bayes and Empirical Bayes Estimators of Abundance and Density from Spatial Capture-Recapture Data

    PubMed Central

    Dorazio, Robert M.

    2013-01-01

    In capture-recapture and mark-resight surveys, movements of individuals both within and between sampling periods can alter the susceptibility of individuals to detection over the region of sampling. In these circumstances spatially explicit capture-recapture (SECR) models, which incorporate the observed locations of individuals, allow population density and abundance to be estimated while accounting for differences in detectability of individuals. In this paper I propose two Bayesian SECR models, one for the analysis of recaptures observed in trapping arrays and another for the analysis of recaptures observed in area searches. In formulating these models I used distinct submodels to specify the distribution of individual home-range centers and the observable recaptures associated with these individuals. This separation of ecological and observational processes allowed me to derive a formal connection between Bayes and empirical Bayes estimators of population abundance that has not been established previously. I showed that this connection applies to every Poisson point-process model of SECR data and provides theoretical support for a previously proposed estimator of abundance based on recaptures in trapping arrays. To illustrate results of both classical and Bayesian methods of analysis, I compared Bayes and empirical Bayes estimates of abundance and density using recaptures from simulated and real populations of animals. Real populations included two iconic datasets: recaptures of tigers detected in camera-trap surveys and recaptures of lizards detected in area-search surveys. In the datasets I analyzed, classical and Bayesian methods provided similar – and often identical – inferences, which is not surprising given the sample sizes and the noninformative priors used in the analyses. PMID:24386325

  4. Bayes and empirical Bayes estimators of abundance and density from spatial capture-recapture data

    USGS Publications Warehouse

    Dorazio, Robert M.

    2013-01-01

    In capture-recapture and mark-resight surveys, movements of individuals both within and between sampling periods can alter the susceptibility of individuals to detection over the region of sampling. In these circumstances spatially explicit capture-recapture (SECR) models, which incorporate the observed locations of individuals, allow population density and abundance to be estimated while accounting for differences in detectability of individuals. In this paper I propose two Bayesian SECR models, one for the analysis of recaptures observed in trapping arrays and another for the analysis of recaptures observed in area searches. In formulating these models I used distinct submodels to specify the distribution of individual home-range centers and the observable recaptures associated with these individuals. This separation of ecological and observational processes allowed me to derive a formal connection between Bayes and empirical Bayes estimators of population abundance that has not been established previously. I showed that this connection applies to every Poisson point-process model of SECR data and provides theoretical support for a previously proposed estimator of abundance based on recaptures in trapping arrays. To illustrate results of both classical and Bayesian methods of analysis, I compared Bayes and empirical Bayes estimates of abundance and density using recaptures from simulated and real populations of animals. Real populations included two iconic datasets: recaptures of tigers detected in camera-trap surveys and recaptures of lizards detected in area-search surveys. In the datasets I analyzed, classical and Bayesian methods provided similar – and often identical – inferences, which is not surprising given the sample sizes and the noninformative priors used in the analyses.

  5. A Bayesian Hierarchical Model for Estimation of Abundance and Spatial Density of Aedes aegypti.

    PubMed

    Villela, Daniel A M; Codeço, Claudia T; Figueiredo, Felipe; Garcia, Gabriela A; Maciel-de-Freitas, Rafael; Struchiner, Claudio J

    2015-01-01

    Strategies to minimize dengue transmission commonly rely on vector control, which aims to maintain Ae. aegypti density below a theoretical threshold. Mosquito abundance is traditionally estimated from mark-release-recapture (MRR) experiments, which lack proper analysis regarding accurate vector spatial distribution and population density. Recently proposed strategies to control vector-borne diseases involve replacing the susceptible wild population by genetically modified individuals refractory to the infection by the pathogen. Accurate measurements of mosquito abundance in time and space are required to optimize the success of such interventions. In this paper, we present a hierarchical probabilistic model for the estimation of population abundance and spatial distribution from typical mosquito MRR experiments, with direct application to the planning of these new control strategies. We perform a Bayesian analysis using the model and data from two MRR experiments performed in a neighborhood of Rio de Janeiro, Brazil, during both low- and high-dengue transmission seasons. The hierarchical model indicates that mosquito spatial distribution is clustered during the winter (0.99 mosquitoes/premise 95% CI: 0.80-1.23) and more homogeneous during the high abundance period (5.2 mosquitoes/premise 95% CI: 4.3-5.9). The hierarchical model also performed better than the commonly used Fisher-Ford's method, when using simulated data. The proposed model provides a formal treatment of the sources of uncertainty associated with the estimation of mosquito abundance imposed by the sampling design. Our approach is useful in strategies such as population suppression or the displacement of wild vector populations by refractory Wolbachia-infected mosquitoes, since the invasion dynamics have been shown to follow threshold conditions dictated by mosquito abundance. The presence of spatially distributed abundance hotspots is also formally addressed under this modeling framework and its knowledge deemed crucial to predict the fate of transmission control strategies based on the replacement of vector populations. PMID:25906323

  6. Estimate of density-of-states changes with strain in A15 Nb3Sn superconductors

    NASA Astrophysics Data System (ADS)

    Qiao, Li; Yang, Lin; Song, Jie

    2015-07-01

    The analyzed experimental datasets show that the bare density of states N(E_F) changes dramatically, as does the superconducting transition temperature Tc, in Nb3Sn samples strained in different states and at different levels. By taking into account the strain-induced change in the electron-phonon coupling strength, the density of states as a function of strain is estimated via a formula deduced from the strong-coupling modifications to the theory of type-II superconductivity. The results of the analysis indicate that (i) as the Nb3Sn material undergoes external axial strain ɛ, the value of N(E_F) decreases by 15% as Tc varies from ∼17.4 to ∼16.6 K; (ii) the N(E_F)-ɛ curve exhibits a changing asymmetry of shape, in qualitative agreement with recent first-principles calculations; (iii) the relationship between the density of states and the superconducting transition temperature in strained A15 Nb3Sn strands shows a significant difference between tensile and compressive loads, while for the trend of the strain-induced drop in electron-phonon coupling strength versus Tc of the distorted Nb3Sn sample under different stress conditions, the curves show consistency over a wide strain range. A general model for characterizing the effect of strain states on N(E_F) in A15 Nb3Sn superconductors is suggested, and the density of states behavior in different modes of deformation can be well described with the modeling formalism. The present results are useful for understanding the origin of the strain sensitivity of the superconducting properties of the A15 Nb3Sn superconductor and for developing a comprehensive theory describing the strain tensor-dependent superconducting behavior of A15 Nb3Sn strands.

  7. SAR amplitude probability density function estimation based on a generalized Gaussian model.

    PubMed

    Moser, Gabriele; Zerubia, Josiane; Serpico, Sebastiano B

    2006-06-01

    In the context of remotely sensed data analysis, an important problem is the development of accurate models for the statistics of the pixel intensities. Focusing on synthetic aperture radar (SAR) data, this modeling process turns out to be a crucial task, for instance, for classification or for denoising purposes. In this paper, an innovative parametric estimation methodology for SAR amplitude data is proposed that adopts a generalized Gaussian (GG) model for the complex SAR backscattered signal. A closed-form expression for the corresponding amplitude probability density function (PDF) is derived and a specific parameter estimation algorithm is developed in order to deal with the proposed model. Specifically, the recently proposed "method-of-log-cumulants" (MoLC) is applied, which stems from the adoption of the Mellin transform (instead of the usual Fourier transform) in the computation of characteristic functions and from the corresponding generalization of the concepts of moment and cumulant. For the developed GG-based amplitude model, the resulting MoLC estimates turn out to be numerically feasible and are also analytically proved to be consistent. The proposed parametric approach was validated by using several real ERS-1, XSAR, E-SAR, and NASA/JPL airborne SAR images, and the experimental results prove that the method models the amplitude PDF better than several previously proposed parametric models for backscattering phenomena. PMID:16764268

  8. Assessing a learning process with functional ANOVA estimators of EEG power spectral densities.

    PubMed

    Gutiérrez, David; Ramírez-Moreno, Mauricio A

    2016-04-01

    We propose to assess the process of learning a task using electroencephalographic (EEG) measurements. In particular, we quantify changes in brain activity associated to the progression of the learning experience through the functional analysis-of-variances (FANOVA) estimators of the EEG power spectral density (PSD). Such functional estimators provide a sense of the effect of training on the EEG dynamics. For that purpose, we implemented an experiment to monitor the process of learning to type using the Colemak keyboard layout during a twelve-lesson training. Hence, our aim is to identify statistically significant changes in the PSD of various EEG rhythms at different stages and difficulty levels of the learning process. Those changes are taken into account only when a probabilistic measure of the cognitive state ensures the high engagement of the volunteer in the training. Based on this, a series of statistical tests are performed in order to determine the personalized frequencies and sensors at which changes in PSD occur, then the FANOVA estimates are computed and analyzed. Our experimental results showed a significant decrease in the power of [Formula: see text] and [Formula: see text] rhythms for ten volunteers during the learning process, and such a decrease happens regardless of the difficulty of the lesson. These results are in agreement with previous reports of changes in PSD being associated to feature binding and memory encoding. PMID:27066154

  9. Stochastic estimation of nuclear level density in the nuclear shell model: An application to parity-dependent level density in 58Ni

    NASA Astrophysics Data System (ADS)

    Shimizu, Noritaka; Utsuno, Yutaka; Futamura, Yasunori; Sakurai, Tetsuya; Mizusaki, Takahiro; Otsuka, Takaharu

    2016-02-01

    We introduce a novel method to obtain level densities in large-scale shell-model calculations. Our method is a stochastic estimation of eigenvalue count based on a shifted Krylov-subspace method, which enables us to obtain level densities of huge Hamiltonian matrices. This framework leads to a successful description of both low-lying spectroscopy and the experimentally observed equilibration of Jπ =2+ and 2- states in 58Ni in a unified manner.

  10. Coronal electron density distributions estimated from CMEs, DH type II radio bursts, and polarized brightness measurements

    NASA Astrophysics Data System (ADS)

    Lee, Jae-Ok; Moon, Y.-J.; Lee, Jin-Yi; Lee, Kyoung-Sun; Kim, R.-S.

    2016-04-01

    We determine coronal electron density distributions (CEDDs) by analyzing decahectometric (DH) type II observations under two assumptions. DH type II bursts are generated by either (1) shocks at the leading edges of coronal mass ejections (CMEs) or (2) CME shock-streamer interactions. Among 399 Wind/WAVES type II bursts (from 1997 to 2012) associated with SOHO/LASCO (Large Angle Spectroscopic COronagraph) CMEs, we select 11 limb events whose fundamental and second harmonic emission lanes are well identified. We determine the lowest frequencies of fundamental emission lanes and the heights of leading edges of their associated CMEs. We also determine the heights of CME shock-streamer interaction regions. The CEDDs are estimated by minimizing the root-mean-square error between the heights from the CME leading edges (or CME shock-streamer interaction regions) and DH type II bursts. We also estimate CEDDs of seven events using polarized brightness (pB) measurements. We find the following results. Under the first assumption, the average of estimated CEDDs from 3 to 20 R_s is about 5 times Saito's model (N_Saito(r)). Under the second assumption, the average of estimated CEDDs from 3 to 10 R_s is 1.5 times N_Saito(r). While the CEDDs obtained from pB measurements are significantly smaller than those based on the first assumption and CME flank regions without streamers, they are well consistent with those based on the second assumption. Our results show not only that about 1 times N_Saito(r) is a proper CEDD for analyzing DH type II bursts but also that CME shock-streamer interactions could be a plausible origin for generating DH type II bursts.

  11. Estimating respiratory and heart rates from the correntropy spectral density of the photoplethysmogram.

    PubMed

    Garde, Ainara; Karlen, Walter; Ansermino, J Mark; Dumont, Guy A

    2014-01-01

    The photoplethysmogram (PPG) obtained from pulse oximetry measures local variations of blood volume in tissues, reflecting the peripheral pulse modulated by heart activity, respiration and other physiological effects. We propose an algorithm based on the correntropy spectral density (CSD) as a novel way to estimate respiratory rate (RR) and heart rate (HR) from the PPG. Time-varying CSD, a technique particularly well-suited for modulated signal patterns, is applied to the PPG. The respiratory and cardiac frequency peaks detected at extended respiratory (8 to 60 breaths/min) and cardiac (30 to 180 beats/min) frequency bands provide RR and HR estimations. The CSD-based algorithm was tested against the Capnobase benchmark dataset, a dataset from 42 subjects containing PPG and capnometric signals and expert labeled reference RR and HR. The RR and HR estimation accuracy was assessed using the unnormalized root mean square (RMS) error. We investigated two window sizes (60 and 120 s) on the Capnobase calibration dataset to explore the time resolution of the CSD-based algorithm. A longer window decreases the RR error; for 120-s windows, the median RMS error (quartiles) obtained for RR was 0.95 (0.27, 6.20) breaths/min and for HR was 0.76 (0.34, 1.45) beats/min. Our experiments show that in addition to a high degree of accuracy and robustness, the CSD facilitates simultaneous and efficient estimation of RR and HR. Providing RR every minute expands the functionality of pulse oximeters and provides additional diagnostic power to this non-invasive monitoring tool. PMID:24466088
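    A rough reading of the CSD idea, assuming a Gaussian kernel, a simple kernel-width rule and a plain FFT of the centred autocorrentropy (the published algorithm's windowing and peak-tracking details are not reproduced), is sketched below on a synthetic pulse-like waveform.

    ```python
    import numpy as np

    def correntropy_spectral_density(x, fs, sigma=None, max_lag=None):
        """Sketch of a correntropy spectral density: FFT of the centred autocorrentropy
        V[m] = mean_n k_sigma(x[n] - x[n+m]) computed with a Gaussian kernel."""
        x = np.asarray(x, dtype=float)
        n = len(x)
        sigma = np.std(x) if sigma is None else sigma        # crude kernel-width choice
        max_lag = n // 2 if max_lag is None else max_lag
        kernel = lambda d: np.exp(-d**2 / (2.0 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)
        v = np.array([kernel(x[: n - m] - x[m:]).mean() for m in range(max_lag)])
        mean_corr = kernel(x[:, None] - x[None, :]).mean()   # centring term over all sample pairs
        psd = np.abs(np.fft.rfft(v - mean_corr))
        freqs = np.fft.rfftfreq(len(v), d=1.0 / fs)
        return freqs, psd

    # Hypothetical example: noisy 1.2 Hz "pulse" waveform sampled at 25 Hz
    fs = 25.0
    t = np.arange(0, 60, 1 / fs)
    ppg = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.random.default_rng(3).normal(size=t.size)
    f, p = correntropy_spectral_density(ppg, fs)
    print(f[np.argmax(p[1:]) + 1])   # strongest non-DC peak, expected near the 1.2 Hz pulse rate
    ```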

  12. Estimating Respiratory and Heart Rates from the Correntropy Spectral Density of the Photoplethysmogram

    PubMed Central

    Garde, Ainara; Karlen, Walter; Ansermino, J. Mark; Dumont, Guy A.

    2014-01-01

    The photoplethysmogram (PPG) obtained from pulse oximetry measures local variations of blood volume in tissues, reflecting the peripheral pulse modulated by heart activity, respiration and other physiological effects. We propose an algorithm based on the correntropy spectral density (CSD) as a novel way to estimate respiratory rate (RR) and heart rate (HR) from the PPG. Time-varying CSD, a technique particularly well-suited for modulated signal patterns, is applied to the PPG. The respiratory and cardiac frequency peaks detected at extended respiratory (8 to 60 breaths/min) and cardiac (30 to 180 beats/min) frequency bands provide RR and HR estimations. The CSD-based algorithm was tested against the Capnobase benchmark dataset, a dataset from 42 subjects containing PPG and capnometric signals and expert labeled reference RR and HR. The RR and HR estimation accuracy was assessed using the unnormalized root mean square (RMS) error. We investigated two window sizes (60 and 120 s) on the Capnobase calibration dataset to explore the time resolution of the CSD-based algorithm. A longer window decreases the RR error; for 120-s windows, the median RMS error (quartiles) obtained for RR was 0.95 (0.27, 6.20) breaths/min and for HR was 0.76 (0.34, 1.45) beats/min. Our experiments show that in addition to a high degree of accuracy and robustness, the CSD facilitates simultaneous and efficient estimation of RR and HR. Providing RR every minute expands the functionality of pulse oximeters and provides additional diagnostic power to this non-invasive monitoring tool. PMID:24466088

  13. Local Wegner and Lifshitz tails estimates for the density of states for continuous random Schrödinger operators

    NASA Astrophysics Data System (ADS)

    Combes, Jean-Michel; Germinet, François; Klein, Abel

    2014-08-01

    We introduce and prove local Wegner estimates for continuous generalized Anderson Hamiltonians, where the single-site random variables are independent but not necessarily identically distributed. In particular, we get Wegner estimates with a constant that goes to zero as we approach the bottom of the spectrum. As an application, we show that the (differentiated) density of states exhibits the same Lifshitz tails upper bound as the integrated density of states.

  14. Estimation of effective scatterer size and number density in near-infrared tomography

    NASA Astrophysics Data System (ADS)

    Wang, Xin

    2007-05-01

    Light scattering from tissue originates from the fluctuations in intra-cellular and extra-cellular components, so it is possible that macroscopic scattering spectroscopy could be used to quantify sub-microscopic structures. Both electron microscopy (EM) and optical phase contrast microscopy were used to study the origin of scattering from tissue. EM studies indicate that lipid-bound particle sizes appear to be distributed as a monotonic exponential function, with sub-micron structures dominating the distribution. Given assumptions about the index of refraction change, the shape of the scattering spectrum in the near infrared as measured through bulk tissue is consistent with what would be predicted by Mie theory with these particle size histograms. The relative scattering intensity of breast tissue sections (including 10 normal & 23 abnormal) was studied by phase contrast microscopy. Results show that stroma has higher scattering than epithelial tissue, and fat has the lowest values; tumor epithelium has lower scattering than normal epithelium; stroma associated with tumor has lower scattering than normal stroma. Mie theory estimation of scattering spectra was used to estimate effective particle size values, and this was applied retrospectively to normal whole breast spectra accumulated in ongoing clinical exams. The effective sizes ranged between 20 and 1400 nm, which are consistent with subcellular organelles and collagen matrix fibrils discussed previously. This estimation method was also applied to images from cancer regions, with results indicating that the effective scatterer sizes of the region of interest (ROI) are close to those of the background for both the cancer patients and the benign patients; for the effective number density, there is a large difference between the ROI and background for the cancer patients, while for the benign patients, the values of the ROI are relatively close to those of the background. Ongoing MRI-guided NIR studies indicated that the fibroglandular tissue had smaller effective scatterer size and larger effective number density than the adipose tissue. The studies in this thesis provide an interpretive approach to estimating average morphological scatter parameters of bulk tissue, through interpretation of diffuse scattering as coming from effective Mie scatterers.

  15. Simulation of Electron Cloud Density Distributions in RHIC Dipoles at Injection and Transition and Estimates for Scrubbing Times

    SciTech Connect

    He,P.; Blaskiewicz, M.; Fischer, W.

    2009-01-02

    In this report we summarize electron-cloud simulations for the RHIC dipole regions at injection and transition to estimate if scrubbing over practical time scales at injection would reduce the electron cloud density at transition to significantly lower values. The lower electron cloud density at transition will allow for an increase in the ion intensity.

  16. Daniell method for power spectral density estimation in atomic force microscopy

    NASA Astrophysics Data System (ADS)

    Labuda, Aleksander

    2016-03-01

    An alternative method for power spectral density (PSD) estimation—the Daniell method—is revisited and compared to the most prevalent method used in the field of atomic force microscopy for quantifying cantilever thermal motion—the Bartlett method. Both methods are shown to underestimate the Q factor of a simple harmonic oscillator (SHO) by a predictable, and therefore correctable, amount in the absence of spurious deterministic noise sources. However, the Bartlett method is much more prone to spectral leakage which can obscure the thermal spectrum in the presence of deterministic noise. By the significant reduction in spectral leakage, the Daniell method leads to a more accurate representation of the true PSD and enables clear identification and rejection of deterministic noise peaks. This benefit is especially valuable for the development of automated PSD fitting algorithms for robust and accurate estimation of SHO parameters from a thermal spectrum.
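    The distinction between the two estimators can be sketched in a few lines: the Daniell estimate smooths a single full-length periodogram over adjacent frequency bins, whereas the Bartlett estimate averages the periodograms of non-overlapping segments. The sketch below, assuming scipy and arbitrary choices of smoothing half-width and segment count, is illustrative only.

    ```python
    import numpy as np
    from scipy.signal import periodogram, welch

    def daniell_psd(x, fs, half_width=7):
        """Daniell estimate: raw periodogram smoothed with a moving average over
        2*half_width+1 adjacent frequency bins."""
        f, pxx = periodogram(x, fs=fs, window="boxcar")
        kernel = np.ones(2 * half_width + 1) / (2 * half_width + 1)
        return f, np.convolve(pxx, kernel, mode="same")

    def bartlett_psd(x, fs, nseg=16):
        """Bartlett estimate: average of periodograms of non-overlapping segments
        (Welch with a rectangular window and zero overlap)."""
        nperseg = len(x) // nseg
        return welch(x, fs=fs, window="boxcar", nperseg=nperseg, noverlap=0)

    # Synthetic cantilever-like thermal peak buried in white noise (hypothetical values)
    rng = np.random.default_rng(2)
    fs, n = 1.0e6, 2**20
    t = np.arange(n) / fs
    x = 1e-12 * np.sin(2 * np.pi * 75e3 * t) + rng.normal(0, 1e-12, n)
    f_d, p_d = daniell_psd(x, fs)
    f_b, p_b = bartlett_psd(x, fs)
    ```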

  17. Enhancement of Tropical Land Cover Mapping with Wavelet-Based Fusion and Unsupervised Clustering of SAR and Landsat Image Data

    NASA Technical Reports Server (NTRS)

    LeMoigne, Jacqueline; Laporte, Nadine; Netanyahuy, Nathan S.; Zukor, Dorothy (Technical Monitor)

    2001-01-01

    The characterization and the mapping of land cover/land use of forest areas, such as the Central African rainforest, is a very complex task. This complexity is mainly due to the extent of such areas and, as a consequence, to the lack of full and continuous cloud-free coverage of those large regions by one single remote sensing instrument. In order to provide improved vegetation maps of Central Africa and to develop forest monitoring techniques for applications at the local and regional scales, we propose to utilize multi-sensor remote sensing observations coupled with in-situ data. Fusion and clustering of multi-sensor data are the first steps towards the development of such a forest monitoring system. In this paper, we will describe some preliminary experiments involving the fusion of SAR and Landsat image data of the Lope Reserve in Gabon. Similarly to previous fusion studies, our fusion method is wavelet-based. The fusion provides a new image data set which contains more detailed texture features and preserves the large homogeneous regions that are observed by the Thematic Mapper sensor. The fusion step is followed by unsupervised clustering and provides a vegetation map of the area.

  18. Detection of Dendritic Spines Using Wavelet-Based Conditional Symmetric Analysis and Regularized Morphological Shared-Weight Neural Networks

    PubMed Central

    Wang, Shuihua; Chen, Mengmeng; Li, Yang; Zhang, Yudong; Han, Liangxiu; Wu, Jane; Du, Sidan

    2015-01-01

    Identification and detection of dendritic spines in neuron images are of high interest in diagnosis and treatment of neurological and psychiatric disorders (e.g., Alzheimer's disease, Parkinson's diseases, and autism). In this paper, we have proposed a novel automatic approach using wavelet-based conditional symmetric analysis and regularized morphological shared-weight neural networks (RMSNN) for dendritic spine identification involving the following steps: backbone extraction, localization of dendritic spines, and classification. First, a new algorithm based on wavelet transform and conditional symmetric analysis has been developed to extract backbone and locate the dendrite boundary. Then, the RMSNN has been proposed to classify the spines into three predefined categories (mushroom, thin, and stubby). We have compared our proposed approach against the existing methods. The experimental result demonstrates that the proposed approach can accurately locate the dendrite and accurately classify the spines into three categories with the accuracy of 99.1% for “mushroom” spines, 97.6% for “stubby” spines, and 98.6% for “thin” spines. PMID:26692046

  19. Analysis of hydrological trend for radioactivity content in bore-hole water samples using wavelet based denoising.

    PubMed

    Paul, Sabyasachi; Suman, V; Sarkar, P K; Ranade, A K; Pulhani, V; Dafauti, S; Datta, D

    2013-08-01

    A wavelet transform based denoising methodology has been applied to detect the presence of any discernible trend in (137)Cs and (90)Sr activity levels in bore-hole water samples collected four times a year over a period of eight years, from 2002 to 2009, in the vicinity of typical nuclear facilities inside the restricted access zones. The conventional non-parametric methods, viz. Mann-Kendall and Spearman's rho, along with linear regression, do not yield conclusive trend detection at 95% confidence for most of the samples when applied to the raw time series. The stationary wavelet based hard thresholding data pruning method with Haar as the analyzing wavelet was applied to remove the noise present in the same data. Results indicate that the confidence level of the established trend improves significantly after pre-processing, to more than 98%, compared to the conventional non-parametric methods applied to the direct measurements. PMID:23524202
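
    The sketch below illustrates the pre-processing step in spirit: a stationary (undecimated) wavelet transform with a Haar wavelet, hard thresholding of the detail coefficients, and inverse reconstruction. The universal threshold derived from the finest-scale details is an assumption, not necessarily the paper's exact threshold rule.

    ```python
    import numpy as np
    import pywt

    def swt_hard_denoise(series, wavelet="haar", level=2):
        n = len(series)
        pad = (-n) % (2 ** level)                        # SWT needs length divisible by 2**level
        x = np.pad(np.asarray(series, float), (0, pad), mode="edge")
        coeffs = pywt.swt(x, wavelet, level=level)       # [(cA_L, cD_L), ..., (cA_1, cD_1)]
        sigma = np.median(np.abs(coeffs[-1][1])) / 0.6745    # noise scale from finest details
        thr = sigma * np.sqrt(2 * np.log(len(x)))            # universal threshold (assumed)
        denoised = [(cA, pywt.threshold(cD, thr, mode="hard")) for cA, cD in coeffs]
        return pywt.iswt(denoised, wavelet)[:n]
    ```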

  20. Density estimates of Panamanian owl monkeys (Aotus zonalis) in three habitat types.

    PubMed

    Svensson, Magdalena S; Samudio, Rafael; Bearder, Simon K; Nekaris, K Anne-Isola

    2010-02-01

    The resolution of the ambiguity surrounding the taxonomy of Aotus means data on newly classified species are urgently needed for conservation efforts. We conducted a study on the Panamanian owl monkey (Aotus zonalis) between May and July 2008 at three localities in Chagres National Park, located east of the Panama Canal, using the line transect method to quantify abundance and distribution. Vegetation surveys were also conducted to provide a baseline quantification of the three habitat types. We observed 33 individuals within 16 groups in two out of the three sites. Population density was highest in Campo Chagres with 19.7 individuals/km(2), and intermediate densities of 14.3 individuals/km(2) were observed at Cerro Azul. At La Llana, A. zonalis was not found. The presence of A. zonalis in Chagres National Park, albeit at seemingly low abundance, is encouraging. A longer-term study will be necessary to validate the abundance estimates gained in this pilot study in order to inform conservation policy decisions. PMID:19852005

  1. Volcanic explosion clouds - Density, temperature, and particle content estimates from cloud motion

    NASA Technical Reports Server (NTRS)

    Wilson, L.; Self, S.

    1980-01-01

    Photographic records of 10 vulcanian eruption clouds produced during the 1978 eruption of Fuego Volcano in Guatemala have been analyzed to determine cloud velocity and acceleration at successive stages of expansion. Cloud motion is controlled by air drag (dominant during early, high-speed motion) and buoyancy (dominant during late motion when the cloud is convecting slowly). Cloud densities in the range 0.6 to 1.2 times that of the surrounding atmosphere were obtained by fitting equations of motion for two common cloud shapes (spheres and vertical cylinders) to the observed motions. Analysis of the heat budget of a cloud permits an estimate of cloud temperature and particle weight fraction to be made from the density. Model results suggest that clouds generally reached temperatures within 10 K of that of the surrounding air within 10 seconds of formation and that dense particle weight fractions were less than 2% by this time. The maximum sizes of dense particles supported by motion in the convecting clouds range from 140 to 1700 microns.

  2. Can we estimate plasma density in ICP driver through electrical parameters in RF circuit?

    SciTech Connect

    Bandyopadhyay, M.; Sudhir, Dass; Chakraborty, A.

    2015-04-08

    To avoid regular maintenance, invasive plasma diagnostics with probes are not included in the inductively coupled plasma (ICP) based ITER Neutral Beam (NB) source design. Even non-invasive diagnostics such as optical emission spectroscopy are not included in the present ITER NB design due to overall system design and interface issues. As a result, the negative ion beam current through the extraction system of the ITER NB negative ion source is the only measurement which indicates the plasma condition inside the ion source. However, the beam current depends not only on the plasma condition near the extraction region but also on the perveance condition of the ion extractor system and on negative ion stripping. Moreover, the inductively coupled plasma production region (RF driver region) is located at a distance (~30 cm) from the extraction region, so some uncertainty is expected if one tries to link beam current with plasma properties inside the RF driver. Plasma characterization in the source RF driver region is therefore essential to maintain the optimum condition for source operation. In this paper, a method of plasma density estimation is described, based on a density-dependent plasma load calculation.

  3. Classification of motor imagery by means of cortical current density estimation and Von Neumann entropy

    NASA Astrophysics Data System (ADS)

    Kamousi, Baharan; Nasiri Amini, Ali; He, Bin

    2007-06-01

    The goal of the present study is to employ source imaging methods such as cortical current density estimation for the classification of left- and right-hand motor imagery tasks, which may be used for brain-computer interface (BCI) applications. The scalp-recorded EEG was first preprocessed by surface Laplacian filtering, time-frequency filtering, noise normalization and independent component analysis. Then the cortical imaging technique was used to solve the EEG inverse problem. Cortical current density distributions of left and right trials were classified from each other by exploiting the concept of Von Neumann entropy. The proposed method was tested on three human subjects (180 trials each) and a maximum accuracy of 91.5% and an average accuracy of 88% were obtained. The present results confirm the hypothesis that source analysis methods may improve accuracy for classification of motor imagery tasks. These promising results enhance our ability to perform source analysis from single-trial EEG data recorded on the scalp, and may have applications to improved BCI systems.
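
    A hedged sketch of the entropy feature: the cortical current density map for one trial is arranged as a matrix, a trace-normalized, positive semi-definite "density matrix" is built from it, and the Von Neumann entropy of its eigenvalue spectrum is returned as a scalar feature for the classifier. The matrix construction and normalization here are illustrative assumptions rather than the authors' exact pipeline.

    ```python
    import numpy as np

    def von_neumann_entropy(current_density_map):
        X = np.asarray(current_density_map, dtype=float)
        rho = X @ X.T                      # symmetric, positive semi-definite
        rho /= np.trace(rho)               # trace-normalize, like a density matrix
        evals = np.linalg.eigvalsh(rho)
        evals = evals[evals > 1e-12]       # discard numerically zero eigenvalues
        return float(-np.sum(evals * np.log(evals)))
    ```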

  4. 3D density estimation in digital breast tomosynthesis: application to needle path planning for breast biopsy

    NASA Astrophysics Data System (ADS)

    Vancamberg, Laurence; Geeraert, Nausikaa; Iordache, Razvan; Palma, Giovanni; Klausz, Rémy; Muller, Serge

    2011-03-01

    Needle insertion planning for digital breast tomosynthesis (DBT) guided biopsy has the potential to improve patient comfort and intervention safety. However, a relevant planning should take into account breast tissue deformation and lesion displacement during the procedure. Deformable models, like finite elements, use the elastic characteristics of the breast to evaluate the deformation of tissue during needle insertion. This paper presents a novel approach to locally estimate the Young's modulus of the breast tissue directly from the DBT data. The method consists in computing the fibroglandular percentage in each of the acquired DBT projection images, then reconstructing the density volume. Finally, this density information is used to compute the mechanical parameters for each finite element of the deformable mesh, obtaining a heterogeneous DBT based breast model. Preliminary experiments were performed to evaluate the relevance of this method for needle path planning in DBT guided biopsy. The results show that the heterogeneous DBT based breast model improves needle insertion simulation accuracy in 71% of the cases, compared to a homogeneous model or a binary fat/fibroglandular tissue model.

  5. Density estimation in aerial images of large crowds for automatic people counting

    NASA Astrophysics Data System (ADS)

    Herrmann, Christian; Metzler, Juergen

    2013-05-01

    Counting people is a common topic in the area of visual surveillance and crowd analysis. While many image-based solutions are designed to count only a few persons at the same time, like pedestrians entering a shop or watching an advertisement, there is hardly any solution for counting large crowds of several hundred persons or more. We addressed this problem previously by designing a semi-automatic system able to count crowds of hundreds or thousands of people based on aerial images of demonstrations or similar events. This system requires major user interaction to segment the image. Our principal aim is to reduce this manual interaction. To achieve this, we propose a new and automatic system. Besides counting the people in large crowds, the system yields the positions of people, allowing a plausibility check by a human operator. In order to automatize the people counting system, we use crowd density estimation. The determination of crowd density is based on several features like edge intensity or spatial frequency. They indicate the density and discriminate between a crowd and other image regions like buildings, bushes or trees. We compare the performance of our automatic system to the previous semi-automatic system and to manual counting in images. The performance gain of the new system is measured on a test set of aerial images showing large crowds containing up to 12,000 people. By improving our previous system, we increase the benefit of an image-based solution for counting people in large crowds.

  6. Wavelet-Based Artifact Identification and Separation Technique for EEG Signals during Galvanic Vestibular Stimulation

    PubMed Central

    Adib, Mani; Cretu, Edmond

    2013-01-01

    We present a new method for removing artifacts in electroencephalography (EEG) records during Galvanic Vestibular Stimulation (GVS). The main challenge in exploiting GVS is to understand how the stimulus acts as an input to the brain. We used EEG to monitor the brain and elicit the GVS reflexes. However, the GVS current distribution throughout the scalp generates an artifact on EEG signals. We need to eliminate this artifact to be able to analyze the EEG signals during GVS. We propose a novel method to estimate the contribution of the GVS current in the EEG signals at each electrode by combining time-series regression methods with wavelet decomposition methods. We use the wavelet transform to project the recorded EEG signal into various frequency bands and then estimate the GVS current distribution in each frequency band. The proposed method was optimized using simulated signals, and its performance was compared to well-accepted artifact removal methods such as ICA-based methods and adaptive filters. The results show that the proposed method has better performance in removing GVS artifacts, compared to the others. Using the proposed method, a higher signal-to-artifact ratio of −1.625 dB was achieved, which outperformed other methods such as ICA-based methods, regression methods, and adaptive filters. PMID:23956786
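
    The simplified sketch below captures the per-band idea: the EEG channel and the GVS stimulus current are both projected onto wavelet scales, a least-squares gain is fitted within each scale, and the scaled stimulus is subtracted before reconstruction. The wavelet, decomposition depth, and plain least-squares fit are assumptions; the paper's full time-series regression per band is richer than this.

    ```python
    import numpy as np
    import pywt

    def remove_gvs_artifact(eeg, gvs, wavelet="db4", level=5):
        c_eeg = pywt.wavedec(eeg, wavelet, level=level)
        c_gvs = pywt.wavedec(gvs, wavelet, level=level)
        cleaned = []
        for e_band, g_band in zip(c_eeg, c_gvs):
            gain = np.dot(g_band, e_band) / (np.dot(g_band, g_band) + 1e-12)  # per-band LS fit
            cleaned.append(e_band - gain * g_band)
        return pywt.waverec(cleaned, wavelet)[:len(eeg)]
    ```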

  7. New Estimates on the EKB Dust Density using the Student Dust Counter

    NASA Astrophysics Data System (ADS)

    Szalay, J.; Horanyi, M.; Poppe, A. R.

    2013-12-01

    The Student Dust Counter (SDC) is an impact dust detector on board the New Horizons Mission to Pluto. SDC was designed to resolve the mass of dust grains in the range of 10^-12 < m < 10^-9 g, covering an approximate size range of 0.5-10 um in particle radius. The measurements can be directly compared to the prediction of a grain tracing trajectory model of dust originating from the Edgeworth-Kuiper Belt. SDC's results as well as data taken by the Pioneer 10 dust detector are compared to our model to derive estimates for the mass production rate and the ejecta mass distribution power law exponent. Contrary to previous studies, the assumption that all impacts are generated by grains on circular Keplerian orbits is removed, allowing for a more accurate determination of the EKB mass production rate. With these estimates, the speed and mass distribution of EKB grains entering atmospheres of outer solar system bodies can be calculated. Through December 2013, the New Horizons spacecraft reached approximately 28 AU, enabling SDC to map the dust density distribution of the solar system farther than any previous dust detector.

  8. Methods for Estimating Environmental Effects and Constraints on NexGen: High Density Case Study

    NASA Technical Reports Server (NTRS)

    Augustine, S.; Ermatinger, C.; Graham, M.; Thompson, T.

    2010-01-01

    This document provides a summary of the current methods developed by Metron Aviation for the estimate of environmental effects and constraints on the Next Generation Air Transportation System (NextGen). This body of work incorporates many of the key elements necessary to achieve such an estimate. Each section contains the background and motivation for the technical elements of the work, a description of the methods used, and possible next steps. The current methods described in this document were selected in an attempt to provide a good balance between accuracy and fairly rapid turn around times to best advance Joint Planning and Development Office (JPDO) System Modeling and Analysis Division (SMAD) objectives while also supporting the needs of the JPDO Environmental Working Group (EWG). In particular this document describes methods applied to support the High Density (HD) Case Study performed during the spring of 2008. A reference day (in 2006) is modeled to describe current system capabilities while the future demand is applied to multiple alternatives to analyze system performance. The major variables in the alternatives are operational/procedural capabilities for airport, terminal, and en route airspace along with projected improvements to airframe, engine and navigational equipment.

  9. Using kernel density estimation to understand the influence of neighbourhood destinations on BMI

    PubMed Central

    King, Tania L; Bentley, Rebecca J; Thornton, Lukar E; Kavanagh, Anne M

    2016-01-01

    Objectives Little is known about how the distribution of destinations in the local neighbourhood is related to body mass index (BMI). Kernel density estimation (KDE) is a spatial analysis technique that accounts for the location of features relative to each other. Using KDE, this study investigated whether individuals living near destinations (shops and service facilities) that are more intensely distributed rather than dispersed, have lower BMIs. Study design and setting A cross-sectional study of 2349 residents of 50 urban areas in metropolitan Melbourne, Australia. Methods Destinations were geocoded, and kernel density estimates of destination intensity were created using kernels of 400, 800 and 1200 m. Using multilevel linear regression, the association between destination intensity (classified in quintiles Q1(least)–Q5(most)) and BMI was estimated in models that adjusted for the following confounders: age, sex, country of birth, education, dominant household occupation, household type, disability/injury and area disadvantage. Separate models included a physical activity variable. Results For kernels of 800 and 1200 m, there was an inverse relationship between BMI and more intensely distributed destinations (compared to areas with least destination intensity). Effects were significant at 1200 m: Q4, β −0.86, 95% CI −1.58 to −0.13, p=0.022; Q5, β −1.03 95% CI −1.65 to −0.41, p=0.001. Inclusion of physical activity in the models attenuated effects, although effects remained marginally significant for Q5 at 1200 m: β −0.77 95% CI −1.52, −0.02, p=0.045. Conclusions This study conducted within urban Melbourne, Australia, found that participants living in areas of greater destination intensity within 1200 m of home had lower BMIs. Effects were partly explained by physical activity. The results suggest that increasing the intensity of destination distribution could reduce BMI levels by encouraging higher levels of physical activity. PMID:26883235

  10. The EM Method in a Probabilistic Wavelet-Based MRI Denoising.

    PubMed

    Martin-Fernandez, Marcos; Villullas, Sergio

    2015-01-01

    Human body heat emission and other external causes can interfere with magnetic resonance image acquisition and produce noise. In this kind of image, the noise, when no signal is present, is Rayleigh distributed and its wavelet coefficients can be approximately modeled by a Gaussian distribution. Noiseless magnetic resonance images can be modeled by a Laplacian distribution in the wavelet domain. This paper proposes a new magnetic resonance image denoising method that exploits this fact. The method performs shrinkage of wavelet coefficients based on the conditional probability of being noise or detail. The parameters involved in this filtering approach are calculated by means of the expectation maximization (EM) method, which avoids the need for an estimator of the noise variance. The efficiency of the proposed filter is studied and compared with other important filtering techniques, such as Nowak's, Donoho-Johnstone's, Awate-Whitaker's, and nonlocal means filters, in different 2D and 3D images. PMID:26089959
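
    A hedged sketch of the two-component idea for a single subband: coefficients are modeled as a mixture of zero-mean Gaussian "noise" and zero-mean Laplacian "detail", the mixture parameters are fitted with a few EM iterations, and each coefficient is attenuated by its posterior probability of being detail. This is an illustrative simplification, not the paper's exact estimator.

    ```python
    import numpy as np

    def em_shrink_subband(c, n_iter=20):
        c = np.asarray(c, dtype=float)
        w, sigma2, b = 0.5, np.var(c) / 2 + 1e-12, np.mean(np.abs(c)) + 1e-12
        for _ in range(n_iter):
            g = np.exp(-0.5 * c**2 / sigma2) / np.sqrt(2 * np.pi * sigma2)   # Gaussian (noise) pdf
            l = np.exp(-np.abs(c) / b) / (2 * b)                             # Laplacian (detail) pdf
            r = w * g / (w * g + (1 - w) * l + 1e-300)                       # P(noise | coefficient)
            w = r.mean()                                                     # M-step updates
            sigma2 = np.sum(r * c**2) / (np.sum(r) + 1e-12)
            b = np.sum((1 - r) * np.abs(c)) / (np.sum(1 - r) + 1e-12)
        return (1 - r) * c          # shrink by posterior probability of being detail
    ```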

  11. Effective wavelet-based compression method with adaptive quantization threshold and zerotree coding

    NASA Astrophysics Data System (ADS)

    Przelaskowski, Artur; Kazubek, Marian; Jamrogiewicz, Tomasz

    1997-10-01

    An efficient image compression technique, especially suited to medical applications, is presented. Dyadic wavelet decomposition using the Antonini and Villasenor filter banks is followed by adaptive space-frequency quantization and zerotree-based entropy coding of the wavelet coefficients. Threshold selection and uniform quantization are made on the basis of a spatial variance estimate built on the lowest-frequency subband data set. The threshold value for each coefficient is evaluated as a linear function of a 9th-order binary context. After quantization, zerotree construction, pruning and arithmetic coding are applied for efficient lossless data coding. The presented compression method is less complex than the most effective EZW-based techniques but achieves comparable compression efficiency. Specifically, our method has efficiency similar to SPIHT for MR image compression, slightly better for CT images, and significantly better for US image compression. Thus the compression efficiency of the presented method is competitive with the best published algorithms in the literature across diverse classes of medical images.

  12. A wavelet-based metric for visual texture discrimination with applications in evolutionary ecology.

    PubMed

    Kiltie, R A; Fan, J; Laine, A F

    1995-03-01

    Much work on natural and sexual selection is concerned with the conspicuousness of visual patterns (textures) on animal and plant surfaces. Previous attempts by evolutionary biologists to quantify apparency of such textures have involved subjective estimates of conspicuousness or statistical analyses based on transect samples. We present a method based on wavelet analysis that avoids subjectivity and that uses more of the information in image textures than transects do. Like the human visual system for texture discrimination, and probably like that of other vertebrates, this method is based on localized analysis of orientation and frequency components of the patterns composing visual textures. As examples of the metric's utility, we present analyses of crypsis for tigers, zebras, and peppered moth morphs. PMID:7696817

  13. The EM Method in a Probabilistic Wavelet-Based MRI Denoising

    PubMed Central

    2015-01-01

    Human body heat emission and other external causes can interfere with magnetic resonance image acquisition and produce noise. In this kind of image, the noise, when no signal is present, is Rayleigh distributed and its wavelet coefficients can be approximately modeled by a Gaussian distribution. Noiseless magnetic resonance images can be modeled by a Laplacian distribution in the wavelet domain. This paper proposes a new magnetic resonance image denoising method that exploits this fact. The method performs shrinkage of wavelet coefficients based on the conditional probability of being noise or detail. The parameters involved in this filtering approach are calculated by means of the expectation maximization (EM) method, which avoids the need for an estimator of the noise variance. The efficiency of the proposed filter is studied and compared with other important filtering techniques, such as Nowak's, Donoho-Johnstone's, Awate-Whitaker's, and nonlocal means filters, in different 2D and 3D images. PMID:26089959

  14. A Wavelet-based Seismogram Inversion Algorithm for the In Situ Characterization of Nonlinear Soil Behavior

    NASA Astrophysics Data System (ADS)

    Assimaki, D.; Li, W.; Kalos, A.

    2011-10-01

    We present a full waveform inversion algorithm of downhole array seismogram recordings that can be used to estimate the inelastic soil behavior in situ during earthquake ground motion. For this purpose, we first develop a new hysteretic scheme that improves upon existing nonlinear site response models by allowing adjustment of the width and length of the hysteresis loop with a relatively small number of soil parameters. The constitutive law is formulated to approximate the response of saturated cohesive materials, and does not account for volumetric changes due to shear leading to pore pressure development and potential liquefaction. We implement the soil model in the forward operator of the inversion, and evaluate the constitutive parameters that maximize the cross-correlation between site response predictions and observations on the ground surface. The objective function is defined in the wavelet domain, which allows equal weight to be assigned across all frequency bands of the non-stationary signal. We evaluate the convergence rate and robustness of the proposed scheme for noise-free and noise-contaminated data, and illustrate good performance of the inversion for signal-to-noise ratios as low as 3. We finally apply the proposed scheme to downhole array data, and show that results compare very well with published data on generic soil conditions and previous geotechnical investigation studies at the array site. By assuming a realistic hysteretic model and estimating the constitutive soil parameters, the proposed inversion accounts for the instantaneous adjustment of soil response to the level of strain and the load path during transient loading, and allows results to be used in predictions of nonlinear site effects during future events.

  15. On L p -Resolvent Estimates and the Density of Eigenvalues for Compact Riemannian Manifolds

    NASA Astrophysics Data System (ADS)

    Bourgain, Jean; Shao, Peng; Sogge, Christopher D.; Yao, Xiaohua

    2015-02-01

    We address an interesting question raised by Dos Santos Ferreira, Kenig and Salo (Forum Math, 2014) about regions of the complex plane for which there can be uniform L p resolvent estimates for the Laplace-Beltrami operator with metric g on a given compact boundaryless Riemannian manifold of dimension n. This is related to earlier work of Kenig, Ruiz and the third author (Duke Math J 55:329-347, 1987) for the Euclidean Laplacian, in which case the region is the entire complex plane minus any disc centered at the origin. Presently, we show that for the round metric on the sphere, S^n, the resolvent estimates in (Dos Santos Ferreira et al. in Forum Math, 2014), involving a much smaller region, are essentially optimal. We do this by establishing sharp bounds based on the distance from the spectral parameter to the spectrum of the Laplacian. In the other direction, we also show that the bounds in (Dos Santos Ferreira et al. in Forum Math, 2014) can be sharpened logarithmically for manifolds with nonpositive curvature, and by powers in the case of the torus, T^n, with the flat metric. The latter improves earlier bounds of Shen (Int Math Res Not 1:1-31, 2001). The work of (Dos Santos Ferreira et al. in Forum Math, 2014) and (Shen in Int Math Res Not 1:1-31, 2001) was based on Hadamard parametrices for the resolvent. Ours is based on the related Hadamard parametrices for the wave equation, and it follows ideas in (Sogge in Ann Math 126:439-447, 1987) of proving L p -multiplier estimates using small-time wave equation parametrices and the spectral projection estimates from (Sogge in J Funct Anal 77:123-138, 1988). This approach allows us to adapt arguments in Bérard (Math Z 155:249-276, 1977) and Hlawka (Monatsh Math 54:1-36, 1950) to obtain the aforementioned improvements over (Dos Santos Ferreira et al. in Forum Math, 2014) and (Shen in Int Math Res Not 1:1-31, 2001). Further improvements for the torus are obtained using recent techniques of the first author (Bourgain in Israel J Math 193(1):441-458, 2013) and his work with Guth (Bourgain and Guth in Geom Funct Anal 21:1239-1295, 2011) based on the multilinear estimates of Bennett, Carbery and Tao (Math Z 2:261-302, 2006). Our approach also allows us to give a natural necessary condition for favorable resolvent estimates that is based on a measurement of the density of the spectrum of the Laplacian, and, moreover, a necessary and sufficient condition based on natural improved spectral projection estimates for shrinking intervals, as opposed to those in (Sogge in J Funct Anal 77:123-138, 1988) for unit-length intervals. We show that the resolvent estimates are sensitive to clustering within the spectrum, which is not surprising given Sommerfeld's original conjecture (Sommerfeld in Physikal Zeitschr 11:1057-1066, 1910) about these operators.

  16. Applying a random encounter model to estimate lion density from camera traps in Serengeti National Park, Tanzania

    PubMed Central

    Cusack, Jeremy J; Swanson, Alexandra; Coulson, Tim; Packer, Craig; Carbone, Chris; Dickman, Amy J; Kosmala, Margaret; Lintott, Chris; Rowcliffe, J Marcus

    2015-01-01

    The random encounter model (REM) is a novel method for estimating animal density from camera trap data without the need for individual recognition. It has never been used to estimate the density of large carnivore species, despite these being the focus of most camera trap studies worldwide. In this context, we applied the REM to estimate the density of female lions (Panthera leo) from camera traps implemented in Serengeti National Park, Tanzania, comparing estimates to reference values derived from pride census data. More specifically, we attempted to account for bias resulting from non-random camera placement at lion resting sites under isolated trees by comparing estimates derived from night versus day photographs, between dry and wet seasons, and between habitats that differ in their amount of tree cover. Overall, we recorded 169 and 163 independent photographic events of female lions from 7,608 and 12,137 camera trap days carried out in the dry season of 2010 and the wet season of 2011, respectively. Although all REM models considered over-estimated female lion density, models that considered only night-time events resulted in estimates that were much less biased relative to those based on all photographic events. We conclude that restricting REM estimation to periods and habitats in which animal movement is more likely to be random with respect to cameras can help reduce bias in estimates of density for female Serengeti lions. We highlight that accurate REM estimates will nonetheless be dependent on reliable measures of average speed of animal movement and camera detection zone dimensions. © 2015 The Authors. Journal of Wildlife Management published by Wiley Periodicals, Inc. on behalf of The Wildlife Society. PMID:26640297

  17. Adaptive Wavelet-based Large Eddy Simulations of Compressible Turbulent Flows

    NASA Astrophysics Data System (ADS)

    Brown-Dymkoski, Eric; Vasilyev, Oleg V.

    2014-11-01

    Adaptive wavelet simulation exploits intermittency in turbulent flows by resolving locally on the most coherent structures, offering improved computational efficiency and a priori fidelity control. Adaptive LES utilizing a wavelet grid filter, heretofore developed for incompressible flows, is extended to the compressible regime with a kinetic energy equation-based approach. Nonlinear filtered terms are scaled by the SGS kinetic energy and model coefficients are locally determined through a dynamic procedure. The influence of the modeled terms relative to the resolved physics can be used as a fidelity-based feedback control for the adaptive grid. Several benchmark cases, including turbulent mixing layers, are considered for the validation of this approach across multiple Mach numbers. Of particular interest is capturing compressibility and variable density effects within turbulent flows, notably the reduced growth rate of turbulent shear layer thickness and accurate modeling of the SGS heat flux. These simulations have been performed solving the filtered compressible Navier-Stokes equations with the adaptive wavelet collocation method. This work was supported by NSF under Grant No. CBET-1236505.

  18. Interface State Density between Direct Nitridation Layer and SiC Estimated from Current Voltage Characteristics of MIS Schottky Diode

    NASA Astrophysics Data System (ADS)

    Kamimura, Kiichi; Shiozawa, Hiroaki; Yamakami, Tomohiko; Hayashibe, Rinpei

    The interface state density was estimated from the diode factor n of a SiC MIS Schottky diode. The interface state density was of the order of 10^12 cm^-2 eV^-1, the same order as the value for a sample carefully prepared by oxidation and post-oxidation annealing. The interface state density determined from n was consistent with the value calculated from the capacitance-voltage curve of a SiO2/nitride/SiC MIS diode by the Terman method. High-temperature nitridation was effective in reducing the interface state density.

  19. Wavelet-based multiscale adjoint waveform-difference tomography using body and surface waves

    NASA Astrophysics Data System (ADS)

    Yuan, Y. O.; Simons, F. J.; Bozdag, E.

    2014-12-01

    We present a multi-scale scheme for full elastic waveform-difference inversion. Using a wavelet transform proves to be a key factor in mitigating cycle-skipping effects. We start with coarse representations of the seismogram to correct a large-scale background model, and subsequently explain the residuals in the fine scales of the seismogram to map heterogeneities of great complexity. We have previously applied the multi-scale approach successfully to body waves generated in a standard model from the exploration industry: a modified two-dimensional elastic Marmousi model. With this model we explored the optimal choice of wavelet family, number of vanishing moments and decomposition depth. For this presentation we explore the sensitivity of surface waves in waveform-difference tomography. The incorporation of surface waves is rife with cycle-skipping problems compared to inversions considering body waves only. We implemented an envelope-based objective function probed via a multi-scale wavelet analysis to measure the distance between predicted and target surface-wave waveforms in a synthetic model of heterogeneous near-surface structure. Our proposed method successfully purges the local minima present in the waveform-difference misfit surface. A shallow elastic model 100 m in depth is used to test the surface-wave inversion scheme. We also analyzed the sensitivities of surface waves and body waves in full waveform inversions, as well as the effects of incorrect density information on elastic parameter inversions. Based on those numerical experiments, we ultimately formalized a flexible scheme to consider both body and surface waves in adjoint tomography. While our early examples are constructed from exploration-style settings, our procedure will be very valuable for the study of global network data.
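
    A minimal sketch of the coarse-to-fine misfit idea: observed and predicted traces are decomposed with a discrete wavelet transform and the residual is accumulated only over the coarsest scales up to a chosen depth, which can then be widened as the inversion proceeds. The wavelet family and depths are assumptions, and the envelope-based variant described above is not included.

    ```python
    import numpy as np
    import pywt

    def multiscale_misfit(observed, predicted, wavelet="db6", level=6, use_detail_levels=2):
        co = pywt.wavedec(observed, wavelet, level=level)
        cp = pywt.wavedec(predicted, wavelet, level=level)
        # co[0] is the coarsest approximation; co[1:] are details ordered coarse to fine.
        misfit = 0.0
        for o_band, p_band in zip(co[:1 + use_detail_levels], cp[:1 + use_detail_levels]):
            misfit += 0.5 * np.sum((p_band - o_band) ** 2)
        return misfit
    ```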

  20. Wavelet-based data and solution compression for efficient image reconstruction in fluorescence diffuse optical tomography.

    PubMed

    Correia, Teresa; Rudge, Timothy; Koch, Maximilian; Ntziachristos, Vasilis; Arridge, Simon

    2013-08-01

    Current fluorescence diffuse optical tomography (fDOT) systems can provide large data sets and, in addition, the unknown parameters to be estimated are so numerous that the sensitivity matrix is too large to store. Alternatively, iterative methods can be used, but they can be extremely slow at converging when dealing with large matrices. A few approaches suitable for the reconstruction of images from very large data sets have been developed. However, they either require explicit construction of the sensitivity matrix, suffer from slow computation times, or can only be applied to restricted geometries. We introduce a method for fast reconstruction in fDOT with large data and solution spaces, which preserves the resolution of the forward operator whilst compressing its representation. The method does not require construction of the full matrix, and thus allows storage and direct inversion of the explicitly constructed compressed system matrix. The method is tested using simulated and experimental data. Results show that the fDOT image reconstruction problem can be effectively compressed without significant loss of information and with the added advantage of reducing image noise. PMID:23942633

  1. Wavelet-based data and solution compression for efficient image reconstruction in fluorescence diffuse optical tomography

    NASA Astrophysics Data System (ADS)

    Correia, Teresa; Rudge, Timothy; Koch, Maximilian; Ntziachristos, Vasilis; Arridge, Simon

    2013-08-01

    Current fluorescence diffuse optical tomography (fDOT) systems can provide large data sets and, in addition, the unknown parameters to be estimated are so numerous that the sensitivity matrix is too large to store. Alternatively, iterative methods can be used, but they can be extremely slow at converging when dealing with large matrices. A few approaches suitable for the reconstruction of images from very large data sets have been developed. However, they either require explicit construction of the sensitivity matrix, suffer from slow computation times, or can only be applied to restricted geometries. We introduce a method for fast reconstruction in fDOT with large data and solution spaces, which preserves the resolution of the forward operator whilst compressing its representation. The method does not require construction of the full matrix, and thus allows storage and direct inversion of the explicitly constructed compressed system matrix. The method is tested using simulated and experimental data. Results show that the fDOT image reconstruction problem can be effectively compressed without significant loss of information and with the added advantage of reducing image noise.

  2. Fecundity estimation by oocyte packing density formulae in determinate and indeterminate spawners: Theoretical considerations and applications

    NASA Astrophysics Data System (ADS)

    Kurita, Yutaka; Kjesbu, Olav S.

    2009-02-01

    This paper explores why the 'Auto-diametric method', currently used in many laboratories to quickly estimate fish fecundity, works well on marine species with a determinate reproductive style but much less so on species with an indeterminate reproductive style. Algorithms describing links between potentially important explanatory variables to estimate fecundity were first established, and these were followed by practical observations in order to validate the method under two extreme situations: 1) straightforward fecundity estimation in a determinate, single-batch spawner, Atlantic herring (AH) Clupea harengus, and 2) challenging fecundity estimation in an indeterminate, multiple-batch spawner, Japanese flounder (JF) Paralichthys olivaceus. The Auto-diametric method relies on the successful prediction of the number of vitellogenic oocytes (VTO) per gram ovary (oocyte packing density; OPD) from the mean VTO diameter. Theoretically, OPD could be reproduced by the following four variables: OD_V (volume-based mean VTO diameter, which deviates from the arithmetic mean VTO diameter), VF_vto (volume fraction of VTO in the ovary), ρ_o (specific gravity of the ovary) and k (VTO shape, i.e. the ratio of long and short oocyte axes). VF_vto, ρ_o and k were tested in relation to growth in OD_V. The dynamic range throughout maturation was clearly highest in VF_vto. As a result, OPD was mainly influenced by OD_V and secondly by VF_vto. Log(OPD) for AH decreased as log(OD_V) increased, while log(OPD) for JF first increased during early vitellogenesis, then decreased during late vitellogenesis and spawning as log(OD_V) increased. These linear regressions thus behaved statistically differently between species, and the associated residuals fluctuated more for JF than for AH. We conclude that the OPD-OD_V relationship may be better expressed by several curves that cover different parts of the maturation cycle rather than by one curve that covers all these parts. This seems to be particularly true for indeterminate spawners. A correction factor for vitellogenic atresia was included, based on the level of atresia and the size of atretic oocytes in relation to normal oocytes, since OPD would be biased when smaller atretic oocytes are present but not accounted for. Furthermore, special care should be taken when collecting sub-samples to make them as representative as possible of the whole ovary, including in terms of the relative amount of ovarian wall and stroma. Theoretical consideration, along with original, high-quality information regarding the above-listed variables, made it possible to reproduce very accurately the observed changes in OPD, but not yet precisely enough at the individual level in indeterminate spawners.
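
    One way the four variables can combine is sketched below as a hedged, schematic relation (not the paper's exact algebra): the number of vitellogenic oocytes per gram of ovary equals the ovarian volume fraction they occupy, divided by the specific gravity of the ovary and the mean volume of a single oocyte, which scales with the cube of OD_V and depends on the shape factor k.

    ```latex
    % Schematic oocyte packing density relation (assumed form, for illustration only)
    \[
      \mathrm{OPD} \;=\; \frac{VF_{\mathrm{vto}}}{\rho_{o}\,\bar{v}},
      \qquad
      \bar{v} \;\propto\; k\,\frac{\pi}{6}\, OD_{V}^{\,3}.
    \]
    ```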

  3. Optical Density Analysis of X-Rays Utilizing Calibration Tooling to Estimate Thickness of Parts

    NASA Technical Reports Server (NTRS)

    Grau, David

    2012-01-01

    This process is designed to estimate the thickness change of a material through data analysis of a digitized version of an x-ray (or a digital x-ray) containing the material (with the thickness in question) and various tooling. Using this process, it is possible to estimate a material's thickness change in a region of the material or part that is thinner than the rest of the reference thickness. However, the same principle can be used to determine thickening of the material using a thinner region as reference, or it can be used to develop contour plots of an entire part. Proper tooling must be used. An x-ray film with an S-shaped characteristic curve or a digital x-ray device with a product resulting in like characteristics is necessary. If a film exists with linear characteristics, this type of film would be ideal; however, at the time of this reporting, no such film is known. Machined components (with known fractional thicknesses) of a like material (similar density) to that of the material to be measured are necessary. The machined components should have machined through-holes. For ease of use and better accuracy, the through-holes should be a size larger than 0.125 in. (3.2 mm). Standard components for this use are known as penetrameters or image quality indicators. Also needed is standard x-ray equipment, if film is used in place of digital equipment, or x-ray digitization equipment with proven conversion properties. Typical x-ray digitization equipment is commonly used in the medical industry, and creates digital images of x-rays in DICOM format. It is recommended to scan the image in a 16-bit format; however, 12-bit and 8-bit resolutions are acceptable. Finally, x-ray analysis software that allows accurate digital image density calculations, such as Image-J freeware, is needed. The actual procedure requires the test article to be placed on the raw x-ray, ensuring the region of interest is aligned for perpendicular x-ray exposure capture. One or multiple machined components of like material/density with known thicknesses are placed atop the part (preferably in a region of nominal and non-varying thickness) such that exposure of the combined part and machined component lay-up is captured on the x-ray. Depending on the accuracy required, the machined component's thickness must be carefully chosen. Similarly, depending on the accuracy required, the lay-up must be exposed such that the regions of the x-ray to be analyzed have a density range between 1 and 4.5. After the exposure, the image is digitized, and the digital image can then be analyzed using the image analysis software.
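
    A hypothetical worked sketch of the calibration idea: the mean digital density is measured over each known-thickness step of the machined component and over the region of interest, a monotone calibration curve is fitted, and the unknown thickness is read off by interpolation. The interpolant choice and the example numbers below are assumptions, not values from the report.

    ```python
    import numpy as np
    from scipy.interpolate import PchipInterpolator

    # Calibration steps: known thicknesses (in) vs. measured image density (example values).
    thickness_in = np.array([0.050, 0.075, 0.100, 0.125, 0.150])
    image_density = np.array([3.9, 3.3, 2.7, 2.2, 1.8])     # thicker material -> lower density

    # Density decreases with thickness, so fit thickness as a function of density
    # (inputs reversed so the interpolator sees increasing x-values).
    calib = PchipInterpolator(image_density[::-1], thickness_in[::-1])

    roi_density = 2.45                                       # measured over the region of interest
    estimated_thickness_in = float(calib(roi_density))
    ```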

  4. Multiresolution Wavelet Based Adaptive Numerical Dissipation Control for Shock-Turbulence Computations

    NASA Technical Reports Server (NTRS)

    Sjoegreen, B.; Yee, H. C.

    2001-01-01

    The recently developed essentially fourth-order or higher low-dissipative shock-capturing scheme of Yee, Sandham and Djomehri (1999) aims at minimizing numerical dissipation for high-speed compressible viscous flows containing shocks, shears and turbulence. To detect non-smooth behavior and control the amount of numerical dissipation to be added, Yee et al. employed an artificial compression method (ACM) of Harten (1978) but utilized it in an entirely different context than Harten originally intended. The ACM sensor consists of two tuning parameters and is highly physical-problem dependent. To minimize the tuning of parameters and the physical problem dependence, new sensors with improved detection properties are proposed. The new sensors are derived from appropriate non-orthogonal wavelet basis functions and can be used to completely switch off the extra numerical dissipation outside shock layers. The non-dissipative spatial base scheme of arbitrarily high order of accuracy can be maintained without compromising its stability at all parts of the domain where the solution is smooth. Two types of redundant non-orthogonal wavelet basis functions are considered. One is the B-spline wavelet (Mallat & Zhong 1992) used by Gerritsen and Olsson (1996) in an adaptive mesh refinement method to determine regions where refinement should be done. The other is a modification of the multiresolution method of Harten (1995) obtained by converting it to a new, redundant, non-orthogonal wavelet. The wavelet sensor is then obtained by computing the estimated Lipschitz exponent of a chosen physical quantity (or vector) to be sensed on a chosen wavelet basis function. Both wavelet sensors can be viewed as dual-purpose adaptive methods leading to dynamic numerical dissipation control and improved grid adaptation indicators. Consequently, they are useful not only for shock-turbulence computations but also for computational aeroacoustics and numerical combustion. In addition, these sensors are scheme independent and can be stand-alone options for numerical algorithms other than the Yee et al. scheme.

  5. Markedly divergent estimates of Amazon forest carbon density from ground plots and satellites

    PubMed Central

    Mitchard, Edward T A; Feldpausch, Ted R; Brienen, Roel J W; Lopez-Gonzalez, Gabriela; Monteagudo, Abel; Baker, Timothy R; Lewis, Simon L; Lloyd, Jon; Quesada, Carlos A; Gloor, Manuel; ter Steege, Hans; Meir, Patrick; Alvarez, Esteban; Araujo-Murakami, Alejandro; Aragão, Luiz E O C; Arroyo, Luzmila; Aymard, Gerardo; Banki, Olaf; Bonal, Damien; Brown, Sandra; Brown, Foster I; Cerón, Carlos E; Chama Moscoso, Victor; Chave, Jerome; Comiskey, James A; Cornejo, Fernando; Corrales Medina, Massiel; Da Costa, Lola; Costa, Flavia R C; Di Fiore, Anthony; Domingues, Tomas F; Erwin, Terry L; Frederickson, Todd; Higuchi, Niro; Honorio Coronado, Euridice N; Killeen, Tim J; Laurance, William F; Levis, Carolina; Magnusson, William E; Marimon, Beatriz S; Marimon Junior, Ben Hur; Mendoza Polo, Irina; Mishra, Piyush; Nascimento, Marcelo T; Neill, David; Núñez Vargas, Mario P; Palacios, Walter A; Parada, Alexander; Pardo Molina, Guido; Peña-Claros, Marielos; Pitman, Nigel; Peres, Carlos A; Poorter, Lourens; Prieto, Adriana; Ramirez-Angulo, Hirma; Restrepo Correa, Zorayda; Roopsind, Anand; Roucoux, Katherine H; Rudas, Agustin; Salomão, Rafael P; Schietti, Juliana; Silveira, Marcos; de Souza, Priscila F; Steininger, Marc K; Stropp, Juliana; Terborgh, John; Thomas, Raquel; Toledo, Marisol; Torres-Lezama, Armando; van Andel, Tinde R; van der Heijden, Geertje M F; Vieira, Ima C G; Vieira, Simone; Vilanova-Torre, Emilio; Vos, Vincent A; Wang, Ophelia; Zartman, Charles E; Malhi, Yadvinder; Phillips, Oliver L

    2014-01-01

    Aim The accurate mapping of forest carbon stocks is essential for understanding the global carbon cycle, for assessing emissions from deforestation, and for rational land-use planning. Remote sensing (RS) is currently the key tool for this purpose, but RS does not estimate vegetation biomass directly, and thus may miss significant spatial variations in forest structure. We test the stated accuracy of pantropical carbon maps using a large independent field dataset. Location Tropical forests of the Amazon basin. The permanent archive of the field plot data can be accessed at: http://dx.doi.org/10.5521/FORESTPLOTS.NET/2014_1 Methods Two recent pantropical RS maps of vegetation carbon are compared to a unique ground-plot dataset, involving tree measurements in 413 large inventory plots located in nine countries. The RS maps were compared directly to field plots, and kriging of the field data was used to allow area-based comparisons. Results The two RS carbon maps fail to capture the main gradient in Amazon forest carbon detected using 413 ground plots, from the densely wooded tall forests of the north-east, to the light-wooded, shorter forests of the south-west. The differences between plots and RS maps far exceed the uncertainties given in these studies, with whole regions over- or under-estimated by > 25%, whereas regional uncertainties for the maps were reported to be < 5%. Main conclusions Pantropical biomass maps are widely used by governments and by projects aiming to reduce deforestation using carbon offsets, but may have significant regional biases. Carbon-mapping techniques must be revised to account for the known ecological variation in tree wood density and allometry to create maps suitable for carbon accounting. The use of single relationships between tree canopy height and above-ground biomass inevitably yields large, spatially correlated errors. This presents a significant challenge to both the forest conservation and remote sensing communities, because neither wood density nor species assemblages can be reliably mapped from space. PMID:26430387

  6. Methods for Estimating Population Density in Data-Limited Areas: Evaluating Regression and Tree-Based Models in Peru

    PubMed Central

    Anderson, Weston; Guikema, Seth; Zaitchik, Ben; Pan, William

    2014-01-01

    Obtaining accurate small area estimates of population is essential for policy and health planning but is often difficult in countries with limited data. In lieu of available population data, small area estimate models draw information from previous time periods or from similar areas. This study focuses on model-based methods for estimating population when no direct samples are available in the area of interest. To explore the efficacy of tree-based models for estimating population density, we compare six different model structures including Random Forest and Bayesian Additive Regression Trees. Results demonstrate that without information from prior time periods, non-parametric tree-based models produced more accurate predictions than did conventional regression methods. Improving estimates of population density in non-sampled areas is important for regions with incomplete census data and has implications for economic, health and development policies. PMID:24992657
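
    A minimal sketch (with hypothetical covariate names and toy numbers) of the tree-based approach: a Random Forest regressor is fitted on area-level covariates and used to predict population density for areas without direct samples. The feature names, log transform, and hyperparameters are illustrative assumptions, not the study's specification.

    ```python
    import numpy as np
    import pandas as pd
    from sklearn.ensemble import RandomForestRegressor

    train = pd.DataFrame({
        "elevation_m": [120, 800, 2500, 3400],
        "dist_to_road_km": [0.5, 3.0, 12.0, 25.0],
        "nightlights": [40.0, 12.0, 1.5, 0.2],
        "pop_density": [950.0, 180.0, 12.0, 3.0],            # people per km^2 (toy values)
    })

    X = train[["elevation_m", "dist_to_road_km", "nightlights"]]
    y = np.log1p(train["pop_density"])                       # tame the heavy right tail

    model = RandomForestRegressor(n_estimators=500, random_state=0)
    model.fit(X, y)

    new_areas = pd.DataFrame({"elevation_m": [600], "dist_to_road_km": [8.0], "nightlights": [5.0]})
    predicted_density = np.expm1(model.predict(new_areas))   # back to people per km^2
    ```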

  7. Non-parametric kernel density estimation of species sensitivity distributions in developing water quality criteria of metals.

    PubMed

    Wang, Ying; Wu, Fengchang; Giesy, John P; Feng, Chenglian; Liu, Yuedan; Qin, Ning; Zhao, Yujie

    2015-09-01

    Due to use of different parametric models for establishing species sensitivity distributions (SSDs), comparison of water quality criteria (WQC) for metals of the same group or period in the periodic table is uncertain and results can be biased. To address this inadequacy, a new probabilistic model, based on non-parametric kernel density estimation was developed and optimal bandwidths and testing methods are proposed. Zinc (Zn), cadmium (Cd), and mercury (Hg) of group IIB of the periodic table are widespread in aquatic environments, mostly at small concentrations, but can exert detrimental effects on aquatic life and human health. With these metals as target compounds, the non-parametric kernel density estimation method and several conventional parametric density estimation methods were used to derive acute WQC of metals for protection of aquatic species in China that were compared and contrasted with WQC for other jurisdictions. HC5 values for protection of different types of species were derived for three metals by use of non-parametric kernel density estimation. The newly developed probabilistic model was superior to conventional parametric density estimations for constructing SSDs and for deriving WQC for these metals. HC5 values for the three metals were inversely proportional to atomic number, which means that the heavier atoms were more potent toxicants. The proposed method provides a novel alternative approach for developing SSDs that could have wide application prospects in deriving WQC and use in assessment of risks to ecosystems. PMID:25953609
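
    The hedged sketch below shows the non-parametric SSD idea in miniature: species toxicity values (log10-transformed) are smoothed with a Gaussian kernel density estimate and the HC5 is read off as the 5th percentile of the fitted distribution. The default bandwidth rule and the toy data are assumptions; the paper derives its own optimal bandwidth and testing methods.

    ```python
    import numpy as np
    from scipy.stats import gaussian_kde

    log_lc50 = np.log10([12.0, 35.0, 60.0, 110.0, 240.0, 300.0, 520.0, 900.0])  # toy values, ug/L

    kde = gaussian_kde(log_lc50)            # Scott's rule bandwidth by default (assumed)

    # Build the SSD (cumulative distribution) on a fine grid and invert it at 5%.
    grid = np.linspace(log_lc50.min() - 1, log_lc50.max() + 1, 2000)
    cdf = np.cumsum(kde(grid))
    cdf /= cdf[-1]
    hc5 = 10 ** np.interp(0.05, cdf, grid)  # back-transform to concentration units
    ```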

  8. The use of photographic rates to estimate densities of tigers and other cryptic mammals: a comment on misleading conclusions

    USGS Publications Warehouse

    Jennelle, C.S.; Runge, M.C.; MacKenzie, D.I.

    2002-01-01

    The search for easy-to-use indices that substitute for direct estimation of animal density is a common theme in wildlife and conservation science, but one fraught with well-known perils (Nichols & Conroy, 1996; Yoccoz, Nichols & Boulinier, 2001; Pollock et al., 2002). To establish the utility of an index as a substitute for an estimate of density, one must: (1) demonstrate a functional relationship between the index and density that is invariant over the desired scope of inference; (2) calibrate the functional relationship by obtaining independent measures of the index and the animal density; (3) evaluate the precision of the calibration (Diefenbach et al., 1994). Carbone et al. (2001) argue that the number of camera-days per photograph is a useful index of density for large, cryptic, forest-dwelling animals, and proceed to calibrate this index for tigers (Panthera tigris). We agree that a properly calibrated index may be useful for rapid assessments in conservation planning. However, Carbone et al. (2001), who desire to use their index as a substitute for density, do not adequately address the three elements noted above. Thus, we are concerned that others may view their methods as justification for not attempting directly to estimate animal densities, without due regard for the shortcomings of their approach.

  9. Multiscale Systematic Error Correction via Wavelet-Based Band Splitting and Bayesian Error Modeling in Kepler Light Curves

    NASA Astrophysics Data System (ADS)

    Stumpe, Martin C.; Smith, J. C.; Van Cleve, J.; Jenkins, J. M.; Barclay, T. S.; Fanelli, M. N.; Girouard, F.; Kolodziejczak, J.; McCauliff, S.; Morris, R. L.; Twicken, J. D.

    2012-05-01

    Kepler photometric data contain significant systematic and stochastic errors as they come from the Kepler spacecraft. The main causes of the systematic errors are changes in the photometer focus due to thermal changes in the instrument, and also residual spacecraft pointing errors. It is the main purpose of the Presearch Data Conditioning (PDC) module of the Kepler science processing pipeline to remove these systematic errors from the light curves. While PDC has recently seen a dramatic performance improvement by means of a Bayesian approach to systematic error correction and improved discontinuity correction, there is still room for improvement. One problem of the current (Kepler 8.1) implementation of PDC is that injection of high-frequency noise can be observed in some light curves. Although this high-frequency noise does not negatively impact the general cotrending, an increased noise level can make detection of planet transits or other astrophysical signals more difficult. The origin of this noise injection is that high-frequency components of light curves sometimes get included in detrending basis vectors characterizing long-term trends. Similarly, small-scale features like edges can sometimes get included in basis vectors which otherwise describe low-frequency trends. As a side effect of removing the trends, detrending with these basis vectors can then also mistakenly introduce these small-scale features into the light curves. A solution to this problem is to perform a separation of scales, such that small-scale features and large-scale features are described by different basis vectors. We present our new multiscale approach that employs wavelet-based band splitting to decompose small-scale from large-scale features in the light curves. The PDC Bayesian detrending can then be performed on each band individually to correct small- and large-scale systematics independently. Funding for the Kepler Mission is provided by the NASA Science Mission Directorate.
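
    A minimal sketch of the scale-separation idea: a light curve is split into a large-scale band (fine detail coefficients zeroed) and a small-scale band (the remainder) using a stationary wavelet transform, so detrending can then be applied to each band separately. The wavelet, depth, and split level are assumptions; PDC's actual band definitions and Bayesian cotrending are not shown.

    ```python
    import numpy as np
    import pywt

    def split_scales(flux, wavelet="db4", level=6, keep_coarse=3):
        n = len(flux)
        pad = (-n) % (2 ** level)                        # SWT needs length divisible by 2**level
        x = np.pad(np.asarray(flux, float), (0, pad), mode="edge")
        coeffs = pywt.swt(x, wavelet, level=level)       # ordered coarsest scale first
        trend_coeffs = []
        for i, (cA, cD) in enumerate(coeffs):
            is_fine = i >= keep_coarse                   # later entries hold finer-scale details
            trend_coeffs.append((cA, np.zeros_like(cD) if is_fine else cD))
        large_scale = pywt.iswt(trend_coeffs, wavelet)[:n]
        small_scale = np.asarray(flux, float) - large_scale
        return large_scale, small_scale
    ```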

  10. Estimated uncertainty of calculated liquefied natural gas density from a comparison of NBS and Gaz de France densimeter test facilities

    SciTech Connect

    Siegwarth, J.D.; LaBrecque, J.F.; Roncier, M.; Philippe, R.; Saint-Just, J.

    1982-12-16

    Liquefied natural gas (LNG) densities can be measured directly but are usually determined indirectly in custody transfer measurement by using a density correlation based on temperature and composition measurements. An LNG densimeter test facility at the National Bureau of Standards uses an absolute densimeter based on the Archimedes principle, while a test facility at Gaz de France uses a correlation method based on measurement of composition and density. A comparison between these two test facilities using a portable version of the absolute densimeter provides an experimental estimate of the uncertainty of the indirect method of density measurement for the first time, on a large (32 L) sample. The two test facilities agree for pure methane to within about 0.02%. For the LNG-like mixtures consisting of methane, ethane, propane, and nitrogen with the methane concentrations always higher than 86%, the calculated density is within 0.25% of the directly measured density 95% of the time.

  11. Measuring and Modeling Fault Density for Plume-Fault Encounter Probability Estimation

    SciTech Connect

    Jordan, P.D.; Oldenburg, C.M.; Nicot, J.-P.

    2011-05-15

    Emission of carbon dioxide from fossil-fueled power generation stations contributes to global climate change. Storage of this carbon dioxide within the pores of geologic strata (geologic carbon storage) is one approach to mitigating the climate change that would otherwise occur. The large storage volume needed for this mitigation requires injection into brine-filled pore space in reservoir strata overlain by cap rocks. One of the main concerns of storage in such rocks is leakage via faults. In the early stages of site selection, site-specific fault coverages are often not available. This necessitates a method for using available fault data to develop an estimate of the likelihood of injected carbon dioxide encountering and migrating up a fault, primarily due to buoyancy. Fault population statistics provide one of the main inputs to calculate the encounter probability. Previous fault population statistics work is shown to be applicable to areal fault density statistics. This result is applied to a case study in the southern portion of the San Joaquin Basin, with the result that a carbon dioxide plume from a previously planned injection had a 3% chance of encountering a fully seal-offsetting fault.

  12. Exploration of diffusion kernel density estimation in agricultural drought risk analysis: a case study in Shandong, China

    NASA Astrophysics Data System (ADS)

    Chen, W.; Shao, Z.; Tiong, L. K.

    2015-11-01

    Drought has caused the most widespread damage in China, accounting for over 50% of the total affected area nationwide in recent decades. In this paper, a Standardized Precipitation Index-based (SPI-based) drought risk study is conducted using historical rainfall data from 19 weather stations in Shandong province, China. A kernel density based method is adopted to carry out the risk analysis. A comparison between bivariate Gaussian kernel density estimation (GKDE) and diffusion kernel density estimation (DKDE) is carried out to analyze the effect of drought intensity and drought duration. The results show that DKDE is relatively more accurate and free of boundary leakage. Combined with the GIS technique, the drought risk is mapped, revealing the spatial and temporal variation of agricultural droughts for corn in Shandong. The estimation provides a different way to study the occurrence frequency and severity of drought risk from multiple perspectives.
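
    For illustration, the sketch below fits a bivariate Gaussian KDE to synthetic (duration, intensity) pairs with scipy and shows the boundary leakage that motivates the diffusion estimator: a plain Gaussian kernel assigns positive density to physically impossible values such as negative durations. This is not the study's implementation.

```python
# Sketch of bivariate Gaussian KDE over (drought duration, drought intensity)
# pairs using scipy; data are synthetic. A plain Gaussian KDE can place mass at
# impossible values (e.g., negative durations), the boundary leakage that the
# diffusion estimator is designed to avoid.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
duration = rng.gamma(shape=2.0, scale=2.0, size=500)     # months, >= 0
intensity = rng.gamma(shape=1.5, scale=1.0, size=500)    # |SPI| severity, >= 0

kde = gaussian_kde(np.vstack([duration, intensity]))
# Density evaluated at a physically impossible point is still positive:
print("density at duration = -1 month:", kde([[-1.0], [1.0]])[0])
```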

  13. The First Estimates of Marbled Cat Pardofelis marmorata Population Density from Bornean Primary and Selectively Logged Forest

    PubMed Central

    Hearn, Andrew J.; Ross, Joanna; Bernard, Henry; Bakar, Soffian Abu; Hunter, Luke T. B.; Macdonald, David W.

    2016-01-01

    The marbled cat Pardofelis marmorata is a poorly known wild cat that has a broad distribution across much of the Indomalayan ecorealm. This felid is thought to exist at low population densities throughout its range, yet no estimates of its abundance exist, hampering assessment of its conservation status. To investigate the distribution and abundance of marbled cats we conducted intensive, felid-focused camera trap surveys of eight forest areas and two oil palm plantations in Sabah, Malaysian Borneo. Study sites were broadly representative of the range of habitat types and the gradient of anthropogenic disturbance and fragmentation present in contemporary Sabah. We recorded marbled cats from all forest study areas apart from a small, relatively isolated forest patch, although photographic detection frequency varied greatly between areas. No marbled cats were recorded within the plantations, but a single individual was recorded walking along the forest/plantation boundary. We collected sufficient numbers of marbled cat photographic captures at three study areas to permit density estimation based on spatially explicit capture-recapture analyses. Estimates of population density from the primary, lowland Danum Valley Conservation Area and primary upland, Tawau Hills Park, were 19.57 (SD: 8.36) and 7.10 (SD: 1.90) individuals per 100 km2, respectively, and the selectively logged, lowland Tabin Wildlife Reserve yielded an estimated density of 10.45 (SD: 3.38) individuals per 100 km2. The low detection frequencies recorded in our other survey sites and from published studies elsewhere in its range, and the absence of previous density estimates for this felid suggest that our density estimates may be from the higher end of their abundance spectrum. We provide recommendations for future marbled cat survey approaches. PMID:27007219

  14. The First Estimates of Marbled Cat Pardofelis marmorata Population Density from Bornean Primary and Selectively Logged Forest.

    PubMed

    Hearn, Andrew J; Ross, Joanna; Bernard, Henry; Bakar, Soffian Abu; Hunter, Luke T B; Macdonald, David W

    2016-01-01

    The marbled cat Pardofelis marmorata is a poorly known wild cat that has a broad distribution across much of the Indomalayan ecorealm. This felid is thought to exist at low population densities throughout its range, yet no estimates of its abundance exist, hampering assessment of its conservation status. To investigate the distribution and abundance of marbled cats we conducted intensive, felid-focused camera trap surveys of eight forest areas and two oil palm plantations in Sabah, Malaysian Borneo. Study sites were broadly representative of the range of habitat types and the gradient of anthropogenic disturbance and fragmentation present in contemporary Sabah. We recorded marbled cats from all forest study areas apart from a small, relatively isolated forest patch, although photographic detection frequency varied greatly between areas. No marbled cats were recorded within the plantations, but a single individual was recorded walking along the forest/plantation boundary. We collected sufficient numbers of marbled cat photographic captures at three study areas to permit density estimation based on spatially explicit capture-recapture analyses. Estimates of population density from the primary, lowland Danum Valley Conservation Area and primary upland, Tawau Hills Park, were 19.57 (SD: 8.36) and 7.10 (SD: 1.90) individuals per 100 km2, respectively, and the selectively logged, lowland Tabin Wildlife Reserve yielded an estimated density of 10.45 (SD: 3.38) individuals per 100 km2. The low detection frequencies recorded in our other survey sites and from published studies elsewhere in its range, and the absence of previous density estimates for this felid suggest that our density estimates may be from the higher end of their abundance spectrum. We provide recommendations for future marbled cat survey approaches. PMID:27007219

  15. Estimation of tiger densities in the tropical dry forests of Panna, Central India, using photographic capture-recapture sampling

    USGS Publications Warehouse

    Karanth, K.U.; Chundawat, R.S.; Nichols, J.D.; Kumar, N.S.

    2004-01-01

    Tropical dry-deciduous forests comprise more than 45% of the tiger (Panthera tigris) habitat in India. However, in the absence of rigorously derived estimates of ecological densities of tigers in dry forests, critical baseline data for managing tiger populations are lacking. In this study tiger densities were estimated using photographic capture-recapture sampling in the dry forests of Panna Tiger Reserve in Central India. Over a 45-day survey period, 60 camera trap sites were sampled in a well-protected part of the 542-km2 reserve during 2002. A total sampling effort of 914 camera-trap-days yielded photo-captures of 11 individual tigers over 15 sampling occasions that effectively covered a 418-km2 area. The closed capture-recapture model Mh, which incorporates individual heterogeneity in capture probabilities, fitted these photographic capture history data well. The estimated capture probability/sample, 0.04, resulted in an estimated tiger population size and standard error of 29 (9.65), and a density of 6.94 (3.23) tigers/100 km2. The estimated tiger density matched predictions based on prey abundance. Our results suggest that, if managed appropriately, the available dry forest habitat in India has the potential to support a population size of about 9000 wild tigers.
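
    The final density step can be sketched as below: D = N-hat / A with a delta-method standard error that treats the effectively sampled area as fixed. Note that the standard error published above (3.23) also reflects uncertainty in the effectively sampled area, so this simplified calculation reproduces the density but not the full SE.

```python
# Sketch of the final density calculation D = N_hat / A with a delta-method
# standard error that treats the effectively sampled area as fixed. The published
# SE additionally propagates uncertainty in the effective area, so it is larger.
def density_from_abundance(n_hat, se_n_hat, area_km2):
    d = n_hat / area_km2              # tigers per km^2
    se_d = se_n_hat / area_km2
    return d * 100.0, se_d * 100.0    # per 100 km^2

d, se = density_from_abundance(n_hat=29, se_n_hat=9.65, area_km2=418)
print(f"{d:.2f} +/- {se:.2f} tigers per 100 km^2")
```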

  16. Estimation of ocelot density in the pantanal using capture-recapture analysis of camera-trapping data

    USGS Publications Warehouse

    Trolle, M.; Kery, M.

    2003-01-01

    Neotropical felids such as the ocelot (Leopardus pardalis) are secretive, and it is difficult to estimate their populations using conventional methods such as radiotelemetry or sign surveys. We show that recognition of individual ocelots from camera-trapping photographs is possible, and we use camera-trapping results combined with closed population capture-recapture models to estimate density of ocelots in the Brazilian Pantanal. We estimated the area from which animals were camera trapped at 17.71 km2. A model with constant capture probability yielded an estimate of 10 independent ocelots in our study area, which translates to a density of 2.82 independent individuals for every 5 km2 (SE 1.00).

  17. Dynamics of photosynthetic photon flux density (PPFD) and estimates in coastal northern California

    NASA Astrophysics Data System (ADS)

    Ge, Shaokui; Smith, Richard G.; Jacovides, Constantinos P.; Kramer, Marc G.; Carruthers, Raymond I.

    2011-08-01

    Plants require solar radiation for photosynthesis and their growth is directly related to the amount received, assuming that other environmental parameters are not limiting. Therefore, precise estimation of photosynthetically active radiation (PAR) is necessary to enhance overall accuracies of plant growth models. This study aimed to explore the PAR radiant flux in the San Francisco Bay Area of northern California. During the growing season (March through August) of the two years 2007-2008, the on-site magnitudes of photosynthetic photon flux densities (PPFD) were investigated and then processed at both the hourly and daily time scales. Combined with global solar radiation (RS) and simulated extraterrestrial solar radiation, five PAR-related values were developed, i.e., flux density-based PAR (PPFD), energy-based PAR (PARE), from-flux-to-energy conversion efficiency (fFEC), the fraction of PAR energy in the global solar radiation (fE), and a newly developed indicator, the lost PARE percentage (LPR), describing the loss as solar radiation penetrates from the extraterrestrial system to the ground. These PAR-related values showed significant diurnal variation, with high values occurring at midday and low values in the morning and afternoon hours. During the entire experimental season, the overall mean hourly value of fFEC was found to be 2.17 μmol J-1, while the respective fE value was 0.49. The monthly averages of hourly fFEC and fE at solar noon ranged from 2.15 μmol J-1 in March to 2.39 μmol J-1 in August and from 0.47 in March to 0.52 in July, respectively. However, the monthly average daily values were relatively constant and exhibited only a weak seasonal variation, ranging from 2.02 mol MJ-1 and 0.45 (March) to 2.19 mol MJ-1 and 0.48 (June). The mean daily values of fFEC and fE at solar noon were 2.16 mol MJ-1 and 0.47 across the entire growing season, respectively. Both PPFD and the newly introduced LPR showed strong diurnal patterns, but with opposite trends: PPFD was high around noon, resulting in low values of LPR during the same period. Both were found to be highly correlated with global solar radiation RS, solar elevation angle h, and the clearness index Kt. Using best subset selection of variables, two parametric models were developed for estimating PPFD and LPR, which can easily be applied at radiometric sites that record only global solar radiation measurements. These two models involve only the most commonly measured global solar radiation (RS) and two large-scale geometric parameters, i.e., extraterrestrial solar radiation and solar elevation. The models were therefore insensitive to local weather conditions such as temperature. In particular, with two test data sets collected in the USA and Greece, it was verified that the models could be extended across different geographical areas, where they performed well. Therefore, these two hourly based models can be used to provide precise PAR-related values, such as those required for developing precise vegetation growth models.
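
    A rough sketch of the kind of conversion involved is shown below, using the mean hourly coefficients reported above (fFEC of about 2.17 μmol J-1 and fE of about 0.49) as constants; the paper's actual parametric models additionally use extraterrestrial radiation and solar elevation, so this is a simplification.

```python
# Simplified sketch: estimate PPFD from measured global solar radiation using a
# constant flux-to-energy conversion efficiency of the kind reported in the
# abstract (mean hourly fFEC ~ 2.17 umol J^-1 and fE ~ 0.49). The paper's actual
# parametric models also include extraterrestrial radiation and solar elevation.
def estimate_ppfd(global_radiation_w_m2, f_fec_umol_per_j=2.17, f_e=0.49):
    par_energy = f_e * global_radiation_w_m2          # W m^-2 in the PAR band
    ppfd = f_fec_umol_per_j * global_radiation_w_m2   # umol m^-2 s^-1
    return par_energy, ppfd

print(estimate_ppfd(800.0))  # e.g., clear-sky midday RS of 800 W m^-2
```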

  18. Estimation of tool pose based on force-density correlation during robotic drilling.

    PubMed

    Williamson, Tom M; Bell, Brett J; Gerber, Nicolas; Salas, Lilibeth; Zysset, Philippe; Caversaccio, Marco; Weber, Stefan

    2013-04-01

    The application of image-guided systems with or without support by surgical robots relies on the accuracy of the navigation process, including patient-to-image registration. The surgeon must carry out the procedure based on the information provided by the navigation system, usually without being able to verify its correctness beyond visual inspection. Misleading surrogate parameters such as the fiducial registration error are often used to describe the success of the registration process, while a lack of methods describing the effects of navigation errors, such as those caused by tracking or calibration, may prevent the application of image guidance in certain accuracy-critical interventions. During minimally invasive mastoidectomy for cochlear implantation, a direct tunnel is drilled from the outside of the mastoid to a target on the cochlea based on registration using landmarks solely on the surface of the skull. Using this methodology, it is impossible to detect whether the drill is advancing in the correct direction and whether injury to the facial nerve will be avoided. To overcome this problem, a tool localization method based on drilling process information is proposed. The algorithm estimates the pose of a robot-guided surgical tool during a drilling task based on the correlation of the observed axial drilling force and the heterogeneous bone density in the mastoid extracted from 3-D image data. We present here one possible implementation of this method tested on ten tunnels drilled into three human cadaver specimens, where an average tool localization accuracy of 0.29 mm was observed. PMID:23269744
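
    A conceptual sketch of the force-density correlation idea is given below: candidate tool poses are scored by the correlation between the observed axial force profile and the bone density profile sampled along each candidate trajectory. The function names and synthetic data are illustrative only and do not reproduce the published algorithm.

```python
# Conceptual sketch (not the published algorithm): score candidate drill poses by
# the correlation between the observed axial force profile and the bone density
# profile sampled along each candidate trajectory, then pick the best-matching pose.
import numpy as np

def pose_score(force_profile, density_profile):
    """Pearson-style correlation between drilling force and image-derived density."""
    f = (force_profile - force_profile.mean()) / force_profile.std()
    d = (density_profile - density_profile.mean()) / density_profile.std()
    return float(np.mean(f * d))

def estimate_pose(force_profile, candidate_density_profiles):
    scores = [pose_score(force_profile, d) for d in candidate_density_profiles]
    return int(np.argmax(scores)), scores

# In practice the candidate density profiles would come from resampling the CT
# volume along perturbed versions of the planned trajectory; here they are synthetic.
rng = np.random.default_rng(1)
true_density = rng.random(200)
force = true_density + 0.1 * rng.standard_normal(200)   # observed force mimics density
candidates = [rng.random(200), true_density, rng.random(200)]
best, scores = estimate_pose(force, candidates)
print("best candidate:", best, "scores:", np.round(scores, 2))
```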

  19. Estimating stand structure using discrete-return lidar: an example from low density, fire prone ponderosa pine forests

    USGS Publications Warehouse

    Hall, S. A.; Burke, I.C.; Box, D. O.; Kaufmann, M. R.; Stoker, Jason M.

    2005-01-01

    The ponderosa pine forests of the Colorado Front Range, USA, have historically been subjected to wildfires. Recent large burns have increased public interest in fire behavior and effects, and scientific interest in the carbon consequences of wildfires. Remote sensing techniques can provide spatially explicit estimates of stand structural characteristics. Some of these characteristics can be used as inputs to fire behavior models, increasing our understanding of the effect of fuels on fire behavior. Others provide estimates of carbon stocks, allowing us to quantify the carbon consequences of fire. Our objective was to use discrete-return lidar to estimate such variables, including stand height, total aboveground biomass, foliage biomass, basal area, tree density, canopy base height and canopy bulk density. We developed 39 metrics from the lidar data, and used them in limited combinations in regression models, which we fit to field estimates of the stand structural variables. We used an information–theoretic approach to select the best model for each variable, and to select the subset of lidar metrics with most predictive potential. Observed versus predicted values of stand structure variables were highly correlated, with r2 ranging from 57% to 87%. The most parsimonious linear models for the biomass structure variables, based on a restricted dataset, explained between 35% and 58% of the observed variability. Our results provide us with useful estimates of stand height, total aboveground biomass, foliage biomass and basal area. There is promise for using this sensor to estimate tree density, canopy base height and canopy bulk density, though more research is needed to generate robust relationships. We selected 14 lidar metrics that showed the most potential as predictors of stand structure. We suggest that the focus of future lidar studies should broaden to include low density forests, particularly systems where the vertical structure of the canopy is important, such as fire prone forests.
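
    The information-theoretic selection step can be sketched as below: ordinary least squares models are fitted on limited combinations of candidate lidar metrics and ranked by AIC. The metric names and data are synthetic stand-ins, not the 39 metrics used in the study.

```python
# Sketch of the information-theoretic selection step: fit ordinary least squares
# models on limited combinations of lidar metrics and rank them by AIC.
# Metric names and data here are synthetic, not the study's 39 metrics.
import itertools
import numpy as np

def fit_aic(X, y):
    """OLS fit; returns AIC = n*ln(RSS/n) + 2k (k = number of parameters)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    rss = float(np.sum((y - X1 @ beta) ** 2))
    n, k = len(y), X1.shape[1]
    return n * np.log(rss / n) + 2 * k

rng = np.random.default_rng(0)
metrics = {"h_mean": rng.random(60), "h_p90": rng.random(60), "cover": rng.random(60)}
y = 2.0 * metrics["h_p90"] + 0.5 * metrics["cover"] + 0.1 * rng.standard_normal(60)

results = []
for r in (1, 2):
    for combo in itertools.combinations(metrics, r):
        X = np.column_stack([metrics[m] for m in combo])
        results.append((fit_aic(X, y), combo))
print(min(results))  # lowest-AIC model and its metric combination
```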

  20. A wavelet-based multi-scale spatiotemporal filtering approach for monitoring Earth surface deformation using space geodesy

    NASA Astrophysics Data System (ADS)

    Liu, Z.; Lundgren, P.; Rosen, P. A.; Agram, P.

    2013-12-01

    Accurate imaging of deformation processes in plate boundary zones at various space-time scales is crucial to advancing our knowledge of plate boundary tectonics and volcano dynamics. Space-borne geodetic measurements such as interferometric synthetic aperture radar (InSAR) and continuous GPS (CGPS) provide complementary measurements of surface deformation. InSAR provides line-of-sight measurements that are spatially dense but temporally coarse, while point-based GPS measurements provide 3-D displacement components at sub-daily to daily temporal intervals but are limited when trying to resolve fine-scale deformation processes, depending on station distribution and spacing. The large volume of SAR data from existing satellite platforms and future SAR missions and GPS time series from large-scale CGPS networks (e.g., EarthScope/PBO) call for efficient approaches to integrate these two data types for maximal extraction of the signal of interest and for imaging time-variable deformation processes. We present a wavelet-based spatiotemporal filtering approach to integrate InSAR and GPS data at multiple scales in space and time. The approach consists of a series of InSAR noise correction modules that are based on wavelet multi-resolution analysis (MRA) for correcting major noise components in InSAR images, and an InSAR time series analysis that combines MRA and small-baseline least-squares inversion with temporal filtering (wavelet- or Kalman-filter-based) to filter out turbulent troposphere noise. It also exploits a novel way of considering temporal correlation between InSAR and GPS time series at multiple scales and reconstructs surface deformation measurements with dense spatial and temporal sampling. Compared to other approaches, this approach does not require a priori parameterization of temporal behaviors and provides a general way to discover signals of interest at different spatiotemporal scales. We present test cases where known signals with realistic noise components are synthesized for analysis and comparison. We are in the process of improving the approach and generalizing it to real-world applications.
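
    As a minimal illustration of the multiresolution idea, the sketch below splits an interferogram-like image into a long-wavelength (coarse) part and a short-wavelength remainder using a 2-D wavelet decomposition. It assumes the PyWavelets package and is not the authors' processing chain.

```python
# Minimal sketch of 2-D wavelet multiresolution analysis used to separate the
# long-wavelength (coarse) part of an interferogram-like image from its
# short-wavelength part. Assumes PyWavelets; this is not the authors' pipeline.
import numpy as np
import pywt

def coarse_fine_split(image, wavelet="db4", level=3):
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    # Coarse part: keep only the approximation, zero all detail sub-bands.
    coarse_coeffs = [coeffs[0]] + [
        tuple(np.zeros_like(d) for d in details) for details in coeffs[1:]
    ]
    coarse = pywt.waverec2(coarse_coeffs, wavelet)[: image.shape[0], : image.shape[1]]
    return coarse, image - coarse

img = np.add.outer(np.linspace(0, 1, 128), np.linspace(0, 2, 128))  # smooth ramp
img += 0.05 * np.random.randn(128, 128)                             # fine-scale noise
coarse, fine = coarse_fine_split(img)
print(coarse.shape, float(np.std(fine)))
```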

  1. MEASUREMENT OF OAK TREE DENSITY WITH LANDSAT TM DATA FOR ESTIMATING BIOGENIC ISOPRENE EMISSIONS IN TENNESSEE, USA: JOURNAL ARTICLE

    EPA Science Inventory

    JOURNAL NRMRL-RTP-P- 437 Baugh, W., Klinger, L., Guenther, A., and Geron*, C.D. Measurement of Oak Tree Density with Landsat TM Data for Estimating Biogenic Isoprene Emissions in Tennessee, USA. International Journal of Remote Sensing (Taylor and Francis) 22 (14):2793-2810 (2001)...

  2. Primates in Human-Modified and Fragmented Landscapes: The Conservation Relevance of Modelling Habitat and Disturbance Factors in Density Estimation

    PubMed Central

    Cavada, Nathalie; Barelli, Claudia; Ciolli, Marco; Rovero, Francesco

    2016-01-01

    Accurate density estimation of threatened animal populations is essential for management and conservation. This is particularly critical for species living in patchy and altered landscapes, as is the case for most tropical forest primates. In this study, we used a hierarchical modelling approach that incorporates the effect of environmental covariates on both the detection (i.e. observation) and the state (i.e. abundance) processes of distance sampling. We applied this method to already published data on three arboreal primates of the Udzungwa Mountains of Tanzania, including the endangered and endemic Udzungwa red colobus (Procolobus gordonorum). The area is a primate hotspot at continental level. Compared to previous, ‘canonical’ density estimates, we found that the inclusion of covariates in the modelling makes the inference process more informative, as it takes full account of the contrasting habitat and protection levels among forest blocks. The correction of density estimates for imperfect detection was especially critical where animal detectability was low. Relative to our approach, density was underestimated by the canonical distance sampling, particularly in the less protected forest. Group size had an effect on detectability, determining how the observation process varies depending on the socio-ecology of the target species. Lastly, as the inference on density is spatially-explicit to the scale of the covariates used in the modelling, we could confirm that primate densities are highest in low-to-mid elevations, where human disturbance tends to be greater, indicating a considerable resilience by target monkeys in disturbed habitats. However, the marked trend of lower densities in unprotected forests urgently calls for effective forest protection. PMID:26844891

  3. Primates in Human-Modified and Fragmented Landscapes: The Conservation Relevance of Modelling Habitat and Disturbance Factors in Density Estimation.

    PubMed

    Cavada, Nathalie; Barelli, Claudia; Ciolli, Marco; Rovero, Francesco

    2016-01-01

    Accurate density estimation of threatened animal populations is essential for management and conservation. This is particularly critical for species living in patchy and altered landscapes, as is the case for most tropical forest primates. In this study, we used a hierarchical modelling approach that incorporates the effect of environmental covariates on both the detection (i.e. observation) and the state (i.e. abundance) processes of distance sampling. We applied this method to already published data on three arboreal primates of the Udzungwa Mountains of Tanzania, including the endangered and endemic Udzungwa red colobus (Procolobus gordonorum). The area is a primate hotspot at continental level. Compared to previous, 'canonical' density estimates, we found that the inclusion of covariates in the modelling makes the inference process more informative, as it takes full account of the contrasting habitat and protection levels among forest blocks. The correction of density estimates for imperfect detection was especially critical where animal detectability was low. Relative to our approach, density was underestimated by the canonical distance sampling, particularly in the less protected forest. Group size had an effect on detectability, determining how the observation process varies depending on the socio-ecology of the target species. Lastly, as the inference on density is spatially-explicit to the scale of the covariates used in the modelling, we could confirm that primate densities are highest in low-to-mid elevations, where human disturbance tends to be greater, indicating a considerable resilience by target monkeys in disturbed habitats. However, the marked trend of lower densities in unprotected forests urgently calls for effective forest protection. PMID:26844891

  4. Two-component wind fields from scanning aerosol lidar and motion estimation algorithms

    NASA Astrophysics Data System (ADS)

    Mayor, Shane D.; Dérian, Pierre; Mauzey, Christopher F.; Hamada, Masaki

    2013-09-01

    We report on the implementation and testing of a new wavelet-based motion estimation algorithm to estimate horizontal vector wind fields in real-time from horizontally-scanning elastic backscatter lidar data, and new experimental results from field work conducted in Chico, California, during the summer of 2013. We also highlight some limitations of a traditional cross-correlation method and compare the results of the wavelet-based method with those from the cross-correlation method and wind measurements from a Doppler lidar.
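
    For contrast, the traditional cross-correlation baseline mentioned above can be sketched as below: the bulk displacement between two successive backscatter images is taken from the peak of their FFT-based cross-correlation. This is not the wavelet-based estimator itself, and the synthetic images stand in for real lidar scans.

```python
# Sketch of the traditional cross-correlation baseline: estimate the bulk
# displacement between two successive backscatter images from the peak of their
# FFT-based cross-correlation. This is not the wavelet-based estimator.
import numpy as np

def displacement(img0, img1):
    f0, f1 = np.fft.fft2(img0 - img0.mean()), np.fft.fft2(img1 - img1.mean())
    xcorr = np.fft.ifft2(f0 * np.conj(f1)).real
    peak = np.unravel_index(np.argmax(xcorr), xcorr.shape)
    # Wrap indices above N/2 to negative shifts.
    shift = [p if p <= s // 2 else p - s for p, s in zip(peak, xcorr.shape)]
    return shift  # (row, col) shift of img0 relative to img1

rng = np.random.default_rng(0)
a = rng.random((64, 64))
b = np.roll(a, shift=(3, -5), axis=(0, 1))  # aerosol field advected by (3, -5) pixels
print(displacement(b, a))                   # dividing by the scan interval gives velocity
```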

  5. Survival analysis for the missing censoring indicator model using kernel density estimation techniques.

    PubMed

    Subramanian, Sundarraman

    2006-01-01

    This article concerns asymptotic theory for a new estimator of a survival function in the missing censoring indicator model of random censorship. Specifically, the large sample results for an inverse probability-of-non-missingness weighted estimator of the cumulative hazard function, so far not available, are derived, including an almost sure representation with rate for a remainder term, and uniform strong consistency with rate of convergence. The estimator is based on a kernel estimate for the conditional probability of non-missingness of the censoring indicator. Expressions for its bias and variance, in turn leading to an expression for the mean squared error as a function of the bandwidth, are also obtained. The corresponding estimator of the survival function, whose weak convergence is derived, is asymptotically efficient. A numerical study, comparing the performances of the proposed and two other currently existing efficient estimators, is presented. PMID:18953423

  6. A field comparison of nested grid and trapping web density estimators

    USGS Publications Warehouse

    Jett, D.A.; Nichols, J.D.

    1987-01-01

    The usefulness of capture-recapture estimators in any field study will depend largely on underlying model assumptions and on how closely these assumptions approximate the actual field situation. Evaluation of estimator performance under real-world field conditions is often a difficult matter, although several approaches are possible. Perhaps the best approach involves use of the estimation method on a population with known parameters.

  7. Estimation of the density of Martian soil from radiophysical measurements in the 3-centimeter range

    NASA Technical Reports Server (NTRS)

    Krupenio, N. N.

    1977-01-01

    The density of the Martian soil is evaluated to a depth of up to one meter using the results of radar measurements at λ0 = 3.8 cm and polarized radio astronomical measurements at λ0 = 3.4 cm conducted onboard the automatic interplanetary stations Mars 3 and Mars 5. The average value of the soil density according to all measurements is ρ = 1.37 ± 0.33 g/cm3. A map of the distribution of permittivity and soil density is also derived from the radiophysical data in the 3-centimeter range.

  8. Coronal electron density distributions estimated from deca-hectometer type II radio bursts and coronal mass ejections

    NASA Astrophysics Data System (ADS)

    Lee, Jae-Ok; Moon, Yong-Jae; Lee, Jin-Yi; Lee, Kyoung-Sun; Kim, Rok-Soon

    2015-04-01

    In this study, we estimate coronal electron density distributions by analyzing DH type II radio observations based on the assumption that a DH type II radio burst is generated by the shock formed at a CME leading edge. For this, we consider 11 Wind/WAVES DH type II radio bursts (from 2000 to 2003 and from 2010 to 2012) associated with SOHO/LASCO limb CMEs using the following criteria: (1) the fundamental and second harmonic emission lanes are well identified in the frequency range of 1 to 14 MHz; (2) the associated CME is clearly identified at least twice in the LASCO-C2 or C3 field of view during the time of type II observation. For these events, we determine the lowest frequencies of their fundamental emission lanes and the heights of their leading edges. Coronal electron density distributions are obtained by minimizing the root mean square error between the observed heights of CME leading edges and the heights of DH type II radio bursts from assumed electron density distributions. We find that the estimated coronal electron density distributions range from 2.5- to 10.2-fold Saito’s coronal electron density model.
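
    The link between an observed emission frequency and electron density rests on the standard plasma-frequency relation, which can be sketched as follows (the fitting of Saito-type density models is not reproduced here):

```python
# Sketch of the standard plasma-frequency relation used to connect a type II
# emission frequency to electron density: f_p [kHz] ~ 8.98 * sqrt(n_e [cm^-3]),
# so n_e = (f_p / 8.98 kHz)^2. Fundamental emission occurs near f_p and the
# harmonic near 2 * f_p; the paper's model fitting is not reproduced here.
def electron_density_cm3(freq_mhz, harmonic=False):
    f_khz = freq_mhz * 1e3
    if harmonic:                 # harmonic lane: emission near 2 * f_p
        f_khz /= 2.0
    return (f_khz / 8.98) ** 2

print(electron_density_cm3(1.0))    # ~1.2e4 cm^-3 at 1 MHz fundamental
print(electron_density_cm3(14.0))   # ~2.4e6 cm^-3 at 14 MHz fundamental
```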

  9. Density estimates of rural dog populations and an assessment of marking methods during a rabies vaccination campaign in the Philippines.

    PubMed

    Childs, J E; Robinson, L E; Sadek, R; Madden, A; Miranda, M E; Miranda, N L

    1998-01-01

    We estimated the population density of dogs by distance sampling and assessed the potential utility of two marking methods for capture-mark-recapture applications following a mass canine rabies-vaccination campaign in Sorsogon Province, the Republic of the Philippines. Thirty villages selected to assess vaccine coverage and for dog surveys were visited 1 to 11 days after the vaccinating team. Measurements of the distance of dogs or groups of dogs from transect lines were obtained in 1088 instances (N = 1278 dogs; mean group size = 1.2). Various functions modelling the probability of detection were fitted to a truncated distribution of distances of dogs from transect lines. A hazard rate model provided the best fit and an overall estimate of dog-population density of 468/km2 (95% confidence interval, 359 to 611). At vaccination, most dogs were marked with either a paint stick or a black plastic collar. Overall, 34.8% of 2167 and 28.5% of 2115 dogs could be accurately identified as wearing a collar or showing a paint mark; 49.1% of the dogs had either mark. Increasing time interval between vaccination-team visit and dog survey and increasing distance from transect line were inversely associated with the probability of observing a paint mark. Probability of observing a collar was positively associated with increasing estimated density of the dog population in a given village and with animals not associated with a house. The data indicate that distance sampling is a relatively simple and adaptable method for estimating dog-population density and is not prone to problems associated with meeting some model assumptions inherent to mark-recapture estimators. PMID:9500175
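
    The hazard-rate line-transect estimator used above can be sketched as follows; the detection-function parameters, truncation distance, and survey numbers below are hypothetical, not the fitted values from the study.

```python
# Sketch of the line-transect density estimator with a hazard-rate detection
# function g(x) = 1 - exp(-(x/sigma)^-b): D = n / (2 * L * mu), where mu is the
# effective strip half-width. All parameter values here are hypothetical.
import numpy as np

def hazard_rate(x, sigma, b):
    return 1.0 - np.exp(-(x / sigma) ** (-b))

def density_per_km2(n_detections, transect_length_km, sigma_m, b, trunc_m):
    x = np.linspace(1e-6, trunc_m, 10_000)
    dx = x[1] - x[0]
    mu_m = np.sum(hazard_rate(x, sigma_m, b)) * dx   # effective strip half-width, m
    area_km2 = 2.0 * transect_length_km * (mu_m / 1000.0)
    return n_detections / area_km2

print(density_per_km2(n_detections=500, transect_length_km=40.0,
                      sigma_m=20.0, b=3.0, trunc_m=80.0))
```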

  10. Estimates of volumetric bone density from projectional measurements improve the discriminatory capability of dual X-ray absorptiometry

    NASA Technical Reports Server (NTRS)

    Jergas, M.; Breitenseher, M.; Gluer, C. C.; Yu, W.; Genant, H. K.

    1995-01-01

    To determine whether estimates of volumetric bone density from projectional scans of the lumbar spine have weaker associations with height and weight and stronger associations with prevalent vertebral fractures than standard projectional bone mineral density (BMD) and bone mineral content (BMC), we obtained posteroanterior (PA) dual X-ray absorptiometry (DXA), lateral supine DXA (Hologic QDR 2000), and quantitative computed tomography (QCT, GE 9800 scanner) in 260 postmenopausal women enrolled in two trials of treatment for osteoporosis. In 223 women, all vertebral levels, i.e., L2-L4 in the DXA scan and L1-L3 in the QCT scan, could be evaluated. Fifty-five women were diagnosed as having at least one mild fracture (age 67.9 +/- 6.5 years) and 168 women did not have any fractures (age 62.3 +/- 6.9 years). We derived three estimates of "volumetric bone density" from PA DXA (BMAD, BMAD*, and BMD*) and three from paired PA and lateral DXA (WA BMD, WA BMDHol, and eVBMD). While PA BMC and PA BMD were significantly correlated with height (r = 0.49 and r = 0.28) or weight (r = 0.38 and r = 0.37), QCT and the volumetric bone density estimates from paired PA and lateral scans were not (r = -0.083 to r = 0.050). BMAD, BMAD*, and BMD* correlated with weight but not height. The associations with vertebral fracture were stronger for QCT (odds ratio [OR] = 3.17; 95% confidence interval [CI] = 1.90-5.27), eVBMD (OR = 2.87; CI 1.80-4.57), WA BMDHol (OR = 2.86; CI 1.80-4.55) and WA-BMD (OR = 2.77; CI 1.75-4.39) than for BMAD*/BMD* (OR = 2.03; CI 1.32-3.12), BMAD (OR = 1.68; CI 1.14-2.48), lateral BMD (OR = 1.88; CI 1.28-2.77), standard PA BMD (OR = 1.47; CI 1.02-2.13) or PA BMC (OR = 1.22; CI 0.86-1.74). The areas under the receiver operating characteristic (ROC) curves for QCT and all estimates of volumetric BMD were significantly higher compared with standard PA BMD and PA BMC. We conclude that, like QCT, estimates of volumetric bone density from paired PA and lateral scans are unaffected by height and weight and are more strongly associated with vertebral fracture than standard PA BMD or BMC, or estimates of volumetric density that are solely based on PA DXA scans.
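
    For orientation, one commonly used projectional estimate of volumetric density is bone mineral apparent density, BMAD = BMC divided by the projected area raised to the power 1.5; the sketch below shows that calculation with hypothetical values. The exact definitions of BMAD*, BMD*, and the width-adjusted (WA) quantities in the abstract are not reproduced here, so this is only an assumed illustrative form.

```python
# Sketch of one commonly used projectional estimate of volumetric bone density:
# bone mineral apparent density, BMAD = BMC / (projected area)^1.5. Shown only to
# illustrate the idea behind the derived "volumetric" quantities in the abstract;
# the exact formulas for BMAD*, BMD*, and the WA quantities are not reproduced here.
def bmad(bmc_g, projected_area_cm2):
    """Approximate volumetric density (g/cm^3) from projectional DXA measures."""
    return bmc_g / projected_area_cm2 ** 1.5

print(bmad(bmc_g=45.0, projected_area_cm2=50.0))  # hypothetical L2-L4 values
```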

  11. Using hole screening effect on hole-phonon interaction to estimate hole density in Mg-doped InN

    NASA Astrophysics Data System (ADS)

    Su, Yi-En; Wen, Yu-Chieh; Hong, Yu-Liang; Lee, Hong-Mao; Gwo, Shangjr; Lin, Yuan-Ting; Tu, Li-Wei; Liu, Hsiang-Lin; Sun, Chi-Kuang

    2011-06-01

    The screening effect of the heavy-hole LO-phonon interaction is observed and studied through pump-probe transmission measurements in Mg-doped InN. By combining the measured transient hole dynamics with the absorption spectra, this optical-based observation avoids the influence of the surface n-type layer and the depletion layer in Mg-doped InN. With the observed heavy-hole heating time at different photoexcited carrier densities and the measured absorption edge, we show that it is now possible to estimate the background hole density and band gap energy in Mg-doped InN.

  12. Variability of footprint ridge density and its use in estimation of sex in forensic examinations.

    PubMed

    Krishan, Kewal; Kanchan, Tanuj; Pathania, Annu; Sharma, Ruchika; DiMaggio, John A

    2015-10-01

    The present study deals with a comparatively new biometric parameter of footprints called footprint ridge density. The study attempts to evaluate sex-dependent variations in ridge density in different areas of the footprint and its usefulness in discriminating sex in the young adult population of north India. The sample for the study consisted of 160 young adults (121 females) from north India. The left and right footprints were taken from each subject according to the standard procedures. The footprints were analysed using a 5 mm × 5 mm square and the ridge density was calculated in four different well-defined areas of the footprints. These were: F1 - the great toe on its proximal and medial side; F2 - the medial ball of the footprint, below the triradius (the triradius is a Y-shaped group of ridges on finger balls, palms and soles which forms the basis of ridge counting in identification); F3 - the lateral ball of the footprint, towards the most lateral part; and F4 - the heel in its central part where the maximum breadth at heel is cut by a perpendicular line drawn from the most posterior point on heel. This value represents the number of ridges in a 25 mm(2) area and reflects the ridge density value. Ridge densities analysed on different areas of footprints were compared with each other using the Friedman test for related samples. The total footprint ridge density was calculated as the sum of the ridge density in the four areas of footprints included in the study (F1 + F2 + F3 + F4). The results show that the mean footprint ridge density was higher in females than males in all the designated areas of the footprints. The sex differences in footprint ridge density were observed to be statistically significant in the analysed areas of the footprint, except for the heel region of the left footprint. The total footprint ridge density was also observed to be significantly higher among females than males. A statistically significant correlation is shown in the ridge densities among most areas of both left and right sides. Based on receiver operating characteristic (ROC) curve analysis, the sexing potential of footprint ridge density was observed to be considerably higher on the right side. The sexing potential for the four areas ranged between 69.2% and 85.3% on the right side, and between 59.2% and 69.6% on the left side. ROC analysis of the total footprint ridge density shows that the sexing potential of the right and left footprint was 91.5% and 77.7% respectively. The study concludes that footprint ridge density can be utilised in the determination of sex as a supportive parameter. The findings of the study should be utilised only on the north Indian population and may not be internationally generalisable. PMID:25413487

  13. Wavelet-based multiscale analysis of bioimpedance data measured by electric cell-substrate impedance sensing for classification of cancerous and normal cells

    NASA Astrophysics Data System (ADS)

    Das, Debanjan; Shiladitya, Kumar; Biswas, Karabi; Dutta, Pranab Kumar; Parekh, Aditya; Mandal, Mahitosh; Das, Soumen

    2015-12-01

    The paper presents a study to differentiate normal and cancerous cells using label-free bioimpedance signals measured by electric cell-substrate impedance sensing. The real-time bioimpedance data of human breast cancer cells and normal human epithelial cells exploit fluctuations of the impedance value due to cellular micromotions resulting from dynamic structural rearrangement of membrane protrusions under nonagitated conditions. Here, a wavelet-based multiscale quantitative analysis technique has been applied to analyze the fluctuations in bioimpedance. The study demonstrates a method to classify cancerous and normal cells from the signature of their impedance fluctuations. The fluctuations associated with cellular micromotion are quantified in terms of cellular energy, cellular power dissipation, and cellular moments. The cellular energy and power dissipation are found to be higher for cancerous cells, consistent with the higher micromotions of cancer cells. This initial study suggests that the proposed wavelet-based quantitative technique promises to be an effective method for analyzing real-time bioimpedance signals for distinguishing cancerous and normal cells.
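
    The multiscale decomposition step can be sketched as below: scale-wise energies are computed from the detail coefficients of a discrete wavelet transform of the impedance-fluctuation trace. PyWavelets is assumed, the data are synthetic, and the paper's specific quantities (cellular energy, power dissipation, moments) are defined there rather than here.

```python
# Sketch of extracting scale-wise energy features from an impedance-fluctuation
# series with a discrete wavelet transform (PyWavelets assumed). The paper's
# specific quantities are defined in the paper; this only illustrates the
# multiscale decomposition step on synthetic data.
import numpy as np
import pywt

def scale_energies(signal, wavelet="db4", level=5):
    coeffs = pywt.wavedec(signal - np.mean(signal), wavelet, level=level)
    return [float(np.sum(c ** 2)) for c in coeffs[1:]]  # detail energies, coarse to fine

rng = np.random.default_rng(0)
fluctuations = np.cumsum(rng.standard_normal(4096)) * 1e-2  # synthetic micromotion trace
print(np.round(scale_energies(fluctuations), 4))
```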

  14. A wavelet-based method for the forced vibration analysis of piecewise linear single- and multi-DOF systems with application to cracked beam dynamics

    NASA Astrophysics Data System (ADS)

    Joglekar, D. M.; Mitra, M.

    2015-12-01

    The present investigation outlines a method based on the wavelet transform to analyze the vibration response of discrete piecewise linear oscillators, representative of beams with breathing cracks. The displacement and force variables in the governing differential equation are approximated using Daubechies compactly supported wavelets. An iterative scheme is developed to arrive at the optimum transform coefficients, which are back-transformed to obtain the time-domain response. A time-integration scheme, solving a linear complementarity problem at every time step, is devised to validate the proposed wavelet-based method. Applicability of the proposed solution technique is demonstrated by considering several test cases involving a cracked cantilever beam modeled as a bilinear SDOF system subjected to a harmonic excitation. In particular, the presence of higher-order harmonics, originating from the piecewise linear behavior, is confirmed in all the test cases. A parametric study involving variations in the crack depth and crack location is performed to bring out their effect on the relative strengths of higher-order harmonics. Versatility of the method is demonstrated by considering cases such as mixed-frequency excitation and an MDOF oscillator with multiple bilinear springs. In addition to establishing the wavelet-based method as a viable alternative for analyzing the response of piecewise linear oscillators, the proposed method, unlike direct time integration schemes, can be easily extended to solve inverse problems.

  15. Inverse estimation of parameters for multidomain flow models in soil columns with different macropore densities.

    PubMed

    Arora, Bhavna; Mohanty, Binayak P; McGuire, Jennifer T

    2011-04-01

    Soil and crop management practices have been found to modify soil structure and alter macropore densities. An ability to accurately determine soil hydraulic parameters and their variation with changes in macropore density is crucial for assessing potential contamination from agricultural chemicals. This study investigates the consequences of using consistent matrix and macropore parameters in simulating preferential flow and bromide transport in soil columns with different macropore densities (no macropore, single macropore, and multiple macropores). As used herein, the term "macropore density" is intended to refer to the number of macropores per unit area. A comparison between continuum-scale models including single-porosity model (SPM), mobile-immobile model (MIM), and dual-permeability model (DPM) that employed these parameters is also conducted. Domain-specific parameters are obtained from inverse modeling of homogeneous (no macropore) and central macropore columns in a deterministic framework and are validated using forward modeling of both low-density (3 macropores) and high-density (19 macropores) multiple-macropore columns. Results indicate that these inversely modeled parameters are successful in describing preferential flow but not tracer transport in both multiple-macropore columns. We believe that lateral exchange between matrix and macropore domains needs better accounting to efficiently simulate preferential transport in the case of dense, closely spaced macropores. Increasing model complexity from SPM to MIM to DPM also improved predictions of preferential flow in the multiple-macropore columns but not in the single-macropore column. This suggests that the use of a more complex model with resolved domain-specific parameters is recommended with an increase in macropore density to generate forecasts with higher accuracy. PMID:24511165

  16. Estimation of Vegetation Aerodynamic Roughness of Natural Regions Using Frontal Area Density Determined from Satellite Imagery

    NASA Technical Reports Server (NTRS)

    Jasinski, Michael F.; Crago, Richard

    1994-01-01

    Parameterizations of the frontal area index and canopy area index of natural or randomly distributed plants are developed, and applied to the estimation of local aerodynamic roughness using satellite imagery. The formulas are expressed in terms of the subpixel fractional vegetation cover and one non-dimensional geometric parameter that characterizes the plant's shape. Geometrically similar plants and Poisson distributed plant centers are assumed. An appropriate averaging technique to extend satellite pixel-scale estimates to larger scales is provided. The parameterization is applied to the estimation of aerodynamic roughness using satellite imagery for a 2.3 sq km coniferous portion of the Landes Forest near Lubbon, France, during the 1986 HAPEX-Mobilhy Experiment. The canopy area index is estimated first for each pixel in the scene based on previous estimates of fractional cover obtained using Landsat Thematic Mapper imagery. Next, the results are incorporated into Raupach's (1992, 1994) analytical formulas for momentum roughness and zero-plane displacement height. The estimates compare reasonably well to reference values determined from measurements taken during the experiment and to published literature values. The approach offers the potential for estimating regionally variable, vegetation aerodynamic roughness lengths over natural regions using satellite imagery when there exists only limited knowledge of the vegetated surface.

  17. A Mass and Density Estimate for the Unshocked Ejecta in Cas A based on Low Frequency Radio Data

    NASA Astrophysics Data System (ADS)

    DeLaney, Tracey; Kassim, N.; Rudnick, L.; Isensee, K.

    2012-01-01

    One of the key discoveries from the spectral mapping of Cassiopeia A with the Spitzer Space Telescope was the detection of infrared emission from cold silicon- and oxygen-rich ejecta interior to the reverse shock. When mapped into three dimensions, the ejecta distribution, including both hot and cold ejecta, appears quite flattened. On the front and back sides of Cas A, the Si- and O-rich ejecta have yet to reach the reverse shock, while around the edge these layers are currently encountering the reverse shock, giving rise to the Bright Ring structure that dominates Cas A's X-ray, optical, and radio morphology. In addition to morphology, the density and total mass remaining in the cold, unshocked ejecta are important parameters for modeling Cas A's explosion and subsequent evolution. The density estimated from the Spitzer data is not particularly useful (upper limit of 100/cm^3); however, the cold ejecta are also observed via free-free absorption at low radio frequencies. Using Very Large Array observations at 330 and 74 MHz, we have a new density estimate of 2.3/cm^3 and a total mass estimate of 0.44 M_solar for the cold, unshocked ejecta. Our estimates are sensitive to a number of factors including temperature and geometry, but we are quite pleased that our unshocked mass estimate is within a factor of two of estimates based on dynamical models. We will also ponder the presence, or absence, of cold iron- and carbon-rich ejecta and how these affect our calculations.

  18. Hydrological parameter estimations from a conservative tracer test with variable-density effects at the Boise Hydrogeophysical Research Site

    NASA Astrophysics Data System (ADS)

    Dafflon, B.; Barrash, W.; Cardiff, M.; Johnson, T. C.

    2011-12-01

    Reliable predictions of groundwater flow and solute transport require an estimation of the detailed distribution of the parameters (e.g., hydraulic conductivity, effective porosity) controlling these processes. However, such parameters are difficult to estimate because of the inaccessibility and complexity of the subsurface. In this regard, developments in parameter estimation techniques and investigations of field experiments are still challenging and necessary to improve our understanding and the prediction of hydrological processes. Here we analyze a conservative tracer test conducted at the Boise Hydrogeophysical Research Site in 2001 in a heterogeneous unconfined fluvial aquifer. Some relevant characteristics of this test include: variable-density (sinking) effects because of the injection concentration of the bromide tracer, the relatively small size of the experiment, and the availability of various sources of geophysical and hydrological information. The information contained in this experiment is evaluated through several parameter estimation approaches, including a grid-search-based strategy, stochastic simulation of hydrological property distributions, and deterministic inversion using regularization and pilot-point techniques. Doing this allows us to investigate hydraulic conductivity and effective porosity distributions and to compare the effects of assumptions from several methods and parameterizations. Our results provide new insights into the understanding of variable-density transport processes and the hydrological relevance of incorporating various sources of information in parameter estimation approaches. Among others, the variable-density effect and the effective porosity distribution, as well as their coupling with the hydraulic conductivity structure, are seen to be significant in the transport process. The results also show that assumed prior information can strongly influence the estimated distributions of hydrological properties.

  19. Pattern recognition algorithms for density estimation of asphalt pavement during compaction: a simulation study

    NASA Astrophysics Data System (ADS)

    Shangguan, Pengcheng; Al-Qadi, Imad L.; Lahouar, Samer

    2014-08-01

    This paper presents the application of artificial neural network (ANN) based pattern recognition to extract the density information of asphalt pavement from simulated ground penetrating radar (GPR) signals. This study is part of research efforts into the application of GPR to monitor asphalt pavement density during compaction. The main challenge is to eliminate the effect of roller-sprayed water on GPR signals during compaction and to extract density information accurately. A calibration of the excitation function was conducted to provide an accurate match between the simulated signal and the real signal. A modified electromagnetic mixing model was then used to calculate the dielectric constant of the asphalt mixture with water. A large database of GPR responses was generated from pavement models having different air void contents and various surface moisture contents using finite-difference time-domain simulation. Feature extraction was performed to extract density-related features from the simulated GPR responses. Air void contents were divided into five classes representing different compaction statuses. An ANN-based pattern recognition system was trained using the extracted features as inputs and air void content classes as target outputs. Accuracy of the system was tested using a separate test data set. Classification of air void contents using the developed algorithm is found to be highly accurate, which indicates the effectiveness of this method for predicting asphalt concrete density.
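
    The final classification stage can be sketched with a small feed-forward network as below, assuming scikit-learn; the GPR-derived features are replaced by synthetic ones and five generic classes stand in for the air-void-content ranges.

```python
# Sketch of the final classification stage using a small feed-forward network
# (scikit-learn assumed). The GPR feature extraction is replaced by synthetic
# features; five generic classes stand in for the five air-void-content ranges.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_per_class, n_features, n_classes = 200, 6, 5
X = np.vstack([rng.normal(loc=c, scale=0.8, size=(n_per_class, n_features))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
clf.fit(X_tr, y_tr)
print("test accuracy:", round(clf.score(X_te, y_te), 3))
```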

  20. Estimating the functional form for the density dependence from life history data.

    PubMed

    Coulson, T; Ezard, T H G; Pelletier, F; Tavecchia, G; Stenseth, N C; Childs, D Z; Pilkington, J G; Pemberton, J M; Kruuk, L E B; Clutton-Brock, T H; Crawley, M J

    2008-06-01

    Two contrasting approaches to the analysis of population dynamics are currently popular: demographic approaches where the associations between demographic rates and statistics summarizing the population dynamics are identified; and time series approaches where the associations between population dynamics, population density, and environmental covariates are investigated. In this paper, we develop an approach to combine these methods and apply it to detailed data from Soay sheep (Ovis aries). We examine how density dependence and climate contribute to fluctuations in population size via age- and sex-specific demographic rates, and how fluctuations in demographic structure influence population dynamics. Density dependence contributes most, followed by climatic variation, age structure fluctuations and interactions between density and climate. We then simplify the density-dependent, stochastic, age-structured demographic model and derive a new phenomenological time series which captures the dynamics better than previously selected functions. The simple method we develop has potential to provide substantial insight into the relative contributions of population and individual-level processes to the dynamics of populations in stochastic environments. PMID:18589530

  1. Comparison of precision orbit derived density estimates for CHAMP and GRACE satellites

    NASA Astrophysics Data System (ADS)

    Fattig, Eric Dale

    Current atmospheric density models cannot adequately represent the density variations observed by satellites in Low Earth Orbit (LEO). Using an optimal orbit determination process, precision orbit ephemerides (POE) are used as measurement data to generate corrections to density values obtained from existing atmospheric models. Densities obtained using these corrections are then compared to density data derived from the onboard accelerometers of satellites, specifically the CHAMP and GRACE satellites. This comparison takes two forms, cross correlation analysis and root mean square analysis. The densities obtained from the POE method are nearly always superior to the empirical models, both in matching the trends observed by the accelerometer (cross correlation), and the magnitudes of the accelerometer derived density (root mean square). In addition, this method consistently produces better results than those achieved by the High Accuracy Satellite Drag Model (HASDM). For satellites orbiting Earth that pass through Earth's upper atmosphere, drag is the primary source of uncertainty in orbit determination and prediction. Variations in density, which are often not modeled or are inaccurately modeled, cause difficulty in properly calculating the drag acting on a satellite. These density variations are the result of many factors; however, the Sun is the main driver in upper atmospheric density changes. The Sun influences the densities in Earth's atmosphere through solar heating of the atmosphere, as well as through geomagnetic heating resulting from the solar wind. Data are examined for fourteen hour time spans between November 2004 and July 2009 for both the CHAMP and GRACE satellites. This data spans all available levels of solar and geomagnetic activity, which does not include data in the elevated and high solar activity bins due to the nature of the solar cycle. Density solutions are generated from corrections to five different baseline atmospheric models, as well as nine combinations of density and ballistic coefficient correlated half-lives. These half-lives are varied among values of 1.8, 18, and 180 minutes. A total of forty-five sets of results emerge from the orbit determination process for all combinations of baseline density model and half-lives. Each time period is examined for both CHAMP and GRACE-A, and the results are analyzed. Results are averaged from all solutions periods for 2004--2007. In addition, results are averaged after binning according to solar and geomagnetic activity levels. For any given day in this period, a ballistic coefficient correlated half-life of 1.8 minutes yields the best correlation and root mean square values for both CHAMP and GRACE. For CHAMP, a density correlated half-life of 18 minutes is best for higher levels of solar and geomagnetic activity, while for lower levels 180 minutes is usually superior. For GRACE, 180 minutes is nearly always best. The three Jacchia-based atmospheric models yield very similar results. The CIRA 1972 or Jacchia 1971 models as baseline consistently produce the best results for both satellites, though results obtained for Jacchia-Roberts are very similar to the other Jacchia-based models. Data are examined in a similar manner for the extended solar minimum period during 2008 and 2009, albeit with a much smaller sampling of data. With the exception of some atypical results, similar combinations of half-lives and baseline atmospheric model produce the best results. 
A greater sampling of data will aid in characterizing density in a period of especially low solar activity. In general, cross correlation values for CHAMP and GRACE revealed that the POE method matched trends observed by the accelerometers very well. However, one period of time deviated from this trend for the GRACE-A satellite. Between late October 2005 and January 2006, correlations for GRACE-A were very low. Special examination of the surrounding months revealed the extent of time this period covered. Half-life and baseline model combinations that produced the best results during this time were similar to those during normal periods. Plotting these periods revealed very short period density variations in the accelerometer that could not be reproduced by the empirical models, HASDM, or the POE method. Finally, densities produced using precision orbit data for the GRACE-B satellite were shown to be nearly indistinguishable from those produced by GRACE-A. Plots of the densities produced for both satellites during the same time periods revealed this fact. Multiple days were examined covering all possible ranges of solar and geomagnetic activity. In addition, the period in which GRACE-A correlations were low was studied. No significant differences existed between GRACE-A and GRACE-B for all of the days examined.
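
    The two comparison measures used throughout, cross correlation for trend agreement and root mean square difference for magnitude agreement, can be sketched as below on synthetic density series; the numbers are illustrative only.

```python
# Sketch of the two comparison measures used above: the cross-correlation
# coefficient (trend agreement) and the RMS difference (magnitude agreement)
# between accelerometer-derived and POE-derived density series. Data are synthetic.
import numpy as np

def compare_densities(rho_accel, rho_poe):
    cc = float(np.corrcoef(rho_accel, rho_poe)[0, 1])
    rms = float(np.sqrt(np.mean((rho_accel - rho_poe) ** 2)))
    return cc, rms

t = np.linspace(0.0, 14.0, 1000)                                  # hours
rho_accel = 3e-12 * (1.0 + 0.3 * np.sin(2 * np.pi * t / 1.5))     # kg/m^3, orbit-period variation
rho_poe = rho_accel * 1.05 + 1e-13 * np.random.randn(t.size)
print(compare_densities(rho_accel, rho_poe))
```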

  2. A H-infinity Fault Detection and Diagnosis Scheme for Discrete Nonlinear System Using Output Probability Density Estimation

    SciTech Connect

    Zhang Yumin; Lum, Kai-Yew; Wang Qingguo

    2009-03-05

    In this paper, an H-infinity fault detection and diagnosis (FDD) scheme for faults in a class of discrete nonlinear systems using output probability density estimation is presented. Unlike classical FDD problems, the measured output of the system is viewed as a stochastic process and its square root probability density function (PDF) is modeled with B-spline functions, which leads to a deterministic space-time dynamic model including nonlinearities and uncertainties. A weighted mean value is given as an integral of the square root PDF along the space direction, which yields a function of time only that can be used to construct a residual signal. Thus, the classical nonlinear filter approach can be used to detect and diagnose the fault in the system. A feasible detection criterion is obtained first, and a new H-infinity adaptive fault diagnosis algorithm is then investigated to estimate the fault. A simulation example is given to demonstrate the effectiveness of the proposed approaches.

  3. Inverse estimation of parameters for multidomain flow models in soil columns with different macropore densities

    PubMed Central

    Arora, Bhavna; Mohanty, Binayak P.; McGuire, Jennifer T.

    2013-01-01

    Soil and crop management practices have been found to modify soil structure and alter macropore densities. An ability to accurately determine soil hydraulic parameters and their variation with changes in macropore density is crucial for assessing potential contamination from agricultural chemicals. This study investigates the consequences of using consistent matrix and macropore parameters in simulating preferential flow and bromide transport in soil columns with different macropore densities (no macropore, single macropore, and multiple macropores). As used herein, the term “macropore density” is intended to refer to the number of macropores per unit area. A comparison between continuum-scale models including single-porosity model (SPM), mobile-immobile model (MIM), and dual-permeability model (DPM) that employed these parameters is also conducted. Domain-specific parameters are obtained from inverse modeling of homogeneous (no macropore) and central macropore columns in a deterministic framework and are validated using forward modeling of both low-density (3 macropores) and high-density (19 macropores) multiple-macropore columns. Results indicate that these inversely modeled parameters are successful in describing preferential flow but not tracer transport in both multiple-macropore columns. We believe that lateral exchange between matrix and macropore domains needs better accounting to efficiently simulate preferential transport in the case of dense, closely spaced macropores. Increasing model complexity from SPM to MIM to DPM also improved predictions of preferential flow in the multiple-macropore columns but not in the single-macropore column. This suggests that the use of a more complex model with resolved domain-specific parameters is recommended with an increase in macropore density to generate forecasts with higher accuracy. PMID:24511165

  4. Estimation of density and population size and recommendations for monitoring trends of Bahama parrots on Great Abaco and Great Inagua

    USGS Publications Warehouse

    Rivera-Milan, F. F.; Collazo, J.A.; Stahala, C.; Moore, W.J.; Davis, A.; Herring, G.; Steinkamp, M.; Pagliaro, R.; Thompson, J.L.; Bracey, W.

    2005-01-01

Once abundant and widely distributed, the Bahama parrot (Amazona leucocephala bahamensis) currently inhabits only the Great Abaco and Great Inagua Islands of the Bahamas. In January 2003 and May 2002-2004, we conducted point-transect surveys (a type of distance sampling) to estimate density and population size and make recommendations for monitoring trends. Density ranged from 0.061 (SE = 0.013) to 0.085 (SE = 0.018) parrots/ha and population size ranged from 1,600 (SE = 354) to 2,386 (SE = 508) parrots when extrapolated to the 26,154 ha and 28,162 ha covered by surveys on Abaco in May 2002 and 2003, respectively. Density was 0.183 (SE = 0.049) and 0.153 (SE = 0.042) parrots/ha and population size was 5,344 (SE = 1,431) and 4,450 (SE = 1,435) parrots when extrapolated to the 29,174 ha covered by surveys on Inagua in May 2003 and 2004, respectively. Because parrot distribution was clumped, we would need to survey 213-882 points on Abaco and 258-1,659 points on Inagua to obtain a CV of 10-20% for estimated density. Cluster size and its variability and clumping increased in wintertime, making surveys imprecise and cost-ineffective. Surveys were reasonably precise and cost-effective in springtime, and we recommend conducting them when parrots are pairing and selecting nesting sites. Survey data should be collected yearly as part of an integrated monitoring strategy to estimate density and other key demographic parameters and improve our understanding of the ecological dynamics of these geographically isolated parrot populations at risk of extinction.
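    For readers unfamiliar with point-transect density estimation, the sketch below shows the basic half-normal estimator (effective detection area of 2πσ² per point). It is a simplified stand-in for the full distance-sampling analysis such a survey would use, and the detection distances and number of points are made up.

```python
import numpy as np

def point_transect_density(distances, n_points):
    """Half-normal point-transect estimator: effective detection area per
    point is 2*pi*sigma^2, so D = n_detections / (n_points * 2*pi*sigma^2)."""
    r = np.asarray(distances, dtype=float)
    sigma2 = np.sum(r**2) / (2.0 * r.size)      # MLE of sigma^2 for half-normal detection
    effective_area = 2.0 * np.pi * sigma2       # integral of g(r) * 2*pi*r dr over r >= 0
    return r.size / (n_points * effective_area)

# Usage with hypothetical radial detection distances (metres) from k survey points:
rng = np.random.default_rng(1)
detections = np.abs(rng.normal(0.0, 60.0, size=120))   # illustrative distances, not survey data
k = 200
d_hat = point_transect_density(detections, k)           # birds per m^2
print(f"{d_hat * 1e4:.3f} parrots/ha")
```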

  5. Some Interesting Facts about Correlation Between Gravity Anomalies and Heights with Implications Towards the Correction Density Estimation

    NASA Astrophysics Data System (ADS)

Mikuška, J.; Marušiak, I.; Zahorec, P.; Papčo, J.; Pašteka, R.; Bielik, M.

    2014-12-01

It is well known that free-air anomalies and the gravitational effects of the topographic masses are mutually proportional, at least in general. However, it is rather intriguing that this feature is more remarkable in elevated mountainous areas than in lowlands or flat regions, as we demonstrate with practical examples. Further, since the times of Pierre Bouguer we have known that the gravitational effect of the topographic masses is station-height-dependent. In our presentation we show that the respective contributions to this height dependence, although they are nonzero, are less significant in the cases of both the nearest masses and the more remote ones, while the contribution of the masses within hundreds and thousands of meters from the gravity station is dominant. We also illustrate that, surprisingly, the gravitational effects of the non-near topographic masses can be apparently independent of their respective volumes, while their gravitational effects are still well proportional to the gravity station heights. On the other hand, for interpretational reasons, the Bouguer anomaly should not correlate strongly with the heights of the measuring points or, more specifically, with the gravitational effect of the topographic masses. Standard practice is to estimate a suitable (uniform) reduction or correction density within the study area in order to minimize such an undesired correlation and, vice versa, the minimum correlation is often utilized as a criterion for estimating such a density. Our main objective is to point out, from the aspect of correction density estimation, that the contributions of the topographic masses should be viewed alternatively, depending on the particular distances of the respective portions of those masses from the gravity station. We have tested the majority of the existing methods of such density estimation and developed a new one which takes the facts mentioned above into consideration. This work was supported by the Slovak Research and Development Agency under the contracts APVV-0827-12 and APVV-0194-10.

  6. Estimating the population density of the Asian tapir (Tapirus indicus) in a selectively logged forest in Peninsular Malaysia.

    PubMed

    Rayan, D Mark; Mohamad, Shariff Wan; Dorward, Leejiah; Aziz, Sheema Abdul; Clements, Gopalasamy Reuben; Christopher, Wong Chai Thiam; Traeholt, Carl; Magintan, David

    2012-12-01

The endangered Asian tapir (Tapirus indicus) is threatened by large-scale habitat loss, forest fragmentation and increased hunting pressure. Conservation planning for this species, however, is hampered by a severe paucity of information on its ecology and population status. We present the first Asian tapir population density estimate from a camera trapping study targeting tigers in a selectively logged forest within Peninsular Malaysia using a spatially explicit capture-recapture maximum likelihood based framework. With a trap effort of 2496 nights, 17 individuals were identified corresponding to a density (standard error) estimate of 9.49 (2.55) adult tapirs/100 km². Although our results include several caveats, we believe that our density estimate still serves as an important baseline to facilitate the monitoring of tapir population trends in Peninsular Malaysia. Our study also highlights the potential of extracting vital ecological and population information for other cryptic individually identifiable animals from tiger-centric studies, especially with the use of a spatially explicit capture-recapture maximum likelihood based framework. PMID:23253368

  7. Estimation of the low-density (beta) lipoproteins of serum in health and disease using large molecular weight dextran sulphate

    PubMed Central

    Walton, K. W.; Scott, P. J.

    1964-01-01

Studies have been made of the factors affecting the specificity of the interaction between high molecular weight dextran sulphate and low-density lipoproteins, both in pure solution and in serum. The results have been used in the development of a simple assay method for the serum concentration of low-density lipoproteins in small volumes of serum. The results obtained by this assay procedure have been found to correlate acceptably with parallel estimations of low-density lipoproteins by an ultracentrifugal technique and by paper electrophoresis. The technique has been applied to a survey of serum levels of these proteins in a normal population. The results have been compared with data in the literature. Satisfactory agreement was found between mean levels, matched for age and sex, between the dextran sulphate method and those methods based ultimately on chemical estimation of one or more components of the isolated lipoproteins. A systematic difference was observed when the dextran sulphate method was compared with estimates based on analytical ultracentrifugation or turbidimetry using amylopectin sulphate. Some indication of the range of application of the dextran sulphate method in clinical chemistry is provided. PMID:14227432

  8. Estimation of the radial size and density fluctuation amplitude of edge localized modes using microwave interferometer array

    NASA Astrophysics Data System (ADS)

    Ayub, M. K.; Yun, G. S.; Leem, J.; Kim, M.; Lee, W.; Park, H. K.

    2016-03-01

A novel technique to estimate the range of radial size and density fluctuation amplitude of edge localized modes (ELMs) in the KSTAR tokamak plasma is presented. A microwave imaging reflectometry (MIR) system is reconfigured as a multi-channel microwave interferometer array (MIA) to measure the density fluctuations associated with ELMs, while an electron cyclotron emission imaging (ECEI) system is used as a reference diagnostic to confirm the MIA observation. Two-dimensional full-wave (FWR2D) simulations integrated with an optics simulation are performed to investigate Gaussian beam propagation and reflection through the plasma as well as through the MIA optical components, and to obtain the interferometric phase undulations of individual channels at the detector plane due to the ELM perturbation. The simulation results show that the amplitude of the phase undulation depends linearly on both the radial size and the density perturbation amplitude of the ELM. For a typical discharge with ELMs, it is estimated that the ELM structure observed by the MIA system has a density perturbation amplitude in the range of ~7% to 14% and a radial size in the range of ~1 to 3 cm.

  9. Dynamics of photosynthetic photon flux density (PPFD) and estimates in coastal northern California

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The seasonal trends and diurnal patterns of Photosynthetically Active Radiation (PAR) were investigated in the San Francisco Bay Area of Northern California from March through August in 2007 and 2008. During these periods, the daily values of PAR flux density (PFD), energy loading with PAR (PARE), a...

  10. Aerosol effective density measurement using scanning mobility particle sizer and quartz crystal microbalance with the estimation of involved uncertainty

    NASA Astrophysics Data System (ADS)

    Sarangi, Bighnaraj; Aggarwal, Shankar G.; Sinha, Deepak; Gupta, Prabhat K.

    2016-03-01

    In this work, we have used a scanning mobility particle sizer (SMPS) and a quartz crystal microbalance (QCM) to estimate the effective density of aerosol particles. This approach is tested for aerosolized particles generated from the solution of standard materials of known density, i.e. ammonium sulfate (AS), ammonium nitrate (AN) and sodium chloride (SC), and also applied for ambient measurement in New Delhi. We also discuss uncertainty involved in the measurement. In this method, dried particles are introduced in to a differential mobility analyser (DMA), where size segregation is done based on particle electrical mobility. Downstream of the DMA, the aerosol stream is subdivided into two parts. One is sent to a condensation particle counter (CPC) to measure particle number concentration, whereas the other one is sent to the QCM to measure the particle mass concentration simultaneously. Based on particle volume derived from size distribution data of the SMPS and mass concentration data obtained from the QCM, the mean effective density (ρeff) with uncertainty of inorganic salt particles (for particle count mean diameter (CMD) over a size range 10-478 nm), i.e. AS, SC and AN, is estimated to be 1.76 ± 0.24, 2.08 ± 0.19 and 1.69 ± 0.28 g cm-3, values which are comparable with the material density (ρ) values, 1.77, 2.17 and 1.72 g cm-3, respectively. Using this technique, the percentage contribution of error in the measurement of effective density is calculated to be in the range of 9-17 %. Among the individual uncertainty components, repeatability of particle mass obtained by the QCM, the QCM crystal frequency, CPC counting efficiency, and the equivalence of CPC- and QCM-derived volume are the major contributors to the expanded uncertainty (at k = 2) in comparison to other components, e.g. diffusion correction, charge correction, etc. Effective density for ambient particles at the beginning of the winter period in New Delhi was measured to be 1.28 ± 0.12 g cm-3. It was found that in general, mid-day effective density of ambient aerosols increases with increase in CMD of particle size measurement but particle photochemistry is an important factor to govern this trend. It is further observed that the CMD has good correlation with O3, SO2 and ambient RH, suggesting that possibly sulfate secondary materials have a substantial contribution in particle effective density. This approach can be useful for real-time measurement of effective density of both laboratory-generated and ambient aerosol particles, which is very important for studying the physico-chemical properties of particles.
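    The core of the SMPS/QCM approach is a mass-over-volume ratio: a spherical-equivalent volume concentration is integrated from the size distribution and divided into the gravimetric mass concentration. A minimal sketch of that calculation, with hypothetical bin diameters, number concentrations, and mass loading:

```python
import numpy as np

def effective_density(diameters_nm, number_conc_cm3, mass_conc_ug_m3):
    """Effective density (g/cm^3) = QCM mass concentration / SMPS volume concentration.
    Volume per bin assumes spherical particles: (pi/6) * d^3."""
    d_cm = np.asarray(diameters_nm) * 1e-7                      # nm -> cm
    vol_cm3_per_cm3 = np.sum(number_conc_cm3 * (np.pi / 6.0) * d_cm**3)
    mass_g_per_cm3 = mass_conc_ug_m3 * 1e-6 / 1e6               # ug/m^3 -> g/cm^3
    return mass_g_per_cm3 / vol_cm3_per_cm3

# Hypothetical SMPS size distribution (bin midpoints and number concentrations)
d = np.array([20, 50, 100, 200, 400])          # nm
n = np.array([2e3, 8e3, 6e3, 1.5e3, 2e2])      # particles / cm^3
print(f"rho_eff ~ {effective_density(d, n, mass_conc_ug_m3=35.0):.2f} g/cm^3")
```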

  11. An empirical model to estimate density of sodium hydroxide solution: An activator of geopolymer concretes

    NASA Astrophysics Data System (ADS)

    Rajamane, N. P.; Nataraja, M. C.; Jeyalakshmi, R.; Nithiyanantham, S.

    2016-02-01

Geopolymer concrete (GPC) is a zero-Portland-cement concrete containing an alumino-silicate based inorganic polymer as binder. The polymer is obtained by chemical activation of alumina- and silica-bearing materials, such as blast furnace slag, by highly alkaline solutions such as hydroxides and silicates of alkali metals. Sodium hydroxide solutions (SHS) of different concentrations are commonly used in making GPC mixes. Often, a sodium hydroxide solution of very high concentration is diluted with water to obtain SHS of the desired concentration. While doing so, it was observed that the solute particles of NaOH in SHS tend to occupy lower volumes as the degree of dilution increases. This aspect is discussed in this paper. The observed phenomenon needs to be understood when formulating GPC mixes, since it considerably influences the relationship between the concentration and density of SHS. This paper suggests an empirical formula relating the density of SHS directly to its concentration expressed by w/w.

  12. Estimating the effective density of engineered nanomaterials for in vitro dosimetry.

    PubMed

    DeLoid, Glen; Cohen, Joel M; Darrah, Tom; Derk, Raymond; Rojanasakul, Liying; Pyrgiotakis, Georgios; Wohlleben, Wendel; Demokritou, Philip

    2014-01-01

    The need for accurate in vitro dosimetry remains a major obstacle to the development of cost-effective toxicological screening methods for engineered nanomaterials. An important key to accurate in vitro dosimetry is the characterization of sedimentation and diffusion rates of nanoparticles suspended in culture media, which largely depend upon the effective density and diameter of formed agglomerates in suspension. Here we present a rapid and inexpensive method for accurately measuring the effective density of nano-agglomerates in suspension. This novel method is based on the volume of the pellet obtained by benchtop centrifugation of nanomaterial suspensions in a packed cell volume tube, and is validated against gold-standard analytical ultracentrifugation data. This simple and cost-effective method allows nanotoxicologists to correctly model nanoparticle transport, and thus attain accurate dosimetry in cell culture systems, which will greatly advance the development of reliable and efficient methods for toxicological testing and investigation of nano-bio interactions in vitro. PMID:24675174

  13. Estimating the effective density of engineered nanomaterials for in vitro dosimetry

    NASA Astrophysics Data System (ADS)

    Deloid, Glen; Cohen, Joel M.; Darrah, Tom; Derk, Raymond; Rojanasakul, Liying; Pyrgiotakis, Georgios; Wohlleben, Wendel; Demokritou, Philip

    2014-03-01

    The need for accurate in vitro dosimetry remains a major obstacle to the development of cost-effective toxicological screening methods for engineered nanomaterials. An important key to accurate in vitro dosimetry is the characterization of sedimentation and diffusion rates of nanoparticles suspended in culture media, which largely depend upon the effective density and diameter of formed agglomerates in suspension. Here we present a rapid and inexpensive method for accurately measuring the effective density of nano-agglomerates in suspension. This novel method is based on the volume of the pellet obtained by benchtop centrifugation of nanomaterial suspensions in a packed cell volume tube, and is validated against gold-standard analytical ultracentrifugation data. This simple and cost-effective method allows nanotoxicologists to correctly model nanoparticle transport, and thus attain accurate dosimetry in cell culture systems, which will greatly advance the development of reliable and efficient methods for toxicological testing and investigation of nano-bio interactions in vitro.

  14. Comparison of volumetric breast density estimations from mammography and thorax CT.

    PubMed

    Geeraert, N; Klausz, R; Cockmartin, L; Muller, S; Bosmans, H; Bloch, I

    2014-08-01

    Breast density has become an important issue in current breast cancer screening, both as a recognized risk factor for breast cancer and by decreasing screening efficiency by the masking effect. Different qualitative and quantitative methods have been proposed to evaluate area-based breast density and volumetric breast density (VBD). We propose a validation method comparing the computation of VBD obtained from digital mammographic images (VBDMX) with the computation of VBD from thorax CT images (VBDCT). We computed VBDMX by applying a conversion function to the pixel values in the mammographic images, based on models determined from images of breast equivalent material. VBDCT is computed from the average Hounsfield Unit (HU) over the manually delineated breast volume in the CT images. This average HU is then compared to the HU of adipose and fibroglandular tissues from patient images. The VBDMX method was applied to 663 mammographic patient images taken on two Siemens Inspiration (hospL) and one GE Senographe Essential (hospJ). For the comparison study, we collected images from patients who had a thorax CT and a mammography screening exam within the same year. In total, thorax CT images corresponding to 40 breasts (hospL) and 47 breasts (hospJ) were retrieved. Averaged over the 663 mammographic images the median VBDMX was 14.7% . The density distribution and the inverse correlation between VBDMX and breast thickness were found as expected. The average difference between VBDMX and VBDCT is smaller for hospJ (4%) than for hospL (10%). This study shows the possibility to compare VBDMX with the VBD from thorax CT exams, without additional examinations. In spite of the limitations caused by poorly defined breast limits, the calibration of mammographic images to local VBD provides opportunities for further quantitative evaluations. PMID:25049219
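    The CT-side computation described above amounts to a linear interpolation of the mean HU between pure adipose and pure fibroglandular tissue. A minimal sketch; the reference HU values below are assumptions, not the calibration used in the study.

```python
def vbd_from_mean_hu(mean_hu, hu_adipose=-100.0, hu_fibroglandular=40.0):
    """Volumetric breast density (%) from the mean HU over the delineated breast,
    assuming a two-component adipose/fibroglandular mixture (linear in HU)."""
    vbd = (mean_hu - hu_adipose) / (hu_fibroglandular - hu_adipose)
    return 100.0 * min(max(vbd, 0.0), 1.0)   # clip to the physical range [0, 100] %

# Example: a mean of -80 HU over the delineated breast volume
print(f"VBD_CT ~ {vbd_from_mean_hu(-80.0):.1f} %")
```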

  15. Comparison of volumetric breast density estimations from mammography and thorax CT

    NASA Astrophysics Data System (ADS)

    Geeraert, N.; Klausz, R.; Cockmartin, L.; Muller, S.; Bosmans, H.; Bloch, I.

    2014-08-01

    Breast density has become an important issue in current breast cancer screening, both as a recognized risk factor for breast cancer and by decreasing screening efficiency by the masking effect. Different qualitative and quantitative methods have been proposed to evaluate area-based breast density and volumetric breast density (VBD). We propose a validation method comparing the computation of VBD obtained from digital mammographic images (VBDMX) with the computation of VBD from thorax CT images (VBDCT). We computed VBDMX by applying a conversion function to the pixel values in the mammographic images, based on models determined from images of breast equivalent material. VBDCT is computed from the average Hounsfield Unit (HU) over the manually delineated breast volume in the CT images. This average HU is then compared to the HU of adipose and fibroglandular tissues from patient images. The VBDMX method was applied to 663 mammographic patient images taken on two Siemens Inspiration (hospL) and one GE Senographe Essential (hospJ). For the comparison study, we collected images from patients who had a thorax CT and a mammography screening exam within the same year. In total, thorax CT images corresponding to 40 breasts (hospL) and 47 breasts (hospJ) were retrieved. Averaged over the 663 mammographic images the median VBDMX was 14.7% . The density distribution and the inverse correlation between VBDMX and breast thickness were found as expected. The average difference between VBDMX and VBDCT is smaller for hospJ (4%) than for hospL (10%). This study shows the possibility to compare VBDMX with the VBD from thorax CT exams, without additional examinations. In spite of the limitations caused by poorly defined breast limits, the calibration of mammographic images to local VBD provides opportunities for further quantitative evaluations.

  16. When bulk density methods matter: Implications for estimating soil organic carbon pools in rocky soils

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Resolving uncertainty in the carbon cycle is paramount to refining climate predictions. Soil organic carbon (SOC) is a major component of terrestrial C pools, and accuracy of SOC estimates are only as good as the measurements and assumptions used to obtain them. Dryland soils account for a substanti...

  17. Consequences of Ignoring Guessing when Estimating the Latent Density in Item Response Theory

    ERIC Educational Resources Information Center

    Woods, Carol M.

    2008-01-01

    In Ramsay-curve item response theory (RC-IRT), the latent variable distribution is estimated simultaneously with the item parameters. In extant Monte Carlo evaluations of RC-IRT, the item response function (IRF) used to fit the data is the same one used to generate the data. The present simulation study examines RC-IRT when the IRF is imperfectly…

  18. Numerical estimation of bone density and elastic constants distribution in a human mandible.

    PubMed

    Reina, J M; García-Aznar, J M; Domínguez, J; Doblaré, M

    2007-01-01

    In this paper, we try to predict the distribution of bone density and elastic constants in a human mandible, based on the stress level produced by mastication loads using a mathematical model of bone remodelling. These magnitudes are needed to build finite element models for the simulation of the mandible mechanical behavior. Such a model is intended for use in future studies of the stability of implant-supported dental prostheses. Various models of internal bone remodelling, both phenomenological and more recently mechanobiological, have been developed to determine the relation between bone density and the stress level that bone supports. Among the phenomenological models, there are only a few that are also able to reproduce the level of anisotropy. These latter have been successfully applied to long bones, primarily the femur. One of these models is here applied to the human mandible, whose corpus behaves as a long bone. The results of bone density distribution and level of anisotropy in different parts of the mandible have been compared with various clinical studies, with a reasonable level of agreement. PMID:16687149

  19. New treatments of density fluctuations and recurrence times for re-estimating Zermelo’s paradox

    NASA Astrophysics Data System (ADS)

    Michel, Denis

What is the probability that all the gas in a box accumulates in the same half of this box? Though amusing, this question underlies the fundamental problem of density fluctuations at equilibrium, which has profound implications in many physical fields. The currently accepted solutions are derived from the studies of Brownian motion by Smoluchowski, but they are not appropriate for the directly colliding particles of gases. Two alternative theories are proposed here using self-regulatory Bernoulli distributions, which incorporate roles for crowding and pressure in counteracting density fluctuations. A quantum of space is first introduced to develop a mechanism of matter congestion holding for high densities. In a second mechanism valid under ordinary conditions, the influence of local pressure on the location of every particle is examined using classical laws of ideal gases. This approach reveals that a negative feedback results from the reciprocal influences between individual particles and the population of particles, which strongly reduces the probability of atypical microstates. Finally, a thermodynamic quantum of time is defined to compare the recurrence times of improbable macrostates predicted through these different approaches.
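    For scale, the naive estimate that the paper re-examines treats the N particles as independent Bernoulli trials; a back-of-the-envelope version, with an illustrative particle number:

```latex
% Each of N independent particles sits in the left half with probability 1/2, so
P(\text{all $N$ in the same half}) = 2\left(\tfrac{1}{2}\right)^{N} = 2^{-(N-1)} .
% For an illustrative N \sim 10^{20}, this is of order 10^{-3\times 10^{19}},
% which is why such macroscopic density fluctuations are never observed.
```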

  20. Comparison of air displacement plethysmography to hydrostatic weighing for estimating total body density in children

    PubMed Central

    Claros, Geo; Hull, Holly R; Fields, David A

    2005-01-01

Background The purpose of this study was to examine the accuracy of total body density and percent body fat (% fat) using air displacement plethysmography (ADP) and hydrostatic weighing (HW) in children. Methods Sixty-six male and female subjects (40 males: 12.4 ± 1.3 yrs, 47.4 ± 14.8 kg, 155.4 ± 11.9 cm, 19.3 ± 4.1 kg/m2; 26 females: 12.0 ± 1.9 yrs, 41.4 ± 7.7 kg, 152.1 ± 8.9 cm, 17.7 ± 1.7 kg/m2) were tested using ADP and HW with ADP always preceding HW. Accuracy, precision, and bias were examined in ADP with HW serving as the criterion method. Lohman's equations that are child specific for age and gender were used to convert body density to % fat. Regression analysis determined the accuracy of ADP and potential bias between ADP and HW using Bland-Altman analysis. Results For the entire group (Y = 0.835x + 0.171, R2 = 0.84, SEE = 0.007 g/cm3) and for the males (Y = 0.837x + 0.174, R2 = 0.90, SEE = 0.006 g/cm3) the regression between total body density by HW and by ADP significantly deviated from the line of identity. However, in females, the regression between total body density by HW and ADP did not significantly deviate from the line of identity (Y = 0.750x + 0.258, R2 = 0.55, SEE = 0.008 g/cm3). The regression between % fat by HW and ADP for the group (Y = 0.84x + 3.81, R2 = 0.83, SEE = 3.35 % fat) and for the males (Y = 0.84x + 3.25, R2 = 0.90, SEE = 3.00 % fat) significantly deviated from the line of identity. However, in females the regression between % fat by HW and ADP did not significantly deviate from the line of identity (Y = 0.81x + 5.17, R2 = 0.56, SEE = 3.80 % fat). Bland-Altman analysis revealed no bias between HW total body density and ADP total body density for the entire group (R = -0.22; P = 0.08) or for females (R = 0.02; P = 0.92), however bias existed in males (R = -0.37; P ≤ 0.05). Bland-Altman analysis revealed no bias between HW and ADP % fat for the entire group (R = 0.21; P = 0.10) or in females (R = 0.10; P = 0.57), however bias was indicated for males by a significant correlation (R = 0.36; P ≤ 0.05), with ADP underestimating % fat at lower fat values and overestimating at the higher % fat values. Conclusion A significant difference in total body density and % fat was observed between ADP and HW in children 10–15 years old with a potential gender difference being detected. Upon further investigation it was revealed that the study was inadequately powered, thus we recommend that larger studies that are appropriately powered be conducted to better understand this potential gender difference. PMID:16153297

  1. Aerosol effective density measurement using scanning mobility particle sizer and quartz crystal microbalance with the estimation of involved uncertainty

    NASA Astrophysics Data System (ADS)

    Sarangi, B.; Aggarwal, S. G.; Sinha, D.; Gupta, P. K.

    2015-12-01

In this work, we have used a scanning mobility particle sizer (SMPS) and a quartz crystal microbalance (QCM) to estimate the effective density of aerosol particles. This approach is tested for aerosolized particles generated from the solution of standard materials of known density, i.e. ammonium sulfate (AS), ammonium nitrate (AN) and sodium chloride (SC), and is also applied for ambient measurement in New Delhi. We also discuss the uncertainty involved in the measurement. In this method, dried particles are introduced into a differential mobility analyzer (DMA), where size segregation is done based on particle electrical mobility. Downstream of the DMA, the aerosol stream is subdivided into two parts. One is sent to a condensation particle counter (CPC) to measure the particle number concentration, whereas the other is sent to the QCM to measure the particle mass concentration simultaneously. Based on the particle volume derived from the SMPS size distribution data and the mass concentration data obtained from the QCM, the mean effective density (ρeff) with uncertainty of the inorganic salt particles (for particle count mean diameter (CMD) over a size range of 10 to 478 nm), i.e. AS, SC and AN, is estimated to be 1.76 ± 0.24, 2.08 ± 0.19 and 1.69 ± 0.28 g cm-3, values which are comparable with the material density (ρ) values, 1.77, 2.17 and 1.72 g cm-3, respectively. Among the individual uncertainty components, the repeatability of the particle mass obtained by the QCM, the QCM crystal frequency, the CPC counting efficiency, and the equivalence of the CPC- and QCM-derived volumes are the major contributors to the expanded uncertainty (at k = 2) in comparison to other components, e.g. the diffusion correction, charge correction, etc. The effective density of ambient particles at the beginning of the winter period in New Delhi is measured to be 1.28 ± 0.12 g cm-3. It was found that, in general, the mid-day effective density of ambient aerosols increases with increasing CMD, but particle photochemistry is an important factor governing this trend. It is further observed that the CMD correlates well with O3, SO2 and ambient RH, suggesting that sulfate secondary materials possibly have a substantial contribution to the particle effective density. This approach can be useful for real-time measurement of the effective density of both laboratory-generated and ambient aerosol particles, which is very important for studying the physico-chemical properties of particles.

  2. High density biomass estimation for wetland vegetation using WorldView-2 imagery and random forest regression algorithm

    NASA Astrophysics Data System (ADS)

    Mutanga, Onisimo; Adam, Elhadi; Cho, Moses Azong

    2012-08-01

The saturation problem associated with the use of NDVI for biomass estimation in high canopy density vegetation is a well-known phenomenon. Recent field spectroscopy experiments have shown that narrow-band vegetation indices computed from the red edge and the NIR shoulder can improve the estimation of biomass in such situations. However, the wide-scale unavailability of high-spectral-resolution satellite sensors with red edge bands has prevented the up-scaling of these techniques to spaceborne remote sensing of high density biomass. This paper explored the possibility of estimating biomass in a densely vegetated wetland area using the normalized difference vegetation index (NDVI) computed from WorldView-2 imagery, which contains a red edge band centred at 725 nm. NDVI was calculated from all possible two-band combinations of WorldView-2. Subsequently, we utilized the random forest regression algorithm as a variable selection and regression method for predicting wetland biomass. The performance of random forest regression in predicting biomass was then compared against the widely used stepwise multiple linear regression. Predicting biomass on an independent test data set using the random forest algorithm and 3 NDVIs computed from the red edge and NIR bands yielded a root mean square error of prediction (RMSEP) of 0.441 kg/m2 (12.9% of observed mean biomass) as compared to the stepwise multiple linear regression that produced an RMSEP of 0.5465 kg/m2 (15.9% of observed mean biomass). The results demonstrate the utility of WorldView-2 imagery and random forest regression in estimating and ultimately mapping vegetation biomass at high density - a previously challenging task with broad band satellite sensors.
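    The workflow (two-band NDVIs as candidate predictors, random forest regression, RMSEP on a held-out set) can be sketched as follows. The band values, the biomass response, and the use of scikit-learn's RandomForestRegressor are illustrative assumptions rather than the study's actual data or implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

def ndvi(a, b):
    """Normalized difference index for any two bands."""
    return (a - b) / (a + b + 1e-9)

# Hypothetical per-plot reflectances (red, red edge, NIR1, NIR2) and field biomass (kg/m^2)
rng = np.random.default_rng(0)
red, red_edge, nir1, nir2 = rng.uniform(0.02, 0.6, size=(4, 300))
biomass = 2.0 * ndvi(nir1, red_edge) + 1.5 * ndvi(nir2, red) + rng.normal(0, 0.1, 300)

# Candidate predictors: NDVIs from all two-band combinations of the four bands
bands = np.stack([red, red_edge, nir1, nir2])
X = np.stack([ndvi(bands[i], bands[j]) for i in range(4) for j in range(i + 1, 4)], axis=1)

train, test = slice(0, 200), slice(200, 300)
rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X[train], biomass[train])
rmsep = np.sqrt(mean_squared_error(biomass[test], rf.predict(X[test])))
print(f"RMSEP = {rmsep:.3f} kg/m^2")
```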

  3. The implementation of binned Kernel density estimation to determine open clusters' proper motions: validation of the method

    NASA Astrophysics Data System (ADS)

    Priyatikanto, R.; Arifyanto, M. I.

    2015-01-01

Stellar membership determination of an open cluster is an important step before further analysis. Basically, there are two classes of membership determination methods: parametric and non-parametric. In this study, an alternative non-parametric method based on Binned Kernel Density Estimation that accounts for measurement errors (simply called BKDE-e) is proposed. This method is applied to proper motion data to determine the cluster's membership kinematically and to estimate the average proper motion of the cluster. Monte Carlo simulations show that the average proper motion determined using the proposed method is statistically more accurate than that from an ordinary Kernel Density Estimator (KDE). By including measurement errors in the calculation, the mode location of the resulting density estimate is less sensitive to non-physical or stochastic fluctuations compared to ordinary KDE, which excludes measurement errors. For a typical mean measurement error of 7 mas/yr, BKDE-e suppresses the potential for miscalculation by a factor of two compared to KDE. With a median accuracy of about 93%, the BKDE-e method has accuracy comparable to the parametric method (modified Sanders algorithm). Application to real data from The Fourth USNO CCD Astrograph Catalog (UCAC4), especially to NGC 2682, is also performed. The mode of the member star distribution on the Vector Point Diagram is located at μα cos δ = -9.94 ± 0.85 mas/yr and μδ = -4.92 ± 0.88 mas/yr. Although the BKDE-e performance does not surpass the parametric approach, it offers a new way of doing membership analysis, expandable to astrometric and photometric data or even to binary cluster searches.
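    A rough illustration of the binned, error-aware KDE idea: bin the proper motions, then smooth with a kernel whose width combines the bandwidth and the measurement error. This is a simplification of BKDE-e (which the abstract does not specify in detail), applied to simulated cluster and field stars.

```python
import numpy as np

def binned_kde_with_errors(x, sigma_meas, h, grid):
    """Binned KDE: deposit observations into grid bins, then smooth with a
    Gaussian whose width combines the bandwidth h and the mean measurement
    error -- a simple approximation to error-aware KDE."""
    dx = grid[1] - grid[0]
    counts, _ = np.histogram(x, bins=np.append(grid - dx / 2, grid[-1] + dx / 2))
    width = np.sqrt(h**2 + np.mean(np.asarray(sigma_meas)**2))   # broadened kernel
    kernel_x = np.arange(-4 * width, 4 * width + dx, dx)
    kernel = np.exp(-0.5 * (kernel_x / width)**2)
    kernel /= kernel.sum()
    return np.convolve(counts, kernel, mode="same") / (x.size * dx)

# Usage on simulated proper motions (mas/yr): a cluster plus a broad field population
rng = np.random.default_rng(2)
pm = np.concatenate([rng.normal(-9.9, 1.0, 300), rng.normal(0.0, 15.0, 700)])
grid = np.linspace(-60, 60, 601)
dens = binned_kde_with_errors(pm, sigma_meas=7.0, h=1.5, grid=grid)
print(f"mode at {grid[np.argmax(dens)]:.1f} mas/yr")
```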

  4. Estimating η/s of QCD matter at high baryon densities

    NASA Astrophysics Data System (ADS)

    Karpenko, Iu.; Bleicher, M.; Huovinen, P.; Petersen, H.

    2016-01-01

We report on the application of a cascade + viscous hydro + cascade model for heavy ion collisions in the RHIC Beam Energy Scan range, √sNN = 6.3…200 GeV. By constraining model parameters to reproduce the data, we find that the effective (average) value of the shear viscosity over entropy density ratio η/s decreases from 0.2 to 0.08 when the collision energy grows from √sNN ≈ 7 to 39 GeV.

  5. Estimates of Densities and Filling Factors from a Cooling Time Analysis of Solar Microflares Observed with RHESSI

    NASA Astrophysics Data System (ADS)

Baylor, R. N.; Cassak, P. A.; Christe, S.; Hannah, I. G.; Krucker, S.; Mullan, D. J.; Shay, M. A.; Hudson, H. S.; Lin, R. P.

    2011-07-01

We use more than 4500 microflares from the RHESSI microflare data set to estimate electron densities and volumetric filling factors of microflare loops using a cooling time analysis. We show that if the filling factor is assumed to be unity, the calculated conductive cooling times are much shorter than the observed flare decay times, which in turn are much shorter than the calculated radiative cooling times. This is likely unphysical, but the contradiction can be resolved by assuming that the radiative and conductive cooling times are comparable, which is valid when the flare loop temperature is a maximum and when external heating can be ignored. We find that resultant radiative and conductive cooling times are comparable to observed decay times, which has been used as an assumption in some previous studies. The inferred electron densities have a mean value of 10^11.6 cm^-3 and filling factors have a mean of 10^-3.7. The filling factors are lower and densities are higher than previous estimates for large flares, but are similar to those found for two microflares by Moore et al.
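    Up to order-unity factors, the balance invoked above can be written with the standard Spitzer conductive and radiative cooling times; setting them equal fixes the electron density for a given temperature and loop half-length. The exact prefactors used in the paper may differ from this sketch.

```latex
% Cooling times for a loop of half-length L, temperature T, electron density n_e
% (cgs units; kappa_0 ~ 1e-6 is the Spitzer coefficient, Lambda(T) the radiative loss function):
\tau_{\mathrm{cond}} \sim \frac{3 n_e k_B T L^2}{\kappa_0 T^{7/2}} , \qquad
\tau_{\mathrm{rad}} \sim \frac{3 k_B T}{n_e \Lambda(T)}
% Requiring tau_cond ~ tau_rad, as assumed for the microflares, gives
n_e \sim \sqrt{\frac{\kappa_0\, T^{7/2}}{L^2\, \Lambda(T)}}
```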

  6. Estimating the effective density of engineered nanomaterials for in vitro dosimetry

    PubMed Central

    DeLoid, Glen; Cohen, Joel M.; Darrah, Tom; Derk, Raymond; Wang, Liying; Pyrgiotakis, Georgios; Wohlleben, Wendel; Demokritou, Philip

    2014-01-01

    The need for accurate in vitro dosimetry remains a major obstacle to the development of cost-effective toxicological screening methods for engineered nanomaterials. An important key to accurate in vitro dosimetry is the characterization of sedimentation and diffusion rates of nanoparticles suspended in culture media, which largely depend upon the effective density and diameter of formed agglomerates in suspension. Here we present a rapid and inexpensive method for accurately measuring the effective density of nano-agglomerates in suspension. This novel method is based on the volume of the pellet obtained by bench-top centrifugation of nanomaterial suspensions in a packed cell volume tube, and is validated against gold-standard analytical ultracentrifugation data. This simple and cost-effective method allows nanotoxicologists to correctly model nanoparticle transport, and thus attain accurate dosimetry in cell culture systems, which will greatly advance the development of reliable and efficient methods for toxicological testing and investigation of nano-bio interactions in vitro. PMID:24675174

  7. Winter wheat stand density determination and yield estimates from handheld and airborne scanners. [Montana

    NASA Technical Reports Server (NTRS)

    Aase, J. K.; Millard, J. P.; Siddoway, F. H. (Principal Investigator)

    1982-01-01

    Radiance measurements from handheld (Exotech 100-A) and air-borne (Daedalus DEI 1260) radiometers were related to wheat (Triticum aestivum L.) stand densities (simulated winter wheat winterkill) and to grain yield for a field located 11 km northwest of Sidney, Montana, on a Williams loam soil (fine-loamy, mixed Typic Argiborolls) where a semidwarf hard red spring wheat cultivar was needed to stand. Radiances were measured with the handheld radiometer on clear mornings throughout the growing season. Aircraft overflight measurements were made at the end of tillering and during the early stem extension period, and the mid-heading period. The IR/red ratio and normalized difference vegetation index were used in the analysis. The aircraft measurements corroborated the ground measurements inasmuch as wheat stand densities were detected and could be evaluated at an early enough growth stage to make management decision. The aircraft measurements also corroborated handheld measurements when related to yield prediction. The IR/red ratio, although there was some growth stage dependency, related well to yield when measured from just past tillering until about the watery-ripe stage.

  8. Improving accuracy and efficiency of mutual information for multi-modal retinal image registration using adaptive probability density estimation.

    PubMed

    Legg, P A; Rosin, P L; Marshall, D; Morgan, J E

    2013-01-01

    Mutual information (MI) is a popular similarity measure for performing image registration between different modalities. MI makes a statistical comparison between two images by computing the entropy from the probability distribution of the data. Therefore, to obtain an accurate registration it is important to have an accurate estimation of the true underlying probability distribution. Within the statistics literature, many methods have been proposed for finding the 'optimal' probability density, with the aim of improving the estimation by means of optimal histogram bin size selection. This provokes the common question of how many bins should actually be used when constructing a histogram. There is no definitive answer to this. This question itself has received little attention in the MI literature, and yet this issue is critical to the effectiveness of the algorithm. The purpose of this paper is to highlight this fundamental element of the MI algorithm. We present a comprehensive study that introduces methods from statistics literature and incorporates these for image registration. We demonstrate this work for registration of multi-modal retinal images: colour fundus photographs and scanning laser ophthalmoscope images. The registration of these modalities offers significant enhancement to early glaucoma detection, however traditional registration techniques fail to perform sufficiently well. We find that adaptive probability density estimation heavily impacts on registration accuracy and runtime, improving over traditional binning techniques. PMID:24054309
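    The bin-count sensitivity the paper highlights is easy to see with a direct histogram-based MI estimate; the images below are synthetic and the bin choices arbitrary.

```python
import numpy as np

def mutual_information(img_a, img_b, bins):
    """MI from the joint histogram of two aligned images; the estimate
    depends on the number of bins used to build the histogram."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p_ab = joint / joint.sum()
    p_a = p_ab.sum(axis=1, keepdims=True)
    p_b = p_ab.sum(axis=0, keepdims=True)
    nz = p_ab > 0
    return float(np.sum(p_ab[nz] * np.log(p_ab[nz] / (p_a @ p_b)[nz])))

# Illustration of bin sensitivity on two noisy "modalities" of the same scene
rng = np.random.default_rng(3)
scene = rng.random((128, 128))
modality_a = scene + 0.05 * rng.standard_normal(scene.shape)
modality_b = 1.0 - scene + 0.05 * rng.standard_normal(scene.shape)
for bins in (8, 32, 128):
    print(bins, round(mutual_information(modality_a, modality_b, bins), 3))
```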

  9. Monthly river flow simulation with a joint conditional density estimation network

    NASA Astrophysics Data System (ADS)

    Li, Chao; Singh, Vijay P.; Mishra, Ashok K.

    2013-06-01

    River flow synthesizing and downscaling are required for the analysis of risks associated with water resources management plans and for regional impact studies of climate change. This paper presents a probabilistic model that synthesizes and downscales monthly river flow by estimating the joint distribution of flows of two adjacent months conditional on covariates. The covariates may consist of lagged and aggregated flow variables (synthesizing), exogenous climatic variables (downscaling), or combinations of these two types. The joint distribution is constructed by connecting two marginal distributions in terms of copulas. The relationship between covariates and distribution parameters is approximated by an artificial neural network, which is calibrated using the principle of maximum likelihood. Outputs of the neural network yield parameters of the joint distribution. From the estimated joint distribution, a conditional distribution of river flow of current month given the estimation of the previous month can be derived. Depending on the different types of covariate information, this conditional distribution may serve as the "engine" for synthesizing or downscaling river flow sequences. The idea of the proposed model is illustrated using three case studies. The first case deals with synthetic data and shows that the model is capable of fitting a nonstationary joint distribution. Second, the model is utilized to synthesize monthly river flow at four sample stations on the main stream of the Colorado River. Results reveal that the model reproduces essential evaluation statistics fairly well. Third, a simple illustrative example for river flow downscaling is presented. Analysis indicates that the model can be a viable option to downscale monthly river flow as well.
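    The conditional-simulation step can be illustrated with a fixed Gaussian copula and lognormal marginals; in the actual model the copula and marginal parameters are produced by a neural network conditioned on covariates, which is omitted here, and all numbers are hypothetical.

```python
import numpy as np
from scipy import stats

def simulate_next_month(prev_flow, marg_prev, marg_curr, rho, rng):
    """Draw the current month's flow given last month's via a Gaussian copula:
    map to normal scores, sample the conditional normal, map back."""
    z_prev = stats.norm.ppf(marg_prev.cdf(prev_flow))
    z_curr = rng.normal(rho * z_prev, np.sqrt(1.0 - rho**2))
    return marg_curr.ppf(stats.norm.cdf(z_curr))

# Hypothetical lognormal marginals for two adjacent months and a fixed rho = 0.7
rng = np.random.default_rng(4)
marg_jan = stats.lognorm(s=0.5, scale=100.0)   # January flow marginal
marg_feb = stats.lognorm(s=0.6, scale=120.0)   # February flow marginal
flow_feb = [simulate_next_month(110.0, marg_jan, marg_feb, 0.7, rng) for _ in range(5)]
print(np.round(flow_feb, 1))
```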

  10. Using kernel density estimates to investigate lymphatic filariasis in northeast Brazil

    PubMed Central

    Medeiros, Zulma; Bonfim, Cristine; Brandão, Eduardo; Netto, Maria José Evangelista; Vasconcellos, Lucia; Ribeiro, Liany; Portugal, José Luiz

    2012-01-01

    After more than 10 years of the Global Program to Eliminate Lymphatic Filariasis (GPELF) in Brazil, advances have been seen, but the endemic disease persists as a public health problem. The aim of this study was to describe the spatial distribution of lymphatic filariasis in the municipality of Jaboatão dos Guararapes, Pernambuco, Brazil. An epidemiological survey was conducted in the municipality, and positive filariasis cases identified in this survey were georeferenced in point form, using the GPS. A kernel intensity estimator was applied to identify clusters with greater intensity of cases. We examined 23 673 individuals and 323 individuals with microfilaremia were identified, representing a mean prevalence rate of 1.4%. Around 88% of the districts surveyed presented cases of filarial infection, with prevalences of 0–5.6%. The male population was more affected by the infection, with 63.8% of the cases (P<0.005). Positive cases were found in all age groups examined. The kernel intensity estimator identified the areas of greatest intensity and least intensity of filarial infection cases. The case distribution was heterogeneous across the municipality. The kernel estimator identified spatial clusters of cases, thus indicating locations with greater intensity of transmission. The main advantage of this type of analysis lies in its ability to rapidly and easily show areas with the highest concentration of cases, thereby contributing towards planning, monitoring, and surveillance of filariasis elimination actions. Incorporation of geoprocessing and spatial analysis techniques constitutes an important tool for use within the GPELF. PMID:22943547
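    A kernel intensity surface of the kind described can be sketched by summing a Gaussian kernel over the georeferenced case points; the coordinates and bandwidth below are hypothetical.

```python
import numpy as np

def kernel_intensity(case_xy, grid_x, grid_y, bandwidth):
    """Gaussian kernel intensity surface (expected cases per unit area) on a
    grid, computed from georeferenced case coordinates."""
    gx, gy = np.meshgrid(grid_x, grid_y)
    intensity = np.zeros_like(gx)
    for x, y in case_xy:
        d2 = (gx - x)**2 + (gy - y)**2
        intensity += np.exp(-0.5 * d2 / bandwidth**2) / (2.0 * np.pi * bandwidth**2)
    return intensity

# Hypothetical case coordinates (km) within a municipality-sized window
rng = np.random.default_rng(5)
cases = rng.normal([5.0, 7.0], 1.0, size=(100, 2))      # one simulated cluster of cases
grid = np.linspace(0, 15, 151)
surface = kernel_intensity(cases, grid, grid, bandwidth=0.8)
iy, ix = np.unravel_index(surface.argmax(), surface.shape)
print(f"hotspot near ({grid[ix]:.1f}, {grid[iy]:.1f}) km")
```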

  11. Individual movements and population density estimates for moray eels on a Caribbean coral reef

    NASA Astrophysics Data System (ADS)

    Abrams, R. W.; Schein, M. W.

    1986-12-01

Observations of moray eel (Muraenidae) distribution made on a Caribbean coral reef are discussed in the context of long-term population trends. Observations of eel distribution made using SCUBA during 1978, 1979, 1980, and 1984 are compared and related to the occurrence of a hurricane in 1979. An estimate of the mean standing stock of moray eels is presented. The degree of site attachment is discussed for spotted morays (Gymnothorax moringa) and goldentail morays (Muraena miliaris). The repeated non-aggressive association of moray eels with large aggregations of potential prey fishes is detailed.

  12. Psychophysical estimates of visual pigment densities in red-green dichromats.

    PubMed

    Miller, S S

    1972-05-01

    1. The spectral sensitivity of red-green dichromats was determined using heterochromatic flicker photometric matches (25-30 c/s) on the fovea. These matches are upset after a bright bleach and consequently the spectral sensitivity is altered.2. Preliminary experiments indicate that under the conditions in which these experiments were performed, the blue cone mechanism of deuteranopes and protanopes cannot follow 20 c/s flicker. If dichromats lack one of the normal pigments then the upset of these matches monitors the change in spectral sensitivity of a single mechanism.3. After a bleach which removes all the cone pigments, the spectral sensitivity recovers with the time course of pigment kinetics as measured by densitometry.4. An intense background also changes the relative spectral sensitivity of the dichromats. On real equilibrium backgrounds, the changes in spectral sensitivity follow those predicted by the pigment changes measured by densitometry. The predicted changes are obtained by modifying the Rushton equilibrium equation to take into account the density of pigment.5. The relationship of these changes to the luminance of the background is independent of the colour of the background light.6. In contradistinction the effect is dependent on the colour of the lights which were flickered. These experiments indicate that a narrowing of the spectral sensitivity curves takes place on both sides of the dichromats' lambda(max).7. The change in relative spectral sensitivity as a function of background intensity was also determined by increment threshold measurements. These changes can be expressed in terms of deviations from Weber's law (DeltaI/I = const.) if DeltaI and I represent the number of chromophores destroyed by the test and background.8. The relative spectral sensitivity of the dichromat was changed by decentering the point of pupil entry. This upset was abolished by bleaching. The size of the upset was correlated with the magnitude of the S-C I effect.9. Given the hypothesis of pigment density (self-screening), the results of expts. (3)-(8) are consistent and allow the calculation of a maximum optical density for those pigments which underlie the dichromats' long-wave mechanism. For the deuteranope a D(lambdamax) of 0.5-0.6 is calculated and for the protanope a D(lambdamax) of 0.4-0.5 is obtained. PMID:4537944

  13. Modeling and estimation of production rate for the production phase of non-growth-associated high cell density processes.

    PubMed

    Jamilis, Martín; Garelli, Fabricio; Mozumder, Md Salatul Islam; Castañeda, Teresita; De Battista, Hernán

    2015-10-01

    This paper addresses the estimation of the specific production rate of intracellular products and the modeling of the bioreactor volume dynamics in high cell density fed-batch reactors. In particular, a new model for the bioreactor volume is proposed, suitable to be used in high cell density cultures where large amounts of intracellular products are stored. Based on the proposed volume model, two forms of a high-order sliding mode observer are proposed. Each form corresponds to the cases with residual biomass concentration or volume measurement, respectively. The observers achieve finite time convergence and robustness to process uncertainties as the kinetic model is not required. Stability proofs for the proposed observer are given. The observer algorithm is assessed numerically and experimentally. PMID:26149912

  14. Estimation of effective hydrologic properties of soils from observations of vegetation density

    NASA Technical Reports Server (NTRS)

    Tellers, T. E.; Eagleson, P. S.

    1980-01-01

    A one-dimensional model of the annual water balance is reviewed. Improvements are made in the method of calculating the bare soil component of evaporation, and in the way surface retention is handled. A natural selection hypothesis, which specifies the equilibrium vegetation density for a given, water limited, climate soil system, is verified through comparisons with observed data. Comparison of CDF's of annual basin yield derived using these soil properties with observed CDF's provides verification of the soil-selection procedure. This method of parameterization of the land surface is useful with global circulation models, enabling them to account for both the nonlinearity in the relationship between soil moisture flux and soil moisture concentration, and the variability of soil properties from place to place over the Earth's surface.

  15. Three-dimensional estimates of the coronal electron density at times of extreme solar activity

    NASA Astrophysics Data System (ADS)

    Butala, M. D.; Frazin, R. A.; Kamalabadi, F.

    2005-09-01

    This paper presents quantitative three-dimensional (3-D) reconstructions of the electron density (Ne) in the solar corona between 1.14 and 2.7 solar radii (R⊙) formed from polarized brightness (pB) measurements made by the Mauna Loa Solar Observatory Mark-IV (Mk4) K-coronameter at the time of the extreme solar events of October and November 2003. The 3-D reconstructions are made by a process called solar rotational tomography that exploits the view angles provided by solar rotation during a 2-week period. Although this method is incapable of resolving dynamic evolution on timescales of less than about 2 weeks, a qualitative comparison of the reconstructions to instantaneous Extreme ultraviolet Imaging Telescope images (EIT) shows good agreement between coronal holes, active regions, and "quiet Sun" structures on the disk and their counterparts in the corona at 1.2 R⊙.

  16. Decreased values of cosmic dust number density estimates in the Solar System

    NASA Astrophysics Data System (ADS)

    Willis, M. J.; Burchell, M. J.; Ahrens, T. J.; Krüger, H.; Grün, E.

    2005-08-01

Experiments to investigate the effect of impacts on side-walls of dust detectors such as the present NASA/ESA Galileo/Ulysses instrument are reported. Side walls constitute 27% of the internal area of these instruments, and increase the field of view from 140° to 180°. Impact of cosmic dust particles onto Galileo/Ulysses Al side walls was simulated by firing Fe particles, 0.5-5 μm diameter, 2-50 km s⁻¹, onto an Al plate, simulating the targets of the Galileo and Ulysses dust instruments. Since side-wall impacts affect the rise time of the target ionization signal, the degree to which particle fluxes are overestimated varies with velocity. Side-wall impacts at particle velocities of 2-20 km s⁻¹ yield rise times 10-30% longer than for direct impacts, so that the derived impact velocity is reduced by a factor of ˜2. Impacts on the side wall at 20-50 km s⁻¹ reduced rise times by a factor of ˜10 relative to direct impact data. This would result in serious overestimates of the flux of particles intersecting the dust instrument at velocities of 20-50 km s⁻¹. Taking into account differences in laboratory calibration geometry we obtain the following percentages for previous overestimates of incident particle number density values from the Galileo instrument [Grün et al., 1992. The Galileo dust detector. Space Sci. Rev. 60, 317-340]: 55% for 2 km s⁻¹ impacts, 27% at 10 km s⁻¹ and 400% at 70 km s⁻¹. We predict that individual particle masses are overestimated by ˜10-90% when side-wall impacts occur at 2-20 km s⁻¹, and underestimated by ˜10-10 at 20-50 km s⁻¹. We predict that wall impacts at 20-50 km s⁻¹ can be identified in Galileo instrument data on account of their unusually short target rise times. The side-wall calibration is used to obtain new revised values [Krüger et al., 2000. A dust cloud of Ganymede maintained by hypervelocity impacts of interplanetary micrometeoroids. Planet. Space Sci. 48, 1457-1471; 2003. Impact-generated dust clouds surrounding the Galilean moons. Icarus 164, 170-187] of the Galilean satellite dust number densities of 9.4×10, 9.9×10, 4.1×10, and 6.8×10 m at 1 satellite radius from Io, Europa, Ganymede, and Callisto, respectively. Additionally, interplanetary particle number densities detected by the Galileo mission are found to be 1.6×10, 7.9×10, 3.2×10, 3.2×10, and 7.9×10 m at heliocentric distances of 0.7, 1, 2, 3, and 5 AU, respectively. Work by Burchell et al. [1999b. Acceleration of conducting polymer-coated latex particles as projectiles in hypervelocity impact experiments. J. Phys. D: Appl. Phys. 32, 1719-1728] suggests that low-density "fluffy" particles encountered by Ulysses will not significantly affect our results; further calibration would be useful to confirm this.