Sample records for wavelet-based density estimation

  1. Wavelet-based texture measures for semicontinuous stand density estimation from very high resolution optical imagery

    NASA Astrophysics Data System (ADS)

    van Coillie, Frieke M. B.; Verbeke, Lieven P. C.; de Wulf, Robert R.

    2011-01-01

    Stand density, expressed as the number of trees per unit area, is an important forest management parameter. It is used by foresters to evaluate regeneration, to assess the effect of forest management measures, or as an indicator variable for other stand parameters like age, basal area, and volume. In this work, a new density estimation procedure is proposed based on wavelet analysis of very high resolution optical imagery. Wavelet coefficients are related to reference densities on a per-segment basis, using an artificial neural network. The method was evaluated on artificial imagery and two very high resolution datasets covering forests in Heverlee, Belgium and Les Beaux de Provence, France. Whenever possible, the method was compared with the well-known local maximum filter. Results show good correspondence between predicted and true stand densities. The average absolute error and the correlation between predicted and true density were 149 trees/ha and 0.91 for the artificial dataset, 100 trees/ha and 0.85 for the Heverlee site, and 49 trees/ha and 0.78 for the Les Beaux de Provence site. The local maximum filter consistently yielded lower accuracies, as it is essentially a tree localization tool, rather than a density estimator.
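
    A minimal sketch, not the authors' code, of the procedure described above: characterize each image segment by wavelet-energy texture features and regress reference tree density on them with a small neural network. The names `patches` (square image crops standing in for segments) and `density` (reference trees/ha) are hypothetical placeholders.

        import numpy as np
        import pywt
        from sklearn.neural_network import MLPRegressor

        def wavelet_energy_features(patch, wavelet="db2", level=3):
            """Mean energy of each detail subband of a 2-D wavelet decomposition."""
            coeffs = pywt.wavedec2(patch, wavelet, level=level)
            feats = []
            for details in coeffs[1:]:      # skip the approximation subband
                for d in details:           # (horizontal, vertical, diagonal)
                    feats.append(np.mean(np.asarray(d) ** 2))
            return np.array(feats)

        # X = np.array([wavelet_energy_features(p) for p in patches])
        # ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000)
        # ann.fit(X, density)               # density: reference trees/ha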

  2. An efficient wavelet-based motion estimation algorithm

    NASA Astrophysics Data System (ADS)

    Bae, Jin-Woo; Lee, Seung-Hyun; Yoo, Ji-Sang

    2004-11-01

    In this paper, we propose a wavelet-based fast motion estimation algorithm for video sequence encoding with a low bit-rate. By using one of the properties of the wavelet transform, multi-resolution analysis (MRA), and the spatial interpolation of an image, we can simultaneously reduce the prediction error and the computational complexity inherent in video sequence encoding. In addition, by defining a significant block (SB) based on the differential information of the wavelet coefficients between successive frames, the proposed algorithm enables us to make up for the increase in the number of motion vectors when the MRME algorithm is used. As a result, we are not only able to improve the peak signal-to-noise ratio (PSNR), but also reduce the computational complexity by up to 67%.

  3. Value-at-risk estimation with wavelet-based extreme value theory: Evidence from emerging markets

    NASA Astrophysics Data System (ADS)

    Cifter, Atilla

    2011-06-01

    This paper introduces wavelet-based extreme value theory (EVT) for univariate value-at-risk estimation. Wavelets and EVT are combined for volatility forecasting to estimate a hybrid model. In the first stage, wavelets are used as a threshold in the generalized Pareto distribution, and in the second stage, EVT is applied with a wavelet-based threshold. This new model is applied to two major emerging stock markets: the Istanbul Stock Exchange (ISE) and the Budapest Stock Exchange (BUX). The relative performance of wavelet-based EVT is benchmarked against the Riskmetrics-EWMA, ARMA-GARCH, generalized Pareto distribution, and conditional generalized Pareto distribution models. The empirical results show that the wavelet-based extreme value theory increases the predictive performance of financial forecasting according to the number of violations and tail-loss tests. The superior forecasting performance of the wavelet-based EVT model is also consistent with Basel II requirements, and this new model can be used by financial institutions as well.
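
    The peaks-over-threshold step behind this hybrid model can be sketched in a few lines; hedged: the paper's wavelet-based threshold selection is replaced here by a plain 95th-percentile threshold, and `losses` is a synthetic series, not market data.

        import numpy as np
        from scipy.stats import genpareto

        losses = np.random.standard_t(df=4, size=2000)   # synthetic heavy-tailed losses
        u = np.quantile(losses, 0.95)                    # threshold (paper: wavelet-based)
        excesses = losses[losses > u] - u
        xi, _, beta = genpareto.fit(excesses, floc=0)    # GPD shape and scale of the tail

        # Value-at-risk at level q from the standard POT formula
        # (assumes xi != 0, which holds for heavy-tailed fits):
        q = 0.99
        n, n_u = len(losses), len(excesses)
        var_q = u + (beta / xi) * ((n / n_u * (1 - q)) ** (-xi) - 1)
        print(f"VaR at {q:.0%}: {var_q:.3f}")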

  4. Wavelet-based image estimation: an empirical Bayes approach using Jeffreys' noninformative prior

    Microsoft Academic Search

    Mário A. T. Figueiredo; Robert D. Nowak

    2001-01-01

    The sparseness and decorrelation properties of the discrete wavelet transform have been exploited to develop powerful denoising methods. However, most of these methods have free parameters which have to be adjusted or estimated. In this paper, we propose a wavelet-based denoising technique without any free parameters; it is, in this sense, a ...

  5. Estimation of Modal Parameters Using a Wavelet-Based Approach

    NASA Technical Reports Server (NTRS)

    Lind, Rick; Brenner, Marty; Haley, Sidney M.

    1997-01-01

    Modal stability parameters are extracted directly from aeroservoelastic flight test data by decomposition of accelerometer response signals into time-frequency atoms. Logarithmic sweeps and sinusoidal pulses are used to generate DAST closed loop excitation data. Novel wavelets constructed to extract modal damping and frequency explicitly from the data are introduced. The so-called Haley and Laplace wavelets are used to track time-varying modal damping and frequency in a matching pursuit algorithm. Estimation of the trend to aeroservoelastic instability is demonstrated successfully from analysis of the DAST data.
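
    A hedged sketch of one matching-pursuit step with a Laplace-wavelet dictionary, in the spirit of the record above (the DAST processing itself is not reproduced here): correlate the response with decaying complex exponentials over a grid of frequency and damping, and read the modal estimates from the best-matching atom. The synthetic response and grid choices are assumptions.

        import numpy as np

        fs = 200.0
        t = np.arange(0, 4, 1 / fs)
        # Synthetic accelerometer response: 5 Hz mode with 5% damping.
        resp = np.exp(-0.05 * 2 * np.pi * 5.0 * t) * np.sin(2 * np.pi * 5.0 * t)

        def laplace_atom(f, zeta, t):
            """Unit-norm decaying complex exponential (Laplace wavelet)."""
            w = 2 * np.pi * f
            atom = np.exp((-zeta * w + 1j * w * np.sqrt(1 - zeta ** 2)) * t)
            return atom / np.linalg.norm(atom)

        freqs = np.linspace(3, 8, 51)
        zetas = np.linspace(0.01, 0.10, 19)
        score = np.array([[abs(np.vdot(laplace_atom(f, z, t), resp))
                           for z in zetas] for f in freqs])
        i, j = np.unravel_index(score.argmax(), score.shape)
        print(f"estimated frequency {freqs[i]:.2f} Hz, damping {zetas[j]:.3f}")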

  6. Estimation of interband and intraband statistical dependencies in wavelet-based decomposition of meshes

    NASA Astrophysics Data System (ADS)

    Satti, Shahid M.; Denis, Leon; Munteanu, Adrian; Cornelis, Jan; Schelkens, Peter

    2009-02-01

    This paper analyzes the statistical dependencies between wavelet coefficients in wavelet-based decompositions of 3D meshes. These dependencies are estimated using the interband, intraband and composite mutual information. For images, the literature shows that the composite and the intraband mutual information are approximately equal, and they are both significantly larger than the interband mutual information. This indicates that intraband coding designs should be favored over the interband zerotree-based coding approaches, in order to better capture the residual dependencies between wavelet coefficients. This motivates the design of intraband wavelet-based image coding schemes, such as quadtree-limited (QT-L) coding, or the state-of-the-art JPEG-2000 scalable image coding standard. In this paper, we empirically investigate whether these findings hold in the case of meshes as well. The mutual information estimation results show that, although the intraband mutual information is significantly larger than the interband mutual information, the composite case cannot be discarded, as the composite mutual information is also significantly larger than the intraband mutual information. One concludes that intraband and composite codec designs should be favored over the traditional interband zerotree-based coding approaches commonly followed in scalable coding of meshes.

  7. Wavelet-based linear-response time-dependent density-functional theory

    NASA Astrophysics Data System (ADS)

    Natarajan, Bhaarathi; Genovese, Luigi; Casida, Mark E.; Deutsch, Thierry; Burchak, Olga N.; Philouze, Christian; Balakirev, Maxim Y.

    2012-06-01

    Linear-response time-dependent (TD) density-functional theory (DFT) has been implemented in the pseudopotential wavelet-based electronic structure program BIGDFT and results are compared against those obtained with the all-electron Gaussian-type orbital program DEMON2K for the calculation of electronic absorption spectra of N2 using the TD local density approximation (LDA). The two programs give comparable excitation energies and absorption spectra once suitably extensive basis sets are used. Convergence of LDA density orbitals and orbital energies to the basis-set limit is significantly faster for BIGDFT than for DEMON2K. However, the number of virtual orbitals used in TD-DFT calculations is a parameter in BIGDFT, while all virtual orbitals are included in TD-DFT calculations in DEMON2K. As a reality check, we report the X-ray crystal structure and the measured and calculated absorption spectrum (excitation energies and oscillator strengths) of the small organic molecule N-cyclohexyl-2-(4-methoxyphenyl)imidazo[1,2-a]pyridin-3-amine.

  8. Wavelet-based Estimation for Heteroskedasticity and Autocorrelation Consistent Variance-Covariance Matrices

    Microsoft Academic Search

    Yongmiao Hong; Jin Lee

    2000-01-01

    As is well-known, a heteroskedasticity and autocorrelation consistent covariance matrix is proportional to a spectral density matrix at frequency zero and can be consistently estimated by such popular kernel methods as those of Andrews-Newey-West. In practice, it is difficult to estimate the spectral density matrix if it has a peak at frequency zero, which can arise when there is strong

  9. Fetal QRS detection and heart rate estimation: a wavelet-based approach.

    PubMed

    Almeida, Rute; Gonçalves, Hernâni; Bernardes, João; Rocha, Ana Paula

    2014-08-01

    Fetal heart rate monitoring is used for pregnancy surveillance in obstetric units all over the world but, in spite of recent advances in analysis methods, there are still inherent technical limitations that bound its contribution to the improvement of perinatal indicators. In this work, a previously published wavelet transform based QRS detector, validated over standard electrocardiogram (ECG) databases, is adapted to fetal QRS detection over abdominal fetal ECG. Maternal ECG waves were first located using the original detector and afterwards a version with parameters adapted for fetal physiology was applied to detect fetal QRS, excluding signal singularities associated with maternal heartbeats. Single lead (SL) based marks were combined in a single annotator with post-processing rules (SLR), from which fetal RR and fetal heart rate (FHR) measures can be computed. Data from PhysioNet with reference fetal QRS locations was considered for validation, with SLR outperforming both SL and ICA-based detections. The error in estimated FHR using SLR was lower than 20 bpm for more than 80% of the processed files. The median error in 1 min based FHR estimation was 0.13 bpm, with a correlation between reference and estimated FHR of 0.48, which increased to 0.73 when considering only records for which estimated FHR > 110 bpm. This allows us to conclude that the proposed methodology is able to provide a clinically useful estimation of the FHR. PMID:25070210

  10. A new wavelet based algorithm for estimating respiratory motion rate using UWB radar

    Microsoft Academic Search

    Mehran Baboli; Seyed Ali Ghorashi; Namdar Saniei; Alireza Ahmadian

    2009-01-01

    UWB signals have become attractive for their particular advantage of having narrow pulse width, which makes them suitable for remote sensing of vital signals. In this paper, a novel approach to estimating periodic motion rates using ultra wide band (UWB) signals is proposed. The proposed algorithm, which is based on the wavelet transform, is used as a non-contact tool for measurement

  11. Estimation of shock induced vorticity on irregular gaseous interfaces: a wavelet-based approach

    NASA Astrophysics Data System (ADS)

    Ray, J.; Jameson, L.

    2005-11-01

    We study the interaction of a shock with a density-stratified gaseous interface (Richtmyer-Meshkov instability) with localized jagged and irregular perturbations, with the aim of developing an analytical model of the vorticity deposition on the interface immediately after the passage of the shock. The jagged perturbations, meant to simulate machining errors on the surface of a laser fusion target, are characterized using Haar wavelets. Numerical solutions of the Euler equations show that the vortex sheet deposited on the jagged interface rolls into multiple mushroom-shaped dipolar structures which begin to merge before the interface evolves into a bubble-spike structure. The peaks in the distribution of x-integrated vorticity (vorticity integrated in the direction of the shock motion) decay in time as their bases widen, corresponding to the growth and merger of the mushrooms. However, these peaks were not seen to move significantly along the interface at early times, i.e. t < 10τ, where τ is the interface traversal time of the shock. We tested our analytical model against inviscid simulations for two test cases: a Mach 1.5 shock interacting with an interface with a density ratio of 3, and a Mach 10 shock interacting with a density ratio of 10. We find that this model captures the early time (t/τ ~ 1) vorticity deposition (as characterized by the first and second moments of vorticity distributions) to within 5% of the numerical results.

  12. Wavelet-based estimation of the hemodynamic responses in diffuse optical imaging.

    PubMed

    Lina, J M; Matteau-Pelletier, C; Dehaes, M; Desjardins, M; Lesage, F

    2010-08-01

    Diffuse optical imaging uses light to provide a surrogate measure of neuronal activation through the hemodynamic responses. The relatively low absorption of near-infrared light enables measurements of hemoglobin changes at depths reaching the first centimeter of the cortex. The rapid rate of acquisition and the access to both oxy- and deoxy-hemoglobin lead to new challenges when trying to uncouple physiology from the signal of interest. In particular, recent work provided evidence of the presence of a 1/f noise structure in optical signals and showed that a general linear model based on wavelets can be used to decorrelate the structured noise and provide a superior estimator of response amplitude when compared with conventional techniques. In this work, the wavelet techniques are extended to recover the full temporal shape of the hemodynamic responses. A comparison with other models is provided, as well as a case study on finger-tapping data. PMID:20494609

  13. Wavelet-based Evapotranspiration Forecasts

    NASA Astrophysics Data System (ADS)

    Bachour, R.; Maslova, I.; Ticlavilca, A. M.; McKee, M.; Walker, W.

    2012-12-01

    Providing a reliable short-term forecast of evapotranspiration (ET) could be a valuable element for improving the efficiency of irrigation water delivery systems. In the last decade, wavelet transform has become a useful technique for analyzing the frequency domain of hydrological time series. This study shows how wavelet transform can be used to access statistical properties of evapotranspiration. The objective of the research reported here is to use wavelet-based techniques to forecast ET up to 16 days ahead, which corresponds to the LANDSAT 7 overpass cycle. The properties of the ET time series, both physical and statistical, are examined in the time and frequency domains. We use the information about the energy decomposition in the wavelet domain to extract meaningful components that are used as inputs for ET forecasting models. Seasonal autoregressive integrated moving average (SARIMA) and multivariate relevance vector machine (MVRVM) models are coupled with the wavelet-based multiresolution analysis (MRA) results and used to generate short-term ET forecasts. Accuracy of the models is estimated and model robustness is evaluated using the bootstrap approach.
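
    The multiresolution step can be sketched as below (a minimal stand-in, not the study's code): split the ET series into additive wavelet components, which would then feed the SARIMA/MVRVM forecasting models. The series `et` is synthetic and the `db4` basis with 4 levels is an assumed choice.

        import numpy as np
        import pywt

        def mra_components(x, wavelet="db4", level=4):
            """Additive multiresolution components: they sum back to x."""
            coeffs = pywt.wavedec(x, wavelet, level=level)
            comps = []
            for i in range(len(coeffs)):
                kept = [c if j == i else np.zeros_like(c)
                        for j, c in enumerate(coeffs)]
                comps.append(pywt.waverec(kept, wavelet)[:len(x)])
            return comps   # [smooth trend, coarse detail, ..., finest detail]

        et = np.sin(np.linspace(0, 20, 512)) + 0.3 * np.random.randn(512)
        components = mra_components(et)
        print(np.allclose(sum(components), et))   # True: components add up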

  14. A wavelet-based spectral procedure for steady-state simulation analysis

    E-print Network

    We develop WASSP, a wavelet-based spectral method for steady-state simulation analysis (available online 27 June 2005). ... of the thresholded wavelet coefficients, WASSP computes estimators of the batch means log-spectrum and the steady-state

  15. Airborne Crowd Density Estimation

    NASA Astrophysics Data System (ADS)

    Meynberg, O.; Kuschk, G.

    2013-10-01

    This paper proposes a new method for estimating human crowd densities from aerial imagery. Applications benefiting from an accurate crowd monitoring system are mainly found in the security sector. Normally, crowd density estimation is done with in-situ camera systems mounted in elevated locations, although this is not appropriate in the case of very large crowds with thousands of people. Using airborne camera systems in these scenarios is a new research topic. Our method uses a preliminary filtering of the whole image space by suitable and fast interest point detection, resulting in a number of image regions possibly containing human crowds. Validation of these candidates is done by transforming the corresponding image patches into a low-dimensional and discriminative feature space and classifying the results using a support vector machine (SVM). The feature space is spanned by texture features computed by applying a Gabor filter bank with varying scale and orientation to the image patches. For evaluation, we use 5 different image datasets acquired by the 3K+ aerial camera system of the German Aerospace Center during real mass events like concerts or football games. To evaluate the robustness and generality of our method, these datasets are taken from different flight heights between 800 m and 1500 m above ground (keeping a fixed focal length) and varying daylight and shadow conditions. The results of our crowd density estimation are evaluated against a reference data set obtained by manually labeling tens of thousands of individual persons in the corresponding datasets, and show that our method is able to estimate human crowd densities in challenging, realistic scenarios.

  16. Wavelet-based semblance filtering

    NASA Astrophysics Data System (ADS)

    Cooper, G. R. J.

    2009-10-01

    Fourier transform-based semblance analysis compares two time series on the basis of their phase as a function of frequency. This approach can be extended using wavelets to allow the phase comparison of two datasets to be performed as a function of both time and wavelength. This paper further extends the previous work in two directions; firstly it demonstrates how to display the correlation between multiple (not just two) datasets, and secondly it introduces wavelet-based semblance filtering which allows a pair of datasets to be processed to extract components with any degree of correlation. Matlab source code is available from the IAMG server at www.iamg.org.
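
    The author's Matlab source is on the IAMG server; the sketch below is an assumption-laden Python analogue, not the published code. It shows the core idea for two signals: take a complex continuous wavelet transform and use the cosine of the local phase difference as semblance over time and scale.

        import numpy as np
        import pywt

        t = np.linspace(0, 1, 512)
        a = np.sin(2 * np.pi * 12 * t)
        b = np.sin(2 * np.pi * 12 * t + 0.5)         # phase-shifted copy

        scales = np.arange(1, 64)
        ca, _ = pywt.cwt(a, scales, "cmor1.5-1.0")   # complex Morlet CWT
        cb, _ = pywt.cwt(b, scales, "cmor1.5-1.0")

        # +1 where the signals are in phase, -1 where they are anti-phase.
        semblance = np.cos(np.angle(ca) - np.angle(cb))
        print(semblance.shape)                       # (scale, time)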

  17. Wavelet-based polarimetry analysis

    NASA Astrophysics Data System (ADS)

    Ezekiel, Soundararajan; Harrity, Kyle; Farag, Waleed; Alford, Mark; Ferris, David; Blasch, Erik

    2014-06-01

    Wavelet transformation has become a cutting edge and promising approach in the field of image and signal processing. A wavelet is a waveform of effectively limited duration that has an average value of zero. Wavelet analysis is done by breaking up the signal into shifted and scaled versions of the original signal. The key advantage of a wavelet is that it is capable of revealing smaller changes, trends, and breakdown points that are not revealed by other techniques such as Fourier analysis. The phenomenon of polarization has been studied for quite some time and is a very useful tool for target detection and tracking. Long Wave Infrared (LWIR) polarization is beneficial for detecting camouflaged objects and is a useful approach when identifying and distinguishing manmade objects from natural clutter. In addition, the Stokes Polarization Parameters, which are calculated from 0°, 45°, 90°, 135°, right circular, and left circular intensity measurements, provide spatial orientations of target features and suppress natural features. In this paper, we propose a wavelet-based polarimetry analysis (WPA) method to analyze Long Wave Infrared Polarimetry Imagery to discriminate targets such as dismounts and vehicles from background clutter. These parameters can be used for image thresholding and segmentation. Experimental results show the wavelet-based polarimetry analysis is efficient and can be used in a wide range of applications such as change detection, shape extraction, target recognition, and feature-aided tracking.
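
    The Stokes computation named in the abstract is a simple per-pixel combination of the six intensity measurements; a worked example follows, with synthetic arrays standing in for LWIR frames.

        import numpy as np

        shape = (128, 128)
        I0, I45, I90, I135 = (np.random.rand(*shape) for _ in range(4))
        IRC, ILC = np.random.rand(*shape), np.random.rand(*shape)

        S0 = I0 + I90      # total intensity
        S1 = I0 - I90      # 0° vs 90° linear preference
        S2 = I45 - I135    # +45° vs -45° linear preference
        S3 = IRC - ILC     # right vs left circular preference

        # Degree of linear polarization, a common map for separating
        # manmade targets from natural clutter before thresholding:
        DoLP = np.sqrt(S1 ** 2 + S2 ** 2) / np.maximum(S0, 1e-12)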

  18. Shape constrained kernel density estimation

    Microsoft Academic Search

    Melanie Birke

    2009-01-01

    In this paper, a method for estimating monotone, convex and log-concave densities is proposed. The estimation procedure consists of an unconstrained kernel estimator which is modified in a second step with respect to the desired shape constraint by using monotone rearrangements. It is shown that the resulting estimate is a density itself and shares the asymptotic properties of the unconstrained

  19. FAST GEM WAVELET-BASED IMAGE DECONVOLUTION ALGORITHM

    Microsoft Academic Search

    M. B. Dias

    2003-01-01

    The paper proposes a new wavelet-based Bayesian approach to image deconvolution, under the space-invariant blur and additive white Gaussian noise assumptions. Image deconvolution exploits the well known sparsity of the wavelet coefficients, described by heavy-tailed priors. The present approach admits any prior given by a linear (finite or infinite) combination of Gaussian densities. To compute the maximum a

  20. Density Estimation with Mercer Kernels

    NASA Technical Reports Server (NTRS)

    Macready, William G.

    2003-01-01

    We present a new method for density estimation based on Mercer kernels. The density estimate can be understood as the density induced on a data manifold by a mixture of Gaussians fit in a feature space. As is usual, the feature space and data manifold are defined with any suitable positive-definite kernel function. We modify the standard EM algorithm for mixtures of Gaussians to infer the parameters of the density. One benefit of the approach is its conceptual simplicity and uniform applicability over many different types of data. Preliminary results are presented for a number of simple problems.
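
    For orientation, the data-space baseline that this record modifies is the standard EM-fitted Gaussian mixture; a minimal sketch using scikit-learn's EM (not the paper's feature-space variant) on synthetic data:

        import numpy as np
        from sklearn.mixture import GaussianMixture

        x = np.concatenate([np.random.randn(300),
                            4 + 0.5 * np.random.randn(200)])
        gmm = GaussianMixture(n_components=2).fit(x.reshape(-1, 1))

        grid = np.linspace(-4, 7, 400).reshape(-1, 1)
        density = np.exp(gmm.score_samples(grid))   # score_samples = log-density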

  1. Wavelet-based modal analysis for time-variant systems

    NASA Astrophysics Data System (ADS)

    Dziedziech, K.; Staszewski, W. J.; Uhl, T.

    2015-01-01

    The paper presents algorithms for modal identification of time-variant systems. These algorithms utilise the wavelet-based Frequency Response Function and lead to estimation of all three modal parameters, i.e. natural frequencies, damping and mode shapes. The method presented utilises random impact excitation and signal post-processing based on the crazy climbers algorithm. The method is validated using simulated and experimental data from time-variant vibrating systems. The results show that the method correctly captures the dynamics of the analysed systems, leading to correct modal parameter identification.

  2. Wavelet Based Estimation for Univariate Stable Laws

    E-print Network

    Antoniadis, Anestis

    ...-LMC, University Joseph Fourier, BP 53, 38041 Grenoble Cedex 9, France; Andrey Feuerverger, Department of Statistics; Avenue de l'Europe, 38330 Monbonnot Saint Martin, France. Abstract: Stable distributions are characterized ... features of the processes being modeled. Hence, in the field of statistics, for example, wavelets have been

  3. Wavelet Based Estimation for Univariate Stable Laws

    E-print Network

    Gonçalves, Paulo

    ...-LMC, University Joseph Fourier, BP 53, 38041 Grenoble Cedex 9, France; Andrey Feuerverger, Department of Statistics; Avenue de l'Europe, 38330 Monbonnot Saint Martin, France. Abstract: Stable distributions are characterized ... Hence, in statistics, for example, wavelets have been used primarily to deal with problems

  4. Density estimation for color images

    NASA Astrophysics Data System (ADS)

    Stokman, Harro M.; Gevers, Theo

    2001-01-01

    Color histograms computed from the normalized and hue color spaces are negatively affected by sensor noise due to the instability of these color space transforms at many RGB values. To suppress the effect of sensor noise, in this paper density estimations are computed using variable kernels. To that end, models are proposed for the propagation of sensor noise through the normalized and hue colors. As a result, not only the hue and normalized color values are known, but also the associated uncertainty. This twofold information is used to derive the parameterization of the variable kernel used for the density estimation. It is empirically verified that the proposed method compares favorably to the traditional histogram.

  5. Multivariate Density Estimation: An SVM Approach

    E-print Network

    Mukherjee, Sayan

    1999-04-01

    We formulate density estimation as an inverse operator problem. We then use convergence results of empirical distribution functions to true distribution functions to develop an algorithm for multivariate density estimation. ...

  6. Wavelet-based ultrasound image denoising: performance analysis and comparison.

    PubMed

    Rizi, F Yousefi; Noubari, H Ahmadi; Setarehdan, S K

    2011-01-01

    Ultrasound images are generally affected by multiplicative speckle noise, which is mainly due to the coherent nature of the scattering phenomenon. Speckle noise filtering is thus a critical pre-processing step in medical ultrasound imaging, provided that the diagnostic features of interest are not lost. A comparative study of the performance of alternative wavelet-based ultrasound image denoising methods is presented in this article. In particular, the contourlet and curvelet techniques with dual-tree complex, real, and double-density wavelet transform denoising methods were applied to real ultrasound images and the results were quantitatively compared. The results show that the curvelet-based method performs best among the compared methods and can effectively reduce most of the speckle noise content of a given image. PMID:22255196

  7. Wavelet-based acoustic recognition of aircraft

    SciTech Connect

    Dress, W.B.; Kercel, S.W.

    1994-09-01

    We describe a wavelet-based technique for identifying aircraft from acoustic emissions during take-off and landing. Tests show that the sensor can be a single, inexpensive hearing-aid microphone placed close to the ground. The paper describes data collection, analysis by various techniques, methods of event classification, and extraction of certain physical parameters from wavelet subspace projections. The primary goal of this paper is to show that wavelet analysis can be used as a divide-and-conquer first step in signal processing, providing both simplification and noise filtering. The idea is to project the original signal onto the orthogonal wavelet subspaces, both details and approximations. Subsequent analysis, such as system identification, nonlinear systems analysis, and feature extraction, is then carried out on the various signal subspaces.

  8. Discrimination of walking patterns using wavelet-based fractal analysis.

    PubMed

    Sekine, Masaki; Tamura, Toshiyo; Akay, Metin; Fujimoto, Toshiro; Togawa, Tatsuo; Fukui, Yasuhiro

    2002-09-01

    In this paper, we attempted to classify the acceleration signals for walking along a corridor and on stairs by using the wavelet-based fractal analysis method. In addition, the wavelet-based fractal analysis method was used to evaluate the gait of elderly subjects and patients with Parkinson's disease. The triaxial acceleration signals were measured close to the center of gravity of the body while the subject walked along a corridor and up and down stairs continuously. Signal measurements were recorded from 10 healthy young subjects and 11 elderly subjects. For comparison, two patients with Parkinson's disease participated in the level walking. The acceleration signal in each direction was decomposed to seven detailed signals at different wavelet scales by using the discrete wavelet transform. The variances of detailed signals at scales 7 to 1 were calculated. The fractal dimension of the acceleration signal was then estimated from the slope of the variance progression. The fractal dimensions were significantly different among the three types of walking for individual subjects (p < 0.01) and showed a high reproducibility. Our results suggest that the fractal dimensions are effective for classifying the walking types. Moreover, the fractal dimensions were significantly higher for the elderly subjects than for the young subjects (p < 0.01). For the patients with Parkinson's disease, the fractal dimensions tended to be higher than those of healthy subjects. These results suggest that the acceleration signals change into a more complex pattern with aging and with Parkinson's disease, and the fractal dimension can be used to evaluate the gait of elderly subjects and patients with Parkinson's disease. PMID:12503784
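
    A minimal sketch of the scale-variance step (not the authors' code): decompose the signal into detail subbands, regress log2 variance on level, and map the slope to a dimension. The mapping below uses the fBm convention Var(d_j) ~ 2^(j(2H+1)) with FD = 2 - H, which is an assumption; conventions vary across papers.

        import numpy as np
        import pywt

        acc = np.cumsum(np.random.randn(4096))      # stand-in acceleration signal
        coeffs = pywt.wavedec(acc, "db4", level=7)
        details = coeffs[1:]                        # coarsest (level 7) first

        levels = np.arange(len(details), 0, -1)     # j = 7 ... 1
        log_var = [np.log2(np.var(d)) for d in details]

        slope = np.polyfit(levels, log_var, 1)[0]   # slope of the variance progression
        H = (slope - 1) / 2
        print(f"slope {slope:.2f}, H {H:.2f}, fractal dimension {2 - H:.2f}")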

  9. Density Estimation Trees in High Energy Physics

    E-print Network

    Anderlini, Lucio

    2015-01-01

    Density Estimation Trees can play an important role in exploratory data analysis for multidimensional, multi-modal data models of large samples. I briefly discuss the algorithm, a self-optimization technique based on kernel density estimation, and some applications in High Energy Physics.

  10. Wavelet-based analysis of circadian behavioral rhythms.

    PubMed

    Leise, Tanya L

    2015-01-01

    The challenging problems presented by noisy biological oscillators have led to the development of a great variety of methods for accurately estimating rhythmic parameters such as period and amplitude. This chapter focuses on wavelet-based methods, which can be quite effective for assessing how rhythms change over time, particularly if time series are at least a week in length. These methods can offer alternative views to complement more traditional methods of evaluating behavioral records. The analytic wavelet transform can estimate the instantaneous period and amplitude, as well as the phase of the rhythm at each time point, while the discrete wavelet transform can extract the circadian component of activity and measure the relative strength of that circadian component compared to those in other frequency bands. Wavelet transforms do not require the removal of noise or trend, and can, in fact, be effective at removing noise and trend from oscillatory time series. The Fourier periodogram and spectrogram are reviewed, followed by descriptions of the analytic and discrete wavelet transforms. Examples illustrate application of each method and their prior use in chronobiology is surveyed. Issues such as edge effects, frequency leakage, and implications of the uncertainty principle are also addressed. PMID:25662453
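
    A hedged sketch of the discrete-wavelet step (illustrative, not from the chapter): with activity sampled once per minute, the level-10 detail band spans periods of roughly 2^10 to 2^11 minutes (about 17 to 34 h), so it contains the circadian component; the level choice is an assumption tied to that sampling rate.

        import numpy as np
        import pywt

        minutes = np.arange(7 * 24 * 60)            # one week at 1-min sampling
        activity = np.maximum(0, np.sin(2 * np.pi * minutes / 1440)
                              + 0.5 * np.random.randn(minutes.size))

        coeffs = pywt.wavedec(activity, "db2", level=10)
        kept = [np.zeros_like(c) for c in coeffs]
        kept[1] = coeffs[1]                         # level-10 detail (~24 h band)
        circadian = pywt.waverec(kept, "db2")[:minutes.size]

        # Relative strength of the circadian band versus total energy:
        strength = np.sum(circadian ** 2) / np.sum(activity ** 2)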

  11. A wavelet-based baseline drift correction method for grounded electrical source airborne transient electromagnetic signals

    NASA Astrophysics Data System (ADS)

    Wang, Yuan; Ji, Yanju; Li, Suyi; Lin, Jun; Zhou, Fengdao; Yang, Guihong

    2013-09-01

    A grounded electrical source airborne transient electromagnetic (GREATEM) system on an airship offers a large prospecting depth and high spatial resolution, as well as outstanding detection efficiency and easy flight control. However, the movement and swing of the front-fixed receiving coil can cause severe baseline drift, leading to inferior resistivity image formation. Consequently, reducing the baseline drift of GREATEM data is of vital importance to inversion interpretation. To correct the baseline drift, a traditional interpolation method estimates the baseline `envelope' using linear interpolation between the calculated start and end points of all cycles, and obtains the corrected signal by subtracting the envelope from the original signal. However, the effectiveness and efficiency of this removal are low. Considering the characteristics of the baseline drift in GREATEM data, this study proposes a wavelet-based method built on multi-resolution analysis. The optimal wavelet basis and number of decomposition levels are determined iteratively by trial-and-error comparison. This application uses the sym8 wavelet with 10 decomposition levels, takes the approximation at level 10 as the baseline drift, and obtains the corrected signal by removing the estimated baseline drift from the original signal. To examine the performance of the proposed method, we establish a dipping-sheet model and calculate the theoretical response. Through simulations, we compare the signal-to-noise ratio, signal distortion, and processing speed of the wavelet-based method with those of the interpolation method. Simulation results show that the wavelet-based method outperforms the interpolation method. We also use field data to evaluate the methods, comparing the depth-section images of apparent resistivity obtained from the original signal, the interpolation-corrected signal, and the wavelet-corrected signal. The results confirm that the proposed wavelet-based method is an effective, practical way to remove the baseline drift of GREATEM signals, and its performance is significantly superior to that of the interpolation method.
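
    The correction itself is compact; a sketch with a synthetic trace follows (real GREATEM data would set the basis and level by the trial-and-error comparison the abstract describes):

        import numpy as np
        import pywt

        n = 2 ** 14
        t = np.linspace(0, 1, n)
        signal = np.sin(2 * np.pi * 200 * t) + 0.5 * t    # fast signal + slow drift

        coeffs = pywt.wavedec(signal, "sym8", level=10)
        # Keep only the level-10 approximation: zero every detail band.
        baseline_coeffs = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
        baseline = pywt.waverec(baseline_coeffs, "sym8")[:n]

        corrected = signal - baseline                     # drift removed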

  12. Topics in global convergence of density estimates

    NASA Technical Reports Server (NTRS)

    Devroye, L.

    1982-01-01

    The problem of estimating a density f on R^d from a sample X(1),...,X(n) of independent identically distributed random vectors is critically examined, and some recent results in the field are reviewed. The following statements are qualified: (1) For any sequence of density estimates f(n), any arbitrarily slow rate of convergence to 0 is possible for E(∫|f(n) - f|); (2) In theoretical comparisons of density estimates, ∫|f(n) - f| should be used and not ∫|f(n) - f|^p, p > 1; and (3) For most reasonable nonparametric density estimates, either there is convergence of ∫|f(n) - f| (and then the convergence is in the strongest possible sense for all f), or there is no convergence (even in the weakest possible sense for a single f). There is no intermediate situation.

  13. Optimization of k nearest neighbor density estimates

    Microsoft Academic Search

    K. Fukunaga; L. Hostetler

    1973-01-01

    Nonparametric density estimation using the k-nearest-neighbor approach is discussed. By developing a relation between the volume and the coverage of a region, a functional form for the optimum k in terms of the sample size, the dimensionality of the observation space, and the underlying probability distribution is obtained. Within the class of density functions that can be made circularly symmetric by a linear

  14. NEW MULTIVARIATE PRODUCT DENSITY ESTIMATORS Luc Devroye

    E-print Network

    Devroye, Luc

    ... and X(k) is the k-th nearest neighbor of x when points are ordered by increasing values of the product ∏_{j=1}^d |x_j - X_(k)j| ... School of Computer Science, McGill University, Montreal, Canada H3G 1M8. Abstract: Let X be an IR^d-valued random variable with unknown density f. Let X1

  15. A wavelet based investigation of long memory in stock returns

    NASA Astrophysics Data System (ADS)

    Tan, Pei P.; Galagedera, Don U. A.; Maharaj, Elizabeth A.

    2012-04-01

    Using a wavelet-based maximum likelihood fractional integration estimator, we test long memory (return predictability) in the returns at the market, industry and firm level. In an analysis of emerging market daily returns over the full sample period, we find that long memory is not present at the market level, while in approximately twenty percent of the 175 stocks there is evidence of long memory. The absence of long memory in the market returns may be a consequence of contemporaneous aggregation of stock returns. However, when the analysis is carried out with rolling windows, evidence of long memory is observed in certain time frames. These results are largely consistent with those of detrended fluctuation analysis. A test of firm-level information in explaining stock return predictability using a logistic regression model reveals that returns of large firms are more likely to possess the long memory feature than returns of small firms. There is no evidence to suggest that turnover, earnings per share, book-to-market ratio, systematic risk or abnormal return with respect to the market model is associated with return predictability. However, the degree of long-range dependence appears to be associated positively with earnings per share, systematic risk and abnormal return, and negatively with book-to-market ratio.

  16. ESTIMATES OF BIOMASS DENSITY FOR TROPICAL FORESTS

    EPA Science Inventory

    An accurate estimation of the biomass density in forests is a necessary step in understanding the global carbon cycle and the production of other atmospheric trace gases from biomass burning. In this paper the authors summarize the various approaches that have been developed for estimating...

  17. Consistency of the local kernel density estimator

    Microsoft Academic Search

    Geof H. Givens

    1995-01-01

    The consistency of the local kernel density estimator is proved. This nonparametric estimator is distinguished by its use of scaling matrices which are random and which may vary for each sample point. Its applications include adaptive construction of importance sampling functions.

  18. NEW MULTIVARIATE PRODUCT DENSITY ESTIMATORS Luc Devroye

    E-print Network

    Devroye, Luc

    ...are ordered by increasing values of the product ∏_{j=1}^d |x_j - X_(k)j|, and k = o(log n), k → ∞. The auxiliary ... School of Computer Science, McGill University, Montreal, Canada H3G 1M8. Abstract: Let X be an IR^d-valued random variable with unknown density f. Let X

  19. Transformation based density estimation For weighted distributions

    Microsoft Academic Search

    Hammou El Barmi; Jeffrey S. Simonoff

    2000-01-01

    In this paper we consider the estimation of a density f on the basis of a random sample from a weighted distribution G with density g given by g(x) = w(x)f(x)/∫w(u)f(u)du, where w(u) > 0 for all u and ∫w(u)f(u)du < ∞. A special case of this situation is that of length-biased sampling, where w(x) = x. In this paper we examine a simple transformation-based approach

  20. Estimating and Interpreting Probability Density Functions

    NSDL National Science Digital Library

    This 294-page document from the Bank for International Settlements stems from the Estimating and Interpreting Probability Density Functions workshop held on June 14, 1999. The conference proceedings, which may be downloaded as a complete document or by chapter, are divided into two sections: "Estimation Techniques" and "Applications and Economic Interpretation." Both contain papers presented at the conference. Also included are a list of the program participants with their affiliations and email addresses, a foreword, and background notes.

  1. Calibrated Measures for Breast Density Estimation

    PubMed Central

    Heine, John J.; Cao, Ke; Rollison, Dana E.

    2011-01-01

    Rationale and Objectives: Breast density is a significant breast cancer risk factor measured from mammograms. Evidence suggests that the spatial variation in mammograms may also be associated with risk. We investigated the variation in calibrated mammograms as a breast cancer risk factor and explored its relationship with other measures of breast density using full field digital mammography (FFDM). Materials and Methods: A matched case-control analysis was used to assess a spatial variation breast density measure in calibrated FFDM images, normalized for the image acquisition technique variation. Three measures of breast density were compared between cases and controls: (a) the calibrated average measure, (b) the calibrated variation measure, and (c) the standard percentage of breast density (PD) measure derived from operator-assisted labeling. Linear correlation and statistical relationships between these three breast density measures were also investigated. Results: Risk estimates associated with the lowest to highest quartiles for the calibrated variation measure were greater in magnitude [odds ratios: 1.0 (ref.), 3.5, 6.3, and 11.3] than the corresponding risk estimates for quartiles of the standard PD measure [odds ratios: 1.0 (ref.), 2.3, 5.6, and 6.5] and the calibrated average measure [odds ratios: 1.0 (ref.), 2.4, 2.3, and 4.4]. The three breast density measures were highly correlated, showed an inverse relationship with breast area, and were related by a mixed distribution relationship. Conclusion: The three measures of breast density capture different attributes of the same data field. These preliminary findings indicate the variation measure is a viable automated method for assessing breast density. Insights gained by this work may be used to develop a standard for measuring breast density. PMID:21371912

  2. Sampling, Density Estimation and Spatial Relationships

    NSDL National Science Digital Library

    Maggie Haag (University of Alberta)

    1998-01-01

    This resource serves as a tool for instructing a laboratory exercise in ecology. Students obtain hands-on experience using techniques such as mark-recapture and density estimation, and organisms such as zooplankton and fathead minnows. This exercise is suitable for general ecology and introductory biology courses.

  3. Estimating delayed density-dependent mortality in sockeye salmon (Oncorhynchus nerka)

    E-print Network

    Myers, Ransom A.

    ... in many populations of sockeye salmon (Oncorhynchus nerka). We used a meta-analytical approach to test ...

  4. Estimating density of Florida Key deer

    E-print Network

    Roberts, Clay Walton

    2006-08-16

    for this species since 1968; however, the USFWS desired an evaluation of the precision of existing and alternative survey methods (i.e., road counts, mark-recapture, infrared-triggered cameras [ITC]). I evaluated density estimates from unbaited ITCs and road...

  5. On Sequential Data-Driven Density Estimation

    Microsoft Academic Search

    Sam Efromovich

    2004-01-01

    The theory and methods of minimax and sequential inferences, pioneered by Abraham Wald in the 1940s, shaped the way statisticians see statistics today. This article employs the Wald approaches together with modern oracle analysis to develop the theory and methods of a sharp minimax adaptive sequential density estimation. In particular, it proves a long-standing conjecture about a sufficient condition

  6. IMPROVED DENSITY ESTIMATORS FOR INVERTIBLE LINEAR PROCESSES

    E-print Network

    Schick, Anton

    ...Sciences, Binghamton University, Binghamton, NY 13902-6000, USA (anton@math.binghamton.edu); Wolfgang Wefelmeyer, Mathematisches Institut, Universität zu Köln, Weyertal 86-90, 50931 Köln, Germany (wefelm@math.uni-koeln.de). ... invertible linear processes can be represented as a convolution of innovation-based densities, and it can be estimated

  7. Estimating animal population density using passive acoustics

    PubMed Central

    Marques, Tiago A; Thomas, Len; Martin, Stephen W; Mellinger, David K; Ward, Jessica A; Moretti, David J; Harris, Danielle; Tyack, Peter L

    2013-01-01

    Reliable estimation of the size or density of wild animal populations is very important for effective wildlife management, conservation and ecology. Currently, the most widely used methods for obtaining such estimates involve either sighting animals from transect lines or some form of capture-recapture on marked or uniquely identifiable individuals. However, many species are difficult to sight, and cannot be easily marked or recaptured. Some of these species produce readily identifiable sounds, providing an opportunity to use passive acoustic data to estimate animal density. In addition, even for species for which other visually based methods are feasible, passive acoustic methods offer the potential for greater detection ranges in some environments (e.g. underwater or in dense forest), and hence potentially better precision. Automated data collection means that surveys can take place at times and in places where it would be too expensive or dangerous to send human observers. Here, we present an overview of animal density estimation using passive acoustic data, a relatively new and fast-developing field. We review the types of data and methodological approaches currently available to researchers and we provide a framework for acoustics-based density estimation, illustrated with examples from real-world case studies. We mention moving sensor platforms (e.g. towed acoustics), but then focus on methods involving sensors at fixed locations, particularly hydrophones to survey marine mammals, as acoustic-based density estimation research to date has been concentrated in this area. Primary among these are methods based on distance sampling and spatially explicit capture-recapture. The methods are also applicable to other aquatic and terrestrial sound-producing taxa. We conclude that, despite being in its infancy, density estimation based on passive acoustic data likely will become an important method for surveying a number of diverse taxa, such as sea mammals, fish, birds, amphibians, and insects, especially in situations where inferences are required over long periods of time. There is considerable work ahead, with several potentially fruitful research areas, including the development of (i) hardware and software for data acquisition, (ii) efficient, calibrated, automated detection and classification systems, and (iii) statistical approaches optimized for this application. Further, survey design will need to be developed, and research is needed on the acoustic behaviour of target species. Fundamental research on vocalization rates and group sizes, and the relation between these and other factors such as season or behaviour state, is critical. Evaluation of the methods under known density scenarios will be important for empirically validating the approaches presented here. PMID:23190144

  8. Estimating animal population density using passive acoustics.

    PubMed

    Marques, Tiago A; Thomas, Len; Martin, Stephen W; Mellinger, David K; Ward, Jessica A; Moretti, David J; Harris, Danielle; Tyack, Peter L

    2013-05-01

    Reliable estimation of the size or density of wild animal populations is very important for effective wildlife management, conservation and ecology. Currently, the most widely used methods for obtaining such estimates involve either sighting animals from transect lines or some form of capture-recapture on marked or uniquely identifiable individuals. However, many species are difficult to sight, and cannot be easily marked or recaptured. Some of these species produce readily identifiable sounds, providing an opportunity to use passive acoustic data to estimate animal density. In addition, even for species for which other visually based methods are feasible, passive acoustic methods offer the potential for greater detection ranges in some environments (e.g. underwater or in dense forest), and hence potentially better precision. Automated data collection means that surveys can take place at times and in places where it would be too expensive or dangerous to send human observers. Here, we present an overview of animal density estimation using passive acoustic data, a relatively new and fast-developing field. We review the types of data and methodological approaches currently available to researchers and we provide a framework for acoustics-based density estimation, illustrated with examples from real-world case studies. We mention moving sensor platforms (e.g. towed acoustics), but then focus on methods involving sensors at fixed locations, particularly hydrophones to survey marine mammals, as acoustic-based density estimation research to date has been concentrated in this area. Primary among these are methods based on distance sampling and spatially explicit capture-recapture. The methods are also applicable to other aquatic and terrestrial sound-producing taxa. We conclude that, despite being in its infancy, density estimation based on passive acoustic data likely will become an important method for surveying a number of diverse taxa, such as sea mammals, fish, birds, amphibians, and insects, especially in situations where inferences are required over long periods of time. There is considerable work ahead, with several potentially fruitful research areas, including the development of (i) hardware and software for data acquisition, (ii) efficient, calibrated, automated detection and classification systems, and (iii) statistical approaches optimized for this application. Further, survey design will need to be developed, and research is needed on the acoustic behaviour of target species. Fundamental research on vocalization rates and group sizes, and the relation between these and other factors such as season or behaviour state, is critical. Evaluation of the methods under known density scenarios will be important for empirically validating the approaches presented here. PMID:23190144

  9. Conditional Density Estimation in Measurement Error Problems.

    PubMed

    Wang, Xiao-Feng; Ye, Deping

    2015-01-01

    This paper is motivated by a wide range of background correction problems in gene array data analysis, where the raw gene expression intensities are measured with error. Estimating a conditional density function from the contaminated expression data is a key aspect of statistical inference and visualization in these studies. We propose re-weighted deconvolution kernel methods to estimate the conditional density function in an additive error model, when the error distribution is known as well as when it is unknown. Theoretical properties of the proposed estimators are investigated with respect to the mean absolute error from a "double asymptotic" view. Practical rules are developed for the selection of smoothing parameters. Simulated examples and an application to an Illumina bead microarray study are presented to illustrate the viability of the methods. PMID:25284902

  10. Wavelet-Based Multiresolution Analysis of Wivenhoe Dam Water Temperatures

    E-print Network

    Percival, Don

    Don Percival, Applied Physics Laboratory. A monitoring program recently upgraded with permanent installation of vertical profilers at the Lake Wivenhoe dam; water temperature in a subtropical dam as a function of time and depth; will concentrate on a 600+ day segment of temperature fluctuations

  11. A tree projection algorithm for wavelet-based

    E-print Network

    Thompson, Andrew

    A tree projection algorithm for wavelet-based sparse approximation. Andrew Thompson, Duke University, North Carolina, USA; joint with Coralia Cartis (University of Edinburgh). Wavelet trees: discrete wavelet transforms (DWTs) have an inherent tree structure.

  12. Wavelet-based Feature Extraction for Handwritten Numerals

    E-print Network

    Figueira, Santiago

    Diego Romero, Ana Ruedin and Leticia ... recognition that relies on the extraction of multiscale features to characterize the classes ... bandpass filters ... give information on local orientation of the strokes. Extracted features: a shape

  13. Wavelet-based analysis of blood pressure dynamics in rats

    NASA Astrophysics Data System (ADS)

    Pavlov, A. N.; Anisimov, A. A.; Semyachkina-Glushkovskaya, O. V.; Berdnikova, V. A.; Kuznecova, A. S.; Matasova, E. G.

    2009-02-01

    Using a wavelet-based approach, we study stress-induced reactions in the blood pressure dynamics of rats. Further, we consider how the level of nitric oxide (NO) influences heart rate variability. Clear distinctions between male and female rats are reported.

  14. Wavelet based edge detection method for analysis of coronary angiograms

    Microsoft Academic Search

    A. Bezerianos; A. Munteanul; D. Alexopoulos; G. Panayiotakis; P. Cristea

    1995-01-01

    The assessment of coronary anatomy is one of the prime determinants in choosing medical or interventional therapy for patients with ischemic heart disease. We report a wavelet based method of coronary border identification which has the advantage of the detection of the edges at different scales (the image changes are computed in a variable neighborhood), unlike the conventional methods where

  15. Coding sequence density estimation via topological pressure.

    PubMed

    Koslicki, David; Thompson, Daniel J

    2015-01-01

    We give a new approach to coding sequence (CDS) density estimation in genomic analysis based on the topological pressure, which we develop from a well known concept in ergodic theory. Topological pressure measures the 'weighted information content' of a finite word, and incorporates 64 parameters which can be interpreted as a choice of weight for each nucleotide triplet. We train the parameters so that the topological pressure fits the observed coding sequence density on the human genome, and use this to give ab initio predictions of CDS density over windows of size around 66,000 bp on the genomes of Mus musculus, Rhesus macaque and Drosophila melanogaster. While the differences between these genomes are too great to expect that training on the human genome could predict, for example, the exact locations of genes, we demonstrate that our method gives reasonable estimates for the 'coarse scale' problem of predicting CDS density. Inspired again by ergodic theory, the weightings of the nucleotide triplets obtained from our training procedure are used to define a probability distribution on finite sequences, which can be used to distinguish between intron and exon sequences from the human genome of lengths between 750 and 5,000 bp. At the end of the paper, we explain the theoretical underpinning for our approach, which is the theory of Thermodynamic Formalism from the dynamical systems literature. Mathematica and MATLAB implementations of our method are available at http://sourceforge.net/projects/topologicalpres/ . PMID:24448658

  16. Bird population density estimated from acoustic signals

    USGS Publications Warehouse

    Dawson, D.K.; Efford, M.G.

    2009-01-01

    1. Many animal species are detected primarily by sound. Although songs, calls and other sounds are often used for population assessment, as in bird point counts and hydrophone surveys of cetaceans, there are few rigorous methods for estimating population density from acoustic data. 2. The problem has several parts - distinguishing individuals, adjusting for individuals that are missed, and adjusting for the area sampled. Spatially explicit capture-recapture (SECR) is a statistical methodology that addresses jointly the second and third parts of the problem. We have extended SECR to use uncalibrated information from acoustic signals on the distance to each source. 3. We applied this extension of SECR to data from an acoustic survey of ovenbird Seiurus aurocapilla density in an eastern US deciduous forest with multiple four-microphone arrays. We modelled average power from spectrograms of ovenbird songs measured within a window of 0.7 s duration and frequencies between 4200 and 5200 Hz. 4. The resulting estimates of the density of singing males (0.19 ha-1, SE 0.03 ha-1) were consistent with estimates of the adult male population density from mist-netting (0.36 ha-1, SE 0.12 ha-1). The fitted model predicts sound attenuation of 0.11 dB m-1 (SE 0.01 dB m-1) in excess of losses from spherical spreading. 5. Synthesis and applications. Our method for estimating animal population density from acoustic signals fills a gap in the census methods available for visually cryptic but vocal taxa, including many species of bird and cetacean. The necessary equipment is simple and readily available; as few as two microphones may provide adequate estimates, given spatial replication. The method requires that individuals detected at the same place are acoustically distinguishable and all individuals vocalize during the recording interval, or that the per capita rate of vocalization is known. We believe these requirements can be met, with suitable field methods, for a significant number of songbird species. ?? 2009 British Ecological Society.

  17. Classification of Melanoma Lesions Using Wavelet-Based Texture Analysis

    Microsoft Academic Search

    Rahil Garnavi; Mohammad Aldeen; James Bailey

    2010-01-01

    This paper presents a wavelet-based texture analysis method for classification of melanoma. The method applies tree-structured wavelet transform on different color channels of red, green, blue and luminance of dermoscopy images, and employs various statistical measures and ratios on wavelet coefficients. Feature extraction and a two-stage feature selection method, based on entropy and correlation, were applied to a train set

  18. Wavelet-based statistical signal processing using hidden Markov models

    Microsoft Academic Search

    Matthew S. Crouse; Robert D. Nowak; Richard G. Baraniuk

    1998-01-01

    Wavelet-based statistical signal processing techniques such as denoising and detection typically model the wavelet coefficients as independent or jointly Gaussian. These models are unrealistic for many real-world signals. We develop a new framework for statistical signal processing based on wavelet-domain hidden Markov models (HMMs) that concisely models the statistical dependencies and non-Gaussian statistics encountered in real-world signals. Wavelet-domain HMMs are

  19. Fast wavelet based algorithms for linear evolution equations

    NASA Technical Reports Server (NTRS)

    Engquist, Bjorn; Osher, Stanley; Zhong, Sifen

    1992-01-01

    A class of fast wavelet-based algorithms was devised for linear evolution equations whose coefficients are time independent. The method draws on the work of Beylkin, Coifman, and Rokhlin, which they applied to general Calderon-Zygmund type integral operators. A modification of their idea is applied to linear hyperbolic and parabolic equations with spatially varying coefficients. A significant speedup over standard methods is obtained when the method is applied to hyperbolic equations in one space dimension and parabolic equations in multidimensions.

  20. Non-destructive wavelet-based despeckling in SAR images

    NASA Astrophysics Data System (ADS)

    Bekhtin, Yuri S.; Bryantsev, Andrey A.; Malebo, Damiao P.; Lupachev, Alexey A.

    2014-10-01

    The suggested wavelet-based despeckling method for multi-look SAR images uses no thresholding or window processing, thereby avoiding ringing artifacts, blurring, fusion of edges, etc. Instead, a logical comparison operation is applied to wavelet coefficients arranged in spatially oriented trees (SOTs) of wavelet decompositions computed for one and the same region of the Earth's surface during SAR spacecraft flight. SAR images are fused by keeping the smallest wavelet coefficients from the different SOTs in the high-frequency subbands (details). The wavelet coefficients of the low-frequency subband (approximation) are processed by another special logical operation that provides good smoothing. Because the procedure depends on the properties of the chosen wavelet basis, a library of wavelet bases is used and the procedure is repeated for each basis. To select the best SOTs (and hence the best wavelet basis), a special cost function treats the SOTs as coherent structures and identifies the wavelet basis that yields the maximum entropy. Computer modeling and comparison with several well-known despeckling procedures show the superior quality of the proposed method with respect to criteria such as PSNR and SSIM.
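
    A minimal sketch of the fusion rule described above, assuming two co-registered looks of the same scene and using PyWavelets: detail coefficients are fused by keeping the smaller magnitude, while plain averaging of the approximation subband stands in for the paper's unspecified special logical operation. Wavelet and level are arbitrary choices.

        import numpy as np
        import pywt

        def fuse_sar_pair(img_a, img_b, wavelet='db4', level=3):
            # Decompose two co-registered looks of one scene; per position keep
            # the detail coefficient of smaller magnitude (speckle inflates
            # magnitudes), and average the approximation subband as a stand-in
            # for the paper's smoothing operation.
            ca = pywt.wavedec2(img_a, wavelet, level=level)
            cb = pywt.wavedec2(img_b, wavelet, level=level)
            fused = [(ca[0] + cb[0]) / 2.0]
            for (ha, va, da), (hb, vb, db) in zip(ca[1:], cb[1:]):
                fused.append(tuple(np.where(np.abs(x) <= np.abs(y), x, y)
                                   for x, y in ((ha, hb), (va, vb), (da, db))))
            return pywt.waverec2(fused, wavelet)

        rng = np.random.default_rng(0)
        a, b = rng.random((64, 64)), rng.random((64, 64))
        print(fuse_sar_pair(a, b).shape)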

  1. Estimating stellar mean density through seismic inversions

    NASA Astrophysics Data System (ADS)

    Reese, D. R.; Marques, J. P.; Goupil, M. J.; Thompson, M. J.; Deheuvels, S.

    2012-03-01

    Context. Determining the mass of stars is crucial both for improving stellar evolution theory and for characterising exoplanetary systems. Asteroseismology offers a promising way for estimating the stellar mean density. When combined with accurate radii determinations, such as are expected from Gaia, this yields accurate stellar masses. The main difficulty is finding the best way to extract the mean density of a star from a set of observed frequencies. Aims: We seek to establish a new method for estimating the stellar mean density, which combines the simplicity of a scaling law with the accuracy of an inversion technique. Methods: We provide a framework in which to construct and evaluate kernel-based linear inversions that directly yield the mean density of a star. We then describe three different inversion techniques (SOLA and two scaling laws) and apply them to the Sun, several test cases and three stars, α Cen B, HD 49933 and HD 49385, two of which are observed by CoRoT. Results: The SOLA (subtractive optimally localised averages) approach and the scaling law based on the surface-correcting technique described by Kjeldsen et al. (2008, ApJ, 683, L175) yield comparable results that can reach an accuracy of 0.5% and are better than scaling the large frequency separation. The reason for this is that the averaging kernels from the first two methods are comparable in quality and are better than what is obtained with the large frequency separation. It is also shown that scaling the large frequency separation is more sensitive to near-surface effects, but is much less affected by an incorrect mode identification. As a result, one can identify pulsation modes by looking for an ℓ and n assignment which provides the best agreement between the results from the large frequency separation and those from one of the two other methods. Non-linear effects are also discussed, as are the effects of mixed modes. In particular, we show that mixed modes bring little improvement to the mean density estimates because of their poorly adapted kernels.
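
    The large-frequency-separation scaling law that the kernel-based methods are benchmarked against is simple enough to state in a few lines; a sketch with approximate solar reference values (the paper's SOLA inversion is far more involved):

        # Approximate solar reference values for the scaling-law sketch.
        DNU_SUN_UHZ = 135.1   # solar large frequency separation, microhertz
        RHO_SUN_CGS = 1.408   # solar mean density, g/cm^3

        def mean_density_from_dnu(dnu_uhz):
            # rho / rho_sun ~ (Delta_nu / Delta_nu_sun)**2
            return RHO_SUN_CGS * (dnu_uhz / DNU_SUN_UHZ) ** 2

        print(mean_density_from_dnu(100.0))  # star with Delta_nu = 100 microhertz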

  2. Multiresolution seismic data fusion with a generalized wavelet-based method to derive subseabed acoustic properties

    NASA Astrophysics Data System (ADS)

    Ker, S.; Le Gonidec, Y.; Gibert, D.

    2013-11-01

    In the context of multiscale seismic analysis of complex reflectors, which benefits from broad-band frequency range considerations, we develop a wavelet-based method to merge multiresolution seismic sources based on generalized Lévy-alpha stable functions. The frequency bandwidth limitation of individual seismic sources induces distortions in wavelet responses (WRs), and we show that Gaussian fractional derivative functions are optimal wavelets that fully correct for these distortions in the merged frequency range. The efficiency of the method also rests on a new wavelet parametrization, the breadth of the wavelet, in which the dominant dilation is adapted to the wavelet formalism. As a first demonstration, we apply the source correction to the high and very high resolution seismic sources of the SYSIF deep-towed device and show that the two can be merged into an equivalent seismic source with a broad-band frequency bandwidth (220-2200 Hz). Taking advantage of this multiresolution seismic data fusion, the generalized wavelet-based method allows the acoustic impedance profile of the subseabed to be reconstructed, based on the inverse wavelet transform properties extended to the source-corrected WR. We highlight that the fusion of seismic sources improves the resolution of the impedance profile and that the density structure of the subseabed can be assessed assuming spatially homogeneous large-scale features of the subseabed physical properties.

  3. Traffic characterization and modeling of wavelet-based VBR encoded video

    SciTech Connect

    Yu Kuo; Jabbari, B. [George Mason Univ., Fairfax, VA (United States); Zafar, S. [Argonne National Lab., IL (United States). Mathematics and Computer Science Div.

    1997-07-01

    Wavelet-based video codecs provide a hierarchical structure for the encoded data, which can cater to a wide variety of applications such as multimedia systems. The characteristics of such an encoder and its output, however, have not been well examined. In this paper, the authors investigate the output characteristics of a wavelet-based video codec and develop a composite model to capture the traffic behavior of its output video data. Wavelet decomposition transforms the input video into a hierarchical structure with a number of subimages at different resolutions and scales. The top-level wavelet in this structure contains most of the signal energy. They first describe the characteristics of traffic generated by each subimage and the effect of dropping various subimages at the encoder on the signal-to-noise ratio at the receiver. They then develop an N-state Markov model to describe the traffic behavior of the top wavelet. The behavior of the remaining wavelets is then obtained through estimation, based on the correlations between these subimages at the same level of resolution and those wavelets located at an immediately higher level. In this paper, a three-state Markov model is developed. The resulting traffic behavior, described by various statistical properties such as moments and correlations, is then utilized to validate their model.
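
    A generic discrete-time Markov traffic source of the kind fitted to the top wavelet can be sketched as follows; the transition matrix and per-state rates are illustrative placeholders, not the paper's fitted three-state model.

        import numpy as np

        rng = np.random.default_rng(0)

        # Illustrative 3-state Markov source: each state emits a bit rate.
        # Transition matrix and rates are placeholders, not fitted values.
        P = np.array([[0.90, 0.08, 0.02],
                      [0.10, 0.80, 0.10],
                      [0.05, 0.15, 0.80]])
        rate_per_state = np.array([0.5, 1.5, 4.0])  # e.g. Mbit/s per frame

        def simulate_traffic(n_frames, state=0):
            rates = np.empty(n_frames)
            for t in range(n_frames):
                rates[t] = rate_per_state[state]
                state = rng.choice(3, p=P[state])
            return rates

        trace = simulate_traffic(1000)
        print(trace.mean(), trace.std())  # moments of the kind used for validation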

  4. ESTIMATING MICROORGANISM DENSITIES IN AEROSOLS FROM SPRAY IRRIGATION OF WASTEWATER

    EPA Science Inventory

    This document summarizes current knowledge about estimating the density of microorganisms in the air near wastewater management facilities, with emphasis on spray irrigation sites. One technique for modeling microorganism density in air is provided and an aerosol density estimati...

  5. A Maximum Likelihood Approach to Density Estimation with Semidefinite Programming

    Microsoft Academic Search

    Tadayoshi Fushiki; Shingo Horiuchi; Takashi Tsuchiya

    2006-01-01

    Density estimation plays an important and fundamental role in pattern recognition, machine learning, and statistics. In this article, we develop a parametric approach to univariate (or low-dimensional) density estimation based on semidefinite programming (SDP). Our density model is expressed as the product of a nonnegative polynomial and a base density such as normal distribution, exponential distribution, and uniform distribution. When

  6. Wavelet based characterization of ex vivo vertebral trabecular bone structure with 3T MRI compared to microCT

    SciTech Connect

    Krug, R; Carballido-Gamio, J; Burghardt, A; Haase, S; Sedat, J W; Moss, W C; Majumdar, S

    2005-04-11

    Trabecular bone structure and bone density contribute to the strength of bone and are important in the study of osteoporosis. Wavelets are a powerful tool to characterize and quantify texture in an image. In this study the thickness of trabecular bone was analyzed in 8 cylindrical cores of the vertebral spine. Images were obtained from 3 Tesla (T) magnetic resonance imaging (MRI) and micro-computed tomography (µCT). Results from the wavelet-based analysis of trabecular bone were compared with standard two-dimensional structural parameters (analogous to bone histomorphometry) obtained using mean intercept length (MR images) and direct 3D distance transformation methods (µCT images). Additionally, the bone volume fraction was determined from MR images. We conclude that the wavelet-based analysis delivers results comparable to the established MR histomorphometric measurements. The average deviation in trabecular thickness was less than one pixel size between the wavelet and the standard approach for both MR and µCT analysis. Since the wavelet-based method is less sensitive to image noise, we see an advantage of wavelet analysis of trabecular bone for MR imaging when going to higher resolution.

  7. Adaptive wavelet-based recognition of oscillatory patterns on electroencephalograms

    NASA Astrophysics Data System (ADS)

    Nazimov, Alexey I.; Pavlov, Alexey N.; Hramov, Alexander E.; Grubov, Vadim V.; Koronovskii, Alexey A.; Sitnikova, Evgenija Y.

    2013-02-01

    The problem of automatic recognition of specific oscillatory patterns on electroencephalograms (EEG) is addressed using the continuous wavelet transform (CWT). A possibility of improving the quality of recognition by optimizing the choice of CWT parameters is discussed. An adaptive approach is proposed to identify sleep spindles (SS) and spike wave discharges (SWD) that assumes automatic selection of CWT parameters reflecting the most informative features of the analyzed time-frequency structures. Advantages of the proposed technique over standard wavelet-based approaches are considered.

  8. Bounding the L1 Distance in Nonparametric Density Estimation

    Microsoft Academic Search

    Subrata Kundu; Adam T. Martinsek

    1997-01-01

    Let X1, X2, ..., Xn be i.i.d. random variables with common unknown density function f. We are interested in estimating the unknown density f with bounded Mean Integrated Absolute Error (MIAE). Devroye and Gyorfi (1985, Nonparametric Density Estimation: The L1 View, Wiley, New York) obtained asymptotic bounds for the MIAE in estimating f by a kernel estimate fn. Using these

  9. Probability Density Estimation from Optimally Condensed Data Samples

    Microsoft Academic Search

    Mark Girolami; Chao He

    2003-01-01

    The requirement to reduce the computational cost of evaluating a point probability density estimate when employing a Parzen window estimator is a well-known problem. This paper presents the Reduced Set Density Estimator that provides a kernel-based density estimator which employs a small percentage of the available data sample and is optimal in the L2 sense. While only requiring O(N^2)

  10. Wavelet-Based Signal and Image Processing for Target Recognition

    NASA Astrophysics Data System (ADS)

    Sherlock, Barry G.

    2002-11-01

    The PI visited NSWC Dahlgren, VA, for six weeks in May-June 2002 and collaborated with scientists in the G33 TEAMS facility, and with Marilyn Rudzinsky of T44 Technology and Photonic Systems Branch. During this visit the PI also presented six educational seminars to NSWC scientists on various aspects of signal processing. Several items from the grant proposal were completed, including (1) wavelet-based algorithms for interpolation of 1-d signals and 2-d images; (2) Discrete Wavelet Transform domain based algorithms for filtering of image data; (3) wavelet-based smoothing of image sequence data originally obtained for the CRITTIR (Clutter Rejection Involving Temporal Techniques in the Infra-Red) project. The PI visited the University of Stellenbosch, South Africa to collaborate with colleagues Prof. B.M. Herbst and Prof. J. du Preez on the use of wavelet image processing in conjunction with pattern recognition techniques. The University of Stellenbosch has offered the PI partial funding to support a sabbatical visit in Fall 2003, the primary purpose of which is to enable the PI to develop and enhance his expertise in Pattern Recognition. During the first year, the grant supported publication of 3 refereed papers, presentation of 9 seminars and an intensive two-day course on wavelet theory. The grant supported the work of two students who functioned as research assistants.

  11. ESTIMATION OF MUSCLE ACTIVITY USING PROBABILITY DENSITY FUNCTIONS

    E-print Network

    Contents fragments: 2.1 EMG Data Acquisition; Chapter 3: EMG Posterior Probability Density Function Estimation Using Bayes' Theorem; Chapter 4: EMG and Kinematic Data Processing

  12. Kalman's Shrinkage for Wavelet-Based Despeckling of SAR Images

    Microsoft Academic Search

    Mario Mastriani; Alberto E. Giraldez

    2006-01-01

    In this paper, a new probability density function (pdf) is proposed to model the statistics of wavelet coefficients, and a simple Kalman's filter is derived from the new pdf using Bayesian estimation theory. Specifically, we decompose the speckled image into wavelet subbands, we apply the Kalman's filter to the high subbands, and reconstruct a despeckled image from the modified detail

  13. ESTIMATING THE DENSITY OF DRY SNOW LAYERS FROM HARDNESS, AND HARDNESS FROM DENSITY

    E-print Network

    Jamieson, Bruce

    Daehyun Kim and Bruce Jamieson relate the density and hardness of dry snow layers for common grain types. These relations have been widely used to estimate the density of layers from hardness, and to estimate the hardness of layers in snowpack evolution models. Since 2000, the database of snow layers has

  14. ESTIMATING ABUNDANCE AND DENSITY: ADDITIONAL METHODS

    E-print Network

    Krebs, Charles J.

    Several methods have been developed for population estimation in which the organisms need ... These methods were first developed in the 1940s for wildlife and fisheries management to get estimates ... set of methods are of much more recent development and are based on the principle of resighting

  15. Density estimation using the trapping web design: A geometric analysis

    USGS Publications Warehouse

    Link, W.A.; Barker, R.J.

    1994-01-01

    Population densities for small mammal and arthropod populations can be estimated using capture frequencies for a web of traps. A conceptually simple geometric analysis that avoids the need to estimate a point on a density function is proposed. This analysis incorporates data from the outermost rings of traps, explaining large capture frequencies in these rings rather than truncating them from the analysis.

  16. Probability density function (pdf) estimation using isocontours/isosurfaces

    E-print Network

    Escolano, Francisco

    Probability density function (pdf) estimation using isocontours/isosurfaces, with applications to image registration and image filtering; circular/spherical density estimation; kernel width/bandwidth/number of components and the bias/variance tradeoff (large bandwidth: high bias; low bandwidth: high variance).

  17. Nonparametric estimation of plant density by the distance method

    USGS Publications Warehouse

    Patil, S.A.; Burnham, K.P.; Kovner, J.L.

    1979-01-01

    A relation between the plant density and the probability density function of the nearest neighbor distance (squared) from a random point is established under fairly broad conditions. Based upon this relationship, a nonparametric estimator for the plant density is developed and presented in terms of order statistics. Consistency and asymptotic normality of the estimator are discussed. An interval estimator for the density is obtained. The modifications of this estimator and its variance are given when the distribution is truncated. Simulation results are presented for regular, random and aggregated populations to illustrate the nonparametric estimator and its variance. A numerical example from field data is given. Merits and deficiencies of the estimator are discussed with regard to its robustness and variance.
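
    For orientation, the classical point-to-nearest-plant estimator built on this same relationship can be written in a few lines; note this is the simple parametric version under complete spatial randomness, not the paper's nonparametric order-statistic estimator.

        import math

        def plant_density_from_distances(r):
            # n random sample points; r[i] is the distance from point i to its
            # nearest plant. Under complete spatial randomness the unbiased
            # estimator is (n - 1) / (pi * sum of squared distances).
            n = len(r)
            return (n - 1) / (math.pi * sum(d * d for d in r))

        print(plant_density_from_distances([1.2, 0.8, 2.1, 1.5, 0.9]))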

  18. An adaptive composite density estimator for k -tree sampling

    Microsoft Academic Search

    Steen Magnussen; Lutz Fehrman; William J. Platt

    Density estimators for k-tree distance sampling are sensitive to the amount of extra Poisson variance in distances to the kth tree. To lessen this sensitivity, we propose an adaptive composite estimator (COM). In simulated sampling from 16 test populations, a three-component composite density estimator (COM), with weights determined by a multinomial logistic function of four readily available ancillary variables, was identified as

  19. Morphology driven density distribution estimation for small bodies

    NASA Astrophysics Data System (ADS)

    Takahashi, Yu; Scheeres, D. J.

    2014-05-01

    We explore methods to detect and characterize the internal mass distribution of small bodies using the gravity field and shape of the body as data, both of which are determined from the orbit determination process. The discrepancies in the spherical harmonic coefficients are compared between the measured gravity field and the gravity field generated by a homogeneous density assumption. The discrepancies are shown for six different heterogeneous density distribution models and two small bodies, namely 1999 KW4 and Castalia. Using these differences, a constraint is enforced on the internal density distribution of an asteroid, creating an archive of characteristics associated with the same-degree spherical harmonic coefficients. Following the initial characterization of the heterogeneous density distribution models, a generalized density estimation method to recover the hypothetical (i.e., nominal) density distribution of the body is considered. We propose this method as the block density estimation, which dissects the entire body into small slivers and blocks, each homogeneous within itself, to estimate their density values. Significant similarities are observed between the block model and mass concentrations. However, the block model does not suffer errors from shape mismodeling, and the number of blocks can be controlled with ease to yield a unique solution to the density distribution. The results show that the block density estimation approximates the given gravity field well, yielding higher accuracy as the resolution of the density map is increased. The estimated density distribution also computes the surface potential and acceleration to within 10% for the particular cases tested in the simulations, an accuracy that is not achievable with the conventional spherical harmonic gravity field. The block density estimation can be a useful tool for recovering the internal density distribution of small bodies, both for scientific study and for mapping out the gravity field environment in close proximity to the small body's surface for accurate trajectory design and safe navigation in future missions.

  20. Simulating from the posterior density of Bayesian wavelet regression estimates

    E-print Network

    Barber, Stuart

    A prior distribution is placed on each wavelet coefficient and updated by the observed data y to form a posterior for each coefficient. We then estimate each coefficient by the median of its posterior distribution, and the inverse DWT is applied to the resulting estimates to form

  1. Neutral wind estimation from 4-D ionospheric electron density images

    Microsoft Academic Search

    S. Datta-Barua; G. S. Bust; G. Crowley; N. Curtis

    2009-01-01

    We develop a new inversion algorithm for Estimating Model Parameters from Ionospheric Reverse Engineering (EMPIRE). The EMPIRE method uses four-dimensional images of global electron density to estimate the field-aligned neutral wind ionospheric driver when direct measurement is not available. We begin with a model of the electron continuity equation that includes production and loss rate estimates, as well as E

  2. Density estimation from the sonic log: A case study

    SciTech Connect

    DiSiena, J.P.; Hilterman, F.J. [Geophysical Development Corp., Houston, TX (United States)

    1994-12-31

    In this case study, the authors estimate the bulk densities which would be measured by the density log in a well. They base this estimate on the sonic log, a derived lithology log and velocity-density trend curves. Two published methods based on Gardner et al.'s (1974) relationship and an alternate approach that utilizes an areal trend analysis are evaluated. In comparison with the observed density, Gardner's relationship underpredicts the shale density and overpredicts the sand density. A modification of Gardner's equation (Castagna et al., 1993), which utilizes different coefficients for each lithology, produces a better estimate. However, the results vary from well to well. A local database within their study area provides an empirical calibration to improve upon the Gardner-type relationships for this area. Approximately 1,000 square miles with 50 wells make up their study area in offshore Louisiana, centered on South Marsh Island Block 106. These logs constitute a local database for determining trends in velocity and density for a two-component lithology of sand and shale. The authors identify a linear relationship between the density and the logarithm of velocity for both sand and shale. Mixing the sand and shale relationships based on their volume lithologic fractions, they arrive at their density estimate. In comparison to the modified Gardner's method, a comparable to better estimate of the densities is obtained. Furthermore, the linear relationship allows for easy fine-tuning of the local density prediction. If a portion of the well has a density log, they can calibrate the relationships for the remainder of the well. These results show a remarkable fit to the density curve, with errors of less than 2%. When discrepancies are evident, the predicted curve can be used to edit other logs or to indicate the presence of gas.
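
    A sketch of the two relationships being compared, with Gardner's published coefficients and hypothetical local calibration constants (the intercepts and slopes below must be fitted to local logs, as the authors do):

        import math

        def gardner_density(vp_ft_s, a=0.23, b=0.25):
            # Gardner et al. (1974): rho = a * Vp**b, Vp in ft/s, rho in g/cm^3;
            # Castagna et al. (1993) use lithology-specific a and b.
            return a * vp_ft_s ** b

        def local_density(vp_ft_s, v_shale,
                          sand_cal=(1.0, 0.35), shale_cal=(1.1, 0.30)):
            # Density linear in log10(velocity) per lithology, mixed by volume
            # fraction. The (intercept, slope) pairs here are hypothetical and
            # stand in for a calibration on local logs.
            logv = math.log10(vp_ft_s)
            rho_sand = sand_cal[0] + sand_cal[1] * logv
            rho_shale = shale_cal[0] + shale_cal[1] * logv
            return (1.0 - v_shale) * rho_sand + v_shale * rho_shale

        print(gardner_density(10000.0), local_density(10000.0, v_shale=0.4))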

  3. Nonparametric density estimation in presence of bias and censoring

    Microsoft Academic Search

    E. Brunel; F. Comte; A. Guilloux

    2009-01-01

    We consider projection estimator methods for the nonparametric estimation of the density of i.i.d. biased observations with a general known bias function w and under right censoring. Adaptive procedures to catch the optimal estimator among a collection by contrast penalization are investigated and proved to give efficient estimators with optimal nonparametric rates of convergence. Monte-Carlo experiments complete the study and

  4. Baseline wander correction in pulse waveforms using wavelet-based cascaded adaptive filter.

    PubMed

    Xu, Lisheng; Zhang, David; Wang, Kuanquan; Li, Naimin; Wang, Xiaoyun

    2007-05-01

    Pulse diagnosis is a convenient, inexpensive, painless, and non-invasive diagnostic method. Quantifying pulse diagnosis requires first acquiring and recording pulse waveforms with a set of sensors, and then analyzing these waveforms. However, respiration and motion artifacts during pulse waveform acquisition can introduce baseline wander. It is necessary, therefore, to remove the pulse waveform's baseline wander in order to perform accurate pulse waveform analysis. This paper presents a wavelet-based cascaded adaptive filter (CAF) to remove the baseline wander of pulse waveforms. To evaluate the level of baseline wander, we introduce a criterion: the energy ratio (ER) of the pulse waveform to its baseline wander. If the ER is more than a given threshold, the baseline wander can be removed by cubic spline estimation alone; otherwise it must be filtered by, in sequence, a discrete Meyer wavelet filter and cubic spline estimation. Compared with traditional methods such as cubic spline estimation, morphology filters and linear-phase finite impulse response (FIR) least-squares-error digital filters, the experimental results on 50 simulated and 500 real pulse signals demonstrate the power of the CAF filter both in removing baseline wander and in preserving the diagnostic information of pulse waveforms. The CAF filter can also be used to remove the baseline wander of other physiological signals, such as the ECG. PMID:16930579
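
    The decision logic of the CAF can be sketched as follows, assuming a simplified energy-ratio definition, an arbitrary threshold, and spline knots at fixed spacing; the discrete Meyer wavelet stage is left as a placeholder.

        import numpy as np
        from scipy.interpolate import CubicSpline

        def spline_baseline(t, x, knot_every=100):
            # Baseline wander estimated as a cubic spline through local means.
            knots = np.arange(0, len(x), knot_every)
            means = np.array([x[k:k + knot_every].mean() for k in knots])
            return CubicSpline(t[knots], means)(t)

        def caf_correct(t, x, er_threshold=10.0):
            # Simplified energy ratio of signal to estimated baseline; if it is
            # below the (hypothetical) threshold, the paper would first apply a
            # discrete Meyer wavelet filter before the spline stage.
            base = spline_baseline(t, x)
            if np.sum(x ** 2) / max(np.sum(base ** 2), 1e-12) < er_threshold:
                pass  # wavelet pre-filtering stage would go here
            return x - base

        t = np.linspace(0.0, 10.0, 1000)
        x = np.sin(2 * np.pi * 1.2 * t) + 0.5 * np.sin(2 * np.pi * 0.05 * t)
        print(caf_correct(t, x).std())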

  5. Maximum likelihood estimation of a multivariate log-concave density

    E-print Network

    Cule, Madeleine

    2010-01-12

    . Density estimation is often one stage in a more complicated statistical procedure. With this in mind, we show how the estimator may be used for plug-in estimation of statistical functionals. A second important extension is the use of log-concave components... of Section 1.2.1 some restrictions are necessary to ensure that the density does not get too “spiky”. Shape-constrained maximum likelihood inference was first introduced by Grenander (1956) in the context of estimating mortality under the assumption...

  6. Wavelet-based face verification for constrained platforms

    NASA Astrophysics Data System (ADS)

    Sellahewa, Harin; Jassim, Sabah A.

    2005-03-01

    Human identification based on facial images is one of the most challenging tasks in comparison to identification based on other biometric features such as fingerprints, palm prints or the iris. Facial recognition is the most natural and suitable method of identification for security-related applications. This paper is concerned with wavelet-based schemes for efficient face verification suitable for implementation on devices that are constrained in memory size and computational power, such as PDAs and smartcards. Besides minimal storage requirements, we should apply as few pre-processing procedures as possible, which are often needed to deal with variation in recording conditions. We propose the LL coefficients of wavelet-transformed face images as the feature vectors for face verification, and compare the performance of this scheme with that of PCA applied in the LL-subband at levels 3, 4 and 5. We shall also compare the performance of various versions of our scheme with those of well-established PCA face verification schemes on the BANCA database as well as the ORL database. In many cases, the wavelet-only feature vector scheme has the best performance while maintaining efficiency and requiring minimal pre-processing steps. The significance of these results is their efficiency and suitability for platforms of constrained computational power and storage capacity (e.g. smartcards). Moreover, working at or beyond the level 3 LL-subband results in robustness against high-rate compression and noise interference.
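
    Extracting the LL-subband feature vector described above takes a few lines with a wavelet library; in this sketch the wavelet choice, the depth and the normalization are assumptions, not the paper's exact settings.

        import numpy as np
        import pywt

        def ll_feature_vector(face_image, wavelet='haar', depth=3):
            # Keep only the approximation (LL) subband at each of `depth`
            # 2-D DWT stages; at depth k it holds about 1/4**k of the pixels.
            ll = face_image
            for _ in range(depth):
                ll, _ = pywt.dwt2(ll, wavelet)
            v = ll.ravel().astype(float)
            return v / np.linalg.norm(v)  # normalization is an assumption

        img = np.random.rand(112, 92)     # ORL images are 112 x 92 pixels
        print(ll_feature_vector(img).shape)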

  7. Wavelet-based laser-induced ultrasonic inspection in pipes

    NASA Astrophysics Data System (ADS)

    Baltazar-López, Martín E.; Suh, Steve; Chona, Ravinder; Burger, Christian P.

    2006-02-01

    The feasibility of detecting localized defects in tubing using wavelet-based laser-induced ultrasonic guided waves as an inspection method is examined. Ultrasonic guided waves initiated and propagating in hollow cylinders (pipes and/or tubes) are studied as an alternative, robust, nondestructive in situ inspection method. Contrary to other traditional methods for pipe inspection, in which contact transducers (electromagnetic, piezoelectric) and/or coupling media (immersion liquids) are used, this method is characterized by its non-contact nature. This characteristic is particularly important in applications involving nondestructive evaluation (NDE) of materials because the signal being detected corresponds only to the induced wave. Cylindrical guided waves are generated using a Q-switched Nd:YAG laser, and a fiber tip interferometry (FTI) system is used to acquire the waves. Guided wave experimental techniques are developed for the measurement of phase velocities to determine elastic properties of the material and the location and geometry of flaws, including inclusions, voids, and cracks in hollow cylinders. Compared to traditional bulk wave methods, the use of guided waves offers several important potential advantages, including better inspection efficiency, applicability to in situ tube inspection, and fewer evaluation fluctuations with increased reliability.

  8. Experimental and numerical evaluation of wavelet based damage detection methodologies

    NASA Astrophysics Data System (ADS)

    Quiñones, Mireya M.; Montejo, Luis A.; Jang, Shinae

    2015-03-01

    This article presents an evaluation of the capabilities of wavelet-based methodologies for damage identification in civil structures. Two different approaches were evaluated: (1) analysis of the evolution of the structure's frequencies by means of the continuous wavelet transform and (2) analysis of the singularities generated in the high-frequency response of the structure through the detail functions obtained via the fast wavelet transform. The methodologies were evaluated using experimental and numerically simulated data. It was found that the selection of appropriate wavelet parameters is critical for a successful analysis of the signal. Wavelet parameters should be selected based on the expected frequency content of the signal and the desired time and frequency resolutions. Identification of frequency shifts via ridge extraction of the wavelet map was successful in most of the experimental and numerical scenarios investigated. Moreover, while the frequency shift can be inferred most of the time, the exact time at which it occurs is not evident from the ridge alone; this information can be retrieved from the spike location in the fast wavelet transform analysis. It is therefore recommended to perform both types of analysis and consider the results together.

  9. Wavelet-based acoustic emission detection method with adaptive thresholding

    NASA Astrophysics Data System (ADS)

    Menon, Sunil; Schoess, Jeffrey N.; Hamza, Rida; Busch, Darryl

    2000-06-01

    Reductions in Navy maintenance budgets and available personnel have dictated the need to transition from time-based to 'condition-based' maintenance. Achieving this will require new enabling diagnostic technologies. One such technology, the use of acoustic emission for the early detection of helicopter rotor head dynamic component faults, has been investigated by Honeywell Technology Center for its rotor acoustic monitoring system (RAMS). This ambitious, 38-month, proof-of-concept effort, which was a part of the Naval Surface Warfare Center Air Vehicle Diagnostics System program, culminated in a successful three-week flight test of the RAMS system at Patuxent River Flight Test Center in September 1997. The flight test results demonstrated that stress-wave acoustic emission technology can detect signals equivalent to small fatigue cracks in rotor head components and can do so across the rotating articulated rotor head joints and in the presence of other background acoustic noise generated during flight operation. This paper presents the results of stress wave data analysis of the flight-test dataset using wavelet-based techniques to assess background operational noise vs. machinery failure detection results.

  10. A neural and morphological method for wavelet-based image compression

    Microsoft Academic Search

    W. T. de Almeida Filho; A. D. Doria Neto; A. M. Brito Junior

    2002-01-01

    Image compression using the wavelet transform has several advantages over other transform methods. However, wavelet-based compression methods require not only the encoding of the significant coefficients, but also of their positions within the image. The paper presents a wavelet-based image compression method where the significance map is pre-processed using mathematical morphology techniques to create clusters of significant coefficients. It is

  11. A novel wavelet-based finite element method for the analysis of rotor-bearing systems

    Microsoft Academic Search

    Jiawei Xiang; Dongdi Chen; Xuefeng Chen; Zhengjia He

    2009-01-01

    The rotor dynamic theory, combined with finite element method, has been widely used over the last three decades in order to calculate the dynamic parameters in rotor-bearing systems. Since the wavelet-based elements offer multi-scale models, particularly in modeling complex systems, the wavelet-based rotating shaft elements are constructed to model rotor-bearing systems. The effects of translational and rotatory inertia, the gyroscopic

  12. Unbiased estimators of wildlife population densities using aural information

    E-print Network

    Durland, Eric Newton

    1969-01-01

    Title page: "Unbiased Estimators of Wildlife Population Densities Using Aural Information", a thesis by Eric Newton Durland, submitted to the Graduate College of Texas A&M University in partial fulfillment of the requirements for the degree of Master of Science, May 1969. Major subject: Statistics.

  13. Evaluation of wolf density estimation from radiotelemetry data

    USGS Publications Warehouse

    Burch, J.W.; Adams, L.G.; Follmann, E.H.; Rexstad, E.A.

    2005-01-01

    Density estimation of wolves (Canis lupus) requires a count of individuals and an estimate of the area those individuals inhabit. With radiomarked wolves, the count is straightforward but estimation of the area is more difficult and often given inadequate attention. The population area, based on the mosaic of pack territories, is influenced by sampling intensity similar to the estimation of individual home ranges. If sampling intensity is low, population area will be underestimated and wolf density will be inflated. Using data from studies in Denali National Park and Preserve, Alaska, we investigated these relationships using Monte Carlo simulation to evaluate effects of radiolocation effort and number of marked packs on density estimation. As the number of adjoining pack home ranges increased, fewer relocations were necessary to define a given percentage of population area. We present recommendations for monitoring wolves via radiotelemetry.

  14. Incorporating prior knowledge into nonparametric conditional density estimation

    Microsoft Academic Search

    Peter Krauthausen; Masoud Roschani; Uwe D. Hanebeck

    2011-01-01

    In this paper, the problem of sparse nonparametric conditional density estimation based on samples and prior knowledge is addressed. The prior knowledge may be restricted to parts of the state space and given as generative models in the form of mean-function constraints or as probabilistic models in the form of Gaussian mixture densities. The key idea is the introduction of

  15. Asymptotic Equivalence of Density Estimation and Gaussian White Noise

    E-print Network

    Nussbaum, Michael

    Signal recovery in Gaussian white noise with variance tending to zero is considered. It is shown that density estimation from i.i.d. observations with density f is globally asymptotically equivalent to a white noise experiment with drift f^{1/2} and variance 1/(4n).

  16. MODEL-BASED CLUSTERING, DISCRIMINANT ANALYSIS, AND DENSITY ESTIMATION

    E-print Network

    Washington at Seattle, University of

    By Chris Fraley and Adrian E. Raftery, University of Washington, Seattle, Washington 98195, USA (www.stat.washington.edu/fraley, www.stat.washington.edu/raftery). Cluster analysis is the automated search for groups

  17. Improving 3D Wavelet-Based Compression of Hyperspectral Images

    NASA Technical Reports Server (NTRS)

    Klimesh, Matthew; Kiely, Aaron; Xie, Hua; Aranki, Nazeeh

    2009-01-01

    Two methods of increasing the effectiveness of three-dimensional (3D) wavelet-based compression of hyperspectral images have been developed. (As used here, images signifies both images and digital data representing images.) The methods are oriented toward reducing or eliminating detrimental effects of a phenomenon, referred to as spectral ringing, that is described below. In 3D wavelet-based compression, an image is represented by a multiresolution wavelet decomposition consisting of several subbands obtained by applying wavelet transforms in the two spatial dimensions corresponding to the two spatial coordinate axes of the image plane, and by applying wavelet transforms in the spectral dimension. Spectral ringing is named after the more familiar spatial ringing (spurious spatial oscillations) that can be seen parallel to and near edges in ordinary images reconstructed from compressed data. These ringing phenomena are attributable to effects of quantization. In hyperspectral data, the individual spectral bands play the role of edges, causing spurious oscillations to occur in the spectral dimension. In the absence of such corrective measures as the present two methods, spectral ringing can manifest itself as systematic biases in some reconstructed spectral bands and can reduce the effectiveness of compression of spatially-low-pass subbands. One of the two methods is denoted mean subtraction. The basic idea of this method is to subtract mean values from spatial planes of spatially low-pass subbands prior to encoding, because (a) such spatial planes often have mean values that are far from zero and (b) zero-mean data are better suited for compression by methods that are effective for subbands of two-dimensional (2D) images. In this method, after the 3D wavelet decomposition is performed, mean values are computed for and subtracted from each spatial plane of each spatially-low-pass subband. The resulting data are converted to sign-magnitude form and compressed in a manner similar to that of a baseline hyperspectral-image-compression method. The mean values are encoded in the compressed bit stream and added back to the data at the appropriate decompression step. The overhead incurred by encoding the mean values (only a few bits per spectral band) is negligible with respect to the huge size of a typical hyperspectral data set. The other method is denoted modified decomposition. This method is so named because it involves a modified version of a commonly used multiresolution wavelet decomposition, known in the art as the 3D Mallat decomposition, in which (a) the first of multiple stages of a 3D wavelet transform is applied to the entire dataset and (b) subsequent stages are applied only to the horizontally-, vertically-, and spectrally-low-pass subband from the preceding stage. In the modified decomposition, in stages after the first, not only is the spatially-low-pass, spectrally-low-pass subband further decomposed, but also spatially-low-pass, spectrally-high-pass subbands are further decomposed spatially. Either method can be used alone to improve the quality of a reconstructed image. Alternatively, the two methods can be combined by first performing modified decomposition, then subtracting the mean values from spatial planes of spatially-low-pass subbands.
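
    The mean-subtraction step itself is straightforward; a sketch for one spatially-low-pass subband stored as a 3D array, with the axis convention (spectral plane index first) assumed rather than taken from the report:

        import numpy as np

        def subtract_plane_means(subband):
            # One mean per spatial plane of a spatially-low-pass subband; the
            # means are returned so they can be stored in the bit stream and
            # added back at decompression.
            means = subband.mean(axis=(1, 2))
            return subband - means[:, None, None], means

        cube = np.random.rand(20, 32, 32) + 5.0  # toy subband far from zero mean
        zero_mean, means = subtract_plane_means(cube)
        print(zero_mean.mean(), means.shape)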

  18. A wavelet-based approach to face verification/recognition

    NASA Astrophysics Data System (ADS)

    Jassim, Sabah; Sellahewa, Harin

    2005-10-01

    Face verification/recognition is a tough challenge in comparison to identification based on other biometrics such as the iris or fingerprints. Yet, due to its unobtrusive nature, the face is naturally suitable for security-related applications. The face verification process relies on feature extraction from face images. Current schemes are either geometric-based or template-based. In the latter, the face image is statistically analysed to obtain a set of feature vectors that best describe it. Performance of a face verification system is affected by image variations due to illumination, pose, occlusion, expressions and scale. This paper extends our recent work on face verification for constrained platforms, where the feature vector of a face image consists of the coefficients in the wavelet-transformed LL-subbands at depth 3 or more. It was demonstrated that the wavelet-only feature vector scheme has performance comparable to sophisticated state-of-the-art schemes when tested on two benchmark databases (ORL and BANCA). The significance of those results stems from the fact that the size of the k-th LL-subband is 1/4^k of the original image size. Here, we investigate the use of wavelet coefficients in various subbands at level 3 or 4 using various wavelet filters. We shall compare the performance of the wavelet-based scheme for different filters at different subbands with a number of state-of-the-art face verification/recognition schemes on two benchmark databases, namely ORL and the control section of BANCA. We shall demonstrate that our schemes have performance comparable to (or better than) the best performing other schemes.

  19. Wavelet-based multiscale performance analysis: An approach to assess and improve hydrological models

    NASA Astrophysics Data System (ADS)

    Rathinasamy, Maheswaran; Khosa, Rakesh; Adamowski, Jan; ch, Sudheer; Partheepan, G.; Anand, Jatin; Narsimlu, Boini

    2014-12-01

    The temporal dynamics of hydrological processes are spread across different time scales and, as such, the performance of hydrological models cannot be estimated reliably from global performance measures that assign a single number to the fit of a simulated time series to an observed reference series. Accordingly, it is important to analyze model performance at different time scales. Wavelets have been used extensively in the area of hydrological modeling for multiscale analysis, and have been shown to be very reliable and useful in understanding dynamics across time scales and as these evolve in time. In this paper, a wavelet-based multiscale performance measure for hydrological models is proposed and tested (i.e., Multiscale Nash-Sutcliffe Criteria and Multiscale Normalized Root Mean Square Error). The main advantage of this method is that it provides a quantitative measure of model performance across different time scales. In the proposed approach, model and observed time series are decomposed using the Discrete Wavelet Transform (known as the à trous wavelet transform), and performance measures of the model are obtained at each time scale. The applicability of the proposed method was explored using various case studies, both real and synthetic. The synthetic case studies included various kinds of errors (e.g., timing error, under and over prediction of high and low flows) in outputs from a hydrologic model. The real-data case studies included simulation results of both the process-based Soil Water Assessment Tool (SWAT) model, as well as statistical models, namely the Coupled Wavelet-Volterra (WVC), Artificial Neural Network (ANN), and Auto Regressive Moving Average (ARMA) methods. For the SWAT model, data from Wainganga and Sind Basin (India) were used, while for the Wavelet Volterra, ANN and ARMA models, data from the Cauvery River Basin (India) and Fraser River (Canada) were used. The study also explored the effect of the choice of the wavelets in multiscale model evaluation. It was found that the proposed wavelet-based performance measures, namely the MNSC (Multiscale Nash-Sutcliffe Criteria) and MNRMSE (Multiscale Normalized Root Mean Square Error), are a more reliable measure than traditional performance measures such as the Nash-Sutcliffe Criteria (NSC), Root Mean Square Error (RMSE), and Normalized Root Mean Square Error (NRMSE). Further, the proposed methodology can be used to: i) compare different hydrological models (both physical and statistical models), and ii) help in model calibration.
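
    A minimal sketch of the multiscale scoring idea, computing a Nash-Sutcliffe score per wavelet scale; using the stationary wavelet transform of PyWavelets in place of the paper's à trous implementation, and a Haar wavelet, are simplifying assumptions.

        import numpy as np
        import pywt

        def multiscale_nse(obs, sim, wavelet='haar', level=3):
            # Nash-Sutcliffe efficiency per detail band, so that a timing error
            # confined to one scale is not hidden by a good global fit. Series
            # length must be divisible by 2**level for pywt.swt.
            scores = []
            for (_, d_obs), (_, d_sim) in zip(pywt.swt(obs, wavelet, level=level),
                                              pywt.swt(sim, wavelet, level=level)):
                scores.append(1.0 - np.sum((d_obs - d_sim) ** 2)
                              / np.sum((d_obs - d_obs.mean()) ** 2))
            return scores

        t = np.arange(512)
        obs = np.sin(t / 20.0) + 0.1 * np.random.default_rng(1).standard_normal(512)
        sim = np.sin((t - 3) / 20.0)  # simulation with a small timing error
        print(multiscale_nse(obs, sim))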

  20. Atmospheric Density Corrections Estimated from Fitted Drag Coefficients

    NASA Astrophysics Data System (ADS)

    McLaughlin, C. A.; Lechtenberg, T. F.; Mance, S. R.; Mehta, P.

    2010-12-01

    Fitted drag coefficients estimated using GEODYN, the NASA Goddard Space Flight Center Precision Orbit Determination and Geodetic Parameter Estimation Program, are used to create density corrections. The drag coefficients were estimated for Stella, Starlette and GFZ using satellite laser ranging (SLR) measurements; and for GEOSAT Follow-On (GFO) using SLR, Doppler, and altimeter crossover measurements. The data analyzed covers years ranging from 2000 to 2004 for Stella and Starlette, 2000 to 2002 and 2005 for GFO, and 1995 to 1997 for GFZ. The drag coefficient was estimated every eight hours. The drag coefficients over the course of a year show a consistent variation about the theoretical and yearly average values that primarily represents a semi-annual/seasonal error in the atmospheric density models used. The atmospheric density models examined were NRLMSISE-00 and MSIS-86. The annual structure of the major variations was consistent among all the satellites for a given year and consistent among all the years examined. The fitted drag coefficients can be converted into density corrections every eight hours along the orbit of the satellites. In addition, drag coefficients estimated more frequently can provide a higher frequency of density correction.
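
    The conversion rests on the fact that drag acceleration is proportional to the product of drag coefficient and density, so model-density error absorbed into a fitted drag coefficient implies a multiplicative density correction; a sketch of the generic idea only, not the GEODYN processing itself.

        def corrected_density(rho_model, cd_fitted, cd_physical):
            # Drag acceleration scales with Cd * rho, so density-model error
            # absorbed into a fitted Cd implies
            # rho_true ~ rho_model * (Cd_fitted / Cd_physical).
            return rho_model * (cd_fitted / cd_physical)

        # e.g. model density 4e-13 kg/m^3, fitted Cd 2.4 vs. physical Cd 2.2:
        print(corrected_density(4e-13, 2.4, 2.2))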

  1. Non-local crime density estimation incorporating housing information.

    PubMed

    Woodworth, J T; Mohler, G O; Bertozzi, A L; Brantingham, P J

    2014-11-13

    Given a discrete sample of event locations, we wish to produce a probability density that models the relative probability of events occurring in a spatial domain. Standard density estimation techniques do not incorporate priors informed by spatial data. Such methods can result in assigning significant positive probability to locations where events cannot realistically occur. In particular, when modelling residential burglaries, standard density estimation can predict residential burglaries occurring where there are no residences. Incorporating the spatial data can inform the valid region for the density. When modelling very few events, additional priors can help to correctly fill in the gaps. Learning and enforcing correlation between spatial data and event data can yield better estimates from fewer events. We propose a non-local version of maximum penalized likelihood estimation based on the H^1 Sobolev seminorm regularizer that computes non-local weights from spatial data to obtain more spatially accurate density estimates. We evaluate this method in application to a residential burglary dataset from San Fernando Valley with the non-local weights informed by housing data or a satellite image. PMID:25288817

  2. Non-local crime density estimation incorporating housing information

    PubMed Central

    Woodworth, J. T.; Mohler, G. O.; Bertozzi, A. L.; Brantingham, P. J.

    2014-01-01

    Given a discrete sample of event locations, we wish to produce a probability density that models the relative probability of events occurring in a spatial domain. Standard density estimation techniques do not incorporate priors informed by spatial data. Such methods can result in assigning significant positive probability to locations where events cannot realistically occur. In particular, when modelling residential burglaries, standard density estimation can predict residential burglaries occurring where there are no residences. Incorporating the spatial data can inform the valid region for the density. When modelling very few events, additional priors can help to correctly fill in the gaps. Learning and enforcing correlation between spatial data and event data can yield better estimates from fewer events. We propose a non-local version of maximum penalized likelihood estimation based on the H^1 Sobolev seminorm regularizer that computes non-local weights from spatial data to obtain more spatially accurate density estimates. We evaluate this method in application to a residential burglary dataset from San Fernando Valley with the non-local weights informed by housing data or a satellite image. PMID:25288817

  3. Kernel density estimation of a multidimensional efficiency profile

    NASA Astrophysics Data System (ADS)

    Poluektov, A.

    2015-02-01

    Kernel density estimation is a convenient way to estimate the probability density of a distribution given a sample of data points. However, it has certain drawbacks: proper description of the density using narrow kernels needs large data samples, whereas if the kernel width is large, boundaries and narrow structures tend to be smeared. Here, an approach to correct for such effects is proposed that uses an approximate density to describe narrow structures and boundaries. The approach is shown to be well suited for the description of the efficiency shape over a multidimensional phase space in a typical particle physics analysis. An example is given for the five-dimensional phase space of the Λb0 → D0pπ− decay.

  4. Kernel density estimation of a multidimensional efficiency profile

    E-print Network

    Anton Poluektov

    2014-11-20

    Kernel density estimation is a convenient way to estimate the probability density of a distribution given a sample of data points. However, it has certain drawbacks: proper description of the density using narrow kernels needs large data samples, whereas if the kernel width is large, boundaries and narrow structures tend to be smeared. Here, an approach to correct for such effects is proposed that uses an approximate density to describe narrow structures and boundaries. The approach is shown to be well suited for the description of the efficiency shape over a multidimensional phase space in a typical particle physics analysis. An example is given for the five-dimensional phase space of the $\Lambda_b^0 \to D^0 p \pi$ decay.

  5. Density estimation using KNN and a potential model

    NASA Astrophysics Data System (ADS)

    Lu, Yonggang; Qiao, Jiangang; Liao, Li; Yang, Wuyang

    2013-10-01

    Density-based clustering methods are usually more adaptive than other classical methods in that they can identify clusters of various shapes and can handle noisy data. A novel density estimation method is proposed using both the k-nearest neighbor (KNN) graph and a hypothetical potential field of the data points to capture the local and global data distribution information, respectively. An initial density score computed using KNN is used as the mass of the data point in computing the potential values. Then the computed potential is used as the new density estimate, from which the final clustering result is derived. All the parameters used in the proposed method are determined from the input data automatically. The new clustering method is evaluated by comparison with K-means++, DBSCAN, and CSPV. The experimental results show that the proposed method can determine the number of clusters automatically while producing competitive clustering results compared to the other three methods.
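
    A sketch of the two-stage construction described above, with a k-d tree for the KNN step; the Gaussian kernel, its width, and the inverse-distance mass definition are assumptions, whereas the paper determines its parameters from the data automatically.

        import numpy as np
        from scipy.spatial import cKDTree

        def potential_density(points, k=5, sigma=1.0):
            # Stage 1: local KNN density score used as each point's 'mass'
            # (inverse kNN-ball volume up to a constant).
            tree = cKDTree(points)
            dk = tree.query(points, k=k + 1)[0][:, -1]  # index 0 is the point itself
            mass = 1.0 / (dk ** points.shape[1] + 1e-12)
            # Stage 2: Gaussian potential field summing the masses.
            diff = points[:, None, :] - points[None, :, :]
            w = np.exp(-np.sum(diff ** 2, axis=-1) / (2.0 * sigma ** 2))
            return w @ mass

        pts = np.random.default_rng(2).standard_normal((200, 2))
        print(potential_density(pts).shape)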

  6. Quantiles, parametric-select density estimation, and bi-information parameter estimators

    NASA Technical Reports Server (NTRS)

    Parzen, E.

    1982-01-01

    A quantile-based approach to statistical analysis and probability modeling of data is presented which formulates statistical inference problems as functional inference problems in which the parameters to be estimated are density functions. Density estimators can be non-parametric (computed independently of model identified) or parametric-select (approximated by finite parametric models that can provide standard models whose fit can be tested). Exponential models and autoregressive models are approximating densities which can be justified as maximum entropy for respectively the entropy of a probability density and the entropy of a quantile density. Applications of these ideas are outlined to the problems of modeling: (1) univariate data; (2) bivariate data and tests for independence; and (3) two samples and likelihood ratios. It is proposed that bi-information estimation of a density function can be developed by analogy to the problem of identification of regression models.

  7. Density-ratio robustness in dynamic state estimation

    NASA Astrophysics Data System (ADS)

    Benavoli, Alessio; Zaffalon, Marco

    2013-05-01

    The filtering problem is addressed by taking into account imprecision in the knowledge about the probabilistic relationships involved. Imprecision is modelled in this paper by a particular closed convex set of probabilities that is known with the name of density ratio class or constant odds-ratio (COR) model. The contributions of this paper are the following. First, we shall define an optimality criterion based on the squared-loss function for the estimates derived from a general closed convex set of distributions. Second, after revising the properties of the density ratio class in the context of parametric estimation, we shall extend these properties to state estimation accounting for system dynamics. Furthermore, for the case in which the nominal density of the COR model is a multivariate Gaussian, we shall derive closed-form solutions for the set of optimal estimates and for the credible region. Third, we discuss how to perform Monte Carlo integrations to compute lower and upper expectations from a COR set of densities. Then we shall derive a procedure that, employing Monte Carlo sampling techniques, allows us to propagate in time both the lower and upper state expectation functionals and, thus, to derive an efficient solution of the filtering problem. Finally, we empirically compare the proposed estimator with the Kalman filter. This shows that our solution is more robust to the presence of modelling errors in the system and that, hence, appears to be a more realistic approach than the Kalman filter in such a case.

  8. Nonparametric probability density estimation by optimization theoretic techniques

    NASA Technical Reports Server (NTRS)

    Scott, D. W.

    1976-01-01

    Two nonparametric probability density estimators are considered. The first is the kernel estimator. The problem of choosing the kernel scaling factor based solely on a random sample is addressed. An interactive mode is discussed and an algorithm proposed to choose the scaling factor automatically. The second nonparametric probability estimate uses penalty function techniques with the maximum likelihood criterion. A discrete maximum penalized likelihood estimator is proposed and is shown to be consistent in the mean square error. A numerical implementation technique for the discrete solution is discussed and examples displayed. An extensive simulation study compares the integrated mean square error of the discrete and kernel estimators. The robustness of the discrete estimator is demonstrated graphically.

  9. Estimating the spectrum of a density matrix with LOCC

    E-print Network

    Manuel A. Ballester

    2006-02-01

    The problem of estimating the spectrum of a density matrix is considered. Other problems, such as bipartite pure state entanglement, can be reduced to spectrum estimation. A local operations and classical communication (LOCC) measurement strategy is shown which is asymptotically optimal. This means that, for a very large number of copies, it becomes unnecessary to perform collective measurements which should be more difficult to implement in practice.

  10. An Infrastructureless Approach to Estimate Vehicular Density in Urban Environments

    PubMed Central

    Sanguesa, Julio A.; Fogue, Manuel; Garrido, Piedad; Martinez, Francisco J.; Cano, Juan-Carlos; Calafate, Carlos T.; Manzoni, Pietro

    2013-01-01

    In Vehicular Networks, communication success usually depends on the density of vehicles, since a higher density allows shorter and more reliable wireless links. Thus, knowing the density of vehicles in a vehicular communications environment is important, as it indicates when better opportunities for wireless communication are available. However, vehicle density is highly variable in time and space. This paper deals with the importance of predicting the density of vehicles in vehicular environments to make decisions for enhancing the dissemination of warning messages between vehicles. We propose a novel mechanism to estimate the vehicular density in urban environments. Our mechanism uses as input parameters the number of beacons received per vehicle and the topological characteristics of the environment where the vehicles are located. Simulation results indicate that, unlike previous proposals based solely on the number of beacons received, our approach is able to accurately estimate the vehicular density, and therefore it could support more efficient dissemination protocols for vehicular environments, as well as improve previously proposed schemes. PMID:23435054

  11. Density estimation in tiger populations: combining information for strong inference

    USGS Publications Warehouse

    Gopalaswamy, Arjun M.; Royle, J. Andrew; Delampady, Mohan; Nichols, James D.; Karanth, K. Ullas; Macdonald, David W.

    2012-01-01

    A productive way forward in studies of animal populations is to efficiently make use of all the information available, either as raw data or as published sources, on critical parameters of interest. In this study, we demonstrate two approaches to the use of multiple sources of information on a parameter of fundamental interest to ecologists: animal density. The first approach produces estimates simultaneously from two different sources of data. The second approach was developed for situations in which initial data collection and analysis are followed up by subsequent data collection and prior knowledge is updated with new data using a stepwise process. Both approaches are used to estimate density of a rare and elusive predator, the tiger, by combining photographic and fecal DNA spatial capture–recapture data. The model, which combined information, provided the most precise estimate of density (8.5 ± 1.95 tigers/100 km2 [posterior mean ± SD]) relative to a model that utilized only one data source (photographic, 12.02 ± 3.02 tigers/100 km2 and fecal DNA, 6.65 ± 2.37 tigers/100 km2). Our study demonstrates that, by accounting for multiple sources of available information, estimates of animal density can be significantly improved.

  12. Improved Fast Gauss Transform and Efficient Kernel Density Estimation

    Microsoft Academic Search

    Changjiang Yang; Ramani Duraiswami; Nail A. Gumerov; Larry S. Davis

    2003-01-01

    Evaluating sums of multivariate Gaussians is a common computational task in computer vision and pattern recognition, including in the general and powerful kernel density estimation technique. The quadratic computational complexity of the summation is a significant barrier to the scalability of this algorithm to practical applications. The fast Gauss transform (FGT) has successfully
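
    The quadratic cost the abstract refers to is easy to see in a direct implementation; the sketch below evaluates the Gaussian sums naively in O(NM) time, which is exactly the bottleneck the fast Gauss transform is designed to remove. The function and data here are illustrative only.

    ```python
    import numpy as np

    def gauss_sum_direct(sources, targets, weights, h):
        """Direct evaluation of G(y_j) = sum_i w_i exp(-||y_j - x_i||^2 / h^2).
        Cost is O(N*M); the FGT reduces this to roughly O(N + M)."""
        d2 = ((targets[:, None, :] - sources[None, :, :]) ** 2).sum(axis=-1)
        return (weights[None, :] * np.exp(-d2 / h**2)).sum(axis=1)

    rng = np.random.default_rng(1)
    X, Y = rng.random((500, 2)), rng.random((400, 2))
    G = gauss_sum_direct(X, Y, np.ones(500) / 500, h=0.2)
    ```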

  13. Estimating the Density of Honeybee Colonies across Their Natural Range

    E-print Network

    Paxton, Robert

    The demography of the western honeybee (Apis mellifera) has not been considered by conservationists because

  14. Adaptive density estimation for directional data using needlets

    E-print Network

    P. Baldi; G. Kerkyacharian; D. Marinucci; D. Picard

    2008-07-31

    This paper is concerned with density estimation of directional data on the sphere. We introduce a procedure based on thresholding on a new type of spherical wavelets called needlets. We establish a minimax result and prove its optimality. We are motivated by astrophysical applications, in particular in connection with the analysis of ultra high energy cosmic rays.

  15. Extracting galactic structure parameters from multivariated density estimation

    NASA Technical Reports Server (NTRS)

    Chen, B.; Creze, M.; Robin, A.; Bienayme, O.

    1992-01-01

    Multivariate statistical analysis, including cluster analysis (unsupervised classification), discriminant analysis (supervised classification), principal component analysis (dimensionality reduction), and nonparametric density estimation, has been successfully used to search for meaningful associations in the 5-dimensional space of observables between observed points and sets of simulated points generated from a synthetic approach to galaxy modelling. These methodologies can be applied as new tools to obtain information about hidden structure that is otherwise unrecognizable, and to place important constraints on the space distribution of various stellar populations in the Milky Way. In this paper, we concentrate on illustrating how to use nonparametric density estimation to substitute for the true densities of both the simulated sample and the real sample in the five-dimensional space. In order to fit model-predicted densities to reality, we derive a system of n equations (where n is the total number of observed points) in m unknown parameters (where m is the number of predefined groups). A least-squares estimation then allows us to determine the density law of the different groups and components in the Galaxy. The output of our software, which can be used in many research fields, also gives the systematic error between the model and the observation via a Bayes rule.

  16. Estimating Density Gradients and Drivers from 3D Ionospheric Imaging

    NASA Astrophysics Data System (ADS)

    Datta-Barua, S.; Bust, G. S.; Curtis, N.; Reynolds, A.; Crowley, G.

    2009-12-01

    The transition regions at the edges of the ionospheric storm-enhanced density (SED) are important for a detailed understanding of the mid-latitude physical processes occurring during major magnetic storms. At the boundary, the density gradients are evidence of the drivers that link the larger processes of the SED, with its connection to the plasmasphere and prompt-penetration electric fields, to the smaller irregularities that result in scintillations. For this reason, we present our estimates of both the plasma variation with horizontal and vertical spatial scale of 10 - 100 km and the plasma motion within and along the edges of the SED. To estimate the density gradients, we use Ionospheric Data Assimilation Four-Dimensional (IDA4D), a mature data assimilation algorithm that has been developed over several years and applied to investigations of polar cap patches and space weather storms [Bust and Crowley, 2007; Bust et al., 2007]. We use the density specification produced by IDA4D with a new tool for deducing ionospheric drivers from 3D time-evolving electron density maps, called Estimating Model Parameters from Ionospheric Reverse Engineering (EMPIRE). The EMPIRE technique has been tested on simulated data from TIMEGCM-ASPEN and on IDA4D-based density estimates with ongoing validation from Arecibo ISR measurements [Datta-Barua et al., 2009a; 2009b]. We investigate the SED that formed during the geomagnetic super storm of November 20, 2003. We run IDA4D at low-resolution continent-wide, and then re-run it at high (~10 km horizontal and ~5-20 km vertical) resolution locally along the boundary of the SED, where density gradients are expected to be highest. We input the high-resolution estimates of electron density to EMPIRE to estimate the ExB drifts and field-aligned plasma velocities along the boundaries of the SED. We expect that these drivers contribute to the density structuring observed along the SED during the storm. Bust, G. S. and G. Crowley (2007), Tracking of polar cap patches using data assimilation, J. Geophys. Res., 112, A05307, doi:10.1029/2005JA011597. Bust, G. S., G. Crowley, T. W. Garner, T. L. Gaussiran II, R. W. Meggs, C. N. Mitchell, P. S. J. Spencer, P. Yin, and B. Zapfe (2007) ,Four Dimensional GPS Imaging of Space-Weather Storms, Space Weather, 5, S02003, doi:10.1029/2006SW000237. Datta-Barua, S., G. S. Bust, G. Crowley, and N. Curtis (2009a), Neutral wind estimation from 4-D ionospheric electron density images, J. Geophys. Res., 114, A06317, doi:10.1029/2008JA014004. Datta-Barua, S., G. Bust, and G. Crowley (2009b), "Estimating Model Parameters from Ionospheric Reverse Engineering (EMPIRE)," presented at CEDAR, Santa Fe, New Mexico, July 1.

  17. The Effect of Lidar Point Density on LAI Estimation

    NASA Astrophysics Data System (ADS)

    Cawse-Nicholson, K.; van Aardt, J. A.; Romanczyk, P.; Kelbe, D.; Bandyopadhyay, M.; Yao, W.; Krause, K.; Kampe, T. U.

    2013-12-01

    Leaf Area Index (LAI) is an important measure of forest health, biomass and carbon exchange, and is most commonly defined as the ratio of the leaf area to ground area. LAI is understood over large spatial scales and describes leaf properties over an entire forest, thus airborne imagery is ideal for capturing such data. Spectral metrics such as the normalized difference vegetation index (NDVI) have been used in the past for LAI estimation, but these metrics may saturate for high LAI values. Light detection and ranging (lidar) is an active remote sensing technology that emits light (most often at the wavelength 1064nm) and uses the return time to calculate the distance to intercepted objects. This yields information on three-dimensional structure and shape, which has been shown in recent studies to yield more accurate LAI estimates than NDVI. However, although lidar is a promising alternative for LAI estimation, minimum acquisition parameters (e.g. point density) required for accurate LAI retrieval are not yet well known. The objective of this study was to determine the minimum number of points per square meter required to describe the LAI measurements taken in-field. As part of a larger data collect, discrete lidar data were acquired by Kucera International Inc. over the Hemlock-Canadice State Forest, NY, USA in September 2012. The Leica ALS60 obtained a point density of 12 points per square meter and an effective ground sampling distance (GSD) of 0.15m. Up to three returns with intensities were recorded per pulse. As part of the same experiment, an AccuPAR LP-80 was used to collect LAI estimates at 25 sites on the ground. Sites were spaced approximately 80m apart and nine measurements were made in a grid pattern within a 20 x 20m site. Dominant species include Hemlock, Beech, Sugar Maple and Oak. This study has the benefit of very high-density data, which will enable a detailed map of intra-forest LAI. Understanding LAI at fine scales may be particularly useful in forest inventory applications and tree health evaluations. However, such high-density data are often not available over large areas. In this study we progressively downsampled the high-density discrete lidar data and evaluated the effect on LAI estimation. The AccuPAR data were used as validation and results were compared to existing LAI metrics. This will enable us to determine the minimum point density required for airborne lidar LAI retrieval. Preliminary results show that the data may be substantially thinned to estimate site-level LAI. More detailed results will be presented at the conference.
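
    The downsampling step described above can be sketched as simple random thinning of the point cloud to a target density. The function below is a hypothetical illustration, not the authors' processing chain.

    ```python
    import numpy as np

    def thin_to_density(points, area_m2, target_pts_per_m2, rng):
        """Randomly thin a lidar point cloud (N x 3 array) down to an
        approximate target point density for a downsampling experiment."""
        n_keep = int(target_pts_per_m2 * area_m2)
        if n_keep >= len(points):
            return points
        idx = rng.choice(len(points), size=n_keep, replace=False)
        return points[idx]

    # e.g. thin a 12 pts/m^2 tile to 2, 4, 8 pts/m^2, re-estimating LAI each time
    ```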

  18. Transformation-based density estimation for weighted distributions

    Microsoft Academic Search

    Hammou El Barmi; Jeffrey S. Simonoff

    1999-01-01

    In this paper we consider the estimation of a density f on the basis of a random sample from a weighted distribution G with density g given by g(x) = w(x)f(x)/μ_w, where w(u) > 0 for all u and μ_w = ∫ w(u)f(u) du < ∞. A special case of this situation is that of length-biased sampling, where w(x) = x. In this
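
    A minimal sketch of one standard route to this problem is given below: weight each kernel by 1/w(X_i) and estimate μ_w by the harmonic mean, using the identity E_g[1/w] = 1/μ_w. Note this is the classical weighted-kernel estimator, not the transformation-based estimator the paper itself develops.

    ```python
    import numpy as np

    def debiased_kde(x_grid, sample, w, h):
        """Estimate f from a w-biased sample (g = w f / mu_w) by weighting
        each observation by 1/w(X_i); w must be a vectorized callable."""
        inv_w = 1.0 / w(sample)
        mu_w_hat = len(sample) / inv_w.sum()      # since E_g[1/w] = 1/mu_w
        u = (x_grid[:, None] - sample[None, :]) / h
        kern = np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)
        return mu_w_hat * (kern * inv_w[None, :]).sum(axis=1) / (len(sample) * h)

    # length-biased sampling corresponds to w = lambda x: x
    ```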

  19. Estimating electric current densities in solar active regions

    E-print Network

    Wheatland, M S

    2015-01-01

    Electric currents in solar active regions are thought to provide the energy released via magnetic reconnection in solar flares. Vertical electric current densities $J_z$ at the photosphere may be estimated from vector magnetogram data, subject to substantial uncertainties. The values provide boundary conditions for nonlinear force-free modelling of active region magnetic fields. A method is presented for estimating values of $J_z$ taking into account uncertainties in vector magnetogram field values, and minimizing $J_z^2$ across the active region. The method is demonstrated using the boundary values of the field for a force-free twisted bipole, with the addition of noise at randomly chosen locations.

  20. Estimating Electric Current Densities in Solar Active Regions

    NASA Astrophysics Data System (ADS)

    Wheatland, M. S.

    2015-04-01

    Electric currents in solar active regions are thought to provide the energy released via magnetic reconnection in solar flares. Vertical electric current densities J_z at the photosphere may be estimated from vector magnetogram data, subject to substantial uncertainties. The values provide boundary conditions for nonlinear force-free modelling of active region magnetic fields. A method is presented for estimating values of J_z taking into account uncertainties in vector magnetogram field values, and minimising J_z^2 across the active region. The method is demonstrated using the boundary values of the field for a force-free twisted bipole, with the addition of noise at randomly chosen locations.
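
    Setting aside the uncertainty treatment that is the subject of these two papers, the basic quantity J_z follows from Ampère's law applied to the horizontal field components. A naive finite-difference sketch (array layout assumed to be [y, x]) is:

    ```python
    import numpy as np

    MU0 = 4e-7 * np.pi  # vacuum permeability, SI units

    def jz_from_magnetogram(bx, by, dx, dy):
        """Naive vertical current density from Ampere's law,
        J_z = (dBy/dx - dBx/dy) / mu0, via centered differences.
        This ignores the measurement uncertainties the papers address."""
        dby_dx = np.gradient(by, dx, axis=1)
        dbx_dy = np.gradient(bx, dy, axis=0)
        return (dby_dx - dbx_dy) / MU0
    ```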

  1. WAVELET-BASED FOVEATED IMAGE QUALITY MEASUREMENT FOR REGION OF INTEREST IMAGE CODING

    E-print Network

    Wang, Zhou

    The human visual system (HVS) is highly space-variant in sampling, coding, processing and understanding. Existing quality metrics are designed for uniform-resolution images and are therefore not appropriate for the assessment of ROI coded images.

  2. Adapted Convex Optimization Algorithm for Wavelet-Based Dynamic PET Reconstruction

    E-print Network

    Paris-Sud XI, Université de

    This work deals with dynamic Positron Emission Tomography (PET) data reconstruction. The effectiveness of the approach is shown with simulated dynamic PET data, and comparative results are also provided.

  3. Wavelet-Based Nonlinear Multiscale Decomposition Model for Electricity Load Forecasting

    E-print Network

    Murtagh, Fionn

    A forecast that is too high leaves capacity that is not fully utilized; on the other hand, a forecast that is too low may lead to some revenue loss from sales. The model is applied to data from the National Electricity Market Management Company (NEMMCO). Keywords: wavelet transform, load forecast, scale, resolution, time series.

  4. Wavelet-based feature extraction using probabilistic finite state automata for pattern classification$

    E-print Network

    Ray, Asok

    Real-time data-driven pattern classification requires that models (e.g., probabilistic finite state automata (PFSA)) capture the relevant information embedded in the data.

  5. Multiresolution analysis on zero-dimensional Abelian groups and wavelets bases

    SciTech Connect

    Lukomskii, Sergei F [Saratov State University, Saratov (Russian Federation)

    2010-06-29

    For a locally compact zero-dimensional group (G, +̇), we build a multiresolution analysis and put forward an algorithm for constructing orthogonal wavelet bases. A special case is indicated when a wavelet basis is generated from a single function through contractions, translations and exponentiations. Bibliography: 19 titles.

  6. Wavelet-based method to disentangle transcription- and replication-associated strand asymmetries in mammalian genomes

    Microsoft Academic Search

    Antoine Baker; Samuel Nicolay; Lamia Zaghloul; Yves d'Aubenton-Carafa; Claude Thermes; Benjamin Audit; Alain Arneodo

    2010-01-01

    During genome evolution, the two strands of the DNA double helix are not subjected to the same mutation patterns. This mutation bias is considered as a by-product of replicative and transcriptional activities. In this paper, we develop a wavelet-based methodology to analyze the DNA strand asymmetry profiles with the specific goal to extract the contributions associated with replication and transcription

  7. A WAVELET-BASED PATTERN RECOGNITION ALGORITHM TO CLASSIFY POSTURAL TRANSITIONS IN HUMANS

    E-print Network

    Boyer, Edmond

    Motivated by the monitoring of elderly people in care institutions, a wavelet-based pattern recognition algorithm is presented to detect and reproduce movements of a part of the human body (a limb, for instance), with uses in virtual reality.

  8. A Robust Adaptive Wavelet-based Method for Classification of Meningioma Histology Images

    E-print Network

    Rajpoot, Nasir

    The limited number of samples is an important problem in the domain of histological image classification. This issue is inherent to the field due to the high complexity of histology image data. A technique that provides good

  9. Evaluation of a new wavelet-based compression algorithm for synthetic aperture radar images

    Microsoft Academic Search

    Jun Tian; Haitao Guo; Raymond O. Wells; C. Sidney Burrus; Jan E. Odegard

    1996-01-01

    In this paper we will discuss the performance of a new wavelet based embedded compression algorithm on synthetic aperture radar (SAR) image data. This new algorithm uses index coding on the indices of the discrete wavelet transform of the image data and provides an embedded code to successively approximate it. Results on compressing still images, medical images as well as

  10. Wavelet-Based Functional Mixed Model Analysis: Computation Considerations Richard C. Herrick and Jeffrey S. Morris

    E-print Network

    Morris, Jeffrey S.

    Wavelet-based Functional Mixed Models (WFMM) is a new Bayesian method whose computational performance is illustrated on a pancreatic cancer experiment, in which blood serum was taken from 139 pancreatic cancer patients and 117 controls, and on a xenograft experiment in which either A375P human melanoma or PC3MM2 prostate cancer cell lines were implanted in either the brain

  11. Some Bayesian statistical techniques useful in estimating frequency and density

    USGS Publications Warehouse

    Johnson, D.H.

    1977-01-01

    This paper presents some elementary applications of Bayesian statistics to problems faced by wildlife biologists. Bayesian confidence limits for frequency of occurrence are shown to be generally superior to classical confidence limits. Population density can be estimated from frequency data if the species is sparsely distributed relative to the size of the sample plot. For other situations, limits are developed based on the normal distribution and prior knowledge that the density is non-negative, which ensures that the lower confidence limit is non-negative. Conditions are described under which Bayesian confidence limits are superior to those calculated with classical methods; examples are also given on how prior knowledge of the density can be used to sharpen inferences drawn from a new sample.
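
    The normal-based construction described above can be sketched directly: truncating the posterior at zero guarantees a non-negative lower limit. The sketch below assumes a flat prior on [0, ∞) and a normal sampling distribution; the parameter values are illustrative.

    ```python
    import numpy as np
    from scipy import stats

    def bayes_limits(d_hat, se, level=0.95):
        """Credible limits for a density known to be non-negative: truncate
        the N(d_hat, se) sampling distribution at zero, so the lower limit
        is non-negative by construction."""
        a = (0.0 - d_hat) / se          # lower truncation point, standard units
        post = stats.truncnorm(a, np.inf, loc=d_hat, scale=se)
        lo, hi = post.ppf([(1 - level) / 2, 1 - (1 - level) / 2])
        return lo, hi

    print(bayes_limits(d_hat=2.1, se=1.5))   # lower limit >= 0 always
    ```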

  12. Estimating black bear density using DNA data from hair snares

    USGS Publications Warehouse

    Gardner, B.; Royle, J.A.; Wegan, M.T.; Rainbolt, R.E.; Curtis, P.D.

    2010-01-01

    DNA-based mark-recapture has become a methodological cornerstone of research focused on bear species. The objective of such studies is often to estimate population size; however, doing so is frequently complicated by movement of individual bears. Movement affects the probability of detection and the assumption of closure of the population required in most models. To mitigate the bias caused by movement of individuals, population size and density estimates are often adjusted using ad hoc methods, including buffering the minimum polygon of the trapping array. We used a hierarchical, spatial capture-recapture model that contains explicit components for the spatial-point process that governs the distribution of individuals and their exposure to (via movement), and detection by, traps. We modeled detection probability as a function of each individual's distance to the trap and an indicator variable for previous capture to account for possible behavioral responses. We applied our model to a 2006 hair-snare study of a black bear (Ursus americanus) population in northern New York, USA. Based on the microsatellite marker analysis of collected hair samples, 47 individuals were identified. We estimated mean density at 0.20 bears/km2. A positive estimate of the indicator variable suggests that bears are attracted to baited sites; therefore, including a trap-dependence covariate is important when using bait to attract individuals. Bayesian analysis of the model was implemented in WinBUGS, and we provide the model specification. The model can be applied to any spatially organized trapping array (hair snares, camera traps, mist nets, etc.) to estimate density and can also account for heterogeneity and covariate information at the trap or individual level. © The Wildlife Society.
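
    The distance-dependent detection probability with a previous-capture indicator, as described above, might look like the following half-normal sketch; the parameter values are purely illustrative, not estimates from the bear study.

    ```python
    import numpy as np

    def p_detect(dist, prev_capture, p0=0.05, sigma=2.0, beta=1.2):
        """Half-normal spatial capture-recapture detection probability with
        a trap-response term: prior capture multiplies the baseline
        (beta > 1 means trap-happy, e.g. attraction to baited snares)."""
        base = p0 * np.exp(-dist**2 / (2 * sigma**2))
        return np.clip(base * np.where(prev_capture, beta, 1.0), 0.0, 1.0)
    ```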

  13. Volume estimation of multi-density nodules with thoracic CT

    NASA Astrophysics Data System (ADS)

    Gavrielides, Marios A.; Li, Qin; Zeng, Rongping; Myers, Kyle J.; Sahiner, Berkman; Petrick, Nicholas

    2014-03-01

    The purpose of this work was to quantify the effect of surrounding density on the volumetric assessment of lung nodules in a phantom CT study. Eight synthetic multidensity nodules were manufactured by enclosing spherical cores in larger spheres of double the diameter and with a different uniform density. Different combinations of outer/inner diameters (20/10mm, 10/5mm) and densities (100HU/-630HU, 10HU/-630HU, -630HU/100HU, -630HU/-10HU) were created. The nodules were placed within an anthropomorphic phantom and scanned with a 16-detector row CT scanner. Ten repeat scans were acquired using exposures of 20, 100, and 200mAs, slice collimations of 16x0.75mm and 16x1.5mm, and pitch of 1.2, and were reconstructed with varying slice thicknesses (three for each collimation) using two reconstruction filters (medium and standard). The volumes of the inner nodule cores were estimated from the reconstructed CT data using a matched-filter approach with templates modeling the characteristics of the multi-density objects. Volume estimation of the inner nodule was assessed using percent bias (PB) and the standard deviation of percent error (SPE). The true volumes of the inner nodules were measured using micro CT imaging. Results show PB values ranging from -12.4 to 2.3% and SPE values ranging from 1.8 to 12.8%. This study indicates that the volume of multi-density nodules can be measured with relatively small percent bias (on the order of +/-12% or less) when accounting for the properties of surrounding densities. These findings can provide valuable information for understanding bias and variability in clinical measurements of nodules that also include local biological changes such as inflammation and necrosis.

  14. Structural Reliability Using Probability Density Estimation Methods Within NESSUS

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C. (Technical Monitor); Godines, Cody Ric

    2003-01-01

    A reliability analysis studies a mathematical model of a physical system taking into account uncertainties of design variables, and common results are estimations of a response density, which also implies estimations of its parameters. Some common density parameters include the mean value, the standard deviation, and specific percentile(s) of the response, which are measures of central tendency, variation, and probability regions, respectively. Reliability analyses are important since the results can lead to different designs by calculating the probability of observing safe responses in each of the proposed designs. All of this is done at the expense of added computational time as compared to a single deterministic analysis, which will result in one value of the response out of many that make up the density of the response. Sampling methods, such as Monte Carlo (MC) and Latin hypercube sampling (LHS), can be used to perform reliability analyses and can compute nonlinear response density parameters even if the response is dependent on many random variables. Hence, both methods are very robust; however, they are computationally expensive to use in the estimation of the response density parameters. Both are two of the 13 stochastic methods contained within the Numerical Evaluation of Stochastic Structures Under Stress (NESSUS) program. NESSUS is a probabilistic finite element analysis (FEA) program that was developed through funding from NASA Glenn Research Center (GRC). It has the additional capability of being linked to other analysis programs; therefore, probabilistic fluid dynamics, fracture mechanics, and heat transfer are only a few of what is possible with this software. The LHS method is the newest addition to the stochastic methods within NESSUS. Part of this work was to enhance NESSUS with the LHS method. The new LHS module is complete, has been successfully integrated with NESSUS, and has been used to study four different test cases proposed by the Society of Automotive Engineers (SAE). The test cases compare different probabilistic methods within NESSUS because it is important that a user can have confidence that estimates of stochastic parameters of a response will be within an acceptable error limit. For each response, the mean, standard deviation, and 0.99 percentile are repeatedly estimated, which allows confidence statements to be made for each parameter estimated and for each method. Thus, the ability of several stochastic methods to efficiently and accurately estimate density parameters is compared using four valid test cases. While all of the reliability methods performed quite well, the new LHS module within NESSUS was found to have a lower estimation error than MC when both were used to estimate the mean, standard deviation, and 0.99 percentile of the four different stochastic responses. Also, LHS required fewer calculations than MC to obtain low-error answers with a high degree of confidence. It can therefore be stated that NESSUS is an important reliability tool that offers a variety of sound probabilistic methods, and the new LHS module is a valuable enhancement of the program.
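
    The MC-versus-LHS comparison at the heart of this study can be illustrated on a toy response; the sketch below uses SciPy's Latin hypercube sampler in place of NESSUS and a cheap analytic function in place of a finite element model. Everything here is illustrative.

    ```python
    import numpy as np
    from scipy.stats import qmc, norm

    def response(u):
        """Toy response on two standard-normal inputs, standing in
        for an expensive FEA run."""
        x = norm.ppf(u)                  # map uniforms to N(0,1) inputs
        return x[:, 0] ** 2 + 0.5 * x[:, 1]

    n, rng = 200, np.random.default_rng(2)
    mc = response(rng.random((n, 2)))                          # plain MC
    lhs = response(qmc.LatinHypercube(d=2, seed=2).random(n))  # stratified LHS
    print(mc.mean(), lhs.mean())   # LHS mean is typically closer to E = 1.0
    ```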

  15. Combining Ratio Estimation for Low Density Parity Check (LDPC) Coding

    NASA Technical Reports Server (NTRS)

    Mahmoud, Saad; Hi, Jianjun

    2012-01-01

    The Low Density Parity Check (LDPC) Code decoding algorithm makes use of a scaled receive signal derived from maximizing the log-likelihood ratio of the received signal. The scaling factor (often called the combining ratio) in an AWGN channel is a ratio between signal amplitude and noise variance. Accurately estimating this ratio has shown as much as 0.6 dB of decoding performance gain. This presentation briefly describes three methods for estimating the combining ratio: a Pilot-Guided estimation method, a Blind estimation method, and a Simulation-Based Look-Up table. The Pilot-Guided estimation method has shown that the maximum likelihood estimate of signal amplitude is the mean inner product of the received sequence and the known sequence, the attached synchronization marker (ASM), and the signal variance is the difference of the mean of the squared received sequence and the square of the signal amplitude. This method has the advantage of simplicity, at the expense of latency, since several frames' worth of ASMs must be accumulated. The Blind estimation method's maximum likelihood estimator is the average of the product of the received signal with the hyperbolic tangent of the product of the combining ratio and the received signal. The root of this equation can be determined by an iterative binary search between 0 and 1 after normalizing the received sequence. This method has the benefit of requiring one frame of data to estimate the combining ratio, which is good for faster-changing channels compared to the previous method; however, it is computationally expensive. The final method uses a look-up table based on prior simulated results to determine signal amplitude and noise variance. In this method the received mean signal strength is controlled to a constant soft decision value. The magnitude of the deviation is averaged over a predetermined number of samples. This value is referenced in a look-up table to determine the combining ratio that prior simulation associated with the average magnitude of the deviation. This method is more complicated than the Pilot-Guided method due to the gain control circuitry, but does not have the real-time computational complexity of the Blind estimation method. Each of these methods can be used to provide an accurate estimation of the combining ratio, and the final selection of the estimation method depends on other design constraints.
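
    The Pilot-Guided estimator described above reduces to two sample moments. A minimal sketch over one frame of ASM symbols, assuming ±1 pilot symbols and an AWGN channel, is:

    ```python
    import numpy as np

    def pilot_guided_ratio(received, asm):
        """Pilot-guided combining-ratio estimate over ASM positions:
        amplitude = mean inner product with the known +/-1 ASM sequence,
        variance = mean square minus squared amplitude (as in the abstract)."""
        amp = np.mean(received * asm)
        var = np.mean(received ** 2) - amp ** 2
        return amp / var

    rng = np.random.default_rng(3)
    asm = rng.choice([-1.0, 1.0], size=256)            # known marker symbols
    rx = 0.8 * asm + rng.normal(scale=1.1, size=256)   # AWGN channel
    print(pilot_guided_ratio(rx, asm))                 # approx 0.8 / 1.21
    ```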

  16. A projection and density estimation method for knowledge discovery.

    PubMed

    Stanski, Adam; Hellwich, Olaf

    2012-01-01

    A key ingredient to modern data analysis is probability density estimation. However, it is well known that the curse of dimensionality prevents a proper estimation of densities in high dimensions. The problem is typically circumvented by using a fixed set of assumptions about the data, e.g., by assuming partial independence of features, data on a manifold or a customized kernel. These fixed assumptions limit the applicability of a method. In this paper we propose a framework that uses a flexible set of assumptions instead. It allows a model to be tailored to various problems by means of 1d-decompositions. The approach achieves a fast runtime and is not limited by the curse of dimensionality, as all estimations are performed in 1d-space. The wide range of applications is demonstrated on two very different real-world examples. The first is a data mining software that allows the fully automatic discovery of patterns. The software is publicly available for evaluation. As a second example, an image segmentation method is realized. It achieves state-of-the-art performance on a benchmark dataset although it uses only a fraction of the training data and very simple features. PMID:23049675

  17. A Functional EM Algorithm for Mixing Density Estimation via Nonparametric Penalized Likelihood Maximization

    Microsoft Academic Search

    Lei Liu; Michael Levine; Yu Zhu

    2009-01-01

    When the true mixing density is known to be continuous, the maximum likelihood estimate of the mixing density does not provide a satisfying answer due to its degeneracy. Estimation of mixing densities is a well-known ill-posed indirect problem. In this article, we propose to estimate the mixing density by maximizing a penalized likelihood and call the resulting estimate the

  18. Effect of Random Clustering on Surface Damage Density Estimates

    SciTech Connect

    Matthews, M J; Feit, M D

    2007-10-29

    Identification and spatial registration of laser-induced damage relative to incident fluence profiles is often required to characterize the damage properties of laser optics near damage threshold. Of particular interest in inertial confinement laser systems are large aperture beam damage tests (>1 cm²) where the number of initiated damage sites for φ > 14 J/cm² can approach 10⁵-10⁶, requiring automatic microscopy counting to locate and register individual damage sites. However, as was shown for the case of bacteria counting in biology decades ago, random overlapping or 'clumping' prevents accurate counting of Poisson-distributed objects at high densities, and must be accounted for if the underlying statistics are to be understood. In this work we analyze the effect of random clumping on damage initiation density estimates at fluences above damage threshold. The parameter ψ = aρ = ρ/ρ₀, where a = 1/ρ₀ is the mean damage site area and ρ is the mean number density, is used to characterize the onset of clumping, and approximations based on a simple model are used to derive an expression for clumped damage density vs. fluence and damage site size. The influence of the uncorrected ρ vs. φ curve on damage initiation probability predictions is also discussed.
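
    The clumping effect is easy to reproduce by Monte Carlo: scatter Poisson-distributed sites, merge any that overlap, and compare the observed clump count with the true count. The sketch below does this with a KD-tree and connected components; it illustrates the phenomenon rather than the paper's analytic correction, and all parameter values are illustrative.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree
    from scipy.sparse import csr_matrix
    from scipy.sparse.csgraph import connected_components

    def observed_clumps(rho, site_diam, box=1.0, rng=None):
        """Scatter Poisson-distributed damage sites in a box and count the
        clumps an automated counter would see: sites closer than one
        diameter merge. Returns (true count, observed clump count)."""
        rng = rng or np.random.default_rng()
        n = rng.poisson(rho * box**2)
        pts = rng.random((n, 2)) * box
        pairs = cKDTree(pts).query_pairs(r=site_diam, output_type="ndarray")
        adj = csr_matrix((np.ones(len(pairs)), (pairs[:, 0], pairs[:, 1])),
                         shape=(n, n))
        n_clumps, _ = connected_components(adj, directed=False)
        return n, n_clumps

    print(observed_clumps(rho=500, site_diam=0.02,
                          rng=np.random.default_rng(4)))
    ```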

  19. Estimation of probability densities using scale-free field theories

    NASA Astrophysics Data System (ADS)

    Kinney, Justin B.

    2014-07-01

    The question of how best to estimate a continuous probability density from finite data is an intriguing open problem at the interface of statistics and physics. Previous work has argued that this problem can be addressed in a natural way using methods from statistical field theory. Here I describe results that allow this field-theoretic approach to be rapidly and deterministically computed in low dimensions, making it practical for use in day-to-day data analysis. Importantly, this approach does not impose a privileged length scale for smoothness of the inferred probability density, but rather learns a natural length scale from the data due to the tradeoff between goodness of fit and an Occam factor. Open source software implementing this method in one and two dimensions is provided.

  20. A comparison of plotless density estimators using Monte Carlo simulation on totally enumerated field data sets

    Microsoft Academic Search

    Neil A White; Richard M Engeman; Robert T Sugihara; Heather W Krupa

    2008-01-01

    BACKGROUND: Plotless density estimators are those that are based on distance measures rather than counts per unit area (quadrats or plots) to estimate the density of some usually stationary event, e.g. burrow openings, damage to plant stems, etc. These estimators typically use distance measures between events and from random points to events to derive an estimate of density. The error
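
    As one concrete member of the class of estimators described, the sketch below implements the maximum-likelihood point-to-nearest-event estimator for a homogeneous Poisson pattern; the estimators in the comparison differ in their distance measures and corrections.

    ```python
    import numpy as np

    def nn_density(point_to_event_dists):
        """Plotless density estimate from distances r_i between random
        sample points and their nearest event, assuming a homogeneous
        Poisson pattern: rho_hat = (n - 1) / (pi * sum(r_i^2)).
        (The n - 1 numerator is the standard unbiasedness correction.)"""
        r = np.asarray(point_to_event_dists, dtype=float)
        return (len(r) - 1) / (np.pi * np.sum(r ** 2))
    ```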

  1. Estimation of Volumetric Breast Density from Digital Mammograms

    NASA Astrophysics Data System (ADS)

    Alonzo-Proulx, Olivier

    Mammographic breast density (MBD) is a strong risk factor for developing breast cancer. MBD is typically estimated by manually selecting the area occupied by the dense tissue on a mammogram. There is interest in measuring the volume of dense tissue, or volumetric breast density (VBD), as it could potentially be a stronger risk factor. This dissertation presents and validates an algorithm to measure the VBD from digital mammograms. The algorithm is based on an empirical calibration of the mammography system, supplemented by physical modeling of x-ray imaging that includes the effects of beam polychromaticity, scattered radiation, anti-scatter grid and detector glare. It also includes a method to estimate the compressed breast thickness as a function of the compression force, and a method to estimate the thickness of the breast outside of the compressed region. The algorithm was tested on 26 simulated mammograms obtained from computed tomography images, themselves deformed to mimic the effects of compression. This allowed the determination of the baseline accuracy of the algorithm. The algorithm was also used on 55 087 clinical digital mammograms, which allowed for the determination of the general characteristics of VBD and breast volume, as well as their variation as a function of age and time. The algorithm was also validated against a set of 80 magnetic resonance images, and compared against the area method on 2688 images. A preliminary study comparing association of breast cancer risk with VBD and MBD was also performed, indicating that VBD is a stronger risk factor. The algorithm was found to be accurate, generating quantitative density measurements rapidly and automatically. It can be extended to any digital mammography system, provided that the compression thickness of the breast can be determined accurately.

  2. Fast medical image mixture density clustering segmentation using stratification sampling and kernel density estimation

    Microsoft Academic Search

    Cong-Hua Xie; Yu-Qing Song; Jian-Mei Chen

    2011-01-01

    The Gaussian mixture model (GMM) is a flexible and powerful density clustering tool. However, its application to medical image segmentation faces some difficulties. First, estimation of the number of components is still an open question. Second, it is slow for large medical images. Moreover, GMMs have the problem of noise sensitivity. In this paper, the
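
    For the component-count question mentioned above, one generic answer (not the paper's stratified-sampling scheme) is to scan candidate counts and keep the fit with the lowest BIC:

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    def pick_n_components(x, k_max=8):
        """Fit GMMs for k = 1..k_max and return the one minimizing BIC."""
        fits = [GaussianMixture(n_components=k, random_state=0).fit(x)
                for k in range(1, k_max + 1)]
        return min(fits, key=lambda g: g.bic(x))

    rng = np.random.default_rng(5)
    x = np.concatenate([rng.normal(0, 1, (300, 1)),
                        rng.normal(5, 1, (300, 1))])
    print(pick_n_components(x).n_components)   # typically 2
    ```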

  3. Estimating Kendall's tau for bivariate interval censored data with a smooth estimate of the density

    Microsoft Academic Search

    Emmanuel Lesaffre; Kris Bogaerts

    Measures of association for bivariate interval censored data have not yet been studied extensively. Betensky and Finkelstein(3) proposed to calculate Kendall's coefficient of concordance using a multiple imputation technique. However, this method is quite computationally intensive. Our approach is based on two steps. First, we fit a bivariate smooth estimate of the density of log-event times on a fixed grid.

  4. Change-in-ratio density estimator for feral pigs is less biased than closed mark-recapture estimates

    USGS Publications Warehouse

    Hanson, L.B.; Grand, J.B.; Mitchell, M.S.; Jolley, D.B.; Sparklin, B.D.; Ditchkoff, S.S.

    2008-01-01

    Closed-population capture-mark-recapture (CMR) methods can produce biased density estimates for species with low or heterogeneous detection probabilities. In an attempt to address such biases, we developed a density-estimation method based on the change in ratio (CIR) of survival between two populations where survival, calculated using an open-population CMR model, is known to differ. We used our method to estimate density for a feral pig (Sus scrofa) population on Fort Benning, Georgia, USA. To assess its validity, we compared it to an estimate of the minimum density of pigs known to be alive and two estimates based on closed-population CMR models. Comparison of the density estimates revealed that the CIR estimator produced a density estimate with low precision that was reasonable with respect to minimum known density. By contrast, density point estimates using the closed-population CMR models were less than the minimum known density, consistent with biases created by low and heterogeneous capture probabilities for species like feral pigs that may occur in low density or are difficult to capture. Our CIR density estimator may be useful for tracking broad-scale, long-term changes in species, such as large cats, for which closed CMR models are unlikely to work. © CSIRO 2008.

  5. Application of Wavelet Based Denoising for T-Wave Alternans Analysis in High Resolution ECG Maps

    NASA Astrophysics Data System (ADS)

    Janusek, D.; Kania, M.; Zaczek, R.; Zavala-Fernandez, H.; Zbie?, A.; Opolski, G.; Maniewski, R.

    2011-01-01

    T-wave alternans (TWA) allows for identification of patients at an increased risk of ventricular arrhythmia. A stress test, which increases heart rate in a controlled manner, is used for TWA measurement. However, TWA detection and analysis are often disturbed by muscular interference. An evaluation of wavelet-based denoising methods was performed to find the optimal algorithm for TWA analysis. ECG signals recorded in twelve patients with cardiac disease were analyzed. In seven of them a significant T-wave alternans magnitude was detected. The application of a wavelet-based denoising method in the pre-processing stage increases the T-wave alternans magnitude as well as the number of BSPM signals where TWA was detected.
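
    A typical wavelet denoising step of the kind evaluated here combines a multilevel DWT, soft thresholding with a noise level estimated from the finest details, and reconstruction. The sketch below uses PyWavelets and the universal threshold as one common choice among the schemes such a study would compare; db4 and the level are illustrative.

    ```python
    import numpy as np
    import pywt

    def wavelet_denoise(signal, wavelet="db4", level=5):
        """Soft-threshold wavelet denoising of an ECG lead. Threshold is
        sigma * sqrt(2 log n) with sigma estimated from the finest detail
        coefficients via the median absolute deviation (MAD / 0.6745)."""
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745
        thr = sigma * np.sqrt(2 * np.log(len(signal)))
        coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
        return pywt.waverec(coeffs, wavelet)[: len(signal)]
    ```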

  6. Wavelet-based efficient simulation of electromagnetic transients in a lightning protection system

    Microsoft Academic Search

    Guido Ala; Maria L. Di Silvestre; Elisa Francomano; Adele Tortorici

    2003-01-01

    In this paper, a wavelet-based efficient simulation of electromagnetic transients in a lightning protection system (LPS) is presented. The analysis of electromagnetic transients is carried out by employing the thin-wire electric field integral equation in the frequency domain. In order to easily handle the boundary conditions of the integral equation, semiorthogonal compactly supported spline wavelets, constructed for the bounded interval [0,1],

  7. Wavelet-based Contourlet Coding Using an SPIHT-like Algorithm

    Microsoft Academic Search

    Ramin Eslami; Hayder Radha

    In this paper, we propose a new non-linear image approximation method that decomposes images both radially and angularly. Our approximation is based on two stages of filter banks that are non-redundant and perfect reconstruction and therefore lead to an overall non-redundant and perfect reconstruction transform. We show that this transform, which we call the Wavelet-Based Contourlet Transform (WBCT), is

  8. Wavelet-based image restoration for compact X-ray microscopy

    Microsoft Academic Search

    H. Stollberg; J. Boutet De Monvel; A. Holmberg; H. M. Hertz

    2003-01-01

    Compact water-window X-ray microscopy with short exposure times will always be limited in photons owing to sources of limited power in combination with low-efficiency X-ray optics. Thus, it is important to investigate methods for improving the signal-to-noise ratio in the images. We show that a wavelet-based denoising procedure significantly improves the quality and contrast in compact X-ray

  9. Wavelet-based fuzzy reasoning approach to power-quality disturbance recognition

    Microsoft Academic Search

    T. X. Zhu; S. K. Tso; K. L. Lo

    2004-01-01

    This paper proposes a wavelet-based extended fuzzy reasoning approach to power-quality disturbance recognition and identification. To extract power-quality disturbance features, the energy distribution of the wavelet part at each decomposition level is introduced and its calculation mathematically established. Based on these features, rule bases are generated for extended fuzzy reasoning. The power-quality disturbance features are finally mapped into a real

  10. Digital implementation of a wavelet-based event detector for cardiac pacemakers

    Microsoft Academic Search

    Joachim Neves Rodrigues; Thomas Olsson; Leif Sörnmo; Viktor Öwall

    2005-01-01

    This paper presents a digital hardware implementation of a novel wavelet-based event detector suitable for the next generation of cardiac pacemakers. Significant power savings are achieved by introducing a second operation mode that shuts down 2/3 of the hardware for long time periods when the pacemaker patient is not exposed to noise, while not degrading performance. Due to a 0.13-μm

  11. VSNR: A Wavelet-Based Visual Signal-to-Noise Ratio for Natural Images

    Microsoft Academic Search

    Damon M. Chandler; Sheila S. Hemami

    2007-01-01

    This paper presents an efficient metric for quantifying the visual fidelity of natural images based on near-threshold and suprathreshold properties of human vision. The proposed metric, the visual signal-to-noise ratio (VSNR), operates via a two-stage approach. In the first stage, contrast thresholds for detection of distortions in the presence of natural images are computed via wavelet-based models of visual masking

  12. Density estimation on multivariate censored data with optional Pólya tree

    PubMed Central

    Seok, Junhee; Tian, Lu; Wong, Wing H.

    2014-01-01

    Analyzing the failure times of multiple events is of interest in many fields. Estimating the joint distribution of the failure times in a non-parametric way is not straightforward because some failure times are often right-censored and only known to be greater than observed follow-up times. Although it has been studied, there is no universally optimal solution for this problem. It is still challenging and important to provide alternatives that may be more suitable than existing ones in specific settings. Problems with the existing methods are not limited to infeasible computations; they also include the lack of optimality and possible non-monotonicity of the estimated survival function. In this paper, we proposed a non-parametric Bayesian approach for directly estimating the density function of multivariate survival times, where the prior is constructed based on the optional Pólya tree. We investigated several theoretical aspects of the procedure and derived an efficient iterative algorithm for implementing the Bayesian procedure. The empirical performance of the method was examined via extensive simulation studies. Finally, we presented a detailed analysis using the proposed method on the relationship among organ recovery times in severely injured patients. From the analysis, we suggested interesting medical information that can be further pursued in clinics. PMID:23902636

  13. Multispectral Remote Sensing Image Classification Using Wavelet Based Features

    Microsoft Academic Search

    Saroj K. Meher; Bhavan Uma Shankar; Ashish Ghosh

    Multispectral remotely sensed images contain information over a large range of frequencies, and these frequencies change over different regions (irregular or frequency-variant behavior of the signal), which needs to be estimated properly for an improved classification [1, 2, 3]. Multispectral remote sensing (RS) image data are basically complex in nature, having both spectral features with

  14. Lithological discrimination using a Wavelet Based Fractal Analysis at the Teapot Dome Field, Wyoming-USA

    NASA Astrophysics Data System (ADS)

    García, Alejandro; Aldana, Milagrosa; Cabrera, Ana

    2013-04-01

    In this work, we have applied a Wavelet Based Fractal Analysis (WBFA) to well logs and seismic data at the Teapot Dome Field, Natrona County, Wyoming, USA, aiming to characterize a reservoir using fractal parameters, such as the intercept (b), slope (m) and fractal dimension (D), and to correlate them with the sedimentation processes and/or the lithological characteristics of the area. The WBFA was first applied to the available logs (Gamma Ray, Spontaneous Potential, Density, Neutron Porosity and Deep Resistivity) from 20 wells located at sectors 27, 28, 33 and 34 of the 3D seismic of the Teapot Dome field. The WBFA was also applied to the calculated curve of water saturation (Sw). As a second step, the method was used to analyze a set of seismic traces close to the studied wells, extracted from the 3D seismic data. Maps of the fractal parameters were obtained. A spectral analysis of the seismic data was also performed in order to identify seismic facies and to establish a possible correlation with the fractal results. The WBFA results obtained for the well logs indicate a correlation between fractal parameters and the lithological content in the studied interval (i.e. top-base of the Frontier Formation). Particularly, for the Gamma Ray logs the fractal dimension D can be correlated with the sand-shale content: values of D lower than 0.9 are observed for those wells with more sand content (sandy wells); values of D between 0.9 and 1.1 correspond to wells where the sand packs present numerous inter-bedded shale layers (sandy-shale wells); finally, wells with more shale content (shaly wells) have D values greater than 1.1. The analysis of the seismic traces allowed the discrimination of shaly from sandy zones. The D map generated for the seismic traces indicates that this value can be associated with the shale content in the area. The iso-frequency maps obtained from the seismic spectral analysis show trends associated with the lithology of the field. These trends are similar to those observed in the maps of the fractal parameters, indicating that both analyses respond to lithological and/or sedimentation features in the area.
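
    The core WBFA computation, regressing log2 detail-coefficient variance on decomposition level to obtain a slope m and intercept b, can be sketched as below. Converting the slope to a fractal dimension D depends on the assumed signal model, so the fBm-based mapping in the comments is one convention, not necessarily the paper's exact definition; the wavelet choice is illustrative.

    ```python
    import numpy as np
    import pywt

    def wbfa(log_curve, wavelet="db2"):
        """Wavelet-based fractal analysis of a well log: regress
        log2(variance of detail coefficients) on decomposition level.
        Under an fBm model the slope m gives H = (m - 1) / 2 and a
        fractal dimension D = 2 - H for a 1-D profile."""
        coeffs = pywt.wavedec(log_curve, wavelet)
        levels = np.arange(1, len(coeffs))       # skip approximation coeffs
        logvar = [np.log2(np.var(coeffs[-j])) for j in levels]  # fine -> coarse
        m, b = np.polyfit(levels, logvar, 1)
        return m, b
    ```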

  15. Comparative study of different wavelet based neural network models for rainfall-runoff modeling

    NASA Astrophysics Data System (ADS)

    Shoaib, Muhammad; Shamseldin, Asaad Y.; Melville, Bruce W.

    2014-07-01

    The use of wavelet transformation in rainfall-runoff modeling has become popular because of its ability to simultaneously deal with both the spectral and the temporal information contained within time series data. The selection of an appropriate wavelet function plays a crucial role in the successful implementation of the wavelet based rainfall-runoff artificial neural network models, as it can lead to further enhancement in the model performance. The present study is therefore conducted to evaluate the effects of 23 mother wavelet functions on the performance of the hybrid wavelet based artificial neural network rainfall-runoff models. The hybrid Multilayer Perceptron Neural Network (MLPNN) and the Radial Basis Function Neural Network (RBFNN) models are developed in this study using both the continuous wavelet and the discrete wavelet transformation types. The performances of the 92 developed wavelet based neural network models with all the 23 mother wavelet functions are compared with the neural network models developed without wavelet transformations. It is found that among all the models tested, the discrete wavelet transform multilayer perceptron neural network (DWTMLPNN) and the discrete wavelet transform radial basis function (DWTRBFNN) models at decomposition level nine with the db8 wavelet function have the best performance. The result also shows that the pre-processing of input rainfall data by the wavelet transformation can significantly increase the performance of the MLPNN and the RBFNN rainfall-runoff models.
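
    The pre-processing step found to be beneficial can be sketched as follows: decompose each input window with a chosen mother wavelet and feed the sub-band coefficients to the network. Everything below (window length, db8, level 3, the toy data and target) is illustrative, not the study's configuration.

    ```python
    import numpy as np
    import pywt
    from sklearn.neural_network import MLPRegressor

    def dwt_features(series, wavelet="db8", level=3):
        """Flatten the DWT sub-band coefficients of one input window
        into a feature vector for the network."""
        return np.concatenate(pywt.wavedec(series, wavelet, level=level))

    rng = np.random.default_rng(7)
    rain = rng.gamma(2.0, 1.0, size=(200, 128))            # toy rainfall windows
    runoff = rain.mean(axis=1) + rng.normal(0, 0.1, 200)   # toy target
    X = np.array([dwt_features(r) for r in rain])
    model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                         random_state=0).fit(X, runoff)
    ```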

  16. WaVPeak: picking NMR peaks through wavelet-based smoothing and volume-based filtering

    PubMed Central

    Liu, Zhi; Abbas, Ahmed; Jing, Bing-Yi; Gao, Xin

    2012-01-01

    Motivation: Nuclear magnetic resonance (NMR) has been widely used as a powerful tool to determine the 3D structures of proteins in vivo. However, the post-spectra processing stage of NMR structure determination usually involves a tremendous amount of time and expert knowledge, which includes peak picking, chemical shift assignment and structure calculation steps. Detecting accurate peaks from the NMR spectra is a prerequisite for all following steps, and thus remains a key problem in automatic NMR structure determination. Results: We introduce WaVPeak, a fully automatic peak detection method. WaVPeak first smoothes the given NMR spectrum by wavelets. The peaks are then identified as the local maxima. The false positive peaks are filtered out efficiently by considering the volume of the peaks. WaVPeak has two major advantages over the state-of-the-art peak-picking methods. First, through wavelet-based smoothing, WaVPeak does not eliminate any data point in the spectra. Therefore, WaVPeak is able to detect weak peaks that are embedded in the noise level. NMR spectroscopists need the most help isolating these weak peaks. Second, WaVPeak estimates the volume of the peaks to filter the false positives. This is more reliable than intensity-based filters that are widely used in existing methods. We evaluate the performance of WaVPeak on the benchmark set proposed by PICKY (Alipanahi et al., 2009), one of the most accurate methods in the literature. The dataset comprises 32 2D and 3D spectra from eight different proteins. Experimental results demonstrate that WaVPeak achieves an average of 96%, 91%, 88%, 76% and 85% recall on 15N-HSQC, HNCO, HNCA, HNCACB and CBCA(CO)NH, respectively. When the same number of peaks are considered, WaVPeak significantly outperforms PICKY. Availability: WaVPeak is an open source program. The source code and two test spectra of WaVPeak are available at http://faculty.kaust.edu.sa/sites/xingao/Pages/Publications.aspx. The online server is under construction. Contact: statliuzhi@xmu.edu.cn; ahmed.abbas@kaust.edu.sa; majing@ust.hk; xin.gao@kaust.edu.sa PMID:22328784

  17. Wavelet-based Adaptive Mesh Refinement Method for Global Atmospheric Chemical Transport Modeling

    NASA Astrophysics Data System (ADS)

    Rastigejev, Y.

    2011-12-01

    Numerical modeling of global atmospheric chemical transport presents enormous computational difficulties, associated with simulating a wide range of time and spatial scales. The described difficulties are exacerbated by the fact that hundreds of chemical species and thousands of chemical reactions typically are used for chemical kinetic mechanism description. These computational requirements very often force researchers to use relatively crude quasi-uniform numerical grids with inadequate spatial resolution, which introduces significant numerical diffusion into the system. It was shown that this spurious diffusion significantly distorts the pollutant mixing and transport dynamics for typically used grid resolutions. The described numerical difficulties have to be systematically addressed, considering that the demand for fast, high-resolution chemical transport models will be exacerbated over the next decade by the need to interpret satellite observations of tropospheric ozone and related species. In this study we offer a dynamically adaptive multilevel Wavelet-based Adaptive Mesh Refinement (WAMR) method for numerical modeling of atmospheric chemical evolution equations. The adaptive mesh refinement is performed by adding and removing finer levels of resolution in the locations of fine scale development and in the locations of smooth solution behavior, respectively. The algorithm is based on the mathematically well established wavelet theory. This allows us to provide error estimates of the solution that are used in conjunction with an appropriate threshold criterion to adapt the non-uniform grid. Other essential features of the numerical algorithm include: an efficient wavelet spatial discretization that minimizes the number of degrees of freedom for a prescribed accuracy, a fast algorithm for computing wavelet amplitudes, and efficient and accurate derivative approximations on an irregular grid. The method has been tested on a variety of benchmark problems, including numerical simulation of transpacific traveling pollution plumes. The generated pollution plumes are diluted due to turbulent mixing as they are advected downwind. Despite this dilution, it was recently discovered that pollution plumes in the remote troposphere can preserve their identity as well-defined structures for two weeks or more as they circle the globe. Present Global Chemical Transport Models (CTMs) implemented for quasi-uniform grids are completely incapable of reproducing these layered structures due to high numerical plume dilution caused by numerical diffusion combined with non-uniformity of atmospheric flow. It is shown that the WAMR algorithm obtains solutions of accuracy comparable to conventional numerical techniques with more than an order of magnitude reduction in the number of grid points; the adaptive algorithm is therefore capable of producing accurate results at a relatively low computational cost. The numerical simulations demonstrate that the WAMR algorithm applied to the traveling plume problem accurately reproduces the plume dynamics, unlike conventional numerical methods that utilize quasi-uniform numerical grids.
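
    The adaptation criterion at the heart of such a method (refine where wavelet detail amplitudes exceed a threshold) can be sketched in one dimension; the solver machinery around it is omitted and all names and values are illustrative.

    ```python
    import numpy as np
    import pywt

    def refinement_mask(field, wavelet="db4", level=4, eps=1e-3):
        """Flag grid cells whose wavelet detail amplitude exceeds a
        threshold; in a WAMR-style solver these cells would receive finer
        resolution and the rest would be coarsened."""
        coeffs = pywt.wavedec(field, wavelet, level=level)
        mask = np.zeros(len(field), dtype=bool)
        for j, d in enumerate(coeffs[1:], start=1):      # coarse -> fine details
            stride = 2 ** (level - j + 1)                # cells per coefficient
            for k in np.flatnonzero(np.abs(d) > eps):
                mask[k * stride:(k + 1) * stride] = True # mark coefficient support
        return mask
    ```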

  18. Wavelet-Based Real-Time Diagnosis of Complex Systems

    NASA Technical Reports Server (NTRS)

    Gulati, Sandeep; Mackey, Ryan

    2003-01-01

    A new method of robust, autonomous real-time diagnosis of a time-varying complex system (e.g., a spacecraft, an advanced aircraft, or a process-control system) is presented here. It is based upon the characterization and comparison of (1) the execution of software, as reported by discrete data, and (2) data from sensors that monitor the physical state of the system, such as performance sensors or similar quantitative time-varying measurements. By taking account of the relationship between execution of, and the responses to, software commands, this method satisfies a key requirement for robust autonomous diagnosis, namely, ensuring that control is maintained and followed. Such monitoring of control software requires that estimates of the state of the system, as represented within the control software itself, are representative of the physical behavior of the system. In this method, data from sensors and discrete command data are analyzed simultaneously and compared to determine their correlation. If the sensed physical state of the system differs from the software estimate, or if the system fails to perform a transition as commanded by software, or such a transition occurs without the associated command, the system has experienced a control fault. This method provides a means of detecting such divergent behavior and automatically generating an appropriate warning.
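
    The two fault conditions can be phrased as a small check, assuming discretely timestamped commands and sensed states (a toy formulation, not the NASA implementation):

      def control_faults(commands, sensed, deadline):
          """commands: sorted (time, target); sensed: sorted (time, state)."""
          faults = []
          # A commanded transition must be observed within the deadline.
          for t_cmd, target in commands:
              if not any(t_cmd <= t <= t_cmd + deadline and s == target
                         for t, s in sensed):
                  faults.append((t_cmd, "commanded transition not observed"))
          # An observed state change must follow a recent matching command.
          for (t0, s0), (t1, s1) in zip(sensed, sensed[1:]):
              if s1 != s0 and not any(t1 - deadline <= t_cmd <= t1 and tg == s1
                                      for t_cmd, tg in commands):
                  faults.append((t1, "uncommanded transition"))
          return faults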

  19. An Adaptive Background Subtraction Method Based on Kernel Density Estimation

    PubMed Central

    Lee, Jeisung; Park, Mignon

    2012-01-01

    In this paper, a pixel-based background modeling method, which uses nonparametric kernel density estimation, is proposed. To reduce the burden of image storage, we modify the original KDE method by initializing it with the first frame and subsequently updating it at every frame, controlling the learning rate according to the situation. We apply an adaptive threshold method based on image changes to effectively subtract dynamic backgrounds. The devised scheme allows the proposed method to automatically adapt to various environments and effectively extract the foreground. The method presented here exhibits good performance and is suitable for dynamic background environments. The algorithm is tested on various video sequences and compared with other state-of-the-art background subtraction methods to verify its performance.
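
    A stripped-down variant of the update rule, assuming a single running-average background in place of the paper's per-pixel kernel density model (the learning rate alpha and the threshold form are illustrative assumptions):

      import numpy as np

      def update(background, frame, alpha=0.02, k=2.5):
          # background: float32 array, e.g. first_frame.astype(np.float32)
          diff = np.abs(frame.astype(np.float32) - background)
          thr = k * max(float(diff.std()), 1.0)   # threshold adapts to image change
          foreground = diff > thr
          # Update the model only where the pixel currently looks like background.
          bg = ~foreground
          background[bg] = (1.0 - alpha) * background[bg] + alpha * frame[bg]
          return background, foreground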

  20. Estimating Foreign-Object-Debris Density from Photogrammetry Data

    NASA Technical Reports Server (NTRS)

    Long, Jason; Metzger, Philip; Lane, John

    2013-01-01

    Within the first few seconds after launch of STS-124, debris traveling vertically near the vehicle was captured on two 16-mm film cameras surrounding the launch pad. One particular piece of debris caught the attention of engineers investigating the release of the flame trench fire bricks. The question to be answered was whether the debris was a fire brick and, if so, whether it represented the first bricks ejected from the flame trench wall, or whether the object was one of the pieces of debris normally ejected from the vehicle during launch. If it was typical launch debris, such as SRB throat plug foam, why was it traveling vertically and parallel to the vehicle during launch, instead of following its normal trajectory, flying horizontally toward the north perimeter fence? By utilizing the Runge-Kutta integration method for velocity and the Verlet integration method for position, a method was obtained that suppresses trajectory computational instabilities due to noisy position data. This combination of integration methods provides a means to extract the best estimate of drag force and drag coefficient under the non-ideal conditions of limited position data. This integration strategy leads immediately to the best possible estimate of object density, within the constraints of unknown particle shape. These types of calculations do not exist in readily available off-the-shelf simulation software, especially where photogrammetry data are needed as an input.
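
    For context, a velocity-Verlet step with quadratic air drag looks like the sketch below; fitting such simulated trajectories to the observed positions is one way a drag coefficient, and from it a density, could be extracted. The report pairs Runge-Kutta (velocity) with Verlet (position); this single-integrator version and all parameter values are simplifying assumptions:

      import numpy as np

      def simulate(x0, v0, dt, steps, cd, area, mass, rho_air=1.2, g=9.81):
          def accel(v):
              drag = -0.5 * rho_air * cd * area * np.linalg.norm(v) * v / mass
              return np.array([0.0, -g]) + drag
          x, v = np.array(x0, float), np.array(v0, float)
          traj = [x.copy()]
          for _ in range(steps):
              v_half = v + 0.5 * dt * accel(v)       # half kick
              x = x + dt * v_half                    # drift
              v = v_half + 0.5 * dt * accel(v_half)  # second half kick
              traj.append(x.copy())
          return np.array(traj)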

  1. Wavelet-based stereo images reconstruction using depth images

    NASA Astrophysics Data System (ADS)

    Jovanov, Ljubomir; Pižurica, Aleksandra; Philips, Wilfried

    2007-09-01

    It is believed by many that three-dimensional (3D) television will be the next logical development toward a more natural and vivid home entertainment experience. While the classical 3D approach requires the transmission of two video streams, one for each view, 3D TV systems based on depth image based rendering (DIBR) require a single stream of monoscopic images and a second stream of associated images, usually termed depth images or depth maps, that contain per-pixel depth information. A depth map is a two-dimensional function that contains information about the distance from the camera to a certain point of the object as a function of the image coordinates. By using this depth information and the original image it is possible to reconstruct a virtual image of a nearby viewpoint by projecting the pixels of the available image to their locations in 3D space and finding their position in the desired view plane. One of the most significant advantages of DIBR is that depth maps can be coded more efficiently than two streams corresponding to the left and right views of the scene, thereby reducing the bandwidth required for transmission, which makes it possible to reuse existing transmission channels for 3D TV. This technique can also be applied to other 3D technologies such as multimedia systems. In this paper we propose an advanced wavelet-domain scheme for the reconstruction of stereoscopic images, which solves some of the shortcomings of existing methods. We perform the wavelet transform of both the luminance and depth images in order to obtain significant geometric features, which enable a more sensible reconstruction of the virtual view. Motion estimation employed in our approach uses a Markov random field smoothness prior for regularization of the estimated motion field. The evaluation of the proposed reconstruction method is done on two video sequences typically used for comparison of stereo reconstruction algorithms. The results demonstrate the advantages of the proposed approach with respect to state-of-the-art methods, in terms of both objective and subjective performance measures.
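
    For rectified cameras the projection step of DIBR reduces to a horizontal shift by disparity = focal_length * baseline / depth; the sketch below shows only this bare warp (occlusion handling, hole filling and the paper's wavelet-domain processing are omitted, and all parameter names are assumptions):

      import numpy as np

      def render_view(image, depth, focal_length, baseline):
          h, w = depth.shape
          virtual = np.zeros_like(image)
          disp = np.round(focal_length * baseline
                          / np.maximum(depth, 1e-6)).astype(int)
          for y in range(h):
              xs = np.arange(w) + disp[y]
              ok = (xs >= 0) & (xs < w)
              virtual[y, xs[ok]] = image[y, ok]      # holes stay zero
          return virtual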

  2. Fluorescence diffuse optical tomography: a wavelet-based model reduction

    NASA Astrophysics Data System (ADS)

    Frassati, Anne; DaSilva, Anabela; Dinten, Jean-Marc; Georges, Didier

    2007-07-01

    Fluorescence diffuse optical tomography is becoming a powerful tool for the investigation of molecular events in small animal studies for new therapeutics development. Here, the stress is put on the mathematical problem of the tomography, which can be formulated as the estimation of physical parameters appearing in a set of Partial Differential Equations (PDEs). The Finite Element Method has been chosen here to solve the diffusion equation because it places no restriction on the geometry or the homogeneity of the system. It is nonetheless well known to be time and memory consuming, mainly because of the large dimensions of the involved matrices. Our principal objective is to reduce the model in order to speed up the model computation. For that, a new method based on a multiresolution technique is chosen. All the matrices appearing in the discretized version of the PDEs are projected onto an orthonormal wavelet basis and reduced according to the multiresolution method. At the first resolution level, this compression halves each matrix dimension, making the inversion of the matrices approximately 4 times faster. A validation study on a phantom was conducted to evaluate the feasibility of this reduction method.
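
    The reduction itself is compact, assuming an orthonormal Haar basis and a toy 1D operator standing in for the FEM matrices (sizes and the kept fraction are illustrative):

      import numpy as np

      def haar_matrix(n):                      # n must be a power of two
          h = np.array([[1.0]])
          while h.shape[0] < n:
              top = np.kron(h, [1.0, 1.0])
              bot = np.kron(np.eye(h.shape[0]), [1.0, -1.0])
              h = np.vstack([top, bot]) / np.sqrt(2.0)
          return h

      n, m = 256, 128                          # keep the m coarsest rows/columns
      A = (np.diag(2.0 * np.ones(n)) + np.diag(-np.ones(n - 1), 1)
           + np.diag(-np.ones(n - 1), -1))     # stand-in discretized operator
      b = np.ones(n)
      W = haar_matrix(n)
      A_red = (W @ A @ W.T)[:m, :m]            # project and truncate
      y = np.linalg.solve(A_red, (W @ b)[:m])  # reduced solve
      x_approx = W[:m].T @ y                   # back to physical space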

  3. Atmospheric turbulence mitigation using complex wavelet-based fusion.

    PubMed

    Anantrasirichai, Nantheera; Achim, Alin; Kingsbury, Nick G; Bull, David R

    2013-06-01

    Restoring a scene distorted by atmospheric turbulence is a challenging problem in video surveillance. The effect, caused by random, spatially varying, perturbations, makes a model-based solution difficult and in most cases, impractical. In this paper, we propose a novel method for mitigating the effects of atmospheric distortion on observed images, particularly airborne turbulence which can severely degrade a region of interest (ROI). In order to extract accurate detail about objects behind the distorting layer, a simple and efficient frame selection method is proposed to select informative ROIs only from good-quality frames. The ROIs in each frame are then registered to further reduce offsets and distortions. We solve the space-varying distortion problem using region-level fusion based on the dual tree complex wavelet transform. Finally, contrast enhancement is applied. We further propose a learning-based metric specifically for image quality assessment in the presence of atmospheric distortion. This is capable of estimating quality in both full- and no-reference scenarios. The proposed method is shown to significantly outperform existing methods, providing enhanced situational awareness in a range of surveillance scenarios. PMID:23475359
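
    A reduced form of the fusion rule, with pywt's real DWT standing in for the dual-tree complex wavelet transform of the paper (wavelet, level and the max-magnitude selection rule are assumptions):

      import numpy as np
      import pywt

      def fuse(frames, wavelet="db2", level=2):
          decs = [pywt.wavedec2(f, wavelet, level=level) for f in frames]
          fused = [np.mean([d[0] for d in decs], axis=0)]   # average approximations
          for lev in range(1, level + 1):
              bands = []
              for b in range(3):                            # cH, cV, cD
                  stack = np.stack([d[lev][b] for d in decs])
                  pick = np.abs(stack).argmax(axis=0)       # sharpest frame wins
                  bands.append(np.take_along_axis(stack, pick[None], axis=0)[0])
              fused.append(tuple(bands))
          return pywt.waverec2(fused, wavelet)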

  4. Wavelet-based coherence measures of global seismic noise properties

    NASA Astrophysics Data System (ADS)

    Lyubushin, A. A.

    2015-04-01

    The coherent behavior of four parameters characterizing the global field of low-frequency (periods from 2 to 500 min) seismic noise is studied. These parameters include the generalized Hurst exponent, the multifractal singularity spectrum support width, the normalized entropy of variance, and kurtosis. The analysis is based on data from 229 broadband stations of the GSN, GEOSCOPE, and GEOFON networks for a 17-year period from the beginning of 1997 to the end of 2013. The entire set of stations is subdivided into eight groups, which, taken together, provide full coverage of the Earth. The daily median values of the studied noise parameters are calculated in each group. This procedure yields four 8-dimensional time series with a time step of 1 day and a length of 6209 samples in each scalar component. For each of the four 8-dimensional time series, a multiple correlation measure is estimated, based on computing robust canonical correlations for the Haar wavelet coefficients at the first detail level within a moving time window of length 365 days. These correlation measures for each noise property show a substantial increase starting in 2007-2008 that continued until the end of 2013. Given the well-known phenomenon of noise correlation increasing before catastrophes, this increase in seismic noise synchronization is interpreted as an indicator of the activation of the strongest (magnitude 8.5 and above) earthquakes, observed since the Sumatra mega-earthquake of 26 December 2004. This synchronization continues to grow up to the end of the studied period (2013), which can be interpreted as a probable precursor of a further increase in the intensity of the strongest earthquakes all over the world.
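
    The measure can be illustrated for two scalar series, replacing the robust canonical correlations of the 8-dimensional series with a plain Pearson correlation of first-level Haar details in a 365-day moving window (a deliberate simplification):

      import numpy as np

      def haar_details(x):
          pairs = x[:x.size // 2 * 2].reshape(-1, 2)
          return (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2.0)

      def moving_wavelet_corr(a, b, win=365):
          da, db = haar_details(a), haar_details(b)
          n = win // 2                    # one detail per two daily samples
          return np.array([abs(np.corrcoef(da[i:i + n], db[i:i + n])[0, 1])
                           for i in range(da.size - n + 1)])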

  5. A novel ultrasound methodology for estimating spine mineral density.

    PubMed

    Conversano, Francesco; Franchini, Roberto; Greco, Antonio; Soloperto, Giulia; Chiriacò, Fernanda; Casciaro, Ernesto; Aventaggiato, Matteo; Renna, Maria Daniela; Pisani, Paola; Di Paola, Marco; Grimaldi, Antonella; Quarta, Laura; Quarta, Eugenio; Muratore, Maurizio; Laugier, Pascal; Casciaro, Sergio

    2015-01-01

    We investigated the possible clinical feasibility and accuracy of an innovative ultrasound (US) method for diagnosis of osteoporosis of the spine. A total of 342 female patients (aged 51-60 y) underwent spinal dual X-ray absorptiometry and abdominal echographic scanning of the lumbar spine. Recruited patients were subdivided into a reference database used for US spectral model construction and a study population for repeatability and accuracy evaluation. US images and radiofrequency signals were analyzed via a new fully automatic algorithm that performed a series of spectral and statistical analyses, providing a novel diagnostic parameter called the osteoporosis score (O.S.). If dual X-ray absorptiometry is assumed to be the gold standard reference, the accuracy of O.S.-based diagnoses was 91.1%, with κ = 0.859 (p < 0.0001). Significant correlations were also found between O.S.-estimated bone mineral densities and corresponding dual X-ray absorptiometry values, with r² values up to 0.73 and a root mean square error of 6.3%-9.3%. The results obtained suggest that the proposed method has the potential for future routine application in US-based diagnosis of osteoporosis. PMID:25438845

  6. Estimation of density of mongooses with capture-recapture and distance sampling

    USGS Publications Warehouse

    Corn, J.L.; Conroy, M.J.

    1998-01-01

    We captured mongooses (Herpestes javanicus) in live traps arranged in trapping webs in Antigua, West Indies, and used capture-recapture and distance sampling to estimate density. Distance estimation and program DISTANCE were used to provide estimates of density from the trapping-web data. Mean density based on trapping webs was 9.5 mongooses/ha (range, 5.9-10.2/ha); estimates had coefficients of variation ranging from 29.82-31.58% (x̄ = 30.46%). Mark-recapture models were used to estimate abundance, which was converted to density using estimates of effective trap area. Tests of model assumptions provided by CAPTURE indicated pronounced heterogeneity in capture probabilities and some indication of behavioral response and variation over time. Mean estimated density was 1.80 mongooses/ha (range, 1.37-2.15/ha) with estimated coefficients of variation of 4.68-11.92% (x̄ = 7.46%). Estimates of density based on mark-recapture data depended heavily on assumptions about animal home ranges; variances of densities also may be underestimated, leading to unrealistically narrow confidence intervals. Estimates based on trap webs require fewer assumptions, and estimated variances may be a more realistic representation of sampling variation. Because trap webs are established easily and provide adequate data for estimation in a few sample occasions, the method should be efficient and reliable for estimating densities of mongooses.

  7. ENVIRONMENTAL AUDITING: Demonstration of Line Transect Methodologies to Estimate Urban Gray Squirrel Density

    PubMed

    Hein

    1997-11-01

    Because studies estimating density of gray squirrels (Sciurus carolinensis) have been labor intensive and costly, I demonstrate the use of line transect surveys to estimate gray squirrel density and determine the costs of conducting surveys to achieve precise estimates. Density estimates are based on four transects that were surveyed five times from 30 June to 9 July 1994. Using the program DISTANCE, I estimated there were 4.7 (95% CI = 1.86-11.92) gray squirrels/ha on the Clemson University campus. Eleven additional surveys would have decreased the percent coefficient of variation from 30% to 20% and would have cost approximately $114. Estimating urban gray squirrel density using line transect surveys is cost effective and can provide unbiased estimates of density, provided that none of the assumptions of distance sampling theory are violated. KEY WORDS: Bias; Density; Distance sampling; Gray squirrel; Line transect; Sciurus carolinensis. PMID:9336490
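
    For reference, the textbook half-normal line-transect estimator behind such DISTANCE runs is D = n / (2 L · ESW), with effective strip width ESW = σ√(π/2); the sketch below is this generic form, not the program's exact configuration:

      import numpy as np

      def density_half_normal(perp_distances, total_line_length):
          x = np.asarray(perp_distances, float)
          sigma = np.sqrt(np.mean(x ** 2))            # half-normal MLE of scale
          esw = sigma * np.sqrt(np.pi / 2.0)          # effective strip width
          return x.size / (2.0 * total_line_length * esw)

      x = [2.1, 0.4, 5.7, 1.3, 3.0, 0.8, 2.6]         # metres from the line
      print(density_half_normal(x, 1200.0))           # animals per square metre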

  8. On the analysis of wavelet-based approaches for print mottle artifacts

    NASA Astrophysics Data System (ADS)

    Eid, Ahmed H.; Cooper, Brian E.

    2014-01-01

    Print mottle is one of several attributes described in ISO/IEC DTS 24790, a draft technical specification for the measurement of image quality for monochrome printed output. It defines mottle as aperiodic fluctuations of lightness less than about 0.4 cycles per millimeter, a definition inherited from the latest official standard on printed image quality, ISO/IEC 13660. In a previous publication, we introduced a modification to the ISO/IEC 13660 mottle measurement algorithm that includes a band-pass, wavelet-based, filtering step to limit the contribution of high-frequency fluctuations including those introduced by print grain artifacts. This modification has improved the algorithm's correlation with the subjective evaluation of experts who rated the severity of printed mottle artifacts. Seeking to improve upon the mottle algorithm in ISO/IEC 13660, the ISO 24790 committee evaluated several mottle metrics. This led to the selection of the above wavelet-based approach as the top candidate algorithm for inclusion in a future ISO/IEC standard. Recent experimental results from the ISO committee showed higher correlation between the wavelet-based approach and the subjective evaluation conducted by the ISO committee members based upon 25 samples covering a variety of printed mottle artifacts. In addition, we introduce an alternative approach for measuring mottle defects based on spatial frequency analysis of wavelet-filtered images. Our goal is to establish a link between the spatial-based mottle (ISO/IEC DTS 24790) approach and its equivalent frequency-based one in light of Parseval's theorem. Our experimental results showed a high correlation between the spatial- and frequency-based approaches.

  9. Iterated denoising and fusion to improve the image quality of wavelet-based coding

    NASA Astrophysics Data System (ADS)

    Song, Beibei

    2011-06-01

    An iterated denoising and fusion method is presented to improve the image quality of wavelet-based coding. First, iterated image denoising is used to reduce ringing and staircase noise along curving edges and to improve edge regularity. Then, we adopt a wavelet fusion method to enhance image edges, protect non-edge regions and decrease blurring artifacts during denoising. Experimental results have shown that the proposed scheme is capable of improving both the subjective and the objective performance of wavelet decoders, such as JPEG2000 and SPIHT.

  10. Optimal block boundary pre/postfiltering for wavelet-based image and video compression.

    PubMed

    Liang, Jie; Tu, Chengjie; Tran, Trac D

    2005-12-01

    This paper presents a pre/postfiltering framework to reduce the reconstruction errors near block boundaries in wavelet-based image and video compression. Two algorithms are developed to obtain the optimal filter, based on boundary filter bank and polyphase structure, respectively. A low-complexity structure is employed to approximate the optimal solution. Performances of the proposed method in the removal of JPEG 2000 tiling artifact and the jittering artifact of three-dimensional wavelet video coding are reported. Comparisons with other methods demonstrate the advantages of our pre/postfiltering framework. PMID:16370467

  11. ICER-3D: A Progressive Wavelet-Based Compressor for Hyperspectral Images

    NASA Technical Reports Server (NTRS)

    Kiely, A.; Klimesh, M.; Xie, H.; Aranki, N.

    2005-01-01

    ICER-3D is a progressive, wavelet-based compressor for hyperspectral images. ICER-3D is derived from the ICER image compressor. ICER-3D can provide lossless and lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The three-dimensional wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of hyperspectral data sets, while facilitating elimination of spectral ringing artifacts. Correlation is further exploited by a context modeler that effectively exploits spectral dependencies in the wavelet-transformed hyperspectral data. Performance results illustrating the benefits of these features are presented.

  12. Serial identification of EEG patterns using adaptive wavelet-based analysis

    NASA Astrophysics Data System (ADS)

    Nazimov, A. I.; Pavlov, A. N.; Nazimova, A. A.; Grubov, V. V.; Koronovskii, A. A.; Sitnikova, E.; Hramov, A. E.

    2013-10-01

    The problem of recognizing specific oscillatory patterns in electroencephalograms with the continuous wavelet transform is discussed. Aiming to improve the abilities of wavelet-based tools, we propose a serial adaptive method for the sequential identification of EEG patterns such as sleep spindles and spike-wave discharges. This method provides an optimal selection of parameters based on objective functions and enables extraction of the most informative features of the recognized structures. Different ways of increasing the quality of pattern recognition within the proposed serial adaptive technique are considered.

  13. Adaptive Density Estimation in the Pile-up Model Involving Measurement Errors

    E-print Network

    Paris-Sud XI, Université de

    Adaptive Density Estimation in the Pile-up Model Involving Measurement Errors. Fabienne Comte, Tabea [...] nonparametric density estimation in the pile-up model. Adaptive nonparametric estimators are proposed for the pile-up model in its simple form as well as in the case of additional measurement errors. Furthermore

  14. Probability Density Estimation using Isocontours and Isosurfaces: Application to Information Theoretic

    E-print Network

    Banerjee, Arunava

    A required component of all information theoretic techniques in image registration is a good estimator [...] noisy, sparse density estimates (variance), whereas too large a bin width introduces oversmoothing (bias).

  15. A sampling unit for estimating gall densities of Paradiplosis tumifex (Diptera: Cecidomyiidae) in

    E-print Network

    Heard, Stephen B.

    The balsam gall midge, Paradiplosis tumifex Gagné (Diptera: Cecidomyiidae), is a major Christmas tree pest. A sampling unit is described for evaluating densities of the balsam gall midge and its [...]

  16. How Bandwidth Selection Algorithms Impact Exploratory Data Analysis Using Kernel Density Estimation

    E-print Network

    Harpole, Jared Kenneth

    2013-05-31

    Exploratory data analysis (EDA) is important, yet often overlooked in the social and behavioral sciences. Graphical analysis of one's data is central to EDA. A viable method of estimating and graphing the underlying density in EDA is kernel density...

  17. Demonstration of line transect methodologies to estimate urban gray squirrel density

    SciTech Connect

    Hein, E.W. [Los Alamos National Lab., NM (United States)]

    1997-11-01

    Because studies estimating density of gray squirrels (Sciurus carolinensis) have been labor intensive and costly, I demonstrate the use of line transect surveys to estimate gray squirrel density and determine the costs of conducting surveys to achieve precise estimates. Density estimates are based on four transects that were surveyed five times from 30 June to 9 July 1994. Using the program DISTANCE, I estimated there were 4.7 (95% CI = 1.86-11.92) gray squirrels/ha on the Clemson University campus. Eleven additional surveys would have decreased the percent coefficient of variation from 30% to 20% and would have cost approximately $114. Estimating urban gray squirrel density using line transect surveys is cost effective and can provide unbiased estimates of density, provided that none of the assumptions of distance sampling theory are violated.

  18. Automatic Diagnosis of Abnormal Tumor Region from Brain Computed Tomography Images Using Wavelet Based Statistical Texture Features

    E-print Network

    Padma, A

    2011-01-01

    The research work presented in this paper achieves tissue classification and automatic diagnosis of the abnormal tumor region present in Computed Tomography (CT) images using a wavelet-based statistical texture analysis method. Comparative studies are performed for the proposed wavelet-based texture analysis method and the Spatial Gray Level Dependence Method (SGLDM). Our proposed system consists of four phases: (i) discrete wavelet decomposition; (ii) feature extraction; (iii) feature selection; (iv) analysis of the extracted texture features by a classifier. A wavelet-based statistical texture feature set is derived from normal and tumor regions. A Genetic Algorithm (GA) is used to select the optimal texture features from the set of extracted texture features. We construct a Support Vector Machine (SVM) based classifier and evaluate its performance by comparing the classification results of the SVM-based classifier with a Back Propagation Neural network classifier (BPN...
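
    A minimal version of the feature side of this pipeline, assuming subband energy and entropy as the texture features and omitting the GA selection step (feature definitions and classifier settings are assumptions):

      import numpy as np
      import pywt
      from sklearn.svm import SVC

      def texture_features(region, wavelet="db4", level=2):
          feats = []
          for subbands in pywt.wavedec2(region, wavelet, level=level)[1:]:
              for band in subbands:                    # cH, cV, cD
                  e = band ** 2
                  p = e / (e.sum() + 1e-12)
                  feats += [e.mean(), float(-(p * np.log(p + 1e-12)).sum())]
          return np.array(feats)

      # X = np.array([texture_features(r) for r in regions]); y = labels
      # clf = SVC(kernel="rbf").fit(X, y)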

  19. Tropical forests and the global carbon cycle: Estimating state and change in biomass density. Book chapter

    SciTech Connect

    Brown, S.

    1996-07-01

    This chapter discusses estimating the biomass density of forest vegetation. Data from inventories of tropical Asia and America were used to estimate biomass densities. Efforts to quantify forest disturbance suggest that population density, at subnational scales, can be used as a surrogate index to encompass all the anthropogenic activities (logging, slash-and-burn agriculture, grazing) that lead to degradation of tropical forest biomass density.

  20. Curve fitting of the corporate recovery rates: the comparison of Beta distribution estimation and kernel density estimation.

    PubMed

    Chen, Rongda; Wang, Ze

    2013-01-01

    Recovery rate is essential to the estimation of a portfolio's loss and economic capital. Neglecting the randomness of the distribution of recovery rate may underestimate the risk. This study introduces two kinds of distribution models, Beta distribution estimation and kernel density estimation, to simulate the distribution of recovery rates of corporate loans and bonds. Models based on the Beta distribution are in common use, for example in CreditMetrics by J.P. Morgan, Portfolio Manager by KMV and LossCalc by Moody's. However, they have a fatal defect: they cannot fit bimodal or multimodal distributions, such as the recovery rates of corporate loans and bonds that Moody's new data show. To overcome this flaw, kernel density estimation is introduced, and we compare the simulation results of the histogram, Beta distribution estimation and kernel density estimation to reach the conclusion that the Gaussian kernel density estimate better imitates the distribution of bimodal or multimodal data samples of corporate loans and bonds. Finally, a Chi-square test of the Gaussian kernel density estimate shows that it can fit the curve of recovery rates of loans and bonds. Using the kernel density estimate to precisely delineate the bimodal recovery rates of bonds is therefore optimal in credit risk management. PMID:23874558
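
    The comparison is easy to reproduce on synthetic data: a maximum-likelihood Beta fit is forced to be smooth and (here) unimodal, while a Gaussian kernel density estimate follows both modes (the mixture standing in for recovery rates is an assumption):

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      sample = np.concatenate([rng.beta(2, 8, 500),
                               rng.beta(9, 2, 500)])        # bimodal on [0, 1]

      a, b, _, _ = stats.beta.fit(sample, floc=0, fscale=1) # single Beta fit
      kde = stats.gaussian_kde(sample)                      # kernel estimate

      print("Beta log-likelihood:", stats.beta.logpdf(sample, a, b).sum())
      print("KDE  log-likelihood:", np.log(kde(sample)).sum())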

  1. Estimation of Parent Specific DNA Copy Number in Tumors using High-Density Genotyping Arrays

    E-print Network

    Zhang, Nancy R.

    Estimation of Parent Specific DNA Copy Number in Tumors using High-Density Genotyping Arrays. Hao [...] the current high-density genotyping platforms. The proposed method does not require matched normal samples, and can estimate the unknown genotypes simultaneously with the parent-specific copy number. The new method

  2. MobSampling: V2V Communications for Traffic Density Estimation

    E-print Network

    Fiore, Marco

    [...] estimation of vehicle traffic density. Our approach envisions vehicles communicating within a VANET [...] to monitor CO2 emissions in different areas of a metropolitan region.

  3. Adaptive quadratic functional estimation of a weighted density by model selection

    Microsoft Academic Search

    Athanasia Petsa; Theofanis Sapatinas

    2010-01-01

    We consider the problem of estimating the integral of the square of a probability density function f on the basis of a random sample from a weighted distribution. Specifically, using model selection via a penalized criterion, an adaptive estimator for ∫f² based on weighted data is proposed for probability density functions which are uniformly bounded and belong to certain

  4. On the analysis of wavelet-based approaches for print grain artifacts

    NASA Astrophysics Data System (ADS)

    Eid, Ahmed H.; Cooper, Brian E.; Rippetoe, Edward E.

    2013-01-01

    Grain is one of several attributes described in ISO/IEC TS 24790, a technical specification for the measurement of image quality for monochrome printed output. It defines grain as aperiodic fluctuations of lightness greater than 0.4 cycles per millimeter, a definition inherited from the latest official standard on printed image quality, ISO/IEC 13660. Since this definition places no bounds on the upper frequency range, higher-frequency fluctuations (such as those from the printer's halftone pattern) could contribute significantly to the measurement of grain artifacts. In a previous publication, we introduced a modification to the ISO/IEC 13660 grain measurement algorithm that includes a band-pass, wavelet-based, filtering step to limit the contribution of high-frequency fluctuations. This modification improves the algorithm's correlation with the subjective evaluation of experts who rated the severity of printed grain artifacts. Seeking to improve upon the grain algorithm in ISO/IEC 13660, the ISO/IEC TS 24790 committee evaluated several graininess metrics. This led to the selection of the above wavelet-based approach as the top candidate algorithm for inclusion in a future ISO/IEC standard. Our recent experimental results showed r² correlation of 0.9278 between the wavelet-based approach and the subjective evaluation conducted by the ISO committee members based upon 26 samples covering a variety of printed grain artifacts. On the other hand, our experiments on the same data set showed much lower correlation (r² = 0.3555) between the ISO/IEC 13660 approach and the same subjective evaluation of the ISO committee members. In addition, we introduce an alternative approach for measuring grain defects based on spatial frequency analysis of wavelet-filtered images. Our goal is to establish a link between the spatial-based grain (ISO/IEC TS 24790) approach and its equivalent frequency-based one in light of Parseval's theorem. Our experimental results showed r² correlation near 0.99 between the spatial- and frequency-based approaches.
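
    The Parseval link amounts to the identity below: band energy can be summed over the pixels of the filtered image or, equally, over its DFT coefficients.

      import numpy as np

      img = np.random.default_rng(1).normal(size=(64, 64))  # stand-in filtered image
      spatial = np.sum(img ** 2)
      spectral = np.sum(np.abs(np.fft.fft2(img)) ** 2) / img.size
      assert np.allclose(spatial, spectral)                 # Parseval's theorem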

  5. A density-dependent model of Cirsium vulgare population dynamics using field-estimated parameter values

    Microsoft Academic Search

    M. Gillman; J. M. Bullock; J. Silvertown; B. Clear Hill

    1993-01-01

    Two versions of a stage-structured model of Cirsium vulgare population dynamics were developed. Both incorporated density dependence at one stage in the life cycle of the plant. In version 1 density dependence was assumed to operate during germination whilst in version 2 it was included at the seedling stage. Density-dependent parameter values for the model were estimated from annual census

  6. A continuous bivariate model for wind power density and wind turbine energy output estimations

    Microsoft Academic Search

    José Antonio Carta; Dunia Mentado

    2007-01-01

    The wind power probability density function is useful in both the design process of a wind turbine and in the evaluation process of the wind resource available at a potential site. The continuous probability models used in the scientific literature to estimate the wind power density distribution function and wind turbine energy output assume that air density is independent of

  7. Classification of EMG signals using wavelet based autoregressive models and neural networks to control prothesis-bionic hand

    Microsoft Academic Search

    I. Yazici; E. Koklukaya; B. Baslo

    2009-01-01

    This work aims to contribute to prosthesis (bionic hand) studies. The four hundred eighty signals used in this work, corresponding to the adduction motion of the thumb, the flexion motion of the thumb, and the abduction motion of the fingers, were collected by surface electrodes. Eight healthy subjects participated in the surface electromyogram (SEMG) data collection. The wavelet-based autoregressive models of the collected signals are used

  8. Wavelet-based correlations of impedance cardiography signals and heart rate variability

    NASA Astrophysics Data System (ADS)

    Podtaev, Sergey; Dumler, Andrew; Stepanov, Rodion; Frick, Peter; Tziberkin, Kirill

    2010-04-01

    The wavelet-based correlation analysis is employed to study impedance cardiography signals (variation in the impedance of the thorax z(t) and the time derivative of the thoracic impedance (-dz/dt)) and heart rate variability (HRV). A method of computer thoracic tetrapolar polyrheocardiography is used for hemodynamic registrations. The modulus of the wavelet-correlation function shows the level of correlation, and the phase indicates the mean phase shift of oscillations at the given scale (frequency). Significant correlations essentially exceeding the values obtained for noise signals are defined within two spectral ranges, which correspond to respiratory activity (0.14-0.5 Hz) and to endothelial-related metabolic activity and neuroendocrine rhythms (0.0095-0.02 Hz). Probably, the phase shift of oscillations in all frequency ranges is related to the peculiarities of parasympathetic and neuro-humoral regulation of the cardiovascular system.
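
    The modulus/phase construction can be sketched with a complex Morlet CWT (the scale grid, wavelet parameters and normalization are assumptions; the study's estimation details are omitted):

      import numpy as np
      import pywt

      def wavelet_correlation(x, y, scales, fs):
          wx, _ = pywt.cwt(x, scales, "cmor1.5-1.0", sampling_period=1.0 / fs)
          wy, _ = pywt.cwt(y, scales, "cmor1.5-1.0", sampling_period=1.0 / fs)
          cross = (wx * np.conj(wy)).mean(axis=1)
          norm = np.sqrt((np.abs(wx) ** 2).mean(axis=1)
                         * (np.abs(wy) ** 2).mean(axis=1))
          return np.abs(cross) / norm, np.angle(cross)   # modulus, phase per scale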

  9. A new algorithm for wavelet-based heart rate variability analysis

    E-print Network

    García, Constantino A; Vila, Xosé; Márquez, David G

    2014-01-01

    One of the most promising non-invasive markers of the activity of the autonomic nervous system is Heart Rate Variability (HRV). HRV analysis toolkits often provide spectral analysis techniques using the Fourier transform, which assumes that the heart rate series is stationary. To overcome this issue, the Short Time Fourier Transform is often used (STFT). However, the wavelet transform is thought to be a more suitable tool for analyzing non-stationary signals than the STFT. Given the lack of support for wavelet-based analysis in HRV toolkits, such analysis must be implemented by the researcher. This has made this technique underutilized. This paper presents a new algorithm to perform HRV power spectrum analysis based on the Maximal Overlap Discrete Wavelet Packet Transform (MODWPT). The algorithm calculates the power in any spectral band with a given tolerance for the band's boundaries. The MODWPT decomposition tree is pruned to avoid calculating unnecessary wavelet coefficients, thereby optimizing execution t...
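
    The band-power computation can be approximated with pywt's decimated wavelet packet transform standing in for the MODWPT (the node-to-band mapping and the strict inclusion rule are simplifying assumptions; the paper's boundary tolerance and tree pruning are omitted):

      import numpy as np
      import pywt

      def band_power(x, fs, band, level=6, wavelet="db4"):
          wp = pywt.WaveletPacket(data=x, wavelet=wavelet, maxlevel=level)
          nodes = wp.get_level(level, order="freq")   # frequency-ordered leaves
          width = fs / 2.0 / len(nodes)               # nominal width per node
          power = 0.0
          for k, node in enumerate(nodes):
              lo, hi = k * width, (k + 1) * width
              if lo >= band[0] and hi <= band[1]:     # node inside the band
                  power += float(np.sum(np.asarray(node.data) ** 2))
          return power

      # e.g. HRV low-frequency band on a 4 Hz resampled series:
      # band_power(rr_resampled, fs=4.0, band=(0.04, 0.15))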

  10. Wavelet-based local region-of-interest reconstruction for synchrotron radiation x-ray microtomography

    NASA Astrophysics Data System (ADS)

    Li, Lingqi; Toda, Hiroyuki; Ohgaki, Tomomi; Kobayashi, Masakazu; Kobayashi, Toshiro; Uesugi, Kentaro; Suzuki, Yoshio

    2007-12-01

    Synchrotron radiation x-ray microtomography is becoming a uniquely powerful method to nondestructively access three-dimensional internal microstructure in biological and engineering materials, with a resolution of 1 μm or less. The tiny field of view of the detector, however, requires that the sample be strictly small, which limits practical applications of the method such as in situ experiments. In this paper, a wavelet-based local tomography algorithm is proposed to recover a small region of interest inside a large object using only the local projections, motivated by the localization property of the wavelet transform. A local tomography experiment on an Al-Cu alloy was carried out at SPring-8, the third-generation synchrotron radiation facility in Japan. The proposed method readily enables high-resolution observation of a large specimen, greatly extending the applicability of current microtomography.

  11. Wavelet-based built-in damage detection and identification for composites

    NASA Astrophysics Data System (ADS)

    Yan, G.; Zhou, Lily L.; Yuan, F. G.

    2005-05-01

    In this paper, a wavelet-based built-in damage detection and identification algorithm for carbon fiber reinforced polymer (CFRP) laminates is proposed. Lamb waves propagating in laminates are first modeled analytically using higher-order plate theory and compared with experimental results in terms of group velocity. Distributed piezoelectric transducers are used to generate and monitor the fundamental ultrasonic Lamb waves in the laminates at narrowband frequencies. A signal processing scheme based on wavelet analysis is applied to the sensor signals to extract the group velocity of the wave propagating in the laminates. Combined with the theoretically computed wave velocity, a genetic algorithm (GA) optimization technique is employed to identify the location and size of the damage. The applicability of the proposed method to detect and size the damage is demonstrated by experimental studies on a composite plate with simulated delamination damage.

  12. An Investigation of Wavelet Bases for Grid-Based Multi-Scale Simulations Final Report

    SciTech Connect

    Baty, R.S.; Burns, S.P.; Christon, M.A.; Roach, D.W.; Trucano, T.G.; Voth, T.E.; Weatherby, J.R.; Womble, D.E.

    1998-11-01

    The research summarized in this report is the result of a two-year effort that has focused on evaluating the viability of wavelet bases for the solution of partial differential equations. The primary objective for this work has been to establish a foundation for hierarchical/wavelet simulation methods based upon numerical performance, computational efficiency, and the ability to exploit the hierarchical adaptive nature of wavelets. This work has demonstrated that hierarchical bases can be effective for problems with a dominant elliptic character. However, the strict enforcement of orthogonality was found to be less desirable than weaker semi-orthogonality or bi-orthogonality for solving partial differential equations. This conclusion has led to the development of a multi-scale linear finite element based on a hierarchical change of basis. The reproducing kernel particle method has been found to yield extremely accurate phase characteristics for hyperbolic problems while providing a convenient framework for multi-scale analyses.

  13. Conjugate Event Study of Geomagnetic ULF Pulsations with Wavelet-based Indices

    NASA Astrophysics Data System (ADS)

    Xu, Z.; Clauer, C. R.; Kim, H.; Weimer, D. R.; Cai, X.

    2013-12-01

    The interactions between the solar wind and the geomagnetic field produce a variety of space weather phenomena, which can impact the advanced technology systems of modern society including, for example, power systems, communication systems, and navigation systems. One type of phenomenon is the geomagnetic ULF pulsation observed by ground-based or in-situ satellite measurements. Here, we describe a wavelet-based index and apply it to study geomagnetic ULF pulsations observed by the Antarctica and Greenland magnetometer arrays. The wavelet indices computed from these data provide spectral, correlation, and magnitude information regarding the geomagnetic pulsations. The results show that the geomagnetic field at conjugate locations responds differently according to the frequency of the pulsations. The index is effective for identifying pulsation events and measures important characteristics of the pulsations. It could be a useful tool for monitoring geomagnetic pulsations.

  14. Evaluation of a new wavelet-based compression algorithm for synthetic aperture radar images

    NASA Astrophysics Data System (ADS)

    Tian, Jun; Guo, Haitao; Wells, Raymond O., Jr.; Burrus, C. Sidney; Odegard, Jan E.

    1996-06-01

    In this paper we discuss the performance of a new wavelet-based embedded compression algorithm on synthetic aperture radar (SAR) image data. This new algorithm uses index coding on the indices of the discrete wavelet transform of the image data and provides an embedded code to successively approximate it. Results on compressing still images, medical images and seismic traces indicate that the new algorithm performs quite competitively with other image compression algorithms. Its evaluation for SAR image compression is presented in this paper. One advantage of the algorithm presented here is that the compressed data are encoded in such a way as to facilitate processing in the compressed wavelet domain, which is a significant aspect considering the rate at which SAR data are collected and the desire to process the data 'near real time'.

  15. Wavelet-based correlation (WBC) of zoned crystal populations and magma mixing

    NASA Astrophysics Data System (ADS)

    Wallace, Glen S.; Bergantz, George W.

    2002-08-01

    Magma mixing is a common process and yet the rates, kinematics and numbers of events are difficult to establish. One expression of mixing is the major, trace element, and isotopic zoning in crystals, which provides a sequential but non-monotonic record of the creation and dissipation of volumes of distinct chemical potential. We demonstrate a wavelet-based correlation (WBC) technique that uses this zoning for the recognition of the minimum number of mixing, or open-system events, and the criteria for identifying populations of crystals that have previously shared a mixing event. When combined with field observations of the spatial distribution of crystal populations, WBC provides a statistical link between the time-varying thermodynamic and fluid dynamic history of the magmatic system. WBC can also be used as a data mining utility to reveal open-system events where outcrop is sparse. An analysis of zoned plagioclase from the Tuolumne Intrusive Suite provides a proof of principle for WBC.

  16. An Evaluation of the Accuracy of Kernel Density Estimators for Home Range Analysis

    Microsoft Academic Search

    D. Erran Seaman; Roger A. Powell

    2008-01-01

    Abstract. Kernel density estimators are becoming more widely used, particularly as home range estimators. Despite extensive interest in their theoretical properties, little empirical research has been done to investigate their performance as home range estimators. We used computer simulations to compare the area and shape of kernel density estimates to the true area and shape of multimodal two-dimensional distributions. The fixed kernel gave area estimates with very little bias when least squares cross validation was used to select

  17. Multi-dimensional Density Estimation David W. Scott a,,1

    E-print Network

    Scott, David W.

    Keywords: cross-validation, curse of dimensionality, exploratory data analysis, frequency polygons, histograms, kernel estimators. [...] at Denver, Denver, CO 80217-3364 USA. Abstract: Modern data analysis requires a number of tools to uncover [...] of techniques and a willingness to go beyond simple univariate methodologies. Many experimental scientists today

  18. A New Computational Approach to Density Estimation with ...

    E-print Network

    2003-12-19


  19. Asymptotic equivalence of density estimation and Gaussian white noise

    Microsoft Academic Search

    Michael Nussbaum

    1996-01-01

    Signal recovery in Gaussian white noise with variance tending to zero has served for some time as a representative model for nonparametric curve estimation, having all the essential traits in a pure form. The equivalence has mostly been stated informally, but an approximation in the sense of Le Cam's deficiency distance $\Delta$ would make it precise. The models are then

  20. Estimates of cetacean abundance, biomass, and population density are

    E-print Network

    Cetaceans along the U.S. west coast may be affected by anthropogenic sound (e.g., sonar, ship noise, and seismic surveys) and climate change [...]. Large whales also die from ship strikes (Carretta et al., 2006). Estimates of abundance, biomass, and population density of cetaceans along the U.S. west coast were derived from ship surveys conducted in the summer and fall.

  1. Estimating insect flight densities from attractive trap catches and flight height distributions.

    PubMed

    Byers, John A

    2012-05-01

    Methods and equations have not been developed previously to estimate insect flight densities, a key factor in decisions regarding trap and lure deployment in programs of monitoring, mass trapping, and mating disruption with semiochemicals. An equation to estimate densities of flying insects per hectare is presented that uses the standard deviation (SD) of the vertical flight distribution, trapping time, the trap's spherical effective radius (ER), catch at the mean flight height (as estimated from a best-fitting normal distribution with SD), and an estimated average flight speed. Data from previous reports were used to estimate flight densities with the equations. The same equations can use traps with pheromone lures or attractive colors with a measured effective attraction radius (EAR) instead of the ER. In practice, EAR is more useful than ER for flight density calculations since attractive traps catch higher numbers of insects and thus can measure lower populations more readily. Computer simulations in three dimensions with varying numbers of insects (density) and varying EAR were used to validate the equations for density estimates of insects in the field. Few studies have provided data to obtain EAR, SD, speed, and trapping time to estimate flight densities per hectare. However, the necessary parameters can be measured more precisely in future studies. PMID:22527056

  2. Daytime fog detection and density estimation with entropy minimization

    NASA Astrophysics Data System (ADS)

    Caraffa, L.; Tarel, J. P.

    2014-08-01

    Fog disturbs proper image processing in many outdoor observation tools. For instance, fog reduces the visibility of obstacles in vehicle driving applications. Usually, estimating the amount of fog in the scene image makes it possible to greatly improve the image processing, and thus to better perform the observation task. One possibility is to restore the visibility of the contrasts in the image from the foggy scene image before applying the usual image processing. Several algorithms were proposed in recent years for defogging. Before applying defogging, it is necessary to detect the presence of fog, so as not to amplify contrasts due to noise. Surprisingly, only a small number of image processing algorithms have been proposed for fog detection and characterization. Most are dedicated to static cameras and cannot be used when the camera is moving. Daytime fog is characterized by its extinction coefficient, which is equivalent to the visibility distance. A visibility meter can be used for fog detection and characterization, but this kind of sensor performs an estimation in a relatively small volume of air, and is thus sensitive to heterogeneous fog and, with moving cameras, to air turbulence. In this paper, we propose an original algorithm, based on entropy minimization, to detect fog and estimate its extinction coefficient by processing stereo pairs. This algorithm is fast, provides accurate results using a low-cost stereo camera sensor and, most importantly, can work when the cameras are moving. The proposed algorithm is evaluated on synthetic and camera images with ground truth. Results show that the proposed method is accurate and, combined with a fast stereo reconstruction algorithm, should provide a solution, close to real time, for fog detection and visibility estimation for moving sensors.
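
    The scan-and-score structure of such an estimator can be sketched as follows, assuming depth d from stereo and the standard fog model I = J·t + A·(1 - t) with t = exp(-β·d); the exact entropy criterion and its normalization follow the paper and are not reproduced here:

      import numpy as np

      def entropy(img, bins=64):
          p, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
          p = p[p > 0] / p.sum()
          return float(-(p * np.log(p)).sum())

      def estimate_beta(I, d, A=1.0, betas=np.linspace(0.0, 0.05, 51)):
          scores = []
          for beta in betas:
              t = np.exp(-beta * d)                       # transmission map
              J = np.clip((I - A * (1.0 - t)) / np.maximum(t, 1e-3), 0.0, 1.0)
              scores.append(entropy(J))                   # score each candidate
          return betas[int(np.argmin(scores))]            # selected extinction coeff.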

  3. Unbiased SVM Density Estimation with Application to Graphical Pattern Recognition

    Microsoft Academic Search

    Edmondo Trentin; Ernesto Di Iorio

    2007-01-01

    Classification of structured data (i.e., data that are represented as graphs) is a topic of interest in the machine learning community. This paper presents a different, simple approach to the problem of structured pattern recognition, relying on the description of graphs in terms of algebraic binary relations. Maximum-a-posteriori decision rules over relations require the estimation of class-conditional probability

  4. Estimated global nitrogen deposition using NO2 column density

    USGS Publications Warehouse

    Lu, Xuehe; Jiang, Hong; Zhang, Xiuying; Liu, Jinxun; Zhang, Zhen; Jin, Jiaxin; Wang, Ying; Xu, Jianhui; Cheng, Miaomiao

    2013-01-01

    Global nitrogen deposition has increased over the past 100 years. Monitoring and simulation studies of nitrogen deposition have evaluated nitrogen deposition at both the global and regional scale. With the development of remote-sensing instruments, tropospheric NO2 column density retrieved from Global Ozone Monitoring Experiment (GOME) and Scanning Imaging Absorption Spectrometer for Atmospheric Chartography (SCIAMACHY) sensors now provides us with a new opportunity to understand changes in reactive nitrogen in the atmosphere. The concentration of NO2 in the atmosphere has a significant effect on atmospheric nitrogen deposition. Following the general nitrogen deposition calculation method, we use the principal component regression method to evaluate global nitrogen deposition based on global NO2 column density and meteorological data. Regarding the accuracy of the simulation, about 70% of the land area of the Earth passed a significance test of regression. In addition, NO2 column density has a significant influence on regression results over 44% of global land. The simulated results show that global average nitrogen deposition was 0.34 g m⁻² yr⁻¹ from 1996 to 2009 and is increasing at about 1% per year. Our simulated results show that China, Europe, and the USA are the three hotspots of nitrogen deposition according to previous research findings. In this study, Southern Asia was found to be another hotspot of nitrogen deposition (about 1.58 g m⁻² yr⁻¹ and maintaining a high growth rate). As nitrogen deposition increases, the number of regions threatened by high nitrogen deposition is also increasing. With N emissions continuing to increase in the future, areas whose ecosystems are affected by high levels of nitrogen deposition will increase.
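
    Principal component regression in its generic form (the study's actual predictors, NO2 column density plus meteorological fields, are replaced here by synthetic data):

      import numpy as np

      def pcr_fit(X, y, k):
          Xc = X - X.mean(axis=0)
          _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
          Z = Xc @ Vt[:k].T                                # scores on k components
          coef, *_ = np.linalg.lstsq(Z, y - y.mean(), rcond=None)
          return Vt[:k].T @ coef, X.mean(axis=0), y.mean()

      rng = np.random.default_rng(0)
      X = rng.normal(size=(200, 6))
      y = 0.8 * X[:, 0] + rng.normal(scale=0.1, size=200)
      beta, x_mean, y_mean = pcr_fit(X, y, k=3)
      y_hat = (X - x_mean) @ beta + y_mean                 # PCR predictions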

  5. The estimation of the gradient of a density function, with applications in pattern recognition

    Microsoft Academic Search

    KEINOSUKE FUKUNAGA; LARRY D. HOSTETLER

    1975-01-01

    Nonparametric density gradient estimation using a generalized kernel approach is investigated. Conditions on the kernel functions are derived to guarantee asymptotic unbiasedness, consistency, and uniform consistency of the estimates. The results are generalized to obtain a simple mean-shift estimate that can be extended in a k-nearest-neighbor approach. Applications of gradient estimation to pattern recognition are presented using clustering and intrinsic dimensionality
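
    The mean-shift procedure that grew out of this gradient analysis is short in its plain Gaussian-kernel form (bandwidth and iteration count are arbitrary choices for the demo):

      import numpy as np

      def mean_shift(points, bandwidth, iters=50):
          modes = points.astype(float)
          for _ in range(iters):
              for i, p in enumerate(modes):
                  w = np.exp(-np.sum((points - p) ** 2, axis=1)
                             / (2.0 * bandwidth ** 2))
                  modes[i] = (w[:, None] * points).sum(axis=0) / w.sum()
          return modes          # points sharing a mode form one cluster

      rng = np.random.default_rng(0)
      pts = np.vstack([rng.normal(0.0, 0.3, (50, 2)), rng.normal(3.0, 0.3, (50, 2))])
      print(np.unique(np.round(mean_shift(pts, 0.5), 1), axis=0))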

  6. Probabilistic Analysis and Density Parameter Estimation Within Nessus

    NASA Technical Reports Server (NTRS)

    Godines, Cody R.; Manteufel, Randall D.; Chamis, Christos C. (Technical Monitor)

    2002-01-01

    This NASA educational grant has the goal of promoting probabilistic analysis methods to undergraduate and graduate UTSA engineering students. Two undergraduate-level and one graduate-level course were offered at UTSA providing a large number of students exposure to and experience in probabilistic techniques. The grant provided two research engineers from Southwest Research Institute the opportunity to teach these courses at UTSA, thereby exposing a large number of students to practical applications of probabilistic methods and state-of-the-art computational methods. In classroom activities, students were introduced to the NESSUS computer program, which embodies many algorithms in probabilistic simulation and reliability analysis. Because the NESSUS program is used at UTSA in both student research projects and selected courses, a student version of a NESSUS manual has been revised and improved, with additional example problems being added to expand the scope of the example application problems. This report documents two research accomplishments in the integration of a new sampling algorithm into NESSUS and in the testing of the new algorithm. The new Latin Hypercube Sampling (LHS) subroutines use the latest NESSUS input file format and specific files for writing output. The LHS subroutines are called out early in the program so that no unnecessary calculations are performed. Proper correlation between sets of multidimensional coordinates can be obtained by using NESSUS' LHS capabilities. Finally, two types of correlation are written to the appropriate output file. The program enhancement was tested by repeatedly estimating the mean, standard deviation, and 99th percentile of four different responses using Monte Carlo (MC) and LHS. These test cases, put forth by the Society of Automotive Engineers, are used to compare probabilistic methods. For all test cases, it is shown that LHS has a lower estimation error than MC when used to estimate the mean, standard deviation, and 99th percentile of the four responses at the 50 percent confidence level and using the same number of response evaluations for each method. In addition, LHS requires fewer calculations than MC in order to be 99.7 percent confident that a single mean, standard deviation, or 99th percentile estimate will be within at most 3 percent of the true value of the each parameter. Again, this is shown for all of the test cases studied. For that reason it can be said that NESSUS is an important reliability tool that has a variety of sound probabilistic methods a user can employ; furthermore, the newest LHS module is a valuable new enhancement of the program.
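
    For reference, a generic Latin hypercube sampler of the kind described (this is not the NESSUS code): each dimension is cut into n equal-probability strata, each stratum is sampled once, and the strata are permuted independently across dimensions.

      import numpy as np
      from scipy.stats import norm

      def latin_hypercube(n, d, rng=None):
          rng = rng or np.random.default_rng()
          strata = rng.permuted(np.tile(np.arange(n), (d, 1)), axis=1).T
          return (strata + rng.uniform(size=(n, d))) / n   # uniform(0,1) marginals

      samples = norm.ppf(latin_hypercube(1000, 4))         # map through inverse CDFs
      print(samples.mean(axis=0), samples.std(axis=0))     # tighter than plain MC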

  7. Radiation Pressure Detection and Density Estimate for 2011 MD

    NASA Astrophysics Data System (ADS)

    Micheli, Marco; Tholen, David J.; Elliott, Garrett T.

    2014-06-01

    We present our astrometric observations of the small near-Earth object 2011 MD (H ~ 28.0), obtained after its very close fly-by to Earth in 2011 June. Our set of observations extends the observational arc to 73 days, and, together with the published astrometry obtained around the Earth fly-by, allows a direct detection of the effect of radiation pressure on the object, with a confidence of 5σ. The detection can be used to put constraints on the density of the object, pointing to either an unexpectedly low value of ρ = (640 ± 330) kg m⁻³ (68% confidence interval) if we assume a typical probability distribution for the unknown albedo, or to an unusually high reflectivity of its surface. This result may have important implications both in terms of impact hazard from small objects and in light of a possible retrieval of this target.

  8. Semi-automated forest stand delineation using wavelet based segmentation of very high resolution optical imagery

    Microsoft Academic Search

    F. M. B. Van Coillie; L. P. C. Verbeke; R. R. De Wulf

    Stand delineation is one of the cornerstones of forest inventory mapping and a key element to spatial aspects in forest management decision making. Stands are forest management units with similarity in attributes such as species composition, density, closure, height and age. Stand boundaries are traditionally estimated through subjective visual air photo interpretation. In this paper, an automatic stand delineation method

  9. A comparison of 2 techniques for estimating deer density

    USGS Publications Warehouse

    Storm, G.L.; Cottam, D.F.; Yahner, R.H.; Nichols, J.D.

    1977-01-01

    We applied mark-resight and area-conversion methods to estimate deer abundance at a 2,862-ha area in and surrounding the Gettysburg National Military Park and Eisenhower National Historic Site during 1987-1991. One observer in each of 11 compartments counted marked and unmarked deer during 65-75 minutes at dusk during 3 counts in each of April and November. Use of radio-collars and vinyl collars provided a complete inventory of marked deer in the population prior to the counts. We sighted 54% of the marked deer during April 1987 and 1988, and 43% of the marked deer during November 1987 and 1988. Mean number of deer counted increased from 427 in April 1987 to 582 in April 1991, and increased from 467 in November 1987 to 662 in November 1990. Herd size during April, based on the mark-resight method, increased from approximately 700-1,400 from 1987-1991, whereas the estimates for November indicated an increase from 983 for 1987 to 1,592 for 1990. Given the large proportion of open area and the extensive road system throughout the study area, we concluded that the sighting probability for marked and unmarked deer was fairly similar. We believe that the mark-resight method was better suited to our study than the area-conversion method because deer were not evenly distributed between areas suitable and unsuitable for sighting within open and forested areas. The assumption of equal distribution is required by the area-conversion method. Deer marked for the mark-resight method also helped reduce double counting during the dusk surveys.

  10. Current-density estimation of exercise-induced ischemia in patients with multivessel coronary artery disease

    Microsoft Academic Search

    Jukka Nenonen; Katja Pesola; Kirsi Lauerma; Panu Takala; Juhani Knuuti; Lauri Toivonen; Toivo Katila

    2001-01-01

    Magnetocardiographic and body surface potential mapping data measured in 6 patients with multivessel coronary artery disease were used in equivalent current-density estimation (CDE). Patient-specific boundary-element torso models were acquired from magnetic resonance images. Positron emission tomography data registered with anatomical magnetic resonance imaging data provided the gold standard. Discrete current-density estimation values were computed on the epicardial surface of the

  11. Confident estimation for density of a biological population based on line transect sampling

    Microsoft Academic Search

    Ren-bin Gong; Yun-bei Ma; Yong Zhou

    2010-01-01

    Line transect sampling is a very useful method in surveys of wildlife populations. Confidence interval estimation for the density D of a biological population is proposed based on a sequential design. The survey area is occupied by the population whose size is unknown. A stopping rule is proposed by a kernel-based estimator of the density function of the perpendicular data at a

  12. Density Estimation with Confidence Sets Exemplified by Superclusters and Voids in the Galaxies

    Microsoft Academic Search

    Kathryn Roeder

    1990-01-01

    A method is presented for forming both a point estimate and a confidence set of semiparametric densities. The final product is a three-dimensional figure that displays a selection of density estimates for a plausible range of smoothing parameters. The boundaries of the smoothing parameter are determined by a nonparametric goodness-of-fit test that is based on the sample spacings. For each

  13. Density meter algorithm and system for estimating sampling/mixing uncertainty

    SciTech Connect

    Shine, E.P.

    1986-01-01

    The Laboratories Department at the Savannah River Plant (SRP) has installed a six-place density meter with an automatic sampling device. This paper describes the statistical software developed to analyze the density of uranyl nitrate solutions using this automated system. The purpose of this software is twofold: to estimate the sampling/mixing and measurement uncertainties in the process and to provide a measurement control program for the density meter. Non-uniformities in density are analyzed both analytically and graphically. The mean density and its limit of error are estimated. Quality control standards are analyzed concurrently with process samples and used to control the density meter measurement error. The analyses are corrected for concentration due to evaporation of samples waiting to be analyzed. The results of this program have been successful in identifying sampling/mixing problems and controlling the quality of analyses.

  14. Density meter algorithm and system for estimating sampling/mixing uncertainty

    SciTech Connect

    Shine, E P

    1986-01-01

    The Laboratories Department at the Savannah River Plant (SRP) has installed a six-place density meter with an automatic sampling device. This paper describes the statistical software developed to analyze the density of uranyl nitrate solutions using this automated system. The purpose of this software is twofold: to estimate the sampling/mixing and measurement uncertainties in the process and to provide a measurement control program for the density meter. Non-uniformities in density are analyzed both analytically and graphically. The mean density and its limit of error are estimated. Quality control standards are analyzed concurrently with process samples and used to control the density meter measurement error. The analyses are corrected for concentration due to evaporation of samples waiting to be analyzed. The results of this program have been successful in identifying sampling/mixing problems and controlling the quality of analyses.

  15. In-Shell Bulk Density as an Estimator of Farmers Stock Grade Factors

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The objective of this research was to determine whether or not bulk density can be used to accurately estimate farmer stock grade factors such as total sound mature kernels and other kernels. Physical properties including bulk density, pod size and kernel size distributions are measured as part of t...

  16. The energy density of jellyfish: Estimates from bomb-calorimetry and proximate-composition

    E-print Network

    Hays, Graeme

    The energy density of three scyphozoan jellyfish (Cyanea capillata, Rhizostoma octopus and Chrysaora hysoscella) was estimated from bomb-calorimetry and proximate-composition; the implications of these low energy densities for species feeding on jellyfish are discussed. © 2007 Elsevier B.V. All rights reserved.

  17. The estimation of cell density in isotropic microcellular polymeric foams using the critical bubble lattice

    Microsoft Academic Search

    Piyapong Buahom; Surat Areerat

    2011-01-01

    In this study, models for estimating the cell density of isotropic polymeric foams using the surface cell density were developed. The basic morphological unit cell for these models is a gas-filled pentagonal dodecahedral cell cavity. The critical bubble lattice model was introduced to associate the packing structure of the pentagonal dodecahedral cells with a face-centered cubic (FCC) packing structure, and

  18. Density meter algorithm and system for estimating sampling/mixing uncertainty

    Microsoft Academic Search

    Shine

    1986-01-01

    The Laboratories Department at the Savannah River Plant (SRP) has installed a six-place density meter with an automatic sampling device. This paper describes the statistical software developed to analyze the density of uranyl nitrate solutions using this automated system. The purpose of this software is twofold: to estimate the sampling/mixing and measurement uncertainties in the process and to provide a

  19. Density meter algorithm and system for estimating sampling/mixing uncertainty

    Microsoft Academic Search

    Shine

    1986-01-01

    The Laboratories Department at the Savannah River Plant (SRP) has installed a six-place density meter with an automatic sampling device. This paper describes the statistical software developed to analyze the density of uranyl nitrate solutions using this automated system. The purpose of this software is twofold: to estimate the sampling/mixing and measurement uncertainties in the process and to provide a

  20. Sensitivity analysis and density estimation for finite-time ruin probabilities

    E-print Network

    Paris-Sud XI, Université de

    Motivated by solvency regulations in Europe, this problem is closely related to density estimation: the paper studies the (x-continuous) density functions of infima of reserve processes commonly used in insurance, using integration by parts. Key words: ruin probability, Malliavin calculus, insurance, integration by parts.

  1. Estimating beaked whale density from single hydrophones by means of propagation modeling

    E-print Network

    Thomas, Len

    Slide presentation from the DECAF project on estimating Blainville's beaked whale density from single hydrophones by means of propagation modeling; the outline covers the study area, the available acoustic data, and the density estimation approach.

  2. Density estimation for small mammals from livetrapping grids: rodents in northern Canada

    E-print Network

    Krebs, Charles J.

    Density was estimated on livetrapping grids with 4 estimators applied to 3 species of boreal forest and 3 species of tundra rodents, using 56 trapping sessions from tundra areas of Herschel Island and Komakuk Beach in northern Yukon (densities of roughly 1 to 25 animals/ha). For tundra rodents both boundary-strip methods produced density estimates smaller than

  3. Body Density Estimates from Upper-Body Skinfold Thicknesses Compared to Air-Displacement Plethysmography

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Technical Summary Objectives: Determine the effect of body mass index (BMI) on the accuracy of body density (Db) estimated with skinfold thickness (SFT) measurements compared to air displacement plethysmography (ADP) in adults. Subjects/Methods: We estimated Db with SFT and ADP in 131 healthy men an...

  4. Comparison of Fish Density Estimates from Repeated Hydroacoustic Surveys on Two Wyoming Waters

    Microsoft Academic Search

    R. Scott Gangl; Roy A. Whaley

    2004-01-01

    The ability to actively sample fish populations is a major advantage of hydroacoustic assessment. This technique does not affect fish behavior, and it typically produces more precise abundance estimates than do other gears. Thus, hydroacoustic surveys repeated on a closed population should produce similar fish density estimates. We sought to demonstrate this on inland waters using multiplexed side- and down-looking

  5. A bound for the smoothing parameter in certain well-known nonparametric density estimators

    NASA Technical Reports Server (NTRS)

    Terrell, G. R.

    1980-01-01

    Two classes of nonparametric density estimators, the histogram and the kernel estimator, both require a choice of smoothing parameter, or 'window width'. The optimum choice of this parameter is in general very difficult. An upper bound to the choices that depends only on the standard deviation of the distribution is described.
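
    The bound can be written down in a few lines. The sketch below computes the commonly quoted "maximal smoothing" upper bounds that depend only on the sample standard deviation; the specific constants (3.729 for the histogram bin width, 1.144 for a Gaussian-kernel bandwidth) are the values usually cited in the later literature and are an assumption here, not taken from this report.

```python
# Sketch: oversmoothed (upper-bound) smoothing parameters from sigma alone.
import numpy as np

def oversmoothed_bin_width(x):
    # Histogram bin-width upper bound: 3.729 * sigma * n**(-1/3) (assumed constant).
    x = np.asarray(x)
    return 3.729 * x.std(ddof=1) * x.size ** (-1 / 3)

def oversmoothed_bandwidth(x):
    # Gaussian-kernel bandwidth upper bound: 1.144 * sigma * n**(-1/5) (assumed constant).
    x = np.asarray(x)
    return 1.144 * x.std(ddof=1) * x.size ** (-1 / 5)

x = np.random.default_rng(0).standard_normal(500)
print("bin width <=", round(oversmoothed_bin_width(x), 3))
print("bandwidth <=", round(oversmoothed_bandwidth(x), 3))
```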

  6. Item Response Theory with Estimation of the Latent Population Distribution Using Spline-Based Densities

    ERIC Educational Resources Information Center

    Woods, Carol M.; Thissen, David

    2006-01-01

    The purpose of this paper is to introduce a new method for fitting item response theory models with the latent population distribution estimated from the data using splines. A spline-based density estimation system provides a flexible alternative to existing procedures that use a normal distribution, or a different functional form, for the…

  7. Nonparametric maximum likelihood estimation of probability densities by penalty function methods

    NASA Technical Reports Server (NTRS)

    Demontricher, G. F.; Tapia, R. A.; Thompson, J. R.

    1974-01-01

    Unless it is known a priori exactly to which finite-dimensional manifold the probability density function giving rise to a set of samples belongs, the parametric maximum likelihood estimation procedure leads to poor estimates and is unstable, while the nonparametric maximum likelihood procedure is undefined. A very general theory of maximum penalized likelihood estimation which should avoid many of these difficulties is presented. It is demonstrated that each reproducing kernel Hilbert space leads, in a very natural way, to a maximum penalized likelihood estimator and that a well-known class of reproducing kernel Hilbert spaces gives polynomial splines as the nonparametric maximum penalized likelihood estimates.
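
    To make the idea concrete, here is a minimal numerical sketch of maximum penalized likelihood density estimation; it is an illustration, not the paper's reproducing-kernel construction. The density is represented on a grid through a softmax (so it is positive and integrates to one), a discrete second-difference penalty plays the role of the roughness functional, and the penalty weight lam is an arbitrary choice.

```python
# Sketch: penalized ML density estimate on a grid.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
data = rng.normal(0.0, 1.0, size=300)
grid = np.linspace(-4, 4, 81)
dx = grid[1] - grid[0]
idx = np.clip(np.searchsorted(grid, data), 1, len(grid) - 1)  # bin of each sample
lam = 50.0  # penalty weight; larger values give smoother estimates

def neg_penalized_loglik(theta):
    f = np.exp(theta - theta.max())
    f /= f.sum() * dx                      # positive density integrating to one
    loglik = np.log(f[idx]).sum()          # log-likelihood of the sample
    rough = np.diff(theta, 2) ** 2         # discrete curvature (roughness) penalty
    return -loglik + lam * rough.sum()

res = minimize(neg_penalized_loglik, np.zeros(len(grid)), method="L-BFGS-B")
f_hat = np.exp(res.x - res.x.max())
f_hat /= f_hat.sum() * dx
print("integral of estimate ~", round(f_hat.sum() * dx, 6))
```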

  8. Estimations of bulk geometrically necessary dislocation density using high resolution EBSD.

    PubMed

    Ruggles, T J; Fullwood, D T

    2013-10-01

    Characterizing the content of geometrically necessary dislocations (GNDs) in crystalline materials is crucial to understanding plasticity. Electron backscatter diffraction (EBSD) effectively recovers local crystal orientation, which is used to estimate the lattice distortion, components of the Nye dislocation density tensor (α), and subsequently the local bulk GND density of a material. This paper presents a complementary estimate of bulk GND density using measurements of local lattice curvature and strain gradients from more recent high resolution EBSD (HR-EBSD) methods. A continuum adaptation of classical equations for the distortion around a dislocation is developed and used to simulate random GND fields to validate the various available approximations of GND content. PMID:23751207

  9. Wavelet-based representations for a class of self-similar signals with application to fractal modulation

    Microsoft Academic Search

    Gregory W. Wornell; Alan V. Oppenheim

    1992-01-01

    A potentially important family of self-similar signals based upon a deterministic scale-invariance characterization is introduced. These signals, which are referred to as 'dy-homogeneous' signals because they generalize the well-known homogeneous functions, have highly convenient representations in terms of orthonormal wavelet bases. In particular, wavelet representations can be exploited to construct orthonormal self-similar bases for these signals. The spectral and fractal

  10. Performance evaluation of wavelet-based face verification on a PDA recorded database

    NASA Astrophysics Data System (ADS)

    Sellahewa, Harin; Jassim, Sabah A.

    2006-05-01

    The rise of international terrorism and the rapid increase in fraud and identity theft have added urgency to the task of developing biometric-based person identification as a reliable alternative to conventional authentication methods. Human identification based on face images is a tough challenge in comparison to identification based on fingerprints or iris recognition. Yet, due to its unobtrusive nature, face recognition is the preferred method of identification for security related applications. The success of such systems will depend on the support of massive infrastructures. Current mobile communication devices (3G smart phones) and PDAs are equipped with a camera which can capture both still and streaming video clips and a touch sensitive display panel. Besides convenience, such devices provide an adequate secure infrastructure for sensitive and financial transactions, by protecting against fraud and repudiation while ensuring accountability. Biometric authentication systems for mobile devices would have obvious advantages in conflict scenarios when communication from beyond enemy lines is essential to save soldier and civilian life. In areas of conflict or disaster the luxury of fixed infrastructure is not available or destroyed. In this paper, we present a wavelet-based face verification scheme that has been specifically designed and implemented on a currently available PDA. We shall report on its performance on the benchmark audio-visual BANCA database and on a newly developed PDA-recorded audio-visual database that includes indoor and outdoor recordings.

  11. Wavelet-based decomposition and analysis of structural patterns in astronomical images

    NASA Astrophysics Data System (ADS)

    Mertens, Florent; Lobanov, Andrei

    2015-02-01

    Context. Images of spatially resolved astrophysical objects contain a wealth of morphological and dynamical information, and effectively extracting this information is of paramount importance for understanding the physics and evolution of these objects. The algorithms and methods currently employed for this purpose (such as Gaussian model fitting) often use simplified approaches to describe the structure of resolved objects. Aims: Automated (unsupervised) methods for structure decomposition and tracking of structural patterns are needed to treat the complexity of the structure and the large amounts of data involved. Methods: We developed a new wavelet-based image segmentation and evaluation (WISE) method for multiscale decomposition, segmentation, and tracking of structural patterns in astronomical images. Results: The method was tested against simulated images of relativistic jets and applied to data from long-term monitoring of parsec-scale radio jets in 3C 273 and 3C 120. Working at its coarsest resolution, WISE reproduces the previous results of a model-fitting evaluation of the structure and kinematics in these jets exceptionally well. Extending the WISE structure analysis to fine scales provides the first robust measurements of two-dimensional velocity fields in these jets and indicates that the velocity fields probably reflect the evolution of Kelvin-Helmholtz instabilities that develop in the flow.

  12. Wavelet-based multifractal analysis of dynamic infrared thermograms to assist in early breast cancer diagnosis

    PubMed Central

    Gerasimova, Evgeniya; Audit, Benjamin; Roux, Stephane G.; Khalil, André; Gileva, Olga; Argoul, Françoise; Naimark, Oleg; Arneodo, Alain

    2014-01-01

    Breast cancer is the most common type of cancer among women and despite recent advances in the medical field, there are still some inherent limitations in the currently used screening techniques. The radiological interpretation of screening X-ray mammograms often leads to over-diagnosis and, as a consequence, to unnecessary traumatic and painful biopsies. Here we propose a computer-aided multifractal analysis of dynamic infrared (IR) imaging as an efficient method for identifying women with risk of breast cancer. Using a wavelet-based multi-scale method to analyze the temporal fluctuations of breast skin temperature collected from a panel of patients with diagnosed breast cancer and some female volunteers with healthy breasts, we show that the multifractal complexity of temperature fluctuations observed in healthy breasts is lost in mammary glands with malignant tumor. Besides potential clinical impact, these results open new perspectives in the investigation of physiological changes that may precede anatomical alterations in breast cancer development. PMID:24860510

  13. Wavelet-based cross-correlation analysis of structure scaling in turbulent clouds

    E-print Network

    Arshakian, T G

    2015-01-01

    We propose a statistical tool to compare the scaling behaviour of turbulence in pairs of molecular cloud maps. Using artificial maps with well defined spatial properties, we calibrate the method and test its limitations to ultimately apply it to a set of observed maps. We develop the wavelet-based weighted cross-correlation (WWCC) method to study the relative contribution of structures of different sizes and their degree of correlation in two maps as a function of spatial scale, and the mutual displacement of structures in the molecular cloud maps. We test the WWCC for circular structures having a single prominent scale and fractal structures showing a self-similar behavior without prominent scales. Observational noise and a finite map size limit the scales where the cross-correlation coefficients and displacement vectors can be reliably measured. For fractal maps containing many structures on all scales, the limitation from the observational noise is negligible for signal-to-noise ratios >5. (abridged)
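
    A stripped-down version of the per-scale comparison is easy to sketch. The code below is not the WWCC implementation: it uses a simple Gaussian-difference (starlet-like) multiscale decomposition and computes one correlation coefficient per scale plane, omitting the weighting, displacement vectors, and noise treatment described in the paper.

```python
# Sketch: per-scale cross-correlation of two maps via an a-trous-style decomposition.
import numpy as np
from scipy.ndimage import gaussian_filter

def atrous_planes(img, n_scales=5):
    planes, smooth = [], img.astype(float)
    for j in range(n_scales):
        smoother = gaussian_filter(smooth, sigma=2 ** j)
        planes.append(smooth - smoother)     # detail at scale ~2**j pixels
        smooth = smoother
    return planes

def per_scale_correlation(a, b, n_scales=5):
    out = []
    for pa, pb in zip(atrous_planes(a, n_scales), atrous_planes(b, n_scales)):
        pa, pb = pa - pa.mean(), pb - pb.mean()
        out.append((pa * pb).sum() / np.sqrt((pa ** 2).sum() * (pb ** 2).sum()))
    return out

rng = np.random.default_rng(0)
base = gaussian_filter(rng.standard_normal((128, 128)), 4)   # shared structure
noisy = base + 0.3 * rng.standard_normal((128, 128))         # plus small-scale noise
print(np.round(per_scale_correlation(base, noisy), 3))       # rises with scale
```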

  14. Wavelet-based double-difference seismic tomography with sparsity regularization

    NASA Astrophysics Data System (ADS)

    Fang, Hongjian; Zhang, Haijiang

    2014-11-01

    We have developed a wavelet-based double-difference (DD) seismic tomography method. Instead of solving for the velocity model itself, the new method inverts for its wavelet coefficients in the wavelet domain. This method takes advantage of the multiscale property of the wavelet representation and solves the model at different scales. A sparsity constraint is applied to the inversion system to make the set of wavelet coefficients of the velocity model sparse. This considers the fact that the background velocity variation is generally smooth and the inversion proceeds in a multiscale way with larger scale features resolved first and finer scale features resolved later, which naturally leads to the sparsity of the wavelet coefficients of the model. The method is both data- and model-adaptive because wavelet coefficients are non-zero in the regions where the model changes abruptly when they are well sampled by ray paths and the model is resolved from coarser to finer scales. An iteratively reweighted least squares procedure is adopted to solve the inversion system with the sparsity regularization. A synthetic test for an idealized fault zone model shows that the new method can better resolve the discontinuous boundaries of the fault zone and the velocity values are also better recovered compared to the original DD tomography method that uses the first-order Tikhonov regularization.
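
    The essential structure of the inversion, solving for the wavelet coefficients of the model under an L1 sparsity penalty via iteratively reweighted least squares, can be sketched on a 1-D toy problem. The operator, wavelet choice, and penalty weight below are assumptions of the illustration, with a Gaussian blur standing in for the tomographic ray sampling.

```python
# Sketch: sparsity-regularized inversion in a wavelet basis via IRLS.
import numpy as np
import pywt

n, wav, level, lam = 64, "db4", 3, 0.1
x = np.linspace(0, 1, n)
model = np.where(x < 0.5, 1.0, 2.0) + 0.3 * np.sin(6 * np.pi * x)  # sharp + smooth

# Forward operator: normalized Gaussian blur rows, plus noisy data.
A = np.array([np.exp(-0.5 * ((x - xi) / 0.05) ** 2) for xi in x])
A /= A.sum(axis=1, keepdims=True)
rng = np.random.default_rng(0)
data = A @ model + 0.01 * rng.standard_normal(n)

# Explicit wavelet synthesis matrix W so that model = W @ coeffs.
coeffs = pywt.wavedec(np.zeros(n), wav, level=level, mode="periodization")
arr, slices = pywt.coeffs_to_array(coeffs)
W = np.empty((n, arr.size))
for k in range(arr.size):
    unit = np.zeros(arr.size)
    unit[k] = 1.0
    W[:, k] = pywt.waverec(
        pywt.array_to_coeffs(unit, slices, output_format="wavedec"),
        wav, mode="periodization")

G = A @ W
c = np.linalg.lstsq(G, data, rcond=None)[0]
for _ in range(20):  # IRLS: quadratic reweighting approximates the L1 penalty
    P = np.diag(lam / (np.abs(c) + 1e-6))
    c = np.linalg.solve(G.T @ G + P, G.T @ data)
recovered = W @ c
print("rms model error:", np.sqrt(np.mean((recovered - model) ** 2)))
```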

  15. Matrix-free application of Hamiltonian operators in Coifman wavelet bases

    NASA Astrophysics Data System (ADS)

    Acevedo, Ramiro; Lombardini, Richard; Johnson, Bruce R.

    2010-06-01

    A means of evaluating the action of Hamiltonian operators on functions expanded in orthogonal compact support wavelet bases is developed, avoiding the direct construction and storage of operator matrices that complicate extension to coupled multidimensional quantum applications. Application of a potential energy operator is accomplished by simple multiplication of the two sets of expansion coefficients without any convolution. The errors of this coefficient product approximation are quantified and lead to use of particular generalized coiflet bases, derived here, that maximize the number of moment conditions satisfied by the scaling function. This is at the expense of the number of vanishing moments of the wavelet function (approximation order), which appears to be a disadvantage but is shown surmountable. In particular, application of the kinetic energy operator, which is accomplished through the use of one-dimensional (1D) [or at most two-dimensional (2D)] differentiation filters, then degrades in accuracy if the standard choice is made. However, it is determined that use of high-order finite-difference filters yields strongly reduced absolute errors. Eigensolvers that ordinarily use only matrix-vector multiplications, such as the Lanczos algorithm, can then be used with this more efficient procedure. Applications are made to anharmonic vibrational problems: a 1D Morse oscillator, a 2D model of proton transfer, and three-dimensional vibrations of nitrosyl chloride on a global potential energy surface.

  16. A wavelet-based adaptive fusion algorithm of infrared polarization imaging

    NASA Astrophysics Data System (ADS)

    Yang, Wei; Gu, Guohua; Chen, Qian; Zeng, Haifang

    2011-08-01

    The purpose of infrared polarization imaging is to highlight man-made targets against a complex natural background. Because infrared polarization images can significantly distinguish target from background using different features, this paper presents a wavelet-based infrared polarization image fusion algorithm. The method mainly processes the high-frequency portion of the signal; for the low-frequency portion, the usual weighted-average method is applied. The high-frequency part is processed as follows: first, the high-frequency information of the source images is extracted by wavelet transform; then the signal strength of a 3x3 window area is calculated, with the regional signal intensity ratio of the source images serving as a matching measure. The extraction method and decision mode for the details are determined by the decision-making module. The fusion result is closely related to the threshold setting of the decision-making module. Instead of the commonly used empirical approach, a quadratic interpolation optimization algorithm is proposed in this paper to obtain the threshold: the endpoints and midpoint of the threshold search interval are set as initial interpolation nodes, the minimum of the quadratic interpolation function is computed, and the best threshold is obtained by comparing these minima. A series of image quality evaluations shows that the method improves the fusion result, not only for individual images but also across a large number of images.
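
    A minimal sketch of the high-frequency fusion rule is given below, assuming PyWavelets for the transform: approximation bands are averaged, and each detail coefficient is taken from whichever source image has the larger local 3x3 energy. The paper's threshold selection by quadratic interpolation is omitted, and the wavelet and level are arbitrary choices.

```python
# Sketch: wavelet fusion with regional-energy selection in detail subbands.
import numpy as np
import pywt
from scipy.ndimage import uniform_filter

def fuse(img_a, img_b, wav="db2", level=2):
    ca = pywt.wavedec2(img_a, wav, level=level)
    cb = pywt.wavedec2(img_b, wav, level=level)
    fused = [(ca[0] + cb[0]) / 2.0]                    # average approximations
    for da, db in zip(ca[1:], cb[1:]):                 # per decomposition level
        bands = []
        for sa, sb in zip(da, db):                     # (H, V, D) subbands
            ea = uniform_filter(sa * sa, size=3)       # local 3x3 energy
            eb = uniform_filter(sb * sb, size=3)
            bands.append(np.where(ea >= eb, sa, sb))   # keep stronger detail
        fused.append(tuple(bands))
    return pywt.waverec2(fused, wav)

rng = np.random.default_rng(0)
a, b = rng.random((64, 64)), rng.random((64, 64))
print(fuse(a, b).shape)
```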

  17. Forward solving in Electrical Impedance Tomography with algebraic multigrid wavelet based preconditioners

    NASA Astrophysics Data System (ADS)

    Borsic, A.; Bayford, R.

    2010-04-01

    Electrical Impedance Tomography is a soft-field tomography modality, where image reconstruction is formulated as a non-linear least-squares model fitting problem. The Newton-Raphson scheme is used for actually reconstructing the image, and this involves three main steps: forward solving, computation of the Jacobian, and computation of the conductivity update. Forward solving relies typically on the finite element method, resulting in the solution of a sparse linear system. In typical three-dimensional biomedical applications of EIT, like breast, prostate, or brain imaging, it is desirable to work with sufficiently fine meshes in order to properly capture the shape of the domain and of the electrodes, and to describe the resulting electric field with accuracy. These requirements result in meshes with 100,000 nodes or more. The solution of the resulting forward problems is computationally intensive. We address this aspect by speeding up the solution of the FEM linear system by the use of efficient numeric methods and of new hardware architectures. In particular, in terms of numeric methods, we solve the forward problem using the Conjugate Gradient method, with a wavelet-based algebraic multigrid (AMG) preconditioner. This preconditioner is faster to set up than other AMG preconditioners which are not based on wavelets, uses less memory, and provides for faster convergence. We report results for a MATLAB-based prototype algorithm and we discuss details of work in progress on a GPU implementation.
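
    The forward-solve step reduces to a preconditioned conjugate-gradient solve of a sparse symmetric positive-definite system. The sketch below uses SciPy's cg with a simple Jacobi (diagonal) preconditioner as a stand-in for the wavelet-based AMG preconditioner described in the paper, and a small 1-D Laplacian surrogate for the FEM stiffness matrix.

```python
# Sketch: preconditioned CG on a sparse SPD "stiffness" system.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg, LinearOperator

n = 500                                       # nodes in a 1-D surrogate "mesh"
main = 2.0 * np.ones(n)
off = -1.0 * np.ones(n - 1)
A = sp.diags([off, main, off], [-1, 0, 1], format="csr")  # SPD, stiffness-like
b = np.ones(n)

d_inv = 1.0 / A.diagonal()
M = LinearOperator((n, n), matvec=lambda r: d_inv * r)    # Jacobi: M ~ A^{-1}

x, info = cg(A, b, M=M, maxiter=2 * n)
print("converged" if info == 0 else f"cg info={info}",
      "residual:", np.linalg.norm(A @ x - b))
```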

  18. Estimation of mechanical properties of panels based on modal density and mean mobility measurements

    NASA Astrophysics Data System (ADS)

    Elie, Benjamin; Gautier, François; David, Bertrand

    2013-11-01

    The mechanical characteristics of wood panels used by instrument makers depend on numerous factors, including the species of the wood and the characteristics of the wood sample (fiber direction, microstructure). This leads to variations in Young's modulus, the mass density, and the damping coefficients. Existing methods for estimating these parameters are not suitable for instrument makers, mainly because they require expensive experimental setups or complicated protocols that are not adapted to daily practice in a workshop. In this paper, a method for estimating Young's modulus, the mass density, and the modal loss factors of flat panels, requiring only a few measurement points and an affordable experimental setup, is presented. It is based on the estimation of two characteristic quantities: the modal density and the mean mobility. The modal density is computed from the values of the modal frequencies estimated by the subspace method ESPRIT (Estimation of Signal Parameters via Rotational Invariance Techniques), associated with the signal enumeration technique ESTER (ESTimation of ERror). This modal identification technique is proved to be robust in the low- and mid-frequency domains, i.e. when the modal overlap factor does not exceed 1. The estimation of the modal parameters also enables the computation of the modal loss factor in the low- and mid-frequency domains. Fitting the theoretical expressions for the modal density and the mean mobility to the measurements enables an accurate estimation of Young's modulus and the mass density of flat panels. A numerical and an experimental study show that the method is robust and that it requires only a few measurement points.

  19. Estimation of tiger densities in India using photographic captures and recaptures

    USGS Publications Warehouse

    Karanth, U.; Nichols, J.D.

    1998-01-01

    Previously applied methods for estimating tiger (Panthera tigris) abundance using total counts based on tracks have proved unreliable. In this paper we use a field method proposed by Karanth (1995), combining camera-trap photography to identify individual tigers based on stripe patterns, with capture-recapture estimators. We developed a sampling design for camera-trapping and used the approach to estimate tiger population size and density in four representative tiger habitats in different parts of India. The field method worked well and provided data suitable for analysis using closed capture-recapture models. The results suggest the potential for applying this methodology for estimating abundances, survival rates and other population parameters in tigers and other low density, secretive animal species with distinctive coat patterns or other external markings. Estimated probabilities of photo-capturing tigers present in the study sites ranged from 0.75-1.00. The estimated mean tiger densities ranged from 4.1 (SE = 1.31) to 11.7 (SE = 1.93) tigers/100 km2. The results support the previous suggestions of Karanth and Sunquist (1995) that densities of tigers and other large felids may be primarily determined by prey community structure at a given site.

  20. Estimating detection and density of the Andean cat in the high Andes

    USGS Publications Warehouse

    Reppucci, J.; Gardner, B.; Lucherini, M.

    2011-01-01

    The Andean cat (Leopardus jacobita) is one of the most endangered, yet least known, felids. Although the Andean cat is considered at risk of extinction, rigorous quantitative population studies are lacking. Because physical observations of the Andean cat are difficult to make in the wild, we used a camera-trapping array to photo-capture individuals. The survey was conducted in northwestern Argentina at an elevation of approximately 4,200 m during October-December 2006 and April-June 2007. In each year we deployed 22 pairs of camera traps, which were strategically placed. To estimate detection probability and density we applied models for spatial capture-recapture using a Bayesian framework. Estimated densities were 0.07 and 0.12 individual/km2 for 2006 and 2007, respectively. Mean baseline detection probability was estimated at 0.07. By comparison, densities of the Pampas cat (Leopardus colocolo), another poorly known felid that shares its habitat with the Andean cat, were estimated at 0.74-0.79 individual/km2 in the same study area for 2006 and 2007, and its detection probability was estimated at 0.02. Despite having greater detectability, the Andean cat is rarer in the study region than the Pampas cat. Properly accounting for the detection probability is important in making reliable estimates of density, a key parameter in conservation and management decisions for any species. © 2011 American Society of Mammalogists.

  1. Estimating detection and density of the Andean cat in the high Andes

    USGS Publications Warehouse

    Reppucci, Juan; Gardner, Beth; Lucherini, Mauro

    2011-01-01

    The Andean cat (Leopardus jacobita) is one of the most endangered, yet least known, felids. Although the Andean cat is considered at risk of extinction, rigorous quantitative population studies are lacking. Because physical observations of the Andean cat are difficult to make in the wild, we used a camera-trapping array to photo-capture individuals. The survey was conducted in northwestern Argentina at an elevation of approximately 4,200 m during October–December 2006 and April–June 2007. In each year we deployed 22 pairs of camera traps, which were strategically placed. To estimate detection probability and density we applied models for spatial capture–recapture using a Bayesian framework. Estimated densities were 0.07 and 0.12 individual/km2 for 2006 and 2007, respectively. Mean baseline detection probability was estimated at 0.07. By comparison, densities of the Pampas cat (Leopardus colocolo), another poorly known felid that shares its habitat with the Andean cat, were estimated at 0.74–0.79 individual/km2 in the same study area for 2006 and 2007, and its detection probability was estimated at 0.02. Despite having greater detectability, the Andean cat is rarer in the study region than the Pampas cat. Properly accounting for the detection probability is important in making reliable estimates of density, a key parameter in conservation and management decisions for any species.

  2. Efficient estimation of power spectral density from laser Doppler anemometer data

    NASA Astrophysics Data System (ADS)

    Nobach, H.; Müller, E.; Tropea, C.

    A non-biased estimator of power spectral density (PSD) is introduced for data obtained from a zeroth order interpolated laser Doppler anemometer (LDA) data set. The systematic error, sometimes referred to as the "particle-rate filter" effect, is removed using an FIR filter parameterized using the mean particle rate. Independently of this, a procedure for estimating the measurement system noise is introduced and applied to the estimated spectra. The spectral estimation is performed in the domain of the autocorrelation function and assumes no further process parameters. The new technique is illustrated using simulated and measured data, in the latter case with direct comparison to simultaneously acquired hot-wire data.
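
    The sample-and-hold route to a PSD can be sketched in a few lines. The code below zero-order-hold resamples randomly timed samples onto a regular grid and takes the PSD from the Fourier transform of the autocorrelation function; the paper's refinements, the FIR correction for the "particle-rate filter" effect and the noise estimation step, are omitted, and the simulated signal is an assumption of the illustration.

```python
# Sketch: PSD of irregularly sampled (LDA-like) data via sample-and-hold + ACF.
import numpy as np

rng = np.random.default_rng(0)
rate, T, fs = 200.0, 5.0, 1000.0              # arrival rate (1/s), duration, grid rate
t = np.cumsum(rng.exponential(1.0 / rate, size=int(rate * T * 2)))
t = t[t < T]                                   # random sampling instants
u = np.sin(2 * np.pi * 25.0 * t) + 0.2 * rng.standard_normal(t.size)

grid = np.arange(0.0, T, 1.0 / fs)
idx = np.maximum(np.searchsorted(t, grid, side="right") - 1, 0)
held = u[idx] - u[idx].mean()                  # zero-order-hold resampling

acf = np.correlate(held, held, mode="full")[held.size - 1:] / held.size
psd = np.abs(np.fft.rfft(acf)) / fs
freqs = np.fft.rfftfreq(acf.size, 1.0 / fs)
mask = freqs < 100.0
print("spectral peak near 25 Hz:", freqs[mask][np.argmax(psd[mask])])
```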

  3. Effects of tissue heterogeneity on the optical estimate of breast density

    PubMed Central

    Taroni, Paola; Pifferi, Antonio; Quarto, Giovanna; Spinelli, Lorenzo; Torricelli, Alessandro; Abbate, Francesca; Balestreri, Nicola; Ganino, Serena; Menna, Simona; Cassano, Enrico; Cubeddu, Rinaldo

    2012-01-01

    Breast density is a recognized strong and independent risk factor for developing breast cancer. At present, breast density is assessed based on the radiological appearance of breast tissue, thus relying on the use of ionizing radiation. We have previously obtained encouraging preliminary results with our portable instrument for time domain optical mammography performed at 7 wavelengths (635–1060 nm). In that case, information was averaged over four images (cranio-caudal and oblique views of both breasts) available for each subject. In the present work, we tested the effectiveness of just one or a few point measurements, to investigate if tissue heterogeneity significantly affects the correlation between optically derived parameters and mammographic density. Data show that parameters estimated through a single optical measurement correlate strongly with mammographic density estimated by using BIRADS categories. A central position is optimal for the measurement, but its exact location is not critical. PMID:23082283

  4. Effects of tissue heterogeneity on the optical estimate of breast density.

    PubMed

    Taroni, Paola; Pifferi, Antonio; Quarto, Giovanna; Spinelli, Lorenzo; Torricelli, Alessandro; Abbate, Francesca; Balestreri, Nicola; Ganino, Serena; Menna, Simona; Cassano, Enrico; Cubeddu, Rinaldo

    2012-10-01

    Breast density is a recognized strong and independent risk factor for developing breast cancer. At present, breast density is assessed based on the radiological appearance of breast tissue, thus relying on the use of ionizing radiation. We have previously obtained encouraging preliminary results with our portable instrument for time domain optical mammography performed at 7 wavelengths (635-1060 nm). In that case, information was averaged over four images (cranio-caudal and oblique views of both breasts) available for each subject. In the present work, we tested the effectiveness of just one or a few point measurements, to investigate if tissue heterogeneity significantly affects the correlation between optically derived parameters and mammographic density. Data show that parameters estimated through a single optical measurement correlate strongly with mammographic density estimated by using BIRADS categories. A central position is optimal for the measurement, but its exact location is not critical. PMID:23082283

  5. Volumetric Breast Density Estimation from Full-Field Digital Mammograms: A Validation Study

    PubMed Central

    Gubern-Mérida, Albert; Kallenberg, Michiel; Platel, Bram; Mann, Ritse M.; Martí, Robert; Karssemeijer, Nico

    2014-01-01

    Objectives To objectively evaluate automatic volumetric breast density assessment in Full-Field Digital Mammograms (FFDM) using measurements obtained from breast Magnetic Resonance Imaging (MRI). Material and Methods A commercially available method for volumetric breast density estimation on FFDM is evaluated by comparing volume estimates obtained from 186 FFDM exams including mediolateral oblique (MLO) and cranial-caudal (CC) views to objective reference standard measurements obtained from MRI. Results Volumetric measurements obtained from FFDM show high correlation with MRI data. Pearson’s correlation coefficients of 0.93, 0.97 and 0.85 were obtained for volumetric breast density, breast volume and fibroglandular tissue volume, respectively. Conclusions Accurate volumetric breast density assessment is feasible in Full-Field Digital Mammograms and has potential to be used in objective breast cancer risk models and personalized screening. PMID:24465808

  6. An Undecimated Wavelet-based Method for Cochlear Implant Speech Processing

    PubMed Central

    Hajiaghababa, Fatemeh; Kermani, Saeed; Marateb, Hamid R.

    2014-01-01

    A cochlear implant is an implanted electronic device used to provide a sensation of hearing to a person who is hard of hearing. The cochlear implant is often referred to as a bionic ear. This paper presents an undecimated wavelet-based speech coding strategy for cochlear implants, which gives a novel speech processing strategy. The undecimated wavelet packet transform (UWPT) is computed like the wavelet packet transform except that it does not down-sample the output at each level. The speech data used for the current study consist of 30 consonants, sampled at 16 kbps. The performance of our proposed UWPT method was compared to that of an infinite impulse response (IIR) filter-bank in terms of mean opinion score (MOS), the short-time objective intelligibility (STOI) measure and segmental signal-to-noise ratio (SNR). The undecimated wavelet gave better segmental SNR on about 96% of the input speech data. The MOS of the proposed method was twice that of the IIR filter-bank. Statistical analysis revealed that the UWPT-based N-of-M strategy significantly improved the MOS, STOI and segmental SNR (P < 0.001) compared with those obtained with the IIR filter-bank based strategies. The advantage of the UWPT is that it is shift-invariant and gives a dense approximation to the continuous wavelet transform, so the information loss is minimal; this is why the UWPT performed better than traditional filter-bank strategies in speech recognition tests. Results showed that the UWPT could be a promising method for speech coding in cochlear implants, although its computational complexity is higher than that of traditional filter-banks. PMID:25426428
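
    A simplified stand-in for the strategy can be sketched with PyWavelets' stationary wavelet transform (pywt.swt), which, like the UWPT, does not down-sample: the signal is split into shift-invariant bands, and in each short frame only the N most energetic of the M bands are retained, as in an N-of-M strategy. The band count, frame length, and wavelet below are assumptions of the illustration.

```python
# Sketch: shift-invariant N-of-M band selection with the stationary wavelet transform.
import numpy as np
import pywt

def swt_n_of_m(signal, n_keep=4, level=6, wav="db4", frame=256):
    pad = (-len(signal)) % (2 ** level)            # swt needs length % 2**level == 0
    sig = np.pad(signal, (0, pad))
    bands = np.array([cD for _, cD in pywt.swt(sig, wav, level=level)])
    out = np.zeros_like(bands)
    for s in range(0, sig.size, frame):
        seg = slice(s, s + frame)
        energy = (bands[:, seg] ** 2).sum(axis=1)
        keep = np.argsort(energy)[-n_keep:]        # N most energetic of M bands
        out[keep, seg] = bands[keep, seg]
    return out[:, :len(signal)]                    # band envelopes would then drive electrodes

fs = 16000
t = np.arange(fs) / fs
speechlike = np.sin(2 * np.pi * 300 * t) * (1 + np.sin(2 * np.pi * 3 * t))
print(swt_n_of_m(speechlike).shape)
```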

  7. Statistically significant contrasts between EMG waveforms revealed using wavelet-based functional ANOVA

    PubMed Central

    McKay, J. Lucas; Welch, Torrence D. J.; Vidakovic, Brani

    2013-01-01

    We developed wavelet-based functional ANOVA (wfANOVA) as a novel approach for comparing neurophysiological signals that are functions of time. Temporal resolution is often sacrificed by analyzing such data in large time bins, increasing statistical power by reducing the number of comparisons. We performed ANOVA in the wavelet domain because differences between curves tend to be represented by a few temporally localized wavelets, which we transformed back to the time domain for visualization. We compared wfANOVA and ANOVA performed in the time domain (tANOVA) on both experimental electromyographic (EMG) signals from responses to perturbation during standing balance across changes in peak perturbation acceleration (3 levels) and velocity (4 levels) and on simulated data with known contrasts. In experimental EMG data, wfANOVA revealed the continuous shape and magnitude of significant differences over time without a priori selection of time bins. However, tANOVA revealed only the largest differences at discontinuous time points, resulting in features with later onsets and shorter durations than those identified using wfANOVA (P < 0.02). Furthermore, wfANOVA required significantly fewer (approximately one-fourth as many; P < 0.015) significant F tests than tANOVA, resulting in post hoc tests with increased power. In simulated EMG data, wfANOVA identified known contrast curves with a high level of precision (r2 = 0.94 ± 0.08) and performed better than tANOVA across noise levels (P << 0.01). Therefore, wfANOVA may be useful for revealing differences in the shape and magnitude of neurophysiological signals (e.g., EMG, firing rates) across multiple conditions with both high temporal resolution and high statistical power. PMID:23100136
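
    The wavelet-domain ANOVA idea can be sketched as follows; this is an illustration, not the authors' exact procedure. Each trial curve is transformed with a DWT, a one-way ANOVA is run per coefficient with a simple Bonferroni screen, and only the significant coefficients of the contrast are inverted back to the time domain.

```python
# Sketch: ANOVA per wavelet coefficient, then inverse transform of significant contrast.
import numpy as np
import pywt
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
n_trials, n_time = 20, 256
t = np.linspace(0, 1, n_time)
bump = np.exp(-0.5 * ((t - 0.4) / 0.03) ** 2)        # localized group effect
groups = [rng.standard_normal((n_trials, n_time)) * 0.5 + g * bump
          for g in range(3)]                          # 3 conditions

wav, level = "db4", 4
def to_coeffs(trial):
    return pywt.coeffs_to_array(
        pywt.wavedec(trial, wav, level=level, mode="periodization"))[0]

coeffs = [np.array([to_coeffs(tr) for tr in g]) for g in groups]
_, slices = pywt.coeffs_to_array(
    pywt.wavedec(groups[0][0], wav, level=level, mode="periodization"))

p = np.array([f_oneway(*(c[:, k] for c in coeffs)).pvalue
              for k in range(n_time)])
effect = coeffs[2].mean(axis=0) - coeffs[0].mean(axis=0)
effect[p > 0.05 / n_time] = 0.0                       # Bonferroni screen
contrast = pywt.waverec(
    pywt.array_to_coeffs(effect, slices, output_format="wavedec"),
    wav, mode="periodization")
print("peak of recovered contrast near t=0.4:", round(t[np.argmax(contrast)], 3))
```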

  8. Directly Linking Magma Chamber Dynamics and Crystal Zoning: The Wavelet Based Correlation (WBC) of Crystal Populations

    NASA Astrophysics Data System (ADS)

    Wallace, G. S.; Bergantz, G. W.

    2001-12-01

    Igneous rocks often show evidence for repeated mixing of distinctive magmas and/or redistribution of within-chamber chemical domains. This is expressed by hybridization trends and changes in isotope ratios at the outcrop and crystal scale, composite dikes, crystal transfer fabrics, and flow structures. We will demonstrate the use of Wavelet Based Correlation (WBC) of crystal zoning populations as a means of 'inverting' for the schedule of magma generation, mixing, crystal growth, and eruption in a structured time-stratigraphic framework. WBC is a new tool that uses the Continuous Wavelet Transform (CWT) to characterize zoning profiles, correlation coefficients of select sets of zoning features to describe crystal similarity, and cluster analysis of correlation coefficients to group crystals into populations. The integrating concepts are the notions of spatial proximity, both within and between samples, of statistical groupings of crystals (clusters) that have experienced a similar thermo-chemical environment at some previous time, and their dispersal and gathering to form new families of clusters. This allows for the construction of a crystal-based phylogeny for the magmatic system where mixing and fractionation events can be ordered and recognized as acting in sequence or in parallel, and the vigor and duration of a mixing event can be inferred from particle dispersal, gathering and zoning. CWT decomposition allows direct comparison of specific components of crystal zoning patterns because the locations of individual spectral features are preserved. For example, boundary layer diffusion growth effects, rapid mixing events and pressure changes tend to have small scales. Using WBC, the data can be windowed in scale space to isolate small-scale details in the profile independent of all other scales of features in the profile. Conversely, large-scale features such as fractional crystallization trends can be isolated in the zoning signal. WBC can provide a statistical binding point between geochemical and dynamic studies of igneous systems.
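
    The core of the approach, comparing zoning profiles within a chosen window of scale space, can be sketched with a continuous wavelet transform; the profiles, wavelet, and scale band below are assumptions of the illustration, and the clustering of many crystals into populations is not shown.

```python
# Sketch: CWT of two zoning profiles, correlated over a band of small scales.
import numpy as np
import pywt

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 400)
trend = 1.0 - 0.6 * x                        # fractionation-like large-scale trend
wiggle = 0.1 * np.sin(40 * np.pi * x)        # small-scale event shared by both crystals
prof_a = trend + wiggle + 0.02 * rng.standard_normal(x.size)
prof_b = 0.5 * trend + wiggle + 0.02 * rng.standard_normal(x.size)

scales = np.arange(1, 64)
cwt_a, _ = pywt.cwt(prof_a, scales, "morl")
cwt_b, _ = pywt.cwt(prof_b, scales, "morl")

small = scales < 12                           # window in scale space
ra, rb = cwt_a[small].ravel(), cwt_b[small].ravel()
r = np.corrcoef(ra, rb)[0, 1]                 # high despite different trends
print("small-scale correlation:", round(r, 3))
```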

  9. Fast and accurate probability density estimation in large high dimensional astronomical datasets

    NASA Astrophysics Data System (ADS)

    Gupta, Pramod; Connolly, Andrew J.; Gardner, Jeffrey P.

    2015-01-01

    Astronomical surveys will generate measurements of hundreds of attributes (e.g. color, size, shape) on hundreds of millions of sources. Analyzing these large, high dimensional data sets will require efficient algorithms for data analysis. An example of this is probability density estimation that is at the heart of many classification problems such as the separation of stars and quasars based on their colors. Popular density estimation techniques use binning or kernel density estimation. Kernel density estimation has a small memory footprint but often requires large computational resources. Binning has small computational requirements, but binning is usually implemented with multi-dimensional arrays, which leads to memory requirements that scale exponentially with the number of dimensions. Hence neither technique scales well to large data sets in high dimensions. We present an alternative approach: binning implemented with hash tables (BASH tables). This approach uses the sparseness of data in the high dimensional space to ensure that the memory requirements are small. However, hashing requires some extra computation, so a priori it is not clear whether the reduction in memory requirements will lead to increased computational requirements. Through an implementation of BASH tables in C++ we show that the additional computational requirements of hashing are negligible. Hence this approach has small memory and computational requirements. We apply our density estimation technique to photometric selection of quasars using non-parametric Bayesian classification and show that the accuracy of the classification is the same as that of earlier approaches. Since the BASH table approach is one to three orders of magnitude faster than earlier approaches, it may be useful in various other applications of density estimation in astrostatistics.
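
    The data structure itself is simple to sketch (here in Python rather than the authors' C++): a hash map keyed by integer bin coordinates stores counts only for occupied bins, so memory scales with the number of data points rather than exponentially with dimension.

```python
# Sketch: hash-table binning ("BASH table"-style) for high-dimensional densities.
import numpy as np
from collections import defaultdict

class HashBinDensity:
    """Histogram density estimate storing only occupied bins in a dict."""

    def __init__(self, bin_width):
        self.h = float(bin_width)
        self.counts = defaultdict(int)
        self.n = 0

    def fit(self, X):
        X = np.atleast_2d(np.asarray(X, dtype=float))
        for row in X:
            key = tuple(np.floor(row / self.h).astype(int))  # bin coordinates
            self.counts[key] += 1
        self.n += len(X)
        return self

    def density(self, x):
        x = np.asarray(x, dtype=float)
        key = tuple(np.floor(x / self.h).astype(int))
        return self.counts[key] / (self.n * self.h ** x.size)

rng = np.random.default_rng(0)
X = rng.standard_normal((100_000, 5))        # 5-D data: a dense grid is infeasible
est = HashBinDensity(bin_width=0.5).fit(X)
print("occupied bins:", len(est.counts))
print("density near origin:", est.density(np.zeros(5)))
```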

  10. A comprehensive evaluation of the heparin-manganese precipitation procedure for estimating high density lipoprotein cholesterol

    Microsoft Academic Search

    G. Russell Warnick; John J. Albers

    The accurate quantitation of high density lipoproteins has recently assumed greater importance in view of studies suggesting their negative correlation with coronary heart disease. High density lipoproteins may be estimated by measuring cholesterol in the plasma fraction of d > 1.063 g/ml. A more practical approach is the specific precipitation of apolipoprotein B (apoB)-containing lipoproteins by sulfated

  11. Trap Array Configuration Influences Estimates and Precision of Black Bear Density and Abundance

    PubMed Central

    Wilton, Clay M.; Puckett, Emily E.; Beringer, Jeff; Gardner, Beth; Eggert, Lori S.; Belant, Jerrold L.

    2014-01-01

    Spatial capture-recapture (SCR) models have advanced our ability to estimate population density for wide ranging animals by explicitly incorporating individual movement. Though these models are more robust to various spatial sampling designs, few studies have empirically tested different large-scale trap configurations using SCR models. We investigated how extent of trap coverage and trap spacing affects precision and accuracy of SCR parameters, implementing models using the R package secr. We tested two trapping scenarios, one spatially extensive and one intensive, using black bear (Ursus americanus) DNA data from hair snare arrays in south-central Missouri, USA. We also examined the influence that adding a second, lower barbed-wire strand to snares had on quantity and spatial distribution of detections. We simulated trapping data to test bias in density estimates of each configuration under a range of density and detection parameter values. Field data showed that using multiple arrays with intensive snare coverage produced more detections of more individuals than extensive coverage. Consequently, density and detection parameters were more precise for the intensive design. Density was estimated as 1.7 bears per 100 km2 and was 5.5 times greater than that under extensive sampling. Abundance was 279 (95% CI = 193–406) bears in the 16,812 km2 study area. Excluding detections from the lower strand resulted in the loss of 35 detections, 14 unique bears, and the largest recorded movement between snares. All simulations showed low bias for density under both configurations. Results demonstrated that in low density populations with non-uniform distribution of population density, optimizing the tradeoff among snare spacing, coverage, and sample size is of critical importance to estimating parameters with high precision and accuracy. With limited resources, allocating available traps to multiple arrays with intensive trap spacing increased the amount of information needed to inform parameters with high precision. PMID:25350557

  12. Trap array configuration influences estimates and precision of black bear density and abundance.

    PubMed

    Wilton, Clay M; Puckett, Emily E; Beringer, Jeff; Gardner, Beth; Eggert, Lori S; Belant, Jerrold L

    2014-01-01

    Spatial capture-recapture (SCR) models have advanced our ability to estimate population density for wide ranging animals by explicitly incorporating individual movement. Though these models are more robust to various spatial sampling designs, few studies have empirically tested different large-scale trap configurations using SCR models. We investigated how extent of trap coverage and trap spacing affects precision and accuracy of SCR parameters, implementing models using the R package secr. We tested two trapping scenarios, one spatially extensive and one intensive, using black bear (Ursus americanus) DNA data from hair snare arrays in south-central Missouri, USA. We also examined the influence that adding a second, lower barbed-wire strand to snares had on quantity and spatial distribution of detections. We simulated trapping data to test bias in density estimates of each configuration under a range of density and detection parameter values. Field data showed that using multiple arrays with intensive snare coverage produced more detections of more individuals than extensive coverage. Consequently, density and detection parameters were more precise for the intensive design. Density was estimated as 1.7 bears per 100 km2 and was 5.5 times greater than that under extensive sampling. Abundance was 279 (95% CI = 193-406) bears in the 16,812 km2 study area. Excluding detections from the lower strand resulted in the loss of 35 detections, 14 unique bears, and the largest recorded movement between snares. All simulations showed low bias for density under both configurations. Results demonstrated that in low density populations with non-uniform distribution of population density, optimizing the tradeoff among snare spacing, coverage, and sample size is of critical importance to estimating parameters with high precision and accuracy. With limited resources, allocating available traps to multiple arrays with intensive trap spacing increased the amount of information needed to inform parameters with high precision. PMID:25350557

  13. New density estimates of a threatened sifaka species (Propithecus coquereli) in Ankarafantsika National Park.

    PubMed

    Kun-Rodrigues, Célia; Salmona, Jordi; Besolo, Aubin; Rasolondraibe, Emmanuel; Rabarivola, Clément; Marques, Tiago A; Chikhi, Lounès

    2014-06-01

    Propithecus coquereli is one of the last sifaka species for which no reliable and extensive density estimates are yet available. Despite its endangered conservation status [IUCN, 2012] and recognition as a flagship species of the northwestern dry forests of Madagascar, its population in its last main refugium, the Ankarafantsika National Park (ANP), is still poorly known. Using line transect distance sampling surveys we estimated population density and abundance in the ANP. Furthermore, we investigated the effects of road, forest edge, river proximity and group size on sighting frequencies, and density estimates. We provide here the first population density estimates throughout the ANP. We found that density varied greatly among surveyed sites (from 5 to ~100 ind/km2) which could result from significant (negative) effects of road and forest edge, and/or a (positive) effect of river proximity. Our results also suggest that the population size may be ~47,000 individuals in the ANP, hinting that the population likely underwent a strong decline in some parts of the Park in recent decades, possibly caused by habitat loss from fires and charcoal production and by poaching. We suggest community-based conservation actions for the largest remaining population of Coquerel's sifaka which will (i) maintain forest connectivity; (ii) implement alternatives to deforestation through charcoal production, logging, and grass fires; (iii) reduce poaching; and (iv) enable long-term monitoring of the population in collaboration with local authorities and researchers. PMID:24443250
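
    For readers unfamiliar with line transect distance sampling, the basic density calculation is short. The worked example below is a textbook half-normal calculation under assumed values, not the paper's analysis: D = n / (2 * L * ESW), where the effective strip width (ESW) comes from a half-normal detection function fitted to the perpendicular distances.

```python
# Sketch: conventional line-transect density estimate with a half-normal detection function.
import numpy as np

rng = np.random.default_rng(0)
sigma_true, L_km = 0.02, 40.0                 # detection scale (km), total effort (km)
d = np.abs(rng.normal(0, sigma_true, 120))    # perpendicular distances (km), simulated

sigma_hat = np.sqrt(np.mean(d ** 2))          # half-normal MLE of sigma
esw = np.sqrt(np.pi / 2) * sigma_hat          # effective strip width (km)
D = d.size / (2 * L_km * esw)                 # groups per km^2
print(f"ESW = {esw * 1000:.0f} m, density = {D:.1f} groups/km^2")
```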

  14. Mid-latitude Ionospheric Storms Density Gradients, Winds, and Drifts Estimated from GPS TEC Imaging

    NASA Astrophysics Data System (ADS)

    Datta-Barua, S.; Bust, G. S.

    2012-12-01

    Ionospheric storm processes at mid-latitudes stand in stark contrast to the typical quiescent behavior. Storm enhanced density (SED) on the dayside affects continent-sized regions horizontally and is often associated with a plume that extends poleward and upward into the nightside. One proposed cause of this behavior is the sub-auroral polarization stream (SAPS) acting on the SED, together with neutral wind effects. The electric field and its effect connecting mid-latitude and polar regions are just beginning to be understood and modeled. Another possible coupling effect is due to neutral winds, particularly those generated at high latitudes by joule heating. Of particular interest are electric fields and winds along the boundaries of the SED and plume, because these may be at least partly a cause of sharp horizontal electron density gradients. Thus, it is important to understand what bearing the drifts and winds, and any spatial variations in them (e.g., shear), have on the structure of the enhancement, particularly at its boundaries. Imaging techniques based on GPS TEC play a significant role in the study of storm dynamics, particularly at mid-latitudes, where sampling of the ionosphere with ground-based GPS lines of sight is most dense. Ionospheric Data Assimilation 4-Dimensional (IDA4D) is a plasma density estimation algorithm that has been used in a number of scientific investigations over several years. Recently, efforts to estimate drivers of the mid-latitude ionosphere, focusing on electric-field-induced drifts and neutral winds, based on high-resolution GPS TEC imaging, have shown promise. Estimating Ionospheric Parameters from Ionospheric Reverse Engineering (EMPIRE) is a tool developed to address this kind of investigation. In this work, electron density and driver estimates are presented for an ionospheric storm using IDA4D in conjunction with EMPIRE. The IDA4D estimates resolve F-region electron densities at 1-degree resolution in the region of passage of the SED and associated plume. High-resolution imaging is used in conjunction with EMPIRE to deduce the dominant drivers. Starting with a baseline Weimer 2001 electric potential model, adjustments to the Weimer model are estimated for the given storm based on the IDA4D-derived densities, to show electric fields associated with the plume. These regional densities and drivers are compared to proximal CHAMP and DMSP data for validation. Gradients in electron density are numerically computed over the 1-degree region. These density gradients are correlated with the drift estimates to identify a possible causal relationship in the formation of the boundaries of the SED.

  15. A hierarchical model for estimating density in camera-trap studies

    USGS Publications Warehouse

    Royle, J.A.; Nichols, J.D.; Karanth, K.U.; Gopalaswamy, A.M.

    2009-01-01

    1. Estimating animal density using capture-recapture data from arrays of detection devices such as camera traps has been problematic due to the movement of individuals and heterogeneity in capture probability among them induced by differential exposure to trapping. 2. We develop a spatial capture-recapture model for estimating density from camera-trapping data which contains explicit models for the spatial point process governing the distribution of individuals and their exposure to and detection by traps. 3. We adopt a Bayesian approach to analysis of the hierarchical model using the technique of data augmentation. 4. The model is applied to photographic capture-recapture data on tigers Panthera tigris in Nagarahole reserve, India. Using this model, we estimate the density of tigers to be 14.3 animals per 100 km2 during 2004. 5. Synthesis and applications. Our modelling framework largely overcomes several weaknesses in conventional approaches to the estimation of animal density from trap arrays. It effectively deals with key problems such as individual heterogeneity in capture probabilities, movement of traps, presence of potential 'holes' in the array and ad hoc estimation of sample area. The formulation, thus, greatly enhances flexibility in the conduct of field surveys as well as in the analysis of data, from studies that may involve physical, photographic or DNA-based 'captures' of individual animals.

  16. Hierarchical models for estimating density from DNA mark-recapture studies.

    PubMed

    Gardner, Beth; Royle, J Andrew; Wegan, Michael T

    2009-04-01

    Genetic sampling is increasingly used as a tool by wildlife biologists and managers to estimate abundance and density of species. Typically, DNA is used to identify individuals captured in an array of traps (e.g., baited hair snares) from which individual encounter histories are derived. Standard methods for estimating the size of a closed population can be applied to such data. However, due to the movement of individuals on and off the trapping array during sampling, the area over which individuals are exposed to trapping is unknown, and so obtaining unbiased estimates of density has proved difficult. We propose a hierarchical spatial capture-recapture model which contains explicit models for the spatial point process governing the distribution of individuals and their exposure to (via movement) and detection by traps. Detection probability is modeled as a function of each individual's distance to the trap. We applied this model to a black bear (Ursus americanus) study conducted in 2006 using a hair-snare trap array in the Adirondack region of New York, USA. We estimated the density of bears to be 0.159 bears/km², which is lower than the estimated density (0.410 bears/km²) based on standard closed population techniques. A Bayesian analysis of the model is fully implemented in the software program WinBUGS. PMID:19449704
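
    The distance-dependent detection model at the core of such spatial capture-recapture analyses is commonly a half-normal curve; a minimal sketch of how it yields a detection probability for an activity centre near a trap array (the layout and parameter values are illustrative, not those of the bear study):

        import numpy as np

        def p_detect(d_km, p0=0.3, sigma_km=1.5):
            """Half-normal detection probability at distance d from a trap.

            p0       : baseline detection probability at the trap itself
            sigma_km : spatial scale of individual movement (km)
            """
            return p0 * np.exp(-d_km**2 / (2.0 * sigma_km**2))

        # Probability an individual with this activity centre is detected at
        # least once on a small hypothetical array over K sampling occasions.
        traps = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0], [2.0, 2.0]])
        centre = np.array([1.0, 1.0])
        K = 5
        d = np.linalg.norm(traps - centre, axis=1)
        p_any = 1.0 - np.prod((1.0 - p_detect(d)) ** K)
        print(f"P(detected at least once) = {p_any:.3f}")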

  17. Productivity and population density estimates of the dengue vector mosquito Aedes aegypti (Stegomyia aegypti) in Australia.

    PubMed

    Williams, C R; Johnson, P H; Ball, T S; Ritchie, S A

    2013-09-01

    New mosquito control strategies centred on modifying populations require knowledge of existing population densities at release sites and an understanding of breeding site ecology. Using a quantitative pupal survey method, we investigated production of the dengue vector Aedes aegypti (L.) (Stegomyia aegypti) (Diptera: Culicidae) in Cairns, Queensland, Australia, and found that garden accoutrements represented the most common container type. Deliberately placed 'sentinel' containers were set at seven houses and sampled for pupae over 10 weeks during the wet season. Pupal production was approximately constant; tyres and buckets represented the most productive container types. Sentinel tyres produced the largest female mosquitoes, but were relatively rare in the field survey. We then used field-collected data to make estimates of per-premises population density using three different approaches. Estimates of female Ae. aegypti abundance per premises made using the container-inhabiting mosquito simulation (CIMSiM) model [95% confidence interval (CI) 18.5-29.1 females] corresponded reasonably well with estimates obtained using a standing crop calculation based on pupal collections (95% CI 8.8-22.5) and using BG-Sentinel traps and a sampling rate correction factor (95% CI 6.2-35.2). By first describing local Ae. aegypti productivity, we were able to compare three separate population density estimates, which provided similar results. We anticipate that this will provide researchers and health officials with several tools with which to make estimates of population densities. PMID:23205694

  18. Hierarchical models for estimating density from DNA mark-recapture studies

    USGS Publications Warehouse

    Gardner, B.; Royle, J.A.; Wegan, M.T.

    2009-01-01

    Genetic sampling is increasingly used as a tool by wildlife biologists and managers to estimate abundance and density of species. Typically, DNA is used to identify individuals captured in an array of traps (e.g., baited hair snares) from which individual encounter histories are derived. Standard methods for estimating the size of a closed population can be applied to such data. However, due to the movement of individuals on and off the trapping array during sampling, the area over which individuals are exposed to trapping is unknown, and so obtaining unbiased estimates of density has proved difficult. We propose a hierarchical spatial capture-recapture model which contains explicit models for the spatial point process governing the distribution of individuals and their exposure to (via movement) and detection by traps. Detection probability is modeled as a function of each individual's distance to the trap. We applied this model to a black bear (Ursus americanus) study conducted in 2006 using a hair-snare trap array in the Adirondack region of New York, USA. We estimated the density of bears to be 0.159 bears/km², which is lower than the estimated density (0.410 bears/km²) based on standard closed population techniques. A Bayesian analysis of the model is fully implemented in the software program WinBUGS.

  19. A Statistical Analysis for Estimating Fish Number Density with the Use of a Multibeam Echosounder

    NASA Astrophysics Data System (ADS)

    Schroth-Miller, Madeline L.

    Fish number density can be estimated from the normalized second moment of acoustic backscatter intensity [Denbigh et al., J. Acoust. Soc. Am. 90, 457-469 (1991)]. This method assumes that the distribution of fish scattering amplitudes is known and that the fish are randomly distributed following a Poisson volume distribution within regions of constant density. It is most useful at low fish densities, relative to the resolution of the acoustic device being used, since the estimators quickly become noisy as the number of fish per resolution cell increases. New models that include noise contributions are considered. The methods were applied to an acoustic assessment of juvenile Atlantic Bluefin Tuna, Thunnus thynnus. The data were collected using a 400 kHz multibeam echo sounder during the summer months of 2009 in Cape Cod, MA. Due to the high resolution of the multibeam system used, the large size (approx. 1.5 m) of the tuna, and the spacing of the fish in the school, we expect there to be low fish densities relative to the resolution of the multibeam system. Results of the fish number density based on the normalized second moment of acoustic intensity are compared to fish packing density estimated using aerial imagery that was collected simultaneously.
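
    A sketch of the underlying estimator: for Poisson-distributed scatterers with random phases, the normalized second moment of intensity is $m = \langle I^2 \rangle / \langle I \rangle^2 = 2 + \gamma/n$, where $n$ is the mean number of scatterers per resolution cell and $\gamma = \langle a^4 \rangle / \langle a^2 \rangle^2$ depends on the scattering-amplitude distribution, so $n$ follows from the excess of $m$ over the fully developed speckle value of 2. A minimal illustration; the published method additionally treats noise and unknown amplitude statistics:

        import numpy as np

        def mean_count_from_intensity(I, gamma=2.0):
            """Estimate mean scatterers per cell from echo intensities I.

            Uses m = <I^2>/<I>^2 = 2 + gamma/n; gamma = <a^4>/<a^2>^2 is
            assumed known (2.0 corresponds to Rayleigh amplitudes).
            """
            m = np.mean(I**2) / np.mean(I)**2
            if m <= 2.0:
                raise ValueError("m <= 2: density too high for this estimator")
            return gamma / (m - 2.0)

        # Synthetic check: n = 0.5 scatterers/cell, Rayleigh amplitudes.
        rng = np.random.default_rng(0)
        counts = rng.poisson(0.5, 50_000)
        I = np.array([abs(np.sum(rng.normal(size=c) + 1j * rng.normal(size=c)))**2
                      for c in counts])
        print(mean_count_from_intensity(I))   # should be close to 0.5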

  20. Using Stopping Rules to Bound the Mean Integrated Squared Error in Density Estimation

    Microsoft Academic Search

    Adam T. Martinsek

    1992-01-01

    Suppose $X_1, X_2, \ldots, X_n$ are i.i.d. with unknown density $f$. There is a well-known expression for the asymptotic mean integrated squared error (MISE) in estimating $f$ by a kernel estimate $\hat{f}_n$, under certain conditions on $f$, the kernel and the bandwidth. Suppose that one would like to choose a sample size so that the MISE is smaller than some preassigned positive number
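
    For reference, the well-known asymptotic expression in question, for a second-order kernel $K$ and bandwidth $h$, with $R(g) = \int g(x)^2\,dx$ and $\mu_2(K) = \int x^2 K(x)\,dx$, is

        $\mathrm{AMISE}(h) = \frac{R(K)}{nh} + \frac{h^4}{4}\,\mu_2(K)^2\,R(f'')$,

    minimized at $h^* = [R(K)/(\mu_2(K)^2 R(f'')\,n)]^{1/5}$; the stopping-rule question is then how large $n$ must be for the attainable MISE to fall below the preassigned bound.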

  1. Plug-In Two-Stage Normal Density Estimation Under MISE Loss: Unknown Variance

    Microsoft Academic Search

    Nitis Mukhopadhyay; William Pepe

    2009-01-01

    Consider independent observations X1, X2,… having a common normal probability density function f(x; σ) with -∞ < x < ∞ and unknown variance σ² (> 0). We propose to estimate f(x; σ) by a plug-in maximum likelihood (ML) two-stage estimator under the mean integrated squared error (MISE) loss function. Our goal is to make the associated risk not exceed a preassigned positive number c, referred to as the

  2. Wavelet-based SAR images despeckling using joint hidden Markov model

    NASA Astrophysics Data System (ADS)

    Li, Qiaoliang; Wang, Guoyou; Liu, Jianguo; Chen, Shaobo

    2007-11-01

    In the past few years, wavelet-domain hidden Markov models have proven to be useful tools for statistical signal and image processing. The hidden Markov tree (HMT) model captures the key features of the joint probability density of the wavelet coefficients of real-world data. One potential drawback of the HMT framework is that it does not take account of the intrascale correlations that exist among neighboring wavelet coefficients. In this paper, we propose to develop a joint hidden Markov model by fusing the wavelet Bayesian denoising technique with an image regularization procedure based on HMT and a Markov random field (MRF). The Expectation Maximization algorithm is used to estimate hyperparameters and specify the mixture model. The noise-free wavelet coefficients are finally estimated by a shrinkage function based on local weighted averaging of the Bayesian estimator. It is shown that the joint method outperforms the Lee filter and standard HMT techniques in terms of the integrative measures of the equivalent number of looks (ENL) and Pratt's figure of merit (FOM), especially when dealing with speckle noise of large variance.

  3. Estimating lung burdens based on individual particle density estimated from scanning electron microscopy and cascade impactor samples.

    PubMed

    Miller, Frederick J; Kaczmar, Swiatoslav W; Danzeisen, Ruth; Moss, Owen R

    2013-12-01

    Workplace air is monitored for overall dust levels and for specific components of the dust to determine compliance with occupational and workplace standards established by regulatory bodies for worker health protection. Exposure monitoring studies were conducted by the International Copper Association (ICA) at various industrial facilities around the world working with copper. Individual cascade impactor stages were weighed to determine the total amount of dust collected on the stage, and then the amounts of soluble and insoluble copper and other metals on each stage were determined; speciation was not determined. Filter samples were also collected for scanning electron microscope analysis. Retrospectively, there was an interest in obtaining estimates of alveolar lung burdens of copper in workers engaged in tasks requiring different levels of exertion as reflected by their minute ventilation. However, mechanistic lung dosimetry models estimate alveolar lung burdens based on particle Stokes diameter. In order to use these dosimetry models, the mass-based aerodynamic diameter distribution (which was measured) had to be transformed into a distribution of Stokes diameters, requiring an estimate of individual particle density. This density value was estimated by using cascade impactor data together with scanning electron microscopy data from filter samples. The developed method was applied to ICA monitoring data sets and then the multiple path particle dosimetry (MPPD) model was used to determine the copper alveolar lung burdens for workers with different functional residual capacities engaged in activities requiring a range of minute ventilation levels. PMID:24304308
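
    The transformation rests on the standard relation between the aerodynamic diameter $d_a$ and the Stokes (volume-equivalent) diameter $d_s$ of a sphere of density $\rho_p$, with $\rho_0 = 1\,\mathrm{g/cm^3}$ the unit reference density; slip corrections and the dynamic shape factor, which a full treatment includes, are omitted here:

        $d_a = d_s \sqrt{\rho_p/\rho_0}$, hence $d_s = d_a \sqrt{\rho_0/\rho_p}$,

    which is why an estimate of the individual particle density $\rho_p$ is the missing ingredient for converting the measured aerodynamic distribution into a Stokes-diameter distribution.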

  4. On the use of the noncentral chi-square density function for the distribution of helicopter spectral estimates

    NASA Technical Reports Server (NTRS)

    Garber, Donald P.

    1993-01-01

    A probability density function for the variability of ensemble averaged spectral estimates from helicopter acoustic signals in Gaussian background noise was evaluated. Numerical methods for calculating the density function and for determining confidence limits were explored. Density functions were predicted for both synthesized and experimental data and compared with observed spectral estimate variability.

  5. Estimation of Density-Dependent Mortality of Juvenile Bivalves in the Wadden Sea

    PubMed Central

    Andresen, Henrike; Strasser, Matthias; van der Meer, Jaap

    2014-01-01

    We investigated density-dependent mortality within the early months of life of the bivalves Macoma balthica (Baltic tellin) and Cerastoderma edule (common cockle) in the Wadden Sea. Mortality is thought to be density-dependent in juvenile bivalves, because there is no proportional relationship between the size of the reproductive adult stocks and the numbers of recruits for both species. It is not known however, when exactly density dependence in the pre-recruitment phase occurs and how prevalent it is. The magnitude of recruitment determines year class strength in bivalves. Thus, understanding pre-recruit mortality will improve the understanding of population dynamics. We analyzed count data from three years of temporal sampling during the first months after bivalve settlement at ten transects in the Sylt-Rømø-Bay in the northern German Wadden Sea. Analyses of density dependence are sensitive to bias through measurement error. Measurement error was estimated by bootstrapping, and residual deviances were adjusted by adding process error. With simulations the effect of these two types of error on the estimate of the density-dependent mortality coefficient was investigated. In three out of eight time intervals density dependence was detected for M. balthica, and in zero out of six time intervals for C. edule. Biological or environmental stochastic processes dominated over density dependence at the investigated scale. PMID:25105293

  6. Estimation of density-dependent mortality of juvenile bivalves in the Wadden Sea.

    PubMed

    Andresen, Henrike; Strasser, Matthias; van der Meer, Jaap

    2014-01-01

    We investigated density-dependent mortality within the early months of life of the bivalves Macoma balthica (Baltic tellin) and Cerastoderma edule (common cockle) in the Wadden Sea. Mortality is thought to be density-dependent in juvenile bivalves, because there is no proportional relationship between the size of the reproductive adult stocks and the numbers of recruits for both species. It is not known however, when exactly density dependence in the pre-recruitment phase occurs and how prevalent it is. The magnitude of recruitment determines year class strength in bivalves. Thus, understanding pre-recruit mortality will improve the understanding of population dynamics. We analyzed count data from three years of temporal sampling during the first months after bivalve settlement at ten transects in the Sylt-Rømø-Bay in the northern German Wadden Sea. Analyses of density dependence are sensitive to bias through measurement error. Measurement error was estimated by bootstrapping, and residual deviances were adjusted by adding process error. With simulations the effect of these two types of error on the estimate of the density-dependent mortality coefficient was investigated. In three out of eight time intervals density dependence was detected for M. balthica, and in zero out of six time intervals for C. edule. Biological or environmental stochastic processes dominated over density dependence at the investigated scale. PMID:25105293

  7. Estimating food portions. Influence of unit number, meal type and energy density

    PubMed Central

    Almiron-Roig, Eva; Solis-Trapala, Ivonne; Dodd, Jessica; Jebb, Susan A.

    2013-01-01

    Estimating how much is appropriate to consume can be difficult, especially for foods presented in multiple units, those with ambiguous energy content and for snacks. This study tested the hypothesis that the number of units (single vs. multi-unit), meal type and food energy density disrupt accurate estimates of portion size. Thirty-two healthy weight men and women attended the laboratory on 3 separate occasions to assess the number of portions contained in 33 foods or beverages of varying energy density (1.7–26.8 kJ/g). Items included 12 multi-unit and 21 single unit foods; 13 were labelled “meal”, 4 “drink” and 16 “snack”. Departures in portion estimates from reference amounts were analysed with negative binomial regression. Overall participants tended to underestimate the number of portions displayed. Males showed greater errors in estimation than females (p = 0.01). Single unit foods and those labelled as ‘meal’ or ‘beverage’ were estimated with greater error than multi-unit and ‘snack’ foods (p = 0.02 and p < 0.001 respectively). The number of portions of high energy density foods was overestimated while the number of portions of beverages and medium energy density foods were underestimated by 30–46%. In conclusion, participants tended to underestimate the reference portion size for a range of food and beverages, especially single unit foods and foods of low energy density and, unexpectedly, overestimated the reference portion of high energy density items. There is a need for better consumer education of appropriate portion sizes to aid adherence to a healthy diet. PMID:23932948

  8. Identification of the monitoring point density needed to reliably estimate contaminant mass fluxes

    NASA Astrophysics Data System (ADS)

    Liedl, R.; Liu, S.; Fraser, M.; Barker, J.

    2005-12-01

    Plume monitoring frequently relies on the evaluation of point-scale measurements of concentration at observation wells which are located at control planes or 'fences' perpendicular to groundwater flow. Depth-specific concentration values are used to estimate the total mass flux of individual contaminants through the fence. Results of this approach, which is based on spatial interpolation, obviously depend on the density of the measurement points. Our contribution relates the accuracy of mass flux estimation to the point density and, in particular, allows us to identify the minimum point density needed to achieve a specified accuracy. In order to establish this relationship, concentration data from fences installed in the coal tar creosote plume at the Borden site are used. These fences are characterized by a rather high density of about 7 points/m², and it is reasonable to assume that the true mass flux is obtained with this point density. This mass flux is then compared with results for less dense grids down to about 0.1 points/m². Mass flux estimates obtained for this range of point densities are analyzed by the moving window method in order to reduce purely random fluctuations. For each position of the moving window the mass flux is estimated and the coefficient of variation (CV) is calculated to quantify the variability of the results. Thus, the CV provides a relative measure of accuracy in the estimated fluxes. By applying this approach to the Borden naphthalene plume at different times, it is found that the point density changes from sufficient to insufficient due to the temporally decreasing mass flux. By comparing the results for naphthalene and phenol at the same fence and at the same time, we can see that the same grid density might be sufficient for one compound but not for another. If a rather strict CV criterion of 5% is used, a grid of 7 points/m² is shown to allow for reliable estimates of the true mass fluxes only in the beginning of plume development, when mass fluxes are high. Long-term data exhibit a very high variation, attributed to the decreasing flux, and a much denser grid would be required to capture the decreasing mass flux with the same high accuracy. However, a less strict CV criterion of 50% may be acceptable given the uncertainties generally associated with other hydrogeologic parameters. In this case, a point density between 1 and 2 points/m² is found to be sufficient for the set of five tested chemicals.

  9. Estimation of effective x-ray tissue attenuation differences for volumetric breast density measurement

    NASA Astrophysics Data System (ADS)

    Chen, Biao; Ruth, Chris; Jing, Zhenxue; Ren, Baorui; Smith, Andrew; Kshirsagar, Ashwini

    2014-03-01

    Breast density has been identified as a risk factor for developing breast cancer and an indicator of lesion diagnostic obstruction due to the masking effect. Volumetric density measurement evaluates fibro-glandular volume, breast volume, and breast volume density measures, which have potential advantages over area density measurement in risk assessment. One class of volume density computing methods is based on finding the relative fibro-glandular tissue attenuation with regard to the reference fat tissue, and the estimation of the effective x-ray tissue attenuation differences between fibro-glandular and fat tissue is key to volumetric breast density computing. We have modeled the effective attenuation difference as a function of the actual x-ray skin entrance spectrum, breast thickness, fibro-glandular tissue thickness distribution, and detector efficiency. Compared to other approaches, our method has threefold advantages: (1) it avoids the system-calibration-based creation of effective attenuation differences, which may introduce tedious calibrations for each imaging system and may not reflect the spectrum change and scatter-induced overestimation or underestimation of breast density; (2) it obtains system-specific separate and differential attenuation values of fibro-glandular and fat tissue for each mammographic image; and (3) it further reduces the impact of breast thickness accuracy on volumetric breast density. A quantitative breast volume phantom with a set of equivalent fibro-glandular thicknesses was used to evaluate the volumetric breast density measurement with the proposed method. The experimental results show that the method significantly improves the accuracy of estimating breast density.

  10. Estimating whale density from their whistling activity: Example with St. Lawrence beluga

    Microsoft Academic Search

    Y. Simard; N. Roy; S. Giard; C. Gervaise; M. Conversano; N. Ménard

    2010-01-01

    A passive acoustic method is developed to estimate whale density from their calling activity in a monitored area. The algorithm is applied to a loquacious species, the white whale (Delphinapterus leucas), in Saguenay fjord mouth near Tadoussac, Canada, which is severely affected by shipping noise. Beluga calls were recorded from cabled coastal hydrophones deployed in the basin while the animal

  11. A wind energy analysis of Grenada: an estimation using the ‘Weibull’ density function

    Microsoft Academic Search

    D Weisser

    2003-01-01

    The Weibull density function has been used to estimate the wind energy potential in Grenada, West Indies. Based on historic recordings of mean hourly wind velocity, this analysis shows the importance of incorporating the variation in wind energy potential during diurnal cycles. Wind energy assessments that are based on a Weibull distribution using average daily/seasonal wind speeds fail to acknowledge that
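
    For reference, the Weibull density with shape $k$ and scale $c$, and the moments that drive wind-energy estimates, are

        $f(v) = \frac{k}{c}\left(\frac{v}{c}\right)^{k-1} e^{-(v/c)^k}$ for $v \ge 0$, with $E[v^n] = c^n\,\Gamma(1 + n/k)$.

    Since available wind power scales with $E[v^3] = c^3\,\Gamma(1 + 3/k)$, which exceeds $(E[v])^3$, assessments built only on average wind speeds understate the contribution of the diurnal variation the abstract highlights.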

  12. How bandwidth selection algorithms impact exploratory data analysis using kernel density estimation.

    PubMed

    Harpole, Jared K; Woods, Carol M; Rodebaugh, Thomas L; Levinson, Cheri A; Lenze, Eric J

    2014-09-01

    Exploratory data analysis (EDA) can reveal important features of underlying distributions, and these features often have an impact on inferences and conclusions drawn from data. Graphical analysis is central to EDA, and graphical representations of distributions often benefit from smoothing. A viable method of estimating and graphing the underlying density in EDA is kernel density estimation (KDE). This article provides an introduction to KDE and examines alternative methods for specifying the smoothing bandwidth in terms of their ability to recover the true density. We also illustrate the comparison and use of KDE methods with 2 empirical examples. Simulations were carried out in which we compared 8 bandwidth selection methods (Sheather-Jones plug-in [SJDP], normal rule of thumb, Silverman's rule of thumb, least squares cross-validation, biased cross-validation, and 3 adaptive kernel estimators) using 5 true density shapes (standard normal, positively skewed, bimodal, skewed bimodal, and standard lognormal) and 9 sample sizes (15, 25, 50, 75, 100, 250, 500, 1,000, 2,000). Results indicate that, overall, SJDP outperformed all methods. However, for smaller sample sizes (25 to 100) either biased cross-validation or Silverman's rule of thumb was recommended, and for larger sample sizes the adaptive kernel estimator with SJDP was recommended. Information is provided about implementing the recommendations in the R computing language. PMID:24885339
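
    The practical point, that the bandwidth selector can change the recovered shape, is easy to reproduce. A minimal sketch in Python rather than the article's R, comparing Silverman's rule with least-squares cross-validation on a bimodal sample (the Sheather-Jones plug-in is not available in these two libraries, so it is omitted):

        import numpy as np
        from scipy.stats import gaussian_kde
        from statsmodels.nonparametric.kernel_density import KDEMultivariate

        # Bimodal sample: a shape Silverman's rule tends to oversmooth.
        rng = np.random.default_rng(0)
        x = np.concatenate([rng.normal(-2, 0.5, 250), rng.normal(2, 0.5, 250)])
        grid = np.linspace(-5, 5, 401)

        kde_silverman = gaussian_kde(x, bw_method="silverman")
        kde_lscv = KDEMultivariate(data=x, var_type="c", bw="cv_ls")

        f_silverman = kde_silverman(grid)          # density on the grid
        f_lscv = kde_lscv.pdf(grid)
        print("Silverman bandwidth:", kde_silverman.factor * x.std(ddof=1))
        print("LSCV bandwidth:     ", kde_lscv.bw[0])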

  13. COMBINING BREEDING BIRD SURVEY AND DISTANCE SAMPLING TO ESTIMATE DENSITY OF MIGRANT AND BREEDING BIRDS

    Microsoft Academic Search

    Scott G. Somershoe; Daniel J. Twedt; Bruce Reid

    2006-01-01

    We combined Breeding Bird Survey point count protocol and distance sampling to survey spring migrant and breeding birds in Vicksburg National Military Park on 33 days between March and June of 2003 and 2004. For 26 of 106 detected species, we used program DISTANCE to estimate detection probabilities and densities

  14. Empirical Testing of Fast Kernel Density Estimation Algorithms

    E-print Network

    Lang, Dustin; Klaas, Mike; de Freitas, Nando

    We present results of experiments testing the Fast Gauss Transform and related fast methods for kernel density estimation, briefly summarizing each of the methods that we test.

  15. Estimating the effect of Earth elasticity and variable water density on tsunami speeds

    E-print Network

    Tsai, Victor C.

    Comparisons of tsunami arrival times from the 11 March 2011 tsunami suggest, however, that the standard

  16. Did the middle class shrink during the 1980s? UK evidence from kernel density estimates

    Microsoft Academic Search

    Stephen P. Jenkins

    1995-01-01

    This paper proposes using kernel density estimation methods to investigate the shrinking middle class hypothesis. The approach reveals striking new evidence of changes in the concentration of middle incomes in the United Kingdom during the 1980s. Breakdowns by family economic status demonstrate that a major cause of the aggregate changes was a moving apart of the income distributions for working

  17. A generalized single linkage method for estimating the cluster tree of a density

    E-print Network

    Washington at Seattle, University of

    The goal of clustering is to detect the presence of distinct groups in a data set and assign group labels to the observations. We demonstrate the approach on several examples. Keywords: cluster analysis, level set, single linkage clustering, excess mass.

  18. ESTIMATION OF SOYBEAN ROOT LENGTH DENSITY DISTRIBUTION WITH DIRECT AND SENSOR BASED MEASUREMENTS OF CLAYPAN MORPHOLOGY

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Hydrologic and morphological properties of claypan landscapes cause variability in soybean root and shoot biomass. This study was conducted to develop predictive models of soybean root length density distribution (RLDd) using direct measurements and sensor based estimators of claypan morphology. A c...

  19. Brain tumor cell density estimation from multi-modal MR images based on a synthetic

    E-print Network

    Prastawa, Marcel

    This paper proposes to employ a detailed tumor growth model to synthesize labelled images which can then be used to train an efficient data-driven machine learning tumor predictor.

  20. Estimating Orangutan Densities Using the Standing Crop and Marked Nest Count Methods: Lessons Learned for Conservation

    E-print Network

    Reliable estimates of great ape abundance are needed to assess distribution and monitor populations during a defined period. We compared orangutan densities calculated by the two methods. To produce reliable results, the MNC method may require a similar amount of effort as the SCNC method.

  1. Estimated number of women likely to benefit from bone mineral density measurement in France

    E-print Network

    Paris-Sud XI, Université de


  2. A PATIENT-SPECIFIC CORONARY DENSITY ESTIMATE R. Shahzad 1,2

    E-print Network

    van Vliet, Lucas J.

    The method builds patient-specific atlases with coronary density fields; the steps towards building these atlases include atlas selection and centreline mapping, and the density estimate is obtained using CTA atlas registration. The method is evaluated by quantifying the overlap of the obtained annotations for 170 CT datasets. Index terms: calcium score, coronary arteries, CT, CTA, atlas, image

  3. A hybrid approach to crowd density estimation using statistical learning and texture classification

    NASA Astrophysics Data System (ADS)

    Li, Yin; Zhou, Bowen

    2013-12-01

    Crowd density estimation is a hot topic in the computer vision community. Established algorithms for crowd density estimation mainly focus on moving crowds, employing background modeling to obtain crowd blobs. However, people's motion is not obvious in many settings, such as the waiting hall of an airport or the lobby of a railway station. Moreover, conventional algorithms for crowd density estimation cannot yield desirable results for all levels of crowding, due to occlusion and clutter. We propose a hybrid method to address the aforementioned problems. First, statistical learning is introduced for background subtraction, comprising a training phase and a test phase. The crowd images are gridded into small blocks which denote foreground or background. Then HOG features are extracted and fed into a binary SVM for each block. Hence, crowd blobs can be obtained from the classification results of the trained classifier. Second, the crowd images are treated as texture images. Therefore, the estimation problem can be formulated as texture classification. The density level can be derived according to the classification results. We validate the proposed algorithm on some real scenarios where the crowd motion is not so obvious. Experimental results demonstrate that our approach can obtain the foreground crowd blobs accurately and work well for different levels of crowding.
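
    A minimal sketch of the first stage, block-wise HOG features fed to a binary SVM, using scikit-image and scikit-learn; the block size, HOG parameters and the tiny stand-in training set are placeholders, not the paper's:

        import numpy as np
        from skimage.feature import hog
        from sklearn.svm import SVC

        BLOCK = 32  # block side length in pixels (placeholder)

        def block_features(image):
            """Split a grayscale image into BLOCK x BLOCK tiles and
            extract one HOG descriptor per tile."""
            h, w = image.shape
            feats, boxes = [], []
            for r in range(0, h - BLOCK + 1, BLOCK):
                for c in range(0, w - BLOCK + 1, BLOCK):
                    tile = image[r:r + BLOCK, c:c + BLOCK]
                    feats.append(hog(tile, orientations=9,
                                     pixels_per_cell=(8, 8),
                                     cells_per_block=(2, 2)))
                    boxes.append((r, c))
            return np.array(feats), boxes

        # Stand-in training tiles labelled 1 = crowd, 0 = background.
        rng = np.random.default_rng(0)
        tiles = rng.random((20, BLOCK, BLOCK))
        labels = np.array([0, 1] * 10)
        X = np.array([hog(t, orientations=9, pixels_per_cell=(8, 8),
                          cells_per_block=(2, 2)) for t in tiles])
        clf = SVC(kernel="rbf").fit(X, labels)

        # Classify each block of a new frame; blocks predicted as
        # foreground together form the crowd blob.
        frame = rng.random((128, 128))
        F, boxes = block_features(frame)
        blob_mask = clf.predict(F)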

  4. Bioenergetics estimate of the effects of stocking density on hatchery production of smallmouth bass fingerlings

    USGS Publications Warehouse

    Robel, G.L.; Fisher, W.L.

    1999-01-01

    Production of and consumption by hatchery-reared fingerling (age-0) smallmouth bass Micropterus dolomieu at various simulated stocking densities were estimated with a bioenergetics model. Fish growth rates and pond water temperatures during the 1996 growing season at two hatcheries in Oklahoma were used in the model. Fish growth and simulated consumption and production differed greatly between the two hatcheries, probably because of differences in pond fertilization and mortality rates. Our results suggest that appropriate stocking density depends largely on prey availability as affected by pond fertilization and on fingerling mortality rates. The bioenergetics model provided a useful tool for estimating production at various stocking densities. However, verification of physiological parameters for age-0 fish of hatchery-reared species is needed.

  5. Population density estimated from locations of individuals on a passive detector array

    USGS Publications Warehouse

    Efford, Murray G.; Dawson, Deanna K.; Borchers, David L.

    2009-01-01

    The density of a closed population of animals occupying stable home ranges may be estimated from detections of individuals on an array of detectors, using newly developed methods for spatially explicit capture–recapture. Likelihood-based methods provide estimates for data from multi-catch traps or from devices that record presence without restricting animal movement ("proximity" detectors such as camera traps and hair snags). As originally proposed, these methods require multiple sampling intervals. We show that equally precise and unbiased estimates may be obtained from a single sampling interval, using only the spatial pattern of detections. This considerably extends the range of possible applications, and we illustrate the potential by estimating density from simulated detections of bird vocalizations on a microphone array. Acoustic detection can be defined as occurring when received signal strength exceeds a threshold. We suggest detection models for binary acoustic data, and for continuous data comprising measurements of all signals above the threshold. While binary data are often sufficient for density estimation, modeling signal strength improves precision when the microphone array is small.

  6. Statistical computation of Boltzmann entropy and estimation of the optimal probability density function from statistical sample

    NASA Astrophysics Data System (ADS)

    Sui, Ning; Li, Min; He, Ping

    2014-12-01

    In this work, we investigate the statistical computation of the Boltzmann entropy of statistical samples. For this purpose, we use both histograms and kernel functions to estimate the probability density function of the samples. We find that, due to coarse-graining, the entropy is a monotonically increasing function of the bin width for histogram estimation, or of the bandwidth for kernel estimation, which makes it difficult to select an optimal bin width or bandwidth for computing the entropy. Fortunately, we notice that there exists a minimum of the first derivative of the entropy for both histogram and kernel estimation, and this minimum point of the first derivative asymptotically points to the optimal bin width or bandwidth. We have verified these findings through extensive numerical experiments. Hence, we suggest that the minimum of the first derivative of the entropy be used as a selector for the optimal bin width or bandwidth of density estimation. Moreover, the optimal bandwidth selected by the minimum of the first derivative of the entropy is purely data-based, independent of the unknown underlying probability density distribution, which is a clear advantage over existing estimators. Our results are not restricted to the one-dimensional case, but can also be extended to multivariate cases. It should be emphasized, however, that we do not provide a rigorous mathematical proof of these findings, and we leave these issues to those who are interested in them.
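
    A sketch of the proposed selector, computing a coarse-grained histogram entropy over a range of bin widths and locating the minimum of its first derivative (synthetic Gaussian sample; the particular entropy definition below is an assumption of this sketch):

        import numpy as np

        def hist_entropy(sample, width):
            """Coarse-grained entropy -sum p_i ln(p_i / width) of a
            histogram with the given bin width."""
            edges = np.arange(sample.min(), sample.max() + width, width)
            p, _ = np.histogram(sample, bins=edges)
            p = p / p.sum()
            p = p[p > 0]
            return -np.sum(p * np.log(p / width))

        rng = np.random.default_rng(0)
        x = rng.normal(size=10_000)

        widths = np.linspace(0.02, 1.0, 200)
        S = np.array([hist_entropy(x, w) for w in widths])

        # Entropy grows with bin width; the paper's selector takes the
        # bin width at the minimum of the first derivative.
        dS = np.gradient(S, widths)
        print(f"selected bin width: {widths[np.argmin(dS)]:.3f}")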

  7. Density estimation of small-mammal populations using a trapping web and distance sampling methods

    USGS Publications Warehouse

    Anderson, David R.; Burnham, Kenneth P.; White, Gary C.; Otis, David L.

    1983-01-01

    Distance sampling methodology is adapted to enable animal density (number per unit of area) to be estimated from capture-recapture and removal data. A trapping web design provides the link between capture data and distance sampling theory. The estimator of density is D = M_{t+1} f(0), where M_{t+1} is the number of individuals captured and f(0) is computed from the M_{t+1} distances from the web center to the traps in which those individuals were first captured. It is possible to check qualitatively the critical assumption on which the web design and the estimator are based. This is a conceptual paper outlining a new methodology, not a definitive investigation of the best specific way to implement this method. Several alternative sampling and analysis methods are possible within the general framework of distance sampling theory; a few alternatives are discussed and an example is given.
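
    A sketch of the estimator's mechanics under one convenient choice, a half-normal model for the first-capture distances, for which the maximum-likelihood estimate of f(0) is closed-form; the paper leaves the form of f open, so this is illustrative only, and any constants the full design prescribes are omitted:

        import numpy as np

        def f0_half_normal(r):
            """ML estimate of f(0) for a half-normal pdf fitted to distances r:
            f(r) = sqrt(2/(pi s^2)) exp(-r^2/(2 s^2)), with MLE s^2 = mean(r^2).
            """
            s2 = np.mean(np.asarray(r, dtype=float) ** 2)
            return np.sqrt(2.0 / (np.pi * s2))

        # First-capture distances (web center to trap of first capture).
        r = np.array([0.05, 0.12, 0.08, 0.20, 0.15, 0.03, 0.11, 0.09])
        M = len(r)                      # M_{t+1}: individuals captured
        D_hat = M * f0_half_normal(r)   # per the abstract's D = M_{t+1} f(0)
        print(f"f(0) = {f0_half_normal(r):.2f}, D = {D_hat:.1f}")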

  8. A Wiener-Wavelet-Based filter for de-noising satellite soil moisture retrievals

    NASA Astrophysics Data System (ADS)

    Massari, Christian; Brocca, Luca; Ciabatta, Luca; Moramarco, Tommaso; Su, Chun-Hsu; Ryu, Dongryeol; Wagner, Wolfgang

    2014-05-01

    The reduction of noise in microwave satellite soil moisture (SM) retrievals is of paramount importance for practical applications, especially those associated with the study of climate change, droughts, floods and other related hydrological processes. So far, Fourier-based methods have been used for de-noising satellite SM retrievals by filtering either the observed emissivity time series (Du, 2012) or the retrieved SM observations (Su et al., 2013). This contribution introduces an alternative approach based on a Wiener-Wavelet-Based filtering (WWB) technique, which uses the entropy-based wavelet de-noising method developed by Sang et al. (2009) to design both a causal and a non-causal version of the filter. WWB is used as a post-retrieval processing tool to enhance the quality of observations derived from i) the Advanced Microwave Scanning Radiometer for the Earth observing system (AMSR-E), ii) the Advanced SCATterometer (ASCAT), and iii) the Soil Moisture and Ocean Salinity (SMOS) satellite. The method is tested on three pilot sites located in Spain (Remedhus network), Greece (Hydrological Observatory of Athens) and Australia (Oznet network). Different quantitative criteria are used to judge the goodness of the de-noising technique. Results show that WWB i) is able to improve both the correlation and the root mean squared differences between satellite retrievals and in situ soil moisture observations, and ii) effectively separates random noise from deterministic components of the retrieved signals. Moreover, the use of WWB de-noised data in place of raw observations within a hydrological application confirms the usefulness of the proposed filtering technique. References: Du, J. (2012), A method to improve satellite soil moisture retrievals based on Fourier analysis, Geophys. Res. Lett., 39, L15404, doi:10.1029/2012GL052435. Su, C.-H., D. Ryu, A. W. Western, and W. Wagner (2013), De-noising of passive and active microwave satellite soil moisture time series, Geophys. Res. Lett., 40, 3624-3630, doi:10.1002/grl.50695. Sang, Y.-F., D. Wang, J.-C. Wu, Q.-P. Zhu, and L. Wang (2009), Entropy-based wavelet de-noising method for time series analysis, Entropy, 11, pp. 1123-1148, doi:10.3390/e11041123.
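
    Not the entropy-based filter of Sang et al., but a generic wavelet-shrinkage sketch of the same post-retrieval idea, using PyWavelets with a universal soft threshold; the wavelet, decomposition level and noise estimate are assumptions of this sketch:

        import numpy as np
        import pywt

        def wavelet_denoise(sm, wavelet="db4", level=3):
            """Soft-threshold wavelet de-noising of a soil-moisture series."""
            coeffs = pywt.wavedec(sm, wavelet, level=level)
            # Robust noise estimate from the finest detail coefficients.
            sigma = np.median(np.abs(coeffs[-1])) / 0.6745
            thresh = sigma * np.sqrt(2.0 * np.log(len(sm)))
            coeffs[1:] = [pywt.threshold(c, thresh, mode="soft")
                          for c in coeffs[1:]]
            return pywt.waverec(coeffs, wavelet)[: len(sm)]

        # Synthetic noisy retrieval: seasonal signal plus retrieval noise.
        t = np.arange(730)
        truth = 0.25 + 0.10 * np.sin(2 * np.pi * t / 365)
        noisy = truth + np.random.default_rng(0).normal(0, 0.04, t.size)
        clean = wavelet_denoise(noisy)
        print("RMSD before:", np.sqrt(np.mean((noisy - truth) ** 2)))
        print("RMSD after: ", np.sqrt(np.mean((clean - truth) ** 2)))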

  9. Reader Variability in Breast Density Estimation from Full-Field Digital Mammograms

    PubMed Central

    Keller, Brad M.; Nathan, Diane L.; Gavenonis, Sara C.; Chen, Jinbo; Conant, Emily F.; Kontos, Despina

    2013-01-01

    Rationale and Objectives Mammographic breast density, a strong risk factor for breast cancer, may be measured as either a relative percentage of dense (ie, radiopaque) breast tissue or as an absolute area from either raw (ie, “for processing”) or vendor postprocessed (ie, “for presentation”) digital mammograms. Given the increasing interest in the incorporation of mammographic density in breast cancer risk assessment, the purpose of this study is to determine the inherent reader variability in breast density assessment from raw and vendor-processed digital mammograms, because inconsistent estimates could lead to misclassification of an individual woman’s risk for breast cancer. Materials and Methods Bilateral, mediolateral-oblique view, raw, and processed digital mammograms of 81 women were retrospectively collected for this study (N = 324 images). Mammographic percent density and absolute dense tissue area estimates for each image were obtained from two radiologists using a validated, interactive software tool. Results The variability of interreader agreement was not found to be affected by the image presentation style (ie, raw or processed, F-test: P > .5). Interreader estimates of relative and absolute breast density are strongly correlated (Pearson r > 0.84, P < .001) but systematically different (t-test, P < .001) between the two readers. Conclusion Our results show that mammographic density may be assessed with equal reliability from either raw or vendor postprocessed images. Furthermore, our results suggest that the primary source of density variability comes from the subjectivity of the individual reader in assessing the absolute amount of dense tissue present in the breast, indicating the need to use standardized tools to mitigate this effect. PMID:23465381

  10. Calculation of absolute spectral densities via stochastic estimators of tr{δ(E - Ĥ)}

    NASA Astrophysics Data System (ADS)

    Jeffrey, Stephen J.; Smith, Sean C.

    1997-10-01

    The calculation of absolute vibrational spectral densities, tr{δ(E - Ĥ)}, is investigated utilizing the stochastic trace estimator technique of Hutchinson. The spectral density is evaluated by a Monte Carlo scheme in which random vectors are sequentially sampled, their spectral density profiles computed and averaged. The requisite matrix elements of δ(E - Ĥ) are evaluated using a Lanczos projection algorithm. The issue of distinguishing degenerate and replicated eigenvalues generated by the Lanczos algorithm is addressed and can be overcome using a recently-developed filter diagonalization scheme. The resulting method is simple, efficient and converges the density of states remarkably quickly for dense spectra. Illustrative calculations are presented for one- and two-dimensional test cases and finally for nitrogen dioxide in the energy range 0-12000 cm⁻¹ using the V11 diabatic surface of Hirsch et al.
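
    The Hutchinson estimator underlying the scheme replaces the trace by an average of quadratic forms: for random vectors z with independent zero-mean, unit-variance entries, E[zᵀAz] = tr(A). A generic sketch, with an explicit symmetric matrix standing in for the spectral-density operator δ(E - Ĥ) whose action the paper obtains by Lanczos projection:

        import numpy as np

        def hutchinson_trace(matvec, n, n_samples=200, seed=0):
            """Stochastic trace estimate: average z^T A z over Rademacher z."""
            rng = np.random.default_rng(seed)
            total = 0.0
            for _ in range(n_samples):
                z = rng.choice([-1.0, 1.0], size=n)
                total += z @ matvec(z)
            return total / n_samples

        # Check against the exact trace of a random symmetric matrix.
        rng = np.random.default_rng(1)
        A = rng.normal(size=(300, 300))
        A = (A + A.T) / 2
        print(hutchinson_trace(lambda v: A @ v, 300), np.trace(A))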

  11. Non-Gaussian probabilistic MEG source localisation based on kernel density estimation

    PubMed Central

    Mohseni, Hamid R.; Kringelbach, Morten L.; Woolrich, Mark W.; Baker, Adam; Aziz, Tipu Z.; Probert-Smith, Penny

    2014-01-01

    There is strong evidence to suggest that data recorded from magnetoencephalography (MEG) follows a non-Gaussian distribution. However, existing standard methods for source localisation model the data using only second order statistics, and therefore use the inherent assumption of a Gaussian distribution. In this paper, we present a new general method for non-Gaussian source estimation of stationary signals for localising brain activity from MEG data. By providing a Bayesian formulation for MEG source localisation, we show that the source probability density function (pdf), which is not necessarily Gaussian, can be estimated using multivariate kernel density estimators. In the case of Gaussian data, the solution of the method is equivalent to that of widely used linearly constrained minimum variance (LCMV) beamformer. The method is also extended to handle data with highly correlated sources using the marginal distribution of the estimated joint distribution, which, in the case of Gaussian measurements, corresponds to the null-beamformer. The proposed non-Gaussian source localisation approach is shown to give better spatial estimates than the LCMV beamformer, both in simulations incorporating non-Gaussian signals, and in real MEG measurements of auditory and visual evoked responses, where the highly correlated sources are known to be difficult to estimate. PMID:24055702

  12. Wavelet-based Time Series Bootstrap Approach for Multidecadal Hydrologic Projections Using Observed and Paleo Data of Climate Indicators

    NASA Astrophysics Data System (ADS)

    Erkyihun, S. T.

    2013-12-01

    Understanding streamflow variability and the ability to generate realistic scenarios at multi-decadal time scales are important for robust water resources planning and management in any river basin - more so in the Colorado River Basin, with its semi-arid climate and highly stressed water resources. It is increasingly evident that large-scale climate forcings such as the El Nino Southern Oscillation (ENSO), Pacific Decadal Oscillation (PDO) and Atlantic Multi-decadal Oscillation (AMO) modulate the Colorado River Basin hydrology at multi-decadal time scales. Thus, modeling these large-scale climate indicators is a prerequisite to conditionally modeling the multi-decadal streamflow variability. To this end, we developed a simulation model that combines a wavelet-based time series method, Wavelet Auto Regressive Moving Average (WARMA), with a K-nearest neighbor (K-NN) bootstrap approach. In this approach, for a given time series (the climate forcings), dominant periodicities/frequency bands are identified from the wavelet spectrum as those that pass the 90% significance test. The time series is filtered at the frequencies in each band to create 'components'; the components are orthogonal and, when added to the residual (i.e., noise), recover the original time series. The components, being smooth, are easily modeled using parsimonious Auto Regressive Moving Average (ARMA) time series models. The fitted ARMA models are used to simulate the individual components, which are added together to obtain a simulation of the original series. The WARMA approach is applied to all the climate forcing indicators, which are used to simulate multi-decadal sequences of these forcings. For the current year, the simulated forcings are considered the 'feature vector' and its K nearest neighbors are identified; one of the neighbors (i.e., one of the historical years) is resampled using a weighted probability metric (with more weight to the nearest neighbor and least to the farthest), and the corresponding streamflow is the simulated value for the current year. We applied this simulation approach to the climate indicators and streamflow at Lees Ferry, AZ, a key gauge on the Colorado River, using data from the observational and paleo periods together spanning 1650-2005. A suite of distributional statistics, such as the probability density function (PDF), mean, variance, skew and lag-1 autocorrelation, along with higher-order and multi-decadal statistics such as spectra and drought and surplus statistics, are computed to check the performance of the flow simulation in capturing the variability of the historic and paleo periods. Our results indicate that this approach robustly reproduces all of the above-mentioned statistical properties. This offers an attractive alternative for near-term (interannual to multi-decadal) flow simulation, which is critical for water resources planning.
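
    A compact sketch of the WARMA half of the scheme: split the series into wavelet 'components' by reconstructing one band at a time, fit a parsimonious ARMA model to each, simulate, and sum. PyWavelets plus statsmodels; the wavelet, level count and ARMA order are placeholders, and the significance test and K-NN streamflow resampling steps are omitted:

        import numpy as np
        import pywt
        from statsmodels.tsa.arima.model import ARIMA

        def warma_simulate(x, wavelet="db4", level=3, order=(1, 0, 1)):
            """Wavelet-ARMA simulation of a climate-index series (sketch)."""
            coeffs = pywt.wavedec(x, wavelet, level=level)
            sim = np.zeros(len(x))
            for i in range(len(coeffs)):
                # Reconstruct band i alone (all other bands zeroed); the
                # bands are orthogonal and sum back to the original series.
                part = [c if j == i else np.zeros_like(c)
                        for j, c in enumerate(coeffs)]
                comp = pywt.waverec(part, wavelet)[: len(x)]
                fit = ARIMA(comp, order=order).fit()
                sim += np.asarray(fit.simulate(nsimulations=len(x)))
            return sim

        rng = np.random.default_rng(0)
        index = np.cumsum(rng.normal(size=356))  # stand-in for a PDO/AMO series
        synthetic = warma_simulate(index)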

  13. Estimating density dependence in time-series of age-structured populations.

    PubMed Central

    Lande, R; Engen, S; Saether, B-E

    2002-01-01

    For a life history with age at maturity α, and stochasticity and density dependence in adult recruitment and mortality, we derive a linearized autoregressive equation with time lags from 1 to α years. Contrary to current interpretations, the coefficients for different time lags in the autoregressive dynamics do not simply measure delayed density dependence, but also depend on life-history parameters. We define a new measure of total density dependence in a life history, D, as the negative elasticity of population growth rate per generation with respect to change in population size, D = -∂ln(λ^T)/∂ln N, where λ is the asymptotic multiplicative growth rate per year, T is the generation time and N is adult population size. We show that D can be estimated from the sum of the autoregression coefficients. We estimated D in populations of six avian species for which life-history data and unusually long time series of complete population censuses were available. Estimates of D were of the order of 1 or higher, indicating strong, statistically significant density dependence in four of the six species. PMID:12396510

  14. RS-Forest: A Rapid Density Estimator for Streaming Anomaly Detection

    PubMed Central

    Wu, Ke; Zhang, Kun; Fan, Wei; Edwards, Andrea; Yu, Philip S.

    2015-01-01

    Anomaly detection in streaming data is of high interest in numerous application domains. In this paper, we propose a novel one-class semi-supervised algorithm to detect anomalies in streaming data. Underlying the algorithm is a fast and accurate density estimator implemented by multiple fully randomized space trees (RS-Trees), named RS-Forest. The piecewise constant density estimate of each RS-tree is defined on the tree node into which an instance falls. Each incoming instance in a data stream is scored by the density estimates averaged over all trees in the forest. Two strategies, statistical attribute range estimation of high probability guarantee and dual node profiles for rapid model update, are seamlessly integrated into RS-Forest to systematically address the ever-evolving nature of data streams. We derive the theoretical upper bound for the proposed algorithm and analyze its asymptotic properties via bias-variance decomposition. Empirical comparisons to the state-of-the-art methods on multiple benchmark datasets demonstrate that the proposed method features high detection rate, fast response, and insensitivity to most of the parameter settings. Algorithm implementations and datasets are available upon request. PMID:25685112

  15. A comparison of selected parametric and imputation methods for estimating snag density and snag quality attributes

    USGS Publications Warehouse

    Eskelson, Bianca N.I.; Hagar, Joan; Temesgen, Hailemariam

    2012-01-01

    Snags (standing dead trees) are an essential structural component of forests. Because wildlife use of snags depends on size and decay stage, snag density estimation without any information about snag quality attributes is of little value for wildlife management decision makers. Little work has been done to develop models that allow multivariate estimation of snag density by snag quality class. Using climate, topography, Landsat TM data, stand age and forest type collected for 2356 forested Forest Inventory and Analysis plots in western Washington and western Oregon, we evaluated two multivariate techniques for their abilities to estimate density of snags by three decay classes. The density of live trees and snags in three decay classes (D1: recently dead, little decay; D2: decay, without top, some branches and bark missing; D3: extensive decay, missing bark and most branches) with diameter at breast height (DBH) ≥ 12.7 cm was estimated using a nonparametric random forest nearest neighbor imputation technique (RF) and a parametric two-stage model (QPORD), for which the number of trees per hectare was estimated with a Quasipoisson model in the first stage and the probability of belonging to a tree status class (live, D1, D2, D3) was estimated with an ordinal regression model in the second stage. The presence of large snags with DBH ≥ 50 cm was predicted using a logistic regression and RF imputation. Because of the more homogenous conditions on private forest lands, snag density by decay class was predicted with higher accuracies on private forest lands than on public lands, while presence of large snags was more accurately predicted on public lands, owing to the higher prevalence of large snags on public lands. RF outperformed the QPORD model in terms of percent accurate predictions, while QPORD provided smaller root mean square errors in predicting snag density by decay class. The logistic regression model achieved more accurate presence/absence classification of large snags than the RF imputation approach. Adjusting the decision threshold to account for unequal size for presence and absence classes is more straightforward for the logistic regression than for the RF imputation approach. Overall, model accuracies were poor in this study, which can be attributed to the poor predictive quality of the explanatory variables and the large range of forest types and geographic conditions observed in the data.

  16. A note on estimating a non-increasing density in the presence of selection bias

    Microsoft Academic Search

    Hammou El Barmi; Paul I. Nelson

    2002-01-01

    In this paper we construct the non-parametric maximum likelihood estimator (NPMLE) $\hat{f}_n$ of a non-increasing probability density function $f$ with distribution function $F$ on the basis of a sample from a weighted distribution $G$ with density given by $g(x) = w(x)f(x)/\mu(f,w)$, where $w(u) > 0$ for all $u$ and $\mu(f,w) = \int w(u)f(u)\,du$

  17. Technical Factors Influencing Cone Packing Density Estimates in Adaptive Optics Flood Illuminated Retinal Images

    PubMed Central

    Lombardo, Marco; Serrao, Sebastiano; Lombardo, Giuseppe

    2014-01-01

    Purpose To investigate the influence of various technical factors on the variation of cone packing density estimates in adaptive optics flood illuminated retinal images. Methods Adaptive optics images of the photoreceptor mosaic were obtained in fifteen healthy subjects. The cone density and Voronoi diagrams were assessed in sampling windows of 320×320 µm, 160×160 µm and 64×64 µm at 1.5 degree temporal and superior eccentricity from the preferred locus of fixation (PRL). The technical factors that have been analyzed included the sampling window size, the corrected retinal magnification factor (RMFcorr), the conversion from radial to linear distance from the PRL, the displacement between the PRL and foveal center and the manual checking of cone identification algorithm. Bland-Altman analysis was used to assess the agreement between cone density estimated within the different sampling window conditions. Results The cone density declined with decreasing sampling area and data between areas of different size showed low agreement. A high agreement was found between sampling areas of the same size when comparing density calculated with or without using individual RMFcorr. The agreement between cone density measured at radial and linear distances from the PRL and between data referred to the PRL or the foveal center was moderate. The percentage of Voronoi tiles with hexagonal packing arrangement was comparable between sampling areas of different size. The boundary effect, presence of any retinal vessels, and the manual selection of cones missed by the automated identification algorithm were identified as the factors influencing variation of cone packing arrangements in Voronoi diagrams. Conclusions The sampling window size is the main technical factor that influences variation of cone density. Clear identification of each cone in the image and the use of a large buffer zone are necessary to minimize factors influencing variation of Voronoi diagrams of the cone mosaic. PMID:25203681

  18. Pyroclastic density current volume estimation after the 2010 Merapi volcano eruption using X-band SAR

    NASA Astrophysics Data System (ADS)

    Bignami, Christian; Ruch, Joel; Chini, Marco; Neri, Marco; Buongiorno, Maria Fabrizia; Hidayati, Sri; Sayudi, Dewi Sri; Surono

    2013-07-01

    Pyroclastic density current deposits remobilized by water during periods of heavy rainfall trigger lahars (volcanic mudflows) that affect inhabited areas at considerable distance from volcanoes, even years after an eruption. Here we present an innovative approach to detect and estimate the thickness and volume of pyroclastic density current (PDC) deposits as well as erosional versus depositional environments. We use SAR interferometry to compare an airborne digital surface model (DSM) acquired in 2004 to a post-eruption 2010 DSM created using COSMO-SkyMed satellite data to estimate the volume of 2010 Merapi eruption PDC deposits along the Gendol river (Kali Gendol, KG). Results show PDC thicknesses of up to 75 m in canyons and a volume of about 40 × 10^6 m³, mainly along KG, and at distances of up to 16 km from the volcano summit. This volume estimate corresponds mainly to the 2010 pyroclastic deposits along the KG - material that is potentially available to produce lahars. Our volume estimate is approximately twice that estimated by field studies, a difference we consider acceptable given the uncertainties involved in both satellite- and field-based methods. Our technique can be used to rapidly evaluate volumes of PDC deposits at active volcanoes, in remote settings and where continuous activity may prevent field observations.

  19. Bayesian nonparametric regression and density estimation using integrated nested Laplace approximations.

    PubMed

    Wang, Xiao-Feng

    2013-06-25

    Integrated nested Laplace approximation (INLA) is a recently proposed approximate Bayesian approach to fit structured additive regression models with a latent Gaussian field. The INLA method, as an alternative to Markov chain Monte Carlo techniques, provides accurate approximations to posterior marginals and avoids time-consuming sampling. We show here that two classical nonparametric smoothing problems, nonparametric regression and density estimation, can be handled using INLA. Simulated examples and R functions are presented to illustrate the use of the methods. Some discussion of potential applications of INLA is given in the paper. PMID:24416633

  20. Estimation of D-region Electron Density using Tweeks Measurements at Nainital and Allahabad

    Microsoft Academic Search

    P. Pant; A. K. Maurya; Rajesh Singh; B. Veenadhari; A. K. Singh

    2010-01-01

    Lightning-generated radio atmospherics that propagate over long distances via multiple reflections between the boundaries of the Earth-ionosphere waveguide (EIWG) show sharp dispersion near the ~1.8 kHz cut-off frequency of the EIWG. These dispersed atmospherics at the lower frequency end are called `tweek' radio atmospherics. In order to estimate D-region electron densities at the ionospheric reflection heights we have utilized
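
    The cutoff frequency quoted in this record follows from simple waveguide physics: treating the Earth-ionosphere cavity as a parallel-plate waveguide of height h, the first-order mode cutoff is f_c = c/(2h), so an observed cutoff near 1.8 kHz implies a nighttime reflection height near 83 km. A back-of-the-envelope check of that relation (not the authors' full retrieval procedure):

    ```python
    # Back-of-the-envelope check of the tweek relation f_c = m*c/(2h) for mode m
    # in an idealized parallel-plate waveguide; a textbook approximation only,
    # not the authors' full electron-density retrieval.
    c = 299_792_458.0        # speed of light, m/s

    def reflection_height_km(fc_hz, mode=1):
        return mode * c / (2.0 * fc_hz) / 1000.0

    print(reflection_height_km(1.8e3))   # ~83 km, a typical nighttime D-region height
    ```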

  1. Use of spatial capture-recapture modeling and DNA data to estimate densities of elusive animals

    USGS Publications Warehouse

    Kery, Marc; Gardner, Beth; Stoeckle, Tabea; Weber, Darius; Royle, J. Andrew

    2011-01-01

    Assessment of abundance, survival, recruitment rates, and density (i.e., population assessment) is especially challenging for elusive species most in need of protection (e.g., rare carnivores). Individual identification methods, such as DNA sampling, provide ways of studying such species efficiently and noninvasively. Additionally, statistical methods that correct for undetected animals and account for locations where animals are captured are available to efficiently estimate density and other demographic parameters. We collected hair samples of European wildcat (Felis silvestris) from cheek-rub lure sticks, extracted DNA from the samples, and identified each animal's genotype. To estimate the density of wildcats, we used Bayesian inference in a spatial capture-recapture model. We used WinBUGS to fit a model that accounted for differences in detection probability among individuals and seasons and between two lure arrays. We detected 21 individual wildcats (including possible hybrids) 47 times. Wildcat density was estimated at 0.29/km² (SE 0.06), and 95% of the activity of wildcats was estimated to occur within 1.83 km from their home-range center. Lures located systematically were associated with a greater number of detections than lures placed in a cell on the basis of expert opinion. Detection probability of individual cats was greatest in late March. Our model is a generalized linear mixed model; hence, it can be easily extended, for instance, to incorporate trap- and individual-level covariates. We believe that the combined use of noninvasive sampling techniques and spatial capture-recapture models will improve population assessments, especially for rare and elusive animals.
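
    One figure in this abstract can be reproduced with a standard spatial capture-recapture identity: if activity around the home-range center is modeled as bivariate normal with scale σ, then 95% of activity falls within radius σ·sqrt(χ²₂(0.95)) ≈ 2.45σ. Working backwards from the reported 1.83 km radius (a consistency check, not the authors' code):

    ```python
    # Consistency check: under a bivariate-normal activity model, 95% of activity
    # lies within radius sigma * sqrt(chi2_quantile(0.95, df=2)) ~= 2.45 * sigma.
    # Inverting the reported 1.83 km radius gives the implied movement scale.
    import numpy as np
    from scipy.stats import chi2

    r95 = 1.83                                   # km, from the abstract
    sigma = r95 / np.sqrt(chi2.ppf(0.95, df=2))
    print(f"implied movement scale sigma ~ {sigma:.2f} km")   # ~0.75 km
    ```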

  2. Road-Based Surveys for Estimating Wild Turkey Density in the Texas Rolling Plains

    Microsoft Academic Search

    MATTHEW J. BUTLER; WARREN B. BALLARD; MARK C. WALLACE; STEPHEN J. DEMASO

    2007-01-01

    Line-transect-based distance sampling has been used to estimate density of several wild bird species including wild turkeys (Meleagris gallopavo). We used inflatable turkey decoys during autumn (Aug-Nov) and winter (Dec-Mar) 2003-2005 at 3 study sites in the Texas Rolling Plains, USA, to simulate Rio Grande wild turkey (M. g. intermedia) flocks. We evaluated detectability of flocks using logistic regression models.

  3. Use of spatial capture-recapture modeling and DNA data to estimate densities of elusive animals.

    PubMed

    Kéry, Marc; Gardner, Beth; Stoeckle, Tabea; Weber, Darius; Royle, J Andrew

    2011-04-01

    Assessment of abundance, survival, recruitment rates, and density (i.e., population assessment) is especially challenging for elusive species most in need of protection (e.g., rare carnivores). Individual identification methods, such as DNA sampling, provide ways of studying such species efficiently and noninvasively. Additionally, statistical methods that correct for undetected animals and account for locations where animals are captured are available to efficiently estimate density and other demographic parameters. We collected hair samples of European wildcat (Felis silvestris) from cheek-rub lure sticks, extracted DNA from the samples, and identified each animal's genotype. To estimate the density of wildcats, we used Bayesian inference in a spatial capture-recapture model. We used WinBUGS to fit a model that accounted for differences in detection probability among individuals and seasons and between two lure arrays. We detected 21 individual wildcats (including possible hybrids) 47 times. Wildcat density was estimated at 0.29/km² (SE 0.06), and 95% of the activity of wildcats was estimated to occur within 1.83 km from their home-range center. Lures located systematically were associated with a greater number of detections than lures placed in a cell on the basis of expert opinion. Detection probability of individual cats was greatest in late March. Our model is a generalized linear mixed model; hence, it can be easily extended, for instance, to incorporate trap- and individual-level covariates. We believe that the combined use of noninvasive sampling techniques and spatial capture-recapture models will improve population assessments, especially for rare and elusive animals. PMID:21166714

  4. Uncertainty quantification techniques for population density estimates derived from sparse open source data

    NASA Astrophysics Data System (ADS)

    Stewart, Robert; White, Devin; Urban, Marie; Morton, April; Webster, Clayton; Stoyanov, Miroslav; Bright, Eddie; Bhaduri, Budhendra L.

    2013-05-01

    The Population Density Tables (PDT) project at Oak Ridge National Laboratory (www.ornl.gov) is developing population density estimates for specific human activities under normal patterns of life, based largely on information available in open sources. Currently, activity-based density estimates are based on simple summary data statistics such as range and mean. Researchers are interested in improving activity estimation and uncertainty quantification by adopting a Bayesian framework that considers both data and sociocultural knowledge. Under a Bayesian approach, knowledge about population density may be encoded through the process of expert elicitation. Due to the scale of the PDT effort, which considers over 250 countries, spans 50 human activity categories, and includes numerous contributors, an elicitation tool is required that can be operationalized within an enterprise data collection and reporting system. Such a method would ideally require that the contributor have minimal statistical knowledge, require minimal input by a statistician or facilitator, consider human difficulties in expressing qualitative knowledge in a quantitative setting, and provide methods by which the contributor can appraise whether their understanding and associated uncertainty were well captured. This paper introduces an algorithm that transforms answers to simple, non-statistical questions into a bivariate Gaussian distribution as the prior for the Beta distribution. Based on geometric properties of the Beta distribution parameter feasibility space and the bivariate Gaussian distribution, an automated method for encoding is developed that responds to these challenging enterprise requirements. Though created within the context of population density, this approach may be applicable to a wide array of problem domains requiring informative priors for the Beta distribution.
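
    The paper's encoding algorithm is not reproduced here, but its end goal, an informative Beta prior derived from non-statistical answers, can be illustrated with simple moment matching: given an elicited best guess m and spread v for a proportion, a Beta(α, β) with that mean and variance follows in closed form. A hedged sketch with hypothetical inputs:

    ```python
    # Moment-matching illustration (not the paper's encoding algorithm): turn an
    # elicited best guess m and uncertainty v about a proportion into Beta(a, b).
    def beta_from_moments(m, v):
        assert 0 < m < 1 and 0 < v < m * (1 - m), "variance must be < m(1-m)"
        k = m * (1 - m) / v - 1.0     # common factor from the Beta variance formula
        return m * k, (1 - m) * k     # (alpha, beta)

    # e.g. "about 60% occupancy, give or take 10 points" -> mean 0.6, sd 0.1
    a, b = beta_from_moments(0.6, 0.1**2)
    print(a, b)                       # Beta(13.8, 9.2), an informative prior
    ```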

  5. Three-dimensional Wavelet-based Adaptive Mesh Refinement for Global Atmospheric Chemical Transport Modeling

    NASA Astrophysics Data System (ADS)

    Rastigejev, Y.; Semakin, A. N.

    2013-12-01

    Accurate numerical simulations of global-scale three-dimensional atmospheric chemical transport models (CTMs) are essential for studies of many important atmospheric chemistry problems such as the adverse effects of air pollutants on human health, ecosystems and the Earth's climate. These simulations usually require large CPU time due to numerical difficulties associated with a wide range of spatial and temporal scales, nonlinearity and the large number of reacting species. In our previous work we have shown that, in order to achieve an adequate convergence rate and accuracy, the mesh spacing in numerical simulations of global synoptic-scale pollution plume transport must be decreased to a few kilometers. This resolution is difficult to achieve for global CTMs on uniform or quasi-uniform grids. To address this difficulty we developed a three-dimensional Wavelet-based Adaptive Mesh Refinement (WAMR) algorithm. The method employs a highly non-uniform adaptive grid with fine resolution over the areas of interest without requiring small grid spacing throughout the entire domain. The method uses a multi-grid iterative solver that naturally takes advantage of the multilevel structure of the adaptive grid. In order to represent the multilevel adaptive grid efficiently, a dynamic data structure based on indirect memory addressing has been developed. The data structure allows rapid access to individual points, fast inter-grid operations and re-gridding. The WAMR method has been implemented on parallel computer architectures. The parallel algorithm is based on a run-time partitioning and load-balancing scheme for the adaptive grid. The partitioning scheme maintains locality to reduce communications between computing nodes, and was found to be cost-effective. Specifically, we obtained an order-of-magnitude increase in computational speed for numerical simulations performed on a twelve-core single-processor workstation. We have applied the WAMR method to the numerical simulation of several benchmark problems, including traveling three-dimensional reactive and inert transpacific pollution plumes. It was shown earlier that conventionally used global CTMs implemented on stationary grids are incapable of reproducing the dynamics of these plumes due to excessive numerical diffusion caused by limitations in grid resolution. It has been shown that the WAMR algorithm allows the use of grids one to two orders of magnitude finer than static-grid techniques in regions of fine spatial scales without significantly increasing CPU time. Therefore the developed WAMR method has significant advantages over conventional fixed-resolution computational techniques in terms of accuracy and/or computational cost, and makes it possible to accurately simulate important multi-scale chemical transport problems that cannot be simulated with the standard static-grid techniques currently utilized by the majority of global atmospheric chemistry models. This work is supported by a grant from the National Science Foundation under Award No. HRD-1036563.
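
    The adaptation criterion in wavelet-based AMR methods of this kind is conceptually simple: wavelet-transform the current solution and refine wherever detail coefficients exceed a threshold ε. A one-dimensional toy sketch of that flagging step, assuming PyWavelets (the actual WAMR solver is 3-D, parallel, and multilevel):

    ```python
    # Toy 1-D illustration of the wavelet refinement criterion used by WAMR-type
    # methods: refine wherever detail coefficients exceed a threshold epsilon.
    import numpy as np
    import pywt

    x = np.linspace(0.0, 1.0, 1024)
    u = np.tanh((x - 0.5) / 0.01)             # a sharp front, e.g. a plume edge

    coeffs = pywt.wavedec(u, "db4", level=4)  # [cA4, cD4, cD3, cD2, cD1]
    eps = 1e-3
    for lvl, detail in zip(range(4, 0, -1), coeffs[1:]):
        flagged = np.abs(detail) > eps        # coefficients needing refinement
        print(f"level {lvl}: refine {flagged.sum()} of {detail.size} coefficients")
    ```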

  6. Kernel density estimation applied to bond length, bond angle, and torsion angle distributions.

    PubMed

    McCabe, Patrick; Korb, Oliver; Cole, Jason

    2014-05-27

    We describe the method of kernel density estimation (KDE) and apply it to molecular structure data. KDE is a quite general nonparametric statistical method suitable even for multimodal data. The method generates smooth probability density function (PDF) representations and finds application in diverse fields such as signal processing and econometrics. KDE appears to have been under-utilized as a method in molecular geometry analysis, chemo-informatics, and molecular structure optimization. The resulting probability densities have advantages over histograms and, importantly, are also suitable for gradient-based optimization. To illustrate KDE, we describe its application to chemical bond length, bond valence angle, and torsion angle distributions and show the ability of the method to model arbitrary torsion angle distributions. PMID:24746022
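
    For a feel of what the authors describe, a Gaussian KDE over a torsion-angle sample takes a few lines. One common trick for the ±180° periodicity (my illustration, not necessarily the paper's treatment) is to augment the sample with shifted copies before fitting:

    ```python
    # Gaussian KDE over a hypothetical torsion-angle sample. Torsions are
    # periodic on (-180, 180], so the sample is augmented with +/-360 deg copies
    # before fitting -- one common workaround, not necessarily the paper's.
    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(0)
    torsions = np.concatenate([rng.normal(60, 10, 300),     # gauche+
                               rng.normal(180, 12, 400),    # anti (wraps at 180)
                               rng.normal(-60, 10, 300)])   # gauche-
    torsions = (torsions + 180) % 360 - 180                 # wrap into (-180, 180]

    augmented = np.concatenate([torsions - 360, torsions, torsions + 360])
    kde = gaussian_kde(augmented)

    grid = np.linspace(-180, 180, 361)
    density = 3 * kde(grid)            # factor 3 restores unit mass on one period
    print(grid[np.argmax(density)])    # modal torsion angle
    ```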

  7. Density and population estimate of gibbons (Hylobates albibarbis) in the Sabangau catchment, Central Kalimantan, Indonesia.

    PubMed

    Cheyne, Susan M; Thompson, Claire J H; Phillips, Abigail C; Hill, Robyn M C; Limin, Suwido H

    2008-01-01

    We demonstrate that although auditory sampling is a useful tool, this method alone will not provide a truly accurate indication of population size, density and distribution of gibbons in an area. If auditory sampling alone is employed, we show that data collection must take place over a sufficient period to account for variation in calling patterns across seasons. The population of Hylobates albibarbis in the Sabangau catchment, Central Kalimantan, Indonesia, was surveyed from July to December 2005 using methods established previously. In addition, auditory sampling was complemented by detailed behavioural data on six habituated groups within the study area. Here we compare results from this study to those of a 1-month study conducted in 2004. The total population of the Sabangau catchment is estimated to be in the tens of thousands, though numbers, distribution and density for the different forest subtypes vary considerably. We propose that future density surveys of gibbons must include data from all forest subtypes where gibbons are found and that extrapolating from one forest subtype is likely to yield inaccurate density and population estimates. We also propose that auditory census be carried out by using at least three listening posts (LP) in order to increase the area sampled and the chances of hearing groups. Our results suggest that the Sabangau catchment contains one of the largest remaining contiguous populations of Bornean agile gibbon. PMID:17899314

  8. Density estimation and adaptive bandwidths: A primer for public health practitioners

    PubMed Central

    2010-01-01

    Background Geographic information systems have advanced the ability to both visualize and analyze point data. While point-based maps can be aggregated to differing areal units and examined at varying resolutions, two problems arise: (1) the modifiable areal unit problem, and (2) any corresponding data must be available both at the scale of analysis and in the same geographic units. Kernel density estimation (KDE) produces a smooth, continuous surface where each location in the study area is assigned a density value irrespective of arbitrary administrative boundaries. We review KDE and introduce the technique of utilizing an adaptive bandwidth to address the underlying heterogeneous population distributions common in public health research. Results The density of occurrences should not be interpreted without knowledge of the underlying population distribution. When the effect of the background population is successfully accounted for, differences in point patterns in similar population areas are more discernible; it is generally these variations that are of most interest. A static-bandwidth KDE does not distinguish the spatial extents of interesting areas, nor does it expose patterns above and beyond those due to geographic variations in the density of the underlying population. An adaptive bandwidth method uses background population data to calculate a kernel of varying size for each individual case. Where the population density is high the bandwidth is small, limiting the influence of a single case to a small spatial extent. If the primary concern is distance, a static bandwidth is preferable because it may be better to define the "neighborhood" or exposure risk based on distance. If the primary concern is differences in exposure across the population, a bandwidth adapting to the population is preferred. Conclusions Kernel density estimation is a useful way to consider exposure at any point within a spatial frame, irrespective of administrative boundaries. Utilization of an adaptive bandwidth may be particularly useful in comparing two similarly populated areas when studying health disparities or other issues comparing populations in public health. PMID:20653969
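
    One concrete recipe for the adaptive bandwidth described here is Abramson's square-root law: fit a fixed-bandwidth pilot KDE, then rescale each point's bandwidth by the inverse square root of the pilot density. A 1-D numpy sketch of that mechanism (the public-health setting is 2-D, but the mechanics are identical; this is an illustration, not the article's implementation):

    ```python
    # Abramson-style adaptive-bandwidth KDE in 1-D: a fixed-bandwidth pilot
    # estimate sets local bandwidths h_i = h * sqrt(g / f_pilot(x_i)), so the
    # kernel widens where data (population) are sparse.
    import numpy as np

    def adaptive_kde(data, grid, h):
        # pilot fixed-bandwidth estimate evaluated at the data points
        pilot = np.exp(-0.5 * ((data[:, None] - data[None, :]) / h) ** 2)
        f_pilot = pilot.sum(axis=1) / (len(data) * h * np.sqrt(2 * np.pi))
        g = np.exp(np.mean(np.log(f_pilot)))          # geometric mean
        h_i = h * np.sqrt(g / f_pilot)                # per-point bandwidths
        # final estimate on the grid
        z = (grid[:, None] - data[None, :]) / h_i[None, :]
        k = np.exp(-0.5 * z ** 2) / (h_i[None, :] * np.sqrt(2 * np.pi))
        return k.mean(axis=1)

    rng = np.random.default_rng(1)
    data = np.concatenate([rng.normal(0, 0.5, 500), rng.normal(5, 2.0, 100)])
    grid = np.linspace(-3, 12, 300)
    f = adaptive_kde(data, grid, h=0.5)
    ```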

  9. Estimation of scattering phase function utilizing laser Doppler power density spectra.

    PubMed

    Wojtkiewicz, S; Liebert, A; Rix, H; Sawosz, P; Maniewski, R

    2013-02-21

    A new method for the estimation of the light scattering phase function of particles is presented. The method allows us to measure the light scattering phase function of particles of any shape in the full angular range (0°-180°) and is based on the analysis of laser Doppler (LD) power density spectra. The theoretical background of the method and the results of its validation using data from Monte Carlo simulations are presented. For the estimation of the scattering phase function, a phantom measurement setup is proposed, containing a LD measurement system and a simple model in which a liquid sample flows through a glass tube fixed in an optically turbid material. The scattering phase function estimation error was thoroughly investigated in relation to the light scattering anisotropy factor g. The error of g estimation is lower than 10% for anisotropy factors larger than 0.5 and decreases with increasing anisotropy factor (e.g. for g = 0.98, the error of estimation is 0.01%). Analysis of the influence of noise in the measured LD spectrum showed that the g estimation error is lower than 1% for signal-to-noise ratios higher than 50 dB. PMID:23340453

  10. Density

    NSDL National Science Digital Library

    Mrs. Petersen

    2013-10-28

    Students will explain the concept of density and be able to calculate it from given volumes and masses. Throughout today's assignment, you will need to calculate density. You can find a density calculator at this site. Make sure that you enter the correct units. For most of the problems, grams and cubic centimeters will lead you to the correct answer: Density Calculator What is Density? Visit the following website to answer questions ...
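
    The calculation at the heart of the lesson is the one-liner ρ = m/V; for instance (my example, not from the lesson), 27 g occupying 10 cm³ gives 2.7 g/cm³, the density of aluminum:

    ```python
    # The lesson's core formula: density = mass / volume.
    mass_g, volume_cm3 = 27.0, 10.0
    print(mass_g / volume_cm3, "g/cm^3")   # 2.7 g/cm^3 (aluminum)
    ```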

  11. Heterogeneous occupancy and density estimates of the pathogenic fungus Batrachochytrium dendrobatidis in waters of North America.

    PubMed

    Chestnut, Tara; Anderson, Chauncey; Popa, Radu; Blaustein, Andrew R; Voytek, Mary; Olson, Deanna H; Kirshtein, Julie

    2014-01-01

    Biodiversity losses are occurring worldwide due to a combination of stressors. For example, by one estimate, 40% of amphibian species are vulnerable to extinction, and disease is one threat to amphibian populations. The emerging infectious disease chytridiomycosis, caused by the aquatic fungus Batrachochytrium dendrobatidis (Bd), is a contributor to amphibian declines worldwide. Bd research has focused on the dynamics of the pathogen in its amphibian hosts, with little emphasis on investigating the dynamics of free-living Bd. Therefore, we investigated patterns of Bd occupancy and density in amphibian habitats using occupancy models, powerful tools for estimating site occupancy and detection probability. Occupancy models have been used to investigate diseases where the focus was on pathogen occurrence in the host. We applied occupancy models to investigate free-living Bd in North American surface waters to determine Bd seasonality, relationships between Bd site occupancy and habitat attributes, and probability of detection from water samples as a function of the number of samples, sample volume, and water quality. We also report on the temporal patterns of Bd density from a 4-year case study of a Bd-positive wetland. We provide evidence that Bd occurs in the environment year-round. Bd exhibited temporal and spatial heterogeneity in density, but did not exhibit seasonality in occupancy. Bd was detected in all months, typically at less than 100 zoospores L⁻¹. The highest density observed was ≈3 million zoospores L⁻¹. We detected Bd in 47% of sites sampled, but estimated that Bd occupied 61% of sites, highlighting the importance of accounting for imperfect detection. When Bd was present, there was a 95% chance of detecting it with four samples of 600 mL of water or five samples of 60 mL. Our findings provide important baseline information to advance the study of Bd disease ecology, and advance our understanding of amphibian exposure to free-living Bd in aquatic habitats over time. PMID:25222122
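
    The sampling guidance reported here follows from the usual independence assumption: with per-sample detection probability p at an occupied site, the chance of at least one detection in n samples is 1 − (1 − p)ⁿ. Inverting the abstract's "95% with four samples" (a check on my part, not the authors' calculation):

    ```python
    # With per-sample detection probability p, P(detect in n samples) = 1-(1-p)^n.
    # Inverting the abstract's "95% chance with four samples" implies the
    # per-sample p for a 600 mL sample at an occupied site.
    n, target = 4, 0.95
    p = 1 - (1 - target) ** (1 / n)
    print(f"implied per-sample detection probability ~ {p:.2f}")   # ~0.53
    ```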

  12. Heterogeneous Occupancy and Density Estimates of the Pathogenic Fungus Batrachochytrium dendrobatidis in Waters of North America

    PubMed Central

    Chestnut, Tara; Anderson, Chauncey; Popa, Radu; Blaustein, Andrew R.; Voytek, Mary; Olson, Deanna H.; Kirshtein, Julie

    2014-01-01

    Biodiversity losses are occurring worldwide due to a combination of stressors. For example, by one estimate, 40% of amphibian species are vulnerable to extinction, and disease is one threat to amphibian populations. The emerging infectious disease chytridiomycosis, caused by the aquatic fungus Batrachochytrium dendrobatidis (Bd), is a contributor to amphibian declines worldwide. Bd research has focused on the dynamics of the pathogen in its amphibian hosts, with little emphasis on investigating the dynamics of free-living Bd. Therefore, we investigated patterns of Bd occupancy and density in amphibian habitats using occupancy models, powerful tools for estimating site occupancy and detection probability. Occupancy models have been used to investigate diseases where the focus was on pathogen occurrence in the host. We applied occupancy models to investigate free-living Bd in North American surface waters to determine Bd seasonality, relationships between Bd site occupancy and habitat attributes, and probability of detection from water samples as a function of the number of samples, sample volume, and water quality. We also report on the temporal patterns of Bd density from a 4-year case study of a Bd-positive wetland. We provide evidence that Bd occurs in the environment year-round. Bd exhibited temporal and spatial heterogeneity in density, but did not exhibit seasonality in occupancy. Bd was detected in all months, typically at less than 100 zoospores L⁻¹. The highest density observed was ≈3 million zoospores L⁻¹. We detected Bd in 47% of sites sampled, but estimated that Bd occupied 61% of sites, highlighting the importance of accounting for imperfect detection. When Bd was present, there was a 95% chance of detecting it with four samples of 600 mL of water or five samples of 60 mL. Our findings provide important baseline information to advance the study of Bd disease ecology, and advance our understanding of amphibian exposure to free-living Bd in aquatic habitats over time. PMID:25222122

  13. Estimates of density, detection probability, and factors influencing detection of burrowing owls in the Mojave Desert

    USGS Publications Warehouse

    Crowe, D.E.; Longshore, K.M.

    2010-01-01

    We estimated relative abundance and density of Western Burrowing Owls (Athene cunicularia hypugaea) at two sites in the Mojave Desert (2003-04). We made modifications to previously established Burrowing Owl survey techniques for use in desert shrublands and evaluated several factors that might influence the detection of owls. We tested the effectiveness of the call-broadcast technique for surveying this species, the efficiency of this technique at early and late breeding stages, and the effectiveness of various numbers of vocalization intervals during broadcasting sessions. Only 1 (3%) of 31 initial (new) owl responses was detected during passive-listening sessions. We found that surveying early in the nesting season was more likely to produce new owl detections compared to surveying later in the nesting season. New owls detected during each of the three vocalization intervals (each consisting of 30 sec of vocalizations followed by 30 sec of silence) of our broadcasting session were similar (37%, 40%, and 23%; n = 30). We used a combination of detection trials (sighting probability) and the double-observer method to estimate the components of detection probability, i.e., availability and perception. Availability for all sites and years, as determined by detection trials, ranged from 46.1% to 58.2%. Relative abundance, measured as frequency of occurrence and defined as the proportion of surveys with at least one owl, ranged from 19.2% to 32.0% for both sites and years. Density at our eastern Mojave Desert site was estimated at 0.09 ± 0.01 (SE) owl territories/km² and 0.16 ± 0.02 (SE) owl territories/km² during 2003 and 2004, respectively. In our southern Mojave Desert site, density estimates were 0.09 ± 0.02 (SE) owl territories/km² and 0.08 ± 0.02 (SE) owl territories/km² during 2004 and 2005, respectively. © 2010 The Raptor Research Foundation, Inc.

  14. A wavelet-based spectral analysis of long-term time series of optical properties of aerosols obtained by lidar and radiometer measurements over an urban station in Western India

    NASA Astrophysics Data System (ADS)

    Pal, S.; Devara, P. C. S.

    2012-08-01

    Over 700 weekly-spaced vertical profiles of aerosol number density have been archived during a 14-year period (October 1986-September 2000) using a bi-static Argon ion lidar system at the Indian Institute of Tropical Meteorology, Pune (18°43′N, 73°51′E, 559 m above mean sea level), India. The monthly resolved time series of aerosol distributions within the atmospheric boundary layer as well as at different altitudes aloft have been subjected to wavelet-based spectral analysis to investigate the different characteristic periodicities present in the long-term dataset. The solar radiometric aerosol optical depth (AOD) measurements over the same place during 1998-2003 have also been analyzed with the wavelet technique. Wavelet spectra of both time series exhibited quasi-annual (around 12-14 months) and quasi-biennial (around 22-25 months) oscillations at a statistically significant level. An overview of the lidar and radiometric data sets, including the wavelet-based spectral analysis procedure, is also presented. A brief statistical analysis concerning both the annual and interannual variability of lidar- and radiometer-derived aerosol distributions has been performed to delineate the effect of the different dominant seasons and associated meteorological conditions prevailing over the experimental site in Western India. Additionally, the impact of urbanization on the long-term trends in the lidar measurements of aerosol loadings over the experimental site is brought out. This was achieved by using the lidar observations and a preliminary data set built for inferring the urban aspects of the city of Pune, which included the population and the number of industries and vehicles in the city.

  15. Estimating black bear population density and genetic diversity at Tensas River, Louisiana using microsatellite DNA markers

    USGS Publications Warehouse

    Boersen, M.R.; Clark, J.D.; King, T.L.

    2003-01-01

    The Recovery Plan for the federally threatened Louisiana black bear (Ursus americanus luteolus) mandates that remnant populations be estimated and monitored. In 1999 we obtained genetic material with barbed-wire hair traps to estimate bear population size and genetic diversity at the 329-km2 Tensas River Tract, Louisiana. We constructed and monitored 122 hair traps, which produced 1,939 hair samples. Of those, we randomly selected 116 subsamples for genetic analysis and used up to 12 microsatellite DNA markers to obtain multilocus genotypes for 58 individuals. We used Program CAPTURE to compute estimates of population size using multiple mark-recapture models. The area of study was almost entirely circumscribed by agricultural land, thus the population was geographically closed. Also, study-area boundaries were biologically discrete, enabling us to accurately estimate population density. Using model Chao Mh to account for possible effects of individual heterogeneity in capture probabilities, we estimated the population size to be 119 (SE=29.4) bears, or 0.36 bears/km2. We were forced to examine a substantial number of loci to differentiate between some individuals because of low genetic variation. Despite the probable introduction of genes from Minnesota bears in the 1960s, the isolated population at Tensas exhibited characteristics consistent with inbreeding and genetic drift. Consequently, the effective population size at Tensas may be as few as 32, which warrants continued monitoring or possibly genetic augmentation.
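
    The density figure follows directly from the abundance estimate and the closed study area: 119/329 ≈ 0.36 bears/km². For flavor, a heterogeneity-robust abundance estimator in the spirit of the Chao Mh model can be written from capture-frequency counts alone; the frequency counts below are hypothetical, since the abstract reports only the 58 genotyped individuals.

    ```python
    # Chao's lower-bound estimator (the moment estimator behind model Mh):
    # N_hat = S_obs + f1^2 / (2 * f2), from singleton/doubleton counts f1, f2.
    # The frequency counts here are hypothetical; the paper reports 58 individuals.
    def chao_mh(s_obs, f1, f2):
        return s_obs + f1 ** 2 / (2.0 * f2)

    n_hat = chao_mh(s_obs=58, f1=30, f2=12)       # hypothetical f1, f2
    print(f"abundance estimate ~ {n_hat:.0f} bears")
    print(f"density check: 119 bears / 329 km^2 = {119 / 329:.2f} bears/km^2")
    ```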

  16. A globally optimal bilinear programming approach to the design of approximate Hilbert pairs of orthonormal wavelet bases

    Microsoft Academic Search

    Jiang Wang; Jian Qiu Zhang

    2010-01-01

    It is understood that the Hilbert transform pairs of orthonormal wavelet bases can only be realized approximately by the scaling filters of conjugate quadrature filter (CQF) banks. In this paper, the approximate FIR realization of the Hilbert transform pairs is formulated as an optimization problem in the sense of the lp (p = 1, 2, or ∞) norm minimization on the approximate

  17. Wavelet based deconvolution method in ultrasonic tomography (Paper #1052, International Congress on Ultrasonics, Vienna, April 9-13, 2007, Session R05: Biomedical Ultrasound)

    E-print Network

    Paris-Sud XI, Université de

    This paper deals with the quantitative and qualitative ultrasonic

  18. 3-D B-spline Wavelet-Based Local Standard Deviation (BWLSD): Its Application to Edge Detection and Vascular Segmentation in Magnetic Resonance Angiography

    Microsoft Academic Search

    Zhenyu He; Albert C. S. Chung

    2010-01-01

    Extracting reliable image edge information is crucial for active contour models as well as vascular segmentation in magnetic resonance angiography (MRA). However, conventional edge detection techniques, such as gradient-based methods and wavelet-based methods, are incapable of returning reliable detection responses from low contrast edges in the images. In this paper, we propose a novel edge detection method

  19. Fourier and Wavelet Based Characterisation of the Ionospheric Response to the Solar Eclipse of August, the 11th, 1999, Measured Through 1-minute Vertical Ionospheric Sounding

    Microsoft Academic Search

    P. Sauli; P. Abry; J. Boska

    2004-01-01

    The aim of the present work is to study the ionospheric response induced by the solar eclipse of August, the 11th, 1999. We provide Fourier and wavelet based characterisations of the propagation of the acoustic-gravity waves induced by the solar eclipse. The analysed data consist of profiles of electron concentration. They are derived from 1-minute vertical incidence ionospheric sounding measurements,

  20. A wavelet-based single-view reconstruction approach for cone beam x-ray luminescence tomography imaging

    PubMed Central

    Liu, Xin; Wang, Hongkai; Xu, Mantao; Nie, Shengdong; Lu, Hongbing

    2014-01-01

    Single-view x-ray luminescence computed tomography (XLCT) imaging has a short data collection time that allows the three-dimensional (3-D) distribution of x-ray-excitable nanophosphors within a small animal to be resolved quickly and non-invasively in vivo. However, single-view reconstruction suffers from a severely ill-posed problem because data from only one angle are used in the reconstruction. To alleviate the ill-posedness, in this paper, we propose a wavelet-based reconstruction approach, which is achieved by applying a wavelet transformation to the acquired single-view measurements. To evaluate the performance of the proposed method, an in vivo experiment was performed based on a cone beam XLCT imaging system. The experimental results demonstrate that the proposed method can not only use the full set of measurements produced by the CCD, but also accelerate image reconstruction while preserving the spatial resolution of the reconstruction. Hence, it is suitable for dynamic XLCT imaging studies. PMID:25426315

  1. A comparison of spectral decorrelation techniques and performance evaluation metrics for a wavelet-based, multispectral data compression algorithm

    NASA Technical Reports Server (NTRS)

    Matic, Roy M.; Mosley, Judith I.

    1994-01-01

    Future space-based, remote sensing systems will have data transmission requirements that exceed available downlinks, necessitating the use of lossy compression techniques for multispectral data. In this paper, we describe several algorithms for lossy compression of multispectral data which combine spectral decorrelation techniques with an adaptive, wavelet-based, image compression algorithm to exploit both spectral and spatial correlation. We compare the performance of several different spectral decorrelation techniques including wavelet transformation in the spectral dimension. The performance of each technique is evaluated at compression ratios ranging from 4:1 to 16:1. Performance measures used are visual examination, conventional distortion measures, and multispectral classification results. We also introduce a family of distortion metrics that are designed to quantify and predict the effect of compression artifacts on multispectral classification of the reconstructed data.

  2. A Recursive Wavelet-based Strategy for Real-Time Cochlear Implant Speech Processing on PDA Platforms

    PubMed Central

    Gopalakrishna, Vanishree; Kehtarnavaz, Nasser; Loizou, Philipos C.

    2011-01-01

    This paper presents a wavelet-based speech coding strategy for cochlear implants. In addition, it describes the real-time implementation of this strategy on a PDA platform. Three wavelet packet decomposition tree structures are considered and their performance in terms of computational complexity, spectral leakage, fixed-point accuracy, and real-time processing are compared to other commonly used strategies in cochlear implants. A real-time mechanism is introduced for updating the wavelet coefficients recursively. It is shown that the proposed strategy achieves higher analysis rates than the existing strategies while being able to run in real-time on a PDA platform. In addition, it is shown that this strategy leads to a lower amount of spectral leakage. The PDA implementation is made interactive to allow users to easily manipulate the parameters involved and study their effects. PMID:20403778

  3. A wavelet-based evaluation of time-varying long memory of equity markets: A paradigm in crisis

    NASA Astrophysics Data System (ADS)

    Tan, Pei P.; Chin, Cheong W.; Galagedera, Don U. A.

    2014-09-01

    Using a wavelet-based method, this study investigates the dynamics of long memory in the returns and volatility of equity markets. In a sample of five developed and five emerging markets we find that the daily return series from January 1988 to June 2013 may be considered as a mix of weak long memory and mean-reverting processes. In the case of volatility in the returns, there is evidence of long memory, which is stronger in emerging markets than in developed markets. We find that although the long memory parameter may vary during crisis periods (the 1997 Asian financial crisis, the 2001 US recession and the 2008 subprime crisis), the direction of change may not be consistent across all equity markets. The degree of return predictability is likely to diminish during crisis periods. Robustness of the results is checked with a de-trended fluctuation analysis approach.
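
    A standard way to operationalize wavelet-based long memory is the Abry-Veitch log-scale regression: for fractional Gaussian noise, the variance of detail coefficients at dyadic scale j grows like 2^{j(2H-1)}, so the slope of log₂-variance against j estimates the Hurst exponent H. A hedged sketch assuming PyWavelets (the study's exact time-varying estimator may differ):

    ```python
    # Abry-Veitch-style wavelet estimate of the Hurst exponent: for fractional
    # Gaussian noise, log2 Var(d_j) is ~linear in scale j with slope 2H - 1.
    import numpy as np
    import pywt

    rng = np.random.default_rng(2)
    returns = rng.standard_normal(4096)            # placeholder return series (H = 0.5)

    coeffs = pywt.wavedec(returns, "db3", level=6)
    js = np.arange(1, 7)                           # fine (j=1) to coarse (j=6)
    log_var = [np.log2(np.var(d)) for d in reversed(coeffs[1:])]  # cD1..cD6

    slope = np.polyfit(js, log_var, 1)[0]
    H = (slope + 1) / 2
    print(f"estimated Hurst exponent ~ {H:.2f}")   # ~0.5 for white noise
    ```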

  4. Heart Rate Variability and Wavelet-based Studies on ECG Signals from Smokers and Non-smokers

    NASA Astrophysics Data System (ADS)

    Pal, K.; Goel, R.; Champaty, B.; Samantray, S.; Tibarewala, D. N.

    2013-12-01

    The current study deals with heart rate variability (HRV) and wavelet-based ECG signal analysis of smokers and non-smokers. The HRV results indicated dominance of sympathetic nervous system activity in smokers. The heart rate was found to be higher in smokers than in non-smokers (p < 0.05). The frequency domain analysis showed an increase in the LF and LF/HF components with a subsequent decrease in the HF component. The HRV features were analyzed for classification of smokers versus non-smokers. The results indicated that when the RMSSD, SD1 and RR-mean features were used concurrently, a classification efficiency of >90% was achieved. The wavelet decomposition of the ECG signal was done using the Daubechies (db6) wavelet family. No difference was observed between the smokers and non-smokers, which suggests that smoking does not affect the conduction pathway of the heart.
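
    The time-domain features named here are one-liners over the RR-interval series; note also the exact identity SD1 = RMSSD/√2 for the Poincaré-plot SD1, which explains why the two features behave similarly. A sketch with synthetic intervals (not the study's data):

    ```python
    # Time-domain HRV features used in the study, over a synthetic RR series.
    # Note the identity SD1 = RMSSD / sqrt(2) for the Poincare-plot SD1.
    import numpy as np

    rng = np.random.default_rng(3)
    rr = 0.8 + 0.05 * rng.standard_normal(300)   # RR intervals in seconds

    rr_mean = rr.mean()
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))
    sd1 = rmssd / np.sqrt(2.0)
    print(f"mean RR {rr_mean:.3f} s, RMSSD {rmssd*1000:.1f} ms, SD1 {sd1*1000:.1f} ms")
    ```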

  5. Wavelet series method for reconstruction and spectral estimation of laser Doppler velocimetry data

    NASA Astrophysics Data System (ADS)

    Jaunet, Vincent; Collin, Erwan; Bonnet, Jean-Paul

    2012-01-01

    Many techniques have been developed to obtain a spectral density function from randomly sampled data, such as the computation of a slotted autocovariance function. Nevertheless, one may be interested in obtaining more information from laser Doppler signals than spectral content, using more or less complex computations that can be easily conducted with an evenly sampled signal. That is the reason why reconstructing an evenly sampled signal from the original LDV data is of interest. The ability of a wavelet-based technique to reconstruct the signal with respect to the statistical properties of the original one is explored, and the spectral content of the reconstructed signal is given and compared with the spectral density function estimated through the classical slotting technique. Furthermore, LDV signals taken from a screeching jet are reconstructed in order to perform spectral and bispectral analysis, showing the ability of the technique to recover accurate information with only a few LDV samples.

  6. Estimating the neutrally buoyant energy density of a Rankine-cycle/fuel-cell underwater propulsion system

    NASA Astrophysics Data System (ADS)

    Waters, Daniel F.; Cadou, Christopher P.

    2014-02-01

    A unique requirement of underwater vehicles' power/energy systems is that they remain neutrally buoyant over the course of a mission. Previous work published in the Journal of Power Sources reported gross, as opposed to neutrally buoyant, energy densities of an integrated solid oxide fuel cell/Rankine-cycle power system based on the exothermic reaction of aluminum with seawater. This paper corrects this shortcoming by presenting a model for estimating system mass and using it to update the key findings of the original paper in the context of the neutral buoyancy requirement. It also presents an expanded sensitivity analysis to illustrate the influence of various design and modeling assumptions. While energy density is very sensitive to turbine efficiency (sensitivity coefficient in excess of 0.60), it is relatively insensitive to all other major design parameters (sensitivity coefficients < 0.15) like compressor efficiency, inlet water temperature, and scaling methodology. The neutral buoyancy requirement introduces a significant (≈15%) energy density penalty, but overall the system still appears to offer five- to eight-fold improvements in energy density (i.e., vehicle range/endurance) over present battery-based technologies.

  7. Estimation and monitoring of product aesthetics: application to manufacturing of …

    Microsoft Academic Search

    J. Jay Liu; John F. MacGregor

    2006-01-01

    A new machine vision approach for quantitatively estimating and monitoring the appearance and aesthetics of manufactured products is presented. The approach is composed of three steps: (1) wavelet-based textural feature extraction from product images, (2) estimation of measures of the product appearance through subspace projection of the textural features, and (3) monitoring of the appearance in the latent variable subspace

  8. Estimations of electric field effects on the oxygen reduction reaction based on the density functional theory.

    PubMed

    Karlberg, G S; Rossmeisl, J; Nørskov, J K

    2007-10-01

    By varying the external electric field in density functional theory (DFT) calculations, we have estimated the impact of the local electric field in the electric double layer on the oxygen reduction reaction (ORR). Potentially, including the local electric field could change adsorption energies and barriers substantially, thereby affecting the reaction mechanism predicted for ORR on different metals. To estimate the effect of local electric fields on ORR we combine the DFT results at various external electric field strengths with a previously developed model of electrochemical reactions which fully accounts for the effect of the electrode potential. We find that the local electric field only slightly affects the output of the model. Hence, the general picture obtained without inclusion of the electric field still persists. However, for accurate predictions at oxygen reduction potentials close to the volcano top, local electric field effects may be of importance. PMID:17878993

  9. Constrained Kalman Filtering Via Density Function Truncation for Turbofan Engine Health Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Dan; Simon, Donald L.

    2006-01-01

    Kalman filters are often used to estimate the state variables of a dynamic system. However, in the application of Kalman filters some known signal information is often either ignored or dealt with heuristically. For instance, state variable constraints (which may be based on physical considerations) are often neglected because they do not fit easily into the structure of the Kalman filter. This paper develops an analytic method of incorporating state variable inequality constraints in the Kalman filter. The resultant filter truncates the PDF (probability density function) of the Kalman filter estimate at the known constraints and then computes the constrained filter estimate as the mean of the truncated PDF. The incorporation of state variable constraints increases the computational effort of the filter but significantly improves its estimation accuracy. The improvement is demonstrated via simulation results obtained from a turbofan engine model. The turbofan engine model contains 3 state variables, 11 measurements, and 10 component health parameters. It is also shown that the truncated Kalman filter may be a more accurate way of incorporating inequality constraints than other constrained filters (e.g., the projection approach to constrained filtering).
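
    The truncation idea is easy to demonstrate in one dimension: given an unconstrained estimate N(μ, σ²) and a physical bound x ≥ 0 (a hypothetical constraint, chosen for illustration), the constrained estimate is the mean of the truncated normal, which scipy computes directly:

    ```python
    # One-dimensional illustration of PDF truncation: the constrained estimate is
    # the mean of the filter's Gaussian restricted to the feasible region.
    # The bound x >= 0 here is a hypothetical health-parameter constraint.
    from scipy.stats import truncnorm

    mu, sigma = -0.3, 1.0                     # unconstrained Kalman estimate and std
    a, b = (0.0 - mu) / sigma, float("inf")   # standardized truncation limits
    constrained = truncnorm.mean(a, b, loc=mu, scale=sigma)
    print(f"unconstrained {mu:.2f} -> constrained {constrained:.2f}")
    ```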

  10. Exploring neural directed interactions with transfer entropy based on an adaptive kernel density estimator

    PubMed Central

    Zuo, Kai; Bellanger, Jean-Jacques; Yang, Chunfeng; Shu, Huazhong; Le Bouquin Jeannes, Regine

    2013-01-01

    This paper aims at estimating causal relationships between signals to detect flow propagation in autoregressive and physiological models. The main challenge of the ongoing work is to discover whether neural activity in a given structure of the brain influences activity in another area during epileptic seizures. This question refers to the concept of effective connectivity in neuroscience, i.e. to the identification of information flows and oriented propagation graphs. Past efforts to determine effective connectivity are rooted in Wiener's definition of causality, adapted into a practical form by Granger using autoregressive models. A number of studies argue against such a linear approach when nonlinear dynamics are suspected in the relationship between signals. Consequently, nonlinear nonparametric approaches, such as transfer entropy (TE), have been introduced to overcome the limitations of linear methods and promoted in many studies dealing with electrophysiological signals. Even though many TE estimators have been developed, further improvement can be expected. In this paper, we investigate a new strategy by introducing an adaptive kernel density estimator to improve TE estimation. PMID:24110694

  11. Maximum crosstalk estimation and modeling of electromagnetic radiation from PCB/high-density connector interfaces

    NASA Astrophysics Data System (ADS)

    Halligan, Matthew Scott

    This dissertation explores two topics pertinent to electromagnetic compatibility research: maximum crosstalk estimation in weakly coupled transmission lines and modeling of electromagnetic radiation resulting from printed circuit board/high-density connector interfaces. Despite an ample supply of literature devoted to the study of crosstalk, little research has been performed to formulate maximum crosstalk estimates when signal lines are electrically long. Paper one illustrates a new maximum crosstalk estimate that is based on a mathematically rigorous, integral formulation, where the transmission lines can be lossy and in an inhomogeneous media. Paper two provides a thorough comparison and analysis of the newly derived maximum crosstalk estimates with an estimate derived by another author. In paper two the newly derived estimates in paper one are shown to be more robust because they can estimate the maximum crosstalk with fewer and less restrictive assumptions. One current industry challenge is the lack of robust printed circuit board connector models and methods to quantify radiation from these connectors. To address this challenge, a method is presented in paper three to quantify electromagnetic radiation using network parameters and power conservation, assuming the only losses at a printed circuit board/connector interface are due to radiation. Some of the radiating structures are identified and the radiation physics explored for the studied connector in paper three. Paper four expands upon the radiation modeling concepts in paper three by extending radiation characterization when material losses and multiple signals may be present at the printed circuit board/connector interface. The resulting radiated power characterization method enables robust deterministic and statistical analyses of the radiated power from printed circuit board connectors. Paper five shows the development of a statistical radiated power estimate based on the radiation characterization method presented in paper four. Maximum radiated power estimates are shown using the Markov and Chebyshev inequalities to predict a radiated power limit. A few maximum radiated power limits are proposed that depend on the amount of known information about the radiation characteristics of a printed circuit board connector.
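
    For a two-port interface with negligible material loss, the power-conservation bookkeeping of papers three and four reduces to subtracting reflected and transmitted power from incident power: P_rad = P_inc(1 − |S11|² − |S21|²). A toy numeric check under that lossless assumption (the S-parameter values are made up):

    ```python
    # Power-conservation estimate of radiated power for a 2-port interface,
    # assuming (as in the lossless case) that all unaccounted power is radiated:
    # P_rad = P_inc * (1 - |S11|^2 - |S21|^2). Values below are hypothetical.
    import numpy as np

    s11 = 0.10 * np.exp(1j * 0.4)     # hypothetical reflection coefficient
    s21 = 0.95 * np.exp(-1j * 1.2)    # hypothetical transmission coefficient
    p_inc_mw = 1.0                    # 1 mW incident

    frac_rad = 1.0 - abs(s11) ** 2 - abs(s21) ** 2
    print(f"radiated fraction {frac_rad:.3f} -> {p_inc_mw * frac_rad * 1000:.1f} uW")
    ```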

  12. Similarities between Line Fishing and Baited Stereo-Video Estimations of Length-Frequency: Novel Application of Kernel Density Estimates

    PubMed Central

    Langlois, Timothy J.; Fitzpatrick, Benjamin R.; Fairclough, David V.; Wakefield, Corey B.; Hesp, S. Alex; McLean, Dianne L.; Harvey, Euan S.; Meeuwig, Jessica J.

    2012-01-01

    Age structure data are essential for single-species stock assessments, but length-frequency data can provide complementary information. In south-western Australia, the majority of these data for exploited species are derived from line-caught fish. However, baited remote underwater stereo-video systems (stereo-BRUVS) surveys have also been found to provide accurate length measurements. Given that line fishing tends to be biased towards larger fish, we predicted that stereo-BRUVS would yield length-frequency data with a smaller mean length, skewed towards smaller fish, than those collected by fisheries-independent line fishing. To assess the biases and selectivity of stereo-BRUVS and line fishing we compared the length-frequencies obtained for three commonly fished species, using a novel application of the Kernel Density Estimate (KDE) method and the established Kolmogorov–Smirnov (KS) test. The shape of the length-frequency distribution obtained for the labrid Choerodon rubescens by stereo-BRUVS and line fishing did not differ significantly, but, as predicted, the mean length estimated from stereo-BRUVS was 17% smaller. Contrary to our predictions, the mean length and shape of the length-frequency distribution for the epinephelid Epinephelides armatus did not differ significantly between line fishing and stereo-BRUVS. For the sparid Pagrus auratus, the length-frequency distribution derived from the stereo-BRUVS method was bi-modal, while that from line fishing was uni-modal. However, the location of the first modal length class for P. auratus observed by each sampling method was similar. No differences were found between the results of the KS and KDE tests; however, KDE provided a data-driven method for approximating length-frequency data to a probability function and a useful way of describing and testing any differences between length-frequency samples. This study found that the overall size selectivities of line fishing and stereo-BRUVS were unexpectedly similar. PMID:23209547
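
    The comparison machinery is standard: two length samples, a two-sample KS test, and KDEs of each sample's shape. A minimal sketch with synthetic lengths (the KDE permutation test the authors apply is more elaborate than this):

    ```python
    # Two-sample comparison of length-frequency data, in the spirit of the study:
    # Kolmogorov-Smirnov test plus kernel density estimates of each sample.
    # Lengths are synthetic, not the paper's data.
    import numpy as np
    from scipy.stats import ks_2samp, gaussian_kde

    rng = np.random.default_rng(4)
    line_caught = rng.normal(450, 60, 200)     # mm, hypothetical line-fishing sample
    bruvs = rng.normal(420, 70, 260)           # mm, hypothetical stereo-BRUVS sample

    stat, pval = ks_2samp(line_caught, bruvs)
    print(f"KS statistic {stat:.3f}, p = {pval:.4f}")

    grid = np.linspace(200, 700, 256)
    f_line, f_bruvs = gaussian_kde(line_caught)(grid), gaussian_kde(bruvs)(grid)
    print(f"modal lengths: {grid[f_line.argmax()]:.0f} vs {grid[f_bruvs.argmax()]:.0f} mm")
    ```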

  13. Density

    NSDL National Science Digital Library

    Day, Martha Marie

    This web page introduces the concepts of density and buoyancy. The discovery in ancient Greece by Archimedes is described. The densities of various materials are given and temperature effects introduced. Links are provided to news and other resources related to mass density. This is part of the Vision Learning collection of short online modules covering topics in a broad range of science and math topics.

  14. TEMPERATURE AND DENSITY ESTIMATES OF EXTREME-ULTRAVIOLET FLARE RIBBONS DERIVED FROM TRACE DIFFRACTION PATTERNS

    SciTech Connect

    Krucker, Saem; Raftery, Claire L.; Hudson, Hugh S., E-mail: krucker@ssl.berkeley.edu [Space Sciences Laboratory, University of California, Berkeley, CA 94720-7450 (United States)

    2011-06-10

    We report on Transition Region And Coronal Explorer 171 Å observations of the GOES X20 class flare on 2001 April 2, which show EUV flare ribbons with intense diffraction patterns. Between the 11th and 14th orders, the diffraction patterns of the compact flare ribbon are dispersed into two sources. The two sources are identified as emission from the Fe IX line at 171.1 Å and the combined emission from Fe X lines at 174.5, 175.3, and 177.2 Å. The prominent emission of the Fe IX line indicates that the EUV-emitting ribbon has a strong temperature component near the lower end of the 171 Å temperature response (≈0.6-1.5 MK). Fitting the observation with an isothermal model, the derived temperature is around 0.65 MK. However, the low sensitivity of the 171 Å filter to high-temperature plasma does not provide estimates of the emission measure for temperatures above ≈1.5 MK. Using the derived temperature of 0.65 MK, the observed 171 Å flux gives a density of the EUV ribbon of 3 × 10¹¹ cm⁻³. This density is much lower than the density of the hard X-ray producing region (≈10¹³ to 10¹⁴ cm⁻³), suggesting that the EUV sources, though closely related spatially, lie at higher altitudes.

  15. Delaunay Tessellation Field Estimator analysis of the PSCz local Universe: density field and cosmic flow

    NASA Astrophysics Data System (ADS)

    Romano-Díaz, Emilio; van de Weygaert, Rien

    2007-11-01

    We apply the Delaunay Tessellation Field Estimator (DTFE) to reconstruct and analyse the matter distribution and cosmic velocity flows in the local Universe on the basis of the PSCz galaxy survey. The prime objective of this study is the production of optimal-resolution 3D maps of the volume-weighted velocity and density fields throughout the nearby universe, the basis for a detailed study of the structure and dynamics of the cosmic web at each level probed by the underlying galaxy sample. Fully volume-covering 3D maps of the density and (volume-weighted) velocity fields in the cosmic vicinity, out to a distance of 150 h⁻¹ Mpc, are presented. Based on the Voronoi and Delaunay tessellations defined by the spatial galaxy sample, DTFE involves the estimation of density values on the basis of the volume of the related Delaunay tetrahedra and the subsequent use of the Delaunay tessellation as a natural multidimensional (linear) interpolation grid for the corresponding density and velocity fields throughout the sample volume. The linearized model of the spatial galaxy distribution and the corresponding peculiar velocities of the PSCz galaxy sample, produced by Branchini et al., forms the input sample for the DTFE study. The DTFE maps reproduce the high-density supercluster regions in optimal detail, both their internal structure as well as their elongated or flattened shape. The corresponding velocity flows trace the bulk and shear flows marking the region extending from the Pisces-Perseus supercluster, via the Local Supercluster, towards the Hydra-Centaurus and the Shapley concentration. The most outstanding and unique feature of the DTFE maps is the sharply defined radial outflow regions in and around underdense voids, marking the dynamical importance of voids in the local Universe. The maximum expansion rate of voids defines a sharp cut-off in the DTFE velocity divergence probability distribution function. We found that on the basis of this cut-off DTFE manages to consistently reproduce the value of Ωm ≈ 0.35 underlying the linearized velocity data set.
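
    The DTFE point estimate itself is compact: at each galaxy, density is (D + 1)/V(W_i), with W_i the contiguous Voronoi cell, i.e. the union of Delaunay simplices sharing that vertex. A 2-D sketch with scipy (the paper works in 3-D with survey selection corrections):

    ```python
    # 2-D sketch of the DTFE point-density estimate: at each point, density is
    # (D + 1) / (total volume of Delaunay simplices touching that point); here
    # D = 2, so triangles and areas. The 3-D version uses tetrahedra the same way.
    import numpy as np
    from scipy.spatial import Delaunay

    rng = np.random.default_rng(5)
    pts = rng.random((500, 2))
    tri = Delaunay(pts)

    # area of each triangle from the 2-D cross product of its edge vectors
    a, b, c = (pts[tri.simplices[:, k]] for k in range(3))
    ab, ac = b - a, c - a
    area = 0.5 * np.abs(ab[:, 0] * ac[:, 1] - ab[:, 1] * ac[:, 0])

    # accumulate, per point, the area of all triangles touching it
    touch_area = np.zeros(len(pts))
    for k in range(3):
        np.add.at(touch_area, tri.simplices[:, k], area)

    density = (2 + 1) / touch_area     # DTFE estimate at each sample point
    print(density.mean())              # of order the true density (500 per unit square)
    ```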

  16. Bayes and empirical Bayes estimators of abundance and density from spatial capture-recapture data.

    PubMed

    Dorazio, Robert M

    2013-01-01

    In capture-recapture and mark-resight surveys, movements of individuals both within and between sampling periods can alter the susceptibility of individuals to detection over the region of sampling. In these circumstances spatially explicit capture-recapture (SECR) models, which incorporate the observed locations of individuals, allow population density and abundance to be estimated while accounting for differences in detectability of individuals. In this paper I propose two Bayesian SECR models, one for the analysis of recaptures observed in trapping arrays and another for the analysis of recaptures observed in area searches. In formulating these models I used distinct submodels to specify the distribution of individual home-range centers and the observable recaptures associated with these individuals. This separation of ecological and observational processes allowed me to derive a formal connection between Bayes and empirical Bayes estimators of population abundance that has not been established previously. I showed that this connection applies to every Poisson point-process model of SECR data and provides theoretical support for a previously proposed estimator of abundance based on recaptures in trapping arrays. To illustrate results of both classical and Bayesian methods of analysis, I compared Bayes and empirical Bayes estimates of abundance and density using recaptures from simulated and real populations of animals. Real populations included two iconic datasets: recaptures of tigers detected in camera-trap surveys and recaptures of lizards detected in area-search surveys. In the datasets I analyzed, classical and Bayesian methods provided similar - and often identical - inferences, which is not surprising given the sample sizes and the noninformative priors used in the analyses. PMID:24386325

  17. Bayes and Empirical Bayes Estimators of Abundance and Density from Spatial Capture-Recapture Data

    PubMed Central

    Dorazio, Robert M.

    2013-01-01

    In capture-recapture and mark-resight surveys, movements of individuals both within and between sampling periods can alter the susceptibility of individuals to detection over the region of sampling. In these circumstances spatially explicit capture-recapture (SECR) models, which incorporate the observed locations of individuals, allow population density and abundance to be estimated while accounting for differences in detectability of individuals. In this paper I propose two Bayesian SECR models, one for the analysis of recaptures observed in trapping arrays and another for the analysis of recaptures observed in area searches. In formulating these models I used distinct submodels to specify the distribution of individual home-range centers and the observable recaptures associated with these individuals. This separation of ecological and observational processes allowed me to derive a formal connection between Bayes and empirical Bayes estimators of population abundance that has not been established previously. I showed that this connection applies to every Poisson point-process model of SECR data and provides theoretical support for a previously proposed estimator of abundance based on recaptures in trapping arrays. To illustrate results of both classical and Bayesian methods of analysis, I compared Bayes and empirical Bayes estimates of abundance and density using recaptures from simulated and real populations of animals. Real populations included two iconic datasets: recaptures of tigers detected in camera-trap surveys and recaptures of lizards detected in area-search surveys. In the datasets I analyzed, classical and Bayesian methods provided similar – and often identical – inferences, which is not surprising given the sample sizes and the noninformative priors used in the analyses. PMID:24386325

  18. Integration of Self-Organizing Map (SOM) and Kernel Density Estimation (KDE) for network intrusion detection

    NASA Astrophysics Data System (ADS)

    Cao, Yuan; He, Haibo; Man, Hong; Shen, Xiaoping

    2009-09-01

    This paper proposes an approach to integrate the self-organizing map (SOM) and kernel density estimation (KDE) techniques for the anomaly-based network intrusion detection (ABNID) system to monitor the network traffic and capture potential abnormal behaviors. With the continuous development of network technology, information security has become a major concern for the cyber system research. In the modern net-centric and tactical warfare networks, the situation is more critical to provide real-time protection for the availability, confidentiality, and integrity of the networked information. To this end, in this work we propose to explore the learning capabilities of SOM, and integrate it with KDE for the network intrusion detection. KDE is used to estimate the distributions of the observed random variables that describe the network system and determine whether the network traffic is normal or abnormal. Meanwhile, the learning and clustering capabilities of SOM are employed to obtain well-defined data clusters to reduce the computational cost of the KDE. The principle of learning in SOM is to self-organize the network of neurons to seek similar properties for certain input patterns. Therefore, SOM can form an approximation of the distribution of input space in a compact fashion, reduce the number of terms in a kernel density estimator, and thus improve the efficiency for the intrusion detection. We test the proposed algorithm over the real-world data sets obtained from the Integrated Network Based Ohio University's Network Detective Service (INBOUNDS) system to show the effectiveness and efficiency of this method.
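
    The core computation described above (placing kernels only on SOM codebook vectors, weighted by how many traffic samples each unit attracts) can be sketched in a few lines. The network size, bandwidth, threshold, and data below are illustrative assumptions, not the INBOUNDS configuration.

      import numpy as np

      def train_som(X, n_units=16, epochs=20, lr0=0.5, sigma0=3.0, seed=0):
          """Train a 1-D self-organizing map; returns the codebook vectors."""
          rng = np.random.default_rng(seed)
          W = X[rng.choice(len(X), n_units, replace=False)].astype(float)
          grid = np.arange(n_units)
          T, t = epochs * len(X), 0
          for _ in range(epochs):
              for x in X[rng.permutation(len(X))]:
                  bmu = np.argmin(((W - x) ** 2).sum(axis=1))   # best-matching unit
                  lr = lr0 * (1 - t / T)                        # decaying learning rate
                  sig = sigma0 * (1 - t / T) + 1e-3             # shrinking neighborhood
                  h = np.exp(-((grid - bmu) ** 2) / (2 * sig ** 2))
                  W += lr * h[:, None] * (x - W)                # pull neighbors toward x
                  t += 1
          return W

      def som_kde_logdensity(x, W, counts, bandwidth=0.5):
          """Gaussian KDE with one kernel per SOM unit, weighted by hit counts
          (unnormalized log-density; fine for thresholding)."""
          d2 = ((W - x) ** 2).sum(axis=1)
          k = np.exp(-d2 / (2 * bandwidth ** 2))
          w = counts / counts.sum()
          return np.log((w * k).sum() + 1e-300)

      # Toy usage: flag a feature vector whose density is unusually low.
      X = np.random.default_rng(1).normal(size=(2000, 4))       # stand-in traffic features
      W = train_som(X)
      counts = np.bincount([np.argmin(((W - x) ** 2).sum(axis=1)) for x in X],
                           minlength=len(W)).astype(float)
      threshold = -12.0                                         # assumed anomaly cutoff
      is_anomaly = som_kde_logdensity(np.r_[5., 5., 5., 5.], W, counts) < threshold

    The point of the clustering step is visible in som_kde_logdensity: the kernel sum runs over 16 codebook vectors rather than all 2000 observations.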

  19. Bayes and empirical Bayes estimators of abundance and density from spatial capture-recapture data

    USGS Publications Warehouse

    Dorazio, Robert M.

    2013-01-01

    In capture-recapture and mark-resight surveys, movements of individuals both within and between sampling periods can alter the susceptibility of individuals to detection over the region of sampling. In these circumstances spatially explicit capture-recapture (SECR) models, which incorporate the observed locations of individuals, allow population density and abundance to be estimated while accounting for differences in detectability of individuals. In this paper I propose two Bayesian SECR models, one for the analysis of recaptures observed in trapping arrays and another for the analysis of recaptures observed in area searches. In formulating these models I used distinct submodels to specify the distribution of individual home-range centers and the observable recaptures associated with these individuals. This separation of ecological and observational processes allowed me to derive a formal connection between Bayes and empirical Bayes estimators of population abundance that has not been established previously. I showed that this connection applies to every Poisson point-process model of SECR data and provides theoretical support for a previously proposed estimator of abundance based on recaptures in trapping arrays. To illustrate results of both classical and Bayesian methods of analysis, I compared Bayes and empirical Bayes estimates of abundance and density using recaptures from simulated and real populations of animals. Real populations included two iconic datasets: recaptures of tigers detected in camera-trap surveys and recaptures of lizards detected in area-search surveys. In the datasets I analyzed, classical and Bayesian methods provided similar – and often identical – inferences, which is not surprising given the sample sizes and the noninformative priors used in the analyses.
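
    The empirical Bayes logic can be illustrated numerically: under a Poisson point-process model, abundance over the state space is the number of detected individuals plus the expected number of undetected home-range centers. The half-normal detection function, trap grid, and parameter values below are illustrative assumptions, not the paper's fitted model.

      import numpy as np

      # Assumed plug-in estimates (illustrative values only).
      n_observed = 11          # individuals detected at least once
      lam = 0.07               # intensity of home-range centers (per unit area)
      g0, sigma = 0.1, 1.5     # half-normal detection: per-occasion, per-trap
      n_occasions = 15

      # 5 x 5 trap array and a discretized state space S around it.
      traps = np.array([[x, y] for x in range(5) for y in range(5)], float) * 2.0
      xs = np.linspace(-4, 12, 80)
      cell = (xs[1] - xs[0]) ** 2
      S = np.array([[x, y] for x in xs for y in xs])

      # p.(s): probability that a center at s is detected at least once.
      d2 = ((S[:, None, :] - traps[None, :, :]) ** 2).sum(-1)
      p_trap = g0 * np.exp(-d2 / (2 * sigma ** 2))
      p_dot = 1 - np.prod(1 - p_trap, axis=1) ** n_occasions

      # Expected number of centers in S that were never detected.
      mu0 = lam * ((1 - p_dot) * cell).sum()
      N_hat = n_observed + mu0   # empirical Bayes posterior-mean abundance over S
      print(f"Estimated abundance over S: {N_hat:.1f}")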

  20. Comparison of breast percent density estimation from raw versus processed digital mammograms

    NASA Astrophysics Data System (ADS)

    Li, Diane; Gavenonis, Sara; Conant, Emily; Kontos, Despina

    2011-03-01

    We compared breast percent density (PD%) measures obtained from raw and post-processed digital mammographic (DM) images. Bilateral raw and post-processed medio-lateral oblique (MLO) images from 81 screening studies were retrospectively analyzed. Image acquisition was performed with a GE Healthcare DS full-field DM system. Image post-processing was performed using the PremiumViewTM algorithm (GE Healthcare). Area-based breast PD% was estimated by a radiologist using a semi-automated image thresholding technique (Cumulus, Univ. Toronto). Comparison of breast PD% between raw and post-processed DM images was performed using the Pearson correlation (r), linear regression, and Student's t-test. Intra-reader variability was assessed with a repeat read on the same data-set. Our results show that breast PD% measurements from raw and post-processed DM images have a high correlation (r=0.98, R2=0.95, p<0.001). Paired t-test comparison of breast PD% between the raw and the post-processed images showed a statistically significant difference equal to 1.2% (p = 0.006). Our results suggest that the relatively small magnitude of the absolute difference in PD% between raw and post-processed DM images is unlikely to be clinically significant in breast cancer risk stratification. Therefore, it may be feasible to use post-processed DM images for breast PD% estimation in clinical settings. Since most breast imaging clinics routinely use and store only the post-processed DM images, breast PD% estimation from post-processed data may accelerate the integration of breast density in breast cancer risk assessment models used in clinical practice.
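
    The statistical comparison is straightforward to reproduce. A sketch with SciPy, substituting simulated PD% arrays for the 81 studies:

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      pd_raw = rng.uniform(5, 60, size=81)                  # stand-in raw-image PD%
      pd_proc = pd_raw + rng.normal(1.2, 2.0, size=81)      # stand-in processed PD%

      r, p_corr = stats.pearsonr(pd_raw, pd_proc)           # correlation (R^2 = r**2)
      slope, intercept = np.polyfit(pd_raw, pd_proc, 1)     # linear regression
      t, p_paired = stats.ttest_rel(pd_raw, pd_proc)        # paired Student's t-test

      print(f"r={r:.2f}, R2={r*r:.2f}, fit: y={slope:.2f}x+{intercept:.2f}, "
            f"paired t-test p={p_paired:.3g}")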

  1. Simulation of Electron Cloud Density Distributions in RHIC Dipoles at Injection and Transition and Estimates for Scrubbing Times

    SciTech Connect

    He,P.; Blaskiewicz, M.; Fischer, W.

    2009-01-02

    In this report we summarize electron-cloud simulations for the RHIC dipole regions at injection and transition to estimate if scrubbing over practical time scales at injection would reduce the electron cloud density at transition to significantly lower values. The lower electron cloud density at transition will allow for an increase in the ion intensity.

  2. 3. Reductions in crown density were estimated in 5% classes by reference either to a standard set of

    E-print Network

    3. Reductions in crown density were estimated in 5% classes by reference either to a standard set … in the geographical interpretation of results. THE 1997 RESULTS 5. The crown density results, using both methods … in crown condition that have taken place since 1987 by recording the proportion of trees in which

  3. Robust estimation of the self-similarity parameter in network traffic using wavelet transform

    Microsoft Academic Search

    Haipeng Shen; Zhengyuan Zhu; Thomas C. M. Lee

    This article studies the problem of estimating the self-similarity parameter of network traffic traces. A robust wavelet-based procedure is proposed for this estimation task, deriving estimates that are less sensitive to some commonly encountered non-stationary traffic conditions, such as sudden level shifts and breaks. Two main ingredients of the proposed procedure are: (i) the application of a robust

  4. Density Estimation and Bump-Hunting by the Penalized Likelihood Method Exemplified by Scattering and Meteorite Data

    Microsoft Academic Search

    I. J. Good; R. A. Gaskins

    1980-01-01

    The (maximum) penalized-likelihood method of probability density estimation and bump-hunting is improved and exemplified by applications to scattering and chondrite data. We show how the hyperparameter in the method can be satisfactorily estimated by using statistics of goodness of fit. A Fourier expansion is found to be usually more expeditious than a Hermite expansion but a compromise is useful. The

  5. Estimating forest canopy density of tropical mixed deciduous vegetation using Landsat data: a comparison of three classification approaches

    Microsoft Academic Search

    Myat Su Mon; Nobuya Mizoue; Naing Zaw Htun; Tsuyoshi Kajisa; Shigejiro Yoshida

    2011-01-01

    Although a number of image classification approaches are available to estimate forest canopy density (FCD) using satellite data, assessment of their relative performances with tropical mixed deciduous vegetation is lacking. This study compared three image classification approaches – maximum likelihood classification (MLC), multiple linear regression (MLR) and FCD Mapper – in estimating the FCD of mixed deciduous forest in Myanmar.

  6. Estimating forest canopy density of tropical mixed deciduous vegetation using Landsat data: a comparison of three classification approaches

    Microsoft Academic Search

    Myat Su Mon; Nobuya Mizoue; Naing Zaw Htun; Tsuyoshi Kajisa; Shigejiro Yoshida

    2012-01-01

    Although a number of image classification approaches are available to estimate forest canopy density (FCD) using satellite data, assessment of their relative performances with tropical mixed deciduous vegetation is lacking. This study compared three image classification approaches – maximum likelihood classification (MLC), multiple linear regression (MLR) and FCD Mapper – in estimating the FCD of mixed deciduous forest in Myanmar.

  7. Local Convergence of Empirical Measures in the Random Censorship Situation with Application to Density and Rate Estimators

    Microsoft Academic Search

    Helmut Schafer

    1986-01-01

    In this paper, we study the local deviations of the empirical measure defined by the Kaplan-Meier (1958) estimator for the survival function. The results are applied to derive best rates of convergence for kernel estimators for the density and hazard rate function in the random censorship model.

  8. Estimation of material fluxes in an estuarine cross section: A critical analysis of spatial measurement density and errors

    Microsoft Academic Search

    Björn Kjerfve; L. Harold Stevenson; Jeffrey A. Proehl; Thomas H. Chrzanowski; Wiley M. Kitchens

    1981-01-01

    Estuarine budget studies often suffer from uncertainties of net flux estimates in view of large temporal and spatial variabilities. Optimum spatial measurement density and material flux errors for a reasonably well mixed estuary were estimated by sampling 10 stations from surface to bottom simultaneously every hour for two tidal cycles in a 320-m-wide cross section in North Inlet, South Carolina.

  9. The influences of census technique on estimating indices of macrofaunal population density in the temperate rocky subtidal zone

    Microsoft Academic Search

    MDJ Sayer; C Poonian

    2007-01-01

    Several studies have attempted to compare subtidal animal population estimates obtained in a variety of ways using SCUBA diving and have reported considerable variation between the estimates obtained. This study investigated scale-, tidal-, equipment- and observer-induced variation individually, through analysis of animal population density indices obtained using a number of techniques based on SCUBA diver visual survey. The

  10. Density

    NSDL National Science Digital Library

    Targeting a middle and high school population, this web page has an introduction to the concept of density. It is an appendix of a larger site called MathMol (Mathematics and Molecules), designed as an introduction to molecular modeling.

  11. Estimation of effective scatterer size and number density in near-infrared tomography

    NASA Astrophysics Data System (ADS)

    Wang, Xin

    2007-05-01

    Light scattering from tissue originates from the fluctuations in intra-cellular and extra-cellular components, so it is possible that macroscopic scattering spectroscopy could be used to quantify sub-microscopic structures. Both electron microscopy (EM) and optical phase contrast microscopy were used to study the origin of scattering from tissue. EM studies indicate that lipid-bound particle sizes appear to be distributed as a monotonic exponential function, with sub-micron structures dominating the distribution. Given assumptions about the index of refraction change, the shape of the scattering spectrum in the near infrared as measured through bulk tissue is consistent with what would be predicted by Mie theory with these particle size histograms. The relative scattering intensity of breast tissue sections (including 10 normal & 23 abnormal) was studied by phase contrast microscopy. Results show that stroma has higher scattering than epithelial tissue, and fat has the lowest values; tumor epithelium has lower scattering than the normal epithelium; stroma associated with tumor has lower scattering than the normal stroma. Mie theory estimation of scattering spectra was used to estimate effective particle sizes, and this was applied retrospectively to normal whole-breast spectra accumulated in ongoing clinical exams. The effective sizes ranged between 20 and 1400 nm, which are consistent with subcellular organelles and collagen matrix fibrils discussed previously. This estimation method was also applied to images from cancer regions, with results indicating that the effective scatterer sizes of the region of interest (ROI) are close to those of the background for both the cancer patients and the benign patients; for the effective number density, there is a large difference between the ROI and the background for the cancer patients, whereas for the benign patients the values of the ROI are relatively close to those of the background. Ongoing MRI-guided NIR studies indicated that the fibroglandular tissue had smaller effective scatterer size and larger effective number density than the adipose tissue. The studies in this thesis provide an interpretive approach to estimate average morphological scatter parameters of bulk tissue, through interpretation of diffuse scattering as coming from effective Mie scatterers.

  12. Wavelet-based neural network prediction of plasma etch profile nonuniformity

    Microsoft Academic Search

    B. Kim; S. Kim; K. Kim

    2003-01-01

    Profiles of plasma etching have conventionally been characterized by approximating the slope with an angle or anisotropy. This is critically limited in that detailed variations on the profile surface are inevitably neglected. In current high density plasma etching, this becomes more serious since unexpected microfeatures such as bowing or microtrenching are frequently formed along the profile surface.

  13. Extraordinarily low density of hepatitis C virus estimated by sucrose density gradient centrifugation and the polymerase chain reaction

    Microsoft Academic Search

    Hideaki Miyamoto; Hiroaki Okamoto; Koei Sato; Takeshi Tanaka; Shunji Mishiro

    1992-01-01

    The genomic RNA of hepatitis C virus (HCV) in the plasma of volunteer blood donors was detected by using the polymerase chain reaction in a fraction of density 1.08 g/ml from sucrose density gradient equilibrium centrifugation. When the fraction was treated with the detergent NP40 and recentrifuged in sucrose, the HCV RNA banded at 1.25 g/ml. Assuming that NP40 removed a

  14. Statistical estimation of femur micro-architecture using optimal shape and density predictors.

    PubMed

    Lekadir, Karim; Hazrati-Marangalou, Javad; Hoogendoorn, Corné; Taylor, Zeike; van Rietbergen, Bert; Frangi, Alejandro F

    2015-02-26

    The personalization of trabecular micro-architecture has been recently shown to be important in patient-specific biomechanical models of the femur. However, high-resolution in vivo imaging of bone micro-architecture using existing modalities is still infeasible in practice due to the associated acquisition times, costs, and X-ray radiation exposure. In this study, we describe a statistical approach for the prediction of the femur micro-architecture based on the more easily extracted subject-specific bone shape and mineral density information. To this end, a training sample of ex vivo micro-CT images is used to learn the existing statistical relationships within the low and high resolution image data. More specifically, optimal bone shape and mineral density features are selected based on their predictive power and used within a partial least square regression model to estimate the unknown trabecular micro-architecture within the anatomical models of new subjects. The experimental results demonstrate the accuracy of the proposed approach, with average errors of 0.07 for both the degree of anisotropy and tensor norms. PMID:25624314
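
    The regression step (predicting micro-architecture descriptors from shape and mineral-density features with partial least squares) might be sketched as follows. The array shapes and number of components are assumptions, and the paper's feature-selection stage is omitted.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      X = rng.normal(size=(60, 40))                        # shape + density predictors (assumed)
      Y = X @ rng.normal(size=(40, 5)) + 0.1 * rng.normal(size=(60, 5))  # micro-architecture targets

      X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, random_state=0)
      pls = PLSRegression(n_components=8).fit(X_tr, Y_tr)  # partial least squares regression
      Y_hat = pls.predict(X_te)
      print(f"mean absolute prediction error: {np.abs(Y_hat - Y_te).mean():.3f}")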

  15. Volcanic explosion clouds - Density, temperature, and particle content estimates from cloud motion

    NASA Technical Reports Server (NTRS)

    Wilson, L.; Self, S.

    1980-01-01

    Photographic records of 10 vulcanian eruption clouds produced during the 1978 eruption of Fuego Volcano in Guatemala have been analyzed to determine cloud velocity and acceleration at successive stages of expansion. Cloud motion is controlled by air drag (dominant during early, high-speed motion) and buoyancy (dominant during late motion when the cloud is convecting slowly). Cloud densities in the range 0.6 to 1.2 times that of the surrounding atmosphere were obtained by fitting equations of motion for two common cloud shapes (spheres and vertical cylinders) to the observed motions. Analysis of the heat budget of a cloud permits an estimate of cloud temperature and particle weight fraction to be made from the density. Model results suggest that clouds generally reached temperatures within 10 K of that of the surrounding air within 10 seconds of formation and that dense particle weight fractions were less than 2% by this time. The maximum sizes of dense particles supported by motion in the convecting clouds range from 140 to 1700 microns.

  16. Image inpainting using wavelet-based inter- and intra-scale dependency

    Microsoft Academic Search

    Dongwook Cho; Tien D. Bui

    2008-01-01

    Image inpainting or completion is a technique to restore a damaged image. Recently various approaches have been proposed. Wavelet transform has been used for various image analysis problems due to its nice multi-resolution properties and decoupling characteristics. We propose to utilize the advantages of wavelet transforms for image inpainting. Unlike other inpainting algorithms, we can expect better global structure estimation

  17. Wavelet-based Analysis of Wavelike Structures in the Ionospheric F-Region Electron Concentration

    Microsoft Academic Search

    P. Sauli; P. Abry; J. Boska

    2002-01-01

    The present work provides a contribution to the study of short-term variabilities (from 15 minutes to 4 hours) observed in the F region of the ionosphere and attributed to acoustic-gravity waves (AGWs). To this end, electron densities are measured at the Pruhonice observatory (49.9N, 14.5E) by vertical ionospheric sounding with repetition times of 5 minutes and 1 minute. From data collected during several campaigns

  18. Estimating basin thickness using a high-density passive-source geophone array

    NASA Astrophysics Data System (ADS)

    O'Rourke, C. T.; Sheehan, A. F.; Erslev, E. A.; Miller, K. C.

    2014-09-01

    In 2010 an array of 834 single-component geophones was deployed across the Bighorn Mountain Range in northern Wyoming as part of the Bighorn Arch Seismic Experiment (BASE). The goal of this deployment was to test the capabilities of these instruments as recorders of passive-source observations in addition to active-source observations for which they are typically used. The results are quite promising, having recorded 47 regional and teleseismic earthquakes over a two-week deployment. These events ranged from magnitude 4.1 to 7.0 (mb) and occurred at distances up to 10°. Because these instruments were deployed at ca. 1000 m spacing we were able to resolve the geometries of two major basins from the residuals of several well-recorded teleseisms. The residuals of these arrivals, converted to basinal thickness, show a distinct westward thickening in the Bighorn Basin that agrees with industry-derived basement depth information. Our estimates of thickness in the Powder River Basin do not match industry estimates in certain areas, likely due to localized high-velocity features that are not included in our models. Thus, with a few cautions, it is clear that high-density single-component passive arrays can provide valuable constraints on basinal geometries, and could be especially useful where basinal geometry is poorly known.
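
    The residual-to-thickness conversion rests on a simple delay-time relation: a near-vertical teleseismic arrival crossing a low-velocity basin of thickness h accrues an extra delay dt = h(1/v_sed - 1/v_base), which inverts to h = dt * v_sed * v_base / (v_base - v_sed). A sketch with assumed velocities:

      # Convert a teleseismic travel-time residual (s) to basin thickness (km),
      # assuming vertical incidence and uniform layer velocities (illustrative values).
      def basin_thickness(dt_s, v_sed=3.0, v_base=6.0):
          return dt_s * v_sed * v_base / (v_base - v_sed)

      print(basin_thickness(0.5))   # 0.5 s residual -> 3.0 km of sediment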

  19. Estimation of power spectral density from laser Doppler data via linear interpolation and deconvolution

    NASA Astrophysics Data System (ADS)

    Moreau, S.; Plantier, G.; Valière, J.-C.; Bailliet, H.; Simon, L.

    2011-01-01

    Spectral estimation of irregularly sampled velocity data obtained from Laser Doppler Anemometry measurements is considered in this paper. A new method is proposed based on linear interpolation followed by a deconvolution procedure. In this method, the analytic expression of the autocorrelation function of the interpolated data is expressed as a linear function of the autocorrelation function of the data to be estimated. For the analysis of both simulated and experimental data, the results of the proposed method are compared with those of the reference methods in LDA: refinement of autocorrelation function of sample-and-hold interpolated signal method given by Nobach et al. (Exp Fluids 24:499-509, 1998), refinement of power spectral density of sample-and-hold interpolated signal method given by Simon and Fitzpatrick (Exp Fluids 37:272-280, 2004) and fuzzy slotting technique with local normalization and weighting algorithm given by Nobach (Exp Fluids 32:337-345, 2002). Based on these results, it is concluded that the performance of the proposed method is better than that of the other methods, especially in terms of bias and variance.
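
    The flavor of the approach can be sketched crudely: linearly interpolate the irregular samples onto a regular grid, estimate the PSD, then compensate for the low-pass effect of the interpolation. The sinc-squared correction below is only a stand-in for the paper's autocorrelation-based deconvolution, and the sampling rate and signal are assumptions.

      import numpy as np
      from scipy.signal import welch

      rng = np.random.default_rng(0)
      t = np.sort(rng.uniform(0, 100, 5000))          # irregular LDA arrival times (assumed)
      u = np.sin(2 * np.pi * 1.0 * t) + rng.normal(0, 0.5, t.size)

      fs = 50.0                                       # regular resampling rate (assumed)
      tr = np.arange(t[0], t[-1], 1 / fs)
      ur = np.interp(tr, t, u)                        # linear interpolation

      f, pxx = welch(ur, fs=fs, nperseg=1024)
      dt_mean = np.diff(t).mean()
      H = np.sinc(f * dt_mean) ** 2                   # amplitude response of the triangular kernel
      pxx_corr = pxx / np.maximum(H ** 2, 1e-3)       # crude deconvolution, guarded at high f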

  20. Methods for Estimating Environmental Effects and Constraints on NexGen: High Density Case Study

    NASA Technical Reports Server (NTRS)

    Augustine, S.; Ermatinger, C.; Graham, M.; Thompson, T.

    2010-01-01

    This document provides a summary of the current methods developed by Metron Aviation for the estimate of environmental effects and constraints on the Next Generation Air Transportation System (NextGen). This body of work incorporates many of the key elements necessary to achieve such an estimate. Each section contains the background and motivation for the technical elements of the work, a description of the methods used, and possible next steps. The current methods described in this document were selected in an attempt to provide a good balance between accuracy and fairly rapid turn around times to best advance Joint Planning and Development Office (JPDO) System Modeling and Analysis Division (SMAD) objectives while also supporting the needs of the JPDO Environmental Working Group (EWG). In particular this document describes methods applied to support the High Density (HD) Case Study performed during the spring of 2008. A reference day (in 2006) is modeled to describe current system capabilities while the future demand is applied to multiple alternatives to analyze system performance. The major variables in the alternatives are operational/procedural capabilities for airport, terminal, and en route airspace along with projected improvements to airframe, engine and navigational equipment.

  1. Accuracy of estimation of genomic breeding values in pigs using low-density genotypes and imputation.

    PubMed

    Badke, Yvonne M; Bates, Ronald O; Ernst, Catherine W; Fix, Justin; Steibel, Juan P

    2014-04-01

    Genomic selection has the potential to increase genetic progress. Genotype imputation of high-density single-nucleotide polymorphism (SNP) genotypes can improve the cost efficiency of genomic breeding value (GEBV) prediction for pig breeding. Consequently, the objectives of this work were to: (1) estimate accuracy of genomic evaluation and GEBV for three traits in a Yorkshire population and (2) quantify the loss of accuracy of genomic evaluation and GEBV when genotypes were imputed under two scenarios: a high-cost, high-accuracy scenario in which only selection candidates were imputed from a low-density platform and a low-cost, low-accuracy scenario in which all animals were imputed using a small reference panel of haplotypes. Phenotypes and genotypes obtained with the PorcineSNP60 BeadChip were available for 983 Yorkshire boars. Genotypes of selection candidates were masked and imputed using tagSNP in the GeneSeek Genomic Profiler (10K). Imputation was performed with BEAGLE using 128 or 1800 haplotypes as reference panels. GEBV were obtained through an animal-centric ridge regression model using de-regressed breeding values as response variables. Accuracy of genomic evaluation was estimated as the correlation between estimated breeding values and GEBV in a 10-fold cross validation design. Accuracy of genomic evaluation using observed genotypes was high for all traits (0.65-0.68). Using genotypes imputed from a large reference panel (accuracy: R(2) = 0.95) for genomic evaluation did not significantly decrease accuracy, whereas a scenario with genotypes imputed from a small reference panel (R(2) = 0.88) did show a significant decrease in accuracy. Genomic evaluation based on imputed genotypes in selection candidates can be implemented at a fraction of the cost of a genomic evaluation using observed genotypes and still yield virtually the same accuracy. On the other hand, using a very small reference panel of haplotypes to impute training animals and candidates for selection results in lower accuracy of genomic evaluation. PMID:24531728
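
    The GEBV step described here, ridge regression of de-regressed breeding values on SNP genotypes with accuracy taken as a cross-validated correlation, can be sketched as follows; the marker count, ridge parameter, and simulated data are assumptions.

      import numpy as np

      rng = np.random.default_rng(0)
      n, m = 983, 2000
      Z = rng.integers(0, 3, size=(n, m)).astype(float)    # SNP genotypes coded 0/1/2
      Z -= Z.mean(axis=0)                                  # center marker covariates
      beta = rng.normal(0, 0.05, m)
      y = Z @ beta + rng.normal(0, 1, n)                   # de-regressed EBVs (simulated)

      lam = 1000.0                                         # ridge parameter (assumed)
      test = rng.choice(n, n // 10, replace=False)         # one fold of a 10-fold CV
      train = np.setdiff1d(np.arange(n), test)
      A = Z[train].T @ Z[train] + lam * np.eye(m)
      g_hat = np.linalg.solve(A, Z[train].T @ y[train])    # estimated marker effects
      gebv = Z[test] @ g_hat
      print(f"fold accuracy: {np.corrcoef(gebv, y[test])[0, 1]:.2f}")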

  2. Estimating autozygosity from high-throughput information: effects of SNP density and genotyping errors

    PubMed Central

    2013-01-01

    Background Runs of homozygosity are long, uninterrupted stretches of homozygous genotypes that enable reliable estimation of levels of inbreeding (i.e., autozygosity) based on high-throughput, chip-based single nucleotide polymorphism (SNP) genotypes. While the theoretical definition of runs of homozygosity is straightforward, their empirical identification depends on the type of SNP chip used to obtain the data and on a number of factors, including the number of heterozygous calls allowed to account for genotyping errors. We analyzed how SNP chip density and genotyping errors affect estimates of autozygosity based on runs of homozygosity in three cattle populations, using genotype data from an SNP chip with 777 972 SNPs and a 50 k chip. Results Data from the 50 k chip led to overestimation of the number of runs of homozygosity that are shorter than 4 Mb, since the analysis could not identify heterozygous SNPs that were present on the denser chip. Conversely, data from the denser chip led to underestimation of the number of runs of homozygosity that were longer than 8 Mb, unless the presence of a small number of heterozygous SNP genotypes was allowed within a run of homozygosity. Conclusions We have shown that SNP chip density and genotyping errors introduce patterns of bias in the estimation of autozygosity based on runs of homozygosity. SNP chips with 50 000 to 60 000 markers are frequently available for livestock species and their information leads to a conservative prediction of autozygosity from runs of homozygosity longer than 4 Mb. Not allowing heterozygous SNP genotypes to be present in a homozygosity run, as has been advocated for human populations, is not adequate for livestock populations because they have much higher levels of autozygosity and therefore longer runs of homozygosity. When allowing a small number of heterozygous calls, current software does not differentiate between situations where these calls are adjacent and therefore indicative of an actual break of the run versus those where they are scattered across the length of the homozygous segment. Simple graphical tests that are used in this paper are a current, yet tedious solution. PMID:24168655
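
    Detecting runs of homozygosity with an allowance for heterozygous calls is a small run-detection exercise. The sketch below reports maximal homozygous stretches containing at most a fixed number of het calls; the thresholds are illustrative, and it deliberately ignores the adjacency distinction the authors raise.

      def find_roh(het, min_len=50, max_het=1):
          """Return (start, end) of maximal stretches with <= max_het het calls.

          het: sequence with 1 for a heterozygous SNP call, 0 for homozygous.
          """
          runs, left, het_idx = [], 0, []
          for right, g in enumerate(het):
              if g:
                  het_idx.append(right)
                  if len(het_idx) > max_het:
                      if right - left >= min_len:      # window just before this het
                          runs.append((left, right - 1))
                      left = het_idx.pop(0) + 1        # drop oldest het from window
          if len(het) - left >= min_len:
              runs.append((left, len(het) - 1))
          return runs

      het = [0] * 120 + [1] + [0] * 80 + [1, 1] + [0] * 30
      print(find_roh(het, min_len=100, max_het=1))     # -> [(0, 200)]: spans one het call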

  3. On L^p-Resolvent Estimates and the Density of Eigenvalues for Compact Riemannian Manifolds

    NASA Astrophysics Data System (ADS)

    Bourgain, Jean; Shao, Peng; Sogge, Christopher D.; Yao, Xiaohua

    2015-02-01

    We address an interesting question raised by Dos Santos Ferreira, Kenig and Salo (Forum Math, 2014) about regions of the complex plane for which there can be uniform resolvent estimates for (Δ_g + ζ)^(-1), with ζ in the region, where Δ_g is the Laplace-Beltrami operator with metric g on a given compact boundaryless Riemannian manifold of dimension n ≥ 3. This is related to earlier work of Kenig, Ruiz and the third author (Duke Math J 55:329-347, 1987) for the Euclidean Laplacian, in which case the region is the entire complex plane minus any disc centered at the origin. Presently, we show that for the round metric on the sphere, S^n, the resolvent estimates in (Dos Santos Ferreira et al. in Forum Math, 2014), involving a much smaller region, are essentially optimal. We do this by establishing sharp bounds based on the distance from ζ to the spectrum of -Δ_g. In the other direction, we also show that the bounds in (Dos Santos Ferreira et al. in Forum Math, 2014) can be sharpened logarithmically for manifolds with nonpositive curvature, and by powers in the case of the torus, T^n = R^n/Z^n, with the flat metric. The latter improves earlier bounds of Shen (Int Math Res Not 1:1-31, 2001). The work of (Dos Santos Ferreira et al. in Forum Math, 2014) and (Shen in Int Math Res Not 1:1-31, 2001) was based on Hadamard parametrices for (Δ_g + ζ)^(-1). Ours is based on the related Hadamard parametrices for cos(t√(-Δ_g)), and it follows ideas in (Sogge in Ann Math 126:439-447, 1987) of proving L^p-multiplier estimates using small-time wave equation parametrices and the spectral projection estimates from (Sogge in J Funct Anal 77:123-138, 1988). This approach allows us to adapt arguments in Bérard (Math Z 155:249-276, 1977) and Hlawka (Monatsh Math 54:1-36, 1950) to obtain the aforementioned improvements over (Dos Santos Ferreira et al. in Forum Math, 2014) and (Shen in Int Math Res Not 1:1-31, 2001). Further improvements for the torus are obtained using recent techniques of the first author (Bourgain in Israel J Math 193(1):441-458, 2013) and his work with Guth (Bourgain and Guth in Geom Funct Anal 21:1239-1295, 2011) based on the multilinear estimates of Bennett, Carbery and Tao (Math Z 2:261-302, 2006). Our approach also allows us to give a natural necessary condition for favorable resolvent estimates that is based on a measurement of the density of the spectrum of √(-Δ_g), and, moreover, a necessary and sufficient condition based on natural improved spectral projection estimates for shrinking intervals, as opposed to those in (Sogge in J Funct Anal 77:123-138, 1988) for unit-length intervals. We show that the resolvent estimates are sensitive to clustering within the spectrum, which is not surprising given Sommerfeld's original conjecture (Sommerfeld in Physikal Zeitschr 11:1057-1066, 1910) about these operators.

  4. Non-RAM-based architectural designs of wavelet-based digital systems based on novel nonlinear I\\/O data space transformations

    Microsoft Academic Search

    Dongming Peng; Mi Lu

    2005-01-01

    The designs of application specific integrated circuits and/or multiprocessor systems are usually required in order to improve the performance of multidimensional applications such as digital-image processing and computer vision. Wavelet-based algorithms have been found promising among these applications due to the features of hierarchical signal analysis and multiresolution analysis. Because of the large size of multidimensional input data, off-chip random

  5. Long-range dependence in the volatility of commodity futures prices: Wavelet-based evidence

    NASA Astrophysics Data System (ADS)

    Power, Gabriel J.; Turvey, Calum G.

    2010-01-01

    Commodity futures have long been used to facilitate risk management and inventory stabilization. The study of commodity futures prices has attracted much attention in the literature because they are highly volatile and because commodities represent a large proportion of the export value in many developing countries. Previous research has found apparently contradictory findings about the presence of long memory or more generally, long-range dependence. This note investigates the nature of long-range dependence in the volatility of 14 energy and agricultural commodity futures price series using the improved Hurst coefficient (H) estimator of Abry, Teyssière and Veitch. This estimator is motivated by the ability of wavelets to detect self-similarity and also enables a test for the stability of H. The results show evidence of long-range dependence for all 14 commodities and of a non-stationary H for 9 of 14 commodities.
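
    The wavelet estimator of H underlying this kind of analysis regresses the log2 energy of detail coefficients on scale; for a long-range-dependent (fGn-like) series the slope is 2H - 1. A minimal sketch with PyWavelets (wavelet choice and number of levels are assumptions, and this is the plain estimator, not the robust Abry-Teyssière-Veitch version):

      import numpy as np
      import pywt

      def hurst_wavelet(x, wavelet="db3", levels=8):
          """Estimate H of an fGn-like (stationary, long-range dependent) series."""
          coeffs = pywt.wavedec(x, wavelet, level=levels)
          details = coeffs[1:][::-1]                    # cD1 (finest) ... cD_levels
          j = np.arange(1, levels + 1)
          logvar = np.log2([np.mean(d ** 2) for d in details])
          slope = np.polyfit(j, logvar, 1)[0]           # E|d_j|^2 ~ 2^(j(2H-1)) for fGn
          return (slope + 1) / 2

      x = np.random.default_rng(0).normal(size=2 ** 14)
      print(hurst_wavelet(x))                           # white noise: H close to 0.5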

  6. Enhancement of Tropical Land Cover Mapping with Wavelet-Based Fusion and Unsupervised Clustering of SAR and Landsat Image Data

    NASA Technical Reports Server (NTRS)

    LeMoigne, Jacqueline; Laporte, Nadine; Netanyahuy, Nathan S.; Zukor, Dorothy (Technical Monitor)

    2001-01-01

    The characterization and the mapping of land cover/land use of forest areas, such as the Central African rainforest, is a very complex task. This complexity is mainly due to the extent of such areas and, as a consequence, to the lack of full and continuous cloud-free coverage of those large regions by one single remote sensing instrument, In order to provide improved vegetation maps of Central Africa and to develop forest monitoring techniques for applications at the local and regional scales, we propose to utilize multi-sensor remote sensing observations coupled with in-situ data. Fusion and clustering of multi-sensor data are the first steps towards the development of such a forest monitoring system. In this paper, we will describe some preliminary experiments involving the fusion of SAR and Landsat image data of the Lope Reserve in Gabon. Similarly to previous fusion studies, our fusion method is wavelet-based. The fusion provides a new image data set which contains more detailed texture features and preserves the large homogeneous regions that are observed by the Thematic Mapper sensor. The fusion step is followed by unsupervised clustering and provides a vegetation map of the area.

  7. New method for designing two-channel causal stable IIR perfect reconstruction filter banks and wavelet bases

    NASA Astrophysics Data System (ADS)

    Mao, J. S.; Chan, S. C.; Ho, Ka L.

    2000-10-01

    A new method for designing two-channel causal stable IIR PR filter banks and wavelet bases is proposed. It is based on the structure previously proposed by Phoong et al. (1995). Such a filter bank is parameterized by two functions α(z) and β(z), which can be chosen as all-pass functions to obtain IIR filter banks with very high stopband attenuation. One of the problems with this choice is that a bump of about 4 dB always exists near the transition band of the analysis and synthesis filters. The stopband attenuation of the high-pass analysis filter is also 10 dB lower than that of the low-pass filter. By choosing β(z) and α(z) as an all-pass function and a type-II linear-phase finite impulse response function, respectively, the bump can be significantly suppressed. In addition, the stopband attenuation of the high-pass filter can be controlled easily. The design problem is formulated as a polynomial approximation problem and is solved efficiently by the Remez exchange algorithm. The extension of this method to the design of a class of IIR wavelet bases is also considered.

  8. EMPIRICAL MODE DECOMPOSITION, FRACTIONAL GAUSSIAN NOISE AND HURST EXPONENT ESTIMATION

    E-print Network

    Gonçalves, Paulo

    analysis and statistical characterization of the obtained modes reveal an equivalent filter bank … usefulness of this technique for estimating scaling exponents. New EMD-based methods are proposed and quantitatively compared to classical wavelet-based ones. 2. EMD BASICS Basically, Empirical Mode Decomposition

  9. The use of photographic rates to estimate densities of tigers and other cryptic mammals: a comment on misleading conclusions

    USGS Publications Warehouse

    Jennelle, C.S.; Runge, M.C.; MacKenzie, D.I.

    2002-01-01

    The search for easy-to-use indices that substitute for direct estimation of animal density is a common theme in wildlife and conservation science, but one fraught with well-known perils (Nichols & Conroy, 1996; Yoccoz, Nichols & Boulinier, 2001; Pollock et al., 2002). To establish the utility of an index as a substitute for an estimate of density, one must: (1) demonstrate a functional relationship between the index and density that is invariant over the desired scope of inference; (2) calibrate the functional relationship by obtaining independent measures of the index and the animal density; (3) evaluate the precision of the calibration (Diefenbach et al., 1994). Carbone et al. (2001) argue that the number of camera-days per photograph is a useful index of density for large, cryptic, forest-dwelling animals, and proceed to calibrate this index for tigers (Panthera tigris). We agree that a properly calibrated index may be useful for rapid assessments in conservation planning. However, Carbone et al. (2001), who desire to use their index as a substitute for density, do not adequately address the three elements noted above. Thus, we are concerned that others may view their methods as justification for not attempting directly to estimate animal densities, without due regard for the shortcomings of their approach.

  10. X-Ray Methods to Estimate Breast Density Content in Breast Tissue

    NASA Astrophysics Data System (ADS)

    Maraghechi, Borna

    This work focuses on analyzing x-ray methods to estimate the fat and fibroglandular contents in breast biopsies and in breasts. The knowledge of fat in the biopsies could aid in their wide-angle x-ray scatter analyses. A higher mammographic density (fibrous content) in breasts is an indicator of higher cancer risk. Simulations for 5 mm thick breast biopsies composed of fibrous, cancer, and fat and for 4.2 cm thick breast fat/fibrous phantoms were done. Data from experimental studies using plastic biopsies were analyzed. The 5 mm diameter, 5 mm thick plastic samples consisted of layers of polycarbonate (lexan), polymethyl methacrylate (PMMA-lucite) and polyethylene (polyet). In terms of the total linear attenuation coefficients, lexan ≈ fibrous, lucite ≈ cancer, and polyet ≈ fat. The detectors were of two types, photon counting (CdTe) and energy integrating (CCD). For biopsies, three photon counting methods were performed to estimate the fat (polyet) content using simulation and experimental data. The two basis function method, which assumed the biopsies were composed of two materials, fat and a 50:50 mixture of fibrous (lexan) and cancer (lucite), appears to be the most promising method. Discrepancies were observed between the results obtained via simulation and experiment. Potential causes are the spectrum and the attenuation coefficient values used for simulations. An energy integrating method was compared to the two basis function method using experimental and simulation data. A slight advantage was observed for photon counting whereas both detectors gave similar results for the 4.2 cm thick breast phantom simulations. The percentage of fibrous within a 9 cm diameter circular phantom of fibrous/fat tissue was estimated via a fan beam geometry simulation. Both methods yielded good results. Computed tomography (CT) images of the circular phantom were obtained using both detector types. The Radon transforms were estimated via four energy integrating techniques and one photon counting technique. Contrast, signal to noise ratio (SNR) and pixel values between different regions of interest were analyzed. The two basis function method and two of the energy integrating methods (calibration, beam hardening correction) gave the highest and most linear curves for contrast and SNR.

  11. Estimated uncertainty of calculated liquefied natural gas density from a comparison of NBS and Gaz de France densimeter test facilities

    SciTech Connect

    Siegwarth, J.D.; LaBrecque, J.F.; Roncier, M.; Philippe, R.; Saint-Just, J.

    1982-12-16

    Liquefied natural gas (LNG) densities can be measured directly but are usually determined indirectly in custody transfer measurement by using a density correlation based on temperature and composition measurements. An LNG densimeter test facility at the National Bureau of Standards uses an absolute densimeter based on the Archimedes principle, while a test facility at Gaz de France uses a correlation method based on measurement of composition and density. A comparison between these two test facilities using a portable version of the absolute densimeter provides an experimental estimate of the uncertainty of the indirect method of density measurement for the first time, on a large (32 L) sample. The two test facilities agree for pure methane to within about 0.02%. For the LNG-like mixtures consisting of methane, ethane, propane, and nitrogen with the methane concentrations always higher than 86%, the calculated density is within 0.25% of the directly measured density 95% of the time.

  12. Wavelet based error correction and predictive uncertainty of a hydrological forecasting system

    NASA Astrophysics Data System (ADS)

    Bogner, Konrad; Pappenberger, Florian; Thielen, Jutta; de Roo, Ad

    2010-05-01

    River discharge predictions most often show errors with scaling properties of unknown source and statistical structure that degrade the quality of forecasts. This is especially true for lead-time ranges greater than a few days. Since the European Flood Alert System (EFAS) provides discharge forecasts up to ten days ahead, it is necessary to take these scaling properties into consideration. For example, the range of scales for the error that occurs in spring will be caused by long-lasting snowmelt processes, and is by far larger than the error that appears during the summer period, which is caused by convective rain fields of short duration. The wavelet decomposition is an excellent way to provide the detailed model error at different levels in order to estimate the (unobserved) state variables more precisely. A Vector-AutoRegressive model with eXogenous input (VARX) is fitted for the different levels of wavelet decomposition simultaneously and after predicting the next time steps ahead for each scale, a reconstruction formula is applied to transform the predictions in the wavelet domain back to the original time domain. The Bayesian Uncertainty Processor (BUP) developed by Krzysztofowicz is an efficient method to estimate the full predictive uncertainty, which is derived by integrating the hydrological model uncertainty and the meteorological input uncertainty. A hydrological uncertainty processor has been applied to the error corrected discharge series at first in order to derive the predictive conditional distribution under the hypothesis that there is no input uncertainty. The uncertainty of the forecasted meteorological input forcing the hydrological model is derived from the combination of deterministic weather forecasts and ensemble prediction systems (EPS), and the Input Processor maps this input uncertainty into the output uncertainty under the hypothesis that there is no hydrological uncertainty. The main objective of this Bayesian forecasting system is to get an estimate of the conditional probability distribution of the future observed quantity (i.e. the discharge in the next days) given the available sample of model predictions by integrating optimally the hydrological and the input uncertainty. At the moment this integrated system of error correction and predictive uncertainty estimation has been tested and set up for operational use at some stations in Central Europe only, but will be extended to the EFAS domain within the near future.

  13. Optical Density Analysis of X-Rays Utilizing Calibration Tooling to Estimate Thickness of Parts

    NASA Technical Reports Server (NTRS)

    Grau, David

    2012-01-01

    This process is designed to estimate the thickness change of a material through data analysis of a digitized version of an x-ray (or a digital x-ray) containing the material (with the thickness in question) and various tooling. Using this process, it is possible to estimate a material's thickness change in a region of the material or part that is thinner than the rest of the reference thickness. However, that same principle process can be used to determine the thickness change of material using a thinner region to determine thickening, or it can be used to develop contour plots of an entire part. Proper tooling must be used. An x-ray film with an S-shaped characteristic curve or a digital x-ray device with a product resulting in like characteristics is necessary. If a film exists with linear characteristics, this type of film would be ideal; however, at the time of this reporting, no such film has been known. Machined components (with known fractional thicknesses) of a like material (similar density) to that of the material to be measured are necessary. The machined components should have machined through-holes. For ease of use and better accuracy, the through-holes should be a size larger than 0.125 in. (3.2 mm). Standard components for this use are known as penetrameters or image quality indicators. Also needed is standard x-ray equipment, if film is used in place of digital equipment, or x-ray digitization equipment with proven conversion properties. Typical x-ray digitization equipment is commonly used in the medical industry, and creates digital images of x-rays in DICOM format. It is recommended to scan the image in a 16-bit format. However, 12-bit and 8-bit resolutions are acceptable. Finally, x-ray analysis software that allows accurate digital image density calculations, such as Image-J freeware, is needed. The actual procedure requires the test article to be placed on the raw x-ray, ensuring the region of interest is aligned for perpendicular x-ray exposure capture. One or multiple machined components of like material/density with known thicknesses are placed atop the part (preferably in a region of nominal and non-varying thickness) such that exposure of the combined part and machined component lay-up is captured on the x-ray. Depending on the accuracy required, the machined component's thickness must be carefully chosen. Similarly, depending on the accuracy required, the lay-up must be exposed such that the regions of the x-ray to be analyzed have a density range between 1 and 4.5. After the exposure, the image is digitized, and the digital image can then be analyzed using the image analysis software.
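
    Numerically, the procedure amounts to building a calibration curve from the known-thickness components and inverting it. A sketch with assumed penetrameter readings, fit in the quasi-linear part of the characteristic curve:

      import numpy as np

      # Measured film density for machined calibration steps of known thickness
      # (illustrative numbers within the usable 1-4.5 density range).
      thickness_in = np.array([0.10, 0.15, 0.20, 0.25, 0.30])
      film_density = np.array([4.10, 3.55, 3.05, 2.50, 2.00])

      # Linear fit density = a * thickness + b, then invert for an unknown region.
      a, b = np.polyfit(thickness_in, film_density, 1)
      measured = 3.30                                  # density in the region of interest
      print(f"estimated thickness: {(measured - b) / a:.3f} in")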

  14. Density

    NSDL National Science Digital Library

    2008-01-01

    This PhET interactive, downloadable simulation allows students to discover the relationship between mass, volume and density by weighing and submerging various materials under water. Do objects like aluminum, Styrofoam, and wood float or sink? Can you identify all the mystery objects by weighing them and submerging them underwater to measure their volumes? Sample learning goals, teaching ideas, and translated versions are available.

  15. Lattice potential energy estimation for complex ionic salts from density measurements.

    PubMed

    Jenkins, H Donald Brooke; Tudela, David; Glasser, Leslie

    2002-05-01

    This paper is one of a series exploring simple approaches for the estimation of lattice energy of ionic materials, avoiding elaborate computation. The readily accessible, frequently reported, and easily measurable (requiring only small quantities of inorganic material) property of density, ρ_m, is related, as a rectilinear function of the form (ρ_m/M_m)^(1/3), to the lattice energy U_POT of ionic materials, where M_m is the chemical formula mass. Dependence on the cube root is particularly advantageous because this considerably lowers the effects of any experimental errors in the density measurement used. The relationship that is developed arises from the dependence (previously reported in Jenkins, H. D. B.; Roobottom, H. K.; Passmore, J.; Glasser, L. Inorg. Chem. 1999, 38, 3609) of lattice energy on the inverse cube root of the molar volume. These latest equations have the form U_POT/kJ mol⁻¹ = γ(ρ_m/M_m)^(1/3) + δ, where for the simpler salts (i.e., U_POT < 5000 kJ mol⁻¹), γ and δ are coefficients dependent upon the stoichiometry of the inorganic material, and for materials for which U_POT > 5000 kJ mol⁻¹, γ/kJ mol⁻¹ cm = 10⁻⁷ A I (2I N_A)^(1/3) and δ/kJ mol⁻¹ = 0, where A is the general electrostatic conversion factor (A = 121.4 kJ mol⁻¹), I is the ionic strength = (1/2)Σ n_i z_i², and N_A is Avogadro's constant. PMID:11978099
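
    As a worked example of the limiting relation quoted above, the following evaluates U_POT = γ(ρ_m/M_m)^(1/3) with γ = 10⁻⁷ A I (2I N_A)^(1/3); the salt parameters are hypothetical.

      A = 121.4          # kJ/mol, electrostatic conversion factor (from the abstract)
      N_A = 6.02214e23   # Avogadro's constant, 1/mol

      def gamma_limit(I):
          """gamma for the U_POT > 5000 kJ/mol regime: 1e-7 * A * I * (2*I*N_A)^(1/3)."""
          return 1e-7 * A * I * (2 * I * N_A) ** (1 / 3)

      def lattice_energy(rho_g_cm3, M_g_mol, gamma, delta=0.0):
          """U_POT/kJ mol^-1 = gamma * (rho_m / M_m)^(1/3) + delta."""
          return gamma * (rho_g_cm3 / M_g_mol) ** (1 / 3) + delta

      # Hypothetical complex salt, ionic strength I = 12, density 3.0 g/cm^3, M = 600 g/mol:
      print(lattice_energy(3.0, 600.0, gamma_limit(12)))   # ~6100 kJ/mol, in the >5000 regime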

  16. Density estimators for the convolution of discrete and continuous random variables

    E-print Network

    Wefelmeyer, Wolfgang

    be estimated by a kernel estimator based on the sum of each pair of observations. Since the two components … a kernel estimator of the continuous component with an empirical estimator of the discrete component. We … estimator, and the same asymptotic bias, but a much smaller asymptotic variance. We also show how pointwise

  17. Analysis of instrument self-noise and microseismic event detection using power spectral density estimates

    NASA Astrophysics Data System (ADS)

    Vaezi, Y.; van der Baan, M.

    2014-05-01

    Reliability of microseismic interpretations is very much dependent on how robustly microseismic events are detected and picked. Various event detection algorithms are available but detection of weak events is a common challenge. Apart from the event magnitude, hypocentral distance, and background noise level, the instrument self-noise can also act as a major constraint for the detection of weak microseismic events in particular for borehole deployments in quiet environments such as below 1.5-2 km depths. Instrument self-noise levels that are comparable or above background noise levels may not only complicate detection of weak events at larger distances but also challenge methods such as seismic interferometry which aim at analysis of coherent features in ambient noise wavefields to reveal subsurface structure. In this paper, we use power spectral densities to estimate the instrument self-noise for a borehole data set acquired during a hydraulic fracturing stimulation using modified 4.5-Hz geophones. We analyse temporal changes in recorded noise levels and their time-frequency variations for borehole and surface sensors and conclude that instrument noise is a limiting factor in the borehole setting, impeding successful event detection. Next we suggest that the variations of the spectral powers in a time-frequency representation can be used as a new criterion for event detection. Compared to the common short-time average/long-time average method, our suggested approach requires a similar number of parameters but with more flexibility in their choice. It detects small events with anomalous spectral powers with respect to an estimated background noise spectrum with the added advantage that no bandpass filtering is required prior to event detection.
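
    The suggested detection criterion can be sketched as follows: estimate a background noise spectrum, then flag time windows whose spectral power deviates anomalously from it. The spectrogram parameters, threshold, and synthetic trace are assumptions.

      import numpy as np
      from scipy.signal import spectrogram

      rng = np.random.default_rng(0)
      fs = 500.0
      x = rng.normal(size=int(60 * fs))                    # background noise (stand-in)
      x[15000:15200] += 5 * np.sin(2 * np.pi * 40 * np.arange(200) / fs)  # weak "event" at 30 s

      f, t, Sxx = spectrogram(x, fs=fs, nperseg=256, noverlap=128)
      noise_psd = np.median(Sxx, axis=1, keepdims=True)    # robust background spectrum
      excess = (np.log(Sxx) - np.log(noise_psd)).max(axis=0)  # peak log power excess per window
      print(t[excess > 5.0])                               # assumed cutoff; flags windows near 30 s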

  18. Estimation of the Concentration of Low-Density Lipoprotein Cholesterol in Plasma, Without Use of the Preparative Ultracentrifuge

    Microsoft Academic Search

    William T. Friedewald; Robert I. Levy; Donald S. Fredrickson

    1972-01-01

    A method for estimating the cholesterol content of the serum low-density lipoprotein fraction (Sf 0-20) is presented. The method involves measurements of fasting plasma total cholesterol, triglyceride, and high-density lipoprotein cholesterol concentrations, none of which requires the use of the preparative ultracentrifuge. Comparison of this suggested procedure with the more direct procedure, in which the ultracentrifuge is used, yielded
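
    The procedure described is the Friedewald estimate: in mg/dl, LDL-C = total cholesterol - HDL-C - triglycerides/5, the last term approximating VLDL cholesterol and valid only for fasting samples with triglycerides below roughly 400 mg/dl. A one-function sketch:

      def ldl_friedewald(total_chol, hdl, triglycerides):
          """Friedewald estimate of LDL cholesterol, all values in mg/dl.

          TG/5 approximates VLDL cholesterol; not valid for TG >= ~400 mg/dl.
          """
          if triglycerides >= 400:
              raise ValueError("Friedewald estimate unreliable for TG >= 400 mg/dl")
          return total_chol - hdl - triglycerides / 5.0

      print(ldl_friedewald(200, 50, 150))   # -> 120.0 mg/dl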

  19. Estimation of ocelot density in the pantanal using capture-recapture analysis of camera-trapping data

    USGS Publications Warehouse

    Trolle, M.; Kery, M.

    2003-01-01

    Neotropical felids such as the ocelot (Leopardus pardalis) are secretive, and it is difficult to estimate their populations using conventional methods such as radiotelemetry or sign surveys. We show that recognition of individual ocelots from camera-trapping photographs is possible, and we use camera-trapping results combined with closed population capture-recapture models to estimate density of ocelots in the Brazilian Pantanal. We estimated the area from which animals were camera trapped at 17.71 km2. A model with constant capture probability yielded an estimate of 10 independent ocelots in our study area, which translates to a density of 2.82 independent individuals for every 5 km2 (SE 1.00).

  20. Estimation of tiger densities in the tropical dry forests of Panna, Central India, using photographic capture-recapture sampling

    USGS Publications Warehouse

    Karanth, K.U.; Chundawat, R.S.; Nichols, J.D.; Kumar, N.S.

    2004-01-01

    Tropical dry-deciduous forests comprise more than 45% of the tiger (Panthera tigris) habitat in India. However, in the absence of rigorously derived estimates of ecological densities of tigers in dry forests, critical baseline data for managing tiger populations are lacking. In this study tiger densities were estimated using photographic capture-recapture sampling in the dry forests of Panna Tiger Reserve in Central India. Over a 45-day survey period, 60 camera trap sites were sampled in a well-protected part of the 542-km2 reserve during 2002. A total sampling effort of 914 camera-trap-days yielded photo-captures of 11 individual tigers over 15 sampling occasions that effectively covered a 418-km2 area. The closed capture-recapture model Mh, which incorporates individual heterogeneity in capture probabilities, fitted these photographic capture history data well. The estimated capture probability/sample, 0.04, resulted in an estimated tiger population size and standard error of 29 (9.65), and a density of 6.94 (3.23) tigers/100 km2. The estimated tiger density matched predictions based on prey abundance. Our results suggest that, if managed appropriately, the available dry forest habitat in India has the potential to support a population size of about 9000 wild tigers.
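
    The final density figure is the abundance estimate divided by the effectively sampled area. A delta-method sketch, holding the area fixed (the paper's reported SE of 3.23 also reflects uncertainty in the effective area, which this simple version ignores):

      # Density from capture-recapture abundance and effectively sampled area.
      N_hat, se_N = 29.0, 9.65     # model Mh abundance estimate and its SE
      area_km2 = 418.0             # effectively sampled area

      density = N_hat / area_km2 * 100.0          # tigers per 100 km^2 -> 6.94
      se_density = se_N / area_km2 * 100.0        # delta method with fixed area
      print(f"{density:.2f} +/- {se_density:.2f} tigers/100 km^2")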

  1. Novelty detection by multivariate kernel density estimation and growing neural gas algorithm

    NASA Astrophysics Data System (ADS)

    Fink, Olga; Zio, Enrico; Weidmann, Ulrich

    2015-01-01

    One of the underlying assumptions when using data-based methods for pattern recognition in diagnostics or prognostics is that the selected data sample used to train and test the algorithm is representative of the entire dataset and covers all combinations of parameters and conditions, and resulting system states. However in practice, operating and environmental conditions may change, unexpected and previously unanticipated events may occur and corresponding new anomalous patterns develop. Therefore for practical applications, techniques are required to detect novelties in patterns and give confidence to the user on the validity of the performed diagnosis and predictions. In this paper, the application of two types of novelty detection approaches is compared: a statistical approach based on multivariate kernel density estimation and an approach based on a type of unsupervised artificial neural network, called the growing neural gas (GNG). The comparison is performed on a case study in the field of railway turnout systems. Both approaches demonstrate their suitability for detecting novel patterns. Furthermore, GNG proves to be more flexible, especially with respect to dimensionality of the input data and suitability for online learning.
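
    The KDE half of the comparison can be sketched with scikit-learn: fit a multivariate kernel density model on nominal-condition data and flag test points whose log-density falls below a low quantile of the training scores. The bandwidth, quantile, and stand-in data are assumptions.

      import numpy as np
      from sklearn.neighbors import KernelDensity

      rng = np.random.default_rng(0)
      X_train = rng.normal(size=(1000, 3))             # nominal-condition features (stand-in)

      kde = KernelDensity(kernel="gaussian", bandwidth=0.4).fit(X_train)
      threshold = np.quantile(kde.score_samples(X_train), 0.01)   # 1% of nominal data flagged

      X_test = np.array([[0.1, -0.2, 0.3],             # nominal-looking point
                         [6.0, 6.0, 6.0]])             # novel pattern
      print(kde.score_samples(X_test) < threshold)     # [False  True]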

  2. Measuring and Modeling Fault Density for Plume-Fault Encounter Probability Estimation

    SciTech Connect

    Jordan, P.D.; Oldenburg, C.M.; Nicot, J.-P.

    2011-05-15

    Emission of carbon dioxide from fossil-fueled power generation stations contributes to global climate change. Storage of this carbon dioxide within the pores of geologic strata (geologic carbon storage) is one approach to mitigating the climate change that would otherwise occur. The large storage volume needed for this mitigation requires injection into brine-filled pore space in reservoir strata overlain by cap rocks. One of the main concerns of storage in such rocks is leakage via faults. In the early stages of site selection, site-specific fault coverages are often not available. This necessitates a method for using available fault data to develop an estimate of the likelihood of injected carbon dioxide encountering and migrating up a fault, primarily due to buoyancy. Fault population statistics provide one of the main inputs to calculate the encounter probability. Previous fault population statistics work is shown to be applicable to areal fault density statistics. This result is applied to a case study in the southern portion of the San Joaquin Basin, with the result that a carbon dioxide plume from a previously planned injection had a 3% chance of encountering a fault that fully offsets the seal.

  3. Wavelet-based data and solution compression for efficient image reconstruction in fluorescence diffuse optical tomography

    NASA Astrophysics Data System (ADS)

    Correia, Teresa; Rudge, Timothy; Koch, Maximilian; Ntziachristos, Vasilis; Arridge, Simon

    2013-08-01

    Current fluorescence diffuse optical tomography (fDOT) systems can provide large data sets and, in addition, the unknown parameters to be estimated are so numerous that the sensitivity matrix is too large to store. Alternatively, iterative methods can be used, but they can be extremely slow to converge when dealing with large matrices. A few approaches suitable for the reconstruction of images from very large data sets have been developed. However, they either require explicit construction of the sensitivity matrix, suffer from slow computation times, or can only be applied to restricted geometries. We introduce a method for fast reconstruction in fDOT with large data and solution spaces, which preserves the resolution of the forward operator whilst compressing its representation. The method does not require construction of the full matrix, and thus allows storage and direct inversion of the explicitly constructed compressed system matrix. The method is tested using simulated and experimental data. Results show that the fDOT image reconstruction problem can be effectively compressed without significant loss of information and with the added advantage of reducing image noise.
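
    The compression idea, transforming the operator into a wavelet basis and keeping only the large coefficients, can be sketched as follows with PyWavelets. The smooth stand-in matrix, wavelet choice, and 10% retention rate are assumptions for illustration; the paper compresses the fDOT forward operator, not an arbitrary matrix.

```python
import numpy as np
import pywt

# Smooth stand-in for a sensitivity matrix (real operators are far larger).
x = np.linspace(0.0, 1.0, 128)
A = np.exp(-((x[:, None] - x[None, :]) ** 2) / 0.02)

coeffs = pywt.wavedec2(A, 'db4', level=3)        # 2-D wavelet decomposition
arr, slices = pywt.coeffs_to_array(coeffs)

threshold = np.quantile(np.abs(arr), 0.90)       # keep largest 10% of coefficients
arr_compressed = np.where(np.abs(arr) >= threshold, arr, 0.0)

A_rec = pywt.waverec2(pywt.array_to_coeffs(arr_compressed, slices,
                                           output_format='wavedec2'), 'db4')
rel_err = np.linalg.norm(A - A_rec) / np.linalg.norm(A)
print(f"relative reconstruction error: {rel_err:.4f}")
```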

  4. Multiresolution Wavelet Based Adaptive Numerical Dissipation Control for Shock-Turbulence Computations

    NASA Technical Reports Server (NTRS)

    Sjoegreen, B.; Yee, H. C.

    2001-01-01

    The recently developed essentially fourth-order or higher low-dissipative shock-capturing scheme of Yee, Sandham and Djomehri (1999) aimed at minimizing numerical dissipation for high speed compressible viscous flows containing shocks, shears and turbulence. To detect nonsmooth behavior and control the amount of numerical dissipation to be added, Yee et al. employed an artificial compression method (ACM) of Harten (1978) but utilized it in an entirely different context than Harten originally intended. The ACM sensor consists of two tuning parameters and is highly dependent on the physical problem. To minimize the tuning of parameters and the physical problem dependence, new sensors with improved detection properties are proposed. The new sensors are derived from utilizing appropriate non-orthogonal wavelet basis functions and they can be used to completely switch off the extra numerical dissipation outside shock layers. The non-dissipative spatial base scheme of arbitrarily high order of accuracy can be maintained without compromising its stability at all parts of the domain where the solution is smooth. Two types of redundant non-orthogonal wavelet basis functions are considered. One is the B-spline wavelet (Mallat & Zhong 1992) used by Gerritsen and Olsson (1996) in an adaptive mesh refinement method, to determine regions where refinement should be done. The other is the modification of the multiresolution method of Harten (1995) by converting it to a new, redundant, non-orthogonal wavelet. The wavelet sensor is then obtained by computing the estimated Lipschitz exponent of a chosen physical quantity (or vector) to be sensed on a chosen wavelet basis function. Both wavelet sensors can be viewed as dual-purpose adaptive methods leading to dynamic numerical dissipation control and improved grid adaptation indicators. Consequently, they are useful not only for shock-turbulence computations but also for computational aeroacoustics and numerical combustion. In addition, these sensors are scheme independent and can be stand-alone options for numerical algorithms other than the Yee et al. scheme.
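
    The quantity at the heart of the sensor is a local Lipschitz (Holder) exponent read off from how wavelet coefficient magnitudes decay across scales. A rough sketch with a continuous wavelet transform is below; the Mexican-hat wavelet and the test signal are assumptions (the paper uses B-spline and Harten-multiresolution bases), and the alpha + 1/2 slope convention assumes L2-normalized wavelets.

```python
import numpy as np
import pywt

x = np.linspace(-1, 1, 1024)
signal = np.abs(x) ** 0.5                     # Lipschitz exponent 0.5 at x = 0

scales = 2.0 ** np.arange(1, 7)               # dyadic scales
coefs, _ = pywt.cwt(signal, scales, 'mexh')   # continuous wavelet transform

center = len(x) // 2
# Near a singularity of exponent alpha, |W f(s, x0)| ~ s^(alpha + 1/2)
# for L2-normalized wavelets, so the log-log slope minus 1/2 estimates alpha.
mags = np.abs(coefs[:, center])
slope = np.polyfit(np.log2(scales), np.log2(mags), 1)[0]
print(f"estimated Lipschitz exponent ~ {slope - 0.5:.2f}")
```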

  5. MEASUREMENT OF OAK TREE DENSITY WITH LANDSAT TM DATA FOR ESTIMATING BIOGENIC ISOPRENE EMISSIONS IN TENNESSEE, USA: JOURNAL ARTICLE

    EPA Science Inventory

    JOURNAL NRMRL-RTP-P- 437 Baugh, W., Klinger, L., Guenther, A., and Geron*, C.D. Measurement of Oak Tree Density with Landsat TM Data for Estimating Biogenic Isoprene Emissions in Tennessee, USA. International Journal of Remote Sensing (Taylor and Francis) 22 (14):2793-2810 (2001)...

  6. An Air Traffic Prediction Model based on Kernel Density Estimation

    E-print Network

    Sun, Dengfeng

    through sophisticated flight dynamics [1]. However, for the Air Traffic Control System Command Center at an Air Route Traffic Control Center (simply denoted as Center hereafter) level [2]. It forecasts aircraft

  7. Dynamics of photosynthetic photon flux density (PPFD) and estimates in coastal northern California

    NASA Astrophysics Data System (ADS)

    Ge, Shaokui; Smith, Richard G.; Jacovides, Constantinos P.; Kramer, Marc G.; Carruthers, Raymond I.

    2011-08-01

    Plants require solar radiation for photosynthesis and their growth is directly related to the amount received, assuming that other environmental parameters are not limiting. Therefore, precise estimation of photosynthetically active radiation (PAR) is necessary to enhance the overall accuracy of plant growth models. This study aimed to explore the PAR radiant flux in the San Francisco Bay Area of northern California. During the growing season (March through August) of the 2 years 2007-2008, the on-site magnitudes of photosynthetic photon flux densities (PPFD) were investigated and then processed at both the hourly and daily time scales. Combined with global solar radiation (RS) and simulated extraterrestrial solar radiation, five PAR-related values were developed: flux density-based PAR (PPFD), energy-based PAR (PARE), the flux-to-energy conversion efficiency (fFEC), the fraction of PAR energy in the global solar radiation (fE), and a newly developed indicator, the lost PARE percentage (LPR), quantifying the loss as solar radiation penetrates from the top of the atmosphere to the ground. These PAR-related values showed significant diurnal variation, with high values at midday and low values in the morning and afternoon hours. During the entire experimental season, the overall mean hourly value of fFEC was found to be 2.17 µmol J-1, while the respective fE value was 0.49. The monthly averages of hourly fFEC and fE at solar noon ranged from 2.15 µmol J-1 in March to 2.39 µmol J-1 in August and from 0.47 in March to 0.52 in July, respectively. However, the monthly average daily values were relatively constant and exhibited only a weak seasonal variation, ranging from 2.02 mol MJ-1 and 0.45 (March) to 2.19 mol MJ-1 and 0.48 (June). The mean daily values of fFEC and fE at solar noon were 2.16 mol MJ-1 and 0.47 across the entire growing season, respectively. Both PPFD and the newly reported LPR showed strong diurnal patterns, but with opposite trends: PPFD was high around noon, resulting in low values of LPR during the same time period. Both were found to be highly correlated with global solar radiation RS, the solar elevation angle h, and the clearness index Kt. Using the best subset selection of variables, two parametric models were developed for estimating PPFD and LPR, which can easily be applied at radiometric sites by recording only global solar radiation measurements. These two models involve only the commonly measured global solar radiation (RS) and two large-scale geometric parameters, extraterrestrial solar radiation and solar elevation; the models are therefore insensitive to local weather conditions such as temperature. In particular, with two test data sets collected in the USA and Greece, it was verified that the models could be extended across different geographical areas, where they performed well. These two hourly based models can therefore be used to provide precise PAR-related values, such as those required for developing precise vegetation growth models.
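
    Since the paper reports season-mean conversion coefficients, the basic arithmetic linking global solar radiation to PAR quantities is a one-liner; the sketch below applies the quoted means (fFEC ~ 2.17 µmol J-1, fE ~ 0.49) to a hypothetical irradiance, and is not the paper's fitted best-subset models.

```python
F_FEC = 2.17   # µmol PAR photons per J of global solar radiation (season mean)
F_E = 0.49     # fraction of global solar radiation energy that is PAR

def par_from_global(rs_w_per_m2: float) -> tuple[float, float]:
    """Estimate PPFD (µmol m-2 s-1) and PAR energy flux (W m-2) from Rs."""
    ppfd = F_FEC * rs_w_per_m2     # W m-2 is J s-1 m-2, so units work out
    par_energy = F_E * rs_w_per_m2
    return ppfd, par_energy

ppfd, pare = par_from_global(600.0)   # hypothetical midday Rs of 600 W m-2
print(f"PPFD ~ {ppfd:.0f} µmol m-2 s-1, PARE ~ {pare:.0f} W m-2")
```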

  8. Estimation of tool pose based on force-density correlation during robotic drilling.

    PubMed

    Williamson, Tom M; Bell, Brett J; Gerber, Nicolas; Salas, Lilibeth; Zysset, Philippe; Caversaccio, Marco; Weber, Stefan

    2013-04-01

    The application of image-guided systems with or without support by surgical robots relies on the accuracy of the navigation process, including patient-to-image registration. The surgeon must carry out the procedure based on the information provided by the navigation system, usually without being able to verify its correctness beyond visual inspection. Misleading surrogate parameters such as the fiducial registration error are often used to describe the success of the registration process, while a lack of methods describing the effects of navigation errors, such as those caused by tracking or calibration, may prevent the application of image guidance in certain accuracy-critical interventions. During minimally invasive mastoidectomy for cochlear implantation, a direct tunnel is drilled from the outside of the mastoid to a target on the cochlea based on registration using landmarks solely on the surface of the skull. Using this methodology, it is impossible to detect if the drill is advancing in the correct direction and that injury of the facial nerve will be avoided. To overcome this problem, a tool localization method based on drilling process information is proposed. The algorithm estimates the pose of a robot-guided surgical tool during a drilling task based on the correlation of the observed axial drilling force and the heterogeneous bone density in the mastoid extracted from 3-D image data. We present here one possible implementation of this method tested on ten tunnels drilled into three human cadaver specimens where an average tool localization accuracy of 0.29 mm was observed. PMID:23269744

  9. Global Crust-Mantle Density Contrast Estimated from EGM2008, DTM2008, CRUST2.0, and ICE-5G

    NASA Astrophysics Data System (ADS)

    Tenzer, Robert; Hamayun; Novák, Pavel; Gladkikh, Vladislav; Vajda, Peter

    2012-09-01

    We compute globally the consolidated crust-stripped gravity disturbances/anomalies. These refined gravity field quantities are obtained from the EGM2008 gravity data after applying the topographic and crust density contrast stripping corrections computed using the global topography/bathymetry model DTM2006.0, the global continental ice-thickness data ICE-5G, and the global crustal model CRUST2.0. All crust-component density contrasts are defined relative to the reference crustal density of 2,670 kg/m3. We demonstrate that the consolidated crust-stripped gravity data have the strongest correlation with the crustal thickness. Therefore, they are the most suitable gravity data type for the recovery of the Moho density interface by means of gravimetric modelling or inversion. The consolidated crust-stripped gravity data and the CRUST2.0 crust-thickness data are used to estimate the global average value of the crust-mantle density contrast. This is done by minimising the correlation between these refined gravity and crust-thickness data by adding the crust-mantle density contrast to the original reference crustal density of 2,670 kg/m3. The estimated values of 485 kg/m3 (for the refined gravity disturbances) and 481 kg/m3 (for the refined gravity anomalies) agree very closely with the crust-mantle density contrast of 480 kg/m3 adopted in the definition of the Preliminary Reference Earth Model (PREM). This agreement is most likely due to the fact that our gravimetric forward modelling results are significantly constrained by the CRUST2.0 model density structure and crust-thickness data, which are derived purely from seismic refraction methods.
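
    The decorrelation estimate can be mimicked in a few lines: scan candidate density contrasts, subtract the corresponding crust-thickness signal from the gravity data, and keep the contrast that minimizes the residual correlation with thickness. The linear Bouguer-slab sensitivity and the synthetic data below are illustrative assumptions, not the authors' full spherical forward modelling.

```python
import numpy as np

rng = np.random.default_rng(3)
thickness = rng.uniform(10, 70, 500)              # crustal thickness, km
# Bouguer-slab sensitivity: mGal per (kg/m3 * km) = 2*pi*G * 1e3 m/km * 1e5 mGal/(m/s2)
k = 2 * np.pi * 6.674e-11 * 1e3 * 1e5
true_contrast = 480.0                              # kg/m3, to be recovered
gravity = k * true_contrast * thickness + rng.normal(0, 20, 500)  # synthetic data

contrasts = np.arange(300.0, 700.0, 1.0)
corrs = [abs(np.corrcoef(gravity - k * c * thickness, thickness)[0, 1])
         for c in contrasts]                       # residual correlation per candidate
best = contrasts[int(np.argmin(corrs))]
print(f"estimated crust-mantle density contrast ~ {best:.0f} kg/m3")
```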

  10. Improving liquid chromatography-tandem mass spectrometry determinations by modifying noise frequency spectrum between two consecutive wavelet-based low-pass filtering procedures.

    PubMed

    Chen, Hsiao-Ping; Liao, Hui-Ju; Huang, Chih-Min; Wang, Shau-Chun; Yu, Sung-Nien

    2010-04-23

    This paper employs a chemometric technique that modifies the noise spectrum of a liquid chromatography-tandem mass spectrometry (LC-MS/MS) chromatogram between two consecutive wavelet-based low-pass filter procedures to improve peak signal-to-noise (S/N) ratio enhancement. Although similar techniques using other sets of low-pass procedures, such as matched filters, have been published, the procedures developed in this work avoid the peak-broadening disadvantages inherent in matched filters. In addition, unlike Fourier transform-based low-pass filters, wavelet-based filters efficiently reject noise in the chromatograms directly in the time domain without distorting the original signals. In this work, the low-pass filtering procedures sequentially convolve the original chromatograms against each set of low-pass filters to produce the approximation coefficients, representing the low-frequency wavelets, of the first five resolution levels. The tedious trials of setting threshold values to properly shrink each wavelet are therefore no longer required. The noise modification technique multiplies one wavelet-based low-pass filtered LC-MS/MS chromatogram with an artificial chromatogram containing added thermal noise before applying the second wavelet-based low-pass filter. Because a low-pass filter cannot eliminate frequency components below its cut-off frequency, consecutive low-pass filter procedures alone cannot achieve more efficient peak S/N ratio improvement for LC-MS/MS chromatograms. In contrast, when the low-pass filtered LC-MS/MS chromatogram is conditioned with this multiplication alteration before the second low-pass filter, much better ratio improvement is achieved. The noise frequency spectrum of the low-pass filtered chromatogram, which originally contains frequency components below the filter cut-off frequency, is altered by the multiplication operation to span a broader range. When the frequency range of this modified noise spectrum shifts toward the high-frequency regimes, the second low-pass filter is able to provide better filtering efficiency and thus higher peak S/N ratios. For real LC-MS/MS chromatograms, two consecutive wavelet-based low-pass filters typically achieve less than a 6-fold peak S/N ratio improvement, no better than a single wavelet-based low-pass filtering step; when the noise frequency spectrum is modified between the two low-pass filters, the improvement typically reaches 25- to 40-fold. The linear standard curves using the filtered LC-MS/MS signals are validated, and the filtered LC-MS/MS signals are reproducible. More accurate determinations of very low concentration samples (S/N ratio about 7-9) are obtained using the filtered signals than using the original signals. PMID:20227706
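
    A minimal sketch of the two-stage scheme is given below: a discrete wavelet transform with the detail coefficients zeroed acts as the low-pass step, the output is multiplied by an artificial noise-bearing trace, and the product is filtered again. The wavelet, level count, synthetic chromatogram, and crude S/N definition are all assumptions for illustration.

```python
import numpy as np
import pywt

def wavelet_lowpass(signal: np.ndarray, wavelet: str = 'db8', level: int = 5) -> np.ndarray:
    """Keep only the approximation coefficients (low-frequency wavelets)."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    coeffs = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]

rng = np.random.default_rng(4)
t = np.linspace(0, 10, 4096)
chromatogram = np.exp(-0.5 * ((t - 5) / 0.05) ** 2) + rng.normal(0, 0.3, t.size)

stage1 = wavelet_lowpass(chromatogram)
# Noise-spectrum modification: multiply by an artificial trace carrying
# thermal noise, pushing residual noise toward higher frequencies.
artificial = 1.0 + rng.normal(0, 0.3, t.size)
stage2 = wavelet_lowpass(stage1 * artificial)

def snr(x):  # crude peak S/N: peak height over baseline noise std
    return x.max() / x[:1000].std()

print(f"S/N raw {snr(chromatogram):.1f} -> filtered {snr(stage2):.1f}")
```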

  11. Coronal electron density distributions estimated from deca-hectometer type II radio bursts and coronal mass ejections

    NASA Astrophysics Data System (ADS)

    Lee, Jae-Ok; Moon, Yong-Jae; Lee, Jin-Yi; Lee, Kyoung-Sun; Kim, Rok-Soon

    2015-04-01

    In this study, we estimate coronal electron density distributions by analyzing DH type II radio observations, based on the assumption that a DH type II radio burst is generated by the shock formed at a CME leading edge. For this, we consider 11 Wind/WAVES DH type II radio bursts (from 2000 to 2003 and from 2010 to 2012) associated with SOHO/LASCO limb CMEs, using the following criteria: (1) the fundamental and second harmonic emission lanes are well identified in the frequency range of 1 to 14 MHz; (2) the associated CME is clearly identified at least twice in the LASCO-C2 or C3 field of view during the time of type II observation. For these events, we determine the lowest frequencies of their fundamental emission lanes and the heights of their leading edges. Coronal electron density distributions are obtained by minimizing the root mean square error between the observed heights of CME leading edges and the heights of DH type II radio bursts derived from assumed electron density distributions. We find that the estimated coronal electron density distributions range from 2.5 to 10.2 times the Saito coronal electron density model.

  12. Sensitivity analysis and density estimation for finite-time ruin probabilities

    E-print Network

    Privault, Nicolas

    due to new solvency regulations in Europe. This problem is closely related to that of estimating the (absolutely continuous) density functions of infima of reserve processes commonly used in insurance. Key words: ruin probability, Malliavin calculus, integration by parts, insurance mathematics. MSC

  13. Estimates of volumetric bone density from projectional measurements improve the discriminatory capability of dual X-ray absorptiometry

    NASA Technical Reports Server (NTRS)

    Jergas, M.; Breitenseher, M.; Gluer, C. C.; Yu, W.; Genant, H. K.

    1995-01-01

    To determine whether estimates of volumetric bone density from projectional scans of the lumbar spine have weaker associations with height and weight and stronger associations with prevalent vertebral fractures than standard projectional bone mineral density (BMD) and bone mineral content (BMC), we obtained posteroanterior (PA) dual X-ray absorptiometry (DXA), lateral supine DXA (Hologic QDR 2000), and quantitative computed tomography (QCT, GE 9800 scanner) in 260 postmenopausal women enrolled in two trials of treatment for osteoporosis. In 223 women, all vertebral levels, i.e., L2-L4 in the DXA scan and L1-L3 in the QCT scan, could be evaluated. Fifty-five women were diagnosed as having at least one mild fracture (age 67.9 +/- 6.5 years) and 168 women did not have any fractures (age 62.3 +/- 6.9 years). We derived three estimates of "volumetric bone density" from PA DXA (BMAD, BMAD*, and BMD*) and three from paired PA and lateral DXA (WA BMD, WA BMDHol, and eVBMD). While PA BMC and PA BMD were significantly correlated with height (r = 0.49 and r = 0.28) or weight (r = 0.38 and r = 0.37), QCT and the volumetric bone density estimates from paired PA and lateral scans were not (r = -0.083 to r = 0.050). BMAD, BMAD*, and BMD* correlated with weight but not height. The associations with vertebral fracture were stronger for QCT (odds ratio [OR] = 3.17; 95% confidence interval [CI] = 1.90-5.27), eVBMD (OR = 2.87; CI 1.80-4.57), WA BMDHol (OR = 2.86; CI 1.80-4.55) and WA BMD (OR = 2.77; CI 1.75-4.39) than for BMAD*/BMD* (OR = 2.03; CI 1.32-3.12), BMAD (OR = 1.68; CI 1.14-2.48), lateral BMD (OR = 1.88; CI 1.28-2.77), standard PA BMD (OR = 1.47; CI 1.02-2.13) or PA BMC (OR = 1.22; CI 0.86-1.74). The areas under the receiver operating characteristic (ROC) curves for QCT and all estimates of volumetric BMD were significantly higher compared with standard PA BMD and PA BMC. We conclude that, like QCT, estimates of volumetric bone density from paired PA and lateral scans are unaffected by height and weight and are more strongly associated with vertebral fracture than standard PA BMD or BMC, or estimates of volumetric density that are solely based on PA DXA scans.

  14. Survival analysis for the missing censoring indicator model using kernel density estimation techniques.

    PubMed

    Subramanian, Sundarraman

    2006-01-01

    This article concerns asymptotic theory for a new estimator of a survival function in the missing censoring indicator model of random censorship. Specifically, the large sample results for an inverse probability-of-non-missingness weighted estimator of the cumulative hazard function, so far not available, are derived, including an almost sure representation with rate for a remainder term, and uniform strong consistency with rate of convergence. The estimator is based on a kernel estimate for the conditional probability of non-missingness of the censoring indicator. Expressions for its bias and variance, in turn leading to an expression for the mean squared error as a function of the bandwidth, are also obtained. The corresponding estimator of the survival function, whose weak convergence is derived, is asymptotically efficient. A numerical study, comparing the performances of the proposed and two other currently existing efficient estimators, is presented. PMID:18953423

  15. A field comparison of nested grid and trapping web density estimators

    USGS Publications Warehouse

    Jett, D.A.; Nichols, J.D.

    1987-01-01

    The usefulness of capture-recapture estimators in any field study will depend largely on underlying model assumptions and on how closely these assumptions approximate the actual field situation. Evaluation of estimator performance under real-world field conditions is often a difficult matter, although several approaches are possible. Perhaps the best approach involves use of the estimation method on a population with known parameters.

  16. Hydrological parameter estimations from a conservative tracer test with variable-density effects at the Boise Hydrogeophysical Research Site

    NASA Astrophysics Data System (ADS)

    Dafflon, B.; Barrash, W.; Cardiff, M.; Johnson, T. C.

    2011-12-01

    Reliable predictions of groundwater flow and solute transport require an estimation of the detailed distribution of the parameters (e.g., hydraulic conductivity, effective porosity) controlling these processes. However, such parameters are difficult to estimate because of the inaccessibility and complexity of the subsurface. In this regard, developments in parameter estimation techniques and investigations of field experiments are still challenging and necessary to improve our understanding and the prediction of hydrological processes. Here we analyze a conservative tracer test conducted at the Boise Hydrogeophysical Research Site in 2001 in a heterogeneous unconfined fluvial aquifer. Some relevant characteristics of this test include: variable-density (sinking) effects because of the injection concentration of the bromide tracer, the relatively small size of the experiment, and the availability of various sources of geophysical and hydrological information. The information contained in this experiment is evaluated through several parameter estimation approaches, including a grid-search-based strategy, stochastic simulation of hydrological property distributions, and deterministic inversion using regularization and pilot-point techniques. Doing this allows us to investigate hydraulic conductivity and effective porosity distributions and to compare the effects of assumptions from several methods and parameterizations. Our results provide new insights into the understanding of variable-density transport processes and the hydrological relevance of incorporating various sources of information in parameter estimation approaches. Among others, the variable-density effect and the effective porosity distribution, as well as their coupling with the hydraulic conductivity structure, are seen to be significant in the transport process. The results also show that assumed prior information can strongly influence the estimated distributions of hydrological properties.

  17. Power spectral density estimation by spline smoothing in the frequency domain

    NASA Technical Reports Server (NTRS)

    Defigueiredo, R. J. P.; Thompson, J. R.

    1972-01-01

    An approach, based on a global averaging procedure, is presented for estimating the power spectrum of a second order stationary zero-mean ergodic stochastic process from a finite length record. This estimate is derived by smoothing, with a cubic smoothing spline, the naive estimate of the spectrum obtained by applying FFT techniques to the raw data. By means of digital computer simulated results, a comparison is made between the features of the present approach and those of more classical techniques of spectral estimation.
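
    A modern analogue of this procedure takes only a few lines: compute the naive FFT periodogram, then fit a cubic smoothing spline to it. The test signal and the spline smoothing factor below are assumptions; smoothing the log-periodogram is one common variant, not necessarily the authors' exact formulation.

```python
import numpy as np
from scipy.signal import periodogram
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(5)
fs = 100.0
t = np.arange(0, 60, 1 / fs)
x = np.sin(2 * np.pi * 5 * t) + rng.normal(0, 1, t.size)   # tone in white noise

freqs, pxx = periodogram(x, fs=fs)                 # naive spectral estimate
# Cubic smoothing spline on the log-periodogram (skip the DC bin);
# s controls the smoothness/fidelity trade-off.
spline = UnivariateSpline(freqs[1:], np.log(pxx[1:]), k=3, s=len(freqs) * 2.0)
smoothed = np.exp(spline(freqs[1:]))

print(f"peak near {freqs[1:][np.argmax(smoothed)]:.1f} Hz")  # expect a peak near 5 Hz
```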

  19. On the use of density kernels for concentration estimations within particle and puff dispersion models

    Microsoft Academic Search

    Peter de Haan

    1999-01-01

    Stochastic particle models are the state-of-science method for modelling atmospheric dispersion. They simulate the released pollutant by a large number of particles. In most particle models the concentrations are estimated by counting the number of particles in a rectangular volume (box counting). The effects of the choice of the width and of the position of these boxes on the estimated

  20. Population Indices Versus Correlated Density Estimates of Black-Footed Ferret Abundance

    Microsoft Academic Search

    Martin B. Grenier; Steven W. Buskirk; Richard Anderson-Sprecher

    2009-01-01

    Estimating abundance of carnivore populations is problematic because individuals typically are elusive, nocturnal, and dispersed across the landscape. Rare or endangered carnivore populations are even more difficult to estimate because of small sample sizes. Considering behavioral ecology of the target species can drastically improve survey efficiency and effectiveness. Previously, abundance of the black-footed ferret (Mustela nigripes) was monitored by spotlighting

  1. The estimation of the bispectral density function and the detection of periodicities in a signal

    Microsoft Academic Search

    T. Subba Rao; M. M. Gabr

    1988-01-01

    In a recent paper Subba Rao and Gabr (J. Time Ser. Anal. (1987), in press) considered the estimation of the spectrum and the inverse spectrum based on the method by Pisarenko (Geophys. J. Roy. Astronom. Soc. 28 (1972), 511-531). The asymptotic properties of these estimates were studied using the properties of Wishart matrices. In this paper we show how the

  2. An H-infinity Fault Detection and Diagnosis Scheme for Discrete Nonlinear Systems Using Output Probability Density Estimation

    SciTech Connect

    Zhang Yumin; Lum, Kai-Yew [Temasek Laboratories, National University of Singapore, Singapore 117508 (Singapore)]; Wang Qingguo [Dept. of Electrical and Computer Engineering, National University of Singapore, Singapore 117576 (Singapore)]

    2009-03-05

    In this paper, an H-infinity fault detection and diagnosis (FDD) scheme for a class of discrete nonlinear system faults using output probability density estimation is presented. Unlike classical FDD problems, the measured output of the system is viewed as a stochastic process and its square-root probability density function (PDF) is modeled with B-spline functions, which leads to a deterministic space-time dynamic model including nonlinearities and uncertainties. A weighted mean value is given as an integral function of the square-root PDF along the space direction, which leads to a function of time only that can be used to construct the residual signal. Thus, the classical nonlinear filter approach can be used to detect and diagnose the fault in the system. A feasible detection criterion is obtained first, and a new H-infinity adaptive fault diagnosis algorithm is further investigated to estimate the fault. A simulation example is given to demonstrate the effectiveness of the proposed approaches.

  3. Correlation for the estimation of the density of fatty acid esters fuels and its implications. A proposed Biodiesel Cetane Index.

    PubMed

    Lapuerta, Magín; Rodríguez-Fernández, José; Armas, Octavio

    2010-09-01

    Biodiesel fuels (methyl or ethyl esters derived from vegetable oils and animal fats) are currently being used as a means to diminish crude oil dependency and to limit the greenhouse gas emissions of the transportation sector. However, their physical properties differ from those of traditional fossil fuels, making their effect on new, electronically controlled vehicles uncertain. Density is one of those properties, and its implications go even further. First, governments are expected to boost the use of high-biodiesel-content blends, but biodiesel fuels are denser than fossil ones; in consequence, their blending proportion is indirectly restricted in order not to exceed the maximum density limit established in fuel quality standards. Second, an accurate knowledge of biodiesel density permits the estimation of other properties such as the Cetane Number, whose direct measurement is complex and presents low repeatability and low reproducibility. In this study we compile densities of methyl and ethyl esters published in the literature, and propose equations to convert them to 15 degrees C and to predict biodiesel density based on chain length and degree of unsaturation. Both expressions were validated for a wide range of commercial biodiesel fuels. Using the latter, we define a term called the Biodiesel Cetane Index, which predicts the Biodiesel Cetane Number with high accuracy. Finally, simple calculations prove that the introduction of high-biodiesel-content blends in the fuel market would force refineries to reduce the density of their fossil fuels. PMID:20599853

  4. Estimating Density Dependence from Population Time Series Using Demographic Theory and Life?History Data

    Microsoft Academic Search

    R. Lande; S. Engen; F. Filli; E. Matthysen; H. Weimerskirch

    2002-01-01

    For populations with a density-dependent life history reproducing at discrete annual intervals, we analyze small or moderate fluctuations in population size around a stable equilibrium, which is applicable to many vertebrate populations. Using a life history having age at maturity a, with stochasticity and density dependence in adult recruitment and mortality, we derive a linearized autoregressive equation with time lags from 1 to

  5. Density estimation and survey validation for swift fox Vulpes velox in Oklahoma

    Microsoft Academic Search

    Marc A. Criffield; Eric C. Hellgren; David M. LESLIE Jr

    2010-01-01

    The swift fox Vulpes velox Say, 1823, a small canid native to shortgrass prairie ecosystems of North America, has been the subject of enhanced conservation and research interest because of restricted distribution and low densities. Previous studies have described distributions of the species in the southern Great Plains, but data on density are required to evaluate indices of relative abundance

  6. Estimation of density and population size and recommendations for monitoring trends of Bahama parrots on Great Abaco and Great Inagua

    USGS Publications Warehouse

    Rivera-Milan, F. F.; Collazo, J.A.; Stahala, C.; Moore, W.J.; Davis, A.; Herring, G.; Steinkamp, M.; Pagliaro, R.; Thompson, J.L.; Bracey, W.

    2005-01-01

    Once abundant and widely distributed, the Bahama parrot (Amazona leucocephala bahamensis) currently inhabits only the Great Abaco and Great Inagua Islands of the Bahamas. In January 2003 and May 2002-2004, we conducted point-transect surveys (a type of distance sampling) to estimate density and population size and make recommendations for monitoring trends. Density ranged from 0.061 (SE = 0.013) to 0.085 (SE = 0.018) parrots/ha and population size ranged from 1,600 (SE = 354) to 2,386 (SE = 508) parrots when extrapolated to the 26,154 ha and 28,162 ha covered by surveys on Abaco in May 2002 and 2003, respectively. Density was 0.183 (SE = 0.049) and 0.153 (SE = 0.042) parrots/ha and population size was 5,344 (SE = 1,431) and 4,450 (SE = 1,435) parrots when extrapolated to the 29,174 ha covered by surveys on Inagua in May 2003 and 2004, respectively. Because parrot distribution was clumped, we would need to survey 213-882 points on Abaco and 258-1,659 points on Inagua to obtain a CV of 10-20% for estimated density. Cluster size and its variability and clumping increased in wintertime, making surveys imprecise and cost-ineffective. Surveys were reasonably precise and cost-effective in springtime, and we recommend conducting them when parrots are pairing and selecting nesting sites. Survey data should be collected yearly as part of an integrated monitoring strategy to estimate density and other key demographic parameters and improve our understanding of the ecological dynamics of these geographically isolated parrot populations at risk of extinction.
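
    The point-transect estimates above follow the usual distance-sampling logic: fit a detection function to the observed distances, convert it to an effectively surveyed area, and divide the count by that area. The sketch below does this for a half-normal detection function with an unbounded-radius likelihood, on simulated distances; it is a simplification of the full analysis.

```python
import numpy as np

def point_transect_density(distances_m: np.ndarray, n_points: int) -> float:
    """Half-normal point-transect density estimate (animals per hectare).

    Under a half-normal detection function g(r) = exp(-r^2 / (2 sigma^2)),
    detection distances follow a Rayleigh pdf, so the MLE of sigma^2 is
    sum(r^2) / (2n), and the effective area per point is 2*pi*sigma^2.
    """
    n = distances_m.size
    sigma2 = np.sum(distances_m ** 2) / (2 * n)
    effective_area_m2 = 2 * np.pi * sigma2 * n_points
    return n / effective_area_m2 * 1e4            # convert m^-2 to ha^-1

rng = np.random.default_rng(6)
distances = rng.rayleigh(scale=40.0, size=120)    # simulated detections, sigma = 40 m
print(f"D ~ {point_transect_density(distances, n_points=300):.3f} parrots/ha")
```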

  7. Estimating the population density of the Asian tapir (Tapirus indicus) in a selectively logged forest in Peninsular Malaysia.

    PubMed

    Rayan, D Mark; Mohamad, Shariff Wan; Dorward, Leejiah; Aziz, Sheema Abdul; Clements, Gopalasamy Reuben; Christopher, Wong Chai Thiam; Traeholt, Carl; Magintan, David

    2012-12-01

    The endangered Asian tapir (Tapirus indicus) is threatened by large-scale habitat loss, forest fragmentation and increased hunting pressure. Conservation planning for this species, however, is hampered by a severe paucity of information on its ecology and population status. We present the first Asian tapir population density estimate from a camera trapping study targeting tigers in a selectively logged forest within Peninsular Malaysia using a spatially explicit capture-recapture maximum likelihood based framework. With a trap effort of 2496 nights, 17 individuals were identified corresponding to a density (standard error) estimate of 9.49 (2.55) adult tapirs/100 km(2) . Although our results include several caveats, we believe that our density estimate still serves as an important baseline to facilitate the monitoring of tapir population trends in Peninsular Malaysia. Our study also highlights the potential of extracting vital ecological and population information for other cryptic individually identifiable animals from tiger-centric studies, especially with the use of a spatially explicit capture-recapture maximum likelihood based framework. PMID:23253368

  8. Estimation of Vegetation Aerodynamic Roughness of Natural Regions Using Frontal Area Density Determined from Satellite Imagery

    NASA Technical Reports Server (NTRS)

    Jasinski, Michael F.; Crago, Richard

    1994-01-01

    Parameterizations of the frontal area index and canopy area index of natural or randomly distributed plants are developed, and applied to the estimation of local aerodynamic roughness using satellite imagery. The formulas are expressed in terms of the subpixel fractional vegetation cover and one non-dimensional geometric parameter that characterizes the plant's shape. Geometrically similar plants and Poisson distributed plant centers are assumed. An appropriate averaging technique to extend satellite pixel-scale estimates to larger scales is provided. The parameterization is applied to the estimation of aerodynamic roughness using satellite imagery for a 2.3 sq km coniferous portion of the Landes Forest near Lubbon, France, during the 1986 HAPEX-Mobilhy Experiment. The canopy area index is estimated first for each pixel in the scene based on previous estimates of fractional cover obtained using Landsat Thematic Mapper imagery. Next, the results are incorporated into Raupach's (1992, 1994) analytical formulas for momentum roughness and zero-plane displacement height. The estimates compare reasonably well to reference values determined from measurements taken during the experiment and to published literature values. The approach offers the potential for estimating regionally variable, vegetation aerodynamic roughness lengths over natural regions using satellite imagery when there exists only limited knowledge of the vegetated surface.

  9. Testing and Estimating Shape-Constrained Nonparametric Density and Regression in the Presence of Measurement Error 1

    PubMed Central

    Carroll, Raymond J.; Delaigle, Aurore; Hall, Peter

    2011-01-01

    In many applications we can expect that, or are interested to know if, a density function or a regression curve satisfies some specific shape constraints. For example, when the explanatory variable, X, represents the value taken by a treatment or dosage, the conditional mean of the response, Y , is often anticipated to be a monotone function of X. Indeed, if this regression mean is not monotone (in the appropriate direction) then the medical or commercial value of the treatment is likely to be significantly curtailed, at least for values of X that lie beyond the point at which monotonicity fails. In the case of a density, common shape constraints include log-concavity and unimodality. If we can correctly guess the shape of a curve, then nonparametric estimators can be improved by taking this information into account. Addressing such problems requires a method for testing the hypothesis that the curve of interest satisfies a shape constraint, and, if the conclusion of the test is positive, a technique for estimating the curve subject to the constraint. Nonparametric methodology for solving these problems already exists, but only in cases where the covariates are observed precisely. However in many problems, data can only be observed with measurement errors, and the methods employed in the error-free case typically do not carry over to this error context. In this paper we develop a novel approach to hypothesis testing and function estimation under shape constraints, which is valid in the context of measurement errors. Our method is based on tilting an estimator of the density or the regression mean until it satisfies the shape constraint, and we take as our test statistic the distance through which it is tilted. Bootstrap methods are used to calibrate the test. The constrained curve estimators that we develop are also based on tilting, and in that context our work has points of contact with methodology in the error-free case. PMID:21687809

  10. Estimating the density of femoral head trabecular bone from hip fracture patients using computed tomography scan data.

    PubMed

    Vivanco, Juan F; Burgers, Travis A; García-Rodríguez, Sylvana; Crookshank, Meghan; Kunz, Manuela; MacIntyre, Norma J; Harrison, Mark M; Bryant, J Tim; Sellens, Rick W; Ploeg, Heidi-Lynn

    2014-06-19

    The purpose of this study was to compare computed tomography density (ρCT), obtained using typical clinical computed tomography scan parameters, to ash density (ρash) for the prediction of densities of femoral head trabecular bone from hip fracture patients. An experimental study was conducted to investigate the relationships between ρash and ρCT and between each of these densities and ρbulk and ρdry. Seven human femoral heads from hip fracture patients were computed tomography-scanned ex vivo, and 76 cylindrical trabecular bone specimens were collected. Computed tomography density was computed from computed tomography images by using a calibration Hounsfield units-based equation, whereas ρbulk, ρdry and ρash were determined experimentally. A large variation was found in the mean Hounsfield units of the bone cores (HUcore), with a constant bias from ρCT to ρash of 42.5 mg/cm(3). Computed tomography and ash densities were linearly correlated (R(2) = 0.55, p < 0.001). It was demonstrated that ρash provided a good estimate of ρbulk (R(2) = 0.78, p < 0.001) and is a strong predictor of ρdry (R(2) = 0.99, p < 0.001). In addition, ρCT was linearly related to ρbulk (R(2) = 0.43, p < 0.001) and ρdry (R(2) = 0.56, p < 0.001). In conclusion, mineral density was an appropriate predictor of ρbulk and ρdry, and ρCT was not a surrogate for ρash. There were linear relationships between ρCT and physical densities; however, following the experimental protocols of this study to determine ρCT, considerable scatter was present in the ρCT relationships. PMID:24947202
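
    The reported relationships are ordinary least-squares calibrations between an image-derived density and the physical reference densities. A sketch is below; the synthetic specimen values (including the 42.5 mg/cm3 bias quoted above) are hypothetical stand-ins for the measured data.

```python
import numpy as np

rng = np.random.default_rng(7)
rho_ash = rng.uniform(100, 500, 76)                  # mg/cm3, hypothetical specimens
rho_ct = rho_ash - 42.5 + rng.normal(0, 60, 76)      # constant bias plus scatter

slope, intercept = np.polyfit(rho_ct, rho_ash, 1)    # least-squares calibration line
pred = slope * rho_ct + intercept
r2 = 1 - np.sum((rho_ash - pred) ** 2) / np.sum((rho_ash - rho_ash.mean()) ** 2)
print(f"rho_ash ~ {slope:.2f} * rho_CT + {intercept:.1f}   (R^2 = {r2:.2f})")
```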

  11. Inverse estimation of parameters for multidomain flow models in soil columns with different macropore densities

    PubMed Central

    Arora, Bhavna; Mohanty, Binayak P.; McGuire, Jennifer T.

    2013-01-01

    Soil and crop management practices have been found to modify soil structure and alter macropore densities. An ability to accurately determine soil hydraulic parameters and their variation with changes in macropore density is crucial for assessing potential contamination from agricultural chemicals. This study investigates the consequences of using consistent matrix and macropore parameters in simulating preferential flow and bromide transport in soil columns with different macropore densities (no macropore, single macropore, and multiple macropores). As used herein, the term "macropore density" is intended to refer to the number of macropores per unit area. A comparison between continuum-scale models including the single-porosity model (SPM), mobile-immobile model (MIM), and dual-permeability model (DPM) that employed these parameters is also conducted. Domain-specific parameters are obtained from inverse modeling of homogeneous (no macropore) and central macropore columns in a deterministic framework and are validated using forward modeling of both low-density (3 macropores) and high-density (19 macropores) multiple-macropore columns. Results indicate that these inversely modeled parameters are successful in describing preferential flow but not tracer transport in both multiple-macropore columns. We believe that lateral exchange between matrix and macropore domains needs better accounting to efficiently simulate preferential transport in the case of dense, closely spaced macropores. Increasing model complexity from SPM to MIM to DPM also improved predictions of preferential flow in the multiple-macropore columns but not in the single-macropore column. This suggests that the use of a more complex model with resolved domain-specific parameters is recommended with an increase in macropore density to generate forecasts with higher accuracy. PMID:24511165

  12. Pattern recognition algorithms for density estimation of asphalt pavement during compaction: a simulation study

    NASA Astrophysics Data System (ADS)

    Shangguan, Pengcheng; Al-Qadi, Imad L.; Lahouar, Samer

    2014-08-01

    This paper presents the application of artificial neural network (ANN) based pattern recognition to extract the density information of asphalt pavement from simulated ground penetrating radar (GPR) signals. This study is part of research efforts into the application of GPR to monitor asphalt pavement density during compaction. The main challenge is to eliminate the effect of roller-sprayed water on GPR signals during compaction and to extract density information accurately. A calibration of the excitation function was conducted to provide an accurate match between the simulated signal and the real signal. A modified electromagnetic mixing model was then used to calculate the dielectric constant of asphalt mixture with water. A large database of GPR responses was generated from pavement models having different air void contents and various surface moisture contents using finite-difference time-domain simulation. Feature extraction was performed to extract density-related features from the simulated GPR responses. Air void contents were divided into five classes representing different compaction statuses. An ANN-based pattern recognition system was trained using the extracted features as inputs and air void content classes as target outputs. Accuracy of the system was tested using test data set. Classification of air void contents using the developed algorithm is found to be highly accurate, which indicates effectiveness of this method to predict asphalt concrete density.
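
    The classification stage, mapping density-related GPR features to discrete air-void classes, can be imitated with any small feed-forward network. Below is a stand-in using scikit-learn's MLPClassifier on synthetic features; the feature construction, class structure, and network size are assumptions, not the paper's trained system.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(8)
n = 1000
air_void_class = rng.integers(0, 5, n)                 # five compaction statuses
# Hypothetical density-related features: class-dependent shift plus noise.
features = (air_void_class[:, None] * np.array([0.8, -0.5, 0.3])
            + rng.normal(0, 1.0, (n, 3)))

X_tr, X_te, y_tr, y_te = train_test_split(features, air_void_class,
                                          test_size=0.25, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(X_tr, y_tr)
print(f"test accuracy: {clf.score(X_te, y_te):.2f}")
```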

  13. An estimate of the electron density in filaments of galaxies at z ~ 0.1

    NASA Astrophysics Data System (ADS)

    Fraser-McKelvie, Amelia; Pimbblet, Kevin A.; Lazendic, Jasmina S.

    2011-08-01

    Most of the baryons in the Universe are thought to be contained within filaments of galaxies, but as yet, no single study has published the observed properties of a large sample of known filaments to determine typical physical characteristics such as temperature and electron density. This paper presents a comprehensive large-scale search conducted for X-ray emission from a population of 41 bona fide filaments of galaxies to determine their X-ray flux and electron density. The sample is generated from the filament catalogue of Pimbblet et al., which is in turn sourced from the two-degree Field Galaxy Redshift Survey (2dFGRS). Since the filaments are expected to be very faint and of very low density, we used stacked ROSAT All-Sky Survey data. We detect a net surface brightness from our sample of filaments of (1.6 ± 0.1) × 10-14 erg cm-2 s-1 arcmin-2 in the 0.9-1.3 keV energy band for 1-keV plasma, which implies an electron density of ne = (4.7 ± 0.2) × 10-4 h100^(1/2) cm-3. Finally, we examine whether a filament's membership in a supercluster leads to an enhanced electron density, as reported by Kull & Böhringer. We suggest it remains unclear if supercluster membership causes such an enhancement.

  14. Accuracy of catch-effort methods for estimating fish density and biomass in streams

    Microsoft Academic Search

    Robin Mahon

    1980-01-01

    At each of 11 localities a section of stream was closed off with nets and an electrofisher was used to estimate the abundance of fishes in the section. Each section was fished 5-7 times, with each fishing constituting one unit of effort. Using the catch-effort methods of Leslie, DeLury and Ricker, separate estimates were made for each species. In several

  15. Hydrological parameter estimations from a conservative tracer test with variable-density effects at the Boise Hydrogeophysical Research Site

    SciTech Connect

    Dafflon, Baptisite; Barrash, Warren; Cardiff, Michael A.; Johnson, Timothy C.

    2011-12-15

    Reliable predictions of groundwater flow and solute transport require an estimation of the detailed distribution of the parameters (e.g., hydraulic conductivity, effective porosity) controlling these processes. However, such parameters are difficult to estimate because of the inaccessibility and complexity of the subsurface. In this regard, developments in parameter estimation techniques and investigations of field experiments are still challenging and necessary to improve our understanding and the prediction of hydrological processes. Here we analyze a conservative tracer test conducted at the Boise Hydrogeophysical Research Site in 2001 in a heterogeneous unconfined fluvial aquifer. Some relevant characteristics of this test include: variable-density (sinking) effects because of the injection concentration of the bromide tracer, the relatively small size of the experiment, and the availability of various sources of geophysical and hydrological information. The information contained in this experiment is evaluated through several parameter estimation approaches, including a grid-search-based strategy, stochastic simulation of hydrological property distributions, and deterministic inversion using regularization and pilot-point techniques. Doing this allows us to investigate hydraulic conductivity and effective porosity distributions and to compare the effects of assumptions from several methods and parameterizations. Our results provide new insights into the understanding of variable-density transport processes and the hydrological relevance of incorporating various sources of information in parameter estimation approaches. Among others, the variable-density effect and the effective porosity distribution, as well as their coupling with the hydraulic conductivity structure, are seen to be significant in the transport process. The results also show that assumed prior information can strongly influence the estimated distributions of hydrological properties.

  16. Strong consistency of nonparametric Bayes density estimation on compact metric spaces with applications to specific manifolds

    PubMed Central

    Bhattacharya, Abhishek; Dunson, David B.

    2012-01-01

    This article considers a broad class of kernel mixture density models on compact metric spaces and manifolds. Following a Bayesian approach with a nonparametric prior on the location mixing distribution, sufficient conditions are obtained on the kernel, prior and the underlying space for strong posterior consistency at any continuous density. The prior is also allowed to depend on the sample size n and sufficient conditions are obtained for weak and strong consistency. These conditions are verified on compact Euclidean spaces using multivariate Gaussian kernels, on the hypersphere using a von Mises-Fisher kernel and on the planar shape space using complex Watson kernels. PMID:22984295

  17. Three-dimensional prostate position estimation with a single x-ray imager utilizing the spatial probability density

    NASA Astrophysics Data System (ADS)

    Rugaard Poulsen, Per; Cho, Byungchul; Langen, Katja; Kupelian, Patrick; Keall, Paul J.

    2008-08-01

    In radiotherapy, target motion during treatment delivery can be managed either by motion inclusive margins or by gating or tracking based on intrafraction target position monitoring. If radio-opaque fiducial markers are used, the required three-dimensional (3D) target position signal for gating or tracking can be obtained by simultaneous acquisition of two x-ray images from different angles. However, most treatment machines do not have such stereoscopic imaging capability. Alternatively, the 3D target position may be estimated with a single imager (monoscopic imaging) although it only provides the projected target position in the two dimensions of the imager plane. In this study, we developed a probability-based method to estimate the unresolved motion component parallel to the imager axis from the projected motion. A 3D Gaussian probability density was assumed for the target position. Projection of the target into a certain point on the imager means that it is located on the ray line that connects this point with the focus point of the x-ray source. The 1D probability density along this line was calculated from the 3D probability density and its expectation value was used as the estimate for the unresolved position. The mathematical framework of the method was developed including analytical expressions for the estimated unresolved component as a function of resolved components and for the estimation uncertainty. Use of the method was demonstrated for the prostate in a simulation study of monoscopic imaging. First, the required 3D probability density was constructed as a population average from a data set consisting of 536 continuous prostate position tracks from 17 patients recorded at 10 Hz. Next, monoscopic imaging at a fixed imaging angle and imaging interval was simulated for each prostate track. Estimated 3D prostate tracks were constructed from the simulated projection images by the proposed method and compared with the actual tracks in order to determine the root-mean-square (rms) error. The simulations were performed with imaging angles in the range from 0° to 180° (relative to vertical) and imaging intervals in the range from 0.1 s (corresponding to continuous imaging) to 600 s (corresponding to no intrafraction imaging). For comparison, simulations were also performed with stereoscopic imaging, where perfect position determination in all three directions was assumed, and with monoscopic imaging without estimation of the unresolved motion, where the motion component along the imager axis was assumed to be zero. For continuous imaging, the accuracy of monoscopic imaging was limited by the uncertainty in the unresolved position estimation. The resulting vector rms error for the population corresponded closely to the theoretically derived estimation uncertainty. The estimation did not improve the accuracy of lateral monoscopic imaging, but it reduced the population rms error from 1.59 mm to 1.11 mm for vertical imaging. This improvement was most prominent for outlying tracks with large unresolved motion. Stereoscopic imaging was clearly superior to monoscopic imaging for high frequency imaging. For less frequent imaging, the accuracy of both monoscopic and stereoscopic imaging decreased due to target motion between images. Since this was most prominent for stereoscopic imaging, the difference in accuracy between monoscopic and stereoscopic imaging decreased with increasing imaging period.
In conclusion, a method for estimation of the 3D target position from 2D projections has been developed and its use has been demonstrated in a simulation study of monoscopic prostate tracking.
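
    The estimation step has a convenient closed form: restricting a 3D Gaussian N(mu, Sigma) to the ray x = p + t*d gives a 1D Gaussian in t whose mean is t* = d'Sigma^-1(mu - p) / (d'Sigma^-1 d), so the estimated position is p + t*d. The sketch below implements exactly this expectation; the prior mean, covariance, and ray geometry are hypothetical numbers, not the population prior built from the 536 tracks.

```python
import numpy as np

def estimate_along_ray(mu, Sigma, p, d):
    """Expected target position given it lies on the ray x = p + t*d,
    with a 3-D Gaussian prior N(mu, Sigma) on position.

    The density restricted to the line is Gaussian in t with mean
    t* = d^T Sigma^-1 (mu - p) / (d^T Sigma^-1 d).
    """
    S_inv = np.linalg.inv(Sigma)
    d = np.asarray(d, float)
    t_star = d @ S_inv @ (mu - p) / (d @ S_inv @ d)
    return p + t_star * d

mu = np.array([0.0, 0.0, 0.0])                # hypothetical prior mean (mm)
Sigma = np.diag([1.5, 4.0, 3.0]) ** 2         # hypothetical prior covariance (mm^2)
p = np.array([2.0, -500.0, 1.0])              # point on the source-to-imager ray
d = np.array([0.0, 1.0, 0.0])                 # imager axis direction
print(estimate_along_ray(mu, Sigma, p, d))    # resolved x, z kept; y estimated
```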

  18. Estimation of boron intake and its relation with bone mineral density in free-living Korean female subjects.

    PubMed

    Kim, Mi-Hyun; Bae, Yun-Jung; Lee, Yoon-Shin; Choi, Mi-Kyeong

    2008-12-01

    In this study, the status of boron intake was evaluated and its relation with bone mineral density was examined among free-living female subjects in Korea. Boron intake was estimated using a database of the boron content of foods frequently consumed by Koreans; bone mineral density measurements, anthropometric measurements, and dietary intake surveys were obtained for 134 adult females in order to evaluate boron intake as a nutrient that may supplement the low level of calcium intake and to verify its relationship with bone mineral density. Average age, height, and weight of the subjects were 40.84 years, 157.62 cm, and 59.70 kg, respectively. Average bone mineral density of lumbar spine L1-L4 and of the femoral neck were 0.92 g/cm(2) and 0.80 g/cm(2), respectively. Average intakes of energy and boron per day were 6,538.53 kJ and 926.94 microg. Intake of boron through vegetables, fruits, and cereals accounted for 61.72% of the overall boron intake. The food item that contributed most to daily boron intake was rice. Also, 65.41% of overall boron intake was from 30 varieties of other food items, such as soybean paste, soybeans, red beans, watermelons, oriental melons, pears, Chinese cabbage Kimchi, soybean sprouts and soybean milk. Boron intake did not show a significant relation to bone mineral density in the lumbar vertebrae or femoral region. In summary, the average daily intake of boron was 926.94 microg and did not display a significant relation to bone mineral density in 134 free-living female subjects. Continuous evaluation of boron consumption in more diverse populations will need to be conducted in the future. PMID:18575817

  19. The implementation of binned Kernel density estimation to determine open clusters' proper motions: validation of the method

    NASA Astrophysics Data System (ADS)

    Priyatikanto, R.; Arifyanto, M. I.

    2015-01-01

    Stellar membership determination of an open cluster is an important process to perform before further analysis. Basically, there are two classes of membership determination methods: parametric and non-parametric. In this study, an alternative non-parametric method based on binned kernel density estimation that accounts for measurement errors (called BKDE-e) is proposed. This method is applied to proper-motion data to determine cluster membership kinematically and to estimate the average proper motion of the cluster. Monte Carlo simulations show that the average proper-motion determination using this proposed method is statistically more accurate than the ordinary kernel density estimator (KDE). By including measurement errors in the calculation, the mode location from the resulting density estimate is less sensitive to non-physical or stochastic fluctuations compared to ordinary KDE, which excludes measurement errors. For a typical mean measurement error of 7 mas/yr, BKDE-e suppresses the potential for miscalculation by a factor of two compared to KDE. With a median accuracy of about 93%, the BKDE-e method has accuracy comparable to a parametric method (the modified Sanders algorithm). Application to real data from The Fourth USNO CCD Astrograph Catalog (UCAC4), in particular to NGC 2682, is also performed. The mode of the member-star distribution on the vector point diagram is located at μα cos δ = -9.94 ± 0.85 mas/yr and μδ = -4.92 ± 0.88 mas/yr. Although BKDE-e does not outperform the parametric approach, it offers a new way of performing membership analysis that is expandable to astrometric and photometric data or even to binary cluster searches.
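
    One way to read the BKDE-e idea is that each data point's kernel is broadened by that point's own measurement error before the density is accumulated on a grid. The sketch below does this for one proper-motion component with hypothetical cluster-plus-field data; it is an interpretation of the general approach, not the authors' exact binned algorithm.

```python
import numpy as np

def bkde_e(values, errors, grid, bandwidth):
    """KDE on a grid where each point's Gaussian kernel is broadened by its
    measurement error: total width^2 = bandwidth^2 + error_i^2."""
    density = np.zeros_like(grid)
    for v, e in zip(values, errors):
        s2 = bandwidth ** 2 + e ** 2
        density += np.exp(-0.5 * (grid - v) ** 2 / s2) / np.sqrt(2 * np.pi * s2)
    return density / len(values)

rng = np.random.default_rng(9)
members = rng.normal(-9.9, 1.0, 150)           # cluster stars (mas/yr), hypothetical
field = rng.normal(0.0, 15.0, 350)             # field stars
pm = np.concatenate([members, field])
err = np.full(pm.size, 7.0)                    # typical 7 mas/yr measurement errors

grid = np.linspace(-40, 40, 801)
dens = bkde_e(pm, err, grid, bandwidth=2.0)
print(f"mode at {grid[np.argmax(dens)]:.1f} mas/yr")   # near the cluster motion
```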

  20. Reliability of Bioelectrical Impedance Analysis for Estimating Whole-Fish Energy Density and Percent Lipids

    Microsoft Academic Search

    Steven A. Pothoven; Stuart A. Ludsin; Tomas O. Höök; David L. Fanslow; Doran M. Mason; Paris D. Collingsworth; Jason J. Van Tassell

    2008-01-01

    We evaluated bioelectrical impedance analysis (BIA) as a nonlethal means of predicting energy density and percent lipids for three fish species: Yellow perch Perca flavescens, walleye Sander vitreus, and lake whitefish Coregonus clupeaformis. Although models that combined BIA measures with fish wet mass provided strong predictions of total energy, total lipids, and total dry mass for whole fish, including BIA

  1. PELLET COUNT INDICES COMPARED TO MARK-RECAPTURE ESTIMATES FOR EVALUATING SNOWSHOE HARE DENSITY

    Microsoft Academic Search

    L. SCOTT MILLS; KAREN E. HODGES

Snowshoe hares (Lepus americanus) undergo remarkable cycles and are the primary prey base of Canada lynx (Lynx canadensis), a carnivore recently listed as threatened in the contiguous United States. Efforts to evaluate hare densities using pellets have traditionally been based on regression equations developed in the Yukon, Canada. In western Montana, we evaluated whether or not local regression equations

  2. Dynamics of photosynthetic photon flux density (PPFD) and estimates in coastal northern California

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The seasonal trends and diurnal patterns of Photosynthetically Active Radiation (PAR) were investigated in the San Francisco Bay Area of Northern California from March through August in 2007 and 2008. During these periods, the daily values of PAR flux density (PFD), energy loading with PAR (PARE), a...

  3. Estimating low-density snowshoe hare populations using fecal pellet counts

    E-print Network

(1 m2) circular plots (metre-circle plots). Metre-circle plots had higher pellet prevalence, lower circular plots required less establishment time, and observer training reduced the pellet-count bias than did metre-circle plots. The relationship between pellet density and hare number may have been

  4. An Evaluation of Linearly Combining Density Estimators via Stacking. Technical Report No. 98--25

    E-print Network

    Smyth, Padhraic

Padhraic Smyth, Information and Computer Science Department, University of California, Irvine (smyth@ics.uci.edu); David Wolpert, NASA Ames. ...indicate a particular model, i.e., a particular mapping taking a parameter vector to a density.

  5. New Method of Probability Density Estimation with Application to Mutual Information Based Image Registration

    Microsoft Academic Search

    Ajit Rajwade; Arunava Banerjee; Anand Rangarajan

    2006-01-01

We present a new, robust and computationally efficient method for estimating the probability density of the intensity values in an image. Our approach makes use of a continuous representation of the image and develops a relation between the probability density at a particular intensity value and image gradients along the level sets at that value. Unlike traditional sample-based methods such as histograms, min-

  6. Task-Oriented Comparison of Power Spectral Density Estimation Methods for Quantifying Acoustic Attenuation in Diagnostic Ultrasound Using a Reference Phantom Method

    PubMed Central

    Rosado-Mendez, Ivan M.; Nam, Kibo; Hall, Timothy J.; Zagzebski, James A.

    2013-01-01

Reported here is a phantom-based comparison of methods for determining the power spectral density of ultrasound backscattered signals. Those power spectral density values are then used to estimate parameters describing α(f), the frequency dependence of the acoustic attenuation coefficient. Phantoms were scanned with a clinical system equipped with a research interface to obtain radiofrequency echo data. Attenuation, modeled as a power law α(f) = α0·f^β, was estimated using a reference phantom method. The power spectral density was estimated using the short-time Fourier transform (STFT), Welch's periodogram, and Thomson's multitaper technique, and performance was analyzed when limiting the size of the parameter estimation region. Errors were quantified by the bias and standard deviation of the α0 and β estimates, and by the overall power-law fit error. For parameter estimation regions larger than ~34 pulse lengths (~1 cm for this experiment), an overall power-law fit error of 4% was achieved with all spectral estimation methods. With smaller parameter estimation regions, as in parametric image formation, the bias and standard deviation of the α0 and β estimates depended on the size of the parameter estimation region. Here the multitaper method reduced the standard deviation of the α0 and β estimates compared with the other techniques. Results provide guidance for choosing methods for estimating the power spectral density in quantitative ultrasound. PMID:23858055
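
    As a hedged illustration of two of the spectral estimators named above, the following sketch computes Welch's averaged periodogram and a simple unweighted Thomson-style multitaper estimate for a synthetic echo segment. The sampling rate, segment length, and taper parameters are illustrative assumptions; the paper's reference-phantom processing is not reproduced here.

        import numpy as np
        from scipy import signal

        fs = 40e6                                        # sampling rate, Hz (assumed)
        rf = np.random.default_rng(1).normal(size=4096)  # stand-in for RF echo data

        # Welch: average periodograms of overlapping windowed segments.
        f_w, p_w = signal.welch(rf, fs=fs, window="hann", nperseg=512)

        # Multitaper: average periodograms over orthogonal DPSS tapers
        # (unweighted; one-sided scaling factors omitted for brevity).
        tapers = signal.windows.dpss(rf.size, NW=4, Kmax=7)
        p_mt = np.mean([np.abs(np.fft.rfft(w * rf)) ** 2 for w in tapers], axis=0) / fs
        f_mt = np.fft.rfftfreq(rf.size, d=1 / fs)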

  7. Primate diversity, habitat preferences, and population density estimates in Noel Kempff Mercado National Park, Santa Cruz Department, Bolivia.

    PubMed

    Wallace, R B; Painter, R L; Taber, A B

    1998-01-01

    This report documents primate communities at two sites within Noel Kempff Mercado National Park in northeastern Santa Cruz Department, Bolivia. Diurnal line transects and incidental observations were employed to survey two field sites, Lago Caiman and Las Gamas, providing information on primate diversity, habitat preferences, relative abundance, and population density. Primate diversity at both sites was not particularly high, with six observed species: Callithrix argentata melanura, Aotus azarae, Cebus apella, Alouatta caraya, A. seniculus, and Ateles paniscus chamek. Cebus showed no significant habitat preferences at Lago Caiman and was also more generalist in use of forest strata, whereas Ateles clearly preferred the upper levels of structurally tall forest. Callithrix argentata melanura was rarely encountered during surveys at Lago Caiman, where it preferred low vine forest. Both species of Alouatta showed restricted habitat use and were sympatric in Igapo forest in the Lago Caiman area. The most abundant primate at both field sites was Ateles, with density estimates reaching 32.1 individuals/km2 in the lowland forest at Lago Caiman, compared to 14.1 individuals/km2 for Cebus. Both Ateles and Cebus were absent from smaller patches of gallery forest at Las Gamas. These densities are compared with estimates from other Neotropical sites. The diversity of habitats and their different floristic composition may account for the numerical dominance of Ateles within the primate communities at both sites. PMID:9802511

  8. When bulk density methods matter: Implications for estimating soil organic carbon pools in rocky soils

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Resolving uncertainty in the carbon cycle is paramount to refining climate predictions. Soil organic carbon (SOC) is a major component of terrestrial C pools, and accuracy of SOC estimates are only as good as the measurements and assumptions used to obtain them. Dryland soils account for a substanti...

  9. A Weighted k-Nearest Neighbor Density Estimate for Geometric Inference

    E-print Network

    Biau, Gérard

The problem of recovering topological and geometric information from multivariate data has attracted ... In this stochastic framework, the problem of estimating the support of µ and its geometric properties (e.g., ... ball of small radius around data points inside the support set (Devroye and Wise [19]). Object
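
    For orientation, the following sketch implements the plain (unweighted) k-nearest-neighbor density estimate that the weighted variant in this paper refines: the density at a query point is k divided by n times the volume of the ball reaching the k-th neighbor. The choice of k and the synthetic data are illustrative assumptions.

        import numpy as np
        from math import gamma, pi
        from scipy.spatial import cKDTree

        def knn_density(data, query, k=10):
            n, d = data.shape
            r = cKDTree(data).query(query, k=k)[0][:, -1]  # distance to k-th neighbor
            unit_ball = pi ** (d / 2) / gamma(d / 2 + 1)   # volume of the unit d-ball
            return k / (n * unit_ball * r ** d)

        pts = np.random.default_rng(2).normal(size=(1000, 2))
        print(knn_density(pts, np.array([[0.0, 0.0]]), k=10))  # ~ 1/(2*pi) at the mode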

  10. Techniques and Technology Article Road-Based Surveys for Estimating Wild Turkey Density

    E-print Network

    Butler, Matthew J.

(Meleagris gallopavo). We used inflatable turkey decoys during autumn (Aug-Nov) and winter (Dec-Mar) 2003 ... on line-transect-based distance sampling from roads for estimating wild turkey (Meleagris gallopavo) density. Keywords: detectability, distance sampling, line transect, Meleagris gallopavo intermedia, simulation, Texas, trends, wild turkey.

  11. Individual movements and population density estimates for moray eels on a Caribbean coral reef

    Microsoft Academic Search

    R. W. Abrams; M. W. Schein

    1986-01-01

    Observations of moray eel (Muraenidae) distribution made on a Caribbean coral reef are discussed in the context of long term population trends. Observations of eel distribution made using SCUBA during 1978, 1979–1980, and 1984 are compared and related to the occurrence of a hurricane in 1979. An estimate of the mean standing stock of moray eels is presented. The degree

  12. Density and Hazard Rate Estimation for Right Censored Data Using Wavelet Methods

    Microsoft Academic Search

    Anestis Antoniadis; Gérard Grégoire; Guy Nason

    1997-01-01

This paper describes a wavelet method for the estimation of density and hazard rate functions from randomly right censored data. We adopt a nonparametric approach in assuming that the density and hazard rate have no specific parametric form. The method is based on dividing the time axis into a dyadic number of intervals and then counting the number of events within each interval. The number of
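
    The following sketch follows the recipe in the abstract: bin the observed times into a dyadic number of intervals, wavelet-shrink the (variance-stabilized) counts, and renormalize to a density. The wavelet, threshold, and square-root stabilization are assumptions for illustration, not the authors' exact estimator, and censoring is ignored here.

        import numpy as np
        import pywt

        t = np.random.default_rng(3).exponential(1.0, 500)  # event times (no censoring)
        counts, edges = np.histogram(t, bins=256)           # dyadic number of bins
        y = np.sqrt(counts + 0.25)                          # variance stabilization

        coeffs = pywt.wavedec(y, "db4", level=5)
        thr = 0.5 * np.sqrt(2.0 * np.log(y.size))           # universal threshold, sd ~ 1/2
        coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
        smooth = pywt.waverec(coeffs, "db4")[: counts.size] ** 2  # back to count scale

        width = edges[1] - edges[0]
        density = np.clip(smooth, 0.0, None)
        density /= density.sum() * width                    # integrates to one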

  13. Density estimates of the domestic vector of Chagas disease, Rhodnius prolixus Stål (Hemiptera: Reduviidae), in rural houses in Venezuela.

    PubMed Central

    Rabinovich, J. E.; Gürtler, R. E.; Leal, J. A.; Feliciangeli, D.

    1995-01-01

We report the use of the timed manual method, routinely employed as an indicator of the relative abundance of domestic triatomine bugs, to estimate their absolute density in houses. A team of six people collected Rhodnius prolixus Stål bugs from the walls and roofs of 14 typical palm-leaf rural houses located in Cojedes, Venezuela, spending 40 minutes searching in each house. One day after these manual collections, all the houses were demolished and the triatomine bugs were identified by instar and counted. Linear regression analyses of the number of R. prolixus collected over 4 man-hours against the census counts obtained by house demolition indicated that the fit of the data by instar (stage II-adult) and place of capture (roof versus palm walls versus mud walls) was satisfactory. The slopes of the regressions were interpreted as a measure of "catchability" (probability of capture). Catchability increased with developmental stage (ranging from 11.2% in stage II to 38.7% in adults), probably reflecting the increasing size and visibility of the bugs as they developed. The catchability on palm walls was higher than that for roofs or mud walls, increasing from 1.3% and 3.0% in stage II to 13.4% and 14.0% in adults, respectively. We also report regression equations for converting field estimates of timed manual collections of R. prolixus into absolute density estimates. PMID:7614667

  15. Estimates of maximum energy density of cosmological gravitational-wave backgrounds

    NASA Astrophysics Data System (ADS)

    Giblin, John T.; Thrane, Eric

    2014-11-01

The recent claim by BICEP2 of evidence for primordial gravitational waves has focused interest on the potential for early-Universe cosmology using gravitational waves. In addition to cosmic microwave background detectors, efforts are underway to carry out gravitational-wave astronomy with pulsar timing arrays, space-based detectors, and terrestrial detectors. These efforts will probe a wide range of times in the early Universe, during which backgrounds may have been produced through processes such as phase transitions or preheating. We derive a rule of thumb (not so strong as an upper limit) governing the maximum energy density of cosmological backgrounds. For most scenarios, we expect the energy density spectrum to peak at values of Ω_gw(f) ~ 10^(-12±2). We discuss the applicability of this rule and the implications for gravitational-wave astronomy.

  16. Estimating the effective density of engineered nanomaterials for in vitro dosimetry

    NASA Astrophysics Data System (ADS)

    Deloid, Glen; Cohen, Joel M.; Darrah, Tom; Derk, Raymond; Rojanasakul, Liying; Pyrgiotakis, Georgios; Wohlleben, Wendel; Demokritou, Philip

    2014-03-01

    The need for accurate in vitro dosimetry remains a major obstacle to the development of cost-effective toxicological screening methods for engineered nanomaterials. An important key to accurate in vitro dosimetry is the characterization of sedimentation and diffusion rates of nanoparticles suspended in culture media, which largely depend upon the effective density and diameter of formed agglomerates in suspension. Here we present a rapid and inexpensive method for accurately measuring the effective density of nano-agglomerates in suspension. This novel method is based on the volume of the pellet obtained by benchtop centrifugation of nanomaterial suspensions in a packed cell volume tube, and is validated against gold-standard analytical ultracentrifugation data. This simple and cost-effective method allows nanotoxicologists to correctly model nanoparticle transport, and thus attain accurate dosimetry in cell culture systems, which will greatly advance the development of reliable and efficient methods for toxicological testing and investigation of nano-bio interactions in vitro.

  17. ON THE IMPOSSIBILITY OF ESTIMATING DENSITIES IN THE EXTREME TAIL Jan Beirlant

    E-print Network

    Devroye, Luc

and the total variation distance. In this paper we show that without such a domain of attraction condition, considering the total variation distance ∫ |f_n(x) - g_n(x)| dx, respectively the uniform metric sup_x |f_n(x) - g_n(x)|, ... differentiable density f such that inf_n E{ ∫ |f_n(x) - g_n(x)| dx } ≥ 1/49. Thus, in the total variation

  18. Moment tests for density forecast evaluation in the presence of parameter estimation uncertainty

    Microsoft Academic Search

    2011-01-01

Density forecast (DF) possesses appealing properties when it is correctly specified for the true conditional distribution. Although a number of parametric specification tests have been introduced for DF evaluation (DFE) in the parameter-free context, econometric DF models are typically parameter-dependent. In this paper, we first use a generalized probability integral transformation-based moment test to unify these existing tests, and

  19. Comparison of volumetric breast density estimations from mammography and thorax CT

    NASA Astrophysics Data System (ADS)

    Geeraert, N.; Klausz, R.; Cockmartin, L.; Muller, S.; Bosmans, H.; Bloch, I.

    2014-08-01

Breast density has become an important issue in current breast cancer screening, both as a recognized risk factor for breast cancer and because it decreases screening efficiency through the masking effect. Different qualitative and quantitative methods have been proposed to evaluate area-based breast density and volumetric breast density (VBD). We propose a validation method comparing the computation of VBD obtained from digital mammographic images (VBDMX) with the computation of VBD from thorax CT images (VBDCT). We computed VBDMX by applying a conversion function to the pixel values in the mammographic images, based on models determined from images of breast-equivalent material. VBDCT is computed from the average Hounsfield Unit (HU) over the manually delineated breast volume in the CT images. This average HU is then compared to the HU of adipose and fibroglandular tissues from patient images. The VBDMX method was applied to 663 mammographic patient images taken on two Siemens Inspiration systems (hospL) and one GE Senographe Essential system (hospJ). For the comparison study, we collected images from patients who had a thorax CT and a mammography screening exam within the same year. In total, thorax CT images corresponding to 40 breasts (hospL) and 47 breasts (hospJ) were retrieved. Over the 663 mammographic images, the median VBDMX was 14.7%. The density distribution and the inverse correlation between VBDMX and breast thickness were found as expected. The average difference between VBDMX and VBDCT is smaller for hospJ (4%) than for hospL (10%). This study shows the possibility of comparing VBDMX with the VBD from thorax CT exams, without additional examinations. In spite of the limitations caused by poorly defined breast limits, the calibration of mammographic images to local VBD provides opportunities for further quantitative evaluations.

  20. PELLET COUNT INDICES COMPARED TO MARK–RECAPTURE ESTIMATES FOR EVALUATING SNOWSHOE HARE DENSITY

    Microsoft Academic Search

    L. SCOTT MILLS; PAUL C. GRIFFIN; KAREN E. HODGES; KEVIN McKELVEY; LEN RUGGIERO; TODD ULIZIO

    2005-01-01

Snowshoe hares (Lepus americanus) undergo remarkable cycles and are the primary prey base of Canada lynx (Lynx canadensis), a carnivore recently listed as threatened in the contiguous United States. Efforts to evaluate hare densities using pellets have traditionally been based on regression equations developed in the Yukon, Canada. In western Montana, we evaluated whether or not local regression equations performed better

  1. Estimates of lightning ground flash density in Australia and its relationship to thunder-days

    Microsoft Academic Search

    Y. Kuleshov; E. R. Jayaratne

    2004-01-01

A method of deriving lightning ground flash density from CIGRE lightning flash counter registrations, based on the detection efficiency of the instrument and independent of the latitudinal variation of the cloud flash-to-ground flash ratio, is presented. Using this method, the annual mean ground flash densities, Ng, over a period of up to 22 years were recalculated from the counter registrations for 17 selected Australian sites. The results

  2. Breast segmentation and density estimation in breast MRI: a fully automatic framework.

    PubMed

    Gubern-Mérida, Albert; Kallenberg, Michiel; Mann, Ritse M; Martí, Robert; Karssemeijer, Nico

    2015-01-01

Breast density measurement is an important aspect in breast cancer diagnosis as dense tissue has been related to the risk of breast cancer development. The purpose of this study is to develop a method to automatically compute breast density in breast MRI. The framework is a combination of image processing techniques to segment breast and fibroglandular tissue. Intra- and interpatient signal intensity variability is initially corrected. The breast is segmented by automatically detecting body-breast and air-breast surfaces. Subsequently, fibroglandular tissue is segmented in the breast area using expectation-maximization. A dataset of 50 cases with manual segmentations was used for evaluation. Dice similarity coefficient (DSC), total overlap, false negative fraction (FNF), and false positive fraction (FPF) are used to report similarity between automatic and manual segmentations. For breast segmentation, the proposed approach obtained DSC, total overlap, FNF, and FPF values of 0.94, 0.96, 0.04, and 0.07, respectively. For fibroglandular tissue segmentation, we obtained DSC, total overlap, FNF, and FPF values of 0.80, 0.85, 0.15, and 0.22, respectively. The method is relevant for researchers investigating breast density as a risk factor for breast cancer, and all the described steps can also be applied in computer-aided diagnosis systems. PMID:25561456
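
    Since the Dice similarity coefficient is the headline overlap metric above, a small helper shows how it is computed from binary masks; the toy masks are illustrative.

        import numpy as np

        def dice(a, b):
            """Dice similarity coefficient, 2|A n B| / (|A| + |B|)."""
            a, b = a.astype(bool), b.astype(bool)
            denom = a.sum() + b.sum()
            return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

        auto = np.zeros((10, 10), dtype=bool); auto[2:7, 2:7] = True
        manual = np.zeros((10, 10), dtype=bool); manual[3:8, 3:8] = True
        print(round(dice(auto, manual), 2))  # two 5x5 squares overlapping in 4x4 -> 0.64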

  3. A New Estimate of the Star Formation Rate Density in the HDFN

    Microsoft Academic Search

    M. Massarotti; A. Iovino

    2001-01-01

We measured the evolution of the SFRD in the HDFN by comparing the available multi-color information on galaxy SEDs with a library of model fluxes, provided by the codes of Bruzual & Charlot (1993, ApJ 405, 538) and Leitherer et al. (1999, ApJS 123, 3). For each HDFN galaxy the best fitting template was used to estimate the redshift, the amount

  4. Using kernel density estimates to investigate lymphatic filariasis in northeast Brazil

    PubMed Central

    Medeiros, Zulma; Bonfim, Cristine; Brandão, Eduardo; Netto, Maria José Evangelista; Vasconcellos, Lucia; Ribeiro, Liany; Portugal, José Luiz

    2012-01-01

After more than 10 years of the Global Program to Eliminate Lymphatic Filariasis (GPELF) in Brazil, advances have been seen, but the endemic disease persists as a public health problem. The aim of this study was to describe the spatial distribution of lymphatic filariasis in the municipality of Jaboatão dos Guararapes, Pernambuco, Brazil. An epidemiological survey was conducted in the municipality, and positive filariasis cases identified in this survey were georeferenced in point form using GPS. A kernel intensity estimator was applied to identify clusters with greater intensity of cases. We examined 23,673 individuals, and 323 individuals with microfilaremia were identified, representing a mean prevalence rate of 1.4%. Around 88% of the districts surveyed presented cases of filarial infection, with prevalences of 0-5.6%. The male population was more affected by the infection, with 63.8% of the cases (P<0.005). Positive cases were found in all age groups examined. The kernel intensity estimator identified the areas of greatest and least intensity of filarial infection cases. The case distribution was heterogeneous across the municipality. The kernel estimator identified spatial clusters of cases, thus indicating locations with greater intensity of transmission. The main advantage of this type of analysis lies in its ability to rapidly and easily show areas with the highest concentration of cases, thereby contributing towards planning, monitoring, and surveillance of filariasis elimination actions. Incorporation of geoprocessing and spatial analysis techniques constitutes an important tool for use within the GPELF. PMID:22943547
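
    In the spirit of the kernel intensity estimator used in the study, the sketch below builds a smooth intensity surface over georeferenced case points; the synthetic coordinates, the Gaussian kernel, and its default bandwidth are assumptions, not the study's configuration.

        import numpy as np
        from scipy.stats import gaussian_kde

        rng = np.random.default_rng(4)
        xy = rng.normal(loc=[[0.0], [0.0]], scale=1.0, size=(2, 323))  # case locations

        kde = gaussian_kde(xy)
        gx, gy = np.mgrid[-3:3:100j, -3:3:100j]
        density = kde(np.vstack([gx.ravel(), gy.ravel()])).reshape(gx.shape)
        intensity = density * xy.shape[1]   # scale density by case count -> intensity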

  5. Estimator

    NSDL National Science Digital Library

    Practice estimation skills by determining the number of objects, the length of a line, or the area of a shape. Parameters: error tolerance of estimate. Estimator is one of the Interactivate assessment explorers.

  6. H2/HD molecular clouds in the early universe. An independent means of estimating the baryon density of the universe

    NASA Astrophysics Data System (ADS)

    Ivanchik, A. V.; Balashev, S. A.; Varshalovich, D. A.; Klimenko, V. V.

    2015-02-01

A review of molecular hydrogen H2 absorption systems identified in quasar spectra is presented. The analysis of such systems allows the determination of the chemical composition of the interstellar medium and the physical conditions existing in the early Universe, about 10-12 billion years ago. To date, 27 molecular hydrogen systems have been found, nine of which show HD lines. An independent method for estimating the baryon density of the Universe is described, based on the analysis of the relative abundances of H2 and HD molecules. Among known H2/HD systems, only the two systems detected in the Q1232+082 and Q0812+320 quasar spectra satisfy the condition of self-shielding of the absorbing cloud (log ...). Under these conditions the local molecular fraction can reach unity, making it possible to estimate the relative deuterium abundance D/H using the ratio of the HD and H2 column densities, N(HD)/2N(H2). The analysis of the column densities for these two systems yields D/H = HD/2H2 = (3.26 ± 0.29) × 10^-5. Comparison of this result with the prediction of BBN theory for D/H enables the determination of the baryon density of the Universe: Ω_b h^2 = 0.0194 ± 0.0011. This is somewhat lower than the values Ω_b h^2 = 0.0224 ± 0.0012 and 0.0221 ± 0.0003 obtained using other independent methods: (i) analysis of the relative D and H abundances in Lyman Limit Systems at high redshifts, and (ii) analysis of the anisotropy of the cosmic microwave background. Nevertheless, all three values agree within their 2σ errors.

  7. Acculturation and ethnic variations in breast cancer risk factors, Gail model risk estimates and mammographic breast density.

    PubMed

    Tehranifar, P; Protacio, A; Akinyemiju, T F; Schmitt, K; Desperito, E; Terry, M B

    2015-04-01

Breast cancer (BC) incidence varies across countries and across US ethnic groups. US immigrants often exhibit an intermediate level of risk between those observed in their birth country and in the US. This transition of risk may partly be explained by the uptake of risk factors associated with acculturation. Investigating whether immigration and acculturation risk patterns are similarly reflected in disease biomarkers can provide insight into mechanisms underlying the transition of risk. We examined differences in the distribution of BC risk factors, absolute risk estimates, and mammographic density by ethnicity and acculturation. We used data from 366 women recruited from an urban mammography clinic (ages 40-64 years) to compare BC risk factors and Gail model risk estimates across US-born white, US-born African American (AA), US-born Hispanic, and foreign-born Hispanic women. We used linear regression models to examine the associations of immigration and acculturation indicators (e.g., generational status, age and life stage at immigration, language use) with percent density and dense breast area, measured from mammograms. Differences in BC risk factors were mostly observed across ethnic groups, with white women having higher reproductive and lifestyle risk profiles (e.g., lower parity, older age at first birth, higher alcohol intake), Hispanics having shorter height, and AAs having larger body mass index (BMI) and waist circumference. The average lifetime and 5-year Gail estimates were highest in whites (11.4% and 1.4%), intermediate in AAs (7.2% and 1.0%), and lowest in Hispanics (6.9% and 0.7% in US-born and 6.6% and 0.8% in foreign-born). After adjusting for age, BMI, and parity, lower linguistic acculturation, shorter residence in the US, and later age at immigration were associated with lower percent density (all p values for trend across acculturation levels <0.05); e.g., monolingual Spanish and bilingual speakers respectively had on average 5.6% (95% CI, -10.0 to -1.3) and 3.8% (95% CI, -8.1 to 0.4) lower percent density than monolingual English speakers. Similar but more modest associations were observed for dense area. The increase in BC risk after immigration to the US and subsequent acculturation may operate via influences on mammographic density in Hispanic women. PMID:25834154

  8. Evaluation of techniques for estimating the power spectral density of RR-intervals under paced respiration conditions.

    PubMed

    Schaffer, Thorsten; Hensel, Bernhard; Weigand, Christian; Schüttler, Jürgen; Jeleazcov, Christian

    2014-10-01

Heart rate variability (HRV) analysis is increasingly used in anaesthesia and intensive care monitoring of spontaneously breathing and mechanically ventilated patients. In the frequency domain, different estimation methods for the power spectral density (PSD) of RR-intervals lead to different results. Therefore, we investigated the PSD estimates of the fast Fourier transform (FFT), autoregressive modeling (AR), and the Lomb-Scargle periodogram (LSP) for 25 young healthy subjects subjected to metronomic breathing. The optimum method for determination of HRV spectral parameters under paced respiration was identified by evaluating the relative error (RE) and the root mean square relative error (RMSRE) for each breathing frequency (BF) and spectral estimation method. Additionally, the sympathovagal balance was investigated by calculating the low frequency/high frequency (LF/HF) ratio. Above 7 breaths per minute, all methods showed a significant increase in LF/HF ratio with increasing BF. On average, the RMSRE of the FFT was lower than for the LSP and AR. Therefore, under paced respiration conditions, estimating the RR-interval PSD using the FFT is recommended. PMID:23508826
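
    Because RR-interval series are sampled unevenly in time (one sample per heartbeat), the Lomb-Scargle periodogram applies directly without resampling. The sketch below computes it for a synthetic tachogram and forms an LF/HF ratio; the bands and the synthetic data are illustrative, not the study's protocol.

        import numpy as np
        from scipy.signal import lombscargle

        rng = np.random.default_rng(5)
        rr = 0.8 + 0.05 * rng.standard_normal(300)      # RR intervals, s
        t = np.cumsum(rr)                               # beat times (uneven grid)
        x = rr - rr.mean()                              # detrended tachogram

        freqs = np.linspace(0.04, 0.4, 200)             # LF and HF bands, Hz
        pgram = lombscargle(t, x, 2.0 * np.pi * freqs)  # expects angular frequencies

        lf = pgram[(freqs >= 0.04) & (freqs < 0.15)].sum()
        hf = pgram[(freqs >= 0.15) & (freqs <= 0.4)].sum()
        print("LF/HF:", lf / hf)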

  9. Estimating the density scaling exponent of viscous liquids from specific heat and bulk modulus data

    E-print Network

    Ulf R. Pedersen; Tina Hecksher; Bo Jakobsen; Thomas B. Schrøder; Nicoletta Gnan; Nicholas P. Bailey; Jeppe C. Dyre

    2009-04-22

It was recently shown by computer simulations that a large class of liquids exhibits strong correlations in their thermal fluctuations of virial and potential energy [Pedersen et al., Phys. Rev. Lett. 100, 015701 (2008)]. Among organic liquids the class of strongly correlating liquids includes van der Waals liquids, but excludes ionic and hydrogen-bonding liquids. The present note focuses on the density scaling of strongly correlating liquids, i.e., the fact that their relaxation time τ at different densities ρ and temperatures T collapses to a master curve according to the expression τ ∝ F(ρ^γ/T) [Schrøder et al., arXiv:0803.2199]. We here show how to calculate the exponent γ from bulk modulus and specific heat data, either measured as functions of frequency in the metastable liquid or extrapolated from the glass and liquid phases to a common temperature (close to the glass transition temperature). Thus an exponent defined from the response to highly nonlinear parameter changes may be determined from linear-response measurements.

  11. Estimating sap flux densities in date palm trees using the heat dissipation method and weighing lysimeters.

    PubMed

    Sperling, Or; Shapira, Or; Cohen, Shabtai; Tripler, Effi; Schwartz, Amnon; Lazarovitch, Naftali

    2012-09-01

In a world of diminishing water reservoirs and rising demand for food, the development and application of water stress indicators and sensors are progressing rapidly. The heat dissipation method, originally established by Granier, is herein applied and modified to enable sap flow measurements in date palm trees in the southern Arava desert of Israel. A long and tough sensor was constructed to withstand insertion into the date palm's hard exterior stem. This stem is wide and fibrous, surrounded by an even tougher external non-conducting layer of dead leaf bases. Furthermore, because the date palm is a monocot, water flow does not necessarily occur through the outer part of the stem, as it does in most trees. Therefore, it is highly important to investigate the variations in sap flux densities and determine the preferable location for sap flow sensing within the stem. Once the modified sensors were installed in fully grown date palm trees stationed on weighing lysimeters, the sap flow they measured was compared with the actual transpiration. Sap flow was found to be well correlated with transpiration, especially when using a recent calibration equation rather than the original Granier equation. Furthermore, including the axial variability of the sap flux densities was found to be highly important for accurate assessment of transpiration from sap flow measurements. The sensors indicated no transpiration at night, a sharp increase in transpiration from 06:00 to 09:00, and maximum transpiration at 12:00, followed by a moderate reduction until 08:00, when transpiration ceased. These results were reinforced by the lysimeters' output. Reduced sap flux densities were detected at the stem's mantle compared with its center. These results were reinforced by mechanistic measurements of the stem's specific hydraulic conductivity. Variance along the vertical axis was also observed, indicating an accelerated flow towards the upper parts of the tree and raising a hypothesis concerning dehydrating mechanisms of the date palm tree. Finally, the sensors indicated a reduction in flow almost immediately after irrigation of field-grown trees was withheld, at a time when no climatic or phenological conditions could have led to a reduction in transpiration. PMID:22887479

  12. Non-convex model of the binary asteroid (809) Lundia and its density estimation

    NASA Astrophysics Data System (ADS)

    Kryszczynska, A.; Bartczak, P.; Polinska, M.; Colas, F.

    2014-07-01

Introduction: (809) Lundia was classified as a V-type asteroid in the Flora family (Florczak et al. 2002). The binary nature of (809) Lundia was discovered in September 2005 based on photometric observations. The first modeling of the Lundia synchronous binary system was based on 22 lightcurves obtained at the Borowiec and Pic du Midi Observatories during two oppositions in 2005/2006 and 2006/2007. Two methods of modeling --- modified Roche ellipsoids and kinematic --- gave similar parameters for the system (Kryszczynska et al. 2009). The poles of the orbit in ecliptic coordinates were: longitude 118° and latitude 28° in the modified Roche model, and 120° and 18°, respectively, in the kinematic model. The orbital period obtained from the lightcurve analysis as well as from modeling was 15.418 h. The obtained bulk density of both components was 1.64 or 1.71 g/cm3. Observations: We observed (809) Lundia in the 2008, 2009/2010, 2011, and 2012 oppositions at the Borowiec, Pic du Midi, Prompt, and Rozhen Observatories. As predicted, visible eclipse/occultation events were observed only in 2011. Currently, our dataset consists of 45 individual lightcurves, all of which were used in the new modeling. Method: We used a new method of modeling based on a genetic algorithm that is able to create a non-convex asteroid shape model, rotational period, and spin-axis orientation of a single or binary asteroid using only photometric observations. The details of the method are presented in the poster by Bartczak et al. at this conference. Results: The new non-convex model of (809) Lundia is presented in the figure. The parameters of the system in ecliptic coordinates are: longitude 122°, latitude 22°, and sidereal period 15.41574 h. They are very similar to the values obtained before. However, assuming an equivalent diameter of a single body of 9.1 km from the Spitzer observations (Marchis et al. 2012) and the volume of the two modeled bodies, the separation of the components is 17.2 km, the sizes of the components are 7.7 km and 6.7 km, and the size ratio is 0.87. The obtained density of 2.5 g/cm3 is much higher than that determined before. In comparison to the density of HED meteorites, this value implies a macroporosity of only 13-23%.

  13. Theoretical estimation of the electron affinity for quinone derivatives by means of density functional theory

    NASA Astrophysics Data System (ADS)

    Kalimullina, L. R.; Nafikova, E. P.; Asfandiarov, N. L.; Chizhov, Yu. V.; Baibulova, G. Sh.; Zhdanov, E. R.; Gadiev, R. M.

    2015-03-01

A number of compounds related to quinone derivatives are investigated by means of density functional theory at the B3LYP/6-31G(d) level. The vertical electron affinity E_va and/or electron affinity E_a of the investigated compounds are known from experiments. The correlation between the calculated energies of π* molecular orbitals and the E_va values measured via electron transmission spectroscopy is determined, with a coefficient of 0.96. It is established that theoretical values of the adiabatic electron affinity, calculated as the difference between the total energies of a neutral molecule and a radical anion, correlate with E_a values determined from electron transfer experiments with a correlation coefficient of 0.996.

  14. Electronegativity estimator built on QTAIM-based domains of the bond electron density.

    PubMed

    Ferro-Costas, David; Pérez-Juste, Ignacio; Mosquera, Ricardo A

    2014-05-15

The electron localization function, natural localized molecular orbitals, and the quantum theory of atoms in molecules have been used together to analyze the bond electron density (BED) distribution of different hydrogen-containing compounds through the definition of atomic contributions to the bonding regions. A function, g_AH, obtained from those contributions is analyzed along the second and third periods of the periodic table. It exhibits periodic trends typically assigned to electronegativity (χ), and it is also sensitive to hybridization variations. This function also shows an interesting S shape with different χ scales, Allred-Rochow's being the one exhibiting the best monotonic increase with regard to the BED taken by each atom of the bond. Therefore, we think this χ can actually be related to the BED distribution. PMID:24610731

  15. Simulation study of geometric shape factor approach to estimating earth emitted flux densities from wide field-of-view radiation measurements

    NASA Technical Reports Server (NTRS)

    Weaver, W. L.; Green, R. N.

    1980-01-01

A study was performed on the use of geometric shape factors to estimate earth-emitted flux densities from radiation measurements with wide field-of-view flat-plate radiometers on satellites. Sets of simulated irradiance measurements were computed for unrestricted and restricted field-of-view detectors. In these simulations, the earth radiation field was modeled using data from Nimbus 2 and 3. Geometric shape factors were derived and applied to these data to estimate flux densities on global and zonal scales. For measurements at a satellite altitude of 600 km, estimates of zonal flux density were in error by 1.0 to 1.2%, and global flux density errors were less than 0.2%. Estimates with unrestricted field-of-view detectors were about the same for Lambertian and non-Lambertian radiation models, but were affected by satellite altitude. The opposite was found for the restricted field-of-view detectors.
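
    For intuition, a standard geometric shape factor for an unrestricted, nadir-facing flat-plate detector above a uniformly radiating sphere, evaluated with illustrative numbers (mean Earth radius 6371 km and the 600 km altitude quoted above), is

        F = \left(\frac{R_e}{R_e + h}\right)^2
          = \left(\frac{6371}{6371 + 600}\right)^2 \approx 0.835,
        \qquad
        \hat{M} = \frac{E_\mathrm{meas}}{F},

    where E_meas is the measured irradiance and M-hat the estimated earth-emitted flux density. This is a textbook special case for orientation only, not the paper's Nimbus-based radiation models.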

  16. Estimation of Heavy Ion Densities From Linearly Polarized EMIC Waves At Earth

    SciTech Connect

    Kim, Eun-Hwa; Johnson, Jay R.; Lee, Dong-Hun

    2014-02-24

Linearly polarized EMIC waves are expected to concentrate at the location where their wave frequency satisfies the ion-ion hybrid (IIH) resonance condition as the result of a mode conversion process. In this letter, we evaluate absorption coefficients at the IIH resonance at Earth geosynchronous orbit for variable concentrations of helium and azimuthal and field-aligned wave numbers in a dipole magnetic field. Although wave absorption occurs for a wide range of heavy ion concentrations, it only occurs for a limited range of azimuthal and field-aligned wave numbers such that the IIH resonance frequency is close to, but not exactly the same as, the crossover frequency. Our results suggest that, at L = 6.6, linearly polarized EMIC waves can be generated via mode conversion from compressional waves near the crossover frequency. Consequently, the heavy ion concentration ratio can be estimated from observations of externally generated EMIC waves that have linear polarization.

  17. Estimation of underground fracture density of a granitic rock body using P-wave velocity tomography and crack tensor theory

    NASA Astrophysics Data System (ADS)

    Takemura, T.; Takahashi, M.; Tsukamoto, H.

    2008-12-01

A crystalline rock such as granite is a candidate host for deep underground excavations for the disposal of high-level radioactive waste. Such rock is a fractured medium, and fractures alter mechanical properties such as strength and deformability, as well as hydrological properties such as permeability and the diffusion coefficient, since fractures function as water pathways. Therefore, fractures are a very important parameter when we determine the hydro-mechanical properties of crystalline rock at great depth. When underground water flow is estimated by numerical simulation, the distribution of underground permeability is an important initial value for the hydro-geological model. However, it is difficult to obtain underground permeability directly. Underground permeability can be represented by a permeability tensor Pij (Oda, 1987), and Pij is a function of crack geometry such as density, aperture, and connectivity; therefore, we are able to estimate underground permeability using the geometry of cracks. A non-dimensional second-rank tensor Fij, called the crack tensor, has successfully been introduced to deal with crack geometry. We seek the possibility of using longitudinal wave velocities to overcome the difficulty associated with the determination of crack tensors. A new second-rank tensor Vij was introduced by Takemura and Oda (2005), such that the longitudinal wave velocity is represented in terms of the tensor, and the crack tensor Fij is then given as a function of Vij. In this study, we try to estimate two-dimensional underground permeability using the distribution of underground longitudinal wave velocity (a tomography image of the longitudinal wave), Vij, and the permeability tensor based on crack tensor theory. In addition, the aperture of fractures under confining pressure is needed to estimate the permeability tensor, and we propose an experimental formula to estimate the aperture of underground fractures.

  18. Effective Dysphonia Detection Using Feature Dimension Reduction and Kernel Density Estimation for Patients with Parkinson’s Disease

    PubMed Central

    Yang, Shanshan; Zheng, Fang; Luo, Xin; Cai, Suxian; Wu, Yunfeng; Liu, Kaizhi; Wu, Meihong; Chen, Jian; Krishnan, Sridhar

    2014-01-01

    Detection of dysphonia is useful for monitoring the progression of phonatory impairment for patients with Parkinson’s disease (PD), and also helps assess the disease severity. This paper describes the statistical pattern analysis methods to study different vocal measurements of sustained phonations. The feature dimension reduction procedure was implemented by using the sequential forward selection (SFS) and kernel principal component analysis (KPCA) methods. Four selected vocal measures were projected by the KPCA onto the bivariate feature space, in which the class-conditional feature densities can be approximated with the nonparametric kernel density estimation technique. In the vocal pattern classification experiments, Fisher’s linear discriminant analysis (FLDA) was applied to perform the linear classification of voice records for healthy control subjects and PD patients, and the maximum a posteriori (MAP) decision rule and support vector machine (SVM) with radial basis function kernels were employed for the nonlinear classification tasks. Based on the KPCA-mapped feature densities, the MAP classifier successfully distinguished 91.8% voice records, with a sensitivity rate of 0.986, a specificity rate of 0.708, and an area value of 0.94 under the receiver operating characteristic (ROC) curve. The diagnostic performance provided by the MAP classifier was superior to those of the FLDA and SVM classifiers. In addition, the classification results indicated that gender is insensitive to dysphonia detection, and the sustained phonations of PD patients with minimal functional disability are more difficult to be correctly identified. PMID:24586406
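
    A hedged sketch of the classification pipeline described above: project features with kernel PCA, fit class-conditional kernel density estimates in the reduced space, and classify by maximum a posteriori. The RBF kernel, bandwidth, and toy data are assumptions, not the paper's tuned configuration.

        import numpy as np
        from sklearn.decomposition import KernelPCA
        from sklearn.neighbors import KernelDensity

        rng = np.random.default_rng(6)
        X = np.vstack([rng.normal(0.0, 1.0, (50, 4)), rng.normal(1.5, 1.0, (50, 4))])
        y = np.array([0] * 50 + [1] * 50)

        Z = KernelPCA(n_components=2, kernel="rbf").fit_transform(X)

        kdes = [KernelDensity(bandwidth=0.5).fit(Z[y == c]) for c in (0, 1)]
        log_priors = np.log([np.mean(y == c) for c in (0, 1)])

        def map_predict(z):
            # log p(z | c) + log p(c), maximized over classes c
            scores = np.column_stack([k.score_samples(z) for k in kdes]) + log_priors
            return scores.argmax(axis=1)

        print("training accuracy:", (map_predict(Z) == y).mean())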

  19. On the estimation of spectral density of X-ray sources using attenuation measurements

    NASA Astrophysics Data System (ADS)

    Huerta, Claudia; Vazquez, Luis; Manciu, Marian; Vulcan, Teodor; Manciu, Felicia; Waggener, Robert

    2008-03-01

The high-energy X-rays typically used in medical physics (hundreds of kV to 20 MeV) have such short wavelengths that creating a diffraction grating is impossible. Because the absorption coefficient depends on the wavelength, one can use transmission data through filters of various thicknesses to obtain information about the spectral density of the X-ray source. Neglecting non-linear processes, there is a linear dependence of the transmission data on the spectral distribution. Unfortunately, the corresponding underdetermined system is ill-conditioned, and traditional methods used to solve inverse problems (such as singular value decomposition) typically fail, even for very small levels of noise in the attenuation data (much lower than is typically obtained in an experiment). We present a very robust algorithm for detecting the bremsstrahlung spectrum, which seeks a smooth function that minimizes the distance to the experimental transmission data. We show that the algorithm works very well even for very noisy attenuation data, even when no prior knowledge of the spectral distribution is available.
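
    The inversion idea lends itself to a short sketch: transmission through thickness t_i is modeled as T_i = sum_j exp(-mu_j t_i) s_j, and a nonnegative, smooth spectrum s is sought by penalized least squares. The toy attenuation table, grids, and smoothing weight below are assumptions; the authors' actual algorithm is not reproduced.

        import numpy as np
        from scipy.optimize import nnls

        E = np.linspace(0.1, 6.0, 40)        # energy grid, MeV (assumed)
        mu = 0.2 + 1.0 / E                   # toy attenuation coefficients, 1/cm
        t = np.linspace(0.0, 10.0, 25)       # filter thicknesses, cm

        A = np.exp(-np.outer(t, mu))         # forward model: row i = thickness t_i
        s_true = np.exp(-0.5 * ((E - 2.0) / 0.8) ** 2)
        T = A @ s_true + 1e-3 * np.random.default_rng(7).normal(size=t.size)

        # min ||A s - T||^2 + lam ||D s||^2 with s >= 0, via a stacked NNLS.
        D = np.diff(np.eye(E.size), 2, axis=0)   # second-difference smoothness penalty
        lam = 1.0
        A_aug = np.vstack([A, np.sqrt(lam) * D])
        b_aug = np.concatenate([T, np.zeros(D.shape[0])])
        s_hat, _ = nnls(A_aug, b_aug)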

  20. A Comparative Study of Density Field Estimation for Galaxies: New Insights into the Evolution of Galaxies with Environment in COSMOS out to z~3

    E-print Network

    Darvish, Behnam; Sobral, David; Scoville, Nicholas; Aragon-Calvo, Miguel

    2015-01-01

It is well known that galaxy environment has a fundamental effect in shaping galaxy properties. We study the environmental effects on galaxy evolution, with an emphasis on the environment defined as the local number density of galaxies. The density field is estimated with different estimators (weighted adaptive kernel smoothing, 10th and 5th nearest neighbors, Voronoi and Delaunay tessellation) for a K_s ... massive (>10^11 M⊙) star-forming galaxies have not significantly changed since z ~ 3, regardless of their environment. However, for massive quiescent systems at lower redshifts (z <~ 1.3), we find a significant evolution in the number and stellar mass densities in denser environments compared to lower density regions. Our results suggest that the relation between stellar mass and local density is more fundamental than the color-density relation and that environment plays a significant role in quenching star formation activity in galaxies at z <~ 1.3.

  1. Estimation of volume densities of hepatocytic peroxisomes in a model fish: catalase conventional immunofluorescence versus cytochemistry for electron microscopy.

    PubMed

    Madureira, Tânia Vieira; Lopes, Célia; Malhão, Fernanda; Rocha, Eduardo

    2015-02-01

Accurately assessing changes in the intracellular volumes (or numbers) of peroxisomes within a cell can be a lengthy task, because unbiased estimations can be made only by studies conducted under transmission electron microscopy. Yet such information is often required, namely for correlations with functional data. The optimization and applicability of a fast new procedure based on catalase immunofluorescence was implemented herein using primary hepatocytes from brown trout (Salmo trutta f. fario) exposed for 96 h to two distinct treatments (0.1% ethanol and 50 µM 17α-ethynylestradiol). The time and cost efficiency, together with the results obtained by stereological analyses, specifically directed to the volume densities of peroxisomes, and additionally of the nucleus in relation to the hepatocyte, were compared with the well-established 3,3'-diaminobenzidine cytochemistry for electron microscopy. With the immuno technique it was possible to correctly distinguish punctate peroxisomal profiles, allowing the selection of the marked organelles for quantification. By both methodologies, a significant reduction in the volume density of the peroxisome within the hepatocyte was obtained after an estrogenic input. Most interestingly, the volume density ratios were well correlated between the two techniques. Overall, the immunofluorescence protocol for catalase was evidently faster and cheaper and provided reliable quantitative data that discriminated the compared groups in the same way. After this validation study, we recommend the use of catalase immunofluorescence as the first option for rapid screening of changes in the amount of hepatocytic peroxisomes, using their volume density as an indicator. PMID:25431324

  2. Estimation

    NSDL National Science Digital Library

    AAA Math

    2007-12-12

    This site has explanatory lessons, interactive practice, and challenge games all dealing with estimation and rounding. Includes information, practice, and games on rounding to the nearest ten, hundred, thousand, ten thousand, and hundred thousand, rounding decimals to the nearest hundredth and tenth, front end estimation with sums and differences, estimating sums and differences, and subtraction using estimation. All math problems are randomly created and students receive immediate feedback with the correct response. The bottom of each lesson page contains timed exercises.

  3. Mobile sailing robot for automatic estimation of fish density and monitoring water quality

    PubMed Central

    2013-01-01

Introduction: The paper presents the methodology and the algorithm developed to analyze sonar images, focused on fish detection in small water bodies and measurement of their parameters: volume, depth, and GPS location. The final results are stored in a table and can be exported to any numerical environment for further analysis. Material and method: The measurement method for estimating the number of fish using the automatic robot is based on a sequential calculation of the number of occurrences of fish on the set trajectory. The data analysis from the sonar concerned automatic recognition of fish using methods of image analysis and processing. Results: An image analysis algorithm and a mobile robot, with control in the 2.4 GHz band and fully encrypted communication with the data archiving station, were developed as part of this study. For the three model fish ponds where verification of fish catches was carried out (548, 171, and 226 individuals), the measurement error for the described method did not exceed 8%. Summary: The robot, together with the developed software, supports remote work in a variety of harsh weather and environmental conditions, is fully automated, and can be remotely controlled over the Internet. The designed system enables spatial location of fish (GPS coordinates and depth). The purpose of the robot is non-invasive measurement of the number of fish in water reservoirs and measurement of the quality of drinking water consumed by humans, especially in situations where local sources of pollution could have a significant impact on the quality of water collected for water treatment and where access to these places is difficult. Systematically used and equipped with the appropriate sensors, the robot can be part of an early warning system against pollution of water used by humans (drinking water, natural swimming pools) that can be dangerous to their health. PMID:23815984

  4. Seismological in situ Estimation of Density Jumps across the Transition Zone Discontinuities beneath Japan (Geophysical Research Letters, Vol. 28, No. 13, pages 2541-2544, July 1, 2001)

    E-print Network

    Kawakatsu, Hitoshi

discontinuities in the Earth's mantle. We estimate SS reflection and PS conversion coefficients from ScS [Kato et al., 2001]. ScS reverberation phases sample mantle discontinuities with redundancy

  5. Bone geometry, volumetric density, microarchitecture, and estimated bone strength assessed by HR-pQCT in Klinefelter syndrome.

    PubMed

    Shanbhogue, Vikram V; Hansen, Stinus; Jørgensen, Niklas Rye; Brixen, Kim; Gravholt, Claus H

    2014-11-01

Although the expected skeletal manifestations of testosterone deficiency in Klinefelter's syndrome (KS) are osteopenia and osteoporosis, the structural basis for this is unclear. The aim of this study was to assess bone geometry, volumetric bone mineral density (vBMD), microarchitecture, and estimated bone strength using high-resolution peripheral quantitative computed tomography (HR-pQCT) in patients with KS. Thirty-one patients with KS confirmed by lymphocyte chromosome karyotyping, aged 35.8 ± 8.2 years, were recruited consecutively from a KS outpatient clinic and matched with respect to age and height with 31 healthy subjects aged 35.9 ± 8.2 years. Dual-energy X-ray absorptiometry (DXA) and HR-pQCT were performed in all participants, and blood samples were analyzed for hormonal status and bone biomarkers in KS patients. Twenty-one KS patients were on long-term testosterone-replacement therapy. In weight-adjusted models, HR-pQCT revealed a significantly lower cortical area (p ...) ... estimates of bone strength, whereas trabecular spacing was higher (p = 0.03) at the tibia in KS patients. In addition, cortical thickness was significantly reduced at both the radius and tibia (both p ...) ... estimated bone strength, or bone biomarkers in KS patients with and without testosterone therapy. This study showed that KS patients had lower total vBMD and a compromised trabecular compartment with reduced trabecular density and bone volume fraction at the tibia. The compromised trabecular network integrity, attributable to a lower trabecular number with relative preservation of trabecular thickness, is similar to the picture found in women with aging. KS patients also displayed a reduced cortical area and thickness at the tibia, which, in combination with the trabecular deficits, compromised estimated bone strength at this site. PMID:24806509

  6. Quantitative estimation of density variation in high-speed flows through inversion of the measured wavefront distortion

    NASA Astrophysics Data System (ADS)

    Medhi, Biswajit; Hegde, Gopalkrishna Mahadeva; Reddy, Kalidevapura Polareddy Jagannath; Roy, Debasish; Vasu, Ram Mohan

    2014-12-01

A simple method employing an optical probe is presented to measure density variations in a hypersonic flow obstructed by a test model in a typical shock tunnel. The probe has a plane light wave trans-illuminating the flow and casting a shadow of a random dot pattern. Local slopes of the distorted wavefront are obtained from shifts of the dots in the pattern. Local shifts in the dots are accurately measured by cross-correlating local shifted shadows with the corresponding unshifted originals. The measured slopes are suitably unwrapped by using a discrete cosine transform-based phase unwrapping procedure and also through iterative procedures. The unwrapped phase information is used in an iterative scheme for a full quantitative recovery of the density distribution in the shock around the model through refraction tomographic inversion. Hypersonic flow field parameters around a missile-shaped body at a free-stream Mach number of 5.8 measured using this technique are compared with the numerically estimated values.
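
    The dot-shift measurement step can be illustrated in a few lines: the displacement of a local window is taken from the peak of its cross-correlation with the reference pattern. The synthetic images and the whole-window shift are assumptions; the experiment measures many small local shifts.

        import numpy as np
        from scipy.signal import fftconvolve

        rng = np.random.default_rng(9)
        ref = rng.random((64, 64))                    # unshifted dot pattern
        shifted = np.roll(ref, (3, -2), axis=(0, 1))  # distorted (shifted) pattern

        # Cross-correlate by convolving with the flipped reference.
        corr = fftconvolve(shifted, ref[::-1, ::-1], mode="same")
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        shift = (peak[0] - ref.shape[0] // 2, peak[1] - ref.shape[1] // 2)
        print("recovered shift:", shift)              # ~ (3, -2)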

  7. Estimate

    NSDL National Science Digital Library

    Mark Cogan

    2002-01-01

    This interactive applet helps develop number sense. The user mentally estimates a number that is represented by an arrow on a number line and then checks the estimate by clicking to have the exact number revealed. Users can choose a number range for whole numbers (between 0 and 10, 100, 1000 or 10,000) or decimals (tenths or hundredths between 0 and 1). An optional scale of tick marks provides guidance. The activity does not provide a scoring component.

  8. A wavelet-based Bayesian approach to regression models with long memory errors and its application to FMRI data.

    PubMed

    Jeong, Jaesik; Vannucci, Marina; Ko, Kyungduk

    2013-03-01

    This article considers linear regression models with long memory errors. These models have been proven useful for application in many areas, such as medical imaging, signal processing, and econometrics. Wavelets, being self-similar, have a strong connection to long memory data. Here we employ discrete wavelet transforms as whitening filters to simplify the dense variance-covariance matrix of the data. We then adopt a Bayesian approach for the estimation of the model parameters. Our inferential procedure uses exact wavelet coefficients variances and leads to accurate estimates of the model parameters. We explore performances on simulated data and present an application to an fMRI data set. In the application we produce posterior probability maps (PPMs) that aid interpretation by identifying voxels that are likely activated with a given confidence. PMID:23379536
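
    A minimal sketch of the whitening idea is given below, with stand-in data and a crude robust scale estimate per level in place of the paper's exact coefficient variances and Bayesian machinery: transform the response and regressors with the same DWT, then solve a level-weighted least-squares problem.

```python
# A sketch of DWT whitening for long-memory regression, assuming
# y = X b + e; names and the MAD-based level weights are illustrative.
import numpy as np
import pywt

def wavelet_wls(X, y, wavelet='db4', level=4):
    yw = pywt.wavedec(y, wavelet, level=level)
    Xw = [pywt.wavedec(X[:, k], wavelet, level=level) for k in range(X.shape[1])]
    rows, rhs = [], []
    for j, cy in enumerate(yw):
        cx = np.column_stack([Xw[k][j] for k in range(X.shape[1])])
        # Crude level-dependent scale (robust MAD); the paper instead uses
        # exact wavelet-coefficient variances within a Bayesian model.
        s = np.median(np.abs(cy - np.median(cy))) / 0.6745 + 1e-12
        rows.append(cx / s)
        rhs.append(cy / s)
    A = np.vstack(rows)
    b = np.concatenate(rhs)
    beta, *_ = np.linalg.lstsq(A, b, rcond=None)
    return beta
```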

  9. Plasma density estimation of a fusion grade ICP source through electrical parameters of the RF generator circuit

    NASA Astrophysics Data System (ADS)

    Bandyopadhyay, M.; Sudhir, Dass; Chakraborty, A.

    2015-03-01

    An inductively coupled plasma (ICP) based negative hydrogen ion source is chosen for ITER neutral beam (NB) systems. To avoid regular maintenance in a radioactive environment with a high flux of 14 MeV neutrons and gamma rays, invasive plasma diagnostics like probes are not included in the ITER NB source design. Meanwhile, optical or microwave-based diagnostics, which are normally used in other plasma sources, are to be avoided in the case of ITER sources due to overall system design and interface issues. In such a situation, alternative forms of assessment to characterize the ion source plasma become a necessity. At present, the beam current through the extraction system in the ion source is the only measurement that indicates the plasma condition inside the ion source. However, the beam current depends not only on the plasma condition near the extraction region but also on the perveance condition and negative ion stripping. Apart from that, the ICP production region (the radio frequency (RF) driver) is placed far (~30 cm) from the extraction region. Therefore, there are uncertainties involved in linking the beam current with plasma properties inside the RF driver. Maintaining the optimum condition for source operation requires maintaining optimum conditions in the driver, and a method of characterizing the plasma density in the driver without using any invasive or non-invasive probes could be a useful tool to achieve that objective. Such a method, developed exclusively for ICP-based ion sources, is presented in this paper. In this technique, the plasma density inside the RF driver is estimated through measurements of the electrical parameters of the RF power supply circuit. Monitoring the RF driver plasma through the described route will be useful during the source commissioning phase and also in the beam operation phase.

  10. Estimator

    NSDL National Science Digital Library

    Shodor Education Foundation

    2004-01-01

    This activity features an applet that presents three estimating situations. Problems challenge students to estimate the number of objects on the screen, or to estimate the length of a curve or the area of a shape given the scale. Students select one of the three problem types or allow the computer to select a random mix. Students also choose an easy or hard difficulty level and decide whether answers must be almost perfect, really close, or close. A hint button adds a grid to the screen. The applet tracks the student's correct answers and can display a history of guesses. From the applet page, What, How, and Why buttons open pages explaining the activity's purpose and function, and how the mathematics fits into the curriculum, with links to the NCTM geometry and measurement standards for grades 6-8. Supplemental resources include lesson plans, other estimation applets, background information about estimation, and exploration questions for investigating how a time limit affects making an estimation. Copyright 2005 Eisenhower National Clearinghouse

  11. SIZE AND DENSITY ESTIMATION FROM IMPACT TRACK MORPHOLOGY IN SILICA AEROGEL: APPLICATION TO DUST FROM COMET 81P/WILD 2

    SciTech Connect

    Niimi, Rei; Tsuchiyama, Akira [Department of Earth and Space Science, Osaka University, Toyonaka, Osaka 560-0043 (Japan); Kadono, Toshihiko [Institute of Laser Engineering, Osaka University, Suita, Osaka 565-0871 (Japan); Okudaira, Kyoko [Office for Planning and Management, The University of Aizu, Aizuwakamatsu, Fukushima 965-8580 (Japan); Hasegawa, Sunao; Tabata, Makoto [Institute of Space and Astronautical Science, Japan Aerospace Exploration Agency, Sagamihara, Kanagawa 252-5210 (Japan); Watanabe, Takayuki; Yagishita, Masahito [Department of Environmental Chemistry and Engineering, Tokyo Institute of Technology, Nagatsuta, Yokohama 226-8502 (Japan); Machii, Nagisa; Nakamura, Akiko M. [Department of Earth and Planetary Sciences, Kobe University, Nada, Kobe 657-8501 (Japan); Uesugi, Kentaro; Takeuchi, Akihisa [Synchrotron Radiation Research Institute, SPring-8, Sayo, Hyogo 679-5198 (Japan); Nakano, Tsukasa, E-mail: kadonot@ile.osaka-u.ac.jp [Geological Survey of Japan, Advanced Industrial Science and Technology, Tsukuba, Ibaraki 305-8567 (Japan)

    2012-01-01

    A large number of cometary dust particles were captured with low-density silica aerogel during the NASA Stardust mission. The dust particles penetrated into the aerogel and formed various track shapes. To estimate the properties of the dust particles, such as density and size, based on the morphology of the tracks, we carried out systematic experiments testing impacts into low-density aerogel at 6 km s⁻¹ using projectiles of various sizes and densities. We found that the maximum track diameter and the ratio of the track length to the maximum track diameter in aerogel are good indicators of projectile size and density, respectively. Based on these results, we estimated the size and density of individual dust particles from comet 81P/Wild 2. The average density of the 'fluffy' dust particles and the bulk density of all dust particles were obtained as 0.35 ± 0.07 and 0.49 ± 0.18 g cm⁻³, respectively. These statistical data provided the content of monolithic and coarse grains in the Stardust particles, ~30 wt%. Combining this result with some mid-infrared observational data, we found that the content of crystalline silicates is ~50 wt% or more of non-volatile material.

  12. Model Assembly for Estimating Cell Surviving Fraction for Both Targeted and Nontargeted Effects Based on Microdosimetric Probability Densities

    PubMed Central

    Sato, Tatsuhiko; Hamada, Nobuyuki

    2014-01-01

    We here propose a new model assembly for estimating the surviving fraction of cells irradiated with various types of ionizing radiation, considering both targeted and nontargeted effects in the same framework. The probability densities of specific energies in two scales, which are the cell nucleus and its substructure called a domain, were employed as the physical index for characterizing the radiation fields. In the model assembly, our previously established double stochastic microdosimetric kinetic (DSMK) model was used to express the targeted effect, whereas a newly developed model was used to express the nontargeted effect. The radioresistance caused by overexpression of the anti-apoptotic protein Bcl-2, known to frequently occur in human cancer, was also considered by introducing the concept of the adaptive response in the DSMK model. The accuracy of the model assembly was examined by comparing the computationally and experimentally determined surviving fraction of Bcl-2 cells (Bcl-2 overexpressing HeLa cells) and Neo cells (neomycin resistant gene-expressing HeLa cells) irradiated with microbeam or broadbeam of energetic heavy ions, as well as the WI-38 normal human fibroblasts irradiated with X-ray microbeam. The model assembly reproduced very well the experimentally determined surviving fraction over a wide range of dose and linear energy transfer (LET) values. Our newly established model assembly will be worth incorporating into treatment planning systems for heavy-ion therapy, brachytherapy, and boron neutron capture therapy, given the critical roles of frequent Bcl-2 overexpression and the nontargeted effect in estimating therapeutic outcomes and harmful effects of such advanced therapeutic modalities. PMID:25426641

  13. First electron density and temperature estimates from the Swarm Langmuir probes and a comparison with IS measurements

    NASA Astrophysics Data System (ADS)

    Buchert, Stephan C.; Eriksson, Anders; Gill, Reine; Nilsson, Thomas; Åhlen, Lennart; Wahlund, Jan-Erik; Knudsen, David; Burchill, Johnathan; Archer, William; Kouznetsov, Alexei; Stricker, Nico; Bouridah, Abderrazak; Bock, Ralph; Häggström, Ingemar; Rietveld, Michael; Gonzalez, Sixto; Aponte, Nestor

    2014-05-01

    The Langmuir probes (LP) on the Swarm satellites are part of the Electric Field Instruments (EFI), which feature thermal ion imagers (TII) measuring 3-D ion distributions. The main task of the Langmuir probes is to provide measurements of the spacecraft potentials influencing the ions before they enter the TIIs. In addition, electron density (Ne) and temperature (Te) are estimated from EFI LP data. The design of the Swarm LP includes standard current sampling under sweeps of the bias voltage, as well as a novel ripple technique yielding derivatives of the current-voltage characteristics at three points in a rapid cycle. In normal mode this gives the Ne and Te measurements a time resolution of 0.5 s. We show first Ne and Te estimates from the EFI LPs obtained in the commissioning phase in December 2013, when all three satellites were following each other at about 500 km altitude at mutual distances of a few tens of kilometers. The LP data are compared with observations by incoherent scatter radars, namely EISCAT UHF, VHF, the ESR, and also Arecibo. Acknowledgements: The EFIs were developed and built by a consortium that includes COM DEV Canada, the University of Calgary, and the Swedish Institute for Space Physics in Uppsala. The Swarm EFI project is managed and funded by the European Space Agency with additional funding from the Canadian Space Agency. EISCAT is an international association supported by research organisations in China (CRIRP), Finland (SA), Japan (NIPR and STEL), Norway (NFR), Sweden (VR), and the United Kingdom (NERC). The Arecibo Observatory is operated by SRI International under a cooperative agreement with the National Science Foundation (AST-1100968), and in alliance with Ana G. Méndez-Universidad Metropolitana and the Universities Space Research Association.
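
    The record mentions deriving Ne and Te from bias-voltage sweeps. The sketch below shows textbook probe analysis on a synthetic sweep: the slope of ln(I_e) in the electron retardation region gives Te, and the electron saturation current gives Ne. The probe area and plasma values are assumed toy numbers; this is not the Swarm EFI processing itself.

```python
# A sketch of textbook Langmuir-probe analysis on a synthetic sweep;
# probe area and plasma values are assumed, not Swarm EFI numbers.
import numpy as np

e, me = 1.602e-19, 9.109e-31
area = 1e-4                                   # probe area in m^2 (assumed)
Te_true, ne_true, Vp = 0.2, 1e11, 0.0         # eV, m^-3, plasma potential

# Electron current in the retardation region (V below plasma potential).
V = np.linspace(-1.0, -0.05, 200)
I_sat = e * ne_true * area * np.sqrt(e * Te_true / (2 * np.pi * me))
I = I_sat * np.exp((V - Vp) / Te_true)

# Slope of ln(I) gives 1/Te; the intercept recovers the saturation current.
slope, intercept = np.polyfit(V, np.log(I), 1)
Te_est = 1.0 / slope
ne_est = np.exp(intercept + Vp / Te_est) / (e * area * np.sqrt(
    e * Te_est / (2 * np.pi * me)))
print(f"Te = {Te_est:.2f} eV, Ne = {ne_est:.2e} m^-3")
```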

  14. Density estimates and nesting-site selection in chimpanzees of the Nimba Mountains, Côte d'Ivoire, and Guinea.

    PubMed

    Granier, Nicolas; Hambuckers, Alain; Matsuzawa, Tetsuro; Huynen, Marie-Claude

    2014-11-01

    We investigated the nesting behavior of non-habituated chimpanzees populating the Nimba Mountains to document their abundance and their criteria for nesting-site selection. During a 19-month study we walked 80 km of transects and recces each month and recorded 764 nests (mean group size = 2.23 nests), along with characteristics of vegetation structure and composition, topography, and seasonality. Population density estimated with two nest count methods ranged between 0.14 and 0.65 chimpanzees/km². These values are lower than previous estimates, emphasizing the necessity of protecting remaining wild ape populations. Chimpanzees built nests in 108 of the 437 tree species identified, but 2.3% of the species accounted for 52% of nests. Although they preferred nesting in trees of 25-29 cm DBH and at a mean height of 8.02 m, we recorded a substantial proportion of terrestrial nests (8.2%) that may reflect a cultural trait of Nimba chimpanzees. A logistic model of nest presence formulated as a function of 12 habitat variables revealed a preference for gallery and mountain forests over lowland forest, and for old-growth forest over secondary forests. Chimpanzees nested more frequently in the study area during the dry season (December-April). The highest probability of observing nests was at 770 m altitude, particularly in steep locations (mean ground declivity = 15.54%). Several of the reported nest characteristics, combined with the existence of two geographically separated clusters of nests, suggest that the study area constitutes the non-overlapping peripheral areas of two distinct communities. This nest-based study yielded findings on the behavioral ecology of Nimba chimpanzees that constitute crucial knowledge for implementing efficient, purpose-built conservation. PMID:25099739

  15. Voltage-current characteristics of a high-power pulsed sputtering (HPPS) glow discharge and plasma density estimation

    NASA Astrophysics Data System (ADS)

    Yukimura, Ken; Mieda, Ryosuke; Azuma, Kingo; Tamagaki, Hiroshi; Okimoto, Tadao

    2009-05-01

    A droplet-free metallic plasma source is promising for enhanced adhesion of films with a smooth coating surface. This paper concerns the study of a highly ionized metallic plasma source using a pulsed Penning discharge designed with a magnetic field oriented parallel to an electric field. Such a plasma is called a high-power pulsed sputtering (HPPS) glow discharge plasma. This technology is related to so-called high-power impulse magnetron sputtering (HIPIMS), though the interaction of the magnetic and electric fields in the HPPS glow plasma is different from that in the HIPIMS plasma. The titanium metallic species are sputtered by energetic argon ion bombardment and ionized in as little as a few microseconds. The typical electrical characteristics are as follows: a peak current of 45 A (0.9 A/cm²), a peak power of 18 kW (0.8 kW/cm²), and an average power of 1 kW. The target voltage is approximately 400 V at 30 μs for glow currents of 30-120 A. A negative pulse voltage is applied to the substrate holder electrode to extract ions from the magnetically confined HPPS glow plasma. Using the recovery characteristics of the voltage applied to the substrate, the ion density at the substrate surface is estimated to be on the order of 10¹⁶-10¹⁷ m⁻³ for a singly charged titanium plasma.

  16. Goal-based angular adaptivity applied to a wavelet-based discretisation of the neutral particle transport equation

    NASA Astrophysics Data System (ADS)

    Goffin, Mark A.; Buchan, Andrew G.; Dargaville, Steven; Pain, Christopher C.; Smith, Paul N.; Smedley-Stevenson, Richard P.

    2015-01-01

    A method for applying goal-based adaptive methods to the angular resolution of the neutral particle transport equation is presented. The methods are applied to an octahedral wavelet discretisation of the spherical angular domain which allows for anisotropic resolution. The angular resolution is adapted across both the spatial and energy dimensions. The spatial domain is discretised using an inner-element sub-grid scale finite element method. The goal-based adaptive methods optimise the angular discretisation to minimise the error in a specific functional of the solution. The goal-based error estimators require the solution of an adjoint system to determine the importance to the specified functional. The error estimators and the novel methods to calculate them are described. Several examples are presented to demonstrate the effectiveness of the methods. It is shown that the methods can significantly reduce the number of unknowns and computational time required to obtain a given error. The novelty of the work is the use of goal-based adaptive methods to obtain anisotropic resolution in the angular domain for solving the transport equation.
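
    The adjoint logic behind goal-based error estimation can be shown on a plain linear system, a deliberately simplified stand-in for the transport discretisation above: for A x = b and a goal J(x) = gᵀx, solving the adjoint Aᵀψ = g turns the residual of an approximate solution into an estimate of the error in J.

```python
# A sketch of a goal-based (adjoint-weighted residual) error estimate on
# a generic linear system; a toy stand-in for the transport setting.
import numpy as np

rng = np.random.default_rng(0)
n = 50
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))
b = rng.standard_normal(n)
g = rng.standard_normal(n)                      # goal functional J(x) = g @ x

x_exact = np.linalg.solve(A, b)
x_approx = x_exact + 1e-3 * rng.standard_normal(n)   # mimic discretisation error

psi = np.linalg.solve(A.T, g)                   # adjoint / importance solution
estimate = psi @ (b - A @ x_approx)             # adjoint-weighted residual
actual = g @ (x_exact - x_approx)               # true error in the functional
print(f"estimated {estimate:+.3e}  vs  actual {actual:+.3e}")
```

    For a linear functional of a linear system the adjoint-weighted residual is exact, which is why the estimate and the actual error agree to rounding here; in the transport setting the same quantity drives where angular resolution is added.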

  17. Simulation study of a geometric shape factor technique for estimating earth-emitted radiant flux densities from wide-field-of-view radiation measurements

    NASA Technical Reports Server (NTRS)

    Weaver, W. L.; Green, R. N.

    1980-01-01

    Geometric shape factors were computed and applied to satellite simulated irradiance measurements to estimate Earth-emitted flux densities for global and zonal scales and for areas smaller than the detector field of view (FOV). Wide field of view flat plate detectors were emphasized, but spherical detectors were also studied. The radiation field was modeled after data from the Nimbus 2 and 3 satellites. At a satellite altitude of 600 km, zonal estimates were in error by 1.0 to 1.2 percent and global estimates by less than 0.2 percent. Estimates with unrestricted field of view (UFOV) detectors were about the same for Lambertian and limb-darkening radiation models. The opposite was found for restricted field of view detectors. The UFOV detectors are found to be poor estimators of flux density from the total FOV and are shown to be much better estimators of flux density from a circle centered in the FOV with an area significantly smaller than that of the total FOV.

  18. Estimated probability density functions for the times between flashes in the storms of 12 September 1975, 26 August 1975, and 13 July 1976

    NASA Technical Reports Server (NTRS)

    Tretter, S. A.

    1977-01-01

    A report is given to supplement the progress report of June 17, 1977. In that progress report gamma, lognormal, and Rayleigh probability density functions were fitted to the times between lightning flashes in the storms of 9/12/75, 8/26/75, and 7/13/76 by the maximum likelihood method. The goodness of fit is checked by the Kolmogorov-Smirnov test. Plots of the estimated densities along with normalized histograms are included to provide a visual check on the goodness of fit. The lognormal densities are the most peaked and have the highest tails. This results in the best fit to the normalized histogram in most cases. The Rayleigh densities have peaks too broad and rounded to give good fits. In addition, they have the lowest tails. The gamma densities fall in between and give the best fit in a few cases.
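
    The fit-and-test procedure generalizes readily. A minimal scipy sketch on synthetic inter-flash times (the report's storm data are not reproduced here) fits each candidate density by maximum likelihood and applies the Kolmogorov-Smirnov test; note that fitting and testing on the same sample makes the p-values optimistic.

```python
# A sketch of ML fitting plus a Kolmogorov-Smirnov check on synthetic
# inter-flash times; distribution choices mirror the report.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
times = rng.lognormal(mean=2.0, sigma=0.8, size=300)   # stand-in data (s)

for name in ('gamma', 'lognorm', 'rayleigh'):
    dist = getattr(stats, name)
    params = dist.fit(times, floc=0)       # maximum likelihood, location at 0
    d, p = stats.kstest(times, name, args=params)
    print(f"{name:8s}  KS D = {d:.3f}  p = {p:.3f}")
```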

  19. Estimation and Error Analysis of Woody Canopy Leaf Area Density Profiles Using 3-D Airborne and Ground-Based Scanning Lidar Remote-Sensing Techniques

    Microsoft Academic Search

    Fumiki Hosoi; Yohei Nakai; Kenji Omasa

    2010-01-01

    Vertical profiles of the leaf area density (LAD) of a Japanese zelkova canopy were estimated by combining airborne and portable ground-based light detection and ranging (lidar) data and using a voxel-based canopy profiling method. The profiles obtained by the two types of lidars complemented each other, eliminating blind regions and yielding more accurate LAD profiles than could be obtained by either lidar alone.

  20. Factors contributing to accuracy in the estimation of the woody canopy leaf area density profile using 3D portable lidar imaging

    Microsoft Academic Search

    Fumiki Hosoi; Kenji Omasa

    2007-01-01

    Factors that contribute to the accuracy of estimating a woody canopy's leaf area density (LAD) using 3D portable lidar imaging were investigated. The 3D point cloud data for a Japanese zelkova canopy (Zelkova serrata (Thunberg) Makino) were collected using a portable scanning lidar from several points established on the ground and at 10 m above the ground.

  1. Model-based estimation of breast percent density in raw and processed full-field digital mammography images from image-acquisition physics and patient-image characteristics

    NASA Astrophysics Data System (ADS)

    Keller, Brad M.; Nathan, Diane L.; Conant, Emily F.; Kontos, Despina

    2012-03-01

    Breast percent density (PD%), as measured mammographically, is one of the strongest known risk factors for breast cancer. While the majority of studies to date have focused on PD% assessment from digitized film mammograms, digital mammography (DM) is becoming increasingly common and allows for direct PD% assessment at the time of imaging. This work investigates the accuracy of generalized linear model-based (GLM) estimation of PD% from raw and postprocessed digital mammograms, utilizing image acquisition physics, patient characteristics, and gray-level intensity features of the specific image. The model is trained in a leave-one-woman-out fashion on a series of 81 cases for which bilateral, mediolateral-oblique DM images were available in both raw and postprocessed format. Baseline continuous and categorical density estimates were provided by a trained breast-imaging radiologist. Regression analysis is performed and Pearson's correlation, r, and Cohen's kappa, κ, are computed. The GLM PD% estimation model performed well on both processed (r = 0.89, p < 0.001) and raw (r = 0.75, p < 0.001) images. Model agreement with radiologist-assigned density categories was also high for processed (κ = 0.79, p < 0.001) and raw (κ = 0.76, p < 0.001) images. Model-based prediction of breast PD% could allow for a reproducible estimation of breast density, providing a rapid risk assessment tool for clinical practice.
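
    A minimal sketch of the leave-one-out training and evaluation loop is shown below; the feature matrix is a random stand-in for the acquisition-physics, patient, and gray-level predictors, and plain least squares stands in for the paper's GLM.

```python
# A sketch of leave-one-out evaluation of a linear percent-density model;
# features are random stand-ins and OLS stands in for the paper's GLM.
import numpy as np
from scipy import stats
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(2)
n = 81                                       # one case per woman, as above
X = rng.standard_normal((n, 6))              # e.g. kVp, mAs, age, gray-level stats
pd_true = 20 + X @ rng.uniform(1, 3, 6) + rng.normal(0, 2, n)

pd_hat = cross_val_predict(LinearRegression(), X, pd_true, cv=LeaveOneOut())
r, p = stats.pearsonr(pd_true, pd_hat)
print(f"leave-one-out Pearson r = {r:.2f} (p = {p:.2g})")
```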

  2. A new wavelet-based approach for the automated treatment of large sets of lunar occultation data

    E-print Network

    O. Fors; A. Richichi; X. Otazu; J. Nunez

    2008-01-14

    The introduction of infrared arrays for lunar occultations (LO) work and the improvement of predictions based on new deep IR catalogues have resulted in a large increase in the number of observable occultations. We provide the means for an automated reduction of large sets of LO data. This frees the user from the tedious task of estimating first-guess parameters for the fit of each LO lightcurve. At the end of the process, ready-made plots and statistics enable the user to identify sources which appear to be resolved or binary and to initiate their detailed interactive analysis. The pipeline is tailored to array data, including the extraction of the lightcurves from FITS cubes. Because of its robustness and efficiency, the wavelet transform has been chosen to compute the initial guess of the parameters of the lightcurve fit. We illustrate and discuss our automatic reduction pipeline by analyzing a large volume of novel occultation data recorded at Calar Alto Observatory. The automated pipeline package is available from the authors.
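
    A minimal sketch of seeding a lightcurve fit with a continuous wavelet transform follows, assuming a synthetic step-like occultation trace: a first-derivative-of-Gaussian wavelet responds strongly at the abrupt flux drop, giving a first-guess event time for a subsequent model fit. This illustrates the general idea, not the authors' pipeline.

```python
# A sketch of seeding a lightcurve fit with a CWT on a synthetic trace;
# 'gaus1' (first derivative of a Gaussian) peaks at the abrupt flux drop.
import numpy as np
import pywt

rng = np.random.default_rng(3)
t = np.linspace(-0.5, 0.5, 2000)                       # time in seconds
flux = (t < 0.12).astype(float) + 0.02 * rng.standard_normal(t.size)

coefs, _ = pywt.cwt(flux, scales=np.arange(2, 64), wavelet='gaus1')
_, k_time = np.unravel_index(np.argmax(np.abs(coefs)), coefs.shape)
print(f"first-guess occultation time: {t[k_time]:.4f} s")
```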

  3. Gravity inversion using wavelet-based compression on parallel hybrid CPU/GPU systems: application to southwest Ghana

    NASA Astrophysics Data System (ADS)

    Martin, Roland; Monteiller, Vadim; Komatitsch, Dimitri; Perrouty, Stéphane; Jessell, Mark; Bonvalot, Sylvain; Lindsay, Mark

    2013-12-01

    We solve the 3-D gravity inverse problem using a massively parallel voxel (or finite element) implementation on a hybrid multi-CPU/multi-GPU (graphics processing units/GPUs) cluster. This allows us to obtain information on density distributions in heterogeneous media at an affordable computational cost. In a new software package called TOMOFAST3D, the inversion is solved with an iterative least-square or a gradient technique, which minimizes a hybrid L1-/L2-norm-based misfit function. It is drastically accelerated using either Haar or fourth-order Daubechies wavelet compression operators, which are applied to the sensitivity matrix kernels involved in the misfit minimization. The compression process behaves like a pre-conditioning of the huge linear system to be solved, and a reduction of two or three orders of magnitude in computational time can be obtained for a given number of CPU processor cores. The required memory storage is also reduced by a similar factor. Finally, we show how this CPU parallel inversion code can be accelerated further by a factor between 3.5 and 10 using GPU computing. Performance levels are given for an application to Ghana, and the physical information obtained after 3-D inversion using a sensitivity matrix with around 5.37 trillion elements is discussed. Using compression, the whole inversion process can last from a few minutes to less than an hour for a given number of processor cores, instead of tens of hours for a similar number of processor cores without compression.
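
    The compression idea can be sketched on a toy dense kernel: a 2-D Haar decomposition followed by hard thresholding leaves a sparse coefficient array that can be stored and applied cheaply. The kernel, decomposition level, and threshold below are illustrative assumptions, not the TOMOFAST3D settings.

```python
# A sketch of Haar-wavelet compression of a dense kernel matrix; the
# kernel, level, and threshold are toy assumptions.
import numpy as np
import pywt
from scipy import sparse

x = np.linspace(0.0, 1.0, 256)
G = 1.0 / (0.05 + np.abs(x[:, None] - x[None, :]))    # smooth dense kernel

coeffs = pywt.wavedec2(G, 'haar', level=4)
arr, slices = pywt.coeffs_to_array(coeffs)
arr[np.abs(arr) < 0.01 * np.abs(arr).max()] = 0.0     # hard threshold
G_compressed = sparse.csr_matrix(arr)                 # sparse wavelet-domain kernel
print(f"kept {G_compressed.nnz / arr.size:.1%} of the coefficients")

# Reconstruct to check the error introduced by the compression.
G_rec = pywt.waverec2(
    pywt.array_to_coeffs(arr, slices, output_format='wavedec2'), 'haar')
print(f"relative error: {np.linalg.norm(G - G_rec) / np.linalg.norm(G):.2e}")
```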

  4. Wavelet-based resolution recovery using an anatomical prior provides quantitative recovery for human population phantom PET [11C]raclopride data

    NASA Astrophysics Data System (ADS)

    Shidahara, M.; Tsoumpas, C.; McGinnity, C. J.; Kato, T.; Tamura, H.; Hammers, A.; Watabe, H.; Turkheimer, F. E.

    2012-05-01

    The objective of this study was to evaluate a resolution recovery (RR) method using a variety of simulated human brain [11C]raclopride positron emission tomography (PET) images. Simulated datasets of 15 numerical human phantoms were processed by a wavelet-based RR method using an anatomical prior. The anatomical prior was in the form of a hybrid segmented atlas, which combined an atlas for anatomical labelling and a PET image for functional labelling of each anatomical structure. We applied RR to both 60 min static and dynamic PET images. Recovery was quantified in 84 regions, comparing the typical 'true' value for the simulation (as obtained in normal subjects) with the simulated and RR PET images. The radioactivity concentration in the white matter, striatum and other cortical regions was successfully recovered for the 60 min static image of all 15 human phantoms; the dependence of the solution on accurate anatomical information was demonstrated by the difficulty of the technique in retrieving the subthalamic nuclei due to mismatch between the two atlases used for data simulation and recovery. Structural and functional synergy for resolution recovery (SFS-RR) improved quantification in the caudate and putamen, the main regions of interest, from -30.1% and -26.2% to -17.6% and -15.1%, respectively, for the 60 min static image, and from -51.4% and -38.3% to -27.6% and -20.3% for the binding potential (BPND) image. The proposed methodology proved effective in the RR of small structures from brain [11C]raclopride PET images. The improvement is consistent across the anatomical variability of a simulated population as long as accurate anatomical segmentations are provided.

  5. Local Wavelet-Based Filtering of Electromyographic Signals to Eliminate the Electrocardiographic-Induced Artifacts in Patients with Spinal Cord Injury

    PubMed Central

    Nitzken, Matthew; Bajaj, Nihit; Aslan, Sevda; Gimel’farb, Georgy; Ovechkin, Alexander

    2013-01-01

    Surface electromyography (EMG) is a standard method used in clinical practice and research to assess motor function and to help diagnose neuromuscular pathology in human and animal models. EMG recorded from trunk muscles involved in the activity of breathing can be used as a direct measure of respiratory motor function in patients with spinal cord injury (SCI) or other disorders associated with motor control deficits. However, EMG potentials recorded from these muscles are often contaminated with heart-induced electrocardiographic (ECG) signals. Elimination of these artifacts plays a critical role in the precise measurement of respiratory muscle electrical activity. This study was undertaken to find an optimal approach to eliminating ECG artifacts from EMG recordings. Conventional global filtering can be used to decrease the ECG-induced artifact; however, this method can alter the EMG signal and change physiologically relevant information. We hypothesize that, unlike global filtering, localized removal of ECG artifacts will not change the original EMG signals. We developed an approach to remove the ECG artifacts without altering the amplitude and frequency components of the EMG signal, using an externally recorded ECG signal as a mask to locate areas of ECG spikes within the EMG data. Segments containing ECG spikes were decomposed into 128 sub-wavelets by a custom-scaled Morlet wavelet transform, the ECG-related sub-wavelets at the ECG spike locations were removed, and a de-noised EMG signal was reconstructed. The validity of the proposed method was proven using mathematically simulated synthetic signals and EMG obtained from SCI patients. We compare the root-mean-square error and the relative change in variance between this method and global, notch, and adaptive filters. The results show that localized wavelet-based filtering has the benefit of not introducing error into the native EMG signal and accurately removes ECG artifacts from EMG signals. PMID:24307920
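
    A minimal sketch of the localized (as opposed to global) filtering idea follows, with a stationary wavelet transform standing in for the paper's custom-scaled Morlet decomposition: coefficients are zeroed only inside an ECG-spike mask, so the EMG outside the mask is untouched. The signals, spike locations, mask width, and wavelet are all synthetic assumptions, and zeroing whole masked spans is cruder than removing only the ECG-related sub-wavelets.

```python
# A sketch of localized wavelet filtering with a stationary wavelet
# transform standing in for the custom Morlet decomposition; the EMG,
# spike locations, and mask width are synthetic assumptions.
import numpy as np
import pywt

rng = np.random.default_rng(4)
n = 1024
emg = 0.3 * rng.standard_normal(n)            # stand-in EMG activity
spikes = np.arange(100, n, 250)               # spike locations (from ECG channel)
sig = emg.copy()
sig[spikes] += 4.0                            # ECG-induced artifacts

mask = np.zeros(n, dtype=bool)                # localize: a window around each spike
for i in spikes:
    mask[max(0, i - 8):i + 8] = True

# Undecimated transform keeps one coefficient per sample, so the time-domain
# mask applies directly to every decomposition level.
coeffs = pywt.swt(sig, 'db4', level=4)
cleaned = [(np.where(mask, 0.0, cA), np.where(mask, 0.0, cD)) for cA, cD in coeffs]
rec = pywt.iswt(cleaned, 'db4')
print(f"max residual at spike locations: {np.abs(rec[spikes]).max():.3f}")
```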

  6. Wavelet-based algorithm to the evaluation of contrasted hepatocellular carcinoma in CT-images after transarterial chemoembolization

    PubMed Central

    2014-01-01

    Background: Hepatocellular carcinoma is a primary tumor of the liver whose treatment modalities depend on tumor stage. After local therapies, tumor evaluation is based on the mRECIST criteria, which involve measuring the maximum diameter of the viable lesion. This paper describes a computational algorithm that measures the maximum diameter of the tumor through the contrast-enhanced area of the lesions. Methods: 63 computed tomography (CT) slices from 23 patients were assessed. Non-contrasted liver and typical HCC nodules were evaluated, and a virtual phantom was developed for this purpose. Detection and quantification by the algorithm were optimized using the virtual phantom. We then compared the algorithm's estimates of the maximum diameter of the target lesions against radiologist measures. Results: Computed results for the maximum diameter are in good agreement with those obtained by radiologist evaluation, indicating that the algorithm was able to detect the tumor limits properly. A comparison of the maximum diameter estimated by the radiologist versus the algorithm revealed differences on the order of 0.25 cm for large-sized tumors (diameter > 5 cm), whereas differences of less than 1.0 cm were found for small-sized tumors. Conclusions: Algorithm and radiologist measures agreed closely for small-sized tumors, with a slight decrease in agreement for tumors greater than 5 cm. Therefore, traditional methods for measuring lesion diameter should be complemented by non-subjective measurement methods, which would allow a more accurate evaluation of the contrast-enhanced areas of HCC according to the mRECIST criteria. PMID:25064234

  7. Densities of individually marked migrants away from the marking site to estimate population sizes: a test with three wader populations

    Microsoft Academic Search

    Bernard Spaans; Laurens Van Kooten; Jenny Cremer; Jutta Leyrer; Theunis Piersma

    2011-01-01

    Capsule: Population estimates based on the mark-resighting method can be a useful alternative to population-wide counts. Aims: To investigate whether the mark-resighting method can be used as an alternative to counts to estimate the size of wader populations. Methods: Individual colour-marking and subsequent resightings allowed accurate estimates of annual survival for three populations of waders, on which basis we could estimate the sizes of these populations.

  8. Estimating CDM particle trajectories in the mildly non-linear regime of structure formation. Implications for the density field in real and redshift space

    SciTech Connect

    Tassev, Svetlin [Center for Astrophysics, Harvard University, Cambridge, MA 02138 (United States); Zaldarriaga, Matias, E-mail: stassev@cfa.harvard.edu, E-mail: matiasz@ias.edu [School of Natural Sciences, Institute for Advanced Study, Olden Lane, Princeton, NJ 08540 (United States)

    2012-12-01

    We obtain approximations for the CDM particle trajectories starting from Lagrangian Perturbation Theory. These estimates for the CDM trajectories result in approximations for the density in real and redshift space, as well as for the momentum density, that are better than what standard Eulerian and Lagrangian perturbation theory give. For the real space density, we find that our proposed approximation gives a good cross-correlation (> 95%) with the non-linear density down to scales almost a factor of two smaller than the non-linear scale, and a factor of six smaller than the corresponding scale obtained using linear theory. This allows for a speed-up of an order of magnitude or more in the scanning of the cosmological parameter space with N-body simulations for the scales relevant for the baryon acoustic oscillations. Possible future applications of our method include baryon acoustic peak reconstruction, building mock galaxy catalogs, and momentum field reconstruction.

  9. Estimating the influence of population density and dispersal behavior on the ability to detect and monitor Agrilus planipennis (Coleoptera: Buprestidae) populations.

    PubMed

    Mercader, R J; Siegert, N W; McCullough, D G

    2012-02-01

    Emerald ash borer, Agrilus planipennis Fairmaire (Coleoptera: Buprestidae), a phloem-feeding pest of ash (Fraxinus spp.) trees native to Asia, was first discovered in North America in 2002. Since then, A. planipennis has been found in 15 states and two Canadian provinces and has killed tens of millions of ash trees. Understanding the probability of detecting and accurately delineating low-density populations of A. planipennis is a key component of effective management strategies. Here we approach this issue by 1) quantifying the efficiency of sampling nongirdled ash trees to detect new infestations of A. planipennis under varying population densities and 2) evaluating the likelihood of accurately determining the localized spread of discrete A. planipennis infestations. To estimate the probability a sampled tree would be detected as infested across a gradient of A. planipennis densities, we used A. planipennis larval density estimates collected during intensive surveys conducted in three recently infested sites with known origins. Results indicated the probability of detecting low-density populations by sampling nongirdled trees was very low, even when detection tools were assumed to have three-fold higher detection probabilities than nongirdled trees. Using these results and an A. planipennis spread model, we explored the expected accuracy with which the spatial extent of an A. planipennis population could be determined. Model simulations indicated a poor ability to delineate the extent of the distribution of localized A. planipennis populations, particularly when a small proportion of the population was assumed to have a higher propensity for dispersal. PMID:22420280

  10. Estimation of breast percent density in raw and processed full field digital mammography images via adaptive fuzzy c-means clustering and support vector machine segmentation

    SciTech Connect

    Keller, Brad M.; Nathan, Diane L.; Wang Yan; Zheng Yuanjie; Gee, James C.; Conant, Emily F.; Kontos, Despina [Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania 19104 (United States); Applied Mathematics and Computational Science, University of Pennsylvania, Philadelphia, Pennsylvania 19104 (United States); Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania 19104 (United States)

    2012-08-15

    Purpose: The amount of fibroglandular tissue content in the breast as estimated mammographically, commonly referred to as breast percent density (PD%), is one of the most significant risk factors for developing breast cancer. Approaches to quantify breast density commonly focus on either semiautomated methods or visual assessment, both of which are highly subjective. Furthermore, most studies published to date investigating computer-aided assessment of breast PD% have been performed using digitized screen-film mammograms, while digital mammography is increasingly replacing screen-film mammography in breast cancer screening protocols. Digital mammography imaging generates two types of images for analysis, raw (i.e., 'FOR PROCESSING') and vendor postprocessed (i.e., 'FOR PRESENTATION'), of which postprocessed images are commonly used in clinical practice. Development of an algorithm which effectively estimates breast PD% in both raw and postprocessed digital mammography images would be beneficial in terms of direct clinical application and retrospective analysis. Methods: This work proposes a new algorithm for fully automated quantification of breast PD% based on adaptive multiclass fuzzy c-means (FCM) clustering and support vector machine (SVM) classification, optimized for the imaging characteristics of both raw and processed digital mammography images as well as for individual patient and image characteristics. Our algorithm first delineates the breast region within the mammogram via an automated thresholding scheme to identify background air, followed by a straight-line Hough transform to extract the pectoral muscle region. The algorithm then applies adaptive FCM clustering based on an optimal number of clusters derived from image properties of the specific mammogram to subdivide the breast into regions of similar gray-level intensity. Finally, an SVM classifier is trained to identify which clusters within the breast tissue are likely fibroglandular; these are then aggregated into a final dense tissue segmentation that is used to compute breast PD%. Our method is validated on a group of 81 women for whom bilateral, mediolateral oblique, raw and processed screening digital mammograms were available, and agreement is assessed with both continuous and categorical density estimates made by a trained breast-imaging radiologist. Results: Strong association between algorithm-estimated and radiologist-provided breast PD% was detected for both raw (r = 0.82, p < 0.001) and processed (r = 0.85, p < 0.001) digital mammograms on a per-breast basis. Stronger agreement was found when overall breast density was assessed on a per-woman basis for both raw (r = 0.85, p < 0.001) and processed (r = 0.89, p < 0.001) mammograms. Strong agreement between categorical density estimates was also seen (weighted Cohen's κ ≥ 0.79). Repeated-measures analysis of variance demonstrated no statistically significant differences between the PD% estimates (p > 0.1) due to either presentation of the image (raw vs processed) or method of PD% assessment (radiologist vs algorithm). Conclusions: The proposed fully automated algorithm was successful in estimating breast percent density from both raw and processed digital mammographic images. Accurate assessment of a woman's breast density is critical in order for the estimate to be incorporated into risk assessment models. These results show promise for the clinical application of the algorithm in quantifying breast density in a repeatable manner, both at time of imaging as well as in retrospective studies.
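
    A minimal numpy sketch of the fuzzy c-means step at the heart of this pipeline is given below. It clusters a 1-D sample of gray levels; the adaptive choice of cluster number, the breast masking, and the SVM stage are omitted, and all data are synthetic stand-ins.

```python
# A numpy sketch of fuzzy c-means on gray levels; the adaptive cluster
# count, breast masking, and SVM stage of the full pipeline are omitted.
import numpy as np

def fcm(x, c=4, m=2.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    u = rng.dirichlet(np.ones(c), size=x.size)        # memberships (rows sum to 1)
    for _ in range(iters):
        um = u ** m
        centers = um.T @ x / um.sum(axis=0)           # fuzzy-weighted centers
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12
        w = d ** (-2.0 / (m - 1.0))
        u = w / w.sum(axis=1, keepdims=True)          # standard FCM update
    return centers, u

rng = np.random.default_rng(5)
pixels = np.concatenate([rng.normal(mu, 5.0, 500) for mu in (30, 90, 150, 210)])
centers, _ = fcm(pixels)
print(np.sort(centers).round(1))
```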

  11. VARLET and PHALET two wavelet based filter methods to separate stellar variation, orbital disturbances and instrumental effects from transit events in stellar light curves.

    NASA Astrophysics Data System (ADS)

    Grziwa, S.; Korth, J.; Pätzold, M.

    2014-04-01

    The space telescopes CoRoT and Kepler have provided a huge number of high-resolution stellar light curves. These light curves are searched for transit signals, which may be produced by planets passing in front of the stellar disc. However, flux variations from star spots, pulsation, flares, glitches, hot pixels, etc. dominate the stellar light curves and mask faint transit signals, in particular those of small exoplanets, which may lead to missed candidates or a high rate of false detections. Only fully automated filtering and detection algorithms make it possible to manage the huge number of stellar light curves searched for transits; this will become even more important for the future missions PLATO and TESS. The Rheinisches Institut für Umweltforschung (RIU-PF), one of the CoRoT detection teams, has developed two model-independent wavelet-based filter techniques, VARLET and PHALET, to reduce flux variability in light curves and thereby improve the transit search. The VARLET filter separates faint transit signals from stellar variations without using a priori information about the target star, distinguishing variations by frequency, amplitude, and shape and separating the large-scale variations from the white noise. The transit feature is not extracted and remains in the noise time series, which makes it much easier for the search routine EXOTRANS (Grziwa, S. et al. 2012 [1]) to find transits. The PHALET filter is used to separate periodic features with well-known periods independent of their shape. With PHALET it is possible to remove detected diluting binaries and other periodic effects (e.g. disturbances caused by the spacecraft motion in its Earth orbit). Its main purpose, however, is to separate already detected transits in order to search for transits of additional planets in the same stellar system. RIU-PF searched all Kepler light curves for planetary transits with VARLET and PHALET included in the processing pipeline. The results of this search are compared with the public Kepler candidate list: about 93% of the 2232 systems in the newest Kepler candidate list were confirmed. In addition, more than 20 new planetary systems and more than 15 additional candidates in already known multi-planet systems could be added to the list and will be presented.
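
    In the spirit of VARLET (but not the RIU-PF code itself), the sketch below separates slow variability from a synthetic lightcurve by keeping only the coarse approximation of a discrete wavelet decomposition as the trend model; the wavelet choice, decomposition level, and lightcurve are illustrative assumptions.

```python
# A sketch of wavelet separation of slow stellar variability; lightcurve
# and decomposition settings are synthetic assumptions.
import numpy as np
import pywt

rng = np.random.default_rng(6)
t = np.linspace(0.0, 20.0, 4096)                       # time in days
flux = (1.0 + 0.01 * np.sin(2 * np.pi * t / 7.3)       # spot-like variation
        + 0.001 * rng.standard_normal(t.size))
flux[(t % 3.1) < 0.08] -= 0.004                        # shallow periodic transits

coeffs = pywt.wavedec(flux, 'db6', level=8)
# Keep only the coarse approximation as the large-scale variability model.
trend = pywt.waverec([coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]],
                     'db6')[:t.size]
residual = flux - trend                                # transits + white noise
print(f"residual rms: {residual.std():.5f}")
```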

  12. A CROSS-MATCH OF 2MASS AND SDSS: NEWLY FOUND L AND T DWARFS AND AN ESTIMATE OF THE SPACE DENSITY OF T DWARFS

    E-print Network

    Metchev, Stanimir

    We report new L and T dwarfs found in a cross-match of the SDSS Data Release 1 and 2MASS, in addition to the 13 already known in the SDSS DR1 footprint. We also identify 22 new candidate and bona fide L and T dwarfs.

  13. New Measurements of the Densities of Copper- and Nickel-Sulfide Liquids and Preliminary Estimates of the Partial Molar Volumes of Cu, Ni, S and O

    NASA Astrophysics Data System (ADS)

    Kress, V. C.; Ghiorso, M. S.

    2001-12-01

    We present the results of density measurements in Ni- and Cu-sulfide liquids. Density measurements were performed in situ at 1250 °C under controlled-atmosphere conditions using the modified single-bob (MSB) Archimedean method. The MSB consists of a ~2 mm diameter rod with a ~6 mm long, ~7 mm diameter cylindrical bob attached ~7 mm from the base of the rod. The bob and crucible were constructed from yttria-stabilized zirconia to minimize reaction with the corrosive sulfide liquid. Zirconia density at temperature was calibrated against the known density of molten Cu metal (Drotning 1981, High Temp-High Press 13: 441-458). Density was determined by measuring buoyancy as a function of immersed volume. Buoyancy was measured with a 0.1 mg resolution analytical balance interfaced with a computer. The crucible is mounted on a micrometer 'elevator' allowing regulation of immersion with 0.005 mm resolution. Temperature was measured with an S-type thermocouple in contact with the bottom of the crucible. We explored log(fO2) from -8.2 to -12.6 and log(fS2) from -1.9 to -3.3. Five measurements have been made so far. Cu-sulfide densities range from 6.32 to 6.36 g/cc and were reproducible to ±0.7%. Measured Ni-sulfide densities were lower, ranging from 5.27 to 5.79 g/cc. Wetting problems in Ni-sulfide compositions made these measurements more difficult. Reproducibility in Ni-sulfide melts was roughly ±5%. Measured density values were used to regress preliminary partial molar volumes of sulfide liquids in the Cu-Ni-S-O system. A linear least-squares fit was derived from the five density measurements along with the densities of pure molten Cu (Drotning 1981, ibid.) and Ni (Nasch 1995, Phys Chem Liq 29: 43-58) at 1250 °C. Melt compositions under experimental conditions were estimated using the thermodynamic model of Kress (submitted). The molar volume of the system (V) can be expressed as: V = 8.18 X_Cu + 7.38 X_Ni + 30.33 X_S, where X_i is the mole fraction of component i. Oxygen contents were too low to estimate the partial molar volume of this component.
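
    The regression step above is an ordinary linear least-squares problem. A minimal numpy sketch follows; the compositions and molar volumes are illustrative placeholders constructed to be consistent with the reported partial molar volumes, not the authors' measurements.

```python
# A numpy sketch of the least-squares step; compositions and molar
# volumes are placeholders consistent with the reported coefficients.
import numpy as np

# Rows: mole fractions (X_Cu, X_Ni, X_S) for each liquid.
X = np.array([[0.65, 0.00, 0.35],
              [0.00, 0.62, 0.38],
              [0.33, 0.31, 0.36],
              [1.00, 0.00, 0.00],     # pure molten Cu anchor
              [0.00, 1.00, 0.00]])    # pure molten Ni anchor
V = np.array([15.93, 16.10, 15.91, 8.18, 7.38])   # molar volumes, cm^3/mol

v_partial, *_ = np.linalg.lstsq(X, V, rcond=None)
print(dict(zip(('V_Cu', 'V_Ni', 'V_S'), np.round(v_partial, 2))))
```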

  14. Wavelet Based Cutting State Identification

    NASA Astrophysics Data System (ADS)

    Berger, B. S.; Minis, I.; Harley, J.; Rokni, M.; Papadopoulos, M.

    1998-06-01

    Chatter and non-chatter cutting states, associated with the orthogonal cutting of stiff metal cylinders, are identified through an analysis of the ratios of the mean absolute deviations of details of the biorthogonal 6,8 wavelet decomposition of cutting force measurements. Sequences of cutting experiments were performed in which either depth of cut or turning frequency was varied. For light and medium cutting, the ratio of the mean absolute deviations of details d3 and d4 is less than 7, while for chatter it is greater than 15. The kurtosis of detail d3 is shown to identify transitions to chatter.
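
    A minimal sketch of this indicator on synthetic force signals (the numeric thresholds quoted above belong to the paper's data, not to this toy example): decompose with the 'bior6.8' wavelet and compare the mean absolute deviations of details d3 and d4.

```python
# A sketch of the d3/d4 indicator on synthetic force signals; the
# sampling rate and tone frequency are assumptions.
import numpy as np
import pywt

def mad_ratio(force):
    coeffs = pywt.wavedec(force, 'bior6.8', level=4)   # [a4, d4, d3, d2, d1]
    d4, d3 = coeffs[1], coeffs[2]
    mad = lambda d: np.mean(np.abs(d - d.mean()))
    return mad(d3) / mad(d4)

rng = np.random.default_rng(7)
t = np.arange(4096) / 10_000.0                         # assume 10 kHz sampling
stable = rng.standard_normal(t.size)                   # broadband, noise-like
chatter = np.sin(2 * np.pi * 1000.0 * t) + 0.1 * rng.standard_normal(t.size)
print(f"stable: {mad_ratio(stable):.1f}   chatter: {mad_ratio(chatter):.1f}")
```

    A chatter tone landing in the d3 frequency band inflates the ratio relative to the noise-like stable cut, which is the separation the paper exploits.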

  15. Estimating vertical plant area density profile and growth parameters of a wheat canopy at different growth stages using three-dimensional portable lidar imaging

    NASA Astrophysics Data System (ADS)

    Hosoi, Fumiki; Omasa, Kenji

    Vertical plant area density profiles of a wheat (Triticum aestivum L.) canopy at different growth stages (tillering, stem elongation, flowering, and ripening) were estimated using high-resolution portable scanning lidar based on the voxel-based canopy profiling method. The canopy was scanned three-dimensionally by laser beams emitted from several measuring points surrounding the canopy. At the ripening stage, the central azimuth angle was inclined about 23° to the row direction to avoid obstruction of the beam into the lower canopy by the upper part. Plant area density profiles were estimated with root mean square errors of 0.28-0.79 m² m⁻³ at each growth stage and of 0.45 m² m⁻³ across all growth stages. Plant area index was also estimated, with absolute errors of 4.7%-7.7% at each growth stage and of 6.1% across all growth stages. Based on lidar-derived plant area density, the area of each type of organ (stems, leaves, ears) per unit ground area was related to the actual dry weight of each organ type, and regression equations were obtained. The standard errors of the equations were 4.1 g m⁻² for ears and 26.6 g m⁻² for stems and leaves. Based on these equations, the estimated total dry weight ranged from 63.3 to 279.4 g m⁻² for ears and from 35.8 to 375.3 g m⁻² for stems and leaves across the growth stages. Based on the estimated dry weight at ripening and the ratio of carbon to dry weight in wheat plants, the carbon stocks were 76.3 g C m⁻² for grain, 225.0 g C m⁻² for aboveground residue, and 301.3 g C m⁻² for all aboveground organs.

  16. Predictions for Uranus from a radiometric Bode's law. [planetary magnetic moment estimated from radio emission flux density

    NASA Technical Reports Server (NTRS)

    Desch, M. D.; Kaiser, M. L.

    1984-01-01

    Determinations by spacecraft of the low-frequency radio spectra and radiation beam geometry of the magnetospheres of Earth, Jupiter, and Saturn now permit a reliable assessment of the overall efficiency of the solar wind in stimulating intense, nonthermal radio bursts from these magnetospheres. It is found that earlier estimates of how magnetospheric radio output scales with the solar wind energy input must be greatly revised, with the result that, while the efficiency is much lower than previously thought, it is remarkably uniform from planet to planet. A 'radiometric Bode's law' is formulated from which a planet's magnetic moment can be estimated from its radio emission output. This law is applied to estimate the low-frequency radio power likely to be measured for Uranus by Voyager 2. It is shown how measurements of Uranus's radio flux can be used to estimate the planetary magnetic moment and solar wind stand-off distance before the in situ measurements.

  17. Late-Holocene climate evolution at the WAIS Divide site, West Antarctica: Bubble number-density estimates

    USGS Publications Warehouse

    Fegyveresi, J.M.; Alley, R.B.; Spencer, M.K.; Fitzpatrick, J.J.; Steig, E.J.; White, J.W.C.; McConnell, J.R.; Taylor, K.C.

    2011-01-01

    A surface cooling of ~1.7 °C occurred over the ~two millennia prior to ~1700 CE at the West Antarctic ice sheet (WAIS) Divide site, based on trends in the observed bubble number-density of samples from the WDC06A ice core, and on an independently constructed accumulation-rate history using annual-layer dating corrected for density variations and thinning from ice flow. Density increase and grain growth in polar firn are both controlled by temperature and accumulation rate, and the integrated effects are recorded in the number-density of bubbles as the firn changes to ice. Number-density is conserved in bubbly ice following pore close-off, allowing reconstruction of either paleotemperature or paleo-accumulation rate if the other is known. A quantitative late-Holocene paleoclimate reconstruction is presented for West Antarctica using data obtained from the WAIS Divide WDC06A ice core and a steady-state bubble number-density model. The resultant temperature history agrees closely with independent reconstructions based on stable-isotopic ratios of ice. The ~1.7 °C cooling trend observed is consistent with a decrease in Antarctic summer duration from changing orbital obliquity, although it remains possible that elevation change at the site contributed part of the signal. Accumulation rate and temperature dropped together, broadly consistent with control by saturation vapor pressure.

  18. An adaptive computer vision technique for estimating the biomass and density of loblolly pine plantations using digital orthophotography and LiDAR imagery

    NASA Astrophysics Data System (ADS)

    Bortolot, Zachary J.

    Forests have been proposed as a means of reducing atmospheric carbon dioxide levels due to their ability to store carbon as biomass. To quantify the amount of atmospheric carbon sequestered by forests, biomass and density estimates are often needed. This study develops, implements, and tests an individual tree-based algorithm for obtaining forest density and biomass using orthophotographs and small-footprint LiDAR imagery. It was designed to work with a range of forests and image types without modification, which is accomplished by using generic properties of trees found in many types of images. Multiple parameters are employed to determine how these generic properties are used. To set these parameters, training data are used in conjunction with an optimization algorithm (a modified Nelder-Mead simplex algorithm or a genetic algorithm). The training data consist of small images in which density and biomass are known. A first test of this technique was performed using 25 circular plots (radius = 15 m) placed in young pine plantations in central Virginia, together with false-color orthophotograph (spatial resolution = 0.5 m) or small-footprint LiDAR (interpolated to 0.5 m) imagery. The highest density prediction accuracies (r² up to 0.88, RMSE as low as 83 trees/ha) were found for runs where photointerpreted densities were used for training and testing. For tests run using density measurements made on the ground, accuracies were consistently higher for orthophotograph-based results than for LiDAR-based results, and were higher for trees with DBH ≥ 10 cm than for trees with DBH ≥ 7 cm. Biomass estimates obtained by the algorithm using LiDAR imagery had a lower RMSE (as low as 15.6 t/ha) than most comparable studies. The correlations between the actual and predicted values (r² up to 0.64) were lower than in comparable studies, but were generally highly significant (p ≤ 0.05 or 0.01). In all runs there was no obvious sensitivity to which training and testing data were selected. Methods were evaluated for combining predictions made using different parameter sets obtained after training on identical data. It was found that averaging the predictions produced improved results. After training using density estimates from the human photointerpreter, 89% of the trees located by the algorithm corresponded to trees found by the human photointerpreter. A comparison of the two optimization techniques found them to be comparable in speed and effectiveness.
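
    A minimal sketch of the parameter-tuning loop described above, using scipy's Nelder-Mead simplex search; the 'detector' and its two parameters are purely hypothetical stand-ins for the real tree-detection algorithm.

```python
# A sketch of the training loop with a Nelder-Mead search; the detector
# and its two parameters are hypothetical stand-ins.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(8)
true_density = rng.uniform(300.0, 1500.0, 25)      # trees/ha on training plots

def detector_density(params, k):
    # Hypothetical detector response: biased by threshold and radius choices.
    threshold, min_radius = params
    return true_density[k] * (1.0 + 0.3 * (threshold - 0.5)) + 50.0 * min_radius

def training_rmse(params):
    pred = np.array([detector_density(params, k) for k in range(true_density.size)])
    return np.sqrt(np.mean((pred - true_density) ** 2))

res = minimize(training_rmse, x0=[0.8, 2.0], method='Nelder-Mead')
print(f"tuned parameters: {np.round(res.x, 3)}, training RMSE: {res.fun:.1f} trees/ha")
```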

  19. Adaptive Estimation of Speed-Density Relations for Online Network Traffic Prediction

    E-print Network

    Bertini, Robert L.

    Slide fragments; recoverable content: given a real-time traffic sensor data stream on a subset of network links, speed-density relations are estimated adaptively for online traffic prediction, using a linear dynamic system (with backward-shift operator B) and the LWR model based on an equilibrium speed-density relation (a modified Greenshields model).

  20. Volume density of earthworm burrows in compacted cores of soil as estimated by direct and indirect methods

    Microsoft Academic Search

    J. R. Hirth; B. M. McKenzie; J. M. Tisdall

    1996-01-01

    After earthworms of the species Aporrectodea caliginosa and A. rosea had burrowed in compacted cores of soil for 68 days, the cores were sectioned horizontally. The upper surface of each sectioned layer of soil was photographed before it was dissected and the dimensions of all burrows within the layer were measured. Volume densities calculated from the direct measurement of burrows were