
1

NASA Astrophysics Data System (ADS)

Stand density, expressed as the number of trees per unit area, is an important forest management parameter. It is used by foresters to evaluate regeneration, to assess the effect of forest management measures, or as an indicator variable for other stand parameters like age, basal area, and volume. In this work, a new density estimation procedure is proposed based on wavelet analysis of very high resolution optical imagery. Wavelet coefficients are related to reference densities on a per segment basis, using an artificial neural network. The method was evaluated on artificial imagery and two very high resolution datasets covering forests in Heverlee, Belgium and Les Beaux de Provence, France. Whenever possible, the method was compared with the well-known local maximum filter. Results show good correspondence between predicted and true stand densities. The average absolute error and the correlation between predicted and true density were 149 trees/ha and 0.91 for the artificial dataset, 100 trees/ha and 0.85 for the Heverlee site, and 49 trees/ha and 0.78 for the Les Beaux de Provence site. The local maximum filter consistently yielded lower accuracies, as it is essentially a tree localization tool, rather than a density estimator.

van Coillie, Frieke M. B.; Verbeke, Lieven P. C.; de Wulf, Robert R.

2011-01-01

2

An efficient wavelet-based motion estimation algorithm

NASA Astrophysics Data System (ADS)

In this paper, we propose a wavelet-based fast motion estimation algorithm for video sequence encoding at a low bit-rate. By using one of the properties of the wavelet transform, multi-resolution analysis (MRA), together with spatial interpolation of the image, we can simultaneously reduce the prediction error and the computational complexity inherent in video sequence encoding. In addition, by defining a significant block (SB) based on the differential information of the wavelet coefficients between successive frames, the proposed algorithm makes up for the increase in the number of motion vectors that occurs when the multi-resolution motion estimation (MRME) algorithm is used. As a result, we are not only able to improve the peak signal-to-noise ratio (PSNR), but also to reduce the computational complexity by up to 67%.

Bae, Jin-Woo; Lee, Seung-Hyun; Yoo, Ji-Sang

2004-11-01
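The multi-resolution decomposition this entry relies on can be illustrated with a one-level Haar transform. The sketch below is purely illustrative (it is not the authors' algorithm): it shows how a signal splits into a half-resolution approximation, which a coarse-to-fine motion search would start from, plus detail coefficients.

```python
import math

def haar_dwt(signal):
    """One level of the Haar discrete wavelet transform.

    Returns (approximation, detail) coefficient lists; the approximation
    is a half-resolution version of the input.
    """
    assert len(signal) % 2 == 0, "signal length must be even"
    s = math.sqrt(2.0)
    approx = [(signal[i] + signal[i + 1]) / s for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / s for i in range(0, len(signal), 2)]
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse of haar_dwt: perfectly reconstructs the original signal."""
    s = math.sqrt(2.0)
    out = []
    for a, d in zip(approx, detail):
        out.append((a + d) / s)
        out.append((a - d) / s)
    return out

# Illustrative signal (made up for this sketch)
x = [4.0, 2.0, 5.0, 7.0, 1.0, 3.0, 6.0, 2.0]
a, d = haar_dwt(x)
```

Repeating `haar_dwt` on the approximation yields the multi-level pyramid that MRA-style motion estimators traverse.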

3

Value-at-risk estimation with wavelet-based extreme value theory: Evidence from emerging markets

NASA Astrophysics Data System (ADS)

This paper introduces wavelet-based extreme value theory (EVT) for univariate value-at-risk estimation. Wavelets and EVT are combined for volatility forecasting to estimate a hybrid model. In the first stage, wavelets are used as a threshold in the generalized Pareto distribution, and in the second stage, EVT is applied with a wavelet-based threshold. This new model is applied to two major emerging stock markets: the Istanbul Stock Exchange (ISE) and the Budapest Stock Exchange (BUX). The relative performance of wavelet-based EVT is benchmarked against the Riskmetrics-EWMA, ARMA-GARCH, generalized Pareto distribution, and conditional generalized Pareto distribution models. The empirical results show that wavelet-based extreme value theory increases the predictive performance of financial forecasting according to the number of violations and tail-loss tests. The superior forecasting performance of the wavelet-based EVT model is also consistent with Basel II requirements, and this new model can be used by financial institutions as well.

Cifter, Atilla

2011-06-01
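One of the benchmark models named above, Riskmetrics-EWMA, is simple enough to sketch. The snippet below is a minimal illustration, not code from the paper; the decay factor 0.94 and the normal-tail VaR quantile are the standard RiskMetrics conventions for daily data, and the return series is made up.

```python
import math

LAMBDA = 0.94  # RiskMetrics decay factor for daily returns
Z_99 = 2.326   # approximate 99% standard-normal quantile

def ewma_volatility(returns, lam=LAMBDA):
    """Recursive EWMA variance: var_t = lam*var_{t-1} + (1-lam)*r_t^2."""
    var = returns[0] ** 2  # seed with the first squared return
    for r in returns[1:]:
        var = lam * var + (1.0 - lam) * r ** 2
    return math.sqrt(var)

def value_at_risk(returns, z=Z_99):
    """One-day VaR as z times the EWMA volatility (normal-tail assumption)."""
    return z * ewma_volatility(returns)

# Hypothetical daily returns
daily_returns = [0.01, -0.02, 0.015, -0.005, 0.02, -0.03]
var_99 = value_at_risk(daily_returns)
```

The hybrid model in the paper replaces this normal-tail assumption with an EVT tail whose threshold is chosen via wavelets.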

4

Wavelet-based image estimation: an empirical Bayes approach using Jeffrey's noninformative prior

The sparseness and decorrelation properties of the discrete wavelet transform have been exploited to develop powerful denoising methods. However, most of these methods have free parameters which have to be adjusted or estimated. In this paper, we propose a wavelet-based denoising technique without any free parameters; it is, in this sense, a …

Mário A. T. Figueiredo; Robert D. Nowak

2001-01-01

5

Estimation of Modal Parameters Using a Wavelet-Based Approach

NASA Technical Reports Server (NTRS)

Modal stability parameters are extracted directly from aeroservoelastic flight test data by decomposition of accelerometer response signals into time-frequency atoms. Logarithmic sweeps and sinusoidal pulses are used to generate DAST closed loop excitation data. Novel wavelets constructed to extract modal damping and frequency explicitly from the data are introduced. The so-called Haley and Laplace wavelets are used to track time-varying modal damping and frequency in a matching pursuit algorithm. Estimation of the trend to aeroservoelastic instability is demonstrated successfully from analysis of the DAST data.

Lind, Rick; Brenner, Marty; Haley, Sidney M.

1997-01-01

6

NASA Astrophysics Data System (ADS)

This paper analyzes the statistical dependencies between wavelet coefficients in wavelet-based decompositions of 3D meshes. These dependencies are estimated using the interband, intraband and composite mutual information. For images, the literature shows that the composite and the intraband mutual information are approximately equal, and they are both significantly larger than the interband mutual information. This indicates that intraband coding designs should be favored over the interband zerotree-based coding approaches, in order to better capture the residual dependencies between wavelet coefficients. This motivates the design of intraband wavelet-based image coding schemes, such as quadtree-limited (QT-L) coding, or the state-of-the-art JPEG-2000 scalable image coding standard. In this paper, we empirically investigate whether these findings hold in the case of meshes as well. The mutual information estimation results show that, although the intraband mutual information is significantly larger than the interband mutual information, the composite case cannot be discarded, as the composite mutual information is also significantly larger than the intraband mutual information. One concludes that intraband and composite codec designs should be favored over the traditional interband zerotree-based coding approaches commonly followed in scalable coding of meshes.

Satti, Shahid M.; Denis, Leon; Munteanu, Adrian; Cornelis, Jan; Schelkens, Peter

2009-02-01

7

Wavelet-based linear-response time-dependent density-functional theory

NASA Astrophysics Data System (ADS)

Linear-response time-dependent (TD) density-functional theory (DFT) has been implemented in the pseudopotential wavelet-based electronic structure program BIGDFT and results are compared against those obtained with the all-electron Gaussian-type orbital program DEMON2K for the calculation of electronic absorption spectra of N2 using the TD local density approximation (LDA). The two programs give comparable excitation energies and absorption spectra once suitably extensive basis sets are used. Convergence of LDA density orbitals and orbital energies to the basis-set limit is significantly faster for BIGDFT than for DEMON2K. However the number of virtual orbitals used in TD-DFT calculations is a parameter in BIGDFT, while all virtual orbitals are included in TD-DFT calculations in DEMON2K. As a reality check, we report the X-ray crystal structure and the measured and calculated absorption spectrum (excitation energies and oscillator strengths) of the small organic molecule N-cyclohexyl-2-(4-methoxyphenyl)imidazo[1,2-a]pyridin-3-amine.

Natarajan, Bhaarathi; Genovese, Luigi; Casida, Mark E.; Deutsch, Thierry; Burchak, Olga N.; Philouze, Christian; Balakirev, Maxim Y.

2012-06-01

8

As is well-known, a heteroskedasticity and autocorrelation consistent covariance matrix is proportional to a spectral density matrix at frequency zero and can be consistently estimated by such popular kernel methods as those of Andrews-Newey-West. In practice, it is difficult to estimate the spectral density matrix if it has a peak at frequency zero, which can arise when there is strong …

Yongmiao Hong; Jin Lee

2000-01-01

9

Fetal QRS detection and heart rate estimation: a wavelet-based approach.

Fetal heart rate monitoring is used for pregnancy surveillance in obstetric units all over the world, but in spite of recent advances in analysis methods, there are still inherent technical limitations that bound its contribution to the improvement of perinatal indicators. In this work, a previously published wavelet-transform-based QRS detector, validated over standard electrocardiogram (ECG) databases, is adapted to fetal QRS detection over abdominal fetal ECG. Maternal ECG waves were first located using the original detector, and afterwards a version with parameters adapted for fetal physiology was applied to detect fetal QRS, excluding signal singularities associated with maternal heartbeats. Single-lead (SL) marks were combined into a single annotator with post-processing rules (SLR), from which fetal RR and fetal heart rate (FHR) measures can be computed. Data from PhysioNet with reference fetal QRS locations was considered for validation, with SLR outperforming SL, including ICA-based detections. The error in estimated FHR using SLR was lower than 20 bpm for more than 80% of the processed files. The median error in 1-min-based FHR estimation was 0.13 bpm, with a correlation between reference and estimated FHR of 0.48, which increased to 0.73 when considering only records for which estimated FHR > 110 bpm. This allows us to conclude that the proposed methodology is able to provide a clinically useful estimation of the FHR. PMID:25070210

Almeida, Rute; Gonçalves, Hernâni; Bernardes, João; Rocha, Ana Paula

2014-08-01
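Once QRS marks are available, converting them to RR intervals and a heart-rate estimate is straightforward. The sketch below illustrates only that last step, not the SLR detector itself; the QRS detection times are hypothetical values spaced roughly 0.42 s apart, a plausible fetal rhythm.

```python
def heart_rate_bpm(qrs_times):
    """Instantaneous heart rate (bpm) from successive QRS times in seconds."""
    rr = [t2 - t1 for t1, t2 in zip(qrs_times, qrs_times[1:])]
    return [60.0 / interval for interval in rr]

def median(values):
    """Median of a list; robust against isolated missed/extra beats."""
    s = sorted(values)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else 0.5 * (s[mid - 1] + s[mid])

# Hypothetical fetal QRS marks (seconds) -> roughly 140 bpm
marks = [0.00, 0.42, 0.85, 1.27, 1.70, 2.12]
fhr = heart_rate_bpm(marks)
```

A median over a window (rather than a mean) is a common way to keep single detection errors from distorting the FHR trace.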

10

A new wavelet based algorithm for estimating respiratory motion rate using UWB radar

UWB signals have become attractive for their particular advantage of a narrow pulse width, which makes them suitable for remote sensing of vital signs. In this paper, a novel approach to estimate periodic motion rates using ultra-wideband (UWB) signals is proposed. The proposed algorithm, which is based on the wavelet transform, is used as a non-contact tool for measurement …

Mehran Baboli; Seyed Ali Ghorashi; Namdar Saniei; Alireza Ahmadian

2009-01-01

11

Estimation of shock induced vorticity on irregular gaseous interfaces: a wavelet-based approach

NASA Astrophysics Data System (ADS)

We study the interaction of a shock with a density-stratified gaseous interface (Richtmyer-Meshkov instability) with localized jagged and irregular perturbations, with the aim of developing an analytical model of the vorticity deposition on the interface immediately after the passage of the shock. The jagged perturbations, meant to simulate machining errors on the surface of a laser fusion target, are characterized using Haar wavelets. Numerical solutions of the Euler equations show that the vortex sheet deposited on the jagged interface rolls into multiple mushroom-shaped dipolar structures which begin to merge before the interface evolves into a bubble-spike structure. The peaks in the distribution of x-integrated vorticity (vorticity integrated in the direction of the shock motion) decay in time as their bases widen, corresponding to the growth and merger of the mushrooms. However, these peaks were not seen to move significantly along the interface at early times, i.e. t < 10τ, where τ is the interface traversal time of the shock. We tested our analytical model against inviscid simulations for two test cases: a Mach 1.5 shock interacting with an interface with a density ratio of 3, and a Mach 10 shock interacting with an interface with a density ratio of 10. We find that this model captures the early-time (t/τ ≈ 1) vorticity deposition (as characterized by the first and second moments of the vorticity distributions) to within 5% of the numerical results.

Ray, J.; Jameson, L.

2005-11-01

12

Wavelet-based estimation of the hemodynamic responses in diffuse optical imaging.

Diffuse optical imaging uses light to provide a surrogate measure of neuronal activation through the hemodynamic responses. The relative low absorption of near-infrared light enables measurements of hemoglobin changes at depths reaching the first centimeter of the cortex. The rapid rate of acquisition and the access to both oxy and deoxy-hemoglobin leads to new challenges when trying to uncouple physiology from the signal of interest. In particular, recent work provided evidence of the presence of a 1/f noise structure in optical signals and showed that a general linear model based on wavelets can be used to decorrelate the structured noise and provide a superior estimator of response amplitude when compared with conventional techniques. In this work the wavelet techniques are extended to recover the full temporal shape of the hemodynamic responses. A comparison with other models is provided as well as a case study on finger-tapping data. PMID:20494609

Lina, J M; Matteau-Pelletier, C; Dehaes, M; Desjardins, M; Lesage, F

2010-08-01

13

Wavelet-based Evapotranspiration Forecasts

NASA Astrophysics Data System (ADS)

Providing a reliable short-term forecast of evapotranspiration (ET) could be a valuable element for improving the efficiency of irrigation water delivery systems. In the last decade, wavelet transform has become a useful technique for analyzing the frequency domain of hydrological time series. This study shows how wavelet transform can be used to access statistical properties of evapotranspiration. The objective of the research reported here is to use wavelet-based techniques to forecast ET up to 16 days ahead, which corresponds to the LANDSAT 7 overpass cycle. The properties of the ET time series, both physical and statistical, are examined in the time and frequency domains. We use the information about the energy decomposition in the wavelet domain to extract meaningful components that are used as inputs for ET forecasting models. Seasonal autoregressive integrated moving average (SARIMA) and multivariate relevance vector machine (MVRVM) models are coupled with the wavelet-based multiresolution analysis (MRA) results and used to generate short-term ET forecasts. Accuracy of the models is estimated and model robustness is evaluated using the bootstrap approach.

Bachour, R.; Maslova, I.; Ticlavilca, A. M.; McKee, M.; Walker, W.

2012-12-01

14

A wavelet-based spectral procedure for steady-state simulation analysis

We develop WASSP, a wavelet-based spectral method for steady-state simulation analysis (available online 27 June 2005). Using the thresholded wavelet coefficients, WASSP computes estimators of the batch-means log-spectrum and the steady-state …

15

Airborne Crowd Density Estimation

NASA Astrophysics Data System (ADS)

This paper proposes a new method for estimating human crowd densities from aerial imagery. Applications benefiting from an accurate crowd monitoring system are mainly found in the security sector. Normally, crowd density estimation is done through in-situ camera systems mounted at high locations, although this is not appropriate in the case of very large crowds with thousands of people. Using airborne camera systems in these scenarios is a new research topic. Our method uses a preliminary filtering of the whole image space by suitable and fast interest point detection resulting in a number of image regions, possibly containing human crowds. Validation of these candidates is done by transforming the corresponding image patches into a low-dimensional and discriminative feature space and classifying the results using a support vector machine (SVM). The feature space is spanned by texture features computed by applying a Gabor filter bank with varying scale and orientation to the image patches. For evaluation, we use 5 different image datasets acquired by the 3K+ aerial camera system of the German Aerospace Center during real mass events like concerts or football games. To evaluate the robustness and generality of our method, these datasets are taken from different flight heights between 800 m and 1500 m above ground (keeping a fixed focal length) and varying daylight and shadow conditions. The results of our crowd density estimation are evaluated against a reference data set obtained by manually labeling tens of thousands of individual persons in the corresponding datasets and show that our method is able to estimate human crowd densities in challenging realistic scenarios.

Meynberg, O.; Kuschk, G.

2013-10-01

16

Wavelet-based semblance filtering

NASA Astrophysics Data System (ADS)

Fourier transform-based semblance analysis compares two time series on the basis of their phase as a function of frequency. This approach can be extended using wavelets to allow the phase comparison of two datasets to be performed as a function of both time and wavelength. This paper further extends the previous work in two directions; firstly it demonstrates how to display the correlation between multiple (not just two) datasets, and secondly it introduces wavelet-based semblance filtering which allows a pair of datasets to be processed to extract components with any degree of correlation. Matlab source code is available from the IAMG server at www.iamg.org.

Cooper, G. R. J.

2009-10-01

17

Wavelet-based polarimetry analysis

NASA Astrophysics Data System (ADS)

Wavelet transformation has become a cutting-edge and promising approach in the field of image and signal processing. A wavelet is a waveform of effectively limited duration that has an average value of zero. Wavelet analysis is done by breaking up the signal into shifted and scaled versions of the original signal. The key advantage of a wavelet is that it is capable of revealing smaller changes, trends, and breakdown points that are not revealed by other techniques such as Fourier analysis. The phenomenon of polarization has been studied for quite some time and is a very useful tool for target detection and tracking. Long Wave Infrared (LWIR) polarization is beneficial for detecting camouflaged objects and is a useful approach when identifying and distinguishing manmade objects from natural clutter. In addition, the Stokes Polarization Parameters, which are calculated from 0°, 45°, 90°, 135°, right-circular, and left-circular intensity measurements, provide spatial orientations of target features and suppress natural features. In this paper, we propose a wavelet-based polarimetry analysis (WPA) method to analyze Long Wave Infrared Polarimetry Imagery to discriminate targets such as dismounts and vehicles from background clutter. These parameters can be used for image thresholding and segmentation. Experimental results show the wavelet-based polarimetry analysis is efficient and can be used in a wide range of applications such as change detection, shape extraction, target recognition, and feature-aided tracking.

Ezekiel, Soundararajan; Harrity, Kyle; Farag, Waleed; Alford, Mark; Ferris, David; Blasch, Erik

2014-06-01
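The Stokes parameters named in the abstract follow directly from the six intensity measurements. A minimal sketch, where the degree-of-linear-polarization helper and the sample intensities are illustrative assumptions rather than anything taken from the paper:

```python
def stokes_parameters(i0, i45, i90, i135, irc, ilc):
    """Stokes vector from 0/45/90/135-degree and circular intensity measurements."""
    s0 = i0 + i90    # total intensity
    s1 = i0 - i90    # horizontal vs vertical linear polarization
    s2 = i45 - i135  # +45 vs -45 linear polarization
    s3 = irc - ilc   # right- vs left-circular polarization
    return s0, s1, s2, s3

def degree_of_linear_polarization(s0, s1, s2):
    """DoLP, a common quantity for thresholding manmade targets vs clutter."""
    return (s1 ** 2 + s2 ** 2) ** 0.5 / s0

# Hypothetical pixel intensities
s0, s1, s2, s3 = stokes_parameters(1.0, 0.7, 0.2, 0.5, 0.6, 0.6)
dolp = degree_of_linear_polarization(s0, s1, s2)
```

Maps of quantities like DoLP are what the thresholding and segmentation step mentioned above would operate on.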

18

Shape constrained kernel density estimation

In this paper, a method for estimating monotone, convex and log-concave densities is proposed. The estimation procedure consists of an unconstrained kernel estimator which is modified in a second step with respect to the desired shape constraint by using monotone rearrangements. It is shown that the resulting estimate is a density itself and shares the asymptotic properties of the unconstrained …

Melanie Birke

2009-01-01
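The two-step idea above (an unconstrained kernel estimate, then a shape-correcting rearrangement) can be sketched in a simplified form: below, a Gaussian kernel estimate is evaluated on a grid and its values are put in decreasing order, a discretized decreasing rearrangement. This is an illustration under simplifying assumptions (monotone-decreasing target, fixed bandwidth, made-up sample), not the paper's estimator.

```python
import math

def gaussian_kde(sample, h):
    """Unconstrained Gaussian kernel density estimate with bandwidth h."""
    n = len(sample)
    c = 1.0 / (n * h * math.sqrt(2.0 * math.pi))
    def f(x):
        return c * sum(math.exp(-0.5 * ((x - xi) / h) ** 2) for xi in sample)
    return f

def decreasing_rearrangement(f, grid):
    """Reorder f's grid values into decreasing order, imposing monotonicity."""
    return sorted((f(x) for x in grid), reverse=True)

# Hypothetical sample concentrated near zero
sample = [0.1, 0.2, 0.25, 0.4, 0.8, 1.5]
f = gaussian_kde(sample, h=0.3)
grid = [i * 0.1 for i in range(21)]
vals = decreasing_rearrangement(f, grid)
```

The rearrangement only reorders the estimated values, which is why the result integrates to the same total mass as the unconstrained estimate.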

19

FAST GEM WAVELET-BASED IMAGE DECONVOLUTION ALGORITHM

The paper proposes a new wavelet-based Bayesian approach to image deconvolution, under the space-invariant blur and additive white Gaussian noise assumptions. Image deconvolution exploits the well-known sparsity of the wavelet coefficients, described by heavy-tailed priors. The present approach admits any prior given by a linear (finite or infinite) combination of Gaussian densities. To compute the maximum a …

M. B. Dias

2003-01-01

20

Density Estimation with Mercer Kernels

NASA Technical Reports Server (NTRS)

We present a new method for density estimation based on Mercer kernels. The density estimate can be understood as the density induced on a data manifold by a mixture of Gaussians fit in a feature space. As usual, the feature space and data manifold are defined with any suitable positive-definite kernel function. We modify the standard EM algorithm for mixtures of Gaussians to infer the parameters of the density. One benefit of the approach is its conceptual simplicity and uniform applicability over many different types of data. Preliminary results are presented for a number of simple problems.

Macready, William G.

2003-01-01

21

Wavelet-based modal analysis for time-variant systems

NASA Astrophysics Data System (ADS)

The paper presents algorithms for modal identification of time-variant systems. These algorithms utilise the wavelet-based Frequency Response Function and lead to estimation of all three modal parameters, i.e. natural frequencies, damping and mode shapes. The method presented utilises random impact excitation and signal post-processing based on the crazy-climbers algorithm. The method is validated using simulated and experimental data from time-variant vibrating systems. The results show that the method correctly captures the dynamics of the analysed systems, leading to correct modal parameter identification.

Dziedziech, K.; Staszewski, W. J.; Uhl, T.

2015-01-01

22

Wavelet Based Estimation for Univariate Stable Laws

Stable distributions are characterized … features of the processes being modeled. Hence, in the field of statistics, for example, wavelets have been …

Antoniadis, Anestis

23

Wavelet Based Estimation for Univariate Stable Laws

Stable distributions are characterized … Hence, in statistics, for example, wavelets have been used primarily to deal with problems …

GonÃ§alves, Paulo

24

Density estimation for color images

NASA Astrophysics Data System (ADS)

Color histograms computed from the normalized and hue color spaces are negatively affected by sensor noise due to the instability of these color space transforms at many RGB values. To suppress the effect of sensor noise, in this paper density estimations are computed using variable kernels. To that end, models are proposed for the propagation of sensor noise through the normalized and hue colors. As a result, not only the hue and normalized color values are known, but also the associated uncertainty. This twofold information is used to derive the parameterization of the variable kernel used for the density estimation. It is empirically verified that the proposed method compares favorably with the traditional histogram.

Stokman, Harro M.; Gevers, Theo

2001-01-01

25

Multivariate Density Estimation: An SVM Approach

We formulate density estimation as an inverse operator problem. We then use convergence results of empirical distribution functions to true distribution functions to develop an algorithm for multivariate density estimation. ...

Mukherjee, Sayan

1999-04-01

26

Wavelet-based ultrasound image denoising: performance analysis and comparison.

Ultrasound images are generally affected by multiplicative speckle noise, which is mainly due to the coherent nature of the scattering phenomenon. Speckle noise filtering is thus a critical pre-processing step in medical ultrasound imaging, provided that the diagnostic features of interest are not lost. A comparative study of the performance of alternative wavelet-based ultrasound image denoising methods is presented in this article. In particular, the contourlet and curvelet techniques, together with dual-tree complex and real and double-density wavelet transform denoising methods, were applied to real ultrasound images and the results were quantitatively compared. The results show that the curvelet-based method performs better than the other methods and can effectively reduce most of the speckle noise content of a given image. PMID:22255196

Rizi, F Yousefi; Noubari, H Ahmadi; Setarehdan, S K

2011-01-01

27

Wavelet-based acoustic recognition of aircraft

We describe a wavelet-based technique for identifying aircraft from acoustic emissions during take-off and landing. Tests show that the sensor can be a single, inexpensive hearing-aid microphone placed close to the ground. The paper describes data collection, analysis by various techniques, methods of event classification, and extraction of certain physical parameters from wavelet subspace projections. The primary goal of this paper is to show that wavelet analysis can be used as a divide-and-conquer first step in signal processing, providing both simplification and noise filtering. The idea is to project the original signal onto the orthogonal wavelet subspaces, both details and approximations. Subsequent analysis, such as system identification, nonlinear systems analysis, and feature extraction, is then carried out on the various signal subspaces.

Dress, W.B.; Kercel, S.W.

1994-09-01

28

Discrimination of walking patterns using wavelet-based fractal analysis.

In this paper, we attempted to classify the acceleration signals for walking along a corridor and on stairs by using the wavelet-based fractal analysis method. In addition, the wavelet-based fractal analysis method was used to evaluate the gait of elderly subjects and patients with Parkinson's disease. The triaxial acceleration signals were measured close to the center of gravity of the body while the subject walked along a corridor and up and down stairs continuously. Signal measurements were recorded from 10 healthy young subjects and 11 elderly subjects. For comparison, two patients with Parkinson's disease participated in the level walking. The acceleration signal in each direction was decomposed to seven detailed signals at different wavelet scales by using the discrete wavelet transform. The variances of detailed signals at scales 7 to 1 were calculated. The fractal dimension of the acceleration signal was then estimated from the slope of the variance progression. The fractal dimensions were significantly different among the three types of walking for individual subjects (p < 0.01) and showed a high reproducibility. Our results suggest that the fractal dimensions are effective for classifying the walking types. Moreover, the fractal dimensions were significantly higher for the elderly subjects than for the young subjects (p < 0.01). For the patients with Parkinson's disease, the fractal dimensions tended to be higher than those of healthy subjects. These results suggest that the acceleration signals change into a more complex pattern with aging and with Parkinson's disease, and the fractal dimension can be used to evaluate the gait of elderly subjects and patients with Parkinson's disease. PMID:12503784

Sekine, Masaki; Tamura, Toshiyo; Akay, Metin; Fujimoto, Toshiro; Togawa, Tatsuo; Fukui, Yasuhiro

2002-09-01
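The variance-progression idea in this entry can be sketched as follows: decompose a signal with the Haar wavelet, compute the detail-coefficient variance at each scale, and fit a slope to the log-variance progression. This is a simplified stand-in for the paper's method: a synthetic random walk replaces real acceleration data, and the raw slope is reported rather than a calibrated fractal dimension.

```python
import math
import random

def haar_detail_variances(signal, levels):
    """Variance of Haar wavelet detail coefficients at each decomposition level."""
    s, variances = list(signal), []
    for _ in range(levels):
        approx = [(s[i] + s[i + 1]) / math.sqrt(2) for i in range(0, len(s) - 1, 2)]
        detail = [(s[i] - s[i + 1]) / math.sqrt(2) for i in range(0, len(s) - 1, 2)]
        m = sum(detail) / len(detail)
        variances.append(sum((d - m) ** 2 for d in detail) / len(detail))
        s = approx  # recurse on the coarse approximation
    return variances

def ols_slope(xs, ys):
    """Least-squares slope, used here for the log2-variance progression."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Synthetic random walk standing in for an acceleration recording
random.seed(0)
walk, v = [], 0.0
for _ in range(1024):
    v += random.gauss(0.0, 1.0)
    walk.append(v)

variances = haar_detail_variances(walk, 6)
beta = ols_slope(range(1, 7), [math.log2(max(var, 1e-12)) for var in variances])
# A random walk gives a clearly positive slope; white noise would give a
# roughly flat progression. The slope is what a fractal-dimension estimate
# of the kind described above is derived from.
```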

29

Density Estimation Trees in High Energy Physics

Density Estimation Trees can play an important role in exploratory data analysis for multidimensional, multi-modal data models of large samples. I briefly discuss the algorithm, a self-optimization technique based on kernel density estimation, and some applications in High Energy Physics.

Anderlini, Lucio

2015-01-01

30

Wavelet-based analysis of circadian behavioral rhythms.

The challenging problems presented by noisy biological oscillators have led to the development of a great variety of methods for accurately estimating rhythmic parameters such as period and amplitude. This chapter focuses on wavelet-based methods, which can be quite effective for assessing how rhythms change over time, particularly if time series are at least a week in length. These methods can offer alternative views to complement more traditional methods of evaluating behavioral records. The analytic wavelet transform can estimate the instantaneous period and amplitude, as well as the phase of the rhythm at each time point, while the discrete wavelet transform can extract the circadian component of activity and measure the relative strength of that circadian component compared to those in other frequency bands. Wavelet transforms do not require the removal of noise or trend, and can, in fact, be effective at removing noise and trend from oscillatory time series. The Fourier periodogram and spectrogram are reviewed, followed by descriptions of the analytic and discrete wavelet transforms. Examples illustrate application of each method and their prior use in chronobiology is surveyed. Issues such as edge effects, frequency leakage, and implications of the uncertainty principle are also addressed. PMID:25662453

Leise, Tanya L

2015-01-01

31

NASA Astrophysics Data System (ADS)

A grounded electrical source airborne transient electromagnetic (GREATEM) system on an airship offers high prospecting depth and spatial resolution, as well as outstanding detection efficiency and easy flight control. However, the movement and swing of the front-fixed receiving coil can cause severe baseline drift, leading to inferior resistivity image formation. Consequently, reducing the baseline drift of GREATEM data is of vital importance for inversion and interpretation. To correct the baseline drift, a traditional interpolation method estimates the baseline 'envelope' using linear interpolation between the calculated start and end points of all cycles, and obtains the corrected signal by subtracting the envelope from the original signal. However, the effectiveness and efficiency of this removal are low. Considering the characteristics of the baseline drift in GREATEM data, this study proposes a wavelet-based method built on multi-resolution analysis. The optimal wavelet basis and number of decomposition levels are determined through iterative trial-and-error comparison. This application uses the sym8 wavelet with 10 decomposition levels, takes the level-10 approximation as the baseline drift, and obtains the corrected signal by removing the estimated baseline drift from the original signal. To examine the performance of the proposed method, we establish a dipping-sheet model and calculate the theoretical response. Through simulations, we compare the signal-to-noise ratio, signal distortion, and processing speed of the wavelet-based method with those of the interpolation method. Simulation results show that the wavelet-based method outperforms the interpolation method. We also use field data to evaluate the methods, comparing depth-section images of apparent resistivity computed from the original signal, the interpolation-corrected signal, and the wavelet-corrected signal. The results confirm that the proposed wavelet-based method is an effective, practical way to remove the baseline drift of GREATEM signals, and that its performance is significantly superior to the interpolation method.

Wang, Yuan

2013-09-01

32

Topics in global convergence of density estimates

NASA Technical Reports Server (NTRS)

The problem of estimating a density f on R^d from a sample X_1, ..., X_n of independent identically distributed random vectors is critically examined, and some recent results in the field are reviewed. The following statements are qualified: (1) For any sequence of density estimates f_n, any arbitrarily slow rate of convergence to 0 is possible for E(∫|f_n - f|); (2) In theoretical comparisons of density estimates, ∫|f_n - f| should be used and not ∫|f_n - f|^p, p > 1; and (3) For most reasonable nonparametric density estimates, either there is convergence of ∫|f_n - f| (and then the convergence is in the strongest possible sense for all f), or there is no convergence (even in the weakest possible sense for a single f). There is no intermediate situation.

Devroye, L.

1982-01-01

33

Optimization of k nearest neighbor density estimates

Nonparametric density estimation using the k-nearest-neighbor approach is discussed. By developing a relation between the volume and the coverage of a region, a functional form for the optimum k in terms of the sample size, the dimensionality of the observation space, and the underlying probability distribution is obtained. Within the class of density functions that can be made circularly symmetric by a linear ...
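The basic estimator the abstract optimizes can be sketched in one dimension: the density at x is approximately k divided by n times the volume of the smallest ball around x containing k sample points. A minimal sketch, with an illustrative sample and a hypothetical choice of k:

```python
import numpy as np

def knn_density_1d(x, sample, k):
    """k-nearest-neighbor density estimate in one dimension:
    f(x) ~ k / (n * volume), where the 'volume' is the length 2*r_k of the
    smallest interval centered at x containing k sample points."""
    r_k = np.sort(np.abs(np.asarray(sample) - x))[k - 1]   # distance to k-th neighbor
    return k / (len(sample) * 2.0 * r_k)

rng = np.random.default_rng(0)
sample = rng.normal(0.0, 1.0, size=5000)
f_hat = knn_density_1d(0.0, sample, k=70)     # k ~ sqrt(n), a common heuristic
f_true = 1.0 / np.sqrt(2.0 * np.pi)           # N(0,1) density at the mode, ~0.399
```

The paper's contribution is precisely the functional form of the optimum k, which this sketch replaces with the sqrt(n) rule of thumb.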

K. Fukunaga; L. Hostetler

1973-01-01

34

NEW MULTIVARIATE PRODUCT DENSITY ESTIMATORS Luc Devroye

Let X be an IR^d-valued random variable with unknown density f, and let X_1, ..., X_n be a sample from f. X(k) denotes the k-th nearest neighbor of x when points are ordered by increasing values of the coordinate-wise product distance. (School of Computer Science, McGill University, Montreal, Canada H3G 1M8.)

Devroye, Luc

35

A wavelet based investigation of long memory in stock returns

NASA Astrophysics Data System (ADS)

Using a wavelet-based maximum likelihood fractional integration estimator, we test long memory (return predictability) in returns at the market, industry and firm level. In an analysis of emerging market daily returns over the full sample period, we find that long memory is not present in the market returns, while in approximately twenty per cent of the 175 individual stocks there is evidence of long memory. The absence of long memory in the market returns may be a consequence of contemporaneous aggregation of stock returns. However, when the analysis is carried out with rolling windows, evidence of long memory is observed in certain time frames. These results are largely consistent with those of detrended fluctuation analysis. A test of firm-level information in explaining stock return predictability using a logistic regression model reveals that returns of large firms are more likely to possess the long memory feature than those of small firms. There is no evidence to suggest that turnover, earnings per share, book-to-market ratio, systematic risk or abnormal return with respect to the market model are associated with return predictability. However, the degree of long-range dependence appears to be associated positively with earnings per share, systematic risk and abnormal return, and negatively with book-to-market ratio.
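The paper's estimator is a wavelet-domain maximum-likelihood fit of the fractional integration parameter. A much simpler cousin of that idea, shown here only as an illustration and not as the paper's method, regresses the log of the Haar detail variance on scale: for fractional noise the slope is roughly 2H − 1, so a short-memory series gives a Hurst exponent H near 0.5, while long memory pushes H above 0.5.

```python
import numpy as np

def haar_detail_variances(x, levels):
    """Variance of normalized Haar detail coefficients at levels 1..levels."""
    a = np.asarray(x, dtype=float)
    variances = []
    for _ in range(levels):
        d = (a[0::2] - a[1::2]) / np.sqrt(2.0)   # detail coefficients at this level
        a = (a[0::2] + a[1::2]) / np.sqrt(2.0)   # approximation, carried to next level
        variances.append(d.var())
    return np.array(variances)

rng = np.random.default_rng(2)
x = rng.normal(size=2**16)                  # white noise: no long memory

v = haar_detail_variances(x, levels=8)
j = np.arange(1, 9)
slope = np.polyfit(j, np.log2(v), 1)[0]     # ~ 2H - 1 for fractional noise
H = 0.5 * (slope + 1.0)                     # should be near 0.5 here
```

A seeded white-noise input is used so the absence of long memory is known in advance; real return series would replace `x`.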

Tan, Pei P.; Galagedera, Don U. A.; Maharaj, Elizabeth A.

2012-04-01

36

ESTIMATES OF BIOMASS DENSITY FOR TROPICAL FORESTS

An accurate estimation of the biomass density in forests is a necessary step in understanding the global carbon cycle and the production of other atmospheric trace gases from biomass burning. In this paper the authors summarize the various approaches that have been developed for estimating...

37

Consistency of the local kernel density estimator

The consistency of the local kernel density estimator is proved. This nonparametric estimator is distinguished by its use of scaling matrices which are random and which may vary for each sample point. Its applications include adaptive construction of importance sampling functions.

Geof H. Givens

1995-01-01

38

NEW MULTIVARIATE PRODUCT DENSITY ESTIMATORS Luc Devroye

Let X be an IR^d-valued random variable with unknown density f, and let X_1, ..., X_n be a sample from f. Points are ordered by increasing values of the product ∏_{j=1}^{d} |x_j − X_(k)j|, with k = o(log n), k → ∞. (School of Computer Science, McGill University, Montreal, Canada H3G 1M8.)

Devroye, Luc

39

Transformation based density estimation For weighted distributions

In this paper we consider the estimation of a density f on the basis of random sample from a weighted distribution G with density g given by ,where w(u) > 0 for all u and . A special case of this situation is that of length-biased sampling, where w(x) = x. In this paper we examine a simple transformation-based approach

Hammou El Barmi; Jeffrey S. Simonoff

2000-01-01

40

Estimating and Interpreting Probability Density Functions

NSDL National Science Digital Library

This 294-page document from the Bank for International Settlements stems from the Estimating and Interpreting Probability Density Functions workshop held on June 14, 1999. The conference proceedings, which may be downloaded as a complete document or by chapter, are divided into two sections: "Estimation Techniques" and "Applications and Economic Interpretation." Both contain papers presented at the conference. Also included are a list of the program participants with their affiliations and email addresses, a foreword, and background notes.

41

Calibrated Measures for Breast Density Estimation

Rationale and Objectives Breast density is a significant breast cancer risk factor measured from mammograms. Evidence suggests that the spatial variation in mammograms may also be associated with risk. We investigated the variation in calibrated mammograms as a breast cancer risk factor and explored its relationship with other measures of breast density using full field digital mammography (FFDM). Materials and Methods A matched case-control analysis was used to assess a spatial variation breast density measure in calibrated FFDM images, normalized for the image acquisition technique variation. Three measures of breast density were compared between cases and controls: (a) the calibrated average measure, (b) the calibrated variation measure, and (c) the standard percentage of breast density (PD) measure derived from operator-assisted labeling. Linear correlation and statistical relationships between these three breast density measures were also investigated. Results Risk estimates associated with the lowest to highest quartiles for the calibrated variation measure were greater in magnitude [odds ratios: 1.0 (ref.), 3.5, 6.3, and 11.3] than the corresponding risk estimates for quartiles of the standard PD measure [odds ratios: 1.0 (ref.), 2.3, 5.6, and 6.5] and the calibrated average measure [odds ratios: 1.0 (ref.), 2.4, 2.3, and 4.4]. The three breast density measures were highly correlated, showed an inverse relationship with breast area, and related by a mixed distribution relationship. Conclusion The three measures of breast density capture different attributes of the same data field. These preliminary findings indicate the variation measure is a viable automated method for assessing breast density. Insights gained by this work may be used to develop a standard for measuring breast density. PMID:21371912

Heine, John J.; Cao, Ke; Rollison, Dana E.

2011-01-01

42

Sampling, Density Estimation and Spatial Relationships

NSDL National Science Digital Library

This resource serves as a tool for instructing a laboratory exercise in ecology. Students obtain hands-on experience using techniques such as mark-recapture and density estimation, and organisms such as zooplankton and fathead minnows. This exercise is suitable for general ecology and introductory biology courses.

Maggie Haag (University of Alberta)

1998-01-01

43

PERSPECTIVES Estimating delayed density-dependent mortality

Delayed density-dependent mortality occurs in many populations of sockeye salmon (Oncorhynchus nerka). We used a meta-analytical approach to test for it across these populations.

Myers, Ransom A.

44

Estimating density of Florida Key deer

for this species since 1968; however, USFWS desired an evaluation of the precision of existing and alternative survey methods (i.e., road counts, mark-recapture, infrared-triggered cameras [ITC]). I evaluated density estimates from unbaited ITCs and road...

Roberts, Clay Walton

2006-08-16

45

On Sequential Data-Driven Density Estimation

The theory and methods of minimax and sequential inferences, pioneered by Abraham Wald in 1940's, shaped the way statisticians see the statistics today. This article employs the Wald approaches together with the modern oracle analysis to develop the theory and methods of a sharp minimax adaptive sequential density estimation. In particular, it proves a long-standing conjecture about a sufficient condition

Sam Efromovich

2004-01-01

46

IMPROVED DENSITY ESTIMATORS FOR INVERTIBLE LINEAR PROCESSES

Densities of invertible linear processes can be represented as a convolution of innovation-based densities, and it can be estimated ... (Anton Schick, Department of Mathematical Sciences, Binghamton University, Binghamton, NY 13902-6000, USA, anton@math.binghamton.edu; Wolfgang Wefelmeyer, Mathematisches Institut, Universität zu Köln, Weyertal 86-90, 50931 Köln, Germany, wefelm@math.uni-koeln.de.)

Schick, Anton

47

Estimating animal population density using passive acoustics

Reliable estimation of the size or density of wild animal populations is very important for effective wildlife management, conservation and ecology. Currently, the most widely used methods for obtaining such estimates involve either sighting animals from transect lines or some form of capture-recapture on marked or uniquely identifiable individuals. However, many species are difficult to sight, and cannot be easily marked or recaptured. Some of these species produce readily identifiable sounds, providing an opportunity to use passive acoustic data to estimate animal density. In addition, even for species for which other visually based methods are feasible, passive acoustic methods offer the potential for greater detection ranges in some environments (e.g. underwater or in dense forest), and hence potentially better precision. Automated data collection means that surveys can take place at times and in places where it would be too expensive or dangerous to send human observers. Here, we present an overview of animal density estimation using passive acoustic data, a relatively new and fast-developing field. We review the types of data and methodological approaches currently available to researchers and we provide a framework for acoustics-based density estimation, illustrated with examples from real-world case studies. We mention moving sensor platforms (e.g. towed acoustics), but then focus on methods involving sensors at fixed locations, particularly hydrophones to survey marine mammals, as acoustic-based density estimation research to date has been concentrated in this area. Primary among these are methods based on distance sampling and spatially explicit capture-recapture. The methods are also applicable to other aquatic and terrestrial sound-producing taxa. 
We conclude that, despite being in its infancy, density estimation based on passive acoustic data likely will become an important method for surveying a number of diverse taxa, such as sea mammals, fish, birds, amphibians, and insects, especially in situations where inferences are required over long periods of time. There is considerable work ahead, with several potentially fruitful research areas, including the development of (i) hardware and software for data acquisition, (ii) efficient, calibrated, automated detection and classification systems, and (iii) statistical approaches optimized for this application. Further, survey design will need to be developed, and research is needed on the acoustic behaviour of target species. Fundamental research on vocalization rates and group sizes, and the relation between these and other factors such as season or behaviour state, is critical. Evaluation of the methods under known density scenarios will be important for empirically validating the approaches presented here. PMID:23190144
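One of the two main approaches named above, distance sampling, reduces to a few lines in the classical half-normal line-transect case: with detection probability g(x) = exp(−x²/2σ²) at perpendicular distance x, σ² is estimated by the mean squared detection distance, the effective strip half-width is μ = σ·sqrt(π/2), and density is n/(2μL). The sketch below is purely illustrative; the simulated density, transect length and detection scale are made up, and the review's methods are far more general:

```python
import numpy as np

def halfnormal_line_transect_density(distances, line_length):
    """Line-transect density estimate with a half-normal detection function."""
    d = np.asarray(distances, dtype=float)
    sigma2 = np.mean(d**2)                   # MLE of sigma^2 for half-normal distances
    mu = np.sqrt(sigma2 * np.pi / 2.0)       # effective strip half-width
    return len(d) / (2.0 * mu * line_length)

# simulate: animals at density D = 2 per unit area within half-width W of a line
rng = np.random.default_rng(3)
L, W, D = 100.0, 5.0, 2.0
offsets = rng.uniform(0.0, W, size=int(2 * W * L * D))   # |perpendicular distance|
sigma = 1.0
detected = offsets[rng.random(offsets.size) < np.exp(-offsets**2 / (2 * sigma**2))]

D_hat = halfnormal_line_transect_density(detected, L)    # should be close to D
```

In an acoustic survey the "detections" would be localized calls, and call rate would convert call density to animal density.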

Marques, Tiago A; Thomas, Len; Martin, Stephen W; Mellinger, David K; Ward, Jessica A; Moretti, David J; Harris, Danielle; Tyack, Peter L

2013-01-01

49

Conditional Density Estimation in Measurement Error Problems.

This paper is motivated by a wide range of background correction problems in gene array data analysis, where the raw gene expression intensities are measured with error. Estimating a conditional density function from the contaminated expression data is a key aspect of statistical inference and visualization in these studies. We propose re-weighted deconvolution kernel methods to estimate the conditional density function in an additive error model, when the error distribution is known as well as when it is unknown. Theoretical properties of the proposed estimators are investigated with respect to the mean absolute error from a "double asymptotic" view. Practical rules are developed for the selection of smoothing parameters. Simulated examples and an application to an Illumina bead microarray study are presented to illustrate the viability of the methods. PMID:25284902

Wang, Xiao-Feng; Ye, Deping

2015-01-01

50

Wavelet-Based Multiresolution Analysis of Wivenhoe Dam Water Temperatures

A monitoring program was recently upgraded with permanent installation of vertical profilers at Lake Wivenhoe dam, a subtropical dam. The analysis concerns water temperature as a function of time and depth, and concentrates on a 600+ day segment of temperature fluctuations.

Percival, Don

51

A tree projection algorithm for wavelet-based

A tree projection algorithm for wavelet-based sparse approximation (Andrew Thompson, Duke University, North Carolina, USA; joint work with Coralia Cartis, University of Edinburgh). Discrete wavelet transforms (DWTs) have an inherent tree structure, which the projection algorithm exploits.

Thompson, Andrew

52

Wavelet-based Feature Extraction for Handwritten Numerals

A wavelet-based feature extraction method for handwritten numeral recognition that relies on multi-scale features to characterize the classes. Bandpass filters give information on the local orientation of the strokes; the extracted features include a shape ...

Figueira, Santiago

53

Wavelet-based analysis of blood pressure dynamics in rats

NASA Astrophysics Data System (ADS)

Using a wavelet-based approach, we study stress-induced reactions in the blood pressure dynamics of rats. Further, we consider how the level of nitric oxide (NO) influences heart rate variability. Clear distinctions between male and female rats are reported.

Pavlov, A. N.; Anisimov, A. A.; Semyachkina-Glushkovskaya, O. V.; Berdnikova, V. A.; Kuznecova, A. S.; Matasova, E. G.

2009-02-01

54

Wavelet based edge detection method for analysis of coronary angiograms

The assessment of coronary anatomy is one of the prime determinants in choosing medical or interventional therapy for patients with ischemic heart disease. We report a wavelet-based method of coronary border identification which has the advantage of detecting edges at different scales (the image changes are computed in a variable neighborhood), unlike the conventional methods where...

A. Bezerianos; A. Munteanul; D. Alexopoulos; G. Panayiotakis; P. Cristea

1995-01-01

55

Coding sequence density estimation via topological pressure.

We give a new approach to coding sequence (CDS) density estimation in genomic analysis based on the topological pressure, which we develop from a well known concept in ergodic theory. Topological pressure measures the 'weighted information content' of a finite word, and incorporates 64 parameters which can be interpreted as a choice of weight for each nucleotide triplet. We train the parameters so that the topological pressure fits the observed coding sequence density on the human genome, and use this to give ab initio predictions of CDS density over windows of size around 66,000 bp on the genomes of Mus musculus, rhesus macaque and Drosophila melanogaster. While the differences between these genomes are too great to expect that training on the human genome could predict, for example, the exact locations of genes, we demonstrate that our method gives reasonable estimates for the 'coarse scale' problem of predicting CDS density. Inspired again by ergodic theory, the weightings of the nucleotide triplets obtained from our training procedure are used to define a probability distribution on finite sequences, which can be used to distinguish between intron and exon sequences from the human genome of lengths between 750 and 5,000 bp. At the end of the paper, we explain the theoretical underpinning for our approach, which is the theory of Thermodynamic Formalism from the dynamical systems literature. Mathematica and MATLAB implementations of our method are available at http://sourceforge.net/projects/topologicalpres/. PMID:24448658

Koslicki, David; Thompson, Daniel J

2015-01-01

56

Bird population density estimated from acoustic signals

Many animal species are detected primarily by sound. Although songs, calls and other sounds are often used for population assessment, as in bird point counts and hydrophone surveys of cetaceans, there are few rigorous methods for estimating population density from acoustic data. 2. The problem has several parts - distinguishing individuals, adjusting for individuals that are missed, and adjusting for the area sampled. Spatially explicit capture-recapture (SECR) is a statistical methodology that addresses jointly the second and third parts of the problem. We have extended SECR to use uncalibrated information from acoustic signals on the distance to each source. 3. We applied this extension of SECR to data from an acoustic survey of ovenbird Seiurus aurocapilla density in an eastern US deciduous forest with multiple four-microphone arrays. We modelled average power from spectrograms of ovenbird songs measured within a window of 0.7 s duration and frequencies between 4200 and 5200 Hz. 4. The resulting estimates of the density of singing males (0.19 ha^-1, SE 0.03 ha^-1) were consistent with estimates of the adult male population density from mist-netting (0.36 ha^-1, SE 0.12 ha^-1). The fitted model predicts sound attenuation of 0.11 dB m^-1 (SE 0.01 dB m^-1) in excess of losses from spherical spreading. 5. Synthesis and applications. Our method for estimating animal population density from acoustic signals fills a gap in the census methods available for visually cryptic but vocal taxa, including many species of bird and cetacean. The necessary equipment is simple and readily available; as few as two microphones may provide adequate estimates, given spatial replication. The method requires that individuals detected at the same place are acoustically distinguishable and all individuals vocalize during the recording interval, or that the per capita rate of vocalization is known.
We believe these requirements can be met, with suitable field methods, for a significant number of songbird species. © 2009 British Ecological Society.

Dawson, D.K.; Efford, M.G.

2009-01-01

57

Classification of Melanoma Lesions Using Wavelet-Based Texture Analysis

This paper presents a wavelet-based texture analysis method for classification of melanoma. The method applies a tree-structured wavelet transform on different color channels of red, green, blue and luminance of dermoscopy images, and employs various statistical measures and ratios on wavelet coefficients. Feature extraction and a two-stage feature selection method, based on entropy and correlation, were applied to a training set

Rahil Garnavi; Mohammad Aldeen; James Bailey

2010-01-01

58

Wavelet-based statistical signal processing using hidden Markov models

Wavelet-based statistical signal processing techniques such as denoising and detection typically model the wavelet coefficients as independent or jointly Gaussian. These models are unrealistic for many real-world signals. We develop a new framework for statistical signal processing based on wavelet-domain hidden Markov models (HMMs) that concisely models the statistical dependencies and non-Gaussian statistics encountered in real-world signals. Wavelet-domain HMMs are

Matthew S. Crouse; Robert D. Nowak; Richard G. Baraniuk

1998-01-01

59

Fast wavelet based algorithms for linear evolution equations

NASA Technical Reports Server (NTRS)

A class was devised of fast wavelet based algorithms for linear evolution equations whose coefficients are time independent. The method draws on the work of Beylkin, Coifman, and Rokhlin which they applied to general Calderon-Zygmund type integral operators. A modification of their idea is applied to linear hyperbolic and parabolic equations, with spatially varying coefficients. A significant speedup over standard methods is obtained when applied to hyperbolic equations in one space dimension and parabolic equations in multidimensions.

Engquist, Bjorn; Osher, Stanley; Zhong, Sifen

1992-01-01

60

Non-destructive wavelet-based despeckling in SAR images

NASA Astrophysics Data System (ADS)

The suggested wavelet-based despeckling method for multi-look SAR images uses no thresholding or window processing, thereby avoiding ringing artifacts, blurring, fusion of edges, etc. Instead, a logical comparison operation is applied to wavelet coefficients, which are arranged in spatial oriented trees (SOTs) of the wavelet decomposition calculated for one and the same region of the earth surface during SAR spacecraft flight. Fusion of SAR images is achieved by keeping the smallest wavelet coefficients from different SOTs in the high-frequency subbands (details). The wavelet coefficients of the low-frequency subband (approximation) are processed by another special logical operation that provides good smoothing. Because the procedure depends on the properties of the chosen wavelet basis, a library of wavelet bases is applied and the procedure is repeated for each basis. To select the best SOTs (and hence the best wavelet basis), a special cost function treats the SOTs as coherent structures and shows which wavelet basis yields the maximum entropy. Computer modeling and comparison with several well-known despeckling procedures have shown the high quality of the proposed method in the sense of different criteria such as PSNR, SSIM, etc.

Bekhtin, Yuri S.; Bryantsev, Andrey A.; Malebo, Damiao P.; Lupachev, Alexey A.

2014-10-01

61

Estimating stellar mean density through seismic inversions

NASA Astrophysics Data System (ADS)

Context. Determining the mass of stars is crucial both for improving stellar evolution theory and for characterising exoplanetary systems. Asteroseismology offers a promising way of estimating the stellar mean density. When combined with accurate radii determinations, such as are expected from Gaia, this yields accurate stellar masses. The main difficulty is finding the best way to extract the mean density of a star from a set of observed frequencies. Aims: We seek to establish a new method for estimating the stellar mean density, which combines the simplicity of a scaling law with the accuracy of an inversion technique. Methods: We provide a framework in which to construct and evaluate kernel-based linear inversions that directly yield the mean density of a star. We then describe three different inversion techniques (SOLA and two scaling laws) and apply them to the Sun, several test cases and three stars, α Cen B, HD 49933 and HD 49385, two of which are observed by CoRoT. Results: The SOLA (subtractive optimally localised averages) approach and the scaling law based on the surface-correcting technique described by Kjeldsen et al. (2008, ApJ, 683, L175) yield comparable results that can reach an accuracy of 0.5% and are better than scaling the large frequency separation. The reason for this is that the averaging kernels from the two first methods are comparable in quality and are better than what is obtained with the large frequency separation. It is also shown that scaling the large frequency separation is more sensitive to near-surface effects, but is much less affected by an incorrect mode identification. As a result, one can identify pulsation modes by looking for an ℓ and n assignment which provides the best agreement between the results from the large frequency separation and those from one of the two other methods. Non-linear effects are also discussed, as are the effects of mixed modes.
In particular, we show that mixed modes bring little improvement to the mean density estimates because of their poorly adapted kernels.
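Of the three techniques compared, scaling the large frequency separation is the simplest: the mean density scales as the square of Δν. A minimal numeric sketch (the solar reference values below are the commonly quoted ones, and the 85 μHz test value is hypothetical):

```python
# Mean stellar density from the large frequency separation (scaling relation):
#   rho / rho_sun ~ (delta_nu / delta_nu_sun) ** 2
DELTA_NU_SUN = 135.1      # muHz, commonly used solar reference value
RHO_SUN = 1.408           # g/cm^3, solar mean density

def mean_density_from_dnu(delta_nu_muhz):
    """Scaling-law estimate of the stellar mean density in g/cm^3."""
    return RHO_SUN * (delta_nu_muhz / DELTA_NU_SUN) ** 2

rho = mean_density_from_dnu(85.0)   # e.g. a star with delta_nu = 85 muHz
```

The paper's point is that this simple relation is more sensitive to near-surface effects than the SOLA and surface-corrected scaling approaches, which motivates the inversion framework.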

Reese, D. R.; Marques, J. P.; Goupil, M. J.; Thompson, M. J.; Deheuvels, S.

2012-03-01

62

NASA Astrophysics Data System (ADS)

In the context of multiscale seismic analysis of complex reflectors, which benefits from broad-band frequency-range considerations, we apply a wavelet-based method to merge multiresolution seismic sources based on generalized Lévy-alpha stable functions. The frequency bandwidth limitation of individual seismic sources induces distortions in wavelet responses (WRs), and we show that Gaussian fractional derivative functions are optimal wavelets to fully correct for these distortions in the merged frequency range. The efficiency of the method also rests on a new wavelet parametrization, namely the breadth of the wavelet, where the dominant dilation is adapted to the wavelet formalism. As a first demonstration of merging multiresolution seismic sources, we perform the source correction with the high and very high resolution seismic sources of the SYSIF deep-towed device and we show that both can now be perfectly merged into an equivalent seismic source with a broad-band frequency bandwidth (220-2200 Hz). Taking advantage of this new multiresolution seismic data fusion, the generalized wavelet-based method allows reconstructing the acoustic impedance profile of the subseabed, based on the inverse wavelet transform properties extended to the source-corrected WR. We highlight that the fusion of seismic sources improves the resolution of the impedance profile and that the density structure of the subseabed can be assessed assuming spatially homogeneous large-scale features of the subseabed physical properties.

Ker, S.; Le Gonidec, Y.; Gibert, D.

2013-11-01

63

Traffic characterization and modeling of wavelet-based VBR encoded video

Wavelet-based video codecs provide a hierarchical structure for the encoded data, which can cater to a wide variety of applications such as multimedia systems. The characteristics of such an encoder and its output, however, have not been well examined. In this paper, the authors investigate the output characteristics of a wavelet-based video codec and develop a composite model to capture the traffic behavior of its output video data. Wavelet decomposition transforms the input video into a hierarchical structure with a number of subimages at different resolutions and scales; the top-level wavelet in this structure contains most of the signal energy. The authors first describe the characteristics of traffic generated by each subimage and the effect of dropping various subimages at the encoder on the signal-to-noise ratio at the receiver. They then develop an N-state Markov model to describe the traffic behavior of the top wavelet. The behavior of the remaining wavelets is then obtained through estimation, based on the correlations between subimages at the same level of resolution and the wavelets located at the immediately higher level. In this paper, a three-state Markov model is developed. The resulting traffic behavior, described by various statistical properties such as moments and correlations, is then utilized to validate the model.
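A generic N-state Markov traffic model of the kind described can be sketched as follows; the three states, transition matrix and per-state rates below are hypothetical, not values fitted to any wavelet codec:

```python
import numpy as np

# three traffic states (low / medium / high bit-rate) with a hypothetical
# transition matrix P and per-state mean rates; each row of P sums to 1
P = np.array([[0.90, 0.08, 0.02],
              [0.10, 0.80, 0.10],
              [0.05, 0.15, 0.80]])
rates = np.array([1.0, 3.0, 8.0])     # Mbit/s per state, illustrative only

rng = np.random.default_rng(4)
state, trace = 0, []
for _ in range(10000):                # simulate the chain
    trace.append(rates[state])
    state = rng.choice(3, p=P[state])

# long-run mean rate from the stationary distribution pi (pi = pi P),
# obtained as the eigenvector of P.T for eigenvalue 1
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
pi /= pi.sum()
mean_rate = pi @ rates
```

Fitting such a model to real codec output would replace the hand-picked `P` and `rates` with values estimated from the top-level wavelet's bit-rate trace.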

Yu Kuo; Jabbari, B. [George Mason Univ., Fairfax, VA (United States); Zafar, S. [Argonne National Lab., IL (United States). Mathematics and Computer Science Div.

1997-07-01

64

ESTIMATING MICROORGANISM DENSITIES IN AEROSOLS FROM SPRAY IRRIGATION OF WASTEWATER

This document summarizes current knowledge about estimating the density of microorganisms in the air near wastewater management facilities, with emphasis on spray irrigation sites. One technique for modeling microorganism density in air is provided and an aerosol density estimati...

65

A Maximum Likelihood Approach to Density Estimation with Semidefinite Programming

Density estimation plays an important and fundamental role in pattern recognition, machine learning, and statistics. In this article, we develop a parametric approach to univariate (or low-dimensional) density estimation based on semidefinite programming (SDP). Our density model is expressed as the product of a nonnegative polynomial and a base density such as normal distribution, exponential distribution, and uniform distribution. When

Tadayoshi Fushiki; Shingo Horiuchi; Takashi Tsuchiya

2006-01-01

66

Trabecular bone structure and bone density contribute to the strength of bone and are important in the study of osteoporosis. Wavelets are a powerful tool to characterize and quantify texture in an image. In this study the thickness of trabecular bone was analyzed in 8 cylindrical cores of the vertebral spine. Images were obtained from 3 Tesla (T) magnetic resonance imaging (MRI) and micro-computed tomography (µCT). Results from the wavelet-based analysis of trabecular bone were compared with standard two-dimensional structural parameters (analogous to bone histomorphometry) obtained using mean intercept length (MR images) and direct 3D distance transformation methods (µCT images). Additionally, the bone volume fraction was determined from MR images. We conclude that the wavelet-based analysis delivers results comparable to the established MR histomorphometric measurements. The average deviation in trabecular thickness was less than one pixel size between the wavelet and the standard approach for both MR and µCT analysis. Since the wavelet-based method is less sensitive to image noise, we see an advantage of wavelet analysis of trabecular bone for MR imaging when going to higher resolution.

Krug, R; Carballido-Gamio, J; Burghardt, A; Haase, S; Sedat, J W; Moss, W C; Majumdar, S

2005-04-11

67

Adaptive wavelet-based recognition of oscillatory patterns on electroencephalograms

NASA Astrophysics Data System (ADS)

The problem of automatic recognition of specific oscillatory patterns on electroencephalograms (EEG) is addressed using the continuous wavelet-transform (CWT). A possibility of improving the quality of recognition by optimizing the choice of CWT parameters is discussed. An adaptive approach is proposed to identify sleep spindles (SS) and spike wave discharges (SWD) that assumes automatic selection of CWT-parameters reflecting the most informative features of the analyzed time-frequency structures. Advantages of the proposed technique over the standard wavelet-based approaches are considered.

Nazimov, Alexey I.; Pavlov, Alexey N.; Hramov, Alexander E.; Grubov, Vadim V.; Koronovskii, Alexey A.; Sitnikova, Evgenija Y.

2013-02-01
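The core operation behind this kind of pattern recognition, a continuous wavelet transform whose scale selects the time-frequency structure of interest, can be sketched with a hand-rolled Morlet CWT. This is a minimal NumPy illustration of the transform itself, not the paper's adaptive parameter-selection scheme; the test signal and scale range are made up:

```python
import numpy as np

def morlet_cwt(signal, scales, w0=6.0):
    """Minimal continuous wavelet transform with a complex Morlet mother
    wavelet, computed by direct convolution in sample units (a sketch,
    not an optimized or exactly normalized implementation)."""
    out = np.empty((len(scales), len(signal)), dtype=complex)
    for i, s in enumerate(scales):
        t = np.arange(-4 * s, 4 * s + 1)    # wavelet support in samples
        psi = np.exp(1j * w0 * t / s) * np.exp(-0.5 * (t / s) ** 2) / np.sqrt(s)
        out[i] = np.convolve(signal, np.conj(psi[::-1]), mode='same')
    return out

# A 10 Hz oscillatory burst sampled at 200 Hz should light up near the
# scale matching w0 / (2*pi*f_cycles_per_sample), i.e. around s ~ 19.
fs = 200.0
t = np.arange(0, 2, 1 / fs)
x = np.sin(2 * np.pi * 10 * t) * ((t > 0.5) & (t < 1.5))
scales = np.arange(1, 30)
power = np.abs(morlet_cwt(x, scales)) ** 2
best = int(scales[np.argmax(power.sum(axis=1))])
```

Scanning the total power across scales is the crude analogue of selecting CWT parameters that emphasize the oscillatory pattern being sought.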

68

Bounding the L1 Distance in Nonparametric Density Estimation

Let X1, X2, ..., Xn be i.i.d. random variables with common unknown density function f. We are interested in estimating the unknown density f with bounded Mean Integrated Absolute Error (MIAE). Devroye and Gyorfi (1985, Nonparametric Density Estimation: The L1 View, Wiley, New York) obtained asymptotic bounds for the MIAE in estimating f by a kernel estimate fn. Using these

Subrata Kundu; Adam T. Martinsek

1997-01-01

69

Probability Density Estimation from Optimally Condensed Data Samples

The requirement to reduce the computational cost of evaluating a point probability density estimate when employing a Parzen window estimator is a well-known problem. This paper presents the Reduced Set Density Estimator that provides a kernel-based density estimator which employs a small percentage of the available data sample and is optimal in the L2 sense. While only requiring O(N²)

Mark Girolami; Chao He

2003-01-01
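The Parzen-window estimator whose cost motivates this work can be sketched in a few lines; each evaluation touches every sample, which is exactly the expense a reduced set aims to cut. The bandwidth and sample size below are illustrative:

```python
import numpy as np

def parzen_kde(x_query, samples, h):
    """Parzen-window density estimate with a Gaussian kernel of bandwidth h.
    Each query point is scored against all N samples (O(N) per query),
    motivating reduced-set estimators that keep a small weighted subset."""
    x_query = np.atleast_1d(np.asarray(x_query, dtype=float))[:, None]
    z = (x_query - np.asarray(samples)[None, :]) / h
    return np.exp(-0.5 * z ** 2).mean(axis=1) / (h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(0)
data = rng.normal(0.0, 1.0, size=2000)
est = parzen_kde(0.0, data, h=0.3)[0]   # true N(0,1) density at 0 is about 0.3989
```

A reduced-set variant would replace `data` with a small weighted subset chosen to minimize the L2 error against this full estimate.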

70

Wavelet-Based Signal and Image Processing for Target Recognition

NASA Astrophysics Data System (ADS)

The PI visited NSWC Dahlgren, VA, for six weeks in May-June 2002 and collaborated with scientists in the G33 TEAMS facility, and with Marilyn Rudzinsky of T44 Technology and Photonic Systems Branch. During this visit the PI also presented six educational seminars to NSWC scientists on various aspects of signal processing. Several items from the grant proposal were completed, including (1) wavelet-based algorithms for interpolation of 1-d signals and 2-d images; (2) Discrete Wavelet Transform domain based algorithms for filtering of image data; (3) wavelet-based smoothing of image sequence data originally obtained for the CRITTIR (Clutter Rejection Involving Temporal Techniques in the Infra-Red) project. The PI visited the University of Stellenbosch, South Africa to collaborate with colleagues Prof. B.M. Herbst and Prof. J. du Preez on the use of wavelet image processing in conjunction with pattern recognition techniques. The University of Stellenbosch has offered the PI partial funding to support a sabbatical visit in Fall 2003, the primary purpose of which is to enable the PI to develop and enhance his expertise in Pattern Recognition. During the first year, the grant supported publication of 3 referred papers, presentation of 9 seminars and an intensive two-day course on wavelet theory. The grant supported the work of two students who functioned as research assistants.

Sherlock, Barry G.

2002-11-01

71

ESTIMATION OF MUSCLE ACTIVITY USING PROBABILITY DENSITY FUNCTIONS

(Table of contents excerpt) 2.1 EMG Data Acquisition; Chapter 3: EMG Posterior Probability Density Function Estimation Using Bayes' Theorem; Chapter 4: EMG and Kinematic Data Processing

72

Kalman's Shrinkage for Wavelet-Based Despeckling of SAR Images

In this paper, a new probability density function (pdf) is proposed to model the statistics of wavelet coefficients, and a simple Kalman's filter is derived from the new pdf using Bayesian estimation theory. Specifically, we decompose the speckled image into wavelet subbands, we apply the Kalman's filter to the high subbands, and reconstruct a despeckled image from the modified detail

Mario Mastriani; Alberto E. Giraldez

2006-01-01
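The overall pipeline described above, decompose into subbands, shrink the high-frequency details, reconstruct, can be sketched with a one-level Haar transform and soft thresholding standing in for the paper's pdf-derived Kalman shrinkage (a hedged illustration, not the proposed filter):

```python
import numpy as np

def haar2d(img):
    """One-level 2D Haar analysis (sketch; assumes even dimensions)."""
    a = (img[0::2] + img[1::2]) / 2.0    # row averages
    d = (img[0::2] - img[1::2]) / 2.0    # row differences
    ll, hl = (a[:, 0::2] + a[:, 1::2]) / 2.0, (a[:, 0::2] - a[:, 1::2]) / 2.0
    lh, hh = (d[:, 0::2] + d[:, 1::2]) / 2.0, (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, hl, lh, hh

def ihaar2d(ll, hl, lh, hh):
    """Exact inverse of haar2d."""
    a = np.empty((ll.shape[0], 2 * ll.shape[1]))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = ll + hl, ll - hl
    d[:, 0::2], d[:, 1::2] = lh + hh, lh - hh
    img = np.empty((2 * a.shape[0], a.shape[1]))
    img[0::2], img[1::2] = a + d, a - d
    return img

def despeckle(img, thresh):
    """Keep the approximation, shrink the detail subbands. Soft thresholding
    stands in here for the paper's Kalman-filter shrinkage of the details."""
    ll, hl, lh, hh = haar2d(img)
    soft = lambda c: np.sign(c) * np.maximum(np.abs(c) - thresh, 0.0)
    return ihaar2d(ll, soft(hl), soft(lh), soft(hh))

rng = np.random.default_rng(2)
noisy = rng.normal(size=(8, 8))
restored = ihaar2d(*haar2d(noisy))      # round-trip check
smoothed = despeckle(noisy, thresh=0.2)
```

Because the Haar basis blocks are orthogonal, shrinking detail coefficients can only reduce the image variance, which is the sense in which speckle energy is suppressed.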

73

ESTIMATING THE DENSITY OF DRY SNOW LAYERS FROM HARDNESS, AND HARDNESS FROM DENSITY

Relations between the density and hardness of dry snow layers have been established for common grain types. These relations have been widely used to estimate the density of layers from their hardness, and to estimate the hardness of layers in snowpack evolution models. Since 2000, the database of snow layers has

Jamieson, Bruce

74

ESTIMATING ABUNDANCE AND DENSITY: ADDITIONAL METHODS

Several methods have been developed for population estimation in which the organisms need... These methods were first developed in the 1940s for wildlife and fisheries management to get estimates... A second set of methods is of much more recent development and is based on the principle of resighting

Krebs, Charles J.

75

Density estimation using the trapping web design: A geometric analysis

Population densities for small mammal and arthropod populations can be estimated using capture frequencies for a web of traps. A conceptually simple geometric analysis that avoids the need to estimate a point on a density function is proposed. This analysis incorporates data from the outermost rings of traps, explaining large capture frequencies in these rings rather than truncating them from the analysis.

Link, W.A.; Barker, R.J.

1994-01-01

76

Probability density function (pdf) estimation using isocontours/isosurfaces

(Presentation excerpt) Topics: application to image registration; application to image filtering; circular/spherical density estimation; kernel width/bandwidth/number-of-components selection; bias/variance tradeoff (large bandwidth: high bias, low bandwidth: high variance).

Escolano, Francisco

77

Nonparametric estimation of plant density by the distance method

A relation between the plant density and the probability density function of the nearest-neighbor distance (squared) from a random point is established under fairly broad conditions. Based upon this relationship, a nonparametric estimator for the plant density is developed and presented in terms of order statistics. Consistency and asymptotic normality of the estimator are discussed. An interval estimator for the density is obtained. The modifications of this estimator and its variance are given when the distribution is truncated. Simulation results are presented for regular, random and aggregated populations to illustrate the nonparametric estimator and its variance. A numerical example from field data is given. Merits and deficiencies of the estimator are discussed with regard to its robustness and variance.

Patil, S.A.; Burnham, K.P.; Kovner, J.L.

1979-01-01
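The distance-method idea can be illustrated with the classical point-to-nearest-plant estimator for a homogeneous Poisson pattern, a simpler relative of the order-statistic estimator developed in the paper. The field size, intensity, and probe count below are made up:

```python
import numpy as np

def density_from_distances(r):
    """Point-to-nearest-plant estimator: under a homogeneous Poisson pattern
    with intensity lam, pi*R^2 is Exponential(lam), so (n-1)/(pi*sum(R^2))
    is an unbiased estimate of lam. This is the classical distance-method
    idea, not the exact order-statistic estimator of the paper."""
    r = np.asarray(r, dtype=float)
    return (r.size - 1) / (np.pi * np.sum(r ** 2))

rng = np.random.default_rng(1)
lam_true = 50.0                                # plants per unit area
plants = rng.uniform(0, 10, size=(rng.poisson(lam_true * 100), 2))
probes = rng.uniform(1, 9, size=(200, 2))      # random points, away from edges
r = np.sqrt(((probes[:, None, :] - plants[None, :, :]) ** 2).sum(axis=2)).min(axis=1)
lam_hat = density_from_distances(r)
```

Keeping probe points away from the plot boundary sidesteps the edge effects that the paper's truncation modifications address more formally.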

78

An adaptive composite density estimator for k -tree sampling

Density estimators for k-tree distance sampling are sensitive to the amount of extra Poisson variance in distances to the kth tree. To lessen this sensitivity, we propose an adaptive composite estimator (COM). In simulated sampling from 16 test populations, a three-component composite density estimator, with weights determined by a multinomial logistic function of four readily available ancillary variables, was identified as

Steen Magnussen; Lutz Fehrman; William J. Platt

79

Morphology driven density distribution estimation for small bodies

NASA Astrophysics Data System (ADS)

We explore methods to detect and characterize the internal mass distribution of small bodies using the gravity field and shape of the body as data, both of which are determined from the orbit determination process. The discrepancies in the spherical harmonic coefficients are compared between the measured gravity field and the gravity field generated by a homogeneous density assumption. The discrepancies are shown for six different heterogeneous density distribution models and two small bodies, namely 1999 KW4 and Castalia. Using these differences, a constraint is enforced on the internal density distribution of an asteroid, creating an archive of characteristics associated with the same-degree spherical harmonic coefficients. Following the initial characterization of the heterogeneous density distribution models, a generalized density estimation method to recover the hypothetical (i.e., nominal) density distribution of the body is considered. We propose this method as the block density estimation, which dissects the entire body into small slivers and blocks, each homogeneous within itself, to estimate their density values. Significant similarities are observed between the block model and mass concentrations. However, the block model does not suffer errors from shape mismodeling, and the number of blocks can be controlled with ease to yield a unique solution to the density distribution. The results show that the block density estimation approximates the given gravity field well, yielding higher accuracy as the resolution of the density map is increased. The estimated density distribution also reproduces the surface potential and acceleration to within 10% for the particular cases tested in the simulations, an accuracy that is not achievable with the conventional spherical harmonic gravity field. 
The block density estimation can be a useful tool for recovering the internal density distribution of small bodies for scientific purposes, and for mapping the gravity field environment close to the small body's surface for accurate trajectory design and safe navigation in future missions.

Takahashi, Yu; Scheeres, D. J.

2014-05-01

80

Simulating from the posterior density of Bayesian wavelet regression estimates

Each wavelet coefficient is assigned a prior distribution, which is updated by the observed data y to form a posterior for each coefficient. We then estimate each coefficient by the median of its posterior distribution, and the inverse DWT is applied to the resulting estimates to form

Barber, Stuart

81

Neutral wind estimation from 4-D ionospheric electron density images

We develop a new inversion algorithm for Estimating Model Parameters from Ionospheric Reverse Engineering (EMPIRE). The EMPIRE method uses four-dimensional images of global electron density to estimate the field-aligned neutral wind ionospheric driver when direct measurement is not available. We begin with a model of the electron continuity equation that includes production and loss rate estimates, as well as E

S. Datta-Barua; G. S. Bust; G. Crowley; N. Curtis

2009-01-01

82

Density estimation from the sonic log: A case study

In this case study, the authors estimate the bulk densities which would be measured by the density log in a well. They base this estimate on the sonic log, a derived lithology log and velocity-density trend curves. Two published methods based on Gardner et al.'s (1974) relationship and an alternate approach that utilizes an areal trend analysis are evaluated. In comparison with the observed density, Gardner's relationship underpredicts the shale density and overpredicts the sand density. A modification of Gardner's equation (Castagna et al., 1993), which utilizes different coefficients for each lithology, produces a better estimate. However, the results vary from well to well. A local database within their study area provides an empirical calibration to improve upon the Gardner-type relationships for this area. Approximately 1,000 square miles with 50 wells make up their study area in offshore Louisiana, centered on South Marsh Island Block 106. These logs constitute a local database for determining trends in velocity and density for a two-component lithology of sand and shale. The authors identify a linear relationship between the density and logarithm of velocity for both sand and shale. Mixing the sand and shale relationships based on their volume lithologic fractions, they arrive at their density estimate. In comparison to the modified Gardner's method, a comparable or better estimate of the densities is obtained. Furthermore, the linear relationship allows for easy fine-tuning of the local density prediction. If a portion of the well has a density log, they can calibrate the relationships for the remainder of the well. These results show a remarkable fit to the density curve, with errors of less than 2%. When discrepancies are evident, the predicted curve can be used to edit other logs or to indicate the presence of gas.

DiSiena, J.P.; Hilterman, F.J. [Geophysical Development Corp., Houston, TX (United States)]

1994-12-31
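The two velocity-density forms discussed above, Gardner's power law and the study's linear density-versus-log-velocity relation, can be sketched as follows. The Gardner coefficients are the widely quoted defaults (Vp in km/s, density in g/cm³); the log-linear coefficients are fit per lithology, so any values passed in here are hypothetical:

```python
import numpy as np

def gardner_density(vp_kms, a=1.74, b=0.25):
    """Gardner et al.'s (1974) velocity-density transform rho = a * Vp**b,
    with Vp in km/s and rho in g/cm^3. a and b are the commonly quoted
    defaults; the case study refits lithology-specific coefficients."""
    return a * np.asarray(vp_kms, dtype=float) ** b

def loglinear_density(vp_kms, c0, c1):
    """The linear density-versus-log10(velocity) form identified in the
    study: rho = c0 + c1 * log10(Vp). c0 and c1 are fit per lithology,
    so any values used here are hypothetical."""
    return c0 + c1 * np.log10(np.asarray(vp_kms, dtype=float))

rho = float(gardner_density(3.0))   # roughly 2.29 g/cm^3 for a 3 km/s sand
```

A two-component estimate then mixes the sand and shale predictions by their volume fractions, e.g. `rho = vsh * rho_shale + (1 - vsh) * rho_sand`, which is the mixing step the abstract describes.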

83

Nonparametric density estimation in presence of bias and censoring

We consider projection estimator methods for the nonparametric estimation of the density of i.i.d. biased observations with a general known bias function w and under right censoring. Adaptive procedures to select the optimal estimator among a collection by contrast penalization are investigated and proved to give efficient estimators with optimal nonparametric rates of convergence. Monte-Carlo experiments complete the study and

E. Brunel; F. Comte; A. Guilloux

2009-01-01

84

Baseline wander correction in pulse waveforms using wavelet-based cascaded adaptive filter.

Pulse diagnosis is a convenient, inexpensive, painless, and non-invasive diagnostic method. Quantifying pulse diagnosis requires first acquiring and recording pulse waveforms with a set of sensors, and then analyzing these waveforms. However, respiration and motion artifacts during pulse waveform acquisition can introduce baseline wander. It is necessary, therefore, to remove the pulse waveform's baseline wander in order to perform accurate pulse waveform analysis. This paper presents a wavelet-based cascaded adaptive filter (CAF) to remove the baseline wander of pulse waveforms. To evaluate the level of baseline wander, we introduce a criterion: the energy ratio (ER) of the pulse waveform to its baseline wander. If the ER is more than a given threshold, the baseline wander can be removed by cubic spline estimation alone; otherwise it must be filtered by, in sequence, a discrete Meyer wavelet filter and cubic spline estimation. Compared with traditional methods such as cubic spline estimation, morphology filters, and linear-phase finite impulse response (FIR) least-squares-error digital filters, experimental results on 50 simulated and 500 real pulse signals demonstrate the power of the CAF filter both in removing baseline wander and in preserving the diagnostic information of pulse waveforms. The CAF filter can also be used to remove the baseline wander of other physiological signals, such as the ECG. PMID:16930579

Xu, Lisheng; Zhang, David; Wang, Kuanquan; Li, Naimin; Wang, Xiaoyun

2007-05-01
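The cubic-spline stage of the baseline correction can be sketched as follows, with knots anchored at windowed medians. The knot spacing and the synthetic pulse/wander signals are assumptions for illustration; the full CAF adds the discrete Meyer wavelet stage and the ER-based switch:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def remove_baseline(sig, fs, knot_spacing=1.0):
    """Cubic-spline baseline estimation: place knots at the median of
    successive windows (knot_spacing seconds, an assumed setting), fit a
    spline through them, and subtract it from the signal."""
    step = max(1, int(knot_spacing * fs))
    centers = np.arange(step // 2, len(sig) - step // 2, step)
    knots = [np.median(sig[c - step // 2:c + step // 2]) for c in centers]
    baseline = CubicSpline(centers, knots)(np.arange(len(sig)))
    return sig - baseline, baseline

fs = 100
t = np.arange(0, 10, 1 / fs)
pulse = np.sin(2 * np.pi * 2.0 * t)           # stand-in for a pulse waveform
wander = 0.8 * np.sin(2 * np.pi * 0.05 * t)   # slow respiratory drift
corrected, baseline = remove_baseline(pulse + wander, fs)
```

The windowed median tracks the slow drift while largely ignoring the faster pulse oscillation, so subtracting the fitted spline leaves the diagnostic waveform intact.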

85

Maximum likelihood estimation of a multivariate log-concave density

Density estimation is often one stage in a more complicated statistical procedure. With this in mind, we show how the estimator may be used for plug-in estimation of statistical functionals. A second important extension is the use of log-concave components... As discussed in Section 1.2.1, some restrictions are necessary to ensure that the density does not get too “spiky”. Shape-constrained maximum likelihood inference was first introduced by Grenander (1956) in the context of estimating mortality under the assumption...

Cule, Madeleine

2010-01-12

86

Wavelet-based face verification for constrained platforms

NASA Astrophysics Data System (ADS)

Human identification based on facial images is one of the most challenging tasks in comparison to identification based on other biometric features such as fingerprints, palm prints or iris. Facial recognition is the most natural and suitable method of identification for security related applications. This paper is concerned with wavelet-based schemes for efficient face verification suitable for implementation on devices that are constrained in memory size and computational power such as PDAs and smartcards. Besides minimal storage requirements, we should apply as few pre-processing procedures as possible, which are often needed to deal with variation in recording conditions. We propose the LL coefficients of wavelet-transformed face images as the feature vectors for face verification, and compare their performance with that of PCA applied in the LL-subband at levels 3, 4 and 5. We shall also compare the performance of various versions of our scheme with those of well-established PCA face verification schemes on the BANCA database as well as the ORL database. In many cases, the wavelet-only feature vector scheme has the best performance while maintaining efficacy and requiring minimal pre-processing steps. The significance of these results is their efficiency and suitability for platforms of constrained computational power and storage capacity (e.g. smartcards). Moreover, working at or beyond the level 3 LL-subband results in robustness against high-rate compression and noise interference.

Sellahewa, Harin; Jassim, Sabah A.

2005-03-01

87

Wavelet-based laser-induced ultrasonic inspection in pipes

NASA Astrophysics Data System (ADS)

The feasibility of detecting localized defects in tubing using wavelet-based laser-induced ultrasonic guided waves as an inspection method is examined. Ultrasonic guided waves initiated and propagating in hollow cylinders (pipes and/or tubes) are studied as an alternative, robust nondestructive in situ inspection method. Contrary to other traditional methods for pipe inspection, in which contact transducers (electromagnetic, piezoelectric) and/or coupling media (submersion liquids) are used, this method is characterized by its non-contact nature. This characteristic is particularly important in applications involving Nondestructive Evaluation (NDE) of materials because the signal being detected corresponds only to the induced wave. Cylindrical guided waves are generated using a Q-switched Nd:YAG laser, and a Fiber Tip Interferometry (FTI) system is used to acquire the waves. Guided wave experimental techniques are developed for the measurement of phase velocities to determine elastic properties of the material and the location and geometry of flaws including inclusions, voids, and cracks in hollow cylinders. Compared to traditional bulk wave methods, the use of guided waves offers several important potential advantages, including better inspection efficiency, applicability to in-situ tube inspection, and fewer evaluation fluctuations with increased reliability.

Baltazar-López, Martín E.; Suh, Steve; Chona, Ravinder; Burger, Christian P.

2006-02-01

88

Experimental and numerical evaluation of wavelet based damage detection methodologies

NASA Astrophysics Data System (ADS)

This article presents an evaluation of the capabilities of wavelet-based methodologies for damage identification in civil structures. Two different approaches were evaluated: (1) analysis of the evolution of the structure's frequencies by means of the continuous wavelet transform and (2) analysis of the singularities generated in the high-frequency response of the structure through the detail functions obtained via the fast wavelet transform. The methodologies were evaluated using experimental and numerically simulated data. It was found that the selection of appropriate wavelet parameters is critical for a successful analysis of the signal. Wavelet parameters should be selected based on the expected frequency content of the signal and the desired time and frequency resolutions. Identification of frequency shifts via ridge extraction of the wavelet map was successful in most of the experimental and numerical scenarios investigated. Moreover, the frequency shift can usually be inferred, but the exact time at which it occurs is not evident. However, this information can be retrieved from the spike location in the Fast Wavelet Transform analysis. Therefore, it is recommended to perform both types of analysis and consider the results together.

Quiñones, Mireya M.; Montejo, Luis A.; Jang, Shinae

2015-03-01

89

Wavelet-based acoustic emission detection method with adaptive thresholding

NASA Astrophysics Data System (ADS)

Reductions in Navy maintenance budgets and available personnel have dictated the need to transition from time-based to 'condition-based' maintenance. Achieving this will require new enabling diagnostic technologies. One such technology, the use of acoustic emission for the early detection of helicopter rotor head dynamic component faults, has been investigated by Honeywell Technology Center for its rotor acoustic monitoring system (RAMS). This ambitious, 38-month, proof-of-concept effort, which was a part of the Naval Surface Warfare Center Air Vehicle Diagnostics System program, culminated in a successful three-week flight test of the RAMS system at Patuxent River Flight Test Center in September 1997. The flight test results demonstrated that stress-wave acoustic emission technology can detect signals equivalent to small fatigue cracks in rotor head components and can do so across the rotating articulated rotor head joints and in the presence of other background acoustic noise generated during flight operation. This paper presents the results of stress wave data analysis of the flight-test dataset using wavelet-based techniques to assess background operational noise vs. machinery failure detection results.

Menon, Sunil; Schoess, Jeffrey N.; Hamza, Rida; Busch, Darryl

2000-06-01

90

A neural and morphological method for wavelet-based image compression

Image compression using the wavelet transform has several advantages over other transform methods. However, wavelet-based compression methods require not only the encoding of the significant coefficients, but also of their positions within the image. The paper presents a wavelet-based image compression method where the significance map is pre-processed using mathematical morphology techniques to create clusters of significant coefficients. It is

W. T. de Almeida Filho; A. D. Doria Neto; A. M. Brito Junior

2002-01-01

91

A novel wavelet-based finite element method for the analysis of rotor-bearing systems

The rotor dynamic theory, combined with finite element method, has been widely used over the last three decades in order to calculate the dynamic parameters in rotor-bearing systems. Since the wavelet-based elements offer multi-scale models, particularly in modeling complex systems, the wavelet-based rotating shaft elements are constructed to model rotor-bearing systems. The effects of translational and rotatory inertia, the gyroscopic

Jiawei Xiang; Dongdi Chen; Xuefeng Chen; Zhengjia He

2009-01-01

92

Unbiased estimators of wildlife population densities using aural information

A thesis by Eric Newton Durland, submitted to the Graduate College of Texas A&M University in partial fulfillment of the requirements for the degree of Master of Science, May 1969. Major subject: Statistics.

Durland, Eric Newton

1969-01-01

93

Evaluation of wolf density estimation from radiotelemetry data

Density estimation of wolves (Canis lupus) requires a count of individuals and an estimate of the area those individuals inhabit. With radiomarked wolves, the count is straightforward but estimation of the area is more difficult and often given inadequate attention. The population area, based on the mosaic of pack territories, is influenced by sampling intensity similar to the estimation of individual home ranges. If sampling intensity is low, population area will be underestimated and wolf density will be inflated. Using data from studies in Denali National Park and Preserve, Alaska, we investigated these relationships using Monte Carlo simulation to evaluate effects of radiolocation effort and number of marked packs on density estimation. As the number of adjoining pack home ranges increased, fewer relocations were necessary to define a given percentage of population area. We present recommendations for monitoring wolves via radiotelemetry.

Burch, J.W.; Adams, L.G.; Follmann, E.H.; Rexstad, E.A.

2005-01-01

94

Incorporating prior knowledge into nonparametric conditional density estimation

In this paper, the problem of sparse nonparametric conditional density estimation based on samples and prior knowledge is addressed. The prior knowledge may be restricted to parts of the state space and given as generative models in the form of mean-function constraints or as probabilistic models in the form of Gaussian mixture densities. The key idea is the introduction of

Peter Krauthausen; Masoud Roschani; Uwe D. Hanebeck

2011-01-01

95

Asymptotic Equivalence of Density Estimation and Gaussian White Noise

Michael Nussbaum, Weierstrass Institute, Berlin, September 1995. Abstract: Signal recovery in Gaussian white noise with variance tending to zero is a classical benchmark; density estimation from observations with density f is shown to be globally asymptotically equivalent to a white noise experiment with drift f^(1/2) and variance 1.

Nussbaum, Michael

96

MODEL-BASED CLUSTERING, DISCRIMINANT ANALYSIS, AND DENSITY ESTIMATION

By Chris Fraley and Adrian E. Raftery, Seattle, Washington 98195, USA (www.stat.washington.edu/fraley, www.stat.washington.edu/raftery). Abstract: Cluster analysis is the automated search for groups

University of Washington at Seattle

97

Improving 3D Wavelet-Based Compression of Hyperspectral Images

NASA Technical Reports Server (NTRS)

Two methods of increasing the effectiveness of three-dimensional (3D) wavelet-based compression of hyperspectral images have been developed. (As used here, images signifies both images and digital data representing images.) The methods are oriented toward reducing or eliminating detrimental effects of a phenomenon, referred to as spectral ringing, that is described below. In 3D wavelet-based compression, an image is represented by a multiresolution wavelet decomposition consisting of several subbands obtained by applying wavelet transforms in the two spatial dimensions corresponding to the two spatial coordinate axes of the image plane, and by applying wavelet transforms in the spectral dimension. Spectral ringing is named after the more familiar spatial ringing (spurious spatial oscillations) that can be seen parallel to and near edges in ordinary images reconstructed from compressed data. These ringing phenomena are attributable to effects of quantization. In hyperspectral data, the individual spectral bands play the role of edges, causing spurious oscillations to occur in the spectral dimension. In the absence of such corrective measures as the present two methods, spectral ringing can manifest itself as systematic biases in some reconstructed spectral bands and can reduce the effectiveness of compression of spatially-low-pass subbands. One of the two methods is denoted mean subtraction. The basic idea of this method is to subtract mean values from spatial planes of spatially low-pass subbands prior to encoding, because (a) such spatial planes often have mean values that are far from zero and (b) zero-mean data are better suited for compression by methods that are effective for subbands of two-dimensional (2D) images. In this method, after the 3D wavelet decomposition is performed, mean values are computed for and subtracted from each spatial plane of each spatially-low-pass subband. 
The resulting data are converted to sign-magnitude form and compressed in a manner similar to that of a baseline hyperspectral-image-compression method. The mean values are encoded in the compressed bit stream and added back to the data at the appropriate decompression step. The overhead incurred by encoding the mean values (only a few bits per spectral band) is negligible with respect to the huge size of a typical hyperspectral data set. The other method is denoted modified decomposition. This method is so named because it involves a modified version of a commonly used multiresolution wavelet decomposition, known in the art as the 3D Mallat decomposition, in which (a) the first of multiple stages of a 3D wavelet transform is applied to the entire dataset and (b) subsequent stages are applied only to the horizontally-, vertically-, and spectrally-low-pass subband from the preceding stage. In the modified decomposition, in stages after the first, not only is the spatially-low-pass, spectrally-low-pass subband further decomposed, but also spatially-low-pass, spectrally-high-pass subbands are further decomposed spatially. Either method can be used alone to improve the quality of a reconstructed image (see figure). Alternatively, the two methods can be combined by first performing modified decomposition, then subtracting the mean values from spatial planes of spatially-low-pass subbands.

Klimesh, Matthew; Kiely, Aaron; Xie, Hua; Aranki, Nazeeh

2009-01-01
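The mean-subtraction step is simple enough to sketch directly: per-plane means are removed before encoding and returned so the decoder can add them back. Array shapes and values below are illustrative:

```python
import numpy as np

def mean_subtract(subband):
    """Mean-subtraction step for a spatially-low-pass subband of shape
    (bands, rows, cols): remove each spectral plane's mean before encoding
    and return the means so the decoder can restore them."""
    means = subband.mean(axis=(1, 2), keepdims=True)
    return subband - means, means.squeeze(axis=(1, 2))

rng = np.random.default_rng(0)
cube = rng.normal(5.0, 1.0, size=(4, 8, 8)) + np.arange(4)[:, None, None]
zero_mean, means = mean_subtract(cube)        # encoder side
restored = zero_mean + means[:, None, None]   # decoder adds means back
```

The zero-mean planes are what 2D subband coders expect, and the per-band means cost only a few bits each, matching the negligible overhead the abstract describes.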

98

A wavelet-based approach to face verification/recognition

NASA Astrophysics Data System (ADS)

Face verification/recognition is a tough challenge in comparison to identification based on other biometrics such as iris, or fingerprints. Yet, due to its unobtrusive nature, the face is naturally suitable for security related applications. Face verification process relies on feature extraction from face images. Current schemes are either geometric-based or template-based. In the latter, the face image is statistically analysed to obtain a set of feature vectors that best describe it. Performance of a face verification system is affected by image variations due to illumination, pose, occlusion, expressions and scale. This paper extends our recent work on face verification for constrained platforms, where the feature vector of a face image is the coefficients in the wavelet transformed LL-subbands at depth 3 or more. It was demonstrated that the wavelet-only feature vector scheme has a comparable performance to sophisticated state-of-the-art when tested on two benchmark databases (ORL, and BANCA). The significance of those results stem from the fact that the size of the k-th LL- subband is 1/4k of the original image size. Here, we investigate the use of wavelet coefficients in various subbands at level 3 or 4 using various wavelet filters. We shall compare the performance of the wavelet-based scheme for different filters at different subbands with a number of state-of-the-art face verification/recognition schemes on two benchmark databases, namely ORL and the control section of BANCA. We shall demonstrate that our schemes have comparable performance to (or outperform) the best performing other schemes.

Jassim, Sabah; Sellahewa, Harin

2005-10-01

99

Wavelet-based multiscale performance analysis: An approach to assess and improve hydrological models

NASA Astrophysics Data System (ADS)

The temporal dynamics of hydrological processes are spread across different time scales and, as such, the performance of hydrological models cannot be estimated reliably from global performance measures that assign a single number to the fit of a simulated time series to an observed reference series. Accordingly, it is important to analyze model performance at different time scales. Wavelets have been used extensively in the area of hydrological modeling for multiscale analysis, and have been shown to be very reliable and useful in understanding dynamics across time scales and as these evolve in time. In this paper, a wavelet-based multiscale performance measure for hydrological models is proposed and tested (i.e., Multiscale Nash-Sutcliffe Criteria and Multiscale Normalized Root Mean Square Error). The main advantage of this method is that it provides a quantitative measure of model performance across different time scales. In the proposed approach, model and observed time series are decomposed using the Discrete Wavelet Transform (known as the à trous wavelet transform), and performance measures of the model are obtained at each time scale. The applicability of the proposed method was explored using various case studies--both real as well as synthetic. The synthetic case studies included various kinds of errors (e.g., timing error, under and over prediction of high and low flows) in outputs from a hydrologic model. The real case studies included simulation results of both the process-based Soil Water Assessment Tool (SWAT) model and statistical models, namely the Coupled Wavelet-Volterra (WVC), Artificial Neural Network (ANN), and Auto Regressive Moving Average (ARMA) methods. For the SWAT model, data from Wainganga and Sind Basin (India) were used, while for the Wavelet Volterra, ANN and ARMA models, data from the Cauvery River Basin (India) and Fraser River (Canada) were used. 
The study also explored the effect of the choice of the wavelets in multiscale model evaluation. It was found that the proposed wavelet-based performance measures, namely the MNSC (Multiscale Nash-Sutcliffe Criteria) and MNRMSE (Multiscale Normalized Root Mean Square Error), are a more reliable measure than traditional performance measures such as the Nash-Sutcliffe Criteria (NSC), Root Mean Square Error (RMSE), and Normalized Root Mean Square Error (NRMSE). Further, the proposed methodology can be used to: i) compare different hydrological models (both physical and statistical models), and ii) help in model calibration.
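The per-scale comparison described above can be sketched in a few lines, assuming a simple à trous decomposition with a [1/4, 1/2, 1/4] smoothing kernel and clamped boundaries (illustrative choices, not necessarily the paper's exact filter):

```python
import numpy as np

def a_trous_decompose(x, levels=3):
    """Decompose a 1-D series with the a trous (stationary) wavelet transform.
    Returns (details, smooth): one detail array per scale plus the final smooth."""
    kernel = np.array([0.25, 0.5, 0.25])  # simple smoothing kernel
    c = np.asarray(x, dtype=float)
    details = []
    n = len(c)
    for j in range(levels):
        step = 2 ** j                     # kernel taps spread apart ('holes')
        smooth = np.zeros_like(c)
        for off, w in zip((-step, 0, step), kernel):
            idx = np.clip(np.arange(n) + off, 0, n - 1)  # clamped boundary
            smooth += w * c[idx]
        details.append(c - smooth)        # detail = what the smoothing removed
        c = smooth
    return details, c

def multiscale_nse(obs, sim, levels=3):
    """Nash-Sutcliffe efficiency computed separately on each wavelet scale."""
    d_obs, s_obs = a_trous_decompose(obs, levels)
    d_sim, s_sim = a_trous_decompose(sim, levels)
    scores = []
    for wo, ws in zip(d_obs + [s_obs], d_sim + [s_sim]):
        scores.append(1.0 - np.sum((wo - ws) ** 2) / np.sum((wo - wo.mean()) ** 2))
    return scores
```

A perfect simulation scores 1 at every scale; timing errors degrade mostly the fine-scale scores while volume errors degrade the coarse ones.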

Rathinasamy, Maheswaran; Khosa, Rakesh; Adamowski, Jan; ch, Sudheer; Partheepan, G.; Anand, Jatin; Narsimlu, Boini

2014-12-01

100

Atmospheric Density Corrections Estimated from Fitted Drag Coefficients

NASA Astrophysics Data System (ADS)

Fitted drag coefficients estimated using GEODYN, the NASA Goddard Space Flight Center Precision Orbit Determination and Geodetic Parameter Estimation Program, are used to create density corrections. The drag coefficients were estimated for Stella, Starlette and GFZ using satellite laser ranging (SLR) measurements; and for GEOSAT Follow-On (GFO) using SLR, Doppler, and altimeter crossover measurements. The data analyzed covers years ranging from 2000 to 2004 for Stella and Starlette, 2000 to 2002 and 2005 for GFO, and 1995 to 1997 for GFZ. The drag coefficient was estimated every eight hours. The drag coefficients over the course of a year show a consistent variation about the theoretical and yearly average values that primarily represents a semi-annual/seasonal error in the atmospheric density models used. The atmospheric density models examined were NRLMSISE-00 and MSIS-86. The annual structure of the major variations was consistent among all the satellites for a given year and consistent among all the years examined. The fitted drag coefficients can be converted into density corrections every eight hours along the orbit of the satellites. In addition, drag coefficients estimated more frequently can provide a higher frequency of density correction.
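The conversion from fitted drag coefficients to density corrections follows from drag acceleration scaling with the product Cd·ρ, so a density error in the model is absorbed into the fitted coefficient. A minimal sketch (the function name and the plain ratio form are assumptions; the operational GEODYN processing is more involved):

```python
import numpy as np

def density_correction(cd_fitted, cd_physical, rho_model):
    """Drag acceleration scales with Cd * rho, so a model density error is
    absorbed into the fitted drag coefficient; the corrected density is
    rho ~ (Cd_fitted / Cd_physical) * rho_model along the orbit."""
    return np.asarray(cd_fitted, dtype=float) / cd_physical * np.asarray(rho_model, dtype=float)
```

Each eight-hour fitted coefficient then yields one density correction over that arc; fitting more frequently gives a higher-frequency correction.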

McLaughlin, C. A.; Lechtenberg, T. F.; Mance, S. R.; Mehta, P.

2010-12-01

101

Non-local crime density estimation incorporating housing information.

Given a discrete sample of event locations, we wish to produce a probability density that models the relative probability of events occurring in a spatial domain. Standard density estimation techniques do not incorporate priors informed by spatial data. Such methods can result in assigning significant positive probability to locations where events cannot realistically occur. In particular, when modelling residential burglaries, standard density estimation can predict residential burglaries occurring where there are no residences. Incorporating the spatial data can inform the valid region for the density. When modelling very few events, additional priors can help to correctly fill in the gaps. Learning and enforcing correlation between spatial data and event data can yield better estimates from fewer events. We propose a non-local version of maximum penalized likelihood estimation based on the H(1) Sobolev seminorm regularizer that computes non-local weights from spatial data to obtain more spatially accurate density estimates. We evaluate this method in application to a residential burglary dataset from San Fernando Valley with the non-local weights informed by housing data or a satellite image. PMID:25288817

Woodworth, J T; Mohler, G O; Bertozzi, A L; Brantingham, P J

2014-11-13

102

Non-local crime density estimation incorporating housing information

Given a discrete sample of event locations, we wish to produce a probability density that models the relative probability of events occurring in a spatial domain. Standard density estimation techniques do not incorporate priors informed by spatial data. Such methods can result in assigning significant positive probability to locations where events cannot realistically occur. In particular, when modelling residential burglaries, standard density estimation can predict residential burglaries occurring where there are no residences. Incorporating the spatial data can inform the valid region for the density. When modelling very few events, additional priors can help to correctly fill in the gaps. Learning and enforcing correlation between spatial data and event data can yield better estimates from fewer events. We propose a non-local version of maximum penalized likelihood estimation based on the H1 Sobolev seminorm regularizer that computes non-local weights from spatial data to obtain more spatially accurate density estimates. We evaluate this method in application to a residential burglary dataset from San Fernando Valley with the non-local weights informed by housing data or a satellite image. PMID:25288817

Woodworth, J. T.; Mohler, G. O.; Bertozzi, A. L.; Brantingham, P. J.

2014-01-01

103

Kernel density estimation of a multidimensional efficiency profile

NASA Astrophysics Data System (ADS)

Kernel density estimation is a convenient way to estimate the probability density of a distribution given the sample of data points. However, it has certain drawbacks: proper description of the density using narrow kernels needs large data samples, whereas if the kernel width is large, boundaries and narrow structures tend to be smeared. Here, an approach to correct for such effects is proposed that uses an approximate density to describe narrow structures and boundaries. The approach is shown to be well suited for the description of the efficiency shape over a multidimensional phase space in a typical particle physics analysis. An example is given for the five-dimensional phase space of the Λb0 → D0pπ− decay.
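One way to read the correction idea: smear the approximate density with the same kernel used on the data (here, by building a KDE from a large sample of it), and take the ratio so the common smearing cancels. A 1-D numpy sketch, not the paper's multidimensional implementation:

```python
import numpy as np

def gauss_kde(sample, x, h):
    # plain Gaussian kernel density estimate of `sample`, evaluated at `x`
    u = (np.asarray(x)[:, None] - np.asarray(sample)[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (len(sample) * h * np.sqrt(2.0 * np.pi))

def corrected_kde(data, approx_sample, approx_pdf, x, h):
    """Divide the data KDE by a KDE built from a large sample of the
    approximate density, then multiply back by the exact approximate pdf:
    the kernel smearing cancels in the ratio, restoring boundaries and
    narrow structures that a plain KDE washes out."""
    num = gauss_kde(data, x, h)
    den = gauss_kde(approx_sample, x, h)
    return approx_pdf(np.asarray(x)) * num / np.maximum(den, 1e-300)
```

At a hard boundary (e.g. an exponential density at zero) the raw KDE loses roughly half the mass, while the corrected estimate recovers the true value.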

Poluektov, A.

2015-02-01

104

Kernel density estimation of a multidimensional efficiency profile

Kernel density estimation is a convenient way to estimate the probability density of a distribution given the sample of data points. However, it has certain drawbacks: proper description of the density using narrow kernels needs large data samples, whereas if the kernel width is large, boundaries and narrow structures tend to be smeared. Here, an approach to correct for such effects is proposed that uses an approximate density to describe narrow structures and boundaries. The approach is shown to be well suited for the description of the efficiency shape over a multidimensional phase space in a typical particle physics analysis. An example is given for the five-dimensional phase space of the $\\Lambda_b^0\\to D^0p\\pi$ decay.

Anton Poluektov

2014-11-20

105

Density estimation using KNN and a potential model

NASA Astrophysics Data System (ADS)

Density-based clustering methods are usually more adaptive than other classical methods in that they can identify clusters of various shapes and can handle noisy data. A novel density estimation method is proposed using both the k-nearest neighbor (KNN) graph and a hypothetical potential field of the data points to capture the local and global data distribution information, respectively. An initial density score computed using KNN is used as the mass of the data point in computing the potential values. Then the computed potential is used as the new density estimation, from which the final clustering result is derived. All the parameters used in the proposed method are determined from the input data automatically. The new clustering method is evaluated by comparing with K-means++, DBSCAN, and CSPV. The experimental results show that the proposed method can determine the number of clusters automatically while producing competitive clustering results compared to the other three methods.
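A minimal 2-D sketch of the two-stage idea, with a KNN score as each point's "mass" and a Gaussian potential field summed over all points (k, sigma, and the field form are fixed by hand here, whereas the paper determines its parameters from the data automatically):

```python
import numpy as np

def knn_density(points, k):
    # initial density score: inverse distance to the k-th nearest neighbour
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    d.sort(axis=1)                       # d[:, 0] is each point's self-distance
    return 1.0 / (d[:, k] + 1e-12)

def potential_density(points, k=3, sigma=1.0):
    """Treat the KNN score as the mass of each point and sum a Gaussian
    potential over all points, blending local (KNN) and global (potential)
    distribution information into a single density estimate."""
    mass = knn_density(points, k)
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(axis=-1)
    return (mass[None, :] * np.exp(-d2 / (2.0 * sigma**2))).sum(axis=1)
```

Points inside a tight cluster accumulate large potential from their dense neighbours, while an isolated outlier sees almost nothing beyond its own mass.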

Lu, Yonggang; Qiao, Jiangang; Liao, Li; Yang, Wuyang

2013-10-01

106

Quantiles, parametric-select density estimation, and bi-information parameter estimators

NASA Technical Reports Server (NTRS)

A quantile-based approach to statistical analysis and probability modeling of data is presented which formulates statistical inference problems as functional inference problems in which the parameters to be estimated are density functions. Density estimators can be non-parametric (computed independently of any identified model) or parametric-select (approximated by finite parametric models that can provide standard models whose fit can be tested). Exponential models and autoregressive models are approximating densities which can be justified as maximum entropy models for the entropy of a probability density and the entropy of a quantile density, respectively. Applications of these ideas are outlined to the problems of modeling: (1) univariate data; (2) bivariate data and tests for independence; and (3) two samples and likelihood ratios. It is proposed that bi-information estimation of a density function can be developed by analogy to the problem of identification of regression models.

Parzen, E.

1982-01-01

107

Density-ratio robustness in dynamic state estimation

NASA Astrophysics Data System (ADS)

The filtering problem is addressed by taking into account imprecision in the knowledge about the probabilistic relationships involved. Imprecision is modelled in this paper by a particular closed convex set of probabilities that is known with the name of density ratio class or constant odds-ratio (COR) model. The contributions of this paper are the following. First, we shall define an optimality criterion based on the squared-loss function for the estimates derived from a general closed convex set of distributions. Second, after revising the properties of the density ratio class in the context of parametric estimation, we shall extend these properties to state estimation accounting for system dynamics. Furthermore, for the case in which the nominal density of the COR model is a multivariate Gaussian, we shall derive closed-form solutions for the set of optimal estimates and for the credible region. Third, we discuss how to perform Monte Carlo integrations to compute lower and upper expectations from a COR set of densities. Then we shall derive a procedure that, employing Monte Carlo sampling techniques, allows us to propagate in time both the lower and upper state expectation functionals and, thus, to derive an efficient solution of the filtering problem. Finally, we empirically compare the proposed estimator with the Kalman filter. This shows that our solution is more robust to the presence of modelling errors in the system and that, hence, appears to be a more realistic approach than the Kalman filter in such a case.

Benavoli, Alessio; Zaffalon, Marco

2013-05-01

108

Nonparametric probability density estimation by optimization theoretic techniques

NASA Technical Reports Server (NTRS)

Two nonparametric probability density estimators are considered. The first is the kernel estimator. The problem of choosing the kernel scaling factor based solely on a random sample is addressed. An interactive mode is discussed and an algorithm proposed to choose the scaling factor automatically. The second nonparametric probability estimate uses penalty function techniques with the maximum likelihood criterion. A discrete maximum penalized likelihood estimator is proposed and is shown to be consistent in the mean square error. A numerical implementation technique for the discrete solution is discussed and examples displayed. An extensive simulation study compares the integrated mean square error of the discrete and kernel estimators. The robustness of the discrete estimator is demonstrated graphically.
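For the first estimator, one automatic, sample-only rule for the kernel scaling factor is leave-one-out likelihood cross-validation; the sketch below illustrates that kind of criterion (not necessarily the paper's exact algorithm):

```python
import numpy as np

def lcv_bandwidth(sample, grid):
    """Choose the kernel scaling factor h by leave-one-out likelihood
    cross-validation: for each candidate h, score the log-likelihood of each
    point under the KDE built from all OTHER points, and keep the best h."""
    x = np.asarray(sample, dtype=float)
    n = len(x)
    best_h, best_score = None, -np.inf
    for h in grid:
        u = (x[:, None] - x[None, :]) / h
        k = np.exp(-0.5 * u**2) / (h * np.sqrt(2.0 * np.pi))
        np.fill_diagonal(k, 0.0)              # leave each point out of its own KDE
        loo = k.sum(axis=1) / (n - 1)
        score = np.log(np.maximum(loo, 1e-300)).sum()
        if score > best_score:
            best_h, best_score = h, score
    return best_h
```

Too small an h overfits isolated points (tiny leave-one-out likelihoods); too large an h oversmooths; the criterion balances the two from the sample alone.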

Scott, D. W.

1976-01-01

109

Estimating the spectrum of a density matrix with LOCC

The problem of estimating the spectrum of a density matrix is considered. Other problems, such as bipartite pure state entanglement, can be reduced to spectrum estimation. A local operations and classical communication (LOCC) measurement strategy is shown which is asymptotically optimal. This means that, for a very large number of copies, it becomes unnecessary to perform collective measurements which should be more difficult to implement in practice.

Manuel A. Ballester

2006-02-01

110

An Infrastructureless Approach to Estimate Vehicular Density in Urban Environments

In Vehicular Networks, communication success usually depends on the density of vehicles, since a higher density allows having shorter and more reliable wireless links. Thus, knowing the density of vehicles in a vehicular communications environment is important, as better opportunities for wireless communication can arise. However, vehicle density is highly variable in time and space. This paper deals with the importance of predicting the density of vehicles in vehicular environments to make decisions for enhancing the dissemination of warning messages between vehicles. We propose a novel mechanism to estimate the vehicular density in urban environments. Our mechanism uses as input parameters the number of beacons received per vehicle, and the topological characteristics of the environment where the vehicles are located. Simulation results indicate that, unlike previous proposals solely based on the number of beacons received, our approach is able to accurately estimate the vehicular density, and therefore it could support more efficient dissemination protocols for vehicular environments, as well as improve previously proposed schemes. PMID:23435054

Sanguesa, Julio A.; Fogue, Manuel; Garrido, Piedad; Martinez, Francisco J.; Cano, Juan-Carlos; Calafate, Carlos T.; Manzoni, Pietro

2013-01-01

111

Density estimation in tiger populations: combining information for strong inference

A productive way forward in studies of animal populations is to efficiently make use of all the information available, either as raw data or as published sources, on critical parameters of interest. In this study, we demonstrate two approaches to the use of multiple sources of information on a parameter of fundamental interest to ecologists: animal density. The first approach produces estimates simultaneously from two different sources of data. The second approach was developed for situations in which initial data collection and analysis are followed up by subsequent data collection and prior knowledge is updated with new data using a stepwise process. Both approaches are used to estimate density of a rare and elusive predator, the tiger, by combining photographic and fecal DNA spatial capture–recapture data. The model, which combined information, provided the most precise estimate of density (8.5 ± 1.95 tigers/100 km2 [posterior mean ± SD]) relative to a model that utilized only one data source (photographic, 12.02 ± 3.02 tigers/100 km2 and fecal DNA, 6.65 ± 2.37 tigers/100 km2). Our study demonstrates that, by accounting for multiple sources of available information, estimates of animal density can be significantly improved.

Gopalaswamy, Arjun M.; Royle, J. Andrew; Delampady, Mohan; Nichols, James D.; Karanth, K. Ullas; Macdonald, David W.

2012-01-01

112

Improved Fast Gauss Transform and Efficient Kernel Density Estimation

Evaluating sums of multivariate Gaussians is a common computational task in computer vision and pattern recognition, including in the general and powerful kernel density estimation technique. The quadratic computational complexity of the summation is a significant barrier to the scalability of this algorithm to practical applications. The fast Gauss transform (FGT) has successfully
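The summation in question is the discrete Gauss transform; a direct O(N·M) baseline makes clear what the (improved) fast Gauss transform approximates in linear time via series expansions and space subdivision:

```python
import numpy as np

def gauss_sum_direct(sources, targets, weights, h):
    """Direct evaluation of G(y_j) = sum_i w_i * exp(-||y_j - x_i||^2 / h^2),
    the quadratic-cost summation that the (improved) FGT accelerates."""
    d2 = ((targets[:, None, :] - sources[None, :, :]) ** 2).sum(axis=-1)
    return (weights[None, :] * np.exp(-d2 / h**2)).sum(axis=1)
```

With N sources and M targets this costs N·M kernel evaluations, which is exactly the scalability barrier the abstract describes.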

Changjiang Yang; Ramani Duraiswami; Nail A. Gumerov; Larry S. Davis

2003-01-01

113

Estimating the Density of Honeybee Colonies across Their Natural Range

… the demography of the western honeybee (Apis mellifera) has not been considered by conservationists because

Paxton, Robert

114

Adaptive density estimation for directional data using needlets

This paper is concerned with density estimation of directional data on the sphere. We introduce a procedure based on thresholding on a new type of spherical wavelets called needlets. We establish a minimax result and prove its optimality. We are motivated by astrophysical applications, in particular in connection with the analysis of ultra high energy cosmic rays.

P. Baldi; G. Kerkyacharian; D. Marinucci; D. Picard

2008-07-31

115

Extracting galactic structure parameters from multivariated density estimation

NASA Technical Reports Server (NTRS)

Multivariate statistical analysis, including cluster analysis (unsupervised classification), discriminant analysis (supervised classification), and principal component analysis (a dimensionality reduction method), together with nonparametric density estimation, has been successfully used to search for meaningful associations in the 5-dimensional space of observables between observed points and the sets of simulated points generated from a synthetic approach to galaxy modelling. These methodologies can be applied as new tools to obtain information about hidden structure otherwise unrecognizable, and place important constraints on the space distribution of various stellar populations in the Milky Way. In this paper, we concentrate on illustrating how to use nonparametric density estimation to substitute for the true densities of both the simulated sample and the real sample in the five-dimensional space. In order to fit model-predicted densities to reality, we derive a set of n equations (where n is the total number of observed points) in m unknown parameters (where m is the number of predefined groups). A least-squares estimation will allow us to determine the density law of different groups and components in the Galaxy. The output from our software, which can be used in many research fields, will also give the systematic error between the model and the observation by a Bayes rule.

Chen, B.; Creze, M.; Robin, A.; Bienayme, O.

1992-01-01

116

Estimating Density Gradients and Drivers from 3D Ionospheric Imaging

NASA Astrophysics Data System (ADS)

The transition regions at the edges of the ionospheric storm-enhanced density (SED) are important for a detailed understanding of the mid-latitude physical processes occurring during major magnetic storms. At the boundary, the density gradients are evidence of the drivers that link the larger processes of the SED, with its connection to the plasmasphere and prompt-penetration electric fields, to the smaller irregularities that result in scintillations. For this reason, we present our estimates of both the plasma variation with horizontal and vertical spatial scale of 10 - 100 km and the plasma motion within and along the edges of the SED. To estimate the density gradients, we use Ionospheric Data Assimilation Four-Dimensional (IDA4D), a mature data assimilation algorithm that has been developed over several years and applied to investigations of polar cap patches and space weather storms [Bust and Crowley, 2007; Bust et al., 2007]. We use the density specification produced by IDA4D with a new tool for deducing ionospheric drivers from 3D time-evolving electron density maps, called Estimating Model Parameters from Ionospheric Reverse Engineering (EMPIRE). The EMPIRE technique has been tested on simulated data from TIMEGCM-ASPEN and on IDA4D-based density estimates with ongoing validation from Arecibo ISR measurements [Datta-Barua et al., 2009a; 2009b]. We investigate the SED that formed during the geomagnetic super storm of November 20, 2003. We run IDA4D at low-resolution continent-wide, and then re-run it at high (~10 km horizontal and ~5-20 km vertical) resolution locally along the boundary of the SED, where density gradients are expected to be highest. We input the high-resolution estimates of electron density to EMPIRE to estimate the ExB drifts and field-aligned plasma velocities along the boundaries of the SED. We expect that these drivers contribute to the density structuring observed along the SED during the storm. Bust, G. S. and G. 
Crowley (2007), Tracking of polar cap patches using data assimilation, J. Geophys. Res., 112, A05307, doi:10.1029/2005JA011597. Bust, G. S., G. Crowley, T. W. Garner, T. L. Gaussiran II, R. W. Meggs, C. N. Mitchell, P. S. J. Spencer, P. Yin, and B. Zapfe (2007) ,Four Dimensional GPS Imaging of Space-Weather Storms, Space Weather, 5, S02003, doi:10.1029/2006SW000237. Datta-Barua, S., G. S. Bust, G. Crowley, and N. Curtis (2009a), Neutral wind estimation from 4-D ionospheric electron density images, J. Geophys. Res., 114, A06317, doi:10.1029/2008JA014004. Datta-Barua, S., G. Bust, and G. Crowley (2009b), "Estimating Model Parameters from Ionospheric Reverse Engineering (EMPIRE)," presented at CEDAR, Santa Fe, New Mexico, July 1.

Datta-Barua, S.; Bust, G. S.; Curtis, N.; Reynolds, A.; Crowley, G.

2009-12-01

117

The Effect of Lidar Point Density on LAI Estimation

NASA Astrophysics Data System (ADS)

Leaf Area Index (LAI) is an important measure of forest health, biomass and carbon exchange, and is most commonly defined as the ratio of the leaf area to ground area. LAI is understood over large spatial scales and describes leaf properties over an entire forest, thus airborne imagery is ideal for capturing such data. Spectral metrics such as the normalized difference vegetation index (NDVI) have been used in the past for LAI estimation, but these metrics may saturate for high LAI values. Light detection and ranging (lidar) is an active remote sensing technology that emits light (most often at the wavelength 1064nm) and uses the return time to calculate the distance to intercepted objects. This yields information on three-dimensional structure and shape, which has been shown in recent studies to yield more accurate LAI estimates than NDVI. However, although lidar is a promising alternative for LAI estimation, minimum acquisition parameters (e.g. point density) required for accurate LAI retrieval are not yet well known. The objective of this study was to determine the minimum number of points per square meter that are required to describe the LAI measurements taken in-field. As part of a larger data collect, discrete lidar data were acquired by Kucera International Inc. over the Hemlock-Canadice State Forest, NY, USA in September 2012. The Leica ALS60 obtained a point density of 12 points per square meter and an effective ground sampling distance (GSD) of 0.15m. Up to three returns with intensities were recorded per pulse. As part of the same experiment, an AccuPAR LP-80 was used to collect LAI estimates at 25 sites on the ground. Sites were spaced approximately 80m apart and nine measurements were made in a grid pattern within a 20 x 20m site. Dominant species include Hemlock, Beech, Sugar Maple and Oak. This study has the benefit of very high-density data, which will enable a detailed map of intra-forest LAI.
Understanding LAI at fine scales may be particularly useful in forest inventory applications and tree health evaluations. However, such high-density data is often not available over large areas. In this study we progressively downsampled the high-density discrete lidar data and evaluated the effect on LAI estimation. The AccuPAR data was used as validation and results were compared to existing LAI metrics. This will enable us to determine the minimum point density required for airborne lidar LAI retrieval. Preliminary results show that the data may be substantially thinned to estimate site-level LAI. More detailed results will be presented at the conference.
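The progressive downsampling step can be sketched as random thinning of the point cloud to a target density (the function name and interface are illustrative assumptions):

```python
import numpy as np

def thin_point_cloud(points, target_density, area_m2, rng=None):
    """Randomly thin a lidar point cloud to a target density (points/m^2),
    mimicking a lower-density acquisition so the effect on LAI retrieval
    can be evaluated against the full-density result."""
    if rng is None:
        rng = np.random.default_rng(0)
    n_keep = min(int(target_density * area_m2), len(points))
    idx = rng.choice(len(points), size=n_keep, replace=False)
    return points[idx]
```

Repeating the LAI retrieval on progressively thinner clouds then traces accuracy as a function of point density.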

Cawse-Nicholson, K.; van Aardt, J. A.; Romanczyk, P.; Kelbe, D.; Bandyopadhyay, M.; Yao, W.; Krause, K.; Kampe, T. U.

2013-12-01

118

Transformation-based density estimation for weighted distributions

In this paper we consider the estimation of a density f on the basis of a random sample from a weighted distribution G with density g given by g(x) = w(x)f(x)/μw, where w(u) > 0 for all u and μw = ∫ w(u)f(u) du < ∞. A special case of this situation is that of length-biased sampling, where w(x) = x. In this
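A standard inverse-weighting sketch for this setting (the paper itself develops a transformation-based estimator): down-weight each observation by 1/w(X_i) and estimate μw from the identity E_g[1/w(X)] = 1/μw.

```python
import numpy as np

def weighted_kde(sample, w, x, h):
    """Kernel estimate of f from a w-biased sample: each observation is
    down-weighted by 1/w(X_i), and mu_w is estimated via the harmonic-mean
    identity E_g[1/w(X)] = 1/mu_w."""
    sample = np.asarray(sample, dtype=float)
    inv_w = 1.0 / w(sample)
    mu_w = 1.0 / inv_w.mean()
    u = (np.asarray(x, dtype=float)[:, None] - sample[None, :]) / h
    kern = np.exp(-0.5 * u**2) / (h * np.sqrt(2.0 * np.pi))
    return mu_w * (kern * inv_w[None, :]).mean(axis=1)
```

For length bias (w(x) = x) with f = Exp(1), the biased density g is Gamma(2, 1); the weighted estimate recovers f rather than g.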

Hammou El Barmi; Jeffrey S. Simonoff

1999-01-01

119

Estimating electric current densities in solar active regions

Electric currents in solar active regions are thought to provide the energy released via magnetic reconnection in solar flares. Vertical electric current densities $J_z$ at the photosphere may be estimated from vector magnetogram data, subject to substantial uncertainties. The values provide boundary conditions for nonlinear force-free modelling of active region magnetic fields. A method is presented for estimating values of $J_z$ taking into account uncertainties in vector magnetogram field values, and minimizing $J_z^2$ across the active region. The method is demonstrated using the boundary values of the field for a force-free twisted bipole, with the addition of noise at randomly chosen locations.
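The basic estimate behind these values is Ampère's law applied to the horizontal field components on the magnetogram grid, J_z = (1/μ0)(∂B_y/∂x − ∂B_x/∂y); a centred finite-difference sketch (the paper's uncertainty weighting and J_z² minimisation are not included):

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, T m / A

def vertical_current_density(bx, by, dx, dy):
    """J_z = (1/mu0) * (dBy/dx - dBx/dy) from the horizontal field components
    on a uniform grid (arrays indexed [y, x]), using centred differences
    in the interior and one-sided differences at the edges."""
    dby_dx = np.gradient(by, dx, axis=1)
    dbx_dy = np.gradient(bx, dy, axis=0)
    return (dby_dx - dbx_dy) / MU0
```

A linearly twisted field (Bx = -c·y, By = c·x) should give the uniform value J_z = 2c/μ0, which the finite differences reproduce exactly.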

Wheatland, M S

2015-01-01

120

Estimating Electric Current Densities in Solar Active Regions

NASA Astrophysics Data System (ADS)

Electric currents in solar active regions are thought to provide the energy released via magnetic reconnection in solar flares. Vertical electric current densities Jz at the photosphere may be estimated from vector magnetogram data, subject to substantial uncertainties. The values provide boundary conditions for nonlinear force-free modelling of active region magnetic fields. A method is presented for estimating values of Jz taking into account uncertainties in vector magnetogram field values, and minimising Jz^2 across the active region. The method is demonstrated using the boundary values of the field for a force-free twisted bipole, with the addition of noise at randomly chosen locations.

Wheatland, M. S.

2015-04-01

121

WAVELET-BASED FOVEATED IMAGE QUALITY MEASUREMENT FOR REGION OF INTEREST IMAGE CODING

… resolution images. These metrics are not appropriate for the assessment of ROI coded images, where space-variant … the human visual system (HVS) is highly space-variant in sampling, coding, processing and understanding

Wang, Zhou

122

Adapted Convex Optimization Algorithm for Wavelet-Based Dynamic PET Reconstruction

This work deals with Dynamic Positron Emission Tomography (PET) data reconstruction, considering … The effectiveness of this approach is shown with simulated dynamic PET data. Comparative results are also provided

Université Paris-Sud XI

123

Wavelet-Based Nonlinear Multiscale Decomposition Model for Electricity Load Forecasting

… that is not fully utilized. On the other hand, a forecast that is too low may lead to some revenue loss from sales. … Company (NEMMCO). KEYWORDS: Wavelet transform, load forecast, scale, resolution, time series

Murtagh, Fionn

124

Wavelet-based feature extraction using probabilistic finite state automata for pattern classification

Real-time data-driven pattern classification requires … (e.g., probabilistic finite state automata (PFSA)) capture the relevant information, embedded

Ray, Asok

125

Multiresolution analysis on zero-dimensional Abelian groups and wavelets bases

For a locally compact zero-dimensional group (G,+{sup .}), we build a multiresolution analysis and put forward an algorithm for constructing orthogonal wavelet bases. A special case is indicated when a wavelet basis is generated from a single function through contractions, translations and exponentiations. Bibliography: 19 titles.

Lukomskii, Sergei F [Saratov State University, Saratov (Russian Federation)

2010-06-29

126

During genome evolution, the two strands of the DNA double helix are not subjected to the same mutation patterns. This mutation bias is considered as a by-product of replicative and transcriptional activities. In this paper, we develop a wavelet-based methodology to analyze the DNA strand asymmetry profiles with the specific goal to extract the contributions associated with replication and transcription

Antoine Baker; Samuel Nicolay; Lamia Zaghloul; Yves d'Aubenton-Carafa; Claude Thermes; Benjamin Audit; Alain Arneodo

2010-01-01

127

A WAVELET-BASED PATTERN RECOGNITION ALGORITHM TO CLASSIFY POSTURAL TRANSITIONS IN HUMANS

… and workers in institutions equipped to take care of elderly people. To prevent overpopulation problems, researchers … to detect and reproduce movements of a part of the human body (a limb for instance), with uses in virtual

Boyer, Edmond

128

A Robust Adaptive Wavelet-based Method for Classification of Meningioma Histology Images

… of samples is an important problem in the domain of histological image classification. This issue is inherent to the field due to the high complexity of histology image data. A technique that provides good

Rajpoot, Nasir

129

Evaluation of a new wavelet-based compression algorithm for synthetic aperture radar images

In this paper we will discuss the performance of a new wavelet based embedded compression algorithm on synthetic aperture radar (SAR) image data. This new algorithm uses index coding on the indices of the discrete wavelet transform of the image data and provides an embedded code to successively approximate it. Results on compressing still images, medical images as well as

Jun Tian; Haitao Guo; Raymond O. Wells; C. Sidney Burrus; Jan E. Odegard

1996-01-01

130

Wavelet-based Functional Mixed Models (WFMM) is a new Bayesian … pancreatic cancer experiment: blood serum was taken from 139 pancreatic cancer patients and 117 controls. … either A375P human melanoma or PC3MM2 prostate cancer cell lines were implanted in either the brain

Morris, Jeffrey S.

131

Some Bayesian statistical techniques useful in estimating frequency and density

This paper presents some elementary applications of Bayesian statistics to problems faced by wildlife biologists. Bayesian confidence limits for frequency of occurrence are shown to be generally superior to classical confidence limits. Population density can be estimated from frequency data if the species is sparsely distributed relative to the size of the sample plot. For other situations, limits are developed based on the normal distribution and prior knowledge that the density is non-negative, which ensures that the lower confidence limit is non-negative. Conditions are described under which Bayesian confidence limits are superior to those calculated with classical methods; examples are also given on how prior knowledge of the density can be used to sharpen inferences drawn from a new sample.
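For frequency of occurrence, Bayesian limits follow from a Beta posterior; a sketch with a uniform prior (an illustrative choice, not necessarily the paper's) and quantiles taken from posterior draws:

```python
import numpy as np

def bayes_frequency_interval(hits, n, level=0.95, prior=(1.0, 1.0), draws=200_000):
    """Bayesian credible interval for a frequency of occurrence: with a
    Beta(a, b) prior and `hits` occupied plots out of n, the posterior is
    Beta(a + hits, b + n - hits); the interval endpoints are Monte Carlo
    quantiles of that posterior."""
    a, b = prior
    rng = np.random.default_rng(0)
    post = rng.beta(a + hits, b + n - hits, size=draws)
    lo, hi = np.quantile(post, [(1 - level) / 2, 1 - (1 - level) / 2])
    return lo, hi
```

Unlike classical limits, the interval behaves sensibly even with zero occurrences: the lower limit stays at (essentially) zero instead of going negative or degenerate.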

Johnson, D.H.

1977-01-01

132

Estimating black bear density using DNA data from hair snares

DNA-based mark-recapture has become a methodological cornerstone of research focused on bear species. The objective of such studies is often to estimate population size; however, doing so is frequently complicated by movement of individual bears. Movement affects the probability of detection and the assumption of closure of the population required in most models. To mitigate the bias caused by movement of individuals, population size and density estimates are often adjusted using ad hoc methods, including buffering the minimum polygon of the trapping array. We used a hierarchical, spatial capture-recapture model that contains explicit components for the spatial-point process that governs the distribution of individuals and their exposure to (via movement), and detection by, traps. We modeled detection probability as a function of each individual's distance to the trap and an indicator variable for previous capture to account for possible behavioral responses. We applied our model to a 2006 hair-snare study of a black bear (Ursus americanus) population in northern New York, USA. Based on the microsatellite marker analysis of collected hair samples, 47 individuals were identified. We estimated mean density at 0.20 bears/km2. A positive estimate of the indicator variable suggests that bears are attracted to baited sites; therefore, including a trap-dependence covariate is important when using bait to attract individuals. Bayesian analysis of the model was implemented in WinBUGS, and we provide the model specification. The model can be applied to any spatially organized trapping array (hair snares, camera traps, mist nets, etc.) to estimate density and can also account for heterogeneity and covariate information at the trap or individual level. © The Wildlife Society.

Gardner, B.; Royle, J.A.; Wegan, M.T.; Rainbolt, R.E.; Curtis, P.D.

2010-01-01

133

Volume estimation of multi-density nodules with thoracic CT

NASA Astrophysics Data System (ADS)

The purpose of this work was to quantify the effect of surrounding density on the volumetric assessment of lung nodules in a phantom CT study. Eight synthetic multi-density nodules were manufactured by enclosing spherical cores in larger spheres of double the diameter and with a different uniform density. Different combinations of outer/inner diameters (20/10 mm, 10/5 mm) and densities (100HU/-630HU, 10HU/-630HU, -630HU/100HU, -630HU/-10HU) were created. The nodules were placed within an anthropomorphic phantom and scanned with a 16-detector row CT scanner. Ten repeat scans were acquired using exposures of 20, 100, and 200 mAs, slice collimations of 16x0.75 mm and 16x1.5 mm, and a pitch of 1.2, and were reconstructed with varying slice thicknesses (three for each collimation) using two reconstruction filters (medium and standard). The volumes of the inner nodule cores were estimated from the reconstructed CT data using a matched-filter approach with templates modeling the characteristics of the multi-density objects. Volume estimation of the inner nodule was assessed using percent bias (PB) and the standard deviation of percent error (SPE). The true volumes of the inner nodules were measured using micro-CT imaging. Results show PB values ranging from -12.4 to 2.3% and SPE values ranging from 1.8 to 12.8%. This study indicates that the volume of multi-density nodules can be measured with relatively small percent bias (on the order of +/-12% or less) when accounting for the properties of surrounding densities. These findings can provide valuable information for understanding bias and variability in clinical measurements of nodules that also include local biological changes such as inflammation and necrosis.
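The two summary metrics can be computed directly from repeated measurements against a reference value. The sketch below assumes the usual definitions of percent bias and standard deviation of percent error; the exact formulas are not spelled out in the abstract:

```python
import numpy as np

def percent_error_stats(measured, true_volume):
    """Percent bias (PB) and standard deviation of percent error (SPE)
    for repeated volume measurements against a reference 'true' volume
    (e.g. from micro-CT). PB is the mean of the per-scan percent errors;
    SPE is their sample standard deviation."""
    pe = 100.0 * (np.asarray(measured, float) - true_volume) / true_volume
    return pe.mean(), pe.std(ddof=1)

# hypothetical repeat measurements (mm^3) of a core with true volume 100
pb, spe = percent_error_stats([98.0, 102.0, 101.0, 99.0], 100.0)
```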

Gavrielides, Marios A.; Li, Qin; Zeng, Rongping; Myers, Kyle J.; Sahiner, Berkman; Petrick, Nicholas

2014-03-01

134

Structural Reliability Using Probability Density Estimation Methods Within NESSUS

NASA Technical Reports Server (NTRS)

A reliability analysis studies a mathematical model of a physical system taking into account uncertainties of design variables, and common results are estimates of a response density, which also implies estimates of its parameters. Some common density parameters include the mean value, the standard deviation, and specific percentile(s) of the response, which are measures of central tendency, variation, and probability regions, respectively. Reliability analyses are important since the results can lead to different designs by calculating the probability of observing safe responses in each of the proposed designs. All of this is done at the expense of added computational time as compared to a single deterministic analysis, which will result in one value of the response out of the many that make up the density of the response. Sampling methods, such as Monte Carlo (MC) and Latin hypercube sampling (LHS), can be used to perform reliability analyses and can compute nonlinear response density parameters even if the response depends on many random variables. Hence, both methods are very robust; however, they are computationally expensive to use in the estimation of the response density parameters. Both methods are 2 of 13 stochastic methods contained within the Numerical Evaluation of Stochastic Structures Under Stress (NESSUS) program. NESSUS is a probabilistic finite element analysis (FEA) program that was developed through funding from NASA Glenn Research Center (GRC). It has the additional capability of being linked to other analysis programs; therefore, probabilistic fluid dynamics, fracture mechanics, and heat transfer are only a few of the possibilities with this software. The LHS method is the newest addition to the stochastic methods within NESSUS. Part of this work was to enhance NESSUS with the LHS method.
The new LHS module is complete, has been successfully integrated with NESSUS, and has been used to study four different test cases proposed by the Society of Automotive Engineers (SAE). The test cases compare different probabilistic methods within NESSUS, because it is important that a user can have confidence that estimates of the stochastic parameters of a response will be within an acceptable error limit. For each response, the mean, standard deviation, and 0.99 percentile are repeatedly estimated, which allows confidence statements to be made for each parameter estimated and for each method. Thus, the ability of several stochastic methods to efficiently and accurately estimate density parameters is compared using four valid test cases. While all of the reliability methods performed quite well, the new LHS module within NESSUS was found to have a lower estimation error than MC when both were used to estimate the mean, standard deviation, and 0.99 percentile of the four different stochastic responses. LHS also required fewer calculations than MC to obtain low-error answers with high confidence. It can therefore be stated that NESSUS is an important reliability tool that offers a variety of sound probabilistic methods, and the new LHS module is a valuable enhancement of the program.
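The stratification that gives LHS its efficiency advantage over plain MC can be sketched in a few lines. This is a generic LHS construction, not code from NESSUS:

```python
import numpy as np

def latin_hypercube(n, dim, rng):
    """One Latin hypercube sample of size n in [0,1)^dim: along every axis,
    each of the n equal-width strata contains exactly one point, so marginal
    coverage is far more even than with plain Monte Carlo draws."""
    # one uniform draw inside each stratum, per dimension
    u = (rng.random((n, dim)) + np.arange(n)[:, None]) / n
    for j in range(dim):
        u[:, j] = rng.permutation(u[:, j])   # decouple the axes
    return u

rng = np.random.default_rng(0)
x = latin_hypercube(8, 2, rng)
```

Feeding such samples through a response function and taking sample moments gives the density-parameter estimates (mean, standard deviation, percentiles) discussed above, typically with lower variance than MC at the same sample size.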

Chamis, Christos C. (Technical Monitor); Godines, Cody Ric

2003-01-01

135

Combining Ratio Estimation for Low Density Parity Check (LDPC) Coding

NASA Technical Reports Server (NTRS)

The Low Density Parity Check (LDPC) code decoding algorithm makes use of a scaled received signal derived from maximizing the log-likelihood ratio of the received signal. The scaling factor (often called the combining ratio) in an AWGN channel is the ratio of the signal amplitude to the noise variance. Accurately estimating this ratio has been shown to provide as much as 0.6 dB of decoding performance gain. This presentation briefly describes three methods for estimating the combining ratio: a Pilot-Guided estimation method, a Blind estimation method, and a simulation-based look-up table. In the Pilot-Guided estimation method, the maximum likelihood estimate of the signal amplitude is the mean inner product of the received sequence and the known sequence, the attached synchronization marker (ASM), and the signal variance estimate is the difference between the mean of the squared received sequence and the square of the signal amplitude. This method has the advantage of simplicity at the expense of latency, since several frames' worth of ASMs must be accumulated. The Blind estimation method's maximum likelihood estimator is the average of the product of the received signal with the hyperbolic tangent of the product of the combining ratio and the received signal. The root of this equation can be determined by an iterative binary search between 0 and 1 after normalizing the received sequence. This method has the benefit of requiring only one frame of data to estimate the combining ratio, which is good for faster-changing channels compared to the previous method; however, it is computationally expensive. The final method uses a look-up table based on prior simulation results to determine the signal amplitude and noise variance. In this method the received mean signal strength is controlled to a constant soft-decision value. The magnitude of the deviation is averaged over a predetermined number of samples.
This value is referenced in a look-up table to determine the combining ratio that prior simulation associated with the average magnitude of the deviation. This method is more complicated than the Pilot-Guided method due to the gain control circuitry, but does not have the real-time computational complexity of the Blind estimation method. Each of these methods can be used to provide an accurate estimate of the combining ratio, and the final selection of the estimation method depends on other design constraints.
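The pilot-guided estimator described above reduces to two sample moments. The sketch below assumes a BPSK (+/-1) ASM and follows the abstract's definition of the combining ratio as amplitude over noise variance:

```python
import numpy as np

def pilot_guided_estimate(received, asm_bits):
    """Pilot-guided moment estimates: the ML amplitude is the mean inner
    product of the received samples with the known +/-1 ASM symbols, and
    the noise variance is mean(r^2) - amplitude^2."""
    r = np.asarray(received, float)
    s = np.asarray(asm_bits, float)
    amp = np.mean(r * s)                 # signal amplitude estimate
    var = np.mean(r ** 2) - amp ** 2     # noise variance estimate
    return amp, var, amp / var           # combining ratio ~ A / sigma^2

# synthetic AWGN channel: amplitude 0.8, noise sigma 0.5
rng = np.random.default_rng(1)
asm = rng.choice([-1.0, 1.0], size=20000)
rx = 0.8 * asm + rng.normal(0.0, 0.5, size=20000)
amp, var, ratio = pilot_guided_estimate(rx, asm)
```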

Mahmoud, Saad; Hi, Jianjun

2012-01-01

136

A projection and density estimation method for knowledge discovery.

A key ingredient of modern data analysis is probability density estimation. However, it is well known that the curse of dimensionality prevents a proper estimation of densities in high dimensions. The problem is typically circumvented by using a fixed set of assumptions about the data, e.g., by assuming partial independence of features, data on a manifold, or a customized kernel. These fixed assumptions limit the applicability of a method. In this paper we propose a framework that uses a flexible set of assumptions instead. It allows a model to be tailored to various problems by means of 1d-decompositions. The approach achieves a fast runtime and is not limited by the curse of dimensionality, as all estimations are performed in 1d-space. The wide range of applications is demonstrated on two very different real-world examples. The first is a data mining software package that allows the fully automatic discovery of patterns. The software is publicly available for evaluation. As a second example, an image segmentation method is realized. It achieves state-of-the-art performance on a benchmark dataset although it uses only a fraction of the training data and very simple features. PMID:23049675

Stanski, Adam; Hellwich, Olaf

2012-01-01

137

When the true mixing density is known to be continuous, the maximum likelihood estimate of the mixing density does not provide a satisfying answer due to its degeneracy. Estimation of mixing densities is a well-known ill-posed indirect problem. In this article, we propose to estimate the mixing density by maximizing a penalized likelihood and call the resulting estimate the

Lei Liu; Michael Levine; Yu Zhu

2009-01-01

138

Effect of Random Clustering on Surface Damage Density Estimates

Identification and spatial registration of laser-induced damage relative to incident fluence profiles is often required to characterize the damage properties of laser optics near damage threshold. Of particular interest in inertial confinement laser systems are large aperture beam damage tests (>1 cm²) where the number of initiated damage sites for φ > 14 J/cm² can approach 10⁵-10⁶, requiring automatic microscopy counting to locate and register individual damage sites. However, as was shown for the case of bacteria counting in biology decades ago, random overlapping or 'clumping' prevents accurate counting of Poisson-distributed objects at high densities, and must be accounted for if the underlying statistics are to be understood. In this work we analyze the effect of random clumping on damage initiation density estimates at fluences above damage threshold. The parameter ψ = aρ = ρ/ρ₀, where a = 1/ρ₀ is the mean damage site area and ρ is the mean number density, is used to characterize the onset of clumping, and approximations based on a simple model are used to derive an expression for clumped damage density vs. fluence and damage site size. The influence of the uncorrected ρ vs. φ curve on damage initiation probability predictions is also discussed.
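The undercounting mechanism is easy to reproduce numerically: scatter Poisson-distributed sites and count the distinct connected clumps an automatic counter would register. This toy simulation (union-find over overlapping circular sites) illustrates the clumping effect only; it is not the paper's analytical model:

```python
import numpy as np

def count_clumps(points, site_radius):
    """Count distinct 'clumps' formed when circular damage sites of a given
    radius overlap; two sites merge when their centers are closer than one
    site diameter. Implemented with union-find with path halving."""
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if np.hypot(*(points[i] - points[j])) < 2.0 * site_radius:
                parent[find(i)] = find(j)
    return len({find(i) for i in range(n)})

rng = np.random.default_rng(2)
pts = rng.random((300, 2))            # 300 Poisson-like sites in a unit square
observed = count_clumps(pts, 0.02)    # apparent count < true count of 300
```

As the true density (or site size) grows, the apparent clump count falls further below the true site count, which is exactly the bias the uncorrected ρ vs. φ curve inherits.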

Matthews, M J; Feit, M D

2007-10-29

139

Estimation of probability densities using scale-free field theories

NASA Astrophysics Data System (ADS)

The question of how best to estimate a continuous probability density from finite data is an intriguing open problem at the interface of statistics and physics. Previous work has argued that this problem can be addressed in a natural way using methods from statistical field theory. Here I describe results that allow this field-theoretic approach to be rapidly and deterministically computed in low dimensions, making it practical for use in day-to-day data analysis. Importantly, this approach does not impose a privileged length scale for smoothness of the inferred probability density, but rather learns a natural length scale from the data due to the tradeoff between goodness of fit and an Occam factor. Open source software implementing this method in one and two dimensions is provided.

Kinney, Justin B.

2014-07-01

140

BACKGROUND: Plotless density estimators are those that are based on distance measures rather than counts per unit area (quadrats or plots) to estimate the density of some usually stationary event, e.g. burrow openings, damage to plant stems, etc. These estimators typically use distance measures between events and from random points to events to derive an estimate of density. The error
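As a concrete member of this family, the maximum likelihood point-to-nearest-event estimator under complete spatial randomness is D = m / (π Σ rᵢ²), with rᵢ the distance from each of m random points to its nearest event. This standard estimator is given for illustration and is not necessarily the specific variant evaluated in the study:

```python
import numpy as np

def point_to_event_density(event_xy, sample_xy):
    """Basic plotless (distance-based) density estimate: with distances r_i
    from m random sample points to the nearest event, the ML estimator
    under complete spatial randomness is m / (pi * sum(r_i^2)).
    Edge effects are ignored in this sketch."""
    d2 = ((sample_xy[:, None, :] - event_xy[None, :, :]) ** 2).sum(-1)
    r2 = d2.min(axis=1)               # squared nearest-event distances
    return len(sample_xy) / (np.pi * r2.sum())

rng = np.random.default_rng(3)
events = rng.random((500, 2))         # true density: 500 events per unit area
samples = rng.random((200, 2))        # random sample points
d_hat = point_to_event_density(events, samples)
```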

Neil A White; Richard M Engeman; Robert T Sugihara; Heather W Krupa

2008-01-01

141

Estimation of Volumetric Breast Density from Digital Mammograms

NASA Astrophysics Data System (ADS)

Mammographic breast density (MBD) is a strong risk factor for developing breast cancer. MBD is typically estimated by manually selecting the area occupied by the dense tissue on a mammogram. There is interest in measuring the volume of dense tissue, or volumetric breast density (VBD), as it could potentially be a stronger risk factor. This dissertation presents and validates an algorithm to measure the VBD from digital mammograms. The algorithm is based on an empirical calibration of the mammography system, supplemented by physical modeling of x-ray imaging that includes the effects of beam polychromaticity, scattered radiation, the anti-scatter grid, and detector glare. It also includes a method to estimate the compressed breast thickness as a function of the compression force, and a method to estimate the thickness of the breast outside of the compressed region. The algorithm was tested on 26 simulated mammograms obtained from computed tomography images, themselves deformed to mimic the effects of compression. This allowed the determination of the baseline accuracy of the algorithm. The algorithm was also used on 55 087 clinical digital mammograms, which allowed for the determination of the general characteristics of VBD and breast volume, as well as their variation as a function of age and time. The algorithm was also validated against a set of 80 magnetic resonance images, and compared against the area method on 2688 images. A preliminary study comparing the association of breast cancer risk with VBD and MBD was also performed, indicating that VBD is a stronger risk factor. The algorithm was found to be accurate, generating quantitative density measurements rapidly and automatically. It can be extended to any digital mammography system, provided that the compression thickness of the breast can be determined accurately.

Alonzo-Proulx, Olivier

142

The Gaussian mixture model (GMM) is a flexible and powerful density clustering tool. However, its application to medical image segmentation faces some difficulties. First, estimation of the number of components is still an open question. Second, it is slow for large medical images. Moreover, the GMM is sensitive to noise. In this paper, the

Cong-Hua Xie; Yu-Qing Song; Jian-Mei Chen

2011-01-01

143

Estimating Kendall's tau for bivariate interval censored data with a smooth estimate of the density

Measures of association for bivariate interval censored data have not yet been studied extensively. Betensky and Finkelstein (3) proposed to calculate Kendall's coefficient of concordance using a multiple imputation technique. However, this method is quite computationally intensive. Our approach is based on two steps. First, we fit a bivariate smooth estimate of the density of log-event times on a fixed grid.

Emmanuel Lesaffre; Kris Bogaerts

144

Change-in-ratio density estimator for feral pigs is less biased than closed mark-recapture estimates

Closed-population capture-mark-recapture (CMR) methods can produce biased density estimates for species with low or heterogeneous detection probabilities. In an attempt to address such biases, we developed a density-estimation method based on the change in ratio (CIR) of survival between two populations where survival, calculated using an open-population CMR model, is known to differ. We used our method to estimate density for a feral pig (Sus scrofa) population on Fort Benning, Georgia, USA. To assess its validity, we compared it to an estimate of the minimum density of pigs known to be alive and two estimates based on closed-population CMR models. Comparison of the density estimates revealed that the CIR estimator produced a density estimate with low precision that was reasonable with respect to minimum known density. By contrast, density point estimates using the closed-population CMR models were less than the minimum known density, consistent with biases created by low and heterogeneous capture probabilities for species like feral pigs that may occur in low density or are difficult to capture. Our CIR density estimator may be useful for tracking broad-scale, long-term changes in species, such as large cats, for which closed CMR models are unlikely to work. © CSIRO 2008.

Hanson, L.B.; Grand, J.B.; Mitchell, M.S.; Jolley, D.B.; Sparklin, B.D.; Ditchkoff, S.S.

2008-01-01

145

Application of Wavelet Based Denoising for T-Wave Alternans Analysis in High Resolution ECG Maps

NASA Astrophysics Data System (ADS)

T-wave alternans (TWA) allows for identification of patients at an increased risk of ventricular arrhythmia. Stress test, which increases heart rate in controlled manner, is used for TWA measurement. However, the TWA detection and analysis are often disturbed by muscular interference. The evaluation of wavelet based denoising methods was performed to find optimal algorithm for TWA analysis. ECG signals recorded in twelve patients with cardiac disease were analyzed. In seven of them significant T-wave alternans magnitude was detected. The application of wavelet based denoising method in the pre-processing stage increases the T-wave alternans magnitude as well as the number of BSPM signals where TWA was detected.

Janusek, D.; Kania, M.; Zaczek, R.; Zavala-Fernandez, H.; Zbie?, A.; Opolski, G.; Maniewski, R.

2011-01-01

146

Wavelet-based efficient simulation of electromagnetic transients in a lightning protection system

In this paper, a wavelet-based efficient simulation of electromagnetic transients in lightning protection systems (LPS) is presented. The analysis of electromagnetic transients is carried out by employing the thin-wire electric field integral equation in the frequency domain. In order to easily handle the boundary conditions of the integral equation, semiorthogonal compactly supported spline wavelets, constructed for the bounded interval [0,1],

Guido Ala; Maria L. Di Silvestre; Elisa Francomano; Adele Tortorici

2003-01-01

147

Wavelet-based Contourlet Coding Using an SPIHT-like Algorithm

In this paper, we propose a new non-linear image approximation method that decomposes images both radially and angularly. Our approximation is based on two stages of filter banks that are non-redundant and perfect-reconstruction, and therefore lead to an overall non-redundant, perfect-reconstruction transform. We show that this transform, which we call the Wavelet-Based Contourlet Transform (WBCT), is

Ramin Eslami; Hayder Radha

148

Wavelet-based image restoration for compact X-ray microscopy

Compact water-window X-ray microscopy with short exposure times will always be limited in photons owing to sources of limited power in combination with low-efficiency X-ray optics. Thus, it is important to investigate methods for improving the signal-to-noise ratio in the images. We show that a wavelet-based denoising procedure significantly improves the quality and contrast in compact X-ray

H. Stollberg; J. Boutet De Monvel; A. Holmberg; H. M. Hertz

2003-01-01

149

Wavelet-based fuzzy reasoning approach to power-quality disturbance recognition

This paper proposes a wavelet-based extended fuzzy reasoning approach to power-quality disturbance recognition and identification. To extract power-quality disturbance features, the energy distribution of the wavelet part at each decomposition level is introduced and its calculation mathematically established. Based on these features, rule bases are generated for extended fuzzy reasoning. The power-quality disturbance features are finally mapped into a real

T. X. Zhu; S. K. Tso; K. L. Lo

2004-01-01

150

Digital implementation of a wavelet-based event detector for cardiac pacemakers

This paper presents a digital hardware implementation of a novel wavelet-based event detector suitable for the next generation of cardiac pacemakers. Significant power savings are achieved by introducing a second operation mode that shuts down 2/3 of the hardware for long time periods when the pacemaker patient is not exposed to noise, while not degrading performance. Due to a 0.13-μm

Joachim Neves Rodrigues; Thomas Olsson; Leif Sörnmo; Viktor Öwall

2005-01-01

151

VSNR: A Wavelet-Based Visual Signal-to-Noise Ratio for Natural Images

This paper presents an efficient metric for quantifying the visual fidelity of natural images based on near-threshold and suprathreshold properties of human vision. The proposed metric, the visual signal-to-noise ratio (VSNR), operates via a two-stage approach. In the first stage, contrast thresholds for detection of distortions in the presence of natural images are computed via wavelet-based models of visual masking

Damon M. Chandler; Sheila S. Hemami

2007-01-01

152

Density estimation on multivariate censored data with optional Pólya tree

Analyzing the failure times of multiple events is of interest in many fields. Estimating the joint distribution of the failure times in a non-parametric way is not straightforward because some failure times are often right-censored and only known to be greater than observed follow-up times. Although the problem has been studied, there is no universally optimal solution, and it remains challenging and important to provide alternatives that may be more suitable than existing ones in specific settings. Problems with existing methods include not only infeasible computation but also lack of optimality and possible non-monotonicity of the estimated survival function. In this paper, we propose a non-parametric Bayesian approach for directly estimating the density function of multivariate survival times, where the prior is constructed based on the optional Pólya tree. We investigate several theoretical aspects of the procedure and derive an efficient iterative algorithm for implementing the Bayesian procedure. The empirical performance of the method is examined via extensive simulation studies. Finally, we present a detailed analysis using the proposed method of the relationship among organ recovery times in severely injured patients, suggesting medically interesting information that can be further pursued in clinics. PMID:23902636

Seok, Junhee; Tian, Lu; Wong, Wing H.

2014-01-01

153

Multispectral Remote Sensing Image Classification Using Wavelet Based Features

Multispectral remotely sensed images contain information over a large range of frequencies, and these frequencies change over different regions (irregular, or frequency-variant, behavior of the signal), which needs to be estimated properly for an improved classification [1, 2, 3]. Multispectral remote sensing (RS) image data are basically complex in nature, which have both spectral features with

Saroj K. Meher; Bhavan Uma Shankar; Ashish Ghosh

154

NASA Astrophysics Data System (ADS)

In this work, we have applied a Wavelet Based Fractal Analysis (WBFA) to well logs and seismic data at the Teapot Dome Field, Natrona County, Wyoming, USA, trying to characterize a reservoir using fractal parameters, such as the intercept (b), slope (m) and fractal dimension (D), and to correlate them with the sedimentation processes and/or the lithological characteristics of the area. The WBFA was first applied to the available logs (Gamma Ray, Spontaneous Potential, Density, Neutron Porosity and Deep Resistivity) from 20 wells located at sectors 27, 28, 33 and 34 of the 3D seismic of the Teapot Dome field. The WBFA was also applied to the calculated water saturation (Sw) curve. As a second step, the method was used to analyze a set of seismic traces close to the studied wells, extracted from the 3D seismic data. Maps of the fractal parameters were obtained. A spectral analysis of the seismic data was also performed in order to identify seismic facies and to establish a possible correlation with the fractal results. The WBFA results obtained for the well logs indicate a correlation between fractal parameters and the lithological content in the studied interval (i.e. top-base of the Frontier Formation). Particularly, for the Gamma Ray logs the fractal dimension D can be correlated with the sand-shale content: values of D lower than 0.9 are observed for those wells with more sand content (sandy wells); values of D between 0.9 and 1.1 correspond to wells where the sand packs present numerous inter-bedded shale layers (sandy-shale wells); finally, wells with more shale content (shaly wells) have D values greater than 1.1. The analysis of the seismic traces allowed the discrimination of shaly from sandy zones. The D map generated for the seismic traces indicates that this value can be associated with the shale content in the area. The iso-frequency maps obtained from the seismic spectral analysis show trends associated with the lithology of the field.
These trends are similar to those observed in the maps of the fractal parameters, indicating that both analyses respond to lithological and/or sedimentation features in the area.
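The intercept/slope pair at the heart of a wavelet-based fractal analysis comes from fitting a line to log-variance of detail coefficients versus decomposition level. The sketch below uses a plain Haar transform for the decomposition and does not reproduce the authors' wavelet choice or their exact formula for D, which are not given in the abstract:

```python
import numpy as np

def haar_log_variance_fit(signal, levels=6):
    """Fit a line to log2(variance of Haar detail coefficients) versus
    decomposition level. The slope (m) and intercept (b) play the roles
    described above; a fractal dimension can then be derived from m."""
    x = np.asarray(signal, float)
    lev, logvar = [], []
    for j in range(1, levels + 1):
        d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # Haar detail at level j
        x = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # Haar approximation
        lev.append(j)
        logvar.append(np.log2(np.var(d)))
        if len(x) < 2:
            break
    m, b = np.polyfit(lev, logvar, 1)
    return m, b

rng = np.random.default_rng(7)
noise = rng.normal(size=16384)
m_noise, _ = haar_log_variance_fit(noise)            # flat spectrum: slope near 0
m_walk, _ = haar_log_variance_fit(np.cumsum(noise))  # persistent signal: steeper slope
```

Rougher (noise-like) logs give flatter log-variance curves than smoother, persistent ones, which is the contrast that lets the slope separate sandy from shaly intervals.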

García, Alejandro; Aldana, Milagrosa; Cabrera, Ana

2013-04-01

155

Comparative study of different wavelet based neural network models for rainfall-runoff modeling

NASA Astrophysics Data System (ADS)

The use of wavelet transformation in rainfall-runoff modeling has become popular because of its ability to simultaneously deal with both the spectral and the temporal information contained within time series data. The selection of an appropriate wavelet function plays a crucial role in the successful implementation of wavelet-based rainfall-runoff artificial neural network models, as it can lead to further enhancement of model performance. The present study is therefore conducted to evaluate the effects of 23 mother wavelet functions on the performance of hybrid wavelet-based artificial neural network rainfall-runoff models. The hybrid Multilayer Perceptron Neural Network (MLPNN) and Radial Basis Function Neural Network (RBFNN) models are developed in this study using both the continuous and the discrete wavelet transformation types. The performances of the 92 developed wavelet-based neural network models with all 23 mother wavelet functions are compared with those of neural network models developed without wavelet transformations. It is found that, among all the models tested, the discrete wavelet transform multilayer perceptron neural network (DWTMLPNN) and the discrete wavelet transform radial basis function (DWTRBFNN) models at decomposition level nine with the db8 wavelet function have the best performance. The results also show that pre-processing the input rainfall data by wavelet transformation can significantly increase the performance of the MLPNN and RBFNN rainfall-runoff models.
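The decomposition step that feeds such hybrid models can be sketched with an undecimated (à trous) Haar transform, whose sub-series sum exactly back to the input series. This is a generic preprocessing sketch, not the study's db8/level-nine pipeline:

```python
import numpy as np

def atrous_haar(x, levels=3):
    """Undecimated (a trous) Haar decomposition of a hydrological series:
    splits x into detail sub-series D1..DL plus a smooth residual, all at
    the original length, which sum back to x exactly. Circular boundary
    handling (np.roll) is used for simplicity."""
    x = np.asarray(x, float)
    smooth, details = x.copy(), []
    for j in range(levels):
        lag = 2 ** j
        avg = 0.5 * (smooth + np.roll(smooth, lag))  # dyadic moving average
        details.append(smooth - avg)
        smooth = avg
    return details, smooth

rng = np.random.default_rng(4)
rain = rng.gamma(2.0, 1.0, size=512)       # synthetic daily rainfall series
details, smooth = atrous_haar(rain, levels=3)
```

The sub-series (rather than the raw series) then become the inputs of the MLPNN/RBFNN-style model, which is the "wavelet based" part of the hybrid approach.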

Shoaib, Muhammad; Shamseldin, Asaad Y.; Melville, Bruce W.

2014-07-01

156

WaVPeak: picking NMR peaks through wavelet-based smoothing and volume-based filtering

Motivation: Nuclear magnetic resonance (NMR) has been widely used as a powerful tool to determine the 3D structures of proteins in vivo. However, the post-spectra processing stage of NMR structure determination usually involves a tremendous amount of time and expert knowledge, which includes peak picking, chemical shift assignment and structure calculation steps. Detecting accurate peaks from the NMR spectra is a prerequisite for all following steps, and thus remains a key problem in automatic NMR structure determination. Results: We introduce WaVPeak, a fully automatic peak detection method. WaVPeak first smoothes the given NMR spectrum by wavelets. The peaks are then identified as the local maxima. The false positive peaks are filtered out efficiently by considering the volume of the peaks. WaVPeak has two major advantages over the state-of-the-art peak-picking methods. First, through wavelet-based smoothing, WaVPeak does not eliminate any data point in the spectra. Therefore, WaVPeak is able to detect weak peaks that are embedded in the noise level. NMR spectroscopists need the most help isolating these weak peaks. Second, WaVPeak estimates the volume of the peaks to filter the false positives. This is more reliable than intensity-based filters that are widely used in existing methods. We evaluate the performance of WaVPeak on the benchmark set proposed by PICKY (Alipanahi et al., 2009), one of the most accurate methods in the literature. The dataset comprises 32 2D and 3D spectra from eight different proteins. Experimental results demonstrate that WaVPeak achieves an average of 96%, 91%, 88%, 76% and 85% recall on 15N-HSQC, HNCO, HNCA, HNCACB and CBCA(CO)NH, respectively. When the same number of peaks are considered, WaVPeak significantly outperforms PICKY. Availability: WaVPeak is an open source program. The source code and two test spectra of WaVPeak are available at http://faculty.kaust.edu.sa/sites/xingao/Pages/Publications.aspx. 
The online server is under construction. Contact: statliuzhi@xmu.edu.cn; ahmed.abbas@kaust.edu.sa; majing@ust.hk; xin.gao@kaust.edu.sa PMID:22328784

Liu, Zhi; Abbas, Ahmed; Jing, Bing-Yi; Gao, Xin

2012-01-01

157

Wavelet-based Adaptive Mesh Refinement Method for Global Atmospheric Chemical Transport Modeling

NASA Astrophysics Data System (ADS)

Numerical modeling of global atmospheric chemical transport presents enormous computational difficulties, associated with simulating a wide range of time and spatial scales. These difficulties are exacerbated by the fact that hundreds of chemical species and thousands of chemical reactions are typically used to describe the chemical kinetic mechanism. These computational requirements very often force researchers to use relatively crude quasi-uniform numerical grids with inadequate spatial resolution, which introduces significant numerical diffusion into the system. It was shown that this spurious diffusion significantly distorts the pollutant mixing and transport dynamics for typically used grid resolutions. These numerical difficulties have to be systematically addressed, considering that the demand for fast, high-resolution chemical transport models will be exacerbated over the next decade by the need to interpret satellite observations of tropospheric ozone and related species. In this study we offer a dynamically adaptive multilevel Wavelet-based Adaptive Mesh Refinement (WAMR) method for numerical modeling of atmospheric chemical evolution equations. The adaptive mesh refinement is performed by adding and removing finer levels of resolution in locations of fine-scale development and in locations of smooth solution behavior, respectively. The algorithm is based on the mathematically well-established wavelet theory. This allows us to provide error estimates of the solution that are used, in conjunction with an appropriate threshold criterion, to adapt the non-uniform grid. Other essential features of the numerical algorithm include: an efficient wavelet spatial discretization that minimizes the number of degrees of freedom for a prescribed accuracy, a fast algorithm for computing wavelet amplitudes, and efficient and accurate derivative approximations on an irregular grid. 
The method has been tested on a variety of benchmark problems, including numerical simulation of transpacific traveling pollution plumes. The generated pollution plumes are diluted by turbulent mixing as they are advected downwind. Despite this dilution, it was recently discovered that pollution plumes in the remote troposphere can preserve their identity as well-defined structures for two weeks or more as they circle the globe. Present global chemical transport models (CTMs) implemented on quasi-uniform grids are incapable of reproducing these layered structures, owing to the high numerical plume dilution caused by numerical diffusion combined with the non-uniformity of atmospheric flow. It is shown that WAMR solutions of accuracy comparable to conventional numerical techniques are obtained with more than an order-of-magnitude reduction in the number of grid points; the adaptive algorithm is therefore capable of producing accurate results at a relatively low computational cost. The numerical simulations demonstrate that the WAMR algorithm applied to the traveling plume problem accurately reproduces the plume dynamics, unlike conventional numerical methods that utilize quasi-uniform numerical grids.
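The compression that underlies wavelet adaptivity can be seen in a toy experiment: for a smooth field, most Haar detail coefficients fall below a threshold and can be dropped, which is the property an adaptive method exploits to prune grid points. This illustrates the principle only and is not the WAMR code:

```python
import numpy as np

def haar_compress_count(x, threshold):
    """Full Haar transform of x; return (coefficients above threshold,
    total coefficients). Smooth data concentrate energy in few coefficients,
    so 'kept' is small; noisy data keep almost everything."""
    x = np.asarray(x, float)
    kept, total = 0, 0
    while len(x) > 1:
        d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail band
        x = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # next approximation
        kept += int(np.sum(np.abs(d) > threshold))
        total += len(d)
    return kept, total + 1   # +1 for the final approximation coefficient

t = np.linspace(0.0, 1.0, 1024)
smooth_field = np.sin(2 * np.pi * t)     # smooth "concentration" profile
kept, total = haar_compress_count(smooth_field, threshold=1e-2)
```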

Rastigejev, Y.

2011-12-01

158

Wavelet-Based Real-Time Diagnosis of Complex Systems

NASA Technical Reports Server (NTRS)

A new method of robust, autonomous real-time diagnosis of a time-varying complex system (e.g., a spacecraft, an advanced aircraft, or a process-control system) is presented here. It is based upon the characterization and comparison of (1) the execution of software, as reported by discrete data, and (2) data from sensors that monitor the physical state of the system, such as performance sensors or similar quantitative time-varying measurements. By taking account of the relationship between execution of, and the responses to, software commands, this method satisfies a key requirement for robust autonomous diagnosis, namely, ensuring that control is maintained and followed. Such monitoring of control software requires that estimates of the state of the system, as represented within the control software itself, are representative of the physical behavior of the system. In this method, data from sensors and discrete command data are analyzed simultaneously and compared to determine their correlation. If the sensed physical state of the system differs from the software estimate (see figure) or if the system fails to perform a transition as commanded by software, or such a transition occurs without the associated command, the system has experienced a control fault. This method provides a means of detecting such divergent behavior and automatically generating an appropriate warning.

Gulati, Sandeep; Mackey, Ryan

2003-01-01

159

An Adaptive Background Subtraction Method Based on Kernel Density Estimation

In this paper, a pixel-based background modeling method, which uses nonparametric kernel density estimation (KDE), is proposed. To reduce the burden of image storage, we modify the original KDE method by initializing it with the first frame and updating it at every subsequent frame, controlling the learning rate according to the situation. We apply an adaptive threshold method based on image changes to effectively subtract dynamic backgrounds. The devised scheme allows the proposed method to automatically adapt to various environments and effectively extract the foreground. The method presented here exhibits good performance and is suitable for dynamic background environments. The algorithm is tested on various video sequences and compared with other state-of-the-art background subtraction methods to verify its performance.
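A minimal per-pixel sketch of this kind of model is shown below. The class name, sample count, bandwidth, thresholds, and update rule are illustrative assumptions, not the paper's parameters:

```python
import math
import random

class PixelKDE:
    """Per-pixel background model: Gaussian KDE over recent intensities."""

    def __init__(self, first_value, n_samples=10, bandwidth=10.0):
        # Initialize from the first frame only, instead of a long history buffer.
        self.samples = [first_value] * n_samples
        self.h = bandwidth

    def density(self, x):
        """Gaussian kernel density of intensity x under the stored samples."""
        norm = 1.0 / (len(self.samples) * self.h * math.sqrt(2 * math.pi))
        return norm * sum(math.exp(-0.5 * ((x - s) / self.h) ** 2)
                          for s in self.samples)

    def is_foreground(self, x, threshold=1e-3):
        return self.density(x) < threshold

    def update(self, x, learning_rate=0.1):
        # With probability = learning_rate, replace a random stored sample,
        # so the model slowly adapts to gradual background changes.
        if random.random() < learning_rate:
            self.samples[random.randrange(len(self.samples))] = x

px = PixelKDE(first_value=100)
print(px.is_foreground(102))  # near the background model
print(px.is_foreground(200))  # far from the background model
```

A full implementation keeps one such model per pixel and adapts both the threshold and the learning rate, as the abstract describes.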

Lee, Jeisung; Park, Mignon

2012-01-01

160

Estimating Foreign-Object-Debris Density from Photogrammetry Data

NASA Technical Reports Server (NTRS)

Within the first few seconds after launch of STS-124, debris traveling vertically near the vehicle was captured on two 16-mm film cameras surrounding the launch pad. One particular piece of debris caught the attention of engineers investigating the release of the flame trench fire bricks. The question to be answered was if the debris was a fire brick, and if it represented the first bricks that were ejected from the flame trench wall, or was the object one of the pieces of debris normally ejected from the vehicle during launch. If it was typical launch debris, such as SRB throat plug foam, why was it traveling vertically and parallel to the vehicle during launch, instead of following its normal trajectory, flying horizontally toward the north perimeter fence? By utilizing the Runge-Kutta integration method for velocity and the Verlet integration method for position, a method that suppresses trajectory computational instabilities due to noisy position data was obtained. This combination of integration methods provides a means to extract the best estimate of drag force and drag coefficient under the non-ideal conditions of limited position data. This integration strategy leads immediately to the best possible estimate of object density, within the constraints of unknown particle shape. These types of calculations do not exist in readily available off-the-shelf simulation software, especially where photogrammetry data is needed as an input.

Long, Jason; Metzger, Philip; Lane, John

2013-01-01

161

Wavelet-based stereo images reconstruction using depth images

NASA Astrophysics Data System (ADS)

It is believed by many that three-dimensional (3D) television will be the next logical development toward a more natural and vivid home entertainment experience. While the classical 3D approach requires the transmission of two video streams, one for each view, 3D TV systems based on depth-image-based rendering (DIBR) require a single stream of monoscopic images and a second stream of associated images, usually termed depth images or depth maps, which contain per-pixel depth information. A depth map is a two-dimensional function that gives the distance from the camera to a point on the object as a function of the image coordinates. Using this depth information and the original image, it is possible to reconstruct a virtual image at a nearby viewpoint by projecting the pixels of the available image to their locations in 3D space and finding their positions in the desired view plane. One of the most significant advantages of DIBR is that depth maps can be coded more efficiently than two streams corresponding to the left and right views of the scene, thereby reducing the bandwidth required for transmission and making it possible to reuse existing transmission channels for 3D TV. The technique can also be applied to other 3D technologies such as multimedia systems. In this paper we propose an advanced wavelet-domain scheme for the reconstruction of stereoscopic images that addresses some of the shortcomings of existing methods. We perform the wavelet transform of both the luminance and depth images in order to obtain significant geometric features, which enable a more faithful reconstruction of the virtual view. The motion estimation employed in our approach uses a Markov random field smoothness prior for regularization of the estimated motion field. The proposed reconstruction method is evaluated on two video sequences that are typically used for comparison of stereo reconstruction algorithms. The results demonstrate the advantages of the proposed approach over state-of-the-art methods in terms of both objective and subjective performance measures.
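The projection step described above can be illustrated in one dimension: each pixel is shifted by its disparity (focal length × baseline / depth), with a z-buffer so nearer points win and unfilled positions remain as holes. All numbers below are hypothetical and the geometry is deliberately simplified:

```python
def render_virtual_view(image, depth, focal, baseline, width):
    """Project each pixel of the reference view into a nearby virtual view."""
    virtual = [None] * width
    zbuf = [float("inf")] * width
    for x in range(width):
        # Disparity in pixels: nearer points (small depth) shift more.
        d = int(round(focal * baseline / depth[x]))
        t = x - d
        if 0 <= t < width and depth[x] < zbuf[t]:  # nearer pixel wins (z-buffer)
            virtual[t] = image[x]
            zbuf[t] = depth[x]
    return virtual  # None entries are disocclusion holes to be inpainted

image = [10, 20, 30, 40, 50, 60, 70, 80]
depth = [8, 8, 8, 2, 2, 8, 8, 8]        # pixels 3-4 belong to a near object
view = render_virtual_view(image, depth, focal=4.0, baseline=2.0, width=8)
print(view)
```

The holes left behind the near object are exactly the regions where reconstruction methods like the one proposed here must synthesize plausible content.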

Jovanov, Ljubomir; Pižurica, Aleksandra; Philips, Wilfried

2007-09-01

162

Fluorescence diffuse optical tomography: a wavelet-based model reduction

NASA Astrophysics Data System (ADS)

Fluorescence diffuse optical tomography is becoming a powerful tool for the investigation of molecular events in small-animal studies for the development of new therapeutics. Here, the stress is put on the mathematical problem of the tomography, which can be formulated as the estimation of physical parameters appearing in a set of Partial Differential Equations (PDEs). The Finite Element Method has been chosen to solve the diffusion equation because it places no restriction on the geometry or the homogeneity of the system. It is nonetheless well known to be time and memory consuming, mainly because of the large dimensions of the matrices involved. Our principal objective is to reduce the model in order to speed up its computation. To that end, a new method based on a multiresolution technique is chosen. All the matrices appearing in the discretized version of the PDEs are projected onto an orthonormal wavelet basis and reduced according to the multiresolution method. At the first resolution level, this compression halves each matrix dimension, and the inversion of the matrices is approximately 4 times faster. A validation study on a phantom was conducted to evaluate the feasibility of this reduction method.
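The reduction step can be sketched with a single-level orthonormal Haar projection: transform the system matrix into the wavelet basis and retain only the coarse (approximation) block, halving each dimension. The small tridiagonal matrix below is a stand-in for the FEM system, not data from the paper:

```python
import math

def haar_matrix(n):
    """Orthonormal single-level Haar analysis matrix (n even):
    first n/2 rows are pairwise averages, last n/2 rows are pairwise details."""
    h = [[0.0] * n for _ in range(n)]
    s = 1 / math.sqrt(2)
    for i in range(n // 2):
        h[i][2 * i], h[i][2 * i + 1] = s, s                    # approximation
        h[n // 2 + i][2 * i], h[n // 2 + i][2 * i + 1] = s, -s  # detail
    return h

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def transpose(a):
    return [list(r) for r in zip(*a)]

def reduce_matrix(m):
    """Project m onto the Haar basis and keep only the coarse block:
    an (n/2) x (n/2) reduced model, so matrix inversion gets much cheaper."""
    n = len(m)
    w = haar_matrix(n)
    wm = matmul(matmul(w, m), transpose(w))
    return [row[:n // 2] for row in wm[:n // 2]]

m = [[4.0, 1.0, 0.0, 0.0],
     [1.0, 4.0, 1.0, 0.0],
     [0.0, 1.0, 4.0, 1.0],
     [0.0, 0.0, 1.0, 4.0]]
reduced = reduce_matrix(m)
print([[round(v, 6) for v in row] for row in reduced])
```

Dropping the detail blocks is what introduces the approximation error that the phantom study is meant to quantify.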

Frassati, Anne; DaSilva, Anabela; Dinten, Jean-Marc; Georges, Didier

2007-07-01

163

Atmospheric turbulence mitigation using complex wavelet-based fusion.

Restoring a scene distorted by atmospheric turbulence is a challenging problem in video surveillance. The effect, caused by random, spatially varying, perturbations, makes a model-based solution difficult and in most cases, impractical. In this paper, we propose a novel method for mitigating the effects of atmospheric distortion on observed images, particularly airborne turbulence which can severely degrade a region of interest (ROI). In order to extract accurate detail about objects behind the distorting layer, a simple and efficient frame selection method is proposed to select informative ROIs only from good-quality frames. The ROIs in each frame are then registered to further reduce offsets and distortions. We solve the space-varying distortion problem using region-level fusion based on the dual tree complex wavelet transform. Finally, contrast enhancement is applied. We further propose a learning-based metric specifically for image quality assessment in the presence of atmospheric distortion. This is capable of estimating quality in both full- and no-reference scenarios. The proposed method is shown to significantly outperform existing methods, providing enhanced situational awareness in a range of surveillance scenarios. PMID:23475359

Anantrasirichai, Nantheera; Achim, Alin; Kingsbury, Nick G; Bull, David R

2013-06-01

164

Wavelet-based coherence measures of global seismic noise properties

NASA Astrophysics Data System (ADS)

The coherent behavior of four parameters characterizing the global field of low-frequency (periods from 2 to 500 min) seismic noise is studied. These parameters are the generalized Hurst exponent, the multifractal singularity spectrum support width, the normalized entropy of variance, and kurtosis. The analysis is based on data from 229 broadband stations of the GSN, GEOSCOPE, and GEOFON networks for the 17-year period from the beginning of 1997 to the end of 2013. The entire set of stations is subdivided into eight groups which, taken together, provide full coverage of the Earth. The daily median values of the studied noise parameters are calculated in each group. This procedure yields four 8-dimensional time series with a time step of 1 day and a length of 6209 samples in each scalar component. For each of the four 8-dimensional time series, a multiple correlation measure is estimated, based on computing robust canonical correlations for the Haar wavelet coefficients at the first detail level within a moving time window of length 365 days. These correlation measures show, for each noise property, a substantial increase beginning in 2007-2008 that continued until the end of 2013. Taking into account the well-known phenomenon of noise correlations increasing before catastrophes, this increase in seismic noise synchronization is interpreted as an indicator of the activation of the strongest (magnitude not less than 8.5) earthquakes observed since the Sumatra mega-earthquake of 26 Dec 2004. The synchronization continues to grow up to the end of the studied period (2013), which can be interpreted as a probable precursor of a further increase in the intensity of the strongest earthquakes all over the world.
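The core measure — correlating first-level Haar detail coefficients within a moving window — can be sketched in plain Python. Two synthetic "stations" stand in for the eight regional series, and an ordinary Pearson correlation replaces the paper's robust canonical correlations; all signals below are invented:

```python
import math
import random

def haar_level1(x):
    """First detail level of the Haar wavelet transform of a series."""
    return [(x[2 * i] - x[2 * i + 1]) / math.sqrt(2) for i in range(len(x) // 2)]

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b))
    va = math.sqrt(sum((u - ma) ** 2 for u in a))
    vb = math.sqrt(sum((v - mb) ** 2 for v in b))
    return cov / (va * vb)

random.seed(0)
# Two stations: independent noise at first, then a shared component appears.
shared = [random.gauss(0, 1) for _ in range(200)]
s1 = [random.gauss(0, 1) for _ in range(200)]
s2 = [random.gauss(0, 1) for _ in range(200)]
for t in range(100, 200):          # synchronization begins at t = 100
    s1[t] += 3 * shared[t]
    s2[t] += 3 * shared[t]

w1, w2 = haar_level1(s1), haar_level1(s2)
early = pearson(w1[:50], w2[:50])  # window before synchronization
late = pearson(w1[50:], w2[50:])   # window after synchronization
print("before:", round(early, 2), " after:", round(late, 2))
```

Sliding such a window along the series produces the correlation-versus-time curve whose rise after 2007-2008 the abstract reports.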

Lyubushin, A. A.

2015-04-01

165

A novel ultrasound methodology for estimating spine mineral density.

We investigated the possible clinical feasibility and accuracy of an innovative ultrasound (US) method for diagnosis of osteoporosis of the spine. A total of 342 female patients (aged 51-60 y) underwent spinal dual X-ray absorptiometry and abdominal echographic scanning of the lumbar spine. Recruited patients were subdivided into a reference database used for US spectral model construction and a study population for repeatability and accuracy evaluation. US images and radiofrequency signals were analyzed via a new fully automatic algorithm that performed a series of spectral and statistical analyses, providing a novel diagnostic parameter called the osteoporosis score (O.S.). If dual X-ray absorptiometry is assumed to be the gold standard reference, the accuracy of O.S.-based diagnoses was 91.1%, with k = 0.859 (p < 0.0001). Significant correlations were also found between O.S.-estimated bone mineral densities and corresponding dual X-ray absorptiometry values, with r(2) values up to 0.73 and a root mean square error of 6.3%-9.3%. The results obtained suggest that the proposed method has the potential for future routine application in US-based diagnosis of osteoporosis. PMID:25438845

Conversano, Francesco; Franchini, Roberto; Greco, Antonio; Soloperto, Giulia; Chiriacò, Fernanda; Casciaro, Ernesto; Aventaggiato, Matteo; Renna, Maria Daniela; Pisani, Paola; Di Paola, Marco; Grimaldi, Antonella; Quarta, Laura; Quarta, Eugenio; Muratore, Maurizio; Laugier, Pascal; Casciaro, Sergio

2015-01-01

166

Estimation of density of mongooses with capture-recapture and distance sampling

We captured mongooses (Herpestes javanicus) in live traps arranged in trapping webs in Antigua, West Indies, and used capture-recapture and distance sampling to estimate density. Distance estimation and program DISTANCE were used to provide estimates of density from the trapping-web data. Mean density based on trapping webs was 9.5 mongooses/ha (range, 5.9-10.2/ha); estimates had coefficients of variation ranging from 29.82-31.58% (mean = 30.46%). Mark-recapture models were used to estimate abundance, which was converted to density using estimates of effective trap area. Tests of model assumptions provided by CAPTURE indicated pronounced heterogeneity in capture probabilities and some indication of behavioral response and variation over time. Mean estimated density was 1.80 mongooses/ha (range, 1.37-2.15/ha) with estimated coefficients of variation of 4.68-11.92% (mean = 7.46%). Estimates of density based on mark-recapture data depended heavily on assumptions about animal home ranges; variances of densities also may be underestimated, leading to unrealistically narrow confidence intervals. Estimates based on trap webs require fewer assumptions, and estimated variances may be a more realistic representation of sampling variation. Because trap webs are established easily and provide adequate data for estimation in a few sample occasions, the method should be efficient and reliable for estimating densities of mongooses.
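The distance-sampling density estimate behind these numbers can be sketched as follows, assuming a half-normal detection function with a known scale σ. In practice, program DISTANCE fits σ from the observed perpendicular distances; every number below is hypothetical:

```python
import math

def halfnormal_esw(sigma):
    """Effective strip half-width for a half-normal detection function
    g(x) = exp(-x^2 / (2 sigma^2)); its integral is sigma * sqrt(pi / 2)."""
    return sigma * math.sqrt(math.pi / 2)

def density_line_transect(n_detections, total_length, sigma):
    """D = n / (2 * L * ESW): detections per unit of effectively surveyed area."""
    return n_detections / (2.0 * total_length * halfnormal_esw(sigma))

# Hypothetical survey: 60 detections over 2 km of transect, sigma = 5 m.
d = density_line_transect(60, 2000.0, 5.0)
print(round(d * 10000, 2), "animals/ha")  # convert from per m^2 to per ha
```

The coefficient of variation reported by DISTANCE then comes from the sampling variances of both the encounter rate and the fitted σ.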

Corn, J.L.; Conroy, M.J.

1998-01-01

167

Because studies estimating density of gray squirrels (Sciurus carolinensis) have been labor intensive and costly, I demonstrate the use of line transect surveys to estimate gray squirrel density and determine the costs of conducting surveys to achieve precise estimates. Density estimates are based on four transects that were surveyed five times from 30 June to 9 July 1994. Using the program DISTANCE, I estimated there were 4.7 (95% CI = 1.86-11.92) gray squirrels/ha on the Clemson University campus. Eleven additional surveys would have decreased the percent coefficient of variation from 30% to 20% and would have cost approximately $114. Estimating urban gray squirrel density using line transect surveys is cost effective and can provide unbiased estimates of density, provided that none of the assumptions of distance sampling theory are violated. KEY WORDS: Bias; Density; Distance sampling; Gray squirrel; Line transect; Sciurus carolinensis. PMID:9336490

Hein

1997-11-01

168

On the analysis of wavelet-based approaches for print mottle artifacts

NASA Astrophysics Data System (ADS)

Print mottle is one of several attributes described in ISO/IEC DTS 24790, a draft technical specification for the measurement of image quality for monochrome printed output. It defines mottle as aperiodic fluctuations of lightness less than about 0.4 cycles per millimeter, a definition inherited from the latest official standard on printed image quality, ISO/IEC 13660. In a previous publication, we introduced a modification to the ISO/IEC 13660 mottle measurement algorithm that includes a band-pass, wavelet-based, filtering step to limit the contribution of high-frequency fluctuations including those introduced by print grain artifacts. This modification has improved the algorithm's correlation with the subjective evaluation of experts who rated the severity of printed mottle artifacts. Seeking to improve upon the mottle algorithm in ISO/IEC 13660, the ISO 24790 committee evaluated several mottle metrics. This led to the selection of the above wavelet-based approach as the top candidate algorithm for inclusion in a future ISO/IEC standard. Recent experimental results from the ISO committee showed higher correlation between the wavelet-based approach and the subjective evaluation conducted by the ISO committee members based upon 25 samples covering a variety of printed mottle artifacts. In addition, we introduce an alternative approach for measuring mottle defects based on spatial frequency analysis of wavelet-filtered images. Our goal is to establish a link between the spatial-based mottle (ISO/IEC DTS 24790) approach and its equivalent frequency-based one in light of Parseval's theorem. Our experimental results showed a high correlation between the spatial and frequency based approaches.

Eid, Ahmed H.; Cooper, Brian E.

2014-01-01

169

Iterated denoising and fusion to improve the image quality of wavelet-based coding

NASA Astrophysics Data System (ADS)

An iterated denoising and fusion method is presented to improve the image quality of wavelet-based coding. Firstly, iterated image denoising is used to reduce ringing and staircase noise along curving edges and improve edge regularity. Then, we adopt wavelet fusion method to enhance image edges, protect non-edge regions and decrease blurring artifacts during the process of denoising. Experimental results have shown that the proposed scheme is capable of improving both the subjective and the objective performance of wavelet decoders, such as JPEG2000 and SPIHT.

Song, Beibei

2011-06-01

170

Optimal block boundary pre/postfiltering for wavelet-based image and video compression.

This paper presents a pre/postfiltering framework to reduce the reconstruction errors near block boundaries in wavelet-based image and video compression. Two algorithms are developed to obtain the optimal filter, based on boundary filter bank and polyphase structure, respectively. A low-complexity structure is employed to approximate the optimal solution. Performances of the proposed method in the removal of JPEG 2000 tiling artifact and the jittering artifact of three-dimensional wavelet video coding are reported. Comparisons with other methods demonstrate the advantages of our pre/postfiltering framework. PMID:16370467

Liang, Jie; Tu, Chengjie; Tran, Trac D

2005-12-01

171

ICER-3D: A Progressive Wavelet-Based Compressor for Hyperspectral Images

NASA Technical Reports Server (NTRS)

ICER-3D is a progressive, wavelet-based compressor for hyperspectral images. ICER-3D is derived from the ICER image compressor. ICER-3D can provide lossless and lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The three-dimensional wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of hyperspectral data sets, while facilitating elimination of spectral ringing artifacts. Correlation is further exploited by a context modeler that effectively exploits spectral dependencies in the wavelet-transformed hyperspectral data. Performance results illustrating the benefits of these features are presented.

Kiely, A.; Klimesh, M.; Xie, H.; Aranki, N.

2005-01-01

172

Serial identification of EEG patterns using adaptive wavelet-based analysis

NASA Astrophysics Data System (ADS)

A problem of recognition specific oscillatory patterns in the electroencephalograms with the continuous wavelet-transform is discussed. Aiming to improve abilities of the wavelet-based tools we propose a serial adaptive method for sequential identification of EEG patterns such as sleep spindles and spike-wave discharges. This method provides an optimal selection of parameters based on objective functions and enables to extract the most informative features of the recognized structures. Different ways of increasing the quality of patterns recognition within the proposed serial adaptive technique are considered.

Nazimov, A. I.; Pavlov, A. N.; Nazimova, A. A.; Grubov, V. V.; Koronovskii, A. A.; Sitnikova, E.; Hramov, A. E.

2013-10-01

173

Adaptive Density Estimation in the Pile-up Model Involving Measurement Errors

We consider the problem of nonparametric density estimation in the pile-up model. Adaptive nonparametric estimators are proposed for the pile-up model in its simple form as well as in the case of additional measurement errors.

Comte, Fabienne; Paris-Sud XI, Université de

174

Probability Density Estimation using Isocontours and Isosurfaces: Application to Information-Theoretic Image Registration

A required component of all information-theoretic techniques in image registration is a good density estimator. Too small a histogram bin width leads to noisy, sparse density estimates (variance), whereas too large a bin width introduces oversmoothing (bias).

Banerjee, Arunava

175

A sampling unit for estimating gall densities of Paradiplosis tumifex (Diptera: Cecidomyiidae) in

The balsam gall midge, Paradiplosis tumifex Gagné (Diptera: Cecidomyiidae), is a major Christmas tree pest. A sampling unit is described for evaluating densities of the balsam gall midge and its galls.

Heard, Stephen B.

176

How Bandwidth Selection Algorithms Impact Exploratory Data Analysis Using Kernel Density Estimation

Exploratory data analysis (EDA) is important, yet often overlooked in the social and behavioral sciences. Graphical analysis of one's data is central to EDA. A viable method of estimating and graphing the underlying density in EDA is kernel density...

Harpole, Jared Kenneth

2013-05-31

177

Demonstration of line transect methodologies to estimate urban gray squirrel density

Because studies estimating density of gray squirrels (Sciurus carolinensis) have been labor intensive and costly, I demonstrate the use of line transect surveys to estimate gray squirrel density and determine the costs of conducting surveys to achieve precise estimates. Density estimates are based on four transects that were surveyed five times from 30 June to 9 July 1994. Using the program DISTANCE, I estimated there were 4.7 (95% CI = 1.86-11.92) gray squirrels/ha on the Clemson University campus. Eleven additional surveys would have decreased the percent coefficient of variation from 30% to 20% and would have cost approximately $114. Estimating urban gray squirrel density using line transect surveys is cost effective and can provide unbiased estimates of density, provided that none of the assumptions of distance sampling theory are violated.

Hein, E.W. [Los Alamos National Lab., NM (United States)]

1997-11-01

178

The research presented in this paper aims to achieve tissue classification and automatic diagnosis of abnormal tumor regions in Computed Tomography (CT) images using a wavelet-based statistical texture analysis method. Comparative studies of texture analysis methods are performed for the proposed wavelet-based texture analysis method and the Spatial Gray Level Dependence Method (SGLDM). Our proposed system consists of four phases: (i) discrete wavelet decomposition, (ii) feature extraction, (iii) feature selection, and (iv) analysis of the extracted texture features by a classifier. A wavelet-based statistical texture feature set is derived from normal and tumor regions. A Genetic Algorithm (GA) is used to select the optimal texture features from the set of extracted texture features. We construct a Support Vector Machine (SVM) based classifier and evaluate its performance by comparing its classification results with those of a Back Propagation Neural network classifier (BPN...

Padma, A

2011-01-01

179

This chapter discusses estimating the biomass density of forest vegetation. Data from inventories of tropical Asia and America were used to estimate biomass densities. Efforts to quantify forest disturbance suggest that population density, at subnational scales, can be used as a surrogate index to encompass all the anthropogenic activities (logging, slash-and-burn agriculture, grazing) that lead to degradation of tropical forest biomass density.

Brown, S.

1996-07-01

180

Recovery rate is essential to the estimation of a portfolio's loss and economic capital. Neglecting the randomness of the distribution of recovery rates may underestimate the risk. This study introduces two kinds of distribution models, Beta distribution estimation and kernel density estimation, to simulate the distribution of recovery rates of corporate loans and bonds. Models based on the Beta distribution are in common use, for example in CreditMetrics by J.P. Morgan, Portfolio Manager by KMV, and LossCalc by Moody's. However, they have a serious defect: they cannot fit bimodal or multimodal distributions, such as the recovery rates of corporate loans and bonds shown by Moody's new data. To overcome this flaw, kernel density estimation is introduced, and we compare the simulation results of the histogram, Beta distribution estimation, and kernel density estimation to conclude that the Gaussian kernel density estimate better imitates the distribution of bimodal or multimodal data samples of corporate loans and bonds. Finally, a Chi-square test of the Gaussian kernel density estimate shows that it fits the curve of recovery rates of loans and bonds. Thus, using kernel density estimation to delineate the bimodal recovery rates of bonds is preferable in credit risk management. PMID:23874558
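A Gaussian kernel density estimator of the kind compared here takes only a few lines. The recovery-rate sample below is invented purely to show the bimodal shape that a single Beta fit would smear together:

```python
import math

def gaussian_kde(samples, bandwidth):
    """Return a Gaussian kernel density estimator built from the samples."""
    n = len(samples)
    c = 1.0 / (n * bandwidth * math.sqrt(2 * math.pi))
    def f(x):
        return c * sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2)
                       for s in samples)
    return f

# Hypothetical bimodal recovery rates: loans recover either little or nearly all.
rates = [0.05, 0.1, 0.12, 0.15, 0.2, 0.8, 0.85, 0.9, 0.92, 0.95]
f = gaussian_kde(rates, bandwidth=0.05)
# The KDE exhibits two modes, with a trough between them.
print(f(0.1) > f(0.5) and f(0.9) > f(0.5))
```

A unimodal Beta density fitted to the same sample would place substantial mass near 0.5, exactly where the data have almost none.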

Chen, Rongda; Wang, Ze

2013-01-01

181

Estimation of Parent Specific DNA Copy Number in Tumors using High-Density Genotyping Arrays

The proposed method does not require matched normal samples and can estimate the unknown genotypes simultaneously with the parent-specific copy number, using data from current high-density genotyping platforms.

Zhang, Nancy R.

182

MobSampling: V2V Communications for Traffic Density Estimation

We propose a VANET-based estimation of vehicle traffic density. Our approach envisions vehicles communicating within a VANET, for example to monitor CO2 emissions in different areas of a metropolitan region.

Garelli, Laura; Casetti, Claudio; Fiore, Marco

183

Adaptive quadratic functional estimation of a weighted density by model selection

We consider the problem of estimating the integral of the square of a probability density function f on the basis of a random sample from a weighted distribution. Specifically, using model selection via a penalized criterion, an adaptive estimator of ∫f² based on weighted data is proposed for probability density functions which are uniformly bounded and belong to certain

Athanasia Petsa; Theofanis Sapatinas

2010-01-01

184

On the analysis of wavelet-based approaches for print grain artifacts

NASA Astrophysics Data System (ADS)

Grain is one of several attributes described in ISO/IEC TS 24790, a technical specification for the measurement of image quality for monochrome printed output. It defines grain as aperiodic fluctuations of lightness greater than 0.4 cycles per millimeter, a definition inherited from the latest official standard on printed image quality, ISO/IEC 13660. Since this definition places no bounds on the upper frequency range, higher-frequency fluctuations (such as those from the printer's halftone pattern) could contribute significantly to the measurement of grain artifacts. In a previous publication, we introduced a modification to the ISO/IEC 13660 grain measurement algorithm that includes a band-pass, wavelet-based, filtering step to limit the contribution of high-frequency fluctuations. This modification improves the algorithm's correlation with the subjective evaluation of experts who rated the severity of printed grain artifacts. Seeking to improve upon the grain algorithm in ISO/IEC 13660, the ISO/IEC TS 24790 committee evaluated several graininess metrics. This led to the selection of the above wavelet-based approach as the top candidate algorithm for inclusion in a future ISO/IEC standard. Our recent experimental results showed r2 correlation of 0.9278 between the wavelet-based approach and the subjective evaluation conducted by the ISO committee members based upon 26 samples covering a variety of printed grain artifacts. On the other hand, our experiments on the same data set showed much lower correlation (r2 = 0.3555) between the ISO/IEC 13660 approach and the same subjective evaluation of the ISO committee members. In addition, we introduce an alternative approach for measuring grain defects based on spatial frequency analysis of wavelet-filtered images. Our goal is to establish a link between the spatial-based grain (ISO/IEC TS 24790) approach and its equivalent frequency-based one in light of Parseval's theorem. 
Our experimental results showed r2 correlation near 0.99 between the spatial and frequency-based approaches.
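The Parseval link invoked here is easy to verify numerically: the energy of a zero-mean fluctuation profile computed in the spatial domain equals its energy computed from the DFT spectrum. This is a generic check of the theorem, not the ISO metric itself, and the scan-line values are invented:

```python
import cmath
import math

def dft(x):
    """Naive discrete Fourier transform (fine for a short demonstration)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

# A zero-mean "reflectance fluctuation" profile, e.g. one scan line of a print.
x = [0.2, -0.1, 0.4, -0.3, 0.1, 0.0, -0.2, -0.1]
spatial_energy = sum(v * v for v in x)
X = dft(x)
frequency_energy = sum(abs(c) ** 2 for c in X) / len(x)
# Parseval: total fluctuation energy is identical in both domains.
print(abs(spatial_energy - frequency_energy) < 1e-9)
```

This equality is what lets a band-limited spatial graininess measure be re-expressed as an integral over the corresponding frequency band.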

Eid, Ahmed H.; Cooper, Brian E.; Rippetoe, Edward E.

2013-01-01

185

Two versions of a stage-structured model of Cirsium vulgare population dynamics were developed. Both incorporated density dependence at one stage in the life cycle of the plant. In version 1 density dependence was assumed to operate during germination whilst in version 2 it was included at the seedling stage. Density-dependent parameter values for the model were estimated from annual census

M. Gillman; J. M. Bullock; J. Silvertown; B. Clear Hill

1993-01-01

186

A continuous bivariate model for wind power density and wind turbine energy output estimations

The wind power probability density function is useful in both the design process of a wind turbine and in the evaluation process of the wind resource available at a potential site. The continuous probability models used in the scientific literature to estimate the wind power density distribution function and wind turbine energy output assume that air density is independent of

José Antonio Carta; Dunia Mentado

2007-01-01

187

This work aims to contribute to studies of prosthetic (bionic) hands. The 480 signals used in this work, corresponding to thumb adduction, thumb flexion, and finger abduction motions, were collected by surface electrodes. Eight healthy subjects participated in the collection of the surface electromyogram (SEMG) signals. Wavelet-based autoregressive models of the collected signals are used

I. Yazici; E. Koklukaya; B. Baslo

2009-01-01

188

Wavelet-based correlations of impedance cardiography signals and heart rate variability

NASA Astrophysics Data System (ADS)

The wavelet-based correlation analysis is employed to study impedance cardiography signals (variation in the impedance of the thorax z(t) and the time derivative of the thoracic impedance (-dz/dt)) and heart rate variability (HRV). A method of computer thoracic tetrapolar polyrheocardiography is used for hemodynamic registrations. The modulus of the wavelet-correlation function shows the level of correlation, and the phase indicates the mean phase shift of oscillations at the given scale (frequency). Significant correlations, essentially exceeding the values obtained for noise signals, are found within two spectral ranges, corresponding to respiratory activity (0.14-0.5 Hz) and to endothelium-related metabolic activity and neuroendocrine rhythms (0.0095-0.02 Hz). The phase shift of oscillations in all frequency ranges is probably related to the peculiarities of parasympathetic and neuro-humoral regulation of the cardiovascular system.

Podtaev, Sergey; Dumler, Andrew; Stepanov, Rodion; Frick, Peter; Tziberkin, Kirill

2010-04-01

189

A new algorithm for wavelet-based heart rate variability analysis

One of the most promising non-invasive markers of the activity of the autonomic nervous system is Heart Rate Variability (HRV). HRV analysis toolkits often provide spectral analysis techniques using the Fourier transform, which assumes that the heart rate series is stationary. To overcome this issue, the Short-Time Fourier Transform (STFT) is often used. However, the wavelet transform is thought to be a more suitable tool than the STFT for analyzing non-stationary signals. Given the lack of support for wavelet-based analysis in HRV toolkits, such analysis must be implemented by the researcher, which has left the technique underutilized. This paper presents a new algorithm to perform HRV power spectrum analysis based on the Maximal Overlap Discrete Wavelet Packet Transform (MODWPT). The algorithm calculates the power in any spectral band with a given tolerance for the band's boundaries. The MODWPT decomposition tree is pruned to avoid calculating unnecessary wavelet coefficients, thereby optimizing execution time.
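A minimal sketch of wavelet-based band-power estimation, using a toy undecimated (à trous) Haar transform in place of the paper's tolerance-pruned MODWPT; the signal, sampling rate, and band-to-level mapping below are illustrative assumptions, and Haar band edges are only approximate.

```python
import numpy as np

def haar_modwt(x, levels):
    """Undecimated (a trous) Haar transform: details d_1..d_J plus smooth s_J.
    Energy is preserved: sum_j ||d_j||^2 + ||s_J||^2 == ||x||^2."""
    s = x.astype(float)
    details = []
    for j in range(levels):
        shift = 2 ** j
        prev = np.roll(s, shift)          # circular boundary handling
        details.append((s - prev) / 2.0)
        s = (s + prev) / 2.0
    return details, s

# Toy heart-rate series resampled at 4 Hz with LF (0.1 Hz) and HF (0.3 Hz) rhythms
fs = 4.0
t = np.arange(0, 256, 1 / fs)
x = 0.05 * np.sin(2 * np.pi * 0.1 * t) + 0.03 * np.sin(2 * np.pi * 0.3 * t)

details, smooth = haar_modwt(x, levels=5)
energy = sum(np.sum(d**2) for d in details) + np.sum(smooth**2)
assert np.isclose(energy, np.sum(x**2))   # exact energy partition across bands

# Level j covers roughly [fs / 2^(j+1), fs / 2^j]: level 3 ~ HF, levels 4-5 ~ LF
hf_power = np.mean(details[2]**2)
lf_power = np.mean(details[3]**2) + np.mean(details[4]**2)
print(hf_power > 0 and lf_power > 0)
```

The energy-preservation property is what lets a wavelet (packet) transform partition HRV power into spectral bands; the real MODWPT refines the band boundaries to a stated tolerance rather than being locked to octave splits.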

García, Constantino A; Vila, Xosé; Márquez, David G

2014-01-01

190

NASA Astrophysics Data System (ADS)

Synchrotron radiation X-ray microtomography is becoming a uniquely powerful method to nondestructively access three-dimensional internal microstructure in biological and engineering materials, with a resolution of 1 μm or less. The tiny field of view of the detector, however, requires that the sample be strictly small, which limits practical applications of the method such as in situ experiments. In this paper, a wavelet-based local tomography algorithm, motivated by the localization property of the wavelet transform, is proposed to recover a small region of interest inside a large object using only the local projections. A local tomography experiment on an Al-Cu alloy was carried out at SPring-8, the third-generation synchrotron radiation facility in Japan. The proposed method readily enables high-resolution observation of a large specimen, greatly extending the applicability of current microtomography.

Li, Lingqi; Toda, Hiroyuki; Ohgaki, Tomomi; Kobayashi, Masakazu; Kobayashi, Toshiro; Uesugi, Kentaro; Suzuki, Yoshio

2007-12-01

191

Wavelet-based built-in damage detection and identification for composites

NASA Astrophysics Data System (ADS)

In this paper, a wavelet-based built-in damage detection and identification algorithm for carbon fiber reinforced polymer (CFRP) laminates is proposed. Lamb waves propagating in laminates are first modeled analytically using higher-order plate theory and compared with experimental results in terms of group velocity. Distributed piezoelectric transducers are used to generate and monitor the fundamental ultrasonic Lamb waves in the laminates at narrowband frequencies. A signal processing scheme based on wavelet analysis is applied to the sensor signals to extract the group velocity of the wave propagating in the laminates. Combined with the theoretically computed wave velocity, a genetic algorithm (GA) optimization technique is employed to identify the location and size of the damage. The applicability of the proposed method to detect and size damage is demonstrated by experimental studies on a composite plate with simulated delamination damage.

Yan, G.; Zhou, Lily L.; Yuan, F. G.

2005-05-01

192

An Investigation of Wavelet Bases for Grid-Based Multi-Scale Simulations Final Report

The research summarized in this report is the result of a two-year effort that has focused on evaluating the viability of wavelet bases for the solution of partial differential equations. The primary objective for this work has been to establish a foundation for hierarchical/wavelet simulation methods based upon numerical performance, computational efficiency, and the ability to exploit the hierarchical adaptive nature of wavelets. This work has demonstrated that hierarchical bases can be effective for problems with a dominant elliptic character. However, the strict enforcement of orthogonality was found to be less desirable than weaker semi-orthogonality or bi-orthogonality for solving partial differential equations. This conclusion has led to the development of a multi-scale linear finite element based on a hierarchical change of basis. The reproducing kernel particle method has been found to yield extremely accurate phase characteristics for hyperbolic problems while providing a convenient framework for multi-scale analyses.

Baty, R.S.; Burns, S.P.; Christon, M.A.; Roach, D.W.; Trucano, T.G.; Voth, T.E.; Weatherby, J.R.; Womble, D.E.

1998-11-01

193

Conjugate Event Study of Geomagnetic ULF Pulsations with Wavelet-based Indices

NASA Astrophysics Data System (ADS)

The interactions between the solar wind and the geomagnetic field produce a variety of space weather phenomena, which can impact the advanced technology systems of modern society including, for example, power, communication, and navigation systems. One such phenomenon is the geomagnetic ULF pulsation observed by ground-based or in-situ satellite measurements. Here, we describe a wavelet-based index and apply it to study geomagnetic ULF pulsations observed by the Antarctica and Greenland magnetometer arrays. The wavelet indices computed from these data provide spectral, correlation, and magnitude information regarding the geomagnetic pulsations. The results show that the geomagnetic field at conjugate locations responds differently according to the frequency of the pulsations. The index is effective for identifying pulsation events and measures important characteristics of the pulsations. It could be a useful tool for monitoring geomagnetic pulsations.

Xu, Z.; Clauer, C. R.; Kim, H.; Weimer, D. R.; Cai, X.

2013-12-01

194

Evaluation of a new wavelet-based compression algorithm for synthetic aperture radar images

NASA Astrophysics Data System (ADS)

In this paper we discuss the performance of a new wavelet-based embedded compression algorithm on synthetic aperture radar (SAR) image data. The algorithm applies index coding to the indices of the discrete wavelet transform of the image data and provides an embedded code to successively approximate it. Results on compressing still images, medical images, and seismic traces indicate that the new algorithm performs quite competitively with other image compression algorithms; its evaluation for SAR image compression is presented here. One advantage of the algorithm is that the compressed data are encoded in a way that facilitates processing in the compressed wavelet domain, a significant aspect considering the rate at which SAR data are collected and the desire to process the data in near real time.

Tian, Jun; Guo, Haitao; Wells, Raymond O., Jr.; Burrus, C. Sidney; Odegard, Jan E.

1996-06-01

195

Wavelet-based correlation (WBC) of zoned crystal populations and magma mixing

NASA Astrophysics Data System (ADS)

Magma mixing is a common process and yet the rates, kinematics and numbers of events are difficult to establish. One expression of mixing is the major, trace element, and isotopic zoning in crystals, which provides a sequential but non-monotonic record of the creation and dissipation of volumes of distinct chemical potential. We demonstrate a wavelet-based correlation (WBC) technique that uses this zoning for the recognition of the minimum number of mixing, or open-system events, and the criteria for identifying populations of crystals that have previously shared a mixing event. When combined with field observations of the spatial distribution of crystal populations, WBC provides a statistical link between the time-varying thermodynamic and fluid dynamic history of the magmatic system. WBC can also be used as a data mining utility to reveal open-system events where outcrop is sparse. An analysis of zoned plagioclase from the Tuolumne Intrusive Suite provides a proof of principle for WBC.

Wallace, Glen S.; Bergantz, George W.

2002-08-01

196

An Evaluation of the Accuracy of Kernel Density Estimators for Home Range Analysis

Abstract. Kernel density estimators are becoming more widely used, particularly as home range estimators. Despite extensive interest in their theoretical properties, little empirical research has been done to investigate their performance as home range estimators. We used computer simulations to compare the area and shape of kernel density estimates to the true area and shape of multimodal two-dimensional distributions. The fixed kernel gave area estimates with very little bias when least squares cross validation was used to select

D. Erran Seaman; Roger A. Powell

2008-01-01

197

Multi-dimensional Density Estimation

Keywords: cross-validation, curse of dimensionality, exploratory data analysis, frequency polygons, histograms, kernel estimators. Modern data analysis requires a number of tools to uncover hidden structure, and a willingness to go beyond simple univariate methodologies. Many experimental scientists today

Scott, David W.

198

A New Computational Approach to Density Estimation with ...

According to the level of difficulty of SDP to be solved later, we employed. (a) Optimization by ..... estimation of a survival function for medical data etc. This is another .... ing Positive Functions. Ph. D. Thesis, SUNY, Buffalo, New York, 1976. 18 ...

2003-12-19

199

Asymptotic equivalence of density estimation and Gaussian white noise

Signal recovery in Gaussian white noise with variance tending to zero has served for some time as a representative model for nonparametric curve estimation, having all the essential traits in a pure form. The equivalence has mostly been stated informally, but an approximation in the sense of Le Cam's deficiency distance $\Delta$ would make it precise. The models are then

Michael Nussbaum

1996-01-01

200

Estimates of cetacean abundance, biomass, and population density are

West coast cetaceans may be affected by anthropogenic sound (e.g., sonar, ship noise, and seismic surveys) and climate change. Large whales also die from ship strikes (Carretta et al., 2006). Abundance, biomass, and population density of cetaceans along the U.S. west coast were estimated from ship surveys conducted in the summer and fall

201

Estimating insect flight densities from attractive trap catches and flight height distributions.

Methods and equations have not been developed previously to estimate insect flight densities, a key factor in decisions regarding trap and lure deployment in programs of monitoring, mass trapping, and mating disruption with semiochemicals. An equation to estimate densities of flying insects per hectare is presented that uses the standard deviation (SD) of the vertical flight distribution, trapping time, the trap's spherical effective radius (ER), catch at the mean flight height (as estimated from a best-fitting normal distribution with SD), and an estimated average flight speed. Data from previous reports were used to estimate flight densities with the equations. The same equations can use traps with pheromone lures or attractive colors with a measured effective attraction radius (EAR) instead of the ER. In practice, EAR is more useful than ER for flight density calculations since attractive traps catch higher numbers of insects and thus can measure lower populations more readily. Computer simulations in three dimensions with varying numbers of insects (density) and varying EAR were used to validate the equations for density estimates of insects in the field. Few studies have provided data to obtain EAR, SD, speed, and trapping time to estimate flight densities per hectare. However, the necessary parameters can be measured more precisely in future studies. PMID:22527056

Byers, John A

2012-05-01

202

Daytime fog detection and density estimation with entropy minimization

NASA Astrophysics Data System (ADS)

Fog disturbs proper image processing in many outdoor observation tools. For instance, fog reduces the visibility of obstacles in vehicle driving applications. Estimating the amount of fog in the scene image usually makes it possible to greatly improve the image processing and thus to better perform the observation task. One possibility is to restore the visibility of the contrasts in the foggy scene image before applying the usual image processing. Several defogging algorithms have been proposed in recent years. Before applying defogging, it is necessary to detect the presence of fog, so as not to amplify contrasts due to noise. Surprisingly, few image processing algorithms have been proposed for fog detection and characterization. Most are dedicated to static cameras and cannot be used when the camera is moving. Daytime fog is characterized by its extinction coefficient, which is equivalent to the visibility distance. A visibility meter can be used for fog detection and characterization, but this kind of sensor performs an estimation in a relatively small volume of air and is thus sensitive to heterogeneous fog and to air turbulence with moving cameras. In this paper, we propose an original algorithm, based on entropy minimization, to detect fog and estimate its extinction coefficient by processing stereo pairs. This algorithm is fast, provides accurate results using a low-cost stereo camera sensor and, most importantly, can work when the cameras are moving. The proposed algorithm is evaluated on synthetic and camera images with ground truth. Results show that the proposed method is accurate and, combined with a fast stereo reconstruction algorithm, should provide a near-real-time solution for fog detection and visibility estimation for moving sensors.

Caraffa, L.; Tarel, J. P.

2014-08-01

203

Unbiased SVM Density Estimation with Application to Graphical Pattern Recognition

Classification of structured data (i.e., data that are represented as graphs) is a topic of interest in the machine learning community. This paper presents a different, simple approach to the problem of structured pattern recognition, relying on the description of graphs in terms of algebraic binary relations. Maximum-a-posteriori decision rules over relations require the estimation of class-conditional probability

Edmondo Trentin; Ernesto Di Iorio

2007-01-01

204

Estimated global nitrogen deposition using NO2 column density

Global nitrogen deposition has increased over the past 100 years. Monitoring and simulation studies of nitrogen deposition have evaluated nitrogen deposition at both the global and regional scale. With the development of remote-sensing instruments, tropospheric NO2 column density retrieved from the Global Ozone Monitoring Experiment (GOME) and Scanning Imaging Absorption Spectrometer for Atmospheric Chartography (SCIAMACHY) sensors now provides a new opportunity to understand changes in reactive nitrogen in the atmosphere. The concentration of NO2 in the atmosphere has a significant effect on atmospheric nitrogen deposition. Following the general nitrogen deposition calculation method, we use principal component regression to evaluate global nitrogen deposition based on global NO2 column density and meteorological data. In terms of simulation accuracy, about 70% of the land area of the Earth passed a significance test of regression. In addition, NO2 column density has a significant influence on regression results over 44% of global land. The simulated results show that global average nitrogen deposition was 0.34 g m⁻² yr⁻¹ from 1996 to 2009 and is increasing at about 1% per year. Our simulated results show that China, Europe, and the USA are the three hotspots of nitrogen deposition, consistent with previous research findings. In this study, Southern Asia was found to be another hotspot of nitrogen deposition (about 1.58 g m⁻² yr⁻¹, and maintaining a high growth rate). As nitrogen deposition increases, the number of regions threatened by high nitrogen deposition is also increasing. With N emissions continuing to increase in the future, areas whose ecosystems are affected by high levels of nitrogen deposition will expand.
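Principal component regression, as named in the abstract, can be sketched as PCA followed by least squares on the component scores. The data below are synthetic stand-ins for the NO2-column and meteorological covariates, not the study's dataset.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in: 5 strongly collinear predictors driven by 2 latent factors
n, p, k = 200, 5, 2                 # samples, predictors, retained components
base = rng.normal(size=(n, 2))
X = base @ rng.normal(size=(2, p)) + 0.01 * rng.normal(size=(n, p))
y = X @ np.array([0.5, -0.2, 0.1, 0.0, 0.3]) + 0.01 * rng.normal(size=n)

# Principal component regression: standardize, project onto top-k PCs, regress
Xc = (X - X.mean(0)) / X.std(0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:k].T                       # component scores (n x k)
coef, *_ = np.linalg.lstsq(scores, y - y.mean(), rcond=None)
y_hat = scores @ coef + y.mean()

r2 = 1 - np.sum((y - y_hat)**2) / np.sum((y - y.mean())**2)
print(round(r2, 3))                          # near 1: two PCs capture the signal
```

PCR's appeal in this setting is exactly what the synthetic example shows: when predictors are highly collinear, regressing on a few leading components stabilizes the fit without discarding predictive information.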

Lu, Xuehe; Jiang, Hong; Zhang, Xiuying; Liu, Jinxun; Zhang, Zhen; Jin, Jiaxin; Wang, Ying; Xu, Jianhui; Cheng, Miaomiao

2013-01-01

205

The estimation of the gradient of a density function, with applications in pattern recognition

Nonparametric density gradient estimation using a generalized kernel approach is investigated. Conditions on the kernel functions are derived to guarantee asymptotic unbiasedness, consistency, and uniform consistency of the estimates. The results are generalized to obtain a simple mean-shift estimate that can be extended in a k-nearest-neighbor approach. Applications of gradient estimation to pattern recognition are presented using clustering and intrinsic dimensionality
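The mean-shift estimate mentioned above can be sketched as a fixed-point iteration with a Gaussian kernel: the point moves to the kernel-weighted average of the data, which follows the estimated density gradient toward a mode. The two-cluster data and bandwidth are illustrative choices, not from the paper.

```python
import numpy as np

def mean_shift_mode(start, data, h, iters=100):
    """Gaussian-kernel mean shift: move the point to the kernel-weighted
    average of the data, ascending the density-gradient estimate."""
    x = np.array(start, dtype=float)
    for _ in range(iters):
        w = np.exp(-0.5 * np.sum((data - x)**2, axis=1) / h**2)
        x_new = (w[:, None] * data).sum(axis=0) / w.sum()
        if np.linalg.norm(x_new - x) < 1e-8:
            break
        x = x_new
    return x

rng = np.random.default_rng(1)
cluster_a = rng.normal([0, 0], 0.3, size=(100, 2))
cluster_b = rng.normal([5, 5], 0.3, size=(100, 2))
data = np.vstack([cluster_a, cluster_b])

mode = mean_shift_mode([1.0, 1.0], data, h=0.5)
print(np.round(mode, 1))   # converges to the density mode near (0, 0)
```

Running the same iteration from many starting points and grouping the converged modes yields the clustering application the abstract refers to.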

KEINOSUKE FUKUNAGA; LARRY D. HOSTETLER

1975-01-01

206

Probabilistic Analysis and Density Parameter Estimation Within Nessus

NASA Technical Reports Server (NTRS)

This NASA educational grant has the goal of promoting probabilistic analysis methods to undergraduate and graduate UTSA engineering students. Two undergraduate-level and one graduate-level course were offered at UTSA providing a large number of students exposure to and experience in probabilistic techniques. The grant provided two research engineers from Southwest Research Institute the opportunity to teach these courses at UTSA, thereby exposing a large number of students to practical applications of probabilistic methods and state-of-the-art computational methods. In classroom activities, students were introduced to the NESSUS computer program, which embodies many algorithms in probabilistic simulation and reliability analysis. Because the NESSUS program is used at UTSA in both student research projects and selected courses, a student version of a NESSUS manual has been revised and improved, with additional example problems being added to expand the scope of the example application problems. This report documents two research accomplishments in the integration of a new sampling algorithm into NESSUS and in the testing of the new algorithm. The new Latin Hypercube Sampling (LHS) subroutines use the latest NESSUS input file format and specific files for writing output. The LHS subroutines are called out early in the program so that no unnecessary calculations are performed. Proper correlation between sets of multidimensional coordinates can be obtained by using NESSUS' LHS capabilities. Finally, two types of correlation are written to the appropriate output file. The program enhancement was tested by repeatedly estimating the mean, standard deviation, and 99th percentile of four different responses using Monte Carlo (MC) and LHS. These test cases, put forth by the Society of Automotive Engineers, are used to compare probabilistic methods. 
For all test cases, it is shown that LHS has a lower estimation error than MC when used to estimate the mean, standard deviation, and 99th percentile of the four responses at the 50 percent confidence level and using the same number of response evaluations for each method. In addition, LHS requires fewer calculations than MC in order to be 99.7 percent confident that a single mean, standard deviation, or 99th percentile estimate will be within at most 3 percent of the true value of each parameter. Again, this is shown for all of the test cases studied. For that reason it can be said that NESSUS is an important reliability tool that offers a variety of sound probabilistic methods; furthermore, the newest LHS module is a valuable enhancement of the program.
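A minimal NumPy sketch of Latin Hypercube Sampling versus plain Monte Carlo; this is not the NESSUS implementation, and the smooth response function is an invented test case.

```python
import numpy as np

def latin_hypercube(n, dims, rng):
    """LHS: one sample per equal-probability stratum in every dimension."""
    strata = np.column_stack([rng.permutation(n) for _ in range(dims)])
    return (strata + rng.uniform(size=(n, dims))) / n

rng = np.random.default_rng(42)
n, dims = 100, 2
lhs = latin_hypercube(n, dims, rng)
mc = rng.uniform(size=(n, dims))          # plain Monte Carlo for comparison

# Each dimension of the LHS design hits every stratum [i/n, (i+1)/n) exactly once
for d in range(dims):
    assert sorted(np.floor(lhs[:, d] * n).astype(int)) == list(range(n))

# Estimating the mean of a smooth response: stratification suppresses variance
f = lambda pts: np.sin(np.pi * pts[:, 0]) + pts[:, 1] ** 2
true_mean = 2 / np.pi + 1 / 3             # exact mean over the unit square
est_lhs, est_mc = f(lhs).mean(), f(mc).mean()
print(abs(est_lhs - true_mean), abs(est_mc - true_mean))
```

The per-dimension stratification is the source of the lower estimation error reported in the text: no stratum of any input distribution goes unsampled, so the sample mean cannot drift the way an unstratified MC mean can.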

Godines, Cody R.; Manteufel, Randall D.; Chamis, Christos C. (Technical Monitor)

2002-01-01

207

Radiation Pressure Detection and Density Estimate for 2011 MD

NASA Astrophysics Data System (ADS)

We present our astrometric observations of the small near-Earth object 2011 MD (H ~ 28.0), obtained after its very close fly-by of Earth in June 2011. Our set of observations extends the observational arc to 73 days and, together with the published astrometry obtained around the Earth fly-by, allows a direct detection of the effect of radiation pressure on the object, with a confidence of 5σ. The detection can be used to put constraints on the density of the object, pointing either to an unexpectedly low value of ρ = (640 ± 330) kg m⁻³ (68% confidence interval), if we assume a typical probability distribution for the unknown albedo, or to an unusually high reflectivity of its surface. This result may have important implications both in terms of impact hazard from small objects and in light of a possible retrieval of this target.

Micheli, Marco; Tholen, David J.; Elliott, Garrett T.

2014-06-01

208

Stand delineation is one of the cornerstones of forest inventory mapping and a key element to spatial aspects in forest management decision making. Stands are forest management units with similarity in attributes such as species composition, density, closure, height and age. Stand boundaries are traditionally estimated through subjective visual air photo interpretation. In this paper, an automatic stand delineation method

F. M. B. Van Coillie; L. P. C. Verbeke; R. R. De Wulf

209

A comparison of 2 techniques for estimating deer density

We applied mark-resight and area-conversion methods to estimate deer abundance at a 2,862-ha area in and surrounding the Gettysburg National Military Park and Eisenhower National Historic Site during 1987-1991. One observer in each of 11 compartments counted marked and unmarked deer during 65-75 minutes at dusk during 3 counts in each of April and November. Use of radio-collars and vinyl collars provided a complete inventory of marked deer in the population prior to the counts. We sighted 54% of the marked deer during April 1987 and 1988, and 43% of the marked deer during November 1987 and 1988. The mean number of deer counted increased from 427 in April 1987 to 582 in April 1991, and from 467 in November 1987 to 662 in November 1990. Herd size during April, based on the mark-resight method, increased from approximately 700 to 1,400 between 1987 and 1991, whereas the estimates for November indicated an increase from 983 in 1987 to 1,592 in 1990. Given the large proportion of open area and the extensive road system throughout the study area, we concluded that the sighting probability for marked and unmarked deer was fairly similar. We believe that the mark-resight method was better suited to our study than the area-conversion method because deer were not evenly distributed between areas suitable and unsuitable for sighting within open and forested areas; the assumption of equal distribution is required by the area-conversion method. Deer marked for the mark-resight method also helped reduce double counting during the dusk surveys.
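The mark-resight logic reduces to a Lincoln-Petersen-style calculation: estimate the sighting probability from the marked animals, then scale up the raw count. The marked-deer total below is hypothetical; only the 54% sighting rate and the April 1987 mean count of 427 echo the abstract.

```python
# Mark-resight abundance sketch (Lincoln-Petersen style), illustrative numbers
marked_total = 100        # hypothetical: collared deer known to be present
marked_seen = 54          # 54% of marked deer sighted during the counts
total_counted = 427       # mean number of deer counted (marked + unmarked)

sighting_prob = marked_seen / marked_total        # p = 0.54
herd_estimate = total_counted / sighting_prob     # N = C / p
print(round(herd_estimate))                       # ~791 deer
```

The key assumption, as the abstract notes, is that marked and unmarked deer share the same sighting probability; the open terrain and road network made that plausible here.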

Storm, G.L.; Cottam, D.F.; Yahner, R.H.; Nichols, J.D.

1977-01-01

210

Magnetocardiographic and body surface potential mapping data measured in 6 patients with multivessel coronary artery disease were used in equivalent current-density estimation (CDE). Patient-specific boundary-element torso models were acquired from magnetic resonance images. Positron emission tomography data registered with anatomical magnetic resonance imaging data provided the gold standard. Discrete current-density estimation values were computed on the epicardial surface of the

Jukka Nenonen; Katja Pesola; Kirsi Lauerma; Panu Takala; Juhani Knuuti; Lauri Toivonen; Toivo Katila

2001-01-01

211

Confident estimation for density of a biological population based on line transect sampling

Line transect sampling is a very useful method in surveys of wildlife populations. Confidence interval estimation for the density D of a biological population is proposed based on a sequential design. The survey area is occupied by the population, whose size is unknown. A stopping rule is proposed via a kernel-based estimator of the density function of the perpendicular data at a

Ren-bin Gong; Yun-bei Ma; Yong Zhou

2010-01-01

212

Density Estimation with Confidence Sets Exemplified by Superclusters and Voids in the Galaxies

A method is presented for forming both a point estimate and a confidence set of semiparametric densities. The final product is a three-dimensional figure that displays a selection of density estimates for a plausible range of smoothing parameters. The boundaries of the smoothing parameter are determined by a nonparametric goodness-of-fit test that is based on the sample spacings. For each

Kathryn Roeder

1990-01-01

213

Density meter algorithm and system for estimating sampling/mixing uncertainty

The Laboratories Department at the Savannah River Plant (SRP) has installed a six-place density meter with an automatic sampling device. This paper describes the statistical software developed to analyze the density of uranyl nitrate solutions using this automated system. The purpose of this software is twofold: to estimate the sampling/mixing and measurement uncertainties in the process and to provide a measurement control program for the density meter. Non-uniformities in density are analyzed both analytically and graphically. The mean density and its limit of error are estimated. Quality control standards are analyzed concurrently with process samples and used to control the density meter measurement error. The analyses are corrected for concentration due to evaporation of samples waiting to be analyzed. The results of this program have been successful in identifying sampling/mixing problems and controlling the quality of analyses.
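The mean-density-plus-limit-of-error computation can be sketched as follows. The replicate values are invented, and treating the "limit of error" as a large-sample 95% bound (1.96 times the standard error) is an assumption; the SRP software's exact convention is not given in the abstract.

```python
import numpy as np

# Hypothetical replicate density measurements (g/mL) of one uranyl nitrate sample
densities = np.array([1.5502, 1.5498, 1.5505, 1.5500, 1.5497, 1.5503])

mean = densities.mean()
se = densities.std(ddof=1) / np.sqrt(len(densities))   # standard error of the mean
limit_of_error = 1.96 * se                             # assumed 95% convention
print(f"{mean:.4f} +/- {limit_of_error:.4f} g/mL")
```

Running the same computation on quality-control standards alongside process samples is what lets the software separate sampling/mixing variability from measurement error.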

Shine, E.P.

1986-01-01

215

In-Shell Bulk Density as an Estimator of Farmers Stock Grade Factors

Technology Transfer Automated Retrieval System (TEKTRAN)

The objective of this research was to determine whether or not bulk density can be used to accurately estimate farmer stock grade factors such as total sound mature kernels and other kernels. Physical properties including bulk density, pod size and kernel size distributions are measured as part of t...

216

The energy density of jellyfish: Estimates from bomb-calorimetry and proximate-composition

Energy densities of three scyphozoan jellyfish (Cyanea capillata, Rhizostoma octopus and Chrysaora hysoscella) were estimated from bomb calorimetry and proximate composition. The implications of these low energy densities for species feeding on jellyfish are discussed.

Hays, Graeme

217

In this study, models for estimating the cell density of isotropic polymeric foams using the surface cell density were developed. The basic morphological unit cell for these models is a gas-filled pentagonal dodecahedral cell cavity. The critical bubble lattice model was introduced to associate the packing structure of the pentagonal dodecahedral cells with a face-centered cubic (FCC) packing structure, and

Piyapong Buahom; Surat Areerat

2011-01-01

220

Sensitivity analysis and density estimation for finite-time ruin probabilities

This problem, motivated by solvency regulations in Europe, is closely related to that of density estimation, since it involves the density functions of infima of reserve processes commonly used in insurance. Keywords: ruin probability, Malliavin calculus, insurance, integration by parts.

Paris-Sud XI, UniversitÃ© de

221

Estimating beaked whale density from single hydrophones by means of propagation modeling

Presentation outline: overview of the DECAF project; Blainville's beaked whales; study area and available acoustic data; how do we estimate

Thomas, Len

222

Density estimation for small mammals from livetrapping grids: rodents in northern Canada

on livetrapping grids with 4 estimators applied to 3 species of boreal forest and 3 species of tundra rodents), and 56 trapping sessions from tundra areas of Herschel Island and Komakuk Beach in northern Yukon (n = 1 to 25 animals/ha). For tundra rodents both boundary-strip methods produced density estimates smaller than

Krebs, Charles J.

223

Technology Transfer Automated Retrieval System (TEKTRAN)

Technical Summary Objectives: Determine the effect of body mass index (BMI) on the accuracy of body density (Db) estimated with skinfold thickness (SFT) measurements compared to air displacement plethysmography (ADP) in adults. Subjects/Methods: We estimated Db with SFT and ADP in 131 healthy men an...

224

Comparison of Fish Density Estimates from Repeated Hydroacoustic Surveys on Two Wyoming Waters

The ability to actively sample fish populations is a major advantage of hydroacoustic assessment. This technique does not affect fish behavior, and it typically produces more precise abundance estimates than do other gears. Thus, hydroacoustic surveys repeated on a closed population should produce similar fish density estimates. We sought to demonstrate this on inland waters using multiplexed side- and down-looking

R. Scott Gangl; Roy A. Whaley

2004-01-01

225

A bound for the smoothing parameter in certain well-known nonparametric density estimators

NASA Technical Reports Server (NTRS)

Two classes of nonparametric density estimators, the histogram and the kernel estimator, both require a choice of smoothing parameter, or 'window width'. The optimum choice of this parameter is in general very difficult. An upper bound to the choices that depends only on the standard deviation of the distribution is described.
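A sketch of how such a standard-deviation-only bound is used in practice, assuming the oversmoothed-bandwidth form from Terrell's later maximal-smoothing work (h ≤ 1.144 σ n^(-1/5) for a Gaussian kernel), compared against Silverman's rule of thumb; the exact constant in this 1980 report may differ.

```python
import numpy as np

rng = np.random.default_rng(7)
x = rng.normal(0.0, 2.0, size=500)
n, sigma = len(x), x.std(ddof=1)

# Oversmoothed upper bound for a Gaussian kernel (Terrell's maximal smoothing):
# h_OS = 3 * (1 / (70 * sqrt(pi)))^(1/5) * sigma * n^(-1/5)  (~ 1.144 sigma n^-0.2)
h_upper = 3 * (1 / (70 * np.sqrt(np.pi)))**0.2 * sigma * n**-0.2

# Silverman's rule of thumb, a common data-driven default, for comparison
h_silverman = 1.06 * sigma * n**-0.2

print(round(h_upper, 3), round(h_silverman, 3))
```

Because the bound depends only on the sample standard deviation, it gives a cheap sanity ceiling: any bandwidth a cross-validation routine proposes above h_upper is oversmoothing by construction.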

Terrell, G. R.

1980-01-01

226

ERIC Educational Resources Information Center

The purpose of this paper is to introduce a new method for fitting item response theory models with the latent population distribution estimated from the data using splines. A spline-based density estimation system provides a flexible alternative to existing procedures that use a normal distribution, or a different functional form, for the…

Woods, Carol M.; Thissen, David

2006-01-01

227

Nonparametric maximum likelihood estimation of probability densities by penalty function methods

NASA Technical Reports Server (NTRS)

Unless it is known a priori exactly to which finite dimensional manifold the probability density function giving rise to a set of samples belongs, the parametric maximum likelihood estimation procedure leads to poor estimates and is unstable, while the nonparametric maximum likelihood procedure is undefined. A very general theory of maximum penalized likelihood estimation which should avoid many of these difficulties is presented. It is demonstrated that each reproducing kernel Hilbert space leads, in a very natural way, to a maximum penalized likelihood estimator and that a well-known class of reproducing kernel Hilbert spaces gives polynomial splines as the nonparametric maximum penalized likelihood estimates.
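In standard form (notation assumed here, not taken from the paper), the maximum penalized likelihood estimator described above solves

```latex
\hat{f} \;=\; \operatorname*{arg\,max}_{f \in H,\; f \ge 0,\; \int f = 1}
\left[\, \sum_{i=1}^{n} \log f(x_i) \;-\; \lambda\, \| f \|_{H}^{2} \,\right]
```

where H is a reproducing kernel Hilbert space, the penalty term is the squared RKHS norm, and λ > 0 trades data fit against smoothness.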

Demontricher, G. F.; Tapia, R. A.; Thompson, J. R.

1974-01-01

228

Estimations of bulk geometrically necessary dislocation density using high resolution EBSD.

Characterizing the content of geometrically necessary dislocations (GNDs) in crystalline materials is crucial to understanding plasticity. Electron backscatter diffraction (EBSD) effectively recovers local crystal orientation, which is used to estimate the lattice distortion, components of the Nye dislocation density tensor (α), and subsequently the local bulk GND density of a material. This paper presents a complementary estimate of bulk GND density using measurements of local lattice curvature and strain gradients from more recent high resolution EBSD (HR-EBSD) methods. A continuum adaptation of classical equations for the distortion around a dislocation is developed and used to simulate random GND fields to validate the various available approximations of GND content. PMID:23751207

Ruggles, T J; Fullwood, D T

2013-10-01

229

A potentially important family of self-similar signals based upon a deterministic scale-invariance characterization is introduced. These signals, which are referred to as 'dy-homogeneous' signals because they generalize the well-known homogeneous functions, have highly convenient representations in terms of orthonormal wavelet bases. In particular, wavelet representations can be exploited to construct orthonormal self-similar bases for these signals. The spectral and fractal

Gregory W. Wornell; Alan V. Oppenheim

1992-01-01

230

Performance evaluation of wavelet-based face verification on a PDA recorded database

NASA Astrophysics Data System (ADS)

The rise of international terrorism and the rapid increase in fraud and identity theft have added urgency to the task of developing biometric-based person identification as a reliable alternative to conventional authentication methods. Human identification based on face images is a tougher challenge than identification based on fingerprints or iris recognition. Yet, due to its unobtrusive nature, face recognition is the preferred method of identification for security-related applications. The success of such systems will depend on the support of massive infrastructures. Current mobile communication devices (3G smart phones) and PDAs are equipped with a camera, which can capture both still images and streaming video clips, and a touch-sensitive display panel. Besides convenience, such devices provide an adequate secure infrastructure for sensitive and financial transactions by protecting against fraud and repudiation while ensuring accountability. Biometric authentication systems for mobile devices would have obvious advantages in conflict scenarios, when communication from beyond enemy lines is essential to save soldier and civilian lives. In areas of conflict or disaster, the luxury of fixed infrastructure is not available or is destroyed. In this paper, we present a wavelet-based face verification scheme that has been specifically designed and implemented on a currently available PDA. We report on its performance on the benchmark audio-visual BANCA database and on a newly developed PDA-recorded audio-visual database that includes indoor and outdoor recordings.

Sellahewa, Harin; Jassim, Sabah A.

2006-05-01

231

Wavelet-based decomposition and analysis of structural patterns in astronomical images

NASA Astrophysics Data System (ADS)

Context. Images of spatially resolved astrophysical objects contain a wealth of morphological and dynamical information, and effectively extracting this information is of paramount importance for understanding the physics and evolution of these objects. The algorithms and methods currently employed for this purpose (such as Gaussian model fitting) often use simplified approaches to describe the structure of resolved objects. Aims: Automated (unsupervised) methods for structure decomposition and tracking of structural patterns are needed in order to treat the complexity of the structures and the large amounts of data involved. Methods: We developed a new wavelet-based image segmentation and evaluation (WISE) method for multiscale decomposition, segmentation, and tracking of structural patterns in astronomical images. Results: The method was tested against simulated images of relativistic jets and applied to data from long-term monitoring of parsec-scale radio jets in 3C 273 and 3C 120. Working at its coarsest resolution, WISE reproduces the previous results of a model-fitting evaluation of the structure and kinematics in these jets exceptionally well. Extending the WISE structure analysis to fine scales provides the first robust measurements of two-dimensional velocity fields in these jets and indicates that the velocity fields probably reflect the evolution of Kelvin-Helmholtz instabilities that develop in the flow.

Mertens, Florent; Lobanov, Andrei

2015-02-01

232

Breast cancer is the most common type of cancer among women and despite recent advances in the medical field, there are still some inherent limitations in the currently used screening techniques. The radiological interpretation of screening X-ray mammograms often leads to over-diagnosis and, as a consequence, to unnecessary traumatic and painful biopsies. Here we propose a computer-aided multifractal analysis of dynamic infrared (IR) imaging as an efficient method for identifying women at risk of breast cancer. Using a wavelet-based multi-scale method to analyze the temporal fluctuations of breast skin temperature collected from a panel of patients with diagnosed breast cancer and some female volunteers with healthy breasts, we show that the multifractal complexity of temperature fluctuations observed in healthy breasts is lost in mammary glands with malignant tumor. Besides potential clinical impact, these results open new perspectives in the investigation of physiological changes that may precede anatomical alterations in breast cancer development. PMID:24860510

Gerasimova, Evgeniya; Audit, Benjamin; Roux, Stephane G.; Khalil, André; Gileva, Olga; Argoul, Françoise; Naimark, Oleg; Arneodo, Alain

2014-01-01

233

Wavelet-based cross-correlation analysis of structure scaling in turbulent clouds

We propose a statistical tool to compare the scaling behaviour of turbulence in pairs of molecular cloud maps. Using artificial maps with well defined spatial properties, we calibrate the method and test its limitations to ultimately apply it to a set of observed maps. We develop the wavelet-based weighted cross-correlation (WWCC) method to study the relative contribution of structures of different sizes and their degree of correlation in two maps as a function of spatial scale, and the mutual displacement of structures in the molecular cloud maps. We test the WWCC for circular structures having a single prominent scale and fractal structures showing a self-similar behavior without prominent scales. Observational noise and a finite map size limit the scales where the cross-correlation coefficients and displacement vectors can be reliably measured. For fractal maps containing many structures on all scales, the limitation from the observational noise is negligible for signal-to-noise ratios >5. (abbrev). Applic...

Arshakian, T G

2015-01-01

234

Wavelet-based double-difference seismic tomography with sparsity regularization

NASA Astrophysics Data System (ADS)

We have developed a wavelet-based double-difference (DD) seismic tomography method. Instead of solving for the velocity model itself, the new method inverts for its wavelet coefficients in the wavelet domain. This method takes advantage of the multiscale property of the wavelet representation and solves the model at different scales. A sparsity constraint is applied to the inversion system to make the set of wavelet coefficients of the velocity model sparse. This reflects the fact that the background velocity variation is generally smooth, and the inversion proceeds in a multiscale way with larger scale features resolved first and finer scale features resolved later, which naturally leads to the sparsity of the wavelet coefficients of the model. The method is both data- and model-adaptive because wavelet coefficients are non-zero in the regions where the model changes abruptly when they are well sampled by ray paths and the model is resolved from coarser to finer scales. An iteratively reweighted least squares procedure is adopted to solve the inversion system with the sparsity regularization. A synthetic test for an idealized fault zone model shows that the new method can better resolve the discontinuous boundaries of the fault zone and the velocity values are also better recovered compared to the original DD tomography method that uses the first-order Tikhonov regularization.

Fang, Hongjian; Zhang, Haijiang

2014-11-01

235

Matrix-free application of Hamiltonian operators in Coifman wavelet bases

NASA Astrophysics Data System (ADS)

A means of evaluating the action of Hamiltonian operators on functions expanded in orthogonal compact support wavelet bases is developed, avoiding the direct construction and storage of operator matrices that complicate extension to coupled multidimensional quantum applications. Application of a potential energy operator is accomplished by simple multiplication of the two sets of expansion coefficients without any convolution. The errors of this coefficient product approximation are quantified and lead to use of particular generalized coiflet bases, derived here, that maximize the number of moment conditions satisfied by the scaling function. This is at the expense of the number of vanishing moments of the wavelet function (approximation order), which appears to be a disadvantage but is shown surmountable. In particular, application of the kinetic energy operator, which is accomplished through the use of one-dimensional (1D) [or at most two-dimensional (2D)] differentiation filters, then degrades in accuracy if the standard choice is made. However, it is determined that use of high-order finite-difference filters yields strongly reduced absolute errors. Eigensolvers that ordinarily use only matrix-vector multiplications, such as the Lanczos algorithm, can then be used with this more efficient procedure. Applications are made to anharmonic vibrational problems: a 1D Morse oscillator, a 2D model of proton transfer, and three-dimensional vibrations of nitrosyl chloride on a global potential energy surface.

Acevedo, Ramiro; Lombardini, Richard; Johnson, Bruce R.

2010-06-01

236

A wavelet-based adaptive fusion algorithm of infrared polarization imaging

NASA Astrophysics Data System (ADS)

The purpose of infrared polarization imaging is to highlight man-made targets against a complex natural background. Because infrared polarization images can clearly distinguish targets from the background using different features, this paper presents a wavelet-based infrared polarization image fusion algorithm. The method mainly processes the high-frequency part of the signal; for the low-frequency part, the conventional weighted-average method is applied. The high-frequency part is processed as follows: first, the high-frequency information of the source images is extracted by wavelet transform; then the signal strength over a 3×3 window is calculated, and the ratio of regional signal intensities between the source images is used as a matching measure. The extraction method and decision mode for the details are determined by a decision-making module, and the fusion quality is closely tied to the threshold set in this module. In place of the commonly used trial-and-error approach, a quadratic interpolation optimization algorithm is proposed to obtain the threshold: the endpoints and midpoint of the threshold search interval are set as initial interpolation nodes, the minimum of the quadratic interpolation function is computed, and comparing these minima yields the best threshold. A series of image quality evaluations shows that the method improves the fusion result, and that it is effective not only for individual images but also for large numbers of images.
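The regional-energy selection rule for the high-frequency part can be sketched as follows, operating directly on two already-computed detail subbands. The min/sum match measure and the 0.7 threshold are illustrative assumptions, not the paper's exact definitions.

```python
import numpy as np

def local_energy(band, radius=1):
    """Sum of squared coefficients over a (2*radius+1)^2 window (3x3
    by default), with edge padding at the borders."""
    padded = np.pad(band ** 2, radius, mode="edge")
    h, w = band.shape
    out = np.zeros((h, w))
    for dy in range(2 * radius + 1):
        for dx in range(2 * radius + 1):
            out += padded[dy:dy + h, dx:dx + w]
    return out

def fuse_details(d1, d2, threshold=0.7):
    """Fuse two high-frequency (detail) subbands: where the regional
    energies are dissimilar, keep the stronger coefficient; where they
    are similar, average them (a common choice, assumed here)."""
    e1, e2 = local_energy(d1), local_energy(d2)
    match = 2.0 * np.minimum(e1, e2) / (e1 + e2 + 1e-12)  # in [0, 1]
    selected = np.where(e1 >= e2, d1, d2)
    return np.where(match < threshold, selected, 0.5 * (d1 + d2))

# Toy subbands: a strong detail present only in the first image.
d1 = np.zeros((5, 5)); d1[2, 2] = 4.0
d2 = np.zeros((5, 5))
fused = fuse_details(d1, d2)
```

In this toy case the isolated detail survives fusion unchanged, because the regional energies disagree strongly and the selection branch wins.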

Yang, Wei; Gu, Guohua; Chen, Qian; Zeng, Haifang

2011-08-01

237

NASA Astrophysics Data System (ADS)

Electrical Impedance Tomography is a soft-field tomography modality, where image reconstruction is formulated as a non-linear least-squares model fitting problem. The Newton-Raphson scheme is used for actually reconstructing the image, and this involves three main steps: forward solving, computation of the Jacobian, and computation of the conductivity update. Forward solving typically relies on the finite element method, resulting in the solution of a sparse linear system. In typical three-dimensional biomedical applications of EIT, like breast, prostate, or brain imaging, it is desirable to work with sufficiently fine meshes in order to properly capture the shape of the domain and of the electrodes, and to describe the resulting electric field with accuracy. These requirements result in meshes with 100,000 nodes or more. The solution of the resulting forward problems is computationally intensive. We address this aspect by speeding up the solution of the FEM linear system by the use of efficient numeric methods and of new hardware architectures. In particular, in terms of numeric methods, we solve the forward problem using the Conjugate Gradient method, with a wavelet-based algebraic multigrid (AMG) preconditioner. This preconditioner is faster to set up than other AMG preconditioners that are not based on wavelets, uses less memory, and provides for faster convergence. We report results for a MATLAB-based prototype algorithm, and we discuss details of work in progress on a GPU implementation.

Borsic, A.; Bayford, R.

2010-04-01

238

Estimation of mechanical properties of panels based on modal density and mean mobility measurements

NASA Astrophysics Data System (ADS)

The mechanical characteristics of wood panels used by instrument makers are related to numerous factors, including the nature of the wood or characteristic of the wood sample (direction of fibers, micro-structure nature). This leads to variations in Young's modulus, the mass density, and the damping coefficients. Existing methods for estimating these parameters are not suitable for instrument makers, mainly because of the need of expensive experimental setups, or complicated protocols, which are not adapted to a daily practice in a workshop. In this paper, a method for estimating Young's modulus, the mass density, and the modal loss factors of flat panels, requiring a few measurement points and an affordable experimental setup, is presented. It is based on the estimation of two characteristic quantities: the modal density and the mean mobility. The modal density is computed from the values of the modal frequencies estimated by the subspace method ESPRIT (Estimation of Signal Parameters via Rotational Invariance Techniques), associated with the signal enumeration technique ESTER (ESTimation of ERror). This modal identification technique is proved to be robust in the low- and the mid-frequency domains, i.e. when the modal overlap factor does not exceed 1. The estimation of the modal parameters also enables the computation of the modal loss factor in the low- and the mid-frequency domains. An experimental fit with the theoretical expressions for the modal density and the mean mobility enables an accurate estimation of Young's modulus and the mass density of flat panels. A numerical and an experimental study show that the method is robust, and that it requires solely a few measurement points.

Elie, Benjamin; Gautier, François; David, Bertrand

2013-11-01

239

Estimation of tiger densities in India using photographic captures and recaptures

Previously applied methods for estimating tiger (Panthera tigris) abundance using total counts based on tracks have proved unreliable. In this paper we use a field method proposed by Karanth (1995), combining camera-trap photography to identify individual tigers based on stripe patterns with capture-recapture estimators. We developed a sampling design for camera-trapping and used the approach to estimate tiger population size and density in four representative tiger habitats in different parts of India. The field method worked well and provided data suitable for analysis using closed capture-recapture models. The results suggest the potential for applying this methodology to estimating abundances, survival rates and other population parameters in tigers and other low-density, secretive animal species with distinctive coat patterns or other external markings. Estimated probabilities of photo-capturing tigers present in the study sites ranged from 0.75 to 1.00. The estimated mean tiger densities ranged from 4.1 (SE = 1.31) to 11.7 (SE = 1.93) tigers/100 km². The results support the previous suggestions of Karanth and Sunquist (1995) that densities of tigers and other large felids may be primarily determined by prey community structure at a given site.
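As a toy illustration of the closed-population capture-recapture idea, here is the simple two-occasion Chapman estimator; the paper itself fits richer closed capture-recapture models, and the numbers below are hypothetical.

```python
def chapman_estimate(n1, n2, m2):
    """Chapman's bias-corrected Lincoln-Petersen estimator for a
    closed population: n1 animals marked on occasion 1, n2 captured
    on occasion 2, m2 of which were already marked."""
    return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1

# Hypothetical counts (not from the paper): 20 tigers photo-captured
# in the first session, 15 in the second, 10 seen in both.
n_hat = chapman_estimate(20, 15, 10)
```

The estimate rises as the recapture fraction falls: few recaptures among many captures imply that the photographed animals are a small sample of a larger population.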

Karanth, U.; Nichols, J.D.

1998-01-01

240

Estimating detection and density of the Andean cat in the high Andes

The Andean cat (Leopardus jacobita) is one of the most endangered, yet least known, felids. Although the Andean cat is considered at risk of extinction, rigorous quantitative population studies are lacking. Because physical observations of the Andean cat are difficult to make in the wild, we used a camera-trapping array to photo-capture individuals. The survey was conducted in northwestern Argentina at an elevation of approximately 4,200 m during October-December 2006 and April-June 2007. In each year we deployed 22 pairs of camera traps, which were strategically placed. To estimate detection probability and density we applied models for spatial capture-recapture using a Bayesian framework. Estimated densities were 0.07 and 0.12 individual/km² for 2006 and 2007, respectively. Mean baseline detection probability was estimated at 0.07. By comparison, densities of the Pampas cat (Leopardus colocolo), another poorly known felid that shares its habitat with the Andean cat, were estimated at 0.74-0.79 individual/km² in the same study area for 2006 and 2007, and its detection probability was estimated at 0.02. Despite having greater detectability, the Andean cat is rarer in the study region than the Pampas cat. Properly accounting for the detection probability is important in making reliable estimates of density, a key parameter in conservation and management decisions for any species. © 2011 American Society of Mammalogists.

Reppucci, J.; Gardner, B.; Lucherini, M.

2011-01-01

241

Estimating detection and density of the Andean cat in the high Andes

The Andean cat (Leopardus jacobita) is one of the most endangered, yet least known, felids. Although the Andean cat is considered at risk of extinction, rigorous quantitative population studies are lacking. Because physical observations of the Andean cat are difficult to make in the wild, we used a camera-trapping array to photo-capture individuals. The survey was conducted in northwestern Argentina at an elevation of approximately 4,200 m during October–December 2006 and April–June 2007. In each year we deployed 22 pairs of camera traps, which were strategically placed. To estimate detection probability and density we applied models for spatial capture–recapture using a Bayesian framework. Estimated densities were 0.07 and 0.12 individual/km² for 2006 and 2007, respectively. Mean baseline detection probability was estimated at 0.07. By comparison, densities of the Pampas cat (Leopardus colocolo), another poorly known felid that shares its habitat with the Andean cat, were estimated at 0.74–0.79 individual/km² in the same study area for 2006 and 2007, and its detection probability was estimated at 0.02. Despite having greater detectability, the Andean cat is rarer in the study region than the Pampas cat. Properly accounting for the detection probability is important in making reliable estimates of density, a key parameter in conservation and management decisions for any species.

Reppucci, Juan; Gardner, Beth; Lucherini, Mauro

2011-01-01

242

Efficient estimation of power spectral density from laser Doppler anemometer data

NASA Astrophysics Data System (ADS)

A non-biased estimator of power spectral density (PSD) is introduced for data obtained from a zeroth order interpolated laser Doppler anemometer (LDA) data set. The systematic error, sometimes referred to as the "particle-rate filter" effect, is removed using an FIR filter parameterized using the mean particle rate. Independent from this, a procedure for estimating the measurement system noise is introduced and applied to the estimated spectra. The spectral estimation is performed in the domain of the autocorrelation function and assumes no further process parameters. The new technique is illustrated using simulated and measured data, in the latter case with direct comparison to simultaneously acquired hot-wire data.
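A minimal sketch of spectral estimation in the autocorrelation domain (the Wiener-Khinchin route) is shown below. It omits the paper's particle-rate filter correction and noise-estimation steps, and the normalization is illustrative rather than the authors' definition.

```python
import numpy as np

def psd_from_acf(x, fs=1.0):
    """PSD estimate obtained by Fourier-transforming the biased sample
    autocorrelation, i.e. working in the autocorrelation domain."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    n = x.size
    # Biased autocorrelation via FFT, zero-padded to avoid wrap-around.
    spec = np.abs(np.fft.rfft(x, n=2 * n)) ** 2
    acf = np.fft.irfft(spec)[:n] / n
    # Transform the autocorrelation back to a (one-sided) spectrum.
    psd = np.abs(np.fft.rfft(acf, n=2 * n)) / fs
    freqs = np.fft.rfftfreq(2 * n, d=1.0 / fs)
    return freqs, psd

fs = 100.0
t = np.arange(2048) / fs
rng = np.random.default_rng(2)
x = np.sin(2 * np.pi * 10.0 * t) + 0.1 * rng.normal(size=t.size)
freqs, psd = psd_from_acf(x, fs)
```

For this noisy 10 Hz sine, the spectrum peaks at 10 Hz, which is a quick sanity check that the autocorrelation route recovers the correct frequency content.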

Nobach, H.; Müller, E.; Tropea, C.

243

Effects of tissue heterogeneity on the optical estimate of breast density

Breast density is a recognized strong and independent risk factor for developing breast cancer. At present, breast density is assessed based on the radiological appearance of breast tissue, thus relying on the use of ionizing radiation. We have previously obtained encouraging preliminary results with our portable instrument for time domain optical mammography performed at 7 wavelengths (635–1060 nm). In that case, information was averaged over four images (cranio-caudal and oblique views of both breasts) available for each subject. In the present work, we tested the effectiveness of just one or a few point measurements, to investigate if tissue heterogeneity significantly affects the correlation between optically derived parameters and mammographic density. Data show that parameters estimated through a single optical measurement correlate strongly with mammographic density estimated by using BIRADS categories. A central position is optimal for the measurement, but its exact location is not critical. PMID:23082283

Taroni, Paola; Pifferi, Antonio; Quarto, Giovanna; Spinelli, Lorenzo; Torricelli, Alessandro; Abbate, Francesca; Balestreri, Nicola; Ganino, Serena; Menna, Simona; Cassano, Enrico; Cubeddu, Rinaldo

2012-01-01

244

Effects of tissue heterogeneity on the optical estimate of breast density.

Breast density is a recognized strong and independent risk factor for developing breast cancer. At present, breast density is assessed based on the radiological appearance of breast tissue, thus relying on the use of ionizing radiation. We have previously obtained encouraging preliminary results with our portable instrument for time domain optical mammography performed at 7 wavelengths (635-1060 nm). In that case, information was averaged over four images (cranio-caudal and oblique views of both breasts) available for each subject. In the present work, we tested the effectiveness of just one or a few point measurements, to investigate if tissue heterogeneity significantly affects the correlation between optically derived parameters and mammographic density. Data show that parameters estimated through a single optical measurement correlate strongly with mammographic density estimated by using BIRADS categories. A central position is optimal for the measurement, but its exact location is not critical. PMID:23082283

Taroni, Paola; Pifferi, Antonio; Quarto, Giovanna; Spinelli, Lorenzo; Torricelli, Alessandro; Abbate, Francesca; Balestreri, Nicola; Ganino, Serena; Menna, Simona; Cassano, Enrico; Cubeddu, Rinaldo

2012-10-01

245

Volumetric Breast Density Estimation from Full-Field Digital Mammograms: A Validation Study

Objectives: To objectively evaluate automatic volumetric breast density assessment in Full-Field Digital Mammograms (FFDM) using measurements obtained from breast Magnetic Resonance Imaging (MRI). Material and Methods: A commercially available method for volumetric breast density estimation on FFDM is evaluated by comparing volume estimates obtained from 186 FFDM exams, including mediolateral oblique (MLO) and cranio-caudal (CC) views, to objective reference standard measurements obtained from MRI. Results: Volumetric measurements obtained from FFDM show high correlation with MRI data. Pearson's correlation coefficients of 0.93, 0.97 and 0.85 were obtained for volumetric breast density, breast volume and fibroglandular tissue volume, respectively. Conclusions: Accurate volumetric breast density assessment is feasible in Full-Field Digital Mammograms and has the potential to be used in objective breast cancer risk models and personalized screening. PMID:24465808

Gubern-Mérida, Albert; Kallenberg, Michiel; Platel, Bram; Mann, Ritse M.; Martí, Robert; Karssemeijer, Nico

2014-01-01

246

An Undecimated Wavelet-based Method for Cochlear Implant Speech Processing

A cochlear implant is an implanted electronic device used to provide a sensation of hearing to a person who is hard of hearing; it is often referred to as a bionic ear. This paper presents a novel undecimated wavelet-based speech coding strategy for cochlear implants. The undecimated wavelet packet transform (UWPT) is computed like the wavelet packet transform except that it does not down-sample the output at each level. The speech data used for the current study consists of 30 consonants, sampled at 16 kbps. The performance of our proposed UWPT method was compared to that of an infinite impulse response (IIR) filter-bank in terms of mean opinion score (MOS), the short-time objective intelligibility (STOI) measure and segmental signal-to-noise ratio (SNR). The undecimated wavelet approach had better segmental SNR in about 96% of the input speech data, and the MOS of the proposed method was twice that of the IIR filter-bank. Statistical analysis revealed that the UWPT-based N-of-M strategy significantly improved the MOS, STOI and segmental SNR (P < 0.001) compared with the IIR filter-bank based strategies. The advantage of the UWPT is that it is shift-invariant, which gives a dense approximation to the continuous wavelet transform. Thus, the information loss is minimal, which is why the UWPT performance was better than that of traditional filter-bank strategies in speech recognition tests. Results showed that the UWPT could be a promising method for speech coding in cochlear implants, although its computational complexity is higher than that of traditional filter-banks. PMID:25426428
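The shift-invariance that motivates the UWPT can be demonstrated with a one-level undecimated Haar transform. This circular, unnormalized variant is a simplified sketch of the "no down-sampling" idea, not the paper's N-of-M coding strategy.

```python
import numpy as np

def undecimated_haar(x):
    """One level of an undecimated (stationary) Haar transform with
    circular boundary handling: no down-sampling, so both outputs
    have the same length as the input."""
    shifted = np.roll(x, 1)
    approx = (x + shifted) / 2.0   # local average (low-pass)
    detail = (x - shifted) / 2.0   # local difference (high-pass)
    return approx, detail

x = np.array([1.0, 2.0, 4.0, 8.0, 8.0, 4.0, 2.0, 1.0])
a1, d1 = undecimated_haar(x)
# Shift the input by one sample: the coefficients shift identically,
# which is exactly the shift-invariance a decimated transform lacks.
a2, d2 = undecimated_haar(np.roll(x, 1))
```

Because no samples are discarded, a shift of the input produces the same shift of the coefficients, and the signal is trivially recoverable (here, approx + detail returns the input).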

Hajiaghababa, Fatemeh; Kermani, Saeed; Marateb, Hamid R.

2014-01-01

247

We developed wavelet-based functional ANOVA (wfANOVA) as a novel approach for comparing neurophysiological signals that are functions of time. Temporal resolution is often sacrificed by analyzing such data in large time bins, increasing statistical power by reducing the number of comparisons. We performed ANOVA in the wavelet domain because differences between curves tend to be represented by a few temporally localized wavelets, which we transformed back to the time domain for visualization. We compared wfANOVA and ANOVA performed in the time domain (tANOVA) on both experimental electromyographic (EMG) signals from responses to perturbation during standing balance across changes in peak perturbation acceleration (3 levels) and velocity (4 levels) and on simulated data with known contrasts. In experimental EMG data, wfANOVA revealed the continuous shape and magnitude of significant differences over time without a priori selection of time bins. However, tANOVA revealed only the largest differences at discontinuous time points, resulting in features with later onsets and shorter durations than those identified using wfANOVA (P < 0.02). Furthermore, wfANOVA required significantly fewer (≈¼×; P < 0.015) significant F tests than tANOVA, resulting in post hoc tests with increased power. In simulated EMG data, wfANOVA identified known contrast curves with a high level of precision (r² = 0.94 ± 0.08) and performed better than tANOVA across noise levels (P << 0.01). Therefore, wfANOVA may be useful for revealing differences in the shape and magnitude of neurophysiological signals (e.g., EMG, firing rates) across multiple conditions with both high temporal resolution and high statistical power. PMID:23100136
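The wfANOVA procedure can be sketched for two conditions using a single-level Haar transform and a fixed F cutoff. Both are simplifying assumptions: the paper uses richer wavelet bases and proper multiple-comparison control rather than a hard-coded threshold.

```python
import numpy as np

def haar_dwt(x):
    """Single-level orthonormal Haar transform along the last axis
    (signal length must be even): output is [approx | detail]."""
    e, o = x[..., 0::2], x[..., 1::2]
    return np.concatenate([e + o, e - o], axis=-1) / np.sqrt(2)

def haar_idwt(c):
    """Inverse of haar_dwt."""
    half = c.shape[-1] // 2
    a, d = c[..., :half], c[..., half:]
    out = np.empty_like(c)
    out[..., 0::2] = (a + d) / np.sqrt(2)
    out[..., 1::2] = (a - d) / np.sqrt(2)
    return out

def wf_anova_contrast(groups, f_crit=10.0):
    """One-way ANOVA per wavelet coefficient; non-significant
    coefficients are zeroed and the surviving contrast (group 2 minus
    group 1) is transformed back to the time domain."""
    coeffs = [haar_dwt(g) for g in groups]            # trials x coeffs
    grand = np.concatenate(coeffs).mean(axis=0)
    k, n = len(coeffs), sum(c.shape[0] for c in coeffs)
    ss_b = sum(c.shape[0] * (c.mean(axis=0) - grand) ** 2 for c in coeffs)
    ss_w = sum(((c - c.mean(axis=0)) ** 2).sum(axis=0) for c in coeffs)
    F = (ss_b / (k - 1)) / (ss_w / (n - k) + 1e-12)
    diff = coeffs[1].mean(axis=0) - coeffs[0].mean(axis=0)
    return haar_idwt(np.where(F > f_crit, diff, 0.0))

# Two conditions differing by a known step over the first half.
rng = np.random.default_rng(3)
step = np.array([1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0])
cond_a = 0.01 * rng.normal(size=(20, 8))
cond_b = step + 0.01 * rng.normal(size=(20, 8))
difference = wf_anova_contrast([cond_a, cond_b])
```

The step difference concentrates in two Haar coefficients, so only two F tests need to pass for the full time-domain contrast to be recovered, which is the power advantage the abstract describes.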

McKay, J. Lucas; Welch, Torrence D. J.; Vidakovic, Brani

2013-01-01

248

NASA Astrophysics Data System (ADS)

Igneous rocks often show evidence for repeated mixing of distinctive magmas and/or redistribution of within-chamber chemical domains. This is expressed by hybridization trends and changes in isotope ratios at the outcrop and crystal scale, composite dikes, crystal transfer fabrics, and flow structures. We will demonstrate the use of Wavelet Based Correlation (WBC) of crystal zoning populations as a means of 'inverting' for the schedule of magma generation, mixing, crystal growth, and eruption in a structured time-stratigraphic framework. WBC is a new tool that uses the Continuous Wavelet Transform (CWT) to characterize zoning profiles, correlation coefficients of select sets of zoning features to describe crystal similarity, and cluster analysis of correlation coefficients to group crystals into populations. The integrating concepts are the notions of spatial proximity, both within and between samples, of statistical groupings of crystals (clusters) that have experienced a similar thermo-chemical environment at some previous time, and their dispersal and gathering to form new families of clusters. This allows for the construction of a crystal-based phylogeny for the magmatic system where mixing and fractionation events can be ordered and recognized as acting in sequence or in parallel, and the vigor and duration of a mixing event can be inferred from particle dispersal, gathering and zoning. CWT decomposition allows direct comparison of specific components of crystal zoning patterns because the locations of individual spectral features are preserved. For example, boundary layer diffusion growth effects, rapid mixing events and pressure changes tend to have small scales. Using WBC, the data can be windowed in scale space to isolate small-scale details in the profile independent of all other scales of features in the profile. Conversely, large-scale features such as fractional crystallization trends can be isolated in the zoning signal. 
WBC can provide a statistical binding point between geochemical and dynamic studies of igneous systems.

Wallace, G. S.; Bergantz, G. W.

2001-12-01

249

Fast and accurate probability density estimation in large high dimensional astronomical datasets

NASA Astrophysics Data System (ADS)

Astronomical surveys will generate measurements of hundreds of attributes (e.g. color, size, shape) on hundreds of millions of sources. Analyzing these large, high dimensional data sets will require efficient algorithms for data analysis. An example of this is probability density estimation, which is at the heart of many classification problems such as the separation of stars and quasars based on their colors. Popular density estimation techniques use binning or kernel density estimation. Kernel density estimation has a small memory footprint but often requires large computational resources. Binning has small computational requirements, but it is usually implemented with multi-dimensional arrays, leading to memory requirements that scale exponentially with the number of dimensions. Hence neither technique scales well to large data sets in high dimensions. We present an alternative approach of binning implemented with hash tables (BASH tables). This approach uses the sparseness of data in the high dimensional space to ensure that the memory requirements are small. However, hashing requires some extra computation, so a priori it is not clear whether the reduction in memory requirements will lead to increased computational requirements. Through an implementation of BASH tables in C++ we show that the additional computational requirements of hashing are negligible. Hence this approach has small memory and computational requirements. We apply our density estimation technique to photometric selection of quasars using non-parametric Bayesian classification and show that the accuracy of the classification is the same as that of earlier approaches. Since the BASH table approach is one to three orders of magnitude faster than the earlier approaches, it may be useful in various other applications of density estimation in astrostatistics.
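The core idea can be sketched as follows (a minimal Python illustration with hypothetical names; the authors' implementation is in C++ and its API is not reproduced here). A hash table keyed on integer bin coordinates stores counts only for occupied bins, so memory grows with the data's support rather than exponentially with dimension:

```python
from collections import defaultdict

def bash_table(points, bin_width):
    """Bin d-dimensional points into a hash table (dict) keyed by
    integer bin coordinates; only occupied bins consume memory."""
    table = defaultdict(int)
    for p in points:
        key = tuple(int(x // bin_width) for x in p)
        table[key] += 1
    return table

def density_at(table, point, bin_width, n_total):
    """Estimated probability density at `point`: count in its bin
    divided by (n * bin volume)."""
    key = tuple(int(x // bin_width) for x in point)
    d = len(point)
    return table[key] / (n_total * bin_width ** d)

# toy usage with 2-D points and bin width 0.5
pts = [(0.1, 0.2), (0.15, 0.25), (0.9, 0.8)]
t = bash_table(pts, 0.5)
print(density_at(t, (0.0, 0.0), 0.5, len(pts)))  # 2 points in bin (0, 0)
```

A 10-dimensional array at 100 bins per axis would need 10^20 cells; the dictionary above needs one entry per occupied bin regardless of dimension, at the cost of hashing each key.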

Gupta, Pramod; Connolly, Andrew J.; Gardner, Jeffrey P.

2015-01-01

250

The accurate quantitation of high density lipoproteins has recently assumed greater importance in view of studies suggesting their negative correlation with coronary heart disease. High density lipoproteins may be estimated by measuring cholesterol in the plasma fraction of d > 1.063 g/ml. A more practical approach is the specific precipitation of apolipoprotein B (apoB)-containing lipoproteins by sulfated

G. Russell Warnick; John J. Albers

251

Trap Array Configuration Influences Estimates and Precision of Black Bear Density and Abundance

Spatial capture-recapture (SCR) models have advanced our ability to estimate population density for wide ranging animals by explicitly incorporating individual movement. Though these models are more robust to various spatial sampling designs, few studies have empirically tested different large-scale trap configurations using SCR models. We investigated how extent of trap coverage and trap spacing affect precision and accuracy of SCR parameters, implementing models using the R package secr. We tested two trapping scenarios, one spatially extensive and one intensive, using black bear (Ursus americanus) DNA data from hair snare arrays in south-central Missouri, USA. We also examined the influence that adding a second, lower barbed-wire strand to snares had on quantity and spatial distribution of detections. We simulated trapping data to test bias in density estimates of each configuration under a range of density and detection parameter values. Field data showed that using multiple arrays with intensive snare coverage produced more detections of more individuals than extensive coverage. Consequently, density and detection parameters were more precise for the intensive design. Density was estimated as 1.7 bears per 100 km2 and was 5.5 times greater than that under extensive sampling. Abundance was 279 (95% CI = 193–406) bears in the 16,812 km2 study area. Excluding detections from the lower strand resulted in the loss of 35 detections, 14 unique bears, and the largest recorded movement between snares. All simulations showed low bias for density under both configurations. Results demonstrated that in low density populations with non-uniform distribution of population density, optimizing the tradeoff among snare spacing, coverage, and sample size is of critical importance to estimating parameters with high precision and accuracy.
With limited resources, allocating available traps to multiple arrays with intensive trap spacing increased the amount of information needed to inform parameters with high precision. PMID:25350557

Wilton, Clay M.; Puckett, Emily E.; Beringer, Jeff; Gardner, Beth; Eggert, Lori S.; Belant, Jerrold L.

2014-01-01

253

Propithecus coquereli is one of the last sifaka species for which no reliable and extensive density estimates are yet available. Despite its endangered conservation status [IUCN, 2012] and recognition as a flagship species of the northwestern dry forests of Madagascar, its population in its last main refugium, the Ankarafantsika National Park (ANP), is still poorly known. Using line transect distance sampling surveys we estimated population density and abundance in the ANP. Furthermore, we investigated the effects of road, forest edge, river proximity and group size on sighting frequencies, and density estimates. We provide here the first population density estimates throughout the ANP. We found that density varied greatly among surveyed sites (from 5 to ~100 ind/km2), which could result from significant (negative) effects of road and forest edge, and/or a (positive) effect of river proximity. Our results also suggest that the population size may be ~47,000 individuals in the ANP, hinting that the population likely underwent a strong decline in some parts of the Park in recent decades, possibly caused by habitat loss from fires and charcoal production and by poaching. We suggest community-based conservation actions for the largest remaining population of Coquerel's sifaka which will (i) maintain forest connectivity; (ii) implement alternatives to deforestation through charcoal production, logging, and grass fires; (iii) reduce poaching; and (iv) enable long-term monitoring of the population in collaboration with local authorities and researchers. PMID:24443250

Kun-Rodrigues, Célia; Salmona, Jordi; Besolo, Aubin; Rasolondraibe, Emmanuel; Rabarivola, Clément; Marques, Tiago A; Chikhi, Lounès

2014-06-01

254

Mid-latitude Ionospheric Storms: Density Gradients, Winds, and Drifts Estimated from GPS TEC Imaging

NASA Astrophysics Data System (ADS)

Ionospheric storm processes at mid-latitudes stand in stark contrast to the typical quiescent behavior. Storm enhanced density (SED) on the dayside affects continent-sized regions horizontally and is often associated with a plume that extends poleward and upward into the nightside. One proposed cause of this behavior is the sub-auroral polarization stream (SAPS) acting on the SED, together with neutral wind effects. The electric field and its effect connecting mid-latitude and polar regions are just beginning to be understood and modeled. Another possible coupling effect is due to neutral winds, particularly those generated at high latitudes by joule heating effects. Of particular interest are electric fields and winds along the boundaries of the SED and plume, because these may be at least partly a cause of sharp horizontal electron density gradients. Thus, it is important to understand what bearing the drifts and winds, and any spatial variations in them (e.g., shear), have on the structure of the enhancement, particularly at its boundaries. Imaging techniques based on GPS TEC play a significant role in the study of storm dynamics, particularly at mid-latitudes, where sampling of the ionosphere with ground-based GPS lines of sight is most dense. Ionospheric Data Assimilation 4-Dimensional (IDA4D) is a plasma density estimation algorithm that has been used in a number of scientific investigations over several years. Recently, efforts to estimate drivers of the mid-latitude ionosphere, focusing on electric-field-induced drifts and neutral winds, based on GPS TEC high-resolution imaging have shown promise. Estimating Model Parameters from Ionospheric Reverse Engineering (EMPIRE) is a tool developed to address this kind of investigation. In this work, electron density and driver estimates are presented for an ionospheric storm using IDA4D in conjunction with EMPIRE.
The IDA4D estimates resolve F-region electron densities at 1-degree resolution at the region of passage of the SED and associated plume. High-resolution imaging is used in conjunction with EMPIRE to deduce the dominant drivers. Starting with a baseline Weimer 2001 electric potential model, adjustments to the Weimer model are estimated for the given storm based on the IDA4D-derived densities to show electric fields associated with the plume. These regional densities and drivers are compared to CHAMP and DMSP data that are proximal for validation. Gradients in electron density are numerically computed over the 1-degree region. These density gradients are correlated with the drift estimates to identify a possible causal relationship in the formation of the boundaries of the SED.

Datta-Barua, S.; Bust, G. S.

2012-12-01

255

A hierarchical model for estimating density in camera-trap studies

1. Estimating animal density using capture–recapture data from arrays of detection devices such as camera traps has been problematic due to the movement of individuals and heterogeneity in capture probability among them induced by differential exposure to trapping. 2. We develop a spatial capture–recapture model for estimating density from camera-trapping data which contains explicit models for the spatial point process governing the distribution of individuals and their exposure to and detection by traps. 3. We adopt a Bayesian approach to analysis of the hierarchical model using the technique of data augmentation. 4. The model is applied to photographic capture–recapture data on tigers Panthera tigris in Nagarahole reserve, India. Using this model, we estimate the density of tigers to be 14.3 animals per 100 km2 during 2004. 5. Synthesis and applications. Our modelling framework largely overcomes several weaknesses in conventional approaches to the estimation of animal density from trap arrays. It effectively deals with key problems such as individual heterogeneity in capture probabilities, movement of traps, presence of potential 'holes' in the array and ad hoc estimation of sample area. The formulation, thus, greatly enhances flexibility in the conduct of field surveys as well as in the analysis of data, from studies that may involve physical, photographic or DNA-based 'captures' of individual animals.

Royle, J.A.; Nichols, J.D.; Karanth, K.U.; Gopalaswamy, A.M.

2009-01-01

256

Hierarchical models for estimating density from DNA mark-recapture studies.

Genetic sampling is increasingly used as a tool by wildlife biologists and managers to estimate abundance and density of species. Typically, DNA is used to identify individuals captured in an array of traps (e.g., baited hair snares) from which individual encounter histories are derived. Standard methods for estimating the size of a closed population can be applied to such data. However, due to the movement of individuals on and off the trapping array during sampling, the area over which individuals are exposed to trapping is unknown, and so obtaining unbiased estimates of density has proved difficult. We propose a hierarchical spatial capture-recapture model which contains explicit models for the spatial point process governing the distribution of individuals and their exposure to (via movement) and detection by traps. Detection probability is modeled as a function of each individual's distance to the trap. We applied this model to a black bear (Ursus americanus) study conducted in 2006 using a hair-snare trap array in the Adirondack region of New York, USA. We estimated the density of bears to be 0.159 bears/km2, which is lower than the estimated density (0.410 bears/km2) based on standard closed population techniques. A Bayesian analysis of the model is fully implemented in the software program WinBUGS. PMID:19449704
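The "detection probability as a function of distance to the trap" idea can be illustrated with the half-normal form commonly used in spatial capture-recapture work (a sketch only; the paper's exact parameterization may differ, and the baseline p0 and scale sigma below are arbitrary values, not estimates from the study):

```python
import math

def detection_prob(activity_center, trap, p0=0.3, sigma=1.5):
    """Half-normal detection model: capture probability declines with
    the distance between an individual's activity center and the trap,
    p(d) = p0 * exp(-d^2 / (2 * sigma^2))."""
    d2 = sum((a - t) ** 2 for a, t in zip(activity_center, trap))
    return p0 * math.exp(-d2 / (2 * sigma ** 2))

# at the trap itself the probability equals the baseline p0
print(detection_prob((0.0, 0.0), (0.0, 0.0)))  # 0.3
```

Because sigma governs how quickly detection decays with distance, it also determines the effective area sampled by the trap array, which is what lets these models produce density rather than just abundance.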

Gardner, Beth; Royle, J Andrew; Wegan, Michael T

2009-04-01

257

New mosquito control strategies centred on modifying populations require knowledge of existing population densities at release sites and an understanding of breeding site ecology. Using a quantitative pupal survey method, we investigated production of the dengue vector Aedes aegypti (L.) (Stegomyia aegypti) (Diptera: Culicidae) in Cairns, Queensland, Australia, and found that garden accoutrements represented the most common container type. Deliberately placed 'sentinel' containers were set at seven houses and sampled for pupae over 10 weeks during the wet season. Pupal production was approximately constant; tyres and buckets represented the most productive container types. Sentinel tyres produced the largest female mosquitoes, but were relatively rare in the field survey. We then used field-collected data to make estimates of per premises population density using three different approaches. Estimates of female Ae. aegypti abundance per premises made using the container-inhabiting mosquito simulation (CIMSiM) model [95% confidence interval (CI) 18.5-29.1 females] concorded reasonably well with estimates obtained using a standing crop calculation based on pupal collections (95% CI 8.8-22.5) and using BG-Sentinel traps and a sampling rate correction factor (95% CI 6.2-35.2). By first describing local Ae. aegypti productivity, we were able to compare three separate population density estimates which provided similar results. We anticipate that this will provide researchers and health officials with several tools with which to make estimates of population densities. PMID:23205694

Williams, C R; Johnson, P H; Ball, T S; Ritchie, S A

2013-09-01

258

Hierarchical models for estimating density from DNA mark-recapture studies

Genetic sampling is increasingly used as a tool by wildlife biologists and managers to estimate abundance and density of species. Typically, DNA is used to identify individuals captured in an array of traps (e.g., baited hair snares) from which individual encounter histories are derived. Standard methods for estimating the size of a closed population can be applied to such data. However, due to the movement of individuals on and off the trapping array during sampling, the area over which individuals are exposed to trapping is unknown, and so obtaining unbiased estimates of density has proved difficult. We propose a hierarchical spatial capture-recapture model which contains explicit models for the spatial point process governing the distribution of individuals and their exposure to (via movement) and detection by traps. Detection probability is modeled as a function of each individual's distance to the trap. We applied this model to a black bear (Ursus americanus) study conducted in 2006 using a hair-snare trap array in the Adirondack region of New York, USA. We estimated the density of bears to be 0.159 bears/km2, which is lower than the estimated density (0.410 bears/km2) based on standard closed population techniques. A Bayesian analysis of the model is fully implemented in the software program WinBUGS.

Gardner, B.; Royle, J.A.; Wegan, M.T.

2009-01-01

259

A Statistical Analysis for Estimating Fish Number Density with the Use of a Multibeam Echosounder

NASA Astrophysics Data System (ADS)

Fish number density can be estimated from the normalized second moment of acoustic backscatter intensity [Denbigh et al., J. Acoust. Soc. Am. 90, 457-469 (1991)]. This method assumes that the distribution of fish scattering amplitudes is known and that the fish are randomly distributed following a Poisson volume distribution within regions of constant density. It is most useful at low fish densities, relative to the resolution of the acoustic device being used, since the estimators quickly become noisy as the number of fish per resolution cell increases. New models that include noise contributions are considered. The methods were applied to an acoustic assessment of juvenile Atlantic Bluefin Tuna, Thunnus thynnus. The data were collected using a 400 kHz multibeam echo sounder during the summer months of 2009 in Cape Cod, MA. Due to the high resolution of the multibeam system used, the large size (approx. 1.5 m) of the tuna, and the spacing of the fish in the school, we expect there to be low fish densities relative to the resolution of the multibeam system. Results of the fish number density based on the normalized second moment of acoustic intensity are compared to fish packing density estimated using aerial imagery that was collected simultaneously.

Schroth-Miller, Madeline L.

260

Using Stopping Rules to Bound the Mean Integrated Squared Error in Density Estimation

Suppose $X_1, X_2, \ldots, X_n$ are i.i.d. with unknown density $f$. There is a well-known expression for the asymptotic mean integrated squared error (MISE) in estimating $f$ by a kernel estimate $\hat{f}_n$, under certain conditions on $f$, the kernel and the bandwidth. Suppose that one would like to choose a sample size so that the MISE is smaller than some preassigned positive number
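For reference, the "well-known expression" alluded to is the standard asymptotic MISE expansion for a second-order kernel $K$ with bandwidth $h$ (stated here from the general kernel-estimation literature, not quoted from the paper):

```latex
\mathrm{AMISE}(h) \;=\; \frac{R(K)}{nh} \;+\; \frac{h^{4}}{4}\,\mu_2(K)^{2}\,R(f''),
\qquad R(g) = \int g(x)^{2}\,dx, \quad \mu_2(K) = \int x^{2} K(x)\,dx,
```

which is minimized at $h^{*} = \bigl[\, R(K) \,/\, \bigl( \mu_2(K)^{2} R(f'')\, n \bigr) \bigr]^{1/5}$, giving the familiar $O(n^{-4/5})$ rate. Since $R(f'')$ is unknown, choosing $n$ to bound the MISE leads naturally to the sequential stopping rules studied in the paper.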

Adam T. Martinsek

1992-01-01

261

Plug-In Two-Stage Normal Density Estimation Under MISE Loss: Unknown Variance

Consider independent observations X1, X2, … having a common normal probability density function f(x; σ) with −∞ < x < ∞ and unknown variance σ² (> 0). We propose to estimate f(x; σ) by a plug-in maximum likelihood (ML) two-stage estimator under the mean integrated squared error (MISE) loss function. Our goal is to make the associated risk not exceed a preassigned positive number c, referred to as the

Nitis Mukhopadhyay; William Pepe

2009-01-01

262

Wavelet-based SAR images despeckling using joint hidden Markov model

NASA Astrophysics Data System (ADS)

In the past few years, wavelet-domain hidden Markov models have proven to be useful tools for statistical signal and image processing. The hidden Markov tree (HMT) model captures the key features of the joint probability density of the wavelet coefficients of real-world data. One potential drawback to the HMT framework is its failure to account for intrascale correlations that exist among neighboring wavelet coefficients. In this paper, we propose to develop a joint hidden Markov model by fusing the wavelet Bayesian denoising technique with an image regularization procedure based on HMT and Markov random field (MRF). The Expectation Maximization algorithm is used to estimate hyperparameters and specify the mixture model. The noise-free wavelet coefficients are finally estimated by a shrinkage function based on local weighted averaging of the Bayesian estimator. It is shown that the joint method outperforms the Lee filter and standard HMT techniques in terms of the integrative measures of equivalent number of looks (ENL) and Pratt's figure of merit (FOM), especially when dealing with speckle noise of large variance.

Li, Qiaoliang; Wang, Guoyou; Liu, Jianguo; Chen, Shaobo

2007-11-01

263

Workplace air is monitored for overall dust levels and for specific components of the dust to determine compliance with occupational and workplace standards established by regulatory bodies for worker health protection. Exposure monitoring studies were conducted by the International Copper Association (ICA) at various industrial facilities around the world working with copper. Individual cascade impactor stages were weighed to determine the total amount of dust collected on the stage, and then the amounts of soluble and insoluble copper and other metals on each stage were determined; speciation was not determined. Filter samples were also collected for scanning electron microscope analysis. Retrospectively, there was an interest in obtaining estimates of alveolar lung burdens of copper in workers engaged in tasks requiring different levels of exertion as reflected by their minute ventilation. However, mechanistic lung dosimetry models estimate alveolar lung burdens based on particle Stokes diameter. In order to use these dosimetry models, the mass-based aerodynamic diameter distribution (which was measured) had to be transformed into a distribution of Stokes diameters, requiring an estimation of individual particle density. This density value was estimated by using cascade impactor data together with scanning electron microscopy data from filter samples. The developed method was applied to ICA monitoring data sets and then the multiple path particle dosimetry (MPPD) model was used to determine the copper alveolar lung burdens for workers with different functional residual capacities engaged in activities requiring a range of minute ventilation levels. PMID:24304308
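The diameter transformation at the heart of this step can be sketched as follows (an idealized illustration, assuming spherical particles and neglecting the slip correction and dynamic shape factor; function names and the example density are hypothetical, not values from the ICA study). For a sphere, the aerodynamic diameter relates to the Stokes diameter via the particle density relative to the unit-density reference:

```python
def stokes_diameter(d_aero_um, particle_density_g_cm3, rho0=1.0):
    """Convert aerodynamic diameter (um) to Stokes diameter (um) for an
    idealized sphere: d_ae = d_s * sqrt(rho_p / rho0), so
    d_s = d_ae * sqrt(rho0 / rho_p). Slip correction is neglected."""
    return d_aero_um * (rho0 / particle_density_g_cm3) ** 0.5

# a dense particle "flies" like a larger unit-density sphere:
# a 4 um aerodynamic diameter at density 4 g/cm3 is a 2 um Stokes sphere
print(stokes_diameter(4.0, 4.0))  # 2.0
```

This is why an estimate of individual particle density is required before a mass-based aerodynamic size distribution can be fed into a Stokes-diameter-based dosimetry model.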

Miller, Frederick J; Kaczmar, Swiatoslav W; Danzeisen, Ruth; Moss, Owen R

2013-12-01

264

NASA Technical Reports Server (NTRS)

A probability density function for the variability of ensemble averaged spectral estimates from helicopter acoustic signals in Gaussian background noise was evaluated. Numerical methods for calculating the density function and for determining confidence limits were explored. Density functions were predicted for both synthesized and experimental data and compared with observed spectral estimate variability.

Garber, Donald P.

1993-01-01

265

Estimation of Density-Dependent Mortality of Juvenile Bivalves in the Wadden Sea

We investigated density-dependent mortality within the early months of life of the bivalves Macoma balthica (Baltic tellin) and Cerastoderma edule (common cockle) in the Wadden Sea. Mortality is thought to be density-dependent in juvenile bivalves, because there is no proportional relationship between the size of the reproductive adult stocks and the numbers of recruits for both species. It is not known however, when exactly density dependence in the pre-recruitment phase occurs and how prevalent it is. The magnitude of recruitment determines year class strength in bivalves. Thus, understanding pre-recruit mortality will improve the understanding of population dynamics. We analyzed count data from three years of temporal sampling during the first months after bivalve settlement at ten transects in the Sylt-Rømø-Bay in the northern German Wadden Sea. Analyses of density dependence are sensitive to bias through measurement error. Measurement error was estimated by bootstrapping, and residual deviances were adjusted by adding process error. With simulations the effect of these two types of error on the estimate of the density-dependent mortality coefficient was investigated. In three out of eight time intervals density dependence was detected for M. balthica, and in zero out of six time intervals for C. edule. Biological or environmental stochastic processes dominated over density dependence at the investigated scale. PMID:25105293

Andresen, Henrike; Strasser, Matthias; van der Meer, Jaap

2014-01-01

267

Estimating food portions. Influence of unit number, meal type and energy density

Estimating how much is appropriate to consume can be difficult, especially for foods presented in multiple units, those with ambiguous energy content and for snacks. This study tested the hypothesis that the number of units (single vs. multi-unit), meal type and food energy density disrupts accurate estimates of portion size. Thirty-two healthy weight men and women attended the laboratory on 3 separate occasions to assess the number of portions contained in 33 foods or beverages of varying energy density (1.7–26.8 kJ/g). Items included 12 multi-unit and 21 single unit foods; 13 were labelled “meal”, 4 “drink” and 16 “snack”. Departures in portion estimates from reference amounts were analysed with negative binomial regression. Overall participants tended to underestimate the number of portions displayed. Males showed greater errors in estimation than females (p = 0.01). Single unit foods and those labelled as ‘meal’ or ‘beverage’ were estimated with greater error than multi-unit and ‘snack’ foods (p = 0.02 and p < 0.001 respectively). The number of portions of high energy density foods was overestimated while the number of portions of beverages and medium energy density foods were underestimated by 30–46%. In conclusion, participants tended to underestimate the reference portion size for a range of food and beverages, especially single unit foods and foods of low energy density and, unexpectedly, overestimated the reference portion of high energy density items. There is a need for better consumer education of appropriate portion sizes to aid adherence to a healthy diet. PMID:23932948

Almiron-Roig, Eva; Solis-Trapala, Ivonne; Dodd, Jessica; Jebb, Susan A.

2013-01-01

268

Identification of the monitoring point density needed to reliably estimate contaminant mass fluxes

NASA Astrophysics Data System (ADS)

Plume monitoring frequently relies on the evaluation of point-scale measurements of concentration at observation wells which are located at control planes or `fences' perpendicular to groundwater flow. Depth-specific concentration values are used to estimate the total mass flux of individual contaminants through the fence. Results of this approach, which is based on spatial interpolation, obviously depend on the density of the measurement points. Our contribution relates the accuracy of mass flux estimation to the point density and, in particular, allows identification of a minimum point density needed to achieve a specified accuracy. In order to establish this relationship, concentration data from fences installed in the coal tar creosote plume at the Borden site are used. These fences are characterized by a rather high density of about 7 points/m2 and it is reasonable to assume that the true mass flux is obtained with this point density. This mass flux is then compared with results for less dense grids down to about 0.1 points/m2. Mass flux estimates obtained for this range of point densities are analyzed by the moving window method in order to reduce purely random fluctuations. For each position of the moving window the mass flux is estimated and the coefficient of variation (CV) is calculated to quantify variability of the results. Thus, the CV provides a relative measure of accuracy in the estimated fluxes. By applying this approach to the Borden naphthalene plume at different times, it is found that the point density changes from sufficient to insufficient due to the temporally decreasing mass flux. By comparing the results for naphthalene and phenol at the same fence and at the same time, we can see that the same grid density might be sufficient for one compound but not for another.
If a rather strict CV criterion of 5% is used, a grid of 7 points/m2 is shown to allow for reliable estimates of the true mass fluxes only in the beginning of plume development, when mass fluxes are high. Long-term data exhibit a very high variation, attributed to the decreasing flux, and a much denser grid would be required to reflect the decreasing mass flux with the same high accuracy. However, a less strict CV criterion of 50% may be acceptable due to uncertainties generally associated with other hydrogeologic parameters. In this case, a point density between 1 and 2 points/m2 is found to be sufficient for a set of five tested chemicals.
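The relationship between grid density and estimation reliability can be illustrated numerically (a hypothetical sketch, not the authors' method: names, the synthetic concentration field, and the use of random subsampling in place of a spatial moving window are all illustrative). Each point concentration contributes c * q * A to the fence flux; thinning the grid makes the flux estimate noisier, which shows up as a larger CV:

```python
import math
import random
import statistics

def mass_flux(concs, q, cell_area):
    """Total mass flux through a fence from point concentrations:
    sum of c_i * q * A_i (q = Darcy flux, A_i = fence area per point)."""
    return sum(c * q * cell_area for c in concs)

def flux_cv(concs, q, cell_area, sample_frac, trials=200, seed=1):
    """CV of flux estimates when only a fraction of the grid points is
    sampled; sparser grids give higher-CV (less reliable) estimates."""
    rng = random.Random(seed)
    k = max(2, int(sample_frac * len(concs)))
    estimates = []
    for _ in range(trials):
        sub = rng.sample(concs, k)
        # each sampled point now represents proportionally more area
        estimates.append(mass_flux(sub, q, cell_area * len(concs) / k))
    return statistics.stdev(estimates) / statistics.mean(estimates)

# heterogeneous "plume" concentrations on a 100-point fence
concs = [abs(math.sin(0.3 * i)) * 10 for i in range(100)]
print(flux_cv(concs, 1.0, 1.0, 0.1) > flux_cv(concs, 1.0, 1.0, 0.8))  # True
```

The same mechanism explains the paper's finding: as the true flux declines over time, the relative scatter among window estimates grows, so a grid that once met a 5% CV criterion no longer does.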

Liedl, R.; Liu, S.; Fraser, M.; Barker, J.

2005-12-01

269

NASA Astrophysics Data System (ADS)

Breast density has been identified to be a risk factor of developing breast cancer and an indicator of lesion diagnostic obstruction due to masking effect. Volumetric density measurement evaluates fibro-glandular volume, breast volume, and breast volume density measures that have potential advantages over area density measurement in risk assessment. One class of volume density computing methods is based on finding the relative fibro-glandular tissue attenuation with regard to the reference fat tissue, and the estimation of the effective x-ray tissue attenuation differences between the fibro-glandular and fat tissue is key to volumetric breast density computing. We have modeled the effective attenuation difference as a function of actual x-ray skin entrance spectrum, breast thickness, fibro-glandular tissue thickness distribution, and detector efficiency. Compared to other approaches, our method has three advantages: (1) it avoids system-calibration-based creation of effective attenuation differences, which may introduce tedious calibrations for each imaging system and may not reflect spectrum change and scatter-induced overestimation or underestimation of breast density; (2) it obtains system-specific separate and differential attenuation values of fibro-glandular and fat tissue for each mammographic image; and (3) it further reduces the impact of breast thickness accuracy on volumetric breast density. A quantitative breast volume phantom with a set of equivalent fibro-glandular thicknesses has been used to evaluate the volume breast density measurement with the proposed method. The experimental results have shown that the method has significantly improved the accuracy of estimating breast density.

Chen, Biao; Ruth, Chris; Jing, Zhenxue; Ren, Baorui; Smith, Andrew; Kshirsagar, Ashwini

2014-03-01

270

Estimating whale density from their whistling activity: Example with St. Lawrence beluga

A passive acoustic method is developed to estimate whale density from their calling activity in a monitored area. The algorithm is applied to a loquacious species, the white whale (Delphinapterus leucas), in Saguenay fjord mouth near Tadoussac, Canada, which is severely affected by shipping noise. Beluga calls were recorded from cabled coastal hydrophones deployed in the basin while the animal

Y. Simard; N. Roy; S. Giard; C. Gervaise; M. Conversano; N. Ménard

2010-01-01

271

A wind energy analysis of Grenada: an estimation using the ‘Weibull’ density function

The Weibull density function has been used to estimate the wind energy potential in Grenada, West Indies. Based on historic recordings of mean hourly wind velocity, this analysis shows the importance of incorporating the variation in wind energy potential during diurnal cycles. Wind energy assessments that are based on a Weibull distribution using average daily/seasonal wind speeds fail to acknowledge that

D Weisser

2003-01-01

272

How bandwidth selection algorithms impact exploratory data analysis using kernel density estimation.

Exploratory data analysis (EDA) can reveal important features of underlying distributions, and these features often have an impact on inferences and conclusions drawn from data. Graphical analysis is central to EDA, and graphical representations of distributions often benefit from smoothing. A viable method of estimating and graphing the underlying density in EDA is kernel density estimation (KDE). This article provides an introduction to KDE and examines alternative methods for specifying the smoothing bandwidth in terms of their ability to recover the true density. We also illustrate the comparison and use of KDE methods with 2 empirical examples. Simulations were carried out in which we compared 8 bandwidth selection methods (Sheather-Jones plug-in [SJDP], normal rule of thumb, Silverman's rule of thumb, least squares cross-validation, biased cross-validation, and 3 adaptive kernel estimators) using 5 true density shapes (standard normal, positively skewed, bimodal, skewed bimodal, and standard lognormal) and 9 sample sizes (15, 25, 50, 75, 100, 250, 500, 1,000, 2,000). Results indicate that, overall, SJDP outperformed all methods. However, for smaller sample sizes (25 to 100) either biased cross-validation or Silverman's rule of thumb was recommended, and for larger sample sizes the adaptive kernel estimator with SJDP was recommended. Information is provided about implementing the recommendations in the R computing language. PMID:24885339
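One of the compared selectors, Silverman's rule of thumb, pairs naturally with a Gaussian KDE and fits in a few lines. This is a minimal sketch for illustration, not the authors' implementation (they point readers to the R language):

```python
import numpy as np

def silverman_bandwidth(x):
    """Silverman's rule of thumb: h = 0.9 * min(sd, IQR/1.34) * n^(-1/5)."""
    n = x.size
    iqr = np.subtract(*np.percentile(x, [75, 25]))
    return 0.9 * min(x.std(ddof=1), iqr / 1.34) * n ** (-0.2)

def kde(x, grid, h):
    """Gaussian kernel density estimate evaluated on `grid`."""
    u = (grid[:, None] - x[None, :]) / h
    return np.exp(-0.5 * u ** 2).sum(axis=1) / (x.size * h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(0)
x = rng.normal(size=500)
grid = np.linspace(-4, 4, 801)
dens = kde(x, grid, silverman_bandwidth(x))
```

As the simulations in the abstract indicate, a fixed-bandwidth rule like this works well for smooth unimodal shapes but can oversmooth skewed or multimodal densities, which is where plug-in and adaptive selectors earn their keep.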

Harpole, Jared K; Woods, Carol M; Rodebaugh, Thomas L; Levinson, Cheri A; Lenze, Eric J

2014-09-01

273

We combined Breeding Bird Survey point count protocol and distance sampling to survey spring migrant and breeding birds in Vicksburg National Military Park on 33 days between March and June of 2003 and 2004. For 26 of 106 detected species, we used program DISTANCE to estimate detection probabilities and densities

Scott G. Somershoe; Daniel J. Twedt; Bruce Reid

2006-01-01

274

Empirical Testing of Fast Kernel Density Estimation Algorithms

We present results of experiments testing the Fast Gauss Transform and other fast kernel density estimation methods; each of the methods that we test is briefly summarized

de Freitas, Nando

275

Estimating the effect of Earth elasticity and variable water density on tsunami speeds

Revised 25 December 2012; accepted 7 January 2013; published 13 February 2013. The speed of tsunami ... comparisons of tsunami arrival times from the 11 March 2011 tsunami suggest, however, that the standard

Tsai, Victor C.

276

Did the middle class shrink during the 1980s? UK evidence from kernel density estimates

This paper proposes using kernel density estimation methods to investigate the shrinking middle class hypothesis. The approach reveals striking new evidence of changes in the concentration of middle incomes in the United Kingdom during the 1980s. Breakdowns by family economic status demonstrate that a major cause of the aggregate changes was a moving apart of the income distributions for working

Stephen P. Jenkins

1995-01-01

277

A generalized single linkage method for estimating the cluster tree of a density

The goal of cluster analysis is to detect the presence of distinct groups in a data set and assign group labels to the observations. We illustrate the approach on several examples. Keywords: cluster analysis, level set, single linkage clustering, excess mass. Nonparametric

Washington at Seattle, University of

278

Technology Transfer Automated Retrieval System (TEKTRAN)

Hydrologic and morphological properties of claypan landscapes cause variability in soybean root and shoot biomass. This study was conducted to develop predictive models of soybean root length density distribution (RLDd) using direct measurements and sensor based estimators of claypan morphology. A c...

279

Brain tumor cell density estimation from multi-modal MR images based on a synthetic

This paper proposes to employ a detailed tumor growth model to synthesize labelled images which can then be used to train an efficient data-driven machine learning tumor predictor. Our MR image

Prastawa, Marcel

280

Reliable estimates of great ape abundance are needed to assess distribution and to monitor populations during a defined period. We compared orangutan densities calculated by the two methods using data from movement. To produce reliable results, the MNC method may require a similar amount of effort as the SCNC

281

Estimated number of women likely to benefit from bone mineral density measurement in France

Estimated number of women likely to benefit from bone mineral density measurement in France Nassira avenue Lacassagne, 69003, Lyon, France b GH Pitié-Salpêtrière, unité Inserm U 360, 75651, Paris cedex 13, France c Institut Gustave Roussy, unité Inserm XR 521, 94805, Villejuif cedex, France d Unité Inserm U

Paris-Sud XI, Université de

282

A Patient-Specific Coronary Density Estimate (R. Shahzad)

with coronary density fields. The steps towards building these atlases include atlas selection and centreline mapping ... estimate using CTA atlas registration. The method is evaluated by quantifying the overlap of the obtained annotations for 170 CT datasets. Index Terms: calcium score, coronary arteries, CT, CTA, atlas, image

van Vliet, Lucas J.

283

A hybrid approach to crowd density estimation using statistical learning and texture classification

NASA Astrophysics Data System (ADS)

Crowd density estimation is a hot topic in the computer vision community. Established algorithms for crowd density estimation mainly focus on moving crowds, employing background modeling to obtain crowd blobs. However, people's motion is not obvious in many settings, such as an airport waiting hall or a railway station lobby. Moreover, conventional algorithms for crowd density estimation cannot yield desirable results for all levels of crowding, due to occlusion and clutter. We propose a hybrid method to address these problems. First, statistical learning is introduced for background subtraction, comprising a training phase and a test phase. The crowd images are gridded into small blocks, each denoting foreground or background. HOG features are then extracted from each block and fed into a binary SVM. Crowd blobs are thus obtained from the classification results of the trained classifier. Second, the crowd images are treated as texture images, so the estimation problem can be formulated as texture classification, and the density level is derived from the classification results. We validate the proposed algorithm on real scenarios where the crowd motion is not obvious. Experimental results demonstrate that our approach obtains the foreground crowd blobs accurately and works well for different levels of crowding.
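The block-gridding step can be sketched with a simplified HOG-style feature: split the image into cells and build a gradient-orientation histogram per cell. This is an illustrative stand-in for the full HOG descriptor (no overlapping blocks or block normalization), and the SVM stage is omitted; block size and bin count are arbitrary choices.

```python
import numpy as np

def block_orientation_histograms(img, block=8, nbins=9):
    """Grid `img` into block x block cells and compute a gradient
    orientation histogram per cell (a simplified HOG-style feature)."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)          # unsigned orientation
    h, w = img.shape
    feats = []
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            a = ang[r:r + block, c:c + block].ravel()
            m = mag[r:r + block, c:c + block].ravel()
            hist, _ = np.histogram(a, bins=nbins, range=(0, np.pi), weights=m)
            feats.append(hist / (hist.sum() + 1e-12))  # L1-normalise
    return np.array(feats)

img = np.zeros((32, 32)); img[:, 16:] = 1.0           # vertical edge
F = block_orientation_histograms(img)                 # one row per cell
```

In the described pipeline each per-cell feature vector would then be scored by the trained binary SVM to label the cell as foreground or background.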

Li, Yin; Zhou, Bowen

2013-12-01

284

Production of and consumption by hatchery-reared fingerling (age-0) smallmouth bass Micropterus dolomieu at various simulated stocking densities were estimated with a bioenergetics model. Fish growth rates and pond water temperatures during the 1996 growing season at two hatcheries in Oklahoma were used in the model. Fish growth and simulated consumption and production differed greatly between the two hatcheries, probably because of differences in pond fertilization and mortality rates. Our results suggest that appropriate stocking density depends largely on prey availability as affected by pond fertilization and on fingerling mortality rates. The bioenergetics model provided a useful tool for estimating production at various stocking densities. However, verification of physiological parameters for age-0 fish of hatchery-reared species is needed.

Robel, G.L.; Fisher, W.L.

1999-01-01

285

Population density estimated from locations of individuals on a passive detector array

The density of a closed population of animals occupying stable home ranges may be estimated from detections of individuals on an array of detectors, using newly developed methods for spatially explicit capture–recapture. Likelihood-based methods provide estimates for data from multi-catch traps or from devices that record presence without restricting animal movement ("proximity" detectors such as camera traps and hair snags). As originally proposed, these methods require multiple sampling intervals. We show that equally precise and unbiased estimates may be obtained from a single sampling interval, using only the spatial pattern of detections. This considerably extends the range of possible applications, and we illustrate the potential by estimating density from simulated detections of bird vocalizations on a microphone array. Acoustic detection can be defined as occurring when received signal strength exceeds a threshold. We suggest detection models for binary acoustic data, and for continuous data comprising measurements of all signals above the threshold. While binary data are often sufficient for density estimation, modeling signal strength improves precision when the microphone array is small.
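A signal-strength detection model of the kind suggested can be sketched as follows: expected received strength declines with distance, and a detection occurs when the (noisy) strength exceeds the threshold. All parameter values here (source level b0, attenuation slope b1, threshold c, noise sdev) are hypothetical illustrations, not values from the paper.

```python
import numpy as np
from math import erf, sqrt

def p_detect(d, b0=80.0, b1=-0.6, c=35.0, sdev=5.0):
    """P(received signal strength exceeds threshold c) when strength
    is modeled as S ~ Normal(b0 + b1*d, sdev^2) at distance d."""
    z = (b0 + b1 * d - c) / sdev
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Detection probability falls off smoothly with distance from the microphone.
probs = [p_detect(d) for d in (0.0, 50.0, 100.0, 150.0)]
```

Under such a model, binary exceed/not-exceed data identify the detection function; recording the actual strengths of above-threshold signals adds information, which is why the abstract reports better precision for small arrays when signal strength is modeled.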

Efford, Murray G.; Dawson, Deanna K.; Borchers, David L.

2009-01-01

286

NASA Astrophysics Data System (ADS)

In this work, we investigate the statistical computation of the Boltzmann entropy of statistical samples. For this purpose, we use both histograms and kernel functions to estimate the probability density function of statistical samples. We find that, due to coarse-graining, the entropy is a monotonically increasing function of the bin width for histograms or of the bandwidth for kernel estimation, which makes it difficult to select an optimal bin width or bandwidth for computing the entropy. Fortunately, we notice that there exists a minimum of the first derivative of the entropy for both histogram and kernel estimation, and this minimum asymptotically points to the optimal bin width or bandwidth. We have verified these findings with extensive numerical experiments. Hence, we suggest that the minimum of the first derivative of the entropy be used as a selector for the optimal bin width or bandwidth of density estimation. Moreover, the optimal bandwidth selected in this way is purely data-based, independent of the unknown underlying probability density distribution, which is clearly superior to existing estimators. Our results are not restricted to one-dimensional data, but can also be extended to multivariate cases. It should be emphasized, however, that we do not provide a rigorous mathematical proof of these findings, and we leave these issues to those who are interested in them.
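The histogram version of the proposed selector can be sketched numerically: compute the entropy estimate over a range of bin widths, differentiate with respect to the width, and take the width at the minimum of that first derivative. The width grid below is an arbitrary choice for illustration.

```python
import numpy as np

def hist_entropy(x, width):
    """Histogram estimate of the differential entropy for bin width `width`."""
    lo, hi = x.min(), x.max()
    nbins = max(1, int(np.ceil((hi - lo) / width)))
    counts, _ = np.histogram(x, bins=nbins)
    p = counts[counts > 0] / x.size
    return -(p * np.log(p / width)).sum()

def select_width(x, widths):
    """Bin width at the minimum of dS/dwidth (the proposed selector)."""
    s = np.array([hist_entropy(x, w) for w in widths])
    ds = np.gradient(s, widths)
    return widths[np.argmin(ds)]

rng = np.random.default_rng(1)
x = rng.normal(size=2000)
widths = np.linspace(0.02, 1.0, 50)
w_opt = select_width(x, widths)
```

As the abstract stresses, the selector uses only the data (entropy values over a width grid), with no reference to the unknown underlying density.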

Sui, Ning; Li, Min; He, Ping

2014-12-01

287

Density estimation of small-mammal populations using a trapping web and distance sampling methods

Distance sampling methodology is adapted to enable animal density (number per unit of area) to be estimated from capture-recapture and removal data. A trapping web design provides the link between capture data and distance sampling theory. The estimator of density is D = M_{t+1} f(0), where M_{t+1} is the number of individuals captured and f(0) is computed from the M_{t+1} distances from the web center to the traps in which those individuals were first captured. It is possible to check qualitatively the critical assumption on which the web design and the estimator are based. This is a conceptual paper outlining a new methodology, not a definitive investigation of the best specific way to implement this method. Several alternative sampling and analysis methods are possible within the general framework of distance sampling theory; a few alternatives are discussed and an example is given.

Anderson, David R.; Burnham, Kenneth P.; White, Gary C.; Otis, David L.

1983-01-01

288

A Wiener-Wavelet-Based filter for de-noising satellite soil moisture retrievals

NASA Astrophysics Data System (ADS)

The reduction of noise in microwave satellite soil moisture (SM) retrievals is of paramount importance for practical applications, especially those associated with the study of climate change, droughts, floods and other related hydrological processes. So far, Fourier-based methods have been used for de-noising satellite SM retrievals by filtering either the observed emissivity time series (Du, 2012) or the retrieved SM observations (Su et al., 2013). This contribution introduces an alternative approach based on a Wiener-Wavelet-Based filtering (WWB) technique, which uses the Entropy-Based Wavelet de-noising method developed by Sang et al. (2009) to design both a causal and a non-causal version of the filter. WWB is used as a post-retrieval processing tool to enhance the quality of observations derived from (i) the Advanced Microwave Scanning Radiometer for the Earth observing system (AMSR-E), (ii) the Advanced SCATterometer (ASCAT), and (iii) the Soil Moisture and Ocean Salinity (SMOS) satellite. The method is tested on three pilot sites located in Spain (Remedhus network), Greece (Hydrological Observatory of Athens) and Australia (Oznet network). Different quantitative criteria are used to judge the goodness of the de-noising technique. Results show that WWB (i) improves both the correlation and the root mean squared differences between satellite retrievals and in situ soil moisture observations, and (ii) effectively separates random noise from the deterministic components of the retrieved signals. Moreover, the use of WWB de-noised data in place of raw observations within a hydrological application confirms the usefulness of the proposed filtering technique. References: Du, J. (2012), A method to improve satellite soil moisture retrievals based on Fourier analysis, Geophys. Res. Lett., 39, L15404, doi:10.1029/2012GL052435. Su, C.-H., D. Ryu, A. W. Western, and W. Wagner (2013), De-noising of passive and active microwave satellite soil moisture time series, Geophys. Res. Lett., 40, 3624-3630, doi:10.1002/grl.50695. Sang, Y.-F., D. Wang, J.-C. Wu, Q.-P. Zhu, and L. Wang (2009), Entropy-Based Wavelet De-noising Method for Time Series Analysis, Entropy, 11, pp. 1123-1148, doi:10.3390/e11041123.
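The full entropy-based WWB filter is beyond a short snippet, but the core wavelet de-noising idea can be illustrated with a hand-rolled one-level Haar transform and soft thresholding of the detail coefficients. This is an illustrative stand-in under simple assumptions, not the Sang et al. (2009) method; the threshold value is arbitrary.

```python
import numpy as np

def haar_denoise(x, thresh):
    """One-level Haar transform with soft-thresholding of the detail
    coefficients, then inverse transform."""
    n = len(x) - len(x) % 2
    a = (x[0:n:2] + x[1:n:2]) / np.sqrt(2)                  # approximation
    d = (x[0:n:2] - x[1:n:2]) / np.sqrt(2)                  # detail
    d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)    # soft threshold
    out = np.empty(n)
    out[0::2] = (a + d) / np.sqrt(2)
    out[1::2] = (a - d) / np.sqrt(2)
    return out

rng = np.random.default_rng(2)
t = np.linspace(0, 1, 256)
clean = np.sin(2 * np.pi * t)                # slowly varying "signal"
noisy = clean + 0.3 * rng.normal(size=t.size)
den = haar_denoise(noisy, thresh=0.3)
```

Because a smooth signal concentrates in the approximation band while white noise spreads evenly across bands, shrinking the detail coefficients removes noise energy with little damage to the signal, which is the mechanism the abstract describes as separating random noise from deterministic components.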

Massari, Christian; Brocca, Luca; Ciabatta, Luca; Moramarco, Tommaso; Su, Chun-Hsu; Ryu, Dongryeol; Wagner, Wolfgang

2014-05-01

289

Reader Variability in Breast Density Estimation from Full-Field Digital Mammograms

Rationale and Objectives Mammographic breast density, a strong risk factor for breast cancer, may be measured as either a relative percentage of dense (ie, radiopaque) breast tissue or as an absolute area from either raw (ie, “for processing”) or vendor postprocessed (ie, “for presentation”) digital mammograms. Given the increasing interest in the incorporation of mammographic density in breast cancer risk assessment, the purpose of this study is to determine the inherent reader variability in breast density assessment from raw and vendor-processed digital mammograms, because inconsistent estimates could lead to misclassification of an individual woman’s risk for breast cancer. Materials and Methods Bilateral, mediolateral-oblique view, raw, and processed digital mammograms of 81 women were retrospectively collected for this study (N = 324 images). Mammographic percent density and absolute dense tissue area estimates for each image were obtained from two radiologists using a validated, interactive software tool. Results The variability of interreader agreement was not found to be affected by the image presentation style (ie, raw or processed; F-test: P > .5). Interreader estimates of relative and absolute breast density are strongly correlated (Pearson r > 0.84, P < .001) but systematically different (t-test, P < .001) between the two readers. Conclusion Our results show that mammographic density may be assessed with equal reliability from either raw or vendor postprocessed images. Furthermore, our results suggest that the primary source of density variability comes from the subjectivity of the individual reader in assessing the absolute amount of dense tissue present in the breast, indicating the need to use standardized tools to mitigate this effect. PMID:23465381

Keller, Brad M.; Nathan, Diane L.; Gavenonis, Sara C.; Chen, Jinbo; Conant, Emily F.; Kontos, Despina

2013-01-01

290

Calculation of absolute spectral densities via stochastic estimators of tr{δ(E - Ĥ)}

NASA Astrophysics Data System (ADS)

The calculation of absolute vibrational spectral densities, tr{δ(E - Ĥ)}, is investigated utilizing the stochastic trace estimator technique of Hutchinson. The spectral density is evaluated by a Monte Carlo scheme in which random vectors are sequentially sampled and their spectral density profiles computed and averaged. The requisite matrix elements of δ(E - Ĥ) are evaluated using a Lanczos projection algorithm. The issue of distinguishing degenerate and replicated eigenvalues generated by the Lanczos algorithm is addressed and can be overcome using a recently developed filter diagonalization scheme. The resulting method is simple, efficient and converges the density of states remarkably quickly for dense spectra. Illustrative calculations are presented for one- and two-dimensional test cases and finally for nitrogen dioxide in the energy range 0-12,000 cm⁻¹ using the V11 diabatic surface of Hirsch et al.

Jeffrey, Stephen J.; Smith, Sean C.

1997-10-01

291

Non-Gaussian probabilistic MEG source localisation based on kernel density estimation

There is strong evidence to suggest that data recorded from magnetoencephalography (MEG) follows a non-Gaussian distribution. However, existing standard methods for source localisation model the data using only second order statistics, and therefore use the inherent assumption of a Gaussian distribution. In this paper, we present a new general method for non-Gaussian source estimation of stationary signals for localising brain activity from MEG data. By providing a Bayesian formulation for MEG source localisation, we show that the source probability density function (pdf), which is not necessarily Gaussian, can be estimated using multivariate kernel density estimators. In the case of Gaussian data, the solution of the method is equivalent to that of widely used linearly constrained minimum variance (LCMV) beamformer. The method is also extended to handle data with highly correlated sources using the marginal distribution of the estimated joint distribution, which, in the case of Gaussian measurements, corresponds to the null-beamformer. The proposed non-Gaussian source localisation approach is shown to give better spatial estimates than the LCMV beamformer, both in simulations incorporating non-Gaussian signals, and in real MEG measurements of auditory and visual evoked responses, where the highly correlated sources are known to be difficult to estimate. PMID:24055702

Mohseni, Hamid R.; Kringelbach, Morten L.; Woolrich, Mark W.; Baker, Adam; Aziz, Tipu Z.; Probert-Smith, Penny

2014-01-01

292

NASA Astrophysics Data System (ADS)

Understanding streamflow variability and the ability to generate realistic scenarios at multi-decadal time scales is important for robust water resources planning and management in any river basin - more so in the Colorado River Basin, with its semi-arid climate and highly stressed water resources. It is increasingly evident that large-scale climate forcings such as the El Nino Southern Oscillation (ENSO), the Pacific Decadal Oscillation (PDO) and the Atlantic Multi-decadal Oscillation (AMO) modulate Colorado River Basin hydrology at multi-decadal time scales. Thus, modeling these large-scale climate indicators is important for conditionally modeling the multi-decadal streamflow variability. To this end, we developed a simulation model that combines a wavelet-based time series method, Wavelet Auto Regressive Moving Average (WARMA), with a K-nearest neighbor (K-NN) bootstrap approach. In this approach, for a given time series (climate forcings), dominant periodicities/frequency bands are identified from the wavelet spectrum as those that pass the 90% significance test. The time series is filtered at these frequencies in each band to create 'components'; the components are orthogonal and, when added to the residual (i.e., noise), reproduce the original time series. The components, being smooth, are easily modeled using parsimonious Auto Regressive Moving Average (ARMA) time series models. The fitted ARMA models are used to simulate the individual components, which are added to obtain a simulation of the original series. The WARMA approach is applied to all the climate forcing indicators, which are used to simulate multi-decadal sequences of these forcings. For the current year, the simulated forcings are considered the 'feature vector' and its K nearest neighbors are identified; one of the neighbors (i.e., one of the historical years) is resampled using a weighted probability metric (with more weight given to the nearest neighbor and least to the farthest), and the corresponding streamflow is the simulated value for the current year. We applied this simulation approach to the climate indicators and streamflow at Lees Ferry, AZ, a key gauge on the Colorado River, using observational and paleo data together spanning 1650-2005. A suite of distributional statistics, such as the probability density function (PDF), mean, variance, skew and lag-1 autocorrelation, along with higher-order and multi-decadal statistics such as spectra and drought and surplus statistics, are computed to check the performance of the flow simulation in capturing the variability of the historic and paleo periods. Our results indicate that this approach robustly reproduces all of the above-mentioned statistical properties. This offers an attractive alternative for near-term (interannual to multi-decadal) flow simulation that is critical for water resources planning.
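The K-NN bootstrap step described above can be sketched in a few lines: find the k historical years whose climate-index feature vectors lie nearest the simulated one, then resample a year with weights decaying by neighbor rank. The 1/rank weighting is one common choice (an assumption here, not necessarily the paper's exact kernel), and the data are synthetic.

```python
import numpy as np

def knn_resample_year(feature, candidates, k=5, rng=None):
    """Pick one historical year: find the k nearest feature vectors and
    resample with weights proportional to 1/rank (nearest weighted most)."""
    if rng is None:
        rng = np.random.default_rng()
    dist = np.linalg.norm(candidates - feature, axis=1)
    order = np.argsort(dist)[:k]              # indices of the k nearest years
    w = 1.0 / np.arange(1, k + 1)
    w /= w.sum()                              # normalised rank weights
    return order[rng.choice(k, p=w)]

rng = np.random.default_rng(3)
hist_feats = rng.normal(size=(50, 3))         # 50 historical years, 3 indices
year = knn_resample_year(hist_feats[10] + 0.01, hist_feats, k=5, rng=rng)
```

The streamflow observed in the resampled year then becomes the simulated flow for the current year, so the simulator never generates flows outside the envelope of historical/paleo values while still respecting the simulated climate state.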

Erkyihun, S. T.

2013-12-01

293

Estimating density dependence in time-series of age-structured populations.

For a life history with age at maturity α, and stochasticity and density dependence in adult recruitment and mortality, we derive a linearized autoregressive equation with time-lags of 1 to α years. Contrary to current interpretations, the coefficients for different time-lags in the autoregressive dynamics do not simply measure delayed density dependence, but also depend on life-history parameters. We define a new measure of total density dependence in a life history, D, as the negative elasticity of population growth rate per generation with respect to change in population size, D = -∂ln(λ^T)/∂ln N, where λ is the asymptotic multiplicative growth rate per year, T is the generation time and N is adult population size. We show that D can be estimated from the sum of the autoregression coefficients. We estimated D in populations of six avian species for which life-history data and unusually long time-series of complete population censuses were available. Estimates of D were of the order of 1 or higher, indicating strong, statistically significant density dependence in four of the six species. PMID:12396510
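The estimation route sketched in the abstract starts from a least-squares fit of the autoregression, whose coefficients are then combined into D. The exact mapping from the coefficient sum to D is given in the paper; the sketch below only recovers the coefficients and their sum from a synthetic AR(2) series.

```python
import numpy as np

def ar_coefficients(x, p):
    """Least-squares fit of an AR(p) model to a mean-removed series."""
    x = np.asarray(x, float) - np.mean(x)
    X = np.column_stack([x[p - j - 1:len(x) - j - 1] for j in range(p)])
    coef, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
    return coef

# Synthetic AR(2) series with known coefficients (stationary: 0.5 + 0.2 < 1).
rng = np.random.default_rng(4)
b_true = (0.5, 0.2)
x = np.zeros(3000)
for t in range(2, x.size):
    x[t] = b_true[0] * x[t - 1] + b_true[1] * x[t - 2] + rng.normal()

b_hat = ar_coefficients(x, 2)
coef_sum = b_hat.sum()    # the quantity entering the estimate of D
```

With a series this long the coefficients are recovered closely; the abstract's caution applies to real censuses, where the same coefficients also absorb life-history effects and must not be read as pure delayed density dependence.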

Lande, R; Engen, S; Saether, B-E

2002-01-01

294

RS-Forest: A Rapid Density Estimator for Streaming Anomaly Detection

Anomaly detection in streaming data is of high interest in numerous application domains. In this paper, we propose a novel one-class semi-supervised algorithm to detect anomalies in streaming data. Underlying the algorithm is a fast and accurate density estimator implemented by multiple fully randomized space trees (RS-Trees), named RS-Forest. The piecewise constant density estimate of each RS-tree is defined on the tree node into which an instance falls. Each incoming instance in a data stream is scored by the density estimates averaged over all trees in the forest. Two strategies, statistical attribute range estimation of high probability guarantee and dual node profiles for rapid model update, are seamlessly integrated into RS-Forest to systematically address the ever-evolving nature of data streams. We derive the theoretical upper bound for the proposed algorithm and analyze its asymptotic properties via bias-variance decomposition. Empirical comparisons to the state-of-the-art methods on multiple benchmark datasets demonstrate that the proposed method features high detection rate, fast response, and insensitivity to most of the parameter settings. Algorithm implementations and datasets are available upon request. PMID:25685112
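A single fully randomized space tree with a piecewise-constant density estimate can be sketched as below. This is a toy, batch-mode illustration of the idea (leaf density = count / (n x leaf volume)); the actual RS-Forest averages many trees and uses dual node profiles for streaming updates, which are omitted here.

```python
import numpy as np

def build_rs_tree(lo, hi, depth, rng):
    """Fully randomized space tree: split a random dimension at a random
    point until `depth` reaches 0; leaves store their bounding-box volume."""
    if depth == 0:
        return {"leaf": True, "vol": float(np.prod(hi - lo)), "count": 0}
    dim = rng.integers(lo.size)
    cut = rng.uniform(lo[dim], hi[dim])
    lhi, rlo = hi.copy(), lo.copy()
    lhi[dim], rlo[dim] = cut, cut
    return {"leaf": False, "dim": dim, "cut": cut,
            "left": build_rs_tree(lo, lhi, depth - 1, rng),
            "right": build_rs_tree(rlo, hi, depth - 1, rng)}

def leaf_of(node, x):
    while not node["leaf"]:
        node = node["left"] if x[node["dim"]] < node["cut"] else node["right"]
    return node

def fit(tree, data):
    for x in data:
        leaf_of(tree, x)["count"] += 1

def density(tree, x, n):
    leaf = leaf_of(tree, x)
    return leaf["count"] / (n * leaf["vol"])    # piecewise-constant estimate

rng = np.random.default_rng(5)
data = rng.uniform(size=(2000, 2))              # uniform on the unit square
tree = build_rs_tree(np.zeros(2), np.ones(2), depth=4, rng=rng)
fit(tree, data)
d = density(tree, np.array([0.5, 0.5]), len(data))
```

Scoring an incoming point is just a root-to-leaf walk, which is why the forest version can keep up with streams: anomalies land in sparsely populated leaves and receive low averaged density scores.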

Wu, Ke; Zhang, Kun; Fan, Wei; Edwards, Andrea; Yu, Philip S.

2015-01-01

295

Snags (standing dead trees) are an essential structural component of forests. Because wildlife use of snags depends on size and decay stage, snag density estimation without any information about snag quality attributes is of little value for wildlife management decision makers. Little work has been done to develop models that allow multivariate estimation of snag density by snag quality class. Using climate, topography, Landsat TM data, stand age and forest type collected for 2356 forested Forest Inventory and Analysis plots in western Washington and western Oregon, we evaluated two multivariate techniques for their abilities to estimate density of snags by three decay classes. The density of live trees and snags in three decay classes (D1: recently dead, little decay; D2: decay, without top, some branches and bark missing; D3: extensive decay, missing bark and most branches) with diameter at breast height (DBH) ≥ 12.7 cm was estimated using a nonparametric random forest nearest neighbor imputation technique (RF) and a parametric two-stage model (QPORD), for which the number of trees per hectare was estimated with a Quasipoisson model in the first stage and the probability of belonging to a tree status class (live, D1, D2, D3) was estimated with an ordinal regression model in the second stage. The presence of large snags with DBH ≥ 50 cm was predicted using a logistic regression and RF imputation. Because of the more homogenous conditions on private forest lands, snag density by decay class was predicted with higher accuracies on private forest lands than on public lands, while presence of large snags was more accurately predicted on public lands, owing to the higher prevalence of large snags on public lands. RF outperformed the QPORD model in terms of percent accurate predictions, while QPORD provided smaller root mean square errors in predicting snag density by decay class.
The logistic regression model achieved more accurate presence/absence classification of large snags than the RF imputation approach. Adjusting the decision threshold to account for unequal size for presence and absence classes is more straightforward for the logistic regression than for the RF imputation approach. Overall, model accuracies were poor in this study, which can be attributed to the poor predictive quality of the explanatory variables and the large range of forest types and geographic conditions observed in the data.

Eskelson, Bianca N.I.; Hagar, Joan; Temesgen, Hailemariam

2012-01-01

296

A note on estimating a non-increasing density in the presence of selection bias

In this paper we construct the non-parametric maximum likelihood estimator (NPMLE) f̂_n of a non-increasing probability density function f with distribution function F on the basis of a sample from a weighted distribution G with density given by g(x) = w(x)f(x)/μ(f,w), where w(u) > 0 for all u and μ(f,w) = ∫ w(u)f(u)du

Hammou El Barmi; Paul I. Nelson

2002-01-01

297

Purpose To investigate the influence of various technical factors on the variation of cone packing density estimates in adaptive optics flood illuminated retinal images. Methods Adaptive optics images of the photoreceptor mosaic were obtained in fifteen healthy subjects. The cone density and Voronoi diagrams were assessed in sampling windows of 320×320 µm, 160×160 µm and 64×64 µm at 1.5 degree temporal and superior eccentricity from the preferred locus of fixation (PRL). The technical factors that have been analyzed included the sampling window size, the corrected retinal magnification factor (RMFcorr), the conversion from radial to linear distance from the PRL, the displacement between the PRL and foveal center and the manual checking of cone identification algorithm. Bland-Altman analysis was used to assess the agreement between cone density estimated within the different sampling window conditions. Results The cone density declined with decreasing sampling area and data between areas of different size showed low agreement. A high agreement was found between sampling areas of the same size when comparing density calculated with or without using individual RMFcorr. The agreement between cone density measured at radial and linear distances from the PRL and between data referred to the PRL or the foveal center was moderate. The percentage of Voronoi tiles with hexagonal packing arrangement was comparable between sampling areas of different size. The boundary effect, presence of any retinal vessels, and the manual selection of cones missed by the automated identification algorithm were identified as the factors influencing variation of cone packing arrangements in Voronoi diagrams. Conclusions The sampling window size is the main technical factor that influences variation of cone density. Clear identification of each cone in the image and the use of a large buffer zone are necessary to minimize factors influencing variation of Voronoi diagrams of the cone mosaic. 
PMID:25203681

Lombardo, Marco; Serrao, Sebastiano; Lombardo, Giuseppe

2014-01-01

298

NASA Astrophysics Data System (ADS)

Pyroclastic density current deposits remobilized by water during periods of heavy rainfall trigger lahars (volcanic mudflows) that affect inhabited areas at considerable distances from volcanoes, even years after an eruption. Here we present an innovative approach to detect and estimate the thickness and volume of pyroclastic density current (PDC) deposits, as well as erosional versus depositional environments. We use SAR interferometry to compare an airborne digital surface model (DSM) acquired in 2004 to a post-eruption 2010 DSM created using COSMO-SkyMed satellite data, in order to estimate the volume of 2010 Merapi eruption PDC deposits along the Gendol river (Kali Gendol, KG). Results show PDC thicknesses of up to 75 m in canyons and a volume of about 40 × 10⁶ m³, mainly along KG, at distances of up to 16 km from the volcano summit. This volume estimate corresponds mainly to the 2010 pyroclastic deposits along the KG - material that is potentially available to produce lahars. Our volume estimate is approximately twice that estimated by field studies, a difference we consider acceptable given the uncertainties involved in both satellite- and field-based methods. Our technique can be used to rapidly evaluate volumes of PDC deposits at active volcanoes, in remote settings, and where continuous activity may prevent field observations.

Bignami, Christian; Ruch, Joel; Chini, Marco; Neri, Marco; Buongiorno, Maria Fabrizia; Hidayati, Sri; Sayudi, Dewi Sri; Surono

2013-07-01
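The DSM-differencing step behind such a volume estimate reduces to summing elevation changes over grid cells. A minimal Python sketch, with a hypothetical 2×2 grid and an assumed 10 m cell posting (the real DSMs are, of course, far larger):

```python
# Volume of deposits from differencing a pre- and post-eruption DSM (sketch).
# Positive differences indicate deposition, negative differences erosion.
CELL_AREA = 10.0 * 10.0  # m^2 per DSM cell (assumed 10 m posting)

pre =  [[100.0, 102.0], [101.0, 103.0]]   # elevations (m), pre-eruption DSM
post = [[105.0, 102.0], [110.0, 101.0]]   # elevations (m), post-eruption DSM

deposit_vol = 0.0
erosion_vol = 0.0
for row_pre, row_post in zip(pre, post):
    for z0, z1 in zip(row_pre, row_post):
        dz = z1 - z0
        if dz > 0:
            deposit_vol += dz * CELL_AREA
        else:
            erosion_vol += -dz * CELL_AREA
```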

299

Integrated nested Laplace approximations (INLA) are a recently proposed approximate Bayesian approach to fit structured additive regression models with latent Gaussian fields. The INLA method, an alternative to Markov chain Monte Carlo techniques, provides accurate approximations of posterior marginals and avoids time-consuming sampling. We show here that two classical nonparametric smoothing problems, nonparametric regression and density estimation, can be achieved using INLA. Simulated examples and R functions are provided to illustrate the use of the methods. Potential applications of INLA are also discussed in the paper. PMID:24416633

Wang, Xiao-Feng

2013-06-25

300

Estimation of D-region Electron Density using Tweeks Measurements at Nainital and Allahabad

Lightning-generated radio atmospherics that propagate over long distances via multiple reflections between the boundaries of the Earth-ionosphere waveguide (EIWG) show sharp dispersion near the ~1.8 kHz cut-off frequency of the EIWG. These dispersed atmospherics at the lower-frequency end are called `tweek' radio atmospherics. In order to estimate D-region electron densities at the ionospheric reflection heights we have utilized

P. Pant; A. K. Maurya; Rajesh Singh; B. Veenadhari; A. K. Singh

2010-01-01
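The link between a tweek's cut-off frequency and the ionospheric reflection height follows from the waveguide-mode condition h = n·c/(2·fc). A minimal sketch, assuming the first-order mode and idealized, perfectly conducting waveguide boundaries:

```python
C = 299_792_458.0  # speed of light, m/s

def reflection_height_km(fc_hz, mode=1):
    """Reflection height (km) from the cut-off frequency of EIWG mode n:
    h = n * c / (2 * fc)."""
    return mode * C / (2.0 * fc_hz) / 1000.0

# A first-mode cut-off near 1.8 kHz places the nighttime reflection height
# in the D region, around 80-85 km:
h = reflection_height_km(1.8e3)
```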

301

Use of spatial capture-recapture modeling and DNA data to estimate densities of elusive animals

Assessment of abundance, survival, recruitment rates, and density (i.e., population assessment) is especially challenging for elusive species most in need of protection (e.g., rare carnivores). Individual identification methods, such as DNA sampling, provide ways of studying such species efficiently and noninvasively. Additionally, statistical methods that correct for undetected animals and account for locations where animals are captured are available to efficiently estimate density and other demographic parameters. We collected hair samples of European wildcat (Felis silvestris) from cheek-rub lure sticks, extracted DNA from the samples, and identified each animal's genotype. To estimate the density of wildcats, we used Bayesian inference in a spatial capture-recapture model. We used WinBUGS to fit a model that accounted for differences in detection probability among individuals and seasons and between two lure arrays. We detected 21 individual wildcats (including possible hybrids) 47 times. Wildcat density was estimated at 0.29/km² (SE 0.06), and 95% of the activity of wildcats was estimated to occur within 1.83 km from their home-range center. Lures located systematically were associated with a greater number of detections than lures placed in a cell on the basis of expert opinion. Detection probability of individual cats was greatest in late March. Our model is a generalized linear mixed model; hence, it can be easily extended, for instance, to incorporate trap- and individual-level covariates. We believe that the combined use of noninvasive sampling techniques and spatial capture-recapture models will improve population assessments, especially for rare and elusive animals.

Kery, Marc; Gardner, Beth; Stoeckle, Tabea; Weber, Darius; Royle, J. Andrew

2011-01-01
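Under the bivariate-normal activity model commonly used in spatial capture-recapture, the reported "95% of activity within 1.83 km" corresponds to a movement-scale parameter σ via a chi-square quantile with 2 degrees of freedom. A sketch assuming that standard model (the authors' exact parameterization may differ):

```python
import math

# For a bivariate-normal activity center model, the radius containing a
# fraction q of an animal's activity is r_q = sigma * sqrt(-2 * ln(1 - q)),
# i.e. the chi-square(2 df) quantile.
def activity_radius(sigma, q=0.95):
    return sigma * math.sqrt(-2.0 * math.log(1.0 - q))

# Working backwards from the reported 1.83 km 95% radius gives the implied
# movement parameter sigma:
sigma = 1.83 / math.sqrt(-2.0 * math.log(0.05))
r95 = activity_radius(sigma)
```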

302

Road-Based Surveys for Estimating Wild Turkey Density in the Texas Rolling Plains

Line-transect-based distance sampling has been used to estimate density of several wild bird species including wild turkeys (Meleagris gallopavo). We used inflatable turkey decoys during autumn (Aug-Nov) and winter (Dec-Mar) 2003-2005 at 3 study sites in the Texas Rolling Plains, USA, to simulate Rio Grande wild turkey (M. g. intermedia) flocks. We evaluated detectability of flocks using logistic regression models.

MATTHEW J. BUTLER; WARREN B. BALLARD; MARK C. WALLACE; STEPHEN J. DEMASO

2007-01-01

303

Use of spatial capture-recapture modeling and DNA data to estimate densities of elusive animals.

Assessment of abundance, survival, recruitment rates, and density (i.e., population assessment) is especially challenging for elusive species most in need of protection (e.g., rare carnivores). Individual identification methods, such as DNA sampling, provide ways of studying such species efficiently and noninvasively. Additionally, statistical methods that correct for undetected animals and account for locations where animals are captured are available to efficiently estimate density and other demographic parameters. We collected hair samples of European wildcat (Felis silvestris) from cheek-rub lure sticks, extracted DNA from the samples, and identified each animal's genotype. To estimate the density of wildcats, we used Bayesian inference in a spatial capture-recapture model. We used WinBUGS to fit a model that accounted for differences in detection probability among individuals and seasons and between two lure arrays. We detected 21 individual wildcats (including possible hybrids) 47 times. Wildcat density was estimated at 0.29/km² (SE 0.06), and 95% of the activity of wildcats was estimated to occur within 1.83 km from their home-range center. Lures located systematically were associated with a greater number of detections than lures placed in a cell on the basis of expert opinion. Detection probability of individual cats was greatest in late March. Our model is a generalized linear mixed model; hence, it can be easily extended, for instance, to incorporate trap- and individual-level covariates. We believe that the combined use of noninvasive sampling techniques and spatial capture-recapture models will improve population assessments, especially for rare and elusive animals. PMID:21166714

Kéry, Marc; Gardner, Beth; Stoeckle, Tabea; Weber, Darius; Royle, J Andrew

2011-04-01

304

NASA Astrophysics Data System (ADS)

The Population Density Tables (PDT) project at Oak Ridge National Laboratory (www.ornl.gov) is developing population density estimates for specific human activities under normal patterns of life based largely on information available in open source. Currently, activity-based density estimates are based on simple summary data statistics such as range and mean. Researchers are interested in improving activity estimation and uncertainty quantification by adopting a Bayesian framework that considers both data and sociocultural knowledge. Under a Bayesian approach, knowledge about population density may be encoded through the process of expert elicitation. Due to the scale of the PDT effort, which considers over 250 countries, spans 50 human activity categories, and includes numerous contributors, an elicitation tool is required that can be operationalized within an enterprise data collection and reporting system. Such a method would ideally require that the contributor have minimal statistical knowledge, require minimal input by a statistician or facilitator, consider human difficulties in expressing qualitative knowledge in a quantitative setting, and provide methods by which the contributor can appraise whether their understanding and associated uncertainty was well captured. This paper introduces an algorithm that transforms answers to simple, non-statistical questions into a bivariate Gaussian distribution as the prior for the Beta distribution. Based on geometric properties of the Beta distribution parameter feasibility space and the bivariate Gaussian distribution, an automated method for encoding is developed that responds to these challenging enterprise requirements. Though created within the context of population density, this approach may be applicable to a wide array of problem domains requiring informative priors for the Beta distribution.

Stewart, Robert; White, Devin; Urban, Marie; Morton, April; Webster, Clayton; Stoyanov, Miroslav; Bright, Eddie; Bhaduri, Budhendra L.

2013-05-01
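The paper's bivariate-Gaussian encoding is not reproduced here, but the core idea of turning a contributor's non-statistical answers into an informative Beta prior can be illustrated with simple moment matching. The function name and example values are hypothetical stand-ins:

```python
def beta_from_mean_sd(mean, sd):
    """Moment-match a Beta(alpha, beta) prior to an elicited mean and sd.

    A simplified stand-in for the paper's elicitation encoding; feasibility
    requires sd^2 < mean * (1 - mean).
    """
    var = sd ** 2
    if not 0.0 < mean < 1.0 or var >= mean * (1.0 - mean):
        raise ValueError("elicited (mean, sd) outside Beta feasibility region")
    common = mean * (1.0 - mean) / var - 1.0
    return mean * common, (1.0 - mean) * common

# e.g. an expert believes an activity occupies ~20% of a site, give or take 10%
alpha, beta = beta_from_mean_sd(0.20, 0.10)
```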

305

NASA Astrophysics Data System (ADS)

Accurate numerical simulations of global scale three-dimensional atmospheric chemical transport models (CTMs) are essential for studies of many important atmospheric chemistry problems such as adverse effect of air pollutants on human health, ecosystems and the Earth's climate. These simulations usually require large CPU time due to numerical difficulties associated with a wide range of spatial and temporal scales, nonlinearity and large number of reacting species. In our previous work we have shown that in order to achieve adequate convergence rate and accuracy, the mesh spacing in numerical simulation of global synoptic-scale pollution plume transport must be decreased to a few kilometers. This resolution is difficult to achieve for global CTMs on uniform or quasi-uniform grids. To address the described above difficulty we developed a three-dimensional Wavelet-based Adaptive Mesh Refinement (WAMR) algorithm. The method employs a highly non-uniform adaptive grid with fine resolution over the areas of interest without requiring small grid-spacing throughout the entire domain. The method uses multi-grid iterative solver that naturally takes advantage of a multilevel structure of the adaptive grid. In order to represent the multilevel adaptive grid efficiently, a dynamic data structure based on indirect memory addressing has been developed. The data structure allows rapid access to individual points, fast inter-grid operations and re-gridding. The WAMR method has been implemented on parallel computer architectures. The parallel algorithm is based on run-time partitioning and load-balancing scheme for the adaptive grid. The partitioning scheme maintains locality to reduce communications between computing nodes. The parallel scheme was found to be cost-effective. Specifically we obtained an order of magnitude increase in computational speed for numerical simulations performed on a twelve-core single processor workstation. 
We have applied the WAMR method for numerical simulation of several benchmark problems including simulation of traveling three-dimensional reactive and inert transpacific pollution plumes. It was shown earlier that conventionally used global CTMs implemented for stationary grids are incapable of reproducing these plumes' dynamics due to excessive numerical diffusion caused by limitations in the grid resolution. It has been shown that the WAMR algorithm allows us to use grids one to two orders of magnitude finer than static-grid techniques in the region of fine spatial scales without significantly increasing CPU time. Therefore the developed WAMR method has significant advantages over conventional fixed-resolution computational techniques in terms of accuracy and/or computational cost and allows us to simulate accurately important multi-scale chemical transport problems that cannot be simulated with standard static grid techniques currently utilized by the majority of global atmospheric chemistry models. This work is supported by a grant from the National Science Foundation under Award No. HRD-1036563.

Rastigejev, Y.; Semakin, A. N.

2013-12-01

306

Kernel density estimation applied to bond length, bond angle, and torsion angle distributions.

We describe the method of kernel density estimation (KDE) and apply it to molecular structure data. KDE is a quite general nonparametric statistical method suitable even for multimodal data. The method generates smooth probability density function (PDF) representations and finds application in diverse fields such as signal processing and econometrics. KDE appears to have been under-utilized as a method in molecular geometry analysis, chemoinformatics, and molecular structure optimization. The resulting probability densities have advantages over histograms and, importantly, are also suitable for gradient-based optimization. To illustrate KDE, we describe its application to chemical bond length, bond valence angle, and torsion angle distributions and show the ability of the method to model arbitrary torsion angle distributions. PMID:24746022

McCabe, Patrick; Korb, Oliver; Cole, Jason

2014-05-27
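A minimal pure-Python sketch of the KDE idea described above: each observation contributes a Gaussian "bump", and their sum is a smooth, differentiable PDF suitable for gradient-based optimization. The torsion-angle sample is hypothetical.

```python
import math

def gaussian_kde(data, bandwidth):
    """Return a smooth PDF estimate: a sum of Gaussian bumps, one per point."""
    n = len(data)
    norm = 1.0 / (n * bandwidth * math.sqrt(2.0 * math.pi))
    def pdf(x):
        return norm * sum(math.exp(-0.5 * ((x - xi) / bandwidth) ** 2)
                          for xi in data)
    return pdf

# Hypothetical bimodal torsion-angle sample (degrees), as might arise from
# two conformer populations; KDE recovers both modes smoothly, where a
# coarse histogram might not.
angles = [58, 61, 63, 65, 178, 180, 181, 183]
pdf = gaussian_kde(angles, bandwidth=5.0)
```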

307

We demonstrate that although auditory sampling is a useful tool, this method alone will not provide a truly accurate indication of population size, density and distribution of gibbons in an area. If auditory sampling alone is employed, we show that data collection must take place over a sufficient period to account for variation in calling patterns across seasons. The population of Hylobates albibarbis in the Sabangau catchment, Central Kalimantan, Indonesia, was surveyed from July to December 2005 using methods established previously. In addition, auditory sampling was complemented by detailed behavioural data on six habituated groups within the study area. Here we compare results from this study to those of a 1-month study conducted in 2004. The total population of the Sabangau catchment is estimated to be about in the tens of thousands, though numbers, distribution and density for the different forest subtypes vary considerably. We propose that future density surveys of gibbons must include data from all forest subtypes where gibbons are found and that extrapolating from one forest subtype is likely to yield inaccurate density and population estimates. We also propose that auditory census be carried out by using at least three listening posts (LP) in order to increase the area sampled and the chances of hearing groups. Our results suggest that the Sabangau catchment contains one of the largest remaining contiguous populations of Bornean agile gibbon. PMID:17899314

Cheyne, Susan M; Thompson, Claire J H; Phillips, Abigail C; Hill, Robyn M C; Limin, Suwido H

2008-01-01

308

Density estimation and adaptive bandwidths: A primer for public health practitioners

Background Geographic information systems have advanced the ability to both visualize and analyze point data. While point-based maps can be aggregated to differing areal units and examined at varying resolutions, two problems arise: 1) the modifiable areal unit problem and 2) any corresponding data must be available both at the scale of analysis and in the same geographic units. Kernel density estimation (KDE) produces a smooth, continuous surface where each location in the study area is assigned a density value irrespective of arbitrary administrative boundaries. We review KDE, and introduce the technique of utilizing an adaptive bandwidth to address the underlying heterogeneous population distributions common in public health research. Results The density of occurrences should not be interpreted without knowledge of the underlying population distribution. When the effect of the background population is successfully accounted for, differences in point patterns in similar population areas are more discernible; it is generally these variations that are of most interest. A static bandwidth KDE does not distinguish the spatial extents of interesting areas, nor does it expose patterns above and beyond those due to geographic variations in the density of the underlying population. An adaptive bandwidth method uses background population data to calculate a kernel of varying size for each individual case. Where the population density is high the bandwidth is small, limiting the influence of a single case to a small spatial extent. If the primary concern is distance, a static bandwidth is preferable because it may be better to define the "neighborhood" or exposure risk based on distance. If the primary concern is differences in exposure across the population, a bandwidth adapting to the population is preferred. Conclusions Kernel density estimation is a useful way to consider exposure at any point within a spatial frame, irrespective of administrative boundaries.
Utilization of an adaptive bandwidth may be particularly useful in comparing two similarly populated areas when studying health disparities or other issues comparing populations in public health. PMID:20653969

2010-01-01
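The adaptive-bandwidth idea can be sketched as follows: each case gets its own kernel width, shrinking where the background population is dense. The inverse-square-root scaling and the 1-D example are illustrative assumptions, not the primer's prescription.

```python
import math

def adaptive_kde(cases, population_density, base_bw=1.0):
    """KDE in which each case's bandwidth shrinks as local population grows.

    `population_density(x)` is any callable giving the background population
    at x; inverse-square-root scaling is one common choice, not the only one.
    """
    bws = [base_bw / math.sqrt(max(population_density(x), 1e-9)) for x in cases]
    def pdf(x):
        total = 0.0
        for xi, h in zip(cases, bws):
            total += math.exp(-0.5 * ((x - xi) / h) ** 2) / (h * math.sqrt(2 * math.pi))
        return total / len(cases)
    return pdf

# Hypothetical 1-D setting: dense population near x=0, sparse near x=10,
# so cases in the dense region get narrow kernels (small spatial influence).
pop = lambda x: 100.0 if x < 5 else 1.0
pdf = adaptive_kde([1.0, 2.0, 9.0, 10.0], pop)
```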

309

Estimation of scattering phase function utilizing laser Doppler power density spectra.

A new method for the estimation of the light scattering phase function of particles is presented. The method allows us to measure the light scattering phase function of particles of any shape in the full angular range (0°-180°) and is based on the analysis of laser Doppler (LD) power density spectra. The theoretical background of the method and the results of its validation using data from Monte Carlo simulations are presented. For the estimation of the scattering phase function, a phantom measurement setup is proposed containing a LD measurement system and a simple model in which a liquid sample flows through a glass tube fixed in an optically turbid material. The scattering phase function estimation error was thoroughly investigated in relation to the light scattering anisotropy factor g. The error of g estimation is lower than 10% for anisotropy factors larger than 0.5 and decreases as the anisotropy factor increases (e.g. for g = 0.98, the error of estimation is 0.01%). The analysis of the influence of noise in the measured LD spectrum showed that the g estimation error is lower than 1% for a signal-to-noise ratio higher than 50 dB. PMID:23340453

Wojtkiewicz, S; Liebert, A; Rix, H; Sawosz, P; Maniewski, R

2013-02-21

310

NSDL National Science Digital Library

Students will explain the concept of and be able to calculate density based on given volumes and masses. Throughout today's assignment, you will need to calculate density. You can find a density calculator at this site. Make sure that you enter the correct units. For most of the problems, grams and cubic centimeters will lead you to the correct answer: Density Calculator What is Density? Visit the following website to answer questions ...

Mrs. Petersen

2013-10-28

311

Biodiversity losses are occurring worldwide due to a combination of stressors. For example, by one estimate, 40% of amphibian species are vulnerable to extinction, and disease is one threat to amphibian populations. The emerging infectious disease chytridiomycosis, caused by the aquatic fungus Batrachochytrium dendrobatidis (Bd), is a contributor to amphibian declines worldwide. Bd research has focused on the dynamics of the pathogen in its amphibian hosts, with little emphasis on investigating the dynamics of free-living Bd. Therefore, we investigated patterns of Bd occupancy and density in amphibian habitats using occupancy models, powerful tools for estimating site occupancy and detection probability. Occupancy models have been used to investigate diseases where the focus was on pathogen occurrence in the host. We applied occupancy models to investigate free-living Bd in North American surface waters to determine Bd seasonality, relationships between Bd site occupancy and habitat attributes, and probability of detection from water samples as a function of the number of samples, sample volume, and water quality. We also report on the temporal patterns of Bd density from a 4-year case study of a Bd-positive wetland. We provide evidence that Bd occurs in the environment year-round. Bd exhibited temporal and spatial heterogeneity in density, but did not exhibit seasonality in occupancy. Bd was detected in all months, typically at less than 100 zoospores L⁻¹. The highest density observed was ~3 million zoospores L⁻¹. We detected Bd in 47% of sites sampled, but estimated that Bd occupied 61% of sites, highlighting the importance of accounting for imperfect detection. When Bd was present, there was a 95% chance of detecting it with four samples of 600 mL of water or five samples of 60 mL.
Our findings provide important baseline information to advance the study of Bd disease ecology, and advance our understanding of amphibian exposure to free-living Bd in aquatic habitats over time. PMID:25222122

Chestnut, Tara; Anderson, Chauncey; Popa, Radu; Blaustein, Andrew R; Voytek, Mary; Olson, Deanna H; Kirshtein, Julie

2014-01-01
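The sampling recommendation at the end follows from the cumulative detection probability for independent samples. A sketch that works backwards from the reported figure of a 95% detection chance with four samples (assuming independent, equally informative samples):

```python
# Cumulative detection probability with n independent water samples:
#   P(detect) = 1 - (1 - p)**n
# Working backwards from the reported 95% chance with four 600 mL samples
# gives the implied per-sample detection probability.
p_single = 1.0 - 0.05 ** (1.0 / 4.0)

def p_detect(p, n):
    """Probability of at least one detection in n independent samples."""
    return 1.0 - (1.0 - p) ** n
```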

312

Biodiversity losses are occurring worldwide due to a combination of stressors. For example, by one estimate, 40% of amphibian species are vulnerable to extinction, and disease is one threat to amphibian populations. The emerging infectious disease chytridiomycosis, caused by the aquatic fungus Batrachochytrium dendrobatidis (Bd), is a contributor to amphibian declines worldwide. Bd research has focused on the dynamics of the pathogen in its amphibian hosts, with little emphasis on investigating the dynamics of free-living Bd. Therefore, we investigated patterns of Bd occupancy and density in amphibian habitats using occupancy models, powerful tools for estimating site occupancy and detection probability. Occupancy models have been used to investigate diseases where the focus was on pathogen occurrence in the host. We applied occupancy models to investigate free-living Bd in North American surface waters to determine Bd seasonality, relationships between Bd site occupancy and habitat attributes, and probability of detection from water samples as a function of the number of samples, sample volume, and water quality. We also report on the temporal patterns of Bd density from a 4-year case study of a Bd-positive wetland. We provide evidence that Bd occurs in the environment year-round. Bd exhibited temporal and spatial heterogeneity in density, but did not exhibit seasonality in occupancy. Bd was detected in all months, typically at less than 100 zoospores L⁻¹. The highest density observed was ~3 million zoospores L⁻¹. We detected Bd in 47% of sites sampled, but estimated that Bd occupied 61% of sites, highlighting the importance of accounting for imperfect detection. When Bd was present, there was a 95% chance of detecting it with four samples of 600 mL of water or five samples of 60 mL.
Our findings provide important baseline information to advance the study of Bd disease ecology, and advance our understanding of amphibian exposure to free-living Bd in aquatic habitats over time. PMID:25222122

Chestnut, Tara; Anderson, Chauncey; Popa, Radu; Blaustein, Andrew R.; Voytek, Mary; Olson, Deanna H.; Kirshtein, Julie

2014-01-01

313

We estimated relative abundance and density of Western Burrowing Owls (Athene cunicularia hypugaea) at two sites in the Mojave Desert (2003-04). We made modifications to previously established Burrowing Owl survey techniques for use in desert shrublands and evaluated several factors that might influence the detection of owls. We tested the effectiveness of the call-broadcast technique for surveying this species, the efficiency of this technique at early and late breeding stages, and the effectiveness of various numbers of vocalization intervals during broadcasting sessions. Only 1 (3%) of 31 initial (new) owl responses was detected during passive-listening sessions. We found that surveying early in the nesting season was more likely to produce new owl detections compared to surveying later in the nesting season. New owls detected during each of the three vocalization intervals (each consisting of 30 sec of vocalizations followed by 30 sec of silence) of our broadcasting session were similar (37%, 40%, and 23%; n = 30). We used a combination of detection trials (sighting probability) and the double-observer method to estimate the components of detection probability, i.e., availability and perception. Availability for all sites and years, as determined by detection trials, ranged from 46.1-58.2%. Relative abundance, measured as frequency of occurrence and defined as the proportion of surveys with at least one owl, ranged from 19.2-32.0% for both sites and years. Density at our eastern Mojave Desert site was estimated at 0.09 ± 0.01 (SE) owl territories/km² and 0.16 ± 0.02 (SE) owl territories/km² during 2003 and 2004, respectively. In our southern Mojave Desert site, density estimates were 0.09 ± 0.02 (SE) owl territories/km² and 0.08 ± 0.02 (SE) owl territories/km² during 2004 and 2005, respectively. © 2010 The Raptor Research Foundation, Inc.

Crowe, D.E.; Longshore, K.M.

2010-01-01

314

NASA Astrophysics Data System (ADS)

Over 700 weekly-spaced vertical profiles of aerosol number density have been archived over a 14-year period (October 1986-September 2000) using a bi-static Argon ion lidar system at the Indian Institute of Tropical Meteorology, Pune (18°43′N, 73°51′E, 559 m above mean sea level), India. The monthly resolved time series of aerosol distributions within the atmospheric boundary layer as well as at different altitudes aloft have been subjected to wavelet-based spectral analysis to investigate different characteristic periodicities present in the long-term dataset. The solar radiometric aerosol optical depth (AOD) measurements over the same place during 1998-2003 have also been analyzed with the wavelet technique. Wavelet spectra of both time series exhibited quasi-annual (around 12-14 months) and quasi-biennial (around 22-25 months) oscillations at a statistically significant level. An overview of the lidar and radiometric data sets including the wavelet-based spectral analysis procedure is also presented. A brief statistical analysis concerning both annual and interannual variability of lidar- and radiometer-derived aerosol distributions has been performed to delineate the effect of the different dominant seasons and associated meteorological conditions prevailing over the experimental site in Western India. Additionally, the impact of urbanization on the long-term trends in the lidar measurements of aerosol loadings over the experimental site is brought out. This was achieved by using the lidar observations and a preliminary data set built for inferring the urban aspects of the city of Pune, which included population, number of industries and vehicles etc. in the city.

Pal, S.; Devara, P. C. S.

2012-08-01

315

The Recovery Plan for the federally threatened Louisiana black bear (Ursus americanus luteolus) mandates that remnant populations be estimated and monitored. In 1999 we obtained genetic material with barbed-wire hair traps to estimate bear population size and genetic diversity at the 329-km² Tensas River Tract, Louisiana. We constructed and monitored 122 hair traps, which produced 1,939 hair samples. Of those, we randomly selected 116 subsamples for genetic analysis and used up to 12 microsatellite DNA markers to obtain multilocus genotypes for 58 individuals. We used Program CAPTURE to compute estimates of population size using multiple mark-recapture models. The area of study was almost entirely circumscribed by agricultural land, thus the population was geographically closed. Also, study-area boundaries were biologically discrete, enabling us to accurately estimate population density. Using model Chao Mh to account for possible effects of individual heterogeneity in capture probabilities, we estimated the population size to be 119 (SE=29.4) bears, or 0.36 bears/km². We were forced to examine a substantial number of loci to differentiate between some individuals because of low genetic variation. Despite the probable introduction of genes from Minnesota bears in the 1960s, the isolated population at Tensas exhibited characteristics consistent with inbreeding and genetic drift. Consequently, the effective population size at Tensas may be as few as 32, which warrants continued monitoring or possibly genetic augmentation.

Boersen, M.R.; Clark, J.D.; King, T.L.

2003-01-01

316

It is understood that the Hilbert transform pairs of orthonormal wavelet bases can only be realized approximately by the scaling filters of conjugate quadrature filter (CQF) banks. In this paper, the approximate FIR realization of the Hilbert transform pairs is formulated as an optimization problem in the sense of the ℓp (p = 1, 2, or ∞) norm minimization on the approximate

Jiang Wang; Jian Qiu Zhang

2010-01-01

317

Paper #1052, presented at the International Congress on Ultrasonics, Vienna, April 9-13, 2007, Session R05: Biomedical Ultrasound. Wavelet based deconvolution method in ultrasonic tomography (lasaygues@lma.cnrs-mrs.fr). Abstract: This paper deals with the quantitative and qualitative ultrasonic

Paris-Sud XI, UniversitÃ© de

318

Extracting reliable image edge information is crucial for active contour models as well as vascular segmentation in magnetic resonance angiography (MRA). However, conventional edge detection techniques, such as gradient-based methods and wavelet-based methods, are incapable of returning reliable detection responses from low contrast edges in the images. In this paper, we propose a novel edge detection method

Zhenyu He; Albert C. S. Chung

2010-01-01

319

The aim of the present work is to study the ionospheric response induced by the solar eclipse of 11 August 1999. We provide Fourier- and wavelet-based characterisations of the propagation of the acoustic-gravity waves induced by the solar eclipse. The analysed data consist of profiles of electron concentration. They are derived from 1-minute vertical incidence ionospheric sounding measurements,

P. Sauli; P. Abry; J. Boska

2004-01-01

320

Single-view x-ray luminescence computed tomography (XLCT) imaging has a short data collection time that allows fast, non-invasive resolution of the three-dimensional (3-D) distribution of x-ray-excitable nanophosphors within small animals in vivo. However, the single-view reconstruction suffers from a severely ill-posed problem because data from only one angle are used in the reconstruction. To alleviate the ill-posedness, in this paper, we propose a wavelet-based reconstruction approach, which is achieved by applying a wavelet transformation to the acquired single-view measurements. To evaluate the performance of the proposed method, an in vivo experiment was performed based on a cone beam XLCT imaging system. The experimental results demonstrate that the proposed method can not only use the full set of measurements produced by the CCD, but also accelerate image reconstruction while preserving the spatial resolution of the reconstruction. Hence, it is suitable for dynamic XLCT imaging studies. PMID:25426315

Liu, Xin; Wang, Hongkai; Xu, Mantao; Nie, Shengdong; Lu, Hongbing

2014-01-01

321

NASA Technical Reports Server (NTRS)

Future space-based, remote sensing systems will have data transmission requirements that exceed available downlinks, necessitating the use of lossy compression techniques for multispectral data. In this paper, we describe several algorithms for lossy compression of multispectral data which combine spectral decorrelation techniques with an adaptive, wavelet-based, image compression algorithm to exploit both spectral and spatial correlation. We compare the performance of several different spectral decorrelation techniques including wavelet transformation in the spectral dimension. The performance of each technique is evaluated at compression ratios ranging from 4:1 to 16:1. Performance measures used are visual examination, conventional distortion measures, and multispectral classification results. We also introduce a family of distortion metrics that are designed to quantify and predict the effect of compression artifacts on multispectral classification of the reconstructed data.

Matic, Roy M.; Mosley, Judith I.

1994-01-01

322

A Recursive Wavelet-based Strategy for Real-Time Cochlear Implant Speech Processing on PDA Platforms

This paper presents a wavelet-based speech coding strategy for cochlear implants. In addition, it describes the real-time implementation of this strategy on a PDA platform. Three wavelet packet decomposition tree structures are considered and their performance in terms of computational complexity, spectral leakage, fixed-point accuracy, and real-time processing are compared to other commonly used strategies in cochlear implants. A real-time mechanism is introduced for updating the wavelet coefficients recursively. It is shown that the proposed strategy achieves higher analysis rates than the existing strategies while being able to run in real-time on a PDA platform. In addition, it is shown that this strategy leads to a lower amount of spectral leakage. The PDA implementation is made interactive to allow users to easily manipulate the parameters involved and study their effects. PMID:20403778

Gopalakrishna, Vanishree; Kehtarnavaz, Nasser; Loizou, Philipos C.

2011-01-01

323

A wavelet-based evaluation of time-varying long memory of equity markets: A paradigm in crisis

NASA Astrophysics Data System (ADS)

This study uses a wavelet-based method to investigate the dynamics of long memory in the returns and volatility of equity markets. In a sample of five developed and five emerging markets we find that the daily return series from January 1988 to June 2013 may be considered a mix of weak long memory and mean-reverting processes. In the case of volatility in the returns, there is evidence of long memory, which is stronger in emerging markets than in developed markets. We find that although the long memory parameter may vary during crisis periods (the 1997 Asian financial crisis, the 2001 US recession and the 2008 subprime crisis), the direction of change may not be consistent across all equity markets. The degree of return predictability is likely to diminish during crisis periods. Robustness of the results is checked with a de-trended fluctuation analysis approach.

Tan, Pei P.; Chin, Cheong W.; Galagedera, Don U. A.

2014-09-01

324

Heart Rate Variability and Wavelet-based Studies on ECG Signals from Smokers and Non-smokers

NASA Astrophysics Data System (ADS)

The current study deals with heart rate variability (HRV) and wavelet-based ECG signal analysis of smokers and non-smokers. The HRV results indicated dominance of sympathetic nervous system activity in smokers. The heart rate was found to be higher in smokers than in non-smokers (p < 0.05). The frequency domain analysis showed an increase in the LF and LF/HF components with a subsequent decrease in the HF component. The HRV features were analyzed for classification of the smokers from the non-smokers. The results indicated that when the RMSSD, SD1 and RR-mean features were used concurrently, a classification efficiency of >90% was achieved. The wavelet decomposition of the ECG signal was done using the Daubechies (db6) wavelet family. No difference was observed between the smokers and non-smokers, which suggests that smoking does not affect the conduction pathway of the heart.

Pal, K.; Goel, R.; Champaty, B.; Samantray, S.; Tibarewala, D. N.

2013-12-01

325

Wavelet series method for reconstruction and spectral estimation of laser Doppler velocimetry data

NASA Astrophysics Data System (ADS)

Many techniques have been developed to obtain a spectral density function from randomly sampled data, such as the computation of a slotted autocovariance function. Nevertheless, one may be interested in obtaining more information from laser Doppler signals than spectral content, using more or less complex computations that can be easily conducted with an evenly sampled signal. That is why reconstructing an evenly sampled signal from the original LDV data is of interest. The ability of a wavelet-based technique to reconstruct the signal while respecting the statistical properties of the original is explored, and the spectral content of the reconstructed signal is given and compared with the spectral density function estimated through the classical slotting technique. Furthermore, LDV signals taken from a screeching jet are reconstructed in order to perform spectral and bispectral analysis, showing the ability of the technique to recover accurate information with only a few LDV samples.

Jaunet, Vincent; Collin, Erwan; Bonnet, Jean-Paul

2012-01-01

326

NASA Astrophysics Data System (ADS)

A unique requirement of underwater vehicles' power/energy systems is that they remain neutrally buoyant over the course of a mission. Previous work published in the Journal of Power Sources reported gross, as opposed to neutrally buoyant, energy densities of an integrated solid oxide fuel cell/Rankine-cycle power system based on the exothermic reaction of aluminum with seawater. This paper corrects this shortcoming by presenting a model for estimating system mass and using it to update the key findings of the original paper in the context of the neutral buoyancy requirement. It also presents an expanded sensitivity analysis to illustrate the influence of various design and modeling assumptions. While energy density is very sensitive to turbine efficiency (sensitivity coefficient in excess of 0.60), it is relatively insensitive to all other major design parameters (sensitivity coefficients < 0.15) such as compressor efficiency, inlet water temperature, and scaling methodology. The neutral buoyancy requirement introduces a significant (~15%) energy density penalty, but overall the system still appears to offer a factor of five to eight improvement in energy density (i.e., vehicle range/endurance) over present battery-based technologies.
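The neutral buoyancy penalty can be illustrated with a toy mass budget. This is not the paper's system model; the seawater density value and the massless-flotation idealization are assumptions made here purely for illustration.

```python
RHO_SEAWATER = 1025.0  # kg/m^3, nominal seawater density (assumed)

def neutrally_buoyant_energy_density(energy_j, mass_kg, volume_m3):
    """Energy per kg of a system forced to neutral buoyancy.

    If the system is lighter than the water it displaces, ballast must be
    added until its mass equals the displaced mass; if it is heavier,
    flotation must be added (idealized here as massless volume). Either
    way, the effective mass charged against the stored energy is
    max(mass, displaced mass).
    """
    displaced_kg = RHO_SEAWATER * volume_m3
    return energy_j / max(mass_kg, displaced_kg)
```

A light, bulky system pays the full displacement penalty, while under this idealization a dense system keeps its gross energy density.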

Waters, Daniel F.; Cadou, Christopher P.

2014-02-01

327

Estimation and monitoring of product aesthetics: application to manufacturing of …

A new machine vision approach for quantitatively estimating and monitoring the appearance and aesthetics of manufactured products is presented. The approach is composed of three steps: (1) wavelet-based textural feature extraction from product images, (2) estimation of measures of the product appearance through subspace projection of the textural features, and (3) monitoring of the appearance in the latent variable subspace.

J. Jay Liu; John F. MacGregor

2006-01-01

328

By varying the external electric field in density functional theory (DFT) calculations we have estimated the impact of the local electric field in the electric double layer on the oxygen reduction reaction (ORR). Potentially, including the local electric field could change adsorption energies and barriers substantially, thereby affecting the reaction mechanism predicted for ORR on different metals. To estimate the effect of local electric fields on ORR we combine the DFT results at various external electric field strengths with a previously developed model of electrochemical reactions which fully accounts for the effect of the electrode potential. We find that the local electric field only slightly affects the output of the model. Hence, the general picture obtained without inclusion of the electric field still persists. However, for accurate predictions at oxygen reduction potentials close to the volcano top, local electric field effects may be of importance. PMID:17878993

Karlberg, G S; Rossmeisl, J; Nørskov, J K

2007-10-01

329

Constrained Kalman Filtering Via Density Function Truncation for Turbofan Engine Health Estimation

NASA Technical Reports Server (NTRS)

Kalman filters are often used to estimate the state variables of a dynamic system. However, in the application of Kalman filters some known signal information is often either ignored or dealt with heuristically. For instance, state variable constraints (which may be based on physical considerations) are often neglected because they do not fit easily into the structure of the Kalman filter. This paper develops an analytic method of incorporating state variable inequality constraints in the Kalman filter. The resultant filter truncates the PDF (probability density function) of the Kalman filter estimate at the known constraints and then computes the constrained filter estimate as the mean of the truncated PDF. The incorporation of state variable constraints increases the computational effort of the filter but significantly improves its estimation accuracy. The improvement is demonstrated via simulation results obtained from a turbofan engine model. The turbofan engine model contains 3 state variables, 11 measurements, and 10 component health parameters. It is also shown that the truncated Kalman filter may be a more accurate way of incorporating inequality constraints than other constrained filters (e.g., the projection approach to constrained filtering).
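The truncation step described above can be illustrated in one dimension: given a Gaussian state estimate and an interval constraint, the constrained estimate is the mean of the truncated normal. A minimal stdlib-only sketch, not the paper's multi-state filter; the numbers in the usage note are illustrative.

```python
import math

def _phi(z):
    """Standard normal pdf."""
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def _Phi(z):
    """Standard normal cdf."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def truncated_mean(mu, sigma, lo, hi):
    """Mean of N(mu, sigma^2) truncated to the interval [lo, hi]."""
    a = (lo - mu) / sigma
    b = (hi - mu) / sigma
    z = _Phi(b) - _Phi(a)               # probability mass inside [lo, hi]
    return mu + sigma * (_phi(a) - _phi(b)) / z
```

For example, an unconstrained estimate of -2 with unit variance, constrained to [0, 1], moves to roughly 0.32; the constrained estimate always lies inside the feasible interval.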

Simon, Dan; Simon, Donald L.

2006-01-01

330

This paper aims at estimating causal relationships between signals to detect flow propagation in autoregressive and physiological models. The main challenge of the ongoing work is to discover whether neural activity in a given structure of the brain influences activity in another area during epileptic seizures. This question refers to the concept of effective connectivity in neuroscience, i.e. to the identification of information flows and oriented propagation graphs. Past efforts to determine effective connectivity are rooted in Wiener's definition of causality, adapted into a practical form by Granger with autoregressive models. A number of studies argue against such a linear approach when nonlinear dynamics are suspected in the relationship between signals. Consequently, nonlinear nonparametric approaches, such as transfer entropy (TE), have been introduced to overcome the limitations of linear methods and promoted in many studies dealing with electrophysiological signals. Until now, even though many TE estimators have been developed, further improvement can be expected. In this paper, we investigate a new strategy by introducing an adaptive kernel density estimator to improve TE estimation. PMID:24110694
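The adaptive kernel idea can be sketched in one dimension using Abramson's square-root law, where a fixed-bandwidth pilot estimate sets a per-sample local bandwidth. This is a generic sketch of adaptive KDE, not the estimator developed in the paper (which targets the joint densities entering the TE functional).

```python
import math

def _gauss(u):
    return math.exp(-0.5 * u * u) / math.sqrt(2.0 * math.pi)

def kde(x, data, h):
    """Fixed-bandwidth Gaussian kernel density estimate."""
    return sum(_gauss((x - d) / h) for d in data) / (len(data) * h)

def adaptive_kde(x, data, h):
    """Abramson-style adaptive KDE: the bandwidth shrinks where the pilot
    density is high and widens in sparse regions."""
    pilot = [kde(d, data, h) for d in data]
    g = math.exp(sum(math.log(p) for p in pilot) / len(pilot))  # geometric mean
    lam = [math.sqrt(g / p) for p in pilot]                     # local factors
    n = len(data)
    return sum(_gauss((x - d) / (h * l)) / (h * l)
               for d, l in zip(data, lam)) / n
```

Because each kernel keeps unit mass, the adaptive estimate still integrates to one.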

Zuo, Kai; Bellanger, Jean-Jacques; Yang, Chunfeng; Shu, Huazhong; Le Bouquin Jeannes, Regine

2013-01-01

331

NASA Astrophysics Data System (ADS)

This dissertation explores two topics pertinent to electromagnetic compatibility research: maximum crosstalk estimation in weakly coupled transmission lines and modeling of electromagnetic radiation resulting from printed circuit board/high-density connector interfaces. Despite an ample supply of literature devoted to the study of crosstalk, little research has been performed to formulate maximum crosstalk estimates when signal lines are electrically long. Paper one illustrates a new maximum crosstalk estimate that is based on a mathematically rigorous, integral formulation, where the transmission lines can be lossy and in an inhomogeneous media. Paper two provides a thorough comparison and analysis of the newly derived maximum crosstalk estimates with an estimate derived by another author. In paper two the newly derived estimates in paper one are shown to be more robust because they can estimate the maximum crosstalk with fewer and less restrictive assumptions. One current industry challenge is the lack of robust printed circuit board connector models and methods to quantify radiation from these connectors. To address this challenge, a method is presented in paper three to quantify electromagnetic radiation using network parameters and power conservation, assuming the only losses at a printed circuit board/connector interface are due to radiation. Some of the radiating structures are identified and the radiation physics explored for the studied connector in paper three. Paper four expands upon the radiation modeling concepts in paper three by extending radiation characterization when material losses and multiple signals may be present at the printed circuit board/connector interface. The resulting radiated power characterization method enables robust deterministic and statistical analyses of the radiated power from printed circuit board connectors. 
Paper five shows the development of a statistical radiated power estimate based on the radiation characterization method presented in paper four. Maximum radiated power estimates are shown using the Markov and Chebyshev inequalities to predict a radiated power limit. A few maximum radiated power limits are proposed that depend on the amount of known information about the radiation characteristics of a printed circuit board connector.

Halligan, Matthew Scott

332

Age structure data are essential for single-species stock assessments, but length-frequency data can provide complementary information. In south-western Australia, the majority of these data for exploited species are derived from line-caught fish. However, baited remote underwater stereo-video systems (stereo-BRUVS) surveys have also been found to provide accurate length measurements. Given that line fishing tends to be biased towards larger fish, we predicted that stereo-BRUVS would yield length-frequency data with a smaller mean length, skewed towards smaller fish, relative to that collected by fisheries-independent line fishing. To assess the biases and selectivity of stereo-BRUVS and line fishing we compared the length-frequencies obtained for three commonly fished species, using a novel application of the Kernel Density Estimate (KDE) method and the established Kolmogorov–Smirnov (KS) test. The shape of the length-frequency distribution obtained for the labrid Choerodon rubescens by stereo-BRUVS and line fishing did not differ significantly but, as predicted, the mean length estimated from stereo-BRUVS was 17% smaller. Contrary to our predictions, the mean length and shape of the length-frequency distribution for the epinephelid Epinephelides armatus did not differ significantly between line fishing and stereo-BRUVS. For the sparid Pagrus auratus, the length-frequency distribution derived from the stereo-BRUVS method was bi-modal, while that from line fishing was uni-modal. However, the location of the first modal length class for P. auratus observed by each sampling method was similar. No differences were found between the results of the KS and KDE tests; however, KDE provided a data-driven method for approximating length-frequency data by a probability function and a useful way of describing and testing any differences between length-frequency samples. This study found the overall size selectivity of line fishing and stereo-BRUVS to be unexpectedly similar.
PMID:23209547
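The two-sample KS statistic used above is the maximum gap between the two empirical CDFs. A stdlib-only sketch of that statistic (the KDE-based test from the paper is not reproduced here):

```python
def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: sup |F_a(v) - F_b(v)|."""
    a, b = sorted(a), sorted(b)
    i = j = 0
    d = 0.0
    for v in sorted(set(a) | set(b)):
        # advance each empirical CDF to value v
        while i < len(a) and a[i] <= v:
            i += 1
        while j < len(b) and b[j] <= v:
            j += 1
        d = max(d, abs(i / len(a) - j / len(b)))
    return d
```

Identical samples give a statistic of 0, fully separated samples give 1.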

Langlois, Timothy J.; Fitzpatrick, Benjamin R.; Fairclough, David V.; Wakefield, Corey B.; Hesp, S. Alex; McLean, Dianne L.; Harvey, Euan S.; Meeuwig, Jessica J.

2012-01-01

333

NSDL National Science Digital Library

This web page introduces the concepts of density and buoyancy. The discovery in ancient Greece by Archimedes is described. The densities of various materials are given and temperature effects introduced. Links are provided to news and other resources related to mass density. This is part of the Vision Learning collection of short online modules covering topics in a broad range of science and math topics.

Day, Martha Marie

334

We report on Transition Region And Coronal Explorer 171 Å observations of the GOES X20 class flare on 2001 April 2 that show EUV flare ribbons with intense diffraction patterns. Between the 11th and 14th orders, the diffraction patterns of the compact flare ribbon are dispersed into two sources. The two sources are identified as emission from the Fe IX line at 171.1 Å and the combined emission from Fe X lines at 174.5, 175.3, and 177.2 Å. The prominent emission of the Fe IX line indicates that the EUV-emitting ribbon has a strong temperature component near the lower end of the 171 Å temperature response (~0.6-1.5 MK). Fitting the observation with an isothermal model, the derived temperature is around 0.65 MK. However, the low sensitivity of the 171 Å filter to high-temperature plasma does not provide estimates of the emission measure for temperatures above ~1.5 MK. Using the derived temperature of 0.65 MK, the observed 171 Å flux gives a density of the EUV ribbon of 3 × 10^11 cm^-3. This density is much lower than the density of the hard X-ray producing region (~10^13 to 10^14 cm^-3), suggesting that the EUV sources, though closely related spatially, lie at higher altitudes.

Krucker, Saem; Raftery, Claire L.; Hudson, Hugh S., E-mail: krucker@ssl.berkeley.edu [Space Sciences Laboratory, University of California, Berkeley, CA 94720-7450 (United States)

2011-06-10

335

NASA Astrophysics Data System (ADS)

We apply the Delaunay Tessellation Field Estimator (DTFE) to reconstruct and analyse the matter distribution and cosmic velocity flows in the local Universe on the basis of the PSCz galaxy survey. The prime objective of this study is the production of optimal-resolution 3D maps of the volume-weighted velocity and density fields throughout the nearby universe, the basis for a detailed study of the structure and dynamics of the cosmic web at each level probed by the underlying galaxy sample. Fully volume-covering 3D maps of the density and (volume-weighted) velocity fields in the cosmic vicinity, out to a distance of 150 h^-1 Mpc, are presented. Based on the Voronoi and Delaunay tessellations defined by the spatial galaxy sample, DTFE involves the estimation of density values on the basis of the volume of the related Delaunay tetrahedra and the subsequent use of the Delaunay tessellation as a natural multidimensional (linear) interpolation grid for the corresponding density and velocity fields throughout the sample volume. The linearized model of the spatial galaxy distribution and the corresponding peculiar velocities of the PSCz galaxy sample, produced by Branchini et al., forms the input sample for the DTFE study. The DTFE maps reproduce the high-density supercluster regions in optimal detail, both their internal structure as well as their elongated or flattened shape. The corresponding velocity flows trace the bulk and shear flows marking the region extending from the Pisces-Perseus supercluster, via the Local Supercluster, towards the Hydra-Centaurus and the Shapley concentration. The most outstanding and unique feature of the DTFE maps is the sharply defined radial outflow regions in and around underdense voids, marking the dynamical importance of voids in the local Universe. The maximum expansion rate of voids defines a sharp cut-off in the DTFE velocity divergence probability distribution function.
We found that on the basis of this cut-off, DTFE manages to consistently reproduce the value of Ωm ~ 0.35 underlying the linearized velocity data set.
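At its core, the DTFE density estimate at a sample point is (D+1) divided by the total volume of the Delaunay simplices that share the point. A minimal 2D sketch, taking the triangulation as given (constructing the Delaunay triangulation itself is omitted):

```python
def tri_area(p, q, r):
    """Area of a triangle from its vertex coordinates (2D cross product)."""
    return abs((q[0] - p[0]) * (r[1] - p[1])
             - (r[0] - p[0]) * (q[1] - p[1])) / 2.0

def dtfe_density(points, triangles):
    """DTFE vertex densities in 2D: rho_i = (D + 1) / W_i, D = 2, where
    W_i is the total area of the triangles sharing vertex i."""
    W = [0.0] * len(points)
    for tri in triangles:
        a = tri_area(*(points[k] for k in tri))
        for k in tri:
            W[k] += a
    return [3.0 / w if w > 0 else float("nan") for w in W]
```

With a real survey one would build the triangulation from the galaxy positions and then interpolate these vertex densities linearly inside each triangle.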

Romano-Díaz, Emilio; van de Weygaert, Rien

2007-11-01

336

Bayes and empirical Bayes estimators of abundance and density from spatial capture-recapture data.

In capture-recapture and mark-resight surveys, movements of individuals both within and between sampling periods can alter the susceptibility of individuals to detection over the region of sampling. In these circumstances spatially explicit capture-recapture (SECR) models, which incorporate the observed locations of individuals, allow population density and abundance to be estimated while accounting for differences in detectability of individuals. In this paper I propose two Bayesian SECR models, one for the analysis of recaptures observed in trapping arrays and another for the analysis of recaptures observed in area searches. In formulating these models I used distinct submodels to specify the distribution of individual home-range centers and the observable recaptures associated with these individuals. This separation of ecological and observational processes allowed me to derive a formal connection between Bayes and empirical Bayes estimators of population abundance that has not been established previously. I showed that this connection applies to every Poisson point-process model of SECR data and provides theoretical support for a previously proposed estimator of abundance based on recaptures in trapping arrays. To illustrate results of both classical and Bayesian methods of analysis, I compared Bayes and empirical Bayes estimates of abundance and density using recaptures from simulated and real populations of animals. Real populations included two iconic datasets: recaptures of tigers detected in camera-trap surveys and recaptures of lizards detected in area-search surveys. In the datasets I analyzed, classical and Bayesian methods provided similar - and often identical - inferences, which is not surprising given the sample sizes and the noninformative priors used in the analyses. PMID:24386325
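In SECR the observation submodel ties detection probability to the distance between a trap and an individual's home-range center. The half-normal form below is a common choice in the SECR literature; the abstract does not specify the paper's exact form, and g0, sigma, and the trap layout here are purely illustrative.

```python
import math

def halfnormal_p(dist, g0, sigma):
    """Per-occasion detection probability at distance `dist` from a trap."""
    return g0 * math.exp(-dist * dist / (2.0 * sigma * sigma))

def expected_captures(centre, traps, g0, sigma, occasions):
    """Expected (Poisson) count of captures of one individual whose
    home-range centre is `centre`, summed over all traps and occasions."""
    total = 0.0
    for tx, ty in traps:
        d = math.hypot(centre[0] - tx, centre[1] - ty)
        total += occasions * halfnormal_p(d, g0, sigma)
    return total
```

Detectability peaks at g0 when a trap sits on the home-range centre and decays smoothly with distance, which is what links the observed capture locations to the latent centre locations.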

Dorazio, Robert M

2013-01-01

337

Bayes and Empirical Bayes Estimators of Abundance and Density from Spatial Capture-Recapture Data

In capture-recapture and mark-resight surveys, movements of individuals both within and between sampling periods can alter the susceptibility of individuals to detection over the region of sampling. In these circumstances spatially explicit capture-recapture (SECR) models, which incorporate the observed locations of individuals, allow population density and abundance to be estimated while accounting for differences in detectability of individuals. In this paper I propose two Bayesian SECR models, one for the analysis of recaptures observed in trapping arrays and another for the analysis of recaptures observed in area searches. In formulating these models I used distinct submodels to specify the distribution of individual home-range centers and the observable recaptures associated with these individuals. This separation of ecological and observational processes allowed me to derive a formal connection between Bayes and empirical Bayes estimators of population abundance that has not been established previously. I showed that this connection applies to every Poisson point-process model of SECR data and provides theoretical support for a previously proposed estimator of abundance based on recaptures in trapping arrays. To illustrate results of both classical and Bayesian methods of analysis, I compared Bayes and empirical Bayes estimates of abundance and density using recaptures from simulated and real populations of animals. Real populations included two iconic datasets: recaptures of tigers detected in camera-trap surveys and recaptures of lizards detected in area-search surveys. In the datasets I analyzed, classical and Bayesian methods provided similar – and often identical – inferences, which is not surprising given the sample sizes and the noninformative priors used in the analyses. PMID:24386325

Dorazio, Robert M.

2013-01-01

338

NASA Astrophysics Data System (ADS)

This paper proposes an approach that integrates the self-organizing map (SOM) and kernel density estimation (KDE) techniques in an anomaly-based network intrusion detection (ABNID) system to monitor network traffic and capture potential abnormal behaviors. With the continuous development of network technology, information security has become a major concern for cyber system research. In modern net-centric and tactical warfare networks, it is even more critical to provide real-time protection for the availability, confidentiality, and integrity of the networked information. To this end, in this work we propose to explore the learning capabilities of SOM and integrate it with KDE for network intrusion detection. KDE is used to estimate the distributions of the observed random variables that describe the network system and determine whether the network traffic is normal or abnormal. Meanwhile, the learning and clustering capabilities of SOM are employed to obtain well-defined data clusters to reduce the computational cost of the KDE. The principle of learning in SOM is to self-organize the network of neurons to seek similar properties for certain input patterns. Therefore, SOM can form an approximation of the distribution of the input space in a compact fashion, reduce the number of terms in a kernel density estimator, and thus improve the efficiency of intrusion detection. We test the proposed algorithm on real-world data sets obtained from the Integrated Network Based Ohio University's Network Detective Service (INBOUNDS) system to show the effectiveness and efficiency of this method.
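The cost reduction described above comes from placing one kernel per SOM prototype, weighted by how many training points map to it, instead of one kernel per training point. A one-dimensional sketch under that assumption (prototype locations, hit counts, and bandwidth are illustrative, not taken from the paper):

```python
import math

def _gauss(u):
    return math.exp(-0.5 * u * u) / math.sqrt(2.0 * math.pi)

def prototype_kde(x, prototypes, counts, h):
    """KDE with one Gaussian kernel per SOM prototype, weighted by the
    number of training points mapped to that prototype."""
    n = sum(counts)
    return sum(c * _gauss((x - p) / h)
               for p, c in zip(prototypes, counts)) / (n * h)
```

Traffic whose feature value falls where this density is below a chosen threshold would be flagged as anomalous; the number of kernel evaluations is the number of prototypes, not the training-set size.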

Cao, Yuan; He, Haibo; Man, Hong; Shen, Xiaoping

2009-09-01

339

Bayes and empirical Bayes estimators of abundance and density from spatial capture-recapture data

In capture-recapture and mark-resight surveys, movements of individuals both within and between sampling periods can alter the susceptibility of individuals to detection over the region of sampling. In these circumstances spatially explicit capture-recapture (SECR) models, which incorporate the observed locations of individuals, allow population density and abundance to be estimated while accounting for differences in detectability of individuals. In this paper I propose two Bayesian SECR models, one for the analysis of recaptures observed in trapping arrays and another for the analysis of recaptures observed in area searches. In formulating these models I used distinct submodels to specify the distribution of individual home-range centers and the observable recaptures associated with these individuals. This separation of ecological and observational processes allowed me to derive a formal connection between Bayes and empirical Bayes estimators of population abundance that has not been established previously. I showed that this connection applies to every Poisson point-process model of SECR data and provides theoretical support for a previously proposed estimator of abundance based on recaptures in trapping arrays. To illustrate results of both classical and Bayesian methods of analysis, I compared Bayes and empirical Bayes estimates of abundance and density using recaptures from simulated and real populations of animals. Real populations included two iconic datasets: recaptures of tigers detected in camera-trap surveys and recaptures of lizards detected in area-search surveys. In the datasets I analyzed, classical and Bayesian methods provided similar – and often identical – inferences, which is not surprising given the sample sizes and the noninformative priors used in the analyses.

Dorazio, Robert M.

2013-01-01

340

Comparison of breast percent density estimation from raw versus processed digital mammograms

NASA Astrophysics Data System (ADS)

We compared breast percent density (PD%) measures obtained from raw and post-processed digital mammographic (DM) images. Bilateral raw and post-processed medio-lateral oblique (MLO) images from 81 screening studies were retrospectively analyzed. Image acquisition was performed with a GE Healthcare DS full-field DM system. Image post-processing was performed using the PremiumView™ algorithm (GE Healthcare). Area-based breast PD% was estimated by a radiologist using a semi-automated image thresholding technique (Cumulus, Univ. Toronto). Comparison of breast PD% between raw and post-processed DM images was performed using the Pearson correlation (r), linear regression, and Student's t-test. Intra-reader variability was assessed with a repeat reading of the same dataset. Our results show that breast PD% measurements from raw and post-processed DM images are highly correlated (r=0.98, R^2=0.95, p<0.001). Paired t-test comparison of breast PD% between the raw and the post-processed images showed a statistically significant difference of 1.2% (p = 0.006). Our results suggest that the relatively small magnitude of the absolute difference in PD% between raw and post-processed DM images is unlikely to be clinically significant in breast cancer risk stratification. Therefore, it may be feasible to use post-processed DM images for breast PD% estimation in clinical settings. Since most breast imaging clinics routinely use and store only the post-processed DM images, breast PD% estimation from post-processed data may accelerate the integration of breast density into breast cancer risk assessment models used in clinical practice.
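Once an intensity threshold has been chosen (in Cumulus this is done interactively by the reader), area-based PD% reduces to a pixel count inside the segmented breast. A schematic sketch with made-up arrays, not the Cumulus implementation:

```python
def percent_density(pixels, breast_mask, threshold):
    """Area-based percent density: share of breast pixels at or above
    the dense-tissue intensity threshold.

    pixels      -- 2D list of image intensities
    breast_mask -- 2D list of booleans marking the segmented breast
    threshold   -- reader-chosen dense-tissue cutoff
    """
    breast = dense = 0
    for row_p, row_m in zip(pixels, breast_mask):
        for v, inside in zip(row_p, row_m):
            if inside:
                breast += 1
                if v >= threshold:
                    dense += 1
    return 100.0 * dense / breast
```

Running the same computation on raw and post-processed images of one study is the per-case comparison underlying the correlation analysis.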

Li, Diane; Gavenonis, Sara; Conant, Emily; Kontos, Despina

2011-03-01

341

In this report we summarize electron-cloud simulations for the RHIC dipole regions at injection and transition to estimate if scrubbing over practical time scales at injection would reduce the electron cloud density at transition to significantly lower values. The lower electron cloud density at transition will allow for an increase in the ion intensity.

He,P.; Blaskiewicz, M.; Fischer, W.

2009-01-02

342

3. Reductions in crown density were estimated in 5% classes by reference either to a standard set of …

3. Reductions in crown density were estimated in 5% classes by reference either to a standard set … in the geographical interpretation of results. THE 1997 RESULTS 5. The crown density results, using both methods, … in crown condition that have taken place since 1987 by recording the proportion of trees in which …

343

Robust estimation of the self-similarity parameter in network traffic using wavelet transform

This article studies the problem of estimating the self-similarity parameter of network traffic traces. A robust wavelet-based procedure is proposed for this estimation task, deriving estimates that are less sensitive to some commonly encountered non-stationary traffic conditions, such as sudden level shifts and breaks. Two main ingredients of the proposed procedure are: (i) the application of a robust

Haipeng Shen; Zhengyuan Zhu; Thomas C. M. Lee

344

The (maximum) penalized-likelihood method of probability density estimation and bump-hunting is improved and exemplified by applications to scattering and chondrite data. We show how the hyperparameter in the method can be satisfactorily estimated by using statistics of goodness of fit. A Fourier expansion is found to be usually more expeditious than a Hermite expansion but a compromise is useful. The

I. J. Good; R. A. Gaskins

1980-01-01

345

Although a number of image classification approaches are available to estimate forest canopy density (FCD) using satellite data, assessment of their relative performances with tropical mixed deciduous vegetation is lacking. This study compared three image classification approaches – maximum likelihood classification (MLC), multiple linear regression (MLR) and FCD Mapper – in estimating the FCD of mixed deciduous forest in Myanmar.

Myat Su Mon; Nobuya Mizoue; Naing Zaw Htun; Tsuyoshi Kajisa; Shigejiro Yoshida

2011-01-01

346

Although a number of image classification approaches are available to estimate forest canopy density (FCD) using satellite data, assessment of their relative performances with tropical mixed deciduous vegetation is lacking. This study compared three image classification approaches – maximum likelihood classification (MLC), multiple linear regression (MLR) and FCD Mapper – in estimating the FCD of mixed deciduous forest in Myanmar.

Myat Su Mon; Nobuya Mizoue; Naing Zaw Htun; Tsuyoshi Kajisa; Shigejiro Yoshida

2012-01-01

347

In this paper, we study the local deviations of the empirical measure defined by the Kaplan-Meier (1958) estimator for the survival function. The results are applied to derive best rates of convergence for kernel estimators for the density and hazard rate function in the random censorship model.

Helmut Schafer

1986-01-01

348

Estuarine budget studies often suffer from uncertainties of net flux estimates in view of large temporal and spatial variabilities. Optimum spatial measurement density and material flux errors for a reasonably well mixed estuary were estimated by sampling 10 stations from surface to bottom simultaneously every hour for two tidal cycles in a 320-m-wide cross section in North Inlet, South Carolina.

Björn Kjerfve; L. Harold Stevenson; Jeffrey A. Proehl; Thomas H. Chrzanowski; Wiley M. Kitchens

1981-01-01

349

Several studies have attempted to compare subtidal animal population estimates obtained in a variety of ways using SCUBA diving and have reported considerable variation between the estimates obtained. This study investigated scale-, tidal-, equipment- and observer-induced variation individually through analysis of animal population density indices obtained using a number of techniques based on SCUBA diver visual survey. The

MDJ Sayer; C Poonian

2007-01-01

350

NSDL National Science Digital Library

Targeting a middle and high school population, this web page gives an introduction to the concept of density. It is an appendix of a larger site called MathMol (Mathematics and Molecules), designed as an introduction to molecular modeling.

351

Estimation of effective scatterer size and number density in near-infrared tomography

NASA Astrophysics Data System (ADS)

Light scattering from tissue originates from the fluctuations in intra-cellular and extra-cellular components, so it is possible that macroscopic scattering spectroscopy could be used to quantify sub-microscopic structures. Both electron microscopy (EM) and optical phase contrast microscopy were used to study the origin of scattering from tissue. EM studies indicate that lipid-bound particle sizes appear to be distributed as a monotonic exponential function, with sub-micron structures dominating the distribution. Given assumptions about the index of refraction change, the shape of the scattering spectrum in the near infrared as measured through bulk tissue is consistent with what would be predicted by Mie theory with these particle size histograms. The relative scattering intensities of breast tissue sections (10 normal and 23 abnormal) were studied by phase contrast microscopy. Results show that stroma has higher scattering than epithelium, and fat has the lowest values; tumor epithelium has lower scattering than normal epithelium; stroma associated with tumor has lower scattering than normal stroma. Mie theory estimation of the scattering spectra was used to estimate effective particle sizes, and this was applied retrospectively to normal whole-breast spectra accumulated in ongoing clinical exams. The effective sizes ranged between 20 and 1400 nm, which is consistent with the subcellular organelles and collagen matrix fibrils discussed previously. This estimation method was also applied to images from cancer regions. The results indicate that the effective scatterer sizes of the region of interest (ROI) are close to those of the background for both the cancer and benign patients; for the effective number density, there is a large difference between the ROI and the background for the cancer patients, whereas for the benign patients the ROI values are relatively close to those of the background.
Ongoing MRI-guided NIR studies indicated that the fibroglandular tissue had smaller effective scatterer size and larger effective number density than the adipose tissue. The studies in this thesis provide an interpretive approach to estimate average morphological scatter parameters of bulk tissue, through interpretation of diffuse scattering as coming from effective Mie scatterers.

Wang, Xin

2007-05-01

352

Wavelet-based neural network prediction of plasma etch profile nonuniformity

Profiles of plasma etching have conventionally been characterized by approximating the slope with an angle or anisotropy. This is critically limited in that detailed variations on the profile surface are inevitably neglected. In current high density plasma etching, this becomes more serious since unexpected microfeatures such as bowing or microtrenching are frequently formed along the profile surface.

B. Kim; S. Kim; K. Kim

2003-01-01

353

The genomic RNA of hepatitis C virus (HCV) in the plasma of volunteer blood donors was detected by using the polymerase chain reaction in a fraction of density 1.08 g/ml from sucrose density gradient equilibrium centrifugation. When the fraction was treated with the detergent NP40 and recentrifuged in sucrose, the HCV RNA banded at 1.25 g/ml. Assuming that NP40 removed a…

Hideaki Miyamoto; Hiroaki Okamoto; Koei Sato; Takeshi Tanaka; Shunji Mishiro

1992-01-01

354

Statistical estimation of femur micro-architecture using optimal shape and density predictors.

The personalization of trabecular micro-architecture has been recently shown to be important in patient-specific biomechanical models of the femur. However, high-resolution in vivo imaging of bone micro-architecture using existing modalities is still infeasible in practice due to the associated acquisition times, costs, and X-ray radiation exposure. In this study, we describe a statistical approach for the prediction of the femur micro-architecture based on the more easily extracted subject-specific bone shape and mineral density information. To this end, a training sample of ex vivo micro-CT images is used to learn the existing statistical relationships within the low and high resolution image data. More specifically, optimal bone shape and mineral density features are selected based on their predictive power and used within a partial least square regression model to estimate the unknown trabecular micro-architecture within the anatomical models of new subjects. The experimental results demonstrate the accuracy of the proposed approach, with average errors of 0.07 for both the degree of anisotropy and tensor norms. PMID:25624314
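The prediction step can be sketched with a minimal NIPALS partial least squares regression (illustrative only; the paper's feature selection and imaging pipeline are omitted, and the toy data below are made up):

```python
import numpy as np

def pls_regression(X, Y, n_components):
    """Minimal NIPALS PLS2: returns (B, x_mean, y_mean) so that
    (X - x_mean) @ B + y_mean approximates Y.  Assumes non-degenerate data."""
    x_mean, y_mean = X.mean(0), Y.mean(0)
    Xc, Yc = X - x_mean, Y - y_mean
    W, P, Q = [], [], []
    for _ in range(n_components):
        u = Yc[:, :1]                          # initial Y-score
        for _ in range(500):                   # NIPALS inner iterations
            w = Xc.T @ u
            w /= np.linalg.norm(w)             # X weights
            t = Xc @ w                         # X scores
            c = Yc.T @ t
            c /= np.linalg.norm(c)             # Y weights
            u_new = Yc @ c                     # Y scores
            if np.linalg.norm(u_new - u) < 1e-10:
                break
            u = u_new
        tt = float(t.T @ t)
        p = Xc.T @ t / tt                      # X loadings
        q = Yc.T @ t / tt                      # Y loadings
        Xc = Xc - t @ p.T                      # deflate both blocks
        Yc = Yc - t @ q.T
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.hstack(W), np.hstack(P), np.hstack(Q)
    B = W @ np.linalg.inv(P.T @ W) @ Q.T       # overall coefficients
    return B, x_mean, y_mean

# Toy check: a low-rank linear map plus small noise is recovered well
rng = np.random.default_rng(0)
X = rng.normal(size=(80, 5))
Y = X @ rng.normal(size=(5, 2)) + 0.01 * rng.normal(size=(80, 2))
B, xm, ym = pls_regression(X, Y, n_components=4)
pred = (X - xm) @ B + ym
print(1.0 - ((Y - pred) ** 2).sum() / ((Y - Y.mean(0)) ** 2).sum())  # R^2 near 1
```

In the paper's setting, X would hold the selected shape and mineral density predictors and Y the trabecular micro-architecture descriptors.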

Lekadir, Karim; Hazrati-Marangalou, Javad; Hoogendoorn, Corné; Taylor, Zeike; van Rietbergen, Bert; Frangi, Alejandro F

2015-02-26

355

Volcanic explosion clouds - Density, temperature, and particle content estimates from cloud motion

NASA Technical Reports Server (NTRS)

Photographic records of 10 vulcanian eruption clouds produced during the 1978 eruption of Fuego Volcano in Guatemala have been analyzed to determine cloud velocity and acceleration at successive stages of expansion. Cloud motion is controlled by air drag (dominant during early, high-speed motion) and buoyancy (dominant during late motion when the cloud is convecting slowly). Cloud densities in the range 0.6 to 1.2 times that of the surrounding atmosphere were obtained by fitting equations of motion for two common cloud shapes (spheres and vertical cylinders) to the observed motions. Analysis of the heat budget of a cloud permits an estimate of cloud temperature and particle weight fraction to be made from the density. Model results suggest that clouds generally reached temperatures within 10 K of that of the surrounding air within 10 seconds of formation and that dense particle weight fractions were less than 2% by this time. The maximum sizes of dense particles supported by motion in the convecting clouds range from 140 to 1700 microns.
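As a rough illustration of the kind of fit involved, the sketch below solves an assumed sphere drag/buoyancy equation of motion for cloud density. The force balance, drag coefficient, and neglect of added mass and entrainment are simplifications for illustration, not the paper's model:

```python
import math

def cloud_density(a, v, r, rho_air, cd=0.5, g=9.81):
    """Solve an assumed force balance for the density of a rising
    spherical cloud of radius r, observed velocity v and acceleration a:
        rho_c*V*a = (rho_air - rho_c)*V*g - 0.5*cd*rho_air*A*v**2
    Added mass, entrainment and shape effects are neglected; cd is a
    made-up drag coefficient, not a value from the paper."""
    V = 4.0 / 3.0 * math.pi * r ** 3         # cloud volume
    A = math.pi * r ** 2                     # frontal area
    drag = 0.5 * cd * rho_air * A * v ** 2
    return (rho_air * V * g - drag) / (V * (a + g))

# A cloud that neither moves nor accelerates is neutrally buoyant:
print(cloud_density(a=0.0, v=0.0, r=100.0, rho_air=1.0))   # -> 1.0
```

Fitting such a relation to the photographically measured velocity and acceleration histories is what yields the 0.6 to 1.2 relative density range quoted above.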

Wilson, L.; Self, S.

1980-01-01

356

Image inpainting using wavelet-based inter- and intra-scale dependency

Image inpainting or completion is a technique to restore a damaged image. Recently, various approaches have been proposed. The wavelet transform has been used for various image analysis problems due to its nice multi-resolution properties and decoupling characteristics. We propose to utilize the advantages of wavelet transforms for image inpainting. Unlike other inpainting algorithms, we can expect better global structure estimation…

Dongwook Cho; Tien D. Bui

2008-01-01

357

Wavelet-based Analysis of Wavelike Structures in the Ionospheric F-Region Electron Concentration

The present work provides a contribution to the study of short-term variabilities (from 15 minutes to 4 hours) observed in the F region of the ionosphere and due to acoustic-gravity waves (AGW). To this end, electron densities are measured at the Pruhonice observatory (49.9N, 14.5E) by vertical ionospheric sounding with repetition times of 5 minutes and 1 minute. From data collected during several campaigns…

P. Sauli; P. Abry; J. Boska

2002-01-01

358

Estimating basin thickness using a high-density passive-source geophone array

NASA Astrophysics Data System (ADS)

In 2010 an array of 834 single-component geophones was deployed across the Bighorn Mountain Range in northern Wyoming as part of the Bighorn Arch Seismic Experiment (BASE). The goal of this deployment was to test the capabilities of these instruments as recorders of passive-source observations in addition to active-source observations for which they are typically used. The results are quite promising, having recorded 47 regional and teleseismic earthquakes over a two-week deployment. These events ranged from magnitude 4.1 to 7.0 (mb) and occurred at distances up to 10°. Because these instruments were deployed at ca. 1000 m spacing we were able to resolve the geometries of two major basins from the residuals of several well-recorded teleseisms. The residuals of these arrivals, converted to basinal thickness, show a distinct westward thickening in the Bighorn Basin that agrees with industry-derived basement depth information. Our estimates of thickness in the Powder River Basin do not match industry estimates in certain areas, likely due to localized high-velocity features that are not included in our models. Thus, with a few cautions, it is clear that high-density single-component passive arrays can provide valuable constraints on basinal geometries, and could be especially useful where basinal geometry is poorly known.
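The residual-to-thickness conversion can be illustrated with a one-layer, vertical-ray approximation. The velocities below are hypothetical, not values from the experiment:

```python
def basin_thickness(residual_s, v_sediment=2500.0, v_basement=6000.0):
    """Convert a teleseismic delay (s) into sediment thickness (m) for a
    vertical ray through a uniform slow basin over fast basement:
        dt = h * (1/v_sediment - 1/v_basement)
    The velocities are illustrative, not values from the study."""
    return residual_s / (1.0 / v_sediment - 1.0 / v_basement)

# A 0.5 s delay maps to roughly 2.1 km of basin fill with these velocities
print(basin_thickness(0.5))
```

Localized high-velocity bodies, as noted for the Powder River Basin, violate the uniform-velocity assumption and bias such estimates.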

O'Rourke, C. T.; Sheehan, A. F.; Erslev, E. A.; Miller, K. C.

2014-09-01

359

NASA Astrophysics Data System (ADS)

Spectral estimation of irregularly sampled velocity data issued from Laser Doppler Anemometry measurements is considered in this paper. A new method is proposed, based on linear interpolation followed by a deconvolution procedure. In this method, the analytic expression of the autocorrelation function of the interpolated data is expressed as a linear function of the autocorrelation function of the data to be estimated. For the analysis of both simulated and experimental data, the results of the proposed method are compared with those of the reference methods in LDA: refinement of the autocorrelation function of the sample-and-hold interpolated signal (Nobach et al., Exp Fluids 24:499-509, 1998), refinement of the power spectral density of the sample-and-hold interpolated signal (Simon and Fitzpatrick, Exp Fluids 37:272-280, 2004), and the fuzzy slotting technique with local normalization and weighting algorithm (Nobach, Exp Fluids 32:337-345, 2002). Based on these results, it is concluded that the proposed method performs better than the other methods, particularly in terms of bias and variance.
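A bare-bones version of the interpolation front end, without the deconvolution refinement that distinguishes the proposed method, can be sketched as:

```python
import numpy as np

def interpolated_psd(t, x, fs):
    """Periodogram of irregularly sampled data after linear interpolation
    onto a uniform grid at rate fs.  Sketch only: the deconvolution step
    that corrects the interpolation filtering is omitted here."""
    tu = np.arange(t[0], t[-1], 1.0 / fs)        # uniform time grid
    xu = np.interp(tu, t, x)                     # linear interpolation
    xu = xu - xu.mean()
    spec = np.abs(np.fft.rfft(xu)) ** 2 / (fs * len(xu))   # one-sided periodogram
    freqs = np.fft.rfftfreq(len(xu), d=1.0 / fs)
    return freqs, spec

# A 5 Hz tone sampled at 2000 random instants over 10 s
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 10.0, 2000))
x = np.sin(2.0 * np.pi * 5.0 * t)
freqs, spec = interpolated_psd(t, x, fs=50.0)
print(freqs[np.argmax(spec)])                    # spectral peak near 5 Hz
```

The interpolation acts as a low-pass filter on the true spectrum, which is exactly the distortion the paper's deconvolution step is designed to remove.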

Moreau, S.; Plantier, G.; Valière, J.-C.; Bailliet, H.; Simon, L.

2011-01-01

360

Methods for Estimating Environmental Effects and Constraints on NexGen: High Density Case Study

NASA Technical Reports Server (NTRS)

This document provides a summary of the current methods developed by Metron Aviation for the estimation of environmental effects and constraints on the Next Generation Air Transportation System (NextGen). This body of work incorporates many of the key elements necessary to achieve such an estimate. Each section contains the background and motivation for the technical elements of the work, a description of the methods used, and possible next steps. The current methods described in this document were selected in an attempt to provide a good balance between accuracy and fairly rapid turnaround times, to best advance Joint Planning and Development Office (JPDO) System Modeling and Analysis Division (SMAD) objectives while also supporting the needs of the JPDO Environmental Working Group (EWG). In particular, this document describes methods applied to support the High Density (HD) Case Study performed during the spring of 2008. A reference day (in 2006) is modeled to describe current system capabilities, while the future demand is applied to multiple alternatives to analyze system performance. The major variables in the alternatives are operational/procedural capabilities for airport, terminal, and en route airspace, along with projected improvements to airframe, engine and navigational equipment.

Augustine, S.; Ermatinger, C.; Graham, M.; Thompson, T.

2010-01-01

361

Genomic selection has the potential to increase genetic progress. Genotype imputation of high-density single-nucleotide polymorphism (SNP) genotypes can improve the cost efficiency of genomic breeding value (GEBV) prediction for pig breeding. Consequently, the objectives of this work were to: (1) estimate accuracy of genomic evaluation and GEBV for three traits in a Yorkshire population and (2) quantify the loss of accuracy of genomic evaluation and GEBV when genotypes were imputed under two scenarios: a high-cost, high-accuracy scenario in which only selection candidates were imputed from a low-density platform and a low-cost, low-accuracy scenario in which all animals were imputed using a small reference panel of haplotypes. Phenotypes and genotypes obtained with the PorcineSNP60 BeadChip were available for 983 Yorkshire boars. Genotypes of selection candidates were masked and imputed using tagSNP in the GeneSeek Genomic Profiler (10K). Imputation was performed with BEAGLE using 128 or 1800 haplotypes as reference panels. GEBV were obtained through an animal-centric ridge regression model using de-regressed breeding values as response variables. Accuracy of genomic evaluation was estimated as the correlation between estimated breeding values and GEBV in a 10-fold cross validation design. Accuracy of genomic evaluation using observed genotypes was high for all traits (0.65-0.68). Using genotypes imputed from a large reference panel (accuracy: R(2) = 0.95) for genomic evaluation did not significantly decrease accuracy, whereas a scenario with genotypes imputed from a small reference panel (R(2) = 0.88) did show a significant decrease in accuracy. Genomic evaluation based on imputed genotypes in selection candidates can be implemented at a fraction of the cost of a genomic evaluation using observed genotypes and still yield virtually the same accuracy. 
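The GEBV step can be illustrated with a plain SNP-effect ridge regression on simulated genotypes. This in-sample toy is a stand-in for the paper's animal-centric model with de-regressed breeding values and 10-fold cross-validation; all data and the shrinkage parameter are made up:

```python
import numpy as np

def gebv_ridge(G, y, lam):
    """SNP-effect ridge regression: solve (G'G + lam*I) b = G'y and
    return fitted genomic values G b.  lam would normally be tied to
    trait heritability; here it is arbitrary."""
    p = G.shape[1]
    b = np.linalg.solve(G.T @ G + lam * np.eye(p), G.T @ y)
    return G @ b

# Simulated data: 200 animals, 500 SNPs coded 0/1/2, 20 causal SNPs
rng = np.random.default_rng(2)
G = rng.integers(0, 3, size=(200, 500)).astype(float)
beta = np.zeros(500)
beta[:20] = rng.normal(size=20)
y = G @ beta + rng.normal(size=200)

gebv = gebv_ridge(G - G.mean(0), y - y.mean(), lam=50.0)
print(np.corrcoef(gebv, y - y.mean())[0, 1])     # in-sample fit, close to 1
```

In a real evaluation the correlation would instead be computed between GEBV and estimated breeding values of animals held out of training, as in the paper's cross-validation design.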
On the other hand, using a very small reference panel of haplotypes to impute training animals and selection candidates results in lower accuracy of genomic evaluation. PMID:24531728

Badke, Yvonne M; Bates, Ronald O; Ernst, Catherine W; Fix, Justin; Steibel, Juan P

2014-04-01

362

Background Runs of homozygosity are long, uninterrupted stretches of homozygous genotypes that enable reliable estimation of levels of inbreeding (i.e., autozygosity) based on high-throughput, chip-based single nucleotide polymorphism (SNP) genotypes. While the theoretical definition of runs of homozygosity is straightforward, their empirical identification depends on the type of SNP chip used to obtain the data and on a number of factors, including the number of heterozygous calls allowed to account for genotyping errors. We analyzed how SNP chip density and genotyping errors affect estimates of autozygosity based on runs of homozygosity in three cattle populations, using genotype data from an SNP chip with 777 972 SNPs and a 50 k chip. Results Data from the 50 k chip led to overestimation of the number of runs of homozygosity that are shorter than 4 Mb, since the analysis could not identify heterozygous SNPs that were present on the denser chip. Conversely, data from the denser chip led to underestimation of the number of runs of homozygosity that were longer than 8 Mb, unless the presence of a small number of heterozygous SNP genotypes was allowed within a run of homozygosity. Conclusions We have shown that SNP chip density and genotyping errors introduce patterns of bias in the estimation of autozygosity based on runs of homozygosity. SNP chips with 50 000 to 60 000 markers are frequently available for livestock species and their information leads to a conservative prediction of autozygosity from runs of homozygosity longer than 4 Mb. Not allowing heterozygous SNP genotypes to be present in a homozygosity run, as has been advocated for human populations, is not adequate for livestock populations because they have much higher levels of autozygosity and therefore longer runs of homozygosity. 
When allowing a small number of heterozygous calls, current software does not differentiate between situations where these calls are adjacent, and therefore indicative of an actual break in the run, and those where they are scattered across the length of the homozygous segment. The simple graphical tests used in this paper are a currently available, yet tedious, solution. PMID:24168655
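A minimal run-of-homozygosity scan that tolerates a bounded number of heterozygous calls might look like the sketch below; the thresholds are illustrative only, and real tools also apply physical length (Mb) and SNP-density criteria:

```python
def runs_of_homozygosity(genotypes, min_length=50, max_het=1):
    """Scan one chromosome's genotype codes (0/2 homozygous, 1 heterozygous)
    for runs of homozygosity, tolerating up to max_het heterozygous calls
    per run as presumed genotyping errors.  Returns half-open
    (start, end) index pairs."""
    runs, start, het = [], None, 0
    for i, g in enumerate(genotypes):
        if g != 1:                        # homozygous call: run continues
            if start is None:
                start, het = i, 0
        else:
            het += 1
            if start is not None and het > max_het:
                if i - start >= min_length:
                    runs.append((start, i))
                start, het = None, 0
    if start is not None and len(genotypes) - start >= min_length:
        runs.append((start, len(genotypes)))
    return runs

# One tolerated heterozygous call does not break the run:
print(runs_of_homozygosity([0] * 60 + [1] + [2] * 60))   # -> [(0, 121)]
```

Note that this simple budget treats adjacent and scattered heterozygous calls identically, which is precisely the limitation of current software that the passage above points out.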

2013-01-01

363

On L^p-Resolvent Estimates and the Density of Eigenvalues for Compact Riemannian Manifolds

NASA Astrophysics Data System (ADS)

We address an interesting question raised by Dos Santos Ferreira, Kenig and Salo (Forum Math, 2014) about the regions of the complex plane for which there can be uniform resolvent estimates for the Laplace-Beltrami operator Δ_g with metric g on a given compact boundaryless Riemannian manifold of dimension n. This is related to earlier work of Kenig, Ruiz and the third author (Duke Math J 55:329-347, 1987) for the Euclidean Laplacian, in which case the region is the entire complex plane minus any disc centered at the origin. Presently, we show that for the round metric on the sphere, S^n, the resolvent estimates in (Dos Santos Ferreira et al. in Forum Math, 2014), involving a much smaller region, are essentially optimal. We do this by establishing sharp bounds based on the distance from the spectral parameter to the spectrum of Δ_g. In the other direction, we also show that the bounds in (Dos Santos Ferreira et al. in Forum Math, 2014) can be sharpened logarithmically for manifolds with nonpositive curvature, and by powers in the case of the torus, T^n, with the flat metric. The latter improves earlier bounds of Shen (Int Math Res Not 1:1-31, 2001). The work of (Dos Santos Ferreira et al. in Forum Math, 2014) and (Shen in Int Math Res Not 1:1-31, 2001) was based on Hadamard parametrices for the resolvent. Ours is based on the related Hadamard parametrices for the wave equation, and it follows ideas in (Sogge in Ann Math 126:439-447, 1987) of proving L^p-multiplier estimates using small-time wave equation parametrices and the spectral projection estimates from (Sogge in J Funct Anal 77:123-138, 1988). This approach allows us to adapt arguments in Bérard (Math Z 155:249-276, 1977) and Hlawka (Monatsh Math 54:1-36, 1950) to obtain the aforementioned improvements over (Dos Santos Ferreira et al. in Forum Math, 2014) and (Shen in Int Math Res Not 1:1-31, 2001). 
Further improvements for the torus are obtained using recent techniques of the first author (Bourgain in Israel J Math 193(1):441-458, 2013) and his work with Guth (Bourgain and Guth in Geom Funct Anal 21:1239-1295, 2011) based on the multilinear estimates of Bennett, Carbery and Tao (Math Z 2:261-302, 2006). Our approach also allows us to give a natural necessary condition for favorable resolvent estimates that is based on a measurement of the density of the spectrum, and, moreover, a necessary and sufficient condition based on natural improved spectral projection estimates for shrinking intervals, as opposed to those in (Sogge in J Funct Anal 77:123-138, 1988) for unit-length intervals. We show that the resolvent estimates are sensitive to clustering within the spectrum, which is not surprising given Sommerfeld's original conjecture (Sommerfeld in Physikal Zeitschr 11:1057-1066, 1910) about these operators.

Bourgain, Jean; Shao, Peng; Sogge, Christopher D.; Yao, Xiaohua

2015-02-01

364

The design of application-specific integrated circuits and/or multiprocessor systems is usually required in order to improve the performance of multidimensional applications such as digital-image processing and computer vision. Wavelet-based algorithms have been found promising among these applications due to the features of hierarchical signal analysis and multiresolution analysis. Because of the large size of multidimensional input data, off-chip random…

Dongming Peng; Mi Lu

2005-01-01

365

Long-range dependence in the volatility of commodity futures prices: Wavelet-based evidence

NASA Astrophysics Data System (ADS)

Commodity futures have long been used to facilitate risk management and inventory stabilization. The study of commodity futures prices has attracted much attention in the literature because they are highly volatile and because commodities represent a large proportion of the export value in many developing countries. Previous research has reported apparently contradictory findings about the presence of long memory or, more generally, long-range dependence. This note investigates the nature of long-range dependence in the volatility of 14 energy and agricultural commodity futures price series using the improved Hurst coefficient (H) estimator of Abry, Teyssière and Veitch. This estimator is motivated by the ability of wavelets to detect self-similarity and also enables a test for the stability of H. The results show evidence of long-range dependence for all 14 commodities and of a non-stationary H for 9 of the 14 commodities.
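The wavelet route to Hurst estimation can be sketched with a Haar transform: regress the log2 energy of the detail coefficients on the octave. This is a simplified stand-in for the Abry-Teyssière-Veitch estimator used in the note, without its bias correction or stability test:

```python
import numpy as np

def hurst_wavelet(x, j_min=2, j_max=6):
    """Wavelet-based Hurst estimate (Haar wavelet): regress the log2 mean
    squared detail coefficient on octave j; for a stationary fractal
    signal the slope is approximately 2H - 1."""
    a = np.asarray(x, dtype=float)
    octaves, log_energy = [], []
    for j in range(1, j_max + 1):
        n = len(a) // 2 * 2
        even, odd = a[:n:2], a[1:n:2]
        d = (even - odd) / np.sqrt(2.0)      # Haar detail coefficients
        a = (even + odd) / np.sqrt(2.0)      # approximation for next octave
        if j >= j_min:
            octaves.append(j)
            log_energy.append(np.log2(np.mean(d ** 2)))
    slope = np.polyfit(octaves, log_energy, 1)[0]
    return (slope + 1.0) / 2.0

# White noise has no long-range dependence, so H should come out near 0.5
print(hurst_wavelet(np.random.default_rng(3).normal(size=2 ** 14)))
```

Long-range dependence of the kind reported for the commodity series corresponds to estimates of H significantly above 0.5.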

Power, Gabriel J.; Turvey, Calum G.

2010-01-01

366

NASA Technical Reports Server (NTRS)

The characterization and mapping of land cover/land use of forest areas, such as the Central African rainforest, is a very complex task. This complexity is mainly due to the extent of such areas and, as a consequence, to the lack of full and continuous cloud-free coverage of those large regions by any single remote sensing instrument. In order to provide improved vegetation maps of Central Africa and to develop forest monitoring techniques for applications at the local and regional scales, we propose to utilize multi-sensor remote sensing observations coupled with in-situ data. Fusion and clustering of multi-sensor data are the first steps towards the development of such a forest monitoring system. In this paper, we describe some preliminary experiments involving the fusion of SAR and Landsat image data of the Lope Reserve in Gabon. Similarly to previous fusion studies, our fusion method is wavelet-based. The fusion provides a new image data set which contains more detailed texture features and preserves the large homogeneous regions that are observed by the Thematic Mapper sensor. The fusion step is followed by unsupervised clustering and provides a vegetation map of the area.

LeMoigne, Jacqueline; Laporte, Nadine; Netanyahuy, Nathan S.; Zukor, Dorothy (Technical Monitor)

2001-01-01

367

NASA Astrophysics Data System (ADS)

A new method for designing two-channel causal stable IIR PR filter banks and wavelet bases is proposed. It is based on the structure previously proposed by Phoong et al. (1995). Such a filter bank is parameterized by two functions, α(z) and β(z), which can be chosen as all-pass functions to obtain IIR filter banks with very high stopband attenuation. One of the problems with this choice is that a bump of about 4 dB always exists near the transition band of the analysis and synthesis filters. The stopband attenuation of the high-pass analysis filter is also 10 dB lower than that of the low-pass filter. By choosing β(z) and α(z) as an all-pass function and a type-II linear-phase finite impulse response function, respectively, the bumping can be significantly suppressed. In addition, the stopband attenuation of the high-pass filter can be controlled easily. The design problem is formulated as a polynomial approximation problem and is solved efficiently by the Remez exchange algorithm. The extension of this method to the design of a class of IIR wavelet bases is also considered.

Mao, J. S.; Chan, S. C.; Ho, Ka L.

2000-10-01

368

EMPIRICAL MODE DECOMPOSITION, FRACTIONAL GAUSSIAN NOISE AND HURST EXPONENT ESTIMATION

Analysis and statistical characterization of the obtained modes reveal an equivalent filter bank structure and the usefulness of this technique for estimating scaling exponents. New EMD-based methods are proposed and quantitatively compared to classical wavelet-based ones.

Gonçalves, Paulo

369

The search for easy-to-use indices that substitute for direct estimation of animal density is a common theme in wildlife and conservation science, but one fraught with well-known perils (Nichols & Conroy, 1996; Yoccoz, Nichols & Boulinier, 2001; Pollock et al., 2002). To establish the utility of an index as a substitute for an estimate of density, one must: (1) demonstrate a functional relationship between the index and density that is invariant over the desired scope of inference; (2) calibrate the functional relationship by obtaining independent measures of the index and the animal density; (3) evaluate the precision of the calibration (Diefenbach et al., 1994). Carbone et al. (2001) argue that the number of camera-days per photograph is a useful index of density for large, cryptic, forest-dwelling animals, and proceed to calibrate this index for tigers (Panthera tigris). We agree that a properly calibrated index may be useful for rapid assessments in conservation planning. However, Carbone et al. (2001), who desire to use their index as a substitute for density, do not adequately address the three elements noted above. Thus, we are concerned that others may view their methods as justification for not attempting directly to estimate animal densities, without due regard for the shortcomings of their approach.

Jennelle, C.S.; Runge, M.C.; MacKenzie, D.I.

2002-01-01

370

X-Ray Methods to Estimate Breast Density Content in Breast Tissue

NASA Astrophysics Data System (ADS)

This work focuses on analyzing x-ray methods to estimate the fat and fibroglandular contents in breast biopsies and in breasts. The knowledge of fat in the biopsies could aid in their wide-angle x-ray scatter analyses. A higher mammographic density (fibrous content) in breasts is an indicator of higher cancer risk. Simulations for 5 mm thick breast biopsies composed of fibrous, cancer, and fat and for 4.2 cm thick breast fat/fibrous phantoms were done. Data from experimental studies using plastic biopsies were analyzed. The 5 mm diameter, 5 mm thick plastic samples consisted of layers of polycarbonate (lexan), polymethyl methacrylate (PMMA-lucite) and polyethylene (polyet). In terms of the total linear attenuation coefficients, lexan ≈ fibrous, lucite ≈ cancer and polyet ≈ fat. The detectors were of two types, photon counting (CdTe) and energy integrating (CCD). For biopsies, three photon counting methods were performed to estimate the fat (polyet) using simulation and experimental data. The two basis function method, which assumed the biopsies were composed of two materials, fat and a 50:50 mixture of fibrous (lexan) and cancer (lucite), appears to be the most promising method. Discrepancies were observed between the results obtained via simulation and experiment. Potential causes are the spectrum and the attenuation coefficient values used for simulations. An energy integrating method was compared to the two basis function method using experimental and simulation data. A slight advantage was observed for photon counting, whereas both detectors gave similar results for the 4.2 cm thick breast phantom simulations. The percentage of fibrous within a 9 cm diameter circular phantom of fibrous/fat tissue was estimated via a fan beam geometry simulation. Both methods yielded good results. Computed tomography (CT) images of the circular phantom were obtained using both detector types. 
The Radon transforms were estimated via four energy integrating techniques and one photon counting technique. Contrast, signal-to-noise ratio (SNR) and pixel values between different regions of interest were analyzed. The two basis function method and two of the energy integrating methods (calibration, beam hardening correction) gave the highest and most linear curves for contrast and SNR.
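In the noiseless two-energy case, the two basis function idea reduces to a 2x2 linear solve. The attenuation coefficients and thicknesses below are hypothetical, chosen only to show the mechanics:

```python
import numpy as np

def two_basis_thicknesses(log_atten, mu_a, mu_b):
    """Two basis function decomposition: model measured -ln(I/I0) at two
    energies as mu_a(E)*t_a + mu_b(E)*t_b and solve the 2x2 system for
    the basis-material thicknesses t_a, t_b."""
    M = np.column_stack([mu_a, mu_b])
    return np.linalg.solve(M, log_atten)

# Hypothetical linear attenuation coefficients (1/mm) at two energies
mu_fat = np.array([0.020, 0.015])       # stands in for fat (polyethylene)
mu_mix = np.array([0.035, 0.024])       # stands in for the 50:50 mixture
true_t = np.array([3.0, 2.0])           # mm of each basis material
measured = np.column_stack([mu_fat, mu_mix]) @ true_t
print(two_basis_thicknesses(measured, mu_fat, mu_mix))   # recovers true_t
```

With a polychromatic spectrum and real detectors the forward model is no longer exactly linear, which is one source of the simulation/experiment discrepancies noted above.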

Maraghechi, Borna

371

Liquefied natural gas (LNG) densities can be measured directly but are usually determined indirectly in custody transfer measurement by using a density correlation based on temperature and composition measurements. An LNG densimeter test facility at the National Bureau of Standards uses an absolute densimeter based on the Archimedes principle, while a test facility at Gaz de France uses a correlation method based on measurement of composition and density. A comparison between these two test facilities using a portable version of the absolute densimeter provides an experimental estimate of the uncertainty of the indirect method of density measurement for the first time, on a large (32 L) sample. The two test facilities agree for pure methane to within about 0.02%. For the LNG-like mixtures consisting of methane, ethane, propane, and nitrogen with the methane concentrations always higher than 86%, the calculated density is within 0.25% of the directly measured density 95% of the time.

Siegwarth, J.D.; LaBrecque, J.F.; Roncier, M.; Philippe, R.; Saint-Just, J.

1982-12-16

372

Wavelet based error correction and predictive uncertainty of a hydrological forecasting system

NASA Astrophysics Data System (ADS)

River discharge predictions most often show errors with scaling properties of unknown source and statistical structure that degrade the quality of forecasts. This is especially true for lead-time ranges greater than a few days. Since the European Flood Alert System (EFAS) provides discharge forecasts up to ten days ahead, it is necessary to take these scaling properties into consideration. For example, the range of scales for the error that occurs in springtime, caused by long-lasting snowmelt processes, is by far larger than that of the error that appears during the summer period, caused by convective rain fields of short duration. The wavelet decomposition is an excellent way to provide the detailed model error at different levels in order to estimate the (unobserved) state variables more precisely. A Vector-AutoRegressive model with eXogenous input (VARX) is fitted for the different levels of wavelet decomposition simultaneously and, after predicting the next time steps ahead for each scale, a reconstruction formula is applied to transform the predictions in the wavelet domain back to the original time domain. The Bayesian Uncertainty Processor (BUP) developed by Krzysztofowicz is an efficient method to estimate the full predictive uncertainty, which is derived by integrating the hydrological model uncertainty and the meteorological input uncertainty. A hydrological uncertainty processor has been applied to the error-corrected discharge series first, in order to derive the predictive conditional distribution under the hypothesis that there is no input uncertainty. The uncertainty of the forecasted meteorological input forcing the hydrological model is derived from the combination of deterministic weather forecasts and ensemble prediction systems (EPS), and the Input Processor maps this input uncertainty into the output uncertainty under the hypothesis that there is no hydrological uncertainty. 
The main objective of this Bayesian forecasting system is to get an estimate of the conditional probability distribution of the future observed quantity (i.e. the discharge in the next days) given the available sample of model predictions by integrating optimally the hydrological and the input uncertainty. At the moment this integrated system of error correction and predictive uncertainty estimation has been tested and set up for operational use at some stations in Central Europe only, but will be extended to the EFAS domain within the near future.

Bogner, Konrad; Pappenberger, Florian; Thielen, Jutta; de Roo, Ad

2010-05-01

373

Optical Density Analysis of X-Rays Utilizing Calibration Tooling to Estimate Thickness of Parts

NASA Technical Reports Server (NTRS)

This process is designed to estimate the thickness change of a material through data analysis of a digitized version of an x-ray (or a digital x-ray) containing the material (with the thickness in question) and various tooling. Using this process, it is possible to estimate a material's thickness change in a region of the material or part that is thinner than the rest of the reference thickness. However, that same principle can be used to determine the thickness change of material using a thinner region to determine thickening, or it can be used to develop contour plots of an entire part. Proper tooling must be used. An x-ray film with an S-shaped characteristic curve or a digital x-ray device with a product resulting in like characteristics is necessary. If a film existed with linear characteristics, this type of film would be ideal; however, at the time of this reporting, no such film is known. Machined components (with known fractional thicknesses) of a like material (similar density) to that of the material to be measured are necessary. The machined components should have machined through-holes. For ease of use and better accuracy, the through-holes should be a size larger than 0.125 in. (3.2 mm). Standard components for this use are known as penetrameters or image quality indicators. Also needed is standard x-ray equipment, if film is used in place of digital equipment, or x-ray digitization equipment with proven conversion properties. Typical x-ray digitization equipment is commonly used in the medical industry, and creates digital images of x-rays in DICOM format. It is recommended to scan the image in a 16-bit format; however, 12-bit and 8-bit resolutions are acceptable. Finally, x-ray analysis software that allows accurate digital image density calculations, such as Image-J freeware, is needed. 
The actual procedure requires the test article to be placed on the raw x-ray, ensuring the region of interest is aligned for perpendicular x-ray exposure capture. One or multiple machined components of like material/density with known thicknesses are placed atop the part (preferably in a region of nominal and non-varying thickness) such that exposure of the combined part and machined component lay-up is captured on the x-ray. Depending on the accuracy required, the machined component's thickness must be carefully chosen. Similarly, depending on the accuracy required, the lay-up must be exposed such that the regions of the x-ray to be analyzed have a density range between 1 and 4.5. After the exposure, the image is digitized, and the digital image can then be analyzed using the image analysis software.
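Once optical densities have been read from the penetrameter steps, the thickness estimate reduces to interpolation along the locally linear part of the film's characteristic curve. A minimal sketch in Python (the calibration pairs below are hypothetical, not values from the report):

```python
def thickness_from_density(density, calibration):
    """Estimate material thickness from a measured optical film density.

    `calibration` holds (optical_density, known_thickness) pairs read over
    the penetrameter steps, assumed to lie on the quasi-linear part of the
    film's S-shaped characteristic curve."""
    pts = sorted(calibration)
    # Find the bracketing calibration pair and interpolate linearly.
    for (d0, t0), (d1, t1) in zip(pts, pts[1:]):
        if d0 <= density <= d1:
            frac = (density - d0) / (d1 - d0)
            return t0 + frac * (t1 - t0)
    raise ValueError("density outside calibrated range (1 to 4.5 recommended)")

# Hypothetical calibration: film density rises as thickness falls.
cal = [(1.5, 0.50), (2.5, 0.40), (3.5, 0.30)]
print(round(thickness_from_density(2.0, cal), 3))  # 0.45
```

In practice the calibration pairs would come from density readings over each penetrameter step in the digitized image.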

Grau, David

2012-01-01

374

NSDL National Science Digital Library

This PhET interactive, downloadable simulation allows students to discover the relationship between mass, volume, and density by weighing and submerging various materials under water. Do objects like aluminum, Styrofoam, and wood float or sink? Can you identify all the mystery objects by weighing them and submerging them underwater to measure their volumes? Sample learning goals, teaching ideas, and translated versions are available.

2008-01-01

375

Lattice potential energy estimation for complex ionic salts from density measurements.

This paper is one of a series exploring simple approaches for the estimation of lattice energy of ionic materials, avoiding elaborate computation. The readily accessible, frequently reported, and easily measurable (requiring only small quantities of inorganic material) property of density, rho_m, is related, as a rectilinear function of the form (rho_m/M_m)^(1/3), to the lattice energy U_POT of ionic materials, where M_m is the chemical formula mass. Dependence on the cube root is particularly advantageous because this considerably lowers the effects of any experimental errors in the density measurement used. The relationship that is developed arises from the dependence (previously reported in Jenkins, H. D. B.; Roobottom, H. K.; Passmore, J.; Glasser, L. Inorg. Chem. 1999, 38, 3609) of lattice energy on the inverse cube root of the molar volume. These latest equations have the form U_POT/kJ mol^-1 = gamma (rho_m/M_m)^(1/3) + delta, where for the simpler salts (i.e., U_POT < 5000 kJ mol^-1), gamma and delta are coefficients dependent upon the stoichiometry of the inorganic material, and for materials for which U_POT > 5000 kJ mol^-1, gamma/kJ mol^-1 cm = 10^-7 A I (2I N_A)^(1/3) and delta/kJ mol^-1 = 0, where A is the general electrostatic conversion factor (A = 121.4 kJ mol^-1), I is the ionic strength = (1/2) sum of n_i z_i^2, and N_A is Avogadro's constant. PMID:11978099
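The working equation is simple enough to apply directly. The sketch below implements U_POT = gamma*(rho_m/M_m)^(1/3) + delta together with the high-energy coefficient gamma = 10^-7 A I (2I N_A)^(1/3); the gamma/delta values in the example are illustrative placeholders, since the stoichiometry-specific coefficients are tabulated in the paper:

```python
A = 121.4          # general electrostatic conversion factor, kJ mol^-1
N_A = 6.02214e23   # Avogadro's constant, mol^-1

def lattice_energy(rho_m, M_m, gamma, delta):
    """U_POT/kJ mol^-1 = gamma * (rho_m/M_m)^(1/3) + delta,
    with rho_m in g cm^-3 and M_m the formula mass in g mol^-1;
    gamma and delta depend on the salt's stoichiometry."""
    return gamma * (rho_m / M_m) ** (1.0 / 3.0) + delta

def gamma_high_energy(I):
    """Coefficient for salts with U_POT > 5000 kJ mol^-1 (where delta = 0):
    gamma = 10^-7 * A * I * (2 I N_A)^(1/3), I being the ionic strength."""
    return 1e-7 * A * I * (2.0 * I * N_A) ** (1.0 / 3.0)

# Hypothetical coefficients for a simple salt (placeholders, not the paper's):
u = lattice_energy(rho_m=2.0, M_m=100.0, gamma=8000.0, delta=-180.0)
print(round(u))  # roughly 2e3 kJ/mol for these placeholder coefficients
```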

Jenkins, H Donald Brooke; Tudela, David; Glasser, Leslie

2002-05-01

376

Density estimators for the convolution of discrete and continuous random variables

The density of the sum of a discrete and a continuous random variable can be estimated by a kernel estimator based on the sum of each pair of observations. Since the two components play different roles, the density can alternatively be estimated by convolving a kernel estimator of the continuous component with an empirical estimator of the discrete component. The latter estimator has the same asymptotic bias as the former, but a much smaller asymptotic variance. We also show how pointwise results can be obtained.

Wefelmeyer, Wolfgang

377

NASA Astrophysics Data System (ADS)

Reliability of microseismic interpretations is very much dependent on how robustly microseismic events are detected and picked. Various event detection algorithms are available but detection of weak events is a common challenge. Apart from the event magnitude, hypocentral distance, and background noise level, the instrument self-noise can also act as a major constraint for the detection of weak microseismic events in particular for borehole deployments in quiet environments such as below 1.5-2 km depths. Instrument self-noise levels that are comparable or above background noise levels may not only complicate detection of weak events at larger distances but also challenge methods such as seismic interferometry which aim at analysis of coherent features in ambient noise wavefields to reveal subsurface structure. In this paper, we use power spectral densities to estimate the instrument self-noise for a borehole data set acquired during a hydraulic fracturing stimulation using modified 4.5-Hz geophones. We analyse temporal changes in recorded noise levels and their time-frequency variations for borehole and surface sensors and conclude that instrument noise is a limiting factor in the borehole setting, impeding successful event detection. Next we suggest that the variations of the spectral powers in a time-frequency representation can be used as a new criterion for event detection. Compared to the common short-time average/long-time average method, our suggested approach requires a similar number of parameters but with more flexibility in their choice. It detects small events with anomalous spectral powers with respect to an estimated background noise spectrum with the added advantage that no bandpass filtering is required prior to event detection.
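For reference, the short-time-average/long-time-average trigger that the authors compare against can be sketched in a few lines (window lengths and the event threshold are illustrative choices, not the paper's settings):

```python
def sta_lta(signal, n_sta, n_lta):
    """Return STA/LTA ratios of the squared signal (its characteristic
    function); an event is declared where the ratio exceeds a threshold."""
    ratios = []
    for i in range(n_lta, len(signal)):
        sta = sum(x * x for x in signal[i - n_sta:i]) / n_sta
        lta = sum(x * x for x in signal[i - n_lta:i]) / n_lta
        ratios.append(sta / lta if lta > 0 else 0.0)
    return ratios

# Quiet noise followed by a burst: the ratio jumps above the trigger level.
trace = [0.01] * 200 + [1.0] * 20
r = sta_lta(trace, n_sta=10, n_lta=100)
print(max(r) > 4.0)  # True near the burst
```

The spectral-power criterion proposed in the paper replaces the time-domain averages with power spectral densities compared against an estimated background noise spectrum.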

Vaezi, Y.; van der Baan, M.

2014-05-01

378

A method for estimating the cholesterol content of the serum low-density lipoprotein fraction (Sf 0-20) is presented. The method involves measurements of fasting plasma total cholesterol, triglyceride, and high-density lipoprotein cholesterol concentrations, none of which requires the use of the preparative ultracentrifuge. Comparison of this suggested procedure with the more direct procedure, in which the ultracentrifuge is used, yielded

William T. Friedewald; Robert I. Levy; Donald S. Fredrickson

1972-01-01

379

Neotropical felids such as the ocelot (Leopardus pardalis) are secretive, and it is difficult to estimate their populations using conventional methods such as radiotelemetry or sign surveys. We show that recognition of individual ocelots from camera-trapping photographs is possible, and we use camera-trapping results combined with closed population capture-recapture models to estimate density of ocelots in the Brazilian Pantanal. We estimated the area from which animals were camera trapped at 17.71 km2. A model with constant capture probability yielded an estimate of 10 independent ocelots in our study area, which translates to a density of 2.82 independent individuals for every 5 km2 (SE 1.00).

Trolle, M.; Kery, M.

2003-01-01

380

Tropical dry-deciduous forests comprise more than 45% of the tiger (Panthera tigris) habitat in India. However, in the absence of rigorously derived estimates of ecological densities of tigers in dry forests, critical baseline data for managing tiger populations are lacking. In this study tiger densities were estimated using photographic capture-recapture sampling in the dry forests of Panna Tiger Reserve in Central India. Over a 45-day survey period, 60 camera trap sites were sampled in a well-protected part of the 542-km2 reserve during 2002. A total sampling effort of 914 camera-trap-days yielded photo-captures of 11 individual tigers over 15 sampling occasions that effectively covered a 418-km2 area. The closed capture-recapture model Mh, which incorporates individual heterogeneity in capture probabilities, fitted these photographic capture history data well. The estimated capture probability/sample, 0.04, resulted in an estimated tiger population size and standard error of 29 (9.65), and a density of 6.94 (3.23) tigers/100 km2. The estimated tiger density matched predictions based on prey abundance. Our results suggest that, if managed appropriately, the available dry forest habitat in India has the potential to support a population size of about 9000 wild tigers.
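The closed-population logic behind such estimates can be illustrated with the simplest two-occasion case, the Chapman-corrected Lincoln-Petersen estimator (the study itself fits the more general model Mh with heterogeneous capture probabilities; the counts below are hypothetical):

```python
def chapman_estimate(n1, n2, m2):
    """Two-sample closed-population abundance estimate.
    n1: animals photo-captured (marked) on occasion 1,
    n2: animals captured on occasion 2, m2: recaptures among them."""
    return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1

def density_per_100km2(N_hat, area_km2):
    """Convert an abundance estimate to animals per 100 km^2."""
    return 100.0 * N_hat / area_km2

N = chapman_estimate(n1=8, n2=7, m2=5)          # hypothetical counts
print(round(N))                                  # 11
print(round(density_per_100km2(N, 418.0), 2))    # 2.63
```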

Karanth, K.U.; Chundawat, R.S.; Nichols, J.D.; Kumar, N.S.

2004-01-01

381

Novelty detection by multivariate kernel density estimation and growing neural gas algorithm

NASA Astrophysics Data System (ADS)

One of the underlying assumptions when using data-based methods for pattern recognition in diagnostics or prognostics is that the selected data sample used to train and test the algorithm is representative of the entire dataset, covering all combinations of parameters, conditions, and resulting system states. In practice, however, operating and environmental conditions may change, unexpected and previously unanticipated events may occur, and corresponding new anomalous patterns develop. Therefore, for practical applications, techniques are required to detect novelties in patterns and give the user confidence in the validity of the performed diagnoses and predictions. In this paper, the application of two types of novelty detection approaches is compared: a statistical approach based on multivariate kernel density estimation and an approach based on a type of unsupervised artificial neural network called the growing neural gas (GNG). The comparison is performed on a case study in the field of railway turnout systems. Both approaches demonstrate their suitability for detecting novel patterns. Furthermore, GNG proves to be more flexible, especially with respect to the dimensionality of the input data and suitability for online learning.
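The kernel-density branch of the comparison can be sketched as: fit a kernel density estimate to the training patterns and flag test points whose estimated density falls below a threshold. A minimal 1-D Gaussian-kernel version (bandwidth and threshold are illustrative, not the paper's choices):

```python
import math

def gaussian_kde(train, h):
    """Return a 1-D Gaussian kernel density estimate with bandwidth h."""
    n = len(train)
    norm = n * h * math.sqrt(2.0 * math.pi)
    def pdf(x):
        return sum(math.exp(-0.5 * ((x - t) / h) ** 2) for t in train) / norm
    return pdf

train = [0.9, 1.0, 1.1, 1.0, 0.95, 1.05]   # "normal" operating patterns
pdf = gaussian_kde(train, h=0.1)
threshold = 0.05                            # illustrative novelty cut-off
print(pdf(1.0) > threshold)   # True: familiar pattern
print(pdf(5.0) > threshold)   # False: flagged as novel
```

The multivariate case replaces the scalar kernel with a product or full-covariance kernel; the GNG branch instead flags points far from all learned prototype nodes.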

Fink, Olga; Zio, Enrico; Weidmann, Ulrich

2015-01-01

382

Measuring and Modeling Fault Density for Plume-Fault Encounter Probability Estimation

Emission of carbon dioxide from fossil-fueled power generation stations contributes to global climate change. Storage of this carbon dioxide within the pores of geologic strata (geologic carbon storage) is one approach to mitigating the climate change that would otherwise occur. The large storage volume needed for this mitigation requires injection into brine-filled pore space in reservoir strata overlain by cap rocks. One of the main concerns of storage in such rocks is leakage via faults. In the early stages of site selection, site-specific fault coverages are often not available. This necessitates a method for using available fault data to develop an estimate of the likelihood of injected carbon dioxide encountering and migrating up a fault, primarily due to buoyancy. Fault population statistics provide one of the main inputs to calculate the encounter probability. Previous fault population statistics work is shown to be applicable to areal fault density statistics. This result is applied to a case study in the southern portion of the San Joaquin Basin, with the result that a carbon dioxide plume from a previously planned injection had a 3% chance of encountering a fully seal-offsetting fault.

Jordan, P.D.; Oldenburg, C.M.; Nicot, J.-P.

2011-05-15

383

NASA Astrophysics Data System (ADS)

Current fluorescence diffuse optical tomography (fDOT) systems can provide large data sets and, in addition, the unknown parameters to be estimated are so numerous that the sensitivity matrix is too large to store. Alternatively, iterative methods can be used, but they can be extremely slow at converging when dealing with large matrices. A few approaches suitable for the reconstruction of images from very large data sets have been developed. However, they either require explicit construction of the sensitivity matrix, suffer from slow computation times, or can only be applied to restricted geometries. We introduce a method for fast reconstruction in fDOT with large data and solution spaces, which preserves the resolution of the forward operator whilst compressing its representation. The method does not require construction of the full matrix, and thus allows storage and direct inversion of the explicitly constructed compressed system matrix. The method is tested using simulated and experimental data. Results show that the fDOT image reconstruction problem can be effectively compressed without significant loss of information and with the added advantage of reducing image noise.

Correia, Teresa; Rudge, Timothy; Koch, Maximilian; Ntziachristos, Vasilis; Arridge, Simon

2013-08-01

384

NASA Technical Reports Server (NTRS)

The recently developed essentially fourth-order or higher low-dissipative shock-capturing scheme of Yee, Sandham and Djomehri (1999) aimed at minimizing numerical dissipation for high-speed compressible viscous flows containing shocks, shears and turbulence. To detect non-smooth behavior and control the amount of numerical dissipation to be added, Yee et al. employed an artificial compression method (ACM) of Harten (1978), but utilized it in an entirely different context than Harten originally intended. The ACM sensor consists of two tuning parameters and is highly physical-problem dependent. To minimize the tuning of parameters and physical problem dependence, new sensors with improved detection properties are proposed. The new sensors are derived from utilizing appropriate non-orthogonal wavelet basis functions and they can be used to completely switch off the extra numerical dissipation outside shock layers. The non-dissipative spatial base scheme of arbitrarily high order of accuracy can be maintained without compromising its stability at all parts of the domain where the solution is smooth. Two types of redundant non-orthogonal wavelet basis functions are considered. One is the B-spline wavelet (Mallat & Zhong 1992) used by Gerritsen and Olsson (1996) in an adaptive mesh refinement method, to determine regions where refinement should be done. The other is the modification of the multiresolution method of Harten (1995) by converting it to a new, redundant, non-orthogonal wavelet. The wavelet sensor is then obtained by computing the estimated Lipschitz exponent of a chosen physical quantity (or vector) to be sensed on a chosen wavelet basis function. Both wavelet sensors can be viewed as dual-purpose adaptive methods leading to dynamic numerical dissipation control and improved grid adaptation indicators. Consequently, they are useful not only for shock-turbulence computations but also for computational aeroacoustics and numerical combustion.
In addition, these sensors are scheme independent and can be stand-alone options for numerical algorithms other than the Yee et al. scheme.

Sjoegreen, B.; Yee, H. C.

2001-01-01

385

JOURNAL NRMRL-RTP-P- 437 Baugh, W., Klinger, L., Guenther, A., and Geron*, C.D. Measurement of Oak Tree Density with Landsat TM Data for Estimating Biogenic Isoprene Emissions in Tennessee, USA. International Journal of Remote Sensing (Taylor and Francis) 22 (14):2793-2810 (2001)...

386

An Air Traffic Prediction Model Based on Kernel Density Estimation

through sophisticated flight dynamics [1]. However, for the Air Traffic Control System Command Center at an Air Route Traffic Control Center (simply denoted as Center hereafter) level [2]. It forecasts aircraft

Sun, Dengfeng

387

Dynamics of photosynthetic photon flux density (PPFD) and estimates in coastal northern California

NASA Astrophysics Data System (ADS)

Plants require solar radiation for photosynthesis, and their growth is directly related to the amount received, assuming that other environmental parameters are not limiting. Therefore, precise estimation of photosynthetically active radiation (PAR) is necessary to enhance the overall accuracy of plant growth models. This study explored the PAR radiant flux in the San Francisco Bay Area of northern California. During the growing season (March through August) of the 2 years 2007-2008, the on-site magnitudes of photosynthetic photon flux density (PPFD) were measured and then processed at both hourly and daily time scales. Combined with global solar radiation (R_S) and simulated extraterrestrial solar radiation, five PAR-related values were developed: flux-density-based PAR (PPFD), energy-based PAR (PARE), the flux-to-energy conversion efficiency (fFEC), the fraction of PAR energy in global solar radiation (fE), and a newly developed indicator, the lost PARE percentage (LPR), describing losses as solar radiation penetrates from the extraterrestrial system to the ground. These PAR-related values showed significant diurnal variation, with high values occurring at midday and low values in the morning and afternoon hours. During the entire experimental season, the overall mean hourly value of fFEC was found to be 2.17 µmol J-1, while the respective fE value was 0.49. The monthly averages of hourly fFEC and fE at solar noon ranged from 2.15 µmol J-1 in March to 2.39 µmol J-1 in August, and from 0.47 in March to 0.52 in July, respectively. The monthly average daily values, however, were relatively constant and exhibited only a weak seasonal variation, ranging from 2.02 mol MJ-1 and 0.45 (March) to 2.19 mol MJ-1 and 0.48 (June). The mean daily values of fFEC and fE at solar noon were 2.16 mol MJ-1 and 0.47 across the entire growing season, respectively. Both PPFD and the newly introduced LPR showed strong diurnal patterns.
However, they had opposite trends: PPFD was high around noon, resulting in low values of LPR during the same period. Both were found to be highly correlated with global solar radiation R_S, solar elevation angle h, and the clearness index K_t. Using best-subset selection of variables, two parametric models were developed for estimating PPFD and LPR, which can easily be applied at radiometric sites by recording only global solar radiation measurements. These two models involve the most commonly measured quantity, global solar radiation (R_S), and two large-scale geometric parameters, i.e., extraterrestrial solar radiation and solar elevation. The models are therefore insensitive to local weather conditions such as temperature. In particular, with two test data sets collected in the USA and Greece, it was verified that the models could be extended across different geographical areas, where they performed well. These two hourly based models can therefore be used to provide precise PAR-related values, such as those required for developing precise vegetation growth models.
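As a rough single-coefficient approximation (the paper's actual models use best-subset regression with extraterrestrial radiation and solar elevation), the reported seasonal means let PAR quantities be estimated from a global solar radiation measurement alone:

```python
F_FEC = 2.17   # seasonal mean flux-to-energy conversion, umol J^-1
F_E = 0.49     # seasonal mean PAR fraction of global solar radiation

def par_from_global(rs_w_m2):
    """Estimate PAR energy (W m^-2) and PPFD (umol m^-2 s^-1) from a
    global solar radiation measurement Rs (W m^-2)."""
    return F_E * rs_w_m2, F_FEC * rs_w_m2

par_e, ppfd = par_from_global(800.0)    # a typical midday Rs value
print(round(par_e, 1), round(ppfd, 1))  # 392.0 1736.0
```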

Ge, Shaokui; Smith, Richard G.; Jacovides, Constantinos P.; Kramer, Marc G.; Carruthers, Raymond I.

2011-08-01

388

Estimation of tool pose based on force-density correlation during robotic drilling.

The application of image-guided systems with or without support by surgical robots relies on the accuracy of the navigation process, including patient-to-image registration. The surgeon must carry out the procedure based on the information provided by the navigation system, usually without being able to verify its correctness beyond visual inspection. Misleading surrogate parameters such as the fiducial registration error are often used to describe the success of the registration process, while a lack of methods describing the effects of navigation errors, such as those caused by tracking or calibration, may prevent the application of image guidance in certain accuracy-critical interventions. During minimally invasive mastoidectomy for cochlear implantation, a direct tunnel is drilled from the outside of the mastoid to a target on the cochlea based on registration using landmarks solely on the surface of the skull. Using this methodology, it is impossible to detect if the drill is advancing in the correct direction and that injury of the facial nerve will be avoided. To overcome this problem, a tool localization method based on drilling process information is proposed. The algorithm estimates the pose of a robot-guided surgical tool during a drilling task based on the correlation of the observed axial drilling force and the heterogeneous bone density in the mastoid extracted from 3-D image data. We present here one possible implementation of this method tested on ten tunnels drilled into three human cadaver specimens where an average tool localization accuracy of 0.29 mm was observed. PMID:23269744
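The localization idea, correlating the measured axial force profile with the CT-derived bone density profile along candidate trajectories and keeping the best match, can be sketched in a simplified 1-D form (the actual algorithm estimates a full tool pose; all profiles below are hypothetical):

```python
def pearson(a, b):
    """Pearson correlation of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb)

def best_trajectory(force_profile, candidate_density_profiles):
    """Pick the candidate drill path whose bone-density profile best
    correlates with the observed axial drilling force."""
    scores = [pearson(force_profile, d) for d in candidate_density_profiles]
    return max(range(len(scores)), key=scores.__getitem__)

force = [1, 3, 2, 5, 4]                   # hypothetical axial forces
candidates = [[1, 1, 1, 1, 1.5],          # off-target path
              [1, 3, 2, 5, 4],            # matches the force profile
              [5, 4, 3, 2, 1]]            # reversed path
print(best_trajectory(force, candidates))  # 1
```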

Williamson, Tom M; Bell, Brett J; Gerber, Nicolas; Salas, Lilibeth; Zysset, Philippe; Caversaccio, Marco; Weber, Stefan

2013-04-01

389

Global Crust-Mantle Density Contrast Estimated from EGM2008, DTM2008, CRUST2.0, and ICE-5G

NASA Astrophysics Data System (ADS)

We compute globally the consolidated crust-stripped gravity disturbances/anomalies. These refined gravity field quantities are obtained from the EGM2008 gravity data after applying the topographic and crust density contrasts stripping corrections computed using the global topography/bathymetry model DTM2006.0, the global continental ice-thickness data ICE-5G, and the global crustal model CRUST2.0. All crust components density contrasts are defined relative to the reference crustal density of 2,670 kg/m3. We demonstrate that the consolidated crust-stripped gravity data have the strongest correlation with the crustal thickness. Therefore, they are the most suitable gravity data type for the recovery of the Moho density interface by means of the gravimetric modelling or inversion. The consolidated crust-stripped gravity data and the CRUST2.0 crust-thickness data are used to estimate the global average value of the crust-mantle density contrast. This is done by minimising the correlation between these refined gravity and crust-thickness data by adding the crust-mantle density contrast to the original reference crustal density of 2,670 kg/m3. The estimated values of 485 kg/m3 (for the refined gravity disturbances) and 481 kg/m3 (for the refined gravity anomalies) very closely agree with the value of the crust-mantle density contrast of 480 kg/m3, which is adopted in the definition of the Preliminary Reference Earth Model (PREM). This agreement is more likely due to the fact that our results of the gravimetric forward modelling are significantly constrained by the CRUST2.0 model density structure and crust-thickness data derived purely based on methods of seismic refraction.
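The estimation step, scanning trial density contrasts and keeping the one that minimizes the correlation between contrast-corrected gravity data and crustal thickness, reduces to a 1-D search. The sketch below assumes a linear gravity response to the contrast-thickness product (a strong simplification of the actual forward modelling) and uses synthetic data generated with a "true" contrast of 480 kg/m^3:

```python
def pearson(a, b):
    """Pearson correlation of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb)

def decorrelating_contrast(gravity, thickness, sensitivity, trial_contrasts):
    """Return the trial crust-mantle density contrast minimizing |corr|
    between contrast-corrected gravity and crustal thickness.
    `sensitivity` is the assumed linear gravity response per unit of
    (contrast x thickness) -- a simplification of the forward model."""
    def score(c):
        refined = [g - c * sensitivity * t for g, t in zip(gravity, thickness)]
        return abs(pearson(refined, thickness))
    return min(trial_contrasts, key=score)

# Synthetic data built with a "true" contrast of 480 kg/m^3 plus small biases.
k = 0.01
thickness = [10.0, 20.0, 30.0, 40.0, 50.0]
biases = [0.3, -0.2, 0.1, -0.1, 0.0]
gravity = [480.0 * k * t + b for t, b in zip(thickness, biases)]
trials = list(range(400, 561, 5))
c_hat = decorrelating_contrast(gravity, thickness, k, trials)
print(c_hat)  # 480
```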

Tenzer, Robert; Hamayun; Novák, Pavel; Gladkikh, Vladislav; Vajda, Peter

2012-09-01

390

This paper employs one chemometric technique to modify the noise spectrum of liquid chromatography-tandem mass spectrometry (LC-MS/MS) chromatogram between two consecutive wavelet-based low-pass filter procedures to improve the peak signal-to-noise (S/N) ratio enhancement. Although similar techniques of using other sets of low-pass procedures such as matched filters have been published, the procedures developed in this work are able to avoid peak broadening disadvantages inherent in matched filters. In addition, unlike Fourier transform-based low-pass filters, wavelet-based filters efficiently reject noises in the chromatograms directly in the time domain without distorting the original signals. In this work, the low-pass filtering procedures sequentially convolve the original chromatograms against each set of low pass filters to result in approximation coefficients, representing the low-frequency wavelets, of the first five resolution levels. The tedious trials of setting threshold values to properly shrink each wavelet are therefore no longer required. This noise modification technique is to multiply one wavelet-based low-pass filtered LC-MS/MS chromatogram with another artificial chromatogram added with thermal noises prior to the other wavelet-based low-pass filter. Because low-pass filter cannot eliminate frequency components below its cut-off frequency, more efficient peak S/N ratio improvement cannot be accomplished using consecutive low-pass filter procedures to process LC-MS/MS chromatograms. In contrast, when the low-pass filtered LC-MS/MS chromatogram is conditioned with the multiplication alteration prior to the other low-pass filter, much better ratio improvement is achieved. The noise frequency spectrum of low-pass filtered chromatogram, which originally contains frequency components below the filter cut-off frequency, is altered to span a broader range with multiplication operation. 
When the frequency range of this modified noise spectrum shifts toward the high frequency regimes, the other low-pass filter is able to provide better filtering efficiency to obtain higher peak S/N ratios. Real LC-MS/MS chromatograms, of which typically less than 6-fold peak S/N ratio improvement achieved with two consecutive wavelet-based low-pass filters remains the same S/N ratio improvement using one-step wavelet-based low-pass filter, are improved to accomplish much better ratio enhancement 25-folds to 40-folds typically when the noise frequency spectrum is modified between two low-pass filters. The linear standard curves using the filtered LC-MS/MS signals are validated. The filtered LC-MS/MS signals are also reproducible. The more accurate determinations of very low concentration samples (S/N ratio about 7-9) are obtained using the filtered signals than the determinations using the original signals. PMID:20227706

Chen, Hsiao-Ping; Liao, Hui-Ju; Huang, Chih-Min; Wang, Shau-Chun; Yu, Sung-Nien

2010-04-23

391

NASA Astrophysics Data System (ADS)

In this study, we estimate coronal electron density distributions by analyzing DH type II radio observations, based on the assumption that a DH type II radio burst is generated by the shock formed at a CME leading edge. For this, we consider 11 Wind/WAVES DH type II radio bursts (from 2000 to 2003 and from 2010 to 2012) associated with SOHO/LASCO limb CMEs, using the following criteria: (1) the fundamental and second harmonic emission lanes are well identified in the frequency range of 1 to 14 MHz; (2) the associated CME is clearly identified at least twice in the LASCO-C2 or C3 field of view during the time of type II observation. For these events, we determine the lowest frequencies of their fundamental emission lanes and the heights of their leading edges. Coronal electron density distributions are obtained by minimizing the root mean square error between the observed heights of CME leading edges and the heights of DH type II radio bursts derived from assumed electron density distributions. We find that the estimated coronal electron density distributions range from 2.5 to 10.2 times Saito's coronal electron density model.
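The fitting step can be sketched as a one-parameter scan: scale a reference density model, convert each observed type II frequency to a plasma-frequency height, and minimize the RMSE against the CME leading-edge heights. The sketch below uses the commonly quoted Saito-type coefficients and synthetic observations; treat it as an illustration of the procedure, not the paper's implementation:

```python
import math

def n_saito(r):
    """Saito-type coronal electron density model, in cm^-3 (r in solar radii)."""
    return 1.36e6 * r ** -2.14 + 1.68e8 * r ** -6.13

def shock_height(freq_mhz, scale, r_grid):
    """Height where `scale` times the model density matches the observed
    plasma frequency (fundamental: f_p[MHz] ~ 8.98e-3 * sqrt(n_e[cm^-3]))."""
    target_n = (freq_mhz / 8.98e-3) ** 2 / scale
    return min(r_grid, key=lambda r: abs(n_saito(r) - target_n))

def best_scale(freqs, cme_heights, scales, r_grid):
    """Density-model scale factor minimizing the RMSE between type II burst
    heights and CME leading-edge heights."""
    def rmse(s):
        errs = [(shock_height(f, s, r_grid) - h) ** 2
                for f, h in zip(freqs, cme_heights)]
        return math.sqrt(sum(errs) / len(errs))
    return min(scales, key=rmse)

# Synthetic check: frequencies generated from a 4-fold Saito model are
# recovered as a best-fit scale of 4.0.
r_grid = [1.5 + 0.01 * i for i in range(1500)]
heights = [3.0, 5.0, 8.0]
freqs = [8.98e-3 * math.sqrt(4.0 * n_saito(h)) for h in heights]
s_hat = best_scale(freqs, heights, [1.0 + 0.5 * i for i in range(19)], r_grid)
print(s_hat)  # 4.0
```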

Lee, Jae-Ok; Moon, Yong-Jae; Lee, Jin-Yi; Lee, Kyoung-Sun; Kim, Rok-Soon

2015-04-01

392

Sensitivity analysis and density estimation for finite-time ruin probabilities

The estimation of finite-time ruin probabilities has attracted renewed attention due to new solvency regulations in Europe. This problem is closely related to that of estimating the (absolutely continuous) density functions of infima of reserve processes commonly used in insurance. Keywords: ruin probability, Malliavin calculus, integration by parts, insurance mathematics.

Privault, Nicolas

393

NASA Technical Reports Server (NTRS)

To determine whether estimates of volumetric bone density from projectional scans of the lumbar spine have weaker associations with height and weight and stronger associations with prevalent vertebral fractures than standard projectional bone mineral density (BMD) and bone mineral content (BMC), we obtained posteroanterior (PA) dual X-ray absorptiometry (DXA), lateral supine DXA (Hologic QDR 2000), and quantitative computed tomography (QCT, GE 9800 scanner) in 260 postmenopausal women enrolled in two trials of treatment for osteoporosis. In 223 women, all vertebral levels, i.e., L2-L4 in the DXA scan and L1-L3 in the QCT scan, could be evaluated. Fifty-five women were diagnosed as having at least one mild fracture (age 67.9 +/- 6.5 years) and 168 women did not have any fractures (age 62.3 +/- 6.9 years). We derived three estimates of "volumetric bone density" from PA DXA (BMAD, BMAD*, and BMD*) and three from paired PA and lateral DXA (WA BMD, WA BMDHol, and eVBMD). While PA BMC and PA BMD were significantly correlated with height (r = 0.49 and r = 0.28) or weight (r = 0.38 and r = 0.37), QCT and the volumetric bone density estimates from paired PA and lateral scans were not (r = -0.083 to r = 0.050). BMAD, BMAD*, and BMD* correlated with weight but not height. The associations with vertebral fracture were stronger for QCT (odds ratio [OR] = 3.17; 95% confidence interval [CI] = 1.90-5.27), eVBMD (OR = 2.87; CI 1.80-4.57), WA BMDHol (OR = 2.86; CI 1.80-4.55) and WA-BMD (OR = 2.77; CI 1.75-4.39) than for BMAD*/BMD* (OR = 2.03; CI 1.32-3.12), BMAD (OR = 1.68; CI 1.14-2.48), lateral BMD (OR = 1.88; CI 1.28-2.77), standard PA BMD (OR = 1.47; CI 1.02-2.13) or PA BMC (OR = 1.22; CI 0.86-1.74). The areas under the receiver operating characteristic (ROC) curves for QCT and all estimates of volumetric BMD were significantly higher compared with standard PA BMD and PA BMC.
We conclude that, like QCT, estimates of volumetric bone density from paired PA and lateral scans are unaffected by height and weight and are more strongly associated with vertebral fracture than standard PA BMD or BMC, or estimates of volumetric density that are solely based on PA DXA scans.

Jergas, M.; Breitenseher, M.; Gluer, C. C.; Yu, W.; Genant, H. K.

1995-01-01

394

This article concerns asymptotic theory for a new estimator of a survival function in the missing censoring indicator model of random censorship. Specifically, the large sample results for an inverse probability-of-non-missingness weighted estimator of the cumulative hazard function, so far not available, are derived, including an almost sure representation with rate for a remainder term, and uniform strong consistency with rate of convergence. The estimator is based on a kernel estimate for the conditional probability of non-missingness of the censoring indicator. Expressions for its bias and variance, in turn leading to an expression for the mean squared error as a function of the bandwidth, are also obtained. The corresponding estimator of the survival function, whose weak convergence is derived, is asymptotically efficient. A numerical study, comparing the performances of the proposed and two other currently existing efficient estimators, is presented. PMID:18953423

Subramanian, Sundarraman

2006-01-01

395

A field comparison of nested grid and trapping web density estimators

The usefulness of capture-recapture estimators in any field study will depend largely on underlying model assumptions and on how closely these assumptions approximate the actual field situation. Evaluation of estimator performance under real-world field conditions is often a difficult matter, although several approaches are possible. Perhaps the best approach involves use of the estimation method on a population with known parameters.

Jett, D.A.; Nichols, J.D.

1987-01-01

396

NASA Astrophysics Data System (ADS)

Reliable predictions of groundwater flow and solute transport require an estimation of the detailed distribution of the parameters (e.g., hydraulic conductivity, effective porosity) controlling these processes. However, such parameters are difficult to estimate because of the inaccessibility and complexity of the subsurface. In this regard, developments in parameter estimation techniques and investigations of field experiments are still challenging and necessary to improve our understanding and the prediction of hydrological processes. Here we analyze a conservative tracer test conducted at the Boise Hydrogeophysical Research Site in 2001 in a heterogeneous unconfined fluvial aquifer. Some relevant characteristics of this test include: variable-density (sinking) effects because of the injection concentration of the bromide tracer, the relatively small size of the experiment, and the availability of various sources of geophysical and hydrological information. The information contained in this experiment is evaluated through several parameter estimation approaches, including a grid-search-based strategy, stochastic simulation of hydrological property distributions, and deterministic inversion using regularization and pilot-point techniques. Doing this allows us to investigate hydraulic conductivity and effective porosity distributions and to compare the effects of assumptions from several methods and parameterizations. Our results provide new insights into the understanding of variable-density transport processes and the hydrological relevance of incorporating various sources of information in parameter estimation approaches. Among others, the variable-density effect and the effective porosity distribution, as well as their coupling with the hydraulic conductivity structure, are seen to be significant in the transport process. The results also show that assumed prior information can strongly influence the estimated distributions of hydrological properties.

Dafflon, B.; Barrash, W.; Cardiff, M.; Johnson, T. C.

2011-12-01

397

Power spectral density estimation by spline smoothing in the frequency domain

NASA Technical Reports Server (NTRS)

An approach, based on a global averaging procedure, is presented for estimating the power spectrum of a second order stationary zero-mean ergodic stochastic process from a finite length record. This estimate is derived by smoothing, with a cubic smoothing spline, the naive estimate of the spectrum obtained by applying FFT techniques to the raw data. By means of digital computer simulated results, a comparison is made between the features of the present approach and those of more classical techniques of spectral estimation.

Defigueiredo, R. J. P.; Thompson, J. R.

1972-01-01

399

Stochastic particle models are the state-of-science method for modelling atmospheric dispersion. They simulate the released pollutant by a large number of particles. In most particle models the concentrations are estimated by counting the number of particles in a rectangular volume (box counting). The effects of the choice of the width and of the position of these boxes on the estimated
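The box-counting step described above can be sketched in a few lines: bin particle positions into rectangular boxes and divide by box volume. The plume parameters, box width, and unit release mass below are assumptions made for the illustration, not values from the study.

```python
import numpy as np

def box_count_concentration(x, y, mass_per_particle, x_edges, y_edges, depth):
    # Count particles falling in each rectangular box, then convert
    # counts to concentrations: (count * particle mass) / box volume.
    counts, _, _ = np.histogram2d(x, y, bins=[x_edges, y_edges])
    box_volume = np.outer(np.diff(x_edges), np.diff(y_edges)) * depth
    return counts * mass_per_particle / box_volume

rng = np.random.default_rng(0)
n = 100_000                      # particles representing 1 unit of released mass
x = rng.normal(0.0, 50.0, n)     # synthetic Gaussian plume, sigma = 50 m
y = rng.normal(0.0, 50.0, n)
conc = box_count_concentration(x, y, 1.0 / n,
                               np.linspace(-200, 200, 41),   # 10 m wide boxes
                               np.linspace(-200, 200, 41), depth=1.0)
```

Re-running with different box widths or shifted edges changes the estimated peak concentration noticeably, which is exactly the sensitivity the abstract refers to.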

Peter de Haan

1999-01-01

400

Population Indices Versus Correlated Density Estimates of Black-Footed Ferret Abundance

Estimating abundance of carnivore populations is problematic because individuals typically are elusive, nocturnal, and dispersed across the landscape. Rare or endangered carnivore populations are even more difficult to estimate because of small sample sizes. Considering behavioral ecology of the target species can drastically improve survey efficiency and effectiveness. Previously, abundance of the black-footed ferret (Mustela nigripes) was monitored by spotlighting

Martin B. Grenier; Steven W. Buskirk; Richard Anderson-Sprecher

2009-01-01

401

The estimation of the bispectral density function and the detection of periodicities in a signal

In a recent paper Subba Rao and Gabr (J. Time Ser. Anal. (1987), in press) considered the estimation of the spectrum and the inverse spectrum based on the method by Pisarenko (Geophys. J. Roy. Astronom. Soc. 28 (1972), 511-531). The asymptotic properties of these estimates were studied using the properties of Wishart matrices. In this paper we show how the

T. Subba Rao; M. M. Gabr

1988-01-01

402

In this paper, an H-infinity fault detection and diagnosis (FDD) scheme for a class of discrete nonlinear system faults using output probability density estimation is presented. Unlike classical FDD problems, the measured output of the system is viewed as a stochastic process and its square root probability density function (PDF) is modeled with B-spline functions, which leads to a deterministic space-time dynamic model including nonlinearities and uncertainties. A weighting mean value is given as an integral function of the square root PDF along the space direction, which leads to a function of time only that can be used to construct a residual signal. Thus, the classical nonlinear filter approach can be used to detect and diagnose the fault in the system. A feasible detection criterion is obtained first, and a new H-infinity adaptive fault diagnosis algorithm is further investigated to estimate the fault. A simulation example is given to demonstrate the effectiveness of the proposed approaches.
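The "weighting mean value" idea above — integrating a weighting function against the square-root PDF over the space direction to obtain a scalar residual in time — can be sketched as follows. The Gaussian output PDF, the weighting function sigma(y) = y, and the fault-induced shift are all assumptions for the illustration; the paper itself models the square-root PDF with B-splines.

```python
import numpy as np

def trap(f, grid):
    # plain trapezoidal rule (kept explicit to avoid version-specific helpers)
    return float(np.sum((f[:-1] + f[1:]) * np.diff(grid)) / 2.0)

y = np.linspace(-5.0, 5.0, 1001)
sigma_y = y  # hypothetical weighting function sigma(y); not from the paper

def gaussian_sqrt_pdf(mean, std=1.0):
    pdf = np.exp(-0.5 * ((y - mean) / std) ** 2) / (std * np.sqrt(2.0 * np.pi))
    return np.sqrt(pdf)

# Weighting mean value V(t) = integral of sigma(y) * sqrt(p(y, t)) dy:
# a function of time only, usable as a residual signal for fault detection.
nominal = trap(sigma_y * gaussian_sqrt_pdf(0.0), y)
faulty = trap(sigma_y * gaussian_sqrt_pdf(0.8), y)  # output PDF shifted by a fault
residual = faulty - nominal
```

Thresholding `residual` then plays the role of the detection criterion: it stays near zero under the nominal PDF and departs from zero when the output distribution shifts.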

Zhang Yumin; Lum, Kai-Yew [Temasek Laboratories, National University of Singapore, Singapore 117508 (Singapore); Wang Qingguo [Depa. Electrical and Computer Engineering, National University of Singapore, Singapore 117576 (Singapore)

2009-03-05

403

Biodiesel fuels (methyl or ethyl esters derived from vegetable oils and animal fats) are currently being used as a means to diminish crude-oil dependency and to limit the greenhouse gas emissions of the transportation sector. However, their physical properties differ from those of traditional fossil fuels, which makes their effect on new, electronically controlled vehicles uncertain. Density is one of those properties, and its implications go even further. First, governments are expected to boost the use of high-biodiesel-content blends, but biodiesel fuels are denser than fossil ones; in consequence, their blending proportion is indirectly restricted so as not to exceed the maximum density limit established in fuel quality standards. Second, accurate knowledge of biodiesel density permits the estimation of other properties, such as the Cetane Number, whose direct measurement is complex and presents low repeatability and low reproducibility. In this study we compile densities of methyl and ethyl esters published in the literature, and propose equations to convert them to 15 degrees C and to predict biodiesel density based on chain length and degree of unsaturation. Both expressions were validated for a wide range of commercial biodiesel fuels. Using the latter, we define a term called the Biodiesel Cetane Index, which predicts the Biodiesel Cetane Number with high accuracy. Finally, simple calculations show that the introduction of high-biodiesel-content blends in the fuel market would force refineries to reduce the density of their fossil fuels. PMID:20599853

Lapuerta, Magín; Rodríguez-Fernández, José; Armas, Octavio

2010-09-01

404

Abstract: For populations with a density-dependent life history reproducing at discrete annual intervals, we analyze small or moderate fluctuations in population size around a stable equilibrium, which is applicable to many vertebrate populations. Using a life history having age at maturity a, with stochasticity and density dependence in adult recruitment and mortality, we derive a linearized autoregressive equation with time lags from 1 to

R. Lande; S. Engen; F. Filli; E. Matthysen; H. Weimerskirch

2002-01-01

405

Density estimation and survey validation for swift fox Vulpes velox in Oklahoma

The swift fox Vulpes velox Say, 1823, a small canid native to shortgrass prairie ecosystems of North America, has been the subject of enhanced conservation and research interest because of its restricted distribution and low densities. Previous studies have described distributions of the species in the southern Great Plains, but data on density are required to evaluate indices of relative abundance

Marc A. Criffield; Eric C. Hellgren; David M. LESLIE Jr

2010-01-01

406

Once abundant and widely distributed, the Bahama parrot (Amazona leucocephala bahamensis) currently inhabits only the Great Abaco and Great Inagua Islands of the Bahamas. In January 2003 and May 2002-2004, we conducted point-transect surveys (a type of distance sampling) to estimate density and population size and make recommendations for monitoring trends. Density ranged from 0.061 (SE = 0.013) to 0.085 (SE = 0.018) parrots/ha and population size ranged from 1,600 (SE = 354) to 2,386 (SE = 508) parrots when extrapolated to the 26,154 ha and 28,162 ha covered by surveys on Abaco in May 2002 and 2003, respectively. Density was 0.183 (SE = 0.049) and 0.153 (SE = 0.042) parrots/ha and population size was 5,344 (SE = 1,431) and 4,450 (SE = 1,435) parrots when extrapolated to the 29,174 ha covered by surveys on Inagua in May 2003 and 2004, respectively. Because parrot distribution was clumped, we would need to survey 213-882 points on Abaco and 258-1,659 points on Inagua to obtain a CV of 10-20% for estimated density. Cluster size and its variability and clumping increased in wintertime, making surveys imprecise and cost-ineffective. Surveys were reasonably precise and cost-effective in springtime, and we recommend conducting them when parrots are pairing and selecting nesting sites. Survey data should be collected yearly as part of an integrated monitoring strategy to estimate density and other key demographic parameters and improve our understanding of the ecological dynamics of these geographically isolated parrot populations at risk of extinction.
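The density-to-population extrapolation described above is a simple product of estimated density and surveyed area. The sketch below reproduces that arithmetic for the May 2002 Abaco figures; treating the area as known, SE(N) = SE(D) x A, which is a simplification — the published SE (354) also reflects detection-probability variance that this product omits.

```python
def extrapolate(density_per_ha, se_density, area_ha):
    # Population estimate N = D * A; with the area A treated as a known
    # constant, the standard error scales by the same factor: SE(N) = SE(D) * A.
    return density_per_ha * area_ha, se_density * area_ha

# Abaco, May 2002: D = 0.061 (SE 0.013) parrots/ha over 26,154 ha
n_hat, se_n = extrapolate(0.061, 0.013, 26_154)
cv = se_n / n_hat   # coefficient of variation of the population estimate
```

The resulting N of roughly 1,600 parrots matches the abstract, and the CV of about 21% shows why the authors targeted survey designs yielding a CV of 10-20%.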

Rivera-Milan, F. F.; Collazo, J.A.; Stahala, C.; Moore, W.J.; Davis, A.; Herring, G.; Steinkamp, M.; Pagliaro, R.; Thompson, J.L.; Bracey, W.

2005-01-01

407

The endangered Asian tapir (Tapirus indicus) is threatened by large-scale habitat loss, forest fragmentation and increased hunting pressure. Conservation planning for this species, however, is hampered by a severe paucity of information on its ecology and population status. We present the first Asian tapir population density estimate from a camera trapping study targeting tigers in a selectively logged forest within Peninsular Malaysia using a spatially explicit capture-recapture maximum likelihood based framework. With a trap effort of 2496 nights, 17 individuals were identified corresponding to a density (standard error) estimate of 9.49 (2.55) adult tapirs/100 km². Although our results include several caveats, we believe that our density estimate still serves as an important baseline to facilitate the monitoring of tapir population trends in Peninsular Malaysia. Our study also highlights the potential of extracting vital ecological and population information for other cryptic individually identifiable animals from tiger-centric studies, especially with the use of a spatially explicit capture-recapture maximum likelihood based framework. PMID:23253368

Rayan, D Mark; Mohamad, Shariff Wan; Dorward, Leejiah; Aziz, Sheema Abdul; Clements, Gopalasamy Reuben; Christopher, Wong Chai Thiam; Traeholt, Carl; Magintan, David

2012-12-01

408

NASA Technical Reports Server (NTRS)

Parameterizations of the frontal area index and canopy area index of natural or randomly distributed plants are developed, and applied to the estimation of local aerodynamic roughness using satellite imagery. The formulas are expressed in terms of the subpixel fractional vegetation cover and one non-dimensional geometric parameter that characterizes the plant's shape. Geometrically similar plants and Poisson distributed plant centers are assumed. An appropriate averaging technique to extend satellite pixel-scale estimates to larger scales is provided. The parameterization is applied to the estimation of aerodynamic roughness using satellite imagery for a 2.3 sq km coniferous portion of the Landes Forest near Lubbon, France, during the 1986 HAPEX-Mobilhy Experiment. The canopy area index is estimated first for each pixel in the scene based on previous estimates of fractional cover obtained using Landsat Thematic Mapper imagery. Next, the results are incorporated into Raupach's (1992, 1994) analytical formulas for momentum roughness and zero-plane displacement height. The estimates compare reasonably well to reference values determined from measurements taken during the experiment and to published literature values. The approach offers the potential for estimating regionally variable, vegetation aerodynamic roughness lengths over natural regions using satellite imagery when there exists only limited knowledge of the vegetated surface.
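Raupach's analytical forms referenced above can be sketched as follows. The constants (c_d1 = 7.5, C_S = 0.003, C_R = 0.3, (u*/U_h)_max = 0.3, Psi_h = 0.193) are the values commonly cited from Raupach (1994), reproduced here as assumptions rather than taken from this study; the frontal area index would come from the satellite-derived parameterization.

```python
import numpy as np

KAPPA = 0.4  # von Karman constant

def raupach_roughness(lam, h, c_d1=7.5, c_s=0.003, c_r=0.3,
                      ustar_u_max=0.3, psi_h=0.193):
    # Zero-plane displacement d and momentum roughness z0 from the
    # frontal area index lam (> 0) and canopy height h, following the
    # Raupach (1994) analytical forms as commonly cited.
    x = np.sqrt(c_d1 * lam)
    d = h * (1.0 - (1.0 - np.exp(-x)) / x)
    ustar_over_u = min(np.sqrt(c_s + c_r * lam), ustar_u_max)
    z0 = h * (1.0 - d / h) * np.exp(-KAPPA / ustar_over_u + psi_h)
    return d, z0

d, z0 = raupach_roughness(lam=0.1, h=10.0)  # e.g. a 10 m conifer canopy
```

For a 10 m canopy with lam = 0.1 this gives d of a few metres and z0 below 1 m, and both d and z0 respond monotonically to increasing frontal area index over the sparse-canopy range, which is the behavior the pixel-scale roughness maps rely on.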

Jasinski, Michael F.; Crago, Richard

1994-01-01

409

In many applications we can expect that, or are interested to know if, a density function or a regression curve satisfies some specific shape constraints. For example, when the explanatory variable, X, represents the value taken by a treatment or dosage, the conditional mean of the response, Y , is often anticipated to be a monotone function of X. Indeed, if this regression mean is not monotone (in the appropriate direction) then the medical or commercial value of the treatment is likely to be significantly curtailed, at least for values of X that lie beyond the point at which monotonicity fails. In the case of a density, common shape constraints include log-concavity and unimodality. If we can correctly guess the shape of a curve, then nonparametric estimators can be improved by taking this information into account. Addressing such problems requires a method for testing the hypothesis that the curve of interest satisfies a shape constraint, and, if the conclusion of the test is positive, a technique for estimating the curve subject to the constraint. Nonparametric methodology for solving these problems already exists, but only in cases where the covariates are observed precisely. However in many problems, data can only be observed with measurement errors, and the methods employed in the error-free case typically do not carry over to this error context. In this paper we develop a novel approach to hypothesis testing and function estimation under shape constraints, which is valid in the context of measurement errors. Our method is based on tilting an estimator of the density or the regression mean until it satisfies the shape constraint, and we take as our test statistic the distance through which it is tilted. Bootstrap methods are used to calibrate the test. The constrained curve estimators that we develop are also based on tilting, and in that context our work has points of contact with methodology in the error-free case. PMID:21687809
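The paper's tilting approach reweights a kernel estimator until the shape constraint holds, which is beyond a short sketch. As a simpler error-free illustration of estimating a curve subject to a monotonicity constraint, the pool-adjacent-violators algorithm (PAVA) computes the least-squares non-decreasing fit; the distance between the unconstrained and constrained fits plays a role analogous to the tilting distance used as the test statistic.

```python
import numpy as np

def pava(y, w=None):
    # Pool-adjacent-violators: least-squares fit of y subject to the
    # constraint that the fitted values are non-decreasing. Adjacent
    # blocks that violate monotonicity are merged into weighted means.
    y = np.asarray(y, dtype=float)
    w = np.ones_like(y) if w is None else np.asarray(w, dtype=float)
    merged = []  # each block: [mean, total weight, count]
    for yi, wi in zip(y, w):
        merged.append([yi, wi, 1])
        while len(merged) > 1 and merged[-2][0] > merged[-1][0]:
            m2, w2, c2 = merged.pop()
            m1, w1, c1 = merged.pop()
            wt = w1 + w2
            merged.append([(m1 * w1 + m2 * w2) / wt, wt, c1 + c2])
    return np.concatenate([[m] * c for m, _, c in merged])

fit = pava([1.0, 3.0, 2.0, 4.0, 3.5, 5.0])
```

The two local decreases (3 to 2, and 4 to 3.5) are pooled into flat segments, leaving a non-decreasing step function that is as close as possible to the data in least squares.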

Carroll, Raymond J.; Delaigle, Aurore; Hall, Peter

2011-01-01

410

The purpose of this study was to compare computed tomography density (ρCT), obtained using typical clinical computed tomography scan parameters, to ash density (ρash), for the prediction of densities of femoral head trabecular bone from hip fracture patients. An experimental study was conducted to investigate the relationships between ρash and ρCT and between each of these densities and ρbulk and ρdry. Seven human femoral heads from hip fracture patients were computed tomography-scanned ex vivo, and 76 cylindrical trabecular bone specimens were collected. Computed tomography density was computed from computed tomography images by using a calibration Hounsfield units-based equation, whereas ρbulk, ρdry and ρash were determined experimentally. A large variation was found in the mean Hounsfield units of the bone cores (HUcore), with a constant bias from ρCT to ρash of 42.5 mg/cm³. Computed tomography and ash densities were linearly correlated (R² = 0.55, p < 0.001). It was demonstrated that ρash provided a good estimate of ρbulk (R² = 0.78, p < 0.001) and is a strong predictor of ρdry (R² = 0.99, p < 0.001). In addition, ρCT was linearly related to ρbulk (R² = 0.43, p < 0.001) and ρdry (R² = 0.56, p < 0.001). In conclusion, mineral density was an appropriate predictor of ρbulk and ρdry, and ρCT was not a surrogate for ρash. There were linear relationships between ρCT and physical densities; however, following the experimental protocols of this study to determine ρCT, considerable scatter was present in the ρCT relationships. PMID:24947202
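The Hounsfield-unit-based calibration above is an ordinary linear regression with an R² quality check. The sketch below runs that workflow on synthetic cores; the calibration coefficients, HU range, and noise level are assumptions for the example, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical linear calibration: density (mg/cm^3) vs. mean HU per bone
# core, with scatter comparable in spirit to the reported R^2 values.
hu = rng.uniform(-50.0, 600.0, 76)                 # mean HU per core (n = 76)
rho = 0.8 * hu + 60.0 + rng.normal(0.0, 60.0, 76)  # assumed "true" relation + noise

slope, intercept = np.polyfit(hu, rho, 1)          # fit the calibration line
pred = slope * hu + intercept
r_squared = 1.0 - np.sum((rho - pred) ** 2) / np.sum((rho - rho.mean()) ** 2)
```

With moderate scatter the fitted slope recovers the assumed relation while R² stays well below 1 — the same pattern the study reports for ρCT against the physical densities.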

Vivanco, Juan F; Burgers, Travis A; García-Rodríguez, Sylvana; Crookshank, Meghan; Kunz, Manuela; MacIntyre, Norma J; Harrison, Mark M; Bryant, J Tim; Sellens, Rick W; Ploeg, Heidi-Lynn

2014-06-19

411

Soil and crop management practices have been found to modify soil structure and alter macropore densities. An ability to accurately determine soil hydraulic parameters and their variation with changes in macropore density is crucial for assessing potential contamination from agricultural chemicals. This study investigates the consequences of using consistent matrix and macropore parameters in simulating preferential flow and bromide transport in soil columns with different macropore densities (no macropore, single macropore, and multiple macropores). As used herein, the term “macropore density” is intended to refer to the number of macropores per unit area. A comparison between continuum-scale models including single-porosity model (SPM), mobile-immobile model (MIM), and dual-permeability model (DPM) that employed these parameters is also conducted. Domain-specific parameters are obtained from inverse modeling of homogeneous (no macropore) and central macropore columns in a deterministic framework and are validated using forward modeling of both low-density (3 macropores) and high-density (19 macropores) multiple-macropore columns. Results indicate that these inversely modeled parameters are successful in describing preferential flow but not tracer transport in both multiple-macropore columns. We believe that lateral exchange between matrix and macropore domains needs better accounting to efficiently simulate preferential transport in the case of dense, closely spaced macropores. Increasing model complexity from SPM to MIM to DPM also improved predictions of preferential flow in the multiple-macropore columns but not in the single-macropore column. This suggests that the use of a more complex model with resolved domain-specific parameters is recommended with an increase in macropore density to generate forecasts with higher accuracy. PMID:24511165

Arora, Bhavna; Mohanty, Binayak P.; McGuire, Jennifer T.

2013-01-01

412

NASA Astrophysics Data System (ADS)

This paper presents the application of artificial neural network (ANN) based pattern recognition to extract the density information of asphalt pavement from simulated ground penetrating radar (GPR) signals. This study is part of research efforts into the application of GPR to monitor asphalt pavement density during compaction. The main challenge is to eliminate the effect of roller-sprayed water on GPR signals during compaction and to extract density information accurately. A calibration of the excitation function was conducted to provide an accurate match between the simulated signal and the real signal. A modified electromagnetic mixing model was then used to calculate the dielectric constant of asphalt mixture with water. A large database of GPR responses was generated from pavement models having different air void contents and various surface moisture contents using finite-difference time-domain simulation. Feature extraction was performed to extract density-related features from the simulated GPR responses. Air void contents were divided into five classes representing different compaction statuses. An ANN-based pattern recognition system was trained using the extracted features as inputs and air void content classes as target outputs. The accuracy of the system was tested using a test data set. Classification of air void contents using the developed algorithm is found to be highly accurate, which indicates the effectiveness of this method for predicting asphalt concrete density.
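The train-a-classifier-on-extracted-features step above can be sketched with a minimal stand-in for the ANN: multinomial logistic regression trained by gradient descent on synthetic features. The five well-separated Gaussian clusters below are assumptions standing in for the density-related features extracted from simulated GPR responses.

```python
import numpy as np

rng = np.random.default_rng(0)
n_classes, n_per = 5, 60
# Synthetic "features": one Gaussian cluster per air-void-content class,
# a stand-in for features extracted from the simulated GPR database.
means = 4.0 * np.eye(n_classes)
X = np.vstack([rng.normal(means[c], 0.5, (n_per, n_classes))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Multinomial logistic regression via batch gradient descent: a minimal
# linear stand-in for the paper's ANN-based pattern recognizer.
W = np.zeros((n_classes, n_classes))   # n_features happens to equal n_classes
b = np.zeros(n_classes)
onehot = np.eye(n_classes)[y]
for _ in range(500):
    p = softmax(X @ W + b)
    grad = (p - onehot) / len(X)       # cross-entropy gradient
    W -= 0.5 * X.T @ grad
    b -= 0.5 * grad.sum(axis=0)

accuracy = float((np.argmax(X @ W + b, axis=1) == y).mean())
```

On cleanly separated clusters a linear classifier already reaches near-perfect accuracy; the study's ANN is needed because real GPR features overlap across moisture and density conditions.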

Shangguan, Pengcheng; Al-Qadi, Imad L.; Lahouar, Samer

2014-08-01

413

An estimate of the electron density in filaments of galaxies at z˜ 0.1

NASA Astrophysics Data System (ADS)

Most of the baryons in the Universe are thought to be contained within filaments of galaxies, but as yet, no single study has published the observed properties of a large sample of known filaments to determine typical physical characteristics such as temperature and electron density. This paper presents a comprehensive large-scale search conducted for X-ray emission from a population of 41 bona fide filaments of galaxies to determine their X-ray flux and electron density. The sample is generated from the filament catalogue of Pimbblet et al., which is in turn sourced from the two-degree Field Galaxy Redshift Survey (2dFGRS). Since the filaments are expected to be very faint and of very low density, we used stacked ROSAT All-Sky Survey data. We detect a net surface brightness from our sample of filaments of (1.6 ± 0.1) × 10-14 erg cm-2 s-1 arcmin-2 in the 0.9-1.3 keV energy band for 1-keV plasma, which implies an electron density of ne= (4.7 ± 0.2) × 10-4 h1/2100 cm-3. Finally, we examine if a filament’s membership to a supercluster leads to an enhanced electron density as reported by Kull & Böhringer. We suggest it remains unclear if supercluster membership causes such an enhancement.

Fraser-McKelvie, Amelia; Pimbblet, Kevin A.; Lazendic, Jasmina S.

2011-08-01

414

Accuracy of catch-effort methods for estimating fish density and biomass in streams

At each of 11 localities a section of stream was closed off with nets and an electrofisher used to estimate the abundance of fishes in the section. Each section was fished from 5–7 times with each fishing equalling one unit of effort. Using the catch-effort methods of Leslie, DeLury and Ricker, separate estimates were made for each species. In several
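The Leslie depletion method named above regresses catch-per-unit-effort against cumulative prior catch: CPUE_t = qN0 - qK_t, so the intercept over the catchability gives the initial abundance. The sketch below runs it on noise-free synthetic removals (assumed N0 = 1000, q = 0.2, unit effort per pass), not data from the study.

```python
import numpy as np

def leslie_estimate(catches):
    # Leslie method: CPUE_t = q*N0 - q*K_t, where K_t is the cumulative
    # catch removed before pass t. With unit effort per pass, CPUE = catch.
    catches = np.asarray(catches, dtype=float)
    K = np.concatenate([[0.0], np.cumsum(catches)[:-1]])
    slope, intercept = np.polyfit(K, catches, 1)
    q = -slope                 # catchability
    n0 = intercept / q         # initial population size
    return n0, q

# Noise-free depletion of a closed population with N0 = 1000 and q = 0.2:
# each pass removes 20% of what remains.
n0_hat, q_hat = leslie_estimate([200.0, 160.0, 128.0, 102.4, 81.92])
```

With real electrofishing data the points scatter around the depletion line, and the DeLury (log-CPUE) and Ricker variants cited in the abstract handle that scatter under slightly different error assumptions.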

Robin Mahon

1980-01-01

416

This article considers a broad class of kernel mixture density models on compact metric spaces and manifolds. Following a Bayesian approach with a nonparametric prior on the location mixing distribution, sufficient conditions are obtained on the kernel, prior and the underlying space for strong posterior consistency at any continuous density. The prior is also allowed to depend on the sample size n and sufficient conditions are obtained for weak and strong consistency. These conditions are verified on compact Euclidean spaces using multivariate Gaussian kernels, on the hypersphere using a von Mises-Fisher kernel and on the planar shape space using complex Watson kernels. PMID:22984295

Bhattacharya, Abhishek; Dunson, David B.

2012-01-01

417

NASA Astrophysics Data System (ADS)

In radiotherapy, target motion during treatment delivery can be managed either by motion inclusive margins or by gating or tracking based on intrafraction target position monitoring. If radio-opaque fiducial markers are used the required three-dimensional (3D) target position signal for gating or tracking can be obtained by simultaneous acquisition of two x-ray images from different angles. However, most treatment machines do not have such stereoscopic imaging capability. Alternatively, the 3D target position may be estimated with a single imager (monoscopic imaging) although it only provides the projected target position in the two dimensions of the imager plane. In this study, we developed a probability-based method to estimate the unresolved motion component parallel to the imager axis from the projected motion. A 3D Gaussian probability density was assumed for the target position. Projection of the target into a certain point on the imager means that it is located on the ray line that connects this point with the focus point of the x-ray source. The 1D probability density along this line was calculated from the 3D probability density and its expectation value was used as the estimate for the unresolved position. The mathematical framework of the method was developed including analytical expressions for the estimated unresolved component as a function of resolved components and for the estimation uncertainty. Use of the method was demonstrated for prostate in a simulation study of monoscopic imaging. First, the required 3D probability density was constructed as a population average from a data set consisting of 536 continuous prostate position tracks from 17 patients recorded at 10 Hz. Next, monoscopic imaging at a fixed imaging angle and imaging frequency was simulated for each prostate track. 
Estimated 3D prostate tracks were constructed from the simulated projection images by the proposed method and compared with the actual tracks in order to determine the root-mean-square (rms) error. The simulations were performed with imaging angles in the range from 0° to 180° (relative to vertical) and imaging intervals in the range from 0.1 s (corresponding to continuous imaging) to 600 s (corresponding to no intrafraction imaging). For comparison, simulations were also performed with stereoscopic imaging, where perfect position determination in all three directions was assumed, and with monoscopic imaging without estimation of the unresolved motion, where the motion component along the imager axis was assumed to be zero. For continuous imaging, the accuracy of monoscopic imaging was limited by the uncertainty in the unresolved position estimation. The resulting vector rms error for the population corresponded closely to the theoretically derived estimation uncertainty. The estimation did not improve the accuracy of lateral monoscopic imaging, but it reduced the population rms error from 1.59 mm to 1.11 mm for vertical imaging. This improvement was most prominent for outlying tracks with large unresolved motion. Stereoscopic imaging was clearly superior to monoscopic imaging for high frequency imaging. For less frequent imaging, the accuracy of both monoscopic and stereoscopic imaging decreased due to target motion between images. Since this was most prominent for stereoscopic imaging, the difference in accuracy between monoscopic and stereoscopic imaging decreased with increasing imaging period. In conclusion, a method for estimation of the 3D target position from 2D projections has been developed and its use has been demonstrated in a simulation study of monoscopic prostate tracking.
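The core estimation step above has a closed form: restricting a 3D Gaussian N(mu, Sigma) to the ray x = p + t*d (from the x-ray focus through the projected image point) gives a 1D Gaussian in t with mean t* = d'Sigma^-1(mu - p) / (d'Sigma^-1 d), and p + t*d is the expected 3D position. The sketch below implements that formula; the isotropic prior and example geometry are assumptions for the illustration, not the population prostate-motion prior built in the study.

```python
import numpy as np

def estimate_along_ray(mu, cov, p, d):
    # The target is constrained to the ray x = p + t*d. Substituting this
    # into the 3D Gaussian N(mu, cov) gives a quadratic in t, i.e. a 1D
    # Gaussian whose mean is
    #   t* = d^T cov^-1 (mu - p) / (d^T cov^-1 d).
    # The expected 3D position is then p + t* * d.
    d = np.asarray(d, dtype=float)
    p = np.asarray(p, dtype=float)
    prec = np.linalg.inv(cov)
    t_star = d @ prec @ (np.asarray(mu, dtype=float) - p) / (d @ prec @ d)
    return p + t_star * d

# Isotropic prior centred at the origin; imager axis d = (0, 0, 1), so the
# ray's closest approach to the prior mean is at z = 0:
est = estimate_along_ray(mu=[0.0, 0.0, 0.0], cov=np.eye(3),
                         p=[1.0, 2.0, 5.0], d=[0.0, 0.0, 1.0])
```

With an anisotropic covariance (prostate motion is largest in the anterior-posterior and superior-inferior directions) the estimate shifts along the ray toward the dominant motion axis, which is what makes the population prior informative.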

Rugaard Poulsen, Per; Cho, Byungchul; Langen, Katja; Kupelian, Patrick; Keall, Paul J.

2008-08-01

418

In this study, the status of boron intake was evaluated and its relation to bone mineral density was examined among free-living female subjects in Korea. Boron intake was estimated using a database of the boron content of foods frequently consumed by Korean people, together with bone mineral density measurements, anthropometric measurements, and a dietary intake survey of 134 adult females, in order to evaluate the intake of boron as a nutrient supplementing the low level of calcium intake and to verify its relationship with bone mineral density. The subjects' average age, height, and weight were 40.84 years, 157.62 cm, and 59.70 kg, respectively. Average bone mineral density of lumbar spine L1-L4 and of the femoral neck were 0.92 g/cm² and 0.80 g/cm², respectively. Their average daily intakes of energy and boron were 6,538.53 kJ and 926.94 microg. Intake of boron through vegetables, fruits, and cereals accounted for 61.72% of overall boron intake. The food item that contributed most to daily boron intake was rice. Also, 65.41% of overall boron intake was from 30 other food items, such as soybean paste, soybeans, red beans, watermelons, oriental melons, pears, Chinese cabbage kimchi, soybean sprouts, and soybean milk. Boron intake did not show a significant relation to bone mineral density in the lumbar or femoral regions. In summary, the average daily intake of boron was 926.94 microg and displayed no significant relation to bone mineral density in these 134 free-living female subjects. Continued evaluation of boron intake in more diverse populations will be needed. PMID:18575817

Kim, Mi-Hyun; Bae, Yun-Jung; Lee, Yoon-Shin; Choi, Mi-Kyeong

2008-12-01

419

NASA Astrophysics Data System (ADS)

Stellar membership determination of an open cluster is an important process to perform before further analysis. Basically, there are two classes of membership determination methods: parametric and non-parametric. In this study, an alternative non-parametric method based on Binned Kernel Density Estimation that accounts for measurement errors (called BKDE-e for short) is proposed. This method is applied to proper-motion data to determine cluster membership kinematically and to estimate the average proper motion of the cluster. Monte Carlo simulations show that the average proper-motion determination using this proposed method is statistically more accurate than the ordinary Kernel Density Estimator (KDE). By including measurement errors in the calculation, the mode location from the resulting density estimate is less sensitive to non-physical or stochastic fluctuations than ordinary KDE, which excludes measurement errors. For a typical mean measurement error of 7 mas/yr, BKDE-e suppresses the potential for miscalculation by a factor of two compared to KDE. With a median accuracy of about 93%, the BKDE-e method has accuracy comparable to the parametric method (modified Sanders algorithm). Application to real data from The Fourth USNO CCD Astrograph Catalog (UCAC4), in particular to NGC 2682, is also performed. The mode of the member-star distribution on the Vector Point Diagram is located at μα cos δ = -9.94 ± 0.85 mas/yr and μδ = -4.92 ± 0.88 mas/yr. Although BKDE-e's performance does not overtake the parametric approach, it offers a new way of performing membership analysis, expandable to astrometric and photometric data or even to binary-cluster searches.
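The error-aware kernel idea above can be sketched in one dimension: broaden each datum's kernel by its own measurement error, so f(t) = (1/n) sum_i N(t; x_i, h² + sigma_i²), and read the mode off the density. This unbinned sketch omits the binning that gives BKDE its speed, and the synthetic proper motions and error distribution below are assumptions for the illustration.

```python
import numpy as np

def kde_with_errors(x_obs, sigma, h, grid):
    # Each datum contributes a Gaussian kernel whose width combines the
    # bandwidth h with that datum's own measurement error sigma_i:
    #   f(t) = (1/n) * sum_i N(t; x_i, h^2 + sigma_i^2)
    var = h ** 2 + np.asarray(sigma) ** 2
    z = (grid[:, None] - x_obs[None, :]) / np.sqrt(var)[None, :]
    k = np.exp(-0.5 * z ** 2) / np.sqrt(2.0 * np.pi * var)[None, :]
    return k.mean(axis=1)

rng = np.random.default_rng(3)
true_mode = -9.9                          # e.g. a proper-motion component, mas/yr
x_true = rng.normal(true_mode, 1.0, 400)  # intrinsic cluster spread
sigma = rng.uniform(0.3, 1.2, 400)        # per-star measurement errors
x_obs = x_true + rng.normal(0.0, sigma)   # observed, error-broadened values

grid = np.linspace(-15.0, -5.0, 2001)
dens = kde_with_errors(x_obs, sigma, h=0.4, grid=grid)
mode = grid[np.argmax(dens)]              # estimated average proper motion
```

Because noisy stars receive wider, flatter kernels, they pull the mode around less than they would in an ordinary KDE with a single global bandwidth.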

Priyatikanto, R.; Arifyanto, M. I.

2015-01-01

420

We evaluated bioelectrical impedance analysis (BIA) as a nonlethal means of predicting energy density and percent lipids for three fish species: Yellow perch Perca flavescens, walleye Sander vitreus, and lake whitefish Coregonus clupeaformis. Although models that combined BIA measures with fish wet mass provided strong predictions of total energy, total lipids, and total dry mass for whole fish, including BIA

Steven A. Pothoven; Stuart A. Ludsin; Tomas O. Höök; David L. Fanslow; Doran M. Mason; Paris D. Collingsworth; Jason J. Van Tassell

2008-01-01

421

PELLET COUNT INDICES COMPARED TO MARK-RECAPTURE ESTIMATES FOR EVALUATING SNOWSHOE HARE DENSITY

Snowshoe hares (Lepus americanus) undergo remarkable cycles and are the primary prey base of Canada lynx (Lynx canadensis), a carnivore recently listed as threatened in the contiguous United States. Efforts to evaluate hare densities using pellets have traditionally been based on regression equations developed in the Yukon, Canada. In western Montana, we evaluated whether or not local regression equations

L. Scott Mills; Karen E. Hodges

422

Dynamics of photosynthetic photon flux density (PPFD) and estimates in coastal northern California

Technology Transfer Automated Retrieval System (TEKTRAN)

The seasonal trends and diurnal patterns of Photosynthetically Active Radiation (PAR) were investigated in the San Francisco Bay Area of Northern California from March through August in 2007 and 2008. During these periods, the daily values of PAR flux density (PFD), energy loading with PAR (PARE), a...

423

Estimating low-density snowshoe hare populations using fecal pellet counts

(1 m²) circular plots (metre-circle plots). Metre-circle plots had higher pellet prevalence, lower circular plots required less establishment time, and observer training reduced the pellet-count bias than did metre-circle plots. The relationship between pellet density and hare number may have been

424

An Evaluation of Linearly Combining Density Estimators via Stacking, Technical Report No. 98-25

Padhraic Smyth, Information and Computer Science Department, University of California, Irvine, CA 92697-3425, smyth@ics.uci.edu; David Wolpert, NASA Ames. ... indicate a particular model, i.e., a particular mapping taking a parameter vector to a density. Let M
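The core idea of stacking density estimators (a convex combination whose weights are chosen to maximize held-out likelihood) can be sketched as follows. This is an illustrative EM-style sketch under assumed names; it is not the report's exact procedure.

```python
import numpy as np

def gauss_kde(train, h):
    """Return a density function: Gaussian KDE with bandwidth h."""
    def pdf(x):
        z = (np.asarray(x)[:, None] - train[None, :]) / h
        return np.exp(-0.5 * z**2).sum(axis=1) / (len(train) * h * np.sqrt(2 * np.pi))
    return pdf

def stack_weights(pdfs, x_val, n_iter=200):
    """EM updates for convex weights maximizing the held-out
    log-likelihood sum_i log sum_m w_m p_m(x_i)."""
    P = np.column_stack([p(x_val) for p in pdfs])   # (n_points, n_models)
    w = np.full(P.shape[1], 1.0 / P.shape[1])
    for _ in range(n_iter):
        r = (P * w) / (P @ w)[:, None]              # responsibilities
        w = r.mean(axis=0)                          # weight update
    return w

rng = np.random.default_rng(1)
x = rng.normal(0, 1, 300)
train, val = x[:200], x[200:]
# three component estimators: under-, well-, and over-smoothed KDEs
pdfs = [gauss_kde(train, h) for h in (0.1, 0.4, 1.6)]
w = stack_weights(pdfs, val)   # nonnegative weights summing to 1
```

The held-out split is what distinguishes stacking from simply averaging: weights are fit on data the component estimators never saw.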

Smyth, Padhraic

425

We present a new, robust and computationally efficient method for estimating the probability density of the intensity values in an image. Our approach makes use of a continuous representation of the image and develops a relation between probability density at a particular intensity value and image gradients along the level sets at that value. Unlike traditional sample-based methods such as histograms, min-
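The gradient/level-set relation is easiest to see in one dimension, where the density of values taken by a continuous signal I(t) is p(α) = (1/T) Σ_k 1/|I'(t_k)| over the level crossings t_k. The sketch below is a 1-D analogue under assumed names, not the paper's 2-D image method.

```python
import numpy as np

def level_crossing_density(alpha, t, signal):
    """Estimate p(alpha) for a sampled signal via its level crossings:
    p(alpha) = (1/T) * sum over crossings of 1/|I'(t_k)|.
    Crossings and slopes are located by linear interpolation."""
    s = signal - alpha
    idx = np.where(s[:-1] * s[1:] < 0)[0]          # sign changes = crossings
    dt = t[1] - t[0]
    slopes = np.abs((signal[idx + 1] - signal[idx]) / dt)
    T = t[-1] - t[0]
    return np.sum(1.0 / slopes) / T

# sine wave: the exact intensity density is the arcsine law
# p(a) = 1 / (pi * sqrt(1 - a^2))
t = np.linspace(0, 20 * np.pi, 200001)
sig = np.sin(t)
est = level_crossing_density(0.5, t, sig)
exact = 1.0 / (np.pi * np.sqrt(1 - 0.25))
```

In 2-D the sum over crossings becomes a line integral of 1/|∇I| along the level set, by the coarea formula.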

Ajit Rajwade; Arunava Banerjee; Anand Rangarajan

2006-01-01

426

Reported here is a phantom-based comparison of methods for determining the power spectral density of ultrasound backscattered signals. Those power spectral density values are then used to estimate parameters describing α(f), the frequency dependence of the acoustic attenuation coefficient. Phantoms were scanned with a clinical system equipped with a research interface to obtain radiofrequency echo data. Attenuation, modeled as a power law α(f) = α₀f^β, was estimated using a reference phantom method. The power spectral density was estimated using the short-time Fourier transform (STFT), Welch's periodogram, and Thomson's multitaper technique, and performance was analyzed when limiting the size of the parameter estimation region. Errors were quantified by the bias and standard deviation of the α₀ and β estimates, and by the overall power-law fit error. For parameter estimation regions larger than ~34 pulse lengths (~1 cm for this experiment), an overall power-law fit error of 4% was achieved with all spectral estimation methods. With smaller parameter estimation regions, as in parametric image formation, the bias and standard deviation of the α₀ and β estimates depended on the size of the parameter estimation region. Here the multitaper method reduced the standard deviation of the α₀ and β estimates compared to those obtained using the other techniques. Results provide guidance for choosing methods for estimating the power spectral density in quantitative ultrasound. PMID: 23858055
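Two of the building blocks mentioned above can be sketched briefly: a Welch PSD estimate, and a power-law fit α(f) = α₀f^β via least squares in log-log space. The signal, frequencies, and parameter values below are synthetic illustrations; the paper derives α(f) from backscatter spectra with a reference-phantom method, which this sketch omits.

```python
import numpy as np
from scipy.signal import welch

# Welch PSD of a synthetic RF-like segment at a 40 MHz sampling rate
fs = 40e6
t = np.arange(4096) / fs
rf = np.sin(2 * np.pi * 5e6 * t) \
     + 0.1 * np.random.default_rng(2).standard_normal(t.size)
freqs, psd = welch(rf, fs=fs, nperseg=512)

def fit_power_law(f, alpha):
    """Fit alpha(f) = a0 * f**b by linear regression on log values."""
    b, log_a0 = np.polyfit(np.log(f), np.log(alpha), 1)
    return np.exp(log_a0), b

# synthetic attenuation samples: a0 = 0.5, b = 1.1 (units illustrative)
f = np.linspace(2.0, 8.0, 25)      # MHz
alpha = 0.5 * f**1.1
a0, b = fit_power_law(f, alpha)
```

A multitaper estimate would replace `welch` with an average of periodograms over DPSS tapers (e.g. via `scipy.signal.windows.dpss`), trading a wider main lobe for lower estimator variance.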

Rosado-Mendez, Ivan M.; Nam, Kibo; Hall, Timothy J.; Zagzebski, James A.

2013-01-01

427

This report documents primate communities at two sites within Noel Kempff Mercado National Park in northeastern Santa Cruz Department, Bolivia. Diurnal line transects and incidental observations were employed to survey two field sites, Lago Caiman and Las Gamas, providing information on primate diversity, habitat preferences, relative abundance, and population density. Primate diversity at both sites was not particularly high, with six observed species: Callithrix argentata melanura, Aotus azarae, Cebus apella, Alouatta caraya, A. seniculus, and Ateles paniscus chamek. Cebus showed no significant habitat preferences at Lago Caiman and was also more generalist in use of forest strata, whereas Ateles clearly preferred the upper levels of structurally tall forest. Callithrix argentata melanura was rarely encountered during surveys at Lago Caiman, where it preferred low vine forest. Both species of Alouatta showed restricted habitat use and were sympatric in Igapo forest in the Lago Caiman area. The most abundant primate at both field sites was Ateles, with density estimates reaching 32.1 individuals/km2 in the lowland forest at Lago Caiman, compared to 14.1 individuals/km2 for Cebus. Both Ateles and Cebus were absent from smaller patches of gallery forest at Las Gamas. These densities are compared with estimates from other Neotropical sites. The diversity of habitats and their different floristic composition may account for the numerical dominance of Ateles within the primate communities at both sites. PMID:9802511

Wallace, R B; Painter, R L; Taber, A B

1998-01-01

428

Technology Transfer Automated Retrieval System (TEKTRAN)

Resolving uncertainty in the carbon cycle is paramount to refining climate predictions. Soil organic carbon (SOC) is a major component of terrestrial C pools, and accuracy of SOC estimates are only as good as the measurements and assumptions used to obtain them. Dryland soils account for a substanti...

429

A Weighted k-Nearest Neighbor Density Estimate for Geometric Inference

The problem of recovering topological and geometric information from multivariate data has attracted ... In this stochastic framework, the problem of estimating the support of µ and its geometric properties (e... ball of small radius, around data points inside the support set (Devroye and Wise [19]). Object
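The classic (unweighted) k-nearest-neighbor density estimate underlying this line of work can be sketched as follows; the paper studies a weighted variant with weights over the ordered neighbor distances, which this sketch does not implement.

```python
import numpy as np
from math import gamma, pi

def knn_density(x, data, k):
    """Classic k-NN density estimate in d dimensions:
    p(x) = k / (n * V_d * r_k(x)**d), where r_k(x) is the distance to
    the k-th nearest sample and V_d the volume of the unit d-ball."""
    data = np.atleast_2d(data)
    n, d = data.shape
    dists = np.sort(np.linalg.norm(data - x, axis=1))
    r_k = dists[k - 1]
    v_d = pi**(d / 2) / gamma(d / 2 + 1)      # unit-ball volume
    return k / (n * v_d * r_k**d)

# Uniform[0,1] sample in 1-D: the true density at interior points is 1
rng = np.random.default_rng(3)
sample = rng.uniform(0, 1, (2000, 1))
est = knn_density(np.array([0.5]), sample, k=50)
```

The weighted variant replaces the single r_k with a weighted functional of the first k ordered distances, which can reduce variance for geometric inference.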

Biau, Gérard