
1

Stand density, or tree density, expressed as the number of trees per unit area, is an important forest management parameter. Foresters use it to evaluate regeneration, to assess the effect of forest management measures, or as an indicator variable for other stand parameters such as age, basal area and volume. A stand density estimation technique, based on the application ...

Lieven P. C. Verbeke; Frieke M. B. Van Coillie; Robert R. De Wulf

2

A simple statistical analysis of wavelet-based multifractal spectrum estimation

The multifractal spectrum characterizes the scaling and singularity structures of signals and proves useful in numerous applications, from network traffic analysis to turbulence. Of great concern is the estimation of the spectrum from a finite data record. We derive asymptotic expressions for the bias and variance of a wavelet-based estimator for a fractional Brownian motion (fBm) process. Numerous numerical simulations ...

Paulo Goncalves; Rudolf Riedi; Richard Baraniuk

1998-01-01

3

Wavelet-based seismic signal estimation, detection and classification via Bayes theorem

NASA Astrophysics Data System (ADS)

An application of Bayes theorem to seismic signal estimation, detection and classification is implemented with seismic events modeled as a superposition of wavelet bases. An empirical Bayes estimator is derived based on best basis arguments over block adaptive wavelet packet bases conditioned on known subband noise variances. A modified entropy functional is derived and the estimator is shown to be an adaptive shrinkage operator of coefficients in the best basis representation. Adaptation results from the updating of subband noise variance estimates. A novel robust variance estimator is presented for this context that outperforms the median-based estimator for the longitudinal estimation of variance. The algorithm is tested on synthetic seismic events and compared to the discrete wavelet transform (DWT) as well as best basis selection via minimization of Stein's unbiased risk. Improvements in estimation, in terms of mean squared error, are appreciable given the improved sparsity of representation that the best basis yields at moderate and high signal-to-noise ratios. An application to seismic event detection, feature extraction and classification has been developed as well. Detection and feature extraction are based on the estimated coefficients of the DWT of the seismic event, choosing bases that are known a priori to communicate useful information for discrimination. Classification of events into one of the following classes: teleseisms, regional earthquakes, near earthquakes, quarry blasts, and false alarms is accomplished with conditional class densities derived from training data by finding the maximum a posteriori probability using an empirical Bayes procedure. This algorithm is tested for detection and classification performance on the New England Seismological Network.
This detection algorithm exhibits a likelihood of detection twice that of the widely used energy-transient measure termed "short-term average/long-term average" (STA/LTA) under typical wideband network constraints in arbitrary conditions. Classification of seismic events via this method achieves an approximately 70% correct identification rate over a broad range of test data sets relative to a human viewer.
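The STA/LTA energy-transient measure used as the benchmark above is straightforward to sketch. The following is a minimal illustrative implementation; the window lengths, trigger threshold of 4, and the synthetic trace are arbitrary choices for demonstration, not values from the paper:

```python
import numpy as np

def sta_lta(x, n_sta, n_lta):
    """Classic short-term/long-term average ratio of signal energy.

    Entries before the first full LTA window are left at zero.
    """
    energy = x.astype(float) ** 2
    csum = np.concatenate(([0.0], np.cumsum(energy)))
    ratio = np.zeros(len(x))
    for i in range(n_lta, len(x)):
        sta = (csum[i + 1] - csum[i + 1 - n_sta]) / n_sta
        lta = (csum[i + 1] - csum[i + 1 - n_lta]) / n_lta
        ratio[i] = sta / lta if lta > 0 else 0.0
    return ratio

# A toy trace: unit-variance noise with a high-energy burst ("event").
rng = np.random.default_rng(0)
trace = rng.normal(0, 1, 2000)
trace[1200:1300] += rng.normal(0, 8, 100)   # transient of much higher energy

r = sta_lta(trace, n_sta=50, n_lta=500)
trigger = r > 4.0                           # illustrative trigger threshold
```

In practice the STA window is chosen near the expected event duration and the LTA window much longer, so the ratio spikes when transient energy arrives.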

Gendron, Paul J.

4

Estimation of Modal Parameters Using a Wavelet-Based Approach

NASA Technical Reports Server (NTRS)

Modal stability parameters are extracted directly from aeroservoelastic flight test data by decomposition of accelerometer response signals into time-frequency atoms. Logarithmic sweeps and sinusoidal pulses are used to generate DAST closed loop excitation data. Novel wavelets constructed to extract modal damping and frequency explicitly from the data are introduced. The so-called Haley and Laplace wavelets are used to track time-varying modal damping and frequency in a matching pursuit algorithm. Estimation of the trend to aeroservoelastic instability is demonstrated successfully from analysis of the DAST data.

Lind, Rick; Brenner, Marty; Haley, Sidney M.

1997-01-01

5

We propose a wavelet-based codec for the static depth-image-based representation, which allows viewers to freely choose the viewpoint. The proposed codec jointly estimates and encodes the unknown depth map from multiple views using a novel rate-distortion (RD) optimization scheme. The rate constraint reduces the ambiguity of depth estimation by favoring piecewise-smooth depth maps. The optimization is efficiently solved by ...

Matthieu Maitre; Yoshihisa Shinagawa; Minh N. Do

2008-01-01

6

Wavelet-based analysis and power law classification of C/NOFS high-resolution electron density data

NASA Astrophysics Data System (ADS)

This paper applies new wavelet-based analysis procedures to low Earth-orbiting satellite measurements of equatorial ionospheric structure. The analysis was applied to high-resolution data from 285 Communications/Navigation Outage Forecasting System (C/NOFS) satellite orbits sampling the postsunset period at geomagnetic equatorial latitudes. The data were acquired during a period of progressively intensifying equatorial structure. The sampled altitude range varied from 400 to 800 km. The varying scan velocity remained within 20° of the cross-field direction. Time-to-space interpolation generated uniform samples at approximately 8 m. A maximum segmentation length that supports stochastic structure characterization was identified. A two-component inverse power law model was fit to scale spectra derived from each segment together with a goodness-of-fit measure. Inverse power law parameters derived from the scale spectra were used to classify the scale spectra by type. The largest category was characterized by a single inverse power law with a mean spectral index somewhat larger than 2. No systematic departure from the inverse power law was observed to scales greater than 100 km. A small subset of the most highly disturbed passes at the lowest sampled altitudes could be categorized by two-component power law spectra with a range of break scales from less than 100 m to several kilometers. The results are discussed within the context of other analyses of in situ data and spectral characteristics used for scintillation analyses.
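A single-component inverse power law can be fit to a spectrum by least squares in log-log coordinates; the paper's two-component fit with break-scale estimation extends this idea. A minimal sketch (the low-frequency fitting band f < 0.1 and the random-walk test signal are illustrative choices, not the paper's procedure):

```python
import numpy as np

def fit_power_law_spectrum(x, f_max=0.1):
    """Estimate the spectral index p of S(f) ~ C * f**(-p) from the
    periodogram, via a least-squares line in log-log coordinates."""
    n = len(x)
    freqs = np.fft.rfftfreq(n)[1:]                     # drop the DC bin
    psd = np.abs(np.fft.rfft(x - x.mean())[1:]) ** 2 / n
    band = freqs < f_max                               # low-frequency fitting band
    slope, _ = np.polyfit(np.log(freqs[band]), np.log(psd[band]), 1)
    return -slope

# Synthetic check: a random walk has an approximately f^-2 spectrum (p = 2).
rng = np.random.default_rng(1)
walk = np.cumsum(rng.normal(0, 1, 4096))
p = fit_power_law_spectrum(walk)
```

Restricting the fit to a low-frequency band avoids the flattening of the discrete spectrum near the Nyquist frequency.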

Rino, C. L.; Carrano, C. S.; Roddy, Patrick

2014-08-01

7

Information geometric density estimation

NASA Astrophysics Data System (ADS)

We investigate kernel density estimation where the kernel function varies from point to point. Density estimation in the input space amounts to finding a set of coordinates on a statistical manifold. This novel perspective helps to combine efforts from information geometry and machine learning to spawn a family of density estimators. We present example models with simulations, and discuss the principle and theory of such density estimation.

Sun, Ke; Marchand-Maillet, Stéphane

2015-01-01

8

Density-difference estimation.

We address the problem of estimating the difference between two probability densities. A naive approach is a two-step procedure of first estimating two densities separately and then computing their difference. However, this procedure does not necessarily work well because the first step is performed without regard to the second step, and thus a small estimation error incurred in the first stage can cause a big error in the second stage. In this letter, we propose a single-shot procedure for directly estimating the density difference without separately estimating two densities. We derive a nonparametric finite-sample error bound for the proposed single-shot density-difference estimator and show that it achieves the optimal convergence rate. We then show how the proposed density-difference estimator can be used in L²-distance approximation. Finally, we experimentally demonstrate the usefulness of the proposed method in robust distribution comparison such as class-prior estimation and change-point detection. PMID:23777524
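The naive two-step baseline that the letter argues against is easy to write down with a plain Gaussian-kernel KDE. The sketch below uses illustrative choices (bandwidth 0.3, Gaussian test densities) and does not reproduce the letter's single-shot estimator:

```python
import numpy as np

def kde(sample, pts, h):
    """Gaussian kernel density estimate of `sample`, evaluated at `pts`."""
    z = (pts[:, None] - sample[None, :]) / h
    return np.exp(-0.5 * z ** 2).sum(axis=1) / (len(sample) * h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(2)
p_sample = rng.normal(0.0, 1.0, 5000)        # draws from p = N(0, 1)
q_sample = rng.normal(1.0, 1.0, 5000)        # draws from q = N(1, 1)

grid_d = np.linspace(-5.0, 6.0, 221)
# Two-step estimate of the density difference p - q.
diff = kde(p_sample, grid_d, 0.3) - kde(q_sample, grid_d, 0.3)
```

Each density is estimated without regard to the difference, which is exactly the failure mode the proposed single-shot estimator is designed to avoid.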

Sugiyama, Masashi; Kanamori, Takafumi; Suzuki, Taiji; du Plessis, Marthinus Christoffel; Liu, Song; Takeuchi, Ichiro

2013-10-01

9

Numerical estimation of densities

We present a novel technique, dubbed FiEstAS, to estimate the underlying density field from a discrete set of sample points in an arbitrary multidimensional space. FiEstAS assigns a volume to each point by means of a binary tree. Density is then computed by integrating over an adaptive kernel. As a first test, we construct several Monte Carlo realizations of a ...

Y. Ascasibar; J. Binney

2005-01-01

10

Minimum complexity density estimation

The authors introduce an index of resolvability that is proved to bound the rate of convergence of minimum complexity density estimators as well as the information-theoretic redundancy of the corresponding total description length. The results on the index of resolvability demonstrate the statistical effectiveness of the minimum description-length principle as a method of inference. The minimum complexity estimator converges to ...
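The minimum description-length idea behind this can be illustrated with a toy two-part code for histogram densities; this is only a sketch of the principle, not the minimum complexity estimator analyzed by the authors. The penalty term (m-1)/2 * log n is the usual parametric code-length approximation, assumed here for illustration:

```python
import numpy as np

def mdl_histogram_bins(x, max_bins=50):
    """Pick the histogram bin count minimizing a two-part code length:
    negative log-likelihood of the data plus (m - 1)/2 * log(n) nats
    for describing the bin-probability parameters."""
    n = len(x)
    best_m, best_len = 1, np.inf
    for m in range(1, max_bins + 1):
        counts, edges = np.histogram(x, bins=m)
        widths = np.diff(edges)
        nz = counts > 0
        # Log-likelihood under the histogram density n_j / (n * w_j).
        loglik = np.sum(counts[nz] * np.log(counts[nz] / (n * widths[nz])))
        code_len = -loglik + 0.5 * (m - 1) * np.log(n)
        if code_len < best_len:
            best_m, best_len = m, code_len
    return best_m

rng = np.random.default_rng(3)
x = rng.normal(0, 1, 2000)
m = mdl_histogram_bins(x)   # data-driven bin count, roughly O(n^(1/3))
```

The shortest total description balances fit (likelihood) against model complexity (parameter code length), which is the trade-off the index of resolvability quantifies.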

Andrew R. Barron; Thomas M. Cover

1991-01-01

11

Conditional Density Estimation with Class Probability Estimators

... to quantify the uncertainty inherent in a prediction. If a conditional density estimate is available ... conditional density estimates using a class probability estimator, where this estimator is applied ...

Frank, Eibe

12

Ultrasound image deconvolution in symmetrical mirror wavelet bases

NASA Astrophysics Data System (ADS)

Observed medical ultrasound images are degraded representations of true tissue images. The degradation is a combination of blurring due to the finite resolution of the imaging system and the observation noise. This paper presents a new wavelet-based deconvolution method for medical ultrasound imaging. We design a new orthogonal wavelet basis, known as the symmetrical mirror wavelet basis, that provides more desirable frequency resolution. Our proposed ultrasound image restoration with wavelets consists of an inversion of the observed ultrasound image using the estimated two-dimensional (2-D) point spread function (PSF), followed by denoising in the designed wavelet basis. The tissue image restoration is then accomplished by modelling the tissue structures with the generalized Gaussian density (GGD) function using Bayesian estimation. Both subjective and objective measures show that the deconvolved images offer improved visualization and resolution gain.

Yeoh, Wee Soon; Zhang, Cishen; Chen, Ming; Yan, Ming

2006-03-01

13

Conditional Density Estimation via Least-Squares Density Ratio Estimation

We propose a novel method of conditional density estimation. Our basic idea is to express the conditional density in terms of the ratio of unconditional densities, and the ratio is directly estimated without going through ...
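The identity underlying this, p(y|x) = p(x, y) / p(x), can be illustrated with plain kernel density estimates standing in for the two unconditional densities. Note this is the indirect route; Sugiyama's method estimates the ratio directly by least squares, which is not reproduced here. The bandwidth, sample size, and linear-Gaussian test model are illustrative assumptions:

```python
import numpy as np

def gauss_kde_2d(pts, xy, h):
    """Product-Gaussian KDE of 2-D points, evaluated at locations xy."""
    d = (xy[:, None, :] - pts[None, :, :]) / h
    k = np.exp(-0.5 * (d ** 2).sum(axis=2))
    return k.sum(axis=1) / (len(pts) * (2 * np.pi) * h ** 2)

def gauss_kde_1d(pts, x, h):
    z = (x[:, None] - pts[None, :]) / h
    return np.exp(-0.5 * z ** 2).sum(axis=1) / (len(pts) * h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(4)
x = rng.normal(0, 1, 4000)
y = 2.0 * x + rng.normal(0, 0.5, 4000)      # y | x ~ N(2x, 0.5^2)
pts = np.column_stack([x, y])

h, x0 = 0.25, 1.0
ys = np.linspace(-2, 6, 161)
joint = gauss_kde_2d(pts, np.column_stack([np.full_like(ys, x0), ys]), h)
marg = gauss_kde_1d(x, np.array([x0]), h)[0]
cond = joint / marg                          # estimate of p(y | x = 1)
```

With a product kernel, integrating the joint estimate over y recovers the marginal estimate exactly, so the ratio integrates to one by construction.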

Sugiyama, Masashi

14

Numerical estimation of densities

[Abridged] We present a novel technique, dubbed FiEstAS, to estimate the underlying density field from a discrete set of sample points in an arbitrary multidimensional space. FiEstAS assigns a volume to each point by means of a binary tree. Density is then computed by integrating over an adaptive kernel. As a first test, we construct several Monte Carlo realizations of a Hernquist profile and recover the particle density in both real and phase space. At a given point, Poisson noise causes the unsmoothed estimates to fluctuate by a factor ~2 regardless of the number of particles. This spread can be reduced to about 0.1 dex (~26 per cent) by our smoothing procedure. [...] We conclude that our algorithm accurately measures the phase-space density up to the limit where discreteness effects render the simulation itself unreliable. Computationally, FiEstAS is orders of magnitude faster than the method based on Delaunay tessellation that Arad et al. employed, making it practicable to recover smoothed density estimates for sets of 10^9 points in 6 dimensions.

Y. Ascasibar; J. Binney

2004-09-09

15

Contingent Kernel Density Estimation

Kernel density estimation is a widely used method for estimating a distribution based on a sample of points drawn from that distribution. Generally, in practice some form of error contaminates the sample of observed points. Such error can be the result of imprecise measurements or observation bias. Often this error is negligible and may be disregarded in analysis. In cases where the error is non-negligible, estimation methods should be adjusted to reduce resulting bias. Several modifications of kernel density estimation have been developed to address specific forms of errors. One form of error that has not yet been addressed is the case where observations are nominally placed at the centers of areas from which the points are assumed to have been drawn, where these areas are of varying sizes. In this scenario, the bias arises because the size of the error can vary among points and some subset of points can be known to have smaller error than another subset or the form of the error may change among points. This paper proposes a “contingent kernel density estimation” technique to address this form of error. This new technique adjusts the standard kernel on a point-by-point basis in an adaptive response to changing structure and magnitude of error. In this paper, equations for our contingent kernel technique are derived, the technique is validated using numerical simulations, and an example using the geographic locations of social networking users is worked to demonstrate the utility of the method. PMID:22383966
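A simplified 1-D sketch of the contingent-kernel idea, widening each observation's kernel according to the size of the area it was nominally placed at the center of, might look as follows. The inflation rule sqrt(h0^2 + w^2/3), combining a base bandwidth with the variance of a uniform placement error on [-w, w], is an assumption of this sketch, not the paper's derivation:

```python
import numpy as np

def contingent_kde(points, half_widths, xs, h0=0.2):
    """Per-point Gaussian kernels whose bandwidth grows with each
    observation's placement uncertainty (half-width of its source area)."""
    out = np.zeros_like(xs, dtype=float)
    for c, w in zip(points, half_widths):
        h = np.sqrt(h0 ** 2 + (w ** 2) / 3.0)   # uniform error on [-w, w] has variance w^2 / 3
        out += np.exp(-0.5 * ((xs - c) / h) ** 2) / (h * np.sqrt(2 * np.pi))
    return out / len(points)

rng = np.random.default_rng(8)
true_pts = rng.normal(0, 1, 3000)
half_w = rng.uniform(0.1, 1.0, 3000)            # each point's area half-width varies
# Reported location = center of the area the point actually fell in.
reported = true_pts + rng.uniform(-1, 1, 3000) * half_w
xs = np.linspace(-5, 5, 401)
est = contingent_kde(reported, half_w, xs)
```

Points known to come from small areas keep sharp kernels while points from large areas are smoothed more, mirroring the point-by-point adjustment the paper describes.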

Fortmann-Roe, Scott; Starfield, Richard; Getz, Wayne M.

2012-01-01

16

Forest Density Estimation

... density estimation in high dimensions, using a family of density estimators based on forest-structured ... to a forest; rather, we form kernel density estimates of the bivariate and univariate marginals, and apply ...

Guestrin, Carlos

17

Airborne Crowd Density Estimation

NASA Astrophysics Data System (ADS)

This paper proposes a new method for estimating human crowd densities from aerial imagery. Applications benefiting from an accurate crowd monitoring system are mainly found in the security sector. Normally crowd density estimation is done through in-situ camera systems mounted at high locations, although this is not appropriate in the case of very large crowds with thousands of people. Using airborne camera systems in these scenarios is a new research topic. Our method uses a preliminary filtering of the whole image space by suitable and fast interest point detection, resulting in a number of image regions possibly containing human crowds. Validation of these candidates is done by transforming the corresponding image patches into a low-dimensional and discriminative feature space and classifying the results using a support vector machine (SVM). The feature space is spanned by texture features computed by applying a Gabor filter bank with varying scale and orientation to the image patches. For evaluation, we use 5 different image datasets acquired by the 3K+ aerial camera system of the German Aerospace Center during real mass events like concerts or football games. To evaluate the robustness and generality of our method, these datasets are taken from different flight heights between 800 m and 1500 m above ground (keeping a fixed focal length) and varying daylight and shadow conditions. The results of our crowd density estimation are evaluated against a reference data set obtained by manually labeling tens of thousands of individual persons in the corresponding datasets, and show that our method is able to estimate human crowd densities in challenging realistic scenarios.

Meynberg, O.; Kuschk, G.

2013-10-01

18

Wavelet-based polarimetry analysis

NASA Astrophysics Data System (ADS)

Wavelet transformation has become a cutting-edge and promising approach in the field of image and signal processing. A wavelet is a waveform of effectively limited duration that has an average value of zero. Wavelet analysis is done by breaking up the signal into shifted and scaled versions of the original signal. The key advantage of a wavelet is that it is capable of revealing smaller changes, trends, and breakdown points that are not revealed by other techniques such as Fourier analysis. The phenomenon of polarization has been studied for quite some time and is a very useful tool for target detection and tracking. Long Wave Infrared (LWIR) polarization is beneficial for detecting camouflaged objects and is a useful approach when identifying and distinguishing manmade objects from natural clutter. In addition, the Stokes polarization parameters, which are calculated from 0°, 45°, 90°, 135°, right circular, and left circular intensity measurements, provide spatial orientations of target features and suppress natural features. In this paper, we propose a wavelet-based polarimetry analysis (WPA) method to analyze Long Wave Infrared polarimetry imagery to discriminate targets such as dismounts and vehicles from background clutter. These parameters can be used for image thresholding and segmentation. Experimental results show the wavelet-based polarimetry analysis is efficient and can be used in a wide range of applications such as change detection, shape extraction, target recognition, and feature-aided tracking.

Ezekiel, Soundararajan; Harrity, Kyle; Farag, Waleed; Alford, Mark; Ferris, David; Blasch, Erik

2014-06-01

19

NASA Technical Reports Server (NTRS)

Wavelets can provide a basis set in which the basis functions are constructed by dilating and translating a fixed function known as the mother wavelet. The mother wavelet can be seen as a high pass filter in the frequency domain. The process of dilating and expanding this high-pass filter can be seen as altering the frequency range that is 'passed' or detected. The process of translation moves this high-pass filter throughout the domain, thereby providing a mechanism to detect the frequencies or scales of information at every location. This is exactly the type of information that is needed for effective grid generation. This paper provides motivation to use wavelets for grid generation in addition to providing the final product: source code for wavelet-based grid generation.

Jameson, Leland

1996-01-01

20

NASA Astrophysics Data System (ADS)

The inverse problem of recovering the Earth's density distribution from data of the first or second derivative of the gravitational potential at satellite orbit height is discussed for a ball-shaped Earth. This problem is exponentially ill-posed. In this paper, a multiscale regularization technique using scaling functions and wavelets constructed for the corresponding integro-differential equations is introduced and its numerical applications are discussed. In the numerical part, the second radial derivative of the gravitational potential at 200 km orbit height is calculated on a point grid out of the NASA/GSFC/NIMA Earth Geopotential Model (EGM96). These simulated data, derived from SGG (satellite gravity gradiometry) satellite measurements, are convolved with the introduced scaling functions, yielding a multiresolution analysis of harmonic density variations in the Earth's crust. Moreover, the noise sensitivity of the regularization technique is analysed numerically.

Michel, Volker

2005-06-01

21

Density Estimation with Mercer Kernels

NASA Technical Reports Server (NTRS)

We present a new method for density estimation based on Mercer kernels. The density estimate can be understood as the density induced on a data manifold by a mixture of Gaussians fit in a feature space. As is usual, the feature space and data manifold are defined with any suitable positive-definite kernel function. We modify the standard EM algorithm for mixtures of Gaussians to infer the parameters of the density. One benefit of the approach is its conceptual simplicity and uniform applicability over many different types of data. Preliminary results are presented for a number of simple problems.

Macready, William G.

2003-01-01

22

Nonparametric Density Estimation Using Wavelets

Here the problem of density estimation using wavelets is considered. Nonparametric wavelet density estimators have recently been proposed and seem to outperform classical estimators ...

West, Mike

23

Computational validation of fractal characterization by using the wavelet-based fractal analysis

NASA Astrophysics Data System (ADS)

In this paper, the performance of the wavelet-based fractal analysis for fractal characterization is computationally validated. In the wavelet-based fractal analysis, the spectral exponent β, which is related to the scaling exponent α characterizing the long-range correlation, is obtained from the slope of the log-variance of the wavelet coefficients versus scale graph. The wavelet-based fractal analysis is applied to a set of simulated time series associated with various scaling exponents. The scaling exponents derived from the corresponding spectral exponents of the simulated time series are examined. From the computational results with the 4th-order Daubechies wavelet bases, the wavelet-based fractal analysis is shown to quantify the scaling exponents accurately and to provide better performance than the commonly used detrended fluctuation analysis (DFA). Overall, the average error in the estimate of the scaling exponent by the wavelet-based fractal analysis is less than 2.50%. However, the multifractal detrended fluctuation analysis shows that the simulated time series exhibit weak multifractal characteristics.
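The core computation, taking the slope of log2(wavelet-coefficient variance) versus decomposition level, can be sketched with a Haar transform standing in for the paper's 4th-order Daubechies basis. Haar and the Brownian-motion test signal (for which the spectral exponent is 2) are simplifications for illustration:

```python
import numpy as np

def wavelet_spectral_exponent(x, levels=6):
    """Estimate the spectral exponent as the slope of log2(variance of
    Haar detail coefficients) versus decomposition level."""
    a = np.asarray(x, dtype=float)
    js, log_vars = [], []
    for j in range(1, levels + 1):
        if len(a) % 2:                         # keep the length even
            a = a[:-1]
        even, odd = a[0::2], a[1::2]
        detail = (even - odd) / np.sqrt(2.0)   # detail coefficients at level j
        a = (even + odd) / np.sqrt(2.0)        # coarse approximation carried down
        js.append(j)
        log_vars.append(np.log2(detail.var()))
    slope, _ = np.polyfit(js, log_vars, 1)
    return slope

rng = np.random.default_rng(5)
bm = np.cumsum(rng.normal(0, 1, 2 ** 15))      # Brownian motion: beta = 2
beta = wavelet_spectral_exponent(bm)
```

The finest levels bias the Haar estimate slightly downward for steep spectra; longer filters such as db4 reduce this, which is presumably why the paper uses them.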

Janjarasjitt, Suparerk

2014-03-01

24

Estimation of coastal density gradients

NASA Astrophysics Data System (ADS)

Density gradients in coastal regions with significant freshwater input are large and variable and are a major control of nearshore circulation. However, their measurement is difficult, especially where the gradients are largest close to the coast, with significant uncertainties because of a variety of factors: spatial and time scales are small, tidal currents are strong and water depths shallow. Whilst temperature measurements are relatively straightforward, measurements of salinity (the dominant control of spatial variability) can be less reliable in turbid coastal waters. Liverpool Bay has strong tidal mixing and receives fresh water principally from the Dee, Mersey, Ribble and Conwy estuaries, each with different catchment influences. Horizontal and vertical density gradients are variable both in space and time. The water column stratifies intermittently. A Coastal Observatory has been operational since 2002 with regular (quasi-monthly) CTD surveys on a 9 km grid, an in situ station, an instrumented ferry travelling between Birkenhead and Dublin, and a shore-based HF radar system measuring surface currents and waves. These measurements are complementary, each having different space-time characteristics. For coastal gradients the ferry is particularly useful since measurements are made right from the mouth of the Mersey. From measurements at the in situ site alone, density gradients can only be estimated from the tidal excursion. A suite of coupled physical, wave and ecological models is run in association with these measurements. The models, here on a 1.8 km grid, enable detailed estimation of nearshore density gradients, provided appropriate river run-off data are available. Examples are presented of the density gradients estimated from the different measurements and models, together with accuracies and uncertainties, showing that systematic time series measurements within a few kilometres of the coast are a high priority.
(Here gliders are an exciting prospect for detailed regular measurements to fill this gap.) The consequences for and sensitivity of circulation estimates are presented using both numerical and analytic models.
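As a toy illustration of turning along-track temperature and salinity into a nearshore density gradient, one can use a linearized equation of state and finite differences. The coefficients and the idealized salinity section below are illustrative assumptions, not a TEOS-10 computation or Liverpool Bay data:

```python
import numpy as np

def density_linear_eos(T, S, rho0=1027.0, T0=10.0, S0=35.0,
                       alpha=1.7e-4, beta=7.6e-4):
    """Seawater density from a linearized equation of state
    (coefficients are illustrative, not a TEOS-10 fit)."""
    return rho0 * (1.0 - alpha * (T - T0) + beta * (S - S0))

# Idealized along-track samples, e.g. from a ferry section (distance in km).
x_km = np.linspace(0, 30, 61)
S = 30.0 + 5.0 * x_km / 30.0          # fresher water near the estuary mouth
T = np.full_like(x_km, 12.0)          # isothermal section for simplicity

rho = density_linear_eos(T, S)
drho_dx = np.gradient(rho, x_km * 1000.0)   # kg m^-3 per metre
```

For this section the gradient is constant and positive, with salinity dominating; in real data the same finite-difference step is applied to gridded or interpolated T and S.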

Howarth, M. J.; Palmer, M. R.; Polton, J. A.; O'Neill, C. K.

2012-04-01

25

Wavelet-based modal analysis for time-variant systems

NASA Astrophysics Data System (ADS)

The paper presents algorithms for modal identification of time-variant systems. These algorithms utilise the wavelet-based Frequency Response Function and lead to estimation of all three modal parameters, i.e. natural frequencies, damping and mode shapes. The method utilises random impact excitation and signal post-processing based on the crazy climbers algorithm, and is validated using simulated and experimental data from time-variant vibration systems. The results show that the method correctly captures the dynamics of the analysed systems, leading to correct modal parameter identification.

Dziedziech, K.; Staszewski, W. J.; Uhl, T.

2015-01-01

26

Wavelet-based deconvolution of ultrasonic signals in nondestructive evaluation

In this paper, the inverse problem of reconstructing the reflectivity function of a medium is examined within a blind deconvolution framework. The ultrasound pulse is estimated using higher-order statistics, and a Wiener filter is used to obtain the ultrasonic reflectivity function through wavelet-based models. A new approach to the parameter estimation of the inverse filtering step is proposed in the nondestructive evaluation field, based on the theory of Fourier-wavelet regularized deconvolution (ForWaRD). This new approach can be viewed as a solution to the open problem of adapting the ForWaRD framework to perform the convolution kernel estimation and deconvolution interdependently. The results indicate stable solutions of the estimated pulse and an improvement in the radio-frequency (RF) signal, taking into account its signal-to-noise ratio (SNR) and axial resolution. Simulations and experiments showed that the proposed approach can provide robust and optimal estimates of the reflectivity function.

Herrera, Roberto Henry; Rodríguez, Manuel

2012-01-01

27

Wavelet-based digital image watermarking

NASA Astrophysics Data System (ADS)

A wavelet-based watermark casting scheme and a blind watermark retrieval technique are investigated in this research. An adaptive watermark casting method is developed to first determine significant wavelet subbands and then select a couple of significant wavelet coefficients in these subbands to embed watermarks. A blind watermark retrieval technique that can detect the embedded watermark without the help from the original image is proposed. Experimental results show that the embedded watermark is robust against various signal processing and compression attacks.

Wang, Houng-Jyh Mike; Su, Po-Chyi; Kuo, C.-C. Jay

1998-12-01

28

DENSITY ESTIMATION BY TOTAL VARIATION REGULARIZATION

... L1 ... based on total variation of the estimated density, its square root, and its logarithm, and their derivatives, in the context of univariate and bivariate density estimation, and compare the results to some ...

Mizera, Ivan

29

DENSITY ESTIMATION TECHNIQUES FOR GLOBAL ILLUMINATION

A dissertation by Bruce Jonathan Walter, Ph.D., Cornell University, 1998. In this thesis we present the density estimation framework for computing view-independent global illumination ...

Keinan, Alon

30

Density-Difference Estimation

A naive approach is a two-step procedure of first estimating two densities separately and then computing their difference. However, ... the density difference without separately estimating two densities. We derive a non-parametric finite ...

Sugiyama, Masashi

31

Remote analysis of discrete self-similar objects from a wavelet-based partition function

The discrete self-similarity property of fractal objects may be remotely explored from the analysis of their impulse response. In this communication a wavelet-based partition function is introduced to estimate the similarity dimension of fractal objects from reflection data. This research work is placed in the framework of the analysis of waves reflected by multiscale objects and the remote description of ...

Y. Laksari; H. Aubert; D. L. Jaggard

2002-01-01

32

Wavelet-based digital image watermarking.

A wavelet-based watermark casting scheme and a blind watermark retrieval technique are investigated in this research. An adaptive watermark casting method is developed to first determine significant wavelet subbands and then select a couple of significant wavelet coefficients in these subbands to embed watermarks. A blind watermark retrieval technique that can detect the embedded watermark without the help from the original image is proposed. Experimental results show that the embedded watermark is robust against various signal processing and compression attacks. PMID:19384400

Wang, H J; Su, P C; Kuo, C C

1998-12-01

33

Wavelet-based ultrasound image denoising: performance analysis and comparison.

Ultrasound images are generally affected by multiplicative speckle noise, which is mainly due to the coherent nature of the scattering phenomenon. Speckle noise filtering is thus a critical pre-processing step in medical ultrasound imaging, provided that the diagnostic features of interest are not lost. A comparative study of the performance of alternative wavelet-based ultrasound image denoising methods is presented in this article. In particular, the contourlet and curvelet techniques with dual-tree complex, real, and double density wavelet transform denoising methods were applied to real ultrasound images and the results were quantitatively compared. The results show that the curvelet-based method performs better than the other methods and can effectively reduce most of the speckle noise content of a given image. PMID:22255196
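A generic wavelet-shrinkage baseline of the kind such methods are compared against can be sketched with a Haar transform and soft thresholding at the universal threshold. This stands in for none of the contourlet/curvelet methods in the article, and it ignores the multiplicative nature of speckle, which is usually handled by log-transforming the image first:

```python
import numpy as np

def haar_fwd(x, levels):
    """Multilevel 1-D orthonormal Haar transform."""
    coeffs, a = [], np.asarray(x, dtype=float)
    for _ in range(levels):
        even, odd = a[0::2], a[1::2]
        coeffs.append((even - odd) / np.sqrt(2.0))   # detail, finest first
        a = (even + odd) / np.sqrt(2.0)              # approximation
    return a, coeffs

def haar_inv(a, coeffs):
    for d in reversed(coeffs):
        even = (a + d) / np.sqrt(2.0)
        odd = (a - d) / np.sqrt(2.0)
        a = np.empty(2 * len(d))
        a[0::2], a[1::2] = even, odd
    return a

def denoise(x, levels=4):
    """Soft-threshold detail coefficients at sigma * sqrt(2 log n),
    with sigma estimated robustly from the finest-level details."""
    a, coeffs = haar_fwd(x, levels)
    sigma = np.median(np.abs(coeffs[0])) / 0.6745    # robust noise scale
    t = sigma * np.sqrt(2 * np.log(len(x)))
    coeffs = [np.sign(d) * np.maximum(np.abs(d) - t, 0) for d in coeffs]
    return haar_inv(a, coeffs)

rng = np.random.default_rng(6)
clean = np.sin(np.linspace(0, 4 * np.pi, 1024))
noisy = clean + rng.normal(0, 0.3, 1024)
rec = denoise(noisy)
```

The smooth signal concentrates in few coefficients while noise spreads evenly, so thresholding removes most of the noise energy at a small cost in signal fidelity.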

Rizi, F Yousefi; Noubari, H Ahmadi; Setarehdan, S K

2011-01-01

34

Risk Bounds for Mixture Density Estimation

In this paper we focus on the problem of estimating a bounded density using a finite combination of densities from a given class. We consider the Maximum Likelihood Procedure (MLE) and the greedy procedure described by ...

Rakhlin, Alexander

2004-01-27

35

Exploiting Structure in Wavelet-Based Bayesian Compressive Sensing

... in the JPEG2000 standard [3]. Wavelet-based transform coding [4] explicitly exploits the structure ... wavelet-based compression algorithms, and specifically JPEG2000. Transform coding, particularly JPEG and JPEG2000, is now widely used in digital media. One observes, however, that after the digital data ...

Carin, Lawrence

36

Density Estimation Trees in High Energy Physics

Density Estimation Trees can play an important role in exploratory data analysis for multidimensional, multi-modal data models of large samples. I briefly discuss the algorithm, a self-optimization technique based on kernel density estimation, and some applications in High Energy Physics.

Anderlini, Lucio

2015-01-01

37

Bayesian Density Estimation and Inference Using Mixtures

We describe and illustrate Bayesian inference in models for density estimation using mixtures of Dirichlet processes. These models provide natural settings for density estimation, and are exemplified by special cases where data are modelled as a sample from mixtures of normal distributions. Efficient simulation methods are used to approximate various prior, posterior and predictive distributions. This allows for direct inference on a variety of ...

Michael D. Escobar; Mike West

1994-01-01

38

Topics in global convergence of density estimates

NASA Technical Reports Server (NTRS)

The problem of estimating a density f on R^d from a sample X(1),...,X(n) of independent identically distributed random vectors is critically examined, and some recent results in the field are reviewed. The following statements are qualified: (1) for any sequence of density estimates f(n), an arbitrarily slow rate of convergence of E(∫|f(n)-f|) to 0 is possible; (2) in theoretical comparisons of density estimates, ∫|f(n)-f| should be used and not ∫|f(n)-f|^p, p > 1; and (3) for most reasonable nonparametric density estimates, either ∫|f(n)-f| converges (and then the convergence is in the strongest possible sense for all f), or it does not converge (even in the weakest possible sense for a single f). There is no intermediate situation.
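The L1 criterion ∫|f(n)-f| discussed above can be approximated numerically for a simple histogram estimate; the sample size, support grid and bin width below are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(10_000)          # sample from the true density f = N(0,1)

# Histogram density estimate f_n on a fixed grid
edges = np.linspace(-5, 5, 101)
counts, _ = np.histogram(x, bins=edges)
width = edges[1] - edges[0]
f_n = counts / (x.size * width)          # normalised so it integrates to ~1

# True density at the bin midpoints
mids = (edges[:-1] + edges[1:]) / 2
f = np.exp(-mids ** 2 / 2) / np.sqrt(2 * np.pi)

# Riemann approximation of the L1 error  ∫|f_n - f|
l1 = np.sum(np.abs(f_n - f)) * width
print(l1)   # small for n = 10,000
```

Unlike the L2 error, this quantity is invariant under monotone transformations of the axis, which is part of Devroye's argument for preferring it.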

Devroye, L.

1982-01-01

39

Risk Bounds for Mixture Density Estimation

…and Barron [6, 7], and prove estimation bounds for these procedures. Rates of convergence for density…

40

Direct Density Ratio Estimation with Dimensionality Reduction

…for directly estimating the ratio of two probability density functions without going through density estimation, with applications such as non-stationarity adaptation, outlier detection, conditional density estimation, and feature selection…

Sugiyama, Masashi

41

An online gain scheduling of the Wavelet-based controller coefficients is presented in this paper for the purpose of reducing noise and vibration resulting from a sensorless control of PM machines at low speeds. The Wavelet-based controller decomposes the whole frequency spectrum into several sub-bands. Each sub-band has a unique effect on the position and speed estimation error in the transient

Arash Nejadpak; Ahmed Mohamed; Osama A. Mohammed; Ahmad Arshan Khan

2011-01-01

42

ESTIMATES OF BIOMASS DENSITY FOR TROPICAL FORESTS

An accurate estimation of the biomass density in forests is a necessary step in understanding the global carbon cycle and the production of other atmospheric trace gases from biomass burning. In this paper the authors summarize the various approaches that have been developed for estimating...

43

Estimation of neural firing rate: the wavelet density estimation approach.

The computation of neural firing rates based on spike sequences has been introduced as a useful tool for extracting information about an animal's behavior. Different methods for estimating such neural firing rates have been developed by neuroscientists, and among these methods, time histogram and kernel estimators have been used more than other approaches. In this paper, the problem of estimating firing rates using wavelet density estimators is considered. The results of a simulation study, estimating underlying rates from spike sequences sampled from two different variable firing rates, show that the proposed wavelet density method provides better and more accurate estimation of firing rates, with smoother results, than the two classical approaches. Furthermore, the performance of a different family of wavelet density estimators in estimating the underlying firing rate of biological data has been compared with the results of both time histogram and kernel estimators. All in all, the results show that the proposed method can be useful in estimating the firing rate of neural spike trains. PMID:23924519
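A minimal version of the kernel firing-rate estimator mentioned above (one of the baselines, not the wavelet method the paper proposes) can be sketched as follows; the rate function, bandwidth and thinning-based spike generator are illustrative assumptions:

```python
import numpy as np

def kernel_rate(spike_times, grid, bandwidth=0.05):
    """Gaussian-kernel estimate of the firing rate (spikes/s) on a time grid."""
    d = grid[:, None] - spike_times[None, :]
    k = np.exp(-d ** 2 / (2 * bandwidth ** 2)) / (bandwidth * np.sqrt(2 * np.pi))
    return k.sum(axis=1)

# Inhomogeneous Poisson spikes with rate 20 + 15*sin(2*pi*t) over 10 s, by thinning
rng = np.random.default_rng(0)
t_max, r_max = 10.0, 35.0
cand = rng.uniform(0, t_max, rng.poisson(r_max * t_max))
rate_true = lambda t: 20 + 15 * np.sin(2 * np.pi * t)
spikes = np.sort(cand[rng.uniform(0, r_max, cand.size) < rate_true(cand)])

grid = np.linspace(0, t_max, 1000)
est = kernel_rate(spikes, grid)
print(est.mean())   # should be near the mean rate of 20 spikes/s
```

The time-histogram estimator replaces the Gaussian kernel with a boxcar; the wavelet estimators of the paper instead expand the rate in a wavelet basis and shrink the coefficients.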

Khorasani, Abed; Daliri, Mohammad Reza

2013-08-01

44

Wavelet-based analysis of circadian behavioral rhythms.

The challenging problems presented by noisy biological oscillators have led to the development of a great variety of methods for accurately estimating rhythmic parameters such as period and amplitude. This chapter focuses on wavelet-based methods, which can be quite effective for assessing how rhythms change over time, particularly if time series are at least a week in length. These methods can offer alternative views to complement more traditional methods of evaluating behavioral records. The analytic wavelet transform can estimate the instantaneous period and amplitude, as well as the phase of the rhythm at each time point, while the discrete wavelet transform can extract the circadian component of activity and measure the relative strength of that circadian component compared to those in other frequency bands. Wavelet transforms do not require the removal of noise or trend, and can, in fact, be effective at removing noise and trend from oscillatory time series. The Fourier periodogram and spectrogram are reviewed, followed by descriptions of the analytic and discrete wavelet transforms. Examples illustrate application of each method and their prior use in chronobiology is surveyed. Issues such as edge effects, frequency leakage, and implications of the uncertainty principle are also addressed. PMID:25662453

Leise, Tanya L

2015-01-01

45

NASA Astrophysics Data System (ADS)

A grounded electrical source airborne transient electromagnetic (GREATEM) system on an airship offers large prospecting depth and high spatial resolution, as well as outstanding detection efficiency and easy flight control. However, the movement and swing of the front-fixed receiving coil can cause severe baseline drift, leading to inferior resistivity image formation. Consequently, reducing the baseline drift of GREATEM data is of vital importance for inversion interpretation. To correct the baseline drift, a traditional interpolation method estimates the baseline 'envelope' by linear interpolation between the calculated start and end points of all cycles, and obtains the corrected signal by subtracting the envelope from the original signal. However, the effectiveness and efficiency of this removal are low. Considering the characteristics of the baseline drift in GREATEM data, this study proposes a wavelet-based method built on multi-resolution analysis. The optimal wavelet basis and number of decomposition levels are determined through iterative trial-and-error comparison. This application uses the sym8 wavelet with 10 decomposition levels, takes the level-10 approximation as the baseline drift, and obtains the corrected signal by removing the estimated baseline drift from the original signal. To examine the performance of the proposed method, we establish a dipping-sheet model and calculate the theoretical response. Through simulations, we compare the signal-to-noise ratio, signal distortion, and processing speed of the wavelet-based method with those of the interpolation method. Simulation results show that the wavelet-based method outperforms the interpolation method. We also use field data to evaluate the methods, comparing the depth-section images of apparent resistivity obtained from the original signal, the interpolation-corrected signal and the wavelet-corrected signal.
The results confirm that our proposed wavelet-based method is an effective, practical method to remove the baseline drift of GREATEM signals and its performance is significantly superior to the interpolation method.
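The remove-the-approximation idea behind this baseline correction can be sketched with a Haar transform standing in for the sym8 wavelet used in the study; the signal, drift and decomposition depth below are hypothetical:

```python
import numpy as np

def haar_level(x):
    """One Haar analysis step: approximation and detail coefficients."""
    s = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return s, d

def haar_inv(s, d):
    """One Haar synthesis step."""
    x = np.empty(2 * s.size)
    x[0::2] = (s + d) / np.sqrt(2)
    x[1::2] = (s - d) / np.sqrt(2)
    return x

def baseline(x, levels):
    """Estimate slow baseline drift as the level-`levels` approximation."""
    details, s = [], x
    for _ in range(levels):
        s, d = haar_level(s)
        details.append(d)
    # Reconstruct with all detail coefficients zeroed -> only the slow trend
    for d in reversed(details):
        s = haar_inv(s, np.zeros_like(d))
    return s

n = 1024
t = np.arange(n)
signal = np.sin(2 * np.pi * t / 16)        # fast component of interest
drift = 0.002 * t                          # slow baseline drift
x = signal + drift
corrected = x - baseline(x, levels=6)
print(np.mean((corrected - signal) ** 2))  # far below the drift power
```

The study's choice of wavelet and depth (sym8, 10 levels) was tuned to GREATEM data; the depth controls how slow a trend is treated as baseline.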

Wang, Yuan

2013-09-01

46

Density Estimation and Smoothing based on Regularised Optimal Transport

Martin Burger; Marzena … a nonparametric approach for estimating and smoothing densities based on a variational regularisation method … the model for special regularisation functionals yields a natural method for estimating densities…

Münster, Westfälische Wilhelms-Universität

47

Estimating and Interpreting Probability Density Functions

NSDL National Science Digital Library

This 294-page document from the Bank for International Settlements stems from the Estimating and Interpreting Probability Density Functions workshop held on June 14, 1999. The conference proceedings, which may be downloaded as a complete document or by chapter, are divided into two sections: "Estimation Techniques" and "Applications and Economic Interpretation." Both contain papers presented at the conference. Also included are a list of the program participants with their affiliations and email addresses, a foreword, and background notes.

48

Estimating density of Florida Key deer

Florida Key deer (Odocoileus virginianus clavium) were listed as endangered by the U.S. Fish and Wildlife Service (USFWS) in 1967. A variety of survey methods have been used in estimating deer density and/or changes in population trends...

Roberts, Clay Walton

2006-08-16

49

ADAPTIVE DENSITY ESTIMATION WITH MASSIVE DATA SETS

ADAPTIVE DENSITY ESTIMATION WITH MASSIVE DATA SETS. David W. Scott, Rice University; Masahiko Sagae … cross-validation criterion. For massive data sets, the promise of having sufficient data to do locally … will be provided. 1. Challenge of Massive Data. Massive data sets (MDS) represent one of the grand challenge…

Scott, David W.

50

Sampling, Density Estimation and Spatial Relationships

NSDL National Science Digital Library

This resource serves as a tool used for instructing a laboratory exercise in ecology. Students obtain hands-on experience using techniques such as mark-recapture and density estimation, and organisms such as zooplankton and fathead minnows. This exercise is suitable for general ecology and introductory biology courses.

Maggie Haag (University of Alberta); William M. Tonn

1998-01-01

51

Estimating animal population density using passive acoustics.

Reliable estimation of the size or density of wild animal populations is very important for effective wildlife management, conservation and ecology. Currently, the most widely used methods for obtaining such estimates involve either sighting animals from transect lines or some form of capture-recapture on marked or uniquely identifiable individuals. However, many species are difficult to sight, and cannot be easily marked or recaptured. Some of these species produce readily identifiable sounds, providing an opportunity to use passive acoustic data to estimate animal density. In addition, even for species for which other visually based methods are feasible, passive acoustic methods offer the potential for greater detection ranges in some environments (e.g. underwater or in dense forest), and hence potentially better precision. Automated data collection means that surveys can take place at times and in places where it would be too expensive or dangerous to send human observers. Here, we present an overview of animal density estimation using passive acoustic data, a relatively new and fast-developing field. We review the types of data and methodological approaches currently available to researchers and we provide a framework for acoustics-based density estimation, illustrated with examples from real-world case studies. We mention moving sensor platforms (e.g. towed acoustics), but then focus on methods involving sensors at fixed locations, particularly hydrophones to survey marine mammals, as acoustic-based density estimation research to date has been concentrated in this area. Primary among these are methods based on distance sampling and spatially explicit capture-recapture. The methods are also applicable to other aquatic and terrestrial sound-producing taxa. 
We conclude that, despite being in its infancy, density estimation based on passive acoustic data likely will become an important method for surveying a number of diverse taxa, such as sea mammals, fish, birds, amphibians, and insects, especially in situations where inferences are required over long periods of time. There is considerable work ahead, with several potentially fruitful research areas, including the development of (i) hardware and software for data acquisition, (ii) efficient, calibrated, automated detection and classification systems, and (iii) statistical approaches optimized for this application. Further, survey design will need to be developed, and research is needed on the acoustic behaviour of target species. Fundamental research on vocalization rates and group sizes, and the relation between these and other factors such as season or behaviour state, is critical. Evaluation of the methods under known density scenarios will be important for empirically validating the approaches presented here. PMID:23190144
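A toy version of the fixed-sensor cue-counting calculation described in this framework, with every number invented for illustration (real analyses estimate the detection probability and cue rate from data, with associated variances):

```python
import math

# All numbers below are hypothetical, for illustration only
n_detections = 240      # calls detected across all sensors during the survey
k_sensors = 4           # fixed hydrophones
w = 5.0                 # truncation radius around each sensor (km)
p_detect = 0.35         # estimated mean detection probability within w
call_rate = 60.0        # calls per animal over the survey period (assumed known)

area = k_sensors * math.pi * w ** 2             # total monitored area (km^2)
call_density = n_detections / (area * p_detect) # calls per km^2
animal_density = call_density / call_rate       # animals per km^2
print(round(animal_density, 4))                 # 0.0364
```

Distance sampling or spatially explicit capture-recapture, as reviewed above, is what supplies the detection probability that this back-of-envelope version simply assumes.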

Marques, Tiago A; Thomas, Len; Martin, Stephen W; Mellinger, David K; Ward, Jessica A; Moretti, David J; Harris, Danielle; Tyack, Peter L

2013-05-01


53

Conditional Density Estimation in Measurement Error Problems.

This paper is motivated by a wide range of background correction problems in gene array data analysis, where the raw gene expression intensities are measured with error. Estimating a conditional density function from the contaminated expression data is a key aspect of statistical inference and visualization in these studies. We propose re-weighted deconvolution kernel methods to estimate the conditional density function in an additive error model, when the error distribution is known as well as when it is unknown. Theoretical properties of the proposed estimators are investigated with respect to the mean absolute error from a "double asymptotic" view. Practical rules are developed for the selection of smoothing parameters. Simulated examples and an application to an Illumina bead microarray study are presented to illustrate the viability of the methods. PMID:25284902

Wang, Xiao-Feng; Ye, Deping

2015-01-01

54

DENSITY ESTIMATION FOR PROJECTED EXOPLANET QUANTITIES

Exoplanet searches using radial velocity (RV) and microlensing (ML) produce samples of 'projected' mass and orbital radius, respectively. We present a new method for estimating the probability density distribution (density) of the unprojected quantity from such samples. For a sample of n data values, the method involves solving n simultaneous linear equations to determine the weights of delta functions for the raw, unsmoothed density of the unprojected quantity that cause the associated cumulative distribution function (CDF) of the projected quantity to exactly reproduce the empirical CDF of the sample at the locations of the n data values. We smooth the raw density using nonparametric kernel density estimation with a normal kernel of bandwidth sigma. We calibrate the dependence of sigma on n by Monte Carlo experiments performed on samples drawn from a theoretical density, in which the integrated square error is minimized. We scale this calibration to the ranges of real RV samples using the Normal Reference Rule. The resolution and amplitude accuracy of the estimated density improve with n. For typical RV and ML samples, we expect the fractional noise at the PDF peak to be approximately 80 n^(-log 2). For illustrations, we apply the new method to 67 RV values given a similar treatment by Jorissen et al. in 2001, and to the 308 RV values listed at exoplanets.org on 2010 October 20. In addition to analyzing observational results, our methods can be used to develop measurement requirements - particularly on the minimum sample size n - for future programs, such as the microlensing survey of Earth-like exoplanets recommended by the Astro 2010 committee.
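The normal-kernel smoothing step with a Normal Reference Rule bandwidth can be sketched as follows; the 1.06 s n^(-1/5) form is the standard rule, while the sample here is synthetic rather than the RV data analyzed in the paper:

```python
import numpy as np

def normal_reference_bandwidth(x):
    """Normal Reference Rule: sigma = 1.06 * s * n^(-1/5)."""
    return 1.06 * x.std(ddof=1) * x.size ** (-0.2)

def kde(x, grid, bw):
    """Gaussian kernel density estimate evaluated on a grid."""
    d = grid[:, None] - x[None, :]
    k = np.exp(-d ** 2 / (2 * bw ** 2)) / (bw * np.sqrt(2 * np.pi))
    return k.mean(axis=1)

rng = np.random.default_rng(0)
x = rng.standard_normal(308)      # synthetic stand-in for a sample of n = 308
bw = normal_reference_bandwidth(x)
grid = np.linspace(-4, 4, 200)
density = kde(x, grid, bw)
print(bw)
```

The paper's method differs in that the kernel is applied to delta-function weights solved for from the projected CDF, with the bandwidth-vs-n dependence calibrated by Monte Carlo rather than taken directly from the rule.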

Brown, Robert A., E-mail: rbrown@stsci.edu [Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218 (United States)

2011-05-20

55

Coding sequence density estimation via topological pressure.

We give a new approach to coding sequence (CDS) density estimation in genomic analysis based on the topological pressure, which we develop from a well known concept in ergodic theory. Topological pressure measures the 'weighted information content' of a finite word, and incorporates 64 parameters which can be interpreted as a choice of weight for each nucleotide triplet. We train the parameters so that the topological pressure fits the observed coding sequence density on the human genome, and use this to give ab initio predictions of CDS density over windows of size around 66,000 bp on the genomes of Mus musculus, Rhesus macaque and Drosophila melanogaster. While the differences between these genomes are too great to expect that training on the human genome could predict, for example, the exact locations of genes, we demonstrate that our method gives reasonable estimates for the 'coarse scale' problem of predicting CDS density. Inspired again by ergodic theory, the weightings of the nucleotide triplets obtained from our training procedure are used to define a probability distribution on finite sequences, which can be used to distinguish between intron and exon sequences from the human genome of lengths between 750 and 5,000 bp. At the end of the paper, we explain the theoretical underpinning for our approach, which is the theory of Thermodynamic Formalism from the dynamical systems literature. Mathematica and MATLAB implementations of our method are available at http://sourceforge.net/projects/topologicalpres/ . PMID:24448658

Koslicki, David; Thompson, Daniel J

2015-01-01

56

Bird population density estimated from acoustic signals

Many animal species are detected primarily by sound. Although songs, calls and other sounds are often used for population assessment, as in bird point counts and hydrophone surveys of cetaceans, there are few rigorous methods for estimating population density from acoustic data. 2. The problem has several parts - distinguishing individuals, adjusting for individuals that are missed, and adjusting for the area sampled. Spatially explicit capture-recapture (SECR) is a statistical methodology that addresses jointly the second and third parts of the problem. We have extended SECR to use uncalibrated information from acoustic signals on the distance to each source. 3. We applied this extension of SECR to data from an acoustic survey of ovenbird Seiurus aurocapilla density in an eastern US deciduous forest with multiple four-microphone arrays. We modelled average power from spectrograms of ovenbird songs measured within a window of 0.7 s duration and frequencies between 4200 and 5200 Hz. 4. The resulting estimates of the density of singing males (0.19 ha^-1, SE 0.03 ha^-1) were consistent with estimates of the adult male population density from mist-netting (0.36 ha^-1, SE 0.12 ha^-1). The fitted model predicts sound attenuation of 0.11 dB m^-1 (SE 0.01 dB m^-1) in excess of losses from spherical spreading. 5. Synthesis and applications. Our method for estimating animal population density from acoustic signals fills a gap in the census methods available for visually cryptic but vocal taxa, including many species of bird and cetacean. The necessary equipment is simple and readily available; as few as two microphones may provide adequate estimates, given spatial replication. The method requires that individuals detected at the same place are acoustically distinguishable and all individuals vocalize during the recording interval, or that the per capita rate of vocalization is known.
We believe these requirements can be met, with suitable field methods, for a significant number of songbird species. © 2009 British Ecological Society.
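The fitted propagation model (spherical spreading plus a linear excess attenuation of 0.11 dB m^-1) implies a simple received-level prediction; the source level and distance in this sketch are hypothetical:

```python
import math

def received_level(source_db, r_m, excess_db_per_m=0.11):
    """Received level: spherical spreading (20 log10 r) plus linear excess loss."""
    return source_db - 20 * math.log10(r_m) - excess_db_per_m * r_m

# A hypothetical 90 dB source heard at 50 m, with the excess loss quoted above
print(round(received_level(90.0, 50.0), 1))   # 50.5
```

It is this monotone decay of received level with distance that lets SECR use uncalibrated signal power as distance information.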

Dawson, D.K.; Efford, M.G.

2009-01-01

57

Wavelet-based feature extraction technique for fruit shape classification

For export, papaya fruit should be free of defects and damages. Abnormality in papaya fruit shape represents a defective fruit and is used as one of the main criteria to determine suitability of the fruit to be exported. This paper describes a wavelet-based technique used to perform feature extraction to extract unique features which are then used in the classification

Slamet Riyadi; A. J. Ishak; M. M. Mustafa; A. Hussain

2008-01-01

58

WAVELET-BASED IMAGE PROCESSING James S. Walker

Compression algorithms known as SPIHT, ASWDR, and the new standard JPEG2000 will be described and compared. Our comparison shows that wavelet-based algorithms outperform the JPEG algorithm. The new JPEG algorithm, JPEG2000, uses a wavelet transform instead of a block DCT. Below we shall compare JPEG with JPEG2000 and two other wavelet transform…

Walker, James S.

59

Trigonometric rational wavelet bases

A. P. Petukhov, January 5, 1999. We propose a construction of periodic rational bases of wavelets. First we explain why this problem is not trivial. Unexpectedly, the periodization of the Shannon scaling function leads to polynomial wavelets (they were studied in [2])…

Petukhov, Alexander

60

An EM algorithm for wavelet-based image restoration

Abstract: This paper introduces an expectation-maximization (EM) algorithm for image restoration (deconvolution) based on a penalized likelihood formulated in the wavelet domain. Regularization is achieved by promoting a reconstruction with low complexity, expressed in the wavelet coefficients, taking advantage of the well known sparsity of wavelet representations. Previous works have investigated wavelet-based restoration but, except for certain special cases, the resulting criteria are solved...

Mário A. T. Figueiredo; Robert D. Nowak

2003-01-01

61

Wavelet-Based Multiresolution Analysis of Wivenhoe Dam Water Temperatures

Don Percival, Applied … monitoring program recently upgraded with permanent installation of vertical profilers at Lake Wivenhoe dam … in a subtropical dam as a function of time and depth … will concentrate on a 600+ day segment of temperature fluc…

Percival, Don

62

3D WAVELET-BASED COMPRESSION OF HYPERSPECTRAL IMAGERY

…bands, uncompressed hyperspectral imagery can be very large, with a single image potentially oc… 3D wavelet-based compression is used to facilitate both the storage and the transmission of hyperspectral images. Since hyperspectral imagery…

Fowler, James E.

63

Fast Rendering of Foveated Volumes in Wavelet-based Representation

A foveated volume can be viewed as a blending of multiple regions, each with a different level of wavelet coefficients retained for the foveated volume. Our algorithm consists of two phases. The first…

Chang, Ee-Chien

64

3D Wavelet-Based Filter and Method

A 3D wavelet-based filter for visualizing and locating structural features of a user-specified linear size in 2D or 3D image data. The only input parameter is a characteristic linear size of the feature of interest, and the filter output contains only those regions that are correlated with the characteristic size, thus denoising the image.

Moss, William C. (San Mateo, CA); Haase, Sebastian (San Francisco, CA); Sedat, John W. (San Francisco, CA)

2008-08-12

65

DENSITY ESTIMATION AND RANDOM VARIATE GENERATION USING MULTILAYER NETWORKS

In this paper we consider two important topics: density estimation and random variate generation. First, we develop two new methods for density estimation, a stochastic method and a related…

Magdon-Ismail, Malik

66

Local multiplicative bias correction for asymmetric kernel density estimators

We consider semiparametric asymmetric kernel density estimators when the unknown density has support on [0, ∞). We provide a unifying framework which relies on a local multiplicative bias correction, and contains asymmetric kernel versions of several semiparametric density estimators considered previously in the literature. This framework allows us to use popular parametric models in a nonparametric fashion and yields estimators which…
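One concrete asymmetric kernel for densities on [0, ∞) is the gamma kernel; the following numpy sketch (bandwidth and test density chosen arbitrarily) illustrates the boundary-friendly idea, not the specific bias-corrected estimators of the paper:

```python
import numpy as np
from math import lgamma

def gamma_pdf(x, shape, scale):
    """Gamma density evaluated at x, computed in log space for stability."""
    return np.exp((shape - 1) * np.log(x) - x / scale
                  - lgamma(shape) - shape * np.log(scale))

def gamma_kde(data, t, b=0.1):
    """Gamma-kernel density estimate at t >= 0 (kernel shape varies with t,
    so no probability mass leaks below the boundary at 0)."""
    return gamma_pdf(data, t / b + 1, b).mean()

rng = np.random.default_rng(0)
data = rng.exponential(1.0, 2000)      # true density: exp(-x) on [0, inf)
ts = np.linspace(0.05, 3, 60)
est = np.array([gamma_kde(data, t) for t in ts])
true = np.exp(-ts)
print(np.max(np.abs(est - true)))      # small uniform error on this grid
```

A symmetric Gaussian kernel would spill mass across x = 0 and underestimate the density near the boundary; the varying-shape gamma kernel avoids this, which is the point of the asymmetric-kernel literature the abstract builds on.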

M. Hagmann; O. Scaillet

2007-01-01

67

ESTIMATING MICROORGANISM DENSITIES IN AEROSOLS FROM SPRAY IRRIGATION OF WASTEWATER

This document summarizes current knowledge about estimating the density of microorganisms in the air near wastewater management facilities, with emphasis on spray irrigation sites. One technique for modeling microorganism density in air is provided and an aerosol density estimati...

68

Fast wavelet based algorithms for linear evolution equations

NASA Technical Reports Server (NTRS)

A class was devised of fast wavelet based algorithms for linear evolution equations whose coefficients are time independent. The method draws on the work of Beylkin, Coifman, and Rokhlin which they applied to general Calderon-Zygmund type integral operators. A modification of their idea is applied to linear hyperbolic and parabolic equations, with spatially varying coefficients. A significant speedup over standard methods is obtained when applied to hyperbolic equations in one space dimension and parabolic equations in multidimensions.

Engquist, Bjorn; Osher, Stanley; Zhong, Sifen

1992-01-01

69

Analysis of a wavelet-based robust hash algorithm

NASA Astrophysics Data System (ADS)

This paper is a quantitative evaluation of a wavelet-based, robust authentication hashing algorithm. Based on the results of a series of robustness and tampering sensitivity tests, we describe possible shortcomings and propose various modifications to the algorithm to improve its performance. The second part of the paper describes an attack against the scheme. It allows an attacker to modify a tampered image such that its hash value closely matches the hash value of the original.

Meixner, Albert; Uhl, Andreas

2004-06-01

70

Theory of regular M-band wavelet bases

Orthonormal M-band wavelet bases have been constructed and applied by several authors. This paper makes three main contributions. First, it generalizes the minimal length K-regular 2-band wavelets of Daubechies (1988) to the M-band case by deriving explicit formulas for K-regular M-band scaling filters. Several equivalent characterizations of K-regularity are given and their significance explained. Second, two approaches to the construction

Peter Steffen; Peter N. Heller; Ramesh A. Gopinath; C. Sidney Burrus

1993-01-01

71

Non-destructive wavelet-based despeckling in SAR images

NASA Astrophysics Data System (ADS)

The suggested wavelet-based despeckling method for multi-look SAR images does not use any thresholding or window processing, avoiding ringing artifacts, blurring, fusion of edges, etc. Instead, the logical operation of comparison is applied to wavelet coefficients, which are presented in spatial oriented trees (SOTs) of the wavelet decomposition calculated for one and the same region of the earth's surface during the SAR spacecraft flight. Fusion of SAR images is achieved by keeping the smallest wavelet coefficients from different SOTs in the high-frequency subbands (details). The wavelet coefficients related to the low-frequency subband (approximation) are processed by another special logical operation that provides good smoothing. Because the described procedure depends on the properties of the chosen wavelet basis, a library of wavelet bases is applied and the procedure is repeated for each wavelet basis. To select the best SOTs (and hence the best wavelet basis), a special cost function considers the SOTs as so-called coherent structures and shows which of the wavelet bases yields the maximum entropy. The results of computer modeling and comparison with a few well-known despeckling procedures have shown the superb quality of the proposed method in the sense of different criteria such as PSNR, SSIM, etc.

Bekhtin, Yuri S.; Bryantsev, Andrey A.; Malebo, Damiao P.; Lupachev, Alexey A.

2014-10-01

72

A Wavelet-Based Assessment of Topographic-Isostatic Reductions for GOCE Gravity Gradients

NASA Astrophysics Data System (ADS)

Gravity gradient measurements from ESA's satellite mission Gravity field and steady-state Ocean Circulation Explorer (GOCE) contain significant high- and mid-frequency signal components, which are primarily caused by the attraction of the Earth's topographic and isostatic masses. In order to mitigate the resulting numerical instability of a harmonic downward continuation, the observed gradients can be smoothed with respect to topographic-isostatic effects using a remove-compute-restore technique. For this reason, topographic-isostatic reductions are calculated by forward modeling that employs the advanced Rock-Water-Ice methodology. The basis of this approach is a three-layer decomposition of the topography with variable density values and a modified Airy-Heiskanen isostatic concept incorporating a depth model of the Mohorovičić discontinuity. Moreover, tesseroid bodies are utilized for mass discretization and arranged on an ellipsoidal reference surface. To evaluate the degree of smoothing via topographic-isostatic reduction of GOCE gravity gradients, a wavelet-based assessment is presented in this paper and compared with statistical inferences in the space domain. Using the Morlet wavelet, continuous wavelet transforms are applied to measured GOCE gravity gradients before and after reducing topographic-isostatic signals. By analyzing a representative data set in the Himalayan region, an employment of the reductions leads to significantly smoothed gradients. In addition, smoothing effects that are invisible in the space domain can be detected in wavelet scalograms, making a wavelet-based spectral analysis a powerful tool.

Grombein, Thomas; Luo, Xiaoguang; Seitz, Kurt; Heck, Bernhard

2014-07-01
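The wavelet-based smoothness assessment can be mimicked in one dimension: compute a Morlet scalogram before and after subtracting a modeled high-frequency component, and compare the energy per scale. The signal, frequencies, and scales below are synthetic stand-ins for GOCE gradient tracks, invented for this sketch.

```python
import numpy as np
import pywt

def scalogram_energy(signal, scales):
    """Mean squared Morlet CWT coefficient at each scale."""
    coefs, _ = pywt.cwt(signal, scales, "morl")
    return (np.abs(coefs) ** 2).mean(axis=1)

t = np.linspace(0, 1, 512)
observed = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
reduction = 0.5 * np.sin(2 * np.pi * 40 * t)   # modeled short-wavelength signal
reduced = observed - reduction                 # "topographically reduced" track
scales = np.arange(1, 33)
energy_before = scalogram_energy(observed, scales)
energy_after = scalogram_energy(reduced, scales)
# energy at small scales drops after the reduction; large scales are untouched
```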

73

Use of wavelet transformation in stationary signal processing has been demonstrated for denoising measured spectra and characterising radionuclides in in vivo monitoring analysis, where difficulties arise due to the very low activity levels to be estimated in biological systems. The large statistical fluctuations often make the identification of characteristic gammas from radionuclides highly uncertain, particularly when interferences from progenies are also present. A new wavelet-based noise filtering methodology has been developed for better detection of gamma peaks in noisy data. This sequential, iterative filtering method uses the wavelet multi-resolution approach for noise rejection and an inverse transform after soft 'thresholding' over the generated coefficients. Analyses of in vivo monitoring data of (235)U and (238)U were carried out using this method without disturbing the peak position and amplitude while achieving a 3-fold improvement in the signal-to-noise ratio compared with the original measured spectrum. When compared with other data-filtering techniques, the wavelet-based method shows the best results. PMID:22887117

Paul, Sabyasachi; Sarkar, P K

2013-04-01
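The filtering sequence described above (multi-resolution decomposition, soft thresholding of the coefficients, inverse transform) can be sketched as follows. The sym4 wavelet, the universal threshold, and the synthetic photopeak are assumptions of this example, not the authors' exact choices.

```python
import numpy as np
import pywt

def wavelet_denoise(spectrum, wavelet="sym4", level=4):
    """Decompose, soft-threshold the detail coefficients with a
    universal threshold estimated from the finest level, invert."""
    coeffs = pywt.wavedec(spectrum, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745      # robust noise scale
    thr = sigma * np.sqrt(2 * np.log(len(spectrum)))
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(spectrum)]

# synthetic gamma spectrum: one photopeak on a flat background,
# with Poisson counting noise
channels = np.arange(256)
expected = 200 * np.exp(-0.5 * ((channels - 128) / 8) ** 2) + 20
measured = np.random.default_rng(0).poisson(expected).astype(float)
denoised = wavelet_denoise(measured)
```

Because only detail coefficients are shrunk, the peak location survives while the channel-to-channel fluctuations are suppressed.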

74

Traffic characterization and modeling of wavelet-based VBR encoded video

Wavelet-based video codecs provide a hierarchical structure for the encoded data, which can cater to a wide variety of applications such as multimedia systems. The characteristics of such an encoder and its output, however, have not been well examined. In this paper, the authors investigate the output characteristics of a wavelet-based video codec and develop a composite model to capture the traffic behavior of its output video data. Wavelet decomposition transforms the input video in a hierarchical structure with a number of subimages at different resolutions and scales. The top-level wavelet in this structure contains most of the signal energy. They first describe the characteristics of traffic generated by each subimage and the effect of dropping various subimages at the encoder on the signal-to-noise ratio at the receiver. They then develop an N-state Markov model to describe the traffic behavior of the top wavelet. The behavior of the remaining wavelets is then obtained through estimation, based on the correlations between these subimages at the same level of resolution and those wavelets located at an immediate higher level. In this paper, a three-state Markov model is developed. The resulting traffic behavior described by various statistical properties, such as moments and correlations, etc., is then utilized to validate their model.

Yu Kuo; Jabbari, B. [George Mason Univ., Fairfax, VA (United States); Zafar, S. [Argonne National Lab., IL (United States). Mathematics and Computer Science Div.

1997-07-01
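The N-state Markov traffic model can be illustrated with a small simulation. The transition matrix and the per-state bit rates below are invented for illustration; the paper derives its three states and their statistics from measured wavelet-coded video.

```python
import numpy as np

def simulate_markov_rates(P, rates, steps, seed=0):
    """Simulate per-frame bit rates from an N-state Markov chain:
    each state emits a characteristic rate (e.g. low/medium/high
    activity of the top-level wavelet subimage)."""
    rng = np.random.default_rng(seed)
    n = len(rates)
    state = 0
    out = np.empty(steps)
    for t in range(steps):
        out[t] = rates[state]
        state = rng.choice(n, p=P[state])   # next state per transition row
    return out

# hypothetical 3-state model: rows are transition probabilities
P = np.array([[0.80, 0.15, 0.05],
              [0.20, 0.60, 0.20],
              [0.05, 0.25, 0.70]])
rates = np.array([1.0, 2.5, 5.0])   # Mbit/s per state (illustrative)
trace = simulate_markov_rates(P, rates, 10_000)
```

Moments and correlations of such a trace are what the authors compare against the measured codec output to validate the model.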

75

Computerized image analysis: estimation of breast density on mammograms

An automated image analysis tool is being developed for estimation of mammographic breast density, which may be useful for risk estimation or for monitoring breast density change in a prevention or intervention program. A mammogram is digitized using a laser scanner and the resolution is reduced to a pixel size of 0.8 mm X 0.8 mm. Breast density analysis is

Chuan Zhou; Heang-Ping Chan; Nicholas Petrick; Berkman Sahiner; Mark A. Helvie; Marilyn A. Roubidoux; Lubomir M. Hadjiiski; Mitchell M. Goodsitt

2000-01-01

76

Trabecular bone structure and bone density contribute to the strength of bone and are important in the study of osteoporosis. Wavelets are a powerful tool to characterize and quantify texture in an image. In this study the thickness of trabecular bone was analyzed in 8 cylindrical cores of the vertebral spine. Images were obtained from 3 Tesla (T) magnetic resonance imaging (MRI) and micro-computed tomography (μCT). Results from the wavelet-based analysis of trabecular bone were compared with standard two-dimensional structural parameters (analogous to bone histomorphometry) obtained using mean intercept length (MR images) and direct 3D distance transformation methods (μCT images). Additionally, the bone volume fraction was determined from MR images. We conclude that the wavelet-based analysis delivers comparable results to the established MR histomorphometric measurements. The average deviation in trabecular thickness was less than one pixel size between the wavelet and the standard approach for both MR and μCT analysis. Since the wavelet-based method is less sensitive to image noise, we see an advantage of wavelet analysis of trabecular bone for MR imaging when going to higher resolution.

Krug, R; Carballido-Gamio, J; Burghardt, A; Haase, S; Sedat, J W; Moss, W C; Majumdar, S

2005-04-11

77

Remarks on Some Nonparametric Estimates of a Density Function

This note discusses some aspects of the estimation of the density function of a univariate probability distribution. All estimates of the density function satisfying relatively mild conditions are shown to be biased. The asymptotic mean square error of a particular class of estimates is evaluated.

Murray Rosenblatt

1956-01-01
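A modern descendant of the estimators Rosenblatt analyzed is the Gaussian kernel density estimator. The sketch below (SciPy's implementation with Scott's-rule bandwidth, both choices of this example rather than the note's setup) shows an estimate that integrates to one while its smoothing flattens the true peak, the kind of unavoidable bias the note proves.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
sample = rng.normal(size=2000)            # true density: standard normal
kde = gaussian_kde(sample)                # Gaussian kernel, Scott's rule
grid = np.linspace(-5, 5, 1001)
est = kde(grid)
# trapezoidal mass: a valid density estimate integrates to ~1
mass = ((est[:-1] + est[1:]) / 2 * np.diff(grid)).sum()
```

At the mode, smoothing with any bandwidth h > 0 biases the estimate downward by roughly h² f''(x)/2, so no bandwidth choice removes the bias entirely.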

78

Adaptive wavelet-based recognition of oscillatory patterns on electroencephalograms

NASA Astrophysics Data System (ADS)

The problem of automatic recognition of specific oscillatory patterns on electroencephalograms (EEG) is addressed using the continuous wavelet-transform (CWT). A possibility of improving the quality of recognition by optimizing the choice of CWT parameters is discussed. An adaptive approach is proposed to identify sleep spindles (SS) and spike wave discharges (SWD) that assumes automatic selection of CWT-parameters reflecting the most informative features of the analyzed time-frequency structures. Advantages of the proposed technique over the standard wavelet-based approaches are considered.

Nazimov, Alexey I.; Pavlov, Alexey N.; Hramov, Alexander E.; Grubov, Vadim V.; Koronovskii, Alexey A.; Sitnikova, Evgenija Y.

2013-02-01

79

Time-Geographic Density Estimation for Moving Point Objects

This research presents a time-geographic method of density estimation for moving point objects. The approach integrates traditional kernel density estimation (KDE) with techniques of time geography to generate a continuous intensity surface that characterises the spatial distribution of a moving object over a fixed time frame. This task is accomplished by computing density estimates as a function of a geo-ellipse

Joni A. Downs

2010-01-01

80

ESTIMATING THE DENSITY OF DRY SNOW LAYERS FROM HARDNESS, AND HARDNESS FROM DENSITY

At the ISSW 2000, Geldsetzer and Jamieson presented empirical relations between the density and hardness of dry snow layers, useful when density and water equivalent cannot be measured directly (e.g. because the layer was too thin for the density sampler).

Jamieson, Bruce

81

Wavelet-Based Signal and Image Processing for Target Recognition

NASA Astrophysics Data System (ADS)

The PI visited NSWC Dahlgren, VA, for six weeks in May-June 2002 and collaborated with scientists in the G33 TEAMS facility, and with Marilyn Rudzinsky of T44 Technology and Photonic Systems Branch. During this visit the PI also presented six educational seminars to NSWC scientists on various aspects of signal processing. Several items from the grant proposal were completed, including (1) wavelet-based algorithms for interpolation of 1-d signals and 2-d images; (2) Discrete Wavelet Transform domain based algorithms for filtering of image data; (3) wavelet-based smoothing of image sequence data originally obtained for the CRITTIR (Clutter Rejection Involving Temporal Techniques in the Infra-Red) project. The PI visited the University of Stellenbosch, South Africa to collaborate with colleagues Prof. B.M. Herbst and Prof. J. du Preez on the use of wavelet image processing in conjunction with pattern recognition techniques. The University of Stellenbosch has offered the PI partial funding to support a sabbatical visit in Fall 2003, the primary purpose of which is to enable the PI to develop and enhance his expertise in Pattern Recognition. During the first year, the grant supported publication of 3 refereed papers, presentation of 9 seminars and an intensive two-day course on wavelet theory. The grant supported the work of two students who functioned as research assistants.

Sherlock, Barry G.

2002-11-01

82

Kernel density estimator methods for Monte Carlo radiation transport

In this dissertation, the Kernel Density Estimator (KDE), a nonparametric probability density estimator, is studied and used to represent global Monte Carlo (MC) tallies. KDE is also employed to remove the singularities from two important Monte Carlo tallies, namely point detector and surface crossing flux tallies. Finally, KDE is also applied to accelerate the Monte Carlo fission source iteration for

Kaushik Banerjee

2010-01-01

83

Review of methods for estimating cetacean density from passive

Goal: estimate the population size/density of cetacean species. Problems: many species occur at very low density over very large areas, and many of these areas are hard (expensive) to survey.

Thomas, Len

84

Density estimation using the trapping web design: A geometric analysis

Population densities for small mammal and arthropod populations can be estimated using capture frequencies for a web of traps. A conceptually simple geometric analysis that avoids the need to estimate a point on a density function is proposed. This analysis incorporates data from the outermost rings of traps, explaining large capture frequencies in these rings rather than truncating them from the analysis.

Link, W.A.; Barker, R.J.

1994-01-01

85

A Histogram Transform for Probability Density Function Estimation.

The estimation of multivariate probability density functions has traditionally been carried out by mixtures of parametric densities or by kernel density estimators. Here we present a new nonparametric approach to this problem which is based on the integration of several multivariate histograms, computed over affine transformations of the training data. Our proposal belongs to the class of averaged histogram density estimators. The inherent discontinuities of the histograms are smoothed, while their low computational complexity is retained. We provide a formal proof of the convergence to the real probability density function as the number of training samples grows, and we demonstrate the performance of our approach when compared with a set of standard probability density estimators. PMID:24344083

López-Rubio, Ezequiel

2013-12-11
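A one-dimensional cousin of this idea is the averaged shifted histogram: averaging histograms whose bin origins are offset (a shift being the simplest affine transformation) smooths the discontinuities while keeping histogram-level cost. The sketch below is that simpler analogue, not the paper's multivariate method.

```python
import numpy as np

def ash_density(data, grid, bin_width, shifts=16):
    """Averaged shifted histogram: average `shifts` ordinary histograms
    whose bin origins are offset by bin_width/shifts each; this smooths
    the histogram's discontinuities while keeping O(n) cost."""
    est = np.zeros_like(grid, dtype=float)
    for k in range(shifts):
        origin = data.min() - bin_width + k * bin_width / shifts
        edges = np.arange(origin, data.max() + 2 * bin_width, bin_width)
        counts, _ = np.histogram(data, bins=edges, density=True)
        idx = np.clip(np.searchsorted(edges, grid, side="right") - 1,
                      0, len(counts) - 1)
        est += counts[idx]
    return est / shifts

rng = np.random.default_rng(0)
data = rng.normal(size=1000)
grid = np.linspace(-5, 5, 1001)
est = ash_density(data, grid, bin_width=0.4)
mass = ((est[:-1] + est[1:]) / 2 * np.diff(grid)).sum()
```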

86

Wavelet-based Image Compression on the Reconfigurable Computer ACE-V

Wavelet-based image compression has been suggested previously as a means to evaluate and compare

Gädke, Hagen; Koch, Andreas

87

Nonparametric estimation of plant density by the distance method

A relation between the plant density and the probability density function of the nearest neighbor distance (squared) from a random point is established under fairly broad conditions. Based upon this relationship, a nonparametric estimator for the plant density is developed and presented in terms of order statistics. Consistency and asymptotic normality of the estimator are discussed. An interval estimator for the density is obtained. The modifications of this estimator and its variance are given when the distribution is truncated. Simulation results are presented for regular, random and aggregated populations to illustrate the nonparametric estimator and its variance. A numerical example from field data is given. Merits and deficiencies of the estimator are discussed with regard to its robustness and variance.

Patil, S.A.; Burnham, K.P.; Kovner, J.L.

1979-01-01
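For intuition about the distance method: under complete spatial randomness, the squared distance from a random point to the nearest plant is exponential with mean 1/(πλ), which yields a closed-form estimate of the density λ. The sketch below implements that parametric baseline on simulated data; the paper's contribution is the nonparametric estimator that drops the Poisson assumption.

```python
import numpy as np

def poisson_density_mle(nn_dist):
    """Under complete spatial randomness, squared nearest-neighbor
    distance from a random point is Exp(pi*lambda), so the MLE is
    lambda_hat = n / (pi * sum(r_i^2)). (Parametric baseline only;
    Patil et al.'s estimator is nonparametric.)"""
    r2 = np.asarray(nn_dist) ** 2
    return len(r2) / (np.pi * r2.sum())

# simulate a Poisson "forest" with true density 50 plants / unit area
rng = np.random.default_rng(0)
lam = 50.0
plants = rng.random((rng.poisson(lam * 100), 2)) * 10    # 10 x 10 plot
points = rng.random((200, 2)) * 8 + 1                    # interior sample points
d = np.min(np.linalg.norm(plants[None] - points[:, None], axis=2), axis=1)
lam_hat = poisson_density_mle(d)
```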

88

Relative Density-Ratio Estimation for Robust Distribution Comparison

Estimation methods that directly approximate density-ratios, without going through separate approximation of the numerator and denominator densities, have been successfully applied to machine learning tasks that involve distribution comparison.

Sugiyama, Masashi

89

Estimation of volumetric breast density for breast cancer risk prediction

Mammographic density (MD) has been shown to be a strong risk predictor for breast cancer. Compared to subjective assessment by a radiologist, computer-aided analysis of digitized mammograms provides a quantitative and more reproducible method for assessing breast density. However, the current methods of estimating breast density based on the area of bright signal in a mammogram do not reflect the

Olga Pawluczyk; Martin J. Yaffe; Norman F. Boyd; Roberta A. Jong

2000-01-01

90

Density Ratio Estimation: A New Versatile Tool for Machine Learning

Machine learning based on the ratio of probability densities has been proposed recently and gathers a great deal of attention in the machine learning and data mining communities [1-17]. This density ratio framework includes

Sugiyama, Masashi

91

Morphology driven density distribution estimation for small bodies

NASA Astrophysics Data System (ADS)

We explore methods to detect and characterize the internal mass distribution of small bodies using the gravity field and shape of the body as data, both of which are determined from the orbit determination process. The discrepancies in the spherical harmonic coefficients are compared between the measured gravity field and the gravity field generated by a homogeneous density assumption. The discrepancies are shown for six different heterogeneous density distribution models and two small bodies, namely 1999 KW4 and Castalia. Using these differences, a constraint is enforced on the internal density distribution of an asteroid, creating an archive of characteristics associated with the same-degree spherical harmonic coefficients. Following the initial characterization of the heterogeneous density distribution models, a generalized density estimation method to recover the hypothetical (i.e., nominal) density distribution of the body is considered. We propose this method as the block density estimation, which dissects the entire body into small slivers and blocks, each homogeneous within itself, to estimate their density values. Significant similarities are observed between the block model and mass concentrations. However, the block model does not suffer errors from shape mismodeling, and the number of blocks can be controlled with ease to yield a unique solution to the density distribution. The results show that the block density estimation approximates the given gravity field well, yielding higher accuracy as the resolution of the density map is increased. The estimated density distribution also computes the surface potential and acceleration within 10% for the particular cases tested in the simulations, an accuracy that is not achievable with the conventional spherical harmonic gravity field.
The block density estimation can be a useful tool for recovering the internal density distribution of small bodies for scientific purposes and for mapping out the gravity field environment in close proximity to a small body's surface, enabling accurate trajectory design and safe navigation in future missions.

Takahashi, Yu; Scheeres, D. J.

2014-05-01

92

Wavelet-based image analysis system for soil texture analysis

NASA Astrophysics Data System (ADS)

Soil texture is defined as the relative proportion of clay, silt and sand found in a given soil sample. It is an important physical property of soil that affects such phenomena as plant growth and agricultural fertility. Traditional methods used to determine soil texture are either time consuming (hydrometer), or subjective and experience-demanding (field tactile evaluation). Considering that textural patterns observed at soil surfaces are uniquely associated with soil textures, we propose an innovative approach to soil texture analysis, in which wavelet-frame-based features representing the texture content of soil images are extracted and categorized by applying a maximum likelihood criterion. The soil texture analysis system has been tested successfully with an accuracy of 91% in classifying soil samples into one of three general categories of soil textures. In comparison with the common methods, this wavelet-based image analysis approach is convenient, efficient, fast, and objective.

Sun, Yun; Long, Zhiling; Jang, Ping-Rey; Plodinec, M. John

2003-05-01
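The pipeline above (wavelet texture features, then a maximum-likelihood decision) can be sketched generically. Log subband energies and an independent-Gaussian class model are common simplifications assumed here; the paper extracts wavelet-frame features from soil images.

```python
import numpy as np
import pywt

def subband_energies(img, wavelet="db1", level=2):
    """Log energy of each subband of a 2-D wavelet decomposition,
    a common stand-in for wavelet-frame texture features."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    feats = [np.log(np.mean(coeffs[0] ** 2) + 1e-12)]
    for detail in coeffs[1:]:
        feats += [np.log(np.mean(d ** 2) + 1e-12) for d in detail]
    return np.array(feats)

def ml_classify(feat, means, variances):
    """Maximum-likelihood label under independent Gaussian features."""
    ll = [-0.5 * np.sum((feat - m) ** 2 / v + np.log(v))
          for m, v in zip(means, variances)]
    return int(np.argmax(ll))

# two synthetic "textures": white noise vs. vertically smoothed noise
rng = np.random.default_rng(0)
make = [lambda: rng.normal(size=(32, 32)),
        lambda: np.cumsum(rng.normal(size=(32, 32)), axis=0)]
train = [[subband_energies(make[c]()) for _ in range(10)] for c in (0, 1)]
means = [np.mean(t, axis=0) for t in train]
variances = [np.var(t, axis=0) + 1e-6 for t in train]
pred = [ml_classify(subband_energies(make[c]()), means, variances)
        for c in (0, 1)]
```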

93

Wavelet based free-form deformations for nonrigid registration

NASA Astrophysics Data System (ADS)

In nonrigid registration, deformations may take place on the coarse and fine scales. For the conventional B-splines based free-form deformation (FFD) registration, these coarse- and fine-scale deformations are all represented by basis functions of a single scale. Meanwhile, wavelets have been proposed as a signal representation suitable for multi-scale problems. Wavelet analysis leads to a unique decomposition of a signal into its coarse- and fine-scale components. Potentially, this could therefore be useful for image registration. In this work, we investigate whether a wavelet-based FFD model has advantages for nonrigid image registration. We use a B-splines based wavelet, as defined by Cai and Wang.1 This wavelet is expressed as a linear combination of B-spline basis functions. Derived from the original B-spline function, this wavelet is smooth, differentiable, and compactly supported. The basis functions of this wavelet are orthogonal across scales in Sobolev space. This wavelet was previously used for registration in computer vision, in 2D optical flow problems,2 but it was not compared with the conventional B-spline FFD in medical image registration problems. An advantage of choosing this B-splines based wavelet model is that the space of allowable deformation is exactly equivalent to that of the traditional B-spline. The wavelet transformation is essentially a (linear) reparameterization of the B-spline transformation model. Experiments on 10 CT lung and 18 T1-weighted MRI brain datasets show that wavelet based registration leads to smoother deformation fields than traditional B-splines based registration, while achieving better accuracy.

Sun, Wei; Niessen, Wiro J.; Klein, Stefan

2014-03-01

94

Wavelet-based multifractal analysis of laser biopsy imagery

NASA Astrophysics Data System (ADS)

In this work, we report a wavelet-based multi-fractal study of images of dysplastic and neoplastic HE-stained human cervical tissues captured in the transmission mode when illuminated by a laser light (He-Ne 632.8 nm laser). It is well known that the morphological changes occurring during the progression of diseases like cancer manifest in their optical properties, which can be probed for differentiating the various stages of cancer. Here, we use the multi-resolution properties of the wavelet transform to analyze the optical changes. For this, we have used a novel laser imagery technique which provides us with a composite image of the absorption by the different cellular organelles. As the disease progresses, due to the growth of new cells, the ratio of the organelle to cellular volume changes, manifesting in the laser imagery of such tissues. In order to develop a metric that can quantify the changes in such systems, we make use of wavelet-based fluctuation analysis. The changing self-similarity during disease progression can be well characterized by the Hurst exponent and the scaling exponent. Due to the use of the Daubechies' family of wavelet kernels, we can extract polynomial trends of different orders, which help us characterize the underlying processes effectively. In this study, we observe that the Hurst exponent decreases as the cancer progresses. This measure could be used to differentiate between different stages of cancer, which could lead to the development of a novel non-invasive method for cancer detection and characterization.

Jagtap, Jaidip; Ghosh, Sayantan; Panigrahi, Prasanta K.; Pradhan, Asima

2012-03-01
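A standard wavelet route to the Hurst exponent fits the scaling of detail-coefficient energy across levels, since for fractional Brownian motion E[d_j²] grows like 2^(j(2H+1)). The sketch below applies this to ordinary Brownian motion (H = 0.5); the db3 wavelet and the synthetic signals are assumptions of this example, not the paper's data.

```python
import numpy as np
import pywt

def wavelet_hurst(signal, wavelet="db3", level=8):
    """Estimate a Hurst-type exponent from the scaling of wavelet
    detail energy across levels: for fBm, E[d_j^2] ~ 2^(j(2H+1)),
    where j = 1 is the finest level."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    energies = np.array([np.mean(c ** 2) for c in coeffs[1:]])  # cD_L..cD_1
    j = np.arange(level, 0, -1)                                 # coarse -> fine
    slope = np.polyfit(j, np.log2(energies), 1)[0]
    return (slope - 1) / 2

rng = np.random.default_rng(0)
bm = np.cumsum(rng.normal(size=16384))   # Brownian motion, H = 0.5
wn = rng.normal(size=16384)              # white noise: flat energies, H = -0.5
h_bm = wavelet_hurst(bm)
h_wn = wavelet_hurst(wn)
```

A decreasing Hurst exponent, as the abstract reports for progressing cancer, would show up here as a flattening of the energy-versus-level line.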

95

Review of methods for estimating cetacean density from passive acoustics

Len Thomas and Tiago Marques, 1st International Workshop on Density Estimation of Marine Mammals Using Passive Acoustics, 13th September 2009 (www.creem.st-and.ac.uk/decaf/; www.voicesinthesea.org). A 3-year project: May 2007-2010.

Thomas, Len

96

Optimum nonparametric estimation of population density based on ordered distances

The asymptotic mean and error mean square are determined for the nonparametric estimator of plant density by distance sampling proposed by Patil, Burnham and Kovner (1979, Biometrics 35, 597-604). On the basis of these formulae, a bias-reduced version of this estimator is given, and its specific form is determined which gives minimum mean square error under varying assumptions about the true probability density function of the sampled data. Extension is given to line-transect sampling.

Patil, S.A.; Kovner, J.L.; Burnham, Kenneth P.

1982-01-01

97

Unbiased estimators of wildlife population densities using aural information

A thesis by Eric Newton Durland, submitted to the Graduate College of Texas A&M University in partial fulfillment of the requirement for the degree of Master of Science, May 1969. Major subject: Statistics.

Durland, Eric Newton

1969-01-01

98

Analysis of Distributed Algorithms for Density Estimation in VANETs (Poster)

Vehicle density is an important system metric used in monitoring road traffic conditions. Most

Özkasap, Öznur

99

Estimating Neutral Densities from Energy Sources Using Multiple Linear Regression

NASA Astrophysics Data System (ADS)

Space operations involving satellite tracking require estimates of the atmosphere's neutral density in order to determine the drag on the satellites. The neutral density models use F10.7 as a proxy for solar EUV input and the ap index as a proxy for the geomagnetic input. These models are typically semi-empirical and treat the atmosphere as being in hydrostatic equilibrium. Some models, like the Air Force's High Accuracy Satellite Drag Model (HASDM), try to estimate and predict a dynamically varying high-resolution density field. We present a simple system for predicting neutral density along a satellite track based on solar, Joule, and particle heating. In an earlier study, we used a portion of the CHAMP satellite data set to develop parameters for estimating the neutral density at 450 km, and then used those parameters to estimate the neutral density during a different portion of the CHAMP data set. We developed a skill score to determine how well we were able to predict the neutral density along the CHAMP trajectory. We extend this work to a set of 18 satellites whose daily average densities are known to about 2-3% uncertainty.

Chun, F. K.; McHarg, M. G.; Knipp, D. J.; Bowman, B. R.

2004-12-01
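The core computation, a multiple linear regression of (log) density on heating proxies, is an ordinary least-squares fit. Everything below is synthetic: the proxy ranges, coefficients, and noise level are invented for illustration, not values from the study.

```python
import numpy as np

# hypothetical regression of log neutral density on three heating proxies
rng = np.random.default_rng(0)
n = 500
f107 = rng.uniform(70, 200, n)        # solar EUV proxy (F10.7)
joule = rng.uniform(0, 50, n)         # Joule heating proxy (GW, invented)
particle = rng.uniform(0, 20, n)      # particle heating proxy (GW, invented)
true_beta = np.array([-30.0, 0.01, 0.02, 0.015])   # intercept + 3 slopes
X = np.column_stack([np.ones(n), f107, joule, particle])
log_rho = X @ true_beta + rng.normal(0, 0.05, n)   # synthetic "observations"
beta_hat, *_ = np.linalg.lstsq(X, log_rho, rcond=None)
```

With real data, the fitted coefficients would feed a skill score like the one the authors use to judge predictions along the satellite track.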

100

Asymptotic Equivalence of Density Estimation and Gaussian White Noise

Signal recovery in Gaussian white noise with variance tending to zero is a classical benchmark model. It is shown that nonparametric estimation from i.i.d. observations with density f is globally asymptotically equivalent to a white noise experiment with drift f^(1/2) and variance 1. (Weierstrass Institute, Berlin, September 1995.)

Nussbaum, Michael

101

MODEL-BASED CLUSTERING, DISCRIMINANT ANALYSIS, AND DENSITY ESTIMATION

Applications include searching for groupings of customers and products in massive retail datasets, document clustering, and the analysis of Web

Washington at Seattle, University of

102

Single-trial evoked potential estimation using wavelets.

In this paper we present conventional and translation-invariant (TI) wavelet-based approaches for single-trial evoked potential estimation based on intracortical recordings. We demonstrate that the wavelet-based approaches outperform several existing methods including the Wiener filter, least mean square (LMS), and recursive least squares (RLS), and that the TI wavelet-based estimates have higher SNR and lower RMSE than the conventional wavelet-based estimates. We also show that multichannel averaging significantly improves the evoked potential estimation, especially for the wavelet-based approaches. The excellent performances of the wavelet-based approaches for extracting evoked potentials are demonstrated via examples using simulated and experimental data. PMID:16987507

Wang, Zhisong; Maier, Alexander; Leopold, David A; Logothetis, Nikos K; Liang, Hualou

2007-04-01
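Translation-invariant wavelet denoising is commonly implemented by cycle spinning: denoise circularly shifted copies, unshift, and average. The sketch below is that generic construction with an assumed sym4 wavelet and universal threshold; the paper's single-trial estimator may differ in its thresholding details.

```python
import numpy as np
import pywt

def ti_denoise(x, wavelet="sym4", level=4, shifts=8):
    """Translation-invariant denoising by cycle spinning: denoise
    circularly shifted copies, unshift, and average, which suppresses
    the shift-dependent artifacts of the plain DWT."""
    def denoise(sig):
        coeffs = pywt.wavedec(sig, wavelet, level=level, mode="periodization")
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745
        thr = sigma * np.sqrt(2 * np.log(len(sig)))
        coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
        return pywt.waverec(coeffs, wavelet, mode="periodization")[: len(sig)]
    acc = np.zeros(len(x))
    for s in range(shifts):
        acc += np.roll(denoise(np.roll(x, s)), -s)
    return acc / shifts

# synthetic "evoked potential": a smooth bump in additive noise
t = np.arange(256)
clean = 5 * np.exp(-0.5 * ((t - 100) / 10.0) ** 2)
noisy = clean + np.random.default_rng(0).normal(0, 0.4, 256)
estimate = ti_denoise(noisy)
```

Averaging over shifts is what gives the TI estimate its higher SNR relative to a single fixed-grid DWT, consistent with the comparison reported above.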

103

Ultrasonic velocity for estimating density of structural ceramics

NASA Technical Reports Server (NTRS)

The feasibility of using ultrasonic velocity as a measure of bulk density of sintered alpha silicon carbide was investigated. The material studied was either in the as-sintered condition or hot isostatically pressed in the temperature range from 1850 to 2050 C. Densities varied from approximately 2.8 to 3.2 g/cu cm. Results show that the bulk, nominal density of structural grade silicon carbide articles can be estimated from ultrasonic velocity measurements to within 1 percent using 20 MHz longitudinal waves and a commercially available ultrasonic time intervalometer. The ultrasonic velocity measurement technique shows promise for screening out material with unacceptably low density levels.

Klima, S. J.; Watson, G. K.; Herbell, T. P.; Moore, T. J.

1981-01-01

104

Non-iterative wavelet-based deconvolution for sparse aperture system

NASA Astrophysics Data System (ADS)

Optical sparse aperture imaging is a promising technology to obtain high resolution with a significant reduction in size and weight, by minimizing the total light collection area. However, as the collection area decreases, the OTF is also greatly attenuated, and thus the direct imaging quality of a sparse aperture system is very poor. In this paper, we focus on post-processing methods for sparse aperture systems, and propose a non-iterative wavelet-based deconvolution algorithm. The algorithm is performed by adaptively denoising the Fourier-based deconvolution results on the wavelet basis. We set up a Golay-3 sparse-aperture imaging system, in which imaging and deconvolution experiments on natural scenes are performed. The experiments demonstrate that the proposed method greatly improves the imaging quality of the Golay-3 sparse-aperture system and produces satisfactory visual quality. Furthermore, our experimental results also indicate that the sparse aperture system has the potential to reach higher resolution with the help of better post-processing deconvolution techniques.

Xu, Wenhai; Zhao, Ming; Li, Hongshu

2013-05-01
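The two-stage scheme (Fourier-domain deconvolution followed by wavelet-domain denoising) can be sketched in one dimension. The Tikhonov-regularized inverse filter, the db4 wavelet, the universal threshold, and the synthetic blur are all assumptions of this sketch, not the paper's exact algorithm.

```python
import numpy as np
import pywt

def fourier_wavelet_deconv(blurred, psf, reg=1e-2, wavelet="db4", level=3):
    """Non-iterative scheme: Tikhonov-regularized Fourier inverse
    filter, then wavelet soft-thresholding of the amplified noise."""
    H = np.fft.fft(psf, n=len(blurred))
    X = np.fft.fft(blurred) * np.conj(H) / (np.abs(H) ** 2 + reg)
    est = np.real(np.fft.ifft(X))
    coeffs = pywt.wavedec(est, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(len(est)))
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(est)]

# simulate a circularly blurred, slightly noisy 1-D scene
rng = np.random.default_rng(0)
n = 256
scene = np.zeros(n)
scene[100:140] = 1.0
psf = np.array([0.1, 0.2, 0.4, 0.2, 0.1])   # low-pass "aperture" blur
blurred = np.real(np.fft.ifft(np.fft.fft(scene) * np.fft.fft(psf, n=n)))
blurred += rng.normal(0, 0.01, n)
restored = fourier_wavelet_deconv(blurred, psf)
```

Because nothing iterates, the cost is one FFT pair plus one wavelet transform pair, which is the practical appeal of the non-iterative approach.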

105

Wavelet-based multiresolution analysis of Wivenhoe Dam water temperatures

NASA Astrophysics Data System (ADS)

Water temperature measurements from Wivenhoe Dam offer a unique opportunity for studying fluctuations of temperatures in a subtropical dam as a function of time and depth. Cursory examination of the data indicate a complicated structure across both time and depth. We propose simplifying the task of describing these data by breaking the time series at each depth into physically meaningful components that individually capture daily, subannual, and annual (DSA) variations. Precise definitions for each component are formulated in terms of a wavelet-based multiresolution analysis. The DSA components are approximately pairwise uncorrelated within a given depth and between different depths. They also satisfy an additive property in that their sum is exactly equal to the original time series. Each component is based upon a set of coefficients that decomposes the sample variance of each time series exactly across time and that can be used to study both time-varying variances of water temperature at each depth and time-varying correlations between temperatures at different depths. Each DSA component is amenable for studying a certain aspect of the relationship between the series at different depths. The daily component in general is weakly correlated between depths, including those that are adjacent to one another. The subannual component quantifies seasonal effects and in particular isolates phenomena associated with the thermocline, thus simplifying its study across time. The annual component can be used for a trend analysis. The descriptive analysis provided by the DSA decomposition is a useful precursor to a more formal statistical analysis.

Percival, D. B.; Lennox, S. M.; Wang, Y.-G.; Darnell, R. E.

2011-05-01
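The additive property used for the DSA components can be demonstrated generically: reconstructing from one coefficient set at a time yields components that sum exactly to the original series. The db4 wavelet and three levels below are arbitrary choices for illustration, not the decomposition used for the dam data.

```python
import numpy as np
import pywt

def mra_components(x, wavelet="db4", level=3):
    """Additive multiresolution analysis: reconstruct one set of
    coefficients at a time so the components sum to the series,
    analogous to splitting temperatures into daily/subannual/annual
    parts (here just generic fine-to-coarse components)."""
    coeffs = pywt.wavedec(x, wavelet, level=level, mode="periodization")
    comps = []
    for i in range(len(coeffs)):
        kept = [c if j == i else np.zeros_like(c)
                for j, c in enumerate(coeffs)]
        comps.append(pywt.waverec(kept, wavelet, mode="periodization")[: len(x)])
    return comps

x = np.random.default_rng(0).random(256)
comps = mra_components(x)   # [smooth, detail_3, detail_2, detail_1]
```

Because the transform is linear, the additivity is exact, which is what lets each component's coefficients decompose the sample variance across time.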

106

An image adaptive, wavelet-based watermarking of digital images

NASA Astrophysics Data System (ADS)

In digital content management, multimedia content and data can easily be used in an illegal way: copied, modified and distributed again. Copyright protection, intellectual and material rights protection for authors, owners, buyers, distributors and the authenticity of content are crucial factors in solving an urgent and real problem. In such a scenario, digital watermark techniques are emerging as a valid solution. In this paper, we describe an algorithm--called WM2.0--for an invisible watermark: private, strong, wavelet-based and developed for digital image protection and authenticity. Using the discrete wavelet transform (DWT) is motivated by its good time-frequency features and its good match with human visual system directives. These two combined elements are important in building an invisible and robust watermark. WM2.0 works on a dual scheme: watermark embedding and watermark detection. The watermark is embedded into high-frequency DWT components of a specific sub-image and is calculated in correlation with the image features and statistical properties. Watermark detection applies a re-synchronization between the original and the watermarked image. The correlation between the watermarked DWT coefficients and the watermark signal is calculated according to the Neyman-Pearson statistical criterion. Experimentation on a large set of different images has shown the method to be resistant against geometric, filtering and StirMark attacks with a low rate of false alarm.

Agreste, Santa; Andaloro, Guido; Prestipino, Daniela; Puccio, Luigia

2007-12-01
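A stripped-down version of the embed/detect scheme: add a key-seeded ±1 watermark to the detail coefficients, then detect by correlating the regenerated watermark against the coefficients. The Haar basis, the fixed strength alpha, and the non-adaptive embedding are simplifications of this sketch; WM2.0 adapts the strength to image statistics and thresholds the correlation with a Neyman-Pearson criterion.

```python
import numpy as np
import pywt

def embed_watermark(img, key, alpha=2.0):
    """Embed a key-seeded +/-1 watermark additively into the
    diagonal detail coefficients of a one-level Haar DWT."""
    cA, (cH, cV, cD) = pywt.dwt2(img, "haar")
    wm = np.sign(np.random.default_rng(key).standard_normal(cD.shape))
    return pywt.idwt2((cA, (cH, cV, cD + alpha * wm)), "haar"), wm

def detect_watermark(img, key, alpha=2.0):
    """Correlation detector: compare the detail coefficients against
    the watermark regenerated from the key (~1 present, ~0 absent)."""
    _, (_, _, cD) = pywt.dwt2(img, "haar")
    wm = np.sign(np.random.default_rng(key).standard_normal(cD.shape))
    return float(np.mean(cD * wm)) / alpha

# smooth test image (additive gradient, so its diagonal details are zero)
x = np.linspace(0, 50, 64)
img = x[:, None] + x[None, :]
marked, wm = embed_watermark(img, key=7)
```

Detection with the right key gives a correlation near 1; with the wrong key or on an unmarked image it stays near 0, which is what the statistical detection threshold separates.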

107

Complex wavelet based speckle reduction using multiple ultrasound images

NASA Astrophysics Data System (ADS)

Ultrasound imaging is a dominant tool for diagnosis and evaluation in medical imaging systems. However, its major limitation is that the images it produces suffer from low quality due to the presence of speckle noise; reducing this noise is essential for better clinical diagnoses. The key purpose of a speckle reduction algorithm is to obtain a speckle-free high-quality image whilst preserving important anatomical features, such as sharp edges. As this can be better achieved using multiple ultrasound images rather than a single image, we introduce a complex wavelet-based algorithm for speckle reduction and sharp-edge preservation of two-dimensional (2D) ultrasound images using multiple ultrasound images. The proposed algorithm does not rely on straightforward averaging of multiple images; rather, in each scale, overlapped wavelet detail coefficients are weighted using dynamic threshold values and then reconstructed by averaging. Validation of the proposed algorithm is carried out using simulated and real images with synthetic speckle noise and phantom data consisting of multiple ultrasound images, with the experimental results demonstrating that speckle noise is significantly reduced whilst sharp edges are preserved without discernible distortions. The proposed approach performs better both qualitatively and quantitatively than previous existing approaches.

Uddin, Muhammad Shahin; Tahtali, Murat; Pickering, Mark R.

2014-04-01

108

Non-local crime density estimation incorporating housing information

Given a discrete sample of event locations, we wish to produce a probability density that models the relative probability of events occurring in a spatial domain. Standard density estimation techniques do not incorporate priors informed by spatial data. Such methods can result in assigning significant positive probability to locations where events cannot realistically occur. In particular, when modelling residential burglaries, standard density estimation can predict residential burglaries occurring where there are no residences. Incorporating the spatial data can inform the valid region for the density. When modelling very few events, additional priors can help to correctly fill in the gaps. Learning and enforcing correlation between spatial data and event data can yield better estimates from fewer events. We propose a non-local version of maximum penalized likelihood estimation based on the H1 Sobolev seminorm regularizer that computes non-local weights from spatial data to obtain more spatially accurate density estimates. We evaluate this method in application to a residential burglary dataset from San Fernando Valley with the non-local weights informed by housing data or a satellite image. PMID:25288817
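A drastically simplified illustration of constraining a density estimate with spatial validity data follows: here just a hard mask and renormalization on a grid, whereas the paper instead penalizes an H1 Sobolev seminorm with non-local weights learned from housing data. All names and the grid setup are illustrative assumptions.

```python
import math

def masked_kde(events, valid, h=1.0):
    # Gaussian kernel density on a grid, forced to zero on cells not
    # flagged valid (e.g. cells with no residences), then renormalized
    # so the result is a probability density over valid cells only.
    rows, cols = len(valid), len(valid[0])
    dens = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            if not valid[r][c]:
                continue  # events cannot realistically occur here
            for er, ec in events:
                d2 = (r - er) ** 2 + (c - ec) ** 2
                dens[r][c] += math.exp(-d2 / (2 * h * h))
    total = sum(map(sum, dens)) or 1.0
    return [[v / total for v in row] for row in dens]
```

Without the mask, a plain kernel estimate would assign positive probability to the invalid cells; the mask removes that mass and the renormalization redistributes it over the valid region.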

Woodworth, J. T.; Mohler, G. O.; Bertozzi, A. L.; Brantingham, P. J.

2014-01-01

109

Wavelet-based progressive image and video coding using trellis-coded space-frequency quantization

A thesis by Pierre Seigneurbieux, submitted to the Office of Graduate Studies of Texas A&M University in partial fulfillment of the requirements for the degree of Master of Science, December 2000. Major subject: Electrical Engineering.

Seigneurbieux, Pierre

2012-06-07

110

A Novel Neuro-Wavelet Based Self-Tuned Wavelet Controller for IPM Motor Drives

This paper presents a hybrid neuro-wavelet scheme for on-line tuning of a wavelet-based multiresolution PID (MRPID) controller in real-time for precise speed control of an interior permanent magnet synchronous motor (IPMSM) drive system under system uncertainties. In the wavelet-based MRPID controller, the discrete wavelet transform (DWT) is used to decompose the error between actual and command speeds into different frequency

M. Khan; M. A. Rahman

2008-01-01

111

Wavelet-Based Multi-View Video Coding with Spatial Scalability

In this paper, we propose two wavelet-based frameworks which allow fully scalable multi-view video coding. Using a 4-D wavelet transform, both schemes generate a bitstream that can be truncated to achieve a temporally, view-directionally, and/or spatially downscaled representation of the coded multi-view video sequence. Well-known wavelet-based scalable coding schemes for single-view video sequences have been adopted and extended to match

Jens-Uwe Garbas; Andre Kaup

2007-01-01

112

Wavelet-based noise-model driven denoising algorithm for differential phase contrast mammography.

Traditional mammography can be positively complemented by phase contrast and scattering x-ray imaging, because these techniques can detect subtle differences in the electron density of a material and measure the local small-angle scattering power generated by microscopic density fluctuations in the specimen, respectively. The grating-based x-ray interferometry technique can produce absorption, differential phase contrast (DPC) and scattering signals of the sample in parallel, and works well with conventional x-ray sources; thus, it constitutes a promising method for more reliable breast cancer screening and diagnosis. Recently, our team proved that this novel technology can provide images superior to conventional mammography. This new technology was used to image whole native breast samples directly after mastectomy. The images acquired show high potential, but the noise level associated with the DPC and scattering signals is significant, so it must be removed in order to improve image quality and visualization. The noise models of the three signals have been investigated and the noise variance can be computed. In this work, a wavelet-based denoising algorithm using these noise models is proposed. It was evaluated with both simulated and experimental mammography data. The outcomes demonstrate that our method offers good denoising quality while simultaneously preserving the edges and important structural features. Therefore, it can help improve diagnosis and enable further post-processing techniques such as fusion of the three signals acquired. PMID:23669913
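Noise-model-driven wavelet denoising typically reduces to shrinking coefficients at a level set by the known noise variance. A minimal sketch, assuming soft thresholding at the universal level sigma*sqrt(2 ln N); the paper's signal-specific noise models for the three interferometry signals are not reproduced here.

```python
import math

def soft_threshold(coeffs, sigma):
    # Shrink a list of wavelet detail coefficients toward zero using
    # the universal threshold t = sigma * sqrt(2 ln N), where sigma is
    # supplied by the noise model of the signal being denoised.
    t = sigma * math.sqrt(2.0 * math.log(len(coeffs)))
    return [math.copysign(max(abs(c) - t, 0.0), c) for c in coeffs]
```

Coefficients below the threshold (presumed noise) vanish, while large coefficients carrying edges and structure are only mildly attenuated, which is why edges are preserved.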

Arboleda, Carolina; Wang, Zhentian; Stampanoni, Marco

2013-05-01

113

A Morpho-Density Approach to Estimating Neural Connectivity

Neuronal signal integration and information processing in cortical neuronal networks critically depend on the organization of synaptic connectivity. Because of the challenges involved in measuring a large number of neurons, synaptic connectivity is difficult to determine experimentally. Current computational methods for estimating connectivity typically rely on the juxtaposition of experimentally available neurons and the application of mathematical techniques to compute estimates of neural connectivity. However, since the number of available neurons is very limited, these connectivity estimates may be subject to large uncertainties. We use a morpho-density field approach applied to a vast ensemble of model-generated neurons. A morpho-density field (MDF) describes the distribution of neural mass in the space around the neural soma. The estimated axonal and dendritic MDFs are derived from 100,000 model neurons that are generated by a stochastic phenomenological model of neurite outgrowth. These MDFs are then used to estimate the connectivity between pairs of neurons as a function of their inter-soma displacement. Compared with other density-field methods, our approach to estimating synaptic connectivity uses fewer restricting assumptions and produces connectivity estimates with a lower standard deviation. An important requirement is that the model-generated neurons accurately reflect the morphology, and variation in morphology, of the experimental neurons used for optimizing the model parameters. As such, the method remains subject to the uncertainties caused by the limited number of neurons in the experimental data set, by the quality of the model, and by the assumptions used in creating the MDFs and in estimating connectivity.
In summary, MDFs are a powerful tool for visualizing the spatial distribution of axonal and dendritic densities, for estimating the number of potential synapses between neurons with low standard deviation, and for obtaining a greater understanding of the relationship between neural morphology and network connectivity. PMID:24489738

Tarigan, Bernadetta; van Pelt, Jaap; van Ooyen, Arjen; de Gunst, Mathisca

2014-01-01

114

Nonparametric probability density estimation by optimization theoretic techniques

NASA Technical Reports Server (NTRS)

Two nonparametric probability density estimators are considered. The first is the kernel estimator. The problem of choosing the kernel scaling factor based solely on a random sample is addressed. An interactive mode is discussed and an algorithm proposed to choose the scaling factor automatically. The second nonparametric probability estimate uses penalty function techniques with the maximum likelihood criterion. A discrete maximum penalized likelihood estimator is proposed and is shown to be consistent in the mean square error. A numerical implementation technique for the discrete solution is discussed and examples displayed. An extensive simulation study compares the integrated mean square error of the discrete and kernel estimators. The robustness of the discrete estimator is demonstrated graphically.
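For reference, the kernel estimator discussed above has the basic form below. The fallback scaling factor is the common normal-reference rule of thumb, used here only as a stand-in for the report's interactive/automatic selection procedure.

```python
import math
import statistics

def kde(sample, x, h=None):
    # Gaussian kernel density estimate at point x. The scaling factor
    # (bandwidth) h controls smoothness; if omitted, a normal-reference
    # rule of thumb based on the sample standard deviation is used.
    n = len(sample)
    if h is None:
        h = 1.06 * statistics.stdev(sample) * n ** (-0.2)
    s2pi = math.sqrt(2.0 * math.pi)
    return sum(math.exp(-0.5 * ((x - s) / h) ** 2) for s in sample) / (n * h * s2pi)
```

The choice of h trades bias against variance: too small and the estimate is spiky, too large and genuine modes are smoothed away, which is exactly why data-driven selection of the scaling factor matters.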

Scott, D. W.

1976-01-01

115

Improving 3D Wavelet-Based Compression of Hyperspectral Images

NASA Technical Reports Server (NTRS)

Two methods of increasing the effectiveness of three-dimensional (3D) wavelet-based compression of hyperspectral images have been developed. (As used here, images signifies both images and digital data representing images.) The methods are oriented toward reducing or eliminating detrimental effects of a phenomenon, referred to as spectral ringing, that is described below. In 3D wavelet-based compression, an image is represented by a multiresolution wavelet decomposition consisting of several subbands obtained by applying wavelet transforms in the two spatial dimensions corresponding to the two spatial coordinate axes of the image plane, and by applying wavelet transforms in the spectral dimension. Spectral ringing is named after the more familiar spatial ringing (spurious spatial oscillations) that can be seen parallel to and near edges in ordinary images reconstructed from compressed data. These ringing phenomena are attributable to effects of quantization. In hyperspectral data, the individual spectral bands play the role of edges, causing spurious oscillations to occur in the spectral dimension. In the absence of such corrective measures as the present two methods, spectral ringing can manifest itself as systematic biases in some reconstructed spectral bands and can reduce the effectiveness of compression of spatially-low-pass subbands. One of the two methods is denoted mean subtraction. The basic idea of this method is to subtract mean values from spatial planes of spatially low-pass subbands prior to encoding, because (a) such spatial planes often have mean values that are far from zero and (b) zero-mean data are better suited for compression by methods that are effective for subbands of two-dimensional (2D) images. In this method, after the 3D wavelet decomposition is performed, mean values are computed for and subtracted from each spatial plane of each spatially-low-pass subband. 
The resulting data are converted to sign-magnitude form and compressed in a manner similar to that of a baseline hyperspectral-image-compression method. The mean values are encoded in the compressed bit stream and added back to the data at the appropriate decompression step. The overhead incurred by encoding the mean values (only a few bits per spectral band) is negligible with respect to the huge size of a typical hyperspectral data set. The other method is denoted modified decomposition. This method is so named because it involves a modified version of a commonly used multiresolution wavelet decomposition, known in the art as the 3D Mallat decomposition, in which (a) the first of multiple stages of a 3D wavelet transform is applied to the entire dataset and (b) subsequent stages are applied only to the horizontally-, vertically-, and spectrally-low-pass subband from the preceding stage. In the modified decomposition, in stages after the first, not only is the spatially-low-pass, spectrally-low-pass subband further decomposed, but spatially-low-pass, spectrally-high-pass subbands are also further decomposed spatially. Either method can be used alone to improve the quality of a reconstructed image (see figure). Alternatively, the two methods can be combined by first performing the modified decomposition, then subtracting the mean values from spatial planes of spatially-low-pass subbands.
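The mean-subtraction step described above can be sketched as a simple encode/decode pair, with each spatial plane of a spatially-low-pass subband represented here as a flat list of coefficients (a hypothetical simplification of the actual bit-stream handling):

```python
def mean_subtract(planes):
    # Encoder side: compute and remove the mean of each spatial plane,
    # returning zero-mean planes plus the means to embed in the stream.
    means = [sum(p) / len(p) for p in planes]
    centered = [[v - m for v in p] for p, m in zip(planes, means)]
    return centered, means

def mean_restore(centered, means):
    # Decoder side: add the transmitted means back after decompression.
    return [[v + m for v in p] for p, m in zip(centered, means)]
```

Each centered plane is zero-mean, which suits coders tuned for 2-D wavelet subbands, and the per-plane means cost only a few bits each.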

Klimesh, Matthew; Kiely, Aaron; Xie, Hua; Aranki, Nazeeh

2009-01-01

116

Estimation of volumetric breast density for breast cancer risk prediction

NASA Astrophysics Data System (ADS)

Mammographic density (MD) has been shown to be a strong risk predictor for breast cancer. Compared to subjective assessment by a radiologist, computer-aided analysis of digitized mammograms provides a quantitative and more reproducible method for assessing breast density. However, the current methods of estimating breast density based on the area of bright signal in a mammogram do not reflect the true, volumetric quantity of dense tissue in the breast. A computerized method to estimate the amount of radiographically dense tissue in the overall volume of the breast has been developed to provide an automatic, user-independent tool for breast cancer risk assessment. The procedure for volumetric density estimation consists of first correcting the image for inhomogeneity, then performing a volume density calculation. First, optical sensitometry is used to convert all images to the logarithm of relative exposure (LRE), in order to simplify the image correction operations. The field non-uniformity correction, which takes into account heel effect, inverse square law, path obliquity and intrinsic field and grid non- uniformity is obtained by imaging a spherical section PMMA phantom. The processed LRE image of the phantom is then used as a correction offset for actual mammograms. From information about the thickness and placement of the breast, as well as the parameters of a breast-like calibration step wedge placed in the mammogram, MD of the breast is calculated. Post processing and a simple calibration phantom enable user- independent, reliable and repeatable volumetric estimation of density in breast-equivalent phantoms. Initial results obtained on known density phantoms show the estimation to vary less than 5% in MD from the actual value. This can be compared to estimated mammographic density differences of 30% between the true and non-corrected values. 
Since a more simplistic breast density measurement based on the projected area has been shown to be a strong indicator of breast cancer risk (RR equals 4), it is believed that the current volumetric technique will provide an even better indicator. Such an indicator can be used in determination of the method and frequency of breast cancer screening, and might prove useful in measuring the effect of intervention measures such as drug therapy or dietary change on breast cancer risk.

Pawluczyk, Olga; Yaffe, Martin J.; Boyd, Norman F.; Jong, Roberta A.

2000-04-01

117

Estimating neuronal connectivity from axonal and dendritic density fields

Neurons innervate space by extending axonal and dendritic arborizations. When axons and dendrites come in close proximity of each other, synapses between neurons can be formed. Neurons vary greatly in their morphologies and synaptic connections with other neurons. The size and shape of the arborizations determine the way neurons innervate space. A neuron may therefore be characterized by the spatial distribution of its axonal and dendritic “mass.” A population mean “mass” density field of a particular neuron type can be obtained by averaging over the individual variations in neuron geometries. Connectivity in terms of candidate synaptic contacts between neurons can be determined directly on the basis of their arborizations but also indirectly on the basis of their density fields. To decide when a candidate synapse can be formed, we previously developed a criterion requiring that axonal and dendritic line pieces cross in 3D and have an orthogonal distance less than a threshold value. In this paper, we developed new methodology for applying this criterion to density fields. We show that estimates of the number of contacts between neuron pairs calculated from their density fields are fully consistent with the number of contacts calculated from the actual arborizations. However, the connection probability and the expected number of contacts per connection cannot be calculated directly from density fields, because density fields no longer carry the correlative structure in the spatial distribution of synaptic contacts. Alternatively, these two connectivity measures can be estimated from the expected number of contacts by using empirical mapping functions. The neurons used for the validation studies were generated by our neuron simulator NETMORPH. An example is given of the estimation of average connectivity and Euclidean pre- and postsynaptic distance distributions in a network of neurons represented by their population mean density fields.
PMID:24324430

van Pelt, Jaap; van Ooyen, Arjen

2013-01-01

118

Wavelet-based multiscale performance analysis: An approach to assess and improve hydrological models

NASA Astrophysics Data System (ADS)

The temporal dynamics of hydrological processes are spread across different time scales and, as such, the performance of hydrological models cannot be estimated reliably from global performance measures that assign a single number to the fit of a simulated time series to an observed reference series. Accordingly, it is important to analyze model performance at different time scales. Wavelets have been used extensively in the area of hydrological modeling for multiscale analysis, and have been shown to be very reliable and useful in understanding dynamics across time scales and as these evolve in time. In this paper, a wavelet-based multiscale performance measure for hydrological models is proposed and tested (i.e., the Multiscale Nash-Sutcliffe Criteria and Multiscale Normalized Root Mean Square Error). The main advantage of this method is that it provides a quantitative measure of model performance across different time scales. In the proposed approach, model and observed time series are decomposed using the Discrete Wavelet Transform (here, the à trous wavelet transform), and performance measures of the model are obtained at each time scale. The applicability of the proposed method was explored using various case studies, both real and synthetic. The synthetic case studies included various kinds of errors (e.g., timing error, under- and over-prediction of high and low flows) in outputs from a hydrologic model. The real-world case studies included simulation results of both the process-based Soil Water Assessment Tool (SWAT) model and statistical models, namely the Coupled Wavelet-Volterra (WVC), Artificial Neural Network (ANN), and Auto Regressive Moving Average (ARMA) methods. For the SWAT model, data from the Wainganga and Sind Basins (India) were used, while for the Wavelet-Volterra, ANN and ARMA models, data from the Cauvery River Basin (India) and Fraser River (Canada) were used.
The study also explored the effect of the choice of the wavelets in multiscale model evaluation. It was found that the proposed wavelet-based performance measures, namely the MNSC (Multiscale Nash-Sutcliffe Criteria) and MNRMSE (Multiscale Normalized Root Mean Square Error), are a more reliable measure than traditional performance measures such as the Nash-Sutcliffe Criteria (NSC), Root Mean Square Error (RMSE), and Normalized Root Mean Square Error (NRMSE). Further, the proposed methodology can be used to: i) compare different hydrological models (both physical and statistical models), and ii) help in model calibration.
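The idea of scoring a model scale by scale can be sketched as follows, with a plain moving average standing in for the à trous low-pass filtering and the classic Nash-Sutcliffe criterion applied at each scale. The window sizes are illustrative; the paper's MNSC aggregates actual wavelet decomposition levels rather than these ad hoc windows.

```python
def nsc(obs, sim):
    # Classic Nash-Sutcliffe efficiency: 1 is a perfect fit.
    mean_obs = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

def smooth(x, w):
    # Crude moving average standing in for one low-pass wavelet level.
    half = w // 2
    return [sum(x[max(0, i - half):i + half + 1]) /
            len(x[max(0, i - half):i + half + 1]) for i in range(len(x))]

def multiscale_nsc(obs, sim, windows=(1, 3, 5)):
    # Evaluate the fit on progressively smoothed versions of both
    # series, exposing scale-dependent model errors that a single
    # global score would hide.
    return [nsc(smooth(obs, w), smooth(sim, w)) for w in windows]
```

A model with a pure timing error, for example, scores poorly at fine scales but well at coarse scales, which a single global NSC cannot reveal.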

Rathinasamy, Maheswaran; Khosa, Rakesh; Adamowski, Jan; ch, Sudheer; Partheepan, G.; Anand, Jatin; Narsimlu, Boini

2014-12-01

119

Estimating the Density of Honeybee Colonies across Their Natural Range to Fill the Gap in Pollinator Decline Censuses

Contributed paper: Estimating the Density of Honeybee Colonies across Their Natural Range to Fill the Gap in Pollinator Decline Censuses. Rodolfo Jaffé, Vincent Dietemann, Mike H. Allsopp, Cecilia (University of Pretoria, Pretoria 0002, South Africa; Honeybee Research Section, ARC-Plant Protection Research)

Paxton, Robert

120

Density estimation in tiger populations: combining information for strong inference

A productive way forward in studies of animal populations is to efficiently make use of all the information available, either as raw data or as published sources, on critical parameters of interest. In this study, we demonstrate two approaches to the use of multiple sources of information on a parameter of fundamental interest to ecologists: animal density. The first approach produces estimates simultaneously from two different sources of data. The second approach was developed for situations in which initial data collection and analysis are followed up by subsequent data collection and prior knowledge is updated with new data using a stepwise process. Both approaches are used to estimate density of a rare and elusive predator, the tiger, by combining photographic and fecal DNA spatial capture–recapture data. The model, which combined information, provided the most precise estimate of density (8.5 ± 1.95 tigers/100 km2 [posterior mean ± SD]) relative to a model that utilized only one data source (photographic, 12.02 ± 3.02 tigers/100 km2 and fecal DNA, 6.65 ± 2.37 tigers/100 km2). Our study demonstrates that, by accounting for multiple sources of available information, estimates of animal density can be significantly improved.
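The precision gain from combining sources has a simple back-of-the-envelope analogue in inverse-variance weighting of independent estimates. This toy combination is only an intuition aid; the study itself fits a joint spatial capture-recapture model, not a weighted average.

```python
def precision_weighted(estimates):
    # estimates: list of (mean, sd) pairs from independent data sources.
    # Inverse-variance weighting yields a combined estimate whose
    # standard deviation is smaller than that of any single source.
    weights = [1.0 / (sd * sd) for _, sd in estimates]
    total = sum(weights)
    mean = sum(w * m for w, (m, _) in zip(weights, estimates)) / total
    return mean, (1.0 / total) ** 0.5
```

Plugging in the paper's single-source figures (12.02 ± 3.02 photographic, 6.65 ± 2.37 fecal DNA) gives a combined standard deviation below either input, mirroring the precision gain the joint model achieves.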

Gopalaswamy, Arjun M.; Royle, J. Andrew; Delampady, Mohan; Nichols, James D.; Karanth, K. Ullas; Macdonald, David W.

2012-01-01

121

Extracting galactic structure parameters from multivariated density estimation

NASA Technical Reports Server (NTRS)

Multivariate statistical analysis, including cluster analysis (unsupervised classification), discriminant analysis (supervised classification) and principal component analysis (a dimensionality reduction method), together with nonparametric density estimation, has been successfully used to search for meaningful associations in the 5-dimensional space of observables between observed points and sets of simulated points generated from a synthetic approach to galaxy modelling. These methodologies can be applied as new tools to obtain information about hidden structure that is otherwise unrecognizable, and to place important constraints on the space distribution of various stellar populations in the Milky Way. In this paper, we concentrate on illustrating how to use nonparametric density estimation to substitute for the true densities of both the simulated sample and the real sample in the five-dimensional space. In order to fit model-predicted densities to reality, we derive a set of n equations (where n is the total number of observed points) in m unknown parameters (where m is the number of predefined groups). A least-squares estimation allows us to determine the density law of the different groups and components in the Galaxy. The output of our software, which can be used in many research fields, also reports the systematic error between the model and the observation via Bayes' rule.

Chen, B.; Creze, M.; Robin, A.; Bienayme, O.

1992-01-01

122

Trabecular bone structure and bone density contribute to the strength of bone and are important in the study of osteoporosis. Wavelets are a powerful tool to characterize and quantify texture in an image. In this study the thickness of trabecular bone was analyzed in 8 cylindrical cores of the vertebral spine. Images were obtained from 3 Tesla (T) magnetic resonance imaging (MRI) and micro-computed tomography (μCT). Results from the wavelet-based analysis of trabecular bone were compared with standard two-dimensional (2D) structural parameters (analogous to bone histomorphometry) obtained using mean intercept length (MR images) and direct three-dimensional (3D) distance transformation methods (μCT images). Additionally, the bone volume fraction was determined from MR images. We conclude that the wavelet-based analysis delivers results comparable to the established MR histomorphometric measurements. The average deviation in trabecular thickness was less than one pixel size between the wavelet and the standard approach for both MR and μCT analysis. Since the wavelet-based method is less sensitive to image noise, we see an advantage of wavelet analysis of trabecular bone for MR imaging when going to higher resolution. PMID:17281896

Krug, R; Carballido-Gamio, J; Burghardt, A; Haase, S; Sedat, J; Moss, W; Majumdar, S

2005-01-01

123

Online Discriminative Kernel Density Estimator With Gaussian Kernels.

We propose a new method for supervised online estimation of probabilistic discriminative models for classification tasks. The method estimates the class distributions from a stream of data in the form of Gaussian mixture models (GMMs). The reconstructive updates of the distributions are based on the recently proposed online kernel density estimator (oKDE). We keep the number of components in the model low by compressing the GMMs from time to time. We propose a new cost function that measures the loss of interclass discrimination during compression, thus guiding the compression toward simpler models that still retain discriminative properties. The resulting classifier thus independently updates the GMM of each class, but these GMMs interact during their compression through the proposed cost function. We call the proposed method the online discriminative kernel density estimator (odKDE). We compare the odKDE to the oKDE, batch state-of-the-art kernel density estimators (KDEs), and batch/incremental support vector machines (SVMs) on publicly available datasets. The odKDE achieves classification performance comparable to that of the best batch KDEs and SVMs, while allowing online adaptation from large datasets, and produces models of lower complexity than the oKDE. PMID:23757555

Kristan, Matej; Leonardis, Ales

2013-04-29

124

A Wavelet-Based Noise Reduction Algorithm and Its Clinical Evaluation in Cochlear Implants

Noise reduction is often essential for cochlear implant (CI) recipients to achieve acceptable speech perception in noisy environments. Most noise reduction algorithms applied to audio signals are based on time-frequency representations of the input, such as the Fourier transform. Algorithms based on other representations may also be able to provide comparable or improved speech perception and listening quality improvements. In this paper, a noise reduction algorithm for CI sound processing is proposed based on the wavelet transform. The algorithm uses a dual-tree complex discrete wavelet transform followed by shrinkage of the wavelet coefficients based on a statistical estimation of the variance of the noise. The proposed noise reduction algorithm was evaluated by comparing its performance to those of many existing wavelet-based algorithms. The speech transmission index (STI) of the proposed algorithm is significantly better than other tested algorithms for the speech-weighted noise of different levels of signal to noise ratio. The effectiveness of the proposed system was clinically evaluated with CI recipients. A significant improvement in speech perception of 1.9 dB was found on average in speech weighted noise. PMID:24086605

Ye, Hua; Deng, Guang; Mauger, Stefan J.; Hersbach, Adam A.; Dawson, Pam W.; Heasman, John M.

2013-01-01

125

Estimating Density Gradients and Drivers from 3D Ionospheric Imaging

NASA Astrophysics Data System (ADS)

The transition regions at the edges of the ionospheric storm-enhanced density (SED) are important for a detailed understanding of the mid-latitude physical processes occurring during major magnetic storms. At the boundary, the density gradients are evidence of the drivers that link the larger processes of the SED, with its connection to the plasmasphere and prompt-penetration electric fields, to the smaller irregularities that result in scintillations. For this reason, we present our estimates of both the plasma variation with horizontal and vertical spatial scale of 10 - 100 km and the plasma motion within and along the edges of the SED. To estimate the density gradients, we use Ionospheric Data Assimilation Four-Dimensional (IDA4D), a mature data assimilation algorithm that has been developed over several years and applied to investigations of polar cap patches and space weather storms [Bust and Crowley, 2007; Bust et al., 2007]. We use the density specification produced by IDA4D with a new tool for deducing ionospheric drivers from 3D time-evolving electron density maps, called Estimating Model Parameters from Ionospheric Reverse Engineering (EMPIRE). The EMPIRE technique has been tested on simulated data from TIMEGCM-ASPEN and on IDA4D-based density estimates with ongoing validation from Arecibo ISR measurements [Datta-Barua et al., 2009a; 2009b]. We investigate the SED that formed during the geomagnetic super storm of November 20, 2003. We run IDA4D at low-resolution continent-wide, and then re-run it at high (~10 km horizontal and ~5-20 km vertical) resolution locally along the boundary of the SED, where density gradients are expected to be highest. We input the high-resolution estimates of electron density to EMPIRE to estimate the ExB drifts and field-aligned plasma velocities along the boundaries of the SED. We expect that these drivers contribute to the density structuring observed along the SED during the storm. Bust, G. S. and G. 
Crowley (2007), Tracking of polar cap patches using data assimilation, J. Geophys. Res., 112, A05307, doi:10.1029/2005JA011597. Bust, G. S., G. Crowley, T. W. Garner, T. L. Gaussiran II, R. W. Meggs, C. N. Mitchell, P. S. J. Spencer, P. Yin, and B. Zapfe (2007) ,Four Dimensional GPS Imaging of Space-Weather Storms, Space Weather, 5, S02003, doi:10.1029/2006SW000237. Datta-Barua, S., G. S. Bust, G. Crowley, and N. Curtis (2009a), Neutral wind estimation from 4-D ionospheric electron density images, J. Geophys. Res., 114, A06317, doi:10.1029/2008JA014004. Datta-Barua, S., G. Bust, and G. Crowley (2009b), "Estimating Model Parameters from Ionospheric Reverse Engineering (EMPIRE)," presented at CEDAR, Santa Fe, New Mexico, July 1.

Datta-Barua, S.; Bust, G. S.; Curtis, N.; Reynolds, A.; Crowley, G.

2009-12-01

126

Feature selection for neural networks using Parzen density estimator

NASA Technical Reports Server (NTRS)

A feature selection method for neural networks is proposed using the Parzen density estimator. A new feature set is selected using the decision boundary feature selection algorithm. The selected feature set is then used to train a neural network. Using a reduced feature set, an attempt is made to reduce the training time of the neural network and obtain a simpler neural network, which further reduces the classification time for test data.

Lee, Chulhee; Benediktsson, Jon A.; Landgrebe, David A.

1992-01-01

127

Bayesian wavelet-based image denoising using the Gauss-Hermite expansion.

The probability density functions (PDFs) of the wavelet coefficients play a key role in many wavelet-based image processing algorithms, such as denoising. The conventional PDFs usually have a limited number of parameters that are calculated from the first few moments only. Consequently, such PDFs cannot be made to fit very well with the empirical PDF of the wavelet coefficients of an image. As a result, the shrinkage function utilizing any of these density functions provides a substandard denoising performance. In order for the probabilistic model of the image wavelet coefficients to be able to incorporate an appropriate number of parameters that are dependent on the higher order moments, a PDF using a series expansion in terms of the Hermite polynomials that are orthogonal with respect to the standard Gaussian weight function is introduced. A modification in the series function is introduced so that only a finite number of terms can be used to model the image wavelet coefficients, ensuring at the same time that the resulting PDF is non-negative. It is shown that the proposed PDF matches the empirical one better than some of the standard ones, such as the generalized Gaussian or Bessel K-form PDF. A Bayesian image denoising technique is then proposed, wherein the new PDF is exploited to statistically model the subband as well as the local neighboring image wavelet coefficients. Experimental results on several test images demonstrate that the proposed denoising method, both in the subband-adaptive and locally adaptive conditions, provides a performance better than that of most of the methods that use PDFs with a limited number of parameters. PMID:18784025
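The shape of such a density can be illustrated by evaluating a standard-normal weight times a Hermite-polynomial correction series, clipped to stay non-negative. The coefficient layout and the clipping are placeholders for illustration, not the paper's fitted model or its exact non-negativity modification.

```python
import math

def hermite(k, x):
    # Probabilists' Hermite polynomials He_k via the recurrence
    # He_{n+1}(x) = x*He_n(x) - n*He_{n-1}(x), with He_0 = 1, He_1 = x.
    h0, h1 = 1.0, x
    if k == 0:
        return h0
    for n in range(1, k):
        h0, h1 = h1, x * h1 - n * h0
    return h1

def gh_pdf(x, coeffs):
    # Gauss-Hermite series density: standard normal weight times a
    # polynomial correction; coeffs[k] multiplies He_{k+1}, so higher
    # coefficients inject higher-order moment information. Clipping at
    # zero is a crude stand-in for the non-negativity fix.
    phi = math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)
    series = 1.0 + sum(c * hermite(k + 1, x) for k, c in enumerate(coeffs))
    return max(phi * series, 0.0)
```

With an empty coefficient list the density reduces to the standard Gaussian; adding coefficients deforms the tails and peak to match heavier-tailed empirical wavelet statistics.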

Rahman, S M Mahbubur; Ahmad, M Omair; Swamy, M N S

2008-10-01

128

Some Bayesian statistical techniques useful in estimating frequency and density

This paper presents some elementary applications of Bayesian statistics to problems faced by wildlife biologists. Bayesian confidence limits for frequency of occurrence are shown to be generally superior to classical confidence limits. Population density can be estimated from frequency data if the species is sparsely distributed relative to the size of the sample plot. For other situations, limits are developed based on the normal distribution and prior knowledge that the density is non-negative, which ensures that the lower confidence limit is non-negative. Conditions are described under which Bayesian confidence limits are superior to those calculated with classical methods; examples are also given on how prior knowledge of the density can be used to sharpen inferences drawn from a new sample.
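A minimal sketch of a Bayesian confidence (credible) limit for frequency of occurrence, assuming a flat Beta(1, 1) prior and a grid evaluation of the Beta posterior; the counts below are invented:

```python
import numpy as np

def bayes_freq_limits(successes, n, alpha=0.05, npts=100001):
    """Equal-tailed Bayesian credible limits for a binomial frequency.

    Evaluates the Beta(successes+1, n-successes+1) posterior on a grid,
    avoiding any special-function libraries.
    """
    p = np.linspace(0.0, 1.0, npts)
    with np.errstate(divide="ignore", invalid="ignore"):
        logpost = successes * np.log(p) + (n - successes) * np.log(1 - p)
    logpost[~np.isfinite(logpost)] = -np.inf   # endpoints of the grid
    post = np.exp(logpost - logpost.max())     # stable unnormalized posterior
    cdf = np.cumsum(post)
    cdf /= cdf[-1]
    lower = p[np.searchsorted(cdf, alpha / 2)]
    upper = p[np.searchsorted(cdf, 1 - alpha / 2)]
    return lower, upper

# E.g., a species found on 7 of 20 sample plots:
lo, hi = bayes_freq_limits(successes=7, n=20)
```

Unlike some classical intervals, the lower limit here can never fall below zero, mirroring the non-negativity argument in the abstract.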

Johnson, D.H.

1977-01-01

129

Thermospheric atomic oxygen density estimates using the EISCAT Svalbard Radar

NASA Astrophysics Data System (ADS)

The unique coupling of the ionized and neutral atmosphere through particle collisions allows an indirect study of the neutral atmosphere through measurements of ionospheric plasma parameters. We estimate the neutral density of the upper thermosphere above ~250 km with the EISCAT Svalbard Radar (ESR) using the year-long operations of the first year of the International Polar Year (IPY), from March 2007 to February 2008. The simplified momentum equation for atomic oxygen ions is used for field-aligned motion in the steady state, taking into account the opposing forces of the plasma pressure gradient and gravity only. This restricts the technique to quiet geomagnetic periods, which applies to most of the IPY during the recent very quiet solar minimum. Comparison with the MSIS model shows that at 250 km, close to the F-layer peak, the ESR estimates of the atomic oxygen density are typically a factor of 1.2 smaller than the MSIS model when data are averaged over the IPY. Differences between MSIS and ESR estimates are also found to depend on both season and magnetic disturbance, with the largest discrepancies noted during the winter months. At 350 km, very close agreement with the MSIS model is achieved without evidence of seasonal dependence. This altitude was also close to the orbital altitude of the CHAMP satellite during the IPY, allowing a comparison of in-situ measurements and radar estimates of the neutral density. Using a total of 10 in-situ passes by the CHAMP satellite above Svalbard, we show that the estimates made using this technique fall within the error bars of the measurements. We show that the method works best in the height range ~300-400 km, where our assumptions are satisfied, and we anticipate that the technique should be suitable for future thermospheric studies related to geomagnetic storm activity and long-term climate change.

Vickers, H.; Kosch, M. J.; Sutton, E. K.; Ogawa, Y.; La Hoz, C.

2012-12-01

130

Estimating black bear density using DNA data from hair snares

DNA-based mark-recapture has become a methodological cornerstone of research focused on bear species. The objective of such studies is often to estimate population size; however, doing so is frequently complicated by movement of individual bears. Movement affects the probability of detection and the assumption of population closure required in most models. To mitigate the bias caused by movement of individuals, population size and density estimates are often adjusted using ad hoc methods, including buffering the minimum polygon of the trapping array. We used a hierarchical, spatial capture-recapture model that contains explicit components for the spatial point process that governs the distribution of individuals and their exposure to (via movement), and detection by, traps. We modeled detection probability as a function of each individual's distance to the trap and an indicator variable for previous capture to account for possible behavioral responses. We applied our model to a 2006 hair-snare study of a black bear (Ursus americanus) population in northern New York, USA. Based on microsatellite marker analysis of the collected hair samples, 47 individuals were identified. We estimated mean density at 0.20 bears/km2. A positive estimate for the indicator variable suggests that bears are attracted to baited sites; therefore, including a trap-dependence covariate is important when using bait to attract individuals. Bayesian analysis of the model was implemented in WinBUGS, and we provide the model specification. The model can be applied to any spatially organized trapping array (hair snares, camera traps, mist nets, etc.) to estimate density and can also account for heterogeneity and covariate information at the trap or individual level. © The Wildlife Society.
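The distance-dependent detection component of such spatial capture-recapture models can be sketched as follows, using a half-normal detection function; the trap layout, parameter values, and population size below are invented for illustration and are not the paper's fitted values:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical layout: a 5x5 grid of hair snares inside a 10x10 km region.
traps = np.array([(x, y) for x in np.linspace(3, 7, 5) for y in np.linspace(3, 7, 5)])
centers = rng.uniform(0, 10, size=(30, 2))   # latent activity centers of 30 bears

p0, sigma = 0.3, 1.0                         # baseline detection, movement scale

def detection_prob(center, traps, p0, sigma):
    """Half-normal model: detection probability decays with distance to trap."""
    d2 = ((traps - center) ** 2).sum(axis=1)
    return p0 * np.exp(-d2 / (2 * sigma ** 2))

# Simulate one sampling occasion of binary capture histories (bears x traps).
probs = np.array([detection_prob(c, traps, p0, sigma) for c in centers])
captures = rng.random(probs.shape) < probs
```

Fitting the model then amounts to inferring the latent activity centers together with p0 and sigma from `captures`, which is what the WinBUGS specification in the paper does (with a behavioral-response covariate added).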

Gardner, B.; Royle, J.A.; Wegan, M.T.; Rainbolt, R.E.; Curtis, P.D.

2010-01-01

131

Structural Reliability Using Probability Density Estimation Methods Within NESSUS

NASA Technical Reports Server (NTRS)

A reliability analysis studies a mathematical model of a physical system, taking into account uncertainties in the design variables; common results are estimates of the response density and, by implication, of its parameters. Common density parameters include the mean value, the standard deviation, and specific percentile(s) of the response, which are measures of central tendency, variation, and probability regions, respectively. Reliability analyses are important since the results can lead to different designs by calculating the probability of observing safe responses in each of the proposed designs. All of this is done at the expense of added computational time compared to a single deterministic analysis, which yields one value of the response out of the many that make up the response density. Sampling methods, such as Monte Carlo (MC) and Latin hypercube sampling (LHS), can be used to perform reliability analyses and can compute nonlinear response density parameters even if the response depends on many random variables. Hence, both methods are very robust; however, they are computationally expensive to use in the estimation of the response density parameters. Both are two of the 13 stochastic methods contained within the Numerical Evaluation of Stochastic Structures Under Stress (NESSUS) program. NESSUS is a probabilistic finite element analysis (FEA) program that was developed through funding from NASA Glenn Research Center (GRC). It has the additional capability of being linked to other analysis programs; therefore, probabilistic fluid dynamics, fracture mechanics, and heat transfer are only a few of the possibilities with this software. The LHS method is the newest addition to the stochastic methods within NESSUS. Part of this work was to enhance NESSUS with the LHS method.
The new LHS module is complete, has been successfully integrated with NESSUS, and has been used to study four different test cases proposed by the Society of Automotive Engineers (SAE). The test cases compare different probabilistic methods within NESSUS, because it is important that a user can be confident that estimates of the stochastic parameters of a response will be within an acceptable error limit. For each response, the mean, standard deviation, and 0.99 percentile are repeatedly estimated, which allows confidence statements to be made for each estimated parameter and for each method. Thus, the ability of several stochastic methods to efficiently and accurately estimate density parameters is compared on four valid test cases. While all of the reliability methods performed quite well, the new LHS module within NESSUS was found to have a lower estimation error than MC when estimating the mean, standard deviation, and 0.99 percentile of the four stochastic responses. LHS also required fewer calculations than MC to obtain low-error answers with a high degree of confidence. NESSUS is therefore an important reliability tool offering a variety of sound probabilistic methods, and the new LHS module is a valuable enhancement of the program.
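A toy comparison of LHS and MC along the lines described can be sketched as follows. This is only a sketch, not the NESSUS implementation; the sample sizes, integrand, and repetition counts are invented:

```python
import numpy as np

def latin_hypercube(n, dims, rng):
    """Latin hypercube sample on [0, 1]^dims: one point per stratum per axis."""
    u = (rng.random((n, dims)) + np.arange(n)[:, None]) / n  # jitter within strata
    for j in range(dims):
        u[:, j] = u[rng.permutation(n), j]                   # decouple the axes
    return u

def rep_std(sampler, f, n, reps):
    """Std. dev. of the sample-mean estimator across independent repetitions."""
    return np.std([f(sampler(n)).mean() for _ in range(reps)])

rng = np.random.default_rng(0)
f = lambda x: np.sum(x ** 2, axis=1)      # test integrand with known mean dims/3

mc_sd = rep_std(lambda n: rng.random((n, 3)), f, 100, 200)
lhs_sd = rep_std(lambda n: latin_hypercube(n, 3, rng), f, 100, 200)
```

For this (additive) integrand the stratification of LHS removes most of the estimator variance, so `lhs_sd` comes out well below `mc_sd`, consistent with the efficiency finding in the abstract.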

Chamis, Christos C. (Technical Monitor); Godines, Cody Ric

2003-01-01

132

Two- or three-dimensional wavelet transforms have been considered as a basis for multiple hypothesis testing of parametric maps derived from functional magnetic resonance imaging (fMRI) experiments. Most of the previous approaches have assumed that the noise variance is equally distributed across levels of the transform. Here we show that this assumption is unrealistic; fMRI parameter maps typically have more similarity to a 1/f-type spatial covariance with greater variance in 2D wavelet coefficients representing lower spatial frequencies, or coarser spatial features, in the maps. To address this issue we resample the fMRI time series data in the wavelet domain (using a 1D discrete wavelet transform [DWT]) to produce a set of permuted parametric maps that are decomposed (using a 2D DWT) to estimate level-specific variances of the 2D wavelet coefficients under the null hypothesis. These resampling-based estimates of the “wavelet variance spectrum” are substituted in a Bayesian bivariate shrinkage operator to denoise the observed 2D wavelet coefficients, which are then inverted to reconstitute the observed, denoised map in the spatial domain. Multiple hypothesis testing controlling the false discovery rate in the observed, denoised maps then proceeds in the spatial domain, using thresholds derived from an independent set of permuted, denoised maps. We show empirically that this more realistic, resampling-based algorithm for wavelet-based denoising and multiple hypothesis testing has good Type I error control and can detect experimentally engendered signals in data acquired during auditory-linguistic processing. PMID:17651989

Şendur, Levent; Suckling, John; Whitcher, Brandon; Bullmore, Ed

2008-01-01

133

Combining Ratio Estimation for Low Density Parity Check (LDPC) Coding

NASA Technical Reports Server (NTRS)

The Low Density Parity Check (LDPC) code decoding algorithm makes use of a scaled receive signal derived from maximizing the log-likelihood ratio of the received signal. The scaling factor (often called the combining ratio) in an AWGN channel is the ratio between signal amplitude and noise variance. Accurately estimating this ratio has shown as much as 0.6 dB of decoding performance gain. This presentation briefly describes three methods for estimating the combining ratio: a Pilot-Guided estimation method, a Blind estimation method, and a simulation-based look-up table. The Pilot-Guided estimation method has shown that the maximum likelihood estimate of the signal amplitude is the mean inner product of the received sequence and the known sequence, the attached synchronization marker (ASM), and that the signal variance is the difference between the mean of the squared received sequence and the square of the signal amplitude. This method has the advantage of simplicity at the expense of latency, since several frames' worth of ASMs are required. The Blind estimation method's maximum likelihood estimator is the average of the product of the received signal with the hyperbolic tangent of the product of the combining ratio and the received signal. The root of this equation can be determined by an iterative binary search between 0 and 1 after normalizing the received sequence. This method has the benefit of requiring only one frame of data to estimate the combining ratio, which suits faster-changing channels better than the previous method; however, it is computationally expensive. The final method uses a look-up table based on prior simulation results to determine signal amplitude and noise variance. In this method the received mean signal strength is controlled to a constant soft-decision value. The magnitude of the deviation is averaged over a predetermined number of samples.
This value is referenced in a look-up table to determine the combining ratio that prior simulation associated with the average magnitude of the deviation. This method is more complicated than the Pilot-Guided method due to the gain-control circuitry, but does not have the real-time computational complexity of the Blind estimation method. Each of these methods can provide an accurate estimate of the combining ratio; the final selection among them depends on other design constraints.
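One plausible reading of the Blind estimation method can be sketched as follows: after normalizing the received sequence to unit power (so the noise variance is 1 - a^2), bisect for the amplitude a satisfying a = mean(y * tanh(a*y/(1 - a^2))). All signal parameters below are invented, and the exact normalization used in the presentation may differ:

```python
import numpy as np

def blind_amplitude(y, iters=60):
    """Blind ML-style estimate of the normalized BPSK amplitude a in (0, 1).

    With unit-power normalization, sigma^2 = 1 - a^2, and we bisect on
    g(a) = mean(y * tanh(a * y / (1 - a^2))) - a, which changes sign once.
    """
    y = y / np.sqrt(np.mean(y ** 2))       # unit-power normalization
    lo, hi = 1e-3, 1.0 - 1e-3
    for _ in range(iters):
        a = 0.5 * (lo + hi)
        g = np.mean(y * np.tanh(a * y / (1.0 - a * a))) - a
        lo, hi = (a, hi) if g > 0 else (lo, a)
    return 0.5 * (lo + hi)

rng = np.random.default_rng(7)
bits = rng.choice([-1.0, 1.0], size=20000)
a_true, sigma = 0.8, 0.6                   # a_true^2 + sigma^2 = 1
received = a_true * bits + sigma * rng.standard_normal(bits.size)
a_hat = blind_amplitude(received)
```

Given `a_hat`, the combining ratio (amplitude over noise variance) follows as `a_hat / (1 - a_hat**2)`.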

Mahmoud, Saad; Hi, Jianjun

2012-01-01

134

Multivariate density estimation via copulas. Outline: Introduction to Copulas; Parameterization of Copulas; Parameter estimation; Example: Imputation of Pima diabetes data; Discussion.

Hoff, Peter

135

Thermospheric atomic oxygen density estimates using the EISCAT Svalbard Radar

NASA Astrophysics Data System (ADS)

Coupling between the ionized and neutral atmosphere through particle collisions allows an indirect study of the neutral atmosphere through measurements of ionospheric plasma parameters. We estimate the neutral density of the upper thermosphere above ~250 km with the European Incoherent Scatter Svalbard Radar (ESR) using the year-long operations of the International Polar Year from March 2007 to February 2008. The simplified momentum equation for atomic oxygen ions is used for field-aligned motion in the steady state, taking into account the opposing forces of plasma pressure gradients and gravity only. This restricts the technique to quiet geomagnetic periods, which applies to most of the International Polar Year during the recent very quiet solar minimum. The method works best in the height range ~300-400 km where our assumptions are satisfied. Differences between Mass Spectrometer and Incoherent Scatter and ESR estimates are found to vary with altitude, season, and magnetic disturbance, with the largest discrepancies during the winter months. A total of 9 out of 10 in situ passes by the CHAMP satellite above Svalbard at 350 km altitude agree with the ESR neutral density estimates to within the error bars of the measurements during quiet geomagnetic periods.

Vickers, H.; Kosch, M. J.; Sutton, E.; Ogawa, Y.; La Hoz, C.

2013-03-01

136

Estimating low-density snowshoe hare populations using fecal pellet counts

Snowshoe hare (Lepus americanus) populations found at high densities can be estimated using fecal pellet densities on rectangular plots, but this method has yet to be evaluated for low-density populations. We further tested the use of fecal pellet plots for estimating hare populations by correlating pellet densities with estimated hare numbers on 12 intensive study areas in Idaho; pellet counts

Dennis L. Murray; James D. Roth; Ethan Ellsworth; Aaron J. Wirsing; Todd D. Steury

2002-01-01

137

Direct Density-Ratio Estimation with Dimensionality Reduction via Hetero-Distributional Subspace … and conditional probability estimation. In this paper, we propose a new density-ratio estimator which incorporates dimensionality reduction into the density-ratio estimation procedure. Through experiments, the proposed method

Sugiyama, Masashi

138

Estimation of probability densities using scale-free field theories.

The question of how best to estimate a continuous probability density from finite data is an intriguing open problem at the interface of statistics and physics. Previous work has argued that this problem can be addressed in a natural way using methods from statistical field theory. Here I describe results that allow this field-theoretic approach to be rapidly and deterministically computed in low dimensions, making it practical for use in day-to-day data analysis. Importantly, this approach does not impose a privileged length scale for smoothness of the inferred probability density, but rather learns a natural length scale from the data due to the tradeoff between goodness of fit and an Occam factor. Open source software implementing this method in one and two dimensions is provided. PMID:25122244

Kinney, Justin B

2014-07-01

139

Dust-cloud density estimation using a single wavelength lidar

NASA Astrophysics Data System (ADS)

The passage of commercial and military aircraft through invisible fresh volcanic ash clouds has caused damage to many airplanes. On December 15, 1989, all four engines of a KLM Boeing 747 were temporarily extinguished on a flight over Alaska, resulting in $80 million in repairs. Similar aircraft damage to control systems, FLIR/EO windows, wind screens, radomes, aircraft leading edges, and aircraft data systems was reported in Operation Desert Storm during combat flights through high-explosive and naturally occurring desert dusts. The Defense Nuclear Agency is currently developing a compact and rugged lidar under the Aircraft Sensors Program to detect and estimate the mass density of nuclear-explosion-produced dust clouds, high-explosive-produced dust clouds, and fresh volcanic dust clouds at horizontal distances of up to 40 km from an aircraft. Given this mass density information, the pilot has the option of avoiding or flying through the upcoming cloud.

Youmans, Douglas G.; Garner, Richard C.; Petersen, Kent R.

1994-09-01

140

A Robust Adaptive Wavelet-based Method for Classification of Meningioma Histology Images

… of samples is an important problem in the domain of histological image classification. This issue is inherent to the field due to the high complexity of histology image data. A technique that provides good

Rajpoot, Nasir

141

A Bayesian wavelet-based multidimensional deconvolution with sub-band … (Yingsong Zhang, Nick Kingsbury)

…, with Shannon wavelets. Based on the work of Daubechies et al. [1] and Vonesch & Unser [6], we formulate … wavelet transform [4] (DT CWT) instead of the Shannon wavelet [6]. The DT CWT is different from but equivalent to the Shannon wavelet in that it is: 1) a tight frame

Kingsbury, Nick

142

Nonenhanced computerized tomography (CT) exams were used to detect acute stroke through identification of hypodense areas. Improving infarction perception by data denoising and local contrast enhancement in the multi-scale domain was proposed. The wavelet-based image processing method enhanced the subtlest signs of hypodensity, which are often invisible in standard CT scan review. Detection efficiency of perceptual ischemic changes was thus improved

A. Przelaskowski; K. Sklinda; P. Bargieł; J. Walecki; M. Biesiadko-Matuszewska; M. Kazubek

2007-01-01

143

Reduced Complexity Wavelet-Based Predictive Coding of Hyperspectral Images for FPGA Implementation

We present an algorithm for lossy compression of hyperspectral images for implementation on field programmable gate arrays. … is compressed using the Set Partitioning in Hierarchical Trees algorithm. To reduce the complexity

Hauck, Scott

144

Reduced Complexity Wavelet-Based Predictive Coding of Hyperspectral Images for FPGA Implementation

… an algorithm for lossy compression of hyperspectral images for implementation on field programmable gate arrays … collects and stores large amounts of hyperspectral data. For example, one Moderate Resolution Imaging

Hauck, Scott

145

Wavelet-Based Nonlinear Multiscale Decomposition Model for Electricity Load Forecasting

… autoregressive approach for the prediction of one-hour-ahead load based on historical electricity load data … We assess results produced by this multiscale autoregressive (MAR) method, in both linear and non-linear

Murtagh, Fionn

146

WAVELET-BASED FOVEATED IMAGE QUALITY MEASUREMENT FOR REGION OF INTEREST IMAGE CODING

… "fixated" by human eyes, the foveation property of the HVS supplies a natural approach for guiding … and enhancement of ROI coded images and videos. We show its effectiveness by applying it to an embedded foveated

Wang, Zhou

147

Change-in-ratio density estimator for feral pigs is less biased than closed mark-recapture estimates

Closed-population capture-mark-recapture (CMR) methods can produce biased density estimates for species with low or heterogeneous detection probabilities. In an attempt to address such biases, we developed a density-estimation method based on the change in ratio (CIR) of survival between two populations where survival, calculated using an open-population CMR model, is known to differ. We used our method to estimate density for a feral pig (Sus scrofa) population on Fort Benning, Georgia, USA. To assess its validity, we compared it to an estimate of the minimum density of pigs known to be alive and to two estimates based on closed-population CMR models. Comparison of the density estimates revealed that the CIR estimator produced a density estimate with low precision that was reasonable with respect to minimum known density. By contrast, density point estimates using the closed-population CMR models were less than the minimum known density, consistent with biases created by low and heterogeneous capture probabilities for species like feral pigs that occur at low density or are difficult to capture. Our CIR density estimator may be useful for tracking broad-scale, long-term changes in species, such as large cats, for which closed CMR models are unlikely to work. © CSIRO 2008.

Hanson, L.B.; Grand, J.B.; Mitchell, M.S.; Jolley, D.B.; Sparklin, B.D.; Ditchkoff, S.S.

2008-01-01

148

The probabilistic estimate of the solvent content (Matthews probability) was first introduced in 2003. Given that the Matthews probability is based on prior information, revisiting the empirical foundation of this widely used solvent-content estimate is appropriate. The parameter set for the original Matthews probability distribution function employed in MATTPROB has been updated after ten years of rapid PDB growth. A new nonparametric kernel density estimator has been implemented to calculate the Matthews probabilities directly from empirical solvent-content data, thus avoiding the need to revise the multiple parameters of the original binned empirical fit function. The influence and dependency of other possible parameters determining the solvent content of protein crystals have been examined. Detailed analysis showed that resolution is the primary and dominating model parameter correlated with solvent content. Modifications of protein specific density for low molecular weight have no practical effect, and there is no correlation with oligomerization state. A weak, and in practice irrelevant, dependency on symmetry and molecular weight is present, but cannot be satisfactorily explained by simple linear or categorical models. The Bayesian argument that the observed resolution represents only a lower limit for the true diffraction potential of the crystal is maintained. The new kernel density estimator is implemented as the primary option in the MATTPROB web application at http://www.ruppweb.org/mattprob/. PMID:24914969

Weichenberger, Christian X; Rupp, Bernhard

2014-06-01

149

Nonparametric estimation of multivariate scale mixtures of uniform densities

Suppose that U = (U1, …, Ud) has a Uniform([0, 1]^d) distribution, that Y = (Y1, …, Yd) has the distribution G on R_+^d, and let X = (X1, …, Xd) = (U1Y1, …, UdYd). The resulting class of distributions of X (as G varies over all distributions on R_+^d) is called the Scale Mixture of Uniforms class of distributions, and the corresponding class of densities on R_+^d is denoted by F_SMU(d). We study maximum likelihood estimation in the family F_SMU(d). We prove existence of the MLE, establish Fenchel characterizations, and prove strong consistency of the almost surely unique maximum likelihood estimator (MLE) in F_SMU(d). We also provide an asymptotic minimax lower bound for estimating the functional f ↦ f(x) under reasonable differentiability assumptions on f ∈ F_SMU(d) in a neighborhood of x. We conclude the paper with discussion, conjectures and open problems pertaining to global and local rates of convergence of the MLE. PMID:22485055
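The construction X = U·Y can be checked by simulation. Here G is an invented two-point mixing distribution, for which the implied density f(x) = E[1{Y > x}/Y] is piecewise constant:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

# Mixing distribution G: Y takes the values 1 and 2 with probability 1/2 each.
Y = rng.choice([1.0, 2.0], size=n)
U = rng.random(n)
X = U * Y                                  # one-dimensional scale mixture of uniforms

# The implied density is f(x) = E[ 1{Y > x} / Y ]:
# here f = 3/4 on (0, 1) and 1/4 on (1, 2), so P(X < 1) = 3/4.
frac_below_1 = np.mean(X < 1.0)
```

The empirical fraction below 1 matches the 3/4 predicted by the mixture density, which is the kind of structural fact the MLE in the paper exploits.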

Pavlides, Marios G.; Wellner, Jon A.

2012-01-01

150

Estimating tropical-forest density profiles from multibaseline interferometric SAR

NASA Technical Reports Server (NTRS)

Vertical profiles of forest density are potentially robust indicators of forest biomass, fire susceptibility and ecosystem function. Tropical forests, which are among the most dense and complicated targets for remote sensing, contain about 45% of the world's biomass. Remote sensing of tropical forest structure is therefore an important component of global biomass and carbon monitoring. This paper shows preliminary results of a multibaseline interferometric SAR (InSAR) experiment over primary, secondary, and selectively logged forests at La Selva Biological Station in Costa Rica. The profile shown results from inverse Fourier transforming 8 of the 18 baselines acquired. A profile is shown compared to lidar and field measurements. Results are highly preliminary and for qualitative assessment only. Parameter estimation will eventually replace Fourier inversion as the means of producing profiles.
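In a highly idealized view of the Fourier inversion mentioned above, each baseline samples one spatial-frequency component of the vertical structure, and a complete set of baselines allows recovery by inverse Fourier transform. The sketch below uses an invented discrete profile and ignores all of the radar physics (coherence normalization, noise, irregular baseline spacing):

```python
import numpy as np

# Hypothetical vertical density profile sampled at nz heights (values invented).
nz = 16
z = np.arange(nz)
profile = np.exp(-0.5 * ((z - 10) / 3.0) ** 2)   # canopy-like bump near the top

# Each "baseline" measures one spatial-frequency component of the profile
# (a simplified stand-in for the interferometric cross-correlation).
k = np.arange(nz)
fourier_samples = np.array(
    [(profile * np.exp(-2j * np.pi * f * z / nz)).sum() for f in k]
)

# With a full set of baselines, an inverse DFT recovers the profile exactly.
recovered = np.real(
    np.array([(fourier_samples * np.exp(2j * np.pi * k * zz / nz)).mean() for zz in z])
)
```

With only 8 of 18 baselines, as in the experiment, the inversion is incomplete and the reconstruction is blurred, which is one motivation for replacing Fourier inversion with parameter estimation.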

Treuhaft, Robert; Chapman, Bruce; dos Santos, Joao Roberto; Dutra, Luciano; Goncalves, Fabio; da Costa Freitas, Corina; Mura, Jose Claudio; de Alencastro Graca, Paulo Mauricio

2006-01-01

151

NASA Astrophysics Data System (ADS)

The current work develops a wavelet-based adaptive variable-fidelity approach that integrates Wavelet-based Direct Numerical Simulation (WDNS), Coherent Vortex Simulations (CVS), and Stochastic Coherent Adaptive Large Eddy Simulations (SCALES). The proposed methodology employs the notion of spatially and temporally varying wavelet thresholding combined with hierarchical wavelet-based turbulence modeling. The transition between the WDNS, CVS, and SCALES regimes is achieved through two-way physics-based feedback between the modeled SGS dissipation (or another dynamically important physical quantity) and the spatial resolution. The feedback is based on spatio-temporal variation of the wavelet threshold, where the thresholding level is adjusted on the fly depending on the deviation of the local significant SGS dissipation from the user-prescribed level. This strategy overcomes a major limitation of all previously existing wavelet-based multi-resolution schemes: the global thresholding criterion, which does not fully utilize the spatial/temporal intermittency of the turbulent flow. Hence, the aforementioned concept of physics-based spatially variable thresholding in the context of wavelet-based numerical techniques for solving PDEs is established. The procedure consists of tracking the wavelet thresholding factor within a Lagrangian frame by exploiting a Lagrangian path-line diffusive averaging approach based on either linear averaging along characteristics or direct solution of the evolution equation. This innovative technique represents a framework of continuously variable-fidelity wavelet-based space/time/model-form adaptive multiscale methodology. This methodology has been tested and has provided very promising results on a benchmark with a time-varying user-prescribed level of SGS dissipation. In addition, a long-standing effort to develop a novel parallel adaptive wavelet collocation method for the numerical solution of PDEs has been completed during the course of the current work.
The scalability and speedup studies of this powerful parallel PDE solver are performed on various architectures. Furthermore, Reynolds scaling of active spatial modes of both CVS and SCALES of linearly forced homogeneous turbulence at high Reynolds numbers is investigated for the first time. This computational complexity study, by demonstrating very promising slope for Reynolds scaling of SCALES even at constant level of fidelity for SGS dissipation, proves the argument that SCALES as a dynamically adaptive turbulence modeling technique, can offer a plethora of flexibilities in hierarchical multiscale space/time adaptive variable fidelity simulations of high Reynolds number turbulent flows.

Nejadmalayeri, Alireza

152

Change-in-ratio density estimator for feral pigs is less biased than closed mark–recapture estimates

Abstract. Closed-population capture-mark-recapture (CMR) methods can produce biased density estimates for species with low or heterogeneous detection probabilities. In an attempt to address such biases, we developed a density-estimation method based on the change in ratio (CIR) of survival between two populations where survival, calculated using an open-population CMR model, is known to differ. We used our method to estimate density for

Laura B. Hanson; James B. Grand; Michael S. Mitchell; D. Buck Jolley; Bill D. Sparklin; Stephen S. Ditchkoff

2008-01-01

153

Change-in-ratio density estimator for feral pigs is less biased than closed mark-recapture estimates

Closed-population capture-mark-recapture (CMR) methods can produce biased density estimates for species with low or heterogeneous detection probabilities. In an attempt to address such biases, we developed a density-estimation method based on the change in ratio (CIR) of survival between two populations where survival, calculated using an open-population CMR model, is known to differ. We used our method to estimate density

Laura B. Hanson; James B. Grand; Michael S. Mitchell; D. Buck Jolley; Bill D. Sparklin; Stephen S. Ditchkoff

154

Estimating Foreign-Object-Debris Density from Photogrammetry Data

NASA Technical Reports Server (NTRS)

Within the first few seconds after the launch of STS-124, debris traveling vertically near the vehicle was captured on two 16-mm film cameras surrounding the launch pad. One particular piece of debris caught the attention of engineers investigating the release of the flame trench fire bricks. The question to be answered was whether the debris was a fire brick representing the first bricks ejected from the flame trench wall, or whether the object was one of the pieces of debris normally ejected from the vehicle during launch. If it was typical launch debris, such as SRB throat plug foam, why was it traveling vertically and parallel to the vehicle during launch instead of following its normal trajectory, flying horizontally toward the north perimeter fence? By combining the Runge-Kutta integration method for velocity with the Verlet integration method for position, a method was obtained that suppresses trajectory computational instabilities due to noisy position data. This combination of integration methods provides a means to extract the best estimate of drag force and drag coefficient under the non-ideal conditions of limited position data. This integration strategy leads immediately to the best possible estimate of object density, within the constraints of unknown particle shape. These types of calculations do not exist in readily available off-the-shelf simulation software, especially where photogrammetry data is needed as an input.
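The differencing step underlying such an analysis can be sketched as follows: a Verlet-style second difference recovers acceleration directly from position samples, and subtracting gravity isolates the drag contribution. All numbers below are synthetic, and this is not the authors' full noise-suppressing procedure:

```python
import numpy as np

g = 9.81          # m/s^2
dt = 1.0 / 60.0   # hypothetical frame interval of the tracking film

# Synthetic upward trajectory with constant net deceleration (gravity plus drag).
a_net = -12.0     # m/s^2 (invented value)
v0 = 50.0         # m/s   (invented value)
t = np.arange(0, 1.0, dt)
y = v0 * t + 0.5 * a_net * t ** 2

# Verlet-style second difference recovers acceleration from positions alone.
a_est = (y[2:] - 2 * y[1:-1] + y[:-2]) / dt ** 2

# Central difference gives velocity at the interior frames.
v_est = (y[2:] - y[:-2]) / (2 * dt)

# Drag acceleration is what remains after removing gravity.
a_drag = a_est + g   # -2.19 m/s^2 here, opposing the upward motion
```

With drag acceleration and velocity in hand, the drag coefficient (and hence a density estimate for an assumed shape) follows from the drag law; in real footage the second difference amplifies position noise, which is why the paper pairs it with Runge-Kutta velocity integration.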

Long, Jason; Metzger, Philip; Lane, John

2013-01-01

155

The analysis of VF and VT with wavelet-based Tsallis information measure [rapid communication

NASA Astrophysics Data System (ADS)

We undertake the study of ventricular fibrillation and ventricular tachycardia by recourse to wavelet-based multiresolution analysis. In comparison with conventional Shannon entropy analysis of the signal, we propose a new application of Tsallis entropy analysis. It is shown that, as a criterion for discriminating between ventricular fibrillation and ventricular tachycardia, the Tsallis multiresolution entropy (MRET) provides better discrimination power than the Shannon multiresolution entropy (MRE).
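A minimal sketch of a wavelet-based Tsallis (multiresolution) entropy, computed from relative subband energies of a Haar decomposition; the signal, level count, and q value below are invented:

```python
import numpy as np

def tsallis_entropy(p, q):
    """Tsallis entropy S_q = (1 - sum p_i^q) / (q - 1); tends to Shannon as q -> 1."""
    p = p[p > 0]
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

def haar_subband_energies(x, levels):
    """Relative wavelet energies per resolution level (repeated one-level Haar)."""
    energies = []
    a = np.asarray(x, dtype=float)
    for _ in range(levels):
        d = (a[0::2] - a[1::2]) / np.sqrt(2)   # detail band at this level
        a = (a[0::2] + a[1::2]) / np.sqrt(2)   # approximation carried downward
        energies.append(np.sum(d ** 2))
    energies.append(np.sum(a ** 2))            # final approximation band
    e = np.array(energies)
    return e / e.sum()                         # normalize to a probability vector

rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * 5 * np.arange(512) / 512) + 0.1 * rng.standard_normal(512)
p = haar_subband_energies(signal, levels=4)
s_q = tsallis_entropy(p, q=2.0)
```

The discrimination idea is then to compare `s_q` between signal classes (here, VF versus VT episodes), with q chosen to emphasize the energy concentration differences.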

Huang, Hai; Xie, Hongbo; Wang, Zhizhong

2005-03-01

156

Model-free stochastic processes studied with q-wavelet-based informational tools

NASA Astrophysics Data System (ADS)

We undertake a model-free investigation of stochastic processes employing q-wavelet-based quantifiers that constitute a generalization of their Shannon counterparts. It is shown that (i) interesting physical information becomes accessible in this way, (ii) for special q values the quantifiers are more sensitive than the Shannon ones, and (iii) there exists an implicit relationship between the Hurst parameter H and q within this wavelet framework.

Pérez, D. G.; Zunino, L.; Martín, M. T.; Garavaglia, M.; Plastino, A.; Rosso, O. A.

2007-04-01

157

Wavelet-based compression of medical images: filter-bank selection and evaluation

Wavelet-based image coding algorithms (lossy and lossless) use a fixed perfect reconstruction filter-bank built into the algorithm for coding and decoding of images. However, no systematic study has been performed to evaluate the coding performance of wavelet filters on medical images. We evaluated the best types of filters suitable for medical images in providing low bit rate and low computational

A. Saffor; A. R. bin Ramli; K. H. Ng

2003-01-01

158

Wavelet-based Feature Analysis for Classification of Breast Masses from Normal Dense Tissue

Abstract. Automated detection of masses on mammograms is challenged by the presence of dense breast parenchyma. The aim of this study was to investigate the feasibility of using wavelet-based feature analysis for differentiating masses, of varying sizes, from normal dense tissue on mammograms. The dataset analyzed consists of 166 regions of interest (ROIs) containing spiculated masses (60), circumscribed masses (40) and normal dense

Filippos Sakellaropoulos; Spyros Skiadopoulos; Anna Karahaliou; George Panayiotakis; Lena Costaridou

2006-01-01

159

Cancer detection using mammography focuses on characteristics of tiny microcalcifications, including the number, size, and spatial arrangement of microcalcification clusters as well as morphological features of individual microcalcifications. We developed state-of-the-art wavelet-based methods to enhance the resolution of microcalcifications visible in digital mammograms, thereby improving the specificity of breast cancer diagnoses. In our research, we

Gordana Derado; F. Dubois Bowman; Rajan Patel; Mary Newell; Brani Vidakovic

2007-01-01

160

NASA Astrophysics Data System (ADS)

The finite-difference time-domain (FDTD) method, which solves the time-dependent Maxwell's curl equations numerically, has proved to be a highly efficient technique for numerous applications in electromagnetics. Despite its simplicity, the FDTD method suffers from serious limitations when substantial computer resources are required to solve electromagnetic problems with medium or large computational dimensions, for example in high-index optical devices. In our work, an efficient wavelet-based FDTD model has been implemented and extended in a parallel computation environment to analyze high-index optical devices. This model uses Daubechies compactly supported orthogonal wavelets and Deslauriers-Dubuc interpolating functions as biorthogonal wavelet bases, and is thus a very efficient algorithm for solving differential equations numerically. The wavelet-based FDTD model is, in effect, a high-spatial-order FDTD. Because of the highly linear numerical dispersion properties of this high-spatial-order FDTD, the required discretization can be coarser than in the standard FDTD method. In our work, the wavelet-based FDTD model achieved a significant reduction in the number of cells, i.e., in memory use. Also, since different segments of the optical device can be computed simultaneously, there was a significant gain in computation time: we achieved speed-up factors higher than 30 in comparison with a single processor. Furthermore, the efficiency of the parallelized computation, such as the influence of the discretization and the load sharing between different processors, was analyzed. In conclusion, this parallel-computing model is promising for analyzing more complicated optical devices with large dimensions.
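The standard second-order Yee scheme that the wavelet-based, high-spatial-order model generalizes can be sketched in one dimension as follows. This is the textbook FDTD update, not the authors' wavelet-based model; normalized units (c = 1, dx = 1), the grid sizes, and the additive Gaussian source are illustrative assumptions.

```python
import math

def fdtd_1d(n_cells=200, n_steps=300, courant=0.5):
    """1-D Yee FDTD in normalized units: Ez and Hy staggered in space and
    time, perfectly conducting ends, soft Gaussian source at mid-grid."""
    ez = [0.0] * n_cells
    hy = [0.0] * (n_cells - 1)
    src = n_cells // 2
    for t in range(n_steps):
        for i in range(n_cells - 1):        # H update (Faraday's law)
            hy[i] += courant * (ez[i + 1] - ez[i])
        for i in range(1, n_cells - 1):     # E update (Ampere's law)
            ez[i] += courant * (hy[i] - hy[i - 1])
        ez[src] += math.exp(-((t - 40) / 12.0) ** 2)  # Gaussian pulse source
    return ez
```

With a Courant number at or below 1 the scheme is stable; the wavelet/high-order variants described in the abstract buy the same accuracy on a coarser grid.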

Ren, Rong; Wang, Jin; Jiang, Xiyan; Lu, Yunqing; Xu, Ji

2014-10-01

161

Wavelet-based multi-view video coding with full scalability and illumination compensation

A wavelet-based approach to scalable multi-view video coding (MVC) is examined in this paper. A 4-D wavelet transform is used to decorrelate the multi-view video data temporally, view-directionally, and spatially for efficient compression. Motion compensated temporal filtering (MCTF) is applied to each video sequence of each camera to exploit temporal correlation and inter-view dependencies

Jens-uwe Garbas; Ulrich Fecker; André Kaup

2007-01-01

162

Wavelet-based efficient simulation of electromagnetic transients in a lightning protection system

In this paper, a wavelet-based efficient simulation of electromagnetic transients in a lightning protection system (LPS) is presented. The analysis of electromagnetic transients is carried out by employing the thin-wire electric field integral equation in the frequency domain. In order to easily handle the boundary conditions of the integral equation, semiorthogonal compactly supported spline wavelets, constructed for the bounded interval [0,1],

Guido Ala; Maria L. Di Silvestre; Elisa Francomano; Adele Tortorici

2003-01-01

163

Phase shifting interferometry is combined with wavelet-based image processing techniques to extract precise phase information for applications of moiré interferometry. Specifically, a diffraction grating identical to the specimen grating is used to introduce the additional phase shifts needed to implement phase shifting moiré interferometry. The phase map is calculated with the four-step phase shifting algorithm with 90-deg relative shifts between adjacent frames.

Heng Liu; Alexander N. Cartwright; Cemal Basaran

2004-01-01

164

Analysis of wavelet-based denoising techniques as applied to a radar signal pulse

NASA Astrophysics Data System (ADS)

The purpose of the research is to study the effects of three wavelet-based denoising techniques on the structure of a radar signal pulse. The radar signal pulse is 50 microseconds in duration with 2.0 MHz of Linear Frequency Modulation on Pulse. The Signal-to-Noise Ratio of the signal is fixed at 0.7. The comparison is accomplished in the time domain and the FFT domain. In addition, the output from an FM demodulator is examined. The comparisons are performed based upon MSE calculations and a visual inspection of the resulting signals. A comparison between the results outlined above and an ideal bandpass filter is also performed. A final comparison is discussed between the wavelet-based results outlined above and the results obtained from a bandpass filter offset in center frequency. The wavelet-based techniques can be shown to provide an advantage in visually detecting the radar signal pulse in low SNR environments over a bandpass filter approach in which the ideal filter characteristics are not known. All work is accomplished in MATLAB.
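A generic wavelet soft-threshold denoiser of the kind compared in such studies can be sketched as below. This is a minimal Haar-wavelet illustration with the Donoho-Johnstone universal threshold, written in plain Python rather than MATLAB, and is not one of the specific schemes the authors evaluated.

```python
import math
import random

def haar_dwt(x):
    """One-level Haar analysis: approximation and detail coefficients."""
    a = [(x[2 * i] + x[2 * i + 1]) / math.sqrt(2) for i in range(len(x) // 2)]
    d = [(x[2 * i] - x[2 * i + 1]) / math.sqrt(2) for i in range(len(x) // 2)]
    return a, d

def haar_idwt(a, d):
    """One-level Haar synthesis (perfect reconstruction)."""
    x = []
    for ai, di in zip(a, d):
        x.append((ai + di) / math.sqrt(2))
        x.append((ai - di) / math.sqrt(2))
    return x

def soft(c, t):
    """Soft thresholding: shrink toward zero by t."""
    return math.copysign(max(abs(c) - t, 0.0), c)

def denoise(x, levels=3, sigma=1.0):
    """Multi-level Haar soft-threshold denoising with the universal
    threshold t = sigma * sqrt(2 ln N)."""
    t = sigma * math.sqrt(2.0 * math.log(len(x)))
    details, a = [], list(x)
    for _ in range(levels):
        a, d = haar_dwt(a)
        details.append([soft(c, t) for c in d])
    for d in reversed(details):
        a = haar_idwt(a, d)
    return a
```

Real schemes differ mainly in the mother wavelet, the threshold rule (hard vs. soft, level-dependent), and the noise-variance estimate.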

Steinbrunner, Lori A.; Scarpino, Frank F.

1999-09-01

165

Wavelet-based nearest-regularized subspace for noise-robust hyperspectral image classification

NASA Astrophysics Data System (ADS)

A wavelet-based nearest-regularized-subspace classifier is proposed for noise-robust hyperspectral image (HSI) classification. The nearest-regularized subspace, coupling nearest-subspace classification with a distance-weighted Tikhonov regularization, was designed to consider only the original spectral bands. Recent research found that the multiscale wavelet features [e.g., extracted by redundant discrete wavelet transformation (RDWT)] of each hyperspectral pixel are potentially very useful and less sensitive to noise. An integration of wavelet-based features and the nearest-regularized-subspace classifier to improve the classification performance in noisy environments is proposed. Specifically, the wealth of noise-robust features provided by RDWT of the hyperspectral spectrum is employed in a decision-fusion system or as preprocessing for the nearest-regularized-subspace (NRS) classifier. Improved performance of the proposed method over conventional approaches, such as the support vector machine, is shown by testing several HSIs. For example, the NRS classifier performed with an accuracy of 65.38% for the AVIRIS Indian Pines data with 75 training samples per class under noisy conditions (signal-to-noise ratio = 36.87 dB), while the wavelet-based classifier obtained an accuracy of 71.60%, an improvement of approximately 6%.

Li, Wei; Liu, Kui; Su, Hongjun

2014-01-01

166

Comparative study of different wavelet based neural network models for rainfall-runoff modeling

NASA Astrophysics Data System (ADS)

The use of wavelet transformation in rainfall-runoff modeling has become popular because of its ability to deal simultaneously with both the spectral and the temporal information contained within time series data. The selection of an appropriate wavelet function plays a crucial role in the successful implementation of wavelet-based rainfall-runoff artificial neural network models, as it can lead to further enhancement in model performance. The present study is therefore conducted to evaluate the effects of 23 mother wavelet functions on the performance of hybrid wavelet-based artificial neural network rainfall-runoff models. The hybrid Multilayer Perceptron Neural Network (MLPNN) and Radial Basis Function Neural Network (RBFNN) models are developed in this study using both the continuous and the discrete wavelet transformation types. The performances of the 92 developed wavelet-based neural network models with all 23 mother wavelet functions are compared with those of neural network models developed without wavelet transformations. It is found that, among all the models tested, the discrete wavelet transform multilayer perceptron neural network (DWTMLPNN) and the discrete wavelet transform radial basis function (DWTRBFNN) models at decomposition level nine with the db8 wavelet function have the best performance. The results also show that pre-processing the input rainfall data by wavelet transformation can significantly increase the performance of the MLPNN and RBFNN rainfall-runoff models.

Shoaib, Muhammad; Shamseldin, Asaad Y.; Melville, Bruce W.

2014-07-01

167

Application of wavelet-based multiple linear regression model to rainfall forecasting in Australia

NASA Astrophysics Data System (ADS)

In this study, a wavelet-based multiple linear regression model is applied to forecast monthly rainfall in Australia by using monthly historical rainfall data and climate indices as inputs. The wavelet-based model is constructed by incorporating the multi-resolution analysis (MRA) with the discrete wavelet transform and multiple linear regression (MLR) model. The standardized monthly rainfall anomaly and large-scale climate index time series are decomposed using MRA into a certain number of component subseries at different temporal scales. The hierarchical lag relationship between the rainfall anomaly and each potential predictor is identified by cross correlation analysis with a lag time of at least one month at different temporal scales. The components of predictor variables with known lag times are then screened with a stepwise linear regression algorithm to be selectively included into the final forecast model. The MRA-based rainfall forecasting method is examined with 255 stations over Australia, and compared to the traditional multiple linear regression model based on the original time series. The models are trained with data from the 1959-1995 period and then tested in the 1996-2008 period for each station. The performance is compared with observed rainfall values, and evaluated by common statistics of relative absolute error and correlation coefficient. The results show that the wavelet-based regression model provides considerably more accurate monthly rainfall forecasts for all of the selected stations over Australia than the traditional regression model.

He, X.; Guan, H.; Zhang, X.; Simmons, C.

2013-12-01

168

Wavelet-based fMRI analysis: 3-D denoising, signal separation, and validation metrics

We present a novel integrated wavelet-domain based framework (w-ICA) for 3-D de-noising functional magnetic resonance imaging (fMRI) data followed by source separation analysis using independent component analysis (ICA) in the wavelet domain. We propose the idea of a 3-D wavelet-based multi-directional de-noising scheme where each volume in a 4-D fMRI data set is sub-sampled using the axial, sagittal and coronal geometries to obtain three different slice-by-slice representations of the same data. The filtered intensity value of an arbitrary voxel is computed as an expected value of the de-noised wavelet coefficients corresponding to the three viewing geometries for each sub-band. This results in a robust set of de-noised wavelet coefficients for each voxel. Given the decorrelated nature of these de-noised wavelet coefficients, it is possible to obtain more accurate source estimates using ICA in the wavelet domain. The contributions of this work can be realized as two modules. First, the analysis module, where we combine a new 3-D wavelet denoising approach with the better signal separation properties of ICA in the wavelet domain to yield an activation component that corresponds closely to the true underlying signal and is maximally independent with respect to other components. Second, we propose and describe two novel shape metrics for post-ICA comparisons between activation regions obtained through different frameworks. We verified our method using simulated as well as real fMRI data and compared our results against the conventional scheme (Gaussian smoothing + spatial ICA: s-ICA). The results show significant improvements based on two important features: (1) preservation of the shape of the activation region (shape metrics) and (2) receiver operating characteristic (ROC) curves. It was observed that the proposed framework was able to preserve the actual activation shape in a consistent manner even for very high noise levels, in addition to a significant reduction in false-positive voxels. PMID:21034833

Khullar, Siddharth; Michael, Andrew; Correa, Nicolle; Adali, Tulay; Baum, Stefi A.; Calhoun, Vince D.

2010-01-01

169

Learning Multisensory Integration and Coordinate Transformation via Density Estimation

Sensory processing in the brain includes three key operations: multisensory integration—the task of combining cues into a single estimate of a common underlying stimulus; coordinate transformations—the change of reference frame for a stimulus (e.g., retinotopic to body-centered) effected through knowledge about an intervening variable (e.g., gaze position); and the incorporation of prior information. Statistically optimal sensory processing requires that each of these operations maintains the correct posterior distribution over the stimulus. Elements of this optimality have been demonstrated in many behavioral contexts in humans and other animals, suggesting that the neural computations are indeed optimal. That the relationships between sensory modalities are complex and plastic further suggests that these computations are learned—but how? We provide a principled answer, by treating the acquisition of these mappings as a case of density estimation, a well-studied problem in machine learning and statistics, in which the distribution of observed data is modeled in terms of a set of fixed parameters and a set of latent variables. In our case, the observed data are unisensory-population activities, the fixed parameters are synaptic connections, and the latent variables are multisensory-population activities. In particular, we train a restricted Boltzmann machine with the biologically plausible contrastive-divergence rule to learn a range of neural computations not previously demonstrated under a single approach: optimal integration; encoding of priors; hierarchical integration of cues; learning when not to integrate; and coordinate transformation. The model makes testable predictions about the nature of multisensory representations. PMID:23637588

Sabes, Philip N.

2013-01-01

170

Wavelet-Based Real-Time Diagnosis of Complex Systems

NASA Technical Reports Server (NTRS)

A new method of robust, autonomous real-time diagnosis of a time-varying complex system (e.g., a spacecraft, an advanced aircraft, or a process-control system) is presented here. It is based upon the characterization and comparison of (1) the execution of software, as reported by discrete data, and (2) data from sensors that monitor the physical state of the system, such as performance sensors or similar quantitative time-varying measurements. By taking account of the relationship between execution of, and the responses to, software commands, this method satisfies a key requirement for robust autonomous diagnosis, namely, ensuring that control is maintained and followed. Such monitoring of control software requires that estimates of the state of the system, as represented within the control software itself, are representative of the physical behavior of the system. In this method, data from sensors and discrete command data are analyzed simultaneously and compared to determine their correlation. If the sensed physical state of the system differs from the software estimate (see figure) or if the system fails to perform a transition as commanded by software, or such a transition occurs without the associated command, the system has experienced a control fault. This method provides a means of detecting such divergent behavior and automatically generating an appropriate warning.

Gulati, Sandeep; Mackey, Ryan

2003-01-01

171

Because studies estimating density of gray squirrels (Sciurus carolinensis) have been labor intensive and costly, I demonstrate the use of line transect surveys to estimate gray squirrel density and determine the costs of conducting surveys to achieve precise estimates. Density estimates are based on four transects that were surveyed five times from 30 June to 9 July 1994. Using the program DISTANCE, I estimated there were 4.7 (95% CI = 1.86-11.92) gray squirrels/ha on the Clemson University campus. Eleven additional surveys would have decreased the percent coefficient of variation from 30% to 20% and would have cost approximately $114. Estimating urban gray squirrel density using line transect surveys is cost effective and can provide unbiased estimates of density, provided that none of the assumptions of distance sampling theory are violated. KEY WORDS: Bias; Density; Distance sampling; Gray squirrel; Line transect; Sciurus carolinensis. PMID:9336490

Hein

1997-11-01

172

Trabecular bone structure and bone density contribute to the strength of bone and are important in the study of osteoporosis. Wavelets are a powerful tool in characterizing and quantifying texture in an image. The purpose of this study was to validate wavelets as a tool for computing trabecular bone thickness directly from gray-level images. To this end, eight cylindrical cores of vertebral trabecular bone were imaged using 3-T magnetic resonance imaging (MRI) and micro-computed tomography (microCT). Thickness measurements of the trabecular bone from the wavelet-based analysis were compared with standard 2D structural parameters analogous to bone histomorphometry (MR images) and direct 3D distance transformation methods (microCT images). Additionally, bone volume fraction was determined using each method. The average difference in trabecular thickness between the wavelet and standard methods was less than 1 pixel for both MRI and microCT analysis. Correlations (R) of .94 for microCT measurements and .52 for MRI were found for the bone volume fraction. Based on these results, we conclude that wavelet-based methods deliver results comparable with those from established MR histomorphometric measurements. Because the wavelet transform is more robust with respect to image noise and operates directly on gray-level images, it could be a powerful tool for computing structural bone parameters from MR images acquired using high resolution and thus limited signal scenarios. PMID:17371730

Krug, Roland; Carballido-Gamio, Julio; Burghardt, Andrew J; Haase, Sebastian; Sedat, John W; Moss, William C; Majumdar, Sharmila

2007-04-01

173

Multiscale Density Estimation R. M. Willett, Student Member, IEEE, and R. D. Nowak, Member, IEEE

July 4, 2003. Abstract: The nonparametric density estimation method proposed in this paper is computationally fast, capable of detecting density discontinuities and singularities at a very high resolution

Nowak, Robert

174

Nonparametric estimation of population density for line transect sampling using FOURIER series

A nonparametric, robust density estimation method is explored for the analysis of right-angle distances from a transect line to the objects sighted. The method is based on the FOURIER series expansion of a probability density function over an interval. With only mild assumptions, a general population density estimator of wide applicability is obtained.
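The Fourier series estimator has a simple closed form, and the line-transect density follows from evaluating the fitted pdf at zero perpendicular distance. A minimal sketch (function names and the truncation point m are assumptions; the paper discusses how to choose m):

```python
import math

def fourier_density(distances, w, m=4):
    """Fourier-series pdf estimate on [0, w] for perpendicular sighting
    distances:  f(x) = 1/w + sum_{k=1..m} a_k cos(k*pi*x/w),
    with a_k = (2/(n*w)) * sum_i cos(k*pi*x_i/w)."""
    n = len(distances)
    a = [(2.0 / (n * w)) * sum(math.cos(k * math.pi * x / w) for x in distances)
         for k in range(1, m + 1)]
    def f(x):
        return 1.0 / w + sum(ak * math.cos(k * math.pi * x / w)
                             for k, ak in enumerate(a, start=1))
    return f

def transect_density(distances, w, total_line_length, m=4):
    """Line-transect population density: D = n * f(0) / (2 * L)."""
    f = fourier_density(distances, w, m)
    return len(distances) * f(0.0) / (2.0 * total_line_length)
```

For uniformly spread distances (no fall-off in detectability) the cosine coefficients vanish and f(0) reduces to 1/w, as expected.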

Crain, B.R.; Burnham, K.P.; Anderson, D.R.; Lake, J.L.

1979-01-01

175

ATMOSPHERIC DENSITY ESTIMATION USING SATELLITE PRECISION ORBIT EPHEMERIDES

Current atmospheric density models cannot accurately model the atmospheric density, which varies continuously in the upper atmosphere, mainly due to changes in solar and geomagnetic activity. ...

Arudra, Anoop Kumar

2011-04-22

176

Atmospheric turbulence mitigation using complex wavelet-based fusion.

Restoring a scene distorted by atmospheric turbulence is a challenging problem in video surveillance. The effect, caused by random, spatially varying, perturbations, makes a model-based solution difficult and in most cases, impractical. In this paper, we propose a novel method for mitigating the effects of atmospheric distortion on observed images, particularly airborne turbulence which can severely degrade a region of interest (ROI). In order to extract accurate detail about objects behind the distorting layer, a simple and efficient frame selection method is proposed to select informative ROIs only from good-quality frames. The ROIs in each frame are then registered to further reduce offsets and distortions. We solve the space-varying distortion problem using region-level fusion based on the dual tree complex wavelet transform. Finally, contrast enhancement is applied. We further propose a learning-based metric specifically for image quality assessment in the presence of atmospheric distortion. This is capable of estimating quality in both full- and no-reference scenarios. The proposed method is shown to significantly outperform existing methods, providing enhanced situational awareness in a range of surveillance scenarios. PMID:23475359

Anantrasirichai, Nantheera; Achim, Alin; Kingsbury, Nick G; Bull, David R

2013-06-01

177

Wavelet based scalable video coding with spatially scalable motion vectors

NASA Astrophysics Data System (ADS)

The paper studies scalable video coding based on multiresolution video representations generated by multi-scale subband motion compensated temporal filtering (MCTF) and spatial wavelet transform. Since MCTF is performed subband by subband in the spatial wavelet domain, motion vectors are available for reconstructing video sequences of any possible reduced spatial resolution, restricted by the dyadic decomposition pattern and the maximal spatial decomposition level. The multiresolution representations naturally provide a framework with which both spatial scalability and temporal scalability can be very conveniently and efficiently supported by a video coder that utilizes such multiresolution video representations. Such video coders can be fully scalable by incorporating wavelet-domain bit-plane image coding techniques. This paper examines the performance, including scalability and coding efficiency, of a scalable video coder that utilizes such multi-scale video representations together with the EZBC image coder. A wavelet-domain variable block size motion estimation algorithm is introduced to enhance the performance of the subband MCTF. Experiments show that the proposed coder outperforms the state-of-the-art fully scalable coder MC-EZBC in terms of spatial scalability.

Zhang, Huipin; Bossen, Frank

2003-06-01

178

Wavelets based algorithm for the evaluation of enhanced liver areas

NASA Astrophysics Data System (ADS)

Hepatocellular carcinoma (HCC) is a primary tumor of the liver. After local therapies, tumor evaluation is based on the mRECIST criteria, which involve the measurement of the maximum diameter of the viable lesion. This paper describes a computational method to measure the maximum diameter of the tumor through the contrasted area of the lesions. 63 computed tomography (CT) slices from 23 patients were assessed. Noncontrasted liver and typical HCC nodules were evaluated, and a virtual phantom was developed for this purpose. Optimization of the algorithm's detection and quantification was performed using the virtual phantom. After that, we compared the algorithm's findings for the maximum diameter of the target lesions against radiologist measures. Computed results for the maximum diameter are in good agreement with those obtained by radiologist evaluation, indicating that the algorithm was able to detect the tumor limits properly. A comparison of the maximum diameter estimated by the radiologist versus the algorithm revealed differences on the order of 0.25 cm for large-sized tumors (diameter > 5 cm), whereas differences of less than 1.0 cm were found for small-sized tumors. Differences between algorithm and radiologist measures were small for small-sized tumors, with a trend toward a small increase for tumors greater than 5 cm. Therefore, traditional methods for measuring lesion diameter should be complemented with non-subjective measurement methods, which would allow a more correct evaluation of the contrast-enhanced areas of HCC according to the mRECIST criteria.

Alvarez, Matheus; Rodrigues de Pina, Diana; Giacomini, Guilherme; Gomes Romeiro, Fernando; Barbosa Duarte, Sérgio; Yamashita, Seizo; de Arruda Miranda, José Ricardo

2014-03-01

179

Estimating plant population density: time costs and sampling efficiencies for different sized plots. Keywords: plant density, plot size, plot shape, sampling efficiency, sampling methods. Abstract: Given such species, the need to implement powerful and efficient sampling designs has never been greater. Previous

Connor, Edward F.

180

Demonstration of line transect methodologies to estimate urban gray squirrel density

Because studies estimating density of gray squirrels (Sciurus carolinensis) have been labor intensive and costly, I demonstrate the use of line transect surveys to estimate gray squirrel density and determine the costs of conducting surveys to achieve precise estimates. Density estimates are based on four transects that were surveyed five times from 30 June to 9 July 1994. Using the program DISTANCE, I estimated there were 4.7 (95% CI = 1.86-11.92) gray squirrels/ha on the Clemson University campus. Eleven additional surveys would have decreased the percent coefficient of variation from 30% to 20% and would have cost approximately $114. Estimating urban gray squirrel density using line transect surveys is cost effective and can provide unbiased estimates of density, provided that none of the assumptions of distance sampling theory are violated.

Hein, E.W. [Los Alamos National Lab., NM (United States)] [Los Alamos National Lab., NM (United States)

1997-11-01

181

Demonstration of Line Transect Methodologies to Estimate Urban Gray Squirrel Density

Sciurus carolinensis) have been labor intensive and costly, I demonstrate the use of line transect surveys to estimate gray squirrel density and determine the costs of conducting surveys to achieve precise estimates. Density estimates are based on four transects that were surveyed five times from 30 June to 9 July 1994. Using the program DISTANCE, I estimated there were 4.7

Eric W. Hein

1997-01-01

182

A novel 3D wavelet-based filter for visualizing features in noisy biological data.

Summary We have developed a three-dimensional (3D) wavelet-based filter for visualizing structural features in volumetric data. The only variable parameter is a characteristic linear size of the feature of interest. The filtered output contains only those regions that are correlated with the characteristic size, thus de-noising the image. We demonstrate the use of the filter by applying it to 3D data from a variety of electron microscopy samples, including low-contrast vitreous ice cryogenic preparations, as well as 3D optical microscopy specimens. PMID:16159339

Moss, W C; Haase, S; Lyle, J M; Agard, D A; Sedat, J W

2005-08-01

183

ICER-3D: A Progressive Wavelet-Based Compressor for Hyperspectral Images

NASA Technical Reports Server (NTRS)

ICER-3D is a progressive, wavelet-based compressor for hyperspectral images. ICER-3D is derived from the ICER image compressor. ICER-3D can provide lossless and lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The three-dimensional wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of hyperspectral data sets, while facilitating elimination of spectral ringing artifacts. Correlation is further exploited by a context modeler that effectively exploits spectral dependencies in the wavelet-transformed hyperspectral data. Performance results illustrating the benefits of these features are presented.

Kiely, A.; Klimesh, M.; Xie, H.; Aranki, N.

2005-01-01

184

Iterated denoising and fusion to improve the image quality of wavelet-based coding

NASA Astrophysics Data System (ADS)

An iterated denoising and fusion method is presented to improve the image quality of wavelet-based coding. First, iterated image denoising is used to reduce ringing and staircase noise along curving edges and improve edge regularity. Then, a wavelet fusion method is adopted to enhance image edges, protect non-edge regions, and decrease blurring artifacts during the denoising process. Experimental results have shown that the proposed scheme is capable of improving both the subjective and the objective performance of wavelet decoders, such as JPEG2000 and SPIHT.

Song, Beibei

2011-06-01

185

Kernel Density Estimations for Visual Analysis of Emergency Response Data

The purpose of this chapter is to investigate the calculation and representation of geocoded fire & rescue service missions. The study of relationships between the incident distribution and the identification of high (or low) incident density areas supports the general emergency preparedness planning and resource allocation. Point density information can be included into broad risk analysis procedures, which consider the
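A fixed-bandwidth Gaussian kernel density estimate over 2-D incident locations, of the kind used to build such density surfaces, can be sketched as below (the bandwidth choice and function names are illustrative assumptions):

```python
import math

def gaussian_kde_2d(points, bandwidth):
    """2-D Gaussian kernel density estimate with fixed bandwidth h:
    f(p) = 1/(n * 2*pi*h^2) * sum_i exp(-||p - p_i||^2 / (2 h^2))."""
    n = len(points)
    h2 = bandwidth * bandwidth
    norm = 1.0 / (n * 2.0 * math.pi * h2)
    def f(x, y):
        return norm * sum(
            math.exp(-0.5 * ((x - px) ** 2 + (y - py) ** 2) / h2)
            for px, py in points)
    return f
```

In practice the bandwidth controls the smoothness of the hot-spot map: small h shows individual incidents, large h shows broad density regions.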

Jukka M. Krisp; Olga Špatenková

186

Estimating option implied risk-neutral densities using spline and hypergeometric functions

Summary We examine the ability of two recent methods – the smoothed implied volatility smile method (SML) and the density functionals based on confluent hypergeometric functions (DFCH) – for estimating implied risk-neutral densities (RNDs) from European-style options. Two complementary Monte Carlo experiments are conducted and the performance of the two RND estimators is evaluated by the root mean integrated squared

Ruijun Bu; Kaddour Hadri

2007-01-01

187

Characterization of a maximum-likelihood nonparametric density estimator of kernel type

NASA Technical Reports Server (NTRS)

Kernel-type density estimators are calculated by the method of sieves. Proofs are presented for the characterization theorem: let x(1), x(2), ..., x(n) be a random sample from a population with density f(0). Let sigma > 0 and consider estimators f of f(0) defined by (1).

Geman, S.; Mcclure, D. E.

1982-01-01

188

Techniques and Technology Article Road-Based Surveys for Estimating Wild Turkey Density

Road-transect-based distance sampling has been used to estimate density of several wild bird species including wild turkeys (Meleagris gallopavo). We used inflatable turkey decoys during autumn (Aug-Nov) and winter (Dec-Mar) 2003

Butler, Matthew J.

189

The wavelet-based denoising has proven its ability to denoise bearing vibration signals by improving the signal-to-noise ratio (SNR) and reducing the root-mean-square error (RMSE). In this paper, seven wavelet-based denoising schemes have been evaluated based on the performance of the Artificial Neural Network (ANN) and the Support Vector Machine (SVM) for bearing condition classification. The work consists of two parts. In the first part, a synthetic signal simulating the defective bearing vibration signal with Gaussian noise was subjected to these denoising schemes, and the best scheme based on the SNR and the RMSE was identified. In the second part, the vibration signals collected from a customized Rolling Element Bearing (REB) test rig for four bearing conditions were subjected to these denoising schemes. Several time and frequency domain features were extracted from the denoised signals, out of which a few sensitive features were selected using Fisher's Criterion (FC). Extracted features were used to train and test the ANN and the SVM. The best denoising scheme identified, based on the classification performances of the ANN and the SVM, was found to be the same as the one obtained using the synthetic signal. PMID:23213323
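As a rough, hypothetical sketch of the first part of such a study (a single Haar soft-thresholding scheme rather than the authors' seven candidates), the following denoises a synthetic noisy signal and scores the result by RMSE against the clean reference:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a defective-bearing signal: impulsive bursts of a
# carrier tone, plus Gaussian noise.
n = 1024
t = np.arange(n) / n
clean = np.sin(2 * np.pi * 50 * t) * (np.sin(2 * np.pi * 5 * t) > 0.95)
noisy = clean + 0.3 * rng.standard_normal(n)

def haar_dwt(x):
    s = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail coefficients
    return s, d

def haar_idwt(s, d):
    x = np.empty(2 * s.size)
    x[0::2] = (s + d) / np.sqrt(2.0)
    x[1::2] = (s - d) / np.sqrt(2.0)
    return x

# 3-level decomposition with soft thresholding of the detail coefficients
levels, details, approx = 3, [], noisy
for _ in range(levels):
    approx, d = haar_dwt(approx)
    details.append(d)

sigma = np.median(np.abs(details[0])) / 0.6745   # MAD noise estimate
thr = sigma * np.sqrt(2 * np.log(n))             # universal threshold
details = [np.sign(d) * np.maximum(np.abs(d) - thr, 0.0) for d in details]

denoised = approx
for d in reversed(details):
    denoised = haar_idwt(denoised, d)

rmse_noisy = np.sqrt(np.mean((noisy - clean) ** 2))
rmse = np.sqrt(np.mean((denoised - clean) ** 2))   # should drop after denoising
```

A full evaluation in the paper's spirit would repeat this over several wavelet families and threshold rules, then rank the schemes by SNR and RMSE.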

G. S., Vijay; H. S., Kumar; Pai P., Srinivasa; N. S., Sriram; Rao, Raj B. K. N.

2012-01-01

190

Posterior Density Estimation for a Class of On-line Quality Control Models

NASA Astrophysics Data System (ADS)

On-line quality control during production calls for a periodical monitoring of the produced items according to some prescribed strategy. It is reasonable to assume the existence of internal non-observable variables so that the carried out monitoring is only partially reliable. Under the setting of a Hidden Markov Model (HMM), posterior density estimates are obtained via particle filter type algorithms. Making use of kernel density methods the stable regime densities are approximated and false-alarm probabilities are estimated.

Dorea, Chang C. Y.; Santos, Walter B.

2011-11-01

191

Density-ratio robustness in dynamic state estimation

closed convex set of probabilities that is known by the name of density ratio class or constant odds-ratio class of distributions. Second, after revising the properties of the density ratio class in the context of parametric

Zaffalon, Marco

192

Two versions of a stage-structured model of Cirsium vulgare population dynamics were developed. Both incorporated density dependence at one stage in the life cycle of the plant. In version 1 density dependence was assumed to operate during germination whilst in version 2 it was included at the seedling stage. Density-dependent parameter values for the model were estimated from annual census

M. Gillman; J. M. Bullock; J. Silvertown; B. Clear Hill

1993-01-01

193

BLACK AND BROWN BEAR DENSITY ESTIMATES USING MODIFIED CAPTURE RECAPTURE TECHNIQUES IN ALASKA

Population density estimates were obtained for sympatric black bear (Ursus americanus) and brown bear (U. arctos) populations inhabiting a search area of 1,325 km2 in south-central Alaska. Standard capture-recapture population estimation techniques were modified to correct for lack of geographic closure based on daily locations of radio-marked animals over a 7-day period. Calculated density estimates were based on available habitat

STERLING D. MILLER; EARL F. BECKER; WARREN B. BALLARD

194

On ultrasound image reconstruction by tissue density estimation

An inherent property of medical ultrasound imaging is the speckle noise that generally obscures the image and reduces the diagnostic image resolution and contrast. Consequently, substantial improvement of ultrasound images is an important prerequisite for ultrasound imaging. Some recent research has suggested that the spatial distribution of tissue densities may be used in ultrasound imaging to reconstruct images with fewer

Zheng Huang; Jingxin Zhang; Cishen Zhang

2010-01-01

195

NASA Astrophysics Data System (ADS)

The quality of medical ultrasound images is limited by inherent poor resolution due to the finite temporal bandwidth of the acoustic pulse and the non-negligible width of the system point-spread function. One of the major difficulties in designing a practical and effective restoration algorithm is to develop a model for the tissue reflectivity that can adequately capture significant image features without being computationally prohibitive. The reflectivities of biological tissues do not exhibit the piecewise smooth characteristics of natural images considered in the standard image processing literature; while the macroscopic variations in echogenicity are indeed piecewise smooth, the presence of sub-wavelength scatterers adds a pseudo-random component at the microscopic level. This observation leads us to propose modelling the tissue reflectivity as the product of a piecewise smooth echogenicity map and a unit-variance random field. The chief advantage of such an explicit representation is that it allows us to exploit representations for piecewise smooth functions (such as wavelet bases) in modelling variations in echogenicity without neglecting the microscopic pseudo-random detail. As an example of how this multiplicative model may be exploited, we propose an expectation-maximisation (EM) restoration algorithm that alternates between inverse filtering (to estimate the tissue reflectivity) and logarithmic wavelet denoising (to estimate the echogenicity map). We provide simulation and in vitro results to demonstrate that our proposed algorithm yields solutions that enjoy higher resolution, better contrast and greater fidelity to the tissue reflectivity compared with the current state-of-the-art in ultrasound image restoration.

Ng, J. K. H.; Prager, R. W.; Kingsbury, N. G.; Treece, G. M.; Gee, A. H.

2006-03-01

196

An Evaluation of the Accuracy of Kernel Density Estimators for Home Range Analysis

Abstract. Kernel density estimators are becoming more widely used, particularly as home range estimators. Despite extensive interest in their theoretical properties, little empirical research has been done to investigate their performance as home range estimators. We used computer simulations to compare the area and shape of kernel density estimates to the true area and shape of multimodal two-dimensional distributions. The fixed kernel gave area estimates with very little bias when least squares cross validation was used to select
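The simulation set-up can be sketched as follows; this uses a crude reference bandwidth instead of the least squares cross validation examined in the paper, and computes the 95% isopleth area of a fixed-kernel estimate on a grid:

```python
import numpy as np

rng = np.random.default_rng(2)

# Bimodal "home range": relocations drawn from two Gaussian activity centers.
n = 500
centers = np.array([[0.0, 0.0], [4.0, 0.0]])
pts = centers[rng.integers(0, 2, n)] + rng.standard_normal((n, 2))

# Fixed-kernel 2D KDE with a crude reference bandwidth (not LSCV).
h = n ** (-1 / 6) * pts.std(axis=0).mean()
gx = np.linspace(pts[:, 0].min() - 3, pts[:, 0].max() + 3, 200)
gy = np.linspace(pts[:, 1].min() - 3, pts[:, 1].max() + 3, 200)
X, Y = np.meshgrid(gx, gy)
dens = np.zeros_like(X)
for pt in pts:
    dens += np.exp(-0.5 * ((X - pt[0]) ** 2 + (Y - pt[1]) ** 2) / h ** 2)
dens /= n * 2 * np.pi * h ** 2

# 95% home range: smallest region holding 95% of the estimated density mass.
cell = (gx[1] - gx[0]) * (gy[1] - gy[0])
masses = np.sort(dens.ravel())[::-1] * cell
k = np.searchsorted(np.cumsum(masses), 0.95)
area95 = (k + 1) * cell
```

Comparing `area95` against the analytic 95% region of the generating mixture, over many replicates and bandwidth choices, is essentially the bias experiment the abstract describes.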

D. Erran Seaman; Roger A. Powell

2008-01-01

197

L1-consistent estimation of the density of residuals in random design regression

Luc Devroye, Tina Felber, Michael Kohler and Adam Krzyzak. … for the probability law of a random variable X. We consider the problem of estimating f from the data Dn. The estimate is pointwise and uniformly consistent (Cheng (2004)), and, in addition, the histogram error density estimator

Devroye, Luc

198

Volumetric breast density estimation from full-field digital mammograms

A method is presented for estimation of dense breast tissue volume from mammograms obtained with full-field digital mammography (FFDM). The thickness of dense tissue mapping to a pixel is determined by using a physical model of image acquisition. This model is based on the assumption that the breast is composed of two types of tissue, fat and parenchyma. Effective linear
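A hedged sketch of the two-tissue idea, with invented, uncalibrated attenuation coefficients and a monoenergetic beam in place of the paper's full spectrum and detector model: a pixel behind breast thickness T with dense-tissue thickness t attenuates the beam as I = I0·exp(-(mu_fat·(T-t) + mu_dense·t)), which can be inverted for t.

```python
import numpy as np

# Two-component attenuation model. Solving
#   I = I0 * exp(-(mu_fat * (T - t) + mu_dense * t))
# for the dense-tissue thickness t gives
#   t = (ln(I0 / I) - mu_fat * T) / (mu_dense - mu_fat).
# The coefficients below are illustrative placeholders, not calibrated values.

mu_fat, mu_dense = 0.46, 0.80          # 1/cm, assumed effective values
T = 5.0                                # compressed breast thickness, cm
I0 = 1000.0                            # unattenuated signal

def dense_thickness(I, T):
    t = (np.log(I0 / I) - mu_fat * T) / (mu_dense - mu_fat)
    return np.clip(t, 0.0, T)          # enforce physical bounds

# Forward-simulate a pixel that is 30% dense by thickness, then invert it.
t_true = 0.3 * T
I = I0 * np.exp(-(mu_fat * (T - t_true) + mu_dense * t_true))
t_est = dense_thickness(I, T)          # recovers t_true = 1.5 cm
```

Summing `t` over all breast pixels times the pixel area yields the dense-tissue volume; the paper's contribution is doing this with a realistic polyenergetic acquisition model rather than this single-energy toy.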

Saskia Van Engeland; Peter R. Snoeren; Henkjan Huisman; Carla Boetes; Nico Karssemeijer

2006-01-01

199

Asymptotic equivalence of density estimation and Gaussian white noise

Signal recovery in Gaussian white noise with variance tending to zero has served for some time as a representative model for nonparametric curve estimation, having all the essential traits in a pure form. The equivalence has mostly been stated informally, but an approximation in the sense of Le Cam's deficiency distance $\Delta$ would make it precise. The models are then

Michael Nussbaum

1996-01-01

200

Self-consistent method for density estimation

…limit for the scaling of the square error with the dataset size. … We will focus on non-parametric estimates. The simplest non-parametric method is plotting a histogram, but more sophisticated procedures have been

Bernacchia, Alberto

201

Improving Density Estimation by Incorporating Spatial Information (Laura M. Smith; Matthew S. Keegan; November 30, 2009)

Given discrete event data, we wish to produce a probability density. Standard methods of density estimation, such as Kernel Density Estimation, do not incorporate geographical information. Using

Soatto, Stefano

202

Density Ratio Estimation: A Comprehensive Review

Masashi Sugiyama (Tokyo Institute of Technology); Kanamori (Nagoya University, kanamori@is.nagoya-u.ac.jp)

Density ratio estimation has attracted attention for applications such as inference and conditional probability estimation. When estimating the density ratio, it is preferable

Sugiyama, Masashi

203

Daytime fog detection and density estimation with entropy minimization

NASA Astrophysics Data System (ADS)

Fog disturbs the proper image processing in many outdoor observation tools. For instance, fog reduces the visibility of obstacles in vehicle driving applications. Usually, estimating the amount of fog in the scene image makes it possible to greatly improve the image processing, and thus to better perform the observation task. One possibility is to restore the visibility of the contrasts in the image from the foggy scene image before applying the usual image processing. Several algorithms have been proposed in recent years for defogging. Before applying defogging, it is necessary to detect the presence of fog, so as not to emphasize contrasts that are due to noise. Surprisingly, only a reduced number of image processing algorithms have been proposed for fog detection and characterization. Most are dedicated to static cameras and cannot be used when the camera is moving. Daytime fog is characterized by its extinction coefficient, which is equivalent to the visibility distance. A visibility-meter can be used for fog detection and characterization, but this kind of sensor performs an estimation in a relatively small volume of air, and is thus sensitive to heterogeneous fog and to air turbulence when the cameras are moving. In this paper, we propose an original algorithm, based on entropy minimization, to detect fog and estimate its extinction coefficient by processing stereo pairs. This algorithm is fast, provides accurate results using a low-cost stereo camera sensor and, most importantly, can work while the cameras are moving. The proposed algorithm is evaluated on synthetic and camera images with ground truth. Results show that the proposed method is accurate and, combined with a fast stereo reconstruction algorithm, should provide a near-real-time solution for fog detection and visibility estimation for moving sensors.

Caraffa, L.; Tarel, J. P.

2014-08-01

204

Effect of Tissue Thickness Variation in Volumetric Breast Density Estimation

A method is presented for the estimation of dense breast tissue volume from full-field digital mammograms. The digital signal from the image is determined by using a physical signal propagation model that considers the incident x-ray spectrum and detector efficiency, assuming that the breast is composed of only adipose and fibroglandular tissue. The effect of an error in the breast

Olivier Alonzo-Proulx; Albert H. Tyson; Gordon E. Mawdsley; Martin J. Yaffe

2008-01-01

205

Estimated global nitrogen deposition using NO2 column density

Global nitrogen deposition has increased over the past 100 years. Monitoring and simulation studies have evaluated nitrogen deposition at both the global and regional scale. With the development of remote-sensing instruments, tropospheric NO2 column density retrieved from the Global Ozone Monitoring Experiment (GOME) and Scanning Imaging Absorption Spectrometer for Atmospheric Chartography (SCIAMACHY) sensors now provides us with a new opportunity to understand changes in reactive nitrogen in the atmosphere. The concentration of NO2 in the atmosphere has a significant effect on atmospheric nitrogen deposition. Following the general nitrogen deposition calculation method, we use principal component regression to evaluate global nitrogen deposition from global NO2 column density and meteorological data. In terms of simulation accuracy, about 70% of the Earth's land area passed a significance test of the regression, and NO2 column density has a significant influence on the regression results over 44% of global land. The simulated results show that global average nitrogen deposition was 0.34 g m⁻² yr⁻¹ from 1996 to 2009 and is increasing at about 1% per year. Consistent with previous research findings, China, Europe, and the USA are three hotspots of nitrogen deposition. In this study, Southern Asia was found to be another hotspot (about 1.58 g m⁻² yr⁻¹, and maintaining a high growth rate). As nitrogen deposition increases, the number of regions threatened by high nitrogen deposition is also increasing. With N emissions continuing to increase, the area whose ecosystems are affected by high levels of nitrogen deposition will also grow.
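The principal component regression step can be sketched on synthetic data; the predictors and coefficients below are invented stand-ins for NO2 column density and meteorological covariates, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Invented stand-ins for the study's inputs: NO2 column density plus two
# meteorological covariates, with NO2 and precipitation correlated (which is
# the situation where PCR helps over ordinary least squares).
n = 300
no2 = rng.normal(5.0, 1.5, n)
precip = 0.6 * no2 + rng.normal(0.0, 1.0, n)
temp = rng.normal(15.0, 4.0, n)
X = np.column_stack([no2, precip, temp])
y = 0.05 * no2 + 0.02 * precip + 0.001 * temp + rng.normal(0.0, 0.02, n)

# Principal component regression: standardize, PCA via SVD, then regress the
# response on the leading k component scores.
Xc = (X - X.mean(axis=0)) / X.std(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 2                                   # keep the two leading components
scores = Xc @ Vt[:k].T
A = np.column_stack([np.ones(n), scores])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
y_hat = A @ beta

r2 = 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
```

Dropping the trailing components discards the directions where correlated predictors make coefficient estimates unstable, at the cost of a small bias.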

Lu, Xuehe; Jiang, Hong; Zhang, Xiuying; Liu, Jinxun; Zhang, Zhen; Jin, Jiaxin; Wang, Ying; Xu, Jianhui; Cheng, Miaomiao

2013-01-01

206

The estimation of the gradient of a density function, with applications in pattern recognition

Nonparametric density gradient estimation using a generalized kernel approach is investigated. Conditions on the kernel functions are derived to guarantee asymptotic unbiasedness, consistency, and uniform consistency of the estimates. The results are generalized to obtain a simple mean-shift estimate that can be extended in a k-nearest-neighbor approach. Applications of gradient estimation to pattern recognition are presented using clustering and intrinsic dimensionality
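The mean-shift estimate named above has a very compact form: each iterate moves a point to the kernel-weighted average of the sample, which is a step uphill along the estimated density gradient, converging to a mode. A one-dimensional Gaussian-kernel sketch (data and bandwidth are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)

# Two well-separated clusters; mean shift should find a mode of each.
data = np.concatenate([rng.normal(-3.0, 0.5, 200), rng.normal(2.0, 0.5, 200)])

def mean_shift(x0, data, h=0.5, iters=50):
    """Move x0 uphill along the kernel density gradient estimate."""
    x = x0
    for _ in range(iters):
        w = np.exp(-0.5 * ((data - x) / h) ** 2)   # Gaussian kernel weights
        x = np.sum(w * data) / np.sum(w)           # kernel-weighted average
    return x

mode_left = mean_shift(-2.0, data)    # converges near -3
mode_right = mean_shift(1.0, data)    # converges near 2
```

Running the iteration from every data point and grouping points by the mode they reach is precisely the clustering application the abstract mentions.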

KEINOSUKE FUKUNAGA; LARRY D. HOSTETLER

1975-01-01

207

Probabilistic Analysis and Density Parameter Estimation Within Nessus

NASA Technical Reports Server (NTRS)

This NASA educational grant has the goal of promoting probabilistic analysis methods to undergraduate and graduate UTSA engineering students. Two undergraduate-level and one graduate-level course were offered at UTSA providing a large number of students exposure to and experience in probabilistic techniques. The grant provided two research engineers from Southwest Research Institute the opportunity to teach these courses at UTSA, thereby exposing a large number of students to practical applications of probabilistic methods and state-of-the-art computational methods. In classroom activities, students were introduced to the NESSUS computer program, which embodies many algorithms in probabilistic simulation and reliability analysis. Because the NESSUS program is used at UTSA in both student research projects and selected courses, a student version of a NESSUS manual has been revised and improved, with additional example problems being added to expand the scope of the example application problems. This report documents two research accomplishments in the integration of a new sampling algorithm into NESSUS and in the testing of the new algorithm. The new Latin Hypercube Sampling (LHS) subroutines use the latest NESSUS input file format and specific files for writing output. The LHS subroutines are called out early in the program so that no unnecessary calculations are performed. Proper correlation between sets of multidimensional coordinates can be obtained by using NESSUS' LHS capabilities. Finally, two types of correlation are written to the appropriate output file. The program enhancement was tested by repeatedly estimating the mean, standard deviation, and 99th percentile of four different responses using Monte Carlo (MC) and LHS. These test cases, put forth by the Society of Automotive Engineers, are used to compare probabilistic methods. 
For all test cases, it is shown that LHS has a lower estimation error than MC when used to estimate the mean, standard deviation, and 99th percentile of the four responses at the 50 percent confidence level and using the same number of response evaluations for each method. In addition, LHS requires fewer calculations than MC in order to be 99.7 percent confident that a single mean, standard deviation, or 99th percentile estimate will be within at most 3 percent of the true value of each parameter. Again, this is shown for all of the test cases studied. For that reason it can be said that NESSUS is an important reliability tool that offers a variety of sound probabilistic methods a user can employ; furthermore, the newest LHS module is a valuable enhancement of the program.
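The MC-versus-LHS comparison can be reproduced in miniature; the response function below is an invented toy, not one of the SAE test cases, and the spread of the mean estimate over repeated runs plays the role of the estimation error:

```python
import numpy as np

rng = np.random.default_rng(5)

def latin_hypercube(n, d, rng):
    # One point in each of n equal-probability strata per dimension,
    # with the strata randomly paired across dimensions.
    u = (rng.random((n, d)) + np.arange(n)[:, None]) / n
    for j in range(d):
        u[:, j] = rng.permutation(u[:, j])
    return u

def response(u):
    # Invented toy response on uniform inputs; true mean is 1/3 + 1/2 = 5/6.
    return u[:, 0] ** 2 + u[:, 1]

n, reps = 100, 200
mc_means = [response(rng.random((n, 2))).mean() for _ in range(reps)]
lhs_means = [response(latin_hypercube(n, 2, rng)).mean() for _ in range(reps)]

mc_sd = float(np.std(mc_means))
lhs_sd = float(np.std(lhs_means))   # stratification shrinks this spread
```

For smooth, roughly additive responses the stratification removes most of the variance of the mean estimate, which is the effect the report measures on the SAE benchmark problems.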

Godines, Cody R.; Manteufel, Randall D.; Chamis, Christos C. (Technical Monitor)

2002-01-01

208

Radiation Pressure Detection and Density Estimate for 2011 MD

NASA Astrophysics Data System (ADS)

We present our astrometric observations of the small near-Earth object 2011 MD (H ~ 28.0), obtained after its very close fly-by to Earth in 2011 June. Our set of observations extends the observational arc to 73 days, and, together with the published astrometry obtained around the Earth fly-by, allows a direct detection of the effect of radiation pressure on the object, with a confidence of 5σ. The detection can be used to put constraints on the density of the object, pointing to either an unexpectedly low value of ρ = (640 ± 330) kg m⁻³ (68% confidence interval) if we assume a typical probability distribution for the unknown albedo, or to an unusually high reflectivity of its surface. This result may have important implications both in terms of impact hazard from small objects and in light of a possible retrieval of this target.

Micheli, Marco; Tholen, David J.; Elliott, Garrett T.

2014-06-01

209

Computerized Medical Imaging and Graphics 31 (2007) 1-8. Wavelet-based medical image compression: the proposed approach indeed achieves a higher compression rate on CT, MRI and ultrasound images. Keywords: medical image; selection of predictor variables; adaptive arithmetic coding; multicollinearity problem

Chang, Pao-Chi

210

A new algorithm for wavelet-based heart rate variability analysis

One of the most promising non-invasive markers of the activity of the autonomic nervous system is Heart Rate Variability (HRV). HRV analysis toolkits often provide spectral analysis techniques using the Fourier transform, which assumes that the heart rate series is stationary. To overcome this issue, the Short Time Fourier Transform is often used (STFT). However, the wavelet transform is thought to be a more suitable tool for analyzing non-stationary signals than the STFT. Given the lack of support for wavelet-based analysis in HRV toolkits, such analysis must be implemented by the researcher. This has made this technique underutilized. This paper presents a new algorithm to perform HRV power spectrum analysis based on the Maximal Overlap Discrete Wavelet Packet Transform (MODWPT). The algorithm calculates the power in any spectral band with a given tolerance for the band's boundaries. The MODWPT decomposition tree is pruned to avoid calculating unnecessary wavelet coefficients, thereby optimizing execution t...

García, Constantino A; Vila, Xosé; Márquez, David G

2014-01-01

211

A wavelet-based watermarking algorithm for ownership verification of digital images.

Access to multimedia data has become much easier due to the rapid growth of the Internet. While this is usually considered an improvement of everyday life, it also makes unauthorized copying and distributing of multimedia data much easier, therefore presenting a challenge in the field of copyright protection. Digital watermarking, which is inserting copyright information into the data, has been proposed to solve the problem. In this paper, we first discuss the features that a practical digital watermarking system for ownership verification requires. Besides perceptual invisibility and robustness, we claim that the private control of the watermark is also very important. Second, we present a novel wavelet-based watermarking algorithm. Experimental results and analysis are then given to demonstrate that the proposed algorithm is effective and can be used in a practical system. PMID:18244614

Wang, Yiwei; Doherty, John F; Van Dyck, Robert E

2002-01-01

212

We extract the informative features of gyroscope signals using the discrete wavelet transform (DWT) decomposition and provide them as input to multi-layer feed-forward artificial neural networks (ANNs) for leg motion classification. Since the DWT is based on correlating the analyzed signal with a prototype wavelet function, selection of the wavelet type can influence the performance of wavelet-based applications significantly. We also investigate the effect of selecting different wavelet families on classification accuracy and ANN complexity and provide a comparison between them. The maximum classification accuracy of 97.7% is achieved with the Daubechies wavelet of order 16 and the reverse bi-orthogonal (RBO) wavelet of order 3.1, both with similar ANN complexity. However, the RBO 3.1 wavelet is preferable because of its lower computational complexity in the DWT decomposition and reconstruction. PMID:22319378
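A sketch of DWT-based feature extraction in the same spirit, using plain Haar subband energies rather than the Daubechies-16 or RBO 3.1 wavelets used in the paper: signals with different dominant frequencies concentrate their energy in different subbands, which is what makes such features separable by a classifier.

```python
import numpy as np

def haar_dwt(x):
    s = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail coefficients
    return s, d

def subband_energies(x, levels=4):
    feats = []
    for _ in range(levels):
        x, d = haar_dwt(x)
        feats.append(np.sum(d ** 2))         # energy per detail level
    feats.append(np.sum(x ** 2))             # final approximation energy
    return np.array(feats)

# Two toy "motions": a slow and a fast oscillation. Their energy lands in
# different subbands, so the feature vectors are easy to tell apart.
t = np.arange(256)
f_slow = subband_energies(np.sin(2 * np.pi * t / 64))
f_fast = subband_energies(np.sin(2 * np.pi * t / 4))
```

The resulting fixed-length energy vectors are exactly the kind of input one would feed to a feed-forward ANN; because the Haar DWT here is orthonormal, the features also conserve total signal energy (Parseval).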

Ayrulu-Erdem, Birsel; Barshan, Billur

2011-01-01

213

Corrosion in Reinforced Concrete Panels: Wireless Monitoring and Wavelet-Based Analysis

To realize efficient data capture and accurate analysis of pitting corrosion of reinforced concrete (RC) structures, we first design and implement a wireless sensor network (WSN) to monitor the pitting corrosion of RC panels, and then we propose a wavelet-based algorithm to analyze the corrosion state from the corrosion data collected by the wireless platform. We design a novel pitting corrosion-detecting mote and a communication protocol such that the monitoring platform can sample the electrochemical emission signals of the corrosion process at a configured period and send these signals to a central computer for analysis. The proposed algorithm, based on wavelet domain analysis, returns the energy distribution of the electrochemical emission data, from which closer observation and understanding can be achieved. We also conducted test-bed experiments on RC panels. The results verify the feasibility and efficiency of the proposed WSN system and algorithms. PMID:24556673

Qiao, Guofu; Sun, Guodong; Hong, Yi; Liu, Tiejun; Guan, Xinchun

2014-01-01

214

An Investigation of Wavelet Bases for Grid-Based Multi-Scale Simulations Final Report

The research summarized in this report is the result of a two-year effort that has focused on evaluating the viability of wavelet bases for the solution of partial differential equations. The primary objective for this work has been to establish a foundation for hierarchical/wavelet simulation methods based upon numerical performance, computational efficiency, and the ability to exploit the hierarchical adaptive nature of wavelets. This work has demonstrated that hierarchical bases can be effective for problems with a dominant elliptic character. However, the strict enforcement of orthogonality was found to be less desirable than weaker semi-orthogonality or bi-orthogonality for solving partial differential equations. This conclusion has led to the development of a multi-scale linear finite element based on a hierarchical change of basis. The reproducing kernel particle method has been found to yield extremely accurate phase characteristics for hyperbolic problems while providing a convenient framework for multi-scale analyses.

Baty, R.S.; Burns, S.P.; Christon, M.A.; Roach, D.W.; Trucano, T.G.; Voth, T.E.; Weatherby, J.R.; Womble, D.E.

1998-11-01

216

Bayesian Analysis of Mass Spectrometry Proteomics Data using Wavelet Based Functional Mixed Models

In this paper, we analyze MALDI-TOF mass spectrometry proteomic data using Bayesian wavelet-based functional mixed models. By modeling mass spectra as functions, this approach avoids reliance on peak detection methods. The flexibility of this framework in modeling non-parametric fixed and random effect functions enables it to model the effects of multiple factors simultaneously, allowing one to perform inference on multiple factors of interest using the same model fit, while adjusting for clinical or experimental covariates that may affect both the intensities and locations of peaks in the spectra. From the model output, we identify spectral regions that are differentially expressed across experimental conditions, while controlling the Bayesian FDR, in a way that takes both statistical and clinical significance into account. We apply this method to two cancer studies. PMID:17888041

Morris, Jeffrey S.; Brown, Philip J.; Herrick, Richard C.; Baggerly, Keith A.; Coombes, Kevin R.

2008-01-01

217

Wavelet bases on the interval with short support and vanishing moments

NASA Astrophysics Data System (ADS)

Jia and Zhao have recently proposed a construction of a cubic spline wavelet basis on the interval which satisfies homogeneous Dirichlet boundary conditions of the second order. They used the basis for solving fourth order problems and showed that the Galerkin method with this basis has superb convergence. The stiffness matrices for the biharmonic equation defined on a unit square have very small and uniformly bounded condition numbers. In our contribution, we design wavelet bases with the same scaling functions but different wavelets. We show that our basis has the same quantitative properties as the wavelet basis constructed by Jia and Zhao and, additionally, that the wavelets have vanishing moments. This enables the use of this wavelet basis in adaptive wavelet methods and non-adaptive sparse grid methods. Furthermore, we even improve the condition numbers of the stiffness matrices by including lower levels.

Bímová, Daniela; Černá, Dana; Finěk, Václav

2012-11-01

218

A comparison of 2 techniques for estimating deer density

We applied mark-resight and area-conversion methods to estimate deer abundance at a 2,862-ha area in and surrounding the Gettysburg National Military Park and Eisenhower National Historic Site during 1987-1991. One observer in each of 11 compartments counted marked and unmarked deer during 65-75 minutes at dusk during 3 counts in each of April and November. Use of radio-collars and vinyl collars provided a complete inventory of marked deer in the population prior to the counts. We sighted 54% of the marked deer during April 1987 and 1988, and 43% of the marked deer during November 1987 and 1988. Mean number of deer counted increased from 427 in April 1987 to 582 in April 1991, and increased from 467 in November 1987 to 662 in November 1990. Herd size during April, based on the mark-resight method, increased from approximately 700 to 1,400 between 1987 and 1991, whereas the estimates for November indicated an increase from 983 in 1987 to 1,592 in 1990. Given the large proportion of open area and the extensive road system throughout the study area, we concluded that the sighting probability for marked and unmarked deer was fairly similar. We believe that the mark-resight method was better suited to our study than the area-conversion method because deer were not evenly distributed between areas suitable and unsuitable for sighting within open and forested areas. The assumption of equal distribution is required by the area-conversion method. Deer marked for the mark-resight method also helped reduce double counting during the dusk surveys.
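The mark-resight arithmetic can be sketched as follows; the marked-deer total below is invented for illustration, while the 54% April sighting rate and the April 1987 count of 427 are taken from the abstract:

```python
# Mark-resight abundance sketch. The sighting probability is estimated from
# the radio-collared (marked) animals and then applied to the whole count.
marked_total = 100            # marked deer known alive (invented number)
marked_seen = 54              # marked deer sighted during the count
total_counted = 427           # all deer (marked + unmarked) counted

p_sight = marked_seen / marked_total        # estimated sighting probability
N_hat = total_counted / p_sight             # mark-resight abundance estimate

# Binomial variance of p, propagated to the abundance estimate (delta method)
var_p = p_sight * (1.0 - p_sight) / marked_total
se_N = total_counted * var_p ** 0.5 / p_sight ** 2
```

The estimate is only as good as the assumption that marked and unmarked deer share the same sighting probability, which is exactly the condition the authors argue held in their open, road-dense study area.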

Storm, G.L.; Cottam, D.F.; Yahner, R.H.; Nichols, J.D.

1977-01-01

219

Density Estimation with Confidence Sets Exemplified by Superclusters and Voids in the Galaxies

A method is presented for forming both a point estimate and a confidence set of semiparametric densities. The final product is a three-dimensional figure that displays a selection of density estimates for a plausible range of smoothing parameters. The boundaries of the smoothing parameter are determined by a nonparametric goodness-of-fit test that is based on the sample spacings. For each

Kathryn Roeder

1990-01-01

220

NASA Astrophysics Data System (ADS)

We introduce and prove local Wegner estimates for continuous generalized Anderson Hamiltonians, where the single-site random variables are independent but not necessarily identically distributed. In particular, we get Wegner estimates with a constant that goes to zero as we approach the bottom of the spectrum. As an application, we show that the (differentiated) density of states exhibits the same Lifshitz tails upper bound as the integrated density of states.

Combes, Jean-Michel; Germinet, François; Klein, Abel

2014-08-01

221

Density meter algorithm and system for estimating sampling/mixing uncertainty

The Laboratories Department at the Savannah River Plant (SRP) has installed a six-place density meter with an automatic sampling device. This paper describes the statistical software developed to analyze the density of uranyl nitrate solutions using this automated system. The purpose of this software is twofold: to estimate the sampling/mixing and measurement uncertainties in the process and to provide a measurement control program for the density meter. Non-uniformities in density are analyzed both analytically and graphically. The mean density and its limit of error are estimated. Quality control standards are analyzed concurrently with process samples and used to control the density meter measurement error. The analyses are corrected for concentration due to evaporation of samples waiting to be analyzed. The results of this program have been successful in identifying sampling/mixing problems and controlling the quality of analyses.
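One way to separate sampling/mixing uncertainty from measurement uncertainty, sketched as a one-way random-effects ANOVA on simulated vial data (all numeric values invented, not SRP data):

```python
import numpy as np

rng = np.random.default_rng(7)

# Each vial carries a sampling/mixing offset; each replicate reading of that
# vial adds measurement error. ANOVA moment estimators recover both variance
# components from the within- and between-vial mean squares.
n_vials, n_reps = 12, 4
sigma_mix, sigma_meas = 0.008, 0.003         # assumed "true" values, g/mL
true_density = 1.5500

vials = true_density + sigma_mix * rng.standard_normal(n_vials)
data = vials[:, None] + sigma_meas * rng.standard_normal((n_vials, n_reps))

vial_means = data.mean(axis=1)
ms_within = np.sum((data - vial_means[:, None]) ** 2) / (n_vials * (n_reps - 1))
ms_between = n_reps * np.sum((vial_means - data.mean()) ** 2) / (n_vials - 1)

var_meas_hat = ms_within                                   # measurement variance
var_mix_hat = max((ms_between - ms_within) / n_reps, 0.0)  # mixing variance
```

A large `var_mix_hat` relative to `var_meas_hat` flags a sampling/mixing problem rather than an instrument problem, which is the diagnostic the abstract describes.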

Shine, E P

1986-01-01

222

Estimating low-density snowshoe hare populations using fecal pellet counts

distribution. D. Roth, Ethan Ellsworth, Aaron J. Wirsing, and Todd D. Steury. Abstract: Snowshoe hare (Lepus americanus) populations found at high densities can be estimated using fecal pellet densities on rectangular

223

Estimation of Vibrational Frequencies and Vibrational Densities of States in Isotopically ... for obtaining the unknown vibration frequencies of the many asymmetric isotopomers of a molecule from those ... In a number of recent calculations, we have required the vibrational densities of states of isotopically ...

Hathorn, Bryan C.

224

The energy density of jellyfish: Estimates from bomb-calorimetry and proximate-composition

The energy density of jellyfish: Estimates from bomb-calorimetry and proximate-composition. Thomas K ... scyphozoan jellyfish (Cyanea capillata, Rhizostoma octopus and Chrysaora hysoscella). First, bomb ... These proximate data were subsequently converted to energy densities. The two techniques (bomb-calorimetry ...

Hays, Graeme

225

Recursive Estimation of the Preisach Density function for a Smart Actuator

Recursive Estimation of the Preisach Density function for a Smart Actuator. Ram V. Iyer, Department ... the approximate density function. These methods can be implemented in real-time controllers for smart actuators ... in ferromagnetism, or electric field intensity E and polarization P in ferroelectricity, is described by a hysteresis ...

Iyer, Ram Venkataraman

226

The role of multivariate skew-Student density in the estimation of stock market crashes

By combining the multivariate skew-Student density with a time-varying correlation GARCH (TVC-GARCH) model, this paper investigates the spread of crashes in the regional stock markets. The regional index series of European, USA, Latin American and Asian markets are modeled jointly, and the maximum likelihood estimates show that a TVC-GARCH model with multivariate skew-Student density outperforms that with multivariate normal density

Lei Wu; Qingbin Meng; Julio C. Velazquez

2012-01-01

227

Technology Transfer Automated Retrieval System (TEKTRAN)

Technical Summary Objectives: Determine the effect of body mass index (BMI) on the accuracy of body density (Db) estimated with skinfold thickness (SFT) measurements compared to air displacement plethysmography (ADP) in adults. Subjects/Methods: We estimated Db with SFT and ADP in 131 healthy men an...

228

Item Response Theory with Estimation of the Latent Density Using Davidian Curves

ERIC Educational Resources Information Center

Davidian-curve item response theory (DC-IRT) is introduced, evaluated with simulations, and illustrated using data from the Schedule for Nonadaptive and Adaptive Personality Entitlement scale. DC-IRT is a method for fitting unidimensional IRT models with maximum marginal likelihood estimation, in which the latent density is estimated,…

Woods, Carol M.; Lin, Nan

2009-01-01

229

A bound for the smoothing parameter in certain well-known nonparametric density estimators

NASA Technical Reports Server (NTRS)

Two classes of nonparametric density estimators, the histogram and the kernel estimator, both require a choice of smoothing parameter, or 'window width'. The optimum choice of this parameter is in general very difficult. An upper bound to the choices that depends only on the standard deviation of the distribution is described.

Terrell, G. R.

1980-01-01
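The bound described above depends only on the standard deviation of the distribution. A minimal sketch of such scale-based upper bounds is below; the specific constants (3.73 for a histogram bin width, 1.144 for a Gaussian-kernel bandwidth) are the commonly cited oversmoothing values and are assumptions here, not taken from the report itself.

```python
import math
import random

def oversmoothed_bandwidths(data):
    """Upper bounds on the smoothing parameter that depend only on the
    sample standard deviation, in the spirit of an oversmoothing bound.
    The constants are commonly cited values, used here illustratively."""
    n = len(data)
    mean = sum(data) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))
    return {
        # Bin width for a histogram: O(sigma * n^(-1/3))
        "histogram_bin_width": 3.73 * sd * n ** (-1 / 3),
        # Window width for a Gaussian kernel estimator: O(sigma * n^(-1/5))
        "kernel_bandwidth": 1.144 * sd * n ** (-1 / 5),
    }

random.seed(0)
sample = [random.gauss(0.0, 1.0) for _ in range(500)]
bounds = oversmoothed_bandwidths(sample)
```

Any data-driven bandwidth search can then be restricted to values below these bounds, which is the practical use of such a result.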

230

Nonparametric maximum likelihood estimation of probability densities by penalty function methods

NASA Technical Reports Server (NTRS)

Unless it is known a priori exactly to which finite dimensional manifold the probability density function giving rise to a set of samples belongs, the parametric maximum likelihood estimation procedure leads to poor estimates and is unstable, while the nonparametric maximum likelihood procedure is undefined. A very general theory of maximum penalized likelihood estimation which should avoid many of these difficulties is presented. It is demonstrated that each reproducing kernel Hilbert space leads, in a very natural way, to a maximum penalized likelihood estimator and that a well-known class of reproducing kernel Hilbert spaces gives polynomial splines as the nonparametric maximum penalized likelihood estimates.

Demontricher, G. F.; Tapia, R. A.; Thompson, J. R.

1974-01-01

231

How Bandwidth Selection Algorithms Impact Exploratory Data Analysis Using Kernel Density Estimation

selection: Classical or plug-in? The Annals of Statistics, 27(2), 415–438. Marmolejo-Ramos, F. & Matsunaga, M. (2009). Getting the most from your curves: Exploring and reporting data using informative graphical techniques. Tutorials in Quantitative Methods ... will be estimated. Let f(x) be the true probability density function (PDF) and f^(x; h) be the estimated PDF. The kernel density estimate of f(x) is f^(x; h) = n^(-1) h^(-1) sum_{i=1}^{n} K((x - X_i)/h), (1.1) where K is the kernel function that satisfies ∫ K(y) dy = 1, ∫ y ...

Harpole, Jared Kenneth

2013-05-31
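Equation (1.1) in the snippet above translates directly into code. A minimal sketch, assuming a Gaussian kernel (any K integrating to 1 would do) and toy data:

```python
import math

def gaussian_kernel(u):
    # Standard normal density; integrates to 1, as required of K.
    return math.exp(-0.5 * u * u) / math.sqrt(2 * math.pi)

def kde(x, data, h):
    """Kernel density estimate per equation (1.1):
    f^(x; h) = n^-1 h^-1 * sum_i K((x - X_i) / h)."""
    n = len(data)
    return sum(gaussian_kernel((x - xi) / h) for xi in data) / (n * h)

data = [-1.0, -0.5, 0.0, 0.5, 1.0]
density = kde(0.0, data, h=0.5)
```

The bandwidth h controls the bias-variance tradeoff that the selection algorithms compared in this work are designed to navigate.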

232

Accurate forecasting of pest density is essential for effective pest management. In this study, a simple image processing system that automatically estimated the density of whiteflies on sticky traps was developed. The estimated densities of samples in a laboratory and a greenhouse were in accordance with the actual values. The detection system was especially efficient when the whitefly densities were

Mu Qiao; Jaehong Lim; Chang Woo Ji; Bu-Keun Chung; Hwang-Yong Kim; Ki-Baik Uhm; Cheol Soo Myung; Jongman Cho; Tae-Soo Chon

2008-01-01

233

NASA Astrophysics Data System (ADS)

A temporal-spatial filtering algorithm based on a kernel density estimation structure is presented for background suppression in this paper. The algorithm can be divided into spatial filtering and temporal filtering. A smoothing process is applied to the background of an infrared image sequence by using the kernel density estimation algorithm in spatial filtering. The probability density of the image gray values after spatial filtering is calculated with the kernel density estimation algorithm in temporal filtering. The background residual and blind pixels are picked out based on their gray values and are further filtered. The algorithm is validated with a real infrared image sequence. The image sequence is processed using a Fuller kernel filter, a uniform kernel filter, and a high-pass filter. Quantitative analysis shows that the temporal-spatial filtering algorithm based on the nonparametric method is a satisfactory way to suppress background clutter in infrared images. The SNR is significantly improved as well.

Tian, Yuexin; Liu, Yinghui; Gao, Kun; Shu, Yuwen; Ni, Guoqiang

2014-11-01
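The temporal step above can be sketched per pixel: estimate the density of the pixel's gray-value history with a kernel estimator, then suppress values that are probable under that background density. This is a minimal sketch assuming a Gaussian (Parzen) kernel and hypothetical gray values; the paper's Fuller and uniform kernels and the spatial step are omitted.

```python
import math

def kde_prob(value, history, h=4.0):
    # Parzen estimate of the gray-value density at `value`
    # from this pixel's temporal history (Gaussian kernel).
    n = len(history)
    return sum(
        math.exp(-0.5 * ((value - g) / h) ** 2) / (h * math.sqrt(2 * math.pi))
        for g in history
    ) / n

def is_target(value, history, threshold=0.01):
    # A gray value that is improbable under the background density is
    # kept as a candidate target; probable values are suppressed.
    return kde_prob(value, history) < threshold

history = [100, 102, 98, 101, 99]  # a stable background pixel
```

The threshold is a free parameter here; in practice it would be tied to the desired false-alarm rate.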

234

Janssen created a classical theory based on calculus to estimate static vertical and horizontal pressures within beds of bulk corn. Even today, his equations are widely used to calculate static loadings imposed by granular materials stored in bins. Many standards, such as American Concrete Institute (ACI) 313, American Society of Agricultural and Biological Engineers EP 433, German DIN 1055, Canadian Farm Building Code (CFBC), European Code (ENV 1991-4), and Australian Code AS 3774, incorporated Janssen's equations as the standards for static load calculations on bins. One of the main drawbacks of Janssen's equations is the assumption that the bulk density of the stored product remains constant throughout the entire bin. While this is true for all practical purposes in small bins, in modern commercial-size bins the bulk density of grains increases substantially due to compressive and hoop stresses. Overpressure factors are applied to Janssen loadings to account for practical situations such as dynamic loads during bin filling and emptying, but few theoretical methods are available that include the effects of increased bulk density on the grain loads transmitted to the storage structures. This article develops a mathematical equation relating the specific weight to location and other variables of the material and storage. It was found that the bulk density of stored granular materials increases with depth according to a mathematical equation relating the two variables; applying this bulk-density function, Janssen's equations for vertical and horizontal pressures were modified as presented in this article. The validity of this specific weight function was tested using the principles of mathematics. 
As expected, loads calculated with the modified equations were consistently higher than the Janssen loadings based on noncompacted bulk densities for all grain depths and types, reflecting the effects of increased bulk density with bed height. PMID:24804024

Haque, Ekramul

2013-03-01
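The classical Janssen pressures that this article modifies can be computed directly. A minimal sketch of the constant-bulk-density baseline; the parameter values (wheat-like specific weight, friction coefficient, pressure ratio) are illustrative assumptions, not taken from the article:

```python
import math

def janssen_pressures(depth, gamma, mu, k, hydraulic_radius):
    """Classical Janssen static pressures for a constant bulk density.
    gamma: bulk specific weight (N/m^3); mu: grain-wall friction
    coefficient; k: lateral-to-vertical pressure ratio;
    hydraulic_radius: D/4 for a circular bin of diameter D."""
    a = mu * k / hydraulic_radius
    # Vertical pressure saturates at gamma/a instead of growing
    # hydrostatically, because wall friction carries part of the load.
    p_vertical = (gamma / a) * (1.0 - math.exp(-a * depth))
    p_horizontal = k * p_vertical
    return p_vertical, p_horizontal

# Illustrative case: 10 m diameter bin, 20 m grain depth.
pv, ph = janssen_pressures(depth=20.0, gamma=8000.0, mu=0.4, k=0.5,
                           hydraulic_radius=10.0 / 4.0)
```

The article's modification would replace the constant gamma with a depth-dependent bulk-density function before integrating, which raises the computed loads.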

235

Estimation of tiger densities in India using photographic captures and recaptures

Previously applied methods for estimating tiger (Panthera tigris) abundance using total counts based on tracks have proved unreliable. In this paper we use a field method proposed by Karanth (1995), combining camera-trap photography to identify individual tigers based on stripe patterns, with capture-recapture estimators. We developed a sampling design for camera-trapping and used the approach to estimate tiger population size and density in four representative tiger habitats in different parts of India. The field method worked well and provided data suitable for analysis using closed capture-recapture models. The results suggest the potential for applying this methodology for estimating abundances, survival rates and other population parameters in tigers and other low density, secretive animal species with distinctive coat patterns or other external markings. Estimated probabilities of photo-capturing tigers present in the study sites ranged from 0.75 to 1.00. The estimated mean tiger densities ranged from 4.1 (SE = 1.31) to 11.7 (SE = 1.93) tigers/100 km2. The results support the previous suggestions of Karanth and Sunquist (1995) that densities of tigers and other large felids may be primarily determined by prey community structure at a given site.

Karanth, U.; Nichols, J.D.

1998-01-01
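The logic of capture-recapture estimation from photo histories can be illustrated with the simplest two-sample case. This is a sketch using Chapman's bias-corrected Lincoln-Petersen estimator and hypothetical numbers; the study itself fitted full closed capture-recapture models to multi-occasion data, not this two-sample shortcut.

```python
def chapman_estimate(n1, n2, m2):
    """Chapman's bias-corrected Lincoln-Petersen estimator of closed
    population size: n1 animals identified in session 1, n2 in
    session 2, m2 identified in both."""
    return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1

# Hypothetical camera-trap data: 10 tigers photographed in each of two
# sessions, 6 of them photographed in both.
n_hat = chapman_estimate(10, 10, 6)
# Convert abundance to density for a hypothetical 250 km^2 sampled area.
density_per_100km2 = 100.0 * n_hat / 250.0
```

High recapture rates (as in the reported photo-capture probabilities of 0.75-1.00) are what make such estimators precise.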

236

Estimating detection and density of the Andean cat in the high Andes

The Andean cat (Leopardus jacobita) is one of the most endangered, yet least known, felids. Although the Andean cat is considered at risk of extinction, rigorous quantitative population studies are lacking. Because physical observations of the Andean cat are difficult to make in the wild, we used a camera-trapping array to photo-capture individuals. The survey was conducted in northwestern Argentina at an elevation of approximately 4,200 m during October–December 2006 and April–June 2007. In each year we deployed 22 pairs of camera traps, which were strategically placed. To estimate detection probability and density we applied models for spatial capture–recapture using a Bayesian framework. Estimated densities were 0.07 and 0.12 individual/km2 for 2006 and 2007, respectively. Mean baseline detection probability was estimated at 0.07. By comparison, densities of the Pampas cat (Leopardus colocolo), another poorly known felid that shares its habitat with the Andean cat, were estimated at 0.74–0.79 individual/km2 in the same study area for 2006 and 2007, and its detection probability was estimated at 0.02. Despite having greater detectability, the Andean cat is rarer in the study region than the Pampas cat. Properly accounting for the detection probability is important in making reliable estimates of density, a key parameter in conservation and management decisions for any species.

Reppucci, Juan; Gardner, Beth; Lucherini, Mauro

2011-01-01

237

Estimating detection and density of the Andean cat in the high Andes

The Andean cat (Leopardus jacobita) is one of the most endangered, yet least known, felids. Although the Andean cat is considered at risk of extinction, rigorous quantitative population studies are lacking. Because physical observations of the Andean cat are difficult to make in the wild, we used a camera-trapping array to photo-capture individuals. The survey was conducted in northwestern Argentina at an elevation of approximately 4,200 m during October-December 2006 and April-June 2007. In each year we deployed 22 pairs of camera traps, which were strategically placed. To estimate detection probability and density we applied models for spatial capture-recapture using a Bayesian framework. Estimated densities were 0.07 and 0.12 individual/km2 for 2006 and 2007, respectively. Mean baseline detection probability was estimated at 0.07. By comparison, densities of the Pampas cat (Leopardus colocolo), another poorly known felid that shares its habitat with the Andean cat, were estimated at 0.74-0.79 individual/km2 in the same study area for 2006 and 2007, and its detection probability was estimated at 0.02. Despite having greater detectability, the Andean cat is rarer in the study region than the Pampas cat. Properly accounting for the detection probability is important in making reliable estimates of density, a key parameter in conservation and management decisions for any species. © 2011 American Society of Mammalogists.

Reppucci, J.; Gardner, B.; Lucherini, M.

2011-01-01

238

Volumetric Breast Density Estimation from Full-Field Digital Mammograms: A Validation Study

Objectives To objectively evaluate automatic volumetric breast density assessment in Full-Field Digital Mammograms (FFDM) using measurements obtained from breast Magnetic Resonance Imaging (MRI). Material and Methods A commercially available method for volumetric breast density estimation on FFDM is evaluated by comparing volume estimates obtained from 186 FFDM exams including mediolateral oblique (MLO) and cranial-caudal (CC) views to objective reference standard measurements obtained from MRI. Results Volumetric measurements obtained from FFDM show high correlation with MRI data. Pearson’s correlation coefficients of 0.93, 0.97 and 0.85 were obtained for volumetric breast density, breast volume and fibroglandular tissue volume, respectively. Conclusions Accurate volumetric breast density assessment is feasible in Full-Field Digital Mammograms and has potential to be used in objective breast cancer risk models and personalized screening. PMID:24465808

Gubern-Mérida, Albert; Kallenberg, Michiel; Platel, Bram; Mann, Ritse M.; Martí, Robert; Karssemeijer, Nico

2014-01-01

239

Fast and accurate probability density estimation in large high dimensional astronomical datasets

NASA Astrophysics Data System (ADS)

Astronomical surveys will generate measurements of hundreds of attributes (e.g. color, size, shape) on hundreds of millions of sources. Analyzing these large, high dimensional data sets will require efficient algorithms for data analysis. An example of this is probability density estimation that is at the heart of many classification problems such as the separation of stars and quasars based on their colors. Popular density estimation techniques use binning or kernel density estimation. Kernel density estimation has a small memory footprint but often requires large computational resources. Binning has small computational requirements but usually binning is implemented with multi-dimensional arrays which leads to memory requirements which scale exponentially with the number of dimensions. Hence both techniques do not scale well to large data sets in high dimensions. We present an alternative approach of binning implemented with hash tables (BASH tables). This approach uses the sparseness of data in the high dimensional space to ensure that the memory requirements are small. However hashing requires some extra computation so a priori it is not clear if the reduction in memory requirements will lead to increased computational requirements. Through an implementation of BASH tables in C++ we show that the additional computational requirements of hashing are negligible. Hence this approach has small memory and computational requirements. We apply our density estimation technique to photometric selection of quasars using non-parametric Bayesian classification and show that the accuracy of the classification is the same as the accuracy of earlier approaches. Since the BASH table approach is one to three orders of magnitude faster than the earlier approaches it may be useful in various other applications of density estimation in astrostatistics.

Gupta, Pramod; Connolly, Andrew J.; Gardner, Jeffrey P.

2015-01-01
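The core idea, binning with a hash table so that only occupied cells consume memory, is compact enough to sketch. A minimal Python illustration of the approach (the paper's implementation is in C++, and this sketch's bin-key scheme is an assumption):

```python
from collections import defaultdict

def bash_table(points, bin_width):
    """Binning via a hash table: only occupied bins are stored, so
    memory scales with the number of occupied cells rather than
    exponentially with the number of dimensions."""
    table = defaultdict(int)
    for p in points:
        key = tuple(int(c // bin_width) for c in p)  # multi-dim bin index
        table[key] += 1
    return table

def density(table, point, bin_width, n):
    # Histogram density estimate: count in the point's bin, normalized
    # by sample size and bin volume. Empty bins cost no memory.
    key = tuple(int(c // bin_width) for c in point)
    return table.get(key, 0) / (n * bin_width ** len(point))

points = [(0.1, 0.2), (0.15, 0.25), (0.9, 0.9), (2.0, 2.0)]
table = bash_table(points, bin_width=0.5)
```

With a dense multi-dimensional array, the same grid would allocate every cell in the bounding box; the hash table stores only the three occupied ones here.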

240

A Nested Kernel Density Estimator for Improved Characterization of Precipitation Extremes

NASA Astrophysics Data System (ADS)

The number and intensity of short-term precipitation extremes has recently been a topic of much interest, with record-setting events occurring in the United States, Europe, Asia, and Australia. These events show the importance of characterizing the behavior of short-term (daily and sub-daily) precipitation intensity so as to properly understand and predict the occurrence and magnitude of extreme precipitation events. One such characterization method is the use of kernel density estimators, which avoid parametric assumptions, and can therefore uncover complex properties such as multimodality. State-of-the-art kernel density estimators have two major recognized drawbacks, however. The first is that kernel density estimators that use unbounded kernels cannot enforce the fact that precipitation is strictly non-negative, because they are subject to 'probability leakage' at the boundary. The second is that they tend to produce artificially spurious fluctuations in the tail of the distribution. To resolve these problems, we present here a nested transformation kernel density estimator, consisting of one or two transformation steps. The first step corrects the skewness of the precipitation distribution, which is the dominant distributional feature of short-term precipitation. Depending on the complexity of the transformed data, the next step is to determine whether further correction is needed. If so, an additional skewness correction or a kurtosis correction is implemented, depending on which of these is the dominant remaining feature. The conventional kernel density estimator is used to estimate the density of the transformed data, which is then back transformed into the original space. We evaluate this method using daily precipitation records from 1,217 stations across the continental United States, and compare its performance with other commonly used nonparametric and parametric methods. 
The presented method represents an improvement over existing ones in more accurately characterizing the behavior of precipitation extremes without strict parametric assumptions, while also being computationally tractable for large datasets.

Li, C.; Michalak, A. M.

2013-12-01
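The transform-estimate-back-transform pattern described above can be sketched with a single step. This is a minimal illustration assuming a log transform as the skewness correction (the authors choose their transforms adaptively, and may apply a second step); the density is mapped back with the Jacobian 1/x, so the estimate is exactly zero for non-positive values and no probability leaks across the boundary.

```python
import math
import random

def gaussian_kde(x, data, h):
    # Conventional Gaussian kernel density estimate.
    n = len(data)
    return sum(math.exp(-0.5 * ((x - xi) / h) ** 2)
               for xi in data) / (n * h * math.sqrt(2 * math.pi))

def transformed_kde(x, data, h):
    """One-step transformation KDE for non-negative data: estimate the
    density of log(data), then back-transform with the Jacobian 1/x."""
    if x <= 0:
        return 0.0  # non-negativity enforced exactly: no leakage
    logged = [math.log(v) for v in data]
    return gaussian_kde(math.log(x), logged, h) / x

random.seed(1)
# Skewed, strictly positive synthetic "precipitation" sample.
rain = [math.exp(random.gauss(0.0, 1.0)) for _ in range(300)]
f_at_1 = transformed_kde(1.0, rain, h=0.3)
```

A conventional KDE applied directly to `rain` would put positive density at negative values; the transformed estimator cannot.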

241

Trap array configuration influences estimates and precision of black bear density and abundance.

Spatial capture-recapture (SCR) models have advanced our ability to estimate population density for wide ranging animals by explicitly incorporating individual movement. Though these models are more robust to various spatial sampling designs, few studies have empirically tested different large-scale trap configurations using SCR models. We investigated how extent of trap coverage and trap spacing affects precision and accuracy of SCR parameters, implementing models using the R package secr. We tested two trapping scenarios, one spatially extensive and one intensive, using black bear (Ursus americanus) DNA data from hair snare arrays in south-central Missouri, USA. We also examined the influence that adding a second, lower barbed-wire strand to snares had on quantity and spatial distribution of detections. We simulated trapping data to test bias in density estimates of each configuration under a range of density and detection parameter values. Field data showed that using multiple arrays with intensive snare coverage produced more detections of more individuals than extensive coverage. Consequently, density and detection parameters were more precise for the intensive design. Density was estimated as 1.7 bears per 100 km2 and was 5.5 times greater than that under extensive sampling. Abundance was 279 (95% CI = 193-406) bears in the 16,812 km2 study area. Excluding detections from the lower strand resulted in the loss of 35 detections, 14 unique bears, and the largest recorded movement between snares. All simulations showed low bias for density under both configurations. Results demonstrated that in low density populations with non-uniform distribution of population density, optimizing the tradeoff among snare spacing, coverage, and sample size is of critical importance to estimating parameters with high precision and accuracy. 
With limited resources, allocating available traps to multiple arrays with intensive trap spacing increased the amount of information needed to inform parameters with high precision. PMID:25350557

Wilton, Clay M; Puckett, Emily E; Beringer, Jeff; Gardner, Beth; Eggert, Lori S; Belant, Jerrold L

2014-01-01

242

Turbulence motions are, by nature, three-dimensional, while planar imaging techniques widely used in turbulent combustion give access only to two-dimensional information. For example, extracting flame surface densities, a key ingredient of some turbulent combustion models, from planar images implicitly assumes an instantaneously two-dimensional flow, neglecting the unresolved flame front wrinkling. The objective here is to estimate flame surface densities

Denis Veynante; Guido Lodato; Pascale Domingo; Luc Vervisch; Evatt R. Hawkes

2010-01-01

243

Trap Array Configuration Influences Estimates and Precision of Black Bear Density and Abundance

Spatial capture-recapture (SCR) models have advanced our ability to estimate population density for wide ranging animals by explicitly incorporating individual movement. Though these models are more robust to various spatial sampling designs, few studies have empirically tested different large-scale trap configurations using SCR models. We investigated how extent of trap coverage and trap spacing affects precision and accuracy of SCR parameters, implementing models using the R package secr. We tested two trapping scenarios, one spatially extensive and one intensive, using black bear (Ursus americanus) DNA data from hair snare arrays in south-central Missouri, USA. We also examined the influence that adding a second, lower barbed-wire strand to snares had on quantity and spatial distribution of detections. We simulated trapping data to test bias in density estimates of each configuration under a range of density and detection parameter values. Field data showed that using multiple arrays with intensive snare coverage produced more detections of more individuals than extensive coverage. Consequently, density and detection parameters were more precise for the intensive design. Density was estimated as 1.7 bears per 100 km2 and was 5.5 times greater than that under extensive sampling. Abundance was 279 (95% CI = 193-406) bears in the 16,812 km2 study area. Excluding detections from the lower strand resulted in the loss of 35 detections, 14 unique bears, and the largest recorded movement between snares. All simulations showed low bias for density under both configurations. Results demonstrated that in low density populations with non-uniform distribution of population density, optimizing the tradeoff among snare spacing, coverage, and sample size is of critical importance to estimating parameters with high precision and accuracy. 
With limited resources, allocating available traps to multiple arrays with intensive trap spacing increased the amount of information needed to inform parameters with high precision. PMID:25350557

Wilton, Clay M.; Puckett, Emily E.; Beringer, Jeff; Gardner, Beth; Eggert, Lori S.; Belant, Jerrold L.

2014-01-01

244

The accurate quantitation of high density lipoproteins has recently assumed greater importance in view of studies suggesting their negative correlation with coronary heart disease. High density lipoproteins may be estimated by measuring cholesterol in the plasma fraction of d > 1.063 g/ml. A more practical approach is the specific precipitation of apolipoprotein B (apoB)-containing lipoproteins by sulfated

G. Russell Warnick; John J. Albers

245

Gas/Liquid Two-Phase Flow Regime Recognition Based on Adaptive Wavelet-Based Neural Network

Flow regime recognition of two-phase flow is of great importance in industrial processes. In this paper, a new method is put forward to recognize the gas/liquid two-phase flow regime. The input to the method is the measured data in a horizontal pipe provided by electrical resistance tomography (ERT). A new adaptive wavelet-based neural network was introduced and it combines the

Jun Han; Feng Dong; Yaoyuan Xu

2008-01-01

246

Propithecus coquereli is one of the last sifaka species for which no reliable and extensive density estimates are yet available. Despite its endangered conservation status [IUCN, 2012] and recognition as a flagship species of the northwestern dry forests of Madagascar, its population in its last main refugium, the Ankarafantsika National Park (ANP), is still poorly known. Using line transect distance sampling surveys we estimated population density and abundance in the ANP. Furthermore, we investigated the effects of road, forest edge, river proximity and group size on sighting frequencies, and density estimates. We provide here the first population density estimates throughout the ANP. We found that density varied greatly among surveyed sites (from 5 to ~100 ind/km2), which could result from significant (negative) effects of road and forest edge, and/or a (positive) effect of river proximity. Our results also suggest that the population size may be ~47,000 individuals in the ANP, hinting that the population likely underwent a strong decline in some parts of the Park in recent decades, possibly caused by habitat loss from fires and charcoal production and by poaching. We suggest community-based conservation actions for the largest remaining population of Coquerel's sifaka which will (i) maintain forest connectivity; (ii) implement alternatives to deforestation through charcoal production, logging, and grass fires; (iii) reduce poaching; and (iv) enable long-term monitoring of the population in collaboration with local authorities and researchers. PMID:24443250

Kun-Rodrigues, Célia; Salmona, Jordi; Besolo, Aubin; Rasolondraibe, Emmanuel; Rabarivola, Clément; Marques, Tiago A; Chikhi, Lounès

2014-06-01
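The line transect distance sampling estimator used in studies like this can be sketched in a few lines. This is a minimal illustration assuming a half-normal detection function with a known scale parameter and hypothetical survey numbers; in a real analysis the scale is fitted to the perpendicular sighting distances.

```python
import math

def halfnormal_esw(sigma_km):
    # Effective strip half-width mu for a half-normal detection
    # function g(x) = exp(-x^2 / (2 sigma^2)):
    # mu = integral_0^inf g(x) dx = sigma * sqrt(pi / 2).
    return sigma_km * math.sqrt(math.pi / 2.0)

def density_estimate(n_groups, mean_group_size, line_length_km, sigma_km):
    """Conventional line-transect estimate D = n * s / (2 * L * mu):
    n detected groups of mean size s along total transect length L,
    with effective strip half-width mu."""
    mu = halfnormal_esw(sigma_km)
    return n_groups * mean_group_size / (2.0 * line_length_km * mu)

# Hypothetical survey: 40 groups of mean size 5 along 100 km of
# transects, with a 40 m detection scale (sigma = 0.04 km).
d_hat = density_estimate(40, 5.0, 100.0, 0.04)  # individuals per km^2
```

Covariates such as road or edge proximity would enter by letting the detection scale or encounter rate vary with them, which is the kind of effect the study above examined.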

247

Mid-latitude Ionospheric Storms Density Gradients, Winds, and Drifts Estimated from GPS TEC Imaging

NASA Astrophysics Data System (ADS)

Ionospheric storm processes at mid-latitudes stand in stark contrast to the typical quiescent behavior. Storm enhanced density (SED) on the dayside affects continent-sized regions horizontally and is often associated with a plume that extends poleward and upward into the nightside. One proposed cause of this behavior is the sub-auroral polarization stream (SAPS) acting on the SED, and neutral wind effects. The electric field and its effect connecting mid-latitude and polar regions are just beginning to be understood and modeled. Another possible coupling effect is due to neutral winds, particularly those generated at high latitudes by joule heating effects. Of particular interest are electric fields and winds along the boundaries of the SED and plume, because these may be at least partly a cause of sharp horizontal electron density gradients. Thus, it is important to understand what bearing the drifts and winds, and any spatial variations in them (e.g., shear), have on the structure of the enhancement, particularly at its boundaries. Imaging techniques based on GPS TEC play a significant role in study of mid-latitude storm dynamics, particularly at mid-latitudes, where sampling of the ionosphere with ground-based GPS lines of sight is most dense. Ionospheric Data Assimilation 4-Dimensional (IDA4D) is a plasma density estimation algorithm that has been used in a number of scientific investigations over several years. Recently, efforts to estimate drivers of the mid-latitude ionosphere, focusing on electric-field-induced drifts and neutral winds, based on GPS TEC high-resolution imaging have shown promise. Estimating Ionospheric Parameters from Ionospheric Reverse Engineering (EMPIRE) is a tool developed that addresses this kind of investigation. In this work electron density and driver estimates are presented for an ionospheric storm using IDA4D in conjunction with EMPIRE. 
The IDA4D estimates resolve F-region electron densities at 1-degree resolution at the region of passage of the SED and associated plume. High-resolution imaging is used in conjunction with EMPIRE to deduce the dominant drivers. Starting with a baseline Weimer 2001 electric potential model, adjustments to the Weimer model are estimated for the given storm based on the IDA4D-derived densities to show electric fields associated with the plume. These regional densities and drivers are compared to CHAMP and DMSP data that are proximal for validation. Gradients in electron density are numerically computed over the 1-degree region. These density gradients are correlated with the drift estimates to identify a possible causal relationship in the formation of the boundaries of the SED.

Datta-Barua, S.; Bust, G. S.

2012-12-01

248

Distributed Noise Generation for Density Estimation Based Clustering without Trusted Third Party

NASA Astrophysics Data System (ADS)

The rapid growth of the Internet provides people with tremendous opportunities for data collection, knowledge discovery and cooperative computation. However, it also brings the problem of sensitive information leakage. Both individuals and enterprises may suffer from the massive data collection and the information retrieval by distrusted parties. In this paper, we propose a privacy-preserving protocol for distributed kernel density estimation-based clustering. Our scheme applies the random data perturbation (RDP) technique and verifiable secret sharing to solve the security problem of the distributed kernel density estimation in [4], which assumed a mediating party to help in the computation.

Su, Chunhua; Bao, Feng; Zhou, Jianying; Takagi, Tsuyoshi; Sakurai, Kouichi

249

Revisiting multifractality of high resolution temporal rainfall using a wavelet-based formalism

NASA Astrophysics Data System (ADS)

We re-examine the scaling structure of temporal rainfall using wavelet-based methodologies which offer important advantages compared to the more traditional multifractal approaches such as box counting and structure function techniques. In particular, we explore two methods based on the Continuous Wavelet Transform (CWT) and the Wavelet Transform Modulus Maxima (WTMM): the partition function method and the newer and more efficient magnitude cumulant analysis method. We also explore a two-point magnitude correlation analysis which is able to infer the presence or absence of multiplicativity as the underlying mechanism of scaling. The diagnostic power of these methodologies for small samples, signals with short ranges of scaling, and signals for which high frequency fluctuations are superimposed on a low-frequency component (all common attributes of geophysical signals) is carefully documented. Application of these methodologies to several midwestern convective storms sampled every 5 seconds over several hours provides new insights. They reveal the presence of a very intermittent multifractal structure (a wide spectrum of singularities) in rainfall fluctuations between the scales of 5 minutes and the storm pulse duration of 1-2 hours. The two-point magnitude statistical analysis suggests that this structure is associated with a local multiplicative cascading mechanism which applies only within storm pulses but not over the whole storm duration.

Foufoula-Georgiou, E.; Venugopal, V.; Roux, S. G.; Arneodo, A.

2005-12-01

250

Revisiting multifractality of high-resolution temporal rainfall using a wavelet-based formalism

NASA Astrophysics Data System (ADS)

We reexamine the scaling structure of temporal rainfall using wavelet-based methodologies which, as we demonstrate, offer important advantages compared to the more traditional multifractal approaches such as box counting and structure function techniques. In particular, we explore two methods based on the Continuous Wavelet Transform (CWT) and the Wavelet Transform Modulus Maxima (WTMM): the partition function method and the newer and more efficient magnitude cumulant analysis method. We also report the results of a two-point magnitude correlation analysis which is able to infer the presence or absence of multiplicativity as the underlying mechanism of scaling. The diagnostic power of these methodologies for small samples, signals with short ranges of scaling, and signals for which high-frequency fluctuations are superimposed on a low-frequency component (all common attributes of geophysical signals) is carefully documented. Application of these methodologies to several midwestern convective storms sampled every 5 s over several hours provides new insights. They reveal the presence of a very intermittent multifractal structure (a wide spectrum of singularities) in rainfall fluctuations between the scales of 5 min and the storm pulse duration (of the order of 1-2 hours for the analyzed storms). The two-point magnitude statistical analysis suggests that this structure is consistent with a multiplicative cascading mechanism which however is local in nature; that is, it applies only within each storm pulse but not over the whole storm duration.
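For readers unfamiliar with the traditional techniques these papers improve upon, a minimal structure-function scaling estimate can be sketched on a Brownian-motion surrogate; the signal and lag range below are illustrative assumptions, not rainfall data.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.cumsum(rng.standard_normal(2**14))   # Brownian-motion surrogate (H = 0.5)

# Second-order structure function S2(tau) = <|x(t+tau) - x(t)|^2> ~ tau^(2H)
taus = np.array([1, 2, 4, 8, 16, 32, 64])
s2 = np.array([np.mean((x[t:] - x[:-t])**2) for t in taus])

# The log-log slope estimates 2H; for Brownian motion it should be near 1.
slope = np.polyfit(np.log(taus), np.log(s2), 1)[0]
print(round(slope, 2))
```

The wavelet-based methods in the abstract generalize this: wavelet coefficients replace increments, which handles polynomial trends and gives access to the full singularity spectrum rather than a single exponent.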

Venugopal, V.; Roux, StéPhane G.; Foufoula-Georgiou, Efi; Arneodo, Alain

2006-06-01

251

A new approach to pre-processing digital image for wavelet-based watermark

NASA Astrophysics Data System (ADS)

The growth of the Internet has increased the phenomenon of digital piracy of multimedia objects such as software, images, video, audio, and text. It is therefore strategic to identify and develop stable, computationally inexpensive methods and numerical algorithms that allow us to address these problems. We describe a robust, non-blind, wavelet-based digital watermarking algorithm for color image protection and authentication. The use of the Discrete Wavelet Transform is motivated by its good time-frequency features and good match with Human Visual System directives. These two combined elements are important for building an invisible and robust watermark. Moreover, our algorithm can work with any image, thanks to a pre-processing step that resizes the original image to fit the wavelet transform. The watermark signal is calculated in correlation with the image features and statistical properties. In the detection step we apply a re-synchronization between the original and watermarked image according to the Neyman-Pearson statistical criterion. Experimentation on a large set of different images has shown the scheme to be resistant against geometric, filtering, and StirMark attacks with a low false-alarm rate.
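A non-blind additive watermarking scheme of this general family can be sketched using 2×2 block means as a stand-in for a coarse wavelet (LL) subband. The embedding strength `alpha` and the block scheme are illustrative assumptions, not the algorithm of the paper.

```python
import numpy as np

def block_means(img):
    """Mean of each non-overlapping 2x2 block (a stand-in for the Haar LL subband)."""
    return 0.25 * (img[0::2, 0::2] + img[0::2, 1::2] + img[1::2, 0::2] + img[1::2, 1::2])

rng = np.random.default_rng(2)
image = rng.uniform(0, 255, size=(64, 64))
watermark = rng.choice([-1.0, 1.0], size=(32, 32))   # one bit per 2x2 block
alpha = 2.0                                          # embedding strength

# Embed: shift every pixel of a block by alpha * bit (raises/lowers the block mean).
marked = image + alpha * np.repeat(np.repeat(watermark, 2, axis=0), 2, axis=1)

# Non-blind extraction: compare block means of the marked and original images.
recovered = np.sign(block_means(marked) - block_means(image))
print((recovered == watermark).mean())
```

Non-blind detection (requiring the original image) makes extraction trivial here; the hard part in practice, addressed by the paper, is surviving geometric attacks via re-synchronization.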

Agreste, Santa; Andaloro, Guido

2008-11-01

252

The FlexWave-ll: a wavelet-based compression engine

NASA Astrophysics Data System (ADS)

The FlexWave-II has been developed as a dedicated image compression component for spaceborne applications, enabling a multitude of application scenarios, including lossless and lossy compression. The FlexWave-II provides scalable compression, allowing gradual enhancement or degradation of the image quality in a programmable way. A wavelet-based compression scheme has been selected because of its intrinsic scalability. Moreover, the compression criteria can be tuned separately for optimal measurement and visual data compression. The FlexWave-II provides full scalability features and high processing performance, and supports push-broom image processing. The wavelet transform engine is capable of computing up to 5 levels of wavelet transform with 5/3-, 9/3- or 9/7-tap wavelet filters, for image sizes as large as 1k×1k pixels. On an FPGA implementation clocked at 41 MHz, a processing performance of up to 10 Mpixels/second was measured. The wavelet compression engine allows two compression modes: a fixed compression ratio mode optimised for user-defined criteria and a fixed quantisation mode with user-defined quantisation tables.

Vanhoof, B.; Chirila-Rus, A.; Masschelein, B.; Osorio, R.

2002-12-01

253

Interturn fault diagnosis of induction machines has been discussed using various neural network-based techniques. The main challenges in such methods are the computational complexity due to the huge size of the network and the pruning of a large number of parameters. In this paper, a nearly shift-insensitive complex wavelet-based probabilistic neural network (PNN) model, which has only a single parameter to be optimized, is proposed for interturn fault detection. The algorithm consists of two parts and runs in an iterative way. In the first part, the PNN structure determination is discussed, which finds the optimum size of the network using an orthogonal least squares regression algorithm, thereby reducing its size. In the second part, a Bayesian classifier fusion is recommended as an effective solution for deciding the machine condition. The testing accuracy, sensitivity, and specificity values are highest for the product rule-based fusion scheme, which is obtained under load, supply, and frequency variations. The point of overfitting of the PNN is determined, which reduces the size without compromising performance. Moreover, a comparative evaluation against a traditional discrete wavelet transform-based method is presented to place the obtained results in context. PMID:24808044

Seshadrinath, Jeevanand; Singh, Bhim; Panigrahi, Bijaya Ketan

2014-05-01

254

A study on discrete wavelet-based noise removal from EEG signals.

The electroencephalogram (EEG) serves as an extremely valuable tool for clinicians and researchers to study the activity of the brain in a non-invasive manner. It has long been used for the diagnosis of various central nervous system disorders like seizures, epilepsy, and brain damage, and for categorizing sleep stages in patients. The artifacts caused by various factors, such as the electrooculogram (EOG), eye blinks, and the electromyogram (EMG), increase the difficulty of analyzing the EEG signal. The discrete wavelet transform has been applied in this research for removing noise from the EEG signal. The effectiveness of the noise removal is quantitatively measured using the Root Mean Square (RMS) difference. This paper reports on the effectiveness of the wavelet transform applied to the EEG signal as a means of removing noise to retrieve important information related to both healthy and epileptic patients. Wavelet-based noise removal on the EEG signals of both healthy and epileptic subjects was performed using four discrete wavelet functions. With the appropriate choice of wavelet function (WF), it is possible to remove noise effectively and analyze the EEG meaningfully. The results of this study show that the WF Daubechies 8 (db8) provides the best noise removal from the raw EEG signal of healthy patients, while the orthogonal Meyer WF does the same for epileptic patients. This algorithm is intended for FPGA implementation in portable biomedical equipment to detect different brain states in different circumstances. PMID:20865544
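A minimal wavelet-denoising sketch in the same spirit, using a hand-rolled orthonormal Haar transform and the universal soft threshold; the piecewise-constant test signal and noise level are illustrative assumptions, not EEG data or the db8/Meyer wavelets of the paper.

```python
import numpy as np

def haar_level(x):
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail coefficients
    return a, d

def inv_haar_level(a, d):
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def soft(c, t):
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

rng = np.random.default_rng(3)
n = 1024
clean = np.repeat([0.0, 4.0, -2.0, 3.0], n // 4)   # piecewise-constant "signal"
noisy = clean + rng.normal(0.0, 0.5, size=n)

# Two-level Haar decomposition; universal threshold sigma*sqrt(2 ln n) on details.
a1, d1 = haar_level(noisy)
a2, d2 = haar_level(a1)
t = 0.5 * np.sqrt(2 * np.log(n))
denoised = inv_haar_level(inv_haar_level(a2, soft(d2, t)), soft(d1, t))

rms = lambda e: np.sqrt(np.mean(e**2))
print(rms(noisy - clean), rms(denoised - clean))
```

Soft-thresholding the detail coefficients removes most of the noise energy while the approximation coefficients preserve the signal shape, which is the mechanism the paper's RMS-difference comparison quantifies.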

Asaduzzaman, K; Reaz, M B I; Mohd-Yasin, F; Sim, K S; Hussain, M S

2010-01-01

255

Wavelet-based double-difference seismic tomography with sparsity regularization

NASA Astrophysics Data System (ADS)

We have developed a wavelet-based double-difference (DD) seismic tomography method. Instead of solving for the velocity model itself, the new method inverts for its wavelet coefficients in the wavelet domain. This method takes advantage of the multiscale property of the wavelet representation and solves the model at different scales. A sparsity constraint is applied to the inversion system to make the set of wavelet coefficients of the velocity model sparse. This considers the fact that the background velocity variation is generally smooth and the inversion proceeds in a multiscale way with larger scale features resolved first and finer scale features resolved later, which naturally leads to the sparsity of the wavelet coefficients of the model. The method is both data- and model-adaptive because wavelet coefficients are non-zero in the regions where the model changes abruptly when they are well sampled by ray paths and the model is resolved from coarser to finer scales. An iteratively reweighted least squares procedure is adopted to solve the inversion system with the sparsity regularization. A synthetic test for an idealized fault zone model shows that the new method can better resolve the discontinuous boundaries of the fault zone and the velocity values are also better recovered compared to the original DD tomography method that uses the first-order Tikhonov regularization.
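The iteratively reweighted least squares idea for sparsity regularization can be sketched on a toy linear system; the matrix sizes, `lam`, and iteration count are illustrative assumptions, not the tomography setup.

```python
import numpy as np

def irls_l1(A, b, lam=1e-3, iters=50, eps=1e-8):
    """IRLS for L1-regularized inversion: minimize ||Ax - b||^2 + lam * ||x||_1.
    The L1 term is approximated each iteration by a weighted quadratic,
    with weights 1/(|x_i| + eps), i.e. |x_i| ~ x_i^2 / |x_i|."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    for _ in range(iters):
        W = np.diag(1.0 / (np.abs(x) + eps))
        x = np.linalg.solve(A.T @ A + lam * W, A.T @ b)
    return x

rng = np.random.default_rng(9)
A = rng.standard_normal((30, 10))
x_true = np.zeros(10)
x_true[2], x_true[7] = 1.5, -1.0        # sparse "wavelet coefficients"
b = A @ x_true
x_hat = irls_l1(A, b)
print(np.round(x_hat, 3))
```

The reweighting penalizes small coefficients ever more strongly, driving them toward zero while leaving large coefficients nearly untouched; in the paper the unknowns are the wavelet coefficients of the velocity model rather than a generic vector.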

Fang, Hongjian; Zhang, Haijiang

2014-11-01

256

A Wavelet-based Fast Discrimination of Transformer Magnetizing Inrush Current

NASA Astrophysics Data System (ADS)

Recently, customers who need electricity of higher quality have been installing co-generation facilities. They can avoid voltage sags and other distribution system related disturbances by supplying electricity to important loads from their own generators. As another example, FRIENDS, a highly reliable distribution system using semiconductor switches and storage devices based on power electronics technology, has been proposed. These examples illustrate that the demand for high reliability in distribution systems is increasing. In order to realize such systems, fast relaying algorithms are indispensable. The author proposes a new method of detecting magnetizing inrush current using the discrete wavelet transform (DWT). The DWT provides the function of detecting discontinuities in the current waveform. Inrush current occurs when the transformer core becomes saturated. The proposed method detects spikes of DWT components derived from the discontinuity of the current waveform at both the beginning and the end of the inrush current. Wavelet thresholding, one of the wavelet-based statistical modeling techniques, was applied to detect the DWT component spikes. The proposed method is verified using experimental data from a single-phase transformer and is shown to be effective.
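The core detection idea, a spike in the level-1 DWT detail coefficients at a waveform discontinuity, can be sketched with a Haar detail filter; the test waveform and the abrupt offset are illustrative stand-ins for a real inrush transient.

```python
import numpy as np

t = np.arange(1024)
current = np.sin(2 * np.pi * t / 128)   # steady-state current waveform
current[501:] += 2.5                    # abrupt offset starting at sample 501

# Level-1 Haar detail coefficients spike where the waveform is discontinuous:
# for a smooth sine they are tiny, at the jump they are O(jump size).
d = (current[0::2] - current[1::2]) / np.sqrt(2)
k = np.argmax(np.abs(d))
print(2 * k)                            # sample index of the detected discontinuity
```

In practice a threshold on |d| (e.g. wavelet thresholding, as in the paper) would replace the bare argmax, so that both the onset and the end of the inrush interval are flagged.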

Kitayama, Masashi

257

Performance evaluation of wavelet-based face verification on a PDA recorded database

NASA Astrophysics Data System (ADS)

The rise of international terrorism and the rapid increase in fraud and identity theft have added urgency to the task of developing biometric-based person identification as a reliable alternative to conventional authentication methods. Human identification based on face images is a tough challenge in comparison to identification based on fingerprints or iris recognition. Yet, due to its unobtrusive nature, face recognition is the preferred method of identification for security-related applications. The success of such systems will depend on the support of massive infrastructures. Current mobile communication devices (3G smart phones) and PDAs are equipped with a camera which can capture both still images and streaming video clips, and a touch-sensitive display panel. Besides convenience, such devices provide an adequately secure infrastructure for sensitive and financial transactions, by protecting against fraud and repudiation while ensuring accountability. Biometric authentication systems for mobile devices would have obvious advantages in conflict scenarios when communication from beyond enemy lines is essential to save soldier and civilian lives. In areas of conflict or disaster the luxury of fixed infrastructure is not available or is destroyed. In this paper, we present a wavelet-based face verification scheme that has been specifically designed and implemented on a currently available PDA. We report on its performance on the benchmark audio-visual BANCA database and on a newly developed PDA-recorded audio-visual database that includes indoor and outdoor recordings.

Sellahewa, Harin; Jassim, Sabah A.

2006-05-01

258

Breast cancer is the most common type of cancer among women and despite recent advances in the medical field, there are still some inherent limitations in the currently used screening techniques. The radiological interpretation of screening X-ray mammograms often leads to over-diagnosis and, as a consequence, to unnecessary traumatic and painful biopsies. Here we propose a computer-aided multifractal analysis of dynamic infrared (IR) imaging as an efficient method for identifying women with risk of breast cancer. Using a wavelet-based multi-scale method to analyze the temporal fluctuations of breast skin temperature collected from a panel of patients with diagnosed breast cancer and some female volunteers with healthy breasts, we show that the multifractal complexity of temperature fluctuations observed in healthy breasts is lost in mammary glands with malignant tumor. Besides potential clinical impact, these results open new perspectives in the investigation of physiological changes that may precede anatomical alterations in breast cancer development. PMID:24860510

Gerasimova, Evgeniya; Audit, Benjamin; Roux, Stephane G; Khalil, André; Gileva, Olga; Argoul, Françoise; Naimark, Oleg; Arneodo, Alain

2014-01-01

259

With the growing aging population, a significant portion of which suffers from cardiac diseases, it is conceivable that remote ECG patient monitoring systems will be widely used as Point-of-Care (PoC) applications in hospitals around the world. Huge amounts of ECG data collected by Body Sensor Networks (BSNs) from remote patients at home will therefore be transmitted along with other physiological readings such as blood pressure, temperature, and glucose level, and diagnosed by those remote patient monitoring systems. It is vitally important that patient confidentiality be protected while data are transmitted over the public network as well as stored in the hospital servers used by remote monitoring systems. In this paper, a wavelet-based steganography technique is introduced which combines encryption and scrambling to protect patient confidential data. The proposed method allows an ECG signal to hide the corresponding patient's confidential data and other physiological information, thus guaranteeing their integration with the ECG. To evaluate the effectiveness of the proposed technique on the ECG signal, two distortion measurement metrics have been used: the Percentage Residual Difference (PRD) and the Wavelet Weighted PRD (WWPRD). It is found that the proposed technique provides high security protection for patient data with low (less than 1%) distortion, and the ECG data remain diagnosable after watermarking (i.e. hiding patient confidential data) as well as after the watermarks (i.e. hidden data) are removed from the watermarked data. PMID:23708767
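The PRD metric used in this evaluation is straightforward to compute; a minimal sketch follows (the sinusoid is a toy stand-in, not real ECG data).

```python
import numpy as np

def prd(original, processed):
    """Percentage Residual Difference between two signals."""
    return 100.0 * np.sqrt(np.sum((original - processed) ** 2) / np.sum(original ** 2))

ecg = np.sin(np.linspace(0, 20 * np.pi, 2000))   # toy stand-in for an ECG trace
watermarked = ecg * 1.005                        # uniform 0.5% amplitude change
print(round(prd(ecg, watermarked), 3))           # 0.5, i.e. well under the 1% bound
```

The WWPRD variant weights the residual in the wavelet domain so that distortion in diagnostically important subbands counts more heavily.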

Ibaida, Ayman; Khalil, Ibrahim

2013-05-21

260

Matrix-free application of Hamiltonian operators in Coifman wavelet bases.

A means of evaluating the action of Hamiltonian operators on functions expanded in orthogonal compact support wavelet bases is developed, avoiding the direct construction and storage of operator matrices that complicate extension to coupled multidimensional quantum applications. Application of a potential energy operator is accomplished by simple multiplication of the two sets of expansion coefficients without any convolution. The errors of this coefficient product approximation are quantified and lead to use of particular generalized coiflet bases, derived here, that maximize the number of moment conditions satisfied by the scaling function. This is at the expense of the number of vanishing moments of the wavelet function (approximation order), which appears to be a disadvantage but is shown surmountable. In particular, application of the kinetic energy operator, which is accomplished through the use of one-dimensional (1D) [or at most two-dimensional (2D)] differentiation filters, then degrades in accuracy if the standard choice is made. However, it is determined that use of high-order finite-difference filters yields strongly reduced absolute errors. Eigensolvers that ordinarily use only matrix-vector multiplications, such as the Lanczos algorithm, can then be used with this more efficient procedure. Applications are made to anharmonic vibrational problems: a 1D Morse oscillator, a 2D model of proton transfer, and three-dimensional vibrations of nitrosyl chloride on a global potential energy surface. PMID:20590186

Acevedo, Ramiro; Lombardini, Richard; Johnson, Bruce R

2010-06-28


262

Wavelet-based decomposition and analysis of structural patterns in astronomical images

NASA Astrophysics Data System (ADS)

Context. Images of spatially resolved astrophysical objects contain a wealth of morphological and dynamical information, and effectively extracting this information is of paramount importance for understanding the physics and evolution of these objects. The algorithms and methods currently employed for this purpose (such as Gaussian model fitting) often use simplified approaches to describe the structure of resolved objects. Aims: Automated (unsupervised) methods for structure decomposition and tracking of structural patterns are needed for this purpose to be able to treat the complexity of structure and large amounts of data involved. Methods: We developed a new wavelet-based image segmentation and evaluation (WISE) method for multiscale decomposition, segmentation, and tracking of structural patterns in astronomical images. Results: The method was tested against simulated images of relativistic jets and applied to data from long-term monitoring of parsec-scale radio jets in 3C 273 and 3C 120. Working at its coarsest resolution, WISE reproduces the previous results of a model-fitting evaluation of the structure and kinematics in these jets exceptionally well. Extending the WISE structure analysis to fine scales provides the first robust measurements of two-dimensional velocity fields in these jets and indicates that the velocity fields probably reflect the evolution of Kelvin-Helmholtz instabilities that develop in the flow.

Mertens, Florent; Lobanov, Andrei

2015-02-01

263

NASA Astrophysics Data System (ADS)

Electrical Impedance Tomography is a soft-field tomography modality, where image reconstruction is formulated as a non-linear least-squares model fitting problem. The Newton-Raphson scheme is used for actually reconstructing the image, and this involves three main steps: forward solving, computation of the Jacobian, and computation of the conductivity update. Forward solving typically relies on the finite element method, resulting in the solution of a sparse linear system. In typical three-dimensional biomedical applications of EIT, like breast, prostate, or brain imaging, it is desirable to work with sufficiently fine meshes in order to properly capture the shape of the domain and of the electrodes, and to describe the resulting electric field with accuracy. These requirements result in meshes with 100,000 nodes or more. The solution of the resulting forward problems is computationally intensive. We address this aspect by speeding up the solution of the FEM linear system through efficient numerical methods and new hardware architectures. In particular, in terms of numerical methods, we solve the forward problem using the Conjugate Gradient method with a wavelet-based algebraic multigrid (AMG) preconditioner. This preconditioner is faster to set up than other AMG preconditioners not based on wavelets, uses less memory, and provides faster convergence. We report results for a MATLAB-based prototype algorithm and discuss details of a work in progress on a GPU implementation.
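A preconditioned conjugate gradient solver of the kind described can be sketched with a simple Jacobi (diagonal) preconditioner standing in for the wavelet-based AMG preconditioner; the dense SPD test matrix is an arbitrary stand-in for a sparse FEM stiffness matrix.

```python
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-10, max_iter=500):
    """Preconditioned conjugate gradient with a diagonal (Jacobi) preconditioner."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv_diag * r          # apply the preconditioner: z = M^{-1} r
    p = z.copy()
    for _ in range(max_iter):
        Ap = A @ p
        alpha = (r @ z) / (p @ Ap)
        x += alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol:
            return x
        z_new = M_inv_diag * r_new
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
    return x

# SPD test system standing in for an FEM stiffness matrix.
rng = np.random.default_rng(5)
B = rng.standard_normal((80, 80))
A = B @ B.T + 80 * np.eye(80)
b = rng.standard_normal(80)
x = pcg(A, b, 1.0 / np.diag(A))
print(np.linalg.norm(A @ x - b))
```

An AMG preconditioner would replace the elementwise multiply by `M_inv_diag` with a multigrid cycle; the surrounding CG iteration is unchanged, which is what makes swapping preconditioners cheap.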

Borsic, A.; Bayford, R.

2010-04-01

264

Adaptive Audio Watermarking via the Optimization Point of View on the Wavelet-Based Entropy

This study presents an adaptive audio watermarking method using ideas of wavelet-based entropy (WBE). The method converts low-frequency coefficients of the discrete wavelet transform (DWT) into the WBE domain, followed by calculation of the mean value of each audio signal and derivation of some essential properties of the WBE. A characteristic curve relating the WBE and DWT coefficients is also presented. The foundation of the embedding process lies on the approximately invariant property demonstrated by the mean of each audio signal and the characteristic curve. Besides, the quality of the watermarked audio is optimized. In the detecting process, the watermark can be extracted using only values of the WBE. Finally, the performance of the proposed watermarking method is analyzed in terms of signal-to-noise ratio, mean opinion score, and robustness. Experimental results confirm that the embedded data are robust against common attacks like re-sampling, MP3 compression, low-pass filtering, and amplitude scaling.

Chen, Shuo-Tsung; Chen, Chur-Jen

2011-01-01

265

The Analysis of Surface EMG Signals with the Wavelet-Based Correlation Dimension Method

Many attempts have been made to effectively improve a prosthetic system controlled by the classification of surface electromyographic (SEMG) signals. Recently, the development of methodologies to extract the effective features still remains a primary challenge. Previous studies have demonstrated that the SEMG signals have nonlinear characteristics. In this study, by combining the nonlinear time series analysis and the time-frequency domain methods, we proposed the wavelet-based correlation dimension method to extract the effective features of SEMG signals. The SEMG signals were firstly analyzed by the wavelet transform and the correlation dimension was calculated to obtain the features of the SEMG signals. Then, these features were used as the input vectors of a Gustafson-Kessel clustering classifier to discriminate four types of forearm movements. Our results showed that there are four separate clusters corresponding to different forearm movements at the third resolution level and the resulting classification accuracy was 100%, when two channels of SEMG signals were used. This indicates that the proposed approach can provide important insight into the nonlinear characteristics and the time-frequency domain features of SEMG signals and is suitable for classifying different types of forearm movements. By comparing with other existing methods, the proposed method exhibited more robustness and higher classification accuracy. PMID:24868240
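The correlation dimension itself (before the wavelet pre-processing this paper adds) can be estimated with the classic Grassberger-Procaccia correlation sum; the point set and radius range below are illustrative assumptions, not SEMG data.

```python
import numpy as np

def correlation_dimension(points, radii):
    """Grassberger-Procaccia estimate: slope of log C(r) versus log r,
    where C(r) is the fraction of point pairs closer than r."""
    diff = points[:, None, :] - points[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    iu = np.triu_indices(len(points), k=1)   # each pair counted once
    pair_d = dist[iu]
    C = np.array([(pair_d < r).mean() for r in radii])
    return np.polyfit(np.log(radii), np.log(C), 1)[0]

rng = np.random.default_rng(6)
pts = rng.uniform(size=(1500, 2))            # points filling a 2-D square
radii = np.array([0.02, 0.04, 0.06, 0.08, 0.10])
dim = correlation_dimension(pts, radii)
print(round(dim, 2))
```

For a set that uniformly fills a plane the estimate approaches 2 (slightly below, due to edge effects); for SEMG the paper computes this quantity on wavelet subband signals, so each resolution level yields its own feature.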

Zhang, Yanyan; Wang, Jue

2014-01-01

266

Hierarchical models for estimating density from DNA mark-recapture studies.

Genetic sampling is increasingly used as a tool by wildlife biologists and managers to estimate abundance and density of species. Typically, DNA is used to identify individuals captured in an array of traps (e.g., baited hair snares) from which individual encounter histories are derived. Standard methods for estimating the size of a closed population can be applied to such data. However, due to the movement of individuals on and off the trapping array during sampling, the area over which individuals are exposed to trapping is unknown, and so obtaining unbiased estimates of density has proved difficult. We propose a hierarchical spatial capture-recapture model which contains explicit models for the spatial point process governing the distribution of individuals and their exposure to (via movement) and detection by traps. Detection probability is modeled as a function of each individual's distance to the trap. We applied this model to a black bear (Ursus americanus) study conducted in 2006 using a hair-snare trap array in the Adirondack region of New York, USA. We estimated the density of bears to be 0.159 bears/km2, which is lower than the estimated density (0.410 bears/km2) based on standard closed population techniques. A Bayesian analysis of the model is fully implemented in the software program WinBUGS. PMID:19449704

Gardner, Beth; Royle, J Andrew; Wegan, Michael T

2009-04-01

267

A hierarchical model for estimating density in camera-trap studies

1. Estimating animal density using capture-recapture data from arrays of detection devices such as camera traps has been problematic due to the movement of individuals and heterogeneity in capture probability among them induced by differential exposure to trapping. 2. We develop a spatial capture-recapture model for estimating density from camera-trapping data which contains explicit models for the spatial point process governing the distribution of individuals and their exposure to and detection by traps. 3. We adopt a Bayesian approach to analysis of the hierarchical model using the technique of data augmentation. 4. The model is applied to photographic capture-recapture data on tigers Panthera tigris in Nagarahole reserve, India. Using this model, we estimate the density of tigers to be 14.3 animals per 100 km2 during 2004. 5. Synthesis and applications. Our modelling framework largely overcomes several weaknesses in conventional approaches to the estimation of animal density from trap arrays. It effectively deals with key problems such as individual heterogeneity in capture probabilities, movement of traps, presence of potential 'holes' in the array and ad hoc estimation of sample area. The formulation, thus, greatly enhances flexibility in the conduct of field surveys as well as in the analysis of data, from studies that may involve physical, photographic or DNA-based 'captures' of individual animals.
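The distance-dependent detection model at the heart of such spatial capture-recapture formulations can be sketched directly; the half-normal form is standard in this literature, but the parameter values `p0` and `sigma` and the trap grid below are illustrative assumptions.

```python
import numpy as np

def p_detect(activity_center, trap, p0=0.6, sigma=1.0):
    """Half-normal trap-detection probability used in spatial capture-recapture:
    detection probability decays with distance from the animal's activity center."""
    d2 = np.sum((np.asarray(activity_center) - np.asarray(trap)) ** 2)
    return p0 * np.exp(-d2 / (2 * sigma ** 2))

# Probability of at least one detection over a trap array and several occasions.
traps = [(x, y) for x in range(5) for y in range(5)]
center = (2.3, 1.7)
occasions = 5
p_miss_all = np.prod([(1 - p_detect(center, t)) ** occasions for t in traps])
print(round(1 - p_miss_all, 3))
```

Because exposure to trapping falls off smoothly with distance, the model converts raw capture histories into an estimate of density over a well-defined state space, rather than requiring an ad hoc effective sample area.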

Royle, J.A.; Nichols, J.D.; Karanth, K.U.; Gopalaswamy, A.M.

2009-01-01


269

A Statistical Analysis for Estimating Fish Number Density with the Use of a Multibeam Echosounder

NASA Astrophysics Data System (ADS)

Fish number density can be estimated from the normalized second moment of acoustic backscatter intensity [Denbigh et al., J. Acoust. Soc. Am. 90, 457-469 (1991)]. This method assumes that the distribution of fish scattering amplitudes is known and that the fish are randomly distributed following a Poisson volume distribution within regions of constant density. It is most useful at low fish densities, relative to the resolution of the acoustic device being used, since the estimators quickly become noisy as the number of fish per resolution cell increases. New models that include noise contributions are considered. The methods were applied to an acoustic assessment of juvenile Atlantic Bluefin Tuna, Thunnus thynnus. The data were collected using a 400 kHz multibeam echo sounder during the summer months of 2009 in Cape Cod, MA. Due to the high resolution of the multibeam system used, the large size (approx. 1.5 m) of the tuna, and the spacing of the fish in the school, we expect there to be low fish densities relative to the resolution of the multibeam system. Results of the fish number density based on the normalized second moment of acoustic intensity are compared to fish packing density estimated using aerial imagery that was collected simultaneously.
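Under the Poisson assumption described, density can be recovered by inverting the normalized second moment. The sketch below works with per-cell counts rather than backscatter intensity, an illustrative simplification of the cited formulation.

```python
import numpy as np

def poisson_density_from_moments(counts):
    """For Poisson counts, E[N^2] / E[N]^2 = 1 + 1/lambda; invert for lambda."""
    m1 = counts.mean()
    m2 = (counts ** 2).mean()
    return 1.0 / (m2 / m1 ** 2 - 1.0)

rng = np.random.default_rng(7)
counts = rng.poisson(2.0, size=50000)   # fish per resolution cell, true density 2.0
est = poisson_density_from_moments(counts)
print(round(est, 2))
```

The estimator degrades as density grows: the normalized second moment approaches 1, so the denominator above approaches zero and small sampling errors blow up, which is why the abstract emphasizes low densities relative to the resolution cell.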

Schroth-Miller, Madeline L.

270

Estimation of densities of probability and regression surfaces in one or two dimensions

The FORTRAN programs presented make it possible to build curves and surfaces of densities for the Lebesgue-measure, when one has a sample of n independent observations of a random variable in one or two dimensions and when this number n can be high (many thousands). The method uses kernel-estimators with varying window-parameters estimated via a modified maximum likelihood procedure. In
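A variable-window kernel estimator in the same spirit can be sketched with an Abramson-style adaptive bandwidth; the pilot bandwidth and sensitivity exponent are illustrative assumptions, not the modified maximum likelihood procedure of the paper.

```python
import numpy as np

def adaptive_kde(samples, grid, h0=0.5, alpha=0.5):
    """Gaussian KDE with sample-point-adaptive bandwidths (Abramson-style)."""
    # Pilot fixed-bandwidth estimate evaluated at the sample points.
    u = (samples[:, None] - samples[None, :]) / h0
    pilot = np.exp(-0.5 * u ** 2).sum(1) / (samples.size * h0 * np.sqrt(2 * np.pi))
    # Shrink the window where the pilot density is high, widen it where it is low.
    h = h0 * (pilot / np.exp(np.mean(np.log(pilot)))) ** (-alpha)
    v = (grid[:, None] - samples[None, :]) / h[None, :]
    return (np.exp(-0.5 * v ** 2) / (h * np.sqrt(2 * np.pi))).sum(1) / samples.size

rng = np.random.default_rng(8)
data = rng.normal(0.0, 1.0, size=1000)
xs = np.linspace(-6, 6, 601)
dens = adaptive_kde(data, xs)
integral = dens.sum() * (xs[1] - xs[0])   # should be close to 1
print(round(integral, 3))
```

Adapting the window to the local density reduces spurious bumps in sparse tails while keeping sharp detail near the mode, the same trade-off the varying-window-parameter FORTRAN programs address.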

Christian Duhamel; Gérard Parlant

1984-01-01

271

Workplace air is monitored for overall dust levels and for specific components of the dust to determine compliance with occupational and workplace standards established by regulatory bodies for worker health protection. Exposure monitoring studies were conducted by the International Copper Association (ICA) at various industrial facilities around the world working with copper. Individual cascade impactor stages were weighed to determine the total amount of dust collected on the stage, and then the amounts of soluble and insoluble copper and other metals on each stage were determined; speciation was not determined. Filter samples were also collected for scanning electron microscope analysis. Retrospectively, there was an interest in obtaining estimates of alveolar lung burdens of copper in workers engaged in tasks requiring different levels of exertion as reflected by their minute ventilation. However, mechanistic lung dosimetry models estimate alveolar lung burdens based on particle Stokes diameter. In order to use these dosimetry models, the mass-based aerodynamic diameter distribution (which was measured) had to be transformed into a distribution of Stokes diameters, requiring an estimate of individual particle density. This density value was estimated by using cascade impactor data together with scanning electron microscopy data from filter samples. The developed method was applied to ICA monitoring data sets and then the multiple path particle dosimetry (MPPD) model was used to determine the copper alveolar lung burdens for workers with different functional residual capacities engaged in activities requiring a range of minute ventilation levels. PMID:24304308
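The aerodynamic-to-Stokes conversion that makes particle density necessary follows, for spheres and neglecting slip and shape corrections, from d_ae = d_st * sqrt(rho_p / rho_0) with rho_0 = 1 g/cm^3. A minimal sketch (the function name and unit choices are illustrative):

```python
import math

def stokes_from_aerodynamic(d_aero_um, particle_density_g_cm3):
    """Convert aerodynamic to Stokes diameter for a spherical particle,
    assuming d_ae = d_st * sqrt(rho_p / rho_0) with rho_0 = 1 g/cm^3
    (slip correction and shape factor neglected)."""
    return d_aero_um / math.sqrt(particle_density_g_cm3 / 1.0)

# A 4 g/cm^3 particle with a 2 um aerodynamic diameter behaves, in Stokes
# terms, like a 1 um unit-density sphere.
print(stokes_from_aerodynamic(2.0, 4.0))
```

This is why underestimating particle density inflates the inferred Stokes diameters and, through the dosimetry model, biases the predicted alveolar deposition.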

Miller, Frederick J; Kaczmar, Swiatoslav W; Danzeisen, Ruth; Moss, Owen R

2013-12-01
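The aerodynamic-to-Stokes diameter transformation at the heart of this approach can be sketched as follows. This is a minimal illustration (the function name is my own), assuming the simplified relation d_ae = d_s · sqrt(ρ_p/ρ_0) for spherical particles with the slip correction neglected; the study's actual procedure, which also uses electron-microscopy data, is more involved:

```python
import math

def stokes_diameter(d_aero_um, particle_density_g_cm3, ref_density_g_cm3=1.0):
    """Convert an aerodynamic diameter to a Stokes diameter via
    d_ae = d_s * sqrt(rho_p / rho_0), i.e. d_s = d_ae * sqrt(rho_0 / rho_p).
    Assumes spherical particles; neglects the Cunningham slip correction."""
    return d_aero_um * math.sqrt(ref_density_g_cm3 / particle_density_g_cm3)

# A particle measured at 5 um aerodynamic diameter with an estimated
# material density of 4 g/cm^3 maps to a smaller Stokes diameter.
d_s = stokes_diameter(5.0, 4.0)
```

Denser-than-water particles thus shift to smaller Stokes diameters, which is why the estimated particle density matters for the dosimetry model input.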

272

Estimation of density-dependent mortality of juvenile bivalves in the Wadden Sea.

We investigated density-dependent mortality within the early months of life of the bivalves Macoma balthica (Baltic tellin) and Cerastoderma edule (common cockle) in the Wadden Sea. Mortality is thought to be density-dependent in juvenile bivalves, because there is no proportional relationship between the size of the reproductive adult stocks and the numbers of recruits for both species. It is not known however, when exactly density dependence in the pre-recruitment phase occurs and how prevalent it is. The magnitude of recruitment determines year class strength in bivalves. Thus, understanding pre-recruit mortality will improve the understanding of population dynamics. We analyzed count data from three years of temporal sampling during the first months after bivalve settlement at ten transects in the Sylt-Rømø-Bay in the northern German Wadden Sea. Analyses of density dependence are sensitive to bias through measurement error. Measurement error was estimated by bootstrapping, and residual deviances were adjusted by adding process error. With simulations the effect of these two types of error on the estimate of the density-dependent mortality coefficient was investigated. In three out of eight time intervals density dependence was detected for M. balthica, and in zero out of six time intervals for C. edule. Biological or environmental stochastic processes dominated over density dependence at the investigated scale. PMID:25105293

Andresen, Henrike; Strasser, Matthias; van der Meer, Jaap

2014-01-01
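A common way to test for density dependence of this kind is to regress per-capita log survival on initial density; a negative slope indicates density-dependent mortality. The sketch below (hypothetical function name, plain ordinary least squares, no correction for the measurement and process error the study emphasizes) illustrates the idea:

```python
import math

def density_dependence_slope(n_start, n_end):
    """OLS regression of per-capita log survival, log(N_end/N_start),
    on initial density N_start. A negative slope suggests
    density-dependent mortality; a zero slope, density independence."""
    x = list(n_start)
    y = [math.log(e / s) for s, e in zip(n_start, n_end)]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return my - slope * mx, slope

# Toy counts at five transects: survival declines as initial density rises.
intercept, slope = density_dependence_slope([10, 20, 40, 80, 160],
                                            [8, 14, 25, 40, 60])
```

In practice the slope estimate is biased by measurement error in the counts, which is precisely why the study bootstraps the counts and adds process error before drawing conclusions.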

274

An automatic iris occlusion estimation method based on high-dimensional density estimation.

Iris masks play an important role in iris recognition. They indicate which part of the iris texture map is useful and which part is occluded or contaminated by noisy image artifacts such as eyelashes, eyelids, eyeglasses frames, and specular reflections. The accuracy of the iris mask is extremely important. The performance of the iris recognition system will decrease dramatically when the iris mask is inaccurate, even when the best recognition algorithm is used. Traditionally, rule-based algorithms were used to estimate iris masks from iris images, but the accuracy of the masks generated this way is questionable. In this work, we propose to use Figueiredo and Jain's Gaussian Mixture Models (FJ-GMMs) to model the underlying probabilistic distributions of both valid and invalid regions on iris images. We also explored possible features and found that a Gabor Filter Bank (GFB) provides the most discriminative information for our goal. Finally, we applied the Simulated Annealing (SA) technique to optimize the parameters of the GFB in order to achieve the best recognition rate. Experimental results show that the masks generated by the proposed algorithm increase the iris recognition rate on both the ICE2 and UBIRIS datasets, verifying the effectiveness and importance of the proposed method for iris occlusion estimation. PMID:22868651

Li, Yung-Hui; Savvides, Marios

2013-04-01

275

Estimation of densities of probability and regression surfaces in one or two dimensions

NASA Astrophysics Data System (ADS)

The FORTRAN programs presented make it possible to build curves and surfaces of densities for the Lebesgue-measure, when one has a sample of n independent observations of a random variable in one or two dimensions and when this number n can be high (many thousands). The method uses kernel-estimators with varying window-parameters estimated via a modified maximum likelihood procedure. In the case of a three-dimensional variable, it is possible to estimate the conditional expectation function (x, y) → E(Z | X = x, Y = y). Studies based on simulations as well as on real data are presented.

Duhamel, Christian; Parlant, Gérard

1984-04-01
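The varying-window idea can be illustrated with an Abramson-style adaptive kernel estimator, in which a fixed-bandwidth pilot estimate sets larger bandwidths where the data are sparse. This is a sketch of the general technique (function names are my own), not the authors' FORTRAN implementation or their modified maximum likelihood window selection:

```python
import math

def _gauss(u):
    return math.exp(-0.5 * u * u) / math.sqrt(2.0 * math.pi)

def adaptive_kde(data, h_pilot, x):
    """Variable-bandwidth (Abramson-style) kernel density estimate.
    A fixed-bandwidth pilot estimate f_p sets local bandwidths
    h_i = h_pilot * sqrt(g / f_p(x_i)), where g is the geometric mean
    of the pilot densities, so sparse regions get wider kernels."""
    n = len(data)
    pilot = [sum(_gauss((xi - xj) / h_pilot) for xj in data) / (n * h_pilot)
             for xi in data]
    g = math.exp(sum(math.log(p) for p in pilot) / n)
    h_loc = [h_pilot * math.sqrt(g / p) for p in pilot]
    return sum(_gauss((x - xi) / hi) / hi
               for xi, hi in zip(data, h_loc)) / n
```

The result is still a proper density (it integrates to one), but isolated observations are smoothed over a wider window than points in dense regions.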

276

3D depth-to-basement and density contrast estimates using gravity and borehole data

NASA Astrophysics Data System (ADS)

We present a gravity inversion method for simultaneously estimating the 3D basement relief of a sedimentary basin and the parameters defining the parabolic decay of the density contrast with depth in a sedimentary pack, assuming prior knowledge of the basement depth at a few points. The sedimentary pack is approximated by a grid of 3D vertical prisms juxtaposed in both horizontal directions, x and y, of a right-handed coordinate system. The prisms' thicknesses represent the depths to the basement and are the parameters to be estimated from the gravity data. To produce stable depth-to-basement estimates we impose smoothness on the basement depths through minimization of the spatial derivatives of the parameters in the x and y directions. To estimate the parameters defining the parabolic decay of the density contrast with depth we mapped a functional containing prior information about the basement depths at a few points. We apply our method to synthetic data from a simulated complex 3D basement relief with two sedimentary sections having distinct parabolic laws describing the density contrast variation with depth. Our method retrieves the true parameters of the parabolic law of density contrast decay with depth and produces good estimates of the basement relief if the number and the distribution of boreholes are sufficient. We also applied our method to real gravity data from the onshore and part of the shallow offshore Almada Basin, on Brazil's northeastern coast. The estimated 3D basement of the Almada Basin shows geologic structures that cannot be easily inferred just from inspection of the gravity anomaly. The estimated relief presents steep borders, evidencing the presence of gravity faults. Also, we note the existence of three terraces separating two local subbasins.
These geologic features are consistent with Almada's geodynamic origin (the Mesozoic breakup of Gondwana and the opening of the South Atlantic Ocean) and they are important in understanding the basin evolution and in detecting structural oil traps.

Barbosa, V. C.; Martins, C. M.; Silva, J. B.

2009-05-01

277

Functional Volumes Modeling using Kernel Density Estimation F. A. Nielsen and L. K. Hansen

Functional Volumes Modeling using Kernel Density Estimation. F. Å. Nielsen and L. K. Hansen, Dept(5187):994--996. Nielsen, F. Å., Hansen, L. K., NeuroImage, vol. 7, 1998, S782. Rasmussen, C. E., Advances

Nielsen, Finn Årup

278

Estimation method for electron-hole pair density in plasma columns

A simple method to estimate the electron-hole pair density in plasma columns created by heavy ions is described. The volume of the plasma column is expressed as a ratio to that of a cone with the same range and bottom radius as the plasma column. The volume ratio is expressed by a second-order polynomial in the energy per unit mass of the

I. Kanno

1999-01-01

279

Efficient adaptive density estimation per image pixel for the task of background subtraction

We analyze the computer vision task of pixel-level background subtraction. We present recursive equations that are used to constantly update the parameters of a Gaussian mixture model and to simultaneously select the appropriate number of components for each pixel. We also present a simple non-parametric adaptive density estimation method. The two methods are compared with each other and with some

Zoran Zivkovic; Ferdinand Van Der Heijden

2006-01-01
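A per-pixel recursive Gaussian mixture update of the kind described can be sketched as follows. This is a simplified, hypothetical single-channel version (fixed learning rate, fixed maximum number of modes, no per-component complexity prior), whereas the paper's recursive equations additionally select the number of components per pixel:

```python
def update_pixel_gmm(modes, x, alpha=0.01, max_modes=4, match_sigmas=3.0):
    """One recursive update of a per-pixel Gaussian mixture.
    `modes` is a list of [weight, mean, variance] entries, updated in
    place. Returns True if x matched an existing mode (i.e. the pixel
    currently looks like background), False if a new mode was created."""
    matched = None
    for m in modes:
        if (x - m[1]) ** 2 <= match_sigmas ** 2 * m[2]:
            matched = m
            break
    # Recursive weight update: only the matched mode attracts weight.
    for m in modes:
        m[0] += alpha * ((1.0 if m is matched else 0.0) - m[0])
    if matched is not None:
        diff = x - matched[1]
        rho = alpha / matched[0]
        matched[1] += rho * diff                       # mean update
        matched[2] += rho * (diff * diff - matched[2])  # variance update
    else:
        if len(modes) >= max_modes:
            modes.remove(min(modes, key=lambda m: m[0]))  # drop weakest mode
        modes.append([alpha, x, 15.0 ** 2])  # new mode, broad initial variance
    total = sum(m[0] for m in modes)
    for m in modes:
        m[0] /= total
    return matched is not None
```

After many frames of a stable background value, the dominant mode's variance shrinks, so a sudden foreground value falls outside the 3-sigma gate and spawns a new, low-weight mode.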

280

Technology Transfer Automated Retrieval System (TEKTRAN)

Hydrologic and morphological properties of claypan landscapes cause variability in soybean root and shoot biomass. This study was conducted to develop predictive models of soybean root length density distribution (RLDd) using direct measurements and sensor based estimators of claypan morphology. A c...

281

A FULLY AUTOMATED SCHEME FOR BREAST DENSITY ESTIMATION AND ASYMMETRY DETECTION OF MAMMOGRAMS

This paper presents a fully automated scheme for breast density estimation and asymmetry detection on mammographic images. Image preprocessing and segmentation techniques are first applied to the image in order to extract the features for the breast density categorization. A new fractal-related feature is also proposed for the classification. The classification into 3 classes is

Stylianos Tzikopoulos; Sergios Theodoridis

2009-01-01

282

Analog circuit fault diagnosis based on fuzzy support vector machine and kernel density estimation

Because analog circuit signals contain noise that makes it difficult for a support vector machine to construct an optimal classifier, this paper proposes a new method for analog circuit fault diagnosis. First, statistical parameters are extracted from the circuit's time-domain signals as a set of fault features; then, using a kernel density estimation method, a form of

Jing Tang; Yun'an Hu; Tao Lin; Yu Chen

2010-01-01

283

In this paper we discuss representations of charged particle densities in particle-in-cell (PIC) simulations, analyze the sources and profiles of the intrinsic numerical noise, and present efficient methods for their removal. We devise two alternative estimation methods for the charged particle distribution which represent a significant improvement over the Monte Carlo cosine expansion used in the 2d code of Bassi, designed to

Balsa Terzic; Gabriele Bassi

2011-01-01

284

Estimation of the High-Latitude Topside Heat Flux Using DMSP In Situ Plasma Densities

The high-latitude ionosphere interfaces with the hot, tenuous, magnetospheric plasma, and a heat flow into the ionosphere is expected, which has a large impact on the plasma densities and temperatures in the high-latitude ionosphere. The value of this magnetospheric heat flux is unknown. In an effort to estimate the value of the magnetospheric heat flux into the ionosphere, and

H. Bekerat; R. Schunk; L. Scherliess

2005-01-01

285

Cetacean population density estimation from single fixed sensors using passive acoustics

Cetacean population density estimation from single fixed sensors using passive acoustics. Elizabeth T. Küsel and David K. Mellinger, Cooperative Institute for Marine Resources Studies (CIMRS). Acoustic recordings of marine mammal vocalizations, including echolocation clicks, calls, and songs

Thomas, Len

286

How bandwidth selection algorithms impact exploratory data analysis using kernel density estimation.

Exploratory data analysis (EDA) can reveal important features of underlying distributions, and these features often have an impact on inferences and conclusions drawn from data. Graphical analysis is central to EDA, and graphical representations of distributions often benefit from smoothing. A viable method of estimating and graphing the underlying density in EDA is kernel density estimation (KDE). This article provides an introduction to KDE and examines alternative methods for specifying the smoothing bandwidth in terms of their ability to recover the true density. We also illustrate the comparison and use of KDE methods with 2 empirical examples. Simulations were carried out in which we compared 8 bandwidth selection methods (Sheather-Jones plug-in [SJDP], normal rule of thumb, Silverman's rule of thumb, least squares cross-validation, biased cross-validation, and 3 adaptive kernel estimators) using 5 true density shapes (standard normal, positively skewed, bimodal, skewed bimodal, and standard lognormal) and 9 sample sizes (15, 25, 50, 75, 100, 250, 500, 1,000, 2,000). Results indicate that, overall, SJDP outperformed all methods. However, for smaller sample sizes (25 to 100) either biased cross-validation or Silverman's rule of thumb was recommended, and for larger sample sizes the adaptive kernel estimator with SJDP was recommended. Information is provided about implementing the recommendations in the R computing language. PMID:24885339

Harpole, Jared K; Woods, Carol M; Rodebaugh, Thomas L; Levinson, Cheri A; Lenze, Eric J

2014-09-01
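Two of the selectors compared here, Silverman's rule of thumb and least squares cross-validation, are simple enough to sketch directly. This is a minimal Gaussian-kernel illustration (function names are my own; the LSCV integral of the squared estimate is in closed form for the Gaussian kernel), not the simulation code used in the study:

```python
import math

def silverman_bw(data):
    """Silverman's rule of thumb: h = 0.9 * min(sd, IQR/1.34) * n**(-1/5)."""
    n = len(data)
    mean = sum(data) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))
    s = sorted(data)
    iqr = s[(3 * n) // 4] - s[n // 4]
    return 0.9 * min(sd, iqr / 1.34) * n ** -0.2

def lscv_score(data, h):
    """Least-squares cross-validation criterion for a Gaussian kernel:
    the integral of fhat^2 (closed form) minus twice the mean
    leave-one-out density at the data points. Minimize over h."""
    n = len(data)
    int_f2 = 0.0
    loo = 0.0
    for i, xi in enumerate(data):
        for j, xj in enumerate(data):
            d2 = (xi - xj) ** 2
            int_f2 += math.exp(-d2 / (4.0 * h * h))
            if i != j:
                loo += math.exp(-d2 / (2.0 * h * h))
    int_f2 /= n * n * h * math.sqrt(4.0 * math.pi)
    loo /= n * (n - 1) * h * math.sqrt(2.0 * math.pi)
    return int_f2 - 2.0 * loo

# Pick the grid bandwidth minimizing the LSCV criterion on bimodal data.
data = [-1.2, -1.1, -1.0, -0.9, -0.8, 0.8, 0.9, 1.0, 1.1, 1.2]
h_cv = min((0.05 * k for k in range(1, 41)), key=lambda h: lscv_score(data, h))
```

On clearly bimodal data like this, cross-validation tends to choose a bandwidth narrower than Silverman's rule, preserving both modes, which mirrors the article's point that the selector can change the conclusions of an exploratory analysis.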

287

A wind energy analysis of Grenada: an estimation using the ‘Weibull’ density function

The Weibull density function has been used to estimate the wind energy potential in Grenada, West Indies. Based on historic recordings of mean hourly wind velocity, this analysis shows the importance of incorporating the variation in wind energy potential during diurnal cycles. Wind energy assessments that are based on a Weibull distribution using average daily/seasonal wind speeds fail to acknowledge that

D Weisser

2003-01-01
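For a Weibull wind speed distribution with shape k and scale c, the mean power density follows from the third moment of the distribution, which is exactly why assessments based on average speeds alone are biased low. A sketch using the standard Weibull moment formulas (the air density and the parameter values below are illustrative, not Grenada's):

```python
import math

def weibull_mean_speed(k, c):
    """Mean wind speed for Weibull(shape k, scale c): c * Gamma(1 + 1/k)."""
    return c * math.gamma(1.0 + 1.0 / k)

def weibull_mean_power_density(k, c, rho=1.225):
    """Mean wind power density in W/m^2 from the third Weibull moment:
    P = 0.5 * rho * c**3 * Gamma(1 + 3/k).
    Cubing the mean speed instead (0.5 * rho * vbar**3) ignores the
    cubic weighting of above-average winds and underestimates P."""
    return 0.5 * rho * c ** 3 * math.gamma(1.0 + 3.0 / k)

# Illustrative parameters (k = 2 is a Rayleigh-like regime, c = 7 m/s):
p_true = weibull_mean_power_density(2.0, 7.0)
p_naive = 0.5 * 1.225 * weibull_mean_speed(2.0, 7.0) ** 3
```

The ratio p_true/p_naive = Γ(1+3/k)/Γ(1+1/k)³ is the energy pattern factor; for k = 2 it is about 1.9, i.e. the naive estimate misses nearly half the resource.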

288

Estimating the effect of Earth elasticity and variable water density on tsunami speeds

Estimating the effect of Earth elasticity and variable water density on tsunami speeds. Victor C. Tsai. Revised 25 December 2012; accepted 7 January 2013; published 13 February 2013. [1] Comparisons of tsunami arrival times from the 11 March 2011 tsunami suggest, however, that the standard

Tsai, Victor C.

289

Audubon Mississippi, 1208 Washington Street, Vicksburg, MS 39183. We combined Breeding Bird Survey point count protocol and distance sampling to survey spring migrant and breeding birds in Vicksburg National Military Park on 33 days between March and June of 2003 and 2004. For 26 of 106 detected species, we used program DISTANCE to estimate detection probabilities and densities

Scott G. Somershoe; Daniel J. Twedt; Bruce Reid

2006-01-01

290

Mixture Kalman Filter Based Highway Congestion Mode and Vehicle Density Estimator and its … In today's metropolitan areas, highway traffic congestion occurs regularly during rush hours. In addition, it causes inefficient operation of highways, waste of resources, increased air pollution, and intensified

Horowitz, Roberto

291

Production of and consumption by hatchery-reared fingerling (age-0) smallmouth bass Micropterus dolomieu at various simulated stocking densities were estimated with a bioenergetics model. Fish growth rates and pond water temperatures during the 1996 growing season at two hatcheries in Oklahoma were used in the model. Fish growth and simulated consumption and production differed greatly between the two hatcheries, probably because of differences in pond fertilization and mortality rates. Our results suggest that appropriate stocking density depends largely on prey availability as affected by pond fertilization and on fingerling mortality rates. The bioenergetics model provided a useful tool for estimating production at various stocking densities. However, verification of physiological parameters for age-0 fish of hatchery-reared species is needed.

Robel, G.L.; Fisher, W.L.

1999-01-01

292

A likelihood approach to estimating animal density from binary acoustic transects.

We propose an approximate maximum likelihood method for estimating animal density and abundance from binary passive acoustic transects, when both the probability of detection and the range of detection are unknown. The transect survey is purposely designed so that successive data points are dependent, and this dependence is exploited to simultaneously estimate density, range of detection, and probability of detection. The data are assumed to follow a homogeneous Poisson process in space, and a second-order Markov approximation to the likelihood is used. Simulations show that this method has small bias under the assumptions used to derive the likelihood, although it performs better when the probability of detection is close to 1. The effects of violations of these assumptions are also investigated, and the approach is found to be sensitive to spatial trends in density and clustering. The method is illustrated using real acoustic data from a survey of sperm and humpback whales. PMID:21039393

Horrocks, Julie; Hamilton, David C; Whitehead, Hal

2011-09-01

293

NASA Astrophysics Data System (ADS)

In this work, we investigate the statistical computation of the Boltzmann entropy of statistical samples. For this purpose, we use both histogram and kernel function to estimate the probability density function of statistical samples. We find that, due to coarse-graining, the entropy is a monotonic increasing function of the bin width for histogram or bandwidth for kernel estimation, which makes it difficult to select an optimal bin width/bandwidth for computing the entropy. Fortunately, we notice that there exists a minimum of the first derivative of entropy for both histogram and kernel estimation, and this minimum point of the first derivative asymptotically points to the optimal bin width or bandwidth. We have verified these findings with a large number of numerical experiments. Hence, we suggest that the minimum of the first derivative of entropy be used as a selector for the optimal bin width or bandwidth of density estimation. Moreover, the optimal bandwidth selected by the minimum of the first derivative of entropy is purely data-based, independent of the unknown underlying probability density distribution, which is obviously superior to the existing estimators. Our results are not restricted to the one-dimensional case, but can also be extended to multivariate cases. It should be emphasized, however, that we do not provide a robust mathematical proof of these findings, and we leave these issues with those who are interested in them.

Sui, Ning; Li, Min; He, Ping

2014-12-01
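The proposed selector, taking the bin width at which the first derivative of the entropy is minimal, can be sketched for the histogram case as follows. This is a minimal illustration with a finite-difference derivative over a width grid (function names are my own):

```python
import math

def histogram_entropy(data, width):
    """Entropy of a histogram density estimate with the given bin width:
    H = -sum_i p_i * ln(p_i / width), where p_i is the bin probability.
    Coarse-graining makes H grow with the bin width."""
    lo = min(data)
    counts = {}
    for x in data:
        b = int((x - lo) / width)
        counts[b] = counts.get(b, 0) + 1
    n = len(data)
    return -sum((c / n) * math.log(c / (n * width)) for c in counts.values())

def select_width(data, widths):
    """Return the bin width at which the finite-difference first
    derivative of entropy with respect to width is smallest, per the
    selection criterion proposed in the text."""
    h = [histogram_entropy(data, w) for w in widths]
    dh = [(h[i + 1] - h[i]) / (widths[i + 1] - widths[i])
          for i in range(len(h) - 1)]
    return widths[min(range(len(dh)), key=lambda i: dh[i])]
```

The selector is purely data-based, as the abstract stresses: nothing about the underlying distribution enters beyond the sample itself.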

294

Density estimation of small-mammal populations using a trapping web and distance sampling methods

Distance sampling methodology is adapted to enable animal density (number per unit of area) to be estimated from capture-recapture and removal data. A trapping web design provides the link between capture data and distance sampling theory. The estimator of density is D = M_{t+1} f(0), where M_{t+1} is the number of individuals captured and f(0) is computed from the M_{t+1} distances from the web center to the traps in which those individuals were first captured. It is possible to check qualitatively the critical assumption on which the web design and the estimator are based. This is a conceptual paper outlining a new methodology, not a definitive investigation of the best specific way to implement this method. Several alternative sampling and analysis methods are possible within the general framework of distance sampling theory; a few alternatives are discussed and an example is given.

Anderson, David R.; Burnham, Kenneth P.; White, Gary C.; Otis, David L.

1983-01-01

295

Estimating Column Density in Molecular Clouds with FIR and Sub-mm Emission Maps

We have used a numerical simulation of a turbulent cloud to synthesize maps of the thermal emission from dust at a variety of far-IR and sub-mm wavelengths. The average column density and external radiation field in the simulation is well matched to clouds such as Perseus and Ophiuchus. We use pairs of single-wavelength emission maps to derive the dust color temperature and column density, and we compare the derived column densities with the true column density. We demonstrate that longer wavelength emission maps yield less biased estimates of column density than maps made towards the peak of the dust emission spectrum. We compare the scatter in the derived column density with the observed scatter in Perseus and Ophiuchus. We find that while in Perseus all of the observed scatter in the emission-derived versus the extinction-derived column density can be attributed to the flawed assumption of isothermal dust along each line of sight, in Ophiuchus there is additional scatter above what can be explained by the isothermal assumption. Our results imply that variations in dust emission properties within a molecular cloud are not necessarily a major source of uncertainty in column density measurements.

S. Schnee; T. Bethell; A. Goodman

2006-02-13
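The two-map color-temperature step described here can be sketched as follows, assuming optically thin modified-blackbody emission, I_ν ∝ ν^β B_ν(T), and solving the band ratio for T by bisection (constants in SI units; function names are my own; the column density then follows from N ∝ I_ν / (κ_ν B_ν(T)), not shown):

```python
import math

H_PLANCK = 6.626e-34   # J s
K_B = 1.381e-23        # J/K
C_LIGHT = 2.998e8      # m/s

def planck(nu, temp):
    """Planck function B_nu(T) in SI units."""
    return (2.0 * H_PLANCK * nu ** 3 / C_LIGHT ** 2) / \
        math.expm1(H_PLANCK * nu / (K_B * temp))

def color_temperature(i1, i2, nu1, nu2, beta=2.0, t_lo=3.0, t_hi=100.0):
    """Dust color temperature from two optically thin modified-blackbody
    intensities, I1/I2 = (nu1/nu2)**beta * B(nu1,T)/B(nu2,T), solved by
    bisection. Requires nu1 > nu2 so the ratio increases with T."""
    target = i1 / i2
    for _ in range(60):
        mid = 0.5 * (t_lo + t_hi)
        r = (nu1 / nu2) ** beta * planck(nu1, mid) / planck(nu2, mid)
        if r < target:
            t_lo = mid
        else:
            t_hi = mid
    return 0.5 * (t_lo + t_hi)
```

The isothermal assumption criticized in the abstract enters exactly here: a single T is fitted per line of sight, even though real sightlines mix dust temperatures.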

296

Wavelet-based compression of medical images: filter-bank selection and evaluation.

Wavelet-based image coding algorithms (lossy and lossless) use a fixed perfect reconstruction filter-bank built into the algorithm for coding and decoding of images. However, no systematic study has been performed to evaluate the coding performance of wavelet filters on medical images. We evaluated the best types of filters suitable for medical images in providing low bit rate and low computational complexity. In this study a variety of wavelet filters were used to compress and decompress computed tomography (CT) brain and abdomen images. We applied two-dimensional wavelet decomposition, quantization and reconstruction using several families of filter banks to a set of CT images. The Discrete Wavelet Transform (DWT), which provides an efficient framework for multi-resolution frequency analysis, was used. Compression was accomplished by applying threshold values to the wavelet coefficients. Statistical indices such as mean square error (MSE), maximum absolute error (MAE) and peak signal-to-noise ratio (PSNR) were used to quantify the effect of wavelet compression of selected images. The code was written using the wavelet and image processing toolbox of MATLAB (version 6.1). These results show that no specific wavelet filter performs uniformly better than the others, except for the Daubechies and biorthogonal filters, which are the best overall. MAE values achieved by these filters were 5 x 10(-14) to 12 x 10(-14) for both CT brain and abdomen images at different decomposition levels. This indicated that using these filters a very small error (approximately 7 x 10(-14)) can be achieved between the original and the filtered image. The PSNR values obtained were higher for the brain than the abdomen images. For both lossy and lossless compression, the 'most appropriate' wavelet filter should be chosen adaptively depending on the statistical properties of the image being coded to achieve a higher compression ratio. PMID:12956184

Saffor, A; bin Ramli, A R; Ng, K H

2003-06-01
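The threshold-and-reconstruct scheme evaluated in this study can be illustrated with the simplest member of the filter families, the Haar filter bank, together with the PSNR metric the authors report. This is a one-level, one-dimensional sketch (my own function names), not the authors' MATLAB pipeline:

```python
import math

def haar_dwt(signal):
    """One level of the orthonormal Haar DWT: (approximation, detail)."""
    a = [(signal[2 * i] + signal[2 * i + 1]) / math.sqrt(2.0)
         for i in range(len(signal) // 2)]
    d = [(signal[2 * i] - signal[2 * i + 1]) / math.sqrt(2.0)
         for i in range(len(signal) // 2)]
    return a, d

def haar_idwt(a, d):
    """Perfect-reconstruction inverse of haar_dwt."""
    out = []
    for ai, di in zip(a, d):
        out.extend([(ai + di) / math.sqrt(2.0), (ai - di) / math.sqrt(2.0)])
    return out

def compress(signal, threshold):
    """Hard-threshold the detail coefficients, then reconstruct."""
    a, d = haar_dwt(signal)
    return haar_idwt(a, [x if abs(x) > threshold else 0.0 for x in d])

def psnr(orig, recon, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means less distortion."""
    mse = sum((o - r) ** 2 for o, r in zip(orig, recon)) / len(orig)
    return float('inf') if mse == 0.0 else 10.0 * math.log10(peak * peak / mse)
```

With a zero threshold the filter bank reconstructs the signal exactly (the perfect-reconstruction property the abstract mentions); raising the threshold zeroes small details, lowering the bit budget at the cost of PSNR.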

297

On the Use of Adaptive Wavelet-based Methods for Ocean Modeling and Data Assimilation Problems

NASA Astrophysics Data System (ADS)

Latest advancements in parallel wavelet-based numerical methodologies for the solution of partial differential equations, combined with the unique properties of wavelet analysis to unambiguously identify and isolate localized dynamically dominant flow structures, make it feasible to start developing integrated approaches for ocean modeling and data assimilation problems that take advantage of temporally and spatially varying meshes. In this talk the Parallel Adaptive Wavelet Collocation Method with spatially and temporally varying thresholding is presented and the feasibility/potential advantages of its use for ocean modeling are discussed. The second half of the talk focuses on the recently developed Simultaneous Space-time Adaptive approach that addresses one of the main challenges of variational data assimilation, namely the requirement to have a forward solution available when solving the adjoint problem. The issue is addressed by concurrently solving forward and adjoint problems in the entire space-time domain on a near optimal adaptive computational mesh that automatically adapts to spatio-temporal structures of the solution. The compressed space-time form of the solution eliminates the need to save or recompute the forward solution for every time slice, as is typically done in traditional time marching variational data assimilation approaches. The simultaneous spatio-temporal discretization of both the forward and the adjoint problems makes it possible to solve both of them concurrently on the same space-time adaptive computational mesh, reducing the amount of saved data to the strict minimum for a given a priori controlled accuracy of the solution. The simultaneous space-time adaptive approach of variational data assimilation is demonstrated for the advection diffusion problem in 1D-t and 2D-t dimensions.

Vasilyev, Oleg V.; Yousuff Hussaini, M.; Souopgui, Innocent

2014-05-01

298

Non-Gaussian probabilistic MEG source localisation based on kernel density estimation?

There is strong evidence to suggest that data recorded from magnetoencephalography (MEG) follows a non-Gaussian distribution. However, existing standard methods for source localisation model the data using only second order statistics, and therefore use the inherent assumption of a Gaussian distribution. In this paper, we present a new general method for non-Gaussian source estimation of stationary signals for localising brain activity from MEG data. By providing a Bayesian formulation for MEG source localisation, we show that the source probability density function (pdf), which is not necessarily Gaussian, can be estimated using multivariate kernel density estimators. In the case of Gaussian data, the solution of the method is equivalent to that of widely used linearly constrained minimum variance (LCMV) beamformer. The method is also extended to handle data with highly correlated sources using the marginal distribution of the estimated joint distribution, which, in the case of Gaussian measurements, corresponds to the null-beamformer. The proposed non-Gaussian source localisation approach is shown to give better spatial estimates than the LCMV beamformer, both in simulations incorporating non-Gaussian signals, and in real MEG measurements of auditory and visual evoked responses, where the highly correlated sources are known to be difficult to estimate. PMID:24055702

Mohseni, Hamid R.; Kringelbach, Morten L.; Woolrich, Mark W.; Baker, Adam; Aziz, Tipu Z.; Probert-Smith, Penny

2014-01-01

300

Estimating absolute salinity (SA) in the world's oceans using density and composition

NASA Astrophysics Data System (ADS)

The practical (Sp) and reference (SR) salinities do not account for variations in physical properties such as density and enthalpy. Trace and minor components of seawater, such as nutrients or inorganic carbon, affect these properties. This limitation has been recognized, and several studies have been made to estimate the effect of these compositional changes on the conductivity-density relationship. These studies have been limited in number and geographic scope. Here, we combine the measurements of previous studies with new measurements for a total of 2857 conductivity-density measurements, covering all of the world's major oceans, to derive empirical equations for the effect of silica and total alkalinity on the density and absolute salinity of the global oceans, and to recommend an equation applicable to most of the world's oceans. The potential impact on salinity as a result of uptake of anthropogenic CO2 is also discussed.

Woosley, Ryan J.; Huang, Fen; Millero, Frank J.

2014-11-01

301

Wavelet-based SAR images despeckling using joint hidden Markov model

NASA Astrophysics Data System (ADS)

In the past few years, wavelet-domain hidden Markov models have proven to be useful tools for statistical signal and image processing. The hidden Markov tree (HMT) model captures the key features of the joint probability density of the wavelet coefficients of real-world data. One potential drawback of the HMT framework is that it does not take account of the intrascale correlations that exist among neighboring wavelet coefficients. In this paper, we propose to develop a joint hidden Markov model by fusing the wavelet Bayesian denoising technique with an image regularization procedure based on the HMT and a Markov random field (MRF). The Expectation Maximization algorithm is used to estimate hyperparameters and specify the mixture model. The noise-free wavelet coefficients are finally estimated by a shrinkage function based on local weighted averaging of the Bayesian estimator. It is shown that the joint method outperforms the Lee filter and standard HMT techniques in terms of the integrative measures of the equivalent number of looks (ENL) and Pratt's figure of merit (FOM), especially when dealing with speckle noise of large variance.

Li, Qiaoliang; Wang, Guoyou; Liu, Jianguo; Chen, Shaobo

2007-11-01

302

Density estimation in a wolverine population using spatial capture-recapture models

Classical closed-population capture-recapture models do not accommodate the spatial information inherent in encounter history data obtained from camera-trapping studies. As a result, individual heterogeneity in encounter probability is induced, and it is not possible to estimate density objectively because trap arrays do not have a well-defined sample area. We applied newly developed capture-recapture models that accommodate the spatial attribute inherent in capture-recapture data to a population of wolverines (Gulo gulo) in Southeast Alaska in 2008. We used camera-trapping data collected from 37 cameras in a 2,140-km² area of forested and open habitats largely enclosed by ocean and glacial icefields. We detected 21 unique individuals 115 times. Wolverines exhibited a strong positive trap response, with an increased tendency to revisit previously visited traps. Under the trap-response model, we estimated wolverine density at 9.7 individuals/1,000 km² (95% Bayesian CI: 5.9-15.0). Our model provides a formal statistical framework for estimating density from wolverine camera-trapping studies that accounts for a behavioral response due to baited traps. Further, our model-based estimator does not have strict requirements about the spatial configuration of traps or length of trapping sessions, providing considerable operational flexibility in the development of field studies.

Royle, J. Andrew; Magoun, Audrey J.; Gardner, Beth; Valkenbury, Patrick; Lowell, Richard E.

2011-01-01

303

RS-Forest: A Rapid Density Estimator for Streaming Anomaly Detection

Anomaly detection in streaming data is of high interest in numerous application domains. In this paper, we propose a novel one-class semi-supervised algorithm to detect anomalies in streaming data. Underlying the algorithm is a fast and accurate density estimator implemented by multiple fully randomized space trees (RS-Trees), named RS-Forest. The piecewise constant density estimate of each RS-tree is defined on the tree node into which an instance falls. Each incoming instance in a data stream is scored by the density estimates averaged over all trees in the forest. Two strategies, statistical attribute range estimation of high probability guarantee and dual node profiles for rapid model update, are seamlessly integrated into RS-Forest to systematically address the ever-evolving nature of data streams. We derive the theoretical upper bound for the proposed algorithm and analyze its asymptotic properties via bias-variance decomposition. Empirical comparisons to the state-of-the-art methods on multiple benchmark datasets demonstrate that the proposed method features high detection rate, fast response, and insensitivity to most of the parameter settings. Algorithm implementations and datasets are available upon request.
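The per-tree density estimate described above is piecewise constant on the leaf into which an instance falls, and scores are averaged over the forest. A much-simplified one-dimensional sketch of that idea follows; it is not the authors' implementation, and the tree depth, data, and value range are illustrative only:

```python
import random

def build_tree(lo, hi, depth, rng):
    """Fully random space tree over [lo, hi): split points are chosen
    without looking at the data, as in an RS-Tree."""
    if depth == 0:
        return ('leaf', lo, hi)
    s = rng.uniform(lo, hi)
    return ('node', s,
            build_tree(lo, s, depth - 1, rng),
            build_tree(s, hi, depth - 1, rng))

def leaf_bounds(tree, x):
    """Descend to the leaf interval into which x falls."""
    while tree[0] == 'node':
        _, s, left, right = tree
        tree = left if x < s else right
    return tree[1], tree[2]

def forest_density(forest, data, x):
    """Average the piecewise-constant estimates: (points in leaf) / (n * leaf width)."""
    n = len(data)
    total = 0.0
    for tree in forest:
        lo, hi = leaf_bounds(tree, x)
        count = sum(lo <= d < hi for d in data)
        total += count / (n * (hi - lo))
    return total / len(forest)

rng = random.Random(0)
forest = [build_tree(0.0, 1.0, 4, rng) for _ in range(50)]
data = [d for d in (rng.gauss(0.3, 0.05) for _ in range(500)) if 0.0 <= d < 1.0]
# Density near the cluster at 0.3 should far exceed density in the empty tail,
# so points near 0.9 would receive low density (i.e., high anomaly) scores
print(forest_density(forest, data, 0.3) > forest_density(forest, data, 0.9))
```

Because the splits never examine the data, the trees can be built before the stream arrives, which is what makes the estimator fast enough for streaming use.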

Wu, Ke; Zhang, Kun; Fan, Wei; Edwards, Andrea; Yu, Philip S.

2015-01-01

304

Unbiased Estimate of Dark Energy Density from Type Ia Supernova Data

Type Ia supernovae (SNe Ia) are currently the best probes of the dark energy in the universe. To constrain the nature of dark energy in a model-independent manner, we allow the density of dark energy, $\rho_X(z)$, to be an arbitrary function of redshift. Using simulated data from a space-based supernova pencil beam survey, we find that by optimizing the number of parameters used to parametrize the dimensionless dark energy density, $f(z)=\rho_X(z)/\rho_X(z=0)$, we can obtain an unbiased estimate of both f(z) and $\Omega_m$ (assuming a flat universe and that the weak energy condition is satisfied). A plausible supernova pencil beam survey (with a square degree field of view and for an observational duration of one year) can yield about 2000 SNe Ia with $0\le z \le 2$. Such a survey in space would yield SN peak luminosities with a combined intrinsic and observational dispersion of $\sigma(m_{int})=0.16$ mag. We find that for such an idealized survey, $\Omega_m$ can be measured to 10% accuracy, and f(z) can be estimated to $\sim$ 20% to $z \sim 1.5$, and $\sim$ 20-40% to $z \sim 2$, depending on the time dependence of the true dark energy density. Dark energy densities which vary more slowly can be more accurately measured. For the anticipated SNAP mission, $\Omega_m$ can be measured to 14% accuracy, and f(z) can be estimated to $\sim$ 20% to $z \sim 1.2$. Our results suggest that SNAP may gain much sensitivity to the time-dependence of f(z) and $\Omega_m$ by devoting more observational time to the central pencil beam fields to obtain more SNe Ia at z>1.2. We also find that Monte Carlo analysis gives a more accurate estimate of the dark energy density than the maximum likelihood analysis. (abridged)

Yun Wang; Geoffrey Lovelace

2001-09-17

305

Scatterer number density considerations in reference phantom-based attenuation estimation.

Attenuation estimation and imaging have the potential to be a valuable tool for tissue characterization, particularly for indicating the extent of thermal ablation therapy in the liver. Often the performance of attenuation estimation algorithms is characterized with numerical simulations or tissue-mimicking phantoms containing a high scatterer number density (SND). This ensures an ultrasound signal with a Rayleigh distributed envelope and a signal-to-noise ratio (SNR) approaching 1.91. However, biological tissue often fails to exhibit Rayleigh scattering statistics. For example, across 1647 regions of interest in five ex vivo bovine livers, we obtained an envelope SNR of 1.10 ± 0.12 when the tissue was imaged with the VFX 9L4 linear array transducer at a center frequency of 6.0 MHz on a Siemens S2000 scanner. In this article, we examine attenuation estimation in numerical phantoms, tissue-mimicking phantoms with variable SNDs and ex vivo bovine liver before and after thermal coagulation. We find that reference phantom-based attenuation estimation is robust to small deviations from Rayleigh statistics. However, in tissue with low SNDs, large deviations in envelope SNR from 1.91 lead to subsequently large increases in attenuation estimation variance. At the same time, low SND is not found to be a significant source of bias in the attenuation estimate. For example, we find that the standard deviation of attenuation slope estimates increases from 0.07 to 0.25 dB/cm-MHz as the envelope SNR decreases from 1.78 to 1.01 when estimating attenuation slope in tissue-mimicking phantoms with a large estimation kernel size (16 mm axially × 15 mm laterally). Meanwhile, the bias in the attenuation slope estimates is found to be negligible (<0.01 dB/cm-MHz). We also compare results obtained with reference phantom-based attenuation estimates in ex vivo bovine liver and thermally coagulated bovine liver. PMID:24726800
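The 1.91 figure quoted above is the envelope SNR of fully developed speckle: a Rayleigh distribution with scale σ has mean σ√(π/2) and variance σ²(2 − π/2), so mean/std ≈ 1.91 regardless of σ. A quick sanity check by simulation (sample size and seed are arbitrary):

```python
import math
import random

def rayleigh_envelope_snr(n=100_000, sigma=1.0, seed=7):
    """Simulate a Rayleigh envelope (magnitude of a complex Gaussian speckle
    signal) and return its SNR, defined as mean / standard deviation."""
    rng = random.Random(seed)
    env = [math.hypot(rng.gauss(0, sigma), rng.gauss(0, sigma)) for _ in range(n)]
    mean = sum(env) / n
    std = math.sqrt(sum((e - mean) ** 2 for e in env) / n)
    return mean / std

# Closed form: mean = sigma*sqrt(pi/2), var = sigma^2 * (2 - pi/2)
theory = math.sqrt(math.pi / 2) / math.sqrt(2 - math.pi / 2)
print(round(theory, 2))                           # 1.91
print(abs(rayleigh_envelope_snr() - theory) < 0.02)
```

Low scatterer number densities break the Rayleigh assumption, which is why the measured envelope SNR of 1.10 in the bovine liver data signals non-Rayleigh statistics.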

Rubert, Nicholas; Varghese, Tomy

2014-07-01

306

Snags (standing dead trees) are an essential structural component of forests. Because wildlife use of snags depends on size and decay stage, snag density estimation without any information about snag quality attributes is of little value for wildlife management decision makers. Little work has been done to develop models that allow multivariate estimation of snag density by snag quality class. Using climate, topography, Landsat TM data, stand age and forest type collected for 2356 forested Forest Inventory and Analysis plots in western Washington and western Oregon, we evaluated two multivariate techniques for their abilities to estimate density of snags by three decay classes. The density of live trees and snags in three decay classes (D1: recently dead, little decay; D2: decay, without top, some branches and bark missing; D3: extensive decay, missing bark and most branches) with diameter at breast height (DBH) ≥ 12.7 cm was estimated using a nonparametric random forest nearest neighbor imputation technique (RF) and a parametric two-stage model (QPORD), for which the number of trees per hectare was estimated with a Quasipoisson model in the first stage and the probability of belonging to a tree status class (live, D1, D2, D3) was estimated with an ordinal regression model in the second stage. The presence of large snags with DBH ≥ 50 cm was predicted using a logistic regression and RF imputation. Because of the more homogenous conditions on private forest lands, snag density by decay class was predicted with higher accuracies on private forest lands than on public lands, while presence of large snags was more accurately predicted on public lands, owing to the higher prevalence of large snags on public lands. RF outperformed the QPORD model in terms of percent accurate predictions, while QPORD provided smaller root mean square errors in predicting snag density by decay class.
The logistic regression model achieved more accurate presence/absence classification of large snags than the RF imputation approach. Adjusting the decision threshold to account for the unequal sizes of the presence and absence classes is more straightforward for the logistic regression than for the RF imputation approach. Overall, model accuracies were poor in this study, which can be attributed to the poor predictive quality of the explanatory variables and the large range of forest types and geographic conditions observed in the data.

Eskelson, Bianca N.I.; Hagar, Joan; Temesgen, Hailemariam

2012-01-01

307

NSDL National Science Digital Library

What is Density? Density is the amount of "stuff" in a given "space". In science terms that means the amount of "mass" per unit "volume". Using units that means the amount of "grams" per "centimeters cubed". Check out the following links and learn about density through song! Density Beatles Style Density Chipmunk Style Density Rap Enjoy! ...

Miss Witcher

2011-10-06

308

NASA Astrophysics Data System (ADS)

Pyroclastic density current deposits remobilized by water during periods of heavy rainfall trigger lahars (volcanic mudflows) that affect inhabited areas at considerable distance from volcanoes, even years after an eruption. Here we present an innovative approach to detect and estimate the thickness and volume of pyroclastic density current (PDC) deposits as well as erosional versus depositional environments. We use SAR interferometry to compare an airborne digital surface model (DSM) acquired in 2004 to a post-eruption 2010 DSM created using COSMO-SkyMed satellite data to estimate the volume of 2010 Merapi eruption PDC deposits along the Gendol river (Kali Gendol, KG). Results show PDC thicknesses of up to 75 m in canyons and a volume of about 40 × 10⁶ m³, mainly along KG, and at distances of up to 16 km from the volcano summit. This volume estimate corresponds mainly to the 2010 pyroclastic deposits along the KG - material that is potentially available to produce lahars. Our volume estimate is approximately twice that estimated by field studies, a difference we consider acceptable given the uncertainties involved in both satellite- and field-based methods. Our technique can be used to rapidly evaluate volumes of PDC deposits at active volcanoes, in remote settings and where continuous activity may prevent field observations.
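The DSM-differencing step reduces to summing positive elevation change times cell area over the grid (negative change indicating erosion rather than deposition). A toy sketch; the grids, elevations, and cell size below are hypothetical, not the study's data:

```python
def deposit_volume(dsm_pre, dsm_post, cell_area):
    """Deposit volume from differencing two DSMs: sum of positive elevation
    change times cell area. Cells with negative change (erosion) contribute 0."""
    return sum(max(post - pre, 0.0) * cell_area
               for pre, post in zip(dsm_pre, dsm_post))

# Hypothetical 4-cell elevation grids (m) on 10 m x 10 m cells
pre  = [100.0, 102.0, 98.0, 101.0]
post = [130.0, 150.0, 98.0, 100.0]   # thick canyon fill in two cells, slight erosion in one
print(deposit_volume(pre, post, cell_area=100.0))   # 7800.0 (m^3)
```

In practice each DSM carries vertical error, which is one reason the satellite- and field-based volume estimates quoted above can differ by a factor of two.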

Bignami, Christian; Ruch, Joel; Chini, Marco; Neri, Marco; Buongiorno, Maria Fabrizia; Hidayati, Sri; Sayudi, Dewi Sri; Surono

2013-07-01

309

We combined Breeding Bird Survey point count protocol and distance sampling to survey spring migrant and breeding birds in Vicksburg National Military Park on 33 days between March and June of 2003 and 2004. For 26 of 106 detected species, we used program DISTANCE to estimate detection probabilities and densities from 660 3-min point counts in which detections were recorded within four distance annuli. For most species, estimates of detection probability, and thereby density estimates, were improved through incorporation of the proportion of forest cover at point count locations as a covariate. Our results suggest Breeding Bird Surveys would benefit from the use of distance sampling and a quantitative characterization of habitat at point count locations. During spring migration, we estimated that the most common migrant species accounted for a population of 5000-9000 birds in Vicksburg National Military Park (636 ha). Species with average populations of 300 individuals during migration were: Blue-gray Gnatcatcher (Polioptila caerulea), Cedar Waxwing (Bombycilla cedrorum), White-eyed Vireo (Vireo griseus), Indigo Bunting (Passerina cyanea), and Ruby-crowned Kinglet (Regulus calendula). Of 56 species that bred in Vicksburg National Military Park, we estimated that the most common 18 species accounted for 8150 individuals. The six most abundant breeding species, Blue-gray Gnatcatcher, White-eyed Vireo, Summer Tanager (Piranga rubra), Northern Cardinal (Cardinalis cardinalis), Carolina Wren (Thryothorus ludovicianus), and Brown-headed Cowbird (Molothrus ater), accounted for 5800 individuals.

Somershoe, S.G.; Twedt, D.J.; Reid, B.

2006-01-01

310

Estimating the Galactic Coronal Density via Ram-Pressure Stripping from Dwarf Satellites

NASA Astrophysics Data System (ADS)

Cosmological simulations and theories of galaxy formation predict that the Milky Way should be embedded in an extended hot gaseous halo or corona. To date, a definitive detection of such a corona in the Milky Way remains elusive. We have attempted to estimate the density of the Milky Way's cosmological corona using the effect that it has on the surrounding population of dwarf galaxies. We have considered two dSphs close to the Galaxy: Sextans and Carina. Assuming that they have lost all their gas during the last pericentric passage via ram-pressure stripping, we were able to estimate the average density (n ≈ 2 × 10⁻⁴ cm⁻³) of the corona at a distance of ~70 kpc from the Milky Way. If we consider an isothermal profile and extrapolate it to large radii, the corona could contain a significant fraction of the missing baryons associated with the Milky Way.

Gatto, A.; Fraternali, F.; Marinacci, F.; Read, J.; Lux, H.

311

Use of spatial capture-recapture modeling and DNA data to estimate densities of elusive animals

Assessment of abundance, survival, recruitment rates, and density (i.e., population assessment) is especially challenging for elusive species most in need of protection (e.g., rare carnivores). Individual identification methods, such as DNA sampling, provide ways of studying such species efficiently and noninvasively. Additionally, statistical methods that correct for undetected animals and account for locations where animals are captured are available to efficiently estimate density and other demographic parameters. We collected hair samples of European wildcat (Felis silvestris) from cheek-rub lure sticks, extracted DNA from the samples, and identified each animal's genotype. To estimate the density of wildcats, we used Bayesian inference in a spatial capture-recapture model. We used WinBUGS to fit a model that accounted for differences in detection probability among individuals and seasons and between two lure arrays. We detected 21 individual wildcats (including possible hybrids) 47 times. Wildcat density was estimated at 0.29/km² (SE 0.06), and 95% of the activity of wildcats was estimated to occur within 1.83 km from their home-range center. Lures located systematically were associated with a greater number of detections than lures placed in a cell on the basis of expert opinion. Detection probability of individual cats was greatest in late March. Our model is a generalized linear mixed model; hence, it can be easily extended, for instance, to incorporate trap- and individual-level covariates. We believe that the combined use of noninvasive sampling techniques and spatial capture-recapture models will improve population assessments, especially for rare and elusive animals.
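In spatial capture-recapture models, detection probability typically declines with distance from an animal's home-range center, often via a half-normal function; under a bivariate-normal activity model, 95% of activity falls within σ√5.99 of the center. A sketch that back-solves the movement scale σ from the 1.83 km figure above (the baseline probability p0 is a hypothetical placeholder, not from the paper):

```python
import math

# Under a bivariate-normal activity model (common in SCR), 95% of activity
# falls within r95 = sigma * sqrt(5.99) of the home-range centre
r95 = 1.83                        # km, the figure reported in the abstract
sigma = r95 / math.sqrt(5.99)     # implied movement scale

def detection_prob(d, p0, sigma):
    """Half-normal detection function: baseline p0 decaying with distance d."""
    return p0 * math.exp(-d * d / (2 * sigma * sigma))

p0 = 0.1                          # hypothetical baseline detection probability
print(round(sigma, 2))            # 0.75 (km)
print(detection_prob(0.0, p0, sigma) == p0)
print(round(detection_prob(r95, p0, sigma) / p0, 2))   # 0.05: 5% of baseline at r95
```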

Kery, Marc; Gardner, Beth; Stoeckle, Tabea; Weber, Darius; Royle, J. Andrew

2011-01-01

312

Estimation of the high-latitude topside electron heat flux using DMSP plasma density measurements

The high-latitude ionosphere interfaces with the hot, tenuous, magnetospheric plasma, and a heat flow into the ionosphere is expected, which has a large impact on the plasma densities and temperatures in the high-latitude ionosphere. The value of this magnetospheric heat flux is unknown. In an effort to estimate the value of the magnetospheric heat flux into the high-latitude ionosphere, and

Hamed A. Bekerat; Robert W. Schunk; Ludger Scherliess

2007-01-01

313

NASA Astrophysics Data System (ADS)

The Population Density Tables (PDT) project at Oak Ridge National Laboratory (www.ornl.gov) is developing population density estimates for specific human activities under normal patterns of life based largely on information available in open source. Currently, activity-based density estimates are based on simple summary data statistics such as range and mean. Researchers are interested in improving activity estimation and uncertainty quantification by adopting a Bayesian framework that considers both data and sociocultural knowledge. Under a Bayesian approach, knowledge about population density may be encoded through the process of expert elicitation. Due to the scale of the PDT effort which considers over 250 countries, spans 50 human activity categories, and includes numerous contributors, an elicitation tool is required that can be operationalized within an enterprise data collection and reporting system. Such a method would ideally require that the contributor have minimal statistical knowledge, require minimal input by a statistician or facilitator, consider human difficulties in expressing qualitative knowledge in a quantitative setting, and provide methods by which the contributor can appraise whether their understanding and associated uncertainty was well captured. This paper introduces an algorithm that transforms answers to simple, non-statistical questions into a bivariate Gaussian distribution as the prior for the Beta distribution. Based on geometric properties of the Beta distribution parameter feasibility space and the bivariate Gaussian distribution, an automated method for encoding is developed that responds to these challenging enterprise requirements. Though created within the context of population density, this approach may be applicable to a wide array of problem domains requiring informative priors for the Beta distribution.
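The elicitation pipeline described above ultimately produces an informative prior for a Beta distribution. As a much simpler stand-in for the paper's bivariate-Gaussian encoding, a method-of-moments mapping from an elicited mean and spread to Beta(a, b) parameters looks like this (the example numbers are hypothetical):

```python
def beta_from_moments(mean, sd):
    """Method-of-moments encoding: map an elicited mean and spread (sd) on
    (0, 1) to Beta(a, b) parameters. Raises if the moments are infeasible."""
    var = sd * sd
    if not 0 < mean < 1 or var >= mean * (1 - mean):
        raise ValueError("moments not feasible for a Beta distribution")
    k = mean * (1 - mean) / var - 1      # k = a + b
    return mean * k, (1 - mean) * k

# E.g. a contributor believes a proportion is about 0.2 with spread 0.1
a, b = beta_from_moments(0.2, 0.1)
print(round(a, 6), round(b, 6))          # 3.0 12.0 -> Beta(3, 12), mean 0.2, sd 0.1
```

The feasibility check mirrors the geometric constraint mentioned in the abstract: not every (mean, variance) pair lies inside the Beta parameter space.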

Stewart, Robert; White, Devin; Urban, Marie; Morton, April; Webster, Clayton; Stoyanov, Miroslav; Bright, Eddie; Bhaduri, Budhendra L.

2013-05-01

314

NASA Astrophysics Data System (ADS)

A methodology for semiempirical density functional optimization, using regularization and cross-validation methods from machine learning, is developed. We demonstrate that such methods enable well-behaved exchange-correlation approximations in very flexible model spaces, thus avoiding the overfitting found when standard least-squares methods are applied to high-order polynomial expansions. A general-purpose density functional for surface science and catalysis studies should accurately describe bond breaking and formation in chemistry, solid state physics, and surface chemistry, and should preferably also include van der Waals dispersion interactions. Such a functional necessarily compromises between describing fundamentally different types of interactions, making transferability of the density functional approximation a key issue. We investigate this trade-off between describing the energetics of intramolecular and intermolecular, bulk solid, and surface chemical bonding, and the developed optimization method explicitly handles making the compromise based on the directions in model space favored by different materials properties. The approach is applied to designing the Bayesian error estimation functional with van der Waals correlation (BEEF-vdW), a semilocal approximation with an additional nonlocal correlation term. Furthermore, an ensemble of functionals around BEEF-vdW comes out naturally, offering an estimate of the computational error. An extensive assessment on a range of data sets validates the applicability of BEEF-vdW to studies in chemistry and condensed matter physics. Applications of the approximation and its Bayesian ensemble error estimate to two intricate surface science problems support this.

Wellendorff, Jess; Lundgaard, Keld T.; Møgelhøj, Andreas; Petzold, Vivien; Landis, David D.; Nørskov, Jens K.; Bligaard, Thomas; Jacobsen, Karsten W.

2012-06-01

315

Kernel density estimation applied to bond length, bond angle, and torsion angle distributions.

We describe the method of kernel density estimation (KDE) and apply it to molecular structure data. KDE is a quite general nonparametric statistical method suitable even for multimodal data. The method generates smooth probability density function (PDF) representations and finds application in diverse fields such as signal processing and econometrics. KDE appears to have been under-utilized as a method in molecular geometry analysis, chemo-informatics, and molecular structure optimization. The resulting probability densities have advantages over histograms and, importantly, are also suitable for gradient-based optimization. To illustrate KDE, we describe its application to chemical bond length, bond valence angle, and torsion angle distributions and show the ability of the method to model arbitrary torsion angle distributions. PMID:24746022
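A minimal one-dimensional Gaussian KDE can be written out directly: the estimate is an equal-weight mixture of Gaussians, one centered on each data point. The bandwidth and the torsion-angle-like sample below are illustrative, not from the paper:

```python
import math

def gaussian_kde(data, bandwidth):
    """Return a smooth PDF estimate: an equal-weight mixture of Gaussian
    kernels, one centred on each data point."""
    n = len(data)
    norm = 1.0 / (n * bandwidth * math.sqrt(2 * math.pi))
    def pdf(x):
        return norm * sum(math.exp(-0.5 * ((x - xi) / bandwidth) ** 2) for xi in data)
    return pdf

# Hypothetical torsion-angle-like sample (degrees), bimodal near 60 and 180
angles = [58, 61, 59, 63, 178, 182, 179, 181, 60, 180]
pdf = gaussian_kde(angles, bandwidth=5.0)
print(pdf(60) > pdf(120))    # True: high density at a mode, low in the gap
```

Unlike a histogram, this pdf is smooth and differentiable everywhere, which is what makes it usable inside gradient-based structure optimization as the abstract notes.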

McCabe, Patrick; Korb, Oliver; Cole, Jason

2014-05-27

316

A Wiener-Wavelet-Based filter for de-noising satellite soil moisture retrievals

NASA Astrophysics Data System (ADS)

The reduction of noise in microwave satellite soil moisture (SM) retrievals is of paramount importance for practical applications, especially for those associated with the study of climate changes, droughts, floods and other related hydrological processes. So far, Fourier based methods have been used for de-noising satellite SM retrievals by filtering either the observed emissivity time series (Du, 2012) or the retrieved SM observations (Su et al. 2013). This contribution introduces an alternative approach based on a Wiener-Wavelet-Based filtering (WWB) technique, which uses the Entropy-Based Wavelet de-noising method developed by Sang et al. (2009) to design both a causal and a non-causal version of the filter. WWB is used as a post-retrieval processing tool to enhance the quality of observations derived from the i) Advanced Microwave Scanning Radiometer for the Earth observing system (AMSR-E), ii) the Advanced SCATterometer (ASCAT), and iii) the Soil Moisture and Ocean Salinity (SMOS) satellite. The method is tested on three pilot sites located in Spain (Remedhus Network), in Greece (Hydrological Observatory of Athens) and in Australia (Oznet network), respectively. Different quantitative criteria are used to judge the goodness of the de-noising technique. Results show that WWB i) is able to improve both the correlation and the root mean squared differences between satellite retrievals and in situ soil moisture observations, and ii) effectively separates random noise from deterministic components of the retrieved signals. Moreover, the use of WWB de-noised data in place of raw observations within a hydrological application confirms the usefulness of the proposed filtering technique. Du, J. (2012), A method to improve satellite soil moisture retrievals based on Fourier analysis, Geophys. Res. Lett., 39, L15404, doi:10.1029/2012GL052435. Su, C.-H., D. Ryu, A. W. Western, and W. Wagner (2013), De-noising of passive and active microwave satellite soil moisture time series, Geophys. Res. Lett., 40, 3624-3630, doi:10.1002/grl.50695. Sang, Y.-F., D. Wang, J.-C. Wu, Q.-P. Zhu, and L. Wang (2009), Entropy-Based Wavelet De-noising Method for Time Series Analysis, Entropy, 11, pp. 1123-1148, doi:10.3390/e11041123.
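Wavelet de-noising of the kind the WWB filter builds on follows a standard pattern: transform, shrink the detail coefficients, inverse-transform. A minimal sketch with a one-level Haar transform and soft thresholding; the signal and threshold are illustrative, not the WWB filter itself:

```python
import math

def haar(x):
    """One-level Haar transform: pairwise (approximation, detail) coefficients."""
    a = [(x[2*i] + x[2*i+1]) / math.sqrt(2) for i in range(len(x) // 2)]
    d = [(x[2*i] - x[2*i+1]) / math.sqrt(2) for i in range(len(x) // 2)]
    return a, d

def inverse_haar(a, d):
    """Invert the one-level Haar transform."""
    out = []
    for ai, di in zip(a, d):
        out += [(ai + di) / math.sqrt(2), (ai - di) / math.sqrt(2)]
    return out

def soft(c, t):
    """Soft thresholding: shrink a detail coefficient toward zero by t."""
    return math.copysign(max(abs(c) - t, 0.0), c)

# Noisy two-level step signal; the small detail coefficients carry the noise
signal = [1.0, 1.1, 0.9, 1.0, 3.0, 3.1, 2.9, 3.0]
a, d = haar(signal)
denoised = inverse_haar(a, [soft(c, 0.15) for c in d])
print([round(v, 2) for v in denoised])   # pairwise-smoothed: noise removed, step kept
```

The entropy-based variant cited above chooses the threshold and decomposition level from the information content of the coefficients rather than fixing them by hand.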

Massari, Christian; Brocca, Luca; Ciabatta, Luca; Moramarco, Tommaso; Su, Chun-Hsu; Ryu, Dongryeol; Wagner, Wolfgang

2014-05-01

317

Examining the impact of the precision of address geocoding on estimated density of crime locations

NASA Astrophysics Data System (ADS)

This study examines the impact of the precision of address geocoding on the estimated density of crime locations in a large urban area of Japan. The data consist of two separate sets of the same Penal Code offenses known to the police that occurred during a nine-month period of April 1, 2001 through December 31, 2001 in the central 23 wards of Tokyo. These two data sets are derived from the older and newer recording systems of the Tokyo Metropolitan Police Department (TMPD), which revised its crime reporting system in that year so that more precise location information than in previous years could be recorded. Each of these data sets was address-geocoded onto a large-scale digital map using our hierarchical address-geocoding schema, and we examined how such differences in the precision of address information, and the resulting differences in address-geocoded incident locations, affect the patterns in kernel density maps. An analysis using 11,096 pairs of incidents of residential burglary (each pair consists of the same incident geocoded using older and newer address information, respectively) indicates that kernel density estimation with a cell size of 25×25 m and a bandwidth of 500 m may work quite well in absorbing the poorer precision of geocoded locations based on data from the older recording system, whereas in several areas where the older recording system resulted in very poor precision, the inaccuracy of incident locations may produce artifactual and potentially misleading patterns in kernel density maps.

Harada, Yutaka; Shimada, Takahito

2006-10-01

318

Nonparametric Bayesian density estimation on manifolds with applications to planar shapes

Statistical analysis on landmark-based shape spaces has diverse applications in morphometrics, medical diagnostics, machine vision and other areas. These shape spaces are non-Euclidean quotient manifolds. To conduct nonparametric inferences, one may define notions of centre and spread on this manifold and work with their estimates. However, it is useful to consider full likelihood-based methods, which allow nonparametric estimation of the probability density. This article proposes a broad class of mixture models constructed using suitable kernels on a general compact metric space and then on the planar shape space in particular. Following a Bayesian approach with a nonparametric prior on the mixing distribution, conditions are obtained under which the Kullback–Leibler property holds, implying large support and weak posterior consistency. Gibbs sampling methods are developed for posterior computation, and the methods are applied to problems in density estimation and classification with shape-based predictors. Simulation studies show improved estimation performance relative to existing approaches. PMID:22822255

Bhattacharya, Abhishek; Dunson, David B.

2010-01-01

319

NASA Astrophysics Data System (ADS)

Performance of algorithms for target signal detection in Hyperspectral Imagery (HSI) is often deteriorated when the data is neither statistically homogeneous nor Gaussian or when its Joint Probability Density (JPD) does not match any presumed particular parametric model. In this paper we propose a novel detection algorithm which first attempts at dividing data domain into mostly Gaussian and mostly Non-Gaussian (NG) subspaces, and then estimates the JPD of the NG subspace with a non-parametric Graph-based estimator. It then combines commonly used detection algorithms operating on the mostly-Gaussian sub-space and an LRT calculated directly with the estimated JPD of the NG sub-space, to detect anomalies and known additive-type target signals. The algorithm performance is compared to commonly used algorithms and is found to be superior in some important cases.

Tidhar, G. A.; Rotman, S. R.

2013-05-01

320

Examination of the influence of data aggregation and sampling density on spatial estimation

NASA Astrophysics Data System (ADS)

Spatial processes may be sampled by point sampling or by aggregate sampling. If aggregate samples are collected over a regular grid and used to represent the central point of each aggregation area, the aggregate sampling functions as a low-pass filter and may eliminate aliasing during spatial estimation. To assess potential accuracy improvements, a numerical procedure for calculating the estimation error variance was developed. Analysis of point and block sampling techniques for kriging and inverse distance interpolation showed that for the same sampling density, block sampling provides better estimation. To achieve the same error levels, over 30%-50% more point samples were required than block samples. Furthermore, interpolation of block sampled data resulted in lower error variability and surfaces with more visual appeal.
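The inverse-distance interpolation compared in the study can be written compactly: each sample contributes in proportion to an inverse power of its distance to the query point. A sketch; the sample points and power parameter below are illustrative:

```python
def idw(points, x, y, power=2):
    """Inverse-distance-weighted estimate at (x, y) from (xi, yi, value) samples.
    Weights are 1 / distance**power; an exact hit returns the sample value."""
    num = den = 0.0
    for xi, yi, v in points:
        d2 = (x - xi) ** 2 + (y - yi) ** 2
        if d2 == 0.0:
            return v                      # query coincides with a sample location
        w = d2 ** (-power / 2)
        num += w * v
        den += w
    return num / den

# Hypothetical block samples, each value assigned to the centre of its grid cell
samples = [(0, 0, 1.0), (1, 0, 2.0), (0, 1, 2.0), (1, 1, 3.0)]
print(idw(samples, 0.5, 0.5))   # 2.0 by symmetry
print(idw(samples, 0, 0))       # 1.0 (exact hit)
```

Whether the tuples above represent point samples or block averages assigned to cell centers is exactly the distinction the study examines: the same interpolator fed block-averaged values behaves like a low-pass-filtered version of the field.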

Vucetic, Slobodan; Fiez, Tim; Obradovic, Zoran

2000-12-01

321

NASA Astrophysics Data System (ADS)

Understanding streamflow variability and the ability to generate realistic scenarios at multi-decadal time scales is important for robust water resources planning and management in any river basin - more so in the Colorado River Basin, with its semi-arid climate and highly stressed water resources. It is increasingly evident that large scale climate forcings such as El Nino Southern Oscillation (ENSO), Pacific Decadal Oscillation (PDO) and Atlantic Multi-decadal Oscillation (AMO) modulate the Colorado River Basin hydrology at multi-decadal time scales. Thus, modeling these large scale climate indicators is important for conditionally modeling the multi-decadal streamflow variability. To this end, we developed a simulation model that combines a wavelet-based time series method, Wavelet Auto Regressive Moving Average (WARMA), with a K-nearest neighbor (K-NN) bootstrap approach. In this, for a given time series (climate forcings), dominant periodicities/frequency bands are identified from the wavelet spectrum that pass the 90% significance test. The time series is filtered at these frequencies in each band to create 'components'; the components are orthogonal and, when added to the residual (i.e., noise), recover the original time series. The components, being smooth, are easily modeled using parsimonious Auto Regressive Moving Average (ARMA) time series models. The fitted ARMA models are used to simulate the individual components, which are added to obtain a simulation of the original series. The WARMA approach is applied to all the climate forcing indicators, which are used to simulate multi-decadal sequences of these forcings.
For the current year, the simulated forcings are considered the 'feature vector' and its K nearest neighbors are identified; one of the neighbors (i.e., one of the historical years) is resampled using a weighted probability metric (with more weight on the nearest neighbor and least on the farthest), and the corresponding streamflow is the simulated value for the current year. We applied this simulation approach to the climate indicators and streamflow at Lees Ferry, AZ in the Colorado River Basin, which is a key gauge on the river, using data from the observational and paleo periods together spanning 1650 - 2005. A suite of distributional statistics such as the Probability Density Function (PDF), mean, variance, skew and lag-1 autocorrelation, along with higher order and multi-decadal statistics such as spectra and drought and surplus statistics, are computed to check the performance of the flow simulation in capturing the variability of the historic and paleo periods. Our results indicate that this approach robustly reproduces all of the above-mentioned statistical properties. This offers an attractive alternative for near term (interannual to multi-decadal) flow simulation that is critical for water resources planning.
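The K-NN resampling step can be sketched in isolation: rank the historical years by distance to the simulated forcing, then draw one of the K nearest with weights that decrease in rank. The data and one-dimensional feature here are hypothetical; the study uses multivariate climate forcings:

```python
import random

def knn_resample(feature, history, k, rng):
    """K-NN bootstrap: rank historical years by distance to the simulated
    forcing `feature`, then pick one of the K nearest with weight 1/rank."""
    ranked = sorted(history, key=lambda h: abs(h[0] - feature))[:k]
    weights = [1.0 / (i + 1) for i in range(k)]
    r = rng.random() * sum(weights)
    acc = 0.0
    for (feat, flow), w in zip(ranked, weights):
        acc += w
        if r <= acc:
            return flow
    return ranked[-1][1]

# Hypothetical (climate-index, streamflow) pairs for five historical years
history = [(-1.2, 80.0), (-0.5, 95.0), (0.1, 110.0), (0.8, 130.0), (1.5, 150.0)]
rng = random.Random(1)
sims = [knn_resample(0.0, history, k=3, rng=rng) for _ in range(1000)]
print(set(sims) <= {95.0, 110.0, 130.0})   # only the 3 nearest years are sampled
```

Because flows are resampled from observed/paleo years, the simulated values are always physically plausible, while the weighting preserves the conditioning on the climate state.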

Erkyihun, S. T.

2013-12-01

322

Measurement and estimation for density of NaNO2-KNO3-NaNO3 ternary molten salts

The densities of the NaNO2-KNO3-NaNO3 ternary molten salt system were measured using the Archimedean method. A method of density estimation was introduced. The results show that the molar volume is additive in NaNO2-KNO3-NaNO3 ternary molten salt mixtures. Accordingly, an estimating equation for the density was obtained, which showed good agreement with the experimental data.
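Molar-volume additivity means the mixture volume is the mole-fraction-weighted sum of component molar volumes, V = Σ xᵢMᵢ/ρᵢ, so ρ_mix = Σ xᵢMᵢ / V. A sketch of that arithmetic; the pure-component melt densities and the composition below are hypothetical placeholders, not the paper's measurements:

```python
# Molar masses (g/mol) of the three salts
M = {'NaNO2': 69.00, 'KNO3': 101.10, 'NaNO3': 85.00}

def mixture_density(x, rho):
    """Additive-molar-volume estimate: V = sum(x_i * M_i / rho_i),
    rho_mix = sum(x_i * M_i) / V.  x: mole fractions, rho: pure densities."""
    mass = sum(xi * M[s] for s, xi in x.items())
    volume = sum(xi * M[s] / rho[s] for s, xi in x.items())
    return mass / volume

# Hypothetical pure-component melt densities (g/cm^3) at a common temperature
rho = {'NaNO2': 1.81, 'KNO3': 1.87, 'NaNO3': 1.90}
x = {'NaNO2': 0.4, 'KNO3': 0.3, 'NaNO3': 0.3}   # mole fractions, sum to 1
print(round(mixture_density(x, rho), 3))
```

The estimate necessarily falls between the smallest and largest pure-component densities, which is a quick consistency check on any additivity-based prediction.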

Fengguo Liu; Bingliang Gao; Shixing Wang; Zhaowen Wang; Zhongning Shi

2009-01-01

323

A New Robust Approach for Highway Traffic Density Estimation

An observer is designed for the uncertain graph-constrained Switching Mode Model (SMM), which is used to describe the highway traffic density; density reconstruction via a switching observer is demonstrated on an instrumented 2.2 km highway section of Grenoble.

Fabio Morbidi; Luis León Ojeda

Université Paris-Sud XI

324

NSDL National Science Digital Library

What is density? Density is a relationship between mass (usually in grams or kilograms) and volume (usually in L, mL or cm³). Below are several sites to help you further understand the concept of density. Click the following link to review the concept of density. Be sure to read each slide and watch each video: Chemistry Review: Density Watch the following video: Pop density video The following is a fun interactive site you can use to review density. Your job is #1, to play and #2 to calculate the density of the ...

Mr. Hansen

2010-10-26

325

Biodiversity losses are occurring worldwide due to a combination of stressors. For example, by one estimate, 40% of amphibian species are vulnerable to extinction, and disease is one threat to amphibian populations. The emerging infectious disease chytridiomycosis, caused by the aquatic fungus Batrachochytrium dendrobatidis (Bd), is a contributor to amphibian declines worldwide. Bd research has focused on the dynamics of the pathogen in its amphibian hosts, with little emphasis on investigating the dynamics of free-living Bd. Therefore, we investigated patterns of Bd occupancy and density in amphibian habitats using occupancy models, powerful tools for estimating site occupancy and detection probability. Occupancy models have been used to investigate diseases where the focus was on pathogen occurrence in the host. We applied occupancy models to investigate free-living Bd in North American surface waters to determine Bd seasonality, relationships between Bd site occupancy and habitat attributes, and probability of detection from water samples as a function of the number of samples, sample volume, and water quality. We also report on the temporal patterns of Bd density from a 4-year case study of a Bd-positive wetland. We provide evidence that Bd occurs in the environment year-round. Bd exhibited temporal and spatial heterogeneity in density, but did not exhibit seasonality in occupancy. Bd was detected in all months, typically at less than 100 zoospores L⁻¹. The highest density observed was ~3 million zoospores L⁻¹. We detected Bd in 47% of sites sampled, but estimated that Bd occupied 61% of sites, highlighting the importance of accounting for imperfect detection. When Bd was present, there was a 95% chance of detecting it with four samples of 600 mL of water or five samples of 60 mL. 
Our findings provide important baseline information to advance the study of Bd disease ecology, and advance our understanding of amphibian exposure to free-living Bd in aquatic habitats over time. PMID:25222122
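The sampling-effort result above (a 95% chance of detection with four samples) follows the standard independent-sample arithmetic, P(detect at least once) = 1 - (1 - p)^n. A small sketch; the per-sample probability below is an illustrative assumption, not a value estimated by the study:

```python
import math

def p_detect(p_sample, n):
    """Probability of at least one detection in n independent samples,
    each with per-sample detection probability p_sample."""
    return 1.0 - (1.0 - p_sample) ** n

def samples_needed(p_sample, target=0.95):
    """Smallest n whose cumulative detection probability meets target."""
    return math.ceil(math.log(1.0 - target) / math.log(1.0 - p_sample))
```

For example, a hypothetical per-sample probability of 0.53 yields four samples for a 95% cumulative chance of detection.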

Chestnut, Tara; Anderson, Chauncey; Popa, Radu; Blaustein, Andrew R.; Voytek, Mary; Olson, Deanna H.; Kirshtein, Julie

2014-01-01

326

Biodiversity losses are occurring worldwide due to a combination of stressors. For example, by one estimate, 40% of amphibian species are vulnerable to extinction, and disease is one threat to amphibian populations. The emerging infectious disease chytridiomycosis, caused by the aquatic fungus Batrachochytrium dendrobatidis (Bd), is a contributor to amphibian declines worldwide. Bd research has focused on the dynamics of the pathogen in its amphibian hosts, with little emphasis on investigating the dynamics of free-living Bd. Therefore, we investigated patterns of Bd occupancy and density in amphibian habitats using occupancy models, powerful tools for estimating site occupancy and detection probability. Occupancy models have been used to investigate diseases where the focus was on pathogen occurrence in the host. We applied occupancy models to investigate free-living Bd in North American surface waters to determine Bd seasonality, relationships between Bd site occupancy and habitat attributes, and probability of detection from water samples as a function of the number of samples, sample volume, and water quality. We also report on the temporal patterns of Bd density from a 4-year case study of a Bd-positive wetland. We provide evidence that Bd occurs in the environment year-round. Bd exhibited temporal and spatial heterogeneity in density, but did not exhibit seasonality in occupancy. Bd was detected in all months, typically at less than 100 zoospores L(-1). The highest density observed was ~3 million zoospores L(-1). We detected Bd in 47% of sites sampled, but estimated that Bd occupied 61% of sites, highlighting the importance of accounting for imperfect detection. When Bd was present, there was a 95% chance of detecting it with four samples of 600 mL of water or five samples of 60 mL. 
Our findings provide important baseline information to advance the study of Bd disease ecology, and advance our understanding of amphibian exposure to free-living Bd in aquatic habitats over time. PMID:25222122

Chestnut, Tara; Anderson, Chauncey; Popa, Radu; Blaustein, Andrew R; Voytek, Mary; Olson, Deanna H; Kirshtein, Julie

2014-01-01

327

We estimated relative abundance and density of Western Burrowing Owls (Athene cunicularia hypugaea) at two sites in the Mojave Desert (2003–2004). We made modifications to previously established Burrowing Owl survey techniques for use in desert shrublands and evaluated several factors that might influence the detection of owls. We tested the effectiveness of the call-broadcast technique for surveying this species, the efficiency of this technique at early and late breeding stages, and the effectiveness of various numbers of vocalization intervals during broadcasting sessions. Only 1 (3%) of 31 initial (new) owl responses was detected during passive-listening sessions. We found that surveying early in the nesting season was more likely to produce new owl detections compared to surveying later in the nesting season. New owls detected during each of the three vocalization intervals (each consisting of 30 sec of vocalizations followed by 30 sec of silence) of our broadcasting session were similar (37%, 40%, and 23%; n = 30). We used a combination of detection trials (sighting probability) and the double-observer method to estimate the components of detection probability, i.e., availability and perception. Availability for all sites and years, as determined by detection trials, ranged from 46.1% to 58.2%. Relative abundance, measured as frequency of occurrence and defined as the proportion of surveys with at least one owl, ranged from 19.2% to 32.0% for both sites and years. Density at our eastern Mojave Desert site was estimated at 0.09 ± 0.01 (SE) owl territories/km² and 0.16 ± 0.02 (SE) owl territories/km² during 2003 and 2004, respectively. At our southern Mojave Desert site, density estimates were 0.09 ± 0.02 (SE) owl territories/km² and 0.08 ± 0.02 (SE) owl territories/km² during 2004 and 2005, respectively. © 2010 The Raptor Research Foundation, Inc.

Crowe, D.E.; Longshore, K.M.

2010-01-01

328

Very little information is known of the recently described Microcebus tavaratra and Lepilemur milanoii in the Daraina region, a restricted area in far northern Madagascar. Since their forest habitat is highly fragmented and expected to undergo significant changes in the future, rapid surveys are essential to determine conservation priorities. Using both distance sampling and capture-recapture methods, we estimated population densities in two forest fragments. Our results are the first known density and population size estimates for both nocturnal species. In parallel, we compare density results from five different approaches, which are widely used to estimate lemur densities and population sizes throughout Madagascar. Four approaches (King, Kelker, Muller and Buckland) are based on transect surveys and distance sampling, and they differ from each other by the way the effective strip width is estimated. The fifth method relies on a capture-mark-recapture (CMR) approach. Overall, we found that the King method produced density estimates that were significantly higher than other methods, suggesting that it generates overestimates and hence overly optimistic estimates of population sizes in endangered species. The other three distance sampling methods provided similar estimates. These estimates were similar to those obtained with the CMR approach when enough recapture data were available. Given that Microcebus species are often trapped for genetic or behavioral studies, our results suggest that existing data can be used to provide estimates of population density for that species across Madagascar. PMID:22311681

Meyler, Samuel Viana; Salmona, Jordi; Ibouroi, Mohamed Thani; Besolo, Aubin; Rasolondraibe, Emmanuel; Radespiel, Ute; Rabarivola, Clément; Chikhi, Lounes

2012-05-01

329

The objective of this paper is to present a secure method to distribute healthcare records (e.g. video streams and digitized image scans). The availability of prompt and expert medical care can meaningfully improve health care services in understaffed rural and remote areas, through sharing of available facilities and medical records referral. Here, a secure method is developed for distributing healthcare records, using a two-step wavelet-based technique: first, a 2-level db8 wavelet transform for textual elimination, and later a 4-level db8 wavelet transform for digital watermarking. The first db8 wavelets are used to detect and eliminate textual information found on images, protecting data privacy and confidentiality. The second db8 wavelets are used to secure and impose imperceptible marks that identify the owner, track authorized users, or detect malicious tampering of documents. Experiments were performed on different digitized image scans. The experimental results illustrate that both wavelet-based methods are conceptually simple and able to effectively detect textual information, while our watermark technique is robust to noise and compression. PMID:17282675

Yee Lau, Phooi; Ozawa, Shinji

2005-01-01

330

NASA Astrophysics Data System (ADS)

Wavelet-based methods for multiple hypothesis testing are described and their potential for activation mapping of human functional magnetic resonance imaging (fMRI) data is investigated. In this approach, we emphasize convergence between methods of wavelet thresholding or shrinkage and the problem of multiple hypothesis testing in both classical and Bayesian contexts. Specifically, our interest will be focused on ensuring a trade-off between type I probability error control and power dissipation. We describe a technique for controlling the false discovery rate at an arbitrary level of type I error in testing multiple wavelet coefficients generated by a 2D discrete wavelet transform (DWT) of spatial maps of fMRI time series statistics. We also describe and apply recursive testing methods that can be used to define a threshold unique to each level and orientation of the 2D-DWT. Bayesian methods, incorporating a formal model for the anticipated sparseness of wavelet coefficients representing the signal or true image, are also tractable. These methods are comparatively evaluated by analysis of "null" images (acquired with the subject at rest), in which case the number of positive tests should be exactly as predicted under the null hypothesis, and an experimental dataset acquired from 5 normal volunteers during an event-related finger movement task. We show that all three wavelet-based methods of multiple hypothesis testing have good type I error control (the FDR method being most conservative) and generate plausible brain activation maps.
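False discovery rate control over a set of wavelet-coefficient p-values is typically implemented as a step-up procedure over the ordered p-values. A generic Benjamini-Hochberg sketch, not the authors' exact level-by-level recursion:

```python
def bh_threshold(pvals, q=0.05):
    """Benjamini-Hochberg step-up procedure: return the indices of
    hypotheses rejected at false discovery rate q."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    # find the largest rank k with p_(k) <= (k/m) * q
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank * q / m:
            k_max = rank
    # reject all hypotheses up to and including that rank
    return sorted(order[:k_max])
```

On a null image, nearly all p-values are large and the procedure rejects (almost) nothing, which is the calibration property the paper checks.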

Fadili, Jalal M.; Bullmore, Edward T.

2003-11-01

331

The Recovery Plan for the federally threatened Louisiana black bear (Ursus americanus luteolus) mandates that remnant populations be estimated and monitored. In 1999 we obtained genetic material with barbed-wire hair traps to estimate bear population size and genetic diversity at the 329-km2 Tensas River Tract, Louisiana. We constructed and monitored 122 hair traps, which produced 1,939 hair samples. Of those, we randomly selected 116 subsamples for genetic analysis and used up to 12 microsatellite DNA markers to obtain multilocus genotypes for 58 individuals. We used Program CAPTURE to compute estimates of population size using multiple mark-recapture models. The area of study was almost entirely circumscribed by agricultural land, thus the population was geographically closed. Also, study-area boundaries were biologically discrete, enabling us to accurately estimate population density. Using model Chao Mh to account for possible effects of individual heterogeneity in capture probabilities, we estimated the population size to be 119 (SE=29.4) bears, or 0.36 bears/km2. We were forced to examine a substantial number of loci to differentiate between some individuals because of low genetic variation. Despite the probable introduction of genes from Minnesota bears in the 1960s, the isolated population at Tensas exhibited characteristics consistent with inbreeding and genetic drift. Consequently, the effective population size at Tensas may be as few as 32, which warrants continued monitoring or possibly genetic augmentation.
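The heterogeneity-robust abundance estimate referenced here belongs to the family of Chao moment estimators, which use the counts of individuals captured exactly once and exactly twice. A sketch of that estimator family, not Program CAPTURE's exact implementation:

```python
from collections import Counter

def chao_mh(capture_counts):
    """Chao's moment estimator for abundance under individual
    heterogeneity (model Mh): N_hat = S + f1^2 / (2 * f2), where S is
    the number of distinct individuals detected and f_k is the number
    of individuals captured exactly k times."""
    f = Counter(capture_counts)   # capture frequency -> number of individuals
    s = sum(f.values())
    f1, f2 = f.get(1, 0), f.get(2, 0)
    if f2 == 0:
        # bias-corrected variant when no individual was caught exactly twice
        return s + f1 * (f1 - 1) / 2.0
    return s + f1 * f1 / (2.0 * f2)
```

The input is a list of per-individual capture counts; many singletons relative to doubletons inflate the estimate above the raw count of distinct individuals.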

Boersen, M.R.; Clark, J.D.; King, T.L.

2003-01-01

332

On the method of logarithmic cumulants for parametric probability density function estimation.

Parameter estimation of probability density functions is one of the major steps in the area of statistical image and signal processing. In this paper we explore several properties and limitations of the recently proposed method of logarithmic cumulants (MoLC) parameter estimation approach which is an alternative to the classical maximum likelihood (ML) and method of moments (MoM) approaches. We derive the general sufficient condition for a strong consistency of the MoLC estimates which represents an important asymptotic property of any statistical estimator. This result enables the demonstration of the strong consistency of MoLC estimates for a selection of widely used distribution families originating from (but not restricted to) synthetic aperture radar image processing. We then derive the analytical conditions of applicability of MoLC to samples for the distribution families in our selection. Finally, we conduct various synthetic and real data experiments to assess the comparative properties, applicability and small sample performance of MoLC notably for the generalized gamma and K families of distributions. Supervised image classification experiments are considered for medical ultrasound and remote-sensing SAR imagery. The obtained results suggest that MoLC is a feasible and computationally fast yet not universally applicable alternative to MoM. MoLC becomes especially useful when the direct ML approach turns out to be unfeasible. PMID:23799694
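The core of MoLC is matching sample log-cumulants to their analytical expressions in terms of polygamma functions. A generic sketch for the plain gamma family only (the paper treats generalized gamma and K families, which are not reproduced here); the digamma and trigamma routines use standard recurrences plus asymptotic series:

```python
import math

def digamma(x):
    """psi(x) via the recurrence psi(x) = psi(x+1) - 1/x, then an
    asymptotic series once the argument is large enough."""
    s = 0.0
    while x < 6.0:
        s -= 1.0 / x
        x += 1.0
    inv = 1.0 / x
    inv2 = inv * inv
    return s + math.log(x) - 0.5 * inv - inv2 * (1/12 - inv2 * (1/120 - inv2 / 252))

def trigamma(x):
    """psi'(x) via the recurrence psi'(x) = psi'(x+1) + 1/x^2."""
    s = 0.0
    while x < 6.0:
        s += 1.0 / (x * x)
        x += 1.0
    inv = 1.0 / x
    inv2 = inv * inv
    return s + inv + 0.5 * inv2 + inv2 * inv * (1/6 - inv2 * (1/30 - inv2 / 42))

def molc_gamma(samples):
    """MoLC fit of gamma(shape k, scale theta) from the first two
    log-cumulants: c1 = psi(k) + ln(theta), c2 = psi'(k)."""
    logs = [math.log(v) for v in samples]
    n = len(logs)
    c1 = sum(logs) / n
    c2 = sum((v - c1) ** 2 for v in logs) / n
    lo, hi = 1e-3, 1e6            # bisection; trigamma is decreasing
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if trigamma(mid) > c2:
            lo = mid
        else:
            hi = mid
    k = 0.5 * (lo + hi)
    return k, math.exp(c1 - digamma(k))
```

Unlike MoM, every statistic here is a moment of log-samples, which is what makes the approach attractive for heavy-tailed SAR-style distributions.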

Krylov, Vladimir A; Moser, Gabriele; Serpico, Sebastiano B; Zerubia, Josiane

2013-10-01

333

NASA Astrophysics Data System (ADS)

A unique requirement of underwater vehicles' power/energy systems is that they remain neutrally buoyant over the course of a mission. Previous work published in the Journal of Power Sources reported gross as opposed to neutrally-buoyant energy densities of an integrated solid oxide fuel cell/Rankine-cycle based power system based on the exothermic reaction of aluminum with seawater. This paper corrects this shortcoming by presenting a model for estimating system mass and using it to update the key findings of the original paper in the context of the neutral buoyancy requirement. It also presents an expanded sensitivity analysis to illustrate the influence of various design and modeling assumptions. While energy density is very sensitive to turbine efficiency (sensitivity coefficient in excess of 0.60), it is relatively insensitive to all other major design parameters (sensitivity coefficients < 0.15) like compressor efficiency, inlet water temperature, scaling methodology, etc. The neutral buoyancy requirement introduces a significant (~15%) energy density penalty but overall the system still appears to offer factors of five to eight improvements in energy density (i.e., vehicle range/endurance) over present battery-based technologies.

Waters, Daniel F.; Cadou, Christopher P.

2014-02-01

334

NASA Astrophysics Data System (ADS)

In this paper we discuss representations of charge particle densities in particle-in-cell simulations, analyze the sources and profiles of the intrinsic numerical noise, and present efficient methods for their removal. We devise two alternative estimation methods for charged particle distribution which represent significant improvement over the Monte Carlo cosine expansion used in the 2D code of Bassi et al. [G. Bassi, J. A. Ellison, K. Heinemann, and R. Warnock, Phys. Rev. ST Accel. Beams 12, 080704 (2009); G. Bassi and B. Terzić, in Proceedings of the 23rd Particle Accelerator Conference, Vancouver, Canada, 2009 (IEEE, Piscataway, NJ, 2009), TH5PFP043], designed to simulate coherent synchrotron radiation (CSR) in charged particle beams. The improvement is achieved by employing an alternative beam density estimation to the Monte Carlo cosine expansion. The representation is first binned onto a finite grid, after which two grid-based methods are employed to approximate particle distributions: (i) truncated fast cosine transform; and (ii) thresholded wavelet transform (TWT). We demonstrate that these alternative methods represent a staggering upgrade over the original Monte Carlo cosine expansion in terms of efficiency, while the TWT approximation also provides an appreciable improvement in accuracy. The improvement in accuracy comes from a judicious removal of the numerical noise enabled by the wavelet formulation. The TWT method is then integrated into the CSR code [G. Bassi, J. A. Ellison, K. Heinemann, and R. Warnock, Phys. Rev. ST Accel. Beams 12, 080704 (2009)], and benchmarked against the original version. We show that the new density estimation method provides a superior performance in terms of efficiency and spatial resolution, thus enabling high-fidelity simulations of CSR effects, including microbunching instability.
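The thresholded-wavelet idea, transform the binned density, zero the small coefficients that carry mostly sampling noise, and transform back, can be illustrated in one dimension with a Haar basis. This is our own simplification: the code discussed above is 2D and grid-specific, and the wavelet family and thresholding rule here are illustrative.

```python
def haar_forward(v):
    """Full multilevel orthonormal Haar transform (len(v) a power of two).
    Returns [coarse average coeff, coarsest details, ..., finest details]."""
    v = list(v)
    out = []
    while len(v) > 1:
        s = [(v[2*i] + v[2*i+1]) / 2 ** 0.5 for i in range(len(v) // 2)]
        d = [(v[2*i] - v[2*i+1]) / 2 ** 0.5 for i in range(len(v) // 2)]
        out = d + out   # prepend so coarser details precede finer ones
        v = s
    return v + out

def haar_inverse(c):
    v = c[:1]
    k = 1
    while k < len(c):
        d = c[k:2*k]
        v = [x for i in range(k) for x in
             ((v[i] + d[i]) / 2 ** 0.5, (v[i] - d[i]) / 2 ** 0.5)]
        k *= 2
    return v

def twt_denoise(signal, thresh):
    """Thresholded wavelet transform: hard-threshold the detail
    coefficients, keep the coarse average, and reconstruct."""
    c = haar_forward(signal)
    c = c[:1] + [x if abs(x) >= thresh else 0.0 for x in c[1:]]
    return haar_inverse(c)
```

Small detail coefficients in a binned particle density are dominated by Monte Carlo noise, so zeroing them smooths the estimate while preserving the coarse structure.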

Terzić, Balša; Bassi, Gabriele

2011-07-01

335

Axonal and dendritic density field estimation from incomplete single-slice neuronal reconstructions

Neuronal information processing in cortical networks critically depends on the organization of synaptic connectivity. Synaptic connections can form when axons and dendrites come in close proximity of each other. The spatial innervation of neuronal arborizations can be described by their axonal and dendritic density fields. Recently we showed that potential locations of synapses between neurons can be estimated from their overlapping axonal and dendritic density fields. However, deriving density fields from single-slice neuronal reconstructions is hampered by incompleteness because of cut branches. Here, we describe a method for recovering the lost axonal and dendritic mass. This so-called completion method is based on an estimation of the mass inside the slice and an extrapolation to the space outside the slice, assuming axial symmetry in the mass distribution. We validated the method using a set of neurons generated with our NETMORPH simulator. The model-generated neurons were artificially sliced and subsequently recovered by the completion method. Depending on slice thickness and arbor extent, branches that have lost their outside parents (orphan branches) may occur inside the slice. Not connected anymore to the contiguous structure of the sliced neuron, orphan branches result in an underestimation of neurite mass. For 300 μm thick slices, however, the validation showed a full recovery of dendritic and an almost full recovery of axonal mass. The completion method was applied to three experimental data sets of reconstructed rat cortical L2/3 pyramidal neurons. The results showed that in 300 μm thick slices intracortical axons lost about 50% and dendrites about 16% of their mass. The completion method can be applied to single-slice reconstructions as long as axial symmetry can be assumed in the mass distribution. This opens up the possibility of using incomplete neuronal reconstructions from open-access databases to determine population mean mass density fields. 
PMID:25009472

van Pelt, Jaap; van Ooyen, Arjen; Uylings, Harry B. M.

2014-01-01

336

Transverse energy scaling and energy density estimates from ¹⁶O- and ³²S-induced reactions

We discuss the dependence of transverse energy production on projectile mass, target mass, and on the impact parameter of the heavy ion reaction. The transverse energy is shown to scale with the number of participating nucleons. Various methods to estimate the attained energy density from the observed transverse energy are discussed. It is shown that the systematics of the energy density estimates suggest averages of 2–3 GeV/fm³ rather than the much higher values attained by assuming Landau-stopping initial conditions. Based on the observed scaling of the transverse energy, an initial energy density profile may be estimated. 14 refs., 4 figs.
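A common way to turn measured transverse energy into an energy density estimate of this kind is Bjorken's boost-invariant formula, shown here as general background rather than as the paper's specific method:

```latex
\varepsilon_{\mathrm{Bj}} \;=\; \frac{1}{\tau_0\,\pi R^2}\,\frac{dE_T}{dy}
```

where \tau_0 is the formation time (often taken as 1 fm/c) and \pi R^2 the transverse overlap area. With illustrative values R \approx 3 fm and dE_T/dy \approx 60 GeV, this gives \varepsilon \approx 60/(1 \cdot \pi \cdot 9) \approx 2.1 GeV/fm³, of the same order as the 2–3 GeV/fm³ quoted above.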

Not Available

1989-01-01

337

We discuss the dependence of transverse energy production on projectile mass, target mass, and on the impact parameter of the heavy ion reaction. The transverse energy is shown to scale with the number of participating nucleons. Various methods to estimate the attained energy density from the observed transverse energy are discussed. It is shown that the systematics of the energy density estimates suggest averages of 2–3 GeV/fm³ rather than the much higher values attained by assuming Landau-stopping initial conditions. Based on the observed scaling of the transverse energy, an initial energy density profile may be estimated. 11 refs., 4 figs.

Awes, T.C.; Albrecht, R.; Baktash, C.; Beckmann, P.; Berger, F.; Bock, R.; Claesson, G.; Clewing, G.; Dragon, L.; Eklund, A.

1989-01-01

338

Haze effect removal from image via haze density estimation in the optical model.

Images/videos captured from optical devices are usually degraded by turbid media such as haze, smoke, fog, rain and snow. Haze is the most common problem in outdoor scenes because of the atmospheric conditions. This paper proposes a novel single-image dehazing framework to remove haze artifacts from images, introducing two novel image priors, called the pixel-based dark channel prior and the pixel-based bright channel prior. Based on the two priors and the haze optical model, we propose to estimate atmospheric light via haze density analysis. We can then estimate the transmission map, followed by refining it via the bilateral filter. As a result, high-quality haze-free images can be recovered with lower computational complexity compared with the state-of-the-art approach based on the patch-based dark channel prior. PMID:24216937
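A minimal sketch of the dark-channel half of such a pipeline: a per-pixel dark channel, atmospheric light from the haziest pixel, and transmission from the optical model I = J·t + A·(1 - t). The bright-channel prior and bilateral refinement described above are omitted, and the function is our illustration rather than the paper's algorithm.

```python
def dehaze(pixels, omega=0.95, t_min=0.1):
    """Single-image dehazing sketch using a pixel-based dark channel.
    pixels: list of (r, g, b) values in [0, 1]."""
    # pixel-based dark channel: per-pixel minimum over the color channels
    dark = [min(p) for p in pixels]
    # atmospheric light A: color of the pixel with the largest dark channel
    a = max(zip(dark, pixels))[1]
    out = []
    for p in pixels:
        # transmission t from the haze optical model I = J*t + A*(1 - t)
        t = 1.0 - omega * min(pc / max(ac, 1e-6) for pc, ac in zip(p, a))
        t = max(t, t_min)
        # invert the model to recover the scene radiance J
        out.append(tuple((pc - ac) / t + ac for pc, ac in zip(p, a)))
    return out
```

The `omega` factor keeps a trace of haze for depth perception, and `t_min` bounds the transmission to avoid amplifying noise in dense haze.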

Yeh, Chia-Hung; Kang, Li-Wei; Lee, Ming-Sui; Lin, Cheng-Yang

2013-11-01

339

Estimates of Leaf Vein Density Are Scale Dependent

Leaf vein density (LVD) has garnered considerable attention of late, with numerous studies linking it to the physiology, ecology, and evolution of land plants. Despite this increased attention, little consideration has been given to the effects of measurement methods on estimation of LVD. Here, we focus on the relationship between measurement methods and estimates of LVD. We examine the dependence of LVD on magnification, field of view (FOV), and image resolution. We first show that estimates of LVD increase with increasing image magnification and resolution. We then demonstrate that estimates of LVD are higher with higher variance at small FOV, approaching asymptotic values as the FOV increases. We demonstrate that these effects arise due to three primary factors: (1) the tradeoff between FOV and magnification; (2) geometric effects of lattices at small scales; and (3) the hierarchical nature of leaf vein networks. Our results help to explain differences in previously published studies and highlight the importance of using consistent magnification and scale, when possible, when comparing LVD and other quantitative measures of venation structure across leaves. PMID:24259686

Price, Charles A.; Munro, Peter R.T.; Weitz, Joshua S.

2014-01-01

340

This paper aims at estimating causal relationships between signals to detect flow propagation in autoregressive and physiological models. The main challenge of the ongoing work is to discover whether neural activity in a given structure of the brain influences activity in another area during epileptic seizures. This question refers to the concept of effective connectivity in neuroscience, i.e. to the identification of information flows and oriented propagation graphs. Past efforts to determine effective connectivity are rooted in Wiener's definition of causality, adapted into a practical form by Granger with autoregressive models. A number of studies argue against such a linear approach when nonlinear dynamics are suspected in the relationship between signals. Consequently, nonlinear nonparametric approaches, such as transfer entropy (TE), have been introduced to overcome the limitations of linear methods and promoted in many studies dealing with electrophysiological signals. Until now, even though many TE estimators have been developed, further improvement can be expected. In this paper, we investigate a new strategy by introducing an adaptive kernel density estimator to improve TE estimation. PMID:24110694
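Transfer entropy TE(X→Y) measures how much the past of X reduces uncertainty about the next value of Y beyond what Y's own past provides. A histogram (fixed-bin) estimator sketch; the paper's contribution is an adaptive kernel density estimator, for which this simple binning is only a stand-in:

```python
import math
from collections import Counter

def transfer_entropy(x, y, bins=4):
    """Binned plug-in estimate (in bits) of TE(X -> Y) with one step
    of history: sum over p(y_next, y, x) * log2 of the ratio
    p(y_next | y, x) / p(y_next | y)."""
    def disc(s):
        lo, hi = min(s), max(s)
        w = (hi - lo) / bins or 1.0
        return [min(int((v - lo) / w), bins - 1) for v in s]
    xd, yd = disc(x), disc(y)
    triples = list(zip(yd[1:], yd[:-1], xd[:-1]))   # (y_next, y, x)
    n = len(triples)
    c3 = Counter(triples)                              # counts (y_next, y, x)
    c2 = Counter((yn, yp) for yn, yp, _ in triples)    # counts (y_next, y)
    cyx = Counter((yp, xp) for _, yp, xp in triples)   # counts (y, x)
    cy = Counter(yp for _, yp, _ in triples)           # counts (y)
    te = 0.0
    for (yn, yp, xp), k in c3.items():
        p_joint = k / n
        te += p_joint * math.log2((k / cyx[(yp, xp)]) / (c2[(yn, yp)] / cy[yp]))
    return te
```

When Y simply copies X with a one-step lag, TE(X→Y) is near one bit while TE(Y→X) stays near zero, which is the directional asymmetry effective-connectivity analyses rely on.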

Zuo, K; Bellanger, J J; Yang, C; Shu, H; Le Bouquin Jeannès, R

2013-01-01

341

Age structure data is essential for single species stock assessments but length-frequency data can provide complementary information. In south-western Australia, the majority of these data for exploited species are derived from line caught fish. However, baited remote underwater stereo-video systems (stereo-BRUVS) surveys have also been found to provide accurate length measurements. Given that line fishing tends to be biased towards larger fish, we predicted that stereo-BRUVS would yield length-frequency data with a smaller mean length, skewed towards smaller fish, compared with that collected by fisheries-independent line fishing. To assess the biases and selectivity of stereo-BRUVS and line fishing we compared the length-frequencies obtained for three commonly fished species, using a novel application of the Kernel Density Estimate (KDE) method and the established Kolmogorov–Smirnov (KS) test. The shape of the length-frequency distribution obtained for the labrid Choerodon rubescens by stereo-BRUVS and line fishing did not differ significantly, but, as predicted, the mean length estimated from stereo-BRUVS was 17% smaller. Contrary to our predictions, the mean length and shape of the length-frequency distribution for the epinephelid Epinephelides armatus did not differ significantly between line fishing and stereo-BRUVS. For the sparid Pagrus auratus, the length frequency distribution derived from the stereo-BRUVS method was bi-modal, while that from line fishing was uni-modal. However, the location of the first modal length class for P. auratus observed by each sampling method was similar. No differences were found between the results of the KS and KDE tests; however, KDE provided a data-driven method for approximating length-frequency data to a probability function and a useful way of describing and testing any differences between length-frequency samples. This study found the overall size selectivity of line fishing and stereo-BRUVS were unexpectedly similar. 
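The two ingredients of this comparison can be sketched directly: a Gaussian KDE turns a length-frequency sample into a smooth density function, and the two-sample KS statistic measures the largest gap between the empirical CDFs. Illustrative code, not the authors' implementation:

```python
import math

def gaussian_kde(sample, h):
    """Gaussian kernel density estimate with bandwidth h, as a callable."""
    n = len(sample)
    c = 1.0 / (n * h * math.sqrt(2.0 * math.pi))
    return lambda x: c * sum(math.exp(-0.5 * ((x - s) / h) ** 2) for s in sample)

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum vertical
    distance between the two empirical CDFs, handling ties."""
    a, b = sorted(a), sorted(b)
    na, nb = len(a), len(b)
    d, i, j = 0.0, 0, 0
    while i < na and j < nb:
        v = min(a[i], b[j])
        while i < na and a[i] == v:
            i += 1
        while j < nb and b[j] == v:
            j += 1
        d = max(d, abs(i / na - j / nb))
    return d
```

Comparing two length-frequency samples then amounts to computing `ks_statistic` (or an analogous distance between the two KDE curves) and calibrating it by permutation.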
PMID:23209547

Langlois, Timothy J.; Fitzpatrick, Benjamin R.; Fairclough, David V.; Wakefield, Corey B.; Hesp, S. Alex; McLean, Dianne L.; Harvey, Euan S.; Meeuwig, Jessica J.

2012-01-01

342

NASA Astrophysics Data System (ADS)

Convective cells are cloud formations whose growth, maturation and dissipation are of great interest among meteorologists since they are associated with severe storms with large precipitation structures. Some works suggest a strong correlation between lightning occurrence and convective cells. The current work proposes a new approach to analyze the correlation between precipitation and lightning, and to identify electrically active cells. Such cells may be employed for tracking convective events in the absence of weather radar coverage. This approach employs a new spatio-temporal clustering technique based on a temporal sliding-window and a standard kernel density estimation to process lightning data. Clustering allows the identification of the cells from lightning data and density estimation bounds the contours of the cells. The proposed approach was evaluated for two convective events in Southeast Brazil. Image segmentation of radar data was performed to identify convective precipitation structures using the Steiner criteria. These structures were then compared and correlated to the electrically active cells in particular instants of time for both events. It was observed that most precipitation structures have associated cells, by comparing the ground tracks of their centroids. In addition, for one particular cell of each event, its temporal evolution was compared to that of the associated precipitation structure. Results show that the proposed approach may improve the use of lightning data for tracking convective events in countries that lack weather radar coverage.

Strauss, Cesar; Rosa, Marcelo Barbio; Stephany, Stephan

2013-12-01

343

Detection of dysphonia is useful for monitoring the progression of phonatory impairment for patients with Parkinson's disease (PD), and also helps assess the disease severity. This paper describes the statistical pattern analysis methods to study different vocal measurements of sustained phonations. The feature dimension reduction procedure was implemented by using the sequential forward selection (SFS) and kernel principal component analysis (KPCA) methods. Four selected vocal measures were projected by the KPCA onto the bivariate feature space, in which the class-conditional feature densities can be approximated with the nonparametric kernel density estimation technique. In the vocal pattern classification experiments, Fisher's linear discriminant analysis (FLDA) was applied to perform the linear classification of voice records for healthy control subjects and PD patients, and the maximum a posteriori (MAP) decision rule and support vector machine (SVM) with radial basis function kernels were employed for the nonlinear classification tasks. Based on the KPCA-mapped feature densities, the MAP classifier successfully distinguished 91.8% of voice records, with a sensitivity rate of 0.986, a specificity rate of 0.708, and an area under the receiver operating characteristic (ROC) curve of 0.94. The diagnostic performance provided by the MAP classifier was superior to those of the FLDA and SVM classifiers. In addition, the classification results indicated that gender is insensitive to dysphonia detection, and the sustained phonations of PD patients with minimal functional disability are more difficult to identify correctly. PMID:24586406

Yang, Shanshan; Zheng, Fang; Luo, Xin; Cai, Suxian; Wu, Yunfeng; Liu, Kaizhi; Wu, Meihong; Chen, Jian; Krishnan, Sridhar

2014-01-01

344

We report on Transition Region And Coronal Explorer 171 A observations of the GOES X20 class flare on 2001 April 2 that show EUV flare ribbons with intense diffraction patterns. Between the 11th to 14th order, the diffraction patterns of the compact flare ribbon are dispersed into two sources. The two sources are identified as emission from the Fe IX line at 171.1 A and the combined emission from Fe X lines at 174.5, 175.3, and 177.2 A. The prominent emission of the Fe IX line indicates that the EUV-emitting ribbon has a strong temperature component near the lower end of the 171 A temperature response (~0.6-1.5 MK). Fitting the observation with an isothermal model, the derived temperature is around 0.65 MK. However, the low sensitivity of the 171 A filter to high-temperature plasma does not provide estimates of the emission measure for temperatures above ~1.5 MK. Using the derived temperature of 0.65 MK, the observed 171 A flux gives a density of the EUV ribbon of 3 × 10¹¹ cm⁻³. This density is much lower than the density of the hard X-ray producing region (~10¹³ to 10¹⁴ cm⁻³) suggesting that the EUV sources, though closely related spatially, lie at higher altitudes.
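A density inferred from an optically thin EUV flux follows the standard emission-measure argument: the flux fixes the emission measure EM ≈ n²V along the line of sight, so schematically (the paper's instrument response details are not reproduced here)

```latex
n \;\approx\; \sqrt{\mathrm{EM}/\ell}
```

for a column emission measure over a line-of-sight depth \ell. For example, a column emission measure of 9 × 10³⁰ cm⁻⁵ over an assumed depth of 10⁸ cm gives n = √(9 × 10³⁰ / 10⁸) = 3 × 10¹¹ cm⁻³; the depth and emission measure here are our illustrative assumptions, not values from the paper.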

Krucker, Saem; Raftery, Claire L.; Hudson, Hugh S., E-mail: krucker@ssl.berkeley.edu [Space Sciences Laboratory, University of California, Berkeley, CA 94720-7450 (United States)

2011-06-10

345

Bayes and empirical Bayes estimators of abundance and density from spatial capture-recapture data

In capture-recapture and mark-resight surveys, movements of individuals both within and between sampling periods can alter the susceptibility of individuals to detection over the region of sampling. In these circumstances spatially explicit capture-recapture (SECR) models, which incorporate the observed locations of individuals, allow population density and abundance to be estimated while accounting for differences in detectability of individuals. In this paper I propose two Bayesian SECR models, one for the analysis of recaptures observed in trapping arrays and another for the analysis of recaptures observed in area searches. In formulating these models I used distinct submodels to specify the distribution of individual home-range centers and the observable recaptures associated with these individuals. This separation of ecological and observational processes allowed me to derive a formal connection between Bayes and empirical Bayes estimators of population abundance that has not been established previously. I showed that this connection applies to every Poisson point-process model of SECR data and provides theoretical support for a previously proposed estimator of abundance based on recaptures in trapping arrays. To illustrate results of both classical and Bayesian methods of analysis, I compared Bayes and empirical Bayes estimates of abundance and density using recaptures from simulated and real populations of animals. Real populations included two iconic datasets: recaptures of tigers detected in camera-trap surveys and recaptures of lizards detected in area-search surveys. In the datasets I analyzed, classical and Bayesian methods provided similar – and often identical – inferences, which is not surprising given the sample sizes and the noninformative priors used in the analyses.
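The Bayes/empirical Bayes connection for Poisson-prior abundance can be illustrated with a heavily simplified sketch (this is not the SECR model itself): if N ~ Poisson(Λ) and each individual is detected with probability p, the number of undetected individuals given n detections is Poisson(Λ(1 − p)), so the posterior mean is n + Λ(1 − p); empirical Bayes plugs in a data-based estimate of Λ. The function names and the plug-in estimate Λ̂ = n/p are illustrative assumptions.

```python
def posterior_abundance_mean(n_detected, Lambda, p_detect):
    """Bayes posterior mean of abundance N under a Poisson(Lambda) prior:
    undetected individuals are Poisson(Lambda * (1 - p)), independent of n."""
    return n_detected + Lambda * (1.0 - p_detect)

def empirical_bayes_abundance(n_detected, p_detect):
    """Empirical Bayes: estimate Lambda from the data (Lambda_hat = n / p),
    then plug it into the posterior mean."""
    Lambda_hat = n_detected / p_detect
    return posterior_abundance_mean(n_detected, Lambda_hat, p_detect)

# 60 animals detected with overall detection probability 0.75
print(empirical_bayes_abundance(60, 0.75))  # 60 + 80*0.25 = 80.0
```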

Dorazio, Robert M.

2013-01-01

346

NASA Astrophysics Data System (ADS)

This paper proposes an approach that integrates the self-organizing map (SOM) and kernel density estimation (KDE) techniques in an anomaly-based network intrusion detection (ABNID) system to monitor network traffic and capture potential abnormal behaviors. With the continuous development of network technology, information security has become a major concern for cyber system research. In modern net-centric and tactical warfare networks, providing real-time protection for the availability, confidentiality, and integrity of networked information is even more critical. To this end, in this work we propose to explore the learning capabilities of SOM and integrate it with KDE for network intrusion detection. KDE is used to estimate the distributions of the observed random variables that describe the network system and to determine whether the network traffic is normal or abnormal. Meanwhile, the learning and clustering capabilities of SOM are employed to obtain well-defined data clusters that reduce the computational cost of the KDE. The principle of learning in SOM is to self-organize the network of neurons to seek similar properties for certain input patterns. Therefore, SOM can form a compact approximation of the distribution of the input space, reduce the number of terms in a kernel density estimator, and thus improve the efficiency of intrusion detection. We test the proposed algorithm on real-world data sets obtained from the Integrated Network Based Ohio University's Network Detective Service (INBOUNDS) system to show the effectiveness and efficiency of this method.
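The core cost-saving idea, evaluating the kernel density over a handful of SOM prototype vectors (each weighted by how many samples it absorbed) instead of over every raw sample, can be sketched as follows. The SOM training step is omitted, and the prototype vectors, counts, and bandwidth are hypothetical.

```python
import numpy as np

def kde_over_prototypes(x, prototypes, counts, h):
    """KDE evaluated over SOM prototype vectors instead of raw samples:
    each prototype is weighted by the number of samples it absorbed."""
    w = counts / counts.sum()
    sq = np.sum((prototypes - x) ** 2, axis=1) / h**2
    d = prototypes.shape[1]
    norm = (2 * np.pi) ** (d / 2) * h**d
    return float(np.sum(w * np.exp(-0.5 * sq)) / norm)

# a few prototypes summarizing "normal" traffic features (hypothetical 2-D)
protos = np.array([[0.1, 0.2], [0.3, 0.1], [0.2, 0.4]])
counts = np.array([120, 80, 100])
density_normal = kde_over_prototypes(np.array([0.2, 0.2]), protos, counts, h=0.2)
density_outlier = kde_over_prototypes(np.array([5.0, 5.0]), protos, counts, h=0.2)
print(density_normal > density_outlier)  # True: the outlier gets near-zero density
```

A low estimated density at an observed traffic vector is what would flag it as a candidate intrusion.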

Cao, Yuan; He, Haibo; Man, Hong; Shen, Xiaoping

2009-09-01

347

SOMKE: kernel density estimation over data streams by sequences of self-organizing maps.

In this paper, we propose a novel method, SOMKE, for kernel density estimation (KDE) over data streams based on sequences of self-organizing maps (SOMs). In many stream data mining applications, traditional KDE methods are infeasible because of their high computational cost, processing time, and memory requirements. To reduce the time and space complexity, we propose a SOM structure to obtain well-defined data clusters that estimate the underlying probability distributions of incoming data streams. The main idea is to build a series of SOMs over the data streams via two operations, namely creating and merging the SOM sequences. The creation phase produces SOM sequence entries for windows of the data, capturing clustering information about the incoming streams. The size of the SOM sequences can be further reduced by combining consecutive entries in the sequence based on the Kullback-Leibler divergence. Finally, the probability density functions over arbitrary time periods along the data streams can be estimated using such SOM sequences. We compare SOMKE with two other KDE methods for data streams, the M-kernel approach and the cluster kernel approach, in terms of accuracy and processing time for various stationary data streams. Furthermore, we also investigate the use of SOMKE over nonstationary (evolving) data streams, including a synthetic nonstationary data stream, a real-world financial data stream and a group of network traffic data streams. The simulation results illustrate the effectiveness and efficiency of the proposed approach. PMID:24807522
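The merge step, combining consecutive sequence entries when their estimated densities are close in Kullback-Leibler divergence, can be sketched over binned density estimates. The symmetrized divergence, the threshold value, and the simple averaging merge are illustrative assumptions, not the paper's exact procedure.

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """Discrete Kullback-Leibler divergence D(p || q) over matched bins."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def maybe_merge(entry_a, entry_b, threshold=0.05):
    """Merge two consecutive window summaries when their estimated
    densities are close (symmetrized KL below a threshold)."""
    d = 0.5 * (kl_divergence(entry_a, entry_b) + kl_divergence(entry_b, entry_a))
    if d < threshold:
        return [0.5 * (a + b) for a, b in zip(entry_a, entry_b)]  # merged entry
    return None  # distributions drifted: keep entries separate

same = maybe_merge([0.2, 0.5, 0.3], [0.21, 0.49, 0.30])
drifted = maybe_merge([0.2, 0.5, 0.3], [0.6, 0.2, 0.2])
print(same is not None, drifted is None)  # True True
```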

Cao, Yuan; He, Haibo; Man, Hong

2012-08-01

348

Bayes and Empirical Bayes Estimators of Abundance and Density from Spatial Capture-Recapture Data

In capture-recapture and mark-resight surveys, movements of individuals both within and between sampling periods can alter the susceptibility of individuals to detection over the region of sampling. In these circumstances spatially explicit capture-recapture (SECR) models, which incorporate the observed locations of individuals, allow population density and abundance to be estimated while accounting for differences in detectability of individuals. In this paper I propose two Bayesian SECR models, one for the analysis of recaptures observed in trapping arrays and another for the analysis of recaptures observed in area searches. In formulating these models I used distinct submodels to specify the distribution of individual home-range centers and the observable recaptures associated with these individuals. This separation of ecological and observational processes allowed me to derive a formal connection between Bayes and empirical Bayes estimators of population abundance that has not been established previously. I showed that this connection applies to every Poisson point-process model of SECR data and provides theoretical support for a previously proposed estimator of abundance based on recaptures in trapping arrays. To illustrate results of both classical and Bayesian methods of analysis, I compared Bayes and empirical Bayes estimates of abundance and density using recaptures from simulated and real populations of animals. Real populations included two iconic datasets: recaptures of tigers detected in camera-trap surveys and recaptures of lizards detected in area-search surveys. In the datasets I analyzed, classical and Bayesian methods provided similar – and often identical – inferences, which is not surprising given the sample sizes and the noninformative priors used in the analyses. PMID:24386325

Dorazio, Robert M.

2013-01-01

349

Monte Carlo Mesh Tallies based on a Kernel Density Estimator Approach

NASA Astrophysics Data System (ADS)

Kernel density estimators (KDE) are considered for use with the Monte Carlo transport method as an alternative to conventional methods for solving fixed-source problems on arbitrary 3D input meshes. Since conventional methods produce a piecewise constant approximation, their accuracy can suffer when using coarse meshes to approximate neutron flux distributions with strong gradients. Comparatively, KDE mesh tallies produce point estimates independently of the mesh structure, which means that their values will not change even if the mesh is refined. A new KDE integral-track estimator is introduced in this dissertation for use with mesh tallies. Two input parameters are needed, namely a bandwidth and kernel. The bandwidth is equivalent to choosing mesh cell size, whereas the kernel determines the weight of each contribution with respect to its distance from the calculation point being evaluated. The KDE integral-track estimator is shown to produce more accurate results than the original KDE track length estimator, with no performance penalty, and identical or comparable results to conventional methods. However, unlike conventional methods, KDE mesh tallies can use different bandwidths and kernels to improve accuracy without changing the input mesh. This dissertation also explores the accuracy and efficiency of the KDE integral-track mesh tally in detail. Like other KDE applications, accuracy is highly dependent on the choice of bandwidth. This choice becomes even more important when approximating regions of the neutron flux distribution with high curvature, where changing the bandwidth is much more sensitive. Other factors that affect accuracy include properties of the kernel, and the boundary bias effect for calculation points near external geometrical boundaries. Numerous factors also affect efficiency, with the most significant being the concept of the neighborhood region. 
The neighborhood region determines how many calculation points are expected to add non-trivial contributions, which depends on node density, bandwidth, kernel, and properties of the track being tallied. The KDE integral-track mesh tally is a promising alternative for solving fixed-source problems on arbitrary 3D input meshes. Producing results at specific points rather than cell-averaged values allows a more accurate representation of the neutron flux distribution to be obtained, especially when coarser meshes are used.
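The bandwidth/kernel trade-off and the neighborhood region can be illustrated with a simple 1-D sketch: a compactly supported Epanechnikov kernel means only contributions within one bandwidth of the calculation point are non-trivial. This is a collision-site-style toy, not the integral-track estimator itself; the site positions and weights are hypothetical.

```python
def epanechnikov(u):
    """Epanechnikov kernel: compactly supported on |u| <= 1."""
    return 0.75 * (1.0 - u * u) if abs(u) <= 1.0 else 0.0

def kde_tally(point, collision_sites, weights, h):
    """Point-wise tally estimate: kernel-weighted sum of particle
    contributions; only sites within one bandwidth of the calculation
    point contribute (the 'neighborhood region' above)."""
    total = 0.0
    for x, w in zip(collision_sites, weights):
        total += w * epanechnikov((point - x) / h) / h
    return total

sites = [0.1, 0.4, 0.45, 0.5, 0.9]
weights = [1.0] * len(sites)
print(kde_tally(0.45, sites, weights, h=0.1))  # 18.75: only the three nearby sites contribute
```

Refining the estimate at a point means changing `h`, not remeshing, which is the property the dissertation emphasizes.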

Dunn, Kerry L.

350

NASA Technical Reports Server (NTRS)

Future space-based, remote sensing systems will have data transmission requirements that exceed available downlinks, necessitating the use of lossy compression techniques for multispectral data. In this paper, we describe several algorithms for lossy compression of multispectral data which combine spectral decorrelation techniques with an adaptive, wavelet-based, image compression algorithm to exploit both spectral and spatial correlation. We compare the performance of several different spectral decorrelation techniques including wavelet transformation in the spectral dimension. The performance of each technique is evaluated at compression ratios ranging from 4:1 to 16:1. Performance measures used are visual examination, conventional distortion measures, and multispectral classification results. We also introduce a family of distortion metrics that are designed to quantify and predict the effect of compression artifacts on multispectral classification of the reconstructed data.

Matic, Roy M.; Mosley, Judith I.

1994-01-01

351

NASA Astrophysics Data System (ADS)

Abstract: In this paper, a wavelet-based neural network system for the detection and identification of four types of VLF whistler transients (i.e. dispersive, diffuse, spiky and multipath) is implemented and tested. The discrete wavelet transform (DWT) technique is integrated with the feed-forward neural network (FFNN) model to construct the identifier. First, the multi-resolution analysis (MRA) technique of the DWT and Parseval's theorem are employed to extract the characteristic features of the transients at different resolution levels. Second, the FFNN uses these extracted features to identify the transients. The proposed methodology can greatly reduce the number of transient features without losing the original signal properties, so less memory space and computing time are required. Various transient events were tested; the results show that the identifier can detect whistler transients efficiently. Keywords: Discrete wavelet transform, Multi-resolution analysis, Parseval's theorem, Feed-forward neural network
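The Parseval-based feature extraction step can be sketched with a plain Haar DWT: the per-level energies of the detail coefficients serve as the compact feature vector, and by Parseval's theorem the coefficient energies sum exactly to the signal energy. The Haar wavelet, signal, and level count here are illustrative; the paper does not specify the mother wavelet used.

```python
def haar_dwt(signal):
    """One Haar DWT level: (approximation, detail) coefficients."""
    a = [(signal[i] + signal[i + 1]) / 2 ** 0.5 for i in range(0, len(signal), 2)]
    d = [(signal[i] - signal[i + 1]) / 2 ** 0.5 for i in range(0, len(signal), 2)]
    return a, d

def energy_features(signal, levels):
    """Per-level detail energies plus final approximation energy; by
    Parseval's theorem these sum to the total signal energy."""
    feats, approx = [], list(signal)
    for _ in range(levels):
        approx, detail = haar_dwt(approx)
        feats.append(sum(c * c for c in detail))
    feats.append(sum(c * c for c in approx))
    return feats

x = [1.0, 3.0, 2.0, 2.0, 4.0, 0.0, 1.0, 1.0]
feats = energy_features(x, 3)
print(abs(sum(feats) - sum(v * v for v in x)) < 1e-9)  # True: Parseval holds
```

A feature vector of a few energies per transient, rather than the full waveform, is what keeps the FFNN input small.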

Sondhiya, Deepak Kumar; Gwal, Ashok Kumar; Verma, Shivali; Kasde, Satish Kumar

352

A wavelet-based evaluation of time-varying long memory of equity markets: A paradigm in crisis

NASA Astrophysics Data System (ADS)

This study, using wavelet-based method investigates the dynamics of long memory in the returns and volatility of equity markets. In the sample of five developed and five emerging markets we find that the daily return series from January 1988 to June 2013 may be considered as a mix of weak long memory and mean-reverting processes. In the case of volatility in the returns, there is evidence of long memory, which is stronger in emerging markets than in developed markets. We find that although the long memory parameter may vary during crisis periods (1997 Asian financial crisis, 2001 US recession and 2008 subprime crisis) the direction of change may not be consistent across all equity markets. The degree of return predictability is likely to diminish during crisis periods. Robustness of the results is checked with de-trended fluctuation analysis approach.

Tan, Pei P.; Chin, Cheong W.; Galagedera, Don U. A.

2014-09-01

353

A Recursive Wavelet-based Strategy for Real-Time Cochlear Implant Speech Processing on PDA Platforms

This paper presents a wavelet-based speech coding strategy for cochlear implants. In addition, it describes the real-time implementation of this strategy on a PDA platform. Three wavelet packet decomposition tree structures are considered and their performance in terms of computational complexity, spectral leakage, fixed-point accuracy, and real-time processing are compared to other commonly used strategies in cochlear implants. A real-time mechanism is introduced for updating the wavelet coefficients recursively. It is shown that the proposed strategy achieves higher analysis rates than the existing strategies while being able to run in real-time on a PDA platform. In addition, it is shown that this strategy leads to a lower amount of spectral leakage. The PDA implementation is made interactive to allow users to easily manipulate the parameters involved and study their effects. PMID:20403778

Gopalakrishna, Vanishree; Kehtarnavaz, Nasser; Loizou, Philipos C.

2011-01-01

354

NSDL National Science Digital Library

This web page introduces the concepts of density and buoyancy. The discovery in ancient Greece by Archimedes is described. The densities of various materials are given and temperature effects introduced. Links are provided to news and other resources related to mass density. This is part of the Vision Learning collection of short online modules covering topics in a broad range of science and math topics.

Day, Martha M.

2008-05-26

355

The (maximum) penalized-likelihood method of probability density estimation and bump-hunting is improved and exemplified by applications to scattering and chondrite data. We show how the hyperparameter in the method can be satisfactorily estimated by using statistics of goodness of fit. A Fourier expansion is found to be usually more expeditious than a Hermite expansion but a compromise is useful. The

I. J. Good; R. A. Gaskins

1980-01-01

356

Several studies have attempted to compare subtidal animal population estimates obtained in a variety of ways using SCUBA diving and have reported a lot of variation between the estimates obtained. This study investigated individually scale-, tidal-, equipment- and observer-induced variation through analysis of animal population density indices obtained using a number of techniques based on SCUBA diver visual survey. The

MDJ Sayer; C Poonian

2007-01-01

357

Estuarine budget studies often suffer from uncertainties of net flux estimates in view of large temporal and spatial variabilities. Optimum spatial measurement density and material flux errors for a reasonably well mixed estuary were estimated by sampling 10 stations from surface to bottom simultaneously every hour for two tidal cycles in a 320-m-wide cross section in North Inlet, South Carolina.

Björn Kjerfve; L. Harold Stevenson; Jeffrey A. Proehl; Thomas H. Chrzanowski; Wiley M. Kitchens

1981-01-01

358

The photoplethysmogram (PPG) obtained from pulse oximetry measures local variations of blood volume in tissues, reflecting the peripheral pulse modulated by heart activity, respiration and other physiological effects. We propose an algorithm based on the correntropy spectral density (CSD) as a novel way to estimate respiratory rate (RR) and heart rate (HR) from the PPG. Time-varying CSD, a technique particularly well-suited for modulated signal patterns, is applied to the PPG. The respiratory and cardiac frequency peaks detected at extended respiratory (8 to 60 breaths/min) and cardiac (30 to 180 beats/min) frequency bands provide RR and HR estimations. The CSD-based algorithm was tested against the Capnobase benchmark dataset, a dataset from 42 subjects containing PPG and capnometric signals and expert-labeled reference RR and HR. The RR and HR estimation accuracy was assessed using the unnormalized root mean square (RMS) error. We investigated two window sizes (60 and 120 s) on the Capnobase calibration dataset to explore the time resolution of the CSD-based algorithm. A longer window decreases the RR error: for 120-s windows, the median RMS error (quartiles) obtained for RR was 0.95 (0.27, 6.20) breaths/min and for HR was 0.76 (0.34, 1.45) beats/min. Our experiments show that in addition to a high degree of accuracy and robustness, the CSD facilitates simultaneous and efficient estimation of RR and HR. Providing RR every minute expands the functionality of pulse oximeters and provides additional diagnostic power to this non-invasive monitoring tool. PMID:24466088
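A basic correntropy computation can be sketched directly: the correntropy function V[m] is the mean Gaussian-kernel similarity between samples m lags apart, and a simple spectral density is obtained as the Fourier transform of the centered correntropy. This is a simplified sketch of the general CSD idea, not the paper's time-varying algorithm; the kernel width, sample rate, and synthetic PPG-like signal are assumptions.

```python
import numpy as np

def correntropy(x, max_lag, sigma):
    """V[m] = E[ k_sigma(x_n - x_{n+m}) ] with a Gaussian kernel."""
    v = np.empty(max_lag + 1)
    for m in range(max_lag + 1):
        d = x[: len(x) - m] - x[m:]
        v[m] = np.mean(np.exp(-d * d / (2 * sigma**2)))
    return v

def csd(x, max_lag, sigma):
    """Correntropy spectral density: FFT magnitude of centered correntropy."""
    v = correntropy(x, max_lag, sigma)
    v = v - v.mean()                      # remove the DC offset
    return np.abs(np.fft.rfft(v))

fs = 25.0                                  # Hz (hypothetical PPG sample rate)
t = np.arange(0, 40, 1 / fs)
# cardiac oscillation (1.2 Hz) amplitude-modulated by respiration (0.25 Hz)
ppg = np.sin(2 * np.pi * 1.2 * t) * (1 + 0.3 * np.sin(2 * np.pi * 0.25 * t))
spec = csd(ppg, max_lag=500, sigma=0.5)
freqs = np.arange(len(spec)) * fs / 501
print(freqs[np.argmax(spec[1:]) + 1])      # frequency of the dominant peak (Hz)
```

Peaks in the respiratory and cardiac bands of `spec` would then be read off as the RR and HR estimates.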

Garde, Ainara; Karlen, Walter; Ansermino, J. Mark; Dumont, Guy A.

2014-01-01

359

The genomic RNA of hepatitis C virus (HCV) in the plasma of volunteer blood donors was detected by using the polymerase chain reaction in a fraction of density 1.08 g/ml from sucrose density gradient equilibrium centrifugation. When the fraction was treated with the detergent NP40 and recentrifuged in sucrose, the HCV RNA banded at 1.25 g/ml. Assuming that NP40 removed a

Hideaki Miyamoto; Hiroaki Okamoto; Koei Sato; Takeshi Tanaka; Shunji Mishiro

1992-01-01

360

Volcanic explosion clouds - Density, temperature, and particle content estimates from cloud motion

NASA Technical Reports Server (NTRS)

Photographic records of 10 vulcanian eruption clouds produced during the 1978 eruption of Fuego Volcano in Guatemala have been analyzed to determine cloud velocity and acceleration at successive stages of expansion. Cloud motion is controlled by air drag (dominant during early, high-speed motion) and buoyancy (dominant during late motion when the cloud is convecting slowly). Cloud densities in the range 0.6 to 1.2 times that of the surrounding atmosphere were obtained by fitting equations of motion for two common cloud shapes (spheres and vertical cylinders) to the observed motions. Analysis of the heat budget of a cloud permits an estimate of cloud temperature and particle weight fraction to be made from the density. Model results suggest that clouds generally reached temperatures within 10 K of that of the surrounding air within 10 seconds of formation and that dense particle weight fractions were less than 2% by this time. The maximum sizes of dense particles supported by motion in the convecting clouds range from 140 to 1700 microns.

Wilson, L.; Self, S.

1980-01-01

361

NASA Astrophysics Data System (ADS)

Needle insertion planning for digital breast tomosynthesis (DBT) guided biopsy has the potential to improve patient comfort and intervention safety. However, a relevant plan should take into account breast tissue deformation and lesion displacement during the procedure. Deformable models, like finite elements, use the elastic characteristics of the breast to evaluate the deformation of tissue during needle insertion. This paper presents a novel approach to locally estimate the Young's modulus of the breast tissue directly from the DBT data. The method consists of computing the fibroglandular percentage in each of the acquired DBT projection images, then reconstructing the density volume. Finally, this density information is used to compute the mechanical parameters for each finite element of the deformable mesh, obtaining a heterogeneous DBT-based breast model. Preliminary experiments were performed to evaluate the relevance of this method for needle path planning in DBT guided biopsy. The results show that the heterogeneous DBT-based breast model improves needle insertion simulation accuracy in 71% of the cases, compared to a homogeneous model or a binary fat/fibroglandular tissue model.

Vancamberg, Laurence; Geeraert, Nausikaa; Iordache, Razvan; Palma, Giovanni; Klausz, Rémy; Muller, Serge

2011-03-01

362

Density estimation in aerial images of large crowds for automatic people counting

NASA Astrophysics Data System (ADS)

Counting people is a common topic in the area of visual surveillance and crowd analysis. While many image-based solutions are designed to count only a few persons at the same time, like pedestrians entering a shop or watching an advertisement, there is hardly any solution for counting large crowds of several hundred persons or more. We addressed this problem previously by designing a semi-automatic system being able to count crowds consisting of hundreds or thousands of people based on aerial images of demonstrations or similar events. This system requires major user interaction to segment the image. Our principle aim is to reduce this manual interaction. To achieve this, we propose a new and automatic system. Besides counting the people in large crowds, the system yields the positions of people allowing a plausibility check by a human operator. In order to automatize the people counting system, we use crowd density estimation. The determination of crowd density is based on several features like edge intensity or spatial frequency. They indicate the density and discriminate between a crowd and other image regions like buildings, bushes or trees. We compare the performance of our automatic system to the previous semi-automatic system and to manual counting in images. By counting a test set of aerial images showing large crowds containing up to 12,000 people, the performance gain of our new system will be measured. By improving our previous system, we will increase the benefit of an image-based solution for counting people in large crowds.

Herrmann, Christian; Metzler, Juergen

2013-05-01

363

Robust estimation of mammographic breast density: a patient-based approach

NASA Astrophysics Data System (ADS)

Breast density has become an established risk indicator for developing breast cancer. Current clinical practice reflects this by grading mammograms patient-wise as entirely fat, scattered fibroglandular, heterogeneously dense, or extremely dense based on visual perception. Existing (semi-) automated methods work on a per-image basis and mimic clinical practice by calculating an area fraction of fibroglandular tissue (mammographic percent density). We suggest a method that follows clinical practice more strictly by segmenting the fibroglandular tissue portion directly from the joint data of all four available mammographic views (cranio-caudal and medio-lateral oblique, left and right), and by subsequently calculating a consistently patient-based mammographic percent density estimate. In particular, each mammographic view is first processed separately to determine a region of interest (ROI) for segmentation into fibroglandular and adipose tissue. ROI determination includes breast outline detection via edge-based methods, peripheral tissue suppression via geometric breast height modeling, and - for medio-lateral oblique views only - pectoral muscle outline detection based on optimizing a three-parameter analytic curve with respect to local appearance. Intensity harmonization based on separately acquired calibration data is performed with respect to compression height and tube voltage to facilitate joint segmentation of available mammographic views. A Gaussian mixture model (GMM) on the joint histogram data with a posteriori calibration guided plausibility correction is finally employed for tissue separation. The proposed method was tested on patient data from 82 subjects. Results show excellent correlation (r = 0.86) to radiologist's grading with deviations ranging between -28% (q = 0.025) and +16% (q = 0.975).
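The final GMM tissue-separation step can be sketched with a two-component 1-D mixture fitted by EM, one mode for adipose-like intensities and one for fibroglandular-like intensities. This is a generic EM sketch, not the paper's calibrated joint-histogram model; the synthetic intensity data and initialization are assumptions.

```python
import numpy as np

def gmm2_em(x, iters=50):
    """Two-component 1-D Gaussian mixture fitted with EM; returns
    (weights, means, stds) for the two intensity modes."""
    mu = np.array([x.min(), x.max()], dtype=float)
    sd = np.array([x.std(), x.std()]) + 1e-6
    w = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each component for each pixel value
        pdf = w * np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
        r = pdf / pdf.sum(axis=1, keepdims=True)
        # M-step: update mixture weights, means, and standard deviations
        n = r.sum(axis=0)
        w = n / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n
        sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n) + 1e-6
    return w, mu, sd

rng = np.random.default_rng(1)
# hypothetical intensities: dark adipose mode + bright fibroglandular mode
x = np.concatenate([rng.normal(0.3, 0.05, 700), rng.normal(0.7, 0.05, 300)])
w, mu, sd = gmm2_em(x)
print(np.round(np.sort(mu), 2))  # component means near 0.3 and 0.7
```

Thresholding at the crossing point of the two fitted Gaussians then separates the tissue classes.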

Heese, Harald S.; Erhard, Klaus; Gooßen, Andre; Bulow, Thomas

2012-02-01

364

Estimating locations of quantum-dot-encoded microparticles from ultra-high density 3-D microarrays.

We develop a maximum likelihood (ML)-based parametric image deconvolution technique to locate quantum-dot (q-dot) encoded microparticles from three-dimensional (3-D) images of an ultra-high density 3-D microarray. A potential application of the proposed microarray imaging is assay analysis of gene, protein, antigen, and antibody targets. This imaging is performed using a wide-field fluorescence microscope. We first describe our problem of interest and the pertinent measurement model by assuming additive Gaussian noise. We use a 3-D Gaussian point-spread-function (PSF) model to represent the blurring of the widefield microscope system. We employ parametric spheres to represent the light intensity profiles of the q-dot-encoded microparticles. We then develop the estimation algorithm for the single-sphere-object image assuming that the microscope PSF is totally unknown. The algorithm is tested numerically and compared with the analytical Cramér-Rao bounds (CRB). To apply our analysis to real data, we first segment a section of the blurred 3-D image of the multiple microparticles using a k-means clustering algorithm, obtaining 3-D images of single-sphere-objects. Then, we process each of these images using our proposed estimation technique. In the numerical examples, our method outperforms the blind deconvolution (BD) algorithms in high signal-to-noise ratio (SNR) images. For the case of real data, our method and the BD-based methods perform similarly for the well-separated microparticle images. PMID:19203872

Sarder, Pinaki; Nehorai, Arye

2008-12-01

365

NASA Astrophysics Data System (ADS)

Spectral estimation of irregularly sampled velocity data issued from Laser Doppler Anemometry measurements is considered in this paper. A new method is proposed based on linear interpolation followed by a deconvolution procedure. In this method, the analytic expression of the autocorrelation function of the interpolated data is expressed as a linear function of the autocorrelation function of the data to be estimated. For the analysis of both simulated and experimental data, the results of the proposed method are compared with those of the reference methods in LDA: the refinement of the autocorrelation function of the sample-and-hold interpolated signal given by Nobach et al. (Exp Fluids 24:499-509, 1998), the refinement of the power spectral density of the sample-and-hold interpolated signal given by Simon and Fitzpatrick (Exp Fluids 37:272-280, 2004), and the fuzzy slotting technique with local normalization and weighting algorithm given by Nobach (Exp Fluids 32:337-345, 2002). Based on these results, it is concluded that the proposed method outperforms the other methods, particularly in terms of bias and variance.

Moreau, S.; Plantier, G.; Valière, J.-C.; Bailliet, H.; Simon, L.

2011-01-01

366

Estimating basin thickness using a high-density passive-source geophone array

NASA Astrophysics Data System (ADS)

In 2010 an array of 834 single-component geophones was deployed across the Bighorn Mountain Range in northern Wyoming as part of the Bighorn Arch Seismic Experiment (BASE). The goal of this deployment was to test the capabilities of these instruments as recorders of passive-source observations in addition to active-source observations for which they are typically used. The results are quite promising, having recorded 47 regional and teleseismic earthquakes over a two-week deployment. These events ranged from magnitude 4.1 to 7.0 (mb) and occurred at distances up to 10°. Because these instruments were deployed at ca. 1000 m spacing we were able to resolve the geometries of two major basins from the residuals of several well-recorded teleseisms. The residuals of these arrivals, converted to basinal thickness, show a distinct westward thickening in the Bighorn Basin that agrees with industry-derived basement depth information. Our estimates of thickness in the Powder River Basin do not match industry estimates in certain areas, likely due to localized high-velocity features that are not included in our models. Thus, with a few cautions, it is clear that high-density single-component passive arrays can provide valuable constraints on basinal geometries, and could be especially useful where basinal geometry is poorly known.

O'Rourke, C. T.; Sheehan, A. F.; Erslev, E. A.; Miller, K. C.

2014-09-01

367

New Estimates on the EKB Dust Density using the Student Dust Counter

NASA Astrophysics Data System (ADS)

The Student Dust Counter (SDC) is an impact dust detector on board the New Horizons Mission to Pluto. SDC was designed to resolve the mass of dust grains in the range of 10^-12 < m < 10^-9 g, covering an approximate size range of 0.5-10 µm in particle radius. The measurements can be directly compared to the prediction of a grain tracing trajectory model of dust originating from the Edgeworth-Kuiper Belt. SDC's results as well as data taken by the Pioneer 10 dust detector are compared to our model to derive estimates for the mass production rate and the ejecta mass distribution power law exponent. Contrary to previous studies, the assumption that all impacts are generated by grains on circular Keplerian orbits is removed, allowing for a more accurate determination of the EKB mass production rate. With these estimates, the speed and mass distribution of EKB grains entering atmospheres of outer solar system bodies can be calculated. Through December 2013, the New Horizons spacecraft reached approximately 28 AU, enabling SDC to map the dust density distribution of the solar system farther than any previous dust detector.

Szalay, J.; Horanyi, M.; Poppe, A. R.

2013-12-01

368

Methods for Estimating Environmental Effects and Constraints on NexGen: High Density Case Study

NASA Technical Reports Server (NTRS)

This document provides a summary of the current methods developed by Metron Aviation for the estimate of environmental effects and constraints on the Next Generation Air Transportation System (NextGen). This body of work incorporates many of the key elements necessary to achieve such an estimate. Each section contains the background and motivation for the technical elements of the work, a description of the methods used, and possible next steps. The current methods described in this document were selected in an attempt to provide a good balance between accuracy and fairly rapid turn around times to best advance Joint Planning and Development Office (JPDO) System Modeling and Analysis Division (SMAD) objectives while also supporting the needs of the JPDO Environmental Working Group (EWG). In particular this document describes methods applied to support the High Density (HD) Case Study performed during the spring of 2008. A reference day (in 2006) is modeled to describe current system capabilities while the future demand is applied to multiple alternatives to analyze system performance. The major variables in the alternatives are operational/procedural capabilities for airport, terminal, and en route airspace along with projected improvements to airframe, engine and navigational equipment.

Augustine, S.; Ermatinger, C.; Graham, M.; Thompson, T.

2010-01-01

369

The canopy reflectance of winter wheat infected with different severities of yellow rust was collected in the field, and the canopy chlorophyll density (CCD) of the whole wheat was measured in the laboratory. Correlations between hyperspectral indices and CCD were analyzed; indices with correlation coefficients above 0.7 were selected to build inversion models, and the models were tested by comparing predicted against measured results. The first-derivative index (D750-D550)/(D750+D550) showed higher prediction precision than the other indices, followed by the first-derivative index (D725-D702)/(D725+D702). Saturation analysis of these indices indicated that when CCD exceeded 12 µg·cm(-2), the index (D750-D550)/(D750+D550) was the first to reach saturation. Therefore, when CCD was below 12 µg·cm(-2), (D750-D550)/(D750+D550) could be used to estimate wheat CCD with higher prediction precision than the other indices; when CCD exceeded 12 µg·cm(-2), (D725-D702)/(D725+D702), which did not saturate as readily, could be used instead. There was a significant negative correlation between CCD and disease index (DI); moreover, accurate estimation of CCD by hyperspectral remote sensing can both monitor wheat growth and provide auxiliary information for identifying wheat disease. This study is therefore relevant to disaster prevention and mitigation in agriculture. PMID:20939349

Jiang, Jin-bao; Chen, Yun-hao; Huang, Wen-jiang

2010-08-01

370

Accuracy of Estimation of Genomic Breeding Values in Pigs Using Low-Density Genotypes and Imputation

Genomic selection has the potential to increase genetic progress. Genotype imputation of high-density single-nucleotide polymorphism (SNP) genotypes can improve the cost efficiency of genomic breeding value (GEBV) prediction for pig breeding. Consequently, the objectives of this work were to: (1) estimate accuracy of genomic evaluation and GEBV for three traits in a Yorkshire population and (2) quantify the loss of accuracy of genomic evaluation and GEBV when genotypes were imputed under two scenarios: a high-cost, high-accuracy scenario in which only selection candidates were imputed from a low-density platform and a low-cost, low-accuracy scenario in which all animals were imputed using a small reference panel of haplotypes. Phenotypes and genotypes obtained with the PorcineSNP60 BeadChip were available for 983 Yorkshire boars. Genotypes of selection candidates were masked and imputed using tagSNP in the GeneSeek Genomic Profiler (10K). Imputation was performed with BEAGLE using 128 or 1800 haplotypes as reference panels. GEBV were obtained through an animal-centric ridge regression model using de-regressed breeding values as response variables. Accuracy of genomic evaluation was estimated as the correlation between estimated breeding values and GEBV in a 10-fold cross-validation design. Accuracy of genomic evaluation using observed genotypes was high for all traits (0.65-0.68). Using genotypes imputed from a large reference panel (accuracy: R2 = 0.95) for genomic evaluation did not significantly decrease accuracy, whereas a scenario with genotypes imputed from a small reference panel (R2 = 0.88) did show a significant decrease in accuracy. Genomic evaluation based on imputed genotypes in selection candidates can be implemented at a fraction of the cost of a genomic evaluation using observed genotypes and still yield virtually the same accuracy. On the other hand, using a very small reference panel of haplotypes to impute training animals and selection candidates results in lower accuracy of genomic evaluation. PMID:24531728

Badke, Yvonne M.; Bates, Ronald O.; Ernst, Catherine W.; Fix, Justin; Steibel, Juan P.

2014-01-01

372

Background: Runs of homozygosity are long, uninterrupted stretches of homozygous genotypes that enable reliable estimation of levels of inbreeding (i.e., autozygosity) based on high-throughput, chip-based single nucleotide polymorphism (SNP) genotypes. While the theoretical definition of runs of homozygosity is straightforward, their empirical identification depends on the type of SNP chip used to obtain the data and on a number of factors, including the number of heterozygous calls allowed to account for genotyping errors. We analyzed how SNP chip density and genotyping errors affect estimates of autozygosity based on runs of homozygosity in three cattle populations, using genotype data from an SNP chip with 777 972 SNPs and a 50 k chip. Results: Data from the 50 k chip led to overestimation of the number of runs of homozygosity that are shorter than 4 Mb, since the analysis could not identify heterozygous SNPs that were present on the denser chip. Conversely, data from the denser chip led to underestimation of the number of runs of homozygosity that were longer than 8 Mb, unless the presence of a small number of heterozygous SNP genotypes was allowed within a run of homozygosity. Conclusions: We have shown that SNP chip density and genotyping errors introduce patterns of bias in the estimation of autozygosity based on runs of homozygosity. SNP chips with 50 000 to 60 000 markers are frequently available for livestock species and their information leads to a conservative prediction of autozygosity from runs of homozygosity longer than 4 Mb. Not allowing heterozygous SNP genotypes to be present in a homozygosity run, as has been advocated for human populations, is not adequate for livestock populations because they have much higher levels of autozygosity and therefore longer runs of homozygosity.
When allowing a small number of heterozygous calls, current software does not differentiate between situations where these calls are adjacent, and therefore indicative of a true break in the run, and those where they are scattered across the length of the homozygous segment. The simple graphical tests used in this paper are a workable but tedious current solution. PMID:24168655
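To make the tolerance parameter concrete, a minimal run-of-homozygosity scanner can be sketched as follows. This is an illustrative simplification, not the software evaluated in the paper: genotypes are coded as the count of one allele (0/2 = homozygous, 1 = heterozygous), and up to `max_het` heterozygous calls are absorbed within a run before it is closed.

```python
def runs_of_homozygosity(genotypes, min_snps, max_het):
    """Return (start, end) index pairs of maximal homozygous runs
    containing at most max_het heterozygous calls."""
    runs = []
    start = 0
    hets = []                                  # het-call indices inside the open run
    for i, g in enumerate(genotypes):
        if g == 1:
            hets.append(i)
            if len(hets) > max_het:
                if i - start >= min_snps:      # close the run just before this het
                    runs.append((start, i))
                start = hets.pop(0) + 1        # resume after the oldest tolerated het
    if len(genotypes) - start >= min_snps:
        runs.append((start, len(genotypes)))
    return runs
```

With `max_het=0`, a single heterozygous call splits one long run into two, which mirrors the paper's point that scattered error calls fragment runs unless some tolerance is allowed. Real tools would additionally apply physical-length (Mb) thresholds rather than SNP counts alone.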

2013-01-01

373

On L p -Resolvent Estimates and the Density of Eigenvalues for Compact Riemannian Manifolds

NASA Astrophysics Data System (ADS)

We address an interesting question raised by Dos Santos Ferreira, Kenig and Salo (Forum Math, 2014) about regions R_g ⊂ ℂ for which there can be uniform L^{2n/(n+2)} → L^{2n/(n-2)} resolvent estimates for Δ_g + ζ, ζ ∈ R_g, where Δ_g is the Laplace-Beltrami operator with metric g on a given compact boundaryless Riemannian manifold of dimension n ≥ 3. This is related to earlier work of Kenig, Ruiz and the third author (Duke Math J 55:329-347, 1987) for the Euclidean Laplacian, in which case the region is the entire complex plane minus any disc centered at the origin. Presently, we show that for the round metric on the sphere, S^n, the resolvent estimates in (Dos Santos Ferreira et al. in Forum Math, 2014), involving a much smaller region, are essentially optimal. We do this by establishing sharp bounds based on the distance from ζ to the spectrum of Δ_{S^n}. In the other direction, we also show that the bounds in (Dos Santos Ferreira et al. in Forum Math, 2014) can be sharpened logarithmically for manifolds with nonpositive curvature, and by powers in the case of the torus, 𝕋^n = ℝ^n/ℤ^n, with the flat metric. The latter improves earlier bounds of Shen (Int Math Res Not 1:1-31, 2001). The work of (Dos Santos Ferreira et al. in Forum Math, 2014) and (Shen in Int Math Res Not 1:1-31, 2001) was based on Hadamard parametrices for (Δ_g + ζ)^{-1}. Ours is based on the related Hadamard parametrices for cos(t√(-Δ_g)), and it follows ideas in (Sogge in Ann Math 126:439-447, 1987) of proving L^p-multiplier estimates using small-time wave equation parametrices and the spectral projection estimates from (Sogge in J Funct Anal 77:123-138, 1988). This approach allows us to adapt arguments in Bérard (Math Z 155:249-276, 1977) and Hlawka (Monatsh Math 54:1-36, 1950) to obtain the aforementioned improvements over (Dos Santos Ferreira et al. in Forum Math, 2014) and (Shen in Int Math Res Not 1:1-31, 2001). Further improvements for the torus are obtained using recent techniques of the first author (Bourgain in Israel J Math 193(1):441-458, 2013) and his work with Guth (Bourgain and Guth in Geom Funct Anal 21:1239-1295, 2011) based on the multilinear estimates of Bennett, Carbery and Tao (Math Z 2:261-302, 2006). Our approach also allows us to give a natural necessary condition for favorable resolvent estimates that is based on a measurement of the density of the spectrum of √(-Δ_g), and, moreover, a necessary and sufficient condition based on natural improved spectral projection estimates for shrinking intervals, as opposed to those in (Sogge in J Funct Anal 77:123-138, 1988) for unit-length intervals. We show that the resolvent estimates are sensitive to clustering within the spectrum, which is not surprising given Sommerfeld's original conjecture (Sommerfeld in Physikal Zeitschr 11:1057-1066, 1910) about these operators.

Bourgain, Jean; Shao, Peng; Sogge, Christopher D.; Yao, Xiaohua

2014-06-01

375

Obtaining accurate small area estimates of population is essential for policy and health planning but is often difficult in countries with limited data. In lieu of available population data, small area estimate models draw information from previous time periods or from similar areas. This study focuses on model-based methods for estimating population when no direct samples are available in the area of interest. To explore the efficacy of tree-based models for estimating population density, we compare six different model structures including Random Forest and Bayesian Additive Regression Trees. Results demonstrate that without information from prior time periods, non-parametric tree-based models produced more accurate predictions than did conventional regression methods. Improving estimates of population density in non-sampled areas is important for regions with incomplete census data and has implications for economic, health and development policies. PMID:24992657

Anderson, Weston; Guikema, Seth; Zaitchik, Ben; Pan, William

2014-01-01

377

The search for easy-to-use indices that substitute for direct estimation of animal density is a common theme in wildlife and conservation science, but one fraught with well-known perils (Nichols & Conroy, 1996; Yoccoz, Nichols & Boulinier, 2001; Pollock et al., 2002). To establish the utility of an index as a substitute for an estimate of density, one must: (1) demonstrate a functional relationship between the index and density that is invariant over the desired scope of inference; (2) calibrate the functional relationship by obtaining independent measures of the index and the animal density; (3) evaluate the precision of the calibration (Diefenbach et al., 1994). Carbone et al. (2001) argue that the number of camera-days per photograph is a useful index of density for large, cryptic, forest-dwelling animals, and proceed to calibrate this index for tigers (Panthera tigris). We agree that a properly calibrated index may be useful for rapid assessments in conservation planning. However, Carbone et al. (2001), who desire to use their index as a substitute for density, do not adequately address the three elements noted above. Thus, we are concerned that others may view their methods as justification for not attempting directly to estimate animal densities, without due regard for the shortcomings of their approach.

Jennelle, C.S.; Runge, M.C.; MacKenzie, D.I.

2002-01-01

378

NASA Astrophysics Data System (ADS)

This paper explores why the 'Auto-diametric method', currently used in many laboratories to quickly estimate fish fecundity, works well on marine species with a determinate reproductive style but much less so on species with an indeterminate reproductive style. Algorithms linking potentially important explanatory variables for estimating fecundity were first established, followed by practical observations to validate the method under two extreme situations: 1) straightforward fecundity estimation in a determinate, single-batch spawner, Atlantic herring (AH) Clupea harengus, and 2) challenging fecundity estimation in an indeterminate, multiple-batch spawner, Japanese flounder (JF) Paralichthys olivaceus. The Auto-diametric method relies on successfully predicting the number of vitellogenic oocytes (VTO) per gram ovary (oocyte packing density; OPD) from the mean VTO diameter. Theoretically, OPD can be reproduced from the following four variables: ODV (volume-based mean VTO diameter, which deviates from the arithmetic mean VTO diameter), VFvto (volume fraction of VTO in the ovary), ρo (specific gravity of the ovary) and k (VTO shape, i.e. the ratio of long to short oocyte axes). VFvto, ρo and k were tested in relation to growth in ODV. The dynamic range throughout maturation was clearly highest for VFvto. As a result, OPD was influenced mainly by ODV and secondly by VFvto. Log(OPD) for AH decreased as log(ODV) increased, while log(OPD) for JF first increased during early vitellogenesis, then decreased during late vitellogenesis and spawning as log(ODV) increased. These linear regressions thus behaved statistically differently between species, and the associated residuals fluctuated more for JF than for AH. We conclude that the OPD-ODV relationship may be better expressed by several curves covering different parts of the maturation cycle rather than by one curve covering all of them.
This seems to be particularly true for indeterminate spawners. A correction factor for vitellogenic atresia was included, based on the level of atresia and the size of atretic oocytes relative to normal oocytes; OPD would be biased when smaller atretic oocytes are present but not accounted for. Furthermore, special care should be taken when collecting sub-samples to make them as representative as possible of the whole ovary, including in terms of the relative amount of ovarian wall and stroma. Theoretical considerations, along with original, high-quality information on the above-listed variables, made it possible to reproduce the observed changes in OPD very accurately, but not yet precisely enough at the individual level in indeterminate spawners.

Kurita, Yutaka; Kjesbu, Olav S.

2009-02-01

379

X-Ray Methods to Estimate Breast Density Content in Breast Tissue

NASA Astrophysics Data System (ADS)

This work focuses on analyzing x-ray methods to estimate the fat and fibroglandular contents in breast biopsies and in breasts. Knowledge of the fat content of biopsies could aid their wide-angle x-ray scatter analyses. A higher mammographic density (fibrous content) in breasts is an indicator of higher cancer risk. Simulations for 5 mm thick breast biopsies composed of fibrous tissue, cancer, and fat, and for 4.2 cm thick breast fat/fibrous phantoms, were performed. Data from experimental studies using plastic biopsies were analyzed. The 5 mm diameter, 5 mm thick plastic samples consisted of layers of polycarbonate (lexan), polymethyl methacrylate (PMMA-lucite) and polyethylene (polyet). In terms of total linear attenuation coefficients, lexan ≈ fibrous, lucite ≈ cancer and polyet ≈ fat. The detectors were of two types, photon counting (CdTe) and energy integrating (CCD). For biopsies, three photon counting methods were applied to estimate the fat (polyet) content using simulation and experimental data. The two-basis-function method, which assumed the biopsies were composed of two materials, fat and a 50:50 mixture of fibrous (lexan) and cancer (lucite), appears to be the most promising method. Discrepancies were observed between the results obtained via simulation and experiment; potential causes are the spectrum and the attenuation coefficient values used for simulations. An energy integrating method was compared to the two-basis-function method using experimental and simulation data. A slight advantage was observed for photon counting, whereas both detectors gave similar results for the 4.2 cm thick breast phantom simulations. The percentage of fibrous tissue within a 9 cm diameter circular phantom of fibrous/fat tissue was estimated via a fan beam geometry simulation. Both methods yielded good results. Computed tomography (CT) images of the circular phantom were obtained using both detector types.
The Radon transforms were estimated via four energy integrating techniques and one photon counting technique. Contrast, signal-to-noise ratio (SNR) and pixel values between different regions of interest were analyzed. The two-basis-function method and two of the energy integrating methods (calibration, beam hardening correction) gave the highest and most linear curves for contrast and SNR.

Maraghechi, Borna

380

Liquefied natural gas (LNG) densities can be measured directly but are usually determined indirectly in custody transfer measurement by using a density correlation based on temperature and composition measurements. An LNG densimeter test facility at the National Bureau of Standards uses an absolute densimeter based on the Archimedes principle, while a test facility at Gaz de France uses a correlation method based on measurement of composition and density. A comparison between these two test facilities using a portable version of the absolute densimeter provides an experimental estimate of the uncertainty of the indirect method of density measurement for the first time, on a large (32 L) sample. The two test facilities agree for pure methane to within about 0.02%. For the LNG-like mixtures consisting of methane, ethane, propane, and nitrogen with the methane concentrations always higher than 86%, the calculated density is within 0.25% of the directly measured density 95% of the time.

Siegwarth, J.D.; LaBrecque, J.F.; Roncier, M.; Philippe, R.; Saint-Just, J.

1982-12-16

381

Wavelet-based reconstruction of fossil-fuel CO2 emissions from sparse measurements

NASA Astrophysics Data System (ADS)

We present a method to estimate spatially resolved fossil-fuel CO2 (ffCO2) emissions from sparse measurements of time-varying CO2 concentrations. It is based on wavelet modeling of the strongly non-stationary spatial distribution of ffCO2 emissions. The dimensionality of the wavelet model is first reduced using images of nightlights, which identify regions of human habitation. Since wavelets are a multiresolution basis set, most of the reduction is accomplished by removing fine-scale wavelets in regions with low nightlight radiances. The (reduced) wavelet model of emissions is propagated through an atmospheric transport model (WRF) to predict CO2 concentrations at a handful of measurement sites. The estimation of the wavelet model of emissions, i.e., inferring the wavelet weights, is performed by fitting to observations at the measurement sites. This is done using Stagewise Orthogonal Matching Pursuit (StOMP), which first identifies (and sets to zero) the wavelet coefficients that cannot be estimated from the observations, before estimating the remaining coefficients. This model sparsification and fitting is performed simultaneously, allowing us to explore multiple wavelet models of differing complexity. The technique is borrowed from the field of compressive sensing, where it is generally used in image and video processing. We test this approach using synthetic observations generated from emissions in the Vulcan database; 35 sensor sites are chosen over the USA. ffCO2 emissions, averaged over 8-day periods, are estimated at 1-degree spatial resolution. We find that only about 40% of the wavelets in the emission model can be estimated from the data; however, the mix of coefficients that are estimated changes with time. Total US emissions can be reconstructed with about 5% error. The inferred emissions, if aggregated monthly, have a correlation of 0.9 with Vulcan fluxes. We find that the estimated emissions in the Northeast US are the most accurate.
Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

McKenna, S. A.; Ray, J.; Yadav, V.; Van Bloemen Waanders, B.; Michalak, A. M.

2012-12-01

382

NASA Astrophysics Data System (ADS)

Reliability of microseismic interpretations is very much dependent on how robustly microseismic events are detected and picked. Various event detection algorithms are available but detection of weak events is a common challenge. Apart from the event magnitude, hypocentral distance, and background noise level, the instrument self-noise can also act as a major constraint for the detection of weak microseismic events in particular for borehole deployments in quiet environments such as below 1.5-2 km depths. Instrument self-noise levels that are comparable or above background noise levels may not only complicate detection of weak events at larger distances but also challenge methods such as seismic interferometry which aim at analysis of coherent features in ambient noise wavefields to reveal subsurface structure. In this paper, we use power spectral densities to estimate the instrument self-noise for a borehole data set acquired during a hydraulic fracturing stimulation using modified 4.5-Hz geophones. We analyse temporal changes in recorded noise levels and their time-frequency variations for borehole and surface sensors and conclude that instrument noise is a limiting factor in the borehole setting, impeding successful event detection. Next we suggest that the variations of the spectral powers in a time-frequency representation can be used as a new criterion for event detection. Compared to the common short-time average/long-time average method, our suggested approach requires a similar number of parameters but with more flexibility in their choice. It detects small events with anomalous spectral powers with respect to an estimated background noise spectrum with the added advantage that no bandpass filtering is required prior to event detection.
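For context on the baseline the abstract compares against, the classical short-time-average/long-time-average (STA/LTA) trigger can be sketched in a few lines. This is an illustrative implementation with made-up window lengths, not the authors' code; the paper's proposed detector instead thresholds spectral powers in a time-frequency representation.

```python
def sta_lta(trace, nsta, nlta):
    """Classical STA/LTA energy-ratio detector.
    Returns the STA/LTA ratio per sample (0 until the long window fills);
    a trigger is declared where the ratio exceeds a chosen threshold,
    commonly around 3-5."""
    energy = [x * x for x in trace]
    ratios = [0.0] * len(trace)
    for i in range(nlta, len(trace)):
        sta = sum(energy[i - nsta:i]) / nsta    # short (event) window
        lta = sum(energy[i - nlta:i]) / nlta    # long (background) window
        ratios[i] = sta / lta if lta > 0 else 0.0
    return ratios
```

As written this is O(n·nlta); production implementations keep running sums to make it O(n). Note how a weak event riding on instrument self-noise raises the LTA as well, which is exactly the limitation the spectral-power criterion in the paper aims to address.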

Vaezi, Y.; van der Baan, M.

2014-05-01

383

In this study, the status of boron intake was evaluated and its relation with bone mineral density was examined among free-living female subjects in Korea. Boron intake was estimated through the use of the database of boron content in frequently consumed foods by Korean people as well as measuring bone mineral density, taking anthropometric measurements, and surveying dietary intake of

Mi-Hyun Kim; Yun-Jung Bae; Yoon-Shin Lee; Mi-Kyeong Choi

2008-01-01

384

A method for estimating the cholesterol content of the serum low-density lipoprotein fraction (Sf 0-20) is presented. The method involves measurements of fasting plasma total cholesterol, triglyceride, and high-density lipoprotein cholesterol concentrations, none of which requires the use of the preparative ultracentrifuge. Comparison of this suggested procedure with the more direct procedure, in which the ultracentrifuge is used, yielded
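The estimator described here is what is now widely known as the Friedewald equation: LDL-C = total cholesterol − HDL-C − TG/5 (all in mg/dL), where TG/5 approximates VLDL cholesterol. A minimal sketch:

```python
def ldl_cholesterol(total_chol, hdl_chol, triglycerides):
    """Friedewald estimate of LDL cholesterol (all values in mg/dL).
    TG/5 approximates VLDL cholesterol; the estimate is generally
    considered unreliable when triglycerides exceed ~400 mg/dL."""
    if triglycerides > 400:
        raise ValueError("Friedewald estimate unreliable for TG > 400 mg/dL")
    return total_chol - hdl_chol - triglycerides / 5.0

print(ldl_cholesterol(200, 50, 150))  # 120.0
```

When concentrations are in mmol/L, the conventional divisor for triglycerides is 2.2 rather than 5.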

William T. Friedewald; Robert I. Levy; Donald S. Fredrickson

1972-01-01

385

A wavelet-based metric for visual texture discrimination with applications in evolutionary ecology

Much work on natural and sexual selection is concerned with the conspicuousness of visual patterns (textures) on animal and plant surfaces. Previous attempts by evolutionary biologists to quantify apparency of such textures have involved subjective estimates of conspicuousness or statistical analyses based on transect samples. We present a method based on wavelet analysis that avoids subjectivity and that uses more

Jian Fan; Andrew F. Laine

1995-01-01

386

A CFAR algorithm for layover and shadow detection in InSAR images based on kernel density estimation

NASA Astrophysics Data System (ADS)

In this paper, a novel CFAR algorithm for detecting layover and shadow areas in interferometric synthetic aperture radar (InSAR) images is proposed. First, the probability density function (PDF) of the square-root amplitude of the InSAR image is estimated by kernel density estimation. Then, a CFAR algorithm combined with a morphological method for detecting both layover and shadow is presented. Finally, the proposed algorithm is evaluated on a real InSAR image obtained by the TerraSAR-X system. The experimental results validate the effectiveness of the proposed method.
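The two ingredients named above can be sketched generically: a Gaussian kernel density estimate of the amplitude PDF, and a threshold chosen so the estimated tail mass matches a desired false-alarm probability. This is a hedged simplification with illustrative parameters; bandwidth selection and the morphological post-processing used in the paper are omitted.

```python
import math

def gaussian_kde(samples, bandwidth):
    """Return a function evaluating a Gaussian kernel density estimate."""
    norm = 1.0 / (len(samples) * bandwidth * math.sqrt(2.0 * math.pi))
    def pdf(x):
        return norm * sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2)
                          for s in samples)
    return pdf

def cfar_threshold(samples, bandwidth, pfa, lo, hi, steps=2000):
    """Threshold t such that the estimated tail mass above t is ~pfa,
    found by numerically integrating the KDE downward from hi."""
    pdf = gaussian_kde(samples, bandwidth)
    dx = (hi - lo) / steps
    tail, t = 0.0, hi
    while t > lo and tail < pfa:
        tail += pdf(t) * dx      # accumulate tail probability mass
        t -= dx
    return t
```

A constant false-alarm rate detector would then flag pixels whose statistic exceeds (or, for shadow, falls below) the threshold derived from the estimated PDF.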

Qin, Xianxiang; Zhou, Shilin; Zou, Huanxin; Ren, Yun

2013-07-01

387

This paper proposes a procedure which evaluates clusters of traffic accident and organizes them according to their significance. The standard kernel density estimation was extended by statistical significance testing of the resulting clusters of the traffic accidents. This allowed us to identify the most important clusters within each section. They represent places where the kernel density function exceeds the significance level corresponding to the 95th percentile level, which is estimated using the Monte Carlo simulations. To show only the most important clusters within a set of sections, we introduced the cluster strength and cluster stability evaluation procedures. The method was applied in the Southern Moravia Region of the Czech Republic. PMID:23567216
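One way to realize the Monte Carlo significance level described above can be sketched as follows (a hypothetical 1-D simplification with made-up parameters, not the authors' implementation): accidents are repeatedly placed uniformly at random along the road section, the kernel density is recomputed each time, and the 95th percentile of the simulated density values gives the significance level that observed clusters must exceed.

```python
import math, random

def kde_1d(points, grid, bandwidth):
    """Gaussian kernel density of accident positions along a road section."""
    norm = 1.0 / (len(points) * bandwidth * math.sqrt(2.0 * math.pi))
    return [norm * sum(math.exp(-0.5 * ((g - p) / bandwidth) ** 2)
                       for p in points) for g in grid]

def significance_level(n_acc, length, grid, bandwidth, n_sims=200, seed=7):
    """95th percentile of KDE values under random accident placement."""
    rng = random.Random(seed)
    values = []
    for _ in range(n_sims):
        pts = [rng.uniform(0.0, length) for _ in range(n_acc)]
        values.extend(kde_1d(pts, grid, bandwidth))
    values.sort()
    return values[int(0.95 * len(values))]
```

Significant clusters are then the grid positions where the observed density exceeds the simulated level, e.g. `[g for g, d in zip(grid, observed) if d > level]`.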

Bíl, Michal; Andrášik, Richard; Janoška, Zbyněk

2013-06-01

388

Neotropical felids such as the ocelot (Leopardus pardalis) are secretive, and it is difficult to estimate their populations using conventional methods such as radiotelemetry or sign surveys. We show that recognition of individual ocelots from camera-trapping photographs is possible, and we use camera-trapping results combined with closed population capture-recapture models to estimate density of ocelots in the Brazilian Pantanal. We estimated the area from which animals were camera trapped at 17.71 km2. A model with constant capture probability yielded an estimate of 10 independent ocelots in our study area, which translates to a density of 2.82 independent individuals for every 5 km2 (SE 1.00).
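The reported density follows directly from the abundance estimate and the effective trapped area; a quick arithmetic check of the abstract's figure:

```python
# Values taken from the abstract; the per-5-km2 scaling mirrors how the
# authors report density.
n_hat = 10            # estimated independent ocelots (constant-p model)
area_km2 = 17.71      # effective camera-trapped area
density_per_5km2 = n_hat / area_km2 * 5.0
print(round(density_per_5km2, 2))  # 2.82
```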

Trolle, M.; Kery, M.

2003-01-01

389

Tropical dry-deciduous forests comprise more than 45% of the tiger (Panthera tigris) habitat in India. However, in the absence of rigorously derived estimates of ecological densities of tigers in dry forests, critical baseline data for managing tiger populations are lacking. In this study tiger densities were estimated using photographic capture–recapture sampling in the dry forests of Panna Tiger Reserve in Central India. Over a 45-day survey period, 60 camera trap sites were sampled in a well-protected part of the 542-km2 reserve during 2002. A total sampling effort of 914 camera-trap-days yielded photo-captures of 11 individual tigers over 15 sampling occasions that effectively covered a 418-km2 area. The closed capture–recapture model Mh, which incorporates individual heterogeneity in capture probabilities, fitted these photographic capture history data well. The estimated capture probability/sample, 0.04, resulted in an estimated tiger population size and standard error of 29 (9.65), and a density of 6.94 (3.23) tigers/100 km2. The estimated tiger density matched predictions based on prey abundance. Our results suggest that, if managed appropriately, the available dry forest habitat in India has the potential to support a population size of about 9000 wild tigers.

Karanth, K.U.; Chundawat, R.S.; Nichols, J.D.; Kumar, N.S.

2004-01-01

390

Measuring and Modeling Fault Density for Plume-Fault Encounter Probability Estimation

Emission of carbon dioxide from fossil-fueled power generation stations contributes to global climate change. Storage of this carbon dioxide within the pores of geologic strata (geologic carbon storage) is one approach to mitigating the climate change that would otherwise occur. The large storage volume needed for this mitigation requires injection into brine-filled pore space in reservoir strata overlain by cap rocks. One of the main concerns of storage in such rocks is leakage via faults. In the early stages of site selection, site-specific fault coverages are often not available. This necessitates a method for using available fault data to develop an estimate of the likelihood of injected carbon dioxide encountering and migrating up a fault, primarily due to buoyancy. Fault population statistics provide one of the main inputs to calculate the encounter probability. Previous fault population statistics work is shown to be applicable to areal fault density statistics. This result is applied to a case study in the southern portion of the San Joaquin Basin, which found that a carbon dioxide plume from a previously planned injection had a 3% chance of encountering a fully seal-offsetting fault.

Jordan, P.D.; Oldenburg, C.M.; Nicot, J.-P.

2011-05-15

391

Novelty detection by multivariate kernel density estimation and growing neural gas algorithm

NASA Astrophysics Data System (ADS)

One of the underlying assumptions when using data-based methods for pattern recognition in diagnostics or prognostics is that the selected data sample used to train and test the algorithm is representative of the entire dataset and covers all combinations of parameters and conditions, and the resulting system states. However, in practice, operating and environmental conditions may change, unexpected and previously unanticipated events may occur, and corresponding new anomalous patterns develop. Therefore, for practical applications, techniques are required to detect novelties in patterns and give the user confidence in the validity of the performed diagnoses and predictions. In this paper, the application of two types of novelty detection approaches is compared: a statistical approach based on multivariate kernel density estimation and an approach based on a type of unsupervised artificial neural network called the growing neural gas (GNG). The comparison is performed on a case study in the field of railway turnout systems. Both approaches demonstrate their suitability for detecting novel patterns. Furthermore, GNG proves to be more flexible, especially with respect to the dimensionality of the input data and suitability for online learning.

Fink, Olga; Zio, Enrico; Weidmann, Ulrich

2015-01-01

392

It has been shown that the observed temporal distribution of transient events in the cosmos can be used to constrain their rate density. Here we show that the peak flux--observation time relation takes the form of a power law that is invariant to the luminosity distribution of the sources, and that the method can be greatly improved by invoking time reversal invariance and the temporal cosmological principle. We demonstrate how the method can be used to constrain distributions of transient events, by applying it to Swift gamma-ray burst data and show that the peak flux--observation time relation is in good agreement with recent estimates of source parameters. We additionally show that the intrinsic time dependence allows the method to be used as a predictive tool. Within the next year of Swift observation, we find a 50% chance of obtaining a peak flux greater than that of GRB 060017 -- the highest Swift peak flux to date -- and the same probability of detecting a burst with peak flux > 100 photons s^{-1} cm^{-2} within 6 years.

E. Howell; D. Coward; R. Burman; D. Blair

2007-06-28

393

Reducing speckle noise and improving image quality are among the most important goals in diagnostic echocardiography. In this paper we propose a simple and effective filter design for image denoising and contrast enhancement based on a multiscale wavelet denoising method. Wavelet threshold algorithms replace wavelet coefficients of small magnitude by zero and keep or shrink the other coefficients. This is basically a local procedure, since wavelet coefficients characterize the local regularity of a function. We first estimate the distribution of noise within the echocardiographic image and then apply the fitted wavelet threshold algorithm. A common way of estimating the speckle noise level in coherent imaging is to calculate the mean-to-standard-deviation ratio of the pixel intensity, often termed the Equivalent Number of Looks (ENL), over a uniform image area. Unfortunately, we found this measure not very robust, mainly because of the difficulty of identifying a uniform area in a real image. For this reason, we only use the S/MSE ratio, which corresponds to the standard SNR in the case of additive noise. We have simulated some echocardiographic images by specialized hardware for real-time application; processing of a 512×512 image takes about 1 min. Our experiments show that the optimal threshold level depends on the spectral content of the image. High spectral content tends to inflate the noise standard deviation estimated at the finest level of the DWT. As a result, a lower threshold parameter is required to obtain the optimal S/MSE. The standard WCS theory predicts a threshold that depends on the number of signal samples only. PMID:11604864

Kang, S C; Hong, S H

2001-01-01

394

sEMG wavelet-based indices predicts muscle power loss during dynamic contractions

The purpose of this study was to investigate the sensitivity of new surface electromyography (sEMG) indices based on the discrete wavelet transform to estimate acute exercise-induced changes in muscle power output during a dynamic fatiguing protocol. Fifteen trained subjects performed five sets consisting of 10 leg presses, with 2 min rest between sets. sEMG was recorded from the vastus medialis (VM) muscle.

M. González-Izal; I. Rodríguez-Carreño; A. Malanda; F. Mallor-Giménez; I. Navarro-Amézqueta; E. M. Gorostiaga; M. Izquierdo

2010-01-01

395

A wavelet-based statistical analysis of fMRI data

We propose a new method for statistical analysis of functional magnetic resonance imaging (fMRI) data. The discrete wavelet transformation is employed as a tool for efficient and robust signal representation. We use structural magnetic resonance imaging (MRI) and fMRI to empirically estimate the distribution of the wavelet coefficients of the data both across individuals and spatial locations. An anatomical subvolume

Ivo D. Dinov; John W. Boscardin; Michael S. Mega; Elizabeth L. Sowell; Arthur W. Toga

2005-01-01

396

A wavelet-transform-based denoising methodology has been applied to detect the presence of any discernible trend in (137)Cs and (90)Sr activity levels in bore-hole water samples collected four times a year over a period of eight years, from 2002 to 2009, in the vicinity of typical nuclear facilities inside the restricted access zones. The conventional non-parametric methods, viz. Mann-Kendall and Spearman's rho, along with linear regression, do not yield conclusive results for trend detection with a confidence of 95% for most of the samples when applied directly to the time-series data. The stationary-wavelet-based hard-thresholding data pruning method, with Haar as the analyzing wavelet, was applied to remove the noise present in the same data. Results indicate that the confidence level of the established trend improved significantly after pre-processing, to more than 98%, compared with the conventional non-parametric methods applied to the direct measurements. PMID:23524202
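
The Mann-Kendall test mentioned above is simple to state: count concordant minus discordant pairs over all time-ordered pairs and standardise by the null variance. A minimal sketch without the tie correction (a simplification; the full test adjusts the variance for tied values):

```python
import math

def mann_kendall(x):
    # S = number of increasing pairs minus number of decreasing pairs.
    n = len(x)
    s = sum((x[j] > x[i]) - (x[j] < x[i])
            for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0   # null variance, assuming no ties
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    return s, z

s, z = mann_kendall(list(range(20)))   # strictly increasing series
```

A |z| exceeding about 1.96 rejects the no-trend null at the 95% confidence level; the strictly increasing toy series above gives a strongly positive z.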

Paul, Sabyasachi; Suman, V; Sarkar, P K; Ranade, A K; Pulhani, V; Dafauti, S; Datta, D

2013-08-01

397

NASA Technical Reports Server (NTRS)

The characterization and the mapping of land cover/land use of forest areas, such as the Central African rainforest, is a very complex task. This complexity is mainly due to the extent of such areas and, as a consequence, to the lack of full and continuous cloud-free coverage of those large regions by one single remote sensing instrument. In order to provide improved vegetation maps of Central Africa and to develop forest monitoring techniques for applications at the local and regional scales, we propose to utilize multi-sensor remote sensing observations coupled with in-situ data. Fusion and clustering of multi-sensor data are the first steps towards the development of such a forest monitoring system. In this paper, we will describe some preliminary experiments involving the fusion of SAR and Landsat image data of the Lope Reserve in Gabon. Similarly to previous fusion studies, our fusion method is wavelet-based. The fusion provides a new image data set which contains more detailed texture features and preserves the large homogeneous regions that are observed by the Thematic Mapper sensor. The fusion step is followed by unsupervised clustering and provides a vegetation map of the area.

LeMoigne, Jacqueline; Laporte, Nadine; Netanyahuy, Nathan S.; Zukor, Dorothy (Technical Monitor)

2001-01-01

398

In order to observe the fine details of biomedical specimens, various kinds of high-magnification microscopes are used. However, they suffer from a limited field of view when visualizing highly magnified specimens. Image mosaicing techniques are necessary to integrate two or more partially overlapping images into one and make the whole specimen visible. In this study, we propose a new system that automatically creates panoramic images by mosaicing all the microscopic images acquired from a specimen. Not only does it effectively compensate for the congenital narrowness in microscopic views, but it also results in the mosaiced image containing as little distortion with respect to the originals as possible. The system consists of four main steps: (1) feature point extraction using multiscale wavelet analysis, (2) image matching based on feature points or by projection profile alignment, (3) colour difference adjustment and optical degradation compensation with a Gaussian-like model and (4) wavelet-based image blending. In addition to providing a precise alignment, the proposed system also takes into account the colour deviations and degradation in image mosaicing. The visible seam lines are eliminated after image blending. The experimental results show that the system performs well on differently stained image sequences and is effective on acquired images with large colour variations and degradation. It is expected to be a practical tool for microscopic image mosaicing. PMID:18754995

Hsu, W-Y; Poon, W-F Paul; Sun, Y-N

2008-09-01

399

An Air Traffic Prediction Model based on Kernel Density Estimation

... through sophisticated flight dynamics [1]. However, for the Air Traffic Control System Command Center at an Air Route Traffic Control Center (simply denoted as Center hereafter) level [2], it forecasts aircraft ...

Sun, Dengfeng

400

JOURNAL NRMRL-RTP-P- 437 Baugh, W., Klinger, L., Guenther, A., and Geron*, C.D. Measurement of Oak Tree Density with Landsat TM Data for Estimating Biogenic Isoprene Emissions in Tennessee, USA. International Journal of Remote Sensing (Taylor and Francis) 22 (14):2793-2810 (2001)...

401

In everyday life, we continuously and effortlessly integrate the multiple sensory inputs from objects in motion. For instance, the sound and the visual percept of vehicles in traffic provide us with complementary information about the location and motion of vehicles. Here, we used high-density electrical mapping and local auto-regressive average (LAURA) source estimation to study the integration of multisensory objects

Daniel Senkowski; Dave Saint-Amour; Simon P. Kelly; John J. Foxe

2007-01-01

402

We present a new method for removing artifacts in electroencephalography (EEG) records during galvanic vestibular stimulation (GVS). The main challenge in exploiting GVS is to understand how the stimulus acts as an input to the brain. We used EEG to monitor the brain and elicit the GVS reflexes. However, the GVS current distribution throughout the scalp generates an artifact in the EEG signals. We need to eliminate this artifact to be able to analyze the EEG signals during GVS. We propose a novel method to estimate the contribution of the GVS current to the EEG signal at each electrode by combining time-series regression methods with wavelet decomposition methods. We use the wavelet transform to project the recorded EEG signal into various frequency bands and then estimate the GVS current distribution in each frequency band. The proposed method was optimized using simulated signals, and its performance was compared to well-accepted artifact removal methods such as ICA-based methods and adaptive filters. The results show that the proposed method has better performance in removing GVS artifacts compared to the others. Using the proposed method, a higher signal-to-artifact ratio of about 1.625 dB was achieved, which outperformed other methods such as ICA-based methods, regression methods, and adaptive filters. PMID:23956786
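
The regression ingredient of such methods can be reduced to a toy, single-band example: find the least-squares scale that best explains the recorded trace by the stimulation current, then subtract it. The signals below are synthetic sinusoids and the wavelet band-splitting step is omitted entirely:

```python
import math

def regress_out(recorded, ref):
    # Least-squares scale beta minimising sum((recorded - beta*ref)^2),
    # i.e. beta = <recorded, ref> / <ref, ref>.
    beta = sum(a * b for a, b in zip(recorded, ref)) / sum(b * b for b in ref)
    return beta, [a - beta * b for a, b in zip(recorded, ref)]

t = [i / 100.0 for i in range(500)]                       # 5 s at 100 Hz
brain = [math.sin(2 * math.pi * 2.0 * ti) for ti in t]    # "true" EEG rhythm
gvs = [math.sin(2 * math.pi * 7.0 * ti) for ti in t]      # stimulation current
recorded = [b + 0.5 * g for b, g in zip(brain, gvs)]      # contaminated trace
beta, cleaned = regress_out(recorded, gvs)
```

Because the two tones are orthogonal over this window, the fitted scale recovers the injected 0.5 contamination and the cleaned trace matches the uncontaminated signal.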

Adib, Mani; Cretu, Edmond

2013-01-01

403

Kernel Density Estimation, Kernel Methods, and Fast Learning in Large Data Sets.

Kernel methods such as the standard support vector machine and support vector regression trainings take O(N^3) time and O(N^2) space in their naïve implementations, where N is the training set size. It is thus computationally infeasible to apply them to large data sets, and a replacement of the naive method for finding the quadratic programming (QP) solutions is highly desirable. By observing that many kernel methods can be linked to the kernel density estimate (KDE), which can be efficiently implemented by some approximation techniques, a new learning method called fast KDE (FastKDE) is proposed to scale up kernel methods. It is based on establishing a connection between KDE and the QP problems formulated for kernel methods using an entropy-based integrated-squared-error criterion. As a result, FastKDE approximation methods can be applied to solve these QP problems. In this paper, the latest advance in fast data reduction via KDE is exploited. With just a simple sampling strategy, the resulting FastKDE method can be used to scale up various kernel methods with a theoretical guarantee that their performance does not degrade much. It has a time complexity of O(m^3), where m is the number of data points sampled from the training set. Experiments on different benchmarking data sets demonstrate that the proposed method has comparable performance with the state-of-the-art method and is effective for a wide range of kernel methods in achieving fast learning on large data sets. PMID:23797315
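
The subsampling idea — evaluating the density estimate on m sampled points instead of all N — can be sketched in a few lines. This shows only the sampling ingredient, not the entropy-based QP connection developed in the paper; the bandwidth and sizes are arbitrary:

```python
import math, random

def kde_at(x, points, h):
    # Gaussian KDE evaluated at a single query point: O(len(points)) work.
    c = 1.0 / (len(points) * h * math.sqrt(2 * math.pi))
    return c * sum(math.exp(-0.5 * ((x - p) / h) ** 2) for p in points)

rng = random.Random(0)
data = [rng.gauss(0, 1) for _ in range(5000)]   # the "large" training set (N points)
sample = rng.sample(data, 300)                  # m << N, simple random sampling
full = kde_at(0.0, data, h=0.3)                 # O(N) per query
fast = kde_at(0.0, sample, h=0.3)               # O(m) per query
```

The subsampled estimate tracks the full one closely while cutting the per-query cost from O(N) to O(m); the theoretical degradation bound is what the paper supplies on top of this.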

Wang, Shitong; Wang, Jun; Chung, Fu-Lai

2013-06-18

404

Estimation of tool pose based on force-density correlation during robotic drilling.

The application of image-guided systems with or without support by surgical robots relies on the accuracy of the navigation process, including patient-to-image registration. The surgeon must carry out the procedure based on the information provided by the navigation system, usually without being able to verify its correctness beyond visual inspection. Misleading surrogate parameters such as the fiducial registration error are often used to describe the success of the registration process, while a lack of methods describing the effects of navigation errors, such as those caused by tracking or calibration, may prevent the application of image guidance in certain accuracy-critical interventions. During minimally invasive mastoidectomy for cochlear implantation, a direct tunnel is drilled from the outside of the mastoid to a target on the cochlea based on registration using landmarks solely on the surface of the skull. Using this methodology, it is impossible to detect if the drill is advancing in the correct direction and that injury of the facial nerve will be avoided. To overcome this problem, a tool localization method based on drilling process information is proposed. The algorithm estimates the pose of a robot-guided surgical tool during a drilling task based on the correlation of the observed axial drilling force and the heterogeneous bone density in the mastoid extracted from 3-D image data. We present here one possible implementation of this method tested on ten tunnels drilled into three human cadaver specimens where an average tool localization accuracy of 0.29 mm was observed. PMID:23269744

Williamson, Tom M; Bell, Brett J; Gerber, Nicolas; Salas, Lilibeth; Zysset, Philippe; Caversaccio, Marco; Weber, Stefan

2013-04-01

405

A multiscale wavelet-based test for isotropy of random fields on a regular lattice.

A test for isotropy of images modeled as stationary or intrinsically stationary random fields on a lattice is developed. The test is based on wavelet theory, and can operate on the horizontal and vertical scale of choice, or on any combination of scales. Scale is introduced through the wavelet variances (sometimes called the wavelet power spectrum), which decompose the variance over different horizontal and vertical spatial scales. The method is more general than existing tests for isotropy, since it handles intrinsically stationary random fields as well as second-order stationary fields. The performance of the method is demonstrated on samples from different random fields, and compared with three existing methods. It is competitive with or outperforms existing methods, since it consistently rejects close to the nominal level for isotropic fields while having a rejection rate for anisotropic fields comparable with the existing methods in the stationary case, and superior in the intrinsic case. As practical examples, paper density images of handsheets and mammogram images are analyzed. PMID:25561593
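
The building block of such a test — comparing wavelet variances computed along the two lattice directions — can be sketched with a single Haar level; for an isotropic field the two variances should agree. The multi-scale machinery and the formal test statistic are omitted, and the white-noise image is just a toy input:

```python
import math, random

def directional_haar_variances(img):
    # One-level Haar detail coefficients along rows (horizontal differences)
    # and along columns (vertical differences), and their sample variances.
    rows, cols = len(img), len(img[0])
    h = [(img[i][2 * j] - img[i][2 * j + 1]) / math.sqrt(2)
         for i in range(rows) for j in range(cols // 2)]
    v = [(img[2 * i][j] - img[2 * i + 1][j]) / math.sqrt(2)
         for i in range(rows // 2) for j in range(cols)]
    return sum(x * x for x in h) / len(h), sum(x * x for x in v) / len(v)

rng = random.Random(3)
noise = [[rng.gauss(0, 1) for _ in range(64)] for _ in range(64)]   # isotropic field
vh, vv = directional_haar_variances(noise)
```

For isotropic white noise the ratio vh/vv sits near 1; a strongly directional texture would push it away from 1, which is what the test formalises.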

Thon, Kevin; Geilhufe, Marc; Percival, Donald B

2015-02-01

406

Delayed density-dependent mortality can be a cause of the cyclic patterns in abundance observed in many populations of sockeye salmon (Oncorhynchus nerka). We used a meta-analytical approach to test for delayed density dependence using 34 time series of sockeye data. We found no consistent evidence for delayed density-dependent mortality using spawner–spring fry or spawner–recruit data. We did find

Ransom A. Myers; Michael J. Bradford; Jessica M. Bridson; Gordon Mertz

1997-01-01

407

Wavelet-based statistical approach for speckle reduction in medical ultrasound images.

A novel speckle-reduction method is introduced, based on soft thresholding of the wavelet coefficients of a logarithmically transformed medical ultrasound image. The method is based on generalised Gaussian distributed (GGD) modelling of sub-band coefficients. The method used was a variant of the recently published BayesShrink method of Chang and Vetterli, derived in the Bayesian framework for denoising natural images. It was scale adaptive, because the parameters required for estimating the threshold depend on scale and sub-band data. The threshold was computed as Kσ²/σ_x, where σ and σ_x were the standard deviations of the noise and of the sub-band data of the noise-free image, respectively, and K was a scale parameter. Experimental results showed that the proposed method outperformed the median filter and the homomorphic Wiener filter by 29% in terms of the coefficient of correlation and 4% in terms of the edge preservation parameter. The numerical values of these quantitative parameters indicated the good feature-preservation performance of the algorithm, as desired for better diagnosis in medical image processing. PMID:15125148
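
The threshold rule can be sketched directly from the abstract: estimate the noise standard deviation robustly from the detail coefficients, estimate the signal standard deviation by moment matching, and soft-threshold. This toy version assumes a single sub-band, K = 1, and plain Gaussian noise rather than the paper's GGD model of log-transformed ultrasound:

```python
import math, random, statistics

def soft(v, t):
    # Soft-thresholding operator: shrink towards zero by t, clip at zero.
    return math.copysign(max(abs(v) - t, 0.0), v)

def bayes_shrink_threshold(detail):
    # Robust noise std via the median absolute deviation of the detail coefficients.
    sigma = statistics.median(abs(v) for v in detail) / 0.6745
    var_y = sum(v * v for v in detail) / len(detail)            # observed variance
    sigma_x = math.sqrt(max(var_y - sigma ** 2, 1e-12))         # signal std estimate
    return sigma ** 2 / sigma_x                                 # threshold (K = 1)

rng = random.Random(2)
detail = [rng.gauss(0, 1) + (2.0 if i % 50 == 0 else 0.0) for i in range(1000)]
t = bayes_shrink_threshold(detail)
denoised = [soft(v, t) for v in detail]
```

Sub-bands dominated by noise get a large threshold (aggressive shrinkage), while sub-bands with strong signal content get a small one, which is the scale-adaptive behaviour described above.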

Gupta, S; Chauhan, R C; Sexana, S C

2004-03-01

408

NASA Astrophysics Data System (ADS)

We propose a methodology to estimate the density of frozen media (snow, firn and ice) using common offset (CO) GPR data. The technique is based on reflection amplitude analysis to calculate the series of reflection coefficients used to estimate the dielectric permittivity of each layer. We determine the vertical density variations for all the GPR traces by applying an empirical equation. We are thus able to infer the nature of frozen materials, from fresh snow to firn and ice. The proposed technique is critically evaluated and validated on synthetic data and further tested on real data of the Glacier of Mt. Canin (South-Eastern Alps). Despite the simplifying hypotheses and the necessary approximations, the average values of density for different levels are calculated with acceptable accuracy. The resulting large-scale density data are fundamental to estimate the water equivalent (WE), which is an essential parameter to determine the actual water mass within a certain frozen volume. Moreover, this analysis can help to find and locate debris or moraines embedded within the ice bodies.
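
A hedged sketch of the amplitude-analysis chain: reflection coefficients give layer permittivities recursively, and an empirical permittivity-density relation converts permittivity to density. A Kovacs-type formula for dry snow/firn/ice is assumed here; the authors' exact empirical equation may differ:

```python
import math

def layer_permittivities(eps_top, reflection_coeffs):
    # Recursively invert R = (n1 - n2) / (n1 + n2), where n = sqrt(eps),
    # so sqrt(eps2) = sqrt(eps1) * (1 - R) / (1 + R) at each interface.
    eps = [eps_top]
    for r in reflection_coeffs:
        n2 = math.sqrt(eps[-1]) * (1 - r) / (1 + r)
        eps.append(n2 * n2)
    return eps

def density_from_eps(eps):
    # Assumed Kovacs-type empirical relation for dry snow/firn/ice:
    # eps = (1 + 0.845 * rho)^2, with rho in g/cm^3.
    return (math.sqrt(eps) - 1.0) / 0.845

# Round trip: air (eps = 1) over a layer with eps = 2.
r01 = (1 - math.sqrt(2)) / (1 + math.sqrt(2))
eps_chain = layer_permittivities(1.0, [r01])
rho_ice = density_from_eps(3.15)   # permittivity of solid ice
```

As a sanity check, the assumed relation recovers the density of solid ice (about 0.917 g/cm³) from a permittivity of roughly 3.15.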

Forte, E.; Dossi, M.; Colucci, R. R.; Pipan, M.

2013-12-01

409

Wavelet-based automatic determination of the P- and S-wave arrivals

NASA Astrophysics Data System (ADS)

The detection of P- and S-wave arrivals is important for a variety of seismological applications including earthquake detection and characterization, and seismic tomography problems such as imaging of hydrocarbon reservoirs. For many years, dedicated human analysts manually selected the arrival times of P and S waves. However, with the rapid expansion of seismic instrumentation, automatic techniques that can process a large number of seismic traces are becoming essential in tomographic applications, and for earthquake early-warning systems. In this work, we present a pair of algorithms for efficient picking of P and S onset times. The algorithms are based on the continuous wavelet transform of the seismic waveform that allows examination of a signal in both time and frequency domains. Unlike the Fourier transform, the basis functions are localized in time and frequency; therefore, wavelet decomposition is suitable for analysis of non-stationary signals. For detecting the P-wave arrival, the wavelet coefficients are calculated using the vertical component of the seismogram, and the onset time of the wave is identified. In the case of the S-wave arrival, we take advantage of the polarization of the shear waves, and cross-examine the wavelet coefficients from the two horizontal components. In addition to the onset times, the automatic picking program provides estimates of uncertainty, which are important for subsequent applications. The algorithms are tested with synthetic data that are generated to include sudden changes in amplitude, frequency, and phase. The performance of the wavelet approach is further evaluated using real data by comparing the automatic picks with manual picks. Our results suggest that the proposed algorithms provide robust measurements that are comparable to manual picks for both P- and S-wave arrivals.
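
A stripped-down picker in the same spirit: transform the trace to a detail band (a single Haar level here instead of the full continuous wavelet transform) and mark the onset where the post/pre energy ratio peaks. The window length and synthetic trace are arbitrary illustration choices:

```python
import math, random

def haar_details(x):
    # One-level Haar detail coefficients: scaled differences of sample pairs.
    return [(x[2 * i] - x[2 * i + 1]) / math.sqrt(2) for i in range(len(x) // 2)]

def pick_onset(x, win=25):
    # STA/LTA-style ratio of detail-coefficient energy after vs before each index.
    e = [v * v for v in haar_details(x)]
    best_i, best_r = 0, 0.0
    for i in range(win, len(e) - win):
        pre = sum(e[i - win:i]) + 1e-12
        post = sum(e[i:i + win])
        if post / pre > best_r:
            best_r, best_i = post / pre, i
    return 2 * best_i          # map detail index back to original sample index

rng = random.Random(4)
noise = [0.05 * rng.gauss(0, 1) for _ in range(500)]
signal = [math.sin(0.6 * k) for k in range(500)]     # "arrival" begins at sample 500
trace = noise + [s + 0.05 * rng.gauss(0, 1) for s in signal]
onset = pick_onset(trace)
```

The energy ratio is maximal where quiet noise precedes the oscillatory arrival, so the pick lands near sample 500; real pickers add uncertainty estimates and multi-scale coefficients on top of this idea.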

Bogiatzis, P.; Ishii, M.

2013-12-01

410

NASA Technical Reports Server (NTRS)

The recently developed essentially fourth-order or higher low-dissipative shock-capturing scheme of Yee, Sandham and Djomehri (1999) aimed at minimizing numerical dissipation for high-speed compressible viscous flows containing shocks, shears and turbulence. To detect non-smooth behavior and control the amount of numerical dissipation to be added, Yee et al. employed an artificial compression method (ACM) of Harten (1978), but utilized it in an entirely different context than Harten originally intended. The ACM sensor consists of two tuning parameters and is highly dependent on the physical problem. To minimize parameter tuning and physical-problem dependence, new sensors with improved detection properties are proposed. The new sensors are derived from appropriate non-orthogonal wavelet basis functions and can be used to completely switch off the extra numerical dissipation outside shock layers. The non-dissipative spatial base scheme of arbitrarily high order of accuracy can be maintained without compromising its stability in all parts of the domain where the solution is smooth. Two types of redundant non-orthogonal wavelet basis functions are considered. One is the B-spline wavelet (Mallat & Zhong 1992) used by Gerritsen and Olsson (1996) in an adaptive mesh refinement method to determine regions where refinement should be done. The other is a modification of the multiresolution method of Harten (1995), converting it to a new, redundant, non-orthogonal wavelet. The wavelet sensor is then obtained by computing the estimated Lipschitz exponent of a chosen physical quantity (or vector) to be sensed on a chosen wavelet basis function. Both wavelet sensors can be viewed as dual-purpose adaptive methods leading to dynamic numerical dissipation control and improved grid adaptation indicators. Consequently, they are useful not only for shock-turbulence computations but also for computational aeroacoustics and numerical combustion.
In addition, these sensors are scheme independent and can be stand-alone options for numerical algorithms other than the Yee et al. scheme.

Sjoegreen, B.; Yee, H. C.

2001-01-01

411

NASA Astrophysics Data System (ADS)

Kepler photometric data contain significant systematic and stochastic errors as they come from the Kepler Spacecraft. The main cause for the systematic errors are changes in the photometer focus due to thermal changes in the instrument, and also residual spacecraft pointing errors. It is the main purpose of the Presearch-Data-Conditioning (PDC) module of the Kepler Science processing pipeline to remove these systematic errors from the light curves. While PDC has recently seen a dramatic performance improvement by means of a Bayesian approach to systematic error correction and improved discontinuity correction, there is still room for improvement. One problem of the current (Kepler 8.1) implementation of PDC is that injection of high frequency noise can be observed in some light curves. Although this high frequency noise does not negatively impact the general cotrending, an increased noise level can make detection of planet transits or other astrophysical signals more difficult. The origin of this noise-injection is that high frequency components of light curves sometimes get included into detrending basis vectors characterizing long term trends. Similarly, small scale features like edges can sometimes get included in basis vectors which otherwise describe low frequency trends. As a side effect to removing the trends, detrending with these basis vectors can then also mistakenly introduce these small scale features into the light curves. A solution to this problem is to perform a separation of scales, such that small scale features and large scale features are described by different basis vectors. We present our new multiscale approach that employs wavelet-based band splitting to decompose small scale from large scale features in the light curves. The PDC Bayesian detrending can then be performed on each band individually to correct small and large scale systematics independently. Funding for the Kepler Mission is provided by the NASA Science Mission Directorate.

Stumpe, Martin C.; Smith, J. C.; Van Cleve, J.; Jenkins, J. M.; Barclay, T. S.; Fanelli, M. N.; Girouard, F.; Kolodziejczak, J.; McCauliff, S.; Morris, R. L.; Twicken, J. D.

2012-05-01

412

NASA Technical Reports Server (NTRS)

To determine whether estimates of volumetric bone density from projectional scans of the lumbar spine have weaker associations with height and weight and stronger associations with prevalent vertebral fractures than standard projectional bone mineral density (BMD) and bone mineral content (BMC), we obtained posteroanterior (PA) dual X-ray absorptiometry (DXA), lateral supine DXA (Hologic QDR 2000), and quantitative computed tomography (QCT, GE 9800 scanner) in 260 postmenopausal women enrolled in two trials of treatment for osteoporosis. In 223 women, all vertebral levels, i.e., L2-L4 in the DXA scan and L1-L3 in the QCT scan, could be evaluated. Fifty-five women were diagnosed as having at least one mild fracture (age 67.9 +/- 6.5 years) and 168 women did not have any fractures (age 62.3 +/- 6.9 years). We derived three estimates of "volumetric bone density" from PA DXA (BMAD, BMAD*, and BMD*) and three from paired PA and lateral DXA (WA BMD, WA BMDHol, and eVBMD). While PA BMC and PA BMD were significantly correlated with height (r = 0.49 and r = 0.28) or weight (r = 0.38 and r = 0.37), QCT and the volumetric bone density estimates from paired PA and lateral scans were not (r = -0.083 to r = 0.050). BMAD, BMAD*, and BMD* correlated with weight but not height. The associations with vertebral fracture were stronger for QCT (odds ratio [OR] = 3.17; 95% confidence interval [CI] = 1.90-5.27), eVBMD (OR = 2.87; CI 1.80-4.57), WA BMDHol (OR = 2.86; CI 1.80-4.55) and WA-BMD (OR = 2.77; CI 1.75-4.39) than for BMAD*/BMD* (OR = 2.03; CI 1.32-3.12), BMAD (OR = 1.68; CI 1.14-2.48), lateral BMD (OR = 1.88; CI 1.28-2.77), standard PA BMD (OR = 1.47; CI 1.02-2.13) or PA BMC (OR = 1.22; CI 0.86-1.74). The areas under the receiver operating characteristic (ROC) curves for QCT and all estimates of volumetric BMD were significantly higher compared with standard PA BMD and PA BMC.
We conclude that, like QCT, estimates of volumetric bone density from paired PA and lateral scans are unaffected by height and weight and are more strongly associated with vertebral fracture than standard PA BMD or BMC, or estimates of volumetric density that are solely based on PA DXA scans.

Jergas, M.; Breitenseher, M.; Gluer, C. C.; Yu, W.; Genant, H. K.

1995-01-01

413

Parenchymal breast density estimation with the use of statistical characteristics and textons

Breast parenchymal density has been found to be a strong indicator of breast cancer risk, but the classification process performed by radiologists has proved to be quite subjective. Furthermore, recent studies have shown that the effectiveness of most modern CAD systems used for breast abnormality detection diminishes greatly when parenchymal breast tissue density is high. Therefore the existence

Sevastianos Chatzistergos; John Stoitsis; A. Papaevangelou; G. Zografos; K. S. Nikita

2010-01-01

414

Interference by pigment in the estimation of microalgal biomass concentration by optical density

Optical density is used as a convenient indirect measurement of biomass concentration in microbial cell suspensions. Absorbance of light by a suspension can be related directly to cell density using a suitable standard curve. However, inaccuracies can be introduced when the pigment content of the cells changes. Under the culture conditions used, pigment content of the microalga Chlorella vulgaris varied
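The standard-curve step described above can be sketched in a few lines of numpy: fit a linear calibration of optical density against known biomass concentrations, then invert it for new readings. All calibration values below are hypothetical:

```python
import numpy as np

# Hypothetical calibration data: known dry-weight concentrations (g/L)
# and the optical densities measured for them.
conc = np.array([0.1, 0.2, 0.4, 0.8, 1.6])
od   = np.array([0.11, 0.21, 0.43, 0.82, 1.65])

# Fit OD = slope * conc + intercept by least squares.
slope, intercept = np.polyfit(conc, od, 1)

def od_to_biomass(od_reading):
    """Invert the standard curve to estimate biomass concentration (g/L)."""
    return (od_reading - intercept) / slope
```

As the abstract notes, this inversion silently assumes the pigment content (and hence absorbance per unit biomass) is the same as when the standard curve was made; a changed pigment content invalidates the curve.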

Melinda J. Griffiths; Clive Garcin; Robert P. van Hille; Susan T. L. Harrison

2011-01-01

415

Estimation of velocity spectra in variable density jets using laser–Doppler anemometry

The present paper is concerned with turbulent jet flows in which the density varies due to exchanges of mass. Such flows have gained some attention over the last few years but, to our knowledge, this is the first time results are reported for the influence of density variations on velocity spectra. Our measurements are obtained by laser–Doppler anemometry and part

L Pietri; A Gharbi; M Amielh; F Anselmet

1998-01-01

416

Admittance estimates of mean crustal thickness and density at the Martian hemispheric dichotomy

the minimum value and a fixed surface density of 2.5 Mg m⁻³, the ranges of tc, Te, and F are 1–75 km, 37 ... by a single-layered crust of uniform density, the results are at odds with expectations of the nature

Nimmo, Francis

417

A field comparison of nested grid and trapping web density estimators

The usefulness of capture-recapture estimators in any field study will depend largely on underlying model assumptions and on how closely these assumptions approximate the actual field situation. Evaluation of estimator performance under real-world field conditions is often a difficult matter, although several approaches are possible. Perhaps the best approach involves use of the estimation method on a population with known parameters.
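The nested-grid and trapping-web estimators compared here are both built on capture-recapture reasoning. The simplest member of that family, Chapman's bias-corrected Lincoln-Petersen estimator, illustrates the core idea; the counts below are hypothetical:

```python
def chapman_estimate(n1, n2, m2):
    """Chapman's bias-corrected Lincoln-Petersen abundance estimator.
    n1: animals marked and released in the first session,
    n2: animals captured in the second session,
    m2: marked animals among the n2 second-session captures."""
    return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1

# e.g. 50 marked, 60 captured later, 15 of them marked
N_hat = chapman_estimate(50, 60, 15)  # -> 193.4375
```

Field estimators like those in the paper add spatial structure (grids, webs) precisely because this closed-population arithmetic ignores movement and unequal capture probabilities, the assumptions the authors set out to test against a population with known parameters.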

Jett, D.A.; Nichols, J.D.

1987-01-01

418

NASA Astrophysics Data System (ADS)

Reliable predictions of groundwater flow and solute transport require an estimation of the detailed distribution of the parameters (e.g., hydraulic conductivity, effective porosity) controlling these processes. However, such parameters are difficult to estimate because of the inaccessibility and complexity of the subsurface. In this regard, developments in parameter estimation techniques and investigations of field experiments are still challenging and necessary to improve our understanding and the prediction of hydrological processes. Here we analyze a conservative tracer test conducted at the Boise Hydrogeophysical Research Site in 2001 in a heterogeneous unconfined fluvial aquifer. Some relevant characteristics of this test include: variable-density (sinking) effects because of the injection concentration of the bromide tracer, the relatively small size of the experiment, and the availability of various sources of geophysical and hydrological information. The information contained in this experiment is evaluated through several parameter estimation approaches, including a grid-search-based strategy, stochastic simulation of hydrological property distributions, and deterministic inversion using regularization and pilot-point techniques. Doing this allows us to investigate hydraulic conductivity and effective porosity distributions and to compare the effects of assumptions from several methods and parameterizations. Our results provide new insights into the understanding of variable-density transport processes and the hydrological relevance of incorporating various sources of information in parameter estimation approaches. Among others, the variable-density effect and the effective porosity distribution, as well as their coupling with the hydraulic conductivity structure, are seen to be significant in the transport process. The results also show that assumed prior information can strongly influence the estimated distributions of hydrological properties.
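The grid-search-based strategy mentioned above can be sketched with a toy forward model: advective travel time t = L * porosity / (K * i), searched over a grid of hydraulic conductivity K and porosity. The model, grids, and values are illustrative assumptions, not the Boise site setup:

```python
import numpy as np

# Toy forward model: advective travel time over distance L (m)
# under hydraulic gradient i. All values are hypothetical.
L, i = 10.0, 0.01

def travel_time(K, porosity):
    return L * porosity / (K * i)

t_obs = travel_time(K=1e-3, porosity=0.25)  # synthetic "observed" datum

# Exhaustive grid search minimizing relative squared misfit.
K_grid = np.logspace(-4, -2, 50)
phi_grid = np.linspace(0.1, 0.4, 50)
best = min((((travel_time(K, p) - t_obs) / t_obs) ** 2, K, p)
           for K in K_grid for p in phi_grid)
misfit, K_best, phi_best = best
```

Even this toy version shows the coupling the abstract discusses: travel time depends only on the ratio porosity/K, so many (K, porosity) pairs fit the datum almost equally well, and prior information is needed to resolve them.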

Dafflon, B.; Barrash, W.; Cardiff, M.; Johnson, T. C.

2011-12-01

419

Power spectral density estimation by spline smoothing in the frequency domain

NASA Technical Reports Server (NTRS)

An approach, based on a global averaging procedure, is presented for estimating the power spectrum of a second order stationary zero-mean ergodic stochastic process from a finite length record. This estimate is derived by smoothing, with a cubic smoothing spline, the naive estimate of the spectrum obtained by applying FFT techniques to the raw data. By means of digital computer simulated results, a comparison is made between the features of the present approach and those of more classical techniques of spectral estimation.

Defigueiredo, R. J. P.; Thompson, J. R.

1972-01-01

420

Power spectral density estimation by spline smoothing in the frequency domain.

NASA Technical Reports Server (NTRS)

An approach, based on a global averaging procedure, is presented for estimating the power spectrum of a second order stationary zero-mean ergodic stochastic process from a finite length record. This estimate is derived by smoothing, with a cubic smoothing spline, the naive estimate of the spectrum obtained by applying Fast Fourier Transform techniques to the raw data. By means of digital computer simulated results, a comparison is made between the features of the present approach and those of more classical techniques of spectral estimation.
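The procedure described in the two records above (a naive FFT periodogram smoothed by a cubic smoothing spline) can be sketched with numpy/scipy, using `UnivariateSpline` as a stand-in for the original spline algorithm; the signal, smoothing factor, and log-domain smoothing are assumptions for illustration:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(0)
fs = 100.0                        # sampling rate (Hz)
n = 1024
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 10 * t) + rng.standard_normal(n)  # tone + noise

# Naive periodogram (raw spectral estimate).
X = np.fft.rfft(x)
freqs = np.fft.rfftfreq(n, d=1/fs)
pxx = (np.abs(X) ** 2) / (fs * n)

# Smooth the log-periodogram with a cubic smoothing spline (k=3);
# the smoothing factor s is a tunable assumption.
spl = UnivariateSpline(freqs, np.log(pxx + 1e-12), k=3, s=float(len(freqs)))
pxx_smooth = np.exp(spl(freqs))
```

Smoothing in the log domain keeps the estimate positive; the choice of s plays the role of the global averaging bandwidth in the paper's procedure.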

De Figueiredo, R. J. P.; Thompson, J. R.

1972-01-01

421

Although pollinator declines are a global biodiversity threat, the demography of the western honeybee (Apis mellifera) has not been considered by conservationists because it is biased by the activity of beekeepers. To fill this gap in pollinator decline censuses and to provide a broad picture of the current status of honeybees across their natural range, we used microsatellite genetic markers to estimate colony densities and genetic diversity at different locations in Europe, Africa, and central Asia that had different patterns of land use. Genetic diversity and colony densities were highest in South Africa and lowest in Northern Europe and were correlated with mean annual temperature. Confounding factors not related to climate, however, are also likely to influence genetic diversity and colony densities in honeybee populations. Land use showed a significantly negative influence over genetic diversity and the density of honeybee colonies over all sampling locations. In Europe honeybees sampled in nature reserves had genetic diversity and colony densities similar to those sampled in agricultural landscapes, which suggests that the former are not wild but may have come from managed hives. Other results also support this idea: putative wild bees were rare in our European samples, and the mean estimated density of honeybee colonies on the continent closely resembled the reported mean number of managed hives. Current densities of European honeybee populations are in the same range as those found in the adverse climatic conditions of the Kalahari and Saharan deserts, which suggests that beekeeping activities do not compensate for the loss of wild colonies. Our findings highlight the importance of reconsidering the conservation status of honeybees in Europe and of regarding beekeeping not only as a profitable business for producing honey, but also as an essential component of biodiversity conservation. PMID:19775273

Jaffé, Rodolfo; Dietemann, Vincent; Allsopp, Mike H; Costa, Cecilia; Crewe, Robin M; Dall'Olio, Raffaele; De la Rúa, Pilar; El-Niweiri, Mogbel A A; Fries, Ingemar; Kezic, Nikola; Meusel, Michael S; Paxton, Robert J; Shaibi, Taher; Stolle, Eckart; Moritz, Robin F A

2010-04-01

422

This paper studies the time-dependent power spectral density (PSD) estimation of nonstationary surface electromyography (SEMG) signals and its application to fatigue analysis during isometric muscle contraction. The conventional time-dependent PSD estimation methods exhibit large variabilities in estimating the instantaneous SEMG parameters so that they often fail to identify the changing patterns of short-period SEMG signals and gauge the extent of fatigue in specific muscle groups. To address this problem, a time-varying autoregressive (TVAR) model is proposed in this paper to describe the SEMG signal, and then the recursive least-squares (RLS) and basis function expansion (BFE) methods are used to estimate the model coefficients and the time-dependent PSD. The instantaneous parameters extracted from the PSD estimation are evaluated and compared in terms of reliability, accuracy, and complexity. Experimental results on synthesized and real SEMG data show that the proposed TVAR-model-based PSD estimators can achieve more stable and precise instantaneous parameter estimation than conventional methods. PMID:19027325
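The RLS ingredient of the proposed TVAR estimator can be sketched as follows: recursive least squares with a forgetting factor tracks the AR coefficients over time. The forgetting factor, model order, and synthetic AR(2) test signal are assumptions for illustration, not the paper's SEMG settings:

```python
import numpy as np

def rls_ar(x, order=2, lam=0.999, delta=100.0):
    """Track AR coefficients of x with exponentially weighted RLS
    (forgetting factor lam) -- a sketch of a TVAR coefficient estimator."""
    w = np.zeros(order)
    P = delta * np.eye(order)
    coeffs = []
    for n in range(order, len(x)):
        phi = x[n - order:n][::-1]            # [x[n-1], x[n-2], ...]
        k = P @ phi / (lam + phi @ P @ phi)   # gain vector
        e = x[n] - w @ phi                    # a priori prediction error
        w = w + k * e
        P = (P - np.outer(k, phi @ P)) / lam
        coeffs.append(w.copy())
    return np.array(coeffs)

# Synthetic AR(2): x[n] = 1.0*x[n-1] - 0.5*x[n-2] + noise
rng = np.random.default_rng(1)
x = np.zeros(4000)
for n in range(2, len(x)):
    x[n] = 1.0 * x[n - 1] - 0.5 * x[n - 2] + 0.1 * rng.standard_normal()

w_hat = rls_ar(x)[-1]   # final coefficient estimate, near [1.0, -0.5]
```

From the tracked coefficients one can evaluate the time-varying AR spectrum at each instant, which is the time-dependent PSD the paper extracts fatigue parameters from.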

Zhang, Z G; Liu, H T; Chan, S C; Luk, K D K; Hu, Y

2010-02-01

423

Estimation of the density of Martian soil from radiophysical measurements in the 3-centimeter range

NASA Technical Reports Server (NTRS)

The density of the Martian soil is evaluated at a depth up to one meter using the results of radar measurement at λ₀ = 3.8 cm and polarized radio astronomical measurement at λ₀ = 3.4 cm conducted onboard the automatic interplanetary stations Mars 3 and Mars 5. The average value of the soil density according to all measurements is ρ̄ = 1.37 ± 0.33 g/cm³. A map of the distribution of the permittivity and soil density is derived, which was drawn up according to radiophysical data in the 3-centimeter range.
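Density estimates of this kind rest on an empirical link between dielectric permittivity and bulk density. As a hedged sketch (using the commonly cited dry-regolith relation ε' ≈ (1 + 0.5ρ)², which is an assumption here, not necessarily the relation used in this paper):

```python
import math

def density_from_permittivity(eps):
    """Invert the empirical dry-regolith relation eps ~ (1 + 0.5*rho)**2
    (an assumed relation, not necessarily the paper's) to get bulk
    density rho in g/cm^3 from real relative permittivity eps."""
    return 2.0 * (math.sqrt(eps) - 1.0)

rho = density_from_permittivity(2.9)  # hypothetical measured permittivity
```

For a permittivity of 2.9 this gives a density of about 1.4 g/cm³, the same order as the mean value reported above.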

Krupenio, N. N.

1977-01-01

424

Estimation of bone mineral density in children with juvenile rheumatoid arthritis.

Bone mineral content of different areas of the skeleton was measured by dual photon absorptiometry in 20 children with juvenile rheumatoid arthritis (JRA) and compared to 20 age and sex matched healthy children. Spinal density was similar in both groups in prepubertal children but decreased in the postpubertal girls with JRA. Total bone density was also decreased in the postpubertal girls. Six children with JRA had repeat scans 12 to 24 months later; in 3 children total bone mineral content increased significantly with an intensive management program. Our study suggests that bone mineral density does not show a pubertal increase in children with JRA, as it does in healthy children. PMID:1941831

Hopp, R; Degan, J; Gallagher, J C; Cassidy, J T

1991-08-01

425

Superfluid density in the two-dimensional attractive Hubbard model: Quantitative estimates

NASA Astrophysics Data System (ADS)

A nonzero superfluid density is equivalent to the occurrence of a Meissner effect and therefore signals superconductivity. A recent theorem shows that in the case of a spectrum with a gap the superfluid density is equivalent to the Drude weight. This theorem is employed to compare approximate calculations of the superfluid density in the two-dimensional attractive Hubbard model using the Hartree-Fock approximation with exact diagonalization calculations of the Drude weight. Direct comparison of the approximate results with recent finite-temperature quantum Monte Carlo calculations is also made. The approximate results are found to be quantitatively accurate for all fillings, except close to half-filling.

Denteneer, P. J. H.

1994-03-01

426

Population Indices Versus Correlated Density Estimates of Black-Footed Ferret Abundance

Estimating abundance of carnivore populations is problematic because individuals typically are elusive, nocturnal, and dispersed across the landscape. Rare or endangered carnivore populations are even more difficult to estimate because of small sample sizes. Considering behavioral ecology of the target species can drastically improve survey efficiency and effectiveness. Previously, abundance of the black-footed ferret (Mustela nigripes) was monitored by spotlighting

Martin B. Grenier; Steven W. Buskirk; Richard Anderson-Sprecher

2009-01-01

427

In this paper, an H-infinity fault detection and diagnosis (FDD) scheme for a class of discrete nonlinear system faults using output probability density estimation is presented. Unlike classical FDD problems, the measured output of the system is viewed as a stochastic process and its square-root probability density function (PDF) is modeled with B-spline functions, which leads to a deterministic space-time dynamic model that includes nonlinearities and uncertainties. A weighted mean value is defined as an integral of the square-root PDF along the space direction, which yields a function of time only and can be used to construct a residual signal. Thus, classical nonlinear filter approaches can be used to detect and diagnose faults in the system. A feasible detection criterion is obtained first, and a new H-infinity adaptive fault diagnosis algorithm is then developed to estimate the fault. A simulation example is given to demonstrate the effectiveness of the proposed approaches.
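The square-root B-spline PDF model and the space-integrated scalar described above can be sketched with scipy; the knot vector, the Gaussian "measured" PDF, and the weight function are illustrative assumptions, not the paper's choices:

```python
import numpy as np
from scipy.interpolate import BSpline

# Sample space and a hypothetical "measured" output PDF (Gaussian here).
y = np.linspace(-3, 3, 200)
pdf = np.exp(-y**2 / 2) / np.sqrt(2 * np.pi)

# Cubic B-spline basis on the interval; the knot placement is an assumption.
k = 3
inner = np.linspace(-3, 3, 8)
t = np.r_[[inner[0]] * k, inner, [inner[-1]] * k]   # clamped knot vector
nb = len(t) - k - 1
B = np.column_stack([BSpline(t, np.eye(nb)[i], k)(y) for i in range(nb)])

# Least-squares weights of the square-root PDF model: sqrt(pdf) ~ B @ w.
w, *_ = np.linalg.lstsq(B, np.sqrt(pdf), rcond=None)
recon = B @ w

# Scalar "weighted mean value": integral of a weight function times the
# square-root PDF along the space direction; usable as a residual signal.
weight = y                                   # an assumed weight function
v = np.sum(weight * recon) * (y[1] - y[0])
```

In the paper this scalar evolves in time as the output PDF evolves, so deviations of v from its nominal trajectory serve as the residual for fault detection.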

Zhang Yumin; Lum, Kai-Yew [Temasek Laboratories, National University of Singapore, Singapore 117508 (Singapore)]; Wang Qingguo [Dept. of Electrical and Computer Engineering, National University of Singapore, Singapore 117576 (Singapore)]

2009-03-05

428

Comparison of precision orbit derived density estimates for CHAMP and GRACE satellites

NASA Astrophysics Data System (ADS)

Current atmospheric density mo