While these samples are representative of the content of Science.gov,

they are not comprehensive nor are they the most current set.

We encourage you to perform a real-time search of Science.gov

to obtain the most current and comprehensive results.

Last update: August 15, 2014.

1

Wavelet-based density estimation for noise reduction in plasma simulations using particles

For given computational resources, one of the main limitations in the accuracy of plasma simulations using particles comes from the noise due to limited statistical sampling in the reconstruction of the particle distribution function. A method based on wavelet multiresolution analysis is proposed and tested to reduce this noise. The method, known as wavelet based density estimation (WBDE), was previously introduced in the statistical literature to estimate probability densities given a finite number of independent measurements. Its novel application to plasma simulations can be viewed as a natural extension of the finite size particles (FSP) approach, with the advantage of estimating more accurately distribution functions that have localized sharp features. The proposed method preserves the moments of the particle distribution function to a good level of accuracy, has no constraints on the dimensionality of the system, does not require an a priori selection of a global smoothing scale, and is able to adapt locally to the smoothness of the density based on the given discrete particle data. Most importantly, the computational cost of the denoising stage is of the same order as one timestep of a FSP simulation. The method is compared with a recently proposed proper orthogonal decomposition based method, and it is tested with particle data corresponding to strongly collisional, weakly collisional, and collisionless plasma simulations.

Nguyen van yen, Romain [Laboratoire de Meteorologie Dynamique-CNRS, Ecole Normale Superieure]; Del-Castillo-Negrete, Diego B. [ORNL]; Schneider, Kai [Universite d'Aix-Marseille]; Farge, Marie [Laboratoire de Meteorologie Dynamique-CNRS, Ecole Normale Superieure]; Chen, Guangye [ORNL]

2010-01-01
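The denoising stage described above can be illustrated with a minimal sketch (NumPy assumed). A hand-rolled orthonormal Haar transform stands in for the paper's full multiresolution machinery, and the universal threshold for its refined scale-adaptive rule; the bimodal "particle" data are hypothetical:

```python
import numpy as np

def haar_dwt(x, levels):
    """Multilevel orthonormal Haar analysis: approximation + detail arrays."""
    a, details = x.astype(float), []
    for _ in range(levels):
        a, d = (a[0::2] + a[1::2]) / np.sqrt(2), (a[0::2] - a[1::2]) / np.sqrt(2)
        details.append(d)          # finest scale first
    return a, details

def haar_idwt(a, details):
    """Inverse of haar_dwt (perfect reconstruction)."""
    for d in reversed(details):
        up = np.empty(2 * a.size)
        up[0::2] = (a + d) / np.sqrt(2)
        up[1::2] = (a - d) / np.sqrt(2)
        a = up
    return a

rng = np.random.default_rng(0)
# Particle positions: a sharply peaked component plus a broad background.
samples = np.concatenate([rng.normal(0.0, 0.05, 4000), rng.normal(0.5, 0.3, 4000)])
# Bin counts play the role of the noisy reconstructed density.
counts, _ = np.histogram(samples, bins=256, range=(-1.5, 2.0), density=True)

a, details = haar_dwt(counts, levels=4)
sigma = np.median(np.abs(details[0])) / 0.6745        # robust noise scale
thr = sigma * np.sqrt(2.0 * np.log(counts.size))      # universal threshold
details = [np.sign(d) * np.maximum(np.abs(d) - thr, 0.0) for d in details]
denoised = np.clip(haar_idwt(a, details), 0.0, None)  # densities are nonnegative
```

Soft-thresholding the detail coefficients suppresses sampling noise while the sharp peak, which projects onto few large coefficients, survives.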

2

A non linear wavelet based estimator for long memory processes

Two wavelet based estimators are considered in this paper for the two parameters that characterize long range dependence processes. The first one is linear and is based on the statistical properties of the coefficients of a discrete wavelet transform of long range dependence processes. The estimator consists in measuring the slope (related to the long memory parameter) and the intercept

Livia De Giovanni; Maurizio Naldi

2004-01-01

3

Optimal a priori clipping estimation for wavelet-based method of moments matrices

Wavelet bases are mainly used in the methods of moments (MoM) to render the system matrix sparse; clipping entries below a given threshold is an essential operation to obtain the desired sparse matrix. In this paper, we present a novel a priori way to estimate the clipping threshold that can be used if one wants an error on the solution

Francesco P. Andriulli; Giuseppe Vecchi; Francesca Vipiana; Paola Pirinoli; Anita Tabacco

2005-01-01

4

Wavelet-based Poisson rate estimation using the Skellam distribution

NASA Astrophysics Data System (ADS)

Owing to the stochastic nature of discrete processes such as photon counts in imaging, real-world data measurements often exhibit heteroscedastic behavior. In particular, time series components and other measurements may frequently be assumed to be non-iid Poisson random variables, whose rate parameter is proportional to the underlying signal of interest; witness the literature in digital communications, signal processing, astronomy, and magnetic resonance imaging applications. In this work, we show that certain wavelet and filterbank transform coefficients corresponding to vector-valued measurements of this type are distributed as sums and differences of independent Poisson counts, taking the so-called Skellam distribution. While exact estimates rarely admit analytical forms, we present Skellam mean estimators under both frequentist and Bayes models, as well as computationally efficient approximations and shrinkage rules, that may be interpreted as Poisson rate estimation performed in certain wavelet/filterbank transform domains. This indicates a promising potential approach for denoising of Poisson counts in the above-mentioned applications.

Hirakawa, Keigo; Baqai, Farhan; Wolfe, Patrick J.

2009-02-01
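The key distributional fact in this record, that an unnormalized Haar detail coefficient of independent Poisson counts is Skellam distributed, is easy to check numerically. A sketch with arbitrary, hypothetical rates (NumPy assumed):

```python
import numpy as np

rng = np.random.default_rng(1)
lam1, lam2, n = 7.0, 3.0, 200_000   # hypothetical Poisson rates

x1 = rng.poisson(lam1, n)           # photon counts in two neighbouring pixels
x2 = rng.poisson(lam2, n)

# Unnormalized Haar detail (difference) and scaling (sum) coefficients:
d = x1 - x2   # Skellam(lam1, lam2): mean lam1 - lam2, variance lam1 + lam2
s = x1 + x2   # Poisson(lam1 + lam2): mean = variance = lam1 + lam2

print(d.mean(), d.var())   # ~ 4.0, ~ 10.0
print(s.mean(), s.var())   # ~ 10.0, ~ 10.0
```

The sums stay Poisson while the differences become Skellam, which is why rate estimation can be carried out coefficient-by-coefficient in the transform domain.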

5

Estimation of Modal Parameters Using a Wavelet-Based Approach

NASA Technical Reports Server (NTRS)

Modal stability parameters are extracted directly from aeroservoelastic flight test data by decomposition of accelerometer response signals into time-frequency atoms. Logarithmic sweeps and sinusoidal pulses are used to generate DAST closed loop excitation data. Novel wavelets constructed to extract modal damping and frequency explicitly from the data are introduced. The so-called Haley and Laplace wavelets are used to track time-varying modal damping and frequency in a matching pursuit algorithm. Estimation of the trend to aeroservoelastic instability is demonstrated successfully from analysis of the DAST data.

Lind, Rick; Brenner, Marty; Haley, Sidney M.

1997-01-01
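A single matching-pursuit step with damped-sinusoid ("Laplace-type") atoms can be sketched as follows. The signal, grid, and parameters here are hypothetical stand-ins, not the DAST data or the paper's exact wavelets (NumPy assumed):

```python
import numpy as np

fs = 100.0
t = np.arange(0, 4, 1 / fs)
zeta_true, f_true = 0.05, 6.0                  # damping ratio, frequency in Hz
wn = 2 * np.pi * f_true
signal = np.exp(-zeta_true * wn * t) * np.sin(wn * np.sqrt(1 - zeta_true**2) * t)
signal += 0.05 * np.random.default_rng(8).standard_normal(t.size)

# One matching-pursuit step: correlate with a dictionary of damped-sinusoid
# atoms; the best-matching atom's parameters are the modal estimates.
best = (None, -np.inf)
for f in np.arange(4.0, 8.01, 0.1):
    for z in np.arange(0.01, 0.15, 0.01):
        w = 2 * np.pi * f
        atom = np.exp(-z * w * t) * np.sin(w * np.sqrt(1 - z**2) * t)
        atom /= np.linalg.norm(atom)           # unit-norm dictionary atom
        score = abs(signal @ atom)
        if score > best[1]:
            best = ((f, z), score)

(f_hat, z_hat), _ = best
```

In a full matching-pursuit algorithm the chosen atom's contribution would be subtracted and the search repeated, tracking time-varying damping and frequency.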

6

WAVELET-BASED BAYESIAN ESTIMATION OF PARTIALLY LINEAR REGRESSION MODELS WITH LONG MEMORY ERRORS

In this paper we focus on partially linear regression models with long memory errors, and propose a wavelet-based Bayesian procedure that allows the simultaneous estimation of the model parameters and the nonparametric part of the model. Employing discrete wavelet transforms is crucial in order to simplify the dense variance-covariance matrix of the long memory error. We achieve a fully Bayesian inference by adopting a Metropolis algorithm within a Gibbs sampler. We evaluate the performances of the proposed method on simulated data. In addition, we present an application to Northern hemisphere temperature data, a benchmark in the long memory literature.

Ko, Kyungduk; Qu, Leming; Vannucci, Marina

2013-01-01

7

The wavelet-based multi-resolution motion estimation using temporal aliasing detection

NASA Astrophysics Data System (ADS)

In this paper, we propose a new algorithm for wavelet-based multi-resolution motion estimation (MRME) using temporal aliasing detection (TAD). In wavelet transformed image/video signals, temporal aliasing will be severe as the motion of objects increases, causing the performance of the conventional MRME algorithms to drop. To overcome this problem, we perform the temporal aliasing detection and MRME simultaneously instead of using a temporal anti-aliasing filter which changes the original signal. We show that this technique gives competitive or better performance in terms of rate-distortion (RD) for slow-varying or simple-moving video signals compared to conventional MRME employing increased search area (SA).

Lee, Teahyung; Anderson, David V.

2007-01-01

8

NASA Astrophysics Data System (ADS)

We propose to use the Bayesian framework and the wavelet transform (WT) to estimate differential photometry in binary systems imaged with adaptive optics (AO). We challenge the notion that Richardson-Lucy-type algorithms are not suitable to AO observations because of the mismatch between the target's and reference star's point-spread functions (PSFs). Using real data obtained with the Lick Observatory AO system on the 3 m Shane telescope, we first obtain a deconvolved image by means of the Adaptive Wavelets Maximum Likelihood Estimator (AWMLE) approach. The algorithm reconstructs an image that maximizes the compound Poisson and Gaussian likelihood of the data. It also performs wavelet decomposition, which helps to distinguish signal from noise, and therefore it aides the stopping rule. We test photometric precision of that approach versus PSF-fitting with the StarFinder package for companions located within the halo created by the bright star. Simultaneously, we test the susceptibility of both approaches to error in the reference PSF, as quantified by the difference in the Strehl ratio between the science and calibration PSFs. We show that AWMLE is capable of producing better results than PSF-fitting. More importantly, we have developed a methodology for testing photometric codes for AO observations.

Baena Gallé, Roberto; Gladysz, Szymon

2011-07-01
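The deconvolution engine underlying approaches of this kind can be illustrated with the classical Richardson-Lucy iteration in one dimension; this is the generic algorithm, not AWMLE's wavelet-regularized variant, and the two "stars" and PSF are hypothetical (NumPy assumed):

```python
import numpy as np

rng = np.random.default_rng(10)

# Ground truth: two point sources of different brightness ("stars").
x = np.zeros(64)
x[20], x[40] = 100.0, 30.0

# Gaussian PSF, normalized to unit flux.
psf = np.exp(-0.5 * (np.arange(-8, 9) / 2.0) ** 2)
psf /= psf.sum()

blurred = np.convolve(x, psf, mode="same")
data = rng.poisson(blurred).astype(float)      # photon-noise-limited observation

# Richardson-Lucy: multiplicative updates that increase the Poisson likelihood.
est = np.full_like(x, data.mean())             # flat nonnegative initial guess
for _ in range(200):
    conv = np.convolve(est, psf, mode="same") + 1e-12
    est *= np.convolve(data / conv, psf[::-1], mode="same")
```

The iteration re-concentrates the blurred flux at the source positions; in AWMLE a wavelet decomposition additionally separates signal from noise to inform the stopping rule.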

9

Wavelet-based texture retrieval using generalized Gaussian density and Kullback-Leibler distance

We present a statistical view of the texture retrieval problem by combining the two related tasks, namely feature extraction (FE) and similarity measurement (SM), into a joint modeling and classification scheme. We show that using a consistent estimator of texture model parameters for the FE step followed by computing the Kullback-Leibler distance (KLD) between estimated models for the SM step

Minh N. Do; Martin Vetterli

2002-01-01

10

Fetal QRS detection and heart rate estimation: a wavelet-based approach.

Fetal heart rate monitoring is used for pregnancy surveillance in obstetric units all over the world but in spite of recent advances in analysis methods, there are still inherent technical limitations that bound its contribution to the improvement of perinatal indicators. In this work, a previously published wavelet transform based QRS detector, validated over standard electrocardiogram (ECG) databases, is adapted to fetal QRS detection over abdominal fetal ECG. Maternal ECG waves were first located using the original detector and afterwards a version with parameters adapted for fetal physiology was applied to detect fetal QRS, excluding signal singularities associated with maternal heartbeats. Single lead (SL) based marks were combined in a single annotator with post-processing rules (SLR) from which fetal RR and fetal heart rate (FHR) measures can be computed. Data from PhysioNet with reference fetal QRS locations was considered for validation, with SLR outperforming SL, including ICA-based detections. The error in estimated FHR using SLR was lower than 20 bpm for more than 80% of the processed files. The median error in 1 min based FHR estimation was 0.13 bpm, with a correlation between reference and estimated FHR of 0.48, which increased to 0.73 when considering only records for which estimated FHR > 110 bpm. This allows us to conclude that the proposed methodology is able to provide a clinically useful estimation of the FHR. PMID:25070210

Almeida, Rute; Gonçalves, Hernâni; Bernardes, João; Rocha, Ana Paula

2014-08-01
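The detect-then-postprocess pipeline can be caricatured in a few lines. A smoothed-derivative filter stands in for the paper's wavelet transform, and the signal is a synthetic spike train, not abdominal ECG; the sampling rate and thresholds are arbitrary choices (NumPy assumed):

```python
import numpy as np

fs = 250                                     # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)
beat_times = np.arange(0.4, 10, 0.75)        # synthetic ~80 bpm rhythm
ecg = sum(np.exp(-((t - b) ** 2) / (2 * 0.01 ** 2)) for b in beat_times)
ecg += 0.05 * np.random.default_rng(2).standard_normal(t.size)

# Smoothed-derivative filter: a crude stand-in for one dyadic scale of the
# wavelet transform, responding strongly to the steep QRS slopes.
kernel = np.diff(np.exp(-np.linspace(-3, 3, 25) ** 2))
energy = np.convolve(ecg, kernel, mode="same") ** 2

# Threshold crossings, collapsed with a 200 ms refractory period.
candidates = np.flatnonzero(energy > 0.2 * energy.max())
beats = [candidates[0]]
for i in candidates[1:]:
    if i - beats[-1] > 0.2 * fs:
        beats.append(i)

rr = np.diff(np.array(beats)) / fs           # RR intervals in seconds
fhr = 60.0 / rr.mean()                       # heart rate in beats per minute
```

The refractory rule plays the role of the paper's post-processing: raw threshold crossings are merged into one mark per beat before RR and FHR are computed.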

11

Real-time wavelet based blur estimation on cell BE platform

NASA Astrophysics Data System (ADS)

We propose a real-time system for blur estimation using wavelet decomposition. The system is based on an emerging multi-core microprocessor architecture (Cell Broadband Engine, Cell BE) known to outperform any available general purpose or DSP processor in the domain of real-time advanced video processing solutions. We start from a recent wavelet domain blur estimation algorithm which uses histograms of a local regularity measure called average cone ratio (ACR). This approach has shown a very good potential for assessing the level of blur in the image yet some important aspects remain to be addressed in order for the method to become a practically working one. Some of these aspects are explored in our work. Furthermore, we develop an efficient real-time implementation of the novelty metric and integrate it into a system that captures live video. The proposed system estimates blur extent and renders the results to the remote user in real-time.

Lukic, Nemanja; Platiša, Ljiljana; Pižurica, Aleksandra; Philips, Wilfried; Temerinac, Miodrag

2010-02-01

12

Estimation of interferogram aberration coefficients using wavelet bases and Zernike polynomials

This paper combines the use of wavelet decompositions and Zernike polynomial approximations to extract aberration coefficients associated with an interferogram. Zernike polynomials are well known to represent aberration components of a wave-front. Polynomial approximation properties on a discrete mesh after an orthogonalization process via Gram-Schmidt decompositions are very useful for straightforward estimation of aberration coefficients. It is shown that decomposition of

Alfredo Elias-Juarez; Noe Razo-Razo; Miguel Torres-Cisneros

2001-01-01

13

Deconvolving kernel density estimators

This paper considers estimation of a continuous bounded probability density when observations from the density are contaminated by additive measurement errors having a known distribution. Properties of the estimator obtained by deconvolving a kernel estimator of the observed data are investigated. When the kernel used is sufficiently smooth the deconvolved estimator is shown to be pointwise consistent and bounds on

Leonard A. Stefanski; Raymond J. Carroll

1990-01-01

14

Minimum complexity density estimation

The authors introduce an index of resolvability that is proved to bound the rate of convergence of minimum complexity density estimators as well as the information-theoretic redundancy of the corresponding total description length. The results on the index of resolvability demonstrate the statistical effectiveness of the minimum description-length principle as a method of inference. The minimum complexity estimator converges to

Andrew R. Barron; Thomas M. Cover

1991-01-01

15

Contingent kernel density estimation.

Kernel density estimation is a widely used method for estimating a distribution based on a sample of points drawn from that distribution. Generally, in practice some form of error contaminates the sample of observed points. Such error can be the result of imprecise measurements or observation bias. Often this error is negligible and may be disregarded in analysis. In cases where the error is non-negligible, estimation methods should be adjusted to reduce resulting bias. Several modifications of kernel density estimation have been developed to address specific forms of errors. One form of error that has not yet been addressed is the case where observations are nominally placed at the centers of areas from which the points are assumed to have been drawn, where these areas are of varying sizes. In this scenario, the bias arises because the size of the error can vary among points and some subset of points can be known to have smaller error than another subset or the form of the error may change among points. This paper proposes a "contingent kernel density estimation" technique to address this form of error. This new technique adjusts the standard kernel on a point-by-point basis in an adaptive response to changing structure and magnitude of error. In this paper, equations for our contingent kernel technique are derived, the technique is validated using numerical simulations, and an example using the geographic locations of social networking users is worked to demonstrate the utility of the method. PMID:22383966

Fortmann-Roe, Scott; Starfield, Richard; Getz, Wayne M

2012-01-01
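The point-by-point kernel adjustment described above can be sketched as a Gaussian mixture in which every sample carries its own bandwidth. The per-point error scales below are hypothetical stand-ins for the paper's area-derived ones (NumPy assumed):

```python
import numpy as np

def adaptive_kde(x_eval, points, bandwidths):
    """Gaussian KDE where each sample has its own bandwidth, as when each
    observation's positional error is known to differ in magnitude."""
    x = np.asarray(x_eval)[:, None]
    mu = np.asarray(points)[None, :]
    h = np.asarray(bandwidths)[None, :]
    k = np.exp(-0.5 * ((x - mu) / h) ** 2) / (h * np.sqrt(2 * np.pi))
    return k.mean(axis=1)          # average of per-point kernels

rng = np.random.default_rng(3)
pts = rng.normal(0, 1, 500)
# Hypothetical per-point error scales, e.g. half-widths of the areas the
# observations were snapped to the centers of.
hs = rng.uniform(0.1, 0.6, 500)

grid = np.linspace(-4, 4, 201)
dens = adaptive_kde(grid, pts, hs)
```

Standard KDE uses a single bandwidth for all points; letting the kernel width follow each point's known error is the essence of the contingent adjustment.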

16

Airborne Crowd Density Estimation

NASA Astrophysics Data System (ADS)

This paper proposes a new method for estimating human crowd densities from aerial imagery. Applications benefiting from an accurate crowd monitoring system are mainly found in the security sector. Crowd density estimation is normally done with in-situ camera systems mounted at high locations, although this is not appropriate for very large crowds of thousands of people. Using airborne camera systems in these scenarios is a new research topic. Our method uses a preliminary filtering of the whole image space by suitable and fast interest point detection resulting in a number of image regions, possibly containing human crowds. Validation of these candidates is done by transforming the corresponding image patches into a low-dimensional and discriminative feature space and classifying the results using a support vector machine (SVM). The feature space is spanned by texture features computed by applying a Gabor filter bank with varying scale and orientation to the image patches. For evaluation, we use 5 different image datasets acquired by the 3K+ aerial camera system of the German Aerospace Center during real mass events like concerts or football games. To evaluate the robustness and generality of our method, these datasets are taken from different flight heights between 800 m and 1500 m above ground (keeping a fixed focal length) and varying daylight and shadow conditions. The results of our crowd density estimation are evaluated against a reference data set obtained by manually labeling tens of thousands of individual persons in the corresponding datasets and show that our method is able to estimate human crowd densities in challenging realistic scenarios.

Meynberg, O.; Kuschk, G.

2013-10-01

17

We consider the problem of fitting a parametric model to time-series data that are afflicted by correlated noise. The noise is represented by a sum of two stationary Gaussian processes: one that is uncorrelated in time, and another that has a power spectral density varying as 1/f^γ. We present an accurate and fast [O(N)] algorithm for parameter estimation based on computing the likelihood in a wavelet basis. The method is illustrated and tested using simulated time-series photometry of exoplanetary transits, with particular attention to estimating the mid-transit time. We compare our method to two other methods that have been used in the literature, the time-averaging method and the residual-permutation method. For noise processes that obey our assumptions, the algorithm presented here gives more accurate results for mid-transit times and truer estimates of their uncertainties.

Carter, Joshua A.; Winn, Joshua N., E-mail: carterja@mit.ed, E-mail: jwinn@mit.ed [Department of Physics and Kavli Institute for Astrophysics and Space Research, Massachusetts Institute of Technology, Cambridge, MA 02139 (United States)

2009-10-10

18

Conditional Density Estimation with Class Probability Estimators

NASA Astrophysics Data System (ADS)

Many regression schemes deliver a point estimate only, but often it is useful or even essential to quantify the uncertainty inherent in a prediction. If a conditional density estimate is available, then prediction intervals can be derived from it. In this paper we compare three techniques for computing conditional density estimates using a class probability estimator, where this estimator is applied to the discretized target variable and used to derive instance weights for an underlying univariate density estimator; this yields a conditional density estimate. The three density estimators we compare are: a histogram estimator that has been used previously in this context, a normal density estimator, and a kernel estimator. In our experiments, the latter two deliver better performance, both in terms of cross-validated log-likelihood and in terms of quality of the resulting prediction intervals. The empirical coverage of the intervals is close to the desired confidence level in most cases. We also include results for point estimation, as well as a comparison to Gaussian process regression and nonparametric quantile estimation.

Frank, Eibe; Bouckaert, Remco R.
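The discretize-classify-reweight recipe can be sketched end to end. A simple binned frequency table stands in for an arbitrary class probability estimator, and the normal-density variant of the weighting is used; the data and bin counts are hypothetical (NumPy assumed):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 20_000
x = rng.uniform(0, 1, n)
y = np.sin(2 * np.pi * x) + 0.2 * rng.standard_normal(n)   # target depends on x

# 1. Discretize the target variable into classes.
n_bins = 20
edges = np.linspace(y.min(), y.max(), n_bins + 1)
y_cls = np.clip(np.digitize(y, edges) - 1, 0, n_bins - 1)

# 2. A very simple class probability estimator: bin x as well and use the
#    empirical class frequencies per x-bin (a stand-in for any classifier).
x_cls = np.clip((x * 25).astype(int), 0, 24)
joint = np.zeros((25, n_bins))
np.add.at(joint, (x_cls, y_cls), 1.0)
p_cls_given_x = joint / joint.sum(axis=1, keepdims=True)

# 3. Class probabilities weight a normal density centred on each bin.
centers = 0.5 * (edges[:-1] + edges[1:])
h = edges[1] - edges[0]                     # smoothing width (a choice)

def cond_density(x0, y_grid):
    w = p_cls_given_x[min(int(x0 * 25), 24)]
    k = np.exp(-0.5 * ((y_grid[:, None] - centers) / h) ** 2) / (h * np.sqrt(2 * np.pi))
    return k @ w

y_grid = np.linspace(-2, 2, 401)
dens = cond_density(0.25, y_grid)           # estimate of p(y | x = 0.25)
```

At x = 0.25 the conditional mass concentrates near sin(pi/2) = 1, and prediction intervals could be read off the estimated density's quantiles.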

19

NASA Technical Reports Server (NTRS)

Wavelets can provide a basis set in which the basis functions are constructed by dilating and translating a fixed function known as the mother wavelet. The mother wavelet can be seen as a high pass filter in the frequency domain. The process of dilating and expanding this high-pass filter can be seen as altering the frequency range that is 'passed' or detected. The process of translation moves this high-pass filter throughout the domain, thereby providing a mechanism to detect the frequencies or scales of information at every location. This is exactly the type of information that is needed for effective grid generation. This paper provides motivation to use wavelets for grid generation in addition to providing the final product: source code for wavelet-based grid generation.

Jameson, Leland

1996-01-01

20

Histogram Estimators of Bivariate Densities.

National Technical Information Service (NTIS)

One-dimensional fixed-interval histogram estimators of univariate probability density functions are less efficient than the analogous variable- interval estimators which are constructed from intervals whose lengths are determined by the criterion of integ...

J. A. Husemann

1986-01-01

21

Wavelet-based functional mixed models

Summary Increasingly, scientific studies yield functional data, in which the ideal units of observation are curves and the observed data consist of sets of curves that are sampled on a fine grid. We present new methodology that generalizes the linear mixed model to the functional mixed model framework, with model fitting done by using a Bayesian wavelet-based approach. This method is flexible, allowing functions of arbitrary form and the full range of fixed effects structures and between-curve covariance structures that are available in the mixed model framework. It yields nonparametric estimates of the fixed and random-effects functions as well as the various between-curve and within-curve covariance matrices. The functional fixed effects are adaptively regularized as a result of the non-linear shrinkage prior that is imposed on the fixed effects’ wavelet coefficients, and the random-effect functions experience a form of adaptive regularization because of the separately estimated variance components for each wavelet coefficient. Because we have posterior samples for all model quantities, we can perform pointwise or joint Bayesian inference or prediction on the quantities of the model. The adaptiveness of the method makes it especially appropriate for modelling irregular functional data that are characterized by numerous local features like peaks.

Morris, Jeffrey S.; Carroll, Raymond J.

2009-01-01

22

Density Estimation with Mercer Kernels

NASA Technical Reports Server (NTRS)

We present a new method for density estimation based on Mercer kernels. The density estimate can be understood as the density induced on a data manifold by a mixture of Gaussians fit in a feature space. As is usual, the feature space and data manifold are defined with any suitable positive-definite kernel function. We modify the standard EM algorithm for mixtures of Gaussians to infer the parameters of the density. One benefit of the approach is its conceptual simplicity and uniform applicability over many different types of data. Preliminary results are presented for a number of simple problems.

Macready, William G.

2003-01-01
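As a reference point, here is the standard EM recursion for a plain two-component 1-D Gaussian mixture, the building block the abstract says is modified to operate in a Mercer-kernel feature space; the data and initialization are hypothetical (NumPy assumed):

```python
import numpy as np

rng = np.random.default_rng(9)
data = np.concatenate([rng.normal(-2, 0.5, 1000), rng.normal(2, 1.0, 1000)])

# Initial guesses for means, standard deviations, and mixing weights.
mu, sig, pi = np.array([-1.0, 1.0]), np.array([1.0, 1.0]), np.array([0.5, 0.5])
for _ in range(100):
    # E-step: responsibilities of each component for each point.
    comp = pi * np.exp(-0.5 * ((data[:, None] - mu) / sig) ** 2) \
              / (sig * np.sqrt(2 * np.pi))
    r = comp / comp.sum(axis=1, keepdims=True)
    # M-step: responsibility-weighted moment updates.
    nk = r.sum(axis=0)
    mu = (r * data[:, None]).sum(axis=0) / nk
    sig = np.sqrt((r * (data[:, None] - mu) ** 2).sum(axis=0) / nk)
    pi = nk / data.size
```

In the kernelized variant the same E/M alternation runs on feature-space representations defined implicitly by the kernel rather than on the raw coordinates.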

23

GOLDEN RATIO-HAAR WAVELET BASED STEGANOGRAPHY

In this paper, we have presented the golden ratio-Haar wavelet based multimedia steganography. The key features of the proposed method are: 1. New Haar wavelet structure based on the Fibonacci sequence and golden ratio. 2. Parametric transform dependency, as decryption key, on the security of the sensitive data. One of the important differences between the existing transform-based

Sos S. Agaian; Okan Caglayan; Juan Pablo Perez; Hakob Sarukhanyan; Jaakko Astola

24

Estimation of coastal density gradients

NASA Astrophysics Data System (ADS)

Density gradients in coastal regions with significant freshwater input are large and variable and are a major control of nearshore circulation. However their measurement is difficult, especially where the gradients are largest close to the coast, with significant uncertainties because of a variety of factors - spatial and time scales are small, tidal currents are strong and water depths shallow. Whilst temperature measurements are relatively straightforward, measurements of salinity (the dominant control of spatial variability) can be less reliable in turbid coastal waters. Liverpool Bay has strong tidal mixing and receives fresh water principally from the Dee, Mersey, Ribble and Conwy estuaries, each with different catchment influences. Horizontal and vertical density gradients are variable both in space and time. The water column stratifies intermittently. A Coastal Observatory has been operational since 2002 with regular (quasi-monthly) CTD surveys on a 9 km grid, an in situ station, an instrumented ferry travelling between Birkenhead and Dublin and a shore-based HF radar system measuring surface currents and waves. These measurements are complementary, each having different space-time characteristics. For coastal gradients the ferry is particularly useful since measurements are made right from the mouth of the Mersey. From measurements at the in situ site alone, density gradients can only be estimated from the tidal excursion. A suite of coupled physical, wave and ecological models is run in association with these measurements. The models, here on a 1.8 km grid, enable detailed estimation of nearshore density gradients, provided appropriate river run-off data are available. Examples are presented of the density gradients estimated from the different measurements and models, together with accuracies and uncertainties, showing that systematic time series measurements within a few kilometres of the coast are a high priority. (Here gliders are an exciting prospect for detailed regular measurements to fill this gap.) The consequences for and sensitivity of circulation estimates are presented using both numerical and analytic models.

Howarth, M. J.; Palmer, M. R.; Polton, J. A.; O'Neill, C. K.

2012-04-01

25

Density estimation in wildlife surveys

Several authors have recently discussed the problems with using index methods to estimate trends in population size. Some have expressed the view that index methods should virtually never be used. Others have responded by defending index methods and questioning whether better alternatives exist. We suggest that index methods are often a cost-effective component of valid wildlife monitoring but that double-sampling or another procedure that corrects for bias or establishes bounds on bias is essential. The common assertion that index methods require constant detection rates for trend estimation is mathematically incorrect; the requirement is no long-term trend in detection "ratios" (index result/parameter of interest), a requirement that is probably approximately met by many well-designed index surveys. We urge that more attention be given to defining bird density rigorously and in ways useful to managers. Once this is done, 4 sources of bias in density estimates may be distinguished: coverage, closure, surplus birds, and detection rates. Distance, double-observer, and removal methods do not reduce bias due to coverage, closure, or surplus birds. These methods may yield unbiased estimates of the number of birds present at the time of the survey, but only if their required assumptions are met, which we doubt occurs very often in practice. Double-sampling, in contrast, produces unbiased density estimates if the plots are randomly selected and estimates on the intensive surveys are unbiased. More work is needed, however, to determine the feasibility of double-sampling in different populations and habitats. We believe the tension that has developed over appropriate survey methods can best be resolved through increased appreciation of the mathematical aspects of indices, especially the effects of bias, and through studies in which candidate methods are evaluated against known numbers determined through intensive surveys.

Bart, J.; Droege, S.; Geissler, P.; Peterjohn, B.; Ralph, C. J.

2004-01-01
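The double-sampling correction argued for above is simple arithmetic: intensive surveys on a subsample of plots estimate the detection ratio, which then corrects the index counts. A sketch with entirely hypothetical numbers (NumPy assumed):

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical survey: 100 plots; true counts are unknown to the index method.
true_counts = rng.poisson(30, 100)
detection = 0.6                                # imperfect, unknown detection rate
index_counts = rng.binomial(true_counts, detection)

# Double-sampling: a random subsample of plots also receives an intensive
# survey that (by assumption) yields the true count on those plots.
sub = rng.choice(100, 15, replace=False)
ratio = index_counts[sub].sum() / true_counts[sub].sum()   # detection ratio

# Correct the index total by the estimated ratio.
corrected_total = index_counts.sum() / ratio
```

The raw index total is biased low by the detection rate; dividing by the estimated ratio removes that bias, provided the subsample is randomly selected.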

26

Wavelet based fingerprint image enhancement

Fingerprint image enhancement is aimed at improving the quality of local features for automatic fingerprint identification. It will allow accurate feature extraction and identification. In this paper, we have considered the use of wavelets for fingerprint enhancement mainly due to their spatial localization property as well as capability to use oriented wavelets such as Gabor wavelets for orientation flow estimation.

Safar Hatami; Reshad Hosseini; Mahmoud Kamarei; Hossein Ahmadi-Noubari

2005-01-01

27

Quantum statistical inference for density estimation.

National Technical Information Service (NTIS)

A new penalized likelihood method for non-parametric density estimation is proposed, which is based on a mathematical analogy to quantum statistical physics. The mathematical procedure for density estimation is related to maximum entropy methods for inver...

R. N. Silver; H. F. Martz; T. Wallstrom

1993-01-01

28

Wavelet-based digital image watermarking.

A wavelet-based watermark casting scheme and a blind watermark retrieval technique are investigated in this research. An adaptive watermark casting method is developed to first determine significant wavelet subbands and then select a couple of significant wavelet coefficients in these subbands to embed watermarks. A blind watermark retrieval technique that can detect the embedded watermark without help from the original image is proposed. Experimental results show that the embedded watermark is robust against various signal processing and compression attacks. PMID:19384400

Wang, H J; Su, P C; Kuo, C C

1998-12-01
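The casting step can be sketched with a one-level 2-D Haar transform: embed bits multiplicatively into the largest coefficients of a detail subband. For brevity the retrieval shown is non-blind, i.e. it compares against the original coefficients, unlike the blind technique of the paper; the image and watermark are hypothetical (NumPy assumed):

```python
import numpy as np

def haar2d(x):
    """One analysis level of an (unnormalized) 2-D Haar transform."""
    lo = (x[0::2, :] + x[1::2, :]) / 2
    hi = (x[0::2, :] - x[1::2, :]) / 2
    ll, lh = (lo[:, 0::2] + lo[:, 1::2]) / 2, (lo[:, 0::2] - lo[:, 1::2]) / 2
    hl, hh = (hi[:, 0::2] + hi[:, 1::2]) / 2, (hi[:, 0::2] - hi[:, 1::2]) / 2
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Exact inverse of haar2d."""
    lo, hi = np.empty((ll.shape[0], 2 * ll.shape[1])), np.empty((ll.shape[0], 2 * ll.shape[1]))
    lo[:, 0::2], lo[:, 1::2] = ll + lh, ll - lh
    hi[:, 0::2], hi[:, 1::2] = hl + hh, hl - hh
    x = np.empty((2 * lo.shape[0], lo.shape[1]))
    x[0::2, :], x[1::2, :] = lo + hi, lo - hi
    return x

rng = np.random.default_rng(6)
img = rng.uniform(0, 255, (64, 64))            # stand-in for a host image
bits = rng.choice([-1.0, 1.0], 50)             # hypothetical watermark bits

ll, lh, hl, hh = haar2d(img)
idx = np.argsort(np.abs(hl).ravel())[-50:]     # 50 most significant HL coeffs
alpha = 0.1                                    # embedding strength
marked_hl = hl.copy().ravel()
marked_hl[idx] *= 1 + alpha * bits             # multiplicative embedding
watermarked = ihaar2d(ll, lh, marked_hl.reshape(hl.shape), hh)

# Non-blind retrieval: compare the re-decomposed coefficients to the originals.
rec_hl = haar2d(watermarked)[2].ravel()
recovered = np.sign(rec_hl[idx] / hl.ravel()[idx] - 1)
```

Embedding in large detail coefficients keeps the perturbation perceptually small while surviving mild processing; blind retrieval replaces the comparison with the original by decision statistics computed from the watermarked image alone.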

29

Density Estimation from an Individual Numerical Sequence

This paper considers estimation of a univariate density from an individual numerical sequence. It is assumed that (i) the limiting relative frequencies of the numerical sequence are governed by an unknown density, and (ii) there is a known upper bound for the variation of the density on an increasing sequence of intervals. A simple estimation scheme is proposed, and

Andrew B. Nobel; Gusztáv Morvai; Sanjeev R. Kulkarni

1998-01-01

30

Theoretical Analysis of Density Ratio Estimation

NASA Astrophysics Data System (ADS)

Density ratio estimation has gathered a great deal of attention recently since it can be used for various data processing tasks. In this paper, we consider three methods of density ratio estimation: (A) the numerator and denominator densities are separately estimated and then the ratio of the estimated densities is computed, (B) a logistic regression classifier discriminating denominator samples from numerator samples is learned and then the ratio of the posterior probabilities is computed, and (C) the density ratio function is directly modeled and learned by minimizing the empirical Kullback-Leibler divergence. We first prove that when the numerator and denominator densities are known to be members of the exponential family, (A) is better than (B) and (B) is better than (C). Then we show that once the model assumption is violated, (C) is better than (A) and (B). Thus in practical situations where no exact model is available, (C) would be the most promising approach to density ratio estimation.

Kanamori, Takafumi; Suzuki, Taiji; Sugiyama, Masashi
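Method (B) above, estimating a density ratio via a probabilistic classifier, is easy to demonstrate: train logistic regression to separate numerator from denominator samples, then read the ratio off the posterior odds. A sketch with two hypothetical Gaussians, for which the true log-ratio is exactly x - 1/2 (NumPy assumed):

```python
import numpy as np

rng = np.random.default_rng(7)
x_num = rng.normal(1.0, 1.0, 2000)     # numerator samples,   p(x) = N(1, 1)
x_den = rng.normal(0.0, 1.0, 2000)     # denominator samples, q(x) = N(0, 1)

# Label numerator samples 1 and denominator samples 0.
x = np.concatenate([x_num, x_den])
y = np.concatenate([np.ones(2000), np.zeros(2000)])
X = np.column_stack([x, np.ones_like(x)])      # linear logit: w[0]*x + w[1]

# Plain gradient descent on the logistic loss.
w = np.zeros(2)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - y) / y.size

def ratio(x0):
    """Estimated p(x0)/q(x0): posterior odds (sample sizes are equal here)."""
    return np.exp(w[0] * x0 + w[1])

print(w)   # ~ [1.0, -0.5], matching the true log-ratio x - 0.5
```

Because the true log-ratio is linear in x here, the linear-logit classifier is well specified; under misspecification the abstract's comparison favors direct ratio fitting (method C) instead.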

31

In this paper, we present comparative analysis of scale-invariant feature extraction using different wavelet bases. The main advantage of the wavelet transform is the multi-resolution analysis. Furthermore, wavelets enable localization in both space and frequency domains and high-frequency salient feature detection. Wavelet transforms can use various basis functions. This research aims at comparative analysis of Daubechies, Haar and Gabor wavelets

Joohyun Lim; Youngouk Kim; Joonki Paik

2009-01-01

32

Wavelet-based enhancement of remote sensing and biomedical image series using an auxiliary image

In this paper, a wavelet-based enhancement method for multicomponent images or image series is proposed. The method applies Bayesian estimation, including the use of a high-resolution noise-free grey scale image as prior information. The resulting estimator statistically exploits the correlation between the image series and the high-resolution noise-free image to enhance (i.e. to improve the signal to noise ratio and

P. Scheunders; S. De Backer

2005-01-01

33

Analytical form for a Bayesian wavelet estimator of images using the Bessel K form densities.

A novel Bayesian nonparametric estimator in the wavelet domain is presented. In this approach, a prior model is imposed on the wavelet coefficients designed to capture the sparseness of the wavelet expansion. Seeking probability models for the marginal densities of the wavelet coefficients, the new family of Bessel K form (BKF) densities is shown to fit very well to the observed histograms. Exploiting this prior, we designed a Bayesian nonlinear denoiser and we derived a closed form for its expression. We then compared it to other priors that have been introduced in the literature, such as the generalized Gaussian density (GGD) or the alpha-stable models, where no analytical form is available for the corresponding Bayesian denoisers. Specifically, the BKF model turns out to be a good compromise between these two extreme cases (hyperbolic tails for the alpha-stable and exponential tails for the GGD). Moreover, we demonstrate a high degree of match between observed and estimated prior densities using the BKF model. Finally, a comparative study is carried out to show the effectiveness of our denoiser, which clearly outperforms the classical shrinkage or thresholding wavelet-based techniques. PMID:15700528

Fadili, Jalal M; Boubchir, Larbi

2005-02-01

34

Investigation of estimators of probability density functions

NASA Technical Reports Server (NTRS)

Four research projects are summarized which include: (1) the generation of random numbers on the IBM 360/44, (2) statistical tests used to check out random number generators, (3) Specht density estimators, and (4) use of estimators of probability density functions in analyzing large amounts of data.

Speed, F. M.

1972-01-01

35

Bayesian Density Estimation and Inference Using Mixtures

We describe and illustrate Bayesian inference in models for density estimation using mixtures of Dirichlet processes. These models provide natural settings for density estimation, and are exemplified by special cases where data are modelled as a sample from mixtures of normal distributions. Efficient simulation methods are used to approximate various prior, posterior and predictive distributions. This allows for direct inference on a variety of

Michael D. Escobar; Mike West

1994-01-01

36

Wavelet-Based SAR Image Despeckling and Information Extraction, Using Particle Filter

This paper proposes a new wavelet-based synthetic aperture radar (SAR) image despeckling algorithm using the sequential Monte Carlo method. A model-based Bayesian approach is proposed. This paper presents two methods for SAR image despeckling. The first method, called WGGPF, models the prior with a Generalized Gaussian (GG) probability density function (pdf), and the second method, called WGMPF, models the prior with a Generalized

Dusan Gleich; Mihai Datcu

2009-01-01

37

Density estimation by maximum quantum entropy.

National Technical Information Service (NTIS)

A new Bayesian method for non-parametric density estimation is proposed, based on a mathematical analogy to quantum statistical physics. The mathematical procedure is related to maximum entropy methods for inverse problems and image reconstruction. The in...

R. N. Silver; T. Wallstrom; H. F. Martz

1993-01-01

38

Wavelet-based approach to character skeleton.

The character skeleton plays a significant role in character recognition. The strokes of a character may consist of two regions, i.e., singular and regular regions. The intersections and junctions of the strokes belong to the singular region, while the straight and smooth parts of the strokes are categorized as the regular region. Therefore, a skeletonization method requires two different processes to treat the skeletons in these two different regions. All traditional skeletonization algorithms are based on the symmetry analysis technique. The major problems of these methods are as follows. 1) The computation of the primary skeleton in the regular region is indirect, so that its implementation is sophisticated and costly. 2) The extracted skeleton cannot be exactly located on the central line of the stroke. 3) The captured skeleton in the singular region may be distorted by artifacts and branches. To overcome these problems, a novel scheme for extracting the skeleton of a character based on the wavelet transform is presented in this paper. This scheme consists of two main steps, namely: a) extraction of the primary skeleton in the regular region and b) amendment processing of the primary skeletons and connection of them in the singular region. A direct technique is used in the first step, where a new wavelet-based symmetry analysis is developed for finding the central line of the stroke directly. A novel method called smooth interpolation is designed in the second step, where a smooth operation is applied to the primary skeleton, and, thereafter, the interpolation compensation technique is proposed to link the primary skeleton, so that the skeleton in the singular region can be produced. Experiments are conducted and positive results are achieved, which show that the proposed skeletonization scheme is applicable not only to binary images but also to gray-level images, and the skeleton is robust against noise and affine transforms. PMID:17491454

You, Xinge; Tang, Yuan Yan

2007-05-01

39

Optimization of k nearest neighbor density estimates

Nonparametric density estimation using the k-nearest-neighbor approach is discussed. By developing a relation between the volume and the coverage of a region, a functional form for the optimum k in terms of the sample size, the dimensionality of the observation space, and the underlying probability distribution is obtained. Within the class of density functions that can be made circularly symmetric by a linear

K. Fukunaga; L. Hostetler

1973-01-01
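The k-nearest-neighbour estimate discussed above takes the form p(x) ≈ k / (n · V_k(x)), where V_k(x) is the volume of the smallest region around x containing the k nearest samples. A minimal 1-D numpy sketch (the choice k = 50 is an illustrative assumption, not the optimum k derived in the paper):

```python
import numpy as np

def knn_density_1d(x, samples, k):
    """k-nearest-neighbour density estimate in one dimension:
    p(x) ~ k / (n * V), where V = 2 * r_k is the length of the smallest
    interval centred at x that contains the k nearest samples."""
    samples = np.asarray(samples, dtype=float)
    n = len(samples)
    dists = np.sort(np.abs(samples - x))
    r_k = dists[k - 1]  # distance to the k-th nearest neighbour
    return k / (n * 2.0 * r_k)

rng = np.random.default_rng(1)
data = rng.normal(0.0, 1.0, 2000)
print(knn_density_1d(0.0, data, k=50))  # true N(0,1) density at 0 is ~0.399
print(knn_density_1d(2.0, data, k=50))  # true N(0,1) density at 2 is ~0.054
```

Unlike a fixed-bandwidth kernel estimator, the interval width adapts to the local sample density, which is why the choice of k plays the role the bandwidth plays elsewhere.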

40

Application of wavelet-based denoising techniques to remote sensing very low frequency signals

NASA Astrophysics Data System (ADS)

In this paper, we apply wavelet-based denoising techniques to experimental remote sensing very low frequency (VLF) signals obtained from the Holographic Array for Ionospheric/Lightning research system and the Elazig VLF receiver system. The wavelet-based denoising techniques are tested with soft, hard, hyperbolic and nonnegative garrote wavelet thresholding, using threshold selection rules based on Stein's unbiased estimate of risk, the fixed form threshold, the mixed threshold selection rule and the minimax-performance threshold selection rule. The aim of this study is to find the direct (early/fast) and indirect (lightning-induced electron precipitation) effects of lightning in noisy VLF transmitter signals without disturbing the nature of the signal. The best results are obtained with the fixed form threshold selection rule and soft thresholding using the Symlet wavelet family.

Güzel, Esat; Canyılmaz, Murat; Türk, Mustafa

2011-04-01
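The fixed form (universal) threshold referred to above is t = σ√(2 ln N), with σ estimated robustly from the detail coefficients. A minimal sketch using a single-level Haar transform and soft thresholding (the Haar basis is a stand-in for the Symlet family used by the authors; the signal, noise level, and seed are illustrative assumptions):

```python
import numpy as np

def haar_dwt(x):
    """One level of the orthonormal Haar wavelet transform."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse of one Haar transform level."""
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2)
    x[1::2] = (approx - detail) / np.sqrt(2)
    return x

def soft_threshold(c, t):
    """Soft thresholding: shrink coefficients toward zero by t."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

# Denoise a noisy ramp with the fixed form (universal) threshold
rng = np.random.default_rng(2)
clean = np.linspace(0.0, 1.0, 256)
noisy = clean + rng.normal(0.0, 0.1, 256)
a, d = haar_dwt(noisy)
sigma = np.median(np.abs(d)) / 0.6745        # robust noise-level estimate
t = sigma * np.sqrt(2 * np.log(len(noisy)))  # universal threshold
denoised = haar_idwt(a, soft_threshold(d, t))
print(np.mean((noisy - clean) ** 2), np.mean((denoised - clean) ** 2))
```

Hard thresholding would zero coefficients below t but leave the survivors untouched; the hyperbolic and nonnegative garrote rules interpolate between the two behaviours.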

41

ESTIMATES OF BIOMASS DENSITY FOR TROPICAL FORESTS

An accurate estimation of the biomass density in forests is a necessary step in understanding the global carbon cycle and the production of other atmospheric trace gases from biomass burning. In this paper the authors summarize the various approaches that have been developed for estimating...

42

Wavelet-based adaptive denoising and baseline correction for MALDI TOF MS.

Proteomic profiling by MALDI TOF mass spectrometry (MS) is an effective method for identifying biomarkers from human serum/plasma, but the process is complicated by the presence of noise in the spectra. In MALDI TOF MS, the major noise source is chemical noise, which is defined as the interference from matrix material and its clusters. Because chemical noise is nonstationary and nonwhite, wavelet-based denoising is more effective than conventional noise reduction schemes based on Fourier analysis. However, current wavelet-based denoising methods for mass spectrometry do not fully consider the characteristics of chemical noise. In this article, we propose new wavelet-based high-frequency noise reduction and baseline correction methods that were designed based on the discrete stationary wavelet transform. The high-frequency noise reduction algorithm adaptively estimates the time-varying threshold for each frequency subband from multiple realizations of chemical noise and removes noise from mass spectra of samples using the estimated thresholds. The baseline correction algorithm computes the monotonically decreasing baseline in the highest approximation of the wavelet domain. The experimental results demonstrate that our algorithms effectively remove artifacts in mass spectra that are due to chemical noise while preserving informative features as compared to commonly used denoising methods. PMID:20455751

Shin, Hyunjin; Sampat, Mehul P; Koomen, John M; Markey, Mia K

2010-06-01

43

A wavelet based investigation of long memory in stock returns

NASA Astrophysics Data System (ADS)

Using a wavelet-based maximum likelihood fractional integration estimator, we test long memory (return predictability) in the returns at the market, industry and firm level. In an analysis of emerging market daily returns over the full sample period, we find that long memory is not present in the market returns, while in approximately twenty percent of 175 stocks there is evidence of long memory. The absence of long memory in the market returns may be a consequence of contemporaneous aggregation of stock returns. However, when the analysis is carried out with rolling windows, evidence of long memory is observed in certain time frames. These results are largely consistent with those of detrended fluctuation analysis. A test of firm-level information in explaining stock return predictability using a logistic regression model reveals that returns of large firms are more likely to possess the long memory feature than the returns of small firms. There is no evidence to suggest that turnover, earnings per share, book-to-market ratio, systematic risk or abnormal return with respect to the market model is associated with return predictability. However, the degree of long-range dependence appears to be associated positively with earnings per share, systematic risk and abnormal return, and negatively with book-to-market ratio.

Tan, Pei P.; Galagedera, Don U. A.; Maharaj, Elizabeth A.

2012-04-01

44

Quantum statistical inference for density estimation

A new penalized likelihood method for non-parametric density estimation is proposed, which is based on a mathematical analogy to quantum statistical physics. The mathematical procedure for density estimation is related to maximum entropy methods for inverse problems; the penalty function is a convex information divergence enforcing global smoothing toward default models, positivity, extensivity and normalization. The novel feature is the replacement of classical entropy by quantum entropy, so that local smoothing may be enforced by constraints on the expectation values of differential operators. Although the hyperparameters, covariance, and linear response to perturbations can be estimated by a variety of statistical methods, we develop the Bayesian interpretation. The linear response of the MAP estimate is proportional to the covariance. The hyperparameters are estimated by type-II maximum likelihood. The method is demonstrated on standard data sets.

Silver, R.N.; Martz, H.F.; Wallstrom, T.

1993-11-01

45

Density estimation by maximum quantum entropy

A new Bayesian method for non-parametric density estimation is proposed, based on a mathematical analogy to quantum statistical physics. The mathematical procedure is related to maximum entropy methods for inverse problems and image reconstruction. The information divergence enforces global smoothing toward default models, convexity, positivity, extensivity and normalization. The novel feature is the replacement of classical entropy by quantum entropy, so that local smoothing is enforced by constraints on differential operators. The linear response of the estimate is proportional to the covariance. The hyperparameters are estimated by type-II maximum likelihood (evidence). The method is demonstrated on textbook data sets.

Silver, R.N.; Wallstrom, T.; Martz, H.F.

1993-11-01

46

Sampling, Density Estimation and Spatial Relationships

NSDL National Science Digital Library

This resource serves as a tool for instructing a laboratory exercise in ecology. Students obtain hands-on experience using techniques such as mark-recapture and density estimation, and organisms such as zooplankton and fathead minnows. This exercise is suitable for general ecology and introductory biology courses.

Maggie Haag (University of Alberta); William M. Tonn

1998-01-01

47

A regression point of view toward density estimation

For nonparametric density estimation, we apply the idea of the local linear smoother of Fan (1993) to construct a new kernel density estimator. The asymptotic variance and the asymptotic bias of this density estimator are given. From these, this density estimator does not suffer from boundary effects in the case that boundary points of the support of the underlying density

C. Z. Wei; C. K. Chu

1994-01-01

48

DENSITY ESTIMATION FOR PROJECTED EXOPLANET QUANTITIES

Exoplanet searches using radial velocity (RV) and microlensing (ML) produce samples of 'projected' mass and orbital radius, respectively. We present a new method for estimating the probability density distribution (density) of the unprojected quantity from such samples. For a sample of n data values, the method involves solving n simultaneous linear equations to determine the weights of delta functions for the raw, unsmoothed density of the unprojected quantity that cause the associated cumulative distribution function (CDF) of the projected quantity to exactly reproduce the empirical CDF of the sample at the locations of the n data values. We smooth the raw density using nonparametric kernel density estimation with a normal kernel of bandwidth σ. We calibrate the dependence of σ on n by Monte Carlo experiments performed on samples drawn from a theoretical density, in which the integrated square error is minimized. We scale this calibration to the ranges of real RV samples using the Normal Reference Rule. The resolution and amplitude accuracy of the estimated density improve with n. For typical RV and ML samples, we expect the fractional noise at the PDF peak to be approximately 80 n^(-log 2). For illustrations, we apply the new method to 67 RV values given a similar treatment by Jorissen et al. in 2001, and to the 308 RV values listed at exoplanets.org on 2010 October 20. In addition to analyzing observational results, our methods can be used to develop measurement requirements, particularly on the minimum sample size n, for future programs, such as the microlensing survey of Earth-like exoplanets recommended by the Astro 2010 committee.

Brown, Robert A., E-mail: rbrown@stsci.edu [Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218 (United States)

2011-05-20
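The final smoothing step described above, convolving a raw density of weighted delta functions with a normal kernel whose bandwidth follows the Normal Reference Rule, can be sketched as follows (equal delta weights are an illustrative assumption; the paper determines them by solving the n linear equations):

```python
import numpy as np

def normal_reference_bandwidth(samples):
    """Normal Reference Rule bandwidth: h = 1.06 * std * n**(-1/5)."""
    samples = np.asarray(samples, dtype=float)
    return 1.06 * samples.std(ddof=1) * len(samples) ** (-0.2)

def smooth_delta_weights(locations, weights, grid, h):
    """Smooth a raw density given as weighted delta functions with a
    normal kernel of bandwidth h, evaluated on a grid."""
    z = (grid[:, None] - locations[None, :]) / h
    kernels = np.exp(-0.5 * z**2) / (h * np.sqrt(2 * np.pi))
    return kernels @ weights

rng = np.random.default_rng(3)
locs = rng.normal(0.0, 1.0, 100)     # delta-function locations
w = np.full(100, 1.0 / 100)          # equal weights summing to 1 (assumption)
grid = np.linspace(-4.0, 4.0, 201)
h = normal_reference_bandwidth(locs)
pdf = smooth_delta_weights(locs, w, grid, h)
dx = grid[1] - grid[0]
print(pdf.sum() * dx)                # total probability mass, close to 1
```

With unequal weights from the linear system, the same smoothing call applies unchanged; only `w` differs.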

49

Density Estimation for Projected Exoplanet Quantities

NASA Astrophysics Data System (ADS)

Exoplanet searches using radial velocity (RV) and microlensing (ML) produce samples of "projected" mass and orbital radius, respectively. We present a new method for estimating the probability density distribution (density) of the unprojected quantity from such samples. For a sample of n data values, the method involves solving n simultaneous linear equations to determine the weights of delta functions for the raw, unsmoothed density of the unprojected quantity that cause the associated cumulative distribution function (CDF) of the projected quantity to exactly reproduce the empirical CDF of the sample at the locations of the n data values. We smooth the raw density using nonparametric kernel density estimation with a normal kernel of bandwidth σ. We calibrate the dependence of σ on n by Monte Carlo experiments performed on samples drawn from a theoretical density, in which the integrated square error is minimized. We scale this calibration to the ranges of real RV samples using the Normal Reference Rule. The resolution and amplitude accuracy of the estimated density improve with n. For typical RV and ML samples, we expect the fractional noise at the PDF peak to be approximately 80 n^(-log 2). For illustrations, we apply the new method to 67 RV values given a similar treatment by Jorissen et al. in 2001, and to the 308 RV values listed at exoplanets.org on 2010 October 20. In addition to analyzing observational results, our methods can be used to develop measurement requirements, particularly on the minimum sample size n, for future programs, such as the microlensing survey of Earth-like exoplanets recommended by the Astro 2010 committee.

Brown, Robert A.

2011-05-01

50

Stochastic model for estimation of environmental density

The environment density has been defined as the value of a habitat expressing its unfavorableness for settling of an individual which has a strong anti-social tendency to other individuals in an environment. Morisita studied anti-social behavior of ant-lions (Glemuroides japanicus) and provided a recurrence relation without an explicit solution for the probability distribution of individuals settling in each of two habitats in terms of the environmental densities and the numbers of individuals introduced. In this paper the recurrence relation is explicitly solved; certain interesting properties of the distribution are discussed including the estimation of the parameters. 4 references, 1 table.

Janardan, K.G.; Uppuluri, V.R.R.

1984-01-01

51

Wavelet-based detection of clods on a soil surface

NASA Astrophysics Data System (ADS)

One of the aims of the tillage operation is to produce a specific range of clod sizes, suitable for plant emergence. Due to its cloddy structure, a tilled soil surface has its own roughness, which is also connected with soil water content and erosion phenomena. The comprehension and modeling of surface runoff and erosion require that the micro-topography of the soil surface is well estimated. Therefore, the present paper focuses on soil surface analysis and characterization. An original method consisting in detecting the individual clods or large aggregates on a 3D digital elevation model (DEM) of the soil surface is introduced. A multiresolution decomposition of the surface is performed by wavelet transform. Then a supervised local maxima extraction is performed on the different sub-surfaces, and a final process validates the extractions and merges the different scales. The method of detection was evaluated with the help of a soil scientist on a controlled surface made in the laboratory as well as on real seedbed and ploughed surfaces, made by tillage operations in an agricultural field. The identifications of the clods are in good agreement, with an overall sensitivity of 84% and a specificity of 94%. The false positive or false negative detections may have several causes. Some very nearby clods may have been smoothed together in the approximation process. Other clods may be embedded into another piece of the surface relief, such as another bigger clod or a part of the furrow. Finally, the low levels of decomposition are dependent on the resolution and the measurement noise of the DEM. Therefore, some borders of clods may be difficult to determine. The wavelet-based detection method seems to be suitable for soil surfaces described by 2 or 3 levels of approximation, such as seedbeds.

Vannier, E.; Ciarletti, V.; Darboux, F.

2009-11-01

52

A novel wavelet based ICA technique using Kurtosis

We consider the problem of blind audio source separation. A method to solve this problem is blind source separation (BSS) using independent component analysis (ICA). ICA exploits the non-Gaussianity of the sources in the mixtures. In this paper we propose a new wavelet based ICA method using kurtosis for blind audio source separation. In this method, the observations are transformed into

M. R. Mirarab; H. Dehghani; A. Pourmohammad

2010-01-01
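Kurtosis, the non-Gaussianity measure this ICA method relies on, is simple to compute: excess kurtosis is zero for a Gaussian and positive for super-Gaussian (peaky) sources such as speech. A minimal sketch (the sample sizes and the Laplace example are illustrative assumptions, not the authors' data):

```python
import numpy as np

def excess_kurtosis(x):
    """Sample excess kurtosis: fourth central moment over squared
    variance, minus 3. Zero for a Gaussian; ICA methods of this kind
    seek unmixing directions that maximise its magnitude."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    return np.mean(x**4) / np.mean(x**2) ** 2 - 3.0

rng = np.random.default_rng(4)
gauss = rng.normal(size=100_000)
laplace = rng.laplace(size=100_000)  # super-Gaussian; true excess kurtosis is 3
print(excess_kurtosis(gauss), excess_kurtosis(laplace))
```

Applying the transform in the wavelet domain, as the paper proposes, tends to sparsify audio sources and thereby sharpens this contrast function.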

53

Wavelet-based neural network for power disturbance classification

In this paper a wavelet-based neural network classifier for recognizing power quality disturbances is implemented and tested under various transient events. The discrete wavelet transform (DWT) technique is integrated with the probabilistic neural network (PNN) model to construct the classifier. First, the multi-resolution analysis (MRA) technique of DWT and Parseval's theorem are employed to extract the energy distribution features of

Zwe-Lee Gaing; Hou-Sheng Huang

2003-01-01

54

Wavelet-based lossless compression scheme with progressive transmission capability

Lossless image compression with progressive transmission capabilities plays a key role in measurement applications, requiring quantitative analysis and involving large sets of images. This work proposes a wavelet-based compression scheme that is able to operate in the lossless mode. The quantization module implements a new technique for the coding of the wavelet coefficients that is more effective

Adrian Munteanu; Jan Cornelis; Geert Van der Auwera; Paul Cristea

1999-01-01

55

Wavelet Based Lossless Compression of Coronary Angiography Images

The final diagnosis in coronary angiography has to be performed on a large set of original images. Therefore, lossless compression schemes play a key role in medical database management and telediagnosis applications. This paper proposes a wavelet-based compression scheme that is able to operate in the lossless mode. The quantization module implements a new way of coding of the wavelet

Adrian Munteanu; Jan Cornelis; Paul Cristea

1999-01-01

56

Wavelet-based analysis of blood pressure dynamics in rats

NASA Astrophysics Data System (ADS)

Using a wavelet-based approach, we study stress-induced reactions in the blood pressure dynamics in rats. Further, we consider how the level of the nitric oxide (NO) influences the heart rate variability. Clear distinctions for male and female rats are reported.

Pavlov, A. N.; Anisimov, A. A.; Semyachkina-Glushkovskaya, O. V.; Berdnikova, V. A.; Kuznecova, A. S.; Matasova, E. G.

2009-02-01

57

MAMMOGRAPHIC MASS CLASSIFICATION USING WAVELET BASED SUPPORT VECTOR MACHINE

In this paper, we investigate an approach for the classification of mammographic masses as benign or malignant. This study relies on a combination of Support Vector Machine (SVM) and wavelet-based subband image decomposition. Decision making was performed in two stages: feature extraction by computing the wavelet coefficients, and classification using the classifier trained on the extracted features. SVM, a learning

Pelin GORGEL; Ahmet SERTBA; Niyazi KILIC; Osman N. UCAN; Onur OSMAN

58

Bird population density estimated from acoustic signals

1. Many animal species are detected primarily by sound. Although songs, calls and other sounds are often used for population assessment, as in bird point counts and hydrophone surveys of cetaceans, there are few rigorous methods for estimating population density from acoustic data. 2. The problem has several parts: distinguishing individuals, adjusting for individuals that are missed, and adjusting for the area sampled. Spatially explicit capture-recapture (SECR) is a statistical methodology that addresses jointly the second and third parts of the problem. We have extended SECR to use uncalibrated information from acoustic signals on the distance to each source. 3. We applied this extension of SECR to data from an acoustic survey of ovenbird Seiurus aurocapilla density in an eastern US deciduous forest with multiple four-microphone arrays. We modelled average power from spectrograms of ovenbird songs measured within a window of 0.7 s duration and frequencies between 4200 and 5200 Hz. 4. The resulting estimates of the density of singing males (0.19 ha^-1, SE 0.03 ha^-1) were consistent with estimates of the adult male population density from mist-netting (0.36 ha^-1, SE 0.12 ha^-1). The fitted model predicts sound attenuation of 0.11 dB m^-1 (SE 0.01 dB m^-1) in excess of losses from spherical spreading. 5. Synthesis and applications. Our method for estimating animal population density from acoustic signals fills a gap in the census methods available for visually cryptic but vocal taxa, including many species of bird and cetacean. The necessary equipment is simple and readily available; as few as two microphones may provide adequate estimates, given spatial replication. The method requires that individuals detected at the same place are acoustically distinguishable and all individuals vocalize during the recording interval, or that the per capita rate of vocalization is known.
We believe these requirements can be met, with suitable field methods, for a significant number of songbird species. © 2009 British Ecological Society.

Dawson, D. K.; Efford, M. G.

2009-01-01

59

Improved Astronomical Inferences via Nonparametric Density Estimation

NASA Astrophysics Data System (ADS)

Nonparametric and semiparametric approaches to density estimation can yield scientific insights unavailable when restrictive assumptions are made regarding the form of the distribution. Further, when a well-chosen dimension reduction technique is utilized, the distribution of high-dimensional data (e.g., spectra, images) can be characterized via a nonparametric approach. The hope is that these procedures will preserve a large amount of the rich information in these data. Ideas will be illustrated via a semiparametric approach to estimating luminosity functions (Schafer, 2007) and recent work on characterizing the evolution of the distribution of galaxy morphology. This is joint work with Peter Freeman, Susan Buchman, and Ann Lee. Work is supported by NASA AISR Grant.

Schafer, Chad

2010-01-01

60

A comparative evaluation of wavelet-based methods for hypothesis testing of brain activation maps.

Wavelet-based methods for hypothesis testing are described and their potential for activation mapping of human functional magnetic resonance imaging (fMRI) data is investigated. In this approach, we emphasise convergence between methods of wavelet thresholding or shrinkage and the problem of hypothesis testing in both classical and Bayesian contexts. Specifically, our interest will be focused on the trade-off between type I probability error control and power dissipation, estimated by the area under the ROC curve. We describe a technique for controlling the false discovery rate at an arbitrary level of error in testing multiple wavelet coefficients generated by a 2D discrete wavelet transform (DWT) of spatial maps of fMRI time series statistics. We also describe and apply change-point detection with recursive hypothesis testing methods that can be used to define a threshold unique to each level and orientation of the 2D-DWT, and Bayesian methods, incorporating a formal model for the anticipated sparseness of wavelet coefficients representing the signal or true image. The sensitivity and type I error control of these algorithms are comparatively evaluated by analysis of "null" images (acquired with the subject at rest) and an experimental data set acquired from five normal volunteers during an event-related finger movement task. We show that all three wavelet-based algorithms have good type I error control (the FDR method being most conservative) and generate plausible brain activation maps (the Bayesian method being most powerful). We also generalise the formal connection between wavelet-based methods for simultaneous multiresolution denoising/hypothesis testing and methods based on monoresolution Gaussian smoothing followed by statistical testing of brain activation maps. PMID:15528111

Fadili, M J; Bullmore, E T

2004-11-01

61

A Wavelet-Based Data Compression Technique for Smart Grid

This paper proposes a wavelet-based data compression approach for the smart grid (SG). In particular, wavelet transform (WT)-based multiresolution analysis (MRA), as well as its properties, is studied for its data compression and denoising capabilities for power system signals in SG. Selection of the order 2 Daubechies wavelet and scale 5 as the best wavelet function and the optimal decomposition scale, respectively,

Jiaxin Ning; Jianhui Wang; Wenzhong Gao; Cong Liu

2011-01-01

62

Fast wavelet based algorithms for linear evolution equations

NASA Technical Reports Server (NTRS)

A class was devised of fast wavelet based algorithms for linear evolution equations whose coefficients are time independent. The method draws on the work of Beylkin, Coifman, and Rokhlin which they applied to general Calderon-Zygmund type integral operators. A modification of their idea is applied to linear hyperbolic and parabolic equations, with spatially varying coefficients. A significant speedup over standard methods is obtained when applied to hyperbolic equations in one space dimension and parabolic equations in multidimensions.

Engquist, Bjorn; Osher, Stanley; Zhong, Sifen

1992-01-01

63

Wavelet-based statistical signal processing using hidden Markov models

Wavelet-based statistical signal processing techniques such as denoising and detection typically model the wavelet coefficients as independent or jointly Gaussian. These models are unrealistic for many real-world signals. We develop a new framework for statistical signal processing based on wavelet-domain hidden Markov models (HMMs) that concisely models the statistical dependencies and non-Gaussian statistics encountered in real-world signals. Wavelet-domain HMMs are

Matthew S. Crouse; Robert D. Nowak; Richard G. Baraniuk

1998-01-01

64

ESTIMATING MICROORGANISM DENSITIES IN AEROSOLS FROM SPRAY IRRIGATION OF WASTEWATER

This document summarizes current knowledge about estimating the density of microorganisms in the air near wastewater management facilities, with emphasis on spray irrigation sites. One technique for modeling microorganism density in air is provided and an aerosol density estimati...

65

Nonparametric regression and density estimation in Besov spaces via wavelets

For density estimation and nonparametric regression, block thresholding is very adaptive and efficient over a variety of general function spaces. By using block thresholding on kernel density estimators, the optimal minimax rates of convergence of the estimator to the true distribution are attained. This rate holds for large classes of densities residing in Besov spaces, including discontinuous functions with the

Eric Karl Chicken

2001-01-01

66

ON ADAPTIVE ESTIMATION FOR LOCALLY STATIONARY WAVELET PROCESSES AND ITS APPLICATIONS

The class of locally stationary wavelet processes is a wavelet-based model for covariance-nonstationary zero mean time series. This paper presents an algorithm for the pointwise adaptive estimation of their time-varying spectral density. The performance of the procedure is evaluated on simulated and real time series. Two applications of the procedure are also presented and evaluated on real data.

Rainer von Sachs

67

Trabecular bone structure and bone density contribute to the strength of bone and are important in the study of osteoporosis. Wavelets are a powerful tool to characterize and quantify texture in an image. In this study the thickness of trabecular bone was analyzed in 8 cylindrical cores of the vertebral spine. Images were obtained from 3 Tesla (T) magnetic resonance imaging (MRI) and micro-computed tomography (µCT). Results from the wavelet-based analysis of trabecular bone were compared with standard two-dimensional structural parameters (analogous to bone histomorphometry) obtained using mean intercept length (MR images) and direct 3D distance transformation methods (µCT images). Additionally, the bone volume fraction was determined from MR images. We conclude that the wavelet-based analysis delivers comparable results to the established MR histomorphometric measurements. The average deviation in trabecular thickness was less than one pixel size between the wavelet and the standard approach for both MR and µCT analysis. Since the wavelet-based method is less sensitive to image noise, we see an advantage of wavelet analysis of trabecular bone for MR imaging when going to higher resolution.

Krug, R; Carballido-Gamio, J; Burghardt, A; Haase, S; Sedat, J W; Moss, W C; Majumdar, S

2005-04-11

68

Adaptive wavelet-based recognition of oscillatory patterns on electroencephalograms

NASA Astrophysics Data System (ADS)

The problem of automatic recognition of specific oscillatory patterns on electroencephalograms (EEG) is addressed using the continuous wavelet-transform (CWT). A possibility of improving the quality of recognition by optimizing the choice of CWT parameters is discussed. An adaptive approach is proposed to identify sleep spindles (SS) and spike wave discharges (SWD) that assumes automatic selection of CWT-parameters reflecting the most informative features of the analyzed time-frequency structures. Advantages of the proposed technique over the standard wavelet-based approaches are considered.

Nazimov, Alexey I.; Pavlov, Alexey N.; Hramov, Alexander E.; Grubov, Vadim V.; Koronovskii, Alexey A.; Sitnikova, Evgenija Y.

2013-02-01

69

Wavelet based hierarchical coding scheme for radar image compression

NASA Astrophysics Data System (ADS)

This paper presents a wavelet-based hierarchical coding scheme for radar image compression. The radar signal is first quantized to a digital signal and reorganized as a raster-scanned image according to the radar's repetition frequency. After reorganization, the reformed image is decomposed into blocks of different frequency bands by a 2-D wavelet transform, and each block is quantized and coded with a Huffman coding scheme. A demonstration system is developed, showing that under real-time processing requirements the compression ratio can be very high, with no significant loss of target signal in the restored radar image.
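The pipeline described (2-D wavelet decomposition, per-band quantization, then entropy coding) can be sketched with a one-level 2-D Haar decomposition in Python/numpy. The quantization step size and the smooth test image are illustrative assumptions, and the Huffman stage itself is omitted; the fraction of zero-valued quantized detail coefficients shows where entropy coding gets its leverage.

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar decomposition (averaging form) into
    LL, LH, HL, HH sub-bands, each half the size per dimension."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def quantize(band, step):
    """Uniform scalar quantization to integer indices."""
    return np.round(band / step)

# Smooth synthetic "image": most detail-band coefficients quantize to
# zero, which is what a Huffman coder would exploit.
x = np.linspace(0.0, 1.0, 64)
img = np.outer(x, x)
ll, lh, hl, hh = haar2d(img)
q = np.concatenate([quantize(b, 0.05).ravel() for b in (lh, hl, hh)])
zero_fraction = np.mean(q == 0)
```

For a real codec the decomposition would be applied recursively to the LL band, with a separate quantization step and Huffman table per sub-band.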

Sheng, Wen; Jiao, Xiaoli; He, Jifeng

2007-11-01

70

EEG analysis using wavelet-based information tools.

Wavelet-based informational tools for quantitative electroencephalogram (EEG) record analysis are reviewed. Relative wavelet energies, wavelet entropies and wavelet statistical complexities are used in the characterization of scalp EEG records corresponding to secondary generalized tonic-clonic epileptic seizures. In particular, we show that the epileptic recruitment rhythm observed during seizure development is well described in terms of the relative wavelet energies. In addition, during the concomitant time period the entropy diminishes while complexity grows. This is construed as evidence supporting the conjecture that an epileptic focus, for this kind of seizure, triggers a self-organized brain state characterized by both order and maximal complexity. PMID:16675027
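As a rough illustration of the quantifiers reviewed above, the following sketch computes relative wavelet energies per decomposition level and the resulting Shannon wavelet entropy. It assumes a Haar wavelet and synthetic signals, not the paper's wavelet or EEG data.

```python
import numpy as np

def haar_dwt_levels(x, levels):
    """Multilevel orthonormal Haar DWT: detail coefficients per level
    plus the final approximation."""
    details = []
    approx = np.asarray(x, dtype=float)
    for _ in range(levels):
        even, odd = approx[0::2], approx[1::2]
        details.append((even - odd) / np.sqrt(2.0))   # detail band
        approx = (even + odd) / np.sqrt(2.0)          # approximation
    return details, approx

def wavelet_entropy(x, levels=4):
    """Relative wavelet energies across bands and the Shannon wavelet
    entropy S = -sum p_j log p_j."""
    details, approx = haar_dwt_levels(x, levels)
    energies = np.array([np.sum(d ** 2) for d in details] + [np.sum(approx ** 2)])
    p = energies / energies.sum()
    entropy = -np.sum(p * np.log(p + 1e-12))
    return p, entropy

# A pure tone concentrates energy in few bands (low entropy), while
# white noise spreads energy across bands (higher entropy).
rng = np.random.default_rng(0)
t = np.arange(1024)
p_tone, s_tone = wavelet_entropy(np.sin(2 * np.pi * t / 64))
p_noise, s_noise = wavelet_entropy(rng.standard_normal(1024))
```

In the EEG setting described above, a drop in this entropy during a seizure reflects the energy concentrating in the recruitment-rhythm band.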

Rosso, O A; Martin, M T; Figliola, A; Keller, K; Plastino, A

2006-06-15

71

Mammographic Density Estimation with Automated Volumetric Breast Density Measurement

Objective To compare automated volumetric breast density measurement (VBDM) with radiologists' evaluations based on the Breast Imaging Reporting and Data System (BI-RADS), and to identify the factors associated with technical failure of VBDM. Materials and Methods In this study, 1129 women aged 19-82 years who underwent mammography from December 2011 to January 2012 were included. Breast density evaluations by radiologists based on BI-RADS and by VBDM (Volpara Version 1.5.1) were compared. The agreement in interpreting breast density between radiologists and VBDM was determined based on four density grades (D1, D2, D3, and D4) and a binary classification of fatty (D1-2) vs. dense (D3-4) breast using kappa statistics. The association between technical failure of VBDM and patient age, total breast volume, fibroglandular tissue volume, history of partial mastectomy, the frequency of mass > 3 cm, and breast density was analyzed. Results The agreement between breast density evaluations by radiologists and VBDM was fair (k value = 0.26) when the four density grades (D1/D2/D3/D4) were used and moderate (k value = 0.47) for the binary classification (D1-2/D3-4). Twenty-seven women (2.4%) showed failure of VBDM. Small total breast volume, history of partial mastectomy, and high breast density were significantly associated with technical failure of VBDM (p = 0.001 to 0.015). Conclusion There is fair or moderate agreement in breast density evaluation between radiologists and VBDM. Technical failure of VBDM may be related to small total breast volume, a history of partial mastectomy, and high breast density.

Ko, Su Yeon; Kim, Eun-Kyung; Kim, Min Jung

2014-01-01

72

Density estimation and random variate generation using multilayer networks

In this paper we consider two important topics: density estimation and random variate generation. We present a framework that is easily implemented using the familiar multilayer neural network. First, we develop two new methods for density estimation, a stochastic method and a related deterministic method. Both methods are based on approximating the distribution function, the density being obtained by differentiation.
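The distribution-function route to density estimation can be sketched without a neural network: below, an average of logistic sigmoids stands in for the paper's multilayer-network CDF approximator, and the density follows by differentiating the smooth CDF analytically. The function names and the bandwidth h are illustrative assumptions.

```python
import numpy as np

def smooth_cdf(x_eval, data, h=0.3):
    """Smooth CDF estimate: an average of logistic sigmoids centred on
    the data points (standing in for the network approximator)."""
    z = (x_eval[:, None] - data[None, :]) / h
    return np.mean(1.0 / (1.0 + np.exp(-z)), axis=1)

def density_from_cdf(x_eval, data, h=0.3):
    """Density obtained by differentiating the smooth CDF analytically:
    d/dx sigmoid(z) = sigmoid(z) * (1 - sigmoid(z)) / h."""
    z = (x_eval[:, None] - data[None, :]) / h
    s = 1.0 / (1.0 + np.exp(-z))
    return np.mean(s * (1.0 - s), axis=1) / h

rng = np.random.default_rng(1)
data = rng.standard_normal(2000)
xs = np.linspace(-4.0, 4.0, 81)
F = smooth_cdf(xs, data)            # monotone CDF estimate
f = density_from_cdf(xs, data)      # its derivative, a proper density
```

Because the density is an exact derivative of a monotone function, it is automatically non-negative and integrates to one, which is the appeal of the CDF-first approach.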

Malik Magdon-Ismail; Amir Atiya

2002-01-01

73

Remarks on Some Nonparametric Estimates of a Density Function

This note discusses some aspects of the estimation of the density function of a univariate probability distribution. All estimates of the density function satisfying relatively mild conditions are shown to be biased. The asymptotic mean square error of a particular class of estimates is evaluated.

Murray Rosenblatt

1956-01-01

74

EQUIVALENT CONDITIONS FOR THE CONSISTENCY OF NONPARAMETRIC SPLINE DENSITY ESTIMATORS

We study nonparametric spline estimators of a probability density. The equivalence between weak convergence for L1-consistency of one density and complete convergence for L1-consistency of all densities is proved. It is also equivalent to suitable rates of convergence of the window parameter.

GRZEGORZ KRZYKOWSKI

1992-01-01

75

NASA Astrophysics Data System (ADS)

Traditional wavelet-based speech enhancement algorithms are ineffective in the presence of highly non-stationary noise because of the difficulty of accurately estimating the local noise spectrum. In this paper, a simple method of noise estimation employing a voice activity detector (VAD) is proposed. The output of a wavelet-based speech enhancement algorithm in the presence of random noise bursts can be improved according to the VAD decision. The noisy speech is first preprocessed using bark-scale wavelet packet decomposition (BSWPD) to convert the noisy signal into wavelet coefficients (WCs). The VAD based on a bark-scale spectral entropy parameter, called BS-Entropy, is found to be superior to energy-based approaches, especially under variable noise levels. The wavelet coefficient threshold (WCT) of each subband is then temporally adjusted according to the VAD result. In a speech-dominated frame, the speech is categorized as either voiced or unvoiced. A voiced frame possesses a strong tone-like spectrum in the lower subbands, so the WCs of the lower bands must be preserved. Conversely, the WCT in the lower bands tends to increase if the speech is categorized as unvoiced. In a noise-dominated frame, the background noise can be almost completely removed by increasing the WCT. Objective and subjective experimental results are used to evaluate the proposed system. The experiments show that the algorithm is effective under various noise conditions, especially colored and non-stationary noise.

Wang, Kun-Ching

76

Wavelet based free-form deformations for nonrigid registration

NASA Astrophysics Data System (ADS)

In nonrigid registration, deformations may take place on the coarse and fine scales. For the conventional B-splines based free-form deformation (FFD) registration, these coarse- and fine-scale deformations are all represented by basis functions of a single scale. Meanwhile, wavelets have been proposed as a signal representation suitable for multi-scale problems. Wavelet analysis leads to a unique decomposition of a signal into its coarse- and fine-scale components. Potentially, this could therefore be useful for image registration. In this work, we investigate whether a wavelet-based FFD model has advantages for nonrigid image registration. We use a B-splines based wavelet, as defined by Cai and Wang [1]. This wavelet is expressed as a linear combination of B-spline basis functions. Derived from the original B-spline function, this wavelet is smooth, differentiable, and compactly supported. The basis functions of this wavelet are orthogonal across scales in Sobolev space. This wavelet was previously used for registration in computer vision, in 2D optical flow problems [2], but it was not compared with the conventional B-spline FFD in medical image registration problems. An advantage of choosing this B-splines based wavelet model is that the space of allowable deformation is exactly equivalent to that of the traditional B-spline. The wavelet transformation is essentially a (linear) reparameterization of the B-spline transformation model. Experiments on 10 CT lung and 18 T1-weighted MRI brain datasets show that wavelet based registration leads to smoother deformation fields than traditional B-splines based registration, while achieving better accuracy.

Sun, Wei; Niessen, Wiro J.; Klein, Stefan

2014-03-01

77

Wavelet-based multifractal analysis of laser biopsy imagery

NASA Astrophysics Data System (ADS)

In this work, we report a wavelet-based multifractal study of images of dysplastic and neoplastic HE-stained human cervical tissues captured in transmission mode when illuminated by laser light (He-Ne, 632.8 nm). It is well known that the morphological changes occurring during the progression of diseases like cancer manifest in their optical properties, which can be probed to differentiate the various stages of cancer. Here, we use the multi-resolution properties of the wavelet transform to analyze the optical changes. For this, we have used a novel laser imagery technique which provides a composite image of the absorption by the different cellular organelles. As the disease progresses, the growth of new cells changes the ratio of organelle to cellular volume, which manifests in the laser imagery of such tissues. In order to develop a metric that can quantify the changes in such systems, we make use of wavelet-based fluctuation analysis. The changing self-similarity during disease progression can be well characterized by the Hurst exponent and the scaling exponent. The use of the Daubechies family of wavelet kernels allows us to extract polynomial trends of different orders, which helps characterize the underlying processes effectively. In this study, we observe that the Hurst exponent decreases as the cancer progresses. This measure could be used to differentiate between different stages of cancer, which could lead to the development of a novel non-invasive method for cancer detection and characterization.

Jagtap, Jaidip; Ghosh, Sayantan; Panigrahi, Prasanta K.; Pradhan, Asima

2012-02-01

78

Density estimation using the trapping web design: A geometric analysis

Population densities for small mammal and arthropod populations can be estimated using capture frequencies for a web of traps. A conceptually simple geometric analysis that avoids the need to estimate a point on a density function is proposed. This analysis incorporates data from the outermost rings of traps, explaining large capture frequencies in these rings rather than truncating them from the analysis.

Link, W.A.; Barker, R. J.

1994-01-01

79

Smooth density estimation with moment constraints using mixture distributions

Statistical analysis often involves the estimation of a probability density based on a sample of observations. A commonly used nonparametric method for solving this problem is the kernel-based method. The motivation is that any continuous density can be approximated by a mixture of densities with appropriately chosen bandwidths. In many practical applications, we may have specific information about the moments

Ani Eloyan; Sujit K. Ghosh

2011-01-01

80

Estimation of volumetric breast density for breast cancer risk prediction

Mammographic density (MD) has been shown to be a strong risk predictor for breast cancer. Compared to subjective assessment by a radiologist, computer-aided analysis of digitized mammograms provides a quantitative and more reproducible method for assessing breast density. However, the current methods of estimating breast density based on the area of bright signal in a mammogram do not reflect the

Olga Pawluczyk; Martin J. Yaffe; Norman F. Boyd; Roberta A. Jong

2000-01-01

81

Morphology driven density distribution estimation for small bodies

NASA Astrophysics Data System (ADS)

We explore methods to detect and characterize the internal mass distribution of small bodies using the gravity field and shape of the body as data, both of which are determined from the orbit determination process. The discrepancies in the spherical harmonic coefficients are compared between the measured gravity field and the gravity field generated by a homogeneous density assumption. The discrepancies are shown for six different heterogeneous density distribution models and two small bodies, namely 1999 KW4 and Castalia. Using these differences, a constraint is enforced on the internal density distribution of an asteroid, creating an archive of characteristics associated with the same-degree spherical harmonic coefficients. Following the initial characterization of the heterogeneous density distribution models, a generalized density estimation method to recover the hypothetical (i.e., nominal) density distribution of the body is considered. We propose this method as the block density estimation, which dissects the entire body into small slivers and blocks, each homogeneous within itself, to estimate their density values. Significant similarities are observed between the block model and mass concentrations. However, the block model does not suffer errors from shape mismodeling, and the number of blocks can be controlled with ease to yield a unique solution to the density distribution. The results show that the block density estimation approximates the given gravity field well, yielding higher accuracy as the resolution of the density map is increased. The estimated density distribution also computes the surface potential and acceleration within 10% for the particular cases tested in the simulations, an accuracy that is not achievable with the conventional spherical harmonic gravity field.
The block density estimation can be a useful tool for recovering the internal density distribution of small bodies, both for scientific study and for mapping out the gravity field environment in close proximity to the small body's surface for accurate trajectory design and safe navigation in future missions.

Takahashi, Yu; Scheeres, D. J.

2014-05-01

82

Improved estimation of discrete probability density functions using multirate models

For many decades, the problem of estimating a pdf based on measurements has been of interest to many researchers. Even though much work has been done in the area of pdf estimation, most of it was focused on the continuous case. In this paper, we propose a new model based approach for estimating a discrete probability density function. This approach

Byung-Jun Yoon; P. P. Vaidyanathan

2003-01-01

83

Neutral wind estimation from 4-D ionospheric electron density images

We develop a new inversion algorithm for Estimating Model Parameters from Ionospheric Reverse Engineering (EMPIRE). The EMPIRE method uses four-dimensional images of global electron density to estimate the field-aligned neutral wind ionospheric driver when direct measurement is not available. We begin with a model of the electron continuity equation that includes production and loss rate estimates, as well as E

S. Datta-Barua; G. S. Bust; G. Crowley; N. Curtis

2009-01-01

84

Quantiles, Parametric-Select Density Estimations, and Bi-Information Parameter Estimators.

National Technical Information Service (NTIS)

This paper outlines a quantile-based approach to functional inference problems in which the parameters to be estimated are density functions. Exponential models and autoregressive models are approximating densities which can be justified as maximum entrop...

E. Parzen

1982-01-01

85

Kernel density estimation is a widely used statistical tool and bandwidth selection is critically important. The Sheather and Jones’ (SJ) selector [A reliable data-based bandwidth selection method for kernel density estimation, J. R. Stat. Soc. Ser. B 53 (1991), pp. 683–690] remains the best available data-driven bandwidth selector. It can, however, perform poorly if the true density deviates too much
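For context, a minimal Gaussian kernel density estimator with Silverman's rule-of-thumb bandwidth is sketched below. This is a deliberately simpler baseline than the data-driven Sheather-Jones selector the abstract discusses; numpy only, with illustrative function names.

```python
import numpy as np

def silverman_bandwidth(data):
    """Silverman's rule of thumb: 0.9 * min(std, IQR/1.349) * n^(-1/5).
    A crude baseline; the Sheather-Jones plug-in selector is data-driven."""
    n = len(data)
    iqr = np.percentile(data, 75) - np.percentile(data, 25)
    sigma = min(np.std(data, ddof=1), iqr / 1.349)
    return 0.9 * sigma * n ** (-0.2)

def kde(x_eval, data, h):
    """Gaussian kernel density estimate evaluated at the points x_eval."""
    z = (x_eval[:, None] - data[None, :]) / h
    return np.exp(-0.5 * z ** 2).sum(axis=1) / (len(data) * h * np.sqrt(2.0 * np.pi))

rng = np.random.default_rng(2)
data = rng.standard_normal(1000)
h = silverman_bandwidth(data)
xs = np.linspace(-4.0, 4.0, 81)
fhat = kde(xs, data, h)
```

The rule of thumb is tuned to near-Gaussian data; as the abstract notes for SJ, any fixed selector can perform poorly when the true density departs strongly from the reference shape.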

J. G. Liao; Yujun Wu; Yong Lin

2010-01-01

86

Wavelet-based face verification for constrained platforms

NASA Astrophysics Data System (ADS)

Human identification based on facial images is one of the most challenging tasks in comparison to identification based on other biometric features such as fingerprints, palm prints or iris. Facial recognition is the most natural and suitable method of identification for security-related applications. This paper is concerned with wavelet-based schemes for efficient face verification suitable for implementation on devices that are constrained in memory size and computational power, such as PDAs and smartcards. Besides minimal storage requirements, we should apply as few pre-processing procedures as possible, which are often needed to deal with variation in recording conditions. We propose the LL coefficients of wavelet-transformed face images as the feature vectors for face verification, and compare their performance with that of PCA applied in the LL-subband at levels 3, 4 and 5. We shall also compare the performance of various versions of our scheme with those of well-established PCA face verification schemes on the BANCA database as well as the ORL database. In many cases, the wavelet-only feature vector scheme has the best performance while maintaining efficacy and requiring minimal pre-processing steps. The significance of these results is their efficiency and suitability for platforms of constrained computational power and storage capacity (e.g. smartcards). Moreover, working at or beyond the level 3 LL-subband results in robustness against high-rate compression and noise interference.

Sellahewa, Harin; Jassim, Sabah A.

2005-03-01

87

An image adaptive, wavelet-based watermarking of digital images

NASA Astrophysics Data System (ADS)

In digital content management, multimedia content and data can easily be used in an illegal way--being copied, modified and distributed again. Copyright protection, intellectual and material rights protection for authors, owners, buyers, distributors and the authenticity of content are crucial factors in solving an urgent and real problem. In such a scenario, digital watermark techniques are emerging as a valid solution. In this paper, we describe an algorithm--called WM2.0--for an invisible watermark: private, strong, wavelet-based and developed for digital image protection and authenticity. The use of the discrete wavelet transform (DWT) is motivated by its good time-frequency features and its good match with human visual system characteristics. These two combined elements are important in building an invisible and robust watermark. WM2.0 works on a dual scheme: watermark embedding and watermark detection. The watermark is embedded into high-frequency DWT components of a specific sub-image, and it is calculated in correlation with the image features and statistical properties. Watermark detection applies a re-synchronization between the original and watermarked image. The correlation between the watermarked DWT coefficients and the watermark signal is calculated according to the Neyman-Pearson statistical criterion. Experimentation on a large set of different images has shown the watermark to be resistant against geometric, filtering and StirMark attacks, with a low rate of false alarms.

Agreste, Santa; Andaloro, Guido; Prestipino, Daniela; Puccio, Luigia

2007-12-01

88

Complex wavelet based speckle reduction using multiple ultrasound images

NASA Astrophysics Data System (ADS)

Ultrasound imaging is a dominant tool for diagnosis and evaluation in medical imaging systems. However, its major limitation is that the images it produces suffer from low quality due to the presence of speckle noise; reducing this noise is essential for better clinical diagnoses. The key purpose of a speckle reduction algorithm is to obtain a speckle-free, high-quality image whilst preserving important anatomical features, such as sharp edges. As this can be better achieved using multiple ultrasound images rather than a single image, we introduce a complex wavelet-based algorithm for speckle reduction and sharp-edge preservation of two-dimensional (2D) ultrasound images using multiple ultrasound images. The proposed algorithm does not rely on straightforward averaging of multiple images; rather, in each scale, overlapped wavelet detail coefficients are weighted using dynamic threshold values and then reconstructed by averaging. Validation of the proposed algorithm is carried out using simulated and real images with synthetic speckle noise and phantom data consisting of multiple ultrasound images, with the experimental results demonstrating that speckle noise is significantly reduced whilst sharp edges are preserved without discernible distortions. The proposed approach performs better both qualitatively and quantitatively than previous existing approaches.

Uddin, Muhammad Shahin; Tahtali, Murat; Pickering, Mark R.

2014-04-01

89

Non-iterative wavelet-based deconvolution for sparse aperture system

NASA Astrophysics Data System (ADS)

Optical sparse aperture imaging is a promising technology for obtaining high resolution with a significant reduction in size and weight by minimizing the total light collection area. However, as the collection area decreases, the OTF is also greatly attenuated, and thus the direct imaging quality of a sparse aperture system is very poor. In this paper, we focus on post-processing methods for sparse aperture systems and propose a non-iterative wavelet-based deconvolution algorithm. The algorithm adaptively denoises the Fourier-based deconvolution results on a wavelet basis. We set up a Golay-3 sparse-aperture imaging system, with which imaging and deconvolution experiments on natural scenes are performed. The experiments demonstrate that the proposed method greatly improves the imaging quality of the Golay-3 sparse-aperture system and produces satisfactory visual quality. Furthermore, our experimental results also indicate that the sparse aperture system has the potential to reach higher resolution with the help of better post-processing deconvolution techniques.

Xu, Wenhai; Zhao, Ming; Li, Hongshu

2013-05-01

90

Utilizing verification and validation certificates to estimate software defect density

In industry, information on defect density of a product tends to become available too late in the software development process to affordably guide corrective actions. Our research objective is to build a parametric model which utilizes a persistent record of the validation and verification (V&V) practices used with a program to estimate the defect density of that program. The persistent

Mark Sherriff

2005-01-01

91

A Comparison of Mixture Models for Density Estimation

Gaussian mixture models (GMMs) are a popular tool for density estimation. However, these models are limited by the fact that they either impose strong constraints on the covariance matrices of the component densities or no constraints at all. This paper presents an experimental comparison of GMMs and the recently introduced mixtures of linear latent variable models. It is shown that the latter models are a more flexible alternative

Perry Moerland

1999-01-01

92

Wavelet-based noise-model driven denoising algorithm for differential phase contrast mammography.

Traditional mammography can be positively complemented by phase contrast and scattering x-ray imaging, because they can detect subtle differences in the electron density of a material and measure the local small-angle scattering power generated by the microscopic density fluctuations in the specimen, respectively. The grating-based x-ray interferometry technique can produce absorption, differential phase contrast (DPC) and scattering signals of the sample in parallel, and works well with conventional X-ray sources; thus, it constitutes a promising method for more reliable breast cancer screening and diagnosis. Recently, our team proved that this novel technology can provide images superior to conventional mammography. This new technology was used to image whole native breast samples directly after mastectomy. The images acquired show high potential, but the noise level associated with the DPC and scattering signals is significant, so it is necessary to remove it in order to improve image quality and visualization. The noise models of the three signals have been investigated and the noise variance can be computed. In this work, a wavelet-based denoising algorithm using these noise models is proposed. It was evaluated with both simulated and experimental mammography data. The outcomes demonstrated that our method offers good denoising quality, while simultaneously preserving the edges and important structural features. Therefore, it can help improve diagnosis and enable further post-processing techniques such as fusion of the three acquired signals. PMID:23669913
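To make the idea concrete: once a noise model supplies the noise variance, wavelet coefficients can be thresholded accordingly. The sketch below uses a single-level Haar transform and the universal threshold sigma * sqrt(2 ln N), a much-simplified stand-in for the authors' scheme, with an assumed known sigma and a synthetic piecewise-constant phantom.

```python
import numpy as np

def haar_fwd(x):
    """One-level orthonormal Haar transform: approximation and detail halves."""
    even, odd = x[0::2], x[1::2]
    return (even + odd) / np.sqrt(2.0), (even - odd) / np.sqrt(2.0)

def haar_inv(approx, detail):
    """Inverse of haar_fwd."""
    even = (approx + detail) / np.sqrt(2.0)
    odd = (approx - detail) / np.sqrt(2.0)
    out = np.empty(2 * approx.size)
    out[0::2], out[1::2] = even, odd
    return out

def denoise(y, sigma):
    """Soft-threshold the detail coefficients at the universal threshold
    sigma * sqrt(2 ln N), assuming the noise std sigma is known a priori."""
    a, d = haar_fwd(y)
    t = sigma * np.sqrt(2.0 * np.log(y.size))
    d = np.sign(d) * np.maximum(np.abs(d) - t, 0.0)   # soft shrinkage
    return haar_inv(a, d)

rng = np.random.default_rng(3)
clean = np.repeat([0.0, 4.0, -2.0, 1.0], 256)   # piecewise-constant phantom
noisy = clean + 0.5 * rng.standard_normal(clean.size)
restored = denoise(noisy, sigma=0.5)
```

Because soft thresholding only shrinks small detail coefficients, the sharp steps in the phantom survive, which mirrors the edge-preservation goal stated in the abstract.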

Arboleda, Carolina; Wang, Zhentian; Stampanoni, Marco

2013-05-01

93

A wavelet-based approach to face verification/recognition

NASA Astrophysics Data System (ADS)

Face verification/recognition is a tough challenge in comparison to identification based on other biometrics such as iris or fingerprints. Yet, due to its unobtrusive nature, the face is naturally suitable for security-related applications. The face verification process relies on feature extraction from face images. Current schemes are either geometric-based or template-based. In the latter, the face image is statistically analysed to obtain a set of feature vectors that best describe it. Performance of a face verification system is affected by image variations due to illumination, pose, occlusion, expressions and scale. This paper extends our recent work on face verification for constrained platforms, where the feature vector of a face image is the coefficients in the wavelet-transformed LL-subbands at depth 3 or more. It was demonstrated that the wavelet-only feature vector scheme has a performance comparable to sophisticated state-of-the-art schemes when tested on two benchmark databases (ORL and BANCA). The significance of those results stems from the fact that the size of the k-th LL-subband is 1/4^k of the original image size. Here, we investigate the use of wavelet coefficients in various subbands at level 3 or 4 using various wavelet filters. We shall compare the performance of the wavelet-based scheme for different filters at different subbands with a number of state-of-the-art face verification/recognition schemes on two benchmark databases, namely ORL and the control section of BANCA. We shall demonstrate that our schemes have comparable performance to (or outperform) the best performing other schemes.

Jassim, Sabah; Sellahewa, Harin

2005-10-01

94

Wavelet-based ground vehicle recognition using acoustic signals

NASA Astrophysics Data System (ADS)

We present, in this paper, a wavelet-based acoustic signal analysis to remotely recognize military vehicles using their sound intercepted by acoustic sensors. Since expedited signal recognition is imperative in many military and industrial situations, we developed an algorithm that provides automated, fast signal recognition once implemented in a real-time hardware system. This algorithm consists of wavelet preprocessing, feature extraction and compact signal representation, and simple but effective statistical pattern matching. In its current form, the algorithm does not require any training. Training is replaced by human selection of reference signals (e.g., squeak or engine exhaust sound) distinctive to each individual vehicle, based on human perception. This allows fast archiving of any new vehicle type in the database once the signal is collected. The wavelet preprocessing provides time-frequency multiresolution analysis using the discrete wavelet transform (DWT). Within each resolution level, feature vectors are generated from statistical parameters and the energy content of the wavelet coefficients. After applying our algorithm to the intercepted acoustic signals, the resultant feature vectors are compared with the reference vehicle feature vectors in the database using statistical pattern matching to determine the type of vehicle from which the signal originated. Certainly, statistical pattern matching can be replaced by an artificial neural network (ANN); however, the ANN would require training data sets and time to train the net. Unfortunately, this is not always possible in many real-world situations, especially collecting data sets from unfriendly ground vehicles to train the ANN. Our methodology using wavelet preprocessing and statistical pattern matching provides robust acoustic signal recognition. We also present an example of vehicle recognition using acoustic signals collected from two different military ground vehicles.
In this paper, we will not present the mathematics involved in this research. Instead, the focus of this paper is on the application of various techniques used to achieve our goal of successful recognition.
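A toy version of the pipeline described above, per-level wavelet energy features plus nearest-reference matching, could look like the following. The Haar wavelet, the relative-energy feature choice, and the two synthetic "vehicle" signatures are illustrative assumptions standing in for the paper's DWT statistics and recorded sounds.

```python
import numpy as np

def wavelet_features(x, levels=4):
    """Feature vector: relative energy of Haar detail coefficients per
    level (a simplified stand-in for the paper's per-level statistics)."""
    a = np.asarray(x, dtype=float)
    energies = []
    for _ in range(levels):
        even, odd = a[0::2], a[1::2]
        d = (even - odd) / np.sqrt(2.0)   # detail coefficients
        a = (even + odd) / np.sqrt(2.0)   # approximation
        energies.append(np.sum(d ** 2))
    e = np.array(energies)
    return e / e.sum()

def match(signal, references):
    """Nearest-reference matching by Euclidean distance in feature space."""
    f = wavelet_features(signal)
    dists = [np.linalg.norm(f - wavelet_features(r)) for r in references]
    return int(np.argmin(dists))

# Two hypothetical reference signatures and a noisy observation of the first.
rng = np.random.default_rng(5)
t = np.arange(1024)
ref_a = np.sin(2 * np.pi * t / 8)     # e.g. high-pitched engine tone
ref_b = np.sin(2 * np.pi * t / 128)   # e.g. low rumble
observed = ref_a + 0.1 * rng.standard_normal(t.size)
```

As in the paper, nothing here is trained: adding a new vehicle amounts to storing one more reference feature vector.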

Choe, Howard C.; Karlsen, Robert E.; Gerhart, Grant R.; Meitzler, Thomas J.

1996-03-01

95

EnBiD: Fast Multi-dimensional Density Estimation

NASA Astrophysics Data System (ADS)

We present a method to numerically estimate the density of discretely sampled data based on a binary space partitioning tree. We start with a root node containing all the particles and then recursively divide each node into two nodes, each containing a roughly equal number of particles, until each node contains only one particle. The volume of such a leaf node provides an estimate of the local density, and its shape provides an estimate of the variance. We implement an entropy-based node splitting criterion that results in a significant improvement in the estimation of densities compared to earlier work. The method is completely metric free and can be applied to an arbitrary number of dimensions. We use this method to determine the appropriate metric at each point in space and then use kernel-based methods for calculating the density. The kernel-smoothed estimates were found to be more accurate and have lower dispersion. We apply this method to determine the phase-space densities of dark matter haloes obtained from cosmological N-body simulations. We find that, contrary to earlier studies, the volume distribution function v(f) of phase-space density f does not have a constant slope but rather a small hump at high phase-space densities. We demonstrate that a model in which a halo is made up of a superposition of Hernquist spheres is not capable of explaining the shape of the v(f) versus f relation, whereas a model which takes into account the contribution of the main halo separately roughly reproduces the behaviour seen in simulations. The use of the presented method is not limited to the calculation of phase-space densities; it can be used as a general-purpose data-mining tool, and due to its speed and accuracy it is ideally suited for the analysis of large multidimensional data sets.
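The recursive bisection at the heart of the method can be sketched in one dimension: split each node at the median until every leaf holds a single point, then take density ≈ 1/(n × leaf width). This toy version uses plain median splits rather than the paper's entropy-based splitting criterion, and the function name is an illustrative assumption.

```python
import numpy as np

def enbid_1d(x):
    """1-D sketch of binary-partition density estimation: recursively
    median-split each node until every leaf holds one point, then
    estimate the density at that point as 1 / (n * leaf_width)."""
    n = len(x)
    widths = np.empty(n)
    order = np.argsort(x)   # indices sorted by position

    def split(idx, lo, hi):
        if len(idx) == 1:
            widths[idx[0]] = hi - lo   # leaf "volume" in 1-D
            return
        m = len(idx) // 2
        cut = 0.5 * (x[idx[m - 1]] + x[idx[m]])   # median split point
        split(idx[:m], lo, cut)
        split(idx[m:], cut, hi)

    split(order, x.min(), x.max())
    return 1.0 / (n * widths)

rng = np.random.default_rng(4)
x = rng.standard_normal(4096)
dens = enbid_1d(x)   # per-point density estimates
```

The per-leaf estimates are individually noisy, which is why the paper follows the partitioning step with kernel smoothing using the locally adapted metric.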

Sharma, Sanjib; Steinmetz, Matthias

2011-09-01

96

Quantiles, parametric-select density estimation, and bi-information parameter estimators

NASA Technical Reports Server (NTRS)

A quantile-based approach to statistical analysis and probability modeling of data is presented which formulates statistical inference problems as functional inference problems in which the parameters to be estimated are density functions. Density estimators can be non-parametric (computed independently of any identified model) or parametric-select (approximated by finite parametric models that can provide standard models whose fit can be tested). Exponential models and autoregressive models are approximating densities which can be justified as maximum entropy for, respectively, the entropy of a probability density and the entropy of a quantile density. Applications of these ideas are outlined for the problems of modeling: (1) univariate data; (2) bivariate data and tests for independence; and (3) two samples and likelihood ratios. It is proposed that bi-information estimation of a density function can be developed by analogy to the problem of identification of regression models.

Parzen, E.

1982-01-01

97

Density-ratio robustness in dynamic state estimation

NASA Astrophysics Data System (ADS)

The filtering problem is addressed by taking into account imprecision in the knowledge about the probabilistic relationships involved. Imprecision is modelled in this paper by a particular closed convex set of probabilities known as the density ratio class or constant odds-ratio (COR) model. The contributions of this paper are the following. First, we shall define an optimality criterion based on the squared-loss function for the estimates derived from a general closed convex set of distributions. Second, after reviewing the properties of the density ratio class in the context of parametric estimation, we shall extend these properties to state estimation accounting for system dynamics. Furthermore, for the case in which the nominal density of the COR model is a multivariate Gaussian, we shall derive closed-form solutions for the set of optimal estimates and for the credible region. Third, we discuss how to perform Monte Carlo integrations to compute lower and upper expectations from a COR set of densities. Then we shall derive a procedure that, employing Monte Carlo sampling techniques, allows us to propagate in time both the lower and upper state expectation functionals and, thus, to derive an efficient solution of the filtering problem. Finally, we empirically compare the proposed estimator with the Kalman filter. This comparison shows that our solution is more robust to the presence of modelling errors in the system and, hence, appears to be a more realistic approach than the Kalman filter in such cases.

Benavoli, Alessio; Zaffalon, Marco

2013-05-01

98

Comparison of neuron selection algorithms of wavelet-based neural network

NASA Astrophysics Data System (ADS)

Wavelet networks have increasingly received considerable attention in various fields such as signal processing, pattern recognition, robotics and automatic control. Recently, researchers have become interested in employing wavelet functions as activation functions, and some satisfying results have been obtained in approximating and localizing signals. However, function estimation becomes more and more complex as the input dimension grows. The hidden neurons contribute to minimizing the approximation error, so it is important to study suitable algorithms for neuron selection. An exhaustive search procedure is clearly impractical when the number of neurons is large. The study in this paper focuses on which type of selection algorithm has faster convergence and lower error for signal approximation. Therefore, the genetic algorithm and the Tabu Search algorithm are studied and compared in several experiments. This paper first presents the structure of the wavelet-based neural network, then introduces these two selection algorithms, discusses their properties and learning processes, and analyzes the experiments and results. We used two wavelet functions to test the two algorithms. The experiments show that the Tabu Search selection algorithm performs better than the genetic selection algorithm: TSA has a faster convergence rate than GA under the same stopping criterion.

Mei, Xiaodan; Sun, Sheng-He

2001-09-01

99

A Wavelet-Based Noise Reduction Algorithm and Its Clinical Evaluation in Cochlear Implants

Noise reduction is often essential for cochlear implant (CI) recipients to achieve acceptable speech perception in noisy environments. Most noise reduction algorithms applied to audio signals are based on time-frequency representations of the input, such as the Fourier transform. Algorithms based on other representations may also be able to provide comparable or improved speech perception and listening quality. In this paper, a noise reduction algorithm for CI sound processing is proposed based on the wavelet transform. The algorithm uses a dual-tree complex discrete wavelet transform followed by shrinkage of the wavelet coefficients based on a statistical estimation of the variance of the noise. The proposed noise reduction algorithm was evaluated by comparing its performance to those of several existing wavelet-based algorithms. The speech transmission index (STI) of the proposed algorithm is significantly better than that of the other tested algorithms for speech-weighted noise at various signal-to-noise ratios. The effectiveness of the proposed system was clinically evaluated with CI recipients. A significant improvement in speech perception of 1.9 dB was found on average in speech-weighted noise.
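As a minimal illustration of the coefficient-shrinkage idea (not the paper's dual-tree complex DWT), the sketch below applies a plain Haar transform and soft-thresholds the detail coefficients, with the threshold derived from a robust noise-variance estimate. The universal-threshold rule used here is a standard textbook choice, not necessarily the one used in the paper, and all function names are ours.

```python
import math

# Haar wavelet shrinkage: transform, soft-threshold the detail
# coefficients using a noise sigma estimated from the finest level,
# then invert. Input length must be a power of two.

def haar_fwd(x):
    coeffs = []
    a = list(x)
    while len(a) > 1:
        s = [(a[i] + a[i + 1]) / math.sqrt(2) for i in range(0, len(a), 2)]
        d = [(a[i] - a[i + 1]) / math.sqrt(2) for i in range(0, len(a), 2)]
        coeffs.append(d)
        a = s
    return a, coeffs  # approximation, detail levels (finest first)

def haar_inv(a, coeffs):
    a = list(a)
    for d in reversed(coeffs):
        nxt = []
        for s, w in zip(a, d):
            nxt += [(s + w) / math.sqrt(2), (s - w) / math.sqrt(2)]
        a = nxt
    return a

def denoise(x):
    a, coeffs = haar_fwd(x)
    # robust noise-sigma estimate from the finest detail level (MAD / 0.6745)
    finest = sorted(abs(w) for w in coeffs[0])
    sigma = finest[len(finest) // 2] / 0.6745
    thr = sigma * math.sqrt(2 * math.log(len(x)))  # universal threshold
    soft = lambda w: math.copysign(max(abs(w) - thr, 0.0), w)
    return haar_inv(a, [[soft(w) for w in d] for d in coeffs])
```

A clean signal passes through unchanged (all detail coefficients survive a zero threshold), while small noise-driven coefficients are suppressed.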

Ye, Hua; Deng, Guang; Mauger, Stefan J.; Hersbach, Adam A.; Dawson, Pam W.; Heasman, John M.

2013-01-01

100

Fast wavelet-based image characterization for highly adaptive image retrieval.

Adaptive wavelet-based image characterizations have been proposed in previous works for content-based image retrieval (CBIR) applications. In these applications, the same wavelet basis was used to characterize each query image: This wavelet basis was tuned to maximize the retrieval performance in a training data set. We take it one step further in this paper: A different wavelet basis is used to characterize each query image. A regression function, which is tuned to maximize the retrieval performance in the training data set, is used to estimate the best wavelet filter, i.e., in terms of expected retrieval performance, for each query image. A simple image characterization, which is based on the standardized moments of the wavelet coefficient distributions, is presented. An algorithm is proposed to compute this image characterization almost instantly for every possible separable or nonseparable wavelet filter. Therefore, using a different wavelet basis for each query image does not considerably increase computation times. On the other hand, significant retrieval performance increases were obtained in a medical image data set, a texture data set, a face recognition data set, and an object picture data set. This additional flexibility in wavelet adaptation paves the way to relevance feedback on image characterization itself and not simply on the way image characterizations are combined. PMID:22194244
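The characterization above is built from standardized moments of the wavelet coefficient distributions. As a hedged sketch of just that ingredient (the function name is ours, and this says nothing about the regression step), the standardized moments of a coefficient list can be computed as follows:

```python
def standardized_moments(coeffs, orders=(3, 4)):
    """Standardized moments E[((w - mu)/sigma)^k] of a coefficient list,
    i.e. skewness (k=3) and kurtosis (k=4) by default."""
    n = len(coeffs)
    mu = sum(coeffs) / n
    var = sum((w - mu) ** 2 for w in coeffs) / n
    sigma = var ** 0.5
    return [sum(((w - mu) / sigma) ** k for w in coeffs) / n for k in orders]
```

Because these summaries are cheap per subband, recomputing them for many candidate wavelet filters stays fast, which is the point the abstract makes.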

Quellec, Gwénolé; Lamard, Mathieu; Cazuguel, Guy; Cochener, Béatrice; Roux, Christian

2012-04-01

101

Semiparametric Curve Alignment and Shift Density Estimation for Biological Data

NASA Astrophysics Data System (ADS)

Assume that we observe a large number of curves, all with an identical, although unknown, shape, but each with a different random shift. The objective is to estimate the individual time shifts and their distribution. Such an objective appears in several biological applications, like neuroscience or ECG signal processing, in which it is of interest to estimate the distribution of the elapsed time between repetitive pulses, possibly with a low signal-to-noise ratio and without knowledge of the pulse shape. We suggest an M-estimator leading to a three-stage algorithm: we split our data set into blocks, on which the estimation of the shifts is done by minimizing a cost criterion based on a functional of the periodogram; the estimated shifts are then plugged into a standard density estimator. We show that under mild regularity assumptions the density estimate converges weakly to the true shift distribution. The theory is applied both to simulations and to the alignment of real ECG signals. The estimator of the shift distribution performs well, even in the case of a low signal-to-noise ratio, and is shown to outperform standard methods for curve alignment.

Trigano, Thomas; Isserles, Uri; Ritov, Ya'acov

2011-05-01

102

An Infrastructureless Approach to Estimate Vehicular Density in Urban Environments

In vehicular networks, communication success usually depends on the density of vehicles, since a higher density allows shorter and more reliable wireless links. Thus, knowing the density of vehicles in a vehicular communications environment is important, as it reveals better opportunities for wireless communication. However, vehicle density is highly variable in time and space. This paper deals with the importance of predicting the density of vehicles in vehicular environments in order to make decisions that enhance the dissemination of warning messages between vehicles. We propose a novel mechanism to estimate the vehicular density in urban environments. Our mechanism uses as input parameters the number of beacons received per vehicle and the topological characteristics of the environment where the vehicles are located. Simulation results indicate that, unlike previous proposals based solely on the number of beacons received, our approach is able to accurately estimate the vehicular density, and therefore it could support more efficient dissemination protocols for vehicular environments, as well as improve previously proposed schemes.

Sanguesa, Julio A.; Fogue, Manuel; Garrido, Piedad; Martinez, Francisco J.; Cano, Juan-Carlos; Calafate, Carlos T.; Manzoni, Pietro

2013-01-01

103

Double sampling to estimate density and population trends in birds

We present a method for estimating density of nesting birds based on double sampling. The approach involves surveying a large sample of plots using a rapid method such as uncorrected point counts, variable circular plot counts, or the recently suggested double-observer method. A subsample of those plots is also surveyed using intensive methods to determine actual density. The ratio of the mean count on those plots (using the rapid method) to the mean actual density (as determined by the intensive searches) is used to adjust results from the rapid method. The approach works well when results from the rapid method are highly correlated with actual density. We illustrate the method with three years of shorebird surveys from the tundra in northern Alaska. In the rapid method, surveyors covered ~10 ha/h and surveyed each plot a single time. The intensive surveys involved three thorough searches, required ~3 h/ha, and took 20% of the study effort. Surveyors using the rapid method detected an average of 79% of birds present. That detection ratio was used to convert the index obtained in the rapid method into an essentially unbiased estimate of density. Trends estimated from several years of data would also be essentially unbiased. Other advantages of double sampling are that (1) the rapid method can be changed as new methods become available, (2) domains can be compared even if detection rates differ, (3) total population size can be estimated, and (4) valuable ancillary information (e.g. nest success) can be obtained on intensive plots with little additional effort. We suggest that double sampling be used to test the assumption that rapid methods, such as variable circular plot and double-observer methods, yield density estimates that are essentially unbiased. The feasibility of implementing double sampling in a range of habitats needs to be evaluated.
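The double-sampling adjustment amounts to a ratio correction and can be sketched in a few lines. The numbers in the usage below are made up for illustration (only the 79% detection ratio echoes the abstract), and the function name is ours:

```python
# Double-sampling ratio estimator: the detection ratio measured on the
# intensively surveyed subsample corrects the rapid-method counts.

def adjusted_density(rapid_counts, sub_rapid, sub_true):
    """Mean density estimate from rapid counts, corrected by double sampling.

    rapid_counts: rapid-method counts on all plots
    sub_rapid:    rapid-method counts on the intensively surveyed subsample
    sub_true:     actual densities on that subsample (intensive searches)
    """
    detection_ratio = (sum(sub_rapid) / len(sub_rapid)) / \
                      (sum(sub_true) / len(sub_true))
    mean_rapid = sum(rapid_counts) / len(rapid_counts)
    return mean_rapid / detection_ratio
```

For example, if the subsample shows the rapid method detects 79% of birds (count 7.9 where the true density is 10.0), rapid counts everywhere are scaled up by 1/0.79.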

Bart, Jonathan; Earnst, Susan L.

2002-01-01

104

The purpose of the present study was to investigate the use of various wavelet-based techniques for denoising [11C](R)-PK11195 time activity curves (TACs) in order to improve the accuracy and precision of PET kinetic parameters, such as volume of distribution (V(T)) and distribution volume ratio with reference region (DVR). Simulated and clinical TACs were filtered using two different categories of wavelet filters: (1) wavelet shrinkage using a constant or a newly developed time-varying threshold and (2) "statistical" filters, which filter extreme wavelet coefficients using a set of "calibration" TACs. PET pharmacokinetic parameters were estimated using linear models (plasma Logan and reference Logan analyses). For simulated noisy TACs, optimized wavelet-based filters improved the residual sum of squared errors with respect to the original noise-free TACs. Furthermore, clinical results and simulations were in agreement. Plasma Logan V(T) values increased after filtering, but no differences were seen in reference Logan DVR values. This increase in plasma Logan V(T) suggests a reduction of noise-induced bias by wavelet-based denoising, as was seen in the simulations. Wavelet denoising of TACs for [11C](R)-PK11195 PET studies is therefore useful when parametric Logan-based V(T) is the parameter of interest. PMID:19070241

Yaqub, Maqsood; Boellaard, Ronald; Schuitemaker, Alie; van Berckel, Bart N M; Lammertsma, Adriaan A

2008-11-01

105

Probability density estimation with tunable kernels using orthogonal forward regression.

A generalized or tunable-kernel model is proposed for probability density function estimation based on an orthogonal forward regression procedure. Each stage of the density estimation process determines a tunable kernel, namely, its center vector and diagonal covariance matrix, by minimizing a leave-one-out test criterion. The kernel mixing weights of the constructed sparse density estimate are finally updated using the multiplicative nonnegative quadratic programming algorithm to ensure the nonnegativity and unity constraints, and this weight-updating process additionally has the desired ability to further reduce the model size. The proposed tunable-kernel model has advantages, in terms of model generalization capability and model sparsity, over the standard fixed-kernel model that restricts kernel centers to the training data points and employs a single common kernel variance for every kernel. On the other hand, it does not optimize all the model parameters together and thus avoids the problems of high-dimensional ill-conditioned nonlinear optimization associated with the conventional finite mixture model. Several examples are included to demonstrate the ability of the proposed tunable-kernel model to construct very compact and accurate density estimates. PMID:20007052

Chen, Sheng; Hong, Xia; Harris, Chris J

2010-08-01

106

Density estimation in tiger populations: combining information for strong inference

A productive way forward in studies of animal populations is to efficiently make use of all the information available, either as raw data or as published sources, on critical parameters of interest. In this study, we demonstrate two approaches to the use of multiple sources of information on a parameter of fundamental interest to ecologists: animal density. The first approach produces estimates simultaneously from two different sources of data. The second approach was developed for situations in which initial data collection and analysis are followed up by subsequent data collection and prior knowledge is updated with new data using a stepwise process. Both approaches are used to estimate density of a rare and elusive predator, the tiger, by combining photographic and fecal DNA spatial capture–recapture data. The model, which combined information, provided the most precise estimate of density (8.5 ± 1.95 tigers/100 km2 [posterior mean ± SD]) relative to a model that utilized only one data source (photographic, 12.02 ± 3.02 tigers/100 km2 and fecal DNA, 6.65 ± 2.37 tigers/100 km2). Our study demonstrates that, by accounting for multiple sources of available information, estimates of animal density can be significantly improved.
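As a simple illustration of why combining independent data sources tightens a density estimate, the sketch below uses inverse-variance weighting. This is not the authors' hierarchical spatial capture-recapture model, just a back-of-the-envelope analogue with a function name of our own:

```python
def inverse_variance_combine(estimates):
    """Combine independent (mean, sd) estimates into one (mean, sd).

    Weights each estimate by 1/sd^2; the combined variance is the
    reciprocal of the summed weights, so it is always smaller than
    the smallest input variance."""
    weights = [1.0 / sd ** 2 for _, sd in estimates]
    total = sum(weights)
    mean = sum(w * m for w, (m, _) in zip(weights, estimates)) / total
    return mean, (1.0 / total) ** 0.5
```

Plugging in the abstract's single-source estimates (12.02 ± 3.02 and 6.65 ± 2.37 tigers/100 km2) yields a combined standard deviation smaller than either input, mirroring the precision gain the full model reports.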

Gopalaswamy, Arjun M.; Royle, J. Andrew; Delampady, Mohan; Nichols, James D.; Karanth, K. Ullas; Macdonald, David W.

2012-01-01

107

Improved Fast Gauss Transform and Efficient Kernel Density Estimation

Evaluating sums of multivariate Gaussians is a common computational task in computer vision and pattern recognition, including in the general and powerful kernel density estimation technique. The quadratic computational complexity of the summation is a significant barrier to the scalability of this algorithm to practical applications. The fast Gauss transform (FGT) has successfully
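The quadratic-cost sum that the fast Gauss transform accelerates is the direct kernel density estimate. A naive O(NM) reference implementation (one-dimensional for clarity; the function name is ours) looks like this:

```python
import math

# Direct evaluation of a Gaussian kernel density estimate: for each of
# M target points, sum a Gaussian kernel over all N source points.
# This O(N*M) loop is exactly the cost the FGT is designed to reduce.

def gaussian_kde(sources, targets, h):
    """Evaluate the KDE with bandwidth h at each target point."""
    norm = 1.0 / (len(sources) * h * math.sqrt(2 * math.pi))
    return [norm * sum(math.exp(-0.5 * ((t - s) / h) ** 2) for s in sources)
            for t in targets]
```

With a single source at 0 and h = 1, the estimate at 0 is the standard normal peak 1/sqrt(2*pi); the FGT approximates the same sums with far fewer kernel evaluations.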

Changjiang Yang; Ramani Duraiswami; Nail A. Gumerov; Larry S. Davis

2003-01-01

108

Deconvolving multivariate kernel density estimates from contaminated associated observations

We consider the estimation of the multivariate probability density function f(x_1, ..., x_p) of X_1, ..., X_p of a stationary positively or negatively associated (PA or NA) random process {X_i}, i = 1, 2, ..., from noisy observations. Both ordinary smooth and supersmooth noise are considered. Quadratic mean and asymptotic normality results are established.

Elias Masry

2003-01-01

109

Bayesian wavelet-based image denoising using the Gauss-Hermite expansion.

The probability density functions (PDFs) of the wavelet coefficients play a key role in many wavelet-based image processing algorithms, such as denoising. The conventional PDFs usually have a limited number of parameters that are calculated from the first few moments only. Consequently, such PDFs cannot be made to fit very well with the empirical PDF of the wavelet coefficients of an image. As a result, the shrinkage function utilizing any of these density functions provides a substandard denoising performance. In order for the probabilistic model of the image wavelet coefficients to be able to incorporate an appropriate number of parameters that are dependent on the higher order moments, a PDF using a series expansion in terms of the Hermite polynomials that are orthogonal with respect to the standard Gaussian weight function, is introduced. A modification in the series function is introduced so that only a finite number of terms can be used to model the image wavelet coefficients, ensuring at the same time the resulting PDF to be non-negative. It is shown that the proposed PDF matches the empirical one better than some of the standard ones, such as the generalized Gaussian or Bessel K-form PDF. A Bayesian image denoising technique is then proposed, wherein the new PDF is exploited to statistically model the subband as well as the local neighboring image wavelet coefficients. Experimental results on several test images demonstrate that the proposed denoising method, both in the subband-adaptive and locally adaptive conditions, provides a performance better than that of most of the methods that use PDFs with limited number of parameters. PMID:18784025

Rahman, S M Mahbubur; Ahmad, M Omair; Swamy, M N S

2008-10-01

110

Estimating Density Gradients and Drivers from 3D Ionospheric Imaging

NASA Astrophysics Data System (ADS)

The transition regions at the edges of the ionospheric storm-enhanced density (SED) are important for a detailed understanding of the mid-latitude physical processes occurring during major magnetic storms. At the boundary, the density gradients are evidence of the drivers that link the larger processes of the SED, with its connection to the plasmasphere and prompt-penetration electric fields, to the smaller irregularities that result in scintillations. For this reason, we present our estimates of both the plasma variation with horizontal and vertical spatial scale of 10 - 100 km and the plasma motion within and along the edges of the SED. To estimate the density gradients, we use Ionospheric Data Assimilation Four-Dimensional (IDA4D), a mature data assimilation algorithm that has been developed over several years and applied to investigations of polar cap patches and space weather storms [Bust and Crowley, 2007; Bust et al., 2007]. We use the density specification produced by IDA4D with a new tool for deducing ionospheric drivers from 3D time-evolving electron density maps, called Estimating Model Parameters from Ionospheric Reverse Engineering (EMPIRE). The EMPIRE technique has been tested on simulated data from TIMEGCM-ASPEN and on IDA4D-based density estimates with ongoing validation from Arecibo ISR measurements [Datta-Barua et al., 2009a; 2009b]. We investigate the SED that formed during the geomagnetic super storm of November 20, 2003. We run IDA4D at low-resolution continent-wide, and then re-run it at high (~10 km horizontal and ~5-20 km vertical) resolution locally along the boundary of the SED, where density gradients are expected to be highest. We input the high-resolution estimates of electron density to EMPIRE to estimate the ExB drifts and field-aligned plasma velocities along the boundaries of the SED. We expect that these drivers contribute to the density structuring observed along the SED during the storm.

References: Bust, G. S., and G. Crowley (2007), Tracking of polar cap patches using data assimilation, J. Geophys. Res., 112, A05307, doi:10.1029/2005JA011597. Bust, G. S., G. Crowley, T. W. Garner, T. L. Gaussiran II, R. W. Meggs, C. N. Mitchell, P. S. J. Spencer, P. Yin, and B. Zapfe (2007), Four Dimensional GPS Imaging of Space-Weather Storms, Space Weather, 5, S02003, doi:10.1029/2006SW000237. Datta-Barua, S., G. S. Bust, G. Crowley, and N. Curtis (2009a), Neutral wind estimation from 4-D ionospheric electron density images, J. Geophys. Res., 114, A06317, doi:10.1029/2008JA014004. Datta-Barua, S., G. Bust, and G. Crowley (2009b), "Estimating Model Parameters from Ionospheric Reverse Engineering (EMPIRE)," presented at CEDAR, Santa Fe, New Mexico, July 1.

Datta-Barua, S.; Bust, G. S.; Curtis, N.; Reynolds, A.; Crowley, G.

2009-12-01

111

The Effect of Lidar Point Density on LAI Estimation

NASA Astrophysics Data System (ADS)

Leaf Area Index (LAI) is an important measure of forest health, biomass and carbon exchange, and is most commonly defined as the ratio of the leaf area to ground area. LAI is understood over large spatial scales and describes leaf properties over an entire forest, thus airborne imagery is ideal for capturing such data. Spectral metrics such as the normalized difference vegetation index (NDVI) have been used in the past for LAI estimation, but these metrics may saturate for high LAI values. Light detection and ranging (lidar) is an active remote sensing technology that emits light (most often at the wavelength 1064nm) and uses the return time to calculate the distance to intercepted objects. This yields information on three-dimensional structure and shape, which has been shown in recent studies to yield more accurate LAI estimates than NDVI. However, although lidar is a promising alternative for LAI estimation, minimum acquisition parameters (e.g. point density) required for accurate LAI retrieval are not yet well known. The objective of this study was to determine the minimum number of points per square meter that are required to describe the LAI measurements taken in-field. As part of a larger data collect, discrete lidar data were acquired by Kucera International Inc. over the Hemlock-Canadice State Forest, NY, USA in September 2012. The Leica ALS60 obtained point density of 12 points per square meter and effective ground sampling distance (GSD) of 0.15m. Up to three returns with intensities were recorded per pulse. As part of the same experiment, an AccuPAR LP-80 was used to collect LAI estimates at 25 sites on the ground. Sites were spaced approximately 80m apart and nine measurements were made in a grid pattern within a 20 x 20m site. Dominant species include Hemlock, Beech, Sugar Maple and Oak. This study has the benefit of very high-density data, which will enable a detailed map of intra-forest LAI. 
Understanding LAI at fine scales may be particularly useful in forest inventory applications and tree health evaluations. However, such high-density data is often not available over large areas. In this study we progressively downsampled the high-density discrete lidar data and evaluated the effect on LAI estimation. The AccuPAR data was used as validation and results were compared to existing LAI metrics. This will enable us to determine the minimum point density required for airborne lidar LAI retrieval. Preliminary results show that the data may be substantially thinned to estimate site-level LAI. More detailed results will be presented at the conference.

Cawse-Nicholson, K.; van Aardt, J. A.; Romanczyk, P.; Kelbe, D.; Bandyopadhyay, M.; Yao, W.; Krause, K.; Kampe, T. U.

2013-12-01

112

Can modeling improve estimation of desert tortoise population densities?

The federally listed desert tortoise (Gopherus agassizii) is currently monitored using distance sampling to estimate population densities. Distance sampling, as with many other techniques for estimating population density, assumes that it is possible to quantify the proportion of animals available to be counted in any census. Because desert tortoises spend much of their life in burrows, and the proportion of tortoises in burrows at any time can be extremely variable, this assumption is difficult to meet. This proportion of animals available to be counted is used as a correction factor (g0) in distance sampling and has been estimated from daily censuses of small populations of tortoises (6-12 individuals). These censuses are costly and produce imprecise estimates of g0 due to small sample sizes. We used data on tortoise activity from a large (N = 150) experimental population to model activity as a function of the biophysical attributes of the environment, but these models did not improve the precision of estimates from the focal populations. Thus, to evaluate how much of the variance in tortoise activity is apparently not predictable, we assessed whether activity on any particular day can predict activity on subsequent days with essentially identical environmental conditions. Tortoise activity was only weakly correlated on consecutive days, indicating that behavior was not repeatable or consistent among days with similar physical environments. © 2007 by the Ecological Society of America.

Nussear, K. E.; Tracy, C. R.

2007-01-01

113

A contact algorithm for density-based load estimation.

An algorithm, which includes contact interactions within a joint, has been developed to estimate the dominant loading patterns in joints based on the density distribution of bone. The algorithm is applied to the proximal femur of a chimpanzee, gorilla and grizzly bear and is compared to the results obtained in a companion paper that uses a non-contact (linear) version of the density-based load estimation method. Results from the contact algorithm are consistent with those from the linear method. While the contact algorithm is substantially more complex than the linear method, it has some added benefits. First, since contact between the two interacting surfaces is incorporated into the load estimation method, the pressure distributions selected by the method are more likely indicative of those found in vivo. Thus, the pressure distributions predicted by the algorithm are more consistent with the in vivo loads that were responsible for producing the given distribution of bone density. Additionally, the relative positions of the interacting bones are known for each pressure distribution selected by the algorithm. This should allow the pressure distributions to be related to specific types of activities. The ultimate goal is to develop a technique that can predict dominant joint loading patterns and relate these loading patterns to specific types of locomotion and/or activities. PMID:16439233

Bona, Max A; Martin, Larry D; Fischer, Kenneth J

2006-01-01

114

Semiautomatic estimation of breast density with DM-Scan software.

OBJECTIVE: To evaluate the reproducibility of the calculation of breast density with DM-Scan software, which is based on the semiautomatic segmentation of fibroglandular tissue, and to compare it with the reproducibility of estimation by visual inspection. MATERIAL AND METHODS: The study included 655 direct digital mammograms acquired using craniocaudal projections. Three experienced radiologists analyzed the density of the mammograms using DM-Scan, and the inter- and intra-observer agreement between pairs of radiologists for the Boyd and BI-RADS(®) scales were calculated using the intraclass correlation coefficient. The Kappa index was used to compare the inter- and intra-observer agreements with those obtained previously for visual inspection in the same set of images. RESULTS: For visual inspection, the mean interobserver agreement was 0.876 (95% CI: 0.873-0.879) on the Boyd scale and 0.823 (95% CI: 0.818-0.829) on the BI-RADS(®) scale. The mean intraobserver agreement was 0.813 (95% CI: 0.796-0.829) on the Boyd scale and 0.770 (95% CI: 0.742-0.797) on the BI-RADS(®) scale. For DM-Scan, the mean inter- and intra-observer agreement was 0.92, considerably higher than the agreement for visual inspection. CONCLUSION: The semiautomatic calculation of breast density using DM-Scan software is more reliable and reproducible than visual estimation and reduces the subjectivity and variability in determining breast density. PMID:23489767

Martínez Gómez, I; Casals El Busto, M; Antón Guirao, J; Ruiz Perales, F; Llobet Azpitarte, R

2013-03-01

115

Estimating black bear density using DNA data from hair snares

DNA-based mark-recapture has become a methodological cornerstone of research focused on bear species. The objective of such studies is often to estimate population size; however, doing so is frequently complicated by movement of individual bears. Movement affects the probability of detection and the assumption of closure of the population required in most models. To mitigate the bias caused by movement of individuals, population size and density estimates are often adjusted using ad hoc methods, including buffering the minimum polygon of the trapping array. We used a hierarchical, spatial capture-recapture model that contains explicit components for the spatial-point process that governs the distribution of individuals and their exposure to (via movement), and detection by, traps. We modeled detection probability as a function of each individual's distance to the trap and an indicator variable for previous capture to account for possible behavioral responses. We applied our model to a 2006 hair-snare study of a black bear (Ursus americanus) population in northern New York, USA. Based on the microsatellite marker analysis of collected hair samples, 47 individuals were identified. We estimated mean density at 0.20 bears/km2. A positive estimate of the indicator variable suggests that bears are attracted to baited sites; therefore, including a trap-dependence covariate is important when using bait to attract individuals. Bayesian analysis of the model was implemented in WinBUGS, and we provide the model specification. The model can be applied to any spatially organized trapping array (hair snares, camera traps, mist nets, etc.) to estimate density and can also account for heterogeneity and covariate information at the trap or individual level. © 2010 The Wildlife Society.
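Detection probability as a function of distance to a trap is commonly modeled in spatial capture-recapture with a half-normal form; the sketch below adds a multiplicative previous-capture effect as a stand-in for the abstract's trap-dependence covariate. All parameter values and names here are illustrative assumptions, not estimates from the study:

```python
import math

# Half-normal detection function with a simple behavioral response:
# detection decays with distance from the activity center to the trap,
# and a previous capture scales the probability (beta > 1 mimics the
# "attracted to bait" effect reported in the abstract).

def detection_prob(distance, prev_captured, p0=0.3, sigma=1.5, beta=1.5):
    """Per-occasion detection probability (p0, sigma, beta are made up)."""
    base = p0 * math.exp(-distance ** 2 / (2 * sigma ** 2))
    return min(1.0, base * (beta if prev_captured else 1.0))
```

In the actual model the behavioral effect would typically enter on a link scale and all parameters would be estimated from the capture histories; this sketch only shows the shape of the detection surface.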

Gardner, B.; Royle, J. A.; Wegan, M. T.; Rainbolt, R. E.; Curtis, P. D.

2010-01-01

116

Thermospheric atomic oxygen density estimates using the EISCAT Svalbard Radar

NASA Astrophysics Data System (ADS)

The unique coupling of the ionized and neutral atmosphere through particle collisions allows an indirect study of the neutral atmosphere through measurements of ionospheric plasma parameters. We estimate the neutral density of the upper thermosphere above ~250 km with the EISCAT Svalbard Radar (ESR) using the year-long operations of the first year of the International Polar Year (IPY), from March 2007 to February 2008. The simplified momentum equation for atomic oxygen ions is used for field-aligned motion in the steady state, taking into account the opposing forces of the plasma pressure gradient and gravity only. This restricts the technique to quiet geomagnetic periods, which applies to most of the IPY during the recent very quiet solar minimum. Comparison with the MSIS model shows that at 250 km, close to the F-layer peak, the ESR estimates of the atomic oxygen density are typically a factor of 1.2 smaller than the MSIS model when data are averaged over the IPY. Differences between MSIS and ESR estimates are also found to depend on both season and magnetic disturbance, with the largest discrepancies noted during winter months. At 350 km, very close agreement with the MSIS model is achieved without evidence of seasonal dependence. This altitude was also close to the orbital altitude of the CHAMP satellite during the IPY, allowing a comparison of in-situ measurements and radar estimates of the neutral density. Using a total of 10 in-situ passes by the CHAMP satellite above Svalbard, we show that the estimates made using this technique fall within the error bars of the measurements. We show that the method works best in the height range ~300-400 km, where our assumptions are satisfied, and we anticipate that the technique should be suitable for future thermospheric studies related to geomagnetic storm activity and long-term climate change.

Vickers, H.; Kosch, M. J.; Sutton, E. K.; Ogawa, Y.; La Hoz, C.

2012-12-01

117

Volume estimation of multi-density nodules with thoracic CT

NASA Astrophysics Data System (ADS)

The purpose of this work was to quantify the effect of surrounding density on the volumetric assessment of lung nodules in a phantom CT study. Eight synthetic multi-density nodules were manufactured by enclosing spherical cores in larger spheres of double the diameter and with a different uniform density. Different combinations of outer/inner diameters (20/10mm, 10/5mm) and densities (100HU/-630HU, 10HU/-630HU, -630HU/100HU, -630HU/-10HU) were created. The nodules were placed within an anthropomorphic phantom and scanned with a 16-detector row CT scanner. Ten repeat scans were acquired using exposures of 20, 100, and 200mAs, slice collimations of 16x0.75mm and 16x1.5mm, and pitch of 1.2, and were reconstructed with varying slice thicknesses (three for each collimation) using two reconstruction filters (medium and standard). The volumes of the inner nodule cores were estimated from the reconstructed CT data using a matched-filter approach with templates modeling the characteristics of the multi-density objects. Volume estimation of the inner nodule was assessed using percent bias (PB) and the standard deviation of percent error (SPE). The true volumes of the inner nodules were measured using micro CT imaging. Results show PB values ranging from -12.4 to 2.3% and SPE values ranging from 1.8 to 12.8%. This study indicates that the volume of multi-density nodules can be measured with relatively small percent bias (on the order of +/-12% or less) when accounting for the properties of surrounding densities. These findings can provide valuable information for understanding bias and variability in clinical measurements of nodules that also include local biological changes such as inflammation and necrosis.
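The two summary metrics used above, PB and SPE, reduce to a few lines of code. A minimal sketch, assuming the conventional definition of percent error (the abstract does not spell out the formulas):

```python
import numpy as np

def percent_bias_and_spe(estimated, true):
    """Percent bias (PB) and standard deviation of percent error (SPE),
    the two metrics used to assess the inner-nodule volume estimates.
    Assumed convention: percent error e_i = 100 * (est_i - true_i) / true_i,
    PB is its mean, SPE its sample standard deviation."""
    est = np.asarray(estimated, dtype=float)
    tru = np.asarray(true, dtype=float)
    e = 100.0 * (est - tru) / tru
    return e.mean(), e.std(ddof=1)
```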

Gavrielides, Marios A.; Li, Qin; Zeng, Rongping; Myers, Kyle J.; Sahiner, Berkman; Petrick, Nicholas

2014-03-01

118

Differentiating Between Images Using Wavelet-Based Transforms: A Comparative Study

We propose statistical image models for wavelet-based transforms, investigate their use, and compare their relative merits within the context of digital image forensics. We consider the problems of 1) differentiating computer graphics images from photographic images, 2) source camera and source scanner identification, and 3) source artist identification from digital painting samples. The features obtained from ridgelet and

Levent Ozparlak; Ismail Avcibas

2011-01-01

119

Bivariate shrinkage functions for wavelet-based denoising exploiting interscale dependency

Most simple nonlinear thresholding rules for wavelet-based denoising assume that the wavelet coefficients are independent. However, wavelet coefficients of natural images have significant dependencies. We consider in detail only the dependencies between the coefficients and their parents. For this purpose, new non-Gaussian bivariate distributions are proposed, and corresponding nonlinear threshold functions (shrinkage functions) are derived from the models using Bayesian
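The bivariate shrinkage rule this abstract refers to is widely reproduced in the denoising literature. A minimal NumPy sketch of the standard child-parent MAP shrinkage function, under the usual Gaussian-noise assumptions (sigma_n is the noise standard deviation, sigma the local signal standard deviation):

```python
import numpy as np

def bivariate_shrink(w1, wp, sigma_n, sigma):
    """Shrink a wavelet coefficient w1 given its parent wp.

    The attenuation depends on the joint magnitude of child and parent:
    when both are small relative to the noise level the coefficient is
    zeroed, when they are large it is only mildly reduced."""
    r = np.sqrt(np.asarray(w1) ** 2 + np.asarray(wp) ** 2)
    factor = np.maximum(r - np.sqrt(3.0) * sigma_n**2 / sigma, 0.0)
    # Guard against division by zero when both coefficients vanish
    return factor / np.maximum(r, 1e-12) * w1
```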

Levent Sendur; Ivan W. Selesnick

2002-01-01

120

DSP Wavelet-Based Tool for Monitoring Transformer Inrush Currents and Internal Faults

This paper proposes a wavelet-based technique for monitoring nonstationary variations in order to distinguish between transformer inrush currents and transformer internal faults. The proposed technique utilizes a small set of coefficients of the local maxima that represent most of the signal's energy: only one coefficient at each resolution level is utilized to measure the magnitude of the variation in the

A. M. Gaouda; M. M. A. Salama

2010-01-01

121

Multiresolution analysis on zero-dimensional Abelian groups and wavelets bases

For a locally compact zero-dimensional group (G,+{sup .}), we build a multiresolution analysis and put forward an algorithm for constructing orthogonal wavelet bases. A special case is indicated when a wavelet basis is generated from a single function through contractions, translations and exponentiations. Bibliography: 19 titles.

Lukomskii, Sergei F [Saratov State University, Saratov (Russian Federation)

2010-06-29

122

Wavelet based spectral finite element modelling and detection of de-lamination in composite beams

In this paper, a model for a composite beam with an embedded de-lamination is developed using the wavelet based spectral finite element (WSFE) method, particularly for damage detection using wave propagation analysis. The simulated responses are used as surrogate experimental results for the inverse problem of detection of damage using wavelet filtering. The WSFE technique is very similar to the fast Fourier

M. Mitra; S. Gopalakrishnan

2006-01-01

123

Thermospheric atomic oxygen density estimates using the EISCAT Svalbard Radar

NASA Astrophysics Data System (ADS)

Coupling between the ionized and neutral atmosphere through particle collisions allows an indirect study of the neutral atmosphere through measurements of ionospheric plasma parameters. We estimate the neutral density of the upper thermosphere above ~250 km with the European Incoherent Scatter Svalbard Radar (ESR) using the year-long operations of the International Polar Year from March 2007 to February 2008. The simplified momentum equation for atomic oxygen ions is used for field-aligned motion in the steady state, taking into account the opposing forces of plasma pressure gradients and gravity only. This restricts the technique to quiet geomagnetic periods, which applies to most of the International Polar Year during the recent very quiet solar minimum. The method works best in the height range ~300-400 km where our assumptions are satisfied. Differences between Mass Spectrometer and Incoherent Scatter and ESR estimates are found to vary with altitude, season, and magnetic disturbance, with the largest discrepancies during the winter months. A total of 9 out of 10 in situ passes by the CHAMP satellite above Svalbard at 350 km altitude agree with the ESR neutral density estimates to within the error bars of the measurements during quiet geomagnetic periods.

Vickers, H.; Kosch, M. J.; Sutton, E.; Ogawa, Y.; La Hoz, C.

2013-03-01

124

Change-in-ratio density estimator for feral pigs is less biased than closed mark-recapture estimates

Closed-population capture-mark-recapture (CMR) methods can produce biased density estimates for species with low or heterogeneous detection probabilities. In an attempt to address such biases, we developed a density-estimation method based on the change in ratio (CIR) of survival between two populations where survival, calculated using an open-population CMR model, is known to differ. We used our method to estimate density for a feral pig (Sus scrofa) population on Fort Benning, Georgia, USA. To assess its validity, we compared it to an estimate of the minimum density of pigs known to be alive and two estimates based on closed-population CMR models. Comparison of the density estimates revealed that the CIR estimator produced a density estimate with low precision that was reasonable with respect to minimum known density. By contrast, density point estimates using the closed-population CMR models were less than the minimum known density, consistent with biases created by low and heterogeneous capture probabilities for species like feral pigs that may occur in low density or are difficult to capture. Our CIR density estimator may be useful for tracking broad-scale, long-term changes in species, such as large cats, for which closed CMR models are unlikely to work. © CSIRO 2008.

Hanson, L. B.; Grand, J. B.; Mitchell, M. S.; Jolley, D. B.; Sparklin, B. D.; Ditchkoff, S. S.

2008-01-01

125

The probabilistic estimate of the solvent content (Matthews probability) was first introduced in 2003. Given that the Matthews probability is based on prior information, revisiting the empirical foundation of this widely used solvent-content estimate is appropriate. The parameter set for the original Matthews probability distribution function employed in MATTPROB has been updated after ten years of rapid PDB growth. A new nonparametric kernel density estimator has been implemented to calculate the Matthews probabilities directly from empirical solvent-content data, thus avoiding the need to revise the multiple parameters of the original binned empirical fit function. The influence and dependency of other possible parameters determining the solvent content of protein crystals have been examined. Detailed analysis showed that resolution is the primary and dominating model parameter correlated with solvent content. Modifications of protein specific density for low molecular weight have no practical effect, and there is no correlation with oligomerization state. A weak, and in practice irrelevant, dependency on symmetry and molecular weight is present, but cannot be satisfactorily explained by simple linear or categorical models. The Bayesian argument that the observed resolution represents only a lower limit for the true diffraction potential of the crystal is maintained. The new kernel density estimator is implemented as the primary option in the MATTPROB web application at http://www.ruppweb.org/mattprob/. PMID:24914969
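The nonparametric kernel density estimator described above can be illustrated with a minimal 1-D Gaussian KDE; this is a generic sketch, not MATTPROB's actual estimator or bandwidth rule:

```python
import numpy as np

def gaussian_kde_1d(samples, bandwidth):
    """Return a function estimating the probability density of 1-D data
    (e.g. solvent-content fractions) as a sum of Gaussian kernels centered
    on the observed samples."""
    samples = np.asarray(samples, dtype=float)

    def pdf(x):
        x = np.atleast_1d(np.asarray(x, dtype=float))
        z = (x[:, None] - samples[None, :]) / bandwidth
        k = np.exp(-0.5 * z**2) / np.sqrt(2.0 * np.pi)
        # Average of kernels, normalized by the bandwidth
        return k.sum(axis=1) / (len(samples) * bandwidth)

    return pdf
```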

Weichenberger, Christian X; Rupp, Bernhard

2014-06-01

126

Super Learner Based Conditional Density Estimation with Application to Marginal Structural Models

In this paper, we present a histogram-like estimator of a conditional density that uses cross-validation to estimate the histogram probabilities, as well as the optimal number and position of the bins. This estimator is an alternative to kernel density estimators when the dimension of the covariate vector is large. We demonstrate its applicability to estimation of Marginal Structural Model (MSM)

Mark J. van der Laan

2011-01-01

127

Wavelet-based image denoising using variance field diffusion

NASA Astrophysics Data System (ADS)

Wavelet shrinkage is an image restoration technique based on the concept of thresholding the wavelet coefficients. The key challenge of wavelet shrinkage is to find an appropriate threshold value, which is typically controlled by the signal variance. To tackle this challenge, a new image restoration approach is proposed in this paper by using a variance field diffusion, which can provide more accurate variance estimation. Experimental results are provided to demonstrate the superior performance of the proposed approach.
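Wavelet shrinkage as described above amounts to thresholding coefficients. Below is a minimal sketch of the generic soft-thresholding rule with a variance-driven (BayesShrink-style) threshold; the paper's variance field diffusion is a way of estimating the signal variance more accurately and is not reproduced here:

```python
import numpy as np

def soft_threshold(w, t):
    """Soft thresholding: coefficients with magnitude below t are zeroed,
    larger ones are shrunk toward zero by t."""
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def bayes_shrink_threshold(sigma_noise, sigma_signal):
    """A common variance-driven threshold (an assumption here, not the
    paper's diffusion method): noise variance over signal std."""
    return sigma_noise**2 / max(sigma_signal, 1e-12)
```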

Liu, Zhenyu; Tian, Jing; Chen, Li; Wang, Yongtao

2012-04-01

128

A wavelet based method for SPECT reconstruction with non-uniform attenuation

NASA Astrophysics Data System (ADS)

SPECT (single photon emission computed tomography) is a tomography technique that can reveal information about metabolic activity in the body and improve clinical diagnosis. In SPECT, because of photoelectric absorption and Compton scattering, the emitted gamma photons are attenuated inside the body before arriving at the detector. The goal of quantitative SPECT reconstruction is to obtain an accurate reconstructed image of the radioactivity distribution in the area of interest of a human body, so compensation for non-uniform attenuation is necessary in quantitative SPECT reconstruction. In this paper, based on the explicit inversion formula for the attenuated Radon transform discovered by R. Novikov, we present a wavelet based SPECT reconstruction algorithm with non-uniform attenuation. The wavelet transform has the characteristics of multi-resolution analysis and localized analysis, and these characteristics can be applied to de-noising and localized reconstruction. Simulation results show that our wavelet based SPECT reconstruction algorithm is accurate.

Wen, Junhai; Kong, Lingkai

2007-03-01

129

Application of Wavelet Based Denoising for T-Wave Alternans Analysis in High Resolution ECG Maps

NASA Astrophysics Data System (ADS)

T-wave alternans (TWA) allows for identification of patients at an increased risk of ventricular arrhythmia. Stress test, which increases heart rate in controlled manner, is used for TWA measurement. However, the TWA detection and analysis are often disturbed by muscular interference. The evaluation of wavelet based denoising methods was performed to find optimal algorithm for TWA analysis. ECG signals recorded in twelve patients with cardiac disease were analyzed. In seven of them significant T-wave alternans magnitude was detected. The application of wavelet based denoising method in the pre-processing stage increases the T-wave alternans magnitude as well as the number of BSPM signals where TWA was detected.

Janusek, D.; Kania, M.; Zaczek, R.; Zavala-Fernandez, H.; Zbie?, A.; Opolski, G.; Maniewski, R.

2011-01-01

130

We report the first application of wavelet-based denoising (noise removal) methods to time-domain box-car fluorescence lifetime imaging microscopy (FLIM) images and compare the results to novel total variation (TV) denoising methods. Methods were tested first on artificial images and then applied to low-light live-cell images. Relative to undenoised images, TV methods could improve lifetime precision up to 10-fold in artificial images, while preserving the overall accuracy of lifetime and amplitude values of a single-exponential decay model and improving local lifetime fitting in live-cell images. Wavelet-based methods were at least 4-fold faster than TV methods, but could introduce significant inaccuracies in recovered lifetime values. The denoising methods discussed can potentially enhance a variety of FLIM applications, including live-cell, in vivo animal, or endoscopic imaging studies, especially under challenging imaging conditions such as low-light or fast video-rate imaging.

Chang, Ching-Wei; Mycek, Mary-Ann

2014-01-01

131

Estimating tropical-forest density profiles from multibaseline interferometric SAR

NASA Technical Reports Server (NTRS)

Vertical profiles of forest density are potentially robust indicators of forest biomass, fire susceptibility and ecosystem function. Tropical forests, which are among the most dense and complicated targets for remote sensing, contain about 45% of the world's biomass. Remote sensing of tropical forest structure is therefore an important component of global biomass and carbon monitoring. This paper shows preliminary results of a multibaseline interferometric SAR (InSAR) experiment over primary, secondary, and selectively logged forests at La Selva Biological Station in Costa Rica. The profile shown results from inverse Fourier transforming 8 of the 18 baselines acquired. A profile is shown compared to lidar and field measurements. Results are highly preliminary and for qualitative assessment only. Parameter estimation will eventually replace Fourier inversion as the means of producing profiles.

Treuhaft, Robert; Chapman, Bruce; dos Santos, Joao Roberto; Dutra, Luciano; Goncalves, Fabio; da Costa Freitas, Corina; Mura, Jose Claudio; de Alencastro Graca, Paulo Mauricio

2006-01-01

132

State-of-the-Art and Trends in Scalable Video Compression With Wavelet-Based Approaches

Scalable video coding (SVC) differs from traditional single-point approaches mainly because it allows encoding, in a single bit stream, several working points corresponding to different qualities, picture sizes and frame rates. This work describes the current state-of-the-art in SVC, focusing on wavelet-based motion-compensated approaches (WSVC). It reviews the individual components that have been designed to address the problem

Nicola Adami; Alberto Signoroni; Riccardo Leonardi

2007-01-01

133

This paper presents the development of a wavelet-based scheme, for distinguishing between transformer inrush currents and power system fault currents, which proved to provide a reliable, fast, and computationally efficient tool. The operating time of the scheme is less than half the power frequency cycle (based on a 5-kHz sampling rate). In this work, a wavelet transform concept is presented.

Omar A. S. Youssef

2003-01-01

134

Wavelet-Based Image Reconstruction for Hard-Field Tomography With Severely Limited Data

We introduce a new wavelet-based hard-field image reconstruction method that is well suited for data inversion of limited path-integral data obtained from a geometrically sparse sensor array. It is applied to a chemical species tomography system based on near-IR spectroscopic absorption measurements along an irregular array of only 27 paths. This system can be classified as producing severely limited data, where both the number of viewing angles and the number of measurements are small. As shown in our previous work, the Landweber

N. Terzija; H. McCann

2011-01-01

135

Denoising Speech Signals for Digital Hearing Aids: A Wavelet Based Approach

This study describes research developing a wavelet-based, single-microphone noise reduction algorithm for use in digital hearing aids. The approach reduces noise by expanding the observed speech in a series of implicitly filtered, shift-invariant wavelet packet basis vectors. The implicit filtering operation allows the method to reduce correlated noise while retaining low-level high-frequency spectral components that are necessary for

Nathaniel Whitmal; Janet Rutledge; Jonathan Cohen

136

Wavelet-based efficient simulation of electromagnetic transients in a lightning protection system

In this paper, a wavelet-based efficient simulation of electromagnetic transients in lightning protection systems (LPS) is presented. The analysis of electromagnetic transients is carried out by employing the thin-wire electric field integral equation in the frequency domain. In order to easily handle the boundary conditions of the integral equation, semiorthogonal compactly supported spline wavelets, constructed for the bounded interval [0,1],

Guido Ala; Maria L. Di Silvestre; Elisa Francomano; Adele Tortorici

2003-01-01

137

Nonseparable Wavelet-based Cone-beam Reconstruction in 3-D Rotational Angiography

In this paper, we propose a new wavelet-based reconstruction method suited to three-dimensional (3-D) cone-beam (CB) tomography. It is derived from the Feldkamp algorithm and is valid for the same geometrical conditions. The demonstration is done in the framework of nonseparable wavelets and ideally requires radial wavelets. The proposed inversion formula yields a filtered backprojection algorithm, but the

Stéphane Bonnet; Françoise Peyrin; Francis Turjman; Rémy Prost

2003-01-01

138

Digital implementation of a wavelet-based event detector for cardiac pacemakers

This paper presents a digital hardware implementation of a novel wavelet-based event detector suitable for the next generation of cardiac pacemakers. Significant power savings are achieved by introducing a second operation mode that shuts down 2/3 of the hardware for long time periods when the pacemaker patient is not exposed to noise, while not degrading performance. Due to a 0.13-µm

Joachim Neves Rodrigues; Thomas Olsson; Leif Sörnmo; Viktor Öwall

2005-01-01

139

A wavelet-based method for simulation of two-dimensional elastic wave propagation

A wavelet-based method is introduced for the modelling of elastic wave propagation in 2-D media. The spatial derivative operators in the elastic wave equations are treated through wavelet transforms in a physical domain. The resulting second-order differential equations for time evolution are then solved via a system of first-order differential equations using a displacement-velocity formulation. With the combined aid of

Tae-Kyung Hong; B. L. N. Kennett

2002-01-01

140

A 2D wavelet-based spectral finite element method for elastic wave propagation

A wavelet-based spectral finite element method (WSFEM) is presented that may be used for an accurate and efficient analysis of elastic wave propagation in two-dimensional (2D) structures. The approach is characterised by a temporal transformation of the governing equations to the wavelet domain using a wavelet-Galerkin approach, and subsequently performing the spatial discretisation in the wavelet domain with the finite

L. Pahlavan; C. Kassapoglou; A. S. J. Suiker; Z. Gürdal

2012-01-01

141

WAVELET-BASED MOMENT METHOD AND PHYSICAL OPTICS USE ON LARGE REFLECTOR ANTENNAS

With the recent advances in the communication and satellite industry, there is a great need for efficient reflector antenna systems; therefore, more powerful techniques are required for the analysis and design of new reflector antennas in a quick and accurate manner. This work aims first to introduce the wavelet-based moment method in 3D, as a recent and powerful numerical technique, which can be applied

M. Lashab; C. Zebiri; F. Benabdelaziz

2008-01-01

142

Implementation of a Wavelet-Based MRPID Controller for Benchmark Thermal System

This paper presents a comparative analysis of intelligent controllers for temperature control of a benchmark thermal system. The performance of the proposed wavelet-based multiresolution proportional-integral-derivative (MRPID) controller, which can also be described as a multiresolution wavelet controller, is compared with the conventional PID controller and the adaptive neural-network (NN) controller. In the proposed MRPID temperature controller, the

M. A. S. K. Khan; M. Azizur Rahman

2010-01-01

143

Interactive region of interest scalability for wavelet based scalable video coder

The current wavelet-based scalable video coder supports the quality, spatial and temporal scalabilities required for recent multimedia applications. One of the key scalability features required by the Joint Video Team (JVT) is Interactive Region of Interest (IROI) scalability. This paper proposes a technique to interactively extract ROIs from scalable sub bit-streams. At the encoder, after sub-band decomposition the 3D wavelet tree is

A. K. Karunakar; M. M. Manohara Pai

2011-01-01

144

Wavelet-Based Image Compression with Polygon-Shaped Region of Interest

A wavelet-based lossy-to-lossless image compression technique with a polygon-shaped ROI function is proposed. Firstly, split and mergence algorithms are proposed to separate concave ROIs into smaller convex ROIs. Secondly, row-order scan and adaptive arithmetic coding are used to encode the pixels in ROIs. Thirdly, a lifting integer wavelet transform is used to decompose the original image, in which the pixels in

Yao-tien Chen; Din-chang Tseng; Pao-chi Chang

2006-01-01

145

Target Identification Using Wavelet-based Feature Extraction and Neural Network Classifiers

Classification of combat vehicle types based on acoustic and seismic signals remains a challenging task due to temporal and frequency variability that exists in these passively collected vehicle indicators. This paper presents the results of exploiting the wavelet characteristic of projecting signal dynamics to an efficient temporal/scale (i.e. frequency) decomposition and extracting from that process a set of wavelet-based features

Jose E. Lopez; Hung Han Chen; Jennifer Saulnier

146

The effectiveness of tape playbacks in estimating Black Rail densities

Tape playback is often the only efficient technique to survey for secretive birds. We measured the vocal responses and movements of radio-tagged black rails (Laterallus jamaicensis; 26 M, 17 F) to playback of vocalizations at 2 sites in Florida during the breeding seasons of 1992-95. We used coefficients from logistic regression equations to model the probability of a response conditional on the birds' sex, nesting status, distance to the playback source, and time of survey. With a probability of 0.811, nonnesting male black rails were most likely to respond to playback, while nesting females were the least likely to respond (probability = 0.189). We used linear regression to determine daily, monthly and annual variation in response from weekly playback surveys along a fixed route during the breeding seasons of 1993-95. Significant sources of variation in the regression model were month (F(3,48) = 3.89, P = 0.014), year (F(2,48) = 9.37, P < 0.001), temperature (F(1,48) = 5.44, P = 0.024), and month x year (F(5,48) = 2.69, P = 0.031). The model was highly significant (P < 0.001) and explained 54% of the variation of mean response per survey period (r2 = 0.54). We combined response probability data from radio-tagged black rails with playback survey route data to provide a density estimate of 0.25 birds/ha for the St. Johns National Wildlife Refuge. The relation between the number of black rails heard during playback surveys and the actual number present was influenced by a number of variables. We recommend caution when making density estimates from tape playback surveys.
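The final step, combining a response probability with survey counts to obtain a density, can be sketched as follows (the function and its inputs are illustrative, not the authors' exact model):

```python
def playback_density(birds_heard, area_ha, p_response):
    """Adjust a raw playback count by the estimated probability that a
    present bird responds, to get birds per hectare. If only half of the
    birds respond, the raw count underestimates density by a factor of two."""
    return birds_heard / (area_ha * p_response)
```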

Legare, M.; Eddleman, W.R.; Buckley, P.A.; Kelly, C.

1999-01-01

147

Column density estimation: Tree-based method implementation

NASA Astrophysics Data System (ADS)

Radiative transfer plays a crucial role in several astrophysical processes. In particular, for the star formation problem it is well established that stars form in the densest and coolest regions of molecular clouds, so understanding the interstellar cycle becomes crucial. The physics of dense gas requires knowledge of the UV radiation that regulates the physics and the chemistry within the molecular cloud. Numerical modeling requires the calculation of column densities in any direction for each resolution element. In numerical simulations the cost of solving the radiative transfer problem is of the order of N^(5/3), where N is the number of resolution elements. The exact calculation is in general extremely expensive in terms of CPU time for relatively large simulations and impractical in parallel computing. We present our tree-based method for estimating column densities and the attenuation factor for the UV field. The method is inspired by the fact that any distant cell subtends a small angle and therefore its contribution to the screening will be diluted. This method is suitable for parallel computing, and no communication is needed between different CPUs. It has been implemented into the RAMSES code, a grid-based solver with adaptive mesh refinement (AMR). We present the results of two tests and a discussion of the accuracy and the performance of this method. We show that the UV screening affects mainly the dense parts of molecular clouds, changing the local Jeans mass and therefore affecting the fragmentation.

Valdivia, Valeska

2013-07-01

148

Wavelet-based nearest-regularized subspace for noise-robust hyperspectral image classification

NASA Astrophysics Data System (ADS)

A wavelet-based nearest-regularized-subspace classifier is proposed for noise-robust hyperspectral image (HSI) classification. The nearest-regularized subspace, coupling nearest-subspace classification with a distance-weighted Tikhonov regularization, was originally designed to consider only the original spectral bands. Recent research found that the multiscale wavelet features [e.g., extracted by redundant discrete wavelet transformation (RDWT)] of each hyperspectral pixel are potentially very useful and less sensitive to noise. An integration of wavelet-based features and the nearest-regularized-subspace classifier to improve the classification performance in noisy environments is proposed. Specifically, the wealth of noise-robust features provided by RDWT based on the hyperspectral spectrum is employed in a decision-fusion system or as preprocessing for the nearest-regularized-subspace (NRS) classifier. Improved performance of the proposed method over conventional approaches, such as the support vector machine, is shown by testing several HSIs. For example, the NRS classifier performed with an accuracy of 65.38% for the AVIRIS Indian Pines data with 75 training samples per class under noisy conditions (signal-to-noise ratio = 36.87 dB), while the wavelet-based classifier obtained an accuracy of 71.60%, an improvement of approximately 6%.
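The distance-weighted Tikhonov coupling described above can be sketched per class as follows (an illustrative reading of the NRS idea, not the authors' exact formulation):

```python
import numpy as np

def nrs_residual(y, X, lam):
    """Residual of approximating test pixel y in the subspace spanned by one
    class's training samples (columns of X), with a Tikhonov penalty whose
    diagonal matrix Gamma is weighted by the distance from y to each training
    sample, so dissimilar samples receive small coefficients. The class
    minimizing this residual would be chosen."""
    d = np.linalg.norm(X - y[:, None], axis=0)   # per-sample biasing distances
    G = np.diag(d)
    alpha = np.linalg.solve(X.T @ X + lam * (G.T @ G), X.T @ y)
    return np.linalg.norm(y - X @ alpha)
```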

Li, Wei; Liu, Kui; Su, Hongjun

2014-01-01

149

Comparative study of different wavelet based neural network models for rainfall-runoff modeling

NASA Astrophysics Data System (ADS)

The use of wavelet transformation in rainfall-runoff modeling has become popular because of its ability to deal simultaneously with both the spectral and the temporal information contained within time series data. The selection of an appropriate wavelet function plays a crucial role in the successful implementation of wavelet based rainfall-runoff artificial neural network models, as it can lead to further enhancement in model performance. The present study was therefore conducted to evaluate the effects of 23 mother wavelet functions on the performance of hybrid wavelet based artificial neural network rainfall-runoff models. The hybrid Multilayer Perceptron Neural Network (MLPNN) and Radial Basis Function Neural Network (RBFNN) models are developed in this study using both the continuous wavelet and the discrete wavelet transformation types. The performances of the 92 developed wavelet based neural network models with all 23 mother wavelet functions are compared with those of neural network models developed without wavelet transformations. It is found that, among all the models tested, the discrete wavelet transform multilayer perceptron neural network (DWTMLPNN) and the discrete wavelet transform radial basis function (DWTRBFNN) models at decomposition level nine with the db8 wavelet function have the best performance. The results also show that pre-processing of input rainfall data by the wavelet transformation can significantly increase the performance of the MLPNN and the RBFNN rainfall-runoff models.
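The wavelet pre-processing step, splitting the rainfall series into sub-series before feeding the neural network, can be illustrated with a single Haar decomposition level (the study's best models used db8 at level nine; Haar is used here only to keep the sketch short):

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar discrete wavelet transform: the input series is
    split into an approximation (low-frequency) sub-series and a detail
    (high-frequency) sub-series, which can then be fed to the network as
    separate inputs. Assumes an even-length input."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail coefficients
    return a, d
```

Because the transform is orthogonal, the energy of the series is preserved across the two sub-series.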

Shoaib, Muhammad; Shamseldin, Asaad Y.; Melville, Bruce W.

2014-07-01

150

Wavelet-Based Real-Time Diagnosis of Complex Systems

NASA Technical Reports Server (NTRS)

A new method of robust, autonomous real-time diagnosis of a time-varying complex system (e.g., a spacecraft, an advanced aircraft, or a process-control system) is presented here. It is based upon the characterization and comparison of (1) the execution of software, as reported by discrete data, and (2) data from sensors that monitor the physical state of the system, such as performance sensors or similar quantitative time-varying measurements. By taking account of the relationship between execution of, and the responses to, software commands, this method satisfies a key requirement for robust autonomous diagnosis, namely, ensuring that control is maintained and followed. Such monitoring of control software requires that estimates of the state of the system, as represented within the control software itself, are representative of the physical behavior of the system. In this method, data from sensors and discrete command data are analyzed simultaneously and compared to determine their correlation. If the sensed physical state of the system differs from the software estimate (see figure) or if the system fails to perform a transition as commanded by software, or such a transition occurs without the associated command, the system has experienced a control fault. This method provides a means of detecting such divergent behavior and automatically generating an appropriate warning.

Gulati, Sandeep; Mackey, Ryan

2003-01-01

151

Hidden Markov models for wavelet-based blind source separation.

In this paper, we consider the problem of blind source separation in the wavelet domain. We propose a Bayesian estimation framework for the problem where different models of the wavelet coefficients are considered: the independent Gaussian mixture model, the hidden Markov tree model, and the contextual hidden Markov field model. For each of the three models, we give expressions for the posterior laws and propose appropriate Markov chain Monte Carlo algorithms in order to perform unsupervised joint blind separation of the sources and estimation of the mixing matrix and hyperparameters of the problem. In order to achieve efficient joint separation and denoising procedures in the case of high noise levels in the data, a slight modification of the exposed models is presented: the Bernoulli-Gaussian mixture model, which is equivalent to a hard thresholding rule in denoising problems. A number of simulations are presented in order to highlight the performance of the aforementioned approach: 1) at both high and low signal-to-noise ratios and 2) comparing the results with respect to the choice of the wavelet basis decomposition. PMID:16830910

Ichir, Mahieddine M; Mohammad-Djafari, Ali

2006-07-01

152

Estimating Foreign-Object-Debris Density from Photogrammetry Data

NASA Technical Reports Server (NTRS)

Within the first few seconds after the launch of STS-124, debris traveling vertically near the vehicle was captured on two 16-mm film cameras surrounding the launch pad. One particular piece of debris caught the attention of engineers investigating the release of the flame trench fire bricks. The question to be answered was whether the debris was a fire brick, and if so whether it represented the first bricks ejected from the flame trench wall, or whether the object was one of the pieces of debris normally ejected from the vehicle during launch. If it was typical launch debris, such as SRB throat plug foam, why was it traveling vertically and parallel to the vehicle during launch, instead of following its normal trajectory, flying horizontally toward the north perimeter fence? By utilizing the Runge-Kutta integration method for velocity and the Verlet integration method for position, a method was obtained that suppresses trajectory computational instabilities due to noisy position data. This combination of integration methods provides a means to extract the best estimate of drag force and drag coefficient under the non-ideal conditions of limited position data. This integration strategy leads immediately to the best possible estimate of object density, within the constraints of unknown particle shape. These types of calculations do not exist in readily available off-the-shelf simulation software, especially where photogrammetry data are needed as an input.
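
The drag-based integration idea can be illustrated with a small sketch. The code below is a hypothetical, much-simplified 1-D version (vertical motion of a spherical debris piece with quadratic drag, integrated with a velocity-Verlet-style step); the function name, parameters, and material values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def verlet_with_drag(z0, v0, dt, n_steps, rho_obj, diameter,
                     cd=0.5, rho_air=1.225, g=9.81):
    """Integrate 1-D vertical motion of a sphere with quadratic drag."""
    m = rho_obj * (np.pi / 6.0) * diameter**3      # sphere mass
    area = (np.pi / 4.0) * diameter**2             # cross-sectional area

    def accel(v):
        drag = -0.5 * rho_air * cd * area * v * abs(v) / m  # opposes motion
        return -g + drag

    z = np.empty(n_steps + 1)
    v = np.empty(n_steps + 1)
    z[0], v[0] = z0, v0
    for i in range(n_steps):
        a = accel(v[i])
        z[i + 1] = z[i] + v[i] * dt + 0.5 * a * dt * dt   # Verlet position update
        v_half = v[i] + 0.5 * a * dt                      # half-step velocity
        v[i + 1] = v_half + 0.5 * accel(v_half) * dt
    return z, v
```

At terminal velocity the drag force balances gravity, so an observed terminal speed constrains the combination of drag coefficient, size, and density of the object; this balance is the inversion that the trajectory analysis above exploits in a far more careful way.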

Long, Jason; Metzger, Philip; Lane, John

2013-01-01

153

Wavelet-based Adaptive Mesh Refinement Method for Global Atmospheric Chemical Transport Modeling

NASA Astrophysics Data System (ADS)

Numerical modeling of global atmospheric chemical transport presents enormous computational difficulties associated with simulating a wide range of time and spatial scales. These difficulties are exacerbated by the fact that hundreds of chemical species and thousands of chemical reactions are typically used to describe the chemical kinetic mechanism. These computational requirements very often force researchers to use relatively crude quasi-uniform numerical grids with inadequate spatial resolution, which introduces significant numerical diffusion into the system. It was shown that this spurious diffusion significantly distorts the pollutant mixing and transport dynamics for typically used grid resolutions. These numerical difficulties have to be systematically addressed, considering that the demand for fast, high-resolution chemical transport models will grow over the next decade with the need to interpret satellite observations of tropospheric ozone and related species. In this study we offer a dynamically adaptive multilevel Wavelet-based Adaptive Mesh Refinement (WAMR) method for the numerical modeling of atmospheric chemical evolution equations. The adaptive mesh refinement is performed by adding finer levels of resolution in locations of fine-scale development and removing them in locations of smooth solution behavior. The algorithm is based on mathematically well-established wavelet theory. This allows us to provide error estimates of the solution that are used in conjunction with an appropriate threshold criterion to adapt the non-uniform grid. Other essential features of the numerical algorithm include: an efficient wavelet spatial discretization that minimizes the number of degrees of freedom for a prescribed accuracy, a fast algorithm for computing wavelet amplitudes, and efficient and accurate derivative approximations on an irregular grid.
The method has been tested on a variety of benchmark problems, including numerical simulation of transpacific traveling pollution plumes. The generated pollution plumes are diluted by turbulent mixing as they are advected downwind. Despite this dilution, it was recently discovered that pollution plumes in the remote troposphere can preserve their identity as well-defined structures for two weeks or more as they circle the globe. Present global Chemical Transport Models (CTMs) implemented on quasi-uniform grids are incapable of reproducing these layered structures because of the high numerical plume dilution caused by numerical diffusion combined with the non-uniformity of atmospheric flow. It is shown that the WAMR algorithm obtains solutions of accuracy comparable to conventional numerical techniques with more than an order-of-magnitude reduction in the number of grid points; the adaptive algorithm is therefore capable of producing accurate results at a relatively low computational cost. The numerical simulations demonstrate that the WAMR algorithm applied to the traveling plume problem accurately reproduces the plume dynamics, unlike conventional numerical methods that utilize quasi-uniform numerical grids.
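
The adaptation criterion described above, thresholding wavelet detail amplitudes to decide where finer resolution levels are needed, can be sketched in one dimension. This is a minimal illustration assuming a Haar wavelet and a user-chosen threshold `eps`; it is not the WAMR implementation.

```python
import numpy as np

def haar_details(u):
    """One level of the Haar transform: returns (averages, details)."""
    u = np.asarray(u, dtype=float)
    avg = (u[0::2] + u[1::2]) / 2.0
    det = (u[0::2] - u[1::2]) / 2.0
    return avg, det

def flag_refinement(u, eps):
    """Flag coarse cells whose Haar detail amplitude exceeds eps.

    Flagged cells would receive a finer resolution level; unflagged
    cells can be represented on the coarser grid within tolerance eps.
    """
    _, det = haar_details(u)
    return np.abs(det) > eps
```

A smooth field produces uniformly small detail coefficients and no refinement flags, while a sharp front produces a large coefficient, and hence a flag, only near the front: exactly the local adaptivity the abstract describes.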

Rastigejev, Y.

2011-12-01

154

NASA Astrophysics Data System (ADS)

In this work we present a multilevel Wavelet-based Adaptive Mesh Refinement (WAMR) method for numerical modeling of global atmospheric chemical transport problems. An accurate numerical simulation of such problems presents an enormous challenge. Atmospheric Chemical Transport Models (CTMs) combine chemical reactions with meteorologically predicted atmospheric advection and turbulent mixing. The resulting system of multi-scale advection-reaction-diffusion equations is extremely stiff and nonlinear and involves a large number of chemically interacting species. As a consequence, the need for enormous computational resources for solving these equations imposes severe limitations on the spatial resolution of CTMs implemented on uniform or quasi-uniform grids. In turn, this relatively crude spatial resolution introduces significant numerical diffusion into the system. This numerical diffusion is shown to noticeably distort the pollutant mixing and transport dynamics for typically used grid resolutions. The WAMR method for numerical modeling of atmospheric chemical evolution equations presented in this work provides a significant reduction in computational cost without sacrificing numerical accuracy, and therefore addresses the numerical difficulties described above. The WAMR method introduces a fine grid in regions where sharp transitions occur and a coarser grid in regions of smooth solution behavior, and therefore yields much more accurate solutions than conventional numerical methods implemented on uniform or quasi-uniform grids. The algorithm allows one to provide error estimates of the solution that are used in conjunction with appropriate threshold criteria to adapt the non-uniform grid. The method has been tested on a variety of problems, including numerical simulation of traveling pollution plumes.
It was shown that pollution plumes in the remote troposphere can propagate as well-defined layered structures for two weeks or more as they circle the globe. Recently, it was demonstrated that present global CTMs implemented on quasi-uniform grids are incapable of reproducing these layered structures because of the high numerical plume dilution caused by numerical diffusion combined with the non-uniformity of atmospheric flow. In contrast, the adaptive wavelet technique is shown to produce highly accurate numerical solutions at a relatively low computational cost. It is demonstrated that the developed WAMR method has significant advantages over conventional non-adaptive computational techniques in terms of accuracy and computational cost for numerical calculations of atmospheric chemical transport. The simulations show the excellent ability of the algorithm to adapt the computational grid to a solution containing different scales at different spatial locations, so as to produce accurate results at a relatively low computational cost. This work is supported by a grant from the National Science Foundation under Award No. HRD-1036563.

Rastigejev, Y.; Semakin, A. N.

2012-12-01

155

Change-in-ratio density estimator for feral pigs is less biased than closed mark–recapture estimates

Abstract. Closed-population capture–mark–recapture (CMR) methods can produce biased density estimates for species with low or heterogeneous detection probabilities. In an attempt to address such biases, we developed a density-estimation method based on the change in ratio (CIR) of survival between two populations where survival, calculated using an open-population CMR model, is known to differ. We used our method to estimate density for

Laura B. Hanson; James B. Grand; Michael S. Mitchell; D. Buck Jolley; Bill D. Sparklin; Stephen S. Ditchkoff

2008-01-01

156

Change-in-ratio density estimator for feral pigs is less biased than closed mark-recapture estimates

Closed-population capture-mark-recapture (CMR) methods can produce biased density estimates for species with low or heterogeneous detection probabilities. In an attempt to address such biases, we developed a density-estimation method based on the change in ratio (CIR) of survival between two populations where survival, calculated using an open-population CMR model, is known to differ. We used our method to estimate density

Laura B. Hanson; James B. Grand; Michael S. Mitchell; D. Buck Jolley; Bill D. Sparklin; Stephen S. Ditchkoff

157

Current recommendations of the Adult Treatment Panel and Adolescents Treatment Panel of the National Cholesterol Education Program make serum low-density lipoprotein cholesterol (LDL-C) levels the basis of classification and management of hypercholesterolemia. A number of direct homogeneous assays based on surfactant/solubility principles have evolved in the recent past. This has made LDL-C estimation less cumbersome than earlier methods. Here we compared one of the direct homogeneous assays with the widely used Friedewald's method of estimating LDL-C to assess the differences and correlation. We used a direct homogeneous assay kit to estimate serum LDL-C and high-density lipoprotein cholesterol (HDL-C). Serum triglyceride (TG) and total cholesterol (TC) were measured, and LDL-C was calculated using Friedewald's formula. The LDL-C levels obtained by both methods in 893 fasting serum samples were compared. The statistical methods used were the paired t-test and Pearson's correlation. There was a significant difference in the mean LDL-C levels obtained by the two methods at TG levels <200 mg/dl (p<0.02) and TC levels >150 mg% (p<0.001). The correlation coefficient (r) between Friedewald's and direct assay estimation was 0.88. Friedewald's method classified 23.5% of patients as high cardiac risk, whereas the direct assay classified 17.58%. Both had good correlation, even though serum triglyceride and total cholesterol levels affect the difference in LDL-C estimated by the two methods. Taking into account cost and performance, Friedewald's method is as good or even better for classifying and managing patients. PMID:23105534
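
Friedewald's formula itself is simple: with all quantities in mg/dL, LDL-C = TC − HDL-C − TG/5, and the formula is not considered valid when TG ≥ 400 mg/dL. A minimal sketch (the function name is illustrative):

```python
def friedewald_ldl(total_chol, hdl, triglycerides):
    """Estimate LDL-C (mg/dL) via Friedewald: LDL = TC - HDL - TG/5.

    The formula is not valid when triglycerides >= 400 mg/dL.
    """
    if triglycerides >= 400:
        raise ValueError("Friedewald formula invalid for TG >= 400 mg/dL")
    return total_chol - hdl - triglycerides / 5.0
```

For example, TC = 200, HDL-C = 50, and TG = 150 mg/dL give an estimated LDL-C of 120 mg/dL.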

Sahu, Suchanda; Chawla, Rajinder; Uppal, Bharti

2005-07-01

158

Wavelets based algorithm for the evaluation of enhanced liver areas

NASA Astrophysics Data System (ADS)

Hepatocellular carcinoma (HCC) is a primary tumor of the liver. After local therapies, tumor evaluation is based on the mRECIST criteria, which involve measuring the maximum diameter of the viable lesion. This paper describes a computational methodology to measure the maximum diameter of the tumor through the contrast-enhanced area of the lesions. 63 computed tomography (CT) slices from 23 patients were assessed. Noncontrasted liver and typical HCC nodules were evaluated, and a virtual phantom was developed for this purpose. Detection and quantification by the algorithm were optimized using the virtual phantom. After that, we compared the algorithm's findings for the maximum diameter of the target lesions against radiologist measures. Computed results for the maximum diameter are in good agreement with those obtained by radiologist evaluation, indicating that the algorithm was able to properly detect the tumor limits. A comparison of the maximum diameter estimated by the radiologist versus the algorithm revealed differences on the order of 0.25 cm for large-sized tumors (diameter > 5 cm), whereas differences of less than 1.0 cm were found for small-sized tumors. Differences between algorithm and radiologist measures were small for small-sized tumors, with a trend toward a small increase for tumors greater than 5 cm. Therefore, traditional methods for measuring lesion diameter should be complemented with non-subjective measurement methods, which would allow a more correct evaluation of the contrast-enhanced areas of HCC according to the mRECIST criteria.

Alvarez, Matheus; Rodrigues de Pina, Diana; Giacomini, Guilherme; Gomes Romeiro, Fernando; Barbosa Duarte, Sérgio; Yamashita, Seizo; de Arruda Miranda, José Ricardo

2014-03-01

159

Wavelet-based localization of oscillatory sources from magnetoencephalography data.

Transient brain oscillatory activities recorded with electroencephalography (EEG) or magnetoencephalography (MEG) are characteristic features of physiological and pathological processes. This study is aimed at describing, evaluating, and illustrating with clinical data a new method for localizing the sources of oscillatory cortical activity recorded by MEG. The method combines time-frequency representation and an entropic regularization technique in a common framework, assuming that brain activity is sparse in time and space. Spatial sparsity relies on the assumption that brain activity is organized among cortical parcels. Sparsity in time is achieved by transposing the inverse problem in the wavelet representation, for both data and sources. We propose an estimator of the wavelet coefficients of the sources based on the maximum entropy on the mean (MEM) principle. The full dynamics of the sources is obtained from the inverse wavelet transform, and principal component analysis of the reconstructed time courses is applied to extract oscillatory components. This methodology is evaluated using realistic simulations of single-trial signals, combining fast and sudden discharges (spikes) along with bursts of oscillating activity. The method is finally illustrated with a clinical application using MEG data acquired on a patient with a right orbitofrontal epilepsy. PMID:22410322

Lina, J M; Chowdhury, R; Lemay, E; Kobayashi, E; Grova, C

2014-08-01

160

Learning multisensory integration and coordinate transformation via density estimation.

Sensory processing in the brain includes three key operations: multisensory integration-the task of combining cues into a single estimate of a common underlying stimulus; coordinate transformations-the change of reference frame for a stimulus (e.g., retinotopic to body-centered) effected through knowledge about an intervening variable (e.g., gaze position); and the incorporation of prior information. Statistically optimal sensory processing requires that each of these operations maintains the correct posterior distribution over the stimulus. Elements of this optimality have been demonstrated in many behavioral contexts in humans and other animals, suggesting that the neural computations are indeed optimal. That the relationships between sensory modalities are complex and plastic further suggests that these computations are learned-but how? We provide a principled answer, by treating the acquisition of these mappings as a case of density estimation, a well-studied problem in machine learning and statistics, in which the distribution of observed data is modeled in terms of a set of fixed parameters and a set of latent variables. In our case, the observed data are unisensory-population activities, the fixed parameters are synaptic connections, and the latent variables are multisensory-population activities. In particular, we train a restricted Boltzmann machine with the biologically plausible contrastive-divergence rule to learn a range of neural computations not previously demonstrated under a single approach: optimal integration; encoding of priors; hierarchical integration of cues; learning when not to integrate; and coordinate transformation. The model makes testable predictions about the nature of multisensory representations. PMID:23637588
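
The training procedure described above can be sketched with a toy Bernoulli-Bernoulli restricted Boltzmann machine trained by one-step contrastive divergence (CD-1). This is a minimal illustration of the learning rule only, not the authors' model of unisensory and multisensory populations; the layer sizes, learning rate, and data are arbitrary placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Bernoulli-Bernoulli restricted Boltzmann machine trained with CD-1."""

    def __init__(self, n_vis, n_hid):
        self.W = 0.01 * rng.standard_normal((n_vis, n_hid))
        self.b = np.zeros(n_vis)   # visible biases
        self.c = np.zeros(n_hid)   # hidden biases

    def cd1_step(self, v0, lr=0.1):
        # Positive phase: sample hidden units given the data.
        ph0 = sigmoid(v0 @ self.W + self.c)
        h0 = (rng.random(ph0.shape) < ph0).astype(float)
        # Negative phase: one Gibbs step back to the visible layer.
        pv1 = sigmoid(h0 @ self.W.T + self.b)
        v1 = (rng.random(pv1.shape) < pv1).astype(float)
        ph1 = sigmoid(v1 @ self.W + self.c)
        # Contrastive-divergence parameter updates.
        self.W += lr * (v0.T @ ph0 - v1.T @ ph1) / len(v0)
        self.b += lr * (v0 - v1).mean(axis=0)
        self.c += lr * (ph0 - ph1).mean(axis=0)
        return np.mean((v0 - pv1) ** 2)   # reconstruction error
```

In the paper's setting the visible layer would hold unisensory population activities and the hidden layer the latent multisensory population; here the point is only that CD-1 is a local, biologically plausible update driven by the difference between data-driven and model-driven statistics.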

Makin, Joseph G; Fellows, Matthew R; Sabes, Philip N

2013-04-01

161

NASA Astrophysics Data System (ADS)

The objective of this study is to bring out errors introduced during construction that are overlooked during the physical verification of the bridge. Such errors can be pointed out if the symmetry of the structure is challenged. This paper thus presents a study of the downstream and upstream trusses of a newly constructed steel bridge using a time-frequency and wavelet-based approach. The variation in the behavior of the truss joints of the bridge with variation in vehicle speed has been worked out to determine their flexibility. Testing on the steel bridge was carried out with the same instrument setup on both the upstream and downstream trusses at two different speeds with the same moving vehicle. The nodal flexibility investigation is carried out using power spectral density, the short-time Fourier transform, and the wavelet packet transform with respect to both trusses and both speeds. The results show that the joints of the upstream and downstream trusses behave differently, even though designed for the same loading, due to constructional variations and vehicle movement, whereas the analytical models present only a simplified picture for analysis and design. The difficulty of modal parameter extraction for this bridge increased with increasing speed due to the decreased excitation time.

Walia, Suresh Kumar; Patel, Raj Kumar; Vinayak, Hemant Kumar; Parti, Raman

2013-12-01

162

Estimation of density of mongooses with capture-recapture and distance sampling

We captured mongooses (Herpestes javanicus) in live traps arranged in trapping webs in Antigua, West Indies, and used capture-recapture and distance sampling to estimate density. Distance estimation and program DISTANCE were used to provide estimates of density from the trapping-web data. Mean density based on trapping webs was 9.5 mongooses/ha (range, 5.9-10.2/ha); estimates had coefficients of variation ranging from 29.82-31.58% (x̄ = 30.46%). Mark-recapture models were used to estimate abundance, which was converted to density using estimates of effective trap area. Tests of model assumptions provided by CAPTURE indicated pronounced heterogeneity in capture probabilities and some indication of behavioral response and variation over time. Mean estimated density was 1.80 mongooses/ha (range, 1.37-2.15/ha) with estimated coefficients of variation of 4.68-11.92% (x̄ = 7.46%). Estimates of density based on mark-recapture data depended heavily on assumptions about animal home ranges; variances of densities also may be underestimated, leading to unrealistically narrow confidence intervals. Estimates based on trap webs require fewer assumptions, and estimated variances may be a more realistic representation of sampling variation. Because trap webs are established easily and provide adequate data for estimation in a few sample occasions, the method should be efficient and reliable for estimating densities of mongooses.

Corn, J. L.; Conroy, M. J.

1998-01-01

163

NASA Astrophysics Data System (ADS)

Hyperspectral images, which contain rich and fine spectral information, can be used to identify surface objects and improve land use/cover classification accuracy. Due to the high dimensionality of hyperspectral data, traditional statistics-based classifiers cannot be directly applied to such images with limited training samples. This problem is referred to as the "curse of dimensionality". The commonly used remedy is dimensionality reduction, and feature extraction is most frequently used to reduce the dimensionality of hyperspectral images. There are two types of feature extraction methods: the first is based on the statistical properties of the data; the other is based on time-frequency analysis. In this study, time-frequency analysis methods are used to extract features for hyperspectral image classification. It has been shown that wavelet-based feature extraction provides an effective tool for spectral feature extraction. On the other hand, the Hilbert-Huang transform (HHT), a relatively new time-frequency analysis tool, has been widely used in nonlinear and nonstationary data analysis. In this study, the wavelet transform and HHT are applied to the hyperspectral data for physical spectral analysis. We can therefore obtain a small number of salient features, reduce the dimensionality of the hyperspectral images, and preserve the accuracy of the classification results. An AVIRIS data set is used to test the performance of the proposed HHT-based feature extraction methods, and the results are compared with wavelet-based feature extraction. According to the experimental results, HHT-based feature extraction methods are effective tools, and their results are similar to those of wavelet-based feature extraction methods.

Huang, X.-M.; Hsu, P.-H.

2012-07-01

164

A multirate DSP model for estimation of discrete probability density functions

The problem of estimating a probability density function from measurements has been widely studied by many researchers. Even though much work has been done in the area of PDF estimation, most of it has focused on the continuous case. In this paper, we propose a new model-based approach for modeling and estimating discrete probability density functions, or probability mass

Byung-Jun Yoon; P. P. Vaidyanathan

2005-01-01

165

Serial identification of EEG patterns using adaptive wavelet-based analysis

NASA Astrophysics Data System (ADS)

The problem of recognizing specific oscillatory patterns in electroencephalograms with the continuous wavelet transform is discussed. Aiming to improve the abilities of wavelet-based tools, we propose a serial adaptive method for sequential identification of EEG patterns such as sleep spindles and spike-wave discharges. This method provides an optimal selection of parameters based on objective functions and enables extraction of the most informative features of the recognized structures. Different ways of increasing the quality of pattern recognition within the proposed serial adaptive technique are considered.

Nazimov, A. I.; Pavlov, A. N.; Nazimova, A. A.; Grubov, V. V.; Koronovskii, A. A.; Sitnikova, E.; Hramov, A. E.

2013-10-01

166

The wave function of a many-electron system contains inhomogeneously distributed spatial details, which allows the number of fine-detail wavelets in multiresolution analysis approximations to be reduced. Finding a method for decimating the unnecessary basis functions plays an essential role in avoiding an exponential increase of computational demand in wavelet-based calculations. We describe an effective prediction algorithm for the wavelet coefficients of the next resolution level, based on the approximate wave function expanded up to a given level. The prediction results in a reasonable approximation of the wave function and allows the unnecessary wavelets to be sorted out with great reliability. PMID:23115109

Pipek, János; Nagy, Szilvia

2013-03-01

167

We have brought forward a wavelet-based algorithm for electroencephalograph (EEG) signals, using a scale-dependent threshold based on the median. In comparison with the universal threshold and the SURE threshold, our proposed threshold, which is adaptive to the subband noise signals, preserves the noise-free reconstruction property and takes lower risk than the universal threshold, and it overcomes the drawback of the SURE threshold. Evidently, the scale-dependent threshold based on the median is computationally simple and obtains a higher signal-to-noise ratio (SNR); it outperforms the universal threshold and the SURE threshold. PMID:20095474
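
The idea of a scale-dependent median threshold can be sketched as follows, assuming a Haar wavelet and the common median-absolute-value noise estimate (sigma = median(|d|)/0.6745) with a universal-style threshold per subband; the exact threshold rule used by the authors may differ.

```python
import numpy as np

def haar_decompose(x, levels):
    """Multi-level orthonormal Haar transform: (approximation, detail bands)."""
    a = np.asarray(x, dtype=float)
    details = []
    for _ in range(levels):
        d = (a[0::2] - a[1::2]) / np.sqrt(2.0)
        a = (a[0::2] + a[1::2]) / np.sqrt(2.0)
        details.append(d)
    return a, details

def haar_reconstruct(a, details):
    """Invert haar_decompose."""
    for d in reversed(details):
        out = np.empty(2 * len(a))
        out[0::2] = (a + d) / np.sqrt(2.0)
        out[1::2] = (a - d) / np.sqrt(2.0)
        a = out
    return a

def denoise_median_threshold(x, levels=3):
    """Soft-threshold each subband with a threshold derived from the
    median absolute value of that subband's own coefficients."""
    a, details = haar_decompose(x, levels)
    cleaned = []
    for d in details:
        sigma = np.median(np.abs(d)) / 0.6745         # robust noise estimate
        t = sigma * np.sqrt(2.0 * np.log(len(d) + 1)) # per-scale threshold
        cleaned.append(np.sign(d) * np.maximum(np.abs(d) - t, 0.0))
    return haar_reconstruct(a, cleaned)
```

Because the threshold is computed per subband, bands with heavier noise receive a larger threshold, which is what makes the rule adaptive to the subband noise level.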

Jia, Aibin; Wang, Min; Liu, Fasheng; Bao, Chengyou; Zhang, Xiao

2009-12-01

168

ICER-3D: A Progressive Wavelet-Based Compressor for Hyperspectral Images

NASA Technical Reports Server (NTRS)

ICER-3D is a progressive, wavelet-based compressor for hyperspectral images. ICER-3D is derived from the ICER image compressor. ICER-3D can provide lossless and lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The three-dimensional wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of hyperspectral data sets, while facilitating elimination of spectral ringing artifacts. Correlation is further exploited by a context modeler that effectively exploits spectral dependencies in the wavelet-transformed hyperspectral data. Performance results illustrating the benefits of these features are presented.

Kiely, A.; Klimesh, M.; Xie, H.; Aranki, N.

2005-01-01

169

National Technical Information Service (NTIS)

The goal of this proposed project is to develop an automated technique to assist radiologists in estimating mammographic breast density. During the project years, we developed an automated mammographic density segmentation system, referred to as Mammograp...

H. Chan

2006-01-01

170

This chapter discusses estimating the biomass density of forest vegetation. Data from inventories of tropical Asia and America were used to estimate biomass densities. Efforts to quantify forest disturbance suggest that population density, at subnational scales, can be used as a surrogate index to encompass all the anthropogenic activities (logging, slash-and-burn agriculture, grazing) that lead to degradation of tropical forest biomass density.

Brown, S.

1996-07-01

171

National Technical Information Service (NTIS)

The goal of this proposed project is to develop an automated technique to assist radiologists in estimating mammographic breast density. The computerized image analysis tool can provide a consistent and reproducible estimation of percent dense area on rou...

H. Chan

2004-01-01

172

Nonparametric Estimation of Mixed Partial Derivatives of a Multivariate Density

ERIC Educational Resources Information Center

A class of estimators which are asymptotically unbiased and mean square consistent are exhibited. Theorems giving necessary and sufficient conditions for uniform asymptotic unbiasedness and for mean square consistency are presented along with applications of the estimator to certain statistical problems. (Author/RC)

Singh, R. S.

1976-01-01

173

Recovery rate is essential to the estimation of a portfolio's loss and economic capital. Neglecting the randomness of the distribution of recovery rates may underestimate the risk. This study introduces two kinds of distribution models, Beta distribution estimation and kernel density estimation, to simulate the distribution of recovery rates of corporate loans and bonds. Models based on the Beta distribution are in common use, for example in CreditMetrics by J.P. Morgan, Portfolio Manager by KMV, and LossCalc by Moody's. However, the Beta distribution has a fundamental defect: it cannot fit bimodal or multimodal distributions, such as the recovery rates of corporate loans and bonds shown by Moody's new data. To overcome this flaw, kernel density estimation is introduced, and we compare the simulation results from the histogram, Beta distribution estimation, and kernel density estimation, reaching the conclusion that the Gaussian kernel density estimate better imitates the distribution of bimodal or multimodal data samples of corporate loans and bonds. Finally, a chi-square test of the Gaussian kernel density estimate confirms that it fits the curve of recovery rates of loans and bonds. Using the kernel density estimate to precisely delineate the bimodal recovery rates of bonds is therefore optimal in credit risk management. PMID:23874558
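
A Gaussian kernel density estimate of the kind compared above can be written in a few lines. This sketch uses Silverman's rule-of-thumb bandwidth; the bandwidth choice in the study may differ, and the bimodal test data below are synthetic placeholders, not Moody's recovery-rate data.

```python
import numpy as np

def gaussian_kde(samples, grid, bandwidth=None):
    """Gaussian kernel density estimate of `samples`, evaluated on `grid`."""
    samples = np.asarray(samples, dtype=float)
    grid = np.asarray(grid, dtype=float)
    n = len(samples)
    if bandwidth is None:  # Silverman's rule of thumb
        bandwidth = 1.06 * samples.std(ddof=1) * n ** (-1.0 / 5.0)
    u = (grid[:, None] - samples[None, :]) / bandwidth
    k = np.exp(-0.5 * u * u) / np.sqrt(2.0 * np.pi)   # Gaussian kernel
    return k.sum(axis=1) / (n * bandwidth)
```

Unlike a fitted Beta density, the kernel estimate places a small Gaussian bump on every observation, so two clusters of recoveries produce two peaks automatically.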

Chen, Rongda; Wang, Ze

2013-01-01

174

A Density-Quantile Function Perspective on Robust Estimation.

National Technical Information Service (NTIS)

This paper provides an overview to a new general approach to statistical data analysis and parameter estimation which could be called the quantile function approach. The aims of descriptive statistics (to graphically summarize and display the data) are ob...

E. Parzen

1978-01-01

175

Consistency Properties of Nearest Neighbor Density Function Estimators

Let $X_1, X_2, \ldots$ be $R^p$-valued random variables having unknown density function $f$. If $K$ is a density on the unit sphere in $R^p$, $\{k(n)\}$ a sequence of positive integers such that $k(n) \rightarrow \infty$ and $k(n) = o(n)$, and $R(k, z)$ is the distance from a point $z$ to the $k(n)$th nearest of $X_1, \ldots, X_n$, then $f_n(z) = (nR(k, z)^p)^{-1}
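
The estimator family in the (truncated) abstract contains the classical k-nearest-neighbor density estimate as the uniform-kernel special case: with $R(k,z)$ the distance to the $k$th nearest sample, $f_n(z) = k / (n\, c_p R(k,z)^p)$, where $c_p$ is the volume of the unit ball in $R^p$. A minimal sketch of that special case:

```python
import numpy as np
from math import gamma, pi

def knn_density(z, data, k):
    """k-nearest-neighbor density estimate at point z (uniform kernel)."""
    data = np.atleast_2d(np.asarray(data, dtype=float))
    n, p = data.shape
    # Distance from z to its k-th nearest sample.
    r = np.sort(np.linalg.norm(data - z, axis=1))[k - 1]
    unit_ball = pi ** (p / 2.0) / gamma(p / 2.0 + 1.0)  # volume of unit p-ball
    return k / (n * unit_ball * r ** p)
```

Intuitively, roughly k/n of the probability mass sits inside the ball of radius R(k,z) around z, so dividing by the ball's volume gives a density; consistency requires exactly the conditions quoted above, k(n) → ∞ with k(n) = o(n).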

David S. Moore; James W. Yackel

1977-01-01

176

On the effect of estimating the error density in nonparametric deconvolution

It is quite common in the statistical literature on nonparametric deconvolution to assume that the error density is perfectly known. Since this seems to be unrealistic in many practical applications, we study the effect of estimating the unknown error density. We derive minimax rates of convergence and propose a modification of the usual kernel-based estimation scheme, which takes the uncertainty

Michael H. Neumann; O. Hössjer

1997-01-01

177

Noise power spectral density estimation based on optimal smoothing and minimum statistics

We describe a method to estimate the power spectral density of nonstationary noise when a noisy speech signal is given. The method can be combined with any speech enhancement algorithm which requires a noise power spectral density estimate. In contrast to other methods, our approach does not use a voice activity detector. Instead it tracks spectral minima in each frequency
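
The minimum-statistics idea can be sketched as follows: recursively smooth each frequency bin's periodogram, then take the running minimum over a sliding window as the noise PSD estimate. The window length and smoothing constant here are arbitrary placeholders; Martin's full algorithm additionally uses optimal time-varying smoothing and bias compensation, which this sketch omits.

```python
import numpy as np

def noise_psd_min_stats(frames, win=8, alpha=0.8):
    """Estimate the noise PSD from periodogram frames (n_frames, n_bins).

    Each bin is recursively smoothed, and the noise level is tracked as
    the minimum of the smoothed periodogram over a sliding window of
    `win` frames -- no voice-activity detector is needed, because speech
    bursts raise the periodogram only briefly while the minimum stays
    near the noise floor.
    """
    frames = np.asarray(frames, dtype=float)
    n_frames, _ = frames.shape
    smoothed = np.empty_like(frames)
    s = frames[0].copy()
    for t in range(n_frames):
        s = alpha * s + (1.0 - alpha) * frames[t]   # recursive smoothing
        smoothed[t] = s
    noise = np.empty_like(frames)
    for t in range(n_frames):
        lo = max(0, t - win + 1)
        noise[t] = smoothed[lo:t + 1].min(axis=0)   # sliding-window minimum
    return noise
```

During a short speech burst the smoothed periodogram rises, but the window still contains pre-burst frames, so the tracked minimum, and hence the noise estimate, stays near the true noise level.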

Rainer Martin

2001-01-01

178

In the last decade, new techniques have been introduced for the analysis of nonstationary cardiovascular signals. Widely used are the short-time Fourier transform (STFT) and the autoregressive (AR) model. The present paper presents a wavelet method for noninvasive evaluation of autonomic status after pharmacological modulation by analyzing transient phenomena in rats' electrocardiographic (ECG) signals. Some methods of preprocessing the R-R signal are

G. Postolache; L. S. Carvalho; I. Rocha; O. Postolache; P. S. Girao

2003-01-01

179

Asymptotic equivalence of spectral density estimation and gaussian white noise

We consider the statistical experiment given by a sample of a stationary Gaussian process with an unknown smooth spectral density f. Asymptotic equivalence, in the sense of Le Cam's deficiency Delta-distance, to two Gaussian experiments with simpler structure is established. The first one is given by independent zero mean Gaussians with variance approximately the value of f in points of

Georgi K. Golubev; Michael Nussbaum; Harrison H. Zhou

2009-01-01

180

Combining Ratio Estimation for Low Density Parity Check (LDPC) Coding.

National Technical Information Service (NTIS)

The Low Density Parity Check (LDPC) code decoding algorithm makes use of a scaled received signal derived from maximizing the log-likelihood ratio of the received signal. The scaling factor (often called the combining ratio) in an AWGN channel is a ratio be...

J. Hi S. Mahmoud

2012-01-01

181

Utilities as Random Variables: Density Estimation and Structure Discovery.

National Technical Information Service (NTIS)

Decision theory does not traditionally include uncertainty over utility functions. We argue that a person's utility value for a given outcome can be treated as we treat other domain attributes: as a random variable with a density function over its possibl...

U. Chajewska D. Koller

2000-01-01

182

This paper presents an automated scheme for breast density estimation on mammogram using statistical and boundary information. Breast density is regarded as a meaningful indicator for breast cancer risk, but measurement of breast density still relies on the qualitative judgment of radiologists. Therefore, we attempted to develop an automated system achieving objective and quantitative measurement. For preprocessing, we first segmented

Youngwoo Kim; Changwon Kim; Jong-Hyo Kim

2010-01-01

183

Nonparametric Density Estimation with Adaptive, Anisotropic Kernels for Human Motion Tracking

In this paper, we propose to model priors on human motion by means of nonparametric kernel densities. Kernel densities avoid assumptions on the shape of the underlying distribution and let the data speak for themselves. In general, kernel density estimators suffer from the problem known as the curse of dimensionality, i.e., the amount of data required to cover the

Thomas Brox; Bodo Rosenhahn; Daniel Cremers; Hans-peter Seidel

2007-01-01

184

Estimating densities of liquid transition-metals and Ni-base superalloys

To estimate the densities of liquid Ni-base superalloys, the densities and temperature coefficients of density (dρ/dT) of the liquid transition metals, which are used as alloying elements in Ni-base superalloys, were gathered, reviewed, and applied to a simple correlation. The correlation is particularly useful for estimating dρ/dT of many transition metals for which there are no data available. To demonstrate how the

P. K Sung; D. R Poirier; E McBride

1997-01-01

185

Volumetric breast density estimation from full-field digital mammograms

A method is presented for estimation of dense breast tissue volume from mammograms obtained with full-field digital mammography (FFDM). The thickness of dense tissue mapping to a pixel is determined by using a physical model of image acquisition. This model is based on the assumption that the breast is composed of two types of tissue, fat and parenchyma. Effective linear

Saskia Van Engeland; Peter R. Snoeren; Henkjan Huisman; Carla Boetes; Nico Karssemeijer

2006-01-01

186

Design of wavelet-based ECG detector for implantable cardiac pacemakers.

A wavelet Electrocardiogram (ECG) detector for low-power implantable cardiac pacemakers is presented in this paper. The proposed wavelet-based ECG detector consists of a wavelet decomposer with wavelet filter banks, a QRS complex detector of hypothesis testing with wavelet-demodulated ECG signals, and a noise detector with zero-crossing points. In order to achieve high detection accuracy with low power consumption, a multi-scaled product algorithm and soft-threshold algorithm are efficiently exploited in our ECG detector implementation. Our algorithmic and architectural level approaches have been implemented and fabricated in a standard 0.35 µm CMOS technology. The testchip including a low-power analog-to-digital converter (ADC) shows a low detection error-rate of 0.196% and low power consumption of 19.02 µW with a 3 V supply voltage. PMID:23893202

Min, Young-Jae; Kim, Hoon-Ki; Kang, Yu-Ri; Kim, Gil-Su; Park, Jongsun; Kim, Soo-Won

2013-08-01
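The soft-threshold and multiscale-product operations named in the entry above are standard and can be sketched as follows (an illustrative sketch, not the authors' fixed-point hardware implementation):

```python
import numpy as np

def soft_threshold(x, t):
    """Soft-thresholding rule used in wavelet denoising:
    shrink coefficients toward zero by t, zeroing those with magnitude below t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def multiscale_product(d1, d2):
    """Pointwise product of detail coefficients at two adjacent scales;
    QRS-like transients persist across scales while noise tends to cancel."""
    return d1 * d2
```

For example, `soft_threshold(np.array([-3.0, -0.5, 0.0, 2.0]), 1.0)` yields `[-2, 0, 0, 1]`: small coefficients (likely noise) are zeroed and large ones are shrunk by the threshold.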

187

Bayesian Analysis of Mass Spectrometry Proteomics Data using Wavelet Based Functional Mixed Models

In this paper, we analyze MALDI-TOF mass spectrometry proteomic data using Bayesian wavelet-based functional mixed models. By modeling mass spectra as functions, this approach avoids reliance on peak detection methods. The flexibility of this framework in modeling non-parametric fixed and random effect functions enables it to model the effects of multiple factors simultaneously, allowing one to perform inference on multiple factors of interest using the same model fit, while adjusting for clinical or experimental covariates that may affect both the intensities and locations of peaks in the spectra. From the model output, we identify spectral regions that are differentially expressed across experimental conditions, while controlling the Bayesian FDR, in a way that takes both statistical and clinical significance into account. We apply this method to two cancer studies.

Morris, Jeffrey S.; Brown, Philip J.; Herrick, Richard C.; Baggerly, Keith A.; Coombes, Kevin R.

2008-01-01

188

Wavelet-based correlations of impedance cardiography signals and heart rate variability

NASA Astrophysics Data System (ADS)

The wavelet-based correlation analysis is employed to study impedance cardiography signals (variation in the impedance of the thorax z(t) and the time derivative of the thoracic impedance (-dz/dt)) and heart rate variability (HRV). A method of computer thoracic tetrapolar polyrheocardiography is used for hemodynamic registrations. The modulus of the wavelet-correlation function shows the level of correlation, and the phase indicates the mean phase shift of oscillations at the given scale (frequency). Significant correlations essentially exceeding the values obtained for noise signals are found within two spectral ranges, corresponding to respiratory activity (0.14-0.5 Hz) and to endothelial-related metabolic activity and neuroendocrine rhythms (0.0095-0.02 Hz). The phase shift of oscillations in all frequency ranges is probably related to the peculiarities of parasympathetic and neuro-humoral regulation of the cardiovascular system.

Podtaev, Sergey; Dumler, Andrew; Stepanov, Rodion; Frick, Peter; Tziberkin, Kirill

2010-04-01

189

In this paper, we have applied an efficient wavelet-based approximation method for solving the Fisher's type and the fractional Fisher's type equations arising in the biological sciences. To the best of our knowledge, no rigorous wavelet solution has previously been reported for the Fisher's and fractional Fisher's equations. The highest derivative in the differential equation is expanded into a Legendre series; this approximation is integrated while the boundary conditions are applied using integration constants. With the help of Legendre wavelet operational matrices, the Fisher's equation and the fractional Fisher's equation are converted into a system of algebraic equations. Block-pulse functions are used to investigate the Legendre wavelet coefficient vectors of the nonlinear terms. The convergence of the proposed methods is proved. Finally, we give some numerical examples to demonstrate the validity and applicability of the method. PMID:24908255

Rajaraman, R; Hariharan, G

2014-07-01

190

A Haar-wavelet-based Lucy-Richardson algorithm for positron emission tomography image restoration

NASA Astrophysics Data System (ADS)

Deconvolution is an ill-posed problem that requires regularization: noise is inevitably enhanced during the iterative deconvolution process. The enhanced noise degrades image quality, causing mistakes in clinical interpretation. This paper introduces a Haar-wavelet-based Lucy-Richardson algorithm (HALU) for positron emission tomography (PET) image restoration based on a spatially variant point spread function. After wavelet decomposition, the Lucy-Richardson algorithm was applied to each approximation matrix with a different number of iterations. This enhanced the contrast of our images without amplifying much of the noise. The results showed that HALU recovers resolution and yields better contrast and a lower noise level than the Lucy-Richardson algorithm.

Tam, Naomi W. P.; Lee, Jhih-Shian; Hu, Chi-Min; Liu, Ren-Shyan; Chen, Jyh-Cheng

2011-08-01

191

A wavelet-based watermarking algorithm for ownership verification of digital images.

Access to multimedia data has become much easier due to the rapid growth of the Internet. While this is usually considered an improvement of everyday life, it also makes unauthorized copying and distributing of multimedia data much easier, therefore presenting a challenge in the field of copyright protection. Digital watermarking, which is inserting copyright information into the data, has been proposed to solve the problem. In this paper, we first discuss the features that a practical digital watermarking system for ownership verification requires. Besides perceptual invisibility and robustness, we claim that the private control of the watermark is also very important. Second, we present a novel wavelet-based watermarking algorithm. Experimental results and analysis are then given to demonstrate that the proposed algorithm is effective and can be used in a practical system. PMID:18244614

Wang, Yiwei; Doherty, John F; Van Dyck, Robert E

2002-01-01

192

Breast cancer is the most common type of cancer among women and despite recent advances in the medical field, there are still some inherent limitations in the currently used screening techniques. The radiological interpretation of screening X-ray mammograms often leads to over-diagnosis and, as a consequence, to unnecessary traumatic and painful biopsies. Here we propose a computer-aided multifractal analysis of dynamic infrared (IR) imaging as an efficient method for identifying women with risk of breast cancer. Using a wavelet-based multi-scale method to analyze the temporal fluctuations of breast skin temperature collected from a panel of patients with diagnosed breast cancer and some female volunteers with healthy breasts, we show that the multifractal complexity of temperature fluctuations observed in healthy breasts is lost in mammary glands with malignant tumor. Besides potential clinical impact, these results open new perspectives in the investigation of physiological changes that may precede anatomical alterations in breast cancer development.

Gerasimova, Evgeniya; Audit, Benjamin; Roux, Stephane G.; Khalil, Andre; Gileva, Olga; Argoul, Francoise; Naimark, Oleg; Arneodo, Alain

2014-01-01

193

An Investigation of Wavelet Bases for Grid-Based Multi-Scale Simulations Final Report

The research summarized in this report is the result of a two-year effort that has focused on evaluating the viability of wavelet bases for the solution of partial differential equations. The primary objective for this work has been to establish a foundation for hierarchical/wavelet simulation methods based upon numerical performance, computational efficiency, and the ability to exploit the hierarchical adaptive nature of wavelets. This work has demonstrated that hierarchical bases can be effective for problems with a dominant elliptic character. However, the strict enforcement of orthogonality was found to be less desirable than weaker semi-orthogonality or bi-orthogonality for solving partial differential equations. This conclusion has led to the development of a multi-scale linear finite element based on a hierarchical change of basis. The reproducing kernel particle method has been found to yield extremely accurate phase characteristics for hyperbolic problems while providing a convenient framework for multi-scale analyses.

Baty, R.S.; Burns, S.P.; Christon, M.A.; Roach, D.W.; Trucano, T.G.; Voth, T.E.; Weatherby, J.R.; Womble, D.E.

1998-11-01

194

We present a case study illustrating the challenges of analyzing accelerometer data taken from a sample of children participating in an intervention study designed to increase physical activity. An accelerometer is a small device worn on the hip that records the minute-by-minute activity levels of the child throughout the day for each day it is worn. The resulting data are irregular functions characterized by many peaks representing short bursts of intense activity. We model these data using the wavelet-based functional mixed model. This approach incorporates multiple fixed effect and random effect functions of arbitrary form, the estimates of which are adaptively regularized using wavelet shrinkage. The method yields posterior samples for all functional quantities of the model, which can be used to perform various types of Bayesian inference and prediction. In our case study, a high proportion of the daily activity profiles are incomplete, i.e. have some portion of the profile missing, so cannot be directly modeled using the previously described method. We present a new method for stochastically imputing the missing data that allows us to incorporate these incomplete profiles in our analysis. Our approach borrows strength from both the observed measurements within the incomplete profiles and from other profiles, from the same child as well as other children with similar covariate levels, while appropriately propagating the uncertainty of the imputation throughout all subsequent inference. We apply this method to our case study, revealing some interesting insights into children's activity patterns. We point out some strengths and limitations of using this approach to analyze accelerometer data. PMID:19169424

Morris, Jeffrey S; Arroyo, Cassandra; Coull, Brent A; Ryan, Louise M; Herrick, Richard; Gortmaker, Steven L

2006-12-01

195

High angular resolution diffusion imaging (HARDI) has become an important magnetic resonance technique for in vivo imaging. Current techniques for estimating the diffusion orientation distribution function (ODF), i.e., the probability density function of water diffusion along any direction, do not enforce the estimated ODF to be nonnegative or to sum up to one. Very often this leads to an estimated ODF which is not a proper probability density function. In addition, current methods do not enforce any spatial regularity of the data. In this paper, we propose an estimation method that naturally constrains the estimated ODF to be a proper probability density function and regularizes this estimate using spatial information. By making use of the spherical harmonic representation, we pose the ODF estimation problem as a convex optimization problem and propose a coordinate descent method that converges to the minimizer of the proposed cost function. We illustrate our approach with experiments on synthetic and real data. PMID:20426071

Goh, Alvina; Lenglet, Christophe; Thompson, Paul M; Vidal, René

2009-01-01

196

On the Choice of Smoothing Parameters for Parzen Estimators of Probability Density Functions

Parzen estimators are often used for nonparametric estimation of probability density functions. The smoothness of such an estimation is controlled by the smoothing parameter. A problem-dependent criterion for its value is proposed and illustrated by some examples. Especially in multimodal situations, this criterion led to good results.

Robert P. W. Duin

1976-01-01
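A minimal Parzen (kernel) density estimator with a Gaussian kernel, showing the role of the smoothing parameter h discussed in the entry above; Duin's problem-dependent criterion for choosing h is not reproduced here, and the Gaussian kernel is an illustrative choice.

```python
import numpy as np

def parzen_estimate(x, data, h):
    """Parzen density estimate at points x from 1-D samples `data`.
    h controls the bias (large h) vs variance (small h) trade-off."""
    x = np.asarray(x)[:, None]                       # evaluation points, column
    u = (x - np.asarray(data)[None, :]) / h          # scaled distances to samples
    k = np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)     # Gaussian kernel values
    return k.mean(axis=1) / h                        # average kernel, rescaled
```

Evaluating the estimate on a grid and varying h makes the multimodality issue in the abstract concrete: too large an h merges modes, too small an h produces spurious ones.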

197

Numerical estimate of infinite invariant densities: application to Pesin-type identity

NASA Astrophysics Data System (ADS)

Weakly chaotic maps with unstable fixed points are investigated in the regime where the invariant density is non-normalizable. We propose that the infinite invariant density ρ̄(x) of these maps can be estimated using ρ̄(x) = lim_{t→∞} t^(1−α) ρ(x, t), in agreement with earlier work of Thaler. Here ρ(x, t) is the normalized density of particles. This definition uniquely determines the infinite density and is a valuable tool for numerical estimation. We use this density to estimate the sub-exponential separation λ_α of nearby trajectories. For a particular map introduced by Thaler we use an analytical expression for the infinite invariant density to calculate λ_α exactly, which perfectly matches simulations without fitting. A misunderstanding that recently appeared in the literature is resolved.

Korabel, Nickolay; Barkai, Eli

2013-08-01
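The scaling estimate above can be tried numerically. The sketch below uses the Pomeau-Manneville map x → x + x^z (mod 1) as a stand-in for the class of weakly chaotic maps (the specific map and the relation α = 1/(z−1) are assumptions of this illustration, not details from the entry): evolve an ensemble, histogram it at time t, and rescale by t^(1−α).

```python
import numpy as np

def infinite_density_estimate(z=3.0, n_particles=20000, t=200, bins=50, seed=0):
    """Estimate the infinite invariant density via rho_bar(x) ~ t**(1-alpha) * rho(x, t),
    where rho(x, t) is the normalized histogram of an ensemble at time t."""
    alpha = 1.0 / (z - 1.0)                    # assumed exponent for this map
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.0, 1.0, n_particles)     # uniform initial ensemble
    for _ in range(t):
        x = (x + x**z) % 1.0                   # Pomeau-Manneville iteration
    rho, edges = np.histogram(x, bins=bins, range=(0.0, 1.0), density=True)
    return t**(1.0 - alpha) * rho, edges
```

The rescaled histogram diverges near the marginal fixed point at x = 0, which is the signature of a non-normalizable invariant density.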

198

Probabilistic Analysis and Density Parameter Estimation Within Nessus

NASA Technical Reports Server (NTRS)

This NASA educational grant has the goal of promoting probabilistic analysis methods to undergraduate and graduate UTSA engineering students. Two undergraduate-level and one graduate-level course were offered at UTSA providing a large number of students exposure to and experience in probabilistic techniques. The grant provided two research engineers from Southwest Research Institute the opportunity to teach these courses at UTSA, thereby exposing a large number of students to practical applications of probabilistic methods and state-of-the-art computational methods. In classroom activities, students were introduced to the NESSUS computer program, which embodies many algorithms in probabilistic simulation and reliability analysis. Because the NESSUS program is used at UTSA in both student research projects and selected courses, a student version of a NESSUS manual has been revised and improved, with additional example problems being added to expand the scope of the example application problems. This report documents two research accomplishments in the integration of a new sampling algorithm into NESSUS and in the testing of the new algorithm. The new Latin Hypercube Sampling (LHS) subroutines use the latest NESSUS input file format and specific files for writing output. The LHS subroutines are called out early in the program so that no unnecessary calculations are performed. Proper correlation between sets of multidimensional coordinates can be obtained by using NESSUS' LHS capabilities. Finally, two types of correlation are written to the appropriate output file. The program enhancement was tested by repeatedly estimating the mean, standard deviation, and 99th percentile of four different responses using Monte Carlo (MC) and LHS. These test cases, put forth by the Society of Automotive Engineers, are used to compare probabilistic methods. 
For all test cases, it is shown that LHS has a lower estimation error than MC when used to estimate the mean, standard deviation, and 99th percentile of the four responses at the 50 percent confidence level and using the same number of response evaluations for each method. In addition, LHS requires fewer calculations than MC in order to be 99.7 percent confident that a single mean, standard deviation, or 99th percentile estimate will be within at most 3 percent of the true value of each parameter. Again, this is shown for all of the test cases studied. For that reason it can be said that NESSUS is an important reliability tool that has a variety of sound probabilistic methods a user can employ; furthermore, the newest LHS module is a valuable enhancement of the program.

Godines, Cody R.; Manteufel, Randall D.; Chamis, Christos C. (Technical Monitor)

2002-01-01
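The Latin Hypercube Sampling scheme described above can be sketched in a few lines (an illustrative sketch of standard LHS on the unit hypercube, not the NESSUS subroutines): each dimension is split into n equal strata, one point is drawn per stratum, and the strata are shuffled independently per dimension.

```python
import numpy as np

def latin_hypercube(n, d, rng=None):
    """Latin hypercube sample of n points in [0,1]^d: each of the n
    equal-width strata in every dimension contains exactly one point."""
    rng = np.random.default_rng(rng)
    # one uniform draw inside each stratum, per dimension
    u = (np.arange(n)[:, None] + rng.uniform(size=(n, d))) / n
    for j in range(d):
        u[:, j] = rng.permutation(u[:, j])   # decouple the dimensions
    return u
```

The stratification is why LHS estimates of means and percentiles typically have lower error than plain Monte Carlo for the same number of response evaluations, as the report found.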

199

Probabilistic Analysis and Density Parameter Estimation Within Nessus

NASA Astrophysics Data System (ADS)

This NASA educational grant has the goal of promoting probabilistic analysis methods to undergraduate and graduate UTSA engineering students. Two undergraduate-level and one graduate-level course were offered at UTSA providing a large number of students exposure to and experience in probabilistic techniques. The grant provided two research engineers from Southwest Research Institute the opportunity to teach these courses at UTSA, thereby exposing a large number of students to practical applications of probabilistic methods and state-of-the-art computational methods. In classroom activities, students were introduced to the NESSUS computer program, which embodies many algorithms in probabilistic simulation and reliability analysis. Because the NESSUS program is used at UTSA in both student research projects and selected courses, a student version of a NESSUS manual has been revised and improved, with additional example problems being added to expand the scope of the example application problems. This report documents two research accomplishments in the integration of a new sampling algorithm into NESSUS and in the testing of the new algorithm. The new Latin Hypercube Sampling (LHS) subroutines use the latest NESSUS input file format and specific files for writing output. The LHS subroutines are called out early in the program so that no unnecessary calculations are performed. Proper correlation between sets of multidimensional coordinates can be obtained by using NESSUS' LHS capabilities. Finally, two types of correlation are written to the appropriate output file. The program enhancement was tested by repeatedly estimating the mean, standard deviation, and 99th percentile of four different responses using Monte Carlo (MC) and LHS. These test cases, put forth by the Society of Automotive Engineers, are used to compare probabilistic methods. 
For all test cases, it is shown that LHS has a lower estimation error than MC when used to estimate the mean, standard deviation, and 99th percentile of the four responses at the 50 percent confidence level and using the same number of response evaluations for each method. In addition, LHS requires fewer calculations than MC in order to be 99.7 percent confident that a single mean, standard deviation, or 99th percentile estimate will be within at most 3 percent of the true value of each parameter. Again, this is shown for all of the test cases studied. For that reason it can be said that NESSUS is an important reliability tool that has a variety of sound probabilistic methods a user can employ; furthermore, the newest LHS module is a valuable enhancement of the program.

Godines, Cody R.; Manteufel, Randall D.

2002-12-01

200

Radiation Pressure Detection and Density Estimate for 2011 MD

NASA Astrophysics Data System (ADS)

We present our astrometric observations of the small near-Earth object 2011 MD (H ~ 28.0), obtained after its very close fly-by to Earth in 2011 June. Our set of observations extends the observational arc to 73 days, and, together with the published astrometry obtained around the Earth fly-by, allows a direct detection of the effect of radiation pressure on the object, with a confidence of 5σ. The detection can be used to put constraints on the density of the object, pointing to either an unexpectedly low value of ρ = (640 ± 330) kg m^(-3) (68% confidence interval) if we assume a typical probability distribution for the unknown albedo, or to an unusually high reflectivity of its surface. This result may have important implications both in terms of impact hazard from small objects and in light of a possible retrieval of this target.

Micheli, Marco; Tholen, David J.; Elliott, Garrett T.

2014-06-01

201

A comparison of 2 techniques for estimating deer density

We applied mark-resight and area-conversion methods to estimate deer abundance at a 2,862-ha area in and surrounding the Gettysburg National Military Park and Eisenhower National Historic Site during 1987-1991. One observer in each of 11 compartments counted marked and unmarked deer during 65-75 minutes at dusk during 3 counts in each of April and November. Use of radio-collars and vinyl collars provided a complete inventory of marked deer in the population prior to the counts. We sighted 54% of the marked deer during April 1987 and 1988, and 43% of the marked deer during November 1987 and 1988. Mean number of deer counted increased from 427 in April 1987 to 582 in April 1991, and increased from 467 in November 1987 to 662 in November 1990. Herd size during April, based on the mark-resight method, increased from approximately 700 to 1,400 over 1987-1991, whereas the estimates for November indicated an increase from 983 for 1987 to 1,592 for 1990. Given the large proportion of open area and the extensive road system throughout the study area, we concluded that the sighting probability for marked and unmarked deer was fairly similar. We believe that the mark-resight method was better suited to our study than the area-conversion method because deer were not evenly distributed between areas suitable and unsuitable for sighting within open and forested areas. The assumption of equal distribution is required by the area-conversion method. Deer marked for the mark-resight method also helped reduce double counting during the dusk surveys.

Robbins, C.S.

1977-01-01
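The core mark-resight calculation in the entry above reduces to dividing the count by the sighting probability of marked animals (a Lincoln-Petersen-style sketch; the study's actual estimator and corrections may differ):

```python
def mark_resight_estimate(total_counted, sighting_prob):
    """Mark-resight abundance estimate: if a fraction `sighting_prob` of
    marked animals was seen, scale the raw count up by that fraction."""
    return total_counted / sighting_prob
```

For example, `mark_resight_estimate(427, 0.54)` (427 deer counted, 54% of marked deer sighted, both figures from the April 1987 data above) gives roughly 791; this is only an illustration of the arithmetic, not a reproduction of the study's published estimate.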

202

The estimation of the gradient of a density function, with applications in pattern recognition

Nonparametric density gradient estimation using a generalized kernel approach is investigated. Conditions on the kernel functions are derived to guarantee asymptotic unbiasedness, consistency, and uniform consistency of the estimates. The results are generalized to obtain a simple mean-shift estimate that can be extended in a k-nearest-neighbor approach. Applications of gradient estimation to pattern recognition are presented using clustering and intrinsic dimensionality

Keinosuke Fukunaga; Larry D. Hostetler

1975-01-01
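The mean-shift estimate mentioned above moves a point to the kernel-weighted mean of the data, which points along the density gradient; iterating converges to a mode. A minimal sketch with a Gaussian kernel (kernel choice and bandwidth h are illustrative):

```python
import numpy as np

def mean_shift_step(x, data, h):
    """One mean-shift step: move x to the Gaussian-kernel-weighted mean
    of the data points, i.e. in the direction of the density gradient."""
    w = np.exp(-0.5 * np.sum((data - x)**2, axis=1) / h**2)
    return (w[:, None] * data).sum(axis=0) / w.sum()

def mean_shift(x, data, h, iters=100, tol=1e-6):
    """Iterate mean-shift steps until the update is smaller than tol,
    converging to a mode of the kernel density estimate."""
    for _ in range(iters):
        x_new = mean_shift_step(x, data, h)
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x
```

Running `mean_shift` from many starting points and grouping the modes they reach is the clustering application the abstract refers to.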

203

MCLUST: Software for Model-Based Clustering, Density Estimation and Discriminant Analysis.

National Technical Information Service (NTIS)

MCLUST is a software package for model-based clustering, density estimation and discriminant analysis interfaced to the S-PLUS commercial software. It implements parameterized Gaussian hierarchical clustering algorithms and the EM algorithm for parameteri...

A. E. Raftery C. Fraley

2002-01-01

204

Dimensionality reduction for density ratio estimation in high-dimensional spaces.

The ratio of two probability density functions is becoming a quantity of interest these days in the machine learning and data mining communities since it can be used for various data processing tasks such as non-stationarity adaptation, outlier detection, and feature selection. Recently, several methods have been developed for directly estimating the density ratio without going through density estimation and were shown to work well in various practical problems. However, these methods still perform rather poorly when the dimensionality of the data domain is high. In this paper, we propose to incorporate a dimensionality reduction scheme into a density-ratio estimation procedure and experimentally show that the estimation accuracy in high-dimensional cases can be improved. PMID:19631506

Sugiyama, Masashi; Kawanabe, Motoaki; Chui, Pui Ling

2010-01-01
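One of the direct density-ratio estimators alluded to above admits a closed-form least-squares solution in the style of uLSIF (this is a sketch under that assumption, not the authors' implementation, and it omits the dimensionality-reduction step that is the paper's contribution): model the ratio as a linear combination of Gaussian kernels and solve a regularized linear system.

```python
import numpy as np

def density_ratio_ulsif(x_nu, x_de, centers, sigma=1.0, lam=1e-3):
    """Fit r(x) = p_nu(x)/p_de(x) as sum_k theta_k * K(x, c_k) by
    regularized least squares; returns a callable ratio estimate."""
    def phi(x):
        # Gaussian kernel features against the chosen centers
        d2 = ((x[:, None, :] - centers[None, :, :])**2).sum(-1)
        return np.exp(-d2 / (2 * sigma**2))
    Phi_de, Phi_nu = phi(x_de), phi(x_nu)
    H = Phi_de.T @ Phi_de / len(x_de)    # second moment under the denominator
    h = Phi_nu.mean(axis=0)              # first moment under the numerator
    theta = np.linalg.solve(H + lam * np.eye(len(centers)), h)
    return lambda x: phi(x) @ theta
```

Note that no intermediate density estimate is formed: the ratio is fit directly, which is what makes this family of methods attractive for the tasks (covariate shift, outlier detection) listed in the abstract.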

205

Density Estimation with Confidence Sets Exemplified by Superclusters and Voids in the Galaxies

A method is presented for forming both a point estimate and a confidence set of semiparametric densities. The final product is a three-dimensional figure that displays a selection of density estimates for a plausible range of smoothing parameters. The boundaries of the smoothing parameter are determined by a nonparametric goodness-of-fit test that is based on the sample spacings. For each

Kathryn Roeder

1990-01-01

206

Density meter algorithm and system for estimating sampling/mixing uncertainty

The Laboratories Department at the Savannah River Plant (SRP) has installed a six-place density meter with an automatic sampling device. This paper describes the statistical software developed to analyze the density of uranyl nitrate solutions using this automated system. The purpose of this software is twofold: to estimate the sampling/mixing and measurement uncertainties in the process and to provide a measurement control program for the density meter. Non-uniformities in density are analyzed both analytically and graphically. The mean density and its limit of error are estimated. Quality control standards are analyzed concurrently with process samples and used to control the density meter measurement error. The analyses are corrected for concentration due to evaporation of samples waiting to be analyzed. The results of this program have been successful in identifying sampling/mixing problems and controlling the quality of analyses.

Shine, E.P.

1986-01-01

207

Analytical form for a Bayesian wavelet estimator of images using the Bessel K form densities

A novel Bayesian nonparametric estimator in the wavelet domain is presented. In this approach, a prior model is imposed on the wavelet coefficients designed to capture the sparseness of the wavelet expansion. Seeking probability models for the marginal densities of the wavelet coefficients, the new family of Bessel K forms (BKF) densities are shown to fit very well to the

Mohamed-jalal Fadili; Larbi Boubchir

2005-01-01

208

Three Comparative Approaches for Breast Density Estimation in Digital and Screen Film Mammograms

In general, several factors are used for risk estimation in breast cancer detection and early prevention, and one of the important factors in breast cancer risk is breast density. Mammography is an important and effective adjunct in diagnosing breast cancer. Radiologists visually analyze breast density with the BI-RADS lexicon on mammograms. However, this usually causes

Ruey-Feng Chang; Kuang-Che Chang-Chien; Etsuo Takada; Jasjit S. Suri; Woo Kyung Moon; J. H. K. Wu; Nariya Cho; Yi-Fa Wang; Dar-Ren Chen

2006-01-01

209

In this paper, a fault detection and diagnosis (FDD) scheme for a class of discrete nonlinear system faults using output probability density estimation is presented. Unlike classical FDD problems, the measured output of the system is viewed as a stochastic process and its square root probability density function (PDF) is modeled with B-spline functions, which leads to a deterministic space-time

Yumin Zhang; Qing-Guo Wang; Kai-Yew Lum

2008-01-01

210

A fault diagnosis scheme for time-varying fault using output probability density estimation

In this paper, a fault diagnosis scheme for a class of time-varying faults using output probability density estimation is presented. The system studied is a nonlinear system with time delays. The measured output is viewed as a stochastic process and its probability density function (PDF) is modeled, which leads to a deterministic dynamical model including nonlinearities and uncertainties. The fault considered

Yumin Zhang; Qing-Guo Wang; Kai-Yew Lum

2008-01-01

211

Effect of size of superconductor on estimation of critical current density using ac inductive method

Applicability of the ac inductive method for estimation of the critical current density, Jc, in superconductors of relatively small size is theoretically investigated in terms of the Campbell model for pinning force density vs the displacement of fluxoids. It is shown that Jc is significantly overestimated by the conventional method due to reversible fluxoid motion for superconductors comparable to or

Nozomu Ohtani; Edmund S. Otabe; Teruo Matsushita; Baorong Ni

1992-01-01

212

Autocorrelation-based estimate of particle image density for diffraction limited particle images

NASA Astrophysics Data System (ADS)

In particle image velocimetry (PIV), the number of particle images per interrogation region, or particle image density, impacts the strength of the correlation and, as a result, the number of valid vectors and the measurement uncertainty. For some uncertainty methods, an a priori estimate of the uncertainty of PIV requires knowledge of the particle image density. An autocorrelation-based method for estimating the local, instantaneous, particle image density is presented. The method assumes that the particle images are diffraction limited and thus Gaussian in shape. Synthetic images are used to develop an empirical relationship between the autocorrelation peak magnitude and the particle image density, particle image diameter, particle image intensity, and interrogation region size. This relationship is tested using experimental images. The experimental results are compared to particle image densities obtained through implementing a local maximum method and are found to be more robust. The effect of varying particle image intensities was also investigated and is found to affect the measurement of the particle image density. Knowledge of the particle image density in PIV facilitates uncertainty estimation, and can alert the user that particle image density is too low or too high, even if these conditions are intermittent. This information can be used as a new vector validation criterion for PIV processing. In addition, use of this method is not limited to PIV, but it can be used to determine the density of any image with diffraction limited particle images.

Warner, Scott O.; Smith, Barton L.

2014-06-01

213

A bound for the smoothing parameter in certain well-known nonparametric density estimators

NASA Technical Reports Server (NTRS)

Two classes of nonparametric density estimators, the histogram and the kernel estimator, both require a choice of smoothing parameter, or 'window width'. The optimum choice of this parameter is in general very difficult. An upper bound to the choices that depends only on the standard deviation of the distribution is described.

Terrell, G. R.

1980-01-01
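As an illustrative aside, one widely cited form of such an upper bound, for a Gaussian kernel, is Terrell's "maximal smoothing" bandwidth h <= 1.144 * s * n^(-1/5), where s is the sample standard deviation. The constant 1.144 is an assumption on my part (the abstract does not state it), so treat this pure-Python sketch as a hedged illustration rather than the paper's own result:

```python
import math
import random

def oversmoothed_bandwidth(data):
    """Upper bound on the KDE bandwidth for a Gaussian kernel, in the
    spirit of the maximal smoothing principle:
    h <= 1.144 * s * n**(-1/5)  (constant assumed, see lead-in)."""
    n = len(data)
    mean = sum(data) / n
    s = math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))
    return 1.144 * s * n ** (-0.2)

def gaussian_kde(data, h):
    """Return a Gaussian kernel density estimate f(x) with bandwidth h."""
    n = len(data)
    c = 1.0 / (n * h * math.sqrt(2.0 * math.pi))
    return lambda x: c * sum(math.exp(-0.5 * ((x - xi) / h) ** 2) for xi in data)

random.seed(0)
sample = [random.gauss(0.0, 1.0) for _ in range(500)]
h_max = oversmoothed_bandwidth(sample)  # any larger h oversmooths
f = gaussian_kde(sample, h_max)
```

Choosing a bandwidth at or just below this bound guards against undersmoothing at the cost of possibly blurring fine structure, which is exactly the trade-off the abstract alludes to.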

214

Item Response Theory with Estimation of the Latent Density Using Davidian Curves

ERIC Educational Resources Information Center

Davidian-curve item response theory (DC-IRT) is introduced, evaluated with simulations, and illustrated using data from the Schedule for Nonadaptive and Adaptive Personality Entitlement scale. DC-IRT is a method for fitting unidimensional IRT models with maximum marginal likelihood estimation, in which the latent density is estimated,…

Woods, Carol M.; Lin, Nan

2009-01-01

215

ERIC Educational Resources Information Center

The purpose of this paper is to introduce a new method for fitting item response theory models with the latent population distribution estimated from the data using splines. A spline-based density estimation system provides a flexible alternative to existing procedures that use a normal distribution, or a different functional form, for the…

Woods, Carol M.; Thissen, David

2006-01-01

216

Standardized methods of data collection and analysis ensure quality and facilitate comparisons among systems. We evaluated the importance of three recommendations from the Standard Operating Procedure for hydroacoustics in the Laurentian Great Lakes (GLSOP) on density estimates of target species: noise subtraction; setting volume backscattering strength (Sv) thresholds from user-defined minimum target strength (TS) of interest (TS-based Sv threshold); and calculations of an index for multiple targets (Nv index) to identify and remove biased TS values. Eliminating noise had the predictable effect of decreasing density estimates in most lakes. Using the TS-based Sv threshold decreased fish densities in the middle and lower layers in the deepest lakes with abundant invertebrates (e.g., Mysis diluviana). Correcting for biased in situ TS increased measured density up to 86% in the shallower lakes, which had the highest fish densities. The current recommendations by the GLSOP significantly influence acoustic density estimates, but the degree of importance is lake dependent. Applying GLSOP recommendations, whether in the Laurentian Great Lakes or elsewhere, will improve our ability to compare results among lakes. We recommend further development of standards, including minimum TS and analytical cell size, for reducing the effect of biased in situ TS on density estimates.

Kocovsky, Patrick M.; Rudstam, Lars G.; Yule, Daniel L.; Warner, David M.; Schaner, Ted; Pientka, Bernie; Deller, John W.; Waterfield, Holly A.; Witzel, Larry D.; Sullivan, Patrick J.

2013-01-01

217

The analysis of surface EMG signals with the wavelet-based correlation dimension method.

Many attempts have been made to effectively improve a prosthetic system controlled by the classification of surface electromyographic (SEMG) signals. However, the development of methodologies to extract effective features remains a primary challenge. Previous studies have demonstrated that SEMG signals have nonlinear characteristics. In this study, by combining nonlinear time series analysis and time-frequency domain methods, we propose the wavelet-based correlation dimension method to extract effective features of SEMG signals. The SEMG signals were first analyzed by the wavelet transform, and the correlation dimension was calculated to obtain the features of the SEMG signals. These features were then used as the input vectors of a Gustafson-Kessel clustering classifier to discriminate four types of forearm movements. Our results showed four separate clusters corresponding to the different forearm movements at the third resolution level, and the resulting classification accuracy was 100% when two channels of SEMG signals were used. This indicates that the proposed approach provides insight into the nonlinear characteristics and the time-frequency domain features of SEMG signals and is suitable for classifying different types of forearm movements. Compared with other existing methods, the proposed method exhibited greater robustness and higher classification accuracy. PMID:24868240

Wang, Gang; Zhang, Yanyan; Wang, Jue

2014-01-01
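The correlation dimension used above is usually computed with the Grassberger-Procaccia algorithm. The following minimal sketch (radii and a 2-D test set chosen arbitrarily for illustration) shows the basic recipe, not the paper's wavelet-domain variant:

```python
import math
import random

def correlation_sum(points, r):
    """Grassberger-Procaccia correlation sum C(r): the fraction of
    point pairs lying closer together than r."""
    n = len(points)
    count = 0
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(points[i], points[j]) < r:
                count += 1
    return 2.0 * count / (n * (n - 1))

def correlation_dimension(points, r1, r2):
    """Estimate the correlation dimension as the slope of log C(r)
    versus log r between two radii."""
    c1 = correlation_sum(points, r1)
    c2 = correlation_sum(points, r2)
    return (math.log(c2) - math.log(c1)) / (math.log(r2) - math.log(r1))

random.seed(3)
# points filling a 2-D square should have correlation dimension near 2
square = [(random.random(), random.random()) for _ in range(400)]
dim = correlation_dimension(square, 0.05, 0.2)
```

In the paper's setting the point set would be a delay embedding of the wavelet-filtered SEMG signal rather than a synthetic square; the estimator itself is the same.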

218

A wavelet-based image quality metric for the assessment of 3D synthesized views

NASA Astrophysics Data System (ADS)

In this paper we present a novel image quality assessment technique for evaluating virtual synthesized views in the context of multi-view video. In particular, Free Viewpoint Videos are generated from uncompressed color views and their compressed associated depth maps by means of the View Synthesis Reference Software provided by MPEG. Prior to the synthesis step, the original depth maps are encoded with different coding algorithms, thus leading to the creation of additional artifacts in the synthesized views. The core of the proposed wavelet-based metric lies in the registration procedure performed to align the synthesized view with the original one, and in the skin detection step, applied because the same distortion is more annoying when visible on human subjects than on other parts of the scene. The effectiveness of the metric is evaluated by analyzing the correlation of its scores with Mean Opinion Scores collected by means of subjective tests. The achieved results are also compared against those of well-known objective quality metrics. The experimental results confirm the effectiveness of the proposed metric.

Bosc, Emilie; Battisti, Federica; Carli, Marco; Le Callet, Patrick

2013-03-01

219

A wavelet-based approach to detecting liveness in fingerprint scanners

NASA Astrophysics Data System (ADS)

In this work, a method for fingerprint vitality authentication is introduced, in order to reduce the vulnerability of fingerprint identification systems to spoofing. The method aims at detecting 'liveness' in fingerprint scanners by using the physiological phenomenon of perspiration. A wavelet-based approach is used which concentrates on the changing coefficients using the zoom-in property of wavelets. Multiresolution analysis and wavelet packet analysis are used to extract information from the low-frequency and high-frequency content of the images, respectively. A Daubechies wavelet is designed and implemented to perform the wavelet analysis. A threshold is applied to the first difference of the information in all the sub-bands. The energy content of the changing coefficients is used as a quantified measure to perform the desired classification, as they reflect a perspiration pattern. A data set of approximately 30 live, 30 spoof, and 14 cadaver fingerprint images was divided, with the first half used as training data and the other half as testing data. The proposed algorithm was applied to the training data set and was able to completely classify 'live' fingers from 'not live' fingers, thus providing a method for enhanced security and improved spoof protection.

Abhyankar, Aditya S.; Schuckers, Stephanie C.

2004-08-01

220

A new approach to pre-processing digital image for wavelet-based watermark

NASA Astrophysics Data System (ADS)

The growth of the Internet has increased the phenomenon of digital piracy in multimedia objects such as software, images, video, audio, and text. It is therefore strategic to identify and develop methods and numerical algorithms, stable and of low computational cost, that address these problems. We describe a digital watermarking algorithm for color image protection and authenticity: robust, non-blind, and wavelet-based. The use of the Discrete Wavelet Transform is motivated by its good time-frequency features and good match with Human Visual System directives. These two combined elements are important for building an invisible and robust watermark. Moreover, our algorithm can work with any image, thanks to a pre-processing step that includes resize techniques adapting the size of the original image for the wavelet transform. The watermark signal is calculated in correlation with the image features and statistical properties. In the detection step we apply a re-synchronization between the original and watermarked image according to the Neyman-Pearson statistical criterion. Experimentation on a large set of different images has shown the method to be resistant against geometric, filtering, and StirMark attacks, with a low rate of false alarms.

Agreste, Santa; Andaloro, Guido

2008-11-01

221

NASA Astrophysics Data System (ADS)

Electrical Impedance Tomography (EIT) is a soft-field tomography modality, where image reconstruction is formulated as a non-linear least-squares model-fitting problem. The Newton-Raphson scheme is used for actually reconstructing the image, and this involves three main steps: forward solving, computation of the Jacobian, and computation of the conductivity update. Forward solving relies typically on the finite element method, resulting in the solution of a sparse linear system. In typical three-dimensional biomedical applications of EIT, like breast, prostate, or brain imaging, it is desirable to work with sufficiently fine meshes in order to properly capture the shape of the domain and of the electrodes, and to describe the resulting electric field with accuracy. These requirements result in meshes with 100,000 nodes or more. The solution of the resulting forward problems is computationally intensive. We address this aspect by speeding up the solution of the FEM linear system by the use of efficient numerical methods and of new hardware architectures. In particular, in terms of numerical methods, we solve the forward problem using the Conjugate Gradient method, with a wavelet-based algebraic multigrid (AMG) preconditioner. This preconditioner is faster to set up than other AMG preconditioners not based on wavelets, uses less memory, and provides for faster convergence. We report results for a MATLAB-based prototype algorithm, and we discuss details of a work in progress for a GPU implementation.

Borsic, A.; Bayford, R.

2010-04-01
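For readers unfamiliar with the forward-solver building block mentioned above, a bare-bones unpreconditioned Conjugate Gradient iteration for a small symmetric positive-definite system looks like the following. The wavelet-based AMG preconditioner of the paper is omitted, and the 3x3 Laplacian is an illustrative stand-in for a real FEM matrix:

```python
def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Solve A x = b for symmetric positive-definite A (dense lists).
    tol is a tolerance on the squared residual norm."""
    n = len(b)
    x = [0.0] * n
    r = b[:]              # residual b - A x, with x = 0 initially
    p = r[:]
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]
        rs = rs_new
    return x

# 1-D Laplacian stencil (SPD), the kind of matrix a FEM forward solve yields
A = [[2.0, -1.0, 0.0], [-1.0, 2.0, -1.0], [0.0, -1.0, 2.0]]
b = [1.0, 0.0, 1.0]
x = conjugate_gradient(A, b)
```

A preconditioner, such as the wavelet-based AMG described in the abstract, would be applied to r at each step to cut the iteration count on large ill-conditioned meshes.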

222

The Analysis of Surface EMG Signals with the Wavelet-Based Correlation Dimension Method

Many attempts have been made to effectively improve a prosthetic system controlled by the classification of surface electromyographic (SEMG) signals. However, the development of methodologies to extract effective features remains a primary challenge. Previous studies have demonstrated that SEMG signals have nonlinear characteristics. In this study, by combining nonlinear time series analysis and time-frequency domain methods, we propose the wavelet-based correlation dimension method to extract effective features of SEMG signals. The SEMG signals were first analyzed by the wavelet transform, and the correlation dimension was calculated to obtain the features of the SEMG signals. These features were then used as the input vectors of a Gustafson-Kessel clustering classifier to discriminate four types of forearm movements. Our results showed four separate clusters corresponding to the different forearm movements at the third resolution level, and the resulting classification accuracy was 100% when two channels of SEMG signals were used. This indicates that the proposed approach provides insight into the nonlinear characteristics and the time-frequency domain features of SEMG signals and is suitable for classifying different types of forearm movements. Compared with other existing methods, the proposed method exhibited greater robustness and higher classification accuracy.

Zhang, Yanyan; Wang, Jue

2014-01-01

223

Application of wavelet-based neural network on DNA microarray data

The advantage of using DNA microarray data when investigating human cancer gene expression is its ability to generate enormous amounts of information from a single assay, in order to speed up the scientific evaluation process. The number of variables in the gene expression data, coupled with a comparatively small number of samples, creates new challenges for scientists and statisticians. In particular, the problems include an enormous degree of collinearity among gene expressions, likely violation of model assumptions, as well as high levels of noise with potential outliers. To deal with these problems, we propose a block wavelet shrinkage principal component analysis (BWSPCA) method to optimize the information retained during the noise reduction process. This paper first uses the National Cancer Institute database (NCI60) as an illustration and shows a significant improvement in dimension reduction. Secondly, we combine BWSPCA with an artificial neural network-based gene minimization strategy to establish a Block Wavelet-based Neural Network model (BWNN) for a robust and accurate cancer classification process. Our extensive experiments on six public cancer datasets have shown that the BWNN method for tumor classification performed well, especially on some difficult instances with large-class (more than two) expression data. The proposed method is extremely useful for data denoising and is competitive with respect to other methods such as BagBoost, RandomForest (RanFor), Support Vector Machines (SVM), K-Nearest Neighbor (KNN), and Artificial Neural Networks (ANN).

Lee, Jack; Zee, Benny

2008-01-01

224

Testing a wavelet based noise reduction method using computer-simulated mammograms

NASA Astrophysics Data System (ADS)

A wavelet-based method of noise reduction has been tested for mammography using computer-simulated images for which the truth is known exactly. The method is based on comparing two images at different scales, using a cross-correlation function as a measure of similarity to define the image modifications in the wavelet domain. The computer-simulated images were calculated for noise-free primary radiation using a quasi-realistic voxel phantom. Two images corresponding to slightly different geometry were produced. Gaussian noise was added with a mean value of zero and a standard deviation equal to 0.25% to 10% of the actual pixel value, to simulate quantum noise at a given level. The added noise could be reduced by more than 70% using the proposed method, without any noticeable corruption of the structures, for 4% added noise. The results indicate that it is possible to save 50% of the dose in mammography by producing two images (each at 25% of the dose of a standard mammogram). Additionally, a reduction or even a removal of the anatomical noise might be possible, and therefore better detection rates of breast cancer in mammography might be achievable.

Hoeschen, Christoph; Tischenko, Oleg; Dance, David R.; Hunt, Roger A.; Maidment, Andrew D. A.; Bakic, Predrag R.

2005-04-01
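The abstract's noise reduction works by modifying wavelet coefficients. A minimal single-level Haar soft-thresholding sketch, which is not the paper's cross-correlation scheme and uses an arbitrary threshold, illustrates the general mechanism of wavelet-domain denoising:

```python
import math
import random

def haar_forward(signal):
    """One level of the orthonormal Haar transform: (approximation, detail)."""
    a = [(signal[2 * i] + signal[2 * i + 1]) / math.sqrt(2)
         for i in range(len(signal) // 2)]
    d = [(signal[2 * i] - signal[2 * i + 1]) / math.sqrt(2)
         for i in range(len(signal) // 2)]
    return a, d

def haar_inverse(a, d):
    """Invert one level of the Haar transform."""
    out = []
    for ai, di in zip(a, d):
        out.append((ai + di) / math.sqrt(2))
        out.append((ai - di) / math.sqrt(2))
    return out

def denoise(signal, threshold):
    """Soft-threshold the detail coefficients and reconstruct."""
    a, d = haar_forward(signal)
    d = [math.copysign(max(abs(x) - threshold, 0.0), x) for x in d]
    return haar_inverse(a, d)

random.seed(1)
clean = [math.sin(2 * math.pi * i / 64) for i in range(64)]
noisy = [c + random.gauss(0, 0.2) for c in clean]
den = denoise(noisy, 0.3)
err_noisy = sum((n - c) ** 2 for n, c in zip(noisy, clean))
err_den = sum((r - c) ** 2 for r, c in zip(den, clean))
```

A multi-level transform and a data-driven rule for picking the threshold (the paper derives one from the cross-correlation of two acquisitions) would replace the fixed 0.3 in practice.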

225

Performance evaluation of wavelet-based face verification on a PDA recorded database

NASA Astrophysics Data System (ADS)

The rise of international terrorism and the rapid increase in fraud and identity theft have added urgency to the task of developing biometric-based person identification as a reliable alternative to conventional authentication methods. Human identification based on face images is a tough challenge in comparison to identification based on fingerprints or iris recognition. Yet, due to its unobtrusive nature, face recognition is the preferred method of identification for security-related applications. The success of such systems will depend on the support of massive infrastructures. Current mobile communication devices (3G smart phones) and PDAs are equipped with a camera, which can capture both still images and streaming video clips, and a touch-sensitive display panel. Besides convenience, such devices provide an adequate secure infrastructure for sensitive and financial transactions, by protecting against fraud and repudiation while ensuring accountability. Biometric authentication systems for mobile devices would have obvious advantages in conflict scenarios, when communication from beyond enemy lines is essential to save soldier and civilian lives, and in areas of conflict or disaster where fixed infrastructure is not available or has been destroyed. In this paper, we present a wavelet-based face verification scheme that has been specifically designed and implemented on a currently available PDA. We report on its performance on the benchmark audio-visual BANCA database and on a newly developed PDA-recorded audio-visual database that includes indoor and outdoor recordings.

Sellahewa, Harin; Jassim, Sabah A.

2006-06-01

226

With the growing aging population, a significant portion of which suffers from cardiac diseases, it is conceivable that remote ECG patient monitoring systems will be widely used as Point-of-Care (PoC) applications in hospitals around the world. Huge amounts of ECG signals collected by Body Sensor Networks (BSNs) from remote patients at home will therefore be transmitted, along with other physiological readings such as blood pressure, temperature, and glucose level, and diagnosed by those remote patient monitoring systems. It is vitally important that patient confidentiality is protected while data are transmitted over the public network, as well as when they are stored in the hospital servers used by remote monitoring systems. In this paper, a wavelet-based steganography technique is introduced which combines encryption and a scrambling technique to protect patients' confidential data. The proposed method allows an ECG signal to hide the corresponding patient's confidential data and other physiological information, thus guaranteeing the integration between the ECG and the rest. To evaluate the effect of the proposed technique on the ECG signal, two distortion measurement metrics have been used: the Percentage Residual Difference (PRD) and the Wavelet Weighted PRD (WWPRD). It is found that the proposed technique provides high security protection for patient data with low (less than 1%) distortion, and the ECG data remain diagnosable after watermarking (i.e., hiding patient confidential data) as well as after the watermarks (i.e., the hidden data) are removed from the watermarked data. PMID:23708767

Ibaida, Ayman; Khalil, Ibrahim

2013-05-21

227

NASA Astrophysics Data System (ADS)

In this paper, elastic wave propagation is studied in a nanocomposite reinforced with multiwall carbon nanotubes (CNTs). Analysis is performed on a representative volume element of square cross section. The frequency content of the exciting signal is at the terahertz level. Here, the composite is modeled as a higher order shear deformable beam using layerwise theory, to account for partial shear stress transfer between the CNTs and the matrix. The walls of the multiwall CNTs are considered to be connected throughout their length by distributed springs, whose stiffness is governed by the van der Waals force acting between the walls of nanotubes. The analyses in both the frequency and time domains are done using the wavelet-based spectral finite element method (WSFEM). The method uses the Daubechies wavelet basis approximation in time to reduce the governing PDE to a set of ODEs. These transformed ODEs are solved using a finite element (FE) technique by deriving an exact interpolating function in the transformed domain to obtain the exact dynamic stiffness matrix. Numerical analyses are performed to study the spectrum and dispersion relations for different matrix materials and also for different beam models. The effects of partial shear stress transfer between CNTs and matrix on the frequency response function (FRF) and the time response due to broadband impulse loading are investigated for different matrix materials. The simultaneous existence of four coupled propagating modes in a double-walled CNT-composite is also captured using modulated sinusoidal excitation.

Mitra, Mira; Gopalakrishnan, S.

2006-02-01

228

Interturn fault diagnosis of induction machines has been discussed using various neural network-based techniques. The main challenge in such methods is the computational complexity due to the huge size of the network, and in pruning a large number of parameters. In this paper, a nearly shift insensitive complex wavelet-based probabilistic neural network (PNN) model, which has only a single parameter to be optimized, is proposed for interturn fault detection. The algorithm constitutes two parts and runs in an iterative way. In the first part, the PNN structure determination has been discussed, which finds out the optimum size of the network using an orthogonal least squares regression algorithm, thereby reducing its size. In the second part, a Bayesian classifier fusion has been recommended as an effective solution for deciding the machine condition. The testing accuracy, sensitivity, and specificity values are highest for the product rule-based fusion scheme, which is obtained under load, supply, and frequency variations. The point of overfitting of PNN is determined, which reduces the size, without compromising the performance. Moreover, a comparative evaluation with traditional discrete wavelet transform-based method is demonstrated for performance evaluation and to appreciate the obtained results. PMID:24808044

Seshadrinath, Jeevanand; Singh, Bhim; Panigrahi, Bijaya Ketan

2014-05-01

229

Corrosion causes many failures in chemical process installations. These failures generate high costs, therefore an effective corrosion monitoring system obtrudes. This paper focuses on the classification of the most important corrosion processes: pitting, stress corrosion cracking (SCC) and general corrosion. The computations and algorithms involved in the classification of the corrosion time series are presented. A technique for trend removal

G. Van Dijck; M. Wevers; M. Van Hulle

230

Janssen created a classical theory based on calculus to estimate static vertical and horizontal pressures within beds of bulk corn. Even today, his equations are widely used to calculate static loadings imposed by granular materials stored in bins. Many standards, such as American Concrete Institute (ACI) 313, American Society of Agricultural and Biological Engineers EP 433, German DIN 1055, Canadian Farm Building Code (CFBC), European Code (ENV 1991-4), and Australian Code AS 3774, incorporate Janssen's equations as the standards for static load calculations on bins. One of the main drawbacks of Janssen's equations is the assumption that the bulk density of the stored product remains constant throughout the entire bin. While this is true for all practical purposes in small bins, in modern commercial-size bins the bulk density of grains substantially increases due to compressive and hoop stresses. Overpressure factors are applied to Janssen loadings to account for practical situations such as dynamic loads due to bin filling and emptying, but there are limited theoretical methods available that include the effects of increased bulk density on the grain loadings transmitted to the storage structures. This article develops a mathematical equation relating the specific weight to location and other variables of the material and storage. It was found that the bulk density of stored granular materials increases with depth according to a mathematical equation relating the two variables, and applying this bulk-density function, Janssen's equations for vertical and horizontal pressures were modified as presented in this article. The validity of this specific weight function was tested using the principles of mathematics. As expected, calculations of loads based on the modified equations were consistently higher than the Janssen loadings based on noncompacted bulk densities for all grain depths and types, accounting for the effects of increased bulk densities with bed heights. PMID:24804024

Haque, Ekramul

2013-03-01
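Janssen's classical vertical-pressure formula referred to above can be sketched as follows. The material parameters are illustrative values only, not those of any cited code or standard, and the constant-bulk-density form shown here is precisely the simplification the article sets out to relax:

```python
import math

def janssen_vertical_pressure(z, gamma, D, mu, K):
    """Janssen static vertical pressure (Pa) at depth z in a circular bin.
    gamma: bulk specific weight (N/m^3), D: bin diameter (m),
    mu: wall friction coefficient, K: lateral-to-vertical pressure ratio."""
    R = D / 4.0  # hydraulic radius of a circular cross section
    return (gamma * R / (mu * K)) * (1.0 - math.exp(-mu * K * z / R))

# illustrative (not code-standard) values for grain in a 6 m diameter bin
gamma, D, mu, K = 8000.0, 6.0, 0.4, 0.5
shallow = janssen_vertical_pressure(1.0, gamma, D, mu, K)
deep = janssen_vertical_pressure(30.0, gamma, D, mu, K)
asymptote = gamma * (D / 4.0) / (mu * K)  # pressure saturates at depth
```

Unlike hydrostatic pressure, the Janssen pressure saturates with depth because wall friction carries part of the grain weight; a depth-dependent gamma, as derived in the article, raises the whole curve.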

231

Janssen created a classical theory based on calculus to estimate static vertical and horizontal pressures within beds of bulk corn. Even today, his equations are widely used to calculate static loadings imposed by granular materials stored in bins. Many standards, such as American Concrete Institute (ACI) 313, American Society of Agricultural and Biological Engineers EP 433, German DIN 1055, Canadian Farm Building Code (CFBC), European Code (ENV 1991-4), and Australian Code AS 3774, incorporate Janssen's equations as the standards for static load calculations on bins. One of the main drawbacks of Janssen's equations is the assumption that the bulk density of the stored product remains constant throughout the entire bin. While this is true for all practical purposes in small bins, in modern commercial-size bins the bulk density of grains substantially increases due to compressive and hoop stresses. Overpressure factors are applied to Janssen loadings to account for practical situations such as dynamic loads due to bin filling and emptying, but there are limited theoretical methods available that include the effects of increased bulk density on the grain loadings transmitted to the storage structures. This article develops a mathematical equation relating the specific weight to location and other variables of the material and storage. It was found that the bulk density of stored granular materials increases with depth according to a mathematical equation relating the two variables, and applying this bulk-density function, Janssen's equations for vertical and horizontal pressures were modified as presented in this article. The validity of this specific weight function was tested using the principles of mathematics. As expected, calculations of loads based on the modified equations were consistently higher than the Janssen loadings based on noncompacted bulk densities for all grain depths and types, accounting for the effects of increased bulk densities with bed heights.

Haque, Ekramul

2013-01-01

232

Population size and landscape connectivity are key determinants of population viability, yet no methods exist for simultaneously estimating density and connectivity parameters. Recently developed spatial capture–recapture (SCR) models provide a framework for estimating density of animal populations but thus far have not been used to study connectivity. Rather, all applications of SCR models have used encounter probability models based on the Euclidean distance between traps and animal activity centers, which implies that home ranges are stationary, symmetric, and unaffected by landscape structure. In this paper we devise encounter probability models based on “ecological distance,” i.e., the least-cost path between traps and activity centers, which is a function of both Euclidean distance and animal movement behavior in resistant landscapes. We integrate least-cost path models into a likelihood-based estimation scheme for spatial capture–recapture models in order to estimate population density and parameters of the least-cost encounter probability model. Therefore, it is possible to make explicit inferences about animal density, distribution, and landscape connectivity as it relates to animal movement from standard capture–recapture data. Furthermore, a simulation study demonstrated that ignoring landscape connectivity can result in negatively biased density estimators under the naive SCR model.

Royle, J. Andrew; Chandler, Richard B.; Gazenski, Kimberly D.; Graves, Tabitha A.

2013-01-01
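The "ecological distance" described above is a least-cost path through a resistance surface. A minimal Dijkstra sketch on a toy resistance grid, with an assumed cost model in which each move pays the mean resistance of the two cells crossed, shows how a barrier inflates the effective distance between two activity centers:

```python
import heapq

def least_cost_distance(resistance, start, goal):
    """Least-cost path length on a 2-D resistance grid (4-neighbour
    moves); each step costs the mean resistance of the two cells."""
    rows, cols = len(resistance), len(resistance[0])
    dist = {start: 0.0}
    pq = [(0.0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float("inf")):
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + 0.5 * (resistance[r][c] + resistance[nr][nc])
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return float("inf")

# a high-resistance barrier down the middle forces a detour, so the
# ecological distance exceeds the straight-line cost
grid = [[1.0, 9.0, 1.0],
        [1.0, 9.0, 1.0],
        [1.0, 1.0, 1.0]]
eco = least_cost_distance(grid, (0, 0), (0, 2))
```

In the SCR models of the paper, this least-cost distance replaces the Euclidean distance inside the encounter probability function, so the resistance surface parameters can be estimated jointly with density.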

233

Multidimensional density estimation and phase-space structure of dark matter haloes

NASA Astrophysics Data System (ADS)

We present a method to numerically estimate the density of discretely sampled data based on a binary space partitioning tree. We start with a root node containing all the particles and then recursively divide each node into two nodes, each containing roughly equal numbers of particles, until each node contains only one particle. The volume of such a leaf node provides an estimate of the local density, and its shape provides an estimate of the variance. We implement an entropy-based node splitting criterion that results in a significant improvement in the estimation of densities compared to earlier work. The method is completely metric-free and can be applied to an arbitrary number of dimensions. We use this method to determine the appropriate metric at each point in space and then use kernel-based methods for calculating the density. The kernel-smoothed estimates were found to be more accurate and to have lower dispersion. We apply this method to determine the phase-space densities of dark matter haloes obtained from cosmological N-body simulations. We find that, contrary to earlier studies, the volume distribution function v(f) of phase-space density f does not have a constant slope but rather a small hump at high phase-space densities. We demonstrate that a model in which a halo is made up of a superposition of Hernquist spheres is not capable of explaining the shape of the v(f) versus f relation, whereas a model which takes into account the contribution of the main halo separately roughly reproduces the behaviour seen in simulations. The use of the presented method is not limited to the calculation of phase-space densities; it can be used as a general-purpose data-mining tool, and due to its speed and accuracy it is ideally suited for the analysis of large multidimensional data sets.

Sharma, Sanjib; Steinmetz, Matthias

2006-12-01
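The median-split partitioning described above can be sketched in a few lines. This toy version omits the paper's entropy-based splitting criterion and assumes known uniform initial bounds; it simply descends to the leaf containing a query point and reports 1/(N x leaf volume) as the local density:

```python
import random

def bsp_density_at(points, bounds, n_total, target, axis=0):
    """Median-split binary space partition: descend until the leaf
    holding `target` contains a single point, then return
    1 / (n_total * leaf_volume) as the local density estimate."""
    if len(points) <= 1:
        vol = 1.0
        for lo, hi in bounds:
            vol *= hi - lo
        return 1.0 / (n_total * vol)
    d = len(bounds)
    pts = sorted(points, key=lambda p: p[axis])  # split on the median
    mid = len(pts) // 2
    split = pts[mid][axis]
    lo, hi = bounds[axis]
    if target[axis] < split:
        side, box = pts[:mid], (lo, split)
    else:
        side, box = pts[mid:], (split, hi)
    new_bounds = bounds[:axis] + [box] + bounds[axis + 1:]
    return bsp_density_at(side, new_bounds, n_total, target, (axis + 1) % d)

random.seed(2)
# half the sample clustered near the centre, half spread uniformly
pts = [(random.gauss(0.5, 0.05), random.gauss(0.5, 0.05)) for _ in range(256)]
pts += [(random.random(), random.random()) for _ in range(256)]
box = [(0.0, 1.0), (0.0, 1.0)]
dense = bsp_density_at(pts, box, len(pts), (0.5, 0.5))
sparse = bsp_density_at(pts, box, len(pts), (0.05, 0.95))
```

Cycling the split axis as here is one simple policy; the paper instead chooses splits to maximize an entropy gain, and then kernel-smooths the raw leaf estimates.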

234

Estimation of tiger densities in India using photographic captures and recaptures

Previously applied methods for estimating tiger (Panthera tigris) abundance using total counts based on tracks have proved unreliable. In this paper we use a field method proposed by Karanth (1995), combining camera-trap photography to identify individual tigers based on stripe patterns, with capture-recapture estimators. We developed a sampling design for camera-trapping and used the approach to estimate tiger population size and density in four representative tiger habitats in different parts of India. The field method worked well and provided data suitable for analysis using closed capture-recapture models. The results suggest the potential for applying this methodology for estimating abundances, survival rates and other population parameters in tigers and other low density, secretive animal species with distinctive coat patterns or other external markings. Estimated probabilities of photo-capturing tigers present in the study sites ranged from 0.75 - 1.00. The estimated mean tiger densities ranged from 4.1 (SE = 1.31) to 11.7 (SE = 1.93) tigers/100 km². The results support the previous suggestions of Karanth and Sunquist (1995) that densities of tigers and other large felids may be primarily determined by prey community structure at a given site.

Karanth, U.; Nichols, J.D.

1998-01-01
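Closed-population capture-recapture estimation, in its simplest two-sample form, reduces to the Lincoln-Petersen estimator (shown here with Chapman's bias correction). The counts below are hypothetical, not figures from the study:

```python
def lincoln_petersen(n1, n2, m2):
    """Chapman's bias-corrected two-sample capture-recapture estimate of
    population size: n1 first-session captures, n2 second-session
    captures, m2 animals photo-captured in both sessions."""
    return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1

# hypothetical camera-trap sessions: 10 tigers photographed first,
# 12 photographed second, 6 identified in both by stripe pattern
n_hat = lincoln_petersen(10, 12, 6)
```

The study's actual analysis uses multi-occasion closed-population models, which generalize this two-sample logic to many trapping occasions and heterogeneous capture probabilities.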

235

Estimating detection and density of the Andean cat in the high Andes

The Andean cat (Leopardus jacobita) is one of the most endangered, yet least known, felids. Although the Andean cat is considered at risk of extinction, rigorous quantitative population studies are lacking. Because physical observations of the Andean cat are difficult to make in the wild, we used a camera-trapping array to photo-capture individuals. The survey was conducted in northwestern Argentina at an elevation of approximately 4,200 m during October-December 2006 and April-June 2007. In each year we deployed 22 pairs of camera traps, which were strategically placed. To estimate detection probability and density we applied models for spatial capture-recapture using a Bayesian framework. Estimated densities were 0.07 and 0.12 individual/km² for 2006 and 2007, respectively. Mean baseline detection probability was estimated at 0.07. By comparison, densities of the Pampas cat (Leopardus colocolo), another poorly known felid that shares its habitat with the Andean cat, were estimated at 0.74-0.79 individual/km² in the same study area for 2006 and 2007, and its detection probability was estimated at 0.02. Despite having greater detectability, the Andean cat is rarer in the study region than the Pampas cat. Properly accounting for the detection probability is important in making reliable estimates of density, a key parameter in conservation and management decisions for any species. © 2011 American Society of Mammalogists.

Reppucci, J.; Gardner, B.; Lucherini, M.

2011-01-01

236

We used electrophysiological signals recorded by CMOS Micro Electrode Arrays (MEAs) at high spatial resolution to estimate the functional-effective connectivity of sparse hippocampal neuronal networks in vitro by applying a cross-correlation (CC) based method and ad hoc developed spatio-temporal filtering. Low-density cultures were recorded by a recently introduced CMOS-MEA device providing simultaneous multi-site acquisition at high spatial (21 µm inter-electrode separation) as well as high temporal resolution (8 kHz per channel). The method is applied to estimate functional connections in different cultures, and it is refined by applying spatio-temporal filters that allow pruning of those functional connections not compatible with signal propagation. This approach makes it possible to discriminate between possible causal influence and spurious co-activation, and to obtain detailed maps down to cellular resolution. Further, a thorough analysis of the link strengths and time delays (i.e., amplitude and peak position of the CC function) allows characterizing the inferred interconnected networks and supports a possible discrimination of fast mono-synaptic propagations and slow poly-synaptic pathways. By focusing on specific regions of interest we could observe and analyze microcircuits involving connections among a few cells. Finally, the use of the high-density MEA with low-density cultures analyzed with the proposed approach enables comparison of the inferred effective links with the network structure obtained by staining procedures. PMID:22516778

Maccione, Alessandro; Garofalo, Matteo; Nieus, Thierry; Tedesco, Mariateresa; Berdondini, Luca; Martinoia, Sergio

2012-06-15
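
The core of a CC-based connectivity estimate is finding the lag at which one trace best predicts another: the peak of the cross-correlation function gives the putative propagation delay, and its amplitude the link strength. A minimal plain-Python sketch on toy pulse signals (the real analysis operates on thousands of electrode channels with spatio-temporal filtering):

```python
import statistics

def cross_correlation(x, y, max_lag):
    """Cross-correlation of two equal-length signals, normalized by
    the full-signal deviations, for lags in [-max_lag, max_lag].
    Returns a dict {lag: correlation}."""
    n = len(x)
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sx = sum((v - mx) ** 2 for v in x) ** 0.5
    sy = sum((v - my) ** 2 for v in y) ** 0.5
    out = {}
    for lag in range(-max_lag, max_lag + 1):
        s = 0.0
        for i in range(n):
            j = i + lag
            if 0 <= j < n:
                s += (x[i] - mx) * (y[j] - my)
        out[lag] = s / (sx * sy)
    return out

# A spike-like pulse in x reappears 3 samples later in y, so the CC
# peak (the putative propagation delay) should sit at lag = +3.
x = [0.0] * 32
x[10] = 1.0
y = [0.0] * 32
y[13] = 1.0
cc = cross_correlation(x, y, max_lag=5)
peak_lag = max(cc, key=cc.get)
print(peak_lag)  # 3
```

The paper's spatio-temporal filtering step would then discard links whose peak lag is incompatible with plausible signal propagation between the two electrode positions.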

237

Direct estimation of near-surface damping based on normalized energy density

NASA Astrophysics Data System (ADS)

We propose a direct estimation of damping in surface layers based on the normalized energy density (NED). The NED ratio, defined as the NED for the uppermost layer divided by the NED for the basement, correlates well with the total damping, t_S^*. We apply the relation between the NED ratio and the total damping to estimate the total damping at an actual site, the Katagihara (KTG) site in Japan. The total damping at the KTG site is directly estimated as t_S^* = 0.038. This value agrees well with the estimates obtained from a conventional method incorporating a non-linear inversion scheme.

Goto, Hiroyuki; Kawamura, Yuichi; Sawada, Sumio; Akazawa, Takashi

2013-07-01

238

Volumetric Breast Density Estimation from Full-Field Digital Mammograms: A Validation Study

Objectives To objectively evaluate automatic volumetric breast density assessment in Full-Field Digital Mammograms (FFDM) using measurements obtained from breast Magnetic Resonance Imaging (MRI). Material and Methods A commercially available method for volumetric breast density estimation on FFDM is evaluated by comparing volume estimates obtained from 186 FFDM exams including mediolateral oblique (MLO) and cranial-caudal (CC) views to objective reference standard measurements obtained from MRI. Results Volumetric measurements obtained from FFDM show high correlation with MRI data. Pearson’s correlation coefficients of 0.93, 0.97 and 0.85 were obtained for volumetric breast density, breast volume and fibroglandular tissue volume, respectively. Conclusions Accurate volumetric breast density assessment is feasible in Full-Field Digital Mammograms and has potential to be used in objective breast cancer risk models and personalized screening.

Gubern-Merida, Albert; Kallenberg, Michiel; Platel, Bram; Mann, Ritse M.; Marti, Robert; Karssemeijer, Nico

2014-01-01

239

Effects of tissue heterogeneity on the optical estimate of breast density

Breast density is a recognized strong and independent risk factor for developing breast cancer. At present, breast density is assessed based on the radiological appearance of breast tissue, thus relying on the use of ionizing radiation. We have previously obtained encouraging preliminary results with our portable instrument for time domain optical mammography performed at 7 wavelengths (635–1060 nm). In that case, information was averaged over four images (cranio-caudal and oblique views of both breasts) available for each subject. In the present work, we tested the effectiveness of just one or a few point measurements, to investigate whether tissue heterogeneity significantly affects the correlation between optically derived parameters and mammographic density. Data show that parameters estimated through a single optical measurement correlate strongly with mammographic density estimated by using BIRADS categories. A central position is optimal for the measurement, but its exact location is not critical.

Taroni, Paola; Pifferi, Antonio; Quarto, Giovanna; Spinelli, Lorenzo; Torricelli, Alessandro; Abbate, Francesca; Balestreri, Nicola; Ganino, Serena; Menna, Simona; Cassano, Enrico; Cubeddu, Rinaldo

2012-01-01

240

Effects of tissue heterogeneity on the optical estimate of breast density.

Breast density is a recognized strong and independent risk factor for developing breast cancer. At present, breast density is assessed based on the radiological appearance of breast tissue, thus relying on the use of ionizing radiation. We have previously obtained encouraging preliminary results with our portable instrument for time domain optical mammography performed at 7 wavelengths (635-1060 nm). In that case, information was averaged over four images (cranio-caudal and oblique views of both breasts) available for each subject. In the present work, we tested the effectiveness of just one or a few point measurements, to investigate whether tissue heterogeneity significantly affects the correlation between optically derived parameters and mammographic density. Data show that parameters estimated through a single optical measurement correlate strongly with mammographic density estimated by using BIRADS categories. A central position is optimal for the measurement, but its exact location is not critical. PMID:23082283

Taroni, Paola; Pifferi, Antonio; Quarto, Giovanna; Spinelli, Lorenzo; Torricelli, Alessandro; Abbate, Francesca; Balestreri, Nicola; Ganino, Serena; Menna, Simona; Cassano, Enrico; Cubeddu, Rinaldo

2012-10-01

241

A design-based approach to the estimation of plant density using point-to-plant sampling

A relationship between plant density and the probability density function of the squared point-to-plant distance is found when a design-based approach is considered. The estimation of the probability density function (and consequently of plant density) is performed using a boundary kernel estimator. Accordingly, by means of a simulation study, the performance of the proposed estimator is evaluated with respect to

L. Barabesi

2001-01-01
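
The paper's design-based boundary-kernel estimator is more involved, but the classical model-based baseline for point-to-plant sampling, Pollard's estimator for a homogeneous Poisson pattern, is a useful reference point and can be sketched as follows (a minimal illustration, not the paper's method):

```python
import math
import random

def pollard_density(distances):
    """Classical model-based (Poisson) point-to-plant density
    estimator (Pollard 1971):

        lambda_hat = (n - 1) / (pi * sum(d_i**2))

    where d_i is the distance from random point i to its nearest
    plant."""
    n = len(distances)
    return (n - 1) / (math.pi * sum(d * d for d in distances))

# Sanity check on synthetic data: under a homogeneous Poisson process
# of intensity lam, the squared point-to-plant distance is
# exponentially distributed with rate pi * lam.
random.seed(1)
lam = 2.0
d = [math.sqrt(random.expovariate(math.pi * lam)) for _ in range(5000)]
print(round(pollard_density(d), 2))  # close to 2.0
```

The design-based approach of the paper avoids exactly this Poisson assumption, estimating the distance density itself with a boundary kernel instead.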

242

The accurate quantitation of high density lipoproteins has recently assumed greater importance in view of studies suggesting their negative correlation with coronary heart disease. High density lipoproteins may be estimated by measuring cholesterol in the plasma fraction of d > 1.063 g/ml. A more practical approach is the specific precipitation of apolipoprotein B (apoB)-containing lipoproteins by sulfated

G. Russell Warnick; John J. Albers

243

Propithecus coquereli is one of the last sifaka species for which no reliable and extensive density estimates are yet available. Despite its endangered conservation status [IUCN, 2012] and recognition as a flagship species of the northwestern dry forests of Madagascar, its population in its last main refugium, the Ankarafantsika National Park (ANP), is still poorly known. Using line transect distance sampling surveys we estimated population density and abundance in the ANP. Furthermore, we investigated the effects of road, forest edge, and river proximity and of group size on sighting frequencies and density estimates. We provide here the first population density estimates throughout the ANP. We found that density varied greatly among surveyed sites (from 5 to ~100 ind/km2), which could result from significant (negative) effects of road and forest edge, and/or a (positive) effect of river proximity. Our results also suggest that the population size may be ~47,000 individuals in the ANP, hinting that the population likely underwent a strong decline in some parts of the Park in recent decades, possibly caused by habitat loss from fires and charcoal production and by poaching. We suggest community-based conservation actions for the largest remaining population of Coquerel's sifaka which will (i) maintain forest connectivity; (ii) implement alternatives to deforestation through charcoal production, logging, and grass fires; (iii) reduce poaching; and (iv) enable long-term monitoring of the population in collaboration with local authorities and researchers. PMID:24443250

Kun-Rodrigues, Célia; Salmona, Jordi; Besolo, Aubin; Rasolondraibe, Emmanuel; Rabarivola, Clément; Marques, Tiago A; Chikhi, Lounès

2014-06-01

244

Distributed Noise Generation for Density Estimation Based Clustering without Trusted Third Party

NASA Astrophysics Data System (ADS)

The rapid growth of the Internet provides people with tremendous opportunities for data collection, knowledge discovery and cooperative computation. However, it also brings the problem of sensitive information leakage. Both individuals and enterprises may suffer from massive data collection and information retrieval by distrusted parties. In this paper, we propose a privacy-preserving protocol for distributed kernel density estimation-based clustering. Our scheme applies the random data perturbation (RDP) technique and verifiable secret sharing to solve the security problem of the distributed kernel density estimation in [4], which assumed a mediating party to help in the computation.

Su, Chunhua; Bao, Feng; Zhou, Jianying; Takagi, Tsuyoshi; Sakurai, Kouichi

245

Kernel Smoothing Density Estimation when Group Membership is Subject to Missing

Density function is a fundamental concept in data analysis. Nonparametric methods including kernel smoothing estimate are available if the data is completely observed. However, in studies such as diagnostic studies following a two-stage design the membership of some of the subjects may be missing. Simply ignoring those subjects with unknown membership is valid only in the MCAR situation. In this paper, we consider kernel smoothing estimate of the density functions, using the inverse probability approaches to address the missing values. We illustrate the approaches with simulation studies and real study data in mental health.

Tang, Wan; He, Hua; Gunzler, Douglas

2011-01-01
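
The inverse-probability idea named above can be sketched in miniature: each subject whose group membership was verified is up-weighted by the reciprocal of its verification probability before kernel smoothing, so that the verified subsample stands in for the full sample. A hedged sketch with toy numbers (not the paper's data or exact estimator):

```python
import math

def ipw_kde(points, probs, x, bandwidth):
    """Inverse-probability-weighted Gaussian kernel density estimate
    at x.  points[i] is an observed value whose membership was
    verified with probability probs[i]; each carries weight
    1/probs[i], and weights are normalized so the estimated density
    integrates to 1."""
    w = [1.0 / p for p in probs]
    total = sum(w)
    h = bandwidth
    return sum(
        wi * math.exp(-0.5 * ((x - xi) / h) ** 2)
        for wi, xi in zip(w, points)
    ) / (total * h * math.sqrt(2.0 * math.pi))

# Toy data: four verified subjects with differing verification
# probabilities; evaluate the weighted density at x = 1.0.
pts = [0.2, 0.9, 1.1, 2.5]
verif = [0.5, 1.0, 0.8, 0.4]
print(ipw_kde(pts, verif, 1.0, bandwidth=0.5))
```

Under MAR, the verification probabilities would themselves be estimated from covariates (e.g., the first-stage test result) rather than known, which is where the two-stage design enters.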

246

Estimating the amount and distribution of radon flux density from the soil surface in China.

Based on an idealized model, both the annual and the seasonal radon ((222)Rn) flux densities from the soil surface at 1099 sites in China were estimated by linking a database of soil (226)Ra content and a global ecosystems database. Digital maps of the (222)Rn flux density in China were constructed at a spatial resolution of 25 km x 25 km by interpolation among the estimated data. An area-weighted annual average (222)Rn flux density from the soil surface across China was estimated to be 29.7 ± 9.4 mBq m^-2 s^-1. Both regional and seasonal variations in the (222)Rn flux densities are significant in China. Annual average flux densities in southeastern and northwestern China are generally higher than those in other regions of China, because of high soil (226)Ra content in the southeastern area and high soil aridity in the northwestern one. The seasonal average flux density is generally higher in summer/spring than in winter, since relatively higher soil temperature and lower soil water saturation in summer/spring than in other seasons are common in China. PMID:18329143

Zhuo, Weihai; Guo, Qiuju; Chen, Bo; Cheng, Guan

2008-07-01
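
The abstract says the gridded maps were built "by interpolation among the estimated data" without naming the scheme, so as an illustrative stand-in (an assumption, not the paper's method), a simple inverse-distance-weighting interpolator for site-level flux estimates might look like:

```python
def idw(sites, values, target, power=2.0):
    """Inverse-distance-weighted interpolation of a point value
    (e.g., Rn-222 flux density) at `target` from surrounding
    measurement `sites` with known `values`.  Weight falls off as
    distance**(-power)."""
    num = den = 0.0
    for (x, y), v in zip(sites, values):
        d2 = (x - target[0]) ** 2 + (y - target[1]) ** 2
        if d2 == 0.0:
            return v  # target coincides with a measurement site
        w = d2 ** (-power / 2.0)
        num += w * v
        den += w
    return num / den

# Two sites with fluxes 10 and 20; the midpoint interpolates to 15.
print(idw([(0.0, 0.0), (2.0, 0.0)], [10.0, 20.0], (1.0, 0.0)))
```

Geostatistical alternatives (e.g., kriging) would additionally model spatial covariance; the choice matters for how sharply the 25 km grid reflects isolated high-radium sites.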

247

Estimation of the High-Latitude Topside Heat Flux Using DMSP In Situ Plasma Densities

NASA Astrophysics Data System (ADS)

The high-latitude ionosphere interfaces with the hot, tenuous, magnetospheric plasma, and a heat flow into the ionosphere is expected, which has a large impact on the plasma densities and temperatures in the high-latitude ionosphere. The value of this magnetospheric heat flux is unknown. In an effort to estimate the value of the magnetospheric heat flux into the ionosphere, and to show its effect on the high-latitude plasma densities, we ran an ensemble of model runs using the Ionosphere Forecast Model (IFM) with different values of the heat flux through the upper boundary. These model runs included both auroral and solar heating. For each heat flux value, the plasma densities obtained from the model run at 840 km were compared to the corresponding values measured by the DMSP F13 satellite. The heat flux value that gave the best comparison between the measured and calculated plasma densities is considered the best estimate of the topside heat flux. The comparison was conducted for a one-year data set of the DMSP F13 measured plasma densities. In this paper, we show the effect of the topside heat flux on the plasma densities, and show realistic estimates for the topside heat flux values through the upper boundary of the high-latitude ionosphere.

Bekerat, H.; Schunk, R.; Scherliess, L.

2005-12-01

248

Nonparametric Bayesian density estimation on manifolds with applications to planar shapes

Summary Statistical analysis on landmark-based shape spaces has diverse applications in morphometrics, medical diagnostics, machine vision and other areas. These shape spaces are non-Euclidean quotient manifolds. To conduct nonparametric inferences, one may define notions of centre and spread on this manifold and work with their estimates. However, it is useful to consider full likelihood-based methods, which allow nonparametric estimation of the probability density. This article proposes a broad class of mixture models constructed using suitable kernels on a general compact metric space and then on the planar shape space in particular. Following a Bayesian approach with a nonparametric prior on the mixing distribution, conditions are obtained under which the Kullback–Leibler property holds, implying large support and weak posterior consistency. Gibbs sampling methods are developed for posterior computation, and the methods are applied to problems in density estimation and classification with shape-based predictors. Simulation studies show improved estimation performance relative to existing approaches.

Bhattacharya, Abhishek; Dunson, David B.

2010-01-01

249

New mosquito control strategies centred on the modifying of populations require knowledge of existing population densities at release sites and an understanding of breeding site ecology. Using a quantitative pupal survey method, we investigated production of the dengue vector Aedes aegypti (L.) (Stegomyia aegypti) (Diptera: Culicidae) in Cairns, Queensland, Australia, and found that garden accoutrements represented the most common container type. Deliberately placed 'sentinel' containers were set at seven houses and sampled for pupae over 10 weeks during the wet season. Pupal production was approximately constant; tyres and buckets represented the most productive container types. Sentinel tyres produced the largest female mosquitoes, but were relatively rare in the field survey. We then used field-collected data to make estimates of per premises population density using three different approaches. Estimates of female Ae. aegypti abundance per premises made using the container-inhabiting mosquito simulation (CIMSiM) model [95% confidence interval (CI) 18.5-29.1 females] concorded reasonably well with estimates obtained using a standing crop calculation based on pupal collections (95% CI 8.8-22.5) and using BG-Sentinel traps and a sampling rate correction factor (95% CI 6.2-35.2). By first describing local Ae. aegypti productivity, we were able to compare three separate population density estimates which provided similar results. We anticipate that this will provide researchers and health officials with several tools with which to make estimates of population densities. PMID:23205694

Williams, C R; Johnson, P H; Ball, T S; Ritchie, S A

2013-09-01

250

Hierarchical models for estimating density from DNA mark-recapture studies

Genetic sampling is increasingly used as a tool by wildlife biologists and managers to estimate abundance and density of species. Typically, DNA is used to identify individuals captured in an array of traps (e.g., baited hair snares) from which individual encounter histories are derived. Standard methods for estimating the size of a closed population can be applied to such data. However, due to the movement of individuals on and off the trapping array during sampling, the area over which individuals are exposed to trapping is unknown, and so obtaining unbiased estimates of density has proved difficult. We propose a hierarchical spatial capture-recapture model which contains explicit models for the spatial point process governing the distribution of individuals and their exposure to (via movement) and detection by traps. Detection probability is modeled as a function of each individual's distance to the trap. We applied this model to a black bear (Ursus americanus) study conducted in 2006 using a hair-snare trap array in the Adirondack region of New York, USA. We estimated the density of bears to be 0.159 bears/km2, which is lower than the estimated density (0.410 bears/km2) based on standard closed population techniques. A Bayesian analysis of the model is fully implemented in the software program WinBUGS.

Gardner, B.; Royle, J.A.; Wegan, M.T.

2009-01-01
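
The sentence "Detection probability is modeled as a function of each individual's distance to the trap" is commonly realized in spatial capture-recapture as a half-normal detection function. A hedged sketch with made-up parameters (the specific functional form and values here are illustrative, not taken from the paper):

```python
import math

def detection_prob(activity_center, trap, p0, sigma):
    """Half-normal detection model used in spatial capture-recapture:
    the probability that a trap detects an individual decays with the
    distance d between the trap and the individual's activity centre,

        p(d) = p0 * exp(-d**2 / (2 * sigma**2))

    p0 is the baseline detection probability at d = 0 and sigma the
    spatial scale of movement."""
    dx = activity_center[0] - trap[0]
    dy = activity_center[1] - trap[1]
    d2 = dx * dx + dy * dy
    return p0 * math.exp(-d2 / (2.0 * sigma * sigma))

# A bear centred at (0, 0) is far more likely to encounter a hair
# snare 1 km away than one 5 km away (illustrative parameters):
print(detection_prob((0.0, 0.0), (1.0, 0.0), p0=0.3, sigma=2.0))
print(detection_prob((0.0, 0.0), (5.0, 0.0), p0=0.3, sigma=2.0))
```

In the full hierarchical model the activity centres are latent, governed by a spatial point process, and the posterior is explored with MCMC (WinBUGS in the paper); this sketch shows only the observation layer.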

251

Limit Distribution Theory for Maximum Likelihood Estimation of a Log-Concave Density

We find limiting distributions of the nonparametric maximum likelihood estimator (MLE) of a log-concave density, i.e. a density of the form f0 = exp(φ0) where φ0 is a concave function on R. Existence, form, characterizations and uniform rates of convergence of the MLE are given by Rufibach (2006) and Dümbgen and Rufibach (2007). The characterization of the log-concave MLE in terms of distribution functions is the same (up to sign) as the characterization of the least squares estimator of a convex density on [0, ∞) as studied by Groeneboom, Jongbloed and Wellner (2001b). We use this connection to show that the limiting distributions of the MLE and its derivative are, under comparable smoothness assumptions, the same (up to sign) as in the convex density estimation problem. In particular, changing the smoothness assumptions of Groeneboom, Jongbloed and Wellner (2001b) slightly by allowing some higher derivatives to vanish at the point of interest, we find that the pointwise limiting distributions depend on the second and third derivatives at 0 of Hk, the "lower invelope" of an integrated Brownian motion process minus a drift term depending on the number of vanishing derivatives of φ0 = log f0 at the point of interest. We also establish the limiting distribution of the resulting estimator of the mode M(f0) and establish a new local asymptotic minimax lower bound which shows the optimality of our mode estimator in terms of both rate of convergence and dependence of constants on population values.

Balabdaoui, Fadoua; Rufibach, Kaspar; Wellner, Jon A.

2009-01-01

252

A hierarchical model for estimating density in camera-trap studies

1. Estimating animal density using capture-recapture data from arrays of detection devices such as camera traps has been problematic due to the movement of individuals and heterogeneity in capture probability among them induced by differential exposure to trapping. 2. We develop a spatial capture-recapture model for estimating density from camera-trapping data which contains explicit models for the spatial point process governing the distribution of individuals and their exposure to and detection by traps. 3. We adopt a Bayesian approach to analysis of the hierarchical model using the technique of data augmentation. 4. The model is applied to photographic capture-recapture data on tigers Panthera tigris in Nagarahole reserve, India. Using this model, we estimate the density of tigers to be 14.3 animals per 100 km2 during 2004. 5. Synthesis and applications. Our modelling framework largely overcomes several weaknesses in conventional approaches to the estimation of animal density from trap arrays. It effectively deals with key problems such as individual heterogeneity in capture probabilities, movement of traps, presence of potential 'holes' in the array and ad hoc estimation of sample area. The formulation, thus, greatly enhances flexibility in the conduct of field surveys as well as in the analysis of data, from studies that may involve physical, photographic or DNA-based 'captures' of individual animals.

Royle, J.A.; Nichols, J.D.; Karanth, K. U.; Gopalaswamy, A.M.

2009-01-01

253

On the Use of Adaptive Wavelet-based Methods for Ocean Modeling and Data Assimilation Problems

NASA Astrophysics Data System (ADS)

Latest advancements in parallel wavelet-based numerical methodologies for the solution of partial differential equations, combined with the unique properties of wavelet analysis to unambiguously identify and isolate localized dynamically dominant flow structures, make it feasible to start developing integrated approaches for ocean modeling and data assimilation problems that take advantage of temporally and spatially varying meshes. In this talk the Parallel Adaptive Wavelet Collocation Method with spatially and temporally varying thresholding is presented and the feasibility and potential advantages of its use for ocean modeling are discussed. The second half of the talk focuses on the recently developed Simultaneous Space-time Adaptive approach that addresses one of the main challenges of variational data assimilation, namely the requirement to have a forward solution available when solving the adjoint problem. The issue is addressed by concurrently solving forward and adjoint problems in the entire space-time domain on a near optimal adaptive computational mesh that automatically adapts to spatio-temporal structures of the solution. The compressed space-time form of the solution eliminates the need to save or recompute the forward solution for every time slice, as is typically done in traditional time-marching variational data assimilation approaches. The simultaneous spatio-temporal discretization of both the forward and the adjoint problems makes it possible to solve both of them concurrently on the same space-time adaptive computational mesh, reducing the amount of saved data to the strict minimum for a given a priori controlled accuracy of the solution. The simultaneous space-time adaptive approach of variational data assimilation is demonstrated for the advection-diffusion problem in 1D-t and 2D-t dimensions.

Vasilyev, Oleg V.; Yousuff Hussaini, M.; Souopgui, Innocent

2014-05-01

254

Wavelet-based compression of medical images: filter-bank selection and evaluation.

Wavelet-based image coding algorithms (lossy and lossless) use a fixed perfect reconstruction filter-bank built into the algorithm for coding and decoding of images. However, no systematic study has been performed to evaluate the coding performance of wavelet filters on medical images. We evaluated which types of filters are best suited to medical images in providing low bit rate and low computational complexity. In this study a variety of wavelet filters were used to compress and decompress computed tomography (CT) brain and abdomen images. We applied two-dimensional wavelet decomposition, quantization and reconstruction using several families of filter banks to a set of CT images. The Discrete Wavelet Transform (DWT), which provides an efficient multi-resolution framework, was used. Compression was accomplished by applying threshold values to the wavelet coefficients. Statistical indices such as mean square error (MSE), maximum absolute error (MAE) and peak signal-to-noise ratio (PSNR) were used to quantify the effect of wavelet compression on selected images. The code was written using the wavelet and image processing toolbox of MATLAB (version 6.1). These results show that no specific wavelet filter performs uniformly better than the others, except for the Daubechies and biorthogonal filters, which performed best among all. MAE values achieved by these filters were 5 x 10(-14) to 12 x 10(-14) for both CT brain and abdomen images at different decomposition levels. This indicated that using these filters a very small error (approximately 7 x 10(-14)) can be achieved between the original and the filtered image. The PSNR values obtained were higher for the brain than the abdomen images. For both lossy and lossless compression, the 'most appropriate' wavelet filter should be chosen adaptively depending on the statistical properties of the image being coded to achieve a higher compression ratio. PMID:12956184

Saffor, A; bin Ramli, A R; Ng, K H

2003-06-01
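
The threshold-the-coefficients compression step and the PSNR quality metric described above can be sketched in miniature with a one-level Haar transform on a 1-D signal. This is only the principle: the study used 2-D decompositions of CT images, several filter families, and MATLAB, none of which appear here.

```python
import math

def haar_forward(x):
    """One level of the orthonormal Haar DWT; len(x) must be even.
    Returns (approximation, detail) coefficient lists."""
    s = 1.0 / math.sqrt(2.0)
    approx = [(a + b) * s for a, b in zip(x[0::2], x[1::2])]
    detail = [(a - b) * s for a, b in zip(x[0::2], x[1::2])]
    return approx, detail

def haar_inverse(approx, detail):
    """Invert haar_forward exactly (perfect reconstruction)."""
    s = 1.0 / math.sqrt(2.0)
    out = []
    for a, d in zip(approx, detail):
        out.append((a + d) * s)
        out.append((a - d) * s)
    return out

def hard_threshold(coeffs, t):
    """Zero every coefficient whose magnitude falls below t."""
    return [c if abs(c) >= t else 0.0 for c in coeffs]

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB between two signals."""
    n = len(original)
    mse = sum((o - r) ** 2 for o, r in zip(original, reconstructed)) / n
    return float("inf") if mse == 0.0 else 10.0 * math.log10(peak * peak / mse)

# Compress a toy 1-D "scanline": small detail coefficients (smooth
# regions) are discarded; the sharp 200/50 edge survives.
x = [100.0, 102.0, 98.0, 99.0, 200.0, 50.0, 101.0, 100.0]
a, d = haar_forward(x)
x_rec = haar_inverse(a, hard_threshold(d, t=3.0))
print(psnr(x, x_rec))
```

With the threshold at zero the Haar filter-bank reconstructs the signal exactly, which is the "perfect reconstruction" property the abstract refers to; raising the threshold trades PSNR for a sparser, more compressible coefficient set.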

255

We developed wavelet-based functional ANOVA (wfANOVA) as a novel approach for comparing neurophysiological signals that are functions of time. Temporal resolution is often sacrificed by analyzing such data in large time bins, increasing statistical power by reducing the number of comparisons. We performed ANOVA in the wavelet domain because differences between curves tend to be represented by a few temporally localized wavelets, which we transformed back to the time domain for visualization. We compared wfANOVA and ANOVA performed in the time domain (tANOVA) on both experimental electromyographic (EMG) signals from responses to perturbation during standing balance across changes in peak perturbation acceleration (3 levels) and velocity (4 levels) and on simulated data with known contrasts. In experimental EMG data, wfANOVA revealed the continuous shape and magnitude of significant differences over time without a priori selection of time bins. However, tANOVA revealed only the largest differences at discontinuous time points, resulting in features with later onsets and shorter durations than those identified using wfANOVA (P < 0.02). Furthermore, wfANOVA required significantly fewer (approximately one-quarter as many; P < 0.015) significant F tests than tANOVA, resulting in post hoc tests with increased power. In simulated EMG data, wfANOVA identified known contrast curves with a high level of precision (r2 = 0.94 ± 0.08) and performed better than tANOVA across noise levels (P < 0.01). Therefore, wfANOVA may be useful for revealing differences in the shape and magnitude of neurophysiological signals (e.g., EMG, firing rates) across multiple conditions with both high temporal resolution and high statistical power.

McKay, J. Lucas; Welch, Torrence D. J.; Vidakovic, Brani

2013-01-01

256

Item Response Theory With Estimation of the Latent Density Using Davidian Curves

Davidian-curve item response theory (DC-IRT) is introduced, evaluated with simulations, and illustrated using data from the Schedule for Nonadaptive and Adaptive Personality Entitlement scale. DC-IRT is a method for fitting unidimensional IRT models with maximum marginal likelihood estimation, in which the latent density is estimated, simultaneously with the item parameters of logistic item response functions, as a Davidian curve. Simulations

Carol M. Woods; Nan Lin

2009-01-01

257

An improvement of the TD88 upper atmosphere density model and estimation of drag perturbations

In order to estimate the accuracy of satellite orbit predictions, perturbations of some orbital parameters were determined using three different atmospheric density models: CIRA86, NRLMSISE-00 and TD-88Ip. The drag coefficient applicable to a specific phase of the satellite flight was obtained by fitting an appropriate set of two-line orbital elements. The corresponding relative errors in the residual lifetime estimation and

Dusan Marceta; Stevo Segan

2010-01-01

258

Identification of the monitoring point density needed to reliably estimate contaminant mass fluxes

NASA Astrophysics Data System (ADS)

Plume monitoring frequently relies on the evaluation of point-scale measurements of concentration at observation wells which are located at control planes or `fences' perpendicular to groundwater flow. Depth-specific concentration values are used to estimate the total mass flux of individual contaminants through the fence. Results of this approach, which is based on spatial interpolation, obviously depend on the density of the measurement points. Our contribution relates the accuracy of mass flux estimation to the point density and, in particular, allows us to identify a minimum point density needed to achieve a specified accuracy. In order to establish this relationship, concentration data from fences installed in the coal tar creosote plume at the Borden site are used. These fences are characterized by a rather high density of about 7 points/m2 and it is reasonable to assume that the true mass flux is obtained with this point density. This mass flux is then compared with results for less dense grids down to about 0.1 points/m2. Mass flux estimates obtained for this range of point densities are analyzed by the moving window method in order to reduce purely random fluctuations. For each position of the moving window the mass flux is estimated and the coefficient of variation (CV) is calculated to quantify the variability of the results. Thus, the CV provides a relative measure of accuracy in the estimated fluxes. By applying this approach to the Borden naphthalene plume at different times, it is found that the point density changes from sufficient to insufficient due to the temporally decreasing mass flux. By comparing the results for naphthalene and phenol at the same fence and at the same time, we can see that the same grid density might be sufficient for one compound but not for another. 
If a rather strict CV criterion of 5% is used, a grid of 7 points/m2 is shown to allow for reliable estimates of the true mass fluxes only in the beginning of plume development when mass fluxes are high. Long-term data exhibit a very high variation, attributed to the decreasing flux, and a much denser grid would be required to reflect the decreasing mass flux with the same high accuracy. However, a less strict CV criterion of 50% may be acceptable due to uncertainties generally associated with other hydrogeologic parameters. In this case, a point density between 1 and 2 points/m2 is found to be sufficient for a set of five tested chemicals.

Liedl, R.; Liu, S.; Fraser, M.; Barker, J.

2005-12-01
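
The moving-window CV diagnostic described above is straightforward to sketch. The flux numbers below are illustrative, not the Borden data: a dense grid yields stable flux estimates (low CV), a sparse grid yields scattered ones (high CV).

```python
import math

def coefficient_of_variation(values):
    """CV = standard deviation / mean (population form)."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    return math.sqrt(var) / mean

def moving_window_cv(fluxes, window):
    """CV of the mass-flux estimates within each moving-window
    position along a sequence of estimates."""
    return [
        coefficient_of_variation(fluxes[i:i + window])
        for i in range(len(fluxes) - window + 1)
    ]

# Illustrative mass-flux estimates from a dense vs. a sparse grid:
dense = [9.8, 10.1, 10.0, 9.9, 10.2]
sparse = [4.0, 15.0, 7.0, 13.0, 6.0]
print(moving_window_cv(dense, window=3))   # all well under 5%
print(moving_window_cv(sparse, window=3))  # far above 5%
```

Comparing the CV against the chosen criterion (5% strict, 50% relaxed in the paper) then decides whether a given point density is sufficient.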

259

Estimating food portions. Influence of unit number, meal type and energy density.

Estimating how much is appropriate to consume can be difficult, especially for foods presented in multiple units, those with ambiguous energy content and for snacks. This study tested the hypothesis that the number of units (single vs. multi-unit), meal type and food energy density disrupts accurate estimates of portion size. Thirty-two healthy weight men and women attended the laboratory on 3 separate occasions to assess the number of portions contained in 33 foods or beverages of varying energy density (1.7-26.8 kJ/g). Items included 12 multi-unit and 21 single unit foods; 13 were labelled "meal", 4 "drink" and 16 "snack". Departures in portion estimates from reference amounts were analysed with negative binomial regression. Overall participants tended to underestimate the number of portions displayed. Males showed greater errors in estimation than females (p=0.01). Single unit foods and those labelled as 'meal' or 'beverage' were estimated with greater error than multi-unit and 'snack' foods (p=0.02 and p<0.001 respectively). The number of portions of high energy density foods was overestimated while the number of portions of beverages and medium energy density foods were underestimated by 30-46%. In conclusion, participants tended to underestimate the reference portion size for a range of food and beverages, especially single unit foods and foods of low energy density and, unexpectedly, overestimated the reference portion of high energy density items. There is a need for better consumer education of appropriate portion sizes to aid adherence to a healthy diet. PMID:23932948

Almiron-Roig, Eva; Solis-Trapala, Ivonne; Dodd, Jessica; Jebb, Susan A

2013-12-01

260

Estimating food portions. Influence of unit number, meal type and energy density.

Estimating how much is appropriate to consume can be difficult, especially for foods presented in multiple units, those with ambiguous energy content and for snacks. This study tested the hypothesis that the number of units (single vs. multi-unit), meal type and food energy density disrupts accurate estimates of portion size. Thirty-two healthy weight men and women attended the laboratory on 3 separate occasions to assess the number of portions contained in 33 foods or beverages of varying energy density (1.7–26.8 kJ/g). Items included 12 multi-unit and 21 single unit foods; 13 were labelled “meal”, 4 “drink” and 16 “snack”. Departures in portion estimates from reference amounts were analysed with negative binomial regression. Overall participants tended to underestimate the number of portions displayed. Males showed greater errors in estimation than females (p = 0.01). Single unit foods and those labelled as ‘meal’ or ‘beverage’ were estimated with greater error than multi-unit and ‘snack’ foods (p = 0.02 and p < 0.001 respectively). The number of portions of high energy density foods was overestimated while the number of portions of beverages and medium energy density foods were underestimated by 30–46%. In conclusion, participants tended to underestimate the reference portion size for a range of food and beverages, especially single unit foods and foods of low energy density and, unexpectedly, overestimated the reference portion of high energy density items. There is a need for better consumer education of appropriate portion sizes to aid adherence to a healthy diet.

Almiron-Roig, Eva; Solis-Trapala, Ivonne; Dodd, Jessica; Jebb, Susan A.

2013-01-01

261

NASA Astrophysics Data System (ADS)

Breast density has been identified to be a risk factor of developing breast cancer and an indicator of lesion diagnostic obstruction due to masking effect. Volumetric density measurement evaluates fibro-glandular volume, breast volume, and breast volume density measures that have potential advantages over area density measurement in risk assessment. One class of volume density computing methods is based on the finding of the relative fibro-glandular tissue attenuation with regards to the reference fat tissue, and the estimation of the effective x-ray tissue attenuation differences between the fibro-glandular and fat tissue is key to volumetric breast density computing. We have modeled the effective attenuation difference as a function of actual x-ray skin entrance spectrum, breast thickness, fibro-glandular tissue thickness distribution, and detector efficiency. Compared to other approaches, our method has threefold advantages: (1) avoids the system calibration-based creation of effective attenuation differences which may introduce tedious calibrations for each imaging system and may not reflect the spectrum change and scatter induced overestimation or underestimation of breast density; (2) obtains the system specific separate and differential attenuation values of fibroglandular and fat for each mammographic image; and (3) further reduces the impact of breast thickness accuracy to volumetric breast density. A quantitative breast volume phantom with a set of equivalent fibro-glandular thicknesses has been used to evaluate the volume breast density measurement with the proposed method. The experimental results have shown that the method has significantly improved the accuracy of estimating breast density.

Chen, Biao; Ruth, Chris; Jing, Zhenxue; Ren, Baorui; Smith, Andrew; Kshirsagar, Ashwini

2014-03-01

262

Breast percent density estimation from 3D reconstructed digital breast tomosynthesis images

NASA Astrophysics Data System (ADS)

Breast density is an independent factor of breast cancer risk. In mammograms breast density is quantitatively measured as percent density (PD), the percentage of dense (non-fatty) tissue. To date, clinical estimates of PD have varied significantly, in part due to the projective nature of mammography. Digital breast tomosynthesis (DBT) is a 3D imaging modality in which cross-sectional images are reconstructed from a small number of projections acquired at different x-ray tube angles. Preliminary studies suggest that DBT is superior to mammography in tissue visualization, since superimposed anatomical structures present in mammograms are filtered out. We hypothesize that DBT could also provide a more accurate breast density estimation. In this paper, we propose to estimate PD from reconstructed DBT images using a semi-automated thresholding technique. Preprocessing is performed to exclude the image background and the area of the pectoral muscle. Threshold values are selected manually from a small number of reconstructed slices; a combination of these thresholds is applied to each slice throughout the entire reconstructed DBT volume. The proposed method was validated using images of women with recently detected abnormalities or with biopsy-proven cancers; only contralateral breasts were analyzed. The Pearson correlation and kappa coefficients between the breast density estimates from DBT and the corresponding digital mammogram indicate moderate agreement between the two modalities, comparable with our previous results from 2D DBT projections. Percent density appears to be a robust measure for breast density assessment in both 2D and 3D x-ray breast imaging modalities using thresholding.
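The percent-density computation described above reduces to a voxel count once thresholds are chosen. A minimal sketch, assuming a background-zeroed reconstructed volume and a single global threshold (the paper selects thresholds manually from a few slices and also excludes the pectoral muscle, which is not shown here):

```python
import numpy as np

def percent_density(volume, threshold):
    """Percent density (PD) from a reconstructed 3D volume: percentage of
    above-threshold (dense) voxels within the breast region, here taken as
    all non-zero voxels. A sketch, not the authors' exact pipeline."""
    breast = volume > 0                          # assume background zeroed out
    dense = volume >= threshold
    return 100.0 * np.count_nonzero(dense & breast) / np.count_nonzero(breast)

# Toy volume: 3 slices of 4x4 voxels, intensity 1 (fat) or 5 (dense).
vol = np.ones((3, 4, 4))
vol[:, :2, :] = 5.0                              # top half of each slice "dense"
print(percent_density(vol, threshold=3.0))       # -> 50.0
```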

Bakic, Predrag R.; Kontos, Despina; Carton, Ann-Katherine; Maidment, Andrew D. A.

2008-04-01

263

Wavelet-based SAR images despeckling using joint hidden Markov model

NASA Astrophysics Data System (ADS)

In the past few years, wavelet-domain hidden Markov models have proven to be useful tools for statistical signal and image processing. The hidden Markov tree (HMT) model captures the key features of the joint probability density of the wavelet coefficients of real-world data. One potential drawback to the HMT framework is the deficiency for taking account of intrascale correlations that exist among neighboring wavelet coefficients. In this paper, we propose to develop a joint hidden Markov model by fusing the wavelet Bayesian denoising technique with an image regularization procedure based on HMT and Markov random field (MRF). The Expectation Maximization algorithm is used to estimate hyperparameters and specify the mixture model. The noise-free wavelet coefficients are finally estimated by a shrinkage function based on local weighted averaging of the Bayesian estimator. It is shown that the joint method outperforms lee filter and standard HMT techniques in terms of the integrative measure of the equivalent number of looks (ENL) and Pratt's figure of merit(FOM), especially when dealing with speckle noise in large variance.
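The shrinkage step at the heart of such wavelet denoisers can be illustrated with a single-level Haar transform and a fixed soft threshold. This is a minimal stand-in: the paper's HMT/MRF model replaces the fixed threshold assumed here with a learned, spatially adaptive shrinkage.

```python
import numpy as np

def haar_denoise(x, thresh):
    """1-D single-level Haar transform with soft thresholding of the
    detail coefficients, followed by the inverse transform."""
    x = np.asarray(x, dtype=float)
    assert x.size % 2 == 0
    a = (x[0::2] + x[1::2]) / np.sqrt(2)          # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)          # detail coefficients
    d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)  # soft threshold
    y = np.empty_like(x)                          # inverse Haar transform
    y[0::2] = (a + d) / np.sqrt(2)
    y[1::2] = (a - d) / np.sqrt(2)
    return y

rng = np.random.default_rng(0)
clean = np.repeat([0.0, 4.0], 32)                 # piecewise-constant signal
noisy = clean + 0.5 * rng.standard_normal(64)
denoised = haar_denoise(noisy, thresh=0.5)
print(np.mean((noisy - clean) ** 2), np.mean((denoised - clean) ** 2))
```

Shrinking the noise-dominated detail coefficients reduces the mean squared error while the approximation coefficients, which carry the signal, pass through untouched.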

Li, Qiaoliang; Wang, Guoyou; Liu, Jianguo; Chen, Shaobo

2007-11-01

264

NASA Technical Reports Server (NTRS)

A probability density function for the variability of ensemble averaged spectral estimates from helicopter acoustic signals in Gaussian background noise was evaluated. Numerical methods for calculating the density function and for determining confidence limits were explored. Density functions were predicted for both synthesized and experimental data and compared with observed spectral estimate variability.

Garber, Donald P.

1993-01-01

265

3D depth-to-basement and density contrast estimates using gravity and borehole data

We present a gravity inversion method for simultaneously estimating the 3D basement relief of a sedimentary basin and the parameters defining the parabolic decay of the density contrast with depth in a sedimentary pack assuming the prior knowledge about the basement depth at a few points. The sedimentary pack is approximated by a grid of 3D vertical prisms juxtaposed in

V. C. Barbosa; C. M. Martins; J. B. Silva

2009-01-01

266

This study investigated whether surface hole counts could be used as a reliable estimate of density of the ghost shrimps Trypaea australiensis Dana 1852 and Biffarius arenosus Poore 1975 (Decapoda, Thalassinidea) in south eastern Australia. The relationship between the number of holes and the number of ghost shrimps was explored in two ways. Resin casts were used to document any

Sarah Butler; Fiona L. Bird

2007-01-01

267

MMIS at ImageCLEF 2009: Non-parametric Density Estimation Algorithms

This paper presents the work of the MMIS group done at ImageCLEF 2009. We submitted five different runs to the Photo Annotation task. These runs were based on two non-parametric density estimation models. The first one evaluates a set of visual features and proposes a better, weighted set of features. The second approach uses keyword correlation to compute semantic similarity

Ainhoa Llorente; Suzanne Little; Stefan Ruger

2009-01-01

268

Effects of Transect Selection and Seasonality on Lemur Density Estimates in Southeastern Madagascar

I investigated how transect type (trails vs. cut transects) and seasonality influenced density estimates for 5 lemur taxa (Avahi laniger, Cheirogaleus major, Eulemur rubriventer, Hapalemur griseus griseus, and Microcebus rufus) in the Vohibola III Classified Forest in SE Madagascar. I surveyed tree height and diameter and lemur populations from June 1 to December 28, 2004 along 2 1250-m trails local people

Shawn M. Lehman

2006-01-01

269

Estimation and prediction of multiple flying balls using Probability Hypothesis Density filtering

We describe a method for estimating the position and velocity of multiple flying balls for the purpose of robotic ball catching. For this, a multi-target recursive Bayes filter, the Gaussian Mixture Probability Hypothesis Density filter (GM-PHD), fed by a circle detector, is used. This recently developed filter avoids the need to enumerate all possible data association decisions, making it computationally efficient.

Oliver Birbach; Udo Frese

2011-01-01

270

In this note we derive a weighted non-linear least squares procedure for choosing the smoothing parameter in a Fourier approach to deconvolution of a density estimate. The method has the advantage over a previous procedure in that it is robust to the range of frequencies over which the model is fitted. A simulation study with different parametric forms for the

J. Barry; P. Diggle

1995-01-01

271

[Estimation of the size of bilayer liposome by optical density and refractive index].

A simple method for estimating the mean particle size in a suspension is suggested on the basis of an empirical correlation between the liposome size found by the optical shift method and the optical properties of the suspension. For liposome size characterization only two parameters are measured: optical density and refractive index increment. PMID:6845444

Levchuk, Iu N; Volovik, Z N; Shcherbatskaia, N V

1983-01-01

272

A method developed recently by Grist et al. (2009) is used to obtain estimates of variability in the strength of the meridional overturning circulation (MOC) at various latitudes in the North Atlantic. The method employs water mass transformation theory to determine the surface buoyancy forced overturning circulation (SFOC) using surface density flux fields from both the Hadley Centre Coupled Model

Simon A. Josey; Jeremy P. Grist; Robert Marsh

2009-01-01

273

A wind energy analysis of Grenada: an estimation using the ‘Weibull’ density function

The Weibull density function has been used to estimate the wind energy potential in Grenada, West Indies. Based on historic recordings of mean hourly wind velocity, this analysis shows the importance of incorporating the variation in wind energy potential during diurnal cycles. Wind energy assessments that are based on a Weibull distribution using average daily/seasonal wind speeds fail to acknowledge that
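For a Weibull wind-speed model, the mean power density follows in closed form from the third moment of the distribution. A small sketch with illustrative shape and scale parameters (assumed values, not Grenada-specific):

```python
import math

def weibull_pdf(v, k, c):
    """Weibull wind-speed density: f(v) = (k/c)(v/c)^(k-1) exp(-(v/c)^k)."""
    return (k / c) * (v / c) ** (k - 1) * math.exp(-((v / c) ** k))

def mean_power_density(k, c, rho=1.225):
    """Mean wind power density (W/m^2): 0.5 * rho * E[v^3], where
    E[v^3] = c^3 * Gamma(1 + 3/k) for a Weibull(k, c) distribution."""
    return 0.5 * rho * c ** 3 * math.gamma(1.0 + 3.0 / k)

# Hypothetical trade-wind-like site: shape k=2 (Rayleigh), scale c=8 m/s.
print(round(mean_power_density(2.0, 8.0), 1))   # -> 416.9
```

Because E[v^3] exceeds (E[v])^3, assessments based on average wind speeds alone underestimate the available energy, which is the distributional point the abstract makes.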

D Weisser

2003-01-01

274

Comparison of four types of sampling gears for estimating age-0 yellow perch density

To aid biologists in obtaining reliable and efficient estimates of age-0 yellow perch (Perca flavescens) abundance, we compared operational effort and catch characteristics (i.e., density, length frequencies, and precision) of four gear types (beach seines, benthic sleds, drop nets, and push trawls) in littoral habitats in two South Dakota glacial lakes. Gear types were selected on the basis that the

Daniel J. Dembkowski; David W. Willis; Melissa R. Wuellner

2012-01-01

275

A hybrid approach to crowd density estimation using statistical learning and texture classification

NASA Astrophysics Data System (ADS)

Crowd density estimation is a hot topic in the computer vision community. Established algorithms for crowd density estimation mainly focus on moving crowds, employing background modeling to obtain crowd blobs. However, people's motion is not obvious in many settings, such as the waiting hall in an airport or the lobby in a railway station. Moreover, conventional algorithms for crowd density estimation cannot yield desirable results for all levels of crowding due to occlusion and clutter. We propose a hybrid method to address the aforementioned problems. First, statistical learning is introduced for background subtraction, comprising a training phase and a test phase. The crowd images are gridded into small blocks which denote foreground or background. Then HOG features are extracted and fed into a binary SVM for each block. Hence, crowd blobs can be obtained from the classification results of the trained classifier. Second, the crowd images are treated as texture images. Therefore, the estimation problem can be formulated as texture classification. The density level can be derived according to the classification results. We validate the proposed algorithm on real scenarios where the crowd motion is not obvious. Experimental results demonstrate that our approach can obtain the foreground crowd blobs accurately and works well for different levels of crowding.

Li, Yin; Zhou, Bowen

2013-12-01

276

On Estimating Non-Uniform Density Distributions Using N Nearest Neighbors

NASA Astrophysics Data System (ADS)

We consider density estimators based on the nearest neighbors method applied to discrete point distributions in spaces of arbitrary dimensionality. If the density is constant, the volume of a hypersphere centered at a random location is proportional to the expected number of points falling within the hypersphere radius. The distance to the N-th nearest neighbor alone is then a sufficient statistic for the density. In the non-uniform case the proportionality is distorted. We model this distortion by normalizing hypersphere volumes to the largest one and expressing the resulting distribution in terms of the Legendre polynomials. Using Monte Carlo simulations we show that this approach can be used to effectively address the trade-off between smoothing bias and estimator variance for sparsely sampled distributions.
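The basic fixed-k estimator underlying this approach can be sketched in a few lines. This is the plain N-th-nearest-neighbor form, without the paper's Legendre-polynomial distortion model:

```python
import math
import numpy as np

def knn_density(points, x, k):
    """Fixed-k nearest-neighbor density estimate in d dimensions:
    f_hat(x) = k / (n * V_d * r_k**d), where r_k is the distance from x
    to its k-th nearest neighbor and V_d the volume of the unit d-ball."""
    points = np.asarray(points, dtype=float)
    n, d = points.shape
    r_k = np.sort(np.linalg.norm(points - x, axis=1))[k - 1]
    v_d = math.pi ** (d / 2) / math.gamma(d / 2 + 1)   # unit d-ball volume
    return k / (n * v_d * r_k ** d)

# Uniform points on the unit square have density ~1 everywhere.
rng = np.random.default_rng(1)
pts = rng.random((4000, 2))
print(knn_density(pts, np.array([0.5, 0.5]), k=50))
```

Larger k smooths more (lower variance, higher bias near density gradients), which is exactly the trade-off the abstract addresses for sparse samples.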

Woźniak, P. R.; Kruszewski, A.

2012-12-01

277

Production of and consumption by hatchery-reared fingerling (age-0) smallmouth bass Micropterus dolomieu at various simulated stocking densities were estimated with a bioenergetics model. Fish growth rates and pond water temperatures during the 1996 growing season at two hatcheries in Oklahoma were used in the model. Fish growth and simulated consumption and production differed greatly between the two hatcheries, probably because of differences in pond fertilization and mortality rates. Our results suggest that appropriate stocking density depends largely on prey availability as affected by pond fertilization and on fingerling mortality rates. The bioenergetics model provided a useful tool for estimating production at various stocking densities. However, verification of physiological parameters for age-0 fish of hatchery-reared species is needed.

Robel, G. L.; Fisher, W. L.

1999-01-01

278

Estimation of the high-latitude topside electron heat flux using DMSP plasma density measurements

NASA Astrophysics Data System (ADS)

The high-latitude ionosphere interfaces with the hot, tenuous, magnetospheric plasma, and a heat flow into the ionosphere is expected, which has a large impact on the plasma densities and temperatures in the high-latitude ionosphere. The value of this magnetospheric heat flux is unknown. In an effort to estimate the value of the magnetospheric heat flux into the high-latitude ionosphere, and show its effect on the high-latitude ionospheric plasma densities, we ran an ensemble of model runs using the Ionosphere Forecast Model (IFM) with different values of the heat flux through the upper boundary. These model runs included heating from both auroral and solar sources. Then, for each heat flux value, the plasma densities obtained from the model runs, at 840 km, were compared to the corresponding values measured by the DMSP F13 satellite. The heat flux value that gave the best comparison between the measured and calculated plasma densities was considered to be the best estimate for the topside heat flux. The comparison was conducted for a 1-year data set of the DMSP F13 measured plasma densities (4300 consecutive orbits). Our systematic IFM/DMSP plasma density comparisons indicate that when a zero magnetospheric downward heat flux is assumed at the upper boundary of the IFM model, on the average, the IFM underestimates the measured plasma densities by a factor of 2. A good IFM/DMSP plasma density comparison was achieved for each month in 1998 when for each month a constant heat flux was assumed at the upper boundary of the model. For the 12-month period, the heat flux values that gave the best IFM/DMSP plasma density comparisons varied on the average from -0.5×10^10 to -1.5×10^10 eV cm^-2 s^-1.

Bekerat, Hamed A.; Schunk, Robert W.; Scherliess, Ludger

2007-07-01

279

Estimation of Plasma Mass Density Using Toroidal Oscillations Observed by CRRES

NASA Astrophysics Data System (ADS)

Plasma mass density is a quantity that is difficult to determine using particle instruments. However, with proper models of the ambient magnetic field and density variation along the field lines, one can relate the frequency of observed field line eigenoscillations to the density [Denton et al., JGR, page 29,925, 2001]. In the present study we estimate the density using the toroidal oscillations detected by the magnetic (B) and electric (E) field experiments on the CRRES spacecraft. A period including the geomagnetic storm of October 9, 1990, is chosen for analysis because the spacecraft was located in the dawn sector where toroidal waves are routinely excited. Dynamic spectra of the B and E fields are generated for each CRRES orbit for identification of the presence and harmonic mode of the toroidal oscillations. The fundamental mode is found to be the easiest to identify when the spacecraft was at magnetic latitude higher than 15 degrees. Consequently, we follow the frequency of this mode for the selected storm period. At L = 7, the frequency was 6 mHz before the storm and it decreased to 3 mHz a few days after the main phase of the storm. This frequency change corresponds to an increase of mass density by a factor of four. The electron number density at the same L shell, determined from the CRRES plasma wave spectra, did not show a similar change. This result suggests that heavy ions increased during the storm. We use the Denton et al. [2001] technique to estimate the mass density, and then combine the mass density and electron density to obtain the effective ion mass/charge ratio.

Takahashi, K.; Denton, R. E.; Anderson, R. R.; Hughes, W. J.

2002-12-01

280

Quantitative Analysis for Breast Density Estimation in Low Dose Chest CT Scans.

A computational method was developed for the measurement of breast density using chest computed tomography (CT) images and the correlation between that and mammographic density. Sixty-nine asymptomatic Asian women (138 breasts) were studied. With the marked lung area and pectoralis muscle line in a template slice, the demons algorithm was applied to the consecutive CT slices to automatically generate the defined breast area. The breast area was then analyzed using fuzzy c-means clustering to separate fibroglandular tissue from fat tissues. The fibroglandular clusters obtained from all CT slices were summed, then divided by the summation of the total breast area to calculate the percent density for CT. The results were compared with the density estimated from mammographic images. For CT breast density, the coefficients of variation of intraoperator and interoperator measurements were 3.00 % (0.59 %-8.52 %) and 3.09 % (0.20 %-6.98 %), respectively. Breast density measured from CT (22 ± 0.6 %) was lower than that of mammography (34 ± 1.9 %) with a Pearson correlation coefficient of r = 0.88. The results suggested that breast density measured from chest CT images correlated well with that from mammography. Reproducible 3D information on breast density can be obtained with the proposed CT-based quantification methods. PMID:24643751

Moon, Woo Kyung; Lo, Chung-Ming; Goo, Jin Mo; Bae, Min Sun; Chang, Jung Min; Huang, Chiun-Sheng; Chen, Jeon-Hor; Ivanova, Violeta; Chang, Ruey-Feng

2014-03-01

281

Population density estimated from locations of individuals on a passive detector array

The density of a closed population of animals occupying stable home ranges may be estimated from detections of individuals on an array of detectors, using newly developed methods for spatially explicit capture–recapture. Likelihood-based methods provide estimates for data from multi-catch traps or from devices that record presence without restricting animal movement ("proximity" detectors such as camera traps and hair snags). As originally proposed, these methods require multiple sampling intervals. We show that equally precise and unbiased estimates may be obtained from a single sampling interval, using only the spatial pattern of detections. This considerably extends the range of possible applications, and we illustrate the potential by estimating density from simulated detections of bird vocalizations on a microphone array. Acoustic detection can be defined as occurring when received signal strength exceeds a threshold. We suggest detection models for binary acoustic data, and for continuous data comprising measurements of all signals above the threshold. While binary data are often sufficient for density estimation, modeling signal strength improves precision when the microphone array is small.

Efford, Murray G.; Dawson, Deanna K.; Borchers, David L.

2009-01-01

282

Surface estimates of the Atlantic overturning in density space in an eddy-permitting ocean model

NASA Astrophysics Data System (ADS)

A method to estimate the variability of the Atlantic meridional overturning circulation (AMOC) from surface observations is investigated using an eddy-permitting ocean-only model (ORCA-025). The approach is based on the estimate of dense water formation from surface density fluxes. Analysis using 78 years of two repeat-forcing model runs reveals that the surface forcing-based estimate accounts for over 60% of the interannual AMOC variability in σ0 coordinates between 37°N and 51°N. The analysis provides correlations between surface-forced and actual overturning that exceed those obtained in an earlier analysis of a coarser-resolution coupled model. Our results indicate that, in accordance with theoretical considerations behind the method, it provides a better estimate of the overturning in density coordinates than in z coordinates in subpolar latitudes. By considering shorter segments of the model run, it is shown that correlations are particularly enhanced by the method's ability to capture large decadal-scale AMOC fluctuations. The inclusion of the anomalous Ekman transport increases the amount of variance explained by an average 16% throughout the North Atlantic and provides the greatest potential for estimating the variability of the AMOC in density space between 33°N and 54°N. In that latitude range, 70-84% of the variance is explained and the root-mean-square difference is less than 1 Sv when the full run is considered.

Grist, Jeremy P.; Josey, Simon A.; Marsh, Robert

2012-06-01

283

The computational performance of two different variational quantum Monte Carlo estimators for both the electron and spin densities on top of nuclei is tested on a set of atomic systems also containing third-row species. Complications due to an unbounded variance present for both estimators are circumvented using appropriate sampling strategies. Our extension of a recently proposed estimator [Phys. Rev. A 69, 022701 (2004)] to deal with heavy fermionic systems appears to provide improved computational efficiency, by at least an order of magnitude, with respect to alternative literature approaches for our test set. Given the importance of an adequate sampling of the core region in computing the electron density at a nucleus, a further reduction in the overall simulation cost is obtained by employing accelerated sampling algorithms. PMID:19045000

Håkansson, P; Mella, Massimo

2008-09-28

284

Efficient and robust quantum Monte Carlo estimate of the total and spin electron densities at nuclei

NASA Astrophysics Data System (ADS)

The computational performance of two different variational quantum Monte Carlo estimators for both the electron and spin densities on top of nuclei is tested on a set of atomic systems also containing third-row species. Complications due to an unbounded variance present for both estimators are circumvented using appropriate sampling strategies. Our extension of a recently proposed estimator [Phys. Rev. A 69, 022701 (2004)] to deal with heavy fermionic systems appears to provide improved computational efficiency, by at least an order of magnitude, with respect to alternative literature approaches for our test set. Given the importance of an adequate sampling of the core region in computing the electron density at a nucleus, a further reduction in the overall simulation cost is obtained by employing accelerated sampling algorithms.

Håkansson, P.; Mella, Massimo

2008-09-01

285

Reader Variability in Breast Density Estimation from Full-Field Digital Mammograms

Rationale and Objectives Mammographic breast density, a strong risk factor for breast cancer, may be measured as either a relative percentage of dense (ie, radiopaque) breast tissue or as an absolute area from either raw (ie, “for processing”) or vendor postprocessed (ie, “for presentation”) digital mammograms. Given the increasing interest in the incorporation of mammographic density in breast cancer risk assessment, the purpose of this study is to determine the inherent reader variability in breast density assessment from raw and vendor-processed digital mammograms, because inconsistent estimates could lead to misclassification of an individual woman’s risk for breast cancer. Materials and Methods Bilateral, mediolateral-oblique view, raw, and processed digital mammograms of 81 women were retrospectively collected for this study (N = 324 images). Mammographic percent density and absolute dense tissue area estimates for each image were obtained from two radiologists using a validated, interactive software tool. Results The variability of interreader agreement was not found to be affected by the image presentation style (ie, raw or processed, F-test: P > .5). Interreader estimates of relative and absolute breast density are strongly correlated (Pearson r > 0.84, P < .001) but systematically different (t-test, P < .001) between the two readers. Conclusion Our results show that mammographic density may be assessed with equal reliability from either raw or vendor postprocessed images. Furthermore, our results suggest that the primary source of density variability comes from the subjectivity of the individual reader in assessing the absolute amount of dense tissue present in the breast, indicating the need to use standardized tools to mitigate this effect.

Keller, Brad M.; Nathan, Diane L.; Gavenonis, Sara C.; Chen, Jinbo; Conant, Emily F.; Kontos, Despina

2013-01-01

286

NSDL National Science Digital Library

What is Density? Density is the amount of "stuff" in a given "space". In science terms that means the amount of "mass" per unit "volume". Using units that means the amount of "grams" per "centimeters cubed". Check out the following links and learn about density through song! Density Beatles Style Density Chipmunk Style Density Rap Enjoy! ...

Witcher, Miss

2011-10-06

287

Wavelet-based image registration technique for high-resolution remote sensing images

NASA Astrophysics Data System (ADS)

Image registration is the process of geometrically aligning one image to another image of the same scene taken from different viewpoints at different times or by different sensors. It is an important image processing procedure in remote sensing and has been studied by remote sensing image processing professionals for several decades. Nevertheless, it is still difficult to find an accurate, robust, and automatic image registration method, and most existing image registration methods are designed for a particular application. High-resolution remote sensing images have made it more convenient for professionals to study the Earth; however, they also create new challenges when traditional processing methods are used. In terms of image registration, a number of problems exist in the registration of high-resolution images: (1) the increased relief displacements, introduced by increasing the spatial resolution and lowering the altitude of the sensors, cause obvious geometric distortion in local areas where elevation variation exists; (2) precisely locating control points in high-resolution images is not as simple as in moderate-resolution images; (3) a large number of control points are required for a precise registration, which is a tedious and time-consuming process; and (4) high data volume often affects the processing speed in the image registration. Thus, the demand for an image registration approach that can reduce the above problems is growing. This study proposes a new image registration technique, which is based on the combination of feature-based matching (FBM) and area-based matching (ABM). A wavelet-based feature extraction technique and a normalized cross-correlation matching and relaxation-based image matching techniques are employed in this new method. 
Two pairs of data sets, one pair of IKONOS panchromatic images from different times and the other pair of images consisting of an IKONOS panchromatic image and a QuickBird multispectral image, are used to evaluate the proposed image registration algorithm. The experimental results show that the proposed algorithm can select sufficient control points semi-automatically to reduce the local distortions caused by local height variation, resulting in improved image registration results.
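The area-based matching component rests on normalized cross-correlation, which is invariant to linear intensity changes between images. A minimal sketch of that matching core (the paper additionally uses wavelet feature extraction and relaxation-based matching, not shown here):

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equal-sized patches:
    ranges from -1 to 1, with 1 for patches identical up to gain/offset."""
    a = np.asarray(a, dtype=float).ravel()
    b = np.asarray(b, dtype=float).ravel()
    a = a - a.mean()                     # remove offset
    b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

patch = np.arange(25.0).reshape(5, 5)
print(round(ncc(patch, 2.0 * patch + 7.0), 6))   # gain/offset-invariant -> 1.0
```

In registration, this score is evaluated between a template around a candidate control point and windows in the other image; the peak location gives the match.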

Hong, Gang; Zhang, Yun

2008-12-01

288

A Wiener-Wavelet-Based filter for de-noising satellite soil moisture retrievals

NASA Astrophysics Data System (ADS)

The reduction of noise in microwave satellite soil moisture (SM) retrievals is of paramount importance for practical applications, especially for those associated with the study of climate changes, droughts, floods and other related hydrological processes. So far, Fourier based methods have been used for de-noising satellite SM retrievals by filtering either the observed emissivity time series (Du, 2012) or the retrieved SM observations (Su et al. 2013). This contribution introduces an alternative approach based on a Wiener-Wavelet-Based filtering (WWB) technique, which uses the Entropy-Based Wavelet de-noising method developed by Sang et al. (2009) to design both a causal and a non-causal version of the filter. WWB is used as a post-retrieval processing tool to enhance the quality of observations derived from the i) Advanced Microwave Scanning Radiometer for the Earth observing system (AMSR-E), ii) the Advanced SCATterometer (ASCAT), and iii) the Soil Moisture and Ocean Salinity (SMOS) satellite. The method is tested on three pilot sites located in Spain (Remedhus Network), in Greece (Hydrological Observatory of Athens) and in Australia (Oznet network), respectively. Different quantitative criteria are used to judge the goodness of the de-noising technique. Results show that WWB i) is able to improve both the correlation and the root mean squared differences between satellite retrievals and in situ soil moisture observations, and ii) effectively separates random noise from deterministic components of the retrieved signals. Moreover, the use of WWB de-noised data in place of raw observations within a hydrological application confirms the usefulness of the proposed filtering technique. Du, J. (2012), A method to improve satellite soil moisture retrievals based on Fourier analysis, Geophys. Res. Lett., 39, L15404, doi:10.1029/2012GL052435. Su, C.-H., D. Ryu, A. W. Western, and W. Wagner (2013), De-noising of passive and active microwave satellite soil moisture time series, Geophys. Res. Lett., 40, 3624-3630, doi:10.1002/grl.50695. Sang, Y.-F., D. Wang, J.-C. Wu, Q.-P. Zhu, and L. Wang (2009), Entropy-Based Wavelet De-noising Method for Time Series Analysis, Entropy, 11, pp. 1123-1148, doi:10.3390/e11041123.

Massari, Christian; Brocca, Luca; Ciabatta, Luca; Moramarco, Tommaso; Su, Chun-Hsu; Ryu, Dongryeol; Wagner, Wolfgang

2014-05-01

289

Non-Gaussian probabilistic MEG source localisation based on kernel density estimation.

There is strong evidence to suggest that data recorded from magnetoencephalography (MEG) follow a non-Gaussian distribution. However, existing standard methods for source localisation model the data using only second-order statistics, and therefore rely on the inherent assumption of a Gaussian distribution. In this paper, we present a new general method for non-Gaussian source estimation of stationary signals for localising brain activity from MEG data. By providing a Bayesian formulation for MEG source localisation, we show that the source probability density function (pdf), which is not necessarily Gaussian, can be estimated using multivariate kernel density estimators. In the case of Gaussian data, the solution of the method is equivalent to that of the widely used linearly constrained minimum variance (LCMV) beamformer. The method is also extended to handle data with highly correlated sources using the marginal distribution of the estimated joint distribution, which, in the case of Gaussian measurements, corresponds to the null-beamformer. The proposed non-Gaussian source localisation approach is shown to give better spatial estimates than the LCMV beamformer, both in simulations incorporating non-Gaussian signals and in real MEG measurements of auditory and visual evoked responses, where highly correlated sources are known to be difficult to estimate. PMID:24055702

Mohseni, Hamid R; Kringelbach, Morten L; Woolrich, Mark W; Baker, Adam; Aziz, Tipu Z; Probert-Smith, Penny

2014-02-15

290

Real-time wavelet-based inline banknote-in-bundle counting for cut-and-bundle machines

NASA Astrophysics Data System (ADS)

Automatic banknote sheet cut-and-bundle machines are widely used in banknote production. Besides the cutting and bundling, which is a mature technology, image-processing-based quality inspection for this type of machine is attractive. We present in this work a new real-time touchless counting and perspective cutting-blade quality-assurance system, based on a color CCD camera and a dual-core computer, for cut-and-bundle applications in banknote production. The system, which applies wavelet-based multi-scale filtering, is able to count banknotes inside a 100-bundle within 200-300 ms, depending on the window size.

Petker, Denis; Lohweg, Volker; Gillich, Eugen; Türke, Thomas; Willeke, Harald; Lochmüller, Jens; Schaede, Johannes

2011-02-01

291

Estimating density dependence in time-series of age-structured populations.

For a life history with age at maturity alpha, and stochasticity and density dependence in adult recruitment and mortality, we derive a linearized autoregressive equation with time lags of 1 to alpha years. Contrary to current interpretations, the coefficients for different time lags in the autoregressive dynamics do not simply measure delayed density dependence, but also depend on life-history parameters. We define a new measure of total density dependence in a life history, D, as the negative elasticity of population growth rate per generation with respect to change in population size, D = -∂ln(λ^T)/∂ln(N), where λ is the asymptotic multiplicative growth rate per year, T is the generation time and N is adult population size. We show that D can be estimated from the sum of the autoregression coefficients. We estimated D in populations of six avian species for which life-history data and unusually long time series of complete population censuses were available. Estimates of D were on the order of 1 or higher, indicating strong, statistically significant density dependence in four of the six species.
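The estimation route described above, fitting an autoregression to the (log) abundance series and reading density dependence off the coefficients, can be sketched as follows. The mapping D ≈ 1 − Σ coefficients used here is an illustrative simplification; the paper shows that the coefficients also involve life-history parameters, and the synthetic AR(1) series is hypothetical, not the avian census data:

```python
import random

def fit_ar(series, p):
    """OLS fit of an AR(p) model on the mean-centered series;
    returns the p lag coefficients."""
    mean = sum(series) / len(series)
    x = [v - mean for v in series]
    n = len(x)
    # Normal equations (X'X) b = X'y, solved by Gauss-Jordan elimination.
    xtx = [[sum(x[t - i] * x[t - j] for t in range(p, n))
            for j in range(1, p + 1)] for i in range(1, p + 1)]
    xty = [sum(x[t - i] * x[t] for t in range(p, n)) for i in range(1, p + 1)]
    for k in range(p):
        piv = xtx[k][k]
        xtx[k] = [v / piv for v in xtx[k]]
        xty[k] /= piv
        for r in range(p):
            if r != k:
                f = xtx[r][k]
                xtx[r] = [a - f * b for a, b in zip(xtx[r], xtx[k])]
                xty[r] -= f * xty[k]
    return xty

def estimate_D(series, p):
    """Illustrative only: reads D off as 1 minus the sum of the AR
    coefficients; the paper's exact mapping also involves life-history
    parameters."""
    return 1.0 - sum(fit_ar(series, p))

# Synthetic AR(1) abundance-deviation series with coefficient 0.5.
random.seed(1)
x = [0.0]
for _ in range(600):
    x.append(0.5 * x[-1] + random.gauss(0, 1))
```

On this synthetic series the fitted lag-1 coefficient is close to 0.5, giving an illustrative D near 0.5.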

Lande, R; Engen, S; Saether, B-E

2002-01-01

292

NASA Astrophysics Data System (ADS)

This paper presents an automated scheme for breast density estimation on mammograms using statistical and boundary information. Breast density is regarded as a meaningful indicator of breast cancer risk, but its measurement still relies on the qualitative judgment of radiologists. Therefore, we attempted to develop an automated system achieving objective and quantitative measurement. For preprocessing, we first segmented the breast region, performed contrast stretching, and applied median filtering. Then, two features were extracted: statistical information, comprising the standard deviations of the fat and dense regions in the breast area, and boundary information, the edge magnitude of the set of pixels with the same intensity. These features were calculated for each intensity level. By combining these features, the optimal threshold was determined which best divided the fat and dense regions. For evaluation purposes, 80 cases of Full-Field Digital Mammography (FFDM) taken in our institution were utilized. Two observers conducted the performance evaluation. The correlation coefficients of the threshold and percentage between the human observers and the automated estimation were 0.9580 and 0.9869 on average, respectively. These results suggest that the combination of statistical and boundary information is a promising method for automated breast density estimation.

Kim, Youngwoo; Kim, Changwon; Kim, Jong-Hyo

2010-03-01

293

Efficient sample density estimation by combining gridding and an optimized kernel.

The reconstruction of non-Cartesian k-space trajectories often requires the estimation of nonuniform sampling density. Particularly for 3D, this calculation can be computationally expensive. The method proposed in this work combines an iterative algorithm previously proposed by Pipe and Menon (Magn Reson Med 1999;41:179-186) with the optimal kernel design previously proposed by Johnson and Pipe (Magn Reson Med 2009;61:439-447). The proposed method shows substantial time reductions in estimating the densities of center-out trajectories, when compared with that of Johnson. It is demonstrated that, depending on the trajectory, the proposed method can provide reductions in execution time by factors of 12 to 85. The method is also shown to be robust in areas of high trajectory overlap, when compared with two analytical density estimation methods, producing a 10-fold increase in accuracy in one case. Initial conditions allow the proposed method to converge in fewer iterations and are shown to be flexible in terms of the accuracy of information supplied. The proposed method is not only one of the fastest and most accurate algorithms, it is also completely generic, allowing any arbitrary trajectory to be density compensated extemporaneously. The proposed method is also simple and can be implemented on parallel computing platforms in a straightforward manner. PMID:21688320
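The iterative algorithm of Pipe and Menon repeatedly divides each weight by the kernel-convolved weights, w ← w / (w ⊛ C), until the convolved weights are flat. A toy 1D version can be sketched as follows (the triangular kernel and the sample positions are assumptions for illustration; the real method grids 2D/3D k-space trajectories):

```python
def pipe_menon_weights(positions, kernel_width=1.0, iterations=20):
    """Toy 1D sketch of the iterative scheme w <- w / (w (*) C):
    each weight is divided by the kernel-weighted sum of neighboring
    weights, so densely sampled regions end up down-weighted."""
    def kernel(d):
        # Triangular convolution kernel (an assumption for illustration).
        return max(0.0, 1.0 - abs(d) / kernel_width)

    weights = [1.0] * len(positions)
    for _ in range(iterations):
        conv = [sum(wj * kernel(pi - pj) for pj, wj in zip(positions, weights))
                for pi in positions]
        weights = [w / c for w, c in zip(weights, conv)]
    return weights
```

For samples at 0, 1, 2, 2, 2, 3, the three coincident samples at position 2 converge to one third of the weight of the isolated samples, exactly compensating the local oversampling.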

Zwart, Nicholas R; Johnson, Kenneth O; Pipe, James G

2012-03-01

294

Density estimation in a wolverine population using spatial capture-recapture models

Classical closed-population capture-recapture models do not accommodate the spatial information inherent in encounter history data obtained from camera-trapping studies. As a result, individual heterogeneity in encounter probability is induced, and it is not possible to estimate density objectively because trap arrays do not have a well-defined sample area. We applied newly-developed, capture-recapture models that accommodate the spatial attribute inherent in capture-recapture data to a population of wolverines (Gulo gulo) in Southeast Alaska in 2008. We used camera-trapping data collected from 37 cameras in a 2,140-km² area of forested and open habitats largely enclosed by ocean and glacial icefields. We detected 21 unique individuals 115 times. Wolverines exhibited a strong positive trap response, with an increased tendency to revisit previously visited traps. Under the trap-response model, we estimated wolverine density at 9.7 individuals/1,000 km² (95% Bayesian CI: 5.9-15.0). Our model provides a formal statistical framework for estimating density from wolverine camera-trapping studies that accounts for a behavioral response due to baited traps. Further, our model-based estimator does not have strict requirements about the spatial configuration of traps or length of trapping sessions, providing considerable operational flexibility in the development of field studies.

Royle, J. Andrew; Magoun, Audrey J.; Gardner, Beth; Valkenbury, Patrick; Lowell, Richard E.

2011-01-01

295

We present a new geometric approach for determining the probability density of the intensity values in an image. We drop the notion of an image as a set of discrete pixels and assume a piecewise-continuous representation. The probability density can then be regarded as being proportional to the area between two nearby isocontours of the image surface. Our paper extends this idea to joint densities of image pairs. We demonstrate the application of our method to affine registration between two or more images using information-theoretic measures such as mutual information. We show cases where our method outperforms existing methods such as simple histograms, histograms with partial volume interpolation, Parzen windows, etc., under fine intensity quantization for affine image registration under significant image noise. Furthermore, we demonstrate results on simultaneous registration of multiple images, as well as for pairs of volume data sets, and show some theoretical properties of our density estimator. Our approach requires the selection of only an image interpolant. The method neither requires any kind of kernel functions (as in Parzen windows), which are unrelated to the structure of the image in itself, nor does it rely on any form of sampling for density estimation.

Rajwade, Ajit; Banerjee, Arunava; Rangarajan, Anand

2010-01-01

296

Singular value decomposition and density estimation for filtering and analysis of gene expression

We present three algorithms for gene expression analysis. Algorithm 1, known as the serial correlation test, is used for filtering out noisy gene expression profiles. Algorithms 2 and 3 project the gene expression profiles into 2-dimensional expression subspaces identified by Singular Value Decomposition. Density estimates are used to determine expression profiles that have a high correlation with the subspace and low levels of noise. High-density regions in the projection, clusters of co-expressed genes, are identified. We illustrate the algorithms by application to the yeast cell-cycle data of Cho et al. and comparison of the results.
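Algorithm 1, the serial correlation test, can be illustrated with a lag-1 autocorrelation filter: profiles whose successive values are uncorrelated look like noise and are dropped (the 0.5 threshold below is a placeholder, not a value from the paper):

```python
def lag1_autocorrelation(profile):
    """Lag-1 serial correlation of an expression profile; values near
    zero suggest uncorrelated noise, large values suggest smooth signal."""
    n = len(profile)
    mean = sum(profile) / n
    var = sum((v - mean) ** 2 for v in profile)
    if var == 0:
        return 0.0
    return sum((profile[t] - mean) * (profile[t + 1] - mean)
               for t in range(n - 1)) / var

def filter_noisy_profiles(profiles, threshold=0.5):
    """Keep only profiles whose serial correlation exceeds the threshold
    (the threshold value is illustrative, not from the paper)."""
    return [p for p in profiles if lag1_autocorrelation(p) > threshold]
```

A smoothly increasing profile passes the filter, while an alternating (noise-like) profile is rejected.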

Rechtsteiner, A. (Andreas); Gottardo, R. (Raphael); Rocha, L. M. (Luis Mateus); Wall, M. E. (Michael E.)

2003-01-01

297

Scatterer number density considerations in reference phantom-based attenuation estimation.

Attenuation estimation and imaging have the potential to be a valuable tool for tissue characterization, particularly for indicating the extent of thermal ablation therapy in the liver. Often the performance of attenuation estimation algorithms is characterized with numerical simulations or tissue-mimicking phantoms containing a high scatterer number density (SND). This ensures an ultrasound signal with a Rayleigh distributed envelope and a signal-to-noise ratio (SNR) approaching 1.91. However, biological tissue often fails to exhibit Rayleigh scattering statistics. For example, across 1647 regions of interest in five ex vivo bovine livers, we obtained an envelope SNR of 1.10 ± 0.12 when the tissue was imaged with the VFX 9L4 linear array transducer at a center frequency of 6.0 MHz on a Siemens S2000 scanner. In this article, we examine attenuation estimation in numerical phantoms, tissue-mimicking phantoms with variable SNDs and ex vivo bovine liver before and after thermal coagulation. We find that reference phantom-based attenuation estimation is robust to small deviations from Rayleigh statistics. However, in tissue with low SNDs, large deviations in envelope SNR from 1.91 lead to subsequently large increases in attenuation estimation variance. At the same time, low SND is not found to be a significant source of bias in the attenuation estimate. For example, we find that the standard deviation of attenuation slope estimates increases from 0.07 to 0.25 dB/cm-MHz as the envelope SNR decreases from 1.78 to 1.01 when estimating attenuation slope in tissue-mimicking phantoms with a large estimation kernel size (16 mm axially × 15 mm laterally). Meanwhile, the bias in the attenuation slope estimates is found to be negligible (<0.01 dB/cm-MHz). We also compare results obtained with reference phantom-based attenuation estimates in ex vivo bovine liver and thermally coagulated bovine liver. PMID:24726800
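The reference envelope SNR of 1.91 quoted above is the mean-to-standard-deviation ratio of a Rayleigh distribution, which arises under fully developed speckle (complex Gaussian scattering). A quick numerical check of that figure, under those standard assumptions:

```python
import math
import random

def envelope_snr(envelope):
    """Envelope signal-to-noise ratio: mean divided by standard deviation."""
    n = len(envelope)
    mean = sum(envelope) / n
    std = math.sqrt(sum((v - mean) ** 2 for v in envelope) / n)
    return mean / std

# Fully developed speckle: the envelope of a complex Gaussian signal
# is Rayleigh distributed, with theoretical SNR sqrt(pi/(4-pi)) ~ 1.91.
random.seed(0)
rayleigh = [math.hypot(random.gauss(0, 1), random.gauss(0, 1))
            for _ in range(100000)]
```

Tissues with low scatterer number density break the Rayleigh assumption and yield envelope SNRs well below 1.91, such as the 1.10 reported for the ex vivo bovine livers.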

Rubert, Nicholas; Varghese, Tomy

2014-07-01

298

NASA Astrophysics Data System (ADS)

Understanding streamflow variability and the ability to generate realistic scenarios at multi-decadal time scales is important for robust water resources planning and management in any river basin - more so in the Colorado River Basin, with its semi-arid climate and highly stressed water resources. It is increasingly evident that large-scale climate forcings such as the El Nino Southern Oscillation (ENSO), Pacific Decadal Oscillation (PDO) and Atlantic Multi-decadal Oscillation (AMO) modulate Colorado River Basin hydrology at multi-decadal time scales. Thus, modeling these large-scale climate indicators is important in order to then conditionally model the multi-decadal streamflow variability. To this end, we developed a simulation model that combines a wavelet-based time series method, Wavelet Auto Regressive Moving Average (WARMA), with a K-nearest neighbor (K-NN) bootstrap approach. In this, for a given time series (climate forcings), dominant periodicities/frequency bands are identified from the wavelet spectrum as those that pass a 90% significance test. The time series is filtered at these frequencies in each band to create 'components'; the components are orthogonal and, when added to the residual (i.e., noise), recover the original time series. The components, being smooth, are easily modeled using parsimonious Auto Regressive Moving Average (ARMA) time series models. The fitted ARMA models are used to simulate the individual components, which are added to obtain a simulation of the original series. The WARMA approach is applied to all the climate forcing indicators, which are used to simulate multi-decadal sequences of these forcings.
For the current year, the simulated forcings are considered the 'feature vector' and its K nearest neighbors are identified; one of the neighbors (i.e., one of the historical years) is resampled using a weighted probability metric (with more weight given to the nearest neighbor and least to the farthest), and the corresponding streamflow is the simulated value for the current year. We applied this simulation approach to the climate indicators and streamflow at Lees Ferry, AZ, a key gauge on the Colorado River, using observational and paleo data together spanning 1650-2005. A suite of distributional statistics, such as the probability density function (PDF), mean, variance, skew and lag-1 autocorrelation, along with higher-order and multi-decadal statistics such as spectra and drought and surplus statistics, are computed to check the performance of the flow simulation in capturing the variability of the historic and paleo periods. Our results indicate that this approach robustly generates all of the above-mentioned statistical properties. This offers an attractive alternative for near-term (interannual to multi-decadal) flow simulation that is critical for water resources planning.
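The K-NN bootstrap step described above can be sketched as follows. The 1/rank weighting is the common Lall-Sharma kernel, an assumption here since the abstract only says "weighted probability metric", and all variable names are illustrative:

```python
import random

def knn_resample(feature, history_features, history_flows, k, rng):
    """Weighted K-NN bootstrap: find the k historical years whose
    climate-forcing vector is closest to the current one, then resample
    one of their flows with weight proportional to 1/rank
    (nearest neighbor most likely, farthest least likely)."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(feature, f)) ** 0.5, flow)
        for f, flow in zip(history_features, history_flows))
    neighbors = [flow for _, flow in dists[:k]]
    weights = [1.0 / (rank + 1) for rank in range(k)]
    return rng.choices(neighbors, weights=weights, k=1)[0]
```

With a simulated forcing of 0.2 and historical forcings 0, 1 and 10, only the flows of the two nearest years can be drawn.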

Erkyihun, S. T.

2013-12-01

299

NASA Astrophysics Data System (ADS)

Pyroclastic density current deposits remobilized by water during periods of heavy rainfall trigger lahars (volcanic mudflows) that affect inhabited areas at considerable distance from volcanoes, even years after an eruption. Here we present an innovative approach to detect and estimate the thickness and volume of pyroclastic density current (PDC) deposits as well as erosional versus depositional environments. We use SAR interferometry to compare an airborne digital surface model (DSM) acquired in 2004 to a post-eruption 2010 DSM created using COSMO-SkyMed satellite data to estimate the volume of 2010 Merapi eruption PDC deposits along the Gendol river (Kali Gendol, KG). Results show PDC thicknesses of up to 75 m in canyons and a volume of about 40 × 10⁶ m³, mainly along KG, and at distances of up to 16 km from the volcano summit. This volume estimate corresponds mainly to the 2010 pyroclastic deposits along the KG - material that is potentially available to produce lahars. Our volume estimate is approximately twice that estimated by field studies, a difference we consider acceptable given the uncertainties involved in both satellite- and field-based methods. Our technique can be used to rapidly evaluate volumes of PDC deposits at active volcanoes, in remote settings and where continuous activity may prevent field observations.

Bignami, Christian; Ruch, Joel; Chini, Marco; Neri, Marco; Buongiorno, Maria Fabrizia; Hidayati, Sri; Sayudi, Dewi Sri; Surono

2013-07-01

300

We combined Breeding Bird Survey point count protocol and distance sampling to survey spring migrant and breeding birds in Vicksburg National Military Park on 33 days between March and June of 2003 and 2004. For 26 of 106 detected species, we used program DISTANCE to estimate detection probabilities and densities from 660 3-min point counts in which detections were recorded within four distance annuli. For most species, estimates of detection probability, and thereby density estimates, were improved through incorporation of the proportion of forest cover at point count locations as a covariate. Our results suggest Breeding Bird Surveys would benefit from the use of distance sampling and a quantitative characterization of habitat at point count locations. During spring migration, we estimated that the most common migrant species accounted for a population of 5000-9000 birds in Vicksburg National Military Park (636 ha). Species with average populations of 300 individuals during migration were: Blue-gray Gnatcatcher (Polioptila caerulea), Cedar Waxwing (Bombycilla cedrorum), White-eyed Vireo (Vireo griseus), Indigo Bunting (Passerina cyanea), and Ruby-crowned Kinglet (Regulus calendula). Of 56 species that bred in Vicksburg National Military Park, we estimated that the most common 18 species accounted for 8150 individuals. The six most abundant breeding species, Blue-gray Gnatcatcher, White-eyed Vireo, Summer Tanager (Piranga rubra), Northern Cardinal (Cardinalis cardinalis), Carolina Wren (Thryothorus ludovicianus), and Brown-headed Cowbird (Molothrus ater), accounted for 5800 individuals.

Somershoe, S.G.; Twedt, D.J.; Reid, B.

2006-01-01

301

Integrated nested Laplace approximation (INLA) is a recently proposed approximate Bayesian approach to fit structured additive regression models with a latent Gaussian field. The INLA method, as an alternative to Markov chain Monte Carlo techniques, provides accurate approximations of posterior marginals and avoids time-consuming sampling. We show here that two classical nonparametric smoothing problems, nonparametric regression and density estimation, can be addressed using INLA. Simulated examples and R functions are presented to illustrate the use of the methods, and potential applications of INLA are discussed.

Wang, Xiao-Feng

2013-01-01

302

Moment series for moment estimators of the parameters of a Weibull density

Taylor series for the first four moments of the coefficient of variation in sampling from a 2-parameter Weibull density are given; they are taken as far as the coefficient of n⁻²⁴. From these a four-moment approximating distribution is set up using summatory techniques on the series. The shape parameter is treated in a similar way, but here the moment equations are no longer explicit estimators, and terms only as far as those in n⁻¹² are given. The validity of assessed moments and percentiles of the approximating distributions is studied. Consideration is also given to properties of the moment estimator for 1/c.

Bowman, K.O.; Shenton, L.R.

1982-01-01

303

Estimating the Galactic Coronal Density via Ram-Pressure Stripping from Dwarf Satellites

NASA Astrophysics Data System (ADS)

Cosmological simulations and theories of galaxy formation predict that the Milky Way should be embedded in an extended hot gaseous halo or corona. To date, a definitive detection of such a corona in the Milky Way remains elusive. We have attempted to estimate the density of the Milky Way's cosmological corona using the effect that it has on the surrounding population of dwarf galaxies. We have considered two dSphs close to the Galaxy: Sextans and Carina. Assuming that they lost all their gas during the last pericentric passage via ram-pressure stripping, we were able to estimate the average density (n ≈ 2 × 10⁻⁴ cm⁻³) of the corona at a distance of ~ 70 kpc from the Milky Way. If we consider an isothermal profile and extrapolate it to large radii, the corona could contain a significant fraction of the missing baryons associated with the Milky Way.

Gatto, A.; Fraternali, F.; Marinacci, F.; Read, J.; Lux, H.

304

An estimator of the Orientation Probability Density Function (OPDF) of fiber tracts in the white matter of the brain from High Angular Resolution Diffusion data is presented. Unlike Q-Balls, which use the Funk-Radon transform to estimate the radial projection of the 3D Probability Density Function, the Jacobian of the spherical coordinates is included in the Funk-Radon approximation to the radial integral. Thus, true angular marginalizations are computed, which allows a strict probabilistic interpretation. Extensive experiments with both synthetic and real data show the better capability of our method to characterize complex micro-architectures compared to other related approaches (Q-Balls and Diffusion Orientation Transform), especially for low values of the diffusion weighting parameter. PMID:19393321

Tristán-Vega, Antonio; Westin, Carl-Fredrik; Aja-Fernández, Santiago

2009-08-15

305

A kernel density estimation based Bayesian classifier for celestial spectrum recognition

NASA Astrophysics Data System (ADS)

Celestial spectrum recognition is an indispensable part of any workable automated data processing system for celestial objects. Many methods have been proposed for spectrum recognition, most of which focus on feature extraction. In this paper, we present a Bayesian classifier based on Kernel Density Estimation (KDE), which consists of the following two steps. In the first step, linear Principal Component Analysis (PCA) is used to extract features, decreasing computational complexity and making the distribution of the spectral data more compact and useful for classification. In the second step, the classification step, KDE and the Expectation Maximization (EM) algorithm are used to estimate the class-conditional density and the bandwidth of the kernel function, respectively. The experimental results show that the proposed method achieves satisfactory performance on real observational data from the Sloan Digital Sky Survey (SDSS).
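The classification step can be sketched with class-conditional Gaussian KDEs and Bayes' rule. The PCA projection and the EM-based bandwidth selection of the paper are omitted for brevity, so the fixed bandwidth and the 1D features below are assumptions:

```python
import math

def gaussian_kde(points, bandwidth):
    """Return a Gaussian kernel density estimate p(x) built from points."""
    norm = 1.0 / (len(points) * bandwidth * math.sqrt(2 * math.pi))
    def pdf(x):
        return norm * sum(math.exp(-0.5 * ((x - p) / bandwidth) ** 2)
                          for p in points)
    return pdf

def kde_bayes_classifier(train, bandwidth=0.5):
    """train maps class label -> list of (projected) feature values.
    Priors come from class frequencies; classify by maximum posterior."""
    total = sum(len(v) for v in train.values())
    models = {lbl: (len(v) / total, gaussian_kde(v, bandwidth))
              for lbl, v in train.items()}
    def classify(x):
        # argmax over prior * class-conditional KDE density
        return max(models, key=lambda lbl: models[lbl][0] * models[lbl][1](x))
    return classify
```

With training values clustered near 0 for one class and near 5 for another, new points are assigned to the nearer cluster.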

Yang, Jin-Fu; Li, Ming-Ai; Yu, Naigong

2009-10-01

306

NASA Astrophysics Data System (ADS)

The Population Density Tables (PDT) project at Oak Ridge National Laboratory (www.ornl.gov) is developing population density estimates for specific human activities under normal patterns of life based largely on information available in open source. Currently, activity-based density estimates are based on simple summary data statistics such as range and mean. Researchers are interested in improving activity estimation and uncertainty quantification by adopting a Bayesian framework that considers both data and sociocultural knowledge. Under a Bayesian approach, knowledge about population density may be encoded through the process of expert elicitation. Due to the scale of the PDT effort which considers over 250 countries, spans 50 human activity categories, and includes numerous contributors, an elicitation tool is required that can be operationalized within an enterprise data collection and reporting system. Such a method would ideally require that the contributor have minimal statistical knowledge, require minimal input by a statistician or facilitator, consider human difficulties in expressing qualitative knowledge in a quantitative setting, and provide methods by which the contributor can appraise whether their understanding and associated uncertainty was well captured. This paper introduces an algorithm that transforms answers to simple, non-statistical questions into a bivariate Gaussian distribution as the prior for the Beta distribution. Based on geometric properties of the Beta distribution parameter feasibility space and the bivariate Gaussian distribution, an automated method for encoding is developed that responds to these challenging enterprise requirements. Though created within the context of population density, this approach may be applicable to a wide array of problem domains requiring informative priors for the Beta distribution.
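The core task above, turning a contributor's non-statistical answers into an informative Beta prior, can be illustrated with a much simpler encoder than the paper's bivariate-Gaussian construction: moment matching from an elicited mean and standard deviation (this stand-in, and its feasibility check, are assumptions for illustration only):

```python
def beta_from_mean_sd(mean, sd):
    """Moment-matching encoder: map an elicited mean and standard
    deviation for a proportion onto Beta(alpha, beta) parameters.
    A simple stand-in for the paper's elicitation algorithm."""
    if not 0.0 < mean < 1.0:
        raise ValueError("mean must lie strictly between 0 and 1")
    var = sd * sd
    # A Beta distribution requires var < mean * (1 - mean); this mirrors
    # the feasibility constraints on the (alpha, beta) parameter space.
    if var >= mean * (1.0 - mean):
        raise ValueError("sd too large for a Beta prior with this mean")
    nu = mean * (1.0 - mean) / var - 1.0
    return mean * nu, (1.0 - mean) * nu
```

For example, an elicited mean of 0.2 with standard deviation 0.1 encodes to Beta(3, 12), whose mean and variance reproduce the elicited values exactly.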

Stewart, Robert; White, Devin; Urban, Marie; Morton, April; Webster, Clayton; Stoyanov, Miroslav; Bright, Eddie; Bhaduri, Budhendra L.

2013-05-01

307

In this study, we attempt to distinguish between acute myeloid leukemia (AML) and acute lymphoid leukemia (ALL) using microarray gene expression data. Bayes' classification is used with three different density estimation techniques: Parzen, k-nearest neighbors (k-NN), and a new hybrid method, called k-neighborhood Parzen (k-NP), that combines properties of the other two. The classifiers

Christopher A. Peters; Faramarz Valafar

2003-01-01

308

Cover estimation versus density counting in species-rich pasture under different grazing intensities

Two methods for monitoring grassland vegetation were compared: visual estimation of plant cover (C) and counting of plant densities (D). C and D were performed at monthly intervals for three growing seasons after imposing different grazing regimes on abandoned grassland in 1998. Species scores obtained from paired redundancy analyses (RDA) of the C and D data were compared, and Spearman's

V. V. Pavlů; M. Hejcman; J. Mikulka

2009-01-01

309

Multitarget state and track estimation for the probability hypothesis density filter

The particle Probability Hypothesis Density (particle-PHD) filter is a tractable approach for Random Finite Set (RFS) Bayes estimation, but the particle-PHD filter cannot directly derive the target track. Most existing approaches add a data association step to solve this problem. This paper proposes an algorithm which does not need the association step. Our basic idea is based on the

Weifeng Liu; Chongzhao Han; Feng Lian; Xiaobin Xu; Chenglin Wen

2009-01-01

310

Use of spatial capture-recapture modeling and DNA data to estimate densities of elusive animals

Assessment of abundance, survival, recruitment rates, and density (i.e., population assessment) is especially challenging for elusive species most in need of protection (e.g., rare carnivores). Individual identification methods, such as DNA sampling, provide ways of studying such species efficiently and noninvasively. Additionally, statistical methods that correct for undetected animals and account for locations where animals are captured are available to efficiently estimate density and other demographic parameters. We collected hair samples of European wildcat (Felis silvestris) from cheek-rub lure sticks, extracted DNA from the samples, and identified each animals' genotype. To estimate the density of wildcats, we used Bayesian inference in a spatial capture-recapture model. We used WinBUGS to fit a model that accounted for differences in detection probability among individuals and seasons and between two lure arrays. We detected 21 individual wildcats (including possible hybrids) 47 times. Wildcat density was estimated at 0.29/km2 (SE 0.06), and 95% of the activity of wildcats was estimated to occur within 1.83 km from their home-range center. Lures located systematically were associated with a greater number of detections than lures placed in a cell on the basis of expert opinion. Detection probability of individual cats was greatest in late March. Our model is a generalized linear mixed model; hence, it can be easily extended, for instance, to incorporate trap- and individual-level covariates. We believe that the combined use of noninvasive sampling techniques and spatial capture-recapture models will improve population assessments, especially for rare and elusive animals.

Kery, Marc; Gardner, Beth; Stoeckle, Tabea; Weber, Darius; Royle, J. Andrew

2011-01-01

311

Road-Based Surveys for Estimating Wild Turkey Density in the Texas Rolling Plains

Line-transect-based distance sampling has been used to estimate density of several wild bird species including wild turkeys (Meleagris gallopavo). We used inflatable turkey decoys during autumn (Aug-Nov) and winter (Dec-Mar) 2003-2005 at 3 study sites in the Texas Rolling Plains, USA, to simulate Rio Grande wild turkey (M. g. intermedia) flocks. We evaluated detectability of flocks using logistic regression models.

MATTHEW J. BUTLER; WARREN B. BALLARD; MARK C. WALLACE; STEPHEN J. DEMASO

2007-01-01

312

The purpose of this study was to investigate the motor unit conduction velocity (CV) as a function of frequency. A wavelet-based correlation and coherence analysis was introduced to measure CV as a function of frequency. Based on the simple assumption that the power spectrum of the motor unit action potential is shifted to higher frequencies with increasing CV, we hypothesized that there would be a monotonic or linear trend of increasing CV with frequency. This trend was only confirmed at higher frequencies. At lower frequencies the trend was often reversed, leading to a decrease in CV with increasing frequency. Thus the CV was high at low frequencies, went through a minimum at about 170 Hz and increased at higher frequencies, as expected. The observed CV at low frequencies could not be fully explained by assuming non-propagating signals or variable groups of motor units. We concluded that spectra and CV contain partly independent information about the muscles and that the wavelet-based method provides the tools to measure both simultaneously. PMID:20634091

von Tscharner, Vinzenz; Barandun, Marina

2010-12-01

313

NASA Astrophysics Data System (ADS)

Wavelet-based methods for multiple hypothesis testing are described and their potential for activation mapping of human functional magnetic resonance imaging (fMRI) data is investigated. In this approach, we emphasize convergence between methods of wavelet thresholding or shrinkage and the problem of multiple hypothesis testing in both classical and Bayesian contexts. Specifically, our interest is focused on ensuring a trade-off between type I error control and power dissipation. We describe a technique for controlling the false discovery rate at an arbitrary level of type I error in testing multiple wavelet coefficients generated by a 2D discrete wavelet transform (DWT) of spatial maps of fMRI time series statistics. We also describe and apply recursive testing methods that can be used to define a threshold unique to each level and orientation of the 2D-DWT. Bayesian methods, incorporating a formal model for the anticipated sparseness of wavelet coefficients representing the signal or true image, are also tractable. These methods are comparatively evaluated by analysis of "null" images (acquired with the subject at rest), in which case the number of positive tests should be exactly as predicted under the null hypothesis, and an experimental dataset acquired from 5 normal volunteers during an event-related finger movement task. We show that all three wavelet-based methods of multiple hypothesis testing have good type I error control (the FDR method being most conservative) and generate plausible brain activation maps.
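The false-discovery-rate control referred to above can be illustrated with the Benjamini-Hochberg step-up procedure, applied to the p-values that the wavelet coefficients would generate (the wavelet transform itself is omitted; generic p-values stand in for coefficient tests):

```python
def benjamini_hochberg(p_values, q=0.05):
    """Benjamini-Hochberg step-up procedure: return a boolean rejection
    mask controlling the false discovery rate at level q. Sort the
    p-values, find the largest rank k with p_(k) <= k*q/n, and reject
    every hypothesis ranked at or below k."""
    n = len(p_values)
    order = sorted(range(n), key=lambda i: p_values[i])
    cutoff_rank = 0
    for rank, idx in enumerate(order, start=1):
        if p_values[idx] <= rank * q / n:
            cutoff_rank = rank
    reject = [False] * n
    for rank, idx in enumerate(order, start=1):
        if rank <= cutoff_rank:
            reject[idx] = True
    return reject
```

In a wavelet activation-mapping pipeline, rejected coefficients are kept and the rest set to zero before inverting the transform.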

Fadili, Jalal M.; Bullmore, Edward T.

2003-11-01

314

The objective of this paper is to present a secure method for distributing healthcare records (e.g. video streams and digitized image scans). Prompt access to expert medical care can meaningfully improve health care services in understaffed rural and remote areas through the sharing of available facilities and medical record referrals. Here, a secure method is developed for distributing healthcare records using a two-step wavelet-based technique: first, a 2-level db8 wavelet transform for textual elimination, and then a 4-level db8 wavelet transform for digital watermarking. The first transform detects and eliminates textual information found on images to protect data privacy and confidentiality. The second imposes imperceptible marks to identify the owner, track authorized users, or detect malicious tampering of documents. Experiments were performed on different digitized image scans. The experimental results illustrate that both wavelet-based methods are conceptually simple and able to effectively detect textual information, while the watermarking technique is robust to noise and compression. PMID:17282675

Yee Lau, Phooi; Ozawa, Shinji

2005-01-01

315

NSDL National Science Digital Library

What is density? Density is a relationship between mass (usually in grams or kilograms) and volume (usually in L, mL, or cm³). Below are several sites to help you further understand the concept of density. Click the following link to review the concept of density. Be sure to read each slide and watch each video: Chemistry Review: Density. Watch the following video: Pop density video. The following is a fun interactive site you can use to review density. Your job is #1, to play and #2, to calculate the density of the ...

Hansen, Mr.

2010-10-26

316

Kernel density estimation applied to bond length, bond angle, and torsion angle distributions.

We describe the method of kernel density estimation (KDE) and apply it to molecular structure data. KDE is a quite general nonparametric statistical method suitable even for multimodal data. The method generates smooth probability density function (PDF) representations and finds application in diverse fields such as signal processing and econometrics. KDE appears to have been under-utilized as a method in molecular geometry analysis, chemo-informatics, and molecular structure optimization. The resulting probability densities have advantages over histograms and, importantly, are also suitable for gradient-based optimization. To illustrate KDE, we describe its application to chemical bond length, bond valence angle, and torsion angle distributions and show the ability of the method to model arbitrary torsion angle distributions. PMID:24746022
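A minimal fixed-bandwidth Gaussian KDE in the spirit of the method described, applied to a synthetic bimodal "torsion angle" sample; the data, grid, and Silverman bandwidth rule are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def gaussian_kde(x, grid, h):
    """Fixed-bandwidth Gaussian kernel density estimate evaluated on `grid`."""
    u = (grid[:, None] - x[None, :]) / h
    return np.exp(-0.5 * u ** 2).sum(axis=1) / (x.size * h * np.sqrt(2 * np.pi))

# synthetic bimodal "torsion angle" sample in degrees (assumed data)
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(60.0, 10.0, 500), rng.normal(180.0, 15.0, 500)])
grid = np.linspace(0.0, 360.0, 721)          # 0.5 degree spacing
h = 1.06 * x.std() * x.size ** (-1 / 5)      # Silverman's rule of thumb
pdf = gaussian_kde(x, grid, h)
```

Unlike a histogram, the resulting `pdf` is smooth and differentiable, which is what makes it usable in gradient-based optimization as the abstract notes.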

McCabe, Patrick; Korb, Oliver; Cole, Jason

2014-05-27

317

NASA Astrophysics Data System (ADS)

A methodology for semiempirical density functional optimization, using regularization and cross-validation methods from machine learning, is developed. We demonstrate that such methods enable well-behaved exchange-correlation approximations in very flexible model spaces, thus avoiding the overfitting found when standard least-squares methods are applied to high-order polynomial expansions. A general-purpose density functional for surface science and catalysis studies should accurately describe bond breaking and formation in chemistry, solid state physics, and surface chemistry, and should preferably also include van der Waals dispersion interactions. Such a functional necessarily compromises between describing fundamentally different types of interactions, making transferability of the density functional approximation a key issue. We investigate this trade-off between describing the energetics of intramolecular and intermolecular, bulk solid, and surface chemical bonding, and the developed optimization method explicitly handles making the compromise based on the directions in model space favored by different materials properties. The approach is applied to designing the Bayesian error estimation functional with van der Waals correlation (BEEF-vdW), a semilocal approximation with an additional nonlocal correlation term. Furthermore, an ensemble of functionals around BEEF-vdW comes out naturally, offering an estimate of the computational error. An extensive assessment on a range of data sets validates the applicability of BEEF-vdW to studies in chemistry and condensed matter physics. Applications of the approximation and its Bayesian ensemble error estimate to two intricate surface science problems support this.

Wellendorff, Jess; Lundgaard, Keld T.; Møgelhøj, Andreas; Petzold, Vivien; Landis, David D.; Nørskov, Jens K.; Bligaard, Thomas; Jacobsen, Karsten W.

2012-06-01

318

In this paper we discuss representations of charged particle densities in particle-in-cell (PIC) simulations, analyze the sources and profiles of the intrinsic numerical noise, and present efficient methods for its removal. We devise two alternative estimation methods for the charged particle distribution which represent a significant improvement over the Monte Carlo cosine expansion used in the 2D code of Bassi, designed to simulate coherent synchrotron radiation (CSR) in charged particle beams. The representation is first binned onto a finite grid, after which two grid-based methods are employed to approximate particle distributions: (i) a truncated fast cosine transform (TFCT); and (ii) a thresholded wavelet transform (TWT). We demonstrate that these alternative methods represent a substantial upgrade over the original Monte Carlo cosine expansion in terms of efficiency, while the TWT approximation also provides an appreciable improvement in accuracy. The improvement in accuracy comes from a judicious removal of the numerical noise enabled by the wavelet formulation. The TWT method is then integrated into Bassi's CSR code and benchmarked against the original version. We show that the new density estimation method provides superior performance in terms of efficiency and spatial resolution, thus enabling high-fidelity simulations of CSR effects, including the microbunching instability.
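The thresholded wavelet transform step can be sketched with an orthonormal Haar transform and the Donoho-Johnstone universal threshold; this is a simplified stand-in for the wavelet family and thresholding rule actually used in the paper:

```python
import numpy as np

def haar_fwd(x):
    """Full multilevel orthonormal Haar transform; len(x) must be a power of 2.
    Returns [d_1, d_2, ..., d_J, a_J]: detail arrays plus one approximation."""
    coeffs, a = [], np.asarray(x, float)
    while a.size > 1:
        s = (a[0::2] + a[1::2]) / np.sqrt(2.0)
        d = (a[0::2] - a[1::2]) / np.sqrt(2.0)
        coeffs.append(d)
        a = s
    coeffs.append(a)
    return coeffs

def haar_inv(coeffs):
    a = coeffs[-1]
    for d in reversed(coeffs[:-1]):
        out = np.empty(2 * a.size)
        out[0::2] = (a + d) / np.sqrt(2.0)
        out[1::2] = (a - d) / np.sqrt(2.0)
        a = out
    return a

def denoise(x, sigma):
    """Hard-threshold the detail coefficients at the universal level
    sigma*sqrt(2 ln N) (Donoho-Johnstone) and invert the transform."""
    thr = sigma * np.sqrt(2.0 * np.log(x.size))
    c = haar_fwd(x)
    c = [np.where(np.abs(d) > thr, d, 0.0) for d in c[:-1]] + [c[-1]]
    return haar_inv(c)

rng = np.random.default_rng(0)
n = 1024
t = np.linspace(0.0, 1.0, n)
clean = np.where(t < 0.5, 0.0, 1.0)           # localized sharp feature
noisy = clean + 0.1 * rng.standard_normal(n)
recon = denoise(noisy, 0.1)
```

Noise spreads roughly evenly over all coefficients and is killed by the threshold, while a localized sharp feature concentrates in a few large coefficients that survive, which is the accuracy advantage the abstract attributes to the wavelet formulation.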

Terzic, Balsa; Bassi, Gabriele

2011-07-01

319

Examining the impact of the precision of address geocoding on estimated density of crime locations

NASA Astrophysics Data System (ADS)

This study examines the impact of the precision of address geocoding on the estimated density of crime locations in a large urban area of Japan. The data consist of two separate sets of the same Penal Code offenses known to the police that occurred during the nine-month period of April 1, 2001 through December 31, 2001 in the central 23 wards of Tokyo. These two data sets are derived from the older and newer recording systems of the Tokyo Metropolitan Police Department (TMPD), which revised its crime reporting system in that year so that more precise location information than in previous years could be recorded. Each data set was address-geocoded onto a large-scale digital map using our hierarchical address-geocoding schema, and we examined how such differences in the precision of address information, and the resulting differences in address-geocoded incident locations, affect the patterns in kernel density maps. An analysis using 11,096 pairs of incidents of residential burglary (each pair consists of the same incident geocoded using older and newer address information, respectively) indicates that kernel density estimation with a cell size of 25×25 m and a bandwidth of 500 m may work quite well in absorbing the poorer precision of geocoded locations based on data from the older recording system, whereas in several areas where the older recording system resulted in a very poor precision level, the inaccuracy of incident locations may produce artifactual and potentially misleading patterns in kernel density maps.
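The kind of kernel density surface described (25 m cells, 500 m bandwidth) can be sketched with a quartic kernel on a regular grid; the incident coordinates below are synthetic:

```python
import numpy as np

def grid_kde_2d(points, x_edges, y_edges, bandwidth):
    """Quartic (biweight) kernel density surface on a regular grid, the form
    commonly used for crime hot-spot maps; units are incidents per unit area."""
    xc = (x_edges[:-1] + x_edges[1:]) / 2
    yc = (y_edges[:-1] + y_edges[1:]) / 2
    gx, gy = np.meshgrid(xc, yc, indexing="ij")
    dens = np.zeros(gx.shape)
    for px, py in points:
        r2 = ((gx - px) ** 2 + (gy - py) ** 2) / bandwidth ** 2
        k = np.where(r2 < 1, (1 - r2) ** 2, 0.0)
        dens += 3 / (np.pi * bandwidth ** 2) * k   # each kernel integrates to 1
    return dens

# toy incidents on a 2 km x 2 km area, 25 m cells, 500 m bandwidth (per the study)
rng = np.random.default_rng(0)
pts = rng.uniform(500, 1500, size=(200, 2))        # metres
edges = np.arange(0, 2001, 25.0)
surface = grid_kde_2d(pts, edges, edges, 500.0)
```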

Harada, Yutaka; Shimada, Takahito

2006-10-01

320

Estimation of Graphite Density and mechanical Strength of VHTR during Air-Ingress Accident

An air-ingress accident in a VHTR is anticipated to cause severe changes in graphite density and mechanical strength through the oxidation process, resulting in many side effects. However, quantitative estimation has not yet been performed. In this study, the focus has been on predicting the graphite density change and mechanical strength using a thermal hydraulic system analysis code. For analysis of the graphite density change, a simple graphite burn-off model was developed based on the similarity between a parallel electrical circuit and graphite oxidation, considering the overall changes of the graphite geometry and density. The developed model was implemented in the VHTR system analysis code, GAMMA, along with other comprehensive graphite oxidation models. The GT-MHR 600 MWt reactor was selected as a reference reactor. From the calculation, it was observed that the main oxidation process began 5.5 days after the accident, following the onset of natural convection. The core maximum temperature reached 1400°C; however, it never exceeded the maximum temperature criterion of 1600°C. According to the calculation results, most of the oxidation occurs in the bottom reflector, so the exothermic heat generated by oxidation did not affect the core heat-up. However, the oxidation process greatly decreased the density of the bottom reflector, making it vulnerable to mechanical stress. In fact, since the bottom reflector sustains the reactor core, the stress is highly concentrated on this part. The calculations were carried out for up to 11 days after the accident, and a 4.5% density decrease was estimated, resulting in a 25% reduction in mechanical strength.

Chang Oh; Eung Soo Kim; Hee Cheon No; Byung Jun Kim

2007-09-01

321

NASA Astrophysics Data System (ADS)

Accurate numerical simulations of global-scale three-dimensional atmospheric chemical transport models (CTMs) are essential for studies of many important atmospheric chemistry problems, such as the adverse effects of air pollutants on human health, ecosystems, and the Earth's climate. These simulations usually require large CPU time due to numerical difficulties associated with a wide range of spatial and temporal scales, nonlinearity, and the large number of reacting species. In our previous work we have shown that in order to achieve an adequate convergence rate and accuracy, the mesh spacing in numerical simulation of global synoptic-scale pollution plume transport must be decreased to a few kilometers. This resolution is difficult to achieve for global CTMs on uniform or quasi-uniform grids. To address the difficulty described above we developed a three-dimensional Wavelet-based Adaptive Mesh Refinement (WAMR) algorithm. The method employs a highly non-uniform adaptive grid with fine resolution over the areas of interest without requiring small grid spacing throughout the entire domain. The method uses a multigrid iterative solver that naturally takes advantage of the multilevel structure of the adaptive grid. In order to represent the multilevel adaptive grid efficiently, a dynamic data structure based on indirect memory addressing has been developed. The data structure allows rapid access to individual points, fast inter-grid operations, and re-gridding. The WAMR method has been implemented on parallel computer architectures. The parallel algorithm is based on a run-time partitioning and load-balancing scheme for the adaptive grid. The partitioning scheme maintains locality to reduce communications between computing nodes. The parallel scheme was found to be cost-effective. Specifically, we obtained an order of magnitude increase in computational speed for numerical simulations performed on a twelve-core single-processor workstation.
We have applied the WAMR method to the numerical simulation of several benchmark problems, including simulation of traveling three-dimensional reactive and inert transpacific pollution plumes. It was shown earlier that conventionally used global CTMs implemented on stationary grids are incapable of reproducing the dynamics of these plumes due to excessive numerical diffusion caused by limitations in grid resolution. It has been shown that the WAMR algorithm allows us to use grids one to two orders of magnitude finer than static grid techniques in regions of fine spatial scales without significantly increasing CPU time. Therefore the developed WAMR method has significant advantages over conventional fixed-resolution computational techniques in terms of accuracy and/or computational cost, and it makes it possible to accurately simulate important multi-scale chemical transport problems that cannot be simulated with the standard static grid techniques currently utilized by the majority of global atmospheric chemistry models. This work is supported by a grant from the National Science Foundation under Award No. HRD-1036563.
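The wavelet refinement criterion at the heart of such adaptive schemes can be illustrated in one dimension: interpolation details are large only where the field has fine-scale structure, so only those cells are flagged for refinement. A toy sketch, with the field and threshold chosen arbitrarily:

```python
import numpy as np

def refinement_flags(u, eps):
    """Flag grid points whose wavelet-like detail (deviation of odd-indexed
    values from linear interpolation of their even neighbours) exceeds eps."""
    detail = np.abs(u[1:-1:2] - 0.5 * (u[0:-2:2] + u[2::2]))
    flags = np.zeros(u.size, dtype=bool)
    flags[1:-1:2] = detail > eps
    return flags

x = np.linspace(0.0, 1.0, 257)
u = np.tanh((x - 0.5) / 0.01)       # sharp front, like a thin plume edge
flags = refinement_flags(u, 1e-3)
```

Only the handful of points straddling the front are flagged; the smooth regions keep the coarse grid, which is how the adaptive grid avoids small spacing throughout the domain.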

Rastigejev, Y.; Semakin, A. N.

2013-12-01

322

Estimated carbon dioxide emissions from tropical deforestation improved by carbon-density maps

NASA Astrophysics Data System (ADS)

Deforestation contributes 6-17% of global anthropogenic CO2 emissions to the atmosphere. Large uncertainties in emission estimates arise from inadequate data on the carbon density of forests and the regional rates of deforestation. Consequently there is an urgent need for improved data sets that characterize the global distribution of aboveground biomass, especially in the tropics. Here we use multi-sensor satellite data to estimate aboveground live woody vegetation carbon density for pan-tropical ecosystems with unprecedented accuracy and spatial resolution. Results indicate that the total amount of carbon held in tropical woody vegetation is 228.7 Pg C, which is 21% higher than the amount reported in the Global Forest Resources Assessment 2010. At the national level, Brazil and Indonesia contain 35% of the total carbon stored in tropical forests and produce the largest emissions from forest loss. Combining estimates of aboveground carbon stocks with regional deforestation rates, we estimate the total net emission of carbon from tropical deforestation and land use to be 1.0 Pg C yr-1 over the period 2000-2010, based on the carbon bookkeeping model. These new data sets of aboveground carbon stocks will enable tropical nations to meet their emissions reporting requirements (that is, United Nations Framework Convention on Climate Change Tier 3) with greater accuracy.

Baccini, A.; Goetz, S. J.; Walker, W. S.; Laporte, N. T.; Sun, M.; Sulla-Menashe, D.; Hackler, J.; Beck, P. S. A.; Dubayah, R.; Friedl, M. A.; Samanta, S.; Houghton, R. A.

2012-03-01

323

Optimal Diffusion MRI Acquisition for Fiber Orientation Density Estimation: An Analytic Approach

An important challenge in the design of diffusion MRI experiments is how to optimize statistical efficiency, i.e., the accuracy with which parameters can be estimated from the diffusion data in a given amount of imaging time. In model-based spherical deconvolution analysis, the quantity of interest is the fiber orientation density (FOD). Here, we demonstrate how the spherical harmonics (SH) can be used to form an explicit analytic expression for the efficiency of the minimum variance (maximally efficient) linear unbiased estimator of the FOD. Using this expression, we calculate optimal b-values for maximum FOD estimation efficiency with SH expansion orders of L = 2, 4, 6, and 8 to be approximately b = 1500, 3000, 4600, and 6200 s/mm2, respectively. However, the arrangement of diffusion directions and scanner-specific hardware limitations also play a role in determining the realizable efficiency of the FOD estimator that can be achieved in practice. We show how some commonly used methods for selecting diffusion directions are sometimes inefficient, and propose a new method for selecting diffusion directions in MRI based on maximizing the statistical efficiency. We further demonstrate how scanner-specific hardware limitations generally lead to optimal b-values that are slightly lower than the ideal b-values. In summary, the analytic expression for the statistical efficiency of the unbiased FOD estimator provides important insight into the fundamental tradeoff between angular resolution, b-value, and FOD estimation accuracy.

White, Nathan S.; Dale, Anders M.

2012-01-01

324

NASA Astrophysics Data System (ADS)

The convective transfer of radionuclides by subsurface water from a geological repository of solidified high-level radioactive wastes (HLW) is considered. The repository is a cluster of wells of large diameter with HLW disposed of in the lower portions of the wells. The safe distance between wells as a function of rock properties and parameters of well loading with wastes has been estimated from mathematical modeling. A maximum permissible concentration of radionuclides in subsurface water near the ground surface above the repository is regarded as a necessary condition of safety. The estimates obtained show that well repositories allow for a higher density of solid HLW disposal than shaft storage facilities. Advantages and disadvantages of both types of storage facilities are considered in order to estimate the prospects for their use for underground disposal of solid HLW.

Malkovsky, V. I.; Pek, A. A.

2007-06-01

325

Validation tests of an improved kernel density estimation method for identifying disease clusters

NASA Astrophysics Data System (ADS)

The spatial filter method, which belongs to the class of kernel density estimation methods, has been used to make morbidity and mortality maps in several recent studies. We propose improvements in the method to include spatially adaptive filters to achieve constant standard error of the relative risk estimates; a staircase weight method for weighting observations to reduce estimation bias; and a parameter selection tool to enhance disease cluster detection performance, measured by sensitivity, specificity, and false discovery rate. We test the performance of the method using Monte Carlo simulations of hypothetical disease clusters over a test area of four counties in Iowa. The simulations include different types of spatial disease patterns and high-resolution population distribution data. Results confirm that the new features of the spatial filter method do substantially improve its performance in realistic situations comparable to those where the method is likely to be used.
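The performance measures used in the validation (sensitivity, specificity, false discovery rate) follow directly from the confusion counts of a detected cluster map against the simulated truth. A small sketch with hypothetical 10×10 masks:

```python
import numpy as np

def detection_metrics(truth, pred):
    """Sensitivity, specificity, and false discovery rate of a binary
    detected-cluster map against the true cluster mask."""
    truth = np.asarray(truth, bool)
    pred = np.asarray(pred, bool)
    tp = np.sum(truth & pred)
    fp = np.sum(~truth & pred)
    fn = np.sum(truth & ~pred)
    tn = np.sum(~truth & ~pred)
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    fdr = fp / max(tp + fp, 1)
    return sens, spec, fdr

# hypothetical 10x10 maps: a true cluster vs a detection shifted by one cell
truth = np.zeros((10, 10), bool)
truth[2:5, 2:5] = True
pred = np.zeros((10, 10), bool)
pred[3:6, 3:6] = True
sens, spec, fdr = detection_metrics(truth, pred)
```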

Cai, Qiang; Rushton, Gerard; Bhaduri, Budhendra

2012-07-01

326

NASA Astrophysics Data System (ADS)

A simple technique, based on the interference principle, to obtain simultaneously the instantaneous electron density and temperature of ultra-short laser-excited semiconductor surface plasma is proposed and demonstrated. The interference of the incident laser and the surface plasmons forms nano-ripples on the surface. From the observed nano-ripple period, one can easily retrieve the density and temperature information. As a demonstration of the technique, the electron density and temperature are obtained for various band gap semiconductor materials based on the experimentally observed nano-ripples using 800 and 400 nm light in various ambient media and incident angles. The electron density estimated varied in the range of 2-10 and the corresponding electron temperature in the range 10-10 K, depending on the material band gap, the incident laser intensity, the ambient medium, the angle of incidence, and the laser wavelength. The information of the electron density and temperature is useful for choosing laser parameters (like fluence, wavelength, angle of incidence, ambient medium) and target materials (different band gap semiconductors) for obtaining a better size controllability of the nanostructure production. The information can also help one in obtaining essential plasma parameter inputs in the quest for understanding ultra-fast melting or understanding the pre-plasma conditions created by the pre-pulse of ultra-high intensity laser pulses.

Chakravarty, U.; Naik, P. A.; Chakera, J. A.; Upadhyay, A.; Gupta, P. D.

2014-06-01

327

NSDL National Science Digital Library

This page introduces students to the concept of density by presenting its definition, formula, and two blocks representing materials of different densities. Students are given the mass and volume of each block and asked to calculate the density. Their answers are then compared against a table of densities of common objects (air, wood, gold, etc.) and students must determine, using the density of the blocks, which substance makes up each block.

Carpi, Anthony

2003-01-01

328

We estimated relative abundance and density of Western Burrowing Owls (Athene cunicularia hypugaea) at two sites in the Mojave Desert (2003-04). We made modifications to previously established Burrowing Owl survey techniques for use in desert shrublands and evaluated several factors that might influence the detection of owls. We tested the effectiveness of the call-broadcast technique for surveying this species, the efficiency of this technique at early and late breeding stages, and the effectiveness of various numbers of vocalization intervals during broadcasting sessions. Only 1 (3%) of 31 initial (new) owl responses was detected during passive-listening sessions. We found that surveying early in the nesting season was more likely to produce new owl detections compared to surveying later in the nesting season. New owls detected during each of the three vocalization intervals (each consisting of 30 sec of vocalizations followed by 30 sec of silence) of our broadcasting session were similar (37%, 40%, and 23%; n = 30). We used a combination of detection trials (sighting probability) and the double-observer method to estimate the components of detection probability, i.e., availability and perception. Availability for all sites and years, as determined by detection trials, ranged from 46.1-58.2%. Relative abundance, measured as frequency of occurrence and defined as the proportion of surveys with at least one owl, ranged from 19.2-32.0% for both sites and years. Density at our eastern Mojave Desert site was estimated at 0.09 ± 0.01 (SE) owl territories/km2 and 0.16 ± 0.02 (SE) owl territories/km2 during 2003 and 2004, respectively. In our southern Mojave Desert site, density estimates were 0.09 ± 0.02 (SE) owl territories/km2 and 0.08 ± 0.02 (SE) owl territories/km2 during 2004 and 2005, respectively. © 2010 The Raptor Research Foundation, Inc.

Crowe, D. E.; Longshore, K. M.

2010-01-01

329

Binomial sampling to estimate rust mite (Acari: Eriophyidae) densities on orange fruit.

Binomial sampling based on the proportion of samples infested was investigated for estimating mean densities of citrus rust mite, Phyllocoptruta oleivora (Ashmead), and Aculops pelekassi (Keifer) (Acari: Eriophyidae), on oranges, Citrus sinensis (L.) Osbeck. Data for the investigation were obtained by counting the number of motile mites within 600 sample units (each unit a 1-cm2 surface area per fruit) across a 4-ha block of trees (32 blocks total): five areas per 4 ha, five trees per area, 12 fruit per tree, and two samples per fruit. A significant (r2 = 0.89), linear relationship was found between ln(-ln(1 - P0)) and ln(mean), where P0 is the proportion of samples with more than zero mites. The fitted binomial parameters adequately described a validation data set from a sampling plan consisting of 192 samples. Projections indicated the fitted parameters would apply to sampling plans with as few as 48 samples, but reducing sample size resulted in an increase of bootstrap estimates falling outside expected confidence limits. Although mite count data fit the binomial model, confidence limits for mean arithmetic predictions increased dramatically as proportion of samples infested increased. Binomial sampling using a tally threshold of 0 therefore has less value when proportions of samples infested are large. Increasing the tally threshold to two mites marginally improved estimates at larger densities. Overall, binomial sampling for a general estimate of mite densities seemed to be a viable alternative to absolute counts of mites per sample for a grower using a low management threshold such as two or three mites per sample. PMID:17370833
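The fitted relation ln(-ln(1 - P0)) = a + b*ln(mean) can be reproduced on simulated counts; for Poisson counts the relation holds exactly with a = 0 and b = 1, which makes a convenient check of the fitting and inversion steps (sample sizes and density levels here are arbitrary):

```python
import numpy as np

# Fit ln(-ln(1 - P0)) = a + b*ln(mean) on simulated count data.
rng = np.random.default_rng(0)
means = [0.2, 0.5, 1.0, 2.0, 4.0]        # assumed range of mite densities
p0, m_obs = [], []
for mu in means:
    counts = rng.poisson(mu, size=5000)
    p0.append(np.mean(counts > 0))       # proportion of samples "infested"
    m_obs.append(counts.mean())
b, a = np.polyfit(np.log(m_obs), np.log(-np.log(1.0 - np.array(p0))), 1)

def mean_from_p0(p, a=a, b=b):
    """Invert the fitted line: mean density predicted from infestation rate."""
    return np.exp((np.log(-np.log(1.0 - p)) - a) / b)
```

In practice the regression is fit to field counts rather than simulated ones; the inversion is what lets a grower convert a quick presence/absence tally into an estimated mite density.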

Hall, David G; Childers, Carl C; Eger, Joseph E

2007-02-01

330

The Recovery Plan for the federally threatened Louisiana black bear (Ursus americanus luteolus) mandates that remnant populations be estimated and monitored. In 1999 we obtained genetic material with barbed-wire hair traps to estimate bear population size and genetic diversity at the 329-km2 Tensas River Tract, Louisiana. We constructed and monitored 122 hair traps, which produced 1,939 hair samples. Of those, we randomly selected 116 subsamples for genetic analysis and used up to 12 microsatellite DNA markers to obtain multilocus genotypes for 58 individuals. We used Program CAPTURE to compute estimates of population size using multiple mark-recapture models. The area of study was almost entirely circumscribed by agricultural land, thus the population was geographically closed. Also, study-area boundaries were biologically discrete, enabling us to accurately estimate population density. Using model Chao Mh to account for possible effects of individual heterogeneity in capture probabilities, we estimated the population size to be 119 (SE = 29.4) bears, or 0.36 bears/km2. We were forced to examine a substantial number of loci to differentiate between some individuals because of low genetic variation. Despite the probable introduction of genes from Minnesota bears in the 1960s, the isolated population at Tensas exhibited characteristics consistent with inbreeding and genetic drift. Consequently, the effective population size at Tensas may be as few as 32, which warrants continued monitoring or possibly genetic augmentation.
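A hedged sketch of the mark-recapture idea: Chapman's bias-corrected Lincoln-Petersen estimator for two capture sessions, a simpler stand-in for the multi-occasion Chao Mh model used in the study. The population size reuses the study's point estimate as truth, and the capture probability is an assumption:

```python
import numpy as np

def lincoln_petersen(n1, n2, m2):
    """Chapman's bias-corrected two-sample estimator of population size:
    N_hat = (n1+1)*(n2+1)/(m2+1) - 1."""
    return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1

rng = np.random.default_rng(0)
N = 119                                  # the study's estimate, reused as truth
p_capture = 0.3                          # per-session capture probability (assumed)
s1 = rng.random(N) < p_capture           # marked in session 1
s2 = rng.random(N) < p_capture           # captured in session 2
est = lincoln_petersen(s1.sum(), s2.sum(), (s1 & s2).sum())
```

Chao Mh generalizes this by allowing capture probability to vary between individuals, the heterogeneity the abstract mentions.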

Boersen, M. R.; Clark, J. D.; King, T. L.

2003-01-01

331

Adaptive bandwidth kernel density estimation for next-generation sequencing data

Background High-throughput sequencing experiments can be viewed as measuring some sort of a "genomic signal" that may represent a biological event such as the binding of a transcription factor to the genome, locations of chromatin modifications, or even a background or control condition. Numerous algorithms have been developed to extract different kinds of information from such data. However, there has been very little focus on the reconstruction of the genomic signal itself. Such reconstructions may be useful for a variety of purposes ranging from simple visualization of the signals to sophisticated comparison of different datasets. Methods Here, we propose that adaptive-bandwidth kernel density estimators are well-suited for genomic signal reconstructions. This class of estimators is a natural extension of the fixed-bandwidth estimators that have been employed in several existing ChIP-Seq analysis programs. Results Using a set of ChIP-Seq datasets from the ENCODE project, we show that adaptive-bandwidth estimators have greater accuracy at signal reconstruction compared to fixed-bandwidth estimators, and that they have significant advantages in terms of visualization as well. For both fixed and adaptive-bandwidth schemes, we demonstrate that smoothing parameters can be set automatically using a held-out set of tuning data. We also carry out a computational complexity analysis of the different schemes and confirm through experimentation that the necessary computations can be readily carried out on a modern workstation without any significant issues.
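A sample-point adaptive (Abramson-style) estimator can be sketched as: compute a fixed-bandwidth pilot estimate, then scale each point's bandwidth by the inverse square root of the pilot density. This is an illustrative variant, not the paper's genomic-signal implementation; the mixture data and pilot bandwidth are assumptions:

```python
import numpy as np

def adaptive_kde(x, grid, h0):
    """Abramson-style adaptive KDE: fixed-bandwidth pilot estimate, then
    per-point bandwidths h_i = h0 * sqrt(g / pilot_i), g the geometric mean."""
    def kde_at(points, centers, h):
        u = (points[:, None] - centers[None, :]) / h
        return np.exp(-0.5 * u ** 2) / (h * np.sqrt(2 * np.pi))
    pilot = kde_at(x, x, h0).mean(axis=1)
    lam = np.sqrt(np.exp(np.log(pilot).mean()) / pilot)   # local scale factors
    return kde_at(grid, x, h0 * lam).mean(axis=1)

# mixture with one narrow and one broad component (assumed test data)
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 0.1, 300), rng.normal(3.0, 1.0, 300)])
grid = np.linspace(-5.0, 10.0, 1501)
pdf = adaptive_kde(x, grid, 0.3)
```

Points in dense regions get narrow kernels (preserving sharp peaks) while points in sparse regions get wide ones (suppressing spurious bumps), which is the accuracy advantage over the fixed-bandwidth scheme reported in the results.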

2013-01-01

332

On the method of logarithmic cumulants for parametric probability density function estimation.

Parameter estimation of probability density functions is one of the major steps in the area of statistical image and signal processing. In this paper we explore several properties and limitations of the recently proposed method of logarithmic cumulants (MoLC) parameter estimation approach which is an alternative to the classical maximum likelihood (ML) and method of moments (MoM) approaches. We derive the general sufficient condition for a strong consistency of the MoLC estimates which represents an important asymptotic property of any statistical estimator. This result enables the demonstration of the strong consistency of MoLC estimates for a selection of widely used distribution families originating from (but not restricted to) synthetic aperture radar image processing. We then derive the analytical conditions of applicability of MoLC to samples for the distribution families in our selection. Finally, we conduct various synthetic and real data experiments to assess the comparative properties, applicability and small sample performance of MoLC notably for the generalized gamma and K families of distributions. Supervised image classification experiments are considered for medical ultrasound and remote-sensing SAR imagery. The obtained results suggest that MoLC is a feasible and computationally fast yet not universally applicable alternative to MoM. MoLC becomes especially useful when the direct ML approach turns out to be unfeasible. PMID:23799694
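For the gamma family, MoLC reduces to two equations in the first two log-cumulants: k1 = psi(shape) + ln(scale) and k2 = psi'(shape). A self-contained sketch, with digamma/trigamma approximated by finite differences of math.lgamma; the sample size and parameters are arbitrary:

```python
import math
import numpy as np

def digamma(z, h=1e-5):
    """Central-difference approximation to psi(z) via math.lgamma."""
    return (math.lgamma(z + h) - math.lgamma(z - h)) / (2 * h)

def trigamma(z, h=1e-4):
    """Central-difference approximation to psi'(z) via math.lgamma."""
    return (math.lgamma(z + h) - 2 * math.lgamma(z) + math.lgamma(z - h)) / h ** 2

def molc_gamma(x):
    """MoLC fit of gamma(shape, scale): match the first two log-cumulants
    k1 = psi(shape) + ln(scale), k2 = psi'(shape); shape solved by bisection."""
    logx = np.log(x)
    k1, k2 = logx.mean(), logx.var()
    lo, hi = 1e-3, 1e3                   # trigamma is decreasing on (0, inf)
    for _ in range(200):
        mid = math.sqrt(lo * hi)
        if trigamma(mid) > k2:
            lo = mid
        else:
            hi = mid
    shape = math.sqrt(lo * hi)
    scale = math.exp(k1 - digamma(shape))
    return shape, scale

rng = np.random.default_rng(0)
x = rng.gamma(shape=3.0, scale=2.0, size=20000)
shape, scale = molc_gamma(x)
```

The same pattern (sample log-cumulants matched to analytic expressions) carries over to the generalized gamma and K families discussed in the paper, where the equations just become harder to invert.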

Krylov, Vladimir A; Moser, Gabriele; Serpico, Sebastiano B; Zerubia, Josiane

2013-10-01

333

Very little information is known of the recently described Microcebus tavaratra and Lepilemur milanoii in the Daraina region, a restricted area in far northern Madagascar. Since their forest habitat is highly fragmented and expected to undergo significant changes in the future, rapid surveys are essential to determine conservation priorities. Using both distance sampling and capture-recapture methods, we estimated population densities in two forest fragments. Our results are the first known density and population size estimates for both nocturnal species. In parallel, we compare density results from five different approaches, which are widely used to estimate lemur densities and population sizes throughout Madagascar. Four approaches (King, Kelker, Muller and Buckland) are based on transect surveys and distance sampling, and they differ from each other by the way the effective strip width is estimated. The fifth method relies on a capture-mark-recapture (CMR) approach. Overall, we found that the King method produced density estimates that were significantly higher than other methods, suggesting that it generates overestimates and hence overly optimistic estimates of population sizes in endangered species. The other three distance sampling methods provided similar estimates. These estimates were similar to those obtained with the CMR approach when enough recapture data were available. Given that Microcebus species are often trapped for genetic or behavioral studies, our results suggest that existing data can be used to provide estimates of population density for that species across Madagascar. PMID:22311681
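The distance-sampling estimators compared here all reduce to D = n / (2·L·ESW), differing in how the effective strip width is obtained from the observed perpendicular distances. A sketch using a half-normal detection function; the transect length, detection scale, and true density are made-up values:

```python
import numpy as np

def density_halfnormal(distances, L):
    """Line-transect density: for a half-normal detection function the
    effective strip width is ESW = sigma*sqrt(pi/2); D = n / (2*L*ESW)."""
    d = np.asarray(distances, float)
    sigma = np.sqrt(np.mean(d ** 2))        # ML estimate of the half-normal scale
    esw = sigma * np.sqrt(np.pi / 2)
    return d.size / (2 * L * esw)

# simulate a survey (all numbers are made up)
rng = np.random.default_rng(0)
true_density = 50.0                         # animals per km^2
L = 10.0                                    # total transect length, km
sig_det = 0.05                              # detection scale, km
esw_true = sig_det * np.sqrt(np.pi / 2)
n = rng.poisson(2 * L * esw_true * true_density)
dist = np.abs(rng.normal(0.0, sig_det, n))  # perpendicular detection distances
D = density_halfnormal(dist, L)
```

Methods that understate the effective strip width (the study's finding for the King method) inflate D for the same set of detections, which is why the choice of ESW estimator matters for conservation assessments.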

Meyler, Samuel Viana; Salmona, Jordi; Ibouroi, Mohamed Thani; Besolo, Aubin; Rasolondraibe, Emmanuel; Radespiel, Ute; Rabarivola, Clément; Chikhi, Lounes

2012-05-01

334

Density-based load estimation using two-dimensional finite element models: a parametric study.

A parametric investigation was conducted to determine the effects on the load estimation method of varying: (1) the thickness of back-plates used in the two-dimensional finite element models of long bones, (2) the number of columns of nodes in the outer medial and lateral sections of the diaphysis to which the back-plate multipoint constraints are applied and (3) the region of bone used in the optimization procedure of the density-based load estimation technique. The study is performed using two-dimensional finite element models of the proximal femora of a chimpanzee, gorilla, lion and grizzly bear. It is shown that the density-based load estimation can be made more efficient and accurate by restricting the stimulus optimization region to the metaphysis/epiphysis. In addition, a simple method, based on the variation of diaphyseal cortical thickness, is developed for assigning the thickness to the back-plate. It is also shown that the number of columns of nodes used as multipoint constraints does not have a significant effect on the method. PMID:17132530

Bona, Max A; Martin, Larry D; Fischer, Kenneth J

2006-08-01

335

The Effects of Surfactants on the Estimation of Bacterial Density in Petroleum Samples

NASA Astrophysics Data System (ADS)

The effect of the surfactants polyoxyethylene monostearate (Tween 60), polyoxyethylene monooleate (Tween 80), cetyl trimethyl ammonium bromide (CTAB), and sodium dodecyl sulfate (SDS) on the estimation of bacterial density (sulfate-reducing bacteria [SRB] and general anaerobic bacteria [GAnB]) was examined in petroleum samples. Three different compositions of oil and water were selected to be representative of real samples: the first contained a high content of oil, the second a medium content, and the last a low content. The most probable number (MPN) was used to estimate the bacterial density. The results showed that the addition of surfactants did not improve SRB quantification for the high or medium oil content in the petroleum samples. On the other hand, Tween 60 and Tween 80 promoted a significant increase in GAnB quantification at 0.01% or 0.03% m/v concentrations, respectively. CTAB increased SRB and GAnB estimation for the sample with a low oil content at 0.00005% and 0.0001% m/v, respectively.
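The MPN technique mentioned above can be framed as a maximum-likelihood problem: a tube inoculated with volume v turns positive with probability 1 − exp(−λv), where λ is the organism concentration. A small sketch under that standard assumption (the dilution series shown is a textbook-style example, not data from this study):

```python
import math

def mpn_estimate(volumes_ml, tubes, positives):
    """Maximum-likelihood most probable number (organisms per mL).

    volumes_ml[i] -- volume inoculated per tube at dilution i
    tubes[i]      -- number of tubes at dilution i
    positives[i]  -- number of positive tubes at dilution i
    lam is found by ternary search on the negative log-likelihood,
    which is unimodal for this model.
    """
    def nll(lam):
        total = 0.0
        for v, n, p in zip(volumes_ml, tubes, positives):
            prob = 1.0 - math.exp(-lam * v)
            prob = min(max(prob, 1e-12), 1.0 - 1e-12)  # guard the logs
            total -= p * math.log(prob) + (n - p) * math.log(1.0 - prob)
        return total

    lo, hi = math.log(1e-4), math.log(1e6)  # search on log(lam)
    for _ in range(200):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if nll(math.exp(m1)) < nll(math.exp(m2)):
            hi = m2
        else:
            lo = m1
    return math.exp(0.5 * (lo + hi))

# Classic 5-tube series at 10, 1 and 0.1 mL with 5, 3, 0 positives;
# standard MPN tables give roughly 0.79 organisms per mL for this pattern.
est = mpn_estimate([10.0, 1.0, 0.1], [5, 5, 5], [5, 3, 0])
```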

Luna, Aderval Severino; da Costa, Antonio Carlos Augusto; Gonçalves, Márcia Monteiro Machado; de Almeida, Kelly Yaeko Miyashiro

336

Estimation of graphite density and mechanical strength variation of VHTR during air-ingress accident

An air-ingress accident in a Very High Temperature Gas-Cooled Reactor (VHTR) is anticipated to cause severe changes in graphite density and mechanical strength through an oxidation process that has many side effects. However, quantitative estimations have not yet been performed. This study focuses on predicting the changes in graphite density and mechanical strength via a thermal hydraulic system analysis code. In order to analyze the change in graphite density, a simple graphite burn-off model was developed. The model is based on the similarities between a parallel electrical circuit and graphite oxidation. It was used to determine overall changes in the graphite's geometry and density. The model was validated by comparing its results to experimental data obtained at several temperatures. In the experiment, cylindrically shaped graphite specimens were oxidized in an electrical furnace and the variation of their mass was measured over time. The experiment covered temperatures between 600 °C and 900 °C. Experimental data validated the model's accuracy. Finally, the developed model, along with other comprehensive graphite oxidation models, was integrated into the VHTR system analysis code, GAMMA. The GT-MHR 600 MWt reactor was selected as a reference reactor. Based on the calculation, the main oxidation process was observed 5.5 days after the accident, following the onset of natural convection. The core maximum temperature reached 1600 °C, but never exceeded the maximum temperature criterion of 1800 °C. However, the oxidation process did significantly decrease the density of the bottom reflector, making it vulnerable to mechanical stress. The stress on the bottom reflector is greatly increased because it sustains the reactor core. The calculation proceeded until 11 days after the accident, resulting in an observed 4.5% decrease in density and a 25% reduction in mechanical strength.

Eung Soo Kim

2008-04-01

337

Mammographic breast density is a known risk factor for breast cancer. To conduct a survey to estimate the distribution of mammographic breast density in Korean women, appropriate sampling strategies for representative and efficient sampling design were evaluated through simulation. Using the target population from the National Cancer Screening Programme (NCSP) for breast cancer in 2009, we verified the distribution estimate by repeating the simulation 1,000 times using stratified random sampling to investigate the distribution of breast density of 1,340,362 women. According to the simulation results, using a sampling design stratifying the nation into three groups (metropolitan, urban, and rural), with a total sample size of 4,000, we estimated the distribution of breast density in Korean women at a level of 0.01% tolerance. Based on the results of our study, a nationwide survey for estimating the distribution of mammographic breast density among Korean women can be conducted efficiently. PMID:23167398

Jun, Jae Kwan; Kim, Mi Jin; Choi, Kui Son; Suh, Mina; Jung, Kyu-Won

2012-01-01

338

NASA Astrophysics Data System (ADS)

In this paper we discuss representations of charged particle densities in particle-in-cell simulations, analyze the sources and profiles of the intrinsic numerical noise, and present efficient methods for their removal. We devise two alternative estimation methods for the charged particle distribution which represent a significant improvement over the Monte Carlo cosine expansion used in the 2D code of Bassi et al. [G. Bassi, J. A. Ellison, K. Heinemann, and R. Warnock, Phys. Rev. ST Accel. Beams 12, 080704 (2009); G. Bassi and B. Terzić, in Proceedings of the 23rd Particle Accelerator Conference, Vancouver, Canada, 2009 (IEEE, Piscataway, NJ, 2009), TH5PFP043], designed to simulate coherent synchrotron radiation (CSR) in charged particle beams. The improvement is achieved by employing an alternative beam density estimation to the Monte Carlo cosine expansion. The representation is first binned onto a finite grid, after which two grid-based methods are employed to approximate particle distributions: (i) truncated fast cosine transform; and (ii) thresholded wavelet transform (TWT). We demonstrate that these alternative methods represent a staggering upgrade over the original Monte Carlo cosine expansion in terms of efficiency, while the TWT approximation also provides an appreciable improvement in accuracy. The improvement in accuracy comes from a judicious removal of the numerical noise enabled by the wavelet formulation. The TWT method is then integrated into the CSR code [G. Bassi, J. A. Ellison, K. Heinemann, and R. Warnock, Phys. Rev. ST Accel. Beams 12, 080704 (2009)], and benchmarked against the original version. We show that the new density estimation method provides a superior performance in terms of efficiency and spatial resolution, thus enabling high-fidelity simulations of CSR effects, including microbunching instability.
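The TWT idea described here, hard-thresholding the detail coefficients of a wavelet decomposition and inverting, can be illustrated in one dimension. The sketch below uses a plain orthonormal Haar transform on power-of-two-length signals; it is not the authors' 2D implementation:

```python
import numpy as np

def haar_step(x):
    # one level of the orthonormal Haar transform
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)  # approximation band
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)  # detail band
    return a, d

def ihaar_step(a, d):
    x = np.empty(a.size * 2)
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def twt_denoise(signal, thresh):
    # full Haar decomposition; hard-threshold every detail band; invert
    a = np.asarray(signal, dtype=float)
    details = []
    while a.size > 1:
        a, d = haar_step(a)
        details.append(np.where(np.abs(d) >= thresh, d, 0.0))
    for d in reversed(details):
        a = ihaar_step(a, d)
    return a
```

Coefficients below the threshold are treated as noise and discarded; a constant signal passes through unchanged, while small fluctuations are flattened to the mean.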

Terzić, Balša; Bassi, Gabriele

2011-07-01

340

Axonal and dendritic density field estimation from incomplete single-slice neuronal reconstructions

Neuronal information processing in cortical networks critically depends on the organization of synaptic connectivity. Synaptic connections can form when axons and dendrites come in close proximity of each other. The spatial innervation of neuronal arborizations can be described by their axonal and dendritic density fields. Recently we showed that potential locations of synapses between neurons can be estimated from their overlapping axonal and dendritic density fields. However, deriving density fields from single-slice neuronal reconstructions is hampered by incompleteness because of cut branches. Here, we describe a method for recovering the lost axonal and dendritic mass. This so-called completion method is based on an estimation of the mass inside the slice and an extrapolation to the space outside the slice, assuming axial symmetry in the mass distribution. We validated the method using a set of neurons generated with our NETMORPH simulator. The model-generated neurons were artificially sliced and subsequently recovered by the completion method. Depending on slice thickness and arbor extent, branches that have lost their outside parents (orphan branches) may occur inside the slice. No longer connected to the contiguous structure of the sliced neuron, orphan branches result in an underestimation of neurite mass. For 300 μm thick slices, however, the validation showed a full recovery of dendritic and an almost full recovery of axonal mass. The completion method was applied to three experimental data sets of reconstructed rat cortical L2/3 pyramidal neurons. The results showed that in 300 μm thick slices intracortical axons lost about 50% and dendrites about 16% of their mass. The completion method can be applied to single-slice reconstructions as long as axial symmetry can be assumed in the mass distribution. This opens up the possibility of using incomplete neuronal reconstructions from open-access databases to determine population mean mass density fields.

van Pelt, Jaap; van Ooyen, Arjen; Uylings, Harry B. M.

2014-01-01

341

A reliable simple method to estimate density of nitroaliphatics, nitrate esters and nitramines.

In this work, a new simple method is presented to estimate the crystal density of three important classes of explosives: nitroaliphatics, nitrate esters and nitramines. This method allows reliable prediction of detonation performance for the above compounds. It uses a new general correlation containing important explosive parameters such as the numbers of carbon, hydrogen and nitrogen atoms and two other structural parameters. The predicted results are compared to the results of the best available methods for different families of energetic compounds. The method is also tested for various explosives with complex molecular structures. It is shown that the predicted results are more reliable than those of the best well-developed simple methods. PMID:19442437

Keshavarz, Mohammad Hossein; Pouretedal, Hamid Reza

2009-09-30

342

Probability density function estimation of laser light scintillation via Bayesian mixtures.

A method for probability density function (PDF) estimation using Bayesian mixtures of weighted gamma distributions, called the Dirichlet process gamma mixture model (DP-GaMM), is presented and applied to the analysis of a laser beam in turbulence. The problem is cast in a Bayesian setting, with the mixture model itself treated as a random process. A stick-breaking interpretation of the Dirichlet process is employed as the prior distribution over the random mixture model. The number and underlying parameters of the gamma distribution mixture components as well as the associated mixture weights are learned directly from the data during model inference. A hybrid Metropolis-Hastings and Gibbs sampling parameter inference algorithm is developed and presented in its entirety. Results on several sets of controlled data are shown, and comparisons of PDF estimation fidelity are conducted with favorable results. PMID:24690656
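The stick-breaking construction used here as the prior over the mixture can be sketched in a few lines: sticks beta_k ~ Beta(1, alpha) are broken off the remaining probability mass, giving weights w_k = beta_k · prod_{j<k}(1 − beta_j). The truncation level and seed below are arbitrary illustrative choices:

```python
import random

def stick_breaking_weights(alpha, n_components, seed=0):
    """Truncated stick-breaking draw of Dirichlet-process mixture weights.

    Smaller alpha concentrates mass on the first few components;
    larger alpha spreads it over many.
    """
    rng = random.Random(seed)
    weights, remaining = [], 1.0
    for _ in range(n_components - 1):
        beta = rng.betavariate(1.0, alpha)  # fraction of the remaining stick
        weights.append(beta * remaining)
        remaining *= 1.0 - beta
    weights.append(remaining)  # leftover mass closes the truncation
    return weights
```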

Wang, Eric X; Avramov-Zamurovic, Svetlana; Watkins, Richard J; Nelson, Charles; Malek-Madani, Reza

2014-03-01

343

A wavelet-based evaluation of time-varying long memory of equity markets: A paradigm in crisis

NASA Astrophysics Data System (ADS)

This study uses a wavelet-based method to investigate the dynamics of long memory in the returns and volatility of equity markets. In a sample of five developed and five emerging markets we find that the daily return series from January 1988 to June 2013 may be considered as a mix of weak long memory and mean-reverting processes. In the case of volatility of the returns, there is evidence of long memory, which is stronger in emerging markets than in developed markets. We find that although the long memory parameter may vary during crisis periods (the 1997 Asian financial crisis, the 2001 US recession and the 2008 subprime crisis), the direction of change may not be consistent across all equity markets. The degree of return predictability is likely to diminish during crisis periods. Robustness of the results is checked with a de-trended fluctuation analysis approach.

Tan, Pei P.; Chin, Cheong W.; Galagedera, Don U. A.

2014-09-01

344

NASA Astrophysics Data System (ADS)

This paper presents a wavelet-based multifractal approach to characterize the statistical properties of the temporal distribution of the 1982-2012 seismic activity at Mammoth Mountain volcano. The fractal analysis of the time-occurrence series of seismicity has been carried out in relation to the seismic swarm associated with a magmatic intrusion beneath the volcano on 4 May 1989. We used the wavelet transform modulus maxima based multifractal formalism to obtain the multifractal characteristics of seismicity before, during, and after the unrest. The results revealed that the earthquake sequences across the study area show time-scaling features. It is clearly perceived that the multifractal characteristics are not constant across the different periods and that there are differences among the seismicity sequences. The attributes of the singularity spectrum have been utilized to determine the complexity of seismicity for each period. Findings show that the temporal distribution of earthquakes in the swarm period was simpler with respect to the pre- and post-swarm periods.

Zamani, Ahmad; Kolahi Azar, Amir Pirouz; Safavi, Ali Akbar

2014-06-01

345

Heart Rate Variability and Wavelet-based Studies on ECG Signals from Smokers and Non-smokers

NASA Astrophysics Data System (ADS)

The current study deals with heart rate variability (HRV) and wavelet-based ECG signal analysis of smokers and non-smokers. The results of HRV indicated dominance of sympathetic nervous system activity in smokers. The heart rate was found to be higher in smokers as compared to non-smokers (p < 0.05). The frequency domain analysis showed an increase in the LF and LF/HF components with a subsequent decrease in the HF component. The HRV features were analyzed for classification of the smokers from the non-smokers. The results indicated that when the RMSSD, SD1 and RR-mean features were used concurrently, a classification efficiency of >90% was achieved. The wavelet decomposition of the ECG signal was done using the Daubechies (db6) wavelet family. No difference was observed between the smokers and non-smokers, which apparently suggests that smoking does not affect the conduction pathway of the heart.
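Two of the features the classification relies on, RMSSD and SD1, are straightforward to compute from a list of RR intervals; SD1 of the Poincaré plot equals RMSSD/√2. A minimal sketch (the sample intervals below are made up, not study data):

```python
import math

def rmssd(rr_ms):
    # root mean square of successive RR-interval differences (ms)
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

def sd1(rr_ms):
    # Poincare-plot short-axis dispersion; equals RMSSD / sqrt(2)
    return rmssd(rr_ms) / math.sqrt(2.0)

# Illustrative RR series in milliseconds:
rr = [800.0, 810.0, 790.0, 805.0]
features = (rmssd(rr), sd1(rr), sum(rr) / len(rr))  # RMSSD, SD1, RR-mean
```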

Pal, K.; Goel, R.; Champaty, B.; Samantray, S.; Tibarewala, D. N.

2013-12-01

346

In this paper, a wavelet-based approximation method is introduced for solving the Newell-Whitehead (NW) and Allen-Cahn (AC) equations. To the best of our knowledge, no rigorous Legendre wavelet solution has been reported for the NW and AC equations until now. The highest derivative in the differential equation is expanded into a Legendre series, and this approximation is integrated while the boundary conditions are applied using integration constants. With the help of Legendre wavelet operational matrices, the aforesaid equations are converted into an algebraic system. Block pulse functions are used to investigate the Legendre wavelet coefficient vectors of the nonlinear terms. The convergence of the proposed method is proved. Finally, we give some numerical examples to demonstrate the validity and applicability of the method. PMID:24599524

Hariharan, G

2014-05-01

347

NASA Astrophysics Data System (ADS)

Real-time flat panel detector-based cone beam CT breast imaging (FPD-CBCTBI) has attracted increasing attention for its merits of early detection of small breast cancerous tumors, 3-D diagnosis, and treatment planning with glandular dose levels not exceeding those of conventional film-screen mammography. In this research, our motivation is to further reduce the x-ray exposure level for the cone beam CT scan while retaining acceptable image quality for medical diagnosis by applying efficient denoising techniques. In this paper, the wavelet-based multiscale anisotropic diffusion algorithm is applied to cone beam CT breast imaging denoising. Experimental results demonstrate that the denoising algorithm is very efficient for cone beam CT breast imaging in terms of noise reduction and edge preservation. The denoising results indicate that in clinical applications of cone beam CT breast imaging, the patient's radiation dose can be reduced by up to 60% while obtaining acceptable image quality for diagnosis.

Zhong, Junmei; Ning, Ruola; Conover, David L.

2004-05-01

348

NASA Technical Reports Server (NTRS)

Future space-based, remote sensing systems will have data transmission requirements that exceed available downlinks, necessitating the use of lossy compression techniques for multispectral data. In this paper, we describe several algorithms for lossy compression of multispectral data which combine spectral decorrelation techniques with an adaptive, wavelet-based, image compression algorithm to exploit both spectral and spatial correlation. We compare the performance of several different spectral decorrelation techniques including wavelet transformation in the spectral dimension. The performance of each technique is evaluated at compression ratios ranging from 4:1 to 16:1. Performance measures used are visual examination, conventional distortion measures, and multispectral classification results. We also introduce a family of distortion metrics that are designed to quantify and predict the effect of compression artifacts on multispectral classification of the reconstructed data.

Matic, Roy M.; Mosley, Judith I.

1994-01-01

349

Constrained Kalman Filtering Via Density Function Truncation for Turbofan Engine Health Estimation

NASA Technical Reports Server (NTRS)

Kalman filters are often used to estimate the state variables of a dynamic system. However, in the application of Kalman filters some known signal information is often either ignored or dealt with heuristically. For instance, state variable constraints (which may be based on physical considerations) are often neglected because they do not fit easily into the structure of the Kalman filter. This paper develops an analytic method of incorporating state variable inequality constraints in the Kalman filter. The resultant filter truncates the PDF (probability density function) of the Kalman filter estimate at the known constraints and then computes the constrained filter estimate as the mean of the truncated PDF. The incorporation of state variable constraints increases the computational effort of the filter but significantly improves its estimation accuracy. The improvement is demonstrated via simulation results obtained from a turbofan engine model. The turbofan engine model contains 3 state variables, 11 measurements, and 10 component health parameters. It is also shown that the truncated Kalman filter may be a more accurate way of incorporating inequality constraints than other constrained filters (e.g., the projection approach to constrained filtering).
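The core operation of the truncation approach, replacing the unconstrained estimate by the mean of the Gaussian PDF restricted to the constraint interval, has a closed form: with α = (a − μ)/σ and β = (b − μ)/σ, the constrained mean is μ + σ(φ(α) − φ(β))/(Φ(β) − Φ(α)). A one-dimensional sketch of that formula (not the paper's turbofan implementation):

```python
import math

def norm_pdf(x):
    # standard normal density
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def norm_cdf(x):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def truncated_mean(mu, sigma, lo, hi):
    """Mean of a N(mu, sigma^2) PDF truncated to [lo, hi] --
    the constrained estimate used in PDF-truncation filtering."""
    a = (lo - mu) / sigma
    b = (hi - mu) / sigma
    z = norm_cdf(b) - norm_cdf(a)  # probability mass inside [lo, hi]
    return mu + sigma * (norm_pdf(a) - norm_pdf(b)) / z
```

Symmetric truncation leaves the estimate unchanged; a one-sided constraint pulls it into the feasible region.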

Simon, Dan; Simon, Donald L.

2006-01-01

350

An effective wavelet-based multigrid preconditioned conjugate gradient method is developed to solve large electromagnetic matrix problems for millimeter wave scattering applications. By using a wavelet transformation we restrict the large matrix equation to a relatively smaller one, which can be solved rapidly. The solution is then prolonged to serve as an improved starting point for the conjugate gradient (CG) method. Numerical results show

R. S. Chen; D. G. Fang; K. F. Tsang; E.K.N. Yung

2000-01-01

351

Extracting reliable image edge information is crucial for active contour models as well as vascular segmentation in magnetic resonance angiography (MRA). However, conventional edge detection techniques, such as gradient-based methods and wavelet-based methods, are incapable of returning reliable detection responses from low contrast edges in the images. In this paper, we propose a novel edge detection method

Zhenyu He; Albert C. S. Chung

2010-01-01

352

Enhanced State Estimation using Multiscale Kalman Filtering

Multiscale wavelet-based representation of data has shown great noise removal abilities when used in data filtering. In this paper, a multiscale Kalman filtering (MSKF) algorithm is developed, in which the filtering advantages of multiscale representation are combined with those of the Kalman filter to further enhance its estimation performance. The MSKF algorithm relies on representing the data at multiple scales

M. N. Nounou

2006-01-01

353

Age structure data is essential for single species stock assessments, but length-frequency data can provide complementary information. In south-western Australia, the majority of these data for exploited species are derived from line caught fish. However, baited remote underwater stereo-video systems (stereo-BRUVS) surveys have also been found to provide accurate length measurements. Given that line fishing tends to be biased towards larger fish, we predicted that stereo-BRUVS would yield length-frequency data with a smaller mean length and skewed towards smaller fish compared with that collected by fisheries-independent line fishing. To assess the biases and selectivity of stereo-BRUVS and line fishing we compared the length-frequencies obtained for three commonly fished species, using a novel application of the Kernel Density Estimate (KDE) method and the established Kolmogorov–Smirnov (KS) test. The shape of the length-frequency distribution obtained for the labrid Choerodon rubescens by stereo-BRUVS and line fishing did not differ significantly, but, as predicted, the mean length estimated from stereo-BRUVS was 17% smaller. Contrary to our predictions, the mean length and shape of the length-frequency distribution for the epinephelid Epinephelides armatus did not differ significantly between line fishing and stereo-BRUVS. For the sparid Pagrus auratus, the length frequency distribution derived from the stereo-BRUVS method was bi-modal, while that from line fishing was uni-modal. However, the location of the first modal length class for P. auratus observed by each sampling method was similar. No differences were found between the results of the KS and KDE tests; however, KDE provided a data-driven method for approximating length-frequency data to a probability function and a useful way of describing and testing any differences between length-frequency samples. This study found the overall size selectivity of line fishing and stereo-BRUVS was unexpectedly similar.
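A fixed-bandwidth Gaussian KDE of the kind underlying such length-frequency comparisons can be sketched as follows (the bandwidth choice and data are illustrative; the study's actual KDE test uses a data-driven bandwidth and a formal comparison of the resulting curves):

```python
import math

def gaussian_kde(data, bandwidth):
    # classic fixed-bandwidth Gaussian kernel density estimate:
    # f(x) = (1 / (n*h*sqrt(2*pi))) * sum_i exp(-0.5*((x - x_i)/h)^2)
    n = len(data)
    norm = 1.0 / (n * bandwidth * math.sqrt(2.0 * math.pi))
    def density(x):
        return norm * sum(math.exp(-0.5 * ((x - xi) / bandwidth) ** 2)
                          for xi in data)
    return density

# Illustrative: smooth a handful of fish lengths (cm) into a density curve.
f = gaussian_kde([25.0, 27.0, 30.0, 31.0, 40.0], bandwidth=2.0)
```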

Langlois, Timothy J.; Fitzpatrick, Benjamin R.; Fairclough, David V.; Wakefield, Corey B.; Hesp, S. Alex; McLean, Dianne L.; Harvey, Euan S.; Meeuwig, Jessica J.

2012-01-01

354

We compare a volumetric versus an area-based breast density estimation method in digital mammography. Bilateral images from 71 asymptomatic women were analyzed. Volumetric density was measured using Quantra™ (Hologic Inc.). Area-based density was estimated using Cumulus (Ver. 4.0, Univ. Toronto). Correlation and regression analysis was performed to determine the association between i) density from left versus right breasts and

Despina Kontos; Predrag R. Bakic; Raymond J. Acciavatti; Emily F. Conant; Andrew D. A. Maidment

2010-01-01

355

NSDL National Science Digital Library

This web page introduces the concepts of density and buoyancy. The discovery in ancient Greece by Archimedes is described. The densities of various materials are given and temperature effects introduced. Links are provided to news and other resources related to mass density. This is part of the Vision Learning collection of short online modules covering topics in a broad range of science and math topics.

Day, Martha M.

2008-05-26

356

Two methods for monitoring of grassland vegetation were compared: visual estimation of plant cover (C) and plant density counting (D). C and D were performed at monthly intervals for three vegetation growing seasons after imposing different grazing regimes on abandoned grassland in 1998. Species scores obtained from paired redundancy analyses (RDA) of C and D data were compared, and Spearman's rank correlations were used to show whether the two methods give comparable results. Results of C and D were highly correlated in the first two growing seasons only. In the third season, the correlation was substantially lower as the sward structure was more heterogeneous due to the creation of differently defoliated patches, especially under extensive grazing. The presence of the same plant species with different habit in frequently and in infrequently grazed patches reduced the significance of Spearman's rank correlations. Cover estimation can fully substitute for plant density counting only in grassland with a lower proportion of frequently and infrequently grazed patches, and caution should be used when comparing different management regimes in long-term analyses. PMID:18787965

Pavlů, V V; Hejcman, M; Mikulka, J

2009-09-01

357

Nosocomial infections (NIs) - those acquired in health care settings - represent one of the major causes of increased mortality in hospitalized patients. As they are a real problem for both patients and health authorities, the development of an effective surveillance system to monitor and detect them is of paramount importance. This paper presents a retrospective analysis of a prevalence survey of NIs done in the Geneva University Hospital. The objective is to identify patients with one or more NIs based on clinical and other data collected during the survey. In this classification task, the main difficulty lies in the significant imbalance between positive and negative cases. To overcome this problem, we investigate one-class Parzen density estimator which can be trained to differentiate two classes taking examples from a single class. The results obtained are encouraging: whereas standard 2-class SVMs scored a baseline sensitivity of 50.6% on this problem, the one-class approach increased sensitivity to as much as 88.6%. These results suggest that one-class Parzen density estimator can provide an effective and efficient way of overcoming data imbalance in classification problems. PMID:18487702
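A one-class Parzen detector of the sort described here fits a kernel density estimate on a single class and flags low-density samples as outliers (here, candidate infection cases). A minimal one-dimensional sketch (class name, bandwidth and threshold are arbitrary illustrations, not the paper's configuration):

```python
import math

class OneClassParzen:
    """One-class Parzen density estimator: fit a Gaussian-window KDE on
    training samples from one class only, and flag a new sample as an
    outlier when its estimated density falls below a threshold."""

    def __init__(self, bandwidth, threshold):
        self.h = bandwidth
        self.t = threshold
        self.train = []

    def fit(self, samples):
        self.train = list(samples)
        return self

    def density(self, x):
        n, h = len(self.train), self.h
        s = sum(math.exp(-0.5 * ((x - xi) / h) ** 2) for xi in self.train)
        return s / (n * h * math.sqrt(2.0 * math.pi))

    def is_outlier(self, x):
        return self.density(x) < self.t

# Illustrative use: train on "normal" values, flag a far-away point.
model = OneClassParzen(bandwidth=0.5, threshold=0.05).fit([0.0, 0.2, -0.2, 0.1])
```

Training on one class only is what makes the approach attractive for heavily imbalanced problems like this one: the rare positive class never has to be modeled directly.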

Cohen, Gilles; Sax, Hugo; Geissbuhler, Antoine

2008-01-01

358

Effect of bone density on body composition estimates in young adult black and white women.

Bone mineral content (BMC) and density (BMD) by dual x-ray absorptiometry, total body water (TBW) by the deuterium oxide (D2O) dilution technique, and body density (Bd) by hydrostatic weighing were measured in 26 black (B) and 26 white (W) young adult women. Both groups were similar in age, height, weight, and total skinfolds; however, black subjects had significantly higher BMC and BMD. Formulas to estimate percent body fat (%BF) from Bd included Siri's two-component equation for the reference man, which assumes a fat free body density (FFBd) of 1.100 g.ml-1, and an adjusted two-component formula that assumes a lower FFBd of 1.095 g.ml-1. Percent body fat was also predicted from TBW and by several multicomponent models that corrected for individual subject variation in measured BMC and TBW. The two groups did not differ significantly in %BF predictions by any of the methods. However, the difference in %BF between the groups was halved with the four-component model (B = 21.9%; W = 23.6%) as compared with that calculated from the Siri two-component densitometric model (B = 21.2%; W = 24.2%). Within each racial group, %BF was not significantly different when predicted by two-component or multicomponent models. However, %BF of individuals with the highest and lowest BMD was substantially under- and overpredicted, respectively, by Siri's equation.(ABSTRACT TRUNCATED AT 250 WORDS) PMID:8450735
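Siri's two-component equation follows from assumed fat and fat-free body densities: with fat at 0.900 g/mL and FFBd at 1.100 g/mL it reduces to %BF = 495/Bd − 450, and lowering FFBd to 1.095 g/mL (keeping the standard fat density, an assumption of this sketch) yields the adjusted variant discussed above:

```python
def percent_body_fat(bd, ffb_density=1.100, fat_density=0.900):
    """Two-component percent body fat from whole-body density bd (g/mL).

    %BF = 100 * (c1 / bd - c2), where
    c1 = fat*FFB/(FFB - fat) and c2 = fat/(FFB - fat).
    Defaults reproduce Siri's equation, %BF = 495/bd - 450;
    ffb_density=1.095 gives the lower-FFBd adjusted variant.
    """
    c1 = fat_density * ffb_density / (ffb_density - fat_density)
    c2 = fat_density / (ffb_density - fat_density)
    return 100.0 * (c1 / bd - c2)
```

This makes the abstract's point concrete: a subject whose true FFBd exceeds the assumed value (e.g. because of high bone mineral density) has their %BF underpredicted by the default constants, and vice versa.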

Côté, K D; Adams, W C

1993-02-01

359

NASA Astrophysics Data System (ADS)

In the open ocean, sea level variability is primarily steric in origin. Steric sea level is given by the depth integral of the density field, raising the question of how tide gauges, which are situated in very shallow water, feel deep ocean variability. Here this question is examined in a high-resolution global ocean model. By considering a series of assumptions we show that if we wish to reconstruct coastal sea level using only local density information, then the best assumption we can make is one of no horizontal pressure gradient, and therefore no geostrophic flow, at the seafloor. Coastal sea level can then be determined using density at the ocean's floor. When attempting to discriminate between mass and volume components of sea level measured by tide gauges, the conventional approach is to take steric height at deep-ocean sites close to the tide gauges as an estimate of the steric component. We find that with steric height computed at 3000 m this approach only works well in the equatorial band of the Atlantic and Pacific eastern boundaries. In most cases the steric correction can be improved by calculating steric height closer to shore, with the best results obtained in the depth range 500-1000 m. Yet, for western boundaries, large discrepancies remain. Our results therefore suggest that on time scales up to about 5 years, and perhaps longer, the presence of boundary currents means that the conventional steric correction to tide gauges may not be valid in many places.
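Steric sea level is the depth integral of the density anomaly, η = −(1/ρ0)∫(ρ − ρ0) dz. A discrete sketch with uniform layers (the reference density and layer thickness below are illustrative choices, not values from the study):

```python
def steric_height(layer_densities, dz, rho0=1025.0):
    """Steric sea level (m) from a density profile:
    eta = -(1/rho0) * sum over layers of (rho - rho0) * dz,
    a simple Riemann-sum approximation of the depth integral.

    layer_densities -- in-situ density per layer, kg/m^3, surface to bottom
    dz              -- uniform layer thickness, m
    rho0            -- reference density, kg/m^3
    """
    return -sum((rho - rho0) * dz for rho in layer_densities) / rho0

# Illustrative: a uniformly 1 kg/m^3 denser water column over 1000 m
# depresses sea level by roughly a metre.
eta = steric_height([1026.0] * 10, dz=100.0)
```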

Bingham, R. J.; Hughes, C. W.

2012-01-01

360

NASA Astrophysics Data System (ADS)

We compare electron density predictions of the International Reference Ionosphere (IRI-2007) model with in situ measurements from the CHAMP and GRACE satellites for the years 2005 to 2010 over the subauroral regions. Electron densities between 58° and 68° Mlat are considered. The trough-region Ne peaks during local summer and reaches its valley during local winter. Around -100°E and 60°E, two large electron density sector features can be seen in both hemispheres during all three seasons, attributed to electrons extending from middle latitudes into the trough region. From 2005 to the beginning of 2010, the model overestimates the trough-region Ne by 20% on average, and the decrease of Ne in this region during the last solar minimum can also be seen. In the southern hemisphere, the model prediction agrees quite well with the observations during all three seasons, while the large difference between observations and model estimates implies that the IRI-2007 model needs significant improvement to better predict the trough region in the northern hemisphere.

Xiong, C.; Lühr, H.; Ma, S. Y.

2013-02-01

361

Detection of dysphonia is useful for monitoring the progression of phonatory impairment for patients with Parkinson's disease (PD), and also helps assess the disease severity. This paper describes statistical pattern analysis methods to study different vocal measurements of sustained phonations. The feature dimension reduction procedure was implemented using the sequential forward selection (SFS) and kernel principal component analysis (KPCA) methods. Four selected vocal measures were projected by the KPCA onto the bivariate feature space, in which the class-conditional feature densities can be approximated with the nonparametric kernel density estimation technique. In the vocal pattern classification experiments, Fisher's linear discriminant analysis (FLDA) was applied to perform the linear classification of voice records for healthy control subjects and PD patients, and the maximum a posteriori (MAP) decision rule and support vector machine (SVM) with radial basis function kernels were employed for the nonlinear classification tasks. Based on the KPCA-mapped feature densities, the MAP classifier successfully distinguished 91.8% of voice records, with a sensitivity rate of 0.986, a specificity rate of 0.708, and an area under the receiver operating characteristic (ROC) curve of 0.94. The diagnostic performance provided by the MAP classifier was superior to those of the FLDA and SVM classifiers. In addition, the classification results indicated that dysphonia detection is insensitive to gender, and that the sustained phonations of PD patients with minimal functional disability are more difficult to identify correctly. PMID:24586406
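For illustration only, the kernel density estimation and MAP decision rule mentioned above can be sketched as follows (Gaussian kernels, a fixed bandwidth and equal priors are assumptions made here; the paper operates on KPCA-mapped bivariate features):

```python
import numpy as np

def gaussian_kde(train, bandwidth):
    """Return a function estimating the density of d-dim points
    with an isotropic Gaussian kernel of the given bandwidth."""
    train = np.atleast_2d(train)
    d = train.shape[1]
    norm = (2 * np.pi * bandwidth**2) ** (d / 2)
    def pdf(x):
        x = np.atleast_2d(x)
        # pairwise squared distances between query and training points
        sq = ((x[:, None, :] - train[None, :, :]) ** 2).sum(-1)
        return np.exp(-sq / (2 * bandwidth**2)).mean(axis=1) / norm
    return pdf

def map_classify(x, pdfs, priors):
    """MAP rule: pick the class maximizing prior * class-conditional density."""
    scores = np.stack([p * f(x) for f, p in zip(pdfs, priors)], axis=1)
    return scores.argmax(axis=1)
```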

Yang, Shanshan; Zheng, Fang; Luo, Xin; Cai, Suxian; Wu, Yunfeng; Liu, Kaizhi; Wu, Meihong; Chen, Jian; Krishnan, Sridhar

2014-01-01

362

How Does Spatial Study Design Influence Density Estimates from Spatial Capture-Recapture Models?

When estimating population density from data collected on non-invasive detector arrays, recently developed spatial capture-recapture (SCR) models present an advance over non-spatial models by accounting for individual movement. While these models should be more robust to changes in trapping designs, they have not been well tested. Here we investigate how the spatial arrangement and size of the trapping array influence parameter estimates for SCR models. We analysed black bear data collected with 123 hair snares with an SCR model accounting for differences in detection and movement between sexes and across the trapping occasions. To see how the size of the trap array and trap dispersion influence parameter estimates, we repeated the analysis for data from subsets of traps: 50% chosen at random, 50% in the centre of the array and 20% in the south of the array. Additionally, we simulated and analysed data under a suite of trap designs and home range sizes. In the black bear study, we found that results were similar across trap arrays, except when only 20% of the array was used. Black bear density was approximately 10 individuals per 100 km2. Our simulation study showed that SCR models performed well as long as the extent of the trap array was similar to or larger than the extent of individual movement during the study period, and movement was at least half the distance between traps. SCR models performed well across a range of spatial trap setups and animal movements. Contrary to non-spatial capture-recapture models, they do not require the trapping grid to cover an area several times the average home range of the studied species. This renders SCR models more appropriate for the study of wide-ranging mammals and more flexible for designing studies targeting multiple species.

Sollmann, Rahel; Gardner, Beth; Belant, Jerrold L.

2012-01-01

363

TreeCol: a novel approach to estimating column densities in astrophysical simulations

NASA Astrophysics Data System (ADS)

We present TreeCol, a new and efficient tree-based scheme to calculate column densities in numerical simulations. Knowing the column density in any direction at any location in space is a prerequisite for modelling the propagation of radiation through the computational domain. TreeCol therefore forms the basis for a fast, approximate method for modelling the attenuation of radiation within large numerical simulations. It constructs a HEALPIX sphere at any desired location and accumulates the column density by walking the tree and by adding up the contributions from all tree nodes whose line of sight contributes to the pixel under consideration. In particular, when combined with widely used tree-based gravity solvers, the new scheme requires little additional computational cost. In a simulation with N resolution elements, the computational cost of TreeCol scales as N log N, instead of the N^(5/3) scaling of most other radiative transfer schemes. TreeCol is naturally adaptable to arbitrary density distributions and is easy to implement and to parallelize, particularly if a tree structure is already in place for calculating the gravitational forces. We describe our new method and its implementation into the smoothed particle hydrodynamics (SPH) code GADGET2 (although note that the scheme is not limited to particle-based fluid dynamics). We discuss its accuracy and performance characteristics for the examples of a spherical protostellar core and for the turbulent interstellar medium. We find that the column density estimates provided by TreeCol are on average accurate to better than 10 per cent. In another application, we compute the dust temperatures for solar neighbourhood conditions and compare with the result of a full-fledged Monte Carlo radiation-transfer calculation. We find that both methods give similar answers.
We conclude that TreeCol provides a fast, easy to use and sufficiently accurate method of calculating column densities that comes with little additional computational cost when combined with an existing tree-based gravity solver.

Clark, Paul C.; Glover, Simon C. O.; Klessen, Ralf S.

2012-02-01

364

Clarification of the carbon content characteristics of tropical peatlands, and of their spatial variability in density, is needed for more accurate estimates of the C pools and a more detailed understanding of the C cycle. In this study, the C density characteristics of different peatland types and at various depths within tropical peats in Central Kalimantan were analyzed. The peatland types and the land cover

Sawahiko Shimada; Hidenori Takahashi; Akira Haraguchi; Masami Kaneko

2001-01-01

365

In this paper, a H-infinity fault detection and diagnosis (FDD) scheme for a class of discrete nonlinear system fault using output probability density estimation is presented. Unlike classical FDD problems, the measured output of the system is viewed as a stochastic process and its square root probability density function (PDF) is modeled with B-spline functions, which leads to a deterministic

Yumin Zhang; Qing-Guo Wang; Kai-Yew Lum

2009-01-01

366

The photoplethysmogram (PPG) obtained from pulse oximetry measures local variations of blood volume in tissues, reflecting the peripheral pulse modulated by heart activity, respiration and other physiological effects. We propose an algorithm based on the correntropy spectral density (CSD) as a novel way to estimate respiratory rate (RR) and heart rate (HR) from the PPG. Time-varying CSD, a technique particularly well-suited for modulated signal patterns, is applied to the PPG. The respiratory and cardiac frequency peaks detected at extended respiratory (8 to 60 breaths/min) and cardiac (30 to 180 beats/min) frequency bands provide RR and HR estimations. The CSD-based algorithm was tested against the Capnobase benchmark dataset, a dataset from 42 subjects containing PPG and capnometric signals and expert-labeled reference RR and HR. The RR and HR estimation accuracy was assessed using the unnormalized root mean square (RMS) error. We investigated two window sizes (60 and 120 s) on the Capnobase calibration dataset to explore the time resolution of the CSD-based algorithm. A longer window decreases the RR error: for 120-s windows, the median RMS error (quartiles) obtained for RR was 0.95 (0.27, 6.20) breaths/min and for HR was 0.76 (0.34, 1.45) beats/min. Our experiments show that in addition to a high degree of accuracy and robustness, the CSD facilitates simultaneous and efficient estimation of RR and HR. Providing RR every minute expands the functionality of pulse oximeters and provides additional diagnostic power to this non-invasive monitoring tool. PMID:24466088
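As a rough sketch of the underlying quantity (not the authors' time-varying algorithm), the correntropy function with a Gaussian kernel can be computed over lags and Fourier-transformed to locate a dominant frequency; the kernel width and lag count below are arbitrary choices:

```python
import numpy as np

def correntropy(x, max_lag, sigma):
    """Correntropy V[m] = mean_n exp(-(x[n] - x[n+m])^2 / (2 sigma^2))."""
    v = np.empty(max_lag)
    for m in range(max_lag):
        d = x[:len(x) - m or None] - x[m:]
        v[m] = np.exp(-d**2 / (2 * sigma**2)).mean()
    return v

def csd_peak(x, max_lag=400, sigma=1.0):
    """Dominant frequency (cycles/sample) of the centered correntropy spectrum."""
    v = correntropy(x, max_lag, sigma)
    v -= v.mean()                      # center to suppress the DC component
    spec = np.abs(np.fft.rfft(v))
    freqs = np.fft.rfftfreq(max_lag)
    return freqs[spec.argmax()]
```

In the paper's setting the peak search would be restricted to the respiratory and cardiac frequency bands rather than taken over the whole spectrum.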

Garde, Ainara; Karlen, Walter; Ansermino, J Mark; Dumont, Guy A

2014-01-01

367

A New Estimate of the Star Formation Rate Density in the HDFN

NASA Astrophysics Data System (ADS)

We measured the evolution of SFRD in the HDFN by comparing the available multi-color information on galaxy SEDs with a library of model fluxes, provided by the codes of Bruzual & Charlot (1993, ApJ 405, 538) and Leitherer et al. (1999, ApJS 123, 3). For each HDFN galaxy the best fitting template was used to estimate the redshift, the amount of dust obscuration and the un-reddened UV density at 1500 Å. The results are plotted in the figure, where a realistic estimate of the errors was obtained by considering the effects of field-to-field variations (Fontana et al., 1999, MNRAS, 310L). We did not correct for sample incompleteness, and the corrections for dust absorption in the estimates of Connolly et al. (1997, ApJ 486, 11L; C97) and Madau et al. (1998, ApJ 498, 106; M98) were calculated according to Steidel et al. (1999, ApJ 519, 1; S99). Our measured points show a peak at z ˜ 3, being consistent with those measured, in the same z interval, from rest-frame FIR emission (Barger et al., 2000, AJ 119, 2092; SCUBA). We did correct for dust obscuration by estimating the reddening object by object, and not by considering a mean value of E(B - V) as in S99. Such a correction does not depend linearly on E(B - V): we found a ratio of ˜ 14 between un-reddened and reddened SFRD, ˜ 3 times greater than in S99, despite getting a mean value of color excess < E(B - V) > = 0.14 as in S99. Since we did not take into account sample incompleteness and surface brightness dimming effects, the decline of the SFRD at z ˜ 4 could be questionable.

Massarotti, M.; Iovino, A.

368

The genomic RNA of hepatitis C virus (HCV) in the plasma of volunteer blood donors was detected by using the polymerase chain reaction in a fraction of density 1.08 g/ml from sucrose density gradient equilibrium centrifugation. When the fraction was treated with the detergent NP40 and recentrifuged in sucrose, the HCV RNA banded at 1.25 g/ml. Assuming that NP40 removed a

Hideaki Miyamoto; Hiroaki Okamoto; Koei Sato; Takeshi Tanaka; Shunji Mishiro

1992-01-01

369

Voxel-Based Estimation of Plant Area Density from Airborne Laser Scanner Data

NASA Astrophysics Data System (ADS)

Three-dimensional distribution of plant area density (PAD) was retrieved using airborne laser scanning (ALS) data. The calculation of PAD requires, for each spatial unit, the number of laser pulses intercepted by plant material and the number not intercepted by trees (i.e., passed laser pulses). To estimate the passed laser pulses at a voxel (1-m voxels in this study), we traced every laser return using its flight-line information. The assumption was that every laser pulse traveled on a line orthogonal to the flight line, meaning that the sensor mounted in the aircraft scanned perpendicularly to its flight line. Our function based on this assumption allowed PAD to be calculated. Consequently, we successfully obtained the PAD profiles at every 1-m voxel for the canopy area of 56 trees, which could be useful in the quantitative assessment of canopy structure at a broad scale.

Song, Y.; Maki, M.; Imanishi, J.; Morimoto, Y.

2011-09-01

370

Direct learning of sparse changes in Markov networks by density ratio estimation.

We propose a new method for detecting changes in Markov network structure between two sets of samples. Instead of naively fitting two Markov network models separately to the two data sets and figuring out their difference, we directly learn the network structure change by estimating the ratio of Markov network models. This density-ratio formulation naturally allows us to introduce sparsity in the network structure change, which highly contributes to enhancing interpretability. Furthermore, computation of the normalization term, a critical bottleneck of the naive approach, can be remarkably mitigated. We also give the dual formulation of the optimization problem, which further reduces the computation cost for large-scale Markov networks. Through experiments, we demonstrate the usefulness of our method. PMID:24684449

Liu, Song; Quinn, John A; Gutmann, Michael U; Suzuki, Taiji; Sugiyama, Masashi

2014-06-01

371

This paper reports on the problem of simultaneously estimating neutron density and reactivity while operating a nuclear reactor. It is solved by using a bank of Kalman filters as an estimator and applying a probabilistic test to determine which filter of the bank has the best performance.

Cortina, E.; D'Atellis, C.E. (Universidad de Buenos Aires, Centro de Calculo Cientifico, Comision Nacional de Energia Atomica, Buenos Aires (AR))

1990-07-01

372

The (maximum) penalized-likelihood method of probability density estimation and bump-hunting is improved and exemplified by applications to scattering and chondrite data. We show how the hyperparameter in the method can be satisfactorily estimated by using statistics of goodness of fit. A Fourier expansion is found to be usually more expeditious than a Hermite expansion but a compromise is useful. The

I. J. Good; R. A. Gaskins

1980-01-01

373

NASA Technical Reports Server (NTRS)

The structure of the upper atmosphere can be indirectly probed by light in order to determine the global density structure of ozone, aerosols, and neutral atmosphere. Scattered and directly transmitted light is measured by a satellite and is shown to be a nonlinear function of the state which is defined to be a point-wise decomposition of the density profiles. Dynamics are imposed on the state vector and a structured estimation problem is developed. The estimation of these densities is then performed using a linearized Kalman-Bucy filter and a linearized Kushner-Stratonovich filter.

Mcgarty, T. P.

1971-01-01

374

NASA Astrophysics Data System (ADS)

The goal of the present study is to employ the source imaging methods such as cortical current density estimation for the classification of left- and right-hand motor imagery tasks, which may be used for brain-computer interface (BCI) applications. The scalp recorded EEG was first preprocessed by surface Laplacian filtering, time-frequency filtering, noise normalization and independent component analysis. Then the cortical imaging technique was used to solve the EEG inverse problem. Cortical current density distributions of left and right trials were classified from each other by exploiting the concept of Von Neumann entropy. The proposed method was tested on three human subjects (180 trials each) and a maximum accuracy of 91.5% and an average accuracy of 88% were obtained. The present results confirm the hypothesis that source analysis methods may improve accuracy for classification of motor imagery tasks. The present promising results using source analysis for classification of motor imagery enhances our ability of performing source analysis from single trial EEG data recorded on the scalp, and may have applications to improved BCI systems.
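The Von Neumann entropy used above to discriminate the two classes is defined for a positive semi-definite matrix normalized to unit trace; a minimal sketch follows (how such a matrix is assembled from the cortical current density maps is not specified here):

```python
import numpy as np

def von_neumann_entropy(M, eps=1e-12):
    """Von Neumann entropy S = -tr(rho ln rho), with rho = M / tr(M),
    for a symmetric positive semi-definite matrix M."""
    rho = M / np.trace(M)
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > eps]                  # convention: 0 * log 0 = 0
    return float(-(ev * np.log(ev)).sum())
```

The entropy is maximal (log of the dimension) for a maximally mixed matrix and zero for a rank-one matrix, which is what makes it usable as a scalar feature for classification.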

Kamousi, Baharan; Nasiri Amini, Ali; He, Bin

2007-06-01

375

A study of the use of polyethylene glycol in estimating cholesterol in high-density lipoprotein.

We studied polyethylene glycol 6000 precipitation of lipoproteins other than high-density lipoproteins, before cholesterol is estimated in the supernate. Other lipoproteins in the supernatant fractions were detected by using rocket immunoelectrophoresis. A polyethylene glycol concentration of 75 g/L in the final mixture appeared to be optimal, and results agreed with those obtained by ultracentrifugation. Differences in serum pH, use of polyethylene glycol from different suppliers, or the presence of ethylenediaminetetraacetate resulted in values that differed significantly (by 40 to 60 mumol/L) from the reference values. Polyethylene glycol did not interfere in four different methods for determination of cholesterol. In combination with an enzymic cholesterol method, the polyethylene glycol method appeared to be very precise, even when lipemic sera (triglycerides up to 5.5 mmol/L) were analyzed that had diminished high-density lipoprotein cholesterol values. We consider this method a method of choice, especially when lipemic sera are tested and enzymic cholesterol analysis is used. PMID:7438421

Demacker, P N; Hijmans, A G; Vos-Janssen, H E; van't Laar, A; Jansen, A P

1980-12-01

376

Density estimates of Panamanian owl monkeys (Aotus zonalis) in three habitat types.

The resolution of the ambiguity surrounding the taxonomy of Aotus means data on newly classified species are urgently needed for conservation efforts. We conducted a study on the Panamanian owl monkey (Aotus zonalis) between May and July 2008 at three localities in Chagres National Park, located east of the Panama Canal, using the line transect method to quantify abundance and distribution. Vegetation surveys were also conducted to provide a baseline quantification of the three habitat types. We observed 33 individuals within 16 groups in two out of the three sites. Population density was highest in Campo Chagres with 19.7 individuals/km(2), and intermediate densities of 14.3 individuals/km(2) were observed at Cerro Azul. In La Llana, A. zonalis was not found. The presence of A. zonalis in Chagres National Park, albeit at seemingly low abundance, is encouraging. A longer-term study will be necessary to validate the abundance estimates gained in this pilot study in order to make conservation policy decisions. PMID:19852005

Svensson, Magdalena S; Samudio, Rafael; Bearder, Simon K; Nekaris, K Anne-Isola

2010-02-01

377

Density estimation in aerial images of large crowds for automatic people counting

NASA Astrophysics Data System (ADS)

Counting people is a common topic in the area of visual surveillance and crowd analysis. While many image-based solutions are designed to count only a few persons at the same time, like pedestrians entering a shop or watching an advertisement, there is hardly any solution for counting large crowds of several hundred persons or more. We addressed this problem previously by designing a semi-automatic system being able to count crowds consisting of hundreds or thousands of people based on aerial images of demonstrations or similar events. This system requires major user interaction to segment the image. Our principal aim is to reduce this manual interaction. To achieve this, we propose a new and automatic system. Besides counting the people in large crowds, the system yields the positions of people, allowing a plausibility check by a human operator. In order to automate the people counting system, we use crowd density estimation. The determination of crowd density is based on several features like edge intensity or spatial frequency. They indicate the density and discriminate between a crowd and other image regions like buildings, bushes or trees. We compare the performance of our automatic system to the previous semi-automatic system and to manual counting in images. By counting a test set of aerial images showing large crowds containing up to 12,000 people, the performance gain of our new system will be measured. By improving our previous system, we will increase the benefit of an image-based solution for counting people in large crowds.
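As a toy illustration of one such feature, edge intensity can be computed as a block-averaged gradient magnitude (the block size and gradient operator are arbitrary choices here, not those of the paper):

```python
import numpy as np

def edge_intensity_map(img, block=8):
    """Block-averaged gradient magnitude as a simple density feature:
    textured (crowd-like) regions score high, flat regions score low."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    h, w = mag.shape
    h, w = h - h % block, w - w % block     # crop to a multiple of the block size
    return mag[:h, :w].reshape(h // block, block, w // block, block).mean(axis=(1, 3))
```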

Herrmann, Christian; Metzler, Juergen

2013-05-01

378

Robust estimation of mammographic breast density: a patient-based approach

NASA Astrophysics Data System (ADS)

Breast density has become an established risk indicator for developing breast cancer. Current clinical practice reflects this by grading mammograms patient-wise as entirely fat, scattered fibroglandular, heterogeneously dense, or extremely dense based on visual perception. Existing (semi-) automated methods work on a per-image basis and mimic clinical practice by calculating an area fraction of fibroglandular tissue (mammographic percent density). We suggest a method that follows clinical practice more strictly by segmenting the fibroglandular tissue portion directly from the joint data of all four available mammographic views (cranio-caudal and medio-lateral oblique, left and right), and by subsequently calculating a consistently patient-based mammographic percent density estimate. In particular, each mammographic view is first processed separately to determine a region of interest (ROI) for segmentation into fibroglandular and adipose tissue. ROI determination includes breast outline detection via edge-based methods, peripheral tissue suppression via geometric breast height modeling, and - for medio-lateral oblique views only - pectoral muscle outline detection based on optimizing a three-parameter analytic curve with respect to local appearance. Intensity harmonization based on separately acquired calibration data is performed with respect to compression height and tube voltage to facilitate joint segmentation of available mammographic views. A Gaussian mixture model (GMM) on the joint histogram data with a posteriori calibration guided plausibility correction is finally employed for tissue separation. The proposed method was tested on patient data from 82 subjects. Results show excellent correlation (r = 0.86) to radiologist's grading with deviations ranging between -28% (q = 0.025) and +16% (q = 0.975).
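The final tissue-separation step rests on fitting a Gaussian mixture to histogram data. A minimal one-dimensional two-component EM fit conveys the idea (the initialization, iteration count, and the omission of the calibration-guided plausibility correction are all simplifications):

```python
import numpy as np

def gmm2_em(x, iters=200):
    """Fit a two-component 1-D Gaussian mixture with EM.
    Returns (weights, means, standard deviations)."""
    mu = np.percentile(x, [25, 75]).astype(float)   # crude initialization
    sd = np.array([x.std(), x.std()])
    w = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibilities of each component for each sample
        pdf = w * np.exp(-(x[:, None] - mu)**2 / (2 * sd**2)) / (sd * np.sqrt(2 * np.pi))
        r = pdf / pdf.sum(axis=1, keepdims=True)
        # M-step: update weights, means, standard deviations
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sd = np.sqrt((r * (x[:, None] - mu)**2).sum(axis=0) / nk)
    return w, mu, sd
```

A threshold between the two fitted means would then separate "adipose-like" from "fibroglandular-like" intensities.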

Heese, Harald S.; Erhard, Klaus; Gooßen, Andre; Bulow, Thomas

2012-02-01

379

mBEEF: an accurate semi-local Bayesian error estimation density functional.

We present a general-purpose meta-generalized gradient approximation (MGGA) exchange-correlation functional generated within the Bayesian error estimation functional framework [J. Wellendorff, K. T. Lundgaard, A. Møgelhøj, V. Petzold, D. D. Landis, J. K. Nørskov, T. Bligaard, and K. W. Jacobsen, Phys. Rev. B 85, 235149 (2012)]. The functional is designed to give reasonably accurate density functional theory (DFT) predictions of a broad range of properties in materials physics and chemistry, while exhibiting a high degree of transferability. Particularly, it improves upon solid cohesive energies and lattice constants over the BEEF-vdW functional without compromising high performance on adsorption and reaction energies. We thus expect it to be particularly well-suited for studies in surface science and catalysis. An ensemble of functionals for error estimation in DFT is an intrinsic feature of exchange-correlation models designed this way, and we show how the Bayesian ensemble may provide a systematic analysis of the reliability of DFT based simulations. PMID:24735288

Wellendorff, Jess; Lundgaard, Keld T; Jacobsen, Karsten W; Bligaard, Thomas

2014-04-14

380

NASA Astrophysics Data System (ADS)

Spectral estimation of irregularly sampled velocity data issued from Laser Doppler Anemometry measurements is considered in this paper. A new method is proposed based on linear interpolation followed by a deconvolution procedure. In this method, the analytic expression of the autocorrelation function of the interpolated data is expressed as a linear function of the autocorrelation function of the data to be estimated. For the analysis of both simulated and experimental data, the results of the proposed method are compared with those of the reference methods in LDA: the refinement of the autocorrelation function of the sample-and-hold interpolated signal given by Nobach et al. (Exp Fluids 24:499-509, 1998), the refinement of the power spectral density of the sample-and-hold interpolated signal given by Simon and Fitzpatrick (Exp Fluids 37:272-280, 2004), and the fuzzy slotting technique with local normalization and weighting given by Nobach (Exp Fluids 32:337-345, 2002). Based on these results, it is concluded that the performance of the proposed method is better than that of the other methods, especially in terms of bias and variance.
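For context, the sample-and-hold baseline underlying the reference methods can be sketched as follows: hold the last observed value on a regular grid, then take a periodogram (the grid rate and normalization are illustrative, and the refinement/deconvolution steps are omitted):

```python
import numpy as np

def sample_and_hold_psd(t, x, fs, n):
    """Resample irregular samples (t, x) on a regular grid of n points at
    rate fs by sample-and-hold, then estimate the PSD with a periodogram."""
    grid = np.arange(n) / fs
    idx = np.searchsorted(t, grid, side='right') - 1
    idx = np.clip(idx, 0, len(t) - 1)
    xr = x[idx]                        # hold the last observed value
    xr = xr - xr.mean()
    spec = np.abs(np.fft.rfft(xr))**2 / (n * fs)
    return np.fft.rfftfreq(n, 1 / fs), spec
```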

Moreau, S.; Plantier, G.; Valière, J.-C.; Bailliet, H.; Simon, L.

2011-01-01

381

The minimum description length principle for probability density estimation by regular histograms

NASA Astrophysics Data System (ADS)

The minimum description length principle is a general methodology for statistical modeling and inference that selects the best explanation for observed data as the one allowing the shortest description of them. Application of this principle to the important task of probability density estimation by histograms was previously proposed. We review this approach and provide additional illustrative examples and an application to real-world data, with a presentation emphasizing intuition and concrete arguments. We also consider alternative ways of measuring the description lengths, which may be better suited in this context. We explicitly exhibit, analyze and compare the complete forms of the description lengths, with formulas involving the information entropy and redundancy of the data that are not given elsewhere. Histogram estimation as performed here naturally extends to multidimensional data, and offers for them flexible and optimal subquantization schemes. The framework can be very useful for modeling and reduction of complexity of observed data, based on a general principle from statistical information theory, and placed within a unifying informational perspective.
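A BIC-like stand-in for the description-length criterion (a simplification, not the paper's exact formulation) selects the number of equal-width bins on [0, 1] by trading off the histogram log-likelihood against a (k - 1)/2 · log n parameter cost:

```python
import numpy as np

def mdl_histogram_bins(x, kmax=50):
    """Pick the number k of equal-width bins on [0, 1] minimizing a two-part
    description length: negative log-likelihood + (k - 1)/2 * log(n).
    (A simplified, BIC-like surrogate for the MDL criterion.)"""
    n = len(x)
    best_k, best_dl = 1, np.inf
    for k in range(1, kmax + 1):
        counts, _ = np.histogram(x, bins=k, range=(0.0, 1.0))
        nz = counts[counts > 0]
        # histogram log-likelihood: sum_i n_i * log(density in bin i)
        loglik = np.sum(nz * np.log(nz * k / n))
        dl = -loglik + 0.5 * (k - 1) * np.log(n)
        if dl < best_dl:
            best_k, best_dl = k, dl
    return best_k
```

Smooth data favor few bins (the penalty dominates), while sharply structured data justify the extra parameter cost of a finer histogram.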

Chapeau-Blondeau, François; Rousseau, David

2009-09-01

382

Methods for Estimating Environmental Effects and Constraints on NexGen: High Density Case Study

NASA Technical Reports Server (NTRS)

This document provides a summary of the current methods developed by Metron Aviation for the estimate of environmental effects and constraints on the Next Generation Air Transportation System (NextGen). This body of work incorporates many of the key elements necessary to achieve such an estimate. Each section contains the background and motivation for the technical elements of the work, a description of the methods used, and possible next steps. The current methods described in this document were selected in an attempt to provide a good balance between accuracy and fairly rapid turn around times to best advance Joint Planning and Development Office (JPDO) System Modeling and Analysis Division (SMAD) objectives while also supporting the needs of the JPDO Environmental Working Group (EWG). In particular this document describes methods applied to support the High Density (HD) Case Study performed during the spring of 2008. A reference day (in 2006) is modeled to describe current system capabilities while the future demand is applied to multiple alternatives to analyze system performance. The major variables in the alternatives are operational/procedural capabilities for airport, terminal, and en route airspace along with projected improvements to airframe, engine and navigational equipment.

Augustine, S.; Ermatinger, C.; Graham, M.; Thompson, T.

2010-01-01

383

mBEEF: An accurate semi-local Bayesian error estimation density functional

NASA Astrophysics Data System (ADS)

We present a general-purpose meta-generalized gradient approximation (MGGA) exchange-correlation functional generated within the Bayesian error estimation functional framework [J. Wellendorff, K. T. Lundgaard, A. Møgelhøj, V. Petzold, D. D. Landis, J. K. Nørskov, T. Bligaard, and K. W. Jacobsen, Phys. Rev. B 85, 235149 (2012)]. The functional is designed to give reasonably accurate density functional theory (DFT) predictions of a broad range of properties in materials physics and chemistry, while exhibiting a high degree of transferability. Particularly, it improves upon solid cohesive energies and lattice constants over the BEEF-vdW functional without compromising high performance on adsorption and reaction energies. We thus expect it to be particularly well-suited for studies in surface science and catalysis. An ensemble of functionals for error estimation in DFT is an intrinsic feature of exchange-correlation models designed this way, and we show how the Bayesian ensemble may provide a systematic analysis of the reliability of DFT based simulations.

Wellendorff, Jess; Lundgaard, Keld T.; Jacobsen, Karsten W.; Bligaard, Thomas

2014-04-01

384

Wavelet-based reconstruction of fossil-fuel CO2 emissions from sparse measurements

NASA Astrophysics Data System (ADS)

We present a method to estimate spatially resolved fossil-fuel CO2 (ffCO2) emissions from sparse measurements of time-varying CO2 concentrations. It is based on wavelet modeling of the strongly non-stationary spatial distribution of ffCO2 emissions. The dimensionality of the wavelet model is first reduced using images of nightlights, which identify regions of human habitation. Since wavelets are a multiresolution basis set, most of the reduction is accomplished by removing fine-scale wavelets in the regions with low nightlight radiances. The (reduced) wavelet model of emissions is propagated through an atmospheric transport model (WRF) to predict CO2 concentrations at a handful of measurement sites. The estimation of the wavelet model of emissions, i.e., inferring the wavelet weights, is performed by fitting to observations at the measurement sites. This is done using Staggered Orthogonal Matching Pursuit (StOMP), which first identifies (and sets to zero) the wavelet coefficients that cannot be estimated from the observations, before estimating the remaining coefficients. This model sparsification and fitting is performed simultaneously, allowing us to explore multiple wavelet models of differing complexity. This technique is borrowed from the field of compressive sensing, and is generally used in image and video processing. We test this approach using synthetic observations generated from emissions from the Vulcan database. 35 sensor sites are chosen over the USA. The ffCO2 emissions, averaged over 8-day periods, are estimated at a spatial resolution of 1 degree. We find that only about 40% of the wavelets in the emission model can be estimated from the data; however, the mix of coefficients that are estimated changes with time. Total US emissions can be reconstructed with errors of about 5%. The inferred emissions, if aggregated monthly, have a correlation of 0.9 with the Vulcan fluxes. We find that the estimated emissions in the Northeast US are the most accurate.
Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
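StOMP is a staged variant of Orthogonal Matching Pursuit; a plain OMP sketch conveys the core greedy idea of repeatedly selecting the basis column most correlated with the residual (this is generic OMP, not the authors' StOMP implementation):

```python
import numpy as np

def omp(A, y, nnz, tol=1e-10):
    """Orthogonal Matching Pursuit: greedily select columns of A to
    approximate y with an nnz-sparse coefficient vector."""
    m, ncols = A.shape
    x = np.zeros(ncols)
    support = []
    r = y.copy()
    for _ in range(nnz):
        j = int(np.argmax(np.abs(A.T @ r)))   # column most correlated with residual
        support.append(j)
        # re-fit all selected columns jointly (the "orthogonal" step)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ coef
        if np.linalg.norm(r) < tol:
            break
    x[support] = coef
    return x
```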

McKenna, S. A.; Ray, J.; Yadav, V.; Van Bloemen Waanders, B.; Michalak, A. M.

2012-12-01

385

Genomic selection has the potential to increase genetic progress. Genotype imputation of high-density single-nucleotide polymorphism (SNP) genotypes can improve the cost efficiency of genomic breeding value (GEBV) prediction for pig breeding. Consequently, the objectives of this work were to: (1) estimate accuracy of genomic evaluation and GEBV for three traits in a Yorkshire population and (2) quantify the loss of accuracy of genomic evaluation and GEBV when genotypes were imputed under two scenarios: a high-cost, high-accuracy scenario in which only selection candidates were imputed from a low-density platform and a low-cost, low-accuracy scenario in which all animals were imputed using a small reference panel of haplotypes. Phenotypes and genotypes obtained with the PorcineSNP60 BeadChip were available for 983 Yorkshire boars. Genotypes of selection candidates were masked and imputed using tagSNP in the GeneSeek Genomic Profiler (10K). Imputation was performed with BEAGLE using 128 or 1800 haplotypes as reference panels. GEBV were obtained through an animal-centric ridge regression model using de-regressed breeding values as response variables. Accuracy of genomic evaluation was estimated as the correlation between estimated breeding values and GEBV in a 10-fold cross validation design. Accuracy of genomic evaluation using observed genotypes was high for all traits (0.65-0.68). Using genotypes imputed from a large reference panel (accuracy: R(2) = 0.95) for genomic evaluation did not significantly decrease accuracy, whereas a scenario with genotypes imputed from a small reference panel (R(2) = 0.88) did show a significant decrease in accuracy. Genomic evaluation based on imputed genotypes in selection candidates can be implemented at a fraction of the cost of a genomic evaluation using observed genotypes and still yield virtually the same accuracy. 
On the other hand, using a very small reference panel of haplotypes to impute training animals and candidates for selection results in lower accuracy of genomic evaluation. PMID:24531728
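The GEBV pipeline described above (ridge regression on SNP genotypes with de-regressed breeding values as response, accuracy taken as the correlation between observed and predicted values in 10-fold cross-validation) can be sketched as follows. The simulated genotypes, effect sizes, and the ridge parameter `lam` are illustrative assumptions, not the study's data or tuning:

```python
import numpy as np

def gebv_ridge(X_train, y_train, X_test, lam=1.0):
    """Ridge regression (SNP-BLUP style): solve (X'X + lam*I) b = X'y,
    then predict GEBV for the test animals. X holds 0/1/2 genotype codes."""
    p = X_train.shape[1]
    b = np.linalg.solve(X_train.T @ X_train + lam * np.eye(p), X_train.T @ y_train)
    return X_test @ b

rng = np.random.default_rng(0)
n, p = 200, 50
X = rng.integers(0, 3, size=(n, p)).astype(float)   # synthetic 0/1/2 genotypes
true_b = rng.normal(0, 0.3, p)
y = X @ true_b + rng.normal(0, 1.0, n)              # stand-in for de-regressed EBV

# 10-fold cross-validation; accuracy = correlation(observed, predicted)
folds = np.array_split(np.arange(n), 10)
pred = np.empty(n)
for idx in folds:
    mask = np.ones(n, bool)
    mask[idx] = False
    pred[idx] = gebv_ridge(X[mask], y[mask], X[idx])
accuracy = np.corrcoef(y, pred)[0, 1]
print(round(accuracy, 2))
```

With this signal-to-noise ratio the cross-validated correlation lands well above zero, mirroring how the study reports accuracy as a single correlation per trait.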

Badke, Yvonne M; Bates, Ronald O; Ernst, Catherine W; Fix, Justin; Steibel, Juan P

2014-01-01

386

Accuracy of Estimation of Genomic Breeding Values in Pigs Using Low-Density Genotypes and Imputation

Genomic selection has the potential to increase genetic progress. Genotype imputation of high-density single-nucleotide polymorphism (SNP) genotypes can improve the cost efficiency of genomic breeding value (GEBV) prediction for pig breeding. Consequently, the objectives of this work were to: (1) estimate accuracy of genomic evaluation and GEBV for three traits in a Yorkshire population and (2) quantify the loss of accuracy of genomic evaluation and GEBV when genotypes were imputed under two scenarios: a high-cost, high-accuracy scenario in which only selection candidates were imputed from a low-density platform and a low-cost, low-accuracy scenario in which all animals were imputed using a small reference panel of haplotypes. Phenotypes and genotypes obtained with the PorcineSNP60 BeadChip were available for 983 Yorkshire boars. Genotypes of selection candidates were masked and imputed using tagSNP in the GeneSeek Genomic Profiler (10K). Imputation was performed with BEAGLE using 128 or 1800 haplotypes as reference panels. GEBV were obtained through an animal-centric ridge regression model using de-regressed breeding values as response variables. Accuracy of genomic evaluation was estimated as the correlation between estimated breeding values and GEBV in a 10-fold cross validation design. Accuracy of genomic evaluation using observed genotypes was high for all traits (0.65-0.68). Using genotypes imputed from a large reference panel (accuracy: R2 = 0.95) for genomic evaluation did not significantly decrease accuracy, whereas a scenario with genotypes imputed from a small reference panel (R2 = 0.88) did show a significant decrease in accuracy. Genomic evaluation based on imputed genotypes in selection candidates can be implemented at a fraction of the cost of a genomic evaluation using observed genotypes and still yield virtually the same accuracy. 
On the other hand, using a very small reference panel of haplotypes to impute training animals and candidates for selection results in lower accuracy of genomic evaluation.

Badke, Yvonne M.; Bates, Ronald O.; Ernst, Catherine W.; Fix, Justin; Steibel, Juan P.

2014-01-01

387

Noise-Resistant Wavelet-Based Bayesian Fusion of Multispectral and Hyperspectral Images

In this paper, a technique is presented for the fusion of multispectral (MS) and hyperspectral (HS) images to enhance the spatial resolution of the latter. The technique works in the wavelet domain and is based on a Bayesian estimation of the HS image, assuming a joint normal model for the images and an additive noise imaging model for the HS

Yifan Zhang; Steve De Backer; Paul Scheunders

2009-01-01

388

Wavelet-based relative prefix sum methods for range sum queries in data cubes

Data mining and related applications often rely on extensive range sum queries and thus, it is important for these queries to scale well. Range sum queries in data cubes can be answered in constant time using prefix sum aggregates; the wavelet-based relative prefix sum methods proposed here provide query and update methods twice as fast as Haar-based methods. Moreover, since these new methods are pyramidal, they provide incrementally improving estimates.
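As a sketch of the prefix-sum machinery such range-sum methods build on (the wavelet-based relative prefix sum itself is more involved), a classic prefix-sum aggregate answers any range sum in a d-dimensional cube with 2^d corner lookups via inclusion-exclusion. The array and query below are illustrative:

```python
import numpy as np

def prefix_sums(cube):
    """Cumulative sums along every axis; a range sum then needs 2^d lookups."""
    P = cube.astype(np.int64)
    for ax in range(P.ndim):
        P = np.cumsum(P, axis=ax)
    return P

def range_sum_2d(P, r0, r1, c0, c1):
    """Inclusive range sum over rows r0..r1, cols c0..c1 via inclusion-exclusion."""
    total = P[r1, c1]
    if r0 > 0:
        total -= P[r0 - 1, c1]
    if c0 > 0:
        total -= P[r1, c0 - 1]
    if r0 > 0 and c0 > 0:
        total += P[r0 - 1, c0 - 1]
    return total

cube = np.arange(16).reshape(4, 4)
P = prefix_sums(cube)
print(range_sum_2d(P, 1, 2, 1, 2))  # 5 + 6 + 9 + 10 = 30
```

The trade-off motivating the paper: queries are O(1) here, but a single cell update forces a cascade of prefix updates, which hierarchical (pyramidal) schemes amortize.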

Daniel Lemire

2002-01-01

389

HRV and BPV neural network model with wavelet based algorithm calibration

The heart rate and blood pressure power spectra, especially the power of the low frequency (LF) and high frequency (HF) components, have been widely used in recent decades for quantification of both autonomic function and respiratory activity. The Discrete Wavelet Transform (DWT) is an important tool in this field. The paper presents a fast LF and HF estimator that uses
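A minimal illustration of estimating LF and HF band powers with a DWT, using a hand-rolled Haar transform on a synthetic tachogram resampled at 4 Hz. The band-to-level mapping is approximate (dyadic Haar bands only roughly align with the standard LF 0.04-0.15 Hz and HF 0.15-0.4 Hz definitions), and the two-tone signal is invented for illustration:

```python
import numpy as np

def haar_detail_energies(x, levels):
    """One-dimensional Haar DWT; return the energy of the detail coefficients
    per level. Level j spans roughly [fs/2**(j+1), fs/2**j] Hz for rate fs."""
    energies = []
    a = np.asarray(x, float)
    for _ in range(levels):
        d = (a[0::2] - a[1::2]) / np.sqrt(2)   # detail coefficients
        a = (a[0::2] + a[1::2]) / np.sqrt(2)   # approximation carried onward
        energies.append(float(np.sum(d ** 2)))
    return energies

fs = 4.0                                        # resampled tachogram rate (Hz)
t = np.arange(1024) / fs
x = np.sin(2*np.pi*0.1*t) + 0.5*np.sin(2*np.pi*0.3*t)   # LF tone + HF tone
e = haar_detail_energies(x, 6)
hf = e[2]   # level 3, ~0.25-0.5 Hz: the Haar band containing the 0.3 Hz tone
lf = e[4]   # level 5, ~0.0625-0.125 Hz: the Haar band containing the 0.1 Hz tone
print(lf > hf)
```

The stronger LF tone dominates its band energy, which is the quantity an LF/HF estimator of this kind reports.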

G. Postolache; L. Silva Carvalho; O. Postolache; P. Girão; I. Rocha

2009-01-01

390

On L p -Resolvent Estimates and the Density of Eigenvalues for Compact Riemannian Manifolds

NASA Astrophysics Data System (ADS)

We address an interesting question raised by Dos Santos Ferreira, Kenig and Salo (Forum Math, 2014) about regions R_g ⊂ ℂ for which there can be uniform L^{2n/(n+2)} → L^{2n/(n−2)} resolvent estimates for Δ_g + λ, λ ∈ R_g, where Δ_g is the Laplace-Beltrami operator with metric g on a given compact boundaryless Riemannian manifold of dimension n ≥ 3. This is related to earlier work of Kenig, Ruiz and the third author (Duke Math J 55:329-347, 1987) for the Euclidean Laplacian, in which case the region is the entire complex plane minus any disc centered at the origin. Presently, we show that for the round metric on the sphere S^n, the resolvent estimates in (Dos Santos Ferreira et al., Forum Math, 2014), involving a much smaller region, are essentially optimal. We do this by establishing sharp bounds based on the distance from λ to the spectrum of Δ_{S^n}. In the other direction, we also show that the bounds in (Dos Santos Ferreira et al., Forum Math, 2014) can be sharpened logarithmically for manifolds with nonpositive curvature, and by powers in the case of the torus T^n = ℝ^n/ℤ^n with the flat metric. The latter improves earlier bounds of Shen (Int Math Res Not 1:1-31, 2001). The work of (Dos Santos Ferreira et al., Forum Math, 2014) and (Shen, Int Math Res Not 1:1-31, 2001) was based on Hadamard parametrices for (Δ_g + λ)^{−1}. Ours is based on the related Hadamard parametrices for cos(t√(−Δ_g)), and it follows ideas in (Sogge, Ann Math 126:439-447, 1987) of proving L^p-multiplier estimates using small-time wave equation parametrices and the spectral projection estimates from (Sogge, J Funct Anal 77:123-138, 1988). This approach allows us to adapt arguments in Bérard (Math Z 155:249-276, 1977) and Hlawka (Monatsh Math 54:1-36, 1950) to obtain the aforementioned improvements over (Dos Santos Ferreira et al., Forum Math, 2014) and (Shen, Int Math Res Not 1:1-31, 2001). Further improvements for the torus are obtained using recent techniques of the first author (Bourgain, Israel J Math 193(1):441-458, 2013) and his work with Guth (Bourgain and Guth, Geom Funct Anal 21:1239-1295, 2011) based on the multilinear estimates of Bennett, Carbery and Tao (Math Z 2:261-302, 2006). Our approach also allows us to give a natural necessary condition for favorable resolvent estimates that is based on a measurement of the density of the spectrum of √(−Δ_g), and, moreover, a necessary and sufficient condition based on natural improved spectral projection estimates for shrinking intervals, as opposed to those in (Sogge, J Funct Anal 77:123-138, 1988) for unit-length intervals. We show that the resolvent estimates are sensitive to clustering within the spectrum, which is not surprising given Sommerfeld's original conjecture (Sommerfeld, Physikal Zeitschr 11:1057-1066, 1910) about these operators.

Bourgain, Jean; Shao, Peng; Sogge, Christopher D.; Yao, Xiaohua

2014-06-01

391

Long-range dependence in the volatility of commodity futures prices: Wavelet-based evidence

NASA Astrophysics Data System (ADS)

Commodity futures have long been used to facilitate risk management and inventory stabilization. The study of commodity futures prices has attracted much attention in the literature because they are highly volatile and because commodities represent a large proportion of the export value in many developing countries. Previous research has found apparently contradictory findings about the presence of long memory or more generally, long-range dependence. This note investigates the nature of long-range dependence in the volatility of 14 energy and agricultural commodity futures price series using the improved Hurst coefficient (H) estimator of Abry, Teyssière and Veitch. This estimator is motivated by the ability of wavelets to detect self-similarity and also enables a test for the stability of H. The results show evidence of long-range dependence for all 14 commodities and of a non-stationary H for 9 of 14 commodities.
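The wavelet estimator named above regresses the log of the wavelet detail-coefficient variance on scale. A minimal sketch with a Haar transform, validated on white noise (which has no long-range dependence, so H should come out near 0.5); the relation H = (slope + 1)/2 assumes a stationary, fractional-Gaussian-noise-like process:

```python
import numpy as np

def wavelet_hurst(x, levels=8):
    """Abry-Veitch style estimator: regress log2 of the Haar detail-coefficient
    variance on level j; for a stationary fGn-like process the detail variance
    scales as 2**(j*(2H-1)), so the fitted slope a gives H = (a + 1) / 2."""
    a_coef = np.asarray(x, float)
    logvars, scales = [], []
    for j in range(1, levels + 1):
        d = (a_coef[0::2] - a_coef[1::2]) / np.sqrt(2)
        a_coef = (a_coef[0::2] + a_coef[1::2]) / np.sqrt(2)
        logvars.append(np.log2(np.mean(d ** 2)))
        scales.append(j)
    slope = np.polyfit(scales, logvars, 1)[0]
    return (slope + 1) / 2

rng = np.random.default_rng(1)
h = wavelet_hurst(rng.normal(size=2 ** 14))
print(round(h, 1))   # white noise: detail variance is flat across scales, H near 0.5
```

A weighted fit over a chosen scale range (as in the authors' estimator) would sharpen this; the unweighted `polyfit` keeps the sketch short.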

Power, Gabriel J.; Turvey, Calum G.

2010-01-01

392

Obtaining accurate small area estimates of population is essential for policy and health planning but is often difficult in countries with limited data. In lieu of available population data, small area estimate models draw information from previous time periods or from similar areas. This study focuses on model-based methods for estimating population when no direct samples are available in the area of interest. To explore the efficacy of tree-based models for estimating population density, we compare six different model structures including Random Forest and Bayesian Additive Regression Trees. Results demonstrate that without information from prior time periods, non-parametric tree-based models produced more accurate predictions than did conventional regression methods. Improving estimates of population density in non-sampled areas is important for regions with incomplete census data and has implications for economic, health and development policies.

Anderson, Weston; Guikema, Seth; Zaitchik, Ben; Pan, William

2014-01-01

393

In this paper, a new robust linearly constrained constant modulus (LCCM) approach with inverse QRD-RLS algorithm is derived and applied to the multi-carrier code division multiple access (MC-CDMA) system. The proposed algorithm can be employed to efficiently reduce the MAI due to other users and to combat the mismatch problem when channel parameters cannot be estimated perfectly. We show that

Shiunn-jang Chern; Chung-yao Chang; Hsiao-chen Liu

2002-01-01

394

X-Ray Methods to Estimate Breast Density Content in Breast Tissue

NASA Astrophysics Data System (ADS)

This work focuses on analyzing x-ray methods to estimate the fat and fibroglandular contents in breast biopsies and in breasts. The knowledge of fat in the biopsies could aid in their wide-angle x-ray scatter analyses. A higher mammographic density (fibrous content) in breasts is an indicator of higher cancer risk. Simulations for 5 mm thick breast biopsies composed of fibrous, cancer, and fat and for 4.2 cm thick breast fat/fibrous phantoms were done. Data from experimental studies using plastic biopsies were analyzed. The 5 mm diameter 5 mm thick plastic samples consisted of layers of polycarbonate (lexan), polymethyl methacrylate (PMMA-lucite) and polyethylene (polyet). In terms of the total linear attenuation coefficients, lexan ≈ fibrous, lucite ≈ cancer and polyet ≈ fat. The detectors were of two types, photon counting (CdTe) and energy integrating (CCD). For biopsies, three photon counting methods were performed to estimate the fat (polyet) using simulation and experimental data, respectively. The two basis function method that assumed the biopsies were composed of two materials, fat and a 50:50 mixture of fibrous (lexan) and cancer (lucite) appears to be the most promising method. Discrepancies were observed between the results obtained via simulation and experiment. Potential causes are the spectrum and the attenuation coefficient values used for simulations. An energy integrating method was compared to the two basis function method using experimental and simulation data. A slight advantage was observed for photon counting whereas both detectors gave similar results for the 4.2 cm thick breast phantom simulations. The percentage of fibrous within a 9 cm diameter circular phantom of fibrous/fat tissue was estimated via a fan beam geometry simulation. Both methods yielded good results. Computed tomography (CT) images of the circular phantom were obtained using both detector types. 
The Radon transforms were estimated via four energy integrating techniques and one photon counting technique. Contrast, signal to noise ratio (SNR) and pixel values between different regions of interest were analyzed. The two basis function method and two of the energy integrating methods (calibration, beam hardening correction) gave the highest and more linear curves for contrast and SNR.
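The two basis function method amounts to solving a small linear system: the measured log-attenuations at two energies are modeled as a known attenuation matrix times the unknown basis-material thicknesses (Beer-Lambert). The coefficients below are illustrative placeholders, not measured values:

```python
import numpy as np

# Hypothetical linear attenuation coefficients (1/cm) at two photon energies
# for the two basis materials of the method described above.
mu = np.array([[0.50, 0.30],    # energy E1: [fat, 50:50 fibrous/cancer mix]
               [0.35, 0.25]])   # energy E2

t_true = np.array([0.3, 0.2])   # cm of each basis material in the sample
logI = mu @ t_true              # -ln(I/I0) "measured" at E1 and E2

# Invert the 2x2 system to recover the basis-material thicknesses.
t_est = np.linalg.solve(mu, logI)
print(np.round(t_est, 3))       # recovers [0.3, 0.2]
```

With noisy counts the same system would be solved in a least-squares sense over many pixels, which is where the photon-counting versus energy-integrating comparison enters.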

Maraghechi, Borna

395

NASA Astrophysics Data System (ADS)

This paper evaluates the performance of a wavelet-based compression algorithm applied to the data produced by the silicon drift detectors of the ALICE experiment at CERN. This compression algorithm is a general-purpose lossy technique; in other words, it could prove useful even on a wide range of other data reduction problems. In particular, the design targets relevant for our wavelet-based compression algorithm are the following: a high compression coefficient, a reconstruction error as small as possible and a very limited execution time. Interestingly, the results obtained are quite close to the ones achieved by the algorithm implemented in the first prototype of the chip CARLOS, the chip that will be used in the silicon drift detectors readout chain.

Falchieri, Davide; Gandolfi, Enzo; Masotti, Matteo

2004-07-01

396

NASA Astrophysics Data System (ADS)

A new method for designing two-channel causal stable IIR PR filter banks and wavelet bases is proposed. It is based on the structure previously proposed by Phoong et al. (1995). Such a filter bank is parameterized by two functions, α(z) and β(z), which can be chosen as all-pass functions to obtain IIR filter banks with very high stopband attenuation. One of the problems with this choice is that a bump of about 4 dB always exists near the transition band of the analysis and synthesis filters. The stopband attenuation of the high-pass analysis filter is also 10 dB lower than that of the low-pass filter. By choosing β(z) and α(z) as an all-pass function and a type-II linear-phase finite impulse response function, respectively, the bump can be significantly suppressed. In addition, the stopband attenuation of the high-pass filter can be controlled easily. The design problem is formulated as a polynomial approximation problem and is solved efficiently by the Remez exchange algorithm. The extension of this method to the design of a class of IIR wavelet bases is also considered.

Mao, J. S.; Chan, S. C.; Ho, Ka L.

2000-10-01

397

NASA Technical Reports Server (NTRS)

The characterization and the mapping of land cover/land use of forest areas, such as the Central African rainforest, is a very complex task. This complexity is mainly due to the extent of such areas and, as a consequence, to the lack of full and continuous cloud-free coverage of those large regions by one single remote sensing instrument. In order to provide improved vegetation maps of Central Africa and to develop forest monitoring techniques for applications at the local and regional scales, we propose to utilize multi-sensor remote sensing observations coupled with in-situ data. Fusion and clustering of multi-sensor data are the first steps towards the development of such a forest monitoring system. In this paper, we will describe some preliminary experiments involving the fusion of SAR and Landsat image data of the Lope Reserve in Gabon. Similarly to previous fusion studies, our fusion method is wavelet-based. The fusion provides a new image data set which contains more detailed texture features and preserves the large homogeneous regions that are observed by the Thematic Mapper sensor. The fusion step is followed by unsupervised clustering and provides a vegetation map of the area.
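A toy version of wavelet-based image fusion, assuming (as the text suggests) that the approximation subband is kept from the smoother optical image while each detail subband takes the larger-magnitude coefficient from either sensor. The one-level 2D Haar transform and the 8x8 stand-in images are illustrative, not the paper's actual wavelet or data:

```python
import numpy as np

def haar2_fuse(a, b):
    """One-level 2D Haar fusion sketch: keep the approximation (LL) of image
    `a` (large homogeneous regions) and, per detail subband, the coefficient
    of larger magnitude (texture), then invert the transform."""
    def fwd(img):
        p00, p01 = img[0::2, 0::2], img[0::2, 1::2]
        p10, p11 = img[1::2, 0::2], img[1::2, 1::2]
        ll = (p00 + p01 + p10 + p11) / 2
        lh = (p00 - p01 + p10 - p11) / 2
        hl = (p00 + p01 - p10 - p11) / 2
        hh = (p00 - p01 - p10 + p11) / 2
        return ll, lh, hl, hh
    la, ha1, ha2, ha3 = fwd(a)
    lb, hb1, hb2, hb3 = fwd(b)
    pick = lambda u, v: np.where(np.abs(u) >= np.abs(v), u, v)
    ll, lh, hl, hh = la, pick(ha1, hb1), pick(ha2, hb2), pick(ha3, hb3)
    out = np.empty_like(a, dtype=float)
    out[0::2, 0::2] = (ll + lh + hl + hh) / 2
    out[0::2, 1::2] = (ll - lh + hl - hh) / 2
    out[1::2, 0::2] = (ll + lh - hl - hh) / 2
    out[1::2, 1::2] = (ll - lh - hl - hh) / 2
    return out

optical = np.ones((8, 8)) * 10.0            # smooth, homogeneous stand-in
radar = np.zeros((8, 8)); radar[4, 4] = 8.0 # textured stand-in with one feature
fused = haar2_fuse(optical, radar)
print(fused.shape)
```

In the fused result the homogeneous background keeps the optical level while the radar feature injects local texture, which is the qualitative behavior the abstract describes.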

LeMoigne, Jacqueline; Laporte, Nadine; Netanyahuy, Nathan S.; Zukor, Dorothy (Technical Monitor)

2001-01-01

398

Wab-InSAR: a new wavelet based InSAR time series technique applied to volcanic and tectonic areas

NASA Astrophysics Data System (ADS)

Modern geodetic techniques such as InSAR and GPS provide valuable observations of the deformation field. Because of the variety of environmental interferences (e.g., atmosphere, topography distortion) and incompleteness of the models (assumption of the linear model for deformation), those observations are usually tainted by various systematic and random errors. Therefore we develop and test new methods to identify and filter unwanted periodic or episodic artifacts to obtain accurate and precise deformation measurements. Here we present and implement a new wavelet based InSAR (Wab-InSAR) time series approach. Because wavelets are excellent tools for identifying hidden patterns and capturing transient signals, we utilize wavelet functions for reducing the effect of atmospheric delay and digital elevation model inaccuracies. Wab-InSAR is a model free technique, reducing digital elevation model errors in individual interferograms using a 2D spatial Legendre polynomial wavelet filter. Atmospheric delays are reduced using a 3D spatio-temporal wavelet transform algorithm and a novel technique for pixel selection. We apply Wab-InSAR to several targets, including volcano deformation processes at Hawaii Island, and mountain building processes in Iran. Both targets are chosen to investigate large and small amplitude signals, variable and complex topography and atmospheric effects. In this presentation we explain different steps of the technique, validate the results by comparison to other high resolution processing methods (GPS, PS-InSAR, SBAS) and discuss the geophysical results.

Walter, T. R.; Shirzaei, M.; Nankali, H.; Roustaei, M.

2009-12-01

399

Optical Density Analysis of X-Rays Utilizing Calibration Tooling to Estimate Thickness of Parts

NASA Technical Reports Server (NTRS)

This process is designed to estimate the thickness change of a material through data analysis of a digitized version of an x-ray (or a digital x-ray) containing the material (with the thickness in question) and various tooling. Using this process, it is possible to estimate a material's thickness change in a region of the material or part that is thinner than the rest of the reference thickness. However, that same principle process can be used to determine the thickness change of material using a thinner region to determine thickening, or it can be used to develop contour plots of an entire part. Proper tooling must be used. An x-ray film with an S-shaped characteristic curve or a digital x-ray device with a product resulting in like characteristics is necessary. If a film exists with linear characteristics, this type of film would be ideal; however, at the time of this reporting, no such film is known. Machined components (with known fractional thicknesses) of a like material (similar density) to that of the material to be measured are necessary. The machined components should have machined through-holes. For ease of use and better accuracy, the through-holes should be a size larger than 0.125 in. (3.2 mm). Standard components for this use are known as penetrameters or image quality indicators. Also needed is standard x-ray equipment, if film is used in place of digital equipment, or x-ray digitization equipment with proven conversion properties. Typical x-ray digitization equipment is commonly used in the medical industry, and creates digital images of x-rays in DICOM format. It is recommended to scan the image in a 16-bit format. However, 12-bit and 8-bit resolutions are acceptable. Finally, x-ray analysis software that allows accurate digital image density calculations, such as Image-J freeware, is needed. 
The actual procedure requires the test article to be placed on the raw x-ray, ensuring the region of interest is aligned for perpendicular x-ray exposure capture. One or multiple machined components of like material/density with known thicknesses are placed atop the part (preferably in a region of nominal and non-varying thickness) such that exposure of the combined part and machined component lay-up is captured on the x-ray. Depending on the accuracy required, the machined component's thickness must be carefully chosen. Similarly, depending on the accuracy required, the lay-up must be exposed such that the regions of the x-ray to be analyzed have a density range between 1 and 4.5. After the exposure, the image is digitized, and the digital image can then be analyzed using the image analysis software.
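A sketch of the calibration step the procedure implies: film densities measured over penetrameter steps of known thickness define a locally monotonic portion of the characteristic curve, and an unknown region's thickness is interpolated from its measured density. All calibration numbers here are hypothetical:

```python
import numpy as np

# Hypothetical calibration measurements: film density over penetrameter
# steps of known thickness (density falls as thickness rises).
calib_thickness = np.array([0.100, 0.110, 0.120, 0.130])   # inches
calib_density = np.array([3.10, 2.85, 2.62, 2.41])         # measured film density

def thickness_from_density(d):
    """Interpolate thickness from film density on the locally ~linear portion
    of the characteristic curve; d must lie within the calibrated range."""
    order = np.argsort(calib_density)          # np.interp needs ascending x
    return float(np.interp(d, calib_density[order], calib_thickness[order]))

print(round(thickness_from_density(2.735), 3))  # midway between 0.110 and 0.120
```

In practice each exposure carries its own penetrameter stack, so the calibration is rebuilt per image rather than reused.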

Grau, David

2012-01-01

400

The search for easy-to-use indices that substitute for direct estimation of animal density is a common theme in wildlife and conservation science, but one fraught with well-known perils (Nichols & Conroy, 1996; Yoccoz, Nichols & Boulinier, 2001; Pollock et al., 2002). To establish the utility of an index as a substitute for an estimate of density, one must: (1) demonstrate a functional relationship between the index and density that is invariant over the desired scope of inference; (2) calibrate the functional relationship by obtaining independent measures of the index and the animal density; (3) evaluate the precision of the calibration (Diefenbach et al., 1994). Carbone et al. (2001) argue that the number of camera-days per photograph is a useful index of density for large, cryptic, forest-dwelling animals, and proceed to calibrate this index for tigers (Panthera tigris). We agree that a properly calibrated index may be useful for rapid assessments in conservation planning. However, Carbone et al. (2001), who desire to use their index as a substitute for density, do not adequately address the three elements noted above. Thus, we are concerned that others may view their methods as justification for not attempting directly to estimate animal densities, without due regard for the shortcomings of their approach.

Jennelle, C.S.; Runge, M.C.; MacKenzie, D.I.

2002-01-01

401

Liquefied natural gas (LNG) densities can be measured directly but are usually determined indirectly in custody transfer measurement by using a density correlation based on temperature and composition measurements. An LNG densimeter test facility at the National Bureau of Standards uses an absolute densimeter based on the Archimedes principle, while a test facility at Gaz de France uses a correlation method based on measurement of composition and density. A comparison between these two test facilities using a portable version of the absolute densimeter provides an experimental estimate of the uncertainty of the indirect method of density measurement for the first time, on a large (32 L) sample. The two test facilities agree for pure methane to within about 0.02%. For the LNG-like mixtures consisting of methane, ethane, propane, and nitrogen with the methane concentrations always higher than 86%, the calculated density is within 0.25% of the directly measured density 95% of the time.

Siegwarth, J.D.; LaBrecque, J.F.; Roncier, M.; Philippe, R.; Saint-Just, J.

1982-12-16

402

Wavelet-based fractal analysis of InterMagnet Observatories data

NASA Astrophysics Data System (ADS)

The main goal of this paper is to use the so-called Hurst exponent, estimated by linear regression of the modulus of the continuous wavelet transform of the horizontal component of a given InterMagnet observatory's data versus the scales, for geomagnetic storm prediction and analysis. Application to Wingst observatory data from the May 2002 period shows clearly that the Hurst exponent can be used as an index for geomagnetic storm analysis, prediction and detection. Keywords: Hurst exponent, wavelet transform, storms, index, prediction, detection.

Aliouane, L.; Ouadfeul, S.

2013-09-01

403

NASA Astrophysics Data System (ADS)

Reliability of microseismic interpretations is very much dependent on how robustly microseismic events are detected and picked. Various event detection algorithms are available but detection of weak events is a common challenge. Apart from the event magnitude, hypocentral distance, and background noise level, the instrument self-noise can also act as a major constraint for the detection of weak microseismic events in particular for borehole deployments in quiet environments such as below 1.5-2 km depths. Instrument self-noise levels that are comparable or above background noise levels may not only complicate detection of weak events at larger distances but also challenge methods such as seismic interferometry which aim at analysis of coherent features in ambient noise wavefields to reveal subsurface structure. In this paper, we use power spectral densities to estimate the instrument self-noise for a borehole data set acquired during a hydraulic fracturing stimulation using modified 4.5-Hz geophones. We analyse temporal changes in recorded noise levels and their time-frequency variations for borehole and surface sensors and conclude that instrument noise is a limiting factor in the borehole setting, impeding successful event detection. Next we suggest that the variations of the spectral powers in a time-frequency representation can be used as a new criterion for event detection. Compared to the common short-time average/long-time average method, our suggested approach requires a similar number of parameters but with more flexibility in their choice. It detects small events with anomalous spectral powers with respect to an estimated background noise spectrum with the added advantage that no bandpass filtering is required prior to event detection.
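A sketch of detection from anomalous spectral powers relative to an estimated background spectrum, the alternative to STA/LTA suggested above. The window length, threshold, and synthetic event are illustrative assumptions, not the authors' parameter choices:

```python
import numpy as np

def spectral_power_detector(x, win=256, thresh=5.0):
    """Flag windows whose summed spectral power, measured relative to a
    median background spectrum over all windows, is anomalously high."""
    nwin = len(x) // win
    frames = x[:nwin * win].reshape(nwin, win)
    psd = np.abs(np.fft.rfft(frames * np.hanning(win), axis=1)) ** 2
    background = np.median(psd, axis=0)        # robust per-bin noise spectrum
    excess = (psd / background).sum(axis=1)    # total power relative to background
    return excess > thresh * np.median(excess)

rng = np.random.default_rng(2)
fs = 1000.0
x = rng.normal(0, 1.0, 20 * 256)               # stationary background noise
# Weak 150 Hz "event" centered in window 10 of the record.
i0 = 10 * 256 + 64
x[i0:i0 + 128] += 10 * np.sin(2 * np.pi * 150 * np.arange(128) / fs)
hits = spectral_power_detector(x)
print(np.flatnonzero(hits))
```

Because the background is a per-frequency median, no bandpass filtering is needed before detection, which matches the advantage claimed in the abstract.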

Vaezi, Y.; van der Baan, M.

2014-05-01

404

A wavelet-based method for local phase extraction from a multi-frequency oscillatory signal.

One of the challenges in analyzing neuronal activity is to correlate a discrete signal, such as action potentials, with a signal having a continuous waveform, such as oscillating local field potentials (LFPs). Studies in several systems have shown that some aspects of information coding involve characteristics that intertwine both signals. An action potential is a fast transitory phenomenon that occurs at high frequencies whereas a LFP is a low frequency phenomenon. The study of correlations between these signals requires a good estimation of both instantaneous phase and instantaneous frequency. To extract the instantaneous phase, common techniques rely on the Hilbert transform performed on a filtered signal, which discards temporal information. Time-frequency methods are therefore better suited for non-stationary signals, since they preserve both time and frequency information. We propose a new algorithmic procedure that uses wavelet transform and ridge extraction for signals that contain one or more oscillatory frequencies and whose oscillatory frequencies may shift as a function of time. This procedure provides estimates of phase, frequency and temporal features. It can be automated, produces manageable amounts of data and allows human supervision. Because of such advantages, this method is particularly suitable for analyzing synchronization between LFPs and unitary events. PMID:17049617
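The procedure's core (continuous wavelet transform, ridge extraction as the per-sample modulus maximum, phase read off along the ridge) can be sketched with an analytic Morlet wavelet implemented via the FFT. The wavelet parameter omega0 = 6 and the test signal are assumptions for illustration:

```python
import numpy as np

def cwt_ridge_phase(x, fs, freqs):
    """FFT-based CWT with an analytic Morlet wavelet (omega0 = 6). The ridge
    is the frequency of maximum modulus at each sample; the instantaneous
    phase is the complex angle taken along that ridge."""
    n = len(x)
    X = np.fft.fft(x)
    w = 2 * np.pi * np.fft.fftfreq(n, 1 / fs)
    W = np.empty((len(freqs), n), complex)
    for i, f in enumerate(freqs):
        s = 6.0 / (2 * np.pi * f)                       # scale for center frequency f
        # Analytic Morlet in the frequency domain (negligible for w < 0).
        psi = np.sqrt(2 * np.pi * s * fs) * np.exp(-0.5 * (s * w - 6.0) ** 2)
        W[i] = np.fft.ifft(X * psi)
    ridge = np.argmax(np.abs(W), axis=0)                # dominant frequency per sample
    phase = np.angle(W[ridge, np.arange(n)])            # phase along the ridge
    return freqs[ridge], phase

fs = 200.0
t = np.arange(2000) / fs
x = np.sin(2 * np.pi * 8.0 * t)                         # a steady 8 Hz oscillation
f_inst, phase = cwt_ridge_phase(x, fs, np.linspace(2, 20, 37))
print(round(float(np.median(f_inst[200:-200])), 1))     # interior samples only
```

Trimming the edges avoids the circular-convolution artifacts of the FFT implementation; for drifting oscillations the ridge follows the frequency shift, which is the property the method exploits.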

Roux, Stéphane G; Cenier, Tristan; Garcia, Samuel; Litaudon, Philippe; Buonviso, Nathalie

2007-02-15

405

We present a new method for removing artifacts in electroencephalography (EEG) records during Galvanic Vestibular Stimulation (GVS). The main challenge in exploiting GVS is to understand how the stimulus acts as an input to brain. We used EEG to monitor the brain and elicit the GVS reflexes. However, GVS current distribution throughout the scalp generates an artifact on EEG signals. We need to eliminate this artifact to be able to analyze the EEG signals during GVS. We propose a novel method to estimate the contribution of the GVS current in the EEG signals at each electrode by combining time-series regression methods with wavelet decomposition methods. We use wavelet transform to project the recorded EEG signal into various frequency bands and then estimate the GVS current distribution in each frequency band. The proposed method was optimized using simulated signals, and its performance was compared to well-accepted artifact removal methods such as ICA-based methods and adaptive filters. The results show that the proposed method has better performance in removing GVS artifacts, compared to the others. Using the proposed method, a higher signal-to-artifact ratio of about 1.625 dB was achieved, which outperformed other methods such as ICA-based methods, regression methods, and adaptive filters.
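A minimal sketch of the band-wise idea: project both the EEG channel and the GVS reference into wavelet bands, regress the EEG coefficients on the GVS coefficients in each band, and subtract the fitted contribution before reconstructing. The Haar basis and the synthetic signals are illustrative stand-ins for the paper's actual wavelet and data:

```python
import numpy as np

def haar_bands(x, levels):
    """Decompose x into detail coefficient arrays d1..dL plus a final approximation."""
    a, bands = np.asarray(x, float), []
    for _ in range(levels):
        d = (a[0::2] - a[1::2]) / np.sqrt(2)
        a = (a[0::2] + a[1::2]) / np.sqrt(2)
        bands.append(d)
    return bands, a

def haar_rebuild(bands, a):
    """Invert haar_bands (perfect reconstruction for power-of-two lengths)."""
    for d in reversed(bands):
        up = np.empty(2 * len(a))
        up[0::2] = (a + d) / np.sqrt(2)
        up[1::2] = (a - d) / np.sqrt(2)
        a = up
    return a

def remove_gvs_artifact(eeg, gvs, levels=5):
    """Per wavelet band, least-squares regress the EEG coefficients on the GVS
    reference coefficients and subtract the fitted artifact contribution."""
    eb, ea = haar_bands(eeg, levels)
    gb, ga = haar_bands(gvs, levels)
    clean_b = [e - (e @ g) / (g @ g) * g for e, g in zip(eb, gb)]
    clean_a = ea - (ea @ ga) / (ga @ ga) * ga
    return haar_rebuild(clean_b, clean_a)

rng = np.random.default_rng(3)
n = 1024
brain = rng.normal(0, 1.0, n)                        # stand-in for true EEG activity
gvs = np.sin(2 * np.pi * np.arange(n) / 256)         # stimulation current reference
eeg = brain + 0.8 * gvs                              # contaminated recording
clean = remove_gvs_artifact(eeg, gvs)
print(round(float(np.mean((clean - brain) ** 2)), 3))
```

Because the artifact is proportional to the reference within each band, the per-band regression recovers the mixing coefficient and the residual error is small relative to the 0.32 mean-square artifact power left by doing nothing.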

Adib, Mani; Cretu, Edmond

2013-01-01

406

Measuring and Modeling Fault Density for Plume-Fault Encounter Probability Estimation

Emission of carbon dioxide from fossil-fueled power generation stations contributes to global climate change. Storage of this carbon dioxide within the pores of geologic strata (geologic carbon storage) is one approach to mitigating the climate change that would otherwise occur. The large storage volume needed for this mitigation requires injection into brine-filled pore space in reservoir strata overlain by cap rocks. One of the main concerns of storage in such rocks is leakage via faults. In the early stages of site selection, site-specific fault coverages are often not available. This necessitates a method for using available fault data to develop an estimate of the likelihood of injected carbon dioxide encountering and migrating up a fault, primarily due to buoyancy. Fault population statistics provide one of the main inputs to calculate the encounter probability. Previous fault population statistics work is shown to be applicable to areal fault density statistics. This result is applied to a case study in the southern portion of the San Joaquin Basin, with the result that a carbon dioxide plume from a previously planned injection had a 3% chance of encountering a fault that fully offsets the seal.

Jordan, P.D.; Oldenburg, C.M.; Nicot, J.-P.

2011-05-15

407

Population Densities of Rhizobium japonicum Strain 123 Estimated Directly in Soil and Rhizospheres †

Rhizobium japonicum serotype 123 was enumerated in soil and rhizospheres by fluorescent antibody techniques. Counting efficiency was estimated to be about 30%. Indigenous populations of strain 123 ranged from a few hundred to a few thousand per gram of field soil before planting. Rhizosphere effects from field-grown soybean plants were modest, reaching a maximum of about 2 × 10^4 cells of strain 123 per g of inner rhizosphere soil in young (16-day-old) plants. Comparably slight rhizosphere stimulation was observed with field corn. High populations of strain 123 (2 × 10^6 to 3 × 10^6 cells per g) were found only in the disintegrating taproot rhizospheres of mature soybeans at harvest, and these populations declined rapidly after harvest. Pot experiments with the same soil provided data similar to those derived from the field experiments. Populations of strain 123 reached a maximum of about 10^5 cells per g of soybean rhizosphere soil, but most values were lower and were only slightly higher than values in wheat rhizosphere soil. Nitrogen treatments had little effect on strain 123 densities in legume and nonlegume rhizospheres or on the nodulation success of strain 123. No evidence was obtained for the widely accepted theory of specific stimulation, which has been proposed to account for the initiation of the Rhizobium-legume symbiosis.

Reyes, V. G.; Schmidt, E. L.

1979-01-01

408

Using volumetric density estimation in computer aided mass detection in mammography

NASA Astrophysics Data System (ADS)

With the introduction of Full Field Digital Mammography (FFDM) accurate automatic volumetric breast density (VBD) estimation has become possible. As VBD enables the design of features that incorporate 3D properties, these methods offer opportunities for computer aided detection schemes. In this study we use VBD to develop features that represent how well a segmented region resembles the projection of a spherical object. The idea behind this is that due to compression of the breast, glandular tissue is likely to be compressed to a disc like shape, whereas cancerous tissue, being more difficult to compress, will retain its uncompressed shape. For each pixel in a segmented region we calculate the predicted dense tissue thickness assuming that the lesion has a spherical shape. The predicted thickness is then compared to the observed thickness by calculating the slope of a linear function relating the two. In addition we calculate the variance of the error of the fit. To evaluate the contribution of the developed VBD features to our CAD system we use an FFDM dataset consisting of 266 cases, of which 103 were biopsy proven malignant masses and 163 normals. It was found that compared to the false positives, a large fraction of the true positives has a slope close to 1.0 indicating that the true positives fit the modeled spheres best. When the VBD based features were added to our CAD system, aimed at the detection and classification of malignant masses, a small but significant increase in performance was achieved.
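The sphere-resemblance feature described above can be sketched directly: the projected thickness of a sphere of radius R at in-plane radius r is 2*sqrt(R^2 - r^2), and the features are the slope of a fit of observed against predicted thickness plus the residual variance. The synthetic lesion below is an idealized projected sphere, so the fit is exact:

```python
import numpy as np

def sphere_fit_features(thickness_map, R):
    """Fit observed = slope * predicted over pixels inside radius R, where
    predicted is the projected thickness of a sphere of radius R. Returns
    (slope, residual variance); slope near 1.0 suggests a ball-like mass."""
    n = thickness_map.shape[0]
    yy, xx = np.mgrid[:n, :n]
    r2 = (xx - n // 2) ** 2 + (yy - n // 2) ** 2
    pred = 2 * np.sqrt(np.clip(R ** 2 - r2, 0, None))   # projected sphere thickness
    mask = r2 < R ** 2
    p, o = pred[mask], thickness_map[mask]
    slope = (p @ o) / (p @ p)                           # least squares through origin
    resid_var = float(np.var(o - slope * p))
    return float(slope), resid_var

# Synthetic "lesion": exactly the projection of a sphere of radius 10 pixels.
n, R = 33, 10
yy, xx = np.mgrid[:n, :n]
obs = 2 * np.sqrt(np.clip(R**2 - (xx - n//2)**2 - (yy - n//2)**2, 0, None))
slope, rv = sphere_fit_features(obs, R)
print(round(slope, 2), round(rv, 4))   # → 1.0 0.0
```

A flattened (disc-like) region of glandular tissue would instead yield a slope well below 1.0 and a larger residual variance, which is the discriminative behavior the abstract relies on.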

Kallenberg, Michiel; Karssemeijer, Nico

2009-02-01

409

A method for estimating the cholesterol content of the serum low-density lipoprotein fraction (Sf 0-20) is presented. The method involves measurements of fasting plasma total cholesterol, triglyceride, and high-density lipoprotein cholesterol concentrations, none of which requires the use of the preparative ultracentrifuge. Comparison of this suggested procedure with the more direct procedure, in which the ultracentrifuge is used, yielded
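Although the abstract is truncated, the procedure it describes is the well-known Friedewald estimate, LDL-C = TC − HDL-C − TG/5 (all in mg/dL, with TG/5 approximating VLDL cholesterol), which can be stated directly:

```python
def friedewald_ldl(total_chol, hdl, triglycerides):
    """Friedewald estimate (mg/dL): LDL-C = TC - HDL-C - TG/5, where TG/5
    approximates VLDL cholesterol. Intended for fasting samples; the estimate
    is considered unreliable when triglycerides reach 400 mg/dL or more."""
    if triglycerides >= 400:
        raise ValueError("Friedewald estimate unreliable for TG >= 400 mg/dL")
    return total_chol - hdl - triglycerides / 5.0

print(friedewald_ldl(200, 50, 150))  # 200 - 50 - 30 = 120.0
```

The TG/5 divisor assumes mg/dL units; with mmol/L the conventional divisor is 2.2 instead.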

William T. Friedewald; Robert I. Levy; Dona