Note: This page contains sample records for the topic wavelet-based density estimation from Science.gov.
While these samples are representative of the content of Science.gov,
they are not comprehensive nor are they the most current set.
We encourage you to perform a real-time search of Science.gov
to obtain the most current and comprehensive results.
Last update: November 12, 2013.
1

Wavelet-based density estimation for noise reduction in plasma simulations using particles  

SciTech Connect

For given computational resources, one of the main limitations in the accuracy of plasma simulations using particles comes from the noise due to limited statistical sampling in the reconstruction of the particle distribution function. A method based on wavelet multiresolution analysis is proposed and tested to reduce this noise. The method, known as wavelet based density estimation (WBDE), was previously introduced in the statistical literature to estimate probability densities given a finite number of independent measurements. Its novel application to plasma simulations can be viewed as a natural extension of the finite size particles (FSP) approach, with the advantage of estimating more accurately distribution functions that have localized sharp features. The proposed method preserves the moments of the particle distribution function to a good level of accuracy, has no constraints on the dimensionality of the system, does not require an a priori selection of a global smoothing scale, and is able to adapt locally to the smoothness of the density based on the given discrete particle data. Most importantly, the computational cost of the denoising stage is of the same order as one timestep of a FSP simulation. The method is compared with a recently proposed proper orthogonal decomposition based method, and it is tested with particle data corresponding to strongly collisional, weakly collisional, and collisionless plasma simulations.

Nguyen van yen, Romain [Laboratoire de Meteorologie Dynamique-CNRS, Ecole Normale Superieure]; Del-Castillo-Negrete, Diego B [ORNL]; Schneider, Kai [Universite d'Aix-Marseille]; Farge, Marie [Laboratoire de Meteorologie Dynamique-CNRS, Ecole Normale Superieure]; Chen, Guangye [ORNL]

2010-01-01
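
For readers who want to experiment with the WBDE idea in the record above, here is a minimal one-dimensional sketch, assuming the PyWavelets package: bin the particle positions, soft-threshold the empirical wavelet coefficients, and reconstruct. The universal-threshold rule and all parameter values are illustrative choices, not the paper's exact algorithm.

```python
import numpy as np
import pywt

def wbde_1d(particles, n_bins=256, wavelet="db4", level=5):
    # Empirical density from a fine histogram of the particle positions.
    hist, edges = np.histogram(particles, bins=n_bins, density=True)
    # Multiresolution analysis of the empirical density.
    coeffs = pywt.wavedec(hist, wavelet, level=level)
    # Noise scale from the finest detail coefficients (MAD estimate),
    # then the universal threshold applied to all detail levels.
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2.0 * np.log(hist.size))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft")
                            for c in coeffs[1:]]
    dens = pywt.waverec(coeffs, wavelet)[: hist.size]
    # Clip negative ripples and renormalize to unit mass.
    dens = np.clip(dens, 0.0, None)
    dens /= dens.sum() * (edges[1] - edges[0])
    return 0.5 * (edges[1:] + edges[:-1]), dens

centers, density = wbde_1d(np.random.standard_normal(10_000))
```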

2

Optimal a priori clipping estimation for wavelet-based method of moments matrices  

Microsoft Academic Search

Wavelet bases are mainly used in the methods of moments (MoM) to render the system matrix sparse; clipping entries below a given threshold is an essential operation to obtain the desired sparse matrix. In this paper, we present a novel a priori way to estimate the clipping threshold that can be used if one wants an error on the solution

Francesco P. Andriulli; Giuseppe Vecchi; Francesca Vipiana; Paola Pirinoli; Anita Tabacco

2005-01-01

3

The wavelet-based multi-resolution motion estimation using temporal aliasing detection  

Microsoft Academic Search

In this paper, we propose a new algorithm for wavelet-based multi-resolution motion estimation (MRME) using temporal aliasing detection (TAD). In wavelet transformed image/video signals, temporal aliasing will be severe as the motion of the object increases, causing the performance of the conventional MRME algorithms to drop. To overcome this problem, we perform the temporal aliasing detection and MRME simultaneously instead of

Teahyung Lee; David V. Anderson

2007-01-01

4

Improved Direction-of-Arrival Estimation Using Wavelet Based Denoising Techniques  

Microsoft Academic Search

In this paper, we explore the use of wavelet based denoising techniques to improve the Direction-of-Arrival (DOA) estimation performance of array processors at low SNR. Traditional single sensor wavelet denoising techniques are not suitable for this application since they fail to preserve the intersensor signal correlation. We propose two correlation preserving techniques for denoising multi-sensor signals: (1) the Temporal Wavelet

R. Sathish; G. V. Anand

2006-01-01

5

Estimation of Differential Photometry in Adaptive Optics Observations with a Wavelet-based Maximum Likelihood Estimator  

NASA Astrophysics Data System (ADS)

We propose to use the Bayesian framework and the wavelet transform (WT) to estimate differential photometry in binary systems imaged with adaptive optics (AO). We challenge the notion that Richardson-Lucy-type algorithms are not suitable to AO observations because of the mismatch between the target's and reference star's point-spread functions (PSFs). Using real data obtained with the Lick Observatory AO system on the 3 m Shane telescope, we first obtain a deconvolved image by means of the Adaptive Wavelets Maximum Likelihood Estimator (AWMLE) approach. The algorithm reconstructs an image that maximizes the compound Poisson and Gaussian likelihood of the data. It also performs wavelet decomposition, which helps to distinguish signal from noise, and therefore it aids the stopping rule. We test the photometric precision of that approach versus PSF-fitting with the StarFinder package for companions located within the halo created by the bright star. Simultaneously, we test the susceptibility of both approaches to error in the reference PSF, as quantified by the difference in the Strehl ratio between the science and calibration PSFs. We show that AWMLE is capable of producing better results than PSF-fitting. More importantly, we have developed a methodology for testing photometric codes for AO observations.

Baena Gallé, Roberto; Gladysz, Szymon

2011-07-01

6

Wavelet-based texture retrieval using generalized Gaussian density and Kullback-Leibler distance  

Microsoft Academic Search

We present a statistical view of the texture retrieval problem by combining the two related tasks, namely feature extraction (FE) and similarity measurement (SM), into a joint modeling and classification scheme. We show that using a consistent estimator of texture model parameters for the FE step followed by computing the Kullback-Leibler distance (KLD) between estimated models for the SM step

Minh N. Do; Martin Vetterli

2002-01-01
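
The similarity-measurement step described in the record above has a closed form. Here is a short sketch, assuming the generalized Gaussian parametrization GGD(alpha, beta) with scale alpha and shape beta; the formula is transcribed from the commonly cited Do-Vetterli result and the transcription should be treated as an assumption.

```python
# Closed-form Kullback-Leibler distance between two generalized Gaussian
# densities GGD(alpha, beta), one per wavelet subband, as used for
# wavelet-based texture retrieval. Illustrative transcription only.
import math

def kld_ggd(a1, b1, a2, b2):
    g = math.gamma
    return (math.log((b1 * a2 * g(1.0 / b2)) / (b2 * a1 * g(1.0 / b1)))
            + (a1 / a2) ** b2 * g((b2 + 1.0) / b1) / g(1.0 / b1)
            - 1.0 / b1)

# Distance between the GGD models of one subband in two candidate textures.
print(kld_ggd(0.5, 1.2, 0.8, 1.0))
```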

7

Estimating a Monotone Density.  

National Technical Information Service (NTIS)

Local and global results on estimating a monotone density are discussed. The fact that a centered version of the L1-distance between a smooth strictly decreasing density and its maximum likelihood estimator is asymptotically normal, is proved. This distan...

P. Groenenboom

1984-01-01

8

Wavelet Based Estimation of a Semi Parametric Generalized Linear Model of fMRI Time-Series  

Microsoft Academic Search

This work provides a new approach to estimate the param- eters of a semi-parametric generalized linear model in the wavelet domain. The method is illustrated with the prob- lem of detecting significant changes in fMRI signals that are correlated to a stimulus time course. The fMRI signal is described as the sum of two effects : a smooth trend and

François G. Meyer

2003-01-01

9

Wavelet-based estimation of a semiparametric generalized linear model of fMRI time-series  

Microsoft Academic Search

Addresses the problem of detecting significant changes in fMRI time series that are correlated to a stimulus time course. This paper provides a new approach to estimate the parameters of a semiparametric generalized linear model of the fMRI time series. The fMRI signal is described as the sum of two effects: a smooth trend and the response to the stimulus.

François G. Meyer

2003-01-01

10

Fuzzy density estimation  

Microsoft Academic Search

A new approach to density estimation with fuzzy random variables (FRV) is developed. In this approach, three methods (histogram, empirical c.d.f., and kernel methods) are extended for density estimation based on α-cuts of FRVs.

Mohsen Arefi; Reinhard Viertl; S. Mahmoud Taheri

2012-01-01

11

Minimum complexity density estimation  

Microsoft Academic Search

The authors introduce an index of resolvability that is proved to bound the rate of convergence of minimum complexity density estimators as well as the information-theoretic redundancy of the corresponding total description length. The results on the index of resolvability demonstrate the statistical effectiveness of the minimum description-length principle as a method of inference. The minimum complexity estimator converges to

Andrew R. Barron; Thomas M. Cover

1991-01-01

12

Contingent Kernel Density Estimation  

PubMed Central

Kernel density estimation is a widely used method for estimating a distribution based on a sample of points drawn from that distribution. Generally, in practice some form of error contaminates the sample of observed points. Such error can be the result of imprecise measurements or observation bias. Often this error is negligible and may be disregarded in analysis. In cases where the error is non-negligible, estimation methods should be adjusted to reduce resulting bias. Several modifications of kernel density estimation have been developed to address specific forms of errors. One form of error that has not yet been addressed is the case where observations are nominally placed at the centers of areas from which the points are assumed to have been drawn, where these areas are of varying sizes. In this scenario, the bias arises because the size of the error can vary among points and some subset of points can be known to have smaller error than another subset, or the form of the error may change among points. This paper proposes a “contingent kernel density estimation” technique to address this form of error. This new technique adjusts the standard kernel on a point-by-point basis in an adaptive response to changing structure and magnitude of error. In this paper, equations for our contingent kernel technique are derived, the technique is validated using numerical simulations, and an example using the geographic locations of social networking users is worked through to demonstrate the utility of the method.

Fortmann-Roe, Scott; Starfield, Richard; Getz, Wayne M.

2012-01-01
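
As a rough illustration of the point-by-point kernel adjustment described in the record above, the sketch below gives each observation its own Gaussian bandwidth. This generic variable-bandwidth KDE is a stand-in, not the paper's exact contingent kernel.

```python
import numpy as np

def per_point_kde(x_eval, points, bandwidths):
    # points[i] is a nominal location, bandwidths[i] its error scale.
    x_eval = np.asarray(x_eval)[:, None]
    z = (x_eval - points[None, :]) / bandwidths[None, :]
    kernels = np.exp(-0.5 * z**2) / (np.sqrt(2 * np.pi) * bandwidths[None, :])
    return kernels.mean(axis=1)

pts = np.array([0.0, 1.0, 5.0])
hs = np.array([0.3, 0.3, 2.0])   # larger assumed error -> wider kernel
print(per_point_kde([0.0, 2.5, 5.0], pts, hs))
```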

13

A Wavelet-Based Image Denoising Technique Using Spatial Priors  

Microsoft Academic Search

We propose a new wavelet-based method for image denoising that applies the Bayesian framework, using prior knowledge about the spatial clustering of the wavelet coefficients. Local spatial interactions of the wavelet coefficients are modeled by adopting a Markov Random Field model. An iterative updating technique known as iterated conditional modes (ICM) is applied to estimate the binary masks containing the

Aleksandra Pizurica; Wilfried Philips; Ignace Lemahieu; Marc Acheroy

2000-01-01

14

A bivariate shrinkage function for wavelet-based denoising  

Microsoft Academic Search

Most simple nonlinear thresholding rules for wavelet-based denoising assume the wavelet coefficients are independent. However, wavelet coefficients of natural images have significant dependency. In this paper, a new heavy-tailed bivariate pdf is proposed to model the statistics of wavelet coefficients, and a simple nonlinear threshold function (shrinkage function) is derived from the pdf using Bayesian estimation theory. The new shrinkage

Levent Sendur; Ivan W. Selesnick

2002-01-01
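
A sketch of a bivariate shrinkage rule of the kind derived in the record above, assuming the commonly quoted form in which a coefficient y1 is shrunk using its parent y2, the noise deviation sigma_n, and a local signal deviation sigma; treat the exact formula as an assumption.

```python
import numpy as np

def bivariate_shrink(y1, y2, sigma_n, sigma):
    # Joint magnitude of the coefficient and its parent.
    r = np.sqrt(y1**2 + y2**2)
    factor = np.maximum(r - np.sqrt(3.0) * sigma_n**2 / sigma, 0.0)
    # Guard against division by zero where both coefficients vanish.
    return np.where(r > 0, y1 * factor / np.maximum(r, 1e-12), 0.0)

print(bivariate_shrink(np.array([2.0, 0.1]), np.array([1.5, 0.05]), 0.5, 1.0))
```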

15

FAST GEM WAVELET-BASED IMAGE DECONVOLUTION ALGORITHM  

Microsoft Academic Search

The paper proposes a new wavelet-based Bayesian approach to image deconvolution, under the space-invariant blur and additive white Gaussian noise assumptions. Image deconvolution exploits the well known sparsity of the wavelet coefficients, described by heavy-tailed priors. The present approach admits any prior given by a linear (finite or infinite) combination of Gaussian densities. To compute the maximum a

M. B. Dias

2003-01-01

16

Evaluation of various wavelet bases for use in wavelet-based multiresolution expectation maximization image reconstruction algorithm for PET  

NASA Astrophysics Data System (ADS)

The Maximum Likelihood (ML) estimation based Expectation Maximization (EM) reconstruction algorithm has been shown to provide good quality reconstruction for PET. Our previous work introduced the multigrid EM (MGEM), multiresolution EM (MREM), and Wavelet based Multiresolution EM (WMREM) algorithms for PET image reconstruction. This paper investigates the use of various wavelets in the WMREM algorithm. The wavelets are used to construct a multiresolution data space, which is then used in the estimation process. The ability of the wavelet transform to provide a localized frequency-space representation of the data allows us to perform the estimation using these decomposed components. The advantage of this method lies in the fact that the noise in the acquired data becomes localized in the high-high (diagonal) frequency bands, and not using these bands for estimation at coarser resolutions helps speed up the recovery of various frequency components with reduced noise. Different wavelet bases result in different reconstructions. Custom wavelets are designed for the reconstruction process and these wavelets provide better results than the commonly known wavelets. The WMREM reconstruction algorithm is implemented to reconstruct simulated phantom data and real data.

Raheja, Amar; Dhawan, Atam P.

2000-06-01

17

Wavelet Based Approach to Transmitter Identification.  

National Technical Information Service (NTIS)

Research is conducted to find a robust wavelet based algorithm for automatic identification of push to talk radio transmitters. Digital data at an intermediate frequency (IF) is preprocessed to translate it into a form applicable to wavelet analysis. The ...

R. D. Hippenstiel

1995-01-01

18

Wavelet Based Coding of Images and Video.  

National Technical Information Service (NTIS)

The main goal of this project was to study and develop wavelet-based image and video compression algorithms, with focuses on algorithmic performance, image quality, and bandwidth optimization. This was accomplished by applying advanced statistical modelin...

M. T. Orchard

2001-01-01

19

Density estimation by wavelet thresholding  

Microsoft Academic Search

Density estimation is a commonly used test case for nonparametric estimation methods. We explore the asymptotic properties of estimators based on thresholding of empirical wavelet coefficients. Minimax rates of convergence are studied over a large range of Besov function classes $B_{\sigma pq}$ and for a range of global $L_{p'}$ error measures, $1 \leq p' < \infty$. A single wavelet threshold

David L. Donoho; Iain M. Johnstone; Gérard Kerkyacharian; Dominique Picard

1996-01-01

20

Density Estimation in Linear Time  

Microsoft Academic Search

We consider the problem of choosing a density estimate from a set of densities F, minimizing the L1-distance to an unknown distribution. Devroye and Lugosi (DL01) analyze two algorithms for the problem: Scheffe tournament winner and minimum distance estimate. The Scheffe tournament estimate requires fewer computations than the minimum distance estimate, but has strictly

Satyaki Mahalanabis; Daniel Stefankovic

2008-01-01

21

WSPM: wavelet-based statistical parametric mapping.  

PubMed

Recently, we have introduced an integrated framework that combines wavelet-based processing with statistical testing in the spatial domain. In this paper, we propose two important enhancements of the framework. First, we revisit the underlying paradigm; i.e., that the effect of the wavelet processing can be considered as an adaptive denoising step to "improve" the parameter map, followed by a statistical detection procedure that takes into account the non-linear processing of the data. With an appropriate modification of the framework, we show that it is possible to reduce the spatial bias of the method with respect to the best linear estimate, providing conservative results that are closer to the original data. Second, we propose an extension of our earlier technique that compensates for the lack of shift-invariance of the wavelet transform. We demonstrate experimentally that both enhancements have a positive effect on performance. In particular, we present a reproducibility study for multi-session data that compares WSPM against SPM with different amounts of smoothing. The full approach is available as a toolbox, named WSPM, for the SPM2 software; it takes advantage of multiple options and features of SPM such as the general linear model. PMID:17689101

Van De Ville, Dimitri; Seghier, Mohamed L; Lazeyras, François; Blu, Thierry; Unser, Michael

2007-06-19

22

Wavelet-Based Deconvolution for Ill-Conditioned Systems.  

National Technical Information Service (NTIS)

This thesis proposes a new approach to wavelet-based image deconvolution that comprises Fourier-domain system inversion followed by wavelet-domain noise suppression. In contrast to other wavelet-based deconvolution approaches, the algorithm employs a regu...

R. Neelamani

1999-01-01

23

GOLDEN RATIO-HAAR WAVELET BASED STEGANOGRAPHY  

Microsoft Academic Search

In this paper, we have presented the golden ratio-Haar wavelet based multimedia steganography. The key features of the proposed method are: 1. New Haar wavelet structure based on the Fibonacci sequence, and Golden Ratio. 2. Parametric transform dependency, as decryption key, on the security of the sensitive data. One of the important differences between the existing transform based

Sos S. Agaian; Okan Caglayan; Juan Pablo Perez; Hakob Sarukhanyan; Jaakko Astola

24

Wavelet based rate scalable video compression  

Microsoft Academic Search

In this paper, we present a new wavelet based rate scalable video compression algorithm. We will refer to this new technique as the scalable adaptive motion compensated wavelet (SAMCoW) algorithm. SAMCoW uses motion compensation to reduce temporal redundancy. The prediction error frames and the intracoded frames are encoded using an approach similar to the embedded zerotree wavelet (EZW) coder. An

Ke Shen; Edward J. Delp

1999-01-01

25

Asymptotic distribution of a histogram density estimator  

SciTech Connect

Two theorems on the asymptotic distribution of a histogram density estimator based on randomly determined spacings introduced by Van Ryzin in 1973 are stated and proved. One theorem gives conditions for the pointwise asymptotic normality of the density estimator for points in the support of the density at which the density is continuously differentiable. A second theorem gives conditions for the pointwise asymptotic normality of the density estimator with a faster convergence rate for points in the support of the density at which the density is twice continuously differentiable. The results are used to compare the relative asymptotic efficiencies of the histogram estimator with the kernel method of density estimation.

Kim, B K; Van Ryzin, J

1980-06-01

26

Bayesian Wavelet-Based Image Denoising Using the Gauss-Hermite Expansion  

Microsoft Academic Search

The probability density functions (PDFs) of the wavelet coefficients play a key role in many wavelet-based image processing algorithms, such as denoising. The conventional PDFs usually have a limited number of parameters that are calculated from the first few moments only. Consequently, such PDFs cannot be made to fit very well with the empirical PDF of the wavelet coefficients of

S. M. Mahbubur Rahman; M. Omair Ahmad; M. N. S. Swamy

2008-01-01

27

Wavelet-based LASSO in functional linear regression  

PubMed Central

In linear regression with functional predictors and scalar responses, it may be advantageous, particularly if the function is thought to contain features at many scales, to restrict the coefficient function to the span of a wavelet basis, thereby converting the problem into one of variable selection. If the coefficient function is sparsely represented in the wavelet domain, we may employ the well-known LASSO to select a relatively small number of nonzero wavelet coefficients. This is a natural approach to take but to date, the properties of such an estimator have not been studied. In this paper we describe the wavelet-based LASSO approach to regressing scalars on functions and investigate both its asymptotic convergence and its finite-sample performance through both simulation and real-data application. We compare the performance of this approach with existing methods and find that the wavelet-based LASSO performs relatively well, particularly when the true coefficient function is spiky. Source code to implement the method and data sets used in the study are provided as supplemental materials available online.

Zhao, Yihong; Ogden, R. Todd; Reiss, Philip T.

2011-01-01

28

Adaptively wavelet-based image denoising algorithm with edge preserving  

NASA Astrophysics Data System (ADS)

A new wavelet-based image denoising algorithm, which exploits the edge information hidden in the corrupted image, is presented. Firstly, a Canny-like edge detector identifies the edges in each subband. Secondly, multiplying the wavelet coefficients in neighboring scales is implemented to suppress the noise while magnifying the edge information, and the result is utilized to exclude fake edges. Isolated edge pixels are also identified as noise. Unlike thresholding methods, we then use a local window filter in the wavelet domain to remove noise, in which the variance estimation is elaborated to utilize the edge information. This method is adaptive to local image details, and can achieve better performance than state-of-the-art methods.

Tan, Yihua; Tian, Jinwen; Liu, Jian

2006-02-01

29

Discrimination of walking patterns using wavelet-based fractal analysis.  

PubMed

In this paper, we attempted to classify the acceleration signals for walking along a corridor and on stairs by using the wavelet-based fractal analysis method. In addition, the wavelet-based fractal analysis method was used to evaluate the gait of elderly subjects and patients with Parkinson's disease. The triaxial acceleration signals were measured close to the center of gravity of the body while the subject walked along a corridor and up and down stairs continuously. Signal measurements were recorded from 10 healthy young subjects and 11 elderly subjects. For comparison, two patients with Parkinson's disease participated in the level walking. The acceleration signal in each direction was decomposed to seven detailed signals at different wavelet scales by using the discrete wavelet transform. The variances of detailed signals at scales 7 to 1 were calculated. The fractal dimension of the acceleration signal was then estimated from the slope of the variance progression. The fractal dimensions were significantly different among the three types of walking for individual subjects (p < 0.01) and showed a high reproducibility. Our results suggest that the fractal dimensions are effective for classifying the walking types. Moreover, the fractal dimensions were significantly higher for the elderly subjects than for the young subjects (p < 0.01). For the patients with Parkinson's disease, the fractal dimensions tended to be higher than those of healthy subjects. These results suggest that the acceleration signals change into a more complex pattern with aging and with Parkinson's disease, and the fractal dimension can be used to evaluate the gait of elderly subjects and patients with Parkinson's disease. PMID:12503784

Sekine, Masaki; Tamura, Toshiyo; Akay, Metin; Fujimoto, Toshiro; Togawa, Tatsuo; Fukui, Yasuhiro

2002-09-01
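
The variance-progression step described in the record above can be sketched as follows, assuming PyWavelets. The slope-to-dimension mapping below uses the fractional-Brownian-motion convention Var(d_j) ~ 2^(j(2H+1)), D = 2 - H, which is an assumption for illustration, not necessarily the paper's exact rule.

```python
import numpy as np
import pywt

def wavelet_fractal_dimension(signal, wavelet="db4", levels=7):
    # Detail coefficients d_levels .. d_1, coarsest to finest.
    details = pywt.wavedec(signal, wavelet, level=levels)[1:]
    scales = np.arange(levels, 0, -1)
    log_var = np.log2([np.var(d) for d in details])
    slope = np.polyfit(scales, log_var, 1)[0]   # variance progression
    hurst = (slope - 1.0) / 2.0                 # fBm convention (assumed)
    return 2.0 - hurst

# A random walk (H ~ 0.5) should give a dimension near 1.5.
print(wavelet_fractal_dimension(np.cumsum(np.random.standard_normal(4096))))
```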

30

On minimum distance estimators for unimodal densities  

Microsoft Academic Search

It is shown that minimum distance estimators for families of unimodal densities are always consistent; the rate of convergence is indicated. An algorithm is proposed for computing the minimum distance estimator for the family of all unimodal densities. References are given to the maximum likelihood method and the kernel method.

R.-D. Reiss

1976-01-01

31

Sleep-stage Characterization by Nonlinear EEG Analysis using Wavelet-based Multifractal Formalism  

Microsoft Academic Search

The wavelet-based multifractal formalism was applied to sleep EEG analysis and sleep-stage characterization. The subjects used in this study were randomly selected from the MIT-BIH Polysomnographic Database. The multifractal singularity spectra of sleep EEG signals were estimated, and h0, the Hölder exponent that denotes the main singular property of the signal, was extracted from the multifractal singularity spectrum and used

Qianli Ma; Xinbao Ning; Jun Wang; Jing Li

2005-01-01

32

Vector quantization and density estimation  

Microsoft Academic Search

The connection between compression and the estimation of probability distributions has long been known for the case of discrete alphabet sources and lossless coding. A universal lossless code which does a good job of compressing must implicitly also do a good job of modeling. In particular, with a collection of codebooks, one for each possible class or model, if codewords

Robert M. Gray; Richard A. Olshen

1997-01-01

33

Denoising of Ocean Acoustic Signals using Wavelet-Based Techniques.  

National Technical Information Service (NTIS)

This thesis investigates the use of wavelets, wavelet packets, and cosine packet signal decompositions for the removal of noise from underwater acoustic signals. Several wavelet based denoising techniques are presented and their performances compared. Res...

R. J. Barsanti

1996-01-01

34

Wavelet-Based Adaptive Denoising of Phonocardiographic Records.  

National Technical Information Service (NTIS)

The various noise components make the diagnostic evaluation of phonocardiographic records difficult or in some cases even impossible. This paper presents a novel wavelet-based denoising method using two-channel signal recording and an adaptive cross-chann...

P. Varady

2001-01-01

35

Mean Square Error Properties of Density Estimates  

Microsoft Academic Search

The rate at which the mean square error decreases as sample size increases is evaluated for general $L^1$ kernel estimates and for the Fourier integral estimate for a probability density function. The estimates are then compared on the basis of these rates.

Kathryn Bullock Davis

1975-01-01

36

Comparative Analysis of Wavelet-Based Scale-Invariant Feature Extraction Using Different Wavelet Bases  

NASA Astrophysics Data System (ADS)

In this paper, we present comparative analysis of scale-invariant feature extraction using different wavelet bases. The main advantage of the wavelet transform is the multi-resolution analysis. Furthermore, wavelets enable localization in both space and frequency domains and high-frequency salient feature detection. Wavelet transforms can use various basis functions. This research aims at comparative analysis of Daubechies, Haar and Gabor wavelets for scale-invariant feature extraction. Experimental results show that Gabor wavelets outperform Daubechies and Haar wavelets in the sense of both objective and subjective measures.

Lim, Joohyun; Kim, Youngouk; Paik, Joonki

37

A wavelet-based approximation of surface coil sensitivity profiles for correction of image intensity inhomogeneity and parallel imaging reconstruction  

Microsoft Academic Search

We evaluate a wavelet-based algorithm to estimate the coil sensitivity modulation from surface coils. This information is used to improve the image homogeneity of magnetic resonance imaging when a surface coil is used for reception, and to increase image encoding speed by reconstructing images from under-sampled (aliased) acquisitions using parallel magnetic resonance imaging (MRI) methods for higher spatiotemporal image resolutions.

Fa-Hsuan Lin; Ying-Jui Chen; John W. Belliveau; Lawrence L. Wald

2003-01-01

38

Towards Kernel Density Estimation over Streaming Data  

Microsoft Academic Search

A variety of real-world applications heavily relies on the analysis of transient data streams. Due to the rigid processing requirements of data streams, common analysis techniques as known from data mining are not applicable. A fundamental building block of many data mining and analysis approaches is density estimation. It provides a well-defined estimation of a continuous data distribution,

Christoph Heinz; Bernhard Seeger

2006-01-01

39

A patient-specific coronary density estimate  

Microsoft Academic Search

A reliable density estimate for the position of the coronary arteries in Computed Tomography (CT) data is beneficial for many coronary image processing applications, such as vessel tracking, lumen segmentation, and calcium scoring. This paper presents a method for obtaining an estimate of the coronary artery location in CT and CT angiography (CTA). The proposed method constructs a patient-specific

Rahil Khurram Shahzad; Michiel Schaap; Theo van Walsum; Stefan Klein; Annick C. Weustink; Lucas J. van Vliet; Wiro J. Niessen

2010-01-01

40

ESTIMATES OF BIOMASS DENSITY FOR TROPICAL FORESTS  

EPA Science Inventory

An accurate estimation of the biomass density in forests is a necessary step in understanding the global carbon cycle and production of other atmospheric trace gases from biomass burning. In this paper the authors summarize the various approaches that have been developed for estimating...

41

A wavelet based investigation of long memory in stock returns  

NASA Astrophysics Data System (ADS)

Using a wavelet-based maximum likelihood fractional integration estimator, we test long memory (return predictability) in the returns at the market, industry and firm level. In an analysis of emerging market daily returns over the full sample period, we find that long memory is not present in the market returns, while in approximately twenty percent of 175 stocks there is evidence of long memory. The absence of long memory in the market returns may be a consequence of contemporaneous aggregation of stock returns. However, when the analysis is carried out with rolling windows, evidence of long memory is observed in certain time frames. These results are largely consistent with those of detrended fluctuation analysis. A test of firm-level information in explaining stock return predictability using a logistic regression model reveals that returns of large firms are more likely to possess the long memory feature than the returns of small firms. There is no evidence to suggest that turnover, earnings per share, book-to-market ratio, systematic risk or abnormal return with respect to the market model is associated with return predictability. However, the degree of long-range dependence appears to be associated positively with earnings per share, systematic risk and abnormal return, and negatively with book-to-market ratio.

Tan, Pei P.; Galagedera, Don U. A.; Maharaj, Elizabeth A.

2012-04-01

42

Quantum statistical inference for density estimation  

SciTech Connect

A new penalized likelihood method for non-parametric density estimation is proposed, which is based on a mathematical analogy to quantum statistical physics. The mathematical procedure for density estimation is related to maximum entropy methods for inverse problems; the penalty function is a convex information divergence enforcing global smoothing toward default models, positivity, extensivity and normalization. The novel feature is the replacement of classical entropy by quantum entropy, so that local smoothing may be enforced by constraints on the expectation values of differential operators. Although the hyperparameters, covariance, and linear response to perturbations can be estimated by a variety of statistical methods, we develop the Bayesian interpretation. The linear response of the MAP estimate is proportional to the covariance. The hyperparameters are estimated by type-II maximum likelihood. The method is demonstrated on standard data sets.

Silver, R.N.; Martz, H.F.; Wallstrom, T.

1993-11-01

43

Estimating and Interpreting Probability Density Functions  

NSDL National Science Digital Library

This 294-page document from the Bank for International Settlements stems from the Estimating and Interpreting Probability Density Functions workshop held on June 14, 1999. The conference proceedings, which may be downloaded as a complete document or by chapter, are divided into two sections: "Estimation Techniques" and "Applications and Economic Interpretation." Both contain papers presented at the conference. Also included are a list of the program participants with their affiliations and email addresses, a foreword, and background notes.

44

Information-Theoretically Optimal Histogram Density Estimation  

Microsoft Academic Search

We regard histogram density estimation as a model selection problem. Our approach is based on the information-theoretic minimum description length (MDL) principle. MDL-based model selection is formalized via the normalized maximum likelihood (NML) dis-

Petri Kontkanen; Petri Myllymaki

2006-01-01

45

Sampling, Density Estimation and Spatial Relationships  

NSDL National Science Digital Library

This resource serves as a tool for instructing a laboratory exercise in ecology. Students obtain hands-on experience using techniques such as mark-recapture and density estimation, and organisms such as zooplankton and fathead minnows. This exercise is suitable for general ecology and introductory biology courses.

Maggie Haag (University of Alberta); William M. Tonn

1998-01-01

46

The Maximal Smoothing Principle in Density Estimation  

Microsoft Academic Search

We propose a widely applicable method for choosing the smoothing parameters for nonparametric density estimators. It has come to be realized in recent years (e.g., see Hall and Marron 1987; Scott and Terrell 1987) that cross-validation methods for finding reasonable smoothing parameters from raw data are of very limited practical value. Their sampling variability is simply too large. The alternative

George R. Terrell

1990-01-01

47

Estimation and display of beam density profiles  

Microsoft Academic Search

A setup in which wire-scanner-type beam-profile monitor data are collected on-line in a nuclear data-acquisition system has been used, and a simple algorithm for estimation and display of the current density distribution in a particle beam is described.

S. Dasgupta; T. Mukhopadhyay; A. Roy; C. Mallik

1989-01-01

48

DENSITY ESTIMATION FOR PROJECTED EXOPLANET QUANTITIES  

SciTech Connect

Exoplanet searches using radial velocity (RV) and microlensing (ML) produce samples of 'projected' mass and orbital radius, respectively. We present a new method for estimating the probability density distribution (density) of the unprojected quantity from such samples. For a sample of n data values, the method involves solving n simultaneous linear equations to determine the weights of delta functions for the raw, unsmoothed density of the unprojected quantity that cause the associated cumulative distribution function (CDF) of the projected quantity to exactly reproduce the empirical CDF of the sample at the locations of the n data values. We smooth the raw density using nonparametric kernel density estimation with a normal kernel of bandwidth σ. We calibrate the dependence of σ on n by Monte Carlo experiments performed on samples drawn from a theoretical density, in which the integrated square error is minimized. We scale this calibration to the ranges of real RV samples using the Normal Reference Rule. The resolution and amplitude accuracy of the estimated density improve with n. For typical RV and ML samples, we expect the fractional noise at the PDF peak to be approximately 80 n^(-log 2). For illustrations, we apply the new method to 67 RV values given a similar treatment by Jorissen et al. in 2001, and to the 308 RV values listed at exoplanets.org on 2010 October 20. In addition to analyzing observational results, our methods can be used to develop measurement requirements, particularly on the minimum sample size n, for future programs, such as the microlensing survey of Earth-like exoplanets recommended by the Astro 2010 committee.

Brown, Robert A., E-mail: rbrown@stsci.edu [Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218 (United States)

2011-05-20
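
The smoothing stage described in the record above can be sketched as a weighted kernel estimate. The deprojection weights are taken as given here, and the normal-reference bandwidth 1.06*s*n^(-1/5) is an illustrative stand-in for the paper's Monte Carlo calibration.

```python
import numpy as np

def smooth_raw_density(x_eval, locations, w, sigma=None):
    # Smooth a raw density of weighted delta functions with a normal kernel.
    if sigma is None:
        n, s = locations.size, locations.std(ddof=1)
        sigma = 1.06 * s * n ** -0.2          # normal reference rule (assumed)
    z = (np.asarray(x_eval)[:, None] - locations[None, :]) / sigma
    k = np.exp(-0.5 * z**2) / (np.sqrt(2 * np.pi) * sigma)
    return k @ (w / w.sum())

locs = np.array([0.5, 1.0, 2.2, 3.1])       # delta-function locations
weights = np.array([0.2, 0.4, 0.3, 0.1])    # hypothetical deprojection weights
print(smooth_raw_density(np.linspace(0, 4, 5), locs, weights))
```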

49

Enhancing Hyperspectral Data Throughput Utilizing Wavelet-Based Fingerprints  

SciTech Connect

Multiresolutional decompositions known as spectral fingerprints are often used to extract spectral features from multispectral/hyperspectral data. In this study, the authors investigate the use of wavelet-based algorithms for generating spectral fingerprints. The wavelet-based algorithms are compared to the currently used method, traditional convolution with first-derivative Gaussian filters. The comparison analyses consists of two parts: (a) the computational expense of the new method is compared with the computational costs of the current method and (b) the outputs of the wavelet-based methods are compared with those of the current method to determine any practical differences in the resulting spectral fingerprints. The results show that the wavelet-based algorithms can greatly reduce the computational expense of generating spectral fingerprints, while practically no differences exist in the resulting fingerprints. The analysis is conducted on a database of hyperspectral signatures, namely, Hyperspectral Digital Image Collection Experiment (HYDICE) signatures. The reduction in computational expense is by a factor of about 30, and the average Euclidean distance between resulting fingerprints is on the order of 0.02.

I. W. Ginsberg

1999-09-01
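
A toy version of a wavelet-based spectral fingerprint in the spirit of the record above, assuming PyWavelets' CWT with first-derivative-of-Gaussian wavelets; the scales and test spectrum are illustrative, not the study's settings.

```python
import numpy as np
import pywt

# Toy spectrum: a single Gaussian band on 512 channels.
spectrum = np.exp(-0.5 * ((np.arange(512) - 200.0) / 8.0) ** 2)
scales = 2.0 ** np.arange(1, 7)              # dyadic analysis scales
# 'gaus1' is PyWavelets' first derivative of a Gaussian.
fingerprint, _ = pywt.cwt(spectrum, scales, "gaus1")
print(fingerprint.shape)                     # (n_scales, n_channels)
```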

50

Alamouti coded wavelet based OFDM for multipath fading channels  

Microsoft Academic Search

In this work, we examined the performance of conventional DFT based OFDM and wavelet based OFDM (WOFDM) with and without Alamouti coding over multipath Rayleigh fading channels with exponential power delay profile. Results show that WOFDM has slightly better bit error rate performance than the conventional OFDM with and without Alamouti code. Besides the performance improvement of WOFDM as compared

Volkan Kumbasar; Oguz Kucur

2009-01-01

51

An EM algorithm for wavelet-based image restoration  

Microsoft Academic Search

This paper introduces an expectation-maximization (EM) algorithm for image restoration (deconvolution) based on a penalized likelihood formulated in the wavelet domain. Regularization is achieved by promoting a reconstruction with low complexity, expressed in the wavelet coefficients, taking advantage of the well known sparsity of wavelet representations. Previous works have investigated wavelet-based restoration but, except for certain special cases, the resulting criteria are solved...

Mário A. T. Figueiredo; Robert D. Nowak

2003-01-01
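
For orientation, the EM iteration in this line of work is often summarized as a Landweber step followed by soft-thresholding of wavelet coefficients. Below is a one-dimensional toy sketch under that reading, assuming PyWavelets and a known symmetric blur; parameter values are illustrative.

```python
import numpy as np
import pywt

def em_wavelet_restore(y, blur, tau=0.05, iters=50, wavelet="db4"):
    H = lambda x: np.convolve(x, blur, mode="same")      # blur operator
    x = y.copy()
    for _ in range(iters):
        r = x + H(y - H(x))                              # Landweber (E-step)
        coeffs = pywt.wavedec(r, wavelet)
        coeffs = [coeffs[0]] + [pywt.threshold(c, tau, mode="soft")
                                for c in coeffs[1:]]
        x = pywt.waverec(coeffs, wavelet)[: y.size]      # shrinkage (M-step)
    return x

blur = np.ones(5) / 5                                    # symmetric box blur
y = np.convolve(np.sin(np.linspace(0, 8, 256)), blur, mode="same")
y += 0.05 * np.random.standard_normal(256)
print(em_wavelet_restore(y, blur)[:5])
```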

52

Integrating a wavelet based perspiration liveness check with fingerprint recognition  

Microsoft Academic Search

It has been shown that fingerprint scanners can be deceived very easily, using simple, inexpensive techniques. In this work, a countermeasure against such attacks is enhanced that utilizes a wavelet based approach to detect liveness, integrated with the fingerprint matcher. Liveness is determined from perspiration changes along the fingerprint ridges, observed only in live people. The proposed algorithm was applied

Aditya Abhyankar; Stephanie A. C. Schuckers

2009-01-01

53

Wavelet-based analysis of blood pressure dynamics in rats  

NASA Astrophysics Data System (ADS)

Using a wavelet-based approach, we study stress-induced reactions in the blood pressure dynamics in rats. Further, we consider how the level of the nitric oxide (NO) influences the heart rate variability. Clear distinctions for male and female rats are reported.

Pavlov, A. N.; Anisimov, A. A.; Semyachkina-Glushkovskaya, O. V.; Berdnikova, V. A.; Kuznecova, A. S.; Matasova, E. G.

2009-02-01

54

Wavelet-based pavement distress detection and evaluation  

Microsoft Academic Search

An automated pavement inspection system consists of image acquisition and distress image processing. The former is accomplished with imaging sensors, such as video cameras and photomultiplier tubes. The latter includes distress detection, isolation, classification, evaluation, segmentation, and compression. We focus on wavelet-based distress detection, isolation, and evaluation. After a pavement image is decomposed into different-frequency subbands by the wavelet transform,

Jian Zhou; Peisen S. Huang; Fu-Pen Chiang

2006-01-01

55

Wavelet-based denoising using hidden Markov models  

Microsoft Academic Search

Hidden Markov models have been used in a wide variety of wavelet-based statistical signal processing applications. Typically, Gaussian mixture distributions are used to model the wavelet coefficients and the correlation between the magnitudes of the wavelet coefficients within each scale and/or across the scales is captured by a Markov tree imposed on the (hidden) states of the mixture. This paper

M. Jaber Borran; Robert D. Nowak

2001-01-01

56

Wavelet-based face verification for constrained platforms  

Microsoft Academic Search

Human Identification based on facial images is one of the most challenging tasks in comparison to identification based on other biometric features such as fingerprints, palm prints or iris. Facial recognition is the most natural and suitable method of identification for security related applications. This paper is concerned with wavelet-based schemes for efficient face verification suitable for implementation on devices

Harin Sellahewa; Sabah A. Jassim

2005-01-01

57

3D Wavelet-Based Filter and Method  

DOEpatents

A 3D wavelet-based filter for visualizing and locating structural features of a user-specified linear size in 2D or 3D image data. The only input parameter is a characteristic linear size of the feature of interest, and the filter output contains only those regions that are correlated with the characteristic size, thus denoising the image.

Moss, William C. (San Mateo, CA); Haase, Sebastian (San Francisco, CA); Sedat, John W. (San Francisco, CA)

2008-08-12

58

Wavelet-based lossless compression scheme with progressive transmission capability  

Microsoft Academic Search

Lossless image compression with progressive transmission capabilities plays a key role in measurement applications, requiring quantitative analysis and involving large sets of images. This work proposes a wavelet-based compression scheme that is able to operate in the lossless mode. The quantization module implements a new technique for the coding of the wavelet coefficients that is more effective

Adrian Munteanu; Jan Cornelis; Geert Van der Auwera; Paul Cristea

1999-01-01

59

Wavelet Based Lossless Compression of Coronary Angiography Images  

Microsoft Academic Search

The final diagnosis in coronary angiography has to be performed on a large set of original images. Therefore, lossless compression schemes play a key role in medical database management and telediagnosis applications. This paper proposes a wavelet-based compression scheme that is able to operate in the lossless mode. The quantization module implements a new way of coding of the wavelet

Adrian Munteanu; Jan Cornelis; Paul Cristea

1999-01-01

60

Stochastic model for estimation of environmental density  

SciTech Connect

Environmental density has been defined as a value of a habitat expressing its unfavorableness for the settling of an individual that has a strong anti-social tendency toward other individuals in the environment. Morisita studied the anti-social behavior of ant-lions (Glemuroides japanicus) and provided a recurrence relation, without an explicit solution, for the probability distribution of individuals settling in each of two habitats in terms of the environmental densities and the numbers of individuals introduced. In this paper the recurrence relation is explicitly solved; certain interesting properties of the distribution are discussed, including the estimation of the parameters. 4 references, 1 table.

Janardan, K.G.; Uppuluri, V.R.R.

1984-01-01

61

Asymptotics for General Multivariate Kernel Density Derivative Estimators  

Microsoft Academic Search

We investigate general kernel density derivative estimators, that is, kernel estimators of multivariate density derivative functions using general (or unconstrained) bandwidth matrix selectors. These density derivative estimators have been relatively less well researched than their density estimator analogues. A major obstacle for progress has been the intractability of the matrix analysis when treating higher order multivariate derivatives. With an alternative

Jose E. Chacon; T. Duon; M. P. Wand

2009-01-01

62

Improved Astronomical Inferences via Nonparametric Density Estimation  

Microsoft Academic Search

Nonparametric and semiparametric approaches to density estimation can yield scientific insights unavailable when restrictive assumptions are made regarding the form of the distribution. Further, when a well-chosen dimension reduction technique is utilized, the distribution of high-dimensional data (e.g., spectra, images) can be characterized via a nonparametric approach. The hope is that these procedures will preserve a large amount of the

Chad Schafer

2010-01-01

63

Mineral Deposit Densities for Estimating Mineral Resources  

Microsoft Academic Search

Estimates of numbers of mineral deposits are fundamental to assessing undiscovered mineral resources. Just as frequencies of grades and tonnages of well-explored deposits can be used to represent the grades and tonnages of undiscovered deposits, the density of deposits (deposits/area) in well-explored control areas can serve to represent the number of deposits. Empirical evidence presented here indicates that the processes

Donald A. Singer

2008-01-01

64

A Fast GEM Algorithm for Bayesian Wavelet-Based Image Restoration Using a Class of Heavy-Tailed Priors  

Microsoft Academic Search

The paper introduces modelling and optimization contributions on a class of Bayesian wavelet-based image deconvolution problems. Main assumptions of this class are: 1) space-invariant blur and additive white Gaussian noise; 2) prior given by a linear (finite or infinite) decomposition of Gaussian densities. Many heavy-tailed priors on wavelet coefficients of natural images admit this decomposition. To compute the

José M. Bioucas-dias; Torre Norte

2003-01-01

65

Estimates of the cosmological axion density  

NASA Astrophysics Data System (ADS)

In the scenario where the temperature in the early universe is high enough to restore the Peccei-Quinn symmetry, previous estimates of the axion mass density are reconsidered. The axion contribution to the density parameter is of the form Ω_a = K (f_a/10^12 GeV)^1.2, where f_a/sqrt(2) is the vacuum value of the magnitude of the Peccei-Quinn field. If axions radiated by strings are ignored, or if they have a flat spectrum as advocated by Harari and Sikivie, K ~ kζ^(7/6), where ζ^(-1) < 1 is the string spacing in Hubble units and k ≈ 1-10. If their spectrum is peaked at frequency ~1/t as advocated by Davis, K ~ 10kζ^(7/6). If ζ ~ 1, the first estimate is in essential agreement with earlier ones but the second is a factor of 7 smaller, which widens the allowed window for f_a. These estimates include only the axions coming from oscillations of the axion field between the walls. In addition there are axions radiated by the walls, whose relative contribution to K is of order r^(3/2) in the first case and 0.1 r^(3/2) in the second case, r being the time of wall annihilation divided by the time of wall formation. It could dominate, and close the axion window, if the walls survive for many Hubble times.

Lyth, David H.

1992-01-01

66

Wavelet-based denoising by customized thresholding  

Microsoft Academic Search

The problem of estimating a signal that is corrupted by additive noise has been of interest to many researchers for practical, as well as theoretical, reasons. Many of the traditional denoising methods use linear methods such as Wiener filtering. Recently, nonlinear methods, especially those based on wavelets, have become increasingly popular, due to a number of advantages over the linear

Byung-Jun Yoon; P. P. Vaidyanathan

2004-01-01

67

Data-Based Resolution Selection in Positive Wavelet Density Estimation  

Microsoft Academic Search

The problem of automatic resolution selection for positive wavelet density estimators is considered. Asymptotic formulae for the integrated mean square error (IMSE) of positive wavelet density estimators are derived under a very mild condition on the density function f. Using the formula for IMSE, an asymptotically optimal empirical resolution selection rule is given. The consistency of the density estimator with

J. K. Ghorai; Dong Yu

2005-01-01

68

Estimating stellar mean density through seismic inversions  

NASA Astrophysics Data System (ADS)

Context. Determining the mass of stars is crucial both for improving stellar evolution theory and for characterising exoplanetary systems. Asteroseismology offers a promising way for estimating the stellar mean density. When combined with accurate radii determinations, such as are expected from Gaia, this yields accurate stellar masses. The main difficulty is finding the best way to extract the mean density of a star from a set of observed frequencies. Aims: We seek to establish a new method for estimating the stellar mean density, which combines the simplicity of a scaling law with the accuracy of an inversion technique. Methods: We provide a framework in which to construct and evaluate kernel-based linear inversions that directly yield the mean density of a star. We then describe three different inversion techniques (SOLA and two scaling laws) and apply them to the Sun, several test cases and three stars, α Cen B, HD 49933 and HD 49385, two of which are observed by CoRoT. Results: The SOLA (subtractive optimally localised averages) approach and the scaling law based on the surface correcting technique described by Kjeldsen et al. (2008, ApJ, 683, L175) yield comparable results that can reach an accuracy of 0.5% and are better than scaling the large frequency separation. The reason for this is that the averaging kernels from the two first methods are comparable in quality and are better than what is obtained with the large frequency separation. It is also shown that scaling the large frequency separation is more sensitive to near-surface effects, but is much less affected by an incorrect mode identification. As a result, one can identify pulsation modes by looking for an ℓ and n assignment which provides the best agreement between the results from the large frequency separation and those from one of the two other methods. Non-linear effects are also discussed, as are the effects of mixed modes. In particular, we show that mixed modes bring little improvement to the mean density estimates because of their poorly adapted kernels.

Reese, D. R.; Marques, J. P.; Goupil, M. J.; Thompson, M. J.; Deheuvels, S.

2012-03-01
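
The simplest of the estimators compared in the record above is the scaling of the large frequency separation, which scales as the square root of the mean density. A tiny sketch follows; the solar reference values are approximate and the example Δν is only indicative.

```python
# Mean-density scaling law: Dnu ~ sqrt(rho), so rho ~ rho_sun*(Dnu/Dnu_sun)^2.
DNU_SUN_UHZ = 135.1     # solar large frequency separation, microHz (approx.)
RHO_SUN_CGS = 1.408     # solar mean density, g/cm^3 (approx.)

def mean_density_from_dnu(dnu_uhz):
    return RHO_SUN_CGS * (dnu_uhz / DNU_SUN_UHZ) ** 2

print(mean_density_from_dnu(161.4))   # a Dnu close to alpha Cen B's
```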

69

Wavelet-Based Off-Line Handwritten Signature Verification  

Microsoft Academic Search

In this paper, a wavelet-based off-line handwritten signature verification system is proposed. The proposed system can automatically identify useful and common features which consistently exist within different signatures of the same person and, based on these features, verify whether a signature is a forgery or not. The system starts with a closed-contour tracing algorithm. The curvature data of the traced

Peter Shaohua Deng; Hong-yuan Mark Liao; Chin Wen Ho; Hsiao-rong Tyan

1999-01-01

70

Wavelet-based scalable L-infinity-oriented compression  

Microsoft Academic Search

Among the different classes of coding techniques proposed in literature, predictive schemes have proven their outstanding performance in near-lossless compression. However, these schemes are incapable of providing embedded L-infinity-oriented compression, or, at most, provide a very limited number of potential bit-stream truncation points. We propose a new multidimensional wavelet-based L-infinity-constrained scalable coding framework that generates a fully embedded L-infinity-oriented

Alin Alecu; Adrian Munteanu; Jan P. H. Cornelis; Peter Schelkens

2006-01-01

71

Multiresolution seismic data fusion with a generalized wavelet-based method to derive subseabed acoustic properties  

NASA Astrophysics Data System (ADS)

In the context of multiscale seismic analysis of complex reflectors, which benefits from broad-band frequency range considerations, we apply a wavelet-based method to merge multiresolution seismic sources, based on generalized Lévy alpha-stable functions. The frequency bandwidth limitation of individual seismic sources induces distortions in wavelet responses (WRs), and we show that Gaussian fractional derivative functions are optimal wavelets to fully correct for these distortions in the merged frequency range. The efficiency of the method is also based on a new wavelet parametrization, that is the breadth of the wavelet, where the dominant dilation is adapted to the wavelet formalism. As a first demonstration, we perform the source correction with the high and very high resolution seismic sources of the SYSIF deep-towed device and we show that both can now be perfectly merged into an equivalent seismic source with a broad-band frequency bandwidth (220-2200 Hz). Taking advantage of this new multiresolution seismic data fusion, the potential of the generalized wavelet-based method allows reconstructing the acoustic impedance profile of the subseabed, based on the inverse wavelet transform properties extended to the source-corrected WR. We highlight that the fusion of seismic sources improves the resolution of the impedance profile and that the density structure of the subseabed can be assessed assuming spatially homogeneous large scale features of the subseabed physical properties.

Ker, S.; Le Gonidec, Y.; Gibert, D.

2013-11-01

72

Spectral information enhancement using wavelet-based iterative filtering for in vivo gamma spectrometry.  

PubMed

Use of wavelet transformation in stationary signal processing has been demonstrated for denoising the measured spectra and characterisation of radionuclides in the in vivo monitoring analysis, where difficulties arise due to very low activity level to be estimated in biological systems. The large statistical fluctuations often make the identification of characteristic gammas from radionuclides highly uncertain, particularly when interferences from progenies are also present. A new wavelet-based noise filtering methodology has been developed for better detection of gamma peaks in noisy data. This sequential, iterative filtering method uses the wavelet multi-resolution approach for noise rejection and an inverse transform after soft 'thresholding' over the generated coefficients. Analyses of in vivo monitoring data of (235)U and (238)U were carried out using this method without disturbing the peak position and amplitude while achieving a 3-fold improvement in the signal-to-noise ratio, compared with the original measured spectrum. When compared with other data-filtering techniques, the wavelet-based method shows the best results. PMID:22887117

Paul, Sabyasachi; Sarkar, P K

2012-08-11

73

Normalization of wood density in biomass estimates of Amazon forests  

Microsoft Academic Search

Wood density is an important variable in estimates of biomass and carbon flux in tropical regions. However, the Amazon region lacks large-scale wood-density datasets that employ a sampling methodology adequate for use in estimates of biomass and carbon emissions. Normalization of the available datasets is needed to avoid bias in estimates that combine previous studies of wood density that used

Euler Melo Nogueira; Philip Martin Fearnside; Bruce Walker Nelson

2008-01-01

74

Wavelet-based moment invariants for pattern recognition  

NASA Astrophysics Data System (ADS)

Moment invariants have received a lot of attention as features for identification and inspection of two-dimensional shapes. In this paper, two sets of novel moments are proposed by using the auto-correlation of wavelet functions and the dual-tree complex wavelet functions. It is well known that the wavelet transform lacks the property of shift invariance. A little shift in the input signal will cause very different output wavelet coefficients. The autocorrelation of wavelet functions and the dual-tree complex wavelet functions, on the other hand, are shift-invariant, which is very important in pattern recognition. Rotation invariance is the major concern in this paper, while translation invariance and scale invariance can be achieved by standard normalization techniques. The Gaussian white noise is added to the noise-free images and the noise levels vary with different signal-to-noise ratios. Experimental results conducted in this paper show that the proposed wavelet-based moments outperform Zernike's moments and the Fourier-wavelet descriptor for pattern recognition under different rotation angles and different noise levels. It can be seen that the proposed wavelet-based moments can do an excellent job even when the noise levels are very high.

Chen, Guangyi; Xie, Wenfang

2011-07-01

75

Bayesian network classification using spline-approximated kernel density estimation  

Microsoft Academic Search

The likelihood for patterns of continuous features needed for probabilistic inference in a Bayesian network classifier (BNC) may be computed by kernel density estimation (KDE), letting every pattern influence the shape of the probability density. Although usually leading to accurate estimation, KDE suffers from computational cost that makes it impractical in many real-world applications. We smooth the density using a

Yaniv Gurwicz; Boaz Lerner

2005-01-01

76

Variability in Benthic Invertebrate Density Estimates from Stream Samples  

Microsoft Academic Search

The problem of variability in benthic density estimates from stream sampling was reassessed using data from a number of stream studies, including our own. When data were analyzed assuming a negative binomial distribution and a tolerated precision of 40% of the mean, it was apparent that earlier estimates that hundreds of samples are required to estimate density were inaccurate. Based on

Steven P. Canton; James W. Chadwick

1988-01-01

77

A wavelet based technique for suppression of EMG noise and motion artifact in ambulatory ECG.  

PubMed

A wavelet-based denoising technique is investigated for suppressing EMG noise and motion artifact in ambulatory ECG. EMG noise is reduced by thresholding the wavelet coefficients using an improved thresholding function combining the features of hard and soft thresholding. Motion artifact is reduced by limiting the wavelet coefficients. Thresholds for both denoising steps are estimated from the statistics of the noisy signal. Denoising of simulated noisy ECG signals resulted in an average SNR improvement of 11.4 dB, and application to ambulatory ECG recordings resulted in L2-norm- and max-min-based improvement indices close to one. It significantly improved R-peak detection in both cases. PMID:22255971

Mithun, P; Pandey, Prem C; Sebastian, Toney; Mishra, Prashant; Pandey, Vinod K

2011-01-01
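
The 'improved thresholding function combining the features of hard and soft thresholding' is not spelled out in the abstract; the standard firm (semi-soft) rule below illustrates what such a hybrid looks like, and is an assumption rather than the authors' exact function:

    import numpy as np

    def firm_threshold(w, t1, t2):
        """Firm thresholding: zero below t1, linear transition up to t2, identity above.
        Behaves like soft thresholding near t1 and like hard thresholding past t2."""
        w = np.asarray(w, dtype=float)
        out = np.zeros_like(w)
        mid = (np.abs(w) > t1) & (np.abs(w) <= t2)
        big = np.abs(w) > t2
        out[mid] = np.sign(w[mid]) * t2 * (np.abs(w[mid]) - t1) / (t2 - t1)
        out[big] = w[big]
        return out

    coeffs = np.array([-3.0, -0.8, 0.3, 1.5, 4.0])
    shrunk = firm_threshold(coeffs, t1=1.0, t2=2.0)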

78

Wavelet-Based Passivity Preserving Model Order Reduction for Wideband Interconnect Characterization  

Microsoft Academic Search

Model order reduction plays a key role in determining VLSI system performance and the optimization of interconnects. In this paper, we develop an accurate and provably passive method for model order reduction using adaptive wavelet-based frequency selective projection. The wavelet-based approach provides an automated means to generate low order models that are accurate in a particular range of

Mehboob Alam; Arthur Nieuwoudt; Yehia Massoud

2007-01-01

79

A wavelet-based stochastic finite element method of thin plate bending  

Microsoft Academic Search

A wavelet-based stochastic finite element method is presented for the bending analysis of thin plates. The wavelet scaling functions of spline wavelets are selected to construct the displacement interpolation functions of a rectangular thin plate element and the displacement shape functions are expressed by the spline wavelets. A new wavelet-based finite element formulation of thin plate bending is developed by

Jian-Gang Han; Wei-Xin Ren; Yih Huang

2007-01-01

80

SMALL-MAMMAL DENSITY ESTIMATION: A FIELD COMPARISON OF GRID-BASED VS. WEB-BASED DENSITY ESTIMATORS  

Microsoft Academic Search

Statistical models for estimating absolute densities of field populations of animals have been widely used over the last century in both scientific studies and wildlife management programs. To date, two general classes of density estimation models have been developed: models that use data sets from capture-recapture or removal sampling techniques (often derived from trapping grids) from which separate estimates of

Robert R. Parmenter; Terry L. Yates; David R. Anderson; Kenneth P. Burnham; Jonathan L. Dunnum; Alan B. Franklin; Michael T. Friggens; Bruce C. Lubow; Michael Miller; Gail S. Olson; Cheryl A. Parmenter; John Pollard; Eric Rexstad; Tanya M. Shenk; Thomas R. Stanley; Gary C. White

2003-01-01

81

Comparison of methods for estimating density of forest songbirds ...  

Treesearch

We compared estimates of detection probability and density from distance and time-removal ... The choice of a method may not affect the use of estimates for relative ... distance sampling, survey protocol, time-removal sampling, upland forest.

82

A second generation wavelet based finite elements on triangulations  

NASA Astrophysics Data System (ADS)

In this paper we have developed a second generation wavelet based finite element method for solving elliptic PDEs on two-dimensional triangulations using customized operator-dependent wavelets. The wavelets derived from a Courant element are tailored in the second generation framework to decouple some elliptic PDE operators. Starting from a primitive hierarchical basis, the wavelets are lifted (enhanced) to achieve local scale-orthogonality with respect to the operator of the PDE. The lifted wavelets are used in a Galerkin-type discretization of the PDE, which results in a block diagonal, sparse multiscale stiffness matrix. The blocks corresponding to different resolutions are completely decoupled, which makes the implementation of the new wavelet finite element very simple and efficient. The solution is enriched adaptively and incrementally using finer-scale wavelets. The new procedure completely eliminates the wastage of resources associated with classical finite element refinement. Finally, some numerical experiments are conducted to analyze the performance of this method.

Quraishi, S. M.; Sandeep, K.

2011-08-01

83

A Wavelet-Based Methodology for Grinding Wheel Condition Monitoring  

SciTech Connect

Grinding wheel surface condition changes as more material is removed. This paper presents a wavelet-based methodology for grinding wheel condition monitoring based on acoustic emission (AE) signals. Grinding experiments in creep feed mode were conducted to grind alumina specimens with a resinoid-bonded diamond wheel under two different conditions. During the experiments, AE signals were collected when the wheel was 'sharp' and when the wheel was 'dull'. Discriminant features were then extracted from each raw AE signal segment using the discrete wavelet decomposition procedure. An adaptive genetic clustering algorithm was finally applied to the extracted features in order to distinguish different states of grinding wheel condition. The test results indicate that the proposed methodology can achieve 97% clustering accuracy for the high material removal rate condition, 86.7% for the low material removal rate condition, and 76.7% for the combined grinding conditions, provided the base wavelet, the decomposition level, and the GA parameters are properly selected.

Liao, T. W. [Louisiana State University; Ting, C.F. [Louisiana State University; Qu, Jun [ORNL; Blau, Peter Julian [ORNL

2007-01-01
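
The discriminant-feature step (before the genetic clustering) amounts to summarizing each AE segment by its wavelet decomposition; a common, hedged stand-in is the vector of relative band energies, sketched below (the db4 wavelet and four levels are assumptions, not the paper's exact features):

    import numpy as np
    import pywt

    def wavelet_energy_features(ae_segment, wavelet="db4", level=4):
        """Relative energy per decomposition band: a simple discriminant feature
        vector for 'sharp' vs. 'dull' wheel states."""
        coeffs = pywt.wavedec(np.asarray(ae_segment, dtype=float), wavelet, level=level)
        energies = np.array([np.sum(c ** 2) for c in coeffs])
        return energies / energies.sum()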

84

Wavelet-based multifractal analysis of laser biopsy imagery  

NASA Astrophysics Data System (ADS)

In this work, we report a wavelet-based multi-fractal study of images of dysplastic and neoplastic HE-stained human cervical tissues captured in transmission mode when illuminated by laser light (He-Ne 632.8 nm laser). It is well known that the morphological changes occurring during the progression of diseases like cancer manifest in their optical properties, which can be probed to differentiate the various stages of cancer. Here, we use the multi-resolution properties of the wavelet transform to analyze the optical changes. For this, we have used a novel laser imagery technique which provides us with a composite image of the absorption by the different cellular organelles. As the disease progresses, due to the growth of new cells, the ratio of organelle to cellular volume changes, manifesting in the laser imagery of such tissues. In order to develop a metric that can quantify the changes in such systems, we make use of wavelet-based fluctuation analysis. The changing self-similarity during disease progression can be well characterized by the Hurst exponent and the scaling exponent. Due to the use of the Daubechies family of wavelet kernels, we can extract polynomial trends of different orders, which helps us characterize the underlying processes effectively. In this study, we observe that the Hurst exponent decreases as the cancer progresses. This measure could be used to differentiate between different stages of cancer, which could lead to the development of a novel non-invasive method for cancer detection and characterization.

Jagtap, Jaidip; Ghosh, Sayantan; Panigrahi, Prasanta K.; Pradhan, Asima

2012-02-01
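
A rough illustration of estimating a Hurst exponent from the scaling of wavelet detail variances, in the spirit of the fluctuation analysis above; the variance-scaling convention assumed here (Var(d_j) ~ 2^(j(2H+1)), appropriate for fractional-Brownian-motion-like signals) and the db3 wavelet are assumptions, not the authors' exact estimator:

    import numpy as np
    import pywt

    def wavelet_hurst(signal, wavelet="db3", max_level=6):
        """Slope of log2 detail variance vs. scale index gives 2H + 1 for an
        fBm-like signal; H is then recovered from the slope."""
        coeffs = pywt.wavedec(np.asarray(signal, dtype=float), wavelet, level=max_level)
        details = coeffs[1:]                       # coarsest ... finest bands
        scales = np.arange(max_level, 0, -1)       # scale index j for each band
        logvar = np.log2([np.var(d) for d in details])
        slope = np.polyfit(scales, logvar, 1)[0]
        return (slope - 1.0) / 2.0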

85

Density estimation using the trapping web design: A geometric analysis  

USGS Publications Warehouse

Population densities for small mammal and arthropod populations can be estimated using capture frequencies for a web of traps. A conceptually simple geometric analysis that avoids the need to estimate a point on a density function is proposed. This analysis incorporates data from the outermost rings of traps, explaining large capture frequencies in these rings rather than truncating them from the analysis.

Link, W.A.; Barker, R. J.

1994-01-01

86

M-Kernel Merging: Towards Density Estimation over Data Streams  

Microsoft Academic Search

Density estimation is a costly operation for computing distribution information of the data sets underlying many important data mining applications, such as clustering and biased sampling. However, traditional density estimation methods are inapplicable to streaming data, which arrive continuously in large volumes, because they require linear storage and quadratic computation. This shortcoming limits the application

Aoying Zhou; Zhiyuan Cai; Li Wei; Weining Qian

2003-01-01
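
The core trick the record alludes to, maintaining a fixed budget of kernels and merging the closest pair when a new point overflows it, can be sketched as below; the merge rule (weight-averaged means, fixed bandwidth) is a simplification of the paper's M-kernel procedure, not its exact algorithm:

    import numpy as np

    class StreamKDE:
        """Keep at most m Gaussian kernels; merge the two closest when over budget."""
        def __init__(self, m=50, bandwidth=0.3):
            self.m, self.h = m, bandwidth
            self.means, self.weights = [], []

        def update(self, x):
            self.means.append(float(x)); self.weights.append(1.0)
            if len(self.means) > self.m:
                mu = np.array(self.means); w = np.array(self.weights)
                order = np.argsort(mu)
                i = int(np.argmin(np.diff(mu[order])))   # closest neighbouring pair
                a, b = order[i], order[i + 1]
                wab = w[a] + w[b]
                merged_mean = (w[a] * mu[a] + w[b] * mu[b]) / wab
                keep = [k for k in range(len(mu)) if k not in (a, b)]
                self.means = list(mu[keep]) + [merged_mean]
                self.weights = list(w[keep]) + [wab]

        def pdf(self, x):
            mu = np.array(self.means); w = np.array(self.weights)
            z = (np.asarray(x, dtype=float)[..., None] - mu) / self.h
            return (w * np.exp(-0.5 * z**2) / (self.h * np.sqrt(2*np.pi))).sum(-1) / w.sum()

    kde = StreamKDE(m=50, bandwidth=0.3)
    for x in np.random.default_rng(1).normal(size=5000):   # streaming input
        kde.update(x)
    density_at_zero = kde.pdf(0.0)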

87

Spatio-Temporal Photon Density Estimation Using Bilateral Filtering  

Microsoft Academic Search

Photon tracing and density estimation are well established techniques in global illumination computation and the rendering of high-quality animation sequences. Using traditional density estimation techniques it is difficult to remove the stochastic noise inherent in photon-based methods while avoiding overblurring lighting details. In this paper we investigate the use of bilateral filtering for lighting reconstruction based

Markus Weber; Marco Milch; Karol Myszkowski; Kirill Dmitriev; Przemyslaw Rokita; Hans-peter Seidel

2004-01-01

88

Biased and Unbiased Cross-Validation in Density Estimation  

Microsoft Academic Search

Nonparametric density estimation requires the specification of smoothing parameters. The demands of statistical objectivity make it highly desirable to base the choice on properties of the data set. In this article we introduce some biased cross-validation criteria for selection of smoothing parameters for kernel and histogram density estimators, closely related to one investigated in Scott and Factor (1981). These criteria

David W. Scott; George R. Terrell

1987-01-01
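
For reference, the unbiased (least-squares) cross-validation criterion this article builds on has a closed form for a Gaussian kernel, sketched below; minimizing it over h selects the bandwidth (the search grid and toy data are illustrative):

    import numpy as np
    from scipy.stats import norm

    def lscv(h, x):
        """Least-squares CV score: estimate of integrated squared error, up to a
        constant, using the closed form for Gaussian kernels."""
        n = len(x)
        d = x[:, None] - x[None, :]
        term1 = norm.pdf(d, scale=np.sqrt(2.0) * h).sum() / n**2
        off = ~np.eye(n, dtype=bool)
        term2 = 2.0 * norm.pdf(d[off], scale=h).sum() / (n * (n - 1))
        return term1 - term2

    rng = np.random.default_rng(2)
    x = rng.normal(size=200)
    grid = np.linspace(0.05, 1.0, 40)
    h_star = grid[np.argmin([lscv(h, x) for h in grid])]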

89

Shared kernel models for class conditional density estimation  

Microsoft Academic Search

We present probabilistic models which are suitable for class conditional density estimation and can be regarded as shared kernel models, where sharing means that each kernel may contribute to the estimation of the conditional densities of all classes. We first propose a model that constitutes an adaptation of the classical radial basis function (RBF) network (with full sharing of kernels

Michalis K. Titsias; Aristidis C. Likas

2001-01-01

90

Fiber density estimation by tensor divergence.  

PubMed

Diffusion-sensitized magnetic resonance imaging provides information about the fibrous structure of the human brain. However, this information is not sufficient to reconstruct the underlying fiber network, because the nature of diffusion provides only conditional fiber densities. That is, it is possible to infer the percentage of bundles that pass a voxel with a certain direction, but the absolute number of fibers is inaccessible. In this work we propose a conservation equation for tensor fields that can infer this number up to a factor. Simulations on synthetic phantoms show that the approach is able to derive the densities correctly for various configurations. In-vivo results on 20 healthy volunteers are plausible and consistent, while a rigorous evaluation is difficult, because conclusive data from both MRI and histology remain elusive even on the most studied brain structures. PMID:23286061

Reisert, Marco; Skibbe, Henrik; Kiselev, Valerij G

2012-01-01

91

Blind Source Separation Based on Nonparametric Density Estimation  

Microsoft Academic Search

A nonparametric density estimation method is used to directly estimate the score functions encountered in relative gradient (or natural gradient) adaptation algorithms in the blind source separation problem. Compared to the method where simple nonlinear functions are used to replace the unknown score functions, the key advantage of the direct estimation of the score functions lies in the fact that

Peng Jia; Hong-Yuan Zhang; Xi-Zhi Shi

2003-01-01

92

Wavelet-based image fusion and quality assessment  

NASA Astrophysics Data System (ADS)

Recent developments in satellite and sensor technologies have provided high-resolution satellite images. Image fusion techniques can improve the quality of these data and broaden their applications. This paper addresses two issues in image fusion: (a) the image fusion method and (b) the corresponding quality assessment. Firstly, a multi-band wavelet-based image fusion method is presented, which is a further development of the two-band wavelet transformation. This fusion method is then applied to a case study to demonstrate its performance in image fusion. Secondly, quality assessment for fused images is discussed. The objectives of image fusion include enhancing the visibility of the image and improving the spatial resolution and the spectral information of the original images. For assessing the quality of an image after fusion, we first define the aspects to be assessed. These include, for instance, spatial and spectral resolution, quantity of information, visibility, contrast, and details of features of interest. Quality assessment is application dependent; different applications may require different aspects of image quality. Based on this analysis, a set of quality measures is classified and analyzed. These include (a) average grey value, representing the intensity of an image; (b) standard deviation, information entropy, and profile intensity curve, for assessing details of fused images; and (c) bias and correlation coefficient, for measuring spectral distortion between the original and fused images.

Shi, Wenzhong; Zhu, Changqing; Tian, Yan; Nichol, Janet

2005-03-01

93

Wavelet-based illumination invariant preprocessing in face recognition  

NASA Astrophysics Data System (ADS)

The performance of contemporary two-dimensional face-recognition systems remains unsatisfactory under variations in lighting. As a result, much work on handling illumination variation in face recognition has been carried out over the past decades. Among the approaches, the illumination-reflectance model is one of the generic models used to separate the reflectance and illumination components of an object. The illumination component can be removed by means of image-processing techniques to recover the intrinsic face features, which are captured by the reflectance component. We present a wavelet-based illumination-invariant algorithm as a preprocessing technique for face recognition. On the basis of the multiresolution nature of wavelet analysis, we decompose the illumination and reflectance components of a face image in a systematic way. The illumination component, which resides in the low-spatial-frequency subband, can be eliminated efficiently. This technique proves advantageous, achieving higher recognition performance on the YaleB, CMU PIE, and FRGC face databases.

Goh, Yi Zheng; Teoh, Andrew Beng Jin; Goh, Kah Ong Michael

2009-04-01
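
The decomposition step can be pictured in a few lines of PyWavelets: work on the log image (so reflectance and illumination become additive), attenuate the low-frequency approximation subband where illumination resides, and reconstruct. The damping factor and wavelet choice are hypothetical, and the paper's actual normalization is more elaborate than this sketch:

    import numpy as np
    import pywt

    def wavelet_illumination_normalize(face, wavelet="db2", level=3, damp=0.1):
        """Suppress the LL (approximation) subband of the log image to reduce
        slowly varying illumination; `damp` is an illustrative parameter."""
        img = np.log1p(np.asarray(face, dtype=float))
        coeffs = pywt.wavedec2(img, wavelet, level=level)
        coeffs[0] = coeffs[0] * damp
        return pywt.waverec2(coeffs, wavelet)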

94

An image adaptive, wavelet-based watermarking of digital images  

NASA Astrophysics Data System (ADS)

In digital management, multimedia content and data can easily be used illegally--copied, modified and redistributed. Copyright protection, the protection of the intellectual and material rights of authors, owners, buyers and distributors, and the authenticity of content are crucial factors in solving an urgent and real problem. In such a scenario, digital watermark techniques are emerging as a valid solution. In this paper, we describe an algorithm--called WM2.0--for an invisible watermark: private, strong, wavelet-based, and developed for digital image protection and authenticity. The use of the discrete wavelet transform (DWT) is motivated by its good time-frequency features and good match with human visual system directives. These two combined elements are important in building an invisible and robust watermark. WM2.0 works on a dual scheme: watermark embedding and watermark detection. The watermark is embedded into the high-frequency DWT components of a specific sub-image and is calculated in correlation with the image features and statistical properties. Watermark detection applies a re-synchronization between the original and watermarked image. The correlation between the watermarked DWT coefficients and the watermark signal is calculated according to the Neyman-Pearson statistical criterion. Experimentation on a large set of different images has shown the scheme to be resistant to geometric, filtering and StirMark attacks, with a low rate of false alarms.

Agreste, Santa; Andaloro, Guido; Prestipino, Daniela; Puccio, Luigia

2007-12-01

95

Wavelet-based face verification for constrained platforms  

NASA Astrophysics Data System (ADS)

Human identification based on facial images is one of the most challenging tasks in comparison to identification based on other biometric features such as fingerprints, palm prints or the iris. Facial recognition is the most natural and suitable method of identification for security-related applications. This paper is concerned with wavelet-based schemes for efficient face verification suitable for implementation on devices that are constrained in memory size and computational power, such as PDAs and smartcards. Besides minimal storage requirements, as few pre-processing procedures as possible should be applied to deal with variation in recording conditions. We propose the LL coefficients of wavelet-transformed face images as the feature vectors for face verification, and compare their performance with that of PCA applied in the LL-subband at levels 3, 4 and 5. We also compare the performance of various versions of our scheme with those of well-established PCA face verification schemes on the BANCA database as well as the ORL database. In many cases, the wavelet-only feature vector scheme has the best performance while maintaining efficacy and requiring minimal pre-processing steps. The significance of these results is their efficiency and suitability for platforms of constrained computational power and storage capacity (e.g. smartcards). Moreover, working at or beyond the level 3 LL-subband results in robustness against high-rate compression and noise interference.

Sellahewa, Harin; Jassim, Sabah A.

2005-03-01

96

A Technique for Estimating Marginal Posterior Densities in Hierarchical Models Using Mixtures of Conditional Densities  

Microsoft Academic Search

A technique called quantile integration is proposed for the estimation of marginal posterior densities arising in Bayesian models having hierarchical representations. The method is based on approximating marginal densities as mixtures of conditional densities, where the conditioning variables are selected deterministically from the mixing distributions. The form of the approximation makes it easy to implement, and the resulting approximations are

Valen E. Johnson

1992-01-01
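
The idea is easy to demonstrate on a toy hierarchy: approximate the marginal p(x) = integral of p(x|theta) p(theta) dtheta by a mixture of conditionals evaluated at deterministic quantiles of the mixing distribution. The model below (theta ~ N(0,1), x|theta ~ N(theta, 0.5^2)) is a made-up example, chosen because its exact marginal is known:

    import numpy as np
    from scipy.stats import norm

    def quantile_mixture_marginal(x, m=15):
        """Mixture-of-conditionals approximation with deterministic mixing points
        taken at midpoint quantiles of the mixing distribution."""
        probs = (np.arange(m) + 0.5) / m       # midpoint quantile levels
        thetas = norm.ppf(probs)               # deterministic mixing points
        return np.mean([norm.pdf(x, loc=t, scale=0.5) for t in thetas], axis=0)

    x = np.linspace(-4, 4, 9)
    approx = quantile_mixture_marginal(x)
    exact = norm.pdf(x, scale=np.sqrt(1.25))   # closed-form marginal of the toy model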

97

Estimating Geometric Dislocation Densities in Polycrystalline Materials from Orientation Imaging Microscopy  

SciTech Connect

Herein we consider polycrystalline materials which can be taken as statistically homogeneous and whose grains can be adequately modeled as rigid-plastic. Our objective is to obtain, from orientation imaging microscopy (OIM), estimates of geometrically necessary dislocation (GND) densities.

Man, Chi-Sing [University of Kentucky; Gao, Xiang [University of Kentucky; Godefroy, Scott [University of Kentucky; Kenik, Edward A [ORNL

2010-01-01

98

Biased and Unbiased Cross-Validation in Density Estimation.  

National Technical Information Service (NTIS)

Nonparametric density estimation requires the specification of smoothing parameters. The demands of statistical objectivity make it highly desirable to base the choice on properties of the data set. This paper introduces some biased cross-validation crite...

D. W. Scott G. R. Terrell

1986-01-01

99

Wavelet-based noise-model driven denoising algorithm for differential phase contrast mammography.  

PubMed

Traditional mammography can be positively complemented by phase contrast and scattering x-ray imaging, because they can detect subtle differences in the electron density of a material and measure the local small-angle scattering power generated by microscopic density fluctuations in the specimen, respectively. The grating-based x-ray interferometry technique can produce absorption, differential phase contrast (DPC) and scattering signals of the sample in parallel, and works well with conventional x-ray sources; thus, it constitutes a promising method for more reliable breast cancer screening and diagnosis. Recently, our team showed that this novel technology can provide images superior to those of conventional mammography. The technology was used to image whole native breast samples directly after mastectomy. The images acquired show high potential, but the noise level associated with the DPC and scattering signals is significant, so it must be removed in order to improve image quality and visualization. The noise models of the three signals have been investigated so that the noise variance can be computed. In this work, a wavelet-based denoising algorithm using these noise models is proposed. It was evaluated with both simulated and experimental mammography data. The outcomes demonstrate that our method offers good denoising quality while simultaneously preserving the edges and important structural features. Therefore, it can help improve diagnosis and enable further post-processing techniques such as fusion of the three acquired signals. PMID:23669913

Arboleda, Carolina; Wang, Zhentian; Stampanoni, Marco

2013-05-01

100

Tractable Multivariate Binary Density Estimation and the Restricted Boltzmann Forest  

Microsoft Academic Search

We investigate the problem of estimating the density function of multivariate binary data. In particular, we focus on models for which computing the estimated probability of any data point is tractable. We argue that, even in its tractable regime, the Restricted Boltzmann Machine (RBM) provides a competitive framework for multivariate binary density modeling. With this in mind,

Hugo Larochelle; Yoshua Bengio; Joseph P. Turian

2010-01-01

101

Estimation of risk-neutral densities using positive convolution approximation  

Microsoft Academic Search

This paper proposes a new nonparametric method for estimating the conditional risk-neutral density (RND) from a cross-section of option prices. The idea of the method is to fit option prices by finding the optimal density in a special admissible set. The admissible set consists of functions, each of which may be represented as a convolution of a positive kernel with

Oleg Bondarenko

2003-01-01

102

Kernel Density Estimation of traffic accidents in a network space  

Microsoft Academic Search

A standard planar Kernel Density Estimation (KDE) aims to produce a smooth density surface of spatial point events over a 2-D geographic space. However, the planar KDE may not be suited for characterizing certain point events, such as traffic accidents, which usually occur inside a 1-D linear space, the roadway network. This paper presents a novel network KDE approach to

Zhixiao Xie; Jun Yan

2008-01-01

103

Model parameter estimation for mixture density polynomial segment models  

Microsoft Academic Search

In this paper, we propose parameter estimation techniques for mixture density polynomial segment models (MDPSMs) where their trajectories are specified with an arbitrary regression order. MDPSM parameters can be trained in one of three different ways: (1) segment clustering; (2) expectation maximization (EM) training of mean trajectories; and (3) EM training of mean and variance trajectories. These parameter estimation methods

Toshiaki Fukada; Kuldip K. Paliwal; Yoshinori Sagisaka

1998-01-01

104

Model parameter estimation for mixture density polynomial segment models  

Microsoft Academic Search

In this paper, we propose parameter estimation techniques for mixture density polynomial segment models (MDPSMs) where their trajectories are specified with an arbitrary regression order. MDPSM parameters can be trained in one of three different ways: (1) segment clustering; (2) expectation maximization (EM) training of mean trajectories; and (3) EM training of mean and variance trajectories. These parameter estimation methods

T. Fukada; K. K. Paliwal; Y. Sagisaka

1998-01-01

105

Conditional Density Estimation with HMM Based Support Vector Machines  

NASA Astrophysics Data System (ADS)

Conditional density estimation is very important in financial engineering, risk management, and other engineering computing problems. However, most regression models carry the latent assumption that the probability density is Gaussian, which is not necessarily true in many real-life applications. In this paper, we give a framework to estimate or predict the conditional density mixture dynamically. By combining the input-output HMM with SVM regression and building an SVM model in each state of the HMM, we can estimate a conditional density mixture instead of a single Gaussian. With an SVM in each node, this model can be applied not only to regression but to classification as well. We applied this model to denoise ECG data. The proposed method has the potential to be applied to other time series such as stock market return predictions.

Hu, Fasheng; Liu, Zhenqiu; Jia, Chunxin; Chen, Dechang

106

The estimation of body density in rugby union football players.  

PubMed Central

The general regression equation of Durnin and Womersley for estimating body density from skinfold thicknesses in young men was examined by comparing the estimated density from this equation with the measured density of a group of 45 rugby union players of similar age. Body density was measured by hydrostatic weighing with simultaneous measurement of residual volume. Additional measurements included stature, body mass and skinfold thicknesses at the biceps, triceps, subscapular and suprailiac sites. The estimated density was significantly different from the measured density (P < 0.001), equivalent to a mean overestimation of relative fat of approximately 4%. A new set of prediction equations for estimating density was formulated by linear regression using the logarithm of single and summed skinfold thicknesses. Equations were derived from a validation sample (n = 22) and tested on a cross-validation sample (n = 23). The standard error of the estimate (s.e.e.) of the equations ranged from 0.0058 to 0.0062 g ml-1. The derived equations were successfully cross-validated. Differences between measured and estimated densities were not significant (P > 0.05), with total errors ranging from 0.0067 to 0.0092 g ml-1. An exploratory assessment was also made of the effect of fatness and aerobic fitness on the prediction equations. The equations should be applied to players of similar age and playing ability, and for the purpose of identifying group characteristics. Application of the equations to individuals may give rise to errors of between -3.9% and +2.5% total body fat in two-thirds of cases.

Bell, W

1995-01-01
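
The form of equation being fitted, body density regressed on the logarithm of summed skinfolds, can be sketched as follows; the synthetic data and coefficients are placeholders, not the paper's values:

    import numpy as np

    # Synthetic validation sample standing in for the paper's n = 22 players.
    rng = np.random.default_rng(3)
    skinfold_sum = rng.uniform(20.0, 80.0, size=22)             # mm, sum of four sites
    density = (1.09 - 0.015 * np.log(skinfold_sum)
               + rng.normal(0.0, 0.004, size=22))               # g/ml, "measured"

    slope, intercept = np.polyfit(np.log(skinfold_sum), density, 1)
    predicted = intercept + slope * np.log(skinfold_sum)        # D = a + b*log(S)
    see = np.sqrt(np.mean((density - predicted) ** 2))          # standard error of estimate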

107

Wavelet-Based Analysis of the Non-Stationary Response of a Slipping Foundation  

NASA Astrophysics Data System (ADS)

A wavelet-based stochastic formulation has been presented in this paper for the seismic analysis of a rigid block resting on a friction base. The ground motion has been modelled as a non-stationary process (both in amplitude and frequency) by using wavelets. The proposed formulation is based on replacing the non-linear system by an equivalent linear system with time-dependent properties. The expressions of the instantaneous damping, root mean square (r.m.s.) velocity response, and the power spectral density function (PSDF) of the velocity response have been obtained in terms of the input wavelet coefficients. For validation of the formulation, simulation based on twenty synthetically generated time-histories corresponding to an example ground motion process has been carried out. The effectiveness of the base-isolation system and the effect of the frequency non-stationarity on the non-linear response have also been studied in detail. It has been clearly shown how the frequency non-stationarity in the ground motion changes the non-linear response.

Basu, B.; Gupta, V. K.

1999-05-01

108

Atmospheric Density Corrections Estimated from Fitted Drag Coefficients  

NASA Astrophysics Data System (ADS)

Fitted drag coefficients estimated using GEODYN, the NASA Goddard Space Flight Center Precision Orbit Determination and Geodetic Parameter Estimation Program, are used to create density corrections. The drag coefficients were estimated for Stella, Starlette and GFZ using satellite laser ranging (SLR) measurements; and for GEOSAT Follow-On (GFO) using SLR, Doppler, and altimeter crossover measurements. The data analyzed covers years ranging from 2000 to 2004 for Stella and Starlette, 2000 to 2002 and 2005 for GFO, and 1995 to 1997 for GFZ. The drag coefficient was estimated every eight hours. The drag coefficients over the course of a year show a consistent variation about the theoretical and yearly average values that primarily represents a semi-annual/seasonal error in the atmospheric density models used. The atmospheric density models examined were NRLMSISE-00 and MSIS-86. The annual structure of the major variations was consistent among all the satellites for a given year and consistent among all the years examined. The fitted drag coefficients can be converted into density corrections every eight hours along the orbit of the satellites. In addition, drag coefficients estimated more frequently can provide a higher frequency of density correction.

McLaughlin, C. A.; Lechtenberg, T. F.; Mance, S. R.; Mehta, P.

2010-12-01
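
The conversion the abstract describes rests on a simple proportionality: drag acceleration scales with rho times Cd, so a fitted Cd differing from the physical Cd implies the model density is off by the same ratio. A sketch with placeholder numbers:

    import numpy as np

    cd_physical = 2.2                                   # theoretical drag coefficient
    cd_fitted = np.array([2.4, 2.0, 2.6, 2.3])          # one fitted value per 8-h span
    rho_model = np.array([3.1e-13, 2.9e-13,
                          3.3e-13, 3.0e-13])            # kg/m^3, model density along orbit

    # The fitted/physical Cd ratio becomes a multiplicative density correction.
    rho_corrected = rho_model * cd_fitted / cd_physical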

109

Computerized image analysis: estimation of breast density on mammograms  

NASA Astrophysics Data System (ADS)

An automated image analysis tool is being developed for estimation of mammographic breast density, which may be useful for risk estimation or for monitoring breast density change in a prevention or intervention program. A mammogram is digitized using a laser scanner and the resolution is reduced to a pixel size of 0.8 mm X 0.8 mm. Breast density analysis is performed in three stages. First, the breast region is segmented from the surrounding background by an automated breast boundary-tracking algorithm. Second, an adaptive dynamic range compression technique is applied to the breast image to reduce the range of the gray level distribution in the low frequency background and to enhance the differences in the characteristic features of the gray level histogram for breasts of different densities. Third, rule-based classification is used to classify the breast images into several classes according to the characteristic features of their gray level histogram. For each image, a gray level threshold is automatically determined to segment the dense tissue from the breast region. The area of segmented dense tissue as a percentage of the breast area is then estimated. In this preliminary study, we analyzed the interobserver variation of breast density estimation by two experienced radiologists using BI-RADS lexicon. The radiologists' visually estimated percent breast densities were compared with the computer's calculation. The results demonstrate the feasibility of estimating mammographic breast density using computer vision techniques and its potential to improve the accuracy and reproducibility in comparison with the subjective visual assessment by radiologists.

Zhou, Chuan; Chan, Heang-Ping; Petrick, Nicholas; Sahiner, Berkman; Helvie, Mark A.; Roubidoux, Marilyn A.; Hadjiiski, Lubomir M.; Goodsitt, Mitchell M.

2000-06-01
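
The final stage reduces to a one-liner once the breast region is segmented and a gray-level threshold chosen: percent density is the fraction of breast pixels at or above the threshold. A hedged sketch, with segmentation and threshold selection assumed done upstream:

    import numpy as np

    def percent_dense(breast_pixels, threshold):
        """Percent mammographic density: fraction of segmented breast pixels at or
        above the gray-level threshold chosen by the histogram analysis."""
        return 100.0 * np.mean(np.asarray(breast_pixels) >= threshold)

    pixels = np.random.default_rng(4).integers(0, 256, size=10_000)  # stand-in region
    pd_value = percent_dense(pixels, threshold=140)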

110

A wavelet-based noise reduction algorithm and its clinical evaluation in cochlear implants.  

PubMed

Noise reduction is often essential for cochlear implant (CI) recipients to achieve acceptable speech perception in noisy environments. Most noise reduction algorithms applied to audio signals are based on time-frequency representations of the input, such as the Fourier transform. Algorithms based on other representations may also be able to provide comparable or improved speech perception and listening quality improvements. In this paper, a noise reduction algorithm for CI sound processing is proposed based on the wavelet transform. The algorithm uses a dual-tree complex discrete wavelet transform followed by shrinkage of the wavelet coefficients based on a statistical estimation of the variance of the noise. The proposed noise reduction algorithm was evaluated by comparing its performance to those of many existing wavelet-based algorithms. The speech transmission index (STI) of the proposed algorithm is significantly better than other tested algorithms for the speech-weighted noise of different levels of signal to noise ratio. The effectiveness of the proposed system was clinically evaluated with CI recipients. A significant improvement in speech perception of 1.9 dB was found on average in speech weighted noise. PMID:24086605

Ye, Hua; Deng, Guang; Mauger, Stefan J; Hersbach, Adam A; Dawson, Pam W; Heasman, John M

2013-09-26

111

A Wavelet-Based Noise Reduction Algorithm and Its Clinical Evaluation in Cochlear Implants  

PubMed Central

Noise reduction is often essential for cochlear implant (CI) recipients to achieve acceptable speech perception in noisy environments. Most noise reduction algorithms applied to audio signals are based on time-frequency representations of the input, such as the Fourier transform. Algorithms based on other representations may also be able to provide comparable or improved speech perception and listening quality improvements. In this paper, a noise reduction algorithm for CI sound processing is proposed based on the wavelet transform. The algorithm uses a dual-tree complex discrete wavelet transform followed by shrinkage of the wavelet coefficients based on a statistical estimation of the variance of the noise. The proposed noise reduction algorithm was evaluated by comparing its performance to those of many existing wavelet-based algorithms. The speech transmission index (STI) of the proposed algorithm is significantly better than other tested algorithms for the speech-weighted noise of different levels of signal to noise ratio. The effectiveness of the proposed system was clinically evaluated with CI recipients. A significant improvement in speech perception of 1.9 dB was found on average in speech weighted noise.

Ye, Hua; Deng, Guang; Mauger, Stefan J.; Hersbach, Adam A.; Dawson, Pam W.; Heasman, John M.

2013-01-01

112

Optimal reconstruction of non-symmetric travel time density distributions using a new kernel density estimator  

NASA Astrophysics Data System (ADS)

For typical solute transport applications using particle tracking algorithms, models are run with a limited number of particles, and the estimation of the travel time density becomes an error-prone problem. Densities are nevertheless needed in groundwater applications, for instance to understand mixing, reactions and other phenomena occurring in the subsurface. Kernel density estimators (KDE) provide a convenient way to reconstruct densities from travel time distributions and can be efficiently applied to reconstruct concentration gradients. KDE methods are based on an optimized smoothing algorithm, which improves the estimation of concentrations with respect to traditional methods such as histograms. A limitation of classical KDE methods is that numerical fluctuations occur in the regions where particles are scarce, which, on the other hand, are where concentration mass needs to be estimated most accurately. We propose a new KDE method which automatically improves the estimation of particle travel time densities in the low-particle regions. Our solution allows one to obtain a better reconstruction of the density function without having to increase the number of particles and is especially good for estimating non-Fickian breakthrough curves with pronounced tailing.

Pedretti, Daniele; Fernandez-Garcia, Daniel

2013-04-01
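
As a baseline for what this record improves on, a plain Gaussian KDE reconstruction of a travel-time density looks like this; the log-normal arrivals and Silverman bandwidth are illustrative, and the paper's adaptive low-particle correction is not reproduced:

    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(5)
    travel_times = rng.lognormal(mean=1.0, sigma=0.6, size=500)  # heavy-tailed arrivals

    kde = gaussian_kde(travel_times, bw_method="silverman")
    t = np.linspace(0.0, 20.0, 400)
    btc = kde(t)   # reconstructed travel-time density (breakthrough-curve shape)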

113

NONPARAMETRIC ESTIMATION OF MULTIVARIATE CONVEX-TRANSFORMED DENSITIES.  

PubMed

We study estimation of multivariate densities p of the form p(x) = h(g(x)) for x ∈ ℝ^d and for a fixed monotone function h and an unknown convex function g. The canonical example is h(y) = e^(-y) for y ∈ ℝ; in this case, the resulting class of densities [Formula: see text] is well known as the class of log-concave densities. Other functions h allow for classes of densities with heavier tails than the log-concave class. We first investigate when the maximum likelihood estimator p̂ exists for the class P(h) for various choices of monotone transformations h, including decreasing and increasing functions h. The resulting models for increasing transformations h extend the classes of log-convex densities studied previously in the econometrics literature, corresponding to h(y) = exp(y). We then establish consistency of the maximum likelihood estimator for fairly general functions h, including the log-concave class P(e^(-y)) and many others. In a final section, we provide asymptotic minimax lower bounds for the estimation of p and its vector of derivatives at a fixed point x0 under natural smoothness hypotheses on h and g. The proofs rely heavily on results from convex analysis. PMID:21423877

Seregin, Arseni; Wellner, Jon A

2010-12-01

114

NONPARAMETRIC ESTIMATION OF MULTIVARIATE CONVEX-TRANSFORMED DENSITIES  

PubMed Central

We study estimation of multivariate densities p of the form p(x) = h(g(x)) for x ∈ ℝ^d and for a fixed monotone function h and an unknown convex function g. The canonical example is h(y) = e^(-y) for y ∈ ℝ; in this case, the resulting class of densities P(e^(-y)) = {p = exp(-g) : g is convex} is well known as the class of log-concave densities. Other functions h allow for classes of densities with heavier tails than the log-concave class. We first investigate when the maximum likelihood estimator p̂ exists for the class P(h) for various choices of monotone transformations h, including decreasing and increasing functions h. The resulting models for increasing transformations h extend the classes of log-convex densities studied previously in the econometrics literature, corresponding to h(y) = exp(y). We then establish consistency of the maximum likelihood estimator for fairly general functions h, including the log-concave class P(e^(-y)) and many others. In a final section, we provide asymptotic minimax lower bounds for the estimation of p and its vector of derivatives at a fixed point x0 under natural smoothness hypotheses on h and g. The proofs rely heavily on results from convex analysis.

Seregin, Arseni; Wellner, Jon A.

2011-01-01

115

Neutral wind estimation from 4-D ionospheric electron density images  

NASA Astrophysics Data System (ADS)

We develop a new inversion algorithm for Estimating Model Parameters from Ionospheric Reverse Engineering (EMPIRE). The EMPIRE method uses four-dimensional images of global electron density to estimate the field-aligned neutral wind ionospheric driver when direct measurement is not available. We begin with a model of the electron continuity equation that includes production and loss rate estimates, as well as E × B drift, gravity, and diffusion effects. We use ion, electron, and neutral species temperatures and neutral densities from the Thermosphere Ionosphere Mesosphere Electrodynamics General Circulation Model (TIMEGCM-ASPEN) for estimating the magnitude of these effects. We then model the neutral wind as a power series at a given longitude for a range of latitudes and altitudes. As a test of our algorithm, we have input TIMEGCM electron densities to our algorithm. The model of the neutral wind is computed at hourly intervals and validated by comparing to the “true” TIMEGCM neutral wind fields. We show results for a storm day: 10 November 2004. The agreement between the winds derived from EMPIRE versus the TIMEGCM “true” winds appears to be time-dependent for the day under consideration. This may indicate that the diurnal variation in certain driving processes impacts the accuracy of our neutral wind model. Despite the potential temporal and spatial limits on accuracy, estimating neutral wind speed from measured electron density fields via our algorithm shows great promise as a complement to the more sparse radar and satellite measurements.

Datta-Barua, S.; Bust, G. S.; Crowley, G.; Curtis, N.

2009-06-01

116

Density-ratio robustness in dynamic state estimation  

NASA Astrophysics Data System (ADS)

The filtering problem is addressed by taking into account imprecision in the knowledge about the probabilistic relationships involved. Imprecision is modelled in this paper by a particular closed convex set of probabilities known as the density ratio class or constant odds-ratio (COR) model. The contributions of this paper are the following. First, we define an optimality criterion based on the squared-loss function for the estimates derived from a general closed convex set of distributions. Second, after revising the properties of the density ratio class in the context of parametric estimation, we extend these properties to state estimation accounting for system dynamics. Furthermore, for the case in which the nominal density of the COR model is a multivariate Gaussian, we derive closed-form solutions for the set of optimal estimates and for the credible region. Third, we discuss how to perform Monte Carlo integrations to compute lower and upper expectations from a COR set of densities. We then derive a procedure that, employing Monte Carlo sampling techniques, allows us to propagate in time both the lower and upper state expectation functionals and, thus, to derive an efficient solution of the filtering problem. Finally, we empirically compare the proposed estimator with the Kalman filter. This shows that our solution is more robust to the presence of modelling errors in the system and, hence, appears to be a more realistic approach than the Kalman filter in such cases.

Benavoli, Alessio; Zaffalon, Marco

2013-05-01

117

Density estimation by mixture models with smoothing priors  

PubMed

In the statistical approach for self-organizing maps (SOMs), learning is regarded as an estimation algorithm for a Gaussian mixture model with a Gaussian smoothing prior on the centroid parameters. The values of the hyperparameters and the topological structure are selected on the basis of a statistical principle. However, since the component selection probabilities are fixed to a common value, the centroids concentrate on areas with high data density. This deforms a coordinate system on an extracted manifold and makes smoothness evaluation for the manifold inaccurate. In this article, we study an extended SOM model whose component selection probabilities are variable. To stabilize the estimation, a smoothing prior on the component selection probabilities is introduced. An estimation algorithm for the parameters and the hyperparameters based on empirical Bayesian inference is obtained. The performance of density estimation by the new model and the SOM model is compared via simulation experiments. PMID:9804674

Utsugi

1998-11-15

118

Density Estimation in Several Populations With Uncertain Population Membership  

PubMed Central

We devise methods to estimate probability density functions of several populations using observations with uncertain population membership, meaning from which population an observation comes is unknown. The probability of an observation being sampled from any given population can be calculated. We develop general estimation procedures and bandwidth selection methods for our setting. We establish large-sample properties and study finite-sample performance using simulation studies. We illustrate our methods with data from a nutrition study.

Ma, Yanyuan; Hart, Jeffrey D.; Carroll, Raymond J.

2012-01-01

119

Density Estimation in Several Populations With Uncertain Population Membership.  

PubMed

We devise methods to estimate probability density functions of several populations using observations with uncertain population membership, meaning from which population an observation comes is unknown. The probability of an observation being sampled from any given population can be calculated. We develop general estimation procedures and bandwidth selection methods for our setting. We establish large-sample properties and study finite-sample performance using simulation studies. We illustrate our methods with data from a nutrition study. PMID:22368314

Ma, Yanyuan; Hart, Jeffrey D; Carroll, Raymond J

2011-09-01

120

Power spectral density estimation of randomly sampled partial discharge signals  

Microsoft Academic Search

Results of investigations performed to verify whether the power spectral density function (PSDF), estimated from a sample of partial discharge (PD) pulse signals, can summarize the spectral characteristics of the whole population of discharge pulses generated by a PD phenomenon are reported. The PSDF of sequences of PD pulses produced by either corona or surface discharges, taken as

Alfredo Contin; GianCarlo Montanari; Andrea Cavallini

1998-01-01
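
A generic PSD estimate of a sampled pulse train, via Welch's averaged periodogram, shows the kind of computation under investigation; the sampling rate and crude pulse model are placeholders, and the random-sampling aspect of the record is not addressed here:

    import numpy as np
    from scipy.signal import welch

    rng = np.random.default_rng(6)
    fs = 1.0e6                                   # 1 MHz sampling rate, hypothetical
    t = np.arange(0.0, 0.01, 1.0 / fs)
    pulses = rng.normal(0.0, 0.1, t.size)        # background noise
    pulses[::2000] += 5.0                        # crude stand-in for discharge pulses

    f, psd = welch(pulses, fs=fs, nperseg=1024)  # averaged-periodogram PSD estimate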

121

Multi-pass Density Estimation for Infrared Rendering  

Microsoft Academic Search

The density estimation methods are known to be among the most promising for providing realistic images from 3D scenes. However, in most of these methods, the direct illumination is computed using a raytracing pass which samples each light source. This involves a limitation on the number of light sources that can be handled. This limitation can be removed

Antoine Boudet; Mathias Paulin; Paul Pitot; David Pratmarty

122

Practical Bayesian Density Estimation Using Mixtures Of Normals  

Microsoft Academic Search

In this paper, we propose some solutions to these problems. Our goal is to come up with a simple, practical method for estimating the density. This is an interesting problem in its own right, as well as a first step towards solving other inference problems, such as providing more flexible distributions in hierarchical models. To see why the posterior is improper under the usual reference prior,

Kathryn Roeder

1995-01-01

123

A smooth ROC curve estimator based on log-concave density estimates.  

PubMed

We introduce a new smooth estimator of the ROC curve based on log-concave density estimates of the constituent distributions. We show that our estimate is asymptotically equivalent to the empirical ROC curve if the underlying densities are in fact log-concave. In addition, we empirically show that our proposed estimator exhibits an efficiency gain for finite sample sizes with respect to the standard empirical estimate in various scenarios, and that it is only slightly less efficient, if at all, compared to the fully parametric binormal estimate in case the underlying distributions are normal. The estimator is also quite robust against modest deviations from the log-concavity assumption. We show that bootstrap confidence intervals for the value of the ROC curve at a fixed false positive fraction based on the new estimate are on average shorter compared to the approach by Zhou and Qin (2005), while maintaining coverage probability. Computation of our proposed estimate uses the R package logcondens, which implements univariate log-concave density estimation, and can be done very efficiently using only one line of code. These results lead us to advocate our estimate for a wide range of scenarios. PMID:22611590

Rufibach, Kaspar

2012-01-01

124

Learning of fuzzy cognitive maps using density estimate.  

PubMed

Fuzzy cognitive maps (FCMs) are convenient and widely used architectures for modeling dynamic systems, which are characterized by a great deal of flexibility and adaptability. Several recent works in this area concern strategies for the development of FCMs. Although a few fully automated algorithms to learn these models from data have been introduced, the resulting FCMs are structurally considerably different from those developed by human experts. In particular, maps that were learned from data are much denser (with density over 90%, versus about 40% for maps developed by humans). The sparseness of the maps is associated with their interpretability: the smaller the number of connections, the higher the transparency of the map. To this end, a novel learning approach to FCMs, sparse real-coded genetic algorithms (SRCGAs), is proposed. The method utilizes a density parameter to guide the learning toward the formation of maps of a certain predefined density. Comparative tests carried out on both synthetic and real-world data demonstrate that, given a suitable density estimate, the SRCGA method significantly outperforms other state-of-the-art learning methods. When the density estimate is unknown, the new method can be used in an automated fashion using a default value, and it is still able to produce models whose performance exceeds or equals the performance of the models generated by other methods. PMID:22345544

Stach, Wojciech; Pedrycz, Witold; Kurgan, Lukasz A

2012-02-14

125

Resampling methods for improved wavelet-based multiple hypothesis testing of parametric maps in functional MRI.  

PubMed

Two- or three-dimensional wavelet transforms have been considered as a basis for multiple hypothesis testing of parametric maps derived from functional magnetic resonance imaging (fMRI) experiments. Most of the previous approaches have assumed that the noise variance is equally distributed across levels of the transform. Here we show that this assumption is unrealistic; fMRI parameter maps typically have more similarity to a 1/f-type spatial covariance with greater variance in 2D wavelet coefficients representing lower spatial frequencies, or coarser spatial features, in the maps. To address this issue we resample the fMRI time series data in the wavelet domain (using a 1D discrete wavelet transform [DWT]) to produce a set of permuted parametric maps that are decomposed (using a 2D DWT) to estimate level-specific variances of the 2D wavelet coefficients under the null hypothesis. These resampling-based estimates of the "wavelet variance spectrum" are substituted in a Bayesian bivariate shrinkage operator to denoise the observed 2D wavelet coefficients, which are then inverted to reconstitute the observed, denoised map in the spatial domain. Multiple hypothesis testing controlling the false discovery rate in the observed, denoised maps then proceeds in the spatial domain, using thresholds derived from an independent set of permuted, denoised maps. We show empirically that this more realistic, resampling-based algorithm for wavelet-based denoising and multiple hypothesis testing has good Type I error control and can detect experimentally engendered signals in data acquired during auditory-linguistic processing. PMID:17651989

Sendur, Levent; Suckling, John; Whitcher, Brandon; Bullmore, Ed

2007-06-14

126

Resampling methods for improved wavelet-based multiple hypothesis testing of parametric maps in functional MRI  

PubMed Central

Two- or three-dimensional wavelet transforms have been considered as a basis for multiple hypothesis testing of parametric maps derived from functional magnetic resonance imaging (fMRI) experiments. Most of the previous approaches have assumed that the noise variance is equally distributed across levels of the transform. Here we show that this assumption is unrealistic; fMRI parameter maps typically have more similarity to a 1/f-type spatial covariance with greater variance in 2D wavelet coefficients representing lower spatial frequencies, or coarser spatial features, in the maps. To address this issue we resample the fMRI time series data in the wavelet domain (using a 1D discrete wavelet transform [DWT]) to produce a set of permuted parametric maps that are decomposed (using a 2D DWT) to estimate level-specific variances of the 2D wavelet coefficients under the null hypothesis. These resampling-based estimates of the “wavelet variance spectrum” are substituted in a Bayesian bivariate shrinkage operator to denoise the observed 2D wavelet coefficients, which are then inverted to reconstitute the observed, denoised map in the spatial domain. Multiple hypothesis testing controlling the false discovery rate in the observed, denoised maps then proceeds in the spatial domain, using thresholds derived from an independent set of permuted, denoised maps. We show empirically that this more realistic, resampling-based algorithm for wavelet-based denoising and multiple hypothesis testing has good Type I error control and can detect experimentally engendered signals in data acquired during auditory-linguistic processing.

Sendur, Levent; Suckling, John; Whitcher, Brandon; Bullmore, Ed

2008-01-01

127

Estimating Density Gradients and Drivers from 3D Ionospheric Imaging  

NASA Astrophysics Data System (ADS)

The transition regions at the edges of the ionospheric storm-enhanced density (SED) are important for a detailed understanding of the mid-latitude physical processes occurring during major magnetic storms. At the boundary, the density gradients are evidence of the drivers that link the larger processes of the SED, with its connection to the plasmasphere and prompt-penetration electric fields, to the smaller irregularities that result in scintillations. For this reason, we present our estimates of both the plasma variation with horizontal and vertical spatial scale of 10 - 100 km and the plasma motion within and along the edges of the SED. To estimate the density gradients, we use Ionospheric Data Assimilation Four-Dimensional (IDA4D), a mature data assimilation algorithm that has been developed over several years and applied to investigations of polar cap patches and space weather storms [Bust and Crowley, 2007; Bust et al., 2007]. We use the density specification produced by IDA4D with a new tool for deducing ionospheric drivers from 3D time-evolving electron density maps, called Estimating Model Parameters from Ionospheric Reverse Engineering (EMPIRE). The EMPIRE technique has been tested on simulated data from TIMEGCM-ASPEN and on IDA4D-based density estimates with ongoing validation from Arecibo ISR measurements [Datta-Barua et al., 2009a; 2009b]. We investigate the SED that formed during the geomagnetic super storm of November 20, 2003. We run IDA4D at low-resolution continent-wide, and then re-run it at high (~10 km horizontal and ~5-20 km vertical) resolution locally along the boundary of the SED, where density gradients are expected to be highest. We input the high-resolution estimates of electron density to EMPIRE to estimate the ExB drifts and field-aligned plasma velocities along the boundaries of the SED. We expect that these drivers contribute to the density structuring observed along the SED during the storm. Bust, G. S. and G. Crowley (2007), Tracking of polar cap patches using data assimilation, J. Geophys. Res., 112, A05307, doi:10.1029/2005JA011597. Bust, G. S., G. Crowley, T. W. Garner, T. L. Gaussiran II, R. W. Meggs, C. N. Mitchell, P. S. J. Spencer, P. Yin, and B. Zapfe (2007) ,Four Dimensional GPS Imaging of Space-Weather Storms, Space Weather, 5, S02003, doi:10.1029/2006SW000237. Datta-Barua, S., G. S. Bust, G. Crowley, and N. Curtis (2009a), Neutral wind estimation from 4-D ionospheric electron density images, J. Geophys. Res., 114, A06317, doi:10.1029/2008JA014004. Datta-Barua, S., G. Bust, and G. Crowley (2009b), "Estimating Model Parameters from Ionospheric Reverse Engineering (EMPIRE)," presented at CEDAR, Santa Fe, New Mexico, July 1.

Datta-Barua, S.; Bust, G. S.; Curtis, N.; Reynolds, A.; Crowley, G.

2009-12-01

128

Remarks on Some Recursive Estimators of a Probability Density  

Microsoft Academic Search

The density estimator $f^\ast_n(x) = n^{-1}\sum^n_{j=1} h_j^{-1} K((x - X_j)/h_j)$, as well as the closely related $f^\dagger_n(x) = n^{-1} h_n^{-1/2} \sum^n_{j=1} h_j^{-1/2} K((x - X_j)/h_j)$, are considered. Expressions for the asymptotic bias and variance are developed. Using the almost sure invariance principle, laws of the iterated logarithm are derived. Finally, these results are illustrated with sequential estimation procedures.
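
A minimal numpy rendering of the first estimator above, with a Gaussian kernel and an illustrative bandwidth sequence h_j = c·j^(-1/5); the kernel and bandwidth choices here are ours, not the paper's.

```python
import numpy as np

def recursive_kde(x, samples, c=1.0):
    """f*_n(x) = n^{-1} * sum_j h_j^{-1} K((x - X_j)/h_j), Gaussian K."""
    j = np.arange(1, len(samples) + 1)
    h = c * j ** (-0.2)                       # illustrative h_j -> 0
    u = (x[:, None] - samples[None, :]) / h   # shape (len(x), n)
    K = np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)
    return np.mean(K / h, axis=1)

xs = np.linspace(-4, 4, 200)
data = np.random.default_rng(0).normal(size=500)
fhat = recursive_kde(xs, data)
```

The "recursive" label comes from the one-sample update f*_n = ((n-1)/n) f*_{n-1} + (1/n) h_n^{-1} K((x - X_n)/h_n); the vectorized form above evaluates the same sum in one pass.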

Edward J. Wegman; H. I. Davies

1979-01-01

129

Ridge-line density estimation in digital images  

Microsoft Academic Search

This paper introduces a new efficient method for estimating the local ridge-line density in digital images. A mathematical characterization of the local frequency of sinusoidal signals is given, and a 2D-model is developed in order to approximate the ridge-line patterns. Experimental results obtained through a discrete implementation of the method are presented both in terms of accuracy and efficiency

Dario Maio; Davide Maltoni

1998-01-01

130

An improved method of estimating ionisation density using TLDs  

Microsoft Academic Search

A new method is proposed to determine the ‘effective’ linear energy transfer (LET) in mixed radiation fields, by analysing the ionisation density dependence of the area of peak 8 in the thermoluminescence glow-curves of MTS-7 (7LiF:Mg,Ti) detectors. The dependence of the ratio of the peak 8 area to the peak 5 area on the ‘effective’ LET allows the estimation of the LET to

M. Puchalska; P. Bilski

2008-01-01

131

About the Asymptotic Accuracy of Barron Density Estimates  

Microsoft Academic Search

By extending the information-theoretic arguments of previous papers dealing with the Barron-type density estimates, and their consistency in information divergence and chi-square divergence, the problem of consistency in Csiszár's φ-divergence is motivated for general convex functions φ. The problem of consistency in φ-divergence is solved for all φ with φ(0)

Alain Berlinet; Igor Vajda; Edward C. Van Der Meulen

1998-01-01

132

Estimating the nucleus density of Comet 19P/Borrelly  

Microsoft Academic Search

The nucleus bulk density of Comet 19P/Borrelly has been estimated by modeling the sublimation-induced non-gravitational force acting upon the orbital motion, thereby reproducing the empirical perihelion advance (i.e., the shortening of the orbital period). The nucleus has been modeled as a prolate ellipsoid, covered by various surface activity maps which reproduce the observed water production rate. The theoretical water production

Björn J. R. Davidsson; Pedro J. Gutiérrez

2004-01-01

133

A Brief Survey of Bandwidth Selection for Density Estimation  

Microsoft Academic Search

There has been major progress in recent years in data-based bandwidth selection for kernel density estimation. Some “second generation” methods, including plug-in and smoothed bootstrap techniques, have been developed that are far superior to well-known “first generation” methods, such as rules of thumb, least squares cross-validation, and biased cross-validation. We recommend a “solve-the-equation” plug-in bandwidth selector as being most reliable
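
For a flavor of the "first generation" rules of thumb mentioned above, here is Silverman's normal-reference bandwidth for a Gaussian kernel; the recommended "solve-the-equation" plug-in selector is considerably more involved and is not reproduced here.

```python
import numpy as np

def silverman_bandwidth(x):
    """h = 0.9 * min(std, IQR/1.34) * n^(-1/5) (normal-reference rule)."""
    x = np.asarray(x, dtype=float)
    q75, q25 = np.percentile(x, [75, 25])
    sigma = min(x.std(ddof=1), (q75 - q25) / 1.34)
    return 0.9 * sigma * x.size ** (-0.2)
```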

M. C. Jones; J. S. Marron; S. J. Sheather

1996-01-01

134

Diagnosing osteoporosis: A new perspective on estimating bone density  

NASA Astrophysics Data System (ADS)

Osteoporosis may be characterized by low bone density and its significance is expected to grow as the population of the world both increases and ages. Our purpose here is to model human bone mineral density estimated through dual-energy x-ray absorptiometry, using local volumetric distance spline interpolants. Interpolating the values means the construction of a function F(x,y,z) that mimics the relationship implied by the data (x_i, y_i, z_i; f_i), in such a way that F(x_i, y_i, z_i) = f_i, i = 1, 2, …, n, where x, y and z represent, respectively, age, weight and height. This strategy greatly enhances the ability to accurately express the patient's bone density measurements, with the potential to become a framework for bone densitometry in clinical practice. The usefulness of our model is demonstrated in 424 patients and the relevance of our results for diagnosing osteoporosis is discussed.
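
Purely as an illustration of the interpolation condition F(x_i, y_i, z_i) = f_i, the snippet below fits an interpolant through synthetic (age, weight, height; density) points. SciPy's RBFInterpolator stands in for the paper's local volumetric distance splines, and all data are synthetic placeholders.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
pts = rng.uniform([20, 45, 1.40], [90, 110, 2.00], size=(424, 3))  # age, weight, height
bmd = rng.normal(1.0, 0.15, size=424)        # synthetic bone mineral density values

F = RBFInterpolator(pts, bmd)                # interpolates: F(pts) reproduces bmd
estimate = F(np.array([[55.0, 70.0, 1.68]])) # query a new (age, weight, height)
```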

Cassia-Moura, R.; Ramos, A. D.; Sousa, C. S.; Nascimento, T. A. S.; Valença, M. M.; Coelho, L. C. B. B.; Melo, S. B.

2007-07-01

135

A contact algorithm for density-based load estimation.  

PubMed

An algorithm, which includes contact interactions within a joint, has been developed to estimate the dominant loading patterns in joints based on the density distribution of bone. The algorithm is applied to the proximal femur of a chimpanzee, gorilla and grizzly bear and is compared to the results obtained in a companion paper that uses a non-contact (linear) version of the density-based load estimation method. Results from the contact algorithm are consistent with those from the linear method. While the contact algorithm is substantially more complex than the linear method, it has some added benefits. First, since contact between the two interacting surfaces is incorporated into the load estimation method, the pressure distributions selected by the method are more likely indicative of those found in vivo. Thus, the pressure distributions predicted by the algorithm are more consistent with the in vivo loads that were responsible for producing the given distribution of bone density. Additionally, the relative positions of the interacting bones are known for each pressure distribution selected by the algorithm. This should allow the pressure distributions to be related to specific types of activities. The ultimate goal is to develop a technique that can predict dominant joint loading patterns and relate these loading patterns to specific types of locomotion and/or activities. PMID:16439233

Bona, Max A; Martin, Larry D; Fischer, Kenneth J

2006-01-01

136

Semiautomatic estimation of breast density with DM-Scan software.  

PubMed

OBJECTIVE: To evaluate the reproducibility of the calculation of breast density with DM-Scan software, which is based on the semiautomatic segmentation of fibroglandular tissue, and to compare it with the reproducibility of estimation by visual inspection. MATERIAL AND METHODS: The study included 655 direct digital mammograms acquired using craniocaudal projections. Three experienced radiologists analyzed the density of the mammograms using DM-Scan, and the inter- and intra-observer agreements between pairs of radiologists for the Boyd and BI-RADS(®) scales were calculated using the intraclass correlation coefficient. The Kappa index was used to compare the inter- and intra-observer agreements with those obtained previously for visual inspection in the same set of images. RESULTS: For visual inspection, the mean interobserver agreement was 0.876 (95% CI: 0.873-0.879) on the Boyd scale and 0.823 (95% CI: 0.818-0.829) on the BI-RADS(®) scale. The mean intraobserver agreement was 0.813 (95% CI: 0.796-0.829) on the Boyd scale and 0.770 (95% CI: 0.742-0.797) on the BI-RADS(®) scale. For DM-Scan, the mean inter- and intra-observer agreement was 0.92, considerably higher than the agreement for visual inspection. CONCLUSION: The semiautomatic calculation of breast density using DM-Scan software is more reliable and reproducible than visual estimation and reduces the subjectivity and variability in determining breast density. PMID:23489767

Martínez Gómez, I; Casals El Busto, M; Antón Guirao, J; Ruiz Perales, F; Llobet Azpitarte, R

2013-03-01

137

Wavelet based approaches for efficient compression of complex SAR image data  

Microsoft Academic Search

New wavelet based approaches for efficient data compression of complex SAR images with high reconstruction quality are presented. These approaches utilize either a polar format representation to compress magnitude and phase information of the complex SAR images, separately by different compression schemes, or use a Fourier transform scheme to convert the complex image data format to a real data format

M. Brandfass; W. Coster; U. Benz; A. Moreira

1997-01-01

138

Rotary kiln flame image segmentation based on FCM and gabor wavelet based texture coarseness  

Microsoft Academic Search

This paper presents an improved segmentation algorithm for flame images of the rotary kiln burning zone, based on Gabor wavelet based texture coarseness and the Fuzzy C-Means (FCM) clustering algorithm. First, the flame image is analysed in detail and divided into four areas (flame area, material area, illuminated area, background area) according to expert experience, and threshold segmentation is applied in order to get rid of

Peng Sun; Tianyou Chai; Xiao-jie Zhou

2008-01-01

139

Revisiting multifractality of high-resolution temporal rainfall using a wavelet-based formalism  

Microsoft Academic Search

We reexamine the scaling structure of temporal rainfall using wavelet-based methodologies which, as we demonstrate, offer important advantages compared to the more traditional multifractal approaches such as box counting and structure function techniques. In particular, we explore two methods based on the Continuous Wavelet Transform (CWT) and the Wavelet Transform Modulus Maxima (WTMM): the partition function method and the newer

V. Venugopal; Stéphane G. Roux; Efi Foufoula-Georgiou; Alain Arneodo

2006-01-01

140

Stable Path Tracking Control of a Mobile Robot Using a Wavelet Based Fuzzy Neural Network  

Microsoft Academic Search

In this paper, we propose a wavelet based fuzzy neural network (WFNN) based direct adaptive control scheme for the solution of the tracking problem of mobile robots. To design a controller, we present a WFNN structure that merges the advantages of the neural network, fuzzy model and wavelet transform. The basic idea of our WFNN structure is to realize the

Joon Seop Oh; Jin Bae Park; Yoon Ho Choi

2005-01-01

141

Multiresolution analysis on zero-dimensional Abelian groups and wavelets bases  

SciTech Connect

For a locally compact zero-dimensional group (G, +̇), we build a multiresolution analysis and put forward an algorithm for constructing orthogonal wavelet bases. A special case is indicated when a wavelet basis is generated from a single function through contractions, translations and exponentiations. Bibliography: 19 titles.

Lukomskii, Sergei F [Saratov State University, Saratov (Russian Federation)

2010-06-29

142

Matrix-free application of Hamiltonian operators in Coifman wavelet bases  

Microsoft Academic Search

A means of evaluating the action of Hamiltonian operators on functions expanded in orthogonal compact support wavelet bases is developed, avoiding the direct construction and storage of operator matrices that complicate extension to coupled multidimensional quantum applications. Application of a potential energy operator is accomplished by simple multiplication of the two sets of expansion coefficients without any convolution. The errors

Ramiro Acevedo; Richard Lombardini; Bruce R. Johnson

2010-01-01

143

Wavelet-based denoising using subband dependent threshold for ECG signals  

Microsoft Academic Search

This paper employs a wavelet-based denoising technique for the recovery of a signal contaminated by additive white Gaussian noise and investigates the noise-free reconstruction property of the universal threshold. A new thresholding procedure, called subband adaptive, is proposed. The parameters of this procedure are chosen by a difference-in-means method. Simulations are carried out in MATLAB using various ECG signals. The

S. Poornachandra

2008-01-01

144

Bivariate shrinkage functions for wavelet-based denoising exploiting interscale dependency  

Microsoft Academic Search

Most simple nonlinear thresholding rules for wavelet-based denoising assume that the wavelet coefficients are independent. However, wavelet coefficients of natural images have significant dependencies. We only consider the dependencies between the coefficients and their parents in detail. For this purpose, new non-Gaussian bivariate distributions are proposed, and corresponding nonlinear threshold functions (shrinkage functions) are derived from the models using Bayesian
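
The bivariate shrinkage rule associated with this line of work shrinks a child coefficient y1 using its parent y2. A minimal sketch follows, where sigma_n is the noise standard deviation and sigma a local signal standard deviation, both assumed known here for simplicity.

```python
import numpy as np

def bivariate_shrink(y1, y2, sigma_n, sigma, eps=1e-12):
    """w1 = y1 * max(sqrt(y1^2 + y2^2) - sqrt(3)*sigma_n^2/sigma, 0) / sqrt(y1^2 + y2^2)."""
    r = np.sqrt(y1 ** 2 + y2 ** 2)
    gain = np.maximum(r - np.sqrt(3.0) * sigma_n ** 2 / sigma, 0.0) / (r + eps)
    return y1 * gain
```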

Levent Sendur; Ivan W. Selesnick

2002-01-01

145

Interpretation and improvement of an iterative wavelet-based denoising method  

Microsoft Academic Search

The goal of this paper is to shed new light on a wavelet-based denoising method developed by Hadjileontiadis et al. (1997, 2000) which is derived from an iterative denoising algorithm by Coifman and Wickerhauser (1995, 1998). The underlying algorithm is revisited and interpreted as a fixed-point algorithm. This allows us to derive a new version of the algorithm largely increasing

R. Ranta; C. Heinrich; Valérie Louis-Dorr; D. Wolf

2003-01-01

146

A wavelet-based technique for discrimination between faults and magnetising inrush currents in transformers  

Microsoft Academic Search

Summary form only given, as follows. This paper presents the development of a wavelet-based scheme, for distinguishing between transformer inrush currents and power system fault currents, which proved to provide a reliable, fast and computationally efficient tool. The operating time of the scheme is less than half power frequency cycle (based on 5 kHz sampling rate). In this work, a

O. A. S. Youssef

2002-01-01

147

A Wavelet-Based Technique for Discrimination between Faults and Magnetising Inrush Currents in Transformers  

Microsoft Academic Search

This paper presents the development of a wavelet-based scheme for distinguishing between transformer inrush currents and power system fault currents, which proved to provide a reliable, fast, and computationally efficient tool. The operating time of the scheme is less than half power frequency cycle (based on a 5 kHz sampling rate). In this work, the wavelet transform concept is presented.

O. A. S. Youssef

2002-01-01

148

Improved early stroke detection: Wavelet-based perception enhancement of computerized tomography exams  

Microsoft Academic Search

Nonenhanced computerized tomography (CT) exams were used to detect acute stroke by identifying hypodense areas. An improvement in infarction perception through data denoising and local contrast enhancement in the multi-scale domain was proposed. The wavelet-based image processing method enhanced the subtlest signs of hypodensity, which were often invisible in standard CT scan review. Thus improved detection efficiency of perceptual ischemic changes was

A. Przelaskowski; K. Sklinda; P. Bargieł; J. Walecki; M. Biesiadko-Matuszewska; M. Kazubek

2007-01-01

149

Wavelet-based method for calculating elastic band gaps of two-dimensional phononic crystals  

Microsoft Academic Search

A wavelet-based method is developed to calculate elastic band gaps of two-dimensional phononic crystals. The wave field is expanded in the wavelet basis and an equivalent eigenvalue problem is derived in a matrix form involving the adaptive computation of integrals of the wavelets. The method is applied to a binary system. We first compute the band gaps of Au cylinders

Zhi-Zhong Yan; Yue-Sheng Wang

2006-01-01

150

High-quality low-complexity wavelet-based compression algorithm for audio signals  

Microsoft Academic Search

Wavelets have recently emerged as a powerful tool for signal compression, particularly in the areas of image, video, and audio compression. In this paper, we present a low-complexity wavelet-based audio compression algorithm that is capable of handling fairly arbitrary audio sources. The algorithm transforms the incoming audio data into the wavelet domain, and compresses data by exploring redundancy in the

M. Abo-Zahhad; A. Al-Smadi; S. M. Ahmed

2004-01-01

151

Wavelet-based method to disentangle transcription- and replication-associated strand asymmetries in mammalian genomes  

Microsoft Academic Search

During genome evolution, the two strands of the DNA double helix are not subjected to the same mutation patterns. This mutation bias is considered as a by-product of replicative and transcriptional activities. In this paper, we develop a wavelet-based methodology to analyze the DNA strand asymmetry profiles with the specific goal to extract the contributions associated with replication and transcription

Antoine Baker; Samuel Nicolay; Lamia Zaghloul; Yves d'Aubenton-Carafa; Claude Thermes; Benjamin Audit; Alain Arneodo

2010-01-01

152

A wavelet based de-noising technique for ocular artifact correction of the electroencephalogram  

Microsoft Academic Search

This paper investigates wavelet-based denoising of the electroencephalogram (EEG) signal to correct for the presence of the ocular artifact (OA). The proposed technique is based on an over-complete wavelet expansion of the EEG as follows: i) a stationary wavelet transform (SWT) is applied to the corrupted EEG; ii) the thresholding of the coefficients in the lower frequency bands
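
A rough sketch of steps i)-ii), assuming PyWavelets; the wavelet, level and threshold rule are illustrative stand-ins for the paper's actual settings, and pywt.swt requires the signal length to be divisible by 2**level.

```python
import numpy as np
import pywt

def swt_threshold(eeg, wavelet="coif3", level=5, k=1.5):
    # Stationary wavelet transform; len(eeg) must be divisible by 2**level.
    coeffs = pywt.swt(eeg, wavelet, level=level)
    cleaned = []
    for cA, cD in coeffs:
        t = k * np.median(np.abs(cD)) / 0.6745     # MAD-based threshold, illustrative
        cleaned.append((cA, pywt.threshold(cD, t, mode="soft")))
    return pywt.iswt(cleaned, wavelet)
```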

Tatjana Zikov; Stéphane Bibian; G. A. Dumont; Mihai Huzmezan; Craig R. Ries

2002-01-01

153

Wavelet-based fMRI analysis: 3-D denoising, signal separation, and validation metrics.  

PubMed

We present a novel integrated wavelet-domain based framework (w-ICA) for 3-D denoising functional magnetic resonance imaging (fMRI) data followed by source separation analysis using independent component analysis (ICA) in the wavelet domain. We propose the idea of a 3-D wavelet-based multi-directional denoising scheme where each volume in a 4-D fMRI data set is sub-sampled using the axial, sagittal and coronal geometries to obtain three different slice-by-slice representations of the same data. The filtered intensity value of an arbitrary voxel is computed as an expected value of the denoised wavelet coefficients corresponding to the three viewing geometries for each sub-band. This results in a robust set of denoised wavelet coefficients for each voxel. Given the de-correlated nature of these denoised wavelet coefficients, it is possible to obtain more accurate source estimates using ICA in the wavelet domain. The contributions of this work can be realized as two modules: First, in the analysis module we combine a new 3-D wavelet denoising approach with signal separation properties of ICA in the wavelet domain. This step helps obtain an activation component that corresponds closely to the true underlying signal, which is maximally independent with respect to other components. Second, we propose and describe two novel shape metrics for post-ICA comparisons between activation regions obtained through different frameworks. We verified our method using simulated as well as real fMRI data and compared our results against the conventional scheme (Gaussian smoothing+spatial ICA: s-ICA). The results show significant improvements based on two important features: (1) preservation of shape of the activation region (shape metrics) and (2) receiver operating characteristic curves. It was observed that the proposed framework was able to preserve the actual activation shape in a consistent manner even for very high noise levels in addition to significant reduction in false positive voxels. PMID:21034833

Khullar, Siddharth; Michael, Andrew; Correa, Nicolle; Adali, Tulay; Baum, Stefi A; Calhoun, Vince D

2010-10-26

154

Wavelet-based denoising with nearly arbitrarily shaped windows  

Microsoft Academic Search

The estimation of the signal variance in a noisy environment is a critical issue in denoising. The signal variance is simply but effectively obtained by the locally adaptive window-based maximum likelihood or the maximum a posteriori estimate. The size of the locally adaptive window is also an important factor in estimating the signal variance. In this letter, we propose a
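
As an illustration of a window-based variance estimate feeding a shrinkage rule, the sketch below uses a square moving-average window and a Wiener-type gain; the window handling and the gain are our simplifications, not the letter's exact estimator (requires NumPy and SciPy).

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_wiener(coeffs, sigma_n, window=7):
    local_power = uniform_filter(coeffs ** 2, size=window)   # E[y^2] in the window
    sigma2 = np.maximum(local_power - sigma_n ** 2, 0.0)     # ML signal variance
    return coeffs * sigma2 / (sigma2 + sigma_n ** 2)         # Wiener-type shrinkage
```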

Il Kyu Eom; Yoo Shin Kim

2004-01-01

155

Parametric estimation of the cross-power spectral density  

NASA Astrophysics Data System (ADS)

A new cross-spectral analysis procedure is proposed for the parametric estimation of the relationship between two time sequences in the frequency domain. In this method, the two observable outputs are modeled as a pair of autoregressive moving-average and moving-average (ARMAMA) models under the assumption that the two outputs are driven by a common input and independent ones simultaneously. Cross- and auto-power spectral densities (PSDs) of a pair of ARMAMA models can be derived as forms of rational polynomial functions. The coefficients of these functions can be estimated from the cross-correlation function or the auto-correlation functions of the two observed sequences by using the method presented in this paper. The main advantage of the present procedure is that the physical parameters of an unknown system can be easily estimated from the coefficients of the cross- and auto-PSD functions. To illustrate the effectiveness of the proposed procedure, numerical and practical examples of a mechanical vibration problem are analyzed. The results show that the proposed procedure gives accurate cross- and auto-PSD estimates. Moreover, the physical properties of the unknown system can be well estimated from the obtained cross- and auto-PSDs.

Kanazawa, Kenji; Hirata, Kazuta

2005-04-01

156

A projection and density estimation method for knowledge discovery.  

PubMed

A key ingredient of modern data analysis is probability density estimation. However, it is well known that the curse of dimensionality prevents a proper estimation of densities in high dimensions. The problem is typically circumvented by using a fixed set of assumptions about the data, e.g., by assuming partial independence of features, data on a manifold, or a customized kernel. These fixed assumptions limit the applicability of a method. In this paper we propose a framework that uses a flexible set of assumptions instead. It allows a model to be tailored to various problems by means of 1d-decompositions. The approach achieves a fast runtime and is not limited by the curse of dimensionality, as all estimations are performed in 1d-space. The wide range of applications is demonstrated on two very different real-world examples. The first is a data mining software that allows the fully automatic discovery of patterns. The software is publicly available for evaluation. As a second example, an image segmentation method is realized. It achieves state-of-the-art performance on a benchmark dataset although it uses only a fraction of the training data and very simple features. PMID:23049675

Stanski, Adam; Hellwich, Olaf

2012-10-01

157

An Efficient Adaptive Thresholding Technique for Wavelet Based Image Denoising  

Microsoft Academic Search

This framework describes a computationally more efficient and adaptive threshold estimation method for image denoising in the wavelet domain based on Generalized Gaussian Distribution (GGD) modeling of subband coefficients. In this proposed method, the choice of the threshold estimation is carried out by analysing the statistical parameters of the wavelet subband coefficients like standard deviation, arithmetic mean and geometrical
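
A related, standard construction (BayesShrink-style) conveys the idea of driving a subband threshold from subband statistics; it is not necessarily the exact GGD-based estimator proposed above.

```python
import numpy as np

def noise_std_mad(diag_subband):
    """Noise std from the finest diagonal subband's median absolute deviation."""
    return np.median(np.abs(diag_subband)) / 0.6745

def subband_threshold(subband, sigma_n):
    """T = sigma_n^2 / sigma_x with sigma_x^2 = max(var(y) - sigma_n^2, 0)."""
    sigma_x = np.sqrt(max(subband.var() - sigma_n ** 2, 0.0))
    return np.inf if sigma_x == 0.0 else sigma_n ** 2 / sigma_x
```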

D. Gnanadurai; V. Sadasivam

2006-01-01

158

Effect of Random Clustering on Surface Damage Density Estimates  

SciTech Connect

Identification and spatial registration of laser-induced damage relative to incident fluence profiles is often required to characterize the damage properties of laser optics near damage threshold. Of particular interest in inertial confinement laser systems are large aperture beam damage tests (>1 cm²) where the number of initiated damage sites for φ > 14 J/cm² can approach 10⁵-10⁶, requiring automatic microscopy counting to locate and register individual damage sites. However, as was shown for the case of bacteria counting in biology decades ago, random overlapping or 'clumping' prevents accurate counting of Poisson-distributed objects at high densities, and must be accounted for if the underlying statistics are to be understood. In this work we analyze the effect of random clumping on damage initiation density estimates at fluences above damage threshold. The parameter ψ = aρ = ρ/ρ₀, where a = 1/ρ₀ is the mean damage site area and ρ is the mean number density, is used to characterize the onset of clumping, and approximations based on a simple model are used to derive an expression for clumped damage density vs. fluence and damage site size. The influence of the uncorrected ρ vs. φ curve on damage initiation probability predictions is also discussed.

Matthews, M J; Feit, M D

2007-10-29

159

Dust-cloud density estimation using a single wavelength lidar  

NASA Astrophysics Data System (ADS)

The passage of commercial and military aircraft through invisible fresh volcanic ash clouds has caused damage to many airplanes. On December 15, 1989, all four engines of a KLM Boeing 747 were temporarily extinguished in a flight over Alaska, resulting in $80 million for repair. Similar aircraft damage to control systems, FLIR/EO windows, wind screens, radomes, aircraft leading edges, and aircraft data systems were reported in Operation Desert Storm during combat flights through high-explosive and naturally occurring desert dusts. The Defense Nuclear Agency is currently developing a compact and rugged lidar under the Aircraft Sensors Program to detect and estimate the mass density of nuclear-explosion produced dust clouds, high-explosive produced dust clouds, and fresh volcanic dust clouds at horizontal distances of up to 40 km from an aircraft. Given this mass density information, the pilot has an option of avoiding or flying through the upcoming cloud.

Youmans, Douglas G.; Garner, Richard; Petersen, Kent R.

1994-09-01

160

An Automated Approach for Estimation of Breast Density  

PubMed Central

Breast density is a strong risk factor for breast cancer; however, no standard assessment method exists. An automated breast density method (ABDM) was modified and compared with a semi-automated user-assisted display method (CM) and the Breast Imaging Reporting and Data System (BI-RADS) four-category tissue composition measure for their ability to predict future breast cancer risk. The three estimation methods were evaluated in a matched breast cancer case (n=372) control (n=713) study at the Mayo Clinic using digitized film mammograms. Mammograms from the craniocaudal (CC) view of the noncancerous breast were acquired on average seven years before diagnosis. Two controls with no prior history of breast cancer from the screening practice were matched to each case on age, number of prior screening mammograms, final screening exam date, menopausal status at this date, interval between earliest and latest available mammograms, and residence. Both Pearson linear correlation (R) and Spearman rank correlation ( r ) coefficients were used for comparing the three methods where appropriate. Conditional logistic regression was used to estimate the risk of breast cancer (odds ratios [ORs] and 95% confidence intervals [CIs]) associated with the quartiles of percent density (ABDM, CM) or BI-RADS category. The area under the receiver operator characteristic curve (AUC) was estimated and used to compare the discriminatory capabilities of each approach. The continuous measures ABDM and CM were highly correlated with each other (R=0.70) but less with BI-RADS (r=0.49 for ABDM and r=0.57 for CM). Risk estimates associated with the lowest to highest quartiles of ABDM were greater in magnitude (ORs: 1.0[ref], 2.3, 3.0, 5.2, p-trend<0.001) than the corresponding quartiles for CM (ORs: 1.0[ref], 1.7, 2.1 and 3.8; p-trend<0.001) and BI-RADS (ORs: 1.0[ref], 1.6, 1.5, 2.6; p-trend<0.001) methods. However, all methods similarly discriminated between case and control status: AUCs were 0.64, 0.63 and 0.61 for ABDM, CM and BI-RADS, respectively. The ABDM is a viable option for quantitatively assessing breast density from digitized film mammograms.

Heine, John J.; Carston, Michael J.; Scott, Christopher G.; Brandt, Kathleen R.; Wu, Fang-Fang; Pankratz, V. Shane; Sellers, Thomas A.; Vachon, Celine M.

2009-01-01

161

On the Risk of Histograms for Estimating Decreasing Densities  

Microsoft Academic Search

Suppose we want to estimate an element $f$ of the space $\Theta$ of all decreasing densities on the interval $[a; a + L]$ satisfying $f(a^+) \leq H$ from $n$ independent observations. We prove that a suitable histogram $\hat{f}_n$ with unequal bin widths will achieve the following risk: $\sup_{f \in \Theta} \mathbb{E}_f \big[ \int |\hat{f}_n(x) - f(x)| dx \big] \leq 1.89(S/n)^{1/3}$

Lucien Birge

1987-01-01

162

A Nonparametric Estimate of a Multivariate Density Function  

Microsoft Academic Search

Let $x_1, \cdots, x_n$ be independent observations on a $p$-dimensional random variable $X = (X_1, \cdots, X_p)$ with absolutely continuous distribution function $F(x_1, \cdots, x_p)$. An observation $x_i$ on $X$ is $x_i = (x_{1i}, \cdots, x_{pi})$. The problem considered here is the estimation of the probability density function $f(x_1, \cdots, x_p)$ at a point $z = (z_1, \cdots, z_p)$ where

D. O. Loftsgaarden; C. P. Quesenberry

1965-01-01

163

Hierarchical Multiscale Adaptive Variable Fidelity Wavelet-based Turbulence Modeling with Lagrangian Spatially Variable Thresholding  

NASA Astrophysics Data System (ADS)

The current work develops a wavelet-based adaptive variable fidelity approach that integrates Wavelet-based Direct Numerical Simulation (WDNS), Coherent Vortex Simulations (CVS), and Stochastic Coherent Adaptive Large Eddy Simulations (SCALES). The proposed methodology employs the notion of spatially and temporally varying wavelet thresholding combined with hierarchical wavelet-based turbulence modeling. The transition between WDNS, CVS, and SCALES regimes is achieved through two-way physics-based feedback between the modeled SGS dissipation (or other dynamically important physical quantity) and the spatial resolution. The feedback is based on spatio-temporal variation of the wavelet threshold, where the thresholding level is adjusted on the fly depending on the deviation of local significant SGS dissipation from the user prescribed level. This strategy overcomes a major limitation of all previously existing wavelet-based multi-resolution schemes: the global thresholding criterion, which does not fully utilize the spatial/temporal intermittency of the turbulent flow. Hence, the aforementioned concept of physics-based spatially variable thresholding in the context of wavelet-based numerical techniques for solving PDEs is established. The procedure consists of tracking the wavelet thresholding-factor within a Lagrangian frame by exploiting a Lagrangian Path-Line Diffusive Averaging approach based on either linear averaging along characteristics or direct solution of the evolution equation. This innovative technique represents a framework of continuously variable fidelity wavelet-based space/time/model-form adaptive multiscale methodology. This methodology has been tested and has provided very promising results on a benchmark with a time-varying user prescribed level of SGS dissipation. In addition, a long-time effort to develop a novel parallel adaptive wavelet collocation method for the numerical solution of PDEs has been completed during the course of the current work. The scalability and speedup studies of this powerful parallel PDE solver are performed on various architectures. Furthermore, Reynolds scaling of active spatial modes of both CVS and SCALES of linearly forced homogeneous turbulence at high Reynolds numbers is investigated for the first time. This computational complexity study, by demonstrating a very promising slope for Reynolds scaling of SCALES even at a constant level of fidelity for SGS dissipation, proves the argument that SCALES, as a dynamically adaptive turbulence modeling technique, can offer a plethora of flexibilities in hierarchical multiscale space/time adaptive variable fidelity simulations of high Reynolds number turbulent flows.

Nejadmalayeri, Alireza

164

Wavelet-based image denoising using generalized cross validation  

NASA Astrophysics Data System (ADS)

De-noising algorithms based on wavelet thresholding replace small wavelet coefficients by zero and keep or shrink the coefficients with absolute value above the threshold. The optimal threshold minimizes the error of the result as compared to the unknown, exact data. To estimate this optimal threshold, we use generalized cross validation. This procedure does not require an estimation for the noise energy. Originally, this method assumes uncorrelated noise, and an orthogonal wavelet transform. In this paper we investigate the possibilities of this method for less restrictive conditions.
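
The generalized cross validation score for a soft threshold t can be sketched as follows; the grid search is illustrative, and the stand-in coefficient vector replaces a real wavelet decomposition.

```python
import numpy as np

def soft(y, t):
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)

def gcv(y, t):
    """GCV(t) = (1/N)||y - y_t||^2 / (N0/N)^2, N0 = number of coefficients zeroed."""
    yt = soft(y, t)
    n = y.size
    n0 = np.count_nonzero(yt == 0.0)
    return np.inf if n0 == 0 else (np.sum((y - yt) ** 2) / n) / (n0 / n) ** 2

y = np.random.default_rng(1).normal(size=4096)   # stand-in wavelet coefficients
ts = np.linspace(1e-3, 4.0, 200)
t_star = ts[np.argmin([gcv(y, t) for t in ts])]  # approximates the minimum-risk threshold
```

The appeal, as the abstract notes, is that minimizing this score needs no estimate of the noise energy.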

Jansen, Maarten; Bultheel, Adhemar

1997-04-01

165

A WAVELET-BASED PITCH DETECTOR FOR MUSICAL SIGNALS  

Microsoft Academic Search

Physical modelling of musical instruments is one possible approach to digital sound synthesis techniques. By the term physical modelling, we refer to the simulation of the sound production mechanism of a musical instrument, which is modelled with reference to the physics using wave-guides. One of the fundamental parameters of such a physical model is the pitch, and so pitch period estimation

John Fitch; Wafaa Shabana

1999-01-01

166

An improved wavelet-based speech enhancement system  

Microsoft Academic Search

The problem of speech enhancement using a wavelet thresholding algorithm is considered. Major problems in applying the basic algorithm are discussed and modifications are proposed to improve the method. First, we propose the use of different thresholds for different wavelet bands. Next, by employing a pause detection algorithm, the noise profile is estimated and the thresholds are adapted. This enables the modified

Hamid Sheikhzadeh; Hamid Reza Abutalebi

2001-01-01

167

WAVELET BASED REAL-TIME SMOKE DETECTION IN VIDEO  

Microsoft Academic Search

A method for smoke detection in video is proposed. It is assumed that the camera monitoring the scene is stationary. Since the smoke is semi-transparent, edges of image frames start losing their sharpness and this leads to a decrease in the high frequency content of the image. To determine the smoke in the field of view of the camera, the background of the scene is estimated and the decrease of high frequency energy of the scene is monitored using the spatial wavelet
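
A toy version of the energy cue described above, assuming PyWavelets and grayscale frames; the score rises as the frame's high-frequency (detail) energy drops below that of the estimated background.

```python
import numpy as np
import pywt

def detail_energy(img, wavelet="haar"):
    _, (cH, cV, cD) = pywt.dwt2(img, wavelet)
    return float(np.sum(cH ** 2) + np.sum(cV ** 2) + np.sum(cD ** 2))

def smoke_score(frame, background, eps=1e-9):
    """Approaches 1 as the frame's high-frequency energy falls below the background's."""
    return 1.0 - detail_energy(frame) / (detail_energy(background) + eps)
```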

A. Enis Cetin

2005-01-01

168

Total variation versus wavelet-based methods for image denoising in fluorescence lifetime imaging microscopy.  

PubMed

We report the first application of wavelet-based denoising (noise removal) methods to time-domain box-car fluorescence lifetime imaging microscopy (FLIM) images and compare the results to novel total variation (TV) denoising methods. Methods were tested first on artificial images and then applied to low-light live-cell images. Relative to undenoised images, TV methods could improve lifetime precision up to 10-fold in artificial images, while preserving the overall accuracy of lifetime and amplitude values of a single-exponential decay model and improving local lifetime fitting in live-cell images. Wavelet-based methods were at least 4-fold faster than TV methods, but could introduce significant inaccuracies in recovered lifetime values. The denoising methods discussed can potentially enhance a variety of FLIM applications, including live-cell, in vivo animal, or endoscopic imaging studies, especially under challenging imaging conditions such as low-light or fast video-rate imaging. PMID:22415891

Chang, Ching-Wei; Mycek, Mary-Ann

2012-03-13

169

Adaptive mammographic image feature enhancement using wavelet-based multiresolution analysis  

NASA Astrophysics Data System (ADS)

This paper presents a novel and computationally efficient approach to adaptive mammographic image feature enhancement using wavelet-based multiresolution analysis. Upon wavelet decomposition applied to a given mammographic image, we integrate the information of the tree-structured zero-crossings of wavelet coefficients and the information of the low-pass filtered subimage to enhance the desired image features. A discrete wavelet transform with pyramidal structure has been employed to speed up the computation for wavelet decomposition and reconstruction. The spatio-frequency localization property of the wavelet transform is exploited based on the spatial coherence of the image and the principles of the human psychovisual mechanism. Preliminary results show that the proposed approach is able to adaptively enhance local edge features, suppress noise, and improve global visualization of mammographic image features. This wavelet-based multiresolution analysis is therefore promising for computerized mass screening of mammograms.

Chen, Lulin; Chen, Chang W.; Parker, Kevin J.

1996-03-01

170

New temporal filtering scheme to reduce delay in wavelet-based video coding.  

PubMed

Scalability is an important desirable property of video codecs. Wavelet-based motion-compensated temporal filtering provides the most powerful scheme for scalable video coding and provides high-compression efficiency that competes with the current state-of-the-art codecs. However, the delay introduced by the temporal filtering schemes is sometimes very high, which makes them unsuitable for many real-time applications. In this paper, we propose a new temporal filter set to minimize delay in 3-D wavelet-based video coding. The new filter set gives a performance on par with existing longer filters. The length of the filter can vary from two to any number of frames depending on delay requirements. If the frames are processed as separate groups of frames (GOFs), the proposed filter set will not have any boundary effects at the GOF. Experimental results are presented and conclusions are drawn. PMID:18092592
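
For intuition, the shortest (two-frame, Haar-like) member of such a temporal filter family can be sketched as an orthonormal analysis/synthesis pair; this illustrates the general idea only and is not the authors' filter set.

```python
import numpy as np

def haar_temporal_pair(f0, f1):
    """Orthonormal two-frame temporal transform: low-pass l, high-pass h."""
    h = (f1 - f0) / np.sqrt(2.0)   # temporal detail
    l = (f0 + f1) / np.sqrt(2.0)   # temporal average
    return l, h

def haar_temporal_inverse(l, h):
    """Exactly recovers (f0, f1) from (l, h)."""
    return (l - h) / np.sqrt(2.0), (l + h) / np.sqrt(2.0)
```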

Seran, Vidhya; Kondi, Lisimachos P

2007-12-01

171

Research of the wavelet based ECW remote sensing image compression technology  

NASA Astrophysics Data System (ADS)

This paper mainly studies wavelet-based ECW remote sensing image compression technology. Compared with the traditional compression technology JPEG and the newer wavelet-based JPEG2000, the ER Mapper Compressed Wavelet (ECW) format shows significant advantages when compressing very large remote sensing images. The use of the ECW SDK is also discussed, and it proves to be the fastest way to compress China-Brazil Earth Resources Satellite (CBERS) images.

Zhang, Lan; Gu, Xingfa; Yu, Tao; Dong, Yang; Hu, Xinli; Xu, Hua

2007-11-01

172

An Integrated Course on Wavelet-Based Image Compression - Learning Abstract Information Theory on Visual Data  

Microsoft Academic Search

We describe the implementation of and our experiences with a capstone course on wavelet based image compression held at the Berlin University of Technology in the years 2002 to 2006. This course has been designed as an “integrated project”, which means that it combines lectures, seminar talks to be prepared and held by the students, and a programming part. The

Sven Grottke; Sabina Jeschke; Thomas Richter

2008-01-01

173

WAVELET-BASED ANALYSIS OF THE NON-STATIONARY RESPONSE OF A SLIPPING FOUNDATION  

Microsoft Academic Search

A wavelet-based stochastic formulation has been presented in this paper for the seismic analysis of a rigid block resting on a friction base. The ground motion has been modelled as a non-stationary process (both in amplitude and frequency) by using wavelets. The proposed formulation is based on replacing the non-linear system by an equivalent linear system with time-dependent properties. The

B. Basu; V. K. Gupta

1999-01-01

174

Wavelet-based image restoration for compact X-ray microscopy  

Microsoft Academic Search

Summary: Compact water-window X-ray microscopy with short exposure times will always be limited in photons owing to sources of limited power in combination with low-efficiency X-ray optics. Thus, it is important to investigate methods for improving the signal-to-noise ratio in the images. We show that a wavelet-based denoising procedure significantly improves the quality and contrast in compact X-ray

H. Stollberg; J. Boutet De Monvel; A. Holmberg; H. M. Hertz

2003-01-01

175

Wavelet-based unbalanced un-equivalent multiple description coding for P2P networks  

Microsoft Academic Search

Almost all existing multiple description coding (MDC) schemes are designed for media streaming over the Internet. In this work, a wavelet-based video MDC technique is introduced that fits the criteria of media streaming over peer-to-peer networks. Our proposed method assigns descriptions to the senders according to their characteristics (i.e., bandwidth and availability). In contrast to traditional MDC, different descriptions in the proposed

Mohammad Hamed Firooz; Keivan Ronasi; Mohammad Reza Pakravan; Alireza Nasiri Avanaki

2007-01-01

176

Implementation of a Wavelet-Based MRPID Controller for Benchmark Thermal System  

Microsoft Academic Search

This paper presents a comparative analysis of intelligent controllers for temperature control of a benchmark thermal system. The performance of the proposed wavelet-based multiresolution proportional-integral-derivative (MRPID) controller, which can also be described as a multiresolution wavelet controller, is compared with that of the conventional PID controller and an adaptive neural-network (NN) controller. In the proposed MRPID temperature controller, the

M. A. S. K. Khan; M. Azizur Rahman

2010-01-01

177

Haar Wavelet-based Robust Optimal Control for Vibration Reduction of Vehicle Engine–body System  

Microsoft Academic Search

This paper deals with the modeling and robust control of bounce and pitch vibration for the engine–body vibration structure using Haar wavelets. The authors’ attention is focused on the development of the Haar wavelet-based robust optimal control for vibration reduction of the engine–body system computationally that guarantees desired L2-gain performance. The properties of Haar wavelet are introduced and

H. R. Karimi; B. Lohmann

2007-01-01

178

Optimal zonal wavelet-based ECG data compression for a mobile telecardiology system  

Microsoft Academic Search

A new integrated design approach for an optimal zonal wavelet-based ECG data compression (OZWC) method for a mobile telecardiology model is presented. The hybrid implementation issues of this wavelet method with a GSM-based mobile telecardiology system are also introduced. The performance of the mobile system with compressed ECG data segments selected from the MIT-BIH arrhythmia database is evaluated in terms

Robert S. H. Istepanian; Arthur A. Petrosian

2000-01-01

179

A comparative evaluation of wavelet-based methods for hypothesis testing of brain activation maps  

Microsoft Academic Search

Wavelet-based methods for hypothesis testing are described and their potential for activation mapping of human functional magnetic resonance imaging (fMRI) data is investigated. In this approach, we emphasise convergence between methods of wavelet thresholding or shrinkage and the problem of hypothesis testing in both classical and Bayesian contexts. Specifically, our interest will be focused on the trade-off between type I

M. J. Fadili; E. T. Bullmore

180

Reinforcement Hybrid Evolutionary Learning for Recurrent Wavelet-Based Neurofuzzy Systems  

Microsoft Academic Search

This paper proposes a recurrent wavelet-based neurofuzzy system (RWNFS) with the reinforcement hybrid evolutionary learning algorithm (R-HELA) for solving various control problems. The proposed R-HELA combines the compact genetic algorithm (CGA) and the modified variable-length genetic algorithm (MVGA), and performs the structure/parameter learning for dynamically constructing the RWNFS. That is, both the number of rules and the adjustment of parameters in

Cheng-Jian Lin; Yung-Chi Hsu

2007-01-01

181

Efficient Reinforcement Hybrid Evolutionary Learning for Recurrent Wavelet-Based Neuro-fuzzy Systems  

Microsoft Academic Search

This paper proposes a recurrent wavelet-based neuro-fuzzy system (RWNFS) with the reinforcement hybrid evolutionary learning algorithm (R-HELA) for solving various control problems. The proposed R-HELA combines the compact genetic algorithm (CGA) and the modified variable-length genetic algorithm (MVGA), and performs the structure/parameter learning for dynamically constructing the RWNFS. That is, both the number of rules and the adjustment of parameters

Cheng-hung Chen; Cheng-jian Lin; Chi-yung Lee

2007-01-01

182

Iterative Regularization and Nonlinear Inverse Scale Space Applied to Wavelet-Based Denoising  

Microsoft Academic Search

In this paper, we generalize the iterative regularization method and the inverse scale space method, recently developed for total-variation (TV) based image restoration, to wavelet-based image restoration. This continues our earlier joint work with others where we applied these techniques to variational-based image restoration, obtaining significant improvement over the Rudin-Osher-Fatemi TV-based restoration. Here, we apply these techniques to soft shrinkage

Jinjun Xu; Stanley Osher

2007-01-01

183

A wavelet-based technique for discrimination between faults and magnetizing inrush currents in transformers  

Microsoft Academic Search

This paper presents the development of a wavelet-based scheme, for distinguishing between transformer inrush currents and power system fault currents, which proved to provide a reliable, fast, and computationally efficient tool. The operating time of the scheme is less than half the power frequency cycle (based on a 5-kHz sampling rate). In this work, a wavelet transform concept is presented.

Omar A. S. Youssef

2003-01-01

184

A Wavelet-based Neural Network Model to Predict Ambient Air Pollutants’ Concentration  

Microsoft Academic Search

The present paper proposes a wavelet based recurrent neural network model to forecast one step ahead hourly, daily mean and daily maximum concentrations of ambient CO, NO2, NO, O3, SO2 and PM2.5, the most prevalent air pollutants in urban atmosphere. The time series of each air pollutant has been decomposed into different time-scale components using maximum overlap wavelet transform

Amit Prakash; Ujjwal Kumar; Krishan Kumar; V. K. Jain

185

A wavelet-based approximation of surface coil sensitivity profiles for correction of image intensity inhomogeneity and parallel imaging reconstruction.  

PubMed

We evaluate a wavelet-based algorithm to estimate the coil sensitivity modulation from surface coils. This information is used to improve the image homogeneity of magnetic resonance imaging when a surface coil is used for reception, and to increase image encoding speed by reconstructing images from under-sampled (aliased) acquisitions using parallel magnetic resonance imaging (MRI) methods for higher spatiotemporal image resolutions. The proposed algorithm estimates the spatial sensitivity profile of surface coils from the original anatomical images directly without using the body coil for additional reference scans or using coil position markers for electromagnetic model-based calculations. No prior knowledge about the anatomy is required for the application of the algorithm. The estimation of the coil sensitivity profile based on the wavelet transform of the original image data was found to provide a robust method for removing the slowly varying spatial sensitivity pattern of the surface coil image and recovering full FOV images from two-fold acceleration in 8-channel parallel MRI. The results, using bi-orthogonal Daubechies 9/7 wavelets and other members in this family, are evaluated for T1-weighted and T2-weighted brain imaging. PMID:12768534
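
A crude sketch of the underlying idea: keep only the coarse approximation of the image's wavelet decomposition as a surrogate for the slowly varying sensitivity profile, then divide it out. PyWavelets' 'bior4.4' filter bank is used here as a stand-in for the biorthogonal Daubechies 9/7 wavelets, and the level count is illustrative.

```python
import numpy as np
import pywt

def estimate_sensitivity(img, wavelet="bior4.4", level=5, eps=1e-6):
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    # Zero all detail coefficients; keep only the coarse approximation.
    coeffs = [coeffs[0]] + [tuple(np.zeros_like(c) for c in d) for d in coeffs[1:]]
    profile = pywt.waverec2(coeffs, wavelet)[: img.shape[0], : img.shape[1]]
    return np.maximum(profile, eps)

def intensity_correct(img):
    """Divide out the smooth profile to flatten the surface-coil shading."""
    return img / estimate_sensitivity(img)
```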

Lin, Fa-Hsuan; Chen, Ying-Jui; Belliveau, John W; Wald, Lawrence L

2003-06-01

186

Wavelet-based image denoising using variance field diffusion  

NASA Astrophysics Data System (ADS)

Wavelet shrinkage is an image restoration technique based on the concept of thresholding the wavelet coefficients. The key challenge of wavelet shrinkage is to find an appropriate threshold value, which is typically controlled by the signal variance. To tackle this challenge, a new image restoration approach is proposed in this paper by using a variance field diffusion, which can provide more accurate variance estimation. Experimental results are provided to demonstrate the superior performance of the proposed approach.

Liu, Zhenyu; Tian, Jing; Chen, Li; Wang, Yongtao

2012-04-01

187

Probability Density and CFAR Threshold Estimation for Hyperspectral Imaging  

SciTech Connect

The work reported here shows the proof of principle (using a small data set) for a suite of algorithms designed to estimate the probability density function of hyperspectral background data and compute the appropriate Constant False Alarm Rate (CFAR) matched filter decision threshold for a chemical plume detector. Future work will provide a thorough demonstration of the algorithms and their performance with a large data set. The LASI (Large Aperture Search Initiative) Project involves instrumentation and image processing for hyperspectral images of chemical plumes in the atmosphere. The work reported here involves research and development on algorithms for reducing the false alarm rate in chemical plume detection and identification algorithms operating on hyperspectral image cubes. The chemical plume detection algorithms to date have used matched filters designed using generalized maximum likelihood ratio hypothesis testing algorithms [1, 2, 5, 6, 7, 12, 10, 11, 13]. One of the key challenges in hyperspectral imaging research is the high false alarm rate that often results from the plume detector [1, 2]. The overall goal of this work is to extend the classical matched filter detector to apply Constant False Alarm Rate (CFAR) methods to reduce the false alarm rate, or probability of false alarm P_FA, of the matched filter [4, 8, 9, 12]. A detector designer is interested in minimizing the probability of false alarm while simultaneously maximizing the probability of detection P_D. This is summarized by the Receiver Operating Characteristic (ROC) curve [10, 11], which is actually a family of curves depicting P_D vs. P_FA, parameterized by varying levels of signal-to-noise (or clutter) ratio (SNR or SCR). Often, it is advantageous to be able to specify a desired P_FA and develop a ROC curve (P_D vs. decision threshold r_0) for that case. That is the purpose of this work. Specifically, this work develops a set of algorithms and MATLAB implementations to compute the decision threshold r_0* that will provide the appropriate desired probability of false alarm P_FA for the matched filter. The goal is to use prior knowledge of the background data to generate an estimate of the probability density function (pdf) [13] of the matched filter threshold r for the case in which the data measurement contains only background data (we call this case the null hypothesis, or H_0) [10, 11]. We call the pdf estimate f̂(r|H_0). In this report, we use histograms and Parzen pdf estimators [14, 15, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27]. Once the estimate is obtained, it can be integrated to compute an estimate of P_FA as a function of the matched filter detection threshold r. We can then interpolate r vs. P_FA to obtain a curve that gives the threshold r_0* that will provide the appropriate desired probability of false alarm P_FA for the matched filter. Processing results have been computed using both simulated and real LASI data sets. The algorithms and codes have been validated, and the results using LASI data are presented here. Future work includes applying the pdf estimation and CFAR threshold calculation algorithms to the LASI matched filter based upon global background statistics, and developing a new adaptive matched filter algorithm based upon local background statistics. Another goal is to implement the 4-Gamma pdf modeling method proposed by Stocker et al. [4] and to compare results using histograms and the Parzen pdf estimators.
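
The threshold computation described above reduces to estimating the pdf of the matched-filter output under H_0, integrating it, and interpolating. A minimal histogram-based sketch follows; a Parzen estimator could be substituted for np.histogram.

```python
import numpy as np

def cfar_threshold(r_h0, pfa_desired, bins=512):
    """Threshold r0* with estimated P(r > r0* | H0) = pfa_desired."""
    pdf, edges = np.histogram(r_h0, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    cdf = np.cumsum(pdf * np.diff(edges))
    pfa = 1.0 - cdf                          # decreasing in the threshold
    # np.interp needs increasing x, so flip both arrays.
    return float(np.interp(pfa_desired, pfa[::-1], centers[::-1]))
```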

Clark, G A

2004-09-21

188

Interpolating Spline Methods for Density Estimation I. Equi-Spaced Knots.  

National Technical Information Service (NTIS)

Statistical properties of the histospline density estimate of Boneva-Kendall-Stefanov-Schoenberg are found. This density estimate is the derivative of a cubic spline of interpolation to the sample cumulative distribution at equally spaced points. The spac...

G. Wahba

1973-01-01

189

The effect of image enhancement on the statistical analysis of functional neuroimages: wavelet-based denoising and Gaussian smoothing  

Microsoft Academic Search

The quality of statistical analyses of functional neuroimages is studied after applying various preprocessing methods. We present wavelet-based denoising as an alternative to Gaussian smoothing, the standard denoising method in statistical parametric mapping (SPM). The wavelet-based denoising schemes are extensions of WaveLab routines, using the symmetric orthogonal cubic spline wavelet basis. In a first study, activity in a time series

Alle Meije Wink; Jos B. T. M. Roerdink

2003-01-01

190

The Effect of Image Enhancement on the Statistical Analysis of Functional Neuroimages: Wavelet-Based Denoising and Gaussian Smoothing  

Microsoft Academic Search

The quality of statistical analyses of functional neuroimages is studied after applying various preprocessing methods. We present wavelet-based denoising as an alternative to Gaussian smoothing, the standard denoising method in statistical para- metric mapping (SPM). The wavelet-based denoising schemes are extensions of WaveLab routines, using the symmetric orthogonal cubic spline wavelet basis. In a first study, activity in a time

Alle Meije Wink; Jos B. T. M. Roerdink

191

Wavelet-based Adaptive Mesh Refinement Method for Global Atmospheric Chemical Transport Modeling  

NASA Astrophysics Data System (ADS)

Numerical modeling of global atmospheric chemical transport presents enormous computational difficulties associated with simulating a wide range of time and spatial scales. These difficulties are exacerbated by the fact that hundreds of chemical species and thousands of chemical reactions are typically used to describe the chemical kinetic mechanism. Such computational requirements very often force researchers to use relatively crude quasi-uniform numerical grids with inadequate spatial resolution, which introduces significant numerical diffusion into the system. It was shown that this spurious diffusion significantly distorts the pollutant mixing and transport dynamics for typically used grid resolutions. These numerical difficulties have to be systematically addressed, considering that the demand for fast, high-resolution chemical transport models will be exacerbated over the next decade by the need to interpret satellite observations of tropospheric ozone and related species. In this study we offer a dynamically adaptive multilevel Wavelet-based Adaptive Mesh Refinement (WAMR) method for the numerical modeling of atmospheric chemical evolution equations. The adaptive mesh refinement is performed by adding and removing finer levels of resolution in the locations of fine-scale development and in the locations of smooth solution behavior, respectively. The algorithm is based on the mathematically well-established wavelet theory. This allows us to provide error estimates of the solution that are used in conjunction with an appropriate threshold criterion to adapt the non-uniform grid. Other essential features of the numerical algorithm include: an efficient wavelet spatial discretization that allows the number of degrees of freedom to be minimized for a prescribed accuracy, a fast algorithm for computing wavelet amplitudes, and efficient and accurate derivative approximations on an irregular grid. The method has been tested on a variety of benchmark problems, including numerical simulation of transpacific traveling pollution plumes. The generated pollution plumes are diluted due to turbulent mixing as they are advected downwind. Despite this dilution, it was recently discovered that pollution plumes in the remote troposphere can preserve their identity as well-defined structures for two weeks or more as they circle the globe. Present global chemical transport models (CTMs) implemented on quasi-uniform grids are incapable of reproducing these layered structures owing to the high numerical plume dilution caused by numerical diffusion combined with the non-uniformity of atmospheric flow. It is shown that WAMR solutions of accuracy comparable to conventional numerical techniques are obtained with more than an order of magnitude reduction in the number of grid points; the adaptive algorithm is therefore capable of producing accurate results at relatively low computational cost. The numerical simulations demonstrate that the WAMR algorithm applied to the traveling plume problem accurately reproduces the plume dynamics, unlike conventional numerical methods that utilize quasi-uniform numerical grids.
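
A toy 1D sketch of the thresholding-driven adaptation criterion: cells whose wavelet detail coefficients exceed a threshold epsilon are flagged for refinement. It assumes PyWavelets; the wavelet choice and names are illustrative, not the WAMR implementation.

```python
import numpy as np
import pywt

def refine_mask(field, wavelet="db2", eps=1e-3):
    """True where fine-scale structure warrants a finer resolution level."""
    _, cD = pywt.dwt(field, wavelet)
    need_refine = np.abs(cD) > eps
    # Upsample the mask back to the field's resolution (factor-2 decimation).
    return np.repeat(need_refine, 2)[: field.size]
```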

Rastigejev, Y.

2011-12-01

192

An Adaptive Background Subtraction Method Based on Kernel Density Estimation  

PubMed Central

In this paper, a pixel-based background modeling method that uses nonparametric kernel density estimation is proposed. To reduce the burden of image storage, we modify the original KDE method by initializing it with the first frame and updating it at every frame, controlling the learning rate according to the situation. We apply an adaptive threshold method based on image changes to effectively subtract dynamic backgrounds. The devised scheme allows the proposed method to automatically adapt to various environments and effectively extract the foreground. The method presented here exhibits good performance and is suitable for dynamic background environments. The algorithm is tested on various video sequences and compared with other state-of-the-art background subtraction methods to verify its performance.
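
A minimal per-pixel sketch of the general approach, assuming Gaussian kernels over a small stored sample history; the shapes, bandwidth, and learning rate below are illustrative, and the paper's exact update rules differ:

    import numpy as np

    def kde_scores(history, frame, bandwidth=10.0):
        # history: (N, H, W) past pixel samples; frame: (H, W) current image.
        # Returns a per-pixel background likelihood (up to a constant).
        z = (history - frame[None]) / bandwidth
        return np.exp(-0.5 * z**2).mean(axis=0)

    def update_history(history, frame, alpha=0.05, rng=np.random.default_rng()):
        # With probability alpha (the learning rate), overwrite one stored
        # sample per pixel with the current value.
        slot = rng.integers(history.shape[0])
        mask = rng.random(frame.shape) < alpha
        history[slot][mask] = frame[mask]

    # Pixels whose kde_scores fall below an adaptive threshold are foreground.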

Lee, Jeisung; Park, Mignon

2012-01-01

193

The effect of image enhancement on the statistical analysis of functional neuroimages: wavelet-based denoising and Gaussian smoothing  

NASA Astrophysics Data System (ADS)

The quality of statistical analyses of functional neuroimages is studied after applying various preprocessing methods. We present wavelet-based denoising as an alternative to Gaussian smoothing, the standard denoising method in statistical parametric mapping (SPM). The wavelet-based denoising schemes are extensions of WaveLab routines, using the symmetric orthogonal cubic spline wavelet basis. In a first study, activity in a time series is simulated by superimposing a time-dependent signal on a selected region. We add noise with a known signal-to-noise ratio (SNR) and spatial correlation. After denoising, the statistical analysis, performed with SPM, is evaluated. We compare the shapes of activations detected after applying the wavelet-based methods with the shapes of activations detected after Gaussian smoothing. In a second study, the denoising schemes are applied to a real functional MRI time series, where signal and noise cannot be separated. The denoised time series are analysed with SPM, while false discovery rate (FDR) control is used to correct for multiple testing. Wavelet-based denoising, combined with FDR control, yields reliable activation maps. While Gaussian smoothing and wavelet-based methods that produce smooth images work well at very low SNRs, wavelet-based methods that smooth less produce better results for time series of moderate quality.
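
A minimal sketch contrasting the two preprocessing routes on a synthetic 2-D "activation" image; the biorthogonal wavelet and universal soft threshold are stand-ins for the paper's symmetric orthogonal cubic spline scheme:

    import numpy as np
    import pywt
    from scipy.ndimage import gaussian_filter

    def wavelet_denoise(img, wavelet="bior3.3", level=3):
        coeffs = pywt.wavedec2(img, wavelet, level=level)
        # Noise scale from the finest diagonal band; universal soft threshold.
        sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
        t = sigma * np.sqrt(2.0 * np.log(img.size))
        new = [coeffs[0]] + [tuple(pywt.threshold(d, t, mode="soft") for d in band)
                             for band in coeffs[1:]]
        return pywt.waverec2(new, wavelet)

    img = np.random.randn(64, 64)
    img[20:40, 20:40] += 3.0                    # simulated activated region
    smoothed = gaussian_filter(img, sigma=2.0)  # SPM-style Gaussian smoothing
    denoised = wavelet_denoise(img)             # wavelet-based alternative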

Wink, Alle Meije; Roerdink, Jos B. T. M.

2003-05-01

194

Column density estimation: Tree-based method implementation  

NASA Astrophysics Data System (ADS)

Radiative transfer plays a crucial role in several astrophysical processes. In particular, for the star formation problem it is well established that stars form in the densest and coolest regions of molecular clouds, so understanding the interstellar cycle becomes crucial. The physics of dense gas requires knowledge of the UV radiation that regulates the physics and chemistry within the molecular cloud. Numerical modeling requires the calculation of column densities in every direction for each resolution element. In numerical simulations the cost of solving the radiative transfer problem is of order N^(5/3), where N is the number of resolution elements; the exact calculation is in general extremely expensive in terms of CPU time for relatively large simulations and impractical in parallel computing. We present our tree-based method for estimating column densities and the attenuation factor for the UV field. The method exploits the fact that any distant cell subtends a small angle, so that its contribution to the screening is diluted. This method is suitable for parallel computing, and no communication is needed between different CPUs. It has been implemented in the RAMSES code, a grid-based solver with adaptive mesh refinement (AMR). We present the results of two tests and a discussion of the accuracy and performance of this method. We show that UV screening affects mainly the dense parts of molecular clouds, changing the local Jeans mass and therefore affecting the fragmentation.

Valdivia, Valeska

2013-07-01

195

Response kernel density estimation Monte Carlo method for electron transport  

NASA Astrophysics Data System (ADS)

Electron transport simulation plays an important role in the dose calculation in electron cancer therapy as well as in many other fields. Traditional numerical solutions for particle transport are inadequate because of the extremely anisotropic collisions between electrons and the background medium. In principle, analog Monte Carlo (AMC) can be used; however, the large cross section for Coulombic interactions makes it of limited use due to the large amount of computer time required for typical simulations. Several techniques, such as the condensed history random walk technique, have been proposed and investigated to improve the efficiency of AMC. However, the approximations used in these techniques either reduce their accuracy or restrict them to certain applications. The response kernel density estimation Monte Carlo method (RKMC) proposed in this study attempts to improve the efficiency of AMC without sacrificing accuracy. A complete Monte Carlo electron transport calculation is divided into two steps in RKMC. In the first step, or local calculation, a series of AMC simulations are performed to collect electron state data in phase space after the electrons experience multiple scattering. In the second step, or global calculation, the adaptive kernel density estimation method is used to construct the probability density functions (pdf's) from the recorded data set, which are sampled efficiently by a specially designed Monte Carlo sampling scheme. Since the electron multiple scattering pdf's come from the AMC simulations, all the effects of multiple scattering are taken into consideration. Therefore, RKMC is expected to be both accurate and efficient. An RKMC code was first developed to test the method against an AMC code. The test case results showed that the RKMC code was approximately 100 times faster than the AMC code. The method was also incorporated into EGS4, an industry standard electron transport condensed history Monte Carlo (CHMC) code, as a replacement for Moliere's multiple scattering theory (MMST). A clear improvement in both accuracy and efficiency over EGS4 was observed for low and intermediate energy range (10 keV to 20 MeV) electron transport simulations, because lateral displacements are considered in RKMC and the restrictions on transport step size are eliminated. All the results of our study show that RKMC is a promising method for electron transport simulations.
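
A minimal sketch of the construct-then-sample idea behind the global step, substituting scipy's fixed-bandwidth Gaussian KDE for the adaptive kernel estimator described above; the placeholder states are random numbers, not real electron data:

    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(0)
    # Placeholder "phase-space" states recorded from analog MC local runs,
    # e.g. (deflection_x, deflection_y, energy_loss) per electron.
    states = rng.normal(size=(3, 10_000))

    pdf = gaussian_kde(states)          # density estimate from recorded data
    new_states = pdf.resample(1_000)    # cheap sampling in the global step
    print(new_states.shape)             # (3, 1000)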

Du, Jie

196

Learning Multisensory Integration and Coordinate Transformation via Density Estimation  

PubMed Central

Sensory processing in the brain includes three key operations: multisensory integration—the task of combining cues into a single estimate of a common underlying stimulus; coordinate transformations—the change of reference frame for a stimulus (e.g., retinotopic to body-centered) effected through knowledge about an intervening variable (e.g., gaze position); and the incorporation of prior information. Statistically optimal sensory processing requires that each of these operations maintains the correct posterior distribution over the stimulus. Elements of this optimality have been demonstrated in many behavioral contexts in humans and other animals, suggesting that the neural computations are indeed optimal. That the relationships between sensory modalities are complex and plastic further suggests that these computations are learned—but how? We provide a principled answer, by treating the acquisition of these mappings as a case of density estimation, a well-studied problem in machine learning and statistics, in which the distribution of observed data is modeled in terms of a set of fixed parameters and a set of latent variables. In our case, the observed data are unisensory-population activities, the fixed parameters are synaptic connections, and the latent variables are multisensory-population activities. In particular, we train a restricted Boltzmann machine with the biologically plausible contrastive-divergence rule to learn a range of neural computations not previously demonstrated under a single approach: optimal integration; encoding of priors; hierarchical integration of cues; learning when not to integrate; and coordinate transformation. The model makes testable predictions about the nature of multisensory representations.
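
A minimal sketch of a restricted Boltzmann machine trained with one-step contrastive divergence (CD-1), the learning rule named above; the binary toy data stand in for the unisensory population activities, and all sizes and rates are illustrative:

    import numpy as np

    rng = np.random.default_rng(0)
    n_vis, n_hid, lr = 20, 10, 0.05
    W = 0.01 * rng.standard_normal((n_vis, n_hid))
    a = np.zeros(n_vis)                   # visible biases
    b = np.zeros(n_hid)                   # hidden biases
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

    data = (rng.random((500, n_vis)) < 0.3).astype(float)
    for _ in range(10):
        for v0 in data:
            ph0 = sigmoid(v0 @ W + b)                     # P(h | data)
            h0 = (rng.random(n_hid) < ph0).astype(float)  # sample hidden units
            pv1 = sigmoid(h0 @ W.T + a)                   # one-step reconstruction
            ph1 = sigmoid(pv1 @ W + b)
            W += lr * (np.outer(v0, ph0) - np.outer(pv1, ph1))  # CD-1 update
            a += lr * (v0 - pv1)
            b += lr * (ph0 - ph1)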

Sabes, Philip N.

2013-01-01

197

Multiscale seismic attributes: a wavelet-based method and its application to high-resolution seismic and ground truth data  

NASA Astrophysics Data System (ADS)

We propose a wavelet-based method to characterize acoustic impedance discontinuities from a multiscale analysis of seismic reflected waves. Our approach relies on the analysis of ridge functions, which contain most of the information of the wavelet transform in a sparse support. This method falls in the framework of the wavelet response (WR) introduced by Le Gonidec et al., which analyses the impedance multiscale behaviour by propagating dilated wavelets into the medium. We further extend the WR by considering its application to broad-band seismic data, taking into account the bandpass filter effect related to the limited frequency range of the seismic source. We apply the method to a deep-water seismic experiment performed in 2008 during the ERIG3D cruise to demonstrate the potential of ridge functions as multiscale seismic attributes. In conjunction with the analysis of seismic data acquired by the deep-towed SYSIF system (200-2200 Hz), we use ground truth data to characterize the fine-scale structure of superficial sediments using the continuous wavelet transform (CWT). The availability of in situ measurements allows us to validate the relationship between CWT and WR and to estimate the attenuation of seismic waves in the sediments. Once validated, the method is applied to a whole seismic profile and WR ridge functions are computed for two particular reflectors. The reflector thicknesses fall below the resolution limit of the seismic experiment, making the WR seismic attributes a super-resolution method.
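
A minimal sketch of extracting a ridge function from the modulus of a continuous wavelet transform; the Mexican-hat wavelet and synthetic trace are placeholders, and the paper's wavelet response (propagating dilated wavelets into the medium) is not reproduced here:

    import numpy as np
    import pywt

    t = np.linspace(0.0, 1.0, 2048)
    trace = np.exp(-((t - 0.5) / 0.004) ** 2)   # reflection-like event
    scales = np.arange(1, 128)
    coefs, _ = pywt.cwt(trace, scales, "mexh")  # CWT with a Mexican-hat wavelet

    ridge_scale = scales[np.abs(coefs).argmax(axis=0)]  # dominant scale per sample
    ridge_amp = np.abs(coefs).max(axis=0)               # ridge amplitude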

Ker, S.; Le Gonidec, Y.; Gibert, D.; Marsset, B.

2011-11-01

198

ENVIRONMENTAL AUDITING: Demonstration of Line Transect Methodologies to Estimate Urban Gray Squirrel Density  

PubMed

Because studies estimating density of gray squirrels (Sciurus carolinensis) have been labor intensive and costly, I demonstrate the use of line transect surveys to estimate gray squirrel density and determine the costs of conducting surveys to achieve precise estimates. Density estimates are based on four transects that were surveyed five times from 30 June to 9 July 1994. Using the program DISTANCE, I estimated there were 4.7 (95% CI = 1.86-11.92) gray squirrels/ha on the Clemson University campus. Eleven additional surveys would have decreased the percent coefficient of variation from 30% to 20% and would have cost approximately $114. Estimating urban gray squirrel density using line transect surveys is cost effective and can provide unbiased estimates of density, provided that none of the assumptions of distance sampling theory are violated. KEY WORDS: Bias; Density; Distance sampling; Gray squirrel; Line transect; Sciurus carolinensis. PMID:9336490
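
A minimal sketch of the generic line-transect estimator behind programs such as DISTANCE, assuming a half-normal detection function fitted by maximum likelihood; the distances and transect length are made-up illustration data:

    import numpy as np
    from scipy.optimize import minimize_scalar

    dists = np.array([1.0, 2.5, 0.5, 4.0, 3.2, 0.8, 1.7, 2.2])  # metres off-line
    L = 200.0                                                   # transect length, m

    def neg_loglik(sigma):
        # f(x) = exp(-x^2 / (2 sigma^2)) / (sigma * sqrt(pi/2)) for x >= 0
        return -np.sum(-dists**2 / (2 * sigma**2)
                       - np.log(sigma * np.sqrt(np.pi / 2)))

    sigma = minimize_scalar(neg_loglik, bounds=(0.1, 50.0), method="bounded").x
    esw = sigma * np.sqrt(np.pi / 2)          # effective strip half-width
    density = len(dists) / (2.0 * L * esw)    # animals per square metre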

Hein

1997-11-01

199

Estimation of a Nonlinear Density-Dependence Parameter for Wild Turkey  

Microsoft Academic Search

Although previous research and theory have suggested that wild turkey (Meleagris gallopavo) populations may be subject to some form of density dependence, there has been no effort to estimate and incorporate a density-dependence parameter into wild turkey population models. To estimate a functional relationship for density dependence in wild turkey, we analyzed a set of harvest-index time series from 11

JAY D. McGHEE; JAMES M. BERKSON

2007-01-01

200

Tropical forests and the global carbon cycle: Estimating state and change in biomass density. Book chapter  

Microsoft Academic Search

This chapter discusses estimating the biomass density of forest vegetation. Data from inventories of tropical Asia and America were used to estimate biomass densities. Efforts to quantify forest disturbance suggest that population density, at subnational scales, can be used as a surrogate index to encompass all the anthropogenic activities (logging, slash-and-burn agriculture, grazing) that lead to degradation of tropical forest

1996-01-01

201

Wavelet-based image restoration of compact X-ray microscope images  

NASA Astrophysics Data System (ADS)

Compact X-ray microscopy employing optics, such as multilayer mirrors and zone plates, with limited collection angles and efficiencies will always be limited in photons for short exposure times. Thus, it is important to investigate methods for improving the signal-to-noise ratio in the images. We show, on data taken with the Stockholm laser-plasma-based X-ray microscope at 3.3 nm, that a wavelet-based denoising procedure has the potential to reduce the exposure time by a factor of 2 without loss of image information.

Stollberg, H.; Boutet de Monvel, J.; Johansson, G. A.; Hertz, H. M.

2003-03-01

202

Serial identification of EEG patterns using adaptive wavelet-based analysis  

NASA Astrophysics Data System (ADS)

A problem of recognizing specific oscillatory patterns in electroencephalograms with the continuous wavelet transform is discussed. Aiming to improve the abilities of wavelet-based tools, we propose a serial adaptive method for sequential identification of EEG patterns such as sleep spindles and spike-wave discharges. This method provides an optimal selection of parameters based on objective functions and enables extraction of the most informative features of the recognized structures. Different ways of increasing the quality of pattern recognition within the proposed serial adaptive technique are considered.

Nazimov, A. I.; Pavlov, A. N.; Nazimova, A. A.; Grubov, V. V.; Koronovskii, A. A.; Sitnikova, E.; Hramov, A. E.

2013-10-01

203

A novel 3D wavelet based filter for visualizing features in noisy biological data  

SciTech Connect

We have developed a 3D wavelet-based filter for visualizing structural features in volumetric data. The only variable parameter is a characteristic linear size of the feature of interest. The filtered output contains only those regions that are correlated with the characteristic size, thus denoising the image. We demonstrate the use of the filter by applying it to 3D data from a variety of electron microscopy samples including low contrast vitreous ice cryogenic preparations, as well as 3D optical microscopy specimens.

Moss, W C; Haase, S; Lyle, J M; Agard, D A; Sedat, J W

2005-01-05

204

Wavelet-based neural pattern analyzer for behaviorally significant burst pattern recognition.  

PubMed

Closed-loop neural prosthesis systems rely on accurately recording neural data from multiple neurons and detecting behaviorally meaningful patterns before representing them in a highly compressed form for wireless transmission over a limited-bandwidth link. We present a novel wavelet-based approach for detecting spikes, grouping them as bursts and building a dynamic vocabulary of meaningful burst patterns. Simulation results on pre-recorded in vivo multi-channel extracellular neural data from the buccal ganglion of Aplysia demonstrate the feasibility of behavior recognition as well as data compression (>500X) by the proposed approach. PMID:19162588

Narasimhan, Seetharam; Cullins, Miranda; Chiel, Hillel J; Bhunia, Swarup

2008-01-01

205

SEMI-RECURSIVE KERNEL ESTIMATION OF FUNCTIONS OF DENSITY FUNCTIONALS AND THEIR DERIVATIVES  

Microsoft Academic Search

A class of semi-recursive kernel type estimates of functions depending on multivariate density functionals and their derivatives is considered. The piecewise smoothed approximations of these estimates are proposed. The convergence with probability one of the estimates is proved. The main parts of the asymptotic mean square errors of the estimates are found. The examples of estimation of the production

A. V. Kitayeva; G. M. Koshkin

206

Three Sides of Smoothing: Categorical Data Smoothing, Nonparametric Regression, and Density Estimation  

Microsoft Academic Search

The past forty years have seen a great deal of research into the construction and properties of nonparametric estimates of smooth functions. This research has focused primarily on two sides of the smoothing problem: nonparametric regression and density estimation. Theoretical results for these two situations are similar, and multivariate density estimation was an early justification for the Nadaraya-Watson

Jeffrey S. Simonoff

1998-01-01

207

Functional data: local linear estimation of the conditional density and its application  

Microsoft Academic Search

In this paper, we introduce a new nonparametric estimation procedure of the conditional density of a scalar response variable given a random variable taking values in a semi-metric space. Under some general conditions, we establish both the pointwise and the uniform almost-complete consistencies with convergence rates of the conditional density estimator related to this estimation procedure. Moreover, we give some

Jacques Demongeot; Ali Laksaci; Fethi Madani; Mustapha Rachdi

2011-01-01

208

Demonstration of line transect methodologies to estimate urban gray squirrel density  

SciTech Connect

Because studies estimating density of gray squirrels (Sciurus carolinensis) have been labor intensive and costly, I demonstrate the use of line transect surveys to estimate gray squirrel density and determine the costs of conducting surveys to achieve precise estimates. Density estimates are based on four transects that were surveyed five times from 30 June to 9 July 1994. Using the program DISTANCE, I estimated there were 4.7 (95% CI = 1.86-11.92) gray squirrels/ha on the Clemson University campus. Eleven additional surveys would have decreased the percent coefficient of variation from 30% to 20% and would have cost approximately $114. Estimating urban gray squirrel density using line transect surveys is cost effective and can provide unbiased estimates of density, provided that none of the assumptions of distance sampling theory are violated.

Hein, E.W. [Los Alamos National Lab., NM (United States)

1997-11-01

209

Tropical forests and the global carbon cycle: Estimating state and change in biomass density. Book chapter  

SciTech Connect

This chapter discusses estimating the biomass density of forest vegetation. Data from inventories of tropical Asia and America were used to estimate biomass densities. Efforts to quantify forest disturbance suggest that population density, at subnational scales, can be used as a surrogate index to encompass all the anthropogenic activities (logging, slash-and-burn agriculture, grazing) that lead to degradation of tropical forest biomass density.

Brown, S.

1996-07-01

210

Analysis of damped tissue vibrations in time-frequency space: a wavelet-based approach.  

PubMed

There is evidence that vibrations of soft tissue compartments are not appropriately described by a single sinusoidal oscillation for certain types of locomotion such as running or sprinting. This paper discusses a new method to quantify damping of superimposed oscillations using a wavelet-based time-frequency approach. This wavelet-based method was applied to experimental data in order to analyze the decay of the overall power of vibration signals over time. Eight healthy subjects performed sprinting trials on a 30 m runway on a hard surface and a soft surface. Soft tissue vibrations were quantified from the tissue overlaying the muscle belly of the medial gastrocnemius muscle. The new methodology determines damping coefficients with an average error of 2.2% based on a wavelet scaling factor of 0.7. This was sufficient to detect differences in soft tissue compartment damping between the hard and soft surface. On average, the hard surface elicited a 7.02 s^-1 lower damping coefficient than the soft surface (p<0.05). A power spectral analysis of the muscular vibrations occurring during sprinting confirmed that vibrations during dynamic movements cannot be represented by a single sinusoidal function. Compared to the traditional sinusoidal approach, this newly developed method can quantify vibration damping for systems with multiple vibration modes that interfere with one another. This new time-frequency analysis may be more appropriate when an acceleration trace does not follow a sinusoidal function, as is the case with multiple forms of human locomotion. PMID:22995145

Enders, Hendrik; von Tscharner, Vinzenz; Nigg, Benno M

2012-09-18

211

Evaluation of Effectiveness of Wavelet Based Denoising Schemes Using ANN and SVM for Bearing Condition Classification  

PubMed Central

Wavelet-based denoising has proven its ability to denoise bearing vibration signals by improving the signal-to-noise ratio (SNR) and reducing the root-mean-square error (RMSE). In this paper, seven wavelet-based denoising schemes are evaluated based on the performance of the Artificial Neural Network (ANN) and the Support Vector Machine (SVM) for bearing condition classification. The work consists of two parts. In the first part, a synthetic signal simulating the defective bearing vibration signal with Gaussian noise was subjected to these denoising schemes, and the best scheme based on the SNR and the RMSE was identified. In the second part, the vibration signals collected from a customized Rolling Element Bearing (REB) test rig for four bearing conditions were subjected to these denoising schemes. Several time and frequency domain features were extracted from the denoised signals, out of which a few sensitive features were selected using Fisher's Criterion (FC). The extracted features were used to train and test the ANN and the SVM. The best denoising scheme, identified based on the classification performances of the ANN and the SVM, was found to be the same as the one obtained using the synthetic signal.
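
A minimal sketch of the first-part evaluation loop, assuming standard wavelet shrinkage with a universal threshold; the wavelet/threshold combinations listed are illustrative, not the seven schemes of the paper:

    import numpy as np
    import pywt

    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 1.0, 4096)
    clean = np.sin(2 * np.pi * 50 * t) * (1 + 0.5 * np.sin(2 * np.pi * 5 * t))
    noisy = clean + 0.4 * rng.standard_normal(t.size)

    def denoise(sig, wavelet, mode):
        coeffs = pywt.wavedec(sig, wavelet, level=5)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745   # noise estimate
        thr = sigma * np.sqrt(2.0 * np.log(sig.size))    # universal threshold
        coeffs[1:] = [pywt.threshold(c, thr, mode=mode) for c in coeffs[1:]]
        return pywt.waverec(coeffs, wavelet)[: sig.size]

    for wavelet in ("db4", "db8", "sym8"):
        for mode in ("soft", "hard"):
            d = denoise(noisy, wavelet, mode)
            rmse = np.sqrt(np.mean((d - clean) ** 2))
            snr = 10 * np.log10(np.sum(clean**2) / np.sum((d - clean) ** 2))
            print(f"{wavelet}/{mode}: SNR = {snr:.1f} dB, RMSE = {rmse:.3f}")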

G. S., Vijay; H. S., Kumar; Pai P., Srinivasa; N. S., Sriram; Rao, Raj B. K. N.

2012-01-01

212

Evaluation of effectiveness of wavelet based denoising schemes using ANN and SVM for bearing condition classification.  

PubMed

Wavelet-based denoising has proven its ability to denoise bearing vibration signals by improving the signal-to-noise ratio (SNR) and reducing the root-mean-square error (RMSE). In this paper, seven wavelet-based denoising schemes are evaluated based on the performance of the Artificial Neural Network (ANN) and the Support Vector Machine (SVM) for bearing condition classification. The work consists of two parts. In the first part, a synthetic signal simulating the defective bearing vibration signal with Gaussian noise was subjected to these denoising schemes, and the best scheme based on the SNR and the RMSE was identified. In the second part, the vibration signals collected from a customized Rolling Element Bearing (REB) test rig for four bearing conditions were subjected to these denoising schemes. Several time and frequency domain features were extracted from the denoised signals, out of which a few sensitive features were selected using Fisher's Criterion (FC). The extracted features were used to train and test the ANN and the SVM. The best denoising scheme, identified based on the classification performances of the ANN and the SVM, was found to be the same as the one obtained using the synthetic signal. PMID:23213323

Vijay, G S; Kumar, H S; Srinivasa Pai, P; Sriram, N S; Rao, Raj B K N

2012-11-14

213

Curve Fitting of the Corporate Recovery Rates: The Comparison of Beta Distribution Estimation and Kernel Density Estimation  

PubMed Central

Recovery rate is essential to the estimation of a portfolio's loss and economic capital. Neglecting the randomness of the distribution of recovery rate may underestimate the risk. This study introduces two kinds of distribution models, Beta distribution estimation and kernel density estimation, to simulate the distribution of recovery rates of corporate loans and bonds. Models based on the Beta distribution are in common daily use, such as CreditMetrics by J.P. Morgan, Portfolio Manager by KMV and LossCalc by Moody's. However, the Beta distribution has a fatal defect: it cannot fit bimodal or multimodal distributions, which is what recovery rates of corporate loans and bonds exhibit according to Moody's new data. To overcome this flaw, kernel density estimation is introduced, and we compare the simulation results from the histogram, Beta distribution estimation and kernel density estimation to reach the conclusion that the Gaussian kernel density estimate imitates the distribution of bimodal or multimodal samples of corporate loan and bond recovery rates more faithfully. Finally, a Chi-square test of the Gaussian kernel density estimate shows that it fits the curve of recovery rates of loans and bonds. Thus, using the kernel density estimate to precisely delineate the bimodal recovery rates of bonds is optimal in credit risk management.
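
A minimal sketch of the comparison on a synthetic bimodal sample (all parameters illustrative): a single Beta fit is unimodal here and cannot follow two modes, while a Gaussian kernel density can:

    import numpy as np
    from scipy.stats import beta, gaussian_kde

    rng = np.random.default_rng(0)
    rates = np.concatenate([rng.beta(2, 8, 600),    # low-recovery mode
                            rng.beta(9, 2, 400)])   # high-recovery mode

    a, b, _, _ = beta.fit(rates, floc=0, fscale=1)  # Beta MLE on [0, 1]
    kde = gaussian_kde(rates)                       # nonparametric alternative

    x = np.linspace(0.001, 0.999, 500)
    print("Beta fit peaks near x =", x[beta.pdf(x, a, b).argmax()])
    print("KDE height at the two modes:", kde(0.2)[0], kde(0.8)[0])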

Chen, Rongda; Wang, Ze

2013-01-01

214

Curve fitting of the corporate recovery rates: the comparison of Beta distribution estimation and kernel density estimation.  

PubMed

Recovery rate is essential to the estimation of a portfolio's loss and economic capital. Neglecting the randomness of the distribution of recovery rate may underestimate the risk. This study introduces two kinds of distribution models, Beta distribution estimation and kernel density estimation, to simulate the distribution of recovery rates of corporate loans and bonds. Models based on the Beta distribution are in common daily use, such as CreditMetrics by J.P. Morgan, Portfolio Manager by KMV and LossCalc by Moody's. However, the Beta distribution has a fatal defect: it cannot fit bimodal or multimodal distributions, which is what recovery rates of corporate loans and bonds exhibit according to Moody's new data. To overcome this flaw, kernel density estimation is introduced, and we compare the simulation results from the histogram, Beta distribution estimation and kernel density estimation to reach the conclusion that the Gaussian kernel density estimate imitates the distribution of bimodal or multimodal samples of corporate loan and bond recovery rates more faithfully. Finally, a Chi-square test of the Gaussian kernel density estimate shows that it fits the curve of recovery rates of loans and bonds. Thus, using the kernel density estimate to precisely delineate the bimodal recovery rates of bonds is optimal in credit risk management. PMID:23874558

Chen, Rongda; Wang, Ze

2013-07-10

215

Demonstration of Line Transect Methodologies to Estimate Urban Gray Squirrel Density  

Microsoft Academic Search

Sciurus carolinensis) have been labor intensive and costly, I demonstrate the use of line transect surveys to estimate gray squirrel density and determine the costs of conducting surveys to achieve precise estimates. Density estimates are based on four transects that were surveyed five times from 30 June to 9 July 1994. Using the program DISTANCE, I estimated there were 4.7

Eric W. Hein

1997-01-01

216

An iterative wavelet-based deconvolution algorithm for the restoration of ultrasound images in an EM framework  

NASA Astrophysics Data System (ADS)

The quality of medical ultrasound images is limited by inherent poor resolution due to the finite temporal bandwidth of the acoustic pulse and the non-negligible width of the system point-spread function. One of the major difficulties in designing a practical and effective restoration algorithm is to develop a model for the tissue reflectivity that can adequately capture significant image features without being computationally prohibitive. The reflectivities of biological tissues do not exhibit the piecewise smooth characteristics of natural images considered in the standard image processing literature; while the macroscopic variations in echogenicity are indeed piecewise smooth, the presence of sub-wavelength scatterers adds a pseudo-random component at the microscopic level. This observation leads us to propose modelling the tissue reflectivity as the product of a piecewise smooth echogenicity map and a unit-variance random field. The chief advantage of such an explicit representation is that it allows us to exploit representations for piecewise smooth functions (such as wavelet bases) in modelling variations in echogenicity without neglecting the microscopic pseudo-random detail. As an example of how this multiplicative model may be exploited, we propose an expectation-maximisation (EM) restoration algorithm that alternates between inverse filtering (to estimate the tissue reflectivity) and logarithmic wavelet denoising (to estimate the echogenicity map). We provide simulation and in vitro results to demonstrate that our proposed algorithm yields solutions that enjoy higher resolution, better contrast and greater fidelity to the tissue reflectivity compared with the current state-of-the-art in ultrasound image restoration.

Ng, J. K. H.; Prager, R. W.; Kingsbury, N. G.; Treece, G. M.; Gee, A. H.

2006-03-01

217

Kernel Approach to Estimation of the Sphere Radius Density in Wicksell's Corpuscle Problem.  

National Technical Information Service (NTIS)

The estimation of the probability density function of the radii of spheres in a medium, given the radii of their profiles in a random slice, known as Wicksell's corpuscle problem is considered. An estimator related to the classical kernel density estimato...

A. J. Vanes; A. W. Hoogendoorn

1988-01-01

218

Colour Image Segmentation by Non-Parametric Density Estimation in Colour Space  

Microsoft Academic Search

A novel colour image segmentation routine, based on clustering pixels in colour space using non-parametric density estimation, is described. Although the basic methodology is well known, several important improvements to the previous work in this area are introduced. The density is estimated at a series of knot points in the colour space, and clustering is performed by hill climbing

Paul A. Bromiley; Neil A. Thacker; Patrick Courtney

2001-01-01

219

Semi-supervised learning based on high density region estimation.  

PubMed

In this paper, we consider local regression problems on high density regions. We propose a semi-supervised local empirical risk minimization algorithm and bound its generalization error. The theoretical analysis shows that our method can utilize unlabeled data effectively and achieve fast learning rate. PMID:20605081

Chen, Hong; Li, Luoqing; Peng, Jiangtao

2010-06-11

220

LIDAR density and linear interpolator effects on elevation estimates  

Microsoft Academic Search

Linear interpolation of irregularly spaced LIDAR elevation data sets is needed to develop realistic spatial models. We evaluated inverse distance weighting (IDW) and ordinary kriging (OK) interpolation techniques and the effects of LIDAR data density on the statistical validity of the linear interpolators. A series of 10 forested 1000-ha LIDAR tiles on the Lower Coastal Plain of eastern North Carolina

E. S. Anderson; J. A. Thompson; R. E. Austin

2005-01-01

221

Estimating Tropical-Forest Density Profiles from Multibaseline Interferometric SAR  

Microsoft Academic Search

Vertical profiles of forest density are potentially robust indicators of forest biomass, fire susceptibility and ecosystem function. Tropical forests, which are among the most dense and complicated targets for remote sensing, contain about 45% of the world's biomass. Remote sensing of tropical forest structure is therefore an important component to global biomass and carbon monitoring. This paper shows preliminary results

Robert Treuhaft; Bruce Chapman; João Roberto dos Santos; Luciano Dutra; Fabio Gonçalves; Jose Claudio Mura; Paulo Maurício; Alencastro Graca

222

A continuous bivariate model for wind power density and wind turbine energy output estimations  

Microsoft Academic Search

The wind power probability density function is useful in both the design process of a wind turbine and in the evaluation process of the wind resource available at a potential site. The continuous probability models used in the scientific literature to estimate the wind power density distribution function and wind turbine energy output assume that air density is independent of

José Antonio Carta; Dunia Mentado

2007-01-01

223

Electrical Density Sorting and Estimation of Soluble Solids Content of Watermelon  

Microsoft Academic Search

The relationship between density and internal quality of watermelon was investigated. The density of watermelon was found to be related both to the degree of hollowness and the soluble solids content which can be used as a measure of sweetness. The soluble solids content of watermelons can be estimated from density and mass by multiple regression analysis. An optimum range

Koro Kato

1997-01-01

224

Wavelet-based functional linear mixed models: an application to measurement error-corrected distributed lag models  

PubMed Central

Frequently, exposure data are measured over time on a grid of discrete values that collectively define a functional observation. In many applications, researchers are interested in using these measurements as covariates to predict a scalar response in a regression setting, with interest focusing on the most biologically relevant time window of exposure. One example is in panel studies of the health effects of particulate matter (PM), where particle levels are measured over time. In such studies, there are many more values of the functional data than observations in the data set so that regularization of the corresponding functional regression coefficient is necessary for estimation. Additional issues in this setting are the possibility of exposure measurement error and the need to incorporate additional potential confounders, such as meteorological or co-pollutant measures, that themselves may have effects that vary over time. To accommodate all these features, we develop wavelet-based linear mixed distributed lag models that incorporate repeated measures of functional data as covariates into a linear mixed model. A Bayesian approach to model fitting uses wavelet shrinkage to regularize functional coefficients. We show that, as long as the exposure error induces fine-scale variability in the functional exposure profile and the distributed lag function representing the exposure effect varies smoothly in time, the model corrects for the exposure measurement error without further adjustment. Both these conditions are likely to hold in the environmental applications we consider. We examine properties of the method using simulations and apply the method to data from a study examining the association between PM, measured as hourly averages for 1–7 days, and markers of acute systemic inflammation. We use the method to fully control for the effects of confounding by other time-varying predictors, such as temperature and co-pollutants.

Malloy, Elizabeth J.; Morris, Jeffrey S.; Adar, Sara D.; Suh, Helen; Gold, Diane R.; Coull, Brent A.

2010-01-01

225

Wavelet-based local region-of-interest reconstruction for synchrotron radiation x-ray microtomography  

SciTech Connect

Synchrotron radiation x-ray microtomography is becoming a uniquely powerful method to nondestructively access three-dimensional internal microstructure in biological and engineering materials, with a resolution of 1 μm or less. The tiny field of view of the detector, however, requires that the sample be small, which limits the practical applications of the method, such as in situ experiments. In this paper, a wavelet-based local tomography algorithm is proposed to recover a small region of interest inside a large object using only the local projections, motivated by the localization property of the wavelet transform. A local tomography experiment for an Al-Cu alloy is carried out at SPring-8, the third-generation synchrotron radiation facility in Japan. The proposed method readily enables high-resolution observation of a large specimen, by which the applicability of current microtomography would be promoted to a large extent.

Li Lingqi; Toda, Hiroyuki; Ohgaki, Tomomi; Kobayashi, Masakazu; Kobayashi, Toshiro; Uesugi, Kentaro; Suzuki, Yoshio [Department of Production Systems Engineering, Toyohashi University of Technology, Toyohashi, Aichi 441-8580 (Japan); Japan Synchrotron Radiation Research Institute, Sayo-gun, Hyougo 679-5198 (Japan)

2007-12-01

226

An Investigation of Wavelet Bases for Grid-Based Multi-Scale Simulations Final Report  

SciTech Connect

The research summarized in this report is the result of a two-year effort that has focused on evaluating the viability of wavelet bases for the solution of partial differential equations. The primary objective for this work has been to establish a foundation for hierarchical/wavelet simulation methods based upon numerical performance, computational efficiency, and the ability to exploit the hierarchical adaptive nature of wavelets. This work has demonstrated that hierarchical bases can be effective for problems with a dominant elliptic character. However, the strict enforcement of orthogonality was found to be less desirable than weaker semi-orthogonality or bi-orthogonality for solving partial differential equations. This conclusion has led to the development of a multi-scale linear finite element based on a hierarchical change of basis. The reproducing kernel particle method has been found to yield extremely accurate phase characteristics for hyperbolic problems while providing a convenient framework for multi-scale analyses.

Baty, R.S.; Burns, S.P.; Christon, M.A.; Roach, D.W.; Trucano, T.G.; Voth, T.E.; Weatherby, J.R.; Womble, D.E.

1998-11-01

227

Design of wavelet-based ECG detector for implantable cardiac pacemakers.  

PubMed

A wavelet Electrocardiogram (ECG) detector for low-power implantable cardiac pacemakers is presented in this paper. The proposed wavelet-based ECG detector consists of a wavelet decomposer with wavelet filter banks, a QRS complex detector of hypothesis testing with wavelet-demodulated ECG signals, and a noise detector with zero-crossing points. In order to achieve high detection accuracy with low power consumption, a multi-scaled product algorithm and soft-threshold algorithm are efficiently exploited in our ECG detector implementation. Our algorithmic and architectural level approaches have been implemented and fabricated in a standard 0.35 μm CMOS technology. The testchip, including a low-power analog-to-digital converter (ADC), shows a low detection error rate of 0.196% and low power consumption of 19.02 μW with a 3 V supply voltage. PMID:23893202

Min, Young-Jae; Kim, Hoon-Ki; Kang, Yu-Ri; Kim, Gil-Su; Park, Jongsun; Kim, Soo-Won

2013-08-01

228

A Haar-wavelet-based Lucy-Richardson algorithm for positron emission tomography image restoration  

NASA Astrophysics Data System (ADS)

Deconvolution is an ill-posed problem that requires regularization. Noise would inevitably be enhanced during the iterative deconvolution process. The enhanced noise degrades the image quality, causing mistakes in clinical interpretations. This paper introduces a Haar-wavelet-based Lucy-Richardson algorithm (HALU) for positron emission tomography (PET) image restoration based on a spatially variant point spread function. After wavelet decomposition, the Lucy-Richardson algorithm was applied to each approximation matrix with a different number of iterations. This enhances the contrast of the images without greatly amplifying the noise level. The results showed that HALU is able to recover the resolution and yields better contrast and a lower noise level than the Lucy-Richardson algorithm.

Tam, Naomi W. P.; Lee, Jhih-Shian; Hu, Chi-Min; Liu, Ren-Shyan; Chen, Jyh-Cheng

2011-08-01

229

Leg motion classification with artificial neural networks using wavelet-based features of gyroscope signals.  

PubMed

We extract the informative features of gyroscope signals using the discrete wavelet transform (DWT) decomposition and provide them as input to multi-layer feed-forward artificial neural networks (ANNs) for leg motion classification. Since the DWT is based on correlating the analyzed signal with a prototype wavelet function, selection of the wavelet type can influence the performance of wavelet-based applications significantly. We also investigate the effect of selecting different wavelet families on classification accuracy and ANN complexity and provide a comparison between them. The maximum classification accuracy of 97.7% is achieved with the Daubechies wavelet of order 16 and the reverse bi-orthogonal (RBO) wavelet of order 3.1, both with similar ANN complexity. However, the RBO 3.1 wavelet is preferable because of its lower computational complexity in the DWT decomposition and reconstruction. PMID:22319378
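
A minimal sketch of a DWT feature-extraction front end, assuming simple per-sub-band statistics; the feature set is illustrative, while rbio3.1 is the wavelet the paper recommends:

    import numpy as np
    import pywt

    def dwt_features(signal, wavelet="rbio3.1", level=4):
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        feats = []
        for c in coeffs:                        # approximation + detail sub-bands
            feats += [np.mean(np.abs(c)),       # mean absolute value
                      np.sqrt(np.mean(c**2)),   # sub-band RMS energy
                      np.std(c)]                # spread
        return np.array(feats)                  # input vector for the ANN

    gyro = np.sin(np.linspace(0, 20 * np.pi, 1024)) + 0.1 * np.random.randn(1024)
    print(dwt_features(gyro).shape)             # (15,) for level = 4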

Ayrulu-Erdem, Birsel; Barshan, Billur

2011-01-28

230

Wavelet-based Poisson Solver for use in Particle-In-Cell Simulations  

SciTech Connect

We report on a successful implementation of a wavelet based Poisson solver for use in 3D particle-in-cell (PIC) simulations. One new aspect of our algorithm is its ability to treat general (inhomogeneous) Dirichlet boundary conditions (BCs). The solver harnesses advantages afforded by the wavelet formulation, such as sparsity of operators and data sets, existence of effective preconditioners, and the ability simultaneously to remove numerical noise and further compress relevant data sets. Having tested our method as a stand-alone solver on two model problems, we merged it into IMPACT-T to obtain a fully functional serial PIC code. We present and discuss preliminary results of application of the new code to the modeling of the Fermilab/NICADD and AES/JLab photoinjectors.

Terzic, B.; Mihalcea, D.; Bohn, C.L.; Pogorelov, I.V.

2005-05-13

231

BLACK AND BROWN BEAR DENSITY ESTIMATES USING MODIFIED CAPTURE RECAPTURE TECHNIQUES IN ALASKA  

Microsoft Academic Search

Population density estimates were obtained for sympatric black bear (Ursus americanus) and brown bear (U. arctos) populations inhabiting a search area of 1,325 km2 in south-central Alaska. Standard capture-recapture population estimation techniques were modified to correct for lack of geographic closure based on daily locations of radio-marked animals over a 7-day period. Calculated density estimates were based on available habitat

STERLING D. MILLER; EARL F. BECKER; WARREN B. BALLARD

232

COMPARISON OF TWO SAMPLING METHODS FOR ESTIMATING URBAN TREE DENSITY  

Microsoft Academic Search

Sampling can be used as a method for urban tree inventory estimation. There are several sampling methods available, and choices for urban tree inventory methods vary according to the place to be studied and the urban tree conditions. This study compared the results of simple and stratified random sampling methods with those of a total district tree census. The simple

Ivan André Alvarez; Giuliana Del Nero Velasco; Henrique Sundfeld Barbin; Ana Maria; Liner Pereira Lima

2005-01-01

233

Use of Underwater Visual Distance Sampling for Estimating Habitat-Specific Population Density  

Microsoft Academic Search

We contrasted fish abundance estimates generated from mark–recapture and underwater visual distance sampling to determine whether the latter method is a potentially valuable fisheries assessment tool. We further examined whether altering the detection function or habitat stratification and including lake characteristics such as Secchi depth, temperature, and fish density affected distance sampling estimates. Distance sampling produced estimates that were comparable

Melissa Pink; Thomas C. Pratt; Michael G. Fox

2007-01-01

234

Likelihood Cross-Validation Bandwidth Selection for Nonparametric Kernel Density Estimators.  

National Technical Information Service (NTIS)

One of the major problems in kernel density estimation, the choice of bandwidth, is addressed. The first order properties of the likelihood cross-validation bandwidth selection method, introduced by Habbema, Hermans and Van den Broek (1974) and Duin (1976...
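
A minimal sketch of the likelihood cross-validation criterion, assuming a Gaussian kernel: choose the bandwidth that maximizes the summed log leave-one-out density:

    import numpy as np

    def lcv_score(x, h):
        z = (x[:, None] - x[None, :]) / h
        k = np.exp(-0.5 * z**2) / np.sqrt(2.0 * np.pi)
        np.fill_diagonal(k, 0.0)                     # leave-one-out
        loo = k.sum(axis=1) / ((x.size - 1) * h)
        return np.sum(np.log(loo))

    rng = np.random.default_rng(0)
    x = rng.normal(size=200)
    grid = np.linspace(0.05, 1.0, 40)
    h_best = grid[np.argmax([lcv_score(x, h) for h in grid])]
    print("LCV bandwidth:", round(h_best, 3))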

B. Vanes

1989-01-01

235

Density Estimation in the View of Kolmogorov's Ideas in Approximation Theory.  

National Technical Information Service (NTIS)

Upper and lower bounds for the quality of (probability) density estimation are studied. Connections are established between these problems and the theory of approximation of functions. Particularly, it is demonstrated how some of Kolmogorov's concepts wor...

R. Hasminskii; I. Ibragimov

1989-01-01

236

STATISTICAL ESTIMATION OF HIGHER-ORDER SPECTRAL DENSITIES BY MEANS OF GENERAL TAPERING  

Microsoft Academic Search

Given a realization on a finite interval of a continuous-time stationary process, we construct estimators for higher order spectral densities. Tapering and shift-in-time methods are used to build estimators which are asymptotically unbiased and consistent for all admissible values of the argument. Asymptotic results for the fourth-order densities are given. Detailed attention is paid to the nth order

M. BABA HARRA

237

Density meter algorithm and system for estimating sampling/mixing uncertainty  

SciTech Connect

The Laboratories Department at the Savannah River Plant (SRP) has installed a six-place density meter with an automatic sampling device. This paper describes the statistical software developed to analyze the density of uranyl nitrate solutions using this automated system. The purpose of this software is twofold: to estimate the sampling/mixing and measurement uncertainties in the process and to provide a measurement control program for the density meter. Non-uniformities in density are analyzed both analytically and graphically. The mean density and its limit of error are estimated. Quality control standards are analyzed concurrently with process samples and used to control the density meter measurement error. The analyses are corrected for concentration due to evaporation of samples waiting to be analyzed. The results of this program have been successful in identifying sampling/mixing problems and controlling the quality of analyses.

Shine, E.P.

1986-01-01

238

Density estimation of Yangtze finless porpoises using passive acoustic sensors and automated click train detection.  

PubMed

A method is presented to estimate the density of finless porpoises using stationed passive acoustic monitoring. The number of click trains detected by stereo acoustic data loggers (A-tag) was converted to an estimate of the density of porpoises. First, an automated off-line filter was developed to detect a click train among noise, and the detection and false-alarm rates were calculated. Second, a density estimation model was proposed. The cue-production rate was measured by biologging experiments. The probability of detecting a cue and the area size were calculated from the source level, beam patterns, and a sound-propagation model. The effect of group size on the cue-detection rate was examined. Third, the proposed model was applied to estimate the density of finless porpoises at four locations from the Yangtze River to the inside of Poyang Lake. The estimated mean daily density of porpoises decreased from the main stream to the lake. Long-term monitoring over 466 days from June 2007 to May 2009 showed the density varying from 0 to 4.79 porpoises/km(2); however, the density was below 1 porpoise/km(2) during 94% of the period. These results suggest a potential gap and seasonal migration of the population in the bottleneck of Poyang Lake. PMID:20815477
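
A minimal sketch of the cue-counting conversion behind such estimates; all numbers are placeholders, not values from the study:

    # density = detected cues / (detection area * time * cue rate * P(detect))
    n_trains = 120      # click trains passing the automated filter
    hours = 24.0        # monitoring period
    cue_rate = 30.0     # click trains per porpoise per hour (from biologging)
    area_km2 = 0.5      # effective detection area of the stereo logger
    p_detect = 0.8      # probability of detecting a cue within that area

    density = n_trains / (area_km2 * hours * cue_rate * p_detect)
    print(f"{density:.2f} porpoises per km^2")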

Kimura, Satoko; Akamatsu, Tomonari; Li, Songhai; Dong, Shouyue; Dong, Lijun; Wang, Kexiong; Wang, Ding; Arai, Nobuaki

2010-09-01

239

Estimation of significant wave height and wave height density function using satellite altimeter data  

Microsoft Academic Search

A technique for estimating the ocean surface roughness probability density function from satellite altimeter data is presented. Results from the application of the technique to Geos 3 altimeter data demonstrate its ability to detect both large-scale and small-scale structural deviations from the Gaussian distribution. Knowledge of the surface roughness probability density enables a direct computation of significant wave height. Significant

R. W. Priester; L. S. Miller

1979-01-01

240

Estimation of Significant Wave Height and Wave Height Density Function Using Satellite Altimeter Data  

Microsoft Academic Search

A technique for estimating the ocean surface roughness probability density function from satellite altimeter data is presented. Results from the application of the technique to Geos 3 altimeter data demonstrate its ability to detect both large-scale and small-scale structural deviations from the Gaussian distribution. Knowledge of the surface roughness probability density enables a direct computation of significant wave height. Significant

R. W. Priester; L. S. Miller

1979-01-01

241

Storage Space in Foodservice: Estimation Using a Revised Constant for the Density of Dry Foods  

Microsoft Academic Search

Foodservice facility design is a complex process. Of particular interest is the determination of the size of dry goods storage areas. One approach is to use a mathematical model estimating the density of foods in dry storage. This research was developed to determine an accurate value for the density of foods kept in dry storage areas. Using food items in

Wesley E. Luckhardt; Jeff Culbertson; Cheung Sau Wan

2001-01-01

242

Exponential Fourier densities on SO(3) and optimal estimation and detection for rotational processes  

Microsoft Academic Search

Optimal estimation and detection schemes for discrete-time rotational processes are obtained by introducing an exponential density referred to as a rotational exponential Fourier density (REFD) defined on the group of rotations of three-dimensional space. The REFD is obtained from taking the exponential of a linear combination of the rotational harmonics, the Wigner functions, which form a complete orthogonal system on

J. T.-H. Lo; L. R. Eshleman

1979-01-01

243

The Wegner estimate and the integrated density of states for some random operators  

Microsoft Academic Search

The integrated density of states (IDS) for random operators is an important function describing many physical characteristics of a random system. Properties of the IDS are derived from the Wegner estimate that describes the influence of finite-volume perturbations on a background system. In this paper, we present a simple proof of the Wegner estimate applicable to a wide variety of

J. M. Combes; P. D. Hislop; Frédéric Klopp

2001-01-01

244

Item Response Theory with Estimation of the Latent Density Using Davidian Curves  

ERIC Educational Resources Information Center

Davidian-curve item response theory (DC-IRT) is introduced, evaluated with simulations, and illustrated using data from the Schedule for Nonadaptive and Adaptive Personality Entitlement scale. DC-IRT is a method for fitting unidimensional IRT models with maximum marginal likelihood estimation, in which the latent density is estimated,…

Woods, Carol M.; Lin, Nan

2009-01-01

245

Item Response Theory with Estimation of the Latent Population Distribution Using Spline-Based Densities  

ERIC Educational Resources Information Center

The purpose of this paper is to introduce a new method for fitting item response theory models with the latent population distribution estimated from the data using splines. A spline-based density estimation system provides a flexible alternative to existing procedures that use a normal distribution, or a different functional form, for the…

Woods, Carol M.; Thissen, David

2006-01-01

246

Statistical Characterization of Some Electrical and Mechanical Phenomena by a Neural Probability Density Function Estimation Technique  

Microsoft Academic Search

The present paper concerns the estimation of probability density functions using the particular parameterized class of distribution functions implemented by a single non-linear neuron, introduced in the previous contribution [12]. The estimation procedure is applied to the statistical characterization of some electrical and mechanical phenomena.

Simone Fiori; Roberto Rossi

2004-01-01

247

Validation tests of an improved kernel density estimation method for identifying disease clusters  

Microsoft Academic Search

The spatial filter method, which belongs to the class of kernel density estimation methods, has been used to make morbidity and mortality maps in several recent studies. We propose improvements in the method to include spatially adaptive filters to achieve constant standard error of the relative risk estimates; a staircase weight method for weighting observations to reduce estimation bias; and

Qiang Cai; Gerard Rushton; Budhendra L Bhaduri

2011-01-01

248

Matrix-free application of Hamiltonian operators in Coifman wavelet bases.  

PubMed

A means of evaluating the action of Hamiltonian operators on functions expanded in orthogonal compact support wavelet bases is developed, avoiding the direct construction and storage of operator matrices that complicate extension to coupled multidimensional quantum applications. Application of a potential energy operator is accomplished by simple multiplication of the two sets of expansion coefficients without any convolution. The errors of this coefficient product approximation are quantified and lead to use of particular generalized coiflet bases, derived here, that maximize the number of moment conditions satisfied by the scaling function. This is at the expense of the number of vanishing moments of the wavelet function (approximation order), which appears to be a disadvantage but is shown surmountable. In particular, application of the kinetic energy operator, which is accomplished through the use of one-dimensional (1D) [or at most two-dimensional (2D)] differentiation filters, then degrades in accuracy if the standard choice is made. However, it is determined that use of high-order finite-difference filters yields strongly reduced absolute errors. Eigensolvers that ordinarily use only matrix-vector multiplications, such as the Lanczos algorithm, can then be used with this more efficient procedure. Applications are made to anharmonic vibrational problems: a 1D Morse oscillator, a 2D model of proton transfer, and three-dimensional vibrations of nitrosyl chloride on a global potential energy surface. PMID:20590186

Acevedo, Ramiro; Lombardini, Richard; Johnson, Bruce R

2010-06-28

249

A study on discrete wavelet-based noise removal from EEG signals.  

PubMed

Electroencephalogram (EEG) serves as an extremely valuable tool for clinicians and researchers to study the activity of the brain in a non-invasive manner. It has long been used for the diagnosis of various central nervous system disorders like seizures, epilepsy, and brain damage and for categorizing sleep stages in patients. The artifacts caused by various factors such as Electrooculogram (EOG), eye blink, and Electromyogram (EMG) in the EEG signal increase the difficulty of analyzing it. Discrete wavelet transform has been applied in this research for removing noise from the EEG signal. The effectiveness of the noise removal is quantitatively measured using Root Mean Square (RMS) Difference. This paper reports on the effectiveness of wavelet transform applied to the EEG signal as a means of removing noise to retrieve important information related to both healthy and epileptic patients. Wavelet-based noise removal on the EEG signal of both healthy and epileptic subjects was performed using four discrete wavelet functions. With the appropriate choice of the wavelet function (WF), it is possible to remove noise effectively and analyze the EEG meaningfully. The results of this study show that the WF Daubechies 8 (db8) provides the best noise removal from the raw EEG signal of healthy patients, while the WF orthogonal Meyer does the same for epileptic patients. This algorithm is intended for FPGA implementation in portable biomedical equipment to detect different brain states in different circumstances. PMID:20865544

Asaduzzaman, K; Reaz, M B I; Mohd-Yasin, F; Sim, K S; Hussain, M S

2010-01-01

250

Online automated detection of cerebral embolic signals using a wavelet-based system.  

PubMed

Transcranial Doppler ultrasound (US) can be used to detect emboli in the cerebral circulation. We have implemented and evaluated the first online wavelet-based automatic embolic signal-detection system, based on a fast discrete wavelet transform algorithm using the Daubechies 8th order wavelet. It was evaluated using a group of middle cerebral artery recordings from 10 carotid stenosis patients, and a 1-h compilation tape from patients with particularly small embolic signals, and compared with the most sensitive commercially available software package (FS-1), which is based on a frequency-filtering approach using the Fourier transform. An optimal combination of a sensitivity of 78.4% with a specificity of 77.5% was obtained. Its overall performance was slightly below that of FS-1 (sensitivity 86.4% with specificity 85.2%), although it was superior to FS-1 for embolic signals of short duration or low energy (sensitivity 75.2% with specificity 50.5%, compared to a sensitivity of 55.6% and specificity of 55.0% for FS-1). The study has demonstrated that the fast wavelet transform can be computed online using a standard personal computer (PC), and used in a practical system to detect embolic signals. It may be particularly good for detecting short-duration low-energy signals, although a frequency filtering-based approach currently offers a higher sensitivity on an unselected data set. PMID:15183231

Marvasti, Salman; Gillies, Duncan; Marvasti, Farokh; Markus, Hugh S

2004-05-01

251

Identification of linear time-varying systems using a wavelet-based state-space method  

NASA Astrophysics Data System (ADS)

In this paper a new wavelet-based state-space method for the identification of dynamic parameters in linear time-varying systems is presented using free vibration response data and forced vibration response data. For an arbitrary linear time-varying system, the second-order vibration differential equations are first rewritten as first-order state equations using state-space theory. The excitation and response signals are projected using the Daubechies wavelet scaling functions, and the first-order state-space equations are transformed into simple linear equations using the orthogonality of the scaling functions. This allows the time-varying equivalent state-space system matrices at each moment to be identified directly by solving the linear equations. The system modal parameters are extracted through eigenvalue decomposition of the state-space system matrices. The stiffness and damping matrices are determined by comparing the identified equivalent system matrices with the physical system matrices. The proposed algorithm is investigated with a two-degree-of-freedom spring-mass-damper system and a cantilever beam. Numerical results demonstrate that the proposed method is robust and effective with regard to the identification of abruptly, smoothly and periodically time-varying parameters.
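
For illustration only, the final step of extracting modal parameters from an identified continuous-time state matrix by eigenvalue decomposition might look like the following sketch (the wavelet projection and identification steps themselves are not reproduced here):

    import numpy as np

    def modal_parameters(A):
        """Natural frequencies (Hz) and damping ratios from a continuous-time
        state matrix A, whose eigenvalues come in pairs -zeta*wn +/- i*wd."""
        lam = np.linalg.eigvals(A)
        lam = lam[np.imag(lam) > 0]      # keep one eigenvalue per conjugate pair
        wn = np.abs(lam)                 # natural frequencies (rad/s)
        zeta = -np.real(lam) / wn        # modal damping ratios
        return wn / (2.0 * np.pi), zeta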

Xu, X.; Shi, Z. Y.; You, Q.

2012-01-01

252

Diagnostically lossless medical image compression via wavelet-based background noise removal  

NASA Astrophysics Data System (ADS)

Diagnostically lossless compression techniques are essential in the archival and communication of medical images. In this paper, an automated wavelet-based background noise removal method, i.e., a diagnostically lossless compression method, is proposed. First, the wavelet transform modulus maxima procedure produces the modulus maxima image, which contains sharp changes in intensity that are used to locate the edges of the images. Then the Graham Scan algorithm is used to determine the convex hull of the wavelet modulus maxima image and extract the foreground of the image, which contains the entire diagnostic region of the image. Histogram analyses are applied to the non-diagnostic region, which is approximated by the image that is outside the convex hull. After setting all pixels in the non-diagnostic region to zero intensity, a higher compression ratio, without introducing loss of any data used for the diagnosis, is achieved with the UNIX utilities compress and pack, and with lossless JPEG. Furthermore, an image of a smaller rectangular region containing the entire diagnostic region is constructed to further improve the compression ratio.
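
As an illustration under stated assumptions (scipy's ConvexHull uses the Qhull algorithm rather than the Graham Scan named in the record, and the maxima points are assumed to be (x, y) pixel coordinates), zeroing the non-diagnostic region might be sketched as:

    import numpy as np
    from scipy.spatial import ConvexHull
    from matplotlib.path import Path

    def zero_background(image, maxima_xy):
        """Zero all pixels outside the convex hull of modulus-maxima points."""
        hull = ConvexHull(maxima_xy)                 # Qhull, not Graham Scan
        boundary = Path(maxima_xy[hull.vertices])
        yy, xx = np.mgrid[0:image.shape[0], 0:image.shape[1]]
        pixels = np.column_stack([xx.ravel(), yy.ravel()])
        inside = boundary.contains_points(pixels).reshape(image.shape)
        return np.where(inside, image, 0)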

Qi, Xiaojun; Tyler, John M.; Pianykh, Oleg S.

2000-04-01

253

A wavelet-based approach to detecting liveness in fingerprint scanners  

NASA Astrophysics Data System (ADS)

In this work, a method to provide fingerprint vitality authentication, in order to reduce the vulnerability of fingerprint identification systems to spoofing, is introduced. The method aims at detecting 'liveness' in fingerprint scanners by using the physiological phenomenon of perspiration. A wavelet-based approach is used which concentrates on the changing coefficients using the zoom-in property of the wavelets. Multiresolution analysis and wavelet packet analysis are used to extract information from the low-frequency and high-frequency content of the images, respectively. A Daubechies wavelet is designed and implemented to perform the wavelet analysis. A threshold is applied to the first difference of the information in all the sub-bands. The energy content of the changing coefficients is used as a quantified measure to perform the desired classification, as they reflect a perspiration pattern. A data set of approximately 30 live, 30 spoof, and 14 cadaver fingerprint images was divided, with the first half used as training data and the other half as testing data. The proposed algorithm was applied to the training data set and was able to completely classify 'live' fingers from 'not live' fingers, thus providing a method for enhanced security and improved spoof protection.

Abhyankar, Aditya S.; Schuckers, Stephanie C.

2004-08-01

254

A wavelet-based image quality metric for the assessment of 3D synthesized views  

NASA Astrophysics Data System (ADS)

In this paper we present a novel image quality assessment technique for evaluating virtual synthesized views in the context of multi-view video. In particular, Free Viewpoint Videos are generated from uncompressed color views and their compressed associated depth maps by means of the View Synthesis Reference Software provided by MPEG. Prior to the synthesis step, the original depth maps are encoded with different coding algorithms, thus leading to the creation of additional artifacts in the synthesized views. The core of the proposed wavelet-based metric lies in the registration procedure performed to align the synthesized view with the original one, and in a skin detection step, applied because the same distortion is more annoying when visible on human subjects than on other parts of the scene. The effectiveness of the metric is evaluated by analyzing the correlation of the scores obtained with the proposed metric with Mean Opinion Scores collected by means of subjective tests. The achieved results are also compared against those of well-known objective quality metrics. The experimental results confirm the effectiveness of the proposed metric.

Bosc, Emilie; Battisti, Federica; Carli, Marco; Le Callet, Patrick

2013-03-01

255

Fast wavelet based functional models for transcriptome analysis with tiling arrays.  

PubMed

For a better understanding of the biology of an organism, a complete description is needed of all regions of the genome that are actively transcribed. Tiling arrays are used for this purpose. They allow for the discovery of novel transcripts and the assessment of differential expression between two or more experimental conditions such as genotype, treatment, tissue, etc. In the tiling array literature, many efforts are devoted to transcript discovery, whereas more recent developments also focus on differential expression. To our knowledge, however, no methods for tiling arrays have been described that can simultaneously assess transcript discovery and identify differentially expressed transcripts. In this paper, we adapt wavelet-based functional models to the context of tiling arrays. The high dimensionality of the data led us to avoid inference based on Bayesian MCMC methods. Instead, we introduce a fast empirical Bayes method that provides adaptive regularization of the functional effects. A simulation study and a case study illustrate that our approach is well suited for the simultaneous assessment of transcript discovery and differential expression in tiling array studies, and that it outperforms methods that accomplish only one of these tasks. PMID:22499683

Clement, Lieven; De Beuf, Kristof; Thas, Olivier; Vuylsteke, Marnik; Irizarry, Rafael A; Crainiceanu, Ciprian M

2012-01-06

256

A real-time wavelet-based video decoder using SIMD technology  

NASA Astrophysics Data System (ADS)

This paper presents a fast implementation of a wavelet-based video codec. The codec consists of motion-compensated temporal filtering (MCTF), a 2-D spatial wavelet transform, and SPIHT for wavelet coefficient coding. It offers compression efficiency that is competitive with H.264. The codec is implemented in software running on a general-purpose PC, using the C programming language and streaming SIMD extensions intrinsics, without assembly language. This high-level software implementation allows the codec to be portable to other general-purpose computing platforms. Testing with a Pentium 4 HT at 3.6 GHz (running under Linux and using the GCC compiler, version 4) shows that the software decoder is able to decode 4CIF video in real time, over 2 times faster than software written only in C. This paper describes the structure of the codec, the fast algorithms chosen for the most computationally intensive elements in the codec, and the use of SIMD to implement these algorithms.

Klepko, Robert; Wang, Demin

2008-03-01

257

Performance evaluation of wavelet-based face verification on a PDA recorded database  

NASA Astrophysics Data System (ADS)

The rise of international terrorism and the rapid increase in fraud and identity theft have added urgency to the task of developing biometric-based person identification as a reliable alternative to conventional authentication methods. Human identification based on face images is a tough challenge in comparison to identification based on fingerprints or iris recognition. Yet, due to its unobtrusive nature, face recognition is the preferred method of identification for security-related applications. The success of such systems will depend on the support of massive infrastructures. Current mobile communication devices (3G smart phones) and PDAs are equipped with a camera which can capture both still images and streaming video clips, and a touch-sensitive display panel. Besides convenience, such devices provide an adequate secure infrastructure for sensitive and financial transactions, by protecting against fraud and repudiation while ensuring accountability. Biometric authentication systems for mobile devices would have obvious advantages in conflict scenarios when communication from beyond enemy lines is essential to save soldier and civilian life. In areas of conflict or disaster the luxury of fixed infrastructure is not available or is destroyed. In this paper, we present a wavelet-based face verification scheme that has been specifically designed and implemented on a currently available PDA. We report on its performance on the benchmark audio-visual BANCA database and on a newly developed PDA-recorded audio-visual database that includes indoor and outdoor recordings.

Sellahewa, Harin; Jassim, Sabah A.

2006-06-01

258

Density estimation of Bemisia tabaci (Hemiptera: Aleyrodidae) in a greenhouse using sticky traps in conjunction with an image processing system  

Microsoft Academic Search

Accurate forecasting of pest density is essential for effective pest management. In this study, a simple image processing system that automatically estimated the density of whiteflies on sticky traps was developed. The estimated densities of samples in a laboratory and a greenhouse were in accordance with the actual values. The detection system was especially efficient when the whitefly densities were

Mu Qiao; Jaehong Lim; Chang Woo Ji; Bu-Keun Chung; Hwang-Yong Kim; Ki-Baik Uhm; Cheol Soo Myung; Jongman Cho; Tae-Soo Chon

2008-01-01

259

Cetacean population density estimation from single fixed sensors using passive acoustics.  

PubMed

Passive acoustic methods are increasingly being used to estimate animal population density. Most density estimation methods are based on estimates of the probability of detecting calls as functions of distance. Typically these are obtained using receivers capable of localizing calls or from studies of tagged animals. However, both approaches are expensive to implement. The approach described here uses a Monte Carlo model to estimate the probability of detecting calls from single sensors. The passive sonar equation is used to predict signal-to-noise ratios (SNRs) of received clicks, which are then combined with a detector characterization that predicts probability of detection as a function of SNR. Input distributions for source level, beam pattern, and whale depth are obtained from the literature. Acoustic propagation modeling is used to estimate transmission loss. Other inputs for density estimation are call rate, obtained from the literature, and false positive rate, obtained from manual analysis of a data sample. The method is applied to estimate density of Blainville's beaked whales over a 6-day period around a single hydrophone located in the Tongue of the Ocean, Bahamas. Results are consistent with those from previous analyses, which use additional tag data. PMID:21682386
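
A minimal sketch of the single-sensor Monte Carlo idea follows; every distribution, constant, and detector curve below is a placeholder assumption, not a value from the study:

    import numpy as np

    def detection_probability(ranges_m, n_draws=10_000, seed=0):
        """P(detect | range) from the passive sonar equation SNR = SL - TL - NL,
        combined with an assumed logistic detector characterization."""
        rng = np.random.default_rng(seed)
        p = np.empty(len(ranges_m))
        for i, r in enumerate(ranges_m):
            sl = rng.normal(200.0, 5.0, n_draws)          # source level, dB (assumed)
            nl = rng.normal(60.0, 3.0, n_draws)           # noise level, dB (assumed)
            tl = 20.0 * np.log10(r) + 0.03 * r / 1000.0   # spreading + absorption
            snr = sl - tl - nl
            p[i] = np.mean(1.0 / (1.0 + np.exp(-(snr - 10.0) / 2.0)))
        return p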

Küsel, Elizabeth T; Mellinger, David K; Thomas, Len; Marques, Tiago A; Moretti, David; Ward, Jessica

2011-06-01

260

Estimation of mechanical properties of panels based on modal density and mean mobility measurements  

NASA Astrophysics Data System (ADS)

The mechanical characteristics of wood panels used by instrument makers are related to numerous factors, including the nature of the wood and the characteristics of the wood sample (fiber direction, micro-structure). This leads to variations in Young's modulus, the mass density, and the damping coefficients. Existing methods for estimating these parameters are not suitable for instrument makers, mainly because of the need for expensive experimental setups or complicated protocols, which are not adapted to daily practice in a workshop. In this paper, a method for estimating Young's modulus, the mass density, and the modal loss factors of flat panels, requiring a few measurement points and an affordable experimental setup, is presented. It is based on the estimation of two characteristic quantities: the modal density and the mean mobility. The modal density is computed from the values of the modal frequencies estimated by the subspace method ESPRIT (Estimation of Signal Parameters via Rotational Invariance Techniques), associated with the signal enumeration technique ESTER (ESTimation of ERror). This modal identification technique is shown to be robust in the low- and the mid-frequency domains, i.e. when the modal overlap factor does not exceed 1. The estimation of the modal parameters also enables the computation of the modal loss factor in the low- and the mid-frequency domains. An experimental fit with the theoretical expressions for the modal density and the mean mobility enables an accurate estimation of Young's modulus and the mass density of flat panels. A numerical and an experimental study show that the method is robust, and that it requires only a few measurement points.

Elie, Benjamin; Gautier, François; David, Bertrand

2013-11-01

261

Estimation of tiger densities in India using photographic captures and recaptures  

USGS Publications Warehouse

Previously applied methods for estimating tiger (Panthera tigris) abundance using total counts based on tracks have proved unreliable. In this paper we use a field method proposed by Karanth (1995), combining camera-trap photography to identify individual tigers based on stripe patterns with capture-recapture estimators. We developed a sampling design for camera-trapping and used the approach to estimate tiger population size and density in four representative tiger habitats in different parts of India. The field method worked well and provided data suitable for analysis using closed capture-recapture models. The results suggest the potential for applying this methodology for estimating abundances, survival rates and other population parameters in tigers and other low density, secretive animal species with distinctive coat patterns or other external markings. Estimated probabilities of photo-capturing tigers present in the study sites ranged from 0.75 to 1.00. The estimated mean tiger densities ranged from 4.1 (SE = 1.31) to 11.7 (SE = 1.93) tigers/100 km². The results support the previous suggestions of Karanth and Sunquist (1995) that densities of tigers and other large felids may be primarily determined by prey community structure at a given site.
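
The study fitted closed capture-recapture models to the photographic capture histories; as a far simpler stand-in, the two-sample Chapman (bias-corrected Lincoln-Petersen) estimator below illustrates the underlying idea:

    def chapman_estimate(n1, n2, m2):
        """Abundance from two photo-capture sessions: n1 caught in session 1,
        n2 in session 2, m2 caught in both (Chapman's estimator)."""
        n_hat = (n1 + 1) * (n2 + 1) / (m2 + 1) - 1
        var = ((n1 + 1) * (n2 + 1) * (n1 - m2) * (n2 - m2)
               / ((m2 + 1) ** 2 * (m2 + 2)))
        return n_hat, var ** 0.5

    # Density then follows by dividing the abundance estimate by the
    # effectively sampled area, e.g. tigers/100 km2 = 100 * n_hat / area_km2.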

Karanth, U.; Nichols, J.D.

1998-01-01

262

Estimating detection and density of the Andean cat in the high Andes  

USGS Publications Warehouse

The Andean cat (Leopardus jacobita) is one of the most endangered, yet least known, felids. Although the Andean cat is considered at risk of extinction, rigorous quantitative population studies are lacking. Because physical observations of the Andean cat are difficult to make in the wild, we used a camera-trapping array to photo-capture individuals. The survey was conducted in northwestern Argentina at an elevation of approximately 4,200 m during October-December 2006 and April-June 2007. In each year we deployed 22 pairs of camera traps, which were strategically placed. To estimate detection probability and density we applied models for spatial capture-recapture using a Bayesian framework. Estimated densities were 0.07 and 0.12 individuals/km² for 2006 and 2007, respectively. Mean baseline detection probability was estimated at 0.07. By comparison, densities of the Pampas cat (Leopardus colocolo), another poorly known felid that shares its habitat with the Andean cat, were estimated at 0.74-0.79 individuals/km² in the same study area for 2006 and 2007, and its detection probability was estimated at 0.02. Despite having greater detectability, the Andean cat is rarer in the study region than the Pampas cat. Properly accounting for the detection probability is important in making reliable estimates of density, a key parameter in conservation and management decisions for any species. © 2011 American Society of Mammalogists.

Reppucci, J.; Gardner, B.; Lucherini, M.

2011-01-01

263

Multiscale functional connectivity estimation on low-density neuronal cultures recorded by high-density CMOS Micro Electrode Arrays.  

PubMed

We used electrophysiological signals recorded by CMOS Micro Electrode Arrays (MEAs) at high spatial resolution to estimate the functional-effective connectivity of sparse hippocampal neuronal networks in vitro, by applying a cross-correlation (CC) based method and ad hoc developed spatio-temporal filtering. Low-density cultures were recorded by a recently introduced CMOS-MEA device providing simultaneous multi-site acquisition at high spatial (21 µm inter-electrode separation) as well as high temporal resolution (8 kHz per channel). The method is applied to estimate functional connections in different cultures and is refined by applying spatio-temporal filters that allow pruning of those functional connections not compatible with signal propagation. This approach makes it possible to discriminate between possible causal influence and spurious co-activation, and to obtain detailed maps down to cellular resolution. Further, a thorough analysis of the link strengths and time delays (i.e., amplitude and peak position of the CC function) allows characterization of the inferred interconnected networks and supports a possible discrimination of fast mono-synaptic propagations and slow poly-synaptic pathways. By focusing on specific regions of interest we could observe and analyze microcircuits involving connections among a few cells. Finally, the use of the high-density MEA with low-density cultures analyzed with the proposed approach enables comparison of the inferred effective links with the network structure obtained by staining procedures. PMID:22516778
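
A bare-bones sketch of the cross-correlation step follows (binned spike trains, circular shifts for simplicity; the published pipeline adds spatio-temporal filtering not reproduced here):

    import numpy as np

    def cc_link(spikes_a, spikes_b, max_lag=50):
        """Peak magnitude and lag of the normalized cross-correlation between
        two binned spike trains; the pair gives link strength and time delay."""
        a = spikes_a - spikes_a.mean()
        b = spikes_b - spikes_b.mean()
        lags = np.arange(-max_lag, max_lag + 1)
        cc = np.array([np.mean(a * np.roll(b, -k)) for k in lags])  # circular
        cc /= (spikes_a.std() * spikes_b.std() + 1e-12)
        k = int(np.argmax(np.abs(cc)))
        return cc[k], int(lags[k])      # strength, delay in bins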

Maccione, Alessandro; Garofalo, Matteo; Nieus, Thierry; Tedesco, Mariateresa; Berdondini, Luca; Martinoia, Sergio

2012-04-09

264

Evaluating Changes And Estimating Seasonal Precipitation For Colorado River Basins Using Nonparametric Density Estimation  

Microsoft Academic Search

Evaluating the hydrologic impacts of climate variability due to changes in precipitation has been an important and challenging task in the field of hydrology. This requires estimation of rainfall, preserving its spatial and temporal variability. The current research focuses on 1) analyzing changes (trend/step) in seasonal precipitation and 2) simulating seasonal precipitation using the k-nearest neighbor (k-nn) non-parametric technique for 29

A. Kalra; S. Ahmad; H. Stephen

2009-01-01

265

Effects of tissue heterogeneity on the optical estimate of breast density  

PubMed Central

Breast density is a recognized strong and independent risk factor for developing breast cancer. At present, breast density is assessed based on the radiological appearance of breast tissue, thus relying on the use of ionizing radiation. We have previously obtained encouraging preliminary results with our portable instrument for time domain optical mammography performed at 7 wavelengths (635–1060 nm). In that case, information was averaged over four images (cranio-caudal and oblique views of both breasts) available for each subject. In the present work, we tested the effectiveness of just one or few point measurements, to investigate if tissue heterogeneity significantly affects the correlation between optically derived parameters and mammographic density. Data show that parameters estimated through a single optical measurement correlate strongly with mammographic density estimated by using BIRADS categories. A central position is optimal for the measurement, but its exact location is not critical.

Taroni, Paola; Pifferi, Antonio; Quarto, Giovanna; Spinelli, Lorenzo; Torricelli, Alessandro; Abbate, Francesca; Balestreri, Nicola; Ganino, Serena; Menna, Simona; Cassano, Enrico; Cubeddu, Rinaldo

2012-01-01

266

A comprehensive evaluation of the heparin-manganese precipitation procedure for estimating high density lipoprotein cholesterol  

Microsoft Academic Search

The accurate quantitation of high density lipoproteins has recently assumed greater importance in view of studies suggesting their negative correlation with coronary heart disease. High density lipoproteins may be estimated by measuring cholesterol in the plasma fraction of d > 1.063 g/ml. A more practical approach is the specific precipitation of apolipoprotein B (apoB)-containing lipoproteins by sulfated

G. Russell Warnick; John J. Albers

267

A wavelet-based neural model to optimize and read out a temporal population code  

PubMed Central

It has been proposed that the dense excitatory local connectivity of the neo-cortex plays a specific role in the transformation of spatial stimulus information into a temporal representation or a temporal population code (TPC). TPC provides for a rapid, robust, and high-capacity encoding of salient stimulus features with respect to position, rotation, and distortion. The TPC hypothesis gives a functional interpretation to a core feature of the cortical anatomy: its dense local and sparse long-range connectivity. Thus far, the question of how the TPC encoding can be decoded in downstream areas has not been addressed. Here, we present a neural circuit that decodes the spectral properties of the TPC using a biologically plausible implementation of a Haar transform. We perform a systematic investigation of our model in a recognition task using a standardized stimulus set. We consider alternative implementations using either regular spiking or bursting neurons and a range of spectral bands. Our results show that our wavelet readout circuit provides for the robust decoding of the TPC and further compresses the code without losing speed or quality of decoding. We show that in the TPC signal the relevant stimulus information is present in the frequencies around 100 Hz. Our results show that the TPC is constructed around a small number of coding components that can be well decoded by wavelet coefficients in a neuronal implementation. The solution to the TPC decoding problem proposed here suggests that cortical processing streams might well consist of sequential operations in which spatio-temporal transformations at lower levels form a compact stimulus encoding (the TPC) that is subsequently decoded back to a spatial representation using wavelet transforms. In addition, the results presented here show that different properties of the stimulus might be transmitted to further processing stages using different frequency components that are captured by appropriately tuned wavelet-based decoders.
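
For concreteness, a discrete Haar decomposition of the kind the readout circuit approximates can be written in a few lines (a generic sketch, not the neural implementation itself):

    import numpy as np

    def haar_transform(signal):
        """Full Haar decomposition of a length-2^k signal into one
        approximation coefficient plus detail coefficients per band."""
        x = np.asarray(signal, dtype=float)
        details = []
        while len(x) > 1:
            details.append((x[0::2] - x[1::2]) / np.sqrt(2.0))  # band details
            x = (x[0::2] + x[1::2]) / np.sqrt(2.0)              # approximation
        return x, details[::-1]   # coarse-to-fine ordering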

Luvizotto, Andre; Renno-Costa, Cesar; Verschure, Paul F. M. J.

2012-01-01

268

Wavelet-based neural network with fuzzy-logic adaptivity for nuclear image restoration  

SciTech Connect

A novel wavelet-based neural network with fuzzy-logic adaptivity (WNNFA) is proposed for image restoration using a nuclear medicine gamma camera, based on the measured system point spread function. The objective is to restore image degradation due to photon scattering and collimator photon penetration with the gamma camera and allow improved quantitative external measurements of radionuclides in vivo. The specific clinical model proposed is the imaging of bremsstrahlung radiation using (32)P and (90)Y, because of the enhanced image degradation effects of photon scattering, photon penetration and poor signal-to-noise ratio (SNR) in measurements of this type with the gamma camera. The theoretical basis for four-channel multiresolution wavelet decomposition of the nuclear image into different subimages is developed with the objective of isolating the signal from noise. A fuzzy rule is generated to train a membership function using least mean squares (LMS) to obtain an optimal balance between image restoration and the stability of the neural network (NN), while maintaining a linear response of the camera to radioactivity dose. A multichannel modified Hopfield neural network (HNN) architecture is then proposed for multichannel image restoration using the dominant signal subimages. This algorithm model avoids the common inverse problem associated with other image restoration filters such as the Wiener filter. The relative performance of the WNNFA for image restoration is compared to a previously reported order statistic neural network hybrid (OSNNH) filter by these investigators, a traditional Wiener filter, and a modified HNN, using simulated degraded images with different noise levels. Quantitative metrics such as the normalized mean square error (NMSE) and SNR are used to compare filter performance.

Qian, W.; Clarke, L.P. [Univ. of South Florida, Tampa, FL (United States)

1996-10-01

269

Wavelet-based features for characterizing ventricular arrhythmias in optimizing treatment options.  

PubMed

Ventricular arrhythmias arise from abnormal electrical activity of the lower chambers (ventricles) of the heart. Ventricular tachycardia (VT) and ventricular fibrillation (VF) are the two major subclasses of ventricular arrhythmias. While VT has treatment options that can be performed in catheterization labs, VF is a lethal cardiac arrhythmia; when it is detected, the patient often receives an implantable defibrillator, which restores the normal heart rhythm by applying electric shocks whenever VF is detected. The classification of these two subclasses is important in deciding which therapy to perform. As with all real-world processes, the boundary between VT and VF is ill defined, which might lead many patients experiencing arrhythmias in the overlap zone (that might be predominantly VT) to receive shocks from an implantable defibrillator. There may also be a small population of patients who could be treated with anti-arrhythmic drugs or a catheterization procedure if they can be diagnosed as suffering from predominantly VT after objective analysis of their intracardiac electrogram data obtained from the implantable defibrillator. The proposed work attempts to arrive at a quantifiable way to scale ventricular arrhythmias into VT, VF, and the overlap-zone arrhythmias as VT-VF candidates, using features extracted from wavelet analysis of surface electrograms. This might eventually lead to an objective way of analyzing arrhythmias in the overlap zone and computing their degree of affinity towards VT or VF. A database of 24 human ventricular arrhythmia tracings obtained from the MIT-BIH arrhythmia database was analyzed, and wavelet-based features that demonstrated discrimination between the VT, VF, and VT-VF groups were extracted. An overall accuracy of 75% in classifying the ventricular arrhythmias into 3 groups was achieved. PMID:22254473

Balasundaram, K; Masse, S; Nair, K; Farid, T; Nanthakumar, K; Umapathy, K

2011-01-01

270

Wavelet-Based Spatial Scaling of Coupled Reaction-Diffusion Fields  

SciTech Connect

Multiscale schemes for transferring information from fine to coarse scales are typically based on homogenization techniques. Such schemes smooth the fine scale features of the underlying fields, often resulting in the inability to accurately retain the fine scale correlations. In addition, higher-order statistical moments (beyond mean) of the relevant field variables are not necessarily preserved. As a superior alternative to averaging homogenization methods, a wavelet-based scheme for the exchange of information between a reactive and diffusive field in the context of multiscale reaction-diffusion problems is proposed and analyzed. The scheme is shown to be efficient in passing information along scales, from fine to coarse, i.e., upscaling as well as from coarse to fine, i.e., downscaling. It incorporates fine scale statistics (higher-order moments beyond mean), mainly due to the capability of wavelets to represent fields hierarchically. Critical to the success of the scheme is the identification of dominant scales containing the majority of the useful information. The dominant scales in effect specify the coarsest resolution possible. The scheme is applied in detail to the analysis of a diffusive system with a chemically reacting boundary. Reactions are simulated using kinetic Monte Carlo (kMC) and diffusion is solved by finite differences (FDs). Spatial scale differences are present at the interface of the kMC sites and the diffusion grid. The computational efficiency of the scheme is compared to results obtained by averaging homogenization, and to results from a benchmark scheme that ensures spatial scale parity between kMC and FD.

Mishra, Sudib [University of Arizona; Muralidharan, Krishna [University of Arizona; Deymier, Pierre [University of Arizona; Frantziskonis, G. [University of Arizona; Pannala, Sreekanth [ORNL; Simunovic, Srdjan [ORNL

2008-01-01

271

Distributed Noise Generation for Density Estimation Based Clustering without Trusted Third Party  

NASA Astrophysics Data System (ADS)

The rapid growth of the Internet provides people with tremendous opportunities for data collection, knowledge discovery and cooperative computation. However, it also brings the problem of sensitive information leakage. Both individuals and enterprises may suffer from massive data collection and information retrieval by untrusted parties. In this paper, we propose a privacy-preserving protocol for distributed kernel density estimation-based clustering. Our scheme applies the random data perturbation (RDP) technique and verifiable secret sharing to solve the security problem of the distributed kernel density estimation in [4], which assumed an intermediary party to help in the computation.

Su, Chunhua; Bao, Feng; Zhou, Jianying; Takagi, Tsuyoshi; Sakurai, Kouichi

272

Estimating the amount and distribution of radon flux density from the soil surface in China.  

PubMed

Based on an idealized model, both the annual and the seasonal radon ((222)Rn) flux densities from the soil surface at 1099 sites in China were estimated by linking a database of soil (226)Ra content and a global ecosystems database. Digital maps of the (222)Rn flux density in China were constructed at a spatial resolution of 25 km x 25 km by interpolation among the estimated data. An area-weighted annual average (222)Rn flux density from the soil surface across China was estimated to be 29.7+/-9.4 mBq m(-2)s(-1). Both regional and seasonal variations in the (222)Rn flux densities are significant in China. Annual average flux densities in southeastern and northwestern China are generally higher than those in other regions, because of the high soil (226)Ra content in the southeast and the high soil aridity in the northwest. The seasonal average flux density is generally higher in summer/spring than in winter, since relatively higher soil temperatures and lower soil water saturation in summer/spring than in other seasons are common in China. PMID:18329143

Zhuo, Weihai; Guo, Qiuju; Chen, Bo; Cheng, Guan

2008-03-07

273

Limit Distribution Theory for Maximum Likelihood Estimation of a Log-Concave Density  

PubMed Central

We find limiting distributions of the nonparametric maximum likelihood estimator (MLE) of a log-concave density, i.e. a density of the form f0 = exp(φ0) where φ0 is a concave function on ℝ. Existence, form, characterizations and uniform rates of convergence of the MLE are given by Rufibach (2006) and Dümbgen and Rufibach (2007). The characterization of the log-concave MLE in terms of distribution functions is the same (up to sign) as the characterization of the least squares estimator of a convex density on [0, ∞) as studied by Groeneboom, Jongbloed and Wellner (2001b). We use this connection to show that the limiting distributions of the MLE and its derivative are, under comparable smoothness assumptions, the same (up to sign) as in the convex density estimation problem. In particular, changing the smoothness assumptions of Groeneboom, Jongbloed and Wellner (2001b) slightly by allowing some higher derivatives to vanish at the point of interest, we find that the pointwise limiting distributions depend on the second and third derivatives at 0 of Hk, the "lower invelope" of an integrated Brownian motion process minus a drift term depending on the number of vanishing derivatives of φ0 = log f0 at the point of interest. We also establish the limiting distribution of the resulting estimator of the mode M(f0) and establish a new local asymptotic minimax lower bound which shows the optimality of our mode estimator in terms of both rate of convergence and dependence of constants on population values.

Balabdaoui, Fadoua; Rufibach, Kaspar; Wellner, Jon A.

2009-01-01

274

The Wegner estimate and the integrated density of states for some random operators  

Microsoft Academic Search

The integrated density of states (IDS) for random operators is an important function describing many physical characteristics of a random system. Properties of the IDS are derived from the Wegner estimate that describes the influence of finite-volume perturbations on a background system. In this paper, we present a simple proof of the Wegner estimate applicable to a wide variety of

J. M. Combes; P. D. Hislop; Frédéric Klopp; Shu Nakamura

2002-01-01

275

Strong consistency of density estimation by orthogonal series methods for dependent variables with applications  

Microsoft Academic Search

Among several widely used methods of nonparametric density estimation is the technique of orthogonal series advocated by several authors. For such estimates, when the observations are assumed to have been taken from a strongly mixing sequence in the sense of Rosenblatt [7], we study strong consistency by developing a probability inequality for bounded strongly mixing random variables. The results obtained are then

Ibrahim A. Ahmad

1979-01-01

276

Item Response Theory With Estimation of the Latent Density Using Davidian Curves  

Microsoft Academic Search

Davidian-curve item response theory (DC-IRT) is introduced, evaluated with simulations, and illustrated using data from the Schedule for Nonadaptive and Adaptive Personality Entitlement scale. DC-IRT is a method for fitting unidimensional IRT models with maximum marginal likelihood estimation, in which the latent density is estimated, simultaneously with the item parameters of logistic item response functions, as a Davidian curve. Simulations

Carol M. Woods; Nan Lin

2009-01-01

277

An automatic iris occlusion estimation method based on high-dimensional density estimation.  

PubMed

Iris masks play an important role in iris recognition. They indicate which part of the iris texture map is useful and which part is occluded or contaminated by noisy image artifacts such as eyelashes, eyelids, eyeglasses frames, and specular reflections. The accuracy of the iris mask is extremely important. The performance of the iris recognition system will decrease dramatically when the iris mask is inaccurate, even when the best recognition algorithm is used. Traditionally, people used the rule-based algorithms to estimate iris masks from iris images. However, the accuracy of the iris masks generated this way is questionable. In this work, we propose to use Figueiredo and Jain's Gaussian Mixture Models (FJ-GMMs) to model the underlying probabilistic distributions of both valid and invalid regions on iris images. We also explored possible features and found that Gabor Filter Bank (GFB) provides the most discriminative information for our goal. Finally, we applied Simulated Annealing (SA) technique to optimize the parameters of GFB in order to achieve the best recognition rate. Experimental results show that the masks generated by the proposed algorithm increase the iris recognition rate on both ICE2 and UBIRIS dataset, verifying the effectiveness and importance of our proposed method for iris occlusion estimation. PMID:22868651

Li, Yung-Hui; Savvides, Marios

2013-04-01

278

Autocorrelation-based estimate of particle image density in particle image velocimetry  

NASA Astrophysics Data System (ADS)

In Particle Image Velocimetry (PIV), the number of particle images per interrogation region, or particle image density, impacts the strength of the correlation and, as a result, the number of valid vectors and the measurement uncertainty. Therefore, any a priori estimate of the accuracy and uncertainty of PIV requires knowledge of the particle image density. An autocorrelation-based method for estimating the local, instantaneous particle image density is presented. Synthetic images were used to develop an empirical relationship based on how the autocorrelation peak magnitude varies with particle image density, particle image diameter, illumination intensity, interrogation region size, and background noise. This relationship was then tested using images from two experimental setups with different seeding densities and flow media. The experimental results were compared to image densities obtained using a local maximum method as well as manual particle counts, and the method was found to be robust. The effect of varying particle image intensities was also investigated and found to affect the estimated particle image density.
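
The relationship itself is empirical, but the quantity it is built on, the autocorrelation peak magnitude of an interrogation window, can be computed directly (a sketch using the Wiener-Khinchin relation; normalization conventions vary):

    import numpy as np

    def autocorrelation_peak(window):
        """Central autocorrelation peak of a 2D interrogation region."""
        w = window - window.mean()
        acf = np.real(np.fft.ifft2(np.abs(np.fft.fft2(w)) ** 2))
        return acf[0, 0] / w.size     # zero-lag peak (a variance estimate)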

Warner, Scott O.

279

Wavelet-based SAR images despeckling using joint hidden Markov model  

NASA Astrophysics Data System (ADS)

In the past few years, wavelet-domain hidden Markov models have proven to be useful tools for statistical signal and image processing. The hidden Markov tree (HMT) model captures the key features of the joint probability density of the wavelet coefficients of real-world data. One potential drawback of the HMT framework is its failure to account for the intrascale correlations that exist among neighboring wavelet coefficients. In this paper, we propose to develop a joint hidden Markov model by fusing the wavelet Bayesian denoising technique with an image regularization procedure based on the HMT and a Markov random field (MRF). The Expectation Maximization algorithm is used to estimate hyperparameters and specify the mixture model. The noise-free wavelet coefficients are finally estimated by a shrinkage function based on local weighted averaging of the Bayesian estimator. It is shown that the joint method outperforms the Lee filter and standard HMT techniques in terms of the integrative measure of the equivalent number of looks (ENL) and Pratt's figure of merit (FOM), especially when dealing with speckle noise of large variance.

Li, Qiaoliang; Wang, Guoyou; Liu, Jianguo; Chen, Shaobo

2007-11-01

280

Breast percent density estimation from 3D reconstructed digital breast tomosynthesis images  

NASA Astrophysics Data System (ADS)

Breast density is an independent factor of breast cancer risk. In mammograms breast density is quantitatively measured as percent density (PD), the percentage of dense (non-fatty) tissue. To date, clinical estimates of PD have varied significantly, in part due to the projective nature of mammography. Digital breast tomosynthesis (DBT) is a 3D imaging modality in which cross-sectional images are reconstructed from a small number of projections acquired at different x-ray tube angles. Preliminary studies suggest that DBT is superior to mammography in tissue visualization, since superimposed anatomical structures present in mammograms are filtered out. We hypothesize that DBT could also provide a more accurate breast density estimation. In this paper, we propose to estimate PD from reconstructed DBT images using a semi-automated thresholding technique. Preprocessing is performed to exclude the image background and the area of the pectoral muscle. Threshold values are selected manually from a small number of reconstructed slices; a combination of these thresholds is applied to each slice throughout the entire reconstructed DBT volume. The proposed method was validated using images of women with recently detected abnormalities or with biopsy-proven cancers; only contralateral breasts were analyzed. The Pearson correlation and kappa coefficients between the breast density estimates from DBT and the corresponding digital mammogram indicate moderate agreement between the two modalities, comparable with our previous results from 2D DBT projections. Percent density appears to be a robust measure for breast density assessment in both 2D and 3D x-ray breast imaging modalities using thresholding.
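
A sketch of the thresholding step under stated assumptions (the record says "a combination of these thresholds is applied" without specifying it; linear interpolation of the hand-picked thresholds across slices is assumed here):

    import numpy as np

    def percent_density(volume, breast_mask, sampled_thresholds):
        """PD = 100 * dense voxels / breast voxels, for a DBT volume of shape
        (slices, rows, cols) and a boolean mask of the segmented breast."""
        n = volume.shape[0]
        zs = np.linspace(0, n - 1, num=len(sampled_thresholds))
        t = np.interp(np.arange(n), zs, sampled_thresholds)  # per-slice threshold
        dense = (volume > t[:, None, None]) & breast_mask
        return 100.0 * dense.sum() / breast_mask.sum()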

Bakic, Predrag R.; Kontos, Despina; Carton, Ann-Katherine; Maidment, Andrew D. A.

2008-04-01

281

On the use of the noncentral chi-square density function for the distribution of helicopter spectral estimates  

NASA Astrophysics Data System (ADS)

A probability density function for the variability of ensemble averaged spectral estimates from helicopter acoustic signals in Gaussian background noise was evaluated. Numerical methods for calculating the density function and for determining confidence limits were explored. Density functions were predicted for both synthesized and experimental data and compared with observed spectral estimate variability.
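
With scipy, the density and confidence limits of such a model are one-liners; the parameterization below (df = 2M, noncentrality 2M*P/N for M averaged periodograms of a tone of power P in noise of power N) is one common convention and may not match the paper's exact formulation:

    from scipy.stats import ncx2

    M, P, N = 16, 1.0, 0.5                # averages, tone power, noise power (assumed)
    df, nc = 2 * M, 2 * M * P / N         # degrees of freedom, noncentrality
    lo, hi = ncx2.ppf([0.025, 0.975], df, nc)   # 95% confidence limits
    pdf_at_mean = ncx2.pdf(df + nc, df, nc)     # density at the mean (= df + nc)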

Garber, Donald P.

1993-10-01

282

Change-point detection in time-series data by relative density-ratio estimation.  

PubMed

The objective of change-point detection is to discover abrupt property changes lying behind time-series data. In this paper, we present a novel statistical change-point detection algorithm based on non-parametric divergence estimation between time-series samples from two retrospective segments. Our method uses the relative Pearson divergence as a divergence measure, and it is accurately and efficiently estimated by a method of direct density-ratio estimation. Through experiments on artificial and real-world datasets including human-activity sensing, speech, and Twitter messages, we demonstrate the usefulness of the proposed method. PMID:23500502
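
A compact sketch of direct relative density-ratio estimation in the RuLSIF style follows (Gaussian kernels centered on the numerator sample, closed-form ridge solution; sigma and lam would normally be chosen by cross-validation):

    import numpy as np

    def relative_pe_divergence(X_p, X_q, alpha=0.1, sigma=1.0, lam=0.1):
        """Relative Pearson divergence between samples X_p and X_q (rows are
        points), estimated by direct relative density-ratio fitting."""
        centers = X_p
        def kmat(X):
            d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
            return np.exp(-d2 / (2.0 * sigma ** 2))
        Kp, Kq = kmat(X_p), kmat(X_q)
        H = alpha * Kp.T @ Kp / len(X_p) + (1 - alpha) * Kq.T @ Kq / len(X_q)
        h = Kp.mean(axis=0)
        theta = np.linalg.solve(H + lam * np.eye(len(centers)), h)
        wp, wq = Kp @ theta, Kq @ theta      # estimated ratio on each sample
        return (-alpha * np.mean(wp ** 2) / 2
                - (1 - alpha) * np.mean(wq ** 2) / 2
                + np.mean(wp) - 0.5)

A change-point score is then obtained by sliding two adjacent retrospective windows over the series and evaluating this divergence between them.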

Liu, Song; Yamada, Makoto; Collier, Nigel; Sugiyama, Masashi

2013-02-04

283

Estimation and prediction of multiple flying balls using Probability Hypothesis Density filtering  

Microsoft Academic Search

We describe a method for estimating the position and velocity of multiple flying balls for the purpose of robotic ball catching. For this, a multi-target recursive Bayes filter, the Gaussian Mixture Probability Hypothesis Density (GM-PHD) filter, fed by a circle detector, is used. This recently developed filter avoids the need to enumerate all possible data association decisions, making it computationally efficient.

Oliver Birbach; Udo Frese

2011-01-01

284

Independent Component Analysis of High-Density Electromyography in Muscle Force Estimation  

Microsoft Academic Search

Accurate force prediction from surface electromyography (EMG) forms an important methodological challenge in biomechanics and kinesiology. In a previous study (Staudenmann, 2006), we illustrated force estimates based on analyses borrowed from multivariate statistics. In particular, we showed the advantages of principal component analysis (PCA) on monopolar high-density EMG (HD-EMG) over conventional electrode configurations. In the present study, we further

Didier Staudenmann; Andreas Daffertshofer; Idsart Kingma; Dick F. Stegeman; Jaap H. van Dieen

2007-01-01

285

Likelihood Cross-Validation Bandwidth Selection for Nonparametric Kernel Density Estimators.  

National Technical Information Service (NTIS)

One of the major problems in kernel density estimation is the choice of bandwidth. The first order properties of the likelihood cross-validation bandwidth selection method, introduced by Habbema, Hermans and Van den Broek (1974) and Duin (1976), are reviewed...

B. van Es

1989-01-01

286

Digitization of film for archiving through the estimation of dye densities  

NASA Astrophysics Data System (ADS)

This paper introduces an alternative standard for film archiving based on digitizing the dye densities of the film rather than the color (XYZ or RGB). The color of the film is encoded by the storage of two types of information: (1) the analytical densities of the dyes, which must only be stored once for each film; and (2) the dye concentrations for each pixel of each frame. If the analytical densities of the dyes are known, then the concentration of each dye can be estimated by measuring the logarithm of the transmission of the film in as few as three frequency bands. A formalism for accomplishing this estimation, as well as the estimation of the dye density curves, will be presented. The error of this digitization technique will be quantified as a function of filter bandwidth and of the number of spectral transmission measurements used for the dye concentration estimation. In addition, the impact of this technique on film restoration will be discussed.
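
Since the measured optical density is (approximately) linear in the dye concentrations, the per-pixel estimation reduces to a small least-squares problem; a sketch:

    import numpy as np

    def dye_concentrations(optical_density, analytical_density):
        """Estimate per-pixel dye concentrations c from d = -log10(T) measured
        in >= 3 bands, given matrix D (bands x dyes) of analytical densities,
        by solving d ~ D c in the least-squares sense."""
        D = np.asarray(analytical_density)   # shape (n_bands, n_dyes)
        d = np.asarray(optical_density)      # shape (n_bands,)
        c, *_ = np.linalg.lstsq(D, d, rcond=None)
        return c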

Pringle, Lon N.; McElwain, Thomas P.; Glasgow, Bruce B.

1995-04-01

287

Estimates of meridional overturning circulation variability in the North Atlantic from surface density flux fields  

Microsoft Academic Search

A method developed recently by Grist et al. (2009) is used to obtain estimates of variability in the strength of the meridional overturning circulation (MOC) at various latitudes in the North Atlantic. The method employs water mass transformation theory to determine the surface buoyancy forced overturning circulation (SFOC) using surface density flux fields from both the Hadley Centre Coupled Model

Simon A. Josey; Jeremy P. Grist; Robert Marsh

2009-01-01

288

USING AERIAL HYPERSPECTRAL REMOTE SENSING IMAGERY TO ESTIMATE CORN PLANT STAND DENSITY  

Technology Transfer Automated Retrieval System (TEKTRAN)

Since corn plant stand density is important for optimizing crop yield, several researchers have recently developed ground-based systems for automatic measurement of this crop growth parameter. Our objective was to use data from such a system to assess the potential for estimation of corn plant stan...

289

A likelihood approach to estimating animal density from binary acoustic transects.  

PubMed

We propose an approximate maximum likelihood method for estimating animal density and abundance from binary passive acoustic transects, when both the probability of detection and the range of detection are unknown. The transect survey is purposely designed so that successive data points are dependent, and this dependence is exploited to simultaneously estimate density, range of detection, and probability of detection. The data are assumed to follow a homogeneous Poisson process in space, and a second-order Markov approximation to the likelihood is used. Simulations show that this method has small bias under the assumptions used to derive the likelihood, although it performs better when the probability of detection is close to 1. The effects of violations of these assumptions are also investigated, and the approach is found to be sensitive to spatial trends in density and clustering. The method is illustrated using real acoustic data from a survey of sperm and humpback whales. PMID:21039393

Horrocks, Julie; Hamilton, David C; Whitehead, Hal

2010-10-29

290

The role of estimation error in probability density function of soil hydraulic parameters: Pedotop scale  

NASA Astrophysics Data System (ADS)

Knowledge of hydrodynamic parameters is required for modeling transport processes and for prognosis of their results. Soil hydrodynamic parameters are determined in the field by methods based upon certain approximations, with an inverse solution procedure applied. The estimate of a parameter P therefore includes an error e. Like other soil properties, hydrodynamic parameters vary to differing degrees, even over the region of one pedotaxon, and knowledge of their probability density function (PDF) is frequently required. We have used five approximate infiltration equations for the estimation of sorptivity S and saturated hydraulic conductivity K. The distribution of both parameters was determined with regard to the type of infiltration equation applied. The PDFs of the parameters were not identical when we compared the estimates derived from the various infiltration equations. As follows from this comparative study, the estimation error e deforms the PDF of the parameter estimates.

Kutílek, Miroslav; Krejca, Miroslav; Kupcová-Vlašimská, Jana

291

Surface estimates of the Atlantic overturning in density space in an eddy-permitting ocean model  

NASA Astrophysics Data System (ADS)

A method to estimate the variability of the Atlantic meridional overturning circulation (AMOC) from surface observations is investigated using an eddy-permitting ocean-only model (ORCA-025). The approach is based on the estimate of dense water formation from surface density fluxes. Analysis using 78 years of two repeat forcing model runs reveals that the surface forcing-based estimate accounts for over 60% of the interannual AMOC variability in σ0 coordinates between 37°N and 51°N. The analysis provides correlations between surface-forced and actual overturning that exceed those obtained in an earlier analysis of a coarser-resolution coupled model. Our results indicate that, in accordance with theoretical considerations behind the method, it provides a better estimate of the overturning in density coordinates than in z coordinates in subpolar latitudes. By considering shorter segments of the model run, it is shown that correlations are particularly enhanced by the method's ability to capture large decadal scale AMOC fluctuations. The inclusion of the anomalous Ekman transport increases the amount of variance explained by an average 16% throughout the North Atlantic and provides the greatest potential for estimating the variability of the AMOC in density space between 33°N and 54°N. In that latitude range, 70-84% of the variance is explained and the root-mean-square difference is less than 1 Sv when the full run is considered.

Grist, Jeremy P.; Josey, Simon A.; Marsh, Robert

2012-06-01

292

Information from dust continuum data: Estimation of temperature, spectral index, and column density  

NASA Astrophysics Data System (ADS)

Sub-millimetre continuum data provide information on the column density and dust properties of interstellar clouds. We have compared methods that can be used to derive high-resolution column density maps from, e.g., Herschel measurements. We also have investigated the estimation of dust colour temperature and emissivity spectral index. Radiative transfer models are used to study the differences between the true and apparent cloud properties and to compare the performance of analysis methods. Models show that, because of the nature of spatial temperature variations, externally heated clouds tend to show an artificial positive correlation between colour temperature and spectral index. For clouds with internal heating, the situation is reversed. Analysis of observations is affected by observational noise that also can produce a negative correlation. We find that, compared to direct least squares fitting of modified black body spectra, Bayesian methods and hierarchical statistical models are more accurate although not completely unbiased. Hierarchical models can mask possible local variation in the relation between temperature and spectral index. Palmeirim et al. (2013) derived high-resolution column density maps using a combination of estimates obtained in different wavelength ranges. The method is quite reliable although somewhat sensitive to noise. For clouds with a simple density structure, radiative transfer modelling provides the most accurate estimates. As a simpler alternative, we propose modelling that consists of high-resolution column density and temperature maps that are matched to observations through convolution. The method is able to produce reliable column density estimates even at super-resolution. The method is computationally demanding but still feasible even in the analysis of large Herschel maps.
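
For reference, the direct least-squares fit that the comparison starts from can be sketched as follows (SI constants; nu0 is an arbitrary reference frequency, and the data arrays freqs_hz and intensities are hypothetical placeholders):

    import numpy as np
    from scipy.optimize import curve_fit

    H, K_B, C = 6.626e-34, 1.381e-23, 2.998e8   # Planck, Boltzmann, light speed (SI)

    def modified_blackbody(nu, tau0, T, beta, nu0=1e12):
        """I_nu = tau0 * (nu/nu0)**beta * B_nu(T); tau0 tracks column density
        for a fixed dust opacity law."""
        b_nu = 2 * H * nu**3 / C**2 / np.expm1(H * nu / (K_B * T))
        return tau0 * (nu / nu0) ** beta * b_nu

    # popt, pcov = curve_fit(modified_blackbody, freqs_hz, intensities,
    #                        p0=(1e-4, 15.0, 1.8))   # fits tau0, T, beta only

The abstract's warning applies directly to such fits: observational noise induces a spurious negative correlation between the fitted T and beta, which the Bayesian and hierarchical approaches it discusses are designed to mitigate.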

Juvela, Mika; Montillaud, Julien; Ysard, Nathalie; Malinen, Johanna; Lunttila, Tuomas

2013-07-01

293

Estimating Effective Data Density in a Satellite Retrieval or an Objective Analysis.  

NASA Astrophysics Data System (ADS)

An attempt is made to formulate consistent objective definitions of the concept of `effective data density' applicable both in the context of satellite soundings and, more generally, in objective data analysis. Definitions based upon various forms of Backus-Gilbert `spread' functions are found to be seriously misleading in satellite soundings, where the model resolution function (expressing the sensitivity of the retrieval or analysis to changes in the background error) features sidelobes. Instead, estimates derived by smoothing the trace components of the model resolution function are proposed. The new estimates are found to be more reliable and informative in simulated satellite retrieval problems and, for the special case of uniformly spaced perfect observations, agree exactly with the actual data density. The new estimates integrate to the `degrees of freedom for signal,' a diagnostic that is invariant to changes of the units or coordinates used.

Purser, R. J.; Huang, H.-L.

1993-06-01

294

Drive counts as a method of estimating ungulate density in forests: mission impossible?  

PubMed

Although drive counts are frequently used to estimate the size of deer populations in forests, little is known about how counting methods or the density and social organization of the deer species concerned influence the accuracy of the estimates obtained, and hence their suitability for informing management decisions. As these issues cannot readily be examined for real populations, we conducted a series of 'virtual experiments' in a computer simulation model to evaluate the effects of block size, proportion of forest counted, deer density, social aggregation and spatial auto-correlation on the accuracy of drive counts. Simulated populations of red and roe deer were generated on the basis of drive count data obtained from Polish commercial forests. For both deer species, count accuracy increased with increasing density, and decreased as the degree of aggregation, either demographic or spatial, within the population increased. However, the effect of density on accuracy was substantially greater than the effect of aggregation. Although improvements in accuracy could be made by reducing the size of counting blocks for low-density, aggregated populations, these were limited. Increasing the proportion of the forest counted led to greater improvements in accuracy, but the gains were limited compared with the increase in effort required. If it is necessary to estimate the deer population with a high degree of accuracy (e.g. within 10% of the true value), drive counts are likely to be inadequate whatever the deer density. However, if a lower level of accuracy (within 20% or more) is acceptable, our study suggests that at higher deer densities (more than ca. five to seven deer/100 ha) drive counts can provide reliable information on population size. PMID:21765532

Borkowski, Jakub; Palmer, Stephen C F; Borowski, Zbigniew

2011-01-29

295

A wavelet-based regularization scheme for HRR radar profile enhancement for use in automatic target recognition  

NASA Astrophysics Data System (ADS)

Cetin has applied non-quadratic optimization methods to produce feature-enhanced high range resolution (HRR) radar profiles. This work concerned ground-based targets and was carried out in the temporal domain. In this paper, we propose a wavelet-based half-quadratic technique for ground-to-air target identification. The method is tested on simulated data generated by standard techniques. This analysis shows the ability of the proposed method to recover high-resolution features such as the locations and amplitudes of the dominant scatterers in the HRR profile. This suggests that the technique may help improve the performance of HRR target recognition systems.
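
Feature-enhanced reconstruction of this kind is usually cast as non-quadratic regularized least squares. As a generic illustration of the family (not the exact functional used in this paper), with observed data y, a linear observation operator A and a wavelet transform W, one minimizes

    J(x) = \lVert y - A x \rVert_2^2 + \lambda \lVert W x \rVert_p^p, \qquad p \le 1,

and half-quadratic schemes handle the non-quadratic penalty by alternating a weighted quadratic solve in x with a closed-form update of auxiliary weights.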

Morris, Hedley C.; DePass, Monica M.

2004-08-01

296

Wavelet-based image registration technique for high-resolution remote sensing images  

NASA Astrophysics Data System (ADS)

Image registration is the process of geometrically aligning one image to another image of the same scene taken from different viewpoints at different times or by different sensors. It is an important image processing procedure in remote sensing and has been studied by remote sensing image processing professionals for several decades. Nevertheless, it is still difficult to find an accurate, robust, and automatic image registration method, and most existing image registration methods are designed for a particular application. High-resolution remote sensing images have made it more convenient for professionals to study the Earth; however, they also create new challenges when traditional processing methods are used. In terms of image registration, a number of problems exist in the registration of high-resolution images: (1) the increased relief displacements, introduced by increasing the spatial resolution and lowering the altitude of the sensors, cause obvious geometric distortion in local areas where elevation variation exists; (2) precisely locating control points in high-resolution images is not as simple as in moderate-resolution images; (3) a large number of control points are required for a precise registration, which is a tedious and time-consuming process; and (4) high data volume often affects the processing speed in the image registration. Thus, the demand for an image registration approach that can reduce the above problems is growing. This study proposes a new image registration technique, which is based on the combination of feature-based matching (FBM) and area-based matching (ABM). A wavelet-based feature extraction technique, normalized cross-correlation matching, and relaxation-based image matching are employed in this new method. Two pairs of data sets, one pair of IKONOS panchromatic images from different times and the other pair of images consisting of an IKONOS panchromatic image and a QuickBird multispectral image, are used to evaluate the proposed image registration algorithm. The experimental results show that the proposed algorithm can select sufficient control points semi-automatically to reduce the local distortions caused by local height variation, resulting in improved image registration results.

Hong, Gang; Zhang, Yun

2008-12-01

297

A dynamic self-adaptive wavelet-based method for multiscale electronic structure calculations  

NASA Astrophysics Data System (ADS)

Ab initio electronic structure calculations have been successfully used in the study of material properties. This success has generated an interest in applying such methods to the study of larger systems and/or those with a multiscale character. The difficulties encountered with conventional methods have made such applications difficult, and it is in this context that alternative bases are presently being explored. It is the use of a wavelet basis which is investigated in this thesis. The properties of a wavelet basis are impressive, and such a basis provides a multiresolution analysis which may be used for efficient multiscale representation. The use of a wavelet basis in ab initio electronic structure calculations has been explored. However, the ultimate usefulness of a wavelet-based method has remained an open question. The critical stumbling block has been the necessity of performing local real-space operations, where the choice for an appropriate secondary real-space representation has been unclear. Such a representation should possess the same multiscale properties as the wavelet basis itself. This problem is resolved in this thesis by the construction of a generalized Haar basis which is introduced as a secondary representation, complementary to the primary wavelet basis. A self-adaptive wavelet refinement cycle is synthesized with an iterative relaxation method to allow for the generation of an optimal compressed wavelet basis simultaneously with the solution of a given problem. This basis generation follows the information obtained throughout the calculation, and is not directed by any external controls. The method employs multiple basis sets which are allowed to adapt to the immediate demands of the solution, or a particular operation. Most importantly, the algorithms developed are multiscale in implementation, and not simply in theory. This standard is rigorously maintained; it places a tremendous burden on the computational complexity and necessitated the creation of a massive library of multiscale routines that are quite general and applicable beyond the scope of this work. Results are presented for the model calculations of hydrogen and helium atoms using a general three-dimensional bare Coulomb potential, as well as a three-dimensional quantum dot.

Richie, David Allen, Jr.

298

Density of Jatropha curcas Seed Oil and its Methyl Esters: Measurement and Estimations  

NASA Astrophysics Data System (ADS)

Density data as a function of temperature have been measured for Jatropha curcas seed oil, as well as biodiesel jatropha methyl esters at temperatures from above their melting points to 90 °C. The data obtained were used to validate the method proposed by Spencer and Danner using a modified Rackett equation. The experimental and estimated density values using the modified Rackett equation gave almost identical values with average absolute percent deviations less than 0.03% for the jatropha oil and 0.04% for the jatropha methyl esters. The Janarthanan empirical equation was also employed to predict jatropha biodiesel densities. This equation performed equally well with average absolute percent deviations within 0.05%. Two simple linear equations for densities of jatropha oil and its methyl esters are also proposed in this study.
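
For reference, the Spencer and Danner modification of the Rackett equation estimates the saturated liquid molar volume from the critical constants and a fitted Rackett compressibility Z_RA, and the density then follows from the molar mass M:

    V_s = \frac{R T_c}{P_c} \, Z_{RA}^{\,1 + (1 - T/T_c)^{2/7}}, \qquad \rho = \frac{M}{V_s}.

For mixtures such as vegetable oils and their methyl esters, the critical properties and Z_RA are, as we understand the approach, effective values obtained from mixing rules over the fatty acid composition.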

Veny, Harumi; Baroutian, Saeid; Aroua, Mohamed Kheireddine; Hasan, Masitah; Raman, Abdul Aziz; Sulaiman, Nik Meriam Nik

2009-04-01

299

Analysis of percent density estimates from digital breast tomosynthesis projection images  

NASA Astrophysics Data System (ADS)

Women with dense breasts have an increased risk of breast cancer. Breast density is typically measured as the percent density (PD), the percentage of non-fatty (i.e., dense) tissue in breast images. Mammographic PD estimates vary, in part, due to the projective nature of mammograms. Digital breast tomosynthesis (DBT) is a novel radiographic method in which 3D images of the breast are reconstructed from a small number of projection (source) images, acquired at different positions of the x-ray focus. DBT provides superior visualization of breast tissue and has improved sensitivity and specificity as compared to mammography. Our long-term goal is to test the hypothesis that PD obtained from DBT is superior in estimating cancer risk compared with other modalities. As a first step, we have analyzed the PD estimates from DBT source projections since the results would be independent of the reconstruction method. We estimated PD from MLO mammograms (PDM) and from individual DBT projections (PDT). We observed good agreement between PDM and PDT from the central projection images of 40 women. This suggests that variations in breast positioning, dose, and scatter between mammography and DBT do not negatively affect PD estimation. The PDT estimated from individual DBT projections of nine women varied with the angle between the projections. This variation is caused by the 3D arrangement of the breast dense tissue and the acquisition geometry.

Bakic, Predrag R.; Kontos, Despina; Zhang, Cuiping; Yaffe, Martin J.; Maidment, Andrew D. A.

2007-03-01

300

Mammographic density and estimation of breast cancer risk in intermediate risk population.  

PubMed

It is not clear to what extent mammographic density represents a risk factor for breast cancer among women with moderate risk for disease. We conducted a population-based study to estimate the independent effect of breast density on breast cancer risk and to evaluate the potential of breast density as a marker of risk in an intermediate risk population. From November 2006 to April 2009, data that included American College of Radiology Breast Imaging Reporting and Data System (BI-RADS) breast density categories and risk information were collected on 52,752 women aged 50-69 years without previously diagnosed breast cancer who underwent screening mammography examination. A total of 257 screen-detected breast cancers were identified. Logistic regression was used to assess the effect of breast density on breast carcinoma risk and to control for other risk factors. The risk increased with density, and the odds ratio for breast cancer among women with dense breasts (heterogeneously or extremely dense) was 1.9 (95% confidence interval, 1.3-2.8) compared with women with almost entirely fat breasts, after adjustment for age, body mass index, age at menarche, age at menopause, age at first childbirth, number of live births, use of oral contraceptives, family history of breast cancer, prior breast procedures, and hormone replacement therapy use, all of which were significantly related to breast density (p < 0.001). In the multivariate model, breast cancer risk increased with age, body mass index, family history of breast cancer, prior breast procedure and breast density, and decreased with number of live births. Our finding that mammographic density is an independent risk factor for breast cancer indicates the importance of breast density measurements for breast cancer risk assessment also in moderate risk populations. PMID:23173778

Tesic, Vanja; Kolaric, Branko; Znaor, Ariana; Kuna, Sanja Kusacic; Brkljacic, Boris

2012-11-23

301

Efficient sample density estimation by combining gridding and an optimized kernel.  

PubMed

The reconstruction of non-Cartesian k-space trajectories often requires the estimation of nonuniform sampling density. Particularly for 3D, this calculation can be computationally expensive. The method proposed in this work combines an iterative algorithm previously proposed by Pipe and Menon (Magn Reson Med 1999;41:179-186) with the optimal kernel design previously proposed by Johnson and Pipe (Magn Reson Med 2009;61:439-447). The proposed method shows substantial time reductions in estimating the densities of center-out trajectories, when compared with that of Johnson. It is demonstrated that, depending on the trajectory, the proposed method can provide reductions in execution time by factors of 12 to 85. The method is also shown to be robust in areas of high trajectory overlap, when compared with two analytical density estimation methods, producing a 10-fold increase in accuracy in one case. Initial conditions allow the proposed method to converge in fewer iterations and are shown to be flexible in terms of the accuracy of information supplied. The proposed method is not only one of the fastest and most accurate algorithms, it is also completely generic, allowing any arbitrary trajectory to be density compensated extemporaneously. The proposed method is also simple and can be implemented on parallel computing platforms in a straightforward manner. PMID:21688320
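
The underlying iteration of Pipe and Menon estimates density-compensation weights by repeatedly dividing by the kernel-convolved weights. Below is a brute-force one-dimensional sketch: the O(n^2) pairwise kernel stands in for the paper's gridding/degridding with an optimized kernel, and the triangular kernel and its width are arbitrary choices of ours.

    import numpy as np

    def pm_density_weights(k, kernel_width=0.05, n_iter=20):
        """Pipe-Menon-style weights for 1D sample positions k (illustrative)."""
        d = np.abs(k[:, None] - k[None, :])
        C = np.clip(1.0 - d / kernel_width, 0.0, None)  # triangular kernel
        w = np.ones_like(k)
        for _ in range(n_iter):
            w = w / (C @ w)  # w_{i+1} = w_i / (C convolved with w_i)
        return w

    # samples denser near the k-space centre receive smaller weights
    x = np.linspace(-1.0, 1.0, 201)
    w = pm_density_weights(np.sign(x) * x**2)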

Zwart, Nicholas R; Johnson, Kenneth O; Pipe, James G

2011-06-17

302

Automated estimation of breast density on mammogram using combined information of histogram statistics and boundary gradients  

NASA Astrophysics Data System (ADS)

This paper presents an automated scheme for breast density estimation on mammograms using statistical and boundary information. Breast density is regarded as a meaningful indicator of breast cancer risk, but its measurement still relies on the qualitative judgment of radiologists. Therefore, we attempted to develop an automated system achieving objective and quantitative measurement. For preprocessing, we first segmented the breast region, performed contrast stretching, and applied median filtering. Then, two features were extracted: statistical information, namely the standard deviations of the fat and dense regions in the breast area, and boundary information, namely the edge magnitude of the set of pixels with the same intensity. These features were calculated for each intensity level. By combining these features, the optimal threshold that best divided the fat and dense regions was determined. For evaluation purposes, 80 cases of Full-Field Digital Mammography (FFDM) taken in our institution were utilized. Two observers conducted the performance evaluation. The correlation coefficients of the threshold and the percentage between the human observers and the automated estimation were 0.9580 and 0.9869 on average, respectively. These results suggest that the combination of statistical and boundary information is a promising method for automated breast density estimation.

Kim, Youngwoo; Kim, Changwon; Kim, Jong-Hyo

2010-03-01

303

Density estimation in a wolverine population using spatial capture-recapture models  

USGS Publications Warehouse

Classical closed-population capture-recapture models do not accommodate the spatial information inherent in encounter history data obtained from camera-trapping studies. As a result, individual heterogeneity in encounter probability is induced, and it is not possible to estimate density objectively because trap arrays do not have a well-defined sample area. We applied newly developed capture-recapture models that accommodate the spatial attribute inherent in capture-recapture data to a population of wolverines (Gulo gulo) in Southeast Alaska in 2008. We used camera-trapping data collected from 37 cameras in a 2,140-km2 area of forested and open habitats largely enclosed by ocean and glacial icefields. We detected 21 unique individuals 115 times. Wolverines exhibited a strong positive trap response, with an increased tendency to revisit previously visited traps. Under the trap-response model, we estimated wolverine density at 9.7 individuals/1,000 km2 (95% Bayesian CI: 5.9-15.0). Our model provides a formal statistical framework for estimating density from wolverine camera-trapping studies that accounts for a behavioral response due to baited traps. Further, our model-based estimator does not have strict requirements about the spatial configuration of traps or length of trapping sessions, providing considerable operational flexibility in the development of field studies.
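
Spatial capture-recapture models of this type typically let the encounter rate of individual i at trap j decay with the distance between the trap location x_j and the individual's latent activity centre s_i. A common half-normal form, with the behavioural (trap) response entering through the baseline rate, is (our generic notation, not necessarily the exact parameterization fitted here)

    \lambda_{ij} = \lambda_0 \, e^{\beta z_{ij}} \exp\!\left( -\frac{\lVert x_j - s_i \rVert^2}{2\sigma^2} \right),

where z_{ij} indicates a previous capture of individual i at trap j (the positive trap response reported above corresponds to \beta > 0), and density follows from the estimated number of activity centres per unit area of the state space.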

Royle, J. Andrew; Magoun, Audrey J.; Gardner, Beth; Valkenbury, Patrick; Lowell, Richard E.

2011-01-01

304

Probability Density Estimation Using Isocontours and Isosurfaces: Application to Information-Theoretic Image Registration  

PubMed Central

We present a new geometric approach for determining the probability density of the intensity values in an image. We drop the notion of an image as a set of discrete pixels and assume a piecewise-continuous representation. The probability density can then be regarded as being proportional to the area between two nearby isocontours of the image surface. Our paper extends this idea to joint densities of image pairs. We demonstrate the application of our method to affine registration between two or more images using information-theoretic measures such as mutual information. We show cases where our method outperforms existing methods such as simple histograms, histograms with partial volume interpolation, Parzen windows, etc., under fine intensity quantization for affine image registration under significant image noise. Furthermore, we demonstrate results on simultaneous registration of multiple images, as well as for pairs of volume data sets, and show some theoretical properties of our density estimator. Our approach requires the selection of only an image interpolant. The method neither requires any kind of kernel functions (as in Parzen windows), which are unrelated to the structure of the image in itself, nor does it rely on any form of sampling for density estimation.
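
In the continuum limit, the 'area between two nearby isocontours' construction is the co-area formula. For an image intensity I(x) on a domain \Omega, the density of intensity value \alpha can be written (our notation) as

    p(\alpha) = \frac{1}{|\Omega|} \int_{\{x \,:\, I(x) = \alpha\}} \frac{dl}{\lVert \nabla I(x) \rVert},

so nearly flat regions, where the gradient is small and isocontours are widely separated, contribute large probability mass, just as they would populate a histogram bin.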

Rajwade, Ajit; Banerjee, Arunava; Rangarajan, Anand

2010-01-01

305

Improved estimators for quantum Monte Carlo calculation of spherically averaged intracule densities  

NASA Astrophysics Data System (ADS)

System-averaged pair densities or "intracule densities" are important for qualitative and quantitative descriptions of electron correlation [1]. In quantum Monte Carlo (QMC) simulations, spherically averaged intracule densities are usually calculated by means of the traditional histogram technique (i.e., by counting the number of times two electrons are found at a certain distance) that is very noisy at short electron-electron distances. We will show how previously-used improved estimators for the on-top pair density [2,3] can be generalized to the case of non-vanishing electron-electron distances, as an application of the "zero-variance" procedure [4]. The obtained estimators lead to noise several orders of magnitude smaller than the histogram technique, allowing unprecedented fast and accurate calculations of intracule densities in QMC. Illustrative calculations on simple atomic systems will be given. [1] J. M. Mercero, E. Valderrama and J. M. Ugalde, in "NATO-ASI Series in Metal-Ligand Interaction in Molecular-, Nano-, Micro-, and Macro-systems in Complex Environments", Ed.: N. Russo, D. R. Salahub and M. Witko, Kluwer Academic Publishers, Dordrecht (2003). [2] P. Langfelder, S. M. Rothstein and J. Vrbik, J. Chem. Phys. 107, 8525 (1997). [3] A. Sarsa, F. J. Gálvez and E. Buendía, J. Chem. Phys. 109, 7075 (1998). [4] R. Assaraf and M. Caffarel, Phys. Rev. Lett. 83, 4682 (1999).

Toulouse, Julien; Assaraf, Roland; Umrigar, Cyrus

2006-03-01

306

Singular value decomposition and density estimation for filtering and analysis of gene expression  

SciTech Connect

We present three algorithms for gene expression analysis. Algorithm 1, known as the serial correlation test, is used for filtering out noisy gene expression profiles. Algorithms 2 and 3 project the gene expression profiles into 2-dimensional expression subspaces identified by Singular Value Decomposition. Density estimates are used to determine expression profiles that have a high correlation with the subspace and low levels of noise. High-density regions in the projection, clusters of co-expressed genes, are identified. We illustrate the algorithms by application to the yeast cell-cycle data of Cho et al. and comparison of the results.

Rechtsteiner, A. (Andreas); Gottardo, R. (Raphael); Rocha, L. M. (Luis Mateus); Wall, M. E. (Michael E.)

2003-01-01

307

Estimation of the local density of states on a quantum computer  

SciTech Connect

We report an efficient quantum algorithm for estimating the local density of states (LDOS) on a quantum computer. The LDOS describes the redistribution of energy levels of a quantum system under the influence of a perturbation. Sometimes known as the 'strength function' from nuclear spectroscopy experiments, the shape of the LDOS is directly related to the survival probability of unperturbed eigenstates, and has recently been related to the fidelity decay (or 'Loschmidt echo') under imperfect motion reversal. For quantum systems that can be simulated efficiently on a quantum computer, the LDOS estimation algorithm enables an exponential speedup over direct classical computation.

Emerson, Joseph; Cory, David [Department of Nuclear Engineering, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139 (United States); Lloyd, Seth [Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139 (United States); Poulin, David [Institute for Quantum Computing, University of Waterloo, Waterloo, ON, N2L 3G1 (Canada)

2004-05-01

308

Haar wavelet in estimating depth profile of soil temperature  

Microsoft Academic Search

A Haar wavelet-based method for the estimation of soil temperature at different depths is described in this paper. Diurnal variation in the hourly soil temperature is estimated at depths varying from 0 to 45 cm. This estimation is compared with observed data available for the Trombay site at depths of 5, 10, and 20 cm. This Haar technique can be interpreted

G. Hariharan; K. Kannan; Kal Renganathan Sharma

2009-01-01

309

Targeted maximum likelihood estimation for marginal time-dependent treatment effects under density misspecification.  

PubMed

Targeted maximum likelihood methods have been proposed to estimate treatment effects for longitudinal data in the presence of time-dependent confounders. This class of methods has been mathematically proven to be doubly robust and to optimize the asymptotic estimation efficiency among the class of regular, semi-parametric estimators when all estimated density components are correctly specified. We show that methods previously proposed to build a one-step estimator with a logistic loss function generalize to a generalized linear loss function, and so may be applied naturally to an outcome that can be described by any exponential family member. We evaluate several methods for estimating unstructured marginal treatment effects for data with two time intervals in a simulation study, showing that these estimators have competitively low bias and variance in an array of misspecified situations, and can be made to perform well under near-positivity violations. We apply the methods to the PROmotion of Breastfeeding Intervention Trial data, demonstrating that longer term breastfeeding can protect infants from gastrointestinal infection. PMID:22797173

Schnitzer, Mireille E; Moodie, Erica E M; Platt, Robert W

2012-07-12

310

Uncertainty quantification techniques for population density estimates derived from sparse open source data  

NASA Astrophysics Data System (ADS)

The Population Density Tables (PDT) project at Oak Ridge National Laboratory (www.ornl.gov) is developing population density estimates for specific human activities under normal patterns of life based largely on information available in open source. Currently, activity-based density estimates are based on simple summary data statistics such as range and mean. Researchers are interested in improving activity estimation and uncertainty quantification by adopting a Bayesian framework that considers both data and sociocultural knowledge. Under a Bayesian approach, knowledge about population density may be encoded through the process of expert elicitation. Due to the scale of the PDT effort, which considers over 250 countries, spans 50 human activity categories, and includes numerous contributors, an elicitation tool is required that can be operationalized within an enterprise data collection and reporting system. Such a method would ideally require that the contributor have minimal statistical knowledge, require minimal input by a statistician or facilitator, consider human difficulties in expressing qualitative knowledge in a quantitative setting, and provide methods by which the contributor can appraise whether their understanding and associated uncertainty was well captured. This paper introduces an algorithm that transforms answers to simple, non-statistical questions into a bivariate Gaussian distribution as the prior for the Beta distribution. Based on geometric properties of the Beta distribution parameter feasibility space and the bivariate Gaussian distribution, an automated method for encoding is developed that responds to these challenging enterprise requirements. Though created within the context of population density, this approach may be applicable to a wide array of problem domains requiring informative priors for the Beta distribution.
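
The paper's algorithm maps the answers onto a bivariate Gaussian prior for the Beta parameters; as a much simpler illustration of the basic idea of encoding elicited knowledge as a Beta distribution, the method-of-moments sketch below turns a stated typical proportion m and spread s into Beta(a, b) parameters. The function and the numbers are our own, not the PDT algorithm.

    def beta_from_mean_sd(m, s):
        """Method-of-moments Beta(a, b) from an elicited mean m and sd s.

        Illustrative only; requires s**2 < m * (1 - m).
        """
        if not (0.0 < m < 1.0) or s <= 0.0 or s * s >= m * (1.0 - m):
            raise ValueError("need 0 < m < 1 and s^2 < m(1 - m)")
        nu = m * (1.0 - m) / (s * s) - 1.0  # effective prior sample size
        return m * nu, (1.0 - m) * nu

    # e.g. 'roughly 15% of the population, give or take 5 points'
    a, b = beta_from_mean_sd(0.15, 0.05)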

Stewart, Robert; White, Devin; Urban, Marie; Morton, April; Webster, Clayton; Stoyanov, Miroslav; Bright, Eddie; Bhaduri, Budhendra L.

2013-05-01

311

Estimation of high-resolution dust column density maps. Empirical model fits  

NASA Astrophysics Data System (ADS)

Context. Sub-millimetre dust emission is an important tracer of column density N of dense interstellar clouds. One has to combine surface brightness information at different spatial resolutions, and specific methods are needed to derive N at a resolution higher than the lowest resolution of the observations. Some methods have been discussed in the literature, including a method (in the following, method B) that constructs the N estimate in stages, where the smallest spatial scales derived use only the shortest-wavelength maps. Aims: We propose simple model fitting as a flexible way to estimate high-resolution column density maps. Our goal is to evaluate the accuracy of this procedure and to determine whether it is a viable alternative for making these maps. Methods: The new method consists of model maps of column density (or intensity at a reference wavelength) and colour temperature. The model is fitted using Markov chain Monte Carlo methods, comparing model predictions with observations at their native resolution. We analyse simulated surface brightness maps and compare the accuracy of the new method with that of method B and with the results that would be obtained using high-resolution observations without noise. Results: The new method is able to produce reliable column density estimates at a resolution significantly higher than the lowest resolution of the input maps. Compared to method B, it is relatively resilient against the effects of noise. The method is computationally more demanding, but is feasible even in the analysis of large Herschel maps. Conclusions: The proposed empirical modelling method E is demonstrated to be a good alternative for calculating high-resolution column density maps, even with considerable super-resolution. Both methods E and B include the potential for further improvements, e.g., in the form of better a priori constraints.
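
A minimal sketch of the forward model such a fit rests on (illustrative only; the paper fits per-pixel column density and colour temperature maps with MCMC, and the opacity normalization below is a placeholder): predict each band's surface brightness as a modified blackbody, then convolve it to that band's native resolution before comparison with the data.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    H, K_B, C = 6.626e-34, 1.381e-23, 2.998e8  # SI constants

    def planck(nu, T):
        """Planck function B_nu(T) in SI units."""
        return 2.0 * H * nu**3 / C**2 / np.expm1(H * nu / (K_B * T))

    def predict_band(N, T, nu, beta=2.0, fwhm_pix=3.0, kappa0=1.0, nu0=1.0e12):
        """Modified blackbody N * kappa(nu) * B_nu(T), smoothed to the band's
        resolution; kappa0 and nu0 are placeholder opacity parameters."""
        S = N * kappa0 * (nu / nu0) ** beta * planck(nu, T)
        return gaussian_filter(S, sigma=fwhm_pix / 2.355)

A fit then adjusts the high-resolution N and T maps until predict_band reproduces each observed band at that band's own resolution.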

Juvela, M.; Montillaud, J.

2013-09-01

312

Georadar-derived estimates of firn density in the percolation zone, western Greenland ice sheet  

NASA Astrophysics Data System (ADS)

Greater understanding of variations in firn densification is needed to distinguish between dynamic and melt-driven elevation changes on the Greenland ice sheet. This is especially true in Greenland's percolation zone, where firn density profiles are poorly documented because few ice cores are extracted in regions with surface melt. We used georadar to investigate firn density variations with depth along a ~70 km transect through a portion of the accumulation area in western Greenland that partially melts. We estimated electromagnetic wave velocity by inverting reflection traveltimes picked from common midpoint gathers. We followed a procedure designed to find the simplest velocity versus depth model that describes the data within estimated uncertainty. On the basis of the velocities, we estimated 13 depth-density profiles of the upper 80 m using a petrophysical model based on the complex refractive index method equation. At the highest elevation site, our density profile is consistent with nearby core data acquired in the same year. Our profiles at the six highest elevation sites match an empirically based densification model for dry firn, indicating relatively minor amounts of water infiltration and densification by melt and refreeze in this higher region of the percolation zone. At the four lowest elevation sites our profiles reach ice densities at substantially shallower depths, implying considerable meltwater infiltration and ice layer development in this lower region of the percolation zone. The separation between these two regions is 8 km and spans 60 m of elevation, which suggests that the balance between dry-firn and melt-induced densification processes is sensitive to minor changes in melt.
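
The complex refractive index method (CRIM) equation mixes the square roots of the component permittivities by volume fraction. For dry firn with ice volume fraction \rho/\rho_{\rm ice} (neglecting any liquid water), a simplified two-phase form is

    \sqrt{\varepsilon} = \frac{\rho}{\rho_{\rm ice}} \sqrt{\varepsilon_{\rm ice}} + \left(1 - \frac{\rho}{\rho_{\rm ice}}\right), \qquad \varepsilon = (c/v)^2,

so the radar-derived wave velocity v gives \varepsilon and hence a density estimate; the petrophysical model used in the study is a fuller variant of this relation.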

Brown, Joel; Bradford, John; Harper, Joel; Pfeffer, W. Tad; Humphrey, Neil; Mosley-Thompson, Ellen

2012-01-01

313

Use of spatial capture-recapture modeling and DNA data to estimate densities of elusive animals  

USGS Publications Warehouse

Assessment of abundance, survival, recruitment rates, and density (i.e., population assessment) is especially challenging for elusive species most in need of protection (e.g., rare carnivores). Individual identification methods, such as DNA sampling, provide ways of studying such species efficiently and noninvasively. Additionally, statistical methods that correct for undetected animals and account for locations where animals are captured are available to efficiently estimate density and other demographic parameters. We collected hair samples of European wildcat (Felis silvestris) from cheek-rub lure sticks, extracted DNA from the samples, and identified each animal's genotype. To estimate the density of wildcats, we used Bayesian inference in a spatial capture-recapture model. We used WinBUGS to fit a model that accounted for differences in detection probability among individuals and seasons and between two lure arrays. We detected 21 individual wildcats (including possible hybrids) 47 times. Wildcat density was estimated at 0.29/km2 (SE 0.06), and 95% of the activity of wildcats was estimated to occur within 1.83 km from their home-range center. Lures located systematically were associated with a greater number of detections than lures placed in a cell on the basis of expert opinion. Detection probability of individual cats was greatest in late March. Our model is a generalized linear mixed model; hence, it can be easily extended, for instance, to incorporate trap- and individual-level covariates. We believe that the combined use of noninvasive sampling techniques and spatial capture-recapture models will improve population assessments, especially for rare and elusive animals.

Kery, Marc; Gardner, Beth; Stoeckle, Tabea; Weber, Darius; Royle, J. Andrew

2011-01-01

314

Integrated mean square properties of density estimation by orthogonal series methods for dependent variables  

Microsoft Academic Search

Summary: The rates at which the integrated mean square and mean square errors of nonparametric density estimation by the orthogonal series method converge are obtained for sequences of strictly stationary strong mixing random variables. These rates are better than those known to hold for the independent case and they are shown to hold for Markov processes. In fact our results when specialized to the

Ibrahim A. Ahmad

1982-01-01

315

Estimation of Boar Sperm Status Using Intracellular Density Distribution in Grey Level Images  

Microsoft Academic Search

In this work we review three methods proposed to estimate the fraction of live sperm cells in boar semen samples. Images of semen samples are acquired, preprocessed and segmented in order to obtain images of single sperm heads. A model of intracellular density distribution characteristic of live cells is computed by averaging a set of images of cells assumed to

Lidia Sánchez; Nicolai Petkov

2009-01-01

316

Data-Driven Bandwidth Choice for Density Estimation Based on Dependent Data  

Microsoft Academic Search

The bandwidth selection problem in kernel density estimation is investigated in situations where the observed data are dependent. The classical leave-out technique is extended, and thereby a class of cross-validated bandwidths is defined. These bandwidths are shown to be asymptotically optimal under a strong mixing condition. The leave-one-out, or ordinary, form of cross-validation remains asymptotically optimal under the dependence

Jeffrey D. Hart; Philippe Vieu

1990-01-01

317

Estimating the Nucleus Bulk Density of Comet 81P/Wild 2  

NASA Astrophysics Data System (ADS)

During the successful NASA Stardust flyby of Comet 81P/Wild 2 in January 2004, a wealth of information about the object was collected. To further deepen our knowledge about this comet, modeling of its non-gravitational force has been performed in order to estimate the nucleus bulk density. The nucleus is modeled as a triaxial ellipsoid, covered by active and inactive regions, having the dimensions and spin axis orientation as measured during the flyby. The bulk density is estimated by requiring that a model nucleus simultaneously must reproduce the empirical water production rate (Q(H2O)) and non-gravitational changes in the orbital period and the longitude of perihelion. For a postulated rotational period of 12 hours, using a spin axis obliquity and argument of 124° and 333°, respectively [Sekanina et al. 2004, Science vol. 304, pp. 1769], we find a bulk density of at most 360 kg m-3, although nominal Q(H2O) measurements are poorly reproduced. For the reversed spin axis orientation, a very good Q(H2O) reproduction can be made, resulting in a density of 380-600 kg m-3 (the upper limit increases to 760 kg m-3 if worse Q(H2O) fits are considered). The dependence of these results on the applied thermophysical model and rotational period is discussed. This research has been conducted with financial support from the European Space Agency (ESA).

Davidsson, B. J. R.; Gutierrez, P. J.

2004-11-01

318

Fracture density estimation from petrophysical log data using the adaptive neuro-fuzzy inference system  

NASA Astrophysics Data System (ADS)

Fractures, as the most common and important geological features, have a significant share in reservoir fluid flow. Therefore, fracture detection is one of the important steps in fractured reservoir characterization. Different tools and methods have been introduced for fracture detection, among which formation image logs are considered the most common and effective tools. Due to economic considerations, image logs are available for only a limited number of wells in a hydrocarbon field. In this paper, we suggest a model to estimate fracture density from conventional well logs using an adaptive neuro-fuzzy inference system. Image logs from two wells of the Asmari formation in one of the SW Iranian oil fields are used to verify the results of the model. Statistical data analysis indicates good correlation between fracture density and well log data including sonic, deep resistivity, neutron porosity and bulk density. The results of this study show that there is good agreement (correlation coefficient of 98%) between the measured and neuro-fuzzy estimated fracture density.

Ja'fari, Ahmad; Kadkhodaie-Ilkhchi, Ali; Sharghi, Yoosef; Ghanavati, Kiarash

2012-02-01

319

Testing and Estimating Shape-Constrained Nonparametric Density and Regression in the Presence of Measurement Error.  

PubMed

In many applications we can expect that, or are interested to know if, a density function or a regression curve satisfies some specific shape constraints. For example, when the explanatory variable, X, represents the value taken by a treatment or dosage, the conditional mean of the response, Y , is often anticipated to be a monotone function of X. Indeed, if this regression mean is not monotone (in the appropriate direction) then the medical or commercial value of the treatment is likely to be significantly curtailed, at least for values of X that lie beyond the point at which monotonicity fails. In the case of a density, common shape constraints include log-concavity and unimodality. If we can correctly guess the shape of a curve, then nonparametric estimators can be improved by taking this information into account. Addressing such problems requires a method for testing the hypothesis that the curve of interest satisfies a shape constraint, and, if the conclusion of the test is positive, a technique for estimating the curve subject to the constraint. Nonparametric methodology for solving these problems already exists, but only in cases where the covariates are observed precisely. However in many problems, data can only be observed with measurement errors, and the methods employed in the error-free case typically do not carry over to this error context. In this paper we develop a novel approach to hypothesis testing and function estimation under shape constraints, which is valid in the context of measurement errors. Our method is based on tilting an estimator of the density or the regression mean until it satisfies the shape constraint, and we take as our test statistic the distance through which it is tilted. Bootstrap methods are used to calibrate the test. The constrained curve estimators that we develop are also based on tilting, and in that context our work has points of contact with methodology in the error-free case. PMID:21687809

Carroll, Raymond J; Delaigle, Aurore; Hall, Peter

2011-03-01

320

Analysis of Residence Time in Shopping Using RFID Data -- An Application of the Kernel Density Estimation to RFID  

Microsoft Academic Search

This study shows a method of determining and visualizing the existence probability of customers from shopping-path data in supermarkets using a database collected by an RFID (Radio Frequency Identification) technique, which allows us to analyze the detailed behaviors of customers. First, we present a method to estimate the customer existence probability density on the sales floor using kernel density estimation.
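
A minimal sketch of the kind of estimate described: a two-dimensional Gaussian kernel density over logged customer positions on the sales floor. The coordinates are invented, and scipy's gaussian_kde (Scott's-rule bandwidth by default) stands in for whatever kernel and bandwidth the study actually used.

    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(1)
    # (x, y) positions logged by RFID tags, in floor coordinates (toy data)
    xy = rng.normal(loc=[[5.0], [3.0]], scale=1.5, size=(2, 500))

    kde = gaussian_kde(xy)
    xx, yy = np.mgrid[0:10:100j, 0:6:60j]  # evaluation grid over the floor
    density = kde(np.vstack([xx.ravel(), yy.ravel()])).reshape(xx.shape)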

Shinya Miyazaki; Takashi Washio; Katsutoshi Yada

2011-01-01

321

Use of forest inventories and geographic information systems to estimate biomass density of tropical forests: Application to tropical Africa  

Microsoft Academic Search

One of the most important databases needed for estimating emissions of carbon dioxide resulting from changes in the cover, use, and management of tropical forests is the total quantity of biomass per unit area, referred to as biomass density. Forest inventories have been shown to be valuable sources of data for estimating biomass density, but inventories for the tropics are

S. Brown; G. Gaston

1995-01-01

322

Estimated carbon dioxide emissions from tropical deforestation improved by carbon-density maps  

NASA Astrophysics Data System (ADS)

Deforestation contributes 6-17% of global anthropogenic CO2 emissions to the atmosphere. Large uncertainties in emission estimates arise from inadequate data on the carbon density of forests and the regional rates of deforestation. Consequently there is an urgent need for improved data sets that characterize the global distribution of aboveground biomass, especially in the tropics. Here we use multi-sensor satellite data to estimate aboveground live woody vegetation carbon density for pan-tropical ecosystems with unprecedented accuracy and spatial resolution. Results indicate that the total amount of carbon held in tropical woody vegetation is 228.7 Pg C, which is 21% higher than the amount reported in the Global Forest Resources Assessment 2010. At the national level, Brazil and Indonesia contain 35% of the total carbon stored in tropical forests and produce the largest emissions from forest loss. Combining estimates of aboveground carbon stocks with regional deforestation rates, we estimate the total net emission of carbon from tropical deforestation and land use to be 1.0 Pg C yr-1 over the period 2000-2010, based on the carbon bookkeeping model. These new data sets of aboveground carbon stocks will enable tropical nations to meet their emissions reporting requirements (that is, United Nations Framework Convention on Climate Change Tier 3) with greater accuracy.

Baccini, A.; Goetz, S. J.; Walker, W. S.; Laporte, N. T.; Sun, M.; Sulla-Menashe, D.; Hackler, J.; Beck, P. S. A.; Dubayah, R.; Friedl, M. A.; Samanta, S.; Houghton, R. A.

2012-03-01

323

Integrated Bayesian Estimation of Zeff in the TEXTOR Tokamak from Bremsstrahlung and CX Impurity Density Measurements  

SciTech Connect

The validation of diagnostic data from a nuclear fusion experiment is an important issue. The concept of an Integrated Data Analysis (IDA) allows the consistent estimation of plasma parameters from heterogeneous data sets. Here, the determination of the ion effective charge (Zeff) is considered. Several diagnostic methods exist for the determination of Zeff, but the results are in general not in agreement. In this work, the problem of Zeff estimation on the TEXTOR tokamak is approached from the perspective of IDA, in the framework of Bayesian probability theory. The ultimate goal is the estimation of a full Zeff profile that is consistent both with measured bremsstrahlung emissivities, as well as individual impurity spectral line intensities obtained from Charge Exchange Recombination Spectroscopy (CXRS). We present an overview of the various uncertainties that enter the calculation of a Zeff profile from bremsstrahlung data on the one hand, and line intensity data on the other hand. We discuss a simple linear and nonlinear Bayesian model permitting the estimation of a central value for Zeff and the electron density ne on TEXTOR from bremsstrahlung emissivity measurements in the visible, and carbon densities derived from CXRS. Both the central Zeff and ne are sampled using an MCMC algorithm. An outlook is given towards possible model improvements.
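
For reference, the ion effective charge is defined over all ion species as

    Z_{\rm eff} = \frac{\sum_i n_i Z_i^2}{n_e}, \qquad n_e = \sum_i n_i Z_i,

and the visible bremsstrahlung emissivity scales approximately as \varepsilon \propto n_e^2 \, Z_{\rm eff} \, \bar{g} / \sqrt{T_e} (with \bar{g} the Gaunt factor), which is why Z_{\rm eff} and n_e must be inferred jointly from the emissivity data, as done here.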

Verdoolaege, G.; Oost, G. van [Department of Applied Physics, Ghent University, Rozier 44, 9000 Gent (Belgium); Hellermann, M. G. von; Jaspers, R. [FOM-Institute for Plasma Physics Rijnhuizen, Association EURATOM-FOM, PO Box 1207, 3430 BE Nieuwegein (Netherlands); Ichir, M. M. [Laboratoire des Signaux et Systemes, Unite mixte de recherche 8506 (CNRS-Supelec-UPS), Supelec, Plateau de Moulon, 91192 Gif-sur-Yvette (France)

2006-11-29

324

Performance and Decoder Complexity Estimates for Families of Low-Density Parity-Check Codes  

NASA Astrophysics Data System (ADS)

We present methods to estimate code performance and decoder complexity from the code rate, block size, and word-error rate, for families of related low-density parity-check (LDPC) codes. Performance estimates are generally within a couple of tenths of a decibel of results determined by simulation; estimates of complexity (and hence decoder speed) are generally within 10 percent. Experimental data show that there is a trade-off between complexity and code performance determined by the design of the LDPC code, and that each 1 dB (26 percent) of increased complexity is worth about a 0.1-dB reduction in the required signal-to-noise ratio.

Dolinar, S.; Andrews, K.

2007-02-01

325

[Krigle estimation and its simulated sampling of Chilo suppressalis population density].  

PubMed

In order to draw up a rational sampling plan for the larval population of Chilo suppressalis, an original population and its two derivative populations, a random population and a sequence population, were sampled and compared using random sampling, gap-range-random sampling, and a new systematic sampling that integrated Krigle interpolation with a random origin position. For the original population, whose distribution was aggregated with a dependence range of 115 cm (6.9 units) in the line direction, gap-range-random sampling in the line direction was more precise than random sampling. Distinguishing the population pattern correctly is the key to obtaining better precision. Gap-range-random sampling and random sampling are suited to aggregated and random populations, respectively, but both are difficult to apply in practice. Therefore, a new systematic sampling scheme, named the Krigle sample (n = 441), was developed to estimate the density of a partial sample (partial estimation, n = 441) and of the population (overall estimation, N = 1500). For the original population, the estimation precision of the Krigle sample for both the partial sample and the population was better than that of the investigation sample. As the aggregation intensity of the population increased, the Krigle sample was more effective than the investigation sample in both partial and overall estimation, given an appropriate sampling gap chosen according to the dependence range. PMID:15506091

Yuan, Zheming; Bai, Lianyang; Wang, Kuiwu; Hu, Xiangyue

2004-07-01

326

Estimation of scattering phase function utilizing laser Doppler power density spectra  

NASA Astrophysics Data System (ADS)

A new method for the estimation of the light scattering phase function of particles is presented. The method allows us to measure the light scattering phase function of particles of any shape in the full angular range (0°-180°) and is based on the analysis of laser Doppler (LD) power density spectra. The theoretical background of the method and results of its validation using data from Monte Carlo simulations will be presented. For the estimation of the scattering phase function, a phantom measurement setup is proposed containing an LD measurement system and a simple model in which a liquid sample flows through a glass tube fixed in an optically turbid material. The scattering phase function estimation error was thoroughly investigated in relation to the light scattering anisotropy factor g. The error of g estimation is lower than 10% for anisotropy factors larger than 0.5 and decreases as the anisotropy factor increases (e.g. for g = 0.98, the error of estimation is 0.01%). The analysis of the influence of noise in the measured LD spectrum showed that the g estimation error is lower than 1% for signal-to-noise ratios higher than 50 dB.

Wojtkiewicz, S.; Liebert, A.; Rix, H.; Sawosz, P.; Maniewski, R.

2013-02-01

327

Estimates of density, detection probability, and factors influencing detection of burrowing owls in the Mojave Desert  

USGS Publications Warehouse

We estimated relative abundance and density of Western Burrowing Owls (Athene cunicularia hypugaea) at two sites in the Mojave Desert (2003-2004). We made modifications to previously established Burrowing Owl survey techniques for use in desert shrublands and evaluated several factors that might influence the detection of owls. We tested the effectiveness of the call-broadcast technique for surveying this species, the efficiency of this technique at early and late breeding stages, and the effectiveness of various numbers of vocalization intervals during broadcasting sessions. Only 1 (3%) of 31 initial (new) owl responses was detected during passive-listening sessions. We found that surveying early in the nesting season was more likely to produce new owl detections compared to surveying later in the nesting season. New owls detected during each of the three vocalization intervals (each consisting of 30 sec of vocalizations followed by 30 sec of silence) of our broadcasting session were similar (37%, 40%, and 23%; n = 30). We used a combination of detection trials (sighting probability) and the double-observer method to estimate the components of detection probability, i.e., availability and perception. Availability for all sites and years, as determined by detection trials, ranged from 46.1-58.2%. Relative abundance, measured as frequency of occurrence and defined as the proportion of surveys with at least one owl, ranged from 19.2-32.0% for both sites and years. Density at our eastern Mojave Desert site was estimated at 0.09 ± 0.01 (SE) owl territories/km2 and 0.16 ± 0.02 (SE) owl territories/km2 during 2003 and 2004, respectively. In our southern Mojave Desert site, density estimates were 0.09 ± 0.02 (SE) owl territories/km2 and 0.08 ± 0.02 (SE) owl territories/km2 during 2004 and 2005, respectively. © 2010 The Raptor Research Foundation, Inc.

Crowe, D. E.; Longshore, K. M.

2010-01-01

328

A strategy for analysis of (molecular) equilibrium simulations: Configuration space density estimation, clustering, and visualization  

NASA Astrophysics Data System (ADS)

We propose an approach for summarizing the output of long simulations of complex systems, affording a rapid overview and interpretation. First, multidimensional scaling techniques are used in conjunction with dimension reduction methods to obtain a low-dimensional representation of the configuration space explored by the system. A nonparametric estimate of the density of states in this subspace is then obtained using kernel methods. The free energy surface is calculated from that density, and the configurations produced in the simulation are then clustered according to the topography of that surface, such that all configurations belonging to one local free energy minimum form one class. This topographical cluster analysis is performed using basin spanning trees which we introduce as subgraphs of Delaunay triangulations. Free energy surfaces obtained in dimensions lower than four can be visualized directly using iso-contours and -surfaces. Basin spanning trees also afford a glimpse of higher-dimensional topographies. The procedure is illustrated using molecular dynamics simulations on the reversible folding of peptide analogues. Finally, we emphasize the intimate relation of density estimation techniques to modern enhanced sampling algorithms.

Hamprecht, Fred A.; Peter, Christine; Daura, Xavier; Thiel, Walter; van Gunsteren, Wilfred F.

2001-02-01

329

Nonparametric density estimation and optimal bandwidth selection for protein unfolding and unbinding data  

NASA Astrophysics Data System (ADS)

Dynamic force spectroscopy and steered molecular simulations have become powerful tools for analyzing the mechanical properties of proteins, and the strength of protein-protein complexes and aggregates. Probability density functions of the unfolding forces and unfolding times for proteins, and rupture forces and bond lifetimes for protein-protein complexes allow quantification of the forced unfolding and unbinding transitions, and mapping the biomolecular free energy landscape. The inference of the unknown probability distribution functions from the experimental and simulated forced unfolding and unbinding data, as well as the assessment of analytically tractable models of the protein unfolding and unbinding, requires the use of a bandwidth. The choice of this quantity is typically subjective as it draws heavily on the investigator's intuition and past experience. We describe several approaches for selecting the 'optimal bandwidth' for nonparametric density estimators, such as the traditionally used histogram and the more advanced kernel density estimators. The performance of these methods is tested on unimodal and multimodal, skewed, long-tailed data, as typically observed in force spectroscopy experiments and in molecular pulling simulations. The results of these studies can serve as a guideline for selecting the optimal bandwidth to resolve the underlying distributions from the forced unfolding and unbinding data for proteins.
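
Two of the standard recipes compared in studies like this, sketched for a one-dimensional sample: Silverman's reference rule and a brute-force leave-one-out likelihood scan. This illustrates the general techniques, not the paper's exact protocol; the toy data and the bandwidth grid are our own.

    import numpy as np

    def silverman_bw(x):
        """Silverman's rule of thumb for a 1D Gaussian kernel."""
        iqr = np.percentile(x, 75) - np.percentile(x, 25)
        sigma = min(x.std(ddof=1), iqr / 1.349)
        return 0.9 * sigma * x.size ** (-0.2)

    def loo_log_likelihood(x, h):
        """Leave-one-out log-likelihood of a Gaussian KDE with bandwidth h."""
        d2 = (x[:, None] - x[None, :]) ** 2
        K = np.exp(-d2 / (2.0 * h * h)) / (h * np.sqrt(2.0 * np.pi))
        np.fill_diagonal(K, 0.0)  # exclude each point from its own estimate
        return np.sum(np.log(K.sum(axis=1) / (x.size - 1)))

    x = np.random.default_rng(2).gumbel(size=300)  # skewed, long-tailed toy data
    hs = np.linspace(0.05, 1.0, 40)
    h_cv = hs[np.argmax([loo_log_likelihood(x, h) for h in hs])]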

Bura, E.; Zhmurov, A.; Barsegov, V.

2009-01-01

330

A wavelet-based spectral analysis of long-term time series of optical properties of aerosols obtained by lidar and radiometer measurements over an urban station in Western India  

NASA Astrophysics Data System (ADS)

Over 700 weekly-spaced vertical profiles of aerosol number density have been archived during a 14-year period (October 1986-September 2000) using a bi-static Argon ion lidar system at the Indian Institute of Tropical Meteorology, Pune (18°43'N, 73°51'E, 559 m above mean sea level), India. The monthly resolved time series of aerosol distributions within the atmospheric boundary layer as well as at different altitudes aloft have been subjected to wavelet-based spectral analysis to investigate the different characteristic periodicities present in the long-term dataset. The solar radiometric aerosol optical depth (AOD) measurements over the same place during 1998-2003 have also been analyzed with the wavelet technique. Wavelet spectra of both time series exhibited quasi-annual (around 12-14 months) and quasi-biennial (around 22-25 months) oscillations at a statistically significant level. An overview of the lidar and radiometric data sets, including the wavelet-based spectral analysis procedure, is also presented. A brief statistical analysis concerning both annual and interannual variability of lidar- and radiometer-derived aerosol distributions has been performed to delineate the effect of the different dominant seasons and associated meteorological conditions prevailing over the experimental site in Western India. Additionally, the impact of urbanization on the long-term trends in the lidar measurements of aerosol loadings over the experimental site is brought out. This was achieved by using the lidar observations and a preliminary data set built for inferring the urban aspects of the city of Pune, which included population and the numbers of industries, vehicles, etc., in the city.

Pal, S.; Devara, P. C. S.

2012-08-01

331

Density-based load estimation using two-dimensional finite element models: a parametric study.  

PubMed

A parametric investigation was conducted to determine the effects on the load estimation method of varying: (1) the thickness of back-plates used in the two-dimensional finite element models of long bones, (2) the number of columns of nodes in the outer medial and lateral sections of the diaphysis to which the back-plate multipoint constraints are applied and (3) the region of bone used in the optimization procedure of the density-based load estimation technique. The study is performed using two-dimensional finite element models of the proximal femora of a chimpanzee, gorilla, lion and grizzly bear. It is shown that the density-based load estimation can be made more efficient and accurate by restricting the stimulus optimization region to the metaphysis/epiphysis. In addition, a simple method, based on the variation of diaphyseal cortical thickness, is developed for assigning the thickness to the back-plate. It is also shown that the number of columns of nodes used as multipoint constraints does not have a significant effect on the method. PMID:17132530

Bona, Max A; Martin, Larry D; Fischer, Kenneth J

2006-08-01

332

The Effects of Surfactants on the Estimation of Bacterial Density in Petroleum Samples  

NASA Astrophysics Data System (ADS)

The effect of the surfactants polyoxyethylene monostearate (Tween 60), polyoxyethylene monooleate (Tween 80), cetyl trimethyl ammonium bromide (CTAB), and sodium dodecyl sulfate (SDS) on the estimation of bacterial density (sulfate-reducing bacteria [SRB] and general anaerobic bacteria [GAnB]) was examined in petroleum samples. Three different compositions of oil and water were selected to be representative of real samples. The first contained a high content of oil, the second a medium content of oil, and the last a low content of oil. The most probable number (MPN) method was used to estimate the bacterial density. The results showed that the addition of surfactants did not improve SRB quantification for the high or medium oil content petroleum samples. On the other hand, Tween 60 and Tween 80 promoted a significant increase in GAnB quantification at 0.01% or 0.03% m/v concentrations, respectively. CTAB increased SRB and GAnB estimation for the sample with a low oil content at 0.00005% and 0.0001% m/v, respectively.

Luna, Aderval Severino; da Costa, Antonio Carlos Augusto; Gonçalves, Márcia Monteiro Machado; de Almeida, Kelly Yaeko Miyashiro

333

Kernel density estimation-based real-time prediction for respiratory motion  

NASA Astrophysics Data System (ADS)

Effective delivery of adaptive radiotherapy requires locating the target with high precision in real time. System latency caused by data acquisition, streaming, processing and delivery control necessitates prediction. Prediction is particularly challenging for highly mobile targets such as thoracic and abdominal tumors undergoing respiration-induced motion. The complexity of the respiratory motion makes it difficult to build and justify explicit models. In this study, we honor the intrinsic uncertainties in respiratory motion and propose a statistical treatment of the prediction problem. Instead of asking for a deterministic covariate-response map and a unique estimate value for future target position, we aim to obtain a distribution of the future target position (response variable) conditioned on the observed historical sample values (covariate variable). The key idea is to estimate the joint probability distribution (pdf) of the covariate and response variables using an efficient kernel density estimation method. Then, the problem of identifying the distribution of the future target position reduces to identifying the section in the joint pdf based on the observed covariate. Subsequently, estimators are derived based on this estimated conditional distribution. This probabilistic perspective has some distinctive advantages over existing deterministic schemes: (1) it is compatible with potentially inconsistent training samples, i.e., when close covariate variables correspond to dramatically different response values; (2) it is not restricted by any prior structural assumption on the map between the covariate and the response; (3) the two-stage setup allows much freedom in choosing statistical estimates and provides a full nonparametric description of the uncertainty for the resulting estimate. We evaluated the prediction performance on ten patient RPM traces, using the root mean squared difference between the prediction and the observed value normalized by the standard deviation of the observed data as the error metric. Furthermore, we compared the proposed method with two benchmark methods: most recent sample and an adaptive linear filter. The kernel density estimation-based prediction results demonstrate universally significant improvement over the alternatives and are especially valuable for long lookahead time, when the alternative methods fail to produce useful predictions.
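
A toy version of the two-stage idea, for a single covariate: estimate the joint pdf of (current position, future position) with a Gaussian KDE, then take the mean of the conditional slice as the point prediction. The sinusoidal trace, lookahead, and evaluation grid below are invented stand-ins for the RPM data and system latency; the real method uses longer covariate histories.

    import numpy as np
    from scipy.stats import gaussian_kde

    t = np.linspace(0.0, 60.0, 1800)  # toy breathing trace, ~4.2 s period
    pos = np.sin(2 * np.pi * t / 4.2)
    pos += 0.05 * np.random.default_rng(3).standard_normal(t.size)

    lookahead = 20  # samples of system latency to bridge
    cov, resp = pos[:-lookahead], pos[lookahead:]
    kde = gaussian_kde(np.vstack([cov, resp]))  # joint pdf estimate

    def predict(x, grid=np.linspace(-1.5, 1.5, 301)):
        """Conditional-mean estimate of the future position given covariate x."""
        w = kde(np.vstack([np.full_like(grid, x), grid]))  # slice of joint pdf
        return np.sum(grid * w) / np.sum(w)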

Ruan, Dan

2010-03-01

334

Kernel density estimation-based real-time prediction for respiratory motion.  

PubMed

Effective delivery of adaptive radiotherapy requires locating the target with high precision in real time. System latency caused by data acquisition, streaming, processing and delivery control necessitates prediction. Prediction is particularly challenging for highly mobile targets such as thoracic and abdominal tumors undergoing respiration-induced motion. The complexity of the respiratory motion makes it difficult to build and justify explicit models. In this study, we honor the intrinsic uncertainties in respiratory motion and propose a statistical treatment of the prediction problem. Instead of asking for a deterministic covariate-response map and a unique estimate value for future target position, we aim to obtain a distribution of the future target position (response variable) conditioned on the observed historical sample values (covariate variable). The key idea is to estimate the joint probability distribution (pdf) of the covariate and response variables using an efficient kernel density estimation method. Then, the problem of identifying the distribution of the future target position reduces to identifying the section in the joint pdf based on the observed covariate. Subsequently, estimators are derived based on this estimated conditional distribution. This probabilistic perspective has some distinctive advantages over existing deterministic schemes: (1) it is compatible with potentially inconsistent training samples, i.e., when close covariate variables correspond to dramatically different response values; (2) it is not restricted by any prior structural assumption on the map between the covariate and the response; (3) the two-stage setup allows much freedom in choosing statistical estimates and provides a full nonparametric description of the uncertainty for the resulting estimate. We evaluated the prediction performance on ten patient RPM traces, using the root mean squared difference between the prediction and the observed value normalized by the standard deviation of the observed data as the error metric. Furthermore, we compared the proposed method with two benchmark methods: most recent sample and an adaptive linear filter. The kernel density estimation-based prediction results demonstrate universally significant improvement over the alternatives and are especially valuable for long lookahead time, when the alternative methods fail to produce useful predictions. PMID:20134084

Ruan, Dan

2010-02-04

335

Estimating column density from ammonia (1,1) emission in star-forming regions  

NASA Astrophysics Data System (ADS)

We present a new, approximate method of calculating the column density of ammonia in mapping observations of the 23 GHz inversion lines. The temperature regime typically found in star-forming regions allows for the assumption of a slowly varying partition function for ammonia. It is therefore possible to determine the column density using only the (J=1, K=1) inversion transition rather than the typical combination of the (1,1) and (2,2) transitions, with additional uncertainties comparable to or less than typical observational error. The proposed method allows column density and mass estimates to be extended into areas of lower signal-to-noise ratio. We show examples of column-density maps around a number of cores in the W3 and Perseus star-forming regions made using this approximation, along with a comparison to the corresponding results obtained using the full two-transition approach. We suggest that this method is a useful tool in studying the distribution of mass around young stellar objects, particularly in the outskirts of the protostellar envelope where the (2,2) ammonia line is often undetectable on the short time-scales necessary for large-area mapping.

Morgan, L. K.; Moore, T. J. T.; Allsopp, J.; Eden, D. J.

2013-01-01

336

A Recursive Wavelet-based Strategy for Real-Time Cochlear Implant Speech Processing on PDA Platforms  

PubMed Central

This paper presents a wavelet-based speech coding strategy for cochlear implants. In addition, it describes the real-time implementation of this strategy on a PDA platform. Three wavelet packet decomposition tree structures are considered and their performance in terms of computational complexity, spectral leakage, fixed-point accuracy, and real-time processing are compared to other commonly used strategies in cochlear implants. A real-time mechanism is introduced for updating the wavelet coefficients recursively. It is shown that the proposed strategy achieves higher analysis rates than the existing strategies while being able to run in real-time on a PDA platform. In addition, it is shown that this strategy leads to a lower amount of spectral leakage. The PDA implementation is made interactive to allow users to easily manipulate the parameters involved and study their effects.
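
A rough sketch of the band-analysis step, assuming the PyWavelets package: decompose an audio frame into a wavelet packet tree, order the terminal nodes by frequency, and convert band energies into an n-of-m style channel selection. The recursive coefficient update and the PDA-specific engineering described above are not reproduced.

    import numpy as np
    import pywt  # PyWavelets

    frame = np.random.randn(512)                  # placeholder audio frame
    wp = pywt.WaveletPacket(data=frame, wavelet='db4', mode='symmetric', maxlevel=5)
    bands = wp.get_level(5, order='freq')         # frequency-ordered terminal nodes
    energies = np.array([np.sum(node.data ** 2) for node in bands])
    selected = np.argsort(energies)[-8:]          # stimulate the 8 strongest bands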

Gopalakrishna, Vanishree; Kehtarnavaz, Nasser; Loizou, Philipos C.

2011-01-01

337

A Wavelet-Based Algorithm for Delineation and Classification of Wave Patterns in Continuous Holter ECG Recordings  

PubMed Central

Quantitative analysis of the electrocardiogram (ECG) requires delineation and classification of the individual ECG wave patterns. We propose a wavelet-based waveform classifier that uses the fiducial points identified by a delineation algorithm. For validation of the algorithm, manually annotated ECG records from the QT database (Physionet) were used. ECG waveform classification accuracies were: 85.6% (P-wave), 89.7% (QRS complex), 92.8% (T-wave) and 76.9% (U-wave). The proposed classification method shows that it is possible to classify waveforms based on the points obtained during delineation. This approach can be used to automatically classify wave patterns in long-term ECG recordings such as 24-hour Holter recordings.

Johannesen, L; Grove, USL; Sørensen, JS; Schmidt, ML; Couderc, J-P; Graff, C

2011-01-01

338

Wavelet-based multiscale anisotropic diffusion in cone-beam CT breast imaging denoising for x-ray dose reduction  

NASA Astrophysics Data System (ADS)

The real-time flat panel detector-based cone beam CT breast imaging (FPD-CBCTBI) has attracted increasing attention for its merits of early detection of small breast cancerous tumors, 3-D diagnosis, and treatment planning with glandular dose levels not exceeding those of conventional film-screen mammography. In this research, our motivation is to further reduce the x-ray exposure level for the cone beam CT scan while retaining acceptable image quality for medical diagnosis by applying efficient denoising techniques. In this paper, the wavelet-based multiscale anisotropic diffusion algorithm is applied for cone beam CT breast imaging denoising. Experimental results demonstrate that the denoising algorithm is very efficient for cone beam CT breast imaging in terms of noise reduction and edge preservation. The denoising results indicate that in clinical applications of cone beam CT breast imaging, the patient's radiation dose can be reduced by up to 60% while obtaining acceptable image quality for diagnosis.
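
For reference, a compact sketch of the diffusion building block only (classic Perona-Malik), assuming numpy; the record's actual contribution is running such a scheme on wavelet subbands, which is not reproduced here. The parameter values are illustrative.

    import numpy as np

    def perona_malik(img, n_iter=20, kappa=0.1, dt=0.2):
        u = img.astype(float).copy()
        for _ in range(n_iter):
            # differences toward the four neighbours (periodic borders via roll)
            dn = np.roll(u, -1, axis=0) - u
            ds = np.roll(u, 1, axis=0) - u
            de = np.roll(u, -1, axis=1) - u
            dw = np.roll(u, 1, axis=1) - u
            # conductance shrinks across strong edges, so edges are preserved
            cn, cs = np.exp(-(dn / kappa) ** 2), np.exp(-(ds / kappa) ** 2)
            ce, cw = np.exp(-(de / kappa) ** 2), np.exp(-(dw / kappa) ** 2)
            u += dt * (cn * dn + cs * ds + ce * de + cw * dw)
        return u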

Zhong, Junmei; Ning, Ruola; Conover, David L.

2004-05-01

339

Simple method to estimate MOS oxide-trap, interface-trap, and border-trap densities  

SciTech Connect

Recent work has shown that near-interfacial oxide traps that communicate with the underlying Si ("border traps") can play a significant role in determining MOS radiation response and long-term reliability. Thermally stimulated current, 1/f noise, and frequency-dependent charge-pumping measurements have been used to estimate border-trap densities in MOS structures. These methods all require high-precision, low-noise measurements that are often difficult to perform and interpret. In this summary, we describe a new dual-transistor method to separate bulk-oxide-trap, interface-trap, and border-trap densities in irradiated MOS transistors that requires only standard threshold-voltage and high-frequency charge-pumping measurements.

Fleetwood, D.M.; Shaneyfelt, M.R.; Schwank, J.R.

1993-09-01

340

Wavelet series method for reconstruction and spectral estimation of laser Doppler velocimetry data  

NASA Astrophysics Data System (ADS)

Many techniques have been developed to obtain a spectral density function from randomly sampled data, such as the computation of a slotted autocovariance function. Nevertheless, one may be interested in obtaining more information from laser Doppler signals than spectral content, using more or less complex computations that can be easily conducted with an evenly sampled signal. That is the reason why reconstructing an evenly sampled signal from the original LDV data is of interest. The ability of a wavelet-based technique to reconstruct the signal while respecting the statistical properties of the original one is explored, and the spectral content of the reconstructed signal is given and compared with the estimated spectral density function obtained through the classical slotting technique. Furthermore, LDV signals taken from a screeching jet are reconstructed in order to perform spectral and bispectral analysis, showing the ability of the technique to recover accurate information with only a few LDV samples.
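
The slotting technique mentioned above can be sketched in a few lines, assuming numpy; products of sample pairs are averaged in lag slots of width dt, which tolerates the random sample times of LDV data. Names and the toy signal are illustrative.

    import numpy as np

    def slotted_autocovariance(t, x, n_slots, dt):
        x = x - x.mean()
        acov, counts = np.zeros(n_slots), np.zeros(n_slots)
        for i in range(len(t)):                           # t assumed sorted
            slot = np.round((t[i:] - t[i]) / dt).astype(int)
            ok = slot < n_slots
            np.add.at(acov, slot[ok], x[i] * x[i:][ok])   # accumulate products per slot
            np.add.at(counts, slot[ok], 1)
        return acov / np.maximum(counts, 1)

    t = np.sort(np.random.uniform(0, 10, 2000))           # random LDV-like sample times
    x = np.sin(2 * np.pi * 1.5 * t) + 0.2 * np.random.randn(t.size)
    acov = slotted_autocovariance(t, x, n_slots=100, dt=0.02)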

Jaunet, Vincent; Collin, Erwan; Bonnet, Jean-Paul

2012-01-01

341

New density estimation methods for charged particle beams with applications to microbunching instability  

NASA Astrophysics Data System (ADS)

In this paper we discuss representations of charged particle densities in particle-in-cell simulations, analyze the sources and profiles of the intrinsic numerical noise, and present efficient methods for their removal. We devise two alternative estimation methods for the charged particle distribution which represent a significant improvement over the Monte Carlo cosine expansion used in the 2D code of Bassi et al. [G. Bassi, J. A. Ellison, K. Heinemann, and R. Warnock, Phys. Rev. ST Accel. Beams 12, 080704 (2009); G. Bassi and B. Terzić, in Proceedings of the 23rd Particle Accelerator Conference, Vancouver, Canada, 2009 (IEEE, Piscataway, NJ, 2009), TH5PFP043], designed to simulate coherent synchrotron radiation (CSR) in charged particle beams. The improvement is achieved by employing an alternative beam density estimation to the Monte Carlo cosine expansion. The density is first binned onto a finite grid, after which two grid-based methods are employed to approximate particle distributions: (i) truncated fast cosine transform; and (ii) thresholded wavelet transform (TWT). We demonstrate that these alternative methods represent a staggering upgrade over the original Monte Carlo cosine expansion in terms of efficiency, while the TWT approximation also provides an appreciable improvement in accuracy. The improvement in accuracy comes from a judicious removal of the numerical noise enabled by the wavelet formulation. The TWT method is then integrated into the CSR code [G. Bassi, J. A. Ellison, K. Heinemann, and R. Warnock, Phys. Rev. ST Accel. Beams 12, 080704 (2009)], and benchmarked against the original version. We show that the new density estimation method provides superior performance in terms of efficiency and spatial resolution, thus enabling high-fidelity simulations of CSR effects, including microbunching instability.
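
A minimal sketch of the TWT step on a binned 2-D density, assuming PyWavelets; the median-based universal threshold below is a standard wavelet-denoising choice, not necessarily the thresholding rule of the paper.

    import numpy as np
    import pywt

    def twt_density(particles, grid=256, level=4, wavelet='db4'):
        # bin particle positions (normalized to the unit square) onto a finite grid
        hist, _, _ = np.histogram2d(particles[:, 0], particles[:, 1],
                                    bins=grid, range=[[0, 1], [0, 1]])
        coeffs = pywt.wavedec2(hist, wavelet, level=level)
        sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745   # noise scale, finest band
        thr = sigma * np.sqrt(2 * np.log(hist.size))         # universal threshold
        denoised = [coeffs[0]] + [tuple(pywt.threshold(d, thr, mode='soft')
                                        for d in band) for band in coeffs[1:]]
        return pywt.waverec2(denoised, wavelet)

    density = twt_density(np.random.rand(100000, 2))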

Terzić, Balša; Bassi, Gabriele

2011-07-01

342

A rapid, reliable method for uNK cell density estimation.  

PubMed

Interest in uterine NK cell density as a diagnostic or screening tool in reproductive disorders is growing. However, current methods of analysis are time consuming. In this study, 997 images from 100 women were analysed both by the currently used method combining manual and computer-aided counting, and by using colour splitting combined with area measurement as an estimate of positively stained cells. Good correlation was found between the two methods (r(2)=0.92), centred around the line of equality, with no systematic differences observed in Bland-Altman plots. The area measurement was significantly faster and thus is a useful screening method. PMID:23414625

Drury, Josephine A; Tang, Ai-Wei; Turner, Mark A; Quenby, Siobhan

2013-02-12

343

Transverse energy scaling and energy density estimates from ¹⁶O- and ³²S-induced reactions  

SciTech Connect

We discuss the dependence of transverse energy production on projectile mass, target mass, and on the impact parameter of the heavy ion reaction. The transverse energy is shown to scale with the number of participating nucleons. Various methods to estimate the attained energy density from the observed transverse energy are discussed. It is shown that the systematics of the energy density estimates suggest an average of 2-3 GeV/fm³ rather than the much higher values attained by assuming Landau-stopping initial conditions. Based on the observed scaling of the transverse energy, an initial energy density profile may be estimated. 11 refs., 4 figs.

Awes, T.C.; Albrecht, R.; Baktash, C.; Beckmann, P.; Berger, F.; Bock, R.; Claesson, G.; Clewing, G.; Dragon, L.; Eklund, A.

1989-01-01

344

Perceived 'healthiness' of foods can influence consumers' estimations of energy density and appropriate portion size.  

PubMed

OBJECTIVE: To compare portion size (PS) estimates, perceived energy density (ED) and anticipated consumption guilt (ACG) for healthier vs standard foods. METHODS: Three pairs of isoenergy-dense (kJ per 100 g) foods (healthier vs standard cereals, drinks and coleslaws) were selected. For each food, subjects served an appropriate PS for themselves and estimated its ED. Subjects also rated their ACG about eating the food on a scale of 1 (not at all guilty) to 5 (very guilty). RESULTS: Subjects (n=186) estimated larger portions of the healthier coleslaw than of the standard version, and perceived all healthier foods to be lower in ED than their standard alternatives, despite their being isoenergy dense. Higher ACG was associated with the standard foods. Portion estimates were generally larger than recommendations, and the ED of the foods was underestimated. CONCLUSIONS: The larger portions selected for the 'reduced fat' food, in association with lower perceived ED and ACG, suggest that such nutrition claims could be promoting inappropriate PS selection and consumption behaviour. Consumer education on appropriate portions is warranted to correct such misconceptions. International Journal of Obesity advance online publication, 4 June 2013; doi:10.1038/ijo.2013.69. PMID:23732657

Faulkner, G P; Pourshahidi, L K; Wallace, J M W; Kerr, M A; McCaffrey, T A; Livingstone, M B E

2013-05-07

345

Similarities between Line Fishing and Baited Stereo-Video Estimations of Length-Frequency: Novel Application of Kernel Density Estimates  

PubMed Central

Age structure data are essential for single-species stock assessments but length-frequency data can provide complementary information. In south-western Australia, the majority of these data for exploited species are derived from line-caught fish. However, baited remote underwater stereo-video systems (stereo-BRUVS) surveys have also been found to provide accurate length measurements. Given that line fishing tends to be biased towards larger fish, we predicted that stereo-BRUVS would yield length-frequency data with a smaller mean length and skewed towards smaller fish compared with data collected by fisheries-independent line fishing. To assess the biases and selectivity of stereo-BRUVS and line fishing, we compared the length-frequencies obtained for three commonly fished species, using a novel application of the Kernel Density Estimate (KDE) method and the established Kolmogorov-Smirnov (KS) test. The shape of the length-frequency distribution obtained for the labrid Choerodon rubescens by stereo-BRUVS and line fishing did not differ significantly but, as predicted, the mean length estimated from stereo-BRUVS was 17% smaller. Contrary to our predictions, the mean length and shape of the length-frequency distribution for the epinephelid Epinephelides armatus did not differ significantly between line fishing and stereo-BRUVS. For the sparid Pagrus auratus, the length-frequency distribution derived from the stereo-BRUVS method was bi-modal, while that from line fishing was uni-modal. However, the location of the first modal length class for P. auratus observed by each sampling method was similar. No differences were found between the results of the KS and KDE tests; however, KDE provided a data-driven method for approximating length-frequency data with a probability function and a useful way of describing and testing any differences between length-frequency samples. This study found the overall size selectivity of line fishing and stereo-BRUVS to be unexpectedly similar.
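
One simple way to compare two length-frequency samples with KDEs, assuming scipy; the overlap statistic below is an illustrative summary, not the resampling test used in the paper, and the fish lengths are simulated.

    import numpy as np
    from scipy.stats import gaussian_kde

    line_mm = np.random.normal(420, 60, 300)      # hypothetical line-caught lengths
    bruvs_mm = np.random.normal(350, 80, 500)     # hypothetical stereo-BRUVS lengths
    kde_line, kde_bruvs = gaussian_kde(line_mm), gaussian_kde(bruvs_mm)
    grid = np.linspace(150, 700, 512)
    dx = grid[1] - grid[0]
    # overlap of the two estimated pdfs: 1 = identical, 0 = disjoint
    overlap = np.minimum(kde_line(grid), kde_bruvs(grid)).sum() * dx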

Langlois, Timothy J.; Fitzpatrick, Benjamin R.; Fairclough, David V.; Wakefield, Corey B.; Hesp, S. Alex; McLean, Dianne L.; Harvey, Euan S.; Meeuwig, Jessica J.

2012-01-01

346

Nonparametric estimation of long-tailed density functions and its application to the analysis of World Wide Web traffic  

Microsoft Academic Search

The study of WWW-traffic measurements has shown that different traffic characteristics can be modeled by long-tail distributed random variables (r.v.s). In this paper we discuss the nonparametric estimation of the probability density function of long-tailed distributions. Two nonparametric estimates, a Parzen–Rosenblatt kernel estimate and a histogram with variable bin width called polygram, are considered. The consistency of these estimates for

Natalia M. Markovitch; Udo R. Krieger

2000-01-01

347

Topside Ionosphere Plasma bubbles seen as He+ Density Depletions: Estimations and Comparisons  

NASA Astrophysics Data System (ADS)

This study examines He+ density depletions, considered to originate from equatorial plasma bubbles. They are usually detected in the topside ionosphere (~1000 km), deep inside the plasmasphere (L~1.3-3) [1-3]. (a) Since there are questions about whether topside plasma bubbles can survive, the characteristic times of the main processes in which plasma bubbles are involved were compared. It was suggested that the plasma bubbles are produced by the Rayleigh-Taylor instability at the bottomside of the ionosphere and transported up to the topside ionosphere. It was found that it takes about 3-4 hours for plasma bubbles to reach topside ionosphere altitudes. Ambipolar diffusion transport was found to be the fastest process (a few minutes), while the estimated Bohm (cross-field) diffusion time shows that topside plasma bubbles can exist for up to 100 hours. It was concluded that there is enough time for the plasma bubbles to survive and to be detected (for example, in minor ion species inside the bubble, such as He+) at topside ionosphere altitudes. (b) Topside plasma bubbles are most easily detected as He+ density depletions during high and maximal solar activity. Favourable observation conditions arise because bubbles strongly depleted in He+ density, on reaching the topside ionosphere, contrast clearly with the He+ background layer, which is well developed in the topside ionosphere during high solar activity [4]. (c) He+ density depletions were considered in connection with equatorial F-region irregularities (EFI), equatorial spread F (ESF) and equatorial plasma bubbles (EPB). Their longitudinal statistics, calculated for all seasons and both hemispheres (20-50 deg. INVLAT), were compared with EFI statistics taken from AE-E [5], OGO-6 [6] and ROCSAT [7] observations. ESF and EPB statistics from [8, 9], based on ISS-b and Hinotori spacecraft data, were also used for comparison. The main statistical maxima of the equatorial F-region irregularities are reasonably well reflected in the statistical plots of the He+ density depletions for both hemispheres. The best agreement was obtained for the equinoxes and the worst for the solstices, when insolation differences between the hemispheres are most dramatic. Hence, it was validated once again that He+ density depletions may be considered an indicator of topside plasma bubble presence or fossil bubble signatures.

Sidorova, L.; Filippov, S.

2012-04-01

348

The subauroral electron density trough: Comparison between satellite observations and IRI-2007 model estimates  

NASA Astrophysics Data System (ADS)

We compare electron density predictions of the International Reference Ionosphere (IRI-2007) model with in situ measurements of the satellites CHAMP and GRACE for the years 2005 to 2010 over the subauroral regions. The electron density between 58° and 68° Mlat is considered. The trough-region Ne peaks during local summer and reaches its minimum during local winter. Around -100°E and 60°E, two sectors of enhanced electron density can be seen in both hemispheres during all three seasons, attributed to electron density extending from middle latitudes into the trough region. From 2005 to the beginning of 2010, the model overestimates the trough-region Ne by 20% on average, and the decrease of Ne in this region during the last solar minimum can also be seen. In the southern hemisphere, the model predictions are quite consistent with the observations during all three seasons, while the large difference between observations and model estimates implies that the IRI-2007 model needs significant improvement to better predict the trough region in the northern hemisphere.

Xiong, C.; Lühr, H.; Ma, S. Y.

2013-02-01

349

Integration of Self-Organizing Map (SOM) and Kernel Density Estimation (KDE) for network intrusion detection  

NASA Astrophysics Data System (ADS)

This paper proposes an approach to integrate the self-organizing map (SOM) and kernel density estimation (KDE) techniques for the anomaly-based network intrusion detection (ABNID) system to monitor the network traffic and capture potential abnormal behaviors. With the continuous development of network technology, information security has become a major concern for the cyber system research. In the modern net-centric and tactical warfare networks, the situation is more critical to provide real-time protection for the availability, confidentiality, and integrity of the networked information. To this end, in this work we propose to explore the learning capabilities of SOM, and integrate it with KDE for the network intrusion detection. KDE is used to estimate the distributions of the observed random variables that describe the network system and determine whether the network traffic is normal or abnormal. Meanwhile, the learning and clustering capabilities of SOM are employed to obtain well-defined data clusters to reduce the computational cost of the KDE. The principle of learning in SOM is to self-organize the network of neurons to seek similar properties for certain input patterns. Therefore, SOM can form an approximation of the distribution of input space in a compact fashion, reduce the number of terms in a kernel density estimator, and thus improve the efficiency for the intrusion detection. We test the proposed algorithm over the real-world data sets obtained from the Integrated Network Based Ohio University's Network Detective Service (INBOUNDS) system to show the effectiveness and efficiency of this method.
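
A toy sketch of the integration, assuming numpy and scipy: a small one-dimensional SOM learns a handful of prototypes, and the kernel density estimator is then built over the prototypes rather than over every traffic record, which is the cost reduction described above. Features, sizes and learning constants are placeholders.

    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(0)
    traffic = rng.normal(size=(5000, 2))          # placeholder 2-D traffic features

    n_proto, eta, width = 16, 0.5, 3.0            # tiny 1-D SOM
    protos = traffic[rng.choice(len(traffic), n_proto)]
    for i, x in enumerate(traffic):
        bmu = np.argmin(((protos - x) ** 2).sum(axis=1))      # best-matching unit
        h = np.exp(-((np.arange(n_proto) - bmu) ** 2) / (2 * width ** 2))
        protos += eta * (1 - i / len(traffic)) * h[:, None] * (x - protos)

    kde = gaussian_kde(protos.T)                  # density over 16 prototypes, not 5000 points
    anomaly_score = -kde.logpdf(traffic[:10].T)   # low density -> candidate intrusion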

Cao, Yuan; He, Haibo; Man, Hong; Shen, Xiaoping

2009-09-01

350

How Does Spatial Study Design Influence Density Estimates from Spatial Capture-Recapture Models?  

PubMed Central

When estimating population density from data collected on non-invasive detector arrays, recently developed spatial capture-recapture (SCR) models present an advance over non-spatial models by accounting for individual movement. While these models should be more robust to changes in trapping designs, they have not been well tested. Here we investigate how the spatial arrangement and size of the trapping array influence parameter estimates for SCR models. We analysed black bear data collected with 123 hair snares with an SCR model accounting for differences in detection and movement between sexes and across the trapping occasions. To see how the size of the trap array and trap dispersion influence parameter estimates, we repeated analysis for data from subsets of traps: 50% chosen at random, 50% in the centre of the array and 20% in the South of the array. Additionally, we simulated and analysed data under a suite of trap designs and home range sizes. In the black bear study, we found that results were similar across trap arrays, except when only 20% of the array was used. Black bear density was approximately 10 individuals per 100 km2. Our simulation study showed that SCR models performed well as long as the extent of the trap array was similar to or larger than the extent of individual movement during the study period, and movement was at least half the distance between traps. SCR models performed well across a range of spatial trap setups and animal movements. Contrary to non-spatial capture-recapture models, they do not require the trapping grid to cover an area several times the average home range of the studied species. This renders SCR models more appropriate for the study of wide-ranging mammals and more flexible to design studies targeting multiple species.

Sollmann, Rahel; Gardner, Beth; Belant, Jerrold L.

2012-01-01

351

How does spatial study design influence density estimates from spatial capture-recapture models?  

PubMed

When estimating population density from data collected on non-invasive detector arrays, recently developed spatial capture-recapture (SCR) models present an advance over non-spatial models by accounting for individual movement. While these models should be more robust to changes in trapping designs, they have not been well tested. Here we investigate how the spatial arrangement and size of the trapping array influence parameter estimates for SCR models. We analysed black bear data collected with 123 hair snares with an SCR model accounting for differences in detection and movement between sexes and across the trapping occasions. To see how the size of the trap array and trap dispersion influence parameter estimates, we repeated analysis for data from subsets of traps: 50% chosen at random, 50% in the centre of the array and 20% in the South of the array. Additionally, we simulated and analysed data under a suite of trap designs and home range sizes. In the black bear study, we found that results were similar across trap arrays, except when only 20% of the array was used. Black bear density was approximately 10 individuals per 100 km(2). Our simulation study showed that SCR models performed well as long as the extent of the trap array was similar to or larger than the extent of individual movement during the study period, and movement was at least half the distance between traps. SCR models performed well across a range of spatial trap setups and animal movements. Contrary to non-spatial capture-recapture models, they do not require the trapping grid to cover an area several times the average home range of the studied species. This renders SCR models more appropriate for the study of wide-ranging mammals and more flexible to design studies targeting multiple species. PMID:22539949

Sollmann, Rahel; Gardner, Beth; Belant, Jerrold L

2012-04-23

352

Estimating the Effect of Satellite Orbital Error Using Wavelet-Based Robust Regression Applied to InSAR Deformation Data  

Microsoft Academic Search

Interferometric synthetic aperture radar data are often obtained on the basis of repeated satellite acquisitions. Errors in the satellite orbit determination, however, propagate to the data analysis and may even entirely obscure the interpretation. Many approaches have been developed to correct the effect of orbital error, which sometimes may even distort the signal. Phase contributions due to other sources,

Manoochehr Shirzaei; Thomas R. Walter

2011-01-01

353

Independent component analysis of high-density electromyography in muscle force estimation.  

PubMed

Accurate force prediction from surface electromyography (EMG) forms an important methodological challenge in biomechanics and kinesiology. In a previous study (Staudenmann et al., 2006), we illustrated force estimates based on analyses lent from multivariate statistics. In particular, we showed the advantages of principal component analysis (PCA) on monopolar high-density EMG (HD-EMG) over conventional electrode configurations. In the present study, we further improve force estimates by exploiting the correlation structure of the HD-EMG via independent component analysis (ICA). HD-EMG from the triceps brachii muscle and the extension force of the elbow were measured in 11 subjects. The root mean square difference (RMSD) and correlation coefficients between predicted and measured force were determined. Relative to using the monopolar EMG data, PCA yielded a 40% reduction in RMSD. ICA yielded a significant further reduction of up to 13% RMSD. Since ICA improved the PCA-based estimates, the independent structure of EMG signals appears to contain relevant additional information for the prediction of muscle force from surface HD-EMG. PMID:17405383
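
A bare-bones sketch of the decomposition chain, assuming scikit-learn; the simulated mixture stands in for monopolar HD-EMG, and the moving-RMS envelope is one simple force-related amplitude feature, not the estimator of the paper.

    import numpy as np
    from sklearn.decomposition import PCA, FastICA

    rng = np.random.default_rng(1)
    n = 10000
    srcs = np.c_[np.sign(np.sin(np.linspace(0, 60, n))), rng.laplace(size=n)]
    emg = srcs @ rng.standard_normal((2, 64)) + 0.1 * rng.standard_normal((n, 64))

    scores = PCA(n_components=8).fit_transform(emg)             # decorrelation (PCA stage)
    sources = FastICA(n_components=4, random_state=0,
                      max_iter=500).fit_transform(scores)       # independence (ICA stage)

    win = 200                                                   # moving-RMS envelope
    envelope = np.sqrt(np.convolve(sources[:, 0] ** 2,
                                   np.ones(win) / win, mode='same'))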

Staudenmann, Didier; Daffertshofer, Andreas; Kingma, Idsart; Stegeman, Dick F; van Dieën, Jaap H

2007-04-01

354

Density  

NSDL National Science Digital Library

Density is a property of materials included in the National Science Education Standards Physical Science Content Standard B. It is a property by which mixtures can be separated, but it has much more profound applications outside the classroom, such as in rock formation, severe weather and living systems. None of these concepts is fully comprehensible without a fundamental conceptual understanding of density. The resources here provide examples designed to help you facilitate student acquisition of a conceptual understanding of density.

University, Staff A.

2008-03-07

355

Small scale spatial variability of snow density and depth over complex alpine terrain: Implications for estimating snow water equivalent  

NASA Astrophysics Data System (ADS)

This study analyzes the spatial variability of snow depth and density from measurements made in February and April of 2010 and 2011 in three 1-2 km² areas within a valley of the central Spanish Pyrenees. Snow density was correlated with snow depth and different terrain characteristics. Regression models were used to predict the spatial variability of snow density, and to assess how the error in computed densities might influence estimates of snow water equivalent (SWE). The variability in snow depth was much greater than that of snow density. The average snow density was much greater in April than in February. The correlations between snow depth and density were generally statistically significant but typically not very high, and their magnitudes and signs were highly variable among sites and surveys. The correlations with other topographic variables showed the same variability in magnitude and sign, and consequently the resulting regression models were very inconsistent and in general explained little of the variance. Antecedent climatic and snow conditions prior to each survey help explain the contrasting relations observed between snow depth, density and terrain. As a consequence of the moderate spatial variability of snow density relative to snow depth, the absolute error in the SWE estimated from computed densities using the regression models was generally less than 15%. The error was similar to that obtained by relating snow density measurements directly to adjacent snow depths.

López-Moreno, J. I.; Fassnacht, S. R.; Heath, J. T.; Musselman, K. N.; Revuelto, J.; Latron, J.; Morán-Tejeda, E.; Jonas, T.

2013-05-01

356

Estimation of body density based on hydrostatic weighing without head submersion in young Japanese adults.  

PubMed

This study examined a method of predicting body density based on hydrostatic weighing without head submersion (HWwithoutHS). Donnelly and Sintek (1984) developed a method to predict body density based on hydrostatic weight without head submersion. This method predicts the difference (D) between HWwithoutHS and hydrostatic weight with head submersion (HWwithHS) from anthropometric variables (head length and head width), and then calculates body density using D as a correction factor. We developed several prediction equations to estimate D based on head anthropometry and differences between the sexes, and compared their prediction accuracy with Donnelly and Sintek's equation. Thirty-two males and 32 females aged 17-26 years participated in the study. Multiple linear regression analysis was performed to obtain the prediction equations, and the systematic errors of their predictions were assessed by Bland-Altman plots. The best prediction equations obtained were: Males: D(g) = -164.12X1 - 125.81X2 - 111.03X3 + 100.66X4 + 6488.63, where X1 = head length (cm), X2 = head circumference (cm), X3 = head breadth (cm), X4 = head thickness (cm) (R = 0.858, R2 = 0.737, adjusted R2 = 0.687, standard error of the estimate = 224.1); Females: D(g) = -156.03X1 - 14.03X2 - 38.45X3 - 8.87X4 + 7852.45, where X1 = head circumference (cm), X2 = body mass (g), X3 = head length (cm), X4 = height (cm) (R = 0.913, R2 = 0.833, adjusted R2 = 0.808, standard error of the estimate = 137.7). The effective predictors in these prediction equations differed from those of Donnelly and Sintek's equation, and head circumference and head length were included in both equations. The prediction accuracy was improved by statistically selecting effective predictors. Since we did not assess cross-validity, the equations cannot be used to generalize to other populations, and further investigation is required. PMID:16608771
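
Transcribing the reported equations directly (coefficients and units exactly as printed in the abstract; the authors caution that, without cross-validation, they do not generalize beyond this sample):

    def correction_D_male(head_length, head_circ, head_breadth, head_thickness):
        # D in grams; all inputs in cm, per the abstract
        return (-164.12 * head_length - 125.81 * head_circ
                - 111.03 * head_breadth + 100.66 * head_thickness + 6488.63)

    def correction_D_female(head_circ, body_mass_g, head_length, height_cm):
        # D in grams; units per the abstract (body mass in g)
        return (-156.03 * head_circ - 14.03 * body_mass_g
                - 38.45 * head_length - 8.87 * height_cm + 7852.45)

D is then used as the correction between hydrostatic weight with and without head submersion when computing body density.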

Demura, S; Sato, S; Kitabayashi, T

2006-06-01

357

Extraordinarily low density of hepatitis C virus estimated by sucrose density gradient centrifugation and the polymerase chain reaction  

Microsoft Academic Search

The genomic RNA of hepatitis C virus (HCV) in the plasma of volunteer blood donors was detected by using the polymerase chain reaction in a fraction of density 1.08 g/ml from sucrose density gradient equilibrium centrifugation. When the fraction was treated with the detergent NP40 and recentrifuged in sucrose, the HCV RNA banded at 1.25 g/ml. Assuming that NP40 removed a

Hideaki Miyamoto; Hiroaki Okamoto; Koei Sato; Takeshi Tanaka; Shunji Mishiro

1992-01-01

358

Density Estimation and Bump-Hunting by the Penalized Likelihood Method Exemplified by Scattering and Meteorite Data  

Microsoft Academic Search

The (maximum) penalized-likelihood method of probability density estimation and bump-hunting is improved and exemplified by applications to scattering and chondrite data. We show how the hyperparameter in the method can be satisfactorily estimated by using statistics of goodness of fit. A Fourier expansion is found to be usually more expeditious than a Hermite expansion but a compromise is useful. The

I. J. Good; R. A. Gaskins

1980-01-01

359

A new proof of strong consistency of kernel estimation of density function and mode under random censorship  

Microsoft Academic Search

In this paper, we establish a new proof of the uniform consistency of the kernel density estimator under a randomly right-censored model. This proof uses an exponential inequality established by Wang (2000). As a consequence, we obtain the almost sure convergence of the kernel estimator of the mode.

Ali Gannoun; Jérôme Saracco

2002-01-01

360

Wavelet-based multifractal analysis of field scale variability in soil water retention  

Microsoft Academic Search

Better understanding of spatial variability of soil hydraulic parameters and their relationships to other soil properties is essential to scale-up measured hydraulic parameters and to improve the predictive capacity of pedotransfer functions. The objective of this study was to characterize scaling properties and the persistency of water retention parameters and soil physical properties. Soil texture, bulk density, organic carbon content,

Takele B. Zeleke; Bing C. Si

2007-01-01

361

A Wavelet-based Optimal Filtering Method for Adaptive Detection: Application to Metallic Magnetic Calorimeters  

NASA Astrophysics Data System (ADS)

Optimal filtering allows the maximization of the signal-to-noise ratio for the improvement of both energy threshold and resolution. Nevertheless, its effective efficiency depends on the estimation of signal and noise spectra. In practice, these are often estimated by averaging over a set of carefully chosen data. In case of time-varying noise, adaptive non-linear algorithms can be used if the shape of the signal is known. However, their convergence is not guaranteed, especially with 1/f-type noise. In this paper, a new method is presented for adaptive noise whitening and template signal estimation. First, the noise is continuously characterized in the wavelet domain, where the signal is decomposed over a set of scales corresponding to band-pass filters. Both the time resolution and the decorrelation properties of the wavelet transform allow an accurate and robust estimation of the noise structure, even if pulses or correlated noise are present. The whitening step then amounts to a normalization of each scale by the estimated noise variance. A matched filter is then applied to the whitened signal. The required signal template is constructed from a single event, denoised by a filtering technique called wavelet thresholding. As an example, application to metallic magnetic calorimeter data is presented. The method reaches the precision of conventional optimal filtering, further allowing noise monitoring and adaptive thresholds, and improving the energy resolution by up to 8% in some cases.
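
A condensed sketch of the two ingredients, assuming PyWavelets: per-scale noise normalization (whitening) and wavelet thresholding of a single event to build the template. Wavelet choice, depth and the universal threshold are illustrative defaults, not the paper's tuning.

    import numpy as np
    import pywt

    def whiten(signal, noise, wavelet='db4', level=6):
        # normalize each wavelet scale of `signal` by the noise std estimated there
        s = pywt.wavedec(signal, wavelet, level=level)
        n = pywt.wavedec(noise, wavelet, level=level)
        white = [cs / (cn.std() + 1e-12) for cs, cn in zip(s, n)]
        return pywt.waverec(white, wavelet)

    def denoise_template(pulse, wavelet='db4', level=6):
        # wavelet thresholding of one recorded event yields the matched-filter template
        c = pywt.wavedec(pulse, wavelet, level=level)
        sigma = np.median(np.abs(c[-1])) / 0.6745
        thr = sigma * np.sqrt(2 * np.log(len(pulse)))
        c = [c[0]] + [pywt.threshold(d, thr, mode='soft') for d in c[1:]]
        return pywt.waverec(c, wavelet)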

Censier, B.; Rodrigues, M.; Loidl, M.

2009-12-01

362

Density estimates of Panamanian owl monkeys (Aotus zonalis) in three habitat types.  

PubMed

The resolution of the ambiguity surrounding the taxonomy of Aotus means data on newly classified species are urgently needed for conservation efforts. We conducted a study on the Panamanian owl monkey (Aotus zonalis) between May and July 2008 at three localities in Chagres National Park, located east of the Panama Canal, using the line transect method to quantify abundance and distribution. Vegetation surveys were also conducted to provide a baseline quantification of the three habitat types. We observed 33 individuals within 16 groups in two out of the three sites. Population density was highest in Campo Chagres, with 19.7 individuals/km(2), and intermediate densities of 14.3 individuals/km(2) were observed at Cerro Azul. In La Llana, A. zonalis was not found to be present. The presence of A. zonalis in Chagres National Park, albeit at seemingly low abundance, is encouraging. A longer-term study will be necessary to validate the abundance estimates gained in this pilot study and to inform conservation policy decisions. PMID:19852005

Svensson, Magdalena S; Samudio, Rafael; Bearder, Simon K; Nekaris, K Anne-Isola

2010-02-01

363

An empirical method for estimating probability density functions of gridded daily minimum and maximum temperature  

NASA Astrophysics Data System (ADS)

The presented work focuses on the investigation of gridded daily minimum (TN) and maximum (TX) temperature probability density functions (PDFs), with the intent of both characterising a region and detecting extreme values. The empirical PDF estimation procedure uses the most recent years of gridded temperature analysis fields available at ARPA Lombardia, in Northern Italy. The spatial interpolation is based on an implementation of Optimal Interpolation using observations from a dense surface network of automated weather stations. An effort has been made to identify both the time period and the spatial areas with a stable data density; otherwise the analysis could be influenced by the changing station distribution. The PDF used in this study is based on the Gaussian distribution; nevertheless, it is designed to have an asymmetrical (skewed) shape in order to enable distinction between warming and cooling events. Once the occurrence of extreme events has been properly defined, the information can be delivered to users straightforwardly and concisely on a local scale, e.g. TX extremely cold/hot or TN extremely cold/hot.
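
As a stand-in for the skewed Gaussian-based PDF described above (whose exact form the abstract does not give), scipy's skew-normal family can illustrate the workflow: fit the asymmetric density to a gridded TN series and flag extremes via quantiles. The series and the 1%/99% cutoffs are placeholders.

    import numpy as np
    from scipy.stats import skewnorm

    tn = np.random.normal(-2.0, 4.0, 3650)        # placeholder daily TN at one grid point
    a, loc, scale = skewnorm.fit(tn)              # asymmetric, Gaussian-based fit
    lo, hi = skewnorm.ppf([0.01, 0.99], a, loc=loc, scale=scale)
    extremely_cold, extremely_hot = tn < lo, tn > hi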

Lussana, C.

2013-04-01

364

Robust estimation of mammographic breast density: a patient-based approach  

NASA Astrophysics Data System (ADS)

Breast density has become an established risk indicator for developing breast cancer. Current clinical practice reflects this by grading mammograms patient-wise as entirely fat, scattered fibroglandular, heterogeneously dense, or extremely dense based on visual perception. Existing (semi-)automated methods work on a per-image basis and mimic clinical practice by calculating an area fraction of fibroglandular tissue (mammographic percent density). We suggest a method that follows clinical practice more strictly by segmenting the fibroglandular tissue portion directly from the joint data of all four available mammographic views (cranio-caudal and medio-lateral oblique, left and right), and by subsequently calculating a consistently patient-based mammographic percent density estimate. In particular, each mammographic view is first processed separately to determine a region of interest (ROI) for segmentation into fibroglandular and adipose tissue. ROI determination includes breast outline detection via edge-based methods, peripheral tissue suppression via geometric breast height modeling, and, for medio-lateral oblique views only, pectoral muscle outline detection based on optimizing a three-parameter analytic curve with respect to local appearance. Intensity harmonization based on separately acquired calibration data is performed with respect to compression height and tube voltage to facilitate joint segmentation of the available mammographic views. A Gaussian mixture model (GMM) on the joint histogram data, with a posteriori calibration-guided plausibility correction, is finally employed for tissue separation. The proposed method was tested on patient data from 82 subjects. Results show excellent correlation (r = 0.86) to radiologists' grading, with deviations ranging between -28% (q = 0.025) and +16% (q = 0.975).
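
The tissue-separation step can be sketched with scikit-learn, assuming harmonized intensities have already been pooled from the four views; taking the brighter mixture component as fibroglandular is a plausible reading of the method rather than its exact decision rule, and the intensity data are simulated.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    # placeholder: pooled, harmonized ROI intensities from all four views
    pixels = np.r_[rng.normal(0.3, 0.05, 40000), rng.normal(0.6, 0.08, 10000)]
    gmm = GaussianMixture(n_components=2, random_state=0).fit(pixels.reshape(-1, 1))
    labels = gmm.predict(pixels.reshape(-1, 1))
    fibro = labels == np.argmax(gmm.means_.ravel())    # brighter class -> fibroglandular
    percent_density = 100.0 * fibro.mean()             # patient-based percent density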

Heese, Harald S.; Erhard, Klaus; Gooßen, Andre; Bulow, Thomas

2012-02-01

365

Density estimation in aerial images of large crowds for automatic people counting  

NASA Astrophysics Data System (ADS)

Counting people is a common topic in the area of visual surveillance and crowd analysis. While many image-based solutions are designed to count only a few persons at the same time, like pedestrians entering a shop or watching an advertisement, there is hardly any solution for counting large crowds of several hundred persons or more. We addressed this problem previously by designing a semi-automatic system able to count crowds consisting of hundreds or thousands of people in aerial images of demonstrations or similar events. That system requires major user interaction to segment the image. Our principal aim is to reduce this manual interaction. To achieve this, we propose a new, automatic system. Besides counting the people in large crowds, the system yields the positions of people, allowing a plausibility check by a human operator. In order to automate the people counting system, we use crowd density estimation. The determination of crowd density is based on several features, such as edge intensity or spatial frequency, which indicate the density and discriminate between a crowd and other image regions like buildings, bushes or trees. We compare the performance of our automatic system to the previous semi-automatic system and to manual counting in images. The performance gain of the new system is measured by counting a test set of aerial images showing large crowds of up to 12,000 people. By improving our previous system, we increase the benefit of an image-based solution for counting people in large crowds.
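
One of the features named above, edge intensity, reduces to a few lines with scipy; block-averaging the gradient magnitude gives a coarse density map of the kind such systems then threshold and calibrate. The image, block size and random data are placeholders.

    import numpy as np
    from scipy import ndimage

    img = np.random.rand(480, 640)                 # placeholder aerial grayscale image
    gx, gy = ndimage.sobel(img, axis=1), ndimage.sobel(img, axis=0)
    edges = np.hypot(gx, gy)                       # edge-intensity feature
    b = 32                                         # block size of the density map
    h, w = (img.shape[0] // b) * b, (img.shape[1] // b) * b
    density_map = edges[:h, :w].reshape(h // b, b, w // b, b).mean(axis=(1, 3))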

Herrmann, Christian; Metzler, Juergen

2013-05-01

366

Estimating oxide-trap, interface-trap, and border-trap charge densities in metal-oxide-semiconductor transistors  

SciTech Connect

A simple method is described that combines conventional threshold-voltage and charge-pumping measurements on n- and p-channel metal-oxide-semiconductor (MOS) transistors to estimate radiation-induced oxide-, interface-, and border-trap charge densities. In some devices, densities of border traps (near-interfacial oxide traps that exchange charge with the underlying Si) approach or exceed the density of interface traps, emphasizing the need to distinguish border-trap contributions to MOS radiation response and long-term reliability from interface-trap contributions. Estimates of border-trap charge densities obtained via this new dual-transistor technique agree well with trap densities inferred from 1/f noise measurements for transistors with varying channel length.

Fleetwood, D.M.; Shaneyfelt, M.R.; Schwank, J.R. (Sandia National Laboratories, Department 1332, Albuquerque, New Mexico 87185-1083 (United States))

1994-04-11

367

The minimum description length principle for probability density estimation by regular histograms  

NASA Astrophysics Data System (ADS)

The minimum description length principle is a general methodology for statistical modeling and inference that selects the best explanation for observed data as the one allowing the shortest description of them. Application of this principle to the important task of probability density estimation by histograms was previously proposed. We review this approach and provide additional illustrative examples and an application to real-world data, with a presentation emphasizing intuition and concrete arguments. We also consider alternative ways of measuring the description lengths that may be better suited in this context. We explicitly exhibit, analyze and compare the complete forms of the description lengths, with formulas involving the information entropy and redundancy of the data that are not given elsewhere. Histogram estimation as performed here naturally extends to multidimensional data, and offers for them flexible and optimal subquantization schemes. The framework can be very useful for modeling and reducing the complexity of observed data, based on a general principle from statistical information theory, and placed within a unifying informational perspective.
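
A BIC-like two-part approximation of the description length (not the exact code lengths analyzed in the paper) already shows the mechanics of bin selection:

    import numpy as np

    def description_length(data, k):
        # cost of the data given a regular k-bin histogram, plus a parameter cost (nats)
        n = len(data)
        counts, edges = np.histogram(data, bins=k)
        h = edges[1] - edges[0]
        nz = counts[counts > 0]
        neg_loglik = -np.sum(nz * np.log(nz / (n * h)))
        return neg_loglik + 0.5 * (k - 1) * np.log(n)

    data = np.random.standard_normal(2000)
    best_k = min(range(2, 100), key=lambda k: description_length(data, k))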

Chapeau-Blondeau, François; Rousseau, David

2009-09-01

368

Wavelet-Based Feature Extraction for Retrieval of Photosynthetic Pigment Concentrations from Hyperspectral Reflectance  

Microsoft Academic Search

With recent advances in precision pasture management, the need for quickly assessing the photosynthetic pigment content of pasture has become apparent. Hyperspectral technology provides a non-destructive method for evaluating vegetation photosynthetic pigment content. This paper is devoted to illustrating the potential of wavelet analysis of hyperspectral reflectance signals for estimating photosynthetic pigments and evaluating grass quality. Wavelet

Yu Rong Qian; Feng Yang; Jian Long Li

2009-01-01

369

An application of the learning theory to wavelet based signal denoising  

Microsoft Academic Search

In this paper the statistical learning theory is applied to signal denoising using wavelets. The methodology is based on the estimation of the functional relationship between the Vapnik-Chervonenkis (VC) dimension and approximation complexity. Experimental results confirm the basic assumptions.

M. Stankovic; S. Stankovic

2004-01-01

370

Evaluation of a wavelet-based compression algorithm applied to the silicon drift detectors data of the ALICE experiment at CERN  

NASA Astrophysics Data System (ADS)

This paper evaluates the performance of a wavelet-based compression algorithm applied to the data produced by the silicon drift detectors of the ALICE experiment at CERN. This compression algorithm is a general-purpose lossy technique; in other words, it could prove useful on a wide range of other data reduction problems. The design targets relevant for our wavelet-based compression algorithm are the following: a high compression ratio, a reconstruction error as small as possible, and a very limited execution time. Interestingly, the results obtained are quite close to those achieved by the algorithm implemented in the first prototype of the chip CARLOS, the chip that will be used in the silicon drift detector readout chain.

Falchieri, Davide; Gandolfi, Enzo; Masotti, Matteo

2004-07-01

371

Automatic check method of vehicle digital dashboard based on wavelet-based multi-scale GVF snake  

NASA Astrophysics Data System (ADS)

The high accuracy of the vehicle digital dashboard makes it difficult to check its error in real time. Taking advantage of its production conditions, digital image processing can be used to check the dashboard's precision automatically. Image edge detection is the key step of our dashboard check method. The snake model is extensively used today. The GVF snake model overcomes the traditional snake model's shortcomings: it has a large capture range and is able to move into boundary concavities. However, it still requires a large amount of computation and is easily disturbed by noise. The wavelet-based multi-scale GVF snake takes advantage of both the wavelet transform and the GVF model. At the lower resolutions there are fewer wavelet coefficients, so the GVF snake deforms to the contour without much computation and is less affected by noise. At the higher resolutions, using the contour found at the previous resolution as the initial position saves considerable computation. Experiments show this method can detect the position of the pointer automatically and exactly.

Zhang, Hong-wei; Zhang, Jian-wei; Cao, Jian; Wang, Wu-lin; Gong, Jin-feng; Wang, Xu

2007-12-01

372

Enhancement of tropical land cover mapping with wavelet-based fusion and unsupervised clustering of SAR and Landsat image data  

NASA Astrophysics Data System (ADS)

The characterization and the mapping of land cover/land use of forest areas, such as the Central African rainforest, is a very complex task. This complexity is mainly due to the extent of such areas and, as a consequence, to the lack of full and continuous cloud-free coverage of those large regions by one single remote sensing instrument. In order to provide improved vegetation maps of Central Africa and to develop forest monitoring techniques for applications at the local and regional scales, we propose to utilize multi- sensor remote sensing observations coupled with in-situ data. Fusion and clustering of multi-sensor data are the first steps towards the development of such a forest monitoring system. In this paper, we will describe some preliminary experiments involving the fusion of SAR and Landsat image data of the Lope Reserve in Gabon. Similarly to previous fusion studies, our fusion method is wavelet-based. The fusion provides a new image data set which contains more detailed texture features and preserves the large homogeneous regions that are observed by the Thematic Mapper sensor. The fusion step is followed by unsupervised clustering and provides a vegetation map of the area.

Le Moigne, Jacqueline; Laporte, Nadine; Netanyahu, Nathan S.

2002-01-01

373

sEMG wavelet-based indices predicts muscle power loss during dynamic contractions  

Microsoft Academic Search

The purpose of this study was to investigate the sensitivity of new surface electromyography (sEMG) indices based on the discrete wavelet transform for estimating acute exercise-induced changes in muscle power output during a dynamic fatiguing protocol. Fifteen trained subjects performed five sets of 10 leg presses, with 2 min rest between sets. sEMG was recorded from the vastus medialis (VM) muscle.

M. González-Izal; I. Rodríguez-Carreño; A. Malanda; F. Mallor-Giménez; I. Navarro-Amézqueta; E. M. Gorostiaga; M. Izquierdo

2010-01-01

374

X-Ray Methods to Estimate Breast Density Content in Breast Tissue  

NASA Astrophysics Data System (ADS)

This work focuses on analyzing x-ray methods to estimate the fat and fibroglandular contents in breast biopsies and in breasts. The knowledge of fat in the biopsies could aid in their wide-angle x-ray scatter analyses. A higher mammographic density (fibrous content) in breasts is an indicator of higher cancer risk. Simulations for 5 mm thick breast biopsies composed of fibrous, cancer, and fat and for 4.2 cm thick breast fat/fibrous phantoms were done. Data from experimental studies using plastic biopsies were analyzed. The 5 mm diameter 5 mm thick plastic samples consisted of layers of polycarbonate (lexan), polymethyl methacrylate (PMMA-lucite) and polyethylene (polyet). In terms of the total linear attenuation coefficients, lexan ≈ fibrous, lucite ≈ cancer and polyet ≈ fat. The detectors were of two types, photon counting (CdTe) and energy integrating (CCD). For biopsies, three photon counting methods were performed to estimate the fat (polyet) using simulation and experimental data, respectively. The two basis function method that assumed the biopsies were composed of two materials, fat and a 50:50 mixture of fibrous (lexan) and cancer (lucite), appears to be the most promising method. Discrepancies were observed between the results obtained via simulation and experiment. Potential causes are the spectrum and the attenuation coefficient values used for simulations. An energy integrating method was compared to the two basis function method using experimental and simulation data. A slight advantage was observed for photon counting whereas both detectors gave similar results for the 4.2 cm thick breast phantom simulations. The percentage of fibrous within a 9 cm diameter circular phantom of fibrous/fat tissue was estimated via a fan beam geometry simulation. Both methods yielded good results. Computed tomography (CT) images of the circular phantom were obtained using both detector types. The Radon transforms were estimated via four energy integrating techniques and one photon counting technique. Contrast, signal to noise ratio (SNR) and pixel values between different regions of interest were analyzed. The two basis function method and two of the energy integrating methods (calibration, beam hardening correction) gave the highest and more linear curves for contrast and SNR.

Maraghechi, Borna

375

The use of photographic rates to estimate densities of tigers and other cryptic mammals: a comment on misleading conclusions  

USGS Publications Warehouse

The search for easy-to-use indices that substitute for direct estimation of animal density is a common theme in wildlife and conservation science, but one fraught with well-known perils (Nichols & Conroy, 1996; Yoccoz, Nichols & Boulinier, 2001; Pollock et al., 2002). To establish the utility of an index as a substitute for an estimate of density, one must: (1) demonstrate a functional relationship between the index and density that is invariant over the desired scope of inference; (2) calibrate the functional relationship by obtaining independent measures of the index and the animal density; (3) evaluate the precision of the calibration (Diefenbach et al., 1994). Carbone et al. (2001) argue that the number of camera-days per photograph is a useful index of density for large, cryptic, forest-dwelling animals, and proceed to calibrate this index for tigers (Panthera tigris). We agree that a properly calibrated index may be useful for rapid assessments in conservation planning. However, Carbone et al. (2001), who desire to use their index as a substitute for density, do not adequately address the three elements noted above. Thus, we are concerned that others may view their methods as justification for not attempting directly to estimate animal densities, without due regard for the shortcomings of their approach.

Jennelle, C.S.; Runge, M.C.; MacKenzie, D.I.

2002-01-01

376

Wavelet-Based Artifact Identification and Separation Technique for EEG Signals during Galvanic Vestibular Stimulation  

PubMed Central

We present a new method for removing artifacts in electroencephalography (EEG) records during Galvanic Vestibular Stimulation (GVS). The main challenge in exploiting GVS is to understand how the stimulus acts as an input to the brain. We used EEG to monitor the brain and elicit the GVS reflexes. However, the GVS current distribution throughout the scalp generates an artifact on EEG signals. We need to eliminate this artifact to be able to analyze the EEG signals during GVS. We propose a novel method to estimate the contribution of the GVS current in the EEG signals at each electrode by combining time-series regression methods with wavelet decomposition methods. We use the wavelet transform to project the recorded EEG signal into various frequency bands and then estimate the GVS current distribution in each frequency band. The proposed method was optimized using simulated signals, and its performance was compared to well-accepted artifact removal methods such as ICA-based methods and adaptive filters. The results show that the proposed method has better performance in removing GVS artifacts, compared to the others. Using the proposed method, a higher signal-to-artifact ratio of about 1.625 dB was achieved, which outperformed other methods such as ICA-based methods, regression methods, and adaptive filters.
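
A skeletal version of the per-band estimate-and-subtract idea, assuming PyWavelets; real use involves the multichannel optimization described above, while this sketch regresses one EEG channel on the GVS current within each wavelet band. The toy signals are placeholders.

    import numpy as np
    import pywt

    def remove_gvs_artifact(eeg, gvs, wavelet='db4', level=5):
        eeg_c = pywt.wavedec(eeg, wavelet, level=level)
        gvs_c = pywt.wavedec(gvs, wavelet, level=level)
        # least-squares coefficient of the GVS current in each band, then subtraction
        cleaned = [e - (np.dot(g, e) / (np.dot(g, g) + 1e-12)) * g
                   for e, g in zip(eeg_c, gvs_c)]
        return pywt.waverec(cleaned, wavelet)

    t = np.arange(0, 10, 1 / 500)                       # 500 Hz toy recording
    gvs = np.sin(2 * np.pi * 1.0 * t)                   # stimulation current
    eeg = 0.5 * np.random.randn(t.size) + 0.8 * gvs     # contaminated channel
    cleaned = remove_gvs_artifact(eeg, gvs)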

Adib, Mani; Cretu, Edmond

2013-01-01

377

Estimation of Boron Intake and its Relation with Bone Mineral Density in Free-Living Korean Female Subjects  

Microsoft Academic Search

In this study, the status of boron intake was evaluated and its relation with bone mineral density was examined among free-living female subjects in Korea. Boron intake was estimated through the use of a database of the boron content of foods frequently consumed by Korean people, as well as by measuring bone mineral density, taking anthropometric measurements, and surveying dietary intake of

Mi-Hyun Kim; Yun-Jung Bae; Yoon-Shin Lee; Mi-Kyeong Choi

2008-01-01

378

Towards a simple generic model for upland rice root length density estimation from root intersections on soil profile  

Microsoft Academic Search

Root length density (RLD), a key factor for water and nutrient uptake, varies as a function of space and time, and is laborious to measure by root washing. In order to estimate RLD from root intersection density (RID), taking root orientation into account, RID was determined on three perpendicular soil planes of cubic samples and RLD was measured for the

J. Dusserre; A. Audebert; A. Radanielson; J. L. Chopart

2009-01-01

379

Two-component wind fields from scanning aerosol lidar and motion estimation algorithms  

NASA Astrophysics Data System (ADS)

We report on the implementation and testing of a new wavelet-based motion estimation algorithm to estimate horizontal vector wind fields in real-time from horizontally-scanning elastic backscatter lidar data, and new experimental results from field work conducted in Chico, California, during the summer of 2013. We also highlight some limitations of a traditional cross-correlation method and compare the results of the wavelet-based method with those from the cross-correlation method and wind measurements from a Doppler lidar.

Mayor, Shane D.; Dérian, Pierre; Mauzey, Christopher F.; Hamada, Masaki

2013-09-01

380

A Wavelet-based Seismogram Inversion Algorithm for the In Situ Characterization of Nonlinear Soil Behavior  

NASA Astrophysics Data System (ADS)

We present a full waveform inversion algorithm for downhole array seismogram recordings that can be used to estimate the inelastic soil behavior in situ during earthquake ground motion. For this purpose, we first develop a new hysteretic scheme that improves upon existing nonlinear site response models by allowing adjustment of the width and length of the hysteresis loop with a relatively small number of soil parameters. The constitutive law is formulated to approximate the response of saturated cohesive materials, and does not account for volumetric changes due to shear leading to pore pressure development and potential liquefaction. We implement the soil model in the forward operator of the inversion, and evaluate the constitutive parameters that maximize the cross-correlation between site response predictions and observations on the ground surface. The objective function is defined in the wavelet domain, which allows equal weight to be assigned across all frequency bands of the non-stationary signal. We evaluate the convergence rate and robustness of the proposed scheme for noise-free and noise-contaminated data, and illustrate good performance of the inversion for signal-to-noise ratios as low as 3. We finally apply the proposed scheme to downhole array data, and show that the results compare very well with published data on generic soil conditions and previous geotechnical investigation studies at the array site. By assuming a realistic hysteretic model and estimating the constitutive soil parameters, the proposed inversion accounts for the instantaneous adjustment of soil response to the level of strain and load path during transient loading, and allows the results to be used in predictions of nonlinear site effects during future events.
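
The wavelet-domain objective can be illustrated with a short sketch: correlate predicted and observed seismograms band by band and average, so each frequency band carries equal weight. The wavelet and level are assumptions; this is not the authors' code.

```python
import numpy as np
import pywt

def wavelet_objective(predicted, observed, wavelet="db6", level=6):
    """Average of per-band normalized correlations between predicted and
    observed surface seismograms, so every frequency band of the
    non-stationary signal carries equal weight. The inversion maximizes
    this value over the constitutive soil parameters."""
    cp = pywt.wavedec(predicted, wavelet, level=level)
    co = pywt.wavedec(observed, wavelet, level=level)
    scores = [np.dot(p, o) / (np.linalg.norm(p) * np.linalg.norm(o))
              for p, o in zip(cp, co)
              if np.linalg.norm(p) > 0 and np.linalg.norm(o) > 0]
    return float(np.mean(scores))
```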

Assimaki, D.; Li, W.; Kalos, A.

2011-10-01

382

Measuring and Modeling Fault Density for Plume-Fault Encounter Probability Estimation  

SciTech Connect

Emission of carbon dioxide from fossil-fueled power generation stations contributes to global climate change. Storage of this carbon dioxide within the pores of geologic strata (geologic carbon storage) is one approach to mitigating the climate change that would otherwise occur. The large storage volume needed for this mitigation requires injection into brine-filled pore space in reservoir strata overlain by cap rocks. One of the main concerns of storage in such rocks is leakage via faults. In the early stages of site selection, site-specific fault coverages are often not available. This necessitates a method for using available fault data to develop an estimate of the likelihood of injected carbon dioxide encountering and migrating up a fault, primarily due to buoyancy. Fault population statistics provide one of the main inputs to calculate the encounter probability. Previous fault population statistics work is shown to be applicable to areal fault density statistics. This result is applied to a case study in the southern portion of the San Joaquin Basin, with the result that a carbon dioxide plume from a previously planned injection had a 3% chance of encountering a fault that fully offsets the seal.
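
A simple way to see how fault population statistics feed an encounter probability is to treat fault centers as a two-dimensional Poisson process, as in this hedged sketch; the study's treatment is more detailed, and the numbers below are illustrative only.

```python
import math

def encounter_probability(faults_per_km2, mean_fault_length_km, plume_radius_km):
    """A circular plume footprint of radius r hits a fault of length L iff the
    fault's center lies within the segment's r-buffer (area 2rL + pi r^2);
    with Poisson fault centers, P(>=1 hit) = 1 - exp(-density * buffer)."""
    buffer_area = (2.0 * plume_radius_km * mean_fault_length_km
                   + math.pi * plume_radius_km ** 2)
    return 1.0 - math.exp(-faults_per_km2 * buffer_area)

print(f"{encounter_probability(0.002, 3.0, 1.5):.1%}")   # illustrative values only
```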

Jordan, P.D.; Oldenburg, C.M.; Nicot, J.-P.

2011-05-15

383

Can Hip Fracture Prediction in Women be Estimated beyond Bone Mineral Density Measurement Alone?  

PubMed Central

The etiology of hip fractures is multifactorial and includes bone and fall-related factors. Low bone mineral density (BMD) and BMD-related and BMD-independent geometric components of bone strength, evaluated by hip strength analysis (HSA) and finite element analyses of dual-energy X-ray absorptiometry (DXA) images, and ultrasound parameters are related to the presence and incidence of hip fracture. In addition, clinical risk factors contribute to the risk of hip fractures, independent of BMD. They are included in the fracture risk assessment tool (FRAX) case finding algorithm to estimate in the individual patient the 10-year risk of hip fracture, with and without BMD. Fall risks are not included in FRAX, but are included in other case finding tools, such as the Garvan algorithm, to predict the 5- and 10-year hip fracture risk. Hormones, cytokines, growth factors, markers of bone resorption and genetic background have been related to hip fracture risk. Vitamin D deficiency is endemic worldwide and low serum levels of 25-hydroxyvitamin D [25(OH)D] predict hip fracture risk. In the context of hip fracture prevention, calculation of absolute fracture risk using clinical risks, BMD, bone geometry and fall-related risks is feasible, but needs further refinement by integrating bone and fall-related risk factors into a single case finding algorithm for clinical use.

Geusens, Piet; van Geel, Tineke; van den Bergh, Joop

2010-01-01

384

Estimating the cell density and invasive radius of three-dimensional glioblastoma tumor spheroids grown in vitro  

NASA Astrophysics Data System (ADS)

To gain insight into brain tumor invasion, experiments are conducted on multicellular tumor spheroids grown in collagen gel. Typically, a radius of invasion is reported, which is obtained by human measurement. We present a simple, heuristic algorithm for automated invasive radii estimation (AIRE) that uses local fluctuations of the image intensity. We then derive an analytical expression relating the image graininess to the cell density for a model imaging system. The result agrees with the experiment up to a multiplicative constant and thus describes a novel method for estimating the cell density from bright-field images.
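
The "graininess" statistic can be approximated by a local standard deviation of image intensity, as in this sketch; the window size is an assumption, and the AIRE algorithm itself is described in the paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def graininess(image, size=9):
    """Local standard deviation of image intensity in a size x size window,
    serving as the graininess statistic; cell density is then approximately
    proportional to it (the constant is fixed by calibration)."""
    img = image.astype(float)
    mean = uniform_filter(img, size)
    mean_sq = uniform_filter(img ** 2, size)
    return np.sqrt(np.clip(mean_sq - mean ** 2, 0.0, None))
```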

Stein, Andrew M.; Nowicki, Michal O.; Demuth, Tim; Berens, Michael E.; Lawler, Sean E.; Chiocca, E. Antonio; Sander, Leonard M.

2007-08-01

385

Estimation of intraepidermal fiber density by the detection rate of nociceptive laser stimuli in normal and pathological conditions.  

PubMed

The variability of warm and heat pain sensitivity between body regions is usually ascribed to differences in intraepidermal nerve fiber (IENF) density. However, although crucial to assess the function of the thermo-nociceptive system, especially in the context of small fiber neuropathies, the relationship between psychophysical performance and IENF density is poorly understood. Here, we examine the hypothesis according to which the nociceptive system must receive a critical amount of afferent information to generate a conscious percept and/or a behavioral response. The amount of nociceptive information is defined by the stimulus, but also by the state of the nervous system encoding, transmitting and processing the afferent input. Furthermore, this amount may be expected to depend on the number of activated IENF, itself dependent on the size of the stimulated surface area as well as the density of IENF. By characterizing the relationship between psychophysical responses to nociceptive stimuli, size of the stimulated surface area and IENF density estimated using skin biopsies in healthy subjects as well as experimental and pathological conditions of reduced IENF density, we were able to estimate the number of nociceptive afferents required to elicit a conscious percept. Convergent results were obtained across the different experiments, indicating that the detection rate to brief small-diameter CO2 laser pulses could be used to estimate IENF density and, hence, to diagnose and quantify denervation in small fiber neuropathies. PMID:23040699
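
The recruitment hypothesis lends itself to a toy Poisson-style model, sketched below; the functional form and the constant k are hypothetical illustrations, since the paper estimates the relationship empirically.

```python
import math

def detection_rate(ienf_density, stimulated_area_mm2, k=0.02):
    """Toy recruitment model: if afferent activations are Poisson with rate
    proportional to IENF density times stimulated area, the chance that
    enough fibers fire to yield a percept behaves like 1 - exp(-k*rho*A).
    k is a hypothetical constant, not a value from the paper."""
    return 1.0 - math.exp(-k * ienf_density * stimulated_area_mm2)
```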

Mouraux, A; Ragé, M; Bragard, D; Plaghki, L

2012-06-22

386

Estimation of tiger densities in the tropical dry forests of Panna, Central India, using photographic capture-recapture sampling  

USGS Publications Warehouse

Tropical dry-deciduous forests comprise more than 45% of the tiger (Panthera tigris) habitat in India. However, in the absence of rigorously derived estimates of ecological densities of tigers in dry forests, critical baseline data for managing tiger populations are lacking. In this study tiger densities were estimated using photographic capture–recapture sampling in the dry forests of Panna Tiger Reserve in Central India. Over a 45-day survey period, 60 camera trap sites were sampled in a well-protected part of the 542-km2 reserve during 2002. A total sampling effort of 914 camera-trap-days yielded photo-captures of 11 individual tigers over 15 sampling occasions that effectively covered a 418-km2 area. The closed capture–recapture model Mh, which incorporates individual heterogeneity in capture probabilities, fitted these photographic capture history data well. The estimated capture probability/sample, 0.04, resulted in an estimated tiger population size and standard error of 29 (9.65), and a density of 6.94 (3.23) tigers/100 km2. The estimated tiger density matched predictions based on prey abundance. Our results suggest that, if managed appropriately, the available dry forest habitat in India has the potential to support a population size of about 9000 wild tigers.
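
For orientation, a closed-population estimate can be reproduced from the abstract's numbers with the simplest model (M0, no heterogeneity); the study itself fitted the jackknife Mh estimator, which corrects for individual heterogeneity and yields the larger estimate quoted.

```python
# Closed-population sketch (model M0, no heterogeneity) using the numbers
# reported in the abstract; the study itself used the Mh estimator.
m_caught = 11        # distinct tigers photographed
p_sample = 0.04      # estimated capture probability per sampling occasion
occasions = 15
area_km2 = 418

p_star = 1 - (1 - p_sample) ** occasions   # probability of >=1 capture
n_hat = m_caught / p_star                  # abundance under M0
print(f"N ~ {n_hat:.1f}, D ~ {100 * n_hat / area_km2:.2f} tigers/100 km2")
# prints N ~ 24.0; Mh, which allows heterogeneity, gave 29 (6.94/100 km2)
```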

Karanth, K. U.; Chundawat, R.S.; Nichols, J.D.; Kumar, N.S.

2004-01-01

387

Wavelet-based multifractal analysis of field scale variability in soil water retention  

NASA Astrophysics Data System (ADS)

Better understanding of spatial variability of soil hydraulic parameters and their relationships to other soil properties is essential to scale-up measured hydraulic parameters and to improve the predictive capacity of pedotransfer functions. The objective of this study was to characterize scaling properties and the persistency of water retention parameters and soil physical properties. Soil texture, bulk density, organic carbon content, and the parameters of the van Genuchten water retention function were determined on 128 soil cores from a 384-m transect with a sandy loam soil, located at Smeaton, SK, Canada. The wavelet transform modulus maxima, or WTMM, technique was used in the multifractal analysis. Results indicate that the fitted water retention parameters had higher small-scale variability and lower persistency than the measured soil physical properties. Of the three distinct scaling ranges identified, the middle region (8-128 m) had a multifractal-type scaling. The generalized Hurst exponent indicated that the measured soil properties were more persistent than the fitted soil hydraulic parameters. The relationships observed here imply that soil physical properties are better predictors of water retention values at larger spatial scales than at smaller scales.
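
The partition-function step of WTMM can be sketched as follows: at each scale, keep the local maxima of the CWT modulus and sum their q-th powers; the slopes of log Z(q, a) against log a estimate the scaling exponents. This omits the chaining of maxima across scales that full WTMM performs, and the wavelet and scale range are assumptions.

```python
import numpy as np
import pywt

def wtmm_tau(signal, scales, q_values, wavelet="mexh"):
    """Partition-function step of WTMM: at each scale keep local maxima of
    |CWT| and sum their q-th powers; the slope of log Z(q, a) versus log a
    estimates tau(q). Chaining of maxima across scales is omitted."""
    coefs, _ = pywt.cwt(signal, scales, wavelet)
    abs_w = np.abs(coefs)
    tau = []
    for q in q_values:
        log_z = []
        for row in abs_w:
            interior = row[1:-1]
            is_max = (interior > row[:-2]) & (interior > row[2:])
            log_z.append(np.log(np.sum(interior[is_max] ** q)))
        tau.append(np.polyfit(np.log(scales), log_z, 1)[0])
    return np.array(tau)   # nonlinearity of tau(q) in q indicates multifractality

scales = np.arange(4, 64)
transect = np.cumsum(np.random.default_rng(0).standard_normal(2048))  # toy series
print(wtmm_tau(transect, scales, [1, 2, 3]))
```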

Zeleke, Takele B.; Si, Bing C.

2007-07-01

388

Wavelet-based data and solution compression for efficient image reconstruction in fluorescence diffuse optical tomography  

NASA Astrophysics Data System (ADS)

Current fluorescence diffuse optical tomography (fDOT) systems can provide large data sets and, in addition, the unknown parameters to be estimated are so numerous that the sensitivity matrix is too large to store. Alternatively, iterative methods can be used, but they can be extremely slow at converging when dealing with large matrices. A few approaches suitable for the reconstruction of images from very large data sets have been developed. However, they either require explicit construction of the sensitivity matrix, suffer from slow computation times, or can only be applied to restricted geometries. We introduce a method for fast reconstruction in fDOT with large data and solution spaces, which preserves the resolution of the forward operator whilst compressing its representation. The method does not require construction of the full matrix, and thus allows storage and direct inversion of the explicitly constructed compressed system matrix. The method is tested using simulated and experimental data. Results show that the fDOT image reconstruction problem can be effectively compressed without significant loss of information and with the added advantage of reducing image noise.
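
One side of the compression idea, wavelet-transforming and thresholding the rows of the sensitivity matrix to obtain a sparse, directly storable system matrix, can be sketched as follows; the wavelet, level and retention fraction are assumptions, and the paper compresses both data and solution spaces.

```python
import numpy as np
import pywt
from scipy.sparse import csr_matrix

def compress_rows(J, wavelet="haar", level=3, keep=0.05):
    """Wavelet-transform each row of the sensitivity matrix J and zero all
    but the largest coefficients, yielding a sparse compressed system
    matrix that can be stored (and inverted) explicitly."""
    rows = []
    for r in J:
        arr, _ = pywt.coeffs_to_array(pywt.wavedec(r, wavelet, level=level))
        thresh = np.quantile(np.abs(arr), 1.0 - keep)   # keep top 5% of coefficients
        arr[np.abs(arr) < thresh] = 0.0
        rows.append(arr)
    return csr_matrix(np.array(rows))

J = np.random.default_rng(1).normal(size=(64, 256))  # stand-in forward operator
print(compress_rows(J).nnz / J.size)                 # ~0.05 of entries retained
```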

Correia, Teresa; Rudge, Timothy; Koch, Maximilian; Ntziachristos, Vasilis; Arridge, Simon

2013-08-01

389

sEMG wavelet-based indices predicts muscle power loss during dynamic contractions.  

PubMed

The purpose of this study was to investigate the sensitivity of new surface electromyography (sEMG) indices based on the discrete wavelet transform for estimating acute exercise-induced changes in muscle power output during a dynamic fatiguing protocol. Fifteen trained subjects performed five sets of 10 leg presses, with 2 min rest between sets. sEMG was recorded from the vastus medialis (VM) muscle. Several surface electromyographic parameters were computed: mean rectified voltage (MRV), median spectral frequency (F(med)), the Dimitrov spectral index of muscle fatigue (FI(nsm5)), and five other parameters obtained from the stationary wavelet transform (SWT) as ratios between different scales. The new wavelet indices showed better accuracy in mapping changes in muscle power output during the fatiguing protocol. Moreover, the new wavelet indices as a single-parameter predictor accounted for 46.6% of the performance variance of changes in muscle power, and the log-FI(nsm5) and MRV as a two-factor combination predictor accounted for 49.8%. In addition, the proposed wavelet indices showed the highest robustness in the presence of additive white Gaussian noise for different signal to noise ratios (SNRs). The sEMG wavelet indices proposed may be a useful tool for mapping changes in muscle power output during dynamic high-loading fatiguing tasks. PMID:20579906
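
A wavelet-ratio index of this kind can be sketched with the stationary wavelet transform: compare detail energy in low-frequency scales against high-frequency ones, since fatigue shifts sEMG content toward lower frequencies. The scale pairing and wavelet below are illustrative, not the published indices.

```python
import numpy as np
import pywt

def swt_fatigue_index(emg, wavelet="db5", level=4):
    """Energy in the coarsest SWT detail band over energy in the finest one;
    the ratio rises as fatigue shifts sEMG content toward low frequencies."""
    n = len(emg) - len(emg) % (2 ** level)  # SWT needs length % 2^level == 0
    coeffs = pywt.swt(np.asarray(emg[:n], dtype=float), wavelet, level=level)
    detail_energy = [np.sum(d ** 2) for _, d in coeffs]  # coarsest scale first
    return detail_energy[0] / detail_energy[-1]
```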

González-Izal, M; Rodríguez-Carreño, I; Malanda, A; Mallor-Giménez, F; Navarro-Amézqueta, I; Gorostiaga, E M; Izquierdo, M

2010-06-25

390

Multiscale Systematic Error Correction via Wavelet-Based Band Splitting and Bayesian Error Modeling in Kepler Light Curves  

NASA Astrophysics Data System (ADS)

Kepler photometric data contain significant systematic and stochastic errors as they come from the Kepler spacecraft. The main causes of the systematic errors are changes in the photometer focus due to thermal changes in the instrument, as well as residual spacecraft pointing errors. It is the main purpose of the Presearch-Data-Conditioning (PDC) module of the Kepler Science processing pipeline to remove these systematic errors from the light curves. While PDC has recently seen a dramatic performance improvement by means of a Bayesian approach to systematic error correction and improved discontinuity correction, there is still room for improvement. One problem of the current (Kepler 8.1) implementation of PDC is that injection of high frequency noise can be observed in some light curves. Although this high frequency noise does not negatively impact the general cotrending, an increased noise level can make detection of planet transits or other astrophysical signals more difficult. The origin of this noise injection is that high frequency components of light curves sometimes get included in detrending basis vectors characterizing long term trends. Similarly, small scale features like edges can sometimes get included in basis vectors which otherwise describe low frequency trends. As a side effect of removing the trends, detrending with these basis vectors can then also mistakenly introduce these small scale features into the light curves. A solution to this problem is to perform a separation of scales, such that small scale features and large scale features are described by different basis vectors. We present our new multiscale approach that employs wavelet-based band splitting to decompose small scale from large scale features in the light curves. The PDC Bayesian detrending can then be performed on each band individually to correct small and large scale systematics independently. Funding for the Kepler Mission is provided by the NASA Science Mission Directorate.
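
The scale-separation step can be sketched by reconstructing each wavelet level of a light curve separately; each band can then be detrended on its own. This is only the band-splitting idea, not the pipeline's implementation.

```python
import numpy as np
import pywt

def band_split(flux, wavelet="db8", level=6):
    """Split a light curve into wavelet bands so that systematics can be
    corrected per band; by linearity the bands sum back to the input."""
    coeffs = pywt.wavedec(flux, wavelet, level=level)
    bands = []
    for i in range(len(coeffs)):
        only_i = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
        bands.append(pywt.waverec(only_i, wavelet)[: len(flux)])
    return bands  # bands[0] = long-term trend, bands[-1] = finest-scale band

# detrend each band independently, then sum the corrected bands
```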

Stumpe, Martin C.; Smith, J. C.; Van Cleve, J.; Jenkins, J. M.; Barclay, T. S.; Fanelli, M. N.; Girouard, F.; Kolodziejczak, J.; McCauliff, S.; Morris, R. L.; Twicken, J. D.

2012-05-01

391

A Method to Correlate the Upper Air Density with Surface Density and Estimate the Ballistic Density for Air or Surface Launched Missiles.  

National Technical Information Service (NTIS)

Atmospheric data collected twice daily from five ocean stations over a period of 7 years were used in a study to correlate ballistic densities at various levels. The study indicates that a good correlation does exist and that tables, nomograms, or data in...

L. J. McAnelly

1982-01-01

392

MEASUREMENT OF OAK TREE DENSITY WITH LANDSAT TM DATA FOR ESTIMATING BIOGENIC ISOPRENE EMISSIONS IN TENNESSEE, USA: JOURNAL ARTICLE  

EPA Science Inventory

NRMRL-RTP-P-437. Baugh, W., Klinger, L., Guenther, A., and Geron, C.D. Measurement of Oak Tree Density with Landsat TM Data for Estimating Biogenic Isoprene Emissions in Tennessee, USA. International Journal of Remote Sensing (Taylor and Francis) 22(14):2793-2810 (2001)...

393

Development and Validation of a Fixed-Precision Sampling Plan for Estimating Striped Cucumber Beetle (Coleoptera: Chrysomelidae) Density in Cucurbits  

Microsoft Academic Search

Striped cucumber beetle, Acalymma vittatum (F.), has been identified as one of the most damaging insect pests of vegetables in Minnesota. In an effort to develop practical methods for estimating adult beetle density, beetles were sampled in cucurbit fields throughout central and southern Minnesota during 1994-1995. A. vittatum samples were collected in several cucurbit crops including: cucumber, pumpkin, and squash.

Eric C. Burkness; William D. Hutchison

394

Settlement location and population density estimation in rugged terrain using information derived from Landsat ETM and SRTM data  

Microsoft Academic Search

It is useful to have a disaggregated population database at uniform grid units in disaster situations. This study presents a method for settlement location probability and population density estimations at a 90 m resolution for northern Iraq using the Shuttle Radar Topographic Mission (SRTM) digital terrain model and Landsat Enhanced Thematic Mapper satellite imagery. A spatial model each for calculating the

Sarah Mubareka; Daniele Ehrlich; Ferdinand Bonn; Francois Kayitakire

2008-01-01

395

Global Crust-Mantle Density Contrast Estimated from EGM2008, DTM2008, CRUST2.0, and ICE-5G  

NASA Astrophysics Data System (ADS)

We compute globally the consolidated crust-stripped gravity disturbances/anomalies. These refined gravity field quantities are obtained from the EGM2008 gravity data after applying the topographic and crust density contrast stripping corrections computed using the global topography/bathymetry model DTM2006.0, the global continental ice-thickness data ICE-5G, and the global crustal model CRUST2.0. The density contrasts of all crust components are defined relative to the reference crustal density of 2,670 kg/m3. We demonstrate that the consolidated crust-stripped gravity data have the strongest correlation with crustal thickness and are therefore the most suitable gravity data type for the recovery of the Moho density interface by means of gravimetric modelling or inversion. The consolidated crust-stripped gravity data and the CRUST2.0 crust-thickness data are used to estimate the global average value of the crust-mantle density contrast. This is done by minimising the correlation between these refined gravity and crust-thickness data while adding the crust-mantle density contrast to the original reference crustal density of 2,670 kg/m3. The estimated values of 485 kg/m3 (for the refined gravity disturbances) and 481 kg/m3 (for the refined gravity anomalies) agree very closely with the crust-mantle density contrast of 480 kg/m3 adopted in the definition of the Preliminary Reference Earth Model (PREM). This agreement is likely due to the fact that our gravimetric forward modelling results are significantly constrained by the CRUST2.0 model density structure and crust-thickness data, which are derived purely from seismic refraction methods.
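
The minimisation step can be illustrated with a small grid search: choose the density contrast at which the contrast-corrected gravity is least correlated with crustal thickness. The linear dependence of the stripping correction on the contrast, and all data below, are synthetic assumptions.

```python
import numpy as np

def best_contrast(gravity, thickness, strip_per_unit, contrasts=np.arange(300.0, 700.0)):
    """Density contrast (kg/m3) minimising |corr| between contrast-corrected
    gravity and crustal thickness; assumes the stripping correction scales
    linearly with the contrast via strip_per_unit."""
    return min(contrasts, key=lambda dc: abs(
        np.corrcoef(gravity - dc * strip_per_unit, thickness)[0, 1]))

# synthetic demo with a "true" contrast of 480 kg/m3
rng = np.random.default_rng(0)
thickness = rng.uniform(10.0, 60.0, 500)          # km
strip = 0.01 * thickness                          # made-up per-unit stripping correction
gravity = 480.0 * strip + rng.normal(0.0, 0.05, 500)
print(best_contrast(gravity, thickness, strip))   # ~480
```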

Tenzer, Robert; Hamayun; Novák, Pavel; Gladkikh, Vladislav; Vajda, Peter

2012-09-01

396

Kinetic temperature and density of the interstellar medium estimated from molecular line intensities  

SciTech Connect

A simple graphical method is proposed for determining the kinetic temperature and density of interstellar gas directly from intensity measurements of the rotational lines of molecules having an optical depth τ < 1. To illustrate the method, maps are prepared for the density distribution of two diffuse clouds, in Orion A and M17.

Everskaya, I.L.; Khersonskii, V.K.; Varshalovich, D.A.

1979-01-01

397

Interference by pigment in the estimation of microalgal biomass concentration by optical density  

Microsoft Academic Search

Optical density is used as a convenient indirect measurement of biomass concentration in microbial cell suspensions. Absorbance of light by a suspension can be related directly to cell density using a suitable standard curve. However, inaccuracies can be introduced when the pigment content of the cells changes. Under the culture conditions used, pigment content of the microalga Chlorella vulgaris varied

Melinda J. Griffiths; Clive Garcin; Robert P. van Hille; Susan T. L. Harrison

2011-01-01

398

Signal to noise ratio scaling and density limit estimates in longitudinal magnetic recording  

Microsoft Academic Search

A simplified general expression is given for SNR for digital magnetic recording for transition noise dominant systems. High density media are assumed in which the transition parameter scales with the in-plane grain diameter. At a fixed normalized code density, the SNR varies as the square of the bit spacing times the read track width divided by the grain diameter cubed.
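
Writing the stated scaling out explicitly, with B the bit spacing, W the read track width and D the grain diameter (symbols assumed here, not taken from the paper):

```latex
\mathrm{SNR} \propto \frac{B^{2}\,W}{D^{3}}
```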

H. N. Bertram; H. Zhou; R. Gustafson

1998-01-01

399

EO data supported population density estimation at fine resolution - test case rural Zimbabwe  

Microsoft Academic Search

The research carried out aimed at mapping Zimbabwean population density at sub-national scale. The study was conducted on a 185 × 185 km area at a grid cell size of 150 m. The surface modelling of population density was implemented by integrating 4 main variables: land use, settlements, road network, and slopes. During the modelling procedure, pixel weighting values were allocated according

Stefan Schneiderbauer; Daniele Ehrlich

400

Estimation of bone mineral density from the digital image of the calcanium bone  

Microsoft Academic Search

Osteoporosis is commonly seen in routine clinical practice; it affects post-menopausal women and the elderly of both sexes. In India, it is more prevalent due to the poor nutritional status, vitamin D deficiency, and lack of physical activity of the majority of the general population. Bone strength is determined by its quality and its density. Bone mineral density (BMD) can be

C Aroba Sahaya Ligesh; N Shanker; A Vijay; M Anburajan; C C Glueer

2011-01-01

401

Ischemia episode detection in ECG using kernel density estimation, support vector machine and feature selection  

PubMed Central

Background Myocardial ischemia can develop into more serious diseases. Detecting the ischemic syndrome in the electrocardiogram (ECG) early, accurately and automatically can prevent it from developing into a catastrophic disease. To this end, we propose a new method, which employs wavelets and simple feature selection. Methods For training and testing, the European ST-T database is used, which is comprised of 367 ischemic ST episodes in 90 records. We first remove baseline wandering, and detect time positions of QRS complexes by a method based on the discrete wavelet transform. Next, for each heart beat, we extract three features which can be used for differentiating ST episodes from normal: 1) the area between QRS offset and T-peak points, 2) the normalized and signed sum from QRS offset to effective zero voltage point, and 3) the slope from QRS onset to offset point. We average the feature values over five successive beats to reduce the effect of outliers. Finally we apply classifiers to those features. Results We evaluated the algorithm with kernel density estimation (KDE) and support vector machine (SVM) classifiers. Sensitivity and specificity for KDE were 0.939 and 0.912, respectively; the KDE classifier detects 349 of the total 367 ischemic ST episodes. Sensitivity and specificity of SVM were 0.941 and 0.923, respectively; the SVM classifier detects 355 ischemic ST episodes. Conclusions We proposed a new method for detecting ischemia in ECG. It contains signal processing techniques for removing baseline wandering and detecting time positions of QRS complexes by the discrete wavelet transform, and explicit feature extraction from the morphology of ECG waveforms. It was shown that the number of selected features was sufficient to discriminate ischemic ST episodes from normal ones. We also showed how the proposed KDE classifier can automatically select kernel bandwidths, meaning that the algorithm does not require any numerical values of the parameters to be supplied in advance. In the case of the SVM classifier, one has to select a single parameter.
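
The KDE side of the classifier can be sketched with automatic-bandwidth Gaussian KDEs, one per class, echoing the abstract's point that no kernel parameters need to be supplied; the three ST features are assumed to be extracted upstream, and the toy data below are placeholders.

```python
import numpy as np
from scipy.stats import gaussian_kde

def train_kde_classifier(X_ischemic, X_normal):
    """Two-class likelihood classifier with Gaussian KDEs; bandwidths are
    set automatically (Scott's rule). Inputs are (n_samples, n_features)."""
    kde_pos = gaussian_kde(X_ischemic.T)    # gaussian_kde expects (d, n)
    kde_neg = gaussian_kde(X_normal.T)
    def predict(X):
        return kde_pos(X.T) > kde_neg(X.T)  # True = ischemic ST episode
    return predict

# toy usage with 3 features per beat window
rng = np.random.default_rng(0)
pos, neg = rng.normal(1, 1, (200, 3)), rng.normal(0, 1, (200, 3))
clf = train_kde_classifier(pos, neg)
print(clf(np.array([[1.2, 0.9, 1.1]])))
```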

2012-01-01

402

A simple reproducible and time saving method of semi-automatic dendrite spine density estimation compared to manual spine counting.  

PubMed

Estimation of spine number and spine density by manual counting, under the assumption that all dendrite protrusions equal spines, is often used in studies on neuroplasticity occurring during health, brain diseases, and different experimental paradigms. Manual spine counting is, however, time consuming and biased by inter-observer variation. We accordingly present a quick, reproducible and simple non-stereological semi-automatic spine density estimation method based on the irregularity of the dendrite surface. Using the freeware ImageJ program, microphotographs of Golgi-impregnated hippocampal dendrites derived from a previously performed study on the impact of chronic restraint stress were binarized and skeletonized, and the skeleton endings, assumed to represent spine positions, were counted and the spine densities calculated. The results, based on 754 dendrite fragments, were compared to manual spine counting of the same dendrite fragments using the Bland-Altman method. The results from both methods were correlated (r=0.79, p<0.0001). The semi-automatic counting method gave a statistically higher (approx. 4%) spine density, but both counting methods showed similar significant differences between the groups in the CA1 area, and no differences between the groups in the CA3 area. In conclusion, the presented semi-automatic spine density estimation method yields a consistently higher spine density than manual counting, resulting in similar significance between groups. The proposed method may therefore be a reproducible, time-saving and useful non-stereological approach to spine counting in neuroplasticity studies requiring analysis of hundreds of dendrites. PMID:22595026
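
The binarize-skeletonize-count recipe can be sketched as below; note that a real implementation would also exclude the two endings of the dendrite shaft itself, which this sketch counts.

```python
import numpy as np
from scipy.ndimage import convolve
from skimage.morphology import skeletonize

def spine_density(binary_dendrite, dendrite_length_um):
    """Skeletonize a binarized dendrite and count skeleton endpoints
    (pixels with exactly one 8-connected skeleton neighbour) as putative
    spine positions; divide by dendrite length to get a density."""
    skel = skeletonize(binary_dendrite.astype(bool)).astype(np.uint8)
    neighbours = convolve(skel, np.ones((3, 3), np.uint8), mode="constant") - skel
    n_endpoints = int(np.logical_and(skel == 1, neighbours == 1).sum())
    return n_endpoints / dendrite_length_um   # spines per micrometre
```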

Orlowski, Dariusz; Bjarkam, Carsten R

2012-05-15

403

Wavelet Based Page Segmentation  

Microsoft Academic Search

Abstract: The process of page segmentation produces a description of the spatial extent and position of various components on the document page. In this paper, we present an approach for segmentation of a general document page image using wavelets. This method uses orthonormal wavelet decomposition to extract the attributes of the document spread over different scales. We have devised a scheme for the parameterisation of the font-size of

P. Gupta; N. Vohra; S. Chaudhury; S. Joshi

2000-01-01

404

Estimating the density of honeybee colonies across their natural range to fill the gap in pollinator decline censuses.  

PubMed

Although pollinator declines are a global biodiversity threat, the demography of the western honeybee (Apis mellifera) has not been considered by conservationists because it is biased by the activity of beekeepers. To fill this gap in pollinator decline censuses and to provide a broad picture of the current status of honeybees across their natural range, we used microsatellite genetic markers to estimate colony densities and genetic diversity at different locations in Europe, Africa, and central Asia that had different patterns of land use. Genetic diversity and colony densities were highest in South Africa and lowest in Northern Europe and were correlated with mean annual temperature. Confounding factors not related to climate, however, are also likely to influence genetic diversity and colony densities in honeybee populations. Land use showed a significantly negative influence over genetic diversity and the density of honeybee colonies over all sampling locations. In Europe honeybees sampled in nature reserves had genetic diversity and colony densities similar to those sampled in agricultural landscapes, which suggests that the former are not wild but may have come from managed hives. Other results also support this idea: putative wild bees were rare in our European samples, and the mean estimated density of honeybee colonies on the continent closely resembled the reported mean number of managed hives. Current densities of European honeybee populations are in the same range as those found in the adverse climatic conditions of the Kalahari and Saharan deserts, which suggests that beekeeping activities do not compensate for the loss of wild colonies. Our findings highlight the importance of reconsidering the conservation status of honeybees in Europe and of regarding beekeeping not only as a profitable business for producing honey, but also as an essential component of biodiversity conservation. PMID:19775273

Jaffé, Rodolfo; Dietemann, Vincent; Allsopp, Mike H; Costa, Cecilia; Crewe, Robin M; Dall'olio, Raffaele; DE LA Rúa, Pilar; El-Niweiri, Mogbel A A; Fries, Ingemar; Kezic, Nikola; Meusel, Michael S; Paxton, Robert J; Shaibi, Taher; Stolle, Eckart; Moritz, Robin F A

2009-09-22

405

Estimation of Heavy Metal Concentration in FBR Reprocessing Solvent Streams by Density Measurement.  

National Technical Information Service (NTIS)

The application of density measurement to heavy metal monitoring in the solvent phase is described, including practical experience gained during three fast reactor fuel reprocessing campaigns. An experimental algorithm relating heavy metal concentration a...

M. L. Brown; D. J. Savage

1986-01-01

406

Testing a detection dog to locate bumblebee colonies and estimate nest density  

Microsoft Academic Search

Bumblebee nests are difficult to find, hampering ecological studies. Effective population size of bumblebees is determined by nest density, so the ability to quantify nest density would greatly aid conservation work. We describe the training and testing of a dog to find bumblebee nests. The dog was trained by the British army, using B. terrestris nest material. Its efficacy in

Joe Waters; Steph O’Connor; Kirsty J. Park; Dave Goulson

2011-01-01

407

Population Indices Versus Correlated Density Estimates of Black-Footed Ferret Abundance  

Microsoft Academic Search

Estimating abundance of carnivore populations is problematic because individuals typically are elusive, nocturnal, and dispersed across the landscape. Rare or endangered carnivore populations are even more difficult to estimate because of small sample sizes. Considering behavioral ecology of the target species can drastically improve survey efficiency and effectiveness. Previously, abundance of the black-footed ferret (Mustela nigripes) was monitored by spotlighting

Martin B. Grenier; Steven W. Buskirk; Richard Anderson-Sprecher

2009-01-01

408

Correlation for the estimation of the density of fatty acid esters fuels and its implications. A proposed Biodiesel Cetane Index.  

PubMed

Biodiesel fuels (methyl or ethyl esters derived from vegetable oils and animal fats) are currently being used as a means to diminish crude oil dependency and to limit the greenhouse gas emissions of the transportation sector. However, their physical properties differ from those of traditional fossil fuels, making their effect on new, electronically controlled vehicles uncertain. Density is one of those properties, and its implications go even further. First, because governments are expected to boost the use of high-biodiesel-content blends, but biodiesel fuels are denser than fossil ones; in consequence, their blending proportion is indirectly restricted in order not to exceed the maximum density limit established in fuel quality standards. Second, because an accurate knowledge of biodiesel density permits the estimation of other properties, such as the Cetane Number, whose direct measurement is complex and presents low repeatability and low reproducibility. In this study we compile densities of methyl and ethyl esters published in the literature, and propose equations to convert them to 15 degrees C and to predict the biodiesel density based on its chain length and degree of unsaturation. Both expressions were validated for a wide range of commercial biodiesel fuels. Using the latter, we define a term called the Biodiesel Cetane Index, which predicts the Biodiesel Cetane Number with high accuracy. Finally, simple calculations prove that the introduction of high-biodiesel-content blends in the fuel market would force refineries to reduce the density of their fossil fuels. PMID:20599853
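
The fitting step can be illustrated generically: regress measured 15 °C densities on chain length and unsaturation descriptors by least squares. The functional form below is an assumption for illustration, not the published correlation or its coefficients.

```python
import numpy as np

def fit_density_correlation(n_carbons, double_bonds, rho_15):
    """Least-squares fit of a density correlation of the stated type
    (density at 15 C as a function of chain length and unsaturation).
    The form rho = a + b/n + c*db is an illustrative assumption, not the
    published equation, and no real coefficients are implied."""
    n = np.asarray(n_carbons, dtype=float)
    db = np.asarray(double_bonds, dtype=float)
    A = np.column_stack([np.ones_like(n), 1.0 / n, db])
    (a, b, c), *_ = np.linalg.lstsq(A, np.asarray(rho_15, dtype=float), rcond=None)
    return a, b, c
```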

Lapuerta, Magín; Rodríguez-Fernández, José; Armas, Octavio

2010-06-25

409

A H-infinity Fault Detection and Diagnosis Scheme for Discrete Nonlinear System Using Output Probability Density Estimation  

SciTech Connect

In this paper, an H-infinity fault detection and diagnosis (FDD) scheme for a class of discrete nonlinear system faults using output probability density estimation is presented. Unlike classical FDD problems, the measured output of the system is viewed as a stochastic process and its square root probability density function (PDF) is modeled with B-spline functions, which leads to a deterministic space-time dynamic model including nonlinearities and uncertainties. A weighted mean value is given as an integral function of the square root PDF along the space direction, which leads to a function of time only that can be used to construct the residual signal. Thus, the classical nonlinear filter approach can be used to detect and diagnose faults in the system. A feasible detection criterion is obtained first, and a new H-infinity adaptive fault diagnosis algorithm is further investigated to estimate the fault. A simulation example is given to demonstrate the effectiveness of the proposed approaches.

Zhang Yumin; Lum, Kai-Yew [Temasek Laboratories, National University of Singapore, Singapore 117508 (Singapore); Wang Qingguo [Depa. Electrical and Computer Engineering, National University of Singapore, Singapore 117576 (Singapore)

2009-03-05

410

Estimating seasonal density of blue sheep ( Pseudois nayaur ) in the Helan Mountain region using distance sampling methods  

Microsoft Academic Search

The monitoring of animal populations is necessary to conserve and manage rare or harvested species and to understand population change over several years. We used distance sampling methods to estimate the seasonal density of blue sheep in a 2,740 km2 area of the Helan Mountain region by walking along 32 transect lines from winter 2003 to autumn 2005. In all, 367–780
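
A textbook line-transect estimate with a half-normal detection function, the standard distance-sampling workhorse, can be sketched as follows; this is the generic estimator, not the authors' analysis.

```python
import numpy as np

def line_transect_density(perp_distances_km, total_line_km, cluster_size=1.0):
    """Conventional distance-sampling estimate with a half-normal detection
    function g(x) = exp(-x^2 / (2 sigma^2)). The MLE of sigma^2 from
    untruncated perpendicular distances is mean(x^2); the effective strip
    half-width is sigma * sqrt(pi/2)."""
    x = np.asarray(perp_distances_km, dtype=float)
    sigma = np.sqrt(np.mean(x ** 2))
    esw = sigma * np.sqrt(np.pi / 2.0)       # effective strip half-width
    n = len(x)
    return n * cluster_size / (2.0 * esw * total_line_km)  # animals per km^2
```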

Zhensheng Liu; Xiaoming Wang; Liwei Teng; Duoying Cui; Xinqing Li

2008-01-01

411

At-sea density and abundance estimates of the olive ridley turtle Lepidochelys olivacea in the eastern tropical Pacific  

Microsoft Academic Search

The first at-sea estimates of density and abundance of the olive ridley turtle Lepidochelys olivacea in the eastern tropical Pacific (ETP) were produced from shipboard line-transect data. Multi-ship surveys were conducted in 1992, 1998, 1999, 2000, 2003, and 2006 in the area defined by 5° N, 120° W, and 25° N and the coastline of Mexico and Central America.

Tomoharu Eguchi; Tim Gerrodette; Robert L. Pitman; Jeffrey A. Seminoff; Peter H. Dutton

2007-01-01

412

Spatially explicit capture–recapture methods to estimate minke whale density from data collected at bottom-mounted hydrophones  

Microsoft Academic Search

Estimation of cetacean abundance or density using visual methods can be cost-ineffective under many scenarios. Methods based on acoustic data have recently been proposed as an alternative, and could potentially be more effective for visually elusive species that produce loud sounds. Motivated by a dataset of minke whale (Balaenoptera acutorostrata) “boing” sounds detected at multiple hydrophones at the U.S. Navy’s

Tiago A. Marques; Len Thomas; Stephen W. Martin; David K. Mellinger; Susan Jarvis; Ronald P. Morrissey; Carroll-Anne Ciminello; Nancy DiMarzio

413

Estimating wild boar ( Sus scrofa ) abundance and density using capture–resights in Canton of Geneva, Switzerland  

Microsoft Academic Search

We estimated wild boar abundance and density using capture–resight methods in the western part of the Canton of Geneva (Switzerland) in the early summer from 2004 to 2006. Ear-tag numbers and transmitter frequencies enabled us to identify individuals during each of the counting sessions. We used resights generated by self-triggered camera traps as recaptures. Program Noremark provided Minta–Mangel and Bowden’s

C. Hebeisen; J. Fattebert; E. Baubet; C. Fischer

2008-01-01

414

Extent to which least-squares cross-validation minimises integrated square error in nonparametric density estimation  

Microsoft Academic Search

Let h0, ĥ0 and ĥc be the windows which minimise mean integrated square error, integrated square error and the least-squares cross-validatory criterion, respectively, for kernel density estimates. It is argued that ĥ0, not h0, should be the benchmark for comparing different data-driven approaches to the determination of window size. Asymptotic properties of ĥ0 − h0 and ĥc − ĥ0, and of differences between integrated
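
The least-squares cross-validatory criterion itself is easy to compute directly for a Gaussian kernel, as in this sketch; ĥc is the grid minimiser.

```python
import numpy as np

def lscv_bandwidth(x, grid=np.linspace(0.05, 1.0, 96)):
    """Least-squares cross-validation for a Gaussian-kernel density estimate:
    pick the window minimising CV(h) = int f_h^2 - (2/n) sum_i f_{h,-i}(X_i),
    the unbiased-risk estimate of ISE up to a constant. Direct O(n^2) sketch."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    d2 = (x[:, None] - x[None, :]) ** 2

    def phi(s2, h):  # Gaussian kernel with std h, evaluated at squared distances
        return np.exp(-s2 / (2.0 * h ** 2)) / (h * np.sqrt(2.0 * np.pi))

    def cv(h):
        int_f2 = phi(d2, np.sqrt(2.0) * h).sum() / n ** 2   # int f_h^2, closed form
        loo = (phi(d2, h).sum() - n * phi(0.0, h)) / (n * (n - 1))
        return int_f2 - 2.0 * loo

    return min(grid, key=cv)

print(lscv_bandwidth(np.random.default_rng(1).normal(size=300)))
```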

Peter Hall; James Stephen Marron

1987-01-01

415

Estimation of Leafmine Density of Liriomyza trifolii (Diptera: Agromyzidae) in Cherry Tomato Greenhouses using Fixed Precision Sequential Sampling Plans  

Microsoft Academic Search

This study was conducted to develop sequential sampling plans to estimate leafmine density by Liriomyza trifolii (Burgess) at three fixed-precision levels in commercial tomato greenhouses. The within-greenhouse spatial patterns of leafmines were aggregated. The slopes and intercepts of Taylor's power law did not differ between greenhouses and years. A fixed-precision level sampling plan was developed using the parameters of Taylor's
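
The fixed-precision logic follows from Taylor's power law (variance = a·mean^b): the required sample size for precision D (standard error over mean) is n = a·m^(b-2)/D². The coefficients below are placeholders; the paper fits them from field data.

```python
def required_sample_size(mean_density, a=2.5, b=1.4, precision=0.25):
    """Green-type fixed-precision sample size from Taylor's power law:
    with var = a * mean**b, solving D = SE/mean for n gives
    n = a * m**(b - 2) / D**2. Coefficients a, b are placeholders."""
    return a * mean_density ** (b - 2) / precision ** 2

print(required_sample_size(3.0))   # leafmines per plant, illustrative
```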

Doo Hyung Lee; Jung-Joon Park; Kijong Cho

2005-01-01

416

Estimation of probability of occurrence of F1 layer or L condition using tables and electron density profile models  

NASA Astrophysics Data System (ADS)

An algorithm is proposed for evaluation of the probability of occurrence of an F1 layer or L condition, based on tables. Observations independent of the tables database are used for comparison between the estimated probability of occurrence, the formulation used at present in IRI, and the occurrence actually observed. The importance of the inclusion of L condition in the electron density profile model is shown.

Scotto, C.

2011-12-01

417

On the scaling analyses of the flux pinning force density estimated for two types of MgB2 specimens  

NASA Astrophysics Data System (ADS)

Magnetic hysteresis loops have been measured for polycrystalline and powder MgB2 samples. The hysteresis loop width ΔM measured for the polycrystalline sample was less than one order of magnitude smaller than that measured for the powder sample. The magnetic field dependence of the critical current density estimated from ΔM is essentially divided into three regions. The irreversibility field Birr for each sample has been derived from so-called Kramer plots using the relevant magnetic field dependence. The scaling law of the reduced pinning force density was satisfactorily applied to the experimental results for each sample using the Birr thus determined. If Birr is instead defined as the field at which the critical current density falls to 10^6 A/m2, the scaling law is apparently applicable to the polycrystalline sample but not to the powder sample. This discrepancy may arise because the Birr values obtained from the two definitions belong to different flux regimes.

Matsumoto, Y.; Tanaka, H.; Nishida, A.; Akune, T.; Sakamoto, N.; Youssef, Aaa

2012-12-01

418

Estimating the population density of the Asian tapir (Tapirus indicus) in a selectively logged forest in Peninsular Malaysia.  

PubMed

The endangered Asian tapir (Tapirus indicus) is threatened by large-scale habitat loss, forest fragmentation and increased hunting pressure. Conservation planning for this species, however, is hampered by a severe paucity of information on its ecology and population status. We present the first Asian tapir population density estimate from a camera trapping study targeting tigers in a selectively logged forest within Peninsular Malaysia, using a spatially explicit capture-recapture maximum likelihood based framework. With a trap effort of 2496 nights, 17 individuals were identified, corresponding to a density (standard error) estimate of 9.49 (2.55) adult tapirs/100 km2. Although our results include several caveats, we believe that our density estimate still serves as an important baseline to facilitate the monitoring of tapir population trends in Peninsular Malaysia. Our study also highlights the potential of extracting vital ecological and population information for other cryptic individually identifiable animals from tiger-centric studies, especially with the use of a spatially explicit capture-recapture maximum likelihood based framework. PMID:23253368

Rayan, D Mark; Mohamad, Shariff Wan; Dorward, Leejiah; Aziz, Sheema Abdul; Clements, Gopalasamy Reuben; Christopher, Wong Chai Thiam; Traeholt, Carl; Magintan, David

2012-12-01

419

Estimating the functional form for the density dependence from life history data.  

PubMed

Two contrasting approaches to the analysis of population dynamics are currently popular: demographic approaches where the associations between demographic rates and statistics summarizing the population dynamics are identified; and time series approaches where the associations between population dynamics, population density, and environmental covariates are investigated. In this paper, we develop an approach to combine these methods and apply it to detailed data from Soay sheep (Ovis aries). We examine how density dependence and climate contribute to fluctuations in population size via age- and sex-specific demographic rates, and how fluctuations in demographic structure influence population dynamics. Density dependence contributes most, followed by climatic variation, age structure fluctuations and interactions between density and climate. We then simplify the density-dependent, stochastic, age-structured demographic model and derive a new phenomenological time series which captures the dynamics better than previously selected functions. The simple method we develop has potential to provide substantial insight into the relative contributions of population and individual-level processes to the dynamics of populations in stochastic environments. PMID:18589530

Coulson, T; Ezard, T H G; Pelletier, F; Tavecchia, G; Stenseth, N C; Childs, D Z; Pilkington, J G; Pemberton, J M; Kruuk, L E B; Clutton-Brock, T H; Crawley, M J

2008-06-01

420

Modeled Salt Density for Nuclear Material Estimation in the Treatment of Spent Nuclear Fuel  

SciTech Connect

Spent metallic nuclear fuel is being treated in a pyrometallurgical process that includes electrorefining the uranium metal in molten eutectic LiCl-KCl as the supporting electrolyte. We report a model for determining the density of the molten salt. Inventory operations account for the net mass of salt and for the mass of actinides present. The molten salt density was needed but difficult to measure, so it was decided to model it for the initial treatment operations. The model assumes, as a starting point, that volumes are additive for an ideal molten salt solution; a correction factor for the lanthanides and actinides was subsequently developed. After applying the correction factor, the percent difference between the net salt mass in the electrorefiner and the resulting modeled salt mass decreased from more than 4.0% to approximately 0.1%. As a result, there is no need to measure the salt density at 500 °C for inventory operations; the model for the salt density is found to be accurate.
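
The ideal-solution starting point is simply additive volumes; here is a sketch with a single placeholder correction factor standing in for the lanthanide/actinide adjustment.

```python
def salt_density(masses, densities, correction=1.0):
    """Ideal-solution density from additive volumes:
    rho = sum(m_i) / sum(m_i / rho_i). The `correction` factor is a
    placeholder for the empirical lanthanide/actinide adjustment the
    abstract describes; its actual form is not given here."""
    total_mass = sum(masses)
    total_volume = sum(m / r for m, r in zip(masses, densities))
    return correction * total_mass / total_volume
```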

DeeEarl Vaden; Robert. D. Mariani

2010-09-01

421

Testing and Estimating Shape-Constrained Nonparametric Density and Regression in the Presence of Measurement Error  

PubMed Central

In many applications we can expect that, or are interested to know if, a density function or a regression curve satisfies some specific shape constraints. For example, when the explanatory variable, X, represents the value taken by a treatment or dosage, the conditional mean of the response, Y , is often anticipated to be a monotone function of X. Indeed, if this regression mean is not monotone (in the appropriate direction) then the medical or commercial value of the treatment is likely to be significantly curtailed, at least for values of X that lie beyond the point at which monotonicity fails. In the case of a density, common shape constraints include log-concavity and unimodality. If we can correctly guess the shape of a curve, then nonparametric estimators can be improved by taking this information into account. Addressing such problems requires a method for testing the hypothesis that the curve of interest satisfies a shape constraint, and, if the conclusion of the test is positive, a technique for estimating the curve subject to the constraint. Nonparametric methodology for solving these problems already exists, but only in cases where the covariates are observed precisely. However in many problems, data can only be observed with measurement errors, and the methods employed in the error-free case typically do not carry over to this error context. In this paper we develop a novel approach to hypothesis testing and function estimation under shape constraints, which is valid in the context of measurement errors. Our method is based on tilting an estimator of the density or the regression mean until it satisfies the shape constraint, and we take as our test statistic the distance through which it is tilted. Bootstrap methods are used to calibrate the test. The constrained curve estimators that we develop are also based on tilting, and in that context our work has points of contact with methodology in the error-free case.

Carroll, Raymond J.; Delaigle, Aurore; Hall, Peter

2011-01-01

422

Effects of body position on lung density estimated from EIT data  

NASA Astrophysics Data System (ADS)

Normal subjects took the sitting, supine, prone, right lateral and left lateral positions during the measurement procedure. One-minute epochs of EIT data were collected at the levels of the 3rd, 4th, 5th and 6th intercostal spaces in each position during normal tidal breathing. Lung density was then determined from the EIT data using the method proposed by Brown. Lung density at the electrode level of the 6th intercostal space differed from that at almost all other levels in both male and female subjects, and lung density at the electrode levels of the 4th and 5th intercostal spaces in male subjects did not depend upon position.

Noshiro, Makoto; Ebihara, Kei; Sato, Ena; Nebuya, Satoru; Brown, Brian H.

2010-04-01

423

Sequential sampling plans for estimating European corn borer (Lepidoptera: Crambidae) and corn earworm (Lepidoptera: Noctuidae) larval density in sweet corn ears  

Microsoft Academic Search

We developed a flexible fixed-precision sequential sampling plan for estimating the density of European corn borer, Ostrinia nubilalis Hübner and corn earworm, Helicoverpa zea (Boddie), larvae, using infestation data collected from 1994 to 2000. The purpose of each sampling plan was to provide statistically sound estimates of larval densities for each pest in sweet corn ears, near harvest, with minimal

Patrick K. O’Rourke; W. D. Hutchison

2003-01-01

424

Estimating Brownian motion dispersal rate, longevity and population density from spatially explicit mark-recapture data on tropical butterflies.  

PubMed

1. We develop a Bayesian method for analysing mark-recapture data in continuous habitat using a model in which individuals' movement paths are Brownian motions, life spans are exponentially distributed and capture events occur at given instants in time if individuals are within a certain attractive distance of the traps. 2. The joint posterior distribution of the dispersal rate, longevity, trap attraction distances and a number of latent variables representing the unobserved movement paths and times of death of all individuals is computed using Gibbs sampling. 3. An estimate of absolute local population density is obtained simply by dividing the Poisson counts of individuals captured at given points in time by the estimated total attraction area of all traps. Our approach for estimating population density in continuous habitat avoids the need to define an arbitrary effective trapping area that characterized previous mark-recapture methods in continuous habitat. 4. We applied our method to estimate spatial demography parameters in nine species of neotropical butterflies. Path analysis of interspecific variation in demographic parameters and mean wing length revealed a simple network of strong causation. Larger wing length increases dispersal rate, which in turn increases trap attraction distance. However, higher dispersal rate also decreases longevity, thus explaining the surprising observation of a negative correlation between wing length and longevity. PMID:22320218

Tufto, Jarle; Lande, Russell; Ringsby, Thor-Harald; Engen, Steinar; Saether, Bernt-Erik; Walla, Thomas R; DeVries, Philip J

2012-02-09

425

Bivariate Modeling of Wind Speed and Air Density Distribution for Long-Term Wind Energy Estimation  

Microsoft Academic Search

In this paper, we investigate the feasibility of bivariate modeling of wind speed and air density based on the data from two observation sites in North Dakota and Colorado. For each site, we first obtain univariate statistical distributions for the two parameters, respectively. Excellent fitting can be achieved for wind speed for both sites using conventional univariate probability distribution functions,

Xiuli Qu; Jing Shi

2010-01-01

426

Volcanic explosion clouds - Density, temperature, and particle content estimates from cloud motion  

Microsoft Academic Search

Photographic records of 10 vulcanian eruption clouds produced during the 1978 eruption of Fuego Volcano in Guatemala have been analyzed to determine cloud velocity and acceleration at successive stages of expansion. Cloud motion is controlled by air drag (dominant during early, high-speed motion) and buoyancy (dominant during late motion when the cloud is convecting slowly). Cloud densities in the range

Lionel Wilson; Stephen Self

1980-01-01

427

Solar Flux Estimated from Electron Density and Ion Composition Measurements in the Lower Thermosphere.  

National Technical Information Service (NTIS)

Appropriate models of solar flux in X-rays and Extreme Ultra Violet (XEUV) bands are presented in the light of the present status of ion chemistry in the region 90 to 130 km and reliable measurements of reaction rates, electron density, and ion compositio...

P. Chakrabarty; D. K. Chakrabarty; A. K. Saha

1977-01-01

428

Hydrological parameter estimations from a conservative tracer test with variable-density effects at the Boise Hydrogeophysical Research Site  

SciTech Connect

Reliable predictions of groundwater flow and solute transport require an estimation of the detailed distribution of the parameters (e.g., hydraulic conductivity, effective porosity) controlling these processes. However, such parameters are difficult to estimate because of the inaccessibility and complexity of the subsurface. In this regard, developments in parameter estimation techniques and investigations of field experiments are still challenging and necessary to improve our understanding and the prediction of hydrological processes. Here we analyze a conservative tracer test conducted at the Boise Hydrogeophysical Research Site in 2001 in a heterogeneous unconfined fluvial aquifer. Some relevant characteristics of this test include: variable-density (sinking) effects because of the injection concentration of the bromide tracer, the relatively small size of the experiment, and the availability of various sources of geophysical and hydrological information. The information contained in this experiment is evaluated through several parameter estimation approaches, including a grid-search-based strategy, stochastic simulation of hydrological property distributions, and deterministic inversion using regularization and pilot-point techniques. Doing this allows us to investigate hydraulic conductivity and effective porosity distributions and to compare the effects of assumptions from several methods and parameterizations. Our results provide new insights into the understanding of variable-density transport processes and the hydrological relevance of incorporating various sources of information in parameter estimation approaches. Among others, the variable-density effect and the effective porosity distribution, as well as their coupling with the hydraulic conductivity structure, are seen to be significant in the transport process. The results also show that assumed prior information can strongly influence the estimated distributions of hydrological properties.

Dafflon, Baptisite; Barrash, Warren; Cardiff, Michael A.; Johnson, Timothy C.

2011-12-15

429

Effects of time-series length and gauge network density on rainfall climatology estimates in Latin America  

NASA Astrophysics Data System (ADS)

Despite recent advances in the development of satellite sensors for monitoring precipitation at high spatial and temporal resolutions, the assessment of rainfall climatology still relies strongly on ground-station measurements. The Global Historical Climatology Network (GHCN) is one of the most popular station databases available to the international community. Nevertheless, the spatial distribution of these stations is not always homogeneous, and the record length varies greatly from station to station. This study aimed to evaluate how the number of years recorded at the GHCN stations and the density of the network affect the uncertainties of annual rainfall climatology estimates in Latin America. The method was divided into two phases. In the first phase, Monte Carlo simulations were performed to evaluate how the number of samples and the characteristics of the rainfall regime affect estimates of annual average rainfall. The simulations were performed using gamma distributions with pre-defined parameters, which generated synthetic annual precipitation records. The average and dispersion of the synthetic records were then estimated through the L-moments approach and compared with the original probability distribution used to produce the samples. The number of records (n) used in the simulation varied from 10 to 150, reproducing the range of numbers of years typically found in meteorological stations. A power function of the form RMSE = f(n) = c·n^a, whose coefficients were defined as a function of the statistical dispersion of rainfall, was fitted to the errors. In the second phase of the assessment, the results of the simulations were extrapolated to real records obtained by the GHCN over Latin America, creating estimates of the errors associated with the number of records and the rainfall characteristics at each station. To generate a spatially explicit representation of the uncertainties, the errors at each station were interpolated using the inverse distance weighting method. Furthermore, the effect of the density of stations was also considered by penalizing the interpolated errors proportionally to the station density at each site. The results showed a large discrepancy in rainfall estimate uncertainties among Latin American countries. The uncertainties varied from less than 2% in the southeastern region of Brazil to around 40% in regions of southern Peru with low station density and short time series. The results therefore highlight the importance of international cooperation for climate data sharing among Latin American countries. In this context, projects aimed at improving scientific cooperation and fostering information-based policy, such as EUROCLIMA and RALCEA, funded by the European Commission, offer an important opportunity for reducing uncertainties in estimates of climate variables in Latin America.
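The first, Monte Carlo phase of the method can be sketched as follows; for brevity the sample mean stands in for the L-moments estimator, and the gamma parameters are hypothetical:

    import numpy as np

    rng = np.random.default_rng(42)
    shape, scale = 4.0, 300.0        # hypothetical gamma parameters (mm/yr)
    true_mean = shape * scale

    ns = np.arange(10, 151, 10)      # record lengths, as in the study design
    rmse = []
    for n in ns:
        # 2000 synthetic stations, each with n years of annual rainfall.
        samples = rng.gamma(shape, scale, size=(2000, n))
        est = samples.mean(axis=1)   # sample mean in place of L-moments
        rmse.append(np.sqrt(np.mean((est - true_mean) ** 2)))
    rmse = np.array(rmse)

    # Fit RMSE = c * n**a by least squares in log-log space.
    a, log_c = np.polyfit(np.log(ns), np.log(rmse), 1)
    print(f"RMSE ~ {np.exp(log_c):.1f} * n^{a:.2f}")

For the mean of independent draws the fitted exponent should come out near a = -0.5, consistent with the power-law error model used in the study.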

Maeda, E.; Arevalo, J.; Carmona-Moreno, C.

2012-04-01

430

Assessment of census techniques for estimating density and biomass of gibbons (Primates: Hylobatidae)

Microsoft Academic Search

Censuses were conducted to establish densities and biomass of the Müller’s gibbon Hylobates muelleri in two forest areas, i.e. Kayan Mentarang National Park [KMNP] and Sungai Wain Protection Forest [SWPF], East Kalimantan, Indonesia. The data were collected using three different techniques, i.e., range mapping, repeated line transects, and fixed point counts. First, range mapping within an area of 3.8 km2

V. Nijman; S. B. J. Menken

2005-01-01

431

Estimates of lightning ground flash density in Australia and its relationship to thunder-days

Microsoft Academic Search

A method of deriving lightning ground flash density from CIGRE lightning flash counter registrations, based on the detection efficiency of the instrument and independent of the latitudinal variation of the cloud flash-to-ground flash ratio, is presented. Using this method, the annual mean ground flash densities, Ng, over a period of up to 22 years were recalculated from the counter registrations for 17 selected Australian sites. The results

Y. Kuleshov; E. R. Jayaratne

2004-01-01

432

Estimation of electron density of ionospheric plasma using wave, impedance and topside sounder data  

Microsoft Academic Search

Instability analysis of the dispersion relation of electron plasma indicates that enhanced emission in the frequency band (fp, fu), where fu = sqrt(fp·fp + fc·fc), can be easily detected in wave spectra of space plasmas. Such emission, in passive mode spectra, can be used to determine the plasma density of the major cold plasma component and points out the existence of a minor energetic component. Contrary to such
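The band edge fu in the reconstructed formula is the upper hybrid frequency, so once the cyclotron frequency fc is known, the plasma frequency fp follows, and from it the electron density via the standard relation fp[Hz] ≈ 8980·sqrt(ne[cm^-3]). A minimal sketch of that conversion (standard plasma relations, not the authors' full procedure):

    import math

    def electron_density(f_u_hz, f_c_hz):
        """Electron density (cm^-3) from the upper hybrid and cyclotron
        frequencies, via fu^2 = fp^2 + fc^2 and fp = 8980*sqrt(n_e) Hz."""
        f_p = math.sqrt(f_u_hz ** 2 - f_c_hz ** 2)   # plasma frequency (Hz)
        return (f_p / 8980.0) ** 2

    # Example: fu = 1.5 MHz, fc = 0.7 MHz (hypothetical topside values).
    print(f"{electron_density(1.5e6, 0.7e6):.3e} cm^-3")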

A. Kiraga; Z. Klos; H. Rothkaehl; Z. Zbyszynski; V. N. Oraevsky; S. A. Pulinets; I. S. Prutenski

1997-01-01

433

Storage density estimation for the phase-encoding and shift multiplexing holographic optical correlator  

NASA Astrophysics Data System (ADS)

The holographic optical correlator (HOC) is applicable wherever an instant search throughout a huge database is demanded. The primary advantages of the HOC are its inherent parallel processing ability and large storage capacity. The HOC's search speed is proportional to the storage density. This paper proposes a phase-encoding method in the object beam to increase the storage density. A random phase plate (RPP) is used to encode the phase of the object beam before the data pages are uploaded onto it. By shifting the RPP at a designed interval, the object beam is modulated into a beam orthogonal to the previous one, and a new group of the database can be stored. Experimental results verify the proposed method. The maximum number of data pages stored with the RPP at a fixed position can be as large as 7,500, and the crosstalk among different groups of the database is unnoticeable. The increase in the storage density of the HOC depends on the number of orthogonal positions obtainable from different portions of the same RPP.
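The separability of groups stored at different RPP positions rests on the near-orthogonality of independent random phase masks, whose normalized cross-correlation falls off as 1/sqrt(N) with the number of pixels. A small numerical illustration (assumed mask statistics, not the experimental setup):

    import numpy as np

    rng = np.random.default_rng(1)
    N = 256 * 256                    # pixels in the phase plate aperture

    # Two independent uniformly random phase masks, standing in for the
    # fields produced by different portions of a random phase plate.
    u1 = np.exp(1j * rng.uniform(0, 2 * np.pi, N))
    u2 = np.exp(1j * rng.uniform(0, 2 * np.pi, N))

    # Normalized correlations: self vs. cross.
    self_corr = abs(np.vdot(u1, u1)) / N    # exactly 1
    cross_corr = abs(np.vdot(u1, u2)) / N   # ~ 1/sqrt(N), about 0.004 here
    print(self_corr, cross_corr)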

Zheng, Tianxiang; Cao, Liangcai; He, Qingsheng; Jin, Guofan

2013-09-01

434

Multilayer Perceptron versus Gaussian Mixture for Class Probability Estimation with Discontinuous Underlying Prior Densities  

Microsoft Academic Search

One of the most widely used intelligent techniques for classification is the neural network. In real classification applications the patterns of different classes often overlap. In this situation the most appropriate classifier is one whose outputs represent the class conditional probabilities. In traditional statistics these probabilities are calculated in two steps: first the underlying prior probabilities are estimated and then
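The two routes to class posteriors can be sketched side by side: fit class-conditional densities and apply Bayes' rule, or train a network whose softmax outputs approximate the posteriors directly. A minimal sketch with synthetic overlapping classes (all data and settings hypothetical):

    import numpy as np
    from sklearn.mixture import GaussianMixture
    from sklearn.neural_network import MLPClassifier

    # Hypothetical two-class data with overlapping classes.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0.0, 1.0, (500, 2)),
                   rng.normal(1.5, 1.0, (500, 2))])
    y = np.repeat([0, 1], 500)

    # Route 1: fit p(x|class) with a Gaussian mixture, apply Bayes' rule.
    gmms = [GaussianMixture(n_components=2, random_state=0).fit(X[y == k])
            for k in (0, 1)]

    def gmm_posterior(x):
        lik = np.exp(np.column_stack([g.score_samples(x) for g in gmms]))
        return lik / lik.sum(axis=1, keepdims=True)  # equal priors assumed

    # Route 2: an MLP whose softmax outputs approximate the same posteriors.
    mlp = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000,
                        random_state=0).fit(X, y)

    print(gmm_posterior(X[:3]))
    print(mlp.predict_proba(X[:3]))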

Ioan Lemeni

2009-01-01

435

Consequences of Ignoring Guessing when Estimating the Latent Density in Item Response Theory  

ERIC Educational Resources Information Center

In Ramsay-curve item response theory (RC-IRT), the latent variable distribution is estimated simultaneously with the item parameters. In extant Monte Carlo evaluations of RC-IRT, the item response function (IRF) used to fit the data is the same one used to generate the data. The present simulation study examines RC-IRT when the IRF is imperfectly…

Woods, Carol M.

2008-01-01

436

Constrained Kalman filtering via density function truncation for turbofan engine health estimation  

Microsoft Academic Search

Kalman filters are often used to estimate the state variables of a dynamic system. However, in the application of Kalman filters some known signal information is often either ignored or dealt with heuristically. For instance, state variable constraints (which may be based on physical considerations) are often neglected because they do not fit easily into the structure of the Kalman
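For a scalar interval constraint, the density-truncation idea can be sketched with a truncated normal: after the unconstrained measurement update, replace the Gaussian posterior by its truncation to the feasible interval and carry the truncated mean and variance forward. This is a simplified sketch of the idea, not the paper's full multivariate sequential scheme:

    from scipy.stats import truncnorm

    def truncate_estimate(mu, var, lo, hi):
        """Mean and variance of a N(mu, var) posterior truncated to [lo, hi]."""
        sigma = var ** 0.5
        a, b = (lo - mu) / sigma, (hi - mu) / sigma   # standardized bounds
        dist = truncnorm(a, b, loc=mu, scale=sigma)
        return dist.mean(), dist.var()

    # Example: the filter reports mu = -0.3, var = 0.04, but a physical
    # consideration requires the state to satisfy x >= 0.
    print(truncate_estimate(-0.3, 0.04, 0.0, float("inf")))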

Daniel J. Simon; Donald L. Simon

2010-01-01

437

Estimation of Parent Specific DNA Copy Number in Tumors using High-Density Genotyping Arrays  

Microsoft Academic Search

Chromosomal gains and losses comprise an important type of genetic change in tumors, and can now be assayed using microarray hybridization-based experiments. Most current statistical models for DNA copy number estimate total copy number and do not distinguish between the underlying quantities of the two inherited chromosomes. This latter information, sometimes called parent specific copy number, is important for identifying

Hao Chen; Haipeng Xing; Nancy R. Zhang; Scott Markel

2011-01-01

438

Pedotransfer functions for estimating plant available water and bulk density in Swedish agricultural soils  

Microsoft Academic Search

Pedotransfer functions (PTFs) to estimate plant available water were developed from a database of arable soils in Sweden. The PTFs were developed to fulfil the minimum requirements of any agro-hydrological application, i.e., soil water content at wilting point (θwp) and field capacity (θfc), from information that is frequently available from soil surveys, such as texture and soil organic carbon content
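A PTF of this kind is essentially a regression from survey variables (texture fractions, organic carbon) to water-retention points, with plant available water obtained as θfc − θwp. A minimal least-squares sketch; the linear form, coefficients, and data below are all hypothetical, not the published PTFs:

    import numpy as np

    def fit_ptf(clay, sand, soc, theta):
        """Least-squares linear PTF: theta ~ b0 + b1*clay + b2*sand + b3*soc."""
        X = np.column_stack([np.ones_like(clay), clay, sand, soc])
        beta, *_ = np.linalg.lstsq(X, theta, rcond=None)
        return beta

    # Hypothetical survey data (% clay, % sand, % soil organic carbon).
    rng = np.random.default_rng(3)
    clay = rng.uniform(5, 60, 200)
    sand = rng.uniform(10, 80, 200)
    soc = rng.uniform(0.5, 4.0, 200)
    theta_wp = 0.005 * clay + 0.01 * soc + rng.normal(0, 0.01, 200)
    theta_fc = 0.15 + 0.004 * clay + 0.02 * soc + rng.normal(0, 0.01, 200)

    beta_wp = fit_ptf(clay, sand, soc, theta_wp)
    beta_fc = fit_ptf(clay, sand, soc, theta_fc)
    # Plant available water = theta_fc - theta_wp at any new (clay, sand, soc).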

T. Kätterer; O. Andrén; P. E. Jansson

2006-01-01

439

Optimal estimation of power spectral density by means of a time-varying autoregressive approach  

Microsoft Academic Search

A new time-varying autoregressive model has been proposed as a tool for time-frequency analysis of nonstationary time series. The method allows good estimation of both the frequency and the amplitude of the spectrum and offers a new point of view on the evaluation of the parametric approach when applied to spectral analysis. The good performance is related to the adaptive choice
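A simple stand-in for the approach is to fit an AR model by least squares in each sliding window and evaluate its parametric spectrum; the paper's recursive estimator with adaptive choices is more elaborate. A sketch, with all settings hypothetical:

    import numpy as np

    def tvar_psd(x, order=4, win=128, hop=32, nfreq=256, fs=1.0):
        """Time-varying AR PSD: fit an AR(order) model by least squares in
        each sliding window and evaluate its parametric spectrum."""
        freqs = np.linspace(0, fs / 2, nfreq)
        z = np.exp(-2j * np.pi * freqs / fs)
        psds = []
        for start in range(0, len(x) - win, hop):
            seg = x[start:start + win]
            # Regression: seg[n] = sum_k a_k * seg[n-k] + e[n], n >= order.
            A = np.column_stack([seg[order - k - 1:win - k - 1]
                                 for k in range(order)])
            b = seg[order:]
            a, *_ = np.linalg.lstsq(A, b, rcond=None)
            sigma2 = np.mean((b - A @ a) ** 2)
            # Parametric spectrum: sigma^2 / |1 - sum_k a_k e^{-j w k}|^2.
            denom = 1 - sum(a[k] * z ** (k + 1) for k in range(order))
            psds.append(sigma2 / np.abs(denom) ** 2)
        return freqs, np.array(psds)   # one spectrum per window

    # Example: chirp-like nonstationary signal (hypothetical test input).
    t = np.arange(4096)
    x = np.sin(2 * np.pi * (0.05 + 0.05 * t / 4096) * t)
    freqs, psds = tvar_psd(x)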

Silvia Conforto; Tommaso D'Alessio

1999-01-01

440

ESTIMATES OF DENSITIES AND FILLING FACTORS FROM A COOLING TIME ANALYSIS OF SOLAR MICROFLARES OBSERVED WITH RHESSI  

SciTech Connect

We use more than 4500 microflares from the RHESSI microflare data set to estimate electron densities and volumetric filling factors of microflare loops using a cooling time analysis. We show that if the filling factor is assumed to be unity, the calculated conductive cooling times are much shorter than the observed flare decay times, which in turn are much shorter than the calculated radiative cooling times. This is likely unphysical, but the contradiction can be resolved by assuming that the radiative and conductive cooling times are comparable, which is valid when the flare loop temperature is a maximum and when external heating can be ignored. We find that resultant radiative and conductive cooling times are comparable to observed decay times, which has been used as an assumption in some previous studies. The inferred electron densities have a mean value of 10^11.6 cm^-3 and filling factors have a mean of 10^-3.7. The filling factors are lower and densities are higher than previous estimates for large flares, but are similar to those found for two microflares by Moore et al.
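Equating the standard Spitzer conductive cooling time, tau_c = 3 n kB L^2 / (kappa0 T^{5/2}), with the optically thin radiative cooling time, tau_r = 3 kB T / (n Lambda(T)), gives n = sqrt(kappa0 T^{7/2} / (L^2 Lambda(T))), with kB cancelling. A sketch with standard cgs coefficients; the paper's exact loss function and geometry factors may differ:

    import math

    KAPPA0 = 1.0e-6   # Spitzer conductivity coefficient (erg s^-1 cm^-1 K^-7/2)

    def density_from_cooling_balance(T, L, Lam=1e-22):
        """Electron density (cm^-3) from tau_cond = tau_rad:
        n = sqrt(kappa0 * T**3.5 / (L**2 * Lambda)).
        T in K, loop half-length L in cm, Lambda in erg cm^3 s^-1."""
        return math.sqrt(KAPPA0 * T ** 3.5 / (L ** 2 * Lam))

    # Example: T = 1.3e7 K, L = 1e9 cm (hypothetical microflare values).
    n = density_from_cooling_balance(1.3e7, 1e9)
    print(f"n ~ 10^{math.log10(n):.1f} cm^-3")

For these assumed inputs the sketch yields n ~ 10^11.4 cm^-3, the same order as the mean density reported above.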

Baylor, R. N.; Cassak, P. A. [Department of Physics, West Virginia University, Morgantown, WV 26506 (United States); Christe, S. [NASA Goddard Space Flight Center, Greenbelt, MD 20771 (United States); Hannah, I. G.; Hudson, H. S. [School of Physics and Astronomy, University of Glasgow, Glasgow, G12 8QQ (United Kingdom); Krucker, Saem; Lin, R. P. [Space Sciences Laboratory, University of California, Berkeley, CA 94720-7450 (United States); Mullan, D. J.; Shay, M. A., E-mail: rbaylor@mix.wvu.edu [Department of Physics and Astronomy and Bartol Research Institute, University of Delaware, 217 Sharp Laboratory, Newark, DE 19716 (United States)

2011-07-20
