Wavelet-based density estimation for noise reduction in plasma simulations using particles
Romain … (École Normale Supérieure, Paris, France); Chen, Guangye (Oak Ridge National Laboratory, Oak Ridge, Tennessee, USA)
Particle-based numerical methods are routinely used in plasma physics calculations [1, 2]. In many cases…
Wavelet-based Joint Estimation and Encoding of Depth-Image-based Representations for Free-Viewpoint Rendering
Do, Minh N.
Keywords: image-based rendering, 3D-TV, Depth-Image-Based Representation (DIBR), depth estimation, joint coding. IEEE. Abstract: We propose a wavelet-based codec for the static Depth-Image-Based Representation (DIBR)…
Value-at-risk estimation with wavelet-based extreme value theory: Evidence from emerging markets
NASA Astrophysics Data System (ADS)
Cifter, Atilla
2011-06-01
This paper introduces wavelet-based extreme value theory (EVT) for univariate value-at-risk estimation. Wavelets and EVT are combined for volatility forecasting to estimate a hybrid model. In the first stage, wavelets are used as a threshold in generalized Pareto distribution, and in the second stage, EVT is applied with a wavelet-based threshold. This new model is applied to two major emerging stock markets: the Istanbul Stock Exchange (ISE) and the Budapest Stock Exchange (BUX). The relative performance of wavelet-based EVT is benchmarked against the Riskmetrics-EWMA, ARMA-GARCH, generalized Pareto distribution, and conditional generalized Pareto distribution models. The empirical results show that the wavelet-based extreme value theory increases predictive performance of financial forecasting according to number of violations and tail-loss tests. The superior forecasting performance of the wavelet-based EVT model is also consistent with Basel II requirements, and this new model can be used by financial institutions as well.
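The second-stage EVT step above rests on the standard peaks-over-threshold quantile formula for the generalized Pareto distribution (GPD). As a rough illustration only, and not the authors' wavelet-thresholded variant, the sketch below fits a GPD to exceedances by the method of moments and evaluates VaR; the function name `gpd_var` and the moment-based fit are assumptions for this example.

```python
import numpy as np

def gpd_var(losses, u, q=0.99):
    """Estimate VaR_q via peaks-over-threshold: fit a GPD to the
    exceedances over threshold u by the method of moments, then
    apply the GPD tail-quantile formula."""
    losses = np.asarray(losses, float)
    exc = losses[losses > u] - u          # exceedances over the threshold
    n, nu = len(losses), len(exc)
    m, v = exc.mean(), exc.var(ddof=1)
    xi = 0.5 * (1.0 - m * m / v)          # shape (method of moments)
    sigma = 0.5 * m * (m * m / v + 1.0)   # scale (method of moments)
    # VaR_q = u + (sigma/xi) * [ ((n/nu)(1-q))^(-xi) - 1 ]
    return u + (sigma / xi) * (((n / nu) * (1.0 - q)) ** (-xi) - 1.0)
```

With heavy-tailed simulated losses, the GPD-based VaR should track the empirical high quantile while using only the tail observations.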
Wavelet-based image estimation: an empirical Bayes approach using Jeffreys' noninformative prior
Mário A. T. Figueiredo; Robert D. Nowak
2001-01-01
The sparseness and decorrelation properties of the discrete wavelet transform have been exploited to develop powerful denoising methods. However, most of these methods have free parameters which have to be adjusted or estimated. In this paper, we propose a wavelet-based denoising technique without any free parameters; it is, in this sense, a "universal" method.
Wavelet-based texture retrieval using generalized Gaussian density and Kullback-Leibler distance.
Do, Minh N; Vetterli, Martin
2002-01-01
We present a statistical view of the texture retrieval problem by combining the two related tasks, namely feature extraction (FE) and similarity measurement (SM), into a joint modeling and classification scheme. We show that using a consistent estimator of texture model parameters for the FE step, followed by computing the Kullback-Leibler distance (KLD) between estimated models for the SM step, is asymptotically optimal in terms of retrieval error probability. The statistical scheme leads to a new wavelet-based texture retrieval method that is based on the accurate modeling of the marginal distribution of wavelet coefficients using the generalized Gaussian density (GGD) and on the existence of a closed form for the KLD between GGDs. The proposed method provides greater accuracy and flexibility in capturing texture information, while its simplified form closely resembles existing methods that use energy distribution in the frequency domain to identify textures. Experimental results on a database of 640 texture images indicate that the new method significantly improves retrieval rates, e.g., from 65% to 77%, compared with traditional approaches, while retaining comparable levels of computational complexity. PMID:18244620
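The closed form for the KLD between two GGDs that the abstract relies on can be evaluated directly from the GGD parameters. A minimal sketch, assuming the standard parameterization p(x; a, b) = b / (2a Γ(1/b)) · exp(−(|x|/a)^b) with scale a and shape b; the function name is ours:

```python
import math

def ggd_kld(a1, b1, a2, b2):
    """Closed-form Kullback-Leibler distance D(p1 || p2) between two
    generalized Gaussian densities p(x; a, b), with scale a and shape b."""
    g = math.gamma
    return (math.log((b1 * a2 * g(1.0 / b2)) / (b2 * a1 * g(1.0 / b1)))
            + (a1 / a2) ** b2 * g((b2 + 1.0) / b1) / g(1.0 / b1)
            - 1.0 / b1)
```

As a sanity check, for b1 = b2 = 2 the GGD is Gaussian (with sigma = a/sqrt(2)) and the expression reduces to the familiar Gaussian KLD.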
Wavelet-based seismic signal estimation, detection and classification via Bayes theorem
NASA Astrophysics Data System (ADS)
Gendron, Paul J.
An application of Bayes theorem to seismic signal estimation, detection and classification is implemented with seismic events modeled as a superposition of wavelet bases. An empirical Bayes estimator is derived based on best basis arguments over block adaptive wavelet packet bases conditioned on known subband noise variances. A modified entropy functional is derived and the estimator is shown to be an adaptive shrinkage operator of coefficients in the best basis representation. Adaptation results from the updating of subband noise variance estimates. A novel robust variance estimator is presented for this context that outperforms the median based estimator for the longitudinal estimation of variance. The algorithm is tested on synthetic seismic events and compared to the discrete wavelet transform (DWT) as well as best basis selection via minimization of Stein's unbiased risk. Improvements in estimation in terms of mean squared error are sensible with the improved sparsity of representation that the best basis yields at moderate and high signal to noise ratios. An application to seismic event detection, feature extraction and classification has been developed as well. Detection and feature extraction is based on the estimated coefficients of the DWT of the seismic event by choosing bases that are known a priori to communicate useful information for discrimination. Classification of events into one of the following classes: teleseisms, regional earthquakes, near earthquakes, quarry blasts, and false alarms is accomplished with conditional class densities derived from training data by finding the maximum a posteriori probability using an empirical Bayes procedure. This algorithm is tested for detection and classification performance on the New England Seismological Network. 
This detection algorithm exhibits a likelihood of detection 2 times greater than that of the widely used energy transient measure termed "short-term average/long term average" (STA/LTA) under typical wideband network constraints in arbitrary conditions. Classification of seismic events via this method achieves an approximate 70% correct identification rate over a broad range of data test sets relative to a human viewer.
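The adaptive shrinkage described above, with thresholds driven by per-subband noise-variance estimates, can be illustrated with a plain multilevel Haar transform and MAD-based variance estimates. This is a generic sketch of subband-adaptive soft shrinkage, not the paper's best-basis wavelet-packet estimator or its robust longitudinal variance estimator:

```python
import numpy as np

def haar_dwt(x):
    """One level of the orthonormal Haar transform (len(x) must be even)."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail coefficients
    return a, d

def haar_idwt(a, d):
    """Inverse of one Haar level."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def denoise(x, levels=3):
    """Multilevel Haar decomposition with per-subband soft shrinkage;
    the noise sigma of each subband is estimated from the median
    absolute deviation (MAD) of its detail coefficients."""
    coeffs, a = [], np.asarray(x, float)
    for _ in range(levels):
        a, d = haar_dwt(a)
        coeffs.append(d)
    out = a
    for d in reversed(coeffs):
        sigma = np.median(np.abs(d)) / 0.6745          # MAD noise estimate
        t = sigma * np.sqrt(2.0 * np.log(max(len(d), 2)))
        d = np.sign(d) * np.maximum(np.abs(d) - t, 0.0)  # soft threshold
        out = haar_idwt(out, d)
    return out
```

On a piecewise-constant signal in Gaussian noise, the shrinkage should reduce the mean squared error relative to the noisy input.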
The wavelet-based multi-resolution motion estimation using temporal aliasing detection
NASA Astrophysics Data System (ADS)
Lee, Teahyung; Anderson, David V.
2007-01-01
In this paper, we propose a new algorithm for wavelet-based multi-resolution motion estimation (MRME) using temporal aliasing detection (TAD). In wavelet-transformed image/video signals, temporal aliasing becomes severe as object motion increases, causing the performance of conventional MRME algorithms to drop. To overcome this problem, we perform temporal aliasing detection and MRME simultaneously, instead of using a temporal anti-aliasing filter, which changes the original signal. We show that this technique gives competitive or better rate-distortion (RD) performance for slow-varying or simple-moving video signals compared to conventional MRME with an increased search area (SA).
Wavelet-Based Estimation of the Nonstationary Mean Signal in Wireless Systems
Narasimhan, Ravi; Cox, Donald C.
IEEE Journal on Selected Areas in Communications, Vol. 18, No. 11, November 2000, p. 2220. Abstract: A new technique is described for estimating the nonstationary mean…
Wavelet-based Joint Estimation and Encoding of Depth-Image-based Representations for Free-Viewpoint Rendering
Maitre, Matthieu; Shinagawa, Yoshihisa; Do, Minh N
2008-06-01
We propose a wavelet-based codec for the static depth-image-based representation, which allows viewers to freely choose the viewpoint. The proposed codec jointly estimates and encodes the unknown depth map from multiple views using a novel rate-distortion (RD) optimization scheme. The rate constraint reduces the ambiguity of depth estimation by favoring piecewise-smooth depth maps. The optimization is efficiently solved by a novel dynamic programming along trees of integer wavelet coefficients. The codec encodes the image and the depth map jointly to decrease their redundancy and to provide a RD-optimized bitrate allocation between the two. The codec also offers scalability both in resolution and in quality. Experiments on real data show the effectiveness of the proposed codec. PMID:18482889
Estimation of shock induced vorticity on irregular gaseous interfaces: a wavelet-based approach
NASA Astrophysics Data System (ADS)
Ray, J.; Jameson, L.
2005-11-01
We study the interaction of a shock with a density-stratified gaseous interface (Richtmyer-Meshkov instability) with localized jagged and irregular perturbations, with the aim of developing an analytical model of the vorticity deposition on the interface immediately after the passage of the shock. The jagged perturbations, meant to simulate machining errors on the surface of a laser fusion target, are characterized using Haar wavelets. Numerical solutions of the Euler equations show that the vortex sheet deposited on the jagged interface rolls into multiple mushroom-shaped dipolar structures which begin to merge before the interface evolves into a bubble-spike structure. The peaks in the distribution of x-integrated vorticity (vorticity integrated in the direction of the shock motion) decay in time as their bases widen, corresponding to the growth and merger of the mushrooms. However, these peaks were not seen to move significantly along the interface at early times, i.e., t < 10τ, where τ is the interface traversal time of the shock. We tested our analytical model against inviscid simulations for two test cases: a Mach 1.5 shock interacting with an interface with a density ratio of 3, and a Mach 10 shock interacting with a density ratio of 10. We find that this model captures the early-time (t/τ ≈ 1) vorticity deposition (as characterized by the first and second moments of the vorticity distributions) to within 5% of the numerical results.
Wavelet-based analysis and power law classification of C/NOFS high-resolution electron density data
NASA Astrophysics Data System (ADS)
Rino, C. L.; Carrano, C. S.; Roddy, Patrick
2014-08-01
This paper applies new wavelet-based analysis procedures to low Earth-orbiting satellite measurements of equatorial ionospheric structure. The analysis was applied to high-resolution data from 285 Communications/Navigation Outage Forecasting System (C/NOFS) satellite orbits sampling the postsunset period at geomagnetic equatorial latitudes. The data were acquired during a period of progressively intensifying equatorial structure. The sampled altitude range varied from 400 to 800 km. The varying scan velocity remained within 20° of the cross-field direction. Time-to-space interpolation generated uniform samples at approximately 8 m. A maximum segmentation length that supports stochastic structure characterization was identified. A two-component inverse power law model was fit to scale spectra derived from each segment together with a goodness-of-fit measure. Inverse power law parameters derived from the scale spectra were used to classify the scale spectra by type. The largest category was characterized by a single inverse power law with a mean spectral index somewhat larger than 2. No systematic departure from the inverse power law was observed to scales greater than 100 km. A small subset of the most highly disturbed passes at the lowest sampled altitudes could be categorized by two-component power law spectra with a range of break scales from less than 100 m to several kilometers. The results are discussed within the context of other analyses of in situ data and spectral characteristics used for scintillation analyses.
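A single-component inverse power law of the kind fitted to each spectral segment can be estimated by ordinary least squares in log-log coordinates. A minimal sketch under that simplification (the paper's two-component model with a break scale and its goodness-of-fit measure are not reproduced here; the function name is ours):

```python
import numpy as np

def fit_power_law(k, p):
    """Least-squares fit of log p = log C - eta * log k over a spectral
    segment; returns (C, eta), the amplitude and spectral index."""
    k = np.asarray(k, float)
    p = np.asarray(p, float)
    A = np.vstack([np.ones_like(k), -np.log(k)]).T
    coef, *_ = np.linalg.lstsq(A, np.log(p), rcond=None)
    return np.exp(coef[0]), coef[1]
```

On a noiseless synthetic spectrum p(k) = C k^(-eta), the fit recovers the parameters exactly, which makes it easy to validate before applying it to measured scale spectra.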
Ultrasound image deconvolution in symmetrical mirror wavelet bases
NASA Astrophysics Data System (ADS)
Yeoh, Wee Soon; Zhang, Cishen; Chen, Ming; Yan, Ming
2006-03-01
Observed medical ultrasound images are degraded representations of the true tissue images. The degradation is a combination of blurring, due to the finite resolution of the imaging system, and observation noise. This paper presents a new wavelet-based deconvolution method for medical ultrasound imaging. We design a new orthogonal wavelet basis, the symmetrical mirror wavelet basis, that provides more desirable frequency resolution. The proposed restoration consists of an inversion of the observed ultrasound image using the estimated two-dimensional (2-D) point spread function (PSF), followed by denoising in the designed wavelet basis. The tissue image restoration is then accomplished by modelling the tissue structures with the generalized Gaussian density (GGD) function using Bayesian estimation. Both subjective and objective measures show that the deconvolved images are visually more appealing and exhibit a gain in resolution.
Minimum complexity density estimation
Andrew R. Barron; Thomas M. Cover
1991-01-01
The authors introduce an index of resolvability that is proved to bound the rate of convergence of minimum complexity density estimators as well as the information-theoretic redundancy of the corresponding total description length. The results on the index of resolvability demonstrate the statistical effectiveness of the minimum description-length principle as a method of inference. The minimum complexity estimator converges to
Conditional Density Estimation with Class Probability Estimators
Frank, Eibe
Conditional Density Estimation with Class Probability Estimators. Eibe Frank and Remco R. Bouckaert. … to quantify the uncertainty inherent in a prediction. If a conditional density estimate is available … conditional density estimates using a class probability estimator, where this estimator is applied …
A Wavelet-Based Approximation of Surface Coil Sensitivity Profiles for Correction of Image
A Wavelet-Based Approximation of Surface Coil Sensitivity Profiles for Correction of Image … Cambridge, Massachusetts. Abstract: We evaluate a wavelet-based algorithm to estimate the coil sensitivity … for additional reference scans or using coil position markers for electromagnetic model-based calculations …
Wavelet-based functional mixed models
Morris, Jeffrey S.; Carroll, Raymond J.
2009-01-01
Summary Increasingly, scientific studies yield functional data, in which the ideal units of observation are curves and the observed data consist of sets of curves that are sampled on a fine grid. We present new methodology that generalizes the linear mixed model to the functional mixed model framework, with model fitting done by using a Bayesian wavelet-based approach. This method is flexible, allowing functions of arbitrary form and the full range of fixed effects structures and between-curve covariance structures that are available in the mixed model framework. It yields nonparametric estimates of the fixed and random-effects functions as well as the various between-curve and within-curve covariance matrices. The functional fixed effects are adaptively regularized as a result of the non-linear shrinkage prior that is imposed on the fixed effects’ wavelet coefficients, and the random-effect functions experience a form of adaptive regularization because of the separately estimated variance components for each wavelet coefficient. Because we have posterior samples for all model quantities, we can perform pointwise or joint Bayesian inference or prediction on the quantities of the model. The adaptiveness of the method makes it especially appropriate for modelling irregular functional data that are characterized by numerous local features like peaks. PMID:19759841
Multivariate Density Estimation and Visualization
Scott, David W.
Multivariate Density Estimation and Visualization. David W. Scott, Rice University, Department of Statistics. Introduction: This chapter examines the use of flexible methods to approximate an unknown density function, and techniques appropriate for visualization of densities in up to four dimensions. The statistical analysis …
New wavelet-based video coding scheme
NASA Astrophysics Data System (ADS)
Lin, Nai-wen; Yu, Tsaifa; Huang, Jen-hau; Chan, Andrew K.
1997-04-01
A wavelet-based video compression scheme that combines a tree coder with multiresolution motion estimation (MRME) is presented in this paper. Based on the correlation between wavelet coefficients in interlevel subbands, tree coders outperform DCT-based transform coders for still images, especially at low bit rates. They also offer many advantages: (1) they support progressive transmission, (2) they provide control over the bit rate, (3) they eliminate blocky artifacts, and (4) their image quality degrades gracefully as the bit rate is lowered. In addition, the multiscale wavelet representation of an image facilitates the application of the coder in video communication environments, satisfying quality and resolution requirements from video phones to HDTV. Using multiresolution representations of successive images in a sequence, MRME offers an effective fast algorithm for block-based motion estimation/compensation. Motion vectors between successive frames at the lowest resolution are used as the reference for motion estimation at higher resolutions. Choosing a smaller block size at lower resolutions and a smaller window size at higher resolutions speeds up the time-consuming motion estimation. By arranging its roots, a block-based tree coder can be used for encoding areas that cannot be easily predicted. The simplified frame and block classification used in MPEG-1 is applied adaptively for different scenes and areas. Preliminary results show that this approach is effective in decreasing computational complexity and bit rate. Further optimization and expansion of the basic scheme can make it applicable for video transmission under various bandwidth limitations.
Density Estimation with Mercer Kernels
NASA Technical Reports Server (NTRS)
Macready, William G.
2003-01-01
We present a new method for density estimation based on Mercer kernels. The density estimate can be understood as the density induced on a data manifold by a mixture of Gaussians fit in a feature space. As is usual, the feature space and data manifold are defined with any suitable positive-definite kernel function. We modify the standard EM algorithm for mixtures of Gaussians to infer the parameters of the density. One benefit of the approach is its conceptual simplicity and uniform applicability over many different types of data. Preliminary results are presented for a number of simple problems.
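The full Mercer-kernel construction operates in a feature space, but its simplest special case (an identity feature map with a Gaussian kernel) reduces to ordinary kernel density estimation. A minimal sketch of that baseline, with a fixed, user-chosen bandwidth; it is not the feature-space EM procedure of the abstract:

```python
import numpy as np

def gaussian_kde(data, x, h):
    """Gaussian kernel density estimate at query points x, built from
    1-D samples `data` with bandwidth h: an average of Gaussian bumps."""
    data = np.asarray(data, float)[:, None]   # shape (n, 1)
    x = np.asarray(x, float)[None, :]         # shape (1, m)
    z = (x - data) / h
    return np.exp(-0.5 * z * z).sum(axis=0) / (len(data) * h * np.sqrt(2 * np.pi))
```

The estimate integrates to one by construction, and for unimodal data its mode sits near the sample mode, which gives two easy checks.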
Wavelet-based Voice Morphing
Orphanidou, C.; Roberts, Stephen
Oxford Centre for Industrial and Applied Mathematics. … a new multi-scale voice morphing algorithm. This algorithm enables a user to transform one person … while preserving the original content. The voice morphing algorithm performs the morphing at different …
Wavelet Based Estimation for Univariate Stable Laws
Antoniadis, Anestis
… squares; regularization; QR decomposition. Introduction: In recent years, new classes of … functions or hazard rates. See, for example, Antoniadis et al. (1994, 1997), Donoho et al. (1996), Gao (1993), and Johnstone et al. (1992). This leaves open a question which is a natural one to ask within a statistical …
Dependence and risk assessment for oil prices and exchange rate portfolios: A wavelet based approach
NASA Astrophysics Data System (ADS)
Aloui, Chaker; Jammazi, Rania
2015-10-01
In this article, we propose a wavelet-based approach to accommodate the stylized facts and complex structure of financial data, caused by frequent and abrupt changes of markets and noises. Specifically, we show how the combination of both continuous and discrete wavelet transforms with traditional financial models helps improve portfolio's market risk assessment. In the empirical stage, three wavelet-based models (wavelet-EGARCH with dynamic conditional correlations, wavelet-copula, and wavelet-extreme value) are considered and applied to crude oil price and US dollar exchange rate data. Our findings show that the wavelet-based approach provides an effective and powerful tool for detecting extreme moments and improving the accuracy of VaR and Expected Shortfall estimates of oil-exchange rate portfolios after noise is removed from the original data.
Wavelet based recognition for pulsar signals
NASA Astrophysics Data System (ADS)
Shan, H.; Wang, X.; Chen, X.; Yuan, J.; Nie, J.; Zhang, H.; Liu, N.; Wang, N.
2015-06-01
A signal from a pulsar can be decomposed into a set of features. This set is a unique signature for a given pulsar and can be used to decide whether a pulsar is newly discovered or not. Features can be constructed from the coefficients of a wavelet decomposition. Two types of wavelet-based pulsar features are proposed. The energy-based features reflect the multiscale distribution of the energy of the coefficients. The singularity-based features first classify the signals into a class with one peak and a class with two peaks, by counting the straight wavelet modulus maxima lines perpendicular to the abscissa, and then carry out further classification according to skewness and kurtosis. Experimental results show that the wavelet-based features achieve better performance than the shape-parameter-based features, not only in clustering and classification but also in the error rates of the recognition tasks.
Wavelet-based LASSO in functional linear regression
Zhao, Yihong; Ogden, R. Todd; Reiss, Philip T.
2011-01-01
In linear regression with functional predictors and scalar responses, it may be advantageous, particularly if the function is thought to contain features at many scales, to restrict the coefficient function to the span of a wavelet basis, thereby converting the problem into one of variable selection. If the coefficient function is sparsely represented in the wavelet domain, we may employ the well-known LASSO to select a relatively small number of nonzero wavelet coefficients. This is a natural approach to take but to date, the properties of such an estimator have not been studied. In this paper we describe the wavelet-based LASSO approach to regressing scalars on functions and investigate both its asymptotic convergence and its finite-sample performance through both simulation and real-data application. We compare the performance of this approach with existing methods and find that the wavelet-based LASSO performs relatively well, particularly when the true coefficient function is spiky. Source code to implement the method and data sets used in the study are provided as supplemental materials available online. PMID:23794794
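When the wavelet basis is orthonormal, the LASSO with a wavelet design has a closed-form solution: transform the response, soft-threshold the coefficients, and invert. A minimal sketch with a Haar basis (the function names are ours, and the paper's functional-predictor setting, which adds an integral design matrix, is omitted here):

```python
import numpy as np

def haar_matrix(n):
    """Orthonormal Haar wavelet matrix for n a power of two,
    built by the standard recursive construction."""
    if n == 1:
        return np.array([[1.0]])
    h = haar_matrix(n // 2)
    top = np.kron(h, [1.0, 1.0]) / np.sqrt(2.0)             # scaling rows
    bot = np.kron(np.eye(n // 2), [1.0, -1.0]) / np.sqrt(2.0)  # wavelet rows
    return np.vstack([top, bot])

def wavelet_lasso(y, lam):
    """Lasso with an orthonormal wavelet design: the solution is
    soft-thresholding of the wavelet coefficients of y."""
    W = haar_matrix(len(y))                 # rows form an orthonormal basis
    c = W @ y                               # wavelet coefficients
    b = np.sign(c) * np.maximum(np.abs(c) - lam, 0.0)  # soft threshold
    return b, W.T @ b                       # sparse coefficients, fitted values
```

Because the design is orthonormal, setting lam = 0 returns the data exactly, and any positive lam yields a sparse coefficient vector, which is the behavior the wavelet-based LASSO exploits for spiky coefficient functions.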
Multivariate Density Estimation: An SVM Approach
Mukherjee, Sayan
1999-04-01
We formulate density estimation as an inverse operator problem. We then use convergence results of empirical distribution functions to true distribution functions to develop an algorithm for multivariate density estimation. ...
DENSITY ESTIMATION BY TOTAL VARIATION REGULARIZATION
Mizera, Ivan
Density Estimation by Total Variation Regularization. Roger Koenker and Ivan Mizera. Abstract: L1 … based on total variation of the estimated density, its square root, and its logarithm (and their derivatives) in the context of univariate and bivariate density estimation, and compare the results to some …
The Adequateness of Wavelet Based Model for Time Series
NASA Astrophysics Data System (ADS)
S, Rukun; Subanar; Rosadi, Dedi; Suhartono
2013-04-01
In general, a time series is modeled as the sum of known information, i.e., historical components, and unknown information, i.e., a random component. In a wavelet-based model, the time series is represented as a linear model of wavelet coefficients. The wavelet-based model captures the features of the time series well when the historical components dominate the process; on the other hand, it performs poorly when the random component dominates. This paper proposes an approach to improve the adequacy of the wavelet-based model when the random component dominates the process. By weighted summation, the data are carried to a new form with higher dependencies, so that the wavelet-based model works better. Finally, it is hoped that the improved prediction of the wavelet-based model will carry over to the original series through the reverting process.
Wavelet-based SAR image despeckling and information extraction, using particle filter.
Gleich, Dusan; Datcu, Mihai
2009-10-01
This paper proposes a new wavelet-based synthetic aperture radar (SAR) image despeckling algorithm using the sequential Monte Carlo method. A model-based Bayesian approach is proposed. This paper presents two methods for SAR image despeckling. The first method, called WGGPF, models the prior with a generalized Gaussian (GG) probability density function (pdf), and the second method, called WGMPF, models the prior with a generalized Gaussian Markov random field (GGMRF). The likelihood pdf is modeled using a Gaussian pdf. The GGMRF model is used because it enables texture parameter estimation; the prior is modeled using a GG pdf when texture parameters are not needed. A particle filter is used for drawing particles from the prior for different shape parameters of the GG pdf. When the GGMRF prior is used, the particles are drawn from the prior in order to estimate noise-free wavelet coefficients, and for those coefficients the texture parameter is varied, over a predefined set of GGMRF shape parameters, in order to obtain the best textural parameters. The particles with the highest weights represent the final noise-free estimate with the corresponding textural parameters. The despeckling algorithms are compared with state-of-the-art methods using synthetic and real SAR data. The experimental results show that the proposed algorithms efficiently remove noise and are comparable with state-of-the-art methods in terms of objective measurements. The proposed WGMPF preserves the textures of real, high-resolution SAR images well. PMID:19473938
NASA Astrophysics Data System (ADS)
Kittisuwan, Pichid
2015-03-01
The application of image processing in industry has shown remarkable success over the last decade, for example, in security and telecommunication systems. The denoising of natural images corrupted by Gaussian noise is a classical problem and an indispensable step in image processing. This paper is concerned with dual-tree complex wavelet-based image denoising using Bayesian techniques. One of the cruxes of Bayesian image denoising algorithms is estimating the statistical parameters of the image. Here, we employ maximum a posteriori (MAP) estimation to calculate the local observed variance, with a generalized Gamma density prior for the local observed variance and a Laplacian or Gaussian distribution for the noisy wavelet coefficients. Our selection of prior distribution is motivated by the efficient and flexible properties of the generalized Gamma density. The experimental results show that the proposed method yields good denoising results.
Analytical form for a Bayesian wavelet estimator of images using the Bessel K form densities.
Fadili, Jalal M; Boubchir, Larbi
2005-02-01
A novel Bayesian nonparametric estimator in the wavelet domain is presented. In this approach, a prior model is imposed on the wavelet coefficients, designed to capture the sparseness of the wavelet expansion. Seeking probability models for the marginal densities of the wavelet coefficients, the new family of Bessel K form (BKF) densities is shown to fit the observed histograms very well. Exploiting this prior, we design a Bayesian nonlinear denoiser and derive a closed form for its expression. We then compare it to other priors that have been introduced in the literature, such as the generalized Gaussian density (GGD) or the alpha-stable models, where no analytical form is available for the corresponding Bayesian denoisers. Specifically, the BKF model turns out to be a good compromise between these two extreme cases (hyperbolic tails for the alpha-stable and exponential tails for the GGD). Moreover, we demonstrate a high degree of match between observed and estimated prior densities using the BKF model. Finally, a comparative study shows the effectiveness of our denoiser, which clearly outperforms classical shrinkage or thresholding wavelet-based techniques. PMID:15700528
Risk Bounds for Mixture Density Estimation
Rakhlin, Alexander
2004-01-27
In this paper we focus on the problem of estimating a bounded density using a finite combination of densities from a given class. We consider the Maximum Likelihood Procedure (MLE) and the greedy procedure described by ...
Bayesian Density Estimation and Inference Using Mixtures
Michael D. Escobar; Mike West
1994-01-01
We describe and illustrate Bayesian inference in models for density estimation using mixtures of Dirichlet processes. These models provide natural settings for density estimation, and are exemplified by special cases where data are modelled as a sample from mixtures of normal distributions. Efficient simulation methods are used to approximate various prior, posterior and predictive distributions. This allows for direct inference on a variety of …
Investigation of estimators of probability density functions
NASA Technical Reports Server (NTRS)
Speed, F. M.
1972-01-01
Four research projects are summarized which include: (1) the generation of random numbers on the IBM 360/44, (2) statistical tests used to check out random number generators, (3) Specht density estimators, and (4) use of estimators of probability density functions in analyzing large amounts of data.
Wavelet-based analysis of circadian behavioral rhythms.
Leise, Tanya L
2015-01-01
The challenging problems presented by noisy biological oscillators have led to the development of a great variety of methods for accurately estimating rhythmic parameters such as period and amplitude. This chapter focuses on wavelet-based methods, which can be quite effective for assessing how rhythms change over time, particularly if time series are at least a week in length. These methods can offer alternative views to complement more traditional methods of evaluating behavioral records. The analytic wavelet transform can estimate the instantaneous period and amplitude, as well as the phase of the rhythm at each time point, while the discrete wavelet transform can extract the circadian component of activity and measure the relative strength of that circadian component compared to those in other frequency bands. Wavelet transforms do not require the removal of noise or trend, and can, in fact, be effective at removing noise and trend from oscillatory time series. The Fourier periodogram and spectrogram are reviewed, followed by descriptions of the analytic and discrete wavelet transforms. Examples illustrate application of each method and their prior use in chronobiology is surveyed. Issues such as edge effects, frequency leakage, and implications of the uncertainty principle are also addressed. PMID:25662453
Two new density estimators for distance sampling
S. Magnussen; C. Kleinn; N. Picard
2008-01-01
Two new density estimators for k-tree distance sampling are proposed and their performance is assessed in simulated distance sampling from 22 stem maps representing a wide range of natural to semi-natural forest tree stands with random to irregular (clustered) spatial distributions of trees. The new estimators are model-based. The first (Orbit) computes density as the inverse of the average of …
Multivariate Density Estimation and Remote Sensing
NASA Technical Reports Server (NTRS)
Scott, D. W.
1983-01-01
Current efforts to develop methods and computer algorithms to effectively represent multivariate data commonly encountered in remote sensing applications are described. While this may involve scatter diagrams, multivariate representations of nonparametric probability density estimates are emphasized. The density function provides a useful graphical tool for looking at data and a useful theoretical tool for classification. This approach is called a thunderstorm data analysis.
Class Conditional Density Estimation Using Mixtures with
Likas, Aristidis
Class Conditional Density Estimation Using Mixtures with Constrained Component Sharing Michalis K. Titsias and Aristidis Likas, Member, IEEE Abstract--We propose a generative mixture model classifier that allows for the class conditional densities to be represented by mixtures having certain subsets
ESTIMATES OF BIOMASS DENSITY FOR TROPICAL FORESTS
An accurate estimate of the biomass density in forests is a necessary step in understanding the global carbon cycle and the production of other atmospheric trace gases from biomass burning. In this paper the authors summarize the various approaches that have been developed for estimating...
Elements of Density Estimation Math 6070, Spring 2006
Khoshnevisan, Davar
Elements of Density Estimation, Math 6070, Spring 2006. Davar Khoshnevisan, University of Utah, March 2006. Contents include: 1.2 The Kernel Density Estimator; 1.3 The Nearest-Neighborhood Density Estimator
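The kernel density estimator covered in these notes is short enough to state in code: f_hat(x) = (1/nh) * sum K((x - x_i)/h) with a Gaussian kernel K. A minimal sketch (the function name is illustrative, and the bandwidth h is left to the user):

```python
import math

def gaussian_kde(data, h):
    """Return a callable estimating the density of `data` with a
    Gaussian kernel of bandwidth h (the textbook kernel density
    estimator; no automatic bandwidth selection is attempted)."""
    n = len(data)
    c = 1.0 / (n * h * math.sqrt(2 * math.pi))
    def f(x):
        return c * sum(math.exp(-0.5 * ((x - xi) / h) ** 2) for xi in data)
    return f
```

Because each kernel integrates to 1/n, the estimate integrates to 1 over the real line regardless of h; h only controls smoothness.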
Selection of optimal wavelet bases for image compression using SPIHT algorithm
NASA Astrophysics Data System (ADS)
Rehman, Maria; Touqir, Imran; Batool, Wajiha
2015-02-01
This paper presents the performance of several wavelet bases in SPIHT coding. Two types of wavelet bases are tested for the SPIHT algorithm, i.e., orthogonal and biorthogonal wavelet bases. The results of using the coefficients of these bases are compared on the basis of compression ratio and peak signal-to-noise ratio. The paper shows that the use of biorthogonal wavelet bases is better than that of orthogonal wavelet bases. Of the biorthogonal wavelets, bior4.4 shows good results in SPIHT coding.
Estimating and Interpreting Probability Density Functions
NSDL National Science Digital Library
This 294-page document from the Bank for International Settlements stems from the Estimating and Interpreting Probability Density Functions workshop held on June 14, 1999. The conference proceedings, which may be downloaded as a complete document or by chapter, are divided into two sections: "Estimation Techniques" and "Applications and Economic Interpretation." Both contain papers presented at the conference. Also included are a list of the program participants with their affiliations and email addresses, a foreword, and background notes.
Wavelet-based regularity analysis reveals recurrent spatiotemporal behavior in resting-state fMRI.
Smith, Robert X; Jann, Kay; Ances, Beau; Wang, Danny J J
2015-09-01
One of the major findings from multimodal neuroimaging studies in the past decade is that the human brain is anatomically and functionally organized into large-scale networks. In resting state fMRI (rs-fMRI), spatial patterns emerge when temporal correlations between various brain regions are tallied, evidencing networks of ongoing intercortical cooperation. However, the dynamic structure governing the brain's spontaneous activity is far less understood due to the short and noisy nature of the rs-fMRI signal. Here, we develop a wavelet-based regularity analysis based on the noise estimation capabilities of the wavelet transform to measure recurrent temporal pattern stability within the rs-fMRI signal across multiple temporal scales. The method consists of performing a stationary wavelet transform to preserve signal structure, followed by construction of "lagged" subsequences to adjust for correlated features, and finally the calculation of sample entropy across wavelet scales based on an "objective" estimate of noise level at each scale. We found that the brain's default mode network (DMN) areas manifest a higher level of irregularity in rs-fMRI time series than the rest of the brain. In 25 aged subjects with mild cognitive impairment and 25 matched healthy controls, wavelet-based regularity analysis showed improved sensitivity in detecting changes in the regularity of rs-fMRI signals between the two groups within the DMN and executive control networks, compared with standard multiscale entropy analysis. Wavelet-based regularity analysis based on the noise estimation capabilities of the wavelet transform is a promising technique to characterize the dynamic structure of rs-fMRI as well as other biological signals. Hum Brain Mapp 36:3603-3620, 2015. © 2015 Wiley Periodicals, Inc. PMID:26096080
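Sample entropy, the core quantifier in the regularity analysis above, counts how often length-m templates that match within a tolerance r continue to match at length m+1. A textbook sketch with illustrative defaults m = 2, r = 0.2 (the paper applies it per wavelet scale with a noise-derived tolerance, which is not reproduced here):

```python
import math

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy of sequence x: -ln(A/B), where B counts pairs of
    length-m templates within tolerance r (Chebyshev distance, no
    self-matches) and A counts the same for length m+1."""
    n = len(x)
    def count(mm):
        total = 0
        for i in range(n - mm):
            for j in range(i + 1, n - mm):
                if max(abs(x[i + k] - x[j + k]) for k in range(mm)) <= r:
                    total += 1
        return total
    b = count(m)
    a = count(m + 1)
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")
```

A perfectly regular series scores near zero (matching templates keep matching), while an irregular one scores higher.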
Wavelet-based Real Time Detection of Network Traffic Anomalies
Huang, Chin-Tser
Existing intrusion detection systems largely use signature or pattern matching techniques at their core. We present a framework for real-time wavelet-based analysis of network traffic anomalies, evaluated using two metrics. The MIT Lincoln Laboratory Intrusion Detection System Evaluation data set was used. From these data sets
3D Wavelet-Based Filter and Method
Moss, William C. (San Mateo, CA); Haase, Sebastian (San Francisco, CA); Sedat, John W. (San Francisco, CA)
2008-08-12
A 3D wavelet-based filter for visualizing and locating structural features of a user-specified linear size in 2D or 3D image data. The only input parameter is a characteristic linear size of the feature of interest, and the filter output contains only those regions that are correlated with the characteristic size, thus denoising the image.
Wavelet-based Prediction Measures for Lossy Image Set Compression
Cheng, Howard
Wavelet-based Prediction Measures for Lossy Image Set Compression. Marc Moreau, Howard Cheng. A prediction measure is a numeric measure used to quantify how similar two images are to each other. The proposed measure performs better than previous measures proposed in the literature. Keywords: image set
Wavelet-Based Multiresolution Analysis of Wivenhoe Dam Water Temperatures
Percival, Don
Wavelet-based multiresolution analysis of water temperature observations Xt recorded at the dam wall (temperature is regarded as an important driver for other water quality variables); water and treatment services ensure the quality and quantity of water supplied to Southeast Queensland; ongoing
Wavelet-Based Feature Extraction for Microarray Data Classification
Kwok, James Tin-Yau
Shutao Li, Chen Liao, James T. Kwok. It has become one of the top life threats to humans, and in recent years the study of DNA microarrays has advanced. Microarray data typically have thousands of genes, and thus feature extraction
Enhancing Hyperspectral Data Throughput Utilizing Wavelet-Based Fingerprints
I. W. Ginsberg
1999-09-01
Multiresolutional decompositions known as spectral fingerprints are often used to extract spectral features from multispectral/hyperspectral data. In this study, the authors investigate the use of wavelet-based algorithms for generating spectral fingerprints. The wavelet-based algorithms are compared to the currently used method, traditional convolution with first-derivative Gaussian filters. The comparison analysis consists of two parts: (a) the computational expense of the new method is compared with the computational costs of the current method and (b) the outputs of the wavelet-based methods are compared with those of the current method to determine any practical differences in the resulting spectral fingerprints. The results show that the wavelet-based algorithms can greatly reduce the computational expense of generating spectral fingerprints, while practically no differences exist in the resulting fingerprints. The analysis is conducted on a database of hyperspectral signatures, namely, Hyperspectral Digital Image Collection Experiment (HYDICE) signatures. The reduction in computational expense is by a factor of about 30, and the average Euclidean distance between resulting fingerprints is on the order of 0.02.
Wavelet-based analysis of blood pressure dynamics in rats
NASA Astrophysics Data System (ADS)
Pavlov, A. N.; Anisimov, A. A.; Semyachkina-Glushkovskaya, O. V.; Berdnikova, V. A.; Kuznecova, A. S.; Matasova, E. G.
2009-02-01
Using a wavelet-based approach, we study stress-induced reactions in the blood pressure dynamics of rats. Further, we consider how the level of nitric oxide (NO) influences the heart rate variability. Clear distinctions between male and female rats are reported.
ADAPTIVE DENSITY ESTIMATION WITH MASSIVE DATA SETS
Scott, David W.
ADAPTIVE DENSITY ESTIMATION WITH MASSIVE DATA SETS. David W. Scott, Rice University; Masahiko Sagae. A cross-validation criterion is used. For massive data sets, the promise of having sufficient data to do locally adaptive estimation will be provided. 1. Challenge of Massive Data. Massive data sets (MDS) represent one of the grand challenges
Estimating density of Florida Key deer
Roberts, Clay Walton
2006-08-16
Florida Key deer (Odocoileus virginianus clavium) were listed as endangered by the U.S. Fish and Wildlife Service (USFWS) in 1967. A variety of survey methods have been used in estimating deer density and/or changes in population trends...
Sampling, Density Estimation and Spatial Relationships
NSDL National Science Digital Library
Maggie Haag (University of Alberta; )
1998-01-01
This resource serves as a tool for instructing a laboratory exercise in ecology. Students obtain hands-on experience using techniques such as mark-recapture and density estimation, and organisms such as zooplankton and fathead minnows. This exercise is suitable for general ecology and introductory biology courses.
Density Estimation for Projected Exoplanet Quantities
NASA Astrophysics Data System (ADS)
Brown, Robert A.
2011-05-01
Exoplanet searches using radial velocity (RV) and microlensing (ML) produce samples of "projected" mass and orbital radius, respectively. We present a new method for estimating the probability density distribution (density) of the unprojected quantity from such samples. For a sample of n data values, the method involves solving n simultaneous linear equations to determine the weights of delta functions for the raw, unsmoothed density of the unprojected quantity that cause the associated cumulative distribution function (CDF) of the projected quantity to exactly reproduce the empirical CDF of the sample at the locations of the n data values. We smooth the raw density using nonparametric kernel density estimation with a normal kernel of bandwidth σ. We calibrate the dependence of σ on n by Monte Carlo experiments performed on samples drawn from a theoretical density, in which the integrated square error is minimized. We scale this calibration to the ranges of real RV samples using the Normal Reference Rule. The resolution and amplitude accuracy of the estimated density improve with n. For typical RV and ML samples, we expect the fractional noise at the PDF peak to be approximately 80n^(-log 2). For illustrations, we apply the new method to 67 RV values given a similar treatment by Jorissen et al. in 2001, and to the 308 RV values listed at exoplanets.org on 2010 October 20. In addition to analyzing observational results, our methods can be used to develop measurement requirements, particularly on the minimum sample size n, for future programs, such as the microlensing survey of Earth-like exoplanets recommended by the Astro 2010 committee.
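The smoothing stage described above, replacing a raw density of weighted delta functions with a normal-kernel estimate, can be sketched as follows. Silverman's Normal Reference Rule appears here as a simple stand-in for the paper's Monte Carlo calibration of σ; the function names are illustrative:

```python
import math

def normal_reference_bandwidth(data):
    """Normal Reference Rule bandwidth: sigma_hat * (4 / (3n))^(1/5),
    a textbook rule used here instead of the paper's calibration."""
    n = len(data)
    mean = sum(data) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))
    return sd * (4.0 / (3.0 * n)) ** 0.2

def smooth_deltas(points, weights, sigma):
    """Smooth a raw density of delta functions (located at `points`,
    with the given weights) using a normal kernel of bandwidth sigma;
    returns the smoothed density as a callable."""
    z = 1.0 / (sigma * math.sqrt(2 * math.pi))
    def f(x):
        return z * sum(w * math.exp(-0.5 * ((x - p) / sigma) ** 2)
                       for p, w in zip(points, weights))
    return f
```

If the delta weights sum to one, the smoothed density again integrates to one, so the smoothing changes resolution but not total probability.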
Review of methods for estimating cetacean density from passive acoustics
Thomas, Len
Objectives: 1. Develop methods for estimating the density of cetaceans from fixed passive acoustic devices. Outline: Part I: Review of density estimation in cetaceans and passive acoustics.
Coding sequence density estimation via topological pressure.
Koslicki, David; Thompson, Daniel J
2015-01-01
We give a new approach to coding sequence (CDS) density estimation in genomic analysis based on the topological pressure, which we develop from a well known concept in ergodic theory. Topological pressure measures the 'weighted information content' of a finite word, and incorporates 64 parameters which can be interpreted as a choice of weight for each nucleotide triplet. We train the parameters so that the topological pressure fits the observed coding sequence density on the human genome, and use this to give ab initio predictions of CDS density over windows of size around 66,000 bp on the genomes of Mus musculus, rhesus macaque and Drosophila melanogaster. While the differences between these genomes are too great to expect that training on the human genome could predict, for example, the exact locations of genes, we demonstrate that our method gives reasonable estimates for the 'coarse scale' problem of predicting CDS density. Inspired again by ergodic theory, the weightings of the nucleotide triplets obtained from our training procedure are used to define a probability distribution on finite sequences, which can be used to distinguish between intron and exon sequences from the human genome of lengths between 750 and 5,000 bp. At the end of the paper, we explain the theoretical underpinning for our approach, which is the theory of Thermodynamic Formalism from the dynamical systems literature. Mathematica and MATLAB implementations of our method are available at http://sourceforge.net/projects/topologicalpres/ . PMID:24448658
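A much-simplified illustration of the triplet-weighting idea: score a sequence by the average log-weight of its overlapping nucleotide triplets. This is a hedged sketch of a 'weighted information content', not the authors' topological-pressure computation; the function name and weight values are invented for illustration:

```python
import math

def triplet_log_score(seq, weights):
    """Average log-weight of the overlapping nucleotide triplets in
    seq, given a dict mapping each of the 64 triplets to a positive
    weight. Higher weights on coding-associated triplets push coding
    sequences toward higher scores."""
    triplets = [seq[i:i + 3] for i in range(len(seq) - 2)]
    return sum(math.log(weights[t]) for t in triplets) / len(triplets)
```

With uniform weights every sequence scores zero; upweighting a triplet (here the start codon ATG, purely as an example) raises the score of sequences rich in it.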
Statistical analysis of brain tissue images in the wavelet domain: wavelet-based morphometry.
Canales-Rodríguez, Erick Jorge; Radua, Joaquim; Pomarol-Clotet, Edith; Sarró, Salvador; Alemán-Gómez, Yasser; Iturria-Medina, Yasser; Salvador, Raymond
2013-05-15
Wavelet-based methods have been developed for statistical analysis of functional MRI and PET data, where the wavelet transformation is employed as a tool for efficient signal representation. A number of studies using these approaches have reported better estimation capabilities, in terms of increased sensitivity and specificity, than the standard statistical analyses in the spatial domain. In line with these previous studies, the present report proposes a statistical analysis in the wavelet domain for the estimation of inter-group differences from structural MRI data. The procedure, called wavelet-based morphometry (WBM), was implemented under a voxel-based morphometry (VBM) style analysis. It was evaluated by comparing the gray-matter images of a group of 32 healthy subjects whose images were artificially altered to induce thinning of the cortex, with a different group of 32 healthy subjects whose images were unaltered. In order to quantify the performance of the reconstruction from a practical perspective, the same comparison was also conducted with standard VBM using SPM's Gaussian random fields and FSL's cluster-based statistics, family-wise error corrected, for datasets spatially-normalized via two different registration methods (i.e., SyN and FNIRT). The effect of using different amounts of smoothing, Battle-Lemarié filters and resolution levels in the wavelet transform was also investigated. Results support the proposed approach as a different and promising methodology to assess the structural morphometric differences between different populations of subjects. PMID:23384522
Bird population density estimated from acoustic signals
Dawson, D.K.; Efford, M.G.
2009-01-01
Many animal species are detected primarily by sound. Although songs, calls and other sounds are often used for population assessment, as in bird point counts and hydrophone surveys of cetaceans, there are few rigorous methods for estimating population density from acoustic data. 2. The problem has several parts: distinguishing individuals, adjusting for individuals that are missed, and adjusting for the area sampled. Spatially explicit capture-recapture (SECR) is a statistical methodology that addresses jointly the second and third parts of the problem. We have extended SECR to use uncalibrated information from acoustic signals on the distance to each source. 3. We applied this extension of SECR to data from an acoustic survey of ovenbird Seiurus aurocapilla density in an eastern US deciduous forest with multiple four-microphone arrays. We modelled average power from spectrograms of ovenbird songs measured within a window of 0.7 s duration and frequencies between 4200 and 5200 Hz. 4. The resulting estimates of the density of singing males (0.19 ha-1, SE 0.03 ha-1) were consistent with estimates of the adult male population density from mist-netting (0.36 ha-1, SE 0.12 ha-1). The fitted model predicts sound attenuation of 0.11 dB m-1 (SE 0.01 dB m-1) in excess of losses from spherical spreading. 5. Synthesis and applications. Our method for estimating animal population density from acoustic signals fills a gap in the census methods available for visually cryptic but vocal taxa, including many species of bird and cetacean. The necessary equipment is simple and readily available; as few as two microphones may provide adequate estimates, given spatial replication. The method requires that individuals detected at the same place are acoustically distinguishable and all individuals vocalize during the recording interval, or that the per capita rate of vocalization is known. We believe these requirements can be met, with suitable field methods, for a significant number of songbird species. © 2009 British Ecological Society.
Wavelet-based statistical signal processing using hidden Markov models
Matthew S. Crouse; Robert D. Nowak; Richard G. Baraniuk
1998-01-01
Wavelet-based statistical signal processing techniques such as denoising and detection typically model the wavelet coefficients as independent or jointly Gaussian. These models are unrealistic for many real-world signals. We develop a new framework for statistical signal processing based on wavelet-domain hidden Markov models (HMMs) that concisely models the statistical dependencies and non-Gaussian statistics encountered in real-world signals. Wavelet-domain HMMs are
Non-destructive wavelet-based despeckling in SAR images
NASA Astrophysics Data System (ADS)
Bekhtin, Yuri S.; Bryantsev, Andrey A.; Malebo, Damiao P.; Lupachev, Alexey A.
2014-10-01
The suggested wavelet-based despeckling method for multi-look SAR images does not use any thresholding or window processing, thereby avoiding ringing artifacts, blurring, fusion of edges, etc. Instead, a logical comparison operation is applied to wavelet coefficients arranged in spatial oriented trees (SOTs) of the wavelet decomposition, calculated for one and the same region of the earth's surface during the SAR spacecraft's flight. Fusion of SAR images is achieved by keeping the smallest wavelet coefficients from different SOTs in the high-frequency subbands (details). The wavelet coefficients of the low-frequency subband (approximation) are processed by another special logical operation that provides good smoothing. Because the described procedure depends on the properties of the chosen wavelet basis, a library of wavelet bases is applied and the procedure is repeated for each wavelet basis. To select the best SOTs (and hence the best wavelet basis), a special cost function treats the SOTs as coherent structures and indicates which wavelet basis yields the maximum entropy. The results of computer modeling and comparison with several well-known despeckling procedures show the superb quality of the proposed method in terms of different criteria such as PSNR and SSIM.
Improved Astronomical Inferences via Nonparametric Density Estimation
NASA Astrophysics Data System (ADS)
Schafer, Chad
2010-01-01
Nonparametric and semiparametric approaches to density estimation can yield scientific insights unavailable when restrictive assumptions are made regarding the form of the distribution. Further, when a well-chosen dimension reduction technique is utilized, the distribution of high-dimensional data (e.g., spectra, images) can be characterized via a nonparametric approach. The hope is that these procedures will preserve a large amount of the rich information in these data. Ideas will be illustrated via a semiparametric approach to estimating luminosity functions (Schafer, 2007) and recent work on characterizing the evolution of the distribution of galaxy morphology. This is joint work with Peter Freeman, Susan Buchman, and Ann Lee. Work is supported by NASA AISR Grant.
Traffic characterization and modeling of wavelet-based VBR encoded video
Yu Kuo; Jabbari, B.; Zafar, S.
1997-07-01
Wavelet-based video codecs provide a hierarchical structure for the encoded data, which can cater to a wide variety of applications such as multimedia systems. The characteristics of such an encoder and its output, however, have not been well examined. In this paper, the authors investigate the output characteristics of a wavelet-based video codec and develop a composite model to capture the traffic behavior of its output video data. Wavelet decomposition transforms the input video into a hierarchical structure with a number of subimages at different resolutions and scales. The top-level wavelet in this structure contains most of the signal energy. They first describe the characteristics of traffic generated by each subimage and the effect of dropping various subimages at the encoder on the signal-to-noise ratio at the receiver. They then develop an N-state Markov model to describe the traffic behavior of the top wavelet. The behavior of the remaining wavelets is then obtained through estimation, based on the correlations between these subimages at the same level of resolution and those wavelets located at an immediately higher level. In this paper, a three-state Markov model is developed. The resulting traffic behavior, described by various statistical properties such as moments and correlations, is then utilized to validate their model.
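The N-state Markov model for the top-wavelet traffic can be illustrated with a generic discrete-time Markov chain simulator. This is a sketch only: the transition matrix and the per-state rate values in the example are invented, not taken from the paper.

```python
import random

def simulate_markov(P, states, n, start=0, seed=42):
    """Simulate n steps of a discrete-time Markov chain.

    P is a row-stochastic transition matrix (rows sum to 1), `states`
    maps state indices to emitted values (e.g. bit rates), and the
    fixed seed makes the sketch reproducible."""
    rng = random.Random(seed)
    s = start
    out = []
    for _ in range(n):
        out.append(states[s])
        u = rng.random()
        acc = 0.0
        for j, p in enumerate(P[s]):  # inverse-CDF draw of next state
            acc += p
            if u < acc:
                s = j
                break
    return out
```

For a symmetric three-state chain the stationary distribution is uniform, so long simulated runs should average near the middle rate.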
Probability Density Function Estimation Using Orthogonal Forward Regression
Chen, Sheng
Probability Density Function Estimation Using Orthogonal Forward Regression. S. Chen, X. Hong and C.J. Harris. Abstract: Using the classical Parzen window estimate as the target function, orthogonal forward regression is used to construct sparse kernel density estimates. The proposed algorithm incrementally minimises a leave-one-out
Review of methods for estimating cetacean density from passive
Thomas, Len
Review of methods for estimating cetacean density from passive acoustics Len Thomas and Tiago Marques SIO Symposium: Estimating cetacean density from Passive Acoustics 16th July 2009 www for estimating the density of cetaceans from fixed passive acoustic devices. Methods should be applicable
Density Estimation with Stagewise Optimization of the Empirical Risk
Klemelä, Jussi
Jussi Klemelä. July 12, 2006. Abstract: We consider multivariate density estimation with identically distributed observations. We study a density estimator which is a convex combination of functions
DENSITY ESTIMATION AND RANDOM VARIATE GENERATION USING MULTILAYER NETWORKS
Magdon-Ismail, Malik
DENSITY ESTIMATION AND RANDOM VARIATE GENERATION USING MULTILAYER NETWORKS. Malik Magdon-Ismail. Abstract: In this paper we consider two important topics: density estimation and random variate generation. First, we develop two new methods for density estimation, a stochastic method and a related
ESTIMATING MICROORGANISM DENSITIES IN AEROSOLS FROM SPRAY IRRIGATION OF WASTEWATER
This document summarizes current knowledge about estimating the density of microorganisms in the air near wastewater management facilities, with emphasis on spray irrigation sites. One technique for modeling microorganism density in air is provided and an aerosol density estimati...
EEG analysis using wavelet-based information tools.
Rosso, O A; Martin, M T; Figliola, A; Keller, K; Plastino, A
2006-06-15
Wavelet-based informational tools for quantitative electroencephalogram (EEG) record analysis are reviewed. Relative wavelet energies, wavelet entropies and wavelet statistical complexities are used in the characterization of scalp EEG records corresponding to secondary generalized tonic-clonic epileptic seizures. In particular, we show that the epileptic recruitment rhythm observed during seizure development is well described in terms of the relative wavelet energies. In addition, during the concomitant time-period the entropy diminishes while complexity grows. This is construed as evidence supporting the conjecture that an epileptic focus, for this kind of seizures, triggers a self-organized brain state characterized by both order and maximal complexity. PMID:16675027
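The relative wavelet energies and wavelet entropy used above reduce to a few lines once per-subband coefficients are available: p_j = E_j / E_total and H = -sum p_j ln p_j. A direct sketch of these standard definitions (computing the subband coefficients themselves requires a wavelet decomposition not shown here):

```python
import math

def wavelet_entropy(subband_coeffs):
    """Given a list of per-subband coefficient lists, return the
    relative wavelet energies p_j = E_j / E_total and the wavelet
    entropy H = -sum_j p_j ln p_j (zero-energy bands contribute 0)."""
    energies = [sum(c * c for c in band) for band in subband_coeffs]
    total = sum(energies)
    probs = [e / total for e in energies]
    h = -sum(p * math.log(p) for p in probs if p > 0)
    return probs, h
```

Energy concentrated in a single band gives H = 0 (maximal order); energy spread evenly over k bands gives the maximum H = ln k.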
An EM algorithm for wavelet-based image restoration.
Figueiredo, Mário A T; Nowak, Robert D
2003-01-01
This paper introduces an expectation-maximization (EM) algorithm for image restoration (deconvolution) based on a penalized likelihood formulated in the wavelet domain. Regularization is achieved by promoting a reconstruction with low-complexity, expressed in the wavelet coefficients, taking advantage of the well known sparsity of wavelet representations. Previous works have investigated wavelet-based restoration but, except for certain special cases, the resulting criteria are solved approximately or require demanding optimization methods. The EM algorithm herein proposed combines the efficient image representation offered by the discrete wavelet transform (DWT) with the diagonalization of the convolution operator obtained in the Fourier domain. Thus, it is a general-purpose approach to wavelet-based image restoration with computational complexity comparable to that of standard wavelet denoising schemes or of frequency domain deconvolution methods. The algorithm alternates between an E-step based on the fast Fourier transform (FFT) and a DWT-based M-step, resulting in an efficient iterative process requiring O(N log N) operations per iteration. The convergence behavior of the algorithm is investigated, and it is shown that under mild conditions the algorithm converges to a globally optimal restoration. Moreover, our new approach performs competitively with, in some cases better than, the best existing methods in benchmark tests. PMID:18237964
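The alternation between a data-fit step and wavelet-domain shrinkage can be illustrated with a much-simplified toy: a Landweber gradient step on the deconvolution residual followed by soft-thresholding of single-level Haar detail coefficients. This is a sketch in the spirit of the algorithm above, not the paper's FFT-based E-step; the function names and all parameter values are illustrative.

```python
def haar_fwd(x):
    """Single-level orthonormal Haar transform (even-length input)."""
    s = 2 ** 0.5
    a = [(x[2 * i] + x[2 * i + 1]) / s for i in range(len(x) // 2)]
    d = [(x[2 * i] - x[2 * i + 1]) / s for i in range(len(x) // 2)]
    return a, d

def haar_inv(a, d):
    s = 2 ** 0.5
    x = []
    for ai, di in zip(a, d):
        x += [(ai + di) / s, (ai - di) / s]
    return x

def soft(v, t):
    """Soft-thresholding (shrinkage) of a coefficient list."""
    return [max(abs(u) - t, 0.0) * (1.0 if u >= 0 else -1.0) for u in v]

def ista_deconv(y, h, n_iter=200, step=0.5, thresh=0.02):
    """Toy wavelet-regularized deconvolution: a Landweber data-fit step
    followed by soft-thresholding of the Haar detail coefficients."""
    n = len(y)
    m = len(h)
    def conv(x):  # circular convolution with kernel h
        return [sum(h[k] * x[(i - k) % n] for k in range(m)) for i in range(n)]
    x = list(y)
    for _ in range(n_iter):
        resid = [ri - yi for ri, yi in zip(conv(x), y)]
        # adjoint (correlation) of the circular convolution
        grad = [sum(h[k] * resid[(i + k) % n] for k in range(m)) for i in range(n)]
        z = [xi - step * gi for xi, gi in zip(x, grad)]
        a, d = haar_fwd(z)
        x = haar_inv(a, soft(d, thresh))
    return x
```

On a blurred step signal the iteration sharpens the edge, reducing the error relative to the blurred observation; the thresholding supplies the sparsity prior, the gradient step the data fit.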
A New Wavelet Based Approach to Assess Hydrological Models
NASA Astrophysics Data System (ADS)
Adamowski, J. F.; Rathinasamy, M.; Khosa, R.; Nalley, D.
2014-12-01
In this study, a new wavelet based multi-scale performance measure (Multiscale Nash Sutcliffe Criteria, and Multiscale Normalized Root Mean Square Error) for hydrological model comparison was developed and tested. The new measure provides a quantitative measure of model performance across different timescales. Model and observed time series are decomposed using the à trous wavelet transform, and performance measures of the model are obtained at each time scale. The usefulness of the new measure was tested using real as well as synthetic case studies. The real case studies included simulation results from the Soil and Water Assessment Tool (SWAT), as well as statistical models (the Coupled Wavelet-Volterra (WVC), Artificial Neural Network (ANN), and Auto Regressive Moving Average (ARMA) methods). Data from India and Canada were used. The synthetic case studies included different kinds of errors (e.g., timing error, as well as under and over prediction of high and low flows) in outputs from a hydrologic model. It was found that the proposed wavelet based performance measures (i.e., MNSC and MNRMSE) are more reliable measures than traditional performance measures such as the Nash Sutcliffe Criteria, Root Mean Square Error, and Normalized Root Mean Square Error. It was shown that the new measure can be used to compare different hydrological models, as well as help in model calibration.
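The idea of scoring a model at several timescales can be sketched with a box-filter smoother standing in for the à trous decomposition. The smoothing widths and function names below are illustrative; the per-scale score is the standard Nash-Sutcliffe efficiency.

```python
def smooth(x, width):
    """Centred box-filter smoothing; width=0 returns x unchanged."""
    out = []
    for i in range(len(x)):
        window = x[max(0, i - width): i + width + 1]
        out.append(sum(window) / len(window))
    return out

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of observations."""
    mo = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mo) ** 2 for o in obs)
    return 1.0 - num / den

def multiscale_nse(obs, sim, widths=(0, 1, 2, 4)):
    """NSE of the simulation at several smoothing scales, a box-filter
    stand-in for the wavelet-based multiscale measure."""
    return [nse(smooth(obs, w), smooth(sim, w)) for w in widths]
```

A simulation corrupted by high-frequency noise scores poorly at the finest scale but improves at coarser scales, which is exactly the scale-by-scale diagnosis the multiscale measure is after.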
Wavelet-Based Signal and Image Processing for Target Recognition
NASA Astrophysics Data System (ADS)
Sherlock, Barry G.
2002-11-01
The PI visited NSWC Dahlgren, VA, for six weeks in May-June 2002 and collaborated with scientists in the G33 TEAMS facility, and with Marilyn Rudzinsky of T44 Technology and Photonic Systems Branch. During this visit the PI also presented six educational seminars to NSWC scientists on various aspects of signal processing. Several items from the grant proposal were completed, including (1) wavelet-based algorithms for interpolation of 1-d signals and 2-d images; (2) Discrete Wavelet Transform domain based algorithms for filtering of image data; (3) wavelet-based smoothing of image sequence data originally obtained for the CRITTIR (Clutter Rejection Involving Temporal Techniques in the Infra-Red) project. The PI visited the University of Stellenbosch, South Africa to collaborate with colleagues Prof. B.M. Herbst and Prof. J. du Preez on the use of wavelet image processing in conjunction with pattern recognition techniques. The University of Stellenbosch has offered the PI partial funding to support a sabbatical visit in Fall 2003, the primary purpose of which is to enable the PI to develop and enhance his expertise in Pattern Recognition. During the first year, the grant supported publication of 3 refereed papers, presentation of 9 seminars and an intensive two-day course on wavelet theory. The grant supported the work of two students who functioned as research assistants.
Majorization-minimization algorithms for wavelet-based image restoration.
Figueiredo, Mário A T; Bioucas-Dias, José M; Nowak, Robert D
2007-12-01
Standard formulations of image/signal deconvolution under wavelet-based priors/regularizers lead to very high-dimensional optimization problems involving the following difficulties: the non-Gaussian (heavy-tailed) wavelet priors lead to objective functions which are nonquadratic, usually nondifferentiable, and sometimes even nonconvex; the presence of the convolution operator destroys the separability which underlies the simplicity of wavelet-based denoising. This paper presents a unified view of several recently proposed algorithms for handling this class of optimization problems, placing them in a common majorization-minimization (MM) framework. One of the classes of algorithms considered (when using quadratic bounds on nondifferentiable log-priors) shares the infamous "singularity issue" (SI) of "iteratively reweighted least squares" (IRLS) algorithms: the possibility of having to handle infinite weights, which may cause both numerical and convergence issues. In this paper, we prove several new results which strongly support the claim that the SI does not compromise the usefulness of this class of algorithms. Exploiting the unified MM perspective, we introduce a new algorithm, resulting from using l1 bounds for nonconvex regularizers; the experiments confirm the superior performance of this method, when compared to the one based on quadratic majorization. Finally, an experimental comparison of the several algorithms, reveals their relative merits for different standard types of scenarios. PMID:18092597
Wavelet-Based Extraction of Coherent Vortices from High Reynolds Number Homogeneous
École Normale Supérieure
rue Lhomond, 75231 Paris Cedex 05, France. farge@lmd.ens.fr. Abstract: A wavelet-based method to extract coherent vortices in wavelet space [3, 4, 5, 6] is presented. The wavelet-based coherent vortex extraction (CVE) method, originally developed for two-dimensional flows, is extended, and the CVE algorithm is applied to data obtained by DNS of three-dimensional incompressible homogeneous ...
A Bayesian Density Estimation Algorithm
Fisher, Douglas H.
A Bayesian Density Estimation Algorithm. Stefanos Manganaris, Dept. of Computer Science, Vanderbilt University, Box 1679, Station B, Nashville, TN 37235, U.S.A. March 13, 1996. Abstract: ... a simple nonparametric method for univariate density estimation that uses Bayesian inference ...
Density estimation with multivariate histograms and best basis selection
Klemelä, Jussi
Density estimation with multivariate histograms and best basis selection. Jussi Klemelä, Department ... @rumms.uni-mannheim.de, Fax +49 621 1811931. November 17, 2006. Abstract: We consider estimation of multivariate densities ... the optimal amount of presmoothing depends on the spatial inhomogeneity of the density. Mathematics Subject ...
Bayesian network classification using spline-approximated kernel density estimation
Yaniv Gurwicz; Boaz Lerner
2005-01-01
The likelihood for patterns of continuous features needed for probabilistic inference in a Bayesian network classifier (BNC) may be computed by kernel density estimation (KDE), letting every pattern influence the shape of the probability density. Although usually leading to accurate estimation, the KDE suffers from computational cost making it impractical in many real-world applications. We smooth the density using a ...
Mammographic Density Estimation with Automated Volumetric Breast Density Measurement
Ko, Su Yeon; Kim, Eun-Kyung; Kim, Min Jung
2014-01-01
Objective To compare automated volumetric breast density measurement (VBDM) with radiologists' evaluations based on the Breast Imaging Reporting and Data System (BI-RADS), and to identify the factors associated with technical failure of VBDM. Materials and Methods In this study, 1129 women aged 19-82 years who underwent mammography from December 2011 to January 2012 were included. Breast density evaluations by radiologists based on BI-RADS and by VBDM (Volpara Version 1.5.1) were compared. The agreement in interpreting breast density between radiologists and VBDM was determined based on four density grades (D1, D2, D3, and D4) and a binary classification of fatty (D1-2) vs. dense (D3-4) breast using kappa statistics. The association between technical failure of VBDM and patient age, total breast volume, fibroglandular tissue volume, history of partial mastectomy, the frequency of mass > 3 cm, and breast density was analyzed. Results The agreement between breast density evaluations by radiologists and VBDM was fair (k value = 0.26) when the four density grades (D1/D2/D3/D4) were used and moderate (k value = 0.47) for the binary classification (D1-2/D3-4). Twenty-seven women (2.4%) showed failure of VBDM. Small total breast volume, history of partial mastectomy, and high breast density were significantly associated with technical failure of VBDM (p = 0.001 to 0.015). Conclusion There is fair or moderate agreement in breast density evaluation between radiologists and VBDM. Technical failure of VBDM may be related to small total breast volume, a history of partial mastectomy, and high breast density. PMID:24843235
Remarks on Some Nonparametric Estimates of a Density Function
Murray Rosenblatt
1956-01-01
This note discusses some aspects of the estimation of the density function of a univariate probability distribution. All estimates of the density function satisfying relatively mild conditions are shown to be biased. The asymptotic mean square error of a particular class of estimates is evaluated.
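The class of estimators discussed in this note can be illustrated with the familiar kernel form; a minimal sketch assuming a Gaussian kernel (the note itself treats a more general class of estimates):

```python
import numpy as np

def kde(x_eval, data, h):
    # kernel density estimate: average of Gaussian bumps of bandwidth h
    # centered at the observations; biased for any finite h, consistent
    # with the result that all such estimates are biased
    u = (x_eval[:, None] - data[None, :]) / h
    k = np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)
    return k.mean(axis=1) / h
```

The estimate is nonnegative and integrates to one (up to truncation of the tails), but its expectation is a smoothed version of the true density rather than the density itself.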
Noisy Independent Factor Analysis Model for Density Estimation and Classification
Amato, U.
2009-06-09
We consider the problem of multivariate density estimation when the unknown density is assumed to follow a particular form of dimensionality reduction, a noisy independent factor analysis (IFA) model. In this model the ...
Force Estimation and Prediction from Time-Varying Density Images
Ratilal, Purnima
We present methods for estimating forces which drive motion observed in density image sequences. Using these forces, we also present methods for predicting velocity and density evolution. To do this, we formulate and apply ...
ESTIMATING THE DENSITY OF DRY SNOW LAYERS FROM HARDNESS, AND HARDNESS FROM DENSITY
Jamieson, Bruce
ESTIMATING THE DENSITY OF DRY SNOW LAYERS FROM HARDNESS, AND HARDNESS FROM DENSITY. Daehyun Kim. ABSTRACT: At the ISSW 2000, Geldsetzer and Jamieson presented empirical relations between the density ... density and water equivalent (e.g. because the layer was too thin for the density sampler) ...
Dynamic wavelet-based tool for gearbox diagnosis
NASA Astrophysics Data System (ADS)
Omar, Farag K.; Gaouda, A. M.
2012-01-01
This paper proposes a novel wavelet-based technique for detecting and localizing gear tooth defects in a noisy environment. The proposed technique utilizes a dynamic windowing process while analyzing gearbox vibration signals in the wavelet domain. The gear vibration signal is processed through a dynamic Kaiser's window of varying parameters. The window size, shape, and sliding rate are modified towards increasing the similarity between the non-stationary vibration signal and the selected mother wavelet. The window parameters are continuously modified until they provide maximum wavelet coefficients localized at the defective tooth. The technique is applied to laboratory data corrupted with a high noise level. The technique has shown accurate results in detecting and localizing gear tooth fracture with different damage severities.
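The windowing step can be sketched as follows; a hypothetical helper assuming a fixed hop size, whereas the paper adapts window size, shape, and sliding rate on the fly:

```python
import numpy as np

def kaiser_segments(signal, win_len, beta, hop):
    # slide a Kaiser window of shape parameter beta across the signal;
    # each windowed segment would then be wavelet-analyzed downstream
    w = np.kaiser(win_len, beta)
    starts = range(0, len(signal) - win_len + 1, hop)
    return np.array([signal[s:s + win_len] * w for s in starts])
```

Larger beta narrows the window's effective main lobe, which is the kind of shape control the adaptive scheme exploits when matching the window to the mother wavelet.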
A Wavelet-Based Methodology for Grinding Wheel Condition Monitoring
Liao, T. W. [Louisiana State University]; Ting, C.F. [Louisiana State University]; Qu, Jun [ORNL]; Blau, Peter Julian [ORNL]
2007-01-01
Grinding wheel surface condition changes as more material is removed. This paper presents a wavelet-based methodology for grinding wheel condition monitoring based on acoustic emission (AE) signals. Grinding experiments in creep feed mode were conducted to grind alumina specimens with a resinoid-bonded diamond wheel using two different conditions. During the experiments, AE signals were collected when the wheel was 'sharp' and when the wheel was 'dull'. Discriminant features were then extracted from each raw AE signal segment using the discrete wavelet decomposition procedure. An adaptive genetic clustering algorithm was finally applied to the extracted features in order to distinguish different states of grinding wheel condition. The test results indicate that the proposed methodology can achieve 97% clustering accuracy for the high material removal rate condition, 86.7% for the low material removal rate condition, and 76.7% for the combined grinding conditions if the base wavelet, the decomposition level, and the GA parameters are properly selected.
Wavelet based free-form deformations for nonrigid registration
NASA Astrophysics Data System (ADS)
Sun, Wei; Niessen, Wiro J.; Klein, Stefan
2014-03-01
In nonrigid registration, deformations may take place on the coarse and fine scales. For the conventional B-splines based free-form deformation (FFD) registration, these coarse- and fine-scale deformations are all represented by basis functions of a single scale. Meanwhile, wavelets have been proposed as a signal representation suitable for multi-scale problems. Wavelet analysis leads to a unique decomposition of a signal into its coarse- and fine-scale components. Potentially, this could therefore be useful for image registration. In this work, we investigate whether a wavelet-based FFD model has advantages for nonrigid image registration. We use a B-splines based wavelet, as defined by Cai and Wang [1]. This wavelet is expressed as a linear combination of B-spline basis functions. Derived from the original B-spline function, this wavelet is smooth, differentiable, and compactly supported. The basis functions of this wavelet are orthogonal across scales in Sobolev space. This wavelet was previously used for registration in computer vision, in 2D optical flow problems [2], but it was not compared with the conventional B-spline FFD in medical image registration problems. An advantage of choosing this B-splines based wavelet model is that the space of allowable deformation is exactly equivalent to that of the traditional B-spline. The wavelet transformation is essentially a (linear) reparameterization of the B-spline transformation model. Experiments on 10 CT lung and 18 T1-weighted MRI brain datasets show that wavelet based registration leads to smoother deformation fields than traditional B-splines based registration, while achieving better accuracy.
Density estimation using the trapping web design: A geometric analysis
Link, W.A.; Barker, R.J.
1994-01-01
Population densities for small mammal and arthropod populations can be estimated using capture frequencies for a web of traps. A conceptually simple geometric analysis that avoids the need to estimate a point on a density function is proposed. This analysis incorporates data from the outermost rings of traps, explaining large capture frequencies in these rings rather than truncating them from the analysis.
Nonparametric estimation of plant density by the distance method
Patil, S.A.; Burnham, K.P.; Kovner, J.L.
1979-01-01
A relation between the plant density and the probability density function of the nearest neighbor distance (squared) from a random point is established under fairly broad conditions. Based upon this relationship, a nonparametric estimator for the plant density is developed and presented in terms of order statistics. Consistency and asymptotic normality of the estimator are discussed. An interval estimator for the density is obtained. The modifications of this estimator and its variance are given when the distribution is truncated. Simulation results are presented for regular, random and aggregated populations to illustrate the nonparametric estimator and its variance. A numerical example from field data is given. Merits and deficiencies of the estimator are discussed with regard to its robustness and variance.
Quantitative comparison of estimations for the density within pedestrian streams
NASA Astrophysics Data System (ADS)
Tordeux, Antoine; Zhang, Jun; Steffen, Bernhard; Seyfried, Armin
2015-06-01
In this work, the precision of estimators for the density within unidirectional pedestrian streams is evaluated. The analysis is done in controllable systems where the density is homogeneous and all the characteristics are known. The objectives are to estimate the global density with local measurements, or the density profile at high spatial resolution, with no bias and low fluctuations. The classical estimator using discrete counts of observed pedestrians is compared to continuous estimators using spacing distance, Voronoi diagrams, Gaussian kernels, and maximum likelihood. Mean squared error and bias of the estimators are calculated from empirical data and Monte Carlo experiments. The results show quantitatively how continuous approaches improve the precision of the estimations.
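Two of the simplest estimators compared above can be sketched in one dimension; a toy version, assuming pedestrians at known positions along a measurement segment [a, b]:

```python
import numpy as np

def count_density(positions, a, b):
    # classical estimator: discrete count per unit length
    inside = (positions >= a) & (positions <= b)
    return inside.sum() / (b - a)

def spacing_density(positions, a, b):
    # continuous estimator: reciprocal of the mean spacing distance
    inside = np.sort(positions[(positions >= a) & (positions <= b)])
    return 1.0 / np.diff(inside).mean()
```

The count-based estimator jumps whenever a pedestrian crosses the segment boundary, which is one source of the fluctuations the continuous estimators are designed to reduce.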
Morphology driven density distribution estimation for small bodies
NASA Astrophysics Data System (ADS)
Takahashi, Yu; Scheeres, D. J.
2014-05-01
We explore methods to detect and characterize the internal mass distribution of small bodies using the gravity field and shape of the body as data, both of which are determined from the orbit determination process. The discrepancies in the spherical harmonic coefficients are compared between the measured gravity field and the gravity field generated under a homogeneous density assumption. The discrepancies are shown for six different heterogeneous density distribution models and two small bodies, namely 1999 KW4 and Castalia. Using these differences, a constraint is enforced on the internal density distribution of an asteroid, creating an archive of characteristics associated with the same-degree spherical harmonic coefficients. Following the initial characterization of the heterogeneous density distribution models, a generalized density estimation method to recover the hypothetical (i.e., nominal) density distribution of the body is considered. We propose this method as the block density estimation, which dissects the entire body into small slivers and blocks, each homogeneous within itself, to estimate their density values. Significant similarities are observed between the block model and mass concentrations. However, the block model does not suffer errors from shape mismodeling, and the number of blocks can be controlled with ease to yield a unique solution to the density distribution. The results show that the block density estimation approximates the given gravity field well, yielding higher accuracy as the resolution of the density map is increased. The estimated density distribution also reproduces the surface potential and acceleration to within 10% for the particular cases tested in the simulations, an accuracy that is not achievable with the conventional spherical harmonic gravity field.
The block density estimation can be a useful tool for recovering the internal density distribution of small bodies for scientific purposes and for mapping out the gravity field environment in close proximity to a small body's surface for accurate trajectory design and safe navigation in future missions.
Locally adaptive complex wavelet-based demosaicing for color filter array images
NASA Astrophysics Data System (ADS)
Aelterman, Jan; Goossens, Bart; Pižurica, Aleksandra; Philips, Wilfried
2009-02-01
A new approach for wavelet-based demosaicing of color filter array (CFA) images is presented. It is observed that conventional wavelet-based demosaicing results in demosaicing artifacts in high spatial frequency regions of the image. By proposing a framework of locally adaptive demosaicing in the wavelet domain, the presented method uses computationally simple techniques to avoid these artifacts. In order to reduce computation time and memory requirements even more, we propose the use of the dual tree complex wavelet transform. The results show that wavelet-based demosaicing, using the proposed locally adaptive framework, is visually comparable with state-of-the-art pixel based demosaicing. This result is very promising when considering a low complexity wavelet-based demosaicing and denoising approach.
A novel wavelet-based finite element method for the analysis of rotor-bearing systems
Jiawei Xiang; Dongdi Chen; Xuefeng Chen; Zhengjia He
2009-01-01
The rotor dynamic theory, combined with finite element method, has been widely used over the last three decades in order to calculate the dynamic parameters in rotor-bearing systems. Since the wavelet-based elements offer multi-scale models, particularly in modeling complex systems, the wavelet-based rotating shaft elements are constructed to model rotor-bearing systems. The effects of translational and rotatory inertia, the gyroscopic
On a Wavelet-Based Method for the Numerical Simulation of Wave Propagation
Tae-Kyung Hong; B. L. N. Kennett
2002-01-01
A wavelet-based method for the numerical simulation of acoustic and elastic wave propagation is developed. Using a displacement-velocity formulation and treating spatial derivatives with linear operators, the wave equations are rewritten as a system of equations whose evolution in time is controlled by first-order derivatives. The linear operators for spatial derivatives are implemented in wavelet bases using an operator projection
WAVELET-BASED SPECTRAL SMOOTHING FOR HEAD-RELATED TRANSFER FUNCTION FILTER DESIGN
HUSEYIN HACIHABIBOGLU; BANU GUNEL; FIONN MURTAGH
2002-01-01
Three wavelet-based spectral smoothing techniques are presented in this paper as a pre-processing stage for head-related transfer function (HRTF) filter design. These wavelet-based methods include wavelet denoising, wavelet approximation, and the redundant wavelet transform. These methods are used with time-domain parametric filter design methods to reduce the order of the IIR filters, which is useful for real-time implementation of immersive ...
An image adaptive, wavelet-based watermarking of digital images
NASA Astrophysics Data System (ADS)
Agreste, Santa; Andaloro, Guido; Prestipino, Daniela; Puccio, Luigia
2007-12-01
In digital management, multimedia content and data can easily be used in an illegal way--being copied, modified and distributed again. Copyright protection, intellectual and material rights protection for authors, owners, buyers, distributors and the authenticity of content are crucial factors in solving an urgent and real problem. In such a scenario digital watermark techniques are emerging as a valid solution. In this paper, we describe an algorithm--called WM2.0--for an invisible watermark: private, strong, wavelet-based and developed for digital image protection and authenticity. The use of the discrete wavelet transform (DWT) is motivated by its good time-frequency features and good match with human visual system directives. These two combined elements are important in building an invisible and robust watermark. WM2.0 works on a dual scheme: watermark embedding and watermark detection. The watermark is embedded into high frequency DWT components of a specific sub-image and it is calculated in correlation with the image features and statistic properties. Watermark detection applies a re-synchronization between the original and watermarked image. The correlation between the watermarked DWT coefficients and the watermark signal is calculated according to the Neyman-Pearson statistic criterion. Experimentation on a large set of different images has shown the algorithm to be resistant against geometric, filtering and StirMark attacks with a low rate of false alarm.
A wavelet-based method for multispectral face recognition
NASA Astrophysics Data System (ADS)
Zheng, Yufeng; Zhang, Chaoyang; Zhou, Zhaoxian
2012-06-01
A wavelet-based method is proposed for multispectral face recognition in this paper. Gabor wavelet transform is a common tool for orientation analysis of a 2D image, whereas Hamming distance is an efficient distance measurement for face identification. Specifically, at each frequency band, an index number representing the strongest orientational response is selected, and then encoded in binary format to favor the Hamming distance calculation. Multiband orientation bit codes are then organized into a face pattern byte (FPB) by using order statistics. With the FPB, Hamming distances are calculated and compared to achieve face identification. The FPB algorithm was initially created using thermal images, while the EBGM method was originated with visible images. When two or more spectral images from the same subject are available, the identification accuracy and reliability can be enhanced using score fusion. We compare the identification performance of applying five recognition algorithms to the three-band (visible, near infrared, thermal) face images, and explore the fusion performance of combining the multiple scores from three recognition algorithms and from three-band face images, respectively. The experimental results show that the FPB is the best recognition algorithm, the HMM yields the best fusion result, and the thermal dataset results in the best fusion performance compared to the other two datasets.
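The Hamming-distance matching step is straightforward; a minimal sketch on byte-packed codes (the FPB layout itself is the paper's and is not reproduced here):

```python
def hamming_distance(code_a: bytes, code_b: bytes) -> int:
    # count differing bits between two equal-length binary face codes;
    # identification picks the gallery code with the smallest distance
    return sum(bin(a ^ b).count("1") for a, b in zip(code_a, code_b))
```

Because the comparison is a bitwise XOR plus a popcount, matching scales to large galleries far more cheaply than floating-point feature distances.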
Structural damage localization using wavelet-based silhouette statistics
NASA Astrophysics Data System (ADS)
Jung, Uk; Koh, Bong-Hwan
2009-04-01
This paper introduces a new methodology for classifying and localizing structural damage in a truss structure. The application of wavelet analysis along with signal classification techniques in engineering problems allows us to discover novel characteristics that can be used for the diagnosis and classification of structural defects. This study exploits the data discriminating capability of silhouette statistics, which is eventually combined with the wavelet-based vertical energy threshold technique for the purpose of extracting damage-sensitive features and clustering signals of the same class. This threshold technique allows us to first obtain a suitable subset of the extracted or modified features of our data, i.e. good predictor sets should contain features that are strongly correlated to the characteristics of the data without considering the classification method used, although each of these features should be as uncorrelated with each other as possible. The silhouette statistics have been used to assess the quality of clustering by measuring how well an object is assigned to its corresponding cluster. We use this concept for the discriminant power function used in this paper. The simulation results of damage detection in a truss structure show that the approach proposed in this study can be successfully applied for locating both open- and breathing-type damage even in the presence of a considerable amount of process and measurement noise. Finally, a typical data mining tool such as classification and regression tree (CART) quantitatively evaluates the performance of the damage localization results in terms of the misclassification error.
Experimental and numerical evaluation of wavelet based damage detection methodologies
NASA Astrophysics Data System (ADS)
Quiñones, Mireya M.; Montejo, Luis A.; Jang, Shinae
2015-03-01
This article presents an evaluation of the capabilities of wavelet-based methodologies for damage identification in civil structures. Two different approaches were evaluated: (1) analysis of the evolution of the structure's frequencies by means of the continuous wavelet transform and (2) analysis of the singularities generated in the high frequency response of the structure through the detail functions obtained via the fast wavelet transform. The methodologies were evaluated using experimental and numerically simulated data. It was found that the selection of appropriate wavelet parameters is critical for a successful analysis of the signal. Wavelet parameters should be selected based on the expected frequency content of the signal and the desired time and frequency resolutions. Identification of frequency shifts via ridge extraction of the wavelet map was successful in most of the experimental and numerical scenarios investigated. Moreover, the frequency shift can usually be inferred, but the exact time at which it occurs is not evident. However, this information can be retrieved from the spike location in the fast wavelet transform analysis. Therefore, it is recommended to perform both types of analysis and consider the results together.
Wavelet-based characterization of gait signal for neurological abnormalities.
Baratin, E; Sugavaneswaran, L; Umapathy, K; Ioana, C; Krishnan, S
2015-02-01
Studies conducted by the World Health Organization (WHO) indicate that over one billion people suffer from neurological disorders worldwide, and the lack of efficient diagnosis procedures affects their therapeutic interventions. Characterizing certain pathologies of motor control for facilitating their diagnosis can be useful in quantitatively monitoring disease progression and efficient treatment planning. As a suitable directive, we introduce a wavelet-based scheme for effective characterization of gait associated with certain neurological disorders. In addition, since the data were recorded from a dynamic process, this work also investigates the need for gait signal re-sampling prior to identification of signal markers in the presence of pathologies. To benefit automated discrimination of gait data, certain characteristic features are extracted from the wavelet-transformed signals. The performance of the proposed approach was evaluated using a database consisting of 15 Parkinson's disease (PD), 20 Huntington's disease (HD), 13 Amyotrophic lateral sclerosis (ALS) and 16 healthy control subjects, and an average classification accuracy of 85% is achieved using an unbiased cross-validation strategy. The obtained results demonstrate the potential of the proposed methodology for computer-aided diagnosis and automatic characterization of certain neurological disorders. PMID:25661004
Density estimation and random variate generation using multilayer networks.
Magdon-Ismail, M; Atiya, A
2002-01-01
In this paper we consider two important topics: density estimation and random variate generation. We present a framework that is easily implemented using the familiar multilayer neural network. First, we develop two new methods for density estimation, a stochastic method and a related deterministic method. Both methods are based on approximating the distribution function, the density being obtained by differentiation. In the second part of the paper, we develop new random number generation methods. Our methods do not suffer from some of the restrictions of existing methods in that they can be used to generate numbers from any density provided that certain smoothness conditions are satisfied. One of our methods is based on an observed inverse relationship between the density estimation process and random number generation. We present two variants of this method, a stochastic, and a deterministic version. We propose a second method that is based on a novel control formulation of the problem, where a "controller network" is trained to shape a given density into the desired density. We justify the use of all the methods that we propose by providing theoretical convergence results. In particular, we prove that the L-infinity convergence to the true density for both the density estimation and random variate generation techniques occurs at a rate O((log log N/N)^((1-epsilon)/2)) where N is the number of data points and epsilon can be made arbitrarily small for sufficiently smooth target densities. This bound is very close to the optimally achievable convergence rate under similar smoothness conditions. Also, for comparison, the L2 root mean square (rms) convergence rate of a positive kernel density estimator is O(N^(-2/5)) when the optimal kernel width is used. We present numerical simulations to illustrate the performance of the proposed density estimation and random variate generation methods. 
In addition, we present an extended introduction and bibliography that serves as an overview and reference for the practitioner. PMID:18244452
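The inverse relationship between an estimated distribution function and variate generation can be sketched with plain inverse-transform sampling; here a tabulated monotone F stands in for the paper's trained network:

```python
import numpy as np

def inverse_transform_sample(x_grid, F_grid, n, seed=None):
    # draw n variates by inverting the (estimated) distribution function
    # F, tabulated on x_grid, via linear interpolation of its inverse
    rng = np.random.default_rng(seed)
    u = rng.uniform(size=n)
    return np.interp(u, F_grid, x_grid)
```

Any monotone estimate of F plugs in directly, which is why estimating the distribution function (rather than the density) pairs naturally with random variate generation.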
Nonparametric probability density estimation for data analysis in several dimensions
NASA Technical Reports Server (NTRS)
Scott, D. W.
1983-01-01
It is shown that nonparametric probability density estimates, in particular the corresponding contour curves, are a useful adjunct to scatter diagrams when performing a preliminary examination of a set of random data in several dimensions.
Fast Nonparametric Conditional Density Estimation Michael P. Holmes
Isbell, Charles L.
applications to previously intractable large multivariate datasets, including a redshift prediction problem ... Note that what we mean by nonparametric conditional density estimation is different from other ...
Optimum nonparametric estimation of population density based on ordered distances
Patil, S.A.; Kovner, J.L.; Burnham, Kenneth P.
1982-01-01
The asymptotic mean and error mean square are determined for the nonparametric estimator of plant density by distance sampling proposed by Patil, Burnham and Kovner (1979, Biometrics 35, 597-604). On the basis of these formulae, a bias-reduced version of this estimator is given, and its specific form is determined which gives minimum mean square error under varying assumptions about the true probability density function of the sampled data. Extension is given to line-transect sampling.
Improving 3D Wavelet-Based Compression of Hyperspectral Images
NASA Technical Reports Server (NTRS)
Klimesh, Matthew; Kiely, Aaron; Xie, Hua; Aranki, Nazeeh
2009-01-01
Two methods of increasing the effectiveness of three-dimensional (3D) wavelet-based compression of hyperspectral images have been developed. (As used here, images signifies both images and digital data representing images.) The methods are oriented toward reducing or eliminating detrimental effects of a phenomenon, referred to as spectral ringing, that is described below. In 3D wavelet-based compression, an image is represented by a multiresolution wavelet decomposition consisting of several subbands obtained by applying wavelet transforms in the two spatial dimensions corresponding to the two spatial coordinate axes of the image plane, and by applying wavelet transforms in the spectral dimension. Spectral ringing is named after the more familiar spatial ringing (spurious spatial oscillations) that can be seen parallel to and near edges in ordinary images reconstructed from compressed data. These ringing phenomena are attributable to effects of quantization. In hyperspectral data, the individual spectral bands play the role of edges, causing spurious oscillations to occur in the spectral dimension. In the absence of such corrective measures as the present two methods, spectral ringing can manifest itself as systematic biases in some reconstructed spectral bands and can reduce the effectiveness of compression of spatially-low-pass subbands. One of the two methods is denoted mean subtraction. The basic idea of this method is to subtract mean values from spatial planes of spatially low-pass subbands prior to encoding, because (a) such spatial planes often have mean values that are far from zero and (b) zero-mean data are better suited for compression by methods that are effective for subbands of two-dimensional (2D) images. In this method, after the 3D wavelet decomposition is performed, mean values are computed for and subtracted from each spatial plane of each spatially-low-pass subband. 
The resulting data are converted to sign-magnitude form and compressed in a manner similar to that of a baseline hyperspectral-image-compression method. The mean values are encoded in the compressed bit stream and added back to the data at the appropriate decompression step. The overhead incurred by encoding the mean values (only a few bits per spectral band) is negligible with respect to the huge size of a typical hyperspectral data set. The other method is denoted modified decomposition. This method is so named because it involves a modified version of a commonly used multiresolution wavelet decomposition, known in the art as the 3D Mallat decomposition, in which (a) the first of multiple stages of a 3D wavelet transform is applied to the entire dataset and (b) subsequent stages are applied only to the horizontally-, vertically-, and spectrally-low-pass subband from the preceding stage. In the modified decomposition, in stages after the first, not only is the spatially-low-pass, spectrally-low-pass subband further decomposed, but also spatially-low-pass, spectrally-high-pass subbands are further decomposed spatially. Either method can be used alone to improve the quality of a reconstructed image (see figure). Alternatively, the two methods can be combined by first performing modified decomposition, then subtracting the mean values from spatial planes of spatially-low-pass subbands.
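The mean-subtraction step described above is simple to state; a minimal sketch, assuming a spatially-low-pass subband stored as an array of shape (spectral bands, rows, columns):

```python
import numpy as np

def subtract_plane_means(subband):
    # remove the mean of each spatial plane; the means are returned
    # so they can be encoded in the bit stream and added back on decode
    means = subband.mean(axis=(1, 2))
    return subband - means[:, None, None], means
```

Returning the means alongside the zero-mean data mirrors the encode/decode split: the planes become better suited to 2D-style subband coding, and exact reconstruction only requires adding the stored means back.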
Estimating maritime snow density from seasonal climate variables
NASA Astrophysics Data System (ADS)
Bormann, K. J.; Evans, J. P.; Westra, S.; McCabe, M. F.; Painter, T. H.
2013-12-01
Snow density is a complex parameter that influences thermal, optical and mechanical snow properties and processes. Depth-integrated properties of snowpacks, including snow density, remain very difficult to obtain remotely. Observations of snow density are therefore limited to in-situ point locations. In maritime snowfields such as those in Australia and in parts of the western US, snow densification rates are enhanced and inter-annual variability is high compared to continental snow regions. In-situ snow observation networks in maritime climates often cannot characterise the variability in snowpack properties at spatial and temporal resolutions required for many modelling and observations-based applications. Regionalised density-time curves are commonly used to approximate snow densities over broad areas. However, these relationships have limited spatial applicability and do not allow for interannual variability in densification rates, which are important in maritime environments. Physically-based density models are relatively complex and rely on empirical algorithms derived from limited observations, which may not represent the variability observed in maritime snow. In this study, seasonal climate factors were used to estimate late season snow densities using multiple linear regressions. Daily snow density estimates were then obtained by projecting linearly to fresh snow densities at the start of the season. When applied spatially, the daily snow density fields compare well to in-situ observations across multiple sites in Australia, and provide a new method for extrapolating existing snow density datasets in maritime snow environments. While the relatively simple algorithm for estimating snow densities has been used in this study to constrain snowmelt rates in a temperature-index model, the estimates may also be used to incorporate variability in snow depth to snow water equivalent conversion.
Mean thermospheric density estimation derived from satellite constellations
NASA Astrophysics Data System (ADS)
Li, Alan; Close, Sigrid
2015-10-01
This paper defines a method to estimate the mean neutral density of the thermosphere given many satellites of the same form factor travelling in similar regions of space. A priori information to the estimation scheme includes ranging measurements and a general knowledge of the onboard ADACS, although precise measurements are not required for the latter. The estimation procedure utilizes order statistics to estimate the probability of the minimum drag coefficient achievable, and amalgamating all measurements across multiple time periods allows estimation of the probability density of the ballistic factor itself. The model does not depend on prior models of the atmosphere; instead we require estimation of the minimum achievable drag coefficient, which is based upon physics models of simple shapes in free molecular flow. From the statistics of the minimum, error statistics on the estimated atmospheric density can be calculated. Barring measurement errors from the ranging procedure itself, it is shown that with a constellation of 10 satellites, we can achieve a standard deviation of roughly 4% on the estimated mean neutral density. As more satellites are added to the constellation, the result converges towards the lower limit of the achievable drag coefficient, and accuracy becomes limited by the quality of the ranging measurements and the probability of the accommodation coefficient. Comparisons are made to existing atmospheric models such as NRLMSISE-00 and JB2006.
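The paper's key device, order statistics of the minimum, can be illustrated with a toy simulation: each satellite's measured drag coefficient lies above a physical lower limit, and the constellation minimum estimates that limit with a bias that shrinks as satellites are added. Everything below (the uniform distribution, the limit of 2.0, the spread of 0.4) is an invented stand-in, not the paper's free-molecular-flow model:

```python
import random
import statistics

def minimum_order_statistic_trials(sample_size, n_trials,
                                   lower=2.0, spread=0.4, seed=1):
    """Simulate repeated constellations: each 'satellite' yields one
    drag-coefficient measurement at or above the physical lower limit;
    the per-constellation minimum is the order-statistic estimator of
    that limit."""
    rng = random.Random(seed)
    minima = []
    for _ in range(n_trials):
        draws = [lower + spread * rng.random() for _ in range(sample_size)]
        minima.append(min(draws))
    return statistics.mean(minima), statistics.stdev(minima)

# For n uniform draws on [lower, lower + spread], the minimum has mean
# lower + spread / (n + 1): the bias shrinks as the constellation grows.
```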
Unbiased estimators of wildlife population densities using aural information
Durland, Eric Newton
1969-01-01
of whitewing doves that has been developed is to have an experienced observer walk through a colony and estimate the colony's density from the intensity of the calling. Though seemingly crude, this method has been shown to be amazingly accurate when... that will apply to a wide range of wildlife surveying situations. CHAPTER II DERIVATION OF ESTIMATORS 2.1 Notation The method to be used in developing estimators from aural information is to consider a series of situations of increasing complexity...
Fast wavelet-based image characterization for highly adaptive image retrieval.
Quellec, Gwénolé; Lamard, Mathieu; Cazuguel, Guy; Cochener, Béatrice; Roux, Christian
2012-04-01
Adaptive wavelet-based image characterizations have been proposed in previous works for content-based image retrieval (CBIR) applications. In these applications, the same wavelet basis was used to characterize each query image: This wavelet basis was tuned to maximize the retrieval performance in a training data set. We take it one step further in this paper: A different wavelet basis is used to characterize each query image. A regression function, which is tuned to maximize the retrieval performance in the training data set, is used to estimate the best wavelet filter, i.e., in terms of expected retrieval performance, for each query image. A simple image characterization, which is based on the standardized moments of the wavelet coefficient distributions, is presented. An algorithm is proposed to compute this image characterization almost instantly for every possible separable or nonseparable wavelet filter. Therefore, using a different wavelet basis for each query image does not considerably increase computation times. On the other hand, significant retrieval performance increases were obtained in a medical image data set, a texture data set, a face recognition data set, and an object picture data set. This additional flexibility in wavelet adaptation paves the way to relevance feedback on image characterization itself and not simply on the way image characterizations are combined. PMID:22194244
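The image characterization in this entry is built from standardized moments of wavelet-coefficient distributions. A minimal sketch using a hand-rolled one-level Haar transform (the toy signal is arbitrary, and the paper's trick of computing such moments cheaply for every candidate separable and nonseparable filter is not reproduced here):

```python
import math

def haar_detail(signal):
    """One level of the orthonormal Haar transform: scaled pairwise
    differences give the detail coefficients."""
    return [(signal[i] - signal[i + 1]) / math.sqrt(2.0)
            for i in range(0, len(signal) - 1, 2)]

def standardized_moment(xs, order):
    """Central moment of the given order divided by sigma**order."""
    n = len(xs)
    mean = sum(xs) / n
    sigma = math.sqrt(sum((x - mean) ** 2 for x in xs) / n)
    return sum(((x - mean) / sigma) ** order for x in xs) / n

signal = [float((7 * i) % 13) for i in range(64)]  # deterministic toy signal
detail = haar_detail(signal)
skewness = standardized_moment(detail, 3)  # 3rd standardized moment
kurtosis = standardized_moment(detail, 4)  # 4th standardized moment
```

By construction, the standardized second moment of any sample is exactly 1, which makes a convenient sanity check on the implementation.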
A wavelet-based noise reduction algorithm and its clinical evaluation in cochlear implants.
Ye, Hua; Deng, Guang; Mauger, Stefan J; Hersbach, Adam A; Dawson, Pam W; Heasman, John M
2013-01-01
Noise reduction is often essential for cochlear implant (CI) recipients to achieve acceptable speech perception in noisy environments. Most noise reduction algorithms applied to audio signals are based on time-frequency representations of the input, such as the Fourier transform. Algorithms based on other representations may also be able to provide comparable or improved speech perception and listening quality improvements. In this paper, a noise reduction algorithm for CI sound processing is proposed based on the wavelet transform. The algorithm uses a dual-tree complex discrete wavelet transform followed by shrinkage of the wavelet coefficients based on a statistical estimation of the variance of the noise. The proposed noise reduction algorithm was evaluated by comparing its performance to those of many existing wavelet-based algorithms. The speech transmission index (STI) of the proposed algorithm is significantly better than other tested algorithms for the speech-weighted noise of different levels of signal to noise ratio. The effectiveness of the proposed system was clinically evaluated with CI recipients. A significant improvement in speech perception of 1.9 dB was found on average in speech weighted noise. PMID:24086605
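The shrinkage step described (thresholding wavelet coefficients using a statistical estimate of the noise variance) is commonly realized with a median-absolute-deviation noise estimate and soft thresholding. A hedged sketch of that generic recipe, not of the paper's dual-tree complex transform or its clinical parameters:

```python
import math
import random
import statistics

def mad_sigma(detail):
    """Robust noise estimate: median absolute deviation of the finest
    detail coefficients divided by 0.6745 (Gaussian consistency constant)."""
    med = statistics.median(detail)
    return statistics.median(abs(c - med) for c in detail) / 0.6745

def soft_threshold(coeffs, t):
    """Shrink each coefficient toward zero by t, zeroing those below t."""
    return [math.copysign(max(abs(c) - t, 0.0), c) for c in coeffs]

rng = random.Random(0)
noise = [rng.gauss(0.0, 0.5) for _ in range(256)]      # pure-noise coefficients
sigma_hat = mad_sigma(noise)
t = sigma_hat * math.sqrt(2.0 * math.log(len(noise)))  # universal threshold
shrunk = soft_threshold(noise, t)
```

With pure-noise input the universal threshold suppresses nearly all coefficients, which is exactly the behaviour a denoiser wants on noise-only subbands.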
Ultrasonic velocity for estimating density of structural ceramics
NASA Technical Reports Server (NTRS)
Klima, S. J.; Watson, G. K.; Herbell, T. P.; Moore, T. J.
1981-01-01
The feasibility of using ultrasonic velocity as a measure of bulk density of sintered alpha silicon carbide was investigated. The material studied was either in the as-sintered condition or hot isostatically pressed in the temperature range from 1850 to 2050 C. Densities varied from approximately 2.8 to 3.2 g/cu cm. Results show that the bulk, nominal density of structural grade silicon carbide articles can be estimated from ultrasonic velocity measurements to within 1 percent using 20 MHz longitudinal waves and a commercially available ultrasonic time intervalometer. The ultrasonic velocity measurement technique shows promise for screening out material with unacceptably low density levels.
Non-local crime density estimation incorporating housing information
Woodworth, J. T.; Mohler, G. O.; Bertozzi, A. L.; Brantingham, P. J.
2014-01-01
Given a discrete sample of event locations, we wish to produce a probability density that models the relative probability of events occurring in a spatial domain. Standard density estimation techniques do not incorporate priors informed by spatial data. Such methods can result in assigning significant positive probability to locations where events cannot realistically occur. In particular, when modelling residential burglaries, standard density estimation can predict residential burglaries occurring where there are no residences. Incorporating the spatial data can inform the valid region for the density. When modelling very few events, additional priors can help to correctly fill in the gaps. Learning and enforcing correlation between spatial data and event data can yield better estimates from fewer events. We propose a non-local version of maximum penalized likelihood estimation based on the H1 Sobolev seminorm regularizer that computes non-local weights from spatial data to obtain more spatially accurate density estimates. We evaluate this method in application to a residential burglary dataset from San Fernando Valley with the non-local weights informed by housing data or a satellite image. PMID:25288817
Density estimation using KNN and a potential model
NASA Astrophysics Data System (ADS)
Lu, Yonggang; Qiao, Jiangang; Liao, Li; Yang, Wuyang
2013-10-01
Density-based clustering methods are usually more adaptive than other classical methods in that they can identify clusters of various shapes and can handle noisy data. A novel density estimation method is proposed using both the k-nearest neighbor (KNN) graph and a hypothetical potential field of the data points to capture the local and global data distribution information, respectively. An initial density score computed using KNN is used as the mass of the data point in computing the potential values. Then the computed potential is used as the new density estimation, from which the final clustering result is derived. All the parameters used in the proposed method are determined from the input data automatically. The new clustering method is evaluated by comparison with K-means++, DBSCAN, and CSPV. The experimental results show that the proposed method can determine the number of clusters automatically while producing competitive clustering results compared to the other three methods.
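The initial KNN density score mentioned above is, in one dimension, typically of the form k / (2 n r_k), with r_k the distance to the k-th nearest neighbour. A minimal illustrative sketch (the paper's subsequent potential-field step is not reproduced):

```python
def knn_density_1d(x, sample, k):
    """k-nearest-neighbour density estimate in one dimension:
    f(x) ~= k / (n * 2 * r_k), where r_k is the distance to the
    k-th nearest sample point."""
    dists = sorted(abs(x - s) for s in sample)
    r_k = dists[k - 1]
    return k / (len(sample) * 2.0 * r_k)

# On an evenly spaced sample over [0, 1) the estimate is close to 1:
uniform_sample = [i / 100 for i in range(100)]
f_mid = knn_density_1d(0.5, uniform_sample, 10)
```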
Kernel density estimation of a multidimensional efficiency profile
NASA Astrophysics Data System (ADS)
Poluektov, A.
2015-02-01
Kernel density estimation is a convenient way to estimate the probability density of a distribution given a sample of data points. However, it has certain drawbacks: proper description of the density using narrow kernels needs large data samples, whereas if the kernel width is large, boundaries and narrow structures tend to be smeared. Here, an approach to correct for such effects is proposed that uses an approximate density to describe narrow structures and boundaries. The approach is shown to be well suited for the description of the efficiency shape over a multidimensional phase space in a typical particle physics analysis. An example is given for the five-dimensional phase space of the Λb0 → D0pπ- decay.
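The baseline estimator under discussion, before any boundary or narrow-structure correction, is the plain kernel sum. A one-dimensional Gaussian-kernel sketch, which also exhibits the boundary bias the abstract warns about:

```python
import math

def gaussian_kde(x, sample, bandwidth):
    """Plain Gaussian kernel density estimate evaluated at x."""
    norm = 1.0 / (len(sample) * bandwidth * math.sqrt(2.0 * math.pi))
    return norm * sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2)
                      for s in sample)

# Far from the edges of an evenly spaced sample on [0, 1), the estimate
# recovers the uniform density of 1; at the boundary, half the kernel
# mass falls outside the support, so the estimate is biased low.
sample = [(i + 0.5) / 100 for i in range(100)]
f_mid = gaussian_kde(0.5, sample, 0.05)
f_edge = gaussian_kde(0.0, sample, 0.05)
```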
NONPARAMETRIC ESTIMATION OF MULTIVARIATE CONVEX-TRANSFORMED DENSITIES
Seregin, Arseni; Wellner, Jon A.
2011-01-01
We study estimation of multivariate densities p of the form p(x) = h(g(x)) for x ∈ Rd and for a fixed monotone function h and an unknown convex function g. The canonical example is h(y) = e-y for y ∈ R; in this case, the resulting class of densities P(e-y) = {p = exp(-g) : g is convex} is well known as the class of log-concave densities. Other functions h allow for classes of densities with heavier tails than the log-concave class. We first investigate when the maximum likelihood estimator p̂ exists for the class P(h) for various choices of monotone transformations h, including decreasing and increasing functions h. The resulting models for increasing transformations h extend the classes of log-convex densities studied previously in the econometrics literature, corresponding to h(y) = exp(y). We then establish consistency of the maximum likelihood estimator for fairly general functions h, including the log-concave class P(e-y) and many others. In a final section, we provide asymptotic minimax lower bounds for the estimation of p and its vector of derivatives at a fixed point x0 under natural smoothness hypotheses on h and g. The proofs rely heavily on results from convex analysis. PMID:21423877
Nonparametric probability density estimation by optimization theoretic techniques
NASA Technical Reports Server (NTRS)
Scott, D. W.
1976-01-01
Two nonparametric probability density estimators are considered. The first is the kernel estimator. The problem of choosing the kernel scaling factor based solely on a random sample is addressed. An interactive mode is discussed and an algorithm proposed to choose the scaling factor automatically. The second nonparametric probability estimate uses penalty function techniques with the maximum likelihood criterion. A discrete maximum penalized likelihood estimator is proposed and is shown to be consistent in the mean square error. A numerical implementation technique for the discrete solution is discussed and examples displayed. An extensive simulation study compares the integrated mean square error of the discrete and kernel estimators. The robustness of the discrete estimator is demonstrated graphically.
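The abstract does not spell out its automatic scaling-factor algorithm; as a stand-in, a standard data-driven reference rule (Silverman's rule of thumb, an assumption rather than the paper's method) can be sketched as:

```python
import statistics

def silverman_bandwidth(sample):
    """Silverman's rule of thumb for a Gaussian kernel:
    h = 0.9 * min(sigma, IQR / 1.34) * n**(-1/5)."""
    n = len(sample)
    sigma = statistics.stdev(sample)
    q1, _, q3 = statistics.quantiles(sample, n=4)  # quartile cut points
    return 0.9 * min(sigma, (q3 - q1) / 1.34) * n ** -0.2
```

Taking the minimum of the standard deviation and a rescaled interquartile range makes the rule robust to heavy tails and outliers in the sample.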
Bayesian wavelet-based image denoising using the Gauss-Hermite expansion.
Rahman, S M Mahbubur; Ahmad, M Omair; Swamy, M N S
2008-10-01
The probability density functions (PDFs) of the wavelet coefficients play a key role in many wavelet-based image processing algorithms, such as denoising. The conventional PDFs usually have a limited number of parameters that are calculated from the first few moments only. Consequently, such PDFs cannot be made to fit very well with the empirical PDF of the wavelet coefficients of an image. As a result, the shrinkage function utilizing any of these density functions provides a substandard denoising performance. In order for the probabilistic model of the image wavelet coefficients to be able to incorporate an appropriate number of parameters that are dependent on the higher order moments, a PDF using a series expansion in terms of the Hermite polynomials that are orthogonal with respect to the standard Gaussian weight function is introduced. A modification in the series function is introduced so that only a finite number of terms can be used to model the image wavelet coefficients, while ensuring that the resulting PDF is non-negative. It is shown that the proposed PDF matches the empirical one better than some of the standard ones, such as the generalized Gaussian or Bessel K-form PDF. A Bayesian image denoising technique is then proposed, wherein the new PDF is exploited to statistically model the subband as well as the local neighboring image wavelet coefficients. Experimental results on several test images demonstrate that the proposed denoising method, both in the subband-adaptive and locally adaptive conditions, provides a performance better than that of most of the methods that use PDFs with a limited number of parameters. PMID:18784025
Comparison of Parzen density and frequency histogram as estimators of probability density functions.
Glavinović, M I
1996-01-01
In neurobiology, and in other fields, the frequency histogram is a traditional tool for determining the probability density function (pdf) of random processes, although other methods have been shown to be more efficient as their estimators. In this study, the frequency histogram is compared with the Parzen density estimator, a method that consists of convolving each measurement with a weighting function of choice (Gaussian, rectangular, etc) and using their sum as an estimate of the pdf of the random process. The difference in their performance in evaluating two types of pdfs that occur commonly in quantal analysis (monomodal and multimodal with equidistant peaks) is demonstrated numerically by using the integrated square error criterion and assuming a knowledge of the "true" pdf. The error of the Parzen density estimates decreases faster as a function of the number of observations than that of the frequency histogram, indicating that they are asymptotically more efficient. A variety of "reasonable" weighting functions can provide similarly efficient Parzen density estimates, but their efficiency greatly depends on their width. The optimal widths determined using the integrated square error criterion, the harmonic analysis (applicable only to multimodal pdfs with equidistant peaks), and the "test graphs" (the graphs of the second derivatives of the Parzen density estimates that do not assume a knowledge of the "true" pdf, but depend on the distinction between the "essential features" of the pdf and the "random fluctuations") were compared and found to be similar. PMID:9019720
Double sampling to estimate density and population trends in birds
Bart, Jonathan; Earnst, Susan L.
2002-01-01
We present a method for estimating density of nesting birds based on double sampling. The approach involves surveying a large sample of plots using a rapid method such as uncorrected point counts, variable circular plot counts, or the recently suggested double-observer method. A subsample of those plots is also surveyed using intensive methods to determine actual density. The ratio of the mean count on those plots (using the rapid method) to the mean actual density (as determined by the intensive searches) is used to adjust results from the rapid method. The approach works well when results from the rapid method are highly correlated with actual density. We illustrate the method with three years of shorebird surveys from the tundra in northern Alaska. In the rapid method, surveyors covered ~10 ha h-1 and surveyed each plot a single time. The intensive surveys involved three thorough searches, required ~3 h ha-1, and took 20% of the study effort. Surveyors using the rapid method detected an average of 79% of birds present. That detection ratio was used to convert the index obtained in the rapid method into an essentially unbiased estimate of density. Trends estimated from several years of data would also be essentially unbiased. Other advantages of double sampling are that (1) the rapid method can be changed as new methods become available, (2) domains can be compared even if detection rates differ, (3) total population size can be estimated, and (4) valuable ancillary information (e.g. nest success) can be obtained on intensive plots with little additional effort. We suggest that double sampling be used to test the assumption that rapid methods, such as variable circular plot and double-observer methods, yield density estimates that are essentially unbiased. The feasibility of implementing double sampling in a range of habitats needs to be evaluated.
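The adjustment described, dividing the rapid-method mean by the detection ratio measured on the intensively searched subsample, is a classical ratio estimator. A minimal sketch with invented plot values (the 79% detection ratio echoes the abstract's figure):

```python
def double_sampling_density(rapid_counts, intensive_pairs):
    """Ratio estimator from double sampling. rapid_counts holds the
    rapid-method count for every surveyed plot; intensive_pairs holds
    (rapid_count, true_density) for the intensively searched subsample."""
    detection = (sum(r for r, _ in intensive_pairs) /
                 sum(d for _, d in intensive_pairs))
    mean_rapid = sum(rapid_counts) / len(rapid_counts)
    return mean_rapid / detection

# Rapid counts of 79 with a measured detection ratio of 0.79 adjust to 100:
estimate = double_sampling_density([79.0] * 10,
                                   [(79.0, 100.0), (79.0, 100.0)])
```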
Estimation of volumetric breast density for breast cancer risk prediction
NASA Astrophysics Data System (ADS)
Pawluczyk, Olga; Yaffe, Martin J.; Boyd, Norman F.; Jong, Roberta A.
2000-04-01
Mammographic density (MD) has been shown to be a strong risk predictor for breast cancer. Compared to subjective assessment by a radiologist, computer-aided analysis of digitized mammograms provides a quantitative and more reproducible method for assessing breast density. However, the current methods of estimating breast density based on the area of bright signal in a mammogram do not reflect the true, volumetric quantity of dense tissue in the breast. A computerized method to estimate the amount of radiographically dense tissue in the overall volume of the breast has been developed to provide an automatic, user-independent tool for breast cancer risk assessment. The procedure for volumetric density estimation consists of first correcting the image for inhomogeneity, then performing a volume density calculation. First, optical sensitometry is used to convert all images to the logarithm of relative exposure (LRE), in order to simplify the image correction operations. The field non-uniformity correction, which takes into account heel effect, inverse square law, path obliquity and intrinsic field and grid non-uniformity, is obtained by imaging a spherical section PMMA phantom. The processed LRE image of the phantom is then used as a correction offset for actual mammograms. From information about the thickness and placement of the breast, as well as the parameters of a breast-like calibration step wedge placed in the mammogram, MD of the breast is calculated. Post processing and a simple calibration phantom enable user-independent, reliable and repeatable volumetric estimation of density in breast-equivalent phantoms. Initial results obtained on known density phantoms show the estimation to vary less than 5% in MD from the actual value. This can be compared to estimated mammographic density differences of 30% between the true and non-corrected values.
Since a more simplistic breast density measurement based on the projected area has been shown to be a strong indicator of breast cancer risk (RR equals 4), it is believed that the current volumetric technique will provide an even better indicator. Such an indicator can be used in determination of the method and frequency of breast cancer screening, and might prove useful in measuring the effect of intervention measures such as drug therapy or dietary change on breast cancer risk.
Estimating cosmic velocity fields from density fields and tidal tensors
NASA Astrophysics Data System (ADS)
Kitaura, Francisco-Shu; Angulo, Raul E.; Hoffman, Yehuda; Gottlöber, Stefan
2012-10-01
In this work we investigate the non-linear and non-local relation between cosmological density and peculiar velocity fields. Our goal is to provide an algorithm for the reconstruction of the non-linear velocity field from the fully non-linear density. We find that including the gravitational tidal field tensor using second-order Lagrangian perturbation theory based upon an estimate of the linear component of the non-linear density field significantly improves the estimate of the cosmic flow in comparison to linear theory not only in the low density, but also and more dramatically in the high-density regions. In particular we test two estimates of the linear component: the lognormal model and the iterative Lagrangian linearization. The present approach relies on a rigorous higher order Lagrangian perturbation theory analysis which incorporates a non-local relation. It does not require additional fitting from simulations being in this sense parameter free, it is independent of statistical-geometrical optimization and it is straightforward and efficient to compute. The method is demonstrated to yield an unbiased estimator of the velocity field on scales ≳5 h-1 Mpc with closely Gaussian distributed errors. Moreover, the statistics of the divergence of the peculiar velocity field is extremely well recovered showing a good agreement with the true one from N-body simulations. The typical errors of about 10 km s-1 (1σ confidence intervals) are reduced by more than 80 per cent with respect to linear theory in the scale range between 5 and 10 h-1 Mpc in high-density regions (δ > 2). We also find that iterative Lagrangian linearization is significantly superior in the low-density regime with respect to the lognormal model.
DENSITY AND FAILURE RATE ESTIMATION WITH APPLICATION TO RELIABILITY
Wang, Jane-Ling
DENSITY AND FAILURE RATE ESTIMATION WITH APPLICATION TO RELIABILITY. Contribution to the Encyclopedia of Statistics in Quality and Reliability, Article ID: eqr449, February 26, 2007, Hans-Georg M... of failures in reliability and quality control. This article focuses on nonparametric approaches
A complex exponential Fourier transform approach to gradient density estimation
Rangarajan, Anand
A complex exponential Fourier transform approach to gradient density estimation, Karthik S... transformation of a uniformly distributed random variable) defined on a closed, bounded interval... transformation Y = S(X) where X is uniformly distributed] with the normalized power spectrum of exp(i...
Practical Bayesian Density Estimation Using Mixtures Of Normals
Kathryn Roeder
1995-01-01
this paper, we propose some solutions to these problems. Our goal is to come up with a simple, practical method for estimating the density. This is an interesting problem in its own right, as well as a first step towards solving other inference problems, such as providing more flexible distributions in hierarchical models. To see why the posterior is improper under the usual reference prior,
Density estimation in tiger populations: combining information for strong inference.
Gopalaswamy, Arjun M; Royle, J Andrew; Delampady, Mohan; Nichols, James D; Karanth, K Ullas; Macdonald, David W
2012-07-01
A productive way forward in studies of animal populations is to efficiently make use of all the information available, either as raw data or as published sources, on critical parameters of interest. In this study, we demonstrate two approaches to the use of multiple sources of information on a parameter of fundamental interest to ecologists: animal density. The first approach produces estimates simultaneously from two different sources of data. The second approach was developed for situations in which initial data collection and analysis are followed up by subsequent data collection and prior knowledge is updated with new data using a stepwise process. Both approaches are used to estimate density of a rare and elusive predator, the tiger, by combining photographic and fecal DNA spatial capture-recapture data. The model, which combined information, provided the most precise estimate of density (8.5 +/- 1.95 tigers/100 km2 [posterior mean +/- SD]) relative to a model that utilized only one data source (photographic, 12.02 +/- 3.02 tigers/100 km2 and fecal DNA, 6.65 +/- 2.37 tigers/100 km2). Our study demonstrates that, by accounting for multiple sources of available information, estimates of animal density can be significantly improved. PMID:22919919
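As a back-of-the-envelope check on why combining sources tightens the estimate, an inverse-variance (precision-weighted) combination of the two single-source figures gives a similar, though not identical, answer to the paper's joint model, which properly shares spatial capture-recapture structure rather than merely averaging:

```python
def combine_estimates(est_a, sd_a, est_b, sd_b):
    """Precision-weighted (inverse-variance) combination of two independent
    estimates; returns the combined mean and its standard deviation."""
    w_a, w_b = 1.0 / sd_a ** 2, 1.0 / sd_b ** 2
    mean = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    sd = (w_a + w_b) ** -0.5
    return mean, sd

# Plugging in the abstract's single-source figures (12.02 +/- 3.02
# photographic, 6.65 +/- 2.37 fecal DNA) purely as an illustration:
combined, combined_sd = combine_estimates(12.02, 3.02, 6.65, 2.37)
```

The combined standard deviation is smaller than either input, illustrating the precision gain from pooling; the exact numbers differ from the paper's posterior because this sketch ignores the shared spatial model.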
Analysis of Distributed Algorithms for Density Estimation in VANETs (Poster)
Özkasap, Öznur
, such as pressure pads, inductive loop detectors, roadside radar, cameras and wireless sensors... To calculate density, our study is inspired by the mechanisms proposed for system size estimation in peer-to-peer... of the methods rely on building an infrastructure, such as pressure pads, inductive loop detectors deployed under...
Contributed Paper Estimating the Density of Honeybee Colonies across
Paxton, Robert
Contributed Paper: Estimating the Density of Honeybee Colonies across Their Natural Range to Fill... University of Pretoria, Pretoria 0002, South Africa; Honeybee Research Section, ARC-Plant Protection Research... the demography of the western honeybee (Apis mellifera) has not been considered by conservationists because...
Policy Search via Density Estimation Andrew Y. Ng
Parr, Ronald
Policy Search via Density Estimation, Andrew Y. Ng, Computer Science Division, U.C. Berkeley... (POMDP). Following several other authors, our approach is based on searching in parameterized families of policies (for example, via gradient descent) to optimize solution quality. However, rather than trying...
Face Value: Towards Robust Estimates of Snow Leopard Densities.
Alexander, Justine S; Gopalaswamy, Arjun M; Shi, Kun; Riordan, Philip
2015-01-01
When densities of large carnivores fall below certain thresholds, dramatic ecological effects can follow, leading to oversimplified ecosystems. Understanding the population status of such species remains a major challenge as they occur in low densities and their ranges are wide. This paper describes the use of non-invasive data collection techniques combined with recent spatial capture-recapture methods to estimate the density of snow leopards Panthera uncia. It also investigates the influence of environmental and human activity indicators on their spatial distribution. A total of 60 camera traps were systematically set up during a three-month period over a 480 km2 study area in Qilianshan National Nature Reserve, Gansu Province, China. We recorded 76 separate snow leopard captures over 2,906 trap-days, representing an average capture success of 2.62 captures/100 trap-days. We identified a total of 20 unique individuals from photographs and estimated snow leopard density at 3.31 (SE = 1.01) individuals per 100 km2. Results of our simulation exercise indicate that our estimates from the spatial capture-recapture models were not optimal with respect to bias and precision (RMSEs for density parameters less than or equal to 0.87). Our results underline the critical challenge of achieving sufficient sample sizes of snow leopard captures and recaptures. Possible performance improvements are discussed, principally by optimising effective camera capture and photographic data quality. PMID:26322682
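The headline rates in this abstract follow from simple arithmetic that is easy to verify (the comparison with a naive count-per-area figure below is our own illustration, not a computation from the paper):

```python
def per_100(count, denom):
    """Scale a raw rate to 'per 100 units' (trap-days, km2, ...)."""
    return 100.0 * count / denom

capture_success = per_100(76, 2906)  # captures per 100 trap-days, ~2.62
naive_density = per_100(20, 480)     # photographed individuals per 100 km2
# The SCR density estimate (3.31 per 100 km2) sits below the naive
# count/area figure because the model's effective sampling area extends
# beyond the 480 km2 camera grid.
```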
Extracting galactic structure parameters from multivariated density estimation
NASA Technical Reports Server (NTRS)
Chen, B.; Creze, M.; Robin, A.; Bienayme, O.
1992-01-01
Multivariate statistical analysis, including cluster analysis (unsupervised classification), discriminant analysis (supervised classification), principal component analysis (a dimensionality-reduction method), and nonparametric density estimation, has been successfully used to search for meaningful associations in the 5-dimensional space of observables between observed points and sets of simulated points generated from a synthetic approach to galaxy modelling. These methodologies can be applied as new tools to obtain information about hidden structure that would otherwise be unrecognizable, and to place important constraints on the space distribution of various stellar populations in the Milky Way. In this paper, we concentrate on illustrating how to use nonparametric density estimation to substitute for the true densities in both the simulated and real samples in the five-dimensional space. In order to fit model-predicted densities to reality, we derive a set of equations with n lines (where n is the total number of observed points) and m unknown parameters (where m is the number of predefined groups). A least-squares estimation allows us to determine the density law of the different groups and components in the Galaxy. The output of our software, which can be used in many research fields, also gives the systematic error between the model and the observation via Bayes' rule.
Wavelet-based analogous phase scintillation index for high latitudes
NASA Astrophysics Data System (ADS)
Ahmed, A.; Tiwari, R.; Strangeways, H. J.; Dlay, S.; Johnsen, M. G.
2015-08-01
The Global Positioning System (GPS) performance at high latitudes can be severely affected by the ionospheric scintillation due to the presence of small-scale time-varying electron density irregularities. In this paper, an improved analogous phase scintillation index derived using the wavelet-transform-based filtering technique is presented to represent the effects of scintillation regionally at European high latitudes. The improved analogous phase index is then compared with the original analogous phase index and the phase scintillation index for performance comparison using 1 year of data from Trondheim, Norway (63.41°N, 10.4°E). This index provides samples at a 1 min rate using raw total electron content (TEC) data at 1 Hz for the prediction of phase scintillation compared to the scintillation monitoring receivers (such as NovAtel Global Navigation Satellite Systems Ionospheric Scintillation and TEC Monitor receivers) which operate at 50 Hz rate and are thus rather computationally intensive. The estimation of phase scintillation effects using high sample rate data makes the improved analogous phase index a suitable candidate which can be used in regional geodetic dual-frequency-based GPS receivers to efficiently update the tracking loop parameters based on tracking jitter variance.
Ionospheric electron density profile estimation using commercial AM broadcast signals
NASA Astrophysics Data System (ADS)
Yu, De; Ma, Hong; Cheng, Li; Li, Yang; Zhang, Yufeng; Chen, Wenjun
2015-08-01
A new method for estimating the bottom electron density profile by using commercial AM broadcast signals as non-cooperative signals is presented in this paper. Without requiring any dedicated transmitters, the required input data are the measured elevation angles of signals transmitted from the known locations of broadcast stations. The input data are inverted for the QPS model parameters depicting the electron density profile of the signal's reflection area by using a probabilistic inversion technique. This method has been validated on synthesized data and used with the real data provided by an HF direction-finding system situated near the city of Wuhan. The estimated parameters obtained by the proposed method have been compared with vertical ionosonde data and have been used to locate the Shijiazhuang broadcast station. The simulation and experimental results indicate that the proposed ionospheric sounding method is feasible for obtaining useful electron density profiles.
An automated approach for estimation of breast density.
Heine, John J; Carston, Michael J; Scott, Christopher G; Brandt, Kathleen R; Wu, Fang-Fang; Pankratz, Vernon Shane; Sellers, Thomas A; Vachon, Celine M
2008-11-01
Breast density is a strong risk factor for breast cancer; however, no standard assessment method exists. An automated breast density method was modified and compared with a semi-automated, user-assisted thresholding method (Cumulus method) and the Breast Imaging Reporting and Data System four-category tissue composition measure for their ability to predict future breast cancer risk. The three estimation methods were evaluated in a matched breast cancer case-control study (n = 372 cases and n = 713 controls) at the Mayo Clinic using digitized film mammograms. Mammograms from the craniocaudal view of the noncancerous breast were acquired on average 7 years before diagnosis. Two controls with no previous history of breast cancer from the screening practice were matched to each case on age, number of previous screening mammograms, final screening exam date, menopausal status at this date, interval between earliest and latest available mammograms, and residence. Both Pearson linear correlation (R) and Spearman rank correlation (r) coefficients were used for comparing the three methods as appropriate. Conditional logistic regression was used to estimate the risk for breast cancer (odds ratios and 95% confidence intervals) associated with the quartiles of percent breast density (automated breast density method, Cumulus method) or Breast Imaging Reporting and Data System categories. The area under the receiver operator characteristic curve was estimated and used to compare the discriminatory capabilities of each approach. The continuous measures (automated breast density method and Cumulus method) were highly correlated with each other (R = 0.70) but less with Breast Imaging Reporting and Data System (r = 0.49 for automated breast density method and r = 0.57 for Cumulus method). 
Risk estimates associated with the lowest to highest quartiles of automated breast density method were greater in magnitude [odds ratios: 1.0 (reference), 2.3, 3.0, 5.2; P trend < 0.001] than the corresponding quartiles for the Cumulus method [odds ratios: 1.0 (reference), 1.7, 2.1, and 3.8; P trend < 0.001] and Breast Imaging Reporting and Data System [odds ratios: 1.0 (reference), 1.6, 1.5, 2.6; P trend < 0.001] method. However, all methods similarly discriminated between case and control status; areas under the receiver operator characteristic curve were 0.64, 0.63, and 0.61 for automated breast density method, Cumulus method, and Breast Imaging Reporting and Data System, respectively. The automated breast density method is a viable option for quantitatively assessing breast density from digitized film mammograms. PMID:18990749
Estimation of Enceladus Plume Density Using Cassini Flight Data
NASA Technical Reports Server (NTRS)
Wang, Eric K.; Lee, Allan Y.
2011-01-01
The Cassini spacecraft was launched on October 15, 1997 by a Titan 4B launch vehicle. After an interplanetary cruise of almost seven years, it arrived at Saturn on June 30, 2004. In 2005, Cassini completed three flybys of Enceladus, a small, icy satellite of Saturn. Observations made during these flybys confirmed the existence of water vapor plumes in the south polar region of Enceladus. Five additional low-altitude flybys of Enceladus were successfully executed in 2008-9 to better characterize these watery plumes. During some of these Enceladus flybys, the spacecraft attitude was controlled by a set of three reaction wheels. When the disturbance torque imparted on the spacecraft was predicted to exceed the control authority of the reaction wheels, thrusters were used to control the spacecraft attitude. Using telemetry data of reaction wheel rates or thruster on-times collected from four low-altitude Enceladus flybys (in 2008-10), one can reconstruct the time histories of the Enceladus plume jet density. The 1 sigma uncertainty of the estimated density is 5.9-6.7% (depending on the density estimation methodology employed). These plume density estimates could be used to confirm measurements made by other onboard science instruments and to support the modeling of Enceladus plume jets.
Diagnostically lossless medical image compression via wavelet-based background removal
Qi, Xiaojun
Lossless compression and removal of the background are essential in archival and communication of medical images. In this paper, an automated wavelet-based approach ensures that the retained region contains the entire diagnostic region of the image. Histogram analyses are applied to the non-diagnostic background.
WAVELET BASED CHARACTERIZATION OF ACOUSTIC ATTENUATION IN POLYMERS USING LAMB WAVE MODES
Boyer, Edmond
Polymers have been used in a wide range of applications, from the fabrication of sophisticated medical equipment to the manufacturing of aircraft. The design advantages of using polymers include their high
WAVELET-BASED IMAGE COMPRESSION ANTI-FORENSICS Matthew C. Stamm and K. J. Ray Liu
Liu, K. J. Ray
Because digital images can be modified with relative ease, considerable effort has been spent developing image forensic algorithms. Comparatively little attention has been given to anti-forensic operations designed to mislead forensic techniques. In this paper, we
A Wavelet-Based Approach to Improve the Efficiency of Multi-Level Surprise Mining
Shahabi, Cyrus
In this paper, we show how a wavelet-based data structure, the 2D TSA-tree (Trend and Surprise Abstractions Tree), can be utilized efficiently to detect surprises in spatio-temporal data at different levels. To verify the effectiveness of our proposed methods, we evaluated our 2D TSA-tree using real and synthetic data.
Wavelet-Based Neural Pattern Analyzer for Behaviorally Significant Burst Pattern Recognition
Bhunia, Swarup
Such studies rely on accurately recording neural data from multiple neurons and detecting behaviorally meaningful burst patterns, such as those of swallowing behaviors in the animal [3]. The experimental animal in our case is an invertebrate marine mollusk.
Adapted Convex Optimization Algorithm for Wavelet-Based Dynamic PET Reconstruction
Paris-Sud XI, Université de
This work deals with Dynamic Positron Emission Tomography (PET) data reconstruction. The effectiveness of this approach is shown with simulated dynamic PET data; comparative results are also provided.
Wavelet-Based Multiresolution Analysis of Wivenhoe Dam Water Temperatures (Water Resources Research)
Percival, D. B.; Lennox, S. M.; Wang, Y.-G.; Darnell, R. E.
In addition to temperature (the focus of this paper), the water quality indicators recorded by the profiler include
ISI/ICI COMPARISON OF DMT AND WAVELET BASED MCM SCHEMES FOR TIME-INVARIANT CHANNELS
Zimmermann, Georg
Currently used FFT-based MCM schemes (DMT) outperform those based on wavelets regardless of the channel environment. DMT, a variant of OFDM, is standardized for asymmetrical transmission over digital subscriber line systems.
A Robust Adaptive Wavelet-based Method for Classification of Meningioma Histology Images
Rajpoot, Nasir
Classification of samples is an important problem in the domain of histological image classification. This issue is inherent to the field due to the high complexity of histology image data. A technique that provides good
Wavelet-based de-noising of positron emission tomography scans
Wolfgang Stefan; K ewei Chen; Hongbin Guo; Svetlana Roudenko
A method to improve the Signal to Noise Ratio of Positron Emission Tomography scans is presented. A wavelet-based image decomposition technique decomposes an image into two parts, one which primarily contains the desired restored image and the other primarily the remaining unwanted portion of the image. Because the method is based on a texture extraction model that identifies the desired
WAVELET-BASED ULTRASOUND IMAGE DENOISING USING AN ALPHA-STABLE PRIOR PROBABILITY MODEL
Tsakalides, Panagiotis
Ultrasound images are best described by alpha-stable distributions, a family of heavy-tailed distributions. Based on the alpha-stable model, we develop a noise-removal processor that performs a non-linear operation
Bivariate shrinkage functions for wavelet-based denoising exploiting interscale dependency
Levent Sendur; Ivan W. Selesnick
2002-01-01
Most simple nonlinear thresholding rules for wavelet-based denoising assume that the wavelet coefficients are independent. However, wavelet coefficients of natural images have significant dependencies. We only consider the dependencies between the coefficients and their parents in detail. For this purpose, new non-Gaussian bivariate distributions are proposed, and corresponding nonlinear threshold functions (shrinkage functions) are derived from the models using Bayesian
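The bivariate shrinkage rule from this line of work has a compact closed form; the sketch below assumes the commonly cited MAP estimator (with y2 the parent coefficient, sigma_n the noise standard deviation, and sigma the marginal signal standard deviation) rather than reproducing the paper's derivation verbatim.

```python
import math

def bishrink(y1, y2, sigma_n, sigma):
    """Bivariate MAP shrinkage of a wavelet coefficient y1 given its parent y2.

    Sketch of the estimator associated with a non-Gaussian bivariate model:
    the child coefficient is attenuated according to the joint child-parent
    magnitude, which is how the interscale dependency enters.
    """
    r = math.sqrt(y1 * y1 + y2 * y2)
    if r == 0.0:
        return 0.0
    gain = max(r - math.sqrt(3.0) * sigma_n ** 2 / sigma, 0.0) / r
    return gain * y1
```

Small coefficients whose parents are also small are suppressed entirely, while large coefficients pass nearly unchanged.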
Can modeling improve estimation of desert tortoise population densities?
Nussear, K.E.; Tracy, C.R.
2007-01-01
The federally listed desert tortoise (Gopherus agassizii) is currently monitored using distance sampling to estimate population densities. Distance sampling, as with many other techniques for estimating population density, assumes that it is possible to quantify the proportion of animals available to be counted in any census. Because desert tortoises spend much of their life in burrows, and the proportion of tortoises in burrows at any time can be extremely variable, this assumption is difficult to meet. This proportion of animals available to be counted is used as a correction factor (g0) in distance sampling and has been estimated from daily censuses of small populations of tortoises (6-12 individuals). These censuses are costly and produce imprecise estimates of g0 due to small sample sizes. We used data on tortoise activity from a large (N = 150) experimental population to model activity as a function of the biophysical attributes of the environment, but these models did not improve the precision of estimates from the focal populations. Thus, to evaluate how much of the variance in tortoise activity is apparently not predictable, we assessed whether activity on any particular day can predict activity on subsequent days with essentially identical environmental conditions. Tortoise activity was only weakly correlated on consecutive days, indicating that behavior was not repeatable or consistent among days with similar physical environments. © 2007 by the Ecological Society of America.
Some Bayesian statistical techniques useful in estimating frequency and density
Johnson, D.H.
1977-01-01
This paper presents some elementary applications of Bayesian statistics to problems faced by wildlife biologists. Bayesian confidence limits for frequency of occurrence are shown to be generally superior to classical confidence limits. Population density can be estimated from frequency data if the species is sparsely distributed relative to the size of the sample plot. For other situations, limits are developed based on the normal distribution and prior knowledge that the density is non-negative, which insures that the lower confidence limit is non-negative. Conditions are described under which Bayesian confidence limits are superior to those calculated with classical methods; examples are also given on how prior knowledge of the density can be used to sharpen inferences drawn from a new sample.
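The frequency-of-occurrence case admits a compact numerical illustration (not taken from the paper): with a uniform prior, the posterior for an occurrence probability after observing k occupied plots out of n is Beta(k+1, n−k+1), and a central credible interval can be read off a gridded posterior without any special functions.

```python
def beta_credible_interval(k, n, level=0.95, steps=100_000):
    """Central credible interval for a frequency of occurrence.

    Assumes a uniform Beta(1, 1) prior, so the posterior is Beta(k+1, n-k+1).
    The posterior is evaluated on a uniform grid and the interval endpoints
    are read off the numerical CDF.
    """
    a, b = k + 1, n - k + 1
    xs = [(i + 0.5) / steps for i in range(steps)]
    dens = [x ** (a - 1) * (1 - x) ** (b - 1) for x in xs]  # unnormalized
    total = sum(dens)
    tail = (1 - level) / 2
    cdf, lo, hi = 0.0, None, None
    for x, d in zip(xs, dens):
        cdf += d / total
        if lo is None and cdf >= tail:
            lo = x
        if hi is None and cdf >= 1 - tail:
            hi = x
    return lo, hi
```

For example, 10 occupied plots out of 50 give a posterior mean of 11/52 ≈ 0.21 with a 95% interval of roughly (0.11, 0.33); such intervals respect the [0, 1] support, unlike classical normal-approximation limits.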
Diagnosing osteoporosis: A new perspective on estimating bone density
NASA Astrophysics Data System (ADS)
Cassia-Moura, R.; Ramos, A. D.; Sousa, C. S.; Nascimento, T. A. S.; Valença, M. M.; Coelho, L. C. B. B.; Melo, S. B.
2007-07-01
Osteoporosis may be characterized by low bone density and its significance is expected to grow as the population of the world both increases and ages. Our purpose here is to model human bone mineral density estimated through dual-energy x-ray absorptiometry, using local volumetric distance spline interpolants. Interpolating the values means the construction of a function F(x,y,z) that mimics the relationship implied by the data (xi,yi,zi;fi), in such a way that F(xi,yi,zi)=fi, i=1,2,…,n, where x,y and z represent, respectively, age, weight and height. This strategy greatly enhances the ability to accurately express the patient's bone density measurements, with the potential to become a framework for bone densitometry in clinical practice. The usefulness of our model is demonstrated in 424 patients and the relevance of our results for diagnosing osteoporosis is discussed.
Estimating black bear density using DNA data from hair snares
Gardner, B.; Royle, J.A.; Wegan, M.T.; Rainbolt, R.E.; Curtis, P.D.
2010-01-01
DNA-based mark-recapture has become a methodological cornerstone of research focused on bear species. The objective of such studies is often to estimate population size; however, doing so is frequently complicated by movement of individual bears. Movement affects the probability of detection and the assumption of closure of the population required in most models. To mitigate the bias caused by movement of individuals, population size and density estimates are often adjusted using ad hoc methods, including buffering the minimum polygon of the trapping array. We used a hierarchical, spatial capture-recapture model that contains explicit components for the spatial-point process that governs the distribution of individuals and their exposure to (via movement), and detection by, traps. We modeled detection probability as a function of each individual's distance to the trap and an indicator variable for previous capture to account for possible behavioral responses. We applied our model to a 2006 hair-snare study of a black bear (Ursus americanus) population in northern New York, USA. Based on the microsatellite marker analysis of collected hair samples, 47 individuals were identified. We estimated mean density at 0.20 bears/km2. A positive estimate of the indicator variable suggests that bears are attracted to baited sites; therefore, including a trap-dependence covariate is important when using bait to attract individuals. Bayesian analysis of the model was implemented in WinBUGS, and we provide the model specification. The model can be applied to any spatially organized trapping array (hair snares, camera traps, mist nets, etc.) to estimate density and can also account for heterogeneity and covariate information at the trap or individual level. © The Wildlife Society.
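The detection model described above (a distance effect plus a previous-capture indicator) can be sketched with a half-normal distance kernel and a logit-scale behavioral effect; the functional form and all parameter values here are illustrative assumptions, not the study's fitted model.

```python
import math

def detection_prob(dist_km, prev_capture, p0=0.05, sigma=2.0, beta=0.5):
    """Probability a trap detects an individual on one sampling occasion.

    p0: assumed baseline detection probability at distance zero
    sigma: assumed spatial scale of the half-normal kernel, in km
    beta: assumed logit-scale shift when the individual was captured before
          (a positive beta models attraction to baited sites)
    """
    logit = math.log(p0 / (1.0 - p0)) + beta * prev_capture
    p_base = 1.0 / (1.0 + math.exp(-logit))
    return p_base * math.exp(-dist_km ** 2 / (2.0 * sigma ** 2))
```

Detection decays smoothly with distance from the trap, and a previously captured individual has an elevated baseline, mirroring the positive trap-dependence estimate reported above.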
Structural Reliability Using Probability Density Estimation Methods Within NESSUS
NASA Technical Reports Server (NTRS)
Chamis, Christos C. (Technical Monitor); Godines, Cody Ric
2003-01-01
A reliability analysis studies a mathematical model of a physical system taking into account uncertainties of design variables, and common results are estimations of a response density, which also implies estimations of its parameters. Some common density parameters include the mean value, the standard deviation, and specific percentile(s) of the response, which are measures of central tendency, variation, and probability regions, respectively. Reliability analyses are important since the results can lead to different designs by calculating the probability of observing safe responses in each of the proposed designs. All of this is done at the expense of added computational time as compared to a single deterministic analysis, which results in one value of the response out of the many that make up the density of the response. Sampling methods, such as Monte Carlo (MC) and Latin hypercube sampling (LHS), can be used to perform reliability analyses and can compute nonlinear response density parameters even if the response depends on many random variables. Hence, both methods are very robust; however, they are computationally expensive to use in the estimation of the response density parameters. Both are two of the 13 stochastic methods contained within the Numerical Evaluation of Stochastic Structures Under Stress (NESSUS) program. NESSUS is a probabilistic finite element analysis (FEA) program that was developed through funding from NASA Glenn Research Center (GRC). It has the additional capability of being linked to other analysis programs; therefore, probabilistic fluid dynamics, fracture mechanics, and heat transfer are only a few of the analyses possible with this software. The LHS method is the newest addition to the stochastic methods within NESSUS, and part of this work was to enhance NESSUS with the LHS method. 
The new LHS module is complete, has been successfully integrated with NESSUS, and has been used to study four different test cases proposed by the Society of Automotive Engineers (SAE). The test cases compare different probabilistic methods within NESSUS, because it is important that a user can have confidence that estimates of stochastic parameters of a response will be within an acceptable error limit. For each response, the mean, standard deviation, and 0.99 percentile are repeatedly estimated, which allows confidence statements to be made for each parameter estimated and for each method. Thus, the ability of several stochastic methods to efficiently and accurately estimate density parameters is compared using four valid test cases. While all of the reliability methods performed quite well, the new LHS module within NESSUS was found to have a lower estimation error than MC when estimating the mean, standard deviation, and 0.99 percentile of the four different stochastic responses. Also, LHS required fewer calculations than MC to obtain low-error answers with a high degree of confidence. It can therefore be stated that NESSUS is an important reliability tool with a variety of sound probabilistic methods a user can employ, and the new LHS module is a valuable enhancement of the program.
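The contrast between plain Monte Carlo and Latin hypercube sampling described above can be demonstrated on a toy response; the response function and sample sizes here are illustrative stand-ins, not NESSUS test cases.

```python
import random
import statistics

def lhs_uniform(n, rng):
    """One-dimensional Latin hypercube sample on [0, 1]: the interval is cut
    into n equal strata and exactly one point is drawn from each stratum,
    visited in random order."""
    return [(i + rng.random()) / n for i in rng.sample(range(n), n)]

rng = random.Random(0)
n = 1000
response = lambda u: u ** 2   # toy "response model"; its true mean is 1/3

mc_mean = statistics.mean(response(rng.random()) for _ in range(n))
lhs_mean = statistics.mean(response(u) for u in lhs_uniform(n, rng))
```

Because each stratum is guaranteed to be sampled, the LHS mean typically lands far closer to the true value of 1/3 than the MC mean at the same sample size, mirroring the lower estimation error reported for the LHS module.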
Accurate photometric redshift probability density estimation - method comparison and application
Rau, Markus Michael; Brimioulle, Fabrice; Frank, Eibe; Friedrich, Oliver; Gruen, Daniel; Hoyle, Ben
2015-01-01
We introduce an ordinal classification algorithm for photometric redshift estimation, which vastly improves the reconstruction of photometric redshift probability density functions (PDFs) for individual galaxies and galaxy samples. As a use case we apply our method to CFHTLS galaxies. The ordinal classification algorithm treats distinct redshift bins as ordered values, which improves the quality of photometric redshift PDFs, compared with non-ordinal classification architectures. We also propose a new single value point estimate of the galaxy redshift, that can be used to estimate the full redshift PDF of a galaxy sample. This method is competitive in terms of accuracy with contemporary algorithms, which stack the full redshift PDFs of all galaxies in the sample, but requires orders of magnitudes less storage space. The methods described in this paper greatly improve the log-likelihood of individual object redshift PDFs, when compared with a popular Neural Network code (ANNz). In our use case, this improvemen...
Effect of packing density on strain estimation by Fry method
NASA Astrophysics Data System (ADS)
Srivastava, Deepak; Ojha, Arun
2015-04-01
Fry method is a graphical technique that uses relative movement of material points, typically the grain centres or centroids, and yields the finite strain ellipse as the central vacancy of a point distribution. Application of the Fry method assumes an anticlustered and isotropic grain centre distribution in undistorted samples. This assumption is, however, difficult to test in practice. As an alternative, the sedimentological degree of sorting is routinely used as an approximation for the degree of clustering and anisotropy. The effect of sorting on the Fry method has already been explored by earlier workers. This study tests the effect of the tightness of packing, the packing density, which equals the ratio of the area occupied by all the grains to the total area of the sample. A practical advantage of using the degree of sorting or the packing density is that these parameters, unlike the degree of clustering or anisotropy, do not vary during a constant-volume homogeneous distortion. Using computer graphics simulations and programming, we approach the issue of packing density in four steps: (i) generation of several sets of random point distributions such that each set has the same degree of sorting but differs from the other sets with respect to packing density, (ii) two-dimensional homogeneous distortion of each point set by various known strain ratios and orientations, (iii) estimation of strain in each distorted point set by the Fry method, and (iv) error estimation by comparing the known strain with that given by the Fry method. Both the absolute errors and the relative root mean squared errors give consistent results. For a given degree of sorting, the Fry method gives better results in samples having greater than 30% packing density. This is because the grain centre distributions show stronger clustering and a greater degree of anisotropy with decreasing packing density. 
As compared to the degree of sorting alone, a combination of the degree of sorting and the packing density is a more useful proxy for testing the degree of anisotropy and clustering in a point distribution.
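The packing density used above is simply the grain-area fraction of the sample; a minimal computation with synthetic circular grains (not data from the paper) makes the definition concrete.

```python
import math

def packing_density(radii, sample_area):
    """Fraction of the sample area occupied by grains (modelled as discs)."""
    return sum(math.pi * r * r for r in radii) / sample_area

# 100 unit-radius grains in a sample of area 1000 square units.
pd = packing_density([1.0] * 100, 1000.0)   # pi/10 ≈ 0.314, i.e. ~31%
```

A sample like this one, at roughly 31% packing density, sits just above the 30% threshold where the Fry method was found to perform well.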
Estimation of Volumetric Breast Density from Digital Mammograms
NASA Astrophysics Data System (ADS)
Alonzo-Proulx, Olivier
Mammographic breast density (MBD) is a strong risk factor for developing breast cancer. MBD is typically estimated by manually selecting the area occupied by the dense tissue on a mammogram. There is interest in measuring the volume of dense tissue, or volumetric breast density (VBD), as it could potentially be a stronger risk factor. This dissertation presents and validates an algorithm to measure the VBD from digital mammograms. The algorithm is based on an empirical calibration of the mammography system, supplemented by physical modeling of x-ray imaging that includes the effects of beam polychromaticity, scattered radiation, anti-scatter grid and detector glare. It also includes a method to estimate the compressed breast thickness as a function of the compression force, and a method to estimate the thickness of the breast outside of the compressed region. The algorithm was tested on 26 simulated mammograms obtained from computed tomography images, themselves deformed to mimic the effects of compression. This allowed the determination of the baseline accuracy of the algorithm. The algorithm was also used on 55 087 clinical digital mammograms, which allowed for the determination of the general characteristics of VBD and breast volume, as well as their variation as a function of age and time. The algorithm was also validated against a set of 80 magnetic resonance images, and compared against the area method on 2688 images. A preliminary study comparing association of breast cancer risk with VBD and MBD was also performed, indicating that VBD is a stronger risk factor. The algorithm was found to be accurate, generating quantitative density measurements rapidly and automatically. It can be extended to any digital mammography system, provided that the compression thickness of the breast can be determined accurately.
Accurate photometric redshift probability density estimation - method comparison and application
NASA Astrophysics Data System (ADS)
Rau, Markus Michael; Seitz, Stella; Brimioulle, Fabrice; Frank, Eibe; Friedrich, Oliver; Gruen, Daniel; Hoyle, Ben
2015-10-01
We introduce an ordinal classification algorithm for photometric redshift estimation, which significantly improves the reconstruction of photometric redshift probability density functions (PDFs) for individual galaxies and galaxy samples. As a use case we apply our method to CFHTLS galaxies. The ordinal classification algorithm treats distinct redshift bins as ordered values, which improves the quality of photometric redshift PDFs, compared with non-ordinal classification architectures. We also propose a new single value point estimate of the galaxy redshift, which can be used to estimate the full redshift PDF of a galaxy sample. This method is competitive in terms of accuracy with contemporary algorithms, which stack the full redshift PDFs of all galaxies in the sample, but requires orders of magnitude less storage space. The methods described in this paper greatly improve the log-likelihood of individual object redshift PDFs, when compared with a popular neural network code (ANNz). In our use case, this improvement reaches 50 per cent for high-redshift objects (z ≳ 0.75). We show that using these more accurate photometric redshift PDFs will lead to a reduction in the systematic biases by up to a factor of 4, when compared with less accurate PDFs obtained from commonly used methods. The cosmological analyses we examine and find improvement upon are the following: gravitational lensing cluster mass estimates, modelling of angular correlation functions and modelling of cosmic shear correlation functions.
Oh, JungHwan
Publications listed include: MPEG-7 Scheme Based Embedded Multimedia Database Management System; Wavelet Based Image Indexing and Retrieval (First International Conference); and Using Reasoning Services (International Conference on Multimedia Information).
Wavelet-based scale-dependent detection of neurological action potentials.
Escolá, Ricardo; Bonnet, Stéphane; Guillemaud, Régis; Magnin, Isabelle
2007-01-01
We study different wavelet-based algorithms for the detection of neurological action potentials recorded using micro-electrode arrays (MEA). We plan to develop a new family of ASIC-embedded low-power algorithms close to the recording sites. We use wavelet theory not for a prior-to-detection denoising stage (as it is usually used) but for the detection itself. Different adaptive methods are presented with varying complexity levels. We demonstrate that wavelet-based detection of extracellular action potentials is superior to traditional and simpler approaches, at the expense of a slightly larger computational load. Moreover, our method is shown to be fully compatible with an embedded implementation. Proposed algorithms are applied to simulated datasets using a simplified model of the American cockroach antennal lobe. PMID:18002350
Adaptive wavelet-based finite-difference modelling of SH -wave propagation
Stéphane Operto; Jean Virieux; Bernhard Hustedt; Fabrizio Malfanti
2002-01-01
An adaptive wavelet-based finite-difference method for 2-D SH -wave propagation modelling is presented. The discrete orthogonal wavelet transform allows the decomposition of spatial wavefield coordinates on to different grids of various resolution. At different times during propagation and locations in the model, the different scales involved in the decomposition give different contributions to the wavefield construction. The orthogonal wavelet basis
Wavelet-based acoustic emission analysis of material fatigue behavior: Bone cement
NASA Astrophysics Data System (ADS)
Ng, Eng-Teik
2000-12-01
A methodology based on time-frequency analysis of the acoustic emission (AE) signal generated by cyclic loading of bone cement specimens is developed in this dissertation. The discrete wavelet transform is utilized. The advantages of this method are that it eliminates noise from the AE signal and provides multi-resolution analysis. To demonstrate the capability of the proposed method, Palacos R bone cement is selected as an example. Compact tension specimens are prepared by hand-mixing (HM) and vacuum-mixing (VM) methods. The AE signal is decomposed into different wavelet levels by Daubechies' discrete wavelet transform. The D3 and A3 wavelet levels are chosen for analysis; their spectral frequencies are 180 kHz and 110 kHz, respectively. A threshold of over 90% on the ratio of reconstructed energy to total energy is used to identify noise and insignificant components in the signal. The denoised AE signal is used to determine the coefficients of the relationship between the wavelet-based AE energy rate dE/dN and the stress intensity factor range ΔK_I for both HM and VM specimens. The statistical analysis shows that the VM method does not significantly affect the slope (τ) of the wavelet-based AE model. However, the wavelet-based AE energy rate of VM specimens is one order of magnitude less than that of HM specimens; in other words, the VM method significantly reduces the fatigue crack propagation rate of bone cement. Moreover, a fatigue life prediction model based on the wavelet transform is developed to determine the residual fatigue life of the material. In summary, the wavelet-based AE technique can distinguish the difference in intercepts between HM and VM specimens and provides accurate results, making it an efficient tool for studying the fatigue crack propagation behavior of materials.
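The per-level energy bookkeeping described above can be sketched with a Haar transform, the simplest orthogonal wavelet (the dissertation uses Daubechies filters; the energy-ratio idea is the same, since orthogonal transforms conserve signal energy across levels):

```python
import math

def haar_step(x):
    """One level of the orthonormal Haar transform (len(x) must be even)."""
    s = math.sqrt(2.0)
    approx = [(x[i] + x[i + 1]) / s for i in range(0, len(x), 2)]
    detail = [(x[i] - x[i + 1]) / s for i in range(0, len(x), 2)]
    return approx, detail

def haar_decompose(x, levels):
    """Return ([D1, D2, ..., D_levels], A_levels) coefficient lists."""
    details = []
    approx = list(x)
    for _ in range(levels):
        approx, d = haar_step(approx)
        details.append(d)
    return details, approx

def energy(coeffs):
    return sum(v * v for v in coeffs)

# Synthetic "AE signal": a slow oscillation plus a high-frequency component.
signal = [math.sin(0.3 * n) + 0.1 * ((-1) ** n) for n in range(64)]
details, a3 = haar_decompose(signal, 3)
total = energy(signal)
# Orthogonality preserves energy: level energies sum to the signal energy,
# so energy ratios per level can flag noise-dominated components.
levels_energy = sum(energy(d) for d in details) + energy(a3)
ratios = [energy(d) / total for d in details] + [energy(a3) / total]
print(ratios)
```

Levels whose energy ratio stays below a chosen threshold would be discarded as noise before reconstructing the signal, mirroring the 90%-of-energy criterion described in the abstract.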
Model-free stochastic processes studied with q-wavelet-based informational tools
NASA Astrophysics Data System (ADS)
Pérez, D. G.; Zunino, L.; Martín, M. T.; Garavaglia, M.; Plastino, A.; Rosso, O. A.
2007-04-01
We undertake a model-free investigation of stochastic processes employing q-wavelet-based quantifiers, which generalize their Shannon counterparts. It is shown that (i) interesting physical information becomes accessible in this way, (ii) for special q values the quantifiers are more sensitive than the Shannon ones, and (iii) there exists an implicit relationship between the Hurst parameter H and q within this wavelet framework.
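The q-generalization mentioned here can be illustrated with Tsallis entropy, which reduces to the Shannon entropy as q → 1 (a generic sketch of q-quantifiers applied to a toy distribution, not the paper's wavelet-based ones):

```python
import math

def shannon_entropy(p):
    """Shannon entropy of a discrete probability distribution."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def tsallis_entropy(p, q):
    """Tsallis q-entropy; tends to the Shannon entropy as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return shannon_entropy(p)
    return (1.0 - sum(pi ** q for pi in p)) / (q - 1.0)

# Toy "wavelet energy" distribution across scales (illustrative values only).
p = [0.5, 0.25, 0.15, 0.1]
print(shannon_entropy(p))          # ~1.21
print(tsallis_entropy(p, 0.999))   # close to the Shannon value
print(tsallis_entropy(p, 2.0))     # equals 1 - sum(p_i^2)
```

Varying q reweights the distribution's tails, which is why the q-quantifiers can be more sensitive than the Shannon ones for particular q values.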
Change-in-ratio density estimator for feral pigs is less biased than closed mark-recapture estimates
Hanson, L.B.; Grand, J.B.; Mitchell, M.S.; Jolley, D.B.; Sparklin, B.D.; Ditchkoff, S.S.
2008-01-01
Closed-population capture-mark-recapture (CMR) methods can produce biased density estimates for species with low or heterogeneous detection probabilities. In an attempt to address such biases, we developed a density-estimation method based on the change in ratio (CIR) of survival between two populations where survival, calculated using an open-population CMR model, is known to differ. We used our method to estimate density for a feral pig (Sus scrofa) population on Fort Benning, Georgia, USA. To assess its validity, we compared it to an estimate of the minimum density of pigs known to be alive and two estimates based on closed-population CMR models. Comparison of the density estimates revealed that the CIR estimator produced a density estimate with low precision that was reasonable with respect to minimum known density. By contrast, density point estimates using the closed-population CMR models were less than the minimum known density, consistent with biases created by low and heterogeneous capture probabilities for species like feral pigs that may occur in low density or are difficult to capture. Our CIR density estimator may be useful for tracking broad-scale, long-term changes in species, such as large cats, for which closed CMR models are unlikely to work. © CSIRO 2008.
Comparative study of different wavelet based neural network models for rainfall-runoff modeling
NASA Astrophysics Data System (ADS)
Shoaib, Muhammad; Shamseldin, Asaad Y.; Melville, Bruce W.
2014-07-01
The use of wavelet transformation in rainfall-runoff modeling has become popular because of its ability to deal simultaneously with both the spectral and the temporal information contained within time series data. The selection of an appropriate wavelet function plays a crucial role in the successful implementation of wavelet-based rainfall-runoff artificial neural network models, as it can lead to further enhancement of model performance. The present study is therefore conducted to evaluate the effects of 23 mother wavelet functions on the performance of hybrid wavelet-based artificial neural network rainfall-runoff models. The hybrid Multilayer Perceptron Neural Network (MLPNN) and Radial Basis Function Neural Network (RBFNN) models are developed in this study using both the continuous and the discrete wavelet transformation types. The performances of the 92 developed wavelet-based neural network models with all 23 mother wavelet functions are compared with neural network models developed without wavelet transformations. It is found that, among all the models tested, the discrete wavelet transform multilayer perceptron neural network (DWTMLPNN) and the discrete wavelet transform radial basis function (DWTRBFNN) models at decomposition level nine with the db8 wavelet function have the best performance. The results also show that pre-processing of input rainfall data by the wavelet transformation can significantly increase the performance of the MLPNN and RBFNN rainfall-runoff models.
Estimation of the Space Density of Low Surface Brightness Galaxies
F. H. Briggs
1997-02-24
The space density of low surface brightness and tiny gas-rich dwarf galaxies is estimated for two recent catalogs: The Arecibo Survey of Northern Dwarf and Low Surface Brightness Galaxies (Schneider, Thuan, Magri & Wadiak 1990) and The Catalog of Low Surface Brightness Galaxies, List II (Schombert, Bothun, Schneider & McGaugh 1992). The goals are (1) to evaluate the additions to the completeness of the Fisher and Tully (1981) 10 Mpc Sample and (2) to estimate whether the density of galaxies contained in the new catalogs adds a significant amount of neutral gas mass to the inventory of HI already identified in the nearby, present-epoch universe. Although tiny dwarf galaxies (M_HI < ~10^7 solar masses) may be the most abundant type of extragalactic stellar system in the nearby Universe, if the new catalogs are representative, the LSB and dwarf populations they contain make only a small addition (<10%) to the total HI content of the local Universe and probably constitute even smaller fractions of its luminous and dynamical mass.
Audit, Benjamin; Baker, Antoine; Chen, Chun-Long; Rappailles, Aurélien; Guilbaud, Guillaume; Julienne, Hanna; Goldar, Arach; d'Aubenton-Carafa, Yves; Hyrien, Olivier; Thermes, Claude; Arneodo, Alain
2013-01-01
In this protocol, we describe the use of the LastWave open-source signal-processing command language (http://perso.ens-lyon.fr/benjamin.audit/LastWave/) for analyzing cellular DNA replication timing profiles. LastWave makes use of a multiscale, wavelet-based signal-processing algorithm that is based on a rigorous theoretical analysis linking timing profiles to fundamental features of the cell's DNA replication program, such as the average replication fork polarity and the difference between replication origin density and termination site density. We describe the flow of signal-processing operations to obtain interactive visual analyses of DNA replication timing profiles. We focus on procedures for exploring the space-scale map of apparent replication speeds to detect peaks in the replication timing profiles that represent preferential replication initiation zones, and for delimiting U-shaped domains in the replication timing profile. In comparison with the generally adopted approach that involves genome segmentation into regions of constant timing separated by timing transition regions, the present protocol enables the recognition of more complex patterns of the spatio-temporal replication program and has a broader range of applications. Completing the full procedure should not take more than 1 h, although learning the basics of the program can take a few hours and achieving full proficiency in the use of the software may take days. PMID:23237832
Density estimation on multivariate censored data with optional Pólya tree
Seok, Junhee; Tian, Lu; Wong, Wing H.
2014-01-01
Analyzing the failure times of multiple events is of interest in many fields. Estimating the joint distribution of the failure times in a non-parametric way is not straightforward because some failure times are often right-censored and only known to be greater than observed follow-up times. Although it has been studied, there is no universally optimal solution for this problem. It is still challenging and important to provide alternatives that may be more suitable than existing ones in specific settings. Problems with the existing methods are not limited to infeasible computations; they also include the lack of optimality and possible non-monotonicity of the estimated survival function. In this paper, we proposed a non-parametric Bayesian approach for directly estimating the density function of multivariate survival times, where the prior is constructed based on the optional Pólya tree. We investigated several theoretical aspects of the procedure and derived an efficient iterative algorithm for implementing the Bayesian procedure. The empirical performance of the method was examined via extensive simulation studies. Finally, we presented a detailed analysis using the proposed method on the relationship among organ recovery times in severely injured patients. From the analysis, we identified interesting medical findings that can be further pursued in clinics. PMID:23902636
Nonparametric estimation of multivariate scale mixtures of uniform densities
Pavlides, Marios G.; Wellner, Jon A.
2012-01-01
Suppose that U = (U_1, …, U_d) has a Uniform([0,1]^d) distribution, that Y = (Y_1, …, Y_d) has the distribution G on R_+^d, and let X = (X_1, …, X_d) = (U_1 Y_1, …, U_d Y_d). The resulting class of distributions of X (as G varies over all distributions on R_+^d) is called the Scale Mixture of Uniforms class of distributions, and the corresponding class of densities on R_+^d is denoted by F_SMU(d). We study maximum likelihood estimation in the family F_SMU(d). We prove existence of the MLE, establish Fenchel characterizations, and prove strong consistency of the almost surely unique maximum likelihood estimator (MLE) in F_SMU(d). We also provide an asymptotic minimax lower bound for estimating the functional f ↦ f(x) under reasonable differentiability assumptions on f ∈ F_SMU(d) in a neighborhood of x. We conclude the paper with discussion, conjectures and open problems pertaining to global and local rates of convergence of the MLE. PMID:22485055
Smallwood, D.O.
1996-01-01
It is shown that the usual method for estimating the coherence functions (ordinary, partial, and multiple) for a general multiple-input/multiple-output problem can be expressed as a modified form of Cholesky decomposition of the cross-spectral density matrix of the input and output records. The results can be equivalently obtained using singular value decomposition (SVD) of the cross-spectral density matrix. Using SVD suggests a new form of fractional coherence. The formulation as an SVD problem also suggests a way to order the inputs when a natural physical order of the inputs is absent.
Consistent Density Estimation from Distribution Functions
Magdon-Ismail, Malik
We approach density estimation by way of estimating the distribution function and taking derivatives; g denotes the associated unknown density, estimated from N samples {x_i}
Clustering via Mode Seeking by Direct Estimation of the Gradient of a Log-Density
Hyvärinen, Aapo; Sasaki, Hiroaki
Mode-seeking clustering identifies cluster centers as the zero points of the density gradient. Since it does not require fixing the number of clusters in advance, it has been used in many application fields. A typical implementation of the mean shift is to first estimate the density by kernel
The effectiveness of tape playbacks in estimating Black Rail densities
Legare, M.; Eddleman, W.R.; Buckley, P.A.; Kelly, C.
1999-01-01
Tape playback is often the only efficient technique for surveying secretive birds. We measured the vocal responses and movements of radio-tagged black rails (Laterallus jamaicensis; 26 M, 17 F) to playback of vocalizations at 2 sites in Florida during the breeding seasons of 1992-95. We used coefficients from logistic regression equations to model probability of a response conditional on the bird's sex, nesting status, distance to the playback source, and time of survey. With a probability of 0.811, non-nesting male black rails were most likely to respond to playback, while nesting females were the least likely to respond (probability = 0.189). We used linear regression to determine daily, monthly, and annual variation in response from weekly playback surveys along a fixed route during the breeding seasons of 1993-95. Significant sources of variation in the regression model were month (F_{3,48} = 3.89, P = 0.014), year (F_{2,48} = 9.37, P < 0.001), temperature (F_{1,48} = 5.44, P = 0.024), and month × year (F_{5,48} = 2.69, P = 0.031). The model was highly significant (P < 0.001) and explained 54% of the variation in mean response per survey period (r² = 0.54). We combined response probability data from radio-tagged black rails with playback survey route data to provide a density estimate of 0.25 birds/ha for the St. Johns National Wildlife Refuge. The relation between the number of black rails heard during playback surveys and the actual number present was influenced by a number of variables. We recommend caution when making density estimates from tape playback surveys.
Atmospheric turbulence mitigation using complex wavelet-based fusion.
Anantrasirichai, Nantheera; Achim, Alin; Kingsbury, Nick G; Bull, David R
2013-06-01
Restoring a scene distorted by atmospheric turbulence is a challenging problem in video surveillance. The effect, caused by random, spatially varying, perturbations, makes a model-based solution difficult and in most cases, impractical. In this paper, we propose a novel method for mitigating the effects of atmospheric distortion on observed images, particularly airborne turbulence which can severely degrade a region of interest (ROI). In order to extract accurate detail about objects behind the distorting layer, a simple and efficient frame selection method is proposed to select informative ROIs only from good-quality frames. The ROIs in each frame are then registered to further reduce offsets and distortions. We solve the space-varying distortion problem using region-level fusion based on the dual tree complex wavelet transform. Finally, contrast enhancement is applied. We further propose a learning-based metric specifically for image quality assessment in the presence of atmospheric distortion. This is capable of estimating quality in both full- and no-reference scenarios. The proposed method is shown to significantly outperform existing methods, providing enhanced situational awareness in a range of surveillance scenarios. PMID:23475359
Estimating Foreign-Object-Debris Density from Photogrammetry Data
NASA Technical Reports Server (NTRS)
Long, Jason; Metzger, Philip; Lane, John
2013-01-01
Within the first few seconds after launch of STS-124, debris traveling vertically near the vehicle was captured on two 16-mm film cameras surrounding the launch pad. One particular piece of debris caught the attention of engineers investigating the release of the flame trench fire bricks. The question to be answered was if the debris was a fire brick, and if it represented the first bricks that were ejected from the flame trench wall, or was the object one of the pieces of debris normally ejected from the vehicle during launch. If it was typical launch debris, such as SRB throat plug foam, why was it traveling vertically and parallel to the vehicle during launch, instead of following its normal trajectory, flying horizontally toward the north perimeter fence? By utilizing the Runge-Kutta integration method for velocity and the Verlet integration method for position, a method that suppresses trajectory computational instabilities due to noisy position data was obtained. This combination of integration methods provides a means to extract the best estimate of drag force and drag coefficient under the non-ideal conditions of limited position data. This integration strategy leads immediately to the best possible estimate of object density, within the constraints of unknown particle shape. These types of calculations do not exist in readily available off-the-shelf simulation software, especially where photogrammetry data is needed as an input.
Kernel estimate of the spoken language sound multivariate probability density function
NASA Astrophysics Data System (ADS)
Bokal, Zanna M.; Sinitsyn, Rustem B.
2008-01-01
In this paper a new approach for estimating the multivariate probability density of spoken language sounds is suggested. It is based on projecting a random process onto a set of random variables, with the probability density defined as a product of two-dimensional densities. The estimates of the two-dimensional probability densities are obtained by filtering the two-dimensional empirical characteristic function; that is, we suggest a nonparametric estimate of the characteristic function. On the basis of these estimates, nonparametric sound-classification algorithms can be constructed. Examples of the resulting sound probability density function estimates are given.
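The inversion step described above amounts, in one dimension, to damping the empirical characteristic function and transforming back. A minimal sketch follows; the Gaussian damping window and the grid sizes are assumptions for illustration, not the authors' exact filter:

```python
import cmath
import math
import random

def ecf_density(samples, xs, t_max=20.0, n_t=400, h=0.3):
    """Density estimate by numerically inverting a Gaussian-damped
    empirical characteristic function phi(t) = mean(exp(i*t*x))."""
    n = len(samples)
    dt = 2.0 * t_max / n_t
    ts = [-t_max + k * dt for k in range(n_t + 1)]
    phi = [sum(cmath.exp(1j * t * x) for x in samples) / n for t in ts]
    out = []
    for x in xs:
        # f(x) = (1/2pi) * integral of phi(t) exp(-h^2 t^2 / 2) exp(-i t x) dt
        val = sum(p * math.exp(-0.5 * (h * t) ** 2) * cmath.exp(-1j * t * x)
                  for t, p in zip(ts, phi)) * dt / (2.0 * math.pi)
        out.append(max(val.real, 0.0))  # clip tiny negative numerical noise
    return out

random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(500)]
xs = [-3 + 0.5 * k for k in range(13)]
dens = ecf_density(data, xs)
print(dens)  # roughly bell-shaped, peaking near x = 0
```

Gaussian damping of the characteristic function is equivalent to Gaussian-kernel smoothing with bandwidth h, which is why the result behaves like a kernel density estimate.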
ICER-3D: A Progressive Wavelet-Based Compressor for Hyperspectral Images
NASA Technical Reports Server (NTRS)
Kiely, A.; Klimesh, M.; Xie, H.; Aranki, N.
2005-01-01
ICER-3D is a progressive, wavelet-based compressor for hyperspectral images, derived from the ICER image compressor. ICER-3D can provide lossless and lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The three-dimensional wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of hyperspectral data sets, while facilitating elimination of spectral ringing artifacts. Correlation is further exploited by a context modeler that effectively captures spectral dependencies in the wavelet-transformed hyperspectral data. Performance results illustrating the benefits of these features are presented.
Lancia, Leonardo; Rausch, Philip; Morris, Jeffrey S
2015-02-01
This paper illustrates the application of wavelet-based functional mixed models to automatic quantification of differences between tongue contours obtained through ultrasound imaging. The reliability of this method is demonstrated through the analysis of tongue positions recorded from a female and a male speaker at the onset of the vowels /a/ and /i/ produced in the context of the consonants /t/ and /k/. The proposed method allows detection of significant differences between configurations of the articulators that are visible in ultrasound images during the production of different speech gestures and is compatible with statistical designs containing both fixed and random terms. PMID:25698047
NASA Astrophysics Data System (ADS)
Zunino, L.; Pérez, D. G.; Martín, M. T.; Plastino, A.; Garavaglia, M.; Rosso, O. A.
2007-02-01
Efficient tools to characterize stochastic processes are discussed. Quantifiers originally proposed within the framework of information theory, like entropy and statistical complexity, are translated into wavelet language, which renders the above quantifiers into tools that exhibit the important “localization” advantages provided by wavelet theory. Two important and popular stochastic processes, fractional Brownian motion and fractional Gaussian noise, are studied using these wavelet-based informational tools. Exact analytical expressions are obtained for the wavelet probability distribution. Finally, numerical simulations are used to validate our analytical results.
Estimation of density of mongooses with capture-recapture and distance sampling
Corn, J.L.; Conroy, M.J.
1998-01-01
We captured mongooses (Herpestes javanicus) in live traps arranged in trapping webs in Antigua, West Indies, and used capture-recapture and distance sampling to estimate density. Distance estimation and program DISTANCE were used to provide estimates of density from the trapping-web data. Mean density based on trapping webs was 9.5 mongooses/ha (range, 5.9-10.2/ha); estimates had coefficients of variation ranging from 29.82% to 31.58% (mean = 30.46%). Mark-recapture models were used to estimate abundance, which was converted to density using estimates of effective trap area. Tests of model assumptions provided by CAPTURE indicated pronounced heterogeneity in capture probabilities and some indication of behavioral response and variation over time. Mean estimated density was 1.80 mongooses/ha (range, 1.37-2.15/ha) with estimated coefficients of variation of 4.68% to 11.92% (mean = 7.46%). Estimates of density based on mark-recapture data depended heavily on assumptions about animal home ranges; variances of densities also may be underestimated, leading to unrealistically narrow confidence intervals. Estimates based on trap webs require fewer assumptions, and estimated variances may be a more realistic representation of sampling variation. Because trap webs are established easily and provide adequate data for estimation in a few sampling occasions, the method should be efficient and reliable for estimating densities of mongooses.
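The mark-recapture half of the comparison reduces to an abundance estimate divided by an effective trap area. A sketch using the Chapman-corrected Lincoln-Petersen estimator with hypothetical counts (not the CAPTURE models or the study's data):

```python
def lincoln_petersen(marked_first, caught_second, recaptured):
    """Chapman's bias-corrected Lincoln-Petersen abundance estimate."""
    return ((marked_first + 1) * (caught_second + 1)) / (recaptured + 1) - 1

def density_from_abundance(n_hat, effective_area_ha):
    """Convert an abundance estimate to density (animals per hectare)."""
    return n_hat / effective_area_ha

# Hypothetical numbers, not from the study:
n_hat = lincoln_petersen(marked_first=30, caught_second=25, recaptured=10)
print(n_hat)                                 # ~72.3 animals
print(density_from_abundance(n_hat, 40.0))   # ~1.8 animals/ha
```

The abstract's point is visible in the second step: the density depends directly on the assumed effective area, which in turn rests on assumptions about home-range size.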
Online Direct Density-Ratio Estimation Applied to Inlier-Based Outlier Detection.
du Plessis, Marthinus Christoffel; Shiino, Hiroaki; Sugiyama, Masashi
2015-09-01
Many machine learning problems, such as nonstationarity adaptation, outlier detection, dimensionality reduction, and conditional density estimation, can be effectively solved by using the ratio of probability densities. Since the naive two-step procedure of first estimating the probability densities and then taking their ratio performs poorly, methods to directly estimate the density ratio from two sets of samples without density estimation have been extensively studied recently. However, these methods are batch algorithms that use the whole data set to estimate the density ratio, and they are inefficient in the online setup, where training samples are provided sequentially and solutions are updated incrementally without storing previous samples. In this letter, we propose two online density-ratio estimators based on the adaptive regularization of weight vectors. Through experiments on inlier-based outlier detection, we demonstrate the usefulness of the proposed methods. PMID:26161817
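A direct estimator of the kind the letter builds on can be sketched as a regularized least-squares fit of the ratio on a few Gaussian basis functions (a uLSIF-style batch sketch; the letter's online, adaptively regularized updates are not shown, and all parameter values here are illustrative):

```python
import math
import random

def solve(a, b):
    """Solve the linear system a x = b by Gaussian elimination (small, dense)."""
    n = len(b)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

def ratio_fit(x_nu, x_de, centers, width=1.0, lam=0.1):
    """Direct density-ratio estimation: least-squares fit of p_nu/p_de
    on Gaussian basis functions, without estimating either density."""
    phi = lambda x: [math.exp(-((x - c) ** 2) / (2 * width ** 2)) for c in centers]
    b = len(centers)
    H = [[0.0] * b for _ in range(b)]
    for x in x_de:
        f = phi(x)
        for i in range(b):
            for j in range(b):
                H[i][j] += f[i] * f[j] / len(x_de)
    h = [sum(phi(x)[i] for x in x_nu) / len(x_nu) for i in range(b)]
    for i in range(b):
        H[i][i] += lam  # ridge regularization
    alpha = solve(H, h)
    return lambda x: max(0.0, sum(a * f for a, f in zip(alpha, phi(x))))

random.seed(3)
x_nu = [random.gauss(0.0, 1.0) for _ in range(300)]  # numerator samples
x_de = [random.gauss(1.0, 1.0) for _ in range(300)]  # denominator samples
r = ratio_fit(x_nu, x_de, centers=[-2, -1, 0, 1, 2])
print(r(-1.0), r(2.0))  # the ratio is larger where the numerator dominates
```

In inlier-based outlier detection, a small fitted ratio at a test point flags it as unlikely under the inlier distribution relative to the test distribution.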
Adjusted KNN Model in Estimating User Density in Small Areas with Poor Signal Strength
Greenberg, Albert; Duan, Rong
User density is hard to estimate directly in areas with poor signal strength. However, it can be estimated from other big data collected by telecommunication providers from different sources. This paper is a case study leveraging big
PROBABILITY DENSITY ESTIMATION IN HIGHER DIMENSIONS David W. Scott and James R. Thompson
Scott, David W.
Rice University, Houston, Texas. For the estimation of probability densities in dimensions past two, the method locates the modes and proceeds to describe the unknown density using these as local origins. The scaling system
Multiscale Density Estimation R. M. Willett, Student Member, IEEE, and R. D. Nowak, Member, IEEE
Nowak, Robert
July 4, 2003. Abstract: The nonparametric density estimation method proposed in this paper is computationally fast, capable of detecting density discontinuities and singularities at a very high resolution
Density Matrix Estimation in Quantum Homodyne Tomography
Wang, Yazhen; Xu, Chenliang
Scientists need to learn quantum systems from experimental data. As density matrices are usually employed to characterize the quantum states of the systems, this paper investigates estimation of density matrices. We
Nonparametric estimation of population density for line transect sampling using Fourier series
Crain, B.R.; Burnham, K.P.; Anderson, D.R.; Lake, J.L.
1979-01-01
A nonparametric, robust density estimation method is explored for the analysis of right-angle distances from a transect line to the objects sighted. The method is based on the Fourier series expansion of a probability density function over an interval. With only mild assumptions, a general population density estimator of wide applicability is obtained.
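The cosine-series form of such an estimator is short enough to sketch; the constant term forces the estimate to integrate to one over the interval (a generic version; the paper's exact smoothing and term-selection rules may differ):

```python
import math
import random

def fourier_density(samples, w, m):
    """Cosine-series density estimate on [0, w] with m terms:
    f(x) = 1/w + sum_k a_k cos(k*pi*x/w),
    a_k = (2/(n*w)) * sum_i cos(k*pi*x_i/w)."""
    n = len(samples)
    a = [(2.0 / (n * w)) * sum(math.cos(k * math.pi * x / w) for x in samples)
         for k in range(1, m + 1)]
    def f(x):
        return 1.0 / w + sum(ak * math.cos((k + 1) * math.pi * x / w)
                             for k, ak in enumerate(a))
    return f

random.seed(1)
# Synthetic right-angle sighting distances: half-normal detection falloff.
data = [abs(random.gauss(0.0, 3.0)) for _ in range(1000)]
f_hat = fourier_density(data, w=10.0, m=4)
# The estimate integrates to ~1 over [0, w]: the cosine terms integrate to zero.
grid = [10.0 * (i + 0.5) / 200 for i in range(200)]
approx_integral = sum(f_hat(x) for x in grid) * (10.0 / 200)
print(approx_integral)
```

In line-transect work, f̂(0) from such a fit is the quantity that converts encounter rate into density, which is why the behavior of the estimator at the origin matters most.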
Cumulative Attribute Space for Age and Crowd Density Estimation Ke Chen, Shaogang Gong, Tao Xiang
Huang, Jianwei
Vision problems such as human age estimation, crowd density estimation, and body/face pose (view angle) estimation can be treated in a common framework: features extracted from sparse and imbalanced image samples are mapped onto a cumulative attribute space
A Wavelet-Based Algorithm for the Spatial Analysis of Poisson Data
Peter E. Freeman; Vinay Kashyap; Robert Rosner; Donald Q. Lamb
2001-08-27
Wavelets are scaleable, oscillatory functions that deviate from zero only within a limited spatial regime and have average value zero. In addition to their use as source characterizers, wavelet functions are rapidly gaining currency within the source detection field. Wavelet-based source detection involves the correlation of scaled wavelet functions with binned, two-dimensional image data. If the chosen wavelet function exhibits the property of vanishing moments, significantly non-zero correlation coefficients will be observed only where there are high-order variations in the data; e.g., they will be observed in the vicinity of sources. In this paper, we describe the mission-independent, wavelet-based source detection algorithm WAVDETECT, part of the CIAO software package. Aspects of our algorithm include: (1) the computation of local, exposure-corrected normalized (i.e. flat-fielded) background maps; (2) the correction for exposure variations within the field-of-view; (3) its applicability within the low-counts regime, as it does not require a minimum number of background counts per pixel for the accurate computation of source detection thresholds; (4) the generation of a source list in a manner that does not depend upon a detailed knowledge of the point spread function (PSF) shape; and (5) error analysis. These features make our algorithm considerably more general than previous methods developed for the analysis of X-ray image data, especially in the low count regime. We demonstrate the algorithm's robustness by applying it to various images.
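The core correlation step, a zero-mean wavelet sliding over binned counts so that flat background yields near-zero coefficients while a source produces a strong peak, can be sketched in one dimension (WAVDETECT itself operates on 2-D images with exposure correction and significance thresholds; the data below are synthetic):

```python
import math

def mexican_hat(t):
    """'Mexican hat' wavelet (negative 2nd derivative of a Gaussian); zero mean."""
    return (1.0 - t * t) * math.exp(-0.5 * t * t)

def wavelet_correlate(counts, scale):
    """Correlate binned counts with a scaled Mexican-hat wavelet."""
    half = int(5 * scale)
    kernel = [mexican_hat(k / scale) for k in range(-half, half + 1)]
    out = []
    for i in range(len(counts)):
        acc = 0.0
        for k, wk in enumerate(kernel):
            j = i + k - half
            if 0 <= j < len(counts):
                acc += wk * counts[j]
        out.append(acc)
    return out

# Flat background of 5 counts/bin with a source centered in bin 50.
counts = [5.0] * 100
for j, extra in zip(range(48, 53), [3.0, 8.0, 12.0, 8.0, 3.0]):
    counts[j] += extra
coeffs = wavelet_correlate(counts, scale=3.0)
peak = max(range(len(coeffs)), key=lambda i: coeffs[i])
print(peak)  # the maximum coefficient lands at the source position, bin 50
```

Because the wavelet has vanishing moments, the constant background correlates to roughly zero, which is the property the abstract invokes to explain why only source-like variations stand out.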
Estimation of population firing rates and current source densities from laminar electrode recordings
Einevoll, Gaute T.
Key words: local field potential, LFP, multi-unit activity, MUA, current source density, population firing rate. In computer experiments, extracellular potentials from a synaptically activated population
How Bandwidth Selection Algorithms Impact Exploratory Data Analysis Using Kernel Density Estimation
Harpole, Jared Kenneth
2013-05-31
Exploratory data analysis (EDA) is important, yet often overlooked, in the social and behavioral sciences. Graphical analysis of one's data is central to EDA. A viable method of estimating and graphing the underlying density in EDA is kernel density estimation (KDE).
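One widely used default in this setting is a Gaussian KDE with Silverman's rule-of-thumb bandwidth, sketched below. Bandwidth selection is exactly the choice the dissertation examines, so treat the rule of thumb as one option among several:

```python
import math
import random

def silverman_bandwidth(xs):
    """Silverman's rule of thumb: h = 1.06 * sd * n**(-1/5)."""
    n = len(xs)
    mean = sum(xs) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in xs) / (n - 1))
    return 1.06 * sd * n ** (-0.2)

def gaussian_kde(xs, h):
    """Return a kernel density estimate f(x) built from Gaussian kernels."""
    n = len(xs)
    c = 1.0 / (n * h * math.sqrt(2.0 * math.pi))
    def f(x):
        return c * sum(math.exp(-0.5 * ((x - xi) / h) ** 2) for xi in xs)
    return f

random.seed(2)
data = [random.gauss(0.0, 1.0) for _ in range(400)]
h = silverman_bandwidth(data)
f_hat = gaussian_kde(data, h)
print(h)           # roughly 0.3 for n = 400 standard-normal draws
print(f_hat(0.0))  # near the true peak value of about 0.40
```

Too small an h produces a spiky, noisy picture; too large an h smooths away real features, which is why the choice of bandwidth algorithm shapes what EDA reveals.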
A Tailor-Made Nonparametric Density Estimate
Carando, Daniel; Fraiman, Ricardo; Groisman, Pablo
In particular, the proposed approach provides a simple, strongly consistent estimate
Adu-Gyasi, Dennis; Asante, Kwaku Poku; Newton, Sam; Amoako, Sabastina; Dosoo, David; Ankrah, Love; Adjei, George; Amenga-Etego, Seeba; Owusu-Agyei, Seth
2015-01-01
Introduction. The estimation of malaria parasite density under the microscope relies heavily on white blood cell (WBC) counts. An assumed WBC count of 8000/µL has been accepted as reasonably accurate for estimating malaria parasite densities, given the difficulty of accurately determining the true WBC count. Method. The study used 4944 laboratory records of consented participants under 5 years of age. The study compared parasite densities computed with absolute WBC counts, the assumed WBC count, and the WBC reference values for Central Ghana. Ethical approvals were given by three ethics committees. Results. The mean (±SD) WBC count and geometric mean parasite density (GMPD) were 10500/µL (±4.1) and 10644/µL (95% CI 9986/µL to 11346/µL), respectively. The GMPD computed with the assumed WBC count was significantly lower than that computed with absolute WBC counts. The differences between the GMPD obtained with the assumed WBC count and those obtained with the WBC reference values for the study area, 10400/µL and 9200/µL for children in different age groups, were not significant. Discussion. Significant errors can result when an assumed WBC count is used to estimate malaria parasite density in children. GMPDs generated with WBC reference values statistically agreed with densities from absolute WBC counts. When obtaining an absolute WBC count is not possible, the reference value can be used to estimate parasite density. PMID:25945279
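The arithmetic behind these comparisons is a simple proportion: parasites counted against a fixed number of WBCs on the slide, scaled by the WBC count per microliter, so the WBC figure chosen scales the reported density directly (the counts below are hypothetical):

```python
def parasite_density(parasites_counted, wbcs_counted, wbc_per_ul):
    """Parasites per microliter, scaled by the chosen WBC count."""
    return parasites_counted * wbc_per_ul / wbcs_counted

# Hypothetical slide: 120 parasites seen against 200 WBCs.
print(parasite_density(120, 200, 8000))   # assumed WBC count -> 4800/uL
print(parasite_density(120, 200, 10500))  # study's mean absolute WBC -> 6300/uL
```

Because the formula is linear in the WBC figure, using 8000/µL where the true mean is 10500/µL understates density by the same ratio, which is the error pattern the study reports.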
Chen, Rongda; Wang, Ze
2013-01-01
Recovery rate is essential to the estimation of a portfolio's loss and economic capital. Neglecting the randomness of the distribution of the recovery rate may underestimate risk. This study introduces two distribution models, Beta distribution estimation and kernel density estimation, to simulate the distribution of recovery rates of corporate loans and bonds. Models based on the Beta distribution are in common use, for example in CreditMetrics by J.P. Morgan, Portfolio Manager by KMV, and LossCalc by Moody's. However, they have a serious defect: they cannot fit bimodal or multimodal distributions, such as the recovery rates of corporate loans and bonds shown by Moody's new data. To overcome this flaw, kernel density estimation is introduced, and we compare the simulation results from histograms, Beta distribution estimation, and kernel density estimation, concluding that the Gaussian kernel density estimate better imitates the distribution of bimodal or multimodal data samples of corporate loans and bonds. Finally, a chi-square test of the Gaussian kernel density estimate confirms that it can fit the curve of recovery rates of loans and bonds. Thus, using the kernel density distribution to precisely delineate the bimodal recovery rates of bonds is optimal in credit risk management. PMID:23874558
EXACT MINIMAX ESTIMATION OF THE PREDICTIVE DENSITY IN SPARSE GAUSSIAN MODELS1
Mukherjee, Gourab; Johnstone, Iain M.
2015-01-01
We consider estimating the predictive density under Kullback–Leibler loss in an ℓ0-sparse Gaussian sequence model. Explicit expressions of the first order minimax risk along with its exact constant, asymptotically least favorable priors and optimal predictive density estimates are derived. Compared to the sparse recovery results involving point estimation of the normal mean, new decision theoretic phenomena are seen. Suboptimal performance of the class of plug-in density estimates reflects the predictive nature of the problem and optimal strategies need diversification of the future risk. We find that minimax optimal strategies lie outside the Gaussian family but can be constructed with threshold predictive density estimates. Novel minimax techniques involving simultaneous calibration of the sparsity adjustment and the risk diversification mechanisms are used to design optimal predictive density estimates.
Rigorous home range estimation with movement data: a new autocorrelated kernel density estimator.
Fleming, C H; Fagan, W F; Mueller, T; Olson, K A; Leimgruber, P; Calabrese, J M
2015-05-01
Quantifying animals' home ranges is a key problem in ecology and has important conservation and wildlife management applications. Kernel density estimation (KDE) is a workhorse technique for range delineation problems that is both statistically efficient and nonparametric. KDE assumes that the data are independent and identically distributed (IID). However, animal tracking data, which are routinely used as inputs to KDEs, are inherently autocorrelated and violate this key assumption. As we demonstrate, using realistically autocorrelated data in conventional KDEs results in grossly underestimated home ranges. We further show that the performance of conventional KDEs actually degrades as data quality improves, because autocorrelation strength increases as movement paths become more finely resolved. To remedy these flaws with the traditional KDE method, we derive an autocorrelated KDE (AKDE) from first principles to use autocorrelated data, making it perfectly suited for movement data sets. We illustrate the vastly improved performance of AKDE using analytical arguments, relocation data from Mongolian gazelles, and simulations based upon the gazelle's observed movement process. By yielding better minimum area estimates for threatened wildlife populations, we believe that future widespread use of AKDE will have significant impact on ecology and conservation biology. PMID:26236833
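The abstract's core claim, that autocorrelated tracking data violate the IID assumption and lead a conventional KDE to underestimate the home range, can be seen in a minimal 1-D sketch. The Ornstein-Uhlenbeck track below is an invented stand-in for an animal ranging around a home-range centre; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 1-D Ornstein-Uhlenbeck position process with stationary
# variance 1 and autocorrelation timescale tau (parameters invented).
n, dt, tau = 2000, 0.01, 1.0
x = np.zeros(n)
for t in range(1, n):
    x[t] = x[t-1] * (1 - dt/tau) + np.sqrt(2*dt/tau) * rng.normal()

# A short, finely sampled (hence highly autocorrelated) segment covers
# far less of the true range than the stationary variance implies, so a
# KDE built from it would delineate too small a home range.
short_var = x[:200].var()   # only ~2 autocorrelation times of data
full_var = x.var()          # ~20 autocorrelation times of data
print(short_var < full_var)
```

Increasing the sampling rate adds points but not independent information, which is why the abstract notes that conventional KDE performance degrades as tracks become more finely resolved.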
Ghuman, Avniel Singh; McDaniel, Jonathan R.; Martin, Alex
2011-01-01
Determining the dynamics of functional connectivity is critical for understanding the brain. Recent functional magnetic resonance imaging (fMRI) studies demonstrate that measuring correlations between brain regions in resting state activity can be used to reveal intrinsic neural networks. To study the oscillatory dynamics that underlie intrinsic functional connectivity between regions requires high temporal resolution measures of electrophysiological brain activity, such as magnetoencephalography (MEG). However, there is a lack of consensus as to the best method for examining connectivity in resting state MEG data. Here we adapted a wavelet-based method for measuring phase-locking with respect to the frequency of neural oscillations. This method employs anatomical MRI information combined with MEG data using the minimum norm estimate inverse solution to produce functional connectivity maps from a “seed” region to all other locations on the cortical surface at any and all frequencies of interest. We test this method by simulating phase-locked oscillations at various points on the cortical surface, which illustrates a substantial artifact that results from imperfections in the inverse solution. We demonstrate that normalizing resting state MEG data using phase-locking values computed on empty-room data reduces much of the effects of this artifact. We then use this method with eight subjects to reveal intrinsic interhemispheric connectivity in the auditory network in the alpha frequency band in a silent environment. This spectral resting-state functional connectivity imaging method may allow us to better understand the oscillatory dynamics underlying intrinsic functional connectivity in the human brain. PMID:21256967
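The phase-locking value (PLV) at the heart of such analyses is the magnitude of the mean phase-difference phasor between two signals. The sketch below uses Hilbert-transform phase as a simple single-frequency stand-in for the paper's wavelet-based phase; the two "sources", their frequency, and noise levels are all invented for illustration.

```python
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(2)
fs, n = 250.0, 2500                    # 10 s of simulated data
t = np.arange(n) / fs

# Two hypothetical 10 Hz sources, phase-locked with a fixed lag,
# each corrupted by independent noise.
phase = 2 * np.pi * 10 * t
a = np.sin(phase) + 0.5 * rng.normal(size=n)
b = np.sin(phase - np.pi/4) + 0.5 * rng.normal(size=n)
c = rng.normal(size=n)                 # unrelated noise channel

def plv(x, y):
    # Phase-locking value: |mean of exp(i * phase difference)|.
    dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return np.abs(np.exp(1j * dphi).mean())

print(plv(a, b) > plv(a, c))           # locked pair scores higher
```

A PLV near 1 indicates a stable phase relationship regardless of the lag itself; comparing against an unrelated channel (or, as in the paper, empty-room data) gives a baseline for what "no coupling" looks like.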
Morris, Jeffrey S.; Arroyo, Cassandra; Coull, Brent A.; Ryan, Louise M.; Herrick, Richard; Gortmaker, Steven L.
2008-01-01
Summary We present a case study illustrating the challenges of analyzing accelerometer data taken from a sample of children participating in an intervention study designed to increase physical activity. An accelerometer is a small device worn on the hip that records the minute-by-minute activity levels of the child throughout the day for each day it is worn. The resulting data are irregular functions characterized by many peaks representing short bursts of intense activity. We model these data using the wavelet-based functional mixed model. This approach incorporates multiple fixed effect and random effect functions of arbitrary form, the estimates of which are adaptively regularized using wavelet shrinkage. The method yields posterior samples for all functional quantities of the model, which can be used to perform various types of Bayesian inference and prediction. In our case study, a high proportion of the daily activity profiles are incomplete, i.e. have some portion of the profile missing, so cannot be directly modeled using the previously described method. We present a new method for stochastically imputing the missing data that allows us to incorporate these incomplete profiles in our analysis. Our approach borrows strength from both the observed measurements within the incomplete profiles and from other profiles, from the same child as well as other children with similar covariate levels, while appropriately propagating the uncertainty of the imputation throughout all subsequent inference. We apply this method to our case study, revealing some interesting insights into children's activity patterns. We point out some strengths and limitations of using this approach to analyze accelerometer data. PMID:19169424
Lyons, Michael J.
Comparison Between Geometry-Based and Gabor-Wavelets-Based Facial Expression Recognition Using … of feature expressions. 1. Introduction. There are a number of difficulties in facial expression recognition … a small amount of work on facial expression recognition. The first category of previous work uses image
Xiong Xiaobing; Zhao Eryuan
1996-01-01
This paper presents a new method for designing the phase response of an IIR all-pass filter. A fast algorithm, based on the least-squares error criterion, is derived, and its application to biorthogonal filter banks and wavelet bases is discussed. A novel framework for a new class of two-channel biorthogonal
École Normale Supérieure
… and wavelet-based coherent vortex extraction. Frank G. Jacobitz, Lukas Liechtenstein, Kai Schneider … and orientation. Coherent vortex extraction, based on the orthogonal wavelet decomposition of vorticity … are characterized by a high shear rate. Such high shear rates lead to the predominance of linear effects and make
Effects of LiDAR point density and landscape context on estimates of urban forest biomass
NASA Astrophysics Data System (ADS)
Singh, Kunwar K.; Chen, Gang; McCarter, James B.; Meentemeyer, Ross K.
2015-03-01
Light Detection and Ranging (LiDAR) data is being increasingly used as an effective alternative to conventional optical remote sensing to accurately estimate aboveground forest biomass ranging from individual tree to stand levels. Recent advancements in LiDAR technology have resulted in higher point densities and improved data accuracies accompanied by challenges for procuring and processing voluminous LiDAR data for large-area assessments. Reducing point density lowers data acquisition costs and overcomes computational challenges for large-area forest assessments. However, how does lower point density impact the accuracy of biomass estimation in forests containing a great level of anthropogenic disturbance? We evaluate the effects of LiDAR point density on the biomass estimation of remnant forests in the rapidly urbanizing region of Charlotte, North Carolina, USA. We used multiple linear regression to establish a statistical relationship between field-measured biomass and predictor variables derived from LiDAR data with varying densities. We compared the estimation accuracies between a general Urban Forest type and three Forest Type models (evergreen, deciduous, and mixed) and quantified the degree to which landscape context influenced biomass estimation. The explained biomass variance of the Urban Forest model, using adjusted R2, was consistent across the reduced point densities, with the highest difference of 11.5% between the 100% and 1% point densities. The combined estimates of Forest Type biomass models outperformed the Urban Forest models at the representative point densities (100% and 40%). The Urban Forest biomass model with development density of 125 m radius produced the highest adjusted R2 (0.83 and 0.82 at 100% and 40% LiDAR point densities, respectively) and the lowest RMSE values, highlighting a distance impact of development on biomass estimation. 
Our evaluation suggests that reducing LiDAR point density is a viable solution to regional-scale forest assessment without compromising the accuracy of biomass estimates, and these estimates can be further improved using development density.
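The study's statistical core is a multiple linear regression of field-measured biomass on LiDAR-derived predictors, evaluated with adjusted R². A minimal sketch follows; the plot-level metrics (mean and 95th-percentile canopy height) and all numbers are invented stand-ins, not the paper's variables.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical plot-level data: biomass regressed on two LiDAR height
# metrics (values invented for illustration).
n = 60
h_mean = rng.uniform(5, 25, n)                  # mean canopy height (m)
h_p95 = h_mean + rng.uniform(2, 8, n)           # 95th-percentile height
biomass = 3.0*h_mean + 1.5*h_p95 + rng.normal(0, 10, n)

# Ordinary least squares with an intercept column.
X = np.column_stack([np.ones(n), h_mean, h_p95])
coef, *_ = np.linalg.lstsq(X, biomass, rcond=None)

resid = biomass - X @ coef
ss_res = (resid**2).sum()
ss_tot = ((biomass - biomass.mean())**2).sum()
p = X.shape[1] - 1                              # number of predictors
r2 = 1 - ss_res/ss_tot
adj_r2 = 1 - (1 - r2)*(n - 1)/(n - p - 1)       # penalises extra terms
print(adj_r2 > 0)
```

Adjusted R² is the right comparison metric here because models fitted at different point densities (or for different forest types) need not use the same number of predictors.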
On the estimation of dynamic mass density of random composites.
Jin, Congrui
2012-08-01
The dynamic effective mass density and bulk modulus of an inhomogeneous medium in the low-frequency limit are discussed. Random configurations in a variety of two-dimensional physical contexts are considered. In each case, the effective dynamic mass density and bulk modulus are calculated based on eigenmode matching theory. The results agree with those provided by Martin et al. [J. Acoust. Soc. Am. 128, 571-577 (2010)], obtained from the effective wavenumber method. PMID:22894183
Techniques and Technology Article Road-Based Surveys for Estimating Wild Turkey Density
Wallace, Mark C.
Transect-based distance sampling has been used to estimate density of several wild bird species, including wild turkeys. … 2005 at 3 study sites in the Texas Rolling Plains, USA, to simulate Rio Grande wild turkey (M. g. intermedia
Estimating cetacean density from passive acoustic arrays Tiago A. Marques and Len Thomas
Marques, Tiago A.
Estimating cetacean density from passive acoustic arrays. Tiago A. Marques and Len Thomas. … is a fundamental requirement for proper management and impact assessment of cetacean populations. The most widely used methods to estimate cetacean density are based on distance sampling theory (Buckland et al. 2001
empec. Vol. 13,1988, page 209-222 Bayes Prediction Density and Regression Estimation
Jammalamadaka, S. Rao
… deals with the Bayes estimation of an arbitrary multivariate density f(x), x ∈ ℝ^p. Such an f(x) may be represented … is chosen according to a mixing distribution G. We consider the semiparametric Bayes approach in which G
Characterization of a maximum-likelihood nonparametric density estimator of kernel type
NASA Technical Reports Server (NTRS)
Geman, S.; Mcclure, D. E.
1982-01-01
Kernel-type density estimators are calculated by the method of sieves. Proofs are presented for the characterization theorem: let x(1), x(2), ..., x(n) be a random sample from a population with density f(0). Let sigma > 0 and consider estimators f of f(0) defined by (1).
Fully Nonparametric Probability Density Function Estimation with Finite Gaussian Mixture Models
Verleysen, Michel
Fully Nonparametric Probability Density Function Estimation with Finite Gaussian Mixture Models. Abstract. Flexible and reliable probability density estimation is fundamental in unsupervised learning and classification. Finite Gaussian mixture
NASA Astrophysics Data System (ADS)
Unaldi, Numan; Asari, Vijayan K.; Rahman, Zia-ur
2009-05-01
Recently we proposed a wavelet-based dynamic range compression algorithm to improve the visual quality of digital images captured from high dynamic range scenes under non-uniform lighting conditions. This fast image enhancement algorithm, which provides dynamic range compression while preserving local contrast and tonal rendition, is also a good candidate for real-time video processing applications. Although the colors of the enhanced images are consistent with the colors of the original image, the algorithm fails to produce color-constant results for some "pathological" scenes that have very strong spectral characteristics in a single band. The linear color restoration process is the main reason for this drawback; hence, a different approach is required for the final color restoration step. In this paper the latest version of the algorithm, which addresses this issue, is presented. The results obtained by applying the algorithm to numerous natural images show strong robustness and high image quality.
Bayesian Analysis of Mass Spectrometry Proteomics Data using Wavelet Based Functional Mixed Models
Morris, Jeffrey S.; Brown, Philip J.; Herrick, Richard C.; Baggerly, Keith A.; Coombes, Kevin R.
2008-01-01
In this paper, we analyze MALDI-TOF mass spectrometry proteomic data using Bayesian wavelet-based functional mixed models. By modeling mass spectra as functions, this approach avoids reliance on peak detection methods. The flexibility of this framework in modeling non-parametric fixed and random effect functions enables it to model the effects of multiple factors simultaneously, allowing one to perform inference on multiple factors of interest using the same model fit, while adjusting for clinical or experimental covariates that may affect both the intensities and locations of peaks in the spectra. From the model output, we identify spectral regions that are differentially expressed across experimental conditions, while controlling the Bayesian FDR, in a way that takes both statistical and clinical significance into account. We apply this method to two cancer studies. PMID:17888041
A Haar-wavelet-based Lucy-Richardson algorithm for positron emission tomography image restoration
NASA Astrophysics Data System (ADS)
Tam, Naomi W. P.; Lee, Jhih-Shian; Hu, Chi-Min; Liu, Ren-Shyan; Chen, Jyh-Cheng
2011-08-01
Deconvolution is an ill-posed problem that requires regularization: noise is inevitably enhanced during the iterative deconvolution process. The enhanced noise degrades image quality, causing mistakes in clinical interpretation. This paper introduces a Haar-wavelet-based Lucy-Richardson algorithm (HALU) for positron emission tomography (PET) image restoration based on a spatially variant point spread function. After wavelet decomposition, the Lucy-Richardson algorithm is applied to each approximation matrix with a different number of iterations. This enhances image contrast without greatly amplifying the noise level. The results showed that HALU can recover resolution and yields better contrast and a lower noise level than the Lucy-Richardson algorithm alone.
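For readers unfamiliar with the underlying iteration, here is a minimal 1-D sketch of the plain Richardson-Lucy (Lucy-Richardson) update, without the paper's Haar decomposition or spatially variant PSF; the toy two-spike signal and Gaussian PSF are invented for illustration.

```python
import numpy as np

def conv(a, b):
    # Circular convolution via FFT (keeps the example short).
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def corr(a, b):
    # Circular correlation: convolution with the flipped kernel.
    return np.real(np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b))))

def richardson_lucy(observed, psf, iters=50):
    # Multiplicative RL update: est <- est * ((obs / (est*psf)) corr psf)
    est = np.full_like(observed, observed.mean())
    for _ in range(iters):
        ratio = observed / np.maximum(conv(est, psf), 1e-12)
        est = est * corr(ratio, psf)
    return est

n = 64
truth = np.zeros(n); truth[20] = 1.0; truth[40] = 0.5
# Gaussian PSF centred at index 0 (circularly symmetric), unit sum.
d = np.minimum(np.arange(n), n - np.arange(n))
psf = np.exp(-0.5 * (d / 2.0)**2); psf /= psf.sum()

blurred = conv(truth, psf)
restored = richardson_lucy(blurred, psf)
print(int(np.argmax(restored)))
```

The restored signal re-concentrates flux at the spike locations; HALU's contribution, per the abstract, is to run this iteration on wavelet approximation coefficients with level-dependent iteration counts so noise is amplified less.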
Wavelet-based correlations of impedance cardiography signals and heart rate variability
NASA Astrophysics Data System (ADS)
Podtaev, Sergey; Dumler, Andrew; Stepanov, Rodion; Frick, Peter; Tziberkin, Kirill
2010-04-01
Wavelet-based correlation analysis is employed to study impedance cardiography signals (variation in the impedance of the thorax z(t) and the time derivative of the thoracic impedance, -dz/dt) and heart rate variability (HRV). A method of computer thoracic tetrapolar polyrheocardiography is used for hemodynamic registration. The modulus of the wavelet-correlation function shows the level of correlation, and the phase indicates the mean phase shift of oscillations at the given scale (frequency). Significant correlations, essentially exceeding the values obtained for noise signals, are found within two spectral ranges: one corresponding to respiratory activity (0.14-0.5 Hz) and one to endothelial-related metabolic activity and neuroendocrine rhythms (0.0095-0.02 Hz). The phase shift of oscillations in all frequency ranges is probably related to the peculiarities of parasympathetic and neurohumoral regulation of the cardiovascular system.
Corrosion in Reinforced Concrete Panels: Wireless Monitoring and Wavelet-Based Analysis
Qiao, Guofu; Sun, Guodong; Hong, Yi; Liu, Tiejun; Guan, Xinchun
2014-01-01
To realize efficient data capture and accurate analysis of pitting corrosion in reinforced concrete (RC) structures, we first design and implement a wireless sensor network (WSN) to monitor the pitting corrosion of RC panels, and then propose a wavelet-based algorithm to analyze the corrosion state from the data collected by the wireless platform. We design a novel pitting-corrosion-detecting mote and a communication protocol such that the monitoring platform can sample the electrochemical emission signals of the corrosion process at a configured period and send these signals to a central computer for analysis. The proposed algorithm, based on wavelet-domain analysis, returns the energy distribution of the electrochemical emission data, from which closer observation and understanding can be achieved. We also conducted test-bed experiments on RC panels. The results verify the feasibility and efficiency of the proposed WSN system and algorithms. PMID:24556673
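The "energy distribution" such an algorithm returns is typically the signal energy carried by each wavelet decomposition level. A minimal sketch with a hand-rolled orthonormal Haar transform follows; the "electrochemical noise" trace is invented for illustration, and the paper's actual wavelet family and levels may differ.

```python
import numpy as np

def haar_energy_levels(signal, levels=4):
    # Orthonormal Haar analysis: repeatedly split into approximation and
    # detail; return the energy carried by each detail level plus the
    # final approximation.
    x = np.asarray(signal, float)
    energies = []
    for _ in range(levels):
        a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation
        d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail
        energies.append((d**2).sum())
        x = a
    energies.append((x**2).sum())              # final approximation
    return energies

rng = np.random.default_rng(4)
t = np.arange(256)
# Hypothetical trace: slow drift plus a short burst of activity.
sig = 0.02*t + np.where((t > 100) & (t < 110), rng.normal(0, 2, 256), 0.0)

e = haar_energy_levels(sig)
# Orthonormality means the level energies sum to the total energy.
print(np.isclose(sum(e), (sig**2).sum()))
```

Comparing how energy shifts between fine-scale detail levels (bursty electrochemical transients) and coarse levels (drift) is what lets such an analysis characterize the corrosion state.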
An Investigation of Wavelet Bases for Grid-Based Multi-Scale Simulations Final Report
Baty, R.S.; Burns, S.P.; Christon, M.A.; Roach, D.W.; Trucano, T.G.; Voth, T.E.; Weatherby, J.R.; Womble, D.E.
1998-11-01
The research summarized in this report is the result of a two-year effort that has focused on evaluating the viability of wavelet bases for the solution of partial differential equations. The primary objective for this work has been to establish a foundation for hierarchical/wavelet simulation methods based upon numerical performance, computational efficiency, and the ability to exploit the hierarchical adaptive nature of wavelets. This work has demonstrated that hierarchical bases can be effective for problems with a dominant elliptic character. However, the strict enforcement of orthogonality was found to be less desirable than weaker semi-orthogonality or bi-orthogonality for solving partial differential equations. This conclusion has led to the development of a multi-scale linear finite element based on a hierarchical change of basis. The reproducing kernel particle method has been found to yield extremely accurate phase characteristics for hyperbolic problems while providing a convenient framework for multi-scale analyses.
Conjugate Event Study of Geomagnetic ULF Pulsations with Wavelet-based Indices
NASA Astrophysics Data System (ADS)
Xu, Z.; Clauer, C. R.; Kim, H.; Weimer, D. R.; Cai, X.
2013-12-01
The interactions between the solar wind and the geomagnetic field produce a variety of space weather phenomena, which can impact the advanced technological systems of modern society, including, for example, power, communication, and navigation systems. One such phenomenon is the geomagnetic ULF pulsation observed by ground-based or in-situ satellite measurements. Here, we describe a wavelet-based index and apply it to geomagnetic ULF pulsations observed by the Antarctic and Greenland magnetometer arrays. The wavelet indices computed from these data provide spectral, correlation, and magnitude information on the geomagnetic pulsations. The results show that the geomagnetic field at conjugate locations responds differently according to the frequency of the pulsations. The index is effective for identifying pulsation events, measures important characteristics of the pulsations, and could be a useful tool for monitoring geomagnetic pulsations.
Design of wavelet-based ECG detector for implantable cardiac pacemakers.
Min, Young-Jae; Kim, Hoon-Ki; Kang, Yu-Ri; Kim, Gil-Su; Park, Jongsun; Kim, Soo-Won
2013-08-01
A wavelet Electrocardiogram (ECG) detector for low-power implantable cardiac pacemakers is presented in this paper. The proposed wavelet-based ECG detector consists of a wavelet decomposer with wavelet filter banks, a QRS complex detector of hypothesis testing with wavelet-demodulated ECG signals, and a noise detector with zero-crossing points. In order to achieve high detection accuracy with low power consumption, a multi-scaled product algorithm and soft-threshold algorithm are efficiently exploited in our ECG detector implementation. Our algorithmic and architectural level approaches have been implemented and fabricated in a standard 0.35 µm CMOS technology. The testchip including a low-power analog-to-digital converter (ADC) shows a low detection error-rate of 0.196% and low power consumption of 19.02 µW with a 3 V supply voltage. PMID:23893202
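The two signal-processing ingredients named in the abstract, soft thresholding and the multiscale product, can be sketched in a few lines. Here two moving-average scales stand in for wavelet scales, and the spike train is an invented ECG-like toy; the detector's actual filter banks are not reproduced.

```python
import numpy as np

def soft_threshold(x, thr):
    # Soft thresholding: shrink toward zero, zeroing coefficients
    # whose magnitude is below thr.
    return np.sign(x) * np.maximum(np.abs(x) - thr, 0.0)

rng = np.random.default_rng(5)
sig = 0.1 * rng.normal(size=101)   # baseline noise
sig[50] += 3.0                     # a lone QRS-like spike

# Two smoothing scales as a crude stand-in for wavelet scales; their
# pointwise product reinforces features present at both scales while
# suppressing noise that appears at only one.
s1 = np.convolve(sig, np.ones(3) / 3, mode="same")
s2 = np.convolve(sig, np.ones(7) / 7, mode="same")
prod = soft_threshold(s1 * s2, 0.05)

peak = int(np.argmax(prod))
print(abs(peak - 50) <= 3)
```

The multiscale product sharpens the detection statistic around the true event, and soft thresholding cleans residual noise, which is how such detectors trade accuracy against the power budget of an implant.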
Sepehrband, Farshid; Clark, Kristi A; Ullmann, Jeremy F P; Kurniawan, Nyoman D; Leanage, Gayeshika; Reutens, David C; Yang, Zhengyi
2015-09-01
We examined whether quantitative density measures of cerebral tissue consistent with histology can be obtained from diffusion magnetic resonance imaging (MRI). By incorporating prior knowledge of myelin and cell membrane densities, absolute tissue density values were estimated from relative intracellular and intraneurite density values obtained from diffusion MRI. The NODDI (neurite orientation distribution and density imaging) technique, which can be applied clinically, was used. Myelin density estimates were compared with the results of electron and light microscopy in ex vivo mouse brain and with published density estimates in a healthy human brain. In ex vivo mouse brain, estimated myelin densities in different subregions of the mouse corpus callosum were almost identical to values obtained from electron microscopy (diffusion MRI: 42 ± 6%, 36 ± 4%, and 43 ± 5%; electron microscopy: 41 ± 10%, 36 ± 8%, and 44 ± 12% in genu, body and splenium, respectively). In the human brain, good agreement was observed between estimated fiber density measurements and previously reported values based on electron microscopy. Estimated density values were unaffected by crossing fibers. Hum Brain Mapp 36:3687-3702, 2015. © 2015 Wiley Periodicals, Inc. PMID:26096639
Posterior Density Estimation for a Class of On-line Quality Control Models
NASA Astrophysics Data System (ADS)
Dorea, Chang C. Y.; Santos, Walter B.
2011-11-01
On-line quality control during production calls for periodic monitoring of the produced items according to some prescribed strategy. It is reasonable to assume the existence of internal, non-observable variables, so that the monitoring carried out is only partially reliable. Under the setting of a Hidden Markov Model (HMM), posterior density estimates are obtained via particle-filter-type algorithms. Making use of kernel density methods, the stable-regime densities are approximated and false-alarm probabilities are estimated.
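The pipeline described, a particle filter for the hidden state followed by kernel smoothing of the particle cloud, can be sketched on a toy model. The AR(1) state and Gaussian observation model below are invented stand-ins for the paper's HMM; all parameters are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

# Toy partially observed AR(1) state; observations add Gaussian noise.
T, N = 50, 2000
true_x = np.zeros(T)
for t in range(1, T):
    true_x[t] = 0.9 * true_x[t-1] + 0.5 * rng.normal()
obs = true_x + 0.3 * rng.normal(size=T)

# Bootstrap particle filter: propagate, weight by likelihood, resample.
particles = rng.normal(0.0, 1.0, N)
for t in range(1, T):
    particles = 0.9 * particles + 0.5 * rng.normal(size=N)   # propagate
    w = stats.norm.pdf(obs[t], loc=particles, scale=0.3)     # weight
    w /= w.sum()
    particles = rng.choice(particles, size=N, p=w)           # resample

# Kernel-smoothed approximation of the filtering (posterior) density.
posterior = stats.gaussian_kde(particles)
print(abs(particles.mean() - true_x[-1]) < 1.0)
```

The kernel smoothing step turns the discrete particle cloud into a continuous density, from which quantities such as false-alarm probabilities can be read off as tail areas.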
Horowitz, Roberto
Traffic Density Estimation with the Cell Transmission Model. Laura Muñoz, Xiaotian Sun, Roberto Horowitz … of traffic densities at unmonitored locations along a highway. The SMM is a hybrid system that switches among … the observability and controllability properties of the SMM modes have been determined. Both the SMM and a density
Banerjee, Arunava
Probability Density Estimation using Isocontours and Isosurfaces: Application to Information … the probability density of the intensity values in an image. We drop the notion of an image as a set of discrete pixels, and assume a piecewise-continuous representation. The probability density can then be regarded
A generalized single linkage method for estimating the cluster tree of a density
Werner Stuetzle. … that the observations may be regarded as a sample from some underlying density in feature space and that groups correspond to modes of this density. The goal then is to find the modes and assign each observation
Electrical Density Sorting and Estimation of Soluble Solids Content of Watermelon
Koro Kato
1997-01-01
The relationship between density and internal quality of watermelon was investigated. The density of watermelon was found to be related both to the degree of hollowness and the soluble solids content which can be used as a measure of sweetness. The soluble solids content of watermelons can be estimated from density and mass by multiple regression analysis. An optimum range
Black and Brown Bear Density Estimates Using Modified Capture-Recapture Techniques in Alaska
Miller, Sterling D.; Becker, Earl F.; Ballard, Warren B.
Population density estimates were obtained for sympatric black bear (Ursus americanus) and brown bear (U. arctos) populations inhabiting a search area of 1,325 km2 in south-central Alaska. Standard capture-recapture population estimation techniques were modified to correct for lack of geographic closure based on daily locations of radio-marked animals over a 7-day period. Calculated density estimates were based on available habitat
Distance Transform Gradient Density Estimation Using the Stationary Phase Approximation
Rangarajan, Anand
Karthik S. Gurumoorthy and Anand Rangarajan. Abstract. The complex wave representation (CWR) converts unsigned 2D distance transforms into their corresponding wave functions. Here, the distance transform S(X) appears as the phase … convergence of the normalized power spectrum (squared magnitude of the Fourier transform) of the wave function to the density …
An Evaluation of the Accuracy of Kernel Density Estimators for Home Range Analysis
D. Erran Seaman; Roger A. Powell
2008-01-01
Abstract. Kernel density estimators are becoming more widely used, particularly as home range estimators. Despite extensive interest in their theoretical properties, little empirical research has been done to investigate their performance as home range estimators. We used computer simulations to compare the area and shape of kernel density estimates to the true area and shape of multimodal two-dimensional distributions. The fixed kernel gave area estimates with very little bias when least squares cross validation was used to select
The Root-Unroot Algorithm for Density Estimation as Implemented via Wavelet Block Thresholding
Brown, Lawrence D.
The Root-Unroot Algorithm for Density Estimation as Implemented via Wavelet Block Thresholding … and by then applying a suitable form of root transformation to the binned data counts. In principle many common … block thresholding estimator in this paper. Finally, the estimated regression function is un-rooted
Hathorn, Bryan C.
Estimation of Vibrational Frequencies and Vibrational Densities of States in Isotopically … frequencies, and the primes represent the isotopically substituted species. Equation 3 provides an estimate of the product of the vibrational frequencies. However, the relationship fails to provide an estimate of the zero
Contemporary Mathematics Integrated density of states and Wegner estimates for
… mainly on the integrated density of states (IDS). First, we present a proof of the existence of … averaging of the trace of the spectral projection; improved volume estimate; sparse averaging of projections; completion of the proof of Theorem 5.0.1; single-site potentials
Ferguson, Thomas S.
Improving Density Estimation by Incorporating Spatial Information. Laura M. Smith, Matthew S. Keegan. November 30, 2009. Abstract. Given discrete event data, we wish to produce a probability density … of density estimation, such as Kernel Density Estimation, do not incorporate geographical information. Using
The importance of spatial models for estimating the strength of density dependence.
Thorson, James T; Skaug, Hans J; Kristensen, Kasper; Shelton, Andrew O; Ward, Eric J; Harms, John H; Benante, James A
2015-05-01
Identifying the existence and magnitude of density dependence is one of the oldest concerns in ecology. Ecologists have aimed to estimate density dependence in population and community data by fitting a simple autoregressive (Gompertz) model for density dependence to time series of abundance for an entire population. However, it is increasingly recognized that spatial heterogeneity in population densities has implications for population and community dynamics. We therefore adapt the Gompertz model to approximate local densities over continuous space instead of population-wide abundance, and allow productivity to vary spatially using Gaussian random fields. We then show that the conventional (nonspatial) Gompertz model can result in biased estimates of density dependence (e.g., identifying oscillatory dynamics when not present) if densities vary spatially. By contrast, the spatial Gompertz model provides accurate and precise estimates of density dependence for a variety of simulation scenarios and data availabilities. These results are corroborated when comparing spatial and nonspatial models for data from 10 years and ~100 sampling stations for three long-lived rockfishes (Sebastes spp.) off the California, USA coast. In this case, the nonspatial model estimates implausible oscillatory dynamics on an annual time scale, while the spatial model estimates strong autocorrelation and is supported by model selection tools. We conclude by discussing the importance of improved data archiving techniques, so that spatial models can be used to reexamine classic questions regarding the existence and magnitude of density dependence in wild populations. PMID:26236835
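The nonspatial Gompertz model the abstract refers to is an AR(1) on log-abundance, where a slope below 1 indicates density dependence. A minimal sketch of fitting it by least squares follows; the parameter values are invented for illustration and none of the paper's spatial machinery is reproduced.

```python
import numpy as np

rng = np.random.default_rng(7)

# Nonspatial Gompertz model for log-abundance x[t]:
#   x[t] = a + b * x[t-1] + process noise,
# where b < 1 indicates density dependence (parameters invented).
a_true, b_true, T = 0.5, 0.7, 200
x = np.zeros(T)
for t in range(1, T):
    x[t] = a_true + b_true * x[t-1] + 0.2 * rng.normal()

# Least-squares estimate of (a, b) from the lagged time series.
X = np.column_stack([np.ones(T - 1), x[:-1]])
(a_hat, b_hat), *_ = np.linalg.lstsq(X, x[1:], rcond=None)
print(b_hat < 1.0)   # estimated density dependence
```

The paper's point is that when the data are spatial averages of heterogeneous local densities, this simple regression can return misleading values of b (even spuriously oscillatory dynamics), which motivates the spatial extension.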
The estimation of the gradient of a density function, with applications in pattern recognition
KEINOSUKE FUKUNAGA; LARRY D. HOSTETLER
1975-01-01
Nonparametric density gradient estimation using a generalized kernel approach is investigated. Conditions on the kernel functions are derived to guarantee asymptotic unbiasedness, consistency, and uniform consistency of the estimates. The results are generalized to obtain a simple mean-shift estimate that can be extended in a k-nearest-neighbor approach. Applications of gradient estimation to pattern recognition are presented using clustering and intrinsic dimensionality
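The mean-shift estimate mentioned here moves a point to the kernel-weighted mean of the data, which is a step in the direction of the estimated density gradient; iterating climbs to a mode. A minimal 1-D sketch with an invented bimodal sample:

```python
import numpy as np

def mean_shift_point(x0, data, bandwidth, steps=100):
    # Gaussian-kernel mean shift: repeatedly move to the kernel-weighted
    # mean of the data; converges toward a mode of the underlying KDE.
    x = float(x0)
    for _ in range(steps):
        w = np.exp(-0.5 * ((data - x) / bandwidth)**2)
        x = (w * data).sum() / w.sum()
    return x

rng = np.random.default_rng(8)
# Hypothetical sample with modes near -2 and 3 (illustrative only).
data = np.concatenate([rng.normal(-2, 0.3, 300), rng.normal(3, 0.3, 300)])

# Starting points on either side climb to their nearest mode.
m1 = mean_shift_point(-1.0, data, 0.5)
m2 = mean_shift_point(2.0, data, 0.5)
print(m1 < 0 < m2)
```

Running mean shift from every data point and grouping points that converge to the same mode is exactly the clustering application the abstract alludes to.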
RADIATION PRESSURE DETECTION AND DENSITY ESTIMATE FOR 2011 MD
Micheli, Marco; Tholen, David J.; Elliott, Garrett T. E-mail: tholen@ifa.hawaii.edu
2014-06-10
We present our astrometric observations of the small near-Earth object 2011 MD (H ≈ 28.0), obtained after its very close fly-by to Earth in 2011 June. Our set of observations extends the observational arc to 73 days, and, together with the published astrometry obtained around the Earth fly-by, allows a direct detection of the effect of radiation pressure on the object, with a confidence of 5σ. The detection can be used to put constraints on the density of the object, pointing to either an unexpectedly low value of ρ = (640 ± 330) kg m⁻³ (68% confidence interval) if we assume a typical probability distribution for the unknown albedo, or to an unusually high reflectivity of its surface. This result may have important implications both in terms of impact hazard from small objects and in light of a possible retrieval of this target.
A preliminary density estimate for Andean bear using camera-trapping methods
Boris Ríos-Uzeda; Humberto Gómez; Robert B. Wallace
2007-01-01
Andean (spectacled) bears (Tremarctos ornatus) are threatened across most of their range in the Andes. To date no field-based density estimations are available for this species. We present a preliminary estimate of the density of this species in the Greater Madidi Landscape using standard camera-trapping methods and capture-recapture analysis. We photographed 3 individually recognizable Andean bears in a 17.6
Density Estimation with Confidence Sets Exemplified by Superclusters and Voids in the Galaxies
Kathryn Roeder
1990-01-01
A method is presented for forming both a point estimate and a confidence set of semiparametric densities. The final product is a three-dimensional figure that displays a selection of density estimates for a plausible range of smoothing parameters. The boundaries of the smoothing parameter are determined by a nonparametric goodness-of-fit test that is based on the sample spacings. For each
Distance Transform Gradient Density Estimation using the Stationary Phase Approximation
Gurumoorthy, Karthik S
2011-01-01
The complex wave representation (CWR) converts unsigned 2D distance transforms into their corresponding wave functions. Here, the distance transform S(X) appears as the phase of the wave function φ(X); specifically, φ(X) = exp(iS(X)/τ), where τ is a free parameter. In this work, we prove a novel result using the higher-order stationary phase approximation: we show convergence of the normalized power spectrum (squared magnitude of the Fourier transform) of the wave function to the density function of the distance transform gradients as the free parameter τ → 0. In colloquial terms, spatial frequencies are gradient histogram bins. Since the distance transform gradients carry only orientation information (their magnitudes are identically equal to one almost everywhere), as τ → 0 the 2D Fourier transform values mainly lie on the unit circle in the spatial frequency domain. The proof of the result involves standard integration techniques and requires proper ordering of limits. Our mathematical re...
Yumin Zhang; Qing-Guo Wang; Kai-Yew Lum
2008-01-01
In this paper, a fault detection and diagnosis (FDD) scheme for a class of discrete nonlinear system fault using output probability density estimation is presented. Unlike classical FDD problems, the measured output of the system is viewed as a stochastic process and its square root probability density function (PDF) is modeled with B-spline functions, which leads to a deterministic space-time
In-Shell Bulk Density as an Estimator of Farmers Stock Grade Factors
Technology Transfer Automated Retrieval System (TEKTRAN)
The objective of this research was to determine whether or not bulk density can be used to accurately estimate farmer stock grade factors such as total sound mature kernels and other kernels. Physical properties including bulk density, pod size and kernel size distributions are measured as part of t...
The energy density of jellyfish: Estimates from bomb-calorimetry and proximate-composition
Hays, Graeme
Energy densities of three scyphozoan jellyfish (Cyanea capillata, Rhizostoma octopus and Chrysaora hysoscella) were estimated from bomb-calorimetry and proximate-composition analysis; implications of these low energy densities for species feeding on jellyfish are discussed. © 2007 Elsevier B.V.
Thomas, Len
Estimating cetacean population density using fixed passive acoustic sensors: An example. The methods convert data from a set of fixed passive acoustic sensors into density estimates of cetacean populations. Cetaceans (whales and dolphins) form a key part of marine ecosystems, and yet many species are potentially …
Density Dependence in Time Series Observations of Natural Populations: Estimation and Testing
Steury, Todd D.
A statistical test is presented for detecting density dependence in univariate time series observations of population abundance; the null hypothesis is that the population is undergoing stochastic exponential growth or decline.
Assessment of probability density estimation methods: Parzen window and Finite Gaussian Mixtures
Verleysen, Michel
C. Archambeau and M. Verleysen, DICE, Université Catholique de Louvain, Louvain-la-Neuve, Belgium (archambeau@dice.ucl.ac.be, Verleysen@dice.ucl.ac.be). Abstract: assessment of probability density estimation methods, the Parzen window and finite Gaussian mixtures.
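The parametric side of such a comparison can be sketched with a minimal EM fit of a one-dimensional finite Gaussian mixture (illustrative code, not the authors' implementation; the non-parametric counterpart would be a Parzen-window estimate of the same sample):

```python
import numpy as np

def gmm_em_1d(x, k=2, iters=200):
    """Fit a 1-D finite Gaussian mixture by expectation-maximization."""
    mu = np.linspace(x.min(), x.max(), k)      # deterministic spread-out init
    var = np.full(k, x.var())
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point
        dens = w * np.exp(-0.5 * (x[:, None] - mu)**2 / var) / np.sqrt(2 * np.pi * var)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted moment updates
        nk = r.sum(axis=0)
        w, mu = nk / len(x), (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu)**2).sum(axis=0) / nk
    return w, mu, var

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-3, 1, 1000), rng.normal(3, 1, 1000)])
weights, means, variances = gmm_em_1d(x)
```

For well-separated modes the fitted means recover the true component centers; the trade-off against the Parzen window (no model bias, but a bandwidth to tune) is what assessments like this one quantify.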
DENSITY ESTIMATES FOR MINIMAL SURFACES AND SURFACES FLOWING BY MEAN CURVATURE
Ciocan-Fontanine, Ionut
Robert Gulliver. The density of the surface at a point p ∈ M will be bounded by geometric measures. Since self-intersections are unrealistic for such physical contexts as soap films or biological membranes, ...
Vahabi, Zahra; Amirfattahi, Rasoul; Shayegh, Farzaneh; Ghassemi, Fahimeh
2015-09-01
Considerable efforts have been made to predict seizures. Among these, methods that quantify synchronization between brain areas are the most important. However, to date, a practically acceptable result has not been reported. In this paper, we use a synchronization measurement method derived from the ability of the bi-spectrum to determine the nonlinear properties of a system. In this method, first, the temporal variation of the bi-spectrum of different channels of electrocorticography (ECoG) signals is obtained via an extended wavelet-based time-frequency analysis method; then, to compare different channels, the bi-phase correlation measure is introduced. Because the temporal variation of the amount of nonlinear coupling between brain regions, which has not been considered before, is taken into account, the results are more reliable than conventional phase-synchronization measures. It is shown that, for 21 patients of the FSPEEG database, bi-phase correlation can discriminate the pre-ictal and ictal states with very low false positive rates (FPRs) (average: 0.078/h) and high sensitivity (100%). However, the proposed seizure predictor still cannot significantly outperform a random predictor for all patients. PMID:26126613
Matrix-free application of Hamiltonian operators in Coifman wavelet bases
NASA Astrophysics Data System (ADS)
Acevedo, Ramiro; Lombardini, Richard; Johnson, Bruce R.
2010-06-01
A means of evaluating the action of Hamiltonian operators on functions expanded in orthogonal compact support wavelet bases is developed, avoiding the direct construction and storage of operator matrices that complicate extension to coupled multidimensional quantum applications. Application of a potential energy operator is accomplished by simple multiplication of the two sets of expansion coefficients without any convolution. The errors of this coefficient product approximation are quantified and lead to use of particular generalized coiflet bases, derived here, that maximize the number of moment conditions satisfied by the scaling function. This is at the expense of the number of vanishing moments of the wavelet function (approximation order), which appears to be a disadvantage but is shown surmountable. In particular, application of the kinetic energy operator, which is accomplished through the use of one-dimensional (1D) [or at most two-dimensional (2D)] differentiation filters, then degrades in accuracy if the standard choice is made. However, it is determined that use of high-order finite-difference filters yields strongly reduced absolute errors. Eigensolvers that ordinarily use only matrix-vector multiplications, such as the Lanczos algorithm, can then be used with this more efficient procedure. Applications are made to anharmonic vibrational problems: a 1D Morse oscillator, a 2D model of proton transfer, and three-dimensional vibrations of nitrosyl chloride on a global potential energy surface.
A real-time wavelet-based video decoder using SIMD technology
NASA Astrophysics Data System (ADS)
Klepko, Robert; Wang, Demin
2008-02-01
This paper presents a fast implementation of a wavelet-based video codec. The codec consists of motion-compensated temporal filtering (MCTF), 2-D spatial wavelet transform, and SPIHT for wavelet coefficient coding. It offers compression efficiency that is competitive to H.264. The codec is implemented in software running on a general purpose PC, using C programming language and streaming SIMD extensions intrinsics, without assembly language. This high-level software implementation allows the codec to be portable to other general-purpose computing platforms. Testing with a Pentium 4 HT at 3.6GHz (running under Linux and using the GCC compiler, version 4), shows that the software decoder is able to decode 4CIF video in real-time, over 2 times faster than software written only in C language. This paper describes the structure of the codec, the fast algorithms chosen for the most computationally intensive elements in the codec, and the use of SIMD to implement these algorithms.
Performance evaluation of wavelet-based face verification on a PDA recorded database
NASA Astrophysics Data System (ADS)
Sellahewa, Harin; Jassim, Sabah A.
2006-05-01
The rise of international terrorism and the rapid increase in fraud and identity theft have added urgency to the task of developing biometric-based person identification as a reliable alternative to conventional authentication methods. Human identification based on face images is a tough challenge in comparison to identification based on fingerprints or iris recognition. Yet, due to its unobtrusive nature, face recognition is the preferred method of identification for security-related applications. The success of such systems will depend on the support of massive infrastructures. Current mobile communication devices (3G smart phones) and PDAs are equipped with a camera, which can capture both still images and streaming video clips, and a touch-sensitive display panel. Besides convenience, such devices provide an adequate secure infrastructure for sensitive and financial transactions, by protecting against fraud and repudiation while ensuring accountability. Biometric authentication systems for mobile devices would have obvious advantages in conflict scenarios when communication from beyond enemy lines is essential to save soldier and civilian lives. In areas of conflict or disaster the luxury of fixed infrastructure is not available or is destroyed. In this paper, we present a wavelet-based face verification scheme that has been specifically designed and implemented on a currently available PDA. We report on its performance on the benchmark audio-visual BANCA database and on a newly developed PDA-recorded audio-visual database that includes indoor and outdoor recordings.
A study on discrete wavelet-based noise removal from EEG signals.
Asaduzzaman, K; Reaz, M B I; Mohd-Yasin, F; Sim, K S; Hussain, M S
2010-01-01
Electroencephalogram (EEG) serves as an extremely valuable tool for clinicians and researchers to study the activity of the brain in a non-invasive manner. It has long been used for the diagnosis of various central nervous system disorders like seizures, epilepsy, and brain damage and for categorizing sleep stages in patients. The artifacts caused by various factors such as Electrooculogram (EOG), eye blink, and Electromyogram (EMG) in the EEG signal increase the difficulty in analyzing it. Discrete wavelet transform has been applied in this research for removing noise from the EEG signal. The effectiveness of the noise removal is quantitatively measured using Root Mean Square (RMS) Difference. This paper reports on the effectiveness of wavelet transform applied to the EEG signal as a means of removing noise to retrieve important information related to both healthy and epileptic patients. Wavelet-based noise removal on the EEG signal of both healthy and epileptic subjects was performed using four discrete wavelet functions. With the appropriate choice of the wavelet function (WF), it is possible to remove noise effectively and analyze the EEG meaningfully. Results of this study show that WF Daubechies 8 (db8) provides the best noise removal from the raw EEG signal of healthy patients, while WF orthogonal Meyer does the same for epileptic patients. This algorithm is intended for FPGA implementation in portable biomedical equipment to detect different brain states in different circumstances. PMID:20865544
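The wavelet-thresholding idea and the RMS-difference metric can be sketched with a single-level Haar transform (the paper uses db8 and orthogonal Meyer wavelets; Haar and the signal below are assumptions chosen only to keep the example self-contained):

```python
import numpy as np

def haar_dwt(sig):
    """One level of the Haar discrete wavelet transform."""
    a = (sig[0::2] + sig[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (sig[0::2] - sig[1::2]) / np.sqrt(2)   # detail coefficients
    return a, d

def haar_idwt(a, d):
    """Exact inverse of haar_dwt."""
    out = np.empty(2 * len(a))
    out[0::2] = (a + d) / np.sqrt(2)
    out[1::2] = (a - d) / np.sqrt(2)
    return out

def denoise(sig, thresh):
    """Soft-threshold the detail band, then reconstruct."""
    a, d = haar_dwt(sig)
    d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)
    return haar_idwt(a, d)

def rms_difference(x, y):
    return np.sqrt(np.mean((x - y)**2))

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1024, endpoint=False)
clean = np.sin(2 * np.pi * 5 * t)              # stand-in for an EEG rhythm
noisy = clean + rng.normal(0, 0.3, t.size)
thresh = 0.3 * np.sqrt(2 * np.log(t.size))     # universal threshold for sigma = 0.3
denoised = denoise(noisy, thresh)
```

With a real EEG record one would use several decomposition levels and a wavelet such as db8, but the RMS-difference comparison against a reference signal works the same way.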
A new approach to pre-processing digital image for wavelet-based watermark
NASA Astrophysics Data System (ADS)
Agreste, Santa; Andaloro, Guido
2008-11-01
The growth of the Internet has increased the phenomenon of digital piracy of multimedia objects such as software, images, video, audio and text. It is therefore strategic to develop methods and numerical algorithms, stable and with low computational cost, that provide a solution to these problems. We describe a digital watermarking algorithm for color image protection and authenticity: robust, non-blind, and wavelet-based. The use of the Discrete Wavelet Transform is motivated by its good time-frequency features and good match with Human Visual System directives. These two combined elements are important for building an invisible and robust watermark. Moreover, our algorithm can work with any image, thanks to a pre-processing step that includes resize techniques adapting the size of the original image for the wavelet transform. The watermark signal is calculated in correlation with the image features and statistical properties. In the detection step we apply a re-synchronization between the original and watermarked image according to the Neyman-Pearson statistical criterion. Experimentation on a large set of different images has shown the watermark to be resistant against geometric, filtering, and StirMark attacks, with a low false-alarm rate.
Kim, Byung S; Yoo, Sun K
2007-09-01
The use of wireless networks bears great practical importance in instantaneous transmission of ECG signals during movement. In this paper, three typical wavelet-based ECG compression algorithms, Rajoub (RA), Embedded Zerotree Wavelet (EZ), and Wavelet Transform Higher-Order Statistics Coding (WH), were evaluated to find an appropriate ECG compression algorithm for scalable and reliable wireless tele-cardiology applications, particularly over a CDMA network. The short-term and long-term performance characteristics of the three algorithms were analyzed using normal, abnormal, and measurement noise-contaminated ECG signals from the MIT-BIH database. In addition to the processing delay measurement, compression efficiency and reconstruction sensitivity to error were also evaluated via simulation models including the noise-free channel model, random noise channel model, and CDMA channel model, as well as over an actual CDMA network currently operating in Korea. This study found that the EZ algorithm achieves the best compression efficiency within a low-noise environment, and that the WH algorithm is competitive for use in high-error environments with degraded short-term performance with abnormal or contaminated ECG signals. PMID:17701824
Wavelet-based decomposition and analysis of structural patterns in astronomical images
NASA Astrophysics Data System (ADS)
Mertens, Florent; Lobanov, Andrei
2015-02-01
Context. Images of spatially resolved astrophysical objects contain a wealth of morphological and dynamical information, and effectively extracting this information is of paramount importance for understanding the physics and evolution of these objects. The algorithms and methods currently employed for this purpose (such as Gaussian model fitting) often use simplified approaches to describe the structure of resolved objects. Aims: Automated (unsupervised) methods for structure decomposition and tracking of structural patterns are needed for this purpose to be able to treat the complexity of structure and large amounts of data involved. Methods: We developed a new wavelet-based image segmentation and evaluation (WISE) method for multiscale decomposition, segmentation, and tracking of structural patterns in astronomical images. Results: The method was tested against simulated images of relativistic jets and applied to data from long-term monitoring of parsec-scale radio jets in 3C 273 and 3C 120. Working at its coarsest resolution, WISE reproduces the previous results of a model-fitting evaluation of the structure and kinematics in these jets exceptionally well. Extending the WISE structure analysis to fine scales provides the first robust measurements of two-dimensional velocity fields in these jets and indicates that the velocity fields probably reflect the evolution of Kelvin-Helmholtz instabilities that develop in the flow.
Item Response Theory with Estimation of the Latent Density Using Davidian Curves
ERIC Educational Resources Information Center
Woods, Carol M.; Lin, Nan
2009-01-01
Davidian-curve item response theory (DC-IRT) is introduced, evaluated with simulations, and illustrated using data from the Schedule for Nonadaptive and Adaptive Personality Entitlement scale. DC-IRT is a method for fitting unidimensional IRT models with maximum marginal likelihood estimation, in which the latent density is estimated,…
Estimating beaked whale density from single hydrophones by means of propagation modeling
Thomas, Len
Presentation outline: overview of the DECAF project; Blainville's beaked whales; study area and available acoustic data; how do we estimate density.
ERIC Educational Resources Information Center
Woods, Carol M.; Thissen, David
2006-01-01
The purpose of this paper is to introduce a new method for fitting item response theory models with the latent population distribution estimated from the data using splines. A spline-based density estimation system provides a flexible alternative to existing procedures that use a normal distribution, or a different functional form, for the…
A family of non-parametric density estimation algorithms E. G. TABAK
Tabak, Esteban G.
Introduction. A central problem in the analysis of data is density estimation: given a set of independent observations x_j, j = 1, ..., m, estimate its underlying probability distribution. This article is concerned with a family of Gaussian mixtures, with free parameters in the means and covariance matrices.
A NEW CLASS OF ENTROPY ESTIMATORS FOR MULTI-DIMENSIONAL DENSITIES Erik G. Miller
Massachusetts at Amherst, University of
Erik G. Miller, EECS Department, UC Berkeley, Berkeley, CA 94720, USA. International Conference on Acoustics, Speech, and Signal Processing, 2003. Abstract: We present a new class of estimators for approximating the entropy of multi-dimensional densities.
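A well-known non-parametric estimator in this general vein, sketched below for illustration, is the Kozachenko-Leonenko k-nearest-neighbour entropy estimator (an assumption for the example; the paper's own estimators are a different construction):

```python
import numpy as np
from math import lgamma, log, pi

def knn_entropy(x, k=5):
    """Kozachenko-Leonenko k-NN estimator of differential entropy for an
    (n, d) array of i.i.d. samples, using brute-force distances."""
    n, d = x.shape
    dist = np.sqrt(((x[:, None, :] - x[None, :, :])**2).sum(-1))
    dist.sort(axis=1)
    eps = dist[:, k]                      # k-th neighbour (column 0 is self)
    # digamma at integers via harmonic numbers: psi(m) = -gamma + sum_{j<m} 1/j
    gamma = 0.5772156649015329
    psi = lambda m: -gamma + sum(1.0 / j for j in range(1, m))
    log_cd = (d / 2) * log(pi) - lgamma(d / 2 + 1)   # log volume of unit d-ball
    return psi(n) - psi(k) + log_cd + d * np.mean(np.log(eps))

rng = np.random.default_rng(0)
sample = rng.standard_normal((1500, 2))
h = knn_entropy(sample)
# true differential entropy of a 2-D standard normal is ln(2*pi*e), about 2.838
```

The estimator needs no density estimate as an intermediate step, which is the property that makes this family attractive in higher dimensions.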
Technology Transfer Automated Retrieval System (TEKTRAN)
Technical Summary Objectives: Determine the effect of body mass index (BMI) on the accuracy of body density (Db) estimated with skinfold thickness (SFT) measurements compared to air displacement plethysmography (ADP) in adults. Subjects/Methods: We estimated Db with SFT and ADP in 131 healthy men an...
Nonparametric maximum likelihood estimation of probability densities by penalty function methods
NASA Technical Reports Server (NTRS)
Demontricher, G. F.; Tapia, R. A.; Thompson, J. R.
1974-01-01
Unless it is known a priori exactly to which finite-dimensional manifold the probability density function giving rise to a set of samples belongs, the parametric maximum likelihood estimation procedure leads to poor estimates and is unstable, while the nonparametric maximum likelihood procedure is undefined. A very general theory of maximum penalized likelihood estimation, which should avoid many of these difficulties, is presented. It is demonstrated that each reproducing kernel Hilbert space leads, in a very natural way, to a maximum penalized likelihood estimator, and that a well-known class of reproducing kernel Hilbert spaces gives polynomial splines as the nonparametric maximum penalized likelihood estimates.
Estimations of bulk geometrically necessary dislocation density using high resolution EBSD.
Ruggles, T J; Fullwood, D T
2013-10-01
Characterizing the content of geometrically necessary dislocations (GNDs) in crystalline materials is crucial to understanding plasticity. Electron backscatter diffraction (EBSD) effectively recovers local crystal orientation, which is used to estimate the lattice distortion, components of the Nye dislocation density tensor (α), and subsequently the local bulk GND density of a material. This paper presents a complementary estimate of bulk GND density using measurements of local lattice curvature and strain gradients from more recent high resolution EBSD (HR-EBSD) methods. A continuum adaptation of the classical equations for the distortion around a dislocation is developed and used to simulate random GND fields to validate the various available approximations of GND content. PMID:23751207
Kocovsky, Patrick M.; Rudstam, Lars G.; Yule, Daniel L.; Warner, David M.; Schaner, Ted; Pientka, Bernie; Deller, John W.; Waterfield, Holly A.; Witzel, Larry D.; Sullivan, Patrick J.
2013-01-01
Standardized methods of data collection and analysis ensure quality and facilitate comparisons among systems. We evaluated the importance of three recommendations from the Standard Operating Procedure for hydroacoustics in the Laurentian Great Lakes (GLSOP) on density estimates of target species: noise subtraction; setting volume backscattering strength (Sv) thresholds from user-defined minimum target strength (TS) of interest (TS-based Sv threshold); and calculations of an index for multiple targets (Nv index) to identify and remove biased TS values. Eliminating noise had the predictable effect of decreasing density estimates in most lakes. Using the TS-based Sv threshold decreased fish densities in the middle and lower layers in the deepest lakes with abundant invertebrates (e.g., Mysis diluviana). Correcting for biased in situ TS increased measured density up to 86% in the shallower lakes, which had the highest fish densities. The current recommendations by the GLSOP significantly influence acoustic density estimates, but the degree of importance is lake dependent. Applying GLSOP recommendations, whether in the Laurentian Great Lakes or elsewhere, will improve our ability to compare results among lakes. We recommend further development of standards, including minimum TS and analytical cell size, for reducing the effect of biased in situ TS on density estimates.
Cetacean population density estimation from single fixed sensors using passive acoustics.
Küsel, Elizabeth T; Mellinger, David K; Thomas, Len; Marques, Tiago A; Moretti, David; Ward, Jessica
2011-06-01
Passive acoustic methods are increasingly being used to estimate animal population density. Most density estimation methods are based on estimates of the probability of detecting calls as functions of distance. Typically these are obtained using receivers capable of localizing calls or from studies of tagged animals. However, both approaches are expensive to implement. The approach described here uses a Monte Carlo model to estimate the probability of detecting calls from single sensors. The passive sonar equation is used to predict signal-to-noise ratios (SNRs) of received clicks, which are then combined with a detector characterization that predicts probability of detection as a function of SNR. Input distributions for source level, beam pattern, and whale depth are obtained from the literature. Acoustic propagation modeling is used to estimate transmission loss. Other inputs for density estimation are call rate, obtained from the literature, and false positive rate, obtained from manual analysis of a data sample. The method is applied to estimate density of Blainville's beaked whales over a 6-day period around a single hydrophone located in the Tongue of the Ocean, Bahamas. Results are consistent with those from previous analyses, which use additional tag data. PMID:21682386
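The Monte Carlo step can be sketched as follows; every distribution and parameter below (source-level spread, transmission-loss range, noise level, detector curve) is an illustrative assumption, not a value from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Hypothetical input distributions for a random click
source_level = rng.normal(200, 10, n)          # dB re 1 uPa @ 1 m
transmission_loss = rng.uniform(60, 110, n)    # dB, from a propagation model
noise_level = 70.0                             # dB, ambient noise

# Passive sonar equation: received SNR = SL - TL - NL
snr = source_level - transmission_loss - noise_level

def p_detect(snr_db, snr50=20.0, slope=0.5):
    """Hypothetical detector characterization: logistic curve in SNR."""
    return 1.0 / (1.0 + np.exp(-slope * (snr_db - snr50)))

# Average probability of detecting a click from a randomly placed source
p_hat = p_detect(snr).mean()
```

In the full method, `p_hat` feeds into the density estimator together with call rate and false positive rate; here it simply illustrates how simulated SNRs and a detector curve combine into a single detection probability.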
Sea ice density estimation in the Bohai Sea using the hyperspectral remote sensing technology
NASA Astrophysics Data System (ADS)
Liu, Chengyu; Shao, Honglan; Xie, Feng; Wang, Jianyu
2014-11-01
Sea ice density is one of the significant physical properties of sea ice and an input parameter in the estimation of engineering mechanical strength and aerodynamic drag coefficients; it is also an important indicator of ice age. The sea ice in the Bohai Sea is a solid-liquid-gas mixture composed of pure ice, brine pockets and bubbles, and its density is mainly governed by the amounts of brine pockets and bubbles: the more brine pockets it contains, the greater the density; the more bubbles, the smaller the density. The reflectance spectrum in 350-2500 nm and the density of sea ice of different thicknesses and ages were measured in the Liaodong Bay of the Bohai Sea during the glacial maximum in the winter of 2012-2013. From the measured sea ice densities and reflectance spectra, characteristic bands that reflect sea ice density variation were found, and a sea ice density spectrum index (SIDSI) for the Bohai Sea was constructed. An inversion model of sea ice density in the Bohai Sea, referring to the layer from the surface down to the depth of light penetration, was then proposed. Sea ice density in the Bohai Sea was estimated using the proposed model from a Hyperion hyperspectral image. The results show that the error of the sea ice density inversion model is about 0.0004 g·cm⁻³. Sea ice density can thus be estimated from hyperspectral remote sensing images, providing data support for related marine science research and applications.
White, Neil A; Engeman, Richard M; Sugihara, Robert T; Krupa, Heather W
2008-01-01
Background Plotless density estimators are those that are based on distance measures rather than counts per unit area (quadrats or plots) to estimate the density of some usually stationary event, e.g. burrow openings, damage to plant stems, etc. These estimators typically use distance measures between events and from random points to events to derive an estimate of density. The error and bias of these estimators for the various spatial patterns found in nature have been examined using simulated populations only. In this study we investigated eight plotless density estimators to determine which were robust across a wide range of data sets from fully mapped field sites. They covered a wide range of situations including animal damage to rice and corn, nest locations, active rodent burrows and distribution of plants. Monte Carlo simulations were applied to sample the data sets, and in all cases the error of the estimate (measured as relative root mean square error) was reduced with increasing sample size. The method of calculation and ease of use in the field were also used to judge the usefulness of the estimator. Estimators were evaluated in their original published forms, although the variable area transect (VAT) and ordered distance methods have been the subjects of optimization studies. Results An estimator that was a compound of three basic distance estimators was found to be robust across all spatial patterns for sample sizes of 25 or greater. The same field methodology can be used either with the basic distance formula or the formula used with the Kendall-Moran estimator in which case a reduction in error may be gained for sample sizes less than 25, however, there is no improvement for larger sample sizes. The variable area transect (VAT) method performed moderately well, is easy to use in the field, and its calculations easy to undertake. 
Conclusion: Plotless density estimators can provide an estimate of density in situations where it would not be practical to lay out a plot or quadrat, and can in many cases reduce the workload in the field. PMID:18416853
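The basic distance idea can be sketched with the maximum-likelihood estimator for a random (Poisson) pattern, using point-to-nearest-event distances (a simple illustrative form; the compound estimators evaluated in the study combine several such distance measures):

```python
import numpy as np

rng = np.random.default_rng(2)
true_density = 5.0                     # events per unit area
side = 20.0
n_events = rng.poisson(true_density * side * side)
events = rng.uniform(0, side, size=(n_events, 2))

# Distances from random sample points to the nearest event;
# an inner buffer avoids edge effects at the plot boundary.
pts = rng.uniform(2, side - 2, size=(500, 2))
r = np.sqrt(((pts[:, None, :] - events[None, :, :])**2).sum(-1)).min(axis=1)

# For a Poisson pattern, pi * r^2 is Exponential(density), so the MLE is:
density_hat = len(r) / (np.pi * (r**2).sum())
```

For clustered or regular patterns this simple form is biased, which is exactly why studies like this one test estimators against fully mapped field data rather than Poisson simulations alone.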
Royle, J. Andrew; Chandler, Richard B.; Gazenski, Kimberly D.; Graves, Tabitha A.
2013-01-01
Population size and landscape connectivity are key determinants of population viability, yet no methods exist for simultaneously estimating density and connectivity parameters. Recently developed spatial capture–recapture (SCR) models provide a framework for estimating density of animal populations but thus far have not been used to study connectivity. Rather, all applications of SCR models have used encounter probability models based on the Euclidean distance between traps and animal activity centers, which implies that home ranges are stationary, symmetric, and unaffected by landscape structure. In this paper we devise encounter probability models based on “ecological distance,” i.e., the least-cost path between traps and activity centers, which is a function of both Euclidean distance and animal movement behavior in resistant landscapes. We integrate least-cost path models into a likelihood-based estimation scheme for spatial capture–recapture models in order to estimate population density and parameters of the least-cost encounter probability model. Therefore, it is possible to make explicit inferences about animal density, distribution, and landscape connectivity as it relates to animal movement from standard capture–recapture data. Furthermore, a simulation study demonstrated that ignoring landscape connectivity can result in negatively biased density estimators under the naive SCR model.
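The "ecological distance" between two grid cells can be sketched with Dijkstra's algorithm on a resistance surface (a generic least-cost-path sketch; the grid, resistances, and step-cost rule below are assumptions, and the paper embeds such distances inside the SCR encounter probability model):

```python
import heapq

def least_cost_distance(resistance, start):
    """Least-cost ('ecological') distance from `start` to every cell of a 2-D
    resistance grid; a step costs the mean resistance of the two cells."""
    rows, cols = len(resistance), len(resistance[0])
    dist = [[float("inf")] * cols for _ in range(rows)]
    dist[start[0]][start[1]] = 0.0
    pq = [(0.0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if d > dist[r][c]:
            continue                      # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + 0.5 * (resistance[r][c] + resistance[nr][nc])
                if nd < dist[nr][nc]:
                    dist[nr][nc] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return dist

# A high-resistance wall in the middle column forces a detour:
res = [[1, 100, 1],
       [1, 100, 1],
       [1, 100, 1],
       [1, 100, 1],
       [1,   1, 1]]
d = least_cost_distance(res, (0, 0))
```

The Euclidean distance from (0, 0) to (0, 2) is 2 cells, but the least-cost distance is 10, taking the detour around the wall; it is precisely this divergence between Euclidean and ecological distance that the encounter model exploits.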
Effect of compression paddle tilt correction on volumetric breast density estimation
NASA Astrophysics Data System (ADS)
Kallenberg, Michiel G. J.; van Gils, Carla H.; Lokate, Mariëtte; den Heeten, Gerard J.; Karssemeijer, Nico
2012-08-01
For the acquisition of a mammogram, a breast is compressed between a compression paddle and a support table. When compression is applied with a flexible compression paddle, the upper plate may be tilted, which results in variation in breast thickness from the chest wall to the breast margin. Paddle tilt has been recognized as a major problem in volumetric breast density estimation methods. In previous work, we developed a fully automatic method to correct the image for the effect of compression paddle tilt. In this study, we investigated in three experiments the effect of paddle tilt and its correction on volumetric breast density estimation. Results showed that paddle tilt considerably affected accuracy of volumetric breast density estimation, but that effect could be reduced by tilt correction. By applying tilt correction, a significant increase in correspondence between mammographic density estimates and measurements on MRI was established. We argue that in volumetric breast density estimation, tilt correction is both feasible and essential when mammographic images are acquired with a flexible compression paddle.
Badenhausser, I; Amouroux, P; Bretagnolle, V
2007-12-01
Sampling methods to estimate acridid density per surface area unit in grassland habitats were compared using presence-absence data and count data. Sampling plans based on 6 yr of surveys were devised to estimate the density of Chorthippus spp., Euchorthippus spp., and Calliptamus italicus L. These acridids represented >90% of species in the study area. Sampling plans based on count data provided a reasonable tool when densities were >1/m² and when the level of precision was 0.20-0.30. A binomial sampling plan can be used to estimate C. italicus density with a level of precision ≥0.28. Sampling characteristics, i.e., estimated mean, actual precision, and sample size, were established on validation data sets with bootstrapping analysis. Sampling costs were also calculated according to density-dependent functions. Comparison between binomial sampling and enumerative sampling of C. italicus showed that binomial sampling required less time than enumerative sampling when densities were
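The link between presence-absence (binomial) data and density can be sketched with the standard inversion for a random spatial pattern (a generic textbook relationship shown for illustration; the study's actual sampling plans were calibrated empirically):

```python
import math

def density_from_presence(p_occupied, quadrat_area):
    """Under a Poisson (random) spatial pattern, the probability that a quadrat
    of area a contains at least one individual is p = 1 - exp(-density * a);
    inverting this gives a density estimate from presence-absence data alone."""
    return -math.log(1.0 - p_occupied) / quadrat_area

# e.g. 63.2% of 1 m^2 quadrats occupied corresponds to about 1 individual per m^2
est = density_from_presence(0.632, 1.0)
```

The inversion degrades as `p_occupied` approaches 1 (nearly every quadrat occupied carries little information), which is one reason binomial plans lose precision at high densities relative to counts.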
Haque, Ekramul
2013-01-01
Janssen created a classical theory based on calculus to estimate static vertical and horizontal pressures within beds of bulk corn. Even today, his equations are widely used to calculate static loadings imposed by granular materials stored in bins. Many standards, such as American Concrete Institute (ACI) 313, American Society of Agricultural and Biological Engineers EP 433, German DIN 1055, Canadian Farm Building Code (CFBC), European Code (ENV 1991-4), and Australian Code AS 3774, incorporate Janssen's equations as the standard for static load calculations on bins. One of the main drawbacks of Janssen's equations is the assumption that the bulk density of the stored product remains constant throughout the entire bin. While this is true for all practical purposes in small bins, in modern commercial-size bins the bulk density of grains substantially increases due to compressive and hoop stresses. Overpressure factors are applied to Janssen loadings to account for practical situations such as dynamic loads during bin filling and emptying, but few theoretical methods are available that include the effects of increased bulk density on the grain loads transmitted to the storage structures. This article develops a mathematical equation relating the specific weight to location and other variables of the material and storage. It was found that the bulk density of stored granular materials increases with depth according to a mathematical equation relating the two variables; applying this bulk-density function, Janssen's equations for vertical and horizontal pressures were modified as presented in this article. The validity of this specific weight function was tested using the principles of mathematics. 
As expected, loads calculated with the modified equations were consistently higher than the Janssen loadings based on noncompacted bulk densities for all grain depths and types, reflecting the effects of increased bulk density with bed height. PMID:24804024
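The classical Janssen equations that the article modifies can be sketched as follows, in their constant-bulk-density form (the very assumption the article relaxes); the parameter values in the example are illustrative, not taken from the article:

```python
import math

def janssen_vertical_pressure(z, gamma, R, mu, k):
    """Janssen static vertical pressure (Pa) at depth z (m) in a bin,
    assuming CONSTANT bulk weight density gamma (N/m^3).
    R: hydraulic radius (m), mu: wall friction coefficient,
    k: lateral-to-vertical pressure ratio."""
    return (gamma * R / (mu * k)) * (1.0 - math.exp(-mu * k * z / R))

def janssen_horizontal_pressure(z, gamma, R, mu, k):
    """Horizontal pressure is k times the vertical pressure."""
    return k * janssen_vertical_pressure(z, gamma, R, mu, k)

# Illustrative values: gamma ~ 7200 N/m^3, circular bin of 10 m diameter
# (R = D/4 = 2.5 m), mu = 0.3, k = 0.5, depth 10 m.
print(round(janssen_vertical_pressure(10.0, 7200.0, 2.5, 0.3, 0.5), 1))
```

At shallow depths the expression reduces to the hydrostatic value gamma·z; at large depths it saturates because wall friction carries an increasing share of the load.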
Estimating detection and density of the Andean cat in the high Andes
Reppucci, J.; Gardner, B.; Lucherini, M.
2011-01-01
The Andean cat (Leopardus jacobita) is one of the most endangered, yet least known, felids. Although the Andean cat is considered at risk of extinction, rigorous quantitative population studies are lacking. Because physical observations of the Andean cat are difficult to make in the wild, we used a camera-trapping array to photo-capture individuals. The survey was conducted in northwestern Argentina at an elevation of approximately 4,200 m during October-December 2006 and April-June 2007. In each year we deployed 22 pairs of camera traps, which were strategically placed. To estimate detection probability and density we applied models for spatial capture-recapture using a Bayesian framework. Estimated densities were 0.07 and 0.12 individual/km2 for 2006 and 2007, respectively. Mean baseline detection probability was estimated at 0.07. By comparison, densities of the Pampas cat (Leopardus colocolo), another poorly known felid that shares its habitat with the Andean cat, were estimated at 0.74-0.79 individual/km2 in the same study area for 2006 and 2007, and its detection probability was estimated at 0.02. Despite having greater detectability, the Andean cat is rarer in the study region than the Pampas cat. Properly accounting for the detection probability is important in making reliable estimates of density, a key parameter in conservation and management decisions for any species. © 2011 American Society of Mammalogists.
Estimation of tiger densities in India using photographic captures and recaptures
Karanth, U.; Nichols, J.D.
1998-01-01
Previously applied methods for estimating tiger (Panthera tigris) abundance using total counts based on tracks have proved unreliable. In this paper we use a field method proposed by Karanth (1995), combining camera-trap photography to identify individual tigers based on stripe patterns with capture-recapture estimators. We developed a sampling design for camera-trapping and used the approach to estimate tiger population size and density in four representative tiger habitats in different parts of India. The field method worked well and provided data suitable for analysis using closed capture-recapture models. The results suggest the potential for applying this methodology to estimating abundances, survival rates and other population parameters in tigers and other low-density, secretive animal species with distinctive coat patterns or other external markings. Estimated probabilities of photo-capturing tigers present in the study sites ranged from 0.75-1.00. The estimated mean tiger densities ranged from 4.1 (SE = 1.31) to 11.7 (SE = 1.93) tigers/100 km2. The results support the previous suggestions of Karanth and Sunquist (1995) that densities of tigers and other large felids may be primarily determined by prey community structure at a given site.
An Undecimated Wavelet-based Method for Cochlear Implant Speech Processing
Hajiaghababa, Fatemeh; Kermani, Saeed; Marateb, Hamid R.
2014-01-01
A cochlear implant is an implanted electronic device used to provide a sensation of hearing to a person who is hard of hearing; it is often referred to as a bionic ear. This paper presents an undecimated wavelet-based speech coding strategy for cochlear implants, a novel speech processing strategy. The undecimated wavelet packet transform (UWPT) is computed like the wavelet packet transform except that it does not down-sample the output at each level. The speech data used for the current study consist of 30 consonants, sampled at 16 kbps. The performance of the proposed UWPT method was compared to that of an infinite impulse response (IIR) filter-bank in terms of mean opinion score (MOS), the short-time objective intelligibility (STOI) measure, and segmental signal-to-noise ratio (SNR). The undecimated wavelet gave better segmental SNR on about 96% of the input speech data. The MOS of the proposed method was roughly twice that of the IIR filter-bank. Statistical analysis revealed that the UWPT-based N-of-M strategy significantly improved MOS, STOI and segmental SNR (P < 0.001) compared with those obtained with the IIR filter-bank based strategies. The advantage of the UWPT is that it is shift-invariant, which gives a dense approximation to the continuous wavelet transform. Information loss is therefore minimal, which is why the UWPT outperformed traditional filter-bank strategies in speech recognition tests. Results showed that the UWPT could be a promising method for speech coding in cochlear implants, although its computational complexity is higher than that of traditional filter-banks. PMID:25426428
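Segmental SNR, one of the metrics used above, averages per-frame SNRs rather than computing a single global ratio, which better tracks perceived quality in quiet passages. A minimal sketch (the frame length and the exclusion of degenerate frames are assumptions of this illustration, not details from the paper):

```python
import math

def segmental_snr(clean, processed, frame=160):
    """Mean per-frame SNR in dB between a clean and a processed signal.
    Frames with zero signal or zero error are skipped (an assumption;
    real implementations typically clamp per-frame SNR instead)."""
    snrs = []
    for start in range(0, len(clean) - frame + 1, frame):
        c = clean[start:start + frame]
        p = processed[start:start + frame]
        sig = sum(x * x for x in c)
        err = sum((x - y) ** 2 for x, y in zip(c, p))
        if sig > 0 and err > 0:
            snrs.append(10.0 * math.log10(sig / err))
    return sum(snrs) / len(snrs)
```

A constant 10% amplitude error, for example, yields 20 dB in every frame, so the segmental and global SNR coincide in that degenerate case.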
Wavelet-based compression of medical images: filter-bank selection and evaluation.
Saffor, A; bin Ramli, A R; Ng, K H
2003-06-01
Wavelet-based image coding algorithms (lossy and lossless) use a fixed perfect-reconstruction filter-bank built into the algorithm for coding and decoding of images. However, no systematic study has been performed to evaluate the coding performance of wavelet filters on medical images. We evaluated the types of filters best suited to medical images in terms of low bit rate and low computational complexity. In this study a variety of wavelet filters were used to compress and decompress computed tomography (CT) brain and abdomen images. We applied two-dimensional wavelet decomposition, quantization and reconstruction using several families of filter banks to a set of CT images. The Discrete Wavelet Transform (DWT), which provides an efficient multi-resolution frequency framework, was used. Compression was accomplished by applying threshold values to the wavelet coefficients. Statistical indices such as mean square error (MSE), maximum absolute error (MAE) and peak signal-to-noise ratio (PSNR) were used to quantify the effect of wavelet compression on selected images. The code was written using the wavelet and image processing toolboxes of MATLAB (version 6.1). The results show that no specific wavelet filter performs uniformly better than the others, except for the Daubechies and biorthogonal filters, which were the best overall. MAE values achieved by these filters were 5 × 10⁻¹⁴ to 12 × 10⁻¹⁴ for both CT brain and abdomen images at different decomposition levels, indicating that with these filters a very small error (approximately 7 × 10⁻¹⁴) can be achieved between the original and the filtered image. The PSNR values obtained were higher for the brain than for the abdomen images. For both lossy and lossless compression, the 'most appropriate' wavelet filter should be chosen adaptively, depending on the statistical properties of the image being coded, to achieve a higher compression ratio. PMID:12956184
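The threshold-and-reconstruct scheme described above can be illustrated with a single-level 1D Haar transform (the study used MATLAB with various 2D filter families; this pure-Python Haar sketch only shows the thresholding idea, for even-length signals):

```python
def haar_step(signal):
    """One level of the orthonormal Haar DWT (even-length input assumed):
    returns (approximation, detail) coefficients."""
    s = 2 ** 0.5
    approx = [(signal[i] + signal[i + 1]) / s for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / s for i in range(0, len(signal), 2)]
    return approx, detail

def inverse_haar_step(approx, detail):
    """Perfect-reconstruction inverse of haar_step."""
    s = 2 ** 0.5
    out = []
    for a, d in zip(approx, detail):
        out += [(a + d) / s, (a - d) / s]
    return out

def compress(signal, threshold):
    """Zero out small detail coefficients, then reconstruct.  Large
    thresholds discard fine detail (lossy); threshold 0 is lossless."""
    approx, detail = haar_step(signal)
    detail = [d if abs(d) > threshold else 0.0 for d in detail]
    return inverse_haar_step(approx, detail)
```

With threshold 0 the round trip is exact (up to floating-point error), matching the perfect-reconstruction property of the filter banks discussed in the abstract.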
An analytic model of toroidal half-wave oscillations: Implication on plasma density estimates
NASA Astrophysics Data System (ADS)
Bulusu, Jayashree; Sinha, A. K.; Vichare, Geeta
2015-06-01
The developed analytic model for toroidal oscillations under an infinitely conducting ionosphere ("Rigid-end") has been extended to the "Free-end" case, in which the conjugate ionospheres are infinitely resistive. The present direct analytic model (DAM) is the only analytic model that provides the field line structures of electric and magnetic field oscillations associated with the "Free-end" toroidal wave for a generalized plasma distribution characterized by the power law ρ = ρ₀(r₀/r)^m, where m is the density index and r is the geocentric distance to the position of interest on the field line. This is important because different regions in the magnetosphere are characterized by different m. Significant improvement over the standard WKB solution and excellent agreement with the numerical exact solution (NES) affirm the validity and advancement of DAM. In addition, we estimate the equatorial ion number density (assuming H+ as the only species) using DAM, NES, and the standard WKB method for the Rigid-end as well as the Free-end case and illustrate their respective implications for computing ion number density. It is seen that the WKB method overestimates the equatorial ion density under the Rigid-end condition and underestimates it under the Free-end condition. The density estimates through DAM are far more accurate than those computed through WKB. Earlier analytic estimates of ion number density were restricted to m = 6, whereas DAM can accommodate a generalized m while reproducing the density for m = 6 as envisaged by earlier models.
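The generalized plasma distribution in the abstract is the power law ρ = ρ₀(r₀/r)^m. A worked illustration of how the density index m controls the fall-off along a field line (all values hypothetical):

```python
def density(r, rho0, r0, m):
    """Plasma mass density along a field line: rho = rho0 * (r0 / r)**m,
    where m is the density index and r the geocentric distance."""
    return rho0 * (r0 / r) ** m

# Larger m concentrates plasma toward r0: at twice the reference distance,
# m = 3 retains 1/8 of the reference density, m = 6 only 1/64.
for m in (3, 6):
    print(m, density(2.0, 8.0, 1.0, m))
```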
Estimation of optical density of bone tissue radiograms with laser densitometry
NASA Astrophysics Data System (ADS)
Rowinski, Jan; Glinkowski, Wojciech; Glebowski, Pawel
2001-07-01
Bone tissue samples excised from human femoral heads were X-rayed together with an aluminium reference standard of density. The radiograms were scanned with the UltroScan XL laser densitometer (Pharmacia). From the optical density profiles of the bone samples the mean optical densities were determined. The optical densities were recalculated into the equivalent thickness of the aluminium standard [mm Al]. Inter-measurement reproducibility of the optical density determination was found to be very good (SD less than 3% of the mean). Relatively high variability (SD about 13% of the mean) was found for the optical density determination of a single bone sample X-rayed repeatedly. The inter-individual variability, which reflects the variability of bone tissue density between human subjects, was estimated at about 25% (SD as a percent of the mean). We concluded that laser densitometry performed according to our protocol provides a precise estimation of bone tissue density. Therefore, laser densitometry of bone tissue radiograms is a potentially useful method for studies of bone in medical research and diagnosis.
A Nested Kernel Density Estimator for Improved Characterization of Precipitation Extremes
NASA Astrophysics Data System (ADS)
Li, C.; Michalak, A. M.
2013-12-01
The number and intensity of short-term precipitation extremes have recently been a topic of much interest, with record-setting events occurring in the United States, Europe, Asia, and Australia. These events show the importance of characterizing the behavior of short-term (daily and sub-daily) precipitation intensity so as to properly understand and predict the occurrence and magnitude of extreme precipitation events. One such characterization method is the use of kernel density estimators, which avoid parametric assumptions and can therefore uncover complex properties such as multimodality. State-of-the-art kernel density estimators have two major recognized drawbacks, however. The first is that kernel density estimators that use unbounded kernels cannot enforce the fact that precipitation is strictly non-negative, because they are subject to 'probability leakage' at the boundary. The second is that they tend to produce artificially spurious fluctuations in the tail of the distribution. To resolve these problems, we present here a nested transformation kernel density estimator, consisting of one or two transformation steps. The first step corrects the skewness of the precipitation distribution, which is the dominant distributional feature of short-term precipitation. Depending on the complexity of the transformed data, the next step is to determine whether further correction is needed; if so, an additional skewness correction or a kurtosis correction is implemented, depending on which of these is the dominant remaining feature. The conventional kernel density estimator is used to estimate the density of the transformed data, which is then back-transformed into the original space. We evaluate this method using daily precipitation records from 1,217 stations across the continental United States, and compare its performance with other commonly used nonparametric and parametric methods. 
The presented method represents an improvement over existing ones in more accurately characterizing the behavior of precipitation extremes without strict parametric assumptions, while also being computationally tractable for large datasets.
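The transformation-kernel idea above (correct skewness first, estimate the density in the transformed space, then map back) can be sketched with a log transform, the simplest skewness correction for positive data. The fixed Gaussian kernel and the bandwidth are assumptions of this illustration, not the paper's choices:

```python
import math

def gaussian_kde(samples, h):
    """Plain Gaussian kernel density estimator with bandwidth h."""
    n = len(samples)
    norm = n * h * math.sqrt(2.0 * math.pi)
    def pdf(x):
        return sum(math.exp(-0.5 * ((x - s) / h) ** 2) for s in samples) / norm
    return pdf

def transformation_kde(samples, h):
    """Estimate the density of y = log(x) with a Gaussian kernel, then map
    back with the change of variables f_X(x) = f_Y(log x) / x.  All mass
    stays on x > 0, avoiding boundary 'probability leakage'."""
    logs = [math.log(s) for s in samples]
    kde_y = gaussian_kde(logs, h)
    return lambda x: kde_y(math.log(x)) / x
```

The Jacobian factor 1/x in the back-transform is what keeps the estimate a valid density on (0, ∞).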
G. Russell Warnick; John J. Albers
The accurate quantitation of high density lipoproteins has recently assumed greater importance in view of studies suggesting their negative correlation with coronary heart disease. High density lipoproteins may be estimated by measuring cholesterol in the plasma fraction of d > 1.063 g/ml. A more practical approach is the specific precipitation of apolipoprotein B (apoB)-containing lipoproteins by sulfated
Trap Array Configuration Influences Estimates and Precision of Black Bear Density and Abundance
Wilton, Clay M.; Puckett, Emily E.; Beringer, Jeff; Gardner, Beth; Eggert, Lori S.; Belant, Jerrold L.
2014-01-01
Spatial capture-recapture (SCR) models have advanced our ability to estimate population density for wide ranging animals by explicitly incorporating individual movement. Though these models are more robust to various spatial sampling designs, few studies have empirically tested different large-scale trap configurations using SCR models. We investigated how extent of trap coverage and trap spacing affects precision and accuracy of SCR parameters, implementing models using the R package secr. We tested two trapping scenarios, one spatially extensive and one intensive, using black bear (Ursus americanus) DNA data from hair snare arrays in south-central Missouri, USA. We also examined the influence that adding a second, lower barbed-wire strand to snares had on quantity and spatial distribution of detections. We simulated trapping data to test bias in density estimates of each configuration under a range of density and detection parameter values. Field data showed that using multiple arrays with intensive snare coverage produced more detections of more individuals than extensive coverage. Consequently, density and detection parameters were more precise for the intensive design. Density was estimated as 1.7 bears per 100 km2 and was 5.5 times greater than that under extensive sampling. Abundance was 279 (95% CI = 193–406) bears in the 16,812 km2 study area. Excluding detections from the lower strand resulted in the loss of 35 detections, 14 unique bears, and the largest recorded movement between snares. All simulations showed low bias for density under both configurations. Results demonstrated that in low density populations with non-uniform distribution of population density, optimizing the tradeoff among snare spacing, coverage, and sample size is of critical importance to estimating parameters with high precision and accuracy. 
With limited resources, allocating available traps to multiple arrays with intensive trap spacing increased the amount of information needed to inform parameters with high precision. PMID:25350557
Kun-Rodrigues, Célia; Salmona, Jordi; Besolo, Aubin; Rasolondraibe, Emmanuel; Rabarivola, Clément; Marques, Tiago A; Chikhi, Lounès
2014-06-01
Propithecus coquereli is one of the last sifaka species for which no reliable and extensive density estimates are yet available. Despite its endangered conservation status [IUCN, 2012] and recognition as a flagship species of the northwestern dry forests of Madagascar, its population in its last main refugium, the Ankarafantsika National Park (ANP), is still poorly known. Using line transect distance sampling surveys we estimated population density and abundance in the ANP. Furthermore, we investigated the effects of road, forest edge, river proximity and group size on sighting frequencies, and density estimates. We provide here the first population density estimates throughout the ANP. We found that density varied greatly among surveyed sites (from 5 to ∼100 ind/km2) which could result from significant (negative) effects of road and forest edge, and/or a (positive) effect of river proximity. Our results also suggest that the population size may be ∼47,000 individuals in the ANP, hinting that the population likely underwent a strong decline in some parts of the Park in recent decades, possibly caused by habitat loss from fires and charcoal production and by poaching. We suggest community-based conservation actions for the largest remaining population of Coquerel's sifaka which will (i) maintain forest connectivity; (ii) implement alternatives to deforestation through charcoal production, logging, and grass fires; (iii) reduce poaching; and (iv) enable long-term monitoring of the population in collaboration with local authorities and researchers. PMID:24443250
Mid-latitude Ionospheric Storms Density Gradients, Winds, and Drifts Estimated from GPS TEC Imaging
NASA Astrophysics Data System (ADS)
Datta-Barua, S.; Bust, G. S.
2012-12-01
Ionospheric storm processes at mid-latitudes stand in stark contrast to the typical quiescent behavior. Storm enhanced density (SED) on the dayside affects continent-sized regions horizontally and is often associated with a plume that extends poleward and upward into the nightside. One proposed cause of this behavior is the sub-auroral polarization stream (SAPS) acting on the SED, together with neutral wind effects. The electric field and its effect connecting mid-latitude and polar regions are just beginning to be understood and modeled. Another possible coupling effect is due to neutral winds, particularly those generated at high latitudes by joule heating. Of particular interest are electric fields and winds along the boundaries of the SED and plume, because these may be at least partly a cause of sharp horizontal electron density gradients. Thus, it is important to understand what bearing the drifts and winds, and any spatial variations in them (e.g., shear), have on the structure of the enhancement, particularly at its boundaries. Imaging techniques based on GPS TEC play a significant role in the study of storm dynamics, particularly at mid-latitudes, where sampling of the ionosphere with ground-based GPS lines of sight is densest. Ionospheric Data Assimilation 4-Dimensional (IDA4D) is a plasma density estimation algorithm that has been used in a number of scientific investigations over several years. Recently, efforts to estimate drivers of the mid-latitude ionosphere, focusing on electric-field-induced drifts and neutral winds, based on GPS TEC high-resolution imaging have shown promise. Estimating Model Parameters from Ionospheric Reverse Engineering (EMPIRE) is a tool developed to address this kind of investigation. In this work electron density and driver estimates are presented for an ionospheric storm using IDA4D in conjunction with EMPIRE. 
The IDA4D estimates resolve F-region electron densities at 1-degree resolution at the region of passage of the SED and associated plume. High-resolution imaging is used in conjunction with EMPIRE to deduce the dominant drivers. Starting with a baseline Weimer 2001 electric potential model, adjustments to the Weimer model are estimated for the given storm based on the IDA4D-derived densities to show electric fields associated with the plume. These regional densities and drivers are compared to CHAMP and DMSP data that are proximal for validation. Gradients in electron density are numerically computed over the 1-degree region. These density gradients are correlated with the drift estimates to identify a possible causal relationship in the formation of the boundaries of the SED.
Distributed Noise Generation for Density Estimation Based Clustering without Trusted Third Party
NASA Astrophysics Data System (ADS)
Su, Chunhua; Bao, Feng; Zhou, Jianying; Takagi, Tsuyoshi; Sakurai, Kouichi
The rapid growth of the Internet provides people with tremendous opportunities for data collection, knowledge discovery and cooperative computation. However, it also brings the problem of sensitive information leakage. Both individuals and enterprises may suffer from massive data collection and information retrieval by distrusted parties. In this paper, we propose a privacy-preserving protocol for distributed kernel density estimation-based clustering. Our scheme applies the random data perturbation (RDP) technique and verifiable secret sharing to solve the security problem of distributed kernel density estimation in [4], which assumed a mediating party to help in the computation.
Hierarchical models for estimating density from DNA mark-recapture studies
Gardner, B.; Royle, J.A.; Wegan, M.T.
2009-01-01
Genetic sampling is increasingly used as a tool by wildlife biologists and managers to estimate abundance and density of species. Typically, DNA is used to identify individuals captured in an array of traps ( e. g., baited hair snares) from which individual encounter histories are derived. Standard methods for estimating the size of a closed population can be applied to such data. However, due to the movement of individuals on and off the trapping array during sampling, the area over which individuals are exposed to trapping is unknown, and so obtaining unbiased estimates of density has proved difficult. We propose a hierarchical spatial capture-recapture model which contains explicit models for the spatial point process governing the distribution of individuals and their exposure to (via movement) and detection by traps. Detection probability is modeled as a function of each individual's distance to the trap. We applied this model to a black bear (Ursus americanus) study conducted in 2006 using a hair-snare trap array in the Adirondack region of New York, USA. We estimated the density of bears to be 0.159 bears/km2, which is lower than the estimated density (0.410 bears/km2) based on standard closed population techniques. A Bayesian analysis of the model is fully implemented in the software program WinBUGS.
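The distance-dependent detection model described above is commonly parameterized as a half-normal function of the distance between an individual's activity centre and a trap. A minimal sketch (the half-normal form is a standard SCR choice, and the parameter values are illustrative; the paper fits its full hierarchical model in WinBUGS):

```python
import math

def detection_prob(p0, dist, sigma):
    """Half-normal detection model for spatial capture-recapture:
    probability that an individual whose activity centre lies `dist` (km)
    from a trap is detected there.  p0 is the baseline detection
    probability at the trap, sigma the spatial (movement) scale."""
    return p0 * math.exp(-dist ** 2 / (2.0 * sigma ** 2))

# Detection decays smoothly with distance from the activity centre.
for d in (0.0, 1.0, 2.0, 4.0):
    print(d, round(detection_prob(0.07, d, 1.5), 4))
```

Because the model ties detection to an explicit spatial scale, the effective trapping area falls out of the fit rather than being chosen ad hoc, which is how these models avoid the sample-area problem noted in the abstract.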
A hierarchical model for estimating density in camera-trap studies
Royle, J.A.; Nichols, J.D.; Karanth, K.U.; Gopalaswamy, A.M.
2009-01-01
1. Estimating animal density using capture-recapture data from arrays of detection devices such as camera traps has been problematic due to the movement of individuals and heterogeneity in capture probability among them induced by differential exposure to trapping. 2. We develop a spatial capture-recapture model for estimating density from camera-trapping data which contains explicit models for the spatial point process governing the distribution of individuals and their exposure to and detection by traps. 3. We adopt a Bayesian approach to analysis of the hierarchical model using the technique of data augmentation. 4. The model is applied to photographic capture-recapture data on tigers Panthera tigris in Nagarahole reserve, India. Using this model, we estimate the density of tigers to be 14.3 animals per 100 km2 during 2004. 5. Synthesis and applications. Our modelling framework largely overcomes several weaknesses in conventional approaches to the estimation of animal density from trap arrays. It effectively deals with key problems such as individual heterogeneity in capture probabilities, movement of traps, presence of potential 'holes' in the array and ad hoc estimation of sample area. The formulation, thus, greatly enhances flexibility in the conduct of field surveys as well as in the analysis of data, from studies that may involve physical, photographic or DNA-based 'captures' of individual animals.
Marques, Tiago A; Thomas, Len; Ward, Jessica; DiMarzio, Nancy; Tyack, Peter L
2009-04-01
Methods are developed for estimating the size/density of cetacean populations using data from a set of fixed passive acoustic sensors. The methods convert the number of detected acoustic cues into animal density by accounting for (i) the probability of detecting cues, (ii) the rate at which animals produce cues, and (iii) the proportion of false positive detections. Additional information is often required for estimation of these quantities, for example, from an acoustic tag applied to a sample of animals. Methods are illustrated with a case study: estimation of Blainville's beaked whale density over a 6 day period in spring 2005, using an 82 hydrophone wide-baseline array located in the Tongue of the Ocean, Bahamas. To estimate the required quantities, additional data are used from digital acoustic tags, attached to five whales over 21 deep dives, where cues recorded on some of the dives are associated with those received on the fixed hydrophones. Estimated density was 25.3 or 22.5 animals/1000 km2, depending on assumptions about false positive detections, with 95% confidence intervals 17.3-36.9 and 15.4-32.9. These methods are potentially applicable to a wide variety of marine and terrestrial species that are hard to survey using conventional visual methods. PMID:19354374
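The conversion described in the abstract (detected acoustic cues into animal density) reduces to a chain of corrections. A simplified single-area sketch of that logic (the paper's multi-hydrophone estimator is more involved, and all numbers here are hypothetical):

```python
def cue_density(n_detected, false_pos_frac, p_detect, cue_rate_per_hr,
                hours, area_km2):
    """Convert detected cues into animal density (animals/km^2), following
    the three corrections in the abstract: (iii) discount false positives,
    then divide by (i) the cue detection probability and (ii) the number of
    cues each animal produces over the monitoring period, and finally by
    the monitored area.  A deliberately simplified single-sensor form."""
    true_cues = n_detected * (1.0 - false_pos_frac)
    cues_per_animal = cue_rate_per_hr * hours
    return true_cues / (p_detect * cues_per_animal * area_km2)

# Hypothetical: 1000 detections, 20% false positives, 50% detection
# probability, 10 cues/hr per animal, 4 hours, 2 km^2 monitored.
print(cue_density(1000, 0.2, 0.5, 10.0, 4.0, 2.0))
```

The cue rate is the quantity that typically requires auxiliary tag data, as in the study's use of acoustic tags on a sample of whales.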
Limit Distribution Theory for Maximum Likelihood Estimation of a Log-Concave Density
Balabdaoui, Fadoua; Rufibach, Kaspar; Wellner, Jon A.
2009-01-01
We find limiting distributions of the nonparametric maximum likelihood estimator (MLE) of a log-concave density, i.e. a density of the form f0 = exp φ0 where φ0 is a concave function on ℝ. Existence, form, characterizations and uniform rates of convergence of the MLE are given by Rufibach (2006) and Dümbgen and Rufibach (2007). The characterization of the log-concave MLE in terms of distribution functions is the same (up to sign) as the characterization of the least squares estimator of a convex density on [0, ∞) as studied by Groeneboom, Jongbloed and Wellner (2001b). We use this connection to show that the limiting distributions of the MLE and its derivative are, under comparable smoothness assumptions, the same (up to sign) as in the convex density estimation problem. In particular, changing the smoothness assumptions of Groeneboom, Jongbloed and Wellner (2001b) slightly by allowing some higher derivatives to vanish at the point of interest, we find that the pointwise limiting distributions depend on the second and third derivatives at 0 of Hk, the "lower invelope" of an integrated Brownian motion process minus a drift term depending on the number of vanishing derivatives of φ0 = log f0 at the point of interest. We also establish the limiting distribution of the resulting estimator of the mode M(f0) and establish a new local asymptotic minimax lower bound which shows the optimality of our mode estimator in terms of both rate of convergence and dependence of constants on population values. PMID:19881896
Cluster Mass Estimate and a Cusp of the Mass Density Distribution in Clusters of Galaxies
Makino, Nobuyoshi; Asano, Katsuaki
1999-01-01
We study density cusps in the centers of clusters of galaxies to reconcile X-ray mass estimates with gravitational lensing masses. For various mass density models with cusps we compute X-ray surface brightness distributions, and fit them to observations to constrain the parameters of the density models. The Einstein radii estimated from these density models are compared with Einstein radii derived from the observed arcs for Abell 2163, Abell 2218, and RX J1347.5-1145. The X-ray masses and lensing masses corresponding to these Einstein radii are also compared. While steeper cusps give smaller ratios of lensing mass to X-ray mass, the X-ray surface brightnesses estimated from flatter cusps are better fits to the observations. For Abell 2163 and Abell 2218, although an isothermal sphere with a finite core cannot produce giant arc images, a density model with a central cusp can produce a finite Einstein radius, which is smaller than the observed radii. We find that a total mass density profile which decline...
Estimating probability densities from short samples: A parametric maximum likelihood approach
NASA Astrophysics Data System (ADS)
Dudok de Wit, T.; Floriani, E.
1998-10-01
A parametric method similar to autoregressive spectral estimators is proposed to determine the probability density function (PDF) of a random data set. The method proceeds by maximizing the likelihood of the PDF, yielding estimates that perform equally well in the tails as in the bulk of the distribution. It is therefore well suited to the analysis of short data sets drawn from smooth PDFs, and is notable for the simplicity of its computational scheme. Its advantages and limitations are discussed.
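The maximum-likelihood principle above can be illustrated with the simplest possible case, an exponential PDF, whose likelihood has a closed-form maximizer (the paper's family of PDFs is richer; this only shows the principle of choosing parameters that maximize the likelihood of the data):

```python
import math

def log_likelihood(rate, data):
    """Log-likelihood of the exponential PDF p(x) = rate * exp(-rate * x)."""
    return sum(math.log(rate) - rate * x for x in data)

def fit_exponential(data):
    """ML estimate: the rate maximising the likelihood is 1 / mean(data),
    obtained by setting the derivative of the log-likelihood to zero."""
    return len(data) / sum(data)

data = [0.5, 1.0, 1.5, 2.0]
best = fit_exponential(data)
# The fitted rate attains a higher likelihood than nearby candidates.
print(best, round(log_likelihood(best, data), 4))
```

Because the likelihood weights every observation, including the largest ones, such estimators behave sensibly in the tails even for short samples, which is the property the abstract emphasizes.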
Carol M. Woods; David Thissen
2006-01-01
The purpose of this paper is to introduce a new method for fitting item response theory models with the latent population distribution estimated from the data using splines. A spline-based density estimation system provides a flexible alternative to existing procedures that use a normal distribution, or a different functional form, for the population distribution. A simulation study shows that the
NASA Technical Reports Server (NTRS)
Garber, Donald P.
1993-01-01
A probability density function for the variability of ensemble averaged spectral estimates from helicopter acoustic signals in Gaussian background noise was evaluated. Numerical methods for calculating the density function and for determining confidence limits were explored. Density functions were predicted for both synthesized and experimental data and compared with observed spectral estimate variability.
Estimation of the density of Buccinum undatum (Gastropoda) off Douglas, Isle of Man
NASA Astrophysics Data System (ADS)
Kideys, A. E.
1993-02-01
The density of the common whelk ( Buccinum undatum L.) off Douglas, Isle of Man, was determined by four methods: (1) pot sampling, (2) diving, (3) mark-recapture experiment, and (4) underwater television. Although the values obtained by these methods were comparable, the last two methods yielded overestimations of Buccinum density. The results from the diving survey and from pot sampling showed good agreement, indicating that pot sampling can be used to determine the density of the common whelk, provided a good estimate of the pot attraction area is available. The range of whelk density between February 1989 and August 1990 resulting from pot sampling was between 0.08 and 0.38 individuals m⁻². The temporal fluctuations of the whelk densities are discussed in detail.
Estimating food portions. Influence of unit number, meal type and energy density.
Almiron-Roig, Eva; Solis-Trapala, Ivonne; Dodd, Jessica; Jebb, Susan A
2013-12-01
Estimating how much is appropriate to consume can be difficult, especially for foods presented in multiple units, those with ambiguous energy content and for snacks. This study tested the hypothesis that the number of units (single vs. multi-unit), meal type and food energy density disrupts accurate estimates of portion size. Thirty-two healthy weight men and women attended the laboratory on 3 separate occasions to assess the number of portions contained in 33 foods or beverages of varying energy density (1.7-26.8 kJ/g). Items included 12 multi-unit and 21 single unit foods; 13 were labelled "meal", 4 "drink" and 16 "snack". Departures in portion estimates from reference amounts were analysed with negative binomial regression. Overall participants tended to underestimate the number of portions displayed. Males showed greater errors in estimation than females (p=0.01). Single unit foods and those labelled as 'meal' or 'beverage' were estimated with greater error than multi-unit and 'snack' foods (p=0.02 and p<0.001 respectively). The number of portions of high energy density foods was overestimated while the number of portions of beverages and medium energy density foods were underestimated by 30-46%. In conclusion, participants tended to underestimate the reference portion size for a range of food and beverages, especially single unit foods and foods of low energy density and, unexpectedly, overestimated the reference portion of high energy density items. There is a need for better consumer education of appropriate portion sizes to aid adherence to a healthy diet. PMID:23932948
Identification of the monitoring point density needed to reliably estimate contaminant mass fluxes
NASA Astrophysics Data System (ADS)
Liedl, R.; Liu, S.; Fraser, M.; Barker, J.
2005-12-01
Plume monitoring frequently relies on the evaluation of point-scale measurements of concentration at observation wells which are located at control planes or `fences' perpendicular to groundwater flow. Depth-specific concentration values are used to estimate the total mass flux of individual contaminants through the fence. Results of this approach, which is based on spatial interpolation, obviously depend on the density of the measurement points. Our contribution relates the accuracy of mass flux estimation to the point density and, in particular, allows us to identify a minimum point density needed to achieve a specified accuracy. In order to establish this relationship, concentration data from fences installed in the coal tar creosote plume at the Borden site are used. These fences are characterized by a rather high density of about 7 points/m2 and it is reasonable to assume that the true mass flux is obtained with this point density. This mass flux is then compared with results for less dense grids down to about 0.1 points/m2. Mass flux estimates obtained for this range of point densities are analyzed by the moving window method in order to reduce purely random fluctuations. For each position of the moving window the mass flux is estimated and the coefficient of variation (CV) is calculated to quantify variability of the results. Thus, the CV provides a relative measure of accuracy in the estimated fluxes. By applying this approach to the Borden naphthalene plume at different times, it is found that the point density changes from sufficient to insufficient due to the temporally decreasing mass flux. By comparing the results of naphthalene and phenol at the same fence and at the same time, we can see that the same grid density might be sufficient for one compound but not for another.
If a rather strict CV criterion of 5% is used, a grid of 7 points/m2 is shown to allow for reliable estimates of the true mass fluxes only in the beginning of plume development when mass fluxes are high. Long-term data exhibit a very high variation, attributed to the decreasing flux, and a much denser grid would be required to reflect the decreasing mass flux with the same high accuracy. However, a less strict CV criterion of 50% may be acceptable due to uncertainties generally associated with other hydrogeologic parameters. In this case, a point density between 1 and 2 points/m2 is found to be sufficient for a set of five tested chemicals.
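As a rough illustration of why point density matters for a fence-based mass flux estimate, the toy sketch below compares a flux computed on a fine synthetic grid with the same computation on a thinned grid. The lognormal concentration field, uniform specific discharge, and cell size are all invented for illustration and are not the Borden data or the authors' interpolation scheme:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical fence: point concentrations (mg/L) on a fine grid
ny, nz = 20, 10                       # "high" point density
conc = rng.lognormal(mean=0.0, sigma=0.8, size=(ny, nz))
q = 0.05                              # uniform specific discharge (m/d), assumed
cell_area = 0.5 * 0.5                 # m^2 represented by each measurement point

def mass_flux(c, area, q):
    """Simplest flux estimate: sum of (concentration * discharge * cell area)."""
    return q * area * c.sum()

full = mass_flux(conc, cell_area, q)

# Thin the grid to a quarter of the points and rescale each cell's area
thin = conc[::2, ::2]
thin_flux = mass_flux(thin, cell_area * 4, q)

rel_err = abs(thin_flux - full) / full
print(full, thin_flux, rel_err)
```

Repeating the thinning over many window positions and computing the coefficient of variation of the resulting flux estimates would reproduce, in miniature, the CV-based accuracy criterion described above.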
Breast percent density estimation from 3D reconstructed digital breast tomosynthesis images
NASA Astrophysics Data System (ADS)
Bakic, Predrag R.; Kontos, Despina; Carton, Ann-Katherine; Maidment, Andrew D. A.
2008-03-01
Breast density is an independent factor of breast cancer risk. In mammograms breast density is quantitatively measured as percent density (PD), the percentage of dense (non-fatty) tissue. To date, clinical estimates of PD have varied significantly, in part due to the projective nature of mammography. Digital breast tomosynthesis (DBT) is a 3D imaging modality in which cross-sectional images are reconstructed from a small number of projections acquired at different x-ray tube angles. Preliminary studies suggest that DBT is superior to mammography in tissue visualization, since superimposed anatomical structures present in mammograms are filtered out. We hypothesize that DBT could also provide a more accurate breast density estimation. In this paper, we propose to estimate PD from reconstructed DBT images using a semi-automated thresholding technique. Preprocessing is performed to exclude the image background and the area of the pectoral muscle. Threshold values are selected manually from a small number of reconstructed slices; a combination of these thresholds is applied to each slice throughout the entire reconstructed DBT volume. The proposed method was validated using images of women with recently detected abnormalities or with biopsy-proven cancers; only contralateral breasts were analyzed. The Pearson correlation and kappa coefficients between the breast density estimates from DBT and the corresponding digital mammogram indicate moderate agreement between the two modalities, comparable with our previous results from 2D DBT projections. Percent density appears to be a robust measure for breast density assessment in both 2D and 3D x-ray breast imaging modalities using thresholding.
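Per slice, the thresholding step described above reduces to counting above-threshold pixels inside the breast mask. A minimal sketch on a synthetic slice follows; the image, the mask, and the threshold value are assumptions for illustration, not the authors' data or preprocessing:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical reconstructed DBT slice: fatty tissue low, dense tissue high
slice_img = rng.uniform(0.2, 0.4, size=(64, 64))                 # fatty background
slice_img[16:32, 16:48] = rng.uniform(0.6, 0.9, size=(16, 32))   # dense region
breast_mask = np.ones_like(slice_img, dtype=bool)  # assume segmentation already done
threshold = 0.5                                    # manually chosen per slice

dense = (slice_img > threshold) & breast_mask
pd_percent = 100.0 * dense.sum() / breast_mask.sum()
print(pd_percent)  # → 12.5
```

In the method proposed in the paper, thresholds selected on a few slices are combined and applied volume-wide; the per-slice counting shown here is the common core.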
NASA Astrophysics Data System (ADS)
Chen, Biao; Ruth, Chris; Jing, Zhenxue; Ren, Baorui; Smith, Andrew; Kshirsagar, Ashwini
2014-03-01
Breast density has been identified as a risk factor for developing breast cancer and an indicator of lesion diagnostic obstruction due to the masking effect. Volumetric density measurement evaluates fibro-glandular volume, breast volume, and breast volume density measures that have potential advantages over area density measurement in risk assessment. One class of volume density computing methods is based on the finding of the relative fibro-glandular tissue attenuation with regards to the reference fat tissue, and the estimation of the effective x-ray tissue attenuation differences between the fibro-glandular and fat tissue is key to volumetric breast density computing. We have modeled the effective attenuation difference as a function of actual x-ray skin entrance spectrum, breast thickness, fibro-glandular tissue thickness distribution, and detector efficiency. Compared to other approaches, our method has threefold advantages: (1) it avoids the system calibration-based creation of effective attenuation differences, which may introduce tedious calibrations for each imaging system and may not reflect the spectrum change and scatter induced overestimation or underestimation of breast density; (2) it obtains the system specific separate and differential attenuation values of fibro-glandular and fat tissue for each mammographic image; and (3) it further reduces the impact of breast thickness accuracy on volumetric breast density. A quantitative breast volume phantom with a set of equivalent fibro-glandular thicknesses has been used to evaluate the volume breast density measurement with the proposed method. The experimental results have shown that the method has significantly improved the accuracy of estimating breast density.
An automatic iris occlusion estimation method based on high-dimensional density estimation.
Li, Yung-Hui; Savvides, Marios
2013-04-01
Iris masks play an important role in iris recognition. They indicate which part of the iris texture map is useful and which part is occluded or contaminated by noisy image artifacts such as eyelashes, eyelids, eyeglasses frames, and specular reflections. The accuracy of the iris mask is extremely important. The performance of the iris recognition system will decrease dramatically when the iris mask is inaccurate, even when the best recognition algorithm is used. Traditionally, rule-based algorithms have been used to estimate iris masks from iris images. However, the accuracy of the iris masks generated this way is questionable. In this work, we propose to use Figueiredo and Jain's Gaussian Mixture Models (FJ-GMMs) to model the underlying probabilistic distributions of both valid and invalid regions on iris images. We also explored possible features and found that a Gabor Filter Bank (GFB) provides the most discriminative information for our goal. Finally, we applied the Simulated Annealing (SA) technique to optimize the parameters of the GFB in order to achieve the best recognition rate. Experimental results show that the masks generated by the proposed algorithm increase the iris recognition rate on both the ICE2 and UBIRIS datasets, verifying the effectiveness and importance of our proposed method for iris occlusion estimation. PMID:22868651
Change-point detection in time-series data by relative density-ratio estimation.
Liu, Song; Yamada, Makoto; Collier, Nigel; Sugiyama, Masashi
2013-07-01
The objective of change-point detection is to discover abrupt property changes lying behind time-series data. In this paper, we present a novel statistical change-point detection algorithm based on non-parametric divergence estimation between time-series samples from two retrospective segments. Our method uses the relative Pearson divergence as a divergence measure, and it is accurately and efficiently estimated by a method of direct density-ratio estimation. Through experiments on artificial and real-world datasets including human-activity sensing, speech, and Twitter messages, we demonstrate the usefulness of the proposed method. PMID:23500502
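The relative Pearson divergence at the heart of the method can be illustrated with a crude histogram plug-in. The paper itself estimates the density ratio directly (without estimating the densities), so the sketch below only conveys the quantity being scanned over time; the window size, α value, and data are all invented:

```python
import numpy as np

def rel_pearson_divergence(x, y, alpha=0.1, bins=10):
    """Crude histogram plug-in for the alpha-relative Pearson divergence.
    (The paper uses direct density-ratio estimation instead; this is only
    meant to convey the divergence being computed between two segments.)"""
    lo, hi = min(x.min(), y.min()), max(x.max(), y.max())
    p, edges = np.histogram(x, bins=bins, range=(lo, hi), density=True)
    q, _ = np.histogram(y, bins=bins, range=(lo, hi), density=True)
    q_alpha = alpha * p + (1 - alpha) * q   # the "relative" mixture density
    width = edges[1] - edges[0]
    mask = q_alpha > 0
    r = p[mask] / q_alpha[mask]
    return 0.5 * np.sum(q_alpha[mask] * (r - 1.0) ** 2) * width

rng = np.random.default_rng(3)
series = np.concatenate([rng.normal(0, 1, 200), rng.normal(4, 1, 200)])

# Score each candidate point by the divergence between its two flanking windows
w = 50
scores = [rel_pearson_divergence(series[t - w:t], series[t:t + w])
          for t in range(w, len(series) - w)]
change_at = w + int(np.argmax(scores))
print(change_at)
```

Mixing p into the denominator (the α-relative formulation) keeps the ratio bounded even when the two segments barely overlap, which is why the divergence peaks sharply, rather than diverging, at the true change point.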
A Wiener-Wavelet-Based filter for de-noising satellite soil moisture retrievals
NASA Astrophysics Data System (ADS)
Massari, Christian; Brocca, Luca; Ciabatta, Luca; Moramarco, Tommaso; Su, Chun-Hsu; Ryu, Dongryeol; Wagner, Wolfgang
2014-05-01
The reduction of noise in microwave satellite soil moisture (SM) retrievals is of paramount importance for practical applications especially for those associated with the study of climate changes, droughts, floods and other related hydrological processes. So far, Fourier based methods have been used for de-noising satellite SM retrievals by filtering either the observed emissivity time series (Du, 2012) or the retrieved SM observations (Su et al. 2013). This contribution introduces an alternative approach based on a Wiener-Wavelet-Based filtering (WWB) technique, which uses the Entropy-Based Wavelet de-noising method developed by Sang et al. (2009) to design both a causal and a non-causal version of the filter. WWB is used as a post-retrieval processing tool to enhance the quality of observations derived from the i) Advanced Microwave Scanning Radiometer for the Earth observing system (AMSR-E), ii) the Advanced SCATterometer (ASCAT), and iii) the Soil Moisture and Ocean Salinity (SMOS) satellite. The method is tested on three pilot sites located in Spain (Remedhus Network), in Greece (Hydrological Observatory of Athens) and in Australia (Oznet network), respectively. Different quantitative criteria are used to judge the goodness of the de-noising technique. Results show that WWB i) is able to improve both the correlation and the root mean squared differences between satellite retrievals and in situ soil moisture observations, and ii) effectively separates random noise from deterministic components of the retrieved signals. Moreover, the use of WWB de-noised data in place of raw observations within a hydrological application confirms the usefulness of the proposed filtering technique. Du, J. (2012), A method to improve satellite soil moisture retrievals based on Fourier analysis, Geophys. Res. Lett., 39, L15404, doi:10.1029/2012GL052435. Su, C.-H., D. Ryu, A. W. Western, and W.
Wagner (2013), De-noising of passive and active microwave satellite soil moisture time series, Geophys. Res. Lett., 40,3624-3630, doi:10.1002/grl.50695. Sang Y.-F., D. Wang, J.-C. Wu, Q.-P. Zhu, and L. Wang (2009), Entropy-Based Wavelet De-noising Method for Time Series Analysis, Entropy, 11, pp. 1123-1148, doi:10.3390/e11041123.
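The entropy-based filter of Sang et al. is more elaborate than can be shown here, but the wavelet-thresholding mechanism it builds on fits in a short sketch: a one-level Haar transform with soft-thresholded detail coefficients, applied to a synthetic noisy signal. The signal, noise level, and the universal threshold rule are all assumptions for illustration, not the WWB filter itself:

```python
import numpy as np

rng = np.random.default_rng(7)
t = np.linspace(0, 1, 256)
clean = np.sin(2 * np.pi * 3 * t)            # slowly varying "soil moisture" signal
noisy = clean + rng.normal(0, 0.3, t.size)

# One-level Haar transform: pairwise averages (approximation) and differences (detail)
a = (noisy[0::2] + noisy[1::2]) / np.sqrt(2)
d = (noisy[0::2] - noisy[1::2]) / np.sqrt(2)

# Soft-threshold the detail coefficients (universal threshold, MAD noise estimate)
thr = np.sqrt(2 * np.log(d.size)) * np.median(np.abs(d)) / 0.6745
d_shrunk = np.sign(d) * np.maximum(np.abs(d) - thr, 0.0)

# Inverse Haar transform
den = np.empty_like(noisy)
den[0::2] = (a + d_shrunk) / np.sqrt(2)
den[1::2] = (a - d_shrunk) / np.sqrt(2)

rmse_noisy = np.sqrt(np.mean((noisy - clean) ** 2))
rmse_den = np.sqrt(np.mean((den - clean) ** 2))
print(rmse_noisy, rmse_den)
```

Because the slow signal lives almost entirely in the approximation coefficients while the noise is split evenly between scales, shrinking the details removes noise with little signal distortion; multi-level transforms and entropy-based threshold selection refine the same idea.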
Multiscale seismic characterization of marine sediments by using a wavelet-based approach
NASA Astrophysics Data System (ADS)
Ker, Stephan; Le Gonidec, Yves; Gibert, Dominique
2015-04-01
We propose a wavelet-based method to characterize acoustic impedance discontinuities from a multiscale analysis of reflected seismic waves. This method is developed in the framework of the wavelet response (WR), where dilated wavelets are used to sound a complex seismic reflector defined by a multiscale impedance structure. In the context of seismic imaging, we use the WR as a set of multiscale seismic attributes, in particular ridge functions, which contain most of the information that quantifies the complex geometry of the reflector. We extend this approach by considering its application to analyse seismic data acquired with broadband but frequency-limited source signals. The band-pass filter related to such actual sources distorts the WR: in order to remove these effects, we develop an original processing based on fractional derivatives of Lévy alpha-stable distributions in the formalism of the continuous wavelet transform (CWT). We demonstrate that the CWT of a seismic trace involving such a finite frequency bandwidth can be made equivalent to the CWT of the impulse response of the subsurface and is defined for a reduced range of dilations, controlled by the seismic source signal. In this dilation range, the multiscale seismic attributes are corrected for distortions and we can thus merge multiresolution seismic sources to increase the frequency range of the multiscale analysis. As a first demonstration, we perform the source correction with the high and very high resolution seismic sources of the SYSIF deep-towed seismic device and we show that both can now be perfectly merged into an equivalent seismic source with an improved frequency bandwidth (220-2200 Hz). Such multiresolution seismic data fusion allows reconstruction of the acoustic impedance of the subseabed based on the inverse wavelet transform properties extended to the source-corrected WR.
We illustrate the potential of this approach with deep-water seismic data acquired during the ERIG3D cruise and we compare the results with the multiscale analysis performed on synthetic seismic data based on ground truth measurements.
Technology Transfer Automated Retrieval System (TEKTRAN)
Hydrologic and morphological properties of claypan landscapes cause variability in soybean root and shoot biomass. This study was conducted to develop predictive models of soybean root length density distribution (RLDd) using direct measurements and sensor based estimators of claypan morphology. A c...
BINOMIAL SAMPLING TO ESTIMATE CITRUS RUST MITE (ACARI: ERIOPHYIDAE) DENSITIES ON ORANGE FRUIT
Technology Transfer Automated Retrieval System (TEKTRAN)
Binomial sampling based on the proportion of samples infested was investigated as a method for estimating mean densities of citrus rust mites, Phyllocoptruta oleivora (Ashmead) and Aculops pelekassi (Keifer), on oranges. Data for the investigation were obtained by counting the number of motile mites...
Estimating the effect of Earth elasticity and variable water density on tsunami speeds
Tsai, Victor C.
Revised 25 December 2012; accepted 7 January 2013; published 13 February 2013. [1] The speed of tsunami … comparisons of tsunami arrival times from the 11 March 2011 tsunami suggest, however, that the standard
How bandwidth selection algorithms impact exploratory data analysis using kernel density estimation.
Harpole, Jared K; Woods, Carol M; Rodebaugh, Thomas L; Levinson, Cheri A; Lenze, Eric J
2014-09-01
Exploratory data analysis (EDA) can reveal important features of underlying distributions, and these features often have an impact on inferences and conclusions drawn from data. Graphical analysis is central to EDA, and graphical representations of distributions often benefit from smoothing. A viable method of estimating and graphing the underlying density in EDA is kernel density estimation (KDE). This article provides an introduction to KDE and examines alternative methods for specifying the smoothing bandwidth in terms of their ability to recover the true density. We also illustrate the comparison and use of KDE methods with 2 empirical examples. Simulations were carried out in which we compared 8 bandwidth selection methods (Sheather-Jones plug-in [SJDP], normal rule of thumb, Silverman's rule of thumb, least squares cross-validation, biased cross-validation, and 3 adaptive kernel estimators) using 5 true density shapes (standard normal, positively skewed, bimodal, skewed bimodal, and standard lognormal) and 9 sample sizes (15, 25, 50, 75, 100, 250, 500, 1,000, 2,000). Results indicate that, overall, SJDP outperformed all methods. However, for smaller sample sizes (25 to 100) either biased cross-validation or Silverman's rule of thumb was recommended, and for larger sample sizes the adaptive kernel estimator with SJDP was recommended. Information is provided about implementing the recommendations in the R computing language. PMID:24885339
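Two of the simpler rules compared in the article, Scott's and Silverman's rules of thumb, ship with SciPy; a minimal sketch of estimating and graphing-ready evaluation of a KDE on a standard-normal sample follows (Sheather-Jones and the adaptive estimators discussed in the paper are not in SciPy and are omitted here):

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(4)
sample = rng.normal(size=100)

# Rule-of-thumb bandwidths; the article's recommended SJDP requires another package
kde_scott = gaussian_kde(sample, bw_method="scott")
kde_silverman = gaussian_kde(sample, bw_method="silverman")

grid = np.linspace(-4.0, 4.0, 201)
dens = kde_silverman(grid)

area = dens.sum() * (grid[1] - grid[0])   # sanity check: density integrates to ~1
peak = grid[int(np.argmax(dens))]         # and peaks near the true mode at 0
print(area, peak)
```

The two rules give different bandwidth factors for the same data, which is exactly the kind of difference the simulation study above quantifies against the true density shape.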
Unbiased Estimate of Dark Energy Density from Type Ia Supernova Data
NASA Astrophysics Data System (ADS)
Wang, Yun; Lovelace, Geoffrey
2001-12-01
Type Ia supernovae (SNe Ia) are currently the best probes of the dark energy in the universe. To constrain the nature of dark energy, we assume a flat universe and that the weak energy condition is satisfied, and we allow the density of dark energy, ρX(z), to be an arbitrary function of redshift. Using simulated data from a space-based SN pencil-beam survey, we find that by optimizing the number of parameters used to parameterize the dimensionless dark energy density, f(z) = ρX(z)/ρX(z=0), we can obtain an unbiased estimate of both f(z) and the fractional matter density of the universe, Ωm. A plausible SN pencil-beam survey (with a square degree field of view and for an observational duration of 1 yr) can yield about 2000 SNe Ia with 0 ≤ z ≤ 2. Such a survey in space would yield SN peak luminosities with a combined intrinsic and observational dispersion of σ(m_int) = 0.16 mag. We find that for such an idealized survey, Ωm can be measured to 10% accuracy, and the dark energy density can be estimated to ~20% to z~1.5, and ~20%-40% to z~2, depending on the time dependence of the true dark energy density. Dark energy densities that vary more slowly can be more accurately measured. For the anticipated Supernova/Acceleration Probe (SNAP) mission, Ωm can be measured to 14% accuracy, and the dark energy density can be estimated to ~20% to z~1.2. Our results suggest that SNAP may gain much sensitivity to the time dependence of the dark energy density and Ωm by devoting more observational time to the central pencil-beam fields to obtain more SNe Ia at z>1.2. We use both a maximum likelihood analysis and a Monte Carlo analysis (when appropriate) to determine the errors of estimated parameters. We find that the Monte Carlo analysis gives a more accurate estimate of the dark energy density than the maximum likelihood analysis.
Robel, G.L.; Fisher, W.L.
1999-01-01
Production of and consumption by hatchery-reared fingerling (age-0) smallmouth bass Micropterus dolomieu at various simulated stocking densities were estimated with a bioenergetics model. Fish growth rates and pond water temperatures during the 1996 growing season at two hatcheries in Oklahoma were used in the model. Fish growth and simulated consumption and production differed greatly between the two hatcheries, probably because of differences in pond fertilization and mortality rates. Our results suggest that appropriate stocking density depends largely on prey availability as affected by pond fertilization and on fingerling mortality rates. The bioenergetics model provided a useful tool for estimating production at various stocking density rates. However, verification of physiological parameters for age-0 fish of hatchery-reared species is needed.
A likelihood approach to estimating animal density from binary acoustic transects.
Horrocks, Julie; Hamilton, David C; Whitehead, Hal
2011-09-01
We propose an approximate maximum likelihood method for estimating animal density and abundance from binary passive acoustic transects, when both the probability of detection and the range of detection are unknown. The transect survey is purposely designed so that successive data points are dependent, and this dependence is exploited to simultaneously estimate density, range of detection, and probability of detection. The data are assumed to follow a homogeneous Poisson process in space, and a second-order Markov approximation to the likelihood is used. Simulations show that this method has small bias under the assumptions used to derive the likelihood, although it performs better when the probability of detection is close to 1. The effects of violations of these assumptions are also investigated, and the approach is found to be sensitive to spatial trends in density and clustering. The method is illustrated using real acoustic data from a survey of sperm and humpback whales. PMID:21039393
Distributed Density Estimation Based on a Mixture of Factor Analyzers in a Sensor Network.
Wei, Xin; Li, Chunguang; Zhou, Liang; Zhao, Li
2015-01-01
Distributed density estimation in sensor networks has received much attention due to its broad applicability. When encountering high-dimensional observations, a mixture of factor analyzers (MFA) is taken to replace a mixture of Gaussians for describing the distributions of observations. In this paper, we study distributed density estimation based on a mixture of factor analyzers. Existing estimation algorithms of the MFA are for the centralized case, which are not suitable for distributed processing in sensor networks. We present distributed density estimation algorithms for the MFA and its extension, the mixture of Student's t-factor analyzers (MtFA). We first define an objective function as the linear combination of local log-likelihoods. Then, we give the derivation process of the distributed estimation algorithms for the MFA and MtFA in detail, respectively. In these algorithms, the local sufficient statistics (LSS) are calculated at first and diffused. Then, each node performs a linear combination of the received LSS from nodes in its neighborhood to obtain the combined sufficient statistics (CSS). Parameters of the MFA and the MtFA can be obtained by using the CSS. Finally, we evaluate the performance of these algorithms by numerical simulations and an application example. Experimental results validate the promising performance of the proposed algorithms. PMID:26251903
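The diffuse-then-combine pattern can be illustrated, well below the complexity of an MFA, with the sufficient statistics of a single Gaussian. In the sketch below the network is fully connected with uniform combination weights; those simplifications, and the Gaussian model itself, are assumptions made for illustration, not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(5)

# Three "nodes", each holding local samples from the same distribution
nodes = [rng.normal(1.0, 2.0, size=n) for n in (40, 60, 50)]

# Local sufficient statistics (for a single Gaussian, the simplest analogue
# of the paper's LSS): count, sum, and sum of squares
lss = [(x.size, x.sum(), (x ** 2).sum()) for x in nodes]

# Diffusion step: with every node a neighbor of every other and uniform
# weights, the combined sufficient statistics are just the totals
n_tot = sum(s[0] for s in lss)
s_tot = sum(s[1] for s in lss)
ss_tot = sum(s[2] for s in lss)

mean_hat = s_tot / n_tot
var_hat = ss_tot / n_tot - mean_hat ** 2
print(mean_hat, var_hat)
```

The key property, which carries over to the MFA case, is that nodes exchange only low-dimensional statistics rather than raw observations, yet recover the same parameter estimates a centralized fit would produce.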
Population density estimated from locations of individuals on a passive detector array
Efford, Murray G.; Dawson, Deanna K.; Borchers, David L.
2009-01-01
The density of a closed population of animals occupying stable home ranges may be estimated from detections of individuals on an array of detectors, using newly developed methods for spatially explicit capture–recapture. Likelihood-based methods provide estimates for data from multi-catch traps or from devices that record presence without restricting animal movement ("proximity" detectors such as camera traps and hair snags). As originally proposed, these methods require multiple sampling intervals. We show that equally precise and unbiased estimates may be obtained from a single sampling interval, using only the spatial pattern of detections. This considerably extends the range of possible applications, and we illustrate the potential by estimating density from simulated detections of bird vocalizations on a microphone array. Acoustic detection can be defined as occurring when received signal strength exceeds a threshold. We suggest detection models for binary acoustic data, and for continuous data comprising measurements of all signals above the threshold. While binary data are often sufficient for density estimation, modeling signal strength improves precision when the microphone array is small.
Surface estimates of the Atlantic overturning in density space in an eddy-permitting ocean model
NASA Astrophysics Data System (ADS)
Grist, Jeremy P.; Josey, Simon A.; Marsh, Robert
2012-06-01
A method to estimate the variability of the Atlantic meridional overturning circulation (AMOC) from surface observations is investigated using an eddy-permitting ocean-only model (ORCA-025). The approach is based on the estimate of dense water formation from surface density fluxes. Analysis using 78 years of two repeat forcing model runs reveals that the surface forcing-based estimate accounts for over 60% of the interannual AMOC variability in σ0 coordinates between 37°N and 51°N. The analysis provides correlations between surface-forced and actual overturning that exceed those obtained in an earlier analysis of a coarser resolution-coupled model. Our results indicate that, in accordance with theoretical considerations behind the method, it provides a better estimate of the overturning in density coordinates than in z coordinates in subpolar latitudes. By considering shorter segments of the model run, it is shown that correlations are particularly enhanced by the method's ability to capture large decadal scale AMOC fluctuations. The inclusion of the anomalous Ekman transport increases the amount of variance explained by an average 16% throughout the North Atlantic and provides the greatest potential for estimating the variability of the AMOC in density space between 33°N and 54°N. In that latitude range, 70-84% of the variance is explained and the root-mean-square difference is less than 1 Sv when the full run is considered.
Biases in velocity and Q estimates from 3D density structure
NASA Astrophysics Data System (ADS)
Płonka, Agnieszka; Fichtner, Andreas
2015-04-01
We propose to develop a seismic tomography technique that directly inverts for density, using complete seismograms rather than arrival times of certain waves only. The first task in this challenge is to systematically study the imprints of density on synthetic seismograms. To compute the full seismic wavefield in a 3D heterogeneous medium without making significant approximations, we use numerical wave propagation based on a spectral-element discretization of the seismic wave equation. We consider a 2000 by 1000 km wide and 500 km deep spherical section, with the 1D Earth model PREM (with 40 km crust thickness) as a background. Onto this (in the uppermost 40 km) we superimpose 3D randomly generated velocity and density heterogeneities of various magnitudes and correlation lengths. We use different random realizations of heterogeneity distribution. We compare the synthetic seismograms for 3D velocity and density structure with 3D velocity structure and with the 1D background, calculating relative amplitude differences and timeshifts as functions of time and frequency. For 3D density variations of 7 % relative to PREM, the biggest time shifts reach 2.5 s, and the biggest relative amplitude differences approach 90 %. Based on the experimental changes in arrival times and amplitudes, we quantify the biases introduced in velocity and Q estimates when 3D density is not taken into account. For real data the effects may be more severe, given that commonly observed crustal velocity variations of 10-20 % suggest density variations of around 15 % in the upper crust. Our analyses indicate that reasonably sized density variations within the crust can leave a strong imprint on both traveltimes and amplitudes. While this can produce significant biases in velocity and Q estimates, the positive conclusion is that seismic waveform inversion for density may become feasible.
Density estimation of small-mammal populations using a trapping web and distance sampling methods
Anderson, David R.; Burnham, Kenneth P.; White, Gary C.; Otis, David L.
1983-01-01
Distance sampling methodology is adapted to enable animal density (number per unit of area) to be estimated from capture-recapture and removal data. A trapping web design provides the link between capture data and distance sampling theory. The estimator of density is D = M_{t+1} f(0), where M_{t+1} is the number of individuals captured and f(0) is computed from the M_{t+1} distances from the web center to the traps in which those individuals were first captured. It is possible to check qualitatively the critical assumption on which the web design and the estimator are based. This is a conceptual paper outlining a new methodology, not a definitive investigation of the best specific way to implement this method. Several alternative sampling and analysis methods are possible within the general framework of distance sampling theory; a few alternatives are discussed and an example is given.
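Taking the abstract's estimator D = M_{t+1} f(0) at face value, the computation can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's full method: it assumes a half-normal detection model fitted by maximum likelihood to the first-capture distances, and the function names and distances are hypothetical.

```python
import math

def half_normal_f0(distances):
    """MLE fit of a half-normal pdf to capture distances, evaluated
    at zero: f(0) = sqrt(2 / (pi * sigma^2)), where sigma^2 is
    estimated by the mean squared distance."""
    sigma2 = sum(d * d for d in distances) / len(distances)
    return math.sqrt(2.0 / (math.pi * sigma2))

def density_estimate(distances):
    """D = M_{t+1} * f(0), with M_{t+1} the number of first-capture
    distances (one per captured individual)."""
    return len(distances) * half_normal_f0(distances)
```

In practice f(0) would come from model selection over several candidate detection functions (as in program DISTANCE); the half-normal here is only one such candidate.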
NASA Astrophysics Data System (ADS)
Park, J.; Lühr, H.; Stolle, C.; Malhotra, G.; Baker, J. B. H.; Buchert, S.; Gill, R.
2015-07-01
Plasma convection in the high-latitude ionosphere provides important information about magnetosphere-ionosphere-thermosphere coupling. In this study we estimate the along-track component of plasma convection within and around the polar cap, using electron density profiles measured by the three Swarm satellites. The velocity values estimated from the two different satellite pairs agree with each other. In both hemispheres the estimated velocity is generally anti-sunward, especially for higher speeds. The obtained velocity is in qualitative agreement with Super Dual Auroral Radar Network data. Our method can supplement currently available instruments for ionospheric plasma velocity measurements, especially in cases where these traditional instruments suffer from their inherent limitations. Also, the method can be generalized to other satellite constellations carrying electron density probes.
NASA Astrophysics Data System (ADS)
Erkyihun, S. T.
2013-12-01
Understanding streamflow variability and the ability to generate realistic scenarios at multi-decadal time scales is important for robust water resources planning and management in any river basin, and more so in the Colorado River Basin with its semi-arid climate and highly stressed water resources. It is increasingly evident that large-scale climate forcings such as the El Nino Southern Oscillation (ENSO), Pacific Decadal Oscillation (PDO), and Atlantic Multi-decadal Oscillation (AMO) modulate the Colorado River Basin hydrology at multi-decadal time scales. Thus, modeling these large-scale climate indicators is important for conditionally modeling the multi-decadal streamflow variability. To this end, we developed a simulation model that combines a wavelet-based time series method, Wavelet Auto Regressive Moving Average (WARMA), with a K-nearest neighbor (K-NN) bootstrap approach. In this, for a given time series (climate forcings), dominant periodicities/frequency bands are identified from the wavelet spectrum that pass the 90% significance test. The time series is filtered at these frequencies in each band to create 'components'; the components are orthogonal and, when added to the residual (i.e., noise), result in the original time series. The components, being smooth, are easily modeled using parsimonious Auto Regressive Moving Average (ARMA) time series models. The fitted ARMA models are used to simulate the individual components, which are added to obtain a simulation of the original series. The WARMA approach is applied to all the climate forcing indicators, which are used to simulate multi-decadal sequences of these forcings.
For the current year, the simulated forcings are considered the 'feature vector' and the K nearest neighbors of this vector are identified; one of the neighbors (i.e., one of the historical years) is resampled using a weighted probability metric (with the most weight given to the nearest neighbor and the least to the farthest), and the corresponding streamflow is the simulated value for the current year. We applied this simulation approach to the climate indicators and streamflow at Lees Ferry, AZ in the Colorado River Basin, which is a key gauge on the river, using data from the observational and paleo periods together spanning 1650-2005. A suite of distributional statistics such as the probability density function (PDF), mean, variance, skew, and lag-1 autocorrelation, along with higher-order and multi-decadal statistics such as spectra and drought and surplus statistics, are computed to check the performance of the flow simulation in capturing the variability of the historic and paleo periods. Our results indicate that this approach robustly reproduces all of the above-mentioned statistical properties. This offers an attractive alternative for near-term (interannual to multi-decadal) flow simulation that is critical for water resources planning.
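The K-NN resampling step described above can be sketched as follows. This is a minimal illustration, not the authors' code: the decreasing 1/rank weight kernel is the standard choice for K-NN bootstraps, and all function and variable names are hypothetical.

```python
import math
import random

def knn_bootstrap_step(feature, hist_features, hist_flows, k, rng):
    """One K-NN bootstrap resampling step: find the k historical
    years whose feature vectors are closest to the simulated one,
    then resample a year with weight proportional to 1/rank
    (the nearest neighbor gets the largest weight)."""
    nearest = sorted(range(len(hist_features)),
                     key=lambda i: math.dist(feature, hist_features[i]))[:k]
    weights = [1.0 / (rank + 1) for rank in range(k)]
    year = rng.choices(nearest, weights=weights, k=1)[0]
    return hist_flows[year]
```

The resampled flow is then taken as the simulated streamflow for the current year, and the procedure repeats with the next year's simulated forcings.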
Multi-Dimensional Density Estimation and Phase Space Structure Of Dark Matter Halos
Sanjib Sharma; Matthias Steinmetz
2008-03-03
We present a method to numerically estimate the density of discretely sampled data based on a binary space partitioning tree. We start with a root node containing all the particles and then recursively divide each node into two nodes, each containing roughly equal numbers of particles, until each node contains only one particle. The volume of such a leaf node provides an estimate of the local density. We implement an entropy-based node splitting criterion that results in a significant improvement in the estimation of densities compared to earlier work. The method is completely metric free and can be applied to an arbitrary number of dimensions. We apply this method to determine the phase space densities of dark matter halos obtained from cosmological N-body simulations. We find that, contrary to earlier studies, the volume distribution function $v(f)$ of phase space density $f$ does not have a constant slope but rather a small hump at high phase space densities. We demonstrate that a model in which a halo is made up of a superposition of Hernquist spheres is not capable of explaining the shape of the $v(f)$ vs $f$ relation, whereas a model which takes into account the contribution of the main halo separately roughly reproduces the behavior seen in simulations. The use of the presented method is not limited to the calculation of phase space densities; it can be used as a general-purpose data-mining tool, and due to its speed and accuracy it is ideally suited for the analysis of large multidimensional data sets.
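The core recursion described above can be sketched in a few lines. This is a minimal 2-D illustration under simplifying assumptions: splits are made at the median along the axis of largest extent, whereas the paper's entropy-based splitting criterion is not reproduced, and all names are hypothetical.

```python
def bsp_leaves(points, bounds):
    """Recursively halve the point set until one point per leaf.
    Returns (point, leaf_bounds) pairs; bounds is a list of
    (lo, hi) intervals, one per dimension."""
    if len(points) == 1:
        return [(points[0], bounds)]
    # Split along the axis of largest extent, at the median point.
    dim = max(range(len(bounds)), key=lambda d: bounds[d][1] - bounds[d][0])
    pts = sorted(points, key=lambda p: p[dim])
    m = len(pts) // 2
    cut = 0.5 * (pts[m - 1][dim] + pts[m][dim])
    left_b = list(bounds); left_b[dim] = (bounds[dim][0], cut)
    right_b = list(bounds); right_b[dim] = (cut, bounds[dim][1])
    return bsp_leaves(pts[:m], left_b) + bsp_leaves(pts[m:], right_b)

def volume(bounds):
    v = 1.0
    for lo, hi in bounds:
        v *= hi - lo
    return v

def density_estimates(points, bounds):
    """Local density at each particle: 1 / (N * leaf volume), so the
    piecewise-constant estimate integrates to one."""
    n = len(points)
    return {p: 1.0 / (n * volume(b)) for p, b in bsp_leaves(points, bounds)}
```

Because each cut partitions the parent box exactly, the leaf volumes tile the root volume, which is what makes the leaf-volume density estimate self-normalizing.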
Analysis of percent density estimates from digital breast tomosynthesis projection images
NASA Astrophysics Data System (ADS)
Bakic, Predrag R.; Kontos, Despina; Zhang, Cuiping; Yaffe, Martin J.; Maidment, Andrew D. A.
2007-03-01
Women with dense breasts have an increased risk of breast cancer. Breast density is typically measured as the percent density (PD), the percentage of non-fatty (i.e., dense) tissue in breast images. Mammographic PD estimates vary, in part, due to the projective nature of mammograms. Digital breast tomosynthesis (DBT) is a novel radiographic method in which 3D images of the breast are reconstructed from a small number of projection (source) images, acquired at different positions of the x-ray focus. DBT provides superior visualization of breast tissue and has improved sensitivity and specificity as compared to mammography. Our long-term goal is to test the hypothesis that PD obtained from DBT is superior in estimating cancer risk compared with other modalities. As a first step, we have analyzed the PD estimates from DBT source projections since the results would be independent of the reconstruction method. We estimated PD from MLO mammograms (PD_M) and from individual DBT projections (PD_T). We observed good agreement between PD_M and PD_T from the central projection images of 40 women. This suggests that variations in breast positioning, dose, and scatter between mammography and DBT do not negatively affect PD estimation. The PD_T estimated from individual DBT projections of nine women varied with the angle between the projections. This variation is caused by the 3D arrangement of the breast dense tissue and the acquisition geometry.
Density of Jatropha curcas Seed Oil and its Methyl Esters: Measurement and Estimations
NASA Astrophysics Data System (ADS)
Veny, Harumi; Baroutian, Saeid; Aroua, Mohamed Kheireddine; Hasan, Masitah; Raman, Abdul Aziz; Sulaiman, Nik Meriam Nik
2009-04-01
Density data as a function of temperature have been measured for Jatropha curcas seed oil, as well as biodiesel jatropha methyl esters, at temperatures from above their melting points to 90 °C. The data obtained were used to validate the method proposed by Spencer and Danner using a modified Rackett equation. The experimental and estimated density values using the modified Rackett equation gave almost identical values, with average absolute percent deviations of less than 0.03% for the jatropha oil and 0.04% for the jatropha methyl esters. The Janarthanan empirical equation was also employed to predict jatropha biodiesel densities. This equation performed equally well, with average absolute percent deviations within 0.05%. Two simple linear equations for densities of jatropha oil and its methyl esters are also proposed in this study.
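The Spencer-Danner modified Rackett equation referenced above gives the saturated-liquid molar volume as V_s = (R·T_c/P_c)·Z_RA^[1+(1-T_r)^(2/7)]. A minimal sketch is below; the critical properties in the usage example are hypothetical placeholders, not the jatropha values from the paper.

```python
R = 8.314  # universal gas constant, J/(mol K)

def rackett_molar_volume(T, Tc, Pc, zra):
    """Saturated-liquid molar volume (m^3/mol) from the
    Spencer-Danner modified Rackett equation."""
    Tr = T / Tc  # reduced temperature
    return (R * Tc / Pc) * zra ** (1.0 + (1.0 - Tr) ** (2.0 / 7.0))

def rackett_density(T, Tc, Pc, zra, molar_mass):
    """Liquid density (kg/m^3) = molar mass (kg/mol) / molar volume."""
    return molar_mass / rackett_molar_volume(T, Tc, Pc, zra)
```

Since Z_RA < 1 and the exponent shrinks as T rises, the predicted molar volume grows with temperature and the density falls, matching the qualitative trend the paper reports.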
Estimating absolute salinity (SA) in the world's oceans using density and composition
NASA Astrophysics Data System (ADS)
Woosley, Ryan J.; Huang, Fen; Millero, Frank J.
2014-11-01
The practical (Sp) and reference (SR) salinities do not account for variations in physical properties such as density and enthalpy. Trace and minor components of seawater, such as nutrients or inorganic carbon affect these properties. This limitation has been recognized and several studies have been made to estimate the effect of these compositional changes on the conductivity-density relationship. These studies have been limited in number and geographic scope. Here, we combine the measurements of previous studies with new measurements for a total of 2857 conductivity-density measurements, covering all of the world's major oceans, to derive empirical equations for the effect of silica and total alkalinity on the density and absolute salinity of the global oceans and to recommend an equation applicable to most of the world's oceans. The potential impact on salinity as a result of uptake of anthropogenic CO2 is also discussed.
NSDL National Science Digital Library
Miss Witcher
2011-10-06
What is Density? Density is the amount of "stuff" in a given "space". In science terms that means the amount of "mass" per unit "volume". Using units, that means the amount of "grams" per "centimeters cubed". Check out the following links and learn about density through song: Density Beatles Style, Density Chipmunk Style, Density Rap. Enjoy!
Estimating density dependence in time-series of age-structured populations.
Lande, R; Engen, S; Saether, B-E
2002-01-01
For a life history with age at maturity α, and stochasticity and density dependence in adult recruitment and mortality, we derive a linearized autoregressive equation with time lags of 1 to α years. Contrary to current interpretations, the coefficients for different time lags in the autoregressive dynamics do not simply measure delayed density dependence, but also depend on life-history parameters. We define a new measure of total density dependence in a life history, D, as the negative elasticity of population growth rate per generation with respect to change in population size, D = -∂ln(λ^T)/∂ln N, where λ is the asymptotic multiplicative growth rate per year, T is the generation time, and N is adult population size. We show that D can be estimated from the sum of the autoregression coefficients. We estimated D in populations of six avian species for which life-history data and unusually long time series of complete population censuses were available. Estimates of D were on the order of 1 or higher, indicating strong, statistically significant density dependence in four of the six species. PMID:12396510
RS-Forest: A Rapid Density Estimator for Streaming Anomaly Detection
Wu, Ke; Zhang, Kun; Fan, Wei; Edwards, Andrea; Yu, Philip S.
2015-01-01
Anomaly detection in streaming data is of high interest in numerous application domains. In this paper, we propose a novel one-class semi-supervised algorithm to detect anomalies in streaming data. Underlying the algorithm is a fast and accurate density estimator implemented by multiple fully randomized space trees (RS-Trees), named RS-Forest. The piecewise constant density estimate of each RS-tree is defined on the tree node into which an instance falls. Each incoming instance in a data stream is scored by the density estimates averaged over all trees in the forest. Two strategies, statistical attribute range estimation of high probability guarantee and dual node profiles for rapid model update, are seamlessly integrated into RS-Forest to systematically address the ever-evolving nature of data streams. We derive the theoretical upper bound for the proposed algorithm and analyze its asymptotic properties via bias-variance decomposition. Empirical comparisons to the state-of-the-art methods on multiple benchmark datasets demonstrate that the proposed method features high detection rate, fast response, and insensitivity to most of the parameter settings. Algorithm implementations and datasets are available upon request. PMID:25685112
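The fully randomized space tree at the heart of the estimator above can be sketched compactly. This is an illustrative reduction, not the RS-Forest implementation: the streaming updates, attribute-range estimation, and dual node profiles are omitted, and all names are hypothetical.

```python
import random

def build_tree(bounds, depth, rng):
    """Fully randomized space tree: each internal node picks a random
    dimension and a uniform random cut inside its region."""
    if depth == 0:
        return {"bounds": bounds}
    dim = rng.randrange(len(bounds))
    lo, hi = bounds[dim]
    cut = rng.uniform(lo, hi)
    left_b = list(bounds); left_b[dim] = (lo, cut)
    right_b = list(bounds); right_b[dim] = (cut, hi)
    return {"dim": dim, "cut": cut,
            "left": build_tree(left_b, depth - 1, rng),
            "right": build_tree(right_b, depth - 1, rng)}

def leaf_of(tree, x):
    while "dim" in tree:
        tree = tree["left"] if x[tree["dim"]] < tree["cut"] else tree["right"]
    return tree

def forest_density(forest, data, x):
    """Piecewise-constant estimate averaged over trees: in each tree
    the density at x is (points in x's leaf) / (N * leaf volume)."""
    n = len(data)
    total = 0.0
    for tree in forest:
        leaf = leaf_of(tree, x)
        count = sum(1 for p in data if leaf_of(tree, p) is leaf)
        vol = 1.0
        for lo, hi in leaf["bounds"]:
            vol *= hi - lo
        total += count / (n * vol)
    return total / len(forest)
```

An anomaly score then follows by thresholding the averaged density: instances falling in sparsely populated leaves across most trees score low and are flagged.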
Density estimation in a wolverine population using spatial capture-recapture models
Royle, J. Andrew; Magoun, Audrey J.; Gardner, Beth; Valkenbury, Patrick; Lowell, Richard E.
2011-01-01
Classical closed-population capture-recapture models do not accommodate the spatial information inherent in encounter history data obtained from camera-trapping studies. As a result, individual heterogeneity in encounter probability is induced, and it is not possible to estimate density objectively because trap arrays do not have a well-defined sample area. We applied newly-developed, capture-recapture models that accommodate the spatial attribute inherent in capture-recapture data to a population of wolverines (Gulo gulo) in Southeast Alaska in 2008. We used camera-trapping data collected from 37 cameras in a 2,140-km² area of forested and open habitats largely enclosed by ocean and glacial icefields. We detected 21 unique individuals 115 times. Wolverines exhibited a strong positive trap response, with an increased tendency to revisit previously visited traps. Under the trap-response model, we estimated wolverine density at 9.7 individuals/1,000 km² (95% Bayesian CI: 5.9-15.0). Our model provides a formal statistical framework for estimating density from wolverine camera-trapping studies that accounts for a behavioral response due to baited traps. Further, our model-based estimator does not have strict requirements about the spatial configuration of traps or length of trapping sessions, providing considerable operational flexibility in the development of field studies.
Scatterer number density considerations in reference phantom-based attenuation estimation.
Rubert, Nicholas; Varghese, Tomy
2014-07-01
Attenuation estimation and imaging have the potential to be a valuable tool for tissue characterization, particularly for indicating the extent of thermal ablation therapy in the liver. Often the performance of attenuation estimation algorithms is characterized with numerical simulations or tissue-mimicking phantoms containing a high scatterer number density (SND). This ensures an ultrasound signal with a Rayleigh distributed envelope and a signal-to-noise ratio (SNR) approaching 1.91. However, biological tissue often fails to exhibit Rayleigh scattering statistics. For example, across 1647 regions of interest in five ex vivo bovine livers, we obtained an envelope SNR of 1.10 ± 0.12 when the tissue was imaged with the VFX 9L4 linear array transducer at a center frequency of 6.0 MHz on a Siemens S2000 scanner. In this article, we examine attenuation estimation in numerical phantoms, tissue-mimicking phantoms with variable SNDs and ex vivo bovine liver before and after thermal coagulation. We find that reference phantom-based attenuation estimation is robust to small deviations from Rayleigh statistics. However, in tissue with low SNDs, large deviations in envelope SNR from 1.91 lead to subsequently large increases in attenuation estimation variance. At the same time, low SND is not found to be a significant source of bias in the attenuation estimate. For example, we find that the standard deviation of attenuation slope estimates increases from 0.07 to 0.25 dB/cm-MHz as the envelope SNR decreases from 1.78 to 1.01 when estimating attenuation slope in tissue-mimicking phantoms with a large estimation kernel size (16 mm axially × 15 mm laterally). Meanwhile, the bias in the attenuation slope estimates is found to be negligible (<0.01 dB/cm-MHz). We also compare results obtained with reference phantom-based attenuation estimates in ex vivo bovine liver and thermally coagulated bovine liver. PMID:24726800
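The envelope SNR ceiling of 1.91 quoted above is the fully developed speckle (Rayleigh) value, sqrt(π/(4-π)) ≈ 1.913, which a quick simulation reproduces. The sketch below is illustrative only; the sample size and seed are arbitrary.

```python
import math
import random

def rayleigh_envelope_snr(n, seed=0):
    """Envelope SNR (mean/std) of n Rayleigh samples, generated as the
    magnitude of two independent Gaussian quadrature components."""
    rng = random.Random(seed)
    env = [math.hypot(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(n)]
    mean = sum(env) / n
    var = sum((e - mean) ** 2 for e in env) / n
    return mean / math.sqrt(var)

# Theoretical value for fully developed speckle (high SND limit):
THEORY = math.sqrt(math.pi / (4.0 - math.pi))  # ~1.913
```

Tissue with a low scatterer number density falls below this limit, which is why the ex vivo livers above show envelope SNRs nearer 1.1.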
Estimation and Modeling of Enceladus Plume Jet Density Using Reaction Wheel Control Data
NASA Technical Reports Server (NTRS)
Lee, Allan Y.; Wang, Eric K.; Pilinski, Emily B.; Macala, Glenn A.; Feldman, Antonette
2010-01-01
The Cassini spacecraft was launched on October 15, 1997 by a Titan 4B launch vehicle. After an interplanetary cruise of almost seven years, it arrived at Saturn on June 30, 2004. In 2005, Cassini completed three flybys of Enceladus, a small, icy satellite of Saturn. Observations made during these flybys confirmed the existence of a water vapor plume in the south polar region of Enceladus. Five additional low-altitude flybys of Enceladus were successfully executed in 2008-9 to better characterize these watery plumes. The first of these flybys was the 50-km Enceladus-3 (E3) flyby executed on March 12, 2008. During the E3 flyby, the spacecraft attitude was controlled by a set of three reaction wheels. During the flyby, multiple plume jets imparted disturbance torque on the spacecraft resulting in small but visible attitude control errors. Using the known and unique transfer function between the disturbance torque and the attitude control error, the collected attitude control error telemetry could be used to estimate the disturbance torque. The effectiveness of this methodology is confirmed using the E3 telemetry data. Given good estimates of spacecraft's projected area, center of pressure location, and spacecraft velocity, the time history of the Enceladus plume density is reconstructed accordingly. The 1 sigma uncertainty of the estimated density is 7.7%. Next, we modeled the density due to each plume jet as a function of both the radial and angular distances of the spacecraft from the plume source. We also conjecture that the total plume density experienced by the spacecraft is the sum of the component plume densities. By comparing the time history of the reconstructed E3 plume density with that predicted by the plume model, values of the plume model parameters are determined. Results obtained are compared with those determined by other Cassini science instruments.
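The density-reconstruction step above can be illustrated with a simple free-molecular drag model: the plume exerts a force F = ½ρv²C_D·A at a lever arm r_cp from the center of mass, so a torque estimate inverts to ρ = 2τ/(C_D·A·v²·r_cp). This is a sketch under stated assumptions, not the paper's estimator, and every number in the usage example is hypothetical rather than a Cassini value.

```python
def plume_density(torque, area, v, r_cp, c_d=2.0):
    """Invert tau = r_cp * (0.5 * rho * v**2 * c_d * area) for rho.
    Assumes free-molecular drag (c_d ~ 2) and a known center-of-
    pressure lever arm r_cp; all inputs in SI units."""
    return 2.0 * torque / (c_d * area * v ** 2 * r_cp)
```

In the actual methodology the torque itself is first recovered from attitude control error telemetry through the known transfer function, and only then converted to density.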
Cluster Mass Estimate and a Cusp of the Mass Density Distribution in Clusters of Galaxies
Nobuyoshi Makino; Katsuaki Asano
1998-08-29
We study density cusps in the center of clusters of galaxies to reconcile X-ray mass estimates with gravitational lensing masses. For various mass density models with cusps we compute the X-ray surface brightness distribution, and fit them to observations to measure the range of parameters in the density models. The Einstein radii estimated from these density models are compared with Einstein radii derived from the observed arcs for Abell 2163, Abell 2218, and RX J1347.5-1145. The X-ray masses and lensing masses corresponding to these Einstein radii are also compared. While steeper cusps give smaller ratios of lensing mass to X-ray mass, the X-ray surface brightnesses estimated from flatter cusps are better fits to the observations. For Abell 2163 and Abell 2218, although the isothermal sphere with a finite core cannot produce giant arc images, a density model with a central cusp can produce a finite Einstein radius, which is smaller than the observed radii. We find that a total mass density profile which declines as $\sim r^{-1.4}$ produces the largest radius in models which are consistent with the X-ray surface brightness profile. As a result, the extremely large ratio of the lensing mass to the X-ray mass is improved from 2.2 to 1.4 for Abell 2163, and from 3 to 2.4 for Abell 2218. For RX J1347.5-1145, which is a cooling flow cluster, we cannot reduce the mass discrepancy.
Eskelson, Bianca N.I.; Hagar, Joan; Temesgen, Hailemariam
2012-01-01
Snags (standing dead trees) are an essential structural component of forests. Because wildlife use of snags depends on size and decay stage, snag density estimation without any information about snag quality attributes is of little value for wildlife management decision makers. Little work has been done to develop models that allow multivariate estimation of snag density by snag quality class. Using climate, topography, Landsat TM data, stand age and forest type collected for 2356 forested Forest Inventory and Analysis plots in western Washington and western Oregon, we evaluated two multivariate techniques for their abilities to estimate density of snags by three decay classes. The density of live trees and snags in three decay classes (D1: recently dead, little decay; D2: decay, without top, some branches and bark missing; D3: extensive decay, missing bark and most branches) with diameter at breast height (DBH) ≥ 12.7 cm was estimated using a nonparametric random forest nearest neighbor imputation technique (RF) and a parametric two-stage model (QPORD), for which the number of trees per hectare was estimated with a Quasipoisson model in the first stage and the probability of belonging to a tree status class (live, D1, D2, D3) was estimated with an ordinal regression model in the second stage. The presence of large snags with DBH ≥ 50 cm was predicted using a logistic regression and RF imputation. Because of the more homogenous conditions on private forest lands, snag density by decay class was predicted with higher accuracies on private forest lands than on public lands, while presence of large snags was more accurately predicted on public lands, owing to the higher prevalence of large snags on public lands. RF outperformed the QPORD model in terms of percent accurate predictions, while QPORD provided smaller root mean square errors in predicting snag density by decay class. 
The logistic regression model achieved more accurate presence/absence classification of large snags than the RF imputation approach. Adjusting the decision threshold to account for unequal size for presence and absence classes is more straightforward for the logistic regression than for the RF imputation approach. Overall, model accuracies were poor in this study, which can be attributed to the poor predictive quality of the explanatory variables and the large range of forest types and geographic conditions observed in the data.
Singular value decomposition and density estimation for filtering and analysis of gene expression
Rechtsteiner, A. (Andreas); Gottardo, R. (Raphael); Rocha, L. M. (Luis Mateus); Wall, M. E. (Michael E.)
2003-01-01
We present three algorithms for gene expression analysis. Algorithm 1, known as the serial correlation test, is used for filtering out noisy gene expression profiles. Algorithms 2 and 3 project the gene expression profiles into 2-dimensional expression subspaces identified by Singular Value Decomposition. Density estimates are used to determine expression profiles that have a high correlation with the subspace and low levels of noise. High density regions in the projection, clusters of co-expressed genes, are identified. We illustrate the algorithms by application to the yeast cell-cycle data by Cho et al. and comparison of the results.
Somershoe, S.G.; Twedt, D.J.; Reid, B.
2006-01-01
We combined Breeding Bird Survey point count protocol and distance sampling to survey spring migrant and breeding birds in Vicksburg National Military Park on 33 days between March and June of 2003 and 2004. For 26 of 106 detected species, we used program DISTANCE to estimate detection probabilities and densities from 660 3-min point counts in which detections were recorded within four distance annuli. For most species, estimates of detection probability, and thereby density estimates, were improved through incorporation of the proportion of forest cover at point count locations as a covariate. Our results suggest Breeding Bird Surveys would benefit from the use of distance sampling and a quantitative characterization of habitat at point count locations. During spring migration, we estimated that the most common migrant species accounted for a population of 5000-9000 birds in Vicksburg National Military Park (636 ha). Species with average populations of 300 individuals during migration were: Blue-gray Gnatcatcher (Polioptila caerulea), Cedar Waxwing (Bombycilla cedrorum), White-eyed Vireo (Vireo griseus), Indigo Bunting (Passerina cyanea), and Ruby-crowned Kinglet (Regulus calendula). Of 56 species that bred in Vicksburg National Military Park, we estimated that the most common 18 species accounted for 8150 individuals. The six most abundant breeding species, Blue-gray Gnatcatcher, White-eyed Vireo, Summer Tanager (Piranga rubra), Northern Cardinal (Cardinalis cardinalis), Carolina Wren (Thryothorus ludovicianus), and Brown-headed Cowbird (Molothrus ater), accounted for 5800 individuals.
Nearest neighbor density ratio estimation for large-scale applications in astronomy
NASA Astrophysics Data System (ADS)
Kremer, J.; Gieseke, F.; Steenstrup Pedersen, K.; Igel, C.
2015-09-01
In astronomical applications of machine learning, the distribution of objects used for building a model is often different from the distribution of the objects the model is later applied to. This is known as sample selection bias, which is a major challenge for statistical inference as one can no longer assume that the labeled training data are representative. To address this issue, one can re-weight the labeled training patterns to match the distribution of unlabeled data that are available already in the training phase. There are many examples in practice where this strategy yielded good results, but estimating the weights reliably from a finite sample is challenging. We consider an efficient nearest neighbor density ratio estimator that can exploit large samples to increase the accuracy of the weight estimates. To solve the problem of choosing the right neighborhood size, we propose to use cross-validation on a model selection criterion that is unbiased under covariate shift. The resulting algorithm is our method of choice for density ratio estimation when the feature space dimensionality is small and sample sizes are large. The approach is simple and, because of the model selection, robust. We empirically find that it is on a par with established kernel-based methods on relatively small regression benchmark datasets. However, when applied to large-scale photometric redshift estimation, our approach outperforms the state-of-the-art.
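The nearest neighbor density ratio idea above can be sketched as the ratio of two k-NN density estimates evaluated at the same point, which conveniently reduces to w(x) = (n_train/n_test)·(r_train(x)/r_test(x))^d, where r is the distance to the k-th nearest neighbor in each sample. The brute-force 1-D sketch below is illustrative only; the cross-validated neighborhood-size selection described in the abstract is omitted, and all names are hypothetical.

```python
def kth_nn_dist(x, sample, k):
    """Distance from x to its k-th nearest neighbour in a 1-D sample
    (zero-distance self matches are skipped)."""
    dists = sorted(abs(x - s) for s in sample if s != x)
    return dists[k - 1]

def density_ratio_weight(x, train, test, k=3):
    """w(x) = p_test(x) / p_train(x) via the ratio of k-NN density
    estimates: (n_train/n_test) * (r_train/r_test)^d, with d = 1."""
    r_tr = kth_nn_dist(x, train, k)
    r_te = kth_nn_dist(x, test, k)
    return (len(train) / len(test)) * (r_tr / r_te)
```

Reweighting each labeled training pattern by w(x) then matches the training distribution to the unlabeled (test) distribution, which is the covariate-shift correction the abstract describes.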
NASA Astrophysics Data System (ADS)
Rastigejev, Y.; Semakin, A. N.
2013-12-01
Accurate numerical simulations of global scale three-dimensional atmospheric chemical transport models (CTMs) are essential for studies of many important atmospheric chemistry problems such as adverse effect of air pollutants on human health, ecosystems and the Earth's climate. These simulations usually require large CPU time due to numerical difficulties associated with a wide range of spatial and temporal scales, nonlinearity and large number of reacting species. In our previous work we have shown that in order to achieve adequate convergence rate and accuracy, the mesh spacing in numerical simulation of global synoptic-scale pollution plume transport must be decreased to a few kilometers. This resolution is difficult to achieve for global CTMs on uniform or quasi-uniform grids. To address the described above difficulty we developed a three-dimensional Wavelet-based Adaptive Mesh Refinement (WAMR) algorithm. The method employs a highly non-uniform adaptive grid with fine resolution over the areas of interest without requiring small grid-spacing throughout the entire domain. The method uses multi-grid iterative solver that naturally takes advantage of a multilevel structure of the adaptive grid. In order to represent the multilevel adaptive grid efficiently, a dynamic data structure based on indirect memory addressing has been developed. The data structure allows rapid access to individual points, fast inter-grid operations and re-gridding. The WAMR method has been implemented on parallel computer architectures. The parallel algorithm is based on run-time partitioning and load-balancing scheme for the adaptive grid. The partitioning scheme maintains locality to reduce communications between computing nodes. The parallel scheme was found to be cost-effective. Specifically we obtained an order of magnitude increase in computational speed for numerical simulations performed on a twelve-core single processor workstation. 
We have applied the WAMR method to the numerical simulation of several benchmark problems, including simulation of traveling three-dimensional reactive and inert transpacific pollution plumes. It was shown earlier that conventionally used global CTMs implemented on stationary grids are incapable of reproducing the dynamics of these plumes due to excessive numerical diffusion caused by limitations in the grid resolution. It has been shown that the WAMR algorithm allows us to use grids one to two orders of magnitude finer than static grid techniques in regions of fine spatial scales without significantly increasing CPU time. Therefore the developed WAMR method has significant advantages over conventional fixed-resolution computational techniques in terms of accuracy and/or computational cost, and allows accurate simulation of important multi-scale chemical transport problems that cannot be simulated with the standard static grid techniques currently utilized by the majority of global atmospheric chemistry models. This work is supported by a grant from the National Science Foundation under Award No. HRD-1036563.
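The refinement criterion at the heart of a wavelet-based AMR scheme can be illustrated in 1-D: the interpolation detail |f(m) - (f(a)+f(b))/2| at an interval's midpoint acts as a wavelet coefficient, and the grid is refined recursively only where it exceeds a threshold, so points concentrate around sharp fronts. This is a sketch under stated assumptions (dyadic 1-D grid, hypothetical threshold and test function), not the paper's parallel 3-D algorithm.

```python
import math

def refine(f, a, b, eps, max_depth, points):
    """Insert the midpoint of [a, b]; recurse into the halves only
    where the interpolating-wavelet detail coefficient exceeds eps."""
    m = 0.5 * (a + b)
    points.append(m)
    detail = abs(f(m) - 0.5 * (f(a) + f(b)))
    if detail > eps and max_depth > 1:
        refine(f, a, m, eps, max_depth - 1, points)
        refine(f, m, b, eps, max_depth - 1, points)

def front(x):
    """Test profile with a sharp front at x = 0.3."""
    return math.tanh((x - 0.3) / 0.02)

points = [0.0, 1.0]
refine(front, 0.0, 1.0, 1e-3, 12, points)
```

Away from the front the detail is essentially zero and recursion stops immediately, which is the mechanism that keeps the adaptive grid small while preserving fine resolution where it matters.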
Moment series for moment estimators of the parameters of a Weibull density
Bowman, K.O.; Shenton, L.R.
1982-01-01
Taylor series for the first four moments of the coefficient of variation in sampling from a 2-parameter Weibull density are given; they are taken as far as the coefficient of n^-24. From these a four-moment approximating distribution is set up using summatory techniques on the series. The shape parameter is treated in a similar way, but here the moment equations are no longer explicit estimators, and terms only as far as those in n^-12 are given. The validity of assessed moments and percentiles of the approximating distributions is studied. Consideration is also given to properties of the moment estimator for 1/c.
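The moment estimator discussed here exploits the fact that the coefficient of variation of a Weibull distribution depends only on the shape parameter c, so c can be recovered by inverting that relation. A minimal numerical sketch (the bisection solver and its bracketing interval are our own choices, not taken from the paper):

```python
import math

def weibull_cv(c):
    """Coefficient of variation of a 2-parameter Weibull with shape c.
    The scale parameter cancels, so the CV depends on c alone."""
    g1 = math.gamma(1.0 + 1.0 / c)
    g2 = math.gamma(1.0 + 2.0 / c)
    return math.sqrt(g2 - g1 * g1) / g1

def shape_from_cv(cv, lo=0.1, hi=50.0, tol=1e-12):
    """Moment estimator of the shape: solve weibull_cv(c) = cv by
    bisection (the CV is strictly decreasing in c)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if weibull_cv(mid) > cv:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)
```

Plugging the sample CV into `shape_from_cv` gives the implicit moment estimate of c whose sampling moments the paper expands in powers of 1/n.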
Electron density estimation in cold magnetospheric plasmas with the Cluster Active Archive
NASA Astrophysics Data System (ADS)
Masson, A.; Pedersen, A.; Taylor, M. G.; Escoubet, C. P.; Laakso, H. E.
2009-12-01
Electron density is a key physical quantity for characterizing any plasma medium. Its measurement is thus essential to understand the various physical processes occurring in the environment of a magnetized planet. However, no magnetosphere in the solar system is a homogeneous medium with constant electron density and temperature. For instance, the Earth's magnetosphere is composed of a variety of regions with densities and temperatures spanning at least six orders of magnitude. For this reason, different types of scientific instruments are usually carried onboard a magnetospheric spacecraft to estimate, by different means, the in situ electron density of the various plasma regions crossed. In the case of the European Space Agency Cluster mission, five different instruments on each of its four identical spacecraft can be used to estimate it: two particle instruments, a DC electric field instrument, a relaxation sounder and a high-time-resolution passive wave receiver. Each of these instruments has its pros and cons depending on the plasma conditions. The focus of this study is the accurate estimation of the electron density in cold plasma regions of the magnetosphere, including the magnetotail lobes (Ne ~ 0.01 e-/cc, Te ~ 100 eV) and the plasmasphere (Ne > 10 e-/cc, Te < 10 eV). In these regions, particle instruments can be blind to low-energy ions outflowing from the ionosphere, or may measure only a portion of the energy range of the particles due to photoelectrons. This often results in an underestimation of the bulk density. Measurements from a relaxation sounder enable accurate estimation of the bulk electron density above a fraction of 1 e-/cc but require careful calibration of the resonances and/or the cutoffs detected. On Cluster, active soundings yield precise density estimates between 0.2 and 80 e-/cc every minute or two.
Spacecraft-to-probe difference potential measurements from a double-probe electric field experiment can be calibrated against the above-mentioned types of measurements to derive bulk electron densities with a time resolution below 1 s. Such an in-flight calibration procedure has been performed successfully on past magnetospheric missions such as GEOS, ISEE-1, Viking, Geotail, CRRES and FAST. We first present the outcome of this calibration procedure for the Cluster mission for plasma conditions encountered in the plasmasphere, the magnetotail lobes and the polar caps. This study is based on the use of the Cluster Active Archive (CAA) for data collected in the plasmasphere. CAA offers the unique possibility of easy access to the best-calibrated data collected by all experiments on the Cluster satellites over their several years in orbit. In particular, this has made it possible to take the impact of solar activity into account in the calibration procedure. Recent science nuggets based on these calibrated data are then presented, showing in particular the outcome of three-dimensional (3D) electron density mapping of the magnetotail lobes over several years.
A method for estimating the height of a mesospheric density level using meteor radar
NASA Astrophysics Data System (ADS)
Younger, J. P.; Reid, I. M.; Vincent, R. A.; Murphy, D. J.
2015-07-01
A new technique for determining the height of a constant density surface at altitudes of 78-85 km is presented. The first results are derived from a decade of observations by a meteor radar located at Davis Station in Antarctica and are compared with observations from the Microwave Limb Sounder instrument aboard the Aura satellite. The density of the neutral atmosphere in the mesosphere/lower thermosphere region around 70-110 km is an essential parameter for interpreting airglow-derived atmospheric temperatures, planning atmospheric entry maneuvers of returning spacecraft, and understanding the response of climate to different stimuli. This region is not well characterized, however, due to inaccessibility combined with a lack of consistent strong atmospheric radar scattering mechanisms. Recent advances in the analysis of detection records from high-performance meteor radars provide new opportunities to obtain atmospheric density estimates at high time resolutions in the MLT region using the durations and heights of faint radar echoes from meteor trails. Previous studies have indicated that the expected increase in underdense meteor radar echo decay times with decreasing altitude is reversed in the lower part of the meteor ablation region due to the neutralization of meteor plasma. The height at which the gradient of meteor echo decay times reverses is found to occur at a fixed atmospheric density. Thus, the gradient reversal height of meteor radar diffusion coefficient profiles can be used to infer the height of a constant density level, enabling the observation of mesospheric density variations using meteor radar.
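The key observable here is the altitude at which the gradient of meteor echo decay times reverses, which marks a surface of constant atmospheric density. A toy stand-in for the published fitting procedure (the synthetic profile, its 82 km turning point, and the 5-point smoothing are invented for illustration):

```python
import numpy as np

def reversal_height(heights, decay_times):
    """Height at which the gradient of meteor echo decay times reverses.

    heights must be sorted ascending; the turning point is taken as the
    maximum of a lightly smoothed profile (a toy stand-in for the
    paper's fitting of diffusion-coefficient profiles).
    """
    tau = np.convolve(decay_times, np.ones(5) / 5.0, mode="same")  # smooth
    return heights[int(np.argmax(tau))]

# Synthetic profile: ambipolar diffusion makes decay times grow with
# decreasing altitude until plasma neutralization reverses the trend,
# producing a maximum at the reversal height (here placed at 82 km).
h = np.linspace(70.0, 100.0, 301)
tau = np.exp(-((h - 82.0) / 6.0) ** 2)
h_rev = reversal_height(h, tau)
```

Because that turning point sits at a fixed atmospheric density, tracking `h_rev` over time tracks the height of a constant-density surface.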
Use of spatial capture-recapture modeling and DNA data to estimate densities of elusive animals.
Kéry, Marc; Gardner, Beth; Stoeckle, Tabea; Weber, Darius; Royle, J Andrew
2011-04-01
Assessment of abundance, survival, recruitment rates, and density (i.e., population assessment) is especially challenging for elusive species most in need of protection (e.g., rare carnivores). Individual identification methods, such as DNA sampling, provide ways of studying such species efficiently and noninvasively. Additionally, statistical methods that correct for undetected animals and account for locations where animals are captured are available to efficiently estimate density and other demographic parameters. We collected hair samples of European wildcat (Felis silvestris) from cheek-rub lure sticks, extracted DNA from the samples, and identified each animal's genotype. To estimate the density of wildcats, we used Bayesian inference in a spatial capture-recapture model. We used WinBUGS to fit a model that accounted for differences in detection probability among individuals and seasons and between two lure arrays. We detected 21 individual wildcats (including possible hybrids) 47 times. Wildcat density was estimated at 0.29/km² (SE 0.06), and 95% of the activity of wildcats was estimated to occur within 1.83 km from their home-range center. Lures located systematically were associated with a greater number of detections than lures placed in a cell on the basis of expert opinion. Detection probability of individual cats was greatest in late March. Our model is a generalized linear mixed model; hence, it can be easily extended, for instance, to incorporate trap- and individual-level covariates. We believe that the combined use of noninvasive sampling techniques and spatial capture-recapture models will improve population assessments, especially for rare and elusive animals. PMID:21166714
Estimation of high-resolution dust column density maps. Empirical model fits
NASA Astrophysics Data System (ADS)
Juvela, M.; Montillaud, J.
2013-09-01
Context. Sub-millimetre dust emission is an important tracer of the column density N of dense interstellar clouds. One has to combine surface brightness information at different spatial resolutions, and specific methods are needed to derive N at a resolution higher than the lowest resolution of the observations. Some methods have been discussed in the literature, including one (in the following, method B) that constructs the N estimate in stages, with the smallest spatial scales derived using only the shortest-wavelength maps. Aims: We propose simple model fitting as a flexible way to estimate high-resolution column density maps. Our goal is to evaluate the accuracy of this procedure and to determine whether it is a viable alternative for making these maps. Methods: The new method fits model maps of column density (or intensity at a reference wavelength) and colour temperature. The model is fitted using Markov chain Monte Carlo methods, comparing model predictions with observations at their native resolution. We analyse simulated surface brightness maps and compare the accuracy of the new method with that of method B and with the results that would be obtained from noiseless high-resolution observations. Results: The new method is able to produce reliable column density estimates at a resolution significantly higher than the lowest resolution of the input maps. Compared to method B, it is relatively resilient against the effects of noise. The method is computationally more demanding, but is feasible even in the analysis of large Herschel maps. Conclusions: The proposed empirical modelling method (method E) is demonstrated to be a good alternative for calculating high-resolution column density maps, even with considerable super-resolution. Both methods E and B include the potential for further improvements, e.g., in the form of better a priori constraints.
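The essential trick, comparing the model with each observation at that observation's own native resolution inside an MCMC loop, can be shown with a deliberately tiny 1-D example. Everything here (the profile shape, the two boxcar "beams", the noise level, and the single amplitude parameter) is invented; the paper fits full column-density and temperature maps:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: a 1-D "column density" profile observed at two resolutions.
x = np.linspace(-1.0, 1.0, 100)
profile = np.exp(-(x / 0.3) ** 2)          # known shape, unknown amplitude
true_amp, sigma = 2.0, 0.05

def smooth(y, w):
    return np.convolve(y, np.ones(w) / w, mode="same")

obs = [smooth(true_amp * profile, w) + rng.normal(0, sigma, x.size)
       for w in (3, 9)]                     # two native resolutions

def loglike(amp):
    # Compare the model with each map at that map's own resolution.
    return sum(-0.5 * np.sum((o - smooth(amp * profile, w)) ** 2) / sigma**2
               for o, w in zip(obs, (3, 9)))

# Plain Metropolis random walk over the amplitude.
amp, chain = 1.0, []
ll = loglike(amp)
for _ in range(4000):
    prop = amp + rng.normal(0, 0.05)
    llp = loglike(prop)
    if np.log(rng.random()) < llp - ll:
        amp, ll = prop, llp
    chain.append(amp)
est = float(np.mean(chain[1000:]))
```

Because the fine structure is constrained only by the high-resolution map while both maps constrain the amplitude, the posterior combines the data without ever degrading everything to the worst resolution.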
NASA Astrophysics Data System (ADS)
Stewart, Robert; White, Devin; Urban, Marie; Morton, April; Webster, Clayton; Stoyanov, Miroslav; Bright, Eddie; Bhaduri, Budhendra L.
2013-05-01
The Population Density Tables (PDT) project at Oak Ridge National Laboratory (www.ornl.gov) is developing population density estimates for specific human activities under normal patterns of life based largely on information available in open source. Currently, activity-based density estimates are based on simple summary data statistics such as range and mean. Researchers are interested in improving activity estimation and uncertainty quantification by adopting a Bayesian framework that considers both data and sociocultural knowledge. Under a Bayesian approach, knowledge about population density may be encoded through the process of expert elicitation. Due to the scale of the PDT effort which considers over 250 countries, spans 50 human activity categories, and includes numerous contributors, an elicitation tool is required that can be operationalized within an enterprise data collection and reporting system. Such a method would ideally require that the contributor have minimal statistical knowledge, require minimal input by a statistician or facilitator, consider human difficulties in expressing qualitative knowledge in a quantitative setting, and provide methods by which the contributor can appraise whether their understanding and associated uncertainty was well captured. This paper introduces an algorithm that transforms answers to simple, non-statistical questions into a bivariate Gaussian distribution as the prior for the Beta distribution. Based on geometric properties of the Beta distribution parameter feasibility space and the bivariate Gaussian distribution, an automated method for encoding is developed that responds to these challenging enterprise requirements. Though created within the context of population density, this approach may be applicable to a wide array of problem domains requiring informative priors for the Beta distribution.
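The feasibility constraint mentioned in the abstract (a Beta distribution only exists for certain mean/variance combinations) is easy to see in a far simpler moment-matching rule. This is not the paper's bivariate-Gaussian elicitation algorithm, only a minimal illustration of turning two non-statistical answers into Beta parameters:

```python
def beta_from_mean_sd(m, s):
    """Moment-matching elicitation: turn an expert's best-guess
    proportion m and uncertainty s into Beta(alpha, beta) parameters.

    Requires s**2 < m * (1 - m): the feasibility region of the Beta
    parameter space that the paper's geometric construction respects.
    """
    if not 0.0 < m < 1.0 or s <= 0.0 or s * s >= m * (1.0 - m):
        raise ValueError("(m, s) outside the Beta feasibility region")
    nu = m * (1.0 - m) / (s * s) - 1.0   # effective "sample size" of the prior
    return m * nu, (1.0 - m) * nu

# "About 20%, give or take 10 percentage points" -> Beta(3, 12).
alpha, beta = beta_from_mean_sd(0.2, 0.1)
```

The resulting Beta(3, 12) has mean 0.2 and standard deviation 0.1 by construction, which is the kind of round-trip check the abstract says contributors need in order to appraise whether their uncertainty was captured.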
S. Brown; G. Gaston
1995-01-01
One of the most important databases needed for estimating emissions of carbon dioxide resulting from changes in the cover, use, and management of tropical forests is the total quantity of biomass per unit area, referred to as biomass density. Forest inventories have been shown to be valuable sources of data for estimating biomass density, but inventories for the tropics are
Balsa Terzic, Gabriele Bassi
2011-07-01
In this paper we discuss representations of charged-particle densities in particle-in-cell (PIC) simulations, analyze the sources and profiles of the intrinsic numerical noise, and present efficient methods for its removal. We devise two alternative estimation methods for the charged particle distribution that represent a significant improvement over the Monte Carlo cosine expansion used in the 2D code of Bassi, designed to simulate coherent synchrotron radiation (CSR) in charged particle beams. The distribution is first binned onto a finite grid, after which two grid-based methods are employed to approximate it: (i) a truncated fast cosine transform (TFCT); and (ii) a thresholded wavelet transform (TWT). We demonstrate that these alternative methods represent a staggering upgrade over the original Monte Carlo cosine expansion in terms of efficiency, while the TWT approximation also provides an appreciable improvement in accuracy. The improvement in accuracy comes from a judicious removal of the numerical noise enabled by the wavelet formulation. The TWT method is then integrated into Bassi's CSR code and benchmarked against the original version. We show that the new density estimation method provides superior performance in terms of efficiency and spatial resolution, thus enabling high-fidelity simulations of CSR effects, including the microbunching instability.
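The TWT pipeline (bin the particles, wavelet-transform the histogram, zero the small coefficients that carry mostly shot noise, transform back) can be sketched in 1-D with a hand-rolled Haar transform. This is an illustration of the general technique, not the 2-D wavelets or threshold rule of the paper; the bin count and the 3-times-median threshold are our own choices:

```python
import numpy as np

def haar_fwd(v):
    """Full multilevel Haar transform of a length-2**k vector."""
    v = v.astype(float).copy()
    n = v.size
    while n > 1:
        a = (v[:n:2] + v[1:n:2]) / np.sqrt(2.0)   # approximations
        d = (v[:n:2] - v[1:n:2]) / np.sqrt(2.0)   # details
        v[: n // 2], v[n // 2 : n] = a, d
        n //= 2
    return v

def haar_inv(v):
    v = v.copy()
    n = 1
    while n < v.size:
        a, d = v[:n], v[n : 2 * n]
        out = np.empty(2 * n)
        out[0::2] = (a + d) / np.sqrt(2.0)
        out[1::2] = (a - d) / np.sqrt(2.0)
        v[: 2 * n] = out
        n *= 2
    return v

def twt_density(samples, nbins=256, thresh=None):
    """Thresholded-wavelet density estimate: histogram the particles,
    then hard-threshold small Haar coefficients to strip shot noise."""
    hist, edges = np.histogram(samples, bins=nbins, density=True)
    w = haar_fwd(hist)
    if thresh is None:
        # Noise scale from the finest-level details (an assumption).
        thresh = 3.0 * np.median(np.abs(w[nbins // 2 :]))
    w[1:][np.abs(w[1:]) < thresh] = 0.0   # keep the mean coefficient
    return np.clip(haar_inv(w), 0.0, None), edges

rng = np.random.default_rng(6)
dens, edges = twt_density(rng.normal(0.0, 1.0, 20000))
```

Because the zeroed detail vectors are zero-sum, the total probability of the histogram is essentially preserved while bin-to-bin sampling noise is removed, which is the mechanism behind the accuracy gain claimed for TWT.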
Tangkaratt, Voot; Xie, Ning; Sugiyama, Masashi
2015-01-01
Regression aims at estimating the conditional mean of output given input. However, regression is not informative enough if the conditional density is multimodal, heteroskedastic, and asymmetric. In such a case, estimating the conditional density itself is preferable, but conditional density estimation (CDE) is challenging in high-dimensional space. A naive approach to coping with high dimensionality is to first perform dimensionality reduction (DR) and then execute CDE. However, a two-step process does not perform well in practice because the error incurred in the first DR step can be magnified in the second CDE step. In this letter, we propose a novel single-shot procedure that performs CDE and DR simultaneously in an integrated way. Our key idea is to formulate DR as the problem of minimizing a squared-loss variant of conditional entropy, and this is solved using CDE. Thus, an additional CDE step is not needed after DR. We demonstrate the usefulness of the proposed method through extensive experiments on various data sets, including humanoid robot transition and computer art. PMID:25380340
NASA Astrophysics Data System (ADS)
Wellendorff, Jess; Lundgaard, Keld T.; Møgelhøj, Andreas; Petzold, Vivien; Landis, David D.; Nørskov, Jens K.; Bligaard, Thomas; Jacobsen, Karsten W.
2012-06-01
A methodology for semiempirical density functional optimization, using regularization and cross-validation methods from machine learning, is developed. We demonstrate that such methods enable well-behaved exchange-correlation approximations in very flexible model spaces, thus avoiding the overfitting found when standard least-squares methods are applied to high-order polynomial expansions. A general-purpose density functional for surface science and catalysis studies should accurately describe bond breaking and formation in chemistry, solid state physics, and surface chemistry, and should preferably also include van der Waals dispersion interactions. Such a functional necessarily compromises between describing fundamentally different types of interactions, making transferability of the density functional approximation a key issue. We investigate this trade-off between describing the energetics of intramolecular and intermolecular, bulk solid, and surface chemical bonding, and the developed optimization method explicitly handles making the compromise based on the directions in model space favored by different materials properties. The approach is applied to designing the Bayesian error estimation functional with van der Waals correlation (BEEF-vdW), a semilocal approximation with an additional nonlocal correlation term. Furthermore, an ensemble of functionals around BEEF-vdW comes out naturally, offering an estimate of the computational error. An extensive assessment on a range of data sets validates the applicability of BEEF-vdW to studies in chemistry and condensed matter physics. Applications of the approximation and its Bayesian ensemble error estimate to two intricate surface science problems support this.
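The overfitting problem the abstract describes (unregularized least squares in a flexible expansion) and its cure (regularization chosen by cross-validation) can be shown with the textbook idea the methodology builds on, not the BEEF-vdW machinery itself. The data, polynomial order, and candidate regularization strengths below are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-1, 1, 40)
y = np.sin(np.pi * x) + rng.normal(0, 0.1, x.size)   # noisy target

def design(x, order=15):
    return np.vander(x, order + 1, increasing=True)   # high-order basis

def ridge_fit(X, y, lam):
    # Tikhonov-regularized least squares: (X^T X + lam I) c = X^T y
    n = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ y)

# Simple holdout cross-validation over the regularization strength.
train, val = np.arange(0, 40, 2), np.arange(1, 40, 2)
X = design(x)
errs = {}
for lam in (0.0, 1e-6, 1e-3, 1e-1):
    c = ridge_fit(X[train], y[train], lam)
    errs[lam] = float(np.mean((X[val] @ c - y[val]) ** 2))
best = min(errs, key=errs.get)
```

The validation error, not the training fit, selects the model, which is how the functional optimization keeps a very flexible exchange-correlation model space well-behaved.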
Estimation of Electron Density Profile Near the Lunar Surface from the AKR Reflections
NASA Astrophysics Data System (ADS)
Goto, Y.; Kasahara, Y.; Moriuchi, R.; Kumamoto, A.; Ono, T.
2012-12-01
Electron density profiles near the lunar surface were estimated from radio occultation measurements by the Soviet Luna spacecraft in the 1970s. The profiles showed peak densities of 500-1000 /cc at altitudes of 5-10 km, and the densities decreased smoothly with scale heights of 10-30 km both upward and downward. This high-density layer was interpreted as the lunar ionosphere. Since the lunar atmosphere is extremely tenuous and plasma produced by photoionization is considered to be less dense than the solar wind, the Luna measurements have been viewed with skepticism over the past three decades. On the recent KAGUYA mission, the same kind of radio occultation experiments were conducted, and weak signatures of electron density enhancement, with densities on the order of 100 /cc, were found below 30 km altitude at solar zenith angles less than 60 degrees. In this study, we use a completely different method to estimate the electron density near the lunar surface, based on interference patterns in the AKR (auroral kilometric radiation) spectrum observed by the KAGUYA spacecraft. The AKR waves originated from the Earth's polar region and were frequently observed by KAGUYA in lunar orbit. The interference patterns arise from the path-length difference between waves arriving directly and waves reflected off the lunar surface. Note that the AKR reflection altitude and reflectance can be derived from the stripe interval and strength ratio of the interference pattern, respectively, and the electron density near the lunar surface can in turn be derived from these reflection altitudes and reflectances. We calculated the AKR reflection altitude from 160 stripe intervals observed near the terminator regions by KAGUYA/NPW. As a result, the reflection altitude was 1,740 km on average, with a standard deviation of 8.7 km.
Considering that the mean radius of the Moon is 1,737 km, the AKR waves were reflected at most a few kilometers above the lunar surface. Because the grazing angles of the incident AKR waves were extremely small, there were no dense layers above the AKR reflection altitude. From the estimated reflectance, it is also found that only part of the AKR energy was reflected. This means that there are no reflection layers near the lunar surface that are thick compared with the AKR wavelength.
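The relation between fringe spacing and geometry can be sketched with the idealized two-ray (Lloyd's mirror) picture: fringes spaced df in frequency imply a path-length difference c/df, and at small grazing angle that difference is roughly twice the height above the reflector times the sine of the angle. This flat-reflector geometry is a textbook simplification, not the mission's analysis pipeline, and the example numbers are hypothetical:

```python
import math

C = 299792458.0  # speed of light, m/s

def path_difference(fringe_spacing_hz):
    """Two-ray interference: fringes spaced df in frequency imply a
    direct/reflected path-length difference of c / df."""
    return C / fringe_spacing_hz

def layer_depth(fringe_spacing_hz, grazing_angle_rad):
    """Depth of the reflecting layer below the observer, from the
    Lloyd's-mirror relation dL ~= 2 h sin(theta): an idealized
    flat-reflector geometry, not the KAGUYA processing chain."""
    return path_difference(fringe_spacing_hz) / (2.0 * math.sin(grazing_angle_rad))

dL = path_difference(1.0e4)          # 10 kHz fringes -> ~30 km path difference
```

The measured stripe intervals constrain the path difference directly; converting that to a reflection altitude requires the actual spacecraft-Earth-Moon geometry for each observation.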
Kernel density estimation and K-means clustering to profile road accident hotspots.
Anderson, Tessa K
2009-05-01
Identifying road accident hotspots plays a key role in determining effective strategies for reducing areas with a high density of accidents. This paper presents (1) a methodology using Geographical Information Systems (GIS) and kernel density estimation to study the spatial patterns of injury-related road accidents in London, UK and (2) a clustering methodology using environmental data and results from the first section in order to create a classification of road accident hotspots. The use of this methodology is illustrated using the London area in the UK. Road accident data collected by the Metropolitan Police from 1999 to 2003 were used. A kernel density estimation map was created and subsequently disaggregated by cell density to create a basic spatial unit of an accident hotspot. Environmental data were then appended to the hotspot cells and, using K-means clustering, groups of similar hotspots were identified. Five groups and 15 clusters were created based on collision and attribute data. These clusters are discussed and evaluated according to their robustness and potential uses in road safety campaigning. PMID:19393780
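The two-stage pipeline (kernel density surface, threshold to hotspot cells, then K-means on those cells) can be sketched end to end with synthetic points. The accident locations, grid, bandwidth, and 90th-percentile cutoff below are all invented for illustration; the paper works with London collision records and appended environmental attributes:

```python
import numpy as np

rng = np.random.default_rng(2)

def kde_grid(pts, grid, bw):
    """Gaussian kernel density of 2-D points evaluated on grid cells."""
    d2 = ((grid[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * bw * bw)).sum(1) / (2 * np.pi * bw * bw * len(pts))

def kmeans(X, k, iters=50):
    """Plain Lloyd's algorithm as a stand-in for the K-means step."""
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        lab = ((X[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
        centers = np.array([X[lab == j].mean(0) if (lab == j).any()
                            else centers[j] for j in range(k)])
    return lab, centers

# Two synthetic accident clusters; cells above the 90th density
# percentile become "hotspot" cells, which are then clustered.
pts = np.vstack([rng.normal([0, 0], 0.1, (60, 2)),
                 rng.normal([2, 2], 0.1, (60, 2))])
gx, gy = np.meshgrid(np.linspace(-1, 3, 40), np.linspace(-1, 3, 40))
grid = np.column_stack([gx.ravel(), gy.ravel()])
dens = kde_grid(pts, grid, bw=0.2)
hot = grid[dens > np.quantile(dens, 0.90)]
labels, centers = kmeans(hot, 2)
```

In the paper the clustering input is not the cell coordinates but the environmental attributes appended to each hotspot cell, which is what turns the spatial units into a typology of hotspots.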
Examining the impact of the precision of address geocoding on estimated density of crime locations
NASA Astrophysics Data System (ADS)
Harada, Yutaka; Shimada, Takahito
2006-10-01
This study examines the impact of the precision of address geocoding on the estimated density of crime locations in a large urban area of Japan. The data consist of two separate sets of the same Penal Code offenses known to the police that occurred during the nine-month period of April 1, 2001 through December 31, 2001 in the central 23 wards of Tokyo. These two data sets are derived from the older and newer recording systems of the Tokyo Metropolitan Police Department (TMPD), which revised its crime reporting system in that year so that more precise location information than in previous years could be recorded. Each of these data sets was address-geocoded onto a large-scale digital map using our hierarchical address-geocoding schema, and we examined how differences in the precision of address information, and the resulting differences in geocoded incident locations, affect the patterns in kernel density maps. An analysis using 11,096 pairs of incidents of residential burglary (each pair consists of the same incident geocoded using older and newer address information, respectively) indicates that kernel density estimation with a cell size of 25×25 m and a bandwidth of 500 m may work quite well in absorbing the poorer precision of geocoded locations based on data from the older recording system, whereas in several areas where the older recording system resulted in a very poor precision level, the inaccuracy of incident locations may produce artifactual and potentially misleading patterns in kernel density maps.
Estimation of Graphite Density and mechanical Strength of VHTR during Air-Ingress Accident
Chang Oh; Eung Soo Kim; Hee Cheon No; Byung Jun Kim
2007-09-01
An air-ingress accident in a VHTR is anticipated to cause severe changes in graphite density and mechanical strength through the oxidation process, resulting in many side effects. However, a quantitative estimation has not yet been performed. In this study, the focus is on predicting the graphite density change and mechanical strength using a thermal-hydraulic system analysis code. To analyze the graphite density change, a simple graphite burn-off model was developed based on the similarity between a parallel electrical circuit and graphite oxidation, considering the overall changes in graphite geometry and density. The developed model was implemented in the VHTR system analysis code GAMMA, along with other comprehensive graphite oxidation models. The GT-MHR 600 MWt reactor was selected as the reference reactor. The calculation showed that the main oxidation process began about 5.5 days after the accident, following the onset of natural convection. The core maximum temperature reached 1400 C; however, it never exceeded the maximum temperature criterion of 1600 C. According to the calculation results, most of the oxidation occurs in the bottom reflector, so the exothermic heat generated by oxidation did not contribute to core heat-up. However, the oxidation process strongly decreased the density of the bottom reflector, making it vulnerable to mechanical stress. In fact, since the bottom reflector supports the reactor core, the stress is highly concentrated in this part. The calculations were run for up to 11 days after the accident, and a 4.5% density decrease was estimated, resulting in a 25% reduction in mechanical strength.
NSDL National Science Digital Library
Mr. Hansen
2010-10-26
What is density? Density is a relationship between mass (usually in grams or kilograms) and volume (usually in L, mL or cm³). Below are several sites to help you further understand the concept of density. Click the following link to review the concept of density. Be sure to read each slide and watch each video: Chemistry Review: Density Watch the following video: Pop density video The following is a fun interactive site you can use to review density. Your job is #1, to play and #2 to calculate the density of the ...
Sadeh, Iftach; Lahav, Ofer
2015-01-01
We present ANNz2, a new implementation of the public software for photometric redshift (photo-z) estimation of Collister and Lahav (2004). Large photometric galaxy surveys are important for cosmological studies, and in particular for characterizing the nature of dark energy. The success of such surveys greatly depends on the ability to measure photo-zs, based on limited spectral data. ANNz2 utilizes multiple machine learning methods, such as artificial neural networks, boosted decision/regression trees and k-nearest neighbours. The objective of the algorithm is to dynamically optimize the performance of the photo-z estimation, and to properly derive the associated uncertainties. In addition to single-value solutions, the new code also generates full probability density functions (PDFs) in two different ways. Estimators are also incorporated to mitigate possible problems with spectroscopic training samples that are unrepresentative or incomplete. ANNz2 is also adapted to provide optimized solution...
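One of the machine-learning modes the abstract lists, k-nearest neighbours, is simple enough to sketch directly (this is not the ANNz2 code; the two-band "survey", the linear redshift relation, and k = 10 are invented for the example):

```python
import numpy as np

rng = np.random.default_rng(3)

def knn_photoz(train_mags, train_z, query_mags, k=10):
    """k-nearest-neighbour photo-z: a point estimate plus a crude
    uncertainty (the spread of neighbour redshifts), in the spirit of
    per-object PDFs built from the training set."""
    d2 = ((train_mags[None, :, :] - query_mags[:, None, :]) ** 2).sum(-1)
    nn = np.argsort(d2, axis=1)[:, :k]
    zs = train_z[nn]
    return zs.mean(1), zs.std(1)

# Toy survey: redshift is a smooth function of two "magnitudes".
mags = rng.uniform(0, 1, (500, 2))
z = 0.5 * mags[:, 0] + 0.3 * mags[:, 1]
zhat, zerr = knn_photoz(mags, z, np.array([[0.5, 0.5]]))
```

The full codes go further: they reweight or flag queries far from the training distribution, which is the mitigation for unrepresentative spectroscopic samples mentioned above.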
Phase-space structures - I. A comparison of 6D density estimators
NASA Astrophysics Data System (ADS)
Maciejewski, M.; Colombi, S.; Alard, C.; Bouchet, F.; Pichon, C.
2009-03-01
In the framework of particle-based Vlasov systems, this paper reviews and analyses different methods recently proposed in the literature to identify neighbours in 6D space and estimate the corresponding phase-space density. Specifically, it compares smoothed particle hydrodynamics (SPH) methods based on tree partitioning to 6D Delaunay tessellation. This comparison is carried out on statistical and dynamical realizations of single halo profiles, paying particular attention to the unknown scaling, SG, used to relate the spatial dimensions to the velocity dimensions. It is found that, in practice, the methods with local adaptive metric provide the best phase-space estimators. They make use of a Shannon entropy criterion combined with a binary tree partitioning and with subsequent SPH interpolation using 10-40 nearest neighbours. We note that the local scaling SG implemented by such methods, which enforces local isotropy of the distribution function, can vary by about one order of magnitude in different regions within the system. It presents a bimodal distribution, in which one component is dominated by the main part of the halo and the other one is dominated by the substructures of the halo. While potentially better than SPH techniques, since it yields an optimal estimate of the local softening volume (and therefore the local number of neighbours required to perform the interpolation), the Delaunay tessellation in fact generally poorly estimates the phase-space distribution function. Indeed, it requires, prior to its implementation, the choice of a global scaling SG. We propose two simple but efficient methods to estimate SG that yield a good global compromise. However, the Delaunay interpolation still remains quite sensitive to local anisotropies in the distribution. 
To emphasize the advantages of 6D analysis versus traditional 3D analysis, we also compare realistic 6D phase-space density estimation with the proxy proposed earlier in the literature, Q = ρ/σ³, where ρ is the local 3D (projected) density and 3σ² is the local 3D velocity dispersion. We show that Q only corresponds to a rough approximation of the true phase-space density, and is not able to capture all the details of the distribution in phase space, ignoring, in particular, filamentation and tidal streams.
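A bare-bones k-nearest-neighbour estimator makes the role of the scaling S_G concrete: positions and scaled velocities are stacked into one 6-D metric before neighbours are found. This is a brute-force sketch, not the tree-based SPH or Delaunay estimators compared in the paper; the uniform test cube and k = 16 are invented:

```python
import numpy as np

rng = np.random.default_rng(4)

def phase_space_density(pos, vel, sg, k=16):
    """kNN estimate of the 6-D distribution function f(x, v).

    sg is the position/velocity scaling discussed in the abstract; the
    estimate is sensitive to this choice. The volume of a 6-D ball of
    radius r is pi**3 * r**6 / 6.
    """
    pts = np.hstack([pos, vel * sg])
    d = np.sqrt(((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1))
    rk = np.sort(d, axis=1)[:, k]          # distance to the k-th neighbour
    vol = np.pi ** 3 * rk ** 6 / 6.0
    return k / (len(pts) * vol)

# Uniformly filled 6-D phase-space cube as a smoke test.
pos = rng.uniform(0.0, 1.0, (400, 3))
vel = rng.uniform(0.0, 1.0, (400, 3))
f = phase_space_density(pos, vel, sg=1.0)
```

The adaptive-metric methods favoured by the paper effectively let `sg` vary locally, which is why a single global choice (as required by the Delaunay approach) is a real limitation.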
NASA Astrophysics Data System (ADS)
Zamani, Ahmad; Kolahi Azar, Amir; Safavi, Ali
2014-06-01
This paper presents a wavelet-based multifractal approach to characterize the statistical properties of the temporal distribution of the 1982-2012 seismic activity at Mammoth Mountain volcano. The fractal analysis of the time-occurrence series of seismicity has been carried out in relation to the seismic swarm associated with a magmatic intrusion beneath the volcano on 4 May 1989. We used the wavelet transform modulus maxima (WTMM) based multifractal formalism to obtain the multifractal characteristics of seismicity before, during, and after the unrest. The results revealed that the earthquake sequences across the study area show time-scaling features. It is clearly seen that the multifractal characteristics are not constant across the different periods and that there are differences among the seismicity sequences. The attributes of the singularity spectrum have been utilized to determine the complexity of seismicity for each period. Findings show that the temporal distribution of earthquakes in the swarm period was simpler than in the pre- and post-swarm periods.
Gopalakrishna, Vanishree; Kehtarnavaz, Nasser; Loizou, Philipos C
2010-08-01
This paper presents a wavelet-based speech coding strategy for cochlear implants. In addition, it describes the real-time implementation of this strategy on a personal digital assistant (PDA) platform. Three wavelet packet decomposition tree structures are considered and their performance in terms of computational complexity, spectral leakage, fixed-point accuracy, and real-time processing are compared to other commonly used strategies in cochlear implants. A real-time mechanism is introduced for updating the wavelet coefficients recursively. It is shown that the proposed strategy achieves higher analysis rates than the existing strategies while being able to run in real time on a PDA platform. In addition, it is shown that this strategy leads to a lower amount of spectral leakage. The PDA implementation is made interactive to allow users to easily manipulate the parameters involved and study their effects. PMID:20403778
Johannesen, L; Grove, Usl; Sørensen, Js; Schmidt, Ml; Couderc, J-P; Graff, C
2010-01-01
Quantitative analysis of the electrocardiogram (ECG) requires delineation and classification of the individual ECG wave patterns. We propose a wavelet-based waveform classifier that uses the fiducial points identified by a delineation algorithm. For validation of the algorithm, manually annotated ECG records from the QT database (Physionet) were used. ECG waveform classification accuracies were: 85.6% (P-wave), 89.7% (QRS complex), 92.8% (T-wave) and 76.9% (U-wave). The proposed classification method shows that it is possible to classify waveforms based on the points obtained during delineation. This approach can be used to automatically classify wave patterns in long-term ECG recordings such as 24-hour Holter recordings. PMID:21779544
NASA Astrophysics Data System (ADS)
Zhong, Junmei; Ning, Ruola; Conover, David L.
2004-05-01
The real-time flat panel detector-based cone beam CT breast imaging (FPD-CBCTBI) has attracted increasing attention for its merits of early detection of small breast cancerous tumors, 3-D diagnosis, and treatment planning with glandular dose levels not exceeding those of conventional film-screen mammography. In this research, our motivation is to further reduce the x-ray exposure level for the cone beam CT scan while retaining acceptable image quality for medical diagnosis by applying efficient denoising techniques. In this paper, the wavelet-based multiscale anisotropic diffusion algorithm is applied to denoising in cone beam CT breast imaging. Experimental results demonstrate that the denoising algorithm is very efficient for cone beam CT breast imaging in terms of noise reduction and edge preservation. The denoising results indicate that in clinical applications of cone beam CT breast imaging, the patient's radiation dose can be reduced by up to 60% while obtaining acceptable image quality for diagnosis.
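The paper's method is multiscale anisotropic diffusion; a simpler relative, wavelet soft-threshold shrinkage, conveys the same noise-reduction principle. The sketch below (Haar filters, universal threshold, a synthetic 1-D signal, all assumptions of this illustration rather than the paper's algorithm) suppresses noise while preserving the underlying structure:

```python
import numpy as np

def haar_dwt(x):
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def haar_idwt(a, d):
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def denoise(x, levels=4):
    """Multilevel Haar decomposition + universal soft threshold."""
    a, details = np.asarray(x, float), []
    for _ in range(levels):
        a, d = haar_dwt(a)
        details.append(d)
    # Noise scale from the finest details (robust MAD estimate).
    sigma = np.median(np.abs(details[0])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(x.size))
    details = [np.sign(d) * np.maximum(np.abs(d) - thr, 0.0) for d in details]
    for d in reversed(details):
        a = haar_idwt(a, d)
    return a

rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 4 * np.pi, 1024))
noisy = clean + 0.3 * rng.standard_normal(1024)
den = denoise(noisy)
# Shrinkage should lower the mean squared error against the clean signal.
print(np.mean((den - clean) ** 2) < np.mean((noisy - clean) ** 2))
```

The 2-D CT case replaces the 1-D transform with a separable image transform and couples the shrinkage to a diffusion step, but the scale-by-scale suppression of small coefficients is the shared mechanism.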
NASA Astrophysics Data System (ADS)
Sondhiya, Deepak Kumar; Gwal, Ashok Kumar; Verma, Shivali; Kasde, Satish Kumar
Abstract: In this paper, a wavelet-based neural network system for the detection and identification of four types of VLF whistler transients (i.e., dispersive, diffuse, spiky, and multipath) is implemented and tested. The discrete wavelet transform (DWT) technique is integrated with the feed-forward neural network (FFNN) model to construct the identifier. First, the multi-resolution analysis (MRA) technique of the DWT and Parseval's theorem are employed to extract the characteristic features of the transients at different resolution levels. Second, the FFNN classifies the transients according to the extracted features. The proposed methodology can greatly reduce the number of transient features without losing their original properties, so less memory space and computing time are required. Various transient events were tested; the results show that the identifier can detect whistler transients efficiently. Keywords: Discrete wavelet transform, Multi-resolution analysis, Parseval's theorem, Feed-forward neural network
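A minimal sketch of the feature-extraction step described above, under the assumption of a Haar multi-resolution analysis: by Parseval's theorem the subband energies partition the signal energy, and the normalized energy vector separates impulsive from smooth events (the two test signals here are invented stand-ins for transients):

```python
import numpy as np

def haar_dwt(x):
    return ((x[0::2] + x[1::2]) / np.sqrt(2.0),
            (x[0::2] - x[1::2]) / np.sqrt(2.0))

def energy_features(x, levels=5):
    """Relative energy per MRA level; by Parseval the raw energies
    sum to the total signal energy for an orthonormal wavelet."""
    a, feats = np.asarray(x, float), []
    for _ in range(levels):
        a, d = haar_dwt(a)
        feats.append(np.sum(d ** 2))      # detail energy, fine to coarse
    feats.append(np.sum(a ** 2))          # final approximation energy
    feats = np.array(feats)
    return feats / feats.sum()            # normalised feature vector

n = 1024
smooth = np.sin(np.linspace(0, 2 * np.pi, n))   # slow, "dispersive-like" event
spiky = np.zeros(n); spiky[n // 2] = 1.0        # impulsive, "spiky-like" event
f_smooth, f_spiky = energy_features(smooth), energy_features(spiky)
# Impulses concentrate energy at fine scales, smooth signals at coarse ones.
print(f_spiky[0] > f_smooth[0], f_smooth[-1] > f_spiky[-1])
```

In the paper these compact energy vectors, rather than the raw waveforms, are what the FFNN consumes, which is the source of the memory and compute savings.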
A wavelet-based evaluation of time-varying long memory of equity markets: A paradigm in crisis
NASA Astrophysics Data System (ADS)
Tan, Pei P.; Chin, Cheong W.; Galagedera, Don U. A.
2014-09-01
This study uses a wavelet-based method to investigate the dynamics of long memory in the returns and volatility of equity markets. In a sample of five developed and five emerging markets, we find that the daily return series from January 1988 to June 2013 may be considered a mix of weak long-memory and mean-reverting processes. In the case of return volatility, there is evidence of long memory, which is stronger in emerging markets than in developed markets. We find that although the long-memory parameter may vary during crisis periods (the 1997 Asian financial crisis, the 2001 US recession, and the 2008 subprime crisis), the direction of change may not be consistent across all equity markets. The degree of return predictability is likely to diminish during crisis periods. Robustness of the results is checked with the de-trended fluctuation analysis approach.
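One common wavelet-based route to long memory (a generic sketch, not necessarily the estimator used in the study) reads the Hurst exponent off the slope of log detail variance versus decomposition level; for fractional Gaussian noise that slope is 2H - 1, so i.i.d. returns with no long memory should give H near 0.5:

```python
import numpy as np

def haar_dwt(x):
    return ((x[0::2] + x[1::2]) / np.sqrt(2.0),
            (x[0::2] - x[1::2]) / np.sqrt(2.0))

def hurst_wavelet(x, levels=6):
    """Estimate H from the slope of log2 detail variance vs level j;
    for fractional Gaussian noise the slope equals 2H - 1."""
    a, logvar = np.asarray(x, float), []
    for _ in range(levels):
        a, d = haar_dwt(a)
        logvar.append(np.log2(np.var(d)))
    slope = np.polyfit(np.arange(1, levels + 1), logvar, 1)[0]
    return (slope + 1.0) / 2.0

rng = np.random.default_rng(6)
returns = rng.standard_normal(2 ** 14)   # i.i.d. mock returns: no long memory
H = hurst_wavelet(returns)
print(abs(H - 0.5) < 0.1)                # white noise has H = 0.5
```

Applying the same slope estimate to absolute or squared returns is the usual way such studies quantify long memory in volatility.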
NSDL National Science Digital Library
2010-07-12
This page introduces students to the concept of density by presenting its definition, formula, and two blocks representing materials of different densities. Students are given the mass and volume of each block and asked to calculate the density. Their answers are then compared against a table of densities of common objects (air, wood, gold, etc.) and students must determine, using the density of the blocks, which substance makes up each block.
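A minimal worked example of the exercise described above (the reference densities below are approximate illustrative values, not the page's exact table):

```python
# density = mass / volume; values in g/cm^3, table entries are approximate
reference = {"air": 0.0013, "wood": 0.85, "water": 1.00, "gold": 19.3}

def identify(mass_g, volume_cm3, table=reference):
    """Compute a block's density and match it to the nearest table entry."""
    rho = mass_g / volume_cm3
    name = min(table, key=lambda k: abs(table[k] - rho))
    return rho, name

rho, name = identify(96.5, 5.0)
print(rho, name)   # 19.3 gold
```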
Boersen, M.R.; Clark, J.D.; King, T.L.
2003-01-01
The Recovery Plan for the federally threatened Louisiana black bear (Ursus americanus luteolus) mandates that remnant populations be estimated and monitored. In 1999 we obtained genetic material with barbed-wire hair traps to estimate bear population size and genetic diversity at the 329-km2 Tensas River Tract, Louisiana. We constructed and monitored 122 hair traps, which produced 1,939 hair samples. Of those, we randomly selected 116 subsamples for genetic analysis and used up to 12 microsatellite DNA markers to obtain multilocus genotypes for 58 individuals. We used Program CAPTURE to compute estimates of population size using multiple mark-recapture models. The area of study was almost entirely circumscribed by agricultural land, thus the population was geographically closed. Also, study-area boundaries were biologically discrete, enabling us to accurately estimate population density. Using model Chao Mh to account for possible effects of individual heterogeneity in capture probabilities, we estimated the population size to be 119 (SE=29.4) bears, or 0.36 bears/km2. We were forced to examine a substantial number of loci to differentiate between some individuals because of low genetic variation. Despite the probable introduction of genes from Minnesota bears in the 1960s, the isolated population at Tensas exhibited characteristics consistent with inbreeding and genetic drift. Consequently, the effective population size at Tensas may be as few as 32, which warrants continued monitoring or possibly genetic augmentation.
On the method of logarithmic cumulants for parametric probability density function estimation.
Krylov, Vladimir A; Moser, Gabriele; Serpico, Sebastiano B; Zerubia, Josiane
2013-10-01
Parameter estimation of probability density functions is one of the major steps in the area of statistical image and signal processing. In this paper we explore several properties and limitations of the recently proposed method of logarithmic cumulants (MoLC) parameter estimation approach, which is an alternative to the classical maximum likelihood (ML) and method of moments (MoM) approaches. We derive the general sufficient condition for a strong consistency of the MoLC estimates, which represents an important asymptotic property of any statistical estimator. This result enables the demonstration of the strong consistency of MoLC estimates for a selection of widely used distribution families originating from (but not restricted to) synthetic aperture radar image processing. We then derive the analytical conditions of applicability of MoLC to samples for the distribution families in our selection. Finally, we conduct various synthetic and real data experiments to assess the comparative properties, applicability and small sample performance of MoLC, notably for the generalized gamma and K families of distributions. Supervised image classification experiments are considered for medical ultrasound and remote-sensing SAR imagery. The obtained results suggest that MoLC is a feasible and computationally fast yet not universally applicable alternative to MoM. MoLC becomes especially useful when the direct ML approach turns out to be infeasible. PMID:23799694
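For the gamma family the MoLC equations take a particularly clean form: the sample mean and variance of the log-data are matched to psi(k) + ln(theta) and psi'(k) respectively. The sketch below (a generic illustration, not the paper's SAR-specific pipeline) inverts the trigamma equation numerically:

```python
import numpy as np
from scipy.special import psi, polygamma
from scipy.optimize import brentq

def molc_gamma(x):
    """MoLC for Gamma(k, theta): match the first two log-cumulants,
    E[ln X] = psi(k) + ln(theta) and Var[ln X] = psi'(k)."""
    lx = np.log(x)
    c1, c2 = lx.mean(), lx.var()
    # psi'(k) is strictly decreasing, so the root is unique.
    k = brentq(lambda kk: float(polygamma(1, kk)) - c2, 1e-4, 1e6)
    theta = float(np.exp(c1 - psi(k)))
    return k, theta

rng = np.random.default_rng(1)
x = rng.gamma(shape=2.0, scale=3.0, size=200_000)
k_hat, theta_hat = molc_gamma(x)
print(abs(k_hat - 2.0) < 0.1, abs(theta_hat - 3.0) < 0.1)
```

The appeal over ML is visible even in this toy: no iterative likelihood maximization is needed, only one scalar root-find on a monotone function.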
Kernel density estimation-based real-time prediction for respiratory motion
NASA Astrophysics Data System (ADS)
Ruan, Dan
2010-03-01
Effective delivery of adaptive radiotherapy requires locating the target with high precision in real time. System latency caused by data acquisition, streaming, processing and delivery control necessitates prediction. Prediction is particularly challenging for highly mobile targets such as thoracic and abdominal tumors undergoing respiration-induced motion. The complexity of the respiratory motion makes it difficult to build and justify explicit models. In this study, we honor the intrinsic uncertainties in respiratory motion and propose a statistical treatment of the prediction problem. Instead of asking for a deterministic covariate-response map and a unique estimate value for future target position, we aim to obtain a distribution of the future target position (response variable) conditioned on the observed historical sample values (covariate variable). The key idea is to estimate the joint probability density function (pdf) of the covariate and response variables using an efficient kernel density estimation method. Then, the problem of identifying the distribution of the future target position reduces to identifying the section of the joint pdf corresponding to the observed covariate. Subsequently, estimators are derived based on this estimated conditional distribution. This probabilistic perspective has some distinctive advantages over existing deterministic schemes: (1) it is compatible with potentially inconsistent training samples, i.e., when close covariate variables correspond to dramatically different response values; (2) it is not restricted by any prior structural assumption on the map between the covariate and the response; (3) the two-stage setup allows much freedom in choosing statistical estimates and provides a full nonparametric description of the uncertainty for the resulting estimate.
We evaluated the prediction performance on ten patient RPM traces, using the root mean squared difference between the prediction and the observed value normalized by the standard deviation of the observed data as the error metric. Furthermore, we compared the proposed method with two benchmark methods: most recent sample and an adaptive linear filter. The kernel density estimation-based prediction results demonstrate universally significant improvement over the alternatives and are especially valuable for long lookahead time, when the alternative methods fail to produce useful predictions.
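The conditional-distribution idea can be sketched in its simplest form: estimate the joint covariate-response density with Gaussian kernels and read off the conditional mean, which reduces to the Nadaraya-Watson estimator. The synthetic quasi-periodic trace below stands in for a respiratory signal (an assumption of this illustration, not the patients' RPM data):

```python
import numpy as np

def nw_predict(x_train, y_train, x_query, h):
    """Conditional-mean prediction from a Gaussian kernel density estimate
    of the joint (covariate, response) pdf (Nadaraya-Watson form)."""
    w = np.exp(-0.5 * ((x_query[:, None] - x_train[None, :]) / h) ** 2)
    return (w * y_train).sum(axis=1) / w.sum(axis=1)

rng = np.random.default_rng(2)
t = rng.uniform(0.0, 2.0 * np.pi, 2000)              # observed covariates
y = np.sin(t) + 0.1 * rng.standard_normal(t.size)    # noisy future positions
q = np.array([np.pi / 2, np.pi, 1.5 * np.pi])        # query covariates
pred = nw_predict(t, y, q, h=0.2)
print(np.allclose(pred, np.sin(q), atol=0.1))
```

The full method retains the whole conditional density rather than only its mean, which is what supplies the uncertainty description the abstract emphasizes.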
The Effects of Surfactants on the Estimation of Bacterial Density in Petroleum Samples
NASA Astrophysics Data System (ADS)
Luna, Aderval Severino; da Costa, Antonio Carlos Augusto; Gonçalves, Márcia Monteiro Machado; de Almeida, Kelly Yaeko Miyashiro
The effect of the surfactants polyoxyethylene monostearate (Tween 60), polyoxyethylene monooleate (Tween 80), cetyl trimethyl ammonium bromide (CTAB), and sodium dodecyl sulfate (SDS) on the estimation of bacterial density (sulfate-reducing bacteria [SRB] and general anaerobic bacteria [GAnB]) was examined in petroleum samples. Three different compositions of oil and water were selected to be representative of real samples: the first contained a high content of oil, the second a medium content, and the last a low content. The most probable number (MPN) was used to estimate the bacterial density. The results showed that the addition of surfactants did not improve SRB quantification for the high or medium oil content petroleum samples. On the other hand, Tween 60 and Tween 80 promoted a significant increase in GAnB quantification at concentrations of 0.01% and 0.03% m/v, respectively. CTAB increased SRB and GAnB estimation for the sample with a low oil content at 0.00005% and 0.0001% m/v, respectively.
NASA Astrophysics Data System (ADS)
Waters, Daniel F.; Cadou, Christopher P.
2014-02-01
A unique requirement of underwater vehicles' power/energy systems is that they remain neutrally buoyant over the course of a mission. Previous work published in the Journal of Power Sources reported gross, as opposed to neutrally buoyant, energy densities of an integrated solid oxide fuel cell/Rankine-cycle power system based on the exothermic reaction of aluminum with seawater. This paper corrects this shortcoming by presenting a model for estimating system mass and using it to update the key findings of the original paper in the context of the neutral buoyancy requirement. It also presents an expanded sensitivity analysis to illustrate the influence of various design and modeling assumptions. While energy density is very sensitive to turbine efficiency (sensitivity coefficient in excess of 0.60), it is relatively insensitive to all other major design parameters (sensitivity coefficients < 0.15) such as compressor efficiency, inlet water temperature, and scaling methodology. The neutral buoyancy requirement introduces a significant (~15%) energy density penalty, but overall the system still appears to offer a five- to eight-fold improvement in energy density (i.e., vehicle range/endurance) over present battery-based technologies.
Estimation of dislocation density from precession electron diffraction data using the Nye tensor.
Leff, A C; Weinberger, C R; Taheri, M L
2015-06-01
The Nye tensor offers a means to estimate the geometrically necessary dislocation density of a crystalline sample based on measurements of the orientation changes within individual crystal grains. In this paper, the Nye tensor theory is applied to precession electron diffraction automated crystallographic orientation mapping (PED-ACOM) data acquired using a transmission electron microscope (TEM). The resulting dislocation density values are mapped in order to visualize the dislocation structures present in a quantitative manner. These density maps are compared with other related methods of approximating local strain dependencies in dislocation-based microstructural transitions from orientation data. The effect of acquisition parameters on density measurements is examined. By decreasing the step size and spot size during data acquisition, an increasing fraction of the dislocation content becomes accessible. Finally, the method described herein is applied to the measurement of dislocation emission during in situ annealing of Cu in TEM in order to demonstrate the utility of the technique for characterizing microstructural dynamics. PMID:25697461
Estimating Absolute Salinity (SA) in the World's Oceans Using Density and Composition
NASA Astrophysics Data System (ADS)
Woosley, R. J.; Huang, F.; Millero, F. J., Jr.
2014-12-01
The practical salinity (Sp), which is determined by the relationship of conductivity to the known proportions of the major components of seawater, and reference salinity (SR = (35.16504/35)*Sp), do not account for variations in physical properties such as density and enthalpy. Trace and minor components of seawater, such as nutrients or inorganic carbon and total alkalinity affect these properties and contribute to the absolute salinity (SA). This limitation has been recognized and several studies have been made to estimate the effect of these compositional changes on the conductivity-density relationship. These studies have been limited in number and geographic scope. Here, we combine the measurements of previous studies with new measurements for a total of 2,857 conductivity-density measurements covering all of the world's major oceans to derive empirical equations for the effect of silica and total alkalinity on the density and absolute salinity of the global oceans and recommend an equation applicable to most of the world oceans. The potential impact on salinity as a result of uptake of anthropogenic CO2 is also discussed.
NASA Astrophysics Data System (ADS)
Übeyli, Mustafa; Übeyli, Elif Derya
2008-12-01
Artificial neural networks (ANNs) have recently been utilized in nuclear technology applications since they are fast, precise, and flexible vehicles for modeling, simulation, and optimization. This paper presents a new approach based on multilayer perceptron neural networks (MLPNNs) for the estimation of some important neutronic parameters (net 239Pu production, tritium breeding ratio, cumulative fissile fuel enrichment, and fission rate) of a high power density fusion-fission (hybrid) reactor using light water reactor (LWR) spent fuel. The results obtained by the MLPNNs were compared with those found using the Scale 4.3 code. The comparison showed that MLPNNs trained with the least mean squares (LMS) algorithm can provide an accurate computation of the main neutronic parameters for the high power density reactor.
Bayesian estimate of the zero-density frequency of a Cs fountain
Calonico, D; Lorini, L; Mana, G
2009-01-01
Caesium fountain frequency standards realize the second in the International System of Units with a relative uncertainty approaching 10^-16. Among the main contributions to the accuracy budget, cold collisions play an important role because of the atomic density shift of the reference atomic transition. This paper describes an application of Bayesian analysis of the clock frequency to estimate the density shift and shows how the Bayes theorem allows a priori knowledge of the sign of the collisional coefficient to be rigorously embedded into the analysis. As an application, data from the INRIM caesium fountain are used and the Bayesian and orthodox analyses are compared. The Bayes theorem allows the orthodox uncertainty to be reduced by 28% and proves to be an important tool in primary frequency metrology.
A maximum volume density estimator generalised over a proper motion limited sample
Lam, M C; Hambly, N C
2015-01-01
The traditional Schmidt density estimator has been proven to be unbiased and effective for a magnitude limited sample. Previous efforts have generalised it to populations with non-uniform density and to proper motion limited cases. This work shows that assumptions which once served well for a proper motion limited sample are no longer sufficient to cope with modern data. Populations whose kinematics differ most from the Local Standard of Rest are the most severely affected. We show that this systematic bias can be removed by treating the discovery fraction as inseparable from the generalised maximum volume integrand. The treatment can be applied to any proper motion limited sample with good knowledge of the kinematics. This work demonstrates the method through application to a mock catalogue of a white-dwarf-only solar neighbourhood under various scenarios, compared against the traditional treatment, using a survey with Pan-STARRS-like characteristics.
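The classical estimator referred to above can be sketched in its simplest magnitude-limited form, where every object in a uniform mock shares the same maximum volume; this toy case deliberately omits the proper-motion discovery fraction that the paper shows must be folded into the integrand:

```python
import numpy as np

def schmidt_density(vmax):
    """Classical 1/Vmax (Schmidt) estimator of space density."""
    return float(np.sum(1.0 / np.asarray(vmax, float)))

# Mock: uniform density n0 inside a magnitude-limited sphere; in this
# simplest case every object shares the same maximum volume Vmax = V.
rng = np.random.default_rng(3)
n0, r_lim = 0.005, 100.0                  # density in pc^-3, limit in pc
V = 4.0 / 3.0 * np.pi * r_lim ** 3
n_obj = rng.poisson(n0 * V)               # Poisson realisation of the count
est = schmidt_density(np.full(n_obj, V))
print(abs(est - n0) / n0 < 0.05)          # recovers the input density
```

In realistic samples each object carries its own Vmax, set jointly by the magnitude and proper-motion limits, and the paper's point is that the kinematic discovery fraction cannot be applied as a separate multiplicative correction.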
Simple method to estimate MOS oxide-trap, interface-trap, and border-trap densities
Fleetwood, D.M.; Shaneyfelt, M.R.; Schwank, J.R.
1993-09-01
Recent work has shown that near-interfacial oxide traps that communicate with the underlying Si ("border traps") can play a significant role in determining MOS radiation response and long-term reliability. Thermally stimulated current, 1/f noise, and frequency-dependent charge-pumping measurements have been used to estimate border-trap densities in MOS structures. These methods all require high-precision, low-noise measurements that are often difficult to perform and interpret. In this summary, we describe a new dual-transistor method to separate bulk-oxide-trap, interface-trap, and border-trap densities in irradiated MOS transistors that requires only standard threshold-voltage and high-frequency charge-pumping measurements.
Efficient 3D movement-based kernel density estimator and application to wildlife ecology
Tracey-PR, Jeff; Sheppard, James K.; Lockwood, Glenn K.; Chourasia, Amit; Tatineni, Mahidhar; Fisher, Robert N.; Sinkovits, Robert S.
2014-01-01
We describe an efficient implementation of a 3D movement-based kernel density estimator for determining animal space use from discrete GPS measurements. This new method provides more accurate results, particularly for species that make large excursions in the vertical dimension. The downside of this approach is that it is much more computationally expensive than simpler, lower-dimensional models. Through a combination of code restructuring, parallelization and performance optimization, we were able to reduce the time to solution by up to a factor of 1,000, thereby greatly improving the applicability of the method.
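A serial, unoptimized sketch of the underlying computation (using SciPy's generic Gaussian KDE rather than the authors' movement-based estimator or their parallel implementation): the estimated density at a heavily used location should exceed that at a rarely visited one. The mock GPS fixes are invented:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(4)
# Mock 3D GPS fixes (x, y, altitude) clustered around a core-use area.
pts = rng.normal(0.0, 1.0, size=(3, 500))
kde = gaussian_kde(pts)                      # 3D Gaussian product kernel
inside = float(kde([[0.0], [0.0], [0.0]]))   # density at the core area
outside = float(kde([[5.0], [5.0], [5.0]]))  # density far from any fix
print(inside > outside)
```

Evaluating such a kernel sum on a fine 3D grid is exactly the cost that motivates the paper's parallelization effort: the work scales with (number of fixes) x (number of grid voxels).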
NASA Technical Reports Server (NTRS)
Depaola, B. D.; Marcum, S. D.; Wrench, H. K.; Whitten, B. L.; Wells, W. E.
1979-01-01
Because measurements of electron temperature and electron density in nuclear-pumped plasmas are very difficult, a method for estimating these quantities is very useful. This paper describes such a method, based on a rate-equation analysis of the ionized species in the plasma and the electron energy balance. In addition to the ionized species, certain neutral species must also be calculated. Examples are given for pure helium and a helium-argon mixture. In the He-Ar case, He(+), He2(+), He(2 3S), Ar(+), Ar2(+), and excited Ar are evaluated.
Validation tests of an improved kernel density estimation method for identifying disease clusters
Cai, Qiang [University of Iowa; Rushton, Gerald [University of Iowa; Bhaduri, Budhendra L [ORNL
2011-01-01
The spatial filter method, which belongs to the class of kernel density estimation methods, has been used to make morbidity and mortality maps in several recent studies. We propose improvements in the method that include a spatial basis of support designed to give a constant standard error for the standardized mortality/morbidity rate; a stair-case weight method for weighting observations to reduce estimation bias; and a method for selecting parameters to control three measures of performance of the method: sensitivity, specificity and false discovery rate. We test the performance of the method using Monte Carlo simulations of hypothetical disease clusters over a test area of four counties in Iowa. The simulations include different types of spatial disease patterns and high resolution population distribution data. Results confirm that the new features of the spatial filter method do substantially improve its performance in realistic situations comparable to those where the method is likely to be used.
NASA Astrophysics Data System (ADS)
Edwards, Matthew C.; Meyer, Renate; Christensen, Nelson
2015-09-01
The standard noise model in gravitational wave (GW) data analysis assumes detector noise is stationary and Gaussian distributed, with a known power spectral density (PSD) that is usually estimated using clean off-source data. Real GW data often depart from these assumptions, and misspecified parametric models of the PSD could result in misleading inferences. We propose a Bayesian semiparametric approach to improve this. We use a nonparametric Bernstein polynomial prior on the PSD, with weights attained via a Dirichlet process distribution, and update this using the Whittle likelihood. Posterior samples are obtained using a blocked Metropolis-within-Gibbs sampler. We simultaneously estimate the reconstruction parameters of a rotating core collapse supernova GW burst that has been embedded in simulated Advanced LIGO noise. We also discuss an approach to deal with nonstationary data by breaking longer data streams into smaller and locally stationary components.
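The Whittle likelihood used to update the PSD prior can be written down compactly; the sketch below evaluates it for a flat PSD on simulated white noise (a toy stand-in for detector noise, not the Bernstein-Dirichlet machinery of the paper). Analytically, a flat PSD is best supported at the mean periodogram level, so that choice must beat a misspecified level:

```python
import numpy as np

def whittle_loglik(x, psd):
    """Whittle log-likelihood: -sum over positive Fourier frequencies of
    log S(f_j) + I(f_j) / S(f_j), with I the periodogram of x."""
    n = x.size
    I = np.abs(np.fft.rfft(x)) ** 2 / n                   # periodogram
    I, S = I[1:n // 2], np.asarray(psd, float)[1:n // 2]  # drop DC/Nyquist
    return float(-np.sum(np.log(S) + I / S))

rng = np.random.default_rng(5)
x = rng.standard_normal(4096)                 # stationary Gaussian noise
m = x.size // 2 + 1
I_full = np.abs(np.fft.rfft(x)) ** 2 / x.size
level = I_full[1:x.size // 2].mean()          # ML flat-PSD level
print(whittle_loglik(x, np.full(m, level)) >
      whittle_loglik(x, np.full(m, 2.0 * level)))
```

In the paper this likelihood is evaluated with S(f) parameterized by the Bernstein polynomial mixture, and the Metropolis-within-Gibbs sampler explores the mixture weights.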
ANNz2 - Photometric redshift and probability density function estimation using machine-learning
NASA Astrophysics Data System (ADS)
Sadeh, Iftach
2014-05-01
Large photometric galaxy surveys allow the study of questions at the forefront of science, such as the nature of dark energy. The success of such surveys depends on the ability to measure the photometric redshifts of objects (photo-zs), based on limited spectral data. A new major version of the public photo-z estimation software, ANNz, is presented here. The new code incorporates several machine-learning methods, such as artificial neural networks and boosted decision/regression trees, which are all used in concert. The objective of the algorithm is to dynamically optimize the performance of the photo-z estimation, and to properly derive the associated uncertainties. In addition to single-value solutions, the new code also generates full probability density functions in two independent ways.
Estimates of Leaf Vein Density Are Scale Dependent
Price, Charles A.; Munro, Peter R.T.; Weitz, Joshua S.
2014-01-01
Leaf vein density (LVD) has garnered considerable attention of late, with numerous studies linking it to the physiology, ecology, and evolution of land plants. Despite this increased attention, little consideration has been given to the effects of measurement methods on estimation of LVD. Here, we focus on the relationship between measurement methods and estimates of LVD. We examine the dependence of LVD on magnification, field of view (FOV), and image resolution. We first show that estimates of LVD increase with increasing image magnification and resolution. We then demonstrate that estimates of LVD are higher, with higher variance, at small FOV, approaching asymptotic values as the FOV increases. We demonstrate that these effects arise due to three primary factors: (1) the tradeoff between FOV and magnification; (2) geometric effects of lattices at small scales; and (3) the hierarchical nature of leaf vein networks. Our results help to explain differences in previously published studies and highlight the importance of using consistent magnification and scale, when possible, when comparing LVD and other quantitative measures of venation structure across leaves. PMID:24259686
Langlois, Timothy J.; Fitzpatrick, Benjamin R.; Fairclough, David V.; Wakefield, Corey B.; Hesp, S. Alex; McLean, Dianne L.; Harvey, Euan S.; Meeuwig, Jessica J.
2012-01-01
Age structure data are essential for single species stock assessments, but length-frequency data can provide complementary information. In south-western Australia, the majority of these data for exploited species are derived from line caught fish. However, baited remote underwater stereo-video systems (stereo-BRUVS) surveys have also been found to provide accurate length measurements. Given that line fishing tends to be biased towards larger fish, we predicted that stereo-BRUVS would yield length-frequency data with a smaller mean length, skewed towards smaller fish, compared with those collected by fisheries-independent line fishing. To assess the biases and selectivity of stereo-BRUVS and line fishing we compared the length-frequencies obtained for three commonly fished species, using a novel application of the Kernel Density Estimate (KDE) method and the established Kolmogorov-Smirnov (KS) test. The shape of the length-frequency distribution obtained for the labrid Choerodon rubescens by stereo-BRUVS and line fishing did not differ significantly, but, as predicted, the mean length estimated from stereo-BRUVS was 17% smaller. Contrary to our predictions, the mean length and shape of the length-frequency distribution for the epinephelid Epinephelides armatus did not differ significantly between line fishing and stereo-BRUVS. For the sparid Pagrus auratus, the length frequency distribution derived from the stereo-BRUVS method was bi-modal, while that from line fishing was uni-modal. However, the location of the first modal length class for P. auratus observed by each sampling method was similar. No differences were found between the results of the KS and KDE tests; however, KDE provided a data-driven method for approximating length-frequency data by a probability function and a useful way of describing and testing any differences between length-frequency samples. This study found the overall size selectivity of line fishing and stereo-BRUVS to be unexpectedly similar.
PMID:23209547
Accuracy of estimated geometric parameters of trees depending on the LIDAR data density
NASA Astrophysics Data System (ADS)
Hadas, Edyta; Estornell, Javier
2015-04-01
The estimation of dendrometric variables has become important for spatial planning and agriculture projects. Because classical field measurements are time consuming and inefficient, airborne LiDAR (Light Detection and Ranging) measurements are successfully used in this area. Point clouds acquired for relatively large areas allow the structure of forestry and agricultural areas and the geometrical parameters of individual trees to be determined. In this study two LiDAR datasets with different densities were used: a sparse one with an average density of 0.5 pt/m2 and a dense one with 4 pt/m2. 25 olive trees were selected and field measurements of tree height, crown bottom height, crown diameters and tree position were performed. To determine the tree geometric parameters from the LiDAR data, two independent strategies were developed that utilize the ArcGIS, ENVI and FUSION software. Strategy a) was based on slicing the canopy surface model (CSM) at 0.5 m height, and in strategy b) minimum bounding polygons were created as the tree crown area around each detected tree centroid. The individual steps were designed so that they can also be applied in automatic processing. To assess the performance of each strategy with both point clouds, the differences between the measured and estimated geometric parameters of the trees were analyzed. As expected, tree height was underestimated with both strategies (RMSE=0.7 m for the dense dataset and RMSE=1.5 m for the sparse one) and crown bottom height was overestimated (RMSE=0.4 m and RMSE=0.7 m for the dense and sparse datasets, respectively). For the dense dataset, strategy b) determined crown diameters more accurately (RMSE=0.5 m) than strategy a) (RMSE=0.8 m); for the sparse dataset, only strategy a) proved adequate (RMSE=1.0 m). The dependence of each strategy's accuracy on tree size was also examined.
For the dense dataset, the larger the tree (in height or longer crown diameter), the larger the error in estimated tree height; for the sparse dataset, the larger the tree, the larger the error in estimated crown bottom height. Finally, the spatial distribution of points inside the tree crown was analyzed by creating a normalized tree crown, which confirmed a high concentration of LiDAR points in the central part of the crown.
NSDL National Science Digital Library
Day, Martha Marie
This web page introduces the concepts of density and buoyancy. The discovery in ancient Greece by Archimedes is described. The densities of various materials are given and temperature effects introduced. Links are provided to news and other resources related to mass density. This is part of the Vision Learning collection of short online modules covering topics in a broad range of science and math topics.
Fiora, Alessandro; Cescatti, Alessandro
2006-09-01
Daily and seasonal patterns in radial distribution of sap flux density were monitored in six trees differing in social position in a mixed coniferous stand dominated by silver fir (Abies alba Miller) and Norway spruce (Picea abies (L.) Karst) in the Alps of northeastern Italy. Radial distribution of sap flux was measured with arrays of 1-cm-long Granier probes. The radial profiles were either Gaussian or decreased monotonically toward the tree center, and seemed to be related to social position and crown distribution of the trees. The ratio between sap flux estimated with the most external sensor and the mean flux, weighted with the corresponding annulus areas, was used as a correction factor (CF) to express diurnal and seasonal radial variation in sap flow. During sunny days, the diurnal radial profile of sap flux changed with time and accumulated photosynthetic active radiation (PAR), with an increasing contribution of sap flux in the inner sapwood during the day. Seasonally, the contribution of sap flux in the inner xylem increased with daily cumulative PAR and the variation of CF was proportional to the tree diameter, ranging from 29% for suppressed trees up to 300% for dominant trees. Two models were developed, relating CF with PAR and tree diameter at breast height (DBH), to correct daily and seasonal estimates of whole-tree and stand sap flow obtained by assuming uniform sap flux density over the sapwood. If the variability in the radial profile of sap flux density was not accounted for, total stand transpiration would be overestimated by 32% during sunny days and 40% for the entire season. PMID:16740497
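The correction factor described above is the ratio of the outermost-sensor flux to the annulus-area-weighted mean flux over the sapwood. A small sketch with invented probe readings (not the study's data), for 1-cm annuli measured from the outer sapwood inward:

```python
import numpy as np

def correction_factor(flux, radii):
    """CF = outermost flux / area-weighted mean flux over sapwood annuli.
    radii are annulus boundaries from the cambium inward, in cm."""
    r = np.asarray(radii, float)
    areas = np.pi * (r[:-1] ** 2 - r[1:] ** 2)   # area of each annulus
    mean_flux = np.sum(flux * areas) / areas.sum()
    return flux[0] / mean_flux

# Gaussian-like radial profile measured with 1-cm probe segments (invented).
flux = np.array([30.0, 35.0, 25.0, 10.0])        # g m^-2 s^-1 per annulus
radii = np.array([10.0, 9.0, 8.0, 7.0, 6.0])     # boundaries in cm
cf = correction_factor(flux, radii)
print(cf > 1.0)   # the outer sensor overestimates the weighted mean here
```

Assuming a uniform profile (CF = 1) when the true profile peaks away from the outer sensor is exactly the error that, scaled to the stand, produced the 32-40% transpiration overestimates reported above.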
Yang, Shanshan; Zheng, Fang; Luo, Xin; Cai, Suxian; Wu, Yunfeng; Liu, Kaizhi; Wu, Meihong; Chen, Jian; Krishnan, Sridhar
2014-01-01
Detection of dysphonia is useful for monitoring the progression of phonatory impairment for patients with Parkinson's disease (PD), and also helps assess the disease severity. This paper describes the statistical pattern analysis methods to study different vocal measurements of sustained phonations. The feature dimension reduction procedure was implemented by using the sequential forward selection (SFS) and kernel principal component analysis (KPCA) methods. Four selected vocal measures were projected by the KPCA onto the bivariate feature space, in which the class-conditional feature densities can be approximated with the nonparametric kernel density estimation technique. In the vocal pattern classification experiments, Fisher's linear discriminant analysis (FLDA) was applied to perform the linear classification of voice records for healthy control subjects and PD patients, and the maximum a posteriori (MAP) decision rule and support vector machine (SVM) with radial basis function kernels were employed for the nonlinear classification tasks. Based on the KPCA-mapped feature densities, the MAP classifier successfully distinguished 91.8% of voice records, with a sensitivity rate of 0.986, a specificity rate of 0.708, and an area value of 0.94 under the receiver operating characteristic (ROC) curve. The diagnostic performance provided by the MAP classifier was superior to those of the FLDA and SVM classifiers. In addition, the classification results indicated that dysphonia detection is insensitive to gender, and that the sustained phonations of PD patients with minimal functional disability are more difficult to identify correctly. PMID:24586406
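The kernel-density-plus-MAP step described above can be sketched in miniature. The code below is a stand-in, assuming a single 1-D feature and equal priors rather than the paper's KPCA-mapped bivariate space; the sample values are invented.

```python
import numpy as np

def gaussian_kde(samples, h):
    """Return a 1-D Gaussian kernel density estimate with bandwidth h."""
    samples = np.asarray(samples, dtype=float)
    def pdf(x):
        z = (np.atleast_1d(x) - samples[:, None]) / h
        return np.exp(-0.5 * z ** 2).mean(axis=0) / (h * np.sqrt(2.0 * np.pi))
    return pdf

def map_classify(x, pdfs, priors):
    """MAP rule: pick the class maximising prior * class-conditional density."""
    scores = [float(p(x)[0]) * w for p, w in zip(pdfs, priors)]
    return int(np.argmax(scores))
```

With well-separated training samples per class, a test point is assigned to the class whose estimated density dominates at that point.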
Wavelet-Based Trend Detection and Estimation Peter F. Craigmile1 and Donald B. Percival2,3.
Percival, Don
…that the expected value of X(t) is zero. There is no commonly accepted precise definition for trend… in the presence of stochastic noise arises in a number of important environmental applications… the information that these coefficients capture agrees well with the notion of trend. The general idea behind…
Bayes and empirical Bayes estimators of abundance and density from spatial capture-recapture data
Dorazio, Robert M.
2013-01-01
In capture-recapture and mark-resight surveys, movements of individuals both within and between sampling periods can alter the susceptibility of individuals to detection over the region of sampling. In these circumstances spatially explicit capture-recapture (SECR) models, which incorporate the observed locations of individuals, allow population density and abundance to be estimated while accounting for differences in detectability of individuals. In this paper I propose two Bayesian SECR models, one for the analysis of recaptures observed in trapping arrays and another for the analysis of recaptures observed in area searches. In formulating these models I used distinct submodels to specify the distribution of individual home-range centers and the observable recaptures associated with these individuals. This separation of ecological and observational processes allowed me to derive a formal connection between Bayes and empirical Bayes estimators of population abundance that has not been established previously. I showed that this connection applies to every Poisson point-process model of SECR data and provides theoretical support for a previously proposed estimator of abundance based on recaptures in trapping arrays. To illustrate results of both classical and Bayesian methods of analysis, I compared Bayes and empirical Bayes estimates of abundance and density using recaptures from simulated and real populations of animals. Real populations included two iconic datasets: recaptures of tigers detected in camera-trap surveys and recaptures of lizards detected in area-search surveys. In the datasets I analyzed, classical and Bayesian methods provided similar – and often identical – inferences, which is not surprising given the sample sizes and the noninformative priors used in the analyses. PMID:24386325
A Bayesian Hierarchical Model for Estimation of Abundance and Spatial Density of Aedes aegypti
Villela, Daniel A. M.; Codeço, Claudia T.; Figueiredo, Felipe; Garcia, Gabriela A.; Maciel-de-Freitas, Rafael; Struchiner, Claudio J.
2015-01-01
Strategies to minimize dengue transmission commonly rely on vector control, which aims to maintain Ae. aegypti density below a theoretical threshold. Mosquito abundance is traditionally estimated from mark-release-recapture (MRR) experiments, which lack proper analysis regarding accurate vector spatial distribution and population density. Recently proposed strategies to control vector-borne diseases involve replacing the susceptible wild population by genetically modified individuals refractory to infection by the pathogen. Accurate measurements of mosquito abundance in time and space are required to optimize the success of such interventions. In this paper, we present a hierarchical probabilistic model for the estimation of population abundance and spatial distribution from typical mosquito MRR experiments, with direct application to the planning of these new control strategies. We perform a Bayesian analysis using the model and data from two MRR experiments performed in a neighborhood of Rio de Janeiro, Brazil, during both low- and high-dengue transmission seasons. The hierarchical model indicates that mosquito spatial distribution is clustered during the winter (0.99 mosquitoes/premise 95% CI: 0.80–1.23) and more homogeneous during the high abundance period (5.2 mosquitoes/premise 95% CI: 4.3–5.9). The hierarchical model also performed better than the commonly used Fisher-Ford method, when using simulated data. The proposed model provides a formal treatment of the sources of uncertainty associated with the estimation of mosquito abundance imposed by the sampling design. Our approach is useful in strategies such as population suppression or the displacement of wild vector populations by refractory Wolbachia-infected mosquitoes, since the invasion dynamics have been shown to follow threshold conditions dictated by mosquito abundance.
The presence of spatially distributed abundance hotspots is also formally addressed under this modeling framework and its knowledge deemed crucial to predict the fate of transmission control strategies based on the replacement of vector populations. PMID:25906323
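For context, the simplest classical MRR abundance estimator that hierarchical models such as this one improve upon fits in a few lines. The sketch below is the Chapman bias-corrected Lincoln-Petersen estimator, not the paper's hierarchical model or the Fisher-Ford method it is benchmarked against; the counts are illustrative.

```python
def chapman_estimate(marked, captured, recaptured):
    """Chapman's bias-corrected Lincoln-Petersen abundance estimate.

    marked     : individuals marked and released (M)
    captured   : size of the second sample (C)
    recaptured : marked individuals found in the second sample (R)
    """
    return (marked + 1) * (captured + 1) / (recaptured + 1) - 1
```

Unlike the hierarchical model, this estimator ignores spatial structure entirely, which is precisely the limitation the paper addresses.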
Comparison of breast percent density estimation from raw versus processed digital mammograms
NASA Astrophysics Data System (ADS)
Li, Diane; Gavenonis, Sara; Conant, Emily; Kontos, Despina
2011-03-01
We compared breast percent density (PD%) measures obtained from raw and post-processed digital mammographic (DM) images. Bilateral raw and post-processed medio-lateral oblique (MLO) images from 81 screening studies were retrospectively analyzed. Image acquisition was performed with a GE Healthcare DS full-field DM system. Image post-processing was performed using the PremiumView™ algorithm (GE Healthcare). Area-based breast PD% was estimated by a radiologist using a semi-automated image thresholding technique (Cumulus, Univ. Toronto). Comparison of breast PD% between raw and post-processed DM images was performed using the Pearson correlation (r), linear regression, and Student's t-test. Intra-reader variability was assessed with a repeat read on the same dataset. Our results show that breast PD% measurements from raw and post-processed DM images have a high correlation (r=0.98, R2=0.95, p<0.001). Paired t-test comparison of breast PD% between the raw and the post-processed images showed a statistically significant difference equal to 1.2% (p = 0.006). Our results suggest that the relatively small magnitude of the absolute difference in PD% between raw and post-processed DM images is unlikely to be clinically significant in breast cancer risk stratification. Therefore, it may be feasible to use post-processed DM images for breast PD% estimation in clinical settings. Since most breast imaging clinics routinely use and store only the post-processed DM images, breast PD% estimation from post-processed data may accelerate the integration of breast density in breast cancer risk assessment models used in clinical practice.
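The two comparison statistics used above, the Pearson correlation and the paired t-test, are easy to compute directly. A minimal numpy sketch with synthetic data, not the study's PD% measurements:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between paired measurements."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float(np.sum(xc * yc) / np.sqrt(np.sum(xc ** 2) * np.sum(yc ** 2)))

def paired_t(x, y):
    """t statistic of the paired-sample t-test (df = n - 1)."""
    d = np.asarray(x, float) - np.asarray(y, float)
    return float(d.mean() / (d.std(ddof=1) / np.sqrt(d.size)))
```

In practice one would compare the t statistic against the t distribution with n-1 degrees of freedom to obtain the p-value.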
Monte Carlo Mesh Tallies based on a Kernel Density Estimator Approach
NASA Astrophysics Data System (ADS)
Dunn, Kerry L.
Kernel density estimators (KDE) are considered for use with the Monte Carlo transport method as an alternative to conventional methods for solving fixed-source problems on arbitrary 3D input meshes. Since conventional methods produce a piecewise constant approximation, their accuracy can suffer when using coarse meshes to approximate neutron flux distributions with strong gradients. Comparatively, KDE mesh tallies produce point estimates independently of the mesh structure, which means that their values will not change even if the mesh is refined. A new KDE integral-track estimator is introduced in this dissertation for use with mesh tallies. Two input parameters are needed, namely a bandwidth and kernel. The bandwidth is equivalent to choosing mesh cell size, whereas the kernel determines the weight of each contribution with respect to its distance from the calculation point being evaluated. The KDE integral-track estimator is shown to produce more accurate results than the original KDE track length estimator, with no performance penalty, and identical or comparable results to conventional methods. However, unlike conventional methods, KDE mesh tallies can use different bandwidths and kernels to improve accuracy without changing the input mesh. This dissertation also explores the accuracy and efficiency of the KDE integral-track mesh tally in detail. Like other KDE applications, accuracy is highly dependent on the choice of bandwidth. This choice becomes even more important when approximating regions of the neutron flux distribution with high curvature, where results are much more sensitive to changes in the bandwidth. Other factors that affect accuracy include properties of the kernel, and the boundary bias effect for calculation points near external geometrical boundaries. Numerous factors also affect efficiency, with the most significant being the concept of the neighborhood region.
The neighborhood region determines how many calculation points are expected to add non-trivial contributions, which depends on node density, bandwidth, kernel, and properties of the track being tallied. The KDE integral-track mesh tally is a promising alternative for solving fixed-source problems on arbitrary 3D input meshes. Producing results at specific points rather than cell-averaged values allows a more accurate representation of the neutron flux distribution to be obtained, especially when coarser meshes are used.
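The core KDE idea behind such mesh tallies can be illustrated in one dimension: point estimates are built from weighted event contributions through a kernel and a bandwidth, independently of any mesh. The sketch below uses an Epanechnikov kernel and is illustrative only; it is not the integral-track estimator itself.

```python
import numpy as np

def epanechnikov(u):
    """Epanechnikov kernel, a common compact-support choice for KDE."""
    return np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u ** 2), 0.0)

def kde_tally(points, samples, weights, h):
    """KDE estimate at fixed calculation `points` from weighted event
    `samples` (e.g. collision sites); the bandwidth h plays the role
    that cell size plays in a conventional mesh tally."""
    points = np.asarray(points, dtype=float)
    samples = np.asarray(samples, dtype=float)
    weights = np.asarray(weights, dtype=float)
    u = (points[:, None] - samples[None, :]) / h
    return (epanechnikov(u) * weights[None, :]).sum(axis=1) / (h * weights.sum())
```

Only samples within one bandwidth of a calculation point contribute, which is the 1-D analogue of the neighborhood region discussed above.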
Giambini, Hugo; Dragomir-Daescu, Dan; Huddleston, Paul M; Camp, Jon J; An, Kai-Nan; Nassr, Ahmad
2015-11-01
Osteoporosis is characterized by bony material loss and decreased bone strength leading to a significant increase in fracture risk. Patient-specific quantitative computed tomography (QCT) finite element (FE) models may be used to predict fracture under physiological loading. Material properties for the FE models used to predict fracture are obtained by converting grayscale values from the CT into volumetric bone mineral density (vBMD) using calibration phantoms. If there are any variations arising from the CT acquisition protocol, vBMD estimation and material property assignment could be affected, in turn affecting fracture risk prediction. We hypothesized that material property assignments may depend on scanning and postprocessing settings, including voltage, current, and reconstruction kernel, and thus potentially affect fracture risk prediction. A rabbit femur and a standard calibration phantom were imaged by QCT using different protocols. Cortical and cancellous regions were segmented, their average Hounsfield unit (HU) values obtained and converted to vBMD. Estimated vBMD for the cortical and cancellous regions were affected by voltage and kernel but not by current. Our study demonstrated that there exists a significant variation in the estimated vBMD values obtained with different scanning acquisitions. In addition, the large noise differences observed utilizing different scanning parameters could have an important negative effect on small subregions containing fewer voxels. PMID:26355694
He, P.; Blaskiewicz, M.; Fischer, W.
2009-01-02
In this report we summarize electron-cloud simulations for the RHIC dipole regions at injection and transition to estimate if scrubbing over practical time scales at injection would reduce the electron cloud density at transition to significantly lower values. The lower electron cloud density at transition will allow for an increase in the ion intensity.
Priebe, Carey E.
Filtered Kernel Density Estimation. David J. Marchette1, Carey E. Priebe2, George W. Rogers1, Jeffrey L. Solka1; 1Naval Surface Warfare Center, Dahlgren Div, B10 Dahlgren, Virginia 22448; 2Department of … either extreme exemplified by equations (1) and (2). Figure 1(a) is an example of the kind of density…
Björn Kjerfve; L. Harold Stevenson; Jeffrey A. Proehl; Thomas H. Chrzanowski; Wiley M. Kitchens
1981-01-01
Estuarine budget studies often suffer from uncertainties of net flux estimates in view of large temporal and spatial variabilities. Optimum spatial measurement density and material flux errors for a reasonably well mixed estuary were estimated by sampling 10 stations from surface to bottom simultaneously every hour for two tidal cycles in a 320-m-wide cross section in North Inlet, South Carolina.
Wavelet-based reconstruction of fossil-fuel CO2 emissions from sparse measurements
NASA Astrophysics Data System (ADS)
McKenna, S. A.; Ray, J.; Yadav, V.; Van Bloemen Waanders, B.; Michalak, A. M.
2012-12-01
We present a method to estimate spatially resolved fossil-fuel CO2 (ffCO2) emissions from sparse measurements of time-varying CO2 concentrations. It is based on wavelet modeling of the strongly non-stationary spatial distribution of ffCO2 emissions. The dimensionality of the wavelet model is first reduced using images of nightlights, which identify regions of human habitation. Since wavelets are a multiresolution basis set, most of the reduction is accomplished by removing fine-scale wavelets in regions with low nightlight radiances. The (reduced) wavelet model of emissions is propagated through an atmospheric transport model (WRF) to predict CO2 concentrations at a handful of measurement sites. The estimation of the wavelet model of emissions, i.e., inferring the wavelet weights, is performed by fitting to observations at the measurement sites. This is done using Stagewise Orthogonal Matching Pursuit (StOMP), which first identifies (and sets to zero) the wavelet coefficients that cannot be estimated from the observations, before estimating the remaining coefficients. This model sparsification and fitting is performed simultaneously, allowing us to explore multiple wavelet models of differing complexity. This technique is borrowed from the field of compressive sensing, and is generally used in image and video processing. We test this approach using synthetic observations generated from emissions from the Vulcan database. Thirty-five sensor sites are chosen over the USA. ffCO2 emissions, averaged over 8-day periods, are estimated at a 1-degree spatial resolution. We find that only about 40% of the wavelets in the emission model can be estimated from the data; however, the mix of coefficients that are estimated changes with time. Total US emissions can be reconstructed with errors of about 5%. The inferred emissions, if aggregated monthly, have a correlation of 0.9 with Vulcan fluxes. We find that the estimated emissions in the Northeast US are the most accurate.
Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
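The sparse-fitting step above belongs to the matching-pursuit family. A minimal sketch of plain Orthogonal Matching Pursuit (simpler than StOMP, which selects several coefficients per stage) conveys the core idea of recovering a sparse coefficient vector from few measurements; the sensing matrix and "wavelet weights" below are synthetic, not the WRF/Vulcan setup.

```python
import numpy as np

def omp(A, y, n_nonzero):
    """Orthogonal Matching Pursuit: greedily select the column of A most
    correlated with the residual, then re-fit all selected columns by
    least squares, until n_nonzero coefficients are chosen."""
    residual, support = y.copy(), []
    x = np.zeros(A.shape[1])
    for _ in range(n_nonzero):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((60, 100))
A /= np.linalg.norm(A, axis=0)          # unit-norm sensing columns
x_true = np.zeros(100)
x_true[[5, 42, 77]] = [1.5, -2.0, 0.8]  # sparse "wavelet weights"
x_hat = omp(A, A @ x_true, n_nonzero=3)
```

With far fewer measurements (60) than unknowns (100), the sparse coefficient vector is still recovered because only three entries are nonzero.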
Estimation of effective scatterer size and number density in near-infrared tomography
NASA Astrophysics Data System (ADS)
Wang, Xin
2007-05-01
Light scattering from tissue originates from the fluctuations in intra-cellular and extra-cellular components, so it is possible that macroscopic scattering spectroscopy could be used to quantify sub-microscopic structures. Both electron microscopy (EM) and optical phase contrast microscopy were used to study the origin of scattering from tissue. EM studies indicate that lipid-bound particle sizes appear to be distributed as a monotonic exponential function, with sub-micron structures dominating the distribution. Given assumptions about the index of refraction change, the shape of the scattering spectrum in the near infrared as measured through bulk tissue is consistent with what would be predicted by Mie theory with these particle size histograms. The relative scattering intensity of breast tissue sections (including 10 normal & 23 abnormal) was studied by phase contrast microscopy. Results show that stroma has higher scattering than epithelium tissue, and fat has the lowest values; tumor epithelium has lower scattering than the normal epithelium; stroma associated with tumor has lower scattering than the normal stroma. Mie theory estimation of scattering spectra was used to estimate effective particle size values, and this was applied retrospectively to normal whole breast spectra accumulated in ongoing clinical exams. The effective sizes ranged between 20 and 1400 nm, which are consistent with subcellular organelles and collagen matrix fibrils discussed previously. This estimation method was also applied to images from cancer regions, with results indicating that the effective scatterer sizes of the region of interest (ROI) are close to those of the background for both the cancer patients and benign patients; for the effective number density, there is a large difference between the ROI and background for the cancer patients, while for the benign patients the values of the ROI are relatively close to those of the background.
Ongoing MRI-guided NIR studies indicated that the fibroglandular tissue had smaller effective scatterer size and larger effective number density than the adipose tissue. The studies in this thesis provide an interpretive approach to estimate average morphological scatter parameters of bulk tissue, through interpretation of diffuse scattering as coming from effective Mie scatterers.
Lee, Sooyeul; Jeong, Ji-Wook; Lee, Jeong Won; Yoo, Done-Sik; Kim, Seunghwan
2006-01-01
Osteoporosis is characterized by an abnormal loss of bone mineral content, which leads to a tendency to non-traumatic bone fractures or to structural deformations of bone. Thus, bone density measurement has been considered one of the most reliable methods to assess bone fracture risk due to osteoporosis. In past decades, X-ray images have been studied in connection with the bone mineral density estimation. However, the estimated bone mineral density from the X-ray image can undergo a relatively large accuracy or precision error. The most relevant origin of the accuracy or precision error may be unstable X-ray image acquisition condition. Thus, we focus our attentions on finding a bone mineral density estimation method that is relatively insensitive to the X-ray image acquisition condition. In this paper, we develop a simple technique for distal radius bone mineral density estimation using the trabecular bone filling factor in the X-ray image and apply the technique to the wrist X-ray images of 20 women. Estimated bone mineral density shows a high linear correlation with dual-energy X-ray absorptiometry (r=0.87). PMID:17945688
Power-Spectral density estimate of the Bloor-Gerrard instability in flows around circular cylinders
NASA Astrophysics Data System (ADS)
Khor, M.; Sheridan, J.; Hourigan, K.
2011-03-01
There have been differences in the literature concerning the power law relationship between the Bloor-Gerrard instability frequency of the separated shear layer from the circular cylinder, the Bénard-von Kármán vortex shedding frequency and the Reynolds number. Most previous experiments have shown a significant degree of scatter in the measurement of the development of the shear layer vortices. Shear layers are known to be sensitive to external influences, which can provide a by-pass transition to saturated growth, thereby camouflaging the fastest growing linear modes. Here, the spatial amplification rates of the shear layer instabilities are calculated using power-spectral density estimates, allowing the fastest growing modes rather than necessarily the largest structures to be determined. This method is found to be robust in determining the fastest growing modes, producing results consistent with the low scatter results of previous experiments.
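A power-spectral density estimate of the kind used above can be sketched with a simple periodogram and a peak search; for noisy shear-layer data a Welch-style averaged estimate would normally be preferred. The sampling rate and test signal below are made up.

```python
import numpy as np

fs = 1000.0                           # sampling rate, Hz (illustrative)
t = np.arange(0, 1.0, 1.0 / fs)
# a 50 Hz "instability" buried in weak noise
x = np.sin(2 * np.pi * 50.0 * t) \
    + 0.1 * np.random.default_rng(1).standard_normal(t.size)

# one-sided power-spectral density via the periodogram
X = np.fft.rfft(x)
psd = (np.abs(X) ** 2) / (fs * x.size)
freqs = np.fft.rfftfreq(x.size, 1.0 / fs)
peak = freqs[np.argmax(psd[1:]) + 1]  # dominant frequency, DC bin skipped
```

The peak of the PSD identifies the dominant spectral mode, which is the quantity tracked when measuring shear-layer instability frequencies against Reynolds number.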
NASA Technical Reports Server (NTRS)
LeMoigne, Jacqueline; Laporte, Nadine; Netanyahuy, Nathan S.; Zukor, Dorothy (Technical Monitor)
2001-01-01
The characterization and the mapping of land cover/land use of forest areas, such as the Central African rainforest, is a very complex task. This complexity is mainly due to the extent of such areas and, as a consequence, to the lack of full and continuous cloud-free coverage of those large regions by one single remote sensing instrument. In order to provide improved vegetation maps of Central Africa and to develop forest monitoring techniques for applications at the local and regional scales, we propose to utilize multi-sensor remote sensing observations coupled with in-situ data. Fusion and clustering of multi-sensor data are the first steps towards the development of such a forest monitoring system. In this paper, we will describe some preliminary experiments involving the fusion of SAR and Landsat image data of the Lope Reserve in Gabon. As in previous fusion studies, our fusion method is wavelet-based. The fusion provides a new image data set which contains more detailed texture features and preserves the large homogeneous regions that are observed by the Thematic Mapper sensor. The fusion step is followed by unsupervised clustering and provides a vegetation map of the area.
Can we estimate plasma density in ICP driver through electrical parameters in RF circuit?
NASA Astrophysics Data System (ADS)
Bandyopadhyay, M.; Sudhir, Dass; Chakraborty, A.
2015-04-01
To avoid regular maintenance, invasive plasma diagnostics with probes are not included in the inductively coupled plasma (ICP) based ITER Neutral Beam (NB) source design. Even non-invasive probes like optical emission spectroscopic diagnostics are also not included in the present ITER NB design due to overall system design and interface issues. As a result, the negative ion beam current through the extraction system in the ITER NB negative ion source is the only measurement which indicates the plasma condition inside the ion source. However, beam current depends not only on the plasma condition near the extraction region but also on the perveance condition of the ion extractor system and on negative ion stripping. Moreover, the inductively coupled plasma production region (the RF driver) is located at a distance (~30 cm) from the extraction region. Because of this, some uncertainties are expected if one tries to link beam current with plasma properties inside the RF driver. Plasma characterization in the source RF driver region is essential to maintain the optimum condition for source operation. In this paper, a method of plasma density estimation is described, based on density-dependent plasma load calculation.
Statistical estimation of femur micro-architecture using optimal shape and density predictors.
Lekadir, Karim; Hazrati-Marangalou, Javad; Hoogendoorn, Corné; Taylor, Zeike; van Rietbergen, Bert; Frangi, Alejandro F
2015-02-26
The personalization of trabecular micro-architecture has been recently shown to be important in patient-specific biomechanical models of the femur. However, high-resolution in vivo imaging of bone micro-architecture using existing modalities is still infeasible in practice due to the associated acquisition times, costs, and X-ray radiation exposure. In this study, we describe a statistical approach for the prediction of the femur micro-architecture based on the more easily extracted subject-specific bone shape and mineral density information. To this end, a training sample of ex vivo micro-CT images is used to learn the existing statistical relationships within the low and high resolution image data. More specifically, optimal bone shape and mineral density features are selected based on their predictive power and used within a partial least square regression model to estimate the unknown trabecular micro-architecture within the anatomical models of new subjects. The experimental results demonstrate the accuracy of the proposed approach, with average errors of 0.07 for both the degree of anisotropy and tensor norms. PMID:25624314
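As a hedged sketch of the regression step above, the code below fits a linear predictor from subject-specific features (e.g. shape and density descriptors) to micro-architecture descriptors. Ordinary least squares stands in for the paper's partial least squares model, and the data are synthetic.

```python
import numpy as np

def fit_predictor(X, Y):
    """Least-squares linear predictor Y ~ [1, X] @ B; a simplified
    stand-in for the partial least squares regression in the paper."""
    Xa = np.column_stack([np.ones(len(X)), X])
    B, *_ = np.linalg.lstsq(Xa, np.asarray(Y, dtype=float), rcond=None)
    return B

def predict(B, X):
    """Apply the fitted coefficients to new feature rows."""
    return np.column_stack([np.ones(len(X)), X]) @ B
```

PLS would additionally project the features onto a few latent components chosen for their predictive power, which matters when predictors are many and collinear.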
Estimation of D-region Electron Density using Tweeks Measurements at Nainital and Allahabad
NASA Astrophysics Data System (ADS)
Pant, P.; Maurya, A. K.; Singh, Rajesh; Veenadhari, B.; Singh, A. K.
2010-10-01
Lightning-generated radio atmospherics that propagate over long distances via multiple reflections through the boundaries of the Earth-ionosphere waveguide (EIWG) show sharp dispersion near the ~1.8 kHz cut-off frequency of the EIWG. These dispersed atmospherics at the lower frequency end are called 'tweek' radio atmospherics. In order to estimate D-region electron densities at the ionospheric reflection heights, we have utilized the dispersive property of tweeks observed at the low-latitude Indian stations Nainital (Geomag. Lat. 20.29° N) and Allahabad (Geomag. Lat. 16.05° N). A direction finding technique has also been applied to determine the source locations of the causative lightning discharges of tweeks. The geographic location is determined by the intersection of two circles drawn using the propagation distance travelled by tweek atmospherics from the source location to the Allahabad (ALD) and Nainital (NTL) stations. These results are in good agreement with World Wide Lightning Location Network (WWLLN) data. The average D-region electron density along the propagation path varied in the range ~20-35 el/cc at ionospheric reflection heights of 70-90 km. The tweek method has the unique advantage of monitoring the lower boundary of the D-region over an area of several thousand km surrounding the receiving stations.
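The reflection height enters through the EIWG mode cut-off relation f_cn = n c / (2 h); inverting it gives a quick height estimate from a measured tweek cut-off frequency. A minimal sketch (the subsequent conversion from cut-off frequency to electron density is omitted):

```python
C = 299_792_458.0  # speed of light, m/s

def reflection_height_km(fc_hz, mode=1):
    """Ionospheric reflection height (km) from the cut-off frequency of
    EIWG waveguide mode n, using h = n * c / (2 * f_c)."""
    return mode * C / (2.0 * fc_hz) / 1000.0
```

For the first-mode cut-off near 1.8 kHz quoted above, this gives a reflection height of roughly 83 km, consistent with the 70-90 km range reported.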
NASA Astrophysics Data System (ADS)
Veynante, Denis; Lodato, Guido; Domingo, Pascale; Vervisch, Luc; Hawkes, Evatt R.
2010-07-01
Turbulence motions are, by nature, three-dimensional, while planar imaging techniques, widely used in turbulent combustion, give access only to two-dimensional information. For example, extracting flame surface densities, a key ingredient of some turbulent combustion models, from planar images implicitly assumes an instantaneously two-dimensional flow, neglecting the unresolved flame front wrinkling. The objective here is to estimate flame surface densities from two-dimensional measurements assuming that (1) the flow is statistically two-dimensional; (2) the measuring plane is a plane of symmetry of the mean flow, either by translation (homogeneous third direction, as in slot burners) or by rotation (axi-symmetrical flows such as jets); and (3) flame movements in transverse directions are similar. The unknown flame front wrinkling is then modelled from known quantities. An excellent agreement is achieved against direct numerical simulation (DNS) data where all three-dimensional quantities are known, but validations in other conditions (larger DNS, experiments) are required.
NASA Astrophysics Data System (ADS)
Vancamberg, Laurence; Geeraert, Nausikaa; Iordache, Razvan; Palma, Giovanni; Klausz, Rémy; Muller, Serge
2011-03-01
Needle insertion planning for digital breast tomosynthesis (DBT) guided biopsy has the potential to improve patient comfort and intervention safety. However, a relevant plan should take into account breast tissue deformation and lesion displacement during the procedure. Deformable models, such as finite element models, use the elastic characteristics of the breast to evaluate tissue deformation during needle insertion. This paper presents a novel approach to locally estimate the Young's modulus of breast tissue directly from the DBT data. The method consists of computing the fibroglandular percentage in each of the acquired DBT projection images, then reconstructing the density volume. Finally, this density information is used to compute the mechanical parameters for each finite element of the deformable mesh, yielding a heterogeneous DBT-based breast model. Preliminary experiments were performed to evaluate the relevance of this method for needle path planning in DBT guided biopsy. The results show that the heterogeneous DBT-based breast model improves needle insertion simulation accuracy in 71% of the cases, compared to a homogeneous model or a binary fat/fibroglandular tissue model.
Two-component wind fields from scanning aerosol lidar and motion estimation algorithms
NASA Astrophysics Data System (ADS)
Mayor, Shane D.; Dérian, Pierre; Mauzey, Christopher F.; Hamada, Masaki
2013-09-01
We report on the implementation and testing of a new wavelet-based motion estimation algorithm to estimate horizontal vector wind fields in real-time from horizontally-scanning elastic backscatter lidar data, and new experimental results from field work conducted in Chico, California, during the summer of 2013. We also highlight some limitations of a traditional cross-correlation method and compare the results of the wavelet-based method with those from the cross-correlation method and wind measurements from a Doppler lidar.
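As a point of reference for the traditional approach mentioned above, the displacement between two successive scan frames can be estimated from the peak of their circular cross-correlation. This is a generic sketch, not the authors' implementation; the frames and the (3, -2) pixel shift are synthetic.

```python
import numpy as np

def estimate_shift(f0, f1):
    """Integer (dy, dx) such that f1 ~ np.roll(f0, (dy, dx), axis=(0, 1)),
    found as the peak of the FFT-based circular cross-correlation."""
    xc = np.fft.ifft2(np.fft.fft2(f1) * np.conj(np.fft.fft2(f0))).real
    peak = np.unravel_index(np.argmax(xc), xc.shape)
    # Unwrap indices past the midpoint into negative shifts.
    return tuple(p - s if p > s // 2 else p for p, s in zip(peak, xc.shape))

# Synthetic aerosol backscatter field advected by (3, -2) pixels between scans:
rng = np.random.default_rng(0)
frame0 = rng.random((64, 64))
frame1 = np.roll(frame0, (3, -2), axis=(0, 1))
shift = estimate_shift(frame0, frame1)  # -> (3, -2)
```

In practice this is done block-wise over the scan to obtain a vector field; the wavelet-based estimator in the paper instead solves for a dense motion field and avoids the block-size trade-off that limits the correlation method.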
Robust estimation of mammographic breast density: a patient-based approach
NASA Astrophysics Data System (ADS)
Heese, Harald S.; Erhard, Klaus; Gooßen, Andre; Bulow, Thomas
2012-02-01
Breast density has become an established risk indicator for developing breast cancer. Current clinical practice reflects this by grading mammograms patient-wise as entirely fat, scattered fibroglandular, heterogeneously dense, or extremely dense based on visual perception. Existing (semi-)automated methods work on a per-image basis and mimic clinical practice by calculating an area fraction of fibroglandular tissue (mammographic percent density). We suggest a method that follows clinical practice more strictly by segmenting the fibroglandular tissue portion directly from the joint data of all four available mammographic views (cranio-caudal and medio-lateral oblique, left and right), and by subsequently calculating a consistently patient-based mammographic percent density estimate. In particular, each mammographic view is first processed separately to determine a region of interest (ROI) for segmentation into fibroglandular and adipose tissue. ROI determination includes breast outline detection via edge-based methods, peripheral tissue suppression via geometric breast height modeling, and, for medio-lateral oblique views only, pectoral muscle outline detection based on optimizing a three-parameter analytic curve with respect to local appearance. Intensity harmonization based on separately acquired calibration data is performed with respect to compression height and tube voltage to facilitate joint segmentation of the available mammographic views. A Gaussian mixture model (GMM) on the joint histogram data, with a posteriori calibration-guided plausibility correction, is finally employed for tissue separation. The proposed method was tested on patient data from 82 subjects. Results show excellent correlation (r = 0.86) to radiologists' grading, with deviations ranging between -28% (q = 0.025) and +16% (q = 0.975).
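The GMM tissue-separation step can be sketched in one dimension with plain EM. This is a minimal illustration on synthetic intensities, not the paper's joint four-view histogram model with calibration-guided correction; the cluster means and mixture weights below are invented.

```python
import numpy as np

def gmm2_em(x, iters=200):
    """Plain EM for a two-component 1D Gaussian mixture fitted to samples x."""
    mu = np.percentile(x, [10, 90]).astype(float)   # spread-out initial means
    var = np.full(2, x.var())
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each component for each sample
        pdf = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = pi * pdf
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update weights, means, variances
        n = r.sum(axis=0)
        pi = n / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n
    return pi, mu, var

# Synthetic intensities: 80% "adipose" around 0.2, 20% "fibroglandular" around 0.7
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.2, 0.05, 4000), rng.normal(0.7, 0.08, 1000)])
pi, mu, var = gmm2_em(x)
dense_fraction = pi[np.argmax(mu)]   # crude percent-density analogue, ~0.2 here
```

The weight of the higher-mean component then plays the role of the area fraction of fibroglandular tissue.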
Density estimation in aerial images of large crowds for automatic people counting
NASA Astrophysics Data System (ADS)
Herrmann, Christian; Metzler, Juergen
2013-05-01
Counting people is a common topic in the area of visual surveillance and crowd analysis. While many image-based solutions are designed to count only a few persons at a time, such as pedestrians entering a shop or watching an advertisement, there is hardly any solution for counting large crowds of several hundred persons or more. We previously addressed this problem with a semi-automatic system able to count crowds of hundreds or thousands of people in aerial images of demonstrations or similar events. That system requires substantial user interaction to segment the image, and our principal aim is to reduce this manual interaction. To achieve this, we propose a new, automatic system. Besides counting the people in large crowds, the system yields the positions of people, allowing a plausibility check by a human operator. To automate the people counting, we use crowd density estimation. The determination of crowd density is based on several features, such as edge intensity and spatial frequency, which indicate the density and discriminate between a crowd and other image regions such as buildings, bushes or trees. We compare the performance of our automatic system to the previous semi-automatic system and to manual counting, measuring the performance gain on a test set of aerial images showing large crowds containing up to 12,000 people. By improving our previous system, we increase the benefit of an image-based solution for counting people in large crowds.
Fleetwood, D.M.; Shaneyfelt, M.R.; Schwank, J.R. (Sandia National Laboratories, Department 1332, Albuquerque, New Mexico 87185-1083 (United States))
1994-04-11
A simple method is described that combines conventional threshold-voltage and charge-pumping measurements on [ital n]- and [ital p]-channel metal-oxide-semiconductor (MOS) transistors to estimate radiation-induced oxide-, interface-, and border-trap charge densities. In some devices, densities of border traps (near-interfacial oxide traps that exchange charge with the underlying Si) approach or exceed the density of interface traps, emphasizing the need to distinguish border-trap contributions to MOS radiation response and long-term reliability from interface-trap contributions. Estimates of border-trap charge densities obtained via this new dual-transistor technique agree well with trap densities inferred from 1/[ital f] noise measurements for transistors with varying channel length.
Methods for Estimating Environmental Effects and Constraints on NexGen: High Density Case Study
NASA Technical Reports Server (NTRS)
Augustine, S.; Ermatinger, C.; Graham, M.; Thompson, T.
2010-01-01
This document provides a summary of the current methods developed by Metron Aviation for the estimation of environmental effects and constraints on the Next Generation Air Transportation System (NextGen). This body of work incorporates many of the key elements necessary to achieve such an estimate. Each section contains the background and motivation for the technical elements of the work, a description of the methods used, and possible next steps. The current methods described in this document were selected to provide a good balance between accuracy and fairly rapid turnaround times, to best advance Joint Planning and Development Office (JPDO) System Modeling and Analysis Division (SMAD) objectives while also supporting the needs of the JPDO Environmental Working Group (EWG). In particular, this document describes methods applied to support the High Density (HD) Case Study performed during the spring of 2008. A reference day (in 2006) is modeled to describe current system capabilities, while the future demand is applied to multiple alternatives to analyze system performance. The major variables in the alternatives are operational/procedural capabilities for airport, terminal, and en route airspace, along with projected improvements to airframe, engine, and navigational equipment.
mBEEF: an accurate semi-local Bayesian error estimation density functional.
Wellendorff, Jess; Lundgaard, Keld T; Jacobsen, Karsten W; Bligaard, Thomas
2014-04-14
We present a general-purpose meta-generalized gradient approximation (MGGA) exchange-correlation functional generated within the Bayesian error estimation functional framework [J. Wellendorff, K. T. Lundgaard, A. Møgelhøj, V. Petzold, D. D. Landis, J. K. Nørskov, T. Bligaard, and K. W. Jacobsen, Phys. Rev. B 85, 235149 (2012)]. The functional is designed to give reasonably accurate density functional theory (DFT) predictions of a broad range of properties in materials physics and chemistry, while exhibiting a high degree of transferability. Particularly, it improves upon solid cohesive energies and lattice constants over the BEEF-vdW functional without compromising high performance on adsorption and reaction energies. We thus expect it to be particularly well-suited for studies in surface science and catalysis. An ensemble of functionals for error estimation in DFT is an intrinsic feature of exchange-correlation models designed this way, and we show how the Bayesian ensemble may provide a systematic analysis of the reliability of DFT based simulations. PMID:24735288
Estimating basin thickness using a high-density passive-source geophone array
NASA Astrophysics Data System (ADS)
O'Rourke, C. T.; Sheehan, A. F.; Erslev, E. A.; Miller, K. C.
2014-09-01
In 2010 an array of 834 single-component geophones was deployed across the Bighorn Mountain Range in northern Wyoming as part of the Bighorn Arch Seismic Experiment (BASE). The goal of this deployment was to test the capabilities of these instruments as recorders of passive-source observations in addition to active-source observations for which they are typically used. The results are quite promising, having recorded 47 regional and teleseismic earthquakes over a two-week deployment. These events ranged from magnitude 4.1 to 7.0 (mb) and occurred at distances up to 10°. Because these instruments were deployed at ca. 1000 m spacing we were able to resolve the geometries of two major basins from the residuals of several well-recorded teleseisms. The residuals of these arrivals, converted to basinal thickness, show a distinct westward thickening in the Bighorn Basin that agrees with industry-derived basement depth information. Our estimates of thickness in the Powder River Basin do not match industry estimates in certain areas, likely due to localized high-velocity features that are not included in our models. Thus, with a few cautions, it is clear that high-density single-component passive arrays can provide valuable constraints on basinal geometries, and could be especially useful where basinal geometry is poorly known.
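The conversion from a travel-time residual to basin thickness can be sketched with a vertical-incidence approximation: a ray spends extra time in slow sediments relative to basement rock. The sediment and basement velocities below are illustrative assumptions, not values from the study.

```python
def basin_thickness_km(residual_s, v_sed=3.0, v_base=6.0):
    """A near-vertical ray accrues a delay dt = h*(1/v_sed - 1/v_base)
    crossing a basin of thickness h (km); invert for h.
    v_sed, v_base: assumed P velocities in km/s."""
    return residual_s / (1.0 / v_sed - 1.0 / v_base)

h = basin_thickness_km(0.5)   # a 0.5 s residual -> 3.0 km of sediment
```

The caveat noted in the abstract maps directly onto this formula: a localized high-velocity body inside the basin raises the effective v_sed and makes the inferred thickness too small.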
Dunn, K. L.; Wilson, P. P. H. [Department of Engineering Physics, University of Wisconsin - Madison, 1500 Engineering Drive, Madison, WI 53706 (United States)
2013-07-01
A new Monte Carlo mesh tally based on a Kernel Density Estimator (KDE) approach using integrated particle tracks is presented. We first derive the KDE integral-track estimator and present a brief overview of its implementation as an alternative to the MCNP fmesh tally. To facilitate a valid quantitative comparison between these two tallies for verification purposes, there are two key issues that must be addressed. The first of these issues involves selecting a good data transfer method to convert the nodal-based KDE results into their cell-averaged equivalents (or vice versa with the cell-averaged MCNP results). The second involves choosing an appropriate resolution of the mesh, since if it is too coarse this can introduce significant errors into the reference MCNP solution. After discussing both of these issues in some detail, we present the results of a convergence analysis that shows the KDE integral-track and MCNP fmesh tallies are indeed capable of producing equivalent results for some simple 3D transport problems. In all cases considered, there was clear convergence from the KDE results to the reference MCNP results as the number of particle histories was increased. (authors)
Simultaneous Estimation of Depth, Density, and Water Equivalent of Snow using a Mobile GPR Setup
NASA Astrophysics Data System (ADS)
Jonas, T.; Griessinger, N.; Gindraux, S.
2014-12-01
Terrestrial and airborne laser scanning of snow have significantly improved our understanding of the spatial variability of snow depth. However, methods that provide corresponding datasets of snow water equivalent of similar quality are unavailable to date. Like laser scanning, ground-penetrating radar (GPR) has become more accessible to snow researchers and is successfully used in snow hydrological studies. GPR systems can be set up in different ways to measure snow properties. In this study we elaborate on a mobile GPR system that allows simultaneous estimation of snow depth, density, and water equivalent in a snow survey setting. For this purpose we built a GPR platform around a sledge system with four antenna pairs set up as a common-midpoint array and a separate fifth antenna pair dedicated to analyzing the frequency change of the radar signal as it propagates through the snowpack. Liquid water content can be accounted for by assessing the frequency-dependent attenuation of the radar signal. We will present data from field campaigns carried out in 2013 and 2014 to test the ability of our GPR system to estimate snow bulk properties along several test transects. Along with the results, we will discuss system configuration and post-processing issues.
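The bulk-property retrieval can be sketched as follows, assuming the common-midpoint analysis has already yielded a radar velocity and a zero-offset two-way travel time. The dry-snow permittivity relation eps = (1 + 0.845*rho)^2 (Kovacs et al.) is an assumption of this sketch, as are the example numbers; the paper's wet-snow handling via frequency-dependent attenuation is not modelled here.

```python
C = 0.2998  # speed of light in vacuum, m/ns

def snow_bulk_properties(twt_ns, v_mns):
    """From a CMP-derived velocity v (m/ns) and a zero-offset two-way travel
    time (ns): snow depth, dry-snow density, and snow water equivalent."""
    depth_m = v_mns * twt_ns / 2.0               # two-way time -> one-way depth
    rho_gcc = (C / v_mns - 1.0) / 0.845          # invert eps = (1 + 0.845*rho)^2, v = C/sqrt(eps)
    swe_mm = 1000.0 * depth_m * rho_gcc          # SWE in mm water equivalent
    return depth_m, rho_gcc, swe_mm

d, rho, swe = snow_bulk_properties(twt_ns=15.0, v_mns=0.23)
# -> about 1.7 m of snow at ~0.36 g/cm^3, i.e. ~620 mm SWE
```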
mBEEF: An accurate semi-local Bayesian error estimation density functional
NASA Astrophysics Data System (ADS)
Wellendorff, Jess; Lundgaard, Keld T.; Jacobsen, Karsten W.; Bligaard, Thomas
2014-04-01
We present a general-purpose meta-generalized gradient approximation (MGGA) exchange-correlation functional generated within the Bayesian error estimation functional framework [J. Wellendorff, K. T. Lundgaard, A. Møgelhøj, V. Petzold, D. D. Landis, J. K. Nørskov, T. Bligaard, and K. W. Jacobsen, Phys. Rev. B 85, 235149 (2012)]. The functional is designed to give reasonably accurate density functional theory (DFT) predictions of a broad range of properties in materials physics and chemistry, while exhibiting a high degree of transferability. Particularly, it improves upon solid cohesive energies and lattice constants over the BEEF-vdW functional without compromising high performance on adsorption and reaction energies. We thus expect it to be particularly well-suited for studies in surface science and catalysis. An ensemble of functionals for error estimation in DFT is an intrinsic feature of exchange-correlation models designed this way, and we show how the Bayesian ensemble may provide a systematic analysis of the reliability of DFT based simulations.
A wavelet-based adaptive filter for removing ECG interference in EMGdi signals.
Zhan, Choujun; Yeung, Lam Fat; Yang, Zhi
2010-06-01
Diaphragmatic electromyogram (EMGdi) signals convey important information on respiratory diseases. In this paper, an adaptive filter for removing the electrocardiographic (ECG) interference in EMGdi signals based on wavelet theory is proposed. Power spectrum analysis was performed to evaluate the proposed method. Simulation results show that the power spectral density (PSD) of the EMGdi signal extracted from an ECG-corrupted signal is within 1.92% average error relative to the original EMGdi signal. Testing on clinical EMGdi data confirms that this method is also efficient in removing ECG artifacts from corrupted clinical EMGdi signals. PMID:19692270
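The decompose-modify-reconstruct pattern that such wavelet filters share can be sketched with a plain Haar transform and soft thresholding. This is a generic denoiser, not the paper's ECG-adaptive filter; the signal, noise level, and threshold are invented for illustration.

```python
import numpy as np

def haar_step(x):
    """One level of the orthonormal Haar transform (even-length input)."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail coefficients
    return a, d

def inv_haar_step(a, d):
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def denoise(x, levels=3, thresh=0.5):
    """Decompose, soft-threshold the detail coefficients, reconstruct."""
    approx, details = np.asarray(x, float), []
    for _ in range(levels):
        approx, d = haar_step(approx)
        details.append(np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0))
    for d in reversed(details):
        approx = inv_haar_step(approx, d)
    return approx

rng = np.random.default_rng(2)
t = np.linspace(0.0, 2.0 * np.pi, 256)
clean = np.sin(t)
noisy = clean + 0.3 * rng.standard_normal(256)
smoothed = denoise(noisy)   # lower mean-squared error than the noisy input
```

An adaptive scheme like the paper's would instead estimate, per band, how much of each coefficient is attributable to the ECG reference and subtract that contribution rather than applying a fixed threshold.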
On L^p-Resolvent Estimates and the Density of Eigenvalues for Compact Riemannian Manifolds
NASA Astrophysics Data System (ADS)
Bourgain, Jean; Shao, Peng; Sogge, Christopher D.; Yao, Xiaohua
2015-02-01
We address an interesting question raised by Dos Santos Ferreira, Kenig and Salo (Forum Math, 2014) about regions for which there can be uniform resolvent estimates for Δ_g + ζ, ζ ∈ C, where Δ_g is the Laplace-Beltrami operator with metric g on a given compact boundaryless Riemannian manifold of dimension n ≥ 3. This is related to earlier work of Kenig, Ruiz and the third author (Duke Math J 55:329-347, 1987) for the Euclidean Laplacian, in which case the region is the entire complex plane minus any disc centered at the origin. Presently, we show that for the round metric on the sphere S^n, the resolvent estimates in (Dos Santos Ferreira et al. in Forum Math, 2014), involving a much smaller region, are essentially optimal. We do this by establishing sharp bounds based on the distance from ζ to the spectrum of -Δ_g. In the other direction, we also show that the bounds in (Dos Santos Ferreira et al. in Forum Math, 2014) can be sharpened logarithmically for manifolds with nonpositive curvature, and by powers in the case of the torus T^n with the flat metric. The latter improves earlier bounds of Shen (Int Math Res Not 1:1-31, 2001). The work of (Dos Santos Ferreira et al. in Forum Math, 2014) and (Shen in Int Math Res Not 1:1-31, 2001) was based on Hadamard parametrices for the resolvent. Ours is based on the related Hadamard parametrices for the wave equation, and it follows ideas in (Sogge in Ann Math 126:439-447, 1987) of proving L^p-multiplier estimates using small-time wave equation parametrices and the spectral projection estimates from (Sogge in J Funct Anal 77:123-138, 1988). This approach allows us to adapt arguments in Bérard (Math Z 155:249-276, 1977) and Hlawka (Monatsh Math 54:1-36, 1950) to obtain the aforementioned improvements over (Dos Santos Ferreira et al. in Forum Math, 2014) and (Shen in Int Math Res Not 1:1-31, 2001). Further improvements for the torus are obtained using recent techniques of the first author (Bourgain in Israel J Math 193(1):441-458, 2013) and his work with Guth (Bourgain and Guth in Geom Funct Anal 21:1239-1295, 2011) based on the multilinear estimates of Bennett, Carbery and Tao (Math Z 2:261-302, 2006). Our approach also allows us to give a natural necessary condition for favorable resolvent estimates based on a measurement of the density of the spectrum of -Δ_g and, moreover, a necessary and sufficient condition based on natural improved spectral projection estimates for shrinking intervals, as opposed to those in (Sogge in J Funct Anal 77:123-138, 1988) for unit-length intervals. We show that the resolvent estimates are sensitive to clustering within the spectrum, which is not surprising given Sommerfeld's original conjecture (Sommerfeld in Physikal Zeitschr 11:1057-1066, 1910) about these operators.
Wang, Ying; Wu, Fengchang; Giesy, John P; Feng, Chenglian; Liu, Yuedan; Qin, Ning; Zhao, Yujie
2015-09-01
Due to use of different parametric models for establishing species sensitivity distributions (SSDs), comparison of water quality criteria (WQC) for metals of the same group or period in the periodic table is uncertain and results can be biased. To address this inadequacy, a new probabilistic model, based on non-parametric kernel density estimation was developed and optimal bandwidths and testing methods are proposed. Zinc (Zn), cadmium (Cd), and mercury (Hg) of group IIB of the periodic table are widespread in aquatic environments, mostly at small concentrations, but can exert detrimental effects on aquatic life and human health. With these metals as target compounds, the non-parametric kernel density estimation method and several conventional parametric density estimation methods were used to derive acute WQC of metals for protection of aquatic species in China that were compared and contrasted with WQC for other jurisdictions. HC5 values for protection of different types of species were derived for three metals by use of non-parametric kernel density estimation. The newly developed probabilistic model was superior to conventional parametric density estimations for constructing SSDs and for deriving WQC for these metals. HC5 values for the three metals were inversely proportional to atomic number, which means that the heavier atoms were more potent toxicants. The proposed method provides a novel alternative approach for developing SSDs that could have wide application prospects in deriving WQC and use in assessment of risks to ecosystems. PMID:25953609
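The core step, reading HC5 off a non-parametric SSD, can be sketched with a Gaussian-kernel density estimate: HC5 is the concentration at which the KDE's cumulative distribution reaches 0.05. Silverman's rule-of-thumb bandwidth and the synthetic toxicity values are assumptions of this sketch; the paper optimizes its bandwidths rather than using a fixed rule.

```python
import numpy as np

def hc5_from_kde(log10_lc50):
    """HC5 from a kernel-density SSD built on log10-transformed toxicity data."""
    x = np.asarray(log10_lc50, float)
    n, s = len(x), x.std(ddof=1)
    h = 1.06 * s * n ** (-0.2)                       # Silverman's rule of thumb
    grid = np.linspace(x.min() - 4 * h, x.max() + 4 * h, 4000)
    dx = grid[1] - grid[0]
    # Gaussian KDE evaluated on the grid, then integrated to a CDF
    pdf = np.exp(-0.5 * ((grid[:, None] - x) / h) ** 2).sum(axis=1) \
          / (n * h * np.sqrt(2.0 * np.pi))
    cdf = np.cumsum(pdf) * dx
    return 10.0 ** np.interp(0.05, cdf, grid)        # back-transform from log10

# Hypothetical species-sensitivity data (log10 LC50 values), not from the study:
rng = np.random.default_rng(4)
hc5 = hc5_from_kde(rng.normal(1.0, 0.5, 60))
```

Because no parametric family is imposed, the same procedure applies unchanged to each metal's dataset, which is what makes the derived HC5 values directly comparable across Zn, Cd, and Hg.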
NASA Astrophysics Data System (ADS)
Augeard, B.; Assouline, S.; Fonty, A.; Kao, C.; Vauclin, M.
2007-07-01
Soil and surface seal hydraulic properties were determined from simulated rainfall experiments by the inverse method applied to the Richards equation. Measurements used for the estimation include the soil water pressure head versus time at two distances from the soil surface, the transient infiltration rate at the soil surface, and the drainage rates at the bottom of the soil profile. Seal properties were evaluated using a model that simulates changes in the seal bulk density in time and space. Uncertainties, correlations, and sensitivities of the soil and seal parameters were quantified to evaluate the accuracy of the model estimation and to compare the information content of each measurement type for parameter estimation. The uncertainties of three seal parameter estimates, namely the parameter governing the dynamics of seal formation, the modelled seal thickness, and the initial bulk density, were larger than 50% of the parameter values, because of the low sensitivity of the model to them and their multiple correlations. In addition to seal hydraulic parameter estimation, bulk density profiles of the soil surface were measured after the rainfall simulations using the X-ray method. The exponential-decay shape assumed in the soil surface seal model was found to correctly reproduce the measured distribution of bulk density with depth. However, the measurements showed a less developed seal than that suggested by the bulk density profile estimated from the rainfall experiments. Finally, the bulk density measurements were used as given input parameters of the model. Setting the initial bulk density and its maximal change over time to the measured values greatly decreased the seal parameter uncertainties. The proposed method could be used to improve the experimental design used to quantify a seal's hydraulic properties using inverse techniques.
Adib, Mani; Cretu, Edmond
2013-01-01
We present a new method for removing artifacts in electroencephalography (EEG) records during Galvanic Vestibular Stimulation (GVS). The main challenge in exploiting GVS is to understand how the stimulus acts as an input to the brain. We used EEG to monitor the brain and elicit the GVS reflexes. However, the GVS current distribution throughout the scalp generates an artifact on EEG signals. We need to eliminate this artifact to be able to analyze the EEG signals during GVS. We propose a novel method to estimate the contribution of the GVS current to the EEG signal at each electrode by combining time-series regression methods with wavelet decomposition methods. We use the wavelet transform to project the recorded EEG signal into various frequency bands and then estimate the GVS current distribution in each frequency band. The proposed method was optimized using simulated signals, and its performance was compared to well-accepted artifact removal methods such as ICA-based methods and adaptive filters. The results show that the proposed method has better performance in removing GVS artifacts compared to the others. Using the proposed method, a higher signal-to-artifact ratio of ~1.625 dB was achieved, which outperformed other methods such as ICA-based methods, regression methods, and adaptive filters. PMID:23956786
Jennelle, C.S.; Runge, M.C.; MacKenzie, D.I.
2002-01-01
The search for easy-to-use indices that substitute for direct estimation of animal density is a common theme in wildlife and conservation science, but one fraught with well-known perils (Nichols & Conroy, 1996; Yoccoz, Nichols & Boulinier, 2001; Pollock et al., 2002). To establish the utility of an index as a substitute for an estimate of density, one must: (1) demonstrate a functional relationship between the index and density that is invariant over the desired scope of inference; (2) calibrate the functional relationship by obtaining independent measures of the index and the animal density; (3) evaluate the precision of the calibration (Diefenbach et al., 1994). Carbone et al. (2001) argue that the number of camera-days per photograph is a useful index of density for large, cryptic, forest-dwelling animals, and proceed to calibrate this index for tigers (Panthera tigris). We agree that a properly calibrated index may be useful for rapid assessments in conservation planning. However, Carbone et al. (2001), who desire to use their index as a substitute for density, do not adequately address the three elements noted above. Thus, we are concerned that others may view their methods as justification for not attempting directly to estimate animal densities, without due regard for the shortcomings of their approach.
NASA Astrophysics Data System (ADS)
Kurita, Yutaka; Kjesbu, Olav S.
2009-02-01
This paper explores why the 'Auto-diametric method', currently used in many laboratories to quickly estimate fish fecundity, works well on marine species with a determinate reproductive style but much less so on species with an indeterminate reproductive style. Algorithms describing links between potentially important explanatory variables for estimating fecundity were first established, followed by practical observations to validate the method under two extreme situations: (1) straightforward fecundity estimation in a determinate, single-batch spawner, Atlantic herring (AH) Clupea harengus, and (2) challenging fecundity estimation in an indeterminate, multiple-batch spawner, Japanese flounder (JF) Paralichthys olivaceus. The Auto-diametric method relies on the successful prediction of the number of vitellogenic oocytes (VTO) per gram ovary (oocyte packing density, OPD) from the mean VTO diameter. Theoretically, OPD can be reproduced from the following four variables: OD_V (volume-based mean VTO diameter, which deviates from the arithmetic mean VTO diameter), VF_vto (volume fraction of VTO in the ovary), ρ_o (specific gravity of the ovary) and k (VTO shape, i.e. the ratio of long to short oocyte axes). VF_vto, ρ_o and k were tested in relation to growth in OD_V. The dynamic range throughout maturation was clearly highest in VF_vto. As a result, OPD was influenced mainly by OD_V and secondly by VF_vto. Log(OPD) for AH decreased as log(OD_V) increased, while log(OPD) for JF first increased during early vitellogenesis, then decreased during late vitellogenesis and spawning as log(OD_V) increased. These linear regressions thus behaved statistically differently between species, and the associated residuals fluctuated more for JF than for AH. We conclude that the OPD-OD_V relationship may be better expressed by several curves that cover different parts of the maturation cycle rather than by one curve that covers all of them. This seems to be particularly true for indeterminate spawners. A correction factor for vitellogenic atresia was included based on the level of atresia and the size of atretic oocytes relative to normal oocytes, the finding being that OPD is biased when smaller atretic oocytes are present but not accounted for. Furthermore, special care should be taken when collecting sub-samples to make them as representative as possible of the whole ovary, including in terms of the relative amount of ovarian wall and stroma. Theoretical considerations, along with original, high-quality information on the above-listed variables, made it possible to reproduce the observed changes in OPD very accurately, but not yet precisely enough at the individual level in indeterminate spawners.
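The packing-density bookkeeping can be sketched dimensionally: oocytes per gram ovary is the occupied volume fraction divided by ovary density times mean oocyte volume. Modelling the mean oocyte as a spheroid of volume (pi/6)*k*OD_V^3 is an assumption of this sketch, as are the example values; the paper's exact definitions of k and OD_V may differ.

```python
import math

def oocyte_packing_density(od_v_um, vf_vto, rho_o=1.05, k=1.0):
    """OPD = VF_vto / (rho_o * v_mean), oocytes per gram ovary.
    od_v_um: volume-based mean VTO diameter (micrometres)
    vf_vto:  volume fraction of VTO in the ovary (0-1)
    rho_o:   ovary specific gravity (g/cm^3); k: shape factor."""
    od_cm = od_v_um * 1e-4
    v_mean = math.pi / 6.0 * k * od_cm ** 3       # mean oocyte volume, cm^3
    return vf_vto / (rho_o * v_mean)              # oocytes per gram

opd = oocyte_packing_density(od_v_um=600.0, vf_vto=0.5)   # ~4200 oocytes/g
```

The cubic dependence on OD_V is why OPD is dominated by the mean diameter, with VF_vto as the secondary driver, exactly the ranking reported above.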
Siegwarth, J.D.; LaBrecque, J.F.; Roncier, M.; Philippe, R.; Saint-Just, J.
1982-12-16
Liquefied natural gas (LNG) densities can be measured directly but are usually determined indirectly in custody transfer measurement by using a density correlation based on temperature and composition measurements. An LNG densimeter test facility at the National Bureau of Standards uses an absolute densimeter based on the Archimedes principle, while a test facility at Gaz de France uses a correlation method based on measurement of composition and density. A comparison between these two test facilities using a portable version of the absolute densimeter provides an experimental estimate of the uncertainty of the indirect method of density measurement for the first time, on a large (32 L) sample. The two test facilities agree for pure methane to within about 0.02%. For the LNG-like mixtures consisting of methane, ethane, propane, and nitrogen with the methane concentrations always higher than 86%, the calculated density is within 0.25% of the directly measured density 95% of the time.
The EM Method in a Probabilistic Wavelet-Based MRI Denoising.
Martin-Fernandez, Marcos; Villullas, Sergio
2015-01-01
Human body heat emission and other external causes can interfere with magnetic resonance image acquisition and produce noise. In this kind of image, the noise, when no signal is present, is Rayleigh distributed, and its wavelet coefficients can be approximately modeled by a Gaussian distribution, while noiseless magnetic resonance images can be modeled by a Laplacian distribution in the wavelet domain. This paper proposes a new magnetic resonance image denoising method that exploits these facts. The method performs shrinkage of wavelet coefficients based on the conditional probability of their being noise or detail. The parameters involved in this filtering approach are calculated by means of the expectation maximization (EM) method, which avoids the need for an estimator of the noise variance. The efficiency of the proposed filter is studied and compared with other important filtering techniques, such as Nowak's, Donoho-Johnstone's, Awate-Whitaker's, and nonlocal means filters, in different 2D and 3D images. PMID:26089959
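The Gaussian-noise/Laplacian-detail mixture with EM-fitted parameters described above can be sketched in one dimension. This is a simplified illustration on synthetic coefficients, not the paper's implementation: the probability-weighted shrinkage rule and all example parameters are assumptions.

```python
import numpy as np

def em_noise_detail(c, iters=100):
    """EM fit of coefficients c as a mixture of Gaussian 'noise' N(0, sigma2)
    and Laplacian 'detail' L(0, b); returns parameters and P(detail | c)."""
    sigma2 = (np.median(np.abs(c)) / 0.6745) ** 2    # robust noise guess (MAD)
    b = np.abs(c).mean()
    w = 0.5                                          # prior weight of 'detail'
    for _ in range(iters):
        g = np.exp(-0.5 * c ** 2 / sigma2) / np.sqrt(2.0 * np.pi * sigma2)
        l = np.exp(-np.abs(c) / b) / (2.0 * b)
        r = w * l / (w * l + (1.0 - w) * g + 1e-300)  # responsibility of 'detail'
        w = r.mean()
        b = (r * np.abs(c)).sum() / r.sum()
        sigma2 = ((1.0 - r) * c ** 2).sum() / (1.0 - r).sum()
    return sigma2, b, w, r

# Synthetic coefficients: mostly noise plus a sparse Laplacian detail population
rng = np.random.default_rng(3)
c = np.concatenate([rng.normal(0.0, 0.1, 5000), rng.laplace(0.0, 1.0, 500)])
sigma2, b, w, r = em_noise_detail(c)
shrunk = r * c   # keep each coefficient in proportion to its detail probability
```

Note that the noise variance sigma2 is recovered by the EM iteration itself, which is the point the abstract makes about avoiding a separate noise-variance estimator.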
Optical Density Analysis of X-Rays Utilizing Calibration Tooling to Estimate Thickness of Parts
NASA Technical Reports Server (NTRS)
Grau, David
2012-01-01
This process is designed to estimate the thickness change of a material through data analysis of a digitized version of an x-ray (or a digital x-ray) containing the material in question and various tooling. Using this process, it is possible to estimate a material's thickness change in a region of the material or part that is thinner than the rest of the reference thickness. The same principle can also be used to determine thickening using a thinner reference region, or to develop contour plots of an entire part. Proper tooling must be used. An x-ray film with an S-shaped characteristic curve, or a digital x-ray device producing like characteristics, is necessary. A film with linear characteristics would be ideal; however, at the time of this reporting, no such film is known. Machined components (with known fractional thicknesses) of a like material (similar density) to the material to be measured are necessary. The machined components should have machined through-holes. For ease of use and better accuracy, the through-holes should be larger than 0.125 in. (3.2 mm). Standard components for this use are known as penetrameters or image quality indicators. Also needed is standard x-ray equipment and, if film is used in place of digital equipment, x-ray digitization equipment with proven conversion properties. Typical x-ray digitization equipment is commonly used in the medical industry and creates digital images of x-rays in DICOM format. It is recommended to scan the image in 16-bit format, although 12-bit and 8-bit resolutions are acceptable. Finally, x-ray analysis software that allows accurate digital image density calculations, such as the ImageJ freeware, is needed.
The actual procedure requires the test article to be placed on the raw x-ray, ensuring the region of interest is aligned for perpendicular x-ray exposure capture. One or multiple machined components of like material/density with known thicknesses are placed atop the part (preferably in a region of nominal and non-varying thickness) such that exposure of the combined part and machined component lay-up is captured on the x-ray. Depending on the accuracy required, the machined component's thickness must be carefully chosen. Similarly, depending on the accuracy required, the lay-up must be exposed such that the regions of the x-ray to be analyzed have a density range between 1 and 4.5. After the exposure, the image is digitized, and the digital image can then be analyzed using the image analysis software.
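As an illustration of the final analysis step, the calibration relationship between the known penetrameter thicknesses and their measured optical densities can be interpolated to estimate an unknown thickness. The step thicknesses and density readings below are hypothetical, and linear interpolation is only a local stand-in for the film's S-shaped characteristic curve:

```python
import numpy as np

# Hypothetical calibration data from machined penetrameter steps of known
# fractional thickness, with optical density measured over each step's image.
step_thickness_in = np.array([0.02, 0.04, 0.06, 0.08, 0.10])  # known thicknesses
step_density      = np.array([3.9, 3.2, 2.6, 2.1, 1.7])       # measured densities

def estimate_thickness(measured_density):
    """Interpolate thickness from optical density along the calibration curve.

    Density decreases with thickness, so the points are sorted ascending in
    density before calling np.interp. The measured density should fall well
    inside the calibrated range (here, density 1-4.5)."""
    order = np.argsort(step_density)
    return float(np.interp(measured_density, step_density[order],
                           step_thickness_in[order]))

t = estimate_thickness(2.35)  # density read from the region of interest
```

In practice more calibration steps, or a fit to the full characteristic curve, would reduce the interpolation error.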
Lattice potential energy estimation for complex ionic salts from density measurements.
Jenkins, H Donald Brooke; Tudela, David; Glasser, Leslie
2002-05-01
This paper is one of a series exploring simple approaches for the estimation of lattice energy of ionic materials, avoiding elaborate computation. The readily accessible, frequently reported, and easily measurable (requiring only small quantities of inorganic material) property of density, rho(m), is related, as a rectilinear function of the form (rho(m)/M(m))^(1/3), to the lattice energy U(POT) of ionic materials, where M(m) is the chemical formula mass. Dependence on the cube root is particularly advantageous because this considerably lowers the effects of any experimental errors in the density measurement used. The relationship that is developed arises from the dependence (previously reported in Jenkins, H. D. B.; Roobottom, H. K.; Passmore, J.; Glasser, L. Inorg. Chem. 1999, 38, 3609) of lattice energy on the inverse cube root of the molar volume. These latest equations have the form U(POT)/kJ mol^-1 = gamma (rho(m)/M(m))^(1/3) + delta, where for the simpler salts (i.e., U(POT) < 5000 kJ mol^-1) gamma and delta are coefficients dependent upon the stoichiometry of the inorganic material, and for materials for which U(POT) > 5000 kJ mol^-1, gamma/kJ mol^-1 cm = 10^-7 A I (2I N(A))^(1/3) and delta/kJ mol^-1 = 0, where A is the general electrostatic conversion factor (A = 121.4 kJ mol^-1), I is the ionic strength (I = 1/2 sum of n(i) z(i)^2), and N(A) is Avogadro's constant. PMID:11978099
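The density-based relationship is easy to evaluate directly. The sketch below uses illustrative gamma and delta values of the kind tabulated for a simple 1:1 (MX) salt; the exact coefficients should be taken from the paper's tables, so treat the numbers here as assumptions, with NaCl as a test case:

```python
# Density-based lattice potential energy, U_POT = gamma*(rho_m/M_m)**(1/3) + delta.
# The coefficients below are illustrative values of the kind tabulated in the
# paper for a 1:1 (MX) salt; check the original tables before relying on them.
GAMMA_MX = 1981.2   # kJ mol^-1 cm (assumed illustrative value)
DELTA_MX = 103.8    # kJ mol^-1   (assumed illustrative value)

def lattice_energy(rho_g_cm3, molar_mass_g_mol, gamma=GAMMA_MX, delta=DELTA_MX):
    """Estimate U_POT in kJ/mol from density (g/cm^3) and formula mass (g/mol)."""
    return gamma * (rho_g_cm3 / molar_mass_g_mol) ** (1.0 / 3.0) + delta

u_nacl = lattice_energy(2.17, 58.44)  # NaCl: rho = 2.17 g/cm^3, M = 58.44 g/mol
```

With these coefficients the NaCl estimate lands near the conventional literature value of roughly 770 kJ/mol, illustrating why only a density measurement is needed.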
Markedly divergent estimates of Amazon forest carbon density from ground plots and satellites
Mitchard, Edward T A; Feldpausch, Ted R; Brienen, Roel J W; Lopez-Gonzalez, Gabriela; Monteagudo, Abel; Baker, Timothy R; Lewis, Simon L; Lloyd, Jon; Quesada, Carlos A; Gloor, Manuel; ter Steege, Hans; Meir, Patrick; Alvarez, Esteban; Araujo-Murakami, Alejandro; Aragão, Luiz E O C; Arroyo, Luzmila; Aymard, Gerardo; Banki, Olaf; Bonal, Damien; Brown, Sandra; Brown, Foster I; Cerón, Carlos E; Chama Moscoso, Victor; Chave, Jerome; Comiskey, James A; Cornejo, Fernando; Corrales Medina, Massiel; Da Costa, Lola; Costa, Flavia R C; Di Fiore, Anthony; Domingues, Tomas F; Erwin, Terry L; Frederickson, Todd; Higuchi, Niro; Honorio Coronado, Euridice N; Killeen, Tim J; Laurance, William F; Levis, Carolina; Magnusson, William E; Marimon, Beatriz S; Marimon Junior, Ben Hur; Mendoza Polo, Irina; Mishra, Piyush; Nascimento, Marcelo T; Neill, David; Núñez Vargas, Mario P; Palacios, Walter A; Parada, Alexander; Pardo Molina, Guido; Peña-Claros, Marielos; Pitman, Nigel; Peres, Carlos A; Poorter, Lourens; Prieto, Adriana; Ramirez-Angulo, Hirma; Restrepo Correa, Zorayda; Roopsind, Anand; Roucoux, Katherine H; Rudas, Agustin; Salomão, Rafael P; Schietti, Juliana; Silveira, Marcos; de Souza, Priscila F; Steininger, Marc K; Stropp, Juliana; Terborgh, John; Thomas, Raquel; Toledo, Marisol; Torres-Lezama, Armando; van Andel, Tinde R; van der Heijden, Geertje M F; Vieira, Ima C G; Vieira, Simone; Vilanova-Torre, Emilio; Vos, Vincent A; Wang, Ophelia; Zartman, Charles E; Malhi, Yadvinder; Phillips, Oliver L
2014-01-01
Aim The accurate mapping of forest carbon stocks is essential for understanding the global carbon cycle, for assessing emissions from deforestation, and for rational land-use planning. Remote sensing (RS) is currently the key tool for this purpose, but RS does not estimate vegetation biomass directly, and thus may miss significant spatial variations in forest structure. We test the stated accuracy of pantropical carbon maps using a large independent field dataset. Location Tropical forests of the Amazon basin. The permanent archive of the field plot data can be accessed at: http://dx.doi.org/10.5521/FORESTPLOTS.NET/2014_1 Methods Two recent pantropical RS maps of vegetation carbon are compared to a unique ground-plot dataset, involving tree measurements in 413 large inventory plots located in nine countries. The RS maps were compared directly to field plots, and kriging of the field data was used to allow area-based comparisons. Results The two RS carbon maps fail to capture the main gradient in Amazon forest carbon detected using 413 ground plots, from the densely wooded tall forests of the north-east to the light-wooded, shorter forests of the south-west. The differences between plots and RS maps far exceed the uncertainties given in these studies, with whole regions over- or under-estimated by >25%, whereas regional uncertainties for the maps were reported to be far smaller. Main conclusions Carbon-mapping efforts must account for regional variation in wood density and allometry to create maps suitable for carbon accounting. The use of single relationships between tree canopy height and above-ground biomass inevitably yields large, spatially correlated errors. This presents a significant challenge to both the forest conservation and remote sensing communities, because neither wood density nor species assemblages can be reliably mapped from space.
NASA Astrophysics Data System (ADS)
Vaezi, Y.; van der Baan, M.
2014-05-01
Reliability of microseismic interpretations is very much dependent on how robustly microseismic events are detected and picked. Various event detection algorithms are available but detection of weak events is a common challenge. Apart from the event magnitude, hypocentral distance, and background noise level, the instrument self-noise can also act as a major constraint for the detection of weak microseismic events in particular for borehole deployments in quiet environments such as below 1.5-2 km depths. Instrument self-noise levels that are comparable or above background noise levels may not only complicate detection of weak events at larger distances but also challenge methods such as seismic interferometry which aim at analysis of coherent features in ambient noise wavefields to reveal subsurface structure. In this paper, we use power spectral densities to estimate the instrument self-noise for a borehole data set acquired during a hydraulic fracturing stimulation using modified 4.5-Hz geophones. We analyse temporal changes in recorded noise levels and their time-frequency variations for borehole and surface sensors and conclude that instrument noise is a limiting factor in the borehole setting, impeding successful event detection. Next we suggest that the variations of the spectral powers in a time-frequency representation can be used as a new criterion for event detection. Compared to the common short-time average/long-time average method, our suggested approach requires a similar number of parameters but with more flexibility in their choice. It detects small events with anomalous spectral powers with respect to an estimated background noise spectrum with the added advantage that no bandpass filtering is required prior to event detection.
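For contrast with the spectral-power criterion proposed above, the classic short-time average/long-time average (STA/LTA) detector it is benchmarked against can be sketched in a few lines; the window lengths and synthetic trace below are arbitrary choices, not the study's parameters:

```python
import numpy as np

def sta_lta(trace, n_sta, n_lta):
    """Classic STA/LTA detector: ratio of short- to long-window mean
    absolute amplitude, computed efficiently with a cumulative sum."""
    a = np.abs(trace)
    csum = np.concatenate(([0.0], np.cumsum(a)))
    sta = (csum[n_sta:] - csum[:-n_sta]) / n_sta   # short-window means
    lta = (csum[n_lta:] - csum[:-n_lta]) / n_lta   # long-window means
    # Align so both windows end on the same sample before taking the ratio.
    m = min(len(sta), len(lta))
    return sta[-m:] / np.maximum(lta[-m:], 1e-12)

# Demo: background noise with a burst ("event") in the middle of the record
rng = np.random.default_rng(0)
trace = 0.1 * rng.standard_normal(5000)
trace[2500:2600] += rng.standard_normal(100)
ratio = sta_lta(trace, n_sta=50, n_lta=500)
```

A detection would be declared where the ratio exceeds a user-chosen trigger level; as the abstract notes, the spectral-power approach needs a similar number of parameters but offers more flexibility in their choice.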
William T. Friedewald; Robert I. Levy; Donald S. Fredrickson
1972-01-01
A method for estimating the cholesterol content of the serum low-density lipoprotein fraction (Sf 0-20) is presented. The method involves measurements of fasting plasma total cholesterol, triglyceride, and high-density lipoprotein cholesterol concentrations, none of which requires the use of the preparative ultracentrifuge. Comparison of this suggested procedure with the more direct procedure, in which the ultracentrifuge is used, yielded
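The estimate described here is the Friedewald equation, LDL-C = TC - HDL-C - TG/5 (all concentrations in mg/dL, with TG/5 approximating VLDL cholesterol). A minimal implementation:

```python
def friedewald_ldl(total_chol, hdl, triglycerides):
    """Friedewald estimate of LDL cholesterol (all values in mg/dL).

    LDL-C = TC - HDL-C - TG/5, where TG/5 approximates VLDL cholesterol.
    The approximation breaks down at high triglyceride levels
    (commonly cited as above ~400 mg/dL)."""
    return total_chol - hdl - triglycerides / 5.0

ldl = friedewald_ldl(200, 50, 150)  # -> 120.0 mg/dL
```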
M. H. Wafy; A. S. Brierley; J. L. Watkins
This paper presents Maximum Entropy (MaxEnt) reconstructions of krill distribution and estimates of mean krill density within two survey boxes (dimensions 80 km × 100 km) north of South Georgia. The reconstructions were generated from line-transect acoustic survey data gathered in the boxes during austral summers from 1996 to 2000. Krill densities had previously been determined at approximately 0.5 km
Bíl, Michal; Andrášik, Richard; Janoška, Zbyněk
2013-06-01
This paper proposes a procedure that evaluates clusters of traffic accidents and ranks them according to their significance. Standard kernel density estimation was extended with statistical significance testing of the resulting clusters of traffic accidents. This allowed us to identify the most important clusters within each section. They represent places where the kernel density function exceeds the significance level corresponding to the 95th percentile, which is estimated using Monte Carlo simulations. To show only the most important clusters within a set of sections, we introduced cluster strength and cluster stability evaluation procedures. The method was applied in the Southern Moravia Region of the Czech Republic. PMID:23567216
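A minimal one-dimensional sketch of the idea: kernel density estimation of accident locations along a road section, with the significance level taken as the pointwise 95th percentile of densities from Monte Carlo simulations of uniformly re-scattered accidents. The grid size, bandwidth, simulation count and synthetic accident data are illustrative choices, not the paper's settings:

```python
import numpy as np

def kde_1d(points, grid, bandwidth):
    # Gaussian kernel density estimate of accident locations along a section
    z = (grid[:, None] - points[None, :]) / bandwidth
    return np.exp(-0.5 * z**2).sum(axis=1) / (len(points) * bandwidth * np.sqrt(2 * np.pi))

def significant_clusters(points, length, bandwidth, n_sim=500, pct=95, seed=0):
    """Flag grid cells where the KDE exceeds the pct-th percentile of
    densities from uniformly scattered accidents (Monte Carlo null model)."""
    rng = np.random.default_rng(seed)
    grid = np.linspace(0.0, length, 200)
    dens = kde_1d(points, grid, bandwidth)
    sims = np.empty((n_sim, grid.size))
    for i in range(n_sim):
        sims[i] = kde_1d(rng.uniform(0.0, length, len(points)), grid, bandwidth)
    threshold = np.percentile(sims, pct, axis=0)
    return grid, dens, dens > threshold

# Demo: 20 accidents clustered near km 2.0 plus 10 scattered over a 10 km section
rng = np.random.default_rng(1)
accidents = np.concatenate([2.0 + 0.05 * rng.standard_normal(20),
                            rng.uniform(0.0, 10.0, 10)])
grid, dens, mask = significant_clusters(accidents, length=10.0, bandwidth=0.3)
```

The paper's cluster-strength and cluster-stability measures would then be computed over the contiguous flagged cells; they are omitted here.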
Wavelet-based statistical approach for speckle reduction in medical ultrasound images.
Gupta, S; Chauhan, R C; Sexana, S C
2004-03-01
A novel speckle-reduction method is introduced, based on soft thresholding of the wavelet coefficients of a logarithmically transformed medical ultrasound image. The method is based on generalised Gaussian distribution (GGD) modelling of sub-band coefficients. The method used was a variant of the recently published BayesShrink method of Chang and Vetterli, derived in the Bayesian framework for denoising natural images. It was scale adaptive, because the parameters required for estimating the threshold depend on scale and sub-band data. The threshold was computed as K sigma^2/sigma(x), where sigma and sigma(x) were the standard deviations of the noise and of the sub-band data of the noise-free image, respectively, and K was a scale parameter. Experimental results showed that the proposed method outperformed the median filter and the homomorphic Wiener filter by 29% in terms of the coefficient of correlation and 4% in terms of the edge preservation parameter. The numerical values of these quantitative parameters indicated the good feature-preservation performance of the algorithm, as desired for better diagnosis in medical image processing. PMID:15125148
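The core of such a scheme, soft thresholding with the sub-band-adaptive threshold T = K sigma^2/sigma_x, can be sketched as follows. The wavelet transform itself is omitted, sigma_x is estimated from the observed sub-band variance, and the synthetic sparse "sub-band" is purely illustrative:

```python
import numpy as np

def soft_threshold(coeffs, t):
    # Shrink coefficients toward zero by t (soft thresholding)
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)

def bayes_shrink_threshold(subband, sigma_noise, k=1.0):
    """Threshold T = K * sigma^2 / sigma_x for one sub-band.

    sigma_x (std of the noise-free sub-band) is estimated from the observed
    sub-band variance minus the noise variance; in practice sigma itself is
    often estimated from the finest diagonal sub-band via median(|d|)/0.6745."""
    sigma_x = np.sqrt(max(subband.var() - sigma_noise**2, 1e-12))
    return k * sigma_noise**2 / sigma_x

# Demo on a synthetic sparse sub-band: a few large coefficients plus unit noise
rng = np.random.default_rng(0)
clean = np.zeros(1000)
clean[rng.choice(1000, 100, replace=False)] = 10.0
noisy = clean + rng.standard_normal(1000)
t = bayes_shrink_threshold(noisy, sigma_noise=1.0)
denoised = soft_threshold(noisy, t)
```

In the full method this thresholding is applied per sub-band of the log-transformed image, followed by the inverse wavelet transform and exponentiation.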
Karanth, K.U.; Chundawat, R.S.; Nichols, J.D.; Kumar, N.S.
2004-01-01
Tropical dry-deciduous forests comprise more than 45% of the tiger (Panthera tigris) habitat in India. However, in the absence of rigorously derived estimates of ecological densities of tigers in dry forests, critical baseline data for managing tiger populations are lacking. In this study tiger densities were estimated using photographic capture-recapture sampling in the dry forests of Panna Tiger Reserve in Central India. Over a 45-day survey period, 60 camera trap sites were sampled in a well-protected part of the 542-km2 reserve during 2002. A total sampling effort of 914 camera-trap-days yielded photo-captures of 11 individual tigers over 15 sampling occasions that effectively covered a 418-km2 area. The closed capture-recapture model Mh, which incorporates individual heterogeneity in capture probabilities, fitted these photographic capture history data well. The estimated capture probability/sample, 0.04, resulted in an estimated tiger population size and standard error of 29 (9.65), and a density of 6.94 (3.23) tigers/100 km2. The estimated tiger density matched predictions based on prey abundance. Our results suggest that, if managed appropriately, the available dry forest habitat in India has the potential to support a population size of about 9000 wild tigers.
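The reported density follows directly from the abundance estimate and the effectively sampled area:

```python
# Density from the abstract's capture-recapture estimates:
# N-hat = 29 tigers over an effectively sampled area of 418 km^2,
# reported per 100 km^2 (the quoted SE of 3.23 also folds in area uncertainty).
n_hat = 29
area_km2 = 418
density_per_100km2 = n_hat / area_km2 * 100  # about 6.94 tigers/100 km^2
```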
Wavelet-based multiscale adjoint waveform-difference tomography using body and surface waves
NASA Astrophysics Data System (ADS)
Yuan, Y. O.; Simons, F. J.; Bozdag, E.
2014-12-01
We present a multi-scale scheme for full elastic waveform-difference inversion. Using a wavelet transform proves to be a key factor in mitigating cycle-skipping effects. We start with coarse representations of the seismogram to correct a large-scale background model, and subsequently explain the residuals in the fine scales of the seismogram to map heterogeneities of great complexity. We have previously applied the multi-scale approach successfully to body waves generated in a standard model from the exploration industry: a modified two-dimensional elastic Marmousi model. With this model we explored the optimal choice of wavelet family, number of vanishing moments and decomposition depth. For this presentation we explore the sensitivity of surface waves in waveform-difference tomography. The incorporation of surface waves is rife with cycle-skipping problems compared to inversions considering body waves only. We implemented an envelope-based objective function probed via a multi-scale wavelet analysis to measure the distance between predicted and target surface-wave waveforms in a synthetic model of heterogeneous near-surface structure. Our proposed method successfully purges the local minima present in the waveform-difference misfit surface. A shallow elastic model 100 m in depth is used to test the surface-wave inversion scheme. We also analyzed the sensitivities of surface waves and body waves in full waveform inversions, as well as the effects of incorrect density information on elastic parameter inversions. Based on those numerical experiments, we ultimately formalized a flexible scheme to consider both body and surface waves in adjoint tomography. While our early examples are constructed from exploration-style settings, our procedure will be very valuable for the study of global network data.
Chan, Poh Yin; Tong, Chi Ming; Durrant, Marcus C
2011-09-01
An empirical method for estimation of the boiling points of organic molecules based on density functional theory (DFT) calculations with polarized continuum model (PCM) solvent corrections has been developed. The boiling points are calculated as the sum of three contributions. The first term is calculated directly from the structural formula of the molecule, and is related to its effective surface area. The second is a measure of the electronic interactions between molecules, based on the DFT-PCM solvation energy, and the third is employed only for planar aromatic molecules. The method is applicable to a very diverse range of organic molecules, with normal boiling points in the range of -50 to 500 °C, and includes ten different elements (C, H, Br, Cl, F, N, O, P, S and Si). Plots of observed versus calculated boiling points gave R²=0.980 for a training set of 317 molecules, and R²=0.979 for a test set of 74 molecules. The role of intramolecular hydrogen bonding in lowering the boiling points of certain molecules is quantitatively discussed. PMID:21798775
Measuring and Modeling Fault Density for Plume-Fault Encounter Probability Estimation
Jordan, P.D.; Oldenburg, C.M.; Nicot, J.-P.
2011-05-15
Emission of carbon dioxide from fossil-fueled power generation stations contributes to global climate change. Storage of this carbon dioxide within the pores of geologic strata (geologic carbon storage) is one approach to mitigating the climate change that would otherwise occur. The large storage volume needed for this mitigation requires injection into brine-filled pore space in reservoir strata overlain by cap rocks. One of the main concerns of storage in such rocks is leakage via faults. In the early stages of site selection, site-specific fault coverages are often not available. This necessitates a method for using available fault data to develop an estimate of the likelihood of injected carbon dioxide encountering and migrating up a fault, primarily due to buoyancy. Fault population statistics provide one of the main inputs to calculate the encounter probability. Previous fault population statistics work is shown to be applicable to areal fault density statistics. This result is applied to a case study in the southern portion of the San Joaquin Basin, with the result that a carbon dioxide plume from a previously planned injection had a 3% chance of encountering a fully seal-offsetting fault.
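The abstract does not give the encounter-probability formula. As an illustrative stand-in (not the paper's actual calculation), if fault locations are idealized as a two-dimensional Poisson process with areal density lambda, the chance that a plume footprint of area A overlies at least one fault is 1 - exp(-lambda*A). The inputs below are hypothetical:

```python
import math

def encounter_probability(fault_density_per_km2, plume_area_km2):
    """Toy model: fault locations as a 2-D Poisson process with areal
    density lambda; P(at least one fault under the plume) = 1 - exp(-lambda*A).
    This is an illustrative stand-in, not the study's actual method."""
    return 1.0 - math.exp(-fault_density_per_km2 * plume_area_km2)

p = encounter_probability(0.003, 10.0)  # hypothetical density and footprint
```

For small lambda*A the probability is approximately lambda*A itself, which is why a few-percent result is plausible for sparse faulting under a modest plume footprint.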
NASA Technical Reports Server (NTRS)
Sjoegreen, B.; Yee, H. C.
2001-01-01
The recently developed essentially fourth-order or higher low-dissipative shock-capturing scheme of Yee, Sandham and Djomehri (1999) aimed at minimizing numerical dissipation for high speed compressible viscous flows containing shocks, shears and turbulence. To detect non-smooth behavior and control the amount of numerical dissipation to be added, Yee et al. employed an artificial compression method (ACM) of Harten (1978) but utilized it in an entirely different context than Harten originally intended. The ACM sensor consists of two tuning parameters and is highly physical-problem dependent. To minimize the tuning of parameters and physical problem dependence, new sensors with improved detection properties are proposed. The new sensors are derived from appropriate non-orthogonal wavelet basis functions and can be used to completely switch off the extra numerical dissipation outside shock layers. The non-dissipative spatial base scheme of arbitrarily high order of accuracy can be maintained without compromising its stability at all parts of the domain where the solution is smooth. Two types of redundant non-orthogonal wavelet basis functions are considered. One is the B-spline wavelet (Mallat & Zhong 1992) used by Gerritsen and Olsson (1996) in an adaptive mesh refinement method to determine regions where refinement should be done. The other is a modification of the multiresolution method of Harten (1995), converting it to a new, redundant, non-orthogonal wavelet. The wavelet sensor is then obtained by computing the estimated Lipschitz exponent of a chosen physical quantity (or vector) to be sensed on a chosen wavelet basis function. Both wavelet sensors can be viewed as dual-purpose adaptive methods leading to dynamic numerical dissipation control and improved grid adaptation indicators. Consequently, they are useful not only for shock-turbulence computations but also for computational aeroacoustics and numerical combustion.
In addition, these sensors are scheme-independent and can serve as stand-alone options for numerical algorithms other than the Yee et al. scheme.
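A crude stand-in for such a sensor, built from first differences of the solution smoothed at several dyadic scales rather than from the paper's redundant wavelet bases and Lipschitz-exponent estimates, still illustrates the idea of flagging non-smooth regions where extra dissipation is needed:

```python
import numpy as np

def smooth(u, s):
    # Moving-average smoothing, window 2s+1, edge-padded (crude scaling function)
    k = 2 * s + 1
    padded = np.pad(u, s, mode="edge")
    return np.convolve(padded, np.ones(k) / k, mode="valid")

def wavelet_sensor(u, scales=(1, 2, 4)):
    """Toy multi-scale non-smoothness sensor: absolute first differences of
    the field smoothed at several dyadic scales, combined by their maximum.
    A stand-in for the paper's redundant non-orthogonal wavelet sensors,
    which instead estimate a local Lipschitz exponent across scales."""
    detail = np.zeros_like(u)
    for s in scales:
        sm = smooth(u, s)
        detail = np.maximum(detail, np.abs(np.diff(sm, prepend=sm[0])))
    return detail

# Demo: smooth sine with an embedded shock-like jump at x = 0.5
x = np.linspace(0.0, 1.0, 400)
u = np.sin(2 * np.pi * x)
u[x > 0.5] += 1.0
sensor = wavelet_sensor(u)
```

Thresholding the sensor would mark the shock layer for added dissipation while leaving the smooth regions untouched, which is the behavior the paper's sensors deliver more rigorously.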
Hall, S. A.; Burke, I.C.; Box, D. O.; Kaufmann, M. R.; Stoker, Jason M.
2005-01-01
The ponderosa pine forests of the Colorado Front Range, USA, have historically been subjected to wildfires. Recent large burns have increased public interest in fire behavior and effects, and scientific interest in the carbon consequences of wildfires. Remote sensing techniques can provide spatially explicit estimates of stand structural characteristics. Some of these characteristics can be used as inputs to fire behavior models, increasing our understanding of the effect of fuels on fire behavior. Others provide estimates of carbon stocks, allowing us to quantify the carbon consequences of fire. Our objective was to use discrete-return lidar to estimate such variables, including stand height, total aboveground biomass, foliage biomass, basal area, tree density, canopy base height and canopy bulk density. We developed 39 metrics from the lidar data, and used them in limited combinations in regression models, which we fit to field estimates of the stand structural variables. We used an information–theoretic approach to select the best model for each variable, and to select the subset of lidar metrics with most predictive potential. Observed versus predicted values of stand structure variables were highly correlated, with r2 ranging from 57% to 87%. The most parsimonious linear models for the biomass structure variables, based on a restricted dataset, explained between 35% and 58% of the observed variability. Our results provide us with useful estimates of stand height, total aboveground biomass, foliage biomass and basal area. There is promise for using this sensor to estimate tree density, canopy base height and canopy bulk density, though more research is needed to generate robust relationships. We selected 14 lidar metrics that showed the most potential as predictors of stand structure. 
We suggest that the focus of future lidar studies should broaden to include low density forests, particularly systems where the vertical structure of the canopy is important, such as fire prone forests.
NASA Astrophysics Data System (ADS)
Nakano, S.; Fok, M.-C.; Brandt, P. C.; Higuchi, T.
2014-05-01
We have developed a technique by which to estimate the spatial distribution of plasmaspheric helium ions based on extreme ultraviolet (EUV) data obtained from the IMAGE satellite. The estimation is performed using a linear inversion method based on the Bayesian approach. The global imaging data from the IMAGE satellite enable us to estimate a global two-dimensional distribution of the helium ions in the plasmasphere. We applied this technique to a synthetic EUV image generated from a numerical model. This technique was confirmed to successfully reproduce the helium ion density that generated the synthetic EUV data. We also demonstrate how the proposed technique works for real data using two real EUV images.
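A generic sketch of this kind of linear Bayesian inversion is MAP estimation with a zero-mean Gaussian prior; the projection matrix, variances and "density" vector below are made up for illustration and are unrelated to the actual EUV imaging geometry:

```python
import numpy as np

def map_linear_inversion(G, d, noise_var, prior_var):
    """MAP estimate for a linear model d = G m + noise, with independent
    Gaussian noise and a zero-mean Gaussian prior on m:
        m_hat = (G^T G / s_n^2 + I / s_m^2)^-1 G^T d / s_n^2.
    A generic sketch of Bayesian linear inversion, not the paper's setup."""
    n = G.shape[1]
    A = G.T @ G / noise_var + np.eye(n) / prior_var
    return np.linalg.solve(A, G.T @ d / noise_var)

# Demo: recover a small synthetic "density" vector from noisy line integrals
rng = np.random.default_rng(0)
m_true = np.array([1.0, 2.0, 0.5, 1.5])
G = rng.uniform(0.0, 1.0, (40, 4))          # hypothetical projection geometry
d = G @ m_true + 0.01 * rng.standard_normal(40)
m_hat = map_linear_inversion(G, d, noise_var=0.01**2, prior_var=10.0)
```

The synthetic-image test described in the abstract plays the same role as this demo: inverting data generated from a known model to confirm the estimator recovers it.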
A model-based meta-analysis for estimating species-specific wood density and identifying potential
Lichstein, Jeremy W.
Wood density (WD) is a key functional trait linked to mechanical stability, growth rates and drought tolerance, and ultimately to vital rates such as survival, growth, reproduction and fitness (Ackerly 2003). The analysis provides well-constrained WD estimates for 305 tree species, which may be useful for tree growth and forest modelling.
Spatial estimation of the density and carbon content of host populations for Phytophthora ramorum
Cushman, J. Hall
2011-06-16
The expected carbon sequestration can be threatened by regional-scale disturbances, including insect outbreaks. Keywords: forest inventory, landscape epidemiology, tree.
Baugh, W., Klinger, L., Guenther, A., and Geron, C.D. Measurement of Oak Tree Density with Landsat TM Data for Estimating Biogenic Isoprene Emissions in Tennessee, USA. International Journal of Remote Sensing 22(14):2793-2810 (2001).
Optimal Rates of Convergence for Estimating the Null Density and Proportion of Non-Null Effects
Jin, Jiashun
Keywords: proportion of non-null effects, rate of convergence, two-point argument. The setting takes independent Bernoulli(epsilon) indicators, with epsilon in (0, 1), where a zero value of the j-th indicator means that the null hypothesis H(j) is true.
Barrash, Warren
Hydrological parameter estimation from a conservative tracer test with variable-density effects. Here we analyze a conservative tracer test conducted at the Boise Hydrogeophysical Research Site, combining geophysical and hydrological information to constrain hydrological processes. The information contained in this experiment is evaluated through this combination.
NASA Astrophysics Data System (ADS)
Seabolt, M. A.; Chourasia, A. R.
2001-10-01
X-ray photoelectron spectroscopy has been utilized to study the changes in the electronic structure of titanium and nickel in crystalline titanium-nickel compounds containing 48, 50 and 51.5 atomic percent nickel. In XPS, the emerging photoelectron carries the signature of the density of states at the Fermi level. To estimate the density of states, the Ti 2p and Ni 2p core levels have been studied in all of these samples. The background under these core-level peaks has been estimated using the Shirley and Tougaard models. The intrinsic loss determined from these models has been found to correlate with the density of states at the Fermi level. Details of the investigation will be presented.
Estimation of refractive index and density of lubricants under high pressure by Brillouin scattering
NASA Astrophysics Data System (ADS)
Nakamura, Y.; Fujishiro, I.; Kawakami, H.
1994-07-01
Employing a diamond-anvil cell, Brillouin scattering spectra at 90° and 180° scattering angles were measured for synthetic lubricants (paraffinic and naphthenic oils), and the sound velocity, density, and refractive index under high pressure were obtained. The density obtained from the thermodynamic relation was compared with that from the Lorentz-Lorenz formula. The density was also compared with Dowson's density-pressure equation for lubricants, and the density-pressure characteristics of the paraffinic and naphthenic oils were described considering the molecular structure of the solidified lubricants. The effect of such physical properties of lubricants on the elastohydrodynamic lubrication of ball bearings, gears and traction drives was considered.
Estimation of tool pose based on force-density correlation during robotic drilling.
Williamson, Tom M; Bell, Brett J; Gerber, Nicolas; Salas, Lilibeth; Zysset, Philippe; Caversaccio, Marco; Weber, Stefan
2013-04-01
The application of image-guided systems with or without support by surgical robots relies on the accuracy of the navigation process, including patient-to-image registration. The surgeon must carry out the procedure based on the information provided by the navigation system, usually without being able to verify its correctness beyond visual inspection. Misleading surrogate parameters such as the fiducial registration error are often used to describe the success of the registration process, while a lack of methods describing the effects of navigation errors, such as those caused by tracking or calibration, may prevent the application of image guidance in certain accuracy-critical interventions. During minimally invasive mastoidectomy for cochlear implantation, a direct tunnel is drilled from the outside of the mastoid to a target on the cochlea based on registration using landmarks solely on the surface of the skull. Using this methodology, it is impossible to detect if the drill is advancing in the correct direction and that injury of the facial nerve will be avoided. To overcome this problem, a tool localization method based on drilling process information is proposed. The algorithm estimates the pose of a robot-guided surgical tool during a drilling task based on the correlation of the observed axial drilling force and the heterogeneous bone density in the mastoid extracted from 3-D image data. We present here one possible implementation of this method tested on ten tunnels drilled into three human cadaver specimens where an average tool localization accuracy of 0.29 mm was observed. PMID:23269744
NASA Astrophysics Data System (ADS)
Siirila, E. R.; Fernandez-Garcia, D.; Sanchez-Vila, X.
2014-12-01
Particle tracking (PT) techniques, often considered favorable over Eulerian techniques due to artificial smoothening in breakthrough curves (BTCs), are evaluated in a risk-driven framework. Recent work has shown that given a relatively small number of particles (np), PT methods can yield well-constructed BTCs with kernel density estimators (KDEs). This work compares KDE and non-KDE BTCs simulated as a function of np (10^2-10^8) and averaged as a function of the exposure duration, ED. Results show that regardless of BTC shape complexity, un-averaged PT BTCs show a large bias over several orders of magnitude in concentration (C) when compared to the KDE results, remarkably even when np is as low as 10^2. With the KDE, several orders of magnitude fewer np are required to obtain the same global error in BTC shape as the PT technique. PT and KDE BTCs are averaged as a function of the ED with standard and new methods incorporating the optimal h (ANA). The lowest-error curve is obtained through the ANA method, especially for smaller EDs. The percent error of the peak of averaged BTCs, important in a risk framework, is approximately zero for all scenarios and all methods for np >= 10^5, but varies between the ANA and PT methods when np is lower. For fewer np, the ANA solution provides a lower-error fit except when C oscillations are present during a short time frame. We show that obtaining a representative average exposure concentration relies on an accurate representation of the BTC, especially when data are scarce.
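The contrast between binned (non-KDE) and kernel-density BTCs can be sketched from a set of particle arrival times; both estimates are normalized so the curve integrates to one. The log-normal arrivals, grid and fixed bandwidth below are illustrative, and no optimal-bandwidth (ANA-style) selection is attempted:

```python
import numpy as np

def btc_histogram(arrival_times, t_grid, dt):
    # Binned breakthrough curve: particle counts per time bin, normalized
    edges = np.append(t_grid, t_grid[-1] + dt)
    counts, _ = np.histogram(arrival_times, bins=edges)
    return counts / (len(arrival_times) * dt)

def btc_kde(arrival_times, t_grid, h):
    # Gaussian-kernel breakthrough curve; h is the kernel bandwidth
    z = (t_grid[:, None] - arrival_times[None, :]) / h
    return np.exp(-0.5 * z**2).sum(axis=1) / (len(arrival_times) * h * np.sqrt(2 * np.pi))

# Demo: log-normally distributed arrival times for np = 100 particles
rng = np.random.default_rng(0)
arrivals = rng.lognormal(mean=1.0, sigma=0.4, size=100)
t = np.linspace(0.0, 10.0, 200)
c_hist = btc_histogram(arrivals, t, dt=t[1] - t[0])
c_kde = btc_kde(arrivals, t, h=0.5)
```

With so few particles the binned curve is noticeably ragged while the KDE curve is smooth, which is the qualitative effect the comparison above quantifies.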
NASA Astrophysics Data System (ADS)
Xu, Zhonghua; Zhu, Lie; Sojka, Jan; Kokoszka, Piotr; Jach, Agnieszka
2008-08-01
A wavelet-based index of storm activity (WISA) has been recently developed [Jach, A., Kokoszka, P., Sojka, L., Zhu, L., 2006. Wavelet-based index of magnetic storm activity. Journal of Geophysical Research 111, A09215, doi:10.1029/2006JA011635] to complement the traditional Dst index. The new index can be computed automatically by using the wavelet-based statistical procedure without human intervention on the selection of quiet days and the removal of secular variations. In addition, the WISA is flexible on data stretch and has a higher temporal resolution (1 min), which can provide a better description of the dynamical variations of magnetic storms. In this work, we perform a systematic assessment study on the WISA index. First, we statistically compare the WISA to the Dst for various quiet and disturbed periods and analyze the differences of their spectral features. Then we quantitatively assess the flexibility of the WISA on data stretch and study the effects of varying number of stations on the index. In addition, the ability of the WISA for handling the missing data is also quantitatively assessed. The assessment results show that the hourly averaged WISA index can describe storm activities equally well as the Dst index, but its full automation, high flexibility on data stretch, easiness of using the data from varying number of stations, high temporal resolution, and high tolerance to missing data from individual station can be very valuable and essential for real-time monitoring of the dynamical variations of magnetic storm activities and space weather applications, thus significantly complementing the existing Dst index.
NASA Astrophysics Data System (ADS)
Xu, Z.; Zhu, L.; Sojka, J. J.; Kokoszka, P.; Jach, A.
2006-12-01
A wavelet-based index of storm activities (WISA) has been recently developed (Jach et al., 2006) to complement the traditional Dst index. The new index can be computed automatically using the wavelet-based statistical procedure without human intervention on the selection of quiet days and the removal of secular variations. In addition, the WISA is flexible on data stretch and has a higher temporal resolution (one minute), which can provide a better description of the dynamical variations of magnetic storms. In this work, we perform a systematic assessment study of the WISA index. First, we statistically compare the WISA to the Dst for various quiet and disturbed periods and analyze the differences in their spectral features. Then we quantitatively assess the flexibility of the WISA on data stretch and study the effects of a varying number of stations on the index. In addition, how well the WISA can handle missing data is also quantitatively assessed. The assessment results show that the hourly averaged WISA index can describe storm activities equally well as the Dst index, but its full automation, high flexibility on data stretch, ease of using data from a varying number of stations, high temporal resolution, and high tolerance to missing data from individual stations can be very valuable and essential for real-time monitoring of the dynamical variations of magnetic storm activities and space weather applications, thus significantly complementing the existing Dst index. Jach, A., P. Kokoszka, J. Sojka, and L. Zhu, Wavelet-based index of magnetic storm activity, J. Geophys. Res., in press, 2006.
NASA Astrophysics Data System (ADS)
Zein, Samir; Poor Kalhor, Mahboubeh; Chibotaru, Liviu F.; Chermette, Henry
2009-12-01
Modern density functionals were assessed for the calculation of magnetic exchange constants of academic hydrogen oligomer systems. Full-configuration interaction magnetic exchange constants and wavefunctions are taken as references for several Hn model systems with different geometrical distributions from Ciofini et al. [Chem. Phys. 309, 133 (2005)]. Regression analyses indicate that hybrid functionals (B3LYP, O3LYP, and PBE0) rank among the best ones with a slope of typically 0.5, i.e., 100% overestimation with a standard error of about 50 cm⁻¹. The efficiency of the highly ranked functionals for predicting the correct "exact states" (after diagonalization of the Heisenberg Hamiltonian) is validated, and a statistical standard error is assigned for each functional. The singular value decomposition approach is used for treating the overdetermination of the system of equations when the number of magnetic centers is greater than 3. Further discussions particularly about the fortuitous success of the Becke00-x-only functional for treating hydrogenic models are presented.
NASA Astrophysics Data System (ADS)
McCreight, James L.; Small, Eric E.; Larson, Kristine M.
2014-08-01
Geodetic-quality GPS systems can be used to measure average snow depth in the ~1000 m² area around the GPS antenna, a sensing footprint size intermediate between in situ and satellite observations. SWE can be calculated from density estimates modeled on the GPS-based snow depth time series. We assess the accuracy of GPS-based snow depth, density, and SWE data at 18 GPS sites via comparison to manual observations. The manual validation survey was completed around the time of peak accumulation at each site. Daily snow depth derived from GPS reflection data is very similar to the mean snow depth measured manually in the ~1000 m² area around each antenna. This comparison spans site-averaged depths from 0 to 150 cm. The GPS depth data exhibit a small negative bias (-6 cm) across this range of snow depths. Errors tend to be smaller at sites with more usable GPS ground tracks. Snow bulk density is modeled using the GPS snow depth time series and model parameters are estimated from nearby SNOTEL sites. Modeled density is within 0.02 g cm⁻³ of the density measured in a single snow pit at the validation sites, for 12 of 18 comparisons. GPS-based depth and modeled density are multiplied to estimate SWE. SWE estimates are very accurate over the range observed at the validation sites, from 0 to 60 cm (R² = 0.97, bias = -2 cm). These results show that the near real-time GPS snow products have errors small enough for monitoring water resources in snow-dominated basins.
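The final SWE step is a one-line product: snow depth times bulk density relative to water (1 g/cm³), which yields SWE in centimeters of water when depth is in centimeters. A minimal sketch with illustrative values, not figures taken from the validation sites:

```python
def swe_cm(depth_cm, bulk_density_g_cm3):
    """Snow water equivalent in cm of water: depth times bulk density
    relative to water (1 g/cm^3)."""
    return depth_cm * bulk_density_g_cm3

# illustrative: 120 cm of snow at a modeled bulk density of 0.35 g/cm^3
print(swe_cm(120.0, 0.35))  # ~42 cm of water
```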
NASA Astrophysics Data System (ADS)
Liu, Z.; Lundgren, P.; Rosen, P. A.; Agram, P.
2013-12-01
Accurate imaging of deformation processes in plate boundary zones at various space-time scales is crucial to advancing our knowledge of plate boundary tectonics and volcano dynamics. Space-borne geodetic measurements such as interferometric synthetic aperture radar (InSAR) and continuous GPS (CGPS) provide complementary measurements of surface deformation. InSAR provides line-of-sight measurements that are spatially dense but temporally coarse, while point-based GPS measurements provide 3-D displacement components at sub-daily to daily intervals but are limited when trying to resolve fine-scale deformation processes, depending on station distribution and spacing. The large volume of SAR data from existing satellite platforms and future SAR missions, and of GPS time series from large-scale CGPS networks (e.g., EarthScope/PBO), calls for efficient approaches to integrate these two data types for maximal extraction of the signal of interest and for imaging time-variable deformation processes. We present a wavelet-based spatiotemporal filtering approach to integrate InSAR and GPS data at multiple scales in space and time. The approach consists of a series of InSAR noise-correction modules, based on wavelet multi-resolution analysis (MRA), for correcting major noise components in InSAR images, and an InSAR time series analysis that combines MRA and small-baseline least-squares inversion with temporal filtering (wavelet or Kalman filter based) to filter out turbulent troposphere noise. It also exploits, in a novel way, the temporal correlation between InSAR and GPS time series at multiple scales and reconstructs surface deformation measurements with dense spatial and temporal sampling. Compared to other approaches, this approach does not require a priori parameterization of temporal behavior and provides a general way to discover signals of interest at different spatiotemporal scales.
We present test cases where known signals with realistic noise components are synthesized for analysis and comparison. We are in the process of improving the approach and generalizing it to real-world applications.
NASA Astrophysics Data System (ADS)
Bourlon, Evelise
This thesis presents a geophysical study of the Canadian Shield using gravity and magnetic data. The first part covers the methodology. Standard methods and wavelet-based methods are presented. A method to characterize the causative sources of the field anomalies is described, and a wavelet method to compute the elastic thickness of the lithosphere is presented. The second part concerns applications to geophysical data from the Canadian Shield. A study of the continuation of the Proterozoic Trans-Hudson orogen and the Archean eastern Superior province features under the sedimentary cover of the Williston basin in Central Canada is the subject of one chapter. We produced maps of gravity and magnetic fields for a visual interpretation of the geological structures. Details were enhanced by way of horizontal derivatives of the fields. We studied the fields at different scales with the wavelet transform. A depth-to-magnetic-basement map was produced using Euler's deconvolution. We have shown that some structures of the Trans-Hudson orogen in northern Manitoba and Saskatchewan extend at least as far south as the U.S. border and that the Superior subprovinces extend westward under the sedimentary cover in Manitoba. We examined two tectonic structures: a contact between two geological provinces and a major fault. We determined their positions and characterized their vertical extensions and their dips. The following chapter concerns elastic thickness calculation in the eastern Canadian Shield. We calculated this thickness with standard methods and showed that the lithosphere in Quebec and Labrador is very strong. We developed a method based on the wavelet transform to study the anisotropy of this parameter. This method has shown that the rigidity is highly anisotropic in the Superior province whereas it tends to be more isotropic in the Grenville province. In the last chapter, we present a study of the gravity and magnetic fields in Ungava Bay.
We have mapped potential fields from a compilation of data of different origins: land and satellite data for the gravity field and airborne and shipborne for the magnetic field. The interpretation of these maps leads to the conclusion that a small part of the Superior province was rifted away to the east of the New-Quebec orogen and that several geological structures seen in Labrador extend across the Ungava Bay to Baffin Island. This chapter has been published in a synthesis volume of the ECSOOT (Eastern Canadian Shield Onshore-Offshore Transect) project in the Canadian Journal of Earth Sciences.
2014-01-01
Background Microscopic examination using Giemsa-stained thick blood films remains the reference standard for detection of malaria parasites, and it is the only method that is widely and practically available for quantifying malaria parasite density. Few published data (and no study during pregnancy) have investigated parasite density, the ratio of parasites counted within a given number of microscopic fields to counted white blood cells (WBCs), computed using the actual WBC count. Methods Parasitaemia estimated using an assumed WBC count (8,000) was compared to parasitaemia calculated from each woman's actual WBC count in 98 pregnant women with uncomplicated Plasmodium falciparum malaria at Medani Maternity Hospital, Central Sudan. Results The geometric mean (SD) of the parasite count was 12,014.6 (9,766.5) and 7,870.8 (19,168.8) ring trophozoites/µl, P <0.001, using the actual and assumed (8,000) WBC count, respectively. The median (range) of the ratio between the two parasitaemias (assumed/actual WBCs) was 1.5 (0.6-5); i.e., parasitaemia calculated with the assumed WBC count was a median of 1.5 (range 0.6-5) times higher than parasitaemia calculated with the actual WBC count. In 52 of 98 patients (53%) the ratio was between 0.5 and 1.5; for 21 patients (21%) it was higher than 2, and for five patients (5%) it was higher than 3. Conclusion The estimated parasite density using actual WBC counts was significantly lower than the parasite density estimated using assumed WBC counts. Therefore, it is recommended to use the patient's actual WBC count in the estimation of parasite density. PMID:24386962
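The underlying arithmetic is the standard thick-film formula: parasitaemia per microlitre equals counted parasites scaled by the WBC concentration. A sketch with hypothetical counts (not from the study) showing how assuming 8,000 WBC/µl inflates the estimate when the patient's actual count is lower:

```python
def parasite_density(parasites_counted, wbcs_counted, wbc_per_ul=8000):
    """Parasites per microlitre of blood by the standard thick-film formula:
    counted parasites scaled by the WBC concentration."""
    return parasites_counted * wbc_per_ul / wbcs_counted

# same hypothetical slide read against an assumed vs. a measured WBC count
assumed = parasite_density(300, 200)                   # assumes 8,000 WBC/ul
actual = parasite_density(300, 200, wbc_per_ul=5500)   # patient's actual WBCs
print(assumed, actual)  # 12000.0 8250.0
```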
2012-01-01
Background Myocardial ischemia can develop into more serious disease. Detecting the ischemic syndrome in the electrocardiogram (ECG) accurately and automatically at an early stage can prevent it from developing into a catastrophic disease. To this end, we propose a new method, which employs wavelets and simple feature selection. Methods For training and testing, the European ST-T database is used, which comprises 367 ischemic ST episodes in 90 records. We first remove baseline wandering and detect the time positions of QRS complexes by a method based on the discrete wavelet transform. Next, for each heart beat, we extract three features which can be used to differentiate ST episodes from normal beats: 1) the area between the QRS offset and T-peak points, 2) the normalized and signed sum from the QRS offset to the effective zero-voltage point, and 3) the slope from the QRS onset to offset point. We average the feature values over five successive beats to reduce the effect of outliers. Finally we apply classifiers to those features. Results We evaluated the algorithm with kernel density estimation (KDE) and support vector machine (SVM) classifiers. Sensitivity and specificity for KDE were 0.939 and 0.912, respectively. The KDE classifier detects 349 ischemic ST episodes out of the total 367. Sensitivity and specificity for SVM were 0.941 and 0.923, respectively. The SVM classifier detects 355 ischemic ST episodes. Conclusions We proposed a new method for detecting ischemia in ECG. It combines signal processing techniques, removing baseline wandering and detecting the time positions of QRS complexes by the discrete wavelet transform, with explicit feature extraction from the morphology of ECG waveforms. It was shown that the selected features were sufficient to discriminate ischemic ST episodes from normal ones.
We also showed how the proposed KDE classifier can automatically select kernel bandwidths, meaning that the algorithm does not require any numerical values of the parameters to be supplied in advance. In the case of the SVM classifier, one has to select a single parameter. PMID:22703641
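The KDE classification step can be sketched as a two-class Gaussian kernel density comparison with a rule-of-thumb bandwidth. Here Silverman's rule stands in for the paper's automatic bandwidth selection, and the one-dimensional feature distributions are synthetic assumptions, not ECG data:

```python
import numpy as np

def silverman_bw(samples):
    """Silverman's rule-of-thumb bandwidth for a 1-D Gaussian KDE."""
    return 1.06 * np.std(samples) * len(samples) ** (-1 / 5)

def kde(x, samples, bw):
    """Gaussian kernel density estimate at x from training samples."""
    z = (x - samples) / bw
    return np.mean(np.exp(-0.5 * z ** 2)) / (bw * np.sqrt(2 * np.pi))

def classify(x, normal, ischemic):
    """Label x by whichever class assigns it the higher estimated density."""
    p_n = kde(x, normal, silverman_bw(normal))
    p_i = kde(x, ischemic, silverman_bw(ischemic))
    return "ischemic" if p_i > p_n else "normal"

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, 300)    # synthetic feature values, normal beats
ischemic = rng.normal(3.0, 1.0, 300)  # shifted feature values, ST episodes
print(classify(2.8, normal, ischemic))  # -> ischemic
print(classify(0.1, normal, ischemic))  # -> normal
```

Because the bandwidth is computed from the training samples themselves, no kernel parameter has to be supplied in advance, which mirrors the automatic-bandwidth property the authors highlight.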
NASA Technical Reports Server (NTRS)
Jergas, M.; Breitenseher, M.; Gluer, C. C.; Yu, W.; Genant, H. K.
1995-01-01
To determine whether estimates of volumetric bone density from projectional scans of the lumbar spine have weaker associations with height and weight and stronger associations with prevalent vertebral fractures than standard projectional bone mineral density (BMD) and bone mineral content (BMC), we obtained posteroanterior (PA) dual X-ray absorptiometry (DXA), lateral supine DXA (Hologic QDR 2000), and quantitative computed tomography (QCT, GE 9800 scanner) in 260 postmenopausal women enrolled in two trials of treatment for osteoporosis. In 223 women, all vertebral levels, i.e., L2-L4 in the DXA scan and L1-L3 in the QCT scan, could be evaluated. Fifty-five women were diagnosed as having at least one mild fracture (age 67.9 +/- 6.5 years) and 168 women did not have any fractures (age 62.3 +/- 6.9 years). We derived three estimates of "volumetric bone density" from PA DXA (BMAD, BMAD*, and BMD*) and three from paired PA and lateral DXA (WA BMD, WA BMDHol, and eVBMD). While PA BMC and PA BMD were significantly correlated with height (r = 0.49 and r = 0.28) or weight (r = 0.38 and r = 0.37), QCT and the volumetric bone density estimates from paired PA and lateral scans were not (r = -0.083 to r = 0.050). BMAD, BMAD*, and BMD* correlated with weight but not height. The associations with vertebral fracture were stronger for QCT (odds ratio [OR] = 3.17; 95% confidence interval [CI] = 1.90-5.27), eVBMD (OR = 2.87; CI 1.80-4.57), WA BMDHol (OR = 2.86; CI 1.80-4.55) and WA BMD (OR = 2.77; CI 1.75-4.39) than for BMAD*/BMD* (OR = 2.03; CI 1.32-3.12), BMAD (OR = 1.68; CI 1.14-2.48), lateral BMD (OR = 1.88; CI 1.28-2.77), standard PA BMD (OR = 1.47; CI 1.02-2.13) or PA BMC (OR = 1.22; CI 0.86-1.74). The areas under the receiver operating characteristic (ROC) curves for QCT and all estimates of volumetric BMD were significantly higher compared with standard PA BMD and PA BMC.
We conclude that, like QCT, estimates of volumetric bone density from paired PA and lateral scans are unaffected by height and weight and are more strongly associated with vertebral fracture than standard PA BMD or BMC, or estimates of volumetric density that are solely based on PA DXA scans.
Error estimates for Rayleigh scattering density and temperature measurements in premixed flames
I. Namer; R. W. Schefer
1985-01-01
Rayleigh scattering has become an accepted technique for the determination of total number density during the combustion process. The interpretation of the ratio of total Rayleigh scattering signal as a ratio of densities or temperatures is hampered by the changing composition through a flame, since the average Rayleigh scattering cross-section depends on the gas composition. Typical correction factors as a
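The correction the abstract refers to follows from the Rayleigh signal being proportional to number density times the mixture-averaged scattering cross-section, so a signal ratio converts to a density ratio only after a composition correction. A sketch with hypothetical cross-section values (arbitrary units, not the paper's data):

```python
def density_ratio(signal_ratio, sigma_unburnt, sigma_burnt):
    """Rayleigh signal S is proportional to n * sigma_avg, so
    n_burnt / n_unburnt = (S_burnt / S_unburnt) * (sigma_unburnt / sigma_burnt)."""
    return signal_ratio * (sigma_unburnt / sigma_burnt)

# hypothetical: burnt-gas signal 14% of unburnt, cross-section 10% larger
r = density_ratio(signal_ratio=0.14, sigma_unburnt=1.0, sigma_burnt=1.1)
# at constant pressure the ideal-gas law gives the temperature ratio directly
temperature_ratio = 1.0 / r  # T_burnt / T_unburnt
```

Ignoring the cross-section change would bias the inferred temperature ratio by exactly the cross-section ratio, which is the kind of correction factor the paper quantifies.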
An Estimate of Electron Densities in the Exosphere by Means of Nose Whistlers
Joseph H. Pope
1961-01-01
The nose whistler dispersion equation was numerically integrated using the following assumed functions for the electron density distribution: (1) N = K, (2) N = K R^(-3), (3) N = K R^(-1) exp(3.03/R), where N is the electron number density, R is the distance from the earth's center, and K is a constant of proportionality. Several whistlers that were received at College on
Subramanian, Sundarraman
2006-01-01
This article concerns asymptotic theory for a new estimator of a survival function in the missing censoring indicator model of random censorship. Specifically, the large sample results for an inverse probability-of-non-missingness weighted estimator of the cumulative hazard function, so far not available, are derived, including an almost sure representation with rate for a remainder term, and uniform strong consistency with rate of convergence. The estimator is based on a kernel estimate for the conditional probability of non-missingness of the censoring indicator. Expressions for its bias and variance, in turn leading to an expression for the mean squared error as a function of the bandwidth, are also obtained. The corresponding estimator of the survival function, whose weak convergence is derived, is asymptotically efficient. A numerical study, comparing the performances of the proposed and two other currently existing efficient estimators, is presented. PMID:18953423
A field comparison of nested grid and trapping web density estimators
Jett, D.A.; Nichols, J.D.
1987-01-01
The usefulness of capture-recapture estimators in any field study will depend largely on underlying model assumptions and on how closely these assumptions approximate the actual field situation. Evaluation of estimator performance under real-world field conditions is often a difficult matter, although several approaches are possible. Perhaps the best approach involves use of the estimation method on a population with known parameters.
Power spectral density estimation by spline smoothing in the frequency domain.
NASA Technical Reports Server (NTRS)
De Figueiredo, R. J. P.; Thompson, J. R.
1972-01-01
An approach, based on a global averaging procedure, is presented for estimating the power spectrum of a second order stationary zero-mean ergodic stochastic process from a finite length record. This estimate is derived by smoothing, with a cubic smoothing spline, the naive estimate of the spectrum obtained by applying Fast Fourier Transform techniques to the raw data. By means of digital computer simulated results, a comparison is made between the features of the present approach and those of more classical techniques of spectral estimation.
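The two-stage procedure, a naive FFT periodogram followed by global smoothing, can be sketched as below. For brevity a running mean stands in for the paper's cubic smoothing spline, and the test record is a synthetic sine in white noise:

```python
import numpy as np

def periodogram(x, fs=1.0):
    """Naive spectral estimate |FFT|^2 / (N * fs) at non-negative frequencies."""
    n = len(x)
    spectrum = np.abs(np.fft.rfft(x)) ** 2 / (n * fs)
    freqs = np.fft.rfftfreq(n, d=1 / fs)
    return freqs, spectrum

def smooth(spectrum, width=11):
    """Global smoothing of the raw periodogram; a running mean stands in
    here for the cubic smoothing spline of the paper."""
    kernel = np.ones(width) / width
    return np.convolve(spectrum, kernel, mode="same")

rng = np.random.default_rng(1)
t = np.arange(4096)
x = np.sin(2 * np.pi * 0.1 * t) + rng.normal(0.0, 1.0, t.size)
f, pxx = periodogram(x)
f_peak = f[np.argmax(smooth(pxx))]  # should sit near 0.1 cycles/sample
```

The raw periodogram is inconsistent (its variance does not shrink with record length), which is why some form of global smoothing, spline or otherwise, is needed before the estimate is usable.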
Jaffé, Rodolfo; Dietemann, Vincent; Allsopp, Mike H; Costa, Cecilia; Crewe, Robin M; Dall'olio, Raffaele; DE LA Rúa, Pilar; El-Niweiri, Mogbel A A; Fries, Ingemar; Kezic, Nikola; Meusel, Michael S; Paxton, Robert J; Shaibi, Taher; Stolle, Eckart; Moritz, Robin F A
2010-04-01
Although pollinator declines are a global biodiversity threat, the demography of the western honeybee (Apis mellifera) has not been considered by conservationists because it is biased by the activity of beekeepers. To fill this gap in pollinator decline censuses and to provide a broad picture of the current status of honeybees across their natural range, we used microsatellite genetic markers to estimate colony densities and genetic diversity at different locations in Europe, Africa, and central Asia that had different patterns of land use. Genetic diversity and colony densities were highest in South Africa and lowest in Northern Europe and were correlated with mean annual temperature. Confounding factors not related to climate, however, are also likely to influence genetic diversity and colony densities in honeybee populations. Land use showed a significantly negative influence over genetic diversity and the density of honeybee colonies over all sampling locations. In Europe honeybees sampled in nature reserves had genetic diversity and colony densities similar to those sampled in agricultural landscapes, which suggests that the former are not wild but may have come from managed hives. Other results also support this idea: putative wild bees were rare in our European samples, and the mean estimated density of honeybee colonies on the continent closely resembled the reported mean number of managed hives. Current densities of European honeybee populations are in the same range as those found in the adverse climatic conditions of the Kalahari and Saharan deserts, which suggests that beekeeping activities do not compensate for the loss of wild colonies. Our findings highlight the importance of reconsidering the conservation status of honeybees in Europe and of regarding beekeeping not only as a profitable business for producing honey, but also as an essential component of biodiversity conservation. PMID:19775273
Estimation of the density of Martian soil from radiophysical measurements in the 3-centimeter range
NASA Technical Reports Server (NTRS)
Krupenio, N. N.
1977-01-01
The density of the Martian soil is evaluated at depths up to one meter using the results of radar measurements at λ0 = 3.8 cm and polarized radio astronomical measurements at λ0 = 3.4 cm conducted onboard the automatic interplanetary stations Mars 3 and Mars 5. The average value of the soil density over all measurements is ρ = 1.37 ± 0.33 g/cm³. A map of the distribution of the permittivity and soil density is derived, drawn up from radiophysical data in the 3-centimeter range.
Extending estimation of C-J pressure of explosives to the very low density region
Cooper, P.W.
1992-01-01
A previous paper showed that for condensed phase explosives, the C-J density of the detonation product gases correlates to the initial density of the unreacted explosive by a simple power function. This paper extends that correlation to the very low density region which includes detonation of suspended particles of explosives in air as well as gas phase detonations. Extending this correlation of experimental data by an additional three orders of magnitude caused a slight change in the empirical constants of the correlation.
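Such a power-function correlation is conventionally fitted as a straight line in log-log space, which is how extending the data range simply re-estimates the empirical constants. A sketch on synthetic data; the coefficients below are illustrative assumptions, not the paper's empirical constants:

```python
import numpy as np

def fit_power_law(x, y):
    """Least-squares fit of y = a * x**b via a straight line in log-log space."""
    b, log_a = np.polyfit(np.log(x), np.log(y), 1)
    return np.exp(log_a), b

# synthetic C-J vs. initial densities spanning several orders of magnitude,
# generated from an assumed power law (illustrative coefficients)
rho0 = np.array([0.001, 0.01, 0.1, 1.0, 1.8])     # g/cm^3, gas to solid
rho_cj = 1.386 * rho0 ** 0.96
a, b = fit_power_law(rho0, rho_cj)  # recovers a ~ 1.386, b ~ 0.96
```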
C. Hebeisen; J. Fattebert; E. Baubet; C. Fischer
2008-01-01
We estimated wild boar abundance and density using capture–resight methods in the western part of the Canton of Geneva (Switzerland) in the early summer from 2004 to 2006. Ear-tag numbers and transmitter frequencies enabled us to identify individuals during each of the counting sessions. We used resights generated by self-triggered camera traps as recaptures. Program Noremark provided Minta–Mangel and Bowden's
NASA Astrophysics Data System (ADS)
Mikuška, J.; Marušiak, I.; Zahorec, P.; Papčo, J.; Pasteka, R.; Bielik, M.
2014-12-01
It is well known that free-air anomalies and the gravitational effects of the topographic masses are mutually proportional, at least in general. It is rather intriguing, however, that this feature is more remarkable in elevated mountainous areas than in lowlands or flat regions, as we demonstrate with practical examples. Further, since the times of Pierre Bouguer we have known that the gravitational effect of the topographic masses is station-height-dependent. In our presentation we show that the respective contributions to this height dependence, although nonzero, are less significant for both the nearest masses and the more remote ones, while the contribution of the masses within hundreds and thousands of meters of the gravity station is dominant. We also illustrate that, surprisingly, the gravitational effects of the non-near topographic masses can be apparently independent of their respective volumes, while their gravitational effects remain well proportional to the gravity station heights. On the other hand, for interpretational reasons, the Bouguer anomaly should not correlate strongly with the heights of the measuring points or, more specifically, with the gravitational effect of the topographic masses. Standard practice is to estimate a suitable (uniform) reduction or correction density within the study area in order to minimize such an undesired correlation and, vice versa, the minimum correlation is often utilized as a criterion for estimating such a density. Our main objective is to point out, from the aspect of correction density estimation, that the contributions of the topographic masses should be viewed alternatively, depending on the particular distances of the respective portions of those masses from the gravity station. We have tested the majority of the existing methods of such density estimation and developed a new one which takes the facts mentioned above into consideration.
This work was supported by the Slovak Research and Development Agency under the contracts APVV-0827-12 and APVV-0194-10.
An Approximate Method of Estimating Soil Water Diffusivity For Different Soil Bulk Densities
NASA Astrophysics Data System (ADS)
Libardi, P. L.; Reichardt, K.; Jose, C.; Bazza, M.; Nielsen, D. R.
1982-02-01
The effect of soil bulk density on soil water diffusivity and on infiltration is studied using data from 13 soils, widely ranging in texture. It is shown for horizontal infiltration of water into initially air dry soil that although changes in the slopes of plots of the distance to the wetting front as a function of the square root of infiltration time corresponding to different bulk density values actually depend on soil type, they may be considered independent. Hence a generalized exponential equation is developed which expresses the soil water diffusivity of any soil as a function of soil water content and soil bulk density, knowing only the rate at which the wetting front advances for a single value of the bulk density.
Rivera-Milan, F. F.; Collazo, J.A.; Stahala, C.; Moore, W.J.; Davis, A.; Herring, G.; Steinkamp, M.; Pagliaro, R.; Thompson, J.L.; Bracey, W.
2005-01-01
Once abundant and widely distributed, the Bahama parrot (Amazona leucocephala bahamensis) currently inhabits only the Great Abaco and Great Inagua Islands of the Bahamas. In January 2003 and May 2002-2004, we conducted point-transect surveys (a type of distance sampling) to estimate density and population size and make recommendations for monitoring trends. Density ranged from 0.061 (SE = 0.013) to 0.085 (SE = 0.018) parrots/ha and population size ranged from 1,600 (SE = 354) to 2,386 (SE = 508) parrots when extrapolated to the 26,154 ha and 28,162 ha covered by surveys on Abaco in May 2002 and 2003, respectively. Density was 0.183 (SE = 0.049) and 0.153 (SE = 0.042) parrots/ha and population size was 5,344 (SE = 1,431) and 4,450 (SE = 1,435) parrots when extrapolated to the 29,174 ha covered by surveys on Inagua in May 2003 and 2004, respectively. Because parrot distribution was clumped, we would need to survey 213-882 points on Abaco and 258-1,659 points on Inagua to obtain a CV of 10-20% for estimated density. Cluster size and its variability and clumping increased in wintertime, making surveys imprecise and cost-ineffective. Surveys were reasonably precise and cost-effective in springtime, and we recommend conducting them when parrots are pairing and selecting nesting sites. Survey data should be collected yearly as part of an integrated monitoring strategy to estimate density and other key demographic parameters and improve our understanding of the ecological dynamics of these geographically isolated parrot populations at risk of extinction.
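The extrapolation step is a direct scaling of the per-hectare density estimate, and its standard error, by the surveyed area. A sketch reproducing the order of magnitude of the Abaco May 2002 figures (small differences from the quoted 1,600 and 354 reflect rounding in the abstract):

```python
def extrapolate(density_per_ha, se_density, area_ha):
    """Point-transect extrapolation: population size and its standard
    error both scale the per-hectare density by the surveyed area."""
    return density_per_ha * area_ha, se_density * area_ha

# Abaco, May 2002: 0.061 (SE 0.013) parrots/ha over 26,154 ha
n, se = extrapolate(0.061, 0.013, 26154)
# n ~ 1,595 parrots, se ~ 340 (cf. 1,600 and 354 quoted in the abstract)
```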
Rayan, D Mark; Mohamad, Shariff Wan; Dorward, Leejiah; Aziz, Sheema Abdul; Clements, Gopalasamy Reuben; Christopher, Wong Chai Thiam; Traeholt, Carl; Magintan, David
2012-12-01
The endangered Asian tapir (Tapirus indicus) is threatened by large-scale habitat loss, forest fragmentation and increased hunting pressure. Conservation planning for this species, however, is hampered by a severe paucity of information on its ecology and population status. We present the first Asian tapir population density estimate from a camera trapping study targeting tigers in a selectively logged forest within Peninsular Malaysia using a spatially explicit capture-recapture maximum likelihood based framework. With a trap effort of 2496 nights, 17 individuals were identified corresponding to a density (standard error) estimate of 9.49 (2.55) adult tapirs/100 km². Although our results include several caveats, we believe that our density estimate still serves as an important baseline to facilitate the monitoring of tapir population trends in Peninsular Malaysia. Our study also highlights the potential of extracting vital ecological and population information for other cryptic individually identifiable animals from tiger-centric studies, especially with the use of a spatially explicit capture-recapture maximum likelihood based framework. PMID:23253368
NASA Technical Reports Server (NTRS)
Jasinski, Michael F.; Crago, Richard
1994-01-01
Parameterizations of the frontal area index and canopy area index of natural or randomly distributed plants are developed, and applied to the estimation of local aerodynamic roughness using satellite imagery. The formulas are expressed in terms of the subpixel fractional vegetation cover and one non-dimensional geometric parameter that characterizes the plant's shape. Geometrically similar plants and Poisson distributed plant centers are assumed. An appropriate averaging technique to extend satellite pixel-scale estimates to larger scales is provided. The parameterization is applied to the estimation of aerodynamic roughness using satellite imagery for a 2.3 sq km coniferous portion of the Landes Forest near Lubbon, France, during the 1986 HAPEX-Mobilhy Experiment. The canopy area index is estimated first for each pixel in the scene based on previous estimates of fractional cover obtained using Landsat Thematic Mapper imagery. Next, the results are incorporated into Raupach's (1992, 1994) analytical formulas for momentum roughness and zero-plane displacement height. The estimates compare reasonably well to reference values determined from measurements taken during the experiment and to published literature values. The approach offers the potential for estimating regionally variable, vegetation aerodynamic roughness lengths over natural regions using satellite imagery when there exists only limited knowledge of the vegetated surface.
Variability of footprint ridge density and its use in estimation of sex in forensic examinations.
Krishan, Kewal; Kanchan, Tanuj; Pathania, Annu; Sharma, Ruchika; DiMaggio, John A
2014-11-20
The present study deals with a comparatively new biometric parameter of footprints called footprint ridge density. The study attempts to evaluate sex-dependent variations in ridge density in different areas of the footprint and its usefulness in discriminating sex in the young adult population of north India. The sample for the study consisted of 160 young adults (121 females) from north India. The left and right footprints were taken from each subject according to the standard procedures. The footprints were analysed using a 5 mm × 5 mm square and the ridge density was calculated in four different well-defined areas of the footprints. These were: F1 - the great toe on its proximal and medial side; F2 - the medial ball of the footprint, below the triradius (the triradius is a Y-shaped group of ridges on finger balls, palms and soles which forms the basis of ridge counting in identification); F3 - the lateral ball of the footprint, towards the most lateral part; and F4 - the heel in its central part where the maximum breadth at heel is cut by a perpendicular line drawn from the most posterior point on heel. This value represents the number of ridges in a 25 mm² area and reflects the ridge density value. Ridge densities analysed on different areas of footprints were compared with each other using the Friedman test for related samples. The total footprint ridge density was calculated as the sum of the ridge density in the four areas of footprints included in the study (F1 + F2 + F3 + F4). The results show that the mean footprint ridge density was higher in females than males in all the designated areas of the footprints. The sex differences in footprint ridge density were observed to be statistically significant in the analysed areas of the footprint, except for the heel region of the left footprint. The total footprint ridge density was also observed to be significantly higher among females than males.
A statistically significant correlation is shown in the ridge densities among most areas of both left and right sides. Based on receiver operating characteristic (ROC) curve analysis, the sexing potential of footprint ridge density was observed to be considerably higher on the right side. The sexing potential for the four areas ranged between 69.2% and 85.3% on the right side, and between 59.2% and 69.6% on the left side. ROC analysis of the total footprint ridge density shows that the sexing potential of the right and left footprint was 91.5% and 77.7% respectively. The study concludes that footprint ridge density can be utilised in the determination of sex as a supportive parameter. The findings of the study should be utilised only on the north Indian population and may not be internationally generalisable. PMID:25413487
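For a single scalar predictor such as total ridge density, the ROC area under the curve reduces to the Mann-Whitney statistic: the probability that a randomly chosen female's value exceeds a randomly chosen male's. A minimal sketch, with invented ridge-density values rather than the study's data:

```python
def roc_auc(pos, neg):
    """AUC via the Mann-Whitney U statistic: the fraction of
    (positive, negative) pairs in which the positive scores higher,
    counting ties as half a win."""
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

# Hypothetical ridge densities (ridges per 25 mm^2), females higher on average
females = [14, 15, 16, 17, 18]
males = [11, 12, 13, 14, 15]
print(roc_auc(females, males))  # -> 0.92
```

An AUC of 0.5 means no discriminating power; values near 1 correspond to the high "sexing potential" percentages reported in the abstract.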
Tufto, Jarle; Lande, Russell; Ringsby, Thor-Harald; Engen, Steinar; Saether, Bernt-Erik; Walla, Thomas R; DeVries, Philip J
2012-07-01
1. We develop a Bayesian method for analysing mark-recapture data in continuous habitat using a model in which individuals' movement paths are Brownian motions, life spans are exponentially distributed and capture events occur at given instants in time if individuals are within a certain attractive distance of the traps. 2. The joint posterior distribution of the dispersal rate, longevity, trap attraction distances and a number of latent variables representing the unobserved movement paths and time of death of all individuals is computed using Gibbs sampling. 3. An estimate of absolute local population density is obtained simply by dividing the Poisson counts of individuals captured at given points in time by the estimated total attraction area of all traps. Our approach for estimating population density in continuous habitat avoids the need to define an arbitrary effective trapping area that characterized previous mark-recapture methods in continuous habitat. 4. We applied our method to estimate spatial demography parameters in nine species of neotropical butterflies. Path analysis of interspecific variation in demographic parameters and mean wing length revealed a simple network of strong causation. Larger wing length increases dispersal rate, which in turn increases trap attraction distance. However, higher dispersal rate also decreases longevity, thus explaining the surprising observation of a negative correlation between wing length and longevity. PMID:22320218
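Point 3 above, density as capture counts over the summed trap attraction area, can be sketched as follows; the trap count, attraction radius and daily counts are invented for illustration:

```python
import math

def density_estimate(counts, n_traps, attraction_radius):
    """Mean Poisson capture count divided by the total attraction
    area of all traps (modelled as non-overlapping circles)."""
    total_area = n_traps * math.pi * attraction_radius ** 2
    return (sum(counts) / len(counts)) / total_area

# Hypothetical survey: 5 traps, 20 m attraction radius, three daily counts
print(density_estimate([12, 8, 10], n_traps=5, attraction_radius=20.0))
```

In the paper the attraction radius is itself a posterior estimate, so density uncertainty propagates from it; here it is a fixed input.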
Resolution Independent Density Estimation for Motion Planning in High-Dimensional Spaces
Kavraki, Lydia E.
-dimensional systems (greater than 10). A Geometric Near-neighbor Access Tree (GNAT) is maintained to estimate density over the space and, given that a GNAT requires only a valid distance metric, STRIDE is largely parameter-free.
NASA Astrophysics Data System (ADS)
Shangguan, Pengcheng; Al-Qadi, Imad L.; Lahouar, Samer
2014-08-01
This paper presents the application of artificial neural network (ANN) based pattern recognition to extract the density information of asphalt pavement from simulated ground penetrating radar (GPR) signals. This study is part of research efforts into the application of GPR to monitor asphalt pavement density during compaction. The main challenge is to eliminate the effect of roller-sprayed water on GPR signals during compaction and to extract density information accurately. A calibration of the excitation function was conducted to provide an accurate match between the simulated signal and the real signal. A modified electromagnetic mixing model was then used to calculate the dielectric constant of asphalt mixture with water. A large database of GPR responses was generated from pavement models having different air void contents and various surface moisture contents using finite-difference time-domain simulation. Feature extraction was performed to extract density-related features from the simulated GPR responses. Air void contents were divided into five classes representing different compaction statuses. An ANN-based pattern recognition system was trained using the extracted features as inputs and air void content classes as target outputs. The accuracy of the system was tested using a test data set. Classification of air void contents using the developed algorithm is found to be highly accurate, which indicates the effectiveness of this method for predicting asphalt concrete density.
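The train-on-features, classify-into-air-void-classes workflow can be illustrated with a minimal nearest-centroid classifier, a deliberately simple stand-in for the paper's ANN (whose architecture is not detailed here); the feature vectors and class labels below are invented:

```python
def train_centroids(features, labels):
    """Per-class mean feature vectors: the training step of a
    nearest-centroid pattern-recognition scheme."""
    sums, counts = {}, {}
    for f, y in zip(features, labels):
        s = sums.setdefault(y, [0.0] * len(f))
        for i, v in enumerate(f):
            s[i] += v
        counts[y] = counts.get(y, 0) + 1
    return {y: [v / counts[y] for v in s] for y, s in sums.items()}

def classify(x, centroids):
    """Assign x to the class whose centroid is nearest (squared
    Euclidean distance)."""
    dist = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b))
    return min(centroids, key=lambda y: dist(x, centroids[y]))

# Invented 2-D GPR features for two compaction classes
cents = train_centroids(
    [[0.0, 0.0], [0.0, 1.0], [10.0, 10.0], [10.0, 11.0]],
    ["low_voids", "low_voids", "high_voids", "high_voids"])
print(classify([1.0, 1.0], cents))
```

An ANN replaces the centroid rule with a learned nonlinear decision boundary, but the input/output contract (feature vector in, class label out) is the same.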
Modeled salt density for nuclear material estimation in the treatment of spent nuclear fuel
NASA Astrophysics Data System (ADS)
Mariani, Robert D.; Vaden, DeeEarl
2010-09-01
Spent metallic nuclear fuel is being treated in a pyrometallurgical process that includes electrorefining the uranium metal in molten eutectic LiCl-KCl as the supporting electrolyte. We report a model for determining the density of the molten salt. Material balances account for the net mass of salt and for the mass of actinides present. The molten salt density was needed but is difficult to measure, so it was decided to model it for the initial treatment operations. The model assumes that volumes are additive for the ideal molten salt solution as a starting point; subsequently, a correction factor for the lanthanides and actinides was developed. After applying the correction factor, the percent difference between the net salt mass in the electrorefiner and the resulting modeled salt mass decreased from more than 4.0% to approximately 0.1%. As a result, there is no need to measure the salt density at 500 °C for inventory operations; the model for the salt density is found to be accurate.
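The additive-volume starting point amounts to dividing total mass by the sum of component volumes, times an empirical correction. A sketch with placeholder component densities (the actual LiCl-KCl component values at 500 °C and the lanthanide/actinide correction factor are not given in the abstract):

```python
def mixture_density(masses, densities, correction=1.0):
    """Ideal-solution (additive-volume) density of a molten salt
    mixture: total mass over summed component volumes, scaled by an
    empirical correction factor."""
    total_mass = sum(masses)
    total_volume = sum(m / rho for m, rho in zip(masses, densities))
    return correction * total_mass / total_volume

# Placeholder values: 44 g of component A (1.38 g/cm^3), 56 g of B (1.52 g/cm^3)
print(mixture_density([44.0, 56.0], [1.38, 1.52]))
```

The paper's correction factor plays the role of `correction` here, absorbing the non-ideality introduced by dissolved lanthanides and actinides.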
Dafflon, Baptiste; Barrash, Warren; Cardiff, Michael A.; Johnson, Timothy C.
2011-12-15
Reliable predictions of groundwater flow and solute transport require an estimation of the detailed distribution of the parameters (e.g., hydraulic conductivity, effective porosity) controlling these processes. However, such parameters are difficult to estimate because of the inaccessibility and complexity of the subsurface. In this regard, developments in parameter estimation techniques and investigations of field experiments are still challenging and necessary to improve our understanding and the prediction of hydrological processes. Here we analyze a conservative tracer test conducted at the Boise Hydrogeophysical Research Site in 2001 in a heterogeneous unconfined fluvial aquifer. Some relevant characteristics of this test include: variable-density (sinking) effects because of the injection concentration of the bromide tracer, the relatively small size of the experiment, and the availability of various sources of geophysical and hydrological information. The information contained in this experiment is evaluated through several parameter estimation approaches, including a grid-search-based strategy, stochastic simulation of hydrological property distributions, and deterministic inversion using regularization and pilot-point techniques. Doing this allows us to investigate hydraulic conductivity and effective porosity distributions and to compare the effects of assumptions from several methods and parameterizations. Our results provide new insights into the understanding of variable-density transport processes and the hydrological relevance of incorporating various sources of information in parameter estimation approaches. Among others, the variable-density effect and the effective porosity distribution, as well as their coupling with the hydraulic conductivity structure, are seen to be significant in the transport process. The results also show that assumed prior information can strongly influence the estimated distributions of hydrological properties.
Thomas, Len
DECAF: Density Estimation for Cetaceans from passive Acoustic Fixed sensors. Len Thomas, CREEM. Estimating the density and distribution of cetacean (whale and dolphin) species is fundamental to understanding them. However, this task is difficult because most cetacean species occur at low density and over enormous areas
NASA Astrophysics Data System (ADS)
Maeda, E.; Arevalo, J.; Carmona-Moreno, C.
2012-04-01
Despite recent advances in the development of satellite sensors for monitoring precipitation at high spatial and temporal resolutions, the assessment of rainfall climatology still relies strongly on ground-station measurements. The Global Historical Climatology Network (GHCN) is one of the most popular station databases available to the international community. Nevertheless, the spatial distribution of these stations is not always homogeneous and the record length varies largely for each station. This study aimed to evaluate how the number of years recorded in the GHCN stations and the density of the network affect the uncertainties of annual rainfall climatology estimates in Latin America. The method applied was divided into two phases. In the first phase, Monte Carlo simulations were performed to evaluate how the number of samples and the characteristics of the rainfall regime affect estimates of annual average rainfall. The simulations were performed using gamma distributions with pre-defined parameters, which generated synthetic annual precipitation records. The average and dispersion of the synthetic records were then estimated through the L-moments approach and compared with the original probability distribution that was used to produce the samples. The number of records (n) used in the simulation varied from 10 to 150, reproducing the range of number of years typically found in meteorological stations. A power function, in the form RMSE = f(n) = c·n^a, where the coefficients were defined as a function of the rainfall statistical dispersion, was applied to fit the errors. In the second phase of the assessment, the results of the simulations were extrapolated to real records obtained by the GHCN over Latin America, creating estimates of errors associated with the number of records and rainfall characteristics in each station.
To generate a spatially-explicit representation of the uncertainties, the errors in each station were interpolated using the inverse distance weighting method. Furthermore, the effect of the density of stations was also considered by penalizing the interpolated errors proportionally to the station density in the site. The results showed a large discrepancy in rainfall estimate uncertainties among Latin American countries. The uncertainties varied from less than 2% in the southeastern region of Brazil to around 40% in regions with low station density and short time-series in southern Peru. Therefore, the results highlight the importance of international cooperation for climate data sharing among Latin American countries. In this context, projects aiming at improving scientific cooperation and fostering information-based policy, such as EUROCLIMA and RALCEA, funded by the European Commission, offer an important opportunity for reducing uncertainties in estimates of climate variables in Latin America.
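The first phase, Monte Carlo simulation of gamma-distributed annual rainfall followed by a power-law fit RMSE = c·n^a, can be sketched as below. The shape/scale parameters and trial counts are arbitrary choices for illustration; the paper fits the coefficients as functions of rainfall dispersion:

```python
import math
import random

random.seed(1)

def rmse_of_mean(shape, scale, n, trials=1000):
    """Monte Carlo RMSE of the mean of n gamma-distributed annual
    rainfall records, against the true mean shape * scale."""
    true_mean = shape * scale
    sq = 0.0
    for _ in range(trials):
        xbar = sum(random.gammavariate(shape, scale) for _ in range(n)) / n
        sq += (xbar - true_mean) ** 2
    return math.sqrt(sq / trials)

# Fit RMSE = c * n**a by least squares in log-log space
ns = [10, 20, 40, 80]
errs = [rmse_of_mean(4.0, 300.0, n) for n in ns]
lx = [math.log(n) for n in ns]
ly = [math.log(e) for e in errs]
mx, my = sum(lx) / len(lx), sum(ly) / len(ly)
a = sum((x - mx) * (y - my) for x, y in zip(lx, ly)) / sum((x - mx) ** 2 for x in lx)
c = math.exp(my - a * mx)
print(a)  # near -0.5: the standard error of a mean shrinks as n**-0.5
```

The fitted exponent recovers the familiar n^-1/2 decay of sampling error, which is why stations with short records dominate the uncertainty maps.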
Bhattacharya, Abhishek; Dunson, David B.
2012-01-01
This article considers a broad class of kernel mixture density models on compact metric spaces and manifolds. Following a Bayesian approach with a nonparametric prior on the location mixing distribution, sufficient conditions are obtained on the kernel, prior and the underlying space for strong posterior consistency at any continuous density. The prior is also allowed to depend on the sample size n and sufficient conditions are obtained for weak and strong consistency. These conditions are verified on compact Euclidean spaces using multivariate Gaussian kernels, on the hypersphere using a von Mises-Fisher kernel and on the planar shape space using complex Watson kernels. PMID:22984295
and Percent Lipids. STEVEN A. POTHOVEN, National Oceanic and Atmospheric Administration, Great Lakes … bioelectric impedance analysis (BIA) as a nonlethal means of predicting energy density and percent lipids for three fish … total lipids, and total dry mass for whole fish … BIA provided only slightly better predictions
Zero-Bias Locally Adaptive Density Estimators. Stephan R. Sain and David W. Scott
Scott, David W.
available in the tails. We show that in regions where the density function is convex, it is theoretically possible to find local bandwidths … where K_h(x) = (1/h)K(x/h) and h is the smoothing parameter or bandwidth. The kernel K(·) is taken to be a nonnegative, symmetric function integrating to one. While the kernel estimate represents a significant improvement over the histogram, many authors have sought to further improve upon this basic design.
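A minimal version of the kernel estimator described here, with a Gaussian kernel so that K_h(x) = (1/h)K(x/h), and a single global bandwidth (the paper's contribution is precisely to replace this fixed h with local bandwidths):

```python
import math

def kde(x, data, h):
    """Fixed-bandwidth kernel density estimate at x: the average of
    K_h(x - x_i) over the sample, with a standard Gaussian kernel."""
    k = lambda u: math.exp(-0.5 * u * u) / math.sqrt(2.0 * math.pi)
    return sum(k((x - xi) / h) for xi in data) / (len(data) * h)
```

Because K integrates to one, the estimate does too, whatever the bandwidth; the bias/variance trade-off in the tails is what motivates locally adaptive choices of h.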
The energy density of jellyfish: Estimates from bomb-calorimetry and proximate-composition
Thomas K. Doyle; Jonathan D. R. Houghton; Regina McDevitt; John Davenport; Graeme C. Hays
2007-01-01
Two techniques are described to calculate energy densities for the bell, gonad and oral arm tissues of three scyphozoan jellyfish (Cyanea capillata, Rhizostoma octopus and Chrysaora hysoscella). First, bomb-calorimetry was used, a technique that is readily available and inexpensive. However, the reliability of this technique for gelatinous material is contentious. Second, further analysis involving the more labour intensive proximate-composition analysis
GPS for estimation of TEC, electron density, velocity and scintillation in the ionosphere
C. Mitchell; P. Spencer; P. Yin; D. Pokhotelov; A. Smith
2007-01-01
Dual frequency GPS time delay and carrier phase observations provide a wealth of information about the ionosphere. The differential phase and time delay carry information about TEC, and the phase and amplitude of the signals about the irregularities. TEC observations from multiple receivers can be used in a tomographic algorithm to produce maps of the spatial field of electron density. Such an
Dynamics of photosynthetic photon flux density (PPFD) and estimates in coastal northern California
Technology Transfer Automated Retrieval System (TEKTRAN)
The seasonal trends and diurnal patterns of Photosynthetically Active Radiation (PAR) were investigated in the San Francisco Bay Area of Northern California from March through August in 2007 and 2008. During these periods, the daily values of PAR flux density (PFD), energy loading with PAR (PARE), a...
Baylor, R. N.; Cassak, P. A. [Department of Physics, West Virginia University, Morgantown, WV 26506 (United States); Christe, S. [NASA Goddard Space Flight Center, Greenbelt, MD 20771 (United States); Hannah, I. G.; Hudson, H. S. [School of Physics and Astronomy, University of Glasgow, Glasgow, G12 8QQ (United Kingdom); Krucker, Saem; Lin, R. P. [Space Sciences Laboratory, University of California, Berkeley, CA 94720-7450 (United States); Mullan, D. J.; Shay, M. A., E-mail: rbaylor@mix.wvu.edu [Department of Physics and Astronomy and Bartol Research Institute, University of Delaware, 217 Sharp Laboratory, Newark, DE 19716 (United States)
2011-07-20
We use more than 4500 microflares from the RHESSI microflare data set to estimate electron densities and volumetric filling factors of microflare loops using a cooling time analysis. We show that if the filling factor is assumed to be unity, the calculated conductive cooling times are much shorter than the observed flare decay times, which in turn are much shorter than the calculated radiative cooling times. This is likely unphysical, but the contradiction can be resolved by assuming that the radiative and conductive cooling times are comparable, which is valid when the flare loop temperature is a maximum and when external heating can be ignored. We find that the resultant radiative and conductive cooling times are comparable to observed decay times, which has been used as an assumption in some previous studies. The inferred electron densities have a mean value of 10^11.6 cm^-3 and filling factors have a mean of 10^-3.7. The filling factors are lower and densities are higher than previous estimates for large flares, but are similar to those found for two microflares by Moore et al.
Technology Transfer Automated Retrieval System (TEKTRAN)
Resolving uncertainty in the carbon cycle is paramount to refining climate predictions. Soil organic carbon (SOC) is a major component of terrestrial C pools, and accuracy of SOC estimates are only as good as the measurements and assumptions used to obtain them. Dryland soils account for a substanti...
Reconstruction of diagonal elements of density matrix using maximum likelihood estimation
Z. Hradil; R. Myska
1998-05-18
The data of the experiment of Schiller et al., Phys. Rev. Lett. 77 (1996) 2933, are alternatively evaluated using maximum likelihood estimation. The given data are fitted better than by the standard deterministic approach. Nevertheless, the data are fitted equally well by a whole family of states. Standard deterministic predictions correspond approximately to the envelope of these maximum likelihood solutions.
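For diagonal elements, the ML problem reduces to estimating a probability vector p from counts whose expected frequencies are q_j = Σ_k A_jk p_k. A generic expectation-maximization sketch of such an estimator (not necessarily the authors' exact scheme), with an invented 2×2 response matrix A whose columns sum to one:

```python
def ml_diagonal(counts, A, iters=500):
    """EM iterations for the multinomial ML estimate of p under
    q_j = sum_k A[j][k] * p[k]; each update multiplies p_k by a
    likelihood-gradient factor and preserves normalization."""
    n_out, n_p = len(A), len(A[0])
    total = float(sum(counts))
    p = [1.0 / n_p] * n_p
    for _ in range(iters):
        q = [sum(A[j][k] * p[k] for k in range(n_p)) for j in range(n_out)]
        p = [p[k] * sum(counts[j] * A[j][k] / q[j] for j in range(n_out)) / total
             for k in range(n_p)]
    return p

# Counts generated from true p = [0.3, 0.7] through A: the EM recovers p
print(ml_diagonal([41, 59], [[0.9, 0.2], [0.1, 0.8]]))
```

The family-of-solutions ambiguity noted in the abstract appears in this picture when A is ill-conditioned: many p vectors then fit the counts almost equally well.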
An Anthropometric Estimation of Body Density and Lean Body Weight in Young Women
JACK H. W; ALBERT R. BEHNKE
DURING THE PAST DECADE, a number of investigators have addressed themselves to the task of developing a simplified and widely applicable method for accurately assessing percentage body fat and lean body weight in human subjects. Body fat and lean body weight can be estimated precisely through such complex laboratory methods as radiography, helium dilution, total body water, total body
Consequences of Ignoring Guessing when Estimating the Latent Density in Item Response Theory
ERIC Educational Resources Information Center
Woods, Carol M.
2008-01-01
In Ramsay-curve item response theory (RC-IRT), the latent variable distribution is estimated simultaneously with the item parameters. In extant Monte Carlo evaluations of RC-IRT, the item response function (IRF) used to fit the data is the same one used to generate the data. The present simulation study examines RC-IRT when the IRF is imperfectly…
Individual movements and population density estimates for moray eels on a Caribbean coral reef
R. W. Abrams; M. W. Schein
1986-01-01
Observations of moray eel (Muraenidae) distribution made on a Caribbean coral reef are discussed in the context of long term population trends. Observations of eel distribution made using SCUBA during 1978, 1979–1980, and 1984 are compared and related to the occurrence of a hurricane in 1979. An estimate of the mean standing stock of moray eels is presented. The degree
Rabinovich, J E; Gürtler, R E; Leal, J A; Feliciangeli, D
1995-01-01
We reported the use of the timed manual method, routinely employed as an indicator of the relative abundance of domestic triatomine bugs, to estimate their absolute density in houses. A team of six people collected Rhodnius prolixus Stål bugs from the walls and roofs of 14 typical palm-leaf rural houses located in Cojedes, Venezuela, spending 40 minutes searching in each house. One day after these manual collections, all the houses were demolished and the triatomine bugs were identified by instar and counted. Linear regression analyses of the number of R. prolixus collected over 4 man-hours and the census counts obtained by house demolition indicated that the fit of the data by instar (stage II–adult) and place of capture (roof versus palm walls versus mud walls) was satisfactory. The slopes of the regressions were interpreted as a measure of "catchability" (probability of capture). Catchability increased with developmental stage (ranging from 11.2% in stage II to 38.7% in adults), probably reflecting the increasing size and visibility of bugs as they developed. The catchability on palm walls was higher than that for roofs or mud walls, increasing from 1.3% and 3.0% in stage II to 13.4% and 14.0% in adults, respectively. We also reported regression equations for converting field estimates of timed manual collections of R. prolixus into absolute density estimates. PMID:7614667
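Reading the slope as catchability corresponds to a regression of timed-collection counts on demolition census counts; a sketch with a through-the-origin least-squares slope and invented counts:

```python
def catchability(timed_counts, census_counts):
    """Least-squares slope through the origin of timed-collection
    counts regressed on census counts, read as a probability of
    capture."""
    sxy = sum(x * y for x, y in zip(census_counts, timed_counts))
    sxx = sum(x * x for x in census_counts)
    return sxy / sxx

# Hypothetical: three houses, 4 man-hour catches vs. demolition censuses
print(catchability([14, 26, 8], [100, 200, 50]))
```

Inverting the fitted slope is what turns a quick timed collection into an absolute density estimate, as the closing sentence of the abstract describes.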
Estimating the effective density of engineered nanomaterials for in vitro dosimetry
NASA Astrophysics Data System (ADS)
Deloid, Glen; Cohen, Joel M.; Darrah, Tom; Derk, Raymond; Rojanasakul, Liying; Pyrgiotakis, Georgios; Wohlleben, Wendel; Demokritou, Philip
2014-03-01
The need for accurate in vitro dosimetry remains a major obstacle to the development of cost-effective toxicological screening methods for engineered nanomaterials. An important key to accurate in vitro dosimetry is the characterization of sedimentation and diffusion rates of nanoparticles suspended in culture media, which largely depend upon the effective density and diameter of formed agglomerates in suspension. Here we present a rapid and inexpensive method for accurately measuring the effective density of nano-agglomerates in suspension. This novel method is based on the volume of the pellet obtained by benchtop centrifugation of nanomaterial suspensions in a packed cell volume tube, and is validated against gold-standard analytical ultracentrifugation data. This simple and cost-effective method allows nanotoxicologists to correctly model nanoparticle transport, and thus attain accurate dosimetry in cell culture systems, which will greatly advance the development of reliable and efficient methods for toxicological testing and investigation of nano-bio interactions in vitro.
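The role of effective density in dosimetry enters through sedimentation: under Stokes' law the settling velocity of an agglomerate grows with (ρ_eff − ρ_media) and with diameter squared. A sketch with the culture media approximated as water near 25 °C (the paper's media properties may differ):

```python
def stokes_velocity(diameter, rho_eff, rho_media=1000.0, mu=8.9e-4, g=9.81):
    """Stokes settling velocity (m/s) of a sphere: diameter in m,
    densities in kg/m^3, dynamic viscosity mu in Pa*s."""
    return (rho_eff - rho_media) * g * diameter ** 2 / (18.0 * mu)

# A 1 um agglomerate with effective density 1200 kg/m^3 settles slowly
print(stokes_velocity(1e-6, 1200.0))
```

Because agglomerates trap media, their effective density is far below the raw material density, which is why measuring it (as the pellet-volume method does) changes delivered-dose estimates substantially.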
NASA Astrophysics Data System (ADS)
ALkhazraji, Hasan; Salih, Mohammed Z.; Zhong, Zhengye; Mhaede, Mansour; Brokmeier, Hans-Günter; Wagner, Lothar; Schell, N.
2014-08-01
Cold rolling (CR) leads to heavy changes in the crystallographic texture and microstructure; in particular, crystal defects such as dislocations and stacking faults increase. The microstructure evolution in commercially pure titanium (cp-Ti) deformed by CR at room temperature was determined using synchrotron peak profile analysis of the full width at half maximum (FWHM). The computer program ANIZC has been used for the calculation of diffraction contrast factors of dislocations in elastically anisotropic hexagonal crystals. The dislocation density has a minimum value at 40 pct reduction. The increase of the dislocation density at higher deformation levels is caused by the nucleation of a new generation of dislocations from the crystallite grain boundaries. The high-cycle fatigue (HCF) strength has a maximum value at 80 pct reduction and a minimum value at 40 pct reduction in the commercially pure titanium.
Estimating the density of intermediate size KBOs from considerations of volatile retention
NASA Astrophysics Data System (ADS)
Levi, Amit; Podolak, Morris
2011-07-01
By using a hydrodynamic atmospheric escape mechanism (Levi, A., Podolak, M. [2009]. Icarus 202, 681-693) we show how the unusually high mass density of Quaoar could have been predicted (constrained), without any knowledge of a binary companion. We suggest an explanation of the recent spectroscopic observations of Orcus and Charon [Delsanti, A., Merlin, F., Guilbert, A., Bauer, J., Yang, B., Meech, K.J., 2010. Astron. Astrophys. 520, A40; Cook, J.C., Desch, S.J., Roush, T.L., Trujillo, C.A., Geballe, T.R., 2007. Astrophys. J. 663, 1406-1419]. We present a simple relation between the detection of certain volatile ices and the body mass density and diameter. As a test case we implement the relations on the KBO 2003 AZ 84 and give constraints on its mass density. We also present a method of relating the latitude-dependence of hydrodynamic gas escape to the internal structure of a rapidly rotating body and apply it to Haumea.
New treatments of density fluctuations and recurrence times for re-estimating Zermelo’s paradox
NASA Astrophysics Data System (ADS)
Michel, Denis
What is the probability that all the gas in a box accumulates in the same half of this box? Though amusing, this question underlies the fundamental problem of density fluctuations at equilibrium, which has profound implications in many physical fields. The currently accepted solutions are derived from the studies of Brownian motion by Smoluchowski, but they are not appropriate for the directly colliding particles of gases. Two alternative theories are proposed here using self-regulatory Bernoulli distributions, which incorporate roles for crowding and pressure in counteracting density fluctuations. A quantum of space is first introduced to develop a mechanism of matter congestion holding for high densities. In a second mechanism valid in ordinary conditions, the influence of local pressure on the location of every particle is examined using classical laws of ideal gases. This approach reveals that a negative feedback results from the reciprocal influences between individual particles and the population of particles, which strongly reduces the probability of atypical microstates. Finally, a thermodynamic quantum of time is defined to compare the recurrence times of improbable macrostates predicted through these different approaches.
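For the headline question, the naive combinatorial answer, assuming independent particles with no crowding or pressure feedback, is already astronomically small; it is this baseline that the proposed self-regulatory distributions correct. As a worked example:

```python
from fractions import Fraction

def all_in_one_half(n):
    """Probability that n independent particles all sit in the same
    (either) half of the box: 2 * (1/2)**n, under the naive
    independence assumption the paper refines."""
    return 2 * Fraction(1, 2) ** n

print(all_in_one_half(4))            # -> 1/8
print(float(all_in_one_half(100)))   # already vanishingly small
```

With a macroscopic N of order 10^23 the exponent is so large that the corresponding Poincaré recurrence time dwarfs any physical timescale, which is the core of Zermelo's paradox.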
Breast segmentation and density estimation in breast MRI: a fully automatic framework.
Gubern-Mérida, Albert; Kallenberg, Michiel; Mann, Ritse M; Martí, Robert; Karssemeijer, Nico
2015-01-01
Breast density measurement is an important aspect in breast cancer diagnosis as dense tissue has been related to the risk of breast cancer development. The purpose of this study is to develop a method to automatically compute breast density in breast MRI. The framework is a combination of image processing techniques to segment breast and fibroglandular tissue. Intra- and interpatient signal intensity variability is initially corrected. The breast is segmented by automatically detecting body-breast and air-breast surfaces. Subsequently, fibroglandular tissue is segmented in the breast area using expectation-maximization. A dataset of 50 cases with manual segmentations was used for evaluation. Dice similarity coefficient (DSC), total overlap, false negative fraction (FNF), and false positive fraction (FPF) are used to report similarity between automatic and manual segmentations. For breast segmentation, the proposed approach obtained DSC, total overlap, FNF, and FPF values of 0.94, 0.96, 0.04, and 0.07, respectively. For fibroglandular tissue segmentation, we obtained DSC, total overlap, FNF, and FPF values of 0.80, 0.85, 0.15, and 0.22, respectively. The method is relevant for researchers investigating breast density as a risk factor for breast cancer and all the described steps can be also applied in computer aided diagnosis systems. PMID:25561456
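The evaluation metrics used here are simple set overlaps between automatic (A) and manual (M) segmentations. A sketch over voxel-coordinate sets, with toy masks and one common convention for the fractions (both taken relative to the manual mask):

```python
def dice(a, m):
    """Dice similarity coefficient: 2|A ∩ M| / (|A| + |M|)."""
    return 2.0 * len(a & m) / (len(a) + len(m))

def fnf(a, m):
    """False negative fraction: manual voxels missed by the automatic mask."""
    return len(m - a) / len(m)

def fpf(a, m):
    """False positive fraction: automatic voxels outside the manual mask."""
    return len(a - m) / len(m)

auto = {(0, 0), (0, 1), (1, 0), (1, 1)}
manual = {(0, 1), (1, 0), (1, 1), (2, 1)}
print(dice(auto, manual))  # -> 0.75
```

A DSC of 1 means perfect agreement, so the reported 0.94 (breast) versus 0.80 (fibroglandular tissue) quantifies how much harder the tissue-level segmentation is.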
NASA Astrophysics Data System (ADS)
de Jesús Ochoa Domínguez, Humberto; Máynez, Leticia Ortega; Villegas, Osslan Osiris Vergara; Castillo, Nelly Gordillo; Sánchez, Vianey Guadalupe Cruz; Casas, Efrén David Gutiérrez
2011-10-01
The data obtained from a PET system tend to be noisy because of the limitations of the current instrumentation and the detector efficiency. This problem is particularly severe in images of small animals as the noise contaminates areas of interest within small organs. Therefore, denoising becomes a challenging task. In this paper, a novel wavelet-based regularization and edge preservation method is proposed to reduce such noise. To demonstrate this method, image reconstruction using a small mouse 18F NEMA phantom and a 18F mouse was performed. Investigation on the effects of the image quality was addressed for each reconstruction case. Results show that the proposed method drastically reduces the noise and preserves the image details.
Jamilis, Martín; Garelli, Fabricio; Mozumder, Md Salatul Islam; Castañeda, Teresita; De Battista, Hernán
2015-10-01
This paper addresses the estimation of the specific production rate of intracellular products and the modeling of the bioreactor volume dynamics in high cell density fed-batch reactors. In particular, a new model for the bioreactor volume is proposed, suitable to be used in high cell density cultures where large amounts of intracellular products are stored. Based on the proposed volume model, two forms of a high-order sliding mode observer are proposed. Each form corresponds to the cases with residual biomass concentration or volume measurement, respectively. The observers achieve finite time convergence and robustness to process uncertainties as the kinetic model is not required. Stability proofs for the proposed observer are given. The observer algorithm is assessed numerically and experimentally. PMID:26149912
NASA Astrophysics Data System (ADS)
Suzuki, Yukihisa; Taki, Masao
Magnetic fields around induction heating hobs are measured and evaluated with regard to compliance with human exposure safety guidelines. The magnetic flux density distributions are highly inhomogeneous, and the maximum can exceed the guideline reference levels in very close proximity to the device. The current densities induced in the human body exposed to these magnetic fields are estimated by numerical calculation using the impedance method with an anatomical human model. The results indicate that the induced currents are sufficiently lower than the basic restriction of the ICNIRP guideline. It is shown that the spatial peak of the incident field does not provide a relevant reference for comparison with the guideline reference level because it is too conservative, whereas the spatially averaged incident magnetic field provides a much more relevant reference.
Power spectral density estimation for wireless fluctuation enhanced gas sensor nodes
Mingesz, Robert; Gingl, Zoltan
2014-01-01
Fluctuation enhanced sensing (FES) is a promising method to improve the selectivity and sensitivity of semiconductor and nanotechnology gas sensors. Most measurement setups include high cost signal conditioning and data acquisition units as well as intensive data processing. However, there are attempts to reduce the cost and energy consumption of the hardware and to find efficient processing methods for low cost wireless solutions. In our paper we propose highly efficient signal processing methods to analyze the power spectral density of fluctuations. These support the development of ultra-low-power intelligent fluctuation enhanced wireless sensor nodes while several further applications are also possible.
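A minimal power-spectral-density estimator, a raw one-sided periodogram computed by direct DFT, is enough to illustrate the quantity such sensor nodes compute; a practical low-power node would use an FFT and Welch-style averaging instead:

```python
import cmath
import math

def periodogram(x, fs):
    """Raw periodogram of a real signal x sampled at fs Hz:
    |DFT|^2 / (fs * N) at the N//2 + 1 non-negative frequency bins."""
    n = len(x)
    psd = []
    for k in range(n // 2 + 1):
        X = sum(x[j] * cmath.exp(-2j * math.pi * k * j / n) for j in range(n))
        psd.append(abs(X) ** 2 / (fs * n))
    return psd

# A pure tone at bin 2 concentrates its power there
tone = [math.sin(2.0 * math.pi * 2.0 * j / 8.0) for j in range(8)]
print(periodogram(tone, 1.0))
```

In fluctuation enhanced sensing, the shape of this spectrum (rather than the mean resistance) carries the gas-specific signature, which is why efficient on-node PSD estimation matters.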
Bell, David M; Ward, Eric J; Oishi, A Christopher; Oren, Ram; Flikkema, Paul G; Clark, James S
2015-07-01
Uncertainties in ecophysiological responses to environment, such as the impact of atmospheric and soil moisture conditions on plant water regulation, limit our ability to estimate key inputs for ecosystem models. Advanced statistical frameworks provide coherent methodologies for relating observed data, such as stem sap flux density, to unobserved processes, such as canopy conductance and transpiration. To address this need, we developed a hierarchical Bayesian State-Space Canopy Conductance (StaCC) model linking canopy conductance and transpiration to tree sap flux density from a 4-year experiment in the North Carolina Piedmont, USA. Our model builds on existing ecophysiological knowledge but explicitly incorporates uncertainty in canopy conductance, internal tree hydraulics, and observation error to improve estimation of canopy conductance responses to atmospheric drought (i.e., vapor pressure deficit), soil drought (i.e., soil moisture), and above-canopy light. Our statistical framework not only predicted sap flux observations well, but also allowed us to gap-fill missing data while simultaneously making inference on canopy processes, a substantial advance over traditional methods. The predicted and observed sap flux data were highly correlated (mean sensor-level Pearson correlation coefficient = 0.88). Variations in canopy conductance and transpiration associated with environmental variation across days to years were many times greater than the variation associated with model uncertainties. Because some variables, such as vapor pressure deficit and soil moisture, were correlated at the scale of days to weeks, canopy conductance responses to individual environmental variables were difficult to interpret in isolation. Still, our results highlight the importance of accounting for uncertainty in models of ecophysiological and ecosystem function where the process of interest, canopy conductance in this case, is not observed directly. The StaCC modeling framework provides a statistically coherent approach to estimating canopy conductance and transpiration and to propagating estimation uncertainty into ecosystem models, paving the way for improved prediction of water and carbon uptake responses to environmental change. PMID:26063709
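The latent-state idea can be illustrated with a far simpler model than StaCC: a one-dimensional random-walk Kalman filter that tracks a noisy series and carries its state through missing samples. This toy, with made-up noise variances, is neither hierarchical nor fully Bayesian in the paper's sense, but it shows how a state-space model infers an unobserved process and gap-fills data in a single pass:

```python
import numpy as np

def kalman_gapfill(y, q=0.05, r=0.01):
    """One-dimensional random-walk Kalman filter: tracks a noisy
    series and carries the latent state through missing samples
    (NaN), gap-filling as it goes. q, r are made-up variances."""
    m, p = 0.0, 1.0                  # state mean and variance
    out = np.empty_like(y)
    for i, obs in enumerate(y):
        p += q                       # random-walk prediction step
        if not np.isnan(obs):        # update only where data exist
            k = p / (p + r)
            m += k * (obs - m)
            p *= 1.0 - k
        out[i] = m
    return out

rng = np.random.default_rng(1)
truth = np.sin(np.linspace(0.0, 4.0 * np.pi, 200))
y = truth + 0.1 * rng.standard_normal(200)
y[60:80] = np.nan                    # a stretch of missing data
filled = kalman_gapfill(y)
```

During the gap the filter simply propagates its latent state (its variance growing), and it reconverges once observations resume; the hierarchical model in the paper does the analogous thing while also partitioning uncertainty across trees, sensors, and processes.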
Estimating the effective density of engineered nanomaterials for in vitro dosimetry
DeLoid, Glen; Cohen, Joel M.; Darrah, Tom; Derk, Raymond; Wang, Liying; Pyrgiotakis, Georgios; Wohlleben, Wendel; Demokritou, Philip
2014-01-01
The need for accurate in vitro dosimetry remains a major obstacle to the development of cost-effective toxicological screening methods for engineered nanomaterials. An important key to accurate in vitro dosimetry is the characterization of the sedimentation and diffusion rates of nanoparticles suspended in culture media, which largely depend upon the effective density and diameter of the agglomerates formed in suspension. Here we present a rapid and inexpensive method for accurately measuring the effective density of nano-agglomerates in suspension. This novel method is based on the volume of the pellet obtained by bench-top centrifugation of nanomaterial suspensions in a packed cell volume tube, and is validated against gold-standard analytical ultracentrifugation data. This simple and cost-effective method allows nanotoxicologists to model nanoparticle transport correctly, and thus attain accurate dosimetry in cell culture systems, which will greatly advance the development of reliable and efficient methods for toxicological testing and investigation of nano-bio interactions in vitro. PMID:24675174
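Assuming the usual volumetric form of this calculation, in which the measured pellet volume is corrected by a stacking factor and the media trapped inside the agglomerates contributes its own density, the effective density follows from a one-line formula. The symbols, the stacking factor, and all default values below are our assumptions for illustration and should be checked against the paper:

```python
def effective_density(m_nm_g, v_pellet_cm3,
                      rho_media=1.00, rho_nm=5.6, sf=0.634):
    """Effective agglomerate density (g/cm^3) from the packed-pellet
    volume: the pellet is modeled as agglomerates packed at stacking
    factor sf (~0.634 for random close packing of spheres), with the
    media trapped inside contributing rho_media. All defaults are
    illustrative, not measured values."""
    return rho_media + (m_nm_g / v_pellet_cm3) * (1.0 - rho_media / rho_nm) / sf

# Example: 10 mg of a 5.6 g/cm^3 material packing into a 10 uL pellet.
rho_ev = effective_density(m_nm_g=0.01, v_pellet_cm3=0.01)
```

The result necessarily falls between the media density (loose, media-rich agglomerates) and the raw material density (fully dense particles), which is a quick sanity check on any measured pellet volume.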