For comprehensive and current results, perform a real-time search at Science.gov.

1

Wavelet-based estimators of scaling behavior

Abstract: Various wavelet-based estimators of the self-similarity or long-range dependence scaling exponent are studied extensively. These estimators mainly include the (bi)orthogonal wavelet estimators and the Wavelet Transform Modulus Maxima (WTMM) estimator. This study focuses on both short and long time series. In the framework of Fractional Auto-Regressive Integrated Moving Average (FARIMA) processes, we advocate the use of approximately adapted wavelet estimators. For these "ideal" processes,

Benjamin Audit; Emmanuel Bacry; Jean-François Muzy; Alain Arneodo

2002-01-01
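Estimators of this kind typically regress the logarithm of the wavelet detail variance against scale and read the scaling exponent off the slope. A minimal sketch of that log-log regression using a plain Haar transform (not the (bi)orthogonal or WTMM estimators the abstract studies); the function names and white-noise test signal are illustrative:

```python
import math
import random

def haar_details(x):
    """Haar DWT of a length-2^k sequence: detail coefficients per level, finest first."""
    levels, a = [], list(x)
    while len(a) > 1:
        levels.append([(a[i] - a[i + 1]) / math.sqrt(2) for i in range(0, len(a), 2)])
        a = [(a[i] + a[i + 1]) / math.sqrt(2) for i in range(0, len(a), 2)]
    return levels

def logscale_slope(x, min_coeffs=8):
    """Slope of log2(mean detail energy) against scale j: a crude scaling exponent.
    For uncorrelated (white) data the detail variance is flat, so the slope is near 0."""
    pts = [(j, math.log2(sum(c * c for c in d) / len(d)))
           for j, d in enumerate(haar_details(x), start=1) if len(d) >= min_coeffs]
    n = len(pts)
    mj = sum(j for j, _ in pts) / n
    mv = sum(v for _, v in pts) / n
    return (sum((j - mj) * (v - mv) for j, v in pts)
            / sum((j - mj) ** 2 for j, _ in pts))

random.seed(1)
white = [random.gauss(0.0, 1.0) for _ in range(4096)]
slope = logscale_slope(white)  # near 0 for white noise
```

For long-range dependent data the slope would instead grow with the dependence strength; scales with too few coefficients are excluded because their variance estimate is unreliable.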

2

Relay feedback and wavelet based estimation of plant model parameters

The paper presents a relay feedback and wavelet based method for the estimation of completely unknown processes for autotuning purposes. From a single symmetrical relay feedback analysis, a set of general expressions is presented for on-line process identification. Using these expressions, the exact parameters of open loop stable and unstable first order plus time delay (FOPDT) and second order plus

S. Majhi; J. S. Sahmbi; D. P. Atherton

2001-01-01

3

Wavelet-Based Histograms for Selectivity Estimation

Query optimization is an integral part of relational database management systems. One important task in query optimization is selectivity estimation. Given a query P, we need to estimate the fraction of records in the database that satisfy P. Many commercial database systems maintain histograms to approximate the frequency distribution of values in the attributes of relations. In this

Yossi Matias; Jeffrey Scott Vitter; Min Wang

1998-01-01
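The core idea behind wavelet-based histograms is to transform the frequency vector, keep only the largest coefficients, and answer range-selectivity queries from the compressed reconstruction. A hedged sketch of that idea with a Haar transform and naive top-k coefficient selection; this illustrates the general recipe, not necessarily the exact construction of the paper:

```python
import math

def haar_fwd(freqs):
    """Orthonormal Haar transform of a length-2^k frequency vector.
    Layout: [overall average term, then details from coarsest to finest]."""
    a, coeffs = list(freqs), []
    while len(a) > 1:
        coeffs = [(a[i] - a[i + 1]) / math.sqrt(2) for i in range(0, len(a), 2)] + coeffs
        a = [(a[i] + a[i + 1]) / math.sqrt(2) for i in range(0, len(a), 2)]
    return a + coeffs

def haar_inv(c):
    """Invert haar_fwd by expanding one level at a time."""
    a, pos = [c[0]], 1
    while pos < len(c):
        d = c[pos:pos + len(a)]
        pos += len(d)
        a = [v for s, w in zip(a, d) for v in ((s + w) / math.sqrt(2), (s - w) / math.sqrt(2))]
    return a

def compress(coeffs, k):
    """Keep the average plus the k largest-magnitude detail coefficients."""
    keep = set(sorted(range(1, len(coeffs)), key=lambda i: abs(coeffs[i]), reverse=True)[:k])
    return [c if (i == 0 or i in keep) else 0.0 for i, c in enumerate(coeffs)]

def selectivity(freqs_hat, lo, hi):
    """Estimated fraction of records whose attribute value falls in [lo, hi]."""
    return sum(freqs_hat[lo:hi + 1]) / sum(freqs_hat)

freqs = [40, 38, 41, 39, 2, 1, 90, 89]          # per-value record counts
approx = haar_inv(compress(haar_fwd(freqs), 4))  # 5 stored numbers instead of 8
```

Because each detail basis vector sums to zero, dropping coefficients preserves the total count, so the estimate degrades gracefully as k shrinks.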

4

Value-at-risk estimation with wavelet-based extreme value theory: Evidence from emerging markets

NASA Astrophysics Data System (ADS)

This paper introduces wavelet-based extreme value theory (EVT) for univariate value-at-risk estimation. Wavelets and EVT are combined for volatility forecasting to estimate a hybrid model. In the first stage, wavelets are used as a threshold in generalized Pareto distribution, and in the second stage, EVT is applied with a wavelet-based threshold. This new model is applied to two major emerging stock markets: the Istanbul Stock Exchange (ISE) and the Budapest Stock Exchange (BUX). The relative performance of wavelet-based EVT is benchmarked against the Riskmetrics-EWMA, ARMA-GARCH, generalized Pareto distribution, and conditional generalized Pareto distribution models. The empirical results show that the wavelet-based extreme value theory increases the predictive performance of financial forecasting according to the number of violations and tail-loss tests. The superior forecasting performance of the wavelet-based EVT model is also consistent with Basel II requirements, and this new model can be used by financial institutions as well.

Cifter, Atilla

2011-06-01
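As a point of reference for what such hybrid models are benchmarked against, the simplest value-at-risk estimator is historical simulation: the empirical quantile of past losses. A minimal sketch (the function name and toy sample are illustrative; the paper's model instead fits a generalized Pareto tail above a wavelet-derived threshold):

```python
def historical_var(returns, alpha=0.99):
    """Historical-simulation value-at-risk: the empirical alpha-quantile of losses.
    A crude baseline with no tail model at all."""
    losses = sorted(-r for r in returns)            # losses are negated returns
    k = min(int(alpha * len(losses)), len(losses) - 1)
    return losses[k]

daily_returns = [(i - 50) / 1000.0 for i in range(100)]  # toy sample, -5.0% .. +4.9%
var99 = historical_var(daily_returns)                    # worst 1% loss level
```

EVT-based estimators exist precisely because this empirical quantile is unstable far in the tail, where few observations are available.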

5

A simple statistical analysis of wavelet-based multifractal spectrum estimation

The multifractal spectrum characterizes the scaling and singularity structures of signals and proves useful in numerous applications, from network traffic analysis to turbulence. Of great concern is the estimation of the spectrum from a finite data record. We derive asymptotic expressions for the bias and variance of a wavelet-based estimator for a fractional Brownian motion (fBm) process. Numerous numerical simulations

Paulo Goncalves; Rudolf Riedi; Richard Baraniuk

1998-01-01

6

Estimation of Modal Parameters Using a Wavelet-Based Approach

NASA Technical Reports Server (NTRS)

Modal stability parameters are extracted directly from aeroservoelastic flight test data by decomposition of accelerometer response signals into time-frequency atoms. Logarithmic sweeps and sinusoidal pulses are used to generate DAST closed loop excitation data. Novel wavelets constructed to extract modal damping and frequency explicitly from the data are introduced. The so-called Haley and Laplace wavelets are used to track time-varying modal damping and frequency in a matching pursuit algorithm. Estimation of the trend to aeroservoelastic instability is demonstrated successfully from analysis of the DAST data.

Lind, Rick; Brenner, Marty; Haley, Sidney M.

1997-01-01

7

Wavelet-Based Linear-Response Time-Dependent Density-Functional Theory

Linear-response time-dependent (TD) density-functional theory (DFT) has been implemented in the pseudopotential wavelet-based electronic structure program BigDFT and results are compared against those obtained with the all-electron Gaussian-type orbital program deMon2k for the calculation of electronic absorption spectra of N2 using the TD local density approximation (LDA). The two programs give comparable excitation energies and absorption spectra once suitably extensive basis sets are used. Convergence of LDA density orbitals and orbital energies to the basis-set limit is significantly faster for BigDFT than for deMon2k. However the number of virtual orbitals used in TD-DFT calculations is a parameter in BigDFT, while all virtual orbitals are included in TD-DFT calculations in deMon2k. As a reality check, we report the x-ray crystal structure and the measured and calculated absorption spectrum (excitation energies and oscillator strengths) of the small organic molecule N-cyclohexyl-2-(4-methoxyphenyl)imidaz...

Natarajan, Bhaarathi; Casida, Mark E; Deutsch, Thierry; Burchak, Olga N; Philouze, Christian; Balakirev, Maxim Y

2011-01-01

8

A new wavelet based algorithm for estimating respiratory motion rate using UWB radar

UWB signals have become attractive for their particular advantage of having a narrow pulse width, which makes them suitable for remote sensing of vital signs. In this paper, a novel approach to estimating periodic motion rates using ultra-wideband (UWB) signals is proposed. The proposed algorithm, which is based on the wavelet transform, is used as a non-contact tool for measurement

Mehran Baboli; Seyed Ali Ghorashi; Namdar Saniei; Alireza Ahmadian

2009-01-01

9

Wavelet-Based Parameter Estimation for Polynomial Contaminated Fractionally Differenced Processes

of estimating the parameters for a stochastic process using a time series containing a trend component. We consider models such as fractionally differenced (FD) processes, which exhibit slowly decaying autocorrelations, in a model of polynomial trend plus FD noise. Using Daubechies wavelet filters allows for automatic

Percival, Don

10

Wavelet-based analysis and power law classification of C/NOFS high-resolution electron density data

NASA Astrophysics Data System (ADS)

This paper applies new wavelet-based analysis procedures to low Earth-orbiting satellite measurements of equatorial ionospheric structure. The analysis was applied to high-resolution data from 285 Communications/Navigation Outage Forecasting System (C/NOFS) satellite orbits sampling the postsunset period at geomagnetic equatorial latitudes. The data were acquired during a period of progressively intensifying equatorial structure. The sampled altitude range varied from 400 to 800 km. The varying scan velocity remained within 20° of the cross-field direction. Time-to-space interpolation generated uniform samples at approximately 8 m. A maximum segmentation length that supports stochastic structure characterization was identified. A two-component inverse power law model was fit to scale spectra derived from each segment together with a goodness-of-fit measure. Inverse power law parameters derived from the scale spectra were used to classify the scale spectra by type. The largest category was characterized by a single inverse power law with a mean spectral index somewhat larger than 2. No systematic departure from the inverse power law was observed to scales greater than 100 km. A small subset of the most highly disturbed passes at the lowest sampled altitudes could be categorized by two-component power law spectra with a range of break scales from less than 100 m to several kilometers. The results are discussed within the context of other analyses of in situ data and spectral characteristics used for scintillation analyses.

Rino, C. L.; Carrano, C. S.; Roddy, Patrick

2014-08-01

11

A New Wavelet Based Electronic Structure Code

We present a new wavelet based local density approximation electronic structure method for large scale systems. The new method has been developed for general boundary conditions, which allows for the study of 3-D periodic systems, layered systems and surfaces. Because we are interested in large systems, a question arises as to which is computationally more efficient: solving a large sparse

W. A. Shelton; W. F. Lawkins; G. M. Stocks; D. M. C. Nicholson

1998-01-01

12

Wavelet-based Evapotranspiration Forecasts

NASA Astrophysics Data System (ADS)

Providing a reliable short-term forecast of evapotranspiration (ET) could be a valuable element for improving the efficiency of irrigation water delivery systems. In the last decade, wavelet transform has become a useful technique for analyzing the frequency domain of hydrological time series. This study shows how wavelet transform can be used to access statistical properties of evapotranspiration. The objective of the research reported here is to use wavelet-based techniques to forecast ET up to 16 days ahead, which corresponds to the LANDSAT 7 overpass cycle. The properties of the ET time series, both physical and statistical, are examined in the time and frequency domains. We use the information about the energy decomposition in the wavelet domain to extract meaningful components that are used as inputs for ET forecasting models. Seasonal autoregressive integrated moving average (SARIMA) and multivariate relevance vector machine (MVRVM) models are coupled with the wavelet-based multiresolution analysis (MRA) results and used to generate short-term ET forecasts. Accuracy of the models is estimated and model robustness is evaluated using the bootstrap approach.

Bachour, R.; Maslova, I.; Ticlavilca, A. M.; McKee, M.; Walker, W.

2012-12-01

13

Density-difference estimation.

We address the problem of estimating the difference between two probability densities. A naive approach is a two-step procedure of first estimating two densities separately and then computing their difference. However, this procedure does not necessarily work well because the first step is performed without regard to the second step, and thus a small estimation error incurred in the first stage can cause a big error in the second stage. In this letter, we propose a single-shot procedure for directly estimating the density difference without separately estimating two densities. We derive a nonparametric finite-sample error bound for the proposed single-shot density-difference estimator and show that it achieves the optimal convergence rate. We then show how the proposed density-difference estimator can be used in L²-distance approximation. Finally, we experimentally demonstrate the usefulness of the proposed method in robust distribution comparison such as class-prior estimation and change-point detection. PMID:23777524

Sugiyama, Masashi; Kanamori, Takafumi; Suzuki, Taiji; du Plessis, Marthinus Christoffel; Liu, Song; Takeuchi, Ichiro

2013-10-01
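The naive two-step procedure the letter argues against is easy to write down, which also makes its weakness concrete. A hedged sketch using Gaussian kernel density estimates (the bandwidth, sample sizes, and function names are illustrative, not the letter's single-shot estimator):

```python
import math
import random

def gauss_kde(sample, h):
    """Gaussian kernel density estimator with bandwidth h."""
    n = len(sample)
    c = 1.0 / (n * h * math.sqrt(2.0 * math.pi))
    return lambda x: c * sum(math.exp(-0.5 * ((x - xi) / h) ** 2) for xi in sample)

def two_step_difference(sample_p, sample_q, h):
    """Estimate f_p - f_q by subtracting two separately fitted KDEs.
    The errors of the two independent fits accumulate in the difference,
    which is exactly what a direct (single-shot) estimator avoids."""
    p_hat, q_hat = gauss_kde(sample_p, h), gauss_kde(sample_q, h)
    return lambda x: p_hat(x) - q_hat(x)

random.seed(2)
xs_p = [random.gauss(0.0, 0.5) for _ in range(500)]
xs_q = [random.gauss(1.5, 0.5) for _ in range(500)]
diff = two_step_difference(xs_p, xs_q, h=0.3)
# diff is positive near 0 (p dominates) and negative near 1.5 (q dominates)
```

Each KDE here is tuned without regard to the difference being targeted; the letter's point is that optimizing the two stages separately need not optimize the final estimate.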

14

Deconvolution by thresholding in mirror wavelet bases.

The deconvolution of signals is studied with thresholding estimators that decompose signals in an orthonormal basis and threshold the resulting coefficients. A general criterion is established to choose the orthonormal basis in order to minimize the estimation risk. Wavelet bases are highly sub-optimal to restore signals and images blurred by a low-pass filter whose transfer function vanishes at high frequencies. A new orthonormal basis called mirror wavelet basis is constructed to minimize the risk for such deconvolutions. An application to the restoration of satellite images is shown. PMID:18237922

Kalifa, Jérôme; Mallat, Stéphane; Rougé, Bernard

2003-01-01
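The decompose, threshold, reconstruct recipe behind such estimators is generic. A minimal sketch in a plain Haar basis with soft thresholding (the mirror wavelet basis of the paper is a specialized construction not reproduced here; signal, noise level, and threshold are illustrative):

```python
import math
import random

def haar_fwd(x):
    """Orthonormal Haar transform: [average term, details coarse to fine]."""
    a, c = list(x), []
    while len(a) > 1:
        c = [(a[i] - a[i + 1]) / math.sqrt(2) for i in range(0, len(a), 2)] + c
        a = [(a[i] + a[i + 1]) / math.sqrt(2) for i in range(0, len(a), 2)]
    return a + c

def haar_inv(c):
    """Invert haar_fwd level by level."""
    a, pos = [c[0]], 1
    while pos < len(c):
        d = c[pos:pos + len(a)]
        pos += len(d)
        a = [v for s, w in zip(a, d) for v in ((s + w) / math.sqrt(2), (s - w) / math.sqrt(2))]
    return a

def threshold_estimate(noisy, t):
    """Decompose, soft-threshold every detail coefficient, reconstruct."""
    c = haar_fwd(noisy)
    soft = lambda w: math.copysign(max(abs(w) - t, 0.0), w)
    return haar_inv(c[:1] + [soft(w) for w in c[1:]])

random.seed(3)
clean = [1.0] * 128 + [-1.0] * 128                 # piecewise-constant test signal
noisy = [s + random.gauss(0.0, 0.3) for s in clean]
denoised = threshold_estimate(noisy, 0.3 * math.sqrt(2.0 * math.log(256.0)))
```

A piecewise-constant signal is sparse in the Haar basis, so thresholding removes most of the noise while keeping the few large coefficients that carry the jumps; the paper's contribution is choosing a basis in which *deconvolved* signals are similarly sparse.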

15

Minimum complexity density estimation

The authors introduce an index of resolvability that is proved to bound the rate of convergence of minimum complexity density estimators as well as the information-theoretic redundancy of the corresponding total description length. The results on the index of resolvability demonstrate the statistical effectiveness of the minimum description-length principle as a method of inference. The minimum complexity estimator converges to

Andrew R. Barron; Thomas M. Cover

1991-01-01

16

Conditional Density Estimation with Class Probability Estimators

… to quantify the uncertainty inherent in a prediction. If a conditional density estimate is available … conditional density estimates using a class probability estimator, where this estimator is applied

Frank, Eibe

17

Conditional Density Estimation via Least-Squares Density Ratio Estimation

A novel method of conditional density estimation. Our basic idea is to express the conditional density in terms of the ratio of unconditional densities, and the ratio is directly estimated without going through

Sugiyama, Masashi

18

Forest Density Estimation

Anupam Gupta, John Lafferty, Han Liu, Larry Wasserman, Min Xu. … density estimation in high dimensions, using a family of density estimators based on forest-structured undirected graphical models. For density estimation, we do not assume the true distribution corresponds

Guestrin, Carlos

19

Maximum Likelihood Wavelet Density Estimation With Applications to Image and Shape Matching

Density estimation for observational data plays an integral role in a broad spectrum of applications, e.g., statistical data analysis and information-theoretic image registration. Of late, wavelet-based density estimators have gained in popularity due to their ability to approximate a large class of functions, adapting well to difficult situations such as when densities exhibit abrupt changes. The decision to work with wavelet density estimators brings along with it theoretical considerations (e.g., non-negativity, integrability) and empirical issues (e.g., computation of basis coefficients) that must be addressed in order to obtain a bona fide density. In this paper, we present a new method to accurately estimate a non-negative density which directly addresses many of the problems in practical wavelet density estimation. We cast the estimation procedure in a maximum likelihood framework which estimates the square root of the density p, allowing us to obtain the natural non-negative density representation (p)2. Analysis of this method will bring to light a remarkable theoretical connection with the Fisher information of the density and, consequently, lead to an efficient constrained optimization procedure to estimate the wavelet coefficients. We illustrate the effectiveness of the algorithm by evaluating its performance on mutual information-based image registration, shape point set alignment, and empirical comparisons to known densities. The present method is also compared to fixed and variable bandwidth kernel density estimators. PMID:18390355

Peter, Adrian M.; Rangarajan, Anand

2010-01-01

20

Nonparametric Density Estimation using Wavelets

Marina Vannucci, Department of Statistics, Texas A…; revision September 1998. Here the problem of density estimation using wavelets is considered. Nonparametric wavelet density estimators have recently been proposed and seem to outperform classical estimators

West, Mike

21

Adaptive wavelet-based deconvolution method for remote sensing imaging.

Fourier-based deconvolution (FoD) techniques, such as modulation transfer function compensation, are commonly employed in remote sensing. However, the noise is strongly amplified by FoD and is colored, thus producing poor visual quality. We propose an adaptive wavelet-based deconvolution algorithm for remote sensing called wavelet denoise after Laplacian-regularized deconvolution (WDALRD) to overcome the colored noise and to preserve the textures of the restored image. This algorithm adaptively denoises the FoD result on a wavelet basis. The term "adaptive" means that the wavelet-based denoising procedure requires no parameter to be estimated or empirically set, and thus the inhomogeneous Laplacian prior and the Jeffreys hyperprior are proposed. Maximum a posteriori estimation based on such a prior and hyperprior leads us to an adaptive and efficient nonlinear thresholding estimator, and therefore WDALRD is computationally inexpensive and fast. Experimentally, textures and edges of the restored image are well preserved and sharp, while the homogeneous regions remain noise free, so WDALRD gives satisfactory visual quality. PMID:19696869

Zhang, Wei; Zhao, Ming; Wang, Zhile

2009-08-20

22

A new wavelet-based adaptive method for solving population balance equations

A new wavelet-based adaptive framework for solving population balance equations (PBEs) is proposed in this work. The technique is general, powerful and efficient without the need for prior assumptions about the characteristics of the processes. Because there are steeply varying number densities across a size range, a new strategy is developed to select the optimal order of resolution and the

Y Liu; I. T Cameron

2003-01-01

23

Wavelet-based ultrasound image denoising: performance analysis and comparison.

Ultrasound images are generally affected by multiplicative speckle noise, which is mainly due to the coherent nature of the scattering phenomenon. Speckle noise filtering is thus a critical pre-processing step in medical ultrasound imaging, provided that the diagnostic features of interest are not lost. A comparative study of the performance of alternative wavelet based ultrasound image denoising methods is presented in this article. In particular, the contourlet and curvelet techniques with dual tree complex and real and double density wavelet transform denoising methods were applied to real ultrasound images and results were quantitatively compared. The results show that the curvelet-based method performs best among the compared methods and can effectively reduce most of the speckle noise content of a given image. PMID:22255196

Rizi, F Yousefi; Noubari, H Ahmadi; Setarehdan, S K

2011-01-01

24

DENSITY ESTIMATION BY TOTAL VARIATION REGULARIZATION

Roger Koenker and Ivan Mizera. Abstract: … L1 penalties based on total variation of the estimated density, its square root, and its logarithm, and their derivatives, in the context of univariate and bivariate density estimation, and compare the results to some

Mizera, Ivan

25

DENSITY ESTIMATION TECHNIQUES FOR GLOBAL ILLUMINATION

A dissertation presented to the faculty of Cornell University by Bruce Jonathan Walter, Ph.D., 1998. In this thesis we present the density estimation framework for computing view-independent global illumination

Keinan, Alon

26

Density-Difference Estimation

Masashi Sugiyama, Takafumi Kanamori, Taiji Suzuki, Marthinus … A naive approach is a two-step procedure of first estimating two densities separately and then computing their difference. However, we propose a single-shot procedure for directly estimating the density difference without separately estimating two densities. We derive a non-parametric finite

Sugiyama, Masashi

27

Wavelet-based acoustic recognition of aircraft

We describe a wavelet-based technique for identifying aircraft from acoustic emissions during take-off and landing. Tests show that the sensor can be a single, inexpensive hearing-aid microphone placed close to the ground. The paper describes data collection, analysis by various techniques, methods of event classification, and extraction of certain physical parameters from wavelet subspace projections. The primary goal of this paper is to show that wavelet analysis can be used as a divide-and-conquer first step in signal processing, providing both simplification and noise filtering. The idea is to project the original signal onto the orthogonal wavelet subspaces, both details and approximations. Subsequent analysis, such as system identification, nonlinear systems analysis, and feature extraction, is then carried out on the various signal subspaces.

Dress, W.B.; Kercel, S.W.

1994-09-01

28

Kernel Density Estimation An Introduction

If we have a parametric, generative model of the PDF, we can estimate its parameters from samples. Without a generative model, all we can use to estimate is our available sample. Uses: faithfully model arbitrary distributions from finite samples; intuitive presentation and exploration.

Piater, Justus H.
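The contrast these lecture notes draw, a parametric fit versus a sample-driven estimate, can be sketched in a few lines. The bimodal example and bandwidth below are illustrative assumptions:

```python
import math
import random

def gaussian_fit(sample):
    """Parametric route: fit a single Gaussian by moment matching."""
    n = len(sample)
    mu = sum(sample) / n
    sigma = math.sqrt(sum((x - mu) ** 2 for x in sample) / n)
    return lambda x: (math.exp(-0.5 * ((x - mu) / sigma) ** 2)
                      / (sigma * math.sqrt(2.0 * math.pi)))

def kde(sample, h):
    """Nonparametric route: Gaussian-kernel density estimate from the sample alone."""
    n = len(sample)
    return lambda x: (sum(math.exp(-0.5 * ((x - xi) / h) ** 2) for xi in sample)
                      / (n * h * math.sqrt(2.0 * math.pi)))

random.seed(4)
bimodal = ([random.gauss(-2.0, 0.4) for _ in range(300)]
           + [random.gauss(2.0, 0.4) for _ in range(300)])
f_par, f_kde = gaussian_fit(bimodal), kde(bimodal, h=0.3)
# the single Gaussian peaks in the empty middle; the KDE keeps both modes
```

The fitted Gaussian places its maximum near x = 0, where almost no data lie, while the KDE reproduces the two modes, which is the "faithfully model arbitrary distributions" point above.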

29

Wavelet-based fingerprint image retrieval

NASA Astrophysics Data System (ADS)

This paper presents a novel approach for personal identification based on a wavelet-based fingerprint retrieval system which encompasses three image retrieval tasks, namely, feature extraction, similarity measurement, and feature indexing. We propose the use of different types of Wavelets for representing and describing the textural information presented in fingerprint images in a compact way. For that purpose, the feature vectors used to characterize the fingerprints are obtained by computing the mean and the standard deviation of the decomposed images in the wavelet domain. These feature vectors are used both to retrieve the most similar fingerprints, given a query image, and their indexation is used to reduce the search spaces of candidate images. The different types of Wavelets used in our study include: Gabor wavelets, tree-structured wavelet decomposition using both orthogonal and bi-orthogonal filter banks, as well as the steerable wavelets. To evaluate the retrieval accuracy of the proposed approach, a total number of eight different data sets were considered. We also took into account different combinations of the above wavelets with six similarity measures. The results show that the Gabor wavelets combined with the Square Chord similarity measure achieves the best retrieval effectiveness.

Montoya Zegarra, Javier A.; Leite, Neucimar J.; da Silva Torres, Ricardo

2009-05-01

30

Construction of fractional spline wavelet bases

NASA Astrophysics Data System (ADS)

We extend Schoenberg's B-splines to all fractional degrees α > -1/2. These splines are constructed using linear combinations of the integer shifts of the power functions x_+^α (one-sided) or |x|^α (symmetric); in each case, they are α-Hölder continuous for α > 0. They satisfy most of the properties of the traditional B-splines; in particular, the Riesz basis condition and the two-scale relation, which makes them suitable for the construction of new families of wavelet bases. What is especially interesting from a wavelet perspective is that the fractional B-splines have a fractional order of approximation (α + 1), while they reproduce the polynomials of degree ⌈α⌉. We show how they yield continuous-order generalizations of the orthogonal Battle-Lemarié wavelets and of the semi-orthogonal B-spline wavelets. As α increases, these latter wavelets tend to be optimally localized in time and frequency in the sense specified by the uncertainty principle. The corresponding analysis wavelets also behave like fractional differentiators; they may therefore be used to whiten fractional Brownian motion processes.

Unser, Michael A.; Blu, Thierry

1999-10-01

31

Bayesian Density Estimation and Inference Using Mixtures

We describe and illustrate Bayesian inference in models for density estimation using mixtures of Dirichlet processes. These models provide natural settings for density estimation, and are exemplified by special cases where data are modelled as a sample from mixtures of normal distributions. Efficient simulation methods are used to approximate various prior, posterior and predictive distributions. This allows for direct inference on a variety of

Michael D. Escobar; Mike West

1994-01-01

32

Application of wavelet-based denoising techniques to remote sensing very low frequency signals

NASA Astrophysics Data System (ADS)

In this paper, we apply wavelet-based denoising techniques to experimental remote sensing very low frequency (VLF) signals obtained from the Holographic Array for Ionospheric/Lightning research system and the Elazig VLF receiver system. The wavelet-based denoising techniques are tested by soft, hard, hyperbolic and nonnegative garrote wavelet thresholding with the threshold selection rule based on Stein's unbiased estimate of risk, the fixed form threshold, the mixed threshold selection rule and the minimax-performance threshold selection rule. The aim of this study is to find out the direct (early/fast) and indirect (lightning-induced electron precipitation) effects of lightning in noisy VLF transmitter signals without distorting the nature of the signal. The appropriate results are obtained by the fixed form threshold selection rule with soft thresholding using the Symlet wavelet family.

Güzel, Esat; Canyılmaz, Murat; Türk, Mustafa

2011-04-01
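The four shrinkage rules named in the abstract differ only in how a wavelet coefficient w is mapped once a threshold t is chosen. Hedged one-liners, using the formulas as they are commonly defined in the wavelet-shrinkage literature:

```python
import math

def hard(w, t):
    """Keep-or-kill: surviving coefficients are left unchanged."""
    return w if abs(w) > t else 0.0

def soft(w, t):
    """Shrink every surviving coefficient toward zero by t."""
    return math.copysign(max(abs(w) - t, 0.0), w)

def hyperbolic(w, t):
    """Between soft and hard; approaches hard for large |w|."""
    return math.copysign(math.sqrt(w * w - t * t), w) if abs(w) > t else 0.0

def garrote(w, t):
    """Nonnegative garrote: w - t^2/w beyond the threshold."""
    return w - t * t / w if abs(w) > t else 0.0
```

At w = 2 with t = 1 the rules return 2 (hard), 1 (soft), √3 ≈ 1.732 (hyperbolic), and 1.5 (garrote), which shows the ordering soft ≤ garrote, hyperbolic ≤ hard; all four map |w| ≤ t to zero.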

33

Wavelet-based adaptive denoising and baseline correction for MALDI TOF MS.

Proteomic profiling by MALDI TOF mass spectrometry (MS) is an effective method for identifying biomarkers from human serum/plasma, but the process is complicated by the presence of noise in the spectra. In MALDI TOF MS, the major noise source is chemical noise, which is defined as the interference from matrix material and its clusters. Because chemical noise is nonstationary and nonwhite, wavelet-based denoising is more effective than conventional noise reduction schemes based on Fourier analysis. However, current wavelet-based denoising methods for mass spectrometry do not fully consider the characteristics of chemical noise. In this article, we propose new wavelet-based high-frequency noise reduction and baseline correction methods that were designed based on the discrete stationary wavelet transform. The high-frequency noise reduction algorithm adaptively estimates the time-varying threshold for each frequency subband from multiple realizations of chemical noise and removes noise from mass spectra of samples using the estimated thresholds. The baseline correction algorithm computes the monotonically decreasing baseline in the highest approximation of the wavelet domain. The experimental results demonstrate that our algorithms effectively remove artifacts in mass spectra that are due to chemical noise while preserving informative features as compared to commonly used denoising methods. PMID:20455751

Shin, Hyunjin; Sampat, Mehul P; Koomen, John M; Markey, Mia K

2010-06-01

34

Wavelet-based SAR speckle reduction and image compression

NASA Astrophysics Data System (ADS)

This paper evaluates the performance of the recently published wavelet-based algorithm for speckle reduction of SAR images. The original algorithm, based on the theory of wavelet thresholding due to Donoho and Johnstone, has been shown to improve speckle statistics. In this paper, we give more extensive results based on tests performed at Lincoln Laboratory (LL). The LL benchmarks show that the SAR imagery is significantly enhanced perceptually. Although the wavelet processed data results in an increase in the number of natural clutter false alarms, an appropriately modified CFAR detector (i.e., by clamping the estimated clutter standard deviation) eliminates the extra false alarms. The paper also gives preliminary results on the performance of the new and improved wavelet denoising algorithm based on the shift invariant wavelet transform. By thresholding the shift invariant discrete wavelet transform we can further reduce speckle to achieve a perceptually superior SAR image with ground truth information significantly enhanced. Preliminary results indicate that the speckle statistics of this new algorithm are improved over the classical wavelet denoising algorithm. Finally, we show that the classical denoising algorithm as proposed by Donoho and Johnstone and applied to SAR has the added benefit of achieving about 3:1 compression with essentially no loss in image fidelity.

Odegard, Jan E.; Guo, Haitao; Lang, Markus; Burrus, C. Sidney; Wells, Raymond O., Jr.; Novak, Leslie M.; Hiett, Margarita

1995-06-01

35

A wavelet based investigation of long memory in stock returns

NASA Astrophysics Data System (ADS)

Using a wavelet-based maximum likelihood fractional integration estimator, we test long memory (return predictability) in the returns at the market, industry and firm level. In an analysis of emerging market daily returns over the full sample period, we find that long memory is not present at the market level, while in approximately twenty percent of the 175 stocks there is evidence of long memory. The absence of long memory in the market returns may be a consequence of contemporaneous aggregation of stock returns. However, when the analysis is carried out with rolling windows, evidence of long memory is observed in certain time frames. These results are largely consistent with those of detrended fluctuation analysis. A test of firm-level information in explaining stock return predictability using a logistic regression model reveals that the returns of large firms are more likely to possess the long memory feature than the returns of small firms. There is no evidence to suggest that turnover, earnings per share, book-to-market ratio, systematic risk and abnormal return with respect to the market model are associated with return predictability. However, the degree of long-range dependence appears to be associated positively with earnings per share, systematic risk and abnormal return, and negatively with book-to-market ratio.

Tan, Pei P.; Galagedera, Don U. A.; Maharaj, Elizabeth A.

2012-04-01

36

Wavelet-based adaptive multiresolution computation of viscous reactive flows

NASA Astrophysics Data System (ADS)

We present a wavelet-based adaptive multiresolution algorithm for the numerical solution of multiscale problems. The main features of the method include fast algorithms for the calculation of wavelet coefficients and approximation of derivatives on nonuniform stencils. The connection between the wavelet order and the size of the stencil is established. The algorithm is based on the mathematically well-established wavelet theory. This allows us to provide error estimates of the solution resulting from the use of an appropriate threshold criteria. The algorithm is applied to a number of test problems as well as to the study of the ignition-delay and subsequent viscous detonation of a H2:O2:Ar mixture in a shock tube. The simulations show the striking ability of the algorithm to adapt to a solution having different scales at different spatial locations so as to produce accurate results at a relatively low computational cost. The algorithm is compared with classic ENO and TVD schemes. It is shown that the algorithm, besides being significantly more efficient in terms of computational cost, is free from many numerical difficulties associated with those schemes.

Rastigejev, Yevgenii A.; Paolucci, Samuel

2006-11-01

37

Direct Density Ratio Estimation with Dimensionality Reduction

Masashi Sugiyama, Satoshi Hara. … for directly estimating the ratio of two probability density functions without going through density estimation … such as non-stationarity adaptation, outlier detection, conditional density estimation, feature selection

Sugiyama, Masashi

38

ESTIMATES OF BIOMASS DENSITY FOR TROPICAL FORESTS

An accurate estimation of the biomass density in forests is a necessary step in understanding the global carbon cycle and the production of other atmospheric trace gases from biomass burning. In this paper the authors summarize the various approaches that have been developed for estimating...

39

Towards Kernel Density Estimation over Streaming Data

A variety of real-world applications relies heavily on the analysis of transient data streams. Due to the rigid processing requirements of data streams, common analysis techniques as known from data mining are not applicable. A fundamental building block of many data mining and analysis approaches is density estimation. It provides a well-defined estimation of a continuous data distribution,

Christoph Heinz; Bernhard Seeger

2006-01-01

40

Quantum statistical inference for density estimation

A new penalized likelihood method for non-parametric density estimation is proposed, which is based on a mathematical analogy to quantum statistical physics. The mathematical procedure for density estimation is related to maximum entropy methods for inverse problems; the penalty function is a convex information divergence enforcing global smoothing toward default models, positivity, extensivity and normalization. The novel feature is the replacement of classical entropy by quantum entropy, so that local smoothing may be enforced by constraints on the expectation values of differential operators. Although the hyperparameters, covariance, and linear response to perturbations can be estimated by a variety of statistical methods, we develop the Bayesian interpretation. The linear response of the MAP estimate is proportional to the covariance. The hyperparameters are estimated by type-II maximum likelihood. The method is demonstrated on standard data sets.

Silver, R.N.; Martz, H.F.; Wallstrom, T.

1993-11-01

41

Wavelet-based Markov models for clutter characterization in IR and SAR images

NASA Astrophysics Data System (ADS)

This paper presents wavelet-based methods for characterizing clutter in IR and SAR images. With our methods, the operating parameters of automatic target recognition (ATR) systems can automatically adapt to local clutter conditions. Structured clutter, which can confuse ATR systems, possesses correlation across scale in the wavelet domain. We model this correlation using wavelet-domain hidden Markov trees, for which efficient parameter estimation algorithms exist. Based on these models, we develop analytical methods for estimating the false alarm rates of mean-squared-error classifiers. These methods are equally useful for determining threshold levels for constant false alarm rate detectors.

Stanford, Derek; Pitton, James W.; Goldschneider, Jill R.

2000-04-01

42

Estimating and Interpreting Probability Density Functions

NSDL National Science Digital Library

This 294-page document from the Bank for International Settlements stems from the Estimating and Interpreting Probability Density Functions workshop held on June 14, 1999. The conference proceedings, which may be downloaded as a complete document or by chapter, are divided into two sections: "Estimation Techniques" and "Applications and Economic Interpretation." Both contain papers presented at the conference. Also included are a list of the program participants with their affiliations and email addresses, a foreword, and background notes.

43

Density Estimation and Smoothing based on Regularised Optimal Transport

A nonparametric approach for estimating and smoothing densities based on a variational regularisation method using regularised optimal transport. A model for special regularisation functionals yields a natural method for estimating densities.

Münster, Westfälische Wilhelms-Universität

44

Density estimation by maximum quantum entropy

A new Bayesian method for non-parametric density estimation is proposed, based on a mathematical analogy to quantum statistical physics. The mathematical procedure is related to maximum entropy methods for inverse problems and image reconstruction. The information divergence enforces global smoothing toward default models, convexity, positivity, extensivity and normalization. The novel feature is the replacement of classical entropy by quantum entropy, so that local smoothing is enforced by constraints on differential operators. The linear response of the estimate is proportional to the covariance. The hyperparameters are estimated by type-II maximum likelihood (evidence). The method is demonstrated on textbook data sets.

Silver, R.N.; Wallstrom, T.; Martz, H.F.

1993-11-01

45

3D Wavelet-Based Filter and Method

A 3D wavelet-based filter for visualizing and locating structural features of a user-specified linear size in 2D or 3D image data. The only input parameter is a characteristic linear size of the feature of interest, and the filter output contains only those regions that are correlated with the characteristic size, thus denoising the image.

Moss, William C. (San Mateo, CA); Haase, Sebastian (San Francisco, CA); Sedat, John W. (San Francisco, CA)

2008-08-12

46

Enhancing hyperspectral data throughput utilizing wavelet-based fingerprints

NASA Astrophysics Data System (ADS)

Multiresolutional decompositions known as spectral fingerprints are often used to extract spectral features from multispectral/hyperspectral data. In this study, we investigate the use of wavelet-based algorithms for generating spectral fingerprints. The wavelet-based algorithms are compared to the currently used method, traditional convolution with first-derivative Gaussian filters. The comparison analysis consists of two parts: (1) the computational expense of the new method is compared with the computational costs of the current method and (2) the outputs of the wavelet-based methods are compared with those of the current method to determine any practical differences in the resulting spectral fingerprints. The results show that the wavelet-based algorithms can greatly reduce the computational expense of generating spectral fingerprints, while practically no differences exist in the resulting fingerprints. The analysis is conducted on a database of hyperspectral signatures, namely, Hyperspectral Digital Image Collection Experiment (HYDICE) signatures. The reduction in computational expense is by a factor of about 30, and the average Euclidean distance between resulting fingerprints is on the order of 0.02.

Bruce, Lori M.; Li, Jiang

1999-12-01

47

Enhancing Hyperspectral Data Throughput Utilizing Wavelet-Based Fingerprints

Multiresolutional decompositions known as spectral fingerprints are often used to extract spectral features from multispectral/hyperspectral data. In this study, the authors investigate the use of wavelet-based algorithms for generating spectral fingerprints. The wavelet-based algorithms are compared to the currently used method, traditional convolution with first-derivative Gaussian filters. The comparison analysis consists of two parts: (a) the computational expense of the new method is compared with the computational costs of the current method and (b) the outputs of the wavelet-based methods are compared with those of the current method to determine any practical differences in the resulting spectral fingerprints. The results show that the wavelet-based algorithms can greatly reduce the computational expense of generating spectral fingerprints, while practically no differences exist in the resulting fingerprints. The analysis is conducted on a database of hyperspectral signatures, namely, Hyperspectral Digital Image Collection Experiment (HYDICE) signatures. The reduction in computational expense is by a factor of about 30, and the average Euclidean distance between resulting fingerprints is on the order of 0.02.

I. W. Ginsberg

1999-09-01

48

Edge-Preserving Wavelet-Based Multisensor Image Fusion Approach

Wavelet-based approaches show interesting potential in the fusion of images obtained from possibly different types of sensors. The authors propose an image fusion scheme where image edges, characterized by wavelet maxima, are considered.

Ghouti, Lahouari

49

Wavelet-Based Time Series Analysis of Circadian Rhythms

Analysis of circadian oscillations that exhibit variability in period or amplitude can be accomplished through wavelet transforms. Wavelet-based methods can also be used quite effectively to remove trend and noise from time series and to assess the strength of rhythms in different frequency bands, for example, ultradian versus circadian components in an activity record. In this article, we describe how

Tanya L. Leise; Mary E. Harrington

2011-01-01

50

Wavelet-Based Image Fusion by Adaptive Decomposition

Image fusion is a process to combine information from multiple images of the same scene. The result of image fusion will be a new image which is more suitable for human and machine perception or further tasks of image processing such as image segmentation, feature extraction and object recognition. In this paper, a new wavelet-based approach for multi-resolution image fusion

Yao-hong Tsai; Yen-han Lee

2008-01-01

51

Wavelet-based Feature Extraction for Fingerprint Image Retrieval

This paper presents a novel approach to fingerprint retrieval for personal identification by joining three image retrieval tasks, namely, feature extraction, similarity measurement, and feature indexing, into a wavelet-based fingerprint retrieval system. We propose the use of different types of wavelets for representing and describing the textural information present in fingerprint images. For that purpose, the feature vectors used

Javier A. Montoya-Zegarra; Neucimar J. Leite; Ricardo da S. Torres

52

Template Learning from Atomic Representations: A Wavelet-based

The approach facilitates the pattern matching process (in both template learning and classification) by giving more weight to significant coefficients, and addresses the presence of unknown transformations (e.g., translation, rotation, location of lighting source) inherent

Nowak, Robert

53

Template Learning from Atomic Representations: A Wavelet-based

This representation is advantageous because it facilitates the pattern matching process (in both template learning and classification), and addresses the presence of unknown transformations (e.g., translation, rotation, location of lighting source) inherent

Scott, Clayton

54

Wavelet-Based Multiresolution Analysis of Wivenhoe Dam Water Temperatures

Wavelet-based multiresolution analysis of water temperature observations Xt recorded at the dam wall (temperature is regarded as an important driver for other water quality variables). Background: the Queensland Bulk Water Supply Authority (Seqwater) manages catchments and water storages.

Percival, Don

55

Wavelet-based analysis of blood pressure dynamics in rats

NASA Astrophysics Data System (ADS)

Using a wavelet-based approach, we study stress-induced reactions in the blood pressure dynamics of rats. Further, we consider how the level of nitric oxide (NO) influences heart rate variability. Clear distinctions between male and female rats are reported.

Pavlov, A. N.; Anisimov, A. A.; Semyachkina-Glushkovskaya, O. V.; Berdnikova, V. A.; Kuznecova, A. S.; Matasova, E. G.

2009-02-01

56

Wavelet-based detection of clods on a soil surface

NASA Astrophysics Data System (ADS)

One of the aims of the tillage operation is to produce a specific range of clod sizes, suitable for plant emergence. Due to its cloddy structure, a tilled soil surface has its own roughness, which is also connected with soil water content and erosion phenomena. The comprehension and modeling of surface runoff and erosion require that the micro-topography of the soil surface be well estimated. Therefore, the present paper focuses on soil surface analysis and characterization. An original method for detecting the individual clods or large aggregates on a 3D digital elevation model (DEM) of the soil surface is introduced. A multiresolution decomposition of the surface is performed by wavelet transform. Then a supervised local maxima extraction is performed on the different sub-surfaces, and a final step validates the extractions and merges the different scales. The method of detection was evaluated with the help of a soil scientist on a controlled surface made in the laboratory as well as on real seedbed and ploughed surfaces, made by tillage operations in an agricultural field. The identifications of the clods are in good agreement, with an overall sensitivity of 84% and a specificity of 94%. The false positive or false negative detections may have several causes. Some very nearby clods may have been smoothed together in the approximation process. Other clods may be embedded into another piece of the surface relief, such as a bigger clod or a part of the furrow. Lastly, the low levels of decomposition are dependent on the resolution and the measurement noise of the DEM. Therefore, some borders of clods may be difficult to determine. The wavelet-based detection method seems to be suitable for soil surfaces described by 2 or 3 levels of approximation, such as seedbeds.

Vannier, E.; Ciarletti, V.; Darboux, F.

2009-11-01

57

Estimating density of Florida Key deer

Florida Key deer (Odocoileus virginianus clavium) were listed as endangered by the U.S. Fish and Wildlife Service (USFWS) in 1967. A variety of survey methods have been used in estimating deer density and/or changes in population trends...

Roberts, Clay Walton

2006-08-16

58

Multiscale Poisson Intensity and Density Estimation

The methods in this paper offer near minimax convergence rates for broad classes of densities and intensities. For piecewise analytic signals, in particular, the error of this estimator converges at nearly the parametric rate. These methods can be further refined in two dimensions

Nowak, Robert

59

Sampling, Density Estimation and Spatial Relationships

NSDL National Science Digital Library

This resource serves as a tool used for instructing a laboratory exercise in ecology. Students obtain hands-on experience using techniques such as, mark-recapture and density estimation and organisms such as, zooplankton and fathead minnows. This exercise is suitable for general ecology and introductory biology courses.

Maggie Haag (University of Alberta); William M. Tonn

1998-01-01

60

DENSITY ESTIMATION FOR PROJECTED EXOPLANET QUANTITIES

Exoplanet searches using radial velocity (RV) and microlensing (ML) produce samples of 'projected' mass and orbital radius, respectively. We present a new method for estimating the probability density distribution (density) of the unprojected quantity from such samples. For a sample of n data values, the method involves solving n simultaneous linear equations to determine the weights of delta functions for the raw, unsmoothed density of the unprojected quantity that cause the associated cumulative distribution function (CDF) of the projected quantity to exactly reproduce the empirical CDF of the sample at the locations of the n data values. We smooth the raw density using nonparametric kernel density estimation with a normal kernel of bandwidth σ. We calibrate the dependence of σ on n by Monte Carlo experiments performed on samples drawn from a theoretical density, in which the integrated square error is minimized. We scale this calibration to the ranges of real RV samples using the Normal Reference Rule. The resolution and amplitude accuracy of the estimated density improve with n. For typical RV and ML samples, we expect the fractional noise at the PDF peak to be approximately 80 n^(-log 2). For illustrations, we apply the new method to 67 RV values given a similar treatment by Jorissen et al. in 2001, and to the 308 RV values listed at exoplanets.org on 2010 October 20. In addition to analyzing observational results, our methods can be used to develop measurement requirements, particularly on the minimum sample size n, for future programs, such as the microlensing survey of Earth-like exoplanets recommended by the Astro 2010 committee.

Brown, Robert A., E-mail: rbrown@stsci.edu [Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218 (United States)

2011-05-20
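The kernel smoothing step this abstract describes (a normal kernel whose bandwidth is set by the Normal Reference Rule) can be sketched in a few lines. This is an illustrative, self-contained sketch of the standard technique, not the authors' code; the function names are our own:

```python
import math

def normal_reference_bandwidth(xs):
    """Silverman's Normal Reference Rule: sigma_hat * (4 / (3n))^(1/5)."""
    n = len(xs)
    mean = sum(xs) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in xs) / (n - 1))
    return sd * (4.0 / (3.0 * n)) ** 0.2

def kde(xs, x, h):
    """Gaussian-kernel density estimate at point x with bandwidth h."""
    n = len(xs)
    return sum(math.exp(-0.5 * ((x - xi) / h) ** 2) for xi in xs) / (
        n * h * math.sqrt(2.0 * math.pi)
    )
```

Evaluated on a grid, `kde` integrates to approximately 1, as any density estimate should.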

61

Stochastic model for estimation of environmental density

The environment density has been defined as the value of a habitat expressing its unfavorableness for settling of an individual which has a strong anti-social tendency to other individuals in an environment. Morisita studied anti-social behavior of ant-lions (Glemuroides japanicus) and provided a recurrence relation without an explicit solution for the probability distribution of individuals settling in each of two habitats in terms of the environmental densities and the numbers of individuals introduced. In this paper the recurrence relation is explicitly solved; certain interesting properties of the distribution are discussed including the estimation of the parameters. 4 references, 1 table.

Janardan, K.G.; Uppuluri, V.R.R.

1984-01-01

62

Quantum Computation Based Probability Density Function Estimation

Signal processing techniques will lean on blind methods in the near future, where no redundant, resource-allocating information will be transmitted through the channel. To achieve a proper decision, however, it is essential to know at least the probability density function (pdf), whose estimation is classically a time-consuming and/or less accurate task that may cause decisions to fail. This paper describes the design of a quantum-assisted pdf estimation method, illustrated by an example, which promises to achieve the exact pdf very fast by proper setting of parameters.

Ferenc Balázs; Sándor Imre

2004-09-06

63

Snow Density Estimation using Polarimetric ASAR Data

Remote sensing with radar polarimetry has great potential to determine the extent and properties of snow cover. The availability of dual-polarimetric C-band data from the spaceborne ENVISAT-ASAR sensor can enhance the accuracy of snow physical parameter measurements compared to single fixed-polarization data. This study shows the capability of C-band SAR data for estimating dry snow density

Gulab Singh; Gopalan Venkataraman

2009-01-01

64

Bird population density estimated from acoustic signals

Many animal species are detected primarily by sound. Although songs, calls and other sounds are often used for population assessment, as in bird point counts and hydrophone surveys of cetaceans, there are few rigorous methods for estimating population density from acoustic data. 2. The problem has several parts: distinguishing individuals, adjusting for individuals that are missed, and adjusting for the area sampled. Spatially explicit capture-recapture (SECR) is a statistical methodology that addresses jointly the second and third parts of the problem. We have extended SECR to use uncalibrated information from acoustic signals on the distance to each source. 3. We applied this extension of SECR to data from an acoustic survey of ovenbird Seiurus aurocapilla density in an eastern US deciduous forest with multiple four-microphone arrays. We modelled average power from spectrograms of ovenbird songs measured within a window of 0.7 s duration and frequencies between 4200 and 5200 Hz. 4. The resulting estimates of the density of singing males (0.19 ha-1, SE 0.03 ha-1) were consistent with estimates of the adult male population density from mist-netting (0.36 ha-1, SE 0.12 ha-1). The fitted model predicts sound attenuation of 0.11 dB m-1 (SE 0.01 dB m-1) in excess of losses from spherical spreading. 5. Synthesis and applications. Our method for estimating animal population density from acoustic signals fills a gap in the census methods available for visually cryptic but vocal taxa, including many species of bird and cetacean. The necessary equipment is simple and readily available; as few as two microphones may provide adequate estimates, given spatial replication. The method requires that individuals detected at the same place are acoustically distinguishable and all individuals vocalize during the recording interval, or that the per capita rate of vocalization is known. We believe these requirements can be met, with suitable field methods, for a significant number of songbird species. © 2009 British Ecological Society.

Dawson, D.K.; Efford, M.G.

2009-01-01

65

Wavelet-based image fusion and quality assessment

Recent developments in satellite and sensor technologies have provided high-resolution satellite images. Image fusion techniques can improve the quality, and increase the application, of these data. This paper addresses two issues in image fusion: (a) the image fusion method and (b) the corresponding quality assessment. Firstly, a multi-band wavelet-based image fusion method is presented, which is a further development of the two-band

Wenzhong Shi; Changqing Zhu; Yan Tian; Janet Nichol

2005-01-01

66

ECG Feature Extraction Using Wavelet Based Derivative Approach

Many real-time QRS detection algorithms have been proposed in the literature. However, these algorithms usually either exhibit too long a response time or lack robustness. An algorithm has been developed which offers a balance between these two traits, with a very low response time yet with performance comparable to the other algorithms. The wavelet-based derivative approach achieved better detection.

K. T. Talele

67

Wavelet-based statistical signal processing using hidden Markov models

Wavelet-based statistical signal processing techniques such as denoising and detection typically model the wavelet coefficients as independent or jointly Gaussian. These models are unrealistic for many real-world signals. We develop a new framework for statistical signal processing based on wavelet-domain hidden Markov models (HMMs) that concisely models the statistical dependencies and non-Gaussian statistics encountered in real-world signals. Wavelet-domain HMMs are

Matthew S. Crouse; Robert D. Nowak; Richard G. Baraniuk

1998-01-01

68

Density Estimation by Wavelet Thresholding

Key Words and Phrases: Minimax Estimation, Adaptive Estimation, Density Estimation. Wavelet thresholding heuristics were developed for recovering functions whose observations are contaminated by noise. This paper applies these heuristics in the context of probability density estimation: estimate a probability density function f(x) on the basis of X1, ..., Xn, independent and identically

Donoho, David

69

Lower crustal density estimation using the density-slowness relationship: a preliminary study

of crustal seismic structure models: the Wind River Mountain, Ivrea, and Christensen's and Mooney's [1995] average-crust model. The densities estimated using the density-slowness method were then compared to the densities estimated by other methods...

Jones, Gary Wayne

2012-06-07

70

Use of wavelet transformation in stationary signal processing has been demonstrated for denoising the measured spectra and characterisation of radionuclides in the in vivo monitoring analysis, where difficulties arise due to very low activity level to be estimated in biological systems. The large statistical fluctuations often make the identification of characteristic gammas from radionuclides highly uncertain, particularly when interferences from progenies are also present. A new wavelet-based noise filtering methodology has been developed for better detection of gamma peaks in noisy data. This sequential, iterative filtering method uses the wavelet multi-resolution approach for noise rejection and an inverse transform after soft 'thresholding' over the generated coefficients. Analyses of in vivo monitoring data of (235)U and (238)U were carried out using this method without disturbing the peak position and amplitude while achieving a 3-fold improvement in the signal-to-noise ratio, compared with the original measured spectrum. When compared with other data-filtering techniques, the wavelet-based method shows the best results. PMID:22887117

Paul, Sabyasachi; Sarkar, P K

2013-04-01
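The filtering pipeline this abstract describes (multi-resolution decomposition, soft thresholding of the generated coefficients, inverse transform) follows a standard wavelet-denoising pattern. Below is a minimal sketch using the Haar wavelet; it illustrates the general technique, not the authors' implementation, and assumes the signal length is divisible by 2^levels:

```python
import math

def haar_dwt(signal):
    """One-level Haar transform: returns (approximation, detail) coefficients."""
    approx = [(signal[i] + signal[i + 1]) / math.sqrt(2) for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / math.sqrt(2) for i in range(0, len(signal), 2)]
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse of haar_dwt: interleave sums and differences back into a signal."""
    out = []
    for a, d in zip(approx, detail):
        out.append((a + d) / math.sqrt(2))
        out.append((a - d) / math.sqrt(2))
    return out

def soft_threshold(coeffs, t):
    """Shrink each coefficient toward zero by t (soft thresholding)."""
    return [math.copysign(max(abs(c) - t, 0.0), c) for c in coeffs]

def denoise(signal, levels=3, t=0.5):
    """Multi-level decomposition, soft-threshold the details, reconstruct."""
    approx, details = list(signal), []
    for _ in range(levels):
        approx, d = haar_dwt(approx)
        details.append(soft_threshold(d, t))
    for d in reversed(details):
        approx = haar_idwt(approx, d)
    return approx
```

With the threshold set to zero the pipeline reconstructs the input exactly, which is a useful sanity check before tuning the threshold on noisy data.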

71

Traffic characterization and modeling of wavelet-based VBR encoded video

Wavelet-based video codecs provide a hierarchical structure for the encoded data, which can cater to a wide variety of applications such as multimedia systems. The characteristics of such an encoder and its output, however, have not been well examined. In this paper, the authors investigate the output characteristics of a wavelet-based video codec and develop a composite model to capture the traffic behavior of its output video data. Wavelet decomposition transforms the input video into a hierarchical structure with a number of subimages at different resolutions and scales. The top-level wavelet in this structure contains most of the signal energy. They first describe the characteristics of traffic generated by each subimage and the effect of dropping various subimages at the encoder on the signal-to-noise ratio at the receiver. They then develop an N-state Markov model to describe the traffic behavior of the top wavelet. The behavior of the remaining wavelets is then obtained through estimation, based on the correlations between these subimages at the same level of resolution and those wavelets located at an immediately higher level. In this paper, a three-state Markov model is developed. The resulting traffic behavior, described by various statistical properties such as moments and correlations, is then utilized to validate their model.

Yu Kuo; Jabbari, B. [George Mason Univ., Fairfax, VA (United States); Zafar, S. [Argonne National Lab., IL (United States). Mathematics and Computer Science Div.

1997-07-01
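An N-state Markov traffic model of the kind described above can be simulated directly. The sketch below uses a hypothetical three-state chain; the transition probabilities and bit rates are illustrative only and do not come from the paper:

```python
import random

# Hypothetical 3-state Markov chain for the top-wavelet bit rate:
# states 0/1/2 = low/medium/high activity; rates (kbit/s) are illustrative.
TRANSITIONS = [
    [0.80, 0.15, 0.05],
    [0.10, 0.80, 0.10],
    [0.05, 0.15, 0.80],
]
RATES = [200.0, 600.0, 1500.0]

def simulate_traffic(n_frames, start_state=0, seed=42):
    """Generate a per-frame bit-rate trace from the Markov chain."""
    rng = random.Random(seed)
    state, trace = start_state, []
    for _ in range(n_frames):
        trace.append(RATES[state])
        r, cum = rng.random(), 0.0
        for next_state, p in enumerate(TRANSITIONS[state]):
            cum += p
            if r < cum:
                state = next_state
                break
    return trace
```

Moments and autocorrelations of such a trace can then be compared against measured codec output, which is the validation strategy the abstract outlines.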

72

Trabecular bone structure and bone density contribute to the strength of bone and are important in the study of osteoporosis. Wavelets are a powerful tool to characterize and quantify texture in an image. In this study the thickness of trabecular bone was analyzed in 8 cylindrical cores of the vertebral spine. Images were obtained from 3 Tesla (T) magnetic resonance imaging (MRI) and micro-computed tomography (µCT). Results from the wavelet-based analysis of trabecular bone were compared with standard two-dimensional structural parameters (analogous to bone histomorphometry) obtained using mean intercept length (MR images) and direct 3D distance transformation methods (µCT images). Additionally, the bone volume fraction was determined from MR images. We conclude that the wavelet-based analysis delivers results comparable to the established MR histomorphometric measurements. The average deviation in trabecular thickness was less than one pixel size between the wavelet and the standard approach for both MR and µCT analysis. Since the wavelet-based method is less sensitive to image noise, we see an advantage of wavelet analysis of trabecular bone for MR imaging when going to higher resolution.

Krug, R; Carballido-Gamio, J; Burghardt, A; Haase, S; Sedat, J W; Moss, W C; Majumdar, S

2005-04-11

73

KERNEL ESTIMATION OF DENSITY LEVEL SETS Benot CADRE1

Let f be a multivariate density and fn be a kernel estimate of f drawn from the n-sample X1, ..., Xn. The estimated level sets correspond to a fixed probability for the law induced by f. Key words: kernel estimate, density level sets.

Cadre, Benoît

74

DENSITY ESTIMATION AND RANDOM VARIATE GENERATION USING MULTILAYER NETWORKS

In this paper we consider two important topics: density estimation and random variate generation. First, we develop two new methods for density estimation, a stochastic method and a related

Magdon-Ismail, Malik

75

Wavelet-based spectral analysis of 1/f processes

The authors attempt to show how and why a time-scale-based spectral estimation naturally suits the nature of 1/f processes, characterized by a power spectral density proportional to |ν|^(−α). They show that a time-scale approach allows an unbiased estimation of the spectral exponent α and interpret this result in terms of matched tilings of the

P. Abry; P. Goncalves; P. Flandrin

1993-01-01

76

A Sparse Kernel Density Estimation Algorithm using Forward Constrained Regression

A general and powerful approach is to estimate the probability density function (pdf) from observed data samples [1-4]. The algorithm yields sparse kernel density estimators with comparable accuracy to that of the classical Parzen window estimate. Key words: cross validation, jackknife parameter estimator, Parzen window, probability density function, sparse modelling.

Chen, Sheng

77

Wavelet-based moment invariants for pattern recognition

NASA Astrophysics Data System (ADS)

Moment invariants have received a lot of attention as features for identification and inspection of two-dimensional shapes. In this paper, two sets of novel moments are proposed by using the auto-correlation of wavelet functions and the dual-tree complex wavelet functions. It is well known that the wavelet transform lacks the property of shift invariance. A little shift in the input signal will cause very different output wavelet coefficients. The autocorrelation of wavelet functions and the dual-tree complex wavelet functions, on the other hand, are shift-invariant, which is very important in pattern recognition. Rotation invariance is the major concern in this paper, while translation invariance and scale invariance can be achieved by standard normalization techniques. The Gaussian white noise is added to the noise-free images and the noise levels vary with different signal-to-noise ratios. Experimental results conducted in this paper show that the proposed wavelet-based moments outperform Zernike's moments and the Fourier-wavelet descriptor for pattern recognition under different rotation angles and different noise levels. It can be seen that the proposed wavelet-based moments can do an excellent job even when the noise levels are very high.

Chen, Guangyi; Xie, Wenfang

2011-07-01

78

Estimates of lightning ground flash density using optical transient density

The NASA optical transient detector (OTD) project has recently concluded. Several changes to data processing have improved the agreement between OTD values and ground flash density (GFD) in South America while preserving agreement with strong ground flash density trends observed in North America, South Africa and other regions. A revised relationship between OTD and GFD values is recommended.

William A. Chisholm

2003-01-01

79

Mammographic Density Estimation with Automated Volumetric Breast Density Measurement

Objective To compare automated volumetric breast density measurement (VBDM) with radiologists' evaluations based on the Breast Imaging Reporting and Data System (BI-RADS), and to identify the factors associated with technical failure of VBDM. Materials and Methods In this study, 1129 women aged 19-82 years who underwent mammography from December 2011 to January 2012 were included. Breast density evaluations by radiologists based on BI-RADS and by VBDM (Volpara Version 1.5.1) were compared. The agreement in interpreting breast density between radiologists and VBDM was determined based on four density grades (D1, D2, D3, and D4) and a binary classification of fatty (D1-2) vs. dense (D3-4) breast using kappa statistics. The association between technical failure of VBDM and patient age, total breast volume, fibroglandular tissue volume, history of partial mastectomy, the frequency of mass > 3 cm, and breast density was analyzed. Results The agreement between breast density evaluations by radiologists and VBDM was fair (k value = 0.26) when the four density grades (D1/D2/D3/D4) were used and moderate (k value = 0.47) for the binary classification (D1-2/D3-4). Twenty-seven women (2.4%) showed failure of VBDM. Small total breast volume, history of partial mastectomy, and high breast density were significantly associated with technical failure of VBDM (p = 0.001 to 0.015). Conclusion There is fair or moderate agreement in breast density evaluation between radiologists and VBDM. Technical failure of VBDM may be related to small total breast volume, a history of partial mastectomy, and high breast density. PMID:24843235

Ko, Su Yeon; Kim, Eun-Kyung; Kim, Min Jung

2014-01-01

80

A wavelet based technique for suppression of EMG noise and motion artifact in ambulatory ECG.

A wavelet-based denoising technique is investigated for suppressing EMG noise and motion artifact in ambulatory ECG. EMG noise is reduced by thresholding the wavelet coefficients using an improved thresholding function combining the features of hard and soft thresholding. Motion artifact is reduced by limiting the wavelet coefficients. Thresholds for both denoising steps are estimated using the statistics of the noisy signal. Denoising of simulated noisy ECG signals resulted in an average SNR improvement of 11.4 dB, and its application to ambulatory ECG recordings resulted in L(2)-norm and max-min based improvement indices close to one. It significantly improved R-peak detection in both cases. PMID:22255971
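The combined hard/soft thresholding idea described above can be sketched in a few lines. The block below is an illustrative reconstruction, not the authors' implementation: it assumes a single-level Haar transform, a MAD-based noise estimate, and the universal threshold, with a parameter alpha interpolating between hard (alpha = 0) and soft (alpha = 1) shrinkage.

```python
import numpy as np

def haar_dwt(x):
    """Single-level orthonormal Haar wavelet transform."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)  # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)  # detail coefficients
    return a, d

def haar_idwt(a, d):
    """Inverse single-level Haar transform."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def hybrid_threshold(d, thr, alpha=0.5):
    """Shrinkage between hard (alpha=0) and soft (alpha=1) thresholding:
    coefficients below thr are zeroed; survivors are shrunk by alpha*thr."""
    return np.where(np.abs(d) > thr, d - alpha * np.sign(d) * thr, 0.0)

def denoise(x, alpha=0.5):
    """Denoise a length-2n signal by thresholding Haar detail coefficients."""
    a, d = haar_dwt(x)
    sigma = np.median(np.abs(d)) / 0.6745       # MAD noise estimate
    thr = sigma * np.sqrt(2 * np.log(len(x)))   # universal threshold
    return haar_idwt(a, hybrid_threshold(d, thr, alpha))
```

The abstract's motion-artifact step (clipping, rather than zeroing, large coefficients) would be an additional limiting pass over the coefficients, which is omitted here.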

Mithun, P; Pandey, Prem C; Sebastian, Toney; Mishra, Prashant; Pandey, Vinod K

2011-01-01

81

Remarks on Some Nonparametric Estimates of a Density Function

This note discusses some aspects of the estimation of the density function of a univariate probability distribution. All estimates of the density function satisfying relatively mild conditions are shown to be biased. The asymptotic mean square error of a particular class of estimates is evaluated.
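The class of estimates analyzed in this line of work includes the now-standard kernel (Rosenblatt-Parzen) estimator. A minimal sketch with a Gaussian kernel follows; the grid, bandwidth, and kernel choice here are illustrative, not taken from the note.

```python
import numpy as np

def kde(x_grid, samples, h):
    """Kernel density estimate f(x) = (1/nh) * sum_i K((x - X_i)/h)
    with a standard Gaussian kernel K."""
    u = (x_grid[:, None] - samples[None, :]) / h
    k = np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)  # Gaussian kernel values
    return k.mean(axis=1) / h
```

Consistent with the note's conclusion, this estimate is biased for any finite bandwidth h: smoothing with the kernel convolves the true density with K, introducing a bias of order h^2 at points where the density curves.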

Murray Rosenblatt

1956-01-01

82

A Study on Passive Crowd Density Estimation using Wireless Sensors

For automatic monitoring systems and user-oriented services, crowd density estimation has become an important research topic. Existing methods include camera-based crowd density estimation and RFID-based counting systems. However, these methods incur deployment costs and require sufficient space for sensor placement. In this paper, we therefore present a passive estimation system that could be easily

M. Nakatsuka; H. Iwatani; Jiro Katto

2008-01-01

83

Wavelet-based multifractal analysis of laser biopsy imagery

NASA Astrophysics Data System (ADS)

In this work, we report a wavelet-based multi-fractal study of images of dysplastic and neoplastic HE-stained human cervical tissues captured in the transmission mode when illuminated by laser light (He-Ne, 632.8 nm). It is well known that the morphological changes occurring during the progression of diseases like cancer manifest in their optical properties, which can be probed to differentiate the various stages of cancer. Here, we use the multi-resolution properties of the wavelet transform to analyze the optical changes. For this, we have used a novel laser imagery technique which provides us with a composite image of the absorption by the different cellular organelles. As the disease progresses, due to the growth of new cells, the ratio of organelle to cellular volume changes, manifesting in the laser imagery of such tissues. In order to develop a metric that can quantify the changes in such systems, we make use of wavelet-based fluctuation analysis. The changing self-similarity during disease progression can be well characterized by the Hurst exponent and the scaling exponent. Due to the use of the Daubechies family of wavelet kernels, we can extract polynomial trends of different orders, which help us characterize the underlying processes effectively. In this study, we observe that the Hurst exponent decreases as the cancer progresses. This measure could be used to differentiate between different stages of cancer, which could lead to the development of a novel non-invasive method for cancer detection and characterization.

Jagtap, Jaidip; Ghosh, Sayantan; Panigrahi, Prasanta K.; Pradhan, Asima

2012-03-01

84

ESTIMATING ABUNDANCE AND DENSITY: ADDITIONAL METHODS

These methods were first developed in the 1940s for wildlife and fisheries management to obtain population estimates (Ricker 1975, Seber 1982), but some are of general interest because they can be applied broadly; one early insight was that population size could be estimated from field data on the change in sex ratio during a hunting season

Krebs, Charles J.

85

Fuzzy histograms and density estimation

Kevin Loquin and Olivier Strauss, LIRMM, 161 rue Ada ... on the estimated density. The apriorism needed to set those values makes it a tool whose robustness and reliability can be questioned; the authors argue that replacing the binary partition by a fuzzy partition will reduce the effect of arbitrariness

Université Paris-Sud XI

86

Risk Bounds for Mixture Density Estimation

The (Kullback-Leibler) divergence between two distributions is defined as D(f || g) = ∫ f(x) log( f(x) / g(x) ) dx = E[ log(f/g) ]. The expectation here is assumed to be with respect to x, which comes from a distribution with the density f(x). Consider

87

Relative Density-Ratio Estimation for Robust Distribution Comparison

Makoto Yamada, Tokyo Institute of Technology. Methods for direct approximation of density ratios, without going through separate approximation of numerator and denominator densities, have been successfully applied to machine learning tasks that involve distribution comparison

Sugiyama, Masashi

88

Density Ratio Estimation: A New Versatile Tool for Machine Learning

Masashi Sugiyama. A framework for statistical data processing based on the ratio of probability densities has been proposed recently and has gathered a great deal of attention in the machine learning and data mining communities [1-17]. This density ratio framework includes
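One concrete instance of the density-ratio framework is a least-squares fit of the ratio in a Gaussian basis, in the spirit of the uLSIF method from this literature. The sketch below is illustrative only; the basis centers, kernel width, and ridge regularizer are arbitrary choices, not prescriptions from the cited work.

```python
import numpy as np

def ulsif(x_nu, x_de, centers, width=1.0, lam=0.05):
    """Fit r(x) = sum_l alpha_l * K(x, c_l) to approximate p_nu/p_de by
    least squares under p_de (uLSIF-style), with ridge regularization."""
    def K(x, c):
        # Gaussian basis functions evaluated at points x, centers c
        return np.exp(-(x[:, None] - c[None, :]) ** 2 / (2 * width**2))
    Phi_de = K(x_de, centers)             # basis on denominator samples
    Phi_nu = K(x_nu, centers)             # basis on numerator samples
    H = Phi_de.T @ Phi_de / len(x_de)     # Gram matrix under p_de
    h = Phi_nu.mean(axis=0)               # mean basis vector under p_nu
    alpha = np.linalg.solve(H + lam * np.eye(len(centers)), h)
    return lambda x: K(x, centers) @ alpha
```

Note the point made in the abstract: the ratio is fitted directly, without ever estimating the numerator or denominator density separately.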

Sugiyama, Masashi

89

Information geometry of density matrices and state estimation

Given a pure state vector |x> and a density matrix rho, the function p(x|rho) = <x|rho|x> defines a probability density on the space of pure states parameterised by density matrices. The associated Fisher-Rao information measure is used to define a unitary invariant Riemannian metric on the space of density matrices. An alternative derivation of the metric, based on square-root density matrices and trace norms, is provided. This is applied to the problem of quantum-state estimation. In the simplest case of unitary parameter estimation, new higher-order corrections to the uncertainty relations, applicable to general mixed states, are derived.

Dorje C. Brody

2010-09-06

90

Nonparametric estimation of plant density by the distance method

A relation between the plant density and the probability density function of the nearest neighbor distance (squared) from a random point is established under fairly broad conditions. Based upon this relationship, a nonparametric estimator for the plant density is developed and presented in terms of order statistics. Consistency and asymptotic normality of the estimator are discussed. An interval estimator for the density is obtained. The modifications of this estimator and its variance are given when the distribution is truncated. Simulation results are presented for regular, random and aggregated populations to illustrate the nonparametric estimator and its variance. A numerical example from field data is given. Merits and deficiencies of the estimator are discussed with regard to its robustness and variance.
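The distance method can be illustrated with the classical maximum-likelihood estimator under complete spatial randomness, lambda = n / (pi * sum of r_i^2), where r_i is the distance from the i-th random point to its nearest plant. This simple Poisson ML form is a baseline sketch, not the order-statistics estimator of the paper, which is designed to be robust beyond the Poisson case.

```python
import numpy as np

def poisson_distance_density(r):
    """ML plant-density estimate from point-to-nearest-plant distances r,
    assuming a homogeneous Poisson (completely random) pattern, where
    pi * R^2 is exponentially distributed with mean 1/lambda."""
    r = np.asarray(r, dtype=float)
    return len(r) / (np.pi * np.sum(r**2))
```

For aggregated or regular populations this Poisson form is biased, which is precisely the motivation the abstract gives for a nonparametric alternative.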

Patil, S.A.; Burnham, K.P.; Kovner, J.L.

1979-01-01

91

PARTIAL LIKELIHOOD METHODS FOR PROBABILITY DENSITY ESTIMATION *

for learning on the PL cost, the equivalence of likelihood maximization and relative entropy minimization for learning/estimating the optimal model parameters by PL maximization. We give examples to illustrate and marginal likelihood. Since it allows for inclusion of dependent observations, missing data and sequential

Adali, Tulay

92

ADAPTIVE DENSITY ESTIMATION WITH MASSIVE DATA SETS

over time, and may not be stationary.) MDS provides the raw information to give us exactly what we in the parameter space. The method of moments, while not as fully developed, often involves solving equations to the frequency polygon to the averaged shifted histogram to the WARP class of estimators, a series

Scott, David W.

93

Density estimates based on point processes are often restricted to regions with irregular boundaries or holes. We propose a density estimator, the lattice-based density estimator, which produces reasonable density estimates under these circumstances. The estimation process starts with overlaying the region with nodes, linking these together in a lattice and then computing the density of random walks of length k

Ronald P. Barry; Julie McIntyre

2010-01-01

94

Density estimates based on point processes are often restricted to regions with irregular boundaries or holes. We propose a density estimator, the lattice-based density estimator, which produces reasonable density estimates under these circumstances. The estimation process starts with overlaying the region with nodes, linking these together in a lattice and then computing the density of random walks of length k

Ronald P. Barry; Julie McIntyre

2011-01-01

95

Using specially designed exponential families for density estimation

We wish to estimate the probability density $g(y)$ that produced an observed random sample of vectors $y_1, y_2, \dots, y_n$. Estimates of $g(y)$ are traditionally constructed in two quite different ways: by maximum likelihood fitting within some parametric family such as the normal, or by nonparametric methods such as kernel density estimation. These two methods can be combined by putting

Bradley Efron; Robert Tibshirani

1996-01-01

96

Wood density for estimating forest biomass in Brazilian Amazonia

Reliable estimates of the biomass of Amazonian forests are needed for calculations of greenhouse gas emissions from deforestation. Interpretation of forest volume data for the region is the most practical means of obtaining representative biomass estimates. The density of the wood used in converting volume data to biomass is a key factor affecting estimates of biomass and of emissions. Interpreting

Philip M. Fearnside

1997-01-01

97

Neutral wind estimation from 4-D ionospheric electron density images

We develop a new inversion algorithm for Estimating Model Parameters from Ionospheric Reverse Engineering (EMPIRE). The EMPIRE method uses four-dimensional images of global electron density to estimate the field-aligned neutral wind ionospheric driver when direct measurement is not available. We begin with a model of the electron continuity equation that includes production and loss rate estimates, as well as E

S. Datta-Barua; G. S. Bust; G. Crowley; N. Curtis

2009-01-01

98

Wavelet-based image fusion and quality assessment

NASA Astrophysics Data System (ADS)

Recent developments in satellite and sensor technologies have provided high-resolution satellite images. Image fusion techniques can improve the quality of these data and broaden their applications. This paper addresses two issues in image fusion: (a) the image fusion method and (b) the corresponding quality assessment. Firstly, a multi-band wavelet-based image fusion method is presented, which is a further development of the two-band wavelet transformation. This fusion method is then applied to a case study to demonstrate its performance in image fusion. Secondly, quality assessment for fused images is discussed. The objectives of image fusion include enhancing the visibility of the image and improving the spatial resolution and the spectral information of the original images. For assessing the quality of an image after fusion, we first define the aspects to be assessed. These include, for instance, spatial and spectral resolution, quantity of information, visibility, contrast, or details of features of interest. Quality assessment is application dependent; different applications may require different aspects of image quality. Based on this analysis, a set of qualities is classified and analyzed. These include (a) average grey value, representing the intensity of an image; (b) standard deviation, information entropy, and profile intensity curve, for assessing details of fused images; and (c) bias and correlation coefficient, for measuring spectral distortion between the original image and the fused image.
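Several of the quality indices listed above can be computed directly from the image arrays. The sketch below is an illustrative implementation of a few of them; the function name, dictionary keys, and histogram binning are our assumptions, not the paper's definitions.

```python
import numpy as np

def fusion_quality(fused, reference, bins=256):
    """Compute simple quality indices for a fused image against a reference:
    mean grey value, standard deviation, entropy, bias, and correlation."""
    f = fused.astype(float).ravel()
    r = reference.astype(float).ravel()
    counts, _ = np.histogram(f, bins=bins)
    p = counts[counts > 0] / counts.sum()       # grey-level distribution
    return {
        "mean_grey": f.mean(),                  # overall intensity
        "std": f.std(),                         # contrast / detail spread
        "entropy": -(p * np.log2(p)).sum(),     # information content (bits)
        "bias": f.mean() - r.mean(),            # mean spectral distortion
        "corr": np.corrcoef(f, r)[0, 1],        # similarity to the original
    }
```

As the abstract stresses, which of these indices matters depends on the application; no single number summarizes fusion quality.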

Shi, Wenzhong; Zhu, ChangQing; Tian, Yan; Nichol, Janet

2005-03-01

99

Non-iterative wavelet-based deconvolution for sparse aperture system

NASA Astrophysics Data System (ADS)

Optical sparse aperture imaging is a promising technology for obtaining high resolution with a significant reduction in size and weight, achieved by minimizing the total light collection area. However, as the collection area decreases, the OTF is also greatly attenuated, and thus the direct imaging quality of a sparse aperture system is very poor. In this paper, we focus on post-processing methods for sparse aperture systems and propose a non-iterative wavelet-based deconvolution algorithm. The algorithm adaptively denoises the Fourier-based deconvolution results on a wavelet basis. We set up a Golay-3 sparse-aperture imaging system, with which imaging and deconvolution experiments on natural scenes are performed. The experiments demonstrate that the proposed method greatly improves the imaging quality of the Golay-3 sparse-aperture system and produces satisfactory visual quality. Furthermore, our experimental results also indicate that sparse aperture systems have the potential to reach higher resolution with the help of better post-processing deconvolution techniques.

Xu, Wenhai; Zhao, Ming; Li, Hongshu

2013-05-01

100

Wavelet-based embedded zerotree extension to color coding

NASA Astrophysics Data System (ADS)

Recently, a new image compression algorithm was developed which employs a wavelet transform and a simple binary linear quantization scheme with an embedded coding technique to perform data compaction. This new family of coders, Embedded Zerotree Wavelet (EZW), provides better compression performance than the current JPEG coding standard at low bit rates. Since the EZW coding algorithm emerged, all published coding results related to this technique have been on monochrome images. In this paper the author enhances the original coding algorithm to yield a better compression ratio, and extends wavelet-based zerotree coding to color images. Color imagery is often represented by several components, such as RGB, in which each component is generally processed separately. With color coding, each component could be compressed individually in the same manner as a monochrome image, therefore requiring a threefold increase in processing time. Most image coding standards instead employ de-correlated components, such as YIQ or Y, CB, CR, and subsampling of the 'chroma' components; such a coding technique is employed here. Results of the coding, including reconstructed images and coding performance, will be presented.

Franques, Victoria T.

1998-03-01

101

Wavelet-based laser-induced ultrasonic inspection in pipes

NASA Astrophysics Data System (ADS)

The feasibility of detecting localized defects in tubing using wavelet-based laser-induced ultrasonic guided waves as an inspection method is examined. Ultrasonic guided waves initiated and propagating in hollow cylinders (pipes and/or tubes) are studied as an alternative, robust nondestructive in situ inspection method. Contrary to other traditional methods for pipe inspection, in which contact transducers (electromagnetic, piezoelectric) and/or coupling media (submersion liquids) are used, this method is characterized by its non-contact nature. This characteristic is particularly important in applications involving Nondestructive Evaluation (NDE) of materials because the signal being detected corresponds only to the induced wave. Cylindrical guided waves are generated using a Q-switched Nd:YAG laser, and a Fiber Tip Interferometry (FTI) system is used to acquire the waves. Guided wave experimental techniques are developed for the measurement of phase velocities to determine elastic properties of the material and the location and geometry of flaws including inclusions, voids, and cracks in hollow cylinders. Compared to traditional bulk wave methods, the use of guided waves offers several important potential advantages. These include better inspection efficiency, applicability to in-situ tube inspection, and fewer evaluation fluctuations with increased reliability.

Baltazar-López, Martín E.; Suh, Steve; Chona, Ravinder; Burger, Christian P.

2006-02-01

102

NIR and mass spectra classification: Bayesian methods for wavelet-based feature selection

Keywords: variable selection; discrimination; multinomial probit models; NIR spectra; proteomic data; wavelets. ... classification problems that involve functional predictors, specifically spectral data. One of our practical contexts involves

Vannucci, Marina

103

Electric power transient disturbance classification using wavelet-based hidden Markov models

We utilize wavelet-based hidden Markov models (HMMs) to classify electric power transient disturbances associated with degradation of power quality. Since the wavelet transform extracts the characteristics of power transient disturbances very well, this wavelet-based HMM classifier achieves high classification accuracy. The power transient disturbance is decomposed into multi-resolution wavelet domains, and the wavelet coefficients are modeled by an HMM. Based on

Jaehak Chung; E. J. Powers; W. Mack Grady; Sid C. Bhatt

2000-01-01

104

Wavelet-based representations for the 1/f family of fractal processes

It is demonstrated that 1/f fractal processes are, in a broad sense, optimally represented in terms of orthonormal wavelet bases. Specifically, via a useful frequency-domain characterization for 1/f processes, the wavelet expansion's role as a Karhunen-Loeve-type expansion for 1/f processes is developed. As an illustration of its potential, it is shown that wavelet-based representations naturally lead to highly efficient solutions to
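The Karhunen-Loeve-like role of wavelet bases for 1/f processes suggests a simple synthesis recipe: draw independent wavelet coefficients whose variance grows geometrically with scale, then invert the transform. The sketch below is an illustrative instance of that idea with a Haar basis and dyadic scales; it is not Wornell's construction, and the scaling convention (variance 2^(gamma*j) at scale j) is our assumption.

```python
import numpy as np

def haar_idwt(a, d):
    """Inverse single-level orthonormal Haar transform."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def synthesize_one_over_f(J, gamma, rng):
    """Build a 1/f^gamma-like signal of length 2**J: detail coefficients at
    scale j (j=J coarsest) get standard deviation 2**(gamma*j/2), so energy
    per octave is constant for gamma=1, as for a 1/f spectrum."""
    a = rng.standard_normal(1) * 2.0 ** (gamma * J / 2)  # coarsest scale
    for j in range(J, 0, -1):                            # coarse to fine
        d = rng.standard_normal(len(a)) * 2.0 ** (gamma * j / 2)
        a = haar_idwt(a, d)                              # length doubles
    return a
```

Because the coefficients are drawn independently across scales, generation costs O(N), which is the kind of efficiency the abstract alludes to.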

G. W. Wornell

1993-01-01

105

Wavelet-based noise-model driven denoising algorithm for differential phase contrast mammography.

Traditional mammography can be positively complemented by phase contrast and scattering x-ray imaging, because they can detect subtle differences in the electron density of a material and measure the local small-angle scattering power generated by microscopic density fluctuations in the specimen, respectively. The grating-based x-ray interferometry technique can produce absorption, differential phase contrast (DPC) and scattering signals of the sample in parallel, and works well with conventional x-ray sources; thus, it constitutes a promising method for more reliable breast cancer screening and diagnosis. Recently, our team proved that this novel technology can provide images superior to conventional mammography. This new technology was used to image whole native breast samples directly after mastectomy. The images acquired show high potential, but the noise level associated with the DPC and scattering signals is significant, so it must be removed in order to improve image quality and visualization. The noise models of the three signals have been investigated and the noise variance can be computed. In this work, a wavelet-based denoising algorithm using these noise models is proposed. It was evaluated with both simulated and experimental mammography data. The outcomes demonstrated that our method offers good denoising quality while preserving edges and important structural features. Therefore, it can help improve diagnosis and enable further post-processing techniques such as fusion of the three signals acquired. PMID:23669913

Arboleda, Carolina; Wang, Zhentian; Stampanoni, Marco

2013-05-01

106

Atmospheric density estimation using satellite precision orbit ephemerides

NASA Astrophysics Data System (ADS)

Current atmospheric density models cannot accurately capture the atmospheric density, which varies continuously in the upper atmosphere, mainly due to changes in solar and geomagnetic activity. Inaccurate atmospheric modeling yields density values that are not accurate enough to calculate the drag acting on a satellite, leading to errors in the prediction of satellite orbits. This research utilized precision orbit ephemerides (POE) data from satellites in an orbit determination process to make corrections to existing atmospheric models, thus resulting in improved density estimates. The work made corrections to the Jacchia family and Mass Spectrometer Incoherent Scatter (MSIS) family of atmospheric models using POE data from the Ice, Cloud and Land Elevation Satellite (ICESat) and the Terra Synthetic Aperture Radar-X Band (TerraSAR-X) satellite. The POE data obtained from these satellites were used in an orbit determination scheme which applies a sequential filter/smoother process to the measurements and generates corrections to the atmospheric models to estimate density. This research considered several days from the years 2001 to 2008, encompassing all levels of solar and geomagnetic activity. Density and ballistic coefficient half-lives of 1.8, 18, and 180 minutes were used to observe the effect of these half-life combinations on density estimates. This research also examined the consistency of densities derived from the accelerometers of the Challenging Mini Satellite Payload (CHAMP) and Gravity Recovery and Climate Experiment (GRACE) satellites by Eric Sutton of the University of Colorado. The accelerometer densities derived by Sutton were compared with those derived by Sean Bruinsma from CNES, Department of Terrestrial and Planetary Geodesy, France.
The Sutton densities proved to be nearly identical to the Bruinsma densities for all the cases considered in this research, thus suggesting that Sutton densities can be used as a substitute for Bruinsma densities in validating the POE density estimates for future work. Density estimates were found using the ICESat and TerraSAR-X POE data by generating corrections to the CIRA-72 and NRLMSISE-00 atmospheric density models. The ICESat and TerraSAR-X POE density estimates obtained were examined and studied by comparing them with the density estimates obtained using CHAMP and GRACE POE data. The trends in how POE density estimates varied for all four satellites were found to be the same or similar. The comparisons were made for different baseline atmospheric density models, different density and ballistic coefficient correlated half-lives, and for varying levels of solar and geomagnetic activity. The comparisons in this research help in understanding the variation of density estimates for various satellites with different altitudes and orbits.

Arudra, Anoop Kumar

107

Optimum nonparametric estimation of population density based on ordered distances

The asymptotic mean and error mean square are determined for the nonparametric estimator of plant density by distance sampling proposed by Patil, Burnham and Kovner (1979, Biometrics 35, 597-604). On the basis of these formulae, a bias-reduced version of this estimator is given, and its specific form is determined which gives minimum mean square error under varying assumptions about the true probability density function of the sampled data. Extension is given to line-transect sampling.

Patil, S.A.; Kovner, J.L.; Burnham, Kenneth P.

1982-01-01

108

Evaluation of wolf density estimation from radiotelemetry data

Density estimation of wolves (Canis lupus) requires a count of individuals and an estimate of the area those individuals inhabit. With radiomarked wolves, the count is straightforward but estimation of the area is more difficult and often given inadequate attention. The population area, based on the mosaic of pack territories, is influenced by sampling intensity similar to the estimation of individual home ranges. If sampling intensity is low, population area will be underestimated and wolf density will be inflated. Using data from studies in Denali National Park and Preserve, Alaska, we investigated these relationships using Monte Carlo simulation to evaluate effects of radiolocation effort and number of marked packs on density estimation. As the number of adjoining pack home ranges increased, fewer relocations were necessary to define a given percentage of population area. We present recommendations for monitoring wolves via radiotelemetry.

Burch, J.W.; Adams, L.G.; Follmann, E.H.; Rexstad, E.A.

2005-01-01

109

Estimates of transition densities for Brownian motion on nested fractals

Summary. We obtain upper and lower bounds for the transition densities of Brownian motion on nested fractals. Compared with the estimate on the Sierpinski gasket, the results require the introduction of a new exponent, d_J, related to the “shortest path metric” and “chemical exponent” on nested fractals. Further, the Hölder order of the resolvent densities, sample paths and local times is obtained.

Takashi Kumagai

1993-01-01

110

Solving inverse problems using an EM approach to density estimation

...joint planar arm, the acoustics of a four-tube articulatory model, and the localization of multiple objects from sensor data. The learning algorithm presented differs from regression-based algorithms: to estimate the vector function y = f(x), the joint density P(x, y) is estimated and, given a particular input

Ghahramani, Zoubin

111

Improved 3D wavelet-based de-noising of fMRI data

NASA Astrophysics Data System (ADS)

Functional MRI (fMRI) data analysis deals with the problem of detecting very weak signals in very noisy data. Smoothing with a Gaussian kernel is often used to decrease noise at the cost of losing spatial specificity. We present a novel wavelet-based 3-D technique to remove noise in fMRI data while preserving the spatial features in the component maps obtained through group independent component analysis (ICA). Each volume is decomposed into eight volumetric sub-bands using a separable 3-D stationary wavelet transform. Each of the detail sub-bands is then treated through the main denoising module. This module facilitates computation of shrinkage factors through a hierarchical framework, iteratively utilizing information from the sub-band at the next higher level to estimate denoised coefficients at the current level. These denoised sub-bands are then reconstructed back to the spatial domain using an inverse wavelet transform. Finally, the denoised group fMRI data are analyzed using ICA, where the data are decomposed into clusters of functionally correlated voxels (spatial maps) as indicators of task-related neural activity. The proposed method preserves the shape of the actual activation regions associated with the BOLD activity. In addition, it achieves high specificity compared to the conventionally used FWHM (full width at half maximum) Gaussian kernels for smoothing fMRI data.

Khullar, Siddharth; Michael, Andrew M.; Correa, Nicolle; Adali, Tulay; Baum, Stefi A.; Calhoun, Vince D.

2011-03-01

112

Non-local crime density estimation incorporating housing information

Given a discrete sample of event locations, we wish to produce a probability density that models the relative probability of events occurring in a spatial domain. Standard density estimation techniques do not incorporate priors informed by spatial data. Such methods can result in assigning significant positive probability to locations where events cannot realistically occur. In particular, when modelling residential burglaries, standard density estimation can predict residential burglaries occurring where there are no residences. Incorporating the spatial data can inform the valid region for the density. When modelling very few events, additional priors can help to correctly fill in the gaps. Learning and enforcing correlation between spatial data and event data can yield better estimates from fewer events. We propose a non-local version of maximum penalized likelihood estimation based on the H1 Sobolev seminorm regularizer that computes non-local weights from spatial data to obtain more spatially accurate density estimates. We evaluate this method in application to a residential burglary dataset from San Fernando Valley with the non-local weights informed by housing data or a satellite image. PMID:25288817

Woodworth, J. T.; Mohler, G. O.; Bertozzi, A. L.; Brantingham, P. J.

2014-01-01

113

Non-local crime density estimation incorporating housing information.

Given a discrete sample of event locations, we wish to produce a probability density that models the relative probability of events occurring in a spatial domain. Standard density estimation techniques do not incorporate priors informed by spatial data. Such methods can result in assigning significant positive probability to locations where events cannot realistically occur. In particular, when modelling residential burglaries, standard density estimation can predict residential burglaries occurring where there are no residences. Incorporating the spatial data can inform the valid region for the density. When modelling very few events, additional priors can help to correctly fill in the gaps. Learning and enforcing correlation between spatial data and event data can yield better estimates from fewer events. We propose a non-local version of maximum penalized likelihood estimation based on the H(1) Sobolev seminorm regularizer that computes non-local weights from spatial data to obtain more spatially accurate density estimates. We evaluate this method in application to a residential burglary dataset from San Fernando Valley with the non-local weights informed by housing data or a satellite image. PMID:25288817

Woodworth, J T; Mohler, G O; Bertozzi, A L; Brantingham, P J

2014-11-13

114

NONPARAMETRIC DENSITY ESTIMATION IN COMPOUND POISSON PROCESS USING CONVOLUTION POWER ESTIMATORS.

F. Comte, C. Duval, and V. Genon-Catalot. Abstract: Consider a compound Poisson process which ... estimator. Keywords: convolution; compound Poisson process; inverse problem; nonparametric estimation

Paris-Sud XI, UniversitÃ© de

115

NONPARAMETRIC ESTIMATION OF MULTIVARIATE CONVEX-TRANSFORMED DENSITIES

We study estimation of multivariate densities p of the form p(x) = h(g(x)) for x in R^d and for a fixed monotone function h and an unknown convex function g. The canonical example is h(y) = e^(-y) for y in R; in this case, the resulting class of densities P(e^(-y)) = {p = exp(-g) : g is convex} is well known as the class of log-concave densities. Other functions h allow for classes of densities with heavier tails than the log-concave class. We first investigate when the maximum likelihood estimator exists for the class P(h) for various choices of monotone transformations h, including decreasing and increasing functions h. The resulting models for increasing transformations h extend the classes of log-convex densities studied previously in the econometrics literature, corresponding to h(y) = exp(y). We then establish consistency of the maximum likelihood estimator for fairly general functions h, including the log-concave class P(e^(-y)) and many others. In a final section, we provide asymptotic minimax lower bounds for the estimation of p and its vector of derivatives at a fixed point x_0 under natural smoothness hypotheses on h and g. The proofs rely heavily on results from convex analysis. PMID:21423877

Seregin, Arseni; Wellner, Jon A.

2011-01-01

116

NONPARAMETRIC ESTIMATION OF MULTIVARIATE CONVEX-TRANSFORMED DENSITIES.

We study estimation of multivariate densities p of the form p(x) = h(g(x)) for x in R^d and for a fixed monotone function h and an unknown convex function g. The canonical example is h(y) = e^(-y) for y in R; in this case, the resulting class of densities P(e^(-y)) = {p = exp(-g) : g is convex} is well known as the class of log-concave densities. Other functions h allow for classes of densities with heavier tails than the log-concave class. We first investigate when the maximum likelihood estimator exists for the class P(h) for various choices of monotone transformations h, including decreasing and increasing functions h. The resulting models for increasing transformations h extend the classes of log-convex densities studied previously in the econometrics literature, corresponding to h(y) = exp(y). We then establish consistency of the maximum likelihood estimator for fairly general functions h, including the log-concave class P(e^(-y)) and many others. In a final section, we provide asymptotic minimax lower bounds for the estimation of p and its vector of derivatives at a fixed point x_0 under natural smoothness hypotheses on h and g. The proofs rely heavily on results from convex analysis. PMID:21423877

Seregin, Arseni; Wellner, Jon A

2010-12-01

117

Density-ratio robustness in dynamic state estimation

NASA Astrophysics Data System (ADS)

The filtering problem is addressed by taking into account imprecision in the knowledge about the probabilistic relationships involved. Imprecision is modelled in this paper by a particular closed convex set of probabilities known as the density ratio class or constant odds-ratio (COR) model. The contributions of this paper are the following. First, we shall define an optimality criterion based on the squared-loss function for the estimates derived from a general closed convex set of distributions. Second, after revising the properties of the density ratio class in the context of parametric estimation, we shall extend these properties to state estimation accounting for system dynamics. Furthermore, for the case in which the nominal density of the COR model is a multivariate Gaussian, we shall derive closed-form solutions for the set of optimal estimates and for the credible region. Third, we discuss how to perform Monte Carlo integrations to compute lower and upper expectations from a COR set of densities. Then we shall derive a procedure that, employing Monte Carlo sampling techniques, allows us to propagate in time both the lower and upper state expectation functionals and, thus, to derive an efficient solution of the filtering problem. Finally, we empirically compare the proposed estimator with the Kalman filter. This shows that our solution is more robust to the presence of modelling errors in the system and, hence, appears to be a more realistic approach than the Kalman filter in such cases.

Benavoli, Alessio; Zaffalon, Marco

2013-05-01

118

Nonparametric probability density estimation by optimization theoretic techniques

NASA Technical Reports Server (NTRS)

Two nonparametric probability density estimators are considered. The first is the kernel estimator. The problem of choosing the kernel scaling factor based solely on a random sample is addressed. An interactive mode is discussed and an algorithm proposed to choose the scaling factor automatically. The second nonparametric probability estimate uses penalty function techniques with the maximum likelihood criterion. A discrete maximum penalized likelihood estimator is proposed and is shown to be consistent in the mean square error. A numerical implementation technique for the discrete solution is discussed and examples displayed. An extensive simulation study compares the integrated mean square error of the discrete and kernel estimators. The robustness of the discrete estimator is demonstrated graphically.
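As a concrete illustration of the kernel estimator discussed above, the sketch below evaluates a Gaussian kernel density estimate and chooses the scaling factor with Silverman's rule of thumb. The rule of thumb is a common automatic bandwidth choice standing in for the paper's own (unspecified) interactive algorithm, so it is an assumption, not the paper's method:

```python
import numpy as np

def silverman_bandwidth(sample):
    """Silverman's rule of thumb: an automatic kernel scaling factor
    computed from the random sample alone."""
    sample = np.asarray(sample, dtype=float)
    return 1.06 * sample.std(ddof=1) * len(sample) ** (-1.0 / 5.0)

def gaussian_kde(sample, x, h):
    """Gaussian kernel density estimate evaluated at the points x,
    with kernel scaling factor (bandwidth) h."""
    s = np.asarray(sample, dtype=float)[:, None]   # shape (n, 1)
    x = np.asarray(x, dtype=float)[None, :]        # shape (1, m)
    z = (x - s) / h
    kernels = np.exp(-0.5 * z**2) / np.sqrt(2.0 * np.pi)
    return kernels.mean(axis=0) / h                # average kernel mass at each x
```

The bandwidth h controls the bias-variance trade-off the abstract alludes to: a small h yields a spiky estimate, a large h oversmooths.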

Scott, D. W.

1976-01-01

119

Estimating Density Using Precision Satellite Orbits from Multiple Satellites

NASA Astrophysics Data System (ADS)

This article examines atmospheric densities estimated using precision orbit ephemerides (POE) from several satellites including CHAMP, GRACE, and TerraSAR-X. The results of the calibration of atmospheric densities along the CHAMP and GRACE-A orbits derived using POEs with those derived using accelerometers are compared for various levels of solar and geomagnetic activity to examine the consistency in calibration between the two satellites. Densities from CHAMP and GRACE are compared when GRACE is orbiting nearly directly above CHAMP. In addition, the densities derived simultaneously from CHAMP, GRACE-A, and TerraSAR-X are compared to the Jacchia 1971 and NRLMSISE-00 model densities to observe altitude effects and consistency in the offsets from the empirical models among all three satellites.

McLaughlin, Craig A.; Lechtenberg, Travis; Fattig, Eric; Krishna, Dhaval Mysore

2012-06-01

120

Bayesian wavelet-based image denoising using the Gauss-Hermite expansion.

The probability density functions (PDFs) of the wavelet coefficients play a key role in many wavelet-based image processing algorithms, such as denoising. The conventional PDFs usually have a limited number of parameters that are calculated from the first few moments only. Consequently, such PDFs cannot be made to fit very well with the empirical PDF of the wavelet coefficients of an image. As a result, the shrinkage function utilizing any of these density functions provides a substandard denoising performance. In order for the probabilistic model of the image wavelet coefficients to be able to incorporate an appropriate number of parameters that are dependent on the higher order moments, a PDF using a series expansion in terms of the Hermite polynomials that are orthogonal with respect to the standard Gaussian weight function, is introduced. A modification in the series function is introduced so that only a finite number of terms can be used to model the image wavelet coefficients, ensuring at the same time the resulting PDF to be non-negative. It is shown that the proposed PDF matches the empirical one better than some of the standard ones, such as the generalized Gaussian or Bessel K-form PDF. A Bayesian image denoising technique is then proposed, wherein the new PDF is exploited to statistically model the subband as well as the local neighboring image wavelet coefficients. Experimental results on several test images demonstrate that the proposed denoising method, both in the subband-adaptive and locally adaptive conditions, provides a performance better than that of most of the methods that use PDFs with limited number of parameters. PMID:18784025
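The idea of matching higher-order moments with a Hermite series can be sketched as a Gram-Charlier-type expansion: the density is the standard Gaussian weight times a series in the probabilists' Hermite polynomials, with coefficients c_k = E[He_k(W)]/k! estimated from the sample. This is a simplified stand-in; the paper's modification that guarantees a non-negative PDF is not reproduced here:

```python
import math

import numpy as np
from numpy.polynomial.hermite_e import hermeval

def gauss_hermite_pdf(sample, x, order=6):
    """Gram-Charlier-type Hermite-series PDF for (standardized)
    wavelet coefficients: phi(x) * sum_k c_k He_k(x)."""
    w = np.asarray(sample, dtype=float)
    w = (w - w.mean()) / w.std()          # standardize the coefficients
    x = np.asarray(x, dtype=float)
    coeffs = np.zeros(order + 1)
    for k in range(order + 1):
        basis = np.zeros(k + 1)
        basis[k] = 1.0                    # selects He_k in the series
        coeffs[k] = hermeval(w, basis).mean() / math.factorial(k)
    phi = np.exp(-0.5 * x**2) / np.sqrt(2.0 * np.pi)
    return phi * hermeval(x, coeffs)
```

Because the coefficients depend on moments up to `order`, the fit can track heavy-tailed empirical histograms more closely than a two-parameter density.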

Rahman, S M Mahbubur; Ahmad, M Omair; Swamy, M N S

2008-10-01

121

RICE UNIVERSITY Multiscale Analysis for Intensity and Density Estimation

Rice University thesis: Multiscale Analysis for Intensity and Density Estimation, by Rebecca M. Willett.

Willett, Rebecca

122

Practical Bayesian Density Estimation Using Mixtures Of Normals

In this paper, we propose some solutions to these problems. Our goal is to come up with a simple, practical method for estimating the density. This is an interesting problem in its own right, as well as a first step towards solving other inference problems, such as providing more flexible distributions in hierarchical models. To see why the posterior is improper under the usual reference prior,
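The machinery underneath mixture-of-normals density estimation is the EM algorithm; a minimal sketch for a two-component one-dimensional mixture follows. The component count, initialization, and iteration budget are illustrative choices, not the paper's:

```python
import numpy as np

def em_gaussian_mixture(x, n_iter=50):
    """EM for a two-component 1-D normal mixture.
    Returns (weights, means, variances)."""
    x = np.asarray(x, dtype=float)
    w = np.array([0.5, 0.5])                 # mixing weights
    mu = np.array([x.min(), x.max()])        # crude spread-out initialization
    var = np.array([x.var(), x.var()])
    for _ in range(n_iter):
        # E-step: responsibility of each component for each point
        pdf = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = w * pdf
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update weights, means and variances
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return w, mu, var
```

The improper-posterior issue the abstract raises appears in the fully Bayesian version when a component's variance can shrink toward zero; the plain maximum-likelihood EM above simply inherits that degeneracy risk.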

Kathryn Roeder

1995-01-01

123

Density estimation in tiger populations: combining information for strong inference

A productive way forward in studies of animal populations is to efficiently make use of all the information available, either as raw data or as published sources, on critical parameters of interest. In this study, we demonstrate two approaches to the use of multiple sources of information on a parameter of fundamental interest to ecologists: animal density. The first approach produces estimates simultaneously from two different sources of data. The second approach was developed for situations in which initial data collection and analysis are followed up by subsequent data collection and prior knowledge is updated with new data using a stepwise process. Both approaches are used to estimate density of a rare and elusive predator, the tiger, by combining photographic and fecal DNA spatial capture–recapture data. The model, which combined information, provided the most precise estimate of density (8.5 ± 1.95 tigers/100 km2 [posterior mean ± SD]) relative to a model that utilized only one data source (photographic, 12.02 ± 3.02 tigers/100 km2 and fecal DNA, 6.65 ± 2.37 tigers/100 km2). Our study demonstrates that, by accounting for multiple sources of available information, estimates of animal density can be significantly improved.

Gopalaswamy, Arjun M.; Royle, J. Andrew; Delampady, Mohan; Nichols, James D.; Karanth, K. Ullas; Macdonald, David W.

2012-01-01

124

Improved Fast Gauss Transform and Efficient Kernel Density Estimation

Abstract: Evaluating sums of multivariate Gaussians is a common computational task in computer vision and pattern recognition, including in the general and powerful kernel density estimation technique. The quadratic computational complexity of the summation is a significant barrier to the scalability of this algorithm to practical applications. The fast Gauss transform (FGT) has successfully
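The quadratic baseline that the FGT accelerates is the direct source-target double sum; a vectorized sketch (function and variable names are illustrative):

```python
import numpy as np

def gauss_transform(sources, targets, weights, h):
    """Naive discrete Gauss transform
        G(t_j) = sum_i w_i * exp(-||t_j - s_i||^2 / h^2),
    i.e. the O(N*M) summation whose cost the fast Gauss transform reduces."""
    s = np.asarray(sources, dtype=float)    # (N, d) source points
    t = np.asarray(targets, dtype=float)    # (M, d) evaluation points
    w = np.asarray(weights, dtype=float)    # (N,)  source weights
    d2 = ((t[:, None, :] - s[None, :, :]) ** 2).sum(axis=-1)   # (M, N) squared distances
    return (w[None, :] * np.exp(-d2 / h**2)).sum(axis=1)
```

Both time and memory here scale as N*M, which is exactly the barrier the abstract describes for large kernel density estimation problems.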

Changjiang Yang; Ramani Duraiswami; Nail A. Gumerov; Larry S. Davis

2003-01-01

125

Solving inverse problems using an EM approach to density estimation

...the acoustics of a four-tube articulatory model, and the localization of multiple objects from sensor data. The estimated density can then be used to form any input/output map. Thus, to estimate the vector function y = f(x), the joint density P

Ghahramani, Zoubin

126

Estimating Density Gradients and Drivers from 3D Ionospheric Imaging

NASA Astrophysics Data System (ADS)

The transition regions at the edges of the ionospheric storm-enhanced density (SED) are important for a detailed understanding of the mid-latitude physical processes occurring during major magnetic storms. At the boundary, the density gradients are evidence of the drivers that link the larger processes of the SED, with its connection to the plasmasphere and prompt-penetration electric fields, to the smaller irregularities that result in scintillations. For this reason, we present our estimates of both the plasma variation with horizontal and vertical spatial scale of 10 - 100 km and the plasma motion within and along the edges of the SED. To estimate the density gradients, we use Ionospheric Data Assimilation Four-Dimensional (IDA4D), a mature data assimilation algorithm that has been developed over several years and applied to investigations of polar cap patches and space weather storms [Bust and Crowley, 2007; Bust et al., 2007]. We use the density specification produced by IDA4D with a new tool for deducing ionospheric drivers from 3D time-evolving electron density maps, called Estimating Model Parameters from Ionospheric Reverse Engineering (EMPIRE). The EMPIRE technique has been tested on simulated data from TIMEGCM-ASPEN and on IDA4D-based density estimates with ongoing validation from Arecibo ISR measurements [Datta-Barua et al., 2009a; 2009b]. We investigate the SED that formed during the geomagnetic super storm of November 20, 2003. We run IDA4D at low-resolution continent-wide, and then re-run it at high (~10 km horizontal and ~5-20 km vertical) resolution locally along the boundary of the SED, where density gradients are expected to be highest. We input the high-resolution estimates of electron density to EMPIRE to estimate the ExB drifts and field-aligned plasma velocities along the boundaries of the SED. We expect that these drivers contribute to the density structuring observed along the SED during the storm. Bust, G. S. and G. 
Crowley (2007), Tracking of polar cap patches using data assimilation, J. Geophys. Res., 112, A05307, doi:10.1029/2005JA011597. Bust, G. S., G. Crowley, T. W. Garner, T. L. Gaussiran II, R. W. Meggs, C. N. Mitchell, P. S. J. Spencer, P. Yin, and B. Zapfe (2007) ,Four Dimensional GPS Imaging of Space-Weather Storms, Space Weather, 5, S02003, doi:10.1029/2006SW000237. Datta-Barua, S., G. S. Bust, G. Crowley, and N. Curtis (2009a), Neutral wind estimation from 4-D ionospheric electron density images, J. Geophys. Res., 114, A06317, doi:10.1029/2008JA014004. Datta-Barua, S., G. Bust, and G. Crowley (2009b), "Estimating Model Parameters from Ionospheric Reverse Engineering (EMPIRE)," presented at CEDAR, Santa Fe, New Mexico, July 1.

Datta-Barua, S.; Bust, G. S.; Curtis, N.; Reynolds, A.; Crowley, G.

2009-12-01

127

The Effect of Lidar Point Density on LAI Estimation

NASA Astrophysics Data System (ADS)

Leaf Area Index (LAI) is an important measure of forest health, biomass and carbon exchange, and is most commonly defined as the ratio of the leaf area to ground area. LAI is understood over large spatial scales and describes leaf properties over an entire forest, thus airborne imagery is ideal for capturing such data. Spectral metrics such as the normalized difference vegetation index (NDVI) have been used in the past for LAI estimation, but these metrics may saturate for high LAI values. Light detection and ranging (lidar) is an active remote sensing technology that emits light (most often at the wavelength 1064nm) and uses the return time to calculate the distance to intercepted objects. This yields information on three-dimensional structure and shape, which has been shown in recent studies to yield more accurate LAI estimates than NDVI. However, although lidar is a promising alternative for LAI estimation, minimum acquisition parameters (e.g. point density) required for accurate LAI retrieval are not yet well known. The objective of this study was to determine the minimum number of points per square meter that are required to describe the LAI measurements taken in-field. As part of a larger data collect, discrete lidar data were acquired by Kucera International Inc. over the Hemlock-Canadice State Forest, NY, USA in September 2012. The Leica ALS60 obtained point density of 12 points per square meter and effective ground sampling distance (GSD) of 0.15m. Up to three returns with intensities were recorded per pulse. As part of the same experiment, an AccuPAR LP-80 was used to collect LAI estimates at 25 sites on the ground. Sites were spaced approximately 80m apart and nine measurements were made in a grid pattern within a 20 x 20m site. Dominant species include Hemlock, Beech, Sugar Maple and Oak. This study has the benefit of very high-density data, which will enable a detailed map of intra-forest LAI. 
Understanding LAI at fine scales may be particularly useful in forest inventory applications and tree health evaluations. However, such high-density data is often not available over large areas. In this study we progressively downsampled the high-density discrete lidar data and evaluated the effect on LAI estimation. The AccuPAR data was used as validation and results were compared to existing LAI metrics. This will enable us to determine the minimum point density required for airborne lidar LAI retrieval. Preliminary results show that the data may be substantially thinned to estimate site-level LAI. More detailed results will be presented at the conference.

Cawse-Nicholson, K.; van Aardt, J. A.; Romanczyk, P.; Kelbe, D.; Bandyopadhyay, M.; Yao, W.; Krause, K.; Kampe, T. U.

2013-12-01

128

A classification technology is presented that uses a wavelet-based feature extractor and a Hidden Markov Model (HMM) to classify simulated and real radar signals from six classes of targets: person, tracked vehicles, wheeled vehicles, helicopters, propeller aircraft and clutter (no match). Similar to techniques that have been well proven in speech and image recognition, the time-varying nature of radar

G. Kouemou; F. Opitz

2008-01-01

129

Revisiting multifractality of high-resolution temporal rainfall using a wavelet-based formalism

We reexamine the scaling structure of temporal rainfall using wavelet-based methodologies which, as we demonstrate, offer important advantages compared to the more traditional multifractal approaches such as box counting and structure function techniques. In particular, we explore two methods based on the Continuous Wavelet Transform (CWT) and the Wavelet Transform Modulus Maxima (WTMM): the partition function method and the newer

V. Venugopal; Stéphane G. Roux; Efi Foufoula-Georgiou; Alain Arneodo

2006-01-01

130

A comprehensive training for wavelet-based RBF classifier for power quality disturbances

In this paper we demonstrate that the dominant frequencies and Lipschitz exponents in nonstationary and transitory power quality disturbances, efficiently extracted from their wavelet transform modulus maxima (WTMM) in the time-scale domain, can serve as powerful discriminating features for wavelet-based classification of these disturbances. We also propose a comprehensive

T. A. Hoang; D. T. Nguyen

2002-01-01

131

ECG Compression Algorithms Comparisons among EZW, Modified EZW and Wavelet Based Linear Prediction


Fowler, Mark

132

Wavelet Based Homogenization of a 2 Dimensional Elliptic Problem, Y. Capdeboscq and M.S. Vogelius

Wavelet Based Homogenization of a 2 Dimensional Elliptic Problem, by Y. Capdeboscq and M.S. Vogelius. The results are compared to the ones given by the theory of homogenization in the cases where explicit formulas are known. Finally, we present numerical experiments to document the effectiveness of this explicit homogenization approach.

Paris-Sud XI, UniversitÃ© de

133

A wavelet based technique for multiple point target detection in infrared image sequences in the presence of clutter is proposed. Most existing approaches assume a target several pixels in size or of Gaussian shape with a variance of 1.5 pixels. We develop a detection and tracking algorithm for single-pixel targets. We also propose a modified pipeline algorithm for tracking

Mukesh A. Zaveri; Anant Malewar; Shabbir N. Merchant; Uday B. Desai

2002-01-01

134

Multiresolution analysis on zero-dimensional Abelian groups and wavelets bases

For a locally compact zero-dimensional group (G, +̇), we build a multiresolution analysis and put forward an algorithm for constructing orthogonal wavelet bases. A special case is indicated when a wavelet basis is generated from a single function through contractions, translations and exponentiations. Bibliography: 19 titles.

Lukomskii, Sergei F [Saratov State University, Saratov (Russian Federation)

2010-06-29

135

Wavelet-based index of magnetic storm activity

A. Jach, P. Kokoszka, J. Sojka, and L. Zhu. The index describes the overall magnetic effect of storm activity at low and middle latitudes; once the quiet-time variation is subtracted, the remainder is believed to describe the storm-related magnetic activity. Constructing the quiet

Kokoszka, Piotr

136

Applying wavelet-based hidden Markov tree to enhancing performance of process monitoring

In this paper, a wavelet-based hidden Markov tree (HMT) model is proposed to enhance the conventional time-scale-only statistical process control (SPC) model for process monitoring. HMT in the wavelet domain can not only analyze the measurements at multiple scales in time and frequency but also capture the statistical behavior of real-world measurements at these different scales. The former can provide better

Junghui Chen; Wang-Jung Chang

2005-01-01

137

ISI/ICI COMPARISON OF DMT AND WAVELET BASED MCM SCHEMES FOR TIME-INVARIANT CHANNELS

ISI/ICI comparison of DMT and wavelet based MCM schemes for time-invariant channels, Maria Charina. Currently used FFT based MCM schemes (DMT) outperform those based on wavelets in such environments. DMT is standardized for asymmetrical transmission over digital subscriber line (ADSL) systems.

Pfander, GÃ¶tz

138

WAVELET-BASED ULTRASOUND IMAGE DENOISING USING AN ALPHA-STABLE PRIOR PROBABILITY MODEL

Alin Achim, University of Patras, Rio, Greece. Ultrasonic images are generally affected by speckle, a phenomenon common to all coherent imaging; the proposed method preserves details better than existing methods.

Tsakalides, Panagiotis

139

Implementation of Wavelet-Based Controller for Battery Storage System of Hybrid Electric Vehicles

This paper presents a wavelet-based multiresolution proportional integral derivative (MRPID) controller for temperature control of the ambient air of the battery storage system of hybrid electric vehicles. In the proposed wavelet MRPID controller, the discrete wavelet transform (DWT) is used to decompose the temperature error into frequency components at various resolutions of the error signal. The wavelet transformed
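The decomposition step can be illustrated with a single-level Haar DWT. This is a minimal stand-in: the record does not state which wavelet or how many decomposition levels the MRPID controller actually uses:

```python
import numpy as np

def haar_dwt(signal):
    """One level of the Haar DWT: split an even-length signal into a
    low-frequency approximation and a high-frequency detail band."""
    s = np.asarray(signal, dtype=float)
    approx = (s[0::2] + s[1::2]) / np.sqrt(2)   # smooth (low-pass) component
    detail = (s[0::2] - s[1::2]) / np.sqrt(2)   # fluctuation (high-pass) component
    return approx, detail
```

In an MRPID scheme, separate gains would then be applied to the approximation and detail components of the error before the control signal is formed; the transform is orthogonal, so no error energy is lost in the split.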

M. A. S. K. Khan; M. Azizur Rahman

2011-01-01

140

Trip related falls are a prevalent problem in the elderly. Early identification of at-risk gait can help prevent falls and injuries. The main aim of this study was to investigate the effectiveness of a wavelet based multiscale analysis of a gait variable [minimum foot clearance (MFC)] in comparison to MFC histogram plot analysis in extracting features for developing a model

A. H. Khandoker; Daniel T. H. Lai; Rezaul K. Begg; Marimuthu Palaniswami

2007-01-01

141

A novel wavelet-based thresholding method for the pre-processing of mass spectrometry data

A novel wavelet-based thresholding method for the pre-processing of mass spectrometry data. Statistical results are typically strongly affected by the specific pre-processing applied to the mass spectra. Wavelet denoising techniques are a standard method for denoising. Existing

Vannucci, Marina

142

Wavelet-based medical infrared image noise reduction using local model for signal and noise

This paper presents a new wavelet-based denoising method for medical infrared images. Since the dominant noise in infrared images is signal-dependent, we use local models for the statistical properties of the (noise-free) signal and noise. On this basis, the noise variance is locally modeled as a function of the image intensity using the parameters of

Raheleh Kafieh; Hossein Rabbani

2011-01-01

143

Can modeling improve estimation of desert tortoise population densities?

The federally listed desert tortoise (Gopherus agassizii) is currently monitored using distance sampling to estimate population densities. Distance sampling, as with many other techniques for estimating population density, assumes that it is possible to quantify the proportion of animals available to be counted in any census. Because desert tortoises spend much of their life in burrows, and the proportion of tortoises in burrows at any time can be extremely variable, this assumption is difficult to meet. This proportion of animals available to be counted is used as a correction factor (g0) in distance sampling and has been estimated from daily censuses of small populations of tortoises (6-12 individuals). These censuses are costly and produce imprecise estimates of g0 due to small sample sizes. We used data on tortoise activity from a large (N = 150) experimental population to model activity as a function of the biophysical attributes of the environment, but these models did not improve the precision of estimates from the focal populations. Thus, to evaluate how much of the variance in tortoise activity is apparently not predictable, we assessed whether activity on any particular day can predict activity on subsequent days with essentially identical environmental conditions. Tortoise activity was only weakly correlated on consecutive days, indicating that behavior was not repeatable or consistent among days with similar physical environments. © 2007 by the Ecological Society of America.
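The role of g0 as a correction factor can be made concrete with a deliberately simplified estimator. Detection of available (above-ground) animals is assumed perfect here, which real distance sampling does not assume, so this is a toy illustration rather than the survey's actual estimator:

```python
def corrected_density(n_detected, surveyed_area, g0):
    """Toy availability-corrected density estimate.

    Only a fraction g0 of the animals is above ground and countable,
    so the naive density n/area understates the true density by the
    factor g0 and must be divided by it."""
    return n_detected / (surveyed_area * g0)
```

For example, 10 tortoises counted over 2 km^2 with g0 = 0.5 imply 10 tortoises/km^2, twice the naive 5 tortoises/km^2; an imprecise g0 therefore propagates directly into the density estimate, which is the problem the abstract describes.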

Nussear, K. E.; Tracy, C. R.

2007-01-01

144

Wavelet-based fMRI analysis: 3-D denoising, signal separation, and validation metrics.

We present a novel integrated wavelet-domain based framework (w-ICA) for 3-D denoising functional magnetic resonance imaging (fMRI) data followed by source separation analysis using independent component analysis (ICA) in the wavelet domain. We propose the idea of a 3-D wavelet-based multi-directional denoising scheme where each volume in a 4-D fMRI data set is sub-sampled using the axial, sagittal and coronal geometries to obtain three different slice-by-slice representations of the same data. The filtered intensity value of an arbitrary voxel is computed as an expected value of the denoised wavelet coefficients corresponding to the three viewing geometries for each sub-band. This results in a robust set of denoised wavelet coefficients for each voxel. Given the de-correlated nature of these denoised wavelet coefficients, it is possible to obtain more accurate source estimates using ICA in the wavelet domain. The contributions of this work can be realized as two modules: First, in the analysis module we combine a new 3-D wavelet denoising approach with signal separation properties of ICA in the wavelet domain. This step helps obtain an activation component that corresponds closely to the true underlying signal, which is maximally independent with respect to other components. Second, we propose and describe two novel shape metrics for post-ICA comparisons between activation regions obtained through different frameworks. We verified our method using simulated as well as real fMRI data and compared our results against the conventional scheme (Gaussian smoothing+spatial ICA: s-ICA). The results show significant improvements based on two important features: (1) preservation of shape of the activation region (shape metrics) and (2) receiver operating characteristic curves. 
It was observed that the proposed framework was able to preserve the actual activation shape in a consistent manner even for very high noise levels in addition to significant reduction in false positive voxels. PMID:21034833

Khullar, Siddharth; Michael, Andrew; Correa, Nicolle; Adali, Tulay; Baum, Stefi A; Calhoun, Vince D

2011-02-14

145

Feature selection for neural networks using Parzen density estimator

NASA Technical Reports Server (NTRS)

A feature selection method for neural networks is proposed using the Parzen density estimator. A new feature set is selected using the decision boundary feature selection algorithm. The selected feature set is then used to train a neural network. Using a reduced feature set, an attempt is made to reduce the training time of the neural network and obtain a simpler neural network, which further reduces the classification time for test data.

Lee, Chulhee; Benediktsson, Jon A.; Landgrebe, David A.

1992-01-01

146

A contact algorithm for density-based load estimation.

An algorithm, which includes contact interactions within a joint, has been developed to estimate the dominant loading patterns in joints based on the density distribution of bone. The algorithm is applied to the proximal femur of a chimpanzee, gorilla and grizzly bear and is compared to the results obtained in a companion paper that uses a non-contact (linear) version of the density-based load estimation method. Results from the contact algorithm are consistent with those from the linear method. While the contact algorithm is substantially more complex than the linear method, it has some added benefits. First, since contact between the two interacting surfaces is incorporated into the load estimation method, the pressure distributions selected by the method are more likely indicative of those found in vivo. Thus, the pressure distributions predicted by the algorithm are more consistent with the in vivo loads that were responsible for producing the given distribution of bone density. Additionally, the relative positions of the interacting bones are known for each pressure distribution selected by the algorithm. This should allow the pressure distributions to be related to specific types of activities. The ultimate goal is to develop a technique that can predict dominant joint loading patterns and relate these loading patterns to specific types of locomotion and/or activities. PMID:16439233

Bona, Max A; Martin, Larry D; Fischer, Kenneth J

2006-01-01

147

Structural Reliability Using Probability Density Estimation Methods Within NESSUS

NASA Technical Reports Server (NTRS)

A reliability analysis studies a mathematical model of a physical system taking into account uncertainties of design variables and common results are estimations of a response density, which also implies estimations of its parameters. Some common density parameters include the mean value, the standard deviation, and specific percentile(s) of the response, which are measures of central tendency, variation, and probability regions, respectively. Reliability analyses are important since the results can lead to different designs by calculating the probability of observing safe responses in each of the proposed designs. All of this is done at the expense of added computational time as compared to a single deterministic analysis which will result in one value of the response out of many that make up the density of the response. Sampling methods, such as Monte Carlo (MC) and Latin hypercube sampling (LHS), can be used to perform reliability analyses and can compute nonlinear response density parameters even if the response is dependent on many random variables. Hence, both methods are very robust; however, they are computationally expensive to use in the estimation of the response density parameters. Both methods are 2 of 13 stochastic methods that are contained within the Numerical Evaluation of Stochastic Structures Under Stress (NESSUS) program. NESSUS is a probabilistic finite element analysis (FEA) program that was developed through funding from NASA Glenn Research Center (GRC). It has the additional capability of being linked to other analysis programs; therefore, probabilistic fluid dynamics, fracture mechanics, and heat transfer are only a few of what is possible with this software. The LHS method is the newest addition to the stochastic methods within NESSUS. Part of this work was to enhance NESSUS with the LHS method.
The new LHS module is complete, has been successfully integrated with NESSUS, and been used to study four different test cases that have been proposed by the Society of Automotive Engineers (SAE). The test cases compare different probabilistic methods within NESSUS because it is important that a user can have confidence that estimates of stochastic parameters of a response will be within an acceptable error limit. For each response, the mean, standard deviation, and 0.99 percentile, are repeatedly estimated which allows confidence statements to be made for each parameter estimated, and for each method. Thus, the ability of several stochastic methods to efficiently and accurately estimate density parameters is compared using four valid test cases. While all of the reliability methods used performed quite well, for the new LHS module within NESSUS it was found that it had a lower estimation error than MC when they were used to estimate the mean, standard deviation, and 0.99 percentile of the four different stochastic responses. Also, LHS required a smaller amount of calculations to obtain low error answers with a high amount of confidence than MC. It can therefore be stated that NESSUS is an important reliability tool that has a variety of sound probabilistic methods a user can employ and the newest LHS module is a valuable new enhancement of the program.
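The core of an LHS module can be sketched as stratified sampling on the unit hypercube: each dimension is cut into n equal-probability strata, each stratum is hit exactly once, and the strata are shuffled independently per dimension. This sketch produces uniform marginals only; a program like NESSUS would then map these through the inverse CDFs of the variables' actual distributions:

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, rng):
    """Latin hypercube sample on [0, 1)^n_dims: every dimension's
    n_samples equal-probability strata are each hit exactly once."""
    strata = np.arange(n_samples)[:, None]
    # one uniform draw inside each stratum of each dimension
    u = (rng.random((n_samples, n_dims)) + strata) / n_samples
    for j in range(n_dims):                  # decouple the dimensions
        u[:, j] = rng.permutation(u[:, j])
    return u
```

The stratification is what gives LHS its lower estimation error than plain MC for the same sample count, consistent with the comparison reported above.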

Chamis, Chrisos C. (Technical Monitor); Godines, Cody Ric

2003-01-01

148

psd: Adaptive, sine multitaper power spectral density estimation for R

NASA Astrophysics Data System (ADS)

We present an R package for computing univariate power spectral density estimates with little or no tuning effort. We employ sine multitapers, allowing the number to vary with frequency in order to reduce mean square error, the sum of squared bias and variance, at each point. The approximate criterion of Riedel and Sidorenko (1995) is modified to prevent runaway averaging that otherwise occurs when the curvature of the spectrum goes to zero. An iterative procedure refines the number of tapers employed at each frequency. The resultant power spectra possess significantly lower variances than those of traditional, non-adaptive estimators. The sine tapers also provide useful spectral leakage suppression. Resolution and uncertainty can be estimated from the number of degrees of freedom (twice the number of tapers).
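A fixed-taper sketch of the sine-multitaper estimate follows; the psd package additionally adapts the taper count at each frequency, which is omitted here, so this only shows the basic averaging of sine-tapered periodograms:

```python
import numpy as np

def sine_multitaper_psd(x, n_tapers):
    """Average the periodograms of x windowed by the first n_tapers
    sine tapers v_k(t) = sqrt(2/(N+1)) * sin(pi*k*t/(N+1)).

    Each taper has unit energy, so for unit-variance white noise the
    expected spectral level is about 1 at every frequency."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    t = np.arange(1, n + 1)
    psd = np.zeros(n // 2 + 1)
    for k in range(1, n_tapers + 1):
        taper = np.sqrt(2.0 / (n + 1)) * np.sin(np.pi * k * t / (n + 1))
        psd += np.abs(np.fft.rfft(taper * x)) ** 2
    return psd / n_tapers
```

Averaging over tapers reduces the variance of the estimate roughly in proportion to the taper count, which is the effect the abstract quantifies via degrees of freedom (twice the number of tapers).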

Barbour, Andrew J.; Parker, Robert L.

2014-02-01

149

Combining Ratio Estimation for Low Density Parity Check (LDPC) Coding

NASA Technical Reports Server (NTRS)

The Low Density Parity Check (LDPC) decoding algorithm makes use of a scaled received signal derived from maximizing the log-likelihood ratio of the received signal. The scaling factor (often called the combining ratio) in an AWGN channel is the ratio between the signal amplitude and the noise variance. Accurately estimating this ratio has yielded as much as 0.6 dB of decoding performance gain. This presentation briefly describes three methods for estimating the combining ratio: a Pilot-Guided estimation method, a Blind estimation method, and a simulation-based look-up table. In the Pilot-Guided method, the maximum-likelihood estimate of the signal amplitude is the mean inner product of the received sequence and the known sequence, the attached synchronization marker (ASM), and the variance estimate is the difference between the mean of the squared received sequence and the square of the estimated amplitude. This method has the advantage of simplicity at the expense of latency, since several frames' worth of ASMs must be accumulated. In the Blind estimation method, the maximum-likelihood estimator is the average of the product of the received signal with the hyperbolic tangent of the product of the combining ratio and the received signal. The root of this equation can be determined by an iterative binary search between 0 and 1 after normalizing the received sequence. This method has the benefit of requiring only one frame of data to estimate the combining ratio, which suits faster-changing channels, but it is computationally expensive. The final method uses a look-up table, built from prior simulation results, to determine the signal amplitude and noise variance. The received mean signal strength is controlled to a constant soft-decision value, and the magnitude of the deviation is averaged over a predetermined number of samples.
This value is referenced in the look-up table to find the combining ratio that prior simulation associated with that average deviation magnitude. This method is more complicated than the Pilot-Guided method because of the gain-control circuitry, but it avoids the real-time computational complexity of the Blind estimation method. Each of these methods can provide an accurate estimate of the combining ratio; the final selection depends on other design constraints.
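The Pilot-Guided estimator described above reduces to two sample moments. A minimal sketch, assuming BPSK pilot symbols and the amplitude-over-noise-variance form of the combining ratio stated in the abstract (all parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000
A_true, noise_var = 2.0, 0.25

asm = rng.choice([-1.0, 1.0], size=N)                 # known pilot (ASM) symbols
received = A_true * asm + rng.normal(0.0, np.sqrt(noise_var), size=N)

A_hat = np.mean(received * asm)                       # mean inner product with the known sequence
var_hat = np.mean(received ** 2) - A_hat ** 2         # mean squared sequence minus squared amplitude
combining_ratio = A_hat / var_hat                     # signal amplitude over noise variance
```

The simplicity is apparent: two averages suffice, at the cost of accumulating enough pilot symbols for the averages to converge.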

Mahmoud, Saad; Hi, Jianjun

2012-01-01

150

Effect of Random Clustering on Surface Damage Density Estimates

Identification and spatial registration of laser-induced damage relative to incident fluence profiles is often required to characterize the damage properties of laser optics near the damage threshold. Of particular interest in inertial confinement laser systems are large-aperture beam damage tests (>1 cm²), where the number of initiated damage sites for φ > 14 J/cm² can approach 10⁵-10⁶, requiring automatic microscopy counting to locate and register individual damage sites. However, as was shown for the case of bacteria counting in biology decades ago, random overlapping or 'clumping' prevents accurate counting of Poisson-distributed objects at high densities and must be accounted for if the underlying statistics are to be understood. In this work we analyze the effect of random clumping on damage initiation density estimates at fluences above the damage threshold. The parameter ψ = aρ = ρ/ρ₀, where a = 1/ρ₀ is the mean damage site area and ρ is the mean number density, is used to characterize the onset of clumping, and approximations based on a simple model are used to derive an expression for clumped damage density vs. fluence and damage site size. The influence of the uncorrected ρ vs. φ curve on damage initiation probability predictions is also discussed.
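The clumping effect itself is easy to reproduce numerically. A Monte Carlo sketch (illustrative parameters, not the paper's model): scatter Poisson-like site centers on a unit square and merge any two closer than an assumed site diameter; the resolved count then falls below the true count:

```python
import numpy as np

rng = np.random.default_rng(1)
n_true = 600                     # true number of damage sites
site_diam = 0.02                 # assumed mean site diameter (illustrative)
pts = rng.random((n_true, 2))    # Poisson-like scatter of site centers

# Union-find: merge any two sites whose centers are closer than one diameter
parent = list(range(n_true))
def find(i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

dist2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
for i, j in zip(*np.nonzero(np.triu(dist2 < site_diam ** 2, k=1))):
    parent[find(i)] = find(j)

# Resolved (clumped) count: distinct clusters seen by an automated counter
n_observed = len({find(i) for i in range(n_true)})
```

As ψ grows (larger sites or higher density), n_observed falls increasingly short of n_true, which is the bias the paper corrects for.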

Matthews, M J; Feit, M D

2007-10-29

151

Estimation of probability densities using scale-free field theories

NASA Astrophysics Data System (ADS)

The question of how best to estimate a continuous probability density from finite data is an intriguing open problem at the interface of statistics and physics. Previous work has argued that this problem can be addressed in a natural way using methods from statistical field theory. Here I describe results that allow this field-theoretic approach to be rapidly and deterministically computed in low dimensions, making it practical for use in day-to-day data analysis. Importantly, this approach does not impose a privileged length scale for smoothness of the inferred probability density, but rather learns a natural length scale from the data due to the tradeoff between goodness of fit and an Occam factor. Open source software implementing this method in one and two dimensions is provided.

Kinney, Justin B.

2014-07-01

152

Estimation of vibration power absorption density in human fingers.

The absorption of hand-transmitted vibration energy may be an etiological factor in vibration-induced disorders. The vibration power absorption density (VPAD) may be a better measure of energy than the total power absorption of the hand-arm system. The objectives of the present study were to develop a method to estimate the average absorption density in the fingers and to investigate its basic characteristics. Ten healthy male subjects participated in this study. The biodynamic response of the fingers in a power grip subjected to broad-band random excitation was measured under three grip forces (15, 30, 50 N) and three push forces (35, 45, 50 N). The response was used to estimate the total finger energy absorption. The response, together with the finger volume, was also used to estimate the amount of tissue effectively involved in the absorption. Then, the average VPAD under constant-acceleration, constant-power-density, and constant-velocity vibration spectra, as well as 20 tool vibration spectra, was calculated. The correlations between the VPAD and the unweighted and weighted accelerations (ISO 5349-1, 2001) were also examined. The VPAD depends on both the characteristics of the vibration spectrum and the biodynamic response of the finger-hand-arm system. The biodynamic response generally plays a more important role in determining the VPAD in the middle frequency range (31.5-400 Hz) than at the low and high ends. The applied force significantly affected the VPAD. The finger VPAD was highly correlated with the unweighted acceleration. The average VPAD can be determined using the proposed experimental method. It can serve as an alternative tool for quantifying the severity of vibration exposure when studying vibration-induced finger disorders. PMID:16248315

Dong, Ren G; Wu, John Z; Welcome, Daniel E; McDowell, Thomas W

2005-10-01

153

Estimating low-density snowshoe hare populations using fecal pellet counts

Snowshoe hare (Lepus americanus) populations found at high densities can be estimated using fecal pellet densities on rectangular plots, but this method has yet to be evaluated for low-density populations. We further tested the use of fecal pellet plots for estimating hare populations by correlating pellet densities with estimated hare numbers on 12 intensive study areas in Idaho; pellet counts

Dennis L. Murray; James D. Roth; Ethan Ellsworth; Aaron J. Wirsing; Todd D. Steury

2002-01-01

154

Estimates of leaf vein density are scale dependent.

Leaf vein density (LVD) has garnered considerable attention of late, with numerous studies linking it to the physiology, ecology, and evolution of land plants. Despite this increased attention, little consideration has been given to the effects of measurement methods on estimation of LVD. Here, we focus on the relationship between measurement methods and estimates of LVD. We examine the dependence of LVD on magnification, field of view (FOV), and image resolution. We first show that estimates of LVD increase with increasing image magnification and resolution. We then demonstrate that estimates of LVD are higher, with higher variance, at small FOV, approaching asymptotic values as the FOV increases. We demonstrate that these effects arise from three primary factors: (1) the tradeoff between FOV and magnification; (2) geometric effects of lattices at small scales; and (3) the hierarchical nature of leaf vein networks. Our results help to explain differences in previously published studies and highlight the importance of using consistent magnification and scale, when possible, when comparing LVD and other quantitative measures of venation structure across leaves. PMID:24259686

Price, Charles A; Munro, Peter R T; Weitz, Joshua S

2014-01-01

155

Direct Density-Ratio Estimation with Dimensionality Reduction via Hetero-Distributional Subspace

… and conditional probability estimation. In this paper, we propose a new density-ratio estimator which incorporates dimensionality reduction into the density-ratio estimation procedure. Through experiments, the proposed method

Sugiyama, Masashi

156

Application of Wavelet Based Denoising for T-Wave Alternans Analysis in High Resolution ECG Maps

NASA Astrophysics Data System (ADS)

T-wave alternans (TWA) allows for identification of patients at an increased risk of ventricular arrhythmia. A stress test, which increases the heart rate in a controlled manner, is used for TWA measurement. However, TWA detection and analysis are often disturbed by muscular interference. Wavelet-based denoising methods were evaluated to find an optimal algorithm for TWA analysis. ECG signals recorded in twelve patients with cardiac disease were analyzed. In seven of them a significant T-wave alternans magnitude was detected. Applying a wavelet-based denoising method in the pre-processing stage increases the T-wave alternans magnitude as well as the number of body surface potential map (BSPM) signals in which TWA is detected.
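A generic wavelet denoiser of the kind evaluated here can be sketched with an orthonormal Haar transform and soft universal thresholding (an illustrative recipe, not the specific method the study selected):

```python
import numpy as np

def haar_dwt(x):
    """Multi-level orthonormal Haar transform (length must be a power of two)."""
    coeffs, approx = [], np.asarray(x, float)
    while len(approx) > 1:
        even, odd = approx[0::2], approx[1::2]
        coeffs.append((even - odd) / np.sqrt(2))  # detail coefficients
        approx = (even + odd) / np.sqrt(2)        # approximation coefficients
    return approx, coeffs

def haar_idwt(approx, coeffs):
    for detail in reversed(coeffs):
        even = (approx + detail) / np.sqrt(2)
        odd = (approx - detail) / np.sqrt(2)
        approx = np.empty(2 * len(detail))
        approx[0::2], approx[1::2] = even, odd
    return approx

def denoise(x):
    approx, coeffs = haar_dwt(x)
    sigma = np.median(np.abs(coeffs[0])) / 0.6745        # noise scale from finest details
    thresh = sigma * np.sqrt(2 * np.log(len(x)))         # universal threshold
    coeffs = [np.sign(d) * np.maximum(np.abs(d) - thresh, 0) for d in coeffs]
    return haar_idwt(approx, coeffs)

rng = np.random.default_rng(0)
clean = np.sin(2 * np.pi * 4 * np.arange(1024) / 1024)   # stand-in for a clean ECG component
noisy = clean + rng.normal(0.0, 0.3, size=1024)          # added "muscular" noise
denoised = denoise(noisy)
```

Soft thresholding suppresses the noise-dominated fine-scale coefficients while leaving the large-scale waveform largely intact, which is why such pre-processing can raise a TWA detector's signal-to-noise ratio.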

Janusek, D.; Kania, M.; Zaczek, R.; Zavala-Fernandez, H.; Zbie?, A.; Opolski, G.; Maniewski, R.

2011-01-01

157

Estimation of the Space Density of Low Surface Brightness Galaxies

The space density of low surface brightness and tiny gas-rich dwarf galaxies is estimated for two recent catalogs: The Arecibo Survey of Northern Dwarf and Low Surface Brightness Galaxies (Schneider, Thuan, Magri & Wadiak 1990) and The Catalog of Low Surface Brightness Galaxies, List II (Schombert, Bothun, Schneider & McGaugh 1992). The goals are (1) to evaluate the additions to the completeness of the Fisher and Tully (1981) 10 Mpc Sample and (2) to estimate whether the density of galaxies contained in the new catalogs adds a significant amount of neutral gas mass to the inventory of HI already identified in the nearby, present-epoch universe. Although tiny dwarf galaxies (M_HI < ~10^7 solar masses) may be the most abundant type of extragalactic stellar system in the nearby Universe, if the new catalogs are representative, the LSB and dwarf populations they contain make only a small addition (<10%) to the total HI content of the local Universe and probably constitute even smaller fractions of its luminous and dynamical mass.

F. H. Briggs

1997-02-24

158

Research of the wavelet based ECW remote sensing image compression technology

NASA Astrophysics Data System (ADS)

This paper studies wavelet-based ECW remote sensing image compression technology. Comparing the traditional JPEG compression technology and the newer wavelet-based JPEG2000 with the ER Mapper Compressed Wavelet (ECW) format, we find that ECW has significant advantages when compressing very large remote sensing images. The use of the ECW SDK is also discussed and shown to be the best and fastest way to compress China-Brazil Earth Resource Satellite (CBERS) imagery.

Zhang, Lan; Gu, Xingfa; Yu, Tao; Dong, Yang; Hu, Xinli; Xu, Hua

2007-11-01

159

Wavelet-based efficient simulation of electromagnetic transients in a lightning protection system

In this paper, a wavelet-based efficient simulation of electromagnetic transients in a lightning protection system (LPS) is presented. The analysis of electromagnetic transients is carried out by employing the thin-wire electric field integral equation in the frequency domain. In order to easily handle the boundary conditions of the integral equation, semiorthogonal compactly supported spline wavelets, constructed for the bounded interval [0,1],

Guido Ala; Maria L. Di Silvestre; Elisa Francomano; Adele Tortorici

2003-01-01

160

Optimal zonal wavelet-based ECG data compression for a mobile telecardiology system

A new integrated design approach for an optimal zonal wavelet-based ECG data compression (OZWC) method for a mobile telecardiology model is presented. The hybrid implementation issues of this wavelet method with a GSM-based mobile telecardiology system are also introduced. The performance of the mobile system with compressed ECG data segments selected from the MIT-BIH arrhythmia database is evaluated in terms

Robert S. H. Istepanian; Arthur A. Petrosian

2000-01-01

161

Wavelet-Based fMRI Statistical Analysis and Spatial Interpretation: A Unifying Approach

Wavelet-based statistical analysis methods for fMRI are able to detect brain activity without smoothing the data. Typically, the statistical inference is performed in the wavelet domain by testing the t-values of each wavelet coefficient; subsequently, an activity map is reconstructed from the significant coefficients. The limitation of this approach is that there is no direct statistical interpretation of the

Dimitri Van De Ville; Thierry Blu; Michael Unser

2004-01-01

162

A New Wavelet Based Multi-focus Image Fusion Scheme and Its Application on Optical Microscopy

Multi-focus image fusion is the process of combining two or more partially defocused images into a new image with all objects of interest sharply imaged. In this paper, after reviewing multi-focus image fusion techniques, a wavelet-based fusion scheme with a new image activity level measurement is presented. The proposed multi-resolution image fusion technique includes three steps: first, multi-resolution discrete wavelet
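The coefficient-selection principle behind such schemes can be sketched with a single-level 2-D Haar decomposition: average the approximation bands and keep, coefficient by coefficient, the detail with the larger magnitude as a simple activity measure (an illustrative stand-in for the paper's own activity-level measurement):

```python
import numpy as np

def haar2(img):
    """Single-level 2-D orthonormal Haar decomposition (even-sized image)."""
    a = (img[0::2] + img[1::2]) / np.sqrt(2)     # row-wise approximation
    d = (img[0::2] - img[1::2]) / np.sqrt(2)     # row-wise detail
    ll = (a[:, 0::2] + a[:, 1::2]) / np.sqrt(2)
    lh = (a[:, 0::2] - a[:, 1::2]) / np.sqrt(2)
    hl = (d[:, 0::2] + d[:, 1::2]) / np.sqrt(2)
    hh = (d[:, 0::2] - d[:, 1::2]) / np.sqrt(2)
    return ll, lh, hl, hh

def ihaar2(ll, lh, hl, hh):
    a = np.empty((ll.shape[0], 2 * ll.shape[1]))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = (ll + lh) / np.sqrt(2), (ll - lh) / np.sqrt(2)
    d[:, 0::2], d[:, 1::2] = (hl + hh) / np.sqrt(2), (hl - hh) / np.sqrt(2)
    img = np.empty((2 * a.shape[0], a.shape[1]))
    img[0::2], img[1::2] = (a + d) / np.sqrt(2), (a - d) / np.sqrt(2)
    return img

def fuse(img_a, img_b):
    """Average approximations; keep the higher-activity (larger |detail|) coefficients."""
    ca, cb = haar2(img_a), haar2(img_b)
    ll = (ca[0] + cb[0]) / 2.0
    details = [np.where(np.abs(da) >= np.abs(db), da, db)
               for da, db in zip(ca[1:], cb[1:])]
    return ihaar2(ll, *details)
```

Because in-focus regions produce larger detail coefficients, the max-magnitude rule tends to propagate each image's sharp regions into the fused result.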

Yu Song; Mantian Li; Qingling Li; Lining Sun

2006-01-01

163

Rotation-invariant texture retrieval using wavelet-based hidden Markov trees

In this paper, we present a novel approach for rotation-invariant texture retrieval using multistate wavelet-based hidden Markov trees (MWHMT). We propose a new model to capture statistical dependencies across three independent wavelet subbands. The proposed approach has been applied to a content-based image retrieval (CBIR) task: rotation-invariant texture retrieval. Feature extraction for the texture is then performed using the signature of the texture,

Venkateswara Rao Rallabandi; V. P. Subramanyam Rallabandi

2008-01-01

164

State-of-the-Art and Trends in Scalable Video Compression With Wavelet-Based Approaches

Scalable video coding (SVC) differs from traditional single-point approaches mainly because it allows encoding, in a single bit stream, several working points corresponding to different qualities, picture sizes, and frame rates. This work describes the current state of the art in SVC, focusing on wavelet-based motion-compensated approaches (WSVC). It reviews individual components that have been designed to address the problem

Nicola Adami; Alberto Signoroni; Riccardo Leonardi

2007-01-01

165

Change-in-ratio density estimator for feral pigs is less biased than closed mark-recapture estimates

Closed-population capture-mark-recapture (CMR) methods can produce biased density estimates for species with low or heterogeneous detection probabilities. In an attempt to address such biases, we developed a density-estimation method based on the change in ratio (CIR) of survival between two populations where survival, calculated using an open-population CMR model, is known to differ. We used our method to estimate density for a feral pig (Sus scrofa) population on Fort Benning, Georgia, USA. To assess its validity, we compared it to an estimate of the minimum density of pigs known to be alive and two estimates based on closed-population CMR models. Comparison of the density estimates revealed that the CIR estimator produced a density estimate with low precision that was reasonable with respect to minimum known density. By contrast, density point estimates using the closed-population CMR models were less than the minimum known density, consistent with biases created by low and heterogeneous capture probabilities for species like feral pigs that may occur in low density or are difficult to capture. Our CIR density estimator may be useful for tracking broad-scale, long-term changes in species, such as large cats, for which closed CMR models are unlikely to work. © CSIRO 2008.

Hanson, L.B.; Grand, J.B.; Mitchell, M.S.; Jolley, D.B.; Sparklin, B.D.; Ditchkoff, S.S.

2008-01-01

166

Wavelet-Based Real-Time Diagnosis of Complex Systems

NASA Technical Reports Server (NTRS)

A new method of robust, autonomous real-time diagnosis of a time-varying complex system (e.g., a spacecraft, an advanced aircraft, or a process-control system) is presented here. It is based upon the characterization and comparison of (1) the execution of software, as reported by discrete data, and (2) data from sensors that monitor the physical state of the system, such as performance sensors or similar quantitative time-varying measurements. By taking account of the relationship between execution of, and the responses to, software commands, this method satisfies a key requirement for robust autonomous diagnosis, namely, ensuring that control is maintained and followed. Such monitoring of control software requires that estimates of the state of the system, as represented within the control software itself, are representative of the physical behavior of the system. In this method, data from sensors and discrete command data are analyzed simultaneously and compared to determine their correlation. If the sensed physical state of the system differs from the software estimate (see figure) or if the system fails to perform a transition as commanded by software, or such a transition occurs without the associated command, the system has experienced a control fault. This method provides a means of detecting such divergent behavior and automatically generating an appropriate warning.

Gulati, Sandeep; Mackey, Ryan

2003-01-01

167

Nonparametric estimation of multivariate scale mixtures of uniform densities.

Suppose that U = (U(1), … , U(d)) has a Uniform([0, 1](d)) distribution, that Y = (Y(1), … , Y(d)) has the distribution G on [Formula: see text], and let X = (X(1), … , X(d)) = (U(1)Y(1), … , U(d)Y(d)). The resulting class of distributions of X (as G varies over all distributions on [Formula: see text]) is called the Scale Mixture of Uniforms class of distributions, and the corresponding class of densities on [Formula: see text] is denoted by [Formula: see text]. We study maximum likelihood estimation in the family [Formula: see text]. We prove existence of the MLE, establish Fenchel characterizations, and prove strong consistency of the almost surely unique maximum likelihood estimator (MLE) in [Formula: see text]. We also provide an asymptotic minimax lower bound for estimating the functional f ↦ f(x) under reasonable differentiability assumptions on f ∈ [Formula: see text] in a neighborhood of x. We conclude the paper with discussion, conjectures and open problems pertaining to global and local rates of convergence of the MLE. PMID:22485055

Pavlides, Marios G; Wellner, Jon A

2012-05-01

168

WaVPeak: picking NMR peaks through wavelet-based smoothing and volume-based filtering

Motivation: Nuclear magnetic resonance (NMR) has been widely used as a powerful tool to determine the 3D structures of proteins in vivo. However, the post-spectra processing stage of NMR structure determination usually involves a tremendous amount of time and expert knowledge, which includes peak picking, chemical shift assignment and structure calculation steps. Detecting accurate peaks from the NMR spectra is a prerequisite for all following steps, and thus remains a key problem in automatic NMR structure determination. Results: We introduce WaVPeak, a fully automatic peak detection method. WaVPeak first smoothes the given NMR spectrum by wavelets. The peaks are then identified as the local maxima. The false positive peaks are filtered out efficiently by considering the volume of the peaks. WaVPeak has two major advantages over the state-of-the-art peak-picking methods. First, through wavelet-based smoothing, WaVPeak does not eliminate any data point in the spectra. Therefore, WaVPeak is able to detect weak peaks that are embedded in the noise level. NMR spectroscopists need the most help isolating these weak peaks. Second, WaVPeak estimates the volume of the peaks to filter the false positives. This is more reliable than intensity-based filters that are widely used in existing methods. We evaluate the performance of WaVPeak on the benchmark set proposed by PICKY (Alipanahi et al., 2009), one of the most accurate methods in the literature. The dataset comprises 32 2D and 3D spectra from eight different proteins. Experimental results demonstrate that WaVPeak achieves an average of 96%, 91%, 88%, 76% and 85% recall on 15N-HSQC, HNCO, HNCA, HNCACB and CBCA(CO)NH, respectively. When the same number of peaks are considered, WaVPeak significantly outperforms PICKY. Availability: WaVPeak is an open source program. The source code and two test spectra of WaVPeak are available at http://faculty.kaust.edu.sa/sites/xingao/Pages/Publications.aspx. 
The online server is under construction. Contact: statliuzhi@xmu.edu.cn; ahmed.abbas@kaust.edu.sa; majing@ust.hk; xin.gao@kaust.edu.sa PMID:22328784
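The smooth, pick-local-maxima, filter-by-volume pipeline can be sketched on a synthetic 1-D spectrum; a moving average stands in for the wavelet smoothing step, and all thresholds and peak parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.arange(1000)
spectrum = (1.0 * np.exp(-((x - 300) / 7.0) ** 2)     # strong peak
            + 0.5 * np.exp(-((x - 700) / 7.0) ** 2)   # weaker peak
            + rng.normal(0.0, 0.03, x.size))          # noise floor

# 1) Smooth without discarding data points (moving average as a wavelet stand-in)
smooth = np.convolve(spectrum, np.ones(15) / 15, mode="same")

# 2) Local maxima above a small height threshold become peak candidates
interior = (smooth[1:-1] > smooth[:-2]) & (smooth[1:-1] > smooth[2:]) & (smooth[1:-1] > 0.2)
candidates = np.where(interior)[0] + 1

# 3) Volume-based filter: integrate around each candidate, drop low-volume ones
volumes = np.array([smooth[max(i - 10, 0):i + 10].sum() for i in candidates])
peaks = candidates[volumes > 2.0]
```

Filtering on integrated volume rather than raw intensity is what lets weak-but-broad peaks survive while narrow noise spikes are rejected.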

Liu, Zhi; Abbas, Ahmed; Jing, Bing-Yi; Gao, Xin

2012-01-01

169

NASA Astrophysics Data System (ADS)

In this work we present a multilevel Wavelet-based Adaptive Mesh Refinement (WAMR) method for numerical modeling of global atmospheric chemical transport problems. An accurate numerical simulation of such problems presents an enormous challenge. Atmospheric Chemical Transport Models (CTMs) combine chemical reactions with meteorologically predicted atmospheric advection and turbulent mixing. The resulting system of multi-scale advection-reaction-diffusion equations is extremely stiff, nonlinear, and involves a large number of chemically interacting species. As a consequence, the need for enormous computational resources for solving these equations imposes severe limitations on the spatial resolution of CTMs implemented on uniform or quasi-uniform grids. In turn, this relatively crude spatial resolution introduces significant numerical diffusion into the system, which is shown to noticeably distort the pollutant mixing and transport dynamics at typically used grid resolutions. The WAMR method for numerical modeling of atmospheric chemical evolution equations presented in this work provides a significant reduction in computational cost without compromising numerical accuracy, and therefore addresses the numerical difficulties described above. The WAMR method introduces a fine grid in regions where sharp transitions occur and a coarser grid in regions of smooth solution behavior, and therefore yields much more accurate solutions than conventional numerical methods implemented on uniform or quasi-uniform grids. The algorithm provides error estimates of the solution that are used in conjunction with appropriate threshold criteria to adapt the non-uniform grid. The method has been tested on a variety of problems including numerical simulation of traveling pollution plumes.
It was shown that pollution plumes in the remote troposphere can propagate as well-defined layered structures for two weeks or more as they circle the globe. Recently, it was demonstrated that present global CTMs implemented on quasi-uniform grids are incapable of reproducing these layered structures because of the high numerical plume dilution caused by numerical diffusion combined with the non-uniformity of atmospheric flow. In contrast, the adaptive wavelet technique is shown to produce highly accurate numerical solutions at a relatively low computational cost. It is demonstrated that the developed WAMR method has significant advantages over conventional non-adaptive computational techniques, in terms of accuracy and computational cost, for atmospheric chemical transport calculations. The simulations show an excellent ability of the algorithm to adapt the computational grid to a solution containing different scales at different spatial locations, producing accurate results at a relatively low computational cost. This work is supported by a grant from the National Science Foundation under Award No. HRD-1036563.
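The refinement criterion behind such wavelet-adaptive grids can be illustrated in 1-D with interpolating-wavelet details: a midpoint is kept only where the interpolation error from its neighbors exceeds a threshold, which concentrates grid points near sharp fronts (a minimal sketch, not the full multilevel WAMR algorithm):

```python
import numpy as np

def refine(f, x_left, x_right, tol, depth=0, max_depth=10):
    """Insert a midpoint only where the interpolating-wavelet detail exceeds tol."""
    x_mid = 0.5 * (x_left + x_right)
    # Detail coefficient: deviation of f(midpoint) from linear interpolation
    detail = abs(f(x_mid) - 0.5 * (f(x_left) + f(x_right)))
    if detail < tol or depth >= max_depth:
        return []
    return (refine(f, x_left, x_mid, tol, depth + 1, max_depth) + [x_mid]
            + refine(f, x_mid, x_right, tol, depth + 1, max_depth))

f = lambda x: np.tanh(100.0 * (x - 0.45))   # sharp front near x = 0.45 (a plume edge stand-in)
grid = np.array(sorted([0.0, 1.0] + refine(f, 0.0, 1.0, tol=1e-3)))
```

The retained grid is dense only near the front and coarse elsewhere, which is the source of the order-of-magnitude reduction in degrees of freedom the abstract reports.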

Rastigejev, Y.; Semakin, A. N.

2012-12-01

170

Wavelet-based Adaptive Mesh Refinement Method for Global Atmospheric Chemical Transport Modeling

NASA Astrophysics Data System (ADS)

Numerical modeling of global atmospheric chemical transport presents enormous computational difficulties associated with simulating a wide range of temporal and spatial scales. These difficulties are exacerbated by the fact that hundreds of chemical species and thousands of chemical reactions are typically used to describe the chemical kinetic mechanism. These computational requirements very often force researchers to use relatively crude quasi-uniform numerical grids with inadequate spatial resolution, which introduces significant numerical diffusion into the system. It has been shown that this spurious diffusion significantly distorts pollutant mixing and transport dynamics at typically used grid resolutions. These numerical difficulties must be systematically addressed, considering that the demand for fast, high-resolution chemical transport models will only grow over the next decade with the need to interpret satellite observations of tropospheric ozone and related species. In this study we offer a dynamically adaptive multilevel Wavelet-based Adaptive Mesh Refinement (WAMR) method for numerical modeling of atmospheric chemical evolution equations. The adaptive mesh refinement is performed by adding finer levels of resolution where fine scales develop and removing them where the solution behaves smoothly. The algorithm is based on mathematically well-established wavelet theory, which allows us to provide error estimates of the solution that are used in conjunction with appropriate threshold criteria to adapt the non-uniform grid. Other essential features of the numerical algorithm include: an efficient wavelet spatial discretization that minimizes the number of degrees of freedom for a prescribed accuracy, a fast algorithm for computing wavelet amplitudes, and efficient and accurate derivative approximations on an irregular grid.
The method has been tested on a variety of benchmark problems including numerical simulation of transpacific traveling pollution plumes. The generated pollution plumes are diluted by turbulent mixing as they are advected downwind. Despite this dilution, it was recently discovered that pollution plumes in the remote troposphere can preserve their identity as well-defined structures for two weeks or more as they circle the globe. Present global chemical transport models (CTMs) implemented on quasi-uniform grids are incapable of reproducing these layered structures because of the high numerical plume dilution caused by numerical diffusion combined with the non-uniformity of atmospheric flow. It is shown that WAMR solutions of accuracy comparable to conventional numerical techniques are obtained with more than an order of magnitude fewer grid points; the adaptive algorithm is therefore capable of producing accurate results at a relatively low computational cost. The numerical simulations demonstrate that the WAMR algorithm applied to the traveling plume problem accurately reproduces the plume dynamics, unlike conventional numerical methods that utilize quasi-uniform numerical grids.

Rastigejev, Y.

2011-12-01

171

Wavelet-based localization of oscillatory sources from magnetoencephalography data.

Transient brain oscillatory activities recorded with electroencephalography (EEG) or magnetoencephalography (MEG) are characteristic features in physiological and pathological processes. This study is aimed at describing, evaluating, and illustrating with clinical data a new method for localizing the sources of oscillatory cortical activity recorded by MEG. The method combines time-frequency representation and an entropic regularization technique in a common framework, assuming that brain activity is sparse in time and space. Spatial sparsity relies on the assumption that brain activity is organized among cortical parcels. Sparsity in time is achieved by transposing the inverse problem to the wavelet representation, for both data and sources. We propose an estimator of the wavelet coefficients of the sources based on the maximum entropy on the mean (MEM) principle. The full dynamics of the sources is obtained from the inverse wavelet transform, and principal component analysis of the reconstructed time courses is applied to extract oscillatory components. This methodology is evaluated using realistic simulations of single-trial signals, combining fast and sudden discharges (spikes) along with bursts of oscillating activity. The method is finally illustrated with a clinical application using MEG data acquired on a patient with a right orbitofrontal epilepsy. PMID:22410322

Lina, J M; Chowdhury, R; Lemay, E; Kobayashi, E; Grova, C

2014-08-01

172

Estimating tropical-forest density profiles from multibaseline interferometric SAR

NASA Technical Reports Server (NTRS)

Vertical profiles of forest density are potentially robust indicators of forest biomass, fire susceptibility, and ecosystem function. Tropical forests, which are among the most dense and complicated targets for remote sensing, contain about 45% of the world's biomass. Remote sensing of tropical forest structure is therefore an important component of global biomass and carbon monitoring. This paper shows preliminary results of a multibaseline interferometric SAR (InSAR) experiment over primary, secondary, and selectively logged forests at La Selva Biological Station in Costa Rica. The profile shown results from inverse Fourier transforming 8 of the 18 baselines acquired. A profile is shown compared to lidar and field measurements. Results are highly preliminary and for qualitative assessment only. Parameter estimation will eventually replace Fourier inversion as the means of producing profiles.

Treuhaft, Robert; Chapman, Bruce; dos Santos, Joao Roberto; Dutra, Luciano; Goncalves, Fabio; da Costa Freitas, Corina; Mura, Jose Claudio; de Alencastro Graca, Paulo Mauricio

2006-01-01

173

An Adaptive Background Subtraction Method Based on Kernel Density Estimation

In this paper, a pixel-based background modeling method, which uses nonparametric kernel density estimation, is proposed. To reduce the burden of image storage, we modify the original KDE method by using the first frame to initialize the model and then updating it at every frame, controlling the learning rate according to the situation. We apply an adaptive threshold method based on image changes to effectively subtract dynamic backgrounds. The devised scheme allows the proposed method to automatically adapt to various environments and effectively extract the foreground. The method presented here exhibits good performance and is suitable for dynamic background environments. The algorithm is tested on various video sequences and compared with other state-of-the-art background subtraction methods to verify its performance.
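A per-pixel version of such a KDE background model can be sketched as follows; the bandwidth, density threshold, and replacement-style update are illustrative choices, not the paper's exact settings:

```python
import numpy as np

class PixelKDE:
    """Per-pixel Gaussian-kernel background model with a learning-rate update."""

    def __init__(self, samples, bandwidth=3.0, alpha=0.05, thresh=1e-3):
        self.samples = np.asarray(samples, float)   # stored background intensities
        self.h, self.alpha, self.thresh = bandwidth, alpha, thresh

    def density(self, value):
        # Kernel density estimate of `value` under the stored background samples
        z = (value - self.samples) / self.h
        return np.mean(np.exp(-0.5 * z * z)) / (self.h * np.sqrt(2.0 * np.pi))

    def is_foreground(self, value):
        return self.density(value) < self.thresh

    def update(self, value, rng):
        # With probability alpha (the learning rate), absorb the new observation
        if rng.random() < self.alpha:
            self.samples[rng.integers(len(self.samples))] = value

rng = np.random.default_rng(3)
model = PixelKDE(100.0 + rng.normal(0.0, 2.0, size=50))  # a pixel's recent background values
```

An intensity far from everything in the sample set gets a near-zero density and is flagged as foreground; the stochastic update slowly tracks gradual background change, which is the role of the learning rate in the paper.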

Lee, Jeisung; Park, Mignon

2012-01-01

174

EFFICIENT NONPARAMETRIC DENSITY ESTIMATION ON THE SPHERE WITH APPLICATIONS IN FLUID MECHANICS

Nonparametric density estimation is the problem of the estimation of the values of a probability density. Accurate and fast estimation of probability density functions is required when the probability of interest falls on the surface of the sphere. We compare the computational efficiency of our method with kernel-based estimators. Key words: nonparametric density estimation; sphere.

Egecioglu, Ömer

175

Change-in-ratio density estimator for feral pigs is less biased than closed mark-recapture estimates

Closed-population capture-mark-recapture (CMR) methods can produce biased density estimates for species with low or heterogeneous detection probabilities. In an attempt to address such biases, we developed a density-estimation method based on the change in ratio (CIR) of survival between two populations where survival, calculated using an open-population CMR model, is known to differ. We used our method to estimate density

Laura B. Hanson; James B. Grand; Michael S. Mitchell; D. Buck Jolley; Bill D. Sparklin; Stephen S. Ditchkoff

176

Change-in-ratio density estimator for feral pigs is less biased than closed mark–recapture estimates

Abstract. Closed-population capture–mark–recapture (CMR) methods can produce biased density estimates for species with low or heterogeneous detection probabilities. In an attempt to address such biases, we developed a density-estimation method based on the change in ratio (CIR) of survival between two populations where survival, calculated using an open-population CMR model, is known to differ. We used our method to estimate density for

Laura B. Hanson; James B. Grand; Michael S. Mitchell; D. Buck Jolley; Bill D. Sparklin; Stephen S. Ditchkoff

2008-01-01

177

NASA Astrophysics Data System (ADS)

The objective of this study is to bring out the errors introduced during construction that are overlooked during physical verification of the bridge. Such errors can be pointed out if the symmetry of the structure is challenged. This paper thus presents a study of the downstream and upstream trusses of a newly constructed steel bridge using time-frequency and wavelet-based approaches. The variation in the behavior of the truss joints with vehicle speed has been worked out to determine their flexibility. Testing on the steel bridge was carried out with the same instrument setup on both the upstream and downstream trusses at two different speeds with the same moving vehicle. The nodal-flexibility investigation is carried out using power spectral density, the short-time Fourier transform, and the wavelet packet transform with respect to both trusses and speeds. The results show that the joints of the upstream and downstream trusses behave differently, even though they were designed for the same loading, due to constructional variations and vehicle movement, despite the fact that analytical models present a simplistic picture for analysis and design. The difficulty of modal parameter extraction for the bridge under study increased with speed due to the decreased excitation time.
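The short-time Fourier transform used in the joint analysis can be illustrated on a synthetic signal whose frequency rises over time, loosely mimicking changing excitation (a generic sketch, unrelated to the bridge data; window and hop sizes are arbitrary):

```python
import numpy as np

def stft_mag(x, win=256, hop=128):
    """Magnitude STFT with a Hann window: rows = time frames, cols = frequency bins."""
    w = np.hanning(win)
    frames = [x[i:i + win] * w for i in range(0, len(x) - win + 1, hop)]
    return np.abs(np.fft.rfft(frames, axis=1))

fs = 1000.0
t = np.arange(0, 2.0, 1 / fs)
# linear chirp: instantaneous frequency climbs from 20 Hz upward
x = np.sin(2 * np.pi * (20 + 30 * t) * t)
S = stft_mag(x)
peak_bins = S.argmax(axis=1)   # dominant frequency bin per frame rises over time
```

Each row of `S` is a local spectrum; tracking `peak_bins` over frames is the simplest form of the time-frequency ridge analysis such studies rely on.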

Walia, Suresh Kumar; Patel, Raj Kumar; Vinayak, Hemant Kumar; Parti, Raman

2013-12-01

178

Learning Multisensory Integration and Coordinate Transformation via Density Estimation

Sensory processing in the brain includes three key operations: multisensory integration—the task of combining cues into a single estimate of a common underlying stimulus; coordinate transformations—the change of reference frame for a stimulus (e.g., retinotopic to body-centered) effected through knowledge about an intervening variable (e.g., gaze position); and the incorporation of prior information. Statistically optimal sensory processing requires that each of these operations maintains the correct posterior distribution over the stimulus. Elements of this optimality have been demonstrated in many behavioral contexts in humans and other animals, suggesting that the neural computations are indeed optimal. That the relationships between sensory modalities are complex and plastic further suggests that these computations are learned—but how? We provide a principled answer, by treating the acquisition of these mappings as a case of density estimation, a well-studied problem in machine learning and statistics, in which the distribution of observed data is modeled in terms of a set of fixed parameters and a set of latent variables. In our case, the observed data are unisensory-population activities, the fixed parameters are synaptic connections, and the latent variables are multisensory-population activities. In particular, we train a restricted Boltzmann machine with the biologically plausible contrastive-divergence rule to learn a range of neural computations not previously demonstrated under a single approach: optimal integration; encoding of priors; hierarchical integration of cues; learning when not to integrate; and coordinate transformation. The model makes testable predictions about the nature of multisensory representations. PMID:23637588
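A minimal version of the training scheme described, a binary restricted Boltzmann machine fitted with one-step contrastive divergence, can be sketched as follows (toy data and hyperparameters are invented for illustration, not the paper's network):

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

n_vis, n_hid = 12, 6
W = 0.01 * rng.standard_normal((n_vis, n_hid))
b_vis = np.zeros(n_vis)   # visible ("unisensory") biases
b_hid = np.zeros(n_hid)   # hidden ("multisensory") biases

# toy data: two prototype binary patterns plus 5% bit-flip noise
protos = np.array([[1] * 6 + [0] * 6, [0] * 6 + [1] * 6], dtype=float)
def sample_batch(n=32):
    X = protos[rng.integers(0, 2, n)]
    flips = rng.random(X.shape) < 0.05
    return np.abs(X - flips)

lr = 0.1
for step in range(500):
    v0 = sample_batch()
    ph0 = sigmoid(v0 @ W + b_hid)                    # positive phase
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    pv1 = sigmoid(h0 @ W.T + b_vis)                  # one Gibbs step back down
    ph1 = sigmoid(pv1 @ W + b_hid)
    W += lr * (v0.T @ ph0 - pv1.T @ ph1) / len(v0)   # CD-1 weight update
    b_vis += lr * (v0 - pv1).mean(axis=0)
    b_hid += lr * (ph0 - ph1).mean(axis=0)

# after training, reconstructions should be close to the data
v = sample_batch(100)
recon = sigmoid(sigmoid(v @ W + b_hid) @ W.T + b_vis)
err = np.mean((v - recon) ** 2)
```

The paper's model treats real population codes rather than abstract bit patterns, but the learning rule is this same contrastive-divergence update.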

Sabes, Philip N.

2013-01-01

179

On the analysis of wavelet-based approaches for print mottle artifacts

NASA Astrophysics Data System (ADS)

Print mottle is one of several attributes described in ISO/IEC DTS 24790, a draft technical specification for the measurement of image quality for monochrome printed output. It defines mottle as aperiodic fluctuations of lightness less than about 0.4 cycles per millimeter, a definition inherited from the latest official standard on printed image quality, ISO/IEC 13660. In a previous publication, we introduced a modification to the ISO/IEC 13660 mottle measurement algorithm that includes a band-pass, wavelet-based, filtering step to limit the contribution of high-frequency fluctuations including those introduced by print grain artifacts. This modification has improved the algorithm's correlation with the subjective evaluation of experts who rated the severity of printed mottle artifacts. Seeking to improve upon the mottle algorithm in ISO/IEC 13660, the ISO 24790 committee evaluated several mottle metrics. This led to the selection of the above wavelet-based approach as the top candidate algorithm for inclusion in a future ISO/IEC standard. Recent experimental results from the ISO committee showed higher correlation between the wavelet-based approach and the subjective evaluation conducted by the ISO committee members based upon 25 samples covering a variety of printed mottle artifacts. In addition, we introduce an alternative approach for measuring mottle defects based on spatial frequency analysis of wavelet-filtered images. Our goal is to establish a link between the spatial-based mottle (ISO/IEC DTS 24790) approach and its equivalent frequency-based one in light of Parseval's theorem. Our experimental results showed a high correlation between the spatial and frequency based approaches.

Eid, Ahmed H.; Cooper, Brian E.

2014-01-01

180

Estimating the mass density of neutral gas at z<1

NASA Astrophysics Data System (ADS)

We use the relationships between galactic H i mass and B-band luminosity determined by Rao & Briggs to recalculate the mass density of neutral gas at the present epoch based on more recent measures of the galaxy luminosity function than were available to those authors. We find Omega_gas(z=0)~=5x10^-4 in good agreement with the original Rao & Briggs value, suggesting that this quantity is now reasonably secure. We than show that, if the scaling between H i mass and B-band luminosity has remained approximately consistent since z=1, the evolution of the luminosity function found by the Canada-France Redshift Survey translates to an increase of Omega_gas by a factor of ~3 at z=0.5-1. A similar value is obtained quite independently from consideration of the luminosity function of Mg ii absorbers at z=0.65. By combining these new estimates with data from damped Lyman alpha systems at higher redshift, it is possible to assemble a rough sketch of the evolution of Omega_gas over the last 90 per cent of the age of the Universe. The consumption of H i gas with time is in broad agreement with models of chemical evolution which include the effects of dust, although more extensive samples of damped Lyman alpha systems at low and intermediate redshift are required for a quantitative assessment of the dust bias.
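The calculation described amounts to weighting the galaxy luminosity function by the H i mass-luminosity relation; schematically (our notation, not the authors'):

```latex
\rho_{\rm HI}(z=0)=\int M_{\rm HI}(L_B)\,\phi(L_B)\,\mathrm{d}L_B,
\qquad
\Omega_{\rm gas}(z=0)=\frac{\rho_{\rm HI}}{\rho_{\rm crit}}\approx 5\times 10^{-4},
```

so any evolution in the luminosity function $\phi(L_B)$ translates directly into evolution of $\Omega_{\rm gas}$, provided the $M_{\rm HI}$-$L_B$ scaling holds at higher redshift.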

Natarajan, Priyamvada; Pettini, Max

1997-10-01

181

A wavelet-based index of storm activity (WISA) has been recently developed [Jach, A., Kokoszka, P., Sojka, L., Zhu, L., 2006. Wavelet-based index of magnetic storm activity. Journal of Geophysical Research 111, A09215, doi:10.1029/2006JA011635] to complement the traditional Dst index. The new index can be computed automatically by using the wavelet-based statistical procedure without human intervention on the selection of quiet

Zhonghua Xu; Lie Zhu; Jan Sojka; Piotr Kokoszka; Agnieszka Jach

2008-01-01

182

A novel 3D wavelet based filter for visualizing features in noisy biological data

We have developed a 3D wavelet-based filter for visualizing structural features in volumetric data. The only variable parameter is a characteristic linear size of the feature of interest. The filtered output contains only those regions that are correlated with the characteristic size, thus denoising the image. We demonstrate the use of the filter by applying it to 3D data from a variety of electron microscopy samples including low contrast vitreous ice cryogenic preparations, as well as 3D optical microscopy specimens.

Moss, W C; Haase, S; Lyle, J M; Agard, D A; Sedat, J W

2005-01-05

183

A novel 3D wavelet-based filter for visualizing features in noisy biological data.

Summary We have developed a three-dimensional (3D) wavelet-based filter for visualizing structural features in volumetric data. The only variable parameter is a characteristic linear size of the feature of interest. The filtered output contains only those regions that are correlated with the characteristic size, thus de-noising the image. We demonstrate the use of the filter by applying it to 3D data from a variety of electron microscopy samples, including low-contrast vitreous ice cryogenic preparations, as well as 3D optical microscopy specimens. PMID:16159339

Moss, W C; Haase, S; Lyle, J M; Agard, D A; Sedat, J W

2005-08-01

184

ICER-3D: A Progressive Wavelet-Based Compressor for Hyperspectral Images

NASA Technical Reports Server (NTRS)

ICER-3D is a progressive, wavelet-based compressor for hyperspectral images. ICER-3D is derived from the ICER image compressor. ICER-3D can provide lossless and lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The three-dimensional wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of hyperspectral data sets, while facilitating elimination of spectral ringing artifacts. Correlation is further exploited by a context modeler that effectively exploits spectral dependencies in the wavelet-transformed hyperspectral data. Performance results illustrating the benefits of these features are presented.

Kiely, A.; Klimesh, M.; Xie, H.; Aranki, N.

2005-01-01

185

ATMOSPHERIC DENSITY ESTIMATION USING SATELLITE PRECISION ORBIT EPHEMERIDES

The current atmospheric density models cannot accurately model the atmospheric density, which varies continuously in the upper atmosphere, mainly due to changes in solar and geomagnetic activity. ...

Arudra, Anoop Kumar

2011-04-22

186

Because studies estimating density of gray squirrels (Sciurus carolinensis) have been labor intensive and costly, I demonstrate the use of line transect surveys to estimate gray squirrel density and determine the costs of conducting surveys to achieve precise estimates. Density estimates are based on four transects that were surveyed five times from 30 June to 9 July 1994. Using the program DISTANCE, I estimated there were 4.7 (95% CI = 1.86-11.92) gray squirrels/ha on the Clemson University campus. Eleven additional surveys would have decreased the percent coefficient of variation from 30% to 20% and would have cost approximately $114. Estimating urban gray squirrel density using line transect surveys is cost effective and can provide unbiased estimates of density, provided that none of the assumptions of distance sampling theory are violated. KEY WORDS: Bias; Density; Distance sampling; Gray squirrel; Line transect; Sciurus carolinensis. PMID:9336490

Hein

1997-11-01

187

Kernel Estimation of Density Level Sets

We consider the problem of estimating the t-level set L(t) of a multivariate probability density f with support in IR^k, in particular the density level set corresponding to a fixed probability for the law induced by f. Key words: kernel estimation; density level sets.

Paris-Sud XI, Université de

188

Multiscale Density Estimation

R. M. Willett, Student Member, IEEE, and R. D. Nowak, Member, IEEE. July 4, 2003. Abstract: The nonparametric density estimation method proposed in this paper is computationally fast, capable of detecting density discontinuities and singularities at a very high resolution

Nowak, Robert

189

Analysis of damped tissue vibrations in time-frequency space: a wavelet-based approach.

There is evidence that vibrations of soft tissue compartments are not appropriately described by a single sinusoidal oscillation for certain types of locomotion such as running or sprinting. This paper discusses a new method to quantify damping of superimposed oscillations using a wavelet-based time-frequency approach. This wavelet-based method was applied to experimental data in order to analyze the decay of the overall power of vibration signals over time. Eight healthy subjects performed sprinting trials on a 30 m runway on a hard surface and a soft surface. Soft tissue vibrations were quantified from the tissue overlaying the muscle belly of the medial gastrocnemius muscle. The new methodology determines damping coefficients with an average error of 2.2% based on a wavelet scaling factor of 0.7. This was sufficient to detect differences in soft tissue compartment damping between the hard and soft surface. On average, the hard surface elicited a 7.02 s(-1) lower damping coefficient than the soft surface (p<0.05). A power spectral analysis of the muscular vibrations occurring during sprinting confirmed that vibrations during dynamic movements cannot be represented by a single sinusoidal function. Compared to the traditional sinusoidal approach, this newly developed method can quantify vibration damping for systems with multiple vibration modes that interfere with one another. This new time-frequency analysis may be more appropriate when an acceleration trace does not follow a sinusoidal function, as is the case with multiple forms of human locomotion. PMID:22995145
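The idea of fitting the decay of overall signal power, rather than a single sinusoid, can be sketched with a moving-RMS envelope standing in for the wavelet power (a simplified stand-in for the method; the 2.2% error figure above refers to the authors' wavelet approach, not this sketch):

```python
import numpy as np

fs = 2000.0
t = np.arange(0, 0.5, 1 / fs)
d_true = 12.0                               # damping coefficient (1/s)
# two interfering vibration modes under a common exponential decay
x = np.exp(-d_true * t) * (np.sin(2 * np.pi * 40 * t) + 0.6 * np.sin(2 * np.pi * 95 * t))

# overall power over time: a 20 ms moving average of the squared signal
win = int(0.02 * fs)
power = np.convolve(x ** 2, np.ones(win) / win, mode="valid")
tc = t[:len(power)] + win / (2 * fs)        # window-centre times

# log-linear fit: log P(t) ~ const - 2*d*t (power decays twice as fast as amplitude)
mask = power > 1e-8
slope, _ = np.polyfit(tc[mask], np.log(power[mask]), 1)
d_est = -slope / 2.0
```

Because the fit uses the total power of all superimposed modes, it does not require the signal to follow a single sinusoid, which is the point the abstract makes.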

Enders, Hendrik; von Tscharner, Vinzenz; Nigg, Benno M

2012-11-15

190

The wavelet based denoising has proven its ability to denoise the bearing vibration signals by improving the signal-to-noise ratio (SNR) and reducing the root-mean-square error (RMSE). In this paper seven wavelet based denoising schemes have been evaluated based on the performance of the Artificial Neural Network (ANN) and the Support Vector Machine (SVM), for the bearing condition classification. The work consists of two parts, the first part in which a synthetic signal simulating the defective bearing vibration signal with Gaussian noise was subjected to these denoising schemes. The best scheme based on the SNR and the RMSE was identified. In the second part, the vibration signals collected from a customized Rolling Element Bearing (REB) test rig for four bearing conditions were subjected to these denoising schemes. Several time and frequency domain features were extracted from the denoised signals, out of which a few sensitive features were selected using the Fisher's Criterion (FC). Extracted features were used to train and test the ANN and the SVM. The best denoising scheme identified, based on the classification performances of the ANN and the SVM, was found to be the same as the one obtained using the synthetic signal. PMID:23213323
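A minimal wavelet-denoising scheme of the kind evaluated above, a Haar transform with soft thresholding at the universal threshold, can be sketched as follows, together with the SNR and RMSE figures of merit (a generic example; the paper's seven specific schemes are not reproduced here):

```python
import numpy as np

def haar_dwt(x):
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail coefficients
    return a, d

def haar_idwt(a, d):
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def soft(d, t):
    return np.sign(d) * np.maximum(np.abs(d) - t, 0.0)

def denoise(x, levels=4):
    details, a = [], x
    for _ in range(levels):
        a, d = haar_dwt(a)
        details.append(d)
    sigma = np.median(np.abs(details[0])) / 0.6745   # robust noise estimate
    t = sigma * np.sqrt(2 * np.log(len(x)))          # universal threshold
    for d in reversed(details):
        a = haar_idwt(a, soft(d, t))
    return a

rng = np.random.default_rng(2)
n = 1024
t_ax = np.linspace(0, 1, n)
clean = np.sin(2 * np.pi * 5 * t_ax)     # stand-in for a bearing signature
noisy = clean + rng.normal(0, 0.4, n)
den = denoise(noisy)

rmse = lambda a, b: np.sqrt(np.mean((a - b) ** 2))
snr = lambda s, x: 10 * np.log10(np.sum(s ** 2) / np.sum((x - s) ** 2))
```

Comparing `rmse`/`snr` before and after denoising is exactly the first-stage evaluation the paper performs on its synthetic signal.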

Vijay, G S; Kumar, H S; Srinivasa Pai, P; Sriram, N S; Rao, Raj B K N

2012-01-01

191

The wavelet based denoising has proven its ability to denoise the bearing vibration signals by improving the signal-to-noise ratio (SNR) and reducing the root-mean-square error (RMSE). In this paper seven wavelet based denoising schemes have been evaluated based on the performance of the Artificial Neural Network (ANN) and the Support Vector Machine (SVM), for the bearing condition classification. The work consists of two parts, the first part in which a synthetic signal simulating the defective bearing vibration signal with Gaussian noise was subjected to these denoising schemes. The best scheme based on the SNR and the RMSE was identified. In the second part, the vibration signals collected from a customized Rolling Element Bearing (REB) test rig for four bearing conditions were subjected to these denoising schemes. Several time and frequency domain features were extracted from the denoised signals, out of which a few sensitive features were selected using the Fisher's Criterion (FC). Extracted features were used to train and test the ANN and the SVM. The best denoising scheme identified, based on the classification performances of the ANN and the SVM, was found to be the same as the one obtained using the synthetic signal. PMID:23213323

G. S., Vijay; H. S., Kumar; Pai P., Srinivasa; N. S., Sriram; Rao, Raj B. K. N.

2012-01-01

192

We propose a density estimator based on penalized likelihood and total variation. Driven by a single smoothing parameter, the nonlinear estimator has the properties of being locally adaptive and positive everywhere without a log- or root-transform. For the fast selection of the smoothing parameter we employ the sparsity ℓ1 information criterion. Furthermore the estimated density has the advantage of

Sylvain Sardy; Paul Tseng

2010-01-01

193

Demonstration of line transect methodologies to estimate urban gray squirrel density

Because studies estimating density of gray squirrels (Sciurus carolinensis) have been labor intensive and costly, I demonstrate the use of line transect surveys to estimate gray squirrel density and determine the costs of conducting surveys to achieve precise estimates. Density estimates are based on four transects that were surveyed five times from 30 June to 9 July 1994. Using the program DISTANCE, I estimated there were 4.7 (95% CI = 1.86-11.92) gray squirrels/ha on the Clemson University campus. Eleven additional surveys would have decreased the percent coefficient of variation from 30% to 20% and would have cost approximately $114. Estimating urban gray squirrel density using line transect surveys is cost effective and can provide unbiased estimates of density, provided that none of the assumptions of distance sampling theory are violated.
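Survey-effort projections of this kind commonly follow the distance-sampling rule that the coefficient of variation scales as one over the square root of effort; a sketch with illustrative numbers (not a reproduction of the study's own cost calculation):

```python
import math

def surveys_needed(cv_now, cv_target, surveys_now):
    """Project replicate surveys for a target CV, assuming CV ~ 1/sqrt(effort)."""
    return math.ceil(surveys_now * (cv_now / cv_target) ** 2)

# illustrative: 20 replicate surveys at CV = 30%, aiming for CV = 20%
extra = surveys_needed(0.30, 0.20, 20) - 20   # additional surveys required
```

In practice the encounter-rate variance component dominates, so programs such as DISTANCE refine this rough square-root projection.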

Hein, E.W. [Los Alamos National Lab., NM (United States)]

1997-11-01

194

Deriving Atmospheric Density Estimates Using Satellite Precision Orbit Ephemerides

Current atmospheric models are incapable of properly modeling all of the density variations in the Earth’s upper atmosphere. Precision orbit ephemerides (POE) are utilized in an orbit determination process to generate ...

Hiatt, Andrew Timothy

2009-01-01

195

Wavelet-based index of magnetic storm activity (WISA) and its comparison to the Dst index. Zhonghua Xu, Lie Zhu, Jan Sojka. Keywords: Geomagnetic indices; Ring current; Magnetic storms; Wavelet transform. Abstract: A wavelet-based index of magnetic storm activity [Jach et al., 2006. Wavelet-based index of magnetic storm activity. Journal of Geophysical Research 111, A09215, doi:10.1029/2006JA011635]

Kokoszka, Piotr

196

A wavelet-based index of storm activities (WISA) has been recently developed (Jach et al., 2006) to complement the traditional Dst index. The new index can be computed automatically using the wavelet-based statistical procedure without human intervention on the selection of quiet days and the removal of secular variations. In addition, the WISA is flexible on data stretch and has a

Z. Xu; L. Zhu; J. J. Sojka; P. Kokoszka; A. Jach

2006-01-01

197

Nonparametric Estimation of Mixed Partial Derivatives of a Multivariate Density

ERIC Educational Resources Information Center

A class of estimators which are asymptotically unbiased and mean square consistent are exhibited. Theorems giving necessary and sufficient conditions for uniform asymptotic unbiasedness and for mean square consistency are presented along with applications of the estimator to certain statistical problems. (Author/RC)

Singh, R. S.

1976-01-01

198

ESTIMATING DENSITY AND RELATIVE ABUNDANCE OF SLOTH BEARS

Estimates of abundance based on capturing, marking, and recapturing a small sample of bears are likely to be biased and imprecise, and indices of abundance are of little value if not verified with reliable population estimates. We captured and radiocollared 17 sloth bears (Melursus ursinus) in Royal Chitwan National Park, Nepal, but recapture rates were too low to derive a

David L. Garshelis; Anup R. Joshi; James L. D. Smith

199

A wavelet-based algorithm to estimate ocean wave group parameters from radar images

In recent years, new remote sensing techniques have been developed to measure two-dimensional (2-D) sea surface elevation fields. The availability of these data has led to the necessity to extend the classical analysis methods for one-dimensional (1-D) buoy time series to two dimensions. This paper is concerned with the derivation of group parameters from 2-D sea surface elevation fields using

A. Niedermeier; J. C. N. Borge; S. Lehner; J. Schultz-Stellenfleth

2005-01-01

200

Digital Radiographic Image Denoising Via Wavelet-Based Hidden Markov Model Estimation

This paper presents a technique for denoising digital radiographic images based upon the wavelet-domain Hidden Markov tree (HMT) model. The method uses the Anscombe’s transformation to adjust the original image, corrupted by Poisson noise, to a Gaussian noise model. The image is then decomposed in different subbands of frequency and orientation responses using the dual-tree complex wavelet transform, and the

Ricardo J. Ferrari; Robin Winsor

2005-01-01

201

Recovery rate is essential to the estimation of a portfolio's loss and economic capital. Neglecting the randomness of the distribution of the recovery rate may underestimate the risk. The study introduces two kinds of distribution models, Beta distribution estimation and kernel density estimation, to simulate the distribution of recovery rates of corporate loans and bonds. Models based on the Beta distribution are common in practice, for example in CreditMetrics by J.P. Morgan, Portfolio Manager by KMV, and LossCalc by Moody's. However, the Beta distribution has a fatal defect: it cannot fit bimodal or multimodal distributions, such as the recovery rates of corporate loans and bonds shown by Moody's new data. To overcome this flaw, kernel density estimation is introduced, and we compare the simulation results of the histogram, Beta distribution estimation, and kernel density estimation, reaching the conclusion that the Gaussian kernel density estimate better imitates the distribution of bimodal or multimodal data samples of corporate loans and bonds. Finally, a Chi-square test of the Gaussian kernel density estimate shows that it can fit the curve of recovery rates of loans and bonds. Using the kernel density distribution to precisely delineate the bimodal recovery rates of bonds is therefore optimal in credit risk management. PMID:23874558
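The contrast between a moment-fitted Beta distribution and a Gaussian kernel density estimate on bimodal recovery-rate data can be sketched as follows (synthetic data; Silverman's rule-of-thumb bandwidth is an assumption here, not necessarily the paper's choice):

```python
import numpy as np
from math import gamma

rng = np.random.default_rng(3)
# bimodal "recovery rate" sample: many near-total losses, many near-full recoveries
r = np.concatenate([rng.beta(2, 8, 500), rng.beta(8, 2, 500)])

# Beta fit by the method of moments
m, v = r.mean(), r.var()
c = m * (1 - m) / v - 1
a_hat, b_hat = m * c, (1 - m) * c          # both < 1 here: no interior modes possible

def beta_pdf(x, a, b):
    return gamma(a + b) / (gamma(a) * gamma(b)) * x ** (a - 1) * (1 - x) ** (b - 1)

# Gaussian kernel density estimate, Silverman's rule-of-thumb bandwidth
h = 1.06 * r.std() * len(r) ** (-1 / 5)
def kde(x):
    return np.mean(np.exp(-0.5 * ((x - r) / h) ** 2)) / (h * np.sqrt(2 * np.pi))
```

Evaluating `kde` near 0.15, 0.5, and 0.85 shows two interior peaks separated by a trough, a shape the fitted Beta density structurally cannot produce, which is the paper's central point.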

Chen, Rongda; Wang, Ze

2013-01-01

202

Estimation of current density distribution under electrodes for external defibrillation

Background Transthoracic defibrillation is the most common life-saving technique for the restoration of the heart rhythm of cardiac arrest victims. The procedure requires adequate application of large electrodes on the patient chest, to ensure low-resistance electrical contact. The current density distribution under the electrodes is non-uniform, leading to muscle contraction and pain, or risks of burning. The recent introduction of automatic external defibrillators and even wearable defibrillators, presents new demanding requirements for the structure of electrodes. Method and Results Using the pseudo-elliptic differential equation of Laplace type with appropriate boundary conditions and applying finite element method modeling, electrodes of various shapes and structure were studied. The non-uniformity of the current density distribution was shown to be moderately improved by adding a low resistivity layer between the metal and tissue and by a ring around the electrode perimeter. The inclusion of openings in long-term wearable electrodes additionally disturbs the current density profile. However, a number of small-size perforations may result in acceptable current density distribution. Conclusion The current density distribution non-uniformity of circular electrodes is about 30% less than that of square-shaped electrodes. The use of an interface layer of intermediate resistivity, comparable to that of the underlying tissues, and a high-resistivity perimeter ring, can further improve the distribution. The inclusion of skin aeration openings disturbs the current paths, but an appropriate selection of number and size provides a reasonable compromise. PMID:12537593
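The boundary-value problem described, Laplace's equation under a surface electrode, can be sketched with a crude finite-difference relaxation instead of the authors' finite-element model; even this toy setup reproduces the edge enhancement of current density the abstract discusses (grid size and electrode width are arbitrary):

```python
import numpy as np

# Unit square of uniform conductivity: solve Laplace's equation by Jacobi iteration.
# Top boundary: electrode strip at phi = 1 (rest of top insulated);
# bottom boundary: return electrode at phi = 0; sides insulated.
n = 41
phi = np.zeros((n, n))
el = slice(13, 28)                    # electrode columns on the top edge

for _ in range(5000):
    phi[0, el] = 1.0                  # electrode (Dirichlet)
    phi[-1, :] = 0.0                  # return electrode
    new = phi.copy()
    new[1:-1, 1:-1] = 0.25 * (phi[:-2, 1:-1] + phi[2:, 1:-1] +
                              phi[1:-1, :-2] + phi[1:-1, 2:])
    new[:, 0] = new[:, 1]             # insulated sides: zero normal derivative
    new[:, -1] = new[:, -2]
    top = new[1, :].copy()            # insulated top outside the electrode
    top[el] = 1.0
    new[0, :] = top
    phi = new

# current density under the electrode ~ normal field |d(phi)/dy|
j = np.abs(phi[0, el] - phi[1, el])
edge_vs_center = j[0] / j[len(j) // 2]   # > 1: current crowds at the electrode edge
```

The interface-layer and perimeter-ring remedies studied in the paper work precisely by flattening this `edge_vs_center` ratio toward 1.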

Krasteva, Vessela Tz; Papazov, Sava P

2002-01-01

203

An Investigation of Wavelet Bases for Grid-Based Multi-Scale Simulations Final Report

The research summarized in this report is the result of a two-year effort that has focused on evaluating the viability of wavelet bases for the solution of partial differential equations. The primary objective for this work has been to establish a foundation for hierarchical/wavelet simulation methods based upon numerical performance, computational efficiency, and the ability to exploit the hierarchical adaptive nature of wavelets. This work has demonstrated that hierarchical bases can be effective for problems with a dominant elliptic character. However, the strict enforcement of orthogonality was found to be less desirable than weaker semi-orthogonality or bi-orthogonality for solving partial differential equations. This conclusion has led to the development of a multi-scale linear finite element based on a hierarchical change of basis. The reproducing kernel particle method has been found to yield extremely accurate phase characteristics for hyperbolic problems while providing a convenient framework for multi-scale analyses.

Baty, R.S.; Burns, S.P.; Christon, M.A.; Roach, D.W.; Trucano, T.G.; Voth, T.E.; Weatherby, J.R.; Womble, D.E.

1998-11-01

204

In this paper, we have applied an efficient wavelet-based approximation method for solving the Fisher's type and the fractional Fisher's type equations arising in the biological sciences. To the best of our knowledge, no rigorous wavelet solution has previously been reported for the Fisher's and fractional Fisher's equations. The highest derivative in the differential equation is expanded into a Legendre series; this approximation is integrated while the boundary conditions are applied using integration constants. With the help of Legendre wavelet operational matrices, the Fisher's equation and the fractional Fisher's equation are converted into a system of algebraic equations. Block-pulse functions are used to investigate the Legendre wavelet coefficient vectors of the nonlinear terms. The convergence of the proposed methods is proved. Finally, we give some numerical examples to demonstrate the validity and applicability of the method. PMID:24908255
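For reference, the Fisher's equation and its time-fractional counterpart referred to above read, in normalized form (with the fractional time derivative usually taken in the Caputo sense, an assumption here):

```latex
\frac{\partial u}{\partial t}=\frac{\partial^{2}u}{\partial x^{2}}+u(1-u),
\qquad
\frac{\partial^{\alpha}u}{\partial t^{\alpha}}
=\frac{\partial^{2}u}{\partial x^{2}}+u(1-u),
\quad 0<\alpha\le 1,
```

the integer-order case being recovered at $\alpha=1$.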

Rajaraman, R; Hariharan, G

2014-07-01

205

Bayesian Analysis of Mass Spectrometry Proteomics Data using Wavelet Based Functional Mixed Models

In this paper, we analyze MALDI-TOF mass spectrometry proteomic data using Bayesian wavelet-based functional mixed models. By modeling mass spectra as functions, this approach avoids reliance on peak detection methods. The flexibility of this framework in modeling non-parametric fixed and random effect functions enables it to model the effects of multiple factors simultaneously, allowing one to perform inference on multiple factors of interest using the same model fit, while adjusting for clinical or experimental covariates that may affect both the intensities and locations of peaks in the spectra. From the model output, we identify spectral regions that are differentially expressed across experimental conditions, while controlling the Bayesian FDR, in a way that takes both statistical and clinical significance into account. We apply this method to two cancer studies. PMID:17888041

Morris, Jeffrey S.; Brown, Philip J.; Herrick, Richard C.; Baggerly, Keith A.; Coombes, Kevin R.

2008-01-01

206

A wavelet-based multiresolution approach to large-eddy simulation of turbulence

NASA Astrophysics Data System (ADS)

The wavelet-based multiresolution analysis (MRA) technique is used to develop a modelling approach to large-eddy simulation (LES) and its associated subgrid closure problem. The LES equations are derived by projecting the Navier-Stokes (N-S) equations onto a hierarchy of wavelet spaces. A numerical framework is then developed for the solution of the large and the small-scale equations. This is done in one dimension, for the Burgers equation, and in three dimensions, for the N-S problem. The proposed methodology is assessed in a priori tests on an atmospheric turbulent time series and on data from direct numerical simulation. A posteriori (dynamic) tests are also carried out for decaying and force-driven Burgers turbulence.
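The projection of a field onto a hierarchy of wavelet spaces, separating resolved large scales from subgrid small scales, can be sketched in one dimension with a Haar decomposition (a toy stand-in for the paper's MRA framework, not its actual formulation):

```python
import numpy as np

def haar_split(u, levels=3):
    """Split a 1-D field into resolved (coarse) and subgrid parts
    by discarding the Haar detail coefficients at the finest levels."""
    a, details = u.copy(), []
    for _ in range(levels):
        d = (a[0::2] - a[1::2]) / 2.0    # half-differences (details)
        a = (a[0::2] + a[1::2]) / 2.0    # pairwise means (approximation)
        details.append(d)
    large = a
    for d in reversed(details):          # reconstruct with details zeroed:
        up = np.empty(2 * len(large))    # both children inherit the block mean
        up[0::2] = large
        up[1::2] = large
        large = up
    return large, u - large

rng = np.random.default_rng(4)
n = 256
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
u = np.sin(x) + 0.1 * rng.standard_normal(n)   # big eddy + fine fluctuations
large, small = haar_split(u)
```

In an LES setting, `large` plays the role of the resolved field evolved on the grid, while `small` is the subgrid contribution that the closure model must account for.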

de la Llave Plata, M.; Cant, R. S.

2010-10-01

207

Application of Wavelet-based Active Power Filter in Accelerator Magnet Power Supply

As modern accelerators demand excellent stability of the magnet power supply (PS), it is necessary to reduce the harmonic currents passing through the magnets. Aiming to suppress ripple currents from the PS in the Beijing Electron-Positron Collider II, a wavelet-based active power filter (APF) is proposed in this paper. The APF is an effective device to improve the quality of currents: as a countermeasure to harmonic currents, the APF circuit generates a harmonic current that counteracts the harmonic current from the PS. Discrete wavelet transform is used to analyze the harmonic components in the supply current, and the active power filter circuit works according to the analysis results. At the end of the paper, simulation and experiment results are given to demonstrate the effectiveness of the proposed active power filter.

Xiaoling, Guo

2013-01-01

208

Conjugate Event Study of Geomagnetic ULF Pulsations with Wavelet-based Indices

NASA Astrophysics Data System (ADS)

The interactions between the solar wind and geomagnetic field produce a variety of space weather phenomena, which can impact the advanced technology systems of modern society including, for example, power systems, communication systems, and navigation systems. One type of phenomena is the geomagnetic ULF pulsation observed by ground-based or in-situ satellite measurements. Here, we describe a wavelet-based index and apply it to study the geomagnetic ULF pulsations observed in Antarctica and Greenland magnetometer arrays. The wavelet indices computed from these data show spectrum, correlation, and magnitudes information regarding the geomagnetic pulsations. The results show that the geomagnetic field at conjugate locations responds differently according to the frequency of pulsations. The index is effective for identification of the pulsation events and measures important characteristics of the pulsations. It could be a useful tool for the purpose of monitoring geomagnetic pulsations.

Xu, Z.; Clauer, C. R.; Kim, H.; Weimer, D. R.; Cai, X.

2013-12-01

209

Design of wavelet-based ECG detector for implantable cardiac pacemakers.

A wavelet Electrocardiogram (ECG) detector for low-power implantable cardiac pacemakers is presented in this paper. The proposed wavelet-based ECG detector consists of a wavelet decomposer with wavelet filter banks, a QRS complex detector of hypothesis testing with wavelet-demodulated ECG signals, and a noise detector with zero-crossing points. In order to achieve high detection accuracy with low power consumption, a multi-scaled product algorithm and soft-threshold algorithm are efficiently exploited in our ECG detector implementation. Our algorithmic and architectural level approaches have been implemented and fabricated in a standard 0.35 µm CMOS technology. The test chip, including a low-power analog-to-digital converter (ADC), shows a low detection error rate of 0.196% and low power consumption of 19.02 µW with a 3 V supply voltage. PMID:23893202
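The two signal-processing ingredients named above admit compact sketches (illustrative floating-point forms, not the fabricated chip's fixed-point implementation):

```python
def soft_threshold(coeffs, t):
    """Soft thresholding: shrink each wavelet coefficient toward zero by t,
    suppressing small (noise-dominated) coefficients."""
    out = []
    for c in coeffs:
        mag = max(abs(c) - t, 0.0)
        out.append(mag if c >= 0 else -mag)
    return out

def multiscale_product(w1, w2):
    """Pointwise product of coefficients at two adjacent scales.
    QRS edges keep the same sign across scales, so the product is large
    and positive there, while noise tends to cancel."""
    return [a * b for a, b in zip(w1, w2)]
```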

Min, Young-Jae; Kim, Hoon-Ki; Kang, Yu-Ri; Kim, Gil-Su; Park, Jongsun; Kim, Soo-Won

2013-08-01

210

Wavelet-based decomposition and analysis of structural patterns in astronomical images

Context. Images of spatially resolved astrophysical objects contain a wealth of morphological and dynamical information, and effective extraction of this information is of paramount importance for understanding the physics and evolution of these objects. Algorithms and methods employed presently for this purpose (such as, for instance, Gaussian model fitting) often use simplified approaches for describing the structure of resolved objects. Aims. Automated (unsupervised) methods for structure decomposition and tracking of structural patterns are needed for this purpose, in order to be able to deal with the complexity of structure and large amount of data involved. Methods. A new Wavelet-based Image Segmentation and Evaluation (WISE) method is developed for multiscale decomposition, segmentation, and tracking of structural patterns in astronomical images. Results. The method is tested against simulated images of relativistic jets and applied to data from long-term monitoring of parsec-scale radio jets in 3C 27...

Mertens, Florent

2014-01-01

211

A new algorithm for wavelet-based heart rate variability analysis

One of the most promising non-invasive markers of the activity of the autonomic nervous system is Heart Rate Variability (HRV). HRV analysis toolkits often provide spectral analysis techniques using the Fourier transform, which assumes that the heart rate series is stationary. To overcome this issue, the Short Time Fourier Transform is often used (STFT). However, the wavelet transform is thought to be a more suitable tool for analyzing non-stationary signals than the STFT. Given the lack of support for wavelet-based analysis in HRV toolkits, such analysis must be implemented by the researcher. This has made this technique underutilized. This paper presents a new algorithm to perform HRV power spectrum analysis based on the Maximal Overlap Discrete Wavelet Packet Transform (MODWPT). The algorithm calculates the power in any spectral band with a given tolerance for the band's boundaries. The MODWPT decomposition tree is pruned to avoid calculating unnecessary wavelet coefficients, thereby optimizing execution t...
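The band-selection idea above, choosing which level-j wavelet packets cover a requested spectral band within a tolerance on the band's boundaries, can be sketched as follows (packet indices are given in frequency order; the mapping to MODWPT tree-node order, and the transform itself, are omitted):

```python
def packet_bands(fs, level):
    """Frequency intervals covered by the 2**level equal-width packets
    that partition [0, fs/2] at the given decomposition level."""
    width = (fs / 2.0) / 2 ** level
    return [(k * width, (k + 1) * width) for k in range(2 ** level)]

def nodes_for_band(fs, level, f_lo, f_hi, tol):
    """Indices of the packets needed to cover [f_lo, f_hi]; the achieved
    band edges may deviate from the requested ones by at most tol."""
    chosen = [k for k, (lo, hi) in enumerate(packet_bands(fs, level))
              if hi > f_lo and lo < f_hi]
    width = (fs / 2.0) / 2 ** level
    lo, hi = chosen[0] * width, (chosen[-1] + 1) * width
    if abs(lo - f_lo) > tol or abs(hi - f_hi) > tol:
        raise ValueError("band edges not achievable within tolerance; "
                         "increase the decomposition level")
    return chosen
```

Summing the squared coefficients of the selected packets would then give the power in the requested band.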

García, Constantino A; Vila, Xosé; Márquez, David G

2014-01-01

212

National Technical Information Service (NTIS)

The DARPA project, 'Develop and Demonstrate Real-Time Wavelet Based Automatic Target Recognition Using Sonar and Synthetic Aperture Radar (SAR) data' was initiated March 30, 1998 with a kick-off meeting attended by personnel from Rice University CML, North...

P. Haley

2001-01-01

213

Frequently, exposure data are measured over time on a grid of discrete values that collectively define a functional observation. In many applications, researchers are interested in using these measurements as covariates to predict a scalar response in a regression setting, with interest focusing on the most biologically relevant time window of exposure. One example is in panel studies of the health effects of particulate matter (PM), where particle levels are measured over time. In such studies, there are many more values of the functional data than observations in the data set so that regularization of the corresponding functional regression coefficient is necessary for estimation. Additional issues in this setting are the possibility of exposure measurement error and the need to incorporate additional potential confounders, such as meteorological or co-pollutant measures, that themselves may have effects that vary over time. To accommodate all these features, we develop wavelet-based linear mixed distributed lag models that incorporate repeated measures of functional data as covariates into a linear mixed model. A Bayesian approach to model fitting uses wavelet shrinkage to regularize functional coefficients. We show that, as long as the exposure error induces fine-scale variability in the functional exposure profile and the distributed lag function representing the exposure effect varies smoothly in time, the model corrects for the exposure measurement error without further adjustment. Both these conditions are likely to hold in the environmental applications we consider. We examine properties of the method using simulations and apply the method to data from a study examining the association between PM, measured as hourly averages for 1–7 days, and markers of acute systemic inflammation. We use the method to fully control for the effects of confounding by other time-varying predictors, such as temperature and co-pollutants. PMID:20156988
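In conventional notation (ours, not necessarily the authors'), a wavelet-based linear mixed distributed lag model of the kind described above can be written as:

```latex
y_{ij} = \beta_0 + \int_0^T x_{ij}(t)\,\gamma(t)\,dt
       + \mathbf{z}_{ij}^{\top}\boldsymbol{\alpha} + b_i + \varepsilon_{ij},
\qquad
\gamma(t) = \sum_{j,k} d_{jk}\,\psi_{jk}(t),
```

where x_{ij}(t) is the functional exposure (e.g., hourly PM), gamma(t) is the distributed lag function, z_{ij} collects time-varying confounders such as temperature and co-pollutants, b_i is a subject-level random effect, and the wavelet coefficients d_{jk} are regularized by Bayesian wavelet shrinkage priors.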

Malloy, Elizabeth J.; Morris, Jeffrey S.; Adar, Sara D.; Suh, Helen; Gold, Diane R.; Coull, Brent A.

2010-01-01

214

ESTIMATION OF THE JUMP SIZE DENSITY IN A MIXED COMPOUND POISSON PROCESS.

Estimation of the Jump Size Density in a Mixed Compound Poisson Process. F. Comte; C. Duval; V. ... Keywords: mixed compound Poisson process; nonparametric density estimation; penalization method. AMS Classification: 62M09, 62G07. Introduction: Compound Poisson processes are commonly used in many fields of applications.

Paris-Sud XI, UniversitÃ© de

215

PROBABILITY DENSITY ESTIMATION IN HIGHER DIMENSIONS David W. Scott and James R. Thompson

... the method to be employed should also be data determined. Using such a philosophy, density estimation has been ... a good job of estimating the density function and its derivatives if only the sample size ... mechanisms are, in fact, somewhat nonstationary even over the tim...

Scott, David W.

216

Characterization of a maximum-likelihood nonparametric density estimator of kernel type

NASA Technical Reports Server (NTRS)

Kernel type density estimators calculated by the method of sieves are characterized. Proofs are presented for the characterization theorem: Let x(1), x(2), ..., x(n) be a random sample from a population with density f(0). Let sigma > 0 and consider estimators f of f(0) defined by (1).

Geman, S.; Mcclure, D. E.

1982-01-01

217

Estimating option implied risk-neutral densities using spline and hypergeometric functions

Summary We examine the ability of two recent methods – the smoothed implied volatility smile method (SML) and the density functionals based on confluent hypergeometric functions (DFCH) – for estimating implied risk-neutral densities (RNDs) from European-style options. Two complementary Monte Carlo experiments are conducted and the performance of the two RND estimators is evaluated by the root mean integrated squared

Ruijun Bu; Kaddour Hadri

2007-01-01

218

Probability density function estimation for video in the DCT domain

NASA Astrophysics Data System (ADS)

Regardless of the final targeted application (compression, watermarking, texture analysis, indexation, ...), image/video modelling in the DCT domain is generally approached by tests of concordance with some well-known pdfs (Gaussian, generalised Gaussian, Laplace, Rayleigh, ...). Instead of forcing the images/videos to stick to such theoretical models, our study aims at estimating the true pdf characterising their behaviour. In this respect, we considered three intensively used ways of applying the DCT, namely on whole frames, on 4x4 blocks, and on 8x8 blocks. In each case, we first prove that a law modelling the corresponding coefficients exists. Then, we estimate this law by Gaussian mixtures, and finally we assess the generality of such a model with respect to the data on which it was computed and to the estimation method it relies on.
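A toy version of the Gaussian-mixture fitting step (plain 1-D EM; the paper's estimation method may differ in initialization, dimensionality, and model selection):

```python
import math, random

def em_gmm_1d(data, k=2, iters=50, seed=0):
    """Plain EM for a 1-D Gaussian mixture, a stand-in for fitting
    DCT-coefficient distributions with Gaussian mixtures."""
    rng = random.Random(seed)
    mu = rng.sample(data, k)       # initialize means at random data points
    var = [1.0] * k
    w = [1.0 / k] * k
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        resp = []
        for x in data:
            p = [w[j] * math.exp(-(x - mu[j]) ** 2 / (2 * var[j]))
                 / math.sqrt(2 * math.pi * var[j]) for j in range(k)]
            s = sum(p) or 1e-300   # guard against total underflow
            resp.append([pj / s for pj in p])
        # M-step: reestimate weights, means, variances
        for j in range(k):
            nj = sum(r[j] for r in resp) + 1e-300  # guard empty component
            w[j] = nj / len(data)
            mu[j] = sum(r[j] * x for r, x in zip(resp, data)) / nj
            var[j] = max(sum(r[j] * (x - mu[j]) ** 2
                             for r, x in zip(resp, data)) / nj, 1e-6)
    return w, mu, var
```

Regardless of where the components end up, each M-step preserves the overall mixture mean, which gives a simple sanity check.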

Dumitru, O.; Mitrea, M.; Prêteux, F.; Pathak, A.

2008-02-01

219

Density estimation for fatty acids and vegetable oils based on their fatty acid composition

The liquid density of fatty acids can be accurately estimated by the modified Rackett equation over a wide range of temperatures. The modified Rackett equation requires the critical properties and an empirical parameter, Z_RA, for each acid as the basis for computing density as a function of temperature. The liquid density of vegetable oils can be estimated by using ...

J. D. Halvorsen; L. D. Clements

1993-01-01

220

Estimates of cetacean abundance, biomass, and population density are

... be affected by anthropogenic sound (e.g., sonar, ship noise, and seismic surveys) and climate change (..., 1997). Large whales also die from ship strikes (Carretta et al., 2006). West coast cetaceans may ... of cetaceans along the U.S. west coast were estimated from ship surveys conducted in the summer and fall

221

ESTIMATES OF POPULATION DENSITY AND GROWTH OF BLACK BEARS IN THE SMOKY MOUNTAINS

To estimate population abundance, data were collected from 1,239 black bears (Ursus americanus) trapped in 3 areas of the Smoky Mountains (SM), 1972-89. Bears were tagged, tattooed, and released, and using the Jolly-Seber open population model, density estimates ranged from 0.09 to 0.35 bears/km². Year-to-year density estimates and the observed rate of growth (0-2%) indicated a stable to slightly increasing

PETER K. MCLEAN; MICHAEL R. PELTON

222

Unbiased estimators of wildlife population densities using aural information

... with much promise among people in wildlife when compared with other methods of estimation. Peters [15] found that the call-count index of mourning doves seems to be subject to less variation than road count data. The Southeastern Association [18] found...

Durland, Eric Newton

2012-06-07

223

Daytime fog detection and density estimation with entropy minimization

NASA Astrophysics Data System (ADS)

Fog disturbs proper image processing in many outdoor observation tools. For instance, fog reduces the visibility of obstacles in vehicle driving applications. Usually, estimating the amount of fog in the scene image makes it possible to greatly improve the image processing, and thus to better perform the observation task. One possibility is to restore the visibility of the contrasts in the image from the foggy scene image before applying the usual image processing. Several algorithms have been proposed in recent years for defogging. Before applying defogging, it is necessary to detect the presence of fog, so as not to emphasize contrasts due to noise. Surprisingly, only a reduced number of image processing algorithms have been proposed for fog detection and characterization. Most are dedicated to static cameras and cannot be used when the camera is moving. Daytime fog is characterized by its extinction coefficient, which is equivalent to the visibility distance. A visibility meter can be used for fog detection and characterization, but this kind of sensor performs an estimation in a relatively small volume of air, and is thus sensitive to heterogeneous fog and to air turbulence with moving cameras. In this paper, we propose an original algorithm, based on entropy minimization, to detect fog and estimate its extinction coefficient by processing stereo pairs. This algorithm is fast, provides accurate results using a low-cost stereo camera sensor and, most importantly, can work when the cameras are moving. The proposed algorithm is evaluated on synthetic and camera images with ground truth. Results show that the proposed method is accurate and, combined with a fast stereo reconstruction algorithm, should provide a solution, close to real time, for fog detection and visibility estimation for moving sensors.

Caraffa, L.; Tarel, J. P.

2014-08-01

224

Estimation of a k-monotone density: characterizations, consistency and minimax lower bounds.

The classes of monotone or convex (and necessarily monotone) densities on ℝ₊ can be viewed as special cases of the classes of k-monotone densities on ℝ₊. These classes bridge the gap between the classes of monotone (1-monotone) and convex decreasing (2-monotone) densities for which asymptotic results are known, and the class of completely monotone (∞-monotone) densities on ℝ₊. In this paper we consider non-parametric maximum likelihood and least squares estimators of a k-monotone density g(0). We prove existence of the estimators and give characterizations. We also establish consistency properties, and show that the estimators are splines of degree k - 1 with simple knots. We further provide asymptotic minimax risk lower bounds for estimating the derivatives [Formula: see text], at a fixed point x(0) under the assumption that [Formula: see text]. PMID:20436949

Balabdaoui, Fadoua; Wellner, Jon A

2010-02-01

225

Probabilistic Analysis and Density Parameter Estimation Within Nessus

NASA Technical Reports Server (NTRS)

This NASA educational grant has the goal of promoting probabilistic analysis methods to undergraduate and graduate UTSA engineering students. Two undergraduate-level and one graduate-level course were offered at UTSA providing a large number of students exposure to and experience in probabilistic techniques. The grant provided two research engineers from Southwest Research Institute the opportunity to teach these courses at UTSA, thereby exposing a large number of students to practical applications of probabilistic methods and state-of-the-art computational methods. In classroom activities, students were introduced to the NESSUS computer program, which embodies many algorithms in probabilistic simulation and reliability analysis. Because the NESSUS program is used at UTSA in both student research projects and selected courses, a student version of a NESSUS manual has been revised and improved, with additional example problems being added to expand the scope of the example application problems. This report documents two research accomplishments in the integration of a new sampling algorithm into NESSUS and in the testing of the new algorithm. The new Latin Hypercube Sampling (LHS) subroutines use the latest NESSUS input file format and specific files for writing output. The LHS subroutines are called out early in the program so that no unnecessary calculations are performed. Proper correlation between sets of multidimensional coordinates can be obtained by using NESSUS' LHS capabilities. Finally, two types of correlation are written to the appropriate output file. The program enhancement was tested by repeatedly estimating the mean, standard deviation, and 99th percentile of four different responses using Monte Carlo (MC) and LHS. These test cases, put forth by the Society of Automotive Engineers, are used to compare probabilistic methods. 
For all test cases, it is shown that LHS has a lower estimation error than MC when used to estimate the mean, standard deviation, and 99th percentile of the four responses at the 50 percent confidence level and using the same number of response evaluations for each method. In addition, LHS requires fewer calculations than MC in order to be 99.7 percent confident that a single mean, standard deviation, or 99th percentile estimate will be within at most 3 percent of the true value of each parameter. Again, this is shown for all of the test cases studied. For that reason it can be said that NESSUS is an important reliability tool that has a variety of sound probabilistic methods a user can employ; furthermore, the newest LHS module is a valuable new enhancement of the program.

Godines, Cody R.; Manteufel, Randall D.; Chamis, Christos C. (Technical Monitor)

2002-01-01

226

Improving Density Estimation by Incorporating Spatial Information. Laura M. Smith; Matthew S. Keegan; ... Angeles. November 30, 2009. Given discrete event data, we wish to produce a probability density ... Methods of density estimation, such as Kernel Density Estimation, do not incorporate geographical information. Using ...

Soatto, Stefano

227

A comparison of 2 techniques for estimating deer density

We applied mark-resight and area-conversion methods to estimate deer abundance at a 2,862-ha area in and surrounding the Gettysburg National Military Park and Eisenhower National Historic Site during 1987-1991. One observer in each of 11 compartments counted marked and unmarked deer during 65-75 minutes at dusk during 3 counts in each of April and November. Use of radio-collars and vinyl collars provided a complete inventory of marked deer in the population prior to the counts. We sighted 54% of the marked deer during April 1987 and 1988, and 43% of the marked deer during November 1987 and 1988. Mean number of deer counted increased from 427 in April 1987 to 582 in April 1991, and increased from 467 in November 1987 to 662 in November 1990. Herd size during April, based on the mark-resight method, increased from approximately 700-1,400 from 1987-1991, whereas the estimates for November indicated an increase from 983 for 1987 to 1,592 for 1990. Given the large proportion of open area and the extensive road system throughout the study area, we concluded that the sighting probability for marked and unmarked deer was fairly similar. We believe that the mark-resight method was better suited to our study than the area-conversion method because deer were not evenly distributed between areas suitable and unsuitable for sighting within open and forested areas. The assumption of equal distribution is required by the area-conversion method. Deer marked for the mark-resight method also helped reduce double counting during the dusk surveys.
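A minimal mark-resight calculation in the spirit of this study (the Chapman-corrected Lincoln-Petersen form; the paper's exact estimator and field counts are not reproduced here, and the numbers in the usage check below are hypothetical):

```python
def mark_resight_estimate(marked_total, marked_seen, total_seen):
    """Chapman-corrected Lincoln-Petersen estimate of herd size:
    N = (M+1)(n+1)/(m+1) - 1, where M is the number of marked animals
    known to be in the population, n the total animals counted, and
    m the marked animals among those counted."""
    if marked_seen == 0:
        raise ValueError("no marked animals resighted")
    return (marked_total + 1) * (total_seen + 1) / (marked_seen + 1) - 1
```

The abstract's sighting probabilities (54% and 43% of marked deer) play the role of m/M here; the estimator scales the raw count up by the inverse of that probability.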

Storm, G.L.; Cottam, D.F.; Yahner, R.H.; Nichols, J.D.

1977-01-01

228

Density Ratio Estimation: A Comprehensive Review. Masashi Sugiyama (Tokyo Institute of Technology); ... Kanamori (Nagoya University, kanamori@is.nagoya-u.ac.jp). Density ratio estimation has attracted ... inference, and conditional probability estimation. When estimating the density ratio, it is preferable ...

Sugiyama, Masashi

229

The estimation of the gradient of a density function, with applications in pattern recognition

Nonparametric density gradient estimation using a generalized kernel approach is investigated. Conditions on the kernel functions are derived to guarantee asymptotic unbiasedness, consistency, and uniform consistency of the estimates. The results are generalized to obtain a simple mean-shift estimate that can be extended in a k-nearest-neighbor approach. Applications of gradient estimation to pattern recognition are presented using clustering and intrinsic dimensionality
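The mean-shift estimate named above can be sketched in one dimension with a Gaussian kernel (a fixed-bandwidth illustration; the paper also develops a k-nearest-neighbor variant):

```python
import math

def mean_shift_mode(data, start, bandwidth, iters=100):
    """Gaussian-kernel mean-shift iteration: repeatedly move the point to
    the kernel-weighted mean of the sample, ascending the estimated
    density gradient toward a local mode."""
    x = start
    for _ in range(iters):
        w = [math.exp(-((x - d) / bandwidth) ** 2 / 2) for d in data]
        x_new = sum(wi * di for wi, di in zip(w, data)) / sum(w)
        if abs(x_new - x) < 1e-9:  # converged to a fixed point
            break
        x = x_new
    return x
```

For clustering, each sample would be used as a starting point and samples converging to the same mode grouped together.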

KEINOSUKE FUKUNAGA; LARRY D. HOSTETLER

1975-01-01

230

A simple alternative to line transects of nests for estimating orangutan densities.

We conducted a validation of the line transect technique to estimate densities of orangutan (Pongo pygmaeus) nests in a Bornean swamp forest, and compared these results with density estimates based on nest counts in plots and on female home ranges. First, we examined the accuracy of the line transect method. We found that the densities based on a pass in both directions of two experienced pairs of observers was 27% below a combined sample based on transect walks by eight pairs of observers, suggesting that regular line-transect densities may seriously underestimate true densities. Second, we compared these results with those obtained by nest counts in 0.2-ha plots. This method produced an estimated 15.24 nests/ha, as compared to 10.0 and 10.9, respectively, by two experienced pairs of observers who walked a line transect in both directions. Third, we estimated orangutan densities based on female home range size and overlap and the proportion of females in the population, which produced a density of 4.25-4.5 individuals/km². Converting nest densities into orangutan densities, using locally estimated parameters for nest production rate and proportion of nest builders in the population, we found that density estimates based on the line transect results of the most experienced pairs on a double pass were 2.82 and 3.08 orangutans/km², based on the combined line transect data are 4.04, and based on plot counts are 4.30. In this swamp forest, plot counts therefore give more accurate estimates than do line transects. We recommend that this new method be evaluated in other forest types as well. PMID:15983724
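The nest-to-orangutan conversion mentioned above follows the standard relation D = d / (p · r · t); the parameter values in the check below are illustrative, not the locally estimated ones from the study.

```python
def orangutan_density(nest_density_per_ha, builders_prop, nests_per_day,
                      decay_days):
    """Convert a nest density (nests/ha) to an orangutan density
    (individuals/km²) via D = d / (p * r * t), where p is the proportion
    of nest builders in the population, r the nests built per individual
    per day, and t the mean nest decay time in days."""
    nests_per_km2 = nest_density_per_ha * 100.0  # 100 ha per km²
    return nests_per_km2 / (builders_prop * nests_per_day * decay_days)
```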

van Schaik, Carel P; Wich, Serge A; Utami, Sri Suci; Odom, Kisar

2005-10-01

231

How Bandwidth Selection Algorithms Impact Exploratory Data Analysis Using Kernel Density Estimation

Exploratory data analysis (EDA) is important, yet often overlooked in the social and behavioral sciences. Graphical analysis of one's data is central to EDA. A viable method of estimating and graphing the underlying density ...

Harpole, Jared Kenneth

2013-05-31

232

Density Estimation with Confidence Sets Exemplified by Superclusters and Voids in the Galaxies

A method is presented for forming both a point estimate and a confidence set of semiparametric densities. The final product is a three-dimensional figure that displays a selection of density estimates for a plausible range of smoothing parameters. The boundaries of the smoothing parameter are determined by a nonparametric goodness-of-fit test that is based on the sample spacings. For each

Kathryn Roeder

1990-01-01

233

Density meter algorithm and system for estimating sampling/mixing uncertainty

The Laboratories Department at the Savannah River Plant (SRP) has installed a six-place density meter with an automatic sampling device. This paper describes the statistical software developed to analyze the density of uranyl nitrate solutions using this automated system. The purpose of this software is twofold: to estimate the sampling/mixing and measurement uncertainties in the process and to provide a measurement control program for the density meter. Non-uniformities in density are analyzed both analytically and graphically. The mean density and its limit of error are estimated. Quality control standards are analyzed concurrently with process samples and used to control the density meter measurement error. The analyses are corrected for concentration due to evaporation of samples waiting to be analyzed. The results of this program have been successful in identifying sampling/mixing problems and controlling the quality of analyses.
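The mean-and-limit-of-error computation described above can be sketched as follows (the t-factor of 2.0 is a rough 95% coverage assumption, not the plant's actual control standard):

```python
import math
import statistics

def mean_with_limit_of_error(measurements, t_value=2.0):
    """Mean density and a two-sided limit of error, mean ± t * s / sqrt(n),
    from replicate density-meter readings of one sample."""
    n = len(measurements)
    m = statistics.mean(measurements)
    s = statistics.stdev(measurements)        # sample standard deviation
    half_width = t_value * s / math.sqrt(n)
    return m, m - half_width, m + half_width
```

Concurrent readings of quality-control standards would be run through the same function and compared against their certified values to control the meter's measurement error.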

Shine, E.P.

1986-01-01

234

With a growing aging population, a significant portion of which suffers from cardiac diseases, it is conceivable that remote ECG patient monitoring systems will be widely used as Point-of-Care (PoC) applications in hospitals around the world. Therefore, huge amounts of ECG signals collected by Body Sensor Networks (BSNs) from remote patients at home will be transmitted along with other physiological readings such as blood pressure, temperature, and glucose level, and diagnosed by those remote patient monitoring systems. It is utterly important that patient confidentiality is protected while data are transmitted over the public network as well as when they are stored in hospital servers used by remote monitoring systems. In this paper, a wavelet-based steganography technique is introduced which combines encryption and scrambling to protect patient confidential data. The proposed method allows the ECG signal to hide the corresponding patient confidential data and other physiological information, thus guaranteeing the integration between the ECG and the rest. To evaluate the effect of the proposed technique on the ECG signal, two distortion measurement metrics have been used: the Percentage Residual Difference (PRD) and the Wavelet Weighted PRD (WWPRD). It is found that the proposed technique provides high security protection for patient data with low (less than 1%) distortion, and the ECG data remain diagnosable after watermarking (i.e., hiding patient confidential data) as well as after the watermarks (i.e., hidden data) are removed from the watermarked data. PMID:23708767

Ibaida, Ayman; Khalil, Ibrahim

2013-05-21

235

Breast cancer is the most common type of cancer among women and despite recent advances in the medical field, there are still some inherent limitations in the currently used screening techniques. The radiological interpretation of screening X-ray mammograms often leads to over-diagnosis and, as a consequence, to unnecessary traumatic and painful biopsies. Here we propose a computer-aided multifractal analysis of dynamic infrared (IR) imaging as an efficient method for identifying women with risk of breast cancer. Using a wavelet-based multi-scale method to analyze the temporal fluctuations of breast skin temperature collected from a panel of patients with diagnosed breast cancer and some female volunteers with healthy breasts, we show that the multifractal complexity of temperature fluctuations observed in healthy breasts is lost in mammary glands with malignant tumor. Besides potential clinical impact, these results open new perspectives in the investigation of physiological changes that may precede anatomical alterations in breast cancer development. PMID:24860510

Gerasimova, Evgeniya; Audit, Benjamin; Roux, Stephane G; Khalil, André; Gileva, Olga; Argoul, Françoise; Naimark, Oleg; Arneodo, Alain

2014-01-01

236

The FlexWave-II: a wavelet-based compression engine

NASA Astrophysics Data System (ADS)

The FlexWave-II has been developed as a dedicated image compression component for spaceborne applications, enabling a multitude of application scenarios, including lossless and lossy compression. The FlexWave-II provides scalable compression, allowing gradual enhancement or degradation of the image quality in a programmable way. A wavelet-based compression scheme has been selected because of its intrinsic scalability characteristics. Moreover, the compression criteria can be tuned separately for optimal measurement and visual data compression. The FlexWave-II provides full scalability features and high processing performance. It supports push-broom image processing. The wavelet transform engine is capable of computing up to 5 levels of wavelet transform with 5/3-, 9/3- or 9/7-tap wavelet filters, for image sizes as large as 1k×1k pixels. On an FPGA implementation, clocked at 41 MHz, a processing performance of up to 10 Mpixels/second was measured. The wavelet compression engine allows two compression modes: a fixed compression ratio mode optimised for user-defined criteria and a fixed quantisation mode with user-defined quantisation tables.

Vanhoof, B.; Chirila-Rus, A.; Masschelein, B.; Osorio, R.

2002-12-01

237

Revisiting multifractality of high resolution temporal rainfall using a wavelet-based formalism

NASA Astrophysics Data System (ADS)

We re-examine the scaling structure of temporal rainfall using wavelet-based methodologies which offer important advantages compared to the more traditional multifractal approaches such as box counting and structure function techniques. In particular, we explore two methods based on the Continuous Wavelet Transform (CWT) and the Wavelet Transform Modulus Maxima (WTMM): the partition function method and the newer and more efficient magnitude cumulant analysis method. We also explore a two-point magnitude correlation analysis which is able to infer the presence or absence of multiplicativity as the underlying mechanism of scaling. The diagnostic power of these methodologies for small samples, signals with short ranges of scaling, and signals for which high frequency fluctuations are superimposed on a low-frequency component (all common attributes of geophysical signals) is carefully documented. Application of these methodologies to several midwestern convective storms sampled every 5 seconds over several hours provides new insights. They reveal the presence of a very intermittent multifractal structure (a wide spectrum of singularities) in rainfall fluctuations between the scales of 5 minutes and the storm pulse duration of 1-2 hours. The two-point magnitude statistical analysis suggests that this structure is associated with a local multiplicative cascading mechanism which applies only within storm pulses but not over the whole storm duration.

Foufoula-Georgiou, E.; Venugopal, V.; Roux, S. G.; Arneodo, A.

2005-12-01

238

Revisiting multifractality of high-resolution temporal rainfall using a wavelet-based formalism

NASA Astrophysics Data System (ADS)

We reexamine the scaling structure of temporal rainfall using wavelet-based methodologies which, as we demonstrate, offer important advantages compared to the more traditional multifractal approaches such as box counting and structure function techniques. In particular, we explore two methods based on the Continuous Wavelet Transform (CWT) and the Wavelet Transform Modulus Maxima (WTMM): the partition function method and the newer and more efficient magnitude cumulant analysis method. We also report the results of a two-point magnitude correlation analysis which is able to infer the presence or absence of multiplicativity as the underlying mechanism of scaling. The diagnostic power of these methodologies for small samples, signals with short ranges of scaling, and signals for which high-frequency fluctuations are superimposed on a low-frequency component (all common attributes of geophysical signals) is carefully documented. Application of these methodologies to several midwestern convective storms sampled every 5 s over several hours provides new insights. They reveal the presence of a very intermittent multifractal structure (a wide spectrum of singularities) in rainfall fluctuations between the scales of 5 min and the storm pulse duration (of the order of 1-2 hours for the analyzed storms). The two-point magnitude statistical analysis suggests that this structure is consistent with a multiplicative cascading mechanism which however is local in nature; that is, it applies only within each storm pulse but not over the whole storm duration.

Venugopal, V.; Roux, StéPhane G.; Foufoula-Georgiou, Efi; Arneodo, Alain

2006-06-01

239

Interturn fault diagnosis of induction machines has been discussed using various neural network-based techniques. The main challenge in such methods is the computational complexity due to the huge size of the network, and the need to prune a large number of parameters. In this paper, a nearly shift-insensitive complex wavelet-based probabilistic neural network (PNN) model, which has only a single parameter to be optimized, is proposed for interturn fault detection. The algorithm constitutes two parts and runs in an iterative way. In the first part, the PNN structure determination is discussed, which finds the optimum size of the network using an orthogonal least squares regression algorithm, thereby reducing its size. In the second part, a Bayesian classifier fusion is recommended as an effective solution for deciding the machine condition. The testing accuracy, sensitivity, and specificity values, obtained under load, supply, and frequency variations, are highest for the product rule-based fusion scheme. The point of overfitting of the PNN is determined, which reduces the size without compromising the performance. Moreover, a comparative evaluation against a traditional discrete wavelet transform-based method is provided to benchmark the obtained results. PMID:24808044

Seshadrinath, Jeevanand; Singh, Bhim; Panigrahi, Bijaya Ketan

2014-05-01

240

A wavelet-based damage detection algorithm based on bridge acceleration response to a vehicle

NASA Astrophysics Data System (ADS)

Previous research based on theoretical simulations has shown the potential of the wavelet transform to detect damage in a beam by analysing the time-deflection response due to a constant moving load. However, its application to identify damage from the response of a bridge to a vehicle raises a number of questions. Firstly, it may be difficult to record the difference in the deflection signal between a healthy and a slightly damaged structure to the required level of accuracy and high scanning frequencies in the field. Secondly, the bridge is going to have a road profile and it will be loaded by a sprung vehicle and time-varying forces rather than a constant load. Therefore, an algorithm based on a plot of wavelet coefficients versus time to detect damage (a singularity in the plot) appears to be very sensitive to noise. This paper addresses these questions by: (a) using the acceleration signal, instead of the deflection signal, (b) employing a vehicle-bridge finite element interaction model, and (c) developing a novel wavelet-based approach using wavelet energy content at each bridge section, which proves to be more sensitive to damage than a wavelet coefficient line plot at a given scale as employed by others.

Hester, D.; González, A.

2012-04-01
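As a rough illustration of the idea behind a wavelet energy indicator (a simplified 1-D sketch with toy signals, not the authors' vehicle-bridge interaction model; the wavelet choice and fine-scale range are assumptions for illustration), using PyWavelets:

```python
import numpy as np
import pywt

def wavelet_energy(signal, scales, wavelet="mexh"):
    """Total wavelet energy of a response signal: sum of |CWT coefficient|^2."""
    coefs, _ = pywt.cwt(signal, scales, wavelet)
    return float(np.sum(np.abs(coefs) ** 2))

# Toy responses: a smooth signal vs. one with a local singularity ("damage")
t = np.linspace(0.0, 1.0, 512)
healthy = np.sin(2 * np.pi * 5 * t)
damaged = healthy.copy()
damaged[250:260] += 0.5               # small local discontinuity

# Fine scales, where a local singularity stands out against the smooth response
scales = np.arange(1, 6)
e_h = wavelet_energy(healthy, scales)
e_d = wavelet_energy(damaged, scales)
```

A local discontinuity contributes extra fine-scale wavelet energy, which is the property a section-by-section energy indicator exploits.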

241

A wavelet-based image quality metric for the assessment of 3D synthesized views

NASA Astrophysics Data System (ADS)

In this paper we present a novel image quality assessment technique for evaluating virtual synthesized views in the context of multi-view video. In particular, Free Viewpoint Videos are generated from uncompressed color views and their compressed associated depth maps by means of the View Synthesis Reference Software, provided by MPEG. Prior to the synthesis step, the original depth maps are encoded with different coding algorithms, thus leading to the creation of additional artifacts in the synthesized views. The core of the proposed wavelet-based metric lies in the registration procedure used to align the synthesized view with the original one, and in the skin detection applied on the grounds that the same distortion is more annoying when visible on human subjects than on other parts of the scene. The effectiveness of the metric is evaluated by analyzing the correlation of the scores obtained with the proposed metric with Mean Opinion Scores collected by means of subjective tests. The achieved results are also compared against those of well-known objective quality metrics. The experimental results confirm the effectiveness of the proposed metric.

Bosc, Emilie; Battisti, Federica; Carli, Marco; Le Callet, Patrick

2013-03-01

242

NASA Astrophysics Data System (ADS)

In this paper, elastic wave propagation is studied in a nanocomposite reinforced with multiwall carbon nanotubes (CNTs). Analysis is performed on a representative volume element of square cross section. The frequency content of the exciting signal is at the terahertz level. Here, the composite is modeled as a higher order shear deformable beam using layerwise theory, to account for partial shear stress transfer between the CNTs and the matrix. The walls of the multiwall CNTs are considered to be connected throughout their length by distributed springs, whose stiffness is governed by the van der Waals force acting between the walls of nanotubes. The analyses in both the frequency and time domains are done using the wavelet-based spectral finite element method (WSFEM). The method uses the Daubechies wavelet basis approximation in time to reduce the governing PDE to a set of ODEs. These transformed ODEs are solved using a finite element (FE) technique by deriving an exact interpolating function in the transformed domain to obtain the exact dynamic stiffness matrix. Numerical analyses are performed to study the spectrum and dispersion relations for different matrix materials and also for different beam models. The effects of partial shear stress transfer between CNTs and matrix on the frequency response function (FRF) and the time response due to broadband impulse loading are investigated for different matrix materials. The simultaneous existence of four coupled propagating modes in a double-walled CNT-composite is also captured using modulated sinusoidal excitation.

Mitra, Mira; Gopalakrishnan, S.

2006-02-01

243

Application of wavelet-based neural network on DNA microarray data

The advantage of using DNA microarray data when investigating human cancer gene expressions is its ability to generate an enormous amount of information from a single assay, speeding up the scientific evaluation process. The large number of variables in gene expression data, coupled with a comparatively small number of samples, creates new challenges for scientists and statisticians. In particular, the problems include an enormous degree of collinearity among gene expressions, likely violation of model assumptions, and a high level of noise with potential outliers. To deal with these problems, we propose a block wavelet shrinkage principal component analysis (BWSPCA) method to optimize the information retained during the noise reduction process. This paper first uses the National Cancer Institute database (NCI60) as an illustration and shows a significant improvement in dimension reduction. Second, we combine BWSPCA with an artificial neural network-based gene minimization strategy to establish a Block Wavelet-based Neural Network model (BWNN) for a robust and accurate cancer classification process. Our extensive experiments on six public cancer datasets have shown that BWNN performed well for tumor classification, especially on some difficult instances with large-class (more than two) expression data. The proposed method is extremely useful for data denoising and is competitive with other methods such as BagBoost, RandomForest (RanFor), Support Vector Machines (SVM), K-Nearest Neighbor (KNN) and Artificial Neural Network (ANN). PMID:19255638

Lee, Jack; Zee, Benny

2008-01-01
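The wavelet shrinkage step at the heart of BWSPCA can be illustrated in simplified one-dimensional form (standard soft-thresholding with the universal threshold, not the authors' block-wise procedure; the signal and noise level below are made up):

```python
import numpy as np
import pywt

def wavelet_shrink(signal, wavelet="db4", level=3):
    """Soft-threshold detail coefficients (universal threshold) to suppress noise."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745      # noise estimate from finest scale
    thr = sigma * np.sqrt(2.0 * np.log(len(signal)))
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 256)
clean = np.sin(2 * np.pi * 4 * t)
noisy = clean + rng.normal(0.0, 0.3, t.size)
denoised = wavelet_shrink(noisy)[: t.size]
```

Thresholding the detail coefficients removes most of the noise while the smooth structure, carried by the approximation coefficients, survives.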

244

Wavelet-based double-difference seismic tomography with sparsity regularization

NASA Astrophysics Data System (ADS)

We have developed a wavelet-based double-difference (DD) seismic tomography method. Instead of solving for the velocity model itself, the new method inverts for its wavelet coefficients in the wavelet domain. This method takes advantage of the multiscale property of the wavelet representation and solves the model at different scales. A sparsity constraint is applied to the inversion system to make the set of wavelet coefficients of the velocity model sparse. This considers the fact that the background velocity variation is generally smooth and the inversion proceeds in a multiscale way with larger scale features resolved first and finer scale features resolved later, which naturally leads to the sparsity of the wavelet coefficients of the model. The method is both data- and model-adaptive because wavelet coefficients are non-zero in the regions where the model changes abruptly when they are well sampled by ray paths and the model is resolved from coarser to finer scales. An iteratively reweighted least squares procedure is adopted to solve the inversion system with the sparsity regularization. A synthetic test for an idealized fault zone model shows that the new method can better resolve the discontinuous boundaries of the fault zone and the velocity values are also better recovered compared to the original DD tomography method that uses the first-order Tikhonov regularization.

Fang, Hongjian; Zhang, Haijiang

2014-11-01
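A minimal sketch of the iteratively reweighted least squares idea for L1 (sparsity) regularization; for brevity the wavelet synthesis operator is assumed to be folded into the forward matrix `G`, so the unknowns are the coefficients themselves (matrix sizes, values, and parameter names are illustrative, not from the paper):

```python
import numpy as np

def irls_sparse_solve(G, d, lam=0.01, n_iter=20, eps=1e-6):
    """Approximate min ||G c - d||^2 + lam * ||c||_1 via IRLS:
    the L1 term is replaced by a weighted L2 term, reweighted each iteration."""
    c = np.zeros(G.shape[1])
    for _ in range(n_iter):
        w = 1.0 / (np.abs(c) + eps)        # large weight -> coefficient pushed toward 0
        c = np.linalg.solve(G.T @ G + lam * np.diag(w), G.T @ d)
    return c

rng = np.random.default_rng(0)
G = rng.normal(size=(50, 20))
c_true = np.zeros(20)
c_true[[3, 7, 15]] = [1.0, -2.0, 0.5]      # sparse "wavelet coefficients"
d = G @ c_true                              # noise-free synthetic data
c_hat = irls_sparse_solve(G, d)
```

Coefficients near zero acquire huge weights and are driven to zero, while well-constrained coefficients are barely penalized, which is the mechanism that favors sparse models.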

245

A new approach to pre-processing digital image for wavelet-based watermark

NASA Astrophysics Data System (ADS)

The growth of the Internet has increased the phenomenon of digital piracy of multimedia objects such as software, images, video, audio and text. It is therefore strategically important to identify and develop stable, low-computational-cost methods and numerical algorithms that address these problems. We describe a digital watermarking algorithm for color image protection and authenticity: robust, non-blind, and wavelet-based. The use of the Discrete Wavelet Transform is motivated by its good time-frequency features and good match with Human Visual System directives. These two combined elements are important for building an invisible and robust watermark. Moreover, our algorithm can work with any image, thanks to a pre-processing step that includes resize techniques adapting the original image to the size required by the wavelet transform. The watermark signal is calculated in correlation with the image features and statistical properties. In the detection step we apply a re-synchronization between the original and watermarked image according to the Neyman-Pearson statistical criterion. Experimentation on a large set of different images has shown the method to be resistant against geometric, filtering, and StirMark attacks, with a low false alarm rate.

Agreste, Santa; Andaloro, Guido

2008-11-01

246

A Branch and Bound Algorithm for Finding the Modes in Kernel Density Estimates

Kernel density estimators are established tools in non-parametric statistics. Due to their flexibility and ease of use, these methods are popular in computer vision and pattern recognition for tasks such as object tracking in video or image segmentation. The most frequently used algorithm for finding the modes in such densities (the mean shift) is a gradient ascent rule, which

Oliver Wirjadi; Thomas M. Breuel

2009-01-01
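For context, the mean shift rule mentioned in the abstract can be sketched in one dimension (Gaussian kernel; this is the baseline gradient-ascent method, not the paper's branch-and-bound algorithm, and the data below are a made-up bimodal sample):

```python
import numpy as np

def mean_shift_mode(x0, data, bandwidth=0.5, n_iter=200, tol=1e-8):
    """Mean shift: gradient ascent toward a mode of a Gaussian KDE (1-D)."""
    x = float(x0)
    for _ in range(n_iter):
        w = np.exp(-0.5 * ((data - x) / bandwidth) ** 2)   # kernel weights
        x_new = float(np.sum(w * data) / np.sum(w))        # weighted mean = shift step
        if abs(x_new - x) < tol:
            break
        x = x_new
    return x

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0.0, 0.3, 200), rng.normal(5.0, 0.3, 200)])
mode = mean_shift_mode(4.0, data)    # ascends to the mode of the nearby cluster
```

Each iteration moves the current point to the kernel-weighted mean of the data, so the iterate climbs the estimated density until it stalls at a mode; which mode is found depends on the starting point.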

247

The energy density of jellyfish: Estimates from bomb-calorimetry and proximate-composition

The energy density of three scyphozoan jellyfish (Cyanea capillata, Rhizostoma octopus and Chrysaora hysoscella) was estimated using two techniques: bomb-calorimetry and proximate-composition analysis, with the proximate data subsequently converted to energy densities and the two techniques compared.

Hays, Graeme

248

Estimating low-density snowshoe hare populations using fecal pellet counts

D. Roth, Ethan Ellsworth, Aaron J. Wirsing, and Todd D. Steury. Abstract: Snowshoe hare (Lepus americanus) populations found at high densities can be estimated using fecal pellet densities on rectangular

249

Autocorrelation-based estimate of particle image density for diffraction limited particle images

NASA Astrophysics Data System (ADS)

In particle image velocimetry (PIV), the number of particle images per interrogation region, or particle image density, impacts the strength of the correlation and, as a result, the number of valid vectors and the measurement uncertainty. For some uncertainty methods, an a priori estimate of the uncertainty of PIV requires knowledge of the particle image density. An autocorrelation-based method for estimating the local, instantaneous, particle image density is presented. The method assumes that the particle images are diffraction limited and thus Gaussian in shape. Synthetic images are used to develop an empirical relationship between the autocorrelation peak magnitude and the particle image density, particle image diameter, particle image intensity, and interrogation region size. This relationship is tested using experimental images. The experimental results are compared to particle image densities obtained through implementing a local maximum method and are found to be more robust. The effect of varying particle image intensities was also investigated and is found to affect the measurement of the particle image density. Knowledge of the particle image density in PIV facilitates uncertainty estimation, and can alert the user that particle image density is too low or too high, even if these conditions are intermittent. This information can be used as a new vector validation criterion for PIV processing. In addition, use of this method is not limited to PIV, but it can be used to determine the density of any image with diffraction limited particle images.

Warner, Scott O.; Smith, Barton L.

2014-06-01
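A toy sketch of the setup: synthetic diffraction-limited (Gaussian) particle images, and the zero-lag autocorrelation (total image energy), which grows with particle image density. The image size, particle diameter, and intensities here are arbitrary, and the published method relates the correlation peak to density through an empirical fit not reproduced here:

```python
import numpy as np

def synthetic_piv_image(centers, size=64, d_tau=3.0):
    """Sum of Gaussian (diffraction-limited) particle images on a square window."""
    y, x = np.mgrid[0:size, 0:size]
    img = np.zeros((size, size))
    for cx, cy in centers:
        img += np.exp(-8.0 * ((x - cx) ** 2 + (y - cy) ** 2) / d_tau ** 2)
    return img

def image_energy(img):
    """Zero-lag autocorrelation of the image (sum of squared intensities)."""
    return float(np.sum(img * img))

rng = np.random.default_rng(1)
centers = rng.uniform(0.0, 64.0, size=(40, 2))
low = synthetic_piv_image(centers[:10])   # 10 particle images
high = synthetic_piv_image(centers)       # same 10 plus 30 more
```

Because each particle adds a non-negative blob, the autocorrelation magnitude increases monotonically with the number of particle images, which is the signal the density estimator calibrates against.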

250

Frequency domain analysis by power spectral density (PSD) estimation has proven to be an effective method of investigation for studying the influence of the autonomic nervous system on the systemic and coronary hemodynamics. Since the problem is intrinsically multichannel, it should be studied with proper multichannel PSD estimates. Parametric autoregressive spectral methods were used, in particular the Nuttall-Strand

A. Macerata; M. Fusilli; F. Conforti; M. Niccolai; H. Emdin; M. G. Trivella; C. Marchesi

1993-01-01

251

Item Response Theory with Estimation of the Latent Density Using Davidian Curves

ERIC Educational Resources Information Center

Davidian-curve item response theory (DC-IRT) is introduced, evaluated with simulations, and illustrated using data from the Schedule for Nonadaptive and Adaptive Personality Entitlement scale. DC-IRT is a method for fitting unidimensional IRT models with maximum marginal likelihood estimation, in which the latent density is estimated,…

Woods, Carol M.; Lin, Nan

2009-01-01

252

Cluster Kernels: Resource-Aware Kernel Density Estimators over Streaming Data

A variety of real-world applications heavily relies on an adequate analysis of transient data streams. Due to the rigid processing requirements of data streams, common analysis techniques as known from data mining are not directly applicable. A fundamental building block of many data mining and analysis approaches is density estimation. It provides a well-defined estimation of a continuous data distribution,

Christoph Heinz; Bernhard Seeger

2008-01-01

253

Nonparametric maximum likelihood estimation of probability densities by penalty function methods

NASA Technical Reports Server (NTRS)

Unless it is known a priori exactly to which finite dimensional manifold the probability density function giving rise to a set of samples belongs, the parametric maximum likelihood estimation procedure leads to poor estimates and is unstable, while the nonparametric maximum likelihood procedure is undefined. A very general theory of maximum penalized likelihood estimation, which should avoid many of these difficulties, is presented. It is demonstrated that each reproducing kernel Hilbert space leads, in a very natural way, to a maximum penalized likelihood estimator and that a well-known class of reproducing kernel Hilbert spaces gives polynomial splines as the nonparametric maximum penalized likelihood estimates.

Demontricher, G. F.; Tapia, R. A.; Thompson, J. R.

1974-01-01

254

Reliable estimation of the column density in Smoothed Particle Hydrodynamic simulations

We describe a simple method for estimating the vertical column density in Smoothed Particle Hydrodynamics (SPH) simulations of discs. As in the method of Stamatellos et al. (2007), the column density is estimated using pre-computed local quantities and is then used to estimate the radiative cooling rate. The cooling rate is a quantity of considerable importance, for example, in assessing the probability of disc fragmentation. Our method has three steps: (i) the column density from the particle to the mid plane is estimated using the vertical component of the gravitational acceleration, (ii) the "total surface density" from the mid plane to the surface of the disc is calculated, (iii) the column density from each particle to the surface is calculated from the difference between (i) and (ii). This method is shown to greatly improve the accuracy of column density estimates in disc geometry compared with the method of Stamatellos. On the other hand, although the accuracy of our method is still acceptable in the c...

Young, Matthew D; Moeckel, Nick; Clarke, Cathie J

2012-01-01

255

Standardized methods of data collection and analysis ensure quality and facilitate comparisons among systems. We evaluated the importance of three recommendations from the Standard Operating Procedure for hydroacoustics in the Laurentian Great Lakes (GLSOP) on density estimates of target species: noise subtraction; setting volume backscattering strength (Sv) thresholds from user-defined minimum target strength (TS) of interest (TS-based Sv threshold); and calculations of an index for multiple targets (Nv index) to identify and remove biased TS values. Eliminating noise had the predictable effect of decreasing density estimates in most lakes. Using the TS-based Sv threshold decreased fish densities in the middle and lower layers in the deepest lakes with abundant invertebrates (e.g., Mysis diluviana). Correcting for biased in situ TS increased measured density up to 86% in the shallower lakes, which had the highest fish densities. The current recommendations by the GLSOP significantly influence acoustic density estimates, but the degree of importance is lake dependent. Applying GLSOP recommendations, whether in the Laurentian Great Lakes or elsewhere, will improve our ability to compare results among lakes. We recommend further development of standards, including minimum TS and analytical cell size, for reducing the effect of biased in situ TS on density estimates.

Kocovsky, Patrick M.; Rudstam, Lars G.; Yule, Daniel L.; Warner, David M.; Schaner, Ted; Pientka, Bernie; Deller, John W.; Waterfield, Holly A.; Witzel, Larry D.; Sullivan, Patrick J.

2013-01-01

256

Cetacean population density estimation from single fixed sensors using passive acoustics.

Passive acoustic methods are increasingly being used to estimate animal population density. Most density estimation methods are based on estimates of the probability of detecting calls as functions of distance. Typically these are obtained using receivers capable of localizing calls or from studies of tagged animals. However, both approaches are expensive to implement. The approach described here uses a Monte Carlo model to estimate the probability of detecting calls from single sensors. The passive sonar equation is used to predict signal-to-noise ratios (SNRs) of received clicks, which are then combined with a detector characterization that predicts probability of detection as a function of SNR. Input distributions for source level, beam pattern, and whale depth are obtained from the literature. Acoustic propagation modeling is used to estimate transmission loss. Other inputs for density estimation are call rate, obtained from the literature, and false positive rate, obtained from manual analysis of a data sample. The method is applied to estimate density of Blainville's beaked whales over a 6-day period around a single hydrophone located in the Tongue of the Ocean, Bahamas. Results are consistent with those from previous analyses, which use additional tag data. PMID:21682386

Küsel, Elizabeth T; Mellinger, David K; Thomas, Len; Marques, Tiago A; Moretti, David; Ward, Jessica

2011-06-01
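The chain "passive sonar equation, then SNR distribution, then detector characterization, then detection probability" can be sketched with a toy Monte Carlo. Every number below (source-level spread, noise level, spherical spreading plus a nominal absorption term, and a logistic detector curve) is an assumption for illustration, not a value from the paper:

```python
import numpy as np

rng = np.random.default_rng(42)

def detection_probability(r_m, n=10000):
    """Monte Carlo mean detection probability for clicks emitted at range r_m (meters)."""
    sl = rng.normal(200.0, 5.0, n)                 # source level, dB (assumed spread)
    nl = 70.0                                      # noise level, dB (assumed)
    # Transmission loss: spherical spreading plus nominal absorption (assumed 10 dB/km)
    tl = 20.0 * np.log10(r_m) + 0.01 * r_m
    snr = sl - tl - nl                             # passive sonar equation
    p_det = 1.0 / (1.0 + np.exp(-(snr - 10.0)))    # assumed logistic detector curve
    return float(np.mean(p_det))
```

Averaging the detector curve over the simulated SNR distribution at each range yields the range-dependent detection probability that feeds the density estimate; nearby clicks are almost always detected while distant ones rarely are.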

257

Estimation of mechanical properties of panels based on modal density and mean mobility measurements

NASA Astrophysics Data System (ADS)

The mechanical characteristics of wood panels used by instrument makers are related to numerous factors, including the nature of the wood or characteristic of the wood sample (direction of fibers, micro-structure nature). This leads to variations in Young's modulus, the mass density, and the damping coefficients. Existing methods for estimating these parameters are not suitable for instrument makers, mainly because of the need of expensive experimental setups, or complicated protocols, which are not adapted to a daily practice in a workshop. In this paper, a method for estimating Young's modulus, the mass density, and the modal loss factors of flat panels, requiring a few measurement points and an affordable experimental setup, is presented. It is based on the estimation of two characteristic quantities: the modal density and the mean mobility. The modal density is computed from the values of the modal frequencies estimated by the subspace method ESPRIT (Estimation of Signal Parameters via Rotational Invariance Techniques), associated with the signal enumeration technique ESTER (ESTimation of ERror). This modal identification technique is proved to be robust in the low- and the mid-frequency domains, i.e. when the modal overlap factor does not exceed 1. The estimation of the modal parameters also enables the computation of the modal loss factor in the low- and the mid-frequency domains. An experimental fit with the theoretical expressions for the modal density and the mean mobility enables an accurate estimation of Young's modulus and the mass density of flat panels. A numerical and an experimental study show that the method is robust, and that it requires solely a few measurement points.

Elie, Benjamin; Gautier, François; David, Bertrand

2013-11-01

258

Janssen created a classical theory based on calculus to estimate static vertical and horizontal pressures within beds of bulk corn. Even today, his equations are widely used to calculate static loadings imposed by granular materials stored in bins. Many standards, such as American Concrete Institute (ACI) 313, American Society of Agricultural and Biological Engineers EP 433, German DIN 1055, Canadian Farm Building Code (CFBC), European Code (ENV 1991-4), and Australian Code AS 3774, incorporate Janssen's equations as the standard for static load calculations on bins. One of the main drawbacks of Janssen's equations is the assumption that the bulk density of the stored product remains constant throughout the entire bin. While this is true for all practical purposes in small bins, in modern commercial-size bins the bulk density of grains increases substantially due to compressive and hoop stresses. Overpressure factors are applied to Janssen loadings to accommodate practical situations such as dynamic loads due to bin filling and emptying, but there are limited theoretical methods available that include the effects of increased bulk density on the grain loadings transmitted to the storage structures. This article develops a mathematical equation relating the specific weight to location and other variables of the material and storage. It was found that the bulk density of stored granular materials increases with depth according to a mathematical equation relating the two variables; applying this bulk-density function, Janssen's equations for vertical and horizontal pressures were modified as presented in this article. The validity of this specific weight function was tested using the principles of mathematics. As expected, loads calculated with the modified equations were consistently higher than the Janssen loadings based on noncompacted bulk densities for all grain depths and types, accounting for the effects of increased bulk densities with bed height. PMID:24804024

Haque, Ekramul

2013-01-01
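For reference, the classical Janssen vertical pressure with constant bulk density, whose limitation the article addresses, has a standard closed form, sketched here (the material values are illustrative, not from the article):

```python
import math

def janssen_vertical_pressure(z, gamma, R, mu, K):
    """Janssen vertical pressure at depth z (m) in a bin, constant bulk density:
    sigma_v(z) = (gamma * R / (mu * K)) * (1 - exp(-mu * K * z / R))
    gamma: bulk specific weight (N/m^3), R: hydraulic radius (m),
    mu: wall friction coefficient, K: lateral-to-vertical pressure ratio."""
    return (gamma * R / (mu * K)) * (1.0 - math.exp(-mu * K * z / R))

# Wheat-like values (assumed for illustration only)
p10 = janssen_vertical_pressure(10.0, gamma=8000.0, R=1.5, mu=0.4, K=0.5)
```

The pressure rises monotonically with depth toward the asymptote gamma*R/(mu*K), because wall friction carries an increasing share of the grain weight; the article's modification replaces the constant gamma with a depth-dependent bulk density, which raises the computed loads.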

259

An Undecimated Wavelet-based Method for Cochlear Implant Speech Processing

A cochlear implant is an implanted electronic device used to provide a sensation of hearing to a person who is hard of hearing; it is often referred to as a bionic ear. This paper presents a novel undecimated wavelet-based speech coding strategy for cochlear implants. The undecimated wavelet packet transform (UWPT) is computed like the wavelet packet transform except that it does not down-sample the output at each level. The speech data used for the current study consist of 30 consonants, sampled at 16 kbps. The performance of the proposed UWPT method was compared to that of an infinite impulse response (IIR) filter-bank in terms of mean opinion score (MOS), the short-time objective intelligibility (STOI) measure, and segmental signal-to-noise ratio (SNR). The undecimated wavelet gave better segmental SNR on about 96% of the input speech data, and the MOS of the proposed method was twice that of the IIR filter-bank. Statistical analysis revealed that the UWPT-based N-of-M strategy significantly improved the MOS, STOI and segmental SNR (P < 0.001) compared with what was obtained with the IIR filter-bank based strategies. The advantage of the UWPT is that it is shift-invariant, giving a dense approximation to the continuous wavelet transform; the information loss is therefore minimal, which is why the UWPT performed better than traditional filter-bank strategies in speech recognition tests. Results showed that the UWPT could be a promising method for speech coding in cochlear implants, although its computational complexity is higher than that of traditional filter-banks.

Hajiaghababa, Fatemeh; Kermani, Saeed; Marateb, Hamid R.

2014-01-01
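The defining property of the UWPT, no down-sampling between levels, can be seen with the stationary (undecimated) wavelet transform in PyWavelets; this is a plain SWT on a toy tone, not the authors' N-of-M coding strategy, and the wavelet and level are arbitrary choices:

```python
import numpy as np
import pywt

# Undecimated wavelet transform of a toy tone: unlike the decimated DWT,
# every level keeps the full signal length, which is what makes it
# (nearly) shift-invariant. Signal length must be divisible by 2**level.
fs = 16000
x = np.sin(2 * np.pi * 440 * np.arange(512) / fs)
coeffs = pywt.swt(x, "db4", level=3)   # list of (cA, cD) pairs, coarsest first
```

Every approximation and detail array has the same length as the input, so features stay aligned with the signal regardless of where they occur, unlike the decimated transform, where a small input shift can redistribute the coefficients.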

260

Background: Copy number aberrations (CNAs) are an important molecular signature in cancer initiation, development, and progression. However, these aberrations span a wide range of chromosomes, making it hard to distinguish cancer related genes from other genes that are not closely related to cancer but are located in broadly aberrant regions. With the current availability of high-resolution data sets such as single nucleotide polymorphism (SNP) microarrays, it has become an important issue to develop a computational method to detect driving genes related to cancer development located in the focal regions of CNAs. Results: In this study, we introduce a novel method referred to as wavelet-based identification of focal genomic aberrations (WIFA). The use of wavelet analysis, because it is a multi-resolution approach, makes it possible to effectively identify focal genomic aberrations in broadly aberrant regions. The proposed method integrates multiple cancer samples so that it enables the detection of consistent aberrations across multiple samples. We then apply this method to glioblastoma multiforme and lung cancer data sets from the SNP microarray platform. Through this process, we confirm the ability to detect previously known cancer related genes from both cancer types with high accuracy. The application of this approach to a lung cancer data set also identifies focal amplification regions that contain the known oncogenes SMAD7 (chr18q21.1) and FGF10 (chr5p12), though these regions are not reported by GISTIC, a recent CNA-detection algorithm. Conclusions: Our results suggest that WIFA can be used to reveal cancer related genes in various cancer data sets. PMID:21569311

2011-01-01

261

A wavelet-based neural model to optimize and read out a temporal population code

It has been proposed that the dense excitatory local connectivity of the neocortex plays a specific role in the transformation of spatial stimulus information into a temporal representation or a temporal population code (TPC). TPC provides for a rapid, robust, and high-capacity encoding of salient stimulus features with respect to position, rotation, and distortion. The TPC hypothesis gives a functional interpretation to a core feature of the cortical anatomy: its dense local and sparse long-range connectivity. Thus far, the question of how the TPC encoding can be decoded in downstream areas has not been addressed. Here, we present a neural circuit that decodes the spectral properties of the TPC using a biologically plausible implementation of a Haar transform. We perform a systematic investigation of our model in a recognition task using a standardized stimulus set. We consider alternative implementations using either regular spiking or bursting neurons and a range of spectral bands. Our results show that our wavelet readout circuit provides for the robust decoding of the TPC and further compresses the code without losing speed or quality of decoding. We show that in the TPC signal the relevant stimulus information is present in the frequencies around 100 Hz. Our results show that the TPC is constructed around a small number of coding components that can be well decoded by wavelet coefficients in a neuronal implementation. The solution to the TPC decoding problem proposed here suggests that cortical processing streams might well consist of sequential operations, where spatio-temporal transformations at lower levels form a compact stimulus encoding using TPC that is subsequently decoded back to a spatial representation using wavelet transforms. In addition, the results presented here show that different properties of the stimulus might be transmitted to further processing stages using different frequency components that are captured by appropriately tuned wavelet-based decoders. PMID:22563314

Luvizotto, Andre; Renno-Costa, Cesar; Verschure, Paul F. M. J.

2012-01-01
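One level of the Haar transform that the readout circuit implements can be sketched directly (orthonormal normalization assumed; this is the textbook transform, not the spiking-neuron implementation):

```python
import numpy as np

def haar_step(x):
    """One level of the orthonormal Haar transform: pairwise averages and details."""
    x = np.asarray(x, dtype=float)
    s = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # approximation (low-pass)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail (high-pass)
    return s, d

x = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 8.0, 0.0, 2.0])
s, d = haar_step(x)
```

Because the transform is orthonormal, signal energy is split exactly between the approximation and detail bands, which is what lets a downstream circuit read selected spectral components without losing information.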

262

Wavelet-based features for characterizing ventricular arrhythmias in optimizing treatment options.

Ventricular arrhythmias arise from abnormal electrical activity of the lower chambers (ventricles) of the heart. Ventricular tachycardia (VT) and ventricular fibrillation (VF) are the two major subclasses of ventricular arrhythmias. While VT has treatment options that can be performed in catheterization labs, VF is a lethal cardiac arrhythmia; often, when VF is detected, the patient receives an implantable defibrillator, which restores the normal heart rhythm by applying electric shocks whenever VF occurs. The classification of these two subclasses is important in deciding which therapy to perform. As with all real-world processes, the boundary between VT and VF is ill-defined, which might lead many patients experiencing arrhythmias in the overlap zone (which might be predominantly VT) to receive shocks from an implantable defibrillator. There may also be a small population of patients who could be treated with anti-arrhythmic drugs or a catheterization procedure if they can be diagnosed as suffering from predominantly VT after objective analysis of their intracardiac electrogram data obtained from the implantable defibrillator. The proposed work attempts to arrive at a quantifiable way to scale ventricular arrhythmias into VT, VF, and the overlap-zone arrhythmias as VT-VF candidates, using features extracted from the wavelet analysis of surface electrograms. This might eventually lead to an objective way of analyzing arrhythmias in the overlap zone and computing their degree of affinity towards VT or VF. A database of 24 human ventricular arrhythmia tracings obtained from the MIT-BIH arrhythmia database was analyzed, and wavelet-based features that demonstrated discrimination between the VT, VF, and VT-VF groups were extracted. An overall accuracy of 75% in classifying the ventricular arrhythmias into the 3 groups was achieved. PMID:22254473

Balasundaram, K; Masse, S; Nair, K; Farid, T; Nanthakumar, K; Umapathy, K

2011-01-01

263

Effect of compression paddle tilt correction on volumetric breast density estimation

NASA Astrophysics Data System (ADS)

For the acquisition of a mammogram, a breast is compressed between a compression paddle and a support table. When compression is applied with a flexible compression paddle, the upper plate may be tilted, which results in variation in breast thickness from the chest wall to the breast margin. Paddle tilt has been recognized as a major problem in volumetric breast density estimation methods. In previous work, we developed a fully automatic method to correct the image for the effect of compression paddle tilt. In this study, we investigated in three experiments the effect of paddle tilt and its correction on volumetric breast density estimation. Results showed that paddle tilt considerably affected accuracy of volumetric breast density estimation, but that effect could be reduced by tilt correction. By applying tilt correction, a significant increase in correspondence between mammographic density estimates and measurements on MRI was established. We argue that in volumetric breast density estimation, tilt correction is both feasible and essential when mammographic images are acquired with a flexible compression paddle.

Kallenberg, Michiel G. J.; van Gils, Carla H.; Lokate, Mariëtte; den Heeten, Gerard J.; Karssemeijer, Nico

2012-08-01

264

We used electrophysiological signals recorded by CMOS Micro Electrode Arrays (MEAs) at high spatial resolution to estimate the functional-effective connectivity of sparse hippocampal neuronal networks in vitro by applying a cross-correlation (CC) based method and ad hoc developed spatio-temporal filtering. Low-density cultures were recorded by a recently introduced CMOS-MEA device providing simultaneous multi-site acquisition at high spatial (21 μm inter-electrode separation) as well as high temporal resolution (8 kHz per channel). The method is applied to estimate functional connections in different cultures and is refined by applying spatio-temporal filters that allow pruning of those functional connections not compatible with signal propagation. This approach makes it possible to discriminate between possible causal influence and spurious co-activation, and to obtain detailed maps down to cellular resolution. Further, a thorough analysis of the link strengths and time delays (i.e., amplitude and peak position of the CC function) allows characterizing the inferred interconnected networks and supports a possible discrimination of fast mono-synaptic propagations and slow poly-synaptic pathways. By focusing on specific regions of interest we could observe and analyze microcircuits involving connections among a few cells. Finally, the use of the high-density MEA with low-density cultures analyzed with the proposed approach enables comparison of the inferred effective links with the network structure obtained by staining procedures. PMID:22516778
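
A minimal sketch of the cross-correlation step on two regularly sampled traces: the peak CC value gives the link strength, and the lag at which it occurs gives the propagation delay. The normalization, lag window, and sign convention here are our own choices, not taken from the paper:

```python
import numpy as np

def cc_link(x, y, fs, max_lag_s=0.05):
    """Peak of the normalized cross-correlation between two traces.

    Returns (strength, delay_s): the peak CC value within +/- max_lag_s
    and the lag (in seconds) at which it occurs. A positive delay means
    y lags x, suggesting x -> y propagation.
    """
    x = (x - x.mean()) / (x.std() * len(x))     # Pearson-style scaling
    y = (y - y.mean()) / y.std()
    cc = np.correlate(x, y, mode="full")        # lags -(N-1) .. (N-1)
    lags = np.arange(-len(y) + 1, len(x))
    keep = np.abs(lags) <= int(max_lag_s * fs)  # plausible-delay window
    i = np.argmax(cc[keep])
    return cc[keep][i], -lags[keep][i] / fs
```

The lag window plays the role of the paper's spatio-temporal filter in crude form: peaks at lags incompatible with signal propagation are simply never considered.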

Maccione, Alessandro; Garofalo, Matteo; Nieus, Thierry; Tedesco, Mariateresa; Berdondini, Luca; Martinoia, Sergio

2012-06-15

265

Estimating detection and density of the Andean cat in the high Andes

The Andean cat (Leopardus jacobita) is one of the most endangered, yet least known, felids. Although the Andean cat is considered at risk of extinction, rigorous quantitative population studies are lacking. Because physical observations of the Andean cat are difficult to make in the wild, we used a camera-trapping array to photo-capture individuals. The survey was conducted in northwestern Argentina at an elevation of approximately 4,200 m during October-December 2006 and April-June 2007. In each year we deployed 22 pairs of camera traps, which were strategically placed. To estimate detection probability and density we applied models for spatial capture-recapture using a Bayesian framework. Estimated densities were 0.07 and 0.12 individual/km2 for 2006 and 2007, respectively. Mean baseline detection probability was estimated at 0.07. By comparison, densities of the Pampas cat (Leopardus colocolo), another poorly known felid that shares its habitat with the Andean cat, were estimated at 0.74-0.79 individual/km2 in the same study area for 2006 and 2007, and its detection probability was estimated at 0.02. Despite having greater detectability, the Andean cat is rarer in the study region than the Pampas cat. Properly accounting for the detection probability is important in making reliable estimates of density, a key parameter in conservation and management decisions for any species. © 2011 American Society of Mammalogists.

Reppucci, J.; Gardner, B.; Lucherini, M.

2011-01-01

266

Measures of association for bivariate interval censored data have not yet been studied extensively. Betensky and Finkelstein (Statist. Med. 1999; 18:3101-3109) proposed to calculate Kendall's coefficient of concordance using a multiple imputation technique, but their method becomes computer intensive for moderate to large data sets. We suggest a different approach consisting of two steps. Firstly, a bivariate smooth estimate of the density of log-event times is determined. The smoothing technique is based on a mixture of Gaussian densities fixed on a grid with weights determined by a penalized likelihood approach. Secondly, given the smooth approximation several local and global measures of association can be estimated readily. The performance of our method is illustrated by an extensive simulation study and is applied to tooth emergence data of permanent teeth measured on 4468 children from the Signal-Tandmobiel study. PMID:18623606

Bogaerts, Kris; Lesaffre, Emmanuel

2008-12-10

267

Wavelet-based SAR image despeckling using joint hidden Markov model

NASA Astrophysics Data System (ADS)

In the past few years, wavelet-domain hidden Markov models have proven to be useful tools for statistical signal and image processing. The hidden Markov tree (HMT) model captures the key features of the joint probability density of the wavelet coefficients of real-world data. One potential drawback of the HMT framework is that it does not take account of the intrascale correlations that exist among neighboring wavelet coefficients. In this paper, we propose to develop a joint hidden Markov model by fusing the wavelet Bayesian denoising technique with an image regularization procedure based on the HMT and a Markov random field (MRF). The Expectation-Maximization algorithm is used to estimate hyperparameters and specify the mixture model. The noise-free wavelet coefficients are finally estimated by a shrinkage function based on local weighted averaging of the Bayesian estimator. It is shown that the joint method outperforms the Lee filter and standard HMT techniques in terms of the integrative measure of the equivalent number of looks (ENL) and Pratt's figure of merit (FOM), especially when dealing with speckle noise of large variance.
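
As a much-simplified stand-in for the HMT/MRF shrinkage described above, a plain soft-threshold estimator in an orthogonal Haar basis illustrates the general wavelet-denoising recipe; the threshold rule (a multiple of a robust noise estimate) is our assumption, not the paper's method:

```python
import numpy as np

def soft(w, t):
    """Soft-threshold shrinkage: shrink coefficients toward zero by t."""
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def denoise_haar(signal, levels=3, k=3.0):
    """Shrinkage denoising in an orthogonal (Haar) wavelet basis.

    The noise level is estimated robustly from the finest-scale detail
    coefficients (median absolute deviation / 0.6745), and every detail
    coefficient is soft-thresholded at k * sigma before reconstruction.
    """
    coeffs = []
    a = np.asarray(signal, float)
    for _ in range(levels):                       # forward transform
        pairs = a[: len(a) // 2 * 2].reshape(-1, 2)
        coeffs.append((pairs[:, 0] - pairs[:, 1]) / np.sqrt(2))
        a = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2)
    sigma = np.median(np.abs(coeffs[0])) / 0.6745
    coeffs = [soft(d, k * sigma) for d in coeffs]
    for d in reversed(coeffs):                    # inverse transform
        out = np.empty(2 * len(d))
        out[0::2] = (a + d) / np.sqrt(2)
        out[1::2] = (a - d) / np.sqrt(2)
        a = out
    return a
```

The HMT/MRF estimator of the paper replaces this context-free threshold with a spatially adaptive, model-based shrinkage, which is what improves ENL and FOM on speckled SAR images.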

Li, Qiaoliang; Wang, Guoyou; Liu, Jianguo; Chen, Shaobo

2007-11-01

268

REVISION OF RELATIVE DENSITY AND ESTIMATION OF LIQUEFACTION STRENGTH OF SANDY SOIL WITH FINE CONTENT

NASA Astrophysics Data System (ADS)

It is generally known that the liquefaction strength obtained from undrained cyclic triaxial tests is influenced by various factors such as relative density, fine content, grain size distribution and plasticity index. However, it is difficult to estimate the liquefaction strength of various soil types from the same physical properties. In order to estimate the liquefaction strength of soil types such as silt, silty sand and clean sand, this study presents a method to revise the relative density of sandy soil containing more than 15% fines, together with the correlation between the revised relative density and the void ratio range obtained from the maximum and minimum void ratios. The relationships between void ratio ranges and liquefaction strengths from other studies were then considered. As a result, the difference in liquefaction strength between reconstituted and undisturbed samples was recognized from the correlations of revised relative density using void ratio ranges and fine content.

Nakazawa, Hiroshi; Haradah, Kenji

269

Trap Array Configuration Influences Estimates and Precision of Black Bear Density and Abundance

Spatial capture-recapture (SCR) models have advanced our ability to estimate population density for wide ranging animals by explicitly incorporating individual movement. Though these models are more robust to various spatial sampling designs, few studies have empirically tested different large-scale trap configurations using SCR models. We investigated how extent of trap coverage and trap spacing affects precision and accuracy of SCR parameters, implementing models using the R package secr. We tested two trapping scenarios, one spatially extensive and one intensive, using black bear (Ursus americanus) DNA data from hair snare arrays in south-central Missouri, USA. We also examined the influence that adding a second, lower barbed-wire strand to snares had on quantity and spatial distribution of detections. We simulated trapping data to test bias in density estimates of each configuration under a range of density and detection parameter values. Field data showed that using multiple arrays with intensive snare coverage produced more detections of more individuals than extensive coverage. Consequently, density and detection parameters were more precise for the intensive design. Density was estimated as 1.7 bears per 100 km2 and was 5.5 times greater than that under extensive sampling. Abundance was 279 (95% CI = 193–406) bears in the 16,812 km2 study area. Excluding detections from the lower strand resulted in the loss of 35 detections, 14 unique bears, and the largest recorded movement between snares. All simulations showed low bias for density under both configurations. Results demonstrated that in low density populations with non-uniform distribution of population density, optimizing the tradeoff among snare spacing, coverage, and sample size is of critical importance to estimating parameters with high precision and accuracy. With limited resources, allocating available traps to multiple arrays with intensive trap spacing increased the amount of information needed to inform parameters with high precision. PMID:25350557

Wilton, Clay M.; Puckett, Emily E.; Beringer, Jeff; Gardner, Beth; Eggert, Lori S.; Belant, Jerrold L.

2014-01-01

271

The accurate quantitation of high density lipoproteins has recently assumed greater importance in view of studies suggesting their negative correlation with coronary heart disease. High density lipoproteins may be estimated by measuring cholesterol in the plasma fraction of d > 1.063 g/ml. A more practical approach is the specific precipitation of apolipoprotein B (apoB)-containing lipoproteins by sulfated

G. Russell Warnick; John J. Albers

272

Propithecus coquereli is one of the last sifaka species for which no reliable and extensive density estimates are yet available. Despite its endangered conservation status [IUCN, 2012] and recognition as a flagship species of the northwestern dry forests of Madagascar, its population in its last main refugium, the Ankarafantsika National Park (ANP), is still poorly known. Using line transect distance sampling surveys we estimated population density and abundance in the ANP. Furthermore, we investigated the effects of road, forest edge, river proximity and group size on sighting frequencies, and density estimates. We provide here the first population density estimates throughout the ANP. We found that density varied greatly among surveyed sites (from 5 to ~100 ind/km2) which could result from significant (negative) effects of road, and forest edge, and/or a (positive) effect of river proximity. Our results also suggest that the population size may be ~47,000 individuals in the ANP, hinting that the population likely underwent a strong decline in some parts of the Park in recent decades, possibly caused by habitat loss from fires and charcoal production and by poaching. We suggest community-based conservation actions for the largest remaining population of Coquerel's sifaka which will (i) maintain forest connectivity; (ii) implement alternatives to deforestation through charcoal production, logging, and grass fires; (iii) reduce poaching; and (iv) enable long-term monitoring of the population in collaboration with local authorities and researchers. PMID:24443250

Kun-Rodrigues, Célia; Salmona, Jordi; Besolo, Aubin; Rasolondraibe, Emmanuel; Rabarivola, Clément; Marques, Tiago A; Chikhi, Lounès

2014-06-01

273

Distributed Noise Generation for Density Estimation Based Clustering without Trusted Third Party

NASA Astrophysics Data System (ADS)

The rapid growth of the Internet provides people with tremendous opportunities for data collection, knowledge discovery and cooperative computation. However, it also brings the problem of sensitive information leakage. Both individuals and enterprises may suffer from the massive data collection and the information retrieval by distrusted parties. In this paper, we propose a privacy-preserving protocol for distributed kernel density estimation-based clustering. Our scheme applies the random data perturbation (RDP) technique and verifiable secret sharing to solve the security problem of the distributed kernel density estimation in [4], which assumed a mediating party to help in the computation.
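
The additive random-data-perturbation idea can be sketched as follows; the verifiable secret sharing is omitted, and the noise level and kernel bandwidth are illustrative values, not taken from the protocol:

```python
import numpy as np

def perturb(values, noise_std, rng):
    """Random data perturbation: each party masks its raw values with
    zero-mean noise before sharing them with the other parties."""
    return values + rng.normal(0.0, noise_std, size=values.shape)

def kde(grid, samples, bandwidth):
    """Gaussian kernel density estimate evaluated on a grid."""
    diffs = (grid[:, None] - samples[None, :]) / bandwidth
    return np.exp(-0.5 * diffs ** 2).sum(axis=1) / (
        samples.size * bandwidth * np.sqrt(2 * np.pi))
```

Because the perturbation noise is zero-mean, the density estimated from the pooled, masked samples still recovers the cluster structure (its modes), while no party reveals its raw records.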

Su, Chunhua; Bao, Feng; Zhou, Jianying; Takagi, Tsuyoshi; Sakurai, Kouichi

274

A Statistical Analysis for Estimating Fish Number Density with the Use of a Multibeam Echosounder

NASA Astrophysics Data System (ADS)

Fish number density can be estimated from the normalized second moment of acoustic backscatter intensity [Denbigh et al., J. Acoust. Soc. Am. 90, 457-469 (1991)]. This method assumes that the distribution of fish scattering amplitudes is known and that the fish are randomly distributed following a Poisson volume distribution within regions of constant density. It is most useful at low fish densities, relative to the resolution of the acoustic device being used, since the estimators quickly become noisy as the number of fish per resolution cell increases. New models that include noise contributions are considered. The methods were applied to an acoustic assessment of juvenile Atlantic Bluefin Tuna, Thunnus thynnus. The data were collected using a 400 kHz multibeam echo sounder during the summer months of 2009 in Cape Cod, MA. Due to the high resolution of the multibeam system used, the large size (approx. 1.5 m) of the tuna, and the spacing of the fish in the school, we expect there to be low fish densities relative to the resolution of the multibeam system. Results of the fish number density based on the normalized second moment of acoustic intensity are compared to fish packing density estimated using aerial imagery that was collected simultaneously.
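
For the simplest case underlying the method above, constant-amplitude scatterers whose count per resolution cell is Poisson, the normalized second moment gives a closed-form density estimator; the amplitude model and the cell simulation below are our simplifying assumptions, not the paper's full model (which also includes noise contributions):

```python
import numpy as np

def estimate_density(intensities):
    """Estimate the mean number of scatterers per resolution cell.

    For Poisson-distributed, constant-amplitude scatterers with random
    phases, the normalized second moment of intensity is
    m2 = <I^2>/<I>^2 = 2 + 1/lambda, hence lambda = 1/(m2 - 2).
    """
    m2 = np.mean(intensities ** 2) / np.mean(intensities) ** 2
    return 1.0 / (m2 - 2.0)

def simulate_cells(lam, n_cells, rng):
    """Echo intensity per cell: squared magnitude of a coherent sum of
    unit-amplitude phasors with uniform random phases."""
    out = np.empty(n_cells)
    counts = rng.poisson(lam, n_cells)
    for i, n in enumerate(counts):
        z = np.exp(1j * rng.uniform(0, 2 * np.pi, n)).sum()
        out[i] = np.abs(z) ** 2
    return out
```

This also shows why the estimator degrades at high density: as lambda grows, m2 approaches the fully developed speckle value of 2, and 1/(m2 - 2) becomes extremely sensitive to sampling noise.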

Schroth-Miller, Madeline L.

275

A hierarchical model for estimating density in camera-trap studies

1. Estimating animal density using capture-recapture data from arrays of detection devices such as camera traps has been problematic due to the movement of individuals and heterogeneity in capture probability among them induced by differential exposure to trapping. 2. We develop a spatial capture-recapture model for estimating density from camera-trapping data which contains explicit models for the spatial point process governing the distribution of individuals and their exposure to and detection by traps. 3. We adopt a Bayesian approach to analysis of the hierarchical model using the technique of data augmentation. 4. The model is applied to photographic capture-recapture data on tigers Panthera tigris in Nagarahole reserve, India. Using this model, we estimate the density of tigers to be 14.3 animals per 100 km2 during 2004. 5. Synthesis and applications. Our modelling framework largely overcomes several weaknesses in conventional approaches to the estimation of animal density from trap arrays. It effectively deals with key problems such as individual heterogeneity in capture probabilities, movement of traps, presence of potential 'holes' in the array and ad hoc estimation of sample area. The formulation, thus, greatly enhances flexibility in the conduct of field surveys as well as in the analysis of data, from studies that may involve physical, photographic or DNA-based 'captures' of individual animals.
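
The spatial detection component shared by these SCR models can be sketched with a half-normal detection function, in which detection probability declines with the distance between an individual's activity center and a trap; the parameter values and the simulation below are illustrative, not from the study:

```python
import numpy as np

def detection_prob(centers, traps, p0, sigma):
    """Half-normal detection: probability that trap j detects
    individual i, as a function of center-to-trap distance.
    p0 is the baseline probability at distance zero, sigma the
    spatial scale of individual movement."""
    d2 = ((centers[:, None, :] - traps[None, :, :]) ** 2).sum(-1)
    return p0 * np.exp(-d2 / (2 * sigma ** 2))

def simulate_captures(centers, traps, p0, sigma, occasions, rng):
    """Binary capture histories (individual x trap x occasion)."""
    p = detection_prob(centers, traps, p0, sigma)
    return rng.random((len(centers), len(traps), occasions)) < p[..., None]
```

In the full hierarchical model, the activity centers themselves are latent and follow a spatial point process; density is estimated by integrating over their unknown locations, which is what resolves the ad hoc "effective sample area" problem.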

Royle, J.A.; Nichols, J.D.; Karanth, K. U.; Gopalaswamy, A.M.

2009-01-01

276

Hierarchical models for estimating density from DNA mark-recapture studies.

Genetic sampling is increasingly used as a tool by wildlife biologists and managers to estimate abundance and density of species. Typically, DNA is used to identify individuals captured in an array of traps (e.g., baited hair snares) from which individual encounter histories are derived. Standard methods for estimating the size of a closed population can be applied to such data. However, due to the movement of individuals on and off the trapping array during sampling, the area over which individuals are exposed to trapping is unknown, and so obtaining unbiased estimates of density has proved difficult. We propose a hierarchical spatial capture-recapture model which contains explicit models for the spatial point process governing the distribution of individuals and their exposure to (via movement) and detection by traps. Detection probability is modeled as a function of each individual's distance to the trap. We applied this model to a black bear (Ursus americanus) study conducted in 2006 using a hair-snare trap array in the Adirondack region of New York, USA. We estimated the density of bears to be 0.159 bears/km2, which is lower than the estimated density (0.410 bears/km2) based on standard closed population techniques. A Bayesian analysis of the model is fully implemented in the software program WinBUGS. PMID:19449704

Gardner, Beth; Royle, J Andrew; Wegan, Michael T

2009-04-01

278

Efficient estimation of power spectral density from laser Doppler anemometer data

A non-biased estimator of power spectral density (PSD) is introduced for data obtained from a zeroth-order interpolated laser Doppler anemometer (LDA) data set. The systematic error, sometimes referred to as the "particle-rate filter" effect, is removed using an FIR filter parameterized by the mean particle rate. Independent of this, a procedure for estimating the measurement system noise is introduced
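
A naive version of the processing chain, before any bias removal, can be sketched as zeroth-order-hold resampling of the randomly timed LDA samples followed by a periodogram; the paper's FIR correction for the "particle-rate filter" bias is not reproduced here:

```python
import numpy as np

def sample_and_hold(arrival_t, values, fs, duration):
    """Zeroth-order-hold resampling: each regular grid point takes the
    value of the most recent (randomly timed) LDA sample."""
    grid = np.arange(0.0, duration, 1.0 / fs)
    idx = np.searchsorted(arrival_t, grid, side="right") - 1
    return grid, values[np.clip(idx, 0, None)]

def periodogram(x, fs):
    """One-sided periodogram of a regularly sampled record."""
    x = x - x.mean()
    X = np.fft.rfft(x)
    psd = (np.abs(X) ** 2) / (fs * len(x))
    psd[1:-1] *= 2.0                     # fold negative frequencies
    return np.fft.rfftfreq(len(x), 1.0 / fs), psd
```

The hold operation acts as a low-pass filter whose cutoff depends on the mean particle rate, which is exactly the systematic ("particle-rate filter") error the paper's FIR deconvolution removes.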

H. Nobach; E. Müller; C. Tropea

1998-01-01

279

Power density spectrum estimation of the random controlled PWM single-phase boost rectifier

Random modulation is of increasing interest in power electronics and also holds promise for reducing filtering requirements in AC/AC and DC/DC converter applications and reducing acoustic noise in motor drive applications. This paper deals with the power spectral density (PSD) estimation methods of the random controlled pulse width modulation (RPWM) single-phase boost rectifier. The estimated PSD is also experimentally verified. The

F. Mihalic; M. Milanovic

1999-01-01

280

Workplace air is monitored for overall dust levels and for specific components of the dust to determine compliance with occupational and workplace standards established by regulatory bodies for worker health protection. Exposure monitoring studies were conducted by the International Copper Association (ICA) at various industrial facilities around the world working with copper. Individual cascade impactor stages were weighed to determine the total amount of dust collected on the stage, and then the amounts of soluble and insoluble copper and other metals on each stage were determined; speciation was not determined. Filter samples were also collected for scanning electron microscope analysis. Retrospectively, there was an interest in obtaining estimates of alveolar lung burdens of copper in workers engaged in tasks requiring different levels of exertion as reflected by their minute ventilation. However, mechanistic lung dosimetry models estimate alveolar lung burdens based on particle Stokes diameter. In order to use these dosimetry models the mass-based, aerodynamic diameter distribution (which was measured) had to be transformed into a distribution of Stokes diameters, requiring an estimate of individual particle density. This density value was estimated by using cascade impactor data together with scanning electron microscopy data from filter samples. The developed method was applied to ICA monitoring data sets and then the multiple path particle dosimetry (MPPD) model was used to determine the copper alveolar lung burdens for workers with different functional residual capacities engaged in activities requiring a range of minute ventilation levels. PMID:24304308
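
The aerodynamic-to-Stokes diameter conversion the authors needed can be sketched as follows, ignoring differences in slip correction between the two diameters (a common simplification for micrometre-scale particles); the shape-factor handling here is an assumption:

```python
import math

def stokes_from_aerodynamic(d_ae_um, particle_density_g_cm3, shape_factor=1.0):
    """Convert aerodynamic to Stokes diameter.

    Ignoring slip-correction differences,
        d_ae = d_s * sqrt(rho_p / (rho_0 * chi)),
    with unit density rho_0 = 1 g/cm^3 and dynamic shape factor chi, so
        d_s = d_ae * sqrt(rho_0 * chi / rho_p).
    """
    return d_ae_um * math.sqrt(shape_factor / particle_density_g_cm3)
```

For example, a 2 µm aerodynamic-diameter sphere of density 4 g/cm3 has a Stokes diameter of 1 µm; this is why the per-particle density estimate from the impactor and SEM data was needed before the MPPD model could be applied.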

Miller, Frederick J; Kaczmar, Swiatoslav W; Danzeisen, Ruth; Moss, Owen R

2013-12-01

281

Estimating food portions. Influence of unit number, meal type and energy density

Estimating how much is appropriate to consume can be difficult, especially for foods presented in multiple units, those with ambiguous energy content and for snacks. This study tested the hypothesis that the number of units (single vs. multi-unit), meal type and food energy density disrupts accurate estimates of portion size. Thirty-two healthy weight men and women attended the laboratory on 3 separate occasions to assess the number of portions contained in 33 foods or beverages of varying energy density (1.7–26.8 kJ/g). Items included 12 multi-unit and 21 single unit foods; 13 were labelled “meal”, 4 “drink” and 16 “snack”. Departures in portion estimates from reference amounts were analysed with negative binomial regression. Overall participants tended to underestimate the number of portions displayed. Males showed greater errors in estimation than females (p = 0.01). Single unit foods and those labelled as ‘meal’ or ‘beverage’ were estimated with greater error than multi-unit and ‘snack’ foods (p = 0.02 and p < 0.001 respectively). The number of portions of high energy density foods was overestimated while the number of portions of beverages and medium energy density foods were underestimated by 30–46%. In conclusion, participants tended to underestimate the reference portion size for a range of food and beverages, especially single unit foods and foods of low energy density and, unexpectedly, overestimated the reference portion of high energy density items. There is a need for better consumer education of appropriate portion sizes to aid adherence to a healthy diet. PMID:23932948

Almiron-Roig, Eva; Solis-Trapala, Ivonne; Dodd, Jessica; Jebb, Susan A.

2013-01-01

282

NASA Astrophysics Data System (ADS)

Breast density has been identified as a risk factor for developing breast cancer and an indicator of lesion diagnostic obstruction due to the masking effect. Volumetric density measurement evaluates fibro-glandular volume, breast volume, and breast volume density measures that have potential advantages over area density measurement in risk assessment. One class of volume density computing methods is based on finding the relative fibro-glandular tissue attenuation with regard to the reference fat tissue, and the estimation of the effective x-ray tissue attenuation differences between fibro-glandular and fat tissue is key to volumetric breast density computing. We have modeled the effective attenuation difference as a function of actual x-ray skin entrance spectrum, breast thickness, fibro-glandular tissue thickness distribution, and detector efficiency. Compared to other approaches, our method has three advantages: (1) it avoids the system calibration-based creation of effective attenuation differences, which may introduce tedious calibrations for each imaging system and may not reflect the spectrum change and the scatter-induced overestimation or underestimation of breast density; (2) it obtains the system-specific separate and differential attenuation values of fibro-glandular and fat tissue for each mammographic image; and (3) it further reduces the impact of breast thickness accuracy on volumetric breast density. A quantitative breast volume phantom with a set of equivalent fibro-glandular thicknesses has been used to evaluate the volume breast density measurement with the proposed method. The experimental results have shown that the method has significantly improved the accuracy of estimating breast density.

Chen, Biao; Ruth, Chris; Jing, Zhenxue; Ren, Baorui; Smith, Andrew; Kshirsagar, Ashwini

2014-03-01

283

NASA Astrophysics Data System (ADS)

Cetin has applied non-quadratic optimization methods to produce feature enhanced high range resolution (HRR) radar profiles. This work concerned ground based targets and was carried out in the temporal domain. In this paper, we propose a wavelet-based half-quadratic technique for ground-to-air target identification. The method is tested on simulated data generated by standard techniques. This analysis shows the ability of the proposed method to recover high-resolution features such as the locations and amplitudes of the dominant scatterers in the HRR profile. This suggests that the technique potentially may help improve the performance of HRR target recognition systems.

Morris, Hedley C.; DePass, Monica M.

2004-08-01

284

3D depth-to-basement and density contrast estimates using gravity and borehole data

NASA Astrophysics Data System (ADS)

We present a gravity inversion method for simultaneously estimating the 3D basement relief of a sedimentary basin and the parameters defining the parabolic decay of the density contrast with depth in a sedimentary pack assuming the prior knowledge about the basement depth at a few points. The sedimentary pack is approximated by a grid of 3D vertical prisms juxtaposed in both horizontal directions, x and y, of a right-handed coordinate system. The prisms' thicknesses represent the depths to the basement and are the parameters to be estimated from the gravity data. To produce stable depth-to-basement estimates we impose smoothness on the basement depths through minimization of the spatial derivatives of the parameters in the x and y directions. To estimate the parameters defining the parabolic decay of the density contrast with depth we mapped a functional containing prior information about the basement depths at a few points. We apply our method to synthetic data from a simulated complex 3D basement relief with two sedimentary sections having distinct parabolic laws describing the density contrast variation with depth. Our method retrieves the true parameters of the parabolic law of density contrast decay with depth and produces good estimates of the basement relief if the number and the distribution of boreholes are sufficient. We also applied our method to real gravity data from the onshore and part of the shallow offshore Almada Basin, on Brazil's northeastern coast. The estimated 3D Almada's basement shows geologic structures that cannot be easily inferred just from the inspection of the gravity anomaly. The estimated Almada relief presents steep borders evidencing the presence of gravity faults. Also, we note the existence of three terraces separating two local subbasins. 
These geologic features are consistent with Almada's geodynamic origin (the Mesozoic breakup of Gondwana and the opening of the South Atlantic Ocean) and they are important in understanding the basin evolution and in detecting structural oil traps.

Barbosa, V. C.; Martins, C. M.; Silva, J. B.

2009-05-01

285

Mixture Kalman Filter Based Highway Congestion Mode and Vehicle Density Estimator and its

In today's metropolitan areas, highway traffic congestion occurs regularly during rush hours. In addition, it causes inefficient operation of highways, waste of resources, increased air pollution, and intensified

Horowitz, Roberto

286

A hybrid approach to crowd density estimation using statistical learning and texture classification

NASA Astrophysics Data System (ADS)

Crowd density estimation is a hot topic in the computer vision community. Established algorithms for crowd density estimation mainly focus on moving crowds, employing background modeling to obtain crowd blobs. However, people's motion is not obvious in many settings such as the waiting hall of an airport or the lobby of a railway station. Moreover, conventional algorithms for crowd density estimation cannot yield desirable results for all levels of crowding due to occlusion and clutter. We propose a hybrid method to address the aforementioned problems. First, statistical learning is introduced for background subtraction, which comprises a training phase and a test phase. The crowd images are gridded into small blocks which denote foreground or background. Then HOG features are extracted and fed into a binary SVM for each block. Hence, crowd blobs can be obtained from the classification results of the trained classifier. Second, the crowd images are treated as texture images, so the estimation problem can be formulated as texture classification, and the density level can be derived from the classification results. We validate the proposed algorithm on real scenarios where the crowd motion is not obvious. Experimental results demonstrate that our approach can obtain the foreground crowd blobs accurately and work well for different levels of crowding.
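
A toy version of the block-classification stage, with mean gradient magnitude standing in for the HOG descriptor and a plain learned threshold standing in for the binary SVM (both are simplifications of the method described above):

```python
import numpy as np

def block_features(img, block=16):
    """Mean gradient magnitude per block: a crude stand-in for HOG."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    h, w = img.shape
    h, w = h // block * block, w // block * block   # crop to whole blocks
    m = mag[:h, :w].reshape(h // block, block, w // block, block)
    return m.mean(axis=(1, 3))

def classify_blocks(features, threshold):
    """Label a block as foreground (crowd) when its texture energy
    exceeds a threshold that would be learned from labelled training
    blocks; an SVM decision boundary plays this role in the paper."""
    return features > threshold
```

The resulting binary block mask is the "crowd blob" map; the second, texture-classification stage of the method would then map the masked texture to a discrete density level.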

Li, Yin; Zhou, Bowen

2013-12-01

287

This paper presents a unified, efficient model of random decision forests which can be applied to a number of machine learning, computer vision and medical image analysis tasks. Our model extends existing forest-based techniques as it unifies classification, regression, density estimation, manifold learning, semi-supervised learning and active learning under the same decision forest framework. This means that the core implementation

A. Criminisi; J. Shotton; E. Konukoglu

2011-01-01

288

Admittance estimates of mean crustal thickness and density at the Martian hemispheric dichotomy

Francis Nimmo, Department of Geological Sciences, University College London, London, UK. … Mars Global Surveyor … Both Zuber et al. [2000] and Nimmo and Stevenson [2001] argue that the mean crustal thickness

Nimmo, Francis

289

Brain tumor cell density estimation from multi-modal MR images based on a synthetic

… registration [11] and, hence, atlas-based tissue segmentation [12]. Similarly, [13] relies on a bio… … appearances in multi-modal clinical images, the accurate diagnosis and analysis of these images remains

Prastawa, Marcel

290

Independent Component Analysis of High-Density Electromyography in Muscle Force Estimation

Accurate force prediction from surface electromyography (EMG) forms an important methodological challenge in biomechanics and kinesiology. In a previous study (Staudenmann, 2006), we illustrated force estimates based on analyses lent from multivariate statistics. In particular, we showed the advantages of principal component analysis (PCA) on monopolar high-density EMG (HD-EMG) over conventional electrode configurations. In the present study, we further

Didier Staudenmann; Andreas Daffertshofer; Idsart Kingma; Dick F. Stegeman; Jaap H. van Dieen

2007-01-01

291

COMPARISON OF TRAP-CATCH AND BAIT INTERFERENCE METHODS FOR ESTIMATING POSSUM DENSITIES

Leg-hold trapping is a standard method for estimating possum population densities in New Zealand. However, the method is costly because traps are heavy and bulky and by law are required to be checked daily. This limits the number of sampling lines that can be used, which reduces precision. Traps also threaten rare native ground-birds. We evaluated two lightweight alternatives that

M. D. THOMAS; J. A. BROWN; F. W. MADDIGAN

2003-01-01

292

Audubon Mississippi, 1208 Washington Street, Vicksburg, MS 39183. We combined Breeding Bird Survey point count protocol and distance sampling to survey spring migrant and breeding birds in Vicksburg National Military Park on 33 days between March and June of 2003 and 2004. For 26 of 106 detected species, we used program DISTANCE to estimate detection probabilities and densities

Scott G. Somershoe; Daniel J. Twedt; Bruce Reid

2006-01-01

293

Granularity Adaptive Density Estimation and on Demand Clustering of Concept-Drifting Data Streams

… to cluster data streams on demand using their density estimations. A performance study on synthetic data sets … the humidity, and the concentrations of oxygen and gas. Each sensor keeps reporting the observed data, and thus … (…concentration, gas concentration) in a data stream does not make good sense. Instead, if we can characterize

Pei, Jian

294

Use of animal density to estimate manure nutrient recycling ability of Wisconsin dairy farms

Animal density is increasingly being used as an indicator of agricultural nitrogen (N) and phosphorus (P) loss potential in Europe and the US. This study estimated animal-cropland ratios for over 800 Wisconsin dairy farms to: (1) illustrate the impact of alternative definitions of this ratio; (2) evaluate how the definition of ‘cropland’ would affect Wisconsin dairy farmers’ ability to comply

H. Saam; J. Mark Powell; Douglas B. Jackson-Smith; William L. Bland; Joshua L. Posner

2005-01-01

295

Examining the impact of the precision of address geocoding on estimated density of crime locations

This study examines the impact of the precision of address geocoding on the estimated density of crime locations in a large urban area of Japan. The data consist of two separate sets of the same Penal Code offenses known to the police that occurred during a nine-month period of April 1, 2001 through December 31, 2001 in the central 23

Yutaka Harada; Takahito Shimada

2006-01-01

296

Kernel Bandwidth Estimation in Methods based on Probability Density Function Modelling

… for segmenting modulated signals and local terrain orientation estimated from Synthetic Aperture Radar … The approximation and the modelling ability are controlled by the kernel bandwidth. In this paper we propose
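The abstract above concerns choosing the kernel bandwidth that controls the approximation/modelling trade-off in density-based methods. As background, not the paper's method, a common baseline is Silverman's rule of thumb for a one-dimensional Gaussian kernel density estimate:

```python
import numpy as np

def silverman_bandwidth(x):
    """Silverman's rule-of-thumb bandwidth for a 1-D Gaussian KDE:
    h = 0.9 * min(std, IQR/1.34) * n**(-1/5)."""
    x = np.asarray(x, float)
    sigma = x.std(ddof=1)
    iqr = np.subtract(*np.percentile(x, [75, 25]))
    return 0.9 * min(sigma, iqr / 1.34) * x.size ** (-1 / 5)

def kde(x, grid, h):
    """Gaussian kernel density estimate evaluated on `grid`."""
    u = (grid[:, None] - x[None, :]) / h
    return np.exp(-0.5 * u ** 2).sum(axis=1) / (x.size * h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(1)
data = rng.normal(0.0, 1.0, 500)
h = silverman_bandwidth(data)
grid = np.linspace(-4.0, 4.0, 81)
dens = kde(data, grid, h)
print(round(h, 3), round(dens[40], 3))   # grid[40] == 0.0, near the N(0,1) peak
```

Adaptive methods like the one proposed in the paper replace this single fixed bandwidth with locally varying ones.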

Bors, Adrian

297

Consequences of Spurious Modes in Density Estimates for Defining Clusters in Multiple Dimensions

… of distinct clusters. In this note, we provide counterexamples to each notion. KEY WORDS: Kernel function. Hierarchical methods based on various distance metrics and agglomeration rules have been widely applied … density regions. Since no single fixed-bandwidth kernel estimator can capture all modes, this approach has

Scott, David W.

298

Unbiased Estimate of Dark Energy Density from Type Ia Supernova Data

NASA Astrophysics Data System (ADS)

Type Ia supernovae (SNe Ia) are currently the best probes of the dark energy in the universe. To constrain the nature of dark energy, we assume a flat universe and that the weak energy condition is satisfied, and we allow the density of dark energy, ρ_X(z), to be an arbitrary function of redshift. Using simulated data from a space-based SN pencil-beam survey, we find that by optimizing the number of parameters used to parameterize the dimensionless dark energy density, f(z) = ρ_X(z)/ρ_X(z=0), we can obtain an unbiased estimate of both f(z) and the fractional matter density of the universe, Ω_m. A plausible SN pencil-beam survey (with a square degree field of view and for an observational duration of 1 yr) can yield about 2000 SNe Ia with 0<=z<=2. Such a survey in space would yield SN peak luminosities with a combined intrinsic and observational dispersion of σ(m_int) = 0.16 mag. We find that for such an idealized survey, Ω_m can be measured to 10% accuracy, and the dark energy density can be estimated to ~20% to z~1.5, and ~20%-40% to z~2, depending on the time dependence of the true dark energy density. Dark energy densities that vary more slowly can be more accurately measured. For the anticipated Supernova/Acceleration Probe (SNAP) mission, Ω_m can be measured to 14% accuracy, and the dark energy density can be estimated to ~20% to z~1.2. Our results suggest that SNAP may gain much sensitivity to the time dependence of the dark energy density and Ω_m by devoting more observational time to the central pencil-beam fields to obtain more SNe Ia at z>1.2. We use both a maximum likelihood analysis and a Monte Carlo analysis (when appropriate) to determine the errors of estimated parameters. We find that the Monte Carlo analysis gives a more accurate estimate of the dark energy density than the maximum likelihood analysis.

Wang, Yun; Lovelace, Geoffrey

2001-12-01

299

Population density estimated from locations of individuals on a passive detector array

The density of a closed population of animals occupying stable home ranges may be estimated from detections of individuals on an array of detectors, using newly developed methods for spatially explicit capture–recapture. Likelihood-based methods provide estimates for data from multi-catch traps or from devices that record presence without restricting animal movement ("proximity" detectors such as camera traps and hair snags). As originally proposed, these methods require multiple sampling intervals. We show that equally precise and unbiased estimates may be obtained from a single sampling interval, using only the spatial pattern of detections. This considerably extends the range of possible applications, and we illustrate the potential by estimating density from simulated detections of bird vocalizations on a microphone array. Acoustic detection can be defined as occurring when received signal strength exceeds a threshold. We suggest detection models for binary acoustic data, and for continuous data comprising measurements of all signals above the threshold. While binary data are often sufficient for density estimation, modeling signal strength improves precision when the microphone array is small.

Efford, Murray G.; Dawson, Deanna K.; Borchers, David L.

2009-01-01

300

NSDL National Science Digital Library

What is Density? Density is the amount of "stuff" in a given "space". In science terms that means the amount of "mass" per unit "volume". Using units that means the amount of "grams" per "centimeters cubed". Check out the following links and learn about density through song! Density Beatles Style Density Chipmunk Style Density Rap Enjoy! ...

Witcher, Miss

2011-10-06

301

Granger causality is increasingly being applied to multi-electrode neurophysiological and functional imaging data to characterize directional interactions between neurons and brain regions. For a multivariate dataset, one might be interested in different subsets of the recorded neurons or brain regions. According to the current estimation framework, for each subset, one conducts a separate autoregressive model fitting process, introducing the potential for unwanted variability and uncertainty. In this paper, we propose a multivariate framework for estimating Granger causality. It is based on spectral density matrix factorization and offers the advantage that the estimation of such a matrix needs to be done only once for the entire multivariate dataset. For any subset of recorded data, Granger causality can be calculated through factorizing the appropriate submatrix of the overall spectral density matrix. PMID:23858479

Wen, Xiaotong; Rangarajan, Govindan; Ding, Mingzhou

2013-01-01

302

NASA Astrophysics Data System (ADS)

Understanding streamflow variability and the ability to generate realistic scenarios at multi-decadal time scales are important for robust water resources planning and management in any river basin, more so in the Colorado River Basin with its semi-arid climate and highly stressed water resources. It is increasingly evident that large-scale climate forcings such as the El Niño Southern Oscillation (ENSO), Pacific Decadal Oscillation (PDO), and Atlantic Multi-decadal Oscillation (AMO) modulate the Colorado River Basin hydrology at multi-decadal time scales. Modeling these large-scale climate indicators is therefore important for conditionally modeling the multi-decadal streamflow variability. To this end, we developed a simulation model that combines a wavelet-based time series method, Wavelet Auto Regressive Moving Average (WARMA), with a K-nearest neighbor (K-NN) bootstrap approach. For a given time series (climate forcings), dominant periodicities/frequency bands are identified from the wavelet spectrum as those passing a 90% significance test. The time series is filtered at the frequencies in each band to create 'components'; the components are orthogonal and, when added to the residual (i.e., noise), recover the original time series. The components, being smooth, are easily modeled using parsimonious Auto Regressive Moving Average (ARMA) models. The fitted ARMA models are used to simulate the individual components, which are added to obtain a simulation of the original series. The WARMA approach is applied to all the climate forcing indicators, which are used to simulate multi-decadal sequences of these forcings.

For the current year, the simulated forcings are taken as the 'feature vector' and its K nearest neighbors are identified; one of the neighbors (i.e., one of the historical years) is resampled using a weighted probability metric (with the most weight on the nearest neighbor and the least on the farthest), and the corresponding streamflow is the simulated value for the current year. We applied this simulation approach to the climate indicators and streamflow at Lees Ferry, AZ, a key gauge on the Colorado River, using observational and paleo data together spanning 1650-2005. A suite of distributional statistics, such as the probability density function (PDF), mean, variance, skew, and lag-1 autocorrelation, along with higher-order and multi-decadal statistics such as spectra and drought and surplus statistics, was computed to check the performance of the flow simulation in capturing the variability of the historic and paleo periods. Our results indicate that this approach robustly reproduces all of the above statistical properties. It offers an attractive alternative for near-term (interannual to multi-decadal) flow simulation that is critical for water resources planning.
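The weighted K-NN resampling step can be sketched as follows. The 1/j discrete kernel (most weight on the nearest neighbor) is a common choice in the K-NN bootstrap literature and is assumed here, since the abstract does not give the exact metric; all data below are synthetic.

```python
import numpy as np

def knn_bootstrap_sample(feature, candidates, values, k, rng):
    """Resample one value via a weighted K-nearest-neighbor bootstrap.

    feature    : simulated climate-forcing vector for the current year
    candidates : (n, d) array of historical feature vectors
    values     : (n,) array of historical streamflows paired with candidates
    k          : number of neighbors

    Uses the common 1/j discrete kernel to weight the j-th nearest
    neighbor; the paper's exact weighting may differ.
    """
    dist = np.linalg.norm(candidates - feature, axis=1)
    nearest = np.argsort(dist)[:k]        # indices of the k nearest historical years
    w = 1.0 / np.arange(1, k + 1)
    w /= w.sum()                          # nearest neighbor gets the most mass
    return values[rng.choice(nearest, p=w)]

rng = np.random.default_rng(42)
hist_feat = rng.normal(size=(100, 3))     # toy (ENSO, PDO, AMO) indices per year
hist_flow = 10.0 + hist_feat[:, 0]        # toy flows correlated with the first index
sim = knn_bootstrap_sample(np.array([2.0, 0.0, 0.0]), hist_feat, hist_flow, k=5, rng=rng)
print(sim)
```

Because the returned value is always one of the historical flows, the bootstrap preserves the marginal distribution of the record while conditioning on the simulated forcings.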

Erkyihun, S. T.

2013-12-01

303

Estimating absolute salinity (SA) in the world's oceans using density and composition

NASA Astrophysics Data System (ADS)

The practical (Sp) and reference (SR) salinities do not account for variations in physical properties such as density and enthalpy. Trace and minor components of seawater, such as nutrients or inorganic carbon affect these properties. This limitation has been recognized and several studies have been made to estimate the effect of these compositional changes on the conductivity-density relationship. These studies have been limited in number and geographic scope. Here, we combine the measurements of previous studies with new measurements for a total of 2857 conductivity-density measurements, covering all of the world's major oceans, to derive empirical equations for the effect of silica and total alkalinity on the density and absolute salinity of the global oceans and to recommend an equation applicable to most of the world's oceans. The potential impact on salinity as a result of uptake of anthropogenic CO2 is also discussed.

Woosley, Ryan J.; Huang, Fen; Millero, Frank J.

2014-11-01

304

Density of Jatropha curcas Seed Oil and its Methyl Esters: Measurement and Estimations

NASA Astrophysics Data System (ADS)

Density data as a function of temperature have been measured for Jatropha curcas seed oil, as well as biodiesel jatropha methyl esters, at temperatures from above their melting points to 90 °C. The data obtained were used to validate the method proposed by Spencer and Danner using a modified Rackett equation. The experimental and estimated density values using the modified Rackett equation gave almost identical values, with average absolute percent deviations less than 0.03% for the jatropha oil and 0.04% for the jatropha methyl esters. The Janarthanan empirical equation was also employed to predict jatropha biodiesel densities. This equation performed equally well, with average absolute percent deviations within 0.05%. Two simple linear equations for densities of jatropha oil and its methyl esters are also proposed in this study.
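The Spencer-Danner modified Rackett equation referenced above has the standard form V = (R·Tc/Pc)·Zra^(1+(1−T/Tc)^(2/7)) for the saturated-liquid molar volume. A sketch with hypothetical pseudo-critical constants for a fatty-acid methyl ester (the paper's fitted values are not reproduced here):

```python
import math

R = 8.314  # universal gas constant, J/(mol*K)

def rackett_molar_volume(T, Tc, Pc, Zra):
    """Saturated-liquid molar volume (m^3/mol) from the Spencer-Danner
    modified Rackett equation: V = (R*Tc/Pc) * Zra**(1 + (1 - T/Tc)**(2/7))."""
    return (R * Tc / Pc) * Zra ** (1.0 + (1.0 - T / Tc) ** (2.0 / 7.0))

def liquid_density(T, Tc, Pc, Zra, M):
    """Liquid density in kg/m^3 given molar mass M in kg/mol."""
    return M / rackett_molar_volume(T, Tc, Pc, Zra)

# Hypothetical pseudo-critical constants; real jatropha biodiesel values
# must come from the paper or a property handbook.
Tc, Pc, Zra, M = 780.0, 1.3e6, 0.235, 0.295   # K, Pa, dimensionless, kg/mol
for T in (300.0, 330.0, 360.0):
    print(T, round(liquid_density(T, Tc, Pc, Zra, M), 1))
```

With these illustrative constants the density comes out in the high-800s kg/m³ near room temperature and decreases with temperature, which is the qualitative behavior the paper validates.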

Veny, Harumi; Baroutian, Saeid; Aroua, Mohamed Kheireddine; Hasan, Masitah; Raman, Abdul Aziz; Sulaiman, Nik Meriam Nik

2009-04-01

305

Single-sensor, cue-counting density estimation of highly broadband marine mammal calls.

Odontocete echolocation clicks have been used as a preferred cue for density estimation studies from single-sensor data sets, studies that require estimating detection probability as a function of range. Many such clicks can be very broadband in nature, with 10-dB bandwidths of 20 to 40 kHz or more. Because detection distances are not realizable from single-sensor data, the detection probability is estimated in a Monte Carlo simulation using the sonar equation along with transmission loss calculations to estimate the received signal-to-noise ratio of tens of thousands of click realizations. Continuous-wave (CW) analysis, that is, single-frequency analysis, is inherent to basic forms of the passive sonar equation. Considering transmission loss by using CW analysis with the click's center frequency while disregarding its bandwidth has recently been shown to introduce bias to detection probabilities and hence to population estimates. In this study, false killer whale (Pseudorca crassidens) clicks recorded off the Kona coast of Hawai'i are used to quantify the bias in sonar equation density estimates caused by the center-frequency approach. A different approach to analyze data sets with highly broadband calls and to correctly model such signals is also presented and evaluated. [Work supported by ONR.] PMID:25235719
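The Monte Carlo detection-probability estimate built on the CW sonar equation can be sketched as follows. All parameter values are illustrative, not the study's, and the single-frequency transmission-loss model shown here is exactly the approximation whose bias the paper quantifies.

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative (not species-specific) parameter values:
SL_mean, SL_sd = 205.0, 5.0   # click source level, dB re 1 uPa @ 1 m
NL = 60.0                     # noise level, dB
DT = 10.0                     # detection threshold, dB SNR
alpha = 0.03                  # absorption coefficient, dB/m

def p_detect(r, n=10_000):
    """Monte Carlo probability that a click emitted at range r exceeds the
    detection threshold, under a single-frequency (CW) sonar equation:
    SNR = SL - TL - NL, with TL = 20*log10(r) + alpha*r (spherical
    spreading plus absorption)."""
    SL = rng.normal(SL_mean, SL_sd, n)          # random click realizations
    TL = 20.0 * np.log10(r) + alpha * r
    return float(np.mean(SL - TL - NL > DT))

ranges = [500.0, 2500.0, 4000.0]
probs = [p_detect(r) for r in ranges]
print(probs)   # detection probability falls off with range
```

Integrating such a detection-probability curve over area is what converts cue counts into a density estimate; the paper's contribution is replacing the single-frequency TL with a model that respects the click's bandwidth.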

Küsel, Elizabeth T; Siderius, Martin; Mellinger, David K

2014-04-01

306

Estimating density dependence in time-series of age-structured populations.

For a life history with age at maturity α, and stochasticity and density dependence in adult recruitment and mortality, we derive a linearized autoregressive equation with time lags of from 1 to α years. Contrary to current interpretations, the coefficients for different time lags in the autoregressive dynamics do not simply measure delayed density dependence, but also depend on life-history parameters. We define a new measure of total density dependence in a life history, D, as the negative elasticity of population growth rate per generation with respect to change in population size, D = -∂ln(λ^T)/∂lnN, where λ is the asymptotic multiplicative growth rate per year, T is the generation time, and N is adult population size. We show that D can be estimated from the sum of the autoregression coefficients. We estimated D in populations of six avian species for which life-history data and unusually long time series of complete population censuses were available. Estimates of D were on the order of 1 or higher, indicating strong, statistically significant density dependence in four of the six species. PMID:12396510
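The estimation route described, fitting an autoregression with lags 1 through α to a (log) abundance series and summing its coefficients, can be sketched with synthetic data; the mapping from that sum to D follows the paper's formula and is not restated here.

```python
import numpy as np

def ar_coefficients(x, alpha):
    """Ordinary-least-squares fit of
    x_t = b_1*x_{t-1} + ... + b_alpha*x_{t-alpha} + c + e_t;
    returns the lag coefficients b_1..b_alpha."""
    x = np.asarray(x, float)
    rows = [x[alpha - i - 1 : len(x) - i - 1] for i in range(alpha)]  # lag i+1 column
    X = np.column_stack(rows + [np.ones(len(x) - alpha)])             # plus intercept
    b, *_ = np.linalg.lstsq(X, x[alpha:], rcond=None)
    return b[:alpha]

# simulate a log-abundance series from a known AR(2) model
rng = np.random.default_rng(3)
b_true = np.array([0.5, 0.2])
x = [0.0, 0.0]
for _ in range(5000):
    x.append(b_true @ np.array([x[-1], x[-2]]) + rng.normal(0.0, 0.1))
b_hat = ar_coefficients(x[100:], alpha=2)
print(b_hat, b_hat.sum())   # the paper relates this sum to total density dependence D
```

On a long simulated series the least-squares fit recovers the lag coefficients, and hence their sum, accurately; real census series are far shorter, which is why the paper's standard errors matter.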

Lande, R; Engen, S; Saether, B-E

2002-01-01

307

An estimate of the H2 density in the atomic hydrogen cloud of Titan

NASA Astrophysics Data System (ADS)

Charged particle experiments on the Voyager spacecraft at Saturn can be used to provide some useful estimates on the charge exchange loss rate of magnetospheric particles in the atomic hydrogen cloud of Titan. The thermal plasma instrument measured the number density and ion and electron temperatures of the corotating plasma and thus the charge exchange loss time scale of the neutral gas; the low-energy particle detectors measured the flux of the energetic neutrals generated by charge exchange recombination of the hot magnetospheric plasma. These observational results together with known reaction rate coefficients can be used to compute the total H + H2 density in the hydrogen torus. As the Voyager UV spectrometer experiment determined the average number density of hydrogen atoms independently, a limit on the H2 density in the neutral torus region can be estimated. This method leads to an H2 density value of less than or approximately equal to 10/cu cm, considerably less than the limiting value for the ballistic motion of the neutral particles to be collisionally dominated.

Ip, W.-H.

1984-04-01

308

Limit Distribution Theory for Maximum Likelihood Estimation of a Log-Concave Density.

We find limiting distributions of the nonparametric maximum likelihood estimator (MLE) of a log-concave density, i.e., a density of the form f_0 = exp(φ_0), where φ_0 is a concave function on ℝ. Existence, form, characterizations, and uniform rates of convergence of the MLE are given by Rufibach (2006) and Dümbgen and Rufibach (2007). The characterization of the log-concave MLE in terms of distribution functions is the same (up to sign) as the characterization of the least squares estimator of a convex density on [0, ∞) as studied by Groeneboom, Jongbloed and Wellner (2001b). We use this connection to show that the limiting distributions of the MLE and its derivative are, under comparable smoothness assumptions, the same (up to sign) as in the convex density estimation problem. In particular, changing the smoothness assumptions of Groeneboom, Jongbloed and Wellner (2001b) slightly by allowing some higher derivatives to vanish at the point of interest, we find that the pointwise limiting distributions depend on the second and third derivatives at 0 of H_k, the "lower invelope" of an integrated Brownian motion process minus a drift term depending on the number of vanishing derivatives of φ_0 = log f_0 at the point of interest. We also establish the limiting distribution of the resulting estimator of the mode M(f_0) and establish a new local asymptotic minimax lower bound which shows the optimality of our mode estimator in terms of both rate of convergence and dependence of constants on population values. PMID:19881896

Balabdaoui, Fadoua; Rufibach, Kaspar; Wellner, Jon A

2009-06-01

309

NASA Astrophysics Data System (ADS)

Accurate numerical simulations of global scale three-dimensional atmospheric chemical transport models (CTMs) are essential for studies of many important atmospheric chemistry problems such as adverse effect of air pollutants on human health, ecosystems and the Earth's climate. These simulations usually require large CPU time due to numerical difficulties associated with a wide range of spatial and temporal scales, nonlinearity and large number of reacting species. In our previous work we have shown that in order to achieve adequate convergence rate and accuracy, the mesh spacing in numerical simulation of global synoptic-scale pollution plume transport must be decreased to a few kilometers. This resolution is difficult to achieve for global CTMs on uniform or quasi-uniform grids. To address the described above difficulty we developed a three-dimensional Wavelet-based Adaptive Mesh Refinement (WAMR) algorithm. The method employs a highly non-uniform adaptive grid with fine resolution over the areas of interest without requiring small grid-spacing throughout the entire domain. The method uses multi-grid iterative solver that naturally takes advantage of a multilevel structure of the adaptive grid. In order to represent the multilevel adaptive grid efficiently, a dynamic data structure based on indirect memory addressing has been developed. The data structure allows rapid access to individual points, fast inter-grid operations and re-gridding. The WAMR method has been implemented on parallel computer architectures. The parallel algorithm is based on run-time partitioning and load-balancing scheme for the adaptive grid. The partitioning scheme maintains locality to reduce communications between computing nodes. The parallel scheme was found to be cost-effective. Specifically we obtained an order of magnitude increase in computational speed for numerical simulations performed on a twelve-core single processor workstation. 
We have applied the WAMR method to the numerical simulation of several benchmark problems, including traveling three-dimensional reactive and inert transpacific pollution plumes. It was shown earlier that conventionally used global CTMs implemented on stationary grids are incapable of reproducing the dynamics of these plumes due to excessive numerical diffusion caused by limitations in grid resolution. The WAMR algorithm allows us to use grids one to two orders of magnitude finer than static-grid techniques in regions of fine spatial scale without significantly increasing CPU time. The developed WAMR method therefore has significant advantages over conventional fixed-resolution computational techniques in terms of accuracy and/or computational cost, and makes it possible to simulate accurately important multi-scale chemical transport problems that cannot be simulated with the standard static-grid techniques currently utilized by the majority of global atmospheric chemistry models. This work is supported by a grant from the National Science Foundation under Award No. HRD-1036563.

Rastigejev, Y.; Semakin, A. N.

2013-12-01

310

We combined Breeding Bird Survey point count protocol and distance sampling to survey spring migrant and breeding birds in Vicksburg National Military Park on 33 days between March and June of 2003 and 2004. For 26 of 106 detected species, we used program DISTANCE to estimate detection probabilities and densities from 660 3-min point counts in which detections were recorded within four distance annuli. For most species, estimates of detection probability, and thereby density estimates, were improved through incorporation of the proportion of forest cover at point count locations as a covariate. Our results suggest Breeding Bird Surveys would benefit from the use of distance sampling and a quantitative characterization of habitat at point count locations. During spring migration, we estimated that the most common migrant species accounted for a population of 5000-9000 birds in Vicksburg National Military Park (636 ha). Species with average populations of 300 individuals during migration were: Blue-gray Gnatcatcher (Polioptila caerulea), Cedar Waxwing (Bombycilla cedrorum), White-eyed Vireo (Vireo griseus), Indigo Bunting (Passerina cyanea), and Ruby-crowned Kinglet (Regulus calendula). Of 56 species that bred in Vicksburg National Military Park, we estimated that the most common 18 species accounted for 8150 individuals. The six most abundant breeding species, Blue-gray Gnatcatcher, White-eyed Vireo, Summer Tanager (Piranga rubra), Northern Cardinal (Cardinalis cardinalis), Carolina Wren (Thryothorus ludovicianus), and Brown-headed Cowbird (Molothrus ater), accounted for 5800 individuals.

Somershoe, S.G.; Twedt, D.J.; Reid, B.

2006-01-01

311

NSDL National Science Digital Library

What is density? Density is a relationship between mass (usually in grams or kilograms) and volume (usually in L, mL or cm³). Below are several sites to help you further understand the concept of density. Click the following link to review the concept of density. Be sure to read each slide and watch each video: Chemistry Review: Density Watch the following video: Pop density video The following is a fun interactive site you can use to review density. Your job is #1, to play and #2 to calculate the density of the ...

Hansen, Mr.

2010-10-26

312

Estimation of the local density of states on a quantum computer

We report an efficient quantum algorithm for estimating the local density of states (LDOS) on a quantum computer. The LDOS describes the redistribution of energy levels of a quantum system under the influence of a perturbation. Sometimes known as the 'strength function' from nuclear spectroscopy experiments, the shape of the LDOS is directly related to the survival probability of unperturbed eigenstates, and has recently been related to the fidelity decay (or 'Loschmidt echo') under imperfect motion reversal. For quantum systems that can be simulated efficiently on a quantum computer, the LDOS estimation algorithm enables an exponential speedup over direct classical computation.

Emerson, Joseph; Cory, David [Department of Nuclear Engineering, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139 (United States); Lloyd, Seth [Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139 (United States); Poulin, David [Institute for Quantum Computing, University of Waterloo, Waterloo, ON, N2L 3G1 (Canada)

2004-05-01

313

The Population Density Tables (PDT) project at the Oak Ridge National Laboratory (www.ornl.gov) is developing population density estimates for specific human activities under normal patterns of life based largely on information available in open source. Currently, activity based density estimates are based on simple summary data statistics such as range and mean. Researchers are interested in improving activity estimation and uncertainty quantification by adopting a Bayesian framework that considers both data and sociocultural knowledge. Under a Bayesian approach knowledge about population density may be encoded through the process of expert elicitation. Due to the scale of the PDT effort which considers over 250 countries, spans 40 human activity categories, and includes numerous contributors, an elicitation tool is required that can be operationalized within an enterprise data collection and reporting system. Such a method would ideally require that the contributor have minimal statistical knowledge, require minimal input by a statistician or facilitator, consider human difficulties in expressing qualitative knowledge in a quantitative setting, and provide methods by which the contributor can appraise whether their understanding and associated uncertainty was well captured. This paper introduces an algorithm that transforms answers to simple, non-statistical questions into a bivariate Gaussian distribution as the prior for the Beta distribution. Based on geometric properties of the Beta distribution parameter feasibility space and the bivariate Gaussian distribution, an automated method for encoding is developed that responds to these challenging enterprise requirements. Though created within the context of population density, this approach may be applicable to a wide array of problem domains requiring informative priors for the Beta distribution.
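The paper's encoder maps answers to non-statistical questions into a bivariate Gaussian over the Beta parameter space. As a much simpler illustrative stand-in (not the ORNL algorithm), a method-of-moments encoder turns an elicited mean and standard deviation directly into Beta parameters:

```python
def beta_from_mean_sd(m, s):
    """Method-of-moments Beta(alpha, beta) matching an elicited mean m
    and standard deviation s; requires s**2 < m*(1-m)."""
    if not (0.0 < m < 1.0) or s <= 0.0 or s * s >= m * (1.0 - m):
        raise ValueError("need 0 < m < 1 and 0 < s^2 < m*(1-m)")
    nu = m * (1.0 - m) / (s * s) - 1.0   # plays the role of an effective sample size
    return m * nu, (1.0 - m) * nu

# e.g. a contributor believes an activity occupies ~20% of a day, give or take 10%
a, b = beta_from_mean_sd(0.2, 0.1)
print(a, b)   # 3.0 12.0; the Beta(3, 12) mean a/(a+b) recovers 0.2
```

The feasibility constraint s² < m(1−m) is the same kind of geometric restriction on the Beta parameter space that the paper's bivariate-Gaussian construction has to respect.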

Stewart, Robert N [ORNL; White, Devin A [ORNL; Urban, Marie L [ORNL; Morton, April M [ORNL; Webster, Clayton G [ORNL; Stoyanov, Miroslav K [ORNL; Bright, Eddie A [ORNL; Bhaduri, Budhendra L [ORNL

2013-01-01

314

Use of spatial capture-recapture modeling and DNA data to estimate densities of elusive animals.

Assessment of abundance, survival, recruitment rates, and density (i.e., population assessment) is especially challenging for elusive species most in need of protection (e.g., rare carnivores). Individual identification methods, such as DNA sampling, provide ways of studying such species efficiently and noninvasively. Additionally, statistical methods that correct for undetected animals and account for locations where animals are captured are available to efficiently estimate density and other demographic parameters. We collected hair samples of European wildcat (Felis silvestris) from cheek-rub lure sticks, extracted DNA from the samples, and identified each animal's genotype. To estimate the density of wildcats, we used Bayesian inference in a spatial capture-recapture model. We used WinBUGS to fit a model that accounted for differences in detection probability among individuals and seasons and between two lure arrays. We detected 21 individual wildcats (including possible hybrids) 47 times. Wildcat density was estimated at 0.29/km² (SE 0.06), and 95% of the activity of wildcats was estimated to occur within 1.83 km from their home-range center. Lures located systematically were associated with a greater number of detections than lures placed in a cell on the basis of expert opinion. Detection probability of individual cats was greatest in late March. Our model is a generalized linear mixed model; hence, it can be easily extended, for instance, to incorporate trap- and individual-level covariates. We believe that the combined use of noninvasive sampling techniques and spatial capture-recapture models will improve population assessments, especially for rare and elusive animals. PMID:21166714
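Spatial capture-recapture models of this kind commonly assume a half-normal detection function, and under a bivariate-normal activity model the reported "95% of activity within 1.83 km" pins down the movement scale σ. The half-normal form and the g0 value below are standard assumptions for illustration, not quantities stated in the abstract.

```python
import math

def halfnormal_detection(d, g0, sigma):
    """Half-normal detection function g(d) = g0 * exp(-d^2 / (2*sigma^2)),
    a standard choice in spatially explicit capture-recapture (SECR)."""
    return g0 * math.exp(-d * d / (2.0 * sigma * sigma))

# 95% of bivariate-normal activity lies within radius sigma*sqrt(5.991),
# where 5.991 is the 0.95 quantile of a chi-square with 2 df; inverting
# the reported 1.83 km radius gives the movement scale:
sigma = 1.83 / math.sqrt(5.991)
print(round(sigma, 3))                 # movement scale in km
for d in (0.0, 0.5, 1.0, 2.0):         # detection falls off with distance (km)
    print(d, round(halfnormal_detection(d, g0=0.1, sigma=sigma), 4))
```

In the fitted model, g0 and σ (and hence curves like the one above) are estimated jointly with density from the spatial pattern of detections at the lure sticks.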

Kéry, Marc; Gardner, Beth; Stoeckle, Tabea; Weber, Darius; Royle, J Andrew

2011-04-01

315

The application of Akaike information criterion based pruning to nonparametric density estimates

This paper examines the application of Akaike (1974) information criterion (AIC) based pruning to the refinement of nonparametric density estimates obtained via the adaptive mixtures (AM) procedure of Priebe (see JASA, vol.89, no.427, p.796-806, 1994) and Marchette. The paper details a new technique that uses these two methods in conjunction with one another to predict the appropriate number of terms

Jeff Solka; Carey Priebe; George Rogers; Wendy Poston; David Marchette

1994-01-01

316

Estimating snowshoe hare population density from pellet plots: a further evaluation

We counted fecal pellets of snowshoe hares (Lepus americanus) once a year in 10 areas in the southwestern Yukon from 1987 to 1996. Pellets in eighty 0.155-m² quadrats were counted and cleared each June on all areas, and we correlated these counts with estimates of absolute hare density obtained by intensive mark-recapture methods in the same areas. There is a
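The calibration described above, relating pellet counts to mark-recapture density, can be sketched as an ordinary least-squares fit; the numbers below are illustrative, not the study's data:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# Hypothetical calibration pairs: mean pellets per quadrat vs.
# absolute hare density (hares/ha) from mark-recapture on the same areas.
pellets = [0.5, 1.2, 2.8, 4.1, 6.0]
density = [0.2, 0.5, 1.1, 1.7, 2.4]
a, b = fit_line(pellets, density)

# Density predicted from a new count of 3 pellets per quadrat.
predicted = a + b * 3.0
```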

Charles J. Krebs; Rudy Boonstra; Vilis Nams; Mark O'Donoghue; Karen E. Hodges; Stan Boutin

2001-01-01

317

Use of spatial capture-recapture modeling and DNA data to estimate densities of elusive animals

Assessment of abundance, survival, recruitment rates, and density (i.e., population assessment) is especially challenging for elusive species most in need of protection (e.g., rare carnivores). Individual identification methods, such as DNA sampling, provide ways of studying such species efficiently and noninvasively. Additionally, statistical methods that correct for undetected animals and account for locations where animals are captured are available to efficiently estimate density and other demographic parameters. We collected hair samples of European wildcat (Felis silvestris) from cheek-rub lure sticks, extracted DNA from the samples, and identified each animal's genotype. To estimate the density of wildcats, we used Bayesian inference in a spatial capture-recapture model. We used WinBUGS to fit a model that accounted for differences in detection probability among individuals and seasons and between two lure arrays. We detected 21 individual wildcats (including possible hybrids) 47 times. Wildcat density was estimated at 0.29/km² (SE 0.06), and 95% of the activity of wildcats was estimated to occur within 1.83 km from their home-range center. Lures located systematically were associated with a greater number of detections than lures placed in a cell on the basis of expert opinion. Detection probability of individual cats was greatest in late March. Our model is a generalized linear mixed model; hence, it can be easily extended, for instance, to incorporate trap- and individual-level covariates. We believe that the combined use of noninvasive sampling techniques and spatial capture-recapture models will improve population assessments, especially for rare and elusive animals.
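A minimal sketch of the half-normal detection function commonly used in spatial capture-recapture models like the one above. The scale sigma is back-calculated from the reported 1.83 km 95%-activity radius under a bivariate-normal activity assumption; the baseline detection probability p0 is hypothetical:

```python
import math

def halfnormal_detection(d_km, p0, sigma_km):
    """Half-normal detection: probability of detecting an individual
    whose home-range centre lies d_km from the detector."""
    return p0 * math.exp(-d_km ** 2 / (2 * sigma_km ** 2))

# Under a bivariate-normal activity model, 95% of activity falls within
# r95 = sigma * sqrt(5.99) of the centre (5.99 = chi-square 95% quantile,
# 2 df). Inverting the reported 1.83 km radius (illustrative only):
sigma = 1.83 / math.sqrt(5.99)   # ~0.75 km

p_near = halfnormal_detection(0.1, 0.3, sigma)  # p0 = 0.3 is hypothetical
p_far = halfnormal_detection(1.5, 0.3, sigma)   # detection decays with distance
```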

Kery, Marc; Gardner, Beth; Stoeckle, Tabea; Weber, Darius; Royle, J. Andrew

2011-01-01

318

Kernel density estimation applied to bond length, bond angle, and torsion angle distributions.

We describe the method of kernel density estimation (KDE) and apply it to molecular structure data. KDE is a quite general nonparametric statistical method suitable even for multimodal data. The method generates smooth probability density function (PDF) representations and finds application in diverse fields such as signal processing and econometrics. KDE appears to have been under-utilized as a method in molecular geometry analysis, chemo-informatics, and molecular structure optimization. The resulting probability densities have advantages over histograms and, importantly, are also suitable for gradient-based optimization. To illustrate KDE, we describe its application to chemical bond length, bond valence angle, and torsion angle distributions and show the ability of the method to model arbitrary torsion angle distributions. PMID:24746022
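A minimal sketch of the univariate Gaussian KDE the abstract describes, applied to bond-length data; the sample values and bandwidth are illustrative:

```python
import math

def gaussian_kde(data, h):
    """Return a smooth PDF estimate: the average of Gaussian kernels
    of bandwidth h centred on each observation."""
    norm = 1.0 / (len(data) * h * math.sqrt(2 * math.pi))
    def pdf(x):
        return norm * sum(math.exp(-0.5 * ((x - xi) / h) ** 2) for xi in data)
    return pdf

# Hypothetical carbon-carbon bond lengths (angstroms); bandwidth chosen by eye.
bonds = [1.52, 1.53, 1.54, 1.54, 1.55, 1.33, 1.34, 1.34]
pdf = gaussian_kde(bonds, h=0.01)
# The estimate is bimodal: single bonds near 1.54 A, double bonds near 1.34 A,
# and, unlike a histogram, it is smooth enough for gradient-based optimization.
```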

McCabe, Patrick; Korb, Oliver; Cole, Jason

2014-05-27

319

NASA Astrophysics Data System (ADS)

A methodology for semiempirical density functional optimization, using regularization and cross-validation methods from machine learning, is developed. We demonstrate that such methods enable well-behaved exchange-correlation approximations in very flexible model spaces, thus avoiding the overfitting found when standard least-squares methods are applied to high-order polynomial expansions. A general-purpose density functional for surface science and catalysis studies should accurately describe bond breaking and formation in chemistry, solid state physics, and surface chemistry, and should preferably also include van der Waals dispersion interactions. Such a functional necessarily compromises between describing fundamentally different types of interactions, making transferability of the density functional approximation a key issue. We investigate this trade-off between describing the energetics of intramolecular and intermolecular, bulk solid, and surface chemical bonding, and the developed optimization method explicitly handles making the compromise based on the directions in model space favored by different materials properties. The approach is applied to designing the Bayesian error estimation functional with van der Waals correlation (BEEF-vdW), a semilocal approximation with an additional nonlocal correlation term. Furthermore, an ensemble of functionals around BEEF-vdW comes out naturally, offering an estimate of the computational error. An extensive assessment on a range of data sets validates the applicability of BEEF-vdW to studies in chemistry and condensed matter physics. Applications of the approximation and its Bayesian ensemble error estimate to two intricate surface science problems support this.

Wellendorff, Jess; Lundgaard, Keld T.; Møgelhøj, Andreas; Petzold, Vivien; Landis, David D.; Nørskov, Jens K.; Bligaard, Thomas; Jacobsen, Karsten W.

2012-06-01

320

NSDL National Science Digital Library

Students will explain the concept of density and be able to calculate it based on given volumes and masses. Throughout today's assignment, you will need to calculate density. You can find a density calculator at this site. Make sure that you enter the correct units. For most of the problems, grams and cubic centimeters will lead you to the correct answer: Density Calculator What is Density? Visit the following website to answer questions ...

Petersen, Mrs.

2013-10-28

321

NSDL National Science Digital Library

This page introduces students to the concept of density by presenting its definition, formula, and two blocks representing materials of different densities. Students are given the mass and volume of each block and asked to calculate the density. Their answers are then compared against a table of densities of common objects (air, wood, gold, etc.) and students must determine, using the density of the blocks, which substance makes up each block.
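The exercise above amounts to computing rho = m/V and matching the result against a reference table; a sketch, with commonly tabulated approximate densities:

```python
# Reference densities in g/cm^3 (commonly tabulated approximate values).
TABLE = {"air": 0.0012, "wood (pine)": 0.5, "water": 1.0,
         "aluminum": 2.7, "iron": 7.9, "gold": 19.3}

def density(mass_g, volume_cm3):
    """Density is mass divided by volume."""
    return mass_g / volume_cm3

def closest_substance(rho):
    """Pick the table entry whose density is nearest the measurement."""
    return min(TABLE, key=lambda name: abs(TABLE[name] - rho))

rho = density(193.0, 10.0)      # a 10 cm^3 block with mass 193 g
guess = closest_substance(rho)  # matches the densest entry in the table
```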

Carpi, Anthony

2003-01-01

322

Estimating population densities of key species is crucial for many conservation programs. Density estimates provide baseline data and enable monitoring of population size. Several different survey methods are available, and the choice of method depends on the species and study aims. Few studies have compared the accuracy and efficiency of different survey methods for large mammals, particularly for primates. Here we compare estimates of density and abundance of Kloss' gibbons (Hylobates klossii) using two of the most common survey methods: line transect distance sampling and triangulation. Line transect surveys (survey effort: 155.5 km) produced a total of 101 auditory and visual encounters and a density estimate of 5.5 gibbon clusters (groups or subgroups of primate social units)/km². Triangulation conducted from 12 listening posts during the same period revealed a similar density estimate of 5.0 clusters/km². Coefficients of variation of cluster density estimates were slightly higher from triangulation (0.24) than from line transects (0.17), resulting in a lack of precision in detecting changes in cluster densities of <66 % for triangulation and <47 % for line transect surveys at the 5 % significance level with a statistical power of 50 %. This case study shows that both methods may provide estimates with similar accuracy but that line transects can result in more precise estimates and allow assessment of other primate species. For a rapid assessment of gibbon density under time and financial constraints, the triangulation method also may be appropriate. PMID:23538477
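Line transect distance sampling, as described above, estimates density as D = n / (2 L * ESW), where the effective strip width ESW follows from a fitted detection function. A sketch assuming a half-normal detection function; the survey totals are from the abstract, but the perpendicular distances are hypothetical:

```python
import math

def half_normal_esw(perp_distances_km):
    """Effective strip width for a half-normal detection function,
    with sigma^2 estimated by maximum likelihood as the mean squared
    perpendicular sighting distance."""
    sigma2 = sum(d * d for d in perp_distances_km) / len(perp_distances_km)
    return math.sqrt(sigma2 * math.pi / 2)

def transect_density(n_clusters, total_length_km, esw_km):
    """D = n / (2 * L * ESW): encounters per unit of effectively surveyed area."""
    return n_clusters / (2 * total_length_km * esw_km)

# Hypothetical perpendicular distances (km) to detected gibbon clusters.
dists = [0.02, 0.05, 0.03, 0.08, 0.04, 0.06, 0.01, 0.07]

# Survey effort (155.5 km) and encounters (101) as reported in the abstract.
D = transect_density(101, 155.5, half_normal_esw(dists))
```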

Höing, Andrea; Quinten, Marcel C; Indrawati, Yohana Maria; Cheyne, Susan M; Waltert, Matthias

2013-02-01

323

Optimal diffusion MRI acquisition for fiber orientation density estimation: an analytic approach.

An important challenge in the design of diffusion MRI experiments is how to optimize statistical efficiency, i.e., the accuracy with which parameters can be estimated from the diffusion data in a given amount of imaging time. In model-based spherical deconvolution analysis, the quantity of interest is the fiber orientation density (FOD). Here, we demonstrate how the spherical harmonics (SH) can be used to form an explicit analytic expression for the efficiency of the minimum variance (maximally efficient) linear unbiased estimator of the FOD. Using this expression, we calculate optimal b-values for maximum FOD estimation efficiency with SH expansion orders of L = 2, 4, 6, and 8 to be approximately b = 1,500, 3,000, 4,600, and 6,200 s/mm², respectively. However, the arrangement of diffusion directions and scanner-specific hardware limitations also play a role in determining the realizable efficiency of the FOD estimator that can be achieved in practice. We show how some commonly used methods for selecting diffusion directions are sometimes inefficient, and propose a new method for selecting diffusion directions in MRI based on maximizing the statistical efficiency. We further demonstrate how scanner-specific hardware limitations generally lead to optimal b-values that are slightly lower than the ideal b-values. In summary, the analytic expression for the statistical efficiency of the unbiased FOD estimator provides important insight into the fundamental tradeoff between angular resolution, b-value, and FOD estimation accuracy. PMID:19603409

White, Nathan S; Dale, Anders M

2009-11-01

324

Biodiversity losses are occurring worldwide due to a combination of stressors. For example, by one estimate, 40% of amphibian species are vulnerable to extinction, and disease is one threat to amphibian populations. The emerging infectious disease chytridiomycosis, caused by the aquatic fungus Batrachochytrium dendrobatidis (Bd), is a contributor to amphibian declines worldwide. Bd research has focused on the dynamics of the pathogen in its amphibian hosts, with little emphasis on investigating the dynamics of free-living Bd. Therefore, we investigated patterns of Bd occupancy and density in amphibian habitats using occupancy models, powerful tools for estimating site occupancy and detection probability. Occupancy models have been used to investigate diseases where the focus was on pathogen occurrence in the host. We applied occupancy models to investigate free-living Bd in North American surface waters to determine Bd seasonality, relationships between Bd site occupancy and habitat attributes, and probability of detection from water samples as a function of the number of samples, sample volume, and water quality. We also report on the temporal patterns of Bd density from a 4-year case study of a Bd-positive wetland. We provide evidence that Bd occurs in the environment year-round. Bd exhibited temporal and spatial heterogeneity in density, but did not exhibit seasonality in occupancy. Bd was detected in all months, typically at less than 100 zoospores L⁻¹. The highest density observed was ~3 million zoospores L⁻¹. We detected Bd in 47% of sites sampled, but estimated that Bd occupied 61% of sites, highlighting the importance of accounting for imperfect detection. When Bd was present, there was a 95% chance of detecting it with four samples of 600 mL of water or five samples of 60 mL.
Our findings provide important baseline information to advance the study of Bd disease ecology, and advance our understanding of amphibian exposure to free-living Bd in aquatic habitats over time. PMID:25222122
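The sampling guidance above follows from the standard occupancy-model identity P(detect | present) = 1 - (1 - p)^k for k independent samples. A sketch that back-calculates a per-sample detection probability consistent with the abstract's 95%-in-four-samples figure; the inversion is illustrative, not the authors' fitted parameter:

```python
import math

def detect_prob(p_single, k):
    """Probability of at least one positive result in k independent
    water samples, given the pathogen is present."""
    return 1 - (1 - p_single) ** k

def samples_needed(p_single, target=0.95):
    """Smallest k with cumulative detection probability >= target."""
    return math.ceil(math.log(1 - target) / math.log(1 - p_single))

# Per-sample probability consistent with the reported 95% detection
# from four 600 mL samples (illustrative back-calculation): ~0.53.
p600 = 1 - (1 - 0.95) ** (1 / 4)

# A coarser hypothetical per-sample rate of 0.5 would need 5 samples.
k = samples_needed(0.5)
```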

Chestnut, Tara; Anderson, Chauncey; Popa, Radu; Blaustein, Andrew R; Voytek, Mary; Olson, Deanna H; Kirshtein, Julie

2014-01-01

325

A New Robust Approach for Highway Traffic Density Estimation. Fabio Morbidi, Luis León Ojeda. The uncertain graph-constrained Switching Mode Model (SMM) is used to describe the highway traffic density, and density reconstruction via a switching observer is demonstrated on an instrumented 2.2 km highway section of Grenoble

Université Paris-Sud XI

326

We estimated relative abundance and density of Western Burrowing Owls (Athene cunicularia hypugaea) at two sites in the Mojave Desert (2003-2004). We made modifications to previously established Burrowing Owl survey techniques for use in desert shrublands and evaluated several factors that might influence the detection of owls. We tested the effectiveness of the call-broadcast technique for surveying this species, the efficiency of this technique at early and late breeding stages, and the effectiveness of various numbers of vocalization intervals during broadcasting sessions. Only 1 (3%) of 31 initial (new) owl responses was detected during passive-listening sessions. We found that surveying early in the nesting season was more likely to produce new owl detections compared to surveying later in the nesting season. New owls detected during each of the three vocalization intervals (each consisting of 30 sec of vocalizations followed by 30 sec of silence) of our broadcasting session were similar (37%, 40%, and 23%; n = 30). We used a combination of detection trials (sighting probability) and double-observer method to estimate the components of detection probability, i.e., availability and perception. Availability for all sites and years, as determined by detection trials, ranged from 46.1-58.2%. Relative abundance, measured as frequency of occurrence and defined as the proportion of surveys with at least one owl, ranged from 19.2-32.0% for both sites and years. Density at our eastern Mojave Desert site was estimated at 0.09 ± 0.01 (SE) owl territories/km² and 0.16 ± 0.02 (SE) owl territories/km² during 2003 and 2004, respectively. In our southern Mojave Desert site, density estimates were 0.09 ± 0.02 (SE) owl territories/km² and 0.08 ± 0.02 (SE) owl territories/km² during 2004 and 2005, respectively. © 2010 The Raptor Research Foundation, Inc.

Crowe, D.E.; Longshore, K.M.

2010-01-01

327

Single-view x-ray luminescence computed tomography (XLCT) imaging has a short data-collection time that allows fast, noninvasive resolution of the three-dimensional (3-D) distribution of x-ray-excitable nanophosphors within small animals in vivo. However, single-view reconstruction suffers from a severely ill-posed problem because data from only one angle are used in the reconstruction. To alleviate the ill-posedness, in this paper we propose a wavelet-based reconstruction approach, achieved by applying a wavelet transformation to the acquired single-view measurements. To evaluate the performance of the proposed method, an in vivo experiment was performed on a cone-beam XLCT imaging system. The experimental results demonstrate that the proposed method can not only use the full set of measurements produced by the CCD, but also accelerate image reconstruction while preserving the spatial resolution of the reconstruction. Hence, it is suitable for dynamic XLCT imaging studies.

Liu, Xin; Wang, Hongkai; Xu, Mantao; Nie, Shengdong; Lu, Hongbing

2014-01-01

328

Heart Rate Variability and Wavelet-based Studies on ECG Signals from Smokers and Non-smokers

NASA Astrophysics Data System (ADS)

The current study deals with the heart rate variability (HRV) and wavelet-based ECG signal analysis of smokers and non-smokers. The results of HRV indicated dominance towards the sympathetic nervous system activity in smokers. The heart rate was found to be higher in the case of smokers as compared to non-smokers (p < 0.05). The frequency domain analysis showed an increase in the LF and LF/HF components with a subsequent decrease in the HF component. The HRV features were analyzed for classification of the smokers from the non-smokers. The results indicated that when RMSSD, SD1 and RR-mean features were used concurrently, a classification efficiency of >90% was achieved. The wavelet decomposition of the ECG signal was done using the Daubechies (db6) wavelet family. No difference was observed between the smokers and non-smokers, which apparently suggests that smoking does not affect the conduction pathway of the heart.

Pal, K.; Goel, R.; Champaty, B.; Samantray, S.; Tibarewala, D. N.

2013-12-01

329

In this paper, a wavelet-based iterative learning control (WILC) scheme with Fuzzy PD feedback is presented for a pneumatic control system with nonsmooth nonlinearities and uncertain parameters. The wavelet transform is employed to extract the learnable dynamics from the measured output signal before it is used to update the control profile. The wavelet transform decomposes the original signal into many low-resolution signals that contain the learnable and unlearnable parts. The desired control profile is then compared with the learnable part of the transformed signal. Thus, the effects of unlearnable dynamics on the controlled system can be attenuated by a Fuzzy PD feedback controller. As for the rules of the Fuzzy PD controller in the feedback loop, a genetic algorithm (GA) is employed to search for optimal inference rules. A proportional-valve-controlled pneumatic cylinder actuator system is used as the control target for simulation. Simulation results have shown a much-improved posi...

Huang, C E

2008-01-01

330

NASA Astrophysics Data System (ADS)

This paper presents a wavelet-based multifractal approach to characterize the statistical properties of temporal distribution of the 1982-2012 seismic activity in Mammoth Mountain volcano. The fractal analysis of time-occurrence series of seismicity has been carried out in relation to seismic swarm in association with magmatic intrusion happening beneath the volcano on 4 May 1989. We used the wavelet transform modulus maxima based multifractal formalism to get the multifractal characteristics of seismicity before, during, and after the unrest. The results revealed that the earthquake sequences across the study area show time-scaling features. It is clearly perceived that the multifractal characteristics are not constant in different periods and there are differences among the seismicity sequences. The attributes of singularity spectrum have been utilized to determine the complexity of seismicity for each period. Findings show that the temporal distribution of earthquakes for swarm period was simpler with respect to pre- and post-swarm periods.

Zamani, Ahmad; Kolahi Azar, Amir Pirouz; Safavi, Ali Akbar

2014-06-01

331

This paper proposes a method to localize a mobile station in an indoor environment using wavelet-based features (WBF) extracted from the channel impulse response (CIR) in conjunction with an artificial neural network (ANN). The proposed localization system makes use of the fingerprinting technique and employs CIR information as the signature and an artificial neural network as the pattern matching

Chahé NERGUIZIAN; Vahé NERGUIZIAN

2007-01-01

332

NASA Astrophysics Data System (ADS)

Over 700 weekly-spaced vertical profiles of aerosol number density have been archived during 14-year period (October 1986-September 2000) using a bi-static Argon ion lidar system at the Indian Institute of Tropical Meteorology, Pune (18°43′N, 73°51′E, 559 m above mean sea level), India. The monthly resolved time series of aerosol distributions within the atmospheric boundary layer as well as at different altitudes aloft have been subjected to the wavelet-based spectral analysis to investigate different characteristic periodicities present in the long-term dataset. The solar radiometric aerosol optical depth (AOD) measurements over the same place during 1998-2003 have also been analyzed with the wavelet technique. Wavelet spectra of both the time series exhibited significant quasi-annual (around 12-14 months) and quasi-biennial (around 22-25 months) oscillations at statistically significant level. An overview on the lidar and radiometric data sets including the wavelet-based spectral analysis procedure is also presented. A brief statistical analysis concerning both annual and interannual variability of lidar and radiometer derived aerosol distributions has been performed to delineate the effect of different dominant seasons and associated meteorological conditions prevailing over the experimental site in Western India. Additionally, the impact of urbanization on the long-term trends in the lidar measurements of aerosol loadings over the experimental site is brought out. This was achieved by using the lidar observations and a preliminary data set built for inferring the urban aspects of the city of Pune, which included population, number of industries and vehicles etc. in the city.

Pal, S.; Devara, P. C. S.

2012-08-01

333

Very little information is known of the recently described Microcebus tavaratra and Lepilemur milanoii in the Daraina region, a restricted area in far northern Madagascar. Since their forest habitat is highly fragmented and expected to undergo significant changes in the future, rapid surveys are essential to determine conservation priorities. Using both distance sampling and capture-recapture methods, we estimated population densities in two forest fragments. Our results are the first known density and population size estimates for both nocturnal species. In parallel, we compare density results from four different approaches, which are widely used to estimate lemur densities and population sizes throughout Madagascar. Four approaches (King, Kelker, Muller and Buckland) are based on transect surveys and distance sampling, and they differ from each other by the way the effective strip width is estimated. The fifth method relies on a capture-mark-recapture (CMR) approach. Overall, we found that the King method produced density estimates that were significantly higher than other methods, suggesting that it generates overestimates and hence overly optimistic estimates of population sizes in endangered species. The other three distance sampling methods provided similar estimates. These estimates were similar to those obtained with the CMR approach when enough recapture data were available. Given that Microcebus species are often trapped for genetic or behavioral studies, our results suggest that existing data can be used to provide estimates of population density for that species across Madagascar. PMID:22311681

Meyler, Samuel Viana; Salmona, Jordi; Ibouroi, Mohamed Thani; Besolo, Aubin; Rasolondraibe, Emmanuel; Radespiel, Ute; Rabarivola, Clément; Chikhi, Lounes

2012-05-01

334

NASA Astrophysics Data System (ADS)

In the context of remotely sensed data analysis, an important problem is the development of accurate models for the statistics of the pixel intensities. Focusing on Synthetic Aperture Radar (SAR) data, this modeling process turns out to be a crucial task, for instance, for classification or for denoising purposes. In the present paper, an innovative parametric estimation methodology for SAR amplitude data is proposed, that takes into account the physical nature of the scattering phenomena generating a SAR image by adopting a generalized Gaussian (GG) model for the backscattering phenomena. A closed-form expression for the corresponding amplitude probability density function (PDF) is derived and a specific parameter estimation algorithm is developed in order to deal with the proposed model. Specifically, the recently proposed "method-of-log-cumulants" (MoLC) is applied, which stems from the adoption of the Mellin transform (instead of the usual Fourier transform) in the computation of characteristic functions, and from the corresponding generalization of the concepts of moment and cumulant. For the developed GG-based amplitude model, the resulting MoLC estimates turn out to be numerically feasible and are also analytically proved to be consistent. The proposed parametric approach was validated by using several real ERS-1, XSAR, E-SAR and NASA/JPL airborne SAR images, and the experimental results prove that the method models the amplitude probability density function better than several previously proposed parametric models for backscattering phenomena.

Moser, Gabriele; Zerubia, Josiane B.; Serpico, Sebastiano B.

2004-11-01

335

Density-based load estimation using two-dimensional finite element models: a parametric study.

A parametric investigation was conducted to determine the effects on the load estimation method of varying: (1) the thickness of back-plates used in the two-dimensional finite element models of long bones, (2) the number of columns of nodes in the outer medial and lateral sections of the diaphysis to which the back-plate multipoint constraints are applied and (3) the region of bone used in the optimization procedure of the density-based load estimation technique. The study is performed using two-dimensional finite element models of the proximal femora of a chimpanzee, gorilla, lion and grizzly bear. It is shown that the density-based load estimation can be made more efficient and accurate by restricting the stimulus optimization region to the metaphysis/epiphysis. In addition, a simple method, based on the variation of diaphyseal cortical thickness, is developed for assigning the thickness to the back-plate. It is also shown that the number of columns of nodes used as multipoint constraints does not have a significant effect on the method. PMID:17132530

Bona, Max A; Martin, Larry D; Fischer, Kenneth J

2006-08-01

336

NASA Astrophysics Data System (ADS)

The methods of smoothing of the bispectral density estimate when solving the problems of restoration of signals with an unknown shape in the interference environment and random signal delay are considered for the first time. The performed analysis of statistical characteristics of noise present in the bispectrum estimate shows that these statistical characteristics have a rather complex unsteady behavior. An unambiguous selection of a filter optimal by the criterion of the minimum of the root-mean-square error and the minimum of dynamic distortions introduced by the filter seems to be problematic because of the unsteady behavior of counts of the bispectral density estimate and the absence of a priori data on the parameters of the restored signal. Therefore, statistical investigations were performed with the use of linear and nonlinear digital filters with variations of the sliding window sizes. It is shown that the advantages of the proposed approach most pronouncedly manifest themselves with the use of nonlinear digital filtering and small signal/noise ratios at the input and/or with a small sampling volume of observed implementations. The Kravchenko weight functions are proposed to smooth the bispectrum of the multifrequency signal with a large dynamic range of variations in amplitudes of spectral components. The presented results are of practical interest for use in applications such as radiolocation, hydrolocation, and digital communication.

Zelensky, A. A.; Kravchenko, V. F.; Pavlikov, V. V.; Pustovoit, V. I.; Totsky, A. V.

2014-07-01

337

Concentrations of four heavy metals (Cr, Cu, Ni, and Zn) were measured at 1,082 sampling sites in Changhua county of central Taiwan. A hazard zone is defined in the study as a place where the content of each heavy metal exceeds the corresponding control standard. This study examines the use of spatial analysis for identifying multiple soil pollution hotspots in the study area. In a preliminary investigation, kernel density estimation (KDE) was a technique used for hotspot analysis of soil pollution from a set of observed occurrences of hazards. In addition, the study estimates the hazardous probability of each heavy metal using geostatistical techniques such as the sequential indicator simulation (SIS) and indicator kriging (IK). Results show that there are multiple hotspots for these four heavy metals and they are strongly correlated to the locations of industrial plants and irrigation systems in the study area. Moreover, the pollution hotspots detected using the KDE are almost the same as those estimated using IK or SIS. Soil pollution hotspots and polluted sampling densities are clearly defined using the KDE approach based on contaminated point data. Furthermore, the risk of hazards is explored using techniques such as KDE and geostatistical approaches, and the hotspot areas are captured without requiring exhaustive sampling anywhere. PMID:21318015
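A minimal sketch of the indicator idea underlying such hazard-probability maps: transform each sample to exceeds/does-not-exceed the control standard, then smooth the indicators spatially, here with a simple Gaussian kernel rather than kriging. The coordinates, concentrations, threshold, and bandwidth are all hypothetical:

```python
import math

def hazard_probability(x, y, samples, threshold, h):
    """Kernel-smoothed indicator estimate: each sample contributes 1 if
    it exceeds the control standard, 0 otherwise, weighted by a Gaussian
    kernel of bandwidth h (in the grid's distance units)."""
    wsum = psum = 0.0
    for sx, sy, conc in samples:
        w = math.exp(-((x - sx) ** 2 + (y - sy) ** 2) / (2 * h * h))
        wsum += w
        psum += w * (1.0 if conc > threshold else 0.0)
    return psum / wsum if wsum else 0.0

# Hypothetical Cu concentrations (mg/kg) at (x, y) sampling sites.
sites = [(0.0, 0.0, 250.0), (0.1, 0.1, 300.0),  # near an industrial plant
         (2.0, 2.0, 40.0), (2.1, 1.9, 55.0)]    # background levels

p_hot = hazard_probability(0.05, 0.05, sites, threshold=200.0, h=0.5)
p_bg = hazard_probability(2.05, 1.95, sites, threshold=200.0, h=0.5)
```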

Lin, Yu-Pin; Chu, Hone-Jay; Wu, Chen-Fa; Chang, Tsun-Kuo; Chen, Chiu-Yang

2011-01-01

338

Kernel density estimation-based real-time prediction for respiratory motion.

Effective delivery of adaptive radiotherapy requires locating the target with high precision in real time. System latency caused by data acquisition, streaming, processing and delivery control necessitates prediction. Prediction is particularly challenging for highly mobile targets such as thoracic and abdominal tumors undergoing respiration-induced motion. The complexity of the respiratory motion makes it difficult to build and justify explicit models. In this study, we honor the intrinsic uncertainties in respiratory motion and propose a statistical treatment of the prediction problem. Instead of asking for a deterministic covariate-response map and a unique estimate value for future target position, we aim to obtain a distribution of the future target position (response variable) conditioned on the observed historical sample values (covariate variable). The key idea is to estimate the joint probability distribution (pdf) of the covariate and response variables using an efficient kernel density estimation method. Then, the problem of identifying the distribution of the future target position reduces to identifying the section in the joint pdf based on the observed covariate. Subsequently, estimators are derived based on this estimated conditional distribution. This probabilistic perspective has some distinctive advantages over existing deterministic schemes: (1) it is compatible with potentially inconsistent training samples, i.e., when close covariate variables correspond to dramatically different response values; (2) it is not restricted by any prior structural assumption on the map between the covariate and the response; (3) the two-stage setup allows much freedom in choosing statistical estimates and provides a full nonparametric description of the uncertainty for the resulting estimate. 
We evaluated the prediction performance on ten patient RPM traces, using the root mean squared difference between the prediction and the observed value normalized by the standard deviation of the observed data as the error metric. Furthermore, we compared the proposed method with two benchmark methods: most recent sample and an adaptive linear filter. The kernel density estimation-based prediction results demonstrate universally significant improvement over the alternatives and are especially valuable for long lookahead time, when the alternative methods fail to produce useful predictions. PMID:20134084
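A minimal sketch of prediction from an estimated conditional distribution: with Gaussian kernels, the conditional mean of a joint KDE reduces to a Nadaraya-Watson weighted average. The training pairs and bandwidth below are hypothetical, not patient data, and a full implementation would return the whole conditional distribution rather than its mean:

```python
import math

def conditional_mean(x_new, xs, ys, h):
    """Mean of the response distribution conditioned on the observed
    covariate, derived from a joint Gaussian kernel density estimate
    (equivalent to a Nadaraya-Watson weighted average)."""
    weights = [math.exp(-0.5 * ((x_new - x) / h) ** 2) for x in xs]
    return sum(w * y for w, y in zip(weights, ys)) / sum(weights)

# Hypothetical training pairs: covariate = current displacement (mm),
# response = displacement one system-latency period later.
covariate = [0.0, 1.0, 2.0, 3.0, 4.0]
response = [0.5, 1.4, 2.6, 3.5, 4.4]

# Predicted future position given a newly observed displacement of 2.5 mm.
pred = conditional_mean(2.5, covariate, response, h=0.5)
```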

Ruan, Dan

2010-03-01

339

NASA Astrophysics Data System (ADS)

A unique requirement of underwater vehicles' power/energy systems is that they remain neutrally buoyant over the course of a mission. Previous work published in the Journal of Power Sources reported gross as opposed to neutrally-buoyant energy densities of an integrated solid oxide fuel cell/Rankine-cycle based power system based on the exothermic reaction of aluminum with seawater. This paper corrects this shortcoming by presenting a model for estimating system mass and using it to update the key findings of the original paper in the context of the neutral buoyancy requirement. It also presents an expanded sensitivity analysis to illustrate the influence of various design and modeling assumptions. While energy density is very sensitive to turbine efficiency (sensitivity coefficient in excess of 0.60), it is relatively insensitive to all other major design parameters (sensitivity coefficients < 0.15) like compressor efficiency, inlet water temperature, scaling methodology, etc. The neutral buoyancy requirement introduces a significant (~15%) energy density penalty but overall the system still appears to offer factors of five to eight improvements in energy density (i.e., vehicle range/endurance) over present battery-based technologies.
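Normalized sensitivity coefficients of the kind quoted above, (dE/E)/(dp/p), can be computed by central finite differences around the nominal design point. The toy energy-density model below is purely illustrative and not the paper's system model:

```python
def sensitivity(model, p_name, params, rel_step=0.01):
    """Normalized sensitivity coefficient (dE/E)/(dp/p) by central
    finite difference around the nominal parameter values."""
    base = model(**params)
    up = dict(params); up[p_name] *= 1 + rel_step
    dn = dict(params); dn[p_name] *= 1 - rel_step
    return (model(**up) - model(**dn)) / (2 * rel_step * base)

# Toy model (illustrative only): energy density proportional to turbine
# efficiency and only weakly dependent on compressor efficiency.
def energy_density(eta_turbine, eta_compressor):
    return 800.0 * eta_turbine * (0.9 + 0.1 * eta_compressor)

nominal = {"eta_turbine": 0.7, "eta_compressor": 0.8}
s_turb = sensitivity(energy_density, "eta_turbine", nominal)
s_comp = sensitivity(energy_density, "eta_compressor", nominal)
```

Because the toy model is linear in turbine efficiency, s_turb comes out near 1.0, while s_comp stays small, mirroring the sensitive/insensitive split the abstract reports.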

Waters, Daniel F.; Cadou, Christopher P.

2014-02-01

340

NASA Astrophysics Data System (ADS)

An important parameter in the experimental study of dynamics of combustion is the probability distribution of the effective Rayleigh scattering cross section. This cross section cannot be observed directly. Instead, pairs of measurements of laser intensities and Rayleigh scattering counts are observed. Our aim is to provide estimators for the probability density function of the scattering cross section from such measurements. The probability distribution is derived first for the number of recorded photons in the Rayleigh scattering experiment. In this approach the laser intensity measurements are treated as known covariates. This departs from the usual practice of normalizing the Rayleigh scattering counts by the laser intensities. For distributions supported on finite intervals, two estimators are presented, one based on expansion of the density in

Hengartner, Nicolas; Talbot, Lawrence; Shepherd, Ian; Bickel, Peter

1995-06-01

341

NASA Technical Reports Server (NTRS)

A recent study (Desai, 2008) has shown that the actual landing sites of Mars Pathfinder, the Mars Exploration Rovers (Spirit and Opportunity) and the Phoenix Mars Lander have been further downrange than predicted by models prior to landing. Desai's reconstruction of their entries into the Martian atmosphere showed that the models consistently predicted higher densities than those found upon entry, descent and landing. Desai's results have raised a question as to whether there is a systemic problem within Mars atmospheric models. The proposal is to compare Mars atmospheric density estimates from Mars atmospheric models to measurements made by Mars Global Surveyor (MGS). The comparison study requires the completion of several tasks that would result in a greater understanding of the reasons behind the discrepancy found during recent landings on Mars and possible solutions to this problem.

Justh, Hilary L.; Justus, C. G.

2009-01-01

342

Simple method to estimate MOS oxide-trap, interface-trap, and border-trap densities

Recent work has shown that near-interfacial oxide traps that communicate with the underlying Si ("border traps") can play a significant role in determining MOS radiation response and long-term reliability. Thermally-stimulated-current, 1/f noise, and frequency-dependent charge-pumping measurements have been used to estimate border-trap densities in MOS structures. These methods all require high-precision, low-noise measurements that are often difficult to perform and interpret. In this summary, we describe a new dual-transistor method to separate bulk-oxide-trap, interface-trap, and border-trap densities in irradiated MOS transistors that requires only standard threshold-voltage and high-frequency charge-pumping measurements.

Fleetwood, D.M.; Shaneyfelt, M.R.; Schwank, J.R.

1993-09-01

343

Axonal and dendritic density field estimation from incomplete single-slice neuronal reconstructions

Neuronal information processing in cortical networks critically depends on the organization of synaptic connectivity. Synaptic connections can form when axons and dendrites come in close proximity of each other. The spatial innervation of neuronal arborizations can be described by their axonal and dendritic density fields. Recently we showed that potential locations of synapses between neurons can be estimated from their overlapping axonal and dendritic density fields. However, deriving density fields from single-slice neuronal reconstructions is hampered by incompleteness because of cut branches. Here, we describe a method for recovering the lost axonal and dendritic mass. This so-called completion method is based on an estimation of the mass inside the slice and an extrapolation to the space outside the slice, assuming axial symmetry in the mass distribution. We validated the method using a set of neurons generated with our NETMORPH simulator. The model-generated neurons were artificially sliced and subsequently recovered by the completion method. Depending on slice thickness and arbor extent, branches that have lost their outside parents (orphan branches) may occur inside the slice. Not connected anymore to the contiguous structure of the sliced neuron, orphan branches result in an underestimation of neurite mass. For 300 μm thick slices, however, the validation showed a full recovery of dendritic and an almost full recovery of axonal mass. The completion method was applied to three experimental data sets of reconstructed rat cortical L2/3 pyramidal neurons. The results showed that in 300 μm thick slices intracortical axons lost about 50% and dendrites about 16% of their mass. The completion method can be applied to single-slice reconstructions as long as axial symmetry can be assumed in the mass distribution. This opens up the possibility of using incomplete neuronal reconstructions from open-access databases to determine population mean mass density fields.
PMID:25009472

van Pelt, Jaap; van Ooyen, Arjen; Uylings, Harry B. M.

2014-01-01

344

NASA Technical Reports Server (NTRS)

A method for estimating electron temperatures and densities in nuclear-pumped plasmas is very useful because direct measurements of these quantities are difficult. This paper describes such a method, based on rate-equation analysis of the ionized species in the plasma and the electron energy balance. In addition to the ionized species, certain neutral species must also be calculated. Examples are given for pure helium and a mixture of helium and argon. In the He-Ar case, He(+), He2(+), He(2 3S), Ar(+), Ar2(+), and excited Ar are evaluated.

Depaola, B. D.; Marcum, S. D.; Wrench, H. K.; Whitten, B. L.; Wells, W. E.

1979-01-01

345

Manipulating decay time for efficient large-mammal density estimation: gorillas and dung height.

Large-mammal surveys often rely on indirect signs such as dung or nests. Sign density is usually translated into animal density using sign production and decay rates. In principle, such auxiliary variable estimates should be made in a spatially unbiased manner. However, traditional decay rate estimation methods entail following many signs from production to disappearance, which, in large study areas, requires extensive travel effort. Consequently, decay rate estimates have tended to be made instead at some convenient but unrepresentative location. In this study we evaluated how much bias might be induced by extrapolating decay rates from unrepresentative locations, how much effort would be required to implement current methods in a spatially unbiased manner, and what alternate approaches might be used to improve precision. To evaluate the extent of bias induced by unrepresentative sampling, we collected data on gorilla dung at several central African sites. Variation in gorilla dung decay rate was enormous, varying by up to an order of magnitude within and between survey zones. We then estimated what the effort-precision relationship would be for a previously suggested "retrospective" decay rate (RDR) method, if it were implemented in a spatially unbiased manner. We also evaluated precision for a marked sign count (MSC) approach that does not use a decay rate. Because they require repeat visits to remote locations, both RDR and MSC require enormous effort levels in order to gain precise density estimates. Finally, we examined an objective criterion for decay (i.e., dung height). This showed great potential for improving RDR efficiency because choosing a high threshold height for decay reduces decay time and, consequently, the number of visits that need to be made to remote areas. The ability to adjust decay time using an objective decay criterion also opens up the potential for a "prospective" decay rate (PDR) approach. 
Further research is necessary to evaluate whether the temporal bias inherent in such an approach is small enough to ignore, given the 10-20-fold increases in precision promised by a PDR approach. PMID:18213978
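The steady-state conversion underlying dung-based surveys divides sign density by the sign production rate times the mean decay time; the function and numbers below are an illustrative sketch, not the authors' estimator.

```python
def animal_density(sign_density, production_rate, decay_time):
    """Steady-state sign-count conversion:
    animals per km^2 = (signs per km^2)
                       / (signs per animal per day * days for a sign to decay)."""
    return sign_density / (production_rate * decay_time)
```

Raising the decay threshold (e.g., a minimum dung height) shortens `decay_time`, and hence the number of revisits needed to estimate it, without changing the form of the estimator.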

Kuehl, Hjalmar S; Todd, Angelique; Boesch, Christophe; Walsh, Peter D

2007-12-01

346

Estimation of the generalization ability of a classification or regression model is an important issue, as it indicates the expected performance on previously unseen data and is also used for model selection. Currently used generalization error estimation procedures, such as cross-validation (CV) or bootstrap, are stochastic and, thus, require multiple repetitions in order to produce reliable results, which can be computationally expensive, if not prohibitive. The correntropy-inspired density-preserving sampling (DPS) procedure proposed in this paper eliminates the need for repeating the error estimation procedure by dividing the available data into subsets that are guaranteed to be representative of the input dataset. This allows the production of low-variance error estimates with an accuracy comparable to 10 times repeated CV at a fraction of the computations required by CV. This method can also be used for model ranking and selection. This paper derives the DPS procedure and investigates its usability and performance using a set of public benchmark datasets and standard classifiers. PMID:24808204

Budka, Marcin; Gabrys, Bogdan

2013-01-01

347

Constrained Kalman Filtering Via Density Function Truncation for Turbofan Engine Health Estimation

NASA Technical Reports Server (NTRS)

Kalman filters are often used to estimate the state variables of a dynamic system. However, in the application of Kalman filters some known signal information is often either ignored or dealt with heuristically. For instance, state variable constraints (which may be based on physical considerations) are often neglected because they do not fit easily into the structure of the Kalman filter. This paper develops an analytic method of incorporating state variable inequality constraints in the Kalman filter. The resultant filter truncates the PDF (probability density function) of the Kalman filter estimate at the known constraints and then computes the constrained filter estimate as the mean of the truncated PDF. The incorporation of state variable constraints increases the computational effort of the filter but significantly improves its estimation accuracy. The improvement is demonstrated via simulation results obtained from a turbofan engine model. The turbofan engine model contains 3 state variables, 11 measurements, and 10 component health parameters. It is also shown that the truncated Kalman filter may be a more accurate way of incorporating inequality constraints than other constrained filters (e.g., the projection approach to constrained filtering).
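The truncation step can be illustrated in one dimension: the constrained estimate is the mean of the normal density restricted to the feasible interval. A standard-library sketch (the actual filter applies this idea to the multivariate state PDF at each update):

```python
import math

def phi(z):
    """Standard normal density."""
    return math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)

def Phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def truncated_mean(mu, sigma, a, b):
    """Mean of N(mu, sigma^2) truncated to the interval [a, b]:
    mu + sigma * (phi(alpha) - phi(beta)) / (Phi(beta) - Phi(alpha))."""
    alpha, beta = (a - mu) / sigma, (b - mu) / sigma
    Z = Phi(beta) - Phi(alpha)
    return mu + sigma * (phi(alpha) - phi(beta)) / Z
```

For a symmetric interval about the mean the correction vanishes; for a one-sided constraint the estimate is pulled into the feasible region.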

Simon, Dan; Simon, Donald L.

2006-01-01

348

Estimates of Leaf Vein Density Are Scale Dependent

Leaf vein density (LVD) has garnered considerable attention of late, with numerous studies linking it to the physiology, ecology, and evolution of land plants. Despite this increased attention, little consideration has been given to the effects of measurement methods on estimation of LVD. Here, we focus on the relationship between measurement methods and estimates of LVD. We examine the dependence of LVD on magnification, field of view (FOV), and image resolution. We first show that estimates of LVD increase with increasing image magnification and resolution. We then demonstrate that estimates of LVD are higher with higher variance at small FOV, approaching asymptotic values as the FOV increases. We demonstrate that these effects arise due to three primary factors: (1) the tradeoff between FOV and magnification; (2) geometric effects of lattices at small scales; and (3) the hierarchical nature of leaf vein networks. Our results help to explain differences in previously published studies and highlight the importance of using consistent magnification and scale, when possible, when comparing LVD and other quantitative measures of venation structure across leaves. PMID:24259686

Price, Charles A.; Munro, Peter R.T.; Weitz, Joshua S.

2014-01-01

349

Age structure data is essential for single species stock assessments but length-frequency data can provide complementary information. In south-western Australia, the majority of these data for exploited species are derived from line caught fish. However, baited remote underwater stereo-video systems (stereo-BRUVS) surveys have also been found to provide accurate length measurements. Given that line fishing tends to be biased towards larger fish, we predicted that stereo-BRUVS would yield length-frequency data with a smaller mean length, skewed towards smaller fish, than those collected by fisheries-independent line fishing. To assess the biases and selectivity of stereo-BRUVS and line fishing we compared the length-frequencies obtained for three commonly fished species, using a novel application of the Kernel Density Estimate (KDE) method and the established Kolmogorov–Smirnov (KS) test. The shape of the length-frequency distribution obtained for the labrid Choerodon rubescens by stereo-BRUVS and line fishing did not differ significantly, but, as predicted, the mean length estimated from stereo-BRUVS was 17% smaller. Contrary to our predictions, the mean length and shape of the length-frequency distribution for the epinephelid Epinephelides armatus did not differ significantly between line fishing and stereo-BRUVS. For the sparid Pagrus auratus, the length frequency distribution derived from the stereo-BRUVS method was bi-modal, while that from line fishing was uni-modal. However, the location of the first modal length class for P. auratus observed by each sampling method was similar. No differences were found between the results of the KS and KDE tests; however, KDE provided a data-driven method for approximating length-frequency data to a probability function and a useful way of describing and testing any differences between length-frequency samples. This study found the overall size selectivity of line fishing and stereo-BRUVS were unexpectedly similar.
PMID:23209547
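The KS side of the comparison is simple to sketch: the statistic is the largest gap between the two empirical CDFs over the pooled sample values. A minimal standard-library version (the paper's KDE-based test is not reproduced here):

```python
import bisect

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum absolute
    difference between the empirical CDFs of samples a and b."""
    a, b = sorted(a), sorted(b)
    points = sorted(set(a) | set(b))

    def ecdf(sample, x):
        # fraction of the sample less than or equal to x
        return bisect.bisect_right(sample, x) / len(sample)

    return max(abs(ecdf(a, x) - ecdf(b, x)) for x in points)
```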

Langlois, Timothy J.; Fitzpatrick, Benjamin R.; Fairclough, David V.; Wakefield, Corey B.; Hesp, S. Alex; McLean, Dianne L.; Harvey, Euan S.; Meeuwig, Jessica J.

2012-01-01

350

We report on Transition Region And Coronal Explorer 171 Å observations of the GOES X20 class flare on 2001 April 2 that show EUV flare ribbons with intense diffraction patterns. Between the 11th and 14th orders, the diffraction patterns of the compact flare ribbon are dispersed into two sources. The two sources are identified as emission from the Fe IX line at 171.1 Å and the combined emission from Fe X lines at 174.5, 175.3, and 177.2 Å. The prominent emission of the Fe IX line indicates that the EUV-emitting ribbon has a strong temperature component near the lower end of the 171 Å temperature response (~0.6-1.5 MK). Fitting the observation with an isothermal model, the derived temperature is around 0.65 MK. However, the low sensitivity of the 171 Å filter to high-temperature plasma does not provide estimates of the emission measure for temperatures above ~1.5 MK. Using the derived temperature of 0.65 MK, the observed 171 Å flux gives a density of the EUV ribbon of 3 × 10^11 cm^-3. This density is much lower than the density of the hard X-ray producing region (~10^13 to 10^14 cm^-3), suggesting that the EUV sources, though closely related spatially, lie at higher altitudes.

Krucker, Saem; Raftery, Claire L.; Hudson, Hugh S., E-mail: krucker@ssl.berkeley.edu [Space Sciences Laboratory, University of California, Berkeley, CA 94720-7450 (United States)

2011-06-10

351

Detection of dysphonia is useful for monitoring the progression of phonatory impairment for patients with Parkinson’s disease (PD), and also helps assess the disease severity. This paper describes the statistical pattern analysis methods to study different vocal measurements of sustained phonations. The feature dimension reduction procedure was implemented by using the sequential forward selection (SFS) and kernel principal component analysis (KPCA) methods. Four selected vocal measures were projected by the KPCA onto the bivariate feature space, in which the class-conditional feature densities can be approximated with the nonparametric kernel density estimation technique. In the vocal pattern classification experiments, Fisher’s linear discriminant analysis (FLDA) was applied to perform the linear classification of voice records for healthy control subjects and PD patients, and the maximum a posteriori (MAP) decision rule and support vector machine (SVM) with radial basis function kernels were employed for the nonlinear classification tasks. Based on the KPCA-mapped feature densities, the MAP classifier successfully distinguished 91.8% voice records, with a sensitivity rate of 0.986, a specificity rate of 0.708, and an area value of 0.94 under the receiver operating characteristic (ROC) curve. The diagnostic performance provided by the MAP classifier was superior to those of the FLDA and SVM classifiers. In addition, the classification results indicated that gender is insensitive to dysphonia detection, and the sustained phonations of PD patients with minimal functional disability are more difficult to be correctly identified. PMID:24586406
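The MAP decision rule with class-conditional KDE can be sketched in one dimension (the paper works in a bivariate KPCA feature space); class names, samples, and the bandwidth below are illustrative only.

```python
import math

def kde(sample, x, h):
    """Gaussian kernel density estimate of a 1-D sample, evaluated at x."""
    n = len(sample)
    return sum(math.exp(-0.5 * ((x - s) / h) ** 2)
               for s in sample) / (n * h * math.sqrt(2 * math.pi))

def map_classify(x, class_samples, priors, h=0.5):
    """MAP rule with class-conditional densities approximated by KDE:
    choose argmax over classes c of prior(c) * p_hat(x | c)."""
    return max(class_samples,
               key=lambda c: priors[c] * kde(class_samples[c], x, h))
```

Because the class-conditional densities are nonparametric, no distributional form is assumed for the vocal features.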

Yang, Shanshan; Zheng, Fang; Luo, Xin; Cai, Suxian; Wu, Yunfeng; Liu, Kaizhi; Wu, Meihong; Chen, Jian; Krishnan, Sridhar

2014-01-01

352

Wavelet series method for reconstruction and spectral estimation of laser Doppler velocimetry data

NASA Astrophysics Data System (ADS)

Many techniques have been developed to obtain the spectral density function from randomly sampled data, such as the computation of a slotted autocovariance function. Nevertheless, one may be interested in obtaining more information from laser Doppler signals than spectral content, using more or less complex computations that can be easily conducted with an evenly sampled signal. That is the reason why reconstructing an evenly sampled signal from the original LDV data is of interest. The ability of a wavelet-based technique to reconstruct the signal with respect to the statistical properties of the original one is explored, and the spectral content of the reconstructed signal is given and compared with the estimated spectral density function obtained through the classical slotting technique. Furthermore, LDV signals taken from a screeching jet are reconstructed in order to perform spectral and bispectral analysis, showing the ability of the technique to recover accurate information with only a few LDV samples.
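The slotted autocovariance mentioned above bins lagged products of randomly timed samples by their time separation. A minimal O(n²) reference sketch, assuming zero-indexed slots of uniform width (not the authors' code):

```python
def slotted_autocovariance(times, values, max_lag, slot_width):
    """Slotted autocovariance for randomly sampled data: each product of
    mean-removed samples is assigned to the slot containing its time lag,
    and slot averages estimate the autocovariance at that lag."""
    mean = sum(values) / len(values)
    dev = [v - mean for v in values]
    nslots = int(max_lag / slot_width)
    sums = [0.0] * nslots
    counts = [0] * nslots
    for i in range(len(times)):
        for j in range(i, len(times)):
            lag = times[j] - times[i]
            k = int(lag / slot_width)
            if k < nslots:
                sums[k] += dev[i] * dev[j]
                counts[k] += 1
    return [s / c if c else 0.0 for s, c in zip(sums, counts)]
```

The Fourier transform of this slot sequence gives the spectral estimate that a reconstructed, evenly sampled signal is benchmarked against.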

Jaunet, Vincent; Collin, Erwan; Bonnet, Jean-Paul

2012-01-01

353

NASA Astrophysics Data System (ADS)

This paper proposes an approach to integrate the self-organizing map (SOM) and kernel density estimation (KDE) techniques for the anomaly-based network intrusion detection (ABNID) system to monitor the network traffic and capture potential abnormal behaviors. With the continuous development of network technology, information security has become a major concern for cyber systems research. In modern net-centric and tactical warfare networks, it is even more critical to provide real-time protection for the availability, confidentiality, and integrity of the networked information. To this end, in this work we propose to explore the learning capabilities of SOM and integrate it with KDE for network intrusion detection. KDE is used to estimate the distributions of the observed random variables that describe the network system and determine whether the network traffic is normal or abnormal. Meanwhile, the learning and clustering capabilities of SOM are employed to obtain well-defined data clusters to reduce the computational cost of the KDE. The principle of learning in SOM is to self-organize the network of neurons to seek similar properties for certain input patterns. Therefore, SOM can form an approximation of the distribution of input space in a compact fashion, reduce the number of terms in a kernel density estimator, and thus improve the efficiency of the intrusion detection. We test the proposed algorithm over the real-world data sets obtained from the Integrated Network Based Ohio University's Network Detective Service (INBOUNDS) system to show the effectiveness and efficiency of this method.
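The key reduction can be sketched as follows: place kernels at cluster prototypes (as a SOM would supply) weighted by cluster occupancy, instead of at every training sample. The centers, counts, and threshold below are hypothetical.

```python
import math

def reduced_kde(x, centers, counts, h):
    """KDE with kernels at cluster prototypes rather than at every training
    point; each prototype is weighted by the size of its cluster, so the sum
    has one term per prototype instead of one per sample."""
    n = sum(counts)
    total = sum(c * math.exp(-0.5 * ((x - m) / h) ** 2)
                for m, c in zip(centers, counts))
    return total / (n * h * math.sqrt(2 * math.pi))

def is_anomalous(x, centers, counts, h=1.0, threshold=1e-3):
    """Flag a traffic feature as abnormal when its estimated density is low."""
    return reduced_kde(x, centers, counts, h) < threshold
```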

Cao, Yuan; He, Haibo; Man, Hong; Shen, Xiaoping

2009-09-01

354

Comparison of breast percent density estimation from raw versus processed digital mammograms

NASA Astrophysics Data System (ADS)

We compared breast percent density (PD%) measures obtained from raw and post-processed digital mammographic (DM) images. Bilateral raw and post-processed medio-lateral oblique (MLO) images from 81 screening studies were retrospectively analyzed. Image acquisition was performed with a GE Healthcare DS full-field DM system. Image post-processing was performed using the PremiumViewTM algorithm (GE Healthcare). Area-based breast PD% was estimated by a radiologist using a semi-automated image thresholding technique (Cumulus, Univ. Toronto). Comparison of breast PD% between raw and post-processed DM images was performed using the Pearson correlation (r), linear regression, and Student's t-test. Intra-reader variability was assessed with a repeat read on the same data-set. Our results show that breast PD% measurements from raw and post-processed DM images have a high correlation (r=0.98, R2=0.95, p<0.001). Paired t-test comparison of breast PD% between the raw and the post-processed images showed a statistically significant difference equal to 1.2% (p = 0.006). Our results suggest that the relatively small magnitude of the absolute difference in PD% between raw and post-processed DM images is unlikely to be clinically significant in breast cancer risk stratification. Therefore, it may be feasible to use post-processed DM images for breast PD% estimation in clinical settings. Since most breast imaging clinics routinely use and store only the post-processed DM images, breast PD% estimation from post-processed data may accelerate the integration of breast density in breast cancer risk assessment models used in clinical practice.

Li, Diane; Gavenonis, Sara; Conant, Emily; Kontos, Despina

2011-03-01

355

Bayes and empirical Bayes estimators of abundance and density from spatial capture-recapture data

In capture-recapture and mark-resight surveys, movements of individuals both within and between sampling periods can alter the susceptibility of individuals to detection over the region of sampling. In these circumstances spatially explicit capture-recapture (SECR) models, which incorporate the observed locations of individuals, allow population density and abundance to be estimated while accounting for differences in detectability of individuals. In this paper I propose two Bayesian SECR models, one for the analysis of recaptures observed in trapping arrays and another for the analysis of recaptures observed in area searches. In formulating these models I used distinct submodels to specify the distribution of individual home-range centers and the observable recaptures associated with these individuals. This separation of ecological and observational processes allowed me to derive a formal connection between Bayes and empirical Bayes estimators of population abundance that has not been established previously. I showed that this connection applies to every Poisson point-process model of SECR data and provides theoretical support for a previously proposed estimator of abundance based on recaptures in trapping arrays. To illustrate results of both classical and Bayesian methods of analysis, I compared Bayes and empirical Bayes estimates of abundance and density using recaptures from simulated and real populations of animals. Real populations included two iconic datasets: recaptures of tigers detected in camera-trap surveys and recaptures of lizards detected in area-search surveys. In the datasets I analyzed, classical and Bayesian methods provided similar – and often identical – inferences, which is not surprising given the sample sizes and the noninformative priors used in the analyses.

Dorazio, Robert M.

2013-01-01

356

SAR amplitude probability density function estimation based on a generalized Gaussian model.

In the context of remotely sensed data analysis, an important problem is the development of accurate models for the statistics of the pixel intensities. Focusing on synthetic aperture radar (SAR) data, this modeling process turns out to be a crucial task, for instance, for classification or for denoising purposes. In this paper, an innovative parametric estimation methodology for SAR amplitude data is proposed that adopts a generalized Gaussian (GG) model for the complex SAR backscattered signal. A closed-form expression for the corresponding amplitude probability density function (PDF) is derived and a specific parameter estimation algorithm is developed in order to deal with the proposed model. Specifically, the recently proposed "method-of-log-cumulants" (MoLC) is applied, which stems from the adoption of the Mellin transform (instead of the usual Fourier transform) in the computation of characteristic functions and from the corresponding generalization of the concepts of moment and cumulant. For the developed GG-based amplitude model, the resulting MoLC estimates turn out to be numerically feasible and are also analytically proved to be consistent. The proposed parametric approach was validated by using several real ERS-1, XSAR, E-SAR, and NASA/JPL airborne SAR images, and the experimental results prove that the method models the amplitude PDF better than several previously proposed parametric models for backscattering phenomena. PMID:16764268

Moser, Gabriele; Zerubia, Josiane; Serpico, Sebastiano B

2006-06-01

357

Wavelet-based reconstruction of fossil-fuel CO2 emissions from sparse measurements

NASA Astrophysics Data System (ADS)

We present a method to estimate spatially resolved fossil-fuel CO2 (ffCO2) emissions from sparse measurements of time-varying CO2 concentrations. It is based on wavelet modeling of the strongly non-stationary spatial distribution of ffCO2 emissions. The dimensionality of the wavelet model is first reduced using images of nightlights, which identify regions of human habitation. Since wavelets are a multiresolution basis set, most of the reduction is accomplished by removing fine-scale wavelets in the regions with low nightlight radiances. The (reduced) wavelet model of emissions is propagated through an atmospheric transport model (WRF) to predict CO2 concentrations at a handful of measurement sites. The estimation of the wavelet model of emissions, i.e., inferring the wavelet weights, is performed by fitting to observations at the measurement sites. This is done using Staggered Orthogonal Matching Pursuit (StOMP), which first identifies (and sets to zero) the wavelet coefficients that cannot be estimated from the observations, before estimating the remaining coefficients. This model sparsification and fitting is performed simultaneously, allowing us to explore multiple wavelet models of differing complexity. This technique is borrowed from the field of compressive sensing, and is generally used in image and video processing. We test this approach using synthetic observations generated from emissions from the Vulcan database. Thirty-five sensor sites are chosen over the USA. ffCO2 emissions, averaged over 8-day periods, are estimated at a 1-degree spatial resolution. We find that only about 40% of the wavelets in the emission model can be estimated from the data; however, the mix of coefficients that are estimated changes with time. Total US emissions can be reconstructed with errors of about 5%. The inferred emissions, if aggregated monthly, have a correlation of 0.9 with Vulcan fluxes. We find that the estimated emissions in the Northeast US are the most accurate.
Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
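The greedy sparse-fitting idea behind StOMP can be sketched with plain matching pursuit, a much simpler relative: repeatedly pick the dictionary atom most correlated with the residual and subtract its contribution. The sketch below assumes unit-norm atoms and is illustrative only.

```python
def matching_pursuit(signal, dictionary, n_iter):
    """Greedy matching pursuit: at each step, select the (unit-norm) atom
    with the largest correlation against the residual, accumulate its
    coefficient, and deflate the residual. Returns (coefficients, residual)."""
    residual = list(signal)
    coeffs = [0.0] * len(dictionary)
    for _ in range(n_iter):
        # correlation of each atom with the current residual
        corr = [sum(r * a for r, a in zip(residual, atom))
                for atom in dictionary]
        k = max(range(len(dictionary)), key=lambda i: abs(corr[i]))
        coeffs[k] += corr[k]
        residual = [r - corr[k] * a
                    for r, a in zip(residual, dictionary[k])]
    return coeffs, residual
```

StOMP differs in selecting many atoms per stage via a threshold and solving a least-squares subproblem, but the select-then-deflate loop is the same.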

McKenna, S. A.; Ray, J.; Yadav, V.; Van Bloemen Waanders, B.; Michalak, A. M.

2012-12-01

358

A protocol is presented to estimate the surface density and anisotropic polarizability of molecules adsorbed on the surface of a dielectric resonator of uniform refractive index. Measurement of resonance wavelength shift of transverse electric and transverse magnetic whispering gallery modes in the resonator gives the product of the surface density and the polarizability components normal and tangential to the resonator

Iwao Teraoka; Stephen Arnold

2007-01-01

359

on the amino acid type. Then by computing tail probabilities which are based on amino-acid conditional density estimation on a high-dimensional torus. In Section 2.1 we introduce toroidal kernels for kernel density estimation; the bivariate distributions of angles are dependent on the amino acid type. Various validation scores, which can

Barber, Stuart

360

We revisit the multifractal analysis of high resolution temporal rainfall using the wavelet transform modulus maxima (WTMM) method. Specifically, we employ a cumulant analysis of the logarithm of the WTMM coefficients to estimate the scaling exponent spectrum τ(q) and the spectrum of singularities D(h). We document that rainfall intensity fluctuations exhibit multifractality from scales of the order of 4–5 minutes

V. Venugopal; Stéphane G. Roux; Efi Foufoula-Georgiou; Alain Arnéodo

2006-01-01

361

A New Estimate of the Star Formation Rate Density in the HDFN

NASA Astrophysics Data System (ADS)

We measured the evolution of SFRD in the HDFN by comparing the available multi-color information on galaxy SEDs with a library of model fluxes, provided by the codes of Bruzual & Charlot (1993, ApJ 405, 538) and Leitherer et al. (1999, ApJS 123, 3). For each HDFN galaxy the best fitting template was used to estimate the redshift, the amount of dust obscuration and the un-reddened UV density at 1500 Å. The results are plotted in the figure, where a realistic estimate of the errors was obtained by considering the effects of field-to-field variations (Fontana et al., 1999, MNRAS, 310L). We did not correct for sample incompleteness, and the corrections for dust absorption in the estimates of Connolly et al. (1997, ApJ 486, 11L; C97) and Madau et al. (1998, ApJ 498, 106; M98) were calculated according to Steidel et al. (1999, ApJ 519, 1; S99). Our measured points show a peak at z ˜ 3, being consistent with those measured, in the same z interval, from rest-frame FIR emission (Barger et al., 2000, AJ 119, 2092; SCUBA). We did correct for dust obscuration by estimating the reddening object by object, and not by considering a mean value of E(B - V) as in S99. Such correction does not depend linearly on E(B - V): we did find a ratio ˜ 14 between un-reddened and reddened SFRD, ˜ 3 times greater than in S99, despite getting a mean value of color excess < E(B - V) > = 0.14 as in S99. Since we did not take into account sample incompleteness and surface brightness dimming effects, the decline of the SFRD at z ˜ 4 could be questionable.
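The non-linearity argued above follows from the multiplicative form of the extinction correction: the un-reddening factor is exponential in E(B - V), so the mean of per-object corrections exceeds the correction evaluated at the mean color excess. A sketch with an assumed attenuation-curve value k ~ 10 near 1500 Å and a hypothetical sample of color excesses:

```python
def dust_correction(ebv, k=10.0):
    """Multiplicative un-reddening factor at fixed wavelength:
    F_intrinsic = F_observed * 10**(0.4 * k * E(B-V)).
    k ~ 10 is an assumed attenuation-curve value near 1500 A."""
    return 10 ** (0.4 * k * ebv)

# Nonlinearity (Jensen's inequality for a convex function): averaging
# per-object corrections gives a larger factor than correcting at the
# mean color excess, even when the mean E(B-V) is the same.
sample = [0.02, 0.05, 0.14, 0.35]  # hypothetical per-galaxy E(B-V)
mean_of_corrections = sum(dust_correction(e) for e in sample) / len(sample)
correction_of_mean = dust_correction(sum(sample) / len(sample))
```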

Massarotti, M.; Iovino, A.

362

The photoplethysmogram (PPG) obtained from pulse oximetry measures local variations of blood volume in tissues, reflecting the peripheral pulse modulated by heart activity, respiration and other physiological effects. We propose an algorithm based on the correntropy spectral density (CSD) as a novel way to estimate respiratory rate (RR) and heart rate (HR) from the PPG. Time-varying CSD, a technique particularly well-suited for modulated signal patterns, is applied to the PPG. The respiratory and cardiac frequency peaks detected at extended respiratory (8 to 60 breaths/min) and cardiac (30 to 180 beats/min) frequency bands provide RR and HR estimations. The CSD-based algorithm was tested against the Capnobase benchmark dataset, a dataset from 42 subjects containing PPG and capnometric signals and expert labeled reference RR and HR. The RR and HR estimation accuracy was assessed using the unnormalized root mean square (RMS) error. We investigated two window sizes (60 and 120 s) on the Capnobase calibration dataset to explore the time resolution of the CSD-based algorithm. A longer window decreases the RR error, for 120-s windows, the median RMS error (quartiles) obtained for RR was 0.95 (0.27, 6.20) breaths/min and for HR was 0.76 (0.34, 1.45) beats/min. Our experiments show that in addition to a high degree of accuracy and robustness, the CSD facilitates simultaneous and efficient estimation of RR and HR. Providing RR every minute, expands the functionality of pulse oximeters and provides additional diagnostic power to this non-invasive monitoring tool. PMID:24466088
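The correntropy sequence underlying the CSD can be sketched directly: at each lag it is the average Gaussian-kernel similarity between samples separated by that lag, and the CSD is the Fourier transform of the (centered) sequence over lags. A minimal sketch with an illustrative kernel width:

```python
import math

def correntropy(x, lag, sigma=1.0):
    """Sample autocorrentropy at one lag: the mean Gaussian-kernel
    similarity exp(-(x[t+lag]-x[t])^2 / (2 sigma^2)) over the record."""
    n = len(x) - lag
    return sum(math.exp(-((x[t + lag] - x[t]) ** 2) / (2 * sigma ** 2))
               for t in range(n)) / n
```

For a periodic signal, correntropy peaks at lags equal to the period, which is what makes the CSD well suited to modulated patterns such as the PPG.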

Garde, Ainara; Karlen, Walter; Ansermino, J. Mark; Dumont, Guy A.

2014-01-01

364

The (maximum) penalized-likelihood method of probability density estimation and bump-hunting is improved and exemplified by applications to scattering and chondrite data. We show how the hyperparameter in the method can be satisfactorily estimated by using statistics of goodness of fit. A Fourier expansion is found to be usually more expeditious than a Hermite expansion, but a compromise is useful.

I. J. Good; R. A. Gaskins

1980-01-01

365

Stochastic wavelet-based image modeling using factor graphs and its application to denoising

NASA Astrophysics Data System (ADS)

In this work, we introduce a hidden Markov field model for wavelet image coefficients within a subband and apply it to the image denoising problem. Specifically, we propose to model wavelet image coefficients within subbands as Gaussian random variables with parameters determined by the underlying hidden Markov process. Our model is inspired by the recent Estimation-Quantization (EQ) image coder and its excellent performance in compression. To reduce the computational complexity we apply a novel factor graph framework to combine two 1-D hidden Markov chain models to approximate a hidden Markov Random field (HMRF) model. We then apply the proposed models for wavelet image coefficients to perform an approximate Minimum Mean Square Error (MMSE) estimation procedure to restore an image corrupted by an additive white Gaussian noise. Our results are among the state-of-the-art in the field and they indicate the promise of the proposed modeling techniques.
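Stripped of the hidden Markov machinery, the per-coefficient estimator is a Wiener-style MMSE shrinkage under a Gaussian prior. The sketch below applies it to a one-level Haar detail band, with the signal variance obtained by a simple moment estimate rather than from a hidden state; all parameter choices are illustrative.

```python
import math, random

def haar_forward(x):
    """One level of the orthonormal Haar DWT: (approximation, detail)."""
    a = [(x[2 * i] + x[2 * i + 1]) / math.sqrt(2) for i in range(len(x) // 2)]
    d = [(x[2 * i] - x[2 * i + 1]) / math.sqrt(2) for i in range(len(x) // 2)]
    return a, d

def haar_inverse(a, d):
    x = []
    for ai, di in zip(a, d):
        x += [(ai + di) / math.sqrt(2), (ai - di) / math.sqrt(2)]
    return x

def mmse_shrink(d, noise_var):
    """MMSE estimate of Gaussian coefficients in Gaussian noise: a Wiener gain."""
    band_var = sum(di * di for di in d) / len(d)
    sig_var = max(band_var - noise_var, 0.0)   # moment estimate of the prior variance
    gain = sig_var / (sig_var + noise_var)
    return [gain * di for di in d]

random.seed(1)
noise_sd = 0.3
clean = [math.sin(2.0 * math.pi * i / 64.0) for i in range(256)]
noisy = [c + random.gauss(0.0, noise_sd) for c in clean]
a, d = haar_forward(noisy)
denoised = haar_inverse(a, mmse_shrink(d, noise_sd ** 2))
mse_before = sum((y - c) ** 2 for y, c in zip(noisy, clean)) / len(clean)
mse_after = sum((y - c) ** 2 for y, c in zip(denoised, clean)) / len(clean)
```

The paper's contribution is to make the prior variance spatially adaptive via the hidden Markov field; this sketch shows only the shrinkage step that the field parameterizes.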

Xiao, Shu; Kozintsev, Igor V.; Ramchandran, Kannan

2000-04-01

366

NASA Astrophysics Data System (ADS)

Needle insertion planning for digital breast tomosynthesis (DBT) guided biopsy has the potential to improve patient comfort and intervention safety. However, relevant planning should take into account breast tissue deformation and lesion displacement during the procedure. Deformable models, like finite elements, use the elastic characteristics of the breast to evaluate the deformation of tissue during needle insertion. This paper presents a novel approach to locally estimate the Young's modulus of the breast tissue directly from the DBT data. The method consists of computing the fibroglandular percentage in each of the acquired DBT projection images, then reconstructing the density volume. Finally, this density information is used to compute the mechanical parameters for each finite element of the deformable mesh, obtaining a heterogeneous DBT-based breast model. Preliminary experiments were performed to evaluate the relevance of this method for needle path planning in DBT guided biopsy. The results show that the heterogeneous DBT-based breast model improves needle insertion simulation accuracy in 71% of the cases, compared to a homogeneous model or a binary fat/fibroglandular tissue model.

Vancamberg, Laurence; Geeraert, Nausikaa; Iordache, Razvan; Palma, Giovanni; Klausz, Rémy; Muller, Serge

2011-03-01

367

Density estimation in aerial images of large crowds for automatic people counting

NASA Astrophysics Data System (ADS)

Counting people is a common topic in the area of visual surveillance and crowd analysis. While many image-based solutions are designed to count only a few persons at the same time, like pedestrians entering a shop or watching an advertisement, there is hardly any solution for counting large crowds of several hundred persons or more. We addressed this problem previously by designing a semi-automatic system capable of counting crowds consisting of hundreds or thousands of people based on aerial images of demonstrations or similar events. This system requires major user interaction to segment the image. Our principal aim is to reduce this manual interaction. To achieve this, we propose a new and automatic system. Besides counting the people in large crowds, the system yields the positions of people, allowing a plausibility check by a human operator. To automate the people counting system, we use crowd density estimation. The determination of crowd density is based on several features like edge intensity or spatial frequency. They indicate the density and discriminate between a crowd and other image regions like buildings, bushes or trees. We compare the performance of our automatic system to the previous semi-automatic system and to manual counting in images. We measure the performance gain of our new system on a test set of aerial images showing large crowds containing up to 12,000 people. By improving our previous system, we will increase the benefit of an image-based solution for counting people in large crowds.

Herrmann, Christian; Metzler, Juergen

2013-05-01

368

NASA Astrophysics Data System (ADS)

A wavelet-based image matching method was developed for removal of normal anatomic structures in chest radiographs for reduction of false positives reported by our computer-aided diagnosis (CAD) scheme for detection of lung nodules. In our approach, two regions of interest (ROIs) are extracted, one from the position where a candidate of a nodule is located, and the other from the position located at a point symmetric to the first position relative to the spine. The second ROI contains normal anatomic structures similar to those of the first ROI. A non-linear functional representing the squared differences between the two images is formulated, and is minimized by a coarse-to-fine approach to yield a planar mapping that matches the two similar images. A smoothing term is added to the non-linear functional, which penalizes discontinuous and irregular mappings. If no structure remains in the difference between these matched images, then the first ROI is identified to be a false detection (i.e., it contains only normal structures); otherwise, it is regarded as a nodule (i.e., it contains an abnormal structure). A preliminary result shows that our method is effective in removing normal anatomic structures and thus is useful for substantially reducing the number of false detections in our CAD scheme.

Yoshida, Hiroyuki

1998-10-01

369

A wavelet transform based denoising methodology has been applied to detect the presence of any discernible trend in (137)Cs and (90)Sr activity levels in bore-hole water samples collected four times a year over a period of eight years, from 2002 to 2009, in the vicinity of typical nuclear facilities inside the restricted access zones. The conventional non-parametric methods, viz. Mann-Kendall and Spearman's rho, along with linear regression, do not yield conclusive results for trend detection with a confidence of 95% for most of the samples when applied to the time-series data. The stationary wavelet based hard thresholding data pruning method, with Haar as the analyzing wavelet, was applied to remove the noise present in the same data. Results indicate that the confidence interval of the established trend improved significantly after pre-processing, to more than 98%, compared to the conventional non-parametric methods applied to the direct measurements. PMID:23524202
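The pipeline can be caricatured in a few lines: hard-threshold the wavelet detail band, then apply the Mann-Kendall S statistic to the de-noised series. A single-level Haar transform stands in for the stationary wavelet transform used in the paper, and the series and threshold below are synthetic.

```python
import math

def haar_hard_denoise(x, threshold):
    """One-level Haar decomposition with hard thresholding of the detail band."""
    a = [(x[2 * i] + x[2 * i + 1]) / math.sqrt(2) for i in range(len(x) // 2)]
    d = [(x[2 * i] - x[2 * i + 1]) / math.sqrt(2) for i in range(len(x) // 2)]
    d = [di if abs(di) > threshold else 0.0 for di in d]   # hard threshold
    out = []
    for ai, di in zip(a, d):
        out += [(ai + di) / math.sqrt(2), (ai - di) / math.sqrt(2)]
    return out

def mann_kendall_s(x):
    """Mann-Kendall S statistic; large negative values indicate a downward trend."""
    return sum((x[j] > x[i]) - (x[j] < x[i])
               for i in range(len(x)) for j in range(i + 1, len(x)))

# quarterly readings: a weak downward trend masked by alternating "noise"
series = [10.0 - 0.2 * k + (0.8 if k % 2 else -0.8) for k in range(32)]
s_raw = mann_kendall_s(series)
s_denoised = mann_kendall_s(haar_hard_denoise(series, threshold=1.5))
```

De-noising suppresses the alternating component, so the downward trend is detected much more strongly (s_denoised is substantially more negative than s_raw).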

Paul, Sabyasachi; Suman, V; Sarkar, P K; Ranade, A K; Pulhani, V; Dafauti, S; Datta, D

2013-08-01

370

NASA Technical Reports Server (NTRS)

The characterization and the mapping of land cover/land use of forest areas, such as the Central African rainforest, is a very complex task. This complexity is mainly due to the extent of such areas and, as a consequence, to the lack of full and continuous cloud-free coverage of those large regions by one single remote sensing instrument. In order to provide improved vegetation maps of Central Africa and to develop forest monitoring techniques for applications at the local and regional scales, we propose to utilize multi-sensor remote sensing observations coupled with in-situ data. Fusion and clustering of multi-sensor data are the first steps towards the development of such a forest monitoring system. In this paper, we will describe some preliminary experiments involving the fusion of SAR and Landsat image data of the Lope Reserve in Gabon. As in previous fusion studies, our fusion method is wavelet-based. The fusion provides a new image data set which contains more detailed texture features and preserves the large homogeneous regions that are observed by the Thematic Mapper sensor. The fusion step is followed by unsupervised clustering and provides a vegetation map of the area.

LeMoigne, Jacqueline; Laporte, Nadine; Netanyahuy, Nathan S.; Zukor, Dorothy (Technical Monitor)

2001-01-01

371

Methods for Estimating Environmental Effects and Constraints on NexGen: High Density Case Study

NASA Technical Reports Server (NTRS)

This document provides a summary of the current methods developed by Metron Aviation for the estimate of environmental effects and constraints on the Next Generation Air Transportation System (NextGen). This body of work incorporates many of the key elements necessary to achieve such an estimate. Each section contains the background and motivation for the technical elements of the work, a description of the methods used, and possible next steps. The current methods described in this document were selected in an attempt to provide a good balance between accuracy and fairly rapid turnaround times to best advance Joint Planning and Development Office (JPDO) System Modeling and Analysis Division (SMAD) objectives while also supporting the needs of the JPDO Environmental Working Group (EWG). In particular this document describes methods applied to support the High Density (HD) Case Study performed during the spring of 2008. A reference day (in 2006) is modeled to describe current system capabilities while the future demand is applied to multiple alternatives to analyze system performance. The major variables in the alternatives are operational/procedural capabilities for airport, terminal, and en route airspace along with projected improvements to airframe, engine and navigational equipment.

Augustine, S.; Ermatinger, C.; Graham, M.; Thompson, T.

2010-01-01

372

Estimating basin thickness using a high-density passive-source geophone array

NASA Astrophysics Data System (ADS)

In 2010 an array of 834 single-component geophones was deployed across the Bighorn Mountain Range in northern Wyoming as part of the Bighorn Arch Seismic Experiment (BASE). The goal of this deployment was to test the capabilities of these instruments as recorders of passive-source observations in addition to active-source observations for which they are typically used. The results are quite promising, having recorded 47 regional and teleseismic earthquakes over a two-week deployment. These events ranged from magnitude 4.1 to 7.0 (mb) and occurred at distances up to 10°. Because these instruments were deployed at ca. 1000 m spacing we were able to resolve the geometries of two major basins from the residuals of several well-recorded teleseisms. The residuals of these arrivals, converted to basinal thickness, show a distinct westward thickening in the Bighorn Basin that agrees with industry-derived basement depth information. Our estimates of thickness in the Powder River Basin do not match industry estimates in certain areas, likely due to localized high-velocity features that are not included in our models. Thus, with a few cautions, it is clear that high-density single-component passive arrays can provide valuable constraints on basinal geometries, and could be especially useful where basinal geometry is poorly known.
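Converting a teleseismic residual into basin thickness is essentially a one-line vertical-incidence delay-time calculation. The basin-fill and basement P velocities below are illustrative values, not those used in the study.

```python
def basin_thickness_km(residual_s, v_basin=2.5, v_basement=6.0):
    """Thickness (km) of basin fill producing a given vertical-incidence delay (s),
    given assumed P-wave speeds (km/s) for basin fill and crystalline basement."""
    return residual_s / (1.0 / v_basin - 1.0 / v_basement)

h = basin_thickness_km(0.7)   # with these velocities, a 0.7 s residual maps to 3 km
```

The localized high-velocity features mentioned in the abstract would violate the constant-velocity assumption and bias such an estimate, which is consistent with the mismatch the authors report in the Powder River Basin.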

O'Rourke, C. T.; Sheehan, A. F.; Erslev, E. A.; Miller, K. C.

2014-09-01

373

We generalize the so-called wavelet transform modulus maxima (WTMM) method to multifractal image analysis. We show that the implementation of this method provides very efficient numerical techniques to characterize statistically the roughness fluctuations of fractal surfaces. We emphasize the wide range of potential applications of this wavelet-based image processing method in fundamental as well as applied sciences.

A. Arnéodo; N. Decoster; S. G. Roux

2000-01-01

374

NASA Astrophysics Data System (ADS)

We revisit the multifractal analysis of high resolution temporal rainfall using the wavelet transform modulus maxima (WTMM) method. Specifically, we employ a cumulant analysis of the logarithm of the WTMM coefficients to estimate the scaling exponent spectrum τ(q) and the spectrum of singularities D(h). We document that rainfall intensity fluctuations exhibit multifractality from scales of the order of 4-5 minutes up to the storm-pulse duration of 1-2 hours. We also establish long-range dependence consistent with that of a multiplicative cascade.
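The first log-cumulant idea can be sketched with ordinary Haar detail coefficients standing in for WTMM maxima lines: for a monofractal signal, the mean of ln|d_j| grows linearly with j ln 2 at rate H + 1/2. The random-walk test signal (H = 1/2) and the use of a plain DWT instead of maxima-line tracking are simplifying assumptions.

```python
import math, random

def haar_details(x, levels):
    """Detail bands d_1..d_levels of a multi-level orthonormal Haar DWT."""
    bands, a = [], list(x)
    for _ in range(levels):
        d = [(a[2 * i] - a[2 * i + 1]) / math.sqrt(2) for i in range(len(a) // 2)]
        a = [(a[2 * i] + a[2 * i + 1]) / math.sqrt(2) for i in range(len(a) // 2)]
        bands.append(d)
    return bands

def hurst_from_first_cumulant(x, levels=4):
    """Fit c1(j) = mean(ln |d_j|) against j*ln(2); the slope is about H + 1/2."""
    pts = []
    for j, d in enumerate(haar_details(x, levels), start=1):
        logs = [math.log(abs(di)) for di in d if di != 0.0]
        pts.append((j * math.log(2.0), sum(logs) / len(logs)))
    mx = sum(p[0] for p in pts) / len(pts)
    my = sum(p[1] for p in pts) / len(pts)
    slope = (sum((px - mx) * (py - my) for px, py in pts)
             / sum((px - mx) ** 2 for px, _ in pts))
    return slope - 0.5

random.seed(7)
walk, pos = [], 0.0
for _ in range(4096):
    walk.append(pos)
    pos += random.gauss(0.0, 1.0)
h_hat = hurst_from_first_cumulant(walk)   # expect roughly 0.5 for Brownian motion
```

For a genuinely multifractal signal the higher-order cumulants of ln|d_j| would also scale with j, which is what distinguishes the cascade behavior reported in the abstract from the monofractal case sketched here.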

Venugopal, V.; Roux, Stéphane G.; Foufoula-Georgiou, Efi; Arnéodo, Alain

2006-01-01

375

We present a new method for removing artifacts in electroencephalography (EEG) records during Galvanic Vestibular Stimulation (GVS). The main challenge in exploiting GVS is to understand how the stimulus acts as an input to the brain. We used EEG to monitor the brain and elicit the GVS reflexes. However, GVS current distribution throughout the scalp generates an artifact on EEG signals. We need to eliminate this artifact to be able to analyze the EEG signals during GVS. We propose a novel method to estimate the contribution of the GVS current in the EEG signals at each electrode by combining time-series regression methods with wavelet decomposition methods. We use wavelet transform to project the recorded EEG signal into various frequency bands and then estimate the GVS current distribution in each frequency band. The proposed method was optimized using simulated signals, and its performance was compared to well-accepted artifact removal methods such as ICA-based methods and adaptive filters. The results show that the proposed method has better performance in removing GVS artifacts, compared to the others. Using the proposed method, a higher signal-to-artifact ratio of ~1.625 dB was achieved, which outperformed other methods such as ICA-based methods, regression methods, and adaptive filters. PMID:23956786

Adib, Mani; Cretu, Edmond

2013-01-01

376

Obtaining accurate small area estimates of population is essential for policy and health planning but is often difficult in countries with limited data. In lieu of available population data, small-area estimation models draw information from previous time periods or from similar areas. This study focuses on model-based methods for estimating population when no direct samples are available in the area of interest. To explore the efficacy of tree-based models for estimating population density, we compare six different model structures including Random Forest and Bayesian Additive Regression Trees. Results demonstrate that without information from prior time periods, non-parametric tree-based models produced more accurate predictions than did conventional regression methods. Improving estimates of population density in non-sampled areas is important for regions with incomplete census data and has implications for economic, health and development policies. PMID:24992657

Anderson, Weston; Guikema, Seth; Zaitchik, Ben; Pan, William

2014-01-01

378

X-Ray Methods to Estimate Breast Density Content in Breast Tissue

NASA Astrophysics Data System (ADS)

This work focuses on analyzing x-ray methods to estimate the fat and fibroglandular contents in breast biopsies and in breasts. The knowledge of fat in the biopsies could aid in their wide-angle x-ray scatter analyses. A higher mammographic density (fibrous content) in breasts is an indicator of higher cancer risk. Simulations for 5 mm thick breast biopsies composed of fibrous, cancer, and fat and for 4.2 cm thick breast fat/fibrous phantoms were done. Data from experimental studies using plastic biopsies were analyzed. The 5 mm diameter, 5 mm thick plastic samples consisted of layers of polycarbonate (lexan), polymethyl methacrylate (PMMA-lucite) and polyethylene (polyet). In terms of the total linear attenuation coefficients, lexan ≈ fibrous, lucite ≈ cancer, and polyet ≈ fat. The detectors were of two types, photon counting (CdTe) and energy integrating (CCD). For biopsies, three photon counting methods were performed to estimate the fat (polyet) content using simulation and experimental data. The two basis function method, which assumed the biopsies were composed of two materials, fat and a 50:50 mixture of fibrous (lexan) and cancer (lucite), appears to be the most promising method. Discrepancies were observed between the results obtained via simulation and experiment. Potential causes are the spectrum and the attenuation coefficient values used for simulations. An energy integrating method was compared to the two basis function method using experimental and simulation data. A slight advantage was observed for photon counting, whereas both detectors gave similar results for the 4.2 cm thick breast phantom simulations. The percentage of fibrous within a 9 cm diameter circular phantom of fibrous/fat tissue was estimated via a fan beam geometry simulation. Both methods yielded good results. Computed tomography (CT) images of the circular phantom were obtained using both detector types.
The radon transforms were estimated via four energy integrating techniques and one photon counting technique. Contrast, signal to noise ratio (SNR) and pixel values between different regions of interest were analyzed. The two basis function method and two of the energy integrating methods (calibration, beam hardening correction) gave the highest and more linear curves for contrast and SNR.
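The two basis function method boils down to solving a 2×2 linear system: the log-attenuation measured at two energies equals a matrix of linear attenuation coefficients times the two unknown material thicknesses. The coefficients below are illustrative numbers, not measured values for lexan, lucite, or polyethylene.

```python
def two_basis_thicknesses(log_att, mu):
    """Invert ln(I0/I) at two energies for the thicknesses of two basis materials.
    log_att = (L1, L2); mu[e][m] = linear attenuation of material m at energy e."""
    (a, b), (c, d) = mu
    det = a * d - b * c
    t1 = (log_att[0] * d - b * log_att[1]) / det
    t2 = (a * log_att[1] - log_att[0] * c) / det
    return t1, t2

mu = [[0.50, 0.20],   # energy 1: material 1 (fat-like), material 2 (fibrous-like), cm^-1
      [0.30, 0.15]]   # energy 2
true_t = (0.3, 0.2)   # cm of each material
log_att = (mu[0][0] * true_t[0] + mu[0][1] * true_t[1],
           mu[1][0] * true_t[0] + mu[1][1] * true_t[1])
est_t = two_basis_thicknesses(log_att, mu)
```

With noiseless synthetic data the inversion recovers the true thicknesses exactly; in practice the conditioning of the 2×2 matrix (how different the two materials look across the two energies) governs how much measurement noise is amplified.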

Maraghechi, Borna

379

Wavelet-based multifractal analysis of field scale variability in soil water retention

NASA Astrophysics Data System (ADS)

Better understanding of spatial variability of soil hydraulic parameters and their relationships to other soil properties is essential to scale-up measured hydraulic parameters and to improve the predictive capacity of pedotransfer functions. The objective of this study was to characterize scaling properties and the persistency of water retention parameters and soil physical properties. Soil texture, bulk density, organic carbon content, and the parameters of the van Genuchten water retention function were determined on 128 soil cores from a 384-m transect with a sandy loam soil, located at Smeaton, SK, Canada. The wavelet transform modulus maxima, or WTMM, technique was used in the multifractal analysis. Results indicate that the fitted water retention parameters had higher small-scale variability and lower persistency than the measured soil physical properties. Of the three distinct scaling ranges identified, the middle region (8-128 m) had a multifractal-type scaling. The generalized Hurst exponent indicated that the measured soil properties were more persistent than the fitted soil hydraulic parameters. The relationships observed here imply that soil physical properties are better predictors of water retention values at larger spatial scales than at smaller scales.

Zeleke, Takele B.; Si, Bing C.

2007-07-01

380

Optical Density Analysis of X-Rays Utilizing Calibration Tooling to Estimate Thickness of Parts

NASA Technical Reports Server (NTRS)

This process is designed to estimate the thickness change of a material through data analysis of a digitized version of an x-ray (or a digital x-ray) containing the material (with the thickness in question) and various tooling. Using this process, it is possible to estimate a material's thickness change in a region of the material or part that is thinner than the rest of the reference thickness. However, that same principle process can be used to determine the thickness change of material using a thinner region to determine thickening, or it can be used to develop contour plots of an entire part. Proper tooling must be used. An x-ray film with an S-shaped characteristic curve or a digital x-ray device with a product resulting in like characteristics is necessary. If a film exists with linear characteristics, this type of film would be ideal; however, at the time of this reporting, no such film is known. Machined components (with known fractional thicknesses) of a like material (similar density) to that of the material to be measured are necessary. The machined components should have machined through-holes. For ease of use and better accuracy, the through-holes should be a size larger than 0.125 in. (3.2 mm). Standard components for this use are known as penetrameters or image quality indicators. Also needed is standard x-ray equipment, if film is used in place of digital equipment, or x-ray digitization equipment with proven conversion properties. Typical x-ray digitization equipment is commonly used in the medical industry, and creates digital images of x-rays in DICOM format. It is recommended to scan the image in a 16-bit format. However, 12-bit and 8-bit resolutions are acceptable. Finally, x-ray analysis software that allows accurate digital image density calculations, such as Image-J freeware, is needed.
The actual procedure requires the test article to be placed on the raw x-ray, ensuring the region of interest is aligned for perpendicular x-ray exposure capture. One or multiple machined components of like material/density with known thicknesses are placed atop the part (preferably in a region of nominal and non-varying thickness) such that exposure of the combined part and machined component lay-up is captured on the x-ray. Depending on the accuracy required, the machined component's thickness must be carefully chosen. Similarly, depending on the accuracy required, the lay-up must be exposed such that the regions of the x-ray to be analyzed have a density range between 1 and 4.5. After the exposure, the image is digitized, and the digital image can then be analyzed using the image analysis software.
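The density-to-thickness step can be sketched as interpolation on a calibration curve built from the machined steps of known thickness. The density/thickness pairs below are hypothetical, and a real film's characteristic curve would only be approximately linear between neighbouring calibration points.

```python
def thickness_from_density(density, calibration):
    """Linear interpolation on (film density, known thickness) calibration points;
    thicker material attenuates more, so film density falls as thickness rises."""
    pts = sorted(calibration)
    for (d0, t0), (d1, t1) in zip(pts, pts[1:]):
        if d0 <= density <= d1:
            return t0 + (t1 - t0) * (density - d0) / (d1 - d0)
    raise ValueError("density outside calibrated range")

# hypothetical penetrameter steps: (measured film density, thickness in inches)
cal = [(3.8, 0.10), (2.9, 0.20), (2.1, 0.30)]
t_est = thickness_from_density(2.5, cal)
```

Keeping the analyzed densities inside the 1 to 4.5 range recommended above keeps the interpolation on the usable part of the film's S-shaped characteristic curve.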

Grau, David

2012-01-01

381

Liquefied natural gas (LNG) densities can be measured directly but are usually determined indirectly in custody transfer measurement by using a density correlation based on temperature and composition measurements. An LNG densimeter test facility at the National Bureau of Standards uses an absolute densimeter based on the Archimedes principle, while a test facility at Gaz de France uses a correlation method based on measurement of composition and density. A comparison between these two test facilities using a portable version of the absolute densimeter provides an experimental estimate of the uncertainty of the indirect method of density measurement for the first time, on a large (32 L) sample. The two test facilities agree for pure methane to within about 0.02%. For the LNG-like mixtures consisting of methane, ethane, propane, and nitrogen with the methane concentrations always higher than 86%, the calculated density is within 0.25% of the directly measured density 95% of the time.

Siegwarth, J.D.; LaBrecque, J.F.; Roncier, M.; Philippe, R.; Saint-Just, J.

1982-12-16

382

NASA Astrophysics Data System (ADS)

The effect of using Adaptive Wavelets is investigated for dimension reduction and noise filtering of hyperspectral imagery that is to be subsequently exploited for classification or subpixel analysis. The method is investigated as a possible alternative to the Minimum Noise Fraction (MNF) transform as a preprocessing tool. Unlike the MNF method, the wavelet-transformed method does not require an estimate of the noise covariance matrix that can often be difficult to obtain for complex scenes (such as urban scenes). Another desirable characteristic of the proposed wavelet-transformed data is that, unlike Principal Component Analysis (PCA) transformed data, it maintains the same spectral shapes as the original data (the spectra are simply smoothed). In the experiment, an adaptive wavelet image cube is generated using four orthogonal conditions and three vanishing moment conditions. The classification performance of a Derivative Distance Squared (DDS) classifier and a Multilayer Feedforward Network (MLFN) neural network classifier applied to the wavelet cubes is then observed. The performance of the Constrained Energy Minimization (CEM) matched-filter algorithm applied to this data is also observed. HYDICE 210-band imagery containing a moderate amount of noise is used for the analysis so that the noise-filtering properties of the transform can be emphasized. Trials are conducted on a challenging scene with significant locally varying statistics that contains a diverse range of terrain features. The proposed wavelet approach can be automated to require no input from the user.

Rand, Robert S.; Bosch, Edward H.

2004-08-01

383

NASA Astrophysics Data System (ADS)

Reliability of microseismic interpretations is very much dependent on how robustly microseismic events are detected and picked. Various event detection algorithms are available but detection of weak events is a common challenge. Apart from the event magnitude, hypocentral distance, and background noise level, the instrument self-noise can also act as a major constraint for the detection of weak microseismic events in particular for borehole deployments in quiet environments such as below 1.5-2 km depths. Instrument self-noise levels that are comparable or above background noise levels may not only complicate detection of weak events at larger distances but also challenge methods such as seismic interferometry which aim at analysis of coherent features in ambient noise wavefields to reveal subsurface structure. In this paper, we use power spectral densities to estimate the instrument self-noise for a borehole data set acquired during a hydraulic fracturing stimulation using modified 4.5-Hz geophones. We analyse temporal changes in recorded noise levels and their time-frequency variations for borehole and surface sensors and conclude that instrument noise is a limiting factor in the borehole setting, impeding successful event detection. Next we suggest that the variations of the spectral powers in a time-frequency representation can be used as a new criterion for event detection. Compared to the common short-time average/long-time average method, our suggested approach requires a similar number of parameters but with more flexibility in their choice. It detects small events with anomalous spectral powers with respect to an estimated background noise spectrum with the added advantage that no bandpass filtering is required prior to event detection.
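For comparison with the proposed spectral-power criterion, the classic short-time-average/long-time-average (STA/LTA) baseline mentioned in the abstract can be sketched as below. The window lengths, trigger threshold, and synthetic trace are illustrative choices, not the paper's settings.

```python
import random

def sta_lta(x, n_sta, n_lta):
    """STA/LTA ratio on squared amplitudes; the ratio spikes at energy onsets."""
    e = [v * v for v in x]
    out = []
    for i in range(n_lta, len(x)):
        sta = sum(e[i - n_sta:i]) / n_sta
        lta = sum(e[i - n_lta:i]) / n_lta
        out.append(sta / lta if lta > 0.0 else 0.0)
    return out

random.seed(3)
trace = [random.gauss(0.0, 0.1) for _ in range(400)]
for i in range(300, 320):                  # inject a small synthetic event
    trace[i] += 1.0
ratio = sta_lta(trace, n_sta=10, n_lta=100)
onset = ratio.index(max(ratio)) + 100      # sample index of the strongest trigger
```

A spectral-power criterion replaces the two time-domain averages with a comparison of the current time-frequency power against an estimated background noise spectrum, which is what lets it flag weak events without prior bandpass filtering.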

Vaezi, Y.; van der Baan, M.

2014-05-01

384

Wavelet-based automatic determination of the P- and S-wave arrivals

NASA Astrophysics Data System (ADS)

The detection of P- and S-wave arrivals is important for a variety of seismological applications including earthquake detection and characterization, and seismic tomography problems such as imaging of hydrocarbon reservoirs. For many years, dedicated human-analysts manually selected the arrival times of P and S waves. However, with the rapid expansion of seismic instrumentation, automatic techniques that can process a large number of seismic traces are becoming essential in tomographic applications, and for earthquake early-warning systems. In this work, we present a pair of algorithms for efficient picking of P and S onset times. The algorithms are based on the continuous wavelet transform of the seismic waveform that allows examination of a signal in both time and frequency domains. Unlike Fourier transform, the basis functions are localized in time and frequency, therefore, wavelet decomposition is suitable for analysis of non-stationary signals. For detecting the P-wave arrival, the wavelet coefficients are calculated using the vertical component of the seismogram, and the onset time of the wave is identified. In the case of the S-wave arrival, we take advantage of the polarization of the shear waves, and cross-examine the wavelet coefficients from the two horizontal components. In addition to the onset times, the automatic picking program provides estimates of uncertainty, which are important for subsequent applications. The algorithms are tested with synthetic data that are generated to include sudden changes in amplitude, frequency, and phase. The performance of the wavelet approach is further evaluated using real data by comparing the automatic picks with manual picks. Our results suggest that the proposed algorithms provide robust measurements that are comparable to manual picks for both P- and S-wave arrivals.

Bogiatzis, P.; Ishii, M.

2013-12-01

385

NASA Astrophysics Data System (ADS)

Atmospheric data collected twice daily from five ocean stations over a period of 7 years were used in a study to correlate ballistic densities at various levels. The study indicates that a good correlation does exist and that tables, nomograms, or data in a suitable format should be produced for use with new and existing fire control computers and range tables prepared for the Fleet. Two exponential functions of altitude were used to express the density: one from the surface to 36,000 ft, the other from 36,000 to 65,000 ft. The coefficients were selected in a manner that produces a continuous function from sea level to 65,000 ft. The coefficients of the two functions were determined using a nonlinear least-squares technique. Fluctuations in the density profiles below 5,000 ft altitude were noted and were investigated by fitting a sample of temperature and pressure data with a least-squares technique. Ballistic densities for firing at surface targets were computed for altitudes to 50,000 ft. While the current projectiles in the Fleet may not reach an altitude of 50,000 ft, this may become a reality in the near future. The procedure presented in this report could very easily be modified to compute ballistic density when firing at air targets at altitudes up to 50,000 ft.
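The two-exponential density model with a continuity constraint at the 36,000 ft break point can be written directly; the decay coefficients below are illustrative, not the report's fitted values.

```python
import math

def density_ratio(alt_ft, k1=3.2e-5, k2=4.8e-5, split=36000.0):
    """Piecewise-exponential rho/rho0 versus altitude (ft), continuous at the break:
    the upper branch is anchored to the lower branch's value at the split."""
    if alt_ft <= split:
        return math.exp(-k1 * alt_ft)
    return math.exp(-k1 * split) * math.exp(-k2 * (alt_ft - split))
```

Anchoring the upper exponential to the lower branch's value at 36,000 ft is what guarantees a continuous density function from sea level to 65,000 ft, as the report requires.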

McAnelly, L. J.

1982-07-01

386

NASA Astrophysics Data System (ADS)

Kepler photometric data contain significant systematic and stochastic errors as they come from the Kepler Spacecraft. The main causes of the systematic errors are changes in the photometer focus due to thermal changes in the instrument, as well as residual spacecraft pointing errors. It is the main purpose of the Presearch-Data-Conditioning (PDC) module of the Kepler Science processing pipeline to remove these systematic errors from the light curves. While PDC has recently seen a dramatic performance improvement by means of a Bayesian approach to systematic error correction and improved discontinuity correction, there is still room for improvement. One problem of the current (Kepler 8.1) implementation of PDC is that injection of high-frequency noise can be observed in some light curves. Although this high-frequency noise does not negatively impact the general cotrending, an increased noise level can make detection of planet transits or other astrophysical signals more difficult. The origin of this noise injection is that high-frequency components of light curves sometimes get included into detrending basis vectors characterizing long-term trends. Similarly, small-scale features like edges can sometimes get included in basis vectors which otherwise describe low-frequency trends. As a side effect of removing the trends, detrending with these basis vectors can then also mistakenly introduce these small-scale features into the light curves. A solution to this problem is to perform a separation of scales, such that small-scale features and large-scale features are described by different basis vectors. We present our new multiscale approach that employs wavelet-based band splitting to decompose small-scale from large-scale features in the light curves. The PDC Bayesian detrending can then be performed on each band individually to correct small- and large-scale systematics independently. Funding for the Kepler Mission is provided by the NASA Science Mission Directorate.
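The separation-of-scales idea can be illustrated with a single-level Haar wavelet split, which divides a toy light curve into a low-frequency band (carrying a slow trend) and a high-frequency band (carrying edge-like features), with perfect reconstruction. This is a minimal stand-in for the pipeline's multiscale band splitting, not the PDC implementation.

```python
import numpy as np

def haar_split(x):
    """One level of the Haar wavelet transform: split an even-length signal
    into a low-frequency approximation and a high-frequency detail band."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)  # approximation (large-scale trends)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)  # detail (small-scale features)
    return a, d

def haar_merge(a, d):
    """Inverse of haar_split: perfect reconstruction of the signal."""
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

# toy "light curve": slow thermal-like trend plus a sharp small-scale edge
t = np.linspace(0, 1, 256)
flux = 1.0 + 0.01 * np.sin(2 * np.pi * t)
flux[129:] += 0.005            # a discontinuity-like small-scale feature
a, d = haar_split(flux)
# detrending could now operate on each band independently; perfect
# reconstruction confirms no information is lost by the split
rec = haar_merge(a, d)
```

The edge shows up as a single dominant detail coefficient, so a basis vector built from the low band alone cannot absorb it, which is precisely the leakage the multiscale approach prevents.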

Stumpe, Martin C.; Smith, J. C.; Van Cleve, J.; Jenkins, J. M.; Barclay, T. S.; Fanelli, M. N.; Girouard, F.; Kolodziejczak, J.; McCauliff, S.; Morris, R. L.; Twicken, J. D.

2012-05-01

387

Measuring and Modeling Fault Density for Plume-Fault Encounter Probability Estimation

Emission of carbon dioxide from fossil-fueled power generation stations contributes to global climate change. Storage of this carbon dioxide within the pores of geologic strata (geologic carbon storage) is one approach to mitigating the climate change that would otherwise occur. The large storage volume needed for this mitigation requires injection into brine-filled pore space in reservoir strata overlain by cap rocks. One of the main concerns of storage in such rocks is leakage via faults. In the early stages of site selection, site-specific fault coverages are often not available. This necessitates a method for using available fault data to develop an estimate of the likelihood of injected carbon dioxide encountering and migrating up a fault, primarily due to buoyancy. Fault population statistics provide one of the main inputs to calculate the encounter probability. Previous fault population statistics work is shown to be applicable to areal fault density statistics. This result is applied to a case study in the southern portion of the San Joaquin Basin, with the result that a carbon dioxide plume from a previously planned injection had a 3% chance of encountering a fully seal-offsetting fault.

Jordan, P.D.; Oldenburg, C.M.; Nicot, J.-P.

2011-05-15

388

A method for estimating the cholesterol content of the serum low-density lipoprotein fraction (Sf 0-20) is presented. The method involves measurements of fasting plasma total cholesterol, triglyceride, and high-density lipoprotein cholesterol concentrations, none of which requires the use of the preparative ultracentrifuge. Comparison of this suggested procedure with the more direct procedure, in which the ultracentrifuge is used, yielded
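The estimate described here is the now-standard Friedewald relation, LDL-C = TC - HDL-C - TG/5 (all in mg/dL), which assumes VLDL cholesterol is roughly one-fifth of the triglyceride concentration and is conventionally not applied when triglycerides exceed about 400 mg/dL:

```python
def ldl_friedewald(total_chol, hdl, triglycerides):
    """Friedewald estimate of LDL cholesterol; all inputs in mg/dL.
    Assumes VLDL cholesterol ~= triglycerides / 5; the relation is
    conventionally not used when triglycerides >= 400 mg/dL."""
    if triglycerides >= 400:
        raise ValueError("Friedewald estimate invalid for TG >= 400 mg/dL")
    return total_chol - hdl - triglycerides / 5.0

# e.g. TC 200, HDL 50, TG 150 mg/dL -> LDL = 200 - 50 - 30 = 120 mg/dL
ldl = ldl_friedewald(200, 50, 150)
```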

William T. Friedewald; Robert I. Levy; Donald S. Fredrickson

1972-01-01

389

Estimation of refractive index and density of lubricants under high pressure by Brillouin scattering

NASA Astrophysics Data System (ADS)

Employing a diamond-anvil cell, Brillouin scattering spectra of 90° and 180° angles for synthetic lubricants (paraffinic and naphthenic oils) were measured and sound velocity, density, and refractive index under high pressure were obtained. The density obtained from the thermodynamic relation was compared with that from Lorentz-Lorentz's formula. The density was also compared with Dowson's density-pressure equation of lubricants, and density-pressure characteristics of the paraffinic oil and naphthenic oil were described considering the molecular structure for solidified lubricants. The effect of such physical properties of lubricants on the elastohydrodynamic lubrication of ball bearings, gears and traction drives was considered.

Nakamura, Y.; Fujishiro, I.; Kawakami, H.

1994-07-01

390

Near-native protein loop sampling using nonparametric density estimation accommodating sparsity.

Unlike the core structural elements of a protein, such as regular secondary structure, template-based modeling (TBM) has difficulty with loop regions due to their variability in sequence and structure as well as the sparse sampling from a limited number of homologous templates. We present a novel, knowledge-based method for loop sampling that leverages homologous torsion angle information to estimate a continuous joint backbone dihedral angle density at each loop position. The φ,ψ distributions are estimated via a Dirichlet process mixture of hidden Markov models (DPM-HMM). Models are quickly generated based on samples from these distributions and enriched using an end-to-end distance filter. The performance of the DPM-HMM method was evaluated against a diverse test set in a leave-one-out approach. Candidates as low as 0.45 Å RMSD, and with a worst case of 3.66 Å, were produced. For canonical loops like the immunoglobulin complementarity-determining regions (mean RMSD <2.0 Å), the DPM-HMM method performs as well as or better than the best templates, demonstrating that our automated method recaptures these canonical loops without inclusion of any IgG-specific terms or manual intervention. In cases with poor or few good templates (mean RMSD >7.0 Å), this sampling method produces a population of loop structures to around 3.66 Å for loops up to 17 residues. In a direct sampling comparison with the Loopy algorithm, our method demonstrates the ability to sample nearer-native structures for both the canonical CDRH1 and non-canonical CDRH3 loops. Lastly, under the realistic test conditions of the CASP9 experiment, successful application of DPM-HMM to 90 loops from 45 TBM targets shows the general applicability of our sampling method to the loop-modeling problem. These results demonstrate that our DPM-HMM produces an advantage by consistently sampling near-native loop structure.
The software used in this analysis is available for download at http://www.stat.tamu.edu/~dahl/software/cortorgles/. PMID:22028638
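As a toy stand-in for sampling from an estimated per-position backbone dihedral density, the sketch below draws (phi, psi) pairs from a two-component von Mises mixture on the torus. The component locations, weights, and concentrations are invented (loosely alpha-helical and beta-sheet basins), and the actual method uses a Dirichlet process mixture of HMMs rather than a fixed mixture.

```python
import numpy as np

rng = np.random.default_rng(4)

# invented stand-in for an estimated (phi, psi) density at one loop position:
# a two-component von Mises mixture (helix-like and sheet-like basins)
components = [
    {"w": 0.6, "phi": np.deg2rad(-63.0), "psi": np.deg2rad(-43.0), "kappa": 8.0},
    {"w": 0.4, "phi": np.deg2rad(-120.0), "psi": np.deg2rad(135.0), "kappa": 8.0},
]

def sample_dihedrals(n):
    """Draw n (phi, psi) pairs in radians from the mixture: pick a
    component by weight, then sample each angle from a von Mises."""
    out = np.empty((n, 2))
    for i in range(n):
        c = components[0] if rng.random() < components[0]["w"] else components[1]
        out[i, 0] = rng.vonmises(c["phi"], c["kappa"])
        out[i, 1] = rng.vonmises(c["psi"], c["kappa"])
    return out

samples = sample_dihedrals(1000)
```

In the actual pipeline, chains of such per-position draws are turned into backbone models and then filtered by the end-to-end distance criterion.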

Joo, Hyun; Chavan, Archana G; Day, Ryan; Lennox, Kristin P; Sukhanov, Paul; Dahl, David B; Vannucci, Marina; Tsai, Jerry

2011-10-01

391

An Air Traffic Prediction Model Based on Kernel Density Estimation

...through sophisticated flight dynamics [1]. However, for the Air Traffic Control System Command Center at an Air Route Traffic Control Center (simply denoted as Center hereafter) level [2]. It forecasts aircraft...

Cao, Yi; Zhang, Lingsong; Sun, Dengfeng

392

...in near-minimax optimal convergence rates for broad classes of densities and intensities with arbitrary levels of smoothness. For piecewise-analytic signals, in particular, the error of this estimator converges at nearly the parametric rate. These methods can be further refined in two dimensions...

Willett, Rebecca

393

A model-based meta-analysis for estimating species-specific wood density and identifying potential

We implemented a hierarchical Bayesian (HB) meta-analysis that incorporates sample size and variance, overcomes many of the limitations of traditional meta-analyses, and allows the incorporation of phylogenetic information.

Lichstein, Jeremy W.

394

Wavelet Based Texture Classification

Textures are one of the basic features in visual searching and computational vision. In the literature, most of the attention has been focussed on the texture features with minimal consideration of the noise models. In this paper we investigated the problem of texture classification from a maximum likelihood perspective. We took into account the texture model, the noise

Nicu Sebe; Michael S. Lew

2000-01-01

395

NASA Astrophysics Data System (ADS)

GPS systems can be used to measure average snow depth in the ˜1000 m2 area around the GPS antenna, a sensing footprint size intermediate between in situ and satellite observations. SWE can be calculated from density estimates modeled on the GPS-based snow depth time series. We assess the accuracy of GPS-based snow depth, density, and SWE data at 18 GPS sites via comparison to manual observations. The manual validation survey was completed around the time of peak accumulation at each site. Daily snow depth derived from GPS reflection data is very similar to the mean snow depth measured manually in the ˜1000 m2 scale area around each antenna. This comparison spans site-averaged depths from 0 to 150 cm. The GPS depth data exhibit a small negative bias (-6 cm) across this range of snow depths. Errors tend to be smaller at sites with more usable GPS ground tracks. Snow bulk density is modeled using the GPS snow depth time series and model parameters are estimated from nearby SNOTEL sites. Modeled density is within 0.02 g cm-3 of the density measured in a single snow pit at the validation sites, for 12 of 18 comparisons. GPS-based depth and modeled density are multiplied to estimate SWE. SWE estimates are very accurate over the range observed at the validation sites, from 0 to 60 cm (R2 = 0.97 and bias = -2 cm). These results show that the near real-time GPS snow products have errors small enough for monitoring water resources in snow-dominated basins.
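The SWE calculation itself is the product of snow depth and bulk density, normalized by the density of water; a one-line sketch with hypothetical numbers:

```python
def swe_cm(snow_depth_cm, bulk_density_g_cm3, water_density_g_cm3=1.0):
    """Snow water equivalent: depth times bulk density over water density."""
    return snow_depth_cm * bulk_density_g_cm3 / water_density_g_cm3

# hypothetical values: 150 cm of snow at 0.35 g/cm^3 gives about 52.5 cm SWE
swe = swe_cm(150, 0.35)
```

Because SWE is this simple product, the reported depth bias (-6 cm) and density error (0.02 g/cm^3) propagate multiplicatively into the -2 cm SWE bias the authors observe.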

McCreight, James L.; Small, Eric E.; Larson, Kristine M.

2014-08-01

396

Delayed density-dependent mortality can be a cause of the cyclic patterns in abundance observed in many populations of sockeye salmon (Oncorhynchus nerka). We used a meta-analytical approach to test for delayed density dependence using 34 time series of sockeye data. We found no consistent evidence for delayed density-dependent mortality using spawner - spring fry or spawner-recruit data. We did find

Ransom A. Myers; Michael J. Bradford; Jessica M. Bridson; Gordon Mertz

1997-01-01

397

We present a method of direct estimation of important properties of a shared bipartite quantum state, within the ''distant laboratories'' paradigm, using only local operations and classical communication. We apply this procedure to spectrum estimation of shared states, and locally implementable structural physical approximations to incompletely positive maps. This procedure can also be applied to the estimation of channel capacity and measures of entanglement.

Alves, Carolina Moura [Clarendon Laboratory, University of Oxford, Parks Road, Oxford OX1 3PU, (United Kingdom); Centre for Quantum Computation, DAMTP, University of Cambridge, Wilberforce Road, Cambridge CB3 0WA, (United Kingdom); Horodecki, Pawel [Faculty of Applied Physics and Mathematics, Technical University of Gdansk, 80-952 Gdansk, (Poland); Oi, Daniel K. L. [Centre for Quantum Computation, DAMTP, University of Cambridge, Wilberforce Road, Cambridge CB3 0WA, (United Kingdom); Kwek, L. C. [Department of Natural Sciences, National Institute of Education, Nanyang Technological University, 1 Nanyang Walk, Singapore 637616, (Singapore); Ekert, Artur K. [Centre for Quantum Computation, DAMTP, University of Cambridge, Wilberforce Road, Cambridge CB3 0WA, (United Kingdom); Department of Physics, National University of Singapore, 2 Science Drive 3, Singapore 117542, (Singapore)

2003-09-01

398

Establishment of estimation lightning density method with lightning location system data

The lightning strike density level in areas through which power transmission lines will pass is an important factor in designing lightning-proof transmission lines. Formerly, IKL (isokeraunic level) maps were used to evaluate lightning strike density. However, the authors' studies indicated that the IKL map and the actual record of lightning strike troubles affecting their company's transmission lines do not necessarily correspond.

M. Suzuki; N. Katagiri; K. Ishikawa

1999-01-01

399

A model of the electron density irregularities in the Jovian ionosphere is constructed based on a preliminary interpretation of the Pioneer 10 Jovian ionospheric scintillations. The ionospheric irregularities exist over an altitude range of 3000 km. The structure constant Cn of refractive index fluctuations is constant throughout this altitude range. The spatial wavenumber spectrum of the electron density irregularities

R. Woo; F. C. Yang

1975-01-01

400

Background Microscopic examination using Giemsa-stained thick blood films remains the reference standard for detection of malaria parasites, and it is the only method that is widely and practically available for quantifying malaria parasite density. There are few published data (and no study during pregnancy) investigating the parasite density (the ratio of parasites counted within a given number of microscopic fields to the counted white blood cells (WBCs)) estimated using the actual WBC count. Methods Parasitaemia was estimated using an assumed WBC count (8,000), which was compared to parasitaemia calculated based on each woman's actual WBC count in 98 pregnant women with uncomplicated Plasmodium falciparum malaria at Medani Maternity Hospital, Central Sudan. Results The geometric mean (SD) of the parasite count was 12,014.6 (9,766.5) and 7,870.8 (19,168.8) ring trophozoites/µl, P <0.001, using the actual and assumed (8,000) WBC count, respectively. The median (range) of the ratio between the two parasitaemias (assumed/actual WBCs) was 1.5 (0.6-5); i.e., parasitaemia calculated using the assumed WBC count was a median (range) of 1.5 (0.6-5) times higher than parasitaemia calculated using the actual WBCs. There were 52 out of 98 patients (53%) with a ratio between 0.5 and 1.5. For 21 patients (21%) this ratio was higher than 2, and for five patients (5%) it was higher than 3. Conclusion The estimated parasite density using actual WBC counts was significantly lower than the parasite density estimated using assumed WBC counts. Therefore, it is recommended to use the patient's actual WBC count in the estimation of the parasite density. PMID:24386962
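The underlying thick-film calculation scales the parasite count by the ratio of the WBC concentration to the number of WBCs counted alongside. The counts below are hypothetical, but they show how an assumed 8,000 WBC/µl inflates the estimate whenever the patient's true count is lower:

```python
def parasite_density(parasites_counted, wbcs_counted, wbc_per_ul):
    """Parasites per microlitre from a thick-film count: scale the
    parasite tally by (WBC concentration / WBCs counted)."""
    return parasites_counted * wbc_per_ul / wbcs_counted

# hypothetical film: 300 parasites counted against 200 WBCs
assumed = parasite_density(300, 200, 8_000)  # assumed 8,000 WBC/µl
actual = parasite_density(300, 200, 5_500)   # measured 5,500 WBC/µl
ratio = assumed / actual                      # overestimation factor
```

The overestimation factor is simply assumed/actual WBC concentration, which is why the study's median ratio of 1.5 implies median actual counts well below 8,000/µl.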

2014-01-01

401

NASA Technical Reports Server (NTRS)

To determine whether estimates of volumetric bone density from projectional scans of the lumbar spine have weaker associations with height and weight and stronger associations with prevalent vertebral fractures than standard projectional bone mineral density (BMD) and bone mineral content (BMC), we obtained posteroanterior (PA) dual X-ray absorptiometry (DXA), lateral supine DXA (Hologic QDR 2000), and quantitative computed tomography (QCT, GE 9800 scanner) in 260 postmenopausal women enrolled in two trials of treatment for osteoporosis. In 223 women, all vertebral levels, i.e., L2-L4 in the DXA scan and L1-L3 in the QCT scan, could be evaluated. Fifty-five women were diagnosed as having at least one mild fracture (age 67.9 +/- 6.5 years) and 168 women did not have any fractures (age 62.3 +/- 6.9 years). We derived three estimates of "volumetric bone density" from PA DXA (BMAD, BMAD*, and BMD*) and three from paired PA and lateral DXA (WA BMD, WA BMDHol, and eVBMD). While PA BMC and PA BMD were significantly correlated with height (r = 0.49 and r = 0.28) or weight (r = 0.38 and r = 0.37), QCT and the volumetric bone density estimates from paired PA and lateral scans were not (r = -0.083 to r = 0.050). BMAD, BMAD*, and BMD* correlated with weight but not height. The associations with vertebral fracture were stronger for QCT (odds ratio [OR] = 3.17; 95% confidence interval [CI] = 1.90-5.27), eVBMD (OR = 2.87; CI 1.80-4.57), WA BMDHol (OR = 2.86; CI 1.80-4.55) and WA BMD (OR = 2.77; CI 1.75-4.39) than for BMAD*/BMD* (OR = 2.03; CI 1.32-3.12), BMAD (OR = 1.68; CI 1.14-2.48), lateral BMD (OR = 1.88; CI 1.28-2.77), standard PA BMD (OR = 1.47; CI 1.02-2.13) or PA BMC (OR = 1.22; CI 0.86-1.74). The areas under the receiver operating characteristic (ROC) curves for QCT and all estimates of volumetric BMD were significantly higher compared with standard PA BMD and PA BMC.
We conclude that, like QCT, estimates of volumetric bone density from paired PA and lateral scans are unaffected by height and weight and are more strongly associated with vertebral fracture than standard PA BMD or BMC, or estimates of volumetric density that are solely based on PA DXA scans.

Jergas, M.; Breitenseher, M.; Gluer, C. C.; Yu, W.; Genant, H. K.

1995-01-01

402

Comparison of precision orbit derived density estimates for CHAMP and GRACE satellites

NASA Astrophysics Data System (ADS)

Current atmospheric density models cannot adequately represent the density variations observed by satellites in Low Earth Orbit (LEO). Using an optimal orbit determination process, precision orbit ephemerides (POE) are used as measurement data to generate corrections to density values obtained from existing atmospheric models. Densities obtained using these corrections are then compared to density data derived from the onboard accelerometers of satellites, specifically the CHAMP and GRACE satellites. This comparison takes two forms: cross correlation analysis and root mean square analysis. The densities obtained from the POE method are nearly always superior to the empirical models, both in matching the trends observed by the accelerometer (cross correlation) and the magnitudes of the accelerometer-derived density (root mean square). In addition, this method consistently produces better results than those achieved by the High Accuracy Satellite Drag Model (HASDM). For satellites orbiting Earth that pass through Earth's upper atmosphere, drag is the primary source of uncertainty in orbit determination and prediction. Variations in density, which are often not modeled or are inaccurately modeled, cause difficulty in properly calculating the drag acting on a satellite. These density variations are the result of many factors; however, the Sun is the main driver in upper atmospheric density changes. The Sun influences the densities in Earth's atmosphere through solar heating of the atmosphere, as well as through geomagnetic heating resulting from the solar wind. Data are examined for fourteen-hour time spans between November 2004 and July 2009 for both the CHAMP and GRACE satellites. These data span all available levels of solar and geomagnetic activity, although the elevated and high solar activity bins are empty due to the nature of the solar cycle.
Density solutions are generated from corrections to five different baseline atmospheric models, as well as nine combinations of density and ballistic coefficient correlated half-lives. These half-lives are varied among values of 1.8, 18, and 180 minutes. A total of forty-five sets of results emerge from the orbit determination process for all combinations of baseline density model and half-lives. Each time period is examined for both CHAMP and GRACE-A, and the results are analyzed. Results are averaged from all solution periods for 2004-2007. In addition, results are averaged after binning according to solar and geomagnetic activity levels. For any given day in this period, a ballistic coefficient correlated half-life of 1.8 minutes yields the best correlation and root mean square values for both CHAMP and GRACE. For CHAMP, a density correlated half-life of 18 minutes is best for higher levels of solar and geomagnetic activity, while for lower levels 180 minutes is usually superior. For GRACE, 180 minutes is nearly always best. The three Jacchia-based atmospheric models yield very similar results. The CIRA 1972 or Jacchia 1971 models as baseline consistently produce the best results for both satellites, though results obtained for Jacchia-Roberts are very similar to the other Jacchia-based models. Data are examined in a similar manner for the extended solar minimum period during 2008 and 2009, albeit with a much smaller sampling of data. With the exception of some atypical results, similar combinations of half-lives and baseline atmospheric model produce the best results. A greater sampling of data will aid in characterizing density in a period of especially low solar activity. In general, cross correlation values for CHAMP and GRACE revealed that the POE method matched trends observed by the accelerometers very well. However, one period of time deviated from this trend for the GRACE-A satellite.
Between late October 2005 and January 2006, correlations for GRACE-A were very low. Special examination of the surrounding months revealed the extent of time this period covered. Half-life and baseline model combinations that produced the best results during this time wer
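The two comparison metrics used throughout this study, cross correlation for trends and root mean square for magnitudes, can be sketched on synthetic density series. The series below are invented for illustration: a "corrected" model that tracks the reference and an uncorrected baseline that does not.

```python
import numpy as np

def cc_and_rms(model, reference):
    """Zero-lag cross correlation (Pearson r, trend agreement) and
    root-mean-square difference (magnitude agreement) of two series."""
    m = (model - model.mean()) / model.std()
    r = (reference - reference.mean()) / reference.std()
    cc = np.mean(m * r)
    rms = np.sqrt(np.mean((model - reference) ** 2))
    return cc, rms

rng = np.random.default_rng(2)
t = np.linspace(0, 14, 400)                      # a 14-hour span, arbitrary units
ref = 1.0 + 0.3 * np.sin(2 * np.pi * t / 1.5)    # "accelerometer-derived" density
good = ref + 0.02 * rng.standard_normal(t.size)  # POE-corrected model (tracks ref)
poor = 1.0 + 0.1 * rng.standard_normal(t.size)   # uncorrected baseline
cc_good, rms_good = cc_and_rms(good, ref)
cc_poor, rms_poor = cc_and_rms(poor, ref)
```

High cross correlation with low RMS is the signature the paper reports for the POE-corrected densities relative to the empirical models.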

Fattig, Eric Dale

403

NASA Astrophysics Data System (ADS)

Reliable predictions of groundwater flow and solute transport require an estimation of the detailed distribution of the parameters (e.g., hydraulic conductivity, effective porosity) controlling these processes. However, such parameters are difficult to estimate because of the inaccessibility and complexity of the subsurface. In this regard, developments in parameter estimation techniques and investigations of field experiments are still challenging and necessary to improve our understanding and the prediction of hydrological processes. Here we analyze a conservative tracer test conducted at the Boise Hydrogeophysical Research Site in 2001 in a heterogeneous unconfined fluvial aquifer. Some relevant characteristics of this test include: variable-density (sinking) effects because of the injection concentration of the bromide tracer, the relatively small size of the experiment, and the availability of various sources of geophysical and hydrological information. The information contained in this experiment is evaluated through several parameter estimation approaches, including a grid-search-based strategy, stochastic simulation of hydrological property distributions, and deterministic inversion using regularization and pilot-point techniques. Doing this allows us to investigate hydraulic conductivity and effective porosity distributions and to compare the effects of assumptions from several methods and parameterizations. Our results provide new insights into the understanding of variable-density transport processes and the hydrological relevance of incorporating various sources of information in parameter estimation approaches. Among others, the variable-density effect and the effective porosity distribution, as well as their coupling with the hydraulic conductivity structure, are seen to be significant in the transport process. The results also show that assumed prior information can strongly influence the estimated distributions of hydrological properties.

Dafflon, B.; Barrash, W.; Cardiff, M.; Johnson, T. C.

2011-12-01

404

Estimation and experimental study of the density and specific heat for alumina nanofluid

This study analyses the density and specific heat of alumina (Al2O3)/water nanofluid to determine the feasibility of relative calculations. The Al2O3/water nanofluid, produced by the direct-synthesis method with a cationic chitosan dispersant, served as the experimental sample and was dispersed into three concentrations of 0.5, 1.0 and 1.5 wt.%. This experiment measures the density and specific heat of nanofluid with weight
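For context, nanofluid density is commonly estimated with a volume-weighted mixing rule (an analogous rule weights the volumetric heat capacity rho*cp). The sketch below applies that rule to the study's 0.5-1.5 wt.% range using nominal, assumed densities for alumina and water; it is not the paper's measured data.

```python
def weight_to_volume_fraction(w, rho_p, rho_bf):
    """Convert nanoparticle weight fraction to volume fraction."""
    return (w / rho_p) / (w / rho_p + (1 - w) / rho_bf)

def mixture_density(phi, rho_p, rho_bf):
    """Volume-weighted (ideal) mixing rule for nanofluid density."""
    return phi * rho_p + (1 - phi) * rho_bf

rho_al2o3, rho_water = 3970.0, 998.0   # kg/m^3, nominal assumed values
densities = []
for w in (0.005, 0.010, 0.015):        # the study's 0.5, 1.0, 1.5 wt.%
    phi = weight_to_volume_fraction(w, rho_al2o3, rho_water)
    densities.append(mixture_density(phi, rho_al2o3, rho_water))
```

At these low loadings the predicted density rises only about 1% above that of water, which is why precise measurement is needed to test the mixing rule's feasibility.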

Tun-Ping Teng; Yi-Hsuan Hung

2012-01-01

405

Density estimation and survey validation for swift fox Vulpes velox in Oklahoma

The swift fox Vulpes velox Say, 1823, a small canid native to shortgrass prairie ecosystems of North America, has been the subject of enhanced conservation and research interest because of restricted distribution and low densities. Previous studies have described distributions of the species in the southern Great Plains, but data on density are required to evaluate indices of relative abundance

Marc A. Criffield; Eric C. Hellgren; David M. LESLIE Jr

2010-01-01

406

Population Indices Versus Correlated Density Estimates of Black-Footed Ferret Abundance

Estimating abundance of carnivore populations is problematic because individuals typically are elusive, nocturnal, and dispersed across the landscape. Rare or endangered carnivore populations are even more difficult to estimate because of small sample sizes. Considering behavioral ecology of the target species can drastically improve survey efficiency and effectiveness. Previously, abundance of the black-footed ferret (Mustela nigripes) was monitored by spotlighting

Martin B. Grenier; Steven W. Buskirk; Richard Anderson-Sprecher

2009-01-01

407

Biodiesel fuels (methyl or ethyl esters derived from vegetable oils and animal fats) are currently being used as a means to diminish crude oil dependency and to limit the greenhouse gas emissions of the transportation sector. However, their physical properties differ from those of traditional fossil fuels, making their effect on new, electronically controlled vehicles uncertain. Density is one of those properties, and its implications go even further. First, because governments are expected to boost the use of high-biodiesel-content blends, but biodiesel fuels are denser than fossil ones. In consequence, their blending proportion is indirectly restricted in order not to exceed the maximum density limit established in fuel quality standards. Second, because an accurate knowledge of biodiesel density permits the estimation of other properties such as the Cetane Number, whose direct measurement is complex and presents low repeatability and low reproducibility. In this study we compile densities of methyl and ethyl esters published in the literature, and propose equations to convert them to 15 degrees C and to predict the biodiesel density based on its chain length and unsaturation degree. Both expressions were validated for a wide range of commercial biodiesel fuels. Using the latter, we define a term called the Biodiesel Cetane Index, which predicts the Biodiesel Cetane Number with high accuracy. Finally, simple calculations prove that the introduction of high-biodiesel-content blends in the fuel market would force refineries to reduce the density of their fossil fuels. PMID:20599853

Lapuerta, Magín; Rodríguez-Fernández, José; Armas, Octavio

2010-09-01

408

This paper studies the time-dependent power spectral density (PSD) estimation of nonstationary surface electromyography (SEMG) signals and its application to fatigue analysis during isometric muscle contraction. The conventional time-dependent PSD estimation methods exhibit large variabilities in estimating the instantaneous SEMG parameters so that they often fail to identify the changing patterns of short-period SEMG signals and gauge the extent of fatigue in specific muscle groups. To address this problem, a time-varying autoregressive (TVAR) model is proposed in this paper to describe the SEMG signal, and then the recursive least-squares (RLS) and basis function expansion (BFE) methods are used to estimate the model coefficients and the time-dependent PSD. The instantaneous parameters extracted from the PSD estimation are evaluated and compared in terms of reliability, accuracy, and complexity. Experimental results on synthesized and real SEMG data show that the proposed TVAR-model-based PSD estimators can achieve more stable and precise instantaneous parameter estimation than conventional methods. PMID:19027325
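A minimal sketch of the TVAR-plus-RLS idea: recursive least squares with a forgetting factor tracks a time-varying AR coefficient through an abrupt change, the same mechanism used to follow nonstationary SEMG spectra. The signal, model order, and forgetting factor here are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def rls_tvar(x, order=2, lam=0.98):
    """Track time-varying AR coefficients with recursive least squares;
    a forgetting factor lam < 1 lets the estimate follow nonstationarity."""
    n = len(x)
    theta = np.zeros(order)
    P = np.eye(order) * 1e3          # large initial covariance (weak prior)
    coeffs = np.zeros((n, order))
    for t in range(order, n):
        phi = x[t - order:t][::-1]           # regressor of past samples
        k = P @ phi / (lam + phi @ P @ phi)  # RLS gain
        theta = theta + k * (x[t] - phi @ theta)
        P = (P - np.outer(k, phi @ P)) / lam
        coeffs[t] = theta
    return coeffs

# AR(1) signal whose coefficient jumps from 0.5 to 0.9 halfway through
rng = np.random.default_rng(3)
n = 4000
x = np.zeros(n)
for t in range(1, n):
    a = 0.5 if t < n // 2 else 0.9
    x[t] = a * x[t - 1] + 0.1 * rng.standard_normal()
c = rls_tvar(x, order=1)
```

Once the time-varying coefficients are in hand, the instantaneous PSD at each sample follows from the standard AR spectrum formula evaluated with the coefficients at that time.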

Zhang, Z G; Liu, H T; Chan, S C; Luk, K D K; Hu, Y

2010-02-01

409

NASA Technical Reports Server (NTRS)

Parameterizations of the frontal area index and canopy area index of natural or randomly distributed plants are developed and applied to the estimation of local aerodynamic roughness using satellite imagery. The formulas are expressed in terms of the subpixel fractional vegetation cover and one non-dimensional geometric parameter that characterizes the plant's shape. Geometrically similar plants and Poisson-distributed plant centers are assumed. An appropriate averaging technique to extend satellite pixel-scale estimates to larger scales is provided. The parameterization is applied to the estimation of aerodynamic roughness using satellite imagery for a 2.3 sq km coniferous portion of the Landes Forest near Lubbon, France, during the 1986 HAPEX-Mobilhy Experiment. The canopy area index is estimated first for each pixel in the scene based on previous estimates of fractional cover obtained using Landsat Thematic Mapper imagery. Next, the results are incorporated into Raupach's (1992, 1994) analytical formulas for momentum roughness and zero-plane displacement height. The estimates compare reasonably well to reference values determined from measurements taken during the experiment and to published literature values. The approach offers the potential for estimating regionally variable vegetation aerodynamic roughness lengths over natural regions using satellite imagery when there exists only limited knowledge of the vegetated surface.

Jasinski, Michael F.; Crago, Richard

1994-01-01

410

Once abundant and widely distributed, the Bahama parrot (Amazona leucocephala bahamensis) currently inhabits only the Great Abaco and Great lnagua Islands of the Bahamas. In January 2003 and May 2002-2004, we conducted point-transect surveys (a type of distance sampling) to estimate density and population size and make recommendations for monitoring trends. Density ranged from 0.061 (SE = 0.013) to 0.085 (SE = 0.018) parrots/ha and population size ranged from 1,600 (SE = 354) to 2,386 (SE = 508) parrots when extrapolated to the 26,154 ha and 28,162 ha covered by surveys on Abaco in May 2002 and 2003, respectively. Density was 0.183 (SE = 0.049) and 0.153 (SE = 0.042) parrots/ha and population size was 5,344 (SE = 1,431) and 4,450 (SE = 1,435) parrots when extrapolated to the 29,174 ha covered by surveys on Inagua in May 2003 and 2004, respectively. Because parrot distribution was clumped, we would need to survey 213-882 points on Abaco and 258-1,659 points on Inagua to obtain a CV of 10-20% for estimated density. Cluster size and its variability and clumping increased in wintertime, making surveys imprecise and cost-ineffective. Surveys were reasonably precise and cost-effective in springtime, and we recommend conducting them when parrots are pairing and selecting nesting sites. Survey data should be collected yearly as part of an integrated monitoring strategy to estimate density and other key demographic parameters and improve our understanding of the ecological dynamics of these geographically isolated parrot populations at risk of extinction.
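The extrapolation step reported above is a fixed-area scaling of the point-transect density: population = density x surveyed area, with the SE scaling the same way under this simplification. Using the Abaco May 2002 figures, the scaling reproduces the reported point estimate; the published SE of 354 also includes variance components this sketch omits.

```python
def population_estimate(density_per_ha, se_density, area_ha):
    """Extrapolate a point-transect density estimate to a survey area.
    Under simple fixed-area scaling, the SE scales linearly with area."""
    return density_per_ha * area_ha, se_density * area_ha

# Abaco, May 2002: 0.061 (SE 0.013) parrots/ha over 26,154 ha
n_hat, se = population_estimate(0.061, 0.013, 26_154)
cv = se / n_hat   # coefficient of variation of the extrapolated estimate
```

The CV here (about 0.21) matches the 10-20% precision target the authors use when computing how many survey points would be needed.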

Rivera-Milan, F. F.; Collazo, J. A.; Stahala, C.; Moore, W. J.; Davis, A.; Herring, G.; Steinkamp, M.; Pagliaro, R.; Thompson, J. L.; Bracey, W.

2005-01-01

411

Individual hydrogen bond (HB) energies have been estimated in several systems involving multiple HBs, such as adenine–thymine and guanine–cytosine, using electron charge densities calculated at X-H hydrogen bond critical points (HBCPs) by the atoms in molecules (AIM) method at the B3LYP/6-311++G** and MP2/6-311++G** levels. A symmetrical system with two identical H bonds has been selected to search for simple relations between ρHBCP

A. Ebrahimi; S. M. Habibi Khorassani; H. Delarami

2009-01-01

412

Modeled salt density for nuclear material estimation in the treatment of spent nuclear fuel

NASA Astrophysics Data System (ADS)

Spent metallic nuclear fuel is being treated in a pyrometallurgical process that includes electrorefining the uranium metal in molten eutectic LiCl-KCl as the supporting electrolyte. We report a model for determining the density of the molten salt. Material balances account for the net mass of salt and for the mass of actinides present. The molten salt density must be known for these balances but is difficult to measure, so we modeled it for the initial treatment operations. The model assumes, as a starting point, that volumes are additive for an ideal molten salt solution; a correction factor for the lanthanides and actinides was then developed. After applying the correction factor, the percent difference between the net salt mass in the electrorefiner and the modeled salt mass decreased from more than 4.0% to approximately 0.1%. As a result, there is no need to measure the salt density at 500 °C for inventory operations; the model for the salt density is found to be accurate.
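The ideal-solution starting point the abstract describes can be sketched as follows: with additive volumes, the mixture density is the total mass divided by the sum of component volumes, optionally scaled by an empirical correction factor. The component masses and densities below are illustrative placeholders, not values from the paper.

```python
# Sketch of an additive-volume (ideal-solution) salt density model with an
# empirical correction factor, as the abstract outlines. All component
# values here are hypothetical, chosen only to make the example run.

def ideal_salt_density(masses, densities, correction=1.0):
    """masses: component -> grams; densities: component -> g/cm^3.
    Ideal mixing assumes volumes are additive: V = sum(m_i / rho_i)."""
    total_mass = sum(masses.values())
    total_volume = sum(masses[c] / densities[c] for c in masses)
    return correction * total_mass / total_volume

masses = {"LiCl-KCl": 950.0, "UCl3": 40.0, "NdCl3": 10.0}  # g (illustrative)
rho = {"LiCl-KCl": 1.62, "UCl3": 5.0, "NdCl3": 3.7}        # g/cm^3 (illustrative)
print(f"{ideal_salt_density(masses, rho):.3f} g/cm^3")
```

A fitted `correction` factor for the lanthanide/actinide chlorides would play the role of the correction the abstract reports.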

Mariani, Robert D.; Vaden, DeeEarl

2010-09-01

413

This article considers a broad class of kernel mixture density models on compact metric spaces and manifolds. Following a Bayesian approach with a nonparametric prior on the location mixing distribution, sufficient conditions are obtained on the kernel, prior and the underlying space for strong posterior consistency at any continuous density. The prior is also allowed to depend on the sample size n and sufficient conditions are obtained for weak and strong consistency. These conditions are verified on compact Euclidean spaces using multivariate Gaussian kernels, on the hypersphere using a von Mises-Fisher kernel and on the planar shape space using complex Watson kernels. PMID:22984295

Bhattacharya, Abhishek; Dunson, David B.

2012-01-01

414

Estimating density of ship rats in New Zealand forests by capture-mark-recapture trapping

We developed a capture-mark-recapture protocol for measuring the population density (D) of ship rats (Rattus rattus) in forest. Either mesh cage traps or Elliott box traps were set at each of six sites (48 traps per site for 5 nights) in the Orongorongo Valley on two occasions in autumn 2003. Cage traps only were set at three sites in autumn

Deborah J. Wilson; Murray G. Efford; Samantha J. Brown; John F. Williamson; Gary J. McElrea

2007-01-01

415

WITH MIXTURE DENSITY HMMS. Mikko Kurimo and Panu Somervuo, Helsinki University of Technology. Neural Networks) algorithm. The advantage of using the SOM is based on the approximate topology created between the mixtures. The topology makes neighboring mixtures respond strongly to the same inputs, and so most of the nearest

Kurimo, Mikko

416

PELLET COUNT INDICES COMPARED TO MARK-RECAPTURE ESTIMATES FOR EVALUATING SNOWSHOE HARE DENSITY

Snowshoe hares (Lepus americanus) undergo remarkable cycles and are the primary prey base of Canada lynx (Lynx canadensis), a carnivore recently listed as threatened in the contiguous United States. Efforts to evaluate hare densities using pellets have traditionally been based on regression equations developed in the Yukon, Canada. In western Montana, we evaluated whether or not local regression equations

L. SCOTT MILLS; KAREN E. HODGES

417

Reliable predictions of groundwater flow and solute transport require an estimation of the detailed distribution of the parameters (e.g., hydraulic conductivity, effective porosity) controlling these processes. However, such parameters are difficult to estimate because of the inaccessibility and complexity of the subsurface. In this regard, developments in parameter estimation techniques and investigations of field experiments are still challenging and necessary to improve our understanding and the prediction of hydrological processes. Here we analyze a conservative tracer test conducted at the Boise Hydrogeophysical Research Site in 2001 in a heterogeneous unconfined fluvial aquifer. Some relevant characteristics of this test include: variable-density (sinking) effects because of the injection concentration of the bromide tracer, the relatively small size of the experiment, and the availability of various sources of geophysical and hydrological information. The information contained in this experiment is evaluated through several parameter estimation approaches, including a grid-search-based strategy, stochastic simulation of hydrological property distributions, and deterministic inversion using regularization and pilot-point techniques. Doing this allows us to investigate hydraulic conductivity and effective porosity distributions and to compare the effects of assumptions from several methods and parameterizations. Our results provide new insights into the understanding of variable-density transport processes and the hydrological relevance of incorporating various sources of information in parameter estimation approaches. Among others, the variable-density effect and the effective porosity distribution, as well as their coupling with the hydraulic conductivity structure, are seen to be significant in the transport process. The results also show that assumed prior information can strongly influence the estimated distributions of hydrological properties.

Dafflon, Baptisite; Barrash, Warren; Cardiff, Michael A.; Johnson, Timothy C.

2011-12-15

418

NASA Astrophysics Data System (ADS)

Despite recent advances in the development of satellite sensors for monitoring precipitation at high spatial and temporal resolutions, the assessment of rainfall climatology still relies strongly on ground-station measurements. The Global Historical Climatology Network (GHCN) is one of the most popular station databases available to the international community. Nevertheless, the spatial distribution of these stations is not always homogeneous and the record length largely varies for each station. This study aimed to evaluate how the number of years recorded in the GHCN stations and the density of the network affect the uncertainties of annual rainfall climatology estimates in Latin America. The method applied was divided in two phases. In the first phase, Monte Carlo simulations were performed to evaluate how the number of samples and the characteristics of rainfall regime affect estimates of annual average rainfall. The simulations were performed using gamma distributions with pre-defined parameters, which generated synthetic annual precipitation records. The average and dispersion of the synthetic records were then estimated through the L-moments approach and compared with the original probability distribution that was used to produce the samples. The number of records (n) used in the simulation varied from 10 to 150, reproducing the range of number of years typically found in meteorological stations. A power function, in the form RMSE = f(n) = c·n^a, where the coefficients were defined as a function of the rainfall statistical dispersion, was applied to fit the errors. In the second phase of the assessment, the results of the simulations were extrapolated to real records obtained by the GHCN over Latin America, creating estimates of errors associated with number of records and rainfall characteristics in each station.
To generate a spatially-explicit representation of the uncertainties, the errors in each station were interpolated using the inverse distance weighting method. Furthermore, the effect of the density of stations was also considered by penalizing the interpolated errors proportionally to the station density in the site. The results showed a large discrepancy on rainfall estimate uncertainties among Latin American countries. The uncertainties varied from less than 2% in the Southeastern region of Brazil, to around 40% in regions with low station density and short time series in southern Peru. Therefore, the results highlight the importance of international cooperation for climate data sharing among Latin American countries. In this context, projects aiming at improving scientific cooperation and fostering information based policy such as EUROCLIMA and RALCEA, funded by the European Commission, offer an important opportunity for reducing uncertainties on estimates of climate variables in Latin America.
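The first-phase simulation can be sketched as follows: draw synthetic n-year records from a gamma distribution, measure the RMSE of the estimated mean against the true mean, and fit the power law RMSE = c·n^a. This is a simplified stand-in (the study used the L-moments approach rather than the plain sample mean), and the gamma parameters below are illustrative, not from the paper.

```python
# Sketch of the Monte Carlo error assessment: RMSE of the mean of n synthetic
# gamma-distributed annual records, fitted to RMSE = c * n**a in log space.
import math
import random

random.seed(42)
shape, scale = 4.0, 300.0          # gamma parameters (illustrative rainfall regime)
true_mean = shape * scale

def rmse_of_mean(n, reps=2000):
    """RMSE of the sample mean of n records, over `reps` simulated records."""
    errs = []
    for _ in range(reps):
        record = [random.gammavariate(shape, scale) for _ in range(n)]
        errs.append(sum(record) / n - true_mean)
    return math.sqrt(sum(e * e for e in errs) / reps)

ns = [10, 25, 50, 100, 150]
rmses = [rmse_of_mean(n) for n in ns]

# Least-squares fit of log(RMSE) = log(c) + a * log(n)
lx = [math.log(n) for n in ns]
ly = [math.log(r) for r in rmses]
mx, my = sum(lx) / len(lx), sum(ly) / len(ly)
a = sum((x - mx) * (y - my) for x, y in zip(lx, ly)) / sum((x - mx) ** 2 for x in lx)
c = math.exp(my - a * mx)
print(f"RMSE ~ {c:.1f} * n^{a:.2f}")   # a should come out near -0.5 for the sample mean
```

For the sample mean, theory gives RMSE = sigma/sqrt(n), so the fitted exponent a recovers roughly -0.5; the study's fitted coefficients instead depend on the rainfall dispersion at each station.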

Maeda, E.; Arevalo, J.; Carmona-Moreno, C.

2012-04-01

419

The family of weighted likelihood estimators largely overlaps with that of minimum divergence estimators. Compared to the MLE, they are robust to data contamination. We define the class of generalized weighted likelihood estimators (GWLE), provide its influence function, and discuss the efficiency requirements. We introduce a new truncated cubic-inverse weight, which is both first- and second-order efficient and more robust than previously reported weights. We also discuss new ways of selecting the smoothing bandwidth and weighted starting values for the iterative algorithm. The advantage of the truncated cubic-inverse weight is illustrated in a simulation study of a three-component normal mixture model with large overlaps and heavy contamination. A real data example is also provided. PMID:20835375

Zhan, Tingting; Chevoneva, Inna; Iglewicz, Boris

2010-01-01

420

Sums of random variables appear frequently in several areas of the pure and applied sciences. When the variables are independent the sum density is the convolution of individual density functions. Convolution is almost always computationally intensive. We examine here the point estimation of i.i.d. sum densities and introduce the idea of an importance sampling convolver. This motivates an approximate analytical
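The premise this abstract starts from, that the density of a sum of independent variables is the convolution of the individual densities, can be illustrated numerically. The sketch below builds the density of a sum of three i.i.d. Exp(1) variables by repeated grid convolution and compares it with the known Erlang(3, 1) density; it shows the (computationally intensive) direct approach the abstract's importance-sampling convolver is meant to avoid.

```python
# Sketch: density of an i.i.d. sum by direct numerical convolution.
# For X1 + X2 + X3 with Xi ~ Exp(1), the exact answer is the Erlang(3,1)
# density x^2 * exp(-x) / 2, which the grid result should approximate.
import math

def convolve(f, g, dx):
    """Discrete approximation of (f * g)(x) on a common uniform grid."""
    n = len(f)
    out = [0.0] * n
    for i in range(n):
        s = 0.0
        for j in range(i + 1):
            s += f[j] * g[i - j]
        out[i] = s * dx
    return out

dx, n = 0.01, 1000
grid = [i * dx for i in range(n)]
expo = [math.exp(-x) for x in grid]      # Exp(1) density on the grid

dens = expo
for _ in range(2):                       # density of X1 + X2 + X3
    dens = convolve(dens, expo, dx)

i = 200                                  # x = 2.0
exact = (grid[i] ** 2) * math.exp(-grid[i]) / 2
print(f"numeric {dens[i]:.4f} vs exact {exact:.4f}")
```

Each convolution is O(n^2) on the grid, which is the cost the abstract calls "almost always computationally intensive"; FFT-based convolution or the paper's importance-sampling approach reduces this.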

Rajan Srinivasan

1998-01-01

421

NASA Astrophysics Data System (ADS)

Cold rolling (CR) leads to heavy changes in the crystallographic texture and microstructure; in particular, crystal defects such as dislocations and stacking faults increase. The microstructure evolution in commercially pure titanium (cp-Ti) deformed by CR at room temperature was determined using synchrotron peak-profile analysis of the full width at half maximum (FWHM). The computer program ANIZC was used to calculate the diffraction contrast factors of dislocations in elastically anisotropic hexagonal crystals. The dislocation density has a minimum at 40 pct reduction. The increase of the dislocation density at higher deformation levels is caused by the nucleation of a new generation of dislocations from the crystallite grain boundaries. In the commercially pure titanium, the high-cycle fatigue (HCF) strength has a maximum at 80 pct reduction and a minimum at 40 pct reduction.

ALkhazraji, Hasan; Salih, Mohammed Z.; Zhong, Zhengye; Mhaede, Mansour; Brokmeier, Hans-Günter; Wagner, Lothar; Schell, N.

2014-08-01

422

NASA Astrophysics Data System (ADS)

A sunlit conductive spacecraft, immersed in tenuous plasma, will attain a positive potential relative to the ambient plasma. This potential is primarily governed by solar irradiation, which causes the escape of photoelectrons from the spacecraft surface, and by electrons in the ambient plasma, which provide the return current. In this paper we combine potential measurements from the Cluster satellites with measurements of extreme ultraviolet radiation from the TIMED satellite to establish a relation between solar radiation and spacecraft charging from solar maximum to solar minimum. We then use this relation to derive an improved method for determination of the current balance of the spacecraft. By calibration with other instruments we thereafter derive the plasma density. The results show that this method can provide information about plasma densities in the polar cap and magnetotail lobe regions where other measurements have limitations.

Lybekk, B.; Pedersen, A.; Haaland, S.; Svenes, K.; Fazakerley, A. N.; Masson, A.; Taylor, M. G. G. T.; Trotignon, J.-G.

2012-01-01

423

Estimating the effective density of engineered nanomaterials for in vitro dosimetry

NASA Astrophysics Data System (ADS)

The need for accurate in vitro dosimetry remains a major obstacle to the development of cost-effective toxicological screening methods for engineered nanomaterials. An important key to accurate in vitro dosimetry is the characterization of sedimentation and diffusion rates of nanoparticles suspended in culture media, which largely depend upon the effective density and diameter of formed agglomerates in suspension. Here we present a rapid and inexpensive method for accurately measuring the effective density of nano-agglomerates in suspension. This novel method is based on the volume of the pellet obtained by benchtop centrifugation of nanomaterial suspensions in a packed cell volume tube, and is validated against gold-standard analytical ultracentrifugation data. This simple and cost-effective method allows nanotoxicologists to correctly model nanoparticle transport, and thus attain accurate dosimetry in cell culture systems, which will greatly advance the development of reliable and efficient methods for toxicological testing and investigation of nano-bio interactions in vitro.
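The sedimentation rates the abstract mentions follow from standard Stokes settling, where the effective agglomerate density (not the raw nanomaterial density) sets the driving buoyant force. The sketch below applies that textbook relation, v = g·d²·(ρ_eff − ρ_media)/(18η); all numerical values are hypothetical, not measurements from the paper.

```python
# Illustrative use of an effective agglomerate density in dosimetry:
# Stokes settling velocity v = g * d^2 * (rho_eff - rho_media) / (18 * eta).
# The diameter, densities, and viscosity below are hypothetical examples.

G = 9.81            # m/s^2, gravitational acceleration
ETA = 0.00094       # Pa*s, approximate viscosity of culture media at 37 C

def stokes_velocity(d_m, rho_eff, rho_media):
    """Terminal settling velocity (m/s) of a sphere of diameter d_m (m),
    with densities in kg/m^3."""
    return G * d_m ** 2 * (rho_eff - rho_media) / (18 * ETA)

# A 300 nm agglomerate whose effective density (1.2 g/cm^3) is far below the
# raw nanomaterial density settles much more slowly than a naive calculation
# using the bulk material density would predict.
v = stokes_velocity(300e-9, 1200.0, 1000.0)
print(f"{v * 3.6e6:.3f} mm/h")  # convert m/s to mm per hour
```

This is why measuring the effective density of formed agglomerates, as the abstract proposes, changes the delivered dose computed by particle transport models.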

Deloid, Glen; Cohen, Joel M.; Darrah, Tom; Derk, Raymond; Rojanasakul, Liying; Pyrgiotakis, Georgios; Wohlleben, Wendel; Demokritou, Philip

2014-03-01

424

PELLET COUNT INDICES COMPARED TO MARK–RECAPTURE ESTIMATES FOR EVALUATING SNOWSHOE HARE DENSITY

Abstract: Snowshoe hares (Lepus americanus) undergo remarkable cycles and are the primary prey base of Canada lynx (Lynx canadensis), a carnivore recently listed as threatened in the contiguous United States. Efforts to evaluate hare densities using pellets have traditionally been based on regression equations developed in the Yukon, Canada. In western Montana, we evaluated whether or not local regression equations performed better

L. SCOTT MILLS; PAUL C. GRIFFIN; KAREN E. HODGES; KEVIN McKELVEY; LEN RUGGIERO; TODD ULIZIO