Wavelet-based density estimation for noise reduction in plasma simulations using particles
Nguyen van yen, Romain; Del-Castillo-Negrete, Diego B; Schneider, Kai; Farge, Marie; Chen, Guangye
2010-01-01
For given computational resources, one of the main limitations on the accuracy of plasma simulations using particles comes from the noise due to limited statistical sampling in the reconstruction of the particle distribution function. A method based on wavelet multiresolution analysis is proposed and tested to reduce this noise. The method, known as wavelet-based density estimation (WBDE), was previously introduced in the statistical literature to estimate probability densities given a finite number of independent measurements. Its novel application to plasma simulations can be viewed as a natural extension of the finite size particles (FSP) approach, with the advantage of estimating more accurately distribution functions that have localized sharp features. The proposed method preserves the moments of the particle distribution function to a good level of accuracy, has no constraints on the dimensionality of the system, does not require an a priori selection of a global smoothing scale, and is able to adapt locally to the smoothness of the density based on the given discrete particle data. Most importantly, the computational cost of the denoising stage is of the same order as one time step of an FSP simulation. The method is compared with a recently proposed proper orthogonal decomposition based method, and it is tested with particle data corresponding to strongly collisional, weakly collisional, and collisionless plasma simulations.
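The core WBDE recipe (bin the particles, shrink small wavelet detail coefficients, reconstruct) can be sketched in a few lines. The sketch below is a minimal one-dimensional Haar illustration with an assumed universal-style threshold; it is not the authors' implementation, and the threshold rule and bin count are illustrative choices.

```python
import numpy as np

def haar_decompose(v):
    """Full Haar decomposition of a length-2^J vector.
    Returns the final approximation and a list of detail arrays."""
    details, a = [], v.astype(float)
    while len(a) > 1:
        details.append((a[0::2] - a[1::2]) / np.sqrt(2))  # high-pass
        a = (a[0::2] + a[1::2]) / np.sqrt(2)              # low-pass
    return a, details

def haar_reconstruct(a, details):
    """Invert haar_decompose."""
    v = a
    for d in reversed(details):
        out = np.empty(2 * len(v))
        out[0::2] = (v + d) / np.sqrt(2)
        out[1::2] = (v - d) / np.sqrt(2)
        v = out
    return v

def wavelet_density_estimate(samples, n_bins=64):
    """Histogram the samples, zero small Haar detail coefficients,
    and reconstruct a denoised density on the bin grid."""
    hist, edges = np.histogram(samples, bins=n_bins, density=True)
    a, details = haar_decompose(hist)
    d_all = np.concatenate(details)
    sigma = np.median(np.abs(d_all)) / 0.6745        # robust noise scale
    thr = sigma * np.sqrt(2.0 * np.log(n_bins))      # universal-style threshold
    details = [np.where(np.abs(d) > thr, d, 0.0) for d in details]
    dens = np.clip(haar_reconstruct(a, details), 0.0, None)
    dens /= dens.sum() * (edges[1] - edges[0])       # renormalize to unit mass
    return edges, dens
```

Note that clipping negatives and renormalizing is one crude way to keep a bona fide density; the moment-preservation property claimed in the abstract depends on the actual thresholding scheme used by the authors.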
Value-at-risk estimation with wavelet-based extreme value theory: Evidence from emerging markets
NASA Astrophysics Data System (ADS)
Cifter, Atilla
2011-06-01
This paper introduces wavelet-based extreme value theory (EVT) for univariate value-at-risk estimation. Wavelets and EVT are combined for volatility forecasting to estimate a hybrid model. In the first stage, wavelets are used as a threshold in the generalized Pareto distribution, and in the second stage, EVT is applied with a wavelet-based threshold. This new model is applied to two major emerging stock markets: the Istanbul Stock Exchange (ISE) and the Budapest Stock Exchange (BUX). The relative performance of wavelet-based EVT is benchmarked against the RiskMetrics-EWMA, ARMA-GARCH, generalized Pareto distribution, and conditional generalized Pareto distribution models. The empirical results show that wavelet-based extreme value theory increases the predictive performance of financial forecasting according to the number of violations and tail-loss tests. The superior forecasting performance of the wavelet-based EVT model is also consistent with Basel II requirements, and this new model can be used by financial institutions as well.
Wavelet-based Poisson rate estimation using the Skellam distribution
NASA Astrophysics Data System (ADS)
Hirakawa, Keigo; Baqai, Farhan; Wolfe, Patrick J.
2009-02-01
Owing to the stochastic nature of discrete processes such as photon counts in imaging, real-world data measurements often exhibit heteroscedastic behavior. In particular, time series components and other measurements may frequently be assumed to be non-iid Poisson random variables, whose rate parameter is proportional to the underlying signal of interest; witness the literature in digital communications, signal processing, astronomy, and magnetic resonance imaging applications. In this work, we show that certain wavelet and filterbank transform coefficients corresponding to vector-valued measurements of this type are distributed as sums and differences of independent Poisson counts, taking the so-called Skellam distribution. While exact estimates rarely admit analytical forms, we present Skellam mean estimators under both frequentist and Bayes models, as well as computationally efficient approximations and shrinkage rules, that may be interpreted as a Poisson rate estimation method performed in certain wavelet/filterbank transform domains. This indicates a promising potential approach for denoising of Poisson counts in the above-mentioned applications.
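The central observation is easy to verify numerically: the difference of two independent Poisson counts (which is, up to scale, a Haar detail coefficient) follows a Skellam distribution, with mean equal to the difference of the rates and variance equal to their sum. This is a simulation check of that fact, not the paper's estimator.

```python
import numpy as np

# If x1 ~ Poisson(lam1) and x2 ~ Poisson(lam2) independently, then the
# (unnormalized) Haar detail d = x1 - x2 is Skellam(lam1, lam2):
#   E[d] = lam1 - lam2,  Var[d] = lam1 + lam2.
rng = np.random.default_rng(1)
lam1, lam2 = 7.0, 3.0
x1 = rng.poisson(lam1, size=200_000)
x2 = rng.poisson(lam2, size=200_000)
d = x1 - x2
print(d.mean(), d.var())  # close to 4.0 and 10.0
```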
Estimation of Modal Parameters Using a Wavelet-Based Approach
NASA Technical Reports Server (NTRS)
Lind, Rick; Brenner, Marty; Haley, Sidney M.
1997-01-01
Modal stability parameters are extracted directly from aeroservoelastic flight test data by decomposition of accelerometer response signals into time-frequency atoms. Logarithmic sweeps and sinusoidal pulses are used to generate DAST closed loop excitation data. Novel wavelets constructed to extract modal damping and frequency explicitly from the data are introduced. The so-called Haley and Laplace wavelets are used to track time-varying modal damping and frequency in a matching pursuit algorithm. Estimation of the trend to aeroservoelastic instability is demonstrated successfully from analysis of the DAST data.
Metwally, Khaled; Lefevre, Emmanuelle; Baron, Cécile; Zheng, Rui; Pithioux, Martine; Lasaygues, Philippe
2016-02-01
When assessing ultrasonic measurements of material parameters, the signal processing is an important part of the inverse problem. Measurements of thickness, ultrasonic wave velocity and mass density are required for such assessments. This study investigates the feasibility and the robustness of a wavelet-based processing (WBP) method based on a Jaffard-Meyer algorithm for calculating these parameters simultaneously and independently, using one single ultrasonic signal in the reflection mode. The appropriate transmitted incident wave, correlated with the mathematical properties of the wavelet decomposition, was determined using an adapted identification procedure to build a mathematically equivalent model for the electro-acoustic system. The method was tested on three groups of samples (polyurethane resin, bone and wood) using one 1-MHz transducer. For thickness and velocity measurements, the WBP method gave a relative error lower than 1.5%. The relative errors in the mass density measurements ranged between 0.70% and 2.59%. Despite discrepancies between manufactured and biological samples, the results obtained on the three groups of samples using the WBP method in the reflection mode were remarkably consistent, indicating that it is a reliable and efficient means of simultaneously assessing the thickness and the velocity of the ultrasonic wave propagating in the medium, and the apparent mass density of the material. PMID:26403278
Wavelet-based time-delay estimation for time-resolved turbulent flow analysis
Jakubowski, M.; Fonck, R. J.; Fenzi, C.; McKee, G. R.
2001-01-01
A wavelet-transform-based spectral analysis is examined for application to beam emission spectroscopy (BES) data to extract poloidal rotation velocity fluctuations from the density turbulence data. Frequency transfer functions for a wavelet cross-phase extraction method are calculated. Numerical noise is reduced by shifting the data to give an average zero time delay, and the applicable frequency range is extended by numerical oversampling of the measured density fluctuations. This approach offers potential for direct measurements of turbulent transport and detection of zonal flows in tokamak plasma turbulence.
Estimation of interferogram aberration coefficients using wavelet bases and Zernike polynomials
NASA Astrophysics Data System (ADS)
Elias-Juarez, Alfredo A.; Razo-Razo, Noe; Torres-Cisneros, Miguel
2001-12-01
This paper combines the use of wavelet decompositions and Zernike polynomial approximations to extract aberration coefficients associated with an interferogram. Zernike polynomials are well known to represent aberration components of a wave-front. Polynomial approximation properties on a discrete mesh, after an orthogonalization process via Gram-Schmidt decompositions, make it straightforward to estimate aberration coefficients. It is shown that decomposition of interferograms into wavelet domains can reduce the number of computations without a significant effect on the estimated aberration coefficient amplitudes relative to using full-size interferograms. Haar wavelets, because of their non-overlapping and time-localization properties, appear to be well suited for this application. Aberration coefficients can be computed from multiresolution decomposition schemes and 2-D Zernike polynomial approximations on coarser scales, providing the means to reduce the computational complexity of such calculations.
Airborne Crowd Density Estimation
NASA Astrophysics Data System (ADS)
Meynberg, O.; Kuschk, G.
2013-10-01
This paper proposes a new method for estimating human crowd densities from aerial imagery. Applications benefiting from an accurate crowd monitoring system are mainly found in the security sector. Normally, crowd density estimation is done with in-situ camera systems mounted at elevated locations, although this is not appropriate in the case of very large crowds with thousands of people. Using airborne camera systems in these scenarios is a new research topic. Our method uses a preliminary filtering of the whole image space by suitable and fast interest point detection, resulting in a number of image regions possibly containing human crowds. Validation of these candidates is done by transforming the corresponding image patches into a low-dimensional and discriminative feature space and classifying the results using a support vector machine (SVM). The feature space is spanned by texture features computed by applying a Gabor filter bank with varying scale and orientation to the image patches. For evaluation, we use 5 different image datasets acquired by the 3K+ aerial camera system of the German Aerospace Center during real mass events like concerts or football games. To evaluate the robustness and generality of our method, these datasets are taken from different flight heights between 800 m and 1500 m above ground (keeping a fixed focal length) and varying daylight and shadow conditions. The results of our crowd density estimation are evaluated against a reference data set obtained by manually labeling tens of thousands of individual persons in the corresponding datasets, and show that our method is able to estimate human crowd densities in challenging realistic scenarios.
Carter, Joshua A.; Winn, Joshua N. E-mail: jwinn@mit.ed
2009-10-10
We consider the problem of fitting a parametric model to time-series data that are afflicted by correlated noise. The noise is represented by a sum of two stationary Gaussian processes: one that is uncorrelated in time, and another that has a power spectral density varying as 1/f^γ. We present an accurate and fast [O(N)] algorithm for parameter estimation based on computing the likelihood in a wavelet basis. The method is illustrated and tested using simulated time-series photometry of exoplanetary transits, with particular attention to estimating the mid-transit time. We compare our method to two other methods that have been used in the literature, the time-averaging method and the residual-permutation method. For noise processes that obey our assumptions, the algorithm presented here gives more accurate results for mid-transit times and truer estimates of their uncertainties.
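The reason a wavelet basis helps here is that for Gaussian noise the log-likelihood is invariant under any orthonormal change of basis, so it can be evaluated on (approximately decorrelated) wavelet coefficients at O(N) cost. The sketch below verifies the invariance for the simplest case, white noise under an orthonormal Haar transform; it does not reproduce the paper's 1/f^γ wavelet-variance model.

```python
import numpy as np

def haar_matrix(n):
    """Orthonormal Haar transform matrix for n = 2^J (small n; illustration only)."""
    if n == 1:
        return np.array([[1.0]])
    h = haar_matrix(n // 2)
    top = np.kron(h, [1.0, 1.0]) / np.sqrt(2)            # averaging rows
    bot = np.kron(np.eye(n // 2), [1.0, -1.0]) / np.sqrt(2)  # differencing rows
    return np.vstack([top, bot])

rng = np.random.default_rng(2)
n, sigma = 64, 0.5
r = rng.normal(0.0, sigma, size=n)      # white-noise residuals
W = haar_matrix(n)
w = W @ r                               # Haar coefficients
# Gaussian log-likelihood evaluated in either basis:
loglik_time = -0.5 * n * np.log(2 * np.pi * sigma**2) - (r @ r) / (2 * sigma**2)
loglik_wave = -0.5 * n * np.log(2 * np.pi * sigma**2) - (w @ w) / (2 * sigma**2)
```

For 1/f^γ noise the wavelet coefficients are not exactly independent, but their correlations are weak enough that a diagonal (level-dependent variance) likelihood is a good and fast approximation, which is the idea the paper exploits.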
Wavelet-based polarimetry analysis
NASA Astrophysics Data System (ADS)
Ezekiel, Soundararajan; Harrity, Kyle; Farag, Waleed; Alford, Mark; Ferris, David; Blasch, Erik
2014-06-01
Wavelet transformation has become a cutting-edge and promising approach in the field of image and signal processing. A wavelet is a waveform of effectively limited duration that has an average value of zero. Wavelet analysis is done by breaking up the signal into shifted and scaled versions of the original signal. The key advantage of a wavelet is that it is capable of revealing smaller changes, trends, and breakdown points that are not revealed by other techniques such as Fourier analysis. The phenomenon of polarization has been studied for quite some time and is a very useful tool for target detection and tracking. Long Wave Infrared (LWIR) polarization is beneficial for detecting camouflaged objects and is a useful approach when identifying and distinguishing manmade objects from natural clutter. In addition, the Stokes polarization parameters, which are calculated from 0°, 45°, 90°, 135°, right-circular, and left-circular intensity measurements, provide spatial orientations of target features and suppress natural features. In this paper, we propose a wavelet-based polarimetry analysis (WPA) method to analyze Long Wave Infrared polarimetry imagery to discriminate targets such as dismounts and vehicles from background clutter. These parameters can be used for image thresholding and segmentation. Experimental results show the wavelet-based polarimetry analysis is efficient and can be used in a wide range of applications such as change detection, shape extraction, target recognition, and feature-aided tracking.
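The standard Stokes parameters named in the abstract are simple linear combinations of the six intensity measurements. A minimal sketch (standard polarimetry formulas, not the WPA method itself):

```python
import numpy as np

def stokes(i0, i45, i90, i135, irc, ilc):
    """Stokes parameters from intensities measured through polarizers at
    0, 45, 90, 135 degrees and right/left circular analyzers."""
    s0 = i0 + i90      # total intensity
    s1 = i0 - i90      # horizontal vs vertical linear polarization
    s2 = i45 - i135    # +45 vs -45 linear polarization
    s3 = irc - ilc     # right vs left circular polarization
    return s0, s1, s2, s3

def dolp(s0, s1, s2):
    """Degree of linear polarization, the quantity typically thresholded
    to separate manmade objects from natural clutter."""
    return np.sqrt(s1**2 + s2**2) / s0
```

For example, fully horizontally polarized light (i0=1, i90=0) gives S = (1, 1, 0, 0) and a DoLP of 1.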
NASA Technical Reports Server (NTRS)
Jameson, Leland
1996-01-01
Wavelets can provide a basis set in which the basis functions are constructed by dilating and translating a fixed function known as the mother wavelet. The mother wavelet can be seen as a high pass filter in the frequency domain. The process of dilating this high-pass filter can be seen as altering the frequency range that is 'passed' or detected. The process of translation moves this high-pass filter throughout the domain, thereby providing a mechanism to detect the frequencies or scales of information at every location. This is exactly the type of information that is needed for effective grid generation. This paper provides motivation to use wavelets for grid generation in addition to providing the final product: source code for wavelet-based grid generation.
Maximum Likelihood Wavelet Density Estimation With Applications to Image and Shape Matching
Peter, Adrian M.; Rangarajan, Anand
2010-01-01
Density estimation for observational data plays an integral role in a broad spectrum of applications, e.g., statistical data analysis and information-theoretic image registration. Of late, wavelet-based density estimators have gained in popularity due to their ability to approximate a large class of functions, adapting well to difficult situations such as when densities exhibit abrupt changes. The decision to work with wavelet density estimators brings along with it theoretical considerations (e.g., non-negativity, integrability) and empirical issues (e.g., computation of basis coefficients) that must be addressed in order to obtain a bona fide density. In this paper, we present a new method to accurately estimate a non-negative density which directly addresses many of the problems in practical wavelet density estimation. We cast the estimation procedure in a maximum likelihood framework which estimates the square root of the density, √p, allowing us to obtain the natural non-negative density representation (√p)². Analysis of this method will bring to light a remarkable theoretical connection with the Fisher information of the density and, consequently, lead to an efficient constrained optimization procedure to estimate the wavelet coefficients. We illustrate the effectiveness of the algorithm by evaluating its performance on mutual information-based image registration, shape point set alignment, and empirical comparisons to known densities. The present method is also compared to fixed and variable bandwidth kernel density estimators. PMID:18390355
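The appeal of the √p parameterization is that, in an orthonormal basis, a unit-norm coefficient vector automatically yields a non-negative density of unit mass, so the only constraint the optimizer must maintain is a single norm constraint. A discrete illustration of that property, using a random orthonormal matrix as a stand-in for a wavelet basis (illustrative assumption, not the paper's basis or optimizer):

```python
import numpy as np

rng = np.random.default_rng(4)
# Stand-in orthonormal basis on a 32-point grid (a wavelet basis would
# have the same orthonormality property).
W, _ = np.linalg.qr(rng.normal(size=(32, 32)))
c = rng.normal(size=32)
c /= np.linalg.norm(c)     # the single constraint: unit-norm coefficients
sqrt_p = W @ c             # expansion of sqrt(p) on the grid
p = sqrt_p ** 2            # non-negative by construction
# sum(p) = |c|^2 = 1, i.e. p is automatically a (discrete) density
```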
Density Estimation with Mercer Kernels
NASA Technical Reports Server (NTRS)
Macready, William G.
2003-01-01
We present a new method for density estimation based on Mercer kernels. The density estimate can be understood as the density induced on a data manifold by a mixture of Gaussians fit in a feature space. As is usual, the feature space and data manifold are defined with any suitable positive-definite kernel function. We modify the standard EM algorithm for mixtures of Gaussians to infer the parameters of the density. One benefit of the approach is its conceptual simplicity and uniform applicability over many different types of data. Preliminary results are presented for a number of simple problems.
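The building block being modified is the standard EM algorithm for Gaussian mixtures. A plain one-dimensional version is sketched below for reference; the paper's method runs a modified EM in the Mercer-kernel feature space, which is not reproduced here, and the percentile-based initialization is an illustrative choice.

```python
import numpy as np

def em_gmm_1d(x, k=2, iters=100):
    """Standard EM for a 1-D Gaussian mixture (illustration only)."""
    # Deterministic init: spread component means across the data quantiles.
    mu = np.percentile(x, 100.0 * (np.arange(k) + 0.5) / k)
    var = np.full(k, x.var())
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point
        logp = (-0.5 * (np.log(2 * np.pi * var)
                        + (x[:, None] - mu) ** 2 / var) + np.log(pi))
        logp -= logp.max(axis=1, keepdims=True)   # stabilize before exp
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: responsibility-weighted weights, means, variances
        nk = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
        pi = nk / len(x)
    return pi, mu, var
```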
Wavelet-based modal analysis for time-variant systems
NASA Astrophysics Data System (ADS)
Dziedziech, K.; Staszewski, W. J.; Uhl, T.
2015-01-01
The paper presents algorithms for modal identification of time-variant systems. These algorithms utilise the wavelet-based Frequency Response Function, and lead to estimation of all three modal parameters, i.e. natural frequencies, damping and mode shapes. The method presented utilises random impact excitation and signal post-processing based on the crazy climbers algorithm. The method is validated using simulated and experimental data from vibration time-variant systems. The results show that the method captures correctly the dynamics of the analysed systems, leading to correct modal parameter identification.
Wavelet based recognition for pulsar signals
NASA Astrophysics Data System (ADS)
Shan, H.; Wang, X.; Chen, X.; Yuan, J.; Nie, J.; Zhang, H.; Liu, N.; Wang, N.
2015-06-01
A signal from a pulsar can be decomposed into a set of features. This set is a unique signature for a given pulsar. It can be used to decide whether a pulsar is newly discovered or not. Features can be constructed from coefficients of a wavelet decomposition. Two types of wavelet based pulsar features are proposed. The energy based features reflect the multiscale distribution of the energy of coefficients. The singularity based features first classify the signals into a class with one peak and a class with two peaks by exploring the number of the straight wavelet modulus maxima lines perpendicular to the abscissa, and then implement further classification according to the features of skewness and kurtosis. Experimental results show that the wavelet based features achieve comparatively better performance than the shape parameter based features, not only in clustering and classification, but also in the error rates of the recognition tasks.
Dependence and risk assessment for oil prices and exchange rate portfolios: A wavelet based approach
NASA Astrophysics Data System (ADS)
Aloui, Chaker; Jammazi, Rania
2015-10-01
In this article, we propose a wavelet-based approach to accommodate the stylized facts and complex structure of financial data, caused by frequent and abrupt changes of markets and noises. Specifically, we show how the combination of both continuous and discrete wavelet transforms with traditional financial models helps improve portfolio market risk assessment. In the empirical stage, three wavelet-based models (wavelet-EGARCH with dynamic conditional correlations, wavelet-copula, and wavelet-extreme value) are considered and applied to crude oil price and US dollar exchange rate data. Our findings show that the wavelet-based approach provides an effective and powerful tool for detecting extreme moments and improving the accuracy of VaR and Expected Shortfall estimates of oil-exchange rate portfolios after noise is removed from the original data.
Wavelet-based LASSO in functional linear regression
Zhao, Yihong; Ogden, R. Todd; Reiss, Philip T.
2011-01-01
In linear regression with functional predictors and scalar responses, it may be advantageous, particularly if the function is thought to contain features at many scales, to restrict the coefficient function to the span of a wavelet basis, thereby converting the problem into one of variable selection. If the coefficient function is sparsely represented in the wavelet domain, we may employ the well-known LASSO to select a relatively small number of nonzero wavelet coefficients. This is a natural approach to take but to date, the properties of such an estimator have not been studied. In this paper we describe the wavelet-based LASSO approach to regressing scalars on functions and investigate both its asymptotic convergence and its finite-sample performance through both simulation and real-data application. We compare the performance of this approach with existing methods and find that the wavelet-based LASSO performs relatively well, particularly when the true coefficient function is spiky. Source code to implement the method and data sets used in the study are provided as supplemental materials available online. PMID:23794794
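A useful special case for intuition: when the design is orthonormal (as a wavelet basis is), the LASSO solution has a closed form, soft thresholding of the projection coefficients. The sketch below demonstrates that on a noiseless toy problem with a random orthonormal matrix standing in for the wavelet design; it is not the estimator studied in the paper, which handles general functional predictors.

```python
import numpy as np

def soft_threshold(z, lam):
    """Coordinate-wise soft thresholding: the closed-form LASSO solution
    for an orthonormal design matrix."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

rng = np.random.default_rng(5)
X, _ = np.linalg.qr(rng.normal(size=(100, 100)))  # stand-in orthonormal design
beta = np.zeros(100)
beta[3], beta[17] = 3.0, -2.0        # a spiky (sparse) coefficient function
y = X @ beta                          # noiseless response, for clarity
beta_hat = soft_threshold(X.T @ y, lam=0.5)
# beta_hat recovers the support exactly, with each nonzero shrunk by lam
```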
Wavelet-based digital image watermarking.
Wang, H J; Su, P C; Kuo, C C
1998-12-01
A wavelet-based watermark casting scheme and a blind watermark retrieval technique are investigated in this research. An adaptive watermark casting method is developed to first determine significant wavelet subbands and then select a couple of significant wavelet coefficients in these subbands to embed watermarks. A blind watermark retrieval technique that can detect the embedded watermark without the help from the original image is proposed. Experimental results show that the embedded watermark is robust against various signal processing and compression attacks. PMID:19384400
Image denoising via Bayesian estimation of local variance with Maxwell density prior
NASA Astrophysics Data System (ADS)
Kittisuwan, Pichid
2015-10-01
The need for efficient image denoising methods has grown with the massive production of digital images and movies of all kinds. The distortion of images by additive white Gaussian noise (AWGN) is common during their processing and transmission. This paper is concerned with dual-tree complex wavelet-based image denoising using Bayesian techniques. Indeed, one of the cruxes of the Bayesian image denoising algorithms is to estimate the local variance of the image. Here, we employ maximum a posteriori (MAP) estimation to calculate the local observed variance, with a Maxwell density prior for the local observed variance and a Gaussian distribution for noisy wavelet coefficients. Evidently, our selection of prior distribution is motivated by analytical and computational tractability. The experimental results show that the proposed method yields good denoising results.
Wavelet-based ultrasound image denoising: performance analysis and comparison.
Rizi, F Yousefi; Noubari, H Ahmadi; Setarehdan, S K
2011-01-01
Ultrasound images are generally affected by multiplicative speckle noise, which is mainly due to the coherent nature of the scattering phenomenon. Speckle noise filtering is thus a critical pre-processing step in medical ultrasound imaging, provided that the diagnostic features of interest are not lost. A comparative study of the performance of alternative wavelet based ultrasound image denoising methods is presented in this article. In particular, the contourlet and curvelet techniques with dual-tree complex, real, and double density wavelet transform denoising methods were applied to real ultrasound images and the results were quantitatively compared. The results show that the curvelet-based method performs better than the other methods and can effectively reduce most of the speckle noise content of a given image. PMID:22255196
Wavelet-Based Image Deconvolution for Wide Field CCD Imagery
NASA Astrophysics Data System (ADS)
Merino, M. T.; Fors, O.; Cardinal, R.; Otazu, X.; Núñez, J.; Hildebrand, A. R.
2006-07-01
We show how a wavelet-based image adaptive deconvolution algorithm can provide significant improvements in the analysis of wide-field CCD images. To illustrate it, we apply our deconvolution protocol to a set of images from a Baker-Nunn telescope. This f/1 instrument has an outstanding field of view of 4.4° x 4.4° with high optical quality, offering unique properties for studying our deconvolution process and results. In particular, we obtain an estimated gain in limiting magnitude of ΔR ≈ 0.6 mag and in limiting resolution of Δθ ≈ 3.9 arcsec. These results increase the number of targets and the efficiency of the underlying scientific project.
Wavelet-based acoustic recognition of aircraft
Dress, W.B.; Kercel, S.W.
1994-09-01
We describe a wavelet-based technique for identifying aircraft from acoustic emissions during take-off and landing. Tests show that the sensor can be a single, inexpensive hearing-aid microphone placed close to the ground. The paper describes data collection, analysis by various techniques, methods of event classification, and extraction of certain physical parameters from wavelet subspace projections. The primary goal of this paper is to show that wavelet analysis can be used as a divide-and-conquer first step in signal processing, providing both simplification and noise filtering. The idea is to project the original signal onto the orthogonal wavelet subspaces, both details and approximations. Subsequent analysis, such as system identification, nonlinear systems analysis, and feature extraction, is then carried out on the various signal subspaces.
Varying kernel density estimation on ℝ+
Mnatsakanov, Robert; Sarkisian, Khachatur
2015-01-01
In this article a new nonparametric density estimator based on the sequence of asymmetric kernels is proposed. This method is natural when estimating an unknown density function of a positive random variable. The rates of Mean Squared Error, Mean Integrated Squared Error, and the L1-consistency are investigated. Simulation studies are conducted to compare a new estimator and its modified version with traditional kernel density construction. PMID:26740729
NASA Astrophysics Data System (ADS)
Kittisuwan, Pichid
2015-03-01
The application of image processing in industry has shown remarkable success over the last decade, for example, in security and telecommunication systems. The denoising of natural images corrupted by Gaussian noise is a classical problem and an indispensable step in image processing. This paper is concerned with dual-tree complex wavelet-based image denoising using Bayesian techniques. One of the cruxes of the Bayesian image denoising algorithms is to estimate the statistical parameters of the image. Here, we employ maximum a posteriori (MAP) estimation to calculate the local observed variance, with a generalized Gamma density prior for the local observed variance and a Laplacian or Gaussian distribution for noisy wavelet coefficients. Evidently, our selection of prior distribution is motivated by the efficient and flexible properties of the generalized Gamma density. The experimental results show that the proposed method yields good denoising results.
Wavelet-based analysis of circadian behavioral rhythms.
Leise, Tanya L
2015-01-01
The challenging problems presented by noisy biological oscillators have led to the development of a great variety of methods for accurately estimating rhythmic parameters such as period and amplitude. This chapter focuses on wavelet-based methods, which can be quite effective for assessing how rhythms change over time, particularly if time series are at least a week in length. These methods can offer alternative views to complement more traditional methods of evaluating behavioral records. The analytic wavelet transform can estimate the instantaneous period and amplitude, as well as the phase of the rhythm at each time point, while the discrete wavelet transform can extract the circadian component of activity and measure the relative strength of that circadian component compared to those in other frequency bands. Wavelet transforms do not require the removal of noise or trend, and can, in fact, be effective at removing noise and trend from oscillatory time series. The Fourier periodogram and spectrogram are reviewed, followed by descriptions of the analytic and discrete wavelet transforms. Examples illustrate application of each method and their prior use in chronobiology is surveyed. Issues such as edge effects, frequency leakage, and implications of the uncertainty principle are also addressed. PMID:25662453
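As a baseline for the wavelet methods, the chapter reviews the Fourier periodogram. A minimal periodogram-based period estimate for a simulated activity record might look like the following sketch (the sampling rate, record length, and noise level are illustrative assumptions, not values from the chapter):

```python
import numpy as np

fs = 10.0                                  # samples per hour (6-minute bins)
t = np.arange(int(24 * 10 * fs)) / fs      # ten days of recording, in hours
x = np.sin(2 * np.pi * t / 24.0)           # a 24 h circadian rhythm...
x += 0.5 * np.random.default_rng(3).normal(size=t.size)  # ...plus noise

# Periodogram: squared magnitude of the FFT of the demeaned record
spec = np.abs(np.fft.rfft(x - x.mean())) ** 2
freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)   # cycles per hour
period_est = 1.0 / freqs[1:][np.argmax(spec[1:])]  # skip the DC bin
```

Note the frequency resolution is 1/(record length), here 1/240 cycles per hour, which is one motivation the chapter gives for wavelet methods: they trade some frequency precision for the ability to track how period and amplitude change over time.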
Sparse Density Estimation on the Multinomial Manifold.
Hong, Xia; Gao, Junbin; Chen, Sheng; Zia, Tanveer
2015-11-01
A new sparse kernel density estimator is introduced based on the minimum integrated square error criterion for the finite mixture model. Since the constraint on the mixing coefficients of the finite mixture model is on the multinomial manifold, we use the well-known Riemannian trust-region (RTR) algorithm for solving this problem. The first- and second-order Riemannian geometry of the multinomial manifold are derived and utilized in the RTR algorithm. Numerical examples are employed to demonstrate that the proposed approach is effective in constructing sparse kernel density estimators with an accuracy competitive with those of existing kernel density estimators. PMID:25647665
A wavelet based investigation of long memory in stock returns
NASA Astrophysics Data System (ADS)
Tan, Pei P.; Galagedera, Don U. A.; Maharaj, Elizabeth A.
2012-04-01
Using a wavelet-based maximum likelihood fractional integration estimator, we test for long memory (return predictability) in the returns at the market, industry and firm level. In an analysis of emerging market daily returns over the full sample period, we find that long memory is not present at the market level, and that in approximately twenty percent of 175 stocks there is evidence of long memory. The absence of long memory in the market returns may be a consequence of contemporaneous aggregation of stock returns. However, when the analysis is carried out with rolling windows, evidence of long memory is observed in certain time frames. These results are largely consistent with those of detrended fluctuation analysis. A test of firm-level information in explaining stock return predictability using a logistic regression model reveals that returns of large firms are more likely to possess the long memory feature than the returns of small firms. There is no evidence to suggest that turnover, earnings per share, book-to-market ratio, systematic risk or abnormal return with respect to the market model is associated with return predictability. However, the degree of long-range dependence appears to be associated positively with earnings per share, systematic risk and abnormal return, and negatively with book-to-market ratio.
ESTIMATES OF BIOMASS DENSITY FOR TROPICAL FORESTS
An accurate estimation of the biomass density in forests is a necessary step in understanding the global carbon cycle and the production of other atmospheric trace gases from biomass burning. In this paper the authors summarize the various approaches that have been developed for estimating...
Density estimation by maximum quantum entropy
Silver, R.N.; Wallstrom, T.; Martz, H.F.
1993-11-01
A new Bayesian method for non-parametric density estimation is proposed, based on a mathematical analogy to quantum statistical physics. The mathematical procedure is related to maximum entropy methods for inverse problems and image reconstruction. The information divergence enforces global smoothing toward default models, convexity, positivity, extensivity and normalization. The novel feature is the replacement of classical entropy by quantum entropy, so that local smoothing is enforced by constraints on differential operators. The linear response of the estimate is proportional to the covariance. The hyperparameters are estimated by type-II maximum likelihood (evidence). The method is demonstrated on textbook data sets.
Enhancing Hyperspectral Data Throughput Utilizing Wavelet-Based Fingerprints
I. W. Ginsberg
1999-09-01
Multiresolutional decompositions known as spectral fingerprints are often used to extract spectral features from multispectral/hyperspectral data. In this study, the authors investigate the use of wavelet-based algorithms for generating spectral fingerprints. The wavelet-based algorithms are compared to the currently used method, traditional convolution with first-derivative Gaussian filters. The comparison analyses consist of two parts: (a) the computational expense of the new method is compared with the computational costs of the current method, and (b) the outputs of the wavelet-based methods are compared with those of the current method to determine any practical differences in the resulting spectral fingerprints. The results show that the wavelet-based algorithms can greatly reduce the computational expense of generating spectral fingerprints, while practically no differences exist in the resulting fingerprints. The analysis is conducted on a database of hyperspectral signatures, namely, Hyperspectral Digital Image Collection Experiment (HYDICE) signatures. The reduction in computational expense is by a factor of about 30, and the average Euclidean distance between resulting fingerprints is on the order of 0.02.
3D Wavelet-Based Filter and Method
Moss, William C.; Haase, Sebastian; Sedat, John W.
2008-08-12
A 3D wavelet-based filter for visualizing and locating structural features of a user-specified linear size in 2D or 3D image data. The only input parameter is a characteristic linear size of the feature of interest, and the filter output contains only those regions that are correlated with the characteristic size, thus denoising the image.
Estimating animal population density using passive acoustics.
Marques, Tiago A; Thomas, Len; Martin, Stephen W; Mellinger, David K; Ward, Jessica A; Moretti, David J; Harris, Danielle; Tyack, Peter L
2013-05-01
Reliable estimation of the size or density of wild animal populations is very important for effective wildlife management, conservation and ecology. Currently, the most widely used methods for obtaining such estimates involve either sighting animals from transect lines or some form of capture-recapture on marked or uniquely identifiable individuals. However, many species are difficult to sight, and cannot be easily marked or recaptured. Some of these species produce readily identifiable sounds, providing an opportunity to use passive acoustic data to estimate animal density. In addition, even for species for which other visually based methods are feasible, passive acoustic methods offer the potential for greater detection ranges in some environments (e.g. underwater or in dense forest), and hence potentially better precision. Automated data collection means that surveys can take place at times and in places where it would be too expensive or dangerous to send human observers. Here, we present an overview of animal density estimation using passive acoustic data, a relatively new and fast-developing field. We review the types of data and methodological approaches currently available to researchers and we provide a framework for acoustics-based density estimation, illustrated with examples from real-world case studies. We mention moving sensor platforms (e.g. towed acoustics), but then focus on methods involving sensors at fixed locations, particularly hydrophones to survey marine mammals, as acoustic-based density estimation research to date has been concentrated in this area. Primary among these are methods based on distance sampling and spatially explicit capture-recapture. The methods are also applicable to other aquatic and terrestrial sound-producing taxa. 
We conclude that, despite being in its infancy, density estimation based on passive acoustic data likely will become an important method for surveying a number of diverse taxa, such as sea mammals, fish, birds, amphibians, and insects, especially in situations where inferences are required over long periods of time. There is considerable work ahead, with several potentially fruitful research areas, including the development of (i) hardware and software for data acquisition, (ii) efficient, calibrated, automated detection and classification systems, and (iii) statistical approaches optimized for this application. Further, survey design will need to be developed, and research is needed on the acoustic behaviour of target species. Fundamental research on vocalization rates and group sizes, and the relation between these and other factors such as season or behaviour state, is critical. Evaluation of the methods under known density scenarios will be important for empirically validating the approaches presented here. PMID:23190144
Biorthogonal wavelet-based method of moments for electromagnetic scattering
NASA Astrophysics Data System (ADS)
Zhang, Qinke
Wavelet analysis is a technique developed in recent years in mathematics that has found use in signal processing and many other engineering areas. This dissertation investigates the practical use of wavelets for the solution of partial differential and integral equations in computational electromagnetics, with emphasis on the development of a biorthogonal wavelet-based method of moments for the solution of electric and magnetic field integral equations. The fundamentals and numerical-analysis aspects of wavelet theory have been studied. In particular, a family of compactly supported biorthogonal spline wavelet bases on the n-cube (0,1)^n has been studied in detail. These wavelet bases are used in this work as building blocks to construct biorthogonal wavelet bases on general domain geometries. A specific and practical way of adapting the wavelet bases to certain n-dimensional blocks or elements is proposed, based on the domain-decomposition and local-transformation techniques used in traditional finite element methods and computer-aided graphics. The element, with the biorthogonal wavelet basis embedded in it, is called a wavelet element in this work. The physical domains that can be treated with this method include general curves, surfaces in 2D and 3D, and 3D volume domains. A two-step mapping is proposed to take full advantage of the vanishing moments of the wavelets. The wavelet element approach appears to offer several important advantages. It avoids the need to generate the very complicated meshes required in traditional finite-element-based methods, and it makes adaptive analysis easy to implement. A specific implementation procedure for performing adaptive analysis is proposed. The proposed biorthogonal wavelet-based method of moments (BWMoM) has been implemented using object-oriented programming techniques. The main computational issues have been detailed, discussed, and implemented in the whole package.
Numerical examples show that sparse matrices can generally be expected when using wavelets as both trial and test bases in solving integral equations. This feature makes the wavelet method particularly suitable for solving large scale problems. It is also shown that adaptive analysis can be easily and suitably implemented in the wavelet framework.
Density Estimation for Projected Exoplanet Quantities
NASA Astrophysics Data System (ADS)
Brown, Robert A.
2011-05-01
Exoplanet searches using radial velocity (RV) and microlensing (ML) produce samples of "projected" mass and orbital radius, respectively. We present a new method for estimating the probability density distribution (density) of the unprojected quantity from such samples. For a sample of n data values, the method involves solving n simultaneous linear equations to determine the weights of delta functions for the raw, unsmoothed density of the unprojected quantity that cause the associated cumulative distribution function (CDF) of the projected quantity to exactly reproduce the empirical CDF of the sample at the locations of the n data values. We smooth the raw density using nonparametric kernel density estimation with a normal kernel of bandwidth σ. We calibrate the dependence of σ on n by Monte Carlo experiments performed on samples drawn from a theoretical density, in which the integrated square error is minimized. We scale this calibration to the ranges of real RV samples using the Normal Reference Rule. The resolution and amplitude accuracy of the estimated density improve with n. For typical RV and ML samples, we expect the fractional noise at the PDF peak to be approximately 80 n^(-log 2). For illustrations, we apply the new method to 67 RV values given a similar treatment by Jorissen et al. in 2001, and to the 308 RV values listed at exoplanets.org on 2010 October 20. In addition to analyzing observational results, our methods can be used to develop measurement requirements, particularly on the minimum sample size n, for future programs such as the microlensing survey of Earth-like exoplanets recommended by the Astro 2010 committee.
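The kernel-smoothing step described in this abstract, a weighted sum of delta functions convolved with a normal kernel of bandwidth σ, can be sketched as follows. This is a minimal illustration, not the authors' code; the delta locations, weights, and bandwidth below are hypothetical stand-ins.

```python
import numpy as np

def gaussian_kde_weighted(x_grid, locations, weights, bandwidth):
    """Smooth a weighted sum of delta functions with a normal kernel.

    Mirrors the smoothing step in the abstract: the raw density is a set
    of weighted deltas at the n data values, and the estimate is their
    convolution with a Gaussian of bandwidth sigma.
    """
    diffs = (x_grid[:, None] - locations[None, :]) / bandwidth
    kernels = np.exp(-0.5 * diffs**2) / (bandwidth * np.sqrt(2.0 * np.pi))
    return kernels @ weights

# toy example: delta weights at a few "unprojected" values (hypothetical)
locs = np.array([1.0, 1.5, 2.2, 3.0])
w = np.array([0.3, 0.3, 0.2, 0.2])        # weights sum to 1
grid = np.linspace(0.0, 5.0, 501)
density = gaussian_kde_weighted(grid, locs, w, bandwidth=0.4)

# the smoothed density should still integrate to ~1
print(density.sum() * (grid[1] - grid[0]))
```

In the actual method the weights would come from solving the n simultaneous linear equations against the empirical CDF; here they are simply assumed.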
Stochastic model for estimation of environmental density
Janardan, K.G.; Uppuluri, V.R.R.
1984-01-01
Environmental density has been defined as a value of a habitat expressing its unfavorableness for the settling of an individual that has a strong anti-social tendency toward other individuals in the environment. Morisita studied the anti-social behavior of ant-lions (Glemuroides japanicus) and provided a recurrence relation, without an explicit solution, for the probability distribution of individuals settling in each of two habitats in terms of the environmental densities and the numbers of individuals introduced. In this paper the recurrence relation is solved explicitly, and certain interesting properties of the distribution are discussed, including the estimation of the parameters.
Coding sequence density estimation via topological pressure.
Koslicki, David; Thompson, Daniel J
2015-01-01
We give a new approach to coding sequence (CDS) density estimation in genomic analysis based on the topological pressure, which we develop from a well-known concept in ergodic theory. Topological pressure measures the 'weighted information content' of a finite word, and incorporates 64 parameters which can be interpreted as a choice of weight for each nucleotide triplet. We train the parameters so that the topological pressure fits the observed coding sequence density on the human genome, and use this to give ab initio predictions of CDS density over windows of size around 66,000 bp on the genomes of Mus musculus, Rhesus macaque, and Drosophila melanogaster. While the differences between these genomes are too great to expect that training on the human genome could predict, for example, the exact locations of genes, we demonstrate that our method gives reasonable estimates for the 'coarse scale' problem of predicting CDS density. Inspired again by ergodic theory, the weightings of the nucleotide triplets obtained from our training procedure are used to define a probability distribution on finite sequences, which can be used to distinguish between intron and exon sequences from the human genome of lengths between 750 and 5,000 bp. At the end of the paper, we explain the theoretical underpinning for our approach, which is the theory of Thermodynamic Formalism from the dynamical systems literature. Mathematica and MATLAB implementations of our method are available at http://sourceforge.net/projects/topologicalpres/ . PMID:24448658
Density Estimation Framework for Model Error Assessment
NASA Astrophysics Data System (ADS)
Sargsyan, K.; Liu, Z.; Najm, H. N.; Safta, C.; VanBloemenWaanders, B.; Michelsen, H. A.; Bambha, R.
2014-12-01
In this work we highlight the importance of model error assessment in physical model calibration studies. Conventional calibration methods often assume the model is perfect and account for data noise only. Consequently, the estimated parameters typically have biased values that implicitly compensate for model deficiencies. Moreover, improving the amount and the quality of data may not improve the parameter estimates since the model discrepancy is not accounted for. In state-of-the-art methods model discrepancy is explicitly accounted for by enhancing the physical model with a synthetic statistical additive term, which allows appropriate parameter estimates. However, these statistical additive terms do not increase the predictive capability of the model because they are tuned for particular output observables and may even violate physical constraints. We introduce a framework in which model errors are captured by allowing variability in specific model components and parameterizations for the purpose of achieving meaningful predictions that are both consistent with the data spread and appropriately disambiguate model and data errors. Here we cast model parameters as random variables, embedding the calibration problem within a density estimation framework. Further, we calibrate for the parameters of the joint input density. The likelihood function for the associated inverse problem is degenerate, therefore we use Approximate Bayesian Computation (ABC) to build prediction-constraining likelihoods and illustrate the strengths of the method on synthetic cases. We also apply the ABC-enhanced density estimation to the TransCom 3 CO2 intercomparison study (Gurney, K. R., et al., Tellus, 55B, pp. 555-579, 2003) and calibrate 15 transport models for regional carbon sources and sinks given atmospheric CO2 concentration measurements.
Bird population density estimated from acoustic signals
Dawson, D.K.; Efford, M.G.
2009-01-01
1. Many animal species are detected primarily by sound. Although songs, calls and other sounds are often used for population assessment, as in bird point counts and hydrophone surveys of cetaceans, there are few rigorous methods for estimating population density from acoustic data. 2. The problem has several parts: distinguishing individuals, adjusting for individuals that are missed, and adjusting for the area sampled. Spatially explicit capture-recapture (SECR) is a statistical methodology that addresses jointly the second and third parts of the problem. We have extended SECR to use uncalibrated information from acoustic signals on the distance to each source. 3. We applied this extension of SECR to data from an acoustic survey of ovenbird Seiurus aurocapilla density in an eastern US deciduous forest with multiple four-microphone arrays. We modelled average power from spectrograms of ovenbird songs measured within a window of 0.7 s duration and frequencies between 4200 and 5200 Hz. 4. The resulting estimates of the density of singing males (0.19 ha-1, SE 0.03 ha-1) were consistent with estimates of the adult male population density from mist-netting (0.36 ha-1, SE 0.12 ha-1). The fitted model predicts sound attenuation of 0.11 dB m-1 (SE 0.01 dB m-1) in excess of losses from spherical spreading. 5. Synthesis and applications. Our method for estimating animal population density from acoustic signals fills a gap in the census methods available for visually cryptic but vocal taxa, including many species of bird and cetacean. The necessary equipment is simple and readily available; as few as two microphones may provide adequate estimates, given spatial replication. The method requires that individuals detected at the same place are acoustically distinguishable and all individuals vocalize during the recording interval, or that the per capita rate of vocalization is known.
We believe these requirements can be met, with suitable field methods, for a significant number of songbird species. © 2009 British Ecological Society.
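The fitted attenuation model in this abstract, spherical spreading plus a linear excess loss of 0.11 dB per metre, can be sketched as follows. The source level and ranges below are illustrative values, not figures from the study.

```python
import numpy as np

def received_level(source_db, r_m, excess_atten_db_per_m=0.11):
    """Received sound level at range r_m (metres) from a source.

    Combines spherical-spreading loss (20 log10 r) with a linear excess
    attenuation term, as in the fitted model described in the abstract.
    """
    r_m = np.asarray(r_m, dtype=float)
    return source_db - 20.0 * np.log10(r_m) - excess_atten_db_per_m * r_m

# illustrative only: a hypothetical 160 dB source heard at several ranges
ranges = np.array([1.0, 10.0, 50.0, 100.0])
levels = received_level(160.0, ranges)
print(levels)
```

Inverting such a curve is what lets uncalibrated signal power carry distance information in the SECR fit.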
Analysis of a wavelet-based robust hash algorithm
NASA Astrophysics Data System (ADS)
Meixner, Albert; Uhl, Andreas
2004-06-01
This paper is a quantitative evaluation of a wavelet-based, robust authentication hashing algorithm. Based on the results of a series of robustness and tampering sensitivity tests, we describe possible shortcomings and propose various modifications to the algorithm to improve its performance. The second part of the paper describes an attack against the scheme. It allows an attacker to modify a tampered image such that its hash value closely matches the hash value of the original.
Fast wavelet based algorithms for linear evolution equations
NASA Technical Reports Server (NTRS)
Engquist, Bjorn; Osher, Stanley; Zhong, Sifen
1992-01-01
A class was devised of fast wavelet based algorithms for linear evolution equations whose coefficients are time independent. The method draws on the work of Beylkin, Coifman, and Rokhlin which they applied to general Calderon-Zygmund type integral operators. A modification of their idea is applied to linear hyperbolic and parabolic equations, with spatially varying coefficients. A significant speedup over standard methods is obtained when applied to hyperbolic equations in one space dimension and parabolic equations in multidimensions.
A Wavelet-Based Assessment of Topographic-Isostatic Reductions for GOCE Gravity Gradients
NASA Astrophysics Data System (ADS)
Grombein, Thomas; Luo, Xiaoguang; Seitz, Kurt; Heck, Bernhard
2014-07-01
Gravity gradient measurements from ESA's satellite mission Gravity field and steady-state Ocean Circulation Explorer (GOCE) contain significant high- and mid-frequency signal components, which are primarily caused by the attraction of the Earth's topographic and isostatic masses. In order to mitigate the resulting numerical instability of a harmonic downward continuation, the observed gradients can be smoothed with respect to topographic-isostatic effects using a remove-compute-restore technique. For this reason, topographic-isostatic reductions are calculated by forward modeling that employs the advanced Rock-Water-Ice methodology. The basis of this approach is a three-layer decomposition of the topography with variable density values and a modified Airy-Heiskanen isostatic concept incorporating a depth model of the Mohorovičić discontinuity. Moreover, tesseroid bodies are utilized for mass discretization and arranged on an ellipsoidal reference surface. To evaluate the degree of smoothing via topographic-isostatic reduction of GOCE gravity gradients, a wavelet-based assessment is presented in this paper and compared with statistical inferences in the space domain. Using the Morlet wavelet, continuous wavelet transforms are applied to measured GOCE gravity gradients before and after reducing topographic-isostatic signals. In a representative data set over the Himalayan region, applying the reductions leads to significantly smoothed gradients. In addition, smoothing effects that are invisible in the space domain can be detected in wavelet scalograms, making wavelet-based spectral analysis a powerful tool.
Paul, Sabyasachi; Sarkar, P K
2013-04-01
Use of wavelet transformation in stationary signal processing has been demonstrated for denoising the measured spectra and characterisation of radionuclides in the in vivo monitoring analysis, where difficulties arise due to very low activity level to be estimated in biological systems. The large statistical fluctuations often make the identification of characteristic gammas from radionuclides highly uncertain, particularly when interferences from progenies are also present. A new wavelet-based noise filtering methodology has been developed for better detection of gamma peaks in noisy data. This sequential, iterative filtering method uses the wavelet multi-resolution approach for noise rejection and an inverse transform after soft 'thresholding' over the generated coefficients. Analyses of in vivo monitoring data of (235)U and (238)U were carried out using this method without disturbing the peak position and amplitude while achieving a 3-fold improvement in the signal-to-noise ratio, compared with the original measured spectrum. When compared with other data-filtering techniques, the wavelet-based method shows the best results. PMID:22887117
Traffic characterization and modeling of wavelet-based VBR encoded video
Yu Kuo; Jabbari, B.; Zafar, S.
1997-07-01
Wavelet-based video codecs provide a hierarchical structure for the encoded data, which can cater to a wide variety of applications such as multimedia systems. The characteristics of such an encoder and its output, however, have not been well examined. In this paper, the authors investigate the output characteristics of a wavelet-based video codec and develop a composite model to capture the traffic behavior of its output video data. Wavelet decomposition transforms the input video into a hierarchical structure with a number of subimages at different resolutions and scales. The top-level wavelet in this structure contains most of the signal energy. The authors first describe the characteristics of traffic generated by each subimage and the effect of dropping various subimages at the encoder on the signal-to-noise ratio at the receiver. They then develop an N-state Markov model to describe the traffic behavior of the top wavelet. The behavior of the remaining wavelets is then obtained through estimation, based on the correlations between subimages at the same level of resolution and those wavelets located at the immediately higher level. In this paper, a three-state Markov model is developed. The resulting traffic behavior, described by various statistical properties such as moments and correlations, is then utilized to validate the model.
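An N-state Markov rate model of the kind described can be sketched as below. The three states, transition matrix, and per-state rates are hypothetical placeholders, not the values fitted in the paper.

```python
import numpy as np

def simulate_markov_rates(P, rates, n_frames, rng):
    """Simulate per-frame output rates from an N-state Markov model.

    P     : state transition matrix (each row sums to 1)
    rates : mean output rate associated with each state (hypothetical)
    """
    state = 0
    out = np.empty(n_frames)
    for t in range(n_frames):
        out[t] = rates[state]
        state = rng.choice(len(rates), p=P[state])
    return out

# hypothetical 3-state model: low/medium/high activity of the top wavelet
P = np.array([[0.80, 0.15, 0.05],
              [0.20, 0.60, 0.20],
              [0.10, 0.30, 0.60]])
rates = np.array([0.5, 1.0, 2.0])   # Mbit/s, illustrative only
trace = simulate_markov_rates(P, rates, 10_000, np.random.default_rng(1))
print(trace.mean())
```

Moments and autocorrelations of such a simulated trace are what the authors compare against the measured codec output to validate the model.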
Adaptive wavelet-based recognition of oscillatory patterns on electroencephalograms
NASA Astrophysics Data System (ADS)
Nazimov, Alexey I.; Pavlov, Alexey N.; Hramov, Alexander E.; Grubov, Vadim V.; Koronovskii, Alexey A.; Sitnikova, Evgenija Y.
2013-02-01
The problem of automatic recognition of specific oscillatory patterns on electroencephalograms (EEG) is addressed using the continuous wavelet-transform (CWT). A possibility of improving the quality of recognition by optimizing the choice of CWT parameters is discussed. An adaptive approach is proposed to identify sleep spindles (SS) and spike wave discharges (SWD) that assumes automatic selection of CWT-parameters reflecting the most informative features of the analyzed time-frequency structures. Advantages of the proposed technique over the standard wavelet-based approaches are considered.
Characterizing cerebrovascular dynamics with the wavelet-based multifractal formalism
NASA Astrophysics Data System (ADS)
Pavlov, A. N.; Abdurashitov, A. S.; Sindeeva, O. A.; Sindeev, S. S.; Pavlova, O. N.; Shihalov, G. M.; Semyachkina-Glushkovskaya, O. V.
2016-01-01
Using the wavelet-transform modulus maxima (WTMM) approach we study the dynamics of cerebral blood flow (CBF) in rats aiming to reveal responses of macro- and microcerebral circulations to changes in the peripheral blood pressure. We show that the wavelet-based multifractal formalism allows quantifying essentially different reactions in the CBF-dynamics at the level of large and small cerebral vessels. We conclude that unlike the macrocirculation that is nearly insensitive to increased peripheral blood pressure, the microcirculation is characterized by essential changes of the CBF-complexity.
Template-free wavelet-based detection of local symmetries.
Puspoki, Zsuzsanna; Unser, Michael
2015-10-01
Our goal is to detect and group different kinds of local symmetries in images in a scale- and rotation-invariant way. We propose an efficient wavelet-based method to determine the order of local symmetry at each location. Our algorithm relies on circular harmonic wavelets which are used to generate steerable wavelet channels corresponding to different symmetry orders. To give a measure of local symmetry, we use the F-test to examine the distribution of the energy across different channels. We provide experimental results on synthetic images, biological micrographs, and electron-microscopy images to demonstrate the performance of the algorithm. PMID:26011883
ESTIMATING MICROORGANISM DENSITIES IN AEROSOLS FROM SPRAY IRRIGATION OF WASTEWATER
This document summarizes current knowledge about estimating the density of microorganisms in the air near wastewater management facilities, with emphasis on spray irrigation sites. One technique for modeling microorganism density in air is provided and an aerosol density estimati...
Krug, R; Carballido-Gamio, J; Burghardt, A; Haase, S; Sedat, J W; Moss, W C; Majumdar, S
2005-04-11
Trabecular bone structure and bone density contribute to the strength of bone and are important in the study of osteoporosis. Wavelets are a powerful tool to characterize and quantify texture in an image. In this study the thickness of trabecular bone was analyzed in 8 cylindrical cores of the vertebral spine. Images were obtained from 3 Tesla (T) magnetic resonance imaging (MRI) and micro-computed tomography (µCT). Results from the wavelet-based analysis of trabecular bone were compared with standard two-dimensional structural parameters (analogous to bone histomorphometry) obtained using mean intercept length (MR images) and direct 3D distance transformation methods (µCT images). Additionally, the bone volume fraction was determined from MR images. We conclude that the wavelet-based analysis delivers comparable results to the established MR histomorphometric measurements. The average deviation in trabecular thickness was less than one pixel size between the wavelet and the standard approach for both MR and µCT analysis. Since the wavelet-based method is less sensitive to image noise, we see an advantage of wavelet analysis of trabecular bone for MR imaging when going to higher resolution.
A New Wavelet Based Approach to Assess Hydrological Models
NASA Astrophysics Data System (ADS)
Adamowski, J. F.; Rathinasamy, M.; Khosa, R.; Nalley, D.
2014-12-01
In this study, a new wavelet-based multi-scale performance measure (the Multiscale Nash-Sutcliffe Criteria (MNSC) and the Multiscale Normalized Root Mean Square Error (MNRMSE)) for hydrological model comparison was developed and tested. The new measure provides a quantitative measure of model performance across different timescales. Model and observed time series are decomposed using the à trous wavelet transform, and performance measures of the model are obtained at each time scale. The usefulness of the new measure was tested using real as well as synthetic case studies. The real case studies included simulation results from the Soil and Water Assessment Tool (SWAT), as well as statistical models (the Coupled Wavelet-Volterra (WVC), Artificial Neural Network (ANN), and Auto-Regressive Moving Average (ARMA) methods). Data from India and Canada were used. The synthetic case studies included different kinds of errors (e.g., timing error, as well as under- and over-prediction of high and low flows) in outputs from a hydrologic model. It was found that the proposed wavelet-based performance measures (i.e., MNSC and MNRMSE) are more reliable than traditional performance measures such as the Nash-Sutcliffe Criteria, Root Mean Square Error, and Normalized Root Mean Square Error. It was shown that the new measure can be used to compare different hydrological models, as well as help in model calibration.
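The idea of scoring a model at several timescales can be sketched by computing the Nash-Sutcliffe efficiency on progressively smoothed series. A moving average stands in for the à trous decomposition here, so this is only a schematic of the multiscale measure, and the data are synthetic.

```python
import numpy as np

def nse(obs, sim):
    # Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def smooth(x, w):
    # moving average: a simple stand-in for one smoothing level
    return np.convolve(x, np.ones(w) / w, mode="same")

def multiscale_nse(obs, sim, windows=(1, 2, 4, 8)):
    # NSE of the smoothed series at each scale (window 1 = original series)
    return {w: nse(smooth(obs, w), smooth(sim, w)) for w in windows}

t = np.linspace(0.0, 8.0 * np.pi, 512)
obs = np.sin(t) + 0.3 * np.random.default_rng(2).normal(size=t.size)
sim = np.sin(t)  # a "model" capturing the signal but not the noise
scores = multiscale_nse(obs, sim)
print(scores)
```

A model that tracks the slow dynamics but misses fine-scale variability scores higher at coarse scales than at the original resolution, which is the kind of scale-dependent diagnosis a single aggregate NSE hides.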
A Wavelet-Based Approach to Fall Detection
Palmerini, Luca; Bagalà, Fabio; Zanetti, Andrea; Klenk, Jochen; Becker, Clemens; Cappello, Angelo
2015-01-01
Falls among older people are a widely documented public health problem. Automatic fall detection has recently gained huge importance because it could allow for the immediate communication of falls to medical assistance. The aim of this work is to present a novel wavelet-based approach to fall detection, focusing on the impact phase and using a dataset of real-world falls. Since recorded falls result in a non-stationary signal, a wavelet transform was chosen to examine fall patterns. The idea is to consider the average fall pattern as the “prototype fall”. In order to detect falls, every acceleration signal can be compared to this prototype through wavelet analysis. The similarity of the recorded signal with the prototype fall is a feature that can be used in order to determine the difference between falls and daily activities. The discriminative ability of this feature is evaluated on real-world data. It outperforms other features that are commonly used in fall detection studies, with an Area Under the Curve of 0.918. This result suggests that the proposed wavelet-based feature is promising and future studies could use this feature (in combination with others considering different fall phases) in order to improve the performance of fall detection algorithms. PMID:26007719
Mammographic Density Estimation with Automated Volumetric Breast Density Measurement
Ko, Su Yeon; Kim, Eun-Kyung; Kim, Min Jung
2014-01-01
Objective To compare automated volumetric breast density measurement (VBDM) with radiologists' evaluations based on the Breast Imaging Reporting and Data System (BI-RADS), and to identify the factors associated with technical failure of VBDM. Materials and Methods In this study, 1129 women aged 19-82 years who underwent mammography from December 2011 to January 2012 were included. Breast density evaluations by radiologists based on BI-RADS and by VBDM (Volpara Version 1.5.1) were compared. The agreement in interpreting breast density between radiologists and VBDM was determined based on four density grades (D1, D2, D3, and D4) and a binary classification of fatty (D1-2) vs. dense (D3-4) breast using kappa statistics. The association between technical failure of VBDM and patient age, total breast volume, fibroglandular tissue volume, history of partial mastectomy, the frequency of mass > 3 cm, and breast density was analyzed. Results The agreement between breast density evaluations by radiologists and VBDM was fair (κ = 0.26) when the four density grades (D1/D2/D3/D4) were used and moderate (κ = 0.47) for the binary classification (D1-2/D3-4). Twenty-seven women (2.4%) showed failure of VBDM. Small total breast volume, history of partial mastectomy, and high breast density were significantly associated with technical failure of VBDM (p = 0.001 to 0.015). Conclusion There is fair or moderate agreement in breast density evaluation between radiologists and VBDM. Technical failure of VBDM may be related to small total breast volume, a history of partial mastectomy, and high breast density. PMID:24843235
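The agreement figures above are Cohen's kappa values. A self-contained sketch of the unweighted kappa computation, with hypothetical four-grade ratings:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Unweighted Cohen's kappa for two raters over the same cases:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # observed agreement
    po = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # chance agreement from the marginal label frequencies
    ca, cb = Counter(rater_a), Counter(rater_b)
    pe = sum(ca[k] * cb[k] for k in ca) / (n * n)
    return (po - pe) / (1.0 - pe)

# hypothetical density grades from a radiologist and from software
radiologist = ["D1", "D1", "D2", "D2", "D3", "D3", "D4", "D4"]
software    = ["D1", "D2", "D2", "D2", "D3", "D4", "D4", "D4"]
kappa = cohens_kappa(radiologist, software)
```

Values around 0.2-0.4 are conventionally read as "fair" agreement and 0.4-0.6 as "moderate", which is the scale the abstract uses.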
NASA Astrophysics Data System (ADS)
Martinez-Torres, C.; Arneodo, A.; Streppa, L.; Argoul, P.; Argoul, F.
2016-01-01
Compared to active microrheology where a known force or modulation is periodically imposed to a soft material, passive microrheology relies on the spectral analysis of the spontaneous motion of tracers inherent or external to the material. Passive microrheology studies of soft or living materials with atomic force microscopy (AFM) cantilever tips are rather rare because, in the spectral densities, the rheological response of the materials is hardly distinguishable from other sources of random or periodic perturbations. To circumvent this difficulty, we propose here a wavelet-based decomposition of AFM cantilever tip fluctuations and we show that when applying this multi-scale method to soft polymer layers and to living myoblasts, the structural damping exponents of these soft materials can be retrieved.
Toward Estimating Current Densities in Magnetohydrodynamic Generators
NASA Astrophysics Data System (ADS)
Bokil, V. A.; Gibson, N. L.; McGregor, D. A.; Woodside, C. R.
2015-09-01
We investigate the idea of reconstructing current densities in a magnetohydrodynamic (MHD) generator channel from external magnetic flux density measurements in order to determine the existence and location of damaging arcs. We model the induced fields, which are usually neglected in low magnetic Reynolds number flows, using a natural fixed point iteration. Further, we present a sensitivity analysis of induced fields to current density profiles in a 3D, yet simplified, model.
A wavelet based technique for suppression of EMG noise and motion artifact in ambulatory ECG.
Mithun, P; Pandey, Prem C; Sebastian, Toney; Mishra, Prashant; Pandey, Vinod K
2011-01-01
A wavelet-based denoising technique is investigated for suppressing EMG noise and motion artifact in ambulatory ECG. EMG noise is reduced by thresholding the wavelet coefficients using an improved thresholding function combining the features of hard and soft thresholding. Motion artifact is reduced by limiting the wavelet coefficients. Thresholds for both denoising steps are estimated from the statistics of the noisy signal. Denoising of simulated noisy ECG signals resulted in an average SNR improvement of 11.4 dB, and its application to ambulatory ECG recordings resulted in L2 norm and max-min based improvement indices close to one. It significantly improved R-peak detection in both cases. PMID:22255971
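The "improved thresholding function combining the features of hard and soft thresholding" is not specified in the abstract; firm (semi-soft) thresholding is one standard such compromise and serves only as an illustration here:

```python
import numpy as np

def firm_threshold(coeffs, t1, t2):
    """Firm (semi-soft) thresholding: kills coefficients below t1, keeps
    those above t2 unchanged (like hard thresholding), and shrinks linearly
    in between (like soft thresholding).  A standard hard/soft compromise;
    the paper's exact function may differ."""
    c = np.asarray(coeffs, dtype=float)
    out = np.where(np.abs(c) <= t1, 0.0, c)
    mid = (np.abs(c) > t1) & (np.abs(c) <= t2)
    out = np.where(mid, np.sign(c) * t2 * (np.abs(c) - t1) / (t2 - t1), out)
    return out

# small coefficients vanish, large ones survive, mid-range ones shrink
den = firm_threshold([0.5, -0.5, 1.5, -1.5, 3.0, -3.0], t1=1.0, t2=2.0)
```

In a full denoiser this function would be applied to the detail coefficients of a wavelet decomposition before reconstructing the signal.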
Concrete density estimation by rebound hammer method
NASA Astrophysics Data System (ADS)
Ismail, Mohamad Pauzi bin; Jefri, Muhamad Hafizie Bin; Abdullah, Mahadzir Bin; Masenwat, Noor Azreen bin; Sani, Suhairy bin; Mohd, Shukri; Isa, Nasharuddin bin; Mahmud, Mohamad Haniza bin
2016-01-01
Concrete is the most common and cheapest material for radiation shielding. Compressive strength is the main parameter checked when determining concrete quality. However, for shielding purposes, density is the parameter that needs to be considered. X-ray and gamma radiation are effectively absorbed by a material with high atomic number and high density such as concrete. High strength normally implies higher density in concrete, but this is not always true. This paper explains and discusses the correlation between rebound hammer testing and density for concrete containing hematite aggregates. A comparison is also made with normal concrete, i.e. concrete containing crushed granite.
A Wavelet-Based Methodology for Grinding Wheel Condition Monitoring
Liao, T. W.; Ting, C.F.; Qu, Jun; Blau, Peter Julian
2007-01-01
Grinding wheel surface condition changes as more material is removed. This paper presents a wavelet-based methodology for grinding wheel condition monitoring based on acoustic emission (AE) signals. Grinding experiments in creep feed mode were conducted to grind alumina specimens with a resinoid-bonded diamond wheel using two different conditions. During the experiments, AE signals were collected when the wheel was 'sharp' and when the wheel was 'dull'. Discriminant features were then extracted from each raw AE signal segment using the discrete wavelet decomposition procedure. An adaptive genetic clustering algorithm was finally applied to the extracted features in order to distinguish different states of grinding wheel condition. The test results indicate that the proposed methodology can achieve 97% clustering accuracy for the high material removal rate condition, 86.7% for the low material removal rate condition, and 76.7% for the combined grinding conditions if the base wavelet, the decomposition level, and the GA parameters are properly selected.
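Feature extraction by discrete wavelet decomposition typically reduces each AE signal segment to per-band energies. A minimal sketch with an orthonormal Haar transform (the paper's base wavelet and decomposition level are tuning choices it leaves to selection):

```python
import numpy as np

def haar_level(x):
    """One Haar analysis step: approximation and detail coefficients."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def wavelet_energies(x, levels=3):
    """Energy of the detail band at each level plus the final approximation.
    The Haar transform is orthonormal, so these energies sum exactly to the
    energy of the input segment (Parseval)."""
    x = np.asarray(x, dtype=float)
    feats = []
    for _ in range(levels):
        x, d = haar_level(x)
        feats.append(float(np.sum(d ** 2)))
    feats.append(float(np.sum(x ** 2)))
    return feats

rng = np.random.default_rng(0)
sig = rng.standard_normal(256)      # stand-in for one AE segment
feats = wavelet_energies(sig, levels=3)
```

Vectors like `feats` (often normalized) are what a clustering algorithm such as the paper's adaptive genetic clustering would then group into "sharp" and "dull" states.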
Dynamic wavelet-based tool for gearbox diagnosis
NASA Astrophysics Data System (ADS)
Omar, Farag K.; Gaouda, A. M.
2012-01-01
This paper proposes a novel wavelet-based technique for detecting and localizing gear tooth defects in a noisy environment. The proposed technique utilizes a dynamic windowing process while analyzing gearbox vibration signals in the wavelet domain. The gear vibration signal is processed through a dynamic Kaiser's window of varying parameters. The window size, shape, and sliding rate are modified towards increasing the similarity between the non-stationary vibration signal and the selected mother wavelet. The window parameters are continuously modified until they provide maximum wavelet coefficients localized at the defective tooth. The technique is applied to laboratory data corrupted with a high noise level. The technique has shown accurate results in detecting and localizing gear tooth fracture with different damage severities.
Wavelet-based image analysis system for soil texture analysis
NASA Astrophysics Data System (ADS)
Sun, Yun; Long, Zhiling; Jang, Ping-Rey; Plodinec, M. John
2003-05-01
Soil texture is defined as the relative proportion of clay, silt and sand found in a given soil sample. It is an important physical property of soil that affects such phenomena as plant growth and agricultural fertility. Traditional methods used to determine soil texture are either time consuming (hydrometer), or subjective and experience-demanding (field tactile evaluation). Considering that textural patterns observed at soil surfaces are uniquely associated with soil textures, we propose an innovative approach to soil texture analysis, in which wavelet frames-based features representing texture contents of soil images are extracted and categorized by applying a maximum likelihood criterion. The soil texture analysis system has been tested successfully with an accuracy of 91% in classifying soil samples into one of three general categories of soil textures. In comparison with the common methods, this wavelet-based image analysis approach is convenient, efficient, fast, and objective.
Wavelet-based multifractal analysis of laser biopsy imagery
NASA Astrophysics Data System (ADS)
Jagtap, Jaidip; Ghosh, Sayantan; Panigrahi, Prasanta K.; Pradhan, Asima
2012-03-01
In this work, we report a wavelet-based multifractal study of images of dysplastic and neoplastic H&E-stained human cervical tissues captured in the transmission mode when illuminated by a laser light (He-Ne 632.8 nm laser). It is well known that the morphological changes occurring during the progression of diseases like cancer manifest in their optical properties, which can be probed for differentiating the various stages of cancer. Here, we use the multi-resolution properties of the wavelet transform to analyze the optical changes. For this, we have used a novel laser imagery technique which provides us with a composite image of the absorption by the different cellular organelles. As the disease progresses, due to the growth of new cells, the ratio of organelle to cellular volume changes, manifesting in the laser imagery of such tissues. In order to develop a metric that can quantify the changes in such systems, we make use of wavelet-based fluctuation analysis. The changing self-similarity during disease progression can be well characterized by the Hurst exponent and the scaling exponent. Due to the use of the Daubechies family of wavelet kernels, we can extract polynomial trends of different orders, which help us characterize the underlying processes effectively. In this study, we observe that the Hurst exponent decreases as the cancer progresses. This measure could be used to differentiate between different stages of cancer, which could lead to the development of a novel non-invasive method for cancer detection and characterization.
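The core scaling idea can be conveyed with a simple wavelet-variance estimate of a Hurst-like exponent: the variance of detail coefficients grows geometrically across levels, and its log-log slope encodes H. This is only a rough sketch; the paper's multifractal fluctuation analysis with Daubechies kernels is considerably richer.

```python
import numpy as np

def hurst_wavelet(x, levels=5):
    """Estimate a Hurst-like exponent from the slope of
    log2(variance of Haar detail coefficients) versus level.
    For fractional-Gaussian-noise-type signals Var(d_j) ~ 2^{j(2H-1)},
    so H = (slope + 1) / 2.  A sketch, not the paper's exact analysis."""
    x = np.asarray(x, dtype=float)
    js, log_vars = [], []
    for j in range(1, levels + 1):
        a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
        d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
        js.append(j)
        log_vars.append(np.log2(np.var(d)))
        x = a
    slope = np.polyfit(js, log_vars, 1)[0]
    return (slope + 1.0) / 2.0

rng = np.random.default_rng(42)
h_white = hurst_wavelet(rng.standard_normal(4096))  # near 0.5 for white noise
```

A decreasing estimate across disease stages would mirror the paper's observation that the Hurst exponent drops as cancer progresses.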
Wavelet based free-form deformations for nonrigid registration
NASA Astrophysics Data System (ADS)
Sun, Wei; Niessen, Wiro J.; Klein, Stefan
2014-03-01
In nonrigid registration, deformations may take place on the coarse and fine scales. For the conventional B-splines based free-form deformation (FFD) registration, these coarse- and fine-scale deformations are all represented by basis functions of a single scale. Meanwhile, wavelets have been proposed as a signal representation suitable for multi-scale problems. Wavelet analysis leads to a unique decomposition of a signal into its coarse- and fine-scale components. Potentially, this could therefore be useful for image registration. In this work, we investigate whether a wavelet-based FFD model has advantages for nonrigid image registration. We use a B-splines based wavelet, as defined by Cai and Wang [1]. This wavelet is expressed as a linear combination of B-spline basis functions. Derived from the original B-spline function, this wavelet is smooth, differentiable, and compactly supported. The basis functions of this wavelet are orthogonal across scales in Sobolev space. This wavelet was previously used for registration in computer vision, in 2D optical flow problems [2], but it was not compared with the conventional B-spline FFD in medical image registration problems. An advantage of choosing this B-splines based wavelet model is that the space of allowable deformation is exactly equivalent to that of the traditional B-spline. The wavelet transformation is essentially a (linear) reparameterization of the B-spline transformation model. Experiments on 10 CT lung and 18 T1-weighted MRI brain datasets show that wavelet based registration leads to smoother deformation fields than traditional B-splines based registration, while achieving better accuracy.
Estimating the central densities of stellar systems
NASA Astrophysics Data System (ADS)
Merritt, David
1988-02-01
The sensitivity of King's (1966) core-fitting formula to velocity anisotropy is discussed. For stable, spherical models, King's formula can overestimate the central density by at least 50 percent. For nonspherical models, the error can be 150 percent or more. In all cases, the sensitivity of the core-fitting formula to anisotropy can be reduced somewhat if velocity dispersions are averaged over the inner one or two core radii.
Density estimation using the trapping web design: A geometric analysis
Link, W.A.; Barker, R.J.
1994-01-01
Population densities for small mammal and arthropod populations can be estimated using capture frequencies for a web of traps. A conceptually simple geometric analysis that avoids the need to estimate a point on a density function is proposed. This analysis incorporates data from the outermost rings of traps, accounting for the large capture frequencies in these rings rather than truncating them from the analysis.
Nonparametric estimation of plant density by the distance method
Patil, S.A.; Burnham, K.P.; Kovner, J.L.
1979-01-01
A relation between the plant density and the probability density function of the nearest neighbor distance (squared) from a random point is established under fairly broad conditions. Based upon this relationship, a nonparametric estimator for the plant density is developed and presented in terms of order statistics. Consistency and asymptotic normality of the estimator are discussed. An interval estimator for the density is obtained. The modifications of this estimator and its variance are given when the distribution is truncated. Simulation results are presented for regular, random and aggregated populations to illustrate the nonparametric estimator and its variance. A numerical example from field data is given. Merits and deficiencies of the estimator are discussed with regard to its robustness and variance.
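For orientation, the classical Poisson-process baseline makes the distance-density relationship concrete; the paper's nonparametric order-statistic estimator generalizes beyond this assumption, so the sketch below is only the textbook starting point, not the paper's method:

```python
import numpy as np

def poisson_density_estimate(r):
    """Classical point-to-nearest-plant estimator under a completely
    random (Poisson) pattern: lambda_hat = n / (pi * sum r_i^2), where
    r_i is the distance from random point i to its nearest plant.
    The area pi * r_i^2 swept out before hitting a plant is, on average,
    one plant's share of the plane, which motivates the formula."""
    r = np.asarray(r, dtype=float)
    return len(r) / (np.pi * np.sum(r ** 2))

# if every nearest-plant distance were exactly 1, the estimate is 1/pi
est = poisson_density_estimate(np.ones(10))
```

For clustered (aggregated) or regular populations this Poisson-based formula is biased, which is precisely the motivation for the nonparametric estimator developed in the paper.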
An image adaptive, wavelet-based watermarking of digital images
NASA Astrophysics Data System (ADS)
Agreste, Santa; Andaloro, Guido; Prestipino, Daniela; Puccio, Luigia
2007-12-01
In digital management, multimedia content and data can easily be used in an illegal way: copied, modified, and distributed again. Copyright protection, intellectual and material rights protection for authors, owners, buyers, distributors, and the authenticity of content are crucial factors in solving an urgent and real problem. In such a scenario, digital watermark techniques are emerging as a valid solution. In this paper, we describe an algorithm, called WM2.0, for an invisible watermark: private, strong, wavelet-based, and developed for digital image protection and authenticity. The use of the discrete wavelet transform (DWT) is motivated by its good time-frequency localization and its good match with human visual system characteristics. These two combined elements are important in building an invisible and robust watermark. WM2.0 works on a dual scheme: watermark embedding and watermark detection. The watermark is embedded into high-frequency DWT components of a specific sub-image and is calculated in correlation with the image features and statistical properties. Watermark detection applies a re-synchronization between the original and watermarked image. The correlation between the watermarked DWT coefficients and the watermark signal is calculated according to the Neyman-Pearson statistical criterion. Experimentation on a large set of different images has shown the watermark to be resistant against geometric, filtering and StirMark attacks with a low rate of false alarm.
A wavelet-based feature vector model for DNA clustering.
Bao, J P; Yuan, R Y
2015-01-01
DNA data are important in the bioinformatics domain. To extract useful information from the enormous collection of DNA sequences, DNA clustering is often adopted to deal with DNA data efficiently. The alignment-free method is a very popular way of creating feature vectors from DNA sequences, which are then used to compare DNA similarities. This paper proposes a wavelet-based feature vector (WFV) model, which is also an alignment-free method. From the perspective of signal processing, a DNA sequence is a sequence of digital signals. However, most traditional alignment-free models only extract features in the time domain. The WFV model uses the discrete wavelet transform to adaptively yield feature vectors with a fixed dimension based on features in both the time and frequency domains. The level of the wavelet transform is adjusted according to the length of the DNA sequence rather than being a fixed, manually set value. The WFV model performs best with a 32-dimensional feature vector, which greatly improves system performance. We compared the WFV model with five other alignment-free models, i.e., k-tuple, DMK, TSM, AMI, and CV, on several large-scale DNA datasets in the DNA clustering application by means of the K-means algorithm. The experimental results showed that the WFV model outperformed the other models in terms of both the clustering results and the running time. PMID:26782569
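The fixed-dimension construction can be sketched as: encode bases numerically, pad the signal to a power of two, and apply low-pass (approximation) steps until the target dimension remains. The numeric base encoding below is a hypothetical choice for illustration; the paper's mapping and wavelet may differ.

```python
import numpy as np

# hypothetical base encoding; the paper's numeric mapping may differ
BASE = {"A": 0.0, "C": 1.0, "G": 2.0, "T": 3.0}

def wfv(seq, dim=8):
    """Wavelet-based feature vector sketch: encode the sequence as a
    digital signal, zero-pad to a power of two, and apply Haar
    approximation steps until a fixed dimension is reached.  The number
    of transform levels thus adapts to the sequence length."""
    x = np.array([BASE[b] for b in seq.upper()], dtype=float)
    size = max(dim, 1 << int(np.ceil(np.log2(len(x)))))
    x = np.pad(x, (0, size - len(x)))
    while len(x) > dim:
        x = (x[0::2] + x[1::2]) / np.sqrt(2.0)  # one low-pass step
    return x

v1 = wfv("ACGTACGTACGT", dim=8)   # 12 bases  -> 8-dim vector
v2 = wfv("ACGT" * 10, dim=8)      # 40 bases -> 8-dim vector
```

Because every sequence maps to the same dimension, the vectors can be fed directly to K-means, as in the paper's clustering experiments.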
Image retrieval using wavelet-based salient points
NASA Astrophysics Data System (ADS)
Tian, Qi; Sebe, Nicu; Lew, Michael S.; Loupias, E.; Huang, Thomas S.
2001-10-01
Content-based image retrieval (CBIR) has become one of the most active research areas in the past few years. Most of the attention from the research community has been focused on indexing techniques based on global feature distributions. However, these global distributions have limited discriminating power because they are unable to capture local image information. The use of interest points in content-based image retrieval allows the image index to represent local properties of the image. Classic corner detectors can be used for this purpose. However, they have drawbacks when applied to various natural images for image retrieval, because visual features need not be corners and corners may gather in small regions. In this paper, we present a salient point detector. The detector is based on the wavelet transform to detect global variations as well as local ones. The wavelet-based salient points are evaluated for image retrieval with a retrieval system using color and texture features. The results show that salient points with Gabor features perform better than the other point detectors from the literature and randomly chosen points. Significant improvements are achieved in terms of retrieval accuracy and computational complexity when compared to the global feature approaches.
Wavelet-based characterization of gait signal for neurological abnormalities.
Baratin, E; Sugavaneswaran, L; Umapathy, K; Ioana, C; Krishnan, S
2015-02-01
Studies conducted by the World Health Organization (WHO) indicate that over one billion people suffer from neurological disorders worldwide, and the lack of efficient diagnosis procedures affects their therapeutic interventions. Characterizing certain pathologies of motor control for facilitating their diagnosis can be useful in quantitatively monitoring disease progression and efficient treatment planning. As a suitable directive, we introduce a wavelet-based scheme for effective characterization of gait associated with certain neurological disorders. In addition, since the data were recorded from a dynamic process, this work also investigates the need for gait signal re-sampling prior to identification of signal markers in the presence of pathologies. To benefit automated discrimination of gait data, certain characteristic features are extracted from the wavelet-transformed signals. The performance of the proposed approach was evaluated using a database consisting of 15 Parkinson's disease (PD), 20 Huntington's disease (HD), 13 amyotrophic lateral sclerosis (ALS) and 16 healthy control subjects, and an average classification accuracy of 85% is achieved using an unbiased cross-validation strategy. The obtained results demonstrate the potential of the proposed methodology for computer-aided diagnosis and automatic characterization of certain neurological disorders. PMID:25661004
Structural damage localization using wavelet-based silhouette statistics
NASA Astrophysics Data System (ADS)
Jung, Uk; Koh, Bong-Hwan
2009-04-01
This paper introduces a new methodology for classifying and localizing structural damage in a truss structure. The application of wavelet analysis along with signal classification techniques to engineering problems allows us to discover novel characteristics that can be used for the diagnosis and classification of structural defects. This study exploits the data-discriminating capability of silhouette statistics, which is combined with a wavelet-based vertical energy threshold technique for the purpose of extracting damage-sensitive features and clustering signals of the same class. This threshold technique allows us to first obtain a suitable subset of the extracted or modified features of our data: good predictor sets should contain features that are strongly correlated with the characteristics of the data, regardless of the classification method used, while each of these features should be as uncorrelated with the others as possible. The silhouette statistics have been used to assess the quality of clustering by measuring how well an object is assigned to its corresponding cluster, and we use this concept for the discriminant power function in this paper. The simulation results of damage detection in a truss structure show that the approach proposed in this study can be successfully applied to locating both open- and breathing-type damage even in the presence of a considerable amount of process and measurement noise. Finally, a typical data mining tool, the classification and regression tree (CART), quantitatively evaluates the performance of the damage localization results in terms of the misclassification error.
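The silhouette statistic itself is standard and compact: for each object, s(i) = (b - a) / max(a, b), where a is the mean distance to the object's own cluster and b the smallest mean distance to any other cluster. A self-contained numpy sketch:

```python
import numpy as np

def silhouette_scores(X, labels):
    """Silhouette statistic per point: s(i) = (b - a) / max(a, b),
    where a is the mean distance to same-cluster points and b the
    smallest mean distance to the points of any other cluster."""
    X = np.asarray(X, dtype=float)
    labels = np.asarray(labels)
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    scores = []
    for i in range(len(X)):
        same = (labels == labels[i])
        same[i] = False                          # exclude the point itself
        a = dists[i, same].mean() if same.any() else 0.0
        b = min(dists[i, labels == k].mean()
                for k in set(labels.tolist()) if k != labels[i])
        scores.append((b - a) / max(a, b))
    return np.array(scores)

# two well-separated clusters give scores near +1
X = np.array([[0.0, 0.0], [0.1, 0.0], [10.0, 10.0], [10.1, 10.0]])
mean_sil = silhouette_scores(X, [0, 0, 1, 1]).mean()
```

Scores near +1 mean an object sits firmly in its cluster, near 0 on a boundary, and negative in the wrong cluster; this is the quantity the paper builds its discriminant power function on.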
Wavelet-based face verification for constrained platforms
NASA Astrophysics Data System (ADS)
Sellahewa, Harin; Jassim, Sabah A.
2005-03-01
Human identification based on facial images is one of the most challenging tasks in comparison to identification based on other biometric features such as fingerprints, palm prints or iris. Facial recognition is the most natural and suitable method of identification for security-related applications. This paper is concerned with wavelet-based schemes for efficient face verification suitable for implementation on devices that are constrained in memory size and computational power such as PDAs and smartcards. Besides minimal storage requirements, we should apply as few pre-processing procedures as possible, which are often needed to deal with variation in recording conditions. We propose the LL coefficients of wavelet-transformed face images as the feature vectors for face verification, and compare their performance with that of PCA applied in the LL-subband at levels 3, 4 and 5. We also compare the performance of various versions of our scheme with those of well-established PCA face verification schemes on the BANCA database as well as the ORL database. In many cases, the wavelet-only feature vector scheme has the best performance while maintaining efficacy and requiring minimal pre-processing steps. The significance of these results is their efficiency and suitability for platforms of constrained computational power and storage capacity (e.g. smartcards). Moreover, working at or beyond the level 3 LL-subband results in robustness against high-rate compression and noise interference.
Coarse-to-fine wavelet-based airport detection
NASA Astrophysics Data System (ADS)
Li, Cheng; Wang, Shuigen; Pang, Zhaofeng; Zhao, Baojun
2015-10-01
Airport detection in optical remote sensing images has attracted great interest in applications of military optical reconnaissance and traffic control. However, most popular techniques for airport detection in optical remote sensing images have three weaknesses: 1) due to the characteristics of optical images, the detection results are often affected by imaging conditions, such as weather and imaging distortion; 2) optical images contain comprehensive information about targets, so it is difficult to extract robust features (e.g., intensity and textural information) to represent the airport area; and 3) the high resolution results in a large data volume, which limits real-time processing. Most previous works mainly focus on solving one of these problems, and thus cannot achieve a balance of performance and complexity. In this paper, we propose a novel coarse-to-fine airport detection framework that addresses all three issues using wavelet coefficients. The framework includes two stages: 1) an efficient wavelet-based feature extraction is adopted for multi-scale textural feature representation, and a support vector machine (SVM) is exploited for classifying and coarsely deciding airport candidate regions; and then 2) refined line segment detection is used to obtain the runway and landing field of the airport. Finally, airport recognition is achieved by applying the fine runway positioning to the candidate regions. Experimental results show that the proposed approach outperforms existing algorithms in terms of detection accuracy and processing efficiency.
Density Ratio Estimation: A New Versatile Tool for Machine Learning
NASA Astrophysics Data System (ADS)
Sugiyama, Masashi
A new general framework of statistical data processing based on the ratio of probability densities has been proposed recently and is gathering a great deal of attention in the machine learning and data mining communities [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17]. This density ratio framework includes various statistical data processing tasks such as non-stationarity adaptation [18,1,2,4,13], outlier detection [19,20,21,6], and conditional density estimation [22,23,24,15]. Furthermore, mutual information, which plays a central role in information theory [25], can also be estimated via density ratio estimation. Since mutual information is a measure of statistical independence between random variables [26,27,28], density ratio estimation can be used also for variable selection [29,7,11], dimensionality reduction [30,16], and independent component analysis [31,12].
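Why the ratio is useful is easiest to see in non-stationarity (covariate-shift) adaptation: weighting samples drawn from one density q by the ratio p(x)/q(x) recovers expectations under the other density p. The toy below uses the analytic ratio of two Gaussians; actual density-ratio methods approximate this weight function directly from samples, without estimating p and q separately.

```python
import numpy as np

rng = np.random.default_rng(7)

# toy covariate shift: training density q = N(0, 1), test density p = N(1, 1)
x_q = rng.normal(0.0, 1.0, 20000)

def true_ratio(x):
    """Analytic density ratio p(x)/q(x) for the two Gaussians above.
    The ratio of the two pdfs simplifies to exp(x - 1/2)."""
    return np.exp(x - 0.5)

w = true_ratio(x_q)
# importance-weighted mean of q-samples recovers the mean under p (= 1)
shifted_mean = float(np.sum(w * x_q) / np.sum(w))
```

Density-ratio estimators in the surveyed literature replace `true_ratio` with a function learned from samples of p and q, which is what makes the framework applicable when neither density is known.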
Efficient backward-propagation using wavelet-based filtering for fiber backward-propagation.
Goldfarb, Gilad; Li, Guifang
2009-05-25
With the goal of reducing the number of operations required for digital backward-propagation used for fiber impairment compensation, wavelet-based filtering is presented. The wavelet-based design relies on signal decomposition using time-limited basis functions and hence is more compatible with the dispersion operator, which is also time-limited. This is in contrast with inverse-Fourier filter design, which by definition is not time-limited due to the use of harmonic basis functions for signal decomposition. Artificial, after-the-fact windowing may be employed in this case; however, only a limited saving in the number of operations can be achieved compared to the wavelet-based filter design. The wavelet-based filter design procedure and numerical simulations that validate this approach are presented in this paper. PMID:19466131
Improving 3D Wavelet-Based Compression of Hyperspectral Images
NASA Technical Reports Server (NTRS)
Klimesh, Matthew; Kiely, Aaron; Xie, Hua; Aranki, Nazeeh
2009-01-01
Two methods of increasing the effectiveness of three-dimensional (3D) wavelet-based compression of hyperspectral images have been developed. (As used here, images signifies both images and digital data representing images.) The methods are oriented toward reducing or eliminating detrimental effects of a phenomenon, referred to as spectral ringing, that is described below. In 3D wavelet-based compression, an image is represented by a multiresolution wavelet decomposition consisting of several subbands obtained by applying wavelet transforms in the two spatial dimensions corresponding to the two spatial coordinate axes of the image plane, and by applying wavelet transforms in the spectral dimension. Spectral ringing is named after the more familiar spatial ringing (spurious spatial oscillations) that can be seen parallel to and near edges in ordinary images reconstructed from compressed data. These ringing phenomena are attributable to effects of quantization. In hyperspectral data, the individual spectral bands play the role of edges, causing spurious oscillations to occur in the spectral dimension. In the absence of such corrective measures as the present two methods, spectral ringing can manifest itself as systematic biases in some reconstructed spectral bands and can reduce the effectiveness of compression of spatially-low-pass subbands. One of the two methods is denoted mean subtraction. The basic idea of this method is to subtract mean values from spatial planes of spatially low-pass subbands prior to encoding, because (a) such spatial planes often have mean values that are far from zero and (b) zero-mean data are better suited for compression by methods that are effective for subbands of two-dimensional (2D) images. In this method, after the 3D wavelet decomposition is performed, mean values are computed for and subtracted from each spatial plane of each spatially-low-pass subband. 
The resulting data are converted to sign-magnitude form and compressed in a manner similar to that of a baseline hyperspectral-image-compression method. The mean values are encoded in the compressed bit stream and added back to the data at the appropriate decompression step. The overhead incurred by encoding the mean values (only a few bits per spectral band) is negligible with respect to the huge size of a typical hyperspectral data set. The other method is denoted modified decomposition. This method is so named because it involves a modified version of a commonly used multiresolution wavelet decomposition, known in the art as the 3D Mallat decomposition, in which (a) the first of multiple stages of a 3D wavelet transform is applied to the entire dataset and (b) subsequent stages are applied only to the horizontally-, vertically-, and spectrally-low-pass subband from the preceding stage. In the modified decomposition, in stages after the first, not only is the spatially-low-pass, spectrally-low-pass subband further decomposed, but also spatially-low-pass, spectrally-high-pass subbands are further decomposed spatially. Either method can be used alone to improve the quality of a reconstructed image (see figure). Alternatively, the two methods can be combined by first performing modified decomposition, then subtracting the mean values from spatial planes of spatially-low-pass subbands.
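The mean-subtraction step is straightforward to sketch: remove each spatial plane's mean before encoding and add it back on decompression. The (bands, rows, cols) array shape and function names below are assumptions for illustration:

```python
import numpy as np

def subtract_plane_means(subband):
    """Encoder-side mean subtraction: remove the mean of each spatial
    plane of a (bands, rows, cols) low-pass subband so the planes are
    zero-mean before encoding; the means become side information."""
    means = subband.mean(axis=(1, 2))
    return subband - means[:, None, None], means

def restore_plane_means(zero_mean, means):
    """Decoder side: add the transmitted per-plane means back."""
    return zero_mean + means[:, None, None]

rng = np.random.default_rng(1)
sb = rng.normal(5.0, 1.0, (4, 8, 8))   # toy subband with far-from-zero means
zm, mu = subtract_plane_means(sb)
rt = restore_plane_means(zm, mu)
```

After subtraction each plane is zero-mean, which is the property the abstract identifies as better suited to 2D subband coders, and the roundtrip is lossless apart from the quantization applied in between.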
Embedded wavelet-based face recognition under variable position
NASA Astrophysics Data System (ADS)
Cotret, Pascal; Chevobbe, Stéphane; Darouich, Mehdi
2015-02-01
For several years, face recognition has been a hot topic in the image processing field: this technique is applied in several domains such as CCTV, electronic device unlocking, and so on. In this context, this work studies the efficiency of a wavelet-based face recognition method in terms of subject position robustness and performance on various systems. The use of the wavelet transform has a limited impact on the position robustness of PCA-based face recognition. This work shows, for a well-known database (Yale face database B*), that subject position in a 3D space can vary up to 10% of the original ROI size without decreasing recognition rates. Face recognition is performed on approximation coefficients of the image wavelet transform: results are still satisfying after 3 levels of decomposition. Furthermore, the face database size can be divided by a factor of 64 (2^(2K) with K = 3). In the context of ultra-embedded vision systems, memory footprint is one of the key points to be addressed; that is the reason why compression techniques such as the wavelet transform are interesting. Furthermore, it leads to a low-complexity face detection stage compliant with the limited computation resources available on such systems. The approach described in this work is tested on three platforms, from a standard x86-based computer to nanocomputers such as the RaspberryPi and SECO boards. For K = 3 and a database with 40 faces, the mean execution time for one frame is 0.64 ms on an x86-based computer, 9 ms on a SECO board and 26 ms on a RaspberryPi (model B).
Optimum nonparametric estimation of population density based on ordered distances
Patil, S.A.; Kovner, J.L.; Burnham, Kenneth P.
1982-01-01
The asymptotic mean and error mean square are determined for the nonparametric estimator of plant density by distance sampling proposed by Patil, Burnham and Kovner (1979, Biometrics 35, 597-604). On the basis of these formulae, a bias-reduced version of this estimator is given, and the specific form that gives minimum mean square error is determined under varying assumptions about the true probability density function of the sampled data. An extension to line-transect sampling is given.
Evaluation of wolf density estimation from radiotelemetry data
Burch, J.W.; Adams, L.G.; Follmann, E.H.; Rexstad, E.A.
2005-01-01
Density estimation of wolves (Canis lupus) requires a count of individuals and an estimate of the area those individuals inhabit. With radiomarked wolves, the count is straightforward but estimation of the area is more difficult and often given inadequate attention. The population area, based on the mosaic of pack territories, is influenced by sampling intensity similar to the estimation of individual home ranges. If sampling intensity is low, population area will be underestimated and wolf density will be inflated. Using data from studies in Denali National Park and Preserve, Alaska, we investigated these relationships using Monte Carlo simulation to evaluate effects of radiolocation effort and number of marked packs on density estimation. As the number of adjoining pack home ranges increased, fewer relocations were necessary to define a given percentage of population area. We present recommendations for monitoring wolves via radiotelemetry.
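The mechanism described, underestimated population area (and hence inflated density) at low radiolocation effort, can be reproduced with a toy Monte Carlo. The convex hull here is a crude stand-in for a territory-mosaic area estimator, and the pack size, study area, and effort levels are all invented:

```python
import numpy as np

def hull_area(pts):
    """Convex-hull area: Andrew's monotone chain plus the shoelace formula."""
    pts = pts[np.lexsort((pts[:, 1], pts[:, 0]))]
    def chain(points):
        h = []
        for p in points:
            while len(h) >= 2 and ((h[-1][0] - h[-2][0]) * (p[1] - h[-2][1]) -
                                   (h[-1][1] - h[-2][1]) * (p[0] - h[-2][0])) <= 0:
                h.pop()
            h.append(p)
        return h
    hull = chain(pts)[:-1] + chain(pts[::-1])[:-1]
    x, y = np.array(hull).T
    return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

rng = np.random.default_rng(0)
wolves = 12                                 # radiomarked count (hypothetical)
density = {}
for n_fixes in (5, 20, 200):                # radiolocation effort
    areas = [hull_area(rng.uniform(0, 10, (n_fixes, 2))) for _ in range(300)]
    density[n_fixes] = wolves / np.mean(areas)   # true value is 12/100 = 0.12
print(density)                              # density inflates at low effort
```

With few relocations the hull covers only part of the true 10 x 10 range, so the density estimate is biased high, exactly the sampling-intensity effect the abstract warns about.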
Wavelet-based AR-SVM for health monitoring of smart structures
NASA Astrophysics Data System (ADS)
Kim, Yeesock; Chong, Jo Woon; Chon, Ki H.; Kim, JungMi
2013-01-01
This paper proposes a novel structural health monitoring framework for damage detection of smart structures. The framework is developed through the integration of the discrete wavelet transform, an autoregressive (AR) model, damage-sensitive features, and a support vector machine (SVM). The steps of the method are the following: (1) a wavelet-based AR (WAR) model is fitted to vibration signals obtained from both the undamaged and damaged smart structures under a variety of random excitations; (2) a new damage-sensitive feature is formulated in terms of the AR parameters estimated from the structural velocity responses; and then (3) the SVM is applied to each group of damaged and undamaged data sets in order to optimally separate them into either damaged or healthy groups. To demonstrate the effectiveness of the proposed structural health monitoring framework, a three-story smart building equipped with a magnetorheological (MR) damper under artificial earthquake signals is studied. It is shown from the simulation that the proposed health monitoring scheme is effective in detecting damage of the smart structures in an efficient way.
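Step (2), AR parameters as damage-sensitive features, can be sketched with a plain least-squares AR fit. The AR order, the simulated AR(2) "healthy" signal, and its coefficients below are illustrative, not taken from the paper:

```python
import numpy as np

def ar_features(y, p=4):
    """Fit an AR(p) model y[t] = sum_k a_k * y[t-1-k] + e[t] by least
    squares; the coefficient vector serves as a feature vector."""
    Y = np.column_stack([y[p - k - 1: len(y) - k - 1] for k in range(p)])
    coef, *_ = np.linalg.lstsq(Y, y[p:], rcond=None)
    return coef

rng = np.random.default_rng(9)
e = rng.normal(0, 1, 3000)
healthy = np.zeros(3000)
for i in range(2, 3000):     # AR(2): y_t = 1.5 y_{t-1} - 0.7 y_{t-2} + e_t
    healthy[i] = 1.5 * healthy[i - 1] - 0.7 * healthy[i - 2] + e[i]
print(np.round(ar_features(healthy, 2), 2))   # close to [1.5, -0.7]
```

Damage that changes the structure's dynamics shifts these fitted coefficients, which is what the downstream SVM classifier separates on.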
Estimating maritime snow density from seasonal climate variables
NASA Astrophysics Data System (ADS)
Bormann, K. J.; Evans, J. P.; Westra, S.; McCabe, M. F.; Painter, T. H.
2013-12-01
Snow density is a complex parameter that influences thermal, optical and mechanical snow properties and processes. Depth-integrated properties of snowpacks, including snow density, remain very difficult to obtain remotely. Observations of snow density are therefore limited to in-situ point locations. In maritime snowfields such as those in Australia and in parts of the western US, snow densification rates are enhanced and inter-annual variability is high compared to continental snow regions. In-situ snow observation networks in maritime climates often cannot characterise the variability in snowpack properties at spatial and temporal resolutions required for many modelling and observations-based applications. Regionalised density-time curves are commonly used to approximate snow densities over broad areas. However, these relationships have limited spatial applicability and do not allow for interannual variability in densification rates, which are important in maritime environments. Physically-based density models are relatively complex and rely on empirical algorithms derived from limited observations, which may not represent the variability observed in maritime snow. In this study, seasonal climate factors were used to estimate late season snow densities using multiple linear regressions. Daily snow density estimates were then obtained by projecting linearly to fresh snow densities at the start of the season. When applied spatially, the daily snow density fields compare well to in-situ observations across multiple sites in Australia, and provide a new method for extrapolating existing snow density datasets in maritime snow environments. While the relatively simple algorithm for estimating snow densities has been used in this study to constrain snowmelt rates in a temperature-index model, the estimates may also be used to incorporate variability in snow depth to snow water equivalent conversion.
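The two-stage scheme, a multiple linear regression for late-season density followed by linear projection from a fresh-snow density, can be sketched with synthetic data. The climate predictors, coefficients, and season length below are invented, not the study's values:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 60
mean_temp = rng.normal(0, 3, n)           # seasonal mean temperature (made up)
total_precip = rng.normal(800, 150, n)    # seasonal precipitation (made up)
late_density = (400 + 12 * mean_temp + 0.05 * total_precip
                + rng.normal(0, 10, n))   # kg/m^3, synthetic "observations"

# Stage 1: regress late-season density on seasonal climate factors.
X = np.column_stack([np.ones(n), mean_temp, total_precip])
coef, *_ = np.linalg.lstsq(X, late_density, rcond=None)

# Stage 2: daily densities by linear projection from a fresh-snow density
# at the season start to the regressed late-season value.
fresh, season_days = 100.0, 120
late_hat = X[0] @ coef                    # prediction for the first site
daily = np.linspace(fresh, late_hat, season_days)
print(round(daily[0]), round(daily[-1]))
```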
Mean thermospheric density estimation derived from satellite constellations
NASA Astrophysics Data System (ADS)
Li, Alan; Close, Sigrid
2015-10-01
This paper defines a method to estimate the mean neutral density of the thermosphere given many satellites of the same form factor travelling in similar regions of space. A priori information for the estimation scheme includes ranging measurements and a general knowledge of the onboard ADACS, although precise measurements are not required for the latter. The estimation procedure uses order statistics to estimate the probability distribution of the minimum achievable drag coefficient, and amalgamating measurements across multiple time periods allows the probability density of the ballistic factor itself to be estimated. The model does not depend on prior models of the atmosphere; instead, it requires an estimate of the minimum achievable drag coefficient, which is based upon physics models of simple shapes in free molecular flow. From the statistics of the minimum, error statistics on the estimated atmospheric density can be calculated. Barring measurement errors from the ranging procedure itself, it is shown that with a constellation of 10 satellites, a standard deviation of roughly 4% on the estimated mean neutral density can be achieved. As more satellites are added to the constellation, the result converges towards the lower limit of the achievable drag coefficient, and accuracy becomes limited by the quality of the ranging measurements and the probability distribution of the accommodation coefficient. Comparisons are made to existing atmospheric models such as NRLMSISE-00 and JB2006.
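The order-statistics idea can be illustrated in a few lines: each satellite's fitted drag coefficient is bounded below by the minimum achievable value, and the constellation minimum converges toward that bound as satellites are added. The lower limit and the spread model below are invented numbers, not the paper's physics model:

```python
import numpy as np

rng = np.random.default_rng(2)
cd_min = 2.1                                # assumed physical lower limit

def constellation_min(n_sats, reps=2000):
    """Monte Carlo draws of the minimum fitted drag coefficient across a
    constellation; the gamma spread above cd_min is purely illustrative."""
    draws = cd_min + rng.gamma(2.0, 0.15, (reps, n_sats))
    return draws.min(axis=1)

bias = {n: constellation_min(n).mean() - cd_min for n in (3, 10, 30)}
print({n: round(b, 3) for n, b in bias.items()})  # bias shrinks with n
```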
Conditional Density Estimation with HMM Based Support Vector Machines
NASA Astrophysics Data System (ADS)
Hu, Fasheng; Liu, Zhenqiu; Jia, Chunxin; Chen, Dechang
Conditional density estimation is very important in financial engineering, risk management, and other engineering computing problems. However, most regression models carry an implicit assumption that the probability density is Gaussian, which is not necessarily true in many real-life applications. In this paper, we give a framework to estimate or predict the conditional density mixture dynamically. By combining the Input-Output HMM with SVM regression and building an SVM model in each state of the HMM, we can estimate a conditional density mixture instead of a single Gaussian. With an SVM in each node, this model can be applied not only to regression but also to classification. We applied this model to denoising ECG data. The proposed method has the potential to be applied to other time series, such as stock market return predictions.
Wavelet-based multiscale performance analysis: An approach to assess and improve hydrological models
NASA Astrophysics Data System (ADS)
Rathinasamy, Maheswaran; Khosa, Rakesh; Adamowski, Jan; Ch., Sudheer; Partheepan, G.; Anand, Jatin; Narsimlu, Boini
2014-12-01
The temporal dynamics of hydrological processes are spread across different time scales and, as such, the performance of hydrological models cannot be estimated reliably from global performance measures that assign a single number to the fit of a simulated time series to an observed reference series. Accordingly, it is important to analyze model performance at different time scales. Wavelets have been used extensively in the area of hydrological modeling for multiscale analysis, and have been shown to be very reliable and useful in understanding dynamics across time scales and as these evolve in time. In this paper, a wavelet-based multiscale performance measure for hydrological models is proposed and tested (i.e., Multiscale Nash-Sutcliffe Criteria and Multiscale Normalized Root Mean Square Error). The main advantage of this method is that it provides a quantitative measure of model performance across different time scales. In the proposed approach, model and observed time series are decomposed using the Discrete Wavelet Transform (known as the à trous wavelet transform), and performance measures of the model are obtained at each time scale. The applicability of the proposed method was explored using various case studies, both real and synthetic. The synthetic case studies included various kinds of errors (e.g., timing error, under- and over-prediction of high and low flows) in outputs from a hydrologic model. The real-world case studies included simulation results of both the process-based Soil Water Assessment Tool (SWAT) model and statistical models, namely the Coupled Wavelet-Volterra (WVC), Artificial Neural Network (ANN), and Auto Regressive Moving Average (ARMA) methods. For the SWAT model, data from the Wainganga and Sind Basins (India) were used, while for the WVC, ANN and ARMA models, data from the Cauvery River Basin (India) and Fraser River (Canada) were used.
The study also explored the effect of the choice of the wavelets in multiscale model evaluation. It was found that the proposed wavelet-based performance measures, namely the MNSC (Multiscale Nash-Sutcliffe Criteria) and MNRMSE (Multiscale Normalized Root Mean Square Error), are a more reliable measure than traditional performance measures such as the Nash-Sutcliffe Criteria (NSC), Root Mean Square Error (RMSE), and Normalized Root Mean Square Error (NRMSE). Further, the proposed methodology can be used to: i) compare different hydrological models (both physical and statistical models), and ii) help in model calibration.
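A minimal version of the multiscale measure can be sketched with a simplified à trous decomposition: here a [1,2,1]/4 smoothing kernel (rather than the B3 spline typically used) and synthetic series instead of streamflow, so everything below is illustrative:

```python
import numpy as np

def atrous(x, levels):
    """Simplified a trous transform: detail scales plus a smooth residual.
    The smoothing kernel's holes double at each level."""
    scales, c = [], x.astype(float)
    for j in range(levels):
        step = 2 ** j
        pad = np.pad(c, step, mode="edge")
        smooth = 0.25 * pad[:c.size] + 0.5 * c + 0.25 * pad[2 * step:]
        scales.append(c - smooth)   # detail at this scale
        c = smooth
    return scales + [c]

def nse(obs, sim):
    """Nash-Sutcliffe efficiency of sim against obs."""
    return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

rng = np.random.default_rng(3)
t = np.arange(256)
obs = np.sin(2 * np.pi * t / 64) + 0.3 * np.sin(2 * np.pi * t / 8)
sim = obs + rng.normal(0, 0.2, t.size)   # "model" with fine-scale errors
mnse = [round(nse(o, s), 2)
        for o, s in zip(atrous(obs, 4), atrous(sim, 4))]
print(mnse)   # poor at fine scales, good at coarse scales
```

A single global NSE would hide exactly this scale structure, which is the abstract's point.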
A Wavelet-Based Algorithm for the Spatial Analysis of Poisson Data
NASA Astrophysics Data System (ADS)
Freeman, P. E.; Kashyap, V.; Rosner, R.; Lamb, D. Q.
2002-01-01
Wavelets are scalable, oscillatory functions that deviate from zero only within a limited spatial regime and have average value zero, and thus may be used to simultaneously characterize the shape, location, and strength of astronomical sources. But in addition to their use as source characterizers, wavelet functions are rapidly gaining currency within the source detection field. Wavelet-based source detection involves the correlation of scaled wavelet functions with binned, two-dimensional image data. If the chosen wavelet function exhibits the property of vanishing moments, significantly nonzero correlation coefficients will be observed only where there are high-order variations in the data; e.g., they will be observed in the vicinity of sources. Source pixels are identified by comparing each correlation coefficient with its probability sampling distribution, which is a function of the (estimated or a priori known) background amplitude. In this paper, we describe the mission-independent, wavelet-based source detection algorithm "WAVDETECT," part of the freely available Chandra Interactive Analysis of Observations (CIAO) software package. Our algorithm uses the Marr, or "Mexican Hat" wavelet function, but may be adapted for use with other wavelet functions. Aspects of our algorithm include: (1) the computation of local, exposure-corrected normalized (i.e., flat-fielded) background maps; (2) the correction for exposure variations within the field of view (due to, e.g., telescope support ribs or the edge of the field); (3) its applicability within the low-counts regime, as it does not require a minimum number of background counts per pixel for the accurate computation of source detection thresholds; (4) the generation of a source list in a manner that does not depend upon a detailed knowledge of the point spread function (PSF) shape; and (5) error analysis.
These features make our algorithm considerably more general than previous methods developed for the analysis of X-ray image data, especially in the low-counts regime. We demonstrate the robustness of WAVDETECT by applying it to an image from an idealized detector with a spatially invariant Gaussian PSF and an exposure map similar to that of the Einstein IPC; to Pleiades Cluster data collected by the ROSAT PSPC; and to a simulated Chandra ACIS-I image of the Lockman Hole region.
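The core correlation step (not the full WAVDETECT machinery, which also handles exposure maps and per-pixel thresholds) can be sketched as follows; the background level, injected source strength, and wavelet scale are all arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(4)
img = rng.poisson(1.0, (64, 64)).astype(float)   # flat Poisson background
img[30:34, 30:34] += 8.0                          # one injected source

# Marr ("Mexican hat") wavelet built directly on the periodic grid, so the
# FFT product below is an exact circular convolution; the kernel is
# symmetric, making convolution identical to correlation.
n, s = 64, 2.0
d = np.minimum(np.arange(n), n - np.arange(n))    # wrapped distance to 0
X, Y = np.meshgrid(d, d)
q = (X ** 2 + Y ** 2) / s ** 2
kern = (2 - q) * np.exp(-q / 2)

corr = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(kern)))
i, j = np.unravel_index(np.argmax(corr), corr.shape)
print(i, j)   # the peak coefficient lands on the injected source
```

Because the Mexican hat has zero mean, the flat background correlates to roughly zero and only the localized source produces a large coefficient.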
The estimation of body density in rugby union football players.
Bell, W
1995-03-01
The general regression equation of Durnin and Womersley for estimating body density from skinfold thicknesses in young men was examined by comparing the estimated density from this equation with the measured density of a group of 45 rugby union players of similar age. Body density was measured by hydrostatic weighing with simultaneous measurement of residual volume. Additional measurements included stature, body mass and skinfold thicknesses at the biceps, triceps, subscapular and suprailiac sites. The estimated density was significantly different from the measured density (P < 0.001), equivalent to a mean overestimation of relative fat of approximately 4%. A new set of prediction equations for estimating density was formulated from linear regression using the logarithm of single and summed skinfold thicknesses. Equations were derived from a validation sample (n = 22) and tested on a cross-validation sample (n = 23). The standard error of the estimate (s.e.e.) of the equations ranged from 0.0058 to 0.0062 g ml-1. The derived equations were successfully cross-validated. Differences between measured and estimated densities were not significant (P > 0.05), with total errors ranging from 0.0067 to 0.0092 g ml-1. An exploratory assessment was also made of the effect of fatness and aerobic fitness on the prediction equations. The equations should be applied to players of similar age and playing ability, and for the purpose of identifying group characteristics. Application of the equations to individuals may give rise to errors of between -3.9% and +2.5% total body fat in two-thirds of cases. PMID:7788218
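Numerically, the estimation chain the abstract describes runs log of summed skinfolds to density to relative fat. The coefficients below are the published Durnin-Womersley values for men aged 20-29 together with the Siri equation; the skinfold readings themselves are invented:

```python
import math

skinfolds = {"biceps": 5.5, "triceps": 9.0,
             "subscapular": 11.0, "suprailiac": 12.5}   # mm, made-up subject
total = sum(skinfolds.values())                         # 38.0 mm

density = 1.1631 - 0.0632 * math.log10(total)   # Durnin-Womersley, men 20-29
fat_pct = (4.95 / density - 4.50) * 100         # Siri equation, % body fat
print(round(density, 4), round(fat_pct, 1))
```

A systematic overestimate of density by the general equation translates, through the Siri step, into the roughly 4% relative-fat bias the study reports.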
Ultrasonic velocity for estimating density of structural ceramics
NASA Technical Reports Server (NTRS)
Klima, S. J.; Watson, G. K.; Herbell, T. P.; Moore, T. J.
1981-01-01
The feasibility of using ultrasonic velocity as a measure of bulk density of sintered alpha silicon carbide was investigated. The material studied was either in the as-sintered condition or hot isostatically pressed in the temperature range from 1850 to 2050 °C. Densities varied from approximately 2.8 to 3.2 g/cm3. Results show that the bulk, nominal density of structural grade silicon carbide articles can be estimated from ultrasonic velocity measurements to within 1 percent using 20 MHz longitudinal waves and a commercially available ultrasonic time intervalometer. The ultrasonic velocity measurement technique shows promise for screening out material with unacceptably low density levels.
Fast wavelet-based image characterization for highly adaptive image retrieval.
Quellec, Gwénolé; Lamard, Mathieu; Cazuguel, Guy; Cochener, Béatrice; Roux, Christian
2012-04-01
Adaptive wavelet-based image characterizations have been proposed in previous works for content-based image retrieval (CBIR) applications. In these applications, the same wavelet basis was used to characterize each query image: This wavelet basis was tuned to maximize the retrieval performance in a training data set. We take it one step further in this paper: A different wavelet basis is used to characterize each query image. A regression function, which is tuned to maximize the retrieval performance in the training data set, is used to estimate the best wavelet filter, i.e., in terms of expected retrieval performance, for each query image. A simple image characterization, which is based on the standardized moments of the wavelet coefficient distributions, is presented. An algorithm is proposed to compute this image characterization almost instantly for every possible separable or nonseparable wavelet filter. Therefore, using a different wavelet basis for each query image does not considerably increase computation times. On the other hand, significant retrieval performance increases were obtained in a medical image data set, a texture data set, a face recognition data set, and an object picture data set. This additional flexibility in wavelet adaptation paves the way to relevance feedback on image characterization itself and not simply on the way image characterizations are combined. PMID:22194244
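The characterization itself, standardized moments of wavelet-subband coefficient distributions, is cheap to compute. A single-level Haar detail subband on a synthetic texture gives the flavor; the Laplace-distributed "texture" and the particular moment set are illustrative, not the paper's exact signature:

```python
import numpy as np

rng = np.random.default_rng(10)
img = rng.laplace(0, 1, (64, 64))    # stand-in for a texture patch

# Horizontal Haar detail subband (one level, differences of column pairs).
detail = (img[:, 0::2] - img[:, 1::2]) / np.sqrt(2)
c = detail.ravel()

# Standardized moments of the coefficient distribution as a signature.
z = (c - c.mean()) / c.std()
signature = [c.std(), np.mean(z ** 3), np.mean(z ** 4)]   # scale, skew, kurtosis
print(np.round(signature, 2))
```

Because such signatures are just a handful of moments per subband, they can be recomputed almost instantly per candidate wavelet filter, which is what makes the per-query filter selection in the abstract affordable.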
A Wavelet-Based Noise Reduction Algorithm and Its Clinical Evaluation in Cochlear Implants
Ye, Hua; Deng, Guang; Mauger, Stefan J.; Hersbach, Adam A.; Dawson, Pam W.; Heasman, John M.
2013-01-01
Noise reduction is often essential for cochlear implant (CI) recipients to achieve acceptable speech perception in noisy environments. Most noise reduction algorithms applied to audio signals are based on time-frequency representations of the input, such as the Fourier transform. Algorithms based on other representations may also be able to provide comparable or improved speech perception and listening quality improvements. In this paper, a noise reduction algorithm for CI sound processing is proposed based on the wavelet transform. The algorithm uses a dual-tree complex discrete wavelet transform followed by shrinkage of the wavelet coefficients based on a statistical estimation of the variance of the noise. The proposed noise reduction algorithm was evaluated by comparing its performance to those of many existing wavelet-based algorithms. The speech transmission index (STI) of the proposed algorithm is significantly better than other tested algorithms for the speech-weighted noise of different levels of signal to noise ratio. The effectiveness of the proposed system was clinically evaluated with CI recipients. A significant improvement in speech perception of 1.9 dB was found on average in speech weighted noise. PMID:24086605
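A generic wavelet-shrinkage denoiser shows the coefficient-thresholding idea; this sketch uses a real Haar DWT with soft thresholding and a MAD noise estimate, not the paper's dual-tree complex transform, and the signal, noise level, and depth are arbitrary:

```python
import numpy as np

def haar_fwd(x):
    """One level of an orthonormal Haar transform."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def haar_inv(a, d):
    x = np.empty(a.size * 2)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

rng = np.random.default_rng(5)
t = np.linspace(0, 1, 1024)
clean = np.sin(2 * np.pi * 5 * t)
noisy = clean + rng.normal(0, 0.3, t.size)

details, approx = [], noisy
for _ in range(4):
    approx, d = haar_fwd(approx)
    details.append(d)

sigma = np.median(np.abs(details[0])) / 0.6745        # MAD noise estimate
thr = sigma * np.sqrt(2 * np.log(noisy.size))         # universal threshold
details = [np.sign(d) * np.maximum(np.abs(d) - thr, 0.0) for d in details]

for d in reversed(details):
    approx = haar_inv(approx, d)
denoised = approx
print(np.std(denoised - clean) < np.std(noisy - clean))   # error reduced
```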
NASA Astrophysics Data System (ADS)
Liang, Lei; Li, Xinwu; Gao, Xizhang; Guo, Huadong
2015-01-01
The three-dimensional (3-D) structure of forests, especially the vertical structure, is an important parameter of forest ecosystem modeling for monitoring ecological change. Synthetic aperture radar tomography (TomoSAR) provides scene reflectivity estimation of vegetation along the elevation coordinate. Due to the advantages of super-resolution imaging and a small number of measurements, distributed compressive sensing (DCS) inversion techniques for polarimetric SAR tomography have been successfully developed and applied. This paper addresses the 3-D imaging of forested areas based on the framework of DCS using fully polarimetric (FP) multibaseline SAR interferometric (MB-InSAR) tomography at P-band. A new DCS-based FP TomoSAR method is proposed: the wavelet-based distributed compressive sensing FP TomoSAR (FP-WDCS) method. The method takes advantage of the joint sparsity between polarimetric channel signals in the wavelet domain to jointly invert the reflectivity profiles in each channel. The method not only allows high-accuracy, super-resolution imaging with a low number of acquisitions, but can also obtain the polarization information of the vertical structure of forested areas. The effectiveness of the techniques for polarimetric SAR tomography is demonstrated using FP P-band airborne datasets acquired by the ONERA SETHI airborne system over a test site in Paracou, French Guiana.
Non-local crime density estimation incorporating housing information
Woodworth, J. T.; Mohler, G. O.; Bertozzi, A. L.; Brantingham, P. J.
2014-01-01
Given a discrete sample of event locations, we wish to produce a probability density that models the relative probability of events occurring in a spatial domain. Standard density estimation techniques do not incorporate priors informed by spatial data. Such methods can result in assigning significant positive probability to locations where events cannot realistically occur. In particular, when modelling residential burglaries, standard density estimation can predict residential burglaries occurring where there are no residences. Incorporating the spatial data can inform the valid region for the density. When modelling very few events, additional priors can help to correctly fill in the gaps. Learning and enforcing correlation between spatial data and event data can yield better estimates from fewer events. We propose a non-local version of maximum penalized likelihood estimation based on the H1 Sobolev seminorm regularizer that computes non-local weights from spatial data to obtain more spatially accurate density estimates. We evaluate this method in application to a residential burglary dataset from San Fernando Valley with the non-local weights informed by housing data or a satellite image. PMID:25288817
Open-cluster density profiles derived using a kernel estimator
NASA Astrophysics Data System (ADS)
Seleznev, Anton F.
2016-03-01
Surface and spatial radial density profiles in open clusters are derived using a kernel estimator method. Formulae are obtained for the contribution of every star into the spatial density profile. The evaluation of spatial density profiles is tested against open-cluster models from N-body experiments with N = 500. Surface density profiles are derived for seven open clusters (NGC 1502, 1960, 2287, 2516, 2682, 6819 and 6939) using Two-Micron All-Sky Survey data and for different limiting magnitudes. The selection of an optimal kernel half-width is discussed. It is shown that open-cluster radius estimates hardly depend on the kernel half-width. Hints of stellar mass segregation and structural features indicating cluster non-stationarity in the regular force field are found. A comparison with other investigations shows that the data on open-cluster sizes are often underestimated. The existence of an extended corona around the open cluster NGC 6939 was confirmed. A combined function composed of the King density profile for the cluster core and the uniform sphere for the cluster corona is shown to be a better approximation of the surface radial density profile. The King function alone does not reproduce surface density profiles of sample clusters properly. The number of stars, the cluster masses and the tidal radii in the Galactic gravitational field for the sample clusters are estimated. It is shown that NGC 6819 and 6939 are extended beyond their tidal surfaces.
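A kernel surface-density profile of the kind described can be sketched in a few lines; this uses a Gaussian kernel with a fixed half-width on a toy Gaussian cluster, whereas the paper discusses kernel choice and half-width selection in detail:

```python
import numpy as np

rng = np.random.default_rng(6)
stars = rng.normal(0, 1.0, (2000, 2))    # toy cluster, sigma = 1 (made up)
h = 0.3                                  # kernel half-width (fixed here)

def surface_density(r_grid, pts, h):
    """Kernel estimate of the surface density at radius r: sum a Gaussian
    kernel over all stars, probing at a point (r, 0) on the x-axis."""
    out = []
    for r in r_grid:
        d2 = (pts[:, 0] - r) ** 2 + pts[:, 1] ** 2
        out.append(np.sum(np.exp(-d2 / (2 * h * h))) / (2 * np.pi * h * h))
    return np.array(out)

r = np.linspace(0, 3, 7)
prof = surface_density(r, stars, h)
print(np.round(prof, 1))   # stars per unit area, decreasing from the centre
```

Probing on the x-axis suffices here because the toy cluster is isotropic; a real profile would average azimuthally.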
Kernel density estimation of a multidimensional efficiency profile
NASA Astrophysics Data System (ADS)
Poluektov, A.
2015-02-01
Kernel density estimation is a convenient way to estimate the probability density of a distribution given a sample of data points. However, it has certain drawbacks: proper description of the density using narrow kernels needs large data samples, whereas if the kernel width is large, boundaries and narrow structures tend to be smeared. Here, an approach to correct for such effects is proposed that uses an approximate density to describe narrow structures and boundaries. The approach is shown to be well suited for the description of the efficiency shape over a multidimensional phase space in a typical particle physics analysis. An example is given for the five-dimensional phase space of the Λb0 → D0pπ- decay.
Density estimation by mixture models with smoothing priors
Utsugi
1998-11-15
In the statistical approach for self-organizing maps (SOMs), learning is regarded as an estimation algorithm for a Gaussian mixture model with a Gaussian smoothing prior on the centroid parameters. The values of the hyperparameters and the topological structure are selected on the basis of a statistical principle. However, since the component selection probabilities are fixed to a common value, the centroids concentrate on areas with high data density. This deforms a coordinate system on an extracted manifold and makes smoothness evaluation for the manifold inaccurate. In this article, we study an extended SOM model whose component selection probabilities are variable. To stabilize the estimation, a smoothing prior on the component selection probabilities is introduced. An estimation algorithm for the parameters and the hyperparameters based on empirical Bayesian inference is obtained. The performance of density estimation by the new model and the SOM model is compared via simulation experiments. PMID:9804674
Nonparametric probability density estimation by optimization theoretic techniques
NASA Technical Reports Server (NTRS)
Scott, D. W.
1976-01-01
Two nonparametric probability density estimators are considered. The first is the kernel estimator. The problem of choosing the kernel scaling factor based solely on a random sample is addressed. An interactive mode is discussed and an algorithm proposed to choose the scaling factor automatically. The second estimator uses penalty function techniques with the maximum likelihood criterion. A discrete maximum penalized likelihood estimator is proposed and is shown to be consistent in the mean square error. A numerical implementation technique for the discrete solution is discussed and examples displayed. An extensive simulation study compares the integrated mean square error of the discrete and kernel estimators. The robustness of the discrete estimator is demonstrated graphically.
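One standard automatic choice of the kernel scaling factor is Silverman's rule of thumb, shown here in the spirit of the report's automatic selection (it is not necessarily the algorithm the report proposes):

```python
import numpy as np

rng = np.random.default_rng(7)
sample = rng.normal(0.0, 1.0, 400)

# Silverman's rule of thumb for a Gaussian kernel:
# h = 0.9 * min(std, IQR/1.34) * n^(-1/5)
iqr = np.percentile(sample, 75) - np.percentile(sample, 25)
h = 0.9 * min(sample.std(), iqr / 1.34) * sample.size ** (-1 / 5)

def kde(x, data, h):
    """Gaussian kernel density estimate evaluated at points x."""
    u = (x[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u * u).sum(axis=1) / (data.size * h * np.sqrt(2 * np.pi))

peak = kde(np.array([0.0]), sample, h)[0]
print(round(h, 2), round(peak, 2))   # peak near the true N(0,1) value 0.399
```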
Wavelet-based analogous phase scintillation index for high latitudes
NASA Astrophysics Data System (ADS)
Ahmed, A.; Tiwari, R.; Strangeways, H. J.; Dlay, S.; Johnsen, M. G.
2015-08-01
The Global Positioning System (GPS) performance at high latitudes can be severely affected by ionospheric scintillation due to the presence of small-scale time-varying electron density irregularities. In this paper, an improved analogous phase scintillation index derived using the wavelet-transform-based filtering technique is presented to represent the effects of scintillation regionally at European high latitudes. The improved analogous phase index is then compared with the original analogous phase index and the phase scintillation index for performance comparison using 1 year of data from Trondheim, Norway (63.41°N, 10.4°E). This index provides samples at a 1 min rate using raw total electron content (TEC) data at 1 Hz for the prediction of phase scintillation, in contrast to scintillation monitoring receivers (such as NovAtel Global Navigation Satellite Systems Ionospheric Scintillation and TEC Monitor receivers), which operate at a 50 Hz rate and are thus rather computationally intensive. The estimation of phase scintillation effects using high-sample-rate data makes the improved analogous phase index a suitable candidate which can be used in regional geodetic dual-frequency-based GPS receivers to efficiently update the tracking loop parameters based on tracking jitter variance.
An Infrastructureless Approach to Estimate Vehicular Density in Urban Environments
Sanguesa, Julio A.; Fogue, Manuel; Garrido, Piedad; Martinez, Francisco J.; Cano, Juan-Carlos; Calafate, Carlos T.; Manzoni, Pietro
2013-01-01
In Vehicular Networks, communication success usually depends on the density of vehicles, since a higher density allows having shorter and more reliable wireless links. Thus, knowing the density of vehicles in a vehicular communications environment is important, as better opportunities for wireless communication can show up. However, vehicle density is highly variable in time and space. This paper deals with the importance of predicting the density of vehicles in vehicular environments to take decisions for enhancing the dissemination of warning messages between vehicles. We propose a novel mechanism to estimate the vehicular density in urban environments. Our mechanism uses as input parameters the number of beacons received per vehicle, and the topological characteristics of the environment where the vehicles are located. Simulation results indicate that, unlike previous proposals solely based on the number of beacons received, our approach is able to accurately estimate the vehicular density, and therefore it could support more efficient dissemination protocols for vehicular environments, as well as improve previously proposed schemes. PMID:23435054
Yaqub, Maqsood; Boellaard, Ronald; Schuitemaker, Alie; van Berckel, Bart N M; Lammertsma, Adriaan A
2008-11-01
The purpose of the present study was to investigate the use of various wavelet-based techniques for denoising of [11C](R)-PK11195 time activity curves (TACs) in order to improve accuracy and precision of PET kinetic parameters, such as volume of distribution (V(T)) and distribution volume ratio with reference region (DVR). Simulated and clinical TACs were filtered using two different categories of wavelet filters: (1) wavelet shrinkage using a constant or a newly developed time-varying threshold and (2) "statistical" filters, which filter extreme wavelet coefficients using a set of "calibration" TACs. PET pharmacokinetic parameters were estimated using linear models (plasma Logan and reference Logan analyses). For simulated noisy TACs, optimized wavelet-based filters improved the residual sum of squared errors with respect to the original noise-free TACs. Furthermore, both clinical results and simulations were in agreement. Plasma Logan V(T) values increased after filtering, but no differences were seen in reference Logan DVR values. This increase in plasma Logan V(T) suggests a reduction of noise-induced bias by wavelet-based denoising, as was seen in the simulations. Wavelet denoising of TACs for [11C](R)-PK11195 PET studies is therefore useful when parametric Logan-based V(T) is the parameter of interest. PMID:19070241
Comparison of parzen density and frequency histogram as estimators of probability density functions.
Glavinović, M I
1996-01-01
In neurobiology, and in other fields, the frequency histogram is a traditional tool for determining the probability density function (pdf) of random processes, although other methods have been shown to be more efficient as their estimators. In this study, the frequency histogram is compared with the Parzen density estimator, a method that consists of convolving each measurement with a weighting function of choice (Gaussian, rectangular, etc) and using their sum as an estimate of the pdf of the random process. The difference in their performance in evaluating two types of pdfs that occur commonly in quantal analysis (monomodal and multimodal with equidistant peaks) is demonstrated numerically by using the integrated square error criterion and assuming a knowledge of the "true" pdf. The error of the Parzen density estimates decreases faster as a function of the number of observations than that of the frequency histogram, indicating that they are asymptotically more efficient. A variety of "reasonable" weighting functions can provide similarly efficient Parzen density estimates, but their efficiency greatly depends on their width. The optimal widths determined using the integrated square error criterion, the harmonic analysis (applicable only to multimodal pdfs with equidistant peaks), and the "test graphs" (the graphs of the second derivatives of the Parzen density estimates that do not assume a knowledge of the "true" pdf, but depend on the distinction between the "essential features" of the pdf and the "random fluctuations") were compared and found to be similar. PMID:9019720
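The Parzen estimator described above, convolving each measurement with a weighting function and summing, can be sketched as follows with a Gaussian kernel; the sample distribution, grid, and kernel width are illustrative choices.

```python
import numpy as np

def parzen_estimate(samples, grid, width):
    """Parzen density estimate: the average of Gaussian kernels
    of the given width centred on each measurement."""
    z = (grid[:, None] - samples[None, :]) / width
    kernels = np.exp(-0.5 * z ** 2) / (width * np.sqrt(2.0 * np.pi))
    return kernels.mean(axis=1)

rng = np.random.default_rng(1)
samples = rng.normal(0.0, 1.0, 500)      # 500 draws from a standard normal
grid = np.linspace(-4.0, 4.0, 201)
pdf = parzen_estimate(samples, grid, width=0.3)
dx = grid[1] - grid[0]
total = pdf.sum() * dx                   # should be close to 1
```

As the abstract notes, the choice of kernel shape matters far less than the choice of width, which controls the bias-variance trade-off.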
Face Value: Towards Robust Estimates of Snow Leopard Densities.
Alexander, Justine S; Gopalaswamy, Arjun M; Shi, Kun; Riordan, Philip
2015-01-01
When densities of large carnivores fall below certain thresholds, dramatic ecological effects can follow, leading to oversimplified ecosystems. Understanding the population status of such species remains a major challenge as they occur in low densities and their ranges are wide. This paper describes the use of non-invasive data collection techniques combined with recent spatial capture-recapture methods to estimate the density of snow leopards Panthera uncia. It also investigates the influence of environmental and human activity indicators on their spatial distribution. A total of 60 camera traps were systematically set up during a three-month period over a 480 km2 study area in Qilianshan National Nature Reserve, Gansu Province, China. We recorded 76 separate snow leopard captures over 2,906 trap-days, representing an average capture success of 2.62 captures/100 trap-days. We identified a total of 20 unique individuals from photographs and estimated snow leopard density at 3.31 (SE = 1.01) individuals per 100 km2. Results of our simulation exercise indicate that our estimates from the spatial capture-recapture models were not optimal with respect to bias and precision (RMSEs for density parameters less than or equal to 0.87). Our results underline the critical challenge in achieving sufficient sample sizes of snow leopard captures and recaptures. Possible performance improvements are discussed, principally by optimising effective camera capture and photographic data quality. PMID:26322682
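As a quick sanity check, the reported capture success follows directly from the counts given in the abstract:

```python
captures, trap_days = 76, 2906
success = captures / trap_days * 100     # captures per 100 trap-days
print(round(success, 2))                 # 2.62, matching the reported value
```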
Density estimation in tiger populations: combining information for strong inference
Gopalaswamy, Arjun M.; Royle, J. Andrew; Delampady, Mohan; Nichols, James D.; Karanth, K. Ullas; Macdonald, David W.
2012-01-01
A productive way forward in studies of animal populations is to efficiently make use of all the information available, either as raw data or as published sources, on critical parameters of interest. In this study, we demonstrate two approaches to the use of multiple sources of information on a parameter of fundamental interest to ecologists: animal density. The first approach produces estimates simultaneously from two different sources of data. The second approach was developed for situations in which initial data collection and analysis are followed up by subsequent data collection and prior knowledge is updated with new data using a stepwise process. Both approaches are used to estimate density of a rare and elusive predator, the tiger, by combining photographic and fecal DNA spatial capture–recapture data. The model, which combined information, provided the most precise estimate of density (8.5 ± 1.95 tigers/100 km2 [posterior mean ± SD]) relative to a model that utilized only one data source (photographic, 12.02 ± 3.02 tigers/100 km2 and fecal DNA, 6.65 ± 2.37 tigers/100 km2). Our study demonstrates that, by accounting for multiple sources of available information, estimates of animal density can be significantly improved.
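The joint model in the paper is a Bayesian spatial capture-recapture model; as a hedged back-of-the-envelope illustration of why combining data sources improves precision, a simple inverse-variance weighting of the two single-source estimates (which assumes independence, unlike the authors' full model) already lands near the joint estimate:

```python
# Single-source estimates (posterior mean, SD) quoted in the abstract
photo = (12.02, 3.02)   # photographic capture-recapture
fecal = (6.65, 2.37)    # fecal DNA capture-recapture

# Inverse-variance weighting (assumes the two estimates are independent)
w_p, w_f = 1.0 / photo[1] ** 2, 1.0 / fecal[1] ** 2
combined_mean = (w_p * photo[0] + w_f * fecal[0]) / (w_p + w_f)
combined_sd = (w_p + w_f) ** -0.5
# roughly 8.70 +/- 1.86, close to the joint model's 8.5 +/- 1.95
```

The combined SD is smaller than either single-source SD, which is the precision gain the study quantifies rigorously.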
Ionospheric electron density profile estimation using commercial AM broadcast signals
NASA Astrophysics Data System (ADS)
Yu, De; Ma, Hong; Cheng, Li; Li, Yang; Zhang, Yufeng; Chen, Wenjun
2015-08-01
A new method for estimating the bottom electron density profile by using commercial AM broadcast signals as non-cooperative signals is presented in this paper. Without requiring any dedicated transmitters, the required input data are the measured elevation angles of signals transmitted from the known locations of broadcast stations. The input data are inverted for the QPS model parameters depicting the electron density profile of the signal's reflection area by using a probabilistic inversion technique. This method has been validated on synthesized data and used with the real data provided by an HF direction-finding system situated near the city of Wuhan. The estimated parameters obtained by the proposed method have been compared with vertical ionosonde data and have been used to locate the Shijiazhuang broadcast station. The simulation and experimental results indicate that the proposed ionospheric sounding method is feasible for obtaining useful electron density profiles.
Estimating Density Gradients and Drivers from 3D Ionospheric Imaging
NASA Astrophysics Data System (ADS)
Datta-Barua, S.; Bust, G. S.; Curtis, N.; Reynolds, A.; Crowley, G.
2009-12-01
The transition regions at the edges of the ionospheric storm-enhanced density (SED) are important for a detailed understanding of the mid-latitude physical processes occurring during major magnetic storms. At the boundary, the density gradients are evidence of the drivers that link the larger processes of the SED, with its connection to the plasmasphere and prompt-penetration electric fields, to the smaller irregularities that result in scintillations. For this reason, we present our estimates of both the plasma variation with horizontal and vertical spatial scale of 10 - 100 km and the plasma motion within and along the edges of the SED. To estimate the density gradients, we use Ionospheric Data Assimilation Four-Dimensional (IDA4D), a mature data assimilation algorithm that has been developed over several years and applied to investigations of polar cap patches and space weather storms [Bust and Crowley, 2007; Bust et al., 2007]. We use the density specification produced by IDA4D with a new tool for deducing ionospheric drivers from 3D time-evolving electron density maps, called Estimating Model Parameters from Ionospheric Reverse Engineering (EMPIRE). The EMPIRE technique has been tested on simulated data from TIMEGCM-ASPEN and on IDA4D-based density estimates with ongoing validation from Arecibo ISR measurements [Datta-Barua et al., 2009a; 2009b]. We investigate the SED that formed during the geomagnetic super storm of November 20, 2003. We run IDA4D at low-resolution continent-wide, and then re-run it at high (~10 km horizontal and ~5-20 km vertical) resolution locally along the boundary of the SED, where density gradients are expected to be highest. We input the high-resolution estimates of electron density to EMPIRE to estimate the ExB drifts and field-aligned plasma velocities along the boundaries of the SED. We expect that these drivers contribute to the density structuring observed along the SED during the storm. Bust, G. S. and G. 
Crowley (2007), Tracking of polar cap patches using data assimilation, J. Geophys. Res., 112, A05307, doi:10.1029/2005JA011597. Bust, G. S., G. Crowley, T. W. Garner, T. L. Gaussiran II, R. W. Meggs, C. N. Mitchell, P. S. J. Spencer, P. Yin, and B. Zapfe (2007) ,Four Dimensional GPS Imaging of Space-Weather Storms, Space Weather, 5, S02003, doi:10.1029/2006SW000237. Datta-Barua, S., G. S. Bust, G. Crowley, and N. Curtis (2009a), Neutral wind estimation from 4-D ionospheric electron density images, J. Geophys. Res., 114, A06317, doi:10.1029/2008JA014004. Datta-Barua, S., G. Bust, and G. Crowley (2009b), "Estimating Model Parameters from Ionospheric Reverse Engineering (EMPIRE)," presented at CEDAR, Santa Fe, New Mexico, July 1.
Estimation of Enceladus Plume Density Using Cassini Flight Data
NASA Technical Reports Server (NTRS)
Wang, Eric K.; Lee, Allan Y.
2011-01-01
The Cassini spacecraft was launched on October 15, 1997 by a Titan 4B launch vehicle. After an interplanetary cruise of almost seven years, it arrived at Saturn on June 30, 2004. In 2005, Cassini completed three flybys of Enceladus, a small, icy satellite of Saturn. Observations made during these flybys confirmed the existence of water vapor plumes in the south polar region of Enceladus. Five additional low-altitude flybys of Enceladus were successfully executed in 2008-9 to better characterize these watery plumes. During some of these Enceladus flybys, the spacecraft attitude was controlled by a set of three reaction wheels. When the disturbance torque imparted on the spacecraft was predicted to exceed the control authority of the reaction wheels, thrusters were used to control the spacecraft attitude. Using telemetry data of reaction wheel rates or thruster on-times collected from four low-altitude Enceladus flybys (in 2008-10), one can reconstruct the time histories of the Enceladus plume jet density. The 1 sigma uncertainty of the estimated density is 5.9-6.7% (depending on the density estimation methodology employed). These plume density estimates could be used to confirm measurements made by other onboard science instruments and to support the modeling of Enceladus plume jets.
Estimating Podocyte Number and Density Using a Single Histologic Section
Venkatareddy, Madhusudan; Wang, Su; Yang, Yan; Patel, Sanjeevkumar; Wickman, Larysa; Nishizono, Ryuzoh; Chowdhury, Mahboob; Hodgin, Jeffrey; Wiggins, Paul A.; Wiggins, Roger C.
2014-01-01
The reduction in podocyte density to levels below a threshold value drives glomerulosclerosis and progression to ESRD. However, technical demands prohibit high-throughput application of conventional morphometry for estimating podocyte density. We evaluated a method for estimating podocyte density using single paraffin-embedded formalin-fixed sections. Podocyte nuclei were imaged using indirect immunofluorescence detection of antibodies against Wilms' tumor-1 or transducin-like enhancer of split 4. To account for the large size of podocyte nuclei in relation to section thickness, we derived a correction factor given by the equation CF=1/(D/T+1), where T is the tissue section thickness and D is the mean caliper diameter of podocyte nuclei. Normal values for D were directly measured in thick tissue sections and in 3- to 5-µm sections using calibrated imaging software. D values were larger for human podocyte nuclei than for rat or mouse nuclei (P<0.01). In addition, D did not vary significantly between human kidney biopsies at the time of transplantation, 3-6 months after transplantation, or with podocyte depletion associated with transplant glomerulopathy. In rat models, D values also did not vary with podocyte depletion, but increased approximately 10% with old age and in postnephrectomy kidney hypertrophy. A spreadsheet with embedded formulas was created to facilitate individualized podocyte density estimation upon input of measured values. The correction factor method was validated by comparison with other methods, and provided data comparable with prior data for normal human kidney transplant donors. This method for estimating podocyte density is applicable to high-throughput laboratory and clinical use. PMID:24357669
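The correction factor CF=1/(D/T+1) from the abstract can be applied as below. The normalisation by tuft area times section thickness, and all the numeric values, are illustrative assumptions rather than figures from the paper.

```python
def podocyte_density(count, tuft_area_um2, D, T):
    """Podocyte density from one section, using the abstract's correction
    factor CF = 1 / (D/T + 1); normalising by tuft area x thickness is an
    illustrative choice, not taken verbatim from the paper."""
    cf = 1.0 / (D / T + 1.0)          # corrects for nuclei larger than the section
    return count * cf / (tuft_area_um2 * T)   # nuclei per um^3

# hypothetical numbers: 10 nuclei over a 5000 um^2 tuft, D = 7 um, T = 3 um
density = podocyte_density(10, 5000.0, D=7.0, T=3.0)
```

Note how CF shrinks toward zero as nuclei get large relative to the section (D >> T), compensating for the over-counting of nuclei that span several sections.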
Quantitative volumetric breast density estimation using phase contrast mammography
NASA Astrophysics Data System (ADS)
Wang, Zhentian; Hauser, Nik; Kubik-Huch, Rahel A.; D'Isidoro, Fabio; Stampanoni, Marco
2015-05-01
Phase contrast mammography using a grating interferometer is an emerging technology for breast imaging. It provides complementary information to the conventional absorption-based methods. Additional diagnostic values could be further obtained by retrieving quantitative information from the three physical signals (absorption, differential phase and small-angle scattering) yielded simultaneously. We report a non-parametric quantitative volumetric breast density estimation method by exploiting the ratio (dubbed the R value) of the absorption signal to the small-angle scattering signal. The R value is used to determine breast composition and the volumetric breast density (VBD) of the whole breast is obtained analytically by deducing the relationship between the R value and the pixel-wise breast density. The proposed method is tested by a phantom study and a group of 27 mastectomy samples. In the clinical evaluation, the estimated VBD values from both cranio-caudal (CC) and anterior-posterior (AP) views are compared with the ACR scores given by radiologists to the pre-surgical mammograms. The results show that the estimated VBD results using the proposed method are consistent with the pre-surgical ACR scores, indicating the effectiveness of this method in breast density estimation. A positive correlation is found between the estimated VBD and the diagnostic ACR score for both the CC view (p = 0.033) and AP view (p = 0.001). A linear regression between the results of the CC view and AP view showed a correlation coefficient r = 0.77, which indicates the robustness of the proposed method and the quantitative character of the additional information obtained with our approach.
The Effect of Lidar Point Density on LAI Estimation
NASA Astrophysics Data System (ADS)
Cawse-Nicholson, K.; van Aardt, J. A.; Romanczyk, P.; Kelbe, D.; Bandyopadhyay, M.; Yao, W.; Krause, K.; Kampe, T. U.
2013-12-01
Leaf Area Index (LAI) is an important measure of forest health, biomass and carbon exchange, and is most commonly defined as the ratio of the leaf area to ground area. LAI is understood over large spatial scales and describes leaf properties over an entire forest, thus airborne imagery is ideal for capturing such data. Spectral metrics such as the normalized difference vegetation index (NDVI) have been used in the past for LAI estimation, but these metrics may saturate for high LAI values. Light detection and ranging (lidar) is an active remote sensing technology that emits light (most often at the wavelength 1064nm) and uses the return time to calculate the distance to intercepted objects. This yields information on three-dimensional structure and shape, which has been shown in recent studies to yield more accurate LAI estimates than NDVI. However, although lidar is a promising alternative for LAI estimation, minimum acquisition parameters (e.g. point density) required for accurate LAI retrieval are not yet well known. The objective of this study was to determine the minimum number of points per square meter that are required to describe the LAI measurements taken in-field. As part of a larger data collect, discrete lidar data were acquired by Kucera International Inc. over the Hemlock-Canadice State Forest, NY, USA in September 2012. The Leica ALS60 obtained a point density of 12 points per square meter and an effective ground sampling distance (GSD) of 0.15m. Up to three returns with intensities were recorded per pulse. As part of the same experiment, an AccuPAR LP-80 was used to collect LAI estimates at 25 sites on the ground. Sites were spaced approximately 80m apart and nine measurements were made in a grid pattern within a 20 x 20m site. Dominant species include Hemlock, Beech, Sugar Maple and Oak. This study has the benefit of very high-density data, which will enable a detailed map of intra-forest LAI. 
Understanding LAI at fine scales may be particularly useful in forest inventory applications and tree health evaluations. However, such high-density data are often not available over large areas. In this study we progressively downsampled the high-density discrete lidar data and evaluated the effect on LAI estimation. The AccuPAR data were used as validation, and results were compared to existing LAI metrics. This will enable us to determine the minimum point density required for airborne lidar LAI retrieval. Preliminary results show that the data may be substantially thinned and still estimate site-level LAI. More detailed results will be presented at the conference.
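Progressive downsampling of a discrete point cloud can be sketched as simple random thinning to a target density; the geometry and densities below are illustrative, not the study's actual processing chain.

```python
import numpy as np

def thin_points(points, target_density, area_m2, seed=0):
    """Randomly downsample a lidar point cloud (N x 3 array of x, y, z)
    to a target point density in points per square metre."""
    rng = np.random.default_rng(seed)
    n_keep = min(int(target_density * area_m2), points.shape[0])
    idx = rng.choice(points.shape[0], size=n_keep, replace=False)
    return points[idx]

# 12 pts/m^2 over a 20 m x 20 m site, thinned to 2 pts/m^2
rng = np.random.default_rng(1)
cloud = rng.uniform(0.0, 20.0, size=(12 * 400, 3))
thinned = thin_points(cloud, target_density=2.0, area_m2=400.0)
```

Re-running an LAI metric on each thinned cloud then traces how retrieval accuracy degrades with point density.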
Wavelet-based cross-correlation analysis of structure scaling in turbulent clouds
NASA Astrophysics Data System (ADS)
Arshakian, Tigran G.; Ossenkopf, Volker
2016-01-01
Aims: We propose a statistical tool to compare the scaling behaviour of turbulence in pairs of molecular cloud maps. Using artificial maps with well-defined spatial properties, we calibrate the method and test its limitations to apply it ultimately to a set of observed maps. Methods: We develop the wavelet-based weighted cross-correlation (WWCC) method to study the relative contribution of structures of different sizes and their degree of correlation in two maps as a function of spatial scale, and the mutual displacement of structures in the molecular cloud maps. Results: We test the WWCC for circular structures having a single prominent scale and fractal structures showing a self-similar behaviour without prominent scales. Observational noise and a finite map size limit the scales on which the cross-correlation coefficients and displacement vectors can be reliably measured. For fractal maps containing many structures on all scales, the limitation from observational noise is negligible for signal-to-noise ratios ≳5. We propose an approach for the identification of correlated structures in the maps, which allows us to localize individual correlated structures and recognize their shapes and suggest a recipe for recovering enhanced scales in self-similar structures. The application of the WWCC to the observed line maps of the giant molecular cloud G 333 allows us to add specific scale information to the results obtained earlier using the principal component analysis. The WWCC confirms the chemical and excitation similarity of 13CO and C18O on all scales, but shows a deviation of HCN at scales of up to 7 pc. This can be interpreted as a chemical transition scale. The largest structures also show a systematic offset along the filament, probably due to a large-scale density gradient. 
Conclusions: The WWCC can compare correlated structures in different maps of molecular clouds identifying scales that represent structural changes, such as chemical and phase transitions and prominent or enhanced dimensions.
Can modeling improve estimation of desert tortoise population densities?
Nussear, K.E.; Tracy, C.R.
2007-01-01
The federally listed desert tortoise (Gopherus agassizii) is currently monitored using distance sampling to estimate population densities. Distance sampling, as with many other techniques for estimating population density, assumes that it is possible to quantify the proportion of animals available to be counted in any census. Because desert tortoises spend much of their life in burrows, and the proportion of tortoises in burrows at any time can be extremely variable, this assumption is difficult to meet. This proportion of animals available to be counted is used as a correction factor (g0) in distance sampling and has been estimated from daily censuses of small populations of tortoises (6-12 individuals). These censuses are costly and produce imprecise estimates of g0 due to small sample sizes. We used data on tortoise activity from a large (N = 150) experimental population to model activity as a function of the biophysical attributes of the environment, but these models did not improve the precision of estimates from the focal populations. Thus, to evaluate how much of the variance in tortoise activity is apparently not predictable, we assessed whether activity on any particular day can predict activity on subsequent days with essentially identical environmental conditions. Tortoise activity was only weakly correlated on consecutive days, indicating that behavior was not repeatable or consistent among days with similar physical environments. © 2007 by the Ecological Society of America.
Analysis of a wavelet-based compression scheme for wireless image communication
NASA Astrophysics Data System (ADS)
Sun, Zhaohui; Luo, Jiebo; Chen, Chang W.; Parker, Kevin J.
1996-03-01
In wireless image communication, image compression is necessary because of the limited channel bandwidth. The associated channel fading, multipath distortion and various channel noises demand that the applicable image compression technique be amenable to noise combating and error correction techniques designed for wireless communication environment. In this study, we adopt a wavelet-based compression scheme for wireless image communication applications. The scheme includes a novel scene adaptive and signal adaptive quantization which results in coherent scene representation. Such representation can be integrated with the inherent layered structure of the wavelet-based approach to provide possibilities for robust protection of bit stream against impulsive and bursty error conditions frequently encountered in wireless communications. To implement the simulation of wireless image communication, we suggest a scheme of error sources modeling based on the analysis of the general characteristics of the wireless channels. This error source model is based on Markov chain process and is used to generate binary bit error patterns to simulate the bursty nature of the wireless channel errors. Once the compressed image bit stream is passed through the simulated channel, errors will occur according to this bit error pattern. Preliminary comparison between JPEG-based wireless image communication and wavelet-based wireless image communication has been made without application of error control and error resilience to either case. The assessment of the performance based on image quality evaluation shows that the wavelet-based approach is promising for wireless communication with the bursty channel characteristics.
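A two-state Markov error source of the kind described (often called a Gilbert-Elliott model) can be sketched as follows; the transition and per-state error probabilities are illustrative assumptions, not the paper's fitted channel parameters.

```python
import numpy as np

def burst_error_pattern(n_bits, p_gb=0.01, p_bg=0.3, e_good=1e-4, e_bad=0.2, seed=0):
    """Binary error pattern from a two-state Markov chain: a 'good' state with
    rare bit errors and a 'bad' (burst) state with frequent bit errors."""
    rng = np.random.default_rng(seed)
    errors = np.zeros(n_bits, dtype=np.uint8)
    bad = False
    for i in range(n_bits):
        # state transition: good -> bad with p_gb, bad -> good with p_bg
        bad = (rng.random() >= p_bg) if bad else (rng.random() < p_gb)
        # draw a bit error according to the current state
        errors[i] = rng.random() < (e_bad if bad else e_good)
    return errors

pattern = burst_error_pattern(100_000)
bitstream = np.zeros(100_000, dtype=np.uint8)
received = bitstream ^ pattern        # XOR the error pattern onto the bit stream
```

Because errors cluster while the chain sits in the bad state, the pattern is bursty rather than independent, which is exactly the property the compressed bit stream must be protected against.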
Multiresolution analysis on zero-dimensional Abelian groups and wavelet bases
Lukomskii, Sergei F
2010-06-29
For a locally compact zero-dimensional group (G, +̇), we build a multiresolution analysis and put forward an algorithm for constructing orthogonal wavelet bases. A special case is indicated when a wavelet basis is generated from a single function through contractions, translations and exponentiations. Bibliography: 19 titles.
A contact algorithm for density-based load estimation.
Bona, Max A; Martin, Larry D; Fischer, Kenneth J
2006-01-01
An algorithm, which includes contact interactions within a joint, has been developed to estimate the dominant loading patterns in joints based on the density distribution of bone. The algorithm is applied to the proximal femur of a chimpanzee, gorilla and grizzly bear and is compared to the results obtained in a companion paper that uses a non-contact (linear) version of the density-based load estimation method. Results from the contact algorithm are consistent with those from the linear method. While the contact algorithm is substantially more complex than the linear method, it has some added benefits. First, since contact between the two interacting surfaces is incorporated into the load estimation method, the pressure distributions selected by the method are more likely indicative of those found in vivo. Thus, the pressure distributions predicted by the algorithm are more consistent with the in vivo loads that were responsible for producing the given distribution of bone density. Additionally, the relative positions of the interacting bones are known for each pressure distribution selected by the algorithm. This should allow the pressure distributions to be related to specific types of activities. The ultimate goal is to develop a technique that can predict dominant joint loading patterns and relate these loading patterns to specific types of locomotion and/or activities. PMID:16439233
Volume estimation of multi-density nodules with thoracic CT
NASA Astrophysics Data System (ADS)
Gavrielides, Marios A.; Li, Qin; Zeng, Rongping; Myers, Kyle J.; Sahiner, Berkman; Petrick, Nicholas
2014-03-01
The purpose of this work was to quantify the effect of surrounding density on the volumetric assessment of lung nodules in a phantom CT study. Eight synthetic multi-density nodules were manufactured by enclosing spherical cores in larger spheres of double the diameter and with a different uniform density. Different combinations of outer/inner diameters (20/10mm, 10/5mm) and densities (100HU/-630HU, 10HU/-630HU, -630HU/100HU, -630HU/-10HU) were created. The nodules were placed within an anthropomorphic phantom and scanned with a 16-detector row CT scanner. Ten repeat scans were acquired using exposures of 20, 100, and 200mAs, slice collimations of 16x0.75mm and 16x1.5mm, and pitch of 1.2, and were reconstructed with varying slice thicknesses (three for each collimation) using two reconstruction filters (medium and standard). The volumes of the inner nodule cores were estimated from the reconstructed CT data using a matched-filter approach with templates modeling the characteristics of the multi-density objects. Volume estimation of the inner nodule was assessed using percent bias (PB) and the standard deviation of percent error (SPE). The true volumes of the inner nodules were measured using micro CT imaging. Results show PB values ranging from -12.4 to 2.3% and SPE values ranging from 1.8 to 12.8%. This study indicates that the volume of multi-density nodules can be measured with relatively small percent bias (on the order of +/-12% or less) when accounting for the properties of surrounding densities. These findings can provide valuable information for understanding bias and variability in clinical measurements of nodules that also include local biological changes such as inflammation and necrosis.
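The two figures of merit used above, percent bias (PB) and the standard deviation of percent error (SPE), can be computed as follows; the repeat measurements are made-up numbers for illustration, not data from the study.

```python
import numpy as np

def percent_error(estimates, truth):
    return 100.0 * (np.asarray(estimates, dtype=float) - truth) / truth

def percent_bias(estimates, truth):
    """PB: mean of the percent errors over repeat measurements."""
    return percent_error(estimates, truth).mean()

def std_percent_error(estimates, truth):
    """SPE: standard deviation of the percent errors (sample SD)."""
    return percent_error(estimates, truth).std(ddof=1)

# hypothetical repeat volume measurements (mm^3) of a 500 mm^3 inner core
vols = [480, 505, 495, 510, 470, 500, 515, 490, 485, 520]
pb = percent_bias(vols, 500.0)        # about -0.6 %
spe = std_percent_error(vols, 500.0)  # about 3.2 %
```

PB captures systematic under- or over-estimation, while SPE captures repeatability across the ten scans.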
Thermospheric atomic oxygen density estimates using the EISCAT Svalbard Radar
NASA Astrophysics Data System (ADS)
Vickers, H.; Kosch, M. J.; Sutton, E. K.; Ogawa, Y.; La Hoz, C.
2012-12-01
The unique coupling of the ionized and neutral atmosphere through particle collisions allows an indirect study of the neutral atmosphere through measurements of ionospheric plasma parameters. We estimate the neutral density of the upper thermosphere above ~250 km with the EISCAT Svalbard Radar (ESR), using year-long operations during the first year of the International Polar Year (IPY), March 2007 to February 2008. The simplified momentum equation for atomic oxygen ions is used for field-aligned motion in the steady state, taking into account the opposing forces of plasma pressure gradient and gravity only. This restricts the technique to quiet geomagnetic periods, which applies to most of IPY during the recent very quiet solar minimum. Comparison with the MSIS model shows that at 250 km, close to the F-layer peak, the ESR estimates of the atomic oxygen density are typically a factor 1.2 smaller than the MSIS model when data are averaged over the IPY. Differences between MSIS and ESR estimates are also found to depend on both season and magnetic disturbance, with the largest discrepancies noted during winter months. At 350 km, very close agreement with the MSIS model is achieved without evidence of seasonal dependence. This altitude was also close to the orbital altitude of the CHAMP satellite during IPY, allowing a comparison of in-situ measurements and radar estimates of the neutral density. Using a total of 10 in-situ passes by the CHAMP satellite above Svalbard, we show that the estimates made using this technique fall within the error bars of the measurements. We show that the method works best in the height range ~300-400 km, where our assumptions are satisfied, and we anticipate that the technique should be suitable for future thermospheric studies related to geomagnetic storm activity and long-term climate change.
Estimating black bear density using DNA data from hair snares
Gardner, B.; Royle, J. Andrew; Wegan, M.T.; Rainbolt, R.E.; Curtis, P.D.
2010-01-01
DNA-based mark-recapture has become a methodological cornerstone of research focused on bear species. The objective of such studies is often to estimate population size; however, doing so is frequently complicated by movement of individual bears. Movement affects the probability of detection and the assumption of closure of the population required in most models. To mitigate the bias caused by movement of individuals, population size and density estimates are often adjusted using ad hoc methods, including buffering the minimum polygon of the trapping array. We used a hierarchical, spatial capture-recapture model that contains explicit components for the spatial point process that governs the distribution of individuals and their exposure to (via movement), and detection by, traps. We modeled detection probability as a function of each individual's distance to the trap and an indicator variable for previous capture to account for possible behavioral responses. We applied our model to a 2006 hair-snare study of a black bear (Ursus americanus) population in northern New York, USA. Based on the microsatellite marker analysis of collected hair samples, 47 individuals were identified. We estimated mean density at 0.20 bears/km². A positive estimate of the indicator variable suggests that bears are attracted to baited sites; therefore, including a trap-dependence covariate is important when using bait to attract individuals. Bayesian analysis of the model was implemented in WinBUGS, and we provide the model specification. The model can be applied to any spatially organized trapping array (hair snares, camera traps, mist nets, etc.) to estimate density and can also account for heterogeneity and covariate information at the trap or individual level. © The Wildlife Society.
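A common form for the distance-dependent detection component described above is a half-normal decay with an odds-scale bump for previously captured animals. The sketch below uses that generic form; the parameter values (p0, sigma, beta) are hypothetical illustration values, not estimates from the bear study.

```python
import math

# Hedged sketch of a spatial capture-recapture detection model: detection
# probability decays with the distance d between an individual's activity
# center and a trap, and a positive behavioral-response coefficient beta
# models "trap-happy" attraction to baited sites after a first capture.

def detection_prob(d_km, p0=0.3, sigma_km=1.5, prev_capture=False, beta=0.8):
    """Half-normal detection p = p0 * exp(-d^2 / (2 sigma^2)); if the animal
    was captured before, inflate the probability on the odds scale by e^beta."""
    base = p0 * math.exp(-d_km ** 2 / (2.0 * sigma_km ** 2))
    if prev_capture:
        odds = base / (1.0 - base) * math.exp(beta)   # beta > 0 -> attraction
        return odds / (1.0 + odds)
    return base

p_near = detection_prob(0.5)                       # trap close to activity center
p_far = detection_prob(4.0)                        # distant trap
p_near_happy = detection_prob(0.5, prev_capture=True)
print(p_near, p_far, p_near_happy)
```

In a full hierarchical model these probabilities would sit inside a Bernoulli likelihood over individuals, traps, and occasions, with activity centers given a spatial point-process prior, as in the WinBUGS specification the authors provide.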
Structural Reliability Using Probability Density Estimation Methods Within NESSUS
NASA Technical Reports Server (NTRS)
Chamis, Christos C. (Technical Monitor); Godines, Cody Ric
2003-01-01
A reliability analysis studies a mathematical model of a physical system taking into account uncertainties of design variables, and common results are estimations of a response density, which also implies estimations of its parameters. Some common density parameters include the mean value, the standard deviation, and specific percentile(s) of the response, which are measures of central tendency, variation, and probability regions, respectively. Reliability analyses are important since the results can lead to different designs by calculating the probability of observing safe responses in each of the proposed designs. All of this is done at the expense of added computational time as compared to a single deterministic analysis, which will result in one value of the response out of many that make up the density of the response. Sampling methods, such as Monte Carlo (MC) and Latin hypercube sampling (LHS), can be used to perform reliability analyses and can compute nonlinear response density parameters even if the response is dependent on many random variables. Hence, both methods are very robust; however, they are computationally expensive to use in the estimation of the response density parameters. Both methods are 2 of 13 stochastic methods that are contained within the Numerical Evaluation of Stochastic Structures Under Stress (NESSUS) program. NESSUS is a probabilistic finite element analysis (FEA) program that was developed through funding from NASA Glenn Research Center (GRC). It has the additional capability of being linked to other analysis programs; therefore, probabilistic fluid dynamics, fracture mechanics, and heat transfer are only a few of what is possible with this software. The LHS method is the newest addition to the stochastic methods within NESSUS. Part of this work was to enhance NESSUS with the LHS method.
The new LHS module is complete, has been successfully integrated with NESSUS, and has been used to study four different test cases proposed by the Society of Automotive Engineers (SAE). The test cases compare different probabilistic methods within NESSUS because it is important that a user can have confidence that estimates of stochastic parameters of a response will be within an acceptable error limit. For each response, the mean, standard deviation, and 0.99 percentile are repeatedly estimated, which allows confidence statements to be made for each parameter estimated and for each method. Thus, the ability of several stochastic methods to efficiently and accurately estimate density parameters is compared using four valid test cases. While all of the reliability methods used performed quite well, the new LHS module within NESSUS was found to have a lower estimation error than MC when both were used to estimate the mean, standard deviation, and 0.99 percentile of the four different stochastic responses. LHS also required fewer calculations than MC to obtain low-error answers with a high degree of confidence. It can therefore be stated that NESSUS is an important reliability tool with a variety of sound probabilistic methods a user can employ, and the new LHS module is a valuable enhancement of the program.
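The MC-versus-LHS comparison above can be illustrated with a minimal stratified sampler. The sketch below implements textbook Latin hypercube sampling (one draw per equal-probability stratum per dimension, columns shuffled) and estimates the same three density parameters — mean, standard deviation, and 0.99 percentile — of a toy response; the response function is an arbitrary stand-in, not one of the SAE test cases.

```python
import random
import statistics

# Latin hypercube sampling on the unit hypercube: each dimension is split into
# n equal-probability strata, one uniform draw is taken per stratum, and each
# column is shuffled independently to break the pairing between dimensions.
def lhs_uniform(n, dims, rng):
    cols = []
    for _ in range(dims):
        col = [(k + rng.random()) / n for k in range(n)]  # one point per stratum
        rng.shuffle(col)
        cols.append(col)
    return list(zip(*cols))        # n sample points, each a dims-tuple

def response(x1, x2):
    return x1 ** 2 + 3.0 * x2      # toy nonlinear response (stand-in only)

rng = random.Random(42)
n = 1000
lhs_y = sorted(response(u1, u2) for u1, u2 in lhs_uniform(n, 2, rng))
mc_y = sorted(response(rng.random(), rng.random()) for _ in range(n))

for name, y in (("LHS", lhs_y), ("MC", mc_y)):
    # mean, standard deviation, and 0.99 percentile of the response density
    print(name, statistics.fmean(y), statistics.stdev(y), y[int(0.99 * n)])
```

The exact mean of this toy response is 1/3 + 3/2 ≈ 1.833; the stratification typically puts the LHS mean estimate much closer to it than plain MC at the same sample count, which mirrors the lower-error behavior reported for the NESSUS LHS module.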
Combining Ratio Estimation for Low Density Parity Check (LDPC) Coding
NASA Technical Reports Server (NTRS)
Mahmoud, Saad; Hi, Jianjun
2012-01-01
The Low Density Parity Check (LDPC) code decoding algorithm makes use of a scaled received signal derived from maximizing the log-likelihood ratio of the received signal. The scaling factor (often called the combining ratio) in an AWGN channel is a ratio between the signal amplitude and the noise variance. Accurately estimating this ratio has shown as much as a 0.6 dB decoding performance gain. This presentation briefly describes three methods for estimating the combining ratio: a Pilot-Guided estimation method, a Blind estimation method, and a simulation-based look-up table. The Pilot-Guided estimation method has shown that the maximum likelihood estimate of the signal amplitude is the mean inner product of the received sequence and the known sequence, the attached synchronization marker (ASM), and the signal variance is the difference between the mean of the squared received sequence and the square of the signal amplitude. This method has the advantage of simplicity at the expense of latency, since several frames' worth of ASMs must be collected. The Blind estimation method's maximum likelihood estimator is the average of the product of the received signal with the hyperbolic tangent of the product of the combining ratio and the received signal. The root of this equation can be determined by an iterative binary search between 0 and 1 after normalizing the received sequence. This method has the benefit of requiring only one frame of data to estimate the combining ratio, which is good for faster-changing channels compared to the previous method; however, it is computationally expensive. The final method uses a look-up table based on prior simulated results to determine the signal amplitude and noise variance. In this method the received mean signal strength is controlled to a constant soft-decision value. The magnitude of the deviation is averaged over a predetermined number of samples.
This value is referenced in a look up table to determine the combining ratio that prior simulation associated with the average magnitude of the deviation. This method is more complicated than the Pilot-Guided Method due to the gain control circuitry, but does not have the real-time computation complexity of the Blind Estimation method. Each of these methods can be used to provide an accurate estimation of the combining ratio, and the final selection of the estimation method depends on other design constraints.
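The Pilot-Guided estimator described above is simple enough to sketch directly: over a known pilot sequence, the amplitude estimate is the mean inner product of received samples with the known symbols, and the variance estimate is the mean squared sample minus the squared amplitude. The frame length, amplitude, and noise level below are illustrative assumptions.

```python
import math
import random

# Pilot-guided combining-ratio estimation over a known BPSK pilot (ASM-like)
# sequence in AWGN. Amplitude: mean inner product with the known symbols.
# Noise variance: mean(r^2) - A_hat^2. Combining ratio: amplitude over variance.

rng = random.Random(7)
A_true, sigma_true = 1.0, 0.7                       # illustrative channel values
asm = [rng.choice((-1, 1)) for _ in range(4096)]    # known pilot symbols
rx = [A_true * s + rng.gauss(0.0, sigma_true) for s in asm]

A_hat = sum(r * s for r, s in zip(rx, asm)) / len(rx)   # ML amplitude estimate
var_hat = sum(r * r for r in rx) / len(rx) - A_hat ** 2  # noise-variance estimate
ratio = A_hat / var_hat                                  # combining ratio A/sigma^2

print(f"A_hat={A_hat:.3f}  sigma_hat={math.sqrt(var_hat):.3f}  ratio={ratio:.3f}")
```

The Blind method would instead solve A = mean(r · tanh(c · r)) for the ratio c by binary search on the same received frame, trading this method's multi-frame latency for per-frame computation.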
Thermospheric atomic oxygen density estimates using the EISCAT Svalbard Radar
NASA Astrophysics Data System (ADS)
Vickers, H.; Kosch, M. J.; Sutton, E.; Ogawa, Y.; La Hoz, C.
2013-03-01
Coupling between the ionized and neutral atmosphere through particle collisions allows an indirect study of the neutral atmosphere through measurements of ionospheric plasma parameters. We estimate the neutral density of the upper thermosphere above ~250 km with the European Incoherent Scatter Svalbard Radar (ESR) using the year-long operations of the International Polar Year from March 2007 to February 2008. The simplified momentum equation for atomic oxygen ions is used for field-aligned motion in the steady state, taking into account the opposing forces of plasma pressure gradients and gravity only. This restricts the technique to quiet geomagnetic periods, which applies to most of the International Polar Year during the recent very quiet solar minimum. The method works best in the height range ~300-400 km where our assumptions are satisfied. Differences between Mass Spectrometer and Incoherent Scatter and ESR estimates are found to vary with altitude, season, and magnetic disturbance, with the largest discrepancies during the winter months. A total of 9 out of 10 in situ passes by the CHAMP satellite above Svalbard at 350 km altitude agree with the ESR neutral density estimates to within the error bars of the measurements during quiet geomagnetic periods.
A projection and density estimation method for knowledge discovery.
Stanski, Adam; Hellwich, Olaf
2012-01-01
A key ingredient to modern data analysis is probability density estimation. However, it is well known that the curse of dimensionality prevents a proper estimation of densities in high dimensions. The problem is typically circumvented by using a fixed set of assumptions about the data, e.g., by assuming partial independence of features, data on a manifold, or a customized kernel. These fixed assumptions limit the applicability of a method. In this paper we propose a framework that uses a flexible set of assumptions instead. It allows one to tailor a model to various problems by means of 1d-decompositions. The approach achieves a fast runtime and is not limited by the curse of dimensionality, as all estimations are performed in 1d-space. The wide range of applications is demonstrated on two very different real-world examples. The first is a data mining software package that allows the fully automatic discovery of patterns. The software is publicly available for evaluation. As a second example, an image segmentation method is realized. It achieves state-of-the-art performance on a benchmark dataset although it uses only a fraction of the training data and very simple features. PMID:23049675
Effect of Random Clustering on Surface Damage Density Estimates
Matthews, M J; Feit, M D
2007-10-29
Identification and spatial registration of laser-induced damage relative to incident fluence profiles is often required to characterize the damage properties of laser optics near damage threshold. Of particular interest in inertial confinement laser systems are large aperture beam damage tests (>1 cm²) where the number of initiated damage sites for φ > 14 J/cm² can approach 10⁵-10⁶, requiring automatic microscopy counting to locate and register individual damage sites. However, as was shown for the case of bacteria counting in biology decades ago, random overlapping or 'clumping' prevents accurate counting of Poisson-distributed objects at high densities, and must be accounted for if the underlying statistics are to be understood. In this work we analyze the effect of random clumping on damage initiation density estimates at fluences above damage threshold. The parameter ψ = aρ = ρ/ρ₀, where a = 1/ρ₀ is the mean damage site area and ρ is the mean number density, is used to characterize the onset of clumping, and approximations based on a simple model are used to derive an expression for the clumped damage density vs. fluence and damage site size. The influence of the uncorrected ρ vs. φ curve on damage initiation probability predictions is also discussed.
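The undercounting mechanism the abstract describes is easy to reproduce numerically: drop Poisson-random site centers, merge any sites that overlap, and count the resulting clumps. The sketch below does exactly that with a simple union-find; the site radius, density, and unit test area are arbitrary illustration values, and edge effects are ignored.

```python
import random

# Monte Carlo illustration of clumping: at high density, an automatic counter
# that cannot separate overlapping damage sites reports clumps, not sites,
# so the observed count falls below the true initiation count.

def counted_sites(n_true, radius, rng, box=1.0):
    """Drop n_true random centers in a box; merge pairs closer than one site
    diameter (overlapping sites) with union-find; return the clump count."""
    pts = [(rng.random() * box, rng.random() * box) for _ in range(n_true)]
    parent = list(range(n_true))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i

    for i in range(n_true):
        for j in range(i + 1, n_true):
            dx = pts[i][0] - pts[j][0]
            dy = pts[i][1] - pts[j][1]
            if dx * dx + dy * dy < (2 * radius) ** 2:   # sites overlap
                parent[find(i)] = find(j)
    return len({find(i) for i in range(n_true)})

rng = random.Random(1)
n_true = 400
observed = counted_sites(n_true, radius=0.02, rng=rng)
print(observed, "clumps counted for", n_true, "true sites")
```

Sweeping the density upward traces out the uncorrected observed-count curve; in the paper's notation the degree of undercounting is governed by ψ = ρ/ρ₀.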
Accurate photometric redshift probability density estimation - method comparison and application
NASA Astrophysics Data System (ADS)
Rau, Markus Michael; Seitz, Stella; Brimioulle, Fabrice; Frank, Eibe; Friedrich, Oliver; Gruen, Daniel; Hoyle, Ben
2015-10-01
We introduce an ordinal classification algorithm for photometric redshift estimation, which significantly improves the reconstruction of photometric redshift probability density functions (PDFs) for individual galaxies and galaxy samples. As a use case we apply our method to CFHTLS galaxies. The ordinal classification algorithm treats distinct redshift bins as ordered values, which improves the quality of photometric redshift PDFs, compared with non-ordinal classification architectures. We also propose a new single value point estimate of the galaxy redshift, which can be used to estimate the full redshift PDF of a galaxy sample. This method is competitive in terms of accuracy with contemporary algorithms, which stack the full redshift PDFs of all galaxies in the sample, but requires orders of magnitude less storage space. The methods described in this paper greatly improve the log-likelihood of individual object redshift PDFs, when compared with a popular neural network code (ANNZ). In our use case, this improvement reaches 50 per cent for high-redshift objects (z ≳ 0.75). We show that using these more accurate photometric redshift PDFs will lead to a reduction in the systematic biases by up to a factor of 4, when compared with less accurate PDFs obtained from commonly used methods. The cosmological analyses we examine and find improvement upon are the following: gravitational lensing cluster mass estimates, modelling of angular correlation functions and modelling of cosmic shear correlation functions.
Wavelet-Based Real-Time Diagnosis of Complex Systems
NASA Technical Reports Server (NTRS)
Gulati, Sandeep; Mackey, Ryan
2003-01-01
A new method of robust, autonomous real-time diagnosis of a time-varying complex system (e.g., a spacecraft, an advanced aircraft, or a process-control system) is presented here. It is based upon the characterization and comparison of (1) the execution of software, as reported by discrete data, and (2) data from sensors that monitor the physical state of the system, such as performance sensors or similar quantitative time-varying measurements. By taking account of the relationship between execution of, and the responses to, software commands, this method satisfies a key requirement for robust autonomous diagnosis, namely, ensuring that control is maintained and followed. Such monitoring of control software requires that estimates of the state of the system, as represented within the control software itself, are representative of the physical behavior of the system. In this method, data from sensors and discrete command data are analyzed simultaneously and compared to determine their correlation. If the sensed physical state of the system differs from the software estimate (see figure) or if the system fails to perform a transition as commanded by software, or such a transition occurs without the associated command, the system has experienced a control fault. This method provides a means of detecting such divergent behavior and automatically generating an appropriate warning.
Wavelet-based surrogate time series for multiscale simulation of heterogeneous catalysis
Savara, Aditya Ashi; Daw, C. Stuart; Xiong, Qingang; Gur, Sourav; Danielson, Thomas L.; Hin, Celine N.; Pannala, Sreekanth; Frantziskonis, George N.
2016-01-28
We propose a wavelet-based scheme that encodes the essential dynamics of discrete microscale surface reactions in a form that can be coupled with continuum macroscale flow simulations with high computational efficiency. This makes it possible to simulate the dynamic behavior of reactor-scale heterogeneous catalysis without requiring detailed concurrent simulations at both the surface and continuum scales using different models. Our scheme is based on the application of wavelet-based surrogate time series that encode the essential temporal and/or spatial fine-scale dynamics at the catalyst surface. The encoded dynamics are then used to generate statistically equivalent, randomized surrogate time series, which can be linked to the continuum scale simulation. We illustrate an application of this approach using two different kinetic Monte Carlo simulations with different characteristic behaviors typical for heterogeneous chemical reactions.
Chang, Ching-Wei; Mycek, Mary-Ann
2014-01-01
We report the first application of wavelet-based denoising (noise removal) methods to time-domain box-car fluorescence lifetime imaging microscopy (FLIM) images and compare the results to novel total variation (TV) denoising methods. Methods were tested first on artificial images and then applied to low-light live-cell images. Relative to undenoised images, TV methods could improve lifetime precision up to 10-fold in artificial images, while preserving the overall accuracy of lifetime and amplitude values of a single-exponential decay model and improving local lifetime fitting in live-cell images. Wavelet-based methods were at least 4-fold faster than TV methods, but could introduce significant inaccuracies in recovered lifetime values. The denoising methods discussed can potentially enhance a variety of FLIM applications, including live-cell, in vivo animal, or endoscopic imaging studies, especially under challenging imaging conditions such as low-light or fast video-rate imaging. PMID:22415891
Wavelet-based scale-dependent detection of neurological action potentials.
Escolá, Ricardo; Bonnet, Stéphane; Guillemaud, Régis; Magnin, Isabelle
2007-01-01
We study different wavelet-based algorithms for the detection of neurological action potentials recorded using micro-electrode arrays (MEA). We plan to develop a new family of ASIC-embedded low-power algorithms close to the recording sites. We use wavelet theory not for a denoising stage prior to detection (its usual role), but for the detection itself. Different adaptive methods are presented with varying complexity levels. We demonstrate that wavelet-based detection of extracellular action potentials is superior to simpler traditional approaches, at the expense of a slightly larger computational load. Moreover, our method is shown to be fully compatible with an embedded implementation. The proposed algorithms are applied to simulated datasets using a simplified model of the American cockroach antennal lobe. PMID:18002350
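Detecting directly on wavelet coefficients, rather than denoising first, can be sketched with a single-level Haar transform: a spike produces a large detail coefficient that a robust threshold picks out. The threshold rule below (a multiple of the MAD-estimated noise scale) is a common generic choice, not necessarily one of the paper's adaptive rules, and the signal is synthetic.

```python
import math
import random
import statistics

# Single-level Haar detail coefficients: d_k = (x[2k] - x[2k+1]) / sqrt(2).
# A sharp transient concentrates energy in one detail coefficient, so
# thresholding |d_k| detects the event without a separate denoising pass.
def haar_details(x):
    return [(x[2 * k] - x[2 * k + 1]) / math.sqrt(2.0)
            for k in range(len(x) // 2)]

rng = random.Random(3)
n = 512
signal = [rng.gauss(0.0, 1.0) for _ in range(n)]   # background noise
signal[200] += 12.0                                # a spike-like event

d = haar_details(signal)
mad = statistics.median(abs(c) for c in d)         # robust noise-scale proxy
threshold = 5.0 * mad / 0.6745                     # MAD -> sigma, 5-sigma rule
events = [k for k, c in enumerate(d) if abs(c) > threshold]
print("detected detail indices:", events)
```

The spike at sample 200 lands in detail coefficient 100, which stands far above the noise floor; an embedded version would compute the same pairwise differences with shifts and adds, which is what makes this family ASIC-friendly.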
Application of Wavelet Based Denoising for T-Wave Alternans Analysis in High Resolution ECG Maps
NASA Astrophysics Data System (ADS)
Janusek, D.; Kania, M.; Zaczek, R.; Zavala-Fernandez, H.; Zbieć, A.; Opolski, G.; Maniewski, R.
2011-01-01
T-wave alternans (TWA) allows for identification of patients at an increased risk of ventricular arrhythmia. A stress test, which increases heart rate in a controlled manner, is used for TWA measurement. However, TWA detection and analysis are often disturbed by muscular interference. An evaluation of wavelet-based denoising methods was performed to find the optimal algorithm for TWA analysis. ECG signals recorded in twelve patients with cardiac disease were analyzed. In seven of them, a significant T-wave alternans magnitude was detected. The application of a wavelet-based denoising method in the pre-processing stage increases the T-wave alternans magnitude as well as the number of BSPM signals in which TWA was detected.
Density estimation on multivariate censored data with optional Pólya tree.
Seok, Junhee; Tian, Lu; Wong, Wing H
2014-01-01
Analyzing the failure times of multiple events is of interest in many fields. Estimating the joint distribution of the failure times in a non-parametric way is not straightforward because some failure times are often right-censored and only known to be greater than observed follow-up times. Although it has been studied, there is no universally optimal solution for this problem. It is still challenging and important to provide alternatives that may be more suitable than existing ones in specific settings. Related problems of the existing methods are not only limited to infeasible computations, but also include the lack of optimality and possible non-monotonicity of the estimated survival function. In this paper, we proposed a non-parametric Bayesian approach for directly estimating the density function of multivariate survival times, where the prior is constructed based on the optional Pólya tree. We investigated several theoretical aspects of the procedure and derived an efficient iterative algorithm for implementing the Bayesian procedure. The empirical performance of the method was examined via extensive simulation studies. Finally, we presented a detailed analysis using the proposed method on the relationship among organ recovery times in severely injured patients. From the analysis, we suggested interesting medical information that can be further pursued in clinics. PMID:23902636
Comparative study of different wavelet based neural network models for rainfall-runoff modeling
NASA Astrophysics Data System (ADS)
Shoaib, Muhammad; Shamseldin, Asaad Y.; Melville, Bruce W.
2014-07-01
The use of wavelet transformation in rainfall-runoff modeling has become popular because of its ability to simultaneously deal with both the spectral and the temporal information contained within time series data. The selection of an appropriate wavelet function plays a crucial role for successful implementation of the wavelet based rainfall-runoff artificial neural network models as it can lead to further enhancement in the model performance. The present study is therefore conducted to evaluate the effects of 23 mother wavelet functions on the performance of the hybrid wavelet based artificial neural network rainfall-runoff models. The hybrid Multilayer Perceptron Neural Network (MLPNN) and the Radial Basis Function Neural Network (RBFNN) models are developed in this study using both the continuous wavelet and the discrete wavelet transformation types. The performances of the 92 developed wavelet based neural network models with all the 23 mother wavelet functions are compared with the neural network models developed without wavelet transformations. It is found that among all the models tested, the discrete wavelet transform multilayer perceptron neural network (DWTMLPNN) and the discrete wavelet transform radial basis function (DWTRBFNN) models at decomposition level nine with the db8 wavelet function have the best performance. The results also show that pre-processing the input rainfall data with the wavelet transformation can significantly increase the performance of the MLPNN and the RBFNN rainfall-runoff models.
Wavelet-based nearest-regularized subspace for noise-robust hyperspectral image classification
NASA Astrophysics Data System (ADS)
Li, Wei; Liu, Kui; Su, Hongjun
2014-01-01
A wavelet-based nearest-regularized-subspace classifier is proposed for noise-robust hyperspectral image (HSI) classification. The nearest-regularized subspace, coupling the nearest-subspace classification with a distance-weighted Tikhonov regularization, was designed to only consider the original spectral bands. Recent research found that the multiscale wavelet features [e.g., extracted by redundant discrete wavelet transformation (RDWT)] of each hyperspectral pixel are potentially very useful and less sensitive to noise. An integration of wavelet-based features and the nearest-regularized-subspace classifier to improve the classification performance in noisy environments is proposed. Specifically, the wealth of noise-robust features provided by RDWT based on the hyperspectral spectrum is employed in a decision-fusion system or as preprocessing for the nearest-regularized-subspace (NRS) classifier. Improved performance of the proposed method over conventional approaches, such as the support vector machine, is shown by testing several HSIs. For example, the NRS classifier performed with an accuracy of 65.38% for the AVIRIS Indian Pines data with 75 training samples per class under noisy conditions (signal-to-noise ratio = 36.87 dB), while the wavelet-based classifier obtained an accuracy of 71.60%, an improvement of approximately 6%.
Fluorescence diffuse optical tomography: a wavelet-based model reduction
NASA Astrophysics Data System (ADS)
Frassati, Anne; DaSilva, Anabela; Dinten, Jean-Marc; Georges, Didier
2007-07-01
Fluorescence diffuse optical tomography is becoming a powerful tool for the investigation of molecular events in small animal studies for new therapeutics developments. Here, the stress is put on the mathematical problem of the tomography, which can be formulated as the estimation of physical parameters appearing in a set of Partial Differential Equations (PDEs). The Finite Element Method has been chosen here to solve the diffusion equation because it places no restrictions on the geometry or the homogeneity of the system. It is nonetheless well known to be time and memory consuming, mainly because of the large dimensions of the involved matrices. Our principal objective is to reduce the model in order to speed up the model computation. For that, a new method based on a multiresolution technique is chosen. All the matrices appearing in the discretized version of the PDEs are projected onto an orthonormal wavelet basis, and reduced according to the multiresolution method. With the first-order resolution, this compression halves each matrix dimension (a 2x2 reduction of the initial size), and the inversion of the matrices is approximately 4 times faster. A validation study on a phantom was conducted to evaluate the feasibility of this reduction method.
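The matrix-reduction step can be illustrated with a toy single-level Haar projection: transform the system matrix into the wavelet basis and keep only the coarse-scale block, halving each dimension exactly as in the 2x2 reduction above. The 8x8 "system matrix" below is a generic smooth stand-in, not a finite-element diffusion matrix from the paper.

```python
import math

# Orthonormal single-level Haar analysis matrix (n even): the first n/2 rows
# are scaling (averaging) vectors, the last n/2 rows are wavelet (detail) vectors.
def haar_matrix(n):
    h = [[0.0] * n for _ in range(n)]
    s = 1.0 / math.sqrt(2.0)
    for k in range(n // 2):
        h[k][2 * k], h[k][2 * k + 1] = s, s                       # coarse average
        h[n // 2 + k][2 * k], h[n // 2 + k][2 * k + 1] = s, -s    # detail
    return h

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

n = 8
A = [[1.0 / (1 + abs(i - j)) for j in range(n)] for i in range(n)]  # smooth kernel
H = haar_matrix(n)
Ht = [list(row) for row in zip(*H)]

W = matmul(matmul(H, A), Ht)                      # A expressed in the wavelet basis
A_red = [row[: n // 2] for row in W[: n // 2]]    # keep the coarse-coarse block
print(len(A_red), "x", len(A_red[0]), "reduced from", n, "x", n)
```

Dropping the detail blocks is what makes the reduced inversion cheaper; the approximation is good precisely when the matrix entries vary smoothly, so the discarded detail coefficients are small.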
Atmospheric turbulence mitigation using complex wavelet-based fusion.
Anantrasirichai, Nantheera; Achim, Alin; Kingsbury, Nick G; Bull, David R
2013-06-01
Restoring a scene distorted by atmospheric turbulence is a challenging problem in video surveillance. The effect, caused by random, spatially varying, perturbations, makes a model-based solution difficult and in most cases, impractical. In this paper, we propose a novel method for mitigating the effects of atmospheric distortion on observed images, particularly airborne turbulence which can severely degrade a region of interest (ROI). In order to extract accurate detail about objects behind the distorting layer, a simple and efficient frame selection method is proposed to select informative ROIs only from good-quality frames. The ROIs in each frame are then registered to further reduce offsets and distortions. We solve the space-varying distortion problem using region-level fusion based on the dual tree complex wavelet transform. Finally, contrast enhancement is applied. We further propose a learning-based metric specifically for image quality assessment in the presence of atmospheric distortion. This is capable of estimating quality in both full- and no-reference scenarios. The proposed method is shown to significantly outperform existing methods, providing enhanced situational awareness in a range of surveillance scenarios. PMID:23475359
Wavelet-based fMRI analysis: 3-D denoising, signal separation, and validation metrics
Khullar, Siddharth; Michael, Andrew; Correa, Nicolle; Adali, Tulay; Baum, Stefi A.; Calhoun, Vince D.
2010-01-01
We present a novel integrated wavelet-domain based framework (w-ICA) for 3-D de-noising functional magnetic resonance imaging (fMRI) data followed by source separation analysis using independent component analysis (ICA) in the wavelet domain. We propose the idea of a 3-D wavelet-based multi-directional de-noising scheme where each volume in a 4-D fMRI data set is sub-sampled using the axial, sagittal and coronal geometries to obtain three different slice-by-slice representations of the same data. The filtered intensity value of an arbitrary voxel is computed as an expected value of the de-noised wavelet coefficients corresponding to the three viewing geometries for each sub-band. This results in a robust set of de-noised wavelet coefficients for each voxel. Given the decorrelated nature of these de-noised wavelet coefficients, it is possible to obtain more accurate source estimates using ICA in the wavelet domain. The contributions of this work can be realized as two modules. First, the analysis module, where we combine a new 3-D wavelet denoising approach with the better signal separation properties of ICA in the wavelet domain, to yield an activation component that corresponds closely to the true underlying signal and is maximally independent with respect to other components. Second, we propose and describe two novel shape metrics for post-ICA comparisons between activation regions obtained through different frameworks. We verified our method using simulated as well as real fMRI data and compared our results against the conventional scheme (Gaussian smoothing + spatial ICA: s-ICA). The results show significant improvements based on two important features: (1) preservation of shape of the activation region (shape metrics) and (2) receiver operating characteristic (ROC) curves.
It was observed that the proposed framework was able to preserve the actual activation shape in a consistent manner even for very high noise levels in addition to significant reduction in false positives voxels. PMID:21034833
NASA Astrophysics Data System (ADS)
Rastigejev, Y.; Semakin, A. N.
2012-12-01
In this work we present a multilevel Wavelet-based Adaptive Mesh Refinement (WAMR) method for numerical modeling of global atmospheric chemical transport problems. An accurate numerical simulation of such problems presents an enormous challenge. Atmospheric Chemical Transport Models (CTMs) combine chemical reactions with meteorologically predicted atmospheric advection and turbulent mixing. The resulting system of multi-scale advection-reaction-diffusion equations is extremely stiff, nonlinear and involves a large number of chemically interacting species. As a consequence, the need for enormous computational resources for solving these equations imposes severe limitations on the spatial resolution of the CTMs implemented on uniform or quasi-uniform grids. In turn, this relatively crude spatial resolution results in significant numerical diffusion introduced into the system. This numerical diffusion is shown to noticeably distort the pollutant mixing and transport dynamics for typically used grid resolutions. The WAMR method for numerical modeling of atmospheric chemical evolution equations presented in this work provides a significant reduction in the computational cost without sacrificing numerical accuracy, and therefore addresses the numerical difficulties described above. The WAMR method introduces a fine grid in the regions where sharp transitions occur and a coarser grid in the regions of smooth solution behavior, and therefore produces much more accurate solutions than conventional numerical methods implemented on uniform or quasi-uniform grids. The algorithm allows one to provide error estimates of the solution that are used in conjunction with appropriate threshold criteria to adapt the non-uniform grid. The method has been tested for a variety of problems including numerical simulation of traveling pollution plumes.
It was shown that pollution plumes in the remote troposphere can propagate as well-defined layered structures for two weeks or more as they circle the globe. Recently, it was demonstrated that the present global CTMs implemented on quasi-uniform grids are incapable of reproducing these layered structures due to high numerical plume dilution caused by numerical diffusion combined with the non-uniformity of atmospheric flow. On the contrary, the adaptive wavelet technique is shown to produce highly accurate numerical solutions at a relatively low computational cost. It is demonstrated that the developed WAMR method has significant advantages over conventional non-adaptive computational techniques in terms of accuracy and computational cost for numerical calculations of atmospheric chemical transport. The simulations show excellent ability of the algorithm to adapt the computational grid to a solution containing different scales at different spatial locations so as to produce accurate results at a relatively low computational cost. This work is supported by a grant from the National Science Foundation under Award No. HRD-1036563.
Wavelet-based Adaptive Mesh Refinement Method for Global Atmospheric Chemical Transport Modeling
NASA Astrophysics Data System (ADS)
Rastigejev, Y.
2011-12-01
Numerical modeling of global atmospheric chemical transport presents enormous computational difficulties associated with simulating a wide range of temporal and spatial scales. These difficulties are exacerbated by the fact that hundreds of chemical species and thousands of chemical reactions are typically used to describe the chemical kinetic mechanism. These computational requirements very often force researchers to use relatively crude quasi-uniform numerical grids with inadequate spatial resolution, which introduces significant numerical diffusion into the system. It was shown that this spurious diffusion significantly distorts pollutant mixing and transport dynamics at typically used grid resolutions. These numerical difficulties have to be systematically addressed, considering that the demand for fast, high-resolution chemical transport models will only grow over the next decade with the need to interpret satellite observations of tropospheric ozone and related species. In this study we offer a dynamically adaptive multilevel Wavelet-based Adaptive Mesh Refinement (WAMR) method for numerical modeling of atmospheric chemical evolution equations. The adaptive mesh refinement is performed by adding finer levels of resolution where fine scales develop and removing them where the solution behaves smoothly. The algorithm is based on mathematically well-established wavelet theory, which allows us to provide error estimates of the solution that are used in conjunction with appropriate threshold criteria to adapt the non-uniform grid. Other essential features of the numerical algorithm include an efficient wavelet spatial discretization that minimizes the number of degrees of freedom for a prescribed accuracy, a fast algorithm for computing wavelet amplitudes, and efficient and accurate derivative approximations on an irregular grid.
The method has been tested on a variety of benchmark problems, including numerical simulation of transpacific traveling pollution plumes. Such plumes are diluted by turbulent mixing as they are advected downwind. Despite this dilution, it was recently discovered that pollution plumes in the remote troposphere can preserve their identity as well-defined structures for two weeks or more as they circle the globe. Present global Chemical Transport Models (CTMs) implemented on quasi-uniform grids are incapable of reproducing these layered structures because of the strong plume dilution caused by numerical diffusion combined with the non-uniformity of the atmospheric flow. It is shown that WAMR solutions of accuracy comparable to conventional numerical techniques are obtained with more than an order of magnitude fewer grid points; the adaptive algorithm therefore produces accurate results at a relatively low computational cost. The numerical simulations demonstrate that the WAMR algorithm, applied to the traveling plume problem, accurately reproduces the plume dynamics, unlike conventional numerical methods that utilize quasi-uniform grids.
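The refinement criterion described above can be sketched in a few lines: one-level wavelet detail coefficients measure local non-smoothness, and cells whose amplitude exceeds a threshold are flagged for a finer grid. This is a minimal 1-D illustration with a Haar wavelet and made-up field and threshold values, not the paper's multilevel algorithm.

```python
import numpy as np

def haar_details(s):
    """One-level Haar detail coefficients: normalized within-pair differences."""
    even, odd = s[0::2], s[1::2]
    return (even - odd) / np.sqrt(2.0)

def refine_flags(s, tol):
    """Flag coarse cells whose wavelet amplitude exceeds the threshold."""
    return np.abs(haar_details(s)) > tol

# Toy 1-D tracer field: slowly varying background plus a sharp plume front.
n = 64
front = 31                      # grid index of the front (chosen inside one Haar pair)
field = 0.001 * np.arange(n) + (np.arange(n) >= front).astype(float)

flags = refine_flags(field, tol=0.05)
# Only the coarse cell containing the front is flagged for refinement;
# the smooth background produces detail amplitudes far below the threshold.
```

In the full method this test is applied level by level, so resolution is added only where the previous level's details remain above threshold.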
Smallwood, D. O.
1996-01-01
It is shown that the usual method for estimating the coherence functions (ordinary, partial, and multiple) for a general multiple-input/multiple-output problem can be expressed as a modified form of Cholesky decomposition of the cross-spectral density matrix of the input and output records. The results can be obtained equivalently using singular value decomposition (SVD) of the cross-spectral density matrix. Using SVD suggests a new form of fractional coherence. The formulation as an SVD problem also suggests a way to order the inputs when a natural physical order of the inputs is absent.
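As a rough illustration of the equivalence noted above, the ordinary coherence of a single input/output pair can be computed from the segment-averaged cross-spectral density matrix, whose SVD (for a Hermitian matrix, equivalent to its eigendecomposition) then exposes a dominant singular value when one source drives both records. The signal model and segment count below are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
nseg, nfft = 64, 256

# Segments of a single-input/single-output pair: y is x plus weak noise.
x = rng.standard_normal((nseg, nfft))
y = x + 0.1 * rng.standard_normal((nseg, nfft))

X = np.fft.rfft(x, axis=1)
Y = np.fft.rfft(y, axis=1)

k = 10  # inspect one frequency bin
Gxx = np.mean(np.abs(X[:, k]) ** 2)
Gyy = np.mean(np.abs(Y[:, k]) ** 2)
Gxy = np.mean(X[:, k] * np.conj(Y[:, k]))

# Ordinary coherence from the cross-spectral density matrix ...
coh = np.abs(Gxy) ** 2 / (Gxx * Gyy)

# ... and the SVD route: the 2x2 CSD matrix is Hermitian, so its singular
# values coincide with its eigenvalues; a dominant singular value indicates
# one effective source shared by both records.
G = np.array([[Gxx, Gxy], [np.conj(Gxy), Gyy]])
s = np.linalg.svd(G, compute_uv=False)
dominant_fraction = s[0] / s.sum()
```

With a 0.1 noise amplitude both the coherence and the dominant-singular-value fraction come out close to one, consistent with a single underlying source.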
Change-in-ratio density estimator for feral pigs is less biased than closed mark-recapture estimates
Hanson, L.B.; Grand, J.B.; Mitchell, M.S.; Jolley, D.B.; Sparklin, B.D.; Ditchkoff, S.S.
2008-01-01
Closed-population capture-mark-recapture (CMR) methods can produce biased density estimates for species with low or heterogeneous detection probabilities. In an attempt to address such biases, we developed a density-estimation method based on the change in ratio (CIR) of survival between two populations where survival, calculated using an open-population CMR model, is known to differ. We used our method to estimate density for a feral pig (Sus scrofa) population on Fort Benning, Georgia, USA. To assess its validity, we compared it to an estimate of the minimum density of pigs known to be alive and two estimates based on closed-population CMR models. Comparison of the density estimates revealed that the CIR estimator produced a density estimate with low precision that was reasonable with respect to minimum known density. By contrast, density point estimates using the closed-population CMR models were less than the minimum known density, consistent with biases created by low and heterogeneous capture probabilities for species, such as feral pigs, that may occur at low density or are difficult to capture. Our CIR density estimator may be useful for tracking broad-scale, long-term changes in species, such as large cats, for which closed CMR models are unlikely to work. © CSIRO 2008.
Estimating tropical-forest density profiles from multibaseline interferometric SAR
NASA Technical Reports Server (NTRS)
Treuhaft, Robert; Chapman, Bruce; dos Santos, Joao Roberto; Dutra, Luciano; Goncalves, Fabio; da Costa Freitas, Corina; Mura, Jose Claudio; de Alencastro Graca, Paulo Mauricio
2006-01-01
Vertical profiles of forest density are potentially robust indicators of forest biomass, fire susceptibility and ecosystem function. Tropical forests, which are among the most dense and complicated targets for remote sensing, contain about 45% of the world's biomass. Remote sensing of tropical forest structure is therefore an important component of global biomass and carbon monitoring. This paper shows preliminary results of a multibaseline interferometric SAR (InSAR) experiment over primary, secondary, and selectively logged forests at La Selva Biological Station in Costa Rica. The profile shown results from inverse Fourier transforming 8 of the 18 baselines acquired and is compared to lidar and field measurements. Results are highly preliminary and for qualitative assessment only. Parameter estimation will eventually replace Fourier inversion as the means of producing profiles.
Bayesian MCMC Bandwidth Estimation on Kernel Density Estimation for Flood Frequency Analysis
NASA Astrophysics Data System (ADS)
Lee, T.; Ouarda, T. B.; Lee, J.
2009-05-01
Recent advances in computational capacity allow the use of more sophisticated approaches that require high computational power, such as importance sampling and Bayesian Markov Chain Monte Carlo (BMCMC) methods. In flood frequency analysis, the use of BMCMC allows modeling of the uncertainty associated with quantile estimates through the posterior distributions of model parameters. BMCMC models have been used in association with various parametric distributions in the estimation of flood quantiles, but they have never been applied with nonparametric distributions for the same objective. In this paper, BMCMC is used to select the bandwidth of a kernel density estimate (KDE) in order to carry out extreme value frequency analysis. KDE has not gained much acceptance in the field of frequency analysis because the estimate dies off quickly away from the observation points, giving it low predictive ability. The use of gamma kernels solves this problem thanks to their thicker right tails and variable kernel smoothness: even for a fixed bandwidth, the gamma kernel's variance changes with the evaluation point. Furthermore, BMCMC provides the uncertainty induced by the bandwidth selection. The predictive ability of the gamma KDE is investigated with Monte Carlo simulation. Results show the usefulness of the gamma kernel density estimate in flood frequency analysis.
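A minimal sketch of the gamma-kernel idea, assuming the estimator of Chen (2000): each evaluation point x gets a gamma kernel with shape x/b + 1 and scale b, so the kernel's spread grows with x and the estimate stays supported on [0, ∞). The exponential "flood" sample and bandwidth b are illustrative, and scipy is assumed available; the paper's BMCMC bandwidth selection is not reproduced here.

```python
import numpy as np
from scipy.stats import gamma

def gamma_kde(x_grid, data, b):
    """Gamma-kernel density estimate: at each evaluation point x, average
    gamma pdfs with shape x/b + 1 and scale b over the sample. The kernel's
    variance grows with x, which thickens the right tail of the estimate."""
    shapes = x_grid / b + 1.0                    # one kernel shape per grid point
    # pdf matrix: rows = evaluation points, columns = observations
    pdfs = gamma.pdf(data[None, :], a=shapes[:, None], scale=b)
    return pdfs.mean(axis=1)

rng = np.random.default_rng(1)
flows = rng.exponential(scale=1.0, size=1000)    # synthetic flood magnitudes

xs = np.linspace(0.0, 8.0, 400)
fhat = gamma_kde(xs, flows, b=0.2)
```

Unlike a fixed-bandwidth Gaussian KDE, this estimate never leaks probability mass below zero, which matters for strictly positive flood data.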
The effectiveness of tape playbacks in estimating Black Rail densities
Legare, M.; Eddleman, W.R.; Buckley, P.A.; Kelly, C.
1999-01-01
Tape playback is often the only efficient technique to survey for secretive birds. We measured the vocal responses and movements of radio-tagged black rails (Laterallus jamaicensis; 26 M, 17 F) to playback of vocalizations at 2 sites in Florida during the breeding seasons of 1992-95. We used coefficients from logistic regression equations to model probability of a response conditional on the birds' sex, nesting status, distance to playback source, and time of survey. With a probability of 0.811, nonnesting male black rails were most likely to respond to playback, while nesting females were the least likely to respond (probability = 0.189). We used linear regression to determine daily, monthly, and annual variation in response from weekly playback surveys along a fixed route during the breeding seasons of 1993-95. Significant sources of variation in the regression model were month (F(3,48) = 3.89, P = 0.014), year (F(2,48) = 9.37, P < 0.001), temperature (F(1,48) = 5.44, P = 0.024), and month × year (F(5,48) = 2.69, P = 0.031). The model was highly significant (P < 0.001) and explained 54% of the variation in mean response per survey period (r2 = 0.54). We combined response probability data from radio-tagged black rails with playback survey route data to provide a density estimate of 0.25 birds/ha for the St. Johns National Wildlife Refuge. The relation between the number of black rails heard during playback surveys and the actual number present was influenced by a number of variables. We recommend caution when making density estimates from tape playback surveys.
Cortical cell and neuron density estimates in one chimpanzee hemisphere.
Collins, Christine E; Turner, Emily C; Sawyer, Eva Kille; Reed, Jamie L; Young, Nicole A; Flaherty, David K; Kaas, Jon H
2016-01-19
The density of cells and neurons in the neocortex of many mammals varies across cortical areas and regions. This variability is, perhaps, most pronounced in primates. Nonuniformity in the composition of cortex suggests regions of the cortex have different specializations. Specifically, regions with densely packed neurons contain smaller neurons that are activated by relatively few inputs, thereby preserving information, whereas regions that are less densely packed have larger neurons that have more integrative functions. Here we present the numbers of cells and neurons for 742 discrete locations across the neocortex in a chimpanzee. Using isotropic fractionation and flow fractionation methods for cell and neuron counts, we estimate that neocortex of one hemisphere contains 9.5 billion cells and 3.7 billion neurons. Primary visual cortex occupies 35 cm² of surface, 10% of the total, and contains 737 million densely packed neurons, 20% of the total neurons contained within the hemisphere. Other areas of high neuron packing include secondary visual areas, somatosensory cortex, and prefrontal granular cortex. Areas of low levels of neuron packing density include motor and premotor cortex. These values reflect those obtained from more limited samples of cortex in humans and other primates. PMID:26729880
Estimating Foreign-Object-Debris Density from Photogrammetry Data
NASA Technical Reports Server (NTRS)
Long, Jason; Metzger, Philip; Lane, John
2013-01-01
Within the first few seconds after launch of STS-124, debris traveling vertically near the vehicle was captured on two 16-mm film cameras surrounding the launch pad. One particular piece of debris caught the attention of engineers investigating the release of the flame trench fire bricks. The question to be answered was whether the debris was a fire brick, representing the first bricks ejected from the flame trench wall, or one of the pieces of debris normally ejected from the vehicle during launch. If it was typical launch debris, such as SRB throat plug foam, why was it traveling vertically and parallel to the vehicle during launch instead of following its normal trajectory, flying horizontally toward the north perimeter fence? By combining the Runge-Kutta integration method for velocity with the Verlet integration method for position, a method was obtained that suppresses trajectory computational instabilities due to noisy position data. This combination of integration methods provides a means to extract the best estimate of drag force and drag coefficient under the non-ideal conditions of limited position data. This integration strategy leads immediately to the best possible estimate of object density, within the constraints of unknown particle shape. These types of calculations do not exist in readily available off-the-shelf simulation software, especially where photogrammetry data are needed as an input.
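The position-integration ingredient can be sketched as follows: a velocity-Verlet step for a particle under gravity and quadratic drag, the force model from which a drag coefficient (and hence density) would be fitted. The debris parameters are invented, not the STS-124 values, and the half-step velocity treatment is a common approximation when the force depends on velocity.

```python
def verlet_fall(rho_air, Cd, A, m, t_end, dt=1e-3, g=9.81):
    """Velocity-Verlet integration of a particle falling under gravity and
    quadratic aerodynamic drag F_d = 0.5 * rho * Cd * A * v * |v|."""
    def accel(v):
        return -g - 0.5 * rho_air * Cd * A * v * abs(v) / m

    z, v = 0.0, 0.0
    a = accel(v)
    for _ in range(int(round(t_end / dt))):
        z += v * dt + 0.5 * a * dt * dt
        v_half = v + 0.5 * a * dt
        a_new = accel(v_half)      # drag depends on velocity: evaluate at half step
        v = v_half + 0.5 * a_new * dt
        a = a_new
    return z, v

# Illustrative brick-like debris: 2 kg, 0.02 m^2 frontal area, Cd ~ 1.0.
z_drag, v_drag = verlet_fall(rho_air=1.2, Cd=1.0, A=0.02, m=2.0, t_end=3.0)
z_free, v_free = verlet_fall(rho_air=0.0, Cd=1.0, A=0.02, m=2.0, t_end=3.0)
```

Fitting the simulated trajectory to the photogrammetry-derived positions over a grid of Cd values is one way the drag coefficient, and from it the density, could be recovered.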
NASA Astrophysics Data System (ADS)
Walia, Suresh Kumar; Patel, Raj Kumar; Vinayak, Hemant Kumar; Parti, Raman
2013-12-01
The objective of this study is to bring out errors introduced during construction that are overlooked during physical verification of a bridge. Such errors can be pointed out if the symmetry of the structure is challenged. This paper thus presents a study of the downstream and upstream trusses of a newly constructed steel bridge using time-frequency and wavelet-based approaches. The variation in the behavior of the truss joints with vehicle speed has been worked out to determine their flexibility. Testing was carried out with the same instrument setup on both the upstream and downstream trusses of the bridge at two different speeds with the same moving vehicle. The nodal flexibility investigation is carried out using power spectral density, the short-time Fourier transform, and the wavelet packet transform with respect to both truss and speed. The results show that the joints of the upstream and downstream trusses behave differently, even though they were designed for the same loading, because of constructional variations and vehicle movement, whereas the analytical models present a simplistic picture for analysis and design. The difficulty of modal parameter extraction for this bridge increased with speed because of the decreased excitation time.
ICER-3D: A Progressive Wavelet-Based Compressor for Hyperspectral Images
NASA Technical Reports Server (NTRS)
Kiely, A.; Klimesh, M.; Xie, H.; Aranki, N.
2005-01-01
ICER-3D is a progressive, wavelet-based compressor for hyperspectral images. ICER-3D is derived from the ICER image compressor. ICER-3D can provide lossless and lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The three-dimensional wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of hyperspectral data sets, while facilitating elimination of spectral ringing artifacts. Correlation is further exploited by a context modeler that effectively exploits spectral dependencies in the wavelet-transformed hyperspectral data. Performance results illustrating the benefits of these features are presented.
Iterated denoising and fusion to improve the image quality of wavelet-based coding
NASA Astrophysics Data System (ADS)
Song, Beibei
2011-06-01
An iterated denoising and fusion method is presented to improve the image quality of wavelet-based coding. Firstly, iterated image denoising is used to reduce ringing and staircase noise along curving edges and improve edge regularity. Then, we adopt wavelet fusion method to enhance image edges, protect non-edge regions and decrease blurring artifacts during the process of denoising. Experimental results have shown that the proposed scheme is capable of improving both the subjective and the objective performance of wavelet decoders, such as JPEG2000 and SPIHT.
WaveGuide: a joint wavelet-based image representation and description system.
Liang, K C; Kuo, C J
1999-01-01
Data representation and content description are two basic components required by the management of any image database. A wavelet-based system, called WaveGuide, which integrates these two components in a unified framework, is proposed in this work. In the WaveGuide system, images are compressed with a state-of-the-art wavelet coding technique and indexed with color, texture, and object shape descriptors generated in the wavelet domain during the encoding process. All the content descriptors are extracted automatically with low computational complexity and stored in a small memory space. Extensive experiments are performed to demonstrate the performance of the new approach. PMID:18267436
Lancia, Leonardo; Rausch, Philip; Morris, Jeffrey S
2015-02-01
This paper illustrates the application of wavelet-based functional mixed models to automatic quantification of differences between tongue contours obtained through ultrasound imaging. The reliability of this method is demonstrated through the analysis of tongue positions recorded from a female and a male speaker at the onset of the vowels /a/ and /i/ produced in the context of the consonants /t/ and /k/. The proposed method allows detection of significant differences between configurations of the articulators that are visible in ultrasound images during the production of different speech gestures and is compatible with statistical designs containing both fixed and random terms. PMID:25698047
Robust location and spread measures for nonparametric probability density function estimation.
López-Rubio, Ezequiel
2009-10-01
Robustness against outliers is a desirable property of any unsupervised learning scheme. In particular, probability density estimators benefit from incorporating this feature. A possible strategy to achieve this goal is to substitute the sample mean and the sample covariance matrix by more robust location and spread estimators. Here we use the L1-median to develop a nonparametric probability density function (PDF) estimator. We prove its most relevant properties, and we show its performance in density estimation and classification applications. PMID:19885963
Sarkar, Indranil; Bansal, Manu
2007-08-01
In this correspondence, we propose a wavelet-based hierarchical approach using mutual information (MI) to solve the correspondence problem in stereo vision. The correspondence problem involves identifying corresponding pixels between images of a given stereo pair. This results in a disparity map, which is required to extract depth information of the relevant scene. Until recently, mostly correlation-based methods have been used to solve the correspondence problem. However, the performance of correlation-based methods degrades significantly when there is a change in illumination between the two images of the stereo pair. Recent studies indicate MI to be a more robust stereo matching metric for images affected by such radiometric distortions. In this short correspondence paper, we compare the performances of MI and correlation-based metrics for different types of illumination changes between stereo images. MI, as a statistical metric, is computationally more expensive. We propose a wavelet-based hierarchical technique to counter the increase in computational cost and show its effectiveness in stereo matching. PMID:17702296
NASA Astrophysics Data System (ADS)
Miner, Nadine Elizabeth
1998-09-01
This dissertation presents a new wavelet-based method for synthesizing perceptually convincing, dynamic sounds using parameterized sound models. The sound synthesis method is applicable to a variety of applications including Virtual Reality (VR), multi-media, entertainment, and the World Wide Web (WWW). A unique contribution of this research is the modeling of the stochastic, or non-pitched, sound components. This stochastic-based modeling approach leads to perceptually compelling sound synthesis. Two preliminary studies conducted provide data on multi-sensory interaction and audio-visual synchronization timing. These results contributed to the design of the new sound synthesis method. The method uses a four-phase development process, including analysis, parameterization, synthesis and validation, to create the wavelet-based sound models. A patent is pending for this dynamic sound synthesis method, which provides perceptually-realistic, real-time sound generation. This dissertation also presents a battery of perceptual experiments developed to verify the sound synthesis results. These experiments are applicable for validation of any sound synthesis technique.
Prediction and identification using wavelet-based recurrent fuzzy neural networks.
Lin, Cheng-Jian; Chin, Cheng-Chung
2004-10-01
This paper presents a wavelet-based recurrent fuzzy neural network (WRFNN) for prediction and identification of nonlinear dynamic systems. The proposed WRFNN model combines the traditional Takagi-Sugeno-Kang (TSK) fuzzy model and the wavelet neural networks (WNN). This paper adopts the nonorthogonal and compactly supported functions as wavelet neural network bases. Temporal relations embedded in the network are caused by adding some feedback connections representing the memory units into the second layer of the feedforward wavelet-based fuzzy neural networks (WFNN). An online learning algorithm, which consists of structure learning and parameter learning, is also presented. The structure learning depends on the degree measure to obtain the number of fuzzy rules and wavelet functions. Meanwhile, the parameter learning is based on the gradient descent method for adjusting the shape of the membership function and the connection weights of WNN. Finally, computer simulations have demonstrated that the proposed WRFNN model requires fewer adjustable parameters and obtains a smaller rms error than other methods. PMID:15503511
Vijay, G S; Kumar, H S; Srinivasa Pai, P; Sriram, N S; Rao, Raj B K N
2012-01-01
Wavelet-based denoising has proven its ability to denoise bearing vibration signals, improving the signal-to-noise ratio (SNR) and reducing the root-mean-square error (RMSE). In this paper, seven wavelet-based denoising schemes are evaluated based on the performance of an Artificial Neural Network (ANN) and a Support Vector Machine (SVM) for bearing condition classification. The work consists of two parts. In the first part, a synthetic signal simulating a defective bearing vibration signal with Gaussian noise was subjected to these denoising schemes, and the best scheme was identified based on the SNR and the RMSE. In the second part, vibration signals collected from a customized Rolling Element Bearing (REB) test rig for four bearing conditions were subjected to the same schemes. Several time- and frequency-domain features were extracted from the denoised signals, out of which a few sensitive features were selected using Fisher's Criterion (FC). The extracted features were used to train and test the ANN and the SVM. The best denoising scheme identified from the classification performances of the ANN and the SVM was found to be the same as the one obtained using the synthetic signal. PMID:23213323
3D wavelet-based codec for lossy compression of pre-scan-converted ultrasound video
NASA Astrophysics Data System (ADS)
Andrew, Rex K.; Stewart, Brent K.; Langer, Steven G.; Stegbauer, Keith C.
1999-05-01
We present a wavelet-based video codec based on a 3D wavelet transformer, a uniform quantizer/dequantizer and an arithmetic encoder/decoder. The wavelet transformer uses biorthogonal Antonini wavelets in the two spatial dimensions and Haar wavelets in the time dimensions. Multiple levels of decomposition are supported. The codec has been applied to pre-scan-converted ultrasound image data and does not produce the type of blocking artifacts that occur in MPEG- compressed video. The PSNR at a given compression rate increases with the number of levels of decomposition: for our data at 50:1 compression, the PSNR increases from 18.4 dB at one level to 24.0 dB at four levels of decomposition. Our 3D wavelet-based video codec provides the high compression rates required to transmit diagnostic ultrasound video over existing low bandwidth links without introducing the blocking artifacts which have been demonstrated to diminish clinical utility.
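The 3-D decomposition can be illustrated with a separable one-level transform. For simplicity this sketch uses Haar along all three axes (the codec itself uses biorthogonal Antonini wavelets in the spatial dimensions and Haar only in time), and checks perfect reconstruction on a toy video cube; quantization and arithmetic coding are omitted.

```python
import numpy as np

def haar_fwd(a, axis):
    """One-level Haar analysis along one axis: averages, then differences."""
    a = np.moveaxis(a, axis, 0)
    lo = (a[0::2] + a[1::2]) / np.sqrt(2.0)
    hi = (a[0::2] - a[1::2]) / np.sqrt(2.0)
    return np.moveaxis(np.concatenate([lo, hi]), 0, axis)

def haar_inv(a, axis):
    """Inverse of haar_fwd: de-interleave averages and differences."""
    a = np.moveaxis(a, axis, 0)
    h = a.shape[0] // 2
    lo, hi = a[:h], a[h:]
    out = np.empty_like(a)
    out[0::2] = (lo + hi) / np.sqrt(2.0)
    out[1::2] = (lo - hi) / np.sqrt(2.0)
    return np.moveaxis(out, 0, axis)

rng = np.random.default_rng(2)
frames = rng.random((8, 16, 16))         # (time, rows, cols) toy video cube

coeffs = frames
for ax in range(3):                      # separable 3-D decomposition
    coeffs = haar_fwd(coeffs, ax)

recon = coeffs
for ax in range(3):                      # axis transforms commute, so this inverts
    recon = haar_inv(recon, ax)
```

In the codec, the energy compaction of `coeffs` (most detail subbands near zero) is what the quantizer and arithmetic coder exploit to reach high compression ratios.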
Wavelet-based multiscale anisotropic diffusion for speckle reduction and edge enhancement
NASA Astrophysics Data System (ADS)
Wang, Yi; Niu, Ruiqing; Wu, Ke; Yu, Xin
2009-10-01
In order to improve the signal-to-noise ratio (SNR) and image quality, this paper introduces a wavelet-based multiscale anisotropic diffusion algorithm to remove speckle noise and enhance edges. In our algorithm, we use wavelets to construct a linear scale-space for the speckle image. Due to the smoothing functionality of the scaling function, the wavelet-based multiscale representation of the speckle image is much more stationary than the raw speckle image. Noise is mostly located in the finest scale and tends to decrease as the scale increases. Furthermore, a robust speckle reducing anisotropic diffusion (SRAD) is proposed, and we perform the improved SRAD on the stationary scale-space rather than on the raw speckle image domain. Qualitative experiments based on a speckled synthetic aperture radar (SAR) image show the elegant edge-preserving filtering characteristics of the approach versus traditional adaptive filters. Quantitative analyses, based on first-order statistics and the Equivalent Number of Looks, confirm the validity and effectiveness of the proposed algorithm.
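As a stand-in for SRAD, which shares the same structure, the classical Perona-Malik scheme below shows how a gradient-dependent conduction coefficient smooths flat (speckled) regions while leaving strong edges nearly untouched. The step sizes, conduction constant, and test image are all illustrative, not the paper's settings.

```python
import numpy as np

def pm_step(img, kappa=0.1, lam=0.2):
    """One explicit Perona-Malik diffusion step (4-neighbour scheme).
    The conduction coefficient decays with gradient magnitude, so weak
    (noise) gradients diffuse freely while strong edges barely move."""
    dN = np.roll(img, 1, axis=0) - img
    dS = np.roll(img, -1, axis=0) - img
    dE = np.roll(img, -1, axis=1) - img
    dW = np.roll(img, 1, axis=1) - img
    c = lambda d: 1.0 / (1.0 + (d / kappa) ** 2)
    return img + lam * (c(dN) * dN + c(dS) * dS + c(dE) * dE + c(dW) * dW)

rng = np.random.default_rng(3)
clean = np.zeros((64, 64))
clean[:, 32:] = 1.0                      # a single strong vertical edge
noisy = clean + 0.05 * rng.standard_normal(clean.shape)

smoothed = noisy
for _ in range(20):
    smoothed = pm_step(smoothed)
```

Running the diffusion on a wavelet scale-space instead of the raw image, as the paper proposes, would apply steps like this to the smoother approximation coefficients rather than directly to speckle.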
Estimation of density of mongooses with capture-recapture and distance sampling
Corn, J.L.; Conroy, M.J.
1998-01-01
We captured mongooses (Herpestes javanicus) in live traps arranged in trapping webs in Antigua, West Indies, and used capture-recapture and distance sampling to estimate density. Distance estimation and program DISTANCE were used to provide estimates of density from the trapping-web data. Mean density based on trapping webs was 9.5 mongooses/ha (range, 5.9-10.2/ha); estimates had coefficients of variation ranging from 29.82-31.58% (x̄ = 30.46%). Mark-recapture models were used to estimate abundance, which was converted to density using estimates of effective trap area. Tests of model assumptions provided by CAPTURE indicated pronounced heterogeneity in capture probabilities and some indication of behavioral response and variation over time. Mean estimated density was 1.80 mongooses/ha (range, 1.37-2.15/ha) with estimated coefficients of variation of 4.68-11.92% (x̄ = 7.46%). Estimates of density based on mark-recapture data depended heavily on assumptions about animal home ranges; variances of densities also may be underestimated, leading to unrealistically narrow confidence intervals. Estimates based on trap webs require fewer assumptions, and estimated variances may be a more realistic representation of sampling variation. Because trap webs are established easily and provide adequate data for estimation in a few sample occasions, the method should be efficient and reliable for estimating densities of mongooses.
Nonparametric estimation of population density for line transect sampling using FOURIER series
Crain, B.R.; Burnham, K.P.; Anderson, D.R.; Lake, J.L.
1979-01-01
A nonparametric, robust density estimation method is explored for the analysis of right-angle distances from a transect line to the objects sighted. The method is based on the FOURIER series expansion of a probability density function over an interval. With only mild assumptions, a general population density estimator of wide applicability is obtained.
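The estimator can be written down directly, assuming the usual cosine-series form on [0, w]: f(x) = 1/w + Σ a_k cos(kπx/w) with a_k = (2/(nw)) Σ_i cos(kπx_i/w). The half-normal sample, truncation distance, and number of terms m below are illustrative.

```python
import numpy as np

def fourier_density(x_eval, distances, w, m=4):
    """Fourier-series density estimate on [0, w] from perpendicular
    sighting distances: f(x) = 1/w + sum_k a_k cos(k*pi*x/w), with
    a_k = (2/(n*w)) * sum_i cos(k*pi*x_i/w)."""
    x = np.asarray(x_eval, dtype=float)
    n = len(distances)
    f = np.full(x.shape, 1.0 / w)
    for k in range(1, m + 1):
        a_k = 2.0 / (n * w) * np.cos(k * np.pi * distances / w).sum()
        f += a_k * np.cos(k * np.pi * x / w)
    return f

rng = np.random.default_rng(4)
# Half-normal perpendicular distances, truncated at w = 1.
d = np.abs(rng.normal(0.0, 0.3, size=4000))
d = d[d < 1.0]

xs = np.linspace(0.0, 1.0, 201)
fhat = fourier_density(xs, d, w=1.0)
# f(0) is the key quantity: in the standard line-transect estimator the
# density is D = n * f(0) / (2 * L) for total transect length L.
```

Because every cosine term integrates to zero over [0, w], the estimate integrates to one by construction, whatever m is chosen.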
Demonstration of line transect methodologies to estimate urban gray squirrel density
Hein, E.W.
1997-11-01
Because studies estimating density of gray squirrels (Sciurus carolinensis) have been labor intensive and costly, I demonstrate the use of line transect surveys to estimate gray squirrel density and determine the costs of conducting surveys to achieve precise estimates. Density estimates are based on four transects that were surveyed five times from 30 June to 9 July 1994. Using the program DISTANCE, I estimated there were 4.7 (95% CI = 1.86-11.92) gray squirrels/ha on the Clemson University campus. Eleven additional surveys would have decreased the percent coefficient of variation from 30% to 20% and would have cost approximately $114. Estimating urban gray squirrel density using line transect surveys is cost effective and can provide unbiased estimates of density, provided that none of the assumptions of distance sampling theory are violated.
Rigorous home range estimation with movement data: a new autocorrelated kernel density estimator.
Fleming, C H; Fagan, W F; Mueller, T; Olson, K A; Leimgruber, P; Calabrese, J M
2015-05-01
Quantifying animals' home ranges is a key problem in ecology and has important conservation and wildlife management applications. Kernel density estimation (KDE) is a workhorse technique for range delineation problems that is both statistically efficient and nonparametric. KDE assumes that the data are independent and identically distributed (IID). However, animal tracking data, which are routinely used as inputs to KDEs, are inherently autocorrelated and violate this key assumption. As we demonstrate, using realistically autocorrelated data in conventional KDEs results in grossly underestimated home ranges. We further show that the performance of conventional KDEs actually degrades as data quality improves, because autocorrelation strength increases as movement paths become more finely resolved. To remedy these flaws with the traditional KDE method, we derive an autocorrelated KDE (AKDE) from first principles to use autocorrelated data, making it perfectly suited for movement data sets. We illustrate the vastly improved performance of AKDE using analytical arguments, relocation data from Mongolian gazelles, and simulations based upon the gazelle's observed movement process. By yielding better minimum area estimates for threatened wildlife populations, we believe that future widespread use of AKDE will have significant impact on ecology and conservation biology. PMID:26236833
Estimation of current density distribution under electrodes for external defibrillation
Krasteva, Vessela Tz; Papazov, Sava P
2002-01-01
Background Transthoracic defibrillation is the most common life-saving technique for the restoration of the heart rhythm of cardiac arrest victims. The procedure requires adequate application of large electrodes on the patient chest, to ensure low-resistance electrical contact. The current density distribution under the electrodes is non-uniform, leading to muscle contraction and pain, or risks of burning. The recent introduction of automatic external defibrillators and even wearable defibrillators, presents new demanding requirements for the structure of electrodes. Method and Results Using the pseudo-elliptic differential equation of Laplace type with appropriate boundary conditions and applying finite element method modeling, electrodes of various shapes and structure were studied. The non-uniformity of the current density distribution was shown to be moderately improved by adding a low resistivity layer between the metal and tissue and by a ring around the electrode perimeter. The inclusion of openings in long-term wearable electrodes additionally disturbs the current density profile. However, a number of small-size perforations may result in acceptable current density distribution. Conclusion The current density distribution non-uniformity of circular electrodes is about 30% less than that of square-shaped electrodes. The use of an interface layer of intermediate resistivity, comparable to that of the underlying tissues, and a high-resistivity perimeter ring, can further improve the distribution. The inclusion of skin aeration openings disturbs the current paths, but an appropriate selection of number and size provides a reasonable compromise. PMID:12537593
Bayesian Nonparametric Functional Data Analysis Through Density Estimation.
Rodríguez, Abel; Dunson, David B; Gelfand, Alan E
2009-01-01
In many modern experimental settings, observations are obtained in the form of functions, and interest focuses on inferences on a collection of such functions. We propose a hierarchical model that allows us to simultaneously estimate multiple curves nonparametrically by using dependent Dirichlet Process mixtures of Gaussians to characterize the joint distribution of predictors and outcomes. Function estimates are then induced through the conditional distribution of the outcome given the predictors. The resulting approach allows for flexible estimation and clustering, while borrowing information across curves. We also show that the function estimates we obtain are consistent on the space of integrable functions. As an illustration, we consider an application to the analysis of Conductivity and Temperature at Depth data in the north Atlantic. PMID:19262739
Chen, Rongda; Wang, Ze
2013-01-01
Recovery rate is essential to the estimation of a portfolio's loss and economic capital. Neglecting the randomness of the recovery-rate distribution may underestimate the risk. This study introduces two distribution models, Beta distribution estimation and kernel density estimation, to simulate the distribution of recovery rates of corporate loans and bonds. Models based on the Beta distribution are in common use, for example in CreditMetrics by J.P. Morgan, Portfolio Manager by KMV, and LossCalc by Moody's. However, the Beta distribution has a serious shortcoming: it cannot fit bimodal or multimodal distributions, such as the recovery rates of corporate loans and bonds revealed by Moody's new data. To overcome this flaw, kernel density estimation is introduced, and the simulation results from the histogram, Beta distribution estimation, and kernel density estimation are compared, leading to the conclusion that the Gaussian kernel density estimate imitates the distribution of bimodal or multimodal recovery-rate samples more faithfully. Finally, a chi-square test shows that the Gaussian kernel density estimate fits the curve of recovery rates of loans and bonds. Using kernel density estimation to delineate the bimodal recovery rates of bonds is therefore the better choice for credit risk management. PMID:23874558
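The contrast the study draws can be reproduced in a few lines: a moment-fitted Beta density has at most one interior mode, while a plain Gaussian KDE recovers both modes of a bimodal recovery-rate sample. A sketch with synthetic data (the mixture below is illustrative, not Moody's data):

```python
import numpy as np

rng = np.random.default_rng(1)

# Bimodal "recovery rate" sample on [0, 1]: a mixture of low and high recoveries.
data = np.concatenate([rng.beta(8, 40, 500), rng.beta(40, 8, 500)])

# Method-of-moments Beta fit (the unimodal parametric alternative).
m, v = data.mean(), data.var()
common = m * (1 - m) / v - 1
a_hat, b_hat = m * common, (1 - m) * common

# Plain Gaussian KDE with Silverman's rule-of-thumb bandwidth.
h = 1.06 * data.std() * len(data) ** (-1 / 5)
grid = np.linspace(0.01, 0.99, 197)
kde = np.exp(-0.5 * ((grid[:, None] - data[None, :]) / h) ** 2).sum(axis=1)
kde /= len(data) * h * np.sqrt(2 * np.pi)

# Count interior local maxima: the KDE recovers both modes, while a single
# moment-fitted Beta density cannot place two interior modes.
modes = np.sum((kde[1:-1] > kde[:-2]) & (kde[1:-1] > kde[2:]))
print(modes)  # 2
```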
Wavelet bases on the interval with short support and vanishing moments
NASA Astrophysics Data System (ADS)
Bímová, Daniela; Černá, Dana; Finěk, Václav
2012-11-01
Jia and Zhao have recently proposed a construction of a cubic spline wavelet basis on the interval which satisfies homogeneous Dirichlet boundary conditions of the second order. They used the basis for solving fourth-order problems and showed that the Galerkin method with this basis has excellent convergence. The stiffness matrices for the biharmonic equation defined on a unit square have very small and uniformly bounded condition numbers. In our contribution, we design wavelet bases with the same scaling functions but different wavelets. We show that our basis has the same quantitative properties as the wavelet basis constructed by Jia and Zhao, and additionally the wavelets have vanishing moments. This makes the basis suitable for adaptive wavelet methods and non-adaptive sparse grid methods. Furthermore, we improve the condition numbers of the stiffness matrices by including lower levels.
An Investigation of Wavelet Bases for Grid-Based Multi-Scale Simulations Final Report
Baty, R.S.; Burns, S.P.; Christon, M.A.; Roach, D.W.; Trucano, T.G.; Voth, T.E.; Weatherby, J.R.; Womble, D.E.
1998-11-01
The research summarized in this report is the result of a two-year effort that has focused on evaluating the viability of wavelet bases for the solution of partial differential equations. The primary objective for this work has been to establish a foundation for hierarchical/wavelet simulation methods based upon numerical performance, computational efficiency, and the ability to exploit the hierarchical adaptive nature of wavelets. This work has demonstrated that hierarchical bases can be effective for problems with a dominant elliptic character. However, the strict enforcement of orthogonality was found to be less desirable than weaker semi-orthogonality or bi-orthogonality for solving partial differential equations. This conclusion has led to the development of a multi-scale linear finite element based on a hierarchical change of basis. The reproducing kernel particle method has been found to yield extremely accurate phase characteristics for hyperbolic problems while providing a convenient framework for multi-scale analyses.
A wavelet-based multiresolution approach to large-eddy simulation of turbulence
NASA Astrophysics Data System (ADS)
de la Llave Plata, M.; Cant, R. S.
2010-10-01
The wavelet-based multiresolution analysis (MRA) technique is used to develop a modelling approach to large-eddy simulation (LES) and its associated subgrid closure problem. The LES equations are derived by projecting the Navier-Stokes (N-S) equations onto a hierarchy of wavelet spaces. A numerical framework is then developed for the solution of the large and the small-scale equations. This is done in one dimension, for the Burgers equation, and in three dimensions, for the N-S problem. The proposed methodology is assessed in a priori tests on an atmospheric turbulent time series and on data from direct numerical simulation. A posteriori (dynamic) tests are also carried out for decaying and force-driven Burgers turbulence.
Wavelet-based adaptive numerical simulation of unsteady 3D flow around a bluff body
NASA Astrophysics Data System (ADS)
de Stefano, Giuliano; Vasilyev, Oleg
2012-11-01
The unsteady three-dimensional flow past a two-dimensional bluff body is numerically simulated using a wavelet-based method. The body is modeled by exploiting the Brinkman volume-penalization method, which results in modifying the governing equations with the addition of an appropriate forcing term inside the spatial region occupied by the obstacle. The volume-penalized incompressible Navier-Stokes equations are numerically solved by means of the adaptive wavelet collocation method, where the non-uniform spatial grid is dynamically adapted to the flow evolution. The combined approach is successfully applied to the simulation of vortex shedding flow behind a stationary prism with square cross-section. The computation is conducted at transitional Reynolds numbers, where fundamental unstable three-dimensional vortical structures exist, and the unsteady forces arising from fluid-structure interaction are well predicted.
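The penalization idea itself is simple enough to sketch in one dimension: a strong damping term, active only inside a mask that marks the obstacle, drives the solution toward zero there. The toy diffusion solver below is a hedged illustration of that mechanism, not the paper's adaptive wavelet Navier-Stokes solver:

```python
import numpy as np

# Minimal sketch: 1-D diffusion with Brinkman volume penalization. Inside
# the "obstacle" mask chi, the term -chi/eta * u mimics a solid body by
# forcing the field toward zero.
nx, L = 200, 1.0
dx = L / nx
x = np.linspace(0, L, nx)
u = np.sin(np.pi * x)                          # initial field
chi = ((x > 0.4) & (x < 0.6)).astype(float)    # obstacle occupies 0.4 < x < 0.6
eta, nu = 1e-4, 0.01                           # penalization and diffusion parameters
dt = 1e-5                                      # small enough for the stiff penalty (dt < eta)

for _ in range(2000):
    lap = np.zeros_like(u)
    lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    u = u + dt * (nu * lap - chi / eta * u)

# The field is strongly suppressed inside the penalized region.
print(abs(u[nx // 2]) < 1e-6)  # True
```

Smaller eta enforces the no-slip body more strictly, at the price of a stiffer equation — the usual trade-off in volume penalization.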
Daniel, Ebenezer; Anitha, J
2016-04-01
Unsharp masking techniques are a prominent approach to contrast enhancement. The generalized masking formulation uses a static scale value, which limits the achievable contrast gain. In this paper, we propose an Optimum Wavelet Based Masking (OWBM) using an Enhanced Cuckoo Search Algorithm (ECSA) for the contrast improvement of medical images. The ECSA can automatically adjust the ratio of nest rebuilding using genetic operators such as adaptive crossover and mutation. First, the proposed contrast enhancement approach is validated quantitatively using BrainWeb and MIAS database images. Later, the conventional nest rebuilding of cuckoo search optimization is modified using Adaptive Rebuilding of Worst Nests (ARWN). Experimental results are analyzed using various performance metrics, and our OWBM shows improved results compared with other reported literature. PMID:26945462
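The generalized unsharp masking that the paper builds on amounts to out = img + lam * (img - blur), with a fixed gain lam (the "static scale value" criticized above). A minimal sketch of that baseline, not of the proposed wavelet/cuckoo-search variant:

```python
import numpy as np

# Illustrative unsharp masking with a static gain lam:
# out = img + lam * (img - blur).
def unsharp_mask(img, lam=1.5):
    padded = np.pad(img, 1, mode="edge")       # 3x3 box blur, edge replication
    blur = sum(padded[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(3) for j in range(3)) / 9.0
    return np.clip(img + lam * (img - blur), 0.0, 1.0)

img = np.zeros((8, 8))
img[:, 4:] = 0.6            # a vertical edge
out = unsharp_mask(img)

# Contrast across the edge grows: the bright side of the edge gets brighter
# and the dark side darker (clipped at the valid range).
print(out[4, 4] > img[4, 4], out[4, 3] <= img[4, 3])
```

Choosing lam per image (or per wavelet subband, as the paper proposes via ECSA) is what removes the static-gain limitation.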
A wavelet-based watermarking algorithm for ownership verification of digital images.
Wang, Yiwei; Doherty, John F; Van Dyck, Robert E
2002-01-01
Access to multimedia data has become much easier due to the rapid growth of the Internet. While this is usually considered an improvement of everyday life, it also makes unauthorized copying and distributing of multimedia data much easier, therefore presenting a challenge in the field of copyright protection. Digital watermarking, which is inserting copyright information into the data, has been proposed to solve the problem. In this paper, we first discuss the features that a practical digital watermarking system for ownership verification requires. Besides perceptual invisibility and robustness, we claim that the private control of the watermark is also very important. Second, we present a novel wavelet-based watermarking algorithm. Experimental results and analysis are then given to demonstrate that the proposed algorithm is effective and can be used in a practical system. PMID:18244614
Wavelet-based Poisson Solver for use in Particle-In-Cell Simulations
Terzic, B.; Mihalcea, D.; Bohn, C.L.; Pogorelov, I.V.
2005-05-13
We report on a successful implementation of a wavelet-based Poisson solver for use in 3D particle-in-cell (PIC) simulations. One new aspect of our algorithm is its ability to treat general (inhomogeneous) Dirichlet boundary conditions (BCs). The solver harnesses advantages afforded by the wavelet formulation, such as sparsity of operators and data sets, existence of effective preconditioners, and the ability to simultaneously remove numerical noise and further compress relevant data sets. Having tested our method as a stand-alone solver on two model problems, we merged it into IMPACT-T to obtain a fully functional serial PIC code. We present and discuss preliminary results of applying the new code to the modeling of the Fermilab/NICADD and AES/JLab photoinjectors.
Corrosion in Reinforced Concrete Panels: Wireless Monitoring and Wavelet-Based Analysis
Qiao, Guofu; Sun, Guodong; Hong, Yi; Liu, Tiejun; Guan, Xinchun
2014-01-01
To realize efficient data capture and accurate analysis of pitting corrosion in reinforced concrete (RC) structures, we first design and implement a wireless sensor network (WSN) to monitor the pitting corrosion of RC panels, and then propose a wavelet-based algorithm to analyze the corrosion state from the corrosion data collected by the wireless platform. We design a novel pitting corrosion-detecting mote and a communication protocol such that the monitoring platform can sample the electrochemical emission signals of the corrosion process at a configured period and send these signals to a central computer for analysis. The proposed algorithm, based on wavelet-domain analysis, returns the energy distribution of the electrochemical emission data, from which close observation and understanding can be further achieved. We also conducted test-bed experiments based on RC panels. The results verify the feasibility and efficiency of the proposed WSN system and algorithms. PMID:24556673
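The "energy distribution" step above is a standard wavelet-domain computation: decompose the signal and sum the squared detail coefficients per level. The sketch below uses a plain Haar transform for concreteness — the paper's wavelet choice is not specified here:

```python
import numpy as np

# Decompose a signal with a discrete Haar wavelet transform and report
# the energy per decomposition level (illustrative stand-in for the
# electrochemical emission analysis described in the abstract).
def haar_energy_distribution(signal, levels):
    s = np.asarray(signal, dtype=float)
    energies = []
    for _ in range(levels):
        approx = (s[0::2] + s[1::2]) / np.sqrt(2)
        detail = (s[0::2] - s[1::2]) / np.sqrt(2)
        energies.append(float(np.sum(detail**2)))  # energy of this level's details
        s = approx
    energies.append(float(np.sum(s**2)))           # remaining approximation energy
    return energies

t = np.arange(256)
signal = np.sin(2 * np.pi * t / 8)                 # a mid-frequency oscillation
e = haar_energy_distribution(signal, 4)

# The orthonormal Haar transform conserves total energy (Parseval),
# so the level energies sum to the signal energy.
print(abs(sum(e) - float(np.sum(signal**2))) < 1e-9)  # True
```

How the energy concentrates across levels is what distinguishes, e.g., pitting transients from background noise.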
Conjugate Event Study of Geomagnetic ULF Pulsations with Wavelet-based Indices
NASA Astrophysics Data System (ADS)
Xu, Z.; Clauer, C. R.; Kim, H.; Weimer, D. R.; Cai, X.
2013-12-01
The interactions between the solar wind and the geomagnetic field produce a variety of space weather phenomena, which can impact the advanced technology systems of modern society, including, for example, power, communication, and navigation systems. One such phenomenon is geomagnetic ULF pulsations observed by ground-based or in-situ satellite measurements. Here, we describe a wavelet-based index and apply it to study geomagnetic ULF pulsations observed by the Antarctic and Greenland magnetometer arrays. The wavelet indices computed from these data provide spectral, correlation, and magnitude information about the geomagnetic pulsations. The results show that the geomagnetic field at conjugate locations responds differently according to the frequency of the pulsations. The index is effective for identifying pulsation events and measures important characteristics of the pulsations. It could be a useful tool for monitoring geomagnetic pulsations.
Wavelet-based built-in damage detection and identification for composites
NASA Astrophysics Data System (ADS)
Yan, G.; Zhou, Lily L.; Yuan, F. G.
2005-05-01
In this paper, a wavelet-based built-in damage detection and identification algorithm for carbon fiber reinforced polymer (CFRP) laminates is proposed. Lamb waves propagating in laminates are first modeled analytically using higher-order plate theory and compared with experimental results in terms of group velocity. Distributed piezoelectric transducers are used to generate and monitor the fundamental ultrasonic Lamb waves in the laminates at narrowband frequencies. A signal processing scheme based on wavelet analysis is applied to the sensor signals to extract the group velocity of the wave propagating in the laminates. Combined with the theoretically computed wave velocity, a genetic algorithm (GA) optimization technique is employed to identify the location and size of the damage. The applicability of the proposed method to detecting and sizing damage is demonstrated by experimental studies on a composite plate with simulated delamination damage.
Rajaraman, R; Hariharan, G
2014-07-01
In this paper, we have applied an efficient wavelet-based approximation method for solving the Fisher's type and the fractional Fisher's type equations arising in the biological sciences. To the best of our knowledge, no rigorous wavelet solution has previously been reported for the Fisher's and fractional Fisher's equations. The highest derivative in the differential equation is expanded into a Legendre series; this approximation is integrated while the boundary conditions are applied using integration constants. With the help of Legendre wavelet operational matrices, the Fisher's equation and the fractional Fisher's equation are converted into a system of algebraic equations. Block-pulse functions are used to investigate the Legendre wavelet coefficient vectors of the nonlinear terms. The convergence of the proposed methods is proved. Finally, we give some numerical examples to demonstrate the validity and applicability of the method. PMID:24908255
Estimating η/s of QCD matter at high baryon densities
NASA Astrophysics Data System (ADS)
Karpenko, Iu.; Bleicher, M.; Huovinen, P.; Petersen, H.
2016-01-01
We report on the application of a cascade + viscous hydro + cascade model for heavy ion collisions in the RHIC Beam Energy Scan range, √sNN = 6.3–200 GeV. By constraining model parameters to reproduce the data we find that the effective (average) value of the shear viscosity over entropy density ratio η/s decreases from 0.2 to 0.08 as the collision energy grows from √sNN ≈ 7 to 39 GeV.
Dose-volume histogram prediction using density estimation.
Skarpman Munter, Johanna; Sjölund, Jens
2015-09-01
Knowledge of what dose-volume histograms can be expected for a previously unseen patient could increase consistency and quality in radiotherapy treatment planning. We propose a machine learning method that uses previous treatment plans to predict such dose-volume histograms. The key to the approach is the framing of dose-volume histograms in a probabilistic setting. The training consists of estimating, from the patients in the training set, the joint probability distribution of some predictive features and the dose. The joint distribution immediately provides an estimate of the conditional probability of the dose given the values of the predictive features. The prediction consists of estimating, from the new patient, the distribution of the predictive features and marginalizing the conditional probability from the training over this. Integrating the resulting probability distribution for the dose yields an estimate of the dose-volume histogram. To illustrate how the proposed method relates to previously proposed methods, we use the signed distance to the target boundary as a single predictive feature. As a proof-of-concept, we predicted dose-volume histograms for the brainstems of 22 acoustic schwannoma patients treated with stereotactic radiosurgery, and for the lungs of 9 lung cancer patients treated with stereotactic body radiation therapy. Comparing with two previous attempts at dose-volume histogram prediction we find that, given the same input data, the predictions are similar. In summary, we propose a method for dose-volume histogram prediction that exploits the intrinsic probabilistic properties of dose-volume histograms. We argue that the proposed method makes up for some deficiencies in previously proposed methods, thereby potentially increasing ease of use, flexibility and ability to perform well with small amounts of training data. PMID:26305670
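The train/marginalize/integrate pipeline described above can be sketched with histograms as the density estimator. Everything below is a toy stand-in under stated assumptions: a synthetic sigmoid dose fall-off plays the role of real plan data, and the signed distance r to the target boundary is the single predictive feature, as in the paper's illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Train: estimate the joint density p(r, dose) by a 2-D histogram, then
# normalize rows to get the conditional p(dose | r).
def train_conditional(r, dose, r_edges, d_edges):
    joint, _, _ = np.histogram2d(r, dose, bins=[r_edges, d_edges])
    row_sums = joint.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0
    return joint / row_sums            # p(dose bin | r bin)

# Synthetic training set: dose falls off with signed distance r, plus noise.
r_train = rng.uniform(-2, 2, 5000)
dose_train = 1 / (1 + np.exp(3 * r_train)) + 0.05 * rng.normal(size=5000)
r_edges = np.linspace(-2, 2, 21)
d_edges = np.linspace(-0.2, 1.2, 29)
cond = train_conditional(r_train, dose_train, r_edges, d_edges)

# Predict: marginalize the conditional over a new patient's distribution of r
# (here: voxels far outside the target), then integrate to a DVH.
r_new = rng.uniform(0.5, 2, 1000)
hist_r, _ = np.histogram(r_new, bins=r_edges)
w = hist_r / hist_r.sum()
p_dose = w @ cond                       # marginalized dose distribution
dvh = 1 - np.cumsum(p_dose)             # fraction of volume above each dose level

d_mid = (d_edges[:-1] + d_edges[1:]) / 2
mean_dose = float(p_dose @ d_mid)
print(mean_dose < 0.3)  # low predicted dose far from the target
```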
Estimated global nitrogen deposition using NO2 column density
Lu, Xuehe; Jiang, Hong; Zhang, Xiuying; Liu, Jinxun; Zhang, Zhen; Jin, Jiaxin; Wang, Ying; Xu, Jianhui; Cheng, Miaomiao
2013-01-01
Global nitrogen deposition has increased over the past 100 years. Monitoring and simulation studies of nitrogen deposition have evaluated nitrogen deposition at both the global and regional scale. With the development of remote-sensing instruments, tropospheric NO2 column density retrieved from the Global Ozone Monitoring Experiment (GOME) and Scanning Imaging Absorption Spectrometer for Atmospheric Chartography (SCIAMACHY) sensors now provides us with a new opportunity to understand changes in reactive nitrogen in the atmosphere. The concentration of NO2 in the atmosphere has a significant effect on atmospheric nitrogen deposition. Following the general nitrogen deposition calculation method, we use principal component regression to evaluate global nitrogen deposition based on global NO2 column density and meteorological data. Regarding the accuracy of the simulation, about 70% of the land area of the Earth passed a significance test of regression. In addition, NO2 column density has a significant influence on the regression results over 44% of global land. The simulated results show that global average nitrogen deposition was 0.34 g m−2 yr−1 from 1996 to 2009 and is increasing at about 1% per year. Consistent with previous research findings, China, Europe, and the USA are the three hotspots of nitrogen deposition. In this study, Southern Asia was found to be another hotspot of nitrogen deposition (about 1.58 g m−2 yr−1 and maintaining a high growth rate). As nitrogen deposition increases, the number of regions threatened by high nitrogen deposition is also increasing. With N emissions continuing to increase in the future, the area whose ecosystems are affected by high levels of nitrogen deposition will increase.
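Principal component regression, the statistical tool named above, regresses the response on the leading principal components of the predictors instead of on the raw (possibly collinear) predictors. A self-contained sketch with synthetic predictors standing in for NO2 column density and meteorological fields:

```python
import numpy as np

rng = np.random.default_rng(3)

# Principal component regression: project standardized predictors onto
# their top-k principal directions, then run ordinary least squares on
# the component scores.
def pcr_fit_predict(X, y, X_new, k):
    mu, sd = X.mean(axis=0), X.std(axis=0)
    Z = (X - mu) / sd
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)   # principal directions
    T = Z @ Vt[:k].T                                   # scores on first k components
    A = np.column_stack([np.ones(len(T)), T])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    T_new = ((X_new - mu) / sd) @ Vt[:k].T
    return np.column_stack([np.ones(len(T_new)), T_new]) @ beta

# Synthetic data with a near-collinear predictor pair (where ordinary
# regression becomes unstable but PCR stays well-conditioned).
X = rng.normal(size=(300, 5))
X[:, 1] = X[:, 0] + 0.01 * rng.normal(size=300)
y = 2.0 * X[:, 0] + 0.1 * rng.normal(size=300)

pred = pcr_fit_predict(X[:200], y[:200], X[200:], k=3)
rmse = float(np.sqrt(np.mean((pred - y[200:]) ** 2)))
print(rmse < 0.5)  # the collinear signal is recovered
```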
Links between PPCA and subspace methods for complete Gaussian density estimation.
Wang, Chong; Wang, Wenyuan
2006-05-01
High-dimensional density estimation is a fundamental problem in pattern recognition and machine learning areas. In this letter, we show that, for complete high-dimensional Gaussian density estimation, two widely used methods, probabilistic principal component analysis and a typical subspace method using eigenspace decomposition, actually give the same results. Additionally, we present a unified view from the aspect of robust estimation of the covariance matrix. PMID:16722180
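The equivalence the letter discusses rests on the closed-form PPCA solution (Tipping & Bishop): with W built from the top-q eigenvectors of the sample covariance and the noise variance set to the mean of the discarded eigenvalues, the model covariance W Wᵀ + σ²I reproduces the top-q sample eigenvalues exactly. A short numerical check of that fact (a sketch, not the letter's full argument):

```python
import numpy as np

rng = np.random.default_rng(4)

# Sample covariance of some correlated 6-D data.
X = rng.normal(size=(500, 6)) @ rng.normal(size=(6, 6))
S = np.cov(X, rowvar=False)
vals, vecs = np.linalg.eigh(S)
vals, vecs = vals[::-1], vecs[:, ::-1]     # descending eigenvalues

# Closed-form PPCA: sigma^2 = mean of discarded eigenvalues,
# W = U_q (Lambda_q - sigma^2 I)^(1/2).
q = 3
s2 = vals[q:].mean()
W = vecs[:, :q] * np.sqrt(vals[:q] - s2)
C = W @ W.T + s2 * np.eye(6)               # PPCA model covariance

model_vals = np.linalg.eigh(C)[0][::-1]
print(np.allclose(model_vals[:q], vals[:q]))  # True: top-q spectrum preserved
```

The remaining eigenvalues of C are all flattened to σ², which is exactly how the eigenspace subspace method treats the complementary subspace.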
Probabilistic Analysis and Density Parameter Estimation Within Nessus
NASA Technical Reports Server (NTRS)
Godines, Cody R.; Manteufel, Randall D.; Chamis, Christos C. (Technical Monitor)
2002-01-01
This NASA educational grant has the goal of promoting probabilistic analysis methods to undergraduate and graduate UTSA engineering students. Two undergraduate-level and one graduate-level course were offered at UTSA providing a large number of students exposure to and experience in probabilistic techniques. The grant provided two research engineers from Southwest Research Institute the opportunity to teach these courses at UTSA, thereby exposing a large number of students to practical applications of probabilistic methods and state-of-the-art computational methods. In classroom activities, students were introduced to the NESSUS computer program, which embodies many algorithms in probabilistic simulation and reliability analysis. Because the NESSUS program is used at UTSA in both student research projects and selected courses, a student version of a NESSUS manual has been revised and improved, with additional example problems being added to expand the scope of the example application problems. This report documents two research accomplishments in the integration of a new sampling algorithm into NESSUS and in the testing of the new algorithm. The new Latin Hypercube Sampling (LHS) subroutines use the latest NESSUS input file format and specific files for writing output. The LHS subroutines are called out early in the program so that no unnecessary calculations are performed. Proper correlation between sets of multidimensional coordinates can be obtained by using NESSUS' LHS capabilities. Finally, two types of correlation are written to the appropriate output file. The program enhancement was tested by repeatedly estimating the mean, standard deviation, and 99th percentile of four different responses using Monte Carlo (MC) and LHS. These test cases, put forth by the Society of Automotive Engineers, are used to compare probabilistic methods. 
For all test cases, it is shown that LHS has a lower estimation error than MC when used to estimate the mean, standard deviation, and 99th percentile of the four responses at the 50 percent confidence level, using the same number of response evaluations for each method. In addition, LHS requires fewer calculations than MC in order to be 99.7 percent confident that a single mean, standard deviation, or 99th percentile estimate will be within at most 3 percent of the true value of each parameter. Again, this is shown for all of the test cases studied. For that reason it can be said that NESSUS is an important reliability tool that offers a variety of sound probabilistic methods a user can employ; furthermore, the newest LHS module is a valuable enhancement of the program.
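Latin hypercube sampling owes its lower estimation error to stratification: each of the n equal-probability bins of every input variable receives exactly one sample. A minimal sampler illustrating that property (an illustrative sketch; NESSUS's actual implementation is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(5)

# Minimal Latin hypercube sampler on the unit hypercube: one jittered
# sample per equal-probability stratum in each dimension, with the
# strata randomly paired across dimensions.
def latin_hypercube(n, dims, rng):
    u = (rng.random((n, dims)) + np.arange(n)[:, None]) / n  # one point per stratum
    for d in range(dims):
        u[:, d] = rng.permutation(u[:, d])                   # decouple dimensions
    return u

n = 100
lhs = latin_hypercube(n, 2, rng)

# Stratification check: every one of the n bins [i/n, (i+1)/n) contains
# exactly one LHS point in each dimension -- plain Monte Carlo gives no
# such guarantee.
counts = [np.bincount((lhs[:, d] * n).astype(int), minlength=n) for d in range(2)]
print(all((c == 1).all() for c in counts))  # True
```

Because every marginal stratum is covered, sample statistics such as the mean converge with lower variance than under independent Monte Carlo draws of the same size.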
RADIATION PRESSURE DETECTION AND DENSITY ESTIMATE FOR 2011 MD
Micheli, Marco; Tholen, David J.; Elliott, Garrett T. E-mail: tholen@ifa.hawaii.edu
2014-06-10
We present our astrometric observations of the small near-Earth object 2011 MD (H ∼ 28.0), obtained after its very close fly-by to Earth in 2011 June. Our set of observations extends the observational arc to 73 days, and, together with the published astrometry obtained around the Earth fly-by, allows a direct detection of the effect of radiation pressure on the object, with a confidence of 5σ. The detection can be used to put constraints on the density of the object, pointing to either an unexpectedly low value of ρ = (640 ± 330) kg m⁻³ (68% confidence interval) if we assume a typical probability distribution for the unknown albedo, or to an unusually high reflectivity of its surface. This result may have important implications both in terms of impact hazard from small objects and in light of a possible retrieval of this target.
A comparison of 2 techniques for estimating deer density
Storm, G.L.; Cottam, D.F.; Yahner, R.H.; Nichols, J.D.
1977-01-01
We applied mark-resight and area-conversion methods to estimate deer abundance at a 2,862-ha area in and surrounding the Gettysburg National Military Park and Eisenhower National Historic Site during 1987-1991. One observer in each of 11 compartments counted marked and unmarked deer during 65-75 minutes at dusk during 3 counts in each of April and November. Use of radio-collars and vinyl collars provided a complete inventory of marked deer in the population prior to the counts. We sighted 54% of the marked deer during April 1987 and 1988, and 43% of the marked deer during November 1987 and 1988. Mean number of deer counted increased from 427 in April 1987 to 582 in April 1991, and increased from 467 in November 1987 to 662 in November 1990. Herd size during April, based on the mark-resight method, increased from approximately 700-1,400 from 1987-1991, whereas the estimates for November indicated an increase from 983 for 1987 to 1,592 for 1990. Given the large proportion of open area and the extensive road system throughout the study area, we concluded that the sighting probability for marked and unmarked deer was fairly similar. We believe that the mark-resight method was better suited to our study than the area-conversion method because deer were not evenly distributed between areas suitable and unsuitable for sighting within open and forested areas. The assumption of equal distribution is required by the area-conversion method. Deer marked for the mark-resight method also helped reduce double counting during the dusk surveys.
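The mark-resight arithmetic behind the herd-size estimates is Lincoln-Petersen-style: the fraction of marked animals that is resighted estimates the sighting probability, and dividing the total count by it estimates the population. The numbers below are taken from the abstract but combined purely for illustration, not to reproduce the study's estimates:

```python
# Toy mark-resight estimate: if a fraction p_hat of the marked animals is
# resighted, the total herd size is estimated as (total counted) / p_hat.
marked_total = 100        # assumed number of collared deer in the population
marked_seen = 54          # 54% of marked deer sighted (April 1987-88, per abstract)
total_seen = 427          # mean April 1987 count, per abstract

p_hat = marked_seen / marked_total          # estimated sighting probability
n_hat = total_seen / p_hat                  # estimated herd size

print(round(n_hat))  # 791
```

The result falls within the roughly 700-1,400 April range reported above; the real analysis additionally accounts for count variance and the complete inventory of collared deer.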
Estimating insect flight densities from attractive trap catches and flight height distributions.
Byers, John A
2012-05-01
Methods and equations have not been developed previously to estimate insect flight densities, a key factor in decisions regarding trap and lure deployment in programs of monitoring, mass trapping, and mating disruption with semiochemicals. An equation to estimate densities of flying insects per hectare is presented that uses the standard deviation (SD) of the vertical flight distribution, trapping time, the trap's spherical effective radius (ER), catch at the mean flight height (as estimated from a best-fitting normal distribution with SD), and an estimated average flight speed. Data from previous reports were used to estimate flight densities with the equations. The same equations can use traps with pheromone lures or attractive colors with a measured effective attraction radius (EAR) instead of the ER. In practice, EAR is more useful than ER for flight density calculations since attractive traps catch higher numbers of insects and thus can measure lower populations more readily. Computer simulations in three dimensions with varying numbers of insects (density) and varying EAR were used to validate the equations for density estimates of insects in the field. Few studies have provided data to obtain EAR, SD, speed, and trapping time to estimate flight densities per hectare. However, the necessary parameters can be measured more precisely in future studies. PMID:22527056
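The flux logic described in the abstract can be sketched numerically, though the paper's exact equation is not reproduced here: insects at volumetric density D_v flying at speed v for time T are intercepted in proportion to the disk presented by a trap of effective radius ER; inverting gives D_v from the catch at the mean flight height, and integrating the normal vertical flight profile with standard deviation SD converts that into insects per unit ground area. All parameter values below are assumed placeholders:

```python
import math

# Hedged sketch of a flux-based flight-density estimate (illustrative
# form, not the paper's published equation).
ER = 0.5       # m, spherical effective (attraction) radius -- assumed
SD = 2.0       # m, SD of vertical flight distribution -- assumed
v = 1.0        # m/s, average flight speed -- assumed
T = 3600.0     # s, trapping time -- assumed
catch = 25     # insects caught at the mean flight height -- assumed

D_v = catch / (v * T * math.pi * ER**2)       # insects per m^3 at mean flight height
D_area = D_v * SD * math.sqrt(2 * math.pi)    # integrate the normal vertical profile
per_hectare = D_area * 10_000                 # insects per hectare

print(round(per_hectare))  # 443
```

Replacing ER with a lure's measured effective attraction radius (EAR) follows the abstract's recommendation, since attractive traps catch more insects and can resolve lower densities.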
A simple alternative to line transects of nests for estimating orangutan densities.
van Schaik, Carel P; Wich, Serge A; Utami, Sri Suci; Odom, Kisar
2005-10-01
We conducted a validation of the line transect technique to estimate densities of orangutan (Pongo pygmaeus) nests in a Bornean swamp forest, and compared these results with density estimates based on nest counts in plots and on female home ranges. First, we examined the accuracy of the line transect method. We found that the densities based on a pass in both directions of two experienced pairs of observers was 27% below a combined sample based on transect walks by eight pairs of observers, suggesting that regular line-transect densities may seriously underestimate true densities. Second, we compared these results with those obtained by nest counts in 0.2-ha plots. This method produced an estimated 15.24 nests/ha, as compared to 10.0 and 10.9, respectively, by two experienced pairs of observers who walked a line transect in both directions. Third, we estimated orangutan densities based on female home range size and overlap and the proportion of females in the population, which produced a density of 4.25-4.5 individuals/km². Converting nest densities into orangutan densities, using locally estimated parameters for nest production rate and proportion of nest builders in the population, we found that density estimates based on the line transect results of the most experienced pairs on a double pass were 2.82 and 3.08 orangutans/km², based on the combined line transect data are 4.04, and based on plot counts are 4.30. In this swamp forest, plot counts therefore give more accurate estimates than do line transects. We recommend that this new method be evaluated in other forest types as well. PMID:15983724
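The nest-to-orangutan conversion mentioned above follows the standard formula: orangutan density = nest density / (p × r × t), where p is the proportion of nest-building individuals, r the nest production rate (nests per individual per day), and t the mean nest decay time (days). The parameter values in this sketch are illustrative placeholders, not the study's locally estimated values:

```python
# Hedged sketch of the standard nest-count conversion:
# orangutan density = nest density / (p * r * t).
nests_per_ha = 15.24        # plot-count nest density from the abstract

p = 0.9                     # proportion of nest builders -- assumed
r = 1.1                     # nests / individual / day -- assumed
t = 350.0                   # nest decay time in days -- assumed

nests_per_km2 = nests_per_ha * 100
orang_per_km2 = nests_per_km2 / (p * r * t)
print(round(orang_per_km2, 2))  # 4.4
```

With these placeholder parameters the plot-count density lands near the abstract's 4.30 orangutans/km²; the sensitivity of the estimate to p, r, and t is exactly why the authors stress locally estimated parameters.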
Niegowski, Maciej; Zivanovic, Miroslav
2016-03-01
We present a novel approach aimed at removing electrocardiogram (ECG) perturbation from single-channel surface electromyogram (EMG) recordings by means of unsupervised learning of wavelet-based intensity images. The general idea is to combine the suitability of certain wavelet decomposition bases which provide sparse electrocardiogram time-frequency representations, with the capacity of non-negative matrix factorization (NMF) for extracting patterns from images. In order to overcome convergence problems which often arise in NMF-related applications, we design a novel robust initialization strategy which ensures proper signal decomposition in a wide range of ECG contamination levels. Moreover, the method can be readily used because no a priori knowledge or parameter adjustment is needed. The proposed method was evaluated on real surface EMG signals against two state-of-the-art unsupervised learning algorithms and a singular spectrum analysis based method. The results, expressed in terms of high-to-low energy ratio, normalized median frequency, spectral power difference and normalized average rectified value, suggest that the proposed method enables better ECG-EMG separation quality than the reference methods. PMID:26774422
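The pattern-extraction engine named above, non-negative matrix factorization, can be sketched with the classic multiplicative updates of Lee and Seung. This is a generic NMF on a synthetic non-negative matrix, not the paper's wavelet intensity images or its robust initialization strategy:

```python
import numpy as np

rng = np.random.default_rng(6)

# Minimal NMF by multiplicative updates: factor a non-negative matrix V
# into non-negative W (patterns) and H (activations), V ~ W @ H.
def nmf(V, k, iters=500, eps=1e-9):
    n, m = V.shape
    W = rng.random((n, k)) + eps
    H = rng.random((k, m)) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update activations
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update patterns
    return W, H

# A non-negative "intensity image" built from 2 ground-truth patterns.
W0 = rng.random((40, 2))
H0 = rng.random((2, 30))
V = W0 @ H0

W, H = nmf(V, k=2)
err = float(np.linalg.norm(V - W @ H) / np.linalg.norm(V))
print(err < 0.05)  # the rank-2 structure is recovered
```

In the ECG-removal setting, the separated factors correspond to ECG-like and EMG-like components of the wavelet-domain intensity image; the paper's contribution lies in the wavelet representation and the initialization that makes this factorization reliable.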
Extraction of wave characteristics from wavelet-based spectral finite element formulation
NASA Astrophysics Data System (ADS)
Mitra, Mira; Gopalakrishnan, S.
2006-11-01
In this paper, a spectrally formulated wavelet finite element is developed and is used not only to study wave propagation in 1-D waveguides but also to extract the wave characteristics, namely the spectrum and dispersion relation for these waveguides. The use of a compactly supported Daubechies wavelet basis circumvents several drawbacks of the conventional FFT-based Spectral Finite Element Method (FSFEM) due to the required assumption of periodicity, particularly for time domain analysis. In this work, a study is done to use the formulated Wavelet-based Spectral Finite Element (WSFE) directly for such frequency domain analysis. This study shows that in the WSFE formulation, a constraint on the time sampling rate is placed to avoid spurious dispersion being introduced in the analysis. Numerical experiments are performed to study frequency-dependent wave characteristics (dispersion and spectrum relations) in elementary rod, Euler-Bernoulli and Timoshenko beams. The effect of sampling rate on the accuracy of the WSFE solution for both impulse and modulated sinusoidal loading with different frequency content is shown through different examples. In all the above cases, comparisons with FSFEM are provided to highlight the advantages and limitations of WSFE.
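The frequency-dependent wave characteristics mentioned above can be illustrated with the analytical Euler-Bernoulli dispersion relation, ω = k²√(EI/ρA); the beam properties below are assumed values for illustration, not taken from the paper:

```python
import numpy as np

# Euler-Bernoulli beam: omega = k**2 * sqrt(E*I/(rho*A)), so flexural
# waves are dispersive: phase speed omega/k grows like sqrt(omega).
E, I, rho, A = 70e9, 1e-8, 2700.0, 1e-4      # assumed aluminum beam data
c = np.sqrt(E * I / (rho * A))
omega = 2 * np.pi * np.array([1e3, 4e3])     # two angular frequencies (rad/s)
k = np.sqrt(omega / c)                       # dispersion relation solved for k
phase_speed = omega / k                      # = sqrt(omega * c)
group_speed = 2 * phase_speed                # d(omega)/dk = 2 * omega/k here
```

Quadrupling the frequency doubles the phase speed, which is the kind of dispersion curve the WSFE formulation extracts numerically for rods and beams.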
Performance evaluation of wavelet-based face verification on a PDA recorded database
NASA Astrophysics Data System (ADS)
Sellahewa, Harin; Jassim, Sabah A.
2006-05-01
The rise of international terrorism and the rapid increase in fraud and identity theft have added urgency to the task of developing biometric-based person identification as a reliable alternative to conventional authentication methods. Human identification based on face images is a tough challenge in comparison to identification based on fingerprints or iris recognition. Yet, due to its unobtrusive nature, face recognition is the preferred method of identification for security-related applications. The success of such systems will depend on the support of massive infrastructures. Current mobile communication devices (3G smart phones) and PDAs are equipped with a camera which can capture both still images and streaming video clips, and a touch-sensitive display panel. Besides convenience, such devices provide an adequate secure infrastructure for sensitive and financial transactions, by protecting against fraud and repudiation while ensuring accountability. Biometric authentication systems for mobile devices would have obvious advantages in conflict scenarios when communication from beyond enemy lines is essential to save soldier and civilian lives. In areas of conflict or disaster the luxury of fixed infrastructure is not available or is destroyed. In this paper, we present a wavelet-based face verification scheme that has been specifically designed and implemented on a currently available PDA. We report on its performance on the benchmark audio-visual BANCA database and on a newly developed PDA-recorded audio-visual database that includes indoor and outdoor recordings.
Construction of compactly supported biorthogonal wavelet based on Human Visual System
NASA Astrophysics Data System (ADS)
Hu, Haiping; Hou, Weidong; Liu, Hong; Mo, Yu L.
2000-11-01
As an important analysis tool, the wavelet transform has seen great development in image compression coding since Daubechies constructed a family of compactly supported orthogonal wavelets and Mallat presented a fast pyramid algorithm for wavelet decomposition and reconstruction. In order to raise the compression ratio and improve the visual quality of the reconstruction, it becomes very important to find a wavelet basis that fits the human visual system (HVS). The Marr wavelet is known to match the HVS well, but it is not compactly supported, so it is not suitable for the implementation of image compression coding. In this paper, a new method is provided to construct a compactly supported biorthogonal wavelet based on the human visual system: we employ a genetic algorithm to construct a compactly supported biorthogonal wavelet that approximates the modulation transfer function of the HVS. The newly constructed wavelet is applied to image compression coding in our experiments. The experimental results indicate that the visual quality of the reconstruction with the new wavelet is equivalent to that of other compactly supported biorthogonal wavelets at the same bit rate. It has good reconstruction performance, especially for texture image compression coding.
Wavelet Based Method for Congestive Heart Failure Recognition by Three Confirmation Functions.
Daqrouq, K; Dobaie, A
2016-01-01
An investigation of the electrocardiogram (ECG) signals and arrhythmia characterization by wavelet energy is proposed. This study employs a wavelet based feature extraction method for congestive heart failure (CHF) obtained from the percentage energy (PE) of terminal wavelet packet transform (WPT) subsignals. In addition, the average framing percentage energy (AFE) technique is proposed, termed WAFE. A new classification method is introduced by three confirmation functions. The confirmation methods are based on three concepts: percentage root mean square difference error (PRD), logarithmic difference signal ratio (LDSR), and correlation coefficient (CC). The proposed method proved to be a potentially effective discriminator for recognizing this clinical syndrome. ECG signals taken from the MIT-BIH arrhythmia dataset and other databases are utilized to analyze different arrhythmias and normal ECGs. Several known methods were studied for comparison. The best recognition rate was obtained with WAFE. A recognition accuracy of 92.60% was achieved. The receiver operating characteristic curve, a common tool for evaluating diagnostic accuracy, was plotted and indicated that the tests are reliable. The performance of the presented system was investigated in an additive white Gaussian noise (AWGN) environment, where the recognition rate was 81.48% at 5 dB. PMID:26949412
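The percentage-energy features described above can be sketched with a Haar wavelet packet transform in plain numpy; the paper's actual wavelet basis, framing scheme, and databases are not reproduced, so this is only a minimal illustration of terminal-subband percentage energy:

```python
import numpy as np

def haar_step(x):
    # One orthonormal Haar analysis step: (approximation, detail).
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def percentage_energy(x, depth=3):
    """Percentage energy of the terminal subbands of a full Haar
    wavelet packet tree (nodes in natural order)."""
    nodes = [np.asarray(x, dtype=float)]
    for _ in range(depth):
        nodes = [part for n in nodes for part in haar_step(n)]
    e = np.array([np.sum(s ** 2) for s in nodes])
    return 100.0 * e / e.sum()

rng = np.random.default_rng(1)
n = 512
sig = np.sin(2 * np.pi * 5 * np.arange(n) / n) + 0.1 * rng.normal(size=n)
pe = percentage_energy(sig, depth=3)   # 2**3 = 8 terminal subbands
```

Because the Haar steps are orthonormal, the terminal-subband energies sum to the signal energy, so the percentages sum to 100; for this low-frequency test tone almost all of it lands in the all-lowpass node.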
Wavelet-based adaptive LES of turbulent flow around a square-cylinder
NASA Astrophysics Data System (ADS)
de Stefano, Giuliano; Vasilyev, Oleg V.
2013-11-01
The incompressible turbulent flow around a two-dimensional bluff body with square cross-section is simulated by using a wavelet-based adaptive LES method. The presence of the obstacle is modeled with the Brinkman volume-penalization technique, which results in modifying the governing equations with the addition of an appropriate forcing term inside the spatial region occupied by the cylinder. The localized dynamic kinetic-energy-based approach (De Stefano et al., PF 2008) is utilized to model the residual stress term in the wavelet-filtered volume-penalized incompressible Navier-Stokes equations. The filtered momentum and SGS energy equations are numerically solved by means of the adaptive wavelet collocation method, where the time-dependent non-uniform spatial grid is dynamically determined following the flow evolution. The combined volume-penalization/wavelet-collocation approach is successfully applied to the simulation of turbulent vortex shedding flow behind a stationary prism with square cross-section at moderate Reynolds number. The present results are in good agreement with both experimental findings and data from non-adaptive numerical solutions.
Hejč, Jakub; Vítek, Martin; Ronzhina, Marina; Nováková, Marie; Kolářová, Jana
2015-09-01
We present a novel wavelet-based ECG delineation method with robust classification of the P wave and T wave. The work aims at adapting the method to long-term experimental electrograms (EGs) measured on isolated rabbit hearts and at evaluating the effect of global ischemia in experimental EGs on delineation performance. The algorithm was tested on a set of 263 rabbit EGs with established reference points and on human signals using the standard Common Standards for Quantitative Electrocardiography database (CSEDB). On CSEDB, the standard deviation (SD) of measured errors satisfies the given criteria at each point and the results are comparable to those of other published works. In rabbit signals, our QRS detector reached a sensitivity of 99.87% and a positive predictivity of 99.89% despite an overlap of the spectral components of the QRS complex, P wave, and power line noise. The algorithm performs well in suppressing J-point elevation and reached a low overall error in both QRS onset (SD = 2.8 ms) and QRS offset (SD = 4.3 ms) delineation. The T wave offset is detected with acceptable error (SD = 12.9 ms) and a sensitivity of nearly 99%. The variance of the errors during global ischemia remains relatively stable; however, more failures in the detection of T waves and P waves occur. Due to differences in spectral and timing characteristics, the parameters of the rabbit-based algorithm have to be highly adaptable and set more precisely than for human ECG signals to reach acceptable performance. PMID:26577367
A wavelet-based approach to detecting liveness in fingerprint scanners
NASA Astrophysics Data System (ADS)
Abhyankar, Aditya S.; Schuckers, Stephanie C.
2004-08-01
In this work, a method to provide fingerprint vitality authentication, in order to reduce the vulnerability of fingerprint identification systems to spoofing, is introduced. The method aims at detecting 'liveness' in fingerprint scanners by using the physiological phenomenon of perspiration. A wavelet-based approach is used that concentrates on the changing coefficients, exploiting the zoom-in property of wavelets. Multiresolution analysis and wavelet packet analysis are used to extract information from the low-frequency and high-frequency content of the images, respectively. A Daubechies wavelet is designed and implemented to perform the wavelet analysis. A threshold is applied to the first difference of the information in all the sub-bands. The energy content of the changing coefficients is used as a quantified measure to perform the desired classification, as it reflects a perspiration pattern. A data set of approximately 30 live, 30 spoof, and 14 cadaver fingerprint images was divided, with the first half used as training data and the other half as testing data. The proposed algorithm was applied to the training data set and was able to completely separate 'live' fingers from 'not live' fingers, thus providing a method for enhanced security and improved spoof protection.
Gerasimova, Evgeniya; Audit, Benjamin; Roux, Stephane G.; Khalil, André; Gileva, Olga; Argoul, Françoise; Naimark, Oleg; Arneodo, Alain
2014-01-01
Breast cancer is the most common type of cancer among women and despite recent advances in the medical field, there are still some inherent limitations in the currently used screening techniques. The radiological interpretation of screening X-ray mammograms often leads to over-diagnosis and, as a consequence, to unnecessary traumatic and painful biopsies. Here we propose a computer-aided multifractal analysis of dynamic infrared (IR) imaging as an efficient method for identifying women at risk of breast cancer. Using a wavelet-based multi-scale method to analyze the temporal fluctuations of breast skin temperature collected from a panel of patients with diagnosed breast cancer and some female volunteers with healthy breasts, we show that the multifractal complexity of temperature fluctuations observed in healthy breasts is lost in mammary glands with malignant tumors. Besides potential clinical impact, these results open new perspectives in the investigation of physiological changes that may precede anatomical alterations in breast cancer development. PMID:24860510
The Analysis of Surface EMG Signals with the Wavelet-Based Correlation Dimension Method
Zhang, Yanyan; Wang, Jue
2014-01-01
Many attempts have been made to effectively improve prosthetic systems controlled by the classification of surface electromyographic (SEMG) signals. However, the development of methodologies to extract effective features still remains a primary challenge. Previous studies have demonstrated that SEMG signals have nonlinear characteristics. In this study, by combining nonlinear time series analysis and time-frequency domain methods, we proposed the wavelet-based correlation dimension method to extract effective features of SEMG signals. The SEMG signals were first analyzed by the wavelet transform, and the correlation dimension was then calculated to obtain the features of the SEMG signals. Then, these features were used as the input vectors of a Gustafson-Kessel clustering classifier to discriminate four types of forearm movements. Our results showed that there are four separate clusters corresponding to the different forearm movements at the third resolution level, and the resulting classification accuracy was 100% when two channels of SEMG signals were used. This indicates that the proposed approach can provide important insight into the nonlinear characteristics and the time-frequency domain features of SEMG signals and is suitable for classifying different types of forearm movements. Compared with other existing methods, the proposed method exhibited more robustness and higher classification accuracy.
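A minimal Grassberger-Procaccia sketch of the correlation dimension step (without the wavelet preprocessing or the Gustafson-Kessel classifier) might look as follows; embedding parameters and test signals are illustrative:

```python
import numpy as np

def correlation_dimension(ts, m=3, tau=1, rs=None):
    """Grassberger-Procaccia: slope of log C(r) versus log r."""
    n = len(ts) - (m - 1) * tau
    # Delay-embed the series into m-dimensional vectors.
    X = np.stack([ts[i * tau : i * tau + n] for i in range(m)], axis=1)
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    pd = d[np.triu_indices(n, k=1)]               # pairwise distances
    if rs is None:
        rs = np.logspace(-1.5, -0.5, 10)
    cs = np.array([np.mean(pd < r) for r in rs])  # correlation sums C(r)
    keep = cs > 0                                 # avoid log(0) at tiny r
    slope, _ = np.polyfit(np.log(rs[keep]), np.log(cs[keep]), 1)
    return slope

rng = np.random.default_rng(0)
theta = 2 * np.pi * np.arange(1000) / 500.0
d_sine = correlation_dimension(np.sin(theta))              # curve: roughly 1
d_noise = correlation_dimension(rng.uniform(-1, 1, 1000))  # fills space: larger
```

A deterministic signal traces a low-dimensional set in delay space, while noise fills the embedding volume, which is the contrast such features exploit for classification.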
The 3D vector wavelet-based subgrid scale model for LES of nonequilibrium turbulence
NASA Astrophysics Data System (ADS)
Zimin, Valery; Hussain, Fazle
1995-04-01
We have laid the foundation to develop and validate LES using wavelets as a functional basis, with subgrid scale (SGS) modeling based on vector wavelets. Wavelet LES (WLES) consists of subgrid scale model (SSM) equations and space-resolved model (SRM) equations. Using a vector wavelet decomposition of the velocity field, a simple model for locally isotropic turbulence has been derived from the Navier-Stokes equation. This model, which involves no empirical or ad hoc parameters, incorporates nonlocal inter-scale interactions, reveals backscatter, and can be applied to represent small-scale turbulence in LES schemes. Stationary solutions of the model equations produce the Kolmogorov k^(-5/3) inertial spectrum and the k^4 infrared spectrum. We have completed the derivation of the SRM equations based on the helical wave decomposition. We will test the SRM equations using the computational resources in this NAS operational year. A wavelet-based subgrid-scale model (WSSM) will be generalized to account for anisotropic and inhomogeneous turbulence in wall-bounded flows and will employ a nonuniform grid to resolve the near-wall structures.
Non-orthogonal wavelet bases for 1-D and 2-D simulation
NASA Astrophysics Data System (ADS)
Lewalle, Jacques
1999-11-01
A non-orthogonal wavelet expansion of 1-D and 2-D fields is used as the basis for simulation of Burgers equation and two-dimensional incompressible Navier-Stokes flows. The approach relies on several mathematical properties of Hermitian wavelets, namely: 1/ the transformation of viscous diffusion into an invariant translation in wavelet space, 2/ the representation of the pressure term similar to its Fourier representation, 3/ pairwise interactions resulting from the convective terms, and 4/ a natural discretization of the continuous field by multipole expansion around energy maxima in wavelet space (see Bull. Am. Phys. Soc. 43 #9, p. 2002 BK6-7, 1998). In this study, the discrete events (leading terms in an expansion of an arbitrary continuous field) are given initially by their location, scale and magnitude. The evolution equations for these parameters are derived from the exact equations (Burgers or Navier-Stokes) and a comparison of conventional and wavelet-based solutions is given. Points of discussion include the representation of 3-D turbulence as a population of discrete objects, the modeling of equilibrium populations of such objects, and their interactions with 'coherent' events.
Fast Wavelet Based Functional Models for Transcriptome Analysis with Tiling Arrays
Clement, Lieven; De Beuf, Kristof; Thas, Olivier; Vuylsteke, Marnik; Irizarry, Rafael A.; Crainiceanu, Ciprian M.
2013-01-01
For a better understanding of the biology of an organism, a complete description is needed of all regions of the genome that are actively transcribed. Tiling arrays are used for this purpose. They allow for the discovery of novel transcripts and the assessment of differential expression between two or more experimental conditions such as genotype, treatment, tissue, etc. In the tiling array literature, many efforts are devoted to transcript discovery, whereas more recent developments also focus on differential expression. To our knowledge, however, no methods for tiling arrays have been described that can simultaneously assess transcript discovery and identify differentially expressed transcripts. In this paper, we adapt wavelet based functional models to the context of tiling arrays. The high dimensionality of the data led us to avoid inference based on Bayesian MCMC methods. Instead, we introduce a fast empirical Bayes method that provides adaptive regularization of the functional effects. A simulation study and a case study illustrate that our approach is well suited for the simultaneous assessment of transcript discovery and differential expression in tiling array studies, and that it outperforms methods that accomplish only one of these tasks. PMID:22499683
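The idea of regularizing wavelet coefficients can be illustrated with a generic shrinkage sketch: a Haar transform plus soft thresholding at the universal threshold. This stands in for, and is far cruder than, the paper's adaptive empirical Bayes procedure; the step signal below mimics a transcript boundary:

```python
import numpy as np

def haar_dwt(x):
    """Full Haar DWT of a length-2**k signal: [d1, d2, ..., a]."""
    coeffs, a = [], np.asarray(x, dtype=float)
    while len(a) > 1:
        d = (a[0::2] - a[1::2]) / np.sqrt(2)
        a = (a[0::2] + a[1::2]) / np.sqrt(2)
        coeffs.append(d)
    coeffs.append(a)
    return coeffs

def haar_idwt(coeffs):
    a = coeffs[-1]
    for d in reversed(coeffs[:-1]):
        out = np.empty(2 * len(a))
        out[0::2] = (a + d) / np.sqrt(2)
        out[1::2] = (a - d) / np.sqrt(2)
        a = out
    return a

def denoise(y):
    c = haar_dwt(y)
    sigma = np.median(np.abs(c[0])) / 0.6745      # noise scale, finest level
    lam = sigma * np.sqrt(2 * np.log(len(y)))     # universal threshold
    c = [np.sign(d) * np.maximum(np.abs(d) - lam, 0.0) for d in c[:-1]] + [c[-1]]
    return haar_idwt(c)

rng = np.random.default_rng(3)
n = 1024
truth = np.where(np.arange(n) < n // 2, 0.0, 2.0)  # step: "transcript" boundary
noisy = truth + rng.normal(0.0, 0.5, n)
den = denoise(noisy)
```

Shrinkage kills the many small noise coefficients while keeping the few large ones that encode the jump, so sharp boundaries survive the smoothing.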
Wavelet-based detection of abrupt changes in natural frequencies of time-variant systems
NASA Astrophysics Data System (ADS)
Dziedziech, K.; Staszewski, W. J.; Basu, B.; Uhl, T.
2015-12-01
Detection of abrupt changes in natural frequencies from vibration responses of time-variant systems is a challenging task due to the complex nature of physics involved. It is clear that the problem needs to be analysed in the combined time-frequency domain. The paper proposes an application of the input-output wavelet-based Frequency Response Function for this analysis. The major focus and challenge relate to ridge extraction of the above time-frequency characteristics. It is well known that classical ridge extraction procedures lead to ridges that are smooth. However, this property is not desired when abrupt changes in the dynamics are considered. The methods presented in the paper are illustrated using simulated and experimental multi-degree-of-freedom systems. The results are compared with the classical Frequency Response Function and with the output only analysis based on the wavelet auto-power response spectrum. The results show that the proposed method captures correctly the dynamics of the analysed time-variant systems.
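A raw (unsmoothed) ridge of a time-frequency map can be extracted by taking the per-frame argmax. The sketch below uses a plain STFT instead of the paper's wavelet-based Frequency Response Function, and the abrupt frequency jump mimics an abrupt change in a natural frequency; all parameters are illustrative:

```python
import numpy as np

fs = 1000.0
t = np.arange(0.0, 2.0, 1.0 / fs)
# "Natural frequency" jumps abruptly from 50 Hz to 120 Hz at t = 1 s.
x = np.where(t < 1.0, np.sin(2 * np.pi * 50 * t), np.sin(2 * np.pi * 120 * t))

win, hop = 256, 64
frames = [x[i:i + win] * np.hanning(win) for i in range(0, len(x) - win, hop)]
spec = np.abs(np.fft.rfft(frames, axis=1))   # time-frequency magnitude map
freqs = np.fft.rfftfreq(win, 1.0 / fs)
ridge = freqs[np.argmax(spec, axis=1)]       # raw per-frame ridge (no smoothing)
```

Because the argmax is taken independently in each frame, the ridge follows the jump within a frame or two; classical smoothness-penalized ridge extraction would blur it, which is exactly the behavior the paper addresses.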
Vahabi, Zahra; Amirfattahi, Rasoul; Shayegh, Farzaneh; Ghassemi, Fahimeh
2015-09-01
Considerable efforts have been made to predict seizures. Among these, the methods that quantify synchronization between brain areas are the most important. However, to date, a practically acceptable result has not been reported. In this paper, we use a synchronization measurement method that is derived from the ability of the bi-spectrum to determine the nonlinear properties of a system. In this method, first, the temporal variation of the bi-spectrum of different channels of electrocorticography (ECoG) signals is obtained via an extended wavelet-based time-frequency analysis method; then, to compare different channels, the bi-phase correlation measure is introduced. Since the temporal variation of the amount of nonlinear coupling between brain regions, which had not been considered before, is thus taken into account, the results are more reliable than conventional phase-synchronization measures. It is shown that, for 21 patients of the FSPEEG database, bi-phase correlation can discriminate the pre-ictal and ictal states, with very low false positive rates (FPRs) (average: 0.078/h) and high sensitivity (100%). However, the proposed seizure predictor still cannot significantly outperform a random predictor for all patients. PMID:26126613
Kim, Byung S; Yoo, Sun K
2007-09-01
The use of wireless networks bears great practical importance for the instantaneous transmission of ECG signals during movement. In this paper, three typical wavelet-based ECG compression algorithms, Rajoub (RA), Embedded Zerotree Wavelet (EZ), and Wavelet Transform Higher-Order Statistics Coding (WH), were evaluated to find an appropriate ECG compression algorithm for scalable and reliable wireless tele-cardiology applications, particularly over a CDMA network. The short-term and long-term performance characteristics of the three algorithms were analyzed using normal, abnormal, and measurement noise-contaminated ECG signals from the MIT-BIH database. In addition to the processing delay measurement, compression efficiency and reconstruction sensitivity to error were also evaluated via simulation models including the noise-free channel model, random noise channel model, and CDMA channel model, as well as over an actual CDMA network currently operating in Korea. This study found that the EZ algorithm achieves the best compression efficiency in a low-noise environment, and that the WH algorithm is competitive for use in high-error environments, despite degraded short-term performance on abnormal or contaminated ECG signals. PMID:17701824
NASA Astrophysics Data System (ADS)
Jia, Xiaoliang; An, Haizhong; Sun, Xiaoqi; Huang, Xuan; Gao, Xiangyun
2016-04-01
The globalization and regionalization of crude oil trade inevitably give rise to the difference of crude oil prices. The understanding of the pattern of the crude oil prices' mutual propagation is essential for analyzing the development of global oil trade. Previous research has focused mainly on the fuzzy long- or short-term one-to-one propagation of bivariate oil prices, generally ignoring various patterns of periodical multivariate propagation. This study presents a wavelet-based network approach to help uncover the multipath propagation of multivariable crude oil prices in a joint time-frequency period. The weekly oil spot prices of the OPEC member states from June 1999 to March 2011 are adopted as the sample data. First, we used wavelet analysis to find different subseries based on an optimal decomposing scale to describe the periodical feature of the original oil price time series. Second, a complex network model was constructed based on an optimal threshold selection to describe the structural feature of multivariable oil prices. Third, Bayesian network analysis (BNA) was conducted to find the probability causal relationship based on periodical structural features to describe the various patterns of periodical multivariable propagation. Finally, the significance of the leading and intermediary oil prices is discussed. These findings are beneficial for the implementation of periodical target-oriented pricing policies and investment strategies.
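The network-construction step described above (a correlation matrix turned into a graph by a threshold) can be sketched in numpy. The wavelet decomposition into subseries and the Bayesian network analysis are omitted, and the synthetic "price" series below are purely illustrative stand-ins:

```python
import numpy as np

rng = np.random.default_rng(4)
n_weeks = 300
base = np.cumsum(rng.normal(0.0, 1.0, n_weeks))   # shared "market" factor
# Toy price series: three tightly coupled to the common factor, two
# dominated by idiosyncratic noise.
series = np.array([base + rng.normal(0.0, s, n_weeks)
                   for s in (0.5, 0.5, 0.5, 50.0, 50.0)])
C = np.corrcoef(series)                  # pairwise correlation matrix
threshold = 0.8                          # stand-in for an "optimal" threshold
A = (np.abs(C) >= threshold).astype(int) - np.eye(len(series), dtype=int)
degree = A.sum(axis=1)                   # connectivity of each node
```

Node degree then distinguishes prices that propagate together (the coupled trio) from isolated ones, which is the structural feature the study feeds into its Bayesian network stage.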
A wavelet-based image quality metric for the assessment of 3D synthesized views
NASA Astrophysics Data System (ADS)
Bosc, Emilie; Battisti, Federica; Carli, Marco; Le Callet, Patrick
2013-03-01
In this paper we present a novel image quality assessment technique for evaluating virtual synthesized views in the context of multi-view video. In particular, Free Viewpoint Videos are generated from uncompressed color views and their compressed associated depth maps by means of the View Synthesis Reference Software, provided by MPEG. Prior to the synthesis step, the original depth maps are encoded with different coding algorithms, thus leading to the creation of additional artifacts in the synthesized views. The core of the proposed wavelet-based metric lies in the registration procedure performed to align the synthesized view with the original one, and in a skin detection step, applied because the same distortion is more annoying when visible on human subjects than on other parts of the scene. The effectiveness of the metric is evaluated by analyzing the correlation of the scores obtained with the proposed metric with Mean Opinion Scores collected by means of subjective tests. The achieved results are also compared against those of well-known objective quality metrics. The experimental results confirm the effectiveness of the proposed metric.
Seshadrinath, Jeevanand; Singh, Bhim; Panigrahi, Bijaya Ketan
2014-05-01
Interturn fault diagnosis of induction machines has been discussed using various neural network-based techniques. The main challenges in such methods are the computational complexity due to the huge size of the network and the pruning of a large number of parameters. In this paper, a nearly shift-insensitive complex wavelet-based probabilistic neural network (PNN) model, which has only a single parameter to be optimized, is proposed for interturn fault detection. The algorithm constitutes two parts and runs in an iterative way. In the first part, the PNN structure determination is discussed, which finds the optimum size of the network using an orthogonal least squares regression algorithm, thereby reducing its size. In the second part, a Bayesian classifier fusion is recommended as an effective solution for deciding the machine condition. The testing accuracy, sensitivity, and specificity values are highest for the product rule-based fusion scheme, which is obtained under load, supply, and frequency variations. The point of overfitting of the PNN is determined, which reduces the size without compromising the performance. Moreover, a comparative evaluation with a traditional discrete wavelet transform-based method is presented for performance evaluation and to appreciate the obtained results. PMID:24808044
Dimensionality reduction for density ratio estimation in high-dimensional spaces.
Sugiyama, Masashi; Kawanabe, Motoaki; Chui, Pui Ling
2010-01-01
The ratio of two probability density functions is becoming a quantity of interest these days in the machine learning and data mining communities since it can be used for various data processing tasks such as non-stationarity adaptation, outlier detection, and feature selection. Recently, several methods have been developed for directly estimating the density ratio without going through density estimation and were shown to work well in various practical problems. However, these methods still perform rather poorly when the dimensionality of the data domain is high. In this paper, we propose to incorporate a dimensionality reduction scheme into a density-ratio estimation procedure and experimentally show that the estimation accuracy in high-dimensional cases can be improved. PMID:19631506
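Direct density-ratio estimation (bypassing separate density estimates) can be sketched with a least-squares importance fitting approach in the spirit of uLSIF; the kernel width, centers, and regularization below are illustrative choices, and the paper's dimensionality-reduction step is not included:

```python
import numpy as np

def ulsif(x_p, x_q, centers, sigma=0.5, lam=1e-3):
    """Least-squares fit of r(x) = p(x)/q(x) as a Gaussian-kernel expansion,
    using samples from p and q directly (no density estimation)."""
    K = lambda X: np.exp(-(X[:, None] - centers[None, :]) ** 2 / (2 * sigma ** 2))
    Kp, Kq = K(x_p), K(x_q)
    H = Kq.T @ Kq / len(x_q)            # kernel second moment under q
    h = Kp.mean(axis=0)                 # kernel first moment under p
    alpha = np.linalg.solve(H + lam * np.eye(len(centers)), h)
    return lambda x: np.maximum(K(np.atleast_1d(np.asarray(x, float))) @ alpha, 0.0)

rng = np.random.default_rng(5)
x_p = rng.normal(0.5, 1.0, 500)         # samples from the numerator density p
x_q = rng.normal(0.0, 1.0, 500)         # samples from the denominator density q
centers = np.linspace(-3.0, 3.0, 20)
r_hat = ulsif(x_p, x_q, centers)
```

For these two unit-variance Gaussians the true ratio increases monotonically in x, and a reasonable fit should preserve that ordering; in high dimensions, kernel-based fits like this degrade, which motivates the dimensionality reduction the paper proposes.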
In-Shell Bulk Density as an Estimator of Farmers Stock Grade Factors
Technology Transfer Automated Retrieval System (TEKTRAN)
The objective of this research was to determine whether or not bulk density can be used to accurately estimate farmer stock grade factors such as total sound mature kernels and other kernels. Physical properties including bulk density, pod size and kernel size distributions are measured as part of t...
Item Response Theory with Estimation of the Latent Density Using Davidian Curves
ERIC Educational Resources Information Center
Woods, Carol M.; Lin, Nan
2009-01-01
Davidian-curve item response theory (DC-IRT) is introduced, evaluated with simulations, and illustrated using data from the Schedule for Nonadaptive and Adaptive Personality Entitlement scale. DC-IRT is a method for fitting unidimensional IRT models with maximum marginal likelihood estimation, in which the latent density is estimated,
Technology Transfer Automated Retrieval System (TEKTRAN)
Technical Summary Objectives: Determine the effect of body mass index (BMI) on the accuracy of body density (Db) estimated with skinfold thickness (SFT) measurements compared to air displacement plethysmography (ADP) in adults. Subjects/Methods: We estimated Db with SFT and ADP in 131 healthy men an...
ERIC Educational Resources Information Center
Woods, Carol M.; Thissen, David
2006-01-01
The purpose of this paper is to introduce a new method for fitting item response theory models with the latent population distribution estimated from the data using splines. A spline-based density estimation system provides a flexible alternative to existing procedures that use a normal distribution, or a different functional form, for the
Nonparametric maximum likelihood estimation of probability densities by penalty function methods
NASA Technical Reports Server (NTRS)
Demontricher, G. F.; Tapia, R. A.; Thompson, J. R.
1974-01-01
Unless it is known a priori exactly to which finite dimensional manifold the probability density function giving rise to a set of samples belongs, the parametric maximum likelihood estimation procedure leads to poor estimates and is unstable, while the nonparametric maximum likelihood procedure is undefined. A very general theory of maximum penalized likelihood estimation which should avoid many of these difficulties is presented. It is demonstrated that each reproducing kernel Hilbert space leads, in a very natural way, to a maximum penalized likelihood estimator and that a well-known class of reproducing kernel Hilbert spaces gives polynomial splines as the nonparametric maximum penalized likelihood estimates.
Kocovsky, Patrick M.; Rudstam, Lars G.; Yule, Daniel L.; Warner, David M.; Schaner, Ted; Pientka, Bernie; Deller, John W.; Waterfield, Holly A.; Witzel, Larry D.; Sullivan, Patrick J.
2013-01-01
Standardized methods of data collection and analysis ensure quality and facilitate comparisons among systems. We evaluated the importance of three recommendations from the Standard Operating Procedure for hydroacoustics in the Laurentian Great Lakes (GLSOP) on density estimates of target species: noise subtraction; setting volume backscattering strength (Sv) thresholds from user-defined minimum target strength (TS) of interest (TS-based Sv threshold); and calculations of an index for multiple targets (Nv index) to identify and remove biased TS values. Eliminating noise had the predictable effect of decreasing density estimates in most lakes. Using the TS-based Sv threshold decreased fish densities in the middle and lower layers in the deepest lakes with abundant invertebrates (e.g., Mysis diluviana). Correcting for biased in situ TS increased measured density up to 86% in the shallower lakes, which had the highest fish densities. The current recommendations by the GLSOP significantly influence acoustic density estimates, but the degree of importance is lake dependent. Applying GLSOP recommendations, whether in the Laurentian Great Lakes or elsewhere, will improve our ability to compare results among lakes. We recommend further development of standards, including minimum TS and analytical cell size, for reducing the effect of biased in situ TS on density estimates.
Ku, Bon Ki; Evans, Douglas E.
2015-01-01
For nanoparticles with nonspherical morphologies, e.g., open agglomerates or fibrous particles, it is expected that the actual density of agglomerates may be significantly different from the bulk material density. It is further expected that using the material density may upset the relationship between surface area and mass when a method for estimating aerosol surface area from number and mass concentrations (referred to as Maynard's estimation method) is used. Therefore, it is necessary to quantitatively investigate how much Maynard's estimation method depends on particle morphology and density. In this study, aerosol surface area estimated from number and mass concentration measurements was evaluated and compared with values from two reference methods: a method proposed by Lall and Friedlander for agglomerates and a mobility-based method for compact nonspherical particles, using well-defined polydisperse aerosols with known particle densities. Polydisperse silver aerosol particles were generated by an aerosol generation facility. Generated aerosols had a range of morphologies, count median diameters (CMD) between 25 and 50 nm, and geometric standard deviations (GSD) between 1.5 and 1.8. The surface area estimates from number and mass concentration measurements correlated well with the two reference values when gravimetric mass was used. The aerosol surface area estimates from Maynard's estimation method were comparable to the reference methods for all particle morphologies, within surface area ratios of 3.31 and 0.19 for assumed GSDs of 1.5 and 1.8, respectively, when the bulk material density of silver was used. The difference between Maynard's estimation method and the surface area measured by the reference method for fractal-like agglomerates decreased from 79% to 23% when the measured effective particle density was used, while the difference for nearly spherical particles decreased from 30% to 24%.
The results indicate that the use of the particle density of agglomerates improves the accuracy of Maynard's estimation method and that the effective density should be taken into account, when known, when estimating the aerosol surface area of nonspherical aerosols such as open agglomerates and fibrous particles. PMID:26526560
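The number-and-mass estimation method discussed above can be sketched with the standard Hatch-Choate conversions for a lognormal aerosol; identifying this with Maynard's method is our reading of the abstract, and the silver density and GSD values in the test are illustrative.

```python
import math

def maynard_surface_area(number_conc, mass_conc, density, gsd):
    """Estimate aerosol surface-area concentration from number and mass
    concentrations, assuming a lognormal size distribution.

    number_conc : particles per m^3
    mass_conc   : kg per m^3
    density     : particle (bulk or effective) density, kg per m^3
    gsd         : assumed geometric standard deviation
    Returns (cmd_m, surface_m2_per_m3).
    """
    ln2 = math.log(gsd) ** 2
    # Hatch-Choate: M = N * rho * (pi/6) * CMD^3 * exp(4.5 ln^2 gsd)
    cmd = (6.0 * mass_conc
           / (math.pi * density * number_conc * math.exp(4.5 * ln2))) ** (1.0 / 3.0)
    # surface-area concentration: S = N * pi * CMD^2 * exp(2 ln^2 gsd)
    surface = number_conc * math.pi * cmd ** 2 * math.exp(2.0 * ln2)
    return cmd, surface
```

Because CMD scales as density^(-1/3), using a lower effective density (as for open agglomerates) raises the estimated surface area, which is the direction of the correction the study reports.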
Haque, Ekramul
2013-03-01
Janssen created a classical theory based on calculus to estimate static vertical and horizontal pressures within beds of bulk corn. Even today, his equations are widely used to calculate static loadings imposed by granular materials stored in bins. Many standards, such as American Concrete Institute (ACI) 313, American Society of Agricultural and Biological Engineers EP 433, German DIN 1055, Canadian Farm Building Code (CFBC), European Code (ENV 1991-4), and Australian Code AS 3774, incorporate Janssen's equations as the standard for static load calculations on bins. One of the main drawbacks of Janssen's equations is the assumption that the bulk density of the stored product remains constant throughout the entire bin. While this is true for all practical purposes in small bins, in modern commercial-size bins the bulk density of grains increases substantially due to compressive and hoop stresses. Overpressure factors are applied to Janssen loadings to account for practical situations such as dynamic loads due to bin filling and emptying, but there are limited theoretical methods available that include the effects of increased bulk density on the grain loadings transmitted to the storage structures. This article develops a mathematical equation relating the specific weight to location and other variables of the materials and storage. It was found that the bulk density of stored granular materials increases with depth according to a mathematical equation relating the two variables, and applying this bulk-density function, Janssen's equations for vertical and horizontal pressures were modified as presented in this article. The validity of this specific weight function was tested using the principles of mathematics.
As expected, calculations of loads based on the modified equations were consistently higher than the Janssen loadings based on noncompacted bulk densities for all grain depths and types accounting for the effects of increased bulk densities with the bed heights. PMID:24804024
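Janssen's classical result has a simple closed form, and the depth-varying bulk density case reduces to integrating the same force balance numerically. The closed form below is standard; the linear compaction law used in the test is a hypothetical stand-in for the specific-weight function derived in the article.

```python
import math

def janssen_vertical(z, gamma, R, mu, K):
    """Classical Janssen static vertical pressure at depth z, for constant
    bulk weight density gamma (N/m^3), hydraulic radius R (= D/4 for a
    circular bin), wall-friction coefficient mu, and lateral pressure
    ratio K. Horizontal pressure is K times this value."""
    return gamma * R / (mu * K) * (1.0 - math.exp(-mu * K * z / R))

def janssen_depth_varying(z, gamma_of_z, R, mu, K, steps=10000):
    """Janssen force balance dp/dz = gamma(z) - (mu*K/R) * p integrated by
    forward Euler, so any depth-dependent bulk weight density can be
    supplied as gamma_of_z."""
    p, dz = 0.0, z / steps
    for i in range(steps):
        zi = i * dz
        p += dz * (gamma_of_z(zi) - mu * K * p / R)
    return p
```

With a bulk density that grows with depth, the integrated pressures exceed the constant-density Janssen values, matching the article's finding that the modified loads are consistently higher.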
On the Use of Adaptive Wavelet-based Methods for Ocean Modeling and Data Assimilation Problems
NASA Astrophysics Data System (ADS)
Vasilyev, Oleg V.; Yousuff Hussaini, M.; Souopgui, Innocent
2014-05-01
Latest advancements in parallel wavelet-based numerical methodologies for the solution of partial differential equations, combined with the unique properties of wavelet analysis to unambiguously identify and isolate localized dynamically dominant flow structures, make it feasible to start developing integrated approaches for ocean modeling and data assimilation problems that take advantage of temporally and spatially varying meshes. In this talk the Parallel Adaptive Wavelet Collocation Method with spatially and temporally varying thresholding is presented and the feasibility/potential advantages of its use for ocean modeling are discussed. The second half of the talk focuses on the recently developed Simultaneous Space-time Adaptive approach that addresses one of the main challenges of variational data assimilation, namely the requirement to have a forward solution available when solving the adjoint problem. The issue is addressed by concurrently solving forward and adjoint problems in the entire space-time domain on a near optimal adaptive computational mesh that automatically adapts to spatio-temporal structures of the solution. The compressed space-time form of the solution eliminates the need to save or recompute the forward solution for every time slice, as is typically done in traditional time-marching variational data assimilation approaches. The simultaneous spatio-temporal discretization of both the forward and the adjoint problems makes it possible to solve both of them concurrently on the same space-time adaptive computational mesh, reducing the amount of saved data to the strict minimum for a given a priori controlled accuracy of the solution. The simultaneous space-time adaptive approach to variational data assimilation is demonstrated for the advection-diffusion problem in 1D-t and 2D-t dimensions.
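The core adaptivity mechanism, keeping only wavelet coefficients above a threshold so that resolution follows the solution's localized structures, can be sketched with an orthonormal Haar transform. This is a much simpler stand-in for the second-generation wavelets used in the collocation method.

```python
import math

def haar_forward(x):
    """Full orthonormal Haar transform of a length-2^k signal."""
    out = list(x)
    n = len(out)
    while n > 1:
        s = 1.0 / math.sqrt(2.0)
        tmp = out[:n]
        half = n // 2
        for i in range(half):
            out[i] = s * (tmp[2 * i] + tmp[2 * i + 1])        # averages
            out[half + i] = s * (tmp[2 * i] - tmp[2 * i + 1])  # details
        n = half
    return out

def haar_inverse(c):
    """Inverse of haar_forward."""
    out = list(c)
    n = 1
    while n < len(out):
        s = 1.0 / math.sqrt(2.0)
        tmp = out[:2 * n]
        for i in range(n):
            out[2 * i] = s * (tmp[i] + tmp[n + i])
            out[2 * i + 1] = s * (tmp[i] - tmp[n + i])
        n *= 2
    return out

def threshold_compress(x, eps):
    """Zero coefficients below eps; returns (reconstruction, kept count).
    For an orthonormal basis the L2 error is bounded by eps * sqrt(n)."""
    c = haar_forward(x)
    kept = [ci if abs(ci) >= eps else 0.0 for ci in c]
    return haar_inverse(kept), sum(1 for ci in kept if ci != 0.0)
```

The retained coefficients are exactly the adaptive degrees of freedom: smooth regions compress to a few coarse coefficients while sharp features keep their fine-scale ones.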
Wavelet-based clustering of resting state MRI data in the rat.
Medda, Alessio; Hoffmann, Lukas; Magnuson, Matthew; Thompson, Garth; Pan, Wen-Ju; Keilholz, Shella
2016-01-01
While functional connectivity has typically been calculated over the entire length of the scan (5-10 min), interest has been growing in dynamic analysis methods that can detect changes in connectivity on the order of cognitive processes (seconds). Previous work with sliding window correlation has shown that changes in functional connectivity can be observed on these time scales in the awake human and in anesthetized animals. This exciting advance creates a need for improved approaches to characterize dynamic functional networks in the brain. Previous studies were performed using sliding window analysis on regions of interest defined based on anatomy or obtained from traditional steady-state analysis methods. The parcellation of the brain may therefore be suboptimal, and the characteristics of the time-varying connectivity between regions are dependent upon the length of the sliding window chosen. This manuscript describes an algorithm based on wavelet decomposition that allows data-driven clustering of voxels into functional regions based on temporal and spectral properties. Previous work has shown that different networks have characteristic frequency fingerprints, and the use of wavelets ensures that both the frequency and the timing of the BOLD fluctuations are considered during the clustering process. The method was applied to resting state data acquired from anesthetized rats, and the resulting clusters agreed well with known anatomical areas. Clusters were highly reproducible across subjects. Wavelet cross-correlation values between clusters from a single scan were significantly higher than the values from randomly matched clusters that shared no temporal information, indicating that wavelet-based analysis is sensitive to the relationship between areas. PMID:26481903
An Undecimated Wavelet-based Method for Cochlear Implant Speech Processing
Hajiaghababa, Fatemeh; Kermani, Saeed; Marateb, Hamid R.
2014-01-01
A cochlear implant is an implanted electronic device used to provide a sensation of hearing to a person who is hard of hearing. The cochlear implant is often referred to as a bionic ear. This paper presents an undecimated wavelet-based speech coding strategy for cochlear implants, which gives a novel speech processing strategy. The undecimated wavelet packet transform (UWPT) is computed like the wavelet packet transform except that it does not down-sample the output at each level. The speech data used for the current study consist of 30 consonants, sampled at 16 kbps. The performance of our proposed UWPT method was compared to that of an infinite impulse response (IIR) filter-bank in terms of mean opinion score (MOS), short-time objective intelligibility (STOI) measure and segmental signal-to-noise ratio (SNR). The undecimated wavelet had better segmental SNR for about 96% of the input speech data. The MOS of the proposed method was twice that of the IIR filter-bank. The statistical analysis revealed that the UWPT-based N-of-M strategy significantly improved the MOS, STOI and segmental SNR (P < 0.001) compared with those obtained with the IIR filter-bank based strategies. The advantage of the UWPT is that it is shift-invariant, which gives a dense approximation to the continuous wavelet transform. Thus, the information loss is minimal, and that is why the UWPT performance was better than that of traditional filter-bank strategies in speech recognition tests. Results showed that the UWPT could be a promising method for speech coding in cochlear implants, although its computational complexity is higher than that of traditional filter-banks. PMID:25426428
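The shift-invariance the abstract attributes to the undecimated transform is easy to demonstrate with a Haar version of the a trous algorithm: no downsampling, filter taps spaced 2^j apart at level j. This is only a sketch; the paper's UWPT uses wavelet packets and different filters.

```python
def uwt_haar(x, levels):
    """Undecimated (a trous) Haar analysis with circular boundary: every
    level has the same length as the input, and the original signal is
    recovered as the final approximation plus all detail arrays."""
    n = len(x)
    approx = list(x)
    details = []
    for j in range(levels):
        step = 2 ** j  # taps move apart as the scale coarsens
        nxt = [0.5 * (approx[i] + approx[(i + step) % n]) for i in range(n)]
        det = [0.5 * (approx[i] - approx[(i + step) % n]) for i in range(n)]
        details.append(det)
        approx = nxt
    return approx, details
```

Because nothing is downsampled, circularly shifting the input simply shifts every coefficient array by the same amount, which is exactly the property a decimated transform lacks.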
Wavelet-Based Spatial Scaling of Coupled Reaction-Diffusion Fields
Mishra, Sudib; Muralidharan, Krishna; Deymier, Pierre; Frantziskonis, G.; Pannala, Sreekanth; Simunovic, Srdjan
2008-01-01
Multiscale schemes for transferring information from fine to coarse scales are typically based on homogenization techniques. Such schemes smooth the fine scale features of the underlying fields, often resulting in the inability to accurately retain the fine scale correlations. In addition, higher-order statistical moments (beyond mean) of the relevant field variables are not necessarily preserved. As a superior alternative to averaging homogenization methods, a wavelet-based scheme for the exchange of information between a reactive and diffusive field in the context of multiscale reaction-diffusion problems is proposed and analyzed. The scheme is shown to be efficient in passing information along scales, from fine to coarse, i.e., upscaling as well as from coarse to fine, i.e., downscaling. It incorporates fine scale statistics (higher-order moments beyond mean), mainly due to the capability of wavelets to represent fields hierarchically. Critical to the success of the scheme is the identification of dominant scales containing the majority of the useful information. The dominant scales in effect specify the coarsest resolution possible. The scheme is applied in detail to the analysis of a diffusive system with a chemically reacting boundary. Reactions are simulated using kinetic Monte Carlo (kMC) and diffusion is solved by finite differences (FDs). Spatial scale differences are present at the interface of the kMC sites and the diffusion grid. The computational efficiency of the scheme is compared to results obtained by averaging homogenization, and to results from a benchmark scheme that ensures spatial scale parity between kMC and FD.
Cetacean population density estimation from single fixed sensors using passive acoustics.
Küsel, Elizabeth T.; Mellinger, David K.; Thomas, Len; Marques, Tiago A.; Moretti, David; Ward, Jessica
2011-06-01
Passive acoustic methods are increasingly being used to estimate animal population density. Most density estimation methods are based on estimates of the probability of detecting calls as functions of distance. Typically these are obtained using receivers capable of localizing calls or from studies of tagged animals. However, both approaches are expensive to implement. The approach described here uses a Monte Carlo model to estimate the probability of detecting calls from single sensors. The passive sonar equation is used to predict signal-to-noise ratios (SNRs) of received clicks, which are then combined with a detector characterization that predicts probability of detection as a function of SNR. Input distributions for source level, beam pattern, and whale depth are obtained from the literature. Acoustic propagation modeling is used to estimate transmission loss. Other inputs for density estimation are call rate, obtained from the literature, and false positive rate, obtained from manual analysis of a data sample. The method is applied to estimate density of Blainville's beaked whales over a 6-day period around a single hydrophone located in the Tongue of the Ocean, Bahamas. Results are consistent with those from previous analyses, which use additional tag data. PMID:21682386
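The Monte Carlo step described above can be sketched as follows. The spherical-spreading transmission loss, the logistic detector curve, and every numeric value here are illustrative placeholders for the paper's propagation modeling and detector characterization, not its actual inputs.

```python
import math
import random

def detection_probability(sl_mean, sl_sd, noise_level, trials=20000, seed=1):
    """Monte Carlo estimate of the probability of detecting a click at a
    single fixed sensor, via the passive sonar equation SNR = SL - TL - NL
    (all in dB). TL is modeled as spherical spreading plus a linear
    absorption term; P(detect | SNR) is a logistic detector curve."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        sl = rng.gauss(sl_mean, sl_sd)      # source level, dB re 1 uPa @ 1 m
        r = rng.uniform(100.0, 4000.0)      # slant range to the animal, m
        tl = 20.0 * math.log10(r) + 0.03 * r / 1000.0  # spreading + absorption
        snr = sl - tl - noise_level
        # detector characterization: probability of detection given SNR
        total += 1.0 / (1.0 + math.exp(-(snr - 10.0) / 2.0))
    return total / trials
```

The resulting detection probability is the quantity that, together with call rate and false positive rate, feeds the density estimator.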
NASA Astrophysics Data System (ADS)
Kalra, A.; Ahmad, S.; Stephen, H.
2009-12-01
Evaluating the hydrologic impacts of climate variability due to changes in precipitation has been an important and challenging task in the field of hydrology. This requires estimation of rainfall that preserves its spatial and temporal variability. The current research focuses on 1) analyzing changes (trend/step) in seasonal precipitation and 2) simulating seasonal precipitation using the k-nearest neighbor (k-nn) non-parametric technique for 29 climate divisions covering the entire Colorado River Basin. The current research analyzes water year precipitation data ranging from 1900 to 2008, subdivided into four seasons, i.e., autumn (October-December), winter (January-March), spring (April-June), and summer (July-September). Two statistical tests, i.e., Mann-Kendall and Spearman's rho, are used to evaluate trend changes, and the Rank Sum test is used to identify step changes in seasonal precipitation for the selected climate divisions. The results show that changes occur mostly during the winter season. Eleven divisions show an increase in precipitation, 6 divisions show a decrease, and the remaining 12 show no change in precipitation for the period of record. A total of eight climate divisions observed changes in autumn season precipitation, with four climate divisions showing increasing and the remaining four showing decreasing changes. Decreasing precipitation changes are observed for 6 divisions during the spring season. In the summer season, three climate divisions show an increase and one division shows a decrease in precipitation. The increasing precipitation changes during the winter season are attributed to a gradual step change, whereas the decreasing changes are due to trend changes. The decreasing precipitation changes in the spring season occurred due to trend changes. The summer season changes occurred due to a gradual step change.
During the autumn season, six divisions showed changes (3 increasing and 3 decreasing) due to a gradual step change and the remaining two divisions observed changes due to trend change. Satisfactory precipitation estimates are obtained using the k-nn resampling technique. A 50% probability exceedance estimation error is computed for the selected climate divisions during the four seasons. It is seen that the best estimates are obtained for summer season precipitation and the worst for autumn. As many as 18 climate divisions show an estimation error of 20% or less during summer, 14 divisions during spring, 11 divisions during winter, and 9 divisions during autumn. The analysis of seasonal changes and estimates of precipitation can help water managers in better management of water resources in the Colorado River Basin.
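A k-nn resampling scheme in the spirit of the study can be sketched as below. The abstract does not specify the exact variant used, so this follows the common Lall-Sharma construction: find the k historical states closest to the current value, then resample one of their successors with weight proportional to 1/rank.

```python
import random

def knn_resample(series, length, k=5, seed=0):
    """k-nearest-neighbour bootstrap of a time series: each new value is
    the historical successor of one of the k states closest to the
    current simulated state, sampled with 1/rank weights."""
    rng = random.Random(seed)
    weights = [1.0 / (r + 1) for r in range(k)]
    wsum = sum(weights)
    probs = [w / wsum for w in weights]
    out = [rng.choice(series[:-1])]
    for _ in range(length - 1):
        # rank historical states (all but the last, which has no successor)
        ranked = sorted(range(len(series) - 1),
                        key=lambda i: abs(series[i] - out[-1]))
        u, acc, pick = rng.random(), 0.0, ranked[0]
        for r in range(k):
            acc += probs[r]
            if u <= acc:
                pick = ranked[r]
                break
        out.append(series[pick + 1])
    return out
```

Because only observed values are resampled, the simulated sequence preserves the marginal distribution and much of the lag-1 dependence of the historical record.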
Effect of compression paddle tilt correction on volumetric breast density estimation
NASA Astrophysics Data System (ADS)
Kallenberg, Michiel G. J.; van Gils, Carla H.; Lokate, Mariëtte; den Heeten, Gerard J.; Karssemeijer, Nico
2012-08-01
For the acquisition of a mammogram, a breast is compressed between a compression paddle and a support table. When compression is applied with a flexible compression paddle, the upper plate may be tilted, which results in variation in breast thickness from the chest wall to the breast margin. Paddle tilt has been recognized as a major problem in volumetric breast density estimation methods. In previous work, we developed a fully automatic method to correct the image for the effect of compression paddle tilt. In this study, we investigated in three experiments the effect of paddle tilt and its correction on volumetric breast density estimation. Results showed that paddle tilt considerably affected the accuracy of volumetric breast density estimation, but the effect could be reduced by tilt correction. By applying tilt correction, a significant increase in correspondence between mammographic density estimates and measurements on MRI was established. We argue that in volumetric breast density estimation, tilt correction is both feasible and essential when mammographic images are acquired with a flexible compression paddle.
Royle, J. Andrew; Chandler, Richard B.; Gazenski, Kimberly D.; Graves, Tabitha A.
2013-01-01
Population size and landscape connectivity are key determinants of population viability, yet no methods exist for simultaneously estimating density and connectivity parameters. Recently developed spatial capture-recapture (SCR) models provide a framework for estimating density of animal populations but thus far have not been used to study connectivity. Rather, all applications of SCR models have used encounter probability models based on the Euclidean distance between traps and animal activity centers, which implies that home ranges are stationary, symmetric, and unaffected by landscape structure. In this paper we devise encounter probability models based on ecological distance, i.e., the least-cost path between traps and activity centers, which is a function of both Euclidean distance and animal movement behavior in resistant landscapes. We integrate least-cost path models into a likelihood-based estimation scheme for spatial capture-recapture models in order to estimate population density and parameters of the least-cost encounter probability model. Therefore, it is possible to make explicit inferences about animal density, distribution, and landscape connectivity as it relates to animal movement from standard capture-recapture data. Furthermore, a simulation study demonstrated that ignoring landscape connectivity can result in negatively biased density estimators under the naive SCR model.
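The "ecological distance" idea, replacing Euclidean distance with the least-cost path through a resistance surface, can be sketched with Dijkstra's algorithm on a grid. This is a simplified stand-in for the cost-distance computation embedded in the SCR likelihood, with an assumed step cost equal to the mean resistance of the two cells.

```python
import heapq

def least_cost_distance(resistance, src, dst):
    """Dijkstra least-cost path between two cells of a resistance grid.
    Moving between 4-neighbours costs the mean of the two cells'
    resistances, so on a uniform grid this reduces to resistance times
    the number of steps."""
    rows, cols = len(resistance), len(resistance[0])
    dist = {src: 0.0}
    pq = [(0.0, src)]
    seen = set()
    while pq:
        d, node = heapq.heappop(pq)
        if node == dst:
            return d
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + 0.5 * (resistance[r][c] + resistance[nr][nc])
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return float("inf")
```

In the SCR setting, this distance replaces Euclidean distance inside the encounter probability model, so a high-resistance barrier between a trap and an activity center lowers the expected encounter rate even when the two are close in straight-line terms.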
An analytic model of toroidal half-wave oscillations: Implication on plasma density estimates
NASA Astrophysics Data System (ADS)
Bulusu, Jayashree; Sinha, A. K.; Vichare, Geeta
2015-06-01
The developed analytic model for toroidal oscillations under an infinitely conducting ionosphere ("Rigid-end") has been extended to the "Free-end" case, in which the conjugate ionospheres are infinitely resistive. The present direct analytic model (DAM) is the only analytic model that provides the field line structures of electric and magnetic field oscillations associated with the "Free-end" toroidal wave for a generalized plasma distribution characterized by the power law ρ = ρ_o(r_o/r)^m, where m is the density index and r is the geocentric distance to the position of interest on the field line. This is important because different regions in the magnetosphere are characterized by different m. Significant improvement over the standard WKB solution and an excellent agreement with the numerical exact solution (NES) affirm the validity and advancement of DAM. In addition, we estimate the equatorial ion number density (assuming the H+ atom as the only species) using DAM, NES, and standard WKB for the Rigid-end as well as the Free-end case and illustrate their respective implications in computing ion number density. It is seen that the WKB method overestimates the equatorial ion density under the Rigid-end condition and underestimates it under the Free-end condition. The density estimates through DAM are far more accurate than those computed through WKB. Earlier analytic estimates of ion number density were restricted to m = 6, whereas DAM can account for generalized m while reproducing the density for m = 6 as envisaged by earlier models.
Estimation of tiger densities in India using photographic captures and recaptures
Karanth, U.; Nichols, J.D.
1998-01-01
Previously applied methods for estimating tiger (Panthera tigris) abundance using total counts based on tracks have proved unreliable. In this paper we use a field method proposed by Karanth (1995), combining camera-trap photography to identify individual tigers based on stripe patterns with capture-recapture estimators. We developed a sampling design for camera-trapping and used the approach to estimate tiger population size and density in four representative tiger habitats in different parts of India. The field method worked well and provided data suitable for analysis using closed capture-recapture models. The results suggest the potential for applying this methodology to estimate abundance, survival rates, and other population parameters of tigers and other low-density, secretive animal species with distinctive coat patterns or other external markings. Estimated probabilities of photo-capturing tigers present in the study sites ranged from 0.75 to 1.00. The estimated mean tiger densities ranged from 4.1 (SE = 1.31) to 11.7 (SE = 1.93) tigers/100 km2. The results support the previous suggestions of Karanth and Sunquist (1995) that densities of tigers and other large felids may be primarily determined by prey community structure at a given site.
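For intuition about the capture-recapture step, the simplest closed-population estimator (Chapman's bias-corrected Lincoln-Petersen, a two-occasion special case of the closed models used in the paper) is easy to state:

```python
def chapman_estimate(n1, n2, m2):
    """Chapman's bias-corrected Lincoln-Petersen abundance estimator for a
    closed population: n1 animals marked on occasion 1, n2 captured on
    occasion 2, of which m2 were already marked."""
    return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1

def density_per_100km2(n_hat, area_km2):
    """Convert an abundance estimate to a density per 100 km^2."""
    return 100.0 * n_hat / area_km2
```

The camera-trap studies replace the two-occasion design with multi-occasion closed models and an effectively sampled area, but the logic, recapture fraction of identifiable individuals scaling up to abundance, is the same.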
Estimating detection and density of the Andean cat in the high Andes
Reppucci, Juan; Gardner, Beth; Lucherini, Mauro
2011-01-01
The Andean cat (Leopardus jacobita) is one of the most endangered, yet least known, felids. Although the Andean cat is considered at risk of extinction, rigorous quantitative population studies are lacking. Because physical observations of the Andean cat are difficult to make in the wild, we used a camera-trapping array to photo-capture individuals. The survey was conducted in northwestern Argentina at an elevation of approximately 4,200 m during October–December 2006 and April–June 2007. In each year we deployed 22 pairs of camera traps, which were strategically placed. To estimate detection probability and density we applied models for spatial capture–recapture using a Bayesian framework. Estimated densities were 0.07 and 0.12 individual/km2 for 2006 and 2007, respectively. Mean baseline detection probability was estimated at 0.07. By comparison, densities of the Pampas cat (Leopardus colocolo), another poorly known felid that shares its habitat with the Andean cat, were estimated at 0.74–0.79 individual/km2 in the same study area for 2006 and 2007, and its detection probability was estimated at 0.02. Despite having greater detectability, the Andean cat is rarer in the study region than the Pampas cat. Properly accounting for the detection probability is important in making reliable estimates of density, a key parameter in conservation and management decisions for any species.
Fast and accurate probability density estimation in large high dimensional astronomical datasets
NASA Astrophysics Data System (ADS)
Gupta, Pramod; Connolly, Andrew J.; Gardner, Jeffrey P.
2015-01-01
Astronomical surveys will generate measurements of hundreds of attributes (e.g. color, size, shape) on hundreds of millions of sources. Analyzing these large, high dimensional data sets will require efficient algorithms for data analysis. An example of this is probability density estimation, which is at the heart of many classification problems such as the separation of stars and quasars based on their colors. Popular density estimation techniques use binning or kernel density estimation. Kernel density estimation has a small memory footprint but often requires large computational resources. Binning has small computational requirements, but binning is usually implemented with multi-dimensional arrays, which leads to memory requirements that scale exponentially with the number of dimensions. Hence neither technique scales well to large data sets in high dimensions. We present an alternative approach of binning implemented with hash tables (BASH tables). This approach uses the sparseness of data in the high dimensional space to ensure that the memory requirements are small. However, hashing requires some extra computation, so a priori it is not clear whether the reduction in memory requirements will lead to increased computational requirements. Through an implementation of BASH tables in C++ we show that the additional computational requirements of hashing are negligible. Hence this approach has small memory and computational requirements. We apply our density estimation technique to photometric selection of quasars using non-parametric Bayesian classification and show that the accuracy of the classification is the same as that of earlier approaches. Since the BASH table approach is one to three orders of magnitude faster than the earlier approaches, it may be useful in various other applications of density estimation in astrostatistics.
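The BASH-table idea maps naturally onto a dictionary (Python's dict is a hash table): only occupied bins consume memory, so storage scales with the number of occupied cells rather than with bins^dimensions. A minimal sketch, assuming axis-aligned bins of a single width:

```python
from collections import defaultdict

def bash_density(points, bin_width):
    """Sparse histogram density estimate: map each point to an integer
    bin key, count occupants, and normalize so the stored values are
    densities (counts / (n * bin volume)). Empty bins are never stored."""
    counts = defaultdict(int)
    for p in points:
        key = tuple(int(x // bin_width) for x in p)
        counts[key] += 1
    n = len(points)
    vol = bin_width ** len(points[0])
    return {k: c / (n * vol) for k, c in counts.items()}

def density_at(estimate, point, bin_width):
    """Query the estimate at a point; unoccupied bins have density zero."""
    key = tuple(int(x // bin_width) for x in point)
    return estimate.get(key, 0.0)
```

A dense d-dimensional array over the same range would need (range/width)^d cells regardless of how many are occupied, which is exactly the exponential memory cost the hash-table representation avoids.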
Trap array configuration influences estimates and precision of black bear density and abundance.
Wilton, Clay M; Puckett, Emily E; Beringer, Jeff; Gardner, Beth; Eggert, Lori S; Belant, Jerrold L
2014-01-01
Spatial capture-recapture (SCR) models have advanced our ability to estimate population density for wide-ranging animals by explicitly incorporating individual movement. Though these models are more robust to various spatial sampling designs, few studies have empirically tested different large-scale trap configurations using SCR models. We investigated how extent of trap coverage and trap spacing affect precision and accuracy of SCR parameters, implementing models using the R package secr. We tested two trapping scenarios, one spatially extensive and one intensive, using black bear (Ursus americanus) DNA data from hair snare arrays in south-central Missouri, USA. We also examined the influence that adding a second, lower barbed-wire strand to snares had on quantity and spatial distribution of detections. We simulated trapping data to test bias in density estimates of each configuration under a range of density and detection parameter values. Field data showed that using multiple arrays with intensive snare coverage produced more detections of more individuals than extensive coverage. Consequently, density and detection parameters were more precise for the intensive design. Density was estimated as 1.7 bears per 100 km2 and was 5.5 times greater than that under extensive sampling. Abundance was 279 (95% CI = 193-406) bears in the 16,812 km2 study area. Excluding detections from the lower strand resulted in the loss of 35 detections, 14 unique bears, and the largest recorded movement between snares. All simulations showed low bias for density under both configurations. Results demonstrated that in low density populations with non-uniform distribution of population density, optimizing the tradeoff among snare spacing, coverage, and sample size is of critical importance to estimating parameters with high precision and accuracy.
With limited resources, allocating available traps to multiple arrays with intensive trap spacing increased the amount of information needed to inform parameters with high precision. PMID:25350557
Mid-latitude Ionospheric Storms Density Gradients, Winds, and Drifts Estimated from GPS TEC Imaging
NASA Astrophysics Data System (ADS)
Datta-Barua, S.; Bust, G. S.
2012-12-01
Ionospheric storm processes at mid-latitudes stand in stark contrast to the typical quiescent behavior. Storm enhanced density (SED) on the dayside affects continent-sized regions horizontally and is often associated with a plume that extends poleward and upward into the nightside. One proposed cause of this behavior is the sub-auroral polarization stream (SAPS) acting on the SED. The electric field and its effect connecting mid-latitude and polar regions are just beginning to be understood and modeled. Another possible coupling effect is due to neutral winds, particularly those generated at high latitudes by Joule heating. Of particular interest are electric fields and winds along the boundaries of the SED and plume, because these may be at least partly a cause of sharp horizontal electron density gradients. Thus, it is important to understand what bearing the drifts and winds, and any spatial variations in them (e.g., shear), have on the structure of the enhancement, particularly at its boundaries. Imaging techniques based on GPS TEC play a significant role in the study of storm dynamics, particularly at mid-latitudes, where sampling of the ionosphere with ground-based GPS lines of sight is most dense. Ionospheric Data Assimilation 4-Dimensional (IDA4D) is a plasma density estimation algorithm that has been used in a number of scientific investigations over several years. Recently, efforts to estimate drivers of the mid-latitude ionosphere, focusing on electric-field-induced drifts and neutral winds, based on GPS TEC high-resolution imaging have shown promise. Estimating Ionospheric Parameters from Ionospheric Reverse Engineering (EMPIRE) is a tool developed to address this kind of investigation. In this work electron density and driver estimates are presented for an ionospheric storm using IDA4D in conjunction with EMPIRE.
The IDA4D estimates resolve F-region electron densities at 1-degree resolution at the region of passage of the SED and associated plume. High-resolution imaging is used in conjunction with EMPIRE to deduce the dominant drivers. Starting with a baseline Weimer 2001 electric potential model, adjustments to the Weimer model are estimated for the given storm based on the IDA4D-derived densities to show electric fields associated with the plume. These regional densities and drivers are compared to CHAMP and DMSP data that are proximal for validation. Gradients in electron density are numerically computed over the 1-degree region. These density gradients are correlated with the drift estimates to identify a possible causal relationship in the formation of the boundaries of the SED.
Kun-Rodrigues, Célia; Salmona, Jordi; Besolo, Aubin; Rasolondraibe, Emmanuel; Rabarivola, Clément; Marques, Tiago A; Chikhi, Lounès
2014-06-01
Propithecus coquereli is one of the last sifaka species for which no reliable and extensive density estimates are yet available. Despite its endangered conservation status [IUCN, 2012] and recognition as a flagship species of the northwestern dry forests of Madagascar, its population in its last main refugium, the Ankarafantsika National Park (ANP), is still poorly known. Using line transect distance sampling surveys, we estimated population density and abundance in the ANP. Furthermore, we investigated the effects of road, forest edge and river proximity, and of group size, on sighting frequencies and density estimates. We provide here the first population density estimates throughout the ANP. We found that density varied greatly among surveyed sites (from 5 to ∼100 ind/km2), which could result from significant (negative) effects of road and forest edge and/or a (positive) effect of river proximity. Our results also suggest that the population size may be ∼47,000 individuals in the ANP, hinting that the population likely underwent a strong decline in some parts of the Park in recent decades, possibly caused by habitat loss from fires and charcoal production and by poaching. We suggest community-based conservation actions for the largest remaining population of Coquerel's sifaka which will (i) maintain forest connectivity; (ii) implement alternatives to deforestation through charcoal production, logging, and grass fires; (iii) reduce poaching; and (iv) enable long-term monitoring of the population in collaboration with local authorities and researchers. PMID:24443250
NASA Astrophysics Data System (ADS)
Lee, A. Y.; Lim, R. S.
2012-12-01
One of the major science objectives of the Cassini mission is an investigation of Titan's atmosphere constituent abundances. During low-altitude Titan flybys, the spacecraft attitude is controlled by eight reaction thrusters. Thrusters are fired to counter the torque imparted on the spacecraft by the Titan atmosphere. The denser Titan's atmosphere is, the higher the duty cycles of the thruster firings. Therefore thruster firing telemetry data collected during a passage through the Titan atmosphere can be used to estimate the atmospheric torques imparted on the spacecraft. Since there is a known relation between the atmospheric torque imparted on the spacecraft and Titan's atmospheric density, the estimated atmospheric torques were used to reconstruct the Titan atmospheric density. In 2004-2012, forty-six low-altitude Titan flybys were executed. The altitudes of these flybys at Titan Closest Approach (TCA) range from 878 to 1174 km. The estimated Titan atmospheric densities, as functions of the spacecraft's Titan-relative altitude, were reconstructed. Results obtained are compared with those measured by the HASI (Huygens Atmospheric Structure Instrument) instrument on the Huygens probe. When the logarithm of the estimated density is plotted against the corresponding altitude, the data sets produce straight lines with negative slopes. This suggests that the atmospheric density ρ_Titan is related to the altitude h as follows: ρ_Titan(h) = ρ0 exp(-h/h0). In this equation, both ρ_Titan and ρ0 have units of kg/m3, and both h and h0 (the scale height) have units of km. The least-square fit parameters [ρ0, h0] for the density estimates of the forty-six low-altitude Titan flybys are given in this paper. There is an observed temporal variation of the Titan atmospheric density estimated using telemetry data of flybys executed in 2004-2012.
The observed temporal variation of Titan atmospheric density is significant and could not be explained by the estimation uncertainty (5.8%, 1σ) of the density reconstruction methodology. For example, the estimated Titan atmospheric densities at a constant altitude of 1,080 km are 3.68, 2.58, 3.13, 1.86, 1.48, 2.07, and 1.48 × 10^-10 kg/m3 based on flyby data collected in the years 2005, 2006, 2007, 2008, 2009, 2010, and 2012, respectively. Note that the Titan atmosphere density first decreased with time from 2005 to 2009, then increased with time from 2009 to 2012. Factors that contributed to this temporal variation are unknown. On the other hand, there is not any noticeable dependency of the Titan atmospheric density on the TCA latitudes of the flybys (from 82 deg. South to 85 deg. North). The estimated atmospheric density data will help scientists to better understand the density structure of the Titan atmosphere.
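The exponential profile ρ(h) = ρ0 exp(-h/h0) described above can be fit by ordinary least squares on the logarithm of density, since log ρ is linear in h. A minimal sketch with synthetic, illustrative values of ρ0 and h0 (not the Cassini-derived parameters):

```python
import numpy as np

def fit_scale_height(h_km, rho):
    """Fit rho(h) = rho0 * exp(-h / h0) by linear least squares on log(rho).

    Returns (rho0, h0): rho0 in the units of rho, h0 (scale height) in km.
    """
    slope, intercept = np.polyfit(h_km, np.log(rho), 1)
    return np.exp(intercept), -1.0 / slope

# Synthetic profile: rho0 = 1e-7 kg/m^3, scale height h0 = 80 km (made-up values)
h = np.linspace(900.0, 1150.0, 20)
rho = 1e-7 * np.exp(-h / 80.0)
rho0_est, h0_est = fit_scale_height(h, rho)
```

On noise-free data the fit recovers the generating parameters; with real reconstructed densities the residual scatter would feed the quoted estimation uncertainty.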
Distributed Noise Generation for Density Estimation Based Clustering without Trusted Third Party
NASA Astrophysics Data System (ADS)
Su, Chunhua; Bao, Feng; Zhou, Jianying; Takagi, Tsuyoshi; Sakurai, Kouichi
The rapid growth of the Internet provides people with tremendous opportunities for data collection, knowledge discovery and cooperative computation. However, it also brings the problem of sensitive information leakage. Both individuals and enterprises may suffer from massive data collection and information retrieval by distrusted parties. In this paper, we propose a privacy-preserving protocol for distributed kernel density estimation-based clustering. Our scheme applies the random data perturbation (RDP) technique and verifiable secret sharing to solve the security problem of the distributed kernel density estimation in [4], which assumed an intermediate party to help in the computation.
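The key idea behind random data perturbation in this setting is that each party masks its local kernel-density contribution with a noise share, and the shares are constructed so that they cancel in the aggregate. The paper achieves this with verifiable secret sharing and no trusted third party; the toy below only illustrates the cancellation property, using centrally generated zero-sum shares (all data and parameter values are invented):

```python
import numpy as np

rng = np.random.default_rng(0)

def local_kde(data, grid, bandwidth):
    """Gaussian-kernel density contribution of one party's local data."""
    z = (grid[:, None] - data[None, :]) / bandwidth
    return np.exp(-0.5 * z**2).sum(axis=1) / (bandwidth * np.sqrt(2.0 * np.pi))

# Three parties hold disjoint data sets; each masks its KDE contribution.
parties = [rng.normal(0, 1, 200), rng.normal(3, 1, 200), rng.normal(-2, 1, 200)]
grid = np.linspace(-6.0, 8.0, 100)
bw = 0.5

raw = np.stack([rng.normal(0.0, 5.0, grid.size) for _ in parties])
shares = raw - raw.mean(axis=0)          # shares now sum to zero at every grid point
masked = [local_kde(d, grid, bw) + s for d, s in zip(parties, shares)]

n_total = sum(len(d) for d in parties)
joint = np.sum(masked, axis=0) / n_total  # noise cancels in the aggregate
exact = np.sum([local_kde(d, grid, bw) for d in parties], axis=0) / n_total
```

Each masked contribution reveals little about its party's data, yet the summed estimate equals the unperturbed joint density up to floating-point error.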
A Statistical Analysis for Estimating Fish Number Density with the Use of a Multibeam Echosounder
NASA Astrophysics Data System (ADS)
Schroth-Miller, Madeline L.
Fish number density can be estimated from the normalized second moment of acoustic backscatter intensity [Denbigh et al., J. Acoust. Soc. Am. 90, 457-469 (1991)]. This method assumes that the distribution of fish scattering amplitudes is known and that the fish are randomly distributed following a Poisson volume distribution within regions of constant density. It is most useful at low fish densities, relative to the resolution of the acoustic device being used, since the estimators quickly become noisy as the number of fish per resolution cell increases. New models that include noise contributions are considered. The methods were applied to an acoustic assessment of juvenile Atlantic Bluefin Tuna, Thunnus thynnus. The data were collected using a 400 kHz multibeam echo sounder during the summer months of 2009 in Cape Cod, MA. Due to the high resolution of the multibeam system used, the large size (approx. 1.5 m) of the tuna, and the spacing of the fish in the school, we expect there to be low fish densities relative to the resolution of the multibeam system. Results of the fish number density based on the normalized second moment of acoustic intensity are compared to fish packing density estimated using aerial imagery that was collected simultaneously.
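The estimator family above can be illustrated in its simplest special case. For N scatterers of equal amplitude with independent uniform phases, the normalized second moment of intensity is M = ⟨I²⟩/⟨I⟩² = 2 − 1/N, so N can be recovered as 1/(2 − M). The Monte-Carlo sketch below demonstrates only this constant-amplitude case; the estimators of Denbigh et al. additionally model the scattering-amplitude distribution and, in the extended models, noise:

```python
import numpy as np

rng = np.random.default_rng(1)

N_true = 5            # scatterers per resolution cell (illustrative)
trials = 200_000

# Echo intensity for N unit-amplitude scatterers with uniform random phases
phases = rng.uniform(0.0, 2.0 * np.pi, size=(trials, N_true))
intensity = np.abs(np.exp(1j * phases).sum(axis=1)) ** 2

# Normalized second moment: M = <I^2>/<I>^2 = 2 - 1/N for this model,
# so the number density per cell is estimated as 1 / (2 - M).
M = np.mean(intensity**2) / np.mean(intensity) ** 2
N_hat = 1.0 / (2.0 - M)
```

As the abstract notes, the estimator degrades at high densities: as N grows, M approaches 2 and the inversion 1/(2 − M) amplifies sampling noise.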
Brassine, Eléanor; Parker, Daniel
2015-01-01
Camera trapping studies have become increasingly popular to produce population estimates of individually recognisable mammals. Yet, monitoring techniques for rare species which occur at extremely low densities are lacking. Additionally, species which have unpredictable movements may make obtaining reliable population estimates challenging due to low detectability. Our study explores the effectiveness of intensive camera trapping for estimating cheetah (Acinonyx jubatus) numbers. Using both a more traditional, systematic grid approach and pre-determined, targeted sites for camera placement, the cheetah population of the Northern Tuli Game Reserve, Botswana was sampled between December 2012 and October 2013. Placement of cameras in a regular grid pattern yielded very few (n = 9) cheetah images and these were insufficient to estimate cheetah density. However, pre-selected cheetah scent-marking posts provided 53 images of seven adult cheetahs (0.61 ± 0.18 cheetahs/100km²). While increasing the length of the camera trapping survey from 90 to 130 days increased the total number of cheetah images obtained (from 53 to 200), no new individuals were recorded and the estimated population density remained stable. Thus, our study demonstrates that targeted camera placement (irrespective of survey duration) is necessary for reliably assessing cheetah densities where populations are naturally very low or dominated by transient individuals. Significantly our approach can easily be applied to other rare predator species. PMID:26698574
A hierarchical model for estimating density in camera-trap studies
Royle, J. Andrew; Nichols, J.D.; Karanth, K.U.; Gopalaswamy, A.M.
2009-01-01
1. Estimating animal density using capture-recapture data from arrays of detection devices such as camera traps has been problematic due to the movement of individuals and heterogeneity in capture probability among them induced by differential exposure to trapping. 2. We develop a spatial capture-recapture model for estimating density from camera-trapping data which contains explicit models for the spatial point process governing the distribution of individuals and their exposure to and detection by traps. 3. We adopt a Bayesian approach to analysis of the hierarchical model using the technique of data augmentation. 4. The model is applied to photographic capture-recapture data on tigers Panthera tigris in Nagarahole reserve, India. Using this model, we estimate the density of tigers to be 14.3 animals per 100 km2 during 2004. 5. Synthesis and applications. Our modelling framework largely overcomes several weaknesses in conventional approaches to the estimation of animal density from trap arrays. It effectively deals with key problems such as individual heterogeneity in capture probabilities, movement of individuals, presence of potential 'holes' in the array and ad hoc estimation of sample area. The formulation, thus, greatly enhances flexibility in the conduct of field surveys as well as in the analysis of data, from studies that may involve physical, photographic or DNA-based 'captures' of individual animals.
Williams, C R; Johnson, P H; Ball, T S; Ritchie, S A
2013-09-01
New mosquito control strategies centred on the modifying of populations require knowledge of existing population densities at release sites and an understanding of breeding site ecology. Using a quantitative pupal survey method, we investigated production of the dengue vector Aedes aegypti (L.) (Stegomyia aegypti) (Diptera: Culicidae) in Cairns, Queensland, Australia, and found that garden accoutrements represented the most common container type. Deliberately placed 'sentinel' containers were set at seven houses and sampled for pupae over 10 weeks during the wet season. Pupal production was approximately constant; tyres and buckets represented the most productive container types. Sentinel tyres produced the largest female mosquitoes, but were relatively rare in the field survey. We then used field-collected data to make estimates of per premises population density using three different approaches. Estimates of female Ae. aegypti abundance per premises made using the container-inhabiting mosquito simulation (CIMSiM) model [95% confidence interval (CI) 18.5-29.1 females] concorded reasonably well with estimates obtained using a standing crop calculation based on pupal collections (95% CI 8.8-22.5) and using BG-Sentinel traps and a sampling rate correction factor (95% CI 6.2-35.2). By first describing local Ae. aegypti productivity, we were able to compare three separate population density estimates which provided similar results. We anticipate that this will provide researchers and health officials with several tools with which to make estimates of population densities. PMID:23205694
Hierarchical models for estimating density from DNA mark-recapture studies
Gardner, B.; Royle, J. Andrew; Wegan, M.T.
2009-01-01
Genetic sampling is increasingly used as a tool by wildlife biologists and managers to estimate abundance and density of species. Typically, DNA is used to identify individuals captured in an array of traps (e.g., baited hair snares) from which individual encounter histories are derived. Standard methods for estimating the size of a closed population can be applied to such data. However, due to the movement of individuals on and off the trapping array during sampling, the area over which individuals are exposed to trapping is unknown, and so obtaining unbiased estimates of density has proved difficult. We propose a hierarchical spatial capture-recapture model which contains explicit models for the spatial point process governing the distribution of individuals and their exposure to (via movement) and detection by traps. Detection probability is modeled as a function of each individual's distance to the trap. We applied this model to a black bear (Ursus americanus) study conducted in 2006 using a hair-snare trap array in the Adirondack region of New York, USA. We estimated the density of bears to be 0.159 bears/km2, which is lower than the estimated density (0.410 bears/km2) based on standard closed population techniques. A Bayesian analysis of the model is fully implemented in the software program WinBUGS.
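The distance-dependent detection model mentioned above is commonly given a half-normal form in spatial capture-recapture work: the probability that a trap detects an individual falls off as g0 · exp(-d²/(2σ²)) with the distance d between the trap and the individual's activity center. A minimal sketch, with made-up g0 and σ values rather than the fitted bear parameters:

```python
import numpy as np

def detection_prob(activity_center, trap_xy, g0=0.2, sigma=1.5):
    """Half-normal detection model common in SCR: capture probability
    declines with squared distance between an individual's activity
    center and a trap as g0 * exp(-d^2 / (2 * sigma^2))."""
    d2 = np.sum((np.asarray(trap_xy) - np.asarray(activity_center)) ** 2, axis=-1)
    return g0 * np.exp(-d2 / (2.0 * sigma**2))

# One activity center and three traps at increasing distance (units arbitrary)
traps = np.array([[0.0, 0.0], [1.0, 0.0], [5.0, 5.0]])
center = np.array([0.0, 0.0])
p = detection_prob(center, traps)
```

The baseline rate g0 applies at zero distance, and σ sets the spatial scale of movement, which is what lets the model resolve the effective sampling area that ad hoc buffer methods must guess.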
Miller, Frederick J; Kaczmar, Swiatoslav W; Danzeisen, Ruth; Moss, Owen R
2013-12-01
Workplace air is monitored for overall dust levels and for specific components of the dust to determine compliance with occupational and workplace standards established by regulatory bodies for worker health protection. Exposure monitoring studies were conducted by the International Copper Association (ICA) at various industrial facilities around the world working with copper. Individual cascade impactor stages were weighed to determine the total amount of dust collected on the stage, and then the amounts of soluble and insoluble copper and other metals on each stage were determined; speciation was not determined. Filter samples were also collected for scanning electron microscope analysis. Retrospectively, there was an interest in obtaining estimates of alveolar lung burdens of copper in workers engaged in tasks requiring different levels of exertion as reflected by their minute ventilation. However, mechanistic lung dosimetry models estimate alveolar lung burdens based on particle Stokes diameter. In order to use these dosimetry models, the mass-based aerodynamic diameter distribution (which was measured) had to be transformed into a distribution of Stokes diameters, requiring an estimate of individual particle density. This density value was estimated by using cascade impactor data together with scanning electron microscopy data from filter samples. The developed method was applied to ICA monitoring data sets, and then the multiple path particle dosimetry (MPPD) model was used to determine the copper alveolar lung burdens for workers with different functional residual capacities engaged in activities requiring a range of minute ventilation levels. PMID:24304308
Wavelet-based multiscale window transform and energy and vorticity analysis
NASA Astrophysics Data System (ADS)
Liang, Xiang San
A new methodology, Multiscale Energy and Vorticity Analysis (MS-EVA), is developed to investigate sub-mesoscale, meso-scale, and large-scale dynamical interactions in geophysical fluid flows which are intermittent in space and time. The development begins with the construction of a wavelet-based functional analysis tool, the multiscale window transform (MWT), which is local, orthonormal, self-similar, and windowed on scale. The MWT is first built over the real line and then modified onto a finite domain. Its properties are explored, the most important one being the property of marginalization, which brings together a quadratic quantity in physical space with its phase space representation. Based on the MWT, the MS-EVA is developed. Energy and enstrophy equations for the large-, meso-, and sub-meso-scale windows are derived and their terms interpreted. The processes thus represented are classified into four categories: transport, transfer, conversion, and dissipation/diffusion. The separation of transport from transfer is made possible with the introduction of the concept of perfect transfer. By the property of marginalization, the classical energetic analysis proves to be a particular case of the MS-EVA. The MS-EVA is validated with classical instability problems. The validation is carried out in two steps. First, it is established that the barotropic and baroclinic instabilities are indicated by the spatial averages of certain transfer terms obtained from interaction analyses. Then calculations of these indicators are made for an Eady model and a Kuo model. The results agree precisely with what is expected from their analytical solutions, and the energetics reproduced reveal a consistent and important aspect of the unknown dynamic structures of instability processes. As an application, the MS-EVA is used to investigate the Iceland-Faeroe frontal (IFF) variability.
A MS-EVA-ready dataset is first generated through a forecasting study with the Harvard Ocean Prediction System, using the data gathered during the 1993 NRV Alliance cruise. The application starts with a determination of the scale window bounds, which characterize a double-peak structure in either the time wavelet spectrum or the space wavelet spectrum. The resulting energetics, when locally averaged, reveal that there is a clear baroclinic instability happening around the cold tongue intrusion observed in the forecast. Moreover, an interaction analysis shows that the energy released by the instability indeed goes to the meso-scale window and fuels the growth of the intrusion. The sensitivity study shows that, in this case, the key to a successful application is a correct decomposition of the large-scale window from the meso-scale window.
A Wiener-Wavelet-Based filter for de-noising satellite soil moisture retrievals
NASA Astrophysics Data System (ADS)
Massari, Christian; Brocca, Luca; Ciabatta, Luca; Moramarco, Tommaso; Su, Chun-Hsu; Ryu, Dongryeol; Wagner, Wolfgang
2014-05-01
The reduction of noise in microwave satellite soil moisture (SM) retrievals is of paramount importance for practical applications, especially those associated with the study of climate change, droughts, floods and other related hydrological processes. So far, Fourier-based methods have been used for de-noising satellite SM retrievals by filtering either the observed emissivity time series (Du, 2012) or the retrieved SM observations (Su et al., 2013). This contribution introduces an alternative approach based on a Wiener-Wavelet-Based filtering (WWB) technique, which uses the entropy-based wavelet de-noising method developed by Sang et al. (2009) to design both a causal and a non-causal version of the filter. WWB is used as a post-retrieval processing tool to enhance the quality of observations derived from i) the Advanced Microwave Scanning Radiometer for the Earth observing system (AMSR-E), ii) the Advanced SCATterometer (ASCAT), and iii) the Soil Moisture and Ocean Salinity (SMOS) satellite. The method is tested on three pilot sites located in Spain (Remedhus network), Greece (Hydrological Observatory of Athens) and Australia (Oznet network). Different quantitative criteria are used to judge the goodness of the de-noising technique. Results show that WWB i) is able to improve both the correlation and the root mean squared differences between satellite retrievals and in situ soil moisture observations, and ii) effectively separates random noise from deterministic components of the retrieved signals. Moreover, the use of WWB de-noised data in place of raw observations within a hydrological application confirms the usefulness of the proposed filtering technique.
References: Du, J. (2012), A method to improve satellite soil moisture retrievals based on Fourier analysis, Geophys. Res. Lett., 39, L15404, doi:10.1029/2012GL052435. Su, C.-H., D. Ryu, A. W. Western, and W. Wagner (2013), De-noising of passive and active microwave satellite soil moisture time series, Geophys. Res. Lett., 40, 3624-3630, doi:10.1002/grl.50695. Sang, Y.-F., D. Wang, J.-C. Wu, Q.-P. Zhu, and L. Wang (2009), Entropy-Based Wavelet De-noising Method for Time Series Analysis, Entropy, 11, 1123-1148, doi:10.3390/e11041123.
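Wavelet de-noising of a soil moisture series follows a transform-threshold-reconstruct cycle. The entropy-based method of Sang et al. (2009) chooses its thresholds from the coefficient entropy; as a much simpler stand-in, the sketch below applies one-level Haar soft-thresholding to a synthetic soil-moisture-like series (the signal shape, noise level and threshold are all invented for illustration):

```python
import numpy as np

def haar_denoise(x, threshold):
    """One-level Haar wavelet soft-thresholding (a minimal stand-in for
    entropy-based wavelet de-noising). x must have even length."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail coefficients
    d = np.sign(d) * np.maximum(np.abs(d) - threshold, 0.0)  # soft threshold
    y = np.empty_like(x)
    y[0::2] = (a + d) / np.sqrt(2.0)         # inverse transform
    y[1::2] = (a - d) / np.sqrt(2.0)
    return y

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 512)
clean = 0.25 + 0.1 * np.sin(2.0 * np.pi * 3.0 * t)   # slow "seasonal" component
noisy = clean + rng.normal(0.0, 0.03, t.size)        # retrieval noise
denoised = haar_denoise(noisy, threshold=0.05)
```

The threshold suppresses the fine-scale (noise-dominated) coefficients while leaving the slowly varying signal nearly untouched, which is the same separation of random noise from deterministic components that WWB exploits.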
Multiscale seismic characterization of marine sediments by using a wavelet-based approach
NASA Astrophysics Data System (ADS)
Ker, Stephan; Le Gonidec, Yves; Gibert, Dominique
2015-04-01
We propose a wavelet-based method to characterize acoustic impedance discontinuities from a multiscale analysis of reflected seismic waves. This method is developed in the framework of the wavelet response (WR), where dilated wavelets are used to sound a complex seismic reflector defined by a multiscale impedance structure. In the context of seismic imaging, we use the WR as multiscale seismic attributes, in particular ridge functions, which contain most of the information that quantifies the complex geometry of the reflector. We extend this approach by considering its application to the analysis of seismic data acquired with broadband but frequency-limited source signals. The band-pass filter associated with such actual sources distorts the WR: in order to remove these effects, we develop an original processing based on fractional derivatives of Lévy alpha-stable distributions in the formalism of the continuous wavelet transform (CWT). We demonstrate that the CWT of a seismic trace involving such a finite frequency bandwidth can be made equivalent to the CWT of the impulse response of the subsurface and is defined for a reduced range of dilations, controlled by the seismic source signal. In this dilation range, the multiscale seismic attributes are corrected for distortions and we can thus merge multiresolution seismic sources to increase the frequency range of the multiscale analysis. As a first demonstration, we perform the source correction with the high and very high resolution seismic sources of the SYSIF deep-towed seismic device and we show that both can now be perfectly merged into an equivalent seismic source with an improved frequency bandwidth (220-2200 Hz). Such multiresolution seismic data fusion allows reconstructing the acoustic impedance of the subseabed based on the inverse wavelet transform properties extended to the source-corrected WR.
We illustrate the potential of this approach with deep-water seismic data acquired during the ERIG3D cruise and we compare the results with the multiscale analysis performed on synthetic seismic data based on ground truth measurements.
Estimation of density-dependent mortality of juvenile bivalves in the Wadden Sea.
Andresen, Henrike; Strasser, Matthias; van der Meer, Jaap
2014-01-01
We investigated density-dependent mortality within the early months of life of the bivalves Macoma balthica (Baltic tellin) and Cerastoderma edule (common cockle) in the Wadden Sea. Mortality is thought to be density-dependent in juvenile bivalves, because there is no proportional relationship between the size of the reproductive adult stocks and the numbers of recruits for both species. It is not known, however, when exactly density dependence in the pre-recruitment phase occurs and how prevalent it is. The magnitude of recruitment determines year class strength in bivalves. Thus, understanding pre-recruit mortality will improve the understanding of population dynamics. We analyzed count data from three years of temporal sampling during the first months after bivalve settlement at ten transects in the Sylt-Rømø-Bay in the northern German Wadden Sea. Analyses of density dependence are sensitive to bias through measurement error. Measurement error was estimated by bootstrapping, and residual deviances were adjusted by adding process error. With simulations the effect of these two types of error on the estimate of the density-dependent mortality coefficient was investigated. In three out of eight time intervals density dependence was detected for M. balthica, and in zero out of six time intervals for C. edule. Biological or environmental stochastic processes dominated over density dependence at the investigated scale. PMID:25105293
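The core regression idea behind estimating a density-dependent mortality coefficient (without the paper's bootstrap and process-error machinery) can be sketched with noise-free synthetic counts: if the per-capita mortality rate rises linearly with initial density, the slope of log(N_t / N_{t+1}) against N_t recovers the density-dependence coefficient. All parameter values below are invented:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated transect counts: per-capita mortality m0 + m1 * N rises with density N.
m0, m1 = 0.5, 0.002          # density-independent and density-dependent coefficients
N_before = rng.uniform(50.0, 1000.0, 30)          # counts at start of interval
N_after = N_before * np.exp(-(m0 + m1 * N_before))  # counts surviving the interval

# Density dependence appears as a positive slope of log(N_t / N_{t+1})
# against initial density: slope -> m1, intercept -> m0.
slope, intercept = np.polyfit(N_before, np.log(N_before / N_after), 1)
```

With real data, measurement error in N_before biases this slope (regression dilution), which is why the study bootstraps the measurement error and inflates residual deviances with process error before interpreting the coefficient.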
Autocorrelation-based estimate of particle image density in particle image velocimetry
NASA Astrophysics Data System (ADS)
Warner, Scott O.
In Particle Image Velocimetry (PIV), the number of particle images per interrogation region, or particle image density, impacts the strength of the correlation and, as a result, the number of valid vectors and the measurement uncertainty. Therefore, any a priori estimate of the accuracy and uncertainty of PIV requires knowledge of the particle image density. An autocorrelation-based method for estimating the local, instantaneous particle image density is presented. Synthetic images were used to develop an empirical relationship based on how the autocorrelation peak magnitude varies with particle image density, particle image diameter, illumination intensity, interrogation region size, and background noise. This relationship was then tested using images from two experimental setups with different seeding densities and flow media. The experimental results were compared to image densities obtained using a local maximum method as well as manual particle counts, and the autocorrelation-based estimates were found to be robust. The effect of varying particle image intensities was also investigated and was found to affect the estimated particle image density.
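The monotone link between seeding density and autocorrelation peak magnitude, which the work above calibrates empirically against diameter, intensity, window size and noise, can be illustrated with synthetic images. The rendering model and all parameter values below are generic choices (a standard Gaussian particle-image model), not those of the thesis:

```python
import numpy as np

rng = np.random.default_rng(4)

def synthetic_piv_image(n_particles, size=64, diameter=3.0):
    """Render Gaussian particle images, I ~ exp(-8 r^2 / d^2),
    at uniformly random positions in a size x size window."""
    y, x = np.mgrid[0:size, 0:size]
    img = np.zeros((size, size))
    for px, py in rng.uniform(0, size, (n_particles, 2)):
        img += np.exp(-8.0 * ((x - px) ** 2 + (y - py) ** 2) / diameter**2)
    return img

def autocorr_peak(img):
    """Zero-lag magnitude of the mean-subtracted (circular) autocorrelation,
    computed via FFT and normalized by the window area."""
    f = np.fft.fft2(img - img.mean())
    return np.fft.ifft2(f * np.conj(f)).real.max() / img.size

peak_sparse = autocorr_peak(synthetic_piv_image(5))
peak_dense = autocorr_peak(synthetic_piv_image(40))
```

More particles deposit more fluctuating signal energy in the window, so the zero-lag peak grows with density; inverting an empirically calibrated version of that curve yields the density estimate.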
Estimating food portions. Influence of unit number, meal type and energy density
Almiron-Roig, Eva; Solis-Trapala, Ivonne; Dodd, Jessica; Jebb, Susan A.
2013-01-01
Estimating how much is appropriate to consume can be difficult, especially for foods presented in multiple units, those with ambiguous energy content and for snacks. This study tested the hypothesis that the number of units (single vs. multi-unit), meal type and food energy density disrupt accurate estimates of portion size. Thirty-two healthy-weight men and women attended the laboratory on 3 separate occasions to assess the number of portions contained in 33 foods or beverages of varying energy density (1.7–26.8 kJ/g). Items included 12 multi-unit and 21 single-unit foods; 13 were labelled as meals, 4 as drinks and 16 as snacks. Departures in portion estimates from reference amounts were analysed with negative binomial regression. Overall, participants tended to underestimate the number of portions displayed. Males showed greater errors in estimation than females (p=0.01). Single-unit foods and those labelled as meals or beverages were estimated with greater error than multi-unit and snack foods (p=0.02 and p<0.001, respectively). The number of portions of high-energy-density foods was overestimated, while the number of portions of beverages and medium-energy-density foods was underestimated by 30–46%. In conclusion, participants tended to underestimate the reference portion size for a range of foods and beverages, especially single-unit foods and foods of low energy density, and, unexpectedly, overestimated the reference portion of high-energy-density items. There is a need for better consumer education on appropriate portion sizes to aid adherence to a healthy diet. PMID:23932948
Breast percent density estimation from 3D reconstructed digital breast tomosynthesis images
NASA Astrophysics Data System (ADS)
Bakic, Predrag R.; Kontos, Despina; Carton, Ann-Katherine; Maidment, Andrew D. A.
2008-03-01
Breast density is an independent risk factor for breast cancer. In mammograms, breast density is quantitatively measured as percent density (PD), the percentage of dense (non-fatty) tissue. To date, clinical estimates of PD have varied significantly, in part due to the projective nature of mammography. Digital breast tomosynthesis (DBT) is a 3D imaging modality in which cross-sectional images are reconstructed from a small number of projections acquired at different x-ray tube angles. Preliminary studies suggest that DBT is superior to mammography in tissue visualization, since superimposed anatomical structures present in mammograms are filtered out. We hypothesize that DBT could also provide a more accurate breast density estimation. In this paper, we propose to estimate PD from reconstructed DBT images using a semi-automated thresholding technique. Preprocessing is performed to exclude the image background and the area of the pectoral muscle. Threshold values are selected manually from a small number of reconstructed slices; a combination of these thresholds is applied to each slice throughout the entire reconstructed DBT volume. The proposed method was validated using images of women with recently detected abnormalities or with biopsy-proven cancers; only contralateral breasts were analyzed. The Pearson correlation and kappa coefficients between the breast density estimates from DBT and the corresponding digital mammogram indicate moderate agreement between the two modalities, comparable with our previous results from 2D DBT projections. Percent density appears to be a robust measure for breast density assessment in both 2D and 3D x-ray breast imaging modalities using thresholding.
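The thresholding step described above reduces, in essence, to counting supra-threshold voxels inside the segmented breast volume. A minimal Python sketch (the study selects thresholds manually per slice and combines them; here a single hypothetical threshold and a synthetic volume stand in for that step):

```python
import numpy as np

def percent_density(volume, breast_mask, threshold):
    """Percent density from a reconstructed DBT volume.

    volume      -- 3D array of reconstructed slices
    breast_mask -- boolean mask excluding background and pectoral muscle
    threshold   -- intensity above which a voxel counts as dense tissue
    """
    breast = volume[breast_mask]          # voxels inside the breast only
    dense = breast > threshold            # voxels classified as dense
    return 100.0 * dense.sum() / breast.size

# Toy volume: exactly half of the masked voxels exceed the threshold
vol = np.zeros((4, 32, 32))
vol[:, :, :16] = 1.0                      # "dense" half of each slice
mask = np.ones_like(vol, dtype=bool)
pd = percent_density(vol, mask, threshold=0.5)   # -> 50.0
```

In practice the mask and the per-slice thresholds would come from the preprocessing and manual selection steps the abstract describes.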
Change-point detection in time-series data by relative density-ratio estimation.
Liu, Song; Yamada, Makoto; Collier, Nigel; Sugiyama, Masashi
2013-07-01
The objective of change-point detection is to discover abrupt property changes lying behind time-series data. In this paper, we present a novel statistical change-point detection algorithm based on non-parametric divergence estimation between time-series samples from two retrospective segments. Our method uses the relative Pearson divergence as a divergence measure, and it is accurately and efficiently estimated by a method of direct density-ratio estimation. Through experiments on artificial and real-world datasets including human-activity sensing, speech, and Twitter messages, we demonstrate the usefulness of the proposed method. PMID:23500502
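The direct density-ratio step can be sketched with a RuLSIF-style estimator of the relative Pearson divergence (a Gaussian-kernel model fitted in closed form by regularized least squares). The kernel width, regularization and alpha below are hypothetical choices, and the toy score simply compares two retrospective segments:

```python
import numpy as np

def relative_pearson_divergence(X, Y, alpha=0.1, sigma=1.0, lam=0.1):
    """Estimate the alpha-relative Pearson divergence between samples X and Y.

    The relative density ratio p(x) / (alpha*p(x) + (1-alpha)*q(x)) is modeled
    as a Gaussian-kernel expansion centered at the samples of X and fitted by
    regularized least squares (closed form).
    """
    def K(A, B):  # Gaussian kernel matrix, centers at the rows of B
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))
    Phi_x, Phi_y = K(X, X), K(Y, X)
    H = (alpha * Phi_x.T @ Phi_x / len(X)
         + (1.0 - alpha) * Phi_y.T @ Phi_y / len(Y))
    h = Phi_x.mean(axis=0)
    theta = np.linalg.solve(H + lam * np.eye(len(h)), h)
    wx, wy = Phi_x @ theta, Phi_y @ theta
    # empirical alpha-relative Pearson divergence
    return (-alpha * (wx ** 2).mean() / 2
            - (1.0 - alpha) * (wy ** 2).mean() / 2
            + wx.mean() - 0.5)

rng = np.random.default_rng(0)
before = rng.normal(0.0, 1.0, (50, 1))   # segment before a candidate point
after = rng.normal(5.0, 1.0, (50, 1))    # segment after an abrupt mean shift
same = rng.normal(0.0, 1.0, (50, 1))     # stationary control segment
score_change = relative_pearson_divergence(before, after)
score_null = relative_pearson_divergence(before, same)
```

A change-point detector slides this score over consecutive segment pairs and flags peaks; the shift case should score well above the stationary case.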
Estimating Densities of the Pest Halotydeus destructor (Acari: Penthaleidae) in Canola.
Arthur, Aston L; Hoffmann, Ary A; Umina, Paul A
2014-12-01
Development of sampling techniques to effectively estimate invertebrate densities in the field is essential for the implementation of pest control programs, particularly when making informed spray decisions around economic thresholds. In this article, we investigated the influence of several factors to devise a sampling strategy to estimate Halotydeus destructor Tucker densities in a canola paddock. Direct visual counts were found to be the most suitable approach for estimating mite numbers, with higher densities detected than with the vacuum sampling method. Visual assessments were affected by the operator, sampling date, and time of day; however, with the exception of operator (the more experienced operator detected higher numbers of mites), no obvious trends were detected. No patterns were found between H. destructor numbers and ambient temperature, relative humidity, wind speed, cloud cover, or soil surface conditions, indicating that these factors may not be of high importance when sampling mites during the autumn and winter months. We show further support for an aggregated distribution of H. destructor within paddocks, indicating that a stratified random sampling program is likely to be most appropriate. Together, these findings provide important guidelines for Australian growers around the ability to effectively and accurately estimate H. destructor densities. PMID:26470087
BINOMIAL SAMPLING TO ESTIMATE CITRUS RUST MITE (ACARI: ERIOPHYIDAE) DENSITIES ON ORANGE FRUIT
Technology Transfer Automated Retrieval System (TEKTRAN)
Binomial sampling based on the proportion of samples infested was investigated as a method for estimating mean densities of citrus rust mites, Phyllocoptruta oleivora (Ashmead) and Aculops pelekassi (Keifer), on oranges. Data for the investigation were obtained by counting the number of motile mites...
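As a hedged illustration of the binomial-sampling idea: if mite counts per fruit followed a Poisson distribution (an assumption made only for this sketch; the study above fits empirical count data rather than this textbook model), the mean density m and the proportion of infested samples p are linked by p = 1 - exp(-m), so mean density can be back-calculated from incidence alone:

```python
import math

def mean_density_from_incidence(p):
    """Invert p = 1 - exp(-m) (Poisson dispersion assumption) for mean density m."""
    return -math.log(1.0 - p)

# 60% of fruit samples infested -> roughly 0.92 motile mites per sample
m = mean_density_from_incidence(0.6)
```

Aggregated (clumped) mite distributions would require a negative-binomial rather than Poisson link, which changes the inversion.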
Technology Transfer Automated Retrieval System (TEKTRAN)
Hydrologic and morphological properties of claypan landscapes cause variability in soybean root and shoot biomass. This study was conducted to develop predictive models of soybean root length density distribution (RLDd) using direct measurements and sensor based estimators of claypan morphology. A c...
USING AERIAL HYPERSPECTRAL REMOTE SENSING IMAGERY TO ESTIMATE CORN PLANT STAND DENSITY
Technology Transfer Automated Retrieval System (TEKTRAN)
Since corn plant stand density is important for optimizing crop yield, several researchers have recently developed ground-based systems for automatic measurement of this crop growth parameter. Our objective was to use data from such a system to assess the potential for estimation of corn plant stan...
Unbiased Estimate of Dark Energy Density from Type Ia Supernova Data
NASA Astrophysics Data System (ADS)
Wang, Yun; Lovelace, Geoffrey
2001-12-01
Type Ia supernovae (SNe Ia) are currently the best probes of the dark energy in the universe. To constrain the nature of dark energy, we assume a flat universe and that the weak energy condition is satisfied, and we allow the density of dark energy, ρX(z), to be an arbitrary function of redshift. Using simulated data from a space-based SN pencil-beam survey, we find that by optimizing the number of parameters used to parameterize the dimensionless dark energy density, f(z) = ρX(z)/ρX(z=0), we can obtain an unbiased estimate of both f(z) and the fractional matter density of the universe, Ωm. A plausible SN pencil-beam survey (with a square degree field of view and for an observational duration of 1 yr) can yield about 2000 SNe Ia with 0 ≤ z ≤ 2. Such a survey in space would yield SN peak luminosities with a combined intrinsic and observational dispersion of σ(m_int) = 0.16 mag. We find that for such an idealized survey, Ωm can be measured to 10% accuracy, and the dark energy density can be estimated to ~20% to z~1.5, and ~20%-40% to z~2, depending on the time dependence of the true dark energy density. Dark energy densities that vary more slowly can be more accurately measured. For the anticipated Supernova/Acceleration Probe (SNAP) mission, Ωm can be measured to 14% accuracy, and the dark energy density can be estimated to ~20% to z~1.2. Our results suggest that SNAP may gain much sensitivity to the time dependence of the dark energy density and Ωm by devoting more observational time to the central pencil-beam fields to obtain more SNe Ia at z>1.2. We use both a maximum likelihood analysis and a Monte Carlo analysis (when appropriate) to determine the errors of estimated parameters. We find that the Monte Carlo analysis gives a more accurate estimate of the dark energy density than the maximum likelihood analysis.
Robel, G.L.; Fisher, W.L.
1999-01-01
Production of and consumption by hatchery-reared fingerling (age-0) smallmouth bass Micropterus dolomieu at various simulated stocking densities were estimated with a bioenergetics model. Fish growth rates and pond water temperatures during the 1996 growing season at two hatcheries in Oklahoma were used in the model. Fish growth and simulated consumption and production differed greatly between the two hatcheries, probably because of differences in pond fertilization and mortality rates. Our results suggest that appropriate stocking density depends largely on prey availability as affected by pond fertilization and on fingerling mortality rates. The bioenergetics model provided a useful tool for estimating production at various stocking densities. However, verification of physiological parameters for age-0 fish of hatchery-reared species is needed.
NASA Technical Reports Server (NTRS)
Garber, Donald P.
1993-01-01
A probability density function for the variability of ensemble averaged spectral estimates from helicopter acoustic signals in Gaussian background noise was evaluated. Numerical methods for calculating the density function and for determining confidence limits were explored. Density functions were predicted for both synthesized and experimental data and compared with observed spectral estimate variability.
Quantitative analysis for breast density estimation in low dose chest CT scans.
Moon, Woo Kyung; Lo, Chung-Ming; Goo, Jin Mo; Bae, Min Sun; Chang, Jung Min; Huang, Chiun-Sheng; Chen, Jeon-Hor; Ivanova, Violeta; Chang, Ruey-Feng
2014-03-01
A computational method was developed for the measurement of breast density using chest computed tomography (CT) images and for assessing the correlation between that measurement and mammographic density. Sixty-nine asymptomatic Asian women (138 breasts) were studied. With the lung area and pectoralis muscle line marked in a template slice, the demons algorithm was applied to the consecutive CT slices to automatically generate the defined breast area. The breast area was then analyzed using fuzzy c-means clustering to separate fibroglandular tissue from fat tissue. The fibroglandular clusters obtained from all CT slices were summed and then divided by the summation of the total breast area to calculate the percent density for CT. The results were compared with the density estimated from mammographic images. For CT breast density, the coefficients of variation of intraoperator and interoperator measurements were 3.00% (0.59%-8.52%) and 3.09% (0.20%-6.98%), respectively. Breast density measured from CT (22 ± 0.6%) was lower than that of mammography (34 ± 1.9%), with a Pearson correlation coefficient of r=0.88. The results suggested that breast density measured from chest CT images correlated well with that from mammography. Reproducible 3D information on breast density can be obtained with the proposed CT-based quantification methods. PMID:24643751
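The fibroglandular/fat separation can be sketched with a generic two-cluster fuzzy c-means on voxel intensities (this is textbook FCM, not the authors' exact implementation, and the intensity values below are synthetic):

```python
import numpy as np

def fuzzy_cmeans_1d(x, m=2.0, n_iter=50):
    """Two-cluster fuzzy c-means on scalar intensities.

    Returns cluster centers and the 2 x n membership matrix.
    """
    centers = np.array([x.min(), x.max()])          # deterministic init
    for _ in range(n_iter):
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12
        u = d ** (-2.0 / (m - 1.0))                 # fuzzy memberships
        u /= u.sum(axis=0, keepdims=True)
        um = u ** m
        centers = (um * x).sum(axis=1) / um.sum(axis=1)
    return centers, u

# synthetic breast-area intensities: 70% fat-like, 30% fibroglandular-like
rng = np.random.default_rng(1)
vox = np.concatenate([rng.normal(0.2, 0.03, 700),
                      rng.normal(0.8, 0.03, 300)])
centers, u = fuzzy_cmeans_1d(vox)
dense = np.argmax(centers)                          # higher-intensity cluster
percent_density = 100.0 * (u[dense] > 0.5).sum() / vox.size
```

With these synthetic modes the recovered percent density is close to the planted 30%; real CT slices would of course need the segmentation and registration steps described above first.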
Population density estimated from locations of individuals on a passive detector array.
Efford, Murray G; Dawson, Deanna K; Borchers, David L
2009-10-01
The density of a closed population of animals occupying stable home ranges may be estimated from detections of individuals on an array of detectors, using newly developed methods for spatially explicit capture-recapture. Likelihood-based methods provide estimates for data from multi-catch traps or from devices that record presence without restricting animal movement ("proximity" detectors such as camera traps and hair snags). As originally proposed, these methods require multiple sampling intervals. We show that equally precise and unbiased estimates may be obtained from a single sampling interval, using only the spatial pattern of detections. This considerably extends the range of possible applications, and we illustrate the potential by estimating density from simulated detections of bird vocalizations on a microphone array. Acoustic detection can be defined as occurring when received signal strength exceeds a threshold. We suggest detection models for binary acoustic data, and for continuous data comprising measurements of all signals above the threshold. While binary data are often sufficient for density estimation, modeling signal strength improves precision when the microphone array is small. PMID:19886477
Surface estimates of the Atlantic overturning in density space in an eddy-permitting ocean model
NASA Astrophysics Data System (ADS)
Grist, Jeremy P.; Josey, Simon A.; Marsh, Robert
2012-06-01
A method to estimate the variability of the Atlantic meridional overturning circulation (AMOC) from surface observations is investigated using an eddy-permitting ocean-only model (ORCA-025). The approach is based on the estimate of dense water formation from surface density fluxes. Analysis using 78 years of two repeat-forcing model runs reveals that the surface forcing-based estimate accounts for over 60% of the interannual AMOC variability in σ0 coordinates between 37°N and 51°N. The analysis provides correlations between surface-forced and actual overturning that exceed those obtained in an earlier analysis of a coarser-resolution coupled model. Our results indicate that, in accordance with theoretical considerations behind the method, it provides a better estimate of the overturning in density coordinates than in z coordinates in subpolar latitudes. By considering shorter segments of the model run, it is shown that correlations are particularly enhanced by the method's ability to capture large decadal-scale AMOC fluctuations. The inclusion of the anomalous Ekman transport increases the amount of variance explained by an average of 16% throughout the North Atlantic and provides the greatest potential for estimating the variability of the AMOC in density space between 33°N and 54°N. In that latitude range, 70-84% of the variance is explained and the root-mean-square difference is less than 1 Sv when the full run is considered.
Distributed Density Estimation Based on a Mixture of Factor Analyzers in a Sensor Network
Wei, Xin; Li, Chunguang; Zhou, Liang; Zhao, Li
2015-01-01
Distributed density estimation in sensor networks has received much attention due to its broad applicability. When encountering high-dimensional observations, a mixture of factor analyzers (MFA) is used in place of a mixture of Gaussians for describing the distributions of observations. In this paper, we study distributed density estimation based on a mixture of factor analyzers. Existing estimation algorithms for the MFA are for the centralized case and are not suitable for distributed processing in sensor networks. We present distributed density estimation algorithms for the MFA and its extension, the mixture of Student's t-factor analyzers (MtFA). We first define an objective function as the linear combination of local log-likelihoods. Then, we give the derivation of the distributed estimation algorithms for the MFA and MtFA in detail. In these algorithms, the local sufficient statistics (LSS) are calculated first and diffused. Then, each node performs a linear combination of the LSS received from nodes in its neighborhood to obtain the combined sufficient statistics (CSS). Parameters of the MFA and the MtFA can be obtained by using the CSS. Finally, we evaluate the performance of these algorithms by numerical simulations and an application example. Experimental results validate the promising performance of the proposed algorithms. PMID:26251903
NASA Astrophysics Data System (ADS)
Erkyihun, S. T.
2013-12-01
Understanding streamflow variability and the ability to generate realistic scenarios at multi-decadal time scales are important for robust water resources planning and management in any river basin, more so on the Colorado River Basin with its semi-arid climate and highly stressed water resources. It is increasingly evident that large-scale climate forcings such as the El Niño Southern Oscillation (ENSO), Pacific Decadal Oscillation (PDO) and Atlantic Multi-decadal Oscillation (AMO) modulate the Colorado River Basin hydrology at multi-decadal time scales. Thus, modeling these large-scale climate indicators is important for then conditionally modeling the multi-decadal streamflow variability. To this end, we developed a simulation model that combines a wavelet-based time series method, Wavelet Auto Regressive Moving Average (WARMA), with a K-nearest neighbor (K-NN) bootstrap approach. In this, for a given time series (climate forcings), dominant periodicities/frequency bands are identified from the wavelet spectrum as those that pass the 90% significance test. The time series is filtered at these frequencies in each band to create 'components'; the components are orthogonal and, when added to the residual (i.e., noise), recover the original time series. The components, being smooth, are easily modeled using parsimonious Auto Regressive Moving Average (ARMA) time series models. The fitted ARMA models are used to simulate the individual components, which are added to obtain a simulation of the original series. The WARMA approach is applied to all the climate forcing indicators to simulate multi-decadal sequences of these forcings.
For the current year, the simulated forcings are considered the 'feature vector' and its K nearest neighbors are identified; one of the neighbors (i.e., one of the historical years) is resampled using a weighted probability metric (with more weight to the nearest neighbor and least to the farthest), and the corresponding streamflow is the simulated value for the current year. We applied this simulation approach to the climate indicators and streamflow at Lees Ferry, AZ in the Colorado River Basin, which is a key gauge on the river, using data from the observational and paleo periods together spanning 1650-2005. A suite of distributional statistics such as the probability density function (PDF), mean, variance, skew and lag-1 autocorrelation, along with higher order and multi-decadal statistics such as spectra and drought and surplus statistics, were computed to check the performance of the flow simulation in capturing the variability of the historic and paleo periods. Our results indicate that this approach robustly reproduces all of the above mentioned statistical properties. This offers an attractive alternative for near-term (interannual to multi-decadal) flow simulation that is critical for water resources planning.
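A compressed sketch of the WARMA + K-NN pipeline (with an FFT band-pass standing in for the wavelet filter, AR(1) standing in for the full ARMA fit, and synthetic index/flow series; every name and parameter here is illustrative, not the authors' configuration):

```python
import numpy as np

rng = np.random.default_rng(42)

def bandpass(x, lo, hi):
    """Crude stand-in for the wavelet band filter: keep FFT power in [lo, hi)."""
    F = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x))
    F[(f < lo) | (f >= hi)] = 0.0
    return np.fft.irfft(F, len(x))

def fit_ar1(c):
    """Least-squares AR(1) coefficient and innovation scale for a component."""
    phi = np.dot(c[:-1], c[1:]) / np.dot(c[:-1], c[:-1])
    return phi, (c[1:] - phi * c[:-1]).std()

def simulate_ar1(phi, s, n):
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + s * rng.standard_normal()
    return x

def knn_resample(value, hist_index, hist_flow, k=5):
    """Weighted K-NN bootstrap: resample the flow of a similar historical year."""
    order = np.argsort(np.abs(hist_index - value))[:k]
    w = 1.0 / np.arange(1, k + 1)          # more weight to nearer neighbors
    return hist_flow[rng.choice(order, p=w / w.sum())]

# synthetic climate index with a decadal-like cycle, and flow tied to it
n = 256
t = np.arange(n)
index = np.sin(2 * np.pi * t / 32) + 0.5 * rng.standard_normal(n)
flow = 100.0 + 20.0 * index + 5.0 * rng.standard_normal(n)

comp = bandpass(index, 0.02, 0.06)         # band around the 1/32 cycle
resid = index - comp
sim_index = sum(simulate_ar1(*fit_ar1(c), n) for c in (comp, resid))
sim_flow = np.array([knn_resample(v, index, flow) for v in sim_index])
```

Because flows are resampled from history, the simulated traces inherit the observed marginal distribution while the WARMA-simulated index carries the low-frequency variability.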
Estimations of population density for selected periods between the Neolithic and AD 1800.
Zimmermann, Andreas; Hilpert, Johanna; Wendt, Karl Peter
2009-04-01
We describe a combination of methods applied to obtain reliable estimations of population density using archaeological data. The combination is based on a hierarchical model of scale levels. The necessary data and methods used to obtain the results are chosen so as to define transfer functions from one scale level to another. We apply our method to data sets from western Germany that cover early Neolithic, Iron Age, Roman, and Merovingian times as well as historical data from AD 1800. Error margins and natural and historical variability are discussed. Our results for nonstate societies are always lower than conventional estimations compiled from the literature, and we discuss the reasons for this finding. Finally, we compare the calculated local and global population densities with other estimations from different parts of the world. PMID:19943751
NASA Astrophysics Data System (ADS)
Park, J.; Lühr, H.; Stolle, C.; Malhotra, G.; Baker, J. B. H.; Buchert, S.; Gill, R.
2015-07-01
Plasma convection in the high-latitude ionosphere provides important information about magnetosphere-ionosphere-thermosphere coupling. In this study we estimate the along-track component of plasma convection within and around the polar cap, using electron density profiles measured by the three Swarm satellites. The velocity values estimated from the two different satellite pairs agree with each other. In both hemispheres the estimated velocity is generally anti-sunward, especially for higher speeds. The obtained velocity is in qualitative agreement with Super Dual Auroral Radar Network data. Our method can supplement currently available instruments for ionospheric plasma velocity measurements, especially in cases where these traditional instruments suffer from their inherent limitations. Also, the method can be generalized to other satellite constellations carrying electron density probes.
Estimating effective data density in a satellite retrieval or an objective analysis
NASA Technical Reports Server (NTRS)
Purser, R. J.; Huang, H.-L.
1993-01-01
An attempt is made to formulate consistent objective definitions of the concept of 'effective data density' applicable both in the context of satellite soundings and more generally in objective data analysis. The definitions based upon various forms of Backus-Gilbert 'spread' functions are found to be seriously misleading in satellite soundings where the model resolution function (expressing the sensitivity of retrieval or analysis to changes in the background error) features sidelobes. Instead, estimates derived by smoothing the trace components of the model resolution function are proposed. The new estimates are found to be more reliable and informative in simulated satellite retrieval problems and, for the special case of uniformly spaced perfect observations, agree exactly with their actual density. The new estimates integrate to the 'degrees of freedom for signal', a diagnostic that is invariant to changes of units or coordinates used.
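A toy numerical sketch of the proposed diagnostic (all matrices synthetic): the model resolution matrix of a damped least-squares retrieval has diagonal entries whose smoothed values serve as a local effective data density, and whose trace is the 'degrees of freedom for signal':

```python
import numpy as np

# toy linear observation problem: m unknowns on a grid, n observations
rng = np.random.default_rng(1)
m, n, lam = 60, 25, 0.5
G = rng.standard_normal((n, m))            # forward (observation) operator

# model resolution matrix of the damped least-squares inverse:
# R = (G^T G + lam*I)^{-1} G^T G
R = np.linalg.solve(G.T @ G + lam * np.eye(m), G.T @ G)

# smoothed diagonal (trace components) of R as local effective data density
kernel = np.ones(5) / 5
density = np.convolve(np.diag(R), kernel, mode="same")

dfs = np.trace(R)                          # degrees of freedom for signal
```

The eigenvalues of R lie in (0, 1) and at most n of them are nonzero, so the trace, and hence the integrated density up to edge effects of the moving-average smoother, is bounded by the number of observations.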
Density estimation of small-mammal populations using a trapping web and distance sampling methods
Anderson, David R.; Burnham, Kenneth P.; White, Gary C.; Otis, David L.
1983-01-01
Distance sampling methodology is adapted to enable animal density (number per unit of area) to be estimated from capture-recapture and removal data. A trapping web design provides the link between capture data and distance sampling theory. The estimator of density is D = M_{t+1} f(0), where M_{t+1} is the number of individuals captured and f(0) is computed from the M_{t+1} distances from the web center to the traps in which those individuals were first captured. It is possible to check qualitatively the critical assumption on which the web design and the estimator are based. This is a conceptual paper outlining a new methodology, not a definitive investigation of the best specific way to implement this method. Several alternative sampling and analysis methods are possible within the general framework of distance sampling theory; a few alternatives are discussed and an example is given.
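The form of the estimator can be illustrated numerically. In this sketch f(0) comes from a half-normal fit to the first-capture distances (one common detection-function choice, assumed here for illustration; the scaling by web geometry and units is omitted):

```python
import numpy as np

def density_estimate(distances):
    """D-hat = M_{t+1} * f-hat(0), with f-hat(0) from a half-normal fit.

    distances -- distances from the web center to the trap of first capture.
    (Web-geometry and unit scaling are deliberately omitted in this toy.)
    """
    M = len(distances)                            # M_{t+1}: individuals captured
    sigma2 = np.mean(np.asarray(distances) ** 2)  # half-normal MLE of sigma^2
    f0 = np.sqrt(2.0 / (np.pi * sigma2))          # half-normal pdf at zero
    return M * f0

rng = np.random.default_rng(7)
d = np.abs(rng.normal(0.0, 10.0, size=200))       # true sigma = 10
D_hat = density_estimate(d)
```

With 200 captures and sigma = 10 the estimate lands near 200 * sqrt(2/(100*pi)) ≈ 16; in a real analysis f(0) would be estimated with the full distance-sampling machinery and model selection.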
Reader Variability in Breast Density Estimation from Full-Field Digital Mammograms
Keller, Brad M.; Nathan, Diane L.; Gavenonis, Sara C.; Chen, Jinbo; Conant, Emily F.; Kontos, Despina
2013-01-01
Rationale and Objectives Mammographic breast density, a strong risk factor for breast cancer, may be measured as either a relative percentage of dense (ie, radiopaque) breast tissue or as an absolute area from either raw (ie, for processing) or vendor postprocessed (ie, for presentation) digital mammograms. Given the increasing interest in the incorporation of mammographic density in breast cancer risk assessment, the purpose of this study is to determine the inherent reader variability in breast density assessment from raw and vendor-processed digital mammograms, because inconsistent estimates could lead to misclassification of an individual woman's risk for breast cancer. Materials and Methods Bilateral, mediolateral-oblique view, raw, and processed digital mammograms of 81 women were retrospectively collected for this study (N = 324 images). Mammographic percent density and absolute dense tissue area estimates for each image were obtained from two radiologists using a validated, interactive software tool. Results The variability of interreader agreement was not found to be affected by the image presentation style (ie, raw or processed; F-test: P > .5). Interreader estimates of relative and absolute breast density are strongly correlated (Pearson r > 0.84, P < .001) but systematically different (t-test, P < .001) between the two readers. Conclusion Our results show that mammographic density may be assessed with equal reliability from either raw or vendor postprocessed images. Furthermore, our results suggest that the primary source of density variability comes from the subjectivity of the individual reader in assessing the absolute amount of dense tissue present in the breast, indicating the need to use standardized tools to mitigate this effect. PMID:23465381
Estimating absolute salinity (SA) in the world's oceans using density and composition
NASA Astrophysics Data System (ADS)
Woosley, Ryan J.; Huang, Fen; Millero, Frank J.
2014-11-01
The practical (Sp) and reference (SR) salinities do not account for variations in physical properties such as density and enthalpy. Trace and minor components of seawater, such as nutrients or inorganic carbon affect these properties. This limitation has been recognized and several studies have been made to estimate the effect of these compositional changes on the conductivity-density relationship. These studies have been limited in number and geographic scope. Here, we combine the measurements of previous studies with new measurements for a total of 2857 conductivity-density measurements, covering all of the world's major oceans, to derive empirical equations for the effect of silica and total alkalinity on the density and absolute salinity of the global oceans and to recommend an equation applicable to most of the world's oceans. The potential impact on salinity as a result of uptake of anthropogenic CO2 is also discussed.
Analysis of percent density estimates from digital breast tomosynthesis projection images
NASA Astrophysics Data System (ADS)
Bakic, Predrag R.; Kontos, Despina; Zhang, Cuiping; Yaffe, Martin J.; Maidment, Andrew D. A.
2007-03-01
Women with dense breasts have an increased risk of breast cancer. Breast density is typically measured as the percent density (PD), the percentage of non-fatty (i.e., dense) tissue in breast images. Mammographic PD estimates vary, in part, due to the projective nature of mammograms. Digital breast tomosynthesis (DBT) is a novel radiographic method in which 3D images of the breast are reconstructed from a small number of projection (source) images, acquired at different positions of the x-ray focus. DBT provides superior visualization of breast tissue and has improved sensitivity and specificity as compared to mammography. Our long-term goal is to test the hypothesis that PD obtained from DBT is superior in estimating cancer risk compared with other modalities. As a first step, we have analyzed the PD estimates from DBT source projections, since the results would be independent of the reconstruction method. We estimated PD from MLO mammograms (PD_M) and from individual DBT projections (PD_T). We observed good agreement between PD_M and PD_T from the central projection images of 40 women. This suggests that variations in breast positioning, dose, and scatter between mammography and DBT do not negatively affect PD estimation. The PD_T estimated from individual DBT projections of nine women varied with the angle between the projections. This variation is caused by the 3D arrangement of the breast dense tissue and the acquisition geometry.
NASA Astrophysics Data System (ADS)
Semler, Lindsay; Dettori, Lucia
The research presented in this article is aimed at developing an automated imaging system for classification of tissues in medical images obtained from Computed Tomography (CT) scans. The article focuses on using multi-resolution texture analysis, specifically: the Haar wavelet, Daubechies wavelet, Coiflet wavelet, and the ridgelet. The algorithm consists of two steps: automatic extraction of the most discriminative texture features of regions of interest and creation of a classifier that automatically identifies the various tissues. The classification step is implemented using a cross-validation Classification and Regression Tree approach. A comparison of wavelet-based and ridgelet-based algorithms is presented. Tests on a large set of chest and abdomen CT images indicate that, among the three wavelet-based algorithms, the one using texture features derived from the Haar wavelet transform clearly outperforms the one based on Daubechies and Coiflet transform. The tests also show that the ridgelet-based algorithm is significantly more effective and that texture features based on the ridgelet transform are better suited for texture classification in CT medical images.
NASA Astrophysics Data System (ADS)
Zhang, Xin; Zhang, Haijiang
2015-10-01
It has been a challenge to image velocity changes in real time by seismic travel time tomography. If more seismic events are included in the tomographic system, the inverted velocity models do not have the necessary time resolution to resolve velocity changes. But if fewer events are used for real-time tomography, the system is less stable and the inverted model may contain artifacts, so resolved velocity changes may not be real. To mitigate these issues, we propose a wavelet-based time-dependent double-difference (DD) tomography method. The new method combines the multiscale property of the wavelet representation and the fast-converging property of the simultaneous algebraic reconstruction technique to solve for the velocity models at multiple scales for sequential time segments. We first test the new method using synthetic data constructed with the real event and station distribution for Mount Etna volcano in Italy. Then we show its effectiveness in determining velocity changes for the 2001 and 2002 eruptions of Mount Etna volcano. Compared to standard DD tomography that uses seismic events from a longer time period, wavelet-based time-dependent tomography better resolves velocity changes that may be caused by fracture closure and opening as well as fluid migration before and after volcano eruptions.
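The simultaneous algebraic reconstruction technique (SART) mentioned above can be sketched as follows. This is generic SART on a synthetic full-rank system, not the authors' wavelet-domain solver; the matrix sizes and relaxation factor are arbitrary:

```python
import numpy as np

def sart(A, b, n_iter=300, relax=0.5):
    """Simultaneous algebraic reconstruction technique for A x = b (A >= 0).

    Each sweep applies x <- x + relax * A^T((b - A x) / rowsum) / colsum,
    i.e. residuals normalized per ray, updates normalized per model cell.
    """
    x = np.zeros(A.shape[1])
    rowsum = A.sum(axis=1)          # per-ray (row) normalization
    colsum = A.sum(axis=0)          # per-cell (column) normalization
    for _ in range(n_iter):
        x = x + relax * (A.T @ ((b - A @ x) / rowsum)) / colsum
    return x

# synthetic travel-time system: nonnegative "path length" matrix, known model
rng = np.random.default_rng(5)
A = rng.random((80, 20))            # 80 rays through 20 slowness cells
x_true = rng.random(20)
b = A @ x_true                      # noise-free travel times
x_hat = sart(A, b)
```

On a consistent, overdetermined system like this, the iteration drives the data residual toward zero quickly, which is the fast-convergence property the method exploits for sequential time segments.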
NASA Astrophysics Data System (ADS)
Fadili, Jalal M.; Bullmore, Edward T.
2003-11-01
Wavelet-based methods for multiple hypothesis testing are described and their potential for activation mapping of human functional magnetic resonance imaging (fMRI) data is investigated. In this approach, we emphasize convergence between methods of wavelet thresholding or shrinkage and the problem of multiple hypothesis testing in both classical and Bayesian contexts. Specifically, our interest is focused on ensuring a trade-off between type I error control and power. We describe a technique for controlling the false discovery rate at an arbitrary level of type I error in testing multiple wavelet coefficients generated by a 2D discrete wavelet transform (DWT) of spatial maps of fMRI time series statistics. We also describe and apply recursive testing methods that can be used to define a threshold unique to each level and orientation of the 2D-DWT. Bayesian methods, incorporating a formal model for the anticipated sparseness of wavelet coefficients representing the signal or true image, are also tractable. These methods are comparatively evaluated by analysis of "null" images (acquired with the subject at rest), in which case the number of positive tests should be exactly as predicted under the null hypothesis, and an experimental dataset acquired from 5 normal volunteers during an event-related finger movement task. We show that all three wavelet-based methods of multiple hypothesis testing have good type I error control (the FDR method being most conservative) and generate plausible brain activation maps.
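The FDR step is the Benjamini-Hochberg procedure applied to p-values of wavelet coefficients. A compact sketch (testing only a one-level Haar approximation band of a synthetic "activation" map; the paper thresholds detail coefficients level by level, so this conveys only the flavor of the procedure):

```python
import numpy as np
from math import erfc, sqrt

def bh_keep(pvals, q=0.05):
    """Benjamini-Hochberg: reject the k smallest p-values, where k is the
    largest index with p_(k) <= k*q/m. Returns a boolean mask over inputs."""
    m = len(pvals)
    order = np.argsort(pvals)
    below = np.nonzero(pvals[order] <= q * np.arange(1, m + 1) / m)[0]
    keep = np.zeros(m, dtype=bool)
    if below.size:
        keep[order[:below.max() + 1]] = True
    return keep

# unit-variance noise map with an 8x8 "activated" block of mean 4
rng = np.random.default_rng(3)
img = rng.standard_normal((32, 32))
img[8:16, 8:16] += 4.0

# one-level 2D Haar approximation band (still unit variance under the null)
h = (img[0::2] + img[1::2]) / sqrt(2)
LL = (h[:, 0::2] + h[:, 1::2]) / sqrt(2)
z = LL.ravel()
p = np.array([erfc(abs(v) / sqrt(2)) for v in z])  # two-sided N(0,1) p-values
detected = bh_keep(p, q=0.05)
```

The 16 approximation coefficients covering the block carry a mean shift of about 8 standard deviations and should all survive the BH cut, with at most a handful of false positives among the remaining null coefficients.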
Yee Lau, Phooi; Ozawa, Shinji
2005-01-01
The objective of this paper is to present a secure distribution method for healthcare records (e.g. video streams and digitized image scans). The availability of prompt and expert medical care can meaningfully improve health care services in understaffed rural and remote areas, the sharing of available facilities, and medical record referral. Here, a secure method is developed for distributing healthcare records using a two-step wavelet-based technique: first, a 2-level db8 wavelet transform for textual elimination, and later a 4-level db8 wavelet transform for digital watermarking. The first db8 wavelet transform is used to detect and eliminate textual information found on images, protecting data privacy and confidentiality. The second is used to secure and impose imperceptible marks that identify the owner, track authorized users, or detect malicious tampering of documents. Experiments were performed on different digitized image scans. The experimental results illustrate that both wavelet-based methods are conceptually simple and able to effectively detect textual information, while our watermark technique is robust to noise and compression. PMID:17282675
NASA Astrophysics Data System (ADS)
Rastigejev, Y.; Semakin, A. N.
2013-12-01
Accurate numerical simulations of global-scale three-dimensional atmospheric chemical transport models (CTMs) are essential for studies of many important atmospheric chemistry problems, such as the adverse effects of air pollutants on human health, ecosystems and the Earth's climate. These simulations usually require large amounts of CPU time due to numerical difficulties associated with a wide range of spatial and temporal scales, nonlinearity and a large number of reacting species. In our previous work we have shown that in order to achieve an adequate convergence rate and accuracy, the mesh spacing in numerical simulation of global synoptic-scale pollution plume transport must be decreased to a few kilometers. This resolution is difficult to achieve for global CTMs on uniform or quasi-uniform grids. To address the difficulty described above, we developed a three-dimensional Wavelet-based Adaptive Mesh Refinement (WAMR) algorithm. The method employs a highly non-uniform adaptive grid with fine resolution over the areas of interest without requiring small grid spacing throughout the entire domain. The method uses a multi-grid iterative solver that naturally takes advantage of the multilevel structure of the adaptive grid. In order to represent the multilevel adaptive grid efficiently, a dynamic data structure based on indirect memory addressing has been developed. The data structure allows rapid access to individual points, fast inter-grid operations and re-gridding. The WAMR method has been implemented on parallel computer architectures. The parallel algorithm is based on a run-time partitioning and load-balancing scheme for the adaptive grid. The partitioning scheme maintains locality to reduce communications between computing nodes. The parallel scheme was found to be cost-effective. Specifically, we obtained an order-of-magnitude increase in computational speed for numerical simulations performed on a twelve-core single-processor workstation.
We have applied the WAMR method to the numerical simulation of several benchmark problems, including the simulation of traveling three-dimensional reactive and inert transpacific pollution plumes. It was shown earlier that conventionally used global CTMs implemented on stationary grids are incapable of reproducing the dynamics of these plumes due to excessive numerical diffusion caused by limitations in grid resolution. It has been shown that the WAMR algorithm allows the use of grids one to two orders of magnitude finer than static-grid techniques in regions of fine spatial scales without significantly increasing CPU time. Therefore the developed WAMR method has significant advantages over conventional fixed-resolution computational techniques in terms of accuracy and/or computational cost, and makes it possible to accurately simulate important multiscale chemical transport problems that cannot be simulated with the standard static-grid techniques currently utilized by the majority of global atmospheric chemistry models. This work is supported by a grant from the National Science Foundation under Award No. HRD-1036563.
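The refinement criterion underlying wavelet-based adaptive mesh methods of this kind can be illustrated in one dimension: cells whose wavelet detail coefficients exceed a threshold `eps` are kept at fine resolution, while smooth regions are coarsened. This Haar-based sketch is a hypothetical simplification, not the authors' three-dimensional algorithm:

```python
import numpy as np

def refine_flags(u, eps=1e-2):
    """Flag 1-D cell pairs for refinement where Haar wavelet details are large.

    u: cell values on a uniform fine grid (even length).
    Returns one boolean per coarse cell: True -> keep fine resolution there.
    """
    u = np.asarray(u, float)
    pairs = u.reshape(-1, 2)
    # Haar detail coefficient: half the difference within each pair
    detail = np.abs(pairs[:, 0] - pairs[:, 1]) / 2.0
    return detail > eps
```

Regions where the solution is smooth produce small details and are coarsened; a sharp plume edge produces a large detail and keeps the fine grid, which is how the method concentrates resolution only where needed.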
Fracture density estimates in glaciogenic deposits from P-wave velocity reductions
Karaman, A.; Carpenter, P.J.
1997-01-01
Subsidence-induced fracturing of glaciogenic deposits over coal mines in the southern Illinois basin alters hydraulic properties of drift aquifers and exposes these aquifers to surface contaminants. In this study, refraction tomography surveys were used in conjunction with a generalized form of a seismic fracture density model to estimate the vertical and lateral extent of fracturing in a 12-m thick overburden of loess, clay, glacial till, and outwash above a longwall coal mine at 90 m depth. This generalized model accurately predicted fracture trends and densities from azimuthal P-wave velocity variations over unsaturated single and dual parallel fractures exposed at the surface. These fractures extended at least 6 m and exhibited 10-15 cm apertures at the surface. The pre- and postsubsidence velocity ratios were converted into fracture densities that exhibited qualitative agreement with the observed surface and inferred subsurface fracture distribution. Velocity reductions as large as 25% were imaged over the static tension zone of the mine, where fracturing may extend to depths of 10-15 m. Finally, the seismically derived fracture density estimates were plotted as a function of subsidence-induced drawdown across the panel to estimate the average specific storage of the sand and gravel lower drift aquifer. This value was at least 20 times higher than the presubsidence (unfractured) specific storage for the same aquifer.
Burke, Timothy P.; Kiedrowski, Brian C.; Martin, William R.; Brown, Forrest B.
2015-11-19
Kernel density estimators (KDEs) are a non-parametric density estimation technique that has recently been applied to Monte Carlo radiation transport simulations. Kernel density estimators are an alternative to histogram tallies for obtaining global solutions in Monte Carlo tallies. With KDEs, a single event, either a collision or a particle track, can contribute to the score at multiple tally points, with the uncertainty at those points being independent of the desired resolution of the solution. Thus, KDEs show potential for obtaining estimates of a global solution with reduced variance when compared to a histogram. Previously, KDEs have been applied to neutronics for one-group reactor physics problems and fixed-source shielding applications. However, little work has been done on obtaining reaction rates using KDEs. This paper introduces a new form of the mean-free-path (MFP) KDE that is capable of handling general geometries. Furthermore, extending the MFP KDE to 2-D problems in continuous energy introduces inaccuracies into the solution. An ad hoc remedy for these inaccuracies is introduced that produces errors smaller than 4% at material interfaces.
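The idea that a single collision event scores at multiple tally points can be sketched with a plain Gaussian KDE tally in one dimension. This is an illustrative stand-in for the mean-free-path kernel discussed in the paper, with a made-up bandwidth `h`:

```python
import numpy as np

def kde_tally(events, weights, points, h):
    """Score Monte Carlo collision events at tally points with a Gaussian kernel.

    Each event contributes to every tally point, unlike a histogram bin.
    events, points: 1-D positions; weights: particle weights; h: bandwidth.
    Returns the estimated density at each tally point.
    """
    events = np.asarray(events, float)[:, None]
    points = np.asarray(points, float)[None, :]
    kern = np.exp(-0.5 * ((points - events) / h) ** 2) / (h * np.sqrt(2 * np.pi))
    return (np.asarray(weights, float)[:, None] * kern).sum(axis=0) / kern.shape[0]
```

Because the statistical uncertainty at a point depends on the kernel bandwidth rather than a bin width, the tally resolution can be refined after the fact without re-running particles, which is the variance advantage the abstract refers to.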
Density estimation in a wolverine population using spatial capture-recapture models
Royle, J. Andrew; Magoun, Audrey J.; Gardner, Beth; Valkenbury, Patrick; Lowell, Richard E.
2011-01-01
Classical closed-population capture-recapture models do not accommodate the spatial information inherent in encounter history data obtained from camera-trapping studies. As a result, individual heterogeneity in encounter probability is induced, and it is not possible to estimate density objectively because trap arrays do not have a well-defined sample area. We applied newly developed capture-recapture models that accommodate the spatial attribute inherent in capture-recapture data to a population of wolverines (Gulo gulo) in Southeast Alaska in 2008. We used camera-trapping data collected from 37 cameras in a 2,140-km2 area of forested and open habitats largely enclosed by ocean and glacial icefields. We detected 21 unique individuals 115 times. Wolverines exhibited a strong positive trap response, with an increased tendency to revisit previously visited traps. Under the trap-response model, we estimated wolverine density at 9.7 individuals/1,000 km2 (95% Bayesian CI: 5.9-15.0). Our model provides a formal statistical framework for estimating density from wolverine camera-trapping studies that accounts for a behavioral response due to baited traps. Further, our model-based estimator does not have strict requirements about the spatial configuration of traps or length of trapping sessions, providing considerable operational flexibility in the development of field studies.
Estimation of the density of the clay-organic complex in soil
NASA Astrophysics Data System (ADS)
Czyż, Ewa A.; Dexter, Anthony R.
2016-01-01
Soil bulk density was investigated as a function of soil contents of clay and organic matter in arable agricultural soils at a range of locations. The contents of clay and organic matter were used in an algorithmic procedure to calculate the amounts of clay-organic complex in the soils. Values of soil bulk density as a function of soil organic matter content were used to estimate the amount of pore space occupied by unit amount of complex. These estimates show that the effective density of the clay-organic matter complex is very low, with a mean value of 0.17 ± 0.04 g ml-1 in arable soils. This value is much smaller than the soil bulk density and smaller than any of the other components of the soil considered separately (with the exception of the gas content). This low value suggests that the clay-organic matter complex has an extremely porous and open structure. When the complex is considered as a separate phase in soil, it can account for the observed reduction of bulk density with increasing content of organic matter.
Rajwade, Ajit; Banerjee, Arunava; Rangarajan, Anand
2010-01-01
We present a new geometric approach for determining the probability density of the intensity values in an image. We drop the notion of an image as a set of discrete pixels and assume a piecewise-continuous representation. The probability density can then be regarded as being proportional to the area between two nearby isocontours of the image surface. Our paper extends this idea to joint densities of image pairs. We demonstrate the application of our method to affine registration between two or more images using information-theoretic measures such as mutual information. We show cases where our method outperforms existing methods such as simple histograms, histograms with partial volume interpolation, Parzen windows, etc., under fine intensity quantization for affine image registration under significant image noise. Furthermore, we demonstrate results on simultaneous registration of multiple images, as well as for pairs of volume data sets, and show some theoretical properties of our density estimator. Our approach requires the selection of only an image interpolant. The method neither requires any kind of kernel functions (as in Parzen windows), which are unrelated to the intrinsic structure of the image, nor does it rely on any form of sampling for density estimation. PMID:19147876
Ferretti, M; Brambilla, E; Brunialti, G; Fornasier, F; Mazzali, C; Giordani, P; Nimis, P L
2004-01-01
Sampling requirements related to lichen biomonitoring include optimal sampling density for obtaining precise and unbiased estimates of population parameters and maps of known reliability. Two available datasets on a sub-national scale in Italy were used to determine a cost-effective sampling density to be adopted in medium-to-large-scale biomonitoring studies. As expected, the relative error in the mean Lichen Biodiversity (Italian acronym: BL) values and the error associated with the interpolation of BL values for (unmeasured) grid cells increased as the sampling density decreased. However, the increase in size of the error was not linear and even a considerable reduction (up to 50%) in the original sampling effort led to a far smaller increase in errors in the mean estimates (<6%) and in mapping (<18%) as compared with the original sampling densities. A reduction in the sampling effort can result in considerable savings of resources, which can then be used for a more detailed investigation of potentially problematic areas. It is, however, necessary to decide the acceptable level of precision at the design stage of the investigation, so as to select the proper sampling density. PMID:14568724
Estimation and Modeling of Enceladus Plume Jet Density Using Reaction Wheel Control Data
NASA Technical Reports Server (NTRS)
Lee, Allan Y.; Wang, Eric K.; Pilinski, Emily B.; Macala, Glenn A.; Feldman, Antonette
2010-01-01
The Cassini spacecraft was launched on October 15, 1997 by a Titan 4B launch vehicle. After an interplanetary cruise of almost seven years, it arrived at Saturn on June 30, 2004. In 2005, Cassini completed three flybys of Enceladus, a small, icy satellite of Saturn. Observations made during these flybys confirmed the existence of a water vapor plume in the south polar region of Enceladus. Five additional low-altitude flybys of Enceladus were successfully executed in 2008-9 to better characterize these watery plumes. The first of these flybys was the 50-km Enceladus-3 (E3) flyby executed on March 12, 2008. During the E3 flyby, the spacecraft attitude was controlled by a set of three reaction wheels. During the flyby, multiple plume jets imparted disturbance torque on the spacecraft resulting in small but visible attitude control errors. Using the known and unique transfer function between the disturbance torque and the attitude control error, the collected attitude control error telemetry could be used to estimate the disturbance torque. The effectiveness of this methodology is confirmed using the E3 telemetry data. Given good estimates of spacecraft's projected area, center of pressure location, and spacecraft velocity, the time history of the Enceladus plume density is reconstructed accordingly. The 1-sigma uncertainty of the estimated density is 7.7%. Next, we modeled the density due to each plume jet as a function of both the radial and angular distances of the spacecraft from the plume source. We also conjecture that the total plume density experienced by the spacecraft is the sum of the component plume densities. By comparing the time history of the reconstructed E3 plume density with that predicted by the plume model, values of the plume model parameters are determined. Results obtained are compared with those determined by other Cassini science instruments.
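The torque-to-density reconstruction described above can be sketched under strongly simplifying assumptions: free-molecular drag F = ρv²A acting at a known lever arm from the center of mass to the center of pressure, with the drag coefficient folded into the effective area. All numbers and simplifications below are illustrative, not Cassini flight values:

```python
def plume_density(torque, arm, area, speed):
    """Estimate free-stream density from a drag-induced disturbance torque.

    Assumes free-molecular drag F = rho * v**2 * A (drag coefficient folded
    into `area`) acting at lever arm `arm` from the center of mass: T = F * arm.
    torque [N m], arm [m], area [m^2], speed [m/s] -> density [kg/m^3].
    """
    return torque / (arm * area * speed ** 2)
```

In the flyby setting, inverting the known wheel-control transfer function yields the torque history from attitude error telemetry; a relation of this shape then converts each torque sample to a density sample along the trajectory.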
Zhang, Minling; Crocker, Robert L; Mankin, Richard W; Flanders, Kathy L; Brandhorst-Hubbard, Jamee L
2003-12-01
Incidental sounds produced by Phyllophaga crinita (Burmeister) and Cyclocephala lurida (Bland) (Coleoptera: Scarabaeidae) white grubs were monitored with single- and multiple-sensor acoustic detection systems in turf fields and golf course fairways in Texas. The maximum detection range of an individual acoustic sensor was measured in a greenhouse as approximately the area enclosed in a 26.5-cm-diameter perimeter (552 cm2). A single-sensor acoustic system was used to rate the likelihood of white grub infestation at monitored sites, and a four-sensor array was used to count the numbers of white grubs at sites where infestations were identified. White grub population densities were acoustically estimated by dividing the estimated numbers of white grubs by the area of the detection range. For comparisons with acoustic monitoring methods, infestations were assessed also by examining 10-cm-diameter soil cores collected with a standard golf cup-cutter. Both acoustic and cup-cutter assessments of infestation and estimates of white grub population densities were verified by excavation and sifting of the soil around the sensors after each site was monitored. The single-sensor acoustic method was more successful in assessing infestations at a recording site than was the cup-cutter method, possibly because the detection range was larger than the area of the soil core. White grubs were recovered from >90% of monitored sites rated at medium or high likelihood of infestation. Infestations were successfully identified at 23 of the 24 sites where white grubs were recovered at densities >50/m2, the threshold for economic damage. The four-sensor array yielded the most accurate estimates of the numbers of white grubs in the detection range, enabling reliable, nondestructive estimation of white grub population densities. However, tests with the array took longer and were more difficult to perform than tests with the single sensor. PMID:14977114
Eskelson, Bianca N.I.; Hagar, Joan; Temesgen, Hailemariam
2012-01-01
Snags (standing dead trees) are an essential structural component of forests. Because wildlife use of snags depends on size and decay stage, snag density estimation without any information about snag quality attributes is of little value for wildlife management decision makers. Little work has been done to develop models that allow multivariate estimation of snag density by snag quality class. Using climate, topography, Landsat TM data, stand age and forest type collected for 2356 forested Forest Inventory and Analysis plots in western Washington and western Oregon, we evaluated two multivariate techniques for their abilities to estimate density of snags by three decay classes. The density of live trees and snags in three decay classes (D1: recently dead, little decay; D2: decay, without top, some branches and bark missing; D3: extensive decay, missing bark and most branches) with diameter at breast height (DBH) ≥ 12.7 cm was estimated using a nonparametric random forest nearest neighbor imputation technique (RF) and a parametric two-stage model (QPORD), for which the number of trees per hectare was estimated with a Quasipoisson model in the first stage and the probability of belonging to a tree status class (live, D1, D2, D3) was estimated with an ordinal regression model in the second stage. The presence of large snags with DBH ≥ 50 cm was predicted using a logistic regression and RF imputation. Because of the more homogeneous conditions on private forest lands, snag density by decay class was predicted with higher accuracies on private forest lands than on public lands, while presence of large snags was more accurately predicted on public lands, owing to the higher prevalence of large snags on public lands. RF outperformed the QPORD model in terms of percent accurate predictions, while QPORD provided smaller root mean square errors in predicting snag density by decay class.
The logistic regression model achieved more accurate presence/absence classification of large snags than the RF imputation approach. Adjusting the decision threshold to account for the unequal sizes of the presence and absence classes is more straightforward for logistic regression than for the RF imputation approach. Overall, model accuracies were poor in this study, which can be attributed to the poor predictive quality of the explanatory variables and the large range of forest types and geographic conditions observed in the data.
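The threshold adjustment discussed above can be sketched with a plain logistic regression classifier whose decision threshold is decoupled from the default 0.5 (a generic illustration, not the study's fitted model):

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, n_iter=2000):
    """Plain gradient-descent logistic regression with an intercept term."""
    Xb = np.column_stack([np.ones(len(X)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)  # gradient of the log-loss
    return w

def predict_presence(w, X, threshold=0.5):
    """Classify with an adjustable threshold.

    For rare classes (e.g. large-snag presence), lowering `threshold` toward
    the class prevalence trades specificity for sensitivity.
    """
    Xb = np.column_stack([np.ones(len(X)), X])
    p = 1.0 / (1.0 + np.exp(-Xb @ w))
    return p >= threshold
```

Because the model outputs a calibrated probability, the threshold can be tuned after fitting; a bagged classifier like RF only offers vote fractions, which is why the adjustment is less straightforward there.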
Singular value decomposition and density estimation for filtering and analysis of gene expression
Rechtsteiner, A.; Gottardo, R.; Rocha, L. M.; Wall, M. E.
2003-01-01
We present three algorithms for gene expression analysis. Algorithm 1, known as the serial correlation test, is used for filtering out noisy gene expression profiles. Algorithms 2 and 3 project the gene expression profiles into 2-dimensional expression subspaces identified by singular value decomposition. Density estimates are used to determine expression profiles that have a high correlation with the subspace and low levels of noise. High-density regions in the projection, clusters of co-expressed genes, are identified. We illustrate the algorithms by application to the yeast cell-cycle data of Cho et al. and a comparison of the results.
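Projecting expression profiles into an SVD-identified 2-D subspace, as in Algorithms 2 and 3, can be sketched as follows. This is a generic illustration; the row-centering choice is an assumption, not necessarily the authors' preprocessing:

```python
import numpy as np

def project_2d(expr):
    """Project gene expression profiles onto the top-2 right singular vectors.

    expr: (n_genes, n_timepoints) matrix; rows are mean-centered before SVD.
    Returns (n_genes, 2) coordinates in the dominant expression subspace.
    """
    X = expr - expr.mean(axis=1, keepdims=True)
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:2].T  # coordinates along the two leading "eigengenes"
```

A 2-D kernel density estimate over these coordinates would then reveal the high-density clusters of co-expressed genes described in the abstract.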
Lombardo, Marco; Serrao, Sebastiano; Lombardo, Giuseppe
2014-01-01
Purpose: To investigate the influence of various technical factors on the variation of cone packing density estimates in adaptive optics flood-illuminated retinal images. Methods: Adaptive optics images of the photoreceptor mosaic were obtained in fifteen healthy subjects. The cone density and Voronoi diagrams were assessed in sampling windows of 320 × 320 μm, 160 × 160 μm and 64 × 64 μm at 1.5 degrees temporal and superior eccentricity from the preferred locus of fixation (PRL). The technical factors analyzed included the sampling window size, the corrected retinal magnification factor (RMFcorr), the conversion from radial to linear distance from the PRL, the displacement between the PRL and the foveal center, and the manual checking of the cone identification algorithm. Bland-Altman analysis was used to assess the agreement between cone densities estimated under the different sampling window conditions. Results: The cone density declined with decreasing sampling area, and data between areas of different size showed low agreement. High agreement was found between sampling areas of the same size when comparing density calculated with or without using the individual RMFcorr. The agreement between cone density measured at radial and linear distances from the PRL, and between data referred to the PRL or the foveal center, was moderate. The percentage of Voronoi tiles with hexagonal packing arrangement was comparable between sampling areas of different size. The boundary effect, the presence of any retinal vessels, and the manual selection of cones missed by the automated identification algorithm were identified as the factors influencing variation of cone packing arrangements in Voronoi diagrams. Conclusions: The sampling window size is the main technical factor that influences variation of cone density. Clear identification of each cone in the image and the use of a large buffer zone are necessary to minimize factors influencing variation of Voronoi diagrams of the cone mosaic.
PMID:25203681
Nearest neighbor density ratio estimation for large-scale applications in astronomy
NASA Astrophysics Data System (ADS)
Kremer, J.; Gieseke, F.; Steenstrup Pedersen, K.; Igel, C.
2015-09-01
In astronomical applications of machine learning, the distribution of objects used for building a model is often different from the distribution of the objects the model is later applied to. This is known as sample selection bias, which is a major challenge for statistical inference, as one can no longer assume that the labeled training data are representative. To address this issue, one can re-weight the labeled training patterns to match the distribution of unlabeled data that are already available in the training phase. There are many examples in practice where this strategy has yielded good results, but estimating the weights reliably from a finite sample is challenging. We consider an efficient nearest neighbor density ratio estimator that can exploit large samples to increase the accuracy of the weight estimates. To solve the problem of choosing the right neighborhood size, we propose to use cross-validation on a model selection criterion that is unbiased under covariate shift. The resulting algorithm is our method of choice for density ratio estimation when the feature space dimensionality is small and sample sizes are large. The approach is simple and, because of the model selection, robust. We empirically find that it is on a par with established kernel-based methods on relatively small regression benchmark datasets. However, when applied to large-scale photometric redshift estimation, our approach outperforms the state-of-the-art.
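A nearest neighbor density ratio estimate can be sketched via the distance to the k-th neighbor in each sample: with p(x) ≈ k/(nV) for the ball of volume V containing k neighbors, the volumes cancel into a distance ratio. This is a textbook form of the idea; the paper's estimator and its cross-validated model selection are more elaborate:

```python
import numpy as np

def knn_density_ratio(x, train, test, k=3):
    """k-NN density ratio estimate w(x) = p_test(x) / p_train(x).

    train, test: (n, d) samples from the two distributions; x: (m, d) queries.
    Uses the distance to the k-th nearest neighbour in each sample.
    """
    x = np.atleast_2d(x)
    d = train.shape[1]
    def kdist(pts):
        dists = np.linalg.norm(pts[None, :, :] - x[:, None, :], axis=2)
        return np.sort(dists, axis=1)[:, k - 1]
    r_tr, r_te = kdist(train), kdist(test)
    # p(x) ~ k / (n * c_d * r^d); the constant c_d cancels in the ratio
    return (len(train) / len(test)) * (r_tr / r_te) ** d
```

The resulting weights w(x) would then multiply the training losses so that the re-weighted training set mimics the target (e.g. photometric survey) distribution.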
Somershoe, S.G.; Twedt, D.J.; Reid, B.
2006-01-01
We combined Breeding Bird Survey point count protocol and distance sampling to survey spring migrant and breeding birds in Vicksburg National Military Park on 33 days between March and June of 2003 and 2004. For 26 of 106 detected species, we used program DISTANCE to estimate detection probabilities and densities from 660 3-min point counts in which detections were recorded within four distance annuli. For most species, estimates of detection probability, and thereby density estimates, were improved through incorporation of the proportion of forest cover at point count locations as a covariate. Our results suggest Breeding Bird Surveys would benefit from the use of distance sampling and a quantitative characterization of habitat at point count locations. During spring migration, we estimated that the most common migrant species accounted for a population of 5000-9000 birds in Vicksburg National Military Park (636 ha). Species with average populations of 300 individuals during migration were: Blue-gray Gnatcatcher (Polioptila caerulea), Cedar Waxwing (Bombycilla cedrorum), White-eyed Vireo (Vireo griseus), Indigo Bunting (Passerina cyanea), and Ruby-crowned Kinglet (Regulus calendula). Of 56 species that bred in Vicksburg National Military Park, we estimated that the most common 18 species accounted for 8150 individuals. The six most abundant breeding species, Blue-gray Gnatcatcher, White-eyed Vireo, Summer Tanager (Piranga rubra), Northern Cardinal (Cardinalis cardinalis), Carolina Wren (Thryothorus ludovicianus), and Brown-headed Cowbird (Molothrus ater), accounted for 5800 individuals.
Moment series for moment estimators of the parameters of a Weibull density
Bowman, K.O.; Shenton, L.R.
1982-01-01
Taylor series for the first four moments of the coefficient of variation in sampling from a 2-parameter Weibull density are given; they are taken as far as the coefficient of n^{-24}. From these, a four-moment approximating distribution is set up using summatory techniques on the series. The shape parameter is treated in a similar way, but here the moment equations are no longer explicit estimators, and terms only as far as those in n^{-12} are given. The validity of assessed moments and percentiles of the approximating distributions is studied. Consideration is also given to properties of the moment estimator for 1/c.
Pedotransfer functions for Irish soils - estimation of bulk density (ρb) per horizon type
NASA Astrophysics Data System (ADS)
Reidy, B.; Simo, I.; Sills, P.; Creamer, R. E.
2016-01-01
Soil bulk density is a key property in defining soil characteristics. It describes the packing structure of the soil and is also essential for the measurement of soil carbon stock and nutrient assessment. In many older surveys this property was neglected, and in many modern surveys it is omitted due to the cost in both laboratory analysis and labour, and in cases where the core method cannot be applied. To overcome these oversights, pedotransfer functions are applied, using other known soil properties to estimate bulk density. Pedotransfer functions have been derived from large international data sets across many studies, each with its own inherent biases, many ignoring horizonation and depth variances. Initially, pedotransfer functions from the literature were used to predict bulk densities for different horizon types using local known bulk density data sets. The best-performing pedotransfer functions were then selected, recalibrated, and validated again using the known data. The predicted coefficient of determination was 0.5 or greater in 12 of the 17 horizon types studied. These new equations allowed gap filling where bulk density data were missing in part or whole soil profiles. This in turn allowed the development of an indicative soil bulk density map for Ireland at 0-30 and 30-50 cm horizon depths. In general, the horizons with the largest known data sets had the best predictions using the recalibrated and validated pedotransfer functions.
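A pedotransfer function of the kind described, predicting bulk density from other measured properties, can be sketched as an ordinary least-squares fit. The choice of organic matter and clay content as predictors is illustrative, not the study's actual model form:

```python
import numpy as np

def fit_pedotransfer(om, clay, rho_b):
    """Fit a linear pedotransfer function rho_b ~ a + b*OM + c*clay.

    om, clay: soil organic matter and clay contents (%); rho_b: measured
    bulk density (g/cm^3). Returns ((a, b, c), R^2).
    """
    X = np.column_stack([np.ones_like(om), om, clay])
    coef, *_ = np.linalg.lstsq(X, rho_b, rcond=None)
    pred = X @ coef
    ss_res = np.sum((rho_b - pred) ** 2)
    ss_tot = np.sum((rho_b - rho_b.mean()) ** 2)
    return coef, 1.0 - ss_res / ss_tot  # coefficient of determination
```

Fitting one such function per horizon type, then validating each on held-out cores, mirrors the recalibrate-and-validate workflow the abstract describes.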
A method for estimating the height of a mesospheric density level using meteor radar
NASA Astrophysics Data System (ADS)
Younger, J. P.; Reid, I. M.; Vincent, R. A.; Murphy, D. J.
2015-07-01
A new technique for determining the height of a constant density surface at altitudes of 78-85 km is presented. The first results are derived from a decade of observations by a meteor radar located at Davis Station in Antarctica and are compared with observations from the Microwave Limb Sounder instrument aboard the Aura satellite. The density of the neutral atmosphere in the mesosphere/lower thermosphere region around 70-110 km is an essential parameter for interpreting airglow-derived atmospheric temperatures, planning atmospheric entry maneuvers of returning spacecraft, and understanding the response of climate to different stimuli. This region is not well characterized, however, due to inaccessibility combined with a lack of consistent strong atmospheric radar scattering mechanisms. Recent advances in the analysis of detection records from high-performance meteor radars provide new opportunities to obtain atmospheric density estimates at high time resolutions in the MLT region using the durations and heights of faint radar echoes from meteor trails. Previous studies have indicated that the expected increase in underdense meteor radar echo decay times with decreasing altitude is reversed in the lower part of the meteor ablation region due to the neutralization of meteor plasma. The height at which the gradient of meteor echo decay times reverses is found to occur at a fixed atmospheric density. Thus, the gradient reversal height of meteor radar diffusion coefficient profiles can be used to infer the height of a constant density level, enabling the observation of mesospheric density variations using meteor radar.
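Locating the gradient reversal height from a profile of meteor echo decay times reduces, in the simplest reading of the abstract, to finding the profile's turning point. A minimal sketch with hypothetical binned data:

```python
import numpy as np

def reversal_height(heights, decay_times):
    """Height at which the vertical gradient of echo decay time reverses.

    heights [km], decay_times: median underdense echo decay time per height
    bin. Decay times grow with decreasing altitude until plasma neutralization
    reverses the trend, so the reversal sits at the profile maximum.
    """
    heights = np.asarray(heights, float)
    decay = np.asarray(decay_times, float)
    return float(heights[int(np.argmax(decay))])
```

Since the reversal occurs at a fixed atmospheric density, tracking this height over time yields the constant-density-surface record the method provides.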
Electron density estimation in cold magnetospheric plasmas with the Cluster Active Archive
NASA Astrophysics Data System (ADS)
Masson, A.; Pedersen, A.; Taylor, M. G.; Escoubet, C. P.; Laakso, H. E.
2009-12-01
Electron density is a key physical quantity for characterizing any plasma medium. Its measurement is thus essential to understand the various physical processes occurring in the environment of a magnetized planet. However, no magnetosphere in the solar system is a homogeneous medium with constant electron density and temperature. For instance, the Earth's magnetosphere is composed of a variety of regions with densities and temperatures spanning at least six orders of magnitude. For this reason, different types of scientific instruments are usually carried onboard a magnetospheric spacecraft to estimate, in situ and by different means, the electron density of the various plasma regions crossed. In the case of the European Space Agency Cluster mission, five different instruments on each of its four identical spacecraft can be used to estimate it: two particle instruments, a DC electric field instrument, a relaxation sounder, and a high-time-resolution passive wave receiver. Each of these instruments has its pros and cons depending on the plasma conditions. The focus of this study is the accurate estimation of the electron density in cold plasma regions of the magnetosphere, including the magnetotail lobes (Ne ≤ 0.01 e-/cc, Te ~ 100 eV) and the plasmasphere (Ne > 10 e-/cc, Te < 10 eV). In these regions, particle instruments can be blind to low-energy ions outflowing from the ionosphere, or may measure only a portion of the energy range of the particles due to photoelectrons. This often results in an underestimation of the bulk density. Measurements from a relaxation sounder enable accurate estimation of the bulk electron density above a fraction of 1 e-/cc, but require careful calibration of the resonances and/or the cutoffs detected. On Cluster, active soundings yield precise density estimates between 0.2 and 80 e-/cc every minute or two.
Spacecraft-to-probe difference potential measurements from a double-probe electric field experiment can be calibrated against the above-mentioned types of measurements to derive bulk electron densities with a time resolution below 1 s. Such an in-flight calibration procedure has been performed successfully on past magnetospheric missions such as GEOS, ISEE-1, Viking, Geotail, CRRES, and FAST. We first present the outcome of this calibration procedure for the Cluster mission for plasma conditions encountered in the plasmasphere, the magnetotail lobes, and the polar caps. This study is based on the use of the Cluster Active Archive (CAA) for data collected in the plasmasphere. The CAA offers the unique possibility of easy access to the best-calibrated data collected by all experiments on the Cluster satellites over their several years in orbit. In particular, this has made it possible to take the impact of solar activity into account in the calibration procedure. Recent science nuggets based on these calibrated data are then presented, showing in particular the outcome of three-dimensional (3D) electron density mapping of the magnetotail lobes over several years.
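A relaxation sounder identifies the plasma-frequency resonance, from which the density follows directly. A minimal sketch using the standard conversion f_pe [kHz] ≈ 8.98 · sqrt(Ne [cm⁻³]) (an illustration of the principle, not the mission's actual calibration chain):

```python
import math

def electron_density_from_fpe(f_pe_khz: float) -> float:
    """Electron density (e-/cc) from the electron plasma frequency in kHz,
    using the standard relation f_pe [kHz] ~= 8.98 * sqrt(Ne [cm^-3])."""
    return (f_pe_khz / 8.98) ** 2

ne_lobe = electron_density_from_fpe(0.898)   # lobe-like resonance -> 0.01 e-/cc
ne_psph = electron_density_from_fpe(89.8)    # plasmasphere-like  -> 100 e-/cc
```

The inverse relation is what makes the sounder a useful calibration anchor for the faster but relative probe-potential measurements.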
Stewart, Robert N; White, Devin A; Urban, Marie L; Morton, April M; Webster, Clayton G; Stoyanov, Miroslav K; Bright, Eddie A; Bhaduri, Budhendra L
2013-01-01
The Population Density Tables (PDT) project at the Oak Ridge National Laboratory (www.ornl.gov) is developing population density estimates for specific human activities under normal patterns of life, based largely on openly available information. Currently, activity-based density estimates rely on simple summary statistics such as range and mean. Researchers are interested in improving activity estimation and uncertainty quantification by adopting a Bayesian framework that considers both data and sociocultural knowledge. Under a Bayesian approach, knowledge about population density may be encoded through the process of expert elicitation. Due to the scale of the PDT effort, which considers over 250 countries, spans 40 human activity categories, and includes numerous contributors, an elicitation tool is required that can be operationalized within an enterprise data collection and reporting system. Such a method would ideally require minimal statistical knowledge from the contributor, require minimal input by a statistician or facilitator, consider human difficulties in expressing qualitative knowledge in a quantitative setting, and provide methods by which contributors can appraise whether their understanding and associated uncertainty were well captured. This paper introduces an algorithm that transforms answers to simple, non-statistical questions into a bivariate Gaussian distribution as the prior for the Beta distribution. Based on geometric properties of the Beta distribution parameter feasibility space and the bivariate Gaussian distribution, an automated method for encoding is developed that responds to these challenging enterprise requirements. Though created within the context of population density, this approach may be applicable to a wide array of problem domains requiring informative priors for the Beta distribution.
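The elicitation-to-prior step can be illustrated with a much simpler stand-in than the paper's bivariate Gaussian construction: moment-matching an elicited mean and spread directly to Beta parameters, including the feasibility check (sd² < mean·(1−mean)) that motivates working in the parameter feasibility space at all. The numbers below are hypothetical.

```python
def beta_from_mean_sd(mean: float, sd: float):
    """Moment-match a Beta(a, b) prior to an elicited mean and standard
    deviation. Feasibility requires sd**2 < mean * (1 - mean)."""
    var = sd ** 2
    if not 0.0 < mean < 1.0 or var >= mean * (1.0 - mean):
        raise ValueError("infeasible (mean, sd) pair for a Beta prior")
    common = mean * (1.0 - mean) / var - 1.0
    return mean * common, (1.0 - mean) * common

# e.g. "density is around 20%, give or take 10 percentage points"
a, b = beta_from_mean_sd(0.2, 0.1)
```

The paper's contribution is precisely to avoid asking contributors for a mean and standard deviation; this sketch only shows the target of the encoding.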
Use of spatial capture-recapture modeling and DNA data to estimate densities of elusive animals
Kéry, Marc; Gardner, Beth; Stoeckle, Tabea; Weber, Darius; Royle, J. Andrew
2011-01-01
Assessment of abundance, survival, recruitment rates, and density (i.e., population assessment) is especially challenging for elusive species most in need of protection (e.g., rare carnivores). Individual identification methods, such as DNA sampling, provide ways of studying such species efficiently and noninvasively. Additionally, statistical methods that correct for undetected animals and account for locations where animals are captured are available to efficiently estimate density and other demographic parameters. We collected hair samples of European wildcat (Felis silvestris) from cheek-rub lure sticks, extracted DNA from the samples, and identified each animal's genotype. To estimate the density of wildcats, we used Bayesian inference in a spatial capture-recapture model. We used WinBUGS to fit a model that accounted for differences in detection probability among individuals and seasons and between two lure arrays. We detected 21 individual wildcats (including possible hybrids) 47 times. Wildcat density was estimated at 0.29/km2 (SE 0.06), and 95% of the activity of wildcats was estimated to occur within 1.83 km from their home-range center. Lures located systematically were associated with a greater number of detections than lures placed in a cell on the basis of expert opinion. Detection probability of individual cats was greatest in late March. Our model is a generalized linear mixed model; hence, it can be easily extended, for instance, to incorporate trap- and individual-level covariates. We believe that the combined use of noninvasive sampling techniques and spatial capture-recapture models will improve population assessments, especially for rare and elusive animals.
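The reported "95% of activity within 1.83 km" maps onto the movement scale of a half-normal (bivariate normal) activity model, a common choice in spatial capture-recapture. A sketch of that conversion (not the fitted WinBUGS model itself):

```python
import math

def sigma_from_r95(r95_km: float, quantile: float = 0.95) -> float:
    """Half-normal movement scale sigma implied by the radius containing
    a given fraction of activity: r = sigma * sqrt(-2 * ln(1 - quantile)),
    from the bivariate normal (chi-square with 2 df) quantile."""
    return r95_km / math.sqrt(-2.0 * math.log(1.0 - quantile))

sigma = sigma_from_r95(1.83)   # movement scale implied by 1.83 km, ~0.75 km
```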
Tangkaratt, Voot; Xie, Ning; Sugiyama, Masashi
2015-01-01
Regression aims at estimating the conditional mean of output given input. However, regression is not informative enough if the conditional density is multimodal, heteroskedastic, and asymmetric. In such a case, estimating the conditional density itself is preferable, but conditional density estimation (CDE) is challenging in high-dimensional space. A naive approach to coping with high dimensionality is to first perform dimensionality reduction (DR) and then execute CDE. However, a two-step process does not perform well in practice because the error incurred in the first DR step can be magnified in the second CDE step. In this letter, we propose a novel single-shot procedure that performs CDE and DR simultaneously in an integrated way. Our key idea is to formulate DR as the problem of minimizing a squared-loss variant of conditional entropy, and this is solved using CDE. Thus, an additional CDE step is not needed after DR. We demonstrate the usefulness of the proposed method through extensive experiments on various data sets, including humanoid robot transition and computer art. PMID:25380340
NASA Astrophysics Data System (ADS)
Wellendorff, Jess; Lundgaard, Keld T.; Møgelhøj, Andreas; Petzold, Vivien; Landis, David D.; Nørskov, Jens K.; Bligaard, Thomas; Jacobsen, Karsten W.
2012-06-01
A methodology for semiempirical density functional optimization, using regularization and cross-validation methods from machine learning, is developed. We demonstrate that such methods enable well-behaved exchange-correlation approximations in very flexible model spaces, thus avoiding the overfitting found when standard least-squares methods are applied to high-order polynomial expansions. A general-purpose density functional for surface science and catalysis studies should accurately describe bond breaking and formation in chemistry, solid state physics, and surface chemistry, and should preferably also include van der Waals dispersion interactions. Such a functional necessarily compromises between describing fundamentally different types of interactions, making transferability of the density functional approximation a key issue. We investigate this trade-off between describing the energetics of intramolecular and intermolecular, bulk solid, and surface chemical bonding, and the developed optimization method explicitly handles making the compromise based on the directions in model space favored by different materials properties. The approach is applied to designing the Bayesian error estimation functional with van der Waals correlation (BEEF-vdW), a semilocal approximation with an additional nonlocal correlation term. Furthermore, an ensemble of functionals around BEEF-vdW comes out naturally, offering an estimate of the computational error. An extensive assessment on a range of data sets validates the applicability of BEEF-vdW to studies in chemistry and condensed matter physics. Applications of the approximation and its Bayesian ensemble error estimate to two intricate surface science problems support this.
Kernel density estimation and K-means clustering to profile road accident hotspots.
Anderson, Tessa K
2009-05-01
Identifying road accident hotspots plays a key role in determining effective strategies for reducing high-density accident areas. This paper presents (1) a methodology using Geographical Information Systems (GIS) and kernel density estimation to study the spatial patterns of injury-related road accidents in London, UK, and (2) a clustering methodology using environmental data and results from the first section to create a classification of road accident hotspots. The use of this methodology is illustrated using the London area in the UK. Road accident data collected by the Metropolitan Police from 1999 to 2003 were used. A kernel density estimation map was created and subsequently disaggregated by cell density to create a basic spatial unit of an accident hotspot. Environmental data were then appended to the hotspot cells and, using K-means clustering, groups of similar hotspots were identified. Five groups and 15 clusters were created based on collision and attribute data. These clusters are discussed and evaluated according to their robustness and potential uses in road safety campaigning. PMID:19393780
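The two-stage pipeline (KDE map, then clustering of high-density cells) can be sketched end to end on toy data. Everything below is synthetic: the coordinates, bandwidth, threshold, and two-cluster choice are illustrative assumptions, and a minimal k-means is hand-rolled rather than using the paper's GIS tooling.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy accident coordinates (km): two synthetic hotspots, not London data.
pts = np.vstack([rng.normal([2, 2], 0.3, (60, 2)),
                 rng.normal([7, 6], 0.3, (60, 2))])

# Stage 1: Gaussian kernel density estimate on a regular grid.
def kde_grid(points, xs, ys, bw):
    gx, gy = np.meshgrid(xs, ys)
    grid = np.column_stack([gx.ravel(), gy.ravel()])
    d2 = ((grid[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    dens = np.exp(-d2 / (2 * bw ** 2)).sum(1) / (2 * np.pi * bw ** 2 * len(points))
    return grid, dens

grid, dens = kde_grid(pts, np.linspace(0, 9, 30), np.linspace(0, 8, 30), 0.5)

# Stage 2: disaggregate by cell density, then cluster the hotspot cells.
hot = grid[dens > dens.max() * 0.25]

def kmeans2(x, iters=50):
    # k = 2, initialized from the leftmost and rightmost hotspot cells
    centers = x[[x[:, 0].argmin(), x[:, 0].argmax()]].astype(float)
    for _ in range(iters):
        lab = ((x[:, None, :] - centers[None]) ** 2).sum(-1).argmin(1)
        centers = np.array([x[lab == j].mean(0) for j in range(2)])
    return centers, lab

centers, labels = kmeans2(hot)
```

In the paper the clustering operates on environmental attributes appended to each hotspot cell, not on coordinates alone; the sketch only shows the mechanics.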
Examining the impact of the precision of address geocoding on estimated density of crime locations
NASA Astrophysics Data System (ADS)
Harada, Yutaka; Shimada, Takahito
2006-10-01
This study examines the impact of the precision of address geocoding on the estimated density of crime locations in a large urban area of Japan. The data consist of two separate sets of the same Penal Code offenses known to the police that occurred during the nine-month period of April 1, 2001 through December 31, 2001 in the central 23 wards of Tokyo. These two data sets derive from the older and newer recording systems of the Tokyo Metropolitan Police Department (TMPD), which revised its crime reporting system in that year so that more precise location information than in previous years could be recorded. Each data set was address-geocoded onto a large-scale digital map using our hierarchical address-geocoding schema, and we examined how such differences in the precision of address information, and the resulting differences in geocoded incident locations, affect the patterns in kernel density maps. An analysis using 11,096 pairs of residential burglary incidents (each pair consisting of the same incident geocoded using older and newer address information, respectively) indicates that kernel density estimation with a cell size of 25×25 m and a bandwidth of 500 m may work quite well in absorbing the poorer precision of geocoded locations based on data from the older recording system, whereas in several areas where the older recording system yielded very poor precision, the inaccuracy of incident locations may produce artifactual and potentially misleading patterns in kernel density maps.
Verdoolaege, G.; Oost, G. van; Hellermann, M. G. von; Jaspers, R.; Ichir, M. M.
2006-11-29
The validation of diagnostic data from a nuclear fusion experiment is an important issue. The concept of Integrated Data Analysis (IDA) allows the consistent estimation of plasma parameters from heterogeneous data sets. Here, the determination of the ion effective charge (Zeff) is considered. Several diagnostic methods exist for the determination of Zeff, but their results are in general not in agreement. In this work, the problem of Zeff estimation on the TEXTOR tokamak is approached from the perspective of IDA, in the framework of Bayesian probability theory. The ultimate goal is the estimation of a full Zeff profile that is consistent with both measured bremsstrahlung emissivities and individual impurity spectral line intensities obtained from Charge Exchange Recombination Spectroscopy (CXRS). We present an overview of the various uncertainties that enter the calculation of a Zeff profile from bremsstrahlung data on the one hand, and line intensity data on the other. We discuss simple linear and nonlinear Bayesian models permitting the estimation of a central value for Zeff and the electron density ne on TEXTOR from bremsstrahlung emissivity measurements in the visible, and carbon densities derived from CXRS. Both the central Zeff and ne are sampled using an MCMC algorithm. An outlook is given towards possible model improvements.
Validation tests of an improved kernel density estimation method for identifying disease clusters
NASA Astrophysics Data System (ADS)
Cai, Qiang; Rushton, Gerard; Bhaduri, Budhendra
2012-07-01
The spatial filter method, which belongs to the class of kernel density estimation methods, has been used to make morbidity and mortality maps in several recent studies. We propose improvements in the method to include spatially adaptive filters to achieve constant standard error of the relative risk estimates; a staircase weight method for weighting observations to reduce estimation bias; and a parameter selection tool to enhance disease cluster detection performance, measured by sensitivity, specificity, and false discovery rate. We test the performance of the method using Monte Carlo simulations of hypothetical disease clusters over a test area of four counties in Iowa. The simulations include different types of spatial disease patterns and high-resolution population distribution data. Results confirm that the new features of the spatial filter method do substantially improve its performance in realistic situations comparable to those where the method is likely to be used.
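The spatially adaptive filter idea, growing each filter until it covers a fixed population base so that the relative-risk standard error stays roughly constant across the map, can be sketched with synthetic point data. The population size, base threshold, and baseline risk below are illustrative assumptions, and the staircase weighting and parameter-selection tools of the paper are omitted.

```python
import numpy as np

rng = np.random.default_rng(1)
homes = rng.uniform(0, 10, (500, 2))   # synthetic person locations (km)
cases = rng.random(500) < 0.1          # synthetic 10% baseline disease risk

def adaptive_rate(center, pop_base=50):
    """Disease rate in the smallest circle around `center` containing
    `pop_base` persons; returns (rate, filter radius)."""
    d = np.hypot(*(homes - center).T)
    idx = np.argsort(d)[:pop_base]     # nearest pop_base persons
    return cases[idx].mean(), d[idx].max()

rate, radius = adaptive_rate(np.array([5.0, 5.0]))
```

Because every filter covers the same denominator, a sparsely populated area simply gets a larger radius rather than a noisier estimate.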
Density-dependent analysis of nonequilibrium paths improves free energy estimates
Minh, David D. L.
2009-01-01
When a system is driven out of equilibrium by a time-dependent protocol that modifies the Hamiltonian, it follows a nonequilibrium path. Samples of these paths can be used in nonequilibrium work theorems to estimate equilibrium quantities such as free energy differences. Here, we consider analyzing paths generated with one protocol using another one. It is posited that analysis protocols which minimize the lag, the difference between the nonequilibrium and the instantaneous equilibrium densities, will reduce the dissipation of reprocessed trajectories and lead to better free energy estimates. Indeed, when minimal lag analysis protocols based on exactly soluble propagators or relative entropies are applied to several test cases, substantial gains in the accuracy and precision of estimated free energy differences are observed. PMID:19485432
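The nonequilibrium work theorems underlying this approach can be illustrated with the basic Jarzynski estimator, ΔF = −ln⟨exp(−W)⟩ (in units of kT). The Gaussian work distribution below is synthetic; for Gaussian work the exact answer is ΔF = ⟨W⟩ − var(W)/2, which the estimator should recover. This is the baseline estimator, not the paper's lag-minimizing analysis protocol.

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic work values (units of kT) from dissipative nonequilibrium paths.
work = rng.normal(loc=2.0, scale=1.0, size=200_000)

def jarzynski_free_energy(w):
    """Delta F = -ln < exp(-W) >, computed via log-sum-exp for stability."""
    m = (-w).max()
    return -(m + np.log(np.mean(np.exp(-w - m))))

dF = jarzynski_free_energy(work)
# Gaussian work: exact Delta F = mean(W) - var(W)/2 = 2.0 - 0.5 = 1.5
```

The gap between ⟨W⟩ and ΔF is the dissipation; reducing it (by reducing the lag) is what improves the accuracy and precision of such estimates.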
Improved Estimation of Density of States for Monte Carlo Sampling via MBAR.
Xu, Yuanwei; Rodger, P Mark
2015-10-13
We present a new method to calculate the density of states using the multistate Bennett acceptance ratio (MBAR) estimator. We use a combination of parallel tempering (PT) and multicanonical simulation to demonstrate the efficiency of our method in a statistical model of sampling from a two-dimensional normal mixture and also in a physical model of aggregation of lattice polymers. While MBAR has been commonly used for final estimation of thermodynamic properties, our numerical results show that the efficiency of estimation with our new approach, which uses MBAR as an intermediate step, often improves upon conventional use of MBAR. We also demonstrate that it can be beneficial in our method to use full PT samples for MBAR calculations in cases where simulation data exhibit long correlation. PMID:26574248
Estimation of scattering phase function utilizing laser Doppler power density spectra.
Wojtkiewicz, S; Liebert, A; Rix, H; Sawosz, P; Maniewski, R
2013-02-21
A new method for the estimation of the light scattering phase function of particles is presented. The method allows us to measure the light scattering phase function of particles of any shape in the full angular range (0°-180°) and is based on the analysis of laser Doppler (LD) power density spectra. The theoretical background of the method and results of its validation using data from Monte Carlo simulations will be presented. For the estimation of the scattering phase function, a phantom measurement setup is proposed containing a LD measurement system and a simple model in which a liquid sample flows through a glass tube fixed in an optically turbid material. The scattering phase function estimation error was thoroughly investigated in relation to the light scattering anisotropy factor g. The error of g estimation is lower than 10% for anisotropy factors larger than 0.5 and decreases with increase of the anisotropy factor (e.g. for g = 0.98, the error of estimation is 0.01%). The analysis of influence of the noise in the measured LD spectrum showed that the g estimation error is lower than 1% for signal to noise ratio higher than 50 dB. PMID:23340453
Pedotransfer functions for Irish soils - estimation of bulk density (ρb) per horizon type
NASA Astrophysics Data System (ADS)
Reidy, B.; Simo, I.; Sills, P.; Creamer, R. E.
2015-10-01
Soil bulk density is a key property in defining soil characteristics. It describes the packing structure of the soil and is also essential for the measurement of soil carbon stock and nutrient assessment. In many older surveys this property was neglected, and in many modern surveys it is omitted due to cost, both in the laboratory and in labour, and in cases where the core method cannot be applied. To overcome these oversights, pedotransfer functions are applied, using other known soil properties to estimate bulk density. Pedotransfer functions have been derived from large international datasets across many studies, each with its own inherent biases, many ignoring horizonation and depth variances. Initially, pedotransfer functions from the literature were used to predict bulk density for different horizon types using local known bulk density datasets. The best-performing pedotransfer functions were then selected, recalibrated, and validated again using the known data. The coefficient of determination of the predictions was 0.5 or greater in 12 of the 17 horizon types studied. These new equations allowed gap filling where bulk density data were missing in part or whole soil profiles. This in turn allowed the development of an indicative soil bulk density map for Ireland at 0-30 and 30-50 cm horizon depths. In general, the horizons with the largest known datasets had the best predictions using the recalibrated and validated pedotransfer functions.
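A typical pedotransfer function of this kind regresses bulk density on a transform of another measured property, often log organic carbon, and is judged by its coefficient of determination. The sketch below fits such a function by least squares on invented data; the coefficients and values are illustrative, not the Irish calibration.

```python
import numpy as np

# Hypothetical horizon data: % organic carbon and bulk density (g cm^-3).
oc    = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0, 30.0])
rho_b = np.array([1.57, 1.41, 1.22, 1.06, 0.88, 0.70, 0.56])

# Fit rho_b = b0 + b1 * ln(OC) by ordinary least squares.
X = np.column_stack([np.ones_like(oc), np.log(oc)])
beta, *_ = np.linalg.lstsq(X, rho_b, rcond=None)

pred = X @ beta
r2 = 1.0 - ((rho_b - pred) ** 2).sum() / ((rho_b - rho_b.mean()) ** 2).sum()
```

The per-horizon approach in the paper amounts to fitting one such equation per horizon type, then keeping only those whose validated R² clears the chosen threshold.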
Chestnut, Tara; Anderson, Chauncey; Popa, Radu; Blaustein, Andrew R; Voytek, Mary; Olson, Deanna H; Kirshtein, Julie
2014-01-01
Biodiversity losses are occurring worldwide due to a combination of stressors. For example, by one estimate, 40% of amphibian species are vulnerable to extinction, and disease is one threat to amphibian populations. The emerging infectious disease chytridiomycosis, caused by the aquatic fungus Batrachochytrium dendrobatidis (Bd), is a contributor to amphibian declines worldwide. Bd research has focused on the dynamics of the pathogen in its amphibian hosts, with little emphasis on investigating the dynamics of free-living Bd. Therefore, we investigated patterns of Bd occupancy and density in amphibian habitats using occupancy models, powerful tools for estimating site occupancy and detection probability. Occupancy models have been used to investigate diseases where the focus was on pathogen occurrence in the host. We applied occupancy models to investigate free-living Bd in North American surface waters to determine Bd seasonality, relationships between Bd site occupancy and habitat attributes, and probability of detection from water samples as a function of the number of samples, sample volume, and water quality. We also report on the temporal patterns of Bd density from a 4-year case study of a Bd-positive wetland. We provide evidence that Bd occurs in the environment year-round. Bd exhibited temporal and spatial heterogeneity in density, but did not exhibit seasonality in occupancy. Bd was detected in all months, typically at less than 100 zoospores L−1. The highest density observed was ∼3 million zoospores L−1. We detected Bd in 47% of sites sampled, but estimated that Bd occupied 61% of sites, highlighting the importance of accounting for imperfect detection. When Bd was present, there was a 95% chance of detecting it with four samples of 600 mL of water or five samples of 60 mL. 
Our findings provide important baseline information to advance the study of Bd disease ecology, and advance our understanding of amphibian exposure to free-living Bd in aquatic habitats over time. PMID:25222122
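The sampling-design result above follows from the cumulative detection relation for k independent samples, p* = 1 − (1 − p)^k. Inverting the reported 95% cumulative detection for four samples gives the per-sample detection probability this implies (an arithmetic illustration, not the fitted occupancy-model estimate):

```python
def cumulative_detection(p: float, k: int) -> float:
    """Probability of at least one detection in k independent samples."""
    return 1.0 - (1.0 - p) ** k

def per_sample_p(p_star: float, k: int) -> float:
    """Per-sample detection probability implied by cumulative p* over k samples."""
    return 1.0 - (1.0 - p_star) ** (1.0 / k)

p4 = per_sample_p(0.95, 4)   # per-sample p implied by 4 x 600 mL samples, ~0.53
```

The same relation explains the occupancy correction: naive detection in 47% of sites understates the estimated 61% occupancy because single visits miss occupied sites.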
Crowe, D.E.; Longshore, K.M.
2010-01-01
We estimated relative abundance and density of Western Burrowing Owls (Athene cunicularia hypugaea) at two sites in the Mojave Desert (2003-04). We made modifications to previously established Burrowing Owl survey techniques for use in desert shrublands and evaluated several factors that might influence the detection of owls. We tested the effectiveness of the call-broadcast technique for surveying this species, the efficiency of this technique at early and late breeding stages, and the effectiveness of various numbers of vocalization intervals during broadcasting sessions. Only 1 (3%) of 31 initial (new) owl responses was detected during passive-listening sessions. We found that surveying early in the nesting season was more likely to produce new owl detections compared to surveying later in the nesting season. New owls detected during each of the three vocalization intervals (each consisting of 30 sec of vocalizations followed by 30 sec of silence) of our broadcasting session were similar (37%, 40%, and 23%; n = 30). We used a combination of detection trials (sighting probability) and the double-observer method to estimate the components of detection probability, i.e., availability and perception. Availability for all sites and years, as determined by detection trials, ranged from 46.1-58.2%. Relative abundance, measured as frequency of occurrence and defined as the proportion of surveys with at least one owl, ranged from 19.2-32.0% for both sites and years. Density at our eastern Mojave Desert site was estimated at 0.09 ± 0.01 (SE) owl territories/km2 and 0.16 ± 0.02 (SE) owl territories/km2 during 2003 and 2004, respectively. At our southern Mojave Desert site, density estimates were 0.09 ± 0.02 (SE) owl territories/km2 and 0.08 ± 0.02 (SE) owl territories/km2 during 2004 and 2005, respectively. © 2010 The Raptor Research Foundation, Inc.
NASA Astrophysics Data System (ADS)
Sondhiya, Deepak Kumar; Gwal, Ashok Kumar; Verma, Shivali; Kasde, Satish Kumar
In this paper, a wavelet-based neural network system for the detection and identification of four types of VLF whistler transients (dispersive, diffuse, spiky, and multipath) is implemented and tested. The discrete wavelet transform (DWT) technique is integrated with a feed-forward neural network (FFNN) model to construct the identifier. First, the multi-resolution analysis (MRA) technique of the DWT and Parseval's theorem are employed to extract the characteristic features of the transients at different resolution levels. Second, the FFNN classifies the transients according to the extracted features. The proposed methodology greatly reduces the number of transient features without losing their original properties, so that less memory space and computing time are required. Various transient events were tested; the results show that the identifier can detect whistler transients efficiently. Keywords: discrete wavelet transform, multi-resolution analysis, Parseval's theorem, feed-forward neural network
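The feature-extraction stage described above reduces each transient to one energy value per resolution level, and Parseval's theorem guarantees that an orthonormal DWT preserves the total signal energy across those levels. A hand-rolled Haar DWT sketch (the paper does not specify its wavelet; Haar is assumed here for simplicity):

```python
import numpy as np

def haar_level_energies(x, levels):
    """Per-level detail energies plus final approximation energy from a
    multi-level Haar DWT. By Parseval's theorem (orthonormal transform),
    the returned energies sum to the energy of the input signal."""
    approx = np.asarray(x, dtype=float)
    energies = []
    for _ in range(levels):
        a = (approx[0::2] + approx[1::2]) / np.sqrt(2.0)   # approximation
        d = (approx[0::2] - approx[1::2]) / np.sqrt(2.0)   # detail
        energies.append(np.sum(d ** 2))
        approx = a
    energies.append(np.sum(approx ** 2))
    return energies

sig = np.sin(np.linspace(0.0, 8.0 * np.pi, 1024))   # stand-in transient
feats = haar_level_energies(sig, 4)                  # 5-element feature vector
```

A 1024-sample signal collapses to a handful of energy features, which is exactly the dimensionality reduction that makes the FFNN stage cheap in memory and time.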
Liu, Xin; Wang, Hongkai; Xu, Mantao; Nie, Shengdong; Lu, Hongbing
2014-11-01
Single-view x-ray luminescence computed tomography (XLCT) imaging has a short data-collection time that allows fast, non-invasive resolution of the three-dimensional (3-D) distribution of x-ray-excitable nanophosphors within a small animal in vivo. However, single-view reconstruction suffers from a severely ill-posed problem because data from only one angle are used in the reconstruction. To alleviate the ill-posedness, in this paper we propose a wavelet-based reconstruction approach, achieved by applying a wavelet transformation to the acquired single-view measurements. To evaluate the performance of the proposed method, an in vivo experiment was performed on a cone beam XLCT imaging system. The experimental results demonstrate that the proposed method can not only use the full set of measurements produced by the CCD, but also accelerate image reconstruction while preserving the spatial resolution of the reconstruction. Hence, it is suitable for dynamic XLCT imaging studies. PMID:25426315
Heart Rate Variability and Wavelet-based Studies on ECG Signals from Smokers and Non-smokers
NASA Astrophysics Data System (ADS)
Pal, K.; Goel, R.; Champaty, B.; Samantray, S.; Tibarewala, D. N.
2013-12-01
The current study deals with heart rate variability (HRV) and wavelet-based ECG signal analysis of smokers and non-smokers. The HRV results indicated dominance of sympathetic nervous system activity in smokers. The heart rate was found to be higher in smokers than in non-smokers (p < 0.05). The frequency-domain analysis showed an increase in the LF and LF/HF components with a corresponding decrease in the HF component. The HRV features were analyzed for classification of smokers versus non-smokers. The results indicated that when the RMSSD, SD1 and RR-mean features were used concurrently, a classification efficiency of >90% was achieved. The wavelet decomposition of the ECG signal was done using the Daubechies (db6) wavelet family. No difference was observed between smokers and non-smokers, which suggests that smoking does not affect the conduction pathway of the heart.
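The time-domain HRV features named above have standard definitions. A minimal sketch (hypothetical helper names; RR intervals assumed in milliseconds):

```python
import math

def rmssd(rr):
    """Root mean square of successive RR-interval differences (ms)."""
    diffs = [b - a for a, b in zip(rr, rr[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

def sd1(rr):
    """Poincare-plot short-term variability. Equals RMSSD / sqrt(2)
    when successive differences have near-zero mean, the usual case."""
    return rmssd(rr) / math.sqrt(2)
```

In a classifier, these would be computed per recording and combined with the RR-interval mean, as the study does.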
A wavelet-based evaluation of time-varying long memory of equity markets: A paradigm in crisis
NASA Astrophysics Data System (ADS)
Tan, Pei P.; Chin, Cheong W.; Galagedera, Don U. A.
2014-09-01
This study, using a wavelet-based method, investigates the dynamics of long memory in the returns and volatility of equity markets. In the sample of five developed and five emerging markets, we find that the daily return series from January 1988 to June 2013 may be considered a mix of weak long memory and mean-reverting processes. In the case of volatility of returns, there is evidence of long memory, which is stronger in emerging markets than in developed markets. We find that although the long memory parameter may vary during crisis periods (the 1997 Asian financial crisis, the 2001 US recession and the 2008 subprime crisis), the direction of change may not be consistent across all equity markets. The degree of return predictability is likely to diminish during crisis periods. The robustness of the results is checked with a detrended fluctuation analysis approach.
NASA Astrophysics Data System (ADS)
Zamani, Ahmad; Kolahi Azar, Amir; Safavi, Ali
2014-06-01
This paper presents a wavelet-based multifractal approach to characterize the statistical properties of the temporal distribution of the 1982-2012 seismic activity at Mammoth Mountain volcano. The fractal analysis of the time-occurrence series of seismicity has been carried out in relation to the seismic swarm associated with the magmatic intrusion beneath the volcano on 4 May 1989. We used the wavelet transform modulus maxima based multifractal formalism to obtain the multifractal characteristics of seismicity before, during, and after the unrest. The results revealed that the earthquake sequences across the study area show time-scaling features. The multifractal characteristics are clearly not constant across the different periods, and there are differences among the seismicity sequences. The attributes of the singularity spectrum have been utilized to determine the complexity of seismicity for each period. Findings show that the temporal distribution of earthquakes during the swarm period was simpler than during the pre- and post-swarm periods.
NASA Astrophysics Data System (ADS)
Zhong, Junmei; Ning, Ruola; Conover, David L.
2004-05-01
The real-time flat panel detector-based cone beam CT breast imaging (FPD-CBCTBI) system has attracted increasing attention for its merits of early detection of small breast cancerous tumors, 3-D diagnosis, and treatment planning, with glandular dose levels not exceeding those of conventional film-screen mammography. In this research, our motivation is to further reduce the x-ray exposure level of the cone beam CT scan while retaining acceptable image quality for medical diagnosis by applying efficient denoising techniques. In this paper, a wavelet-based multiscale anisotropic diffusion algorithm is applied to denoise cone beam CT breast images. Experimental results demonstrate that the denoising algorithm is very efficient at reducing noise while preserving edges in cone beam CT breast imaging. The denoising results indicate that in clinical applications of cone beam CT breast imaging, the patient's radiation dose can be reduced by up to 60% while obtaining acceptable image quality for diagnosis.
Hariharan, G
2014-05-01
In this paper, a wavelet-based approximation method is introduced for solving the Newell-Whitehead (NW) and Allen-Cahn (AC) equations. To the best of our knowledge, no rigorous Legendre wavelet solution has been reported for the NW and AC equations until now. The highest derivative in the differential equation is expanded into a Legendre series; this approximation is then integrated, with the boundary conditions applied using integration constants. With the help of Legendre wavelet operational matrices, the aforesaid equations are converted into an algebraic system. Block pulse functions are used to investigate the Legendre wavelet coefficient vectors of the nonlinear terms. The convergence of the proposed methods is proved. Finally, we give some numerical examples to demonstrate the validity and applicability of the method. PMID:24599524
NASA Technical Reports Server (NTRS)
Matic, Roy M.; Mosley, Judith I.
1994-01-01
Future space-based remote sensing systems will have data transmission requirements that exceed available downlinks, necessitating the use of lossy compression techniques for multispectral data. In this paper, we describe several algorithms for lossy compression of multispectral data which combine spectral decorrelation techniques with an adaptive, wavelet-based image compression algorithm to exploit both spectral and spatial correlation. We compare the performance of several different spectral decorrelation techniques, including wavelet transformation in the spectral dimension. The performance of each technique is evaluated at compression ratios ranging from 4:1 to 16:1. The performance measures used are visual examination, conventional distortion measures, and multispectral classification results. We also introduce a family of distortion metrics designed to quantify and predict the effect of compression artifacts on multispectral classification of the reconstructed data.
Ahn, Yongjun; Yeo, Hwasoo
2015-01-01
The charging infrastructure location problem is becoming more significant due to the extensive adoption of electric vehicles. Efficient charging station planning can solve deeply rooted problems, such as driving-range anxiety and the stagnation of new electric vehicle consumers. In the initial stage of introducing electric vehicles, the allocation of charging stations is difficult to determine due to the uncertainty of candidate sites and unidentified charging demands, which are determined by diverse variables. This paper introduces the Estimating the Required Density of EV Charging (ERDEC) stations model, an analytical approach to estimating the optimal density of charging stations for certain urban areas, which are subsequently aggregated to city-level planning. The optimal charging-station density is derived to minimize the total cost. A numerical study is conducted to obtain the correlations among the various parameters in the proposed model, such as regional parameters, technological parameters and coefficient factors. To investigate the effect of technological advances, the corresponding changes in the optimal density and total cost are also examined under various combinations of technological parameters. Daejeon city in South Korea is selected for the case study to examine the applicability of the model to real-world problems. With real taxi trajectory data, the optimal density map of charging stations is generated. These results can provide the optimal number of chargers for driving without driving-range anxiety. In the initial planning phase of installing charging infrastructure, the proposed model can be applied to a relatively extensive area to encourage the usage of electric vehicles, especially areas that lack information, such as exact candidate sites for charging stations and other data related to electric vehicles.
The methods and results of this paper can serve as a planning guideline to facilitate the extensive adoption of electric vehicles. PMID:26575845
Boschetto, D; Mirzaei, H; Leong, R W L; Grisan, E
2015-08-01
Celiac Disease (CD) is an immune-mediated enteropathy, diagnosed in clinical practice by intestinal biopsy and the concomitant presence of a positive celiac serology. Confocal Laser Endomicroscopy (CLE) allows skilled and trained experts to potentially perform in vivo virtual histology of the small-bowel mucosa. In particular, it allows the qualitative evaluation of mucosal alterations such as a decrease in goblet cell density, the presence of villous atrophy or crypt hypertrophy. We present a semi-automatic computer-based method for the detection of goblet cells from confocal endoscopy images, whose density changes in pathological tissue. After a manual selection of a suitable region of interest, the candidate columnar and goblet cell centers are first detected and the cellular architecture is estimated from their positions using a Voronoi diagram. The region within each Voronoi cell is then analyzed and classified as goblet cell or other. The results suggest that our method is able to detect and label goblet cells immersed in a columnar epithelium in a fast, reliable and automatic way. Accepting 0.44 false positives per image, we obtain a sensitivity value of 90.3%. Furthermore, the estimated and real goblet cell densities are comparable (error: 9.7 ± 16.9%, correlation: 87.2%, R(2) = 76%). PMID:26737720
Interference by pigment in the estimation of microalgal biomass concentration by optical density.
Griffiths, Melinda J; Garcin, Clive; van Hille, Robert P; Harrison, Susan T L
2011-05-01
Optical density is used as a convenient indirect measurement of biomass concentration in microbial cell suspensions. Absorbance of light by a suspension can be related directly to cell density using a suitable standard curve. However, inaccuracies can be introduced when the pigment content of the cells changes. Under the culture conditions used, pigment content of the microalga Chlorella vulgaris varied between 0.5 and 5.5% of dry weight with age and culture conditions. This led to significant errors in biomass quantification over the course of a growth cycle, due to the change in absorbance. Using a standard curve generated at a single time point in the growth cycle to calculate dry weight (dw) from optical density led to average relative errors across the growth cycle, relative to actual dw, of between 9 and 18% at 680 nm and 5 and 13% at 750 nm. When a standard curve generated under low pigment conditions was used to estimate biomass under normal pigment conditions, average relative errors in biomass estimation relative to actual dw across the growth cycle were 52% at 680 nm and 25% at 750 nm. Similar results were found with Scenedesmus, Spirulina and Nannochloropsis. Suggested strategies to minimise error include selection of a wavelength that minimises absorbance by the pigment, e.g. 750 nm where chlorophyll is the dominant pigment, and generation of a standard curve towards the middle, or across the entire, growth cycle. PMID:21329736
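The error mechanism described above is easy to reproduce numerically. A minimal sketch, assuming a Beer-Lambert-style calibration line through the origin (function names are hypothetical): a standard curve fitted under one pigment condition, applied to samples whose absorbance per unit biomass has shifted, inflates the relative error in the dry-weight estimate.

```python
def fit_slope(od, dw):
    """Least-squares calibration through the origin: dw = slope * OD."""
    return sum(o * w for o, w in zip(od, dw)) / sum(o * o for o in od)

def relative_error(true_dw, od, slope):
    """Mean relative error of dry-weight estimates from a fixed standard curve."""
    errs = [abs(slope * o - w) / w for o, w in zip(od, true_dw)]
    return sum(errs) / len(errs)
```

For example, if pigment accumulation raises absorbance per unit biomass after calibration, every estimate is biased high by the same proportion, mirroring the systematic errors reported in the abstract.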
On the method of logarithmic cumulants for parametric probability density function estimation.
Krylov, Vladimir A; Moser, Gabriele; Serpico, Sebastiano B; Zerubia, Josiane
2013-10-01
Parameter estimation of probability density functions is one of the major steps in the area of statistical image and signal processing. In this paper we explore several properties and limitations of the recently proposed method of logarithmic cumulants (MoLC) parameter estimation approach which is an alternative to the classical maximum likelihood (ML) and method of moments (MoM) approaches. We derive the general sufficient condition for a strong consistency of the MoLC estimates which represents an important asymptotic property of any statistical estimator. This result enables the demonstration of the strong consistency of MoLC estimates for a selection of widely used distribution families originating from (but not restricted to) synthetic aperture radar image processing. We then derive the analytical conditions of applicability of MoLC to samples for the distribution families in our selection. Finally, we conduct various synthetic and real data experiments to assess the comparative properties, applicability and small sample performance of MoLC notably for the generalized gamma and K families of distributions. Supervised image classification experiments are considered for medical ultrasound and remote-sensing SAR imagery. The obtained results suggest that MoLC is a feasible and computationally fast yet not universally applicable alternative to MoM. MoLC becomes especially useful when the direct ML approach turns out to be unfeasible. PMID:23799694
Boersen, M.R.; Clark, J.D.; King, T.L.
2003-01-01
The Recovery Plan for the federally threatened Louisiana black bear (Ursus americanus luteolus) mandates that remnant populations be estimated and monitored. In 1999 we obtained genetic material with barbed-wire hair traps to estimate bear population size and genetic diversity at the 329-km2 Tensas River Tract, Louisiana. We constructed and monitored 122 hair traps, which produced 1,939 hair samples. Of those, we randomly selected 116 subsamples for genetic analysis and used up to 12 microsatellite DNA markers to obtain multilocus genotypes for 58 individuals. We used Program CAPTURE to compute estimates of population size using multiple mark-recapture models. The area of study was almost entirely circumscribed by agricultural land, thus the population was geographically closed. Also, study-area boundaries were biologically discrete, enabling us to accurately estimate population density. Using model Chao Mh to account for possible effects of individual heterogeneity in capture probabilities, we estimated the population size to be 119 (SE=29.4) bears, or 0.36 bears/km2. We were forced to examine a substantial number of loci to differentiate between some individuals because of low genetic variation. Despite the probable introduction of genes from Minnesota bears in the 1960s, the isolated population at Tensas exhibited characteristics consistent with inbreeding and genetic drift. Consequently, the effective population size at Tensas may be as few as 32, which warrants continued monitoring or possibly genetic augmentation.
Density-based load estimation using two-dimensional finite element models: a parametric study.
Bona, Max A; Martin, Larry D; Fischer, Kenneth J
2006-08-01
A parametric investigation was conducted to determine the effects on the load estimation method of varying: (1) the thickness of back-plates used in the two-dimensional finite element models of long bones, (2) the number of columns of nodes in the outer medial and lateral sections of the diaphysis to which the back-plate multipoint constraints are applied and (3) the region of bone used in the optimization procedure of the density-based load estimation technique. The study is performed using two-dimensional finite element models of the proximal femora of a chimpanzee, gorilla, lion and grizzly bear. It is shown that the density-based load estimation can be made more efficient and accurate by restricting the stimulus optimization region to the metaphysis/epiphysis. In addition, a simple method, based on the variation of diaphyseal cortical thickness, is developed for assigning the thickness to the back-plate. It is also shown that the number of columns of nodes used as multipoint constraints does not have a significant effect on the method. PMID:17132530
NASA Astrophysics Data System (ADS)
Zelensky, A. A.; Kravchenko, V. F.; Pavlikov, V. V.; Pustovoit, V. I.; Totsky, A. V.
2014-07-01
The methods of smoothing the bispectral density estimate when solving problems of restoring signals of unknown shape in an interference environment with random signal delay are considered for the first time. The analysis of the statistical characteristics of the noise present in the bispectrum estimate shows that these characteristics have rather complex, non-stationary behavior. An unambiguous selection of a filter that is optimal by the criteria of minimum root-mean-square error and minimum dynamic distortion introduced by the filter is problematic because of the non-stationary behavior of the bispectral density estimate counts and the absence of a priori data on the parameters of the restored signal. Therefore, statistical investigations were performed using linear and nonlinear digital filters with varying sliding-window sizes. It is shown that the advantages of the proposed approach are most pronounced with nonlinear digital filtering and small signal-to-noise ratios at the input and/or with a small sampling volume of observed realizations. The Kravchenko weight functions are proposed to smooth the bispectrum of a multifrequency signal with a large dynamic range of spectral-component amplitudes. The presented results are of practical interest for applications such as radiolocation, hydrolocation, and digital communication.
NASA Astrophysics Data System (ADS)
Waters, Daniel F.; Cadou, Christopher P.
2014-02-01
A unique requirement of underwater vehicles' power/energy systems is that they remain neutrally buoyant over the course of a mission. Previous work published in the Journal of Power Sources reported gross, as opposed to neutrally-buoyant, energy densities of an integrated solid oxide fuel cell/Rankine-cycle based power system based on the exothermic reaction of aluminum with seawater. This paper corrects this shortcoming by presenting a model for estimating system mass and using it to update the key findings of the original paper in the context of the neutral buoyancy requirement. It also presents an expanded sensitivity analysis to illustrate the influence of various design and modeling assumptions. While energy density is very sensitive to turbine efficiency (sensitivity coefficient in excess of 0.60), it is relatively insensitive to all other major design parameters (sensitivity coefficients < 0.15) like compressor efficiency, inlet water temperature, scaling methodology, etc. The neutral buoyancy requirement introduces a significant (15%) energy density penalty, but overall the system still appears to offer five- to eight-fold improvements in energy density (i.e., vehicle range/endurance) over present battery-based technologies.
Estimating Absolute Salinity (SA) in the World's Oceans Using Density and Composition
NASA Astrophysics Data System (ADS)
Woosley, R. J.; Huang, F.; Millero, F. J., Jr.
2014-12-01
The practical salinity (Sp), which is determined by the relationship of conductivity to the known proportions of the major components of seawater, and the reference salinity (SR = (35.16504/35)*Sp) do not account for variations in physical properties such as density and enthalpy. Trace and minor components of seawater, such as nutrients or inorganic carbon and total alkalinity, affect these properties and contribute to the absolute salinity (SA). This limitation has been recognized, and several studies have been made to estimate the effect of these compositional changes on the conductivity-density relationship. These studies have been limited in number and geographic scope. Here, we combine the measurements of previous studies with new measurements, for a total of 2,857 conductivity-density measurements covering all of the world's major oceans, to derive empirical equations for the effect of silica and total alkalinity on the density and absolute salinity of the global oceans, and we recommend an equation applicable to most of the world's oceans. The potential impact on salinity of the uptake of anthropogenic CO2 is also discussed.
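The fixed scaling between practical and reference salinity quoted above, plus an additive composition correction, can be sketched as follows (function names are hypothetical; `delta_sa` stands in for the paper's empirical correction, which the abstract does not give in closed form):

```python
def reference_salinity(sp):
    """Reference salinity (g/kg) from practical salinity: SR = (35.16504/35) * Sp."""
    return (35.16504 / 35.0) * sp

def absolute_salinity(sp, delta_sa=0.0):
    """Absolute salinity = reference salinity plus a composition correction
    delta_sa (g/kg), e.g. an empirical function of silicate and total
    alkalinity as fitted from conductivity-density measurements."""
    return reference_salinity(sp) + delta_sa
```

The study's contribution is precisely the empirical fit behind `delta_sa`, derived from the 2,857 conductivity-density measurements.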
Estimation of dislocation density from precession electron diffraction data using the Nye tensor.
Leff, A C; Weinberger, C R; Taheri, M L
2015-06-01
The Nye tensor offers a means to estimate the geometrically necessary dislocation density of a crystalline sample based on measurements of the orientation changes within individual crystal grains. In this paper, the Nye tensor theory is applied to precession electron diffraction automated crystallographic orientation mapping (PED-ACOM) data acquired using a transmission electron microscope (TEM). The resulting dislocation density values are mapped in order to visualize the dislocation structures present in a quantitative manner. These density maps are compared with other related methods of approximating local strain dependencies in dislocation-based microstructural transitions from orientation data. The effect of acquisition parameters on density measurements is examined. By decreasing the step size and spot size during data acquisition, an increasing fraction of the dislocation content becomes accessible. Finally, the method described herein is applied to the measurement of dislocation emission during in situ annealing of Cu in TEM in order to demonstrate the utility of the technique for characterizing microstructural dynamics. PMID:25697461
Simple method to estimate MOS oxide-trap, interface-trap, and border-trap densities
Fleetwood, D.M.; Shaneyfelt, M.R.; Schwank, J.R.
1993-09-01
Recent work has shown that near-interfacial oxide traps that communicate with the underlying Si ("border traps") can play a significant role in determining MOS radiation response and long-term reliability. Thermally-stimulated-current, 1/f noise, and frequency-dependent charge-pumping measurements have been used to estimate border-trap densities in MOS structures. These methods all require high-precision, low-noise measurements that are often difficult to perform and interpret. In this summary, we describe a new dual-transistor method to separate bulk-oxide-trap, interface-trap, and border-trap densities in irradiated MOS transistors that requires only standard threshold-voltage and high-frequency charge-pumping measurements.
A maximum volume density estimator generalized over a proper motion-limited sample
NASA Astrophysics Data System (ADS)
Lam, Marco C.; Rowell, Nicholas; Hambly, Nigel C.
2015-07-01
The traditional Schmidt density estimator has been proven to be unbiased and effective in a magnitude-limited sample. Previously, efforts have been made to generalize it for populations with non-uniform density and for proper motion-limited cases. This work shows that the once-adequate assumptions for a proper motion-limited sample are no longer sufficient to cope with modern data. Populations with larger differences in kinematics relative to the local standard of rest are most severely affected. We show that this systematic bias can be removed by treating the discovery fraction as inseparable from the generalized maximum volume integrand. The treatment can be applied to any proper motion-limited sample with good knowledge of the kinematics. This work demonstrates the method through application to a mock catalogue of a white dwarf-only solar neighbourhood for various scenarios, compared against the traditional treatment, using a survey with Pan-STARRS-like characteristics.
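The classical magnitude-limited Schmidt estimator that the paper generalizes sums 1/Vmax over the sample, where Vmax follows from the distance modulus at the survey limit. A minimal sketch (function names are hypothetical; a full-sky solid angle is assumed by default, and no proper-motion selection is modeled, which is exactly the limitation the paper addresses):

```python
import math

def vmax(abs_mag, m_lim, solid_angle=4 * math.pi):
    """Maximum volume (pc^3) within which a source of absolute magnitude
    abs_mag stays brighter than the apparent-magnitude limit m_lim."""
    d_max = 10 ** ((m_lim - abs_mag + 5) / 5)  # distance modulus, parsecs
    return (solid_angle / 3) * d_max ** 3

def schmidt_density(abs_mags, m_lim):
    """Classical 1/Vmax space-density estimator (sources per pc^3)."""
    return sum(1.0 / vmax(m, m_lim) for m in abs_mags)
```

The paper's generalization folds a kinematics-dependent discovery fraction into the volume integrand instead of treating it as a separate multiplicative factor.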
NASA Technical Reports Server (NTRS)
Justh, Hilary L.; Justus, C. G.
2009-01-01
A recent study (Desai, 2008) has shown that the actual landing sites of Mars Pathfinder, the Mars Exploration Rovers (Spirit and Opportunity), and the Phoenix Mars Lander were further downrange than predicted by models prior to landing. Desai's reconstruction of their entries into the Martian atmosphere showed that the models consistently predicted higher densities than those found during entry, descent and landing. Desai's results have raised the question of whether there is a systemic problem within Mars atmospheric models. The proposal is to compare Mars atmospheric density estimates from Mars atmospheric models to measurements made by Mars Global Surveyor (MGS). The comparison study requires the completion of several tasks that would result in a greater understanding of the reasons behind the discrepancy found during recent landings on Mars and possible solutions to this problem.
NASA Astrophysics Data System (ADS)
Karlický, M.; Jiřička, K.
2002-10-01
Using a recent model of radio zebra fine structures (Ledenev et al. 2001), the magnetic fields, plasma densities, and plasma beta parameters are estimated from high-frequency zebra fine structures. It was found that in the flare radio sources of high-frequency (1-2 GHz) zebras, the densities and magnetic fields vary in the intervals (1-4)×10^10 cm^-3 and 40-230 G, respectively. Assuming a flare temperature of about 10^7 K, the plasma beta parameters in the zebra radio sources lie in the interval 0.05-0.81. Thus the plasma pressure effects in such radio sources, especially in those with many zebra lines, are not negligible.
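The quoted beta values follow from the standard Gaussian-cgs definition beta = 8*pi*n*k_B*T / B^2 (ratio of thermal to magnetic pressure). A quick check (hypothetical function name) reproduces a value inside the reported 0.05-0.81 interval for mid-range parameters:

```python
import math

K_B = 1.380649e-16  # Boltzmann constant, erg/K (Gaussian-cgs units)

def plasma_beta(n_cm3, t_kelvin, b_gauss):
    """Plasma beta: thermal pressure n*k_B*T over magnetic pressure B^2/(8*pi)."""
    return 8 * math.pi * n_cm3 * K_B * t_kelvin / b_gauss ** 2
```

For n = 2×10^10 cm^-3, T = 10^7 K, B = 100 G this gives beta ≈ 0.069, consistent with the quoted interval.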
NASA Astrophysics Data System (ADS)
Terzić, Balša; Bassi, Gabriele
2011-07-01
In this paper we discuss representations of charged particle densities in particle-in-cell simulations, analyze the sources and profiles of the intrinsic numerical noise, and present efficient methods for their removal. We devise two alternative estimation methods for the charged particle distribution which represent a significant improvement over the Monte Carlo cosine expansion used in the 2D code of Bassi et al. [G. Bassi, J. A. Ellison, K. Heinemann, and R. Warnock, Phys. Rev. ST Accel. Beams 12, 080704 (2009); G. Bassi and B. Terzić, in Proceedings of the 23rd Particle Accelerator Conference, Vancouver, Canada, 2009 (IEEE, Piscataway, NJ, 2009), TH5PFP043], designed to simulate coherent synchrotron radiation (CSR) in charged particle beams. The improvement is achieved by employing an alternative beam density estimation to the Monte Carlo cosine expansion. The representation is first binned onto a finite grid, after which two grid-based methods are employed to approximate particle distributions: (i) a truncated fast cosine transform; and (ii) a thresholded wavelet transform (TWT). We demonstrate that these alternative methods represent a staggering upgrade over the original Monte Carlo cosine expansion in terms of efficiency, while the TWT approximation also provides an appreciable improvement in accuracy. The improvement in accuracy comes from a judicious removal of the numerical noise enabled by the wavelet formulation. The TWT method is then integrated into the CSR code [G. Bassi, J. A. Ellison, K. Heinemann, and R. Warnock, Phys. Rev. ST Accel. Beams 12, 080704 (2009)], and benchmarked against the original version. We show that the new density estimation method provides superior performance in terms of efficiency and spatial resolution, thus enabling high-fidelity simulations of CSR effects, including the microbunching instability.
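The thresholded wavelet transform (TWT) step can be illustrated on a 1-D binned density. This is a sketch under simplifying assumptions, not the paper's implementation: a Haar basis and hard thresholding are assumed, and all function names are hypothetical. One useful property is visible directly in the construction: zeroing detail coefficients preserves the total mass of the binned density, which matters when the output is a particle distribution.

```python
def haar_step(c):
    """One forward orthonormal Haar step (even-length input assumed)."""
    s = 2 ** -0.5
    a = [(c[i] + c[i + 1]) * s for i in range(0, len(c), 2)]
    d = [(c[i] - c[i + 1]) * s for i in range(0, len(c), 2)]
    return a, d

def inverse_haar_step(a, d):
    """Exact inverse of haar_step."""
    s = 2 ** -0.5
    out = []
    for ai, di in zip(a, d):
        out.extend([(ai + di) * s, (ai - di) * s])
    return out

def twt_denoise(binned, threshold, levels=2):
    """Thresholded wavelet transform: forward Haar transform, zero small
    detail coefficients (hard threshold), then inverse transform."""
    approx, details = list(binned), []
    for _ in range(levels):
        approx, d = haar_step(approx)
        details.append([x if abs(x) > threshold else 0.0 for x in d])
    for d in reversed(details):
        approx = inverse_haar_step(approx, d)
    return approx
```

With threshold 0 the round trip is the identity; with a large threshold the output is the coarse approximation, with the bin sum unchanged.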
Automated voxelization of 3D atom probe data through kernel density estimation.
Srinivasan, Srikant; Kaluskar, Kaustubh; Dumpala, Santoshrupa; Broderick, Scott; Rajan, Krishna
2015-12-01
Identifying nanoscale chemical features from atom probe tomography (APT) data routinely involves adjusting the voxel size as an input parameter through visual supervision, making the final outcome user dependent, reliant on heuristic knowledge, and potentially prone to error. This work utilizes kernel density estimators to select an optimal voxel size in an unsupervised manner for feature selection, in particular targeting the resolution of interfacial features and chemistries. The capability of this approach is demonstrated through analysis of the γ/γ′ interface in a Ni-Al-Cr superalloy. PMID:25825028
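As a point of contrast with manual voxel-size tuning, the simplest unsupervised smoothing-scale choice is a rule-of-thumb KDE bandwidth. The paper's kernel-based selection is more elaborate, but a 1-D Silverman sketch (hypothetical function name; Gaussian kernel assumed) conveys the idea of deriving the scale from the data rather than from visual supervision:

```python
import math

def silverman_bandwidth(points):
    """Silverman's rule-of-thumb bandwidth for a 1-D Gaussian KDE:
    h = 1.06 * sample_std * n^(-1/5)."""
    n = len(points)
    mean = sum(points) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in points) / (n - 1))
    return 1.06 * sd * n ** -0.2
```

In the APT setting, a data-driven scale like this would replace the user-chosen voxel size as the resolution parameter for the reconstruction grid.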
Efficient 3D movement-based kernel density estimator and application to wildlife ecology
Tracey-PR, Jeff; Sheppard, James K.; Lockwood, Glenn K.; Chourasia, Amit; Tatineni, Mahidhar; Fisher, Robert N.; Sinkovits, Robert S.
2014-01-01
We describe an efficient implementation of a 3D movement-based kernel density estimator for determining animal space use from discrete GPS measurements. This new method provides more accurate results, particularly for species that make large excursions in the vertical dimension. The downside of this approach is that it is much more computationally expensive than simpler, lower-dimensional models. Through a combination of code restructuring, parallelization and performance optimization, we were able to reduce the time to solution by up to a factor of 1000, thereby greatly improving the applicability of the method.
Probability density function estimation of laser light scintillation via Bayesian mixtures.
Wang, Eric X; Avramov-Zamurovic, Svetlana; Watkins, Richard J; Nelson, Charles; Malek-Madani, Reza
2014-03-01
A method for probability density function (PDF) estimation using Bayesian mixtures of weighted gamma distributions, called the Dirichlet process gamma mixture model (DP-GaMM), is presented and applied to the analysis of a laser beam in turbulence. The problem is cast in a Bayesian setting, with the mixture model itself treated as a random process. A stick-breaking interpretation of the Dirichlet process is employed as the prior distribution over the random mixture model. The number and underlying parameters of the gamma distribution mixture components, as well as the associated mixture weights, are learned directly from the data during model inference. A hybrid Metropolis-Hastings and Gibbs sampling parameter inference algorithm is developed and presented in its entirety. Results on several sets of controlled data are shown, and comparisons of PDF estimation fidelity are conducted with favorable results. PMID:24690656
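The stick-breaking construction mentioned above can be sketched directly: break a unit stick with Beta(1, α) draws and take the successive pieces as mixture weights. The truncation level and concentration parameter below are arbitrary illustrative choices, not the paper's settings:

```python
import numpy as np

def stick_breaking_weights(alpha, truncation, rng):
    """Mixture weights from a truncated stick-breaking construction:
    v_k ~ Beta(1, alpha); w_k = v_k * prod_{j<k} (1 - v_j)."""
    v = rng.beta(1.0, alpha, size=truncation)
    v[-1] = 1.0                                      # close the stick so weights sum to 1
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - v[:-1])))
    return v * remaining

rng = np.random.default_rng(42)
weights = stick_breaking_weights(alpha=2.0, truncation=25, rng=rng)
```

Smaller α concentrates mass on few components; larger α spreads it over many, which is how the DP-GaMM lets the data decide the effective number of gamma components.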
NASA Astrophysics Data System (ADS)
Edwards, Matthew C.; Meyer, Renate; Christensen, Nelson
2015-09-01
The standard noise model in gravitational wave (GW) data analysis assumes detector noise is stationary and Gaussian distributed, with a known power spectral density (PSD) that is usually estimated using clean off-source data. Real GW data often depart from these assumptions, and misspecified parametric models of the PSD could result in misleading inferences. We propose a Bayesian semiparametric approach to improve this. We use a nonparametric Bernstein polynomial prior on the PSD, with weights attained via a Dirichlet process distribution, and update this using the Whittle likelihood. Posterior samples are obtained using a blocked Metropolis-within-Gibbs sampler. We simultaneously estimate the reconstruction parameters of a rotating core collapse supernova GW burst that has been embedded in simulated Advanced LIGO noise. We also discuss an approach to deal with nonstationary data by breaking longer data streams into smaller and locally stationary components.
Constrained Kalman Filtering Via Density Function Truncation for Turbofan Engine Health Estimation
NASA Technical Reports Server (NTRS)
Simon, Dan; Simon, Donald L.
2006-01-01
Kalman filters are often used to estimate the state variables of a dynamic system. However, in the application of Kalman filters some known signal information is often either ignored or dealt with heuristically. For instance, state variable constraints (which may be based on physical considerations) are often neglected because they do not fit easily into the structure of the Kalman filter. This paper develops an analytic method of incorporating state variable inequality constraints in the Kalman filter. The resultant filter truncates the PDF (probability density function) of the Kalman filter estimate at the known constraints and then computes the constrained filter estimate as the mean of the truncated PDF. The incorporation of state variable constraints increases the computational effort of the filter but significantly improves its estimation accuracy. The improvement is demonstrated via simulation results obtained from a turbofan engine model. The turbofan engine model contains 3 state variables, 11 measurements, and 10 component health parameters. It is also shown that the truncated Kalman filter may be a more accurate way of incorporating inequality constraints than other constrained filters (e.g., the projection approach to constrained filtering).
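For a single state variable with a scalar interval constraint, the mean of the truncated Gaussian PDF has a closed form, which illustrates the truncation step described above. The numbers below are made up for illustration and are not taken from the turbofan model:

```python
import math

def truncated_gaussian_mean(mu, sigma, lo, hi):
    """Mean of a N(mu, sigma^2) density truncated to [lo, hi]."""
    phi = lambda z: math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    Phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    a, b = (lo - mu) / sigma, (hi - mu) / sigma
    mass = Phi(b) - Phi(a)                  # probability inside the constraint
    return mu + sigma * (phi(a) - phi(b)) / mass

# unconstrained estimate of a health parameter that must physically stay in [0, 1]
constrained = truncated_gaussian_mean(mu=1.1, sigma=0.2, lo=0.0, hi=1.0)
```

When the unconstrained estimate violates a bound, the truncated mean is pulled back inside the feasible interval rather than simply clipped, which preserves more of the filter's uncertainty information.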
Liu, Huaie; Feng, Guohua; Zeng, Weilin; Li, Xiaomei; Bai, Yao; Deng, Shuang; Ruan, Yonghua; Morris, James; Li, Siman; Yang, Zhaoqing; Cui, Liwang
2016-04-01
The conventional method of estimating parasite densities employs an assumed count of 8000 white blood cells (WBCs)/μl. However, due to leucopenia in malaria patients, this number appears to overestimate parasite densities. In this study, we assessed the accuracy of parasite densities estimated using this assumed WBC count in eastern Myanmar, where Plasmodium vivax has become increasingly prevalent. For 256 patients with uncomplicated P. vivax malaria, we estimated the parasite density and counted WBCs using an automated blood cell counter. WBC counts were not significantly different between patients of different gender, axillary temperature, or body mass index levels, but differed significantly between age groups and between time points of measurement. The median parasite densities calculated with the actual WBC counts (1903/μl) and with the assumed WBC count of 8000/μl (2570/μl) were significantly different. We demonstrated that using the assumed WBC count of 8000 cells/μl to estimate parasite densities of P. vivax malaria patients in this area leads to overestimation. For P. vivax patients aged five years and older, an assumed WBC count of 5500/μl best estimated parasite densities. This study provides more realistic assumed WBC counts for estimating parasite densities in P. vivax patients from low-endemicity areas of Southeast Asia. PMID:26802490
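The density calculation at issue follows the standard thick-smear formula: parasites counted per WBCs counted, scaled by the WBC concentration. A minimal sketch (the smear counts below are invented for illustration):

```python
def parasite_density(parasites_counted, wbcs_counted, wbc_per_ul):
    """Thick-smear parasite density in parasites per microlitre:
    density = (parasites counted / WBCs counted) * WBC count per microlitre."""
    return parasites_counted / wbcs_counted * wbc_per_ul

# the same smear read against the conventional and the study's revised WBC counts
assumed = parasite_density(120, 200, 8000)   # conventional 8000 WBC/ul
revised = parasite_density(120, 200, 5500)   # study's suggestion for P. vivax
```

The linear scaling makes the study's point concrete: an inflated WBC assumption inflates every reported density by the same factor (here 8000/5500, about 1.45).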
Gonzalez, Ruben; Huang, Biao; Lau, Eric
2015-09-01
Principal component analysis (PCA) has been widely used in the process industries for the purpose of monitoring abnormal behaviour. Dimension reduction is achieved through PCA, while T-tests are used to test for abnormality. One of the main contributions to the success of PCA is its ability not only to detect problems, but also to give some indication as to where they are located. However, PCA and the T-test make use of Gaussian assumptions which may not be suitable in process fault detection. A previous modification of this method is the use of independent component analysis (ICA) for dimension reduction combined with kernel density estimation for detecting abnormality; like PCA, this method points out the location of problems based on linear data-driven methods, but without the Gaussian assumptions. Both ICA and PCA, however, suffer from challenges in interpreting results, which can make it difficult to act quickly once a fault has been detected online. This paper proposes the use of Bayesian networks for dimension reduction, which allows the use of process knowledge, enabling more intelligent dimension reduction and easier interpretation of results. The dimension reduction technique is combined with multivariate kernel density estimation, making this technique effective for non-linear relationships with non-Gaussian variables. The performance of PCA, ICA and Bayesian networks is compared on data from an industrial scale plant. PMID:25930233
Gene Ontology density estimation and discourse analysis for automatic GeneRiF extraction
Gobeill, Julien; Tbahriti, Imad; Ehrler, Frédéric; Mottaz, Anaïs; Veuthey, Anne-Lise; Ruch, Patrick
2008-01-01
Background: This paper describes and evaluates a sentence selection engine that extracts a GeneRiF (Gene Reference into Function) as defined in ENTREZ-Gene, based on a MEDLINE record. Inputs for this task include both a gene and a pointer to a MEDLINE reference. In the suggested approach we merge two independent sentence extraction strategies. The first proposed strategy (LASt) uses argumentative features inspired by discourse-analysis models. The second extraction scheme (GOEx) uses an automatic text categorizer to estimate the density of Gene Ontology categories in every sentence, thus providing a full ranking of all possible candidate GeneRiFs. A combination of the two approaches is proposed, which also aims at reducing the size of the selected segment by filtering out non-content-bearing rhetorical phrases. Results: Based on the TREC-2003 Genomics collection for GeneRiF identification, the LASt extraction strategy is already competitive (52.78%). When used in a combined approach, the extraction task clearly improves, achieving a Dice score of over 57% (+10%). Conclusions: Argumentative representation levels and conceptual density estimation using Gene Ontology contents appear complementary for functional annotation in proteomics. PMID:18426554
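The Dice score used for evaluation is straightforward to compute over token sets. A minimal sketch (the example phrases are invented, not from the TREC-2003 collection):

```python
def dice_score(candidate, reference):
    """Dice coefficient between two token sets: 2|A ∩ B| / (|A| + |B|)."""
    a, b = set(candidate), set(reference)
    if not a and not b:
        return 1.0                       # two empty extractions agree trivially
    return 2.0 * len(a & b) / (len(a) + len(b))

score = dice_score("protein kinase activity".split(),
                   "kinase activity in proteins".split())
```

Here the overlap is {kinase, activity}, so the score is 2·2/(3+4) = 4/7 ≈ 0.57, the same scale on which the combined LASt+GOEx system is reported.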
Accuracy of estimated geometric parameters of trees depending on the LIDAR data density
NASA Astrophysics Data System (ADS)
Hadas, Edyta; Estornell, Javier
2015-04-01
The estimation of dendrometric variables has become important for spatial planning and agriculture projects. Because classical field measurements are time consuming and inefficient, airborne LiDAR (Light Detection and Ranging) measurements are successfully used in this area. Point clouds acquired over relatively large areas allow the structure of forest and agricultural areas and the geometric parameters of individual trees to be determined. In this study two LiDAR datasets with different densities were used: a sparse one with an average density of 0.5 pt/m² and a dense one with a density of 4 pt/m². Twenty-five olive trees were selected, and field measurements of tree height, crown bottom height, length of crown diameters and tree position were performed. To determine the tree geometric parameters from the LiDAR data, two independent strategies were developed that utilize the ArcGIS, ENVI and FUSION software. Strategy (a) was based on slicing the canopy surface model (CSM) at 0.5 m height intervals, while in strategy (b) minimum bounding polygons, representing the tree crown area, were created around each detected tree centroid. The individual steps were designed so that they can also be applied in automatic processing. To assess the performance of each strategy with both point clouds, the differences between the measured and estimated geometric parameters of the trees were analyzed. As expected, tree heights were underestimated by both strategies (RMSE = 0.7 m for the dense dataset and RMSE = 1.5 m for the sparse one) and crown bottom heights were overestimated (RMSE = 0.4 m and RMSE = 0.7 m for the dense and sparse datasets, respectively). For the dense dataset, strategy (b) determined crown diameters more accurately (RMSE = 0.5 m) than strategy (a) (RMSE = 0.8 m), while for the sparse dataset only strategy (a) proved adequate (RMSE = 1.0 m). The dependence of each strategy's accuracy on tree size was also examined. For the dense dataset, the larger the tree (height or longer crown diameter), the larger the error of the estimated tree height; for the sparse dataset, the larger the tree, the larger the error of the estimated crown bottom height. Finally, the spatial distribution of points inside the tree crown was analyzed by creating a normalized tree crown. It confirms a high concentration of LiDAR points in the central part of the crown.
Krucker, Saem; Raftery, Claire L.; Hudson, Hugh S.
2011-06-10
We report on Transition Region And Coronal Explorer 171 Å observations of the GOES X20 class flare on 2001 April 2 that show EUV flare ribbons with intense diffraction patterns. Between the 11th and 14th orders, the diffraction patterns of the compact flare ribbon are dispersed into two sources. The two sources are identified as emission from the Fe IX line at 171.1 Å and the combined emission from Fe X lines at 174.5, 175.3, and 177.2 Å. The prominent emission of the Fe IX line indicates that the EUV-emitting ribbon has a strong temperature component near the lower end of the 171 Å temperature response (~0.6-1.5 MK). Fitting the observation with an isothermal model, the derived temperature is around 0.65 MK. However, the low sensitivity of the 171 Å filter to high-temperature plasma does not provide estimates of the emission measure for temperatures above ~1.5 MK. Using the derived temperature of 0.65 MK, the observed 171 Å flux gives a density of the EUV ribbon of 3 × 10^11 cm^-3. This density is much lower than the density of the hard X-ray producing region (~10^13 to 10^14 cm^-3), suggesting that the EUV sources, though closely related spatially, lie at higher altitudes.
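A density like the one quoted above is the kind of figure obtained from an emission measure under the optically thin, uniform-source assumption, n_e = sqrt(EM/V). The sketch below uses illustrative numbers chosen only to land in the same order of magnitude; they are not the paper's actual emission measure or source volume:

```python
import math

def electron_density(emission_measure, volume):
    """n_e = sqrt(EM / V) for an optically thin source of uniform density,
    where the volume emission measure is EM = n_e**2 * V (cgs units)."""
    return math.sqrt(emission_measure / volume)

# illustrative numbers only: EM in cm^-3, source volume in cm^3
n_e = electron_density(emission_measure=1.0e47, volume=1.0e24)
```

Because the density scales only as the square root of EM/V, an order-of-magnitude uncertainty in the assumed volume changes n_e by just a factor of ~3, which is why such estimates are quoted to one significant figure.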
Yang, Shanshan; Zheng, Fang; Luo, Xin; Cai, Suxian; Wu, Yunfeng; Liu, Kaizhi; Wu, Meihong; Chen, Jian; Krishnan, Sridhar
2014-01-01
Detection of dysphonia is useful for monitoring the progression of phonatory impairment in patients with Parkinson's disease (PD), and also helps assess disease severity. This paper describes statistical pattern analysis methods to study different vocal measurements of sustained phonations. The feature dimension reduction procedure was implemented using the sequential forward selection (SFS) and kernel principal component analysis (KPCA) methods. Four selected vocal measures were projected by the KPCA onto the bivariate feature space, in which the class-conditional feature densities can be approximated with the nonparametric kernel density estimation technique. In the vocal pattern classification experiments, Fisher's linear discriminant analysis (FLDA) was applied to perform the linear classification of voice records for healthy control subjects and PD patients, and the maximum a posteriori (MAP) decision rule and support vector machine (SVM) with radial basis function kernels were employed for the nonlinear classification tasks. Based on the KPCA-mapped feature densities, the MAP classifier successfully distinguished 91.8% of voice records, with a sensitivity of 0.986, a specificity of 0.708, and an area under the receiver operating characteristic (ROC) curve of 0.94. The diagnostic performance provided by the MAP classifier was superior to those of the FLDA and SVM classifiers. In addition, the classification results indicated that dysphonia detection is insensitive to gender, and that the sustained phonations of PD patients with minimal functional disability are more difficult to identify correctly. PMID:24586406
A volumetric method for estimation of breast density on digitized screen-film mammograms.
Pawluczyk, Olga; Augustine, Bindu J; Yaffe, Martin J; Rico, Dan; Yang, Jiwei; Mawdsley, Gordon E; Boyd, Norman F
2003-03-01
A method is described for the quantitative volumetric analysis of mammographic density (VBD) from digitized screen-film mammograms. The method is based on initial calibration of the imaging system with a tissue-equivalent plastic device and subsequent correction for variations in exposure factors and film processing characteristics through images of an aluminum step wedge placed adjacent to the breast during imaging. From information about the compressed breast thickness and the technique factors used for taking the mammogram, as well as the information from the calibration device, VBD is calculated. First, optical sensitometry is used to convert images to log relative exposure. Second, the images are corrected for x-ray field inhomogeneity using a spherical-section PMMA phantom image. The effectiveness of using the aluminum step wedge in tracking the variations in exposure factors and film processing was tested by taking test images of the calibration device, aluminum step wedge and known-density phantoms at various exposure conditions and at different times over one year. Results obtained on known-density phantoms show that VBD can be estimated to within 5% of the actual value. A first-order thickness correction is employed to correct for inaccuracy in the compression thickness indicator of the mammography units. Clinical studies are ongoing to evaluate whether VBD can be a better indicator of breast cancer risk. PMID:12674236
Fiora, Alessandro; Cescatti, Alessandro
2006-09-01
Daily and seasonal patterns in the radial distribution of sap flux density were monitored in six trees differing in social position in a mixed coniferous stand dominated by silver fir (Abies alba Miller) and Norway spruce (Picea abies (L.) Karst) in the Alps of northeastern Italy. The radial distribution of sap flux was measured with arrays of 1-cm-long Granier probes. The radial profiles were either Gaussian or decreased monotonically toward the tree center, and seemed to be related to the social position and crown distribution of the trees. The ratio between the sap flux estimated with the outermost sensor and the mean flux, weighted by the corresponding annulus areas, was used as a correction factor (CF) to express diurnal and seasonal radial variation in sap flow. During sunny days, the diurnal radial profile of sap flux changed with time and accumulated photosynthetically active radiation (PAR), with an increasing contribution of sap flux in the inner sapwood during the day. Seasonally, the contribution of sap flux in the inner xylem increased with daily cumulative PAR, and the variation of CF was proportional to tree diameter, ranging from 29% for suppressed trees up to 300% for dominant trees. Two models were developed, relating CF to PAR and tree diameter at breast height (DBH), to correct daily and seasonal estimates of whole-tree and stand sap flow obtained by assuming uniform sap flux density over the sapwood. If the variability in the radial profile of sap flux density were not accounted for, total stand transpiration would be overestimated by 32% during sunny days and by 40% for the entire season. PMID:16740497
HIRDLS observations of global gravity wave absolute momentum fluxes: A wavelet based approach
NASA Astrophysics Data System (ADS)
John, Sherine Rachel; Kishore Kumar, Karanam
2016-02-01
Using a wavelet technique for the detection of height-varying vertical and horizontal wavelengths of gravity waves, the absolute values of gravity wave momentum fluxes are estimated from High Resolution Dynamics Limb Sounder (HIRDLS) temperature measurements. Two years of temperature measurements (December 2005 to November 2007) from HIRDLS onboard the EOS-Aura satellite over the globe are used for this purpose. A least-squares fitting method is employed to extract the zonal wavenumber 0-6 planetary wave amplitudes, which are removed from the instantaneous temperature profiles to extract the gravity wave fields. The vertical and horizontal wavelengths of the prominent waves are computed using wavelet and cross-correlation techniques, respectively. The absolute momentum fluxes are then estimated using the prominent gravity wave perturbations and their vertical and horizontal wavelengths. The momentum fluxes obtained from HIRDLS are compared with fluxes obtained from ground-based Rayleigh LIDAR observations over a low-latitude station, Gadanki (13.5°N, 79.2°E), and are found to be in good agreement. After validation, the absolute gravity wave momentum fluxes over the entire globe are estimated. It is found that the winter hemisphere has the maximum momentum flux magnitudes over the high latitudes, with a secondary maximum over the summer hemispheric low latitudes. The significance of the present study lies in introducing the wavelet technique for estimating the height-varying vertical and horizontal wavelengths of gravity waves and in validating space-based momentum flux estimates using ground-based lidar observations.
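A common way to turn temperature perturbations and wavelengths into an absolute momentum flux is the relation |F| = (ρ/2)(λz/λh)(g/N)²(T′/T̄)² from Ern et al. (2004); whether the paper uses this exact form is an assumption here, and the input values below are illustrative mid-stratosphere numbers, not HIRDLS retrievals:

```python
import math

def absolute_momentum_flux(rho, lambda_z, lambda_h, g, N, t_prime, t_bar):
    """|F| = (rho/2) * (lambda_z / lambda_h) * (g / N)**2 * (T' / Tbar)**2,
    with rho in kg/m^3, wavelengths in m, N in 1/s, temperatures in K."""
    return 0.5 * rho * (lambda_z / lambda_h) * (g / N) ** 2 * (t_prime / t_bar) ** 2

# illustrative values only (not taken from the paper)
flux = absolute_momentum_flux(rho=0.02, lambda_z=8.0e3, lambda_h=500.0e3,
                              g=9.81, N=0.02, t_prime=2.0, t_bar=240.0)  # Pa
```

The λz/λh ratio is why the wavelet-derived wavelengths matter so much: a factor-of-two error in either wavelength propagates linearly into the flux.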
A Bayesian Hierarchical Model for Estimation of Abundance and Spatial Density of Aedes aegypti
Villela, Daniel A. M.; Codeo, Claudia T.; Figueiredo, Felipe; Garcia, Gabriela A.; Maciel-de-Freitas, Rafael; Struchiner, Claudio J.
2015-01-01
Strategies to minimize dengue transmission commonly rely on vector control, which aims to maintain Ae. aegypti density below a theoretical threshold. Mosquito abundance is traditionally estimated from mark-release-recapture (MRR) experiments, which lack proper analysis regarding accurate vector spatial distribution and population density. Recently proposed strategies to control vector-borne diseases involve replacing the susceptible wild population by genetically modified individuals refractory to infection by the pathogen. Accurate measurements of mosquito abundance in time and space are required to optimize the success of such interventions. In this paper, we present a hierarchical probabilistic model for the estimation of population abundance and spatial distribution from typical mosquito MRR experiments, with direct application to the planning of these new control strategies. We perform a Bayesian analysis using the model and data from two MRR experiments performed in a neighborhood of Rio de Janeiro, Brazil, during both low- and high-dengue transmission seasons. The hierarchical model indicates that the mosquito spatial distribution is clustered during the winter (0.99 mosquitoes/premise, 95% CI: 0.80-1.23) and more homogeneous during the high-abundance period (5.2 mosquitoes/premise, 95% CI: 4.3-5.9). The hierarchical model also performed better than the commonly used Fisher-Ford method when tested on simulated data. The proposed model provides a formal treatment of the sources of uncertainty associated with the estimation of mosquito abundance imposed by the sampling design. Our approach is useful in strategies such as population suppression or the displacement of wild vector populations by refractory Wolbachia-infected mosquitoes, since the invasion dynamics have been shown to follow threshold conditions dictated by mosquito abundance.
The presence of spatially distributed abundance hotspots is also formally addressed under this modeling framework and its knowledge deemed crucial to predict the fate of transmission control strategies based on the replacement of vector populations. PMID:25906323
Seismic Hazard Analysis Using the Adaptive Kernel Density Estimation Technique for Chennai City
NASA Astrophysics Data System (ADS)
Ramanna, C. K.; Dodagoudar, G. R.
2012-01-01
The conventional method of probabilistic seismic hazard analysis (PSHA) using the Cornell-McGuire approach requires identification of homogeneous source zones as the first step. This criterion brings along many issues and, hence, several alternative methods of hazard estimation have come up in the last few years. Methods such as zoneless or zone-free methods, and modelling of the Earth's crust using numerical methods with finite element analysis, have been proposed. Delineating a homogeneous source zone in regions of distributed and/or diffused seismicity is a rather difficult task. In this study, the zone-free method using the adaptive kernel technique for hazard estimation is explored for regions having distributed and diffused seismicity. Chennai city lies in such a region of low to moderate seismicity, so it has been used as a case study. The adaptive kernel technique is statistically superior to the fixed kernel technique primarily because the bandwidth of the kernel is varied spatially depending on the clustering or sparseness of the epicentres. Although the fixed kernel technique has proven to work well in general density estimation cases, it fails to perform in the case of multimodal and long-tail distributions. In such situations, the adaptive kernel technique serves the purpose and is more relevant in earthquake engineering, as the activity rate probability density surface is multimodal in nature. The peak ground acceleration (PGA) obtained from all three approaches (i.e., the Cornell-McGuire approach, and the fixed and adaptive kernel techniques) for 10% probability of exceedance in 50 years is around 0.087 g. The uniform hazard spectra (UHS) are also provided for different structural periods.
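Abramson's square-root law is the usual way to make the kernel bandwidth adapt to clustering: a fixed-bandwidth pilot estimate sets per-point bandwidth factors, widening kernels where epicentres are sparse. A one-dimensional sketch, with synthetic "epicentre" data; the paper's actual implementation may differ:

```python
import numpy as np

def gaussian_kde_1d(x_eval, data, h):
    """Fixed-bandwidth Gaussian KDE evaluated at x_eval."""
    z = (x_eval[:, None] - data[None, :]) / h
    return np.mean(np.exp(-0.5 * z ** 2) / (h * np.sqrt(2 * np.pi)), axis=1)

def adaptive_kde_1d(x_eval, data, h0):
    """Abramson's adaptive kernel: widen kernels where the pilot density is low."""
    pilot = gaussian_kde_1d(data, data, h0)
    g = np.exp(np.mean(np.log(pilot)))        # geometric mean of pilot values
    lam = (pilot / g) ** -0.5                 # per-point bandwidth factors
    z = (x_eval[:, None] - data[None, :]) / (h0 * lam[None, :])
    k = np.exp(-0.5 * z ** 2) / (h0 * lam[None, :] * np.sqrt(2 * np.pi))
    return np.mean(k, axis=1)

# clustered "epicentre" positions along a fault trace (synthetic)
rng = np.random.default_rng(3)
data = np.concatenate([rng.normal(0.0, 0.5, 400), rng.normal(6.0, 2.0, 100)])
x = np.linspace(-4, 14, 200)
density = adaptive_kde_1d(x, data, h0=0.5)
```

The dense cluster keeps narrow kernels (preserving its peak), while the sparse cluster gets wide ones, which is exactly the long-tail behaviour where a fixed bandwidth fails.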
Bayes and empirical Bayes estimators of abundance and density from spatial capture-recapture data
Dorazio, Robert M.
2013-01-01
In capture-recapture and mark-resight surveys, movements of individuals both within and between sampling periods can alter the susceptibility of individuals to detection over the region of sampling. In these circumstances spatially explicit capture-recapture (SECR) models, which incorporate the observed locations of individuals, allow population density and abundance to be estimated while accounting for differences in detectability of individuals. In this paper I propose two Bayesian SECR models, one for the analysis of recaptures observed in trapping arrays and another for the analysis of recaptures observed in area searches. In formulating these models I used distinct submodels to specify the distribution of individual home-range centers and the observable recaptures associated with these individuals. This separation of ecological and observational processes allowed me to derive a formal connection between Bayes and empirical Bayes estimators of population abundance that has not been established previously. I showed that this connection applies to every Poisson point-process model of SECR data and provides theoretical support for a previously proposed estimator of abundance based on recaptures in trapping arrays. To illustrate results of both classical and Bayesian methods of analysis, I compared Bayes and empirical Bayes estimates of abundance and density using recaptures from simulated and real populations of animals. Real populations included two iconic datasets: recaptures of tigers detected in camera-trap surveys and recaptures of lizards detected in area-search surveys. In the datasets I analyzed, classical and Bayesian methods provided similar, and often identical, inferences, which is not surprising given the sample sizes and the noninformative priors used in the analyses.
NASA Astrophysics Data System (ADS)
Cao, Yuan; He, Haibo; Man, Hong; Shen, Xiaoping
2009-09-01
This paper proposes an approach that integrates the self-organizing map (SOM) and kernel density estimation (KDE) techniques for an anomaly-based network intrusion detection (ABNID) system to monitor network traffic and capture potential abnormal behaviors. With the continuous development of network technology, information security has become a major concern for cyber system research. In modern net-centric and tactical warfare networks, it is even more critical to provide real-time protection for the availability, confidentiality, and integrity of the networked information. To this end, in this work we propose to explore the learning capabilities of SOM and to integrate it with KDE for network intrusion detection. KDE is used to estimate the distributions of the observed random variables that describe the network system and to determine whether the network traffic is normal or abnormal. Meanwhile, the learning and clustering capabilities of SOM are employed to obtain well-defined data clusters that reduce the computational cost of the KDE. The principle of learning in SOM is to self-organize the network of neurons to seek similar properties for certain input patterns. Therefore, SOM can form an approximation of the distribution of the input space in a compact fashion, reduce the number of terms in a kernel density estimator, and thus improve the efficiency of the intrusion detection. We test the proposed algorithm on real-world data sets obtained from the Integrated Network Based Ohio University's Network Detective Service (INBOUNDS) system to show the effectiveness and efficiency of this method.
Wavelet-based reconstruction of fossil-fuel CO2 emissions from sparse measurements
NASA Astrophysics Data System (ADS)
McKenna, S. A.; Ray, J.; Yadav, V.; Van Bloemen Waanders, B.; Michalak, A. M.
2012-12-01
We present a method to estimate spatially resolved fossil-fuel CO2 (ffCO2) emissions from sparse measurements of time-varying CO2 concentrations. It is based on wavelet modeling of the strongly non-stationary spatial distribution of ffCO2 emissions. The dimensionality of the wavelet model is first reduced using images of nightlights, which identify regions of human habitation. Since wavelets are a multiresolution basis set, most of the reduction is accomplished by removing fine-scale wavelets in regions with low nightlight radiances. The (reduced) wavelet model of emissions is propagated through an atmospheric transport model (WRF) to predict CO2 concentrations at a handful of measurement sites. The estimation of the wavelet model of emissions, i.e., inferring the wavelet weights, is performed by fitting to observations at the measurement sites. This is done using Stagewise Orthogonal Matching Pursuit (StOMP), which first identifies (and sets to zero) the wavelet coefficients that cannot be estimated from the observations, before estimating the remaining coefficients. This model sparsification and fitting is performed simultaneously, allowing us to explore multiple wavelet models of differing complexity. The technique is borrowed from the field of compressive sensing, where it is generally used in image and video processing. We test this approach using synthetic observations generated from emissions in the Vulcan database. Thirty-five sensor sites are chosen over the USA. ffCO2 emissions, averaged over 8-day periods, are estimated at a 1-degree spatial resolution. We find that only about 40% of the wavelets in the emission model can be estimated from the data; however, the mix of coefficients that are estimated changes with time. Total US emissions can be reconstructed with ~5% error. The inferred emissions, if aggregated monthly, have a correlation of 0.9 with Vulcan fluxes. We find that the estimated emissions in the Northeast US are the most accurate.
Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
NASA Astrophysics Data System (ADS)
Shimizu, Noritaka; Utsuno, Yutaka; Futamura, Yasunori; Sakurai, Tetsuya; Mizusaki, Takahiro; Otsuka, Takaharu
2016-02-01
We introduce a novel method to obtain level densities in large-scale shell-model calculations. Our method is a stochastic estimation of the eigenvalue count based on a shifted Krylov-subspace method, which enables us to obtain level densities of huge Hamiltonian matrices. This framework leads to a successful description of both low-lying spectroscopy and the experimentally observed equilibration of Jπ = 2+ and 2- states in 58Ni in a unified manner.
SAR amplitude probability density function estimation based on a generalized Gaussian model.
Moser, Gabriele; Zerubia, Josiane; Serpico, Sebastiano B
2006-06-01
In the context of remotely sensed data analysis, an important problem is the development of accurate models for the statistics of the pixel intensities. Focusing on synthetic aperture radar (SAR) data, this modeling process turns out to be a crucial task, for instance, for classification or for denoising purposes. In this paper, an innovative parametric estimation methodology for SAR amplitude data is proposed that adopts a generalized Gaussian (GG) model for the complex SAR backscattered signal. A closed-form expression for the corresponding amplitude probability density function (PDF) is derived and a specific parameter estimation algorithm is developed in order to deal with the proposed model. Specifically, the recently proposed "method-of-log-cumulants" (MoLC) is applied, which stems from the adoption of the Mellin transform (instead of the usual Fourier transform) in the computation of characteristic functions and from the corresponding generalization of the concepts of moment and cumulant. For the developed GG-based amplitude model, the resulting MoLC estimates turn out to be numerically feasible and are also analytically proved to be consistent. The proposed parametric approach was validated by using several real ERS-1, XSAR, E-SAR, and NASA/JPL airborne SAR images, and the experimental results prove that the method models the amplitude PDF better than several previously proposed parametric models for backscattering phenomena. PMID:16764268
Garde, Ainara; Karlen, Walter; Ansermino, J Mark; Dumont, Guy A
2014-01-01
The photoplethysmogram (PPG) obtained from pulse oximetry measures local variations of blood volume in tissues, reflecting the peripheral pulse modulated by heart activity, respiration and other physiological effects. We propose an algorithm based on the correntropy spectral density (CSD) as a novel way to estimate respiratory rate (RR) and heart rate (HR) from the PPG. Time-varying CSD, a technique particularly well-suited for modulated signal patterns, is applied to the PPG. The respiratory and cardiac frequency peaks detected at extended respiratory (8 to 60 breaths/min) and cardiac (30 to 180 beats/min) frequency bands provide RR and HR estimations. The CSD-based algorithm was tested against the Capnobase benchmark dataset, a dataset from 42 subjects containing PPG and capnometric signals and expert labeled reference RR and HR. The RR and HR estimation accuracy was assessed using the unnormalized root mean square (RMS) error. We investigated two window sizes (60 and 120 s) on the Capnobase calibration dataset to explore the time resolution of the CSD-based algorithm. A longer window decreases the RR error, for 120-s windows, the median RMS error (quartiles) obtained for RR was 0.95 (0.27, 6.20) breaths/min and for HR was 0.76 (0.34, 1.45) beats/min. Our experiments show that in addition to a high degree of accuracy and robustness, the CSD facilitates simultaneous and efficient estimation of RR and HR. Providing RR every minute expands the functionality of pulse oximeters and provides additional diagnostic power to this non-invasive monitoring tool. PMID:24466088
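A minimal sketch of the CSD idea: compute the correntropy (a kernelized, robust autocorrelation) over lags, center it, and take its FFT; peaks inside physiological bands give the rate estimates. The Gaussian kernel width, lag span, and band edges below are illustrative assumptions, not the paper's tuned values.

```python
import numpy as np

def correntropy_spectrum(x, fs, sigma, max_lag_s=10.0):
    """Centered correntropy over lags, then its FFT (the CSD).
    sigma is the Gaussian kernel width (an assumed, untuned value)."""
    n = int(max_lag_s * fs)
    v = np.array([np.exp(-(x[k:] - x[:len(x) - k])**2
                         / (2.0 * sigma**2)).mean() for k in range(n)])
    v -= v.mean()                       # centering removes the DC pedestal
    spec = np.abs(np.fft.rfft(v, n=4 * n))
    return np.fft.rfftfreq(4 * n, d=1.0 / fs), spec

def peak_frequency(freqs, spec, lo, hi):
    """Dominant peak inside a physiological band, e.g. 0.5-3 Hz for HR."""
    band = (freqs >= lo) & (freqs <= hi)
    return freqs[band][np.argmax(spec[band])]
```

On a synthetic 72 bpm (1.2 Hz) cardiac component sampled at 25 Hz, the peak in the cardiac band recovers the rate; the respiratory band is handled the same way.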
Estimation of effective scatterer size and number density in near-infrared tomography
NASA Astrophysics Data System (ADS)
Wang, Xin
2007-05-01
Light scattering from tissue originates from the fluctuations in intra-cellular and extra-cellular components, so it is possible that macroscopic scattering spectroscopy could be used to quantify sub-microscopic structures. Both electron microscopy (EM) and optical phase contrast microscopy were used to study the origin of scattering from tissue. EM studies indicate that lipid-bound particle sizes appear to be distributed as a monotonic exponential function, with sub-micron structures dominating the distribution. Given assumptions about the index of refraction change, the shape of the scattering spectrum in the near infrared as measured through bulk tissue is consistent with what would be predicted by Mie theory with these particle size histograms. The relative scattering intensity of breast tissue sections (including 10 normal & 23 abnormal) was studied by phase contrast microscopy. Results show that stroma has higher scattering than epithelium tissue, and fat has the lowest values; tumor epithelium has lower scattering than the normal epithelium; stroma associated with tumor has lower scattering than the normal stroma. Mie theory estimation of scattering spectra was used to estimate effective particle size values, and this was applied retrospectively to normal whole breast spectra accumulated in ongoing clinical exams. The effective sizes ranged between 20 and 1400 nm, which are consistent with subcellular organelles and collagen matrix fibrils discussed previously. This estimation method was also applied to images from cancer regions, with results indicating that the effective scatterer sizes of the region of interest (ROI) are close to those of the background for both cancer and benign patients; for the effective number density, there is a large difference between the ROI and the background for cancer patients, while for benign patients the ROI values are relatively close to those of the background.
Ongoing MRI-guided NIR studies indicated that the fibroglandular tissue had smaller effective scatterer size and larger effective number density than the adipose tissue. The studies in this thesis provide an interpretive approach to estimate average morphological scatter parameters of bulk tissue, through interpretation of diffuse scattering as coming from effective Mie scatterers.
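A common simplification behind such estimates (a sketch of the idea, not the thesis's full Mie inversion) is that the reduced scattering spectrum follows a power law, mus'(lambda) = a (lambda/lambda0)^(-b), where the scattering power b is sensitive to effective scatterer size; a and b can be fit by least squares in log-log space.

```python
import numpy as np

def fit_scatter_power(wavelengths_nm, mus_prime, ref_nm=1000.0):
    """Least-squares fit of mus'(lambda) = a * (lambda/ref)^(-b) in
    log-log space; b is the size-sensitive 'scattering power'."""
    X = np.log(np.asarray(wavelengths_nm) / ref_nm)
    Y = np.log(np.asarray(mus_prime))
    slope, log_a = np.polyfit(X, Y, 1)
    return np.exp(log_a), -slope
```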
He,P.; Blaskiewicz, M.; Fischer, W.
2009-01-02
In this report we summarize electron-cloud simulations for the RHIC dipole regions at injection and transition to estimate if scrubbing over practical time scales at injection would reduce the electron cloud density at transition to significantly lower values. The lower electron cloud density at transition will allow for an increase in the ion intensity.
Direct learning of sparse changes in Markov networks by density ratio estimation.
Liu, Song; Quinn, John A; Gutmann, Michael U; Suzuki, Taiji; Sugiyama, Masashi
2014-06-01
We propose a new method for detecting changes in Markov network structure between two sets of samples. Instead of naively fitting two Markov network models separately to the two data sets and figuring out their difference, we directly learn the network structure change by estimating the ratio of Markov network models. This density-ratio formulation naturally allows us to introduce sparsity in the network structure change, which highly contributes to enhancing interpretability. Furthermore, computation of the normalization term, a critical bottleneck of the naive approach, can be remarkably mitigated. We also give the dual formulation of the optimization problem, which further reduces the computation cost for large-scale Markov networks. Through experiments, we demonstrate the usefulness of our method. PMID:24684449
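The paper's sparse density-ratio estimator is more involved; as a sketch of the same underlying idea, one can exploit the classifier-based density-ratio identity r(x) = p(x)/q(x) proportional to P(class P | x) / (1 - P(class P | x)). With quadratic features, log r(x) is a quadratic form, so a changed pairwise (Markov-network) interaction shows up as a large weight on the corresponding cross term. The 2-D feature map and plain gradient ascent below are illustrative choices.

```python
import numpy as np

def ratio_weights(xp, xq, lr=0.1, iters=4000):
    """Logistic-regression density-ratio sketch for 2-D data: weights on
    cross terms x_i*x_j estimate changes in precision-matrix entries."""
    def feats(x):  # [x1^2, x2^2, x1*x2, x1, x2, 1]
        return np.column_stack([x[:, 0]**2, x[:, 1]**2, x[:, 0] * x[:, 1],
                                x[:, 0], x[:, 1], np.ones(len(x))])
    F = np.vstack([feats(xp), feats(xq)])
    y = np.r_[np.ones(len(xp)), np.zeros(len(xq))]
    w = np.zeros(F.shape[1])
    for _ in range(iters):                     # batch gradient ascent on
        p = 1.0 / (1.0 + np.exp(-F @ w))       # the logistic log-likelihood
        w += lr * F.T @ (y - p) / len(y)
    return w
```

For two zero-mean Gaussians that differ only in the correlation between the two variables, the x1*x2 weight flags the changed edge while the other quadratic weights stay smaller.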
NASA Astrophysics Data System (ADS)
Kamousi, Baharan; Nasiri Amini, Ali; He, Bin
2007-06-01
The goal of the present study is to employ the source imaging methods such as cortical current density estimation for the classification of left- and right-hand motor imagery tasks, which may be used for brain-computer interface (BCI) applications. The scalp recorded EEG was first preprocessed by surface Laplacian filtering, time-frequency filtering, noise normalization and independent component analysis. Then the cortical imaging technique was used to solve the EEG inverse problem. Cortical current density distributions of left and right trials were classified from each other by exploiting the concept of Von Neumann entropy. The proposed method was tested on three human subjects (180 trials each) and a maximum accuracy of 91.5% and an average accuracy of 88% were obtained. The present results confirm the hypothesis that source analysis methods may improve accuracy for classification of motor imagery tasks. The present promising results using source analysis for classification of motor imagery enhances our ability of performing source analysis from single trial EEG data recorded on the scalp, and may have applications to improved BCI systems.
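The Von Neumann entropy used as the classification feature can be computed from any trace-normalized positive semi-definite matrix. A minimal sketch follows; how the matrix is built from the cortical current-density maps is the paper's construction and is not reproduced here.

```python
import numpy as np

def von_neumann_entropy(C):
    """S = -sum(lam * log(lam)) over the trace-normalized eigenvalues of
    a symmetric PSD matrix C (e.g. a covariance of the estimated
    current-density distribution). Lower S ~ more focal activity."""
    lam = np.linalg.eigvalsh(C)
    lam = np.clip(lam, 0.0, None)
    lam = lam / lam.sum()
    lam = lam[lam > 0]
    return float(-(lam * np.log(lam)).sum())
```

The identity matrix (maximally spread "activity") gives log(dimension), while a rank-1 matrix (a single focal pattern) gives zero.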
Volcanic explosion clouds - Density, temperature, and particle content estimates from cloud motion
NASA Technical Reports Server (NTRS)
Wilson, L.; Self, S.
1980-01-01
Photographic records of 10 vulcanian eruption clouds produced during the 1978 eruption of Fuego Volcano in Guatemala have been analyzed to determine cloud velocity and acceleration at successive stages of expansion. Cloud motion is controlled by air drag (dominant during early, high-speed motion) and buoyancy (dominant during late motion when the cloud is convecting slowly). Cloud densities in the range 0.6 to 1.2 times that of the surrounding atmosphere were obtained by fitting equations of motion for two common cloud shapes (spheres and vertical cylinders) to the observed motions. Analysis of the heat budget of a cloud permits an estimate of cloud temperature and particle weight fraction to be made from the density. Model results suggest that clouds generally reached temperatures within 10 K of that of the surrounding air within 10 seconds of formation and that dense particle weight fractions were less than 2% by this time. The maximum sizes of dense particles supported by motion in the convecting clouds range from 140 to 1700 microns.
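A minimal sketch of the fitting idea for the spherical-cloud case: integrate the equation of motion under buoyancy and quadratic air drag, and grid-search the density ratio that best reproduces the observed velocities. Constant drag coefficient and radius are assumed, and added mass and entrainment are ignored; all numbers below are illustrative, not Fuego values.

```python
import numpy as np

G = 9.81  # m/s^2

def simulate_rise(beta, radius, cd=1.0, v0=40.0, dt=0.05, t_end=8.0):
    """Vertical velocity of a spherical cloud with density ratio
    beta = rho_cloud / rho_air, under buoyancy and quadratic drag."""
    v, out = v0, []
    for _ in range(int(t_end / dt)):
        drag = 3.0 * cd * abs(v) * v / (8.0 * radius * beta)
        v += (G * (1.0 / beta - 1.0) - drag) * dt
        out.append(v)
    return np.array(out)

def fit_density_ratio(v_obs, radius, grid=np.linspace(0.6, 1.2, 61)):
    """Grid search for the beta that best reproduces observed velocities,
    mirroring the fit of equations of motion to the cloud imagery."""
    errs = [np.sum((simulate_rise(b, radius) - v_obs)**2) for b in grid]
    return grid[int(np.argmin(errs))]
```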
Can we estimate plasma density in ICP driver through electrical parameters in RF circuit?
NASA Astrophysics Data System (ADS)
Bandyopadhyay, M.; Sudhir, Dass; Chakraborty, A.
2015-04-01
To avoid regular maintenance, invasive plasma diagnostics with probes are not included in the inductively coupled plasma (ICP) based ITER Neutral Beam (NB) source design. Even non-invasive probes like optical emission spectroscopic diagnostics are also not included in the present ITER NB design due to overall system design and interface issues. As a result, negative ion beam current through the extraction system in the ITER NB negative ion source is the only measurement which indicates plasma condition inside the ion source. However, beam current not only depends on the plasma condition near the extraction region but also on the perveance condition of the ion extractor system and negative ion stripping. Moreover, the inductively coupled plasma production region (RF driver region) is placed at a distance (~30 cm) from the extraction region. Due to that, some uncertainties are expected if one tries to link beam current with plasma properties inside the RF driver. Plasma characterization in the source RF driver region is essential to maintain the optimum condition for source operation. In this paper, a method of plasma density estimation is described, based on density dependent plasma load calculation.
NASA Astrophysics Data System (ADS)
Vancamberg, Laurence; Geeraert, Nausikaa; Iordache, Razvan; Palma, Giovanni; Klausz, Rémy; Muller, Serge
2011-03-01
Needle insertion planning for digital breast tomosynthesis (DBT) guided biopsy has the potential to improve patient comfort and intervention safety. However, a relevant planning should take into account breast tissue deformation and lesion displacement during the procedure. Deformable models, like finite elements, use the elastic characteristics of the breast to evaluate the deformation of tissue during needle insertion. This paper presents a novel approach to locally estimate the Young's modulus of the breast tissue directly from the DBT data. The method consists in computing the fibroglandular percentage in each of the acquired DBT projection images, then reconstructing the density volume. Finally, this density information is used to compute the mechanical parameters for each finite element of the deformable mesh, obtaining a heterogeneous DBT based breast model. Preliminary experiments were performed to evaluate the relevance of this method for needle path planning in DBT guided biopsy. The results show that the heterogeneous DBT based breast model improves needle insertion simulation accuracy in 71% of the cases, compared to a homogeneous model or a binary fat/fibroglandular tissue model.
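The density-to-stiffness step can be sketched as a per-element blend of two reference moduli weighted by the reconstructed fibroglandular fraction. The linear mixing rule and the two moduli below are placeholder assumptions, not the paper's calibrated values.

```python
def element_young_modulus(fg_fraction, e_fat_pa=1.0e3, e_fibro_pa=10.0e3):
    """Per-element Young's modulus from the reconstructed fibroglandular
    fraction in [0, 1]. Linear mixing and both moduli are assumptions."""
    if not 0.0 <= fg_fraction <= 1.0:
        raise ValueError("fg_fraction must be in [0, 1]")
    return (1.0 - fg_fraction) * e_fat_pa + fg_fraction * e_fibro_pa
```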
Density estimation in aerial images of large crowds for automatic people counting
NASA Astrophysics Data System (ADS)
Herrmann, Christian; Metzler, Juergen
2013-05-01
Counting people is a common topic in the area of visual surveillance and crowd analysis. While many image-based solutions are designed to count only a few persons at the same time, like pedestrians entering a shop or watching an advertisement, there is hardly any solution for counting large crowds of several hundred persons or more. We addressed this problem previously by designing a semi-automatic system being able to count crowds consisting of hundreds or thousands of people based on aerial images of demonstrations or similar events. This system requires major user interaction to segment the image. Our principle aim is to reduce this manual interaction. To achieve this, we propose a new and automatic system. Besides counting the people in large crowds, the system yields the positions of people allowing a plausibility check by a human operator. In order to automatize the people counting system, we use crowd density estimation. The determination of crowd density is based on several features like edge intensity or spatial frequency. They indicate the density and discriminate between a crowd and other image regions like buildings, bushes or trees. We compare the performance of our automatic system to the previous semi-automatic system and to manual counting in images. We measure the performance gain of our new system by counting a test set of aerial images showing large crowds containing up to 12,000 people. By improving our previous system, we increase the benefit of an image-based solution for counting people in large crowds.
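An edge-intensity feature of the kind described can be sketched as the mean gradient magnitude per image block: textured crowd regions score high, flat regions (roads, walls) score low. The block size is an arbitrary choice here.

```python
import numpy as np

def edge_intensity_map(img, block=16):
    """Mean gradient magnitude per block: a simple stand-in for the
    edge-intensity feature separating crowd texture from background."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    h = (mag.shape[0] // block) * block
    w = (mag.shape[1] // block) * block
    mag = mag[:h, :w].reshape(h // block, block, w // block, block)
    return mag.mean(axis=(1, 3))
```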
Evaluation of a brushing machine for estimating density of spider mites on grape leaves.
Macmillan, Craig D; Costello, Michael J
2015-12-01
Direct visual inspection and enumeration for estimating field population density of economically important arthropods, such as spider mites, provide more information than alternative methods, such as binomial sampling, but are laborious and time-consuming. A brushing machine can reduce sampling time and perhaps improve accuracy. Although brushing technology has been investigated and recommended as a useful tool for researchers and integrated pest management practitioners, little work to demonstrate the validity of this technique has been performed since the 1950s. We investigated the brushing machine manufactured by Leedom Enterprises (Mi-Wuk Village, CA, USA) for studies on spider mites. We evaluated (1) the mite recovery efficiency relative to the number of passes of a leaf through the brushes, (2) mite counts as generated by the machine compared to visual counts under a microscope, (3) the lateral distribution of mites on the collection plate and (4) the accuracy and precision of a 10% sub-sample using a double-transect counting grid. We found that about 95% of mites on a leaf were recovered after five passes, and 99% after nine passes, and mite counts from brushing were consistently higher than those from visual inspection. Lateral distribution of mites was not uniform, being highest in concentration at the center and lowest at the periphery. The 10% double-transect pattern did not result in a significant correlation with the total plate count at low mite density, but accuracy and precision improved at medium and high density. We suggest that a more accurate and precise sample may be achieved using a modified pattern which concentrates on the center plus some of the adjacent area. PMID:26459377
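The reported recovery figures are consistent with a constant per-pass efficiency p and cumulative recovery 1 - (1 - p)^n (an assumed geometric model, not one stated by the authors): solving with 95% after five passes gives p of about 0.45, which predicts roughly 99.5% after nine passes, matching the abstract.

```python
def per_pass_efficiency(cum_recovery, n_passes):
    """Solve 1 - (1 - p)^n = cum_recovery for p, assuming each pass
    removes a fixed fraction of the mites remaining on the leaf."""
    return 1.0 - (1.0 - cum_recovery) ** (1.0 / n_passes)

def cumulative_recovery(p, n_passes):
    return 1.0 - (1.0 - p) ** n_passes
```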
Wang, Shuihua; Chen, Mengmeng; Li, Yang; Zhang, Yudong; Han, Liangxiu; Wu, Jane; Du, Sidan
2015-01-01
Identification and detection of dendritic spines in neuron images are of high interest in diagnosis and treatment of neurological and psychiatric disorders (e.g., Alzheimer's disease, Parkinson's diseases, and autism). In this paper, we have proposed a novel automatic approach using wavelet-based conditional symmetric analysis and regularized morphological shared-weight neural networks (RMSNN) for dendritic spine identification involving the following steps: backbone extraction, localization of dendritic spines, and classification. First, a new algorithm based on wavelet transform and conditional symmetric analysis has been developed to extract backbone and locate the dendrite boundary. Then, the RMSNN has been proposed to classify the spines into three predefined categories (mushroom, thin, and stubby). We have compared our proposed approach against the existing methods. The experimental result demonstrates that the proposed approach can accurately locate the dendrite and accurately classify the spines into three categories with the accuracy of 99.1% for “mushroom” spines, 97.6% for “stubby” spines, and 98.6% for “thin” spines. PMID:26692046
Wang, Shuihua; Chen, Mengmeng; Li, Yang; Zhang, Yudong; Han, Liangxiu; Wu, Jane; Du, Sidan
2015-01-01
Identification and detection of dendritic spines in neuron images are of high interest in diagnosis and treatment of neurological and psychiatric disorders (e.g., Alzheimer's disease, Parkinson's diseases, and autism). In this paper, we have proposed a novel automatic approach using wavelet-based conditional symmetric analysis and regularized morphological shared-weight neural networks (RMSNN) for dendritic spine identification involving the following steps: backbone extraction, localization of dendritic spines, and classification. First, a new algorithm based on wavelet transform and conditional symmetric analysis has been developed to extract backbone and locate the dendrite boundary. Then, the RMSNN has been proposed to classify the spines into three predefined categories (mushroom, thin, and stubby). We have compared our proposed approach against the existing methods. The experimental result demonstrates that the proposed approach can accurately locate the dendrite and accurately classify the spines into three categories with the accuracy of 99.1% for "mushroom" spines, 97.6% for "stubby" spines, and 98.6% for "thin" spines. PMID:26692046
NASA Technical Reports Server (NTRS)
LeMoigne, Jacqueline; Laporte, Nadine; Netanyahuy, Nathan S.; Zukor, Dorothy (Technical Monitor)
2001-01-01
The characterization and the mapping of land cover/land use of forest areas, such as the Central African rainforest, is a very complex task. This complexity is mainly due to the extent of such areas and, as a consequence, to the lack of full and continuous cloud-free coverage of those large regions by one single remote sensing instrument, In order to provide improved vegetation maps of Central Africa and to develop forest monitoring techniques for applications at the local and regional scales, we propose to utilize multi-sensor remote sensing observations coupled with in-situ data. Fusion and clustering of multi-sensor data are the first steps towards the development of such a forest monitoring system. In this paper, we will describe some preliminary experiments involving the fusion of SAR and Landsat image data of the Lope Reserve in Gabon. Similarly to previous fusion studies, our fusion method is wavelet-based. The fusion provides a new image data set which contains more detailed texture features and preserves the large homogeneous regions that are observed by the Thematic Mapper sensor. The fusion step is followed by unsupervised clustering and provides a vegetation map of the area.
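A minimal sketch of wavelet-based fusion, using a single-level orthonormal Haar transform and a common fusion rule (average the approximation bands, keep the stronger of the two detail coefficients). The study's wavelet basis, decomposition depth, and fusion rule may differ.

```python
import numpy as np

def haar2(x):
    """One-level 2D Haar analysis (orthonormal), even-sized input."""
    a, b = x[0::2, 0::2], x[0::2, 1::2]
    c, d = x[1::2, 0::2], x[1::2, 1::2]
    return ((a + b + c + d) / 2, (a - b + c - d) / 2,
            (a + b - c - d) / 2, (a - b - c + d) / 2)

def ihaar2(ll, lh, hl, hh):
    """Inverse of haar2."""
    x = np.empty((2 * ll.shape[0], 2 * ll.shape[1]))
    x[0::2, 0::2] = (ll + lh + hl + hh) / 2
    x[0::2, 1::2] = (ll - lh + hl - hh) / 2
    x[1::2, 0::2] = (ll + lh - hl - hh) / 2
    x[1::2, 1::2] = (ll - lh - hl + hh) / 2
    return x

def fuse(img_a, img_b):
    """Average approximations; take the max-magnitude detail coefficient,
    so the fused image keeps the sharper texture of either sensor."""
    ca, cb = haar2(img_a), haar2(img_b)
    ll = (ca[0] + cb[0]) / 2
    details = [np.where(np.abs(da) >= np.abs(db), da, db)
               for da, db in zip(ca[1:], cb[1:])]
    return ihaar2(ll, *details)
```

Fusing an image with itself reproduces the image, which checks both the transform's perfect reconstruction and the fusion rule's consistency.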
New Estimates on the EKB Dust Density using the Student Dust Counter
NASA Astrophysics Data System (ADS)
Szalay, J.; Horanyi, M.; Poppe, A. R.
2013-12-01
The Student Dust Counter (SDC) is an impact dust detector on board the New Horizons Mission to Pluto. SDC was designed to resolve the mass of dust grains in the range of 10^-12 < m < 10^-9 g, covering an approximate size range of 0.5-10 µm in particle radius. The measurements can be directly compared to the prediction of a grain tracing trajectory model of dust originating from the Edgeworth-Kuiper Belt. SDC's results as well as data taken by the Pioneer 10 dust detector are compared to our model to derive estimates for the mass production rate and the ejecta mass distribution power law exponent. Contrary to previous studies, the assumption that all impacts are generated by grains on circular Keplerian orbits is removed, allowing for a more accurate determination of the EKB mass production rate. With these estimates, the speed and mass distribution of EKB grains entering atmospheres of outer solar system bodies can be calculated. Through December 2013, the New Horizons spacecraft reached approximately 28 AU, enabling SDC to map the dust density distribution of the solar system farther than any previous dust detector.
Dunn, K. L.; Wilson, P. P. H.
2013-07-01
A new Monte Carlo mesh tally based on a Kernel Density Estimator (KDE) approach using integrated particle tracks is presented. We first derive the KDE integral-track estimator and present a brief overview of its implementation as an alternative to the MCNP fmesh tally. To facilitate a valid quantitative comparison between these two tallies for verification purposes, there are two key issues that must be addressed. The first of these issues involves selecting a good data transfer method to convert the nodal-based KDE results into their cell-averaged equivalents (or vice versa with the cell-averaged MCNP results). The second involves choosing an appropriate resolution of the mesh, since if it is too coarse this can introduce significant errors into the reference MCNP solution. After discussing both of these issues in some detail, we present the results of a convergence analysis that shows the KDE integral-track and MCNP fmesh tallies are indeed capable of producing equivalent results for some simple 3D transport problems. In all cases considered, there was clear convergence from the KDE results to the reference MCNP results as the number of particle histories was increased. (authors)
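The integral-track estimator itself is beyond a short sketch, but the kernel density idea underneath it can be illustrated with a plain fixed-bandwidth Gaussian KDE (the paper's estimator additionally integrates the kernel along each particle track rather than evaluating it at collision points).

```python
import numpy as np

def gaussian_kde_1d(samples, grid, bandwidth):
    """Fixed-bandwidth Gaussian kernel density estimate on a grid.
    A simplified cousin of the KDE tally, shown in 1D for clarity."""
    u = (grid[:, None] - samples[None, :]) / bandwidth
    return np.exp(-0.5 * u**2).sum(axis=1) / (
        len(samples) * bandwidth * np.sqrt(2.0 * np.pi))
```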
Methods for Estimating Environmental Effects and Constraints on NexGen: High Density Case Study
NASA Technical Reports Server (NTRS)
Augustine, S.; Ermatinger, C.; Graham, M.; Thompson, T.
2010-01-01
This document provides a summary of the current methods developed by Metron Aviation for the estimate of environmental effects and constraints on the Next Generation Air Transportation System (NextGen). This body of work incorporates many of the key elements necessary to achieve such an estimate. Each section contains the background and motivation for the technical elements of the work, a description of the methods used, and possible next steps. The current methods described in this document were selected in an attempt to provide a good balance between accuracy and fairly rapid turn around times to best advance Joint Planning and Development Office (JPDO) System Modeling and Analysis Division (SMAD) objectives while also supporting the needs of the JPDO Environmental Working Group (EWG). In particular this document describes methods applied to support the High Density (HD) Case Study performed during the spring of 2008. A reference day (in 2006) is modeled to describe current system capabilities while the future demand is applied to multiple alternatives to analyze system performance. The major variables in the alternatives are operational/procedural capabilities for airport, terminal, and en route airspace along with projected improvements to airframe, engine and navigational equipment.
Adib, Mani; Cretu, Edmond
2013-01-01
We present a new method for removing artifacts in electroencephalography (EEG) records during Galvanic Vestibular Stimulation (GVS). The main challenge in exploiting GVS is to understand how the stimulus acts as an input to the brain. We used EEG to monitor the brain and elicit the GVS reflexes. However, GVS current distribution throughout the scalp generates an artifact on EEG signals. We need to eliminate this artifact to be able to analyze the EEG signals during GVS. We propose a novel method to estimate the contribution of the GVS current in the EEG signals at each electrode by combining time-series regression methods with wavelet decomposition methods. We use wavelet transform to project the recorded EEG signal into various frequency bands and then estimate the GVS current distribution in each frequency band. The proposed method was optimized using simulated signals, and its performance was compared to well-accepted artifact removal methods such as ICA-based methods and adaptive filters. The results show that the proposed method has better performance in removing GVS artifacts, compared to the others. Using the proposed method, a higher signal to artifact ratio of −1.625 dB was achieved, which outperformed other methods such as ICA-based methods, regression methods, and adaptive filters. PMID:23956786
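The per-band regression idea can be sketched with a Haar wavelet decomposition: decompose both the contaminated EEG and the GVS reference, subtract the least-squares projection of the reference within every band, then reconstruct. The wavelet family, depth, and regression details here are illustrative stand-ins for the paper's choices.

```python
import numpy as np

def haar_dwt(x):
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def haar_idwt(a, d):
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def remove_gvs(eeg, gvs, levels=4):
    """Per-band regression: in every wavelet band, subtract the
    least-squares projection of the GVS reference from the EEG.
    Signal length must be divisible by 2**levels."""
    a_e, a_g, details = eeg, gvs, []
    for _ in range(levels):
        a_e, d_e = haar_dwt(a_e)
        a_g, d_g = haar_dwt(a_g)
        details.append(d_e - (d_e @ d_g / (d_g @ d_g)) * d_g)
    a_e = a_e - (a_e @ a_g / (a_g @ a_g)) * a_g
    for d in reversed(details):
        a_e = haar_idwt(a_e, d)
    return a_e
```

On a synthetic mixture (brain signal plus a scaled reference), the cleaned trace is much closer to the brain signal than the contaminated one.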
Using kernel density estimation to understand the influence of neighbourhood destinations on BMI
King, Tania L; Bentley, Rebecca J; Thornton, Lukar E; Kavanagh, Anne M
2016-01-01
Objectives Little is known about how the distribution of destinations in the local neighbourhood is related to body mass index (BMI). Kernel density estimation (KDE) is a spatial analysis technique that accounts for the location of features relative to each other. Using KDE, this study investigated whether individuals living near destinations (shops and service facilities) that are more intensely distributed rather than dispersed, have lower BMIs. Study design and setting A cross-sectional study of 2349 residents of 50 urban areas in metropolitan Melbourne, Australia. Methods Destinations were geocoded, and kernel density estimates of destination intensity were created using kernels of 400, 800 and 1200 m. Using multilevel linear regression, the association between destination intensity (classified in quintiles Q1(least)–Q5(most)) and BMI was estimated in models that adjusted for the following confounders: age, sex, country of birth, education, dominant household occupation, household type, disability/injury and area disadvantage. Separate models included a physical activity variable. Results For kernels of 800 and 1200 m, there was an inverse relationship between BMI and more intensely distributed destinations (compared to areas with least destination intensity). Effects were significant at 1200 m: Q4, β −0.86, 95% CI −1.58 to −0.13, p=0.022; Q5, β −1.03 95% CI −1.65 to −0.41, p=0.001. Inclusion of physical activity in the models attenuated effects, although effects remained marginally significant for Q5 at 1200 m: β −0.77 95% CI −1.52, −0.02, p=0.045. Conclusions This study conducted within urban Melbourne, Australia, found that participants living in areas of greater destination intensity within 1200 m of home had lower BMIs. Effects were partly explained by physical activity. The results suggest that increasing the intensity of destination distribution could reduce BMI levels by encouraging higher levels of physical activity. PMID:26883235
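The destination-intensity surface can be sketched with a finite-radius kernel of the kind common in GIS software; the quartic (biweight) kernel below is one such choice (not necessarily the study's), with the bandwidth playing the role of the 400/800/1200 m radii.

```python
import numpy as np

def quartic_kde(points_xy, grid_xy, bandwidth_m):
    """Kernel density of destinations at grid locations, using a quartic
    kernel that falls to zero at one bandwidth (e.g. 800 m) from a point."""
    d2 = ((grid_xy[:, None, :] - points_xy[None, :, :])**2).sum(-1)
    u2 = d2 / bandwidth_m**2
    k = np.where(u2 < 1.0, (1.0 - u2)**2, 0.0)
    # 3/(pi*h^2) makes each kernel integrate to 1 over the plane
    return k.sum(axis=1) * 3.0 / (np.pi * bandwidth_m**2)
```

Density is highest at a destination, decays with distance, and is exactly zero beyond the kernel radius, which is what makes intensity (clustered vs dispersed destinations) visible in the surface.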
On L p -Resolvent Estimates and the Density of Eigenvalues for Compact Riemannian Manifolds
NASA Astrophysics Data System (ADS)
Bourgain, Jean; Shao, Peng; Sogge, Christopher D.; Yao, Xiaohua
2015-02-01
We address an interesting question raised by Dos Santos Ferreira, Kenig and Salo (Forum Math, 2014) about regions for which there can be uniform resolvent estimates for (Δ_g + ζ)^{-1}, ζ ∈ C, where Δ_g is the Laplace-Beltrami operator with metric g on a given compact boundaryless Riemannian manifold of dimension n ≥ 3. This is related to earlier work of Kenig, Ruiz and the third author (Duke Math J 55:329-347, 1987) for the Euclidean Laplacian, in which case the region is the entire complex plane minus any disc centered at the origin. Presently, we show that for the round metric on the sphere, S^n, the resolvent estimates in (Dos Santos Ferreira et al. in Forum Math, 2014), involving a much smaller region, are essentially optimal. We do this by establishing sharp bounds based on the distance from ζ to the spectrum of -Δ_g. In the other direction, we also show that the bounds in (Dos Santos Ferreira et al. in Forum Math, 2014) can be sharpened logarithmically for manifolds with nonpositive curvature, and by powers in the case of the torus, T^n, with the flat metric. The latter improves earlier bounds of Shen (Int Math Res Not 1:1-31, 2001). The work of (Dos Santos Ferreira et al. in Forum Math, 2014) and (Shen in Int Math Res Not 1:1-31, 2001) was based on Hadamard parametrices for (Δ_g + ζ)^{-1}. Ours is based on the related Hadamard parametrices for the wave equation, and it follows ideas in (Sogge in Ann Math 126:439-447, 1987) of proving L^p-multiplier estimates using small-time wave equation parametrices and the spectral projection estimates from (Sogge in J Funct Anal 77:123-138, 1988). This approach allows us to adapt arguments in Bérard (Math Z 155:249-276, 1977) and Hlawka (Monatsh Math 54:1-36, 1950) to obtain the aforementioned improvements over (Dos Santos Ferreira et al. in Forum Math, 2014) and (Shen in Int Math Res Not 1:1-31, 2001).
Further improvements for the torus are obtained using recent techniques of the first author (Bourgain in Israel J Math 193(1):441-458, 2013) and his work with Guth (Bourgain and Guth in Geom Funct Anal 21:1239-1295, 2011) based on the multilinear estimates of Bennett, Carbery and Tao (Math Z 2:261-302, 2006). Our approach also allows us to give a natural necessary condition for favorable resolvent estimates that is based on a measurement of the density of the spectrum of -Δ_g, and, moreover, a necessary and sufficient condition based on natural improved spectral projection estimates for shrinking intervals, as opposed to those in (Sogge in J Funct Anal 77:123-138, 1988) for unit-length intervals. We show that the resolvent estimates are sensitive to clustering within the spectrum, which is not surprising given Sommerfeld's original conjecture (Sommerfeld in Physikal Zeitschr 11:1057-1066, 1910) about these operators.
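For orientation, the uniform resolvent estimates under discussion are of Kenig-Ruiz-Sogge type; up to normalization conventions (and with n the dimension of M), they take the form

```latex
\| u \|_{L^{\frac{2n}{n-2}}(M)}
  \;\le\; C \,\bigl\| (\Delta_g + \zeta)\, u \bigr\|_{L^{\frac{2n}{n+2}}(M)},
  \qquad u \in C^{\infty}(M), \ \zeta \in R_g \subset \mathbb{C},
```

where the admissible region R_g (a stand-in name, not the papers' notation) is precisely what differs between the Euclidean, spherical, nonpositively curved, and flat-torus cases discussed above.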
A wavelet based method for modelling seismic waves in strongly heterogeneous media
NASA Astrophysics Data System (ADS)
Kennett, B. L.; Hong, T.-K.
2003-04-01
High accuracy representations of the action of spatial differential operators in elastodynamics can be achieved by using an expansion in terms of wavelets, even in the presence of strong heterogeneity. With a displacement-velocity representation, time-marching solutions of the set of first-order partial differential equations allow the treatment of strong impedance contrasts in heterogeneous media with both stability and accuracy. Numerical simulations of seismic wave propagation in a variety of styles of stochastic heterogeneity allow a detailed investigation of amplitude behaviour and suggest that scattering attenuation is commonly over-estimated by finite-difference techniques. The wavelet method is particularly useful for handling sources in the presence of heterogeneity, such as in tectonic environments, including propagation in fault gouge zones and in subduction zones. The flexibility of the wavelet approach can also be exploited to include various types of dynamic sources, such as fault rupture with a complex time history.
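The time-marching structure described above can be sketched in one dimension. This is an illustration only: the paper represents the spatial derivative with a wavelet expansion, whereas this sketch substitutes a plain staggered finite difference so the first-order displacement-velocity (here, velocity-stress) marching loop is visible; all parameter values are invented.

```python
import numpy as np

def simulate_1d_elastic(nx=400, nt=300, dx=1.0, rho=1.0, mu=1.0, cfl=0.5):
    """March the first-order system
         dv/dt = (1/rho) dsigma/dx,   dsigma/dt = mu dv/dx
       with a staggered leapfrog scheme (a stand-in for the wavelet operator)."""
    c = np.sqrt(mu / rho)                    # wave speed
    dt = cfl * dx / c                        # CFL-stable time step
    v = np.exp(-0.01 * (np.arange(nx) - nx // 2) ** 2)  # initial velocity pulse
    s = np.zeros(nx + 1)                     # stress on the staggered grid
    for _ in range(nt):
        s[1:-1] += dt * mu * (v[1:] - v[:-1]) / dx   # stress update (half step)
        v += dt * (s[1:] - s[:-1]) / (rho * dx)      # velocity update
    return v, s, dt

v, s, dt = simulate_1d_elastic()   # pulse splits into two travelling waves
```

The initial pulse splits into left- and right-going waves of half amplitude, and the total discrete energy stays essentially constant, which is the stability property the abstract emphasizes.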
The EM Method in a Probabilistic Wavelet-Based MRI Denoising.
Martin-Fernandez, Marcos; Villullas, Sergio
2015-01-01
Human body heat emission and other external causes can interfere with magnetic resonance image acquisition and produce noise. In these images the noise, when no signal is present, is Rayleigh distributed, and its wavelet coefficients can be approximately modeled by a Gaussian distribution, while noiseless magnetic resonance images can be modeled by a Laplacian distribution in the wavelet domain. This paper proposes a new magnetic resonance image denoising method that exploits these properties. The method performs shrinkage of wavelet coefficients based on the conditional probability of each coefficient being noise or detail. The parameters involved in this filtering approach are calculated by means of the expectation maximization (EM) method, which avoids the need for an estimator of the noise variance. The efficiency of the proposed filter is studied and compared with other important filtering techniques, such as Nowak's, Donoho-Johnstone's, Awate-Whitaker's, and nonlocal means filters, in different 2D and 3D images. PMID:26089959
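A hedged sketch of the core idea: model wavelet coefficients as a two-component mixture, zero-mean Gaussian for noise and zero-mean Laplacian for signal detail, fit it with EM, then shrink each coefficient by its posterior probability of being detail. The distribution choices follow the abstract; the EM update equations below are the standard closed-form ones for this mixture, not taken from the paper itself.

```python
import numpy as np

def em_shrink(coeffs, n_iter=50):
    """Fit a Gaussian(noise)/Laplacian(signal) mixture by EM and shrink
       each coefficient by its posterior probability of being signal."""
    x = np.asarray(coeffs, dtype=float)
    ax = np.abs(x)
    # crude initial guesses for mixing weight, noise std, Laplacian scale
    pi_sig, sigma, b = 0.5, np.std(x) * 0.5 + 1e-9, np.mean(ax) + 1e-9
    for _ in range(n_iter):
        # E-step: posterior responsibility of the Laplacian (signal) component
        p_sig = pi_sig * np.exp(-ax / b) / (2.0 * b)
        p_noi = (1 - pi_sig) * np.exp(-0.5 * (x / sigma) ** 2) / (
            np.sqrt(2 * np.pi) * sigma)
        r = p_sig / (p_sig + p_noi + 1e-300)
        # M-step: closed-form parameter updates
        pi_sig = r.mean()
        b = (r * ax).sum() / (r.sum() + 1e-12) + 1e-9
        sigma = np.sqrt(((1 - r) * x ** 2).sum() / ((1 - r).sum() + 1e-12)) + 1e-9
    return r * x, r        # shrunk coefficients and posterior probabilities
```

Because the shrinkage factor is a probability in [0, 1], the filter never amplifies a coefficient; large (likely-detail) coefficients are kept nearly intact while small (likely-noise) ones are suppressed.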
Cusack, Jeremy J; Swanson, Alexandra; Coulson, Tim; Packer, Craig; Carbone, Chris; Dickman, Amy J; Kosmala, Margaret; Lintott, Chris; Rowcliffe, J Marcus
2015-01-01
The random encounter model (REM) is a novel method for estimating animal density from camera trap data without the need for individual recognition. It has never been used to estimate the density of large carnivore species, despite these being the focus of most camera trap studies worldwide. In this context, we applied the REM to estimate the density of female lions (Panthera leo) from camera traps deployed in Serengeti National Park, Tanzania, comparing estimates to reference values derived from pride census data. More specifically, we attempted to account for bias resulting from non-random camera placement at lion resting sites under isolated trees by comparing estimates derived from night versus day photographs, between dry and wet seasons, and between habitats that differ in their amount of tree cover. Overall, we recorded 169 and 163 independent photographic events of female lions from 7,608 and 12,137 camera trap days carried out in the dry season of 2010 and the wet season of 2011, respectively. Although all REM models considered over-estimated female lion density, models that considered only night-time events resulted in estimates that were much less biased relative to those based on all photographic events. We conclude that restricting REM estimation to periods and habitats in which animal movement is more likely to be random with respect to cameras can help reduce bias in estimates of density for female Serengeti lions. We highlight that accurate REM estimates will nonetheless be dependent on reliable measures of average speed of animal movement and camera detection zone dimensions. © 2015 The Authors. Journal of Wildlife Management published by Wiley Periodicals, Inc. on behalf of The Wildlife Society. PMID:26640297
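The REM estimator the study applies is the "ideal gas" formula of Rowcliffe et al. (2008): animals moving randomly at speed v encounter a camera with detection radius r and detection arc theta at a rate proportional to density, so density follows from the trap rate y/t. The movement and detection-zone numbers below are invented for illustration, not taken from the study.

```python
import math

def rem_density(photos, camera_days, speed_km_day, radius_km, theta_rad):
    """REM density (animals per km^2): D = (y/t) * pi / (v * r * (2 + theta))."""
    trap_rate = photos / camera_days          # y/t, photographic events per camera-day
    return trap_rate * math.pi / (speed_km_day * radius_km * (2.0 + theta_rad))

# e.g. the dry-season figures above (169 events over 7,608 camera-days),
# with assumed speed and detection-zone parameters:
d = rem_density(photos=169, camera_days=7608, speed_km_day=2.0,
                radius_km=0.012, theta_rad=0.175)
```

The formula makes the abstract's closing caveat concrete: the estimate scales inversely with the assumed day range and detection-zone dimensions, so errors in those measurements propagate directly into the density.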
Fleetwood, D.M.; Shaneyfelt, M.R.; Schwank, J.R.
1994-04-11
A simple method is described that combines conventional threshold-voltage and charge-pumping measurements on n- and p-channel metal-oxide-semiconductor (MOS) transistors to estimate radiation-induced oxide-, interface-, and border-trap charge densities. In some devices, densities of border traps (near-interfacial oxide traps that exchange charge with the underlying Si) approach or exceed the density of interface traps, emphasizing the need to distinguish border-trap contributions to MOS radiation response and long-term reliability from interface-trap contributions. Estimates of border-trap charge densities obtained via this new dual-transistor technique agree well with trap densities inferred from 1/f noise measurements for transistors with varying channel length.
Wavelet-based multiscale adjoint waveform-difference tomography using body and surface waves
NASA Astrophysics Data System (ADS)
Yuan, Y. O.; Simons, F. J.; Bozdag, E.
2014-12-01
We present a multi-scale scheme for full elastic waveform-difference inversion. Using a wavelet transform proves to be a key factor to mitigate cycle-skipping effects. We start with coarse representations of the seismogram to correct a large-scale background model, and subsequently explain the residuals in the fine scales of the seismogram to map the heterogeneities with great complexity. We have previously applied the multi-scale approach successfully to body waves generated in a standard model from the exploration industry: a modified two-dimensional elastic Marmousi model. With this model we explored the optimal choice of wavelet family, number of vanishing moments and decomposition depth. For this presentation we explore the sensitivity of surface waves in waveform-difference tomography. The incorporation of surface waves is rife with cycle-skipping problems compared to the inversions considering body waves only. We implemented an envelope-based objective function probed via a multi-scale wavelet analysis to measure the distance between predicted and target surface-wave waveforms in a synthetic model of heterogeneous near-surface structure. Our proposed method successfully purges the local minima present in the waveform-difference misfit surface. A shallow elastic model, 100 m in depth, is used to test the surface-wave inversion scheme. We also analyzed the sensitivities of surface waves and body waves in full waveform inversions, as well as the effects of incorrect density information on elastic parameter inversions. Based on those numerical experiments, we ultimately formalized a flexible scheme to consider both body and surface waves in adjoint tomography. While our early examples are constructed from exploration-style settings, our procedure will be very valuable for the study of global network data.
Rainfall-runoff modeling using conceptual, data driven, and wavelet based computing approach
NASA Astrophysics Data System (ADS)
Nayak, P. C.; Venkatesh, B.; Krishna, B.; Jain, Sharad K.
2013-06-01
The current study demonstrates the potential use of wavelet neural networks (WNN) for river flow modeling by developing a rainfall-runoff model for the Malaprabha basin in India. Daily data of rainfall, discharge, and evaporation for 21 years (from 1980 to 2000) have been used for modeling. In the model, the original inputs were decomposed by wavelets and the resulting sub-series were taken as inputs to an ANN. Model parameters are calibrated using 17 years of data and the rest of the data are used for model validation. A statistical approach was used to select the model inputs. Optimum architectures of the WNN models are selected according to the evaluation criteria, namely the Nash-Sutcliffe efficiency coefficient and root mean squared error. The results were compared with those of a standard neural network model and the NAM model, and indicate that the WNN model performs better than the ANN and NAM models in estimating hydrograph characteristics such as the flow duration curve.
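The two evaluation criteria named in the abstract are standard and easy to state; a minimal implementation:

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 is no better than
       predicting the mean of the observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def rmse(obs, sim):
    """Root mean squared error between observed and simulated flows."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return np.sqrt(np.mean((obs - sim) ** 2))
```

Comparing NSE across the WNN, ANN, and NAM simulations of the same validation series is exactly the model-selection procedure the study describes.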
Wavelet-based clustering for mixed-effects functional models in high dimension.
Giacofci, M; Lambert-Lacroix, S; Marot, G; Picard, F
2013-03-01
We propose a method for high-dimensional curve clustering in the presence of interindividual variability. Curve clustering has long been studied, especially using splines to account for functional random effects. However, splines are not appropriate when dealing with high-dimensional data and cannot be used to model irregular curves such as peak-like data. Our method is based on a wavelet decomposition of the signal for both fixed and random effects. We propose an efficient dimension reduction step based on wavelet thresholding adapted to multiple curves, and by using an appropriate structure for the random-effect variance we ensure that both fixed and random effects lie in the same functional space even when dealing with irregular functions that belong to Besov spaces. In the wavelet domain our model reduces to a linear mixed-effects model that can be used for a model-based clustering algorithm and for which we develop an EM algorithm for maximum likelihood estimation. The properties of the overall procedure are validated by an extensive simulation study. We then illustrate our method on mass spectrometry data and propose an original application of functional data analysis to microarray comparative genomic hybridization (CGH) data. Our procedure is implemented in the R package curvclust, the first publicly available package that performs curve clustering with random effects in the high-dimensional framework (available on CRAN). PMID:23379722
Optical Density Analysis of X-Rays Utilizing Calibration Tooling to Estimate Thickness of Parts
NASA Technical Reports Server (NTRS)
Grau, David
2012-01-01
This process is designed to estimate the thickness change of a material through data analysis of a digitized version of an x-ray (or a digital x-ray) containing the material (with the thickness in question) and various tooling. Using this process, it is possible to estimate a material's thickness change in a region of the material or part that is thinner than the rest of the reference thickness. However, the same principle can be used to determine the thickness change of material using a thinner region to determine thickening, or it can be used to develop contour plots of an entire part. Proper tooling must be used. An x-ray film with an S-shaped characteristic curve or a digital x-ray device with a product resulting in like characteristics is necessary. If a film exists with linear characteristics, this type of film would be ideal; however, at the time of this reporting, no such film was known. Machined components (with known fractional thicknesses) of a like material (similar density) to that of the material to be measured are necessary. The machined components should have machined through-holes. For ease of use and better accuracy, the through-holes should be a size larger than 0.125 in. (3.2 mm). Standard components for this use are known as penetrameters or image quality indicators. Also needed is standard x-ray equipment, if film is used in place of digital equipment, or x-ray digitization equipment with proven conversion properties. Typical x-ray digitization equipment is commonly used in the medical industry, and creates digital images of x-rays in DICOM format. It is recommended to scan the image in a 16-bit format. However, 12-bit and 8-bit resolutions are acceptable. Finally, x-ray analysis software that allows accurate digital image density calculations, such as ImageJ freeware, is needed.
The actual procedure requires the test article to be placed on the raw x-ray, ensuring the region of interest is aligned for perpendicular x-ray exposure capture. One or multiple machined components of like material/density with known thicknesses are placed atop the part (preferably in a region of nominal and non-varying thickness) such that exposure of the combined part and machined component lay-up is captured on the x-ray. Depending on the accuracy required, the machined component's thickness must be carefully chosen. Similarly, depending on the accuracy required, the lay-up must be exposed such that the regions of the x-ray to be analyzed have a density range between 1 and 4.5. After the exposure, the image is digitized, and the digital image can then be analyzed using the image analysis software.
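The final analysis step can be sketched as follows: the machined calibration components give (known thickness, measured film density) pairs, and an unknown region's thickness is read off by interpolating its measured density against that calibration curve. The numbers below are fabricated for illustration; film density typically decreases as material thickness increases.

```python
import numpy as np

def thickness_from_density(density, cal_thickness_in, cal_density):
    """Interpolate thickness (inches) from measured film density,
       using the calibration points from the machined components."""
    t = np.asarray(cal_thickness_in, float)
    d = np.asarray(cal_density, float)
    order = np.argsort(d)                   # np.interp requires increasing x
    return float(np.interp(density, d[order], t[order]))

cal_t = [0.10, 0.15, 0.20, 0.25]   # machined component thicknesses (in)
cal_d = [4.1, 3.2, 2.4, 1.8]       # measured densities (within the 1-4.5 band)
t_est = thickness_from_density(2.8, cal_t, cal_d)
```

Linear interpolation is adequate only within the density band the procedure specifies (1 to 4.5); outside it the S-shaped film characteristic makes the mapping unreliable.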
NASA Technical Reports Server (NTRS)
Sjoegreen, B.; Yee, H. C.
2001-01-01
The recently developed essentially fourth-order or higher low-dissipative shock-capturing scheme of Yee, Sandham and Djomehri (1999) aims at minimizing numerical dissipation for high-speed compressible viscous flows containing shocks, shears and turbulence. To detect non-smooth behavior and control the amount of numerical dissipation to be added, Yee et al. employed an artificial compression method (ACM) of Harten (1978), but utilized it in an entirely different context than Harten originally intended. The ACM sensor consists of two tuning parameters and is highly dependent on the physical problem. To minimize the tuning of parameters and physical problem dependence, new sensors with improved detection properties are proposed. The new sensors are derived from appropriate non-orthogonal wavelet basis functions, and they can be used to completely switch off the extra numerical dissipation outside shock layers. The non-dissipative spatial base scheme of arbitrarily high order of accuracy can be maintained without compromising its stability at all parts of the domain where the solution is smooth. Two types of redundant non-orthogonal wavelet basis functions are considered. One is the B-spline wavelet (Mallat & Zhong 1992) used by Gerritsen and Olsson (1996) in an adaptive mesh refinement method, to determine regions where refinement should be done. The other is a modification of the multiresolution method of Harten (1995), converted into a new, redundant, non-orthogonal wavelet. The wavelet sensor is then obtained by computing the estimated Lipschitz exponent of a chosen physical quantity (or vector) to be sensed on a chosen wavelet basis function. Both wavelet sensors can be viewed as dual-purpose adaptive methods leading to dynamic numerical dissipation control and improved grid adaptation indicators. Consequently, they are useful not only for shock-turbulence computations but also for computational aeroacoustics and numerical combustion.
In addition, these sensors are scheme independent and can be used as stand-alone options for numerical algorithms other than the Yee et al. scheme.
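The sensing idea above can be illustrated with a toy version: the local smoothness (Lipschitz exponent) of a signal shows up in how fast multiscale detail magnitudes decay with scale. Instead of the paper's redundant non-orthogonal wavelet bases, this hedged sketch uses sliding differences of local means at dyadic widths, whose maxima scale roughly like w^alpha near a point of Lipschitz regularity alpha (about 0 at a shock/step, larger where the solution is smoother).

```python
import numpy as np

def lipschitz_slope(f, levels=(1, 2, 3, 4, 5, 6)):
    """Estimate a Lipschitz-type regularity exponent from the decay of
       multiscale detail maxima (a crude stand-in for a wavelet sensor)."""
    f = np.asarray(f, float)
    c = np.concatenate(([0.0], np.cumsum(f)))   # prefix sums for fast means
    widths, maxima = [], []
    for j in levels:
        w = 2 ** j
        right = (c[2 * w:] - c[w:-w]) / w       # mean of f[i : i+w]
        left = (c[w:-w] - c[:-2 * w]) / w       # mean of f[i-w : i]
        maxima.append(np.max(np.abs(right - left)))
        widths.append(w)
    # slope of log2(max detail) vs log2(width) ~ regularity exponent
    return np.polyfit(np.log2(widths), np.log2(maxima), 1)[0]
```

A step (shock-like) profile gives a slope near 0 while a square-root cusp gives a slope near 0.5, so thresholding this exponent is one way a sensor can decide where extra dissipation is needed.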
A wavelet-based spatially adaptive method for mammographic contrast enhancement
NASA Astrophysics Data System (ADS)
Sakellaropoulos, P.; Costaridou, L.; Panayiotakis, G.
2003-03-01
A method aimed at minimizing image noise while optimizing contrast of image features is presented. The method is generic and it is based on local modification of multiscale gradient magnitude values provided by the redundant dyadic wavelet transform. Denoising is accomplished by a spatially adaptive thresholding strategy, taking into account local signal and noise standard deviation. Noise standard deviation is estimated from the background of the mammogram. Contrast enhancement is accomplished by applying a local linear mapping operator on denoised wavelet magnitude values. The operator normalizes local gradient magnitude maxima to the global maximum of the first scale magnitude subimage. Coefficient mapping is controlled by a local gain limit parameter. The processed image is derived by reconstruction from the modified wavelet coefficients. The method is demonstrated with a simulated image with added Gaussian noise, while an initial quantitative performance evaluation using 22 images from the DDSM database was performed. Enhancement was applied globally to each mammogram, using the same local gain limit value. Quantitative contrast and noise metrics were used to evaluate the quality of processed image regions containing verified lesions. Results suggest that the method offers significantly improved performance over conventional and previously reported global wavelet contrast enhancement methods. The average contrast improvement, noise amplification and contrast-to-noise ratio improvement indices were measured as 9.04, 4.86 and 3.04, respectively. In addition, in a pilot preference study, the proposed method demonstrated the highest ranking, among the methods compared. The method was implemented in C++ and integrated into a medical image visualization tool.
Markedly divergent estimates of Amazon forest carbon density from ground plots and satellites
Mitchard, Edward T A; Feldpausch, Ted R; Brienen, Roel J W; Lopez-Gonzalez, Gabriela; Monteagudo, Abel; Baker, Timothy R; Lewis, Simon L; Lloyd, Jon; Quesada, Carlos A; Gloor, Manuel; ter Steege, Hans; Meir, Patrick; Alvarez, Esteban; Araujo-Murakami, Alejandro; Aragão, Luiz E O C; Arroyo, Luzmila; Aymard, Gerardo; Banki, Olaf; Bonal, Damien; Brown, Sandra; Brown, Foster I; Cerón, Carlos E; Chama Moscoso, Victor; Chave, Jerome; Comiskey, James A; Cornejo, Fernando; Corrales Medina, Massiel; Da Costa, Lola; Costa, Flavia R C; Di Fiore, Anthony; Domingues, Tomas F; Erwin, Terry L; Frederickson, Todd; Higuchi, Niro; Honorio Coronado, Euridice N; Killeen, Tim J; Laurance, William F; Levis, Carolina; Magnusson, William E; Marimon, Beatriz S; Marimon Junior, Ben Hur; Mendoza Polo, Irina; Mishra, Piyush; Nascimento, Marcelo T; Neill, David; Núñez Vargas, Mario P; Palacios, Walter A; Parada, Alexander; Pardo Molina, Guido; Peña-Claros, Marielos; Pitman, Nigel; Peres, Carlos A; Poorter, Lourens; Prieto, Adriana; Ramirez-Angulo, Hirma; Restrepo Correa, Zorayda; Roopsind, Anand; Roucoux, Katherine H; Rudas, Agustin; Salomão, Rafael P; Schietti, Juliana; Silveira, Marcos; de Souza, Priscila F; Steininger, Marc K; Stropp, Juliana; Terborgh, John; Thomas, Raquel; Toledo, Marisol; Torres-Lezama, Armando; van Andel, Tinde R; van der Heijden, Geertje M F; Vieira, Ima C G; Vieira, Simone; Vilanova-Torre, Emilio; Vos, Vincent A; Wang, Ophelia; Zartman, Charles E; Malhi, Yadvinder; Phillips, Oliver L
2014-01-01
Aim The accurate mapping of forest carbon stocks is essential for understanding the global carbon cycle, for assessing emissions from deforestation, and for rational land-use planning. Remote sensing (RS) is currently the key tool for this purpose, but RS does not estimate vegetation biomass directly, and thus may miss significant spatial variations in forest structure. We test the stated accuracy of pantropical carbon maps using a large independent field dataset. Location Tropical forests of the Amazon basin. The permanent archive of the field plot data can be accessed at: http://dx.doi.org/10.5521/FORESTPLOTS.NET/2014_1 Methods Two recent pantropical RS maps of vegetation carbon are compared to a unique ground-plot dataset, involving tree measurements in 413 large inventory plots located in nine countries. The RS maps were compared directly to field plots, and kriging of the field data was used to allow area-based comparisons. Results The two RS carbon maps fail to capture the main gradient in Amazon forest carbon detected using 413 ground plots, from the densely wooded tall forests of the north-east, to the light-wooded, shorter forests of the south-west. The differences between plots and RS maps far exceed the uncertainties given in these studies, with whole regions over- or under-estimated by > 25%, whereas regional uncertainties for the maps were reported to be < 5%. Main conclusions Pantropical biomass maps are widely used by governments and by projects aiming to reduce deforestation using carbon offsets, but may have significant regional biases. Carbon-mapping techniques must be revised to account for the known ecological variation in tree wood density and allometry to create maps suitable for carbon accounting. The use of single relationships between tree canopy height and above-ground biomass inevitably yields large, spatially correlated errors. 
This presents a significant challenge to both the forest conservation and remote sensing communities, because neither wood density nor species assemblages can be reliably mapped from space. PMID:26430387
Anderson, Weston; Guikema, Seth; Zaitchik, Ben; Pan, William
2014-01-01
Obtaining accurate small area estimates of population is essential for policy and health planning but is often difficult in countries with limited data. In lieu of available population data, small area estimate models draw information from previous time periods or from similar areas. This study focuses on model-based methods for estimating population when no direct samples are available in the area of interest. To explore the efficacy of tree-based models for estimating population density, we compare six different model structures including Random Forest and Bayesian Additive Regression Trees. Results demonstrate that without information from prior time periods, non-parametric tree-based models produced more accurate predictions than did conventional regression methods. Improving estimates of population density in non-sampled areas is important for regions with incomplete census data and has implications for economic, health and development policies. PMID:24992657
Wang, Ying; Wu, Fengchang; Giesy, John P; Feng, Chenglian; Liu, Yuedan; Qin, Ning; Zhao, Yujie
2015-09-01
Due to use of different parametric models for establishing species sensitivity distributions (SSDs), comparison of water quality criteria (WQC) for metals of the same group or period in the periodic table is uncertain and results can be biased. To address this inadequacy, a new probabilistic model, based on non-parametric kernel density estimation was developed and optimal bandwidths and testing methods are proposed. Zinc (Zn), cadmium (Cd), and mercury (Hg) of group IIB of the periodic table are widespread in aquatic environments, mostly at small concentrations, but can exert detrimental effects on aquatic life and human health. With these metals as target compounds, the non-parametric kernel density estimation method and several conventional parametric density estimation methods were used to derive acute WQC of metals for protection of aquatic species in China that were compared and contrasted with WQC for other jurisdictions. HC5 values for protection of different types of species were derived for three metals by use of non-parametric kernel density estimation. The newly developed probabilistic model was superior to conventional parametric density estimations for constructing SSDs and for deriving WQC for these metals. HC5 values for the three metals were inversely proportional to atomic number, which means that the heavier atoms were more potent toxicants. The proposed method provides a novel alternative approach for developing SSDs that could have wide application prospects in deriving WQC and use in assessment of risks to ecosystems. PMID:25953609
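A hedged sketch of the approach the abstract describes: build the species sensitivity distribution (SSD) by Gaussian kernel density estimation over log10 toxicity values, with no parametric family assumed, then read off HC5 as the 5th percentile of the smoothed distribution. Silverman's rule is used for the bandwidth here; the paper optimizes the bandwidth, which this sketch does not reproduce, and the toxicity values are hypothetical.

```python
import numpy as np

def kde_hc5(log10_tox, p=0.05, grid_n=2048):
    """HC5 (in log10 units) from a Gaussian-kernel SSD over log10 toxicity data."""
    x = np.asarray(log10_tox, float)
    n = x.size
    h = 1.06 * x.std(ddof=1) * n ** (-1 / 5)      # Silverman's rule of thumb
    g = np.linspace(x.min() - 4 * h, x.max() + 4 * h, grid_n)
    pdf = np.exp(-0.5 * ((g[:, None] - x[None, :]) / h) ** 2).sum(axis=1)
    pdf /= pdf.sum()                               # normalize on the grid
    cdf = np.cumsum(pdf)
    return float(np.interp(p, cdf, g))             # invert the CDF at p

# hypothetical acute log10(LC50) values for a handful of species:
tox = [1.2, 1.5, 1.9, 2.1, 2.3, 2.3, 2.6, 2.8, 3.0, 3.4, 3.9]
hc5 = 10 ** kde_hc5(tox)
```

Because no distributional family is imposed, the same routine applies unchanged to Zn, Cd, and Hg datasets, which is what makes cross-metal comparison of the resulting HC5 values meaningful.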
Halama, Niels; Zoernig, Inka; Spille, Anna; Westphal, Kathi; Schirmacher, Peter
2009-01-01
Background Determining the correct number of positive immune cells in immunohistological sections of colorectal cancer and other tumor entities is emerging as an important clinical predictor and therapy selector for an individual patient. This task is usually obstructed by cell conglomerates of various sizes. We here show that at least in colorectal cancer the inclusion of immune cell conglomerates is indispensable for estimating reliable patient cell counts. Integrating virtual microscopy and image processing principally allows the high-throughput evaluation of complete tissue slides. Methodology/Principal findings For such large-scale systems we demonstrate a robust quantitative image processing algorithm for the reproducible quantification of cell conglomerates on CD3 positive T cells in colorectal cancer. While isolated cells (28 to 80 µm²) are counted directly, the number of cells contained in a conglomerate is estimated by dividing the area of the conglomerate in thin tissue sections (≤6 µm) by the median area covered by an isolated T cell, which we determined as 58 µm². We applied our algorithm to large numbers of CD3 positive T cell conglomerates and compared the results to cell counts obtained manually by two independent observers. While especially for high cell counts, the manual counting showed a deviation of up to 400 cells/mm² (41% variation), algorithm-determined T cell numbers generally lay in between the manually observed cell numbers but with perfect reproducibility. Conclusion In summary, we recommend our approach as an objective and robust strategy for quantifying immune cell densities in immunohistological sections which can be directly implemented into automated full slide image processing systems. PMID:19924291
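The counting rule in the abstract transcribes directly into code: isolated cells (areas of roughly 28 to 80 um^2) count as one each, while a conglomerate contributes its area divided by the median isolated T-cell area of 58 um^2. The example region areas are invented.

```python
def count_cells(region_areas_um2, isolated_max=80.0, median_cell_area=58.0):
    """Estimate the T-cell count from a list of connected-region areas (um^2)."""
    total = 0.0
    for a in region_areas_um2:
        if a <= isolated_max:
            total += 1                         # isolated cell: count directly
        else:
            total += a / median_cell_area      # conglomerate: area-based estimate
    return total

n = count_cells([45.0, 60.0, 290.0, 1160.0])   # two isolated cells + two conglomerates
```

The area-based term is what lets the method remain reproducible where manual observers diverge, since it replaces a subjective judgment of conglomerate size with a fixed divisor.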
Jennelle, C.S.; Runge, M.C.; MacKenzie, D.I.
2002-01-01
The search for easy-to-use indices that substitute for direct estimation of animal density is a common theme in wildlife and conservation science, but one fraught with well-known perils (Nichols & Conroy, 1996; Yoccoz, Nichols & Boulinier, 2001; Pollock et al., 2002). To establish the utility of an index as a substitute for an estimate of density, one must: (1) demonstrate a functional relationship between the index and density that is invariant over the desired scope of inference; (2) calibrate the functional relationship by obtaining independent measures of the index and the animal density; (3) evaluate the precision of the calibration (Diefenbach et al., 1994). Carbone et al. (2001) argue that the number of camera-days per photograph is a useful index of density for large, cryptic, forest-dwelling animals, and proceed to calibrate this index for tigers (Panthera tigris). We agree that a properly calibrated index may be useful for rapid assessments in conservation planning. However, Carbone et al. (2001), who desire to use their index as a substitute for density, do not adequately address the three elements noted above. Thus, we are concerned that others may view their methods as justification for not attempting directly to estimate animal densities, without due regard for the shortcomings of their approach.
Siegwarth, J.D.; LaBrecque, J.F.; Roncier, M.; Philippe, R.; Saint-Just, J.
1982-12-16
Liquefied natural gas (LNG) densities can be measured directly but are usually determined indirectly in custody transfer measurement by using a density correlation based on temperature and composition measurements. An LNG densimeter test facility at the National Bureau of Standards uses an absolute densimeter based on the Archimedes principle, while a test facility at Gaz de France uses a correlation method based on measurement of composition and density. A comparison between these two test facilities using a portable version of the absolute densimeter provides an experimental estimate of the uncertainty of the indirect method of density measurement for the first time, on a large (32 L) sample. The two test facilities agree for pure methane to within about 0.02%. For the LNG-like mixtures consisting of methane, ethane, propane, and nitrogen with the methane concentrations always higher than 86%, the calculated density is within 0.25% of the directly measured density 95% of the time.
Measuring and Modeling Fault Density for Plume-Fault Encounter Probability Estimation
Jordan, P.D.; Oldenburg, C.M.; Nicot, J.-P.
2011-05-15
Emission of carbon dioxide from fossil-fueled power generation stations contributes to global climate change. Storage of this carbon dioxide within the pores of geologic strata (geologic carbon storage) is one approach to mitigating the climate change that would otherwise occur. The large storage volume needed for this mitigation requires injection into brine-filled pore space in reservoir strata overlain by cap rocks. One of the main concerns of storage in such rocks is leakage via faults. In the early stages of site selection, site-specific fault coverages are often not available. This necessitates a method for using available fault data to develop an estimate of the likelihood of injected carbon dioxide encountering and migrating up a fault, primarily due to buoyancy. Fault population statistics provide one of the main inputs to calculate the encounter probability. Previous fault population statistics work is shown to be applicable to areal fault density statistics. This result is applied to a case study in the southern portion of the San Joaquin Basin, where a carbon dioxide plume from a previously planned injection was found to have a 3% chance of encountering a fully seal-offsetting fault.
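A hedged illustration of how areal fault density feeds an encounter probability. The paper derives this from fault population statistics; the minimal stand-in below assumes fault centers follow a spatial Poisson process, so the chance that a plume footprint of area A overlaps at least one fault with areal density lam is P = 1 - exp(-lam * A). The numbers are invented, not taken from the study.

```python
import math

def encounter_probability(fault_density_per_km2, plume_area_km2):
    """P(plume encounters >= 1 fault) under a spatial Poisson assumption."""
    return 1.0 - math.exp(-fault_density_per_km2 * plume_area_km2)

# e.g. 0.002 qualifying faults per km^2 and a 15 km^2 plume footprint:
p = encounter_probability(fault_density_per_km2=0.002, plume_area_km2=15.0)
```

For small expected counts this reduces to P ≈ lam * A, which is why even sparse fault populations yield non-negligible encounter probabilities once plume footprints grow to basin scale.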
Chan, Poh Yin; Tong, Chi Ming; Durrant, Marcus C
2011-09-01
An empirical method for estimation of the boiling points of organic molecules based on density functional theory (DFT) calculations with polarized continuum model (PCM) solvent corrections has been developed. The boiling points are calculated as the sum of three contributions. The first term is calculated directly from the structural formula of the molecule, and is related to its effective surface area. The second is a measure of the electronic interactions between molecules, based on the DFT-PCM solvation energy, and the third is employed only for planar aromatic molecules. The method is applicable to a very diverse range of organic molecules, with normal boiling points in the range of -50 to 500 °C, and includes ten different elements (C, H, Br, Cl, F, N, O, P, S and Si). Plots of observed versus calculated boiling points gave R=0.980 for a training set of 317 molecules, and R=0.979 for a test set of 74 molecules. The role of intramolecular hydrogen bonding in lowering the boiling points of certain molecules is quantitatively discussed. PMID:21798775
NASA Astrophysics Data System (ADS)
Chen, W.; Shao, Z.; Tiong, L. K.
2015-11-01
Drought has caused the most widespread damage in China, accounting for over 50% of the total affected area nationwide in recent decades. In this paper, a Standardized Precipitation Index-based (SPI-based) drought risk study is conducted using historical rainfall data from 19 weather stations in Shandong province, China. A kernel-density-based method is adopted to carry out the risk analysis. A comparison between bivariate Gaussian kernel density estimation (GKDE) and diffusion kernel density estimation (DKDE) is carried out to analyze the effect of drought intensity and drought duration. The results show that DKDE is relatively more accurate, without boundary leakage. Combined with the GIS technique, the drought risk is presented, revealing the spatial and temporal variation of agricultural droughts for corn in Shandong. The estimation provides a different way to study the occurrence frequency and severity of drought risk from multiple perspectives.
Hearn, Andrew J; Ross, Joanna; Bernard, Henry; Bakar, Soffian Abu; Hunter, Luke T B; Macdonald, David W
2016-01-01
The marbled cat Pardofelis marmorata is a poorly known wild cat that has a broad distribution across much of the Indomalayan ecorealm. This felid is thought to exist at low population densities throughout its range, yet no estimates of its abundance exist, hampering assessment of its conservation status. To investigate the distribution and abundance of marbled cats we conducted intensive, felid-focused camera trap surveys of eight forest areas and two oil palm plantations in Sabah, Malaysian Borneo. Study sites were broadly representative of the range of habitat types and the gradient of anthropogenic disturbance and fragmentation present in contemporary Sabah. We recorded marbled cats from all forest study areas apart from a small, relatively isolated forest patch, although photographic detection frequency varied greatly between areas. No marbled cats were recorded within the plantations, but a single individual was recorded walking along the forest/plantation boundary. We collected sufficient numbers of marbled cat photographic captures at three study areas to permit density estimation based on spatially explicit capture-recapture analyses. Estimates of population density from the primary, lowland Danum Valley Conservation Area and primary upland, Tawau Hills Park, were 19.57 (SD: 8.36) and 7.10 (SD: 1.90) individuals per 100 km2, respectively, and the selectively logged, lowland Tabin Wildlife Reserve yielded an estimated density of 10.45 (SD: 3.38) individuals per 100 km2. The low detection frequencies recorded in our other survey sites and from published studies elsewhere in its range, and the absence of previous density estimates for this felid suggest that our density estimates may be from the higher end of their abundance spectrum. We provide recommendations for future marbled cat survey approaches. PMID:27007219
Karanth, K.U.; Chundawat, R.S.; Nichols, J.D.; Kumar, N.S.
2004-01-01
Tropical dry-deciduous forests comprise more than 45% of the tiger (Panthera tigris) habitat in India. However, in the absence of rigorously derived estimates of ecological densities of tigers in dry forests, critical baseline data for managing tiger populations are lacking. In this study tiger densities were estimated using photographic capture-recapture sampling in the dry forests of Panna Tiger Reserve in Central India. Over a 45-day survey period, 60 camera trap sites were sampled in a well-protected part of the 542-km2 reserve during 2002. A total sampling effort of 914 camera-trap-days yielded photo-captures of 11 individual tigers over 15 sampling occasions that effectively covered a 418-km2 area. The closed capture-recapture model Mh, which incorporates individual heterogeneity in capture probabilities, fitted these photographic capture history data well. The estimated capture probability/sample, 0.04, resulted in an estimated tiger population size and standard error of 29 (9.65), and a density of 6.94 (3.23) tigers/100 km2. The estimated tiger density matched predictions based on prey abundance. Our results suggest that, if managed appropriately, the available dry forest habitat in India has the potential to support a population size of about 9000 wild tigers.
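The final step, converting the estimated abundance over the effectively sampled area into the conventional per-100-km2 figure, is simple arithmetic; a sketch using the abstract's point estimates (the reported SE of 3.23 also propagates uncertainty in the effective area, which this point estimate omits):

```python
def density_per_100km2(n_hat, effective_area_km2):
    """Convert an estimated abundance over an effectively sampled area
    into the conventional individuals-per-100-km^2 figure."""
    return 100.0 * n_hat / effective_area_km2

# Values from the abstract: N-hat = 29 tigers over an effective 418 km^2.
d_hat = density_per_100km2(29, 418)   # ~6.94 tigers per 100 km^2
```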
Trolle, M.; Kery, M.
2003-01-01
Neotropical felids such as the ocelot (Leopardus pardalis) are secretive, and it is difficult to estimate their populations using conventional methods such as radiotelemetry or sign surveys. We show that recognition of individual ocelots from camera-trapping photographs is possible, and we use camera-trapping results combined with closed population capture-recapture models to estimate density of ocelots in the Brazilian Pantanal. We estimated the area from which animals were camera trapped at 17.71 km2. A model with constant capture probability yielded an estimate of 10 independent ocelots in our study area, which translates to a density of 2.82 independent individuals for every 5 km2 (SE 1.00).
Kernel Density Estimation, Kernel Methods, and Fast Learning in Large Data Sets.
Shitong Wang; Jun Wang; Fu-Lai Chung
2014-01-01
Kernel methods such as standard support vector machine and support vector regression training take O(N^3) time and O(N^2) space in their naive implementations, where N is the training set size. It is thus computationally infeasible to apply them to large data sets, and a replacement for the naive method of finding the quadratic programming (QP) solutions is highly desirable. By observing that many kernel methods can be linked to the kernel density estimate (KDE), which can be efficiently implemented by approximation techniques, a new learning method called fast KDE (FastKDE) is proposed to scale up kernel methods. It is based on establishing a connection between KDE and the QP problems formulated for kernel methods using an entropy-based integrated-squared-error criterion. As a result, FastKDE approximation methods can be applied to solve these QP problems. In this paper, the latest advance in fast data reduction via KDE is exploited. With just a simple sampling strategy, the resulting FastKDE method can be used to scale up various kernel methods with a theoretical guarantee that their performance does not degrade significantly. It has a time complexity of O(m^3), where m is the number of data points sampled from the training set. Experiments on different benchmark data sets demonstrate that the proposed method performs comparably with the state-of-the-art method and is effective for a wide range of kernel methods in achieving fast learning on large data sets. PMID:23797315
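The core idea, that a KDE built from a small random subsample tracks the full-data KDE, can be sketched in one dimension; the bandwidth, sample sizes, and data below are illustrative assumptions, not the paper's setup.

```python
import math
import random

def kde_pdf(sample, h):
    """One-dimensional Gaussian KDE with bandwidth h."""
    c = 1.0 / (len(sample) * h * math.sqrt(2.0 * math.pi))
    return lambda x: c * sum(math.exp(-0.5 * ((x - xi) / h) ** 2)
                             for xi in sample)

random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(5000)]   # full training set
subset = random.sample(data, 200)                      # simple sampling step

full_pdf = kde_pdf(data, h=0.3)     # O(N) per evaluation
fast_pdf = kde_pdf(subset, h=0.3)   # O(m) per evaluation, m << N
```

Evaluating both at the same point shows the subsample estimate staying close to the full one, which is the property FastKDE exploits to replace expensive QP solves.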
Hall, S. A.; Burke, I.C.; Box, D. O.; Kaufmann, M. R.; Stoker, Jason M.
2005-01-01
The ponderosa pine forests of the Colorado Front Range, USA, have historically been subjected to wildfires. Recent large burns have increased public interest in fire behavior and effects, and scientific interest in the carbon consequences of wildfires. Remote sensing techniques can provide spatially explicit estimates of stand structural characteristics. Some of these characteristics can be used as inputs to fire behavior models, increasing our understanding of the effect of fuels on fire behavior. Others provide estimates of carbon stocks, allowing us to quantify the carbon consequences of fire. Our objective was to use discrete-return lidar to estimate such variables, including stand height, total aboveground biomass, foliage biomass, basal area, tree density, canopy base height and canopy bulk density. We developed 39 metrics from the lidar data, and used them in limited combinations in regression models, which we fit to field estimates of the stand structural variables. We used an information–theoretic approach to select the best model for each variable, and to select the subset of lidar metrics with most predictive potential. Observed versus predicted values of stand structure variables were highly correlated, with r2 ranging from 57% to 87%. The most parsimonious linear models for the biomass structure variables, based on a restricted dataset, explained between 35% and 58% of the observed variability. Our results provide us with useful estimates of stand height, total aboveground biomass, foliage biomass and basal area. There is promise for using this sensor to estimate tree density, canopy base height and canopy bulk density, though more research is needed to generate robust relationships. We selected 14 lidar metrics that showed the most potential as predictors of stand structure. 
We suggest that the focus of future lidar studies should broaden to include low density forests, particularly systems where the vertical structure of the canopy is important, such as fire prone forests.
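The information-theoretic model selection described above can be illustrated with Akaike's criterion for Gaussian-likelihood linear models; all RSS values, parameter counts, and model names below are hypothetical, and the study may have used a corrected variant such as AICc.

```python
import math

def aic(rss, n, k):
    """AIC for a Gaussian-likelihood linear model, up to an additive
    constant: n * ln(RSS / n) + 2k. A stand-in for the study's exact
    criterion, which is not specified here."""
    return n * math.log(rss / n) + 2 * k

# Hypothetical candidate models: (residual sum of squares, number of
# parameters); none of these numbers come from the paper.
n_plots = 40
candidates = {
    "stand_height_only": (120.0, 2),
    "height_plus_density": (95.0, 3),
    "all_39_metrics": (90.0, 41),
}
best = min(candidates,
           key=lambda m: aic(candidates[m][0], n_plots, candidates[m][1]))
```

Even though the 39-metric model has the smallest RSS, the 2k penalty rejects it, which is the parsimony argument the abstract invokes when restricting to limited combinations of lidar metrics.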
Baugh, W., Klinger, L., Guenther, A., and Geron, C.D. Measurement of Oak Tree Density with Landsat TM Data for Estimating Biogenic Isoprene Emissions in Tennessee, USA. International Journal of Remote Sensing 22(14): 2793-2810 (2001).
Estimate of the precision of an atmosphere density model for ballistic calculations GOST 22721-77.
NASA Astrophysics Data System (ADS)
Nazarenko, A. I.; Andreev, V. E.; Varnakova, S. N.; Gorokhov, Yu. P.; Gukina, R. V.; Klimenko, A. G.; Markova, L. G.
The technique and error evaluation conditions of an atmosphere density model based on AES drag data are briefly described. Statistical characteristics of the GOST 22721-77 model errors for the 200 - 600 km altitude range are obtained and regularities of density space-time variations are also discovered.
Cavada, Nathalie; Barelli, Claudia; Ciolli, Marco; Rovero, Francesco
2016-01-01
Accurate density estimation of threatened animal populations is essential for management and conservation. This is particularly critical for species living in patchy and altered landscapes, as is the case for most tropical forest primates. In this study, we used a hierarchical modelling approach that incorporates the effect of environmental covariates on both the detection (i.e. observation) and the state (i.e. abundance) processes of distance sampling. We applied this method to already published data on three arboreal primates of the Udzungwa Mountains of Tanzania, including the endangered and endemic Udzungwa red colobus (Procolobus gordonorum). The area is a primate hotspot at the continental level. Compared to previous, 'canonical' density estimates, we found that the inclusion of covariates in the modelling makes the inference process more informative, as it takes full account of the contrasting habitat and protection levels among forest blocks. The correction of density estimates for imperfect detection was especially critical where animal detectability was low. Relative to our approach, density was underestimated by the canonical distance sampling, particularly in the less protected forest. Group size had an effect on detectability, determining how the observation process varies depending on the socio-ecology of the target species. Lastly, as the inference on density is spatially explicit to the scale of the covariates used in the modelling, we could confirm that primate densities are highest at low-to-mid elevations, where human disturbance tends to be greater, indicating considerable resilience of the target monkeys in disturbed habitats. However, the marked trend of lower densities in unprotected forests urgently calls for effective forest protection. PMID:26844891
2012-01-01
Background Myocardial ischemia can develop into more serious disease. Detecting the ischemic syndrome in the electrocardiogram (ECG) accurately and automatically at an early stage can prevent it from developing into a catastrophic disease. To this end, we propose a new method, which employs wavelets and simple feature selection. Methods For training and testing, the European ST-T database is used, which comprises 367 ischemic ST episodes in 90 records. We first remove baseline wandering, and detect time positions of QRS complexes by a method based on the discrete wavelet transform. Next, for each heart beat, we extract three features which can be used for differentiating ST episodes from normal: 1) the area between QRS offset and T-peak points, 2) the normalized and signed sum from QRS offset to effective zero voltage point, and 3) the slope from QRS onset to offset point. We average the feature values over five successive beats to reduce the effect of outliers. Finally we apply classifiers to those features. Results We evaluated the algorithm by kernel density estimation (KDE) and support vector machine (SVM) methods. Sensitivity and specificity for KDE were 0.939 and 0.912, respectively. The KDE classifier detects 349 ischemic ST episodes out of the total of 367 ST episodes. Sensitivity and specificity of SVM were 0.941 and 0.923, respectively. The SVM classifier detects 355 ischemic ST episodes. Conclusions We proposed a new method for detecting ischemia in ECG. It contains signal processing techniques of removing baseline wandering and detecting time positions of QRS complexes by discrete wavelet transform, and explicit feature extraction from the morphology of ECG waveforms. It was shown that the number of selected features was sufficient to discriminate ischemic ST episodes from the normal ones.
We also showed how the proposed KDE classifier can automatically select kernel bandwidths, meaning that the algorithm does not require any numerical values of the parameters to be supplied in advance. In the case of the SVM classifier, one has to select a single parameter. PMID:22703641
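A class-conditional KDE classifier with an automatic rule-of-thumb bandwidth can be sketched as follows; Silverman's rule stands in for the paper's unspecified bandwidth-selection method, and the feature values are invented, not drawn from the European ST-T database.

```python
import math
import statistics

def silverman_bw(xs):
    """Rule-of-thumb bandwidth 1.06 * s * n^(-1/5); one automatic
    choice, standing in for the paper's own selection rule."""
    return 1.06 * statistics.stdev(xs) * len(xs) ** (-0.2)

def kde(xs, h):
    """One-dimensional Gaussian KDE with bandwidth h."""
    c = 1.0 / (len(xs) * h * math.sqrt(2.0 * math.pi))
    return lambda x: c * sum(math.exp(-0.5 * ((x - xi) / h) ** 2)
                             for xi in xs)

def make_classifier(normal_xs, ischemic_xs):
    """Label 1 (ischemic) where the ischemic class density exceeds the
    normal class density, 0 otherwise."""
    f0 = kde(normal_xs, silverman_bw(normal_xs))
    f1 = kde(ischemic_xs, silverman_bw(ischemic_xs))
    return lambda x: 1 if f1(x) > f0(x) else 0

# Hypothetical one-dimensional feature values (e.g. a normalized
# ST-segment area), purely for illustration.
classify = make_classifier([0.10, 0.20, 0.15, 0.05, 0.12, 0.18],
                           [0.80, 0.90, 0.75, 1.00, 0.85, 0.70])
```

The point of the automatic bandwidth is exactly the property claimed above: no numerical parameter has to be supplied in advance.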
Subramanian, Sundarraman
2006-01-01
This article concerns asymptotic theory for a new estimator of a survival function in the missing censoring indicator model of random censorship. Specifically, the large sample results for an inverse probability-of-non-missingness weighted estimator of the cumulative hazard function, so far not available, are derived, including an almost sure representation with rate for a remainder term, and uniform strong consistency with rate of convergence. The estimator is based on a kernel estimate for the conditional probability of non-missingness of the censoring indicator. Expressions for its bias and variance, in turn leading to an expression for the mean squared error as a function of the bandwidth, are also obtained. The corresponding estimator of the survival function, whose weak convergence is derived, is asymptotically efficient. A numerical study, comparing the performances of the proposed and two other currently existing efficient estimators, is presented. PMID:18953423
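For orientation, the complete-data Nelson-Aalen cumulative hazard estimator that the inverse-probability-weighted version generalizes can be sketched as follows; the reweighting by estimated non-missingness probabilities (the kernel-estimated conditional probability in the article) is deliberately omitted.

```python
def nelson_aalen(times, censoring_indicators):
    """Nelson-Aalen cumulative hazard for fully observed censoring
    indicators (1 = event, 0 = censored); assumes distinct times.
    The article's estimator additionally reweights each increment by
    an inverse estimated probability that the censoring indicator is
    non-missing, which this sketch omits."""
    data = sorted(zip(times, censoring_indicators))
    at_risk = len(data)
    h = 0.0
    path = []
    for t, d in data:
        if d:
            h += 1.0 / at_risk   # event adds 1 / (number still at risk)
        at_risk -= 1             # subject leaves the risk set either way
        path.append((t, h))
    return path

# Toy data: events at t = 1, 2, 4 and one censoring at t = 3.
path = nelson_aalen([1, 2, 3, 4], [1, 1, 0, 1])
```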
A field comparison of nested grid and trapping web density estimators
Jett, D.A.; Nichols, J.D.
1987-01-01
The usefulness of capture-recapture estimators in any field study will depend largely on underlying model assumptions and on how closely these assumptions approximate the actual field situation. Evaluation of estimator performance under real-world field conditions is often a difficult matter, although several approaches are possible. Perhaps the best approach involves use of the estimation method on a population with known parameters.
NASA Astrophysics Data System (ADS)
Kumagai, Osamu; Mukaigawa, Seiji; Takaki, Koichi; Fujiwara, Tamiya; Yukimura, Ken; Ego, Kenichi
Ion extraction from a magnetically driven shunting arc plasma and the plasma density at the sheath boundary are described in this paper. The plasma density is obtained from the current and voltage waveforms of a target immersed in the plasma to extract carbon ions. A 40 mm long carbon rod, 2 mm in diameter, is employed as the solid-state plasma source. A large current of more than 1 kA is supplied from a charged 20 μF capacitor to heat the carbon rod and vaporize the rod material. The shunting arc plasma generated along the rod surface is driven by the Lorentz force and accelerated toward the muzzle of a plasma launcher, which consists of a pair of 100 mm long carbon plates. A 64 mm diameter brass disc placed 100 mm from the carbon rod is used as the target. A 10 μs wide negative pulse bias voltage is applied to the target to extract carbon ions from the shunting arc plasma. The ion density near the sheath boundary around the target is obtained by two procedures: using the stationary target current to determine the ion current, and/or using the target bias voltage waveforms. The ion density obtained from the target current is 1.1 × 10^15 m^-3 at 100 μs after arc ignition, which is larger than the density of 5.1 × 10^14 m^-3 obtained from the target voltage waveform. The plasma density is also predicted using the Bohm equation and a momentum conservation equation, giving 1.6 × 10^16 m^-3 at 100 μs after arc ignition. The density decreases with increasing time from arc ignition.
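The Bohm-equation step can be illustrated by inverting the ion saturation current relation for density; the electron temperature, target current, and 0.61 presheath factor below are assumptions made for illustration, not values reported in the paper.

```python
import math

E = 1.602e-19            # elementary charge (C)
M_C12 = 12 * 1.661e-27   # mass of a C+ ion (kg)

def bohm_density(ion_current_a, area_m2, te_ev):
    """Invert the Bohm ion saturation current I = 0.61 * n * e * A * c_s
    for density n, with ion sound speed c_s = sqrt(e * Te / M).
    Te and the 0.61 presheath factor are assumed, not from the paper."""
    cs = math.sqrt(E * te_ev / M_C12)
    return ion_current_a / (0.61 * E * area_m2 * cs)

# Hypothetical inputs: ~1.4 mA of ion current on the 64 mm diameter
# target (area = pi * 0.032^2 m^2) with an assumed Te of 2 eV.
n_i = bohm_density(1.4e-3, math.pi * 0.032 ** 2, 2.0)
```

With these assumed inputs the result lands on the 10^15 m^-3 scale of the measured density, but the agreement is a consequence of the chosen numbers, not a reproduction of the paper's analysis.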
Estimation of the density of Martian soil from radiophysical measurements in the 3-centimeter range
NASA Technical Reports Server (NTRS)
Krupenio, N. N.
1977-01-01
The density of the Martian soil is evaluated at depths up to one meter using the results of radar measurements at λ0 = 3.8 cm and polarized radio astronomical measurements at λ0 = 3.4 cm conducted onboard the automatic interplanetary stations Mars 3 and Mars 5. The average value of the soil density over all measurements is ρ = 1.37 ± 0.33 g/cm^3. A map of the distribution of permittivity and soil density, drawn up from the radiophysical data in the 3-centimeter range, is also derived.
Estimates of Dust and 13CO Radial Volume Density Profiles in Nearby Molecular Clouds
NASA Astrophysics Data System (ADS)
Krco, Marko
2012-05-01
The relation between dust and gas bears significance in many questions relating to the chemistry of the ISM and molecular clouds in particular. A perennial problem in understanding the relation is that the bulk of our measurements come in the form of integrated intensity or column density maps, yet most chemical processes depend on volume densities of the species in question. Radial volume density profiles of dust and 13CO are obtained, within certain limitations, for several nearby molecular clouds. A new, geometry-independent, technique is employed to obtain the radial volume density profiles. This technique provides several advantages over previous methods. A direct comparison between stellar reddening due to dust and 13CO emission is made throughout each cloud. Implications for temperature variations and the dust to gas ratio throughout the interior of molecular clouds are discussed as well as limitations on the presence of 13CO freezing onto dust grains.
Childs, J E; Robinson, L E; Sadek, R; Madden, A; Miranda, M E; Miranda, N L
1998-01-01
We estimated the population density of dogs by distance sampling and assessed the potential utility of two marking methods for capture-mark-recapture applications following a mass canine rabies-vaccination campaign in Sorsogon Province, the Republic of the Philippines. Thirty villages selected to assess vaccine coverage and for dog surveys were visited 1 to 11 days after the vaccinating team. Measurements of the distance of dogs or groups of dogs from transect lines were obtained in 1088 instances (N = 1278 dogs; mean group size = 1.2). Various functions modelling the probability of detection were fitted to a truncated distribution of distances of dogs from transect lines. A hazard rate model provided the best fit and an overall estimate of dog-population density of 468/km2 (95% confidence interval, 359 to 611). At vaccination, most dogs were marked with either a paint stick or a black plastic collar. Overall, 34.8% of 2167 and 28.5% of 2115 dogs could be accurately identified as wearing a collar or showing a paint mark; 49.1% of the dogs had either mark. Increasing time interval between vaccination-team visit and dog survey and increasing distance from transect line were inversely associated with the probability of observing a paint mark. Probability of observing a collar was positively associated with increasing estimated density of the dog population in a given village and with animals not associated with a house. The data indicate that distance sampling is a relatively simple and adaptable method for estimating dog-population density and is not prone to problems associated with meeting some model assumptions inherent to mark-recapture estimators. PMID:9500175
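Line-transect distance sampling of the kind described can be sketched with the simpler half-normal detection function, whose effective strip half-width has a closed form (the study's best fit was actually a hazard-rate model); the distances and transect length below are hypothetical.

```python
import math

def halfnormal_line_transect_density(distances_km, total_line_km):
    """Line-transect density estimate with a half-normal detection
    function g(x) = exp(-x^2 / (2 sigma^2)). Shown instead of the
    study's hazard-rate model because the effective strip half-width
    has the closed form sigma * sqrt(pi / 2)."""
    n = len(distances_km)
    sigma2 = sum(x * x for x in distances_km) / n   # MLE of sigma^2
    mu = math.sqrt(sigma2 * math.pi / 2.0)          # effective half-width (km)
    return n / (2.0 * total_line_km * mu)           # animals per km^2

# Hypothetical perpendicular sighting distances (km) on 2 km of transect.
d_hat = halfnormal_line_transect_density(
    [0.005, 0.010, 0.002, 0.008, 0.015, 0.004], total_line_km=2.0)
```

A full analysis would also multiply by mean group size and compare candidate detection functions, as the study did before settling on the hazard-rate form.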
NASA Technical Reports Server (NTRS)
Jergas, M.; Breitenseher, M.; Gluer, C. C.; Yu, W.; Genant, H. K.
1995-01-01
To determine whether estimates of volumetric bone density from projectional scans of the lumbar spine have weaker associations with height and weight and stronger associations with prevalent vertebral fractures than standard projectional bone mineral density (BMD) and bone mineral content (BMC), we obtained posteroanterior (PA) dual X-ray absorptiometry (DXA), lateral supine DXA (Hologic QDR 2000), and quantitative computed tomography (QCT, GE 9800 scanner) in 260 postmenopausal women enrolled in two trials of treatment for osteoporosis. In 223 women, all vertebral levels, i.e., L2-L4 in the DXA scan and L1-L3 in the QCT scan, could be evaluated. Fifty-five women were diagnosed as having at least one mild fracture (age 67.9 +/- 6.5 years) and 168 women did not have any fractures (age 62.3 +/- 6.9 years). We derived three estimates of "volumetric bone density" from PA DXA (BMAD, BMAD*, and BMD*) and three from paired PA and lateral DXA (WA BMD, WA BMDHol, and eVBMD). While PA BMC and PA BMD were significantly correlated with height (r = 0.49 and r = 0.28) or weight (r = 0.38 and r = 0.37), QCT and the volumetric bone density estimates from paired PA and lateral scans were not (r = -0.083 to r = 0.050). BMAD, BMAD*, and BMD* correlated with weight but not height. The associations with vertebral fracture were stronger for QCT (odds ratio [OR] = 3.17; 95% confidence interval [CI] = 1.90-5.27), eVBMD (OR = 2.87; CI 1.80-4.57), WA BMDHol (OR = 2.86; CI 1.80-4.55) and WA-BMD (OR = 2.77; CI 1.75-4.39) than for BMAD*/BMD* (OR = 2.03; CI 1.32-3.12), BMAD (OR = 1.68; CI 1.14-2.48), lateral BMD (OR = 1.88; CI 1.28-2.77), standard PA BMD (OR = 1.47; CI 1.02-2.13) or PA BMC (OR = 1.22; CI 0.86-1.74). The areas under the receiver operating characteristic (ROC) curves for QCT and all estimates of volumetric BMD were significantly higher compared with standard PA BMD and PA BMC.
We conclude that, like QCT, estimates of volumetric bone density from paired PA and lateral scans are unaffected by height and weight and are more strongly associated with vertebral fracture than standard PA BMD or BMC, or estimates of volumetric density that are solely based on PA DXA scans.
Singer, D.A.; Kouda, R.
2011-01-01
Empirical evidence indicates that processes affecting number and quantity of resources in geologic settings are very general across deposit types. Sizes of permissive tracts that geologically could contain the deposits are excellent predictors of numbers of deposits. In addition, total ore tonnage of mineral deposits of a particular type in a tract is proportional to the type's median tonnage in a tract. Regressions using size of permissive tracts and median tonnage allow estimation of number of deposits and of total tonnage of mineralization. These powerful estimators, based on 10 different deposit types from 109 permissive worldwide control tracts, generalize across deposit types. Estimates of number of deposits and of total tonnage of mineral deposits are made by regressing permissive area, and mean (in logs) tons in deposits of the type, against number of deposits and total tonnage of deposits in the tract for the 50th percentile estimates. The regression equations (R^2 = 0.91 and 0.95) can be used for all deposit types just by inserting logarithmic values of permissive area in square kilometers, and mean tons in deposits in millions of metric tons. The regression equations provide estimates at the 50th percentile, and other equations are provided for 90% confidence limits for lower estimates and 10% confidence limits for upper estimates of number of deposits and total tonnage. Equations for these percentile estimates along with expected value estimates are presented here along with comparisons with independent expert estimates. Also provided are the equations for correcting for the known well-explored deposits in a tract. These deposit-density models require internally consistent grade and tonnage models and delineations for arriving at unbiased estimates. © 2011 International Association for Mathematical Geology (outside the USA).
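A one-variable version of the log-log regression underlying these estimators can be sketched as follows; the published models also use median deposit tonnage as a second predictor, and the area/count pairs below are invented, not the 109 control tracts.

```python
import math

def fit_loglog(xs, ys):
    """Ordinary least squares of log10(y) on log10(x); a one-variable
    sketch of the deposit-density regressions, which in the paper also
    include median deposit tonnage as a predictor."""
    lx = [math.log10(x) for x in xs]
    ly = [math.log10(y) for y in ys]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(lx, ly))
             / sum((a - mx) ** 2 for a in lx))
    intercept = my - slope * mx
    return intercept, slope   # log10(y_hat) = intercept + slope * log10(x)

# Hypothetical control-tract data: (permissive area km^2, deposit count).
areas = [100, 1000, 10000, 100000]
counts = [1, 3, 9, 27]
a0, b = fit_loglog(areas, counts)
```

Predicting at a new tract area then means evaluating 10^(a0 + b*log10(area)), which is how the 50th-percentile estimates above are produced.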
Variability of footprint ridge density and its use in estimation of sex in forensic examinations.
Krishan, Kewal; Kanchan, Tanuj; Pathania, Annu; Sharma, Ruchika; DiMaggio, John A
2015-10-01
The present study deals with a comparatively new biometric parameter of footprints called footprint ridge density. The study attempts to evaluate sex-dependent variations in ridge density in different areas of the footprint and its usefulness in discriminating sex in the young adult population of north India. The sample for the study consisted of 160 young adults (121 females) from north India. The left and right footprints were taken from each subject according to the standard procedures. The footprints were analysed using a 5 mm × 5 mm square and the ridge density was calculated in four different well-defined areas of the footprints. These were: F1 - the great toe on its proximal and medial side; F2 - the medial ball of the footprint, below the triradius (the triradius is a Y-shaped group of ridges on finger balls, palms and soles which forms the basis of ridge counting in identification); F3 - the lateral ball of the footprint, towards the most lateral part; and F4 - the heel in its central part where the maximum breadth at heel is cut by a perpendicular line drawn from the most posterior point on heel. This value represents the number of ridges in a 25 mm^2 area and reflects the ridge density value. Ridge densities analysed on different areas of footprints were compared with each other using the Friedman test for related samples. The total footprint ridge density was calculated as the sum of the ridge density in the four areas of footprints included in the study (F1 + F2 + F3 + F4). The results show that the mean footprint ridge density was higher in females than males in all the designated areas of the footprints. The sex differences in footprint ridge density were observed to be statistically significant in the analysed areas of the footprint, except for the heel region of the left footprint. The total footprint ridge density was also observed to be significantly higher among females than males.
A statistically significant correlation is shown in the ridge densities among most areas of both left and right sides. Based on receiver operating characteristic (ROC) curve analysis, the sexing potential of footprint ridge density was observed to be considerably higher on the right side. The sexing potential for the four areas ranged between 69.2% and 85.3% on the right side, and between 59.2% and 69.6% on the left side. ROC analysis of the total footprint ridge density shows that the sexing potential of the right and left footprint was 91.5% and 77.7% respectively. The study concludes that footprint ridge density can be utilised in the determination of sex as a supportive parameter. The findings of the study should be utilised only on the north Indian population and may not be internationally generalisable. PMID:25413487
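The ROC analysis reduces to the Mann-Whitney interpretation of AUC: the probability that a randomly chosen female ridge-density value exceeds a male one. A sketch with hypothetical counts (not the study's data):

```python
def roc_auc(pos, neg):
    """AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive (here female) score exceeds a negative
    (male) one, counting ties as 0.5."""
    wins = 0.0
    for p in pos:
        for q in neg:
            if p > q:
                wins += 1.0
            elif p == q:
                wins += 0.5
    return wins / (len(pos) * len(neg))

# Hypothetical ridge-density counts (ridges per 25 mm^2), invented
# solely to illustrate the calculation.
female = [14, 15, 13, 16, 14]
male = [11, 12, 13, 10, 12]
auc = roc_auc(female, male)
```

An AUC of about 0.9, like the right-footprint figure reported above, would mean a randomly chosen female value exceeds a male one about nine times in ten.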
Jaffé, Rodolfo; Dietemann, Vincent; Allsopp, Mike H; Costa, Cecilia; Crewe, Robin M; Dall'Olio, Raffaele; De la Rúa, Pilar; El-Niweiri, Mogbel A A; Fries, Ingemar; Kezic, Nikola; Meusel, Michael S; Paxton, Robert J; Shaibi, Taher; Stolle, Eckart; Moritz, Robin F A
2010-04-01
Although pollinator declines are a global biodiversity threat, the demography of the western honeybee (Apis mellifera) has not been considered by conservationists because it is biased by the activity of beekeepers. To fill this gap in pollinator decline censuses and to provide a broad picture of the current status of honeybees across their natural range, we used microsatellite genetic markers to estimate colony densities and genetic diversity at different locations in Europe, Africa, and central Asia that had different patterns of land use. Genetic diversity and colony densities were highest in South Africa and lowest in Northern Europe and were correlated with mean annual temperature. Confounding factors not related to climate, however, are also likely to influence genetic diversity and colony densities in honeybee populations. Land use showed a significantly negative influence over genetic diversity and the density of honeybee colonies over all sampling locations. In Europe honeybees sampled in nature reserves had genetic diversity and colony densities similar to those sampled in agricultural landscapes, which suggests that the former are not wild but may have come from managed hives. Other results also support this idea: putative wild bees were rare in our European samples, and the mean estimated density of honeybee colonies on the continent closely resembled the reported mean number of managed hives. Current densities of European honeybee populations are in the same range as those found in the adverse climatic conditions of the Kalahari and Saharan deserts, which suggests that beekeeping activities do not compensate for the loss of wild colonies. Our findings highlight the importance of reconsidering the conservation status of honeybees in Europe and of regarding beekeeping not only as a profitable business for producing honey, but also as an essential component of biodiversity conservation. PMID:19775273
NASA Astrophysics Data System (ADS)
Dafflon, B.; Barrash, W.; Cardiff, M.; Johnson, T. C.
2011-12-01
Reliable predictions of groundwater flow and solute transport require an estimation of the detailed distribution of the parameters (e.g., hydraulic conductivity, effective porosity) controlling these processes. However, such parameters are difficult to estimate because of the inaccessibility and complexity of the subsurface. In this regard, developments in parameter estimation techniques and investigations of field experiments are still challenging and necessary to improve our understanding and the prediction of hydrological processes. Here we analyze a conservative tracer test conducted at the Boise Hydrogeophysical Research Site in 2001 in a heterogeneous unconfined fluvial aquifer. Some relevant characteristics of this test include: variable-density (sinking) effects because of the injection concentration of the bromide tracer, the relatively small size of the experiment, and the availability of various sources of geophysical and hydrological information. The information contained in this experiment is evaluated through several parameter estimation approaches, including a grid-search-based strategy, stochastic simulation of hydrological property distributions, and deterministic inversion using regularization and pilot-point techniques. Doing this allows us to investigate hydraulic conductivity and effective porosity distributions and to compare the effects of assumptions from several methods and parameterizations. Our results provide new insights into the understanding of variable-density transport processes and the hydrological relevance of incorporating various sources of information in parameter estimation approaches. Among others, the variable-density effect and the effective porosity distribution, as well as their coupling with the hydraulic conductivity structure, are seen to be significant in the transport process. The results also show that assumed prior information can strongly influence the estimated distributions of hydrological properties.
Arora, Bhavna; Mohanty, Binayak P; McGuire, Jennifer T
2011-04-01
Soil and crop management practices have been found to modify soil structure and alter macropore densities. An ability to accurately determine soil hydraulic parameters and their variation with changes in macropore density is crucial for assessing potential contamination from agricultural chemicals. This study investigates the consequences of using consistent matrix and macropore parameters in simulating preferential flow and bromide transport in soil columns with different macropore densities (no macropore, single macropore, and multiple macropores). As used herein, the term "macropore density" is intended to refer to the number of macropores per unit area. A comparison between continuum-scale models including single-porosity model (SPM), mobile-immobile model (MIM), and dual-permeability model (DPM) that employed these parameters is also conducted. Domain-specific parameters are obtained from inverse modeling of homogeneous (no macropore) and central macropore columns in a deterministic framework and are validated using forward modeling of both low-density (3 macropores) and high-density (19 macropores) multiple-macropore columns. Results indicate that these inversely modeled parameters are successful in describing preferential flow but not tracer transport in both multiple-macropore columns. We believe that lateral exchange between matrix and macropore domains needs better accounting to efficiently simulate preferential transport in the case of dense, closely spaced macropores. Increasing model complexity from SPM to MIM to DPM also improved predictions of preferential flow in the multiple-macropore columns but not in the single-macropore column. This suggests that the use of a more complex model with resolved domain-specific parameters is recommended with an increase in macropore density to generate forecasts with higher accuracy. PMID:24511165
NASA Technical Reports Server (NTRS)
Jasinski, Michael F.; Crago, Richard
1994-01-01
Parameterizations of the frontal area index and canopy area index of natural or randomly distributed plants are developed, and applied to the estimation of local aerodynamic roughness using satellite imagery. The formulas are expressed in terms of the subpixel fractional vegetation cover and one non-dimensional geometric parameter that characterizes the plant's shape. Geometrically similar plants and Poisson distributed plant centers are assumed. An appropriate averaging technique to extend satellite pixel-scale estimates to larger scales is provided. The parameterization is applied to the estimation of aerodynamic roughness using satellite imagery for a 2.3 sq km coniferous portion of the Landes Forest near Lubbon, France, during the 1986 HAPEX-Mobilhy Experiment. The canopy area index is estimated first for each pixel in the scene based on previous estimates of fractional cover obtained using Landsat Thematic Mapper imagery. Next, the results are incorporated into Raupach's (1992, 1994) analytical formulas for momentum roughness and zero-plane displacement height. The estimates compare reasonably well to reference values determined from measurements taken during the experiment and to published literature values. The approach offers the potential for estimating regionally variable, vegetation aerodynamic roughness lengths over natural regions using satellite imagery when there exists only limited knowledge of the vegetated surface.
NASA Astrophysics Data System (ADS)
Das, Debanjan; Shiladitya, Kumar; Biswas, Karabi; Dutta, Pranab Kumar; Parekh, Aditya; Mandal, Mahitosh; Das, Soumen
2015-12-01
The paper presents a study to differentiate normal and cancerous cells using label-free bioimpedance signals measured by electric cell-substrate impedance sensing. The real-time-measured bioimpedance data of human breast cancer cells and human epithelial normal cells exhibits fluctuations in impedance value due to cellular micromotion resulting from dynamic structural rearrangement of membrane protrusions under nonagitated conditions. Here, a wavelet-based multiscale quantitative analysis technique has been applied to analyze the fluctuations in bioimpedance. The study demonstrates a method to classify cancerous and normal cells from the signature of their impedance fluctuations. The fluctuations associated with cellular micromotion are quantified in terms of cellular energy, cellular power dissipation, and cellular moments. The cellular energy and power dissipation are found to be higher for cancerous cells, consistent with the higher micromotion of cancer cells. This initial study suggests that the proposed wavelet-based quantitative technique promises to be an effective method for analyzing real-time bioimpedance signals to distinguish cancerous from normal cells.
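A minimal sketch of such a wavelet-based multiscale quantification, using a plain orthonormal Haar decomposition (the paper's actual wavelet family, scales, and physical units are not stated in the abstract, so the choices below are illustrative assumptions):

```python
import math
import random

def haar_detail_energies(signal):
    """Orthonormal Haar wavelet decomposition of a length-2^k signal;
    returns the summed squared detail coefficients per scale (finest first)."""
    assert len(signal) & (len(signal) - 1) == 0, "length must be a power of two"
    approx = list(signal)
    energies = []
    s = 1.0 / math.sqrt(2.0)
    while len(approx) > 1:
        details = [(approx[i] - approx[i + 1]) * s for i in range(0, len(approx), 2)]
        approx = [(approx[i] + approx[i + 1]) * s for i in range(0, len(approx), 2)]
        energies.append(sum(d * d for d in details))
    return energies

# A fluctuating (noisy) trace carries more fine-scale energy than a smooth one,
# which is the kind of signature used here to separate the two cell types.
random.seed(0)
smooth = [math.sin(2.0 * math.pi * i / 64.0) for i in range(64)]
noisy = [x + random.gauss(0.0, 0.3) for x in smooth]
print(haar_detail_energies(noisy)[0] > haar_detail_energies(smooth)[0])  # finest scale
```

By Parseval's relation the per-scale detail energies plus the squared final approximation coefficient reproduce the signal's total energy, so the scale-wise terms can be read as an energy budget of the fluctuations.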
NASA Astrophysics Data System (ADS)
Joglekar, D. M.; Mitra, M.
2015-12-01
The present investigation outlines a method based on the wavelet transform to analyze the vibration response of discrete piecewise linear oscillators, representative of beams with breathing cracks. The displacement and force variables in the governing differential equation are approximated using Daubechies compactly supported wavelets. An iterative scheme is developed to arrive at the optimum transform coefficients, which are back-transformed to obtain the time-domain response. A time-integration scheme, solving a linear complementarity problem at every time step, is devised to validate the proposed wavelet-based method. Applicability of the proposed solution technique is demonstrated by considering several test cases involving a cracked cantilever beam modeled as a bilinear SDOF system subjected to a harmonic excitation. In particular, the presence of higher-order harmonics, originating from the piecewise linear behavior, is confirmed in all the test cases. A parametric study involving variations in the crack depth and crack location is performed to bring out their effect on the relative strengths of higher-order harmonics. Versatility of the method is demonstrated by considering cases such as mixed-frequency excitation and an MDOF oscillator with multiple bilinear springs. In addition to establishing the wavelet-based method as a viable alternative for analyzing the response of piecewise linear oscillators, the proposed method, unlike direct time-integration schemes, can be easily extended to solve inverse problems.
NASA Astrophysics Data System (ADS)
Shangguan, Pengcheng; Al-Qadi, Imad L.; Lahouar, Samer
2014-08-01
This paper presents the application of artificial neural network (ANN) based pattern recognition to extract the density information of asphalt pavement from simulated ground penetrating radar (GPR) signals. This study is part of research efforts into the application of GPR to monitor asphalt pavement density during compaction. The main challenge is to eliminate the effect of roller-sprayed water on GPR signals during compaction and to extract density information accurately. A calibration of the excitation function was conducted to provide an accurate match between the simulated signal and the real signal. A modified electromagnetic mixing model was then used to calculate the dielectric constant of asphalt mixture with water. A large database of GPR responses was generated from pavement models having different air void contents and various surface moisture contents using finite-difference time-domain simulation. Feature extraction was performed to extract density-related features from the simulated GPR responses. Air void contents were divided into five classes representing different compaction statuses. An ANN-based pattern recognition system was trained using the extracted features as inputs and air void content classes as target outputs. Accuracy of the system was tested using a separate test data set. Classification of air void contents using the developed algorithm is found to be highly accurate, which indicates the effectiveness of this method for predicting asphalt concrete density.
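The paper's modified electromagnetic mixing model is not reproduced in the abstract, but the underlying idea — recovering air-void content from a bulk dielectric constant via a mixing rule — can be sketched with the generic two-phase CRIM rule; the dielectric values below are purely illustrative assumptions:

```python
import math

def air_void_fraction_crim(eps_mix, eps_solid):
    """Air-void volume fraction from the CRIM (complex refractive index) rule:
    sqrt(eps_mix) = v_air * sqrt(1) + (1 - v_air) * sqrt(eps_solid)."""
    return (math.sqrt(eps_solid) - math.sqrt(eps_mix)) / (math.sqrt(eps_solid) - 1.0)

# Illustrative values: combined aggregate/binder phase eps ~ 5.5, measured mix eps ~ 5.0.
print(round(100.0 * air_void_fraction_crim(5.0, 5.5), 1))  # ~8.1 (% air voids)
```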
Lapuerta, Magín; Rodríguez-Fernández, José; Armas, Octavio
2010-09-01
Biodiesel fuels (methyl or ethyl esters derived from vegetable oils and animal fats) are currently being used as a means to diminish crude oil dependency and to limit the greenhouse gas emissions of the transportation sector. However, their physical properties differ from those of traditional fossil fuels, making their effect on new, electronically controlled vehicles uncertain. Density is one of those properties, and its implications go even further. First, because governments are expected to boost the use of high-biodiesel content blends, but biodiesel fuels are denser than fossil ones. In consequence, their blending proportion is indirectly restricted in order not to exceed the maximum density limit established in fuel quality standards. Second, because an accurate knowledge of biodiesel density permits the estimation of other properties, such as the Cetane Number, whose direct measurement is complex and presents low repeatability and low reproducibility. In this study we compile densities of methyl and ethyl esters published in the literature, and propose equations to convert them to 15 degrees C and to predict biodiesel density based on chain length and degree of unsaturation. Both expressions were validated for a wide range of commercial biodiesel fuels. Using the latter, we define a term called the Biodiesel Cetane Index, which predicts the Biodiesel Cetane Number with high accuracy. Finally, simple calculations prove that the introduction of high-biodiesel content blends in the fuel market would force the refineries to reduce the density of their fossil fuels. PMID:20599853
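The closing density argument can be made concrete with a back-of-envelope blend calculation. The numbers below (an 845 kg/m³ density cap at 15 °C as in the European diesel standard, a fossil diesel at 835 kg/m³, a biodiesel at 880 kg/m³) and the ideal linear volumetric mixing are illustrative assumptions, not values from the paper:

```python
def max_biodiesel_volume_fraction(rho_diesel, rho_biodiesel, rho_limit):
    """Largest biodiesel volume fraction keeping the blend at or below the
    density limit, assuming ideal (linear, volume-based) mixing."""
    if rho_biodiesel <= rho_limit:
        return 1.0  # the limit never binds
    return (rho_limit - rho_diesel) / (rho_biodiesel - rho_diesel)

# Densities in kg/m^3 at 15 degrees C.
print(round(max_biodiesel_volume_fraction(835.0, 880.0, 845.0), 3))  # 0.222
```

Lowering the fossil base fuel from 835 to 830 kg/m³ raises the admissible blend fraction from about 22% to 30%, which is exactly the refinery-side pressure the abstract alludes to.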
Amundsen, L.; Reitan, A.
1995-09-01
The authors propose a new method for inferring the density and P- and S-wave velocities at the sea bottom. The technique is based on estimating these parameters from the acoustic/elastic reflection coefficient calculated from point-source measurements of pressure and the vertical component of particle velocity recorded at the sea floor. The data may be collected either with a fixed source and a moving two-component receiver or with a fixed two-component receiver and a moving source. By spectral division of the two-component recordings transformed to the frequency-radial wavenumber domain, they obtain an estimate of the slowness-dependent reflection coefficient, containing AVO information, which is inverted in a least-squares sense with respect to wave velocities and density. In the following, the theoretical framework and the related inversion procedure are outlined briefly. The viability of the inversion method is demonstrated by means of synthetic data.
Zhang Yumin; Lum, Kai-Yew; Wang Qingguo
2009-03-05
In this paper, an H-infinity fault detection and diagnosis (FDD) scheme for a class of discrete nonlinear systems, based on output probability density estimation, is presented. Unlike classical FDD problems, the measured output of the system is viewed as a stochastic process and its square root probability density function (PDF) is modeled with B-spline functions, which leads to a deterministic space-time dynamic model including nonlinearities and uncertainties. A weighted mean value is given as an integral function of the square root PDF along the space direction, which yields a function of time only that can be used to construct the residual signal. Thus, the classical nonlinear filter approach can be used to detect and diagnose faults in the system. A feasible detection criterion is obtained first, and a new H-infinity adaptive fault diagnosis algorithm is then investigated to estimate the fault. A simulation example is given to demonstrate the effectiveness of the proposed approaches.
Arora, Bhavna; Mohanty, Binayak P.; McGuire, Jennifer T.
2013-01-01
Soil and crop management practices have been found to modify soil structure and alter macropore densities. An ability to accurately determine soil hydraulic parameters and their variation with changes in macropore density is crucial for assessing potential contamination from agricultural chemicals. This study investigates the consequences of using consistent matrix and macropore parameters in simulating preferential flow and bromide transport in soil columns with different macropore densities (no macropore, single macropore, and multiple macropores). As used herein, the term“macropore density” is intended to refer to the number of macropores per unit area. A comparison between continuum-scale models including single-porosity model (SPM), mobile-immobile model (MIM), and dual-permeability model (DPM) that employed these parameters is also conducted. Domain-specific parameters are obtained from inverse modeling of homogeneous (no macropore) and central macropore columns in a deterministic framework and are validated using forward modeling of both low-density (3 macropores) and high-density (19 macropores) multiple-macropore columns. Results indicate that these inversely modeled parameters are successful in describing preferential flow but not tracer transport in both multiple-macropore columns. We believe that lateral exchange between matrix and macropore domains needs better accounting to efficiently simulate preferential transport in the case of dense, closely spaced macropores. Increasing model complexity from SPM to MIM to DPM also improved predictions of preferential flow in the multiple-macropore columns but not in the single-macropore column. This suggests that the use of a more complex model with resolved domain-specific parameters is recommended with an increase in macropore density to generate forecasts with higher accuracy. PMID:24511165
Bhattacharya, Abhishek; Dunson, David B
2012-08-01
This article considers a broad class of kernel mixture density models on compact metric spaces and manifolds. Following a Bayesian approach with a nonparametric prior on the location mixing distribution, sufficient conditions are obtained on the kernel, prior and the underlying space for strong posterior consistency at any continuous density. The prior is also allowed to depend on the sample size n and sufficient conditions are obtained for weak and strong consistency. These conditions are verified on compact Euclidean spaces using multivariate Gaussian kernels, on the hypersphere using a von Mises-Fisher kernel and on the planar shape space using complex Watson kernels. PMID:22984295
Rivera-Milan, F. F.; Collazo, J.A.; Stahala, C.; Moore, W.J.; Davis, A.; Herring, G.; Steinkamp, M.; Pagliaro, R.; Thompson, J.L.; Bracey, W.
2005-01-01
Once abundant and widely distributed, the Bahama parrot (Amazona leucocephala bahamensis) currently inhabits only the Great Abaco and Great Inagua Islands of the Bahamas. In January 2003 and May 2002-2004, we conducted point-transect surveys (a type of distance sampling) to estimate density and population size and make recommendations for monitoring trends. Density ranged from 0.061 (SE = 0.013) to 0.085 (SE = 0.018) parrots/ha and population size ranged from 1,600 (SE = 354) to 2,386 (SE = 508) parrots when extrapolated to the 26,154 ha and 28,162 ha covered by surveys on Abaco in May 2002 and 2003, respectively. Density was 0.183 (SE = 0.049) and 0.153 (SE = 0.042) parrots/ha and population size was 5,344 (SE = 1,431) and 4,450 (SE = 1,435) parrots when extrapolated to the 29,174 ha covered by surveys on Inagua in May 2003 and 2004, respectively. Because parrot distribution was clumped, we would need to survey 213-882 points on Abaco and 258-1,659 points on Inagua to obtain a CV of 10-20% for estimated density. Cluster size and its variability and clumping increased in wintertime, making surveys imprecise and cost-ineffective. Surveys were reasonably precise and cost-effective in springtime, and we recommend conducting them when parrots are pairing and selecting nesting sites. Survey data should be collected yearly as part of an integrated monitoring strategy to estimate density and other key demographic parameters and improve our understanding of the ecological dynamics of these geographically isolated parrot populations at risk of extinction.
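The extrapolations and effort requirements quoted above follow simple relations; a sketch, where the CV ∝ 1/√effort scaling is a standard distance-sampling rule of thumb assumed here rather than taken from the paper:

```python
import math

def population_size(density_per_ha, area_ha):
    """Point estimate: abundance = density x surveyed area."""
    return density_per_ha * area_ha

def points_needed(n_points_now, cv_now, cv_target):
    """Survey points required for a target CV, assuming CV ~ 1/sqrt(effort)."""
    return math.ceil(n_points_now * (cv_now / cv_target) ** 2)

# Abaco, May 2002: 0.061 parrots/ha extrapolated over 26,154 ha.
print(round(population_size(0.061, 26154)))  # 1595, consistent with the reported 1,600
# Halving the CV requires roughly four times the points.
print(points_needed(100, 0.30, 0.15))  # 400
```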
NASA Astrophysics Data System (ADS)
Mikuška, J.; Marušiak, I.; Zahorec, P.; Papčo, J.; Pašteka, R.; Bielik, M.
2014-12-01
It is well known that free-air anomalies and gravitational effects of the topographic masses are mutually proportional, at least in general. However, it is rather intriguing that this feature is more remarkable in elevated mountainous areas than in lowlands or flat regions, as we demonstrate on practical examples. Further, since the times of Pierre Bouguer we know that the gravitational effect of the topographic masses is station-height-dependent. In our presentation we show that the respective contributions to this height dependence, although they are nonzero, are less significant in the cases of both the nearest masses and the more remote ones, while the contribution of the masses within hundreds and thousands of meters from the gravity station is dominant. We also illustrate that, surprisingly, gravitational effects of the non-near topographic masses can be apparently independent of their respective volumes, while their gravitational effects are still well proportional to the gravity station heights. On the other hand, based on interpretational reasons, the Bouguer anomaly should not correlate very much with the heights of the measuring points or, more specifically, with the gravitational effect of the topographic masses. Standard practice is to estimate a suitable (uniform) reduction or correction density within the study area in order to minimize such an undesired correlation and, vice versa, the minimum correlation is often utilized as a criterion for estimating such density. Our main objective is to point out, from the aspect of the correction density estimations, that the contributions of the topographic masses should be viewed alternatively, depending on the particular distances of the respective portions of those masses from the gravity station. We have tested the majority of the existing methods of such density estimation and developed a new one which takes the facts mentioned above into consideration.
This work was supported by the Slovak Research and Development Agency under the contracts APVV-0827-12 and APVV-0194-10.
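The classical criterion mentioned above — choosing the correction density that minimizes the correlation between the Bouguer anomaly and station heights (Nettleton's method) — can be sketched on synthetic data; the constants, noise level, and candidate grid below are assumptions for illustration only:

```python
import math
import random

# Simple Bouguer slab factor in mGal per (kg/m^3 * m); 1 mGal = 1e-5 m/s^2.
TWO_PI_G_MGAL = 2.0 * math.pi * 6.674e-11 * 1e5

def _pearson(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def best_correction_density(free_air_mgal, heights_m, candidates):
    """Pick the reduction density whose simple Bouguer anomaly is least
    correlated with station height (Nettleton-style criterion)."""
    def abs_corr(rho):
        ba = [fa - TWO_PI_G_MGAL * rho * h for fa, h in zip(free_air_mgal, heights_m)]
        return abs(_pearson(ba, heights_m))
    return min(candidates, key=abs_corr)

# Synthetic profile: "true" topographic density 2670 kg/m^3 plus geological noise.
random.seed(1)
heights = [random.uniform(200.0, 1500.0) for _ in range(60)]
free_air = [TWO_PI_G_MGAL * 2670.0 * h + random.gauss(0.0, 0.5) for h in heights]
rho_best = best_correction_density(free_air, heights, range(2000, 3001, 10))
print(rho_best)  # close to 2670
```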
Rayan, D Mark; Mohamad, Shariff Wan; Dorward, Leejiah; Aziz, Sheema Abdul; Clements, Gopalasamy Reuben; Christopher, Wong Chai Thiam; Traeholt, Carl; Magintan, David
2012-12-01
The endangered Asian tapir (Tapirus indicus) is threatened by large-scale habitat loss, forest fragmentation and increased hunting pressure. Conservation planning for this species, however, is hampered by a severe paucity of information on its ecology and population status. We present the first Asian tapir population density estimate from a camera trapping study targeting tigers in a selectively logged forest within Peninsular Malaysia using a spatially explicit capture-recapture maximum likelihood based framework. With a trap effort of 2496 nights, 17 individuals were identified corresponding to a density (standard error) estimate of 9.49 (2.55) adult tapirs/100 km². Although our results include several caveats, we believe that our density estimate still serves as an important baseline to facilitate the monitoring of tapir population trends in Peninsular Malaysia. Our study also highlights the potential of extracting vital ecological and population information for other cryptic individually identifiable animals from tiger-centric studies, especially with the use of a spatially explicit capture-recapture maximum likelihood based framework. PMID:23253368
Dynamics of photosynthetic photon flux density (PPFD) and estimates in coastal northern California
Technology Transfer Automated Retrieval System (TEKTRAN)
The seasonal trends and diurnal patterns of Photosynthetically Active Radiation (PAR) were investigated in the San Francisco Bay Area of Northern California from March through August in 2007 and 2008. During these periods, the daily values of PAR flux density (PFD), energy loading with PAR (PARE), a...
NASA Astrophysics Data System (ADS)
Sarangi, Bighnaraj; Aggarwal, Shankar G.; Sinha, Deepak; Gupta, Prabhat K.
2016-03-01
In this work, we have used a scanning mobility particle sizer (SMPS) and a quartz crystal microbalance (QCM) to estimate the effective density of aerosol particles. This approach is tested for aerosolized particles generated from the solution of standard materials of known density, i.e. ammonium sulfate (AS), ammonium nitrate (AN) and sodium chloride (SC), and also applied for ambient measurement in New Delhi. We also discuss uncertainty involved in the measurement. In this method, dried particles are introduced into a differential mobility analyser (DMA), where size segregation is done based on particle electrical mobility. Downstream of the DMA, the aerosol stream is subdivided into two parts. One is sent to a condensation particle counter (CPC) to measure particle number concentration, whereas the other one is sent to the QCM to measure the particle mass concentration simultaneously. Based on particle volume derived from size distribution data of the SMPS and mass concentration data obtained from the QCM, the mean effective density (ρeff) with uncertainty of inorganic salt particles (for particle count mean diameter (CMD) over a size range 10-478 nm), i.e. AS, SC and AN, is estimated to be 1.76 ± 0.24, 2.08 ± 0.19 and 1.69 ± 0.28 g cm-3, values which are comparable with the material density (ρ) values, 1.77, 2.17 and 1.72 g cm-3, respectively. Using this technique, the percentage contribution of error in the measurement of effective density is calculated to be in the range of 9-17%. Among the individual uncertainty components, repeatability of particle mass obtained by the QCM, the QCM crystal frequency, CPC counting efficiency, and the equivalence of CPC- and QCM-derived volume are the major contributors to the expanded uncertainty (at k = 2) in comparison to other components, e.g. diffusion correction, charge correction, etc. Effective density for ambient particles at the beginning of the winter period in New Delhi was measured to be 1.28 ± 0.12 g cm-3.
It was found that, in general, the mid-day effective density of ambient aerosols increases with the CMD of the measured particle size distribution, but particle photochemistry is an important factor governing this trend. It is further observed that the CMD correlates well with O3, SO2 and ambient RH, suggesting that secondary sulfate materials possibly contribute substantially to particle effective density. This approach can be useful for real-time measurement of the effective density of both laboratory-generated and ambient aerosol particles, which is very important for studying the physico-chemical properties of particles.
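The core estimate — effective density as the QCM mass concentration divided by the SMPS-derived volume concentration — can be sketched as follows; the toy size distribution and unit choices are assumptions for illustration:

```python
import math

def volume_concentration(diameters_nm, number_conc_cm3):
    """Total particle volume per cm^3 of air from a binned size distribution,
    treating particles as spheres (returns cm^3 of particles per cm^3 of air)."""
    total = 0.0
    for d_nm, n in zip(diameters_nm, number_conc_cm3):
        d_cm = d_nm * 1e-7
        total += n * math.pi * d_cm ** 3 / 6.0
    return total

def effective_density(mass_conc_ug_m3, diameters_nm, number_conc_cm3):
    """rho_eff in g/cm^3: QCM mass concentration over SMPS volume concentration."""
    mass_g_cm3 = mass_conc_ug_m3 * 1e-12  # ug/m^3 -> g/cm^3
    return mass_g_cm3 / volume_concentration(diameters_nm, number_conc_cm3)

# Round trip: a mass generated by 1.77 g/cm^3 spheres is recovered.
d = [50.0, 100.0, 200.0]   # bin diameters, nm
n = [2e4, 1e4, 1e3]        # number concentration per bin, cm^-3
mass = volume_concentration(d, n) * 1.77 / 1e-12  # back to ug/m^3
print(round(effective_density(mass, d, n), 2))  # 1.77
```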
NASA Astrophysics Data System (ADS)
Rajamane, N. P.; Nataraja, M. C.; Jeyalakshmi, R.; Nithiyanantham, S.
2016-02-01
Geopolymer concrete (GPC) is a zero-Portland-cement concrete containing an alumino-silicate-based inorganic polymer as binder. The polymer is obtained by chemical activation of alumina- and silica-bearing materials, such as blast furnace slag, by highly alkaline solutions such as hydroxides and silicates of alkali metals. Sodium hydroxide solutions (SHS) of different concentrations are commonly used in making GPC mixes. Often, a sodium hydroxide solution of very high concentration is diluted with water to obtain SHS of the desired concentration. While doing so, it was observed that the solute particles of NaOH in SHS tend to occupy lower volumes as the degree of dilution increases. This aspect is discussed in this paper. The observed phenomenon needs to be understood while formulating GPC mixes, since it considerably influences the relationship between the concentration and density of SHS. This paper suggests an empirical formula relating the density of SHS directly to its concentration expressed as w/w.
Estimating the effective density of engineered nanomaterials for in vitro dosimetry
NASA Astrophysics Data System (ADS)
Deloid, Glen; Cohen, Joel M.; Darrah, Tom; Derk, Raymond; Rojanasakul, Liying; Pyrgiotakis, Georgios; Wohlleben, Wendel; Demokritou, Philip
2014-03-01
The need for accurate in vitro dosimetry remains a major obstacle to the development of cost-effective toxicological screening methods for engineered nanomaterials. An important key to accurate in vitro dosimetry is the characterization of sedimentation and diffusion rates of nanoparticles suspended in culture media, which largely depend upon the effective density and diameter of formed agglomerates in suspension. Here we present a rapid and inexpensive method for accurately measuring the effective density of nano-agglomerates in suspension. This novel method is based on the volume of the pellet obtained by benchtop centrifugation of nanomaterial suspensions in a packed cell volume tube, and is validated against gold-standard analytical ultracentrifugation data. This simple and cost-effective method allows nanotoxicologists to correctly model nanoparticle transport, and thus attain accurate dosimetry in cell culture systems, which will greatly advance the development of reliable and efficient methods for toxicological testing and investigation of nano-bio interactions in vitro.
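A hedged sketch of the mass balance behind such a pellet-volume method: the pellet is assumed to hold agglomerates at a packing (stacking) factor SF, each agglomerate being nanomaterial plus trapped media. The SF value (~0.634, random close packing) and the sample numbers are assumptions, not the paper's calibrated values:

```python
def effective_density_pellet(m_enm_g, rho_enm, rho_media, v_pellet_cm3, sf=0.634):
    """Agglomerate effective density (g/cm^3) from a packed-pellet volume.

    Mass balance: the pellet contains v_agg = sf * v_pellet of agglomerates,
    composed of m_enm_g of nanomaterial plus trapped media filling the rest.
    """
    v_agg = sf * v_pellet_cm3
    v_enm = m_enm_g / rho_enm
    m_media_trapped = (v_agg - v_enm) * rho_media
    return (m_enm_g + m_media_trapped) / v_agg

# Illustrative: 100 ug of a 5.6 g/cm^3 metal oxide suspended in media of
# 1.00 g/cm^3, packing into a 0.4 uL pellet.
print(round(effective_density_pellet(100e-6, 5.6, 1.00, 0.4e-3), 3))  # 1.324
```

The result lies between the media and raw-material densities, which is why formed agglomerates settle far more slowly than the nominal material density alone would suggest.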
Dafflon, Baptiste; Barrash, Warren; Cardiff, Michael A.; Johnson, Timothy C.
2011-12-15
Reliable predictions of groundwater flow and solute transport require an estimation of the detailed distribution of the parameters (e.g., hydraulic conductivity, effective porosity) controlling these processes. However, such parameters are difficult to estimate because of the inaccessibility and complexity of the subsurface. In this regard, developments in parameter estimation techniques and investigations of field experiments are still challenging and necessary to improve our understanding and the prediction of hydrological processes. Here we analyze a conservative tracer test conducted at the Boise Hydrogeophysical Research Site in 2001 in a heterogeneous unconfined fluvial aquifer. Some relevant characteristics of this test include: variable-density (sinking) effects because of the injection concentration of the bromide tracer, the relatively small size of the experiment, and the availability of various sources of geophysical and hydrological information. The information contained in this experiment is evaluated through several parameter estimation approaches, including a grid-search-based strategy, stochastic simulation of hydrological property distributions, and deterministic inversion using regularization and pilot-point techniques. Doing this allows us to investigate hydraulic conductivity and effective porosity distributions and to compare the effects of assumptions from several methods and parameterizations. Our results provide new insights into the understanding of variable-density transport processes and the hydrological relevance of incorporating various sources of information in parameter estimation approaches. Among others, the variable-density effect and the effective porosity distribution, as well as their coupling with the hydraulic conductivity structure, are seen to be significant in the transport process. The results also show that assumed prior information can strongly influence the estimated distributions of hydrological properties.
Comparison of volumetric breast density estimations from mammography and thorax CT
NASA Astrophysics Data System (ADS)
Geeraert, N.; Klausz, R.; Cockmartin, L.; Muller, S.; Bosmans, H.; Bloch, I.
2014-08-01
Breast density has become an important issue in current breast cancer screening, both as a recognized risk factor for breast cancer and by decreasing screening efficiency by the masking effect. Different qualitative and quantitative methods have been proposed to evaluate area-based breast density and volumetric breast density (VBD). We propose a validation method comparing the computation of VBD obtained from digital mammographic images (VBDMX) with the computation of VBD from thorax CT images (VBDCT). We computed VBDMX by applying a conversion function to the pixel values in the mammographic images, based on models determined from images of breast equivalent material. VBDCT is computed from the average Hounsfield Unit (HU) over the manually delineated breast volume in the CT images. This average HU is then compared to the HU of adipose and fibroglandular tissues from patient images. The VBDMX method was applied to 663 mammographic patient images taken on two Siemens Inspiration (hospL) and one GE Senographe Essential (hospJ). For the comparison study, we collected images from patients who had a thorax CT and a mammography screening exam within the same year. In total, thorax CT images corresponding to 40 breasts (hospL) and 47 breasts (hospJ) were retrieved. Averaged over the 663 mammographic images the median VBDMX was 14.7%. The density distribution and the inverse correlation between VBDMX and breast thickness were found as expected. The average difference between VBDMX and VBDCT is smaller for hospJ (4%) than for hospL (10%). This study shows the possibility to compare VBDMX with the VBD from thorax CT exams, without additional examinations. In spite of the limitations caused by poorly defined breast limits, the calibration of mammographic images to local VBD provides opportunities for further quantitative evaluations.
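The HU comparison described above amounts to a two-component linear mixture: the mean HU over the breast volume is interpolated between reference adipose and fibroglandular values. The reference HUs below are typical literature-style assumptions, not the patient-derived values used in the study:

```python
def vbd_from_hu(hu_breast_mean, hu_adipose, hu_fibroglandular):
    """Volumetric breast density as the fibroglandular volume fraction, assuming
    the mean HU over the breast mixes the two tissue classes linearly."""
    frac = (hu_breast_mean - hu_adipose) / (hu_fibroglandular - hu_adipose)
    return min(1.0, max(0.0, frac))  # clip to a physical volume fraction

# Illustrative reference values (HU): adipose -100, fibroglandular +40.
print(round(100.0 * vbd_from_hu(-79.0, -100.0, 40.0), 1))  # 15.0 (% VBD)
```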
Comparison of volumetric breast density estimations from mammography and thorax CT.
Geeraert, N; Klausz, R; Cockmartin, L; Muller, S; Bosmans, H; Bloch, I
2014-08-01
Breast density has become an important issue in current breast cancer screening, both as a recognized risk factor for breast cancer and by decreasing screening efficiency by the masking effect. Different qualitative and quantitative methods have been proposed to evaluate area-based breast density and volumetric breast density (VBD). We propose a validation method comparing the computation of VBD obtained from digital mammographic images (VBDMX) with the computation of VBD from thorax CT images (VBDCT). We computed VBDMX by applying a conversion function to the pixel values in the mammographic images, based on models determined from images of breast equivalent material. VBDCT is computed from the average Hounsfield Unit (HU) over the manually delineated breast volume in the CT images. This average HU is then compared to the HU of adipose and fibroglandular tissues from patient images. The VBDMX method was applied to 663 mammographic patient images taken on two Siemens Inspiration (hospL) and one GE Senographe Essential (hospJ). For the comparison study, we collected images from patients who had a thorax CT and a mammography screening exam within the same year. In total, thorax CT images corresponding to 40 breasts (hospL) and 47 breasts (hospJ) were retrieved. Averaged over the 663 mammographic images the median VBDMX was 14.7%. The density distribution and the inverse correlation between VBDMX and breast thickness were found as expected. The average difference between VBDMX and VBDCT is smaller for hospJ (4%) than for hospL (10%). This study shows the possibility to compare VBDMX with the VBD from thorax CT exams, without additional examinations. In spite of the limitations caused by poorly defined breast limits, the calibration of mammographic images to local VBD provides opportunities for further quantitative evaluations. PMID:25049219
Consequences of Ignoring Guessing when Estimating the Latent Density in Item Response Theory
ERIC Educational Resources Information Center
Woods, Carol M.
2008-01-01
In Ramsay-curve item response theory (RC-IRT), the latent variable distribution is estimated simultaneously with the item parameters. In extant Monte Carlo evaluations of RC-IRT, the item response function (IRF) used to fit the data is the same one used to generate the data. The present simulation study examines RC-IRT when the IRF is imperfectly
Technology Transfer Automated Retrieval System (TEKTRAN)
Resolving uncertainty in the carbon cycle is paramount to refining climate predictions. Soil organic carbon (SOC) is a major component of terrestrial C pools, and accuracy of SOC estimates are only as good as the measurements and assumptions used to obtain them. Dryland soils account for a substanti...
Numerical estimation of bone density and elastic constants distribution in a human mandible.
Reina, J M; García-Aznar, J M; Domínguez, J; Doblaré, M
2007-01-01
In this paper, we try to predict the distribution of bone density and elastic constants in a human mandible, based on the stress level produced by mastication loads using a mathematical model of bone remodelling. These magnitudes are needed to build finite element models for the simulation of the mandible mechanical behavior. Such a model is intended for use in future studies of the stability of implant-supported dental prostheses. Various models of internal bone remodelling, both phenomenological and more recently mechanobiological, have been developed to determine the relation between bone density and the stress level that bone supports. Among the phenomenological models, there are only a few that are also able to reproduce the level of anisotropy. These latter have been successfully applied to long bones, primarily the femur. One of these models is here applied to the human mandible, whose corpus behaves as a long bone. The results of bone density distribution and level of anisotropy in different parts of the mandible have been compared with various clinical studies, with a reasonable level of agreement. PMID:16687149
New treatments of density fluctuations and recurrence times for re-estimating Zermelo’s paradox
NASA Astrophysics Data System (ADS)
Michel, Denis
What is the probability that all the gas in a box accumulates in the same half of this box? Though amusing, this question underlies the fundamental problem of density fluctuations at equilibrium, which has profound implications for many physical fields. The currently accepted solutions are derived from the studies of Brownian motion by Smoluchowski, but they are not appropriate for the directly colliding particles of gases. Two alternative theories are proposed here using self-regulatory Bernoulli distributions, which incorporate roles for crowding and pressure in counteracting density fluctuations. A quantum of space is first introduced to develop a mechanism of matter congestion holding for high densities. In a second mechanism valid in ordinary conditions, the influence of local pressure on the location of every particle is examined using classical laws of ideal gases. This approach reveals that a negative feedback results from the reciprocal influences between individual particles and the population of particles, which strongly reduces the probability of atypical microstates. Finally, a thermodynamic quantum of time is defined to compare the recurrence times of improbable macrostates predicted through these different approaches.
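The classical independent-particle baseline for the opening question (the estimate the paper's self-regulatory distributions then correct) is elementary: with each of n non-interacting particles equally likely to sit in either half, the chance that all n occupy the same half at once is 2·(1/2)^n. A minimal sketch:

```python
from fractions import Fraction


def prob_all_same_half(n_particles):
    """Probability that all n independent, non-interacting particles
    occupy the same (either) half of the box simultaneously.

    Classical estimate: (1/2)**n per designated half, times 2 halves.
    Exact rational arithmetic avoids underflow for large n.
    """
    return 2 * Fraction(1, 2) ** n_particles
```

For even a mole-scale n this probability is astronomically small, which is why the question is usually framed via recurrence times rather than direct odds.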
Breast segmentation and density estimation in breast MRI: a fully automatic framework.
Gubern-Mérida, Albert; Kallenberg, Michiel; Mann, Ritse M; Martí, Robert; Karssemeijer, Nico
2015-01-01
Breast density measurement is an important aspect in breast cancer diagnosis as dense tissue has been related to the risk of breast cancer development. The purpose of this study is to develop a method to automatically compute breast density in breast MRI. The framework is a combination of image processing techniques to segment breast and fibroglandular tissue. Intra- and interpatient signal intensity variability is initially corrected. The breast is segmented by automatically detecting body-breast and air-breast surfaces. Subsequently, fibroglandular tissue is segmented in the breast area using expectation-maximization. A dataset of 50 cases with manual segmentations was used for evaluation. Dice similarity coefficient (DSC), total overlap, false negative fraction (FNF), and false positive fraction (FPF) are used to report similarity between automatic and manual segmentations. For breast segmentation, the proposed approach obtained DSC, total overlap, FNF, and FPF values of 0.94, 0.96, 0.04, and 0.07, respectively. For fibroglandular tissue segmentation, we obtained DSC, total overlap, FNF, and FPF values of 0.80, 0.85, 0.15, and 0.22, respectively. The method is relevant for researchers investigating breast density as a risk factor for breast cancer and all the described steps can be also applied in computer aided diagnosis systems. PMID:25561456
NASA Astrophysics Data System (ADS)
Sarangi, B.; Aggarwal, S. G.; Sinha, D.; Gupta, P. K.
2015-12-01
In this work, we have used a scanning mobility particle sizer (SMPS) and a quartz crystal microbalance (QCM) to estimate the effective density of aerosol particles. This approach is tested for aerosolized particles generated from solutions of standard materials of known density, i.e. ammonium sulfate (AS), ammonium nitrate (AN) and sodium chloride (SC), and is also applied to ambient measurements in New Delhi. We also discuss the uncertainty involved in the measurement. In this method, dried particles are introduced into a differential mobility analyzer (DMA), where size segregation is done based on particle electrical mobility. Downstream of the DMA, the aerosol stream is subdivided into two parts: one is sent to a condensation particle counter (CPC) to measure particle number concentration, while the other is sent to the QCM to measure the particle mass concentration simultaneously. Based on the particle volume derived from SMPS size distribution data and the mass concentration data obtained from the QCM, the mean effective densities (ρeff) with uncertainties of the inorganic salt particles AS, SC and AN (for particle count mean diameters (CMD) over the size range 10 to 478 nm) are estimated to be 1.76 ± 0.24, 2.08 ± 0.19 and 1.69 ± 0.28 g cm⁻³, which are comparable with the material density (ρ) values, 1.77, 2.17 and 1.72 g cm⁻³, respectively. Among individual uncertainty components, repeatability of particle mass obtained by QCM, QCM crystal frequency, CPC counting efficiency, and equivalence of CPC- and QCM-derived volume are the major contributors to the expanded uncertainty (at k = 2) in comparison to other components, e.g. diffusion correction, charge correction, etc. The effective density of ambient particles at the beginning of the winter period in New Delhi is measured to be 1.28 ± 0.12 g cm⁻³.
It was found that, in general, the mid-day effective density of ambient aerosols increases with increasing CMD, but particle photochemistry is an important factor governing this trend. It is further observed that the CMD correlates well with O3, SO2 and ambient RH, suggesting that secondary sulfate materials possibly make a substantial contribution to particle effective density. This approach can be useful for real-time measurement of the effective density of both laboratory-generated and ambient aerosol particles, which is very important for studying the physico-chemical properties of particles.
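The core of the SMPS+QCM approach above is a ratio: the QCM mass concentration divided by the volume concentration integrated from the mobility size distribution. A minimal sketch under the spherical-particle assumption (the function and unit conventions are ours, not the paper's code):

```python
import math


def effective_density(diameters_nm, number_conc_cm3, mass_conc_ug_m3):
    """Effective particle density (g cm^-3) from an SMPS size
    distribution and a simultaneous QCM mass concentration.

    rho_eff = mass concentration / volume concentration, with the
    volume concentration integrated over the size bins assuming
    spherical particles of mobility diameter d.
    """
    # volume concentration: cm^3 of particle per cm^3 of air
    vol_conc = sum(n * (math.pi / 6.0) * (d * 1e-7) ** 3   # nm -> cm
                   for d, n in zip(diameters_nm, number_conc_cm3))
    mass_g_cm3 = mass_conc_ug_m3 * 1e-12                   # ug/m^3 -> g/cm^3
    return mass_g_cm3 / vol_conc
```

With a monodisperse test input whose mass is generated from a known material density, the function recovers that density exactly, which mirrors the paper's validation against AS, AN and SC.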
NASA Astrophysics Data System (ADS)
Mutanga, Onisimo; Adam, Elhadi; Cho, Moses Azong
2012-08-01
The saturation problem associated with the use of NDVI for biomass estimation in high canopy density vegetation is a well-known phenomenon. Recent field spectroscopy experiments have shown that narrow-band vegetation indices computed from the red edge and the NIR shoulder can improve the estimation of biomass in such situations. However, the wide-scale unavailability of high spectral resolution satellite sensors with red edge bands has prevented the up-scaling of these techniques to spaceborne remote sensing of high density biomass. This paper explored the possibility of estimating biomass in a densely vegetated wetland area using the normalized difference vegetation index (NDVI) computed from WorldView-2 imagery, which contains a red edge band centred at 725 nm. NDVI was calculated from all possible two-band combinations of WorldView-2. Subsequently, we utilized the random forest regression algorithm as both a variable selection and regression method for predicting wetland biomass. The performance of random forest regression in predicting biomass was then compared against the widely used stepwise multiple linear regression. Predicting biomass on an independent test data set using the random forest algorithm and 3 NDVIs computed from the red edge and NIR bands yielded a root mean square error of prediction (RMSEP) of 0.441 kg/m2 (12.9% of observed mean biomass), compared to the stepwise multiple linear regression, which produced an RMSEP of 0.5465 kg/m2 (15.9% of observed mean biomass). The results demonstrate the utility of WorldView-2 imagery and random forest regression in estimating and ultimately mapping vegetation biomass at high density - a previously challenging task with broad band satellite sensors.
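The exhaustive two-band NDVI search described above can be sketched as follows (the band indexing is a placeholder; no particular WorldView-2 band order is assumed):

```python
import numpy as np


def ndvi(band_a, band_b):
    """Normalized difference index for any two bands: (a - b) / (a + b)."""
    a = np.asarray(band_a, dtype=float)
    b = np.asarray(band_b, dtype=float)
    return (a - b) / (a + b)


def all_pair_indices(bands):
    """NDVI-type indices for every ordered pair of distinct bands,
    keyed by (i, j), mirroring the exhaustive two-band combination
    search in the abstract. `bands` is a sequence of reflectance
    arrays, one per spectral band."""
    return {(i, j): ndvi(bands[i], bands[j])
            for i in range(len(bands))
            for j in range(len(bands)) if i != j}
```

The resulting dictionary of candidate indices would then feed a variable-selection step (random forest importance ranking in the paper) before regression.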
NASA Astrophysics Data System (ADS)
Baylor, R. N.; Cassak, P. A.; Christe, S.; Hannah, I. G.; Krucker, Säm; Mullan, D. J.; Shay, M. A.; Hudson, H. S.; Lin, R. P.
2011-07-01
We use more than 4500 microflares from the RHESSI microflare data set to estimate electron densities and volumetric filling factors of microflare loops using a cooling time analysis. We show that if the filling factor is assumed to be unity, the calculated conductive cooling times are much shorter than the observed flare decay times, which in turn are much shorter than the calculated radiative cooling times. This is likely unphysical, but the contradiction can be resolved by assuming that the radiative and conductive cooling times are comparable, which is valid when the flare loop temperature is a maximum and when external heating can be ignored. We find that resultant radiative and conductive cooling times are comparable to observed decay times, which has been used as an assumption in some previous studies. The inferred electron densities have a mean value of 10^11.6 cm⁻³ and the filling factors have a mean of 10^-3.7. The filling factors are lower and densities are higher than previous estimates for large flares, but are similar to those found for two microflares by Moore et al.
Estimating the effective density of engineered nanomaterials for in vitro dosimetry
DeLoid, Glen; Cohen, Joel M.; Darrah, Tom; Derk, Raymond; Wang, Liying; Pyrgiotakis, Georgios; Wohlleben, Wendel; Demokritou, Philip
2014-01-01
The need for accurate in vitro dosimetry remains a major obstacle to the development of cost-effective toxicological screening methods for engineered nanomaterials. An important key to accurate in vitro dosimetry is the characterization of sedimentation and diffusion rates of nanoparticles suspended in culture media, which largely depend upon the effective density and diameter of formed agglomerates in suspension. Here we present a rapid and inexpensive method for accurately measuring the effective density of nano-agglomerates in suspension. This novel method is based on the volume of the pellet obtained by bench-top centrifugation of nanomaterial suspensions in a packed cell volume tube, and is validated against gold-standard analytical ultracentrifugation data. This simple and cost-effective method allows nanotoxicologists to correctly model nanoparticle transport, and thus attain accurate dosimetry in cell culture systems, which will greatly advance the development of reliable and efficient methods for toxicological testing and investigation of nano-bio interactions in vitro. PMID:24675174
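The pellet-volume reasoning above can be sketched as follows, assuming the pellet consists of agglomerates (solid nanomaterial plus trapped media) packed with a stacking factor near random close packing. This is a generic volumetric-centrifugation sketch; the exact formula and constants in the paper may differ:

```python
def effective_density_pellet(m_nm_g, rho_nm, rho_media,
                             v_pellet_cm3, sf=0.634):
    """Agglomerate effective density (g cm^-3) from packed-pellet volume.

    Sketch under stated assumptions: the pellet volume times the
    stacking factor `sf` (0.634 ~ random close packing of spheres)
    gives the total agglomerate volume; the agglomerate mass is the
    nanomaterial mass plus the media trapped inside the agglomerates.
    m_nm_g: nanomaterial mass in the pellet (g); rho_nm: raw material
    density; rho_media: culture-media density (both g cm^-3).
    """
    v_agg = v_pellet_cm3 * sf        # agglomerate volume within pellet
    v_nm = m_nm_g / rho_nm           # solid nanomaterial volume
    v_media = v_agg - v_nm           # intra-agglomerate trapped media
    return (m_nm_g + v_media * rho_media) / v_agg
```

Two sanity limits follow directly: with no trapped media the result reduces to the raw material density, and with no nanomaterial it reduces to the media density.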
NASA Technical Reports Server (NTRS)
Aase, J. K.; Millard, J. P.; Siddoway, F. H. (Principal Investigator)
1982-01-01
Radiance measurements from handheld (Exotech 100-A) and airborne (Daedalus DEI 1260) radiometers were related to wheat (Triticum aestivum L.) stand densities (simulated winter wheat winterkill) and to grain yield for a field located 11 km northwest of Sidney, Montana, on a Williams loam soil (fine-loamy, mixed Typic Argiborolls) where a semidwarf hard red spring wheat cultivar was seeded to stand. Radiances were measured with the handheld radiometer on clear mornings throughout the growing season. Aircraft overflight measurements were made at the end of tillering, during the early stem extension period, and at the mid-heading period. The IR/red ratio and normalized difference vegetation index were used in the analysis. The aircraft measurements corroborated the ground measurements inasmuch as wheat stand densities were detected and could be evaluated at an early enough growth stage to make management decisions. The aircraft measurements also corroborated handheld measurements when related to yield prediction. The IR/red ratio, although there was some growth-stage dependency, related well to yield when measured from just past tillering until about the watery-ripe stage.
Systematic Parameter Estimation of a Density-Dependent Groundwater-Flow and Solute-Transport Model
NASA Astrophysics Data System (ADS)
Stanko, Z.; Nishikawa, T.; Traum, J. A.
2013-12-01
A SEAWAT-based flow and transport model of seawater intrusion that utilizes dual-domain porosity was developed for the Santa Barbara groundwater basin in southern California. Model calibration can be difficult when simulating flow and transport in large-scale hydrologic systems with extensive heterogeneity. To facilitate calibration, the hydrogeologic properties in this model are based on the fractions of coarse- and fine-grained sediment interpolated from drillers' logs. This approach prevents over-parameterization by assigning one set of parameters to coarse material and another set to fine material. Estimated parameters include boundary conditions (such as areal recharge and surface-water seepage), hydraulic conductivities, dispersivities, and mass-transfer rate. As a result, the model has 44 parameters that were estimated by using the parameter-estimation software PEST, which uses the Gauss-Marquardt-Levenberg algorithm, along with various features such as singular value decomposition to improve calibration efficiency. The model is calibrated by using 36 years of observed water-level and chloride-concentration measurements, as well as first-order changes in head and concentration. Prior information on hydraulic properties is also provided to PEST as additional observations. The calibration objective is to minimize the squared sum of weighted residuals. In addition, observation sensitivities are investigated to effectively calibrate the model. An iterative parameter-estimation procedure is used to dynamically calibrate steady-state and transient simulation models. The resulting head and concentration states from the steady-state model provide the initial conditions for the transient model. The transient calibration provides updated parameter values for the next steady-state simulation. This process repeats until a reasonable fit is obtained.
Preliminary results from the systematic calibration process indicate that tuning PEST by using a set of synthesized observations generated from model output reduces execution times significantly. Parameter sensitivity analyses indicate that both simulated heads and chloride concentrations are sensitive to the ocean boundary conductance parameter. Conversely, simulated heads are sensitive to some parameters, such as specific fault conductances, but chloride concentrations are insensitive to the same parameters. Heads are specifically found to be insensitive to mobile domain texture but sensitive to hydraulic conductivity and specific storage. The chloride concentrations are insensitive to some hydraulic conductivity and fault parameters but sensitive to mass transfer rate and longitudinal dispersivity. Future work includes investigating the effects of parameter and texture characterization uncertainties on seawater intrusion simulations.
Individual movements and population density estimates for moray eels on a Caribbean coral reef
NASA Astrophysics Data System (ADS)
Abrams, R. W.; Schein, M. W.
1986-12-01
Observations of moray eel (Muraenidae) distribution made on a Caribbean coral reef are discussed in the context of long-term population trends. Observations of eel distribution made using SCUBA during 1978, 1979, 1980, and 1984 are compared and related to the occurrence of a hurricane in 1979. An estimate of the mean standing stock of moray eels is presented. The degree of site attachment is discussed for spotted morays (Gymnothorax moringa) and goldentail morays (Muraena miliaris). The repeated non-aggressive association of moray eels with large aggregations of potential prey fishes is detailed.
Sperling, Or; Shapira, Or; Cohen, Shabtai; Tripler, Effi; Schwartz, Amnon; Lazarovitch, Naftali
2012-09-01
In a world of diminishing water reservoirs and a rising demand for food, the practice and development of water stress indicators and sensors are in rapid progress. The heat dissipation method, originally established by Granier, is herein applied and modified to enable sap flow measurements in date palm trees in the southern Arava desert of Israel. A long and tough sensor was constructed to withstand insertion into the date palm's hard exterior stem. This stem is wide and fibrous, surrounded by an even tougher external non-conducting layer of dead leaf bases. Furthermore, being a monocot species, water flow does not necessarily occur through the outer part of the palm's stem, as in most trees. Therefore, it is highly important to investigate the variations of the sap flux densities and determine the preferable location for sap flow sensing within the stem. Once installed into fully grown date palm trees stationed on weighing lysimeters, sap flow as measured by the modified sensors was compared with the actual transpiration. Sap flow was found to be well correlated with transpiration, especially when using a recent calibration equation rather than the original Granier equation. Furthermore, accounting for the axial variability of the sap flux densities was found to be highly important for accurate assessments of transpiration by sap flow measurements. The sensors indicated no transpiration at night, a sharp increase of transpiration from 06:00 to 09:00, maximum transpiration at 12:00, followed by a moderate reduction until 20:00, when transpiration ceased. These results were reinforced by the lysimeters' output. Reduced sap flux densities were detected at the stem's mantle when compared with its center. These results were reinforced by mechanistic measurements of the stem's specific hydraulic conductivity.
Variance on the vertical axis was also observed, indicating an accelerated flow towards the upper parts of the tree and raising a hypothesis concerning dehydrating mechanisms of the date palm tree. Finally, the sensors indicated reduction in flow almost immediately after irrigation of field-grown trees was withheld, at a time when no climatic or phenological conditions could have led to reduction in transpiration. PMID:22887479
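For reference, the original Granier thermal-dissipation calibration (which the authors replace with a newer equation for date palm) converts the probe temperature difference into sap flux density via the dimensionless flow index K = (ΔTmax − ΔT)/ΔT and u = 119×10⁻⁶·K^1.231. A minimal sketch:

```python
def granier_sap_flux_density(dT, dT_max, a=119e-6, b=1.231):
    """Sap flux density (m^3 m^-2 s^-1) from a thermal dissipation probe.

    K = (dT_max - dT) / dT, where dT is the measured probe temperature
    difference and dT_max its zero-flow (nighttime) value. Defaults are
    the original Granier calibration; a species-specific recalibration,
    as in the paper, would substitute different a and b.
    """
    k = (dT_max - dT) / dT
    return a * k ** b
```

At zero flow (dT equal to dT_max) the index K is zero and the flux vanishes, matching the sensors' nighttime readings.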
Jamilis, Martín; Garelli, Fabricio; Mozumder, Md Salatul Islam; Castañeda, Teresita; De Battista, Hernán
2015-10-01
This paper addresses the estimation of the specific production rate of intracellular products and the modeling of the bioreactor volume dynamics in high cell density fed-batch reactors. In particular, a new model for the bioreactor volume is proposed, suitable to be used in high cell density cultures where large amounts of intracellular products are stored. Based on the proposed volume model, two forms of a high-order sliding mode observer are proposed. Each form corresponds to the cases with residual biomass concentration or volume measurement, respectively. The observers achieve finite time convergence and robustness to process uncertainties as the kinetic model is not required. Stability proofs for the proposed observer are given. The observer algorithm is assessed numerically and experimentally. PMID:26149912
Decreased values of cosmic dust number density estimates in the Solar System
NASA Astrophysics Data System (ADS)
Willis, M. J.; Burchell, M. J.; Ahrens, T. J.; Krüger, H.; Grün, E.
2005-08-01
Experiments to investigate the effect of impacts on side-walls of dust detectors such as the present NASA/ESA Galileo/Ulysses instrument are reported. Side walls constitute 27% of the internal area of these instruments, and increase field of view from 140° to 180°. Impact of cosmic dust particles onto Galileo/Ulysses Al side walls was simulated by firing Fe particles, 0.5-5 μm diameter, 2-50 km s -1, onto an Al plate, simulating the targets of Galileo and Ulysses dust instruments. Since side wall impacts affect the rise time of the target ionization signal, the degree to which particle fluxes are overestimated varies with velocity. Side-wall impacts at particle velocities of 2-20 km s -1 yield rise times 10-30% longer than for direct impacts, so that derived impact velocity is reduced by a factor of ˜2. Impacts on side wall at 20-50 km s -1 reduced rise times by a factor of ˜10 relative to direct impact data. This would result in serious overestimates of flux of particles intersecting the dust instrument at velocities of 20-50 km s -1. Taking into account differences in laboratory calibration geometry we obtain the following percentages for previous overestimates of incident particle number density values from the Galileo instrument [Grün et al., 1992. The Galileo dust detector. Space Sci. Rev. 60, 317-340]: 55% for 2 km s -1 impacts, 27% at 10 km s -1 and 400% at 70 km s -1. We predict that individual particle masses are overestimated by ˜10-90% when side-wall impacts occur at 2-20 km s -1, and underestimated by ˜10-10 at 20-50 km s -1. We predict that wall impacts at 20-50 km s -1 can be identified in Galileo instrument data on account of their unusually short target rise times. The side-wall calibration is used to obtain new revised values [Krüger et al., 2000. A dust cloud of Ganymede maintained by hypervelocity impacts of interplanetary micrometeoroids. Planet. Space Sci. 48, 1457-1471; 2003. Impact-generated dust clouds surrounding the Galilean moons. 
Icarus 164, 170-187] of the Galilean satellite dust number densities of 9.4×10, 9.9×10, 4.1×10, and 6.8×10 m at 1 satellite radius from Io, Europa, Ganymede, and Callisto, respectively. Additionally, interplanetary particle number densities detected by the Galileo mission are found to be 1.6×10, 7.9×10, 3.2×10, 3.2×10, and 7.9×10 m at heliocentric distances of 0.7, 1, 2, 3, and 5 AU, respectively. Work by Burchell et al. [1999b. Acceleration of conducting polymer-coated latex particles as projectiles in hypervelocity impact experiments. J. Phys. D: Appl. Phys. 32, 1719-1728] suggests that low-density "fluffy" particles encountered by Ulysses will not significantly affect our results—further calibration would be useful to confirm this.
NASA Astrophysics Data System (ADS)
Kalimullina, L. R.; Nafikova, E. P.; Asfandiarov, N. L.; Chizhov, Yu. V.; Baibulova, G. Sh.; Zhdanov, E. R.; Gadiev, R. M.
2015-03-01
A number of compounds related to quinone derivatives are investigated by means of density functional theory at the B3LYP/6-31G(d) level. Vertical electron affinities E_va and/or electron affinities E_a for the investigated compounds are known from experiment. The correlation between the calculated energies of π* molecular orbitals and the E_va values measured via electron transmission spectroscopy is determined with a coefficient of 0.96. It is established that theoretical values of the adiabatic electron affinity, calculated as the difference between the total energies of a neutral molecule and a radical anion, correlate with E_a values determined from electron transfer experiments with a correlation coefficient of 0.996.
Optimal spectrum estimation of density operators with alkaline-earth atoms
NASA Astrophysics Data System (ADS)
Gorshkov, Alexey
2015-03-01
The eigenspectrum p = (p1, p2, ..., pd) of the density operator ρ describing the state of a quantum system can be used to characterize the entanglement of this system with its environment. In the seminal paper [Phys. Rev. A 64, 052311 (2001)], Keyl and Werner present the optimal measurement scheme for inferring p given n copies of an unknown state ρ. Since this measurement uses a highly entangled basis over the full joint state ρ^⊗n of all copies, it should naively be extremely difficult to implement in practice. In this talk, we give a simple experimental protocol to carry out the Keyl-Werner measurement for ρ on the nuclear spin degrees of freedom of n alkaline-earth atoms using standard Ramsey spectroscopy techniques.
Estimation of effective hydrologic properties of soils from observations of vegetation density
NASA Technical Reports Server (NTRS)
Tellers, T. E.; Eagleson, P. S.
1980-01-01
A one-dimensional model of the annual water balance is reviewed. Improvements are made in the method of calculating the bare-soil component of evaporation and in the way surface retention is handled. A natural selection hypothesis, which specifies the equilibrium vegetation density for a given water-limited climate-soil system, is verified through comparisons with observed data. Comparison of CDFs of annual basin yield derived using these soil properties with observed CDFs provides verification of the soil-selection procedure. This method of parameterization of the land surface is useful with global circulation models, enabling them to account for both the nonlinearity in the relationship between soil moisture flux and soil moisture concentration, and the variability of soil properties from place to place over the Earth's surface.
Three-dimensional estimates of the coronal electron density at times of extreme solar activity
NASA Astrophysics Data System (ADS)
Butala, M. D.; Frazin, R. A.; Kamalabadi, F.
2005-09-01
This paper presents quantitative three-dimensional (3-D) reconstructions of the electron density (Ne) in the solar corona between 1.14 and 2.7 solar radii (R⊙) formed from polarized brightness (pB) measurements made by the Mauna Loa Solar Observatory Mark-IV (Mk4) K-coronameter at the time of the extreme solar events of October and November 2003. The 3-D reconstructions are made by a process called solar rotational tomography that exploits the view angles provided by solar rotation during a 2-week period. Although this method is incapable of resolving dynamic evolution on timescales of less than about 2 weeks, a qualitative comparison of the reconstructions to instantaneous Extreme ultraviolet Imaging Telescope (EIT) images shows good agreement between coronal holes, active regions, and "quiet Sun" structures on the disk and their counterparts in the corona at 1.2 R⊙.
NASA Astrophysics Data System (ADS)
Ivanchik, A. V.; Balashev, S. A.; Varshalovich, D. A.; Klimenko, V. V.
2015-02-01
A review of molecular hydrogen H2 absorption systems identified in quasar spectra is presented. The analysis of such systems allows the determination of the chemical composition of the interstellar medium and the physical conditions existing in the early Universe, about 10-12 billion years ago. To date, 27 molecular hydrogen systems have been found, nine of which show HD lines. An independent method for estimating the baryon density of the Universe is described, based on the analysis of the relative abundances of H2 and HD molecules. Among known H2/HD systems, only the two systems detected in the Q1232+082 and Q0812+320 quasar spectra satisfy the self-shielding condition of the absorbing cloud. Under these conditions the local molecular fraction can reach unity, making it possible to estimate the relative deuterium abundance D/H using the ratio of the HD and H2 column densities, N(HD)/2N(H2). The analysis of the column densities for these two systems yields D/H = HD/2H2 = (3.26 ± 0.29) × 10⁻⁵. Comparison of this result with the prediction of BBN theory for D/H enables the determination of the baryon density of the Universe: Ωb h² = 0.0194 ± 0.0011. This is somewhat lower than the values Ωb h² = 0.0224 ± 0.0012 and 0.0221 ± 0.0003 obtained using other independent methods: (i) analysis of the relative D and H abundances in Lyman Limit Systems at high redshifts, and (ii) analysis of the anisotropy of the cosmic microwave background. Nevertheless, all three values agree within their 2σ errors.
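The deuterium-abundance step in the method above is a direct column-density ratio, valid when the molecular fraction approaches unity in a self-shielded cloud. A minimal sketch:

```python
def deuterium_abundance(n_hd, n_h2):
    """Relative deuterium abundance D/H = N(HD) / (2 N(H2)),
    assuming a fully molecular (self-shielded) absorbing cloud, so
    that atomic D/H is traced by the HD/H2 column-density ratio.
    Column densities share whatever units cancel in the ratio."""
    return n_hd / (2.0 * n_h2)
```

A subsequent comparison with the BBN prediction for D/H as a function of the baryon density is what yields the Ωb h² estimate quoted in the abstract.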
Rosado-Mendez, Ivan M.; Nam, Kibo; Hall, Timothy J.; Zagzebski, James A.
2013-01-01
Reported here is a phantom-based comparison of methods for determining the power spectral density of ultrasound backscattered signals. Those power spectral density values are then used to estimate parameters describing ?(f), the frequency dependence of the acoustic attenuation coefficient. Phantoms were scanned with a clinical system equipped with a research interface to obtain radiofrequency echo data. Attenuation, modeled as a power law ?(f)=?0f?, was estimated using a reference phantom method. The power spectral density as estimated using the short-time Fourier transform (STFT), Welch's periodogram, and Thomson's multitaper technique, and performance was analyzed when limiting the size of the parameter estimation region. Errors were quantified by the bias and standard deviation of the ?0 and ? estimates, and by the overall power-law fit error. For parameter estimation regions larger than ~34 pulse lengths (~1cm for this experiment), an overall power-law fit error of 4% was achieved with all spectral estimation methods. With smaller parameter estimation regions as in parametric image formation, the bias and standard deviation of the ?0 and ? estimates depended on the size of the parameter estimation region. Here the multitaper method reduced the standard deviation of the ?0 and ? estimates compared to those using the other techniques. Results provide guidance for choosing methods for estimating the power spectral density in quantitative ultrasound. PMID:23858055
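Welch's periodogram, one of the three spectral estimators compared above, averages windowed, overlapping segment periodograms to trade frequency resolution for reduced variance. A self-contained numpy sketch (the Hann window, segment length and 50% overlap are illustrative choices, not the paper's settings):

```python
import numpy as np


def welch_psd(x, seg_len, overlap=0.5):
    """Welch's averaged-periodogram PSD estimate (unscaled sketch).

    Split the signal into Hann-windowed, overlapping segments, take
    the magnitude-squared real FFT of each, and average. Averaging
    over segments reduces estimator variance relative to a single
    full-length periodogram, at the cost of frequency resolution.
    """
    x = np.asarray(x, dtype=float)
    step = max(1, int(seg_len * (1.0 - overlap)))
    win = np.hanning(seg_len)
    norm = (win ** 2).sum()                      # window power correction
    segs = [x[i:i + seg_len]
            for i in range(0, len(x) - seg_len + 1, step)]
    spectra = [np.abs(np.fft.rfft(win * s)) ** 2 / norm for s in segs]
    return np.mean(spectra, axis=0)
```

In the attenuation workflow, a PSD like this would be computed for sample and reference phantom echoes at each depth before fitting the power-law parameters.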
Estimation of Heavy Ion Densities From Linearly Polarized EMIC Waves At Earth
Kim, Eun-Hwa; Johnson, Jay R.; Lee, Dong-Hun
2014-02-24
Linearly polarized EMIC waves are expected to concentrate at the location where their wave frequency satisfies the ion-ion hybrid (IIH) resonance condition as the result of a mode conversion process. In this letter, we evaluate absorption coefficients at the IIH resonance at Earth's geosynchronous orbit for variable concentrations of helium and azimuthal and field-aligned wave numbers in a dipole magnetic field. Although wave absorption occurs for a wide range of heavy ion concentrations, it only occurs for a limited range of azimuthal and field-aligned wave numbers such that the IIH resonance frequency is close to, but not exactly the same as, the crossover frequency. Our results suggest that, at L = 6.6, linearly polarized EMIC waves can be generated via mode conversion from compressional waves near the crossover frequency. Consequently, the heavy ion concentration ratio can be estimated from observations of externally generated EMIC waves that have linear polarization.
Bell, David M; Ward, Eric J; Oishi, A Christopher; Oren, Ram; Flikkema, Paul G; Clark, James S
2015-07-01
Uncertainties in ecophysiological responses to environment, such as the impact of atmospheric and soil moisture conditions on plant water regulation, limit our ability to estimate key inputs for ecosystem models. Advanced statistical frameworks provide coherent methodologies for relating observed data, such as stem sap flux density, to unobserved processes, such as canopy conductance and transpiration. To address this need, we developed a hierarchical Bayesian State-Space Canopy Conductance (StaCC) model linking canopy conductance and transpiration to tree sap flux density from a 4-year experiment in the North Carolina Piedmont, USA. Our model builds on existing ecophysiological knowledge, but explicitly incorporates uncertainty in canopy conductance, internal tree hydraulics and observation error to improve estimation of canopy conductance responses to atmospheric drought (i.e., vapor pressure deficit), soil drought (i.e., soil moisture) and above canopy light. Our statistical framework not only predicted sap flux observations well, but it also allowed us to simultaneously gap-fill missing data as we made inference on canopy processes, marking a substantial advance over traditional methods. The predicted and observed sap flux data were highly correlated (mean sensor-level Pearson correlation coefficient = 0.88). Variations in canopy conductance and transpiration associated with environmental variation across days to years were many times greater than the variation associated with model uncertainties. Because some variables, such as vapor pressure deficit and soil moisture, were correlated at the scale of days to weeks, canopy conductance responses to individual environmental variables were difficult to interpret in isolation. Still, our results highlight the importance of accounting for uncertainty in models of ecophysiological and ecosystem function where the process of interest, canopy conductance in this case, is not observed directly. 
The StaCC modeling framework provides a statistically coherent approach to estimating canopy conductance and transpiration and propagating estimation uncertainty into ecosystem models, paving the way for improved prediction of water and carbon uptake responses to environmental change. PMID:26063709
Medeiros, Stephen; Hagen, Scott; Weishampel, John; Angelo, James
2015-03-25
Digital elevation models (DEMs) derived from airborne lidar are traditionally unreliable in coastal salt marshes due to the inability of the laser to penetrate the dense grasses and reach the underlying soil. To that end, we present a novel processing methodology that uses ASTER Band 2 (visible red), an interferometric SAR (IfSAR) digital surface model, and lidar-derived canopy height to classify biomass density using both a three-class scheme (high, medium and low) and a two-class scheme (high and low). Elevation adjustments associated with these classes using both median and quartile approaches were applied to adjust lidar-derived elevation values closer to true bare earth elevation. The performance of the method was tested on 229 elevation points in the lower Apalachicola River Marsh. The two-class quartile-based adjusted DEM produced the best results, reducing the RMS error in elevation from 0.65 m to 0.40 m, a 38% improvement. The raw mean errors for the lidar DEM and the adjusted DEM were 0.61 ± 0.24 m and 0.32 ± 0.24 m, respectively, thereby reducing the high bias by approximately 49%.
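The class-based correction described above amounts to subtracting a per-biomass-class vertical offset from the lidar DEM and re-checking the error statistics; a minimal sketch, with hypothetical elevations, class labels, and offsets rather than the paper's data:

```python
import math

def rmse(estimates, truths):
    """Root-mean-square elevation error against surveyed ground truth."""
    return math.sqrt(sum((e - t) ** 2 for e, t in zip(estimates, truths)) / len(estimates))

def adjust_dem(elevations, classes, offsets):
    """Subtract a per-class vertical offset (e.g. a median or quartile lidar
    error for each biomass class) from every DEM elevation."""
    return [z - offsets[c] for z, c in zip(elevations, classes)]

# Hypothetical check points: lidar reads high over dense marsh grass.
truth = [0.50, 0.55, 0.60, 0.40, 0.45]
lidar = [1.20, 1.25, 1.10, 0.75, 0.80]
biomass = ["high", "high", "high", "low", "low"]
offsets = {"high": 0.62, "low": 0.33}  # illustrative class corrections

adjusted = adjust_dem(lidar, biomass, offsets)
```

With these made-up numbers the correction shrinks the RMS error, mirroring the 0.65 m to 0.40 m improvement reported for the real check points.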
Aziz, Ramy K.; Dwivedi, Bhakti; Akhter, Sajia; Breitbart, Mya; Edwards, Robert A.
2015-05-08
Phages are the most abundant biological entities on Earth and play major ecological roles, yet the current sequenced phage genomes do not adequately represent their diversity, and little is known about the abundance and distribution of these sequenced genomes in nature. Although the study of phage ecology has benefited tremendously from the emergence of metagenomic sequencing, a systematic survey of phage genes and genomes in various ecosystems is still lacking, and fundamental questions about phage biology, lifestyle, and ecology remain unanswered. To address these questions and improve comparative analysis of phages in different metagenomes, we screened a core set of publicly available metagenomic samples for sequences related to completely sequenced phages using the web tool, Phage Eco-Locator. We then adopted and deployed an array of mathematical and statistical metrics for a multidimensional estimation of the abundance and distribution of phage genes and genomes in various ecosystems. Experiments using those metrics individually showed their usefulness in emphasizing the pervasive, yet uneven, distribution of known phage sequences in environmental metagenomes. Using these metrics in combination allowed us to resolve phage genomes into clusters that correlated with their genotypes and taxonomic classes as well as their ecological properties. We propose adding this set of metrics to current metaviromic analysis pipelines, where they can provide insight regarding phage mosaicism, habitat specificity, and evolution. PMID:26005436
Carbon pool densities and a first estimate of the total carbon pool in the Mongolian forest-steppe.
Dulamsuren, Choimaa; Klinge, Michael; Degener, Jan; Khishigjargal, Mookhor; Chenlemuge, Tselmeg; Bat-Enerel, Banzragch; Yeruult, Yolk; Saindovdon, Davaadorj; Ganbaatar, Kherlenchimeg; Tsogtbaatar, Jamsran; Leuschner, Christoph; Hauck, Markus
2016-02-01
The boreal forest biome represents one of the most important terrestrial carbon stores, which has motivated intensive research on carbon stock densities. However, such an analysis does not yet exist for the southernmost Eurosiberian boreal forests in Inner Asia. Most of these forests are located in the Mongolian forest-steppe, which is largely dominated by Larix sibirica. We quantified the carbon stock density and total carbon pool of Mongolia's boreal forests and adjacent grasslands and drew conclusions on possible future change. Mean aboveground carbon stock density in the interior of L. sibirica forests was 66 Mg C ha⁻¹, which is in the upper range of values reported from boreal forests and probably due to the comparatively long growing season. The density of soil organic carbon (SOC, 108 Mg C ha⁻¹) and total belowground carbon density (149 Mg C ha⁻¹) are at the lower end of the range known from boreal forests, which might be the result of higher soil temperatures and a thinner permafrost layer than in the central and northern boreal forest belt. Land use effects are especially relevant at forest edges, where mean carbon stock density was 188 Mg C ha⁻¹, compared with 215 Mg C ha⁻¹ in the forest interior. Carbon stock density in grasslands was 144 Mg C ha⁻¹. Analysis of satellite imagery of the highly fragmented forest area in the forest-steppe zone showed that Mongolia's total boreal forest area is currently 73 818 km², and 22% of this area consists of forest edges (defined as the first 30 m from the edge). The total forest carbon pool of Mongolia was estimated at ~1.5-1.7 Pg C, a value which is likely to decrease in the future with increasing deforestation, fire frequency, and global warming. PMID:26463754
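The total-pool figure follows from scaling the per-hectare stock densities by the edge/interior area split; a back-of-envelope version of that arithmetic using the abstract's numbers (the paper's estimate also accounts for grasslands and spatial variation, so this is only the leading term):

```python
def total_forest_carbon_pg(area_km2, edge_fraction, interior_mg_ha, edge_mg_ha):
    """Landscape carbon pool (Pg C) from per-hectare stock densities,
    weighting forest-edge and interior densities by their area shares."""
    area_ha = area_km2 * 100.0  # 1 km^2 = 100 ha
    mean_density = (1.0 - edge_fraction) * interior_mg_ha + edge_fraction * edge_mg_ha
    return area_ha * mean_density / 1e9  # Mg C -> Pg C

# Values from the abstract: 73,818 km^2 of forest, 22% edge,
# 215 Mg C/ha (interior) vs 188 Mg C/ha (edge) total stock densities.
pool = total_forest_carbon_pg(73818, 0.22, 215, 188)  # ≈ 1.54 Pg C
```

The result lands inside the reported ~1.5-1.7 Pg C range.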
Mobile sailing robot for automatic estimation of fish density and monitoring water quality
2013-01-01
Introduction The paper presents the methodology and the algorithm developed to analyze sonar images focused on fish detection in small water bodies and to measure their parameters: volume, depth and GPS location. The final results are stored in a table and can be exported to any numerical environment for further analysis. Material and method The measurement method for estimating the number of fish using the automatic robot is based on a sequential count of fish occurrences along a set trajectory. The analysis of sonar data involved automatic recognition of fish using image analysis and processing methods. Results An image analysis algorithm and a mobile robot, with 2.4 GHz radio control and fully encrypted communication with the data archiving station, were developed as part of this study. For the three model fish ponds where fish catches were verified (548, 171 and 226 individuals), the measurement error of the described method did not exceed 8%. Summary The robot, together with the developed software, can operate remotely in a variety of harsh weather and environmental conditions, is fully automated and can be remotely controlled over the Internet. The system records the spatial location of fish (GPS coordinates and depth). The purpose of the robot is the non-invasive measurement of the number of fish in water reservoirs and of the quality of drinking water consumed by humans, especially where local sources of pollution could have a significant impact on the quality of water collected for treatment and where access is difficult. Used systematically and equipped with appropriate sensors, the robot can form part of an early warning system against pollution of water used by humans (drinking water, natural swimming areas) that could endanger their health. PMID:23815984
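The verification step reduces to a relative-error check of the sonar-based count against the verified catch in each pond; a sketch with hypothetical sonar counts, since the per-pond estimates themselves are not given in the abstract:

```python
def count_error_pct(estimated, verified):
    """Relative error (%) between the robot's sonar count and the verified catch."""
    return 100.0 * abs(estimated - verified) / verified

# Verified catches from the three model ponds; the sonar counts here are
# hypothetical but consistent with the reported <=8% error bound.
verified = [548, 171, 226]
sonar = [520, 180, 240]
errors = [count_error_pct(s, v) for s, v in zip(sonar, verified)]
```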
King, Tania L.; Thornton, Lukar E.; Bentley, Rebecca J.; Kavanagh, Anne M.
2015-01-01
Background Local destinations have previously been shown to be associated with higher levels of both physical activity and walking, but little is known about how the distribution of destinations is related to activity. Kernel density estimation is a spatial analysis technique that accounts for the location of features relative to each other. Using kernel density estimation, this study sought to investigate whether individuals who live near destinations (shops and service facilities) that are more intensely distributed rather than dispersed: 1) have higher odds of being sufficiently active; 2) engage in more frequent walking for transport and recreation. Methods The sample consisted of 2349 residents of 50 urban areas in metropolitan Melbourne, Australia. Destinations within these areas were geocoded and kernel density estimates of destination intensity were created using kernels of 400m (meters), 800m and 1200m. Using multilevel logistic regression, the association between destination intensity (classified in quintiles Q1 (least) to Q5 (most)) and likelihood of: 1) being sufficiently active (compared to insufficiently active); 2) walking ≥4/week (at least 4 times per week, compared to walking less), was estimated in models that were adjusted for potential confounders. Results For all kernel distances, there was a significantly greater likelihood of walking ≥4/week among respondents living in areas of greatest destination intensity compared to areas with least destination intensity: 400m (Q4 OR 1.41 95%CI 1.02–1.96; Q5 OR 1.49 95%CI 1.06–2.09), 800m (Q4 OR 1.55, 95%CI 1.09–2.21; Q5, OR 1.71, 95%CI 1.18–2.48) and 1200m (Q4, OR 1.7, 95%CI 1.18–2.45; Q5, OR 1.86 95%CI 1.28–2.71). There was also evidence of associations between destination intensity and sufficient physical activity; however, these associations were markedly attenuated when walking was included in the models.
Conclusions This study, conducted within urban Melbourne, found that those who lived in areas of greater destination intensity walked more frequently, and showed higher odds of being sufficiently physically active, an effect that was largely explained by levels of walking. The results suggest that increasing the intensity of destinations in areas where they are more dispersed, and/or planning neighborhoods with greater destination intensity, may increase residents' likelihood of being sufficiently active for health. PMID:26355848
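Kernel density estimation of destination intensity, as used above, weights each destination by its distance from the evaluation point; a minimal Gaussian-kernel sketch with hypothetical coordinates (the study's exact kernel form and GIS tooling are not specified here):

```python
import math

def kernel_density(point, destinations, bandwidth=400.0):
    """Gaussian kernel density estimate of destination intensity at `point`
    (coordinates in metres): nearby destinations contribute more than distant ones."""
    px, py = point
    total = sum(math.exp(-((px - dx) ** 2 + (py - dy) ** 2) / (2.0 * bandwidth ** 2))
                for dx, dy in destinations)
    return total / (2.0 * math.pi * bandwidth ** 2 * len(destinations))

# The same number of destinations, clustered vs dispersed: intensity at the
# centre of the cluster exceeds intensity amid the dispersed set.
clustered = [(0, 0), (50, 0), (0, 50), (50, 50)]
dispersed = [(0, 0), (900, 0), (0, 900), (900, 900)]
```

This captures why intensity, not mere destination count, separates the Q1 and Q5 neighbourhoods in the analysis.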
Martínez-Reina, Javier; Ojeda, Joaquín; Mayo, Juana
2016-01-01
Bone remodelling models are widely used in a phenomenological manner to estimate numerically the distribution of apparent density in bones from the loads they are daily subjected to. These simulations start from an arbitrary initial distribution, usually homogeneous, and the density changes locally until a bone remodelling equilibrium is achieved. The bone response to mechanical stimulus is traditionally formulated with a mathematical relation that considers the existence of a range of stimulus, called dead or lazy zone, for which no net bone mass change occurs. Implementing a relation like that leads to different solutions depending on the starting density. The non-uniqueness of the solution has been shown in this paper using two different bone remodelling models: one isotropic and another anisotropic. It has also been shown that the problem of non-uniqueness is only mitigated by removing the dead zone, but it is not completely solved unless the bone formation and bone resorption rates are limited to certain maximum values. PMID:26859888
Xu, Kuidong; Du, Yongfen; Lei, Yanli; Dai, Renhai
2010-11-01
Methodological impediments have long been the main problem in estimating the ecological role of marine benthic ciliates. Percoll density centrifugation is currently the most efficient technique for extracting ciliates from fine-grained sediments, while the high cost and low density of Percoll limit its wide application. We developed a protocol of density gradient centrifugation using the inexpensive colloidal silica sol Ludox HS 40 in combination with the quantitative protargol stain (QPS) to enumerate and identify marine benthic ciliates. The combined Ludox-QPS method involves sample collection and salt reduction, extraction with Ludox centrifugation, and preparation with the QPS technique. The recovery efficiency of Ludox was first tested with azoic sandy and muddy sediments. A 94-100% recovery rate of ciliates was reached. The method was further tested with natural sandy, muddy-sand and muddy sediments. Excellent extraction efficiencies were consistently obtained: an average of 97.6% for ciliates in sand, and 96.9-97.8% for nematodes in the three types of sediments. The high efficiencies indicate that the method allows for simultaneous enumeration of micro- and meiobenthos. Advantages of the new method include: (i) reliable and cost-efficient operation; (ii) appropriate centrifugation for both micro- and meiobenthos; and (iii) applicability to large samples and routine ecological surveys. PMID:20843673
NASA Astrophysics Data System (ADS)
Bakic, Predrag R.; Li, Cuiping; West, Erik; Sak, Mark; Gavenonis, Sara C.; Duric, Nebojsa; Maidment, Andrew D. A.
2011-03-01
Breast density descriptors were estimated from ultrasound tomography (UST) and digital mammogram (DM) images of 46 anthropomorphic software breast phantoms. Each phantom simulated a 450 ml or 700 ml breast with volumetric percent density (PD) values between 10% and 50%. The UST based volumetric breast density (VBD) estimates were calculated by thresholding the reconstructed UST images. Percent density (PD) values from DM images were estimated interactively by a clinical breast radiologist using Cumulus software. Such obtained UST VBD and Cumulus PD estimates were compared with the ground truth VBD values available from phantoms. The UST VBD values showed a high correlation with the ground truth, as evidenced by the Pearson correlation coefficient of r=0.93. The Cumulus PD values also showed a high correlation with the ground truth (r=0.84), as well as with the UST VBD values (r=0.78). The consistency in measuring the UST VBD and Cumulus PD values was analyzed using the standard error of the estimation by linear regression (σE). The σE value for Cumulus PD was 1.5 times higher compared to the UST VBD (6.54 vs. 4.21). The σE calculated from two repeated Cumulus estimation sessions (σE = 4.66) was comparable with the UST. Potential sources of the observed errors in density measurement are the use of global thresholding and (for Cumulus) the human observer variability. This preliminary study of simulated phantom UST images showed promise for non-invasive estimation of breast density.
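The UST VBD computation described above, thresholding a reconstructed image and taking the dense-voxel fraction, can be sketched as follows; the voxel values and the 1.50 km/s cutoff are hypothetical, not the study's calibrated threshold:

```python
def volumetric_breast_density(voxels, threshold):
    """Percent of voxels above a sound-speed threshold, taken here as the
    volumetric breast density (VBD) of the reconstructed UST image."""
    dense = sum(1 for v in voxels if v > threshold)
    return 100.0 * dense / len(voxels)

# Hypothetical reconstructed sound speeds (km/s); 3 of 10 voxels exceed
# the assumed fibroglandular cutoff, giving 30% density.
voxels = [1.44, 1.46, 1.48, 1.51, 1.52, 1.45, 1.47, 1.53, 1.46, 1.44]
vbd = volumetric_breast_density(voxels, threshold=1.50)
```

The global-threshold choice is exactly the error source the abstract flags: a single cutoff misclassifies voxels near the boundary between tissue types.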
NASA Astrophysics Data System (ADS)
Ebrahimi, A.; Habibi Khorassani, S. M.; Delarami, H.
2009-11-01
Individual hydrogen bond (HB) energies have been estimated in several systems involving multiple HBs such as adenine-thymine and guanine-cytosine using electron charge densities calculated at X⋯H hydrogen bond critical points (HBCPs) by the atoms in molecules (AIM) method at the B3LYP/6-311++G** and MP2/6-311++G** levels. A symmetrical system with two identical H bonds has been selected to search for simple relations between ρHBCP and individual EHB. Correlation coefficients between EHB and ρHBCP for linear, quadratic, and exponential equations are acceptable and equal to 0.95. The estimated individual binding energies EHB are in good agreement with the results of the atom-replacement approach and natural bond orbital (NBO) analysis. The EHB values estimated from ρ values at the H⋯X BCP are in satisfactory agreement with the main geometrical parameter, the H⋯X distance. With respect to the obtained individual binding energies, the strength of a HB depends on the substituent and the cooperative effects of other HBs.
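Fitting EHB against ρ at the HBCP is an ordinary regression problem; a least-squares sketch of the linear variant, using illustrative (ρ, EHB) pairs rather than the paper's computed values:

```python
def fit_linear(xs, ys):
    """Ordinary least-squares fit y = m*x + c."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    m = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return m, my - m * mx

# Illustrative (rho at HBCP in a.u., E_HB in kcal/mol) pairs: stronger HBs
# (more negative energies) go with higher electron density at the BCP.
rho = [0.010, 0.020, 0.030, 0.040]
e_hb = [-3.0, -6.2, -9.1, -12.3]
slope, intercept = fit_linear(rho, e_hb)
```

The quadratic and exponential relations the abstract mentions are fit the same way after transforming the model.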
NASA Astrophysics Data System (ADS)
Hloupis, G.; Vallianatos, F.
2015-09-01
The purpose of this study is to demonstrate the use of the wavelet transform (WT) as a common processing tool for rapid earthquake magnitude determination and epicentral estimation. The goal is to use the same set of wavelet coefficients that characterize the seismogram (and especially its P-wave portion) for both magnitude and location estimation. Wavelet magnitude estimation (WME) is used to derive a scaling relation between earthquake magnitude and wavelet coefficients for the South Aegean using data from 469 events with magnitudes from 3.8 to 6.9. The performance of the proposed relation was evaluated using data from 40 additional events with magnitudes from 3.8 to 6.2. In addition, epicentral estimation is achieved by a newly proposed method (wavelet epicentral estimation, WEpE) which is based on the combination of wavelet azimuth estimation and a two-station sub-array method. Following the performance investigation of the WEpE method, we present results and simulations with real data from characteristic events that occurred in the South Aegean. Both methods can run in parallel, providing a suitable core for a regional earthquake early warning system in the South Aegean.
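A WME-style estimator regresses magnitude on wavelet coefficients of the P-wave window; a heavily simplified sketch using a single Haar decomposition level, with illustrative scaling constants (the study fits its constants to South Aegean catalogue data, and a real implementation would use a deeper multi-scale decomposition):

```python
import math

def haar_details(signal):
    """One level of the discrete Haar wavelet transform (detail coefficients)."""
    return [(signal[i] - signal[i + 1]) / math.sqrt(2.0)
            for i in range(0, len(signal) - 1, 2)]

def wavelet_magnitude(p_wave, a=1.0, b=5.0):
    """Hypothetical WME-style estimate: magnitude scales with the log of the
    largest absolute detail coefficient of the P-wave window. The constants
    a and b are illustrative; in practice they come from regression against
    catalogue magnitudes."""
    peak = max(abs(c) for c in haar_details(p_wave))
    return a * math.log10(peak) + b

# A stronger P-wave onset yields a larger coefficient, hence a larger magnitude.
m_big = wavelet_magnitude([0.0, 4.0, 0.0, 0.0])
m_small = wavelet_magnitude([0.0, 2.0, 0.0, 0.0])
```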
NASA Technical Reports Server (NTRS)
Weaver, W. L.; Green, R. N.
1980-01-01
A study was performed on the use of geometric shape factors to estimate earth-emitted flux densities from radiation measurements with wide field-of-view flat-plate radiometers on satellites. Sets of simulated irradiance measurements were computed for unrestricted and restricted field-of-view detectors. In these simulations, the earth radiation field was modeled using data from Nimbus 2 and 3. Geometric shape factors were derived and applied to these data to estimate flux densities on global and zonal scales. For measurements at a satellite altitude of 600 km, estimates of zonal flux density were in error by 1.0 to 1.2%, and global flux density errors were less than 0.2%. Estimates with unrestricted field-of-view detectors were about the same for Lambertian and non-Lambertian radiation models, but were affected by satellite altitude. The opposite was found for the restricted field-of-view detectors.
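A geometric shape factor relates the irradiance measured at the satellite to the flux density leaving the Earth; a sketch assuming a Lambertian, spherically symmetric Earth and the standard plate-to-sphere view factor (Re/(Re+h))², not the paper's exact derived factors:

```python
def emitted_flux_density(irradiance, altitude_km, earth_radius_km=6371.0):
    """Invert a wide field-of-view flat-plate irradiance measurement (W/m^2)
    into an earth-emitted flux density estimate by dividing out the
    plate-to-sphere geometric shape (view) factor (Re/(Re+h))**2."""
    shape_factor = (earth_radius_km / (earth_radius_km + altitude_km)) ** 2
    return irradiance / shape_factor
```

At the 600 km altitude used in the study the factor is about 0.84, so a measured irradiance of 200 W/m² maps to an emitted flux of roughly 239 W/m²; at zero altitude the factor is 1 and the measurement equals the flux.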
Kim, Mijin; Hyun, Seunghun; Kwon, Jung-Hwan
2015-10-01
The accumulation of marine plastic debris is one of the main emerging environmental issues of the twenty first century. Numerous studies in recent decades have reported the level of plastic particles on the beaches and in oceans worldwide. However, it is still unclear how much plastic debris remains in the marine environment because the sampling methods for identifying and quantifying plastics from the environment have not been standardized; moreover, the methods are not guaranteed to find all of the plastics that do remain. The level of identified marine plastic debris may explain only the small portion of remaining plastics. To perform a quantitative estimation of remaining plastics, a mass balance analysis was performed for high- and low-