Science.gov

Sample records for wavelet-based density estimation

  1. Value-at-risk estimation with wavelet-based extreme value theory: Evidence from emerging markets

    NASA Astrophysics Data System (ADS)

    Cifter, Atilla

    2011-06-01

    This paper introduces wavelet-based extreme value theory (EVT) for univariate value-at-risk estimation. Wavelets and EVT are combined for volatility forecasting to estimate a hybrid model. In the first stage, wavelets are used as a threshold in the generalized Pareto distribution, and in the second stage, EVT is applied with the wavelet-based threshold. This new model is applied to two major emerging stock markets: the Istanbul Stock Exchange (ISE) and the Budapest Stock Exchange (BUX). The relative performance of wavelet-based EVT is benchmarked against the Riskmetrics-EWMA, ARMA-GARCH, generalized Pareto distribution, and conditional generalized Pareto distribution models. The empirical results show that wavelet-based extreme value theory increases the predictive performance of financial forecasting according to the number of violations and tail-loss tests. The superior forecasting performance of the wavelet-based EVT model is also consistent with Basel II requirements, so the new model can also be used by financial institutions.
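
    A minimal sketch of the two-stage idea above, assuming PyWavelets and SciPy: a coarse wavelet approximation of the losses suggests a tail threshold, and a generalized Pareto distribution is then fitted to the exceedances to obtain VaR. The synthetic data, threshold rule, and confidence level are illustrative placeholders, not the paper's specification.

    ```python
    # Illustrative wavelet-guided peaks-over-threshold VaR (not the paper's exact model).
    import numpy as np
    import pywt
    from scipy.stats import genpareto

    rng = np.random.default_rng(0)
    returns = rng.standard_t(df=4, size=2048) * 0.01            # synthetic daily returns
    losses = -returns                                            # work with losses

    # Stage 1 (hypothetical rule): smooth the sorted losses with a coarse wavelet
    # approximation and take a high quantile of it as the GPD threshold.
    coeffs = pywt.wavedec(np.sort(losses), "db4", level=4)
    smooth = pywt.waverec([coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]], "db4")
    threshold = np.quantile(smooth, 0.95)

    # Stage 2: fit a generalized Pareto distribution to the exceedances and compute VaR.
    exceedances = losses[losses > threshold] - threshold
    xi, _, beta = genpareto.fit(exceedances, floc=0.0)
    p_exceed = exceedances.size / losses.size
    alpha = 0.99
    var_99 = threshold + genpareto.ppf(1 - (1 - alpha) / p_exceed, xi, loc=0.0, scale=beta)
    print(f"99% VaR (loss): {var_99:.4f}")
    ```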

  2. Wavelet-Based Speech Enhancement Using Time-Adapted Noise Estimation

    NASA Astrophysics Data System (ADS)

    Lei, Sheau-Fang; Tung, Ying-Kai

    Spectral subtraction is commonly used for speech enhancement in a single channel system because of the simplicity of its implementation. However, this algorithm introduces perceptually musical noise while suppressing the background noise. We propose a wavelet-based approach in this paper for suppressing the background noise for speech enhancement in a single channel system. The wavelet packet transform, which emulates the human auditory system, is used to decompose the noisy signal into critical bands. Wavelet thresholding is then temporally adjusted with the noise power by time-adapted noise estimation. The proposed algorithm can efficiently suppress the noise while reducing speech distortion. Experimental results, including several objective measurements, show that the proposed wavelet-based algorithm outperforms spectral subtraction and other wavelet-based denoising approaches for speech enhancement for nonstationary noise environments.
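
    A minimal sketch of wavelet-packet thresholding on a noisy frame, assuming PyWavelets; the per-band noise estimate below is computed from the frame itself rather than adapted over time, and the wavelet, depth, and threshold rule are illustrative, not the authors' perceptually motivated settings.

    ```python
    # Illustrative wavelet-packet soft thresholding with a per-band noise estimate.
    import numpy as np
    import pywt

    fs = 8000
    t = np.arange(fs) / fs
    clean = np.sin(2 * np.pi * 440 * t)                          # stand-in for a speech frame
    noisy = clean + 0.3 * np.random.default_rng(1).standard_normal(t.size)

    wp = pywt.WaveletPacket(data=noisy, wavelet="db8", mode="symmetric", maxlevel=5)
    for node in wp.get_level(5, order="freq"):                   # one subband per "critical band"
        sigma = np.median(np.abs(node.data)) / 0.6745            # robust noise estimate per band
        thr = sigma * np.sqrt(2 * np.log(node.data.size))        # universal threshold
        node.data = pywt.threshold(node.data, thr, mode="soft")

    enhanced = wp.reconstruct(update=True)[: noisy.size]
    print("residual error power:", np.mean((enhanced - clean) ** 2))
    ```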

  3. Estimation of Modal Parameters Using a Wavelet-Based Approach

    NASA Technical Reports Server (NTRS)

    Lind, Rick; Brenner, Marty; Haley, Sidney M.

    1997-01-01

    Modal stability parameters are extracted directly from aeroservoelastic flight test data by decomposition of accelerometer response signals into time-frequency atoms. Logarithmic sweeps and sinusoidal pulses are used to generate DAST closed loop excitation data. Novel wavelets constructed to extract modal damping and frequency explicitly from the data are introduced. The so-called Haley and Laplace wavelets are used to track time-varying modal damping and frequency in a matching pursuit algorithm. Estimation of the trend to aeroservoelastic instability is demonstrated successfully from analysis of the DAST data.

  4. Wavelet based approach for posture transition estimation using a waist worn accelerometer.

    PubMed

    Bidargaddi, Niranjan; Klingbeil, Lasse; Sarela, Antti; Boyle, Justin; Cheung, Vivian; Yelland, Catherine; Karunanithi, Mohanraj; Gray, Len

    2007-01-01

    The ability to rise from a chair is considered important for achieving functional independence and quality of life. The sit-to-stand task is also a good indicator for assessing the condition of patients with chronic diseases. We developed a wavelet-based algorithm for detecting and calculating the durations of sit-to-stand and stand-to-sit transitions from the signal magnitude vector of the measured acceleration signal. The algorithm was tested on waist-worn accelerometer data collected from young subjects as well as geriatric patients. The tests demonstrate that both transitions can be detected by applying the wavelet transform to the signal magnitude vector. Wavelet analysis produces an estimate of the transition pattern that can be used to calculate the transition duration, which in turn gives clinically significant information on the patient's condition. The method can be applied in a real-life ambulatory monitoring system for assessing the condition of a patient living at home. PMID:18002349
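
    A rough sketch of the signal-magnitude-vector plus wavelet idea, assuming PyWavelets; the sampling rate, wavelet, level, threshold, and the simple edge-pairing rule (the record is assumed to start and end outside a transition) are illustrative assumptions, not the published algorithm.

    ```python
    # Illustrative detection of posture-transition intervals from tri-axial acceleration.
    import numpy as np
    import pywt

    FS = 50  # assumed sampling rate in Hz

    def transition_intervals(ax, ay, az, level=4, thresh=0.15):
        svm = np.sqrt(ax**2 + ay**2 + az**2)                     # signal magnitude vector
        coeffs = pywt.wavedec(svm - svm.mean(), "db4", level=level)
        # Keep only the coarse approximation: the slow, posture-related component.
        approx = pywt.waverec([coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]], "db4")
        approx = approx[: svm.size]
        active = np.abs(approx) > thresh                         # candidate transition samples
        edges = np.flatnonzero(np.diff(active.astype(int)))      # rising/falling edges
        starts, ends = edges[::2], edges[1::2]
        # Each tuple: (start time, end time, duration) in seconds.
        return [(s / FS, e / FS, (e - s) / FS) for s, e in zip(starts, ends)]
    ```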

  5. Direct Density Derivative Estimation.

    PubMed

    Sasaki, Hiroaki; Noh, Yung-Kyun; Niu, Gang; Sugiyama, Masashi

    2016-06-01

    Estimating the derivatives of probability density functions is an essential step in statistical data analysis. A naive approach to estimate the derivatives is to first perform density estimation and then compute its derivatives. However, this approach can be unreliable because a good density estimator does not necessarily mean a good density derivative estimator. To cope with this problem, in this letter, we propose a novel method that directly estimates density derivatives without going through density estimation. The proposed method provides computationally efficient estimation for the derivatives of any order on multidimensional data with a hyperparameter tuning method and achieves the optimal parametric convergence rate. We further discuss an extension of the proposed method by applying regularized multitask learning and a general framework for density derivative estimation based on Bregman divergences. Applications of the proposed method to nonparametric Kullback-Leibler divergence approximation and bandwidth matrix selection in kernel density estimation are also explored. PMID:27140943
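
    For contrast, the naive two-step route that the letter argues against can be written down directly: fit a Gaussian kernel density estimate, then differentiate it analytically (1-D case; the bandwidth and data are illustrative). The proposed direct estimator itself is not reproduced here.

    ```python
    # Naive plug-in density-derivative estimate: differentiate a Gaussian KDE.
    import numpy as np

    def kde_and_derivative(x_eval, samples, h):
        u = (x_eval[:, None] - samples[None, :]) / h
        phi = np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)
        f = phi.mean(axis=1) / h                                 # density estimate
        df = (-u * phi).mean(axis=1) / h**2                      # its first derivative
        return f, df

    rng = np.random.default_rng(0)
    samples = rng.normal(size=500)
    xs = np.linspace(-3, 3, 201)
    f_hat, df_hat = kde_and_derivative(xs, samples, h=0.3)
    # A bandwidth tuned for f_hat is generally not the right bandwidth for df_hat,
    # which is the unreliability that direct derivative estimation is designed to avoid.
    ```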

  6. Wavelet-based Evapotranspiration Forecasts

    NASA Astrophysics Data System (ADS)

    Bachour, R.; Maslova, I.; Ticlavilca, A. M.; McKee, M.; Walker, W.

    2012-12-01

    Providing a reliable short-term forecast of evapotranspiration (ET) could be a valuable element for improving the efficiency of irrigation water delivery systems. In the last decade, wavelet transform has become a useful technique for analyzing the frequency domain of hydrological time series. This study shows how wavelet transform can be used to assess statistical properties of evapotranspiration. The objective of the research reported here is to use wavelet-based techniques to forecast ET up to 16 days ahead, which corresponds to the LANDSAT 7 overpass cycle. The properties of the ET time series, both physical and statistical, are examined in the time and frequency domains. We use the information about the energy decomposition in the wavelet domain to extract meaningful components that are used as inputs for ET forecasting models. Seasonal autoregressive integrated moving average (SARIMA) and multivariate relevance vector machine (MVRVM) models are coupled with the wavelet-based multiresolution analysis (MRA) results and used to generate short-term ET forecasts. Accuracy of the models is estimated and model robustness is evaluated using the bootstrap approach.
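
    A minimal sketch of coupling a wavelet multiresolution analysis with a seasonal ARIMA forecast, assuming PyWavelets and statsmodels; the input file name, wavelet, decomposition level, and model orders are placeholders rather than the study's configuration (the MVRVM model used in the paper is not shown).

    ```python
    # Illustrative MRA-plus-SARIMA forecast of a daily ET series.
    import numpy as np
    import pywt
    from statsmodels.tsa.statespace.sarimax import SARIMAX

    et = np.loadtxt("daily_et.txt")                              # hypothetical daily ET series (mm/day)

    # Multiresolution analysis: split the series into one approximation and several details.
    coeffs = pywt.wavedec(et, "db4", level=3)
    components = []
    for i in range(len(coeffs)):
        kept = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
        components.append(pywt.waverec(kept, "db4")[: et.size])

    # Forecast each component 16 days ahead (the LANDSAT 7 cycle) and sum the forecasts.
    horizon = 16
    forecast = np.zeros(horizon)
    for comp in components:
        model = SARIMAX(comp, order=(1, 0, 1), seasonal_order=(0, 1, 1, 7))
        forecast += model.fit(disp=False).forecast(steps=horizon)
    print("16-day-ahead ET forecast:", forecast)
    ```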

  7. Improved total variation algorithms for wavelet-based denoising

    NASA Astrophysics Data System (ADS)

    Easley, Glenn R.; Colonna, Flavia

    2007-04-01

    Many improvements of wavelet-based restoration techniques suggest the use of the total variation (TV) algorithm. The concept of combining wavelet and total variation methods seems effective, but the reasons for the success of this combination have so far been poorly understood. We propose a variation of the total variation method that is designed to avoid artifacts such as oil-painting effects and is better suited than standard TV techniques for use with wavelet-based estimates. We then illustrate the effectiveness of this new TV-based method using some of the latest wavelet transforms, such as contourlets and shearlets.

  8. Wavelet-based polarimetry analysis

    NASA Astrophysics Data System (ADS)

    Ezekiel, Soundararajan; Harrity, Kyle; Farag, Waleed; Alford, Mark; Ferris, David; Blasch, Erik

    2014-06-01

    Wavelet transformation has become a cutting-edge and promising approach in the field of image and signal processing. A wavelet is a waveform of effectively limited duration that has an average value of zero. Wavelet analysis is done by breaking up the signal into shifted and scaled versions of the original signal. The key advantage of a wavelet is that it is capable of revealing smaller changes, trends, and breakdown points that are not revealed by other techniques such as Fourier analysis. The phenomenon of polarization has been studied for quite some time and is a very useful tool for target detection and tracking. Long Wave Infrared (LWIR) polarization is beneficial for detecting camouflaged objects and is a useful approach when identifying and distinguishing manmade objects from natural clutter. In addition, the Stokes Polarization Parameters, which are calculated from 0°, 45°, 90°, 135°, right-circular, and left-circular intensity measurements, provide spatial orientations of target features and suppress natural features. In this paper, we propose a wavelet-based polarimetry analysis (WPA) method to analyze Long Wave Infrared Polarimetry Imagery to discriminate targets such as dismounts and vehicles from background clutter. These parameters can be used for image thresholding and segmentation. Experimental results show the wavelet-based polarimetry analysis is efficient and can be used in a wide range of applications such as change detection, shape extraction, target recognition, and feature-aided tracking.
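
    The Stokes parameters named in the abstract follow directly from the six intensity measurements; a small NumPy helper (with the degree of linear polarization added, since that is the quantity typically thresholded or segmented):

    ```python
    # Stokes parameters from 0/45/90/135-degree and right-/left-circular intensity images.
    import numpy as np

    def stokes(i0, i45, i90, i135, i_rc, i_lc):
        s0 = i0 + i90                                            # total intensity
        s1 = i0 - i90                                            # horizontal vs vertical
        s2 = i45 - i135                                          # +45 vs -45 degrees
        s3 = i_rc - i_lc                                         # right vs left circular
        dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-12)    # degree of linear polarization
        return s0, s1, s2, s3, dolp
    ```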

  9. Wavelet-Based Grid Generation

    NASA Technical Reports Server (NTRS)

    Jameson, Leland

    1996-01-01

    Wavelets can provide a basis set in which the basis functions are constructed by dilating and translating a fixed function known as the mother wavelet. The mother wavelet can be seen as a high pass filter in the frequency domain. The process of dilating and expanding this high-pass filter can be seen as altering the frequency range that is 'passed' or detected. The process of translation moves this high-pass filter throughout the domain, thereby providing a mechanism to detect the frequencies or scales of information at every location. This is exactly the type of information that is needed for effective grid generation. This paper provides motivation to use wavelets for grid generation in addition to providing the final product: source code for wavelet-based grid generation.

  10. Density Estimation with Mercer Kernels

    NASA Technical Reports Server (NTRS)

    Macready, William G.

    2003-01-01

    We present a new method for density estimation based on Mercer kernels. The density estimate can be understood as the density induced on a data manifold by a mixture of Gaussians fit in a feature space. As is usual, the feature space and data manifold are defined with any suitable positive-definite kernel function. We modify the standard EM algorithm for mixtures of Gaussians to infer the parameters of the density. One benefit of the approach is its conceptual simplicity and uniform applicability over many different types of data. Preliminary results are presented for a number of simple problems.

  11. Density estimation in wildlife surveys

    USGS Publications Warehouse

    Bart, J.; Droege, S.; Geissler, P.; Peterjohn, B.; Ralph, C.J.

    2004-01-01

    Several authors have recently discussed the problems with using index methods to estimate trends in population size. Some have expressed the view that index methods should virtually never be used. Others have responded by defending index methods and questioning whether better alternatives exist. We suggest that index methods are often a cost-effective component of valid wildlife monitoring but that double-sampling or another procedure that corrects for bias or establishes bounds on bias is essential. The common assertion that index methods require constant detection rates for trend estimation is mathematically incorrect; the requirement is no long-term trend in detection "ratios" (index result/parameter of interest), a requirement that is probably approximately met by many well-designed index surveys. We urge that more attention be given to defining bird density rigorously and in ways useful to managers. Once this is done, 4 sources of bias in density estimates may be distinguished: coverage, closure, surplus birds, and detection rates. Distance, double-observer, and removal methods do not reduce bias due to coverage, closure, or surplus birds. These methods may yield unbiased estimates of the number of birds present at the time of the survey, but only if their required assumptions are met, which we doubt occurs very often in practice. Double-sampling, in contrast, produces unbiased density estimates if the plots are randomly selected and estimates on the intensive surveys are unbiased. More work is needed, however, to determine the feasibility of double-sampling in different populations and habitats. We believe the tension that has developed over appropriate survey methods can best be resolved through increased appreciation of the mathematical aspects of indices, especially the effects of bias, and through studies in which candidate methods are evaluated against known numbers determined through intensive surveys.
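
    The double-sampling correction described above can be made concrete with a small worked example (all numbers hypothetical): the ratio of index counts to intensive counts on a subsample of plots calibrates the index counts on the full sample.

    ```python
    # Illustrative double-sampling correction of an index survey.
    index_counts_all = [12, 9, 15, 7, 11, 14, 10, 8]             # index counts on all plots
    index_sub = [12, 15, 10]                                      # index counts on the intensive subsample
    intensive_sub = [20, 26, 17]                                  # intensive (assumed unbiased) counts, same plots

    detection_ratio = sum(index_sub) / sum(intensive_sub)         # index result / parameter of interest
    est_total = sum(index_counts_all) / detection_ratio           # corrected abundance over all plots
    plot_area_ha = 10.0                                           # hypothetical plot size
    density = est_total / (len(index_counts_all) * plot_area_ha)
    print(f"detection ratio {detection_ratio:.2f}, density {density:.2f} birds/ha")
    ```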

  12. Wavelet-based multispectral face recognition

    NASA Astrophysics Data System (ADS)

    Liu, Dian-Ting; Zhou, Xiao-Dan; Wang, Cheng-Wen

    2008-09-01

    This paper proposes a novel wavelet-based face recognition method using thermal infrared (IR) and visible-light face images. The method applies the combination of Gabor and the Fisherfaces method to the reconstructed IR and visible images derived from wavelet frequency subbands. Our objective is to search for the subbands that are insensitive to the variation in expression and in illumination. The classification performance is improved by combining the multispectral information coming from the subbands that individually attain a low equal error rate. Experimental results on the Notre Dame face database show that the proposed wavelet-based algorithm outperforms previous multispectral image fusion methods as well as monospectral methods.

  13. Dependence and risk assessment for oil prices and exchange rate portfolios: A wavelet based approach

    NASA Astrophysics Data System (ADS)

    Aloui, Chaker; Jammazi, Rania

    2015-10-01

    In this article, we propose a wavelet-based approach to accommodate the stylized facts and complex structure of financial data, caused by frequent and abrupt changes of markets and noises. Specifically, we show how the combination of both continuous and discrete wavelet transforms with traditional financial models helps improve the market risk assessment of portfolios. In the empirical stage, three wavelet-based models (wavelet-EGARCH with dynamic conditional correlations, wavelet-copula, and wavelet-extreme value) are considered and applied to crude oil price and US dollar exchange rate data. Our findings show that the wavelet-based approach provides an effective and powerful tool for detecting extreme moments and improving the accuracy of VaR and Expected Shortfall estimates of oil-exchange rate portfolios after noise is removed from the original data.

  14. Adaptive density estimator for galaxy surveys

    NASA Astrophysics Data System (ADS)

    Saar, Enn

    2016-10-01

    Galaxy number or luminosity density serves as a basis for many structure classification algorithms. Several methods are used to estimate this density. Among them, kernel methods probably have the best statistical properties and also allow estimation of the local sample errors of the estimate. We introduce a kernel density estimator with an adaptive, data-driven anisotropic kernel, describe its properties, and demonstrate the wealth of additional information it gives us about the local properties of the galaxy distribution.
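
    A minimal 1-D illustration of an adaptive-bandwidth kernel density estimate (Abramson-style local bandwidths): the kernel widens where the pilot density is low. The estimator in this record is anisotropic and data-driven for galaxy surveys, so this sketch only conveys the basic idea.

    ```python
    # Adaptive (variable-bandwidth) Gaussian KDE in one dimension.
    import numpy as np

    def adaptive_kde(x_eval, samples, h0=0.3, alpha=0.5):
        u = (samples[:, None] - samples[None, :]) / h0
        pilot = np.exp(-0.5 * u**2).mean(axis=1) / (h0 * np.sqrt(2 * np.pi))
        g = np.exp(np.mean(np.log(pilot)))                        # geometric mean of pilot density
        lam = (pilot / g) ** (-alpha)                             # local bandwidth factors
        w = (x_eval[:, None] - samples[None, :]) / (h0 * lam[None, :])
        k = np.exp(-0.5 * w**2) / (np.sqrt(2 * np.pi) * h0 * lam[None, :])
        return k.mean(axis=1)
    ```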

  15. Hydrologic regionalization using wavelet-based multiscale entropy method

    NASA Astrophysics Data System (ADS)

    Agarwal, A.; Maheswaran, R.; Sehgal, V.; Khosa, R.; Sivakumar, B.; Bernhofer, C.

    2016-07-01

    Catchment regionalization is an important step in estimating hydrologic parameters of ungaged basins. This paper proposes a multiscale entropy method using a wavelet transform and k-means-based hybrid approach for clustering hydrologic catchments. Multi-resolution wavelet transform of a time series reveals structure, which is often obscured in streamflow records, by permitting gross and fine features of a signal to be separated. Wavelet-based Multiscale Entropy (WME) is a measure of the randomness of a given time series at different timescales. In this study, streamflow records observed during 1951-2002 at 530 selected catchments throughout the United States are used to test the proposed regionalization framework. Further, based on the pattern of entropy across multiple scales, each cluster is given an entropy signature that provides an approximation of the entropy pattern of the streamflow data in each cluster. The tests for homogeneity reveal that the proposed approach works very well for regionalization.
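
    One plausible reading of the WME-plus-clustering pipeline, assuming PyWavelets and scikit-learn; the entropy definition used here (Shannon entropy of normalized squared detail coefficients at each scale), the wavelet, the level, and the cluster count are assumptions that may differ from the paper's formulation.

    ```python
    # Illustrative wavelet multiscale entropy signatures clustered with k-means.
    import numpy as np
    import pywt
    from sklearn.cluster import KMeans

    def multiscale_entropy_signature(flow, wavelet="db4", level=6):
        coeffs = pywt.wavedec(flow, wavelet, level=level)
        signature = []
        for d in coeffs[1:]:                                      # detail coefficients, coarsest to finest
            p = d**2 / np.sum(d**2)
            signature.append(-np.sum(p * np.log(p + 1e-12)))      # Shannon entropy at this scale
        return np.array(signature)

    # flows: hypothetical dict mapping catchment id -> daily streamflow array
    # signatures = np.vstack([multiscale_entropy_signature(q) for q in flows.values()])
    # labels = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(signatures)
    ```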

  16. Image Denoising via Bayesian Estimation of Statistical Parameter Using Generalized Gamma Density Prior in Gaussian Noise Model

    NASA Astrophysics Data System (ADS)

    Kittisuwan, Pichid

    2015-03-01

    The application of image processing in industry has shown remarkable success over the last decade, for example, in security and telecommunication systems. The denoising of natural image corrupted by Gaussian noise is a classical problem in image processing. So, image denoising is an indispensable step during image processing. This paper is concerned with dual-tree complex wavelet-based image denoising using Bayesian techniques. One of the cruxes of the Bayesian image denoising algorithms is to estimate the statistical parameter of the image. Here, we employ maximum a posteriori (MAP) estimation to calculate local observed variance with generalized Gamma density prior for local observed variance and Laplacian or Gaussian distribution for noisy wavelet coefficients. Evidently, our selection of prior distribution is motivated by efficient and flexible properties of generalized Gamma density. The experimental results show that the proposed method yields good denoising results.

  17. Wavelet-based acoustic recognition of aircraft

    SciTech Connect

    Dress, W.B.; Kercel, S.W.

    1994-09-01

    We describe a wavelet-based technique for identifying aircraft from acoustic emissions during take-off and landing. Tests show that the sensor can be a single, inexpensive hearing-aid microphone placed close to the ground. The paper describes data collection, analysis by various techniques, methods of event classification, and extraction of certain physical parameters from wavelet subspace projections. The primary goal of this paper is to show that wavelet analysis can be used as a divide-and-conquer first step in signal processing, providing both simplification and noise filtering. The idea is to project the original signal onto the orthogonal wavelet subspaces, both details and approximations. Subsequent analysis, such as system identification, nonlinear systems analysis, and feature extraction, is then carried out on the various signal subspaces.

  18. Sparse Density Estimation on the Multinomial Manifold.

    PubMed

    Hong, Xia; Gao, Junbin; Chen, Sheng; Zia, Tanveer

    2015-11-01

    A new sparse kernel density estimator is introduced based on the minimum integrated square error criterion for the finite mixture model. Since the constraint on the mixing coefficients of the finite mixture model is on the multinomial manifold, we use the well-known Riemannian trust-region (RTR) algorithm for solving this problem. The first- and second-order Riemannian geometry of the multinomial manifold are derived and utilized in the RTR algorithm. Numerical examples are employed to demonstrate that the proposed approach is effective in constructing sparse kernel density estimators with an accuracy competitive with those of existing kernel density estimators. PMID:25647665

  19. Topics in global convergence of density estimates

    NASA Technical Reports Server (NTRS)

    Devroye, L.

    1982-01-01

    The problem of estimating a density f on R^d from a sample X(1), ..., X(n) of independent identically distributed random vectors is critically examined, and some recent results in the field are reviewed. The following statements are qualified: (1) For any sequence of density estimates f(n), any arbitrarily slow rate of convergence to 0 is possible for E(∫|f(n) - f|); (2) in theoretical comparisons of density estimates, ∫|f(n) - f| should be used and not ∫|f(n) - f|^p, p > 1; and (3) for most reasonable nonparametric density estimates, either ∫|f(n) - f| converges to 0 (and then the convergence is in the strongest possible sense for all f), or there is no convergence (even in the weakest possible sense for a single f). There is no intermediate situation.

  20. Quantum statistical inference for density estimation

    SciTech Connect

    Silver, R.N.; Martz, H.F.; Wallstrom, T.

    1993-11-01

    A new penalized likelihood method for non-parametric density estimation is proposed, which is based on a mathematical analogy to quantum statistical physics. The mathematical procedure for density estimation is related to maximum entropy methods for inverse problems; the penalty function is a convex information divergence enforcing global smoothing toward default models, positivity, extensivity and normalization. The novel feature is the replacement of classical entropy by quantum entropy, so that local smoothing may be enforced by constraints on the expectation values of differential operators. Although the hyperparameters, covariance, and linear response to perturbations can be estimated by a variety of statistical methods, we develop the Bayesian interpretation. The linear response of the MAP estimate is proportional to the covariance. The hyperparameters are estimated by type-II maximum likelihood. The method is demonstrated on standard data sets.

  1. Wavelet-based approach to character skeleton.

    PubMed

    You, Xinge; Tang, Yuan Yan

    2007-05-01

    The character skeleton plays a significant role in character recognition. The strokes of a character may consist of two regions, i.e., singular and regular regions. The intersections and junctions of the strokes belong to the singular region, while the straight and smooth parts of the strokes are categorized as the regular region. Therefore, a skeletonization method requires two different processes to treat the skeletons in these two different regions. All traditional skeletonization algorithms are based on the symmetry analysis technique. The major problems of these methods are as follows. 1) The computation of the primary skeleton in the regular region is indirect, so that its implementation is sophisticated and costly. 2) The extracted skeleton cannot be exactly located on the central line of the stroke. 3) The captured skeleton in the singular region may be distorted by artifacts and branches. To overcome these problems, a novel scheme for extracting the skeleton of a character based on the wavelet transform is presented in this paper. This scheme consists of two main steps, namely: a) extraction of the primary skeleton in the regular region and b) amendment processing of the primary skeletons and connection of them in the singular region. A direct technique is used in the first step, where a new wavelet-based symmetry analysis is developed for finding the central line of the stroke directly. A novel method called smooth interpolation is designed in the second step, where a smooth operation is applied to the primary skeleton, and, thereafter, the interpolation compensation technique is proposed to link the primary skeleton, so that the skeleton in the singular region can be produced. Experiments are conducted and positive results are achieved, which show that the proposed skeletonization scheme is applicable not only to binary images but also to gray-level images, and the skeleton is robust against noise and affine transforms.

  2. Wavelet-based analysis of circadian behavioral rhythms.

    PubMed

    Leise, Tanya L

    2015-01-01

    The challenging problems presented by noisy biological oscillators have led to the development of a great variety of methods for accurately estimating rhythmic parameters such as period and amplitude. This chapter focuses on wavelet-based methods, which can be quite effective for assessing how rhythms change over time, particularly if time series are at least a week in length. These methods can offer alternative views to complement more traditional methods of evaluating behavioral records. The analytic wavelet transform can estimate the instantaneous period and amplitude, as well as the phase of the rhythm at each time point, while the discrete wavelet transform can extract the circadian component of activity and measure the relative strength of that circadian component compared to those in other frequency bands. Wavelet transforms do not require the removal of noise or trend, and can, in fact, be effective at removing noise and trend from oscillatory time series. The Fourier periodogram and spectrogram are reviewed, followed by descriptions of the analytic and discrete wavelet transforms. Examples illustrate application of each method and their prior use in chronobiology is surveyed. Issues such as edge effects, frequency leakage, and implications of the uncertainty principle are also addressed. PMID:25662453

  3. Estimating animal population density using passive acoustics.

    PubMed

    Marques, Tiago A; Thomas, Len; Martin, Stephen W; Mellinger, David K; Ward, Jessica A; Moretti, David J; Harris, Danielle; Tyack, Peter L

    2013-05-01

    Reliable estimation of the size or density of wild animal populations is very important for effective wildlife management, conservation and ecology. Currently, the most widely used methods for obtaining such estimates involve either sighting animals from transect lines or some form of capture-recapture on marked or uniquely identifiable individuals. However, many species are difficult to sight, and cannot be easily marked or recaptured. Some of these species produce readily identifiable sounds, providing an opportunity to use passive acoustic data to estimate animal density. In addition, even for species for which other visually based methods are feasible, passive acoustic methods offer the potential for greater detection ranges in some environments (e.g. underwater or in dense forest), and hence potentially better precision. Automated data collection means that surveys can take place at times and in places where it would be too expensive or dangerous to send human observers. Here, we present an overview of animal density estimation using passive acoustic data, a relatively new and fast-developing field. We review the types of data and methodological approaches currently available to researchers and we provide a framework for acoustics-based density estimation, illustrated with examples from real-world case studies. We mention moving sensor platforms (e.g. towed acoustics), but then focus on methods involving sensors at fixed locations, particularly hydrophones to survey marine mammals, as acoustic-based density estimation research to date has been concentrated in this area. Primary among these are methods based on distance sampling and spatially explicit capture-recapture. The methods are also applicable to other aquatic and terrestrial sound-producing taxa. We conclude that, despite being in its infancy, density estimation based on passive acoustic data likely will become an important method for surveying a number of diverse taxa, such as sea mammals, fish, birds

  4. Estimating animal population density using passive acoustics

    PubMed Central

    Marques, Tiago A; Thomas, Len; Martin, Stephen W; Mellinger, David K; Ward, Jessica A; Moretti, David J; Harris, Danielle; Tyack, Peter L

    2013-01-01

    Reliable estimation of the size or density of wild animal populations is very important for effective wildlife management, conservation and ecology. Currently, the most widely used methods for obtaining such estimates involve either sighting animals from transect lines or some form of capture-recapture on marked or uniquely identifiable individuals. However, many species are difficult to sight, and cannot be easily marked or recaptured. Some of these species produce readily identifiable sounds, providing an opportunity to use passive acoustic data to estimate animal density. In addition, even for species for which other visually based methods are feasible, passive acoustic methods offer the potential for greater detection ranges in some environments (e.g. underwater or in dense forest), and hence potentially better precision. Automated data collection means that surveys can take place at times and in places where it would be too expensive or dangerous to send human observers. Here, we present an overview of animal density estimation using passive acoustic data, a relatively new and fast-developing field. We review the types of data and methodological approaches currently available to researchers and we provide a framework for acoustics-based density estimation, illustrated with examples from real-world case studies. We mention moving sensor platforms (e.g. towed acoustics), but then focus on methods involving sensors at fixed locations, particularly hydrophones to survey marine mammals, as acoustic-based density estimation research to date has been concentrated in this area. Primary among these are methods based on distance sampling and spatially explicit capture-recapture. The methods are also applicable to other aquatic and terrestrial sound-producing taxa. We conclude that, despite being in its infancy, density estimation based on passive acoustic data likely will become an important method for surveying a number of diverse taxa, such as sea mammals, fish, birds

  5. Wavelet-based adaptive denoising and baseline correction for MALDI TOF MS.

    PubMed

    Shin, Hyunjin; Sampat, Mehul P; Koomen, John M; Markey, Mia K

    2010-06-01

    Proteomic profiling by MALDI TOF mass spectrometry (MS) is an effective method for identifying biomarkers from human serum/plasma, but the process is complicated by the presence of noise in the spectra. In MALDI TOF MS, the major noise source is chemical noise, which is defined as the interference from matrix material and its clusters. Because chemical noise is nonstationary and nonwhite, wavelet-based denoising is more effective than conventional noise reduction schemes based on Fourier analysis. However, current wavelet-based denoising methods for mass spectrometry do not fully consider the characteristics of chemical noise. In this article, we propose new wavelet-based high-frequency noise reduction and baseline correction methods that were designed based on the discrete stationary wavelet transform. The high-frequency noise reduction algorithm adaptively estimates the time-varying threshold for each frequency subband from multiple realizations of chemical noise and removes noise from mass spectra of samples using the estimated thresholds. The baseline correction algorithm computes the monotonically decreasing baseline in the highest approximation of the wavelet domain. The experimental results demonstrate that our algorithms effectively remove artifacts in mass spectra that are due to chemical noise while preserving informative features as compared to commonly used denoising methods.
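
    A minimal sketch of SWT-based denoising with a coarse-approximation baseline, assuming PyWavelets; unlike the paper, the per-subband thresholds here are estimated from the spectrum itself rather than from separate realizations of chemical noise, and the baseline is taken directly as the coarsest approximation rather than a monotonically decreasing fit to it.

    ```python
    # Illustrative stationary-wavelet-transform denoising and baseline removal.
    import numpy as np
    import pywt

    def denoise_spectrum(intensities, wavelet="sym8", level=6):
        n = intensities.size
        pad = (-n) % (2 ** level)                                 # SWT needs length divisible by 2**level
        x = np.pad(intensities, (0, pad), mode="edge")
        coeffs = pywt.swt(x, wavelet, level=level)                # [(cA_L, cD_L), ..., (cA_1, cD_1)]
        thresholded = []
        for ca, cd in coeffs:
            sigma = np.median(np.abs(cd)) / 0.6745                # per-subband noise estimate
            thr = sigma * np.sqrt(2 * np.log(x.size))
            thresholded.append((ca, pywt.threshold(cd, thr, mode="soft")))
        signal = pywt.iswt(thresholded, wavelet)[:n]
        baseline = coeffs[0][0][:n]                               # coarsest approximation as the baseline
        return signal - baseline
    ```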

  6. A wavelet-based baseline drift correction method for grounded electrical source airborne transient electromagnetic signals

    NASA Astrophysics Data System (ADS)

    Wang, Yuan; Ji, Yanju; Li, Suyi; Lin, Jun; Zhou, Fengdao; Yang, Guihong

    2013-09-01

    A grounded electrical source airborne transient electromagnetic (GREATEM) system on an airship enjoys a high depth of prospecting and spatial resolution, as well as outstanding detection efficiency and easy flight control. However, the movement and swing of the front-fixed receiving coil can cause severe baseline drift, leading to inferior resistivity image formation. Consequently, reducing the baseline drift of GREATEM data is of vital importance for inversion and interpretation. To correct the baseline drift, a traditional interpolation method estimates the baseline 'envelope' using linear interpolation between the calculated start and end points of all cycles, and obtains the corrected signal by subtracting the envelope from the original signal. However, the effectiveness and efficiency of this removal are found to be low. Considering the characteristics of the baseline drift in GREATEM data, this study proposes a wavelet-based method built on multi-resolution analysis. The optimal wavelet basis and number of decomposition levels are determined through iterative trial-and-error comparison. This application uses the sym8 wavelet with 10 decomposition levels, obtains the approximation at level 10 as the baseline drift, and then obtains the corrected signal by removing the estimated baseline drift from the original signal. To examine the performance of our proposed method, we establish a dipping sheet model and calculate the theoretical response. Through simulations, we compare the signal-to-noise ratio, signal distortion, and processing speed of the wavelet-based method and those of the interpolation method. Simulation results show that the wavelet-based method outperforms the interpolation method. We also use field data to evaluate the methods, comparing the depth section images of apparent resistivity obtained using the original signal, the interpolation-corrected signal, and the wavelet-corrected signal, respectively. The results confirm that our proposed wavelet-based method is an

  7. Conditional Density Estimation in Measurement Error Problems.

    PubMed

    Wang, Xiao-Feng; Ye, Deping

    2015-01-01

    This paper is motivated by a wide range of background correction problems in gene array data analysis, where the raw gene expression intensities are measured with error. Estimating a conditional density function from the contaminated expression data is a key aspect of statistical inference and visualization in these studies. We propose re-weighted deconvolution kernel methods to estimate the conditional density function in an additive error model, when the error distribution is known as well as when it is unknown. Theoretical properties of the proposed estimators are investigated with respect to the mean absolute error from a "double asymptotic" view. Practical rules are developed for the selection of smoothing parameters. Simulated examples and an application to an Illumina bead microarray study are presented to illustrate the viability of the methods. PMID:25284902

  8. A wavelet based investigation of long memory in stock returns

    NASA Astrophysics Data System (ADS)

    Tan, Pei P.; Galagedera, Don U. A.; Maharaj, Elizabeth A.

    2012-04-01

    Using a wavelet-based maximum likelihood fractional integration estimator, we test long memory (return predictability) in the returns at the market, industry and firm level. In an analysis of emerging market daily returns over the full sample period, we find that long memory is not present in the market returns, while there is evidence of long memory in approximately twenty percent of the 175 stocks. The absence of long memory in the market returns may be a consequence of contemporaneous aggregation of stock returns. However, when the analysis is carried out with rolling windows, evidence of long memory is observed in certain time frames. These results are largely consistent with those of detrended fluctuation analysis. A test of firm-level information in explaining stock return predictability using a logistic regression model reveals that returns of large firms are more likely to possess the long memory feature than the returns of small firms. There is no evidence to suggest that turnover, earnings per share, book-to-market ratio, systematic risk or abnormal return with respect to the market model is associated with return predictability. However, the degree of long-range dependence appears to be associated positively with earnings per share, systematic risk and abnormal return, and negatively with book-to-market ratio.
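
    For orientation, a classic wavelet log-variance regression for the memory parameter d is sketched below, assuming PyWavelets and the convention that, for an orthonormal DWT, the log2 variance of the detail coefficients grows roughly as 2d per octave; the paper's wavelet-based maximum likelihood fractional integration estimator is not reproduced here.

    ```python
    # Illustrative wavelet log-variance estimate of the long-memory parameter d.
    import numpy as np
    import pywt

    def wavelet_memory_parameter(returns, wavelet="db4", levels=6):
        coeffs = pywt.wavedec(returns - np.mean(returns), wavelet, level=levels)
        details = coeffs[1:][::-1]                                # reorder as scales j = 1 (finest) .. levels
        j = np.arange(1, levels + 1)
        log_var = np.log2([np.mean(d**2) for d in details])
        slope = np.polyfit(j, log_var, 1)[0]                      # slope ~ 2d under this convention
        return slope / 2                                          # memory parameter d (Hurst H = d + 1/2)
    ```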

  9. Density estimation with non-parametric methods

    NASA Astrophysics Data System (ADS)

    Fadda, D.; Slezak, E.; Bijaoui, A.

    1998-01-01

    One key issue in several astrophysical problems is the evaluation of the density probability function underlying an observational discrete data set. We here review two non-parametric density estimators which recently appeared in the astrophysical literature, namely the adaptive kernel density estimator and the Maximum Penalized Likelihood technique, and describe another method based on the wavelet transform. The efficiency of these estimators is tested by using extensive numerical simulations in the one-dimensional case. The results are in good agreement with theoretical functions and the three methods appear to yield consistent estimates. However, the Maximum Penalized Likelihood suffers from a lack of resolution and high computational cost due to its dependency on a minimization algorithm. The small differences between kernel and wavelet estimates are mainly explained by the ability of the wavelet method to take into account local gaps in the data distribution. This new approach is very promising, since smaller structures superimposed onto a larger one are detected only by this technique, especially when small samples are investigated. Thus, wavelet solutions appear to be better suited for subclustering studies. Nevertheless, kernel estimates seem more robust and are reliable solutions although some small-scale details can be missed. In order to check these estimators with respect to previous studies, two galaxy redshift samples, related to the galaxy cluster A3526 and to the Corona Borealis region, have been analyzed. In both these cases claims for bimodality are confirmed at a high confidence level. The complete version of this paper with the whole set of figures can be accessed from the electronic version of the A&A Suppl. Ser. managed by Editions de Physique as well as from the SISSA database (astro-ph/9704096).

  10. Mean Density Estimation derived from Satellite Constellations

    NASA Astrophysics Data System (ADS)

    Li, A.; Close, S.

    2015-12-01

    With the advent of nanosatellite constellations, we define here a new method to derive neutral densities of the lower thermosphere from multiple similar platforms travelling through the same regions of space. Because of similar orbits, the satellites are expected to encounter similar mean neutral densities and hence experience similar drag if their drag coefficients are equivalent. Utilizing free molecular flow theory to bound the minimum possible drag coefficient and order statistics to give a statistical picture of the distribution, we are able to estimate the neutral density alongside its associated error bounds. Data sources for this methodology can either be already established Two Line Elements (TLEs) or raw data sources, in which case an additional filtering step needs to be performed to estimate relevant parameters. The effects of error in the filtering step of the methodology are also discussed and can be removed if the error distribution is Gaussian in nature. This method does not depend on prior models of the atmosphere, but instead is based upon physics models of simple shapes in free molecular flow. With a constellation of 10 satellites, we can achieve a standard deviation of roughly 4% on the estimated mean neutral density. As additional satellites are included in the estimation scheme, the result converges towards the lower limit of the achievable drag coefficient, and accuracy becomes limited by the quality of the ranging measurements and the probability of the accommodation coefficient. Data is provided courtesy of Planet Labs and comparisons are made to existing atmospheric models such as NRLMSISE-00 and JB2006.
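
    The retrieval rests on the standard drag relation a_drag = (1/2) * rho * v^2 * Cd * A / m, inverted for rho; a toy calculation with purely illustrative cubesat-like numbers:

    ```python
    # Illustrative inversion of the drag relation for neutral density.
    m = 4.0          # satellite mass, kg
    a_drag = 1.2e-6  # along-track drag acceleration inferred from orbit data, m/s^2
    cd = 2.2         # drag coefficient (free-molecular-flow theory bounds its minimum)
    area = 0.03      # cross-sectional area, m^2
    v = 7600.0       # speed relative to the atmosphere, m/s

    rho = 2.0 * m * a_drag / (cd * area * v**2)   # from a_drag = 0.5 * rho * v^2 * cd * area / m
    print(f"mean neutral density ~ {rho:.2e} kg/m^3")
    ```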

  11. Coding sequence density estimation via topological pressure.

    PubMed

    Koslicki, David; Thompson, Daniel J

    2015-01-01

    We give a new approach to coding sequence (CDS) density estimation in genomic analysis based on the topological pressure, which we develop from a well known concept in ergodic theory. Topological pressure measures the 'weighted information content' of a finite word, and incorporates 64 parameters which can be interpreted as a choice of weight for each nucleotide triplet. We train the parameters so that the topological pressure fits the observed coding sequence density on the human genome, and use this to give ab initio predictions of CDS density over windows of size around 66,000 bp on the genomes of Mus musculus, Rhesus macaque and Drosophila melanogaster. While the differences between these genomes are too great to expect that training on the human genome could predict, for example, the exact locations of genes, we demonstrate that our method gives reasonable estimates for the 'coarse scale' problem of predicting CDS density. Inspired again by ergodic theory, the weightings of the nucleotide triplets obtained from our training procedure are used to define a probability distribution on finite sequences, which can be used to distinguish between intron and exon sequences from the human genome of lengths between 750 and 5,000 bp. At the end of the paper, we explain the theoretical underpinning for our approach, which is the theory of Thermodynamic Formalism from the dynamical systems literature. Mathematica and MATLAB implementations of our method are available at http://sourceforge.net/projects/topologicalpres/ . PMID:24448658

  12. Bird population density estimated from acoustic signals

    USGS Publications Warehouse

    Dawson, D.K.; Efford, M.G.

    2009-01-01

    Many animal species are detected primarily by sound. Although songs, calls and other sounds are often used for population assessment, as in bird point counts and hydrophone surveys of cetaceans, there are few rigorous methods for estimating population density from acoustic data. 2. The problem has several parts - distinguishing individuals, adjusting for individuals that are missed, and adjusting for the area sampled. Spatially explicit capture-recapture (SECR) is a statistical methodology that addresses jointly the second and third parts of the problem. We have extended SECR to use uncalibrated information from acoustic signals on the distance to each source. 3. We applied this extension of SECR to data from an acoustic survey of ovenbird Seiurus aurocapilla density in an eastern US deciduous forest with multiple four-microphone arrays. We modelled average power from spectrograms of ovenbird songs measured within a window of 0.7 s duration and frequencies between 4200 and 5200 Hz. 4. The resulting estimates of the density of singing males (0.19 ha-1, SE 0.03 ha-1) were consistent with estimates of the adult male population density from mist-netting (0.36 ha-1, SE 0.12 ha-1). The fitted model predicts sound attenuation of 0.11 dB m-1 (SE 0.01 dB m-1) in excess of losses from spherical spreading. 5. Synthesis and applications. Our method for estimating animal population density from acoustic signals fills a gap in the census methods available for visually cryptic but vocal taxa, including many species of bird and cetacean. The necessary equipment is simple and readily available; as few as two microphones may provide adequate estimates, given spatial replication. The method requires that individuals detected at the same place are acoustically distinguishable and all individuals vocalize during the recording interval, or that the per capita rate of vocalization is known. We believe these requirements can be met, with suitable field methods, for a significant

  13. Estimating stellar mean density through seismic inversions

    NASA Astrophysics Data System (ADS)

    Reese, D. R.; Marques, J. P.; Goupil, M. J.; Thompson, M. J.; Deheuvels, S.

    2012-03-01

    Context. Determining the mass of stars is crucial both for improving stellar evolution theory and for characterising exoplanetary systems. Asteroseismology offers a promising way for estimating the stellar mean density. When combined with accurate radii determinations, such as are expected from Gaia, this yields accurate stellar masses. The main difficulty is finding the best way to extract the mean density of a star from a set of observed frequencies. Aims: We seek to establish a new method for estimating the stellar mean density, which combines the simplicity of a scaling law while providing the accuracy of an inversion technique. Methods: We provide a framework in which to construct and evaluate kernel-based linear inversions that directly yield the mean density of a star. We then describe three different inversion techniques (SOLA and two scaling laws) and apply them to the Sun, several test cases and three stars, α Cen B, HD 49933 and HD 49385, two of which are observed by CoRoT. Results: The SOLA (subtractive optimally localised averages) approach and the scaling law based on the surface correcting technique described by Kjeldsen et al. (2008, ApJ, 683, L175) yield comparable results that can reach an accuracy of 0.5% and are better than scaling the large frequency separation. The reason for this is that the averaging kernels from the first two methods are comparable in quality and are better than what is obtained with the large frequency separation. It is also shown that scaling the large frequency separation is more sensitive to near-surface effects, but is much less affected by an incorrect mode identification. As a result, one can identify pulsation modes by looking for an ℓ and n assignment which provides the best agreement between the results from the large frequency separation and those from one of the two other methods. Non-linear effects are also discussed, as are the effects of mixed modes. In particular, we show that mixed modes bring little
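
    The simplest of the techniques compared above is the scaling of the large frequency separation, rho/rho_sun ≈ (Δν/Δν_sun)^2; a toy calculation (solar reference values approximate, the example Δν merely illustrative):

    ```python
    # Mean stellar density from the large frequency separation (scaling-law estimate).
    DELTA_NU_SUN = 135.1      # solar large frequency separation, micro-Hz (approximate)
    RHO_SUN = 1.408e3         # solar mean density, kg/m^3 (approximate)

    def mean_density_from_delta_nu(delta_nu_uhz):
        return RHO_SUN * (delta_nu_uhz / DELTA_NU_SUN) ** 2

    print(mean_density_from_delta_nu(161.5))   # illustrative value of the order observed for alpha Cen B
    ```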

  14. 3D Wavelet-Based Filter and Method

    DOEpatents

    Moss, William C.; Haase, Sebastian; Sedat, John W.

    2008-08-12

    A 3D wavelet-based filter for visualizing and locating structural features of a user-specified linear size in 2D or 3D image data. The only input parameter is a characteristic linear size of the feature of interest, and the filter output contains only those regions that are correlated with the characteristic size, thus denoising the image.

  15. Enhancing Hyperspectral Data Throughput Utilizing Wavelet-Based Fingerprints

    SciTech Connect

    I. W. Ginsberg

    1999-09-01

    Multiresolutional decompositions known as spectral fingerprints are often used to extract spectral features from multispectral/hyperspectral data. In this study, the authors investigate the use of wavelet-based algorithms for generating spectral fingerprints. The wavelet-based algorithms are compared to the currently used method, traditional convolution with first-derivative Gaussian filters. The comparison analysis consists of two parts: (a) the computational expense of the new method is compared with the computational costs of the current method and (b) the outputs of the wavelet-based methods are compared with those of the current method to determine any practical differences in the resulting spectral fingerprints. The results show that the wavelet-based algorithms can greatly reduce the computational expense of generating spectral fingerprints, while practically no differences exist in the resulting fingerprints. The analysis is conducted on a database of hyperspectral signatures, namely, Hyperspectral Digital Image Collection Experiment (HYDICE) signatures. The reduction in computational expense is by a factor of about 30, and the average Euclidean distance between resulting fingerprints is on the order of 0.02.

  16. Regularized Multitask Learning for Multidimensional Log-Density Gradient Estimation.

    PubMed

    Yamane, Ikko; Sasaki, Hiroaki; Sugiyama, Masashi

    2016-07-01

    Log-density gradient estimation is a fundamental statistical problem and possesses various practical applications such as clustering and measuring nongaussianity. A naive two-step approach of first estimating the density and then taking its log gradient is unreliable because an accurate density estimate does not necessarily lead to an accurate log-density gradient estimate. To cope with this problem, a method to directly estimate the log-density gradient without density estimation has been explored and demonstrated to work much better than the two-step method. The objective of this letter is to improve the performance of this direct method in multidimensional cases. Our idea is to regard the problem of log-density gradient estimation in each dimension as a task and apply regularized multitask learning to the direct log-density gradient estimator. We experimentally demonstrate the usefulness of the proposed multitask method in log-density gradient estimation and mode-seeking clustering. PMID:27171983

  17. Density Estimations in Laboratory Debris Flow Experiments

    NASA Astrophysics Data System (ADS)

    Queiroz de Oliveira, Gustavo; Kulisch, Helmut; Malcherek, Andreas; Fischer, Jan-Thomas; Pudasaini, Shiva P.

    2016-04-01

    Bulk density and its variation are important physical quantities for estimating the solid-liquid fractions in two-phase debris flows. Here we present mass and flow depth measurements for experiments performed in a large-scale laboratory setup. Once the mixture is released and moves down the inclined channel, these measurements allow us to determine the bulk density evolution throughout the debris flow. Flow depths are determined by ultrasonic pulse reflection, and the mass is measured with a total normal force sensor. The data were obtained at 50 Hz. The initial two-phase material was composed of 350 kg of debris with a water content of 40%. A very fine pebble with a mean particle diameter of 3 mm, a particle density of 2760 kg/m³ and a bulk density of 1400 kg/m³ in dry condition was chosen as the solid material. Measurements reveal that the debris bulk density remains high from the head to the middle of the debris body, whereas it drops substantially at the tail. This indicates lower water content at the tail compared to the head and the middle portion of the debris body, meaning that the solid and fluid fractions vary strongly and non-linearly along the flow path and from the head to the tail of the debris mass. Importantly, this spatial-temporal density variation plays a crucial role in determining the impact forces associated with the dynamics of the flow. Our setup allows for investigating different two-phase material compositions, including large fluid fractions, with high resolution. The considered experimental setup may enable us to transfer the observed phenomena to natural large-scale events. Furthermore, the measurement data allow evaluation of the results of numerical two-phase mass flow simulations. These experiments are part of the project avaflow.org, which intends to develop a GIS-based open-source computational tool to describe a wide spectrum of rapid geophysical mass flows, including avalanches and real two-phase debris flows down complex natural

  18. Wavelet-based regularity analysis reveals Recurrent Spatiotemporal Behavior in Resting-state fMRI

    PubMed Central

    Smith, Robert X.; Jann, Kay; Ances, Beau; Wang, Danny J.J.

    2015-01-01

    One of the major findings from multi-modal neuroimaging studies in the past decade is that the human brain is anatomically and functionally organized into large-scale networks. In resting state fMRI (rs-fMRI), spatial patterns emerge when temporal correlations between various brain regions are tallied, evidencing networks of ongoing intercortical cooperation. However, the dynamic structure governing the brain’s spontaneous activity is far less understood due to the short and noisy nature of the rs-fMRI signal. Here we develop a wavelet-based regularity analysis based on noise estimation capabilities of the wavelet transform to measure recurrent temporal pattern stability within the rs-fMRI signal across multiple temporal scales. The method consists of performing a stationary wavelet transform (SWT) to preserve signal structure, followed by construction of “lagged” subsequences to adjust for correlated features, and finally the calculation of sample entropy across wavelet scales based on an “objective” estimate of noise level at each scale. We found that the brain’s default mode network (DMN) areas manifest a higher level of irregularity in rs-fMRI time series than the rest of the brain. In 25 aged subjects with mild cognitive impairment and 25 matched healthy controls, wavelet-based regularity analysis showed improved sensitivity in detecting changes in the regularity of rs-fMRI signals between the two groups within the DMN and executive control networks, compared to standard multiscale entropy analysis. Wavelet-based regularity analysis based on noise estimation capabilities of the wavelet transform is a promising technique to characterize the dynamic structure of rs-fMRI as well as other biological signals. PMID:26096080

  1. Wavelet-based regularity analysis reveals recurrent spatiotemporal behavior in resting-state fMRI.

    PubMed

    Smith, Robert X; Jann, Kay; Ances, Beau; Wang, Danny J J

    2015-09-01

    One of the major findings from multimodal neuroimaging studies in the past decade is that the human brain is anatomically and functionally organized into large-scale networks. In resting state fMRI (rs-fMRI), spatial patterns emerge when temporal correlations between various brain regions are tallied, evidencing networks of ongoing intercortical cooperation. However, the dynamic structure governing the brain's spontaneous activity is far less understood due to the short and noisy nature of the rs-fMRI signal. Here, we develop a wavelet-based regularity analysis based on noise estimation capabilities of the wavelet transform to measure recurrent temporal pattern stability within the rs-fMRI signal across multiple temporal scales. The method consists of performing a stationary wavelet transform to preserve signal structure, followed by construction of "lagged" subsequences to adjust for correlated features, and finally the calculation of sample entropy across wavelet scales based on an "objective" estimate of noise level at each scale. We found that the brain's default mode network (DMN) areas manifest a higher level of irregularity in rs-fMRI time series than the rest of the brain. In 25 aged subjects with mild cognitive impairment and 25 matched healthy controls, wavelet-based regularity analysis showed improved sensitivity in detecting changes in the regularity of rs-fMRI signals between the two groups within the DMN and executive control networks, compared with standard multiscale entropy analysis. Wavelet-based regularity analysis based on noise estimation capabilities of the wavelet transform is a promising technique to characterize the dynamic structure of rs-fMRI as well as other biological signals.
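
    The pipeline described above (stationary wavelet transform, then scale-wise sample entropy with a noise-derived tolerance) can be sketched in a few lines. The following is an illustrative sketch rather than the authors' implementation; the wavelet ('db4'), decomposition level, embedding dimension m and tolerance factor are placeholder choices, and the "lagged" subsequence step is omitted.

```python
import numpy as np
import pywt

def sample_entropy(x, m, r):
    """Simplified sample entropy with Chebyshev distance and tolerance r."""
    x = np.asarray(x)
    def match_count(length):
        T = np.array([x[i:i + length] for i in range(len(x) - length)])
        d = np.max(np.abs(T[:, None, :] - T[None, :, :]), axis=2)
        return (np.sum(d <= r) - len(T)) / 2.0      # exclude self-matches
    B, A = match_count(m), match_count(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

signal = np.random.default_rng(0).standard_normal(256)   # stand-in for an rs-fMRI series
level = 4
coeffs = pywt.swt(signal, 'db4', level=level)             # [(cA_L, cD_L), ..., (cA_1, cD_1)]

for lev, (cA, cD) in zip(range(level, 0, -1), coeffs):
    sigma = np.median(np.abs(cD)) / 0.6745                # noise estimate at this scale
    se = sample_entropy(cD, m=2, r=0.2 * max(sigma, 1e-12))
    print(f"scale {lev}: noise ~ {sigma:.3f}, sample entropy ~ {se:.3f}")
```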

  2. Kernel density estimation using graphical processing unit

    NASA Astrophysics Data System (ADS)

    Sunarko, Su'ud, Zaki

    2015-09-01

    Kernel density estimation for particles distributed over a 2-dimensional space is calculated using a single graphical processing unit (GTX 660Ti GPU) and CUDA-C language. Parallel calculations are done for particles having bivariate normal distribution and by assigning calculations for equally-spaced node points to each scalar processor in the GPU. The numbers of particles, blocks and threads are varied to identify a favorable configuration. Comparisons are obtained by performing the same calculation using 1, 2 and 4 processors on a 3.0 GHz CPU using MPICH 2.0 routines. Speedups attained with the GPU are in the range of 88 to 349 times compared to the multiprocessor CPU. Blocks of 128 threads are found to be the optimum configuration for this case.
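
    As a rough CPU-side sketch of the computation being parallelised (not the CUDA-C code used in the record above), the following evaluates a Gaussian kernel density estimate for 2-D particles at equally spaced grid nodes; in the GPU version each node would map to one thread. The bandwidth and problem sizes are placeholder values.

```python
import numpy as np

rng = np.random.default_rng(1)
particles = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.3], [0.3, 1.0]], size=5_000)
px, py = particles[:, 0], particles[:, 1]

h = 0.2                                               # kernel bandwidth (assumed)
xs = np.linspace(-4.0, 4.0, 128)
ys = np.linspace(-4.0, 4.0, 128)
gx, gy = np.meshgrid(xs, ys)
nodes = np.column_stack([gx.ravel(), gy.ravel()])     # one density value per node

density = np.empty(len(nodes))
for start in range(0, len(nodes), 1024):              # chunked to bound memory use
    block = nodes[start:start + 1024]
    d2 = (block[:, 0, None] - px) ** 2 + (block[:, 1, None] - py) ** 2
    density[start:start + 1024] = np.exp(-0.5 * d2 / h ** 2).sum(axis=1)
density /= 2.0 * np.pi * h ** 2 * len(particles)      # normalise the Gaussian kernel
density = density.reshape(gx.shape)
```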

  3. Fast wavelet based algorithms for linear evolution equations

    NASA Technical Reports Server (NTRS)

    Engquist, Bjorn; Osher, Stanley; Zhong, Sifen

    1992-01-01

    A class of fast wavelet-based algorithms was devised for linear evolution equations whose coefficients are time independent. The method draws on the work of Beylkin, Coifman, and Rokhlin, which they applied to general Calderon-Zygmund type integral operators. A modification of their idea is applied to linear hyperbolic and parabolic equations with spatially varying coefficients. A significant speedup over standard methods is obtained when the approach is applied to hyperbolic equations in one space dimension and parabolic equations in multiple dimensions.

  4. Analysis of a wavelet-based robust hash algorithm

    NASA Astrophysics Data System (ADS)

    Meixner, Albert; Uhl, Andreas

    2004-06-01

    This paper is a quantitative evaluation of a wavelet-based, robust authentication hashing algorithm. Based on the results of a series of robustness and tampering sensitivity tests, we describe possible shortcomings and propose various modifications to the algorithm to improve its performance. The second part of the paper describes an attack against the scheme. It allows an attacker to modify a tampered image such that its hash value closely matches the hash value of the original.

  5. Wavelet-based verification of the quantitative precipitation forecast

    NASA Astrophysics Data System (ADS)

    Yano, Jun-Ichi; Jakubiak, Bogumil

    2016-06-01

    This paper explores the use of wavelets for spatial verification of quantitative precipitation forecasts (QPF), and especially the capacity of wavelets to provide both localization and scale information. Two 24-h forecast experiments using the two versions of the Coupled Ocean/Atmosphere Mesoscale Prediction System (COAMPS) on 22 August 2010 over Poland are used to illustrate the method. Strong spatial localizations and associated intermittency of the precipitation field make verification of QPF difficult using standard statistical methods. The wavelet becomes an attractive alternative, because it is specifically designed to extract spatially localized features. The wavelet modes are characterized by the two indices for the scale and the localization. Thus, these indices can simply be employed for characterizing the performance of QPF in scale and localization without any further elaboration or tunable parameters. Furthermore, spatially-localized features can be extracted in wavelet space in a relatively straightforward manner with only a weak dependence on a threshold. Such a feature may be considered an advantage of the wavelet-based method over more conventional "object" oriented verification methods, as the latter tend to represent strong threshold sensitivities. The present paper also points out limits of the so-called "scale separation" methods based on wavelets. Our study demonstrates how these wavelet-based QPF verifications can be performed straightforwardly. Possibilities for further developments of the wavelet-based methods, especially towards a goal of identifying a weak physical process contributing to forecast error, are also pointed out.

  6. Traffic characterization and modeling of wavelet-based VBR encoded video

    SciTech Connect

    Yu Kuo; Jabbari, B.; Zafar, S.

    1997-07-01

    Wavelet-based video codecs provide a hierarchical structure for the encoded data, which can cater to a wide variety of applications such as multimedia systems. The characteristics of such an encoder and its output, however, have not been well examined. In this paper, the authors investigate the output characteristics of a wavelet-based video codec and develop a composite model to capture the traffic behavior of its output video data. Wavelet decomposition transforms the input video into a hierarchical structure with a number of subimages at different resolutions and scales. The top-level wavelet in this structure contains most of the signal energy. They first describe the characteristics of traffic generated by each subimage and the effect of dropping various subimages at the encoder on the signal-to-noise ratio at the receiver. They then develop an N-state Markov model to describe the traffic behavior of the top wavelet. The behavior of the remaining wavelets is then obtained through estimation, based on the correlations between these subimages at the same level of resolution and those wavelets located at the immediately higher level. In this paper, a three-state Markov model is developed. The resulting traffic behavior, described by various statistical properties such as moments and correlations, is then utilized to validate their model.
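
    The kind of N-state Markov traffic model referred to above can be illustrated with a toy three-state chain; the transition matrix and per-state bit rates below are invented placeholders, not the values fitted in the paper.

```python
import numpy as np

P = np.array([[0.90, 0.08, 0.02],          # transitions from the low-rate state
              [0.10, 0.80, 0.10],          # from the medium-rate state
              [0.05, 0.15, 0.80]])         # from the high-rate state
rate_per_state = np.array([0.5, 1.5, 4.0]) # Mbit/s per frame, illustrative only

rng = np.random.default_rng(2)
state, trace = 0, []
for _ in range(1000):                      # one simulated rate per video frame
    trace.append(rate_per_state[state])
    state = rng.choice(3, p=P[state])
trace = np.array(trace)
print(trace.mean(), trace.std())           # moments to compare against measured traffic
```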

  7. Adaptive wavelet-based recognition of oscillatory patterns on electroencephalograms

    NASA Astrophysics Data System (ADS)

    Nazimov, Alexey I.; Pavlov, Alexey N.; Hramov, Alexander E.; Grubov, Vadim V.; Koronovskii, Alexey A.; Sitnikova, Evgenija Y.

    2013-02-01

    The problem of automatic recognition of specific oscillatory patterns on electroencephalograms (EEG) is addressed using the continuous wavelet-transform (CWT). A possibility of improving the quality of recognition by optimizing the choice of CWT parameters is discussed. An adaptive approach is proposed to identify sleep spindles (SS) and spike wave discharges (SWD) that assumes automatic selection of CWT-parameters reflecting the most informative features of the analyzed time-frequency structures. Advantages of the proposed technique over the standard wavelet-based approaches are considered.
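
    A bare-bones version of the underlying CWT step (not the adaptive parameter-selection procedure proposed in the record) might look as follows, assuming PyWavelets; the sampling rate, frequency band and detection threshold are placeholder choices.

```python
import numpy as np
import pywt

fs = 250.0                                        # sampling rate, Hz (assumed)
t = np.arange(0, 10, 1 / fs)
eeg = np.random.default_rng(3).standard_normal(t.size)
eeg[1000:1250] += np.sin(2 * np.pi * 12 * t[1000:1250])   # synthetic 12 Hz spindle-like burst

freqs_of_interest = np.arange(9.0, 16.0, 0.5)     # sleep-spindle band, Hz (assumed)
scales = pywt.central_frequency('morl') * fs / freqs_of_interest
coef, freqs = pywt.cwt(eeg, scales, 'morl', sampling_period=1 / fs)

band_energy = (np.abs(coef) ** 2).sum(axis=0)     # instantaneous energy in the band
threshold = band_energy.mean() + 3 * band_energy.std()
detected = band_energy > threshold                # boolean mask marking candidate patterns
```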

  8. Characterizing cerebrovascular dynamics with the wavelet-based multifractal formalism

    NASA Astrophysics Data System (ADS)

    Pavlov, A. N.; Abdurashitov, A. S.; Sindeeva, O. A.; Sindeev, S. S.; Pavlova, O. N.; Shihalov, G. M.; Semyachkina-Glushkovskaya, O. V.

    2016-01-01

    Using the wavelet-transform modulus maxima (WTMM) approach we study the dynamics of cerebral blood flow (CBF) in rats aiming to reveal responses of macro- and microcerebral circulations to changes in the peripheral blood pressure. We show that the wavelet-based multifractal formalism allows quantifying essentially different reactions in the CBF-dynamics at the level of large and small cerebral vessels. We conclude that unlike the macrocirculation that is nearly insensitive to increased peripheral blood pressure, the microcirculation is characterized by essential changes of the CBF-complexity.

  9. EEG analysis using wavelet-based information tools.

    PubMed

    Rosso, O A; Martin, M T; Figliola, A; Keller, K; Plastino, A

    2006-06-15

    Wavelet-based informational tools for quantitative electroencephalogram (EEG) record analysis are reviewed. Relative wavelet energies, wavelet entropies and wavelet statistical complexities are used in the characterization of scalp EEG records corresponding to secondary generalized tonic-clonic epileptic seizures. In particular, we show that the epileptic recruitment rhythm observed during seizure development is well described in terms of the relative wavelet energies. In addition, during the concomitant time period the entropy diminishes while complexity grows. This is construed as evidence supporting the conjecture that an epileptic focus, for this kind of seizure, triggers a self-organized brain state characterized by both order and maximal complexity.
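
    A minimal sketch of the quantities involved, assuming PyWavelets: relative wavelet energies per detail band and the corresponding Shannon-type wavelet entropy. The wavelet and decomposition level are placeholder choices.

```python
import numpy as np
import pywt

eeg = np.random.default_rng(4).standard_normal(1024)       # stand-in for an EEG epoch
coeffs = pywt.wavedec(eeg, 'db4', level=5)                   # [cA5, cD5, cD4, ..., cD1]

energies = np.array([np.sum(c ** 2) for c in coeffs[1:]])    # detail-band energies
p = energies / energies.sum()                                # relative wavelet energies
wavelet_entropy = -np.sum(p * np.log(p))                     # higher = energy spread over bands
print(p, wavelet_entropy)
```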

  10. Wavelet based characterization of ex vivo vertebral trabecular bone structure with 3T MRI compared to microCT

    SciTech Connect

    Krug, R; Carballido-Gamio, J; Burghardt, A; Haase, S; Sedat, J W; Moss, W C; Majumdar, S

    2005-04-11

    Trabecular bone structure and bone density contribute to the strength of bone and are important in the study of osteoporosis. Wavelets are a powerful tool to characterize and quantify texture in an image. In this study the thickness of trabecular bone was analyzed in 8 cylindrical cores of the vertebral spine. Images were obtained from 3 Tesla (T) magnetic resonance imaging (MRI) and micro-computed tomography (μCT). Results from the wavelet-based analysis of trabecular bone were compared with standard two-dimensional structural parameters (analogous to bone histomorphometry) obtained using mean intercept length (MR images) and direct 3D distance transformation methods (μCT images). Additionally, the bone volume fraction was determined from MR images. We conclude that the wavelet-based analysis delivers comparable results to the established MR histomorphometric measurements. The average deviation in trabecular thickness was less than one pixel size between the wavelet and the standard approach for both MR and μCT analysis. Since the wavelet-based method is less sensitive to image noise, we see an advantage of wavelet analysis of trabecular bone for MR imaging when going to higher resolution.

  11. Using densities of estimators to compare pharmacokinetic experiments.

    PubMed

    Pronzato, L; Pázman, A

    2001-05-01

    Different designs of experiments are compared according to the shape, concentration, etc. of the densities of the least-squares estimator (conditional densities, marginal densities). In contrast with D'Argenio (J. Pharmacokinetics Biopharmaceutics 9(16) (1981) 739-756), where marginal densities have been obtained by simulations, we present here a faster procedure based on well-elaborated approximations of the densities. The stress is on the graphical presentation of the results.

  12. Conditional probability density function estimation with sigmoidal neural networks.

    PubMed

    Sarajedini, A; Hecht-Nielsen, R; Chau, P M

    1999-01-01

    Real-world problems can often be couched in terms of conditional probability density function estimation. In particular, pattern recognition, signal detection, and financial prediction are among the multitude of applications requiring conditional density estimation. Previous developments in this direction have used neural nets to estimate statistics of the distribution or the marginal or joint distributions of the input-output variables. We have modified the joint distribution estimating sigmoidal neural network to estimate the conditional distribution. Thus, the probability density of the output conditioned on the inputs is estimated using a neural network. We have derived and implemented the learning laws to train the network. We show that this network has computational advantages over a brute force ratio of joint and marginal distributions. We also compare its performance to a kernel conditional density estimator in a larger scale (higher dimensional) problem simulating more realistic conditions.

  13. A Wavelet-Based Approach to Fall Detection

    PubMed Central

    Palmerini, Luca; Bagalà, Fabio; Zanetti, Andrea; Klenk, Jochen; Becker, Clemens; Cappello, Angelo

    2015-01-01

    Falls among older people are a widely documented public health problem. Automatic fall detection has recently gained huge importance because it could allow for the immediate communication of falls to medical assistance. The aim of this work is to present a novel wavelet-based approach to fall detection, focusing on the impact phase and using a dataset of real-world falls. Since recorded falls result in a non-stationary signal, a wavelet transform was chosen to examine fall patterns. The idea is to consider the average fall pattern as the “prototype fall”. In order to detect falls, every acceleration signal can be compared to this prototype through wavelet analysis. The similarity of the recorded signal with the prototype fall is a feature that can be used in order to determine the difference between falls and daily activities. The discriminative ability of this feature is evaluated on real-world data. It outperforms other features that are commonly used in fall detection studies, with an Area Under the Curve of 0.918. This result suggests that the proposed wavelet-based feature is promising and future studies could use this feature (in combination with others considering different fall phases) in order to improve the performance of fall detection algorithms. PMID:26007719

  14. Wavelet-based moment invariants for pattern recognition

    NASA Astrophysics Data System (ADS)

    Chen, Guangyi; Xie, Wenfang

    2011-07-01

    Moment invariants have received a lot of attention as features for identification and inspection of two-dimensional shapes. In this paper, two sets of novel moments are proposed by using the auto-correlation of wavelet functions and the dual-tree complex wavelet functions. It is well known that the wavelet transform lacks the property of shift invariance. A little shift in the input signal will cause very different output wavelet coefficients. The autocorrelation of wavelet functions and the dual-tree complex wavelet functions, on the other hand, are shift-invariant, which is very important in pattern recognition. Rotation invariance is the major concern in this paper, while translation invariance and scale invariance can be achieved by standard normalization techniques. The Gaussian white noise is added to the noise-free images and the noise levels vary with different signal-to-noise ratios. Experimental results conducted in this paper show that the proposed wavelet-based moments outperform Zernike's moments and the Fourier-wavelet descriptor for pattern recognition under different rotation angles and different noise levels. It can be seen that the proposed wavelet-based moments can do an excellent job even when the noise levels are very high.

  15. A wavelet-based approach to fall detection.

    PubMed

    Palmerini, Luca; Bagalà, Fabio; Zanetti, Andrea; Klenk, Jochen; Becker, Clemens; Cappello, Angelo

    2015-01-01

    Falls among older people are a widely documented public health problem. Automatic fall detection has recently gained huge importance because it could allow for the immediate communication of falls to medical assistance. The aim of this work is to present a novel wavelet-based approach to fall detection, focusing on the impact phase and using a dataset of real-world falls. Since recorded falls result in a non-stationary signal, a wavelet transform was chosen to examine fall patterns. The idea is to consider the average fall pattern as the "prototype fall". In order to detect falls, every acceleration signal can be compared to this prototype through wavelet analysis. The similarity of the recorded signal with the prototype fall is a feature that can be used in order to determine the difference between falls and daily activities. The discriminative ability of this feature is evaluated on real-world data. It outperforms other features that are commonly used in fall detection studies, with an Area Under the Curve of 0.918. This result suggests that the proposed wavelet-based feature is promising and future studies could use this feature (in combination with others considering different fall phases) in order to improve the performance of fall detection algorithms. PMID:26007719
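
    The core idea (comparing an acceleration signal with a prototype fall at a chosen wavelet scale) can be sketched as below. This is a hedged illustration, not the published algorithm: the prototype shape, wavelet, level and retained scale are all assumptions.

```python
import numpy as np
import pywt

def band_limited(x, wavelet='db4', level=4, keep=2):
    """Reconstruct x keeping only the detail coefficients at one level."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    kept = [np.zeros_like(c) for c in coeffs]
    kept[keep] = coeffs[keep]
    return pywt.waverec(kept, wavelet)[:len(x)]

def fall_feature(signal, prototype):
    s, p = band_limited(signal), band_limited(prototype)
    s = (s - s.mean()) / (s.std() + 1e-12)
    p = (p - p.mean()) / (p.std() + 1e-12)
    return np.max(np.correlate(s, p, mode='valid')) / len(p)

rng = np.random.default_rng(5)
prototype = np.hanning(128) * 3.0                     # placeholder prototype impact shape
activity = rng.standard_normal(1024)                  # daily-activity acceleration magnitude
fall = activity.copy()
fall[500:628] += prototype                            # same signal with an embedded impact
print(fall_feature(activity, prototype), fall_feature(fall, prototype))
```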

  16. Density estimation using the trapping web design: A geometric analysis

    USGS Publications Warehouse

    Link, W.A.; Barker, R.J.

    1994-01-01

    Population densities for small mammal and arthropod populations can be estimated using capture frequencies for a web of traps. A conceptually simple geometric analysis that avoids the need to estimate a point on a density function is proposed. This analysis incorporates data from the outermost rings of traps, explaining large capture frequencies in these rings rather than truncating them from the analysis.

  17. Nonparametric estimation of plant density by the distance method

    USGS Publications Warehouse

    Patil, S.A.; Burnham, K.P.; Kovner, J.L.

    1979-01-01

    A relation between the plant density and the probability density function of the nearest neighbor distance (squared) from a random point is established under fairly broad conditions. Based upon this relationship, a nonparametric estimator for the plant density is developed and presented in terms of order statistics. Consistency and asymptotic normality of the estimator are discussed. An interval estimator for the density is obtained. The modifications of this estimator and its variance are given when the distribution is truncated. Simulation results are presented for regular, random and aggregated populations to illustrate the nonparametric estimator and its variance. A numerical example from field data is given. Merits and deficiencies of the estimator are discussed with regard to its robustness and variance.
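
    For context, a sketch of the classical point-to-plant distance estimator under complete spatial randomness is given below; the nonparametric order-statistics estimator proposed in the record above relaxes this distributional assumption. All values here (density, window size, number of sample points) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)
true_density = 0.05                                   # plants per unit area (assumed)
side = 200.0
n_plants = rng.poisson(true_density * side * side)
plants = rng.uniform(0, side, size=(n_plants, 2))

sample_points = rng.uniform(20, side - 20, size=(100, 2))    # random sampling points
d2 = ((sample_points[:, None, :] - plants[None, :, :]) ** 2).sum(axis=2)
r2 = d2.min(axis=1)                                   # squared nearest-plant distances

# Under a Poisson pattern, pi * r^2 is exponential, giving the ML estimate below.
density_hat = len(r2) / (np.pi * r2.sum())
print(true_density, density_hat)
```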

  18. Quantitative comparison of estimations for the density within pedestrian streams

    NASA Astrophysics Data System (ADS)

    Tordeux, Antoine; Zhang, Jun; Steffen, Bernhard; Seyfried, Armin

    2015-06-01

    In this work, the precision of estimators for the density within unidirectional pedestrian streams is evaluated. The analysis is done in controllable systems where the density is homogeneous and all the characteristics are known. The objectives are to estimate the global density with local measurements or density profile at high spatial resolution with no bias and low fluctuations. The classical estimation using discrete numbers of observed pedestrians is compared to continuous estimators using spacing distance, Voronoi diagram, Gaussian kernel as well as maximum likelihood. Mean squared error and bias of the estimators are calculated from empirical data and Monte Carlo experiments. The results show quantitatively how continuous approaches improve the precision of the estimations.

  19. Morphology driven density distribution estimation for small bodies

    NASA Astrophysics Data System (ADS)

    Takahashi, Yu; Scheeres, D. J.

    2014-05-01

    We explore methods to detect and characterize the internal mass distribution of small bodies using the gravity field and shape of the body as data, both of which are determined from the orbit determination process. The discrepancies in the spherical harmonic coefficients are compared between the measured gravity field and the gravity field generated by a homogeneous density assumption. The discrepancies are shown for six different heterogeneous density distribution models and two small bodies, namely 1999 KW4 and Castalia. Using these differences, a constraint is enforced on the internal density distribution of an asteroid, creating an archive of characteristics associated with the same-degree spherical harmonic coefficients. Following the initial characterization of the heterogeneous density distribution models, a generalized density estimation method to recover the hypothetical (i.e., nominal) density distribution of the body is considered. We propose this method as the block density estimation, which dissects the entire body into small slivers and blocks, each homogeneous within itself, to estimate their density values. Significant similarities are observed between the block model and mass concentrations. However, the block model does not suffer errors from shape mismodeling, and the number of blocks can be controlled with ease to yield a unique solution to the density distribution. The results show that the block density estimation approximates the given gravity field well, yielding higher accuracy as the resolution of the density map is increased. The estimated density distribution also computes the surface potential and acceleration within 10% for the particular cases tested in the simulations, an accuracy that is not achievable with the conventional spherical harmonic gravity field. The block density estimation can be a useful tool for recovering the internal density distribution of small bodies for scientific reasons and for mapping out the gravity field.

  20. An Adaptive Wavelet-Based Denoising Algorithm for Enhancing Speech in Non-stationary Noise Environment

    NASA Astrophysics Data System (ADS)

    Wang, Kun-Ching

    Traditional wavelet-based speech enhancement algorithms are ineffective in the presence of highly non-stationary noise because of the difficulties in accurately estimating the local noise spectrum. In this paper, a simple method of noise estimation employing a voice activity detector (VAD) is proposed. We can improve the output of a wavelet-based speech enhancement algorithm in the presence of random noise bursts according to the results of the VAD decision. The noisy speech is first preprocessed using bark-scale wavelet packet decomposition (BSWPD) to convert a noisy signal into wavelet coefficients (WCs). It is found that a VAD using the bark-scale spectral entropy parameter, called BS-Entropy, is superior to other energy-based approaches, especially at variable noise levels. The wavelet coefficient threshold (WCT) of each subband is then temporally adjusted according to the result of the VAD approach. In a speech-dominated frame, the speech is categorized into either a voiced frame or an unvoiced frame. A voiced frame possesses a strong tone-like spectrum in lower subbands, so the WCs of the lower bands must be preserved. On the contrary, the WCT tends to increase in the lower bands if the speech is categorized as unvoiced. In a noise-dominated frame, the background noise can be almost completely removed by increasing the WCT. Objective and subjective experimental results are then used to evaluate the proposed system. The experiments show that this algorithm is valid under various noise conditions, especially for colored noise and non-stationary noise conditions.
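
    A much-simplified sketch of the thresholding idea (per-frame wavelet decomposition with a soft threshold raised in noise-dominated frames and lowered in speech-dominated frames) is shown below, assuming PyWavelets. It uses an ordinary dyadic decomposition rather than the bark-scale wavelet packet tree, and the VAD decision and scaling factors are stand-ins.

```python
import numpy as np
import pywt

def denoise_frame(frame, speech_present, wavelet='db8', level=4):
    coeffs = pywt.wavedec(frame, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745            # noise level from finest scale
    base = sigma * np.sqrt(2 * np.log(len(frame)))             # universal threshold
    scale = 0.4 if speech_present else 2.0                      # assumed VAD-driven adjustment
    out = [coeffs[0]] + [pywt.threshold(c, scale * base, mode='soft') for c in coeffs[1:]]
    return pywt.waverec(out, wavelet)[:len(frame)]

frame = np.random.default_rng(7).standard_normal(512) * 0.1    # stand-in noisy frame
clean = denoise_frame(frame, speech_present=False)
```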

  1. Wavelet-based image analysis system for soil texture analysis

    NASA Astrophysics Data System (ADS)

    Sun, Yun; Long, Zhiling; Jang, Ping-Rey; Plodinec, M. John

    2003-05-01

    Soil texture is defined as the relative proportion of clay, silt and sand found in a given soil sample. It is an important physical property of soil that affects such phenomena as plant growth and agricultural fertility. Traditional methods used to determine soil texture are either time consuming (hydrometer), or subjective and experience-demanding (field tactile evaluation). Considering that textural patterns observed at soil surfaces are uniquely associated with soil textures, we propose an innovative approach to soil texture analysis, in which wavelet frames-based features representing texture contents of soil images are extracted and categorized by applying a maximum likelihood criterion. The soil texture analysis system has been tested successfully with an accuracy of 91% in classifying soil samples into one of three general categories of soil textures. In comparison with the common methods, this wavelet-based image analysis approach is convenient, efficient, fast, and objective.

  2. A Wavelet-Based Methodology for Grinding Wheel Condition Monitoring

    SciTech Connect

    Liao, T. W.; Ting, C.F.; Qu, Jun; Blau, Peter Julian

    2007-01-01

    Grinding wheel surface condition changes as more material is removed. This paper presents a wavelet-based methodology for grinding wheel condition monitoring based on acoustic emission (AE) signals. Grinding experiments in creep feed mode were conducted to grind alumina specimens with a resinoid-bonded diamond wheel using two different conditions. During the experiments, AE signals were collected when the wheel was 'sharp' and when the wheel was 'dull'. Discriminant features were then extracted from each raw AE signal segment using the discrete wavelet decomposition procedure. An adaptive genetic clustering algorithm was finally applied to the extracted features in order to distinguish different states of grinding wheel condition. The test results indicate that the proposed methodology can achieve 97% clustering accuracy for the high material removal rate condition, 86.7% for the low material removal rate condition, and 76.7% for the combined grinding conditions if the base wavelet, the decomposition level, and the GA parameters are properly selected.

  3. Wavelet-based multifractal analysis of laser biopsy imagery

    NASA Astrophysics Data System (ADS)

    Jagtap, Jaidip; Ghosh, Sayantan; Panigrahi, Prasanta K.; Pradhan, Asima

    2012-03-01

    In this work, we report a wavelet-based multi-fractal study of images of dysplastic and neoplastic HE-stained human cervical tissues captured in transmission mode when illuminated by laser light (He-Ne, 632.8 nm). It is well known that the morphological changes occurring during the progression of diseases like cancer manifest in their optical properties, which can be probed for differentiating the various stages of cancer. Here, we use the multi-resolution properties of the wavelet transform to analyze the optical changes. For this, we have used a novel laser imagery technique which provides us with a composite image of the absorption by the different cellular organelles. As the disease progresses, due to the growth of new cells, the ratio of organelle to cellular volume changes, manifesting in the laser imagery of such tissues. In order to develop a metric that can quantify the changes in such systems, we make use of wavelet-based fluctuation analysis. The changing self-similarity during disease progression can be well characterized by the Hurst exponent and the scaling exponent. Due to the use of the Daubechies family of wavelet kernels, we can extract polynomial trends of different orders, which help us characterize the underlying processes effectively. In this study, we observe that the Hurst exponent decreases as the cancer progresses. This measure could be used to differentiate between different stages of cancer, which could lead to the development of a novel non-invasive method for cancer detection and characterization.

  4. Wavelet based free-form deformations for nonrigid registration

    NASA Astrophysics Data System (ADS)

    Sun, Wei; Niessen, Wiro J.; Klein, Stefan

    2014-03-01

    In nonrigid registration, deformations may take place on the coarse and fine scales. For the conventional B-splines based free-form deformation (FFD) registration, these coarse- and fine-scale deformations are all represented by basis functions of a single scale. Meanwhile, wavelets have been proposed as a signal representation suitable for multi-scale problems. Wavelet analysis leads to a unique decomposition of a signal into its coarse- and fine-scale components. Potentially, this could therefore be useful for image registration. In this work, we investigate whether a wavelet-based FFD model has advantages for nonrigid image registration. We use a B-splines based wavelet, as defined by Cai and Wang [1]. This wavelet is expressed as a linear combination of B-spline basis functions. Derived from the original B-spline function, this wavelet is smooth, differentiable, and compactly supported. The basis functions of this wavelet are orthogonal across scales in Sobolev space. This wavelet was previously used for registration in computer vision, in 2D optical flow problems [2], but it was not compared with the conventional B-spline FFD in medical image registration problems. An advantage of choosing this B-splines based wavelet model is that the space of allowable deformation is exactly equivalent to that of the traditional B-spline. The wavelet transformation is essentially a (linear) reparameterization of the B-spline transformation model. Experiments on 10 CT lung and 18 T1-weighted MRI brain datasets show that wavelet based registration leads to smoother deformation fields than traditional B-splines based registration, while achieving better accuracy.

  5. Efficient particle filtering via sparse kernel density estimation.

    PubMed

    Banerjee, Amit; Burlina, Philippe

    2010-09-01

    Particle filters (PFs) are Bayesian filters capable of modeling nonlinear, non-Gaussian, and nonstationary dynamical systems. Recent research in PFs has investigated ways to appropriately sample from the posterior distribution, maintain multiple hypotheses, and alleviate computational costs while preserving tracking accuracy. To address these issues, a novel utilization of the support vector data description (SVDD) density estimation method within the particle filtering framework is presented. The SVDD density estimate can be integrated into a wide range of PFs to realize several benefits. It yields a sparse representation of the posterior density that reduces the computational complexity of the PF. The proposed approach also provides an analytical expression for the posterior distribution that can be used to identify its modes for maintaining multiple hypotheses and computing the MAP estimate, and to directly sample from the posterior. We present several experiments that demonstrate the advantages of incorporating a sparse kernel density estimate in a particle filter.

  6. Optimum nonparametric estimation of population density based on ordered distances

    USGS Publications Warehouse

    Patil, S.A.; Kovner, J.L.; Burnham, Kenneth P.

    1982-01-01

    The asymptotic mean and error mean square are determined for the nonparametric estimator of plant density by distance sampling proposed by Patil, Burnham and Kovner (1979, Biometrics 35, 597-604). On the basis of these formulae, a bias-reduced version of this estimator is given, and the specific form that gives minimum mean square error is determined under varying assumptions about the true probability density function of the sampled data. Extension is given to line-transect sampling.

  7. Evaluating parasite densities and estimation of parameters in transmission systems.

    PubMed

    Heinzmann, D; Torgerson, P R

    2008-09-01

    Mathematical modelling of parasite transmission systems can provide useful information about host parasite interactions and biology and parasite population dynamics. In addition good predictive models may assist in designing control programmes to reduce the burden of human and animal disease. Model building is only the first part of the process. These models then need to be confronted with data to obtain parameter estimates and the accuracy of these estimates has to be evaluated. Estimation of parasite densities is central to this. Parasite density estimates can include the proportion of hosts infected with parasites (prevalence) or estimates of the parasite biomass within the host population (abundance or intensity estimates). Parasite density estimation is often complicated by highly aggregated distributions of parasites within the hosts. This causes additional challenges when calculating transmission parameters. Using Echinococcus spp. as a model organism, this manuscript gives a brief overview of the types of descriptors of parasite densities, how to estimate them and on the use of these estimates in a transmission model.

  8. Evaluation of wolf density estimation from radiotelemetry data

    USGS Publications Warehouse

    Burch, J.W.; Adams, L.G.; Follmann, E.H.; Rexstad, E.A.

    2005-01-01

    Density estimation of wolves (Canis lupus) requires a count of individuals and an estimate of the area those individuals inhabit. With radiomarked wolves, the count is straightforward but estimation of the area is more difficult and often given inadequate attention. The population area, based on the mosaic of pack territories, is influenced by sampling intensity similar to the estimation of individual home ranges. If sampling intensity is low, population area will be underestimated and wolf density will be inflated. Using data from studies in Denali National Park and Preserve, Alaska, we investigated these relationships using Monte Carlo simulation to evaluate effects of radiolocation effort and number of marked packs on density estimation. As the number of adjoining pack home ranges increased, fewer relocations were necessary to define a given percentage of population area. We present recommendations for monitoring wolves via radiotelemetry.

  9. Estimating maritime snow density from seasonal climate variables

    NASA Astrophysics Data System (ADS)

    Bormann, K. J.; Evans, J. P.; Westra, S.; McCabe, M. F.; Painter, T. H.

    2013-12-01

    Snow density is a complex parameter that influences thermal, optical and mechanical snow properties and processes. Depth-integrated properties of snowpacks, including snow density, remain very difficult to obtain remotely. Observations of snow density are therefore limited to in-situ point locations. In maritime snowfields such as those in Australia and in parts of the western US, snow densification rates are enhanced and inter-annual variability is high compared to continental snow regions. In-situ snow observation networks in maritime climates often cannot characterise the variability in snowpack properties at spatial and temporal resolutions required for many modelling and observations-based applications. Regionalised density-time curves are commonly used to approximate snow densities over broad areas. However, these relationships have limited spatial applicability and do not allow for interannual variability in densification rates, which are important in maritime environments. Physically-based density models are relatively complex and rely on empirical algorithms derived from limited observations, which may not represent the variability observed in maritime snow. In this study, seasonal climate factors were used to estimate late season snow densities using multiple linear regressions. Daily snow density estimates were then obtained by projecting linearly to fresh snow densities at the start of the season. When applied spatially, the daily snow density fields compare well to in-situ observations across multiple sites in Australia, and provide a new method for extrapolating existing snow density datasets in maritime snow environments. While the relatively simple algorithm for estimating snow densities has been used in this study to constrain snowmelt rates in a temperature-index model, the estimates may also be used to incorporate variability in snow depth to snow water equivalent conversion.
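
    The statistical machinery described (multiple linear regression of late-season density on seasonal climate factors, then linear projection back to a fresh-snow density) is simple enough to sketch; the predictors, coefficients and density values below are synthetic placeholders, not the fitted Australian/US relationships.

```python
import numpy as np

rng = np.random.default_rng(8)
n_years = 20
X = np.column_stack([np.ones(n_years),
                     rng.normal(0, 1, n_years),      # e.g. mean winter air temperature
                     rng.normal(0, 1, n_years)])     # e.g. cumulative precipitation
rho_late = 450 + 30 * X[:, 1] - 15 * X[:, 2] + rng.normal(0, 10, n_years)  # kg/m^3

beta, *_ = np.linalg.lstsq(X, rho_late, rcond=None)  # fitted regression coefficients

rho_fresh, season_days = 100.0, 180                  # assumed fresh-snow density, season length
rho_late_hat = X[0] @ beta                           # late-season prediction for one year
daily_density = np.linspace(rho_fresh, rho_late_hat, season_days)   # linear projection
```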

  10. Mean thermospheric density estimation derived from satellite constellations

    NASA Astrophysics Data System (ADS)

    Li, Alan; Close, Sigrid

    2015-10-01

    This paper defines a method to estimate the mean neutral density of the thermosphere given many satellites of the same form factor travelling in similar regions of space. A priori information to the estimation scheme includes ranging measurements and a general knowledge of the onboard ADACS, although precise measurements are not required for the latter. The estimation procedure seeks to utilize order statistics to estimate the probability of the minimum drag coefficient achievable, and amalgamating all measurements across multiple time periods allows estimation of the probability density of the ballistic factor itself. The model does not depend on prior models of the atmosphere; instead we require estimation of the minimum achievable drag coefficient, which is based upon physics models of simple shapes in free molecular flow. From the statistics of the minimum, error statistics on the estimated atmospheric density can be calculated. Barring measurement errors from the ranging procedure itself, it is shown that with a constellation of 10 satellites, we can achieve a standard deviation of roughly 4% on the estimated mean neutral density. As more satellites are added to the constellation, the result converges towards the lower limit of the achievable drag coefficient, and accuracy becomes limited by the quality of the ranging measurements and the probability of the accommodation coefficient. Comparisons are made to existing atmospheric models such as NRLMSISE-00 and JB2006.

  11. Non-local crime density estimation incorporating housing information

    PubMed Central

    Woodworth, J. T.; Mohler, G. O.; Bertozzi, A. L.; Brantingham, P. J.

    2014-01-01

    Given a discrete sample of event locations, we wish to produce a probability density that models the relative probability of events occurring in a spatial domain. Standard density estimation techniques do not incorporate priors informed by spatial data. Such methods can result in assigning significant positive probability to locations where events cannot realistically occur. In particular, when modelling residential burglaries, standard density estimation can predict residential burglaries occurring where there are no residences. Incorporating the spatial data can inform the valid region for the density. When modelling very few events, additional priors can help to correctly fill in the gaps. Learning and enforcing correlation between spatial data and event data can yield better estimates from fewer events. We propose a non-local version of maximum penalized likelihood estimation based on the H1 Sobolev seminorm regularizer that computes non-local weights from spatial data to obtain more spatially accurate density estimates. We evaluate this method in application to a residential burglary dataset from San Fernando Valley with the non-local weights informed by housing data or a satellite image. PMID:25288817

  12. Atmospheric Density Corrections Estimated from Fitted Drag Coefficients

    NASA Astrophysics Data System (ADS)

    McLaughlin, C. A.; Lechtenberg, T. F.; Mance, S. R.; Mehta, P.

    2010-12-01

    Fitted drag coefficients estimated using GEODYN, the NASA Goddard Space Flight Center Precision Orbit Determination and Geodetic Parameter Estimation Program, are used to create density corrections. The drag coefficients were estimated for Stella, Starlette and GFZ using satellite laser ranging (SLR) measurements; and for GEOSAT Follow-On (GFO) using SLR, Doppler, and altimeter crossover measurements. The data analyzed covers years ranging from 2000 to 2004 for Stella and Starlette, 2000 to 2002 and 2005 for GFO, and 1995 to 1997 for GFZ. The drag coefficient was estimated every eight hours. The drag coefficients over the course of a year show a consistent variation about the theoretical and yearly average values that primarily represents a semi-annual/seasonal error in the atmospheric density models used. The atmospheric density models examined were NRLMSISE-00 and MSIS-86. The annual structure of the major variations was consistent among all the satellites for a given year and consistent among all the years examined. The fitted drag coefficients can be converted into density corrections every eight hours along the orbit of the satellites. In addition, drag coefficients estimated more frequently can provide a higher frequency of density correction.

  13. Open-cluster density profiles derived using a kernel estimator

    NASA Astrophysics Data System (ADS)

    Seleznev, Anton F.

    2016-03-01

    Surface and spatial radial density profiles in open clusters are derived using a kernel estimator method. Formulae are obtained for the contribution of every star into the spatial density profile. The evaluation of spatial density profiles is tested against open-cluster models from N-body experiments with N = 500. Surface density profiles are derived for seven open clusters (NGC 1502, 1960, 2287, 2516, 2682, 6819 and 6939) using Two-Micron All-Sky Survey data and for different limiting magnitudes. The selection of an optimal kernel half-width is discussed. It is shown that open-cluster radius estimates hardly depend on the kernel half-width. Hints of stellar mass segregation and structural features indicating cluster non-stationarity in the regular force field are found. A comparison with other investigations shows that the data on open-cluster sizes are often underestimated. The existence of an extended corona around the open cluster NGC 6939 was confirmed. A combined function composed of the King density profile for the cluster core and the uniform sphere for the cluster corona is shown to be a better approximation of the surface radial density profile. The King function alone does not reproduce surface density profiles of sample clusters properly. The number of stars, the cluster masses and the tidal radii in the Galactic gravitational field for the sample clusters are estimated. It is shown that NGC 6819 and 6939 are extended beyond their tidal surfaces.

  14. An image adaptive, wavelet-based watermarking of digital images

    NASA Astrophysics Data System (ADS)

    Agreste, Santa; Andaloro, Guido; Prestipino, Daniela; Puccio, Luigia

    2007-12-01

    In digital management, multimedia content and data can easily be used in an illegal way--being copied, modified and distributed again. Copyright protection, intellectual and material rights protection for authors, owners, buyers and distributors, and the authenticity of content are crucial factors in solving an urgent and real problem. In such a scenario, digital watermarking techniques are emerging as a valid solution. In this paper, we describe an algorithm--called WM2.0--for an invisible watermark: private, strong, wavelet-based and developed for digital image protection and authenticity. The use of the discrete wavelet transform (DWT) is motivated by its good time-frequency features and its good match with human visual system characteristics. These two combined elements are important in building an invisible and robust watermark. WM2.0 works on a dual scheme: watermark embedding and watermark detection. The watermark is embedded into high-frequency DWT components of a specific sub-image and is calculated in correlation with the image features and statistical properties. Watermark detection applies a re-synchronization between the original and the watermarked image. The correlation between the watermarked DWT coefficients and the watermark signal is calculated according to the Neyman-Pearson statistical criterion. Experimentation on a large set of different images has shown the scheme to be resistant against geometric, filtering and StirMark attacks with a low rate of false alarms.
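
    A generic sketch of this style of embedding and detection (not WM2.0 itself) is given below, assuming PyWavelets: a private pseudo-random mark is added to one high-frequency DWT sub-band and detected by correlation, with the decision threshold left to a Neyman-Pearson false-alarm criterion. The strength alpha and the choice of sub-band are assumptions.

```python
import numpy as np
import pywt

rng = np.random.default_rng(9)
image = rng.uniform(0, 255, size=(256, 256))          # stand-in for a grayscale image

cA, (cH, cV, cD) = pywt.dwt2(image, 'haar')
watermark = rng.choice([-1.0, 1.0], size=cD.shape)    # private pseudo-random mark
alpha = 2.0                                           # embedding strength (assumed)
cD_marked = cD + alpha * watermark
marked = pywt.idwt2((cA, (cH, cV, cD_marked)), 'haar')

# Detection: correlate the suspect image's diagonal detail band with the known mark.
_, (_, _, cD_suspect) = pywt.dwt2(marked, 'haar')
corr = np.mean(cD_suspect * watermark)
print(corr)   # compare against a threshold chosen for a target false-alarm rate
```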

  15. Complex wavelet based speckle reduction using multiple ultrasound images

    NASA Astrophysics Data System (ADS)

    Uddin, Muhammad Shahin; Tahtali, Murat; Pickering, Mark R.

    2014-04-01

    Ultrasound imaging is a dominant tool for diagnosis and evaluation in medical imaging systems. However, its major limitation is that the images it produces suffer from low quality due to the presence of speckle noise; reducing this noise is essential for better clinical diagnoses. The key purpose of a speckle reduction algorithm is to obtain a speckle-free high-quality image whilst preserving important anatomical features, such as sharp edges. As this can be better achieved using multiple ultrasound images rather than a single image, we introduce a complex wavelet-based algorithm for speckle reduction and sharp edge preservation of two-dimensional (2D) ultrasound images using multiple ultrasound images. The proposed algorithm does not rely on straightforward averaging of multiple images; rather, in each scale, overlapped wavelet detail coefficients are weighted using dynamic threshold values and then reconstructed by averaging. Validation of the proposed algorithm is carried out using simulated and real images with synthetic speckle noise and phantom data consisting of multiple ultrasound images, with the experimental results demonstrating that speckle noise is significantly reduced whilst sharp edges are preserved without discernible distortions. The proposed approach performs better both qualitatively and quantitatively than previous existing approaches.

  16. A wavelet-based method for multispectral face recognition

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng; Zhang, Chaoyang; Zhou, Zhaoxian

    2012-06-01

    A wavelet-based method is proposed for multispectral face recognition in this paper. The Gabor wavelet transform is a common tool for orientation analysis of a 2D image, whereas the Hamming distance is an efficient distance measurement for face identification. Specifically, at each frequency band, an index number representing the strongest orientational response is selected and then encoded in binary format to favor the Hamming distance calculation. Multiband orientation bit codes are then organized into a face pattern byte (FPB) by using order statistics. With the FPB, Hamming distances are calculated and compared to achieve face identification. The FPB algorithm was initially created using thermal images, while the EBGM method originated with visible images. When two or more spectral images from the same subject are available, the identification accuracy and reliability can be enhanced using score fusion. We compare the identification performance of applying five recognition algorithms to the three-band (visible, near infrared, thermal) face images, and explore the fusion performance of combining the multiple scores from three recognition algorithms and from three-band face images, respectively. The experimental results show that the FPB is the best recognition algorithm, the HMM yields the best fusion result, and the thermal dataset results in the best fusion performance compared to the other two datasets.

  17. Wavelet-based acoustic emission detection method with adaptive thresholding

    NASA Astrophysics Data System (ADS)

    Menon, Sunil; Schoess, Jeffrey N.; Hamza, Rida; Busch, Darryl

    2000-06-01

    Reductions in Navy maintenance budgets and available personnel have dictated the need to transition from time-based to 'condition-based' maintenance. Achieving this will require new enabling diagnostic technologies. One such technology, the use of acoustic emission for the early detection of helicopter rotor head dynamic component faults, has been investigated by Honeywell Technology Center for its rotor acoustic monitoring system (RAMS). This ambitious, 38-month, proof-of-concept effort, which was a part of the Naval Surface Warfare Center Air Vehicle Diagnostics System program, culminated in a successful three-week flight test of the RAMS system at Patuxent River Flight Test Center in September 1997. The flight test results demonstrated that stress-wave acoustic emission technology can detect signals equivalent to small fatigue cracks in rotor head components and can do so across the rotating articulated rotor head joints and in the presence of other background acoustic noise generated during flight operation. This paper presents the results of stress wave data analysis of the flight-test dataset using wavelet-based techniques to assess background operational noise vs. machinery failure detection results.

  18. Wavelet-based characterization of gait signal for neurological abnormalities.

    PubMed

    Baratin, E; Sugavaneswaran, L; Umapathy, K; Ioana, C; Krishnan, S

    2015-02-01

    Studies conducted by the World Health Organization (WHO) indicate that over one billion people suffer from neurological disorders worldwide, and the lack of efficient diagnostic procedures affects their therapeutic interventions. Characterizing certain pathologies of motor control for facilitating their diagnosis can be useful in quantitatively monitoring disease progression and efficient treatment planning. As a suitable directive, we introduce a wavelet-based scheme for effective characterization of gait associated with certain neurological disorders. In addition, since the data were recorded from a dynamic process, this work also investigates the need for gait signal re-sampling prior to identification of signal markers in the presence of pathologies. To benefit automated discrimination of gait data, certain characteristic features are extracted from the wavelet-transformed signals. The performance of the proposed approach was evaluated using a database consisting of 15 Parkinson's disease (PD), 20 Huntington's disease (HD), 13 amyotrophic lateral sclerosis (ALS) and 16 healthy control subjects, and an average classification accuracy of 85% is achieved using an unbiased cross-validation strategy. The obtained results demonstrate the potential of the proposed methodology for computer-aided diagnosis and automatic characterization of certain neurological disorders. PMID:25661004

  19. Wavelet-based multiresolution analysis of Wivenhoe Dam water temperatures

    NASA Astrophysics Data System (ADS)

    Percival, D. B.; Lennox, S. M.; Wang, Y.-G.; Darnell, R. E.

    2011-05-01

    Water temperature measurements from Wivenhoe Dam offer a unique opportunity for studying fluctuations of temperatures in a subtropical dam as a function of time and depth. Cursory examination of the data indicate a complicated structure across both time and depth. We propose simplifying the task of describing these data by breaking the time series at each depth into physically meaningful components that individually capture daily, subannual, and annual (DSA) variations. Precise definitions for each component are formulated in terms of a wavelet-based multiresolution analysis. The DSA components are approximately pairwise uncorrelated within a given depth and between different depths. They also satisfy an additive property in that their sum is exactly equal to the original time series. Each component is based upon a set of coefficients that decomposes the sample variance of each time series exactly across time and that can be used to study both time-varying variances of water temperature at each depth and time-varying correlations between temperatures at different depths. Each DSA component is amenable for studying a certain aspect of the relationship between the series at different depths. The daily component in general is weakly correlated between depths, including those that are adjacent to one another. The subannual component quantifies seasonal effects and in particular isolates phenomena associated with the thermocline, thus simplifying its study across time. The annual component can be used for a trend analysis. The descriptive analysis provided by the DSA decomposition is a useful precursor to a more formal statistical analysis.
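
    The additive decomposition invoked here can be illustrated with a generic wavelet multiresolution analysis, assuming PyWavelets: each component is the reconstruction of one coefficient level, and by linearity of the inverse transform the components sum back to the original series. Which levels correspond to daily, subannual and annual variation would depend on the sampling interval; the wavelet and level below are placeholders.

```python
import numpy as np
import pywt

x = np.random.default_rng(10).standard_normal(1024)   # stand-in for a temperature series
wavelet, level = 'db4', 6
coeffs = pywt.wavedec(x, wavelet, level=level)

components = []
for i in range(len(coeffs)):
    kept = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
    components.append(pywt.waverec(kept, wavelet)[:len(x)])

reconstruction = np.sum(components, axis=0)
print(np.allclose(reconstruction, x))                  # True: the components are additive
```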

  20. Coarse-to-fine wavelet-based airport detection

    NASA Astrophysics Data System (ADS)

    Li, Cheng; Wang, Shuigen; Pang, Zhaofeng; Zhao, Baojun

    2015-10-01

    Airport detection in optical remote sensing images has attracted great interest in applications such as military optical reconnaissance and traffic control. However, most of the popular techniques for airport detection from optical remote sensing images have three weaknesses: 1) due to the characteristics of optical images, the detection results are often affected by imaging conditions, such as weather and imaging distortion; 2) optical images contain comprehensive information about targets, so it is difficult to extract robust features (e.g., intensity and textural information) to represent the airport area; and 3) the high resolution results in a large data volume, which limits real-time processing. Most of the previous works mainly focus on solving one of those problems, and thus the previous methods cannot achieve a balance of performance and complexity. In this paper, we propose a novel coarse-to-fine airport detection framework to solve the aforementioned three issues using wavelet coefficients. The framework includes two stages: 1) an efficient wavelet-based feature extraction is adopted for multi-scale textural feature representation, and a support vector machine (SVM) is exploited for classifying and coarsely deciding airport candidate regions; and then 2) refined line segment detection is used to obtain the runway and landing field of the airport. Finally, airport recognition is achieved by applying the fine runway positioning to the candidate regions. Experimental results show that the proposed approach outperforms the existing algorithms in terms of detection accuracy and processing efficiency.

  1. A framework for evaluating wavelet based watermarking for scalable coded digital item adaptation attacks

    NASA Astrophysics Data System (ADS)

    Bhowmik, Deepayan; Abhayaratne, Charith

    2009-02-01

    A framework for evaluating wavelet-based watermarking schemes against scalable coded visual media content adaptation attacks is presented. The framework, Watermark Evaluation Bench for Content Adaptation Modes (WEBCAM), aims to facilitate controlled evaluation of wavelet-based watermarking schemes under MPEG-21 part-7 digital item adaptations (DIA). WEBCAM accommodates all major wavelet-based watermarking schemes in a single generalised framework by considering a global parameter space, from which the optimum parameters for a specific algorithm may be chosen. WEBCAM considers the traversing of media content along various links and the required content adaptations at various nodes of media supply chains. In this paper, the content adaptation is emulated by JPEG2000 coded bit-stream extraction for various spatial resolutions and quality levels of the content. The proposed framework is beneficial not only as an evaluation tool but also as a design tool for new wavelet-based watermarking algorithms, by picking and mixing available tools and finding the optimum design parameters.

  2. Estimate of snow density knowing grain and share hardness

    NASA Astrophysics Data System (ADS)

    Valt, Mauro; Cianfarra, Paola; Cagnati, Anselmo; Chiambretti, Igor; Moro, Daniele

    2010-05-01

    Alpine avalanche warning services produce snow profiles weekly. Usually such profiles are made in horizontal snow fields, homogeneously distributed by altitude and climatic micro-area. Such profiles allow identification of grain shape, size and hardness (hand test). Horizontal coring of each layer allows snow density identification. These data allow avalanche hazard evaluation and an estimation of the snow water equivalent (SWE). Nevertheless, measuring the density of very thin layers (less than 5 cm thick) by coring is very difficult, and such layers are usually not measured by snow technicians. To bypass this problem, a statistical analysis was performed to assign density values also to layers which cannot be measured. Knowing each layer's thickness and density, this approach allows SWE to be estimated correctly. This paper presents typical snow density values for snow hardness values and grain types for the Eastern Italian Alps. The study is based on 2500 snow profiles with 17000 sampled snow layers from the Dolomites and Venetian Prealps (Eastern Alps). The table of typical snow density values for each grain type is used by the YETI software, which processes snow profiles and automatically evaluates SWE. This method allows better use of avalanche warning service datasets for SWE estimation and local evaluation of yearly SWE trends for each snow field.
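
    The bookkeeping behind the SWE estimate is simply a thickness-weighted sum of layer densities; the toy values below are invented placeholders, not the Eastern Alps table referred to in the record.

```python
# Hypothetical (grain type, hand hardness) -> density lookup, in kg/m^3.
typical_density = {
    ('precipitation particles', 'fist'): 100,
    ('rounded grains', '4 fingers'): 250,
    ('melt forms', '1 finger'): 400,
}
profile = [                               # (grain type, hardness, layer thickness in m)
    ('precipitation particles', 'fist', 0.20),
    ('rounded grains', '4 fingers', 0.50),
    ('melt forms', '1 finger', 0.30),
]
swe_mm = sum(typical_density[(g, h)] * t for g, h, t in profile)   # 1 kg/m^2 = 1 mm SWE
print(swe_mm)
```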

  3. Neutral wind estimation from 4-D ionospheric electron density images

    NASA Astrophysics Data System (ADS)

    Datta-Barua, S.; Bust, G. S.; Crowley, G.; Curtis, N.

    2009-06-01

    We develop a new inversion algorithm for Estimating Model Parameters from Ionospheric Reverse Engineering (EMPIRE). The EMPIRE method uses four-dimensional images of global electron density to estimate the field-aligned neutral wind ionospheric driver when direct measurement is not available. We begin with a model of the electron continuity equation that includes production and loss rate estimates, as well as E × B drift, gravity, and diffusion effects. We use ion, electron, and neutral species temperatures and neutral densities from the Thermosphere Ionosphere Mesosphere Electrodynamics General Circulation Model (TIMEGCM-ASPEN) for estimating the magnitude of these effects. We then model the neutral wind as a power series at a given longitude for a range of latitudes and altitudes. As a test of our algorithm, we have input TIMEGCM electron densities to our algorithm. The model of the neutral wind is computed at hourly intervals and validated by comparing to the “true” TIMEGCM neutral wind fields. We show results for a storm day: 10 November 2004. The agreement between the winds derived from EMPIRE versus the TIMEGCM “true” winds appears to be time-dependent for the day under consideration. This may indicate that the diurnal variation in certain driving processes impacts the accuracy of our neutral wind model. Despite the potential temporal and spatial limits on accuracy, estimating neutral wind speed from measured electron density fields via our algorithm shows great promise as a complement to the more sparse radar and satellite measurements.
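
    For orientation, the electron continuity equation that the EMPIRE inversion starts from has the standard form below; the exact decomposition of the transport term and the power-series wind parameterisation used in the paper may differ.

```latex
\frac{\partial N_e}{\partial t} = P - L - \nabla\cdot\left(N_e\,\mathbf{v}\right),
\qquad
\mathbf{v} \approx \frac{\mathbf{E}\times\mathbf{B}}{B^{2}}
+ v_{\parallel}\,\hat{\mathbf{b}}
\quad \text{(field-aligned part: neutral winds, gravity, diffusion)}
```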

  4. Nonparametric probability density estimation by optimization theoretic techniques

    NASA Technical Reports Server (NTRS)

    Scott, D. W.

    1976-01-01

    Two nonparametric probability density estimators are considered. The first is the kernel estimator. The problem of choosing the kernel scaling factor based solely on a random sample is addressed. An interactive mode is discussed and an algorithm proposed to choose the scaling factor automatically. The second nonparametric probability estimate uses penalty function techniques with the maximum likelihood criterion. A discrete maximum penalized likelihood estimator is proposed and is shown to be consistent in the mean square error. A numerical implementation technique for the discrete solution is discussed and examples displayed. An extensive simulation study compares the integrated mean square error of the discrete and kernel estimators. The robustness of the discrete estimator is demonstrated graphically.
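
    As a minimal illustration of the first estimator discussed in the record, the sketch below implements a Gaussian kernel density estimator with an automatic scaling factor. Silverman's rule of thumb is used here purely as a stand-in for the paper's interactive and automatic selection procedures.

```python
# Minimal Gaussian kernel density estimator. The automatic scaling factor is
# Silverman's rule of thumb, a stand-in for the paper's selection algorithm.
import numpy as np

def kde(sample, grid, h=None):
    sample = np.asarray(sample, dtype=float)
    n = sample.size
    if h is None:                       # automatic kernel scaling factor
        h = 1.06 * sample.std(ddof=1) * n ** (-1 / 5)
    u = (grid[:, None] - sample[None, :]) / h
    return np.exp(-0.5 * u ** 2).sum(axis=1) / (n * h * np.sqrt(2 * np.pi))

x = np.random.default_rng(0).normal(size=500)
grid = np.linspace(-4, 4, 201)
density = kde(x, grid)                  # estimated density on the grid
```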

  5. Wavelet-based noise-model driven denoising algorithm for differential phase contrast mammography.

    PubMed

    Arboleda, Carolina; Wang, Zhentian; Stampanoni, Marco

    2013-05-01

    Traditional mammography can be positively complemented by phase contrast and scattering x-ray imaging, because they can detect subtle differences in the electron density of a material and measure the local small-angle scattering power generated by the microscopic density fluctuations in the specimen, respectively. The grating-based x-ray interferometry technique can produce absorption, differential phase contrast (DPC) and scattering signals of the sample, in parallel, and works well with conventional x-ray sources; thus, it constitutes a promising method for more reliable breast cancer screening and diagnosis. Recently, our team proved that this novel technology can provide images superior to conventional mammography. This new technology was used to image whole native breast samples directly after mastectomy. The images acquired show high potential, but the noise level associated with the DPC and scattering signals is significant, so it is necessary to remove it in order to improve image quality and visualization. The noise models of the three signals have been investigated and the noise variance can be computed. In this work, a wavelet-based denoising algorithm using these noise models is proposed. It was evaluated with both simulated and experimental mammography data. The outcomes demonstrated that our method offers a good denoising quality, while simultaneously preserving the edges and important structural features. Therefore, it can help improve diagnosis and implement further post-processing techniques such as fusion of the three signals acquired.
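
    The sketch below shows the generic shape of a noise-model driven wavelet denoiser: each detail subband is soft-thresholded with a threshold tied to the noise standard deviation. The universal threshold used here is an assumption; the paper derives its thresholds from the DPC and scattering noise models instead.

```python
# Generic noise-model driven wavelet denoising: soft-threshold each detail
# subband with a threshold tied to the noise standard deviation. The
# universal threshold sigma*sqrt(2*log N) is an assumption, not the paper's rule.
import numpy as np
import pywt

def denoise(image, sigma, wavelet="db4", levels=4):
    coeffs = pywt.wavedec2(image, wavelet, level=levels)
    out = [coeffs[0]]                                  # keep the approximation
    for detail in coeffs[1:]:
        thr = sigma * np.sqrt(2 * np.log(image.size))  # per-subband threshold
        out.append(tuple(pywt.threshold(band, thr, mode="soft") for band in detail))
    return pywt.waverec2(out, wavelet)
```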

  6. A new approach for estimating the density of liquids

    NASA Astrophysics Data System (ADS)

    Sakagami, T.; Fuchizaki, K.; Ohara, K.

    2016-10-01

    We propose a novel approach with which to estimate the density of liquids. The approach is based on the assumption that the systems would be structurally similar when viewed at around the length scale (inverse wavenumber) of the first peak of the structure factor, unless their thermodynamic states differ significantly. The assumption was implemented via a similarity transformation to the radial distribution function to extract the density from the structure factor of a reference state with a known density. The method was first tested using two model liquids, and could predict the densities within an error of several percent unless the state in question differed significantly from the reference state. The method was then applied to related real liquids, and satisfactory results were obtained for predicted densities. The possibility of applying the method to amorphous materials is discussed.

  7. A new approach for estimating the density of liquids.

    PubMed

    Sakagami, T; Fuchizaki, K; Ohara, K

    2016-10-01

    We propose a novel approach with which to estimate the density of liquids. The approach is based on the assumption that the systems would be structurally similar when viewed at around the length scale (inverse wavenumber) of the first peak of the structure factor, unless their thermodynamic states differ significantly. The assumption was implemented via a similarity transformation to the radial distribution function to extract the density from the structure factor of a reference state with a known density. The method was first tested using two model liquids, and could predict the densities within an error of several percent unless the state in question differed significantly from the reference state. The method was then applied to related real liquids, and satisfactory results were obtained for predicted densities. The possibility of applying the method to amorphous materials is discussed. PMID:27494268

  8. An infrastructureless approach to estimate vehicular density in urban environments.

    PubMed

    Sanguesa, Julio A; Fogue, Manuel; Garrido, Piedad; Martinez, Francisco J; Cano, Juan-Carlos; Calafate, Carlos T; Manzoni, Pietro

    2013-01-01

    In Vehicular Networks, communication success usually depends on the density of vehicles, since a higher density allows having shorter and more reliable wireless links. Thus, knowing the density of vehicles in a vehicular communications environment is important, as better opportunities for wireless communication can show up. However, vehicle density is highly variable in time and space. This paper deals with the importance of predicting the density of vehicles in vehicular environments to take decisions for enhancing the dissemination of warning messages between vehicles. We propose a novel mechanism to estimate the vehicular density in urban environments. Our mechanism uses as input parameters the number of beacons received per vehicle, and the topological characteristics of the environment where the vehicles are located. Simulation results indicate that, unlike previous proposals solely based on the number of beacons received, our approach is able to accurately estimate the vehicular density, and therefore it could support more efficient dissemination protocols for vehicular environments, as well as improve previously proposed schemes. PMID:23435054

  9. Double sampling to estimate density and population trends in birds

    USGS Publications Warehouse

    Bart, Jonathan; Earnst, Susan L.

    2002-01-01

    We present a method for estimating density of nesting birds based on double sampling. The approach involves surveying a large sample of plots using a rapid method such as uncorrected point counts, variable circular plot counts, or the recently suggested double-observer method. A subsample of those plots is also surveyed using intensive methods to determine actual density. The ratio of the mean count on those plots (using the rapid method) to the mean actual density (as determined by the intensive searches) is used to adjust results from the rapid method. The approach works well when results from the rapid method are highly correlated with actual density. We illustrate the method with three years of shorebird surveys from the tundra in northern Alaska. In the rapid method, surveyors covered ~10 ha h-1 and surveyed each plot a single time. The intensive surveys involved three thorough searches, required ~3 h ha-1, and took 20% of the study effort. Surveyors using the rapid method detected an average of 79% of birds present. That detection ratio was used to convert the index obtained in the rapid method into an essentially unbiased estimate of density. Trends estimated from several years of data would also be essentially unbiased. Other advantages of double sampling are that (1) the rapid method can be changed as new methods become available, (2) domains can be compared even if detection rates differ, (3) total population size can be estimated, and (4) valuable ancillary information (e.g. nest success) can be obtained on intensive plots with little additional effort. We suggest that double sampling be used to test the assumption that rapid methods, such as variable circular plot and double-observer methods, yield density estimates that are essentially unbiased. The feasibility of implementing double sampling in a range of habitats needs to be evaluated.
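
    The core adjustment is simple arithmetic: the mean rapid count is divided by the detection ratio measured on the intensively searched subsample. The numbers below are invented for illustration (the study's ratio was about 0.79).

```python
# Double-sampling adjustment: rapid counts are scaled by the detection ratio
# measured on the intensive subsample. All numbers are illustrative.
import numpy as np

rapid_counts  = np.array([6, 4, 9, 5, 7])    # birds/plot, rapid method, all plots
rapid_on_sub  = np.array([6, 5, 8])          # rapid counts on the intensive subsample
actual_on_sub = np.array([8, 6, 10])         # true densities from intensive searches

detection_ratio = rapid_on_sub.mean() / actual_on_sub.mean()   # ~0.79 in the study
adjusted_density = rapid_counts.mean() / detection_ratio
print(detection_ratio, adjusted_density)
```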

  10. Improving 3D Wavelet-Based Compression of Hyperspectral Images

    NASA Technical Reports Server (NTRS)

    Klimesh, Matthew; Kiely, Aaron; Xie, Hua; Aranki, Nazeeh

    2009-01-01

    Two methods of increasing the effectiveness of three-dimensional (3D) wavelet-based compression of hyperspectral images have been developed. (As used here, images signifies both images and digital data representing images.) The methods are oriented toward reducing or eliminating detrimental effects of a phenomenon, referred to as spectral ringing, that is described below. In 3D wavelet-based compression, an image is represented by a multiresolution wavelet decomposition consisting of several subbands obtained by applying wavelet transforms in the two spatial dimensions corresponding to the two spatial coordinate axes of the image plane, and by applying wavelet transforms in the spectral dimension. Spectral ringing is named after the more familiar spatial ringing (spurious spatial oscillations) that can be seen parallel to and near edges in ordinary images reconstructed from compressed data. These ringing phenomena are attributable to effects of quantization. In hyperspectral data, the individual spectral bands play the role of edges, causing spurious oscillations to occur in the spectral dimension. In the absence of such corrective measures as the present two methods, spectral ringing can manifest itself as systematic biases in some reconstructed spectral bands and can reduce the effectiveness of compression of spatially-low-pass subbands. One of the two methods is denoted mean subtraction. The basic idea of this method is to subtract mean values from spatial planes of spatially low-pass subbands prior to encoding, because (a) such spatial planes often have mean values that are far from zero and (b) zero-mean data are better suited for compression by methods that are effective for subbands of two-dimensional (2D) images. In this method, after the 3D wavelet decomposition is performed, mean values are computed for and subtracted from each spatial plane of each spatially-low-pass subband. The resulting data are converted to sign-magnitude form and compressed in a
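
    A minimal sketch of the mean-subtraction step follows, assuming the spatially low-pass subband is stored as a (spectral, y, x) array; the plane means are kept as side information so the decoder can restore them. The sign-magnitude conversion and encoding steps are not shown.

```python
# Mean-subtraction step for a spatially low-pass subband of a 3-D wavelet
# decomposition. `subband` is assumed to be ordered (spectral, y, x).
import numpy as np

def mean_subtract(subband):
    plane_means = subband.mean(axis=(1, 2))          # one mean per spectral plane
    centred = subband - plane_means[:, None, None]   # zero-mean planes for encoding
    return centred, plane_means

def mean_restore(centred, plane_means):
    return centred + plane_means[:, None, None]
```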

  11. Face Value: Towards Robust Estimates of Snow Leopard Densities.

    PubMed

    Alexander, Justine S; Gopalaswamy, Arjun M; Shi, Kun; Riordan, Philip

    2015-01-01

    When densities of large carnivores fall below certain thresholds, dramatic ecological effects can follow, leading to oversimplified ecosystems. Understanding the population status of such species remains a major challenge as they occur in low densities and their ranges are wide. This paper describes the use of non-invasive data collection techniques combined with recent spatial capture-recapture methods to estimate the density of snow leopards Panthera uncia. It also investigates the influence of environmental and human activity indicators on their spatial distribution. A total of 60 camera traps were systematically set up during a three-month period over a 480 km2 study area in Qilianshan National Nature Reserve, Gansu Province, China. We recorded 76 separate snow leopard captures over 2,906 trap-days, representing an average capture success of 2.62 captures/100 trap-days. We identified a total number of 20 unique individuals from photographs and estimated snow leopard density at 3.31 (SE = 1.01) individuals per 100 km2. Results of our simulation exercise indicate that our estimates from the Spatial Capture Recapture models were not optimal with respect to bias and precision (RMSEs for density parameters less than or equal to 0.87). Our results underline the critical challenge in achieving sufficient sample sizes of snow leopard captures and recaptures. Possible performance improvements are discussed, principally by optimising effective camera capture and photographic data quality.

  12. Face Value: Towards Robust Estimates of Snow Leopard Densities.

    PubMed

    Alexander, Justine S; Gopalaswamy, Arjun M; Shi, Kun; Riordan, Philip

    2015-01-01

    When densities of large carnivores fall below certain thresholds, dramatic ecological effects can follow, leading to oversimplified ecosystems. Understanding the population status of such species remains a major challenge as they occur in low densities and their ranges are wide. This paper describes the use of non-invasive data collection techniques combined with recent spatial capture-recapture methods to estimate the density of snow leopards Panthera uncia. It also investigates the influence of environmental and human activity indicators on their spatial distribution. A total of 60 camera traps were systematically set up during a three-month period over a 480 km2 study area in Qilianshan National Nature Reserve, Gansu Province, China. We recorded 76 separate snow leopard captures over 2,906 trap-days, representing an average capture success of 2.62 captures/100 trap-days. We identified a total number of 20 unique individuals from photographs and estimated snow leopard density at 3.31 (SE = 1.01) individuals per 100 km2. Results of our simulation exercise indicate that our estimates from the Spatial Capture Recapture models were not optimal with respect to bias and precision (RMSEs for density parameters less than or equal to 0.87). Our results underline the critical challenge in achieving sufficient sample sizes of snow leopard captures and recaptures. Possible performance improvements are discussed, principally by optimising effective camera capture and photographic data quality. PMID:26322682

  13. Face Value: Towards Robust Estimates of Snow Leopard Densities

    PubMed Central

    2015-01-01

    When densities of large carnivores fall below certain thresholds, dramatic ecological effects can follow, leading to oversimplified ecosystems. Understanding the population status of such species remains a major challenge as they occur in low densities and their ranges are wide. This paper describes the use of non-invasive data collection techniques combined with recent spatial capture-recapture methods to estimate the density of snow leopards Panthera uncia. It also investigates the influence of environmental and human activity indicators on their spatial distribution. A total of 60 camera traps were systematically set up during a three-month period over a 480 km2 study area in Qilianshan National Nature Reserve, Gansu Province, China. We recorded 76 separate snow leopard captures over 2,906 trap-days, representing an average capture success of 2.62 captures/100 trap-days. We identified a total number of 20 unique individuals from photographs and estimated snow leopard density at 3.31 (SE = 1.01) individuals per 100 km2. Results of our simulation exercise indicate that our estimates from the Spatial Capture Recapture models were not optimal with respect to bias and precision (RMSEs for density parameters less than or equal to 0.87). Our results underline the critical challenge in achieving sufficient sample sizes of snow leopard captures and recaptures. Possible performance improvements are discussed, principally by optimising effective camera capture and photographic data quality. PMID:26322682

  14. Density estimation in tiger populations: combining information for strong inference.

    PubMed

    Gopalaswamy, Arjun M; Royle, J Andrew; Delampady, Mohan; Nichols, James D; Karanth, K Ullas; Macdonald, David W

    2012-07-01

    A productive way forward in studies of animal populations is to efficiently make use of all the information available, either as raw data or as published sources, on critical parameters of interest. In this study, we demonstrate two approaches to the use of multiple sources of information on a parameter of fundamental interest to ecologists: animal density. The first approach produces estimates simultaneously from two different sources of data. The second approach was developed for situations in which initial data collection and analysis are followed up by subsequent data collection and prior knowledge is updated with new data using a stepwise process. Both approaches are used to estimate density of a rare and elusive predator, the tiger, by combining photographic and fecal DNA spatial capture-recapture data. The model, which combined information, provided the most precise estimate of density (8.5 +/- 1.95 tigers/100 km2 [posterior mean +/- SD]) relative to a model that utilized only one data source (photographic, 12.02 +/- 3.02 tigers/100 km2 and fecal DNA, 6.65 +/- 2.37 tigers/100 km2). Our study demonstrates that, by accounting for multiple sources of available information, estimates of animal density can be significantly improved.

  15. Density estimation in tiger populations: combining information for strong inference

    USGS Publications Warehouse

    Gopalaswamy, Arjun M.; Royle, J. Andrew; Delampady, Mohan; Nichols, James D.; Karanth, K. Ullas; Macdonald, David W.

    2012-01-01

    A productive way forward in studies of animal populations is to efficiently make use of all the information available, either as raw data or as published sources, on critical parameters of interest. In this study, we demonstrate two approaches to the use of multiple sources of information on a parameter of fundamental interest to ecologists: animal density. The first approach produces estimates simultaneously from two different sources of data. The second approach was developed for situations in which initial data collection and analysis are followed up by subsequent data collection and prior knowledge is updated with new data using a stepwise process. Both approaches are used to estimate density of a rare and elusive predator, the tiger, by combining photographic and fecal DNA spatial capture–recapture data. The model, which combined information, provided the most precise estimate of density (8.5 ± 1.95 tigers/100 km2 [posterior mean ± SD]) relative to a model that utilized only one data source (photographic, 12.02 ± 3.02 tigers/100 km2 and fecal DNA, 6.65 ± 2.37 tigers/100 km2). Our study demonstrates that, by accounting for multiple sources of available information, estimates of animal density can be significantly improved.

  16. Wavelet-based multicomponent matching pursuit trace interpolation

    NASA Astrophysics Data System (ADS)

    Choi, Jihun; Byun, Joongmoo; Seol, Soon Jee; Kim, Young

    2016-06-01

    Typically, seismic data are sparsely and irregularly sampled due to limitations in the survey environment, and this causes problems for key seismic processing steps such as surface-related multiple elimination or wave-equation based migration. Various interpolation techniques have been developed to alleviate the problems caused by sparse and irregular sampling. Among them, matching pursuit interpolation is a robust tool for interpolating regularly sampled data with large receiver separation, such as crossline data in marine seismic acquisition, when both pressure and particle velocity data are used. Multi-component matching pursuit methods have generally used sinusoidal basis functions, which have been shown to be effective for interpolating multi-component marine seismic data in the crossline direction. In this paper, we report the use of wavelet basis functions, which enhance the de-aliasing performance of matching pursuit methods beyond that of sinusoidal basis functions. We also found that the range of the peak wavenumber of the wavelet is critical to the stability of the interpolation results and the de-aliasing performance, and that this range should be determined based on Nyquist criteria. In addition, we reduced the computational cost by adopting the inner product of the wavelet and the input data to find the parameters of the wavelet basis function instead of using L-2 norm minimization. Using synthetic data, we illustrate that for aliased data, wavelet-based matching pursuit interpolation yields more stable results than the sinusoidal-function-based one, not only when we use pressure data alone but also when we use pressure and particle velocity data together.
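
    The greedy inner-product selection at the heart of matching pursuit is easy to show in one dimension. The sketch below uses a dictionary of Ricker-type atoms and a single data component; it only illustrates the selection step, not the multicomponent crossline interpolation described above, and the atom widths and spacing are arbitrary choices.

```python
# Greedy matching pursuit over a dictionary of Ricker-type wavelet atoms
# (1-D, single component). Illustrates the inner-product selection step only.
import numpy as np

def ricker(n, width, centre):
    t = np.arange(n) - centre
    a = (t / width) ** 2
    w = (1 - 2 * a) * np.exp(-a)
    return w / (np.linalg.norm(w) + 1e-12)

def matching_pursuit(signal, widths, n_atoms=20):
    n = signal.size
    atoms = np.stack([ricker(n, w, c) for w in widths for c in range(0, n, 2)])
    residual, model = signal.astype(float).copy(), np.zeros(n)
    for _ in range(n_atoms):
        scores = atoms @ residual            # inner products with every atom
        k = np.argmax(np.abs(scores))        # best-matching atom
        model += scores[k] * atoms[k]
        residual -= scores[k] * atoms[k]
    return model, residual
```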

  17. Embedded wavelet-based face recognition under variable position

    NASA Astrophysics Data System (ADS)

    Cotret, Pascal; Chevobbe, Stéphane; Darouich, Mehdi

    2015-02-01

    For several years, face recognition has been a hot topic in the image processing field: this technique is applied in several domains such as CCTV and the unlocking of electronic devices. In this context, this work studies the efficiency of a wavelet-based face recognition method in terms of subject position robustness and performance on various systems. The use of the wavelet transform has a limited impact on the position robustness of PCA-based face recognition. This work shows, for a well-known database (Yale face database B*), that the subject position in 3D space can vary by up to 10% of the original ROI size without decreasing recognition rates. Face recognition is performed on the approximation coefficients of the image wavelet transform: results are still satisfying after 3 levels of decomposition. Furthermore, the face database size can be divided by a factor of 64 (2^(2K) with K = 3). In the context of ultra-embedded vision systems, memory footprint is one of the key points to be addressed; that is the reason why compression techniques such as the wavelet transform are interesting. Furthermore, it leads to a low-complexity face detection stage compliant with the limited computation resources available on such systems. The approach described in this work is tested on three platforms, from a standard x86-based computer to nanocomputers such as the RaspberryPi and SECO boards. For K = 3 and a database with 40 faces, the mean execution time for one frame is 0.64 ms on an x86-based computer, 9 ms on a SECO board and 26 ms on a RaspberryPi (model B).
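
    A minimal version of matching on level-K approximation coefficients is sketched below; at K = 3 the approximation holds roughly 1/64 of the original pixels, which is the storage reduction quoted above. Nearest-neighbour matching stands in for the PCA-based recogniser of the paper, and the Haar wavelet is an arbitrary choice; all faces are assumed to share the same resolution.

```python
# Face matching on level-K wavelet approximation coefficients. Nearest
# neighbour replaces the paper's PCA-based recogniser; purely illustrative.
import numpy as np
import pywt

def approx_coeffs(face, wavelet="haar", K=3):
    """Level-K approximation subband, flattened to a feature vector."""
    return pywt.wavedec2(face, wavelet, level=K)[0].ravel()

def enroll(faces):                       # faces: dict name -> 2-D array
    return {name: approx_coeffs(img) for name, img in faces.items()}

def identify(probe, gallery):
    v = approx_coeffs(probe)
    return min(gallery, key=lambda name: np.linalg.norm(gallery[name] - v))
```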

  18. Wavelet-based multicomponent matching pursuit trace interpolation

    NASA Astrophysics Data System (ADS)

    Choi, Jihun; Byun, Joongmoo; Seol, Soon Jee; Kim, Young

    2016-09-01

    Typically, seismic data are sparsely and irregularly sampled due to limitations in the survey environment, and this causes problems for key seismic processing steps such as surface-related multiple elimination or wave-equation-based migration. Various interpolation techniques have been developed to alleviate the problems caused by sparse and irregular sampling. Among them, matching pursuit interpolation is a robust tool for interpolating regularly sampled data with large receiver separation, such as crossline data in marine seismic acquisition, when both pressure and particle velocity data are used. Multicomponent matching pursuit methods have generally used sinusoidal basis functions, which have been shown to be effective for interpolating multicomponent marine seismic data in the crossline direction. In this paper, we report the use of wavelet basis functions, which enhance the de-aliasing performance of matching pursuit methods beyond that of sinusoidal basis functions. We also found that the range of the peak wavenumber of the wavelet is critical to the stability of the interpolation results and the de-aliasing performance, and that this range should be determined based on Nyquist criteria. In addition, we reduced the computational cost by adopting the inner product of the wavelet and the input data to find the parameters of the wavelet basis function instead of using L-2 norm minimization. Using synthetic data, we illustrate that for aliased data, wavelet-based matching pursuit interpolation yields more stable results than the sinusoidal-function-based one, not only when we use pressure data alone but also when we use pressure and particle velocity data together.

  19. Wavelet-based ground vehicle recognition using acoustic signals

    NASA Astrophysics Data System (ADS)

    Choe, Howard C.; Karlsen, Robert E.; Gerhart, Grant R.; Meitzler, Thomas J.

    1996-03-01

    We present, in this paper, a wavelet-based acoustic signal analysis to remotely recognize military vehicles using their sound intercepted by acoustic sensors. Since expedited signal recognition is imperative in many military and industrial situations, we developed an algorithm that provides an automated, fast signal recognition once implemented in a real-time hardware system. This algorithm consists of wavelet preprocessing, feature extraction and compact signal representation, and a simple but effective statistical pattern matching. In its current form, the algorithm does not require any training. The training is replaced by human selection of reference signals (e.g., squeak or engine exhaust sound) distinctive to each individual vehicle based on human perception. This allows a fast archiving of any new vehicle type in the database once the signal is collected. The wavelet preprocessing provides time-frequency multiresolution analysis using the discrete wavelet transform (DWT). Within each resolution level, feature vectors are generated from statistical parameters and the energy content of the wavelet coefficients. After applying our algorithm to the intercepted acoustic signals, the resultant feature vectors are compared with the reference vehicle feature vectors in the database using statistical pattern matching to determine the type of vehicle from which the signal originated. Certainly, statistical pattern matching can be replaced by an artificial neural network (ANN); however, the ANN would require training data sets and time to train the net. Unfortunately, this is not always possible for many real-world situations, especially collecting data sets from unfriendly ground vehicles to train the ANN. Our methodology using wavelet preprocessing and statistical pattern matching provides robust acoustic signal recognition. We also present an example of vehicle recognition using acoustic signals collected from two different military ground vehicles. In this paper, we will
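
    The per-level feature extraction and reference matching described above can be sketched as follows; the particular statistics, the db4 wavelet and the Euclidean distance are assumptions standing in for the paper's feature set and statistical pattern matching.

```python
# Per-level DWT features (energy plus simple statistics) and a minimal
# nearest-reference matcher. Feature and metric choices are assumptions.
import numpy as np
import pywt

def acoustic_features(signal, wavelet="db4", levels=5):
    coeffs = pywt.wavedec(signal, wavelet, level=levels)
    feats = []
    for c in coeffs:
        feats += [np.sum(c ** 2), np.mean(np.abs(c)), np.std(c)]
    return np.asarray(feats)

def classify(signal, references):        # references: dict vehicle -> feature vector
    f = acoustic_features(signal)
    return min(references, key=lambda v: np.linalg.norm(references[v] - f))
```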

  20. Wavelet-based AR-SVM for health monitoring of smart structures

    NASA Astrophysics Data System (ADS)

    Kim, Yeesock; Chong, Jo Woon; Chon, Ki H.; Kim, JungMi

    2013-01-01

    This paper proposes a novel structural health monitoring framework for damage detection of smart structures. The framework is developed through the integration of the discrete wavelet transform, an autoregressive (AR) model, damage-sensitive features, and a support vector machine (SVM). The steps of the method are the following: (1) the wavelet-based AR (WAR) model estimates vibration signals obtained from both the undamaged and damaged smart structures under a variety of random signals; (2) a new damage-sensitive feature is formulated in terms of the AR parameters estimated from the structural velocity responses; and then (3) the SVM is applied to each group of damaged and undamaged data sets in order to optimally separate them into either damaged or healthy groups. To demonstrate the effectiveness of the proposed structural health monitoring framework, a three-story smart building equipped with a magnetorheological (MR) damper under artificial earthquake signals is studied. It is shown from the simulation that the proposed health monitoring scheme is effective in detecting damage of the smart structures in an efficient way.
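
    The three steps can be mirrored at toy scale: decompose the response with a DWT, fit AR coefficients to each subband by least squares as damage-sensitive features, and train an SVM to separate damaged from healthy records. The AR order, wavelet and SVM settings below are illustrative assumptions, not the paper's choices.

```python
# Wavelet-AR feature extraction followed by an SVM, mirroring the three steps
# described above at a toy level.
import numpy as np
import pywt
from sklearn.svm import SVC

def ar_coeffs(x, order=4):
    """Least-squares AR(order) fit: x[t] ~ sum_k a_k * x[t-k]."""
    X = np.column_stack([x[order - k - 1:len(x) - k - 1] for k in range(order)])
    y = x[order:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    return a

def war_features(response, wavelet="db4", levels=3, order=4):
    coeffs = pywt.wavedec(response, wavelet, level=levels)
    return np.concatenate([ar_coeffs(c, order) for c in coeffs])

def train_monitor(responses, labels):    # labels: 1 = damaged, 0 = healthy
    X = np.stack([war_features(r) for r in responses])
    return SVC(kernel="rbf", gamma="scale").fit(X, labels)
```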

  1. Ionospheric electron density profile estimation using commercial AM broadcast signals

    NASA Astrophysics Data System (ADS)

    Yu, De; Ma, Hong; Cheng, Li; Li, Yang; Zhang, Yufeng; Chen, Wenjun

    2015-08-01

    A new method for estimating the bottom electron density profile by using commercial AM broadcast signals as non-cooperative signals is presented in this paper. Without requiring any dedicated transmitters, the required input data are the measured elevation angles of signals transmitted from the known locations of broadcast stations. The input data are inverted for the QPS model parameters depicting the electron density profile of the signal's reflection area by using a probabilistic inversion technique. This method has been validated on synthesized data and used with the real data provided by an HF direction-finding system situated near the city of Wuhan. The estimated parameters obtained by the proposed method have been compared with vertical ionosonde data and have been used to locate the Shijiazhuang broadcast station. The simulation and experimental results indicate that the proposed ionospheric sounding method is feasible for obtaining useful electron density profiles.

  2. Evaluating lidar point densities for effective estimation of aboveground biomass

    USGS Publications Warehouse

    Wu, Zhuoting; Dye, Dennis G.; Stoker, Jason; Vogel, John M.; Velasco, Miguel G.; Middleton, Barry R.

    2016-01-01

    The U.S. Geological Survey (USGS) 3D Elevation Program (3DEP) was recently established to provide airborne lidar data coverage on a national scale. As part of a broader research effort of the USGS to develop an effective remote sensing-based methodology for the creation of an operational biomass Essential Climate Variable (Biomass ECV) data product, we evaluated the performance of airborne lidar data at various pulse densities against Landsat 8 satellite imagery in estimating aboveground biomass for forests and woodlands in a study area in east-central Arizona, U.S. High point density airborne lidar data were randomly sampled to produce five lidar datasets with reduced densities ranging from 0.5 to 8 point(s)/m2, corresponding to the point density range of 3DEP to provide national lidar coverage over time. Lidar-derived aboveground biomass estimate errors showed an overall decreasing trend as lidar point density increased from 0.5 to 8 points/m2. Landsat 8-based aboveground biomass estimates produced errors larger than the lowest lidar point density of 0.5 point/m2, and therefore Landsat 8 observations alone were ineffective relative to airborne lidar for generating a Biomass ECV product, at least for the forest and woodland vegetation types of the Southwestern U.S. While a national Biomass ECV product with optimal accuracy could potentially be achieved with 3DEP data at 8 points/m2, our results indicate that even lower density lidar data could be sufficient to provide a national Biomass ECV product with accuracies significantly higher than that from Landsat observations alone.

  3. Experimental validation of wavelet based solution for dynamic response of railway track subjected to a moving train

    NASA Astrophysics Data System (ADS)

    Koziol, Piotr

    2016-10-01

    New approaches allowing effective analysis of the dynamic behaviour of railway structures are needed for appropriate modelling and understanding of the phenomena associated with train transportation. The literature highlights the fact that nonlinear assumptions are important in the dynamic analysis of railway tracks. This paper presents a wavelet-based semi-analytical solution for an infinite Euler-Bernoulli beam resting on a nonlinear foundation and subjected to a set of moving forces, representing a railway track with a moving train, along with its preliminary experimental validation. It is shown that this model, although very simplified and assuming viscous damping of the foundation, can be considered a good enough approximation of the behaviour of realistic structures. The steady-state response of the beam is obtained by applying the Galilean co-ordinate system and the Adomian decomposition method combined with a coiflet-based approximation, leading to an analytical estimation of transverse displacements. The applied approach, using parameters taken from real measurements carried out on the Polish Railways network for the fast train Pendolino EMU-250, shows the ability of the proposed method to analyse parametrically dynamic systems associated with transportation. The obtained results are in accordance with measurement data over a wide range of physical parameters, which can be treated as a validation of the developed wavelet-based approach. The conducted investigation is supplemented by several numerical examples.

  4. Estimating Density Gradients and Drivers from 3D Ionospheric Imaging

    NASA Astrophysics Data System (ADS)

    Datta-Barua, S.; Bust, G. S.; Curtis, N.; Reynolds, A.; Crowley, G.

    2009-12-01

    The transition regions at the edges of the ionospheric storm-enhanced density (SED) are important for a detailed understanding of the mid-latitude physical processes occurring during major magnetic storms. At the boundary, the density gradients are evidence of the drivers that link the larger processes of the SED, with its connection to the plasmasphere and prompt-penetration electric fields, to the smaller irregularities that result in scintillations. For this reason, we present our estimates of both the plasma variation with horizontal and vertical spatial scale of 10 - 100 km and the plasma motion within and along the edges of the SED. To estimate the density gradients, we use Ionospheric Data Assimilation Four-Dimensional (IDA4D), a mature data assimilation algorithm that has been developed over several years and applied to investigations of polar cap patches and space weather storms [Bust and Crowley, 2007; Bust et al., 2007]. We use the density specification produced by IDA4D with a new tool for deducing ionospheric drivers from 3D time-evolving electron density maps, called Estimating Model Parameters from Ionospheric Reverse Engineering (EMPIRE). The EMPIRE technique has been tested on simulated data from TIMEGCM-ASPEN and on IDA4D-based density estimates with ongoing validation from Arecibo ISR measurements [Datta-Barua et al., 2009a; 2009b]. We investigate the SED that formed during the geomagnetic super storm of November 20, 2003. We run IDA4D at low-resolution continent-wide, and then re-run it at high (~10 km horizontal and ~5-20 km vertical) resolution locally along the boundary of the SED, where density gradients are expected to be highest. We input the high-resolution estimates of electron density to EMPIRE to estimate the ExB drifts and field-aligned plasma velocities along the boundaries of the SED. We expect that these drivers contribute to the density structuring observed along the SED during the storm. Bust, G. S. and G. Crowley (2007

  5. Estimation of Enceladus Plume Density Using Cassini Flight Data

    NASA Technical Reports Server (NTRS)

    Wang, Eric K.; Lee, Allan Y.

    2011-01-01

    The Cassini spacecraft was launched on October 15, 1997 by a Titan 4B launch vehicle. After an interplanetary cruise of almost seven years, it arrived at Saturn on June 30, 2004. In 2005, Cassini completed three flybys of Enceladus, a small, icy satellite of Saturn. Observations made during these flybys confirmed the existence of water vapor plumes in the south polar region of Enceladus. Five additional low-altitude flybys of Enceladus were successfully executed in 2008-9 to better characterize these watery plumes. During some of these Enceladus flybys, the spacecraft attitude was controlled by a set of three reaction wheels. When the disturbance torque imparted on the spacecraft was predicted to exceed the control authority of the reaction wheels, thrusters were used to control the spacecraft attitude. Using telemetry data of reaction wheel rates or thruster on-times collected from four low-altitude Enceladus flybys (in 2008-10), one can reconstruct the time histories of the Enceladus plume jet density. The 1 sigma uncertainty of the estimated density is 5.9-6.7% (depending on the density estimation methodology employed). These plume density estimates could be used to confirm measurements made by other onboard science instruments and to support the modeling of Enceladus plume jets.

  6. Quantitative volumetric breast density estimation using phase contrast mammography

    NASA Astrophysics Data System (ADS)

    Wang, Zhentian; Hauser, Nik; Kubik-Huch, Rahel A.; D'Isidoro, Fabio; Stampanoni, Marco

    2015-05-01

    Phase contrast mammography using a grating interferometer is an emerging technology for breast imaging. It provides complementary information to the conventional absorption-based methods. Additional diagnostic values could be further obtained by retrieving quantitative information from the three physical signals (absorption, differential phase and small-angle scattering) yielded simultaneously. We report a non-parametric quantitative volumetric breast density estimation method by exploiting the ratio (dubbed the R value) of the absorption signal to the small-angle scattering signal. The R value is used to determine breast composition and the volumetric breast density (VBD) of the whole breast is obtained analytically by deducing the relationship between the R value and the pixel-wise breast density. The proposed method is tested by a phantom study and a group of 27 mastectomy samples. In the clinical evaluation, the estimated VBD values from both cranio-caudal (CC) and anterior-posterior (AP) views are compared with the ACR scores given by radiologists to the pre-surgical mammograms. The results show that the estimated VBD results using the proposed method are consistent with the pre-surgical ACR scores, indicating the effectiveness of this method in breast density estimation. A positive correlation is found between the estimated VBD and the diagnostic ACR score for both the CC view (p=0.033 ) and AP view (p=0.001 ). A linear regression between the results of the CC view and AP view showed a correlation coefficient γ = 0.77, which indicates the robustness of the proposed method and the quantitative character of the additional information obtained with our approach.

  7. Quantitative volumetric breast density estimation using phase contrast mammography.

    PubMed

    Wang, Zhentian; Hauser, Nik; Kubik-Huch, Rahel A; D'Isidoro, Fabio; Stampanoni, Marco

    2015-05-21

    Phase contrast mammography using a grating interferometer is an emerging technology for breast imaging. It provides complementary information to the conventional absorption-based methods. Additional diagnostic values could be further obtained by retrieving quantitative information from the three physical signals (absorption, differential phase and small-angle scattering) yielded simultaneously. We report a non-parametric quantitative volumetric breast density estimation method by exploiting the ratio (dubbed the R value) of the absorption signal to the small-angle scattering signal. The R value is used to determine breast composition and the volumetric breast density (VBD) of the whole breast is obtained analytically by deducing the relationship between the R value and the pixel-wise breast density. The proposed method is tested by a phantom study and a group of 27 mastectomy samples. In the clinical evaluation, the estimated VBD values from both cranio-caudal (CC) and anterior-posterior (AP) views are compared with the ACR scores given by radiologists to the pre-surgical mammograms. The results show that the estimated VBD results using the proposed method are consistent with the pre-surgical ACR scores, indicating the effectiveness of this method in breast density estimation. A positive correlation is found between the estimated VBD and the diagnostic ACR score for both the CC view (p = 0.033) and AP view (p = 0.001). A linear regression between the results of the CC view and AP view showed a correlation coefficient γ = 0.77, which indicates the robustness of the proposed method and the quantitative character of the additional information obtained with our approach.

  8. The Effect of Lidar Point Density on LAI Estimation

    NASA Astrophysics Data System (ADS)

    Cawse-Nicholson, K.; van Aardt, J. A.; Romanczyk, P.; Kelbe, D.; Bandyopadhyay, M.; Yao, W.; Krause, K.; Kampe, T. U.

    2013-12-01

    Leaf Area Index (LAI) is an important measure of forest health, biomass and carbon exchange, and is most commonly defined as the ratio of the leaf area to ground area. LAI is understood over large spatial scales and describes leaf properties over an entire forest, thus airborne imagery is ideal for capturing such data. Spectral metrics such as the normalized difference vegetation index (NDVI) have been used in the past for LAI estimation, but these metrics may saturate for high LAI values. Light detection and ranging (lidar) is an active remote sensing technology that emits light (most often at the wavelength 1064nm) and uses the return time to calculate the distance to intercepted objects. This yields information on three-dimensional structure and shape, which has been shown in recent studies to yield more accurate LAI estimates than NDVI. However, although lidar is a promising alternative for LAI estimation, minimum acquisition parameters (e.g. point density) required for accurate LAI retrieval are not yet well known. The objective of this study was to determine the minimum number of points per square meter that are required to describe the LAI measurements taken in-field. As part of a larger data collect, discrete lidar data were acquired by Kucera International Inc. over the Hemlock-Canadice State Forest, NY, USA in September 2012. The Leica ALS60 obtained point density of 12 points per square meter and effective ground sampling distance (GSD) of 0.15m. Up to three returns with intensities were recorded per pulse. As part of the same experiment, an AccuPAR LP-80 was used to collect LAI estimates at 25 sites on the ground. Sites were spaced approximately 80m apart and nine measurements were made in a grid pattern within a 20 x 20m site. Dominant species include Hemlock, Beech, Sugar Maple and Oak. This study has the benefit of very high-density data, which will enable a detailed map of intra-forest LAI. Understanding LAI at fine scales may be particularly useful

  9. Can modeling improve estimation of desert tortoise population densities?

    USGS Publications Warehouse

    Nussear, K.E.; Tracy, C.R.

    2007-01-01

    The federally listed desert tortoise (Gopherus agassizii) is currently monitored using distance sampling to estimate population densities. Distance sampling, as with many other techniques for estimating population density, assumes that it is possible to quantify the proportion of animals available to be counted in any census. Because desert tortoises spend much of their life in burrows, and the proportion of tortoises in burrows at any time can be extremely variable, this assumption is difficult to meet. This proportion of animals available to be counted is used as a correction factor (g0) in distance sampling and has been estimated from daily censuses of small populations of tortoises (6-12 individuals). These censuses are costly and produce imprecise estimates of g0 due to small sample sizes. We used data on tortoise activity from a large (N = 150) experimental population to model activity as a function of the biophysical attributes of the environment, but these models did not improve the precision of estimates from the focal populations. Thus, to evaluate how much of the variance in tortoise activity is apparently not predictable, we assessed whether activity on any particular day can predict activity on subsequent days with essentially identical environmental conditions. Tortoise activity was only weakly correlated on consecutive days, indicating that behavior was not repeatable or consistent among days with similar physical environments. © 2007 by the Ecological Society of America.

  10. Some Bayesian statistical techniques useful in estimating frequency and density

    USGS Publications Warehouse

    Johnson, D.H.

    1977-01-01

    This paper presents some elementary applications of Bayesian statistics to problems faced by wildlife biologists. Bayesian confidence limits for frequency of occurrence are shown to be generally superior to classical confidence limits. Population density can be estimated from frequency data if the species is sparsely distributed relative to the size of the sample plot. For other situations, limits are developed based on the normal distribution and prior knowledge that the density is non-negative, which insures that the lower confidence limit is non-negative. Conditions are described under which Bayesian confidence limits are superior to those calculated with classical methods; examples are also given on how prior knowledge of the density can be used to sharpen inferences drawn from a new sample.
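
    For the normal-distribution case with the non-negativity constraint, one concrete reading is that the posterior for the density is a normal truncated at zero, so the lower credible limit cannot go negative. The sketch below assumes a flat prior on the non-negative half-line; the paper's exact prior and derivation may differ.

```python
# Credible interval for a density that must be non-negative: with a flat prior
# on [0, inf) and a normal likelihood for the estimated mean density, the
# posterior is a normal truncated at zero. The flat prior is an assumption.
from scipy.stats import truncnorm

def credible_interval(mean_est, se, level=0.95):
    a = (0.0 - mean_est) / se          # truncation point in standard units
    lo = truncnorm.ppf((1 - level) / 2, a, float("inf"), loc=mean_est, scale=se)
    hi = truncnorm.ppf(1 - (1 - level) / 2, a, float("inf"), loc=mean_est, scale=se)
    return lo, hi

print(credible_interval(1.2, 0.8))     # lower limit is guaranteed to be >= 0
```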

  11. Density estimators in particle hydrodynamics. DTFE versus regular SPH

    NASA Astrophysics Data System (ADS)

    Pelupessy, F. I.; Schaap, W. E.; van de Weygaert, R.

    2003-05-01

    We present the results of a study comparing density maps reconstructed by the Delaunay Tessellation Field Estimator (DTFE) and by regular SPH kernel-based techniques. The density maps are constructed from the outcome of an SPH particle hydrodynamics simulation of a multiphase interstellar medium. The comparison between the two methods clearly demonstrates the superior performance of the DTFE with respect to conventional SPH methods, in particular at locations where SPH appears to fail. Filamentary and sheetlike structures form telling examples. The DTFE is a fully self-adaptive technique for reconstructing continuous density fields from discrete particle distributions, and is based upon the corresponding Delaunay tessellation. Its principal asset is its complete independence of arbitrary smoothing functions and parameters specifying the properties of these. As a result it manages to faithfully reproduce the anisotropies of the local particle distribution and through its adaptive and local nature proves to be optimally suited for uncovering the full structural richness in the density distribution. Through the improvement in local density estimates, calculations invoking the DTFE will yield a much better representation of physical processes which depend on density. This will be crucial in the case of feedback processes, which play a major role in galaxy and star formation. The presented results form an encouraging step towards the application and insertion of the DTFE in astrophysical hydrocodes. We describe an outline for the construction of a particle hydrodynamics code in which the DTFE replaces kernel-based methods. Further discussion addresses the issue and possibilities for a moving grid-based hydrocode invoking the DTFE, and Delaunay tessellations, in an attempt to combine the virtues of the Eulerian and Lagrangian approaches.
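
    The DTFE point estimate itself is compact: the density at a particle is (D+1) times its mass divided by the total volume of the Delaunay simplices sharing that particle. A 2-D sketch with unit masses follows; the full field reconstruction and the SPH comparison of the paper are not reproduced here.

```python
# DTFE point-density estimate in 2-D: density at a particle equals
# (D+1)*mass / (total area of the Delaunay triangles sharing that particle).
import numpy as np
from math import factorial
from scipy.spatial import Delaunay

def dtfe_density(points, mass=1.0):
    D = points.shape[1]                              # spatial dimension
    tri = Delaunay(points)
    simp = points[tri.simplices]                     # (n_simplices, D+1, D)
    edges = simp[:, 1:, :] - simp[:, :1, :]
    vol = np.abs(np.linalg.det(edges)) / factorial(D)
    contiguous = np.zeros(len(points))
    for s, v in zip(tri.simplices, vol):
        contiguous[s] += v                           # volume shared with each vertex
    return (D + 1) * mass / contiguous

pts = np.random.default_rng(1).random((500, 2))
rho = dtfe_density(pts)                              # per-particle density estimate
```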

  12. Estimating black bear density using DNA data from hair snares

    USGS Publications Warehouse

    Gardner, B.; Royle, J. Andrew; Wegan, M.T.; Rainbolt, R.E.; Curtis, P.D.

    2010-01-01

    DNA-based mark-recapture has become a methodological cornerstone of research focused on bear species. The objective of such studies is often to estimate population size; however, doing so is frequently complicated by movement of individual bears. Movement affects the probability of detection and the assumption of closure of the population required in most models. To mitigate the bias caused by movement of individuals, population size and density estimates are often adjusted using ad hoc methods, including buffering the minimum polygon of the trapping array. We used a hierarchical, spatial capture-recapture model that contains explicit components for the spatial-point process that governs the distribution of individuals and their exposure to (via movement), and detection by, traps. We modeled detection probability as a function of each individual's distance to the trap and an indicator variable for previous capture to account for possible behavioral responses. We applied our model to a 2006 hair-snare study of a black bear (Ursus americanus) population in northern New York, USA. Based on the microsatellite marker analysis of collected hair samples, 47 individuals were identified. We estimated mean density at 0.20 bears/km2. A positive estimate of the indicator variable suggests that bears are attracted to baited sites; therefore, including a trap-dependence covariate is important when using bait to attract individuals. Bayesian analysis of the model was implemented in WinBUGS, and we provide the model specification. The model can be applied to any spatially organized trapping array (hair snares, camera traps, mist nets, etc.) to estimate density and can also account for heterogeneity and covariate information at the trap or individual level. © The Wildlife Society.

  13. Structural Reliability Using Probability Density Estimation Methods Within NESSUS

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C. (Technical Monitor); Godines, Cody Ric

    2003-01-01

    A reliability analysis studies a mathematical model of a physical system taking into account uncertainties of design variables, and common results are estimations of a response density, which also implies estimations of its parameters. Some common density parameters include the mean value, the standard deviation, and specific percentile(s) of the response, which are measures of central tendency, variation, and probability regions, respectively. Reliability analyses are important since the results can lead to different designs by calculating the probability of observing safe responses in each of the proposed designs. All of this is done at the expense of added computational time as compared to a single deterministic analysis, which will result in one value of the response out of many that make up the density of the response. Sampling methods, such as Monte Carlo (MC) and Latin hypercube sampling (LHS), can be used to perform reliability analyses and can compute nonlinear response density parameters even if the response is dependent on many random variables. Hence, both methods are very robust; however, they are computationally expensive to use in the estimation of the response density parameters. Both methods are 2 of 13 stochastic methods that are contained within the Numerical Evaluation of Stochastic Structures Under Stress (NESSUS) program. NESSUS is a probabilistic finite element analysis (FEA) program that was developed through funding from NASA Glenn Research Center (GRC). It has the additional capability of being linked to other analysis programs; therefore, probabilistic fluid dynamics, fracture mechanics, and heat transfer are only a few of what is possible with this software. The LHS method is the newest addition to the stochastic methods within NESSUS. Part of this work was to enhance NESSUS with the LHS method. The new LHS module is complete, has been successfully integrated with NESSUS, and been used to study four different test cases that have been

  14. Combining Ratio Estimation for Low Density Parity Check (LDPC) Coding

    NASA Technical Reports Server (NTRS)

    Mahmoud, Saad; Hi, Jianjun

    2012-01-01

    The Low Density Parity Check (LDPC) code decoding algorithm makes use of a scaled received signal derived from maximizing the log-likelihood ratio of the received signal. The scaling factor (often called the combining ratio) in an AWGN channel is a ratio between the signal amplitude and the noise variance. Accurately estimating this ratio has been shown to yield as much as 0.6 dB of decoding performance gain. This presentation briefly describes three methods for estimating the combining ratio: a Pilot-Guided estimation method, a Blind estimation method, and a Simulation-Based Look-Up table. The Pilot-Guided estimation method has shown that the maximum likelihood estimate of the signal amplitude is the mean inner product of the received sequence and the known sequence, the attached synchronization marker (ASM), and the signal variance is the difference between the mean of the squared received sequence and the square of the signal amplitude. This method has the advantage of simplicity at the expense of latency, since several frames' worth of ASMs are needed. The Blind estimation method's maximum likelihood estimator is the average of the product of the received signal with the hyperbolic tangent of the product of the combining ratio and the received signal. The root of this equation can be determined by an iterative binary search between 0 and 1 after normalizing the received sequence. This method has the benefit of requiring one frame of data to estimate the combining ratio, which is good for faster-changing channels compared to the previous method; however, it is computationally expensive. The final method uses a look-up table based on prior simulation results to determine the signal amplitude and noise variance. In this method the received mean signal strength is controlled to a constant soft decision value. The magnitude of the deviation is averaged over a predetermined number of samples. This value is referenced in a look-up table to determine the combining ratio that prior simulation associated with the average magnitude of
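
    A sketch of the first two estimators, written directly from the description above, follows. The scaling convention for the combining ratio (amplitude divided by noise variance, without an extra factor of two) and the choice to bisect on the normalised amplitude are assumptions drawn from the text, not from the presentation itself.

```python
# Pilot-guided and blind combining-ratio estimators for BPSK over AWGN,
# following the description above. Scaling conventions are assumptions.
import numpy as np

def pilot_guided(received_asm, known_asm):
    """received_asm: soft symbols at the ASM positions; known_asm: +/-1 pattern."""
    amplitude = np.mean(received_asm * known_asm)            # ML amplitude estimate
    variance = np.mean(received_asm ** 2) - amplitude ** 2   # noise variance estimate
    return amplitude / variance                              # combining ratio

def blind(received, iters=40):
    r = received / np.sqrt(np.mean(received ** 2))           # normalise to unit power
    lo, hi = 1e-6, 1.0 - 1e-6                                # bracket the amplitude
    for _ in range(iters):
        a = 0.5 * (lo + hi)
        # fixed-point equation: amplitude = mean(r * tanh(ratio * r))
        f = np.mean(r * np.tanh(a * r / (1.0 - a ** 2))) - a
        lo, hi = (a, hi) if f > 0 else (lo, a)
    a = 0.5 * (lo + hi)
    return a / (1.0 - a ** 2)                                # amplitude / noise variance
```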

  15. A new wavelet-based thin plate element using B-spline wavelet on the interval

    NASA Astrophysics Data System (ADS)

    Jiawei, Xiang; Xuefeng, Chen; Zhengjia, He; Yinghong, Zhang

    2008-01-01

    By combining wavelet theory in mathematics with the variational principle of the finite element method, a class of wavelet-based plate elements is constructed. In the construction of the wavelet-based plate element, the element displacement field, represented by the coefficients of wavelet expansions in wavelet space, is transformed into physical degrees of freedom in finite element space via the corresponding two-dimensional C1 type transformation matrix. Then, based on the generalized potential energy functional of thin plate bending and vibration problems, the scaling functions of the B-spline wavelet on the interval (BSWI) at different scales are employed directly to form the multi-scale finite element approximation basis, so as to construct the BSWI plate element via the variational principle. The BSWI plate element combines the accuracy of B-spline function approximation with the advantages of various wavelet-based elements for structural analysis. Some static and dynamic numerical examples are studied to demonstrate the performance of the present element.

  16. Bayesian wavelet-based image denoising using the Gauss-Hermite expansion.

    PubMed

    Rahman, S M Mahbubur; Ahmad, M Omair; Swamy, M N S

    2008-10-01

    The probability density functions (PDFs) of the wavelet coefficients play a key role in many wavelet-based image processing algorithms, such as denoising. The conventional PDFs usually have a limited number of parameters that are calculated from the first few moments only. Consequently, such PDFs cannot be made to fit very well with the empirical PDF of the wavelet coefficients of an image. As a result, the shrinkage function utilizing any of these density functions provides a substandard denoising performance. In order for the probabilistic model of the image wavelet coefficients to be able to incorporate an appropriate number of parameters that are dependent on the higher order moments, a PDF using a series expansion in terms of the Hermite polynomials that are orthogonal with respect to the standard Gaussian weight function is introduced. A modification in the series function is introduced so that only a finite number of terms can be used to model the image wavelet coefficients, while at the same time ensuring that the resulting PDF is non-negative. It is shown that the proposed PDF matches the empirical one better than some of the standard ones, such as the generalized Gaussian or Bessel K-form PDF. A Bayesian image denoising technique is then proposed, wherein the new PDF is exploited to statistically model the subband as well as the local neighboring image wavelet coefficients. Experimental results on several test images demonstrate that the proposed denoising method, both in the subband-adaptive and locally adaptive conditions, provides a performance better than that of most of the methods that use PDFs with a limited number of parameters.
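
    One standard way to build such a Hermite-series PDF is a Gram-Charlier expansion around a Gaussian, with coefficients taken from sample moments of the wavelet coefficients. The sketch below does exactly that; the clipping used to keep the result non-negative is a crude stand-in for the modified series construction described in the paper.

```python
# Gram-Charlier series PDF for a sample of wavelet coefficients, using
# probabilists' Hermite polynomials (orthogonal w.r.t. the Gaussian weight).
import numpy as np
from numpy.polynomial.hermite_e import hermeval
from math import factorial

def hermite_series_pdf(coeffs_sample, x, order=6):
    mu, sd = coeffs_sample.mean(), coeffs_sample.std()
    z = (coeffs_sample - mu) / sd
    # series coefficients c_k = E[He_k(Z)] / k!
    c = [np.mean(hermeval(z, [0] * k + [1])) / factorial(k) for k in range(order + 1)]
    xz = (x - mu) / sd
    phi = np.exp(-0.5 * xz ** 2) / np.sqrt(2 * np.pi)
    pdf = phi * hermeval(xz, c) / sd
    return np.clip(pdf, 0, None)   # crude non-negativity guard, not the paper's fix
```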

  17. A projection and density estimation method for knowledge discovery.

    PubMed

    Stanski, Adam; Hellwich, Olaf

    2012-01-01

    A key ingredient of modern data analysis is probability density estimation. However, it is well known that the curse of dimensionality prevents a proper estimation of densities in high dimensions. The problem is typically circumvented by using a fixed set of assumptions about the data, e.g., by assuming partial independence of features, data on a manifold or a customized kernel. These fixed assumptions limit the applicability of a method. In this paper we propose a framework that uses a flexible set of assumptions instead. It allows a model to be tailored to various problems by means of 1d-decompositions. The approach achieves a fast runtime and is not limited by the curse of dimensionality as all estimations are performed in 1d-space. The wide range of applications is demonstrated on two very different real-world examples. The first is a data mining software package that allows the fully automatic discovery of patterns. The software is publicly available for evaluation. As a second example an image segmentation method is realized. It achieves state-of-the-art performance on a benchmark dataset although it uses only a fraction of the training data and very simple features.

  18. A Projection and Density Estimation Method for Knowledge Discovery

    PubMed Central

    Stanski, Adam; Hellwich, Olaf

    2012-01-01

    A key ingredient of modern data analysis is probability density estimation. However, it is well known that the curse of dimensionality prevents a proper estimation of densities in high dimensions. The problem is typically circumvented by using a fixed set of assumptions about the data, e.g., by assuming partial independence of features, data on a manifold or a customized kernel. These fixed assumptions limit the applicability of a method. In this paper we propose a framework that uses a flexible set of assumptions instead. It allows a model to be tailored to various problems by means of 1d-decompositions. The approach achieves a fast runtime and is not limited by the curse of dimensionality as all estimations are performed in 1d-space. The wide range of applications is demonstrated on two very different real-world examples. The first is a data mining software package that allows the fully automatic discovery of patterns. The software is publicly available for evaluation. As a second example an image segmentation method is realized. It achieves state-of-the-art performance on a benchmark dataset although it uses only a fraction of the training data and very simple features. PMID:23049675

  19. Effect of packing density on strain estimation by Fry method

    NASA Astrophysics Data System (ADS)

    Srivastava, Deepak; Ojha, Arun

    2015-04-01

    The Fry method is a graphical technique that uses the relative movement of material points, typically the grain centres or centroids, and yields the finite strain ellipse as the central vacancy of a point distribution. Application of the Fry method assumes an anticlustered and isotropic grain centre distribution in undistorted samples. This assumption is, however, difficult to test in practice. As an alternative, the sedimentological degree of sorting is routinely used as an approximation for the degree of clustering and anisotropy. The effect of sorting on the Fry method has already been explored by earlier workers. This study tests the effect of the tightness of packing, the packing density, which equals the ratio of the area occupied by all the grains to the total area of the sample. A practical advantage of using the degree of sorting or the packing density is that these parameters, unlike the degree of clustering or anisotropy, do not vary during a constant-volume homogeneous distortion. Using computer graphics simulations, we approach the issue of packing density in four steps: (i) generation of several sets of random point distributions such that each set has the same degree of sorting but differs from the other sets with respect to the packing density, (ii) two-dimensional homogeneous distortion of each point set by various known strain ratios and orientations, (iii) estimation of strain in each distorted point set by the Fry method, and (iv) error estimation by comparing the known strain with that given by the Fry method. Both the absolute errors and the relative root mean squared errors give consistent results. For a given degree of sorting, the Fry method gives better results in samples having greater than 30% packing density. This is because the grain centre distributions show stronger clustering and a greater degree of anisotropy with the decrease in packing density. As compared to the degree of sorting alone, a
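
    A minimal sketch of the two ingredients discussed above: the packing density as the ratio of total grain area to sample area, and the Fry construction itself, which plots all pairwise displacement vectors between grain centres so that the central vacancy outlines the strain ellipse. The circular-grain assumption and the array names are illustrative, not the authors' simulation code.

```python
import numpy as np

def packing_density(radii, sample_area):
    """Packing density = total grain area / total sample area (circular grains assumed)."""
    return np.sum(np.pi * np.asarray(radii) ** 2) / sample_area

def fry_points(centres):
    """All pairwise displacement vectors between grain centres (the Fry plot)."""
    c = np.asarray(centres, dtype=float)
    diffs = c[:, None, :] - c[None, :, :]        # shape (n, n, 2)
    mask = ~np.eye(len(c), dtype=bool)           # drop self-pairs
    return diffs[mask]

# Illustrative usage: 500 random grain centres in a unit square.
rng = np.random.default_rng(2)
centres = rng.random((500, 2))
radii = np.full(500, 0.01)
print("packing density:", packing_density(radii, 1.0))
print("Fry plot points:", fry_points(centres).shape)     # (500*499, 2)
```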

  20. Effect of Random Clustering on Surface Damage Density Estimates

    SciTech Connect

    Matthews, M J; Feit, M D

    2007-10-29

    Identification and spatial registration of laser-induced damage relative to incident fluence profiles is often required to characterize the damage properties of laser optics near damage threshold. Of particular interest in inertial confinement laser systems are large aperture beam damage tests (>1 cm²) where the number of initiated damage sites for φ > 14 J/cm² can approach 10⁵-10⁶, requiring automatic microscopy counting to locate and register individual damage sites. However, as was shown for the case of bacteria counting in biology decades ago, random overlapping or 'clumping' prevents accurate counting of Poisson-distributed objects at high densities, and must be accounted for if the underlying statistics are to be understood. In this work we analyze the effect of random clumping on damage initiation density estimates at fluences above damage threshold. The parameter ψ = aρ = ρ/ρ₀, where a = 1/ρ₀ is the mean damage site area and ρ is the mean number density, is used to characterize the onset of clumping, and approximations based on a simple model are used to derive an expression for clumped damage density vs. fluence and damage site size. The influence of the uncorrected ρ vs. φ curve on damage initiation probability predictions is also discussed.

  1. Estimation of Volumetric Breast Density from Digital Mammograms

    NASA Astrophysics Data System (ADS)

    Alonzo-Proulx, Olivier

    Mammographic breast density (MBD) is a strong risk factor for developing breast cancer. MBD is typically estimated by manually selecting the area occupied by the dense tissue on a mammogram. There is interest in measuring the volume of dense tissue, or volumetric breast density (VBD), as it could potentially be a stronger risk factor. This dissertation presents and validates an algorithm to measure the VBD from digital mammograms. The algorithm is based on an empirical calibration of the mammography system, supplemented by physical modeling of x-ray imaging that includes the effects of beam polychromaticity, scattered radiation, anti-scatter grid and detector glare. It also includes a method to estimate the compressed breast thickness as a function of the compression force, and a method to estimate the thickness of the breast outside of the compressed region. The algorithm was tested on 26 simulated mammograms obtained from computed tomography images, themselves deformed to mimic the effects of compression. This allowed the determination of the baseline accuracy of the algorithm. The algorithm was also used on 55 087 clinical digital mammograms, which allowed for the determination of the general characteristics of VBD and breast volume, as well as their variation as a function of age and time. The algorithm was also validated against a set of 80 magnetic resonance images, and compared against the area method on 2688 images. A preliminary study comparing association of breast cancer risk with VBD and MBD was also performed, indicating that VBD is a stronger risk factor. The algorithm was found to be accurate, generating quantitative density measurements rapidly and automatically. It can be extended to any digital mammography system, provided that the compression thickness of the breast can be determined accurately.

  2. Estimation of topside electron density profile using on-orbit measured GPS and electron density data.

    NASA Astrophysics Data System (ADS)

    Lee, J.

    2015-12-01

    The topside ionosphere is poorly characterized in terms of plasma parameters, yet this information is important for both practical and scientific applications. We establish a method for estimating the electron density profile using Langmuir probe and GPS data from the CHAMP satellite and compare the results with measurements from other satellites. The model uses a hydrostatic mapping function, the vertical scale height, and the vertical TEC (Total Electron Content). The electron density and GPS data, combined with the hydrostatic mapping function, give the vertical TEC; some algebra using an exponential model of the density profile then gives the vertical scale height of the ionosphere. The scale height is of the order of 10^2-10^3 km, so the exponential model can be applied again above the altitude of CHAMP. Therefore, applying the scale height to the exponential model, we obtain the topside electron density profile. The resulting density profiles are compared with data from other satellites, such as STSAT-1, ROCSAT, and DMSP, which measured the electron density at similar local time, latitude, and longitude but above CHAMP. This comparison shows that the method is acceptable and can be applied to other research on the topside ionosphere.
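
    A minimal sketch of the exponential topside model implied above, under the simplifying reading that for n(h) = n0·exp(-(h - h0)/H) the vertical TEC integrated upward from the satellite altitude equals n0·H, so a scale height can be recovered from the in-situ density and the topside vertical TEC and the profile then extrapolated. The variable names, units, and example numbers are assumptions for illustration only.

```python
import numpy as np

def topside_scale_height(n0, vtec_above):
    """Scale height H [m] from in-situ density n0 [el/m^3] and topside VTEC [el/m^2].

    For n(h) = n0 * exp(-(h - h0)/H), integrating from h0 upward gives VTEC = n0 * H.
    """
    return vtec_above / n0

def topside_profile(h_km, h0_km, n0, H_m):
    """Exponential topside electron density at altitudes h_km above the reference h0_km."""
    dh = (np.asarray(h_km, dtype=float) - h0_km) * 1e3   # km -> m
    return n0 * np.exp(-dh / H_m)

# Illustrative numbers: CHAMP-like altitude 400 km, n0 = 1e12 el/m^3,
# topside VTEC of 20 TECU (1 TECU = 1e16 el/m^2).
n0 = 1e12
H = topside_scale_height(n0, 20 * 1e16)
print("scale height ~", H / 1e3, "km")                   # ~200 km
print(topside_profile([500, 700, 1000], 400, n0, H))
```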

  3. Multivariate mixtures of Erlangs for density estimation under censoring.

    PubMed

    Verbelen, Roel; Antonio, Katrien; Claeskens, Gerda

    2016-07-01

    Multivariate mixtures of Erlang distributions form a versatile, yet analytically tractable, class of distributions making them suitable for multivariate density estimation. We present a flexible and effective fitting procedure for multivariate mixtures of Erlangs, which iteratively uses the EM algorithm, by introducing a computationally efficient initialization and adjustment strategy for the shape parameter vectors. We furthermore extend the EM algorithm for multivariate mixtures of Erlangs to be able to deal with randomly censored and fixed truncated data. The effectiveness of the proposed algorithm is demonstrated on simulated as well as real data sets.

  4. Accurate photometric redshift probability density estimation - method comparison and application

    NASA Astrophysics Data System (ADS)

    Rau, Markus Michael; Seitz, Stella; Brimioulle, Fabrice; Frank, Eibe; Friedrich, Oliver; Gruen, Daniel; Hoyle, Ben

    2015-10-01

    We introduce an ordinal classification algorithm for photometric redshift estimation, which significantly improves the reconstruction of photometric redshift probability density functions (PDFs) for individual galaxies and galaxy samples. As a use case we apply our method to CFHTLS galaxies. The ordinal classification algorithm treats distinct redshift bins as ordered values, which improves the quality of photometric redshift PDFs, compared with non-ordinal classification architectures. We also propose a new single value point estimate of the galaxy redshift, which can be used to estimate the full redshift PDF of a galaxy sample. This method is competitive in terms of accuracy with contemporary algorithms, which stack the full redshift PDFs of all galaxies in the sample, but requires orders of magnitude less storage space. The methods described in this paper greatly improve the log-likelihood of individual object redshift PDFs, when compared with a popular neural network code (ANNZ). In our use case, this improvement reaches 50 per cent for high-redshift objects (z ≥ 0.75). We show that using these more accurate photometric redshift PDFs will lead to a reduction in the systematic biases by up to a factor of 4, when compared with less accurate PDFs obtained from commonly used methods. The cosmological analyses we examine and find improvement upon are the following: gravitational lensing cluster mass estimates, modelling of angular correlation functions and modelling of cosmic shear correlation functions.

  5. A wavelet-based intermittency detection technique from PIV investigations in transitional boundary layers

    NASA Astrophysics Data System (ADS)

    Simoni, Daniele; Lengani, Davide; Guida, Roberto

    2016-09-01

    The transition process of the boundary layer growing over a flat plate with pressure gradient simulating the suction side of a low-pressure turbine blade and elevated free-stream turbulence intensity level has been analyzed by means of PIV and hot-wire measurements. A detailed view of the instantaneous flow field in the wall-normal plane highlights the physics characterizing the complex process leading to the formation of large-scale coherent structures during breakdown of the ordered motion of the flow, thus generating randomized oscillations (i.e., turbulent spots). This analysis gives the basis for the development of a new procedure aimed at determining the intermittency function describing (statistically) the transition process. To this end, a wavelet-based method has been employed for the identification of the large-scale structures created during the transition process. Subsequently, a probability density function of these events is defined, from which an intermittency function is deduced. The latter corresponds closely to the intermittency function of the transitional flow computed through a classic procedure based on hot-wire data. The agreement between the two procedures in the intermittency shape and spot production rate proves the capability of the method in providing the statistical representation of the transition process. The main advantages of the proposed procedure are its applicability to PIV data; the fact that it does not require a threshold level to discriminate first- and/or second-order time derivatives of hot-wire time traces (which makes the method independent of the operator); and that it provides clear evidence of the connection between the flow physics and the statistical representation of transition based on the theory of turbulent spot propagation.

  6. Wavelet-based cross-correlation analysis of structure scaling in turbulent clouds

    NASA Astrophysics Data System (ADS)

    Arshakian, Tigran G.; Ossenkopf, Volker

    2016-01-01

    Aims: We propose a statistical tool to compare the scaling behaviour of turbulence in pairs of molecular cloud maps. Using artificial maps with well-defined spatial properties, we calibrate the method and test its limitations to apply it ultimately to a set of observed maps. Methods: We develop the wavelet-based weighted cross-correlation (WWCC) method to study the relative contribution of structures of different sizes and their degree of correlation in two maps as a function of spatial scale, and the mutual displacement of structures in the molecular cloud maps. Results: We test the WWCC for circular structures having a single prominent scale and fractal structures showing a self-similar behaviour without prominent scales. Observational noise and a finite map size limit the scales on which the cross-correlation coefficients and displacement vectors can be reliably measured. For fractal maps containing many structures on all scales, the limitation from observational noise is negligible for signal-to-noise ratios ≳5. We propose an approach for the identification of correlated structures in the maps, which allows us to localize individual correlated structures and recognize their shapes and suggest a recipe for recovering enhanced scales in self-similar structures. The application of the WWCC to the observed line maps of the giant molecular cloud G 333 allows us to add specific scale information to the results obtained earlier using the principal component analysis. The WWCC confirms the chemical and excitation similarity of 13CO and C18O on all scales, but shows a deviation of HCN at scales of up to 7 pc. This can be interpreted as a chemical transition scale. The largest structures also show a systematic offset along the filament, probably due to a large-scale density gradient. Conclusions: The WWCC can compare correlated structures in different maps of molecular clouds identifying scales that represent structural changes, such as chemical and phase transitions

  7. Robust rate-control for wavelet-based image coding via conditional probability models.

    PubMed

    Gaubatz, Matthew D; Hemami, Sheila S

    2007-03-01

    Real-time rate-control for wavelet image coding requires characterization of the rate required to code quantized wavelet data. An ideal robust solution can be used with any wavelet coder and any quantization scheme. A large number of wavelet quantization schemes (perceptual and otherwise) are based on scalar dead-zone quantization of wavelet coefficients. A key to performing rate-control is, thus, fast, accurate characterization of the relationship between rate and quantization step size, the R-Q curve. A solution is presented using two invocations of the coder that estimates the slope of each R-Q curve via probability modeling. The method is robust to choices of probability models, quantization schemes and wavelet coders. Because of extreme robustness to probability modeling, a fast approximation to spatially adaptive probability modeling can be used in the solution, as well. With respect to achieving a target rate, the proposed approach and associated fast approximation yield average percentage errors around 0.5% and 1.0% on images in the test set. By comparison, 2-coding-pass rho-domain modeling yields errors around 2.0%, and post-compression rate-distortion optimization yields average errors of around 1.0% at rates below 0.5 bits-per-pixel (bpp) that decrease down to about 0.5% at 1.0 bpp; both methods exhibit more competitive performance on the larger images. The proposed method and fast approximation approach are also similar in speed to the other state-of-the-art methods. In addition to possessing speed and accuracy, the proposed method does not require any training and can maintain precise control over wavelet step sizes, which adds flexibility to a wavelet-based image-coding system.

  8. Probability Density and CFAR Threshold Estimation for Hyperspectral Imaging

    SciTech Connect

    Clark, G A

    2004-09-21

    The work reported here shows the proof of principle (using a small data set) for a suite of algorithms designed to estimate the probability density function of hyperspectral background data and compute the appropriate Constant False Alarm Rate (CFAR) matched filter decision threshold for a chemical plume detector. Future work will provide a thorough demonstration of the algorithms and their performance with a large data set. The LASI (Large Aperture Search Initiative) Project involves instrumentation and image processing for hyperspectral images of chemical plumes in the atmosphere. The work reported here involves research and development on algorithms for reducing the false alarm rate in chemical plume detection and identification algorithms operating on hyperspectral image cubes. The chemical plume detection algorithms to date have used matched filters designed using generalized maximum likelihood ratio hypothesis testing algorithms [1, 2, 5, 6, 7, 12, 10, 11, 13]. One of the key challenges in hyperspectral imaging research is the high false alarm rate that often results from the plume detector [1, 2]. The overall goal of this work is to extend the classical matched filter detector to apply Constant False Alarm Rate (CFAR) methods to reduce the false alarm rate, or Probability of False Alarm P_FA, of the matched filter [4, 8, 9, 12]. A detector designer is interested in minimizing the probability of false alarm while simultaneously maximizing the probability of detection P_D. This is summarized by the Receiver Operating Characteristic Curve (ROC) [10, 11], which is actually a family of curves depicting P_D vs. P_FA, parameterized by varying levels of signal to noise (or clutter) ratio (SNR or SCR). Often, it is advantageous to be able to specify a desired P_FA and develop a ROC curve (P_D vs. decision threshold r_0) for that case. That is the purpose of this work. Specifically, this work develops a set of algorithms and MATLAB
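
    A minimal sketch of the generic CFAR idea described above: estimate the background density of the detector output and pick the decision threshold r_0 whose upper-tail probability equals the desired P_FA. Here a Gaussian kernel density estimate and a numerical tail integral stand in for the report's specific algorithms; the synthetic scores and parameter choices are illustrative assumptions.

```python
import numpy as np
from scipy.stats import gaussian_kde

def cfar_threshold(background_scores, p_fa, grid_size=4096):
    """Threshold r_0 such that P(score > r_0) is approximately p_fa under the estimated pdf."""
    scores = np.asarray(background_scores, dtype=float)
    kde = gaussian_kde(scores)
    lo, hi = scores.min(), scores.max()
    span = hi - lo
    grid = np.linspace(lo - 0.5 * span, hi + 0.5 * span, grid_size)
    pdf = kde(grid)
    cdf = np.cumsum(pdf)
    cdf /= cdf[-1]                                   # normalize the numerical CDF
    return np.interp(1.0 - p_fa, cdf, grid)          # invert CDF at 1 - P_FA

# Illustrative usage: heavy-tailed background clutter, 0.1% false-alarm rate.
rng = np.random.default_rng(3)
scores = rng.standard_t(df=5, size=20000)
print("r_0 for P_FA = 1e-3:", cfar_threshold(scores, 1e-3))
```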

  9. Change-in-ratio density estimator for feral pigs is less biased than closed mark-recapture estimates

    USGS Publications Warehouse

    Hanson, L.B.; Grand, J.B.; Mitchell, M.S.; Jolley, D.B.; Sparklin, B.D.; Ditchkoff, S.S.

    2008-01-01

    Closed-population capture-mark-recapture (CMR) methods can produce biased density estimates for species with low or heterogeneous detection probabilities. In an attempt to address such biases, we developed a density-estimation method based on the change in ratio (CIR) of survival between two populations where survival, calculated using an open-population CMR model, is known to differ. We used our method to estimate density for a feral pig (Sus scrofa) population on Fort Benning, Georgia, USA. To assess its validity, we compared it to an estimate of the minimum density of pigs known to be alive and two estimates based on closed-population CMR models. Comparison of the density estimates revealed that the CIR estimator produced a density estimate with low precision that was reasonable with respect to minimum known density. By contrast, density point estimates using the closed-population CMR models were less than the minimum known density, consistent with biases created by low and heterogeneous capture probabilities for species like feral pigs that may occur at low density or are difficult to capture. Our CIR density estimator may be useful for tracking broad-scale, long-term changes in species, such as large cats, for which closed CMR models are unlikely to work. © CSIRO 2008.

  10. Matrix Methods for Estimating the Coherence Functions from Estimates of the Cross-Spectral Density Matrix

    DOE PAGESBeta

    Smallwood, D. O.

    1996-01-01

    It is shown that the usual method for estimating the coherence functions (ordinary, partial, and multiple) for a general multiple-input/multiple-output problem can be expressed as a modified form of Cholesky decomposition of the cross-spectral density matrix of the input and output records. The results can be equivalently obtained using singular value decomposition (SVD) of the cross-spectral density matrix. Using SVD suggests a new form of fractional coherence. The formulation as a SVD problem also suggests a way to order the inputs when a natural physical order of the inputs is absent.
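
    A minimal sketch of the simplest quantity mentioned above, the ordinary coherence function, computed from cross- and auto-spectral density estimates; the partial and multiple coherences obtained from the Cholesky/SVD treatment of the full cross-spectral density matrix are not reproduced here, and the Welch parameters are illustrative choices.

```python
import numpy as np
from scipy.signal import csd, welch

def ordinary_coherence(x, y, fs, nperseg=256):
    """gamma_xy^2(f) = |S_xy|^2 / (S_xx * S_yy) from Welch-type spectral estimates."""
    f, sxy = csd(x, y, fs=fs, nperseg=nperseg)
    _, sxx = welch(x, fs=fs, nperseg=nperseg)
    _, syy = welch(y, fs=fs, nperseg=nperseg)
    return f, np.abs(sxy) ** 2 / (sxx * syy)

# Illustrative usage: y is a noisy, scaled copy of x, so coherence is below 1.
rng = np.random.default_rng(4)
fs = 1000.0
x = rng.standard_normal(10000)
y = 0.8 * x + 0.6 * rng.standard_normal(10000)
f, coh = ordinary_coherence(x, y, fs)
print(coh[:5])
```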

  11. Estimating tropical-forest density profiles from multibaseline interferometric SAR

    NASA Technical Reports Server (NTRS)

    Treuhaft, Robert; Chapman, Bruce; dos Santos, Joao Roberto; Dutra, Luciano; Goncalves, Fabio; da Costa Freitas, Corina; Mura, Jose Claudio; de Alencastro Graca, Paulo Mauricio

    2006-01-01

    Vertical profiles of forest density are potentially robust indicators of forest biomass, fire susceptibility and ecosystem function. Tropical forests, which are among the most dense and complicated targets for remote sensing, contain about 45% of the world's biomass. Remote sensing of tropical forest structure is therefore an important component to global biomass and carbon monitoring. This paper shows preliminary results of a multibaseline interferometric SAR (InSAR) experiment over primary, secondary, and selectively logged forests at La Selva Biological Station in Costa Rica. The profile shown results from inverse Fourier transforming 8 of the 18 baselines acquired. A profile is shown compared to lidar and field measurements. Results are highly preliminary and for qualitative assessment only. Parameter estimation will eventually replace Fourier inversion as the means of producing profiles.

  12. The effectiveness of tape playbacks in estimating Black Rail densities

    USGS Publications Warehouse

    Legare, M.; Eddleman, W.R.; Buckley, P.A.; Kelly, C.

    1999-01-01

    Tape playback is often the only efficient technique to survey for secretive birds. We measured the vocal responses and movements of radio-tagged black rails (Laterallus jamaicensis; 26 M, 17 F) to playback of vocalizations at 2 sites in Florida during the breeding seasons of 1992-95. We used coefficients from logistic regression equations to model probability of a response conditional on the bird's sex, nesting status, distance to playback source, and time of survey. With a probability of 0.811, nonnesting male black rails were most likely to respond to playback, while nesting females were the least likely to respond (probability = 0.189). We used linear regression to determine daily, monthly and annual variation in response from weekly playback surveys along a fixed route during the breeding seasons of 1993-95. Significant sources of variation in the regression model were month (F(3,48) = 3.89, P = 0.014), year (F(2,48) = 9.37, P < 0.001), temperature (F(1,48) = 5.44, P = 0.024), and month × year (F(5,48) = 2.69, P = 0.031). The model was highly significant (P < 0.001) and explained 54% of the variation of mean response per survey period (r² = 0.54). We combined response probability data from radio-tagged black rails with playback survey route data to provide a density estimate of 0.25 birds/ha for the St. Johns National Wildlife Refuge. The relation between the number of black rails heard during playback surveys and the actual number present was influenced by a number of variables. We recommend caution when making density estimates from tape playback surveys.

  13. Cortical cell and neuron density estimates in one chimpanzee hemisphere.

    PubMed

    Collins, Christine E; Turner, Emily C; Sawyer, Eva Kille; Reed, Jamie L; Young, Nicole A; Flaherty, David K; Kaas, Jon H

    2016-01-19

    The density of cells and neurons in the neocortex of many mammals varies across cortical areas and regions. This variability is, perhaps, most pronounced in primates. Nonuniformity in the composition of cortex suggests regions of the cortex have different specializations. Specifically, regions with densely packed neurons contain smaller neurons that are activated by relatively few inputs, thereby preserving information, whereas regions that are less densely packed have larger neurons that have more integrative functions. Here we present the numbers of cells and neurons for 742 discrete locations across the neocortex in a chimpanzee. Using isotropic fractionation and flow fractionation methods for cell and neuron counts, we estimate that neocortex of one hemisphere contains 9.5 billion cells and 3.7 billion neurons. Primary visual cortex occupies 35 cm(2) of surface, 10% of the total, and contains 737 million densely packed neurons, 20% of the total neurons contained within the hemisphere. Other areas of high neuron packing include secondary visual areas, somatosensory cortex, and prefrontal granular cortex. Areas of low levels of neuron packing density include motor and premotor cortex. These values reflect those obtained from more limited samples of cortex in humans and other primates. PMID:26729880

  14. Cortical cell and neuron density estimates in one chimpanzee hemisphere.

    PubMed

    Collins, Christine E; Turner, Emily C; Sawyer, Eva Kille; Reed, Jamie L; Young, Nicole A; Flaherty, David K; Kaas, Jon H

    2016-01-19

    The density of cells and neurons in the neocortex of many mammals varies across cortical areas and regions. This variability is, perhaps, most pronounced in primates. Nonuniformity in the composition of cortex suggests regions of the cortex have different specializations. Specifically, regions with densely packed neurons contain smaller neurons that are activated by relatively few inputs, thereby preserving information, whereas regions that are less densely packed have larger neurons that have more integrative functions. Here we present the numbers of cells and neurons for 742 discrete locations across the neocortex in a chimpanzee. Using isotropic fractionation and flow fractionation methods for cell and neuron counts, we estimate that neocortex of one hemisphere contains 9.5 billion cells and 3.7 billion neurons. Primary visual cortex occupies 35 cm(2) of surface, 10% of the total, and contains 737 million densely packed neurons, 20% of the total neurons contained within the hemisphere. Other areas of high neuron packing include secondary visual areas, somatosensory cortex, and prefrontal granular cortex. Areas of low levels of neuron packing density include motor and premotor cortex. These values reflect those obtained from more limited samples of cortex in humans and other primates.

  15. Estimating Foreign-Object-Debris Density from Photogrammetry Data

    NASA Technical Reports Server (NTRS)

    Long, Jason; Metzger, Philip; Lane, John

    2013-01-01

    Within the first few seconds after launch of STS-124, debris traveling vertically near the vehicle was captured on two 16-mm film cameras surrounding the launch pad. One particular piece of debris caught the attention of engineers investigating the release of the flame trench fire bricks. The question to be answered was if the debris was a fire brick, and if it represented the first bricks that were ejected from the flame trench wall, or was the object one of the pieces of debris normally ejected from the vehicle during launch. If it was typical launch debris, such as SRB throat plug foam, why was it traveling vertically and parallel to the vehicle during launch, instead of following its normal trajectory, flying horizontally toward the north perimeter fence? By utilizing the Runge-Kutta integration method for velocity and the Verlet integration method for position, a method that suppresses trajectory computational instabilities due to noisy position data was obtained. This combination of integration methods provides a means to extract the best estimate of drag force and drag coefficient under the non-ideal conditions of limited position data. This integration strategy leads immediately to the best possible estimate of object density, within the constraints of unknown particle shape. These types of calculations do not exist in readily available off-the-shelf simulation software, especially where photogrammetry data is needed as an input.

  16. Application of Wavelet Based Denoising for T-Wave Alternans Analysis in High Resolution ECG Maps

    NASA Astrophysics Data System (ADS)

    Janusek, D.; Kania, M.; Zaczek, R.; Zavala-Fernandez, H.; Zbieć, A.; Opolski, G.; Maniewski, R.

    2011-01-01

    T-wave alternans (TWA) allows for identification of patients at an increased risk of ventricular arrhythmia. A stress test, which increases heart rate in a controlled manner, is used for TWA measurement. However, TWA detection and analysis are often disturbed by muscular interference. An evaluation of wavelet-based denoising methods was performed to find an optimal algorithm for TWA analysis. ECG signals recorded in twelve patients with cardiac disease were analyzed. In seven of them a significant T-wave alternans magnitude was detected. The application of a wavelet-based denoising method in the pre-processing stage increases the T-wave alternans magnitude as well as the number of BSPM signals in which TWA was detected.
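
    A generic wavelet soft-thresholding pre-processing step of the kind evaluated above, sketched with PyWavelets; the wavelet family, decomposition level, and universal threshold are common textbook choices, not the specific denoising scheme used in the study.

```python
import numpy as np
import pywt

def wavelet_denoise(signal, wavelet="db4", level=4):
    """Soft-threshold the detail coefficients with the universal threshold and reconstruct."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Noise sigma estimated from the finest detail level via the median absolute deviation.
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(len(signal)))
    denoised = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)[: len(signal)]

# Illustrative usage on a noisy synthetic trace.
rng = np.random.default_rng(5)
t = np.linspace(0, 1, 2048)
clean = np.sin(2 * np.pi * 5 * t) + 0.3 * np.sin(2 * np.pi * 40 * t)
noisy = clean + 0.2 * rng.standard_normal(t.size)
print(np.std(wavelet_denoise(noisy) - clean))   # residual error after denoising
```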

  17. Wavelet-based surrogate time series for multiscale simulation of heterogeneous catalysis

    DOE PAGESBeta

    Savara, Aditya Ashi; Daw, C. Stuart; Xiong, Qingang; Gur, Sourav; Danielson, Thomas L.; Hin, Celine N.; Pannala, Sreekanth; Frantziskonis, George N.

    2016-01-28

    We propose a wavelet-based scheme that encodes the essential dynamics of discrete microscale surface reactions in a form that can be coupled with continuum macroscale flow simulations with high computational efficiency. This makes it possible to simulate the dynamic behavior of reactor-scale heterogeneous catalysis without requiring detailed concurrent simulations at both the surface and continuum scales using different models. Our scheme is based on the application of wavelet-based surrogate time series that encodes the essential temporal and/or spatial fine-scale dynamics at the catalyst surface. The encoded dynamics are then used to generate statistically equivalent, randomized surrogate time series, which can be linked to the continuum scale simulation. As a result, we illustrate an application of this approach using two different kinetic Monte Carlo simulations with different characteristic behaviors typical for heterogeneous chemical reactions.

  18. Total variation versus wavelet-based methods for image denoising in fluorescence lifetime imaging microscopy

    PubMed Central

    Chang, Ching-Wei; Mycek, Mary-Ann

    2014-01-01

    We report the first application of wavelet-based denoising (noise removal) methods to time-domain box-car fluorescence lifetime imaging microscopy (FLIM) images and compare the results to novel total variation (TV) denoising methods. Methods were tested first on artificial images and then applied to low-light live-cell images. Relative to undenoised images, TV methods could improve lifetime precision up to 10-fold in artificial images, while preserving the overall accuracy of lifetime and amplitude values of a single-exponential decay model and improving local lifetime fitting in live-cell images. Wavelet-based methods were at least 4-fold faster than TV methods, but could introduce significant inaccuracies in recovered lifetime values. The denoising methods discussed can potentially enhance a variety of FLIM applications, including live-cell, in vivo animal, or endoscopic imaging studies, especially under challenging imaging conditions such as low-light or fast video-rate imaging. PMID:22415891

  19. Probability Distribution Extraction from TEC Estimates based on Kernel Density Estimation

    NASA Astrophysics Data System (ADS)

    Demir, Uygar; Toker, Cenk; Çenet, Duygu

    2016-07-01

    Statistical analysis of the ionosphere, specifically the Total Electron Content (TEC), may reveal important information about its temporal and spatial characteristics. One of the core metrics that express the statistical properties of a stochastic process is its Probability Density Function (pdf). Furthermore, statistical parameters such as mean, variance and kurtosis, which can be derived from the pdf, may provide information about the spatial uniformity or clustering of the electron content. For example, the variance differentiates between a quiet ionosphere and a disturbed one, whereas kurtosis differentiates between a geomagnetic storm and an earthquake. Therefore, valuable information about the state of the ionosphere (and the natural phenomena that cause the disturbance) can be obtained by looking at the statistical parameters. In the literature, there are publications which try to fit the histogram of TEC estimates to some well-known pdfs such as the Gaussian, exponential, etc. However, constraining a histogram to fit a function with a fixed shape will increase estimation error, and all the information extracted from such a pdf will continue to contain this error. With such techniques, it is highly likely that artificial characteristics will appear in the estimated pdf which are not present in the original data. In the present study, we use the Kernel Density Estimation (KDE) technique to estimate the pdf of the TEC. KDE is a non-parametric approach which does not impose a specific form on the TEC. As a result, better pdf estimates that almost perfectly fit the observed TEC values can be obtained as compared to the techniques mentioned above. KDE is particularly good at representing the tail probabilities and outliers. We also calculate the mean, variance and kurtosis of the measured TEC values. The technique is applied to the ionosphere over Turkey where the TEC values are estimated from the GNSS measurements from the TNPGN-Active (Turkish National Permanent
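
    A minimal sketch of the non-parametric step described above: a Gaussian kernel density estimate of TEC values alongside the sample mean, variance, and kurtosis. The synthetic TEC values and the default Scott bandwidth are illustrative assumptions, not the study's data or settings.

```python
import numpy as np
from scipy.stats import gaussian_kde, kurtosis

def tec_pdf_and_moments(tec_values):
    """Return a KDE-based pdf of the TEC sample and its basic statistics."""
    tec = np.asarray(tec_values, dtype=float)
    kde = gaussian_kde(tec)                    # non-parametric: no fixed shape imposed
    stats = {"mean": tec.mean(),
             "variance": tec.var(ddof=1),
             "kurtosis": kurtosis(tec)}        # excess kurtosis
    return kde, stats

# Illustrative usage with synthetic TEC values (in TECU), deliberately bimodal.
rng = np.random.default_rng(6)
tec = np.concatenate([rng.normal(18, 3, 800), rng.normal(30, 5, 200)])
kde, stats = tec_pdf_and_moments(tec)
grid = np.linspace(5, 50, 5)
print(kde(grid), stats)
```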

  20. Recognition of short-term changes in physiological signals with the wavelet-based multifractal formalism

    NASA Astrophysics Data System (ADS)

    Pavlov, Alexey N.; Sindeeva, Olga A.; Sindeev, Sergey S.; Pavlova, Olga N.; Rybalova, Elena V.; Semyachkina-Glushkovskaya, Oxana V.

    2016-03-01

    In this paper we address the problem of revealing and recognizing transitions between distinct physiological states using quite short fragments of experimental recordings. With wavelet-based multifractal analysis we characterize changes of complexity and correlation properties in the stress-induced dynamics of arterial blood pressure in rats. We propose an approach for associating the revealed changes with distinct physiological regulatory mechanisms and for quantifying the influence of each mechanism.

  1. Wavelet-Based Real-Time Diagnosis of Complex Systems

    NASA Technical Reports Server (NTRS)

    Gulati, Sandeep; Mackey, Ryan

    2003-01-01

    A new method of robust, autonomous real-time diagnosis of a time-varying complex system (e.g., a spacecraft, an advanced aircraft, or a process-control system) is presented here. It is based upon the characterization and comparison of (1) the execution of software, as reported by discrete data, and (2) data from sensors that monitor the physical state of the system, such as performance sensors or similar quantitative time-varying measurements. By taking account of the relationship between execution of, and the responses to, software commands, this method satisfies a key requirement for robust autonomous diagnosis, namely, ensuring that control is maintained and followed. Such monitoring of control software requires that estimates of the state of the system, as represented within the control software itself, are representative of the physical behavior of the system. In this method, data from sensors and discrete command data are analyzed simultaneously and compared to determine their correlation. If the sensed physical state of the system differs from the software estimate (see figure) or if the system fails to perform a transition as commanded by software, or such a transition occurs without the associated command, the system has experienced a control fault. This method provides a means of detecting such divergent behavior and automatically generating an appropriate warning.

  2. Usefulness of wavelet-based features as global descriptors of VHR satellite images

    NASA Astrophysics Data System (ADS)

    Pyka, Krystian; Drzewiecki, Wojciech; Bernat, Katarzyna; Wawrzaszek, Anna; Krupiński, Michal

    2014-10-01

    In this paper we present the results of research carried out to assess the usefulness of wavelet-based measures of image texture for classification of panchromatic VHR satellite image content. The study is based on images obtained from the EROS-A satellite. Wavelet-based features are calculated according to two approaches. In the first, the wavelet energy is calculated for each component at every level of decomposition using the Haar wavelet. In the second, the variance and kurtosis are calculated as mean values of the detail components obtained with filters belonging to the D, LA and MB groups of various lengths. The results indicate that both approaches are useful and complement one another. The most useful wavelet-based features include not only those calculated with short or long filters, but also those calculated with filters of intermediate length. Usage of filters of different type and length, as well as different statistical parameters (variance, kurtosis) calculated as means for each decomposition level, improved the discriminative properties of the feature vector, which initially consisted of the wavelet energies of each component.
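
    A minimal sketch of the first approach described above: per-subband wavelet energies from a Haar decomposition of an image, collected into a feature vector. The normalization by total detail energy and the decomposition depth are assumptions for illustration.

```python
import numpy as np
import pywt

def wavelet_energy_features(image, wavelet="haar", level=3):
    """Energy of each detail component (H, V, D) at every decomposition level."""
    coeffs = pywt.wavedec2(np.asarray(image, dtype=float), wavelet, level=level)
    feats = []
    for detail in coeffs[1:]:                 # skip the approximation coefficients
        for band in detail:                   # horizontal, vertical, diagonal details
            feats.append(np.sum(band ** 2))
    feats = np.array(feats)
    return feats / feats.sum()                # normalize so features are comparable

# Illustrative usage on a random "texture" patch.
rng = np.random.default_rng(7)
patch = rng.random((128, 128))
print(wavelet_energy_features(patch))
```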

  3. Robust location and spread measures for nonparametric probability density function estimation.

    PubMed

    López-Rubio, Ezequiel

    2009-10-01

    Robustness against outliers is a desirable property of any unsupervised learning scheme. In particular, probability density estimators benefit from incorporating this feature. A possible strategy to achieve this goal is to substitute the sample mean and the sample covariance matrix by more robust location and spread estimators. Here we use the L1-median to develop a nonparametric probability density function (PDF) estimator. We prove its most relevant properties, and we show its performance in density estimation and classification applications.
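
    A minimal sketch of the L1-median (geometric median) used above as the robust location estimator, computed with the classical Weiszfeld iteration; the stopping tolerance and the handling of points coinciding with the current iterate are simplifying assumptions.

```python
import numpy as np

def l1_median(points, tol=1e-8, max_iter=1000):
    """Geometric (L1) median via Weiszfeld's iteratively reweighted averaging."""
    pts = np.asarray(points, dtype=float)
    m = pts.mean(axis=0)                       # start from the ordinary mean
    for _ in range(max_iter):
        d = np.linalg.norm(pts - m, axis=1)
        d = np.where(d < 1e-12, 1e-12, d)      # guard against division by zero
        w = 1.0 / d
        m_new = (pts * w[:, None]).sum(axis=0) / w.sum()
        if np.linalg.norm(m_new - m) < tol:
            return m_new
        m = m_new
    return m

# Illustrative usage: the L1-median resists the outliers that pull the mean.
rng = np.random.default_rng(8)
data = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(50, 1, (10, 2))])
print("mean:", data.mean(axis=0), "L1-median:", l1_median(data))
```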

  4. A generalized model for estimating the energy density of invertebrates

    USGS Publications Warehouse

    James, Daniel A.; Csargo, Isak J.; Von Eschen, Aaron; Thul, Megan D.; Baker, James M.; Hayer, Cari-Ann; Howell, Jessica; Krause, Jacob; Letvin, Alex; Chipps, Steven R.

    2012-01-01

    Invertebrate energy density (ED) values are traditionally measured using bomb calorimetry. However, many researchers rely on a few published literature sources to obtain ED values because of time and sampling constraints on measuring ED with bomb calorimetry. Literature values often do not account for spatial or temporal variability associated with invertebrate ED. Thus, these values can be unreliable for use in models and other ecological applications. We evaluated the generality of the relationship between invertebrate ED and proportion of dry-to-wet mass (pDM). We then developed and tested a regression model to predict ED from pDM based on a taxonomically, spatially, and temporally diverse sample of invertebrates representing 28 orders in aquatic (freshwater, estuarine, and marine) and terrestrial (temperate and arid) habitats from 4 continents and 2 oceans. Samples included invertebrates collected in all seasons over the last 19 y. Evaluation of these data revealed a significant relationship between ED and pDM (r² = 0.96, p < 0.0001), where ED (as J/g wet mass) was estimated from pDM as ED = 22,960·pDM − 174.2. Model evaluation showed that nearly all (98.8%) of the variability between observed and predicted values for invertebrate ED could be attributed to residual error in the model. Regression of observed on predicted values revealed that the 97.5% joint confidence region included the intercept of 0 (−103.0 ± 707.9) and slope of 1 (1.01 ± 0.12). Use of this model requires that only dry and wet mass measurements be obtained, resulting in significant time, sample size, and cost savings compared to traditional bomb calorimetry approaches. This model should prove useful for a wide range of ecological studies because it is unaffected by taxonomic, seasonal, or spatial variability.

  5. Wavelet-based fMRI analysis: 3-D denoising, signal separation, and validation metrics

    PubMed Central

    Khullar, Siddharth; Michael, Andrew; Correa, Nicolle; Adali, Tulay; Baum, Stefi A.; Calhoun, Vince D.

    2010-01-01

    We present a novel integrated wavelet-domain based framework (w-ICA) for 3-D de-noising functional magnetic resonance imaging (fMRI) data followed by source separation analysis using independent component analysis (ICA) in the wavelet domain. We propose the idea of a 3-D wavelet-based multi-directional de-noising scheme where each volume in a 4-D fMRI data set is sub-sampled using the axial, sagittal and coronal geometries to obtain three different slice-by-slice representations of the same data. The filtered intensity value of an arbitrary voxel is computed as an expected value of the de-noised wavelet coefficients corresponding to the three viewing geometries for each sub-band. This results in a robust set of de-noised wavelet coefficients for each voxel. Given the decorrelated nature of these de-noised wavelet coefficients, it is possible to obtain more accurate source estimates using ICA in the wavelet domain. The contributions of this work can be realized as two modules. First, the analysis module where we combine a new 3-D wavelet denoising approach with better signal separation properties of ICA in the wavelet domain, to yield an activation component that corresponds closely to the true underlying signal and is maximally independent with respect to other components. Second, we propose and describe two novel shape metrics for post-ICA comparisons between activation regions obtained through different frameworks. We verified our method using simulated as well as real fMRI data and compared our results against the conventional scheme (Gaussian smoothing + spatial ICA: s-ICA). The results show significant improvements based on two important features: (1) preservation of shape of the activation region (shape metrics) and (2) receiver operating characteristic (ROC) curves. It was observed that the proposed framework was able to preserve the actual activation shape in a consistent manner even for very high noise levels in addition to significant reduction in false

  6. Multiscale analysis of genome-wide replication timing profiles using a wavelet-based signal-processing algorithm.

    PubMed

    Audit, Benjamin; Baker, Antoine; Chen, Chun-Long; Rappailles, Aurélien; Guilbaud, Guillaume; Julienne, Hanna; Goldar, Arach; d'Aubenton-Carafa, Yves; Hyrien, Olivier; Thermes, Claude; Arneodo, Alain

    2013-01-01

    In this protocol, we describe the use of the LastWave open-source signal-processing command language (http://perso.ens-lyon.fr/benjamin.audit/LastWave/) for analyzing cellular DNA replication timing profiles. LastWave makes use of a multiscale, wavelet-based signal-processing algorithm that is based on a rigorous theoretical analysis linking timing profiles to fundamental features of the cell's DNA replication program, such as the average replication fork polarity and the difference between replication origin density and termination site density. We describe the flow of signal-processing operations to obtain interactive visual analyses of DNA replication timing profiles. We focus on procedures for exploring the space-scale map of apparent replication speeds to detect peaks in the replication timing profiles that represent preferential replication initiation zones, and for delimiting U-shaped domains in the replication timing profile. In comparison with the generally adopted approach that involves genome segmentation into regions of constant timing separated by timing transition regions, the present protocol enables the recognition of more complex patterns of the spatio-temporal replication program and has a broader range of applications. Completing the full procedure should not take more than 1 h, although learning the basics of the program can take a few hours and achieving full proficiency in the use of the software may take days.

  7. Wavelet-based localization of oscillatory sources from magnetoencephalography data.

    PubMed

    Lina, J M; Chowdhury, R; Lemay, E; Kobayashi, E; Grova, C

    2014-08-01

    Transient brain oscillatory activities recorded with electroencephalography (EEG) or magnetoencephalography (MEG) are characteristic features in physiological and pathological processes. This study is aimed at describing, evaluating, and illustrating with clinical data a new method for localizing the sources of oscillatory cortical activity recorded by MEG. The method combines time-frequency representation and an entropic regularization technique in a common framework, assuming that brain activity is sparse in time and space. Spatial sparsity relies on the assumption that brain activity is organized among cortical parcels. Sparsity in time is achieved by transposing the inverse problem in the wavelet representation, for both data and sources. We propose an estimator of the wavelet coefficients of the sources based on the maximum entropy on the mean (MEM) principle. The full dynamics of the sources is obtained from the inverse wavelet transform, and principal component analysis of the reconstructed time courses is applied to extract oscillatory components. This methodology is evaluated using realistic simulations of single-trial signals, combining fast and sudden discharges (spike) along with bursts of oscillating activity. The method is finally illustrated with a clinical application using MEG data acquired on a patient with a right orbitofrontal epilepsy.

  8. Atmospheric turbulence mitigation using complex wavelet-based fusion.

    PubMed

    Anantrasirichai, Nantheera; Achim, Alin; Kingsbury, Nick G; Bull, David R

    2013-06-01

    Restoring a scene distorted by atmospheric turbulence is a challenging problem in video surveillance. The effect, caused by random, spatially varying, perturbations, makes a model-based solution difficult and in most cases, impractical. In this paper, we propose a novel method for mitigating the effects of atmospheric distortion on observed images, particularly airborne turbulence which can severely degrade a region of interest (ROI). In order to extract accurate detail about objects behind the distorting layer, a simple and efficient frame selection method is proposed to select informative ROIs only from good-quality frames. The ROIs in each frame are then registered to further reduce offsets and distortions. We solve the space-varying distortion problem using region-level fusion based on the dual tree complex wavelet transform. Finally, contrast enhancement is applied. We further propose a learning-based metric specifically for image quality assessment in the presence of atmospheric distortion. This is capable of estimating quality in both full- and no-reference scenarios. The proposed method is shown to significantly outperform existing methods, providing enhanced situational awareness in a range of surveillance scenarios.

  9. Nonparametric estimation of population density for line transect sampling using FOURIER series

    USGS Publications Warehouse

    Crain, B.R.; Burnham, K.P.; Anderson, D.R.; Lake, J.L.

    1979-01-01

    A nonparametric, robust density estimation method is explored for the analysis of right-angle distances from a transect line to the objects sighted. The method is based on the FOURIER series expansion of a probability density function over an interval. With only mild assumptions, a general population density estimator of wide applicability is obtained.
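
    A minimal sketch of a Fourier-series estimator of the perpendicular-distance density on [0, w], of the kind the paper develops: f̂(x) = 1/w + Σ â_k cos(kπx/w) with â_k = (2/(n·w)) Σ_i cos(kπx_i/w). The fixed number of terms m and the synthetic distances are assumptions; in practice the number of terms is chosen from the data.

```python
import numpy as np

def fourier_density(distances, w, m=4):
    """Fourier-series estimate of the perpendicular-distance pdf on [0, w]."""
    x = np.asarray(distances, dtype=float)
    n = x.size
    k = np.arange(1, m + 1)
    # a_k estimated as (2 / (n * w)) * sum_i cos(k * pi * x_i / w)
    a_k = (2.0 / (n * w)) * np.cos(np.outer(x, k) * np.pi / w).sum(axis=0)

    def f_hat(t):
        t = np.asarray(t, dtype=float)
        return 1.0 / w + np.cos(np.outer(t, k) * np.pi / w) @ a_k
    return f_hat

# Illustrative usage: sighting distances fall off with distance out to w = 50 m.
rng = np.random.default_rng(9)
dists = np.abs(rng.normal(0, 15, 400))
dists = dists[dists <= 50.0]
f_hat = fourier_density(dists, w=50.0)
print("f(0) estimate:", f_hat(0.0))
```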

  10. Wavelet-based coherence measures of global seismic noise properties

    NASA Astrophysics Data System (ADS)

    Lyubushin, A. A.

    2015-04-01

    The coherent behavior of four parameters characterizing the global field of low-frequency (periods from 2 to 500 min) seismic noise is studied. These parameters include the generalized Hurst exponent, the multifractal singularity spectrum support width, the normalized entropy of variance, and kurtosis. The analysis is based on the data from 229 broadband stations of the GSN, GEOSCOPE, and GEOFON networks for a 17-year period from the beginning of 1997 to the end of 2013. The entire set of stations is subdivided into eight groups, which, taken together, provide full coverage of the Earth. The daily median values of the studied noise parameters are calculated in each group. This procedure yields four 8-dimensional time series with a time step of 1 day and a length of 6209 samples in each scalar component. For each of the four 8-dimensional time series, a multiple correlation measure is estimated, which is based on computing robust canonical correlations for the Haar wavelet coefficients at the first detail level within a moving time window of length 365 days. These correlation measures for each noise property demonstrate a substantial increase beginning in 2007-2008 that continued until the end of 2013. Taking into account the well-known phenomenon of noise correlations increasing before catastrophes, this increase in seismic noise synchronization is interpreted as an indicator of the activation of the strongest earthquakes (magnitudes not less than 8.5) observed since the Sumatra mega-earthquake of 26 Dec 2004. This synchronization continues growing up to the end of the studied period (2013), which can be interpreted as a probable precursor of a further increase in the intensity of the strongest earthquakes all over the world.

  11. Etalon-photometric method for estimation of tissues density at x-ray images

    NASA Astrophysics Data System (ADS)

    Buldakov, Nicolay S.; Buldakova, Tatyana I.; Suyatinov, Sergey I.

    2016-04-01

    The etalon-photometric method for quantitative estimation of the physical density of pathological entities is considered. The method consists in using an etalon during the registration and estimation of photometric characteristics of objects. An algorithm for estimating physical density in X-ray images is presented.

  12. On the estimation of dynamic mass density of random composites.

    PubMed

    Jin, Congrui

    2012-08-01

    The dynamic effective mass density and bulk modulus of an inhomogeneous medium at low frequency limit are discussed. Random configurations in a variety of two-dimensional physical contexts are considered. In each case, effective dynamic mass density and bulk modulus are calculated based on eigenmode matching theory. The results agree with those provided by Martin et al. [J. Acoust. Soc. Am. 128, 571-577 (2010)] obtained from effective wavenumber method. PMID:22894183

  13. Rigorous home range estimation with movement data: a new autocorrelated kernel density estimator.

    PubMed

    Fleming, C H; Fagan, W F; Mueller, T; Olson, K A; Leimgruber, P; Calabrese, J M

    2015-05-01

    Quantifying animals' home ranges is a key problem in ecology and has important conservation and wildlife management applications. Kernel density estimation (KDE) is a workhorse technique for range delineation problems that is both statistically efficient and nonparametric. KDE assumes that the data are independent and identically distributed (IID). However, animal tracking data, which are routinely used as inputs to KDEs, are inherently autocorrelated and violate this key assumption. As we demonstrate, using realistically autocorrelated data in conventional KDEs results in grossly underestimated home ranges. We further show that the performance of conventional KDEs actually degrades as data quality improves, because autocorrelation strength increases as movement paths become more finely resolved. To remedy these flaws with the traditional KDE method, we derive an autocorrelated KDE (AKDE) from first principles to use autocorrelated data, making it perfectly suited for movement data sets. We illustrate the vastly improved performance of AKDE using analytical arguments, relocation data from Mongolian gazelles, and simulations based upon the gazelle's observed movement process. By yielding better minimum area estimates for threatened wildlife populations, we believe that future widespread use of AKDE will have significant impact on ecology and conservation biology. PMID:26236833

  14. Joint discrepancy evaluation of an existing steel bridge using time-frequency and wavelet-based approach

    NASA Astrophysics Data System (ADS)

    Walia, Suresh Kumar; Patel, Raj Kumar; Vinayak, Hemant Kumar; Parti, Raman

    2013-12-01

    The objective of this study is to bring out the errors introduced during construction which are overlooked during the physical verification of the bridge. Such errors can be pointed out if the symmetry of the structure is challenged. This paper thus presents the study of downstream and upstream truss of newly constructed steel bridge using time-frequency and wavelet-based approach. The variation in the behavior of truss joints of bridge with variation in the vehicle speed has been worked out to determine their flexibility. The testing on the steel bridge was carried out with the same instrument setup on both the upstream and downstream trusses of the bridge at two different speeds with the same moving vehicle. The nodal flexibility investigation is carried out using power spectral density, short-time Fourier transform, and wavelet packet transform with respect to both the trusses and speed. The results obtained have shown that the joints of both upstream and downstream trusses of the bridge behave in a different manner even if designed for the same loading due to constructional variations and vehicle movement, in spite of the fact that the analytical models present a simplistic model for analysis and design. The difficulty of modal parameter extraction of the particular bridge under study increased with the increase in speed due to decreased excitation time.

  15. Curve fitting of the corporate recovery rates: the comparison of Beta distribution estimation and kernel density estimation.

    PubMed

    Chen, Rongda; Wang, Ze

    2013-01-01

    The recovery rate is essential to estimating a portfolio's loss and economic capital. Neglecting the randomness of the recovery rate distribution may lead to underestimated risk. This study introduces two distribution models, Beta distribution estimation and kernel density estimation, to simulate the distribution of recovery rates of corporate loans and bonds. Models based on the Beta distribution are in common use, for example in CreditMetrics by J.P. Morgan, Portfolio Manager by KMV, and LossCalc by Moody's. However, they have a serious defect: they cannot fit bimodal or multimodal distributions such as the recovery rates of corporate loans and bonds shown by Moody's new data. To overcome this flaw, kernel density estimation is introduced, and the simulation results from the histogram, Beta distribution estimation, and kernel density estimation are compared, leading to the conclusion that the Gaussian kernel density estimate better reproduces the bimodal or multimodal distributions of the corporate loan and bond samples. Finally, a Chi-square test of the Gaussian kernel density estimate confirms that it fits the curve of recovery rates of loans and bonds. Using the kernel density estimate to delineate the bimodal recovery rates of bonds is therefore preferable in credit risk management. PMID:23874558
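
    A minimal sketch of the comparison described above, on synthetic bimodal "recovery rate" data (illustrative only, not the authors' code or data): a fitted Beta distribution versus a Gaussian kernel density estimate.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# synthetic bimodal recovery rates: a low-recovery and a high-recovery cluster
rates = np.concatenate([rng.beta(2, 8, 400), rng.beta(9, 2, 600)])

a, b, loc, scale = stats.beta.fit(rates, floc=0, fscale=1)   # single Beta fit on (0, 1)
kde = stats.gaussian_kde(rates)                              # Gaussian kernel density estimate

x = np.linspace(0.01, 0.99, 99)
beta_pdf = stats.beta.pdf(x, a, b, loc, scale)
kde_pdf = kde(x)
# The single Beta fit cannot reproduce both modes; the KDE follows them.
print("Beta pdf near the two modes:", beta_pdf[x.searchsorted(0.2)], beta_pdf[x.searchsorted(0.8)])
print("KDE  pdf near the two modes:", kde_pdf[x.searchsorted(0.2)], kde_pdf[x.searchsorted(0.8)])
```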

  16. Exact minimax estimation of the predictive density in sparse Gaussian models

    PubMed Central

    Mukherjee, Gourab; Johnstone, Iain M.

    2015-01-01

    We consider estimating the predictive density under Kullback–Leibler loss in an ℓ0 sparse Gaussian sequence model. Explicit expressions of the first order minimax risk along with its exact constant, asymptotically least favorable priors and optimal predictive density estimates are derived. Compared to the sparse recovery results involving point estimation of the normal mean, new decision theoretic phenomena are seen. Suboptimal performance of the class of plug-in density estimates reflects the predictive nature of the problem and optimal strategies need diversification of the future risk. We find that minimax optimal strategies lie outside the Gaussian family but can be constructed with threshold predictive density estimates. Novel minimax techniques involving simultaneous calibration of the sparsity adjustment and the risk diversification mechanisms are used to design optimal predictive density estimates. PMID:26448678

  17. On the analysis of wavelet-based approaches for print mottle artifacts

    NASA Astrophysics Data System (ADS)

    Eid, Ahmed H.; Cooper, Brian E.

    2014-01-01

    Print mottle is one of several attributes described in ISO/IEC DTS 24790, a draft technical specification for the measurement of image quality for monochrome printed output. It defines mottle as aperiodic fluctuations of lightness less than about 0.4 cycles per millimeter, a definition inherited from the latest official standard on printed image quality, ISO/IEC 13660. In a previous publication, we introduced a modification to the ISO/IEC 13660 mottle measurement algorithm that includes a band-pass, wavelet-based filtering step to limit the contribution of high-frequency fluctuations including those introduced by print grain artifacts. This modification has improved the algorithm's correlation with the subjective evaluation of experts who rated the severity of printed mottle artifacts. Seeking to improve upon the mottle algorithm in ISO/IEC 13660, the ISO 24790 committee evaluated several mottle metrics. This led to the selection of the above wavelet-based approach as the top candidate algorithm for inclusion in a future ISO/IEC standard. Recent experimental results from the ISO committee showed higher correlation between the wavelet-based approach and the subjective evaluation conducted by the ISO committee members based upon 25 samples covering a variety of printed mottle artifacts. In addition, we introduce an alternative approach for measuring mottle defects based on spatial frequency analysis of wavelet-filtered images. Our goal is to establish a link between the spatial-based mottle (ISO/IEC DTS 24790) approach and its equivalent frequency-based one in light of Parseval's theorem. Our experimental results showed a high correlation between the spatial- and frequency-based approaches.
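
    The Parseval's-theorem link mentioned above can be illustrated with a short numerical check (a sketch on a random stand-in for a band-pass filtered lightness map, not the ISO algorithm): the spatial variance of the zero-mean filtered image equals its average spectral power.

```python
import numpy as np

rng = np.random.default_rng(2)
img = rng.normal(size=(256, 256))            # stand-in for a band-pass (wavelet) filtered lightness map
img -= img.mean()

spatial_power = np.mean(img ** 2)            # spatial-domain fluctuation (variance)
spectrum = np.fft.fft2(img)
freq_power = np.sum(np.abs(spectrum) ** 2) / img.size ** 2   # Parseval: same quantity from the spectrum

print(spatial_power, freq_power)             # the two numbers agree to numerical precision
```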

  18. Comparison of Wavelet-Based and HHT-Based Feature Extraction Methods for Hyperspectral Image Classification

    NASA Astrophysics Data System (ADS)

    Huang, X.-M.; Hsu, P.-H.

    2012-07-01

    Hyperspectral images, which contain rich and fine spectral information, can be used to identify surface objects and improve land use/cover classification accuracy. Because of the high dimensionality of hyperspectral data, traditional statistics-based classifiers cannot be used directly on such images when training samples are limited. This problem is referred to as the "curse of dimensionality". The common remedy is dimensionality reduction, most frequently through feature extraction. There are two types of feature extraction methods: the first is based on the statistical properties of the data, and the other is based on time-frequency analysis. In this study, time-frequency analysis methods are used to extract features for hyperspectral image classification. It has been shown that wavelet-based feature extraction provides an effective tool for spectral feature extraction. The Hilbert-Huang transform (HHT), a relatively new time-frequency analysis tool, has been widely used in nonlinear and nonstationary data analysis. In this study, the wavelet transform and the HHT are applied to the hyperspectral data for physical spectral analysis, yielding a small number of salient features that reduce the dimensionality of the hyperspectral images while preserving classification accuracy. An AVIRIS data set is used to test the performance of the proposed HHT-based feature extraction methods, and the results are compared with wavelet-based feature extraction. According to the experimental results, HHT-based feature extraction methods are effective tools, and their results are similar to those of wavelet-based feature extraction.
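
    A hedged sketch of wavelet-based spectral feature extraction of the kind discussed above (an assumed per-pixel workflow using PyWavelets, not the authors' exact pipeline; the cube is a random stand-in):

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_energy_features(spectrum, wavelet="db4", level=4):
    """Return the log-energy of each decomposition level of one pixel spectrum."""
    coeffs = pywt.wavedec(spectrum, wavelet, level=level)
    return np.array([np.log1p(np.sum(c ** 2)) for c in coeffs])

# toy "hyperspectral" cube: 10 x 10 pixels, 220 bands
cube = np.random.default_rng(3).normal(size=(10, 10, 220))
features = np.apply_along_axis(wavelet_energy_features, -1, cube)
print(features.shape)   # (10, 10, level + 1) features instead of 220 bands
```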

  19. Wavelet-based Poisson solver for use in particle-in-cell simulations.

    PubMed

    Terzić, Balsa; Pogorelov, Ilya V

    2005-06-01

    We report on a successful implementation of a wavelet-based Poisson solver for use in three-dimensional particle-in-cell simulations. Our method harnesses advantages afforded by the wavelet formulation, such as sparsity of operators and data sets, existence of effective preconditioners, and the ability to simultaneously remove numerical noise and further compress relevant data sets. We present and discuss preliminary results relating to the application of the new solver to test problems in accelerator physics and astrophysics. PMID:15980304

  20. Serial identification of EEG patterns using adaptive wavelet-based analysis

    NASA Astrophysics Data System (ADS)

    Nazimov, A. I.; Pavlov, A. N.; Nazimova, A. A.; Grubov, V. V.; Koronovskii, A. A.; Sitnikova, E.; Hramov, A. E.

    2013-10-01

    The problem of recognizing specific oscillatory patterns in electroencephalograms with the continuous wavelet transform is discussed. Aiming to improve the abilities of wavelet-based tools, we propose a serial adaptive method for sequential identification of EEG patterns such as sleep spindles and spike-wave discharges. This method provides an optimal selection of parameters based on objective functions and enables extraction of the most informative features of the recognized structures. Different ways of increasing the quality of pattern recognition within the proposed serial adaptive technique are considered.
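
    As a rough, non-adaptive stand-in for the kind of wavelet-based pattern screening described above (the paper's adaptive parameter selection is not reproduced; the sampling rate, frequency band, and threshold are assumptions):

```python
import numpy as np
import pywt  # PyWavelets

fs = 250.0                                            # assumed sampling rate, Hz
t = np.arange(0, 10, 1 / fs)
eeg = np.random.default_rng(4).normal(size=t.size)
eeg[1000:1250] += np.sin(2 * np.pi * 12 * t[1000:1250])   # synthetic sleep-spindle-like burst

fc = pywt.central_frequency("morl")                   # center frequency of the Morlet wavelet
freqs = np.arange(9, 16)                              # target 9-15 Hz spindle band
scales = fc * fs / freqs
coefs, _ = pywt.cwt(eeg, scales, "morl", sampling_period=1 / fs)
band_power = np.mean(np.abs(coefs) ** 2, axis=0)      # band-limited wavelet power over time

threshold = band_power.mean() + 3 * band_power.std()  # crude fixed threshold (not adaptive)
event_times = np.where(band_power > threshold)[0] / fs
print("candidate spindle samples (s):", event_times[:10])
```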

  1. Automatic quantitative analysis of ultrasound tongue contours via wavelet-based functional mixed models.

    PubMed

    Lancia, Leonardo; Rausch, Philip; Morris, Jeffrey S

    2015-02-01

    This paper illustrates the application of wavelet-based functional mixed models to automatic quantification of differences between tongue contours obtained through ultrasound imaging. The reliability of this method is demonstrated through the analysis of tongue positions recorded from a female and a male speaker at the onset of the vowels /a/ and /i/ produced in the context of the consonants /t/ and /k/. The proposed method allows detection of significant differences between configurations of the articulators that are visible in ultrasound images during the production of different speech gestures and is compatible with statistical designs containing both fixed and random terms.

  2. ICER-3D: A Progressive Wavelet-Based Compressor for Hyperspectral Images

    NASA Technical Reports Server (NTRS)

    Kiely, A.; Klimesh, M.; Xie, H.; Aranki, N.

    2005-01-01

    ICER-3D is a progressive, wavelet-based compressor for hyperspectral images. ICER-3D is derived from the ICER image compressor. ICER-3D can provide lossless and lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The three-dimensional wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of hyperspectral data sets, while facilitating elimination of spectral ringing artifacts. Correlation is further exploited by a context modeler that effectively exploits spectral dependencies in the wavelet-transformed hyperspectral data. Performance results illustrating the benefits of these features are presented.

  3. A novel 3D wavelet based filter for visualizing features in noisy biological data

    SciTech Connect

    Moss, W C; Haase, S; Lyle, J M; Agard, D A; Sedat, J W

    2005-01-05

    We have developed a 3D wavelet-based filter for visualizing structural features in volumetric data. The only variable parameter is a characteristic linear size of the feature of interest. The filtered output contains only those regions that are correlated with the characteristic size, thus denoising the image. We demonstrate the use of the filter by applying it to 3D data from a variety of electron microscopy samples including low contrast vitreous ice cryogenic preparations, as well as 3D optical microscopy specimens.

  4. Brain tissue compartment density estimated using diffusion-weighted MRI yields tissue parameters consistent with histology.

    PubMed

    Sepehrband, Farshid; Clark, Kristi A; Ullmann, Jeremy F P; Kurniawan, Nyoman D; Leanage, Gayeshika; Reutens, David C; Yang, Zhengyi

    2015-09-01

    We examined whether quantitative density measures of cerebral tissue consistent with histology can be obtained from diffusion magnetic resonance imaging (MRI). By incorporating prior knowledge of myelin and cell membrane densities, absolute tissue density values were estimated from relative intracellular and intraneurite density values obtained from diffusion MRI. The NODDI (neurite orientation dispersion and density imaging) technique, which can be applied clinically, was used. Myelin density estimates were compared with the results of electron and light microscopy in ex vivo mouse brain and with published density estimates in a healthy human brain. In ex vivo mouse brain, estimated myelin densities in different subregions of the mouse corpus callosum were almost identical to values obtained from electron microscopy (diffusion MRI: 42 ± 6%, 36 ± 4%, and 43 ± 5%; electron microscopy: 41 ± 10%, 36 ± 8%, and 44 ± 12% in genu, body and splenium, respectively). In the human brain, good agreement was observed between estimated fiber density measurements and previously reported values based on electron microscopy. Estimated density values were unaffected by crossing fibers.

  5. Characterization of a maximum-likelihood nonparametric density estimator of kernel type

    NASA Technical Reports Server (NTRS)

    Geman, S.; Mcclure, D. E.

    1982-01-01

    Kernel-type density estimators are calculated by the method of sieves. Proofs are presented for the characterization theorem: Let x(1), x(2), ..., x(n) be a random sample from a population with density f(0). Let sigma 0 and consider estimators f of f(0) defined by (1).

  6. Constructing valid density matrices on an NMR quantum information processor via maximum likelihood estimation

    NASA Astrophysics Data System (ADS)

    Singh, Harpreet; Arvind; Dorai, Kavita

    2016-09-01

    Estimation of quantum states is an important step in any quantum information processing experiment. A naive reconstruction of the density matrix from experimental measurements can often give density matrices which are not positive, and hence not physically acceptable. How do we ensure that at all stages of reconstruction, we keep the density matrix positive? Recently a method has been suggested based on maximum likelihood estimation, wherein the density matrix is guaranteed to be positive definite. We experimentally implement this protocol on an NMR quantum information processor. We discuss several examples and compare with the standard method of state estimation.
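
    To make the problem statement concrete, the sketch below shows the simplest repair of an unphysical reconstruction, clipping negative eigenvalues and renormalizing; this is a projection heuristic for illustration only, not the maximum-likelihood protocol implemented in the paper.

```python
import numpy as np

def nearest_density_matrix(rho_raw):
    """Project a Hermitian matrix onto the set of unit-trace positive semidefinite matrices."""
    rho = 0.5 * (rho_raw + rho_raw.conj().T)          # enforce Hermiticity
    vals, vecs = np.linalg.eigh(rho)
    vals = np.clip(vals, 0.0, None)                   # remove unphysical negative populations
    rho = vecs @ np.diag(vals) @ vecs.conj().T
    return rho / np.trace(rho).real                   # restore unit trace

noisy = np.array([[0.6, 0.55], [0.55, 0.4]], dtype=complex)   # naive estimate with a negative eigenvalue
rho = nearest_density_matrix(noisy)
print(np.linalg.eigvalsh(rho), np.trace(rho).real)
```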

  7. Estimated global nitrogen deposition using NO2 column density

    USGS Publications Warehouse

    Lu, Xuehe; Jiang, Hong; Zhang, Xiuying; Liu, Jinxun; Zhang, Zhen; Jin, Jiaxin; Wang, Ying; Xu, Jianhui; Cheng, Miaomiao

    2013-01-01

    Global nitrogen deposition has increased over the past 100 years. Monitoring and simulation studies have evaluated nitrogen deposition at both the global and regional scale. With the development of remote-sensing instruments, tropospheric NO2 column density retrieved from the Global Ozone Monitoring Experiment (GOME) and Scanning Imaging Absorption Spectrometer for Atmospheric Chartography (SCIAMACHY) sensors now provides a new opportunity to understand changes in reactive nitrogen in the atmosphere. The concentration of NO2 in the atmosphere has a significant effect on atmospheric nitrogen deposition. Following the general nitrogen deposition calculation method, we use principal component regression to evaluate global nitrogen deposition based on global NO2 column density and meteorological data. In terms of simulation accuracy, about 70% of the Earth's land area passed a significance test of the regression. In addition, NO2 column density has a significant influence on regression results over 44% of global land. The simulated results show that global average nitrogen deposition was 0.34 g m⁻² yr⁻¹ from 1996 to 2009 and is increasing at about 1% per year. Consistent with previous research findings, our simulated results show that China, Europe, and the USA are three hotspots of nitrogen deposition. In this study, Southern Asia was found to be another hotspot of nitrogen deposition (about 1.58 g m⁻² yr⁻¹ and maintaining a high growth rate). As nitrogen deposition increases, the number of regions threatened by high nitrogen deposition is also increasing. With N emissions continuing to increase in the future, the area whose ecosystems are affected by high levels of nitrogen deposition will grow.

  8. Wavelet-based ECG data compression system with linear quality control scheme.

    PubMed

    Ku, Cheng-Tung; Hung, King-Chu; Wu, Tsung-Ching; Wang, Huan-Sheng

    2010-06-01

    Maintaining reconstructed signals at a desired level of quality is crucial for lossy ECG data compression. Wavelet-based approaches using a recursive decomposition process are unsuitable for real-time ECG signal recoding and commonly exhibit nonlinear compression performance, with distortion that is sensitive to quantization error. This sensitivity stems from the word-length-growth (WLG) effect and is unfavorable for reconstruction quality control in ECG data compression. In this paper, the 1-D reversible round-off nonrecursive discrete periodic wavelet transform is applied to overcome the WLG magnification effect in terms of the mechanisms of error propagation resistance and significant normalization of octave coefficients. The two mechanisms enable the design of a multivariable quantization scheme that can obtain a compression performance with the approximate characteristics of linear distortion. The quantization scheme can be controlled with a single control variable. Based on the linear compression performance, a linear quantization scale prediction model is presented for guaranteeing reconstruction quality. Following the use of the MIT-BIH arrhythmia database, the experimental results show that the proposed system, with lower computational complexity, can obtain much better reconstruction quality control than other wavelet-based methods.
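
    As a generic, hedged sketch of wavelet ECG compression by thresholding and coarse quantization (the paper's reversible round-off non-recursive transform and its linear quality-control model are not reproduced here; the signal and settings are illustrative):

```python
import numpy as np
import pywt  # PyWavelets

rng = np.random.default_rng(5)
ecg = np.cumsum(rng.normal(size=2048))                  # stand-in for an ECG segment

coeffs = pywt.wavedec(ecg, "bior4.4", level=5)
arr, slices = pywt.coeffs_to_array(coeffs)
thr = np.percentile(np.abs(arr), 90)                    # keep roughly the largest 10% of coefficients
q = np.where(np.abs(arr) >= thr, np.round(arr / 0.5) * 0.5, 0.0)   # coarse uniform quantization

recon = pywt.waverec(pywt.array_to_coeffs(q, slices, output_format="wavedec"), "bior4.4")
prd = 100 * np.linalg.norm(ecg - recon[: ecg.size]) / np.linalg.norm(ecg)
print(f"nonzero coefficients kept: {(q != 0).mean():.1%}, PRD: {prd:.2f}%")
```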

  9. Wavelet-based neural network analysis of internal carotid arterial Doppler signals.

    PubMed

    Ubeyli, Elif Derya; Güler, Inan

    2006-06-01

    In this study, internal carotid arterial Doppler signals recorded from 130 subjects, where 45 of them suffered from internal carotid artery stenosis, 44 of them suffered from internal carotid artery occlusion and the rest of them were healthy subjects, were classified using wavelet-based neural network. Wavelet-based neural network model, employing the multilayer perceptron, was used for analysis of the internal carotid arterial Doppler signals. Multi-layer perceptron neural network (MLPNN) trained with the Levenberg-Marquardt algorithm was used to detect stenosis and occlusion in internal carotid arteries. In order to determine the MLPNN inputs, spectral analysis of the internal carotid arterial Doppler signals was performed using wavelet transform (WT). The MLPNN was trained, cross validated, and tested with training, cross validation, and testing sets, respectively. All these data sets were obtained from internal carotid arteries of healthy subjects, subjects suffering from internal carotid artery stenosis and occlusion. The correct classification rate was 96% for healthy subjects, 96.15% for subjects having internal carotid artery stenosis and 96.30% for subjects having internal carotid artery occlusion. The classification results showed that the MLPNN trained with the Levenberg-Marquardt algorithm was effective to detect internal carotid artery stenosis and occlusion. PMID:16848135

  10. Exploring Weak and Overlapped Returns of a LIDAR Waveform with a Wavelet-Based Echo Detector

    NASA Astrophysics Data System (ADS)

    Wang, C. K.

    2012-08-01

    Full waveform data recording the reflected laser signal from ground objects have been provided by some commercial airborne LIDAR systems in the last few years. Waveform data enable users to explore more information and characteristics of the earth's surface than a conventional LIDAR point cloud. An important application is extracting extra point clouds from waveform data in addition to the point cloud generated by the online echo detection process. Some difficult-to-detect points, which may be important to topographic mapping, can be rediscovered from waveform data. The motivation of this study is to explore weak and overlapped returns within a waveform. This paper presents a wavelet-based echo detection algorithm, which is compared with the zero-crossing detection method for evaluation. Simulated waveforms degraded with different levels of noise are generated to test the limitations of the detector. The experimental results show that the wavelet-based detector outperformed the zero-crossing detector in both difficult-to-detect cases. The detector is also applied to a real waveform dataset, where it found 18% more echoes than the total number of echoes provided by the instrument. The proposed detector is thus valuable for finding weak and overlapped returns in waveforms.
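
    A small hedged sketch of wavelet-based echo detection on a simulated waveform (SciPy's generic continuous-wavelet peak finder stands in for the paper's detector; pulse positions, widths, and noise level are illustrative):

```python
import numpy as np
from scipy.signal import find_peaks_cwt

t = np.arange(0.0, 100.0)                                    # sample index along the waveform
def pulse(center, amp, width=3.0):
    return amp * np.exp(-0.5 * ((t - center) / width) ** 2)

waveform = pulse(40, 1.0) + pulse(50, 0.35) + pulse(75, 0.15)    # strong, partially overlapped, weak returns
waveform += np.random.default_rng(6).normal(scale=0.02, size=t.size)

echoes = find_peaks_cwt(waveform, widths=np.arange(2, 8))
print("detected echo positions (samples):", echoes)
```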

  11. Analysis of damped tissue vibrations in time-frequency space: a wavelet-based approach.

    PubMed

    Enders, Hendrik; von Tscharner, Vinzenz; Nigg, Benno M

    2012-11-15

    There is evidence that vibrations of soft tissue compartments are not appropriately described by a single sinusoidal oscillation for certain types of locomotion such as running or sprinting. This paper discusses a new method to quantify damping of superimposed oscillations using a wavelet-based time-frequency approach. This wavelet-based method was applied to experimental data in order to analyze the decay of the overall power of vibration signals over time. Eight healthy subjects performed sprinting trials on a 30 m runway on a hard surface and a soft surface. Soft tissue vibrations were quantified from the tissue overlaying the muscle belly of the medial gastrocnemius muscle. The new methodology determines damping coefficients with an average error of 2.2% based on a wavelet scaling factor of 0.7. This was sufficient to detect differences in soft tissue compartment damping between the hard and soft surface. On average, the hard surface elicited a 7.02 s⁻¹ lower damping coefficient than the soft surface (p<0.05). A power spectral analysis of the muscular vibrations occurring during sprinting confirmed that vibrations during dynamic movements cannot be represented by a single sinusoidal function. Compared to the traditional sinusoidal approach, this newly developed method can quantify vibration damping for systems with multiple vibration modes that interfere with one another. This new time-frequency analysis may be more appropriate when an acceleration trace does not follow a sinusoidal function, as is the case with multiple forms of human locomotion. PMID:22995145

  12. Probabilistic Analysis and Density Parameter Estimation Within Nessus

    NASA Astrophysics Data System (ADS)

    Godines, Cody R.; Manteufel, Randall D.

    2002-12-01

    This NASA educational grant has the goal of promoting probabilistic analysis methods to undergraduate and graduate UTSA engineering students. Two undergraduate-level and one graduate-level course were offered at UTSA providing a large number of students exposure to and experience in probabilistic techniques. The grant provided two research engineers from Southwest Research Institute the opportunity to teach these courses at UTSA, thereby exposing a large number of students to practical applications of probabilistic methods and state-of-the-art computational methods. In classroom activities, students were introduced to the NESSUS computer program, which embodies many algorithms in probabilistic simulation and reliability analysis. Because the NESSUS program is used at UTSA in both student research projects and selected courses, a student version of a NESSUS manual has been revised and improved, with additional example problems being added to expand the scope of the example application problems. This report documents two research accomplishments in the integration of a new sampling algorithm into NESSUS and in the testing of the new algorithm. The new Latin Hypercube Sampling (LHS) subroutines use the latest NESSUS input file format and specific files for writing output. The LHS subroutines are called out early in the program so that no unnecessary calculations are performed. Proper correlation between sets of multidimensional coordinates can be obtained by using NESSUS' LHS capabilities. Finally, two types of correlation are written to the appropriate output file. The program enhancement was tested by repeatedly estimating the mean, standard deviation, and 99th percentile of four different responses using Monte Carlo (MC) and LHS. These test cases, put forth by the Society of Automotive Engineers, are used to compare probabilistic methods. For all test cases, it is shown that LHS has a lower estimation error than MC when used to estimate the mean, standard deviation

  13. Probabilistic Analysis and Density Parameter Estimation Within Nessus

    NASA Technical Reports Server (NTRS)

    Godines, Cody R.; Manteufel, Randall D.; Chamis, Christos C. (Technical Monitor)

    2002-01-01

    This NASA educational grant has the goal of promoting probabilistic analysis methods to undergraduate and graduate UTSA engineering students. Two undergraduate-level and one graduate-level course were offered at UTSA providing a large number of students exposure to and experience in probabilistic techniques. The grant provided two research engineers from Southwest Research Institute the opportunity to teach these courses at UTSA, thereby exposing a large number of students to practical applications of probabilistic methods and state-of-the-art computational methods. In classroom activities, students were introduced to the NESSUS computer program, which embodies many algorithms in probabilistic simulation and reliability analysis. Because the NESSUS program is used at UTSA in both student research projects and selected courses, a student version of a NESSUS manual has been revised and improved, with additional example problems being added to expand the scope of the example application problems. This report documents two research accomplishments in the integration of a new sampling algorithm into NESSUS and in the testing of the new algorithm. The new Latin Hypercube Sampling (LHS) subroutines use the latest NESSUS input file format and specific files for writing output. The LHS subroutines are called out early in the program so that no unnecessary calculations are performed. Proper correlation between sets of multidimensional coordinates can be obtained by using NESSUS' LHS capabilities. Finally, two types of correlation are written to the appropriate output file. The program enhancement was tested by repeatedly estimating the mean, standard deviation, and 99th percentile of four different responses using Monte Carlo (MC) and LHS. These test cases, put forth by the Society of Automotive Engineers, are used to compare probabilistic methods. For all test cases, it is shown that LHS has a lower estimation error than MC when used to estimate the mean, standard deviation
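
    The Monte Carlo versus Latin Hypercube comparison described above can be sketched as follows (a toy response function and SciPy's LHS engine stand in for the NESSUS implementation; all values are illustrative):

```python
import numpy as np
from scipy.stats import norm, qmc

def response(x1, x2):
    return x1 ** 2 + 3 * x2                 # toy response function of two standard-normal inputs

rng = np.random.default_rng(7)
n, trials = 200, 500
mc_means, lhs_means = [], []
for k in range(trials):
    # plain Monte Carlo: independent standard-normal inputs
    mc_means.append(response(rng.standard_normal(n), rng.standard_normal(n)).mean())
    # LHS: stratified uniforms mapped through the normal inverse CDF
    u = qmc.LatinHypercube(d=2, seed=k).random(n)
    z = norm.ppf(u)
    lhs_means.append(response(z[:, 0], z[:, 1]).mean())

print("std of MC mean estimates :", np.std(mc_means))
print("std of LHS mean estimates:", np.std(lhs_means))   # typically noticeably smaller
```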

  14. RADIATION PRESSURE DETECTION AND DENSITY ESTIMATE FOR 2011 MD

    SciTech Connect

    Micheli, Marco; Tholen, David J.; Elliott, Garrett T. E-mail: tholen@ifa.hawaii.edu

    2014-06-10

    We present our astrometric observations of the small near-Earth object 2011 MD (H ∼ 28.0), obtained after its very close fly-by to Earth in 2011 June. Our set of observations extends the observational arc to 73 days, and, together with the published astrometry obtained around the Earth fly-by, allows a direct detection of the effect of radiation pressure on the object, with a confidence of 5σ. The detection can be used to put constraints on the density of the object, pointing to either an unexpectedly low value of ρ = (640 ± 330) kg m⁻³ (68% confidence interval) if we assume a typical probability distribution for the unknown albedo, or to an unusually high reflectivity of its surface. This result may have important implications both in terms of impact hazard from small objects and in light of a possible retrieval of this target.

  15. A comparison of 2 techniques for estimating deer density

    USGS Publications Warehouse

    Robbins, C.S.

    1977-01-01

    We applied mark-resight and area-conversion methods to estimate deer abundance at a 2,862-ha area in and surrounding the Gettysburg National Military Park and Eisenhower National Historic Site during 1987-1991. One observer in each of 11 compartments counted marked and unmarked deer during 65-75 minutes at dusk during 3 counts in each of April and November. Use of radio-collars and vinyl collars provided a complete inventory of marked deer in the population prior to the counts. We sighted 54% of the marked deer during April 1987 and 1988, and 43% of the marked deer during November 1987 and 1988. Mean number of deer counted increased from 427 in April 1987 to 582 in April 1991, and increased from 467 in November 1987 to 662 in November 1990. Herd size during April, based on the mark-resight method, increased from approximately 700-1,400 from 1987-1991, whereas the estimates for November indicated an increase from 983 for 1987 to 1,592 for 1990. Given the large proportion of open area and the extensive road system throughout the study area, we concluded that the sighting probability for marked and unmarked deer was fairly similar. We believe that the mark-resight method was better suited to our study than the area-conversion method because deer were not evenly distributed between areas suitable and unsuitable for sighting within open and forested areas. The assumption of equal distribution is required by the area-conversion method. Deer marked for the mark-resight method also helped reduce double counting during the dusk surveys.
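
    For orientation, a mark-resight abundance estimate of the general kind described above can be computed with a Chapman-corrected Lincoln-Petersen estimator (a textbook simplification, not the study's estimator; the input counts are illustrative):

```python
def chapman_estimate(marked_in_population, total_counted, marked_counted):
    """Chapman's bias-corrected Lincoln-Petersen population estimate."""
    return (marked_in_population + 1) * (total_counted + 1) / (marked_counted + 1) - 1

# e.g. 100 collared deer known to be in the herd, 550 deer counted at dusk, 54 of them marked
print(round(chapman_estimate(100, 550, 54)))   # estimated herd size
```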

  16. An Efficient Acoustic Density Estimation Method with Human Detectors Applied to Gibbons in Cambodia

    PubMed Central

    Kidney, Darren; Rawson, Benjamin M.; Borchers, David L.; Stevenson, Ben C.; Marques, Tiago A.; Thomas, Len

    2016-01-01

    Some animal species are hard to see but easy to hear. Standard visual methods for estimating population density for such species are often ineffective or inefficient, but methods based on passive acoustics show more promise. We develop spatially explicit capture-recapture (SECR) methods for territorial vocalising species, in which humans act as an acoustic detector array. We use SECR and estimated bearing data from a single-occasion acoustic survey of a gibbon population in northeastern Cambodia to estimate the density of calling groups. The properties of the estimator are assessed using a simulation study, in which a variety of survey designs are also investigated. We then present a new form of the SECR likelihood for multi-occasion data which accounts for the stochastic availability of animals. In the context of gibbon surveys this allows model-based estimation of the proportion of groups that produce territorial vocalisations on a given day, thereby enabling the density of groups, instead of the density of calling groups, to be estimated. We illustrate the performance of this new estimator by simulation. We show that it is possible to estimate density reliably from human acoustic detections of visually cryptic species using SECR methods. For gibbon surveys we also show that incorporating observers’ estimates of bearings to detected groups substantially improves estimator performance. Using the new form of the SECR likelihood we demonstrate that estimates of availability, in addition to population density and detection function parameters, can be obtained from multi-occasion data, and that the detection function parameters are not confounded with the availability parameter. This acoustic SECR method provides a means of obtaining reliable density estimates for territorial vocalising species. It is also efficient in terms of data requirements since it only requires routine survey data. We anticipate that the low-tech field requirements will make this method

  17. Estimating insect flight densities from attractive trap catches and flight height distributions.

    PubMed

    Byers, John A

    2012-05-01

    Methods and equations have not been developed previously to estimate insect flight densities, a key factor in decisions regarding trap and lure deployment in programs of monitoring, mass trapping, and mating disruption with semiochemicals. An equation to estimate densities of flying insects per hectare is presented that uses the standard deviation (SD) of the vertical flight distribution, trapping time, the trap's spherical effective radius (ER), catch at the mean flight height (as estimated from a best-fitting normal distribution with SD), and an estimated average flight speed. Data from previous reports were used to estimate flight densities with the equations. The same equations can use traps with pheromone lures or attractive colors with a measured effective attraction radius (EAR) instead of the ER. In practice, EAR is more useful than ER for flight density calculations since attractive traps catch higher numbers of insects and thus can measure lower populations more readily. Computer simulations in three dimensions with varying numbers of insects (density) and varying EAR were used to validate the equations for density estimates of insects in the field. Few studies have provided data to obtain EAR, SD, speed, and trapping time to estimate flight densities per hectare. However, the necessary parameters can be measured more precisely in future studies.

  18. Impact of Building Heights on 3d Urban Density Estimation from Spaceborne Stereo Imagery

    NASA Astrophysics Data System (ADS)

    Peng, Feifei; Gong, Jianya; Wang, Le; Wu, Huayi; Yang, Jiansi

    2016-06-01

    In urban planning and design applications, visualization of built-up areas in three dimensions (3D) is critical for understanding building density, but the accurate building heights required for 3D density calculation are not always available. To solve this problem, spaceborne stereo imagery is often used to estimate building heights; however, estimated building heights may include errors. These errors vary between local areas within a study area and are related to the heights of the buildings themselves, distorting 3D density estimation. The impact of building height accuracy on 3D density estimation must therefore be determined both across and within a study area. In our research, accurate planar information from city authorities is used as reference data during 3D density estimation, to avoid the errors inherent in planar information extracted from remotely sensed imagery. Our experimental results show that underestimation of building heights is correlated with underestimation of the Floor Area Ratio (FAR). At the local level, land-use blocks with low FAR values often have small errors, because height errors are small for the low buildings in those blocks, whereas blocks with high FAR values often have large errors, because height errors are large for the high buildings in those blocks. Our study reveals that the accuracy of 3D density estimated from spaceborne stereo imagery is correlated with the heights of buildings in a scene; building heights must therefore be considered when spaceborne stereo imagery is used to estimate 3D density, in order to improve precision.
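
    The propagation of building-height error into the Floor Area Ratio (FAR) can be sketched as follows, assuming the usual definition of FAR as total floor area divided by block area and an assumed storey height (values are illustrative, not from the study):

```python
def floor_area_ratio(footprints_m2, heights_m, block_area_m2, storey_height_m=3.0):
    """FAR = sum of (footprint area x estimated floor count) / block area."""
    total_floor_area = sum(a * max(1, round(h / storey_height_m))
                           for a, h in zip(footprints_m2, heights_m))
    return total_floor_area / block_area_m2

true_far = floor_area_ratio([400, 600], [30.0, 15.0], 10000)   # heights from reference data
est_far = floor_area_ratio([400, 600], [24.0, 13.0], 10000)    # underestimated stereo-derived heights
print(true_far, est_far)   # height underestimation propagates to FAR underestimation
```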

  19. Wavelet-based Poisson Solver for use in Particle-In-CellSimulations

    SciTech Connect

    Terzic, B.; Mihalcea, D.; Bohn, C.L.; Pogorelov, I.V.

    2005-05-13

    We report on a successful implementation of a wavelet-based Poisson solver for use in 3D particle-in-cell (PIC) simulations. One new aspect of our algorithm is its ability to treat the general (inhomogeneous) Dirichlet boundary conditions (BCs). The solver harnesses advantages afforded by the wavelet formulation, such as sparsity of operators and data sets, existence of effective preconditioners, and the ability simultaneously to remove numerical noise and further compress relevant data sets. Having tested our method as a stand-alone solver on two model problems, we merged it into IMPACT-T to obtain a fully functional serial PIC code. We present and discuss preliminary results of application of the new code to the modeling of the Fermilab/NICADD and AES/JLab photoinjectors.

  20. Wavelet-based correlations of impedance cardiography signals and heart rate variability

    NASA Astrophysics Data System (ADS)

    Podtaev, Sergey; Dumler, Andrew; Stepanov, Rodion; Frick, Peter; Tziberkin, Kirill

    2010-04-01

    The wavelet-based correlation analysis is employed to study impedance cardiography signals (variation in the impedance of the thorax z(t) and time derivative of the thoracic impedance (- dz/dt)) and heart rate variability (HRV). A method of computer thoracic tetrapolar polyrheocardiography is used for hemodynamic registrations. The modulus of the wavelet-correlation function shows the level of correlation, and the phase indicates the mean phase shift of oscillations at the given scale (frequency). Significant correlations essentially exceeding the values obtained for noise signals are defined within two spectral ranges, which correspond to respiratory activity (0.14-0.5 Hz) and to endothelial-related metabolic activity and neuroendocrine rhythms (0.0095-0.02 Hz). The phase shift of oscillations in all frequency ranges is probably related to the peculiarities of parasympathetic and neuro-humoral regulation of the cardiovascular system.

  1. Optimum wavelet based masking for the contrast enhancement of medical images using enhanced cuckoo search algorithm.

    PubMed

    Daniel, Ebenezer; Anitha, J

    2016-04-01

    Unsharp masking techniques are a prominent approach in contrast enhancement. The generalized masking formulation has static scale-value selection, which limits the contrast gain. In this paper, we propose an Optimum Wavelet Based Masking (OWBM) using an Enhanced Cuckoo Search Algorithm (ECSA) for the contrast improvement of medical images. The ECSA can automatically adjust the ratio of nest rebuilding, using genetic operators such as adaptive crossover and mutation. First, the proposed contrast enhancement approach is validated quantitatively using BrainWeb and MIAS database images. Later, the conventional nest rebuilding of cuckoo search optimization is modified using Adaptive Rebuilding of Worst Nests (ARWN). Experimental results are analyzed using various performance metrics, and our OWBM shows improved results compared with other methods reported in the literature.

  2. FAST TRACK COMMUNICATION: From cardinal spline wavelet bases to highly coherent dictionaries

    NASA Astrophysics Data System (ADS)

    Andrle, Miroslav; Rebollo-Neira, Laura

    2008-05-01

    Wavelet families arise by scaling and translations of a prototype function, called the mother wavelet. The construction of wavelet bases for cardinal spline spaces is generally carried out within the multi-resolution analysis scheme. Thus, the usual way of increasing the dimension of the multi-resolution subspaces is by augmenting the scaling factor. We show here that, when working on a compact interval, the identical effect can be achieved without changing the wavelet scale but reducing the translation parameter. By such a procedure we generate a redundant frame, called a dictionary, spanning the same spaces as a wavelet basis but with wavelets of broader support. We characterize the correlation of the dictionary elements by measuring their 'coherence' and produce examples illustrating the relevance of highly coherent dictionaries to problems of sparse signal representation.
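
    The 'coherence' referred to above is commonly taken to be the largest absolute normalized inner product between distinct dictionary atoms; below is a minimal sketch on a random dictionary (the paper's spline-wavelet dictionaries are not constructed here):

```python
import numpy as np

def mutual_coherence(D):
    """D has one atom per column; return max |<d_i, d_j>| over i != j after normalization."""
    Dn = D / np.linalg.norm(D, axis=0, keepdims=True)
    G = np.abs(Dn.T @ Dn)
    np.fill_diagonal(G, 0.0)
    return G.max()

D = np.random.default_rng(8).normal(size=(64, 256))   # 4x overcomplete random dictionary
print(mutual_coherence(D))
```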

  3. An Investigation of Wavelet Bases for Grid-Based Multi-Scale Simulations Final Report

    SciTech Connect

    Baty, R.S.; Burns, S.P.; Christon, M.A.; Roach, D.W.; Trucano, T.G.; Voth, T.E.; Weatherby, J.R.; Womble, D.E.

    1998-11-01

    The research summarized in this report is the result of a two-year effort that has focused on evaluating the viability of wavelet bases for the solution of partial differential equations. The primary objective for this work has been to establish a foundation for hierarchical/wavelet simulation methods based upon numerical performance, computational efficiency, and the ability to exploit the hierarchical adaptive nature of wavelets. This work has demonstrated that hierarchical bases can be effective for problems with a dominant elliptic character. However, the strict enforcement of orthogonality was found to be less desirable than weaker semi-orthogonality or bi-orthogonality for solving partial differential equations. This conclusion has led to the development of a multi-scale linear finite element based on a hierarchical change of basis. The reproducing kernel particle method has been found to yield extremely accurate phase characteristics for hyperbolic problems while providing a convenient framework for multi-scale analyses.

  4. Optimum wavelet based masking for the contrast enhancement of medical images using enhanced cuckoo search algorithm.

    PubMed

    Daniel, Ebenezer; Anitha, J

    2016-04-01

    Unsharp masking techniques are a prominent approach in contrast enhancement. The generalized masking formulation has static scale-value selection, which limits the contrast gain. In this paper, we propose an Optimum Wavelet Based Masking (OWBM) using an Enhanced Cuckoo Search Algorithm (ECSA) for the contrast improvement of medical images. The ECSA can automatically adjust the ratio of nest rebuilding, using genetic operators such as adaptive crossover and mutation. First, the proposed contrast enhancement approach is validated quantitatively using BrainWeb and MIAS database images. Later, the conventional nest rebuilding of cuckoo search optimization is modified using Adaptive Rebuilding of Worst Nests (ARWN). Experimental results are analyzed using various performance metrics, and our OWBM shows improved results compared with other methods reported in the literature. PMID:26945462

  5. Corrosion in Reinforced Concrete Panels: Wireless Monitoring and Wavelet-Based Analysis

    PubMed Central

    Qiao, Guofu; Sun, Guodong; Hong, Yi; Liu, Tiejun; Guan, Xinchun

    2014-01-01

    To realize efficient data capture and accurate analysis of pitting corrosion of reinforced concrete (RC) structures, we first design and implement a wireless sensor network (WSN) to monitor the pitting corrosion of RC panels, and then we propose a wavelet-based algorithm to analyze the corrosion state with the corrosion data collected by the wireless platform. We design a novel pitting corrosion-detecting mote and a communication protocol such that the monitoring platform can sample the electrochemical emission signals of the corrosion process with a configured period and send these signals to a central computer for analysis. The proposed algorithm, based on wavelet-domain analysis, returns the energy distribution of the electrochemical emission data, from which closer observation and understanding can be achieved. We also conducted test-bed experiments based on RC panels. The results verify the feasibility and efficiency of the proposed WSN system and algorithms. PMID:24556673

  6. Neuromagnetic correlates of developmental changes in endogenous high-frequency brain oscillations in children: a wavelet-based beamformer study.

    PubMed

    Xiang, Jing; Liu, Yang; Wang, Yingying; Kotecha, Rupesh; Kirtman, Elijah G; Chen, Yangmei; Huo, Xiaolin; Fujiwara, Hisako; Hemasilpin, Nat; DeGrauw, Ton; Rose, Douglas

    2009-06-01

    Recent studies have found that the brain generates very fast oscillations. The objective of the present study was to investigate the spectral, spatial and coherent features of high-frequency brain oscillations in the developing brain. Sixty healthy children and 20 healthy adults were studied using a 275-channel magnetoencephalography (MEG) system. MEG data were digitized at 12,000 Hz. The frequency characteristics of neuromagnetic signals in 0.5-2000 Hz were quantitatively determined with Morlet wavelet transform. The magnetic sources were volumetrically estimated with wavelet-based beamformer at 2.5 mm resolution. The neural networks of endogenous brain oscillations were analyzed with coherent imaging. Neuromagnetic activities in 8-12 Hz and 800-900 Hz were found to be the most reliable frequency bands in healthy children. The neuromagnetic signals were localized in the occipital, temporal and frontal cortices. The activities in the occipital and temporal cortices were strongly correlated in 8-12 Hz but not in 800-900 Hz. In comparison to adults, children had brain oscillations in intermingled frequency bands. Developmental changes in children were identified for both low- and high-frequency brain activities. The results of the present study suggest that the development of the brain is associated with spatial and coherent changes of endogenous brain activities in both low- and high-frequency ranges. Analysis of high-frequency neuromagnetic oscillation may provide novel insights into cerebral mechanisms of brain function. The noninvasive measurement of neuromagnetic brain oscillations in the developing brain may open a new window for analysis of brain function. PMID:19362072

  7. In-Shell Bulk Density as an Estimator of Farmers Stock Grade Factors

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The objective of this research was to determine whether or not bulk density can be used to accurately estimate farmer stock grade factors such as total sound mature kernels and other kernels. Physical properties including bulk density, pod size and kernel size distributions are measured as part of t...

  8. Novel and simple non-parametric methods of estimating the joint and marginal densities

    NASA Astrophysics Data System (ADS)

    Alghalith, Moawia

    2016-07-01

    We introduce very simple non-parametric methods that overcome key limitations of the existing literature on both joint and marginal density estimation. In doing so, we do not assume any form for the marginal or joint distribution a priori. Furthermore, our method circumvents the bandwidth selection problem. We compare our method to the kernel density method.

  9. Variational image decomposition for estimation of fringe orientation and density from electronic speckle pattern interferometry fringe patterns with greatly variable density

    NASA Astrophysics Data System (ADS)

    Chen, Xia; Tang, Chen; Li, Biyuan; Su, Yonggang

    2016-11-01

    Fringe orientation and density are important properties of fringes. The estimation of fringe orientation and density from electronic speckle pattern interferometry (ESPI) fringe patterns with greatly variable density is still a challenging problem faced in this area. We propose an effective method based on variational image decomposition to estimate fringe orientation and density simultaneously. The BL-Hilbert model is proposed to decompose an ESPI fringe pattern with greatly variable density into two images: one includes only low-density fringes and the other only high-density fringes. The densities of the two decomposed images are uniform. We estimate the orientation and density of the two decomposed images by existing methods. The whole fringe orientation and density can be obtained by combining the corresponding results of the two decomposed images. We evaluate the performance of our method via application to computer-simulated and experimentally obtained ESPI fringe patterns with greatly variable density and comparison with three widely used methods.

  10. Item Response Theory with Estimation of the Latent Population Distribution Using Spline-Based Densities

    ERIC Educational Resources Information Center

    Woods, Carol M.; Thissen, David

    2006-01-01

    The purpose of this paper is to introduce a new method for fitting item response theory models with the latent population distribution estimated from the data using splines. A spline-based density estimation system provides a flexible alternative to existing procedures that use a normal distribution, or a different functional form, for the…

  11. Body Density Estimates from Upper-Body Skinfold Thicknesses Compared to Air-Displacement Plethysmography

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Technical Summary Objectives: Determine the effect of body mass index (BMI) on the accuracy of body density (Db) estimated with skinfold thickness (SFT) measurements compared to air displacement plethysmography (ADP) in adults. Subjects/Methods: We estimated Db with SFT and ADP in 131 healthy men an...

  12. A bound for the smoothing parameter in certain well-known nonparametric density estimators

    NASA Technical Reports Server (NTRS)

    Terrell, G. R.

    1980-01-01

    Two classes of nonparametric density estimators, the histogram and the kernel estimator, both require a choice of smoothing parameter, or 'window width'. The optimum choice of this parameter is in general very difficult. An upper bound to the choices that depends only on the standard deviation of the distribution is described.

  13. Nonparametric maximum likelihood estimation of probability densities by penalty function methods

    NASA Technical Reports Server (NTRS)

    Demontricher, G. F.; Tapia, R. A.; Thompson, J. R.

    1974-01-01

    When it is not known a priori to which finite-dimensional manifold the probability density function giving rise to a set of samples belongs, the parametric maximum likelihood estimation procedure leads to poor estimates and is unstable, while the nonparametric maximum likelihood procedure is undefined. A very general theory of maximum penalized likelihood estimation which should avoid many of these difficulties is presented. It is demonstrated that each reproducing kernel Hilbert space leads, in a very natural way, to a maximum penalized likelihood estimator and that a well-known class of reproducing kernel Hilbert spaces gives polynomial splines as the nonparametric maximum penalized likelihood estimates.

  14. Sensitivity of fish density estimates to standard analytical procedures applied to Great Lakes hydroacoustic data

    USGS Publications Warehouse

    Kocovsky, Patrick M.; Rudstam, Lars G.; Yule, Daniel L.; Warner, David M.; Schaner, Ted; Pientka, Bernie; Deller, John W.; Waterfield, Holly A.; Witzel, Larry D.; Sullivan, Patrick J.

    2013-01-01

    Standardized methods of data collection and analysis ensure quality and facilitate comparisons among systems. We evaluated the importance of three recommendations from the Standard Operating Procedure for hydroacoustics in the Laurentian Great Lakes (GLSOP) on density estimates of target species: noise subtraction; setting volume backscattering strength (Sv) thresholds from user-defined minimum target strength (TS) of interest (TS-based Sv threshold); and calculations of an index for multiple targets (Nv index) to identify and remove biased TS values. Eliminating noise had the predictable effect of decreasing density estimates in most lakes. Using the TS-based Sv threshold decreased fish densities in the middle and lower layers in the deepest lakes with abundant invertebrates (e.g., Mysis diluviana). Correcting for biased in situ TS increased measured density up to 86% in the shallower lakes, which had the highest fish densities. The current recommendations by the GLSOP significantly influence acoustic density estimates, but the degree of importance is lake dependent. Applying GLSOP recommendations, whether in the Laurentian Great Lakes or elsewhere, will improve our ability to compare results among lakes. We recommend further development of standards, including minimum TS and analytical cell size, for reducing the effect of biased in situ TS on density estimates.

  15. Statistical Analysis of Photopyroelectric Signals using Histogram and Kernel Density Estimation for differentiation of Maize Seeds

    NASA Astrophysics Data System (ADS)

    Rojas-Lima, J. E.; Domínguez-Pacheco, A.; Hernández-Aguilar, C.; Cruz-Orea, A.

    2016-09-01

    Considering the need for alternative photothermal approaches to characterize nonhomogeneous materials such as maize seeds, the objective of this work was to statistically analyze the amplitude variations of photopyroelectric signals, by means of nonparametric techniques such as the histogram and the kernel density estimator, and to obtain the probability density function of the amplitude variations for two maize genotypes with different pigmentations and structural components: crystalline and floury. The histogram was computed first to determine whether the probability density function had a known parametric form; since it did not, the kernel density estimator with a Gaussian kernel, which has an efficiency of 95 % in density estimation, was used to obtain the probability density function. The results indicated that, with the histogram approach, the seeds could be differentiated in terms of statistical values of the amplitude variations for floury and crystalline seeds, such as the mean (93.11, 159.21), variance (1.64 × 10³, 1.48 × 10³), and standard deviation (40.54, 38.47). With the kernel density estimator, the seeds can be differentiated in terms of the kernel bandwidth, or smoothing constant h, of 9.85 and 6.09 for floury and crystalline seeds, respectively.
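
    As an illustration of the kind of smoothing constant h reported above, Silverman's rule of thumb can be computed as follows (the paper's bandwidth selector is not specified here; the data are synthetic stand-ins loosely matched to the reported floury-seed statistics):

```python
import numpy as np

def silverman_bandwidth(x):
    """Silverman's rule-of-thumb bandwidth for a Gaussian kernel."""
    x = np.asarray(x, dtype=float)
    iqr = np.subtract(*np.percentile(x, [75, 25]))
    return 0.9 * min(x.std(ddof=1), iqr / 1.34) * x.size ** (-1 / 5)

amplitudes = np.random.default_rng(9).normal(loc=93.1, scale=40.5, size=500)   # illustrative sample
print(silverman_bandwidth(amplitudes))
```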

  16. Spatial-temporal filtering method based on kernel density estimation in suppressing background clutter

    NASA Astrophysics Data System (ADS)

    Tian, Yuexin; Liu, Yinghui; Gao, Kun; Shu, Yuwen; Ni, Guoqiang

    2014-11-01

    A temporal-spatial filtering algorithm based on a kernel density estimation structure is presented for background suppression in this paper. The algorithm can be divided into spatial filtering and temporal filtering. In spatial filtering, a smoothing process is applied to the background of an infrared image sequence using the kernel density estimation algorithm. In temporal filtering, the probability density of the image gray values after spatial filtering is calculated with the kernel density estimation algorithm. The background residual and blind pixels are picked out based on their gray values and are further filtered. The algorithm is validated with a real infrared image sequence. The image sequence is processed using the Fuller kernel filter, the uniform kernel filter, and a high-pass filter. Quantitative analysis shows that the temporal-spatial filtering algorithm based on the nonparametric method is a satisfactory way to suppress background clutter in infrared images. The SNR is significantly improved as well.

  17. Investigation of Aerosol Surface Area Estimation from Number and Mass Concentration Measurements: Particle Density Effect

    PubMed Central

    Ku, Bon Ki; Evans, Douglas E.

    2015-01-01

    For nanoparticles with nonspherical morphologies, e.g., open agglomerates or fibrous particles, it is expected that the actual density of agglomerates may be significantly different from the bulk material density. It is further expected that using the material density may upset the relationship between surface area and mass when a method for estimating aerosol surface area from number and mass concentrations (referred to as “Maynard’s estimation method”) is used. Therefore, it is necessary to quantitatively investigate how much the Maynard’s estimation method depends on particle morphology and density. In this study, aerosol surface area estimated from number and mass concentration measurements was evaluated and compared with values from two reference methods: a method proposed by Lall and Friedlander for agglomerates and a mobility based method for compact nonspherical particles using well-defined polydisperse aerosols with known particle densities. Polydisperse silver aerosol particles were generated by an aerosol generation facility. Generated aerosols had a range of morphologies, count median diameters (CMD) between 25 and 50 nm, and geometric standard deviations (GSD) between 1.5 and 1.8. The surface area estimates from number and mass concentration measurements correlated well with the two reference values when gravimetric mass was used. The aerosol surface area estimates from the Maynard’s estimation method were comparable to the reference method for all particle morphologies within the surface area ratios of 3.31 and 0.19 for assumed GSDs 1.5 and 1.8, respectively, when the bulk material density of silver was used. The difference between the Maynard’s estimation method and surface area measured by the reference method for fractal-like agglomerates decreased from 79% to 23% when the measured effective particle density was used, while the difference for nearly spherical particles decreased from 30% to 24%. The results indicate that the use of

  18. Cetacean population density estimation from single fixed sensors using passive acoustics.

    PubMed

    Küsel, Elizabeth T; Mellinger, David K; Thomas, Len; Marques, Tiago A; Moretti, David; Ward, Jessica

    2011-06-01

    Passive acoustic methods are increasingly being used to estimate animal population density. Most density estimation methods are based on estimates of the probability of detecting calls as functions of distance. Typically these are obtained using receivers capable of localizing calls or from studies of tagged animals. However, both approaches are expensive to implement. The approach described here uses a Monte Carlo model to estimate the probability of detecting calls from single sensors. The passive sonar equation is used to predict signal-to-noise ratios (SNRs) of received clicks, which are then combined with a detector characterization that predicts probability of detection as a function of SNR. Input distributions for source level, beam pattern, and whale depth are obtained from the literature. Acoustic propagation modeling is used to estimate transmission loss. Other inputs for density estimation are call rate, obtained from the literature, and false positive rate, obtained from manual analysis of a data sample. The method is applied to estimate density of Blainville's beaked whales over a 6-day period around a single hydrophone located in the Tongue of the Ocean, Bahamas. Results are consistent with those from previous analyses, which use additional tag data. PMID:21682386

  19. Cetacean population density estimation from single fixed sensors using passive acoustics.

    PubMed

    Küsel, Elizabeth T; Mellinger, David K; Thomas, Len; Marques, Tiago A; Moretti, David; Ward, Jessica

    2011-06-01

    Passive acoustic methods are increasingly being used to estimate animal population density. Most density estimation methods are based on estimates of the probability of detecting calls as functions of distance. Typically these are obtained using receivers capable of localizing calls or from studies of tagged animals. However, both approaches are expensive to implement. The approach described here uses a Monte Carlo model to estimate the probability of detecting calls from single sensors. The passive sonar equation is used to predict signal-to-noise ratios (SNRs) of received clicks, which are then combined with a detector characterization that predicts probability of detection as a function of SNR. Input distributions for source level, beam pattern, and whale depth are obtained from the literature. Acoustic propagation modeling is used to estimate transmission loss. Other inputs for density estimation are call rate, obtained from the literature, and false positive rate, obtained from manual analysis of a data sample. The method is applied to estimate density of Blainville's beaked whales over a 6-day period around a single hydrophone located in the Tongue of the Ocean, Bahamas. Results are consistent with those from previous analyses, which use additional tag data.

  20. Estimation of mechanical properties of panels based on modal density and mean mobility measurements

    NASA Astrophysics Data System (ADS)

    Elie, Benjamin; Gautier, François; David, Bertrand

    2013-11-01

    The mechanical characteristics of wood panels used by instrument makers are related to numerous factors, including the nature of the wood and the characteristics of the wood sample (fiber direction, microstructure). This leads to variations in Young's modulus, the mass density, and the damping coefficients. Existing methods for estimating these parameters are not suitable for instrument makers, mainly because of the need for expensive experimental setups or complicated protocols, which are not adapted to daily practice in a workshop. In this paper, a method for estimating Young's modulus, the mass density, and the modal loss factors of flat panels, requiring a few measurement points and an affordable experimental setup, is presented. It is based on the estimation of two characteristic quantities: the modal density and the mean mobility. The modal density is computed from the values of the modal frequencies estimated by the subspace method ESPRIT (Estimation of Signal Parameters via Rotational Invariance Techniques), associated with the signal enumeration technique ESTER (ESTimation of ERror). This modal identification technique is shown to be robust in the low- and mid-frequency domains, i.e., when the modal overlap factor does not exceed 1. The estimation of the modal parameters also enables the computation of the modal loss factor in the low- and mid-frequency domains. An experimental fit with the theoretical expressions for the modal density and the mean mobility enables an accurate estimation of Young's modulus and the mass density of flat panels. A numerical and an experimental study show that the method is robust, and that it requires only a few measurement points.
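
    As an illustration of the kind of subspace estimation ESPRIT performs, here is a compact, generic sketch (not the authors' code) that extracts modal frequencies and decay rates from a sampled free-decay signal; the Hankel-matrix size, model order, and synthetic signal are illustrative choices.

```python
import numpy as np

def esprit_modes(x, fs, n_modes):
    """Estimate modal frequencies (Hz) and decay rates (1/s) from a sampled
    free-decay signal x using a basic least-squares ESPRIT scheme (sketch)."""
    x = np.asarray(x, dtype=float)
    n_rows = len(x) // 2
    n_cols = len(x) - n_rows + 1
    # Hankel data matrix whose column space contains the signal subspace
    H = np.array([x[i:i + n_cols] for i in range(n_rows)])
    U, _, _ = np.linalg.svd(H, full_matrices=False)
    Us = U[:, :2 * n_modes]                    # two singular vectors per real mode
    # Rotational invariance: the row-shifted subspace satisfies Us[1:] = Us[:-1] @ Phi
    Phi, *_ = np.linalg.lstsq(Us[:-1], Us[1:], rcond=None)
    poles = np.linalg.eigvals(Phi)             # discrete-time poles z_k = exp(s_k / fs)
    poles = poles[np.imag(poles) > 0]          # keep one pole of each conjugate pair
    freqs = np.angle(poles) * fs / (2 * np.pi)
    decay = -np.log(np.abs(poles)) * fs
    order = np.argsort(freqs)
    return freqs[order], decay[order]

# Synthetic two-mode free decay (200 Hz and 450 Hz) sampled at 4 kHz
fs = 4000.0
t = np.arange(1024) / fs
sig = (np.exp(-8 * t) * np.cos(2 * np.pi * 200 * t)
       + 0.5 * np.exp(-15 * t) * np.cos(2 * np.pi * 450 * t))
print(esprit_modes(sig, fs, n_modes=2))
```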

  1. Evaluating Changes And Estimating Seasonal Precipitation For Colorado River Basins Using Nonparametric Density Estimation

    NASA Astrophysics Data System (ADS)

    Kalra, A.; Ahmad, S.; Stephen, H.

    2009-12-01

    Evaluating the hydrologic impacts of climate variability due to changes in precipitation has been an important and challenging task in the field of hydrology. This requires estimation of rainfall, preserving its spatial and temporal variability. The current research focuses on 1) analyzing changes (trend/step) in seasonal precipitation and 2) simulating seasonal precipitation using the k-nearest neighbor (k-nn) non-parametric technique for 29 climate divisions covering the entire Colorado River Basin. The current research analyzes water year precipitation data ranging from 1900 to 2008, subdivided into four seasons, i.e. autumn (October-December), winter (January-March), spring (April-June), and summer (July-September). Two statistical tests, i.e., Mann-Kendall and Spearman's Rho, are used to evaluate trend changes, and the Rank Sum test is used to identify the step change in seasonal precipitation for the selected climate divisions. The results show that changes occur mostly during the winter season. Eleven divisions show an increase in precipitation, six show a decrease, and the remaining twelve show no change in precipitation over the period of record. A total of eight climate divisions show changes in autumn precipitation, with four climate divisions showing increasing and the remaining four showing decreasing changes. Decreasing precipitation changes are observed for six divisions during the spring season. In the summer season, three climate divisions show an increase and one division shows a decrease in precipitation. The increasing precipitation changes during the winter season are attributed to a gradual step change, whereas the decreasing changes are due to trend changes. The decreasing precipitation changes in the spring season occurred due to trend changes. The summer season changes occurred due to a gradual step change. During the autumn season six divisions showed changes (three increasing and three decreasing) due to a gradual step change and the remaining two divisions observed changes due
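
    For reference, the Mann-Kendall trend test the study applies can be computed as sketched below (an illustrative implementation, not the study's code); the tie correction is omitted for brevity and the example precipitation series is hypothetical.

```python
import numpy as np
from scipy.stats import norm

def mann_kendall(series):
    """Mann-Kendall trend test (no tie correction, illustrative sketch).
    Returns the S statistic, z-score, and two-sided p-value."""
    x = np.asarray(series, dtype=float)
    n = len(x)
    # S counts concordant minus discordant pairs over all i < j
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / np.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / np.sqrt(var_s)
    else:
        z = 0.0
    p = 2 * (1 - norm.cdf(abs(z)))
    return s, z, p

# Hypothetical winter precipitation totals for one climate division
print(mann_kendall([102, 95, 110, 120, 98, 130, 125, 140, 118, 150]))
```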

  2. Estimating population density and connectivity of American mink using spatial capture-recapture

    USGS Publications Warehouse

    Fuller, Angela K.; Sutherland, Christopher S.; Royle, Andy; Hare, Matthew P.

    2016-01-01

    Estimating the abundance or density of populations is fundamental to the conservation and management of species, and as landscapes become more fragmented, maintaining landscape connectivity has become one of the most important challenges for biodiversity conservation. Yet these two issues have never been formally integrated together in a model that simultaneously models abundance while accounting for connectivity of a landscape. We demonstrate an application of using capture–recapture to develop a model of animal density using a least-cost path model for individual encounter probability that accounts for non-Euclidean connectivity in a highly structured network. We utilized scat detection dogs (Canis lupus familiaris) as a means of collecting non-invasive genetic samples of American mink (Neovison vison) individuals and used spatial capture–recapture models (SCR) to gain inferences about mink population density and connectivity. Density of mink was not constant across the landscape, but rather increased with increasing distance from city, town, or village centers, and mink activity was associated with water. The SCR model allowed us to estimate the density and spatial distribution of individuals across a 388 km2 area. The model was used to investigate patterns of space usage and to evaluate covariate effects on encounter probabilities, including differences between sexes. This study provides an application of capture–recapture models based on ecological distance, allowing us to directly estimate landscape connectivity. This approach should be widely applicable to provide simultaneous direct estimates of density, space usage, and landscape connectivity for many species.

  3. Estimating population density and connectivity of American mink using spatial capture-recapture.

    PubMed

    Fuller, Angela K; Sutherland, Chris S; Royle, J Andrew; Hare, Matthew P

    2016-06-01

    Estimating the abundance or density of populations is fundamental to the conservation and management of species, and as landscapes become more fragmented, maintaining landscape connectivity has become one of the most important challenges for biodiversity conservation. Yet these two issues have never been formally integrated together in a model that simultaneously models abundance while accounting for connectivity of a landscape. We demonstrate an application of using capture-recapture to develop a model of animal density using a least-cost path model for individual encounter probability that accounts for non-Euclidean connectivity in a highly structured network. We utilized scat detection dogs (Canis lupus familiaris) as a means of collecting non-invasive genetic samples of American mink (Neovison vison) individuals and used spatial capture-recapture models (SCR) to gain inferences about mink population density and connectivity. Density of mink was not constant across the landscape, but rather increased with increasing distance from city, town, or village centers, and mink activity was associated with water. The SCR model allowed us to estimate the density and spatial distribution of individuals across a 388 km² area. The model was used to investigate patterns of space usage and to evaluate covariate effects on encounter probabilities, including differences between sexes. This study provides an application of capture-recapture models based on ecological distance, allowing us to directly estimate landscape connectivity. This approach should be widely applicable to provide simultaneous direct estimates of density, space usage, and landscape connectivity for many species.

  4. Estimating population density and connectivity of American mink using spatial capture-recapture.

    PubMed

    Fuller, Angela K; Sutherland, Chris S; Royle, J Andrew; Hare, Matthew P

    2016-06-01

    Estimating the abundance or density of populations is fundamental to the conservation and management of species, and as landscapes become more fragmented, maintaining landscape connectivity has become one of the most important challenges for biodiversity conservation. Yet these two issues have never been formally integrated together in a model that simultaneously models abundance while accounting for connectivity of a landscape. We demonstrate an application of using capture-recapture to develop a model of animal density using a least-cost path model for individual encounter probability that accounts for non-Euclidean connectivity in a highly structured network. We utilized scat detection dogs (Canis lupus familiaris) as a means of collecting non-invasive genetic samples of American mink (Neovison vison) individuals and used spatial capture-recapture models (SCR) to gain inferences about mink population density and connectivity. Density of mink was not constant across the landscape, but rather increased with increasing distance from city, town, or village centers, and mink activity was associated with water. The SCR model allowed us to estimate the density and spatial distribution of individuals across a 388 km² area. The model was used to investigate patterns of space usage and to evaluate covariate effects on encounter probabilities, including differences between sexes. This study provides an application of capture-recapture models based on ecological distance, allowing us to directly estimate landscape connectivity. This approach should be widely applicable to provide simultaneous direct estimates of density, space usage, and landscape connectivity for many species. PMID:27509753

  5. Spatial capture–recapture models for jointly estimating population density and landscape connectivity.

    PubMed

    Royle, J Andrew; Chandler, Richard B; Gazenski, Kimberly D; Graves, Tabitha A

    2013-02-01

    Population size and landscape connectivity are key determinants of population viability, yet no methods exist for simultaneously estimating density and connectivity parameters. Recently developed spatial capture–recapture (SCR) models provide a framework for estimating density of animal populations but thus far have not been used to study connectivity. Rather, all applications of SCR models have used encounter probability models based on the Euclidean distance between traps and animal activity centers, which implies that home ranges are stationary, symmetric, and unaffected by landscape structure. In this paper we devise encounter probability models based on "ecological distance," i.e., the least-cost path between traps and activity centers, which is a function of both Euclidean distance and animal movement behavior in resistant landscapes. We integrate least-cost path models into a likelihood-based estimation scheme for spatial capture-recapture models in order to estimate population density and parameters of the least-cost encounter probability model. Therefore, it is possible to make explicit inferences about animal density, distribution, and landscape connectivity as it relates to animal movement from standard capture-recapture data. Furthermore, a simulation study demonstrated that ignoring landscape connectivity can result in negatively biased density estimators under the naive SCR model.

  6. Spatial capture-recapture models for jointly estimating population density and landscape connectivity

    USGS Publications Warehouse

    Royle, J. Andrew; Chandler, Richard B.; Gazenski, Kimberly D.; Graves, Tabitha A.

    2013-01-01

    Population size and landscape connectivity are key determinants of population viability, yet no methods exist for simultaneously estimating density and connectivity parameters. Recently developed spatial capture–recapture (SCR) models provide a framework for estimating density of animal populations but thus far have not been used to study connectivity. Rather, all applications of SCR models have used encounter probability models based on the Euclidean distance between traps and animal activity centers, which implies that home ranges are stationary, symmetric, and unaffected by landscape structure. In this paper we devise encounter probability models based on “ecological distance,” i.e., the least-cost path between traps and activity centers, which is a function of both Euclidean distance and animal movement behavior in resistant landscapes. We integrate least-cost path models into a likelihood-based estimation scheme for spatial capture–recapture models in order to estimate population density and parameters of the least-cost encounter probability model. Therefore, it is possible to make explicit inferences about animal density, distribution, and landscape connectivity as it relates to animal movement from standard capture–recapture data. Furthermore, a simulation study demonstrated that ignoring landscape connectivity can result in negatively biased density estimators under the naive SCR model.
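
    To make the "ecological distance" idea concrete, here is a small illustrative sketch (not the authors' likelihood code) that computes least-cost distances on a resistance grid and plugs them into a half-normal encounter probability model; the grid, resistance values, trap locations, and parameters are all made up.

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.csgraph import dijkstra

# Hypothetical 20 x 20 resistance surface (1 = easy to cross, larger = costly)
rng = np.random.default_rng(0)
resistance = 1.0 + 4.0 * rng.random((20, 20))
nrow, ncol = resistance.shape
n = nrow * ncol

def node(r, c):
    return r * ncol + c

# 4-neighbour graph whose edge weights average the two cells' resistance
graph = lil_matrix((n, n))
for r in range(nrow):
    for c in range(ncol):
        for dr, dc in ((1, 0), (0, 1)):
            rr, cc = r + dr, c + dc
            if rr < nrow and cc < ncol:
                w = 0.5 * (resistance[r, c] + resistance[rr, cc])
                graph[node(r, c), node(rr, cc)] = w
                graph[node(rr, cc), node(r, c)] = w

# Least-cost ("ecological") distance from a hypothetical activity centre to each trap
activity_centre = node(10, 10)
traps = [node(2, 3), node(5, 15), node(17, 8)]
ecological_dist = dijkstra(graph.tocsr(), indices=activity_centre)[traps]

# Half-normal encounter probability on ecological distance (p0 and sigma are invented)
p0, sigma = 0.3, 6.0
p_encounter = p0 * np.exp(-ecological_dist ** 2 / (2 * sigma ** 2))
print(dict(zip(["trapA", "trapB", "trapC"], np.round(p_encounter, 3))))
```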

  7. Variability of dental cone beam CT grey values for density estimations

    PubMed Central

    Pauwels, R; Nackaerts, O; Bellaiche, N; Stamatakis, H; Tsiklakis, K; Walker, A; Bosmans, H; Bogaerts, R; Jacobs, R; Horner, K

    2013-01-01

    Objective The aim of this study was to investigate the use of dental cone beam CT (CBCT) grey values for density estimations by calculating the correlation with multislice CT (MSCT) values and the grey value error after recalibration. Methods A polymethyl methacrylate (PMMA) phantom was developed containing inserts of different density: air, PMMA, hydroxyapatite (HA) 50 mg cm−3, HA 100, HA 200 and aluminium. The phantom was scanned on 13 CBCT devices and 1 MSCT device. Correlation between CBCT grey values and CT numbers was calculated, and the average error of the CBCT values was estimated in the medium-density range after recalibration. Results Pearson correlation coefficients ranged between 0.7014 and 0.9996 in the full-density range and between 0.5620 and 0.9991 in the medium-density range. The average error of CBCT voxel values in the medium-density range was between 35 and 1562. Conclusion Even though most CBCT devices showed a good overall correlation with CT numbers, large errors can be seen when using the grey values in a quantitative way. Although it could be possible to obtain pseudo-Hounsfield units from certain CBCTs, alternative methods of assessing bone tissue should be further investigated. Advances in knowledge The suitability of dental CBCT for density estimations was assessed, involving a large number of devices and protocols. The possibility for grey value calibration was thoroughly investigated. PMID:23255537
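
    As a toy illustration of the recalibration step such a study evaluates, the snippet below (hypothetical values, not the study's data) fits a linear map from CBCT grey values of the phantom inserts to the corresponding MSCT CT numbers and reports the residual error after recalibration.

```python
import numpy as np

# Hypothetical grey values from one CBCT device for the phantom inserts
# (air, PMMA, HA 50, HA 100, HA 200, aluminium) and nominal MSCT CT numbers.
cbct_grey = np.array([-950.0, 80.0, 160.0, 250.0, 420.0, 1400.0])
msct_hu = np.array([-1000.0, 120.0, 200.0, 300.0, 500.0, 1300.0])

# Linear recalibration: map CBCT grey values onto the Hounsfield scale.
slope, intercept = np.polyfit(cbct_grey, msct_hu, 1)
recalibrated = slope * cbct_grey + intercept

# Average absolute error after recalibration, medium-density inserts only.
medium = slice(1, 5)
error = np.mean(np.abs(recalibrated[medium] - msct_hu[medium]))
print(f"HU ~= {slope:.3f} * grey + {intercept:.1f}, mean error = {error:.1f}")
```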

  8. Estimating detection and density of the Andean cat in the high Andes

    USGS Publications Warehouse

    Reppucci, J.; Gardner, B.; Lucherini, M.

    2011-01-01

    The Andean cat (Leopardus jacobita) is one of the most endangered, yet least known, felids. Although the Andean cat is considered at risk of extinction, rigorous quantitative population studies are lacking. Because physical observations of the Andean cat are difficult to make in the wild, we used a camera-trapping array to photo-capture individuals. The survey was conducted in northwestern Argentina at an elevation of approximately 4,200 m during October-December 2006 and April-June 2007. In each year we deployed 22 pairs of camera traps, which were strategically placed. To estimate detection probability and density we applied models for spatial capture-recapture using a Bayesian framework. Estimated densities were 0.07 and 0.12 individual/km2 for 2006 and 2007, respectively. Mean baseline detection probability was estimated at 0.07. By comparison, densities of the Pampas cat (Leopardus colocolo), another poorly known felid that shares its habitat with the Andean cat, were estimated at 0.74-0.79 individual/km2 in the same study area for 2006 and 2007, and its detection probability was estimated at 0.02. Despite having greater detectability, the Andean cat is rarer in the study region than the Pampas cat. Properly accounting for the detection probability is important in making reliable estimates of density, a key parameter in conservation and management decisions for any species. © 2011 American Society of Mammalogists.

  9. Estimating detection and density of the Andean cat in the high Andes

    USGS Publications Warehouse

    Reppucci, Juan; Gardner, Beth; Lucherini, Mauro

    2011-01-01

    The Andean cat (Leopardus jacobita) is one of the most endangered, yet least known, felids. Although the Andean cat is considered at risk of extinction, rigorous quantitative population studies are lacking. Because physical observations of the Andean cat are difficult to make in the wild, we used a camera-trapping array to photo-capture individuals. The survey was conducted in northwestern Argentina at an elevation of approximately 4,200 m during October–December 2006 and April–June 2007. In each year we deployed 22 pairs of camera traps, which were strategically placed. To estimate detection probability and density we applied models for spatial capture–recapture using a Bayesian framework. Estimated densities were 0.07 and 0.12 individual/km2 for 2006 and 2007, respectively. Mean baseline detection probability was estimated at 0.07. By comparison, densities of the Pampas cat (Leopardus colocolo), another poorly known felid that shares its habitat with the Andean cat, were estimated at 0.74–0.79 individual/km2 in the same study area for 2006 and 2007, and its detection probability was estimated at 0.02. Despite having greater detectability, the Andean cat is rarer in the study region than the Pampas cat. Properly accounting for the detection probability is important in making reliable estimates of density, a key parameter in conservation and management decisions for any species.

  10. Estimation of tiger densities in India using photographic captures and recaptures

    USGS Publications Warehouse

    Karanth, U.; Nichols, J.D.

    1998-01-01

    Previously applied methods for estimating tiger (Panthera tigris) abundance using total counts based on tracks have proved unreliable. In this paper we use a field method proposed by Karanth (1995), combining camera-trap photography to identify individual tigers based on stripe patterns, with capture-recapture estimators. We developed a sampling design for camera-trapping and used the approach to estimate tiger population size and density in four representative tiger habitats in different parts of India. The field method worked well and provided data suitable for analysis using closed capture-recapture models. The results suggest the potential for applying this methodology for estimating abundances, survival rates and other population parameters in tigers and other low density, secretive animal species with distinctive coat patterns or other external markings. Estimated probabilities of photo-capturing tigers present in the study sites ranged from 0.75 - 1.00. The estimated mean tiger densities ranged from 4.1 (SE hat= 1.31) to 11.7 (SE hat= 1.93) tigers/100 km2. The results support the previous suggestions of Karanth and Sunquist (1995) that densities of tigers and other large felids may be primarily determined by prey community structure at a given site.
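
    For readers unfamiliar with the capture-recapture logic, the following is the simplest two-sample (Chapman-corrected Lincoln-Petersen) estimator, shown purely as an illustration; the study itself fitted full closed-population models to multi-occasion photographic capture histories, and the numbers below are invented.

```python
def chapman_estimate(n1, n2, m2):
    """Chapman-corrected Lincoln-Petersen estimate of closed population size.

    n1: animals caught (photographed) on occasion 1
    n2: animals caught on occasion 2
    m2: animals caught on both occasions (recaptures)
    Returns the abundance estimate and its approximate standard error.
    """
    n_hat = (n1 + 1) * (n2 + 1) / (m2 + 1) - 1
    var = ((n1 + 1) * (n2 + 1) * (n1 - m2) * (n2 - m2)
           / ((m2 + 1) ** 2 * (m2 + 2)))
    return n_hat, var ** 0.5

# Invented example: 12 tigers photographed in each of two sessions, 7 recaptured
n_hat, se = chapman_estimate(12, 12, 7)
print(f"estimated abundance: {n_hat:.1f} (SE {se:.1f})")
```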

  11. Some recommendations for an accurate estimation of Lanice conchilega density based on tube counts

    NASA Astrophysics Data System (ADS)

    van Hoey, Gert; Vincx, Magda; Degraer, Steven

    2006-12-01

    The tube building polychaete Lanice conchilega is a common and ecologically important species in intertidal and shallow subtidal sands. It builds a characteristic tube with ragged fringes and can retract rapidly into its tube to depths of more than 20 cm. Therefore, it is very difficult to sample L. conchilega individuals, especially with a Van Veen grab. Consequently, many studies have used tube counts as estimates of real densities. This study reports on some aspects to be considered when using tube counts as a density estimate of L. conchilega, based on intertidal and subtidal samples. Due to its accuracy and independence of sampling depth, the tube method is considered the prime method to estimate the density of L. conchilega. However, caution is needed when analyzing samples with fragile young individuals and samples from areas where temporary physical disturbance is likely to occur.

  12. Effects of tissue heterogeneity on the optical estimate of breast density

    PubMed Central

    Taroni, Paola; Pifferi, Antonio; Quarto, Giovanna; Spinelli, Lorenzo; Torricelli, Alessandro; Abbate, Francesca; Balestreri, Nicola; Ganino, Serena; Menna, Simona; Cassano, Enrico; Cubeddu, Rinaldo

    2012-01-01

    Breast density is a recognized strong and independent risk factor for developing breast cancer. At present, breast density is assessed based on the radiological appearance of breast tissue, thus relying on the use of ionizing radiation. We have previously obtained encouraging preliminary results with our portable instrument for time domain optical mammography performed at 7 wavelengths (635–1060 nm). In that case, information was averaged over four images (cranio-caudal and oblique views of both breasts) available for each subject. In the present work, we tested the effectiveness of just one or few point measurements, to investigate if tissue heterogeneity significantly affects the correlation between optically derived parameters and mammographic density. Data show that parameters estimated through a single optical measurement correlate strongly with mammographic density estimated by using BIRADS categories. A central position is optimal for the measurement, but its exact location is not critical. PMID:23082283

  13. [Estimation of Hunan forest carbon density based on spectral mixture analysis of MODIS data].

    PubMed

    Yan, En-ping; Lin, Hui; Wang, Guang-xing; Chen, Zhen-xiong

    2015-11-01

    With the fast development of remote sensing technology, combining forest inventory sample plot data and remotely sensed images has become a widely used method to map forest carbon density. However, the existence of mixed pixels often impedes the improvement of forest carbon density mapping, especially when low spatial resolution images such as MODIS are used. In this study, MODIS images and national forest inventory sample plot data were used to estimate forest carbon density. Linear spectral mixture analysis with and without constraint, and nonlinear spectral mixture analysis were compared to derive the fractions of different land use and land cover (LULC) types. Then a sequential Gaussian co-simulation algorithm with and without the fraction images from spectral mixture analyses was employed to estimate the forest carbon density of Hunan Province. Results showed that 1) linear spectral mixture analysis with constraint, leading to a mean RMSE of 0.002, more accurately estimated the fractions of LULC types than linear spectral and nonlinear spectral mixture analyses; 2) integrating the spectral mixture analysis model and the sequential Gaussian co-simulation algorithm increased the estimation accuracy of forest carbon density to 81.5% from 74.1%, and decreased the RMSE to 5.18 from 7.26; and 3) the mean value of forest carbon density for the province was 30.06 t · hm(-2), ranging from 0.00 to 67.35 t · hm(-2). This implies that spectral mixture analysis has great potential to increase the estimation accuracy of forest carbon density at regional and global levels.

  14. [Estimation of Hunan forest carbon density based on spectral mixture analysis of MODIS data].

    PubMed

    Yan, En-ping; Lin, Hui; Wang, Guang-xing; Chen, Zhen-xiong

    2015-11-01

    With the fast development of remote sensing technology, combining forest inventory sample plot data and remotely sensed images has become a widely used method to map forest carbon density. However, the existence of mixed pixels often impedes the improvement of forest carbon density mapping, especially when low spatial resolution images such as MODIS are used. In this study, MODIS images and national forest inventory sample plot data were used to estimate forest carbon density. Linear spectral mixture analysis with and without constraint, and nonlinear spectral mixture analysis were compared to derive the fractions of different land use and land cover (LULC) types. Then a sequential Gaussian co-simulation algorithm with and without the fraction images from spectral mixture analyses was employed to estimate the forest carbon density of Hunan Province. Results showed that 1) linear spectral mixture analysis with constraint, leading to a mean RMSE of 0.002, more accurately estimated the fractions of LULC types than linear spectral and nonlinear spectral mixture analyses; 2) integrating the spectral mixture analysis model and the sequential Gaussian co-simulation algorithm increased the estimation accuracy of forest carbon density to 81.5% from 74.1%, and decreased the RMSE to 5.18 from 7.26; and 3) the mean value of forest carbon density for the province was 30.06 t · hm(-2), ranging from 0.00 to 67.35 t · hm(-2). This implies that spectral mixture analysis has great potential to increase the estimation accuracy of forest carbon density at regional and global levels. PMID:26915200
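
    To illustrate what a non-negative, sum-to-one constrained linear unmixing does, here is a generic sketch (not the study's processing chain) that solves for LULC fractions in a single pixel by augmenting the endmember matrix with a heavily weighted sum-to-one row and using non-negative least squares; the endmember spectra and the mixed pixel are made-up numbers.

```python
import numpy as np
from scipy.optimize import nnls

def unmix_pixel(endmembers, pixel, weight=100.0):
    """Constrained linear spectral unmixing of one pixel.

    endmembers : (n_bands, n_classes) matrix of endmember reflectances
    pixel      : (n_bands,) observed reflectance
    The sum-to-one constraint is enforced softly by an appended, heavily
    weighted row of ones; non-negativity comes from NNLS.
    """
    A = np.vstack([endmembers, weight * np.ones(endmembers.shape[1])])
    b = np.append(pixel, weight)
    fractions, _ = nnls(A, b)
    return fractions

# Made-up endmembers (columns: forest, cropland, built-up) over 4 spectral bands
E = np.array([[0.05, 0.10, 0.20],
              [0.08, 0.15, 0.22],
              [0.30, 0.25, 0.24],
              [0.45, 0.30, 0.26]])
obs = 0.6 * E[:, 0] + 0.3 * E[:, 1] + 0.1 * E[:, 2]   # synthetic mixed pixel
print(np.round(unmix_pixel(E, obs), 3))               # ~[0.6, 0.3, 0.1]
```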

  15. Fast and accurate probability density estimation in large high dimensional astronomical datasets

    NASA Astrophysics Data System (ADS)

    Gupta, Pramod; Connolly, Andrew J.; Gardner, Jeffrey P.

    2015-01-01

    Astronomical surveys will generate measurements of hundreds of attributes (e.g. color, size, shape) on hundreds of millions of sources. Analyzing these large, high dimensional data sets will require efficient algorithms for data analysis. An example of this is probability density estimation, which is at the heart of many classification problems such as the separation of stars and quasars based on their colors. Popular density estimation techniques use binning or kernel density estimation. Kernel density estimation has a small memory footprint but often requires large computational resources. Binning has small computational requirements, but it is usually implemented with multi-dimensional arrays, which leads to memory requirements that scale exponentially with the number of dimensions. Hence neither technique scales well to large data sets in high dimensions. We present an alternative approach of binning implemented with hash tables (BASH tables). This approach uses the sparseness of data in the high dimensional space to ensure that the memory requirements are small. However, hashing requires some extra computation, so a priori it is not clear if the reduction in memory requirements will lead to increased computational requirements. Through an implementation of BASH tables in C++ we show that the additional computational requirements of hashing are negligible. Hence this approach has small memory and computational requirements. We apply our density estimation technique to photometric selection of quasars using non-parametric Bayesian classification and show that the accuracy of the classification is the same as the accuracy of earlier approaches. Since the BASH table approach is one to three orders of magnitude faster than the earlier approaches, it may be useful in various other applications of density estimation in astrostatistics.
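
    The core idea, binning with a hash table keyed by bin indices so that only occupied cells consume memory, can be sketched in a few lines; this is an illustrative reimplementation of the concept in Python, not the authors' C++ code, and all parameters are arbitrary.

```python
from collections import defaultdict
import numpy as np

def build_bash_table(points, bin_width):
    """Sparse histogram: map occupied bin-index tuples to counts."""
    table = defaultdict(int)
    for p in points:
        key = tuple(np.floor(p / bin_width).astype(int))
        table[key] += 1
    return table

def density(table, query, bin_width, n_total):
    """Density estimate at a query point from the sparse histogram."""
    key = tuple(np.floor(query / bin_width).astype(int))
    cell_volume = bin_width ** len(query)
    return table.get(key, 0) / (n_total * cell_volume)

# Toy example in 5 dimensions: memory scales with occupied bins, not bins**5
rng = np.random.default_rng(1)
data = rng.normal(size=(10000, 5))
table = build_bash_table(data, bin_width=0.5)
print(len(table), "occupied bins;", density(table, data[0], 0.5, len(data)))
```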

  16. An automatic iris occlusion estimation method based on high-dimensional density estimation.

    PubMed

    Li, Yung-Hui; Savvides, Marios

    2013-04-01

    Iris masks play an important role in iris recognition. They indicate which part of the iris texture map is useful and which part is occluded or contaminated by noisy image artifacts such as eyelashes, eyelids, eyeglasses frames, and specular reflections. The accuracy of the iris mask is extremely important. The performance of the iris recognition system will decrease dramatically when the iris mask is inaccurate, even when the best recognition algorithm is used. Traditionally, rule-based algorithms have been used to estimate iris masks from iris images. However, the accuracy of the iris masks generated this way is questionable. In this work, we propose to use Figueiredo and Jain's Gaussian Mixture Models (FJ-GMMs) to model the underlying probabilistic distributions of both valid and invalid regions on iris images. We also explored possible features and found that the Gabor Filter Bank (GFB) provides the most discriminative information for our goal. Finally, we applied the Simulated Annealing (SA) technique to optimize the parameters of the GFB in order to achieve the best recognition rate. Experimental results show that the masks generated by the proposed algorithm increase the iris recognition rate on both the ICE2 and UBIRIS datasets, verifying the effectiveness and importance of our proposed method for iris occlusion estimation. PMID:22868651
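
    A compressed sketch of the modelling idea (Gaussian mixtures fitted to Gabor filter responses, then used to label pixels as valid iris texture or occlusion) is given below. It substitutes a standard scikit-learn Gaussian mixture for the Figueiredo-Jain variant, omits the simulated-annealing parameter search, and uses illustrative filter settings; it is not the authors' implementation.

```python
import numpy as np
from skimage.filters import gabor
from sklearn.mixture import GaussianMixture

def gabor_features(image, frequencies=(0.1, 0.2), thetas=(0, np.pi / 4, np.pi / 2)):
    """Stack Gabor filter magnitude responses as per-pixel feature vectors."""
    feats = []
    for f in frequencies:
        for t in thetas:
            real, imag = gabor(image, frequency=f, theta=t)
            feats.append(np.hypot(real, imag))
    return np.stack(feats, axis=-1).reshape(-1, len(feats))

def fit_occlusion_model(image, valid_mask, n_components=3):
    """Fit one GMM to valid-iris pixels and one to occluded pixels.
    valid_mask is a boolean array of the same shape as image."""
    X = gabor_features(image)
    labels = valid_mask.reshape(-1)
    gmm_valid = GaussianMixture(n_components).fit(X[labels])
    gmm_invalid = GaussianMixture(n_components).fit(X[~labels])
    return gmm_valid, gmm_invalid, X

def predict_mask(gmm_valid, gmm_invalid, X, shape):
    """Label each pixel by whichever mixture gives it the higher likelihood."""
    mask = gmm_valid.score_samples(X) > gmm_invalid.score_samples(X)
    return mask.reshape(shape)
```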

  17. Estimation of stratospheric-mesospheric density fields from satellite radiance data

    NASA Technical Reports Server (NTRS)

    Quiroz, R. S.

    1974-01-01

    Description of a method for deriving horizontal density fields at altitudes above 30 km directly from satellite radiation measurements. The method is applicable to radiation measurements from any instrument with suitable transmittance weighting functions. Data such as those acquired by the Satellite Infrared Spectrometers on satellites Nimbus 3 and 4 are employed for demonstrating the use of the method for estimating stratospheric-mesospheric density fields.

  18. Trap Array Configuration Influences Estimates and Precision of Black Bear Density and Abundance

    PubMed Central

    Wilton, Clay M.; Puckett, Emily E.; Beringer, Jeff; Gardner, Beth; Eggert, Lori S.; Belant, Jerrold L.

    2014-01-01

    Spatial capture-recapture (SCR) models have advanced our ability to estimate population density for wide ranging animals by explicitly incorporating individual movement. Though these models are more robust to various spatial sampling designs, few studies have empirically tested different large-scale trap configurations using SCR models. We investigated how extent of trap coverage and trap spacing affects precision and accuracy of SCR parameters, implementing models using the R package secr. We tested two trapping scenarios, one spatially extensive and one intensive, using black bear (Ursus americanus) DNA data from hair snare arrays in south-central Missouri, USA. We also examined the influence that adding a second, lower barbed-wire strand to snares had on quantity and spatial distribution of detections. We simulated trapping data to test bias in density estimates of each configuration under a range of density and detection parameter values. Field data showed that using multiple arrays with intensive snare coverage produced more detections of more individuals than extensive coverage. Consequently, density and detection parameters were more precise for the intensive design. Density was estimated as 1.7 bears per 100 km2 and was 5.5 times greater than that under extensive sampling. Abundance was 279 (95% CI = 193–406) bears in the 16,812 km2 study area. Excluding detections from the lower strand resulted in the loss of 35 detections, 14 unique bears, and the largest recorded movement between snares. All simulations showed low bias for density under both configurations. Results demonstrated that in low density populations with non-uniform distribution of population density, optimizing the tradeoff among snare spacing, coverage, and sample size is of critical importance to estimating parameters with high precision and accuracy. With limited resources, allocating available traps to multiple arrays with intensive trap spacing increased the amount of information

  19. Analysis of Scattering Components from Fully Polarimetric SAR Images for Improving Accuracies of Urban Density Estimation

    NASA Astrophysics Data System (ADS)

    Susaki, J.

    2016-06-01

    In this paper, we analyze probability density functions (PDFs) of scatterings derived from fully polarimetric synthetic aperture radar (SAR) images for improving the accuracies of estimated urban density. We have reported a method for estimating urban density that uses an index Tv+c obtained by normalizing the sum of volume and helix scatterings Pv+c. Validation results showed that estimated urban densities have a high correlation with building-to-land ratios (Kajimoto and Susaki, 2013b; Susaki et al., 2014). While the method is found to be effective for estimating urban density, it is not clear why Tv+c is more effective than indices derived from other scatterings, such as surface or double-bounce scatterings, observed in urban areas. In this research, we focus on PDFs of scatterings derived from fully polarimetric SAR images in terms of scattering normalization. First, we introduce a theoretical PDF that assumes that image pixels have scatterers showing random backscattering. We then generate PDFs of scatterings derived from observations of concrete blocks with different orientation angles, and from a satellite-based fully polarimetric SAR image. The analysis of the PDFs and the derived statistics reveals that the curves of the PDFs of Pv+c are the most similar to the normal distribution among all the scatterings derived from fully polarimetric SAR images. It was found that Tv+c works most effectively because of its similarity to the normal distribution.

  20. Distributed Noise Generation for Density Estimation Based Clustering without Trusted Third Party

    NASA Astrophysics Data System (ADS)

    Su, Chunhua; Bao, Feng; Zhou, Jianying; Takagi, Tsuyoshi; Sakurai, Kouichi

    The rapid growth of the Internet provides people with tremendous opportunities for data collection, knowledge discovery and cooperative computation. However, it also brings the problem of sensitive information leakage. Both individuals and enterprises may suffer from the massive data collection and the information retrieval by distrusted parties. In this paper, we propose a privacy-preserving protocol for distributed kernel density estimation-based clustering. Our scheme applies the random data perturbation (RDP) technique and verifiable secret sharing to solve the security problem of the distributed kernel density estimation in [4], which assumed an intermediary party to help in the computation.
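
    The flavour of noise-based protection for distributed kernel density estimation can be conveyed with a much simpler device than the paper's protocol: pairs of parties agree on pseudo-random noise that cancels in the aggregate, so each party's transmitted kernel sums are masked while the global density estimate is unchanged. The sketch below is only that illustration (it has no verifiable secret sharing and should not be read as the proposed scheme); all values are arbitrary.

```python
import numpy as np

def local_kernel_sums(data, grid, bandwidth):
    """Each party's unnormalised Gaussian kernel sums on a shared grid."""
    d = grid[:, None] - data[None, :]
    return np.exp(-0.5 * (d / bandwidth) ** 2).sum(axis=1)

def masked_contribution(party, n_parties, sums, seed=42):
    """Add pairwise noise that cancels when all contributions are summed."""
    masked = sums.copy()
    for other in range(n_parties):
        if other == party:
            continue
        # Both members of a pair derive the same noise from a shared seed
        rng = np.random.default_rng(seed + min(party, other) * 1000 + max(party, other))
        noise = rng.normal(scale=10.0, size=sums.shape)
        masked += noise if party < other else -noise
    return masked

grid = np.linspace(-4, 4, 81)
parties = [np.random.default_rng(i).normal(size=200) for i in range(3)]
masked = [masked_contribution(i, 3, local_kernel_sums(p, grid, 0.3))
          for i, p in enumerate(parties)]
# Aggregate KDE: the pairwise noise cancels, leaving the ordinary estimate
density = sum(masked) / (3 * 200 * 0.3 * np.sqrt(2 * np.pi))
```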

  1. Estimating the amount and distribution of radon flux density from the soil surface in China.

    PubMed

    Zhuo, Weihai; Guo, Qiuju; Chen, Bo; Cheng, Guan

    2008-07-01

    Based on an idealized model, both the annual and the seasonal radon ((222)Rn) flux densities from the soil surface at 1099 sites in China were estimated by linking a database of soil (226)Ra content and a global ecosystems database. Digital maps of the (222)Rn flux density in China were constructed in a spatial resolution of 25 km x 25 km by interpolation among the estimated data. An area-weighted annual average (222)Rn flux density from the soil surface across China was estimated to be 29.7+/-9.4 mBq m(-2)s(-1). Both regional and seasonal variations in the (222)Rn flux densities are significant in China. Annual average flux densities in the southeastern and northwestern China are generally higher than those in other regions of China, because of high soil (226)Ra content in the southeastern area and high soil aridity in the northwestern one. The seasonal average flux density is generally higher in summer/spring than winter, since relatively higher soil temperature and lower soil water saturation in summer/spring than other seasons are common in China.

  2. A Statistical Analysis for Estimating Fish Number Density with the Use of a Multibeam Echosounder

    NASA Astrophysics Data System (ADS)

    Schroth-Miller, Madeline L.

    Fish number density can be estimated from the normalized second moment of acoustic backscatter intensity [Denbigh et al., J. Acoust. Soc. Am. 90, 457-469 (1991)]. This method assumes that the distribution of fish scattering amplitudes is known and that the fish are randomly distributed following a Poisson volume distribution within regions of constant density. It is most useful at low fish densities, relative to the resolution of the acoustic device being used, since the estimators quickly become noisy as the number of fish per resolution cell increases. New models that include noise contributions are considered. The methods were applied to an acoustic assessment of juvenile Atlantic Bluefin Tuna, Thunnus thynnus. The data were collected using a 400 kHz multibeam echo sounder during the summer months of 2009 in Cape Cod, MA. Due to the high resolution of the multibeam system used, the large size (approx. 1.5 m) of the tuna, and the spacing of the fish in the school, we expect there to be low fish densities relative to the resolution of the multibeam system. Results of the fish number density based on the normalized second moment of acoustic intensity are compared to fish packing density estimated using aerial imagery that was collected simultaneously.

  3. A hierarchical model for estimating density in camera-trap studies

    USGS Publications Warehouse

    Royle, J. Andrew; Nichols, J.D.; Karanth, K.U.; Gopalaswamy, A.M.

    2009-01-01

    1. Estimating animal density using capture–recapture data from arrays of detection devices such as camera traps has been problematic due to the movement of individuals and heterogeneity in capture probability among them induced by differential exposure to trapping. 2. We develop a spatial capture–recapture model for estimating density from camera-trapping data which contains explicit models for the spatial point process governing the distribution of individuals and their exposure to and detection by traps. 3. We adopt a Bayesian approach to analysis of the hierarchical model using the technique of data augmentation. 4. The model is applied to photographic capture–recapture data on tigers Panthera tigris in Nagarahole reserve, India. Using this model, we estimate the density of tigers to be 14.3 animals per 100 km2 during 2004. 5. Synthesis and applications. Our modelling framework largely overcomes several weaknesses in conventional approaches to the estimation of animal density from trap arrays. It effectively deals with key problems such as individual heterogeneity in capture probabilities, movement of traps, presence of potential 'holes' in the array and ad hoc estimation of sample area. The formulation, thus, greatly enhances flexibility in the conduct of field surveys as well as in the analysis of data, from studies that may involve physical, photographic or DNA-based 'captures' of individual animals.

  4. Trapping Elusive Cats: Using Intensive Camera Trapping to Estimate the Density of a Rare African Felid.

    PubMed

    Brassine, Eléanor; Parker, Daniel

    2015-01-01

    Camera trapping studies have become increasingly popular to produce population estimates of individually recognisable mammals. Yet, monitoring techniques for rare species which occur at extremely low densities are lacking. Additionally, species which have unpredictable movements may make obtaining reliable population estimates challenging due to low detectability. Our study explores the effectiveness of intensive camera trapping for estimating cheetah (Acinonyx jubatus) numbers. Using both a more traditional, systematic grid approach and pre-determined, targeted sites for camera placement, the cheetah population of the Northern Tuli Game Reserve, Botswana was sampled between December 2012 and October 2013. Placement of cameras in a regular grid pattern yielded very few (n = 9) cheetah images and these were insufficient to estimate cheetah density. However, pre-selected cheetah scent-marking posts provided 53 images of seven adult cheetahs (0.61 ± 0.18 cheetahs/100 km²). While increasing the length of the camera trapping survey from 90 to 130 days increased the total number of cheetah images obtained (from 53 to 200), no new individuals were recorded and the estimated population density remained stable. Thus, our study demonstrates that targeted camera placement (irrespective of survey duration) is necessary for reliably assessing cheetah densities where populations are naturally very low or dominated by transient individuals. Significantly our approach can easily be applied to other rare predator species.

  5. Productivity and population density estimates of the dengue vector mosquito Aedes aegypti (Stegomyia aegypti) in Australia.

    PubMed

    Williams, C R; Johnson, P H; Ball, T S; Ritchie, S A

    2013-09-01

    New mosquito control strategies centred on the modifying of populations require knowledge of existing population densities at release sites and an understanding of breeding site ecology. Using a quantitative pupal survey method, we investigated production of the dengue vector Aedes aegypti (L.) (Stegomyia aegypti) (Diptera: Culicidae) in Cairns, Queensland, Australia, and found that garden accoutrements represented the most common container type. Deliberately placed 'sentinel' containers were set at seven houses and sampled for pupae over 10 weeks during the wet season. Pupal production was approximately constant; tyres and buckets represented the most productive container types. Sentinel tyres produced the largest female mosquitoes, but were relatively rare in the field survey. We then used field-collected data to make estimates of per premises population density using three different approaches. Estimates of female Ae. aegypti abundance per premises made using the container-inhabiting mosquito simulation (CIMSiM) model [95% confidence interval (CI) 18.5-29.1 females] concorded reasonably well with estimates obtained using a standing crop calculation based on pupal collections (95% CI 8.8-22.5) and using BG-Sentinel traps and a sampling rate correction factor (95% CI 6.2-35.2). By first describing local Ae. aegypti productivity, we were able to compare three separate population density estimates which provided similar results. We anticipate that this will provide researchers and health officials with several tools with which to make estimates of population densities.

  6. Hierarchical models for estimating density from DNA mark-recapture studies

    USGS Publications Warehouse

    Gardner, B.; Royle, J. Andrew; Wegan, M.T.

    2009-01-01

    Genetic sampling is increasingly used as a tool by wildlife biologists and managers to estimate abundance and density of species. Typically, DNA is used to identify individuals captured in an array of traps ( e. g., baited hair snares) from which individual encounter histories are derived. Standard methods for estimating the size of a closed population can be applied to such data. However, due to the movement of individuals on and off the trapping array during sampling, the area over which individuals are exposed to trapping is unknown, and so obtaining unbiased estimates of density has proved difficult. We propose a hierarchical spatial capture-recapture model which contains explicit models for the spatial point process governing the distribution of individuals and their exposure to (via movement) and detection by traps. Detection probability is modeled as a function of each individual's distance to the trap. We applied this model to a black bear (Ursus americanus) study conducted in 2006 using a hair-snare trap array in the Adirondack region of New York, USA. We estimated the density of bears to be 0.159 bears/km2, which is lower than the estimated density (0.410 bears/km2) based on standard closed population techniques. A Bayesian analysis of the model is fully implemented in the software program WinBUGS.

  7. Trapping Elusive Cats: Using Intensive Camera Trapping to Estimate the Density of a Rare African Felid.

    PubMed

    Brassine, Eléanor; Parker, Daniel

    2015-01-01

    Camera trapping studies have become increasingly popular to produce population estimates of individually recognisable mammals. Yet, monitoring techniques for rare species which occur at extremely low densities are lacking. Additionally, species which have unpredictable movements may make obtaining reliable population estimates challenging due to low detectability. Our study explores the effectiveness of intensive camera trapping for estimating cheetah (Acinonyx jubatus) numbers. Using both a more traditional, systematic grid approach and pre-determined, targeted sites for camera placement, the cheetah population of the Northern Tuli Game Reserve, Botswana was sampled between December 2012 and October 2013. Placement of cameras in a regular grid pattern yielded very few (n = 9) cheetah images and these were insufficient to estimate cheetah density. However, pre-selected cheetah scent-marking posts provided 53 images of seven adult cheetahs (0.61 ± 0.18 cheetahs/100 km²). While increasing the length of the camera trapping survey from 90 to 130 days increased the total number of cheetah images obtained (from 53 to 200), no new individuals were recorded and the estimated population density remained stable. Thus, our study demonstrates that targeted camera placement (irrespective of survey duration) is necessary for reliably assessing cheetah densities where populations are naturally very low or dominated by transient individuals. Significantly our approach can easily be applied to other rare predator species. PMID:26698574

  8. A hierarchical model for estimating density in camera-trap studies

    USGS Publications Warehouse

    Royle, J. Andrew; Nichols, James D.; Karanth, K.Ullas; Gopalaswamy, Arjun M.

    2009-01-01

    Estimating animal density using capture–recapture data from arrays of detection devices such as camera traps has been problematic due to the movement of individuals and heterogeneity in capture probability among them induced by differential exposure to trapping.We develop a spatial capture–recapture model for estimating density from camera-trapping data which contains explicit models for the spatial point process governing the distribution of individuals and their exposure to and detection by traps.We adopt a Bayesian approach to analysis of the hierarchical model using the technique of data augmentation.The model is applied to photographic capture–recapture data on tigers Panthera tigris in Nagarahole reserve, India. Using this model, we estimate the density of tigers to be 14·3 animals per 100 km2 during 2004.Synthesis and applications. Our modelling framework largely overcomes several weaknesses in conventional approaches to the estimation of animal density from trap arrays. It effectively deals with key problems such as individual heterogeneity in capture probabilities, movement of traps, presence of potential ‘holes’ in the array and ad hoc estimation of sample area. The formulation, thus, greatly enhances flexibility in the conduct of field surveys as well as in the analysis of data, from studies that may involve physical, photographic or DNA-based ‘captures’ of individual animals.

  9. Estimating cetacean population density using fixed passive acoustic sensors: an example with Blainville's beaked whales.

    PubMed

    Marques, Tiago A; Thomas, Len; Ward, Jessica; DiMarzio, Nancy; Tyack, Peter L

    2009-04-01

    Methods are developed for estimating the size/density of cetacean populations using data from a set of fixed passive acoustic sensors. The methods convert the number of detected acoustic cues into animal density by accounting for (i) the probability of detecting cues, (ii) the rate at which animals produce cues, and (iii) the proportion of false positive detections. Additional information is often required for estimation of these quantities, for example, from an acoustic tag applied to a sample of animals. Methods are illustrated with a case study: estimation of Blainville's beaked whale density over a 6 day period in spring 2005, using an 82 hydrophone wide-baseline array located in the Tongue of the Ocean, Bahamas. To estimate the required quantities, additional data are used from digital acoustic tags, attached to five whales over 21 deep dives, where cues recorded on some of the dives are associated with those received on the fixed hydrophones. Estimated density was 25.3 or 22.5 animals/1000 km(2), depending on assumptions about false positive detections, with 95% confidence intervals 17.3-36.9 and 15.4-32.9. These methods are potentially applicable to a wide variety of marine and terrestrial species that are hard to survey using conventional visual methods.
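
    The conversion this abstract describes can be written as a one-line estimator: detected cues, corrected for false positives, divided by the monitored area, the detection probability, the recording time, and the cue production rate. The sketch below is a back-of-the-envelope illustration of that arithmetic with invented inputs, not the paper's analysis.

```python
import numpy as np

def cue_count_density(n_cues, false_pos_prop, n_sensors, max_range_km,
                      p_detect, hours, cue_rate_per_hour):
    """Animal density (per 1000 km^2) from cue counts at fixed sensors.

    n_cues            : total detected cues across all sensors
    false_pos_prop    : estimated proportion of false positive detections
    n_sensors         : number of hydrophones
    max_range_km      : radius of the circle monitored around each sensor
    p_detect          : average probability of detecting a cue within that circle
    hours             : recording duration
    cue_rate_per_hour : cue (click) production rate per animal
    """
    area = n_sensors * np.pi * max_range_km ** 2          # km^2 monitored
    cues_per_animal = cue_rate_per_hour * hours
    density = n_cues * (1 - false_pos_prop) / (area * p_detect * cues_per_animal)
    return 1000.0 * density                               # per 1000 km^2

# Invented numbers, purely to show the arithmetic
print(cue_count_density(n_cues=2_000_000, false_pos_prop=0.2, n_sensors=82,
                        max_range_km=8.0, p_detect=0.03, hours=144,
                        cue_rate_per_hour=2500))
```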

  10. Trapping Elusive Cats: Using Intensive Camera Trapping to Estimate the Density of a Rare African Felid

    PubMed Central

    Brassine, Eléanor; Parker, Daniel

    2015-01-01

    Camera trapping studies have become increasingly popular to produce population estimates of individually recognisable mammals. Yet, monitoring techniques for rare species which occur at extremely low densities are lacking. Additionally, species which have unpredictable movements may make obtaining reliable population estimates challenging due to low detectability. Our study explores the effectiveness of intensive camera trapping for estimating cheetah (Acinonyx jubatus) numbers. Using both a more traditional, systematic grid approach and pre-determined, targeted sites for camera placement, the cheetah population of the Northern Tuli Game Reserve, Botswana was sampled between December 2012 and October 2013. Placement of cameras in a regular grid pattern yielded very few (n = 9) cheetah images and these were insufficient to estimate cheetah density. However, pre-selected cheetah scent-marking posts provided 53 images of seven adult cheetahs (0.61 ± 0.18 cheetahs/100km²). While increasing the length of the camera trapping survey from 90 to 130 days increased the total number of cheetah images obtained (from 53 to 200), no new individuals were recorded and the estimated population density remained stable. Thus, our study demonstrates that targeted camera placement (irrespective of survey duration) is necessary for reliably assessing cheetah densities where populations are naturally very low or dominated by transient individuals. Significantly our approach can easily be applied to other rare predator species. PMID:26698574

  11. Scent Lure Effect on Camera-Trap Based Leopard Density Estimates.

    PubMed

    Braczkowski, Alexander Richard; Balme, Guy Andrew; Dickman, Amy; Fattebert, Julien; Johnson, Paul; Dickerson, Tristan; Macdonald, David Whyte; Hunter, Luke

    2016-01-01

    Density estimates for large carnivores derived from camera surveys often have wide confidence intervals due to low detection rates. Such estimates are of limited value to authorities, which require precise population estimates to inform conservation strategies. Using lures can potentially increase detection, improving the precision of estimates. However, by altering the spatio-temporal patterning of individuals across the camera array, lures may violate closure, a fundamental assumption of capture-recapture. Here, we test the effect of scent lures on the precision and veracity of density estimates derived from camera-trap surveys of a protected African leopard population. We undertook two surveys (a 'control' and 'treatment' survey) on Phinda Game Reserve, South Africa. Survey design remained consistent except that a scent lure was applied at camera-trap stations during the treatment survey. Lures did not affect the maximum movement distances (p = 0.96) or temporal activity of female (p = 0.12) or male leopards (p = 0.79), and the assumption of geographic closure was met for both surveys (p >0.05). The numbers of photographic captures were also similar for control and treatment surveys (p = 0.90). Accordingly, density estimates were comparable between surveys, although estimates derived using non-spatial methods (7.28-9.28 leopards/100km2) were considerably higher than estimates from spatially-explicit methods (3.40-3.65 leopards/100km2). The precision of estimates from the control and treatment surveys was also comparable, and this applied to both non-spatial and spatial methods of estimation. Our findings suggest that at least in the context of leopard research in productive habitats, the use of lures is not warranted. PMID:27050816

  12. Scent Lure Effect on Camera-Trap Based Leopard Density Estimates

    PubMed Central

    Braczkowski, Alexander Richard; Balme, Guy Andrew; Dickman, Amy; Fattebert, Julien; Johnson, Paul; Dickerson, Tristan; Macdonald, David Whyte; Hunter, Luke

    2016-01-01

    Density estimates for large carnivores derived from camera surveys often have wide confidence intervals due to low detection rates. Such estimates are of limited value to authorities, which require precise population estimates to inform conservation strategies. Using lures can potentially increase detection, improving the precision of estimates. However, by altering the spatio-temporal patterning of individuals across the camera array, lures may violate closure, a fundamental assumption of capture-recapture. Here, we test the effect of scent lures on the precision and veracity of density estimates derived from camera-trap surveys of a protected African leopard population. We undertook two surveys (a ‘control’ and ‘treatment’ survey) on Phinda Game Reserve, South Africa. Survey design remained consistent except that a scent lure was applied at camera-trap stations during the treatment survey. Lures did not affect the maximum movement distances (p = 0.96) or temporal activity of female (p = 0.12) or male leopards (p = 0.79), and the assumption of geographic closure was met for both surveys (p >0.05). The numbers of photographic captures were also similar for control and treatment surveys (p = 0.90). Accordingly, density estimates were comparable between surveys, although estimates derived using non-spatial methods (7.28–9.28 leopards/100km2) were considerably higher than estimates from spatially-explicit methods (3.40–3.65 leopards/100km2). The precision of estimates from the control and treatment surveys was also comparable, and this applied to both non-spatial and spatial methods of estimation. Our findings suggest that at least in the context of leopard research in productive habitats, the use of lures is not warranted. PMID:27050816

  13. Estimation of current density distribution of PAFC by analysis of cell exhaust gas

    SciTech Connect

    Kato, S.; Seya, A.; Asano, A.

    1996-12-31

    Estimating the distributions of current densities, voltages, gas concentrations, etc., in phosphoric acid fuel cell (PAFC) stacks is very important for obtaining fuel cells of higher quality. In this work, we have developed a numerical simulation tool to map out these distributions in a PAFC stack. In particular, to study the current density distribution in the reaction area of the cell, we analyzed the gas composition at several positions inside a gas outlet manifold of the PAFC stack. By comparing these measured data with the calculated data, the current density distribution in a cell plane calculated by the simulation was verified.

  14. Variational estimation of the drift for stochastic differential equations from the empirical density

    NASA Astrophysics Data System (ADS)

    Batz, Philipp; Ruttor, Andreas; Opper, Manfred

    2016-08-01

    We present a method for the nonparametric estimation of the drift function of certain types of stochastic differential equations from the empirical density. It is based on a variational formulation of the Fokker–Planck equation. The minimization of an empirical estimate of the variational functional using kernel based regularization can be performed in closed form. We demonstrate the performance of the method on second order, Langevin-type equations and show how the method can be generalized to other noise models.
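
    As a much simplified illustration of estimating a drift from an empirical density: for a one-dimensional overdamped Langevin equation with known diffusion constant D and a stationary, zero-flux density p(x), the drift satisfies f(x) = D d/dx log p(x). The sketch below uses a plain kernel density estimate rather than the paper's variational, kernel-regularized formulation; the double-well drift and all constants are assumptions for illustration.

        import numpy as np
        from scipy.stats import gaussian_kde

        rng = np.random.default_rng(0)
        D, dt, n = 1.0, 1e-3, 100_000        # assumed known diffusion constant

        # Simulate an overdamped Langevin process with true drift f(x) = x - x**3
        # (Euler-Maruyama integration).
        x = np.empty(n); x[0] = 0.0
        for i in range(1, n):
            x[i] = x[i-1] + (x[i-1] - x[i-1]**3) * dt + np.sqrt(2*D*dt) * rng.standard_normal()

        # Drift from the empirical density: for a stationary 1-D diffusion with
        # zero probability flux, f(x) = D * d/dx log p(x).
        kde = gaussian_kde(x)
        grid = np.linspace(-2, 2, 201)
        drift_est = D * np.gradient(np.log(kde(grid)), grid)
        print(np.round(drift_est[::50], 2))   # compare with x - x**3 on the grid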

  15. Segmentation of complementary DNA microarray images by wavelet-based Markov random field model.

    PubMed

    Athanasiadis, Emmanouil I; Cavouras, Dionisis A; Glotsos, Dimitris Th; Georgiadis, Pantelis V; Kalatzis, Ioannis K; Nikiforidis, George C

    2009-11-01

    A wavelet-based modification of the Markov random field (WMRF) model is proposed for segmenting complementary DNA (cDNA) microarray images. For evaluation purposes, five simulated and a set of five real microarray images were used. The one-level stationary wavelet transform (SWT) of each microarray image was used to form two images, a denoised image, using hard thresholding filter, and a magnitude image, from the amplitudes of the horizontal and vertical components of SWT. Elements from these two images were suitably combined to form the WMRF model for segmenting spots from their background. The WMRF was compared against the conventional MRF and the Fuzzy C means (FCM) algorithms on simulated and real microarray images and their performances were evaluated by means of the segmentation matching factor (SMF) and the coefficient of determination (r2). Additionally, the WMRF was compared against the SPOT and SCANALYZE, and performances were evaluated by the mean absolute error (MAE) and the coefficient of variation (CV). The WMRF performed more accurately than the MRF and FCM (SMF: 92.66, 92.15, and 89.22, r2 : 0.92, 0.90, and 0.84, respectively) and achieved higher reproducibility than the MRF, SPOT, and SCANALYZE (MAE: 497, 1215, 1180, and 503, CV: 0.88, 1.15, 0.93, and 0.90, respectively).
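
    A rough sketch of the two ingredients the WMRF model is built from, a denoised image and a detail-magnitude image obtained from a one-level stationary wavelet transform; the synthetic "spot", the Haar wavelet and the threshold rule are illustrative choices, not those of the paper.

        import numpy as np
        import pywt

        rng = np.random.default_rng(1)
        img = rng.normal(size=(128, 128))        # stand-in for a cDNA microarray block
        img[48:80, 48:80] += 4.0                 # a synthetic "spot"

        # One-level stationary (undecimated) wavelet transform.
        (cA, (cH, cV, cD)), = pywt.swt2(img, 'haar', level=1)

        # Denoised image: hard-threshold the detail sub-bands and invert.
        thr = 3.0 * np.median(np.abs(cD)) / 0.6745   # rough MAD-based threshold (assumption)
        details = tuple(pywt.threshold(c, thr, mode='hard') for c in (cH, cV, cD))
        denoised = pywt.iswt2([(cA, details)], 'haar')

        # Magnitude image from the horizontal and vertical detail amplitudes.
        magnitude = np.sqrt(cH**2 + cV**2)
        print(denoised.shape, magnitude.shape)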

  16. Wavelet-Based Visible and Infrared Image Fusion: A Comparative Study.

    PubMed

    Sappa, Angel D; Carvajal, Juan A; Aguilera, Cristhian A; Oliveira, Miguel; Romero, Dennis; Vintimilla, Boris X

    2016-01-01

    This paper evaluates different wavelet-based cross-spectral image fusion strategies adopted to merge visible and infrared images. The objective is to find the best setup independently of the evaluation metric used to measure the performance. Quantitative performance results are obtained with state-of-the-art approaches together with adaptations proposed in the current work. The options evaluated in the current work result from the combination of different setups in the wavelet image decomposition stage together with different fusion strategies for the final merging stage that generates the resulting representation. Most of the approaches evaluate results according to the application for which they are intended. Sometimes a human observer is selected to judge the quality of the obtained results. In the current work, quantitative values are considered in order to find correlations between setups and the performance of the obtained results; these correlations can be used to define a criterion for selecting the best fusion strategy for a given pair of cross-spectral images. The whole procedure is evaluated with a large set of correctly registered visible and infrared image pairs, including both Near InfraRed (NIR) and Long Wave InfraRed (LWIR). PMID:27294938
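
    One of the simplest members of the family of strategies compared in such studies is multi-level DWT fusion with an averaged approximation band and a max-absolute rule for the detail bands. A hedged sketch follows; the wavelet, level and fusion rule are illustrative choices, not the best setup reported in the paper.

        import numpy as np
        import pywt

        def fuse_wavelet(visible, infrared, wavelet='db2', level=2):
            """Fuse two registered images: average the approximation band,
            keep the detail coefficient with the larger magnitude (max-abs rule)."""
            ca = pywt.wavedec2(visible, wavelet, level=level)
            cb = pywt.wavedec2(infrared, wavelet, level=level)
            fused = [0.5 * (ca[0] + cb[0])]                  # approximation band
            for (ha, va, da), (hb, vb, db) in zip(ca[1:], cb[1:]):
                fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                                   for a, b in ((ha, hb), (va, vb), (da, db))))
            return pywt.waverec2(fused, wavelet)

        rng = np.random.default_rng(2)
        vis = rng.random((64, 64))
        nir = rng.random((64, 64))
        print(fuse_wavelet(vis, nir).shape)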

  17. A Wavelet-Based ECG Delineation Method: Adaptation to an Experimental Electrograms with Manifested Global Ischemia.

    PubMed

    Hejč, Jakub; Vítek, Martin; Ronzhina, Marina; Nováková, Marie; Kolářová, Jana

    2015-09-01

    We present a novel wavelet-based ECG delineation method with robust classification of the P wave and T wave. The work is aimed at adapting the method to long-term experimental electrograms (EGs) measured on isolated rabbit hearts and at evaluating the effect of global ischemia in experimental EGs on delineation performance. The algorithm was tested on a set of 263 rabbit EGs with established reference points and on human signals from the standard Common Standards for Quantitative Electrocardiography database (CSEDB). On CSEDB, the standard deviation (SD) of measured errors satisfies the given criteria in each point and the results are comparable to other published works. In rabbit signals, our QRS detector reached a sensitivity of 99.87% and a positive predictivity of 99.89% despite an overlap of the spectral components of the QRS complex, P wave, and power line noise. The algorithm shows great performance in suppressing J-point elevation and reached low overall error in both QRS onset (SD = 2.8 ms) and QRS offset (SD = 4.3 ms) delineation. The T wave offset is detected with acceptable error (SD = 12.9 ms) and a sensitivity of nearly 99%. The variance of the errors during global ischemia remains relatively stable; however, more failures in the detection of the T wave and P wave occur. Due to differences in spectral and timing characteristics, the parameters of the rabbit-based algorithm have to be highly adaptable and set more precisely than for human ECG signals to reach acceptable performance. PMID:26577367

  18. Online Epileptic Seizure Prediction Using Wavelet-Based Bi-Phase Correlation of Electrical Signals Tomography.

    PubMed

    Vahabi, Zahra; Amirfattahi, Rasoul; Shayegh, Farzaneh; Ghassemi, Fahimeh

    2015-09-01

    Considerable efforts have been made to predict seizures. Among these methods, the ones that quantify synchronization between brain areas are the most important. However, to date, a practically acceptable result has not been reported. In this paper, we use a synchronization measurement method that is derived from the ability of the bi-spectrum to determine the nonlinear properties of a system. In this method, first, the temporal variation of the bi-spectrum of different channels of electrocorticography (ECoG) signals is obtained via an extended wavelet-based time-frequency analysis method; then, to compare different channels, the bi-phase correlation measure is introduced. Since, in this way, the temporal variation of the amount of nonlinear coupling between brain regions, which has not been considered before, is taken into account, the results are more reliable than the conventional phase-synchronization measures. It is shown that, for 21 patients of the FSPEEG database, bi-phase correlation can discriminate the pre-ictal and ictal states with very low false positive rates (FPRs) (average: 0.078/h) and high sensitivity (100%). However, the proposed seizure predictor still cannot significantly outperform a random predictor for all patients. PMID:26126613

  19. Finding the multipath propagation of multivariable crude oil prices using a wavelet-based network approach

    NASA Astrophysics Data System (ADS)

    Jia, Xiaoliang; An, Haizhong; Sun, Xiaoqi; Huang, Xuan; Gao, Xiangyun

    2016-04-01

    The globalization and regionalization of crude oil trade inevitably give rise to the difference of crude oil prices. The understanding of the pattern of the crude oil prices' mutual propagation is essential for analyzing the development of global oil trade. Previous research has focused mainly on the fuzzy long- or short-term one-to-one propagation of bivariate oil prices, generally ignoring various patterns of periodical multivariate propagation. This study presents a wavelet-based network approach to help uncover the multipath propagation of multivariable crude oil prices in a joint time-frequency period. The weekly oil spot prices of the OPEC member states from June 1999 to March 2011 are adopted as the sample data. First, we used wavelet analysis to find different subseries based on an optimal decomposing scale to describe the periodical feature of the original oil price time series. Second, a complex network model was constructed based on an optimal threshold selection to describe the structural feature of multivariable oil prices. Third, Bayesian network analysis (BNA) was conducted to find the probability causal relationship based on periodical structural features to describe the various patterns of periodical multivariable propagation. Finally, the significance of the leading and intermediary oil prices is discussed. These findings are beneficial for the implementation of periodical target-oriented pricing policies and investment strategies.
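
    A compact sketch of the first two stages described above, wavelet smoothing of each price series followed by a correlation-threshold network (the Bayesian network stage is omitted); the wavelet, decomposition level, correlation threshold and synthetic data are assumptions for illustration.

        import numpy as np
        import pywt
        import networkx as nx

        rng = np.random.default_rng(9)
        n_series, n_weeks = 6, 512
        prices = np.cumsum(rng.standard_normal((n_series, n_weeks)), axis=1) + 60.0

        def lowpass_approximation(series, wavelet='db4', level=3):
            """Keep only the level-3 approximation of a series (zero the details)."""
            coeffs = pywt.wavedec(series, wavelet, level=level)
            coeffs[1:] = [np.zeros_like(c) for c in coeffs[1:]]
            return pywt.waverec(coeffs, wavelet)[:len(series)]

        # Step 1: periodical subseries from the wavelet decomposition.
        smooth = np.array([lowpass_approximation(p) for p in prices])

        # Step 2: network from strong pairwise correlations of the subseries.
        corr = np.corrcoef(smooth)
        adj = (np.abs(corr) > 0.6) & ~np.eye(n_series, dtype=bool)   # threshold is an assumption
        G = nx.from_numpy_array(adj.astype(int))
        print(G.number_of_nodes(), G.number_of_edges())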

  20. Selective error detection for error-resilient wavelet-based image coding.

    PubMed

    Karam, Lina J; Lam, Tuyet-Trang

    2007-12-01

    This paper introduces the concept of a similarity check function for error-resilient multimedia data transmission. The proposed similarity check function provides information about the effects of corrupted data on the quality of the reconstructed image. The degree of data corruption is measured by the similarity check function at the receiver, without explicit knowledge of the original source data. The design of a perceptual similarity check function is presented for wavelet-based coders such as the JPEG2000 standard, and used with a proposed "progressive similarity-based ARQ" (ProS-ARQ) scheme to significantly decrease the retransmission rate of corrupted data while maintaining very good visual quality of images transmitted over noisy channels. Simulation results with JPEG2000-coded images transmitted over the Binary Symmetric Channel, show that the proposed ProS-ARQ scheme significantly reduces the number of retransmissions as compared to conventional ARQ-based schemes. The presented results also show that, for the same number of retransmitted data packets, the proposed ProS-ARQ scheme can achieve significantly higher PSNR and better visual quality as compared to the selective-repeat ARQ scheme.

  1. A new approach to pre-processing digital image for wavelet-based watermark

    NASA Astrophysics Data System (ADS)

    Agreste, Santa; Andaloro, Guido

    2008-11-01

    The growth of the Internet has increased the phenomenon of digital piracy of multimedia objects such as software, images, video, audio, and text. It is therefore strategic to identify and develop stable, computationally inexpensive methods and numerical algorithms that can address these problems. We describe a digital watermarking algorithm for color image protection and authenticity: robust, non-blind, and wavelet-based. The use of the Discrete Wavelet Transform is motivated by its good time-frequency features and good match with Human Visual System directives. These two combined elements are important for building an invisible and robust watermark. Moreover, our algorithm can work with any image, thanks to a pre-processing step that includes resizing techniques which adapt the original image size to the wavelet transform. The watermark signal is calculated in correlation with the image features and statistical properties. In the detection step we apply a re-synchronization between the original and watermarked image according to the Neyman-Pearson statistical criterion. Experimentation on a large set of different images has shown the scheme to be resistant to geometric, filtering, and StirMark attacks with a low false-alarm rate.

  2. Wavelet Based Method for Congestive Heart Failure Recognition by Three Confirmation Functions

    PubMed Central

    Daqrouq, K.; Dobaie, A.

    2016-01-01

    An investigation of electrocardiogram (ECG) signals and arrhythmia characterization by wavelet energy is proposed. This study employs a wavelet-based feature extraction method for congestive heart failure (CHF), based on the percentage energy (PE) of terminal wavelet packet transform (WPT) subsignals. In addition, the average framing percentage energy (AFE) technique is proposed, termed WAFE. A new classification method is introduced by three confirmation functions. The confirmation methods are based on three concepts: percentage root mean square difference error (PRD), logarithmic difference signal ratio (LDSR), and correlation coefficient (CC). The proposed method proved to be a potentially effective discriminator in recognizing this clinical syndrome. ECG signals taken from the MIT-BIH arrhythmia dataset and other databases are utilized to analyze different arrhythmias and normal ECGs. Several known methods were studied for comparison. The best recognition rate was obtained with WAFE. The recognition accuracy achieved was 92.60%. The Receiver Operating Characteristic curve, a common tool for evaluating diagnostic accuracy, was illustrated and indicated that the tests are reliable. The performance of the presented system was investigated in an additive white Gaussian noise (AWGN) environment, where the recognition rate was 81.48% at 5 dB. PMID:26949412
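
    The percentage-energy feature described above can be sketched directly with a wavelet packet transform: decompose the signal, take the terminal-node energies, and normalize them to percentages. The wavelet, level and synthetic test signal below are illustrative assumptions, not the study's settings.

        import numpy as np
        import pywt

        def wpt_percentage_energy(signal, wavelet='db4', level=4):
            """Percentage energy (PE) of each terminal wavelet-packet node."""
            wp = pywt.WaveletPacket(data=signal, wavelet=wavelet,
                                    mode='symmetric', maxlevel=level)
            nodes = wp.get_level(level, order='natural')
            energies = np.array([np.sum(node.data ** 2) for node in nodes])
            return 100.0 * energies / energies.sum()

        rng = np.random.default_rng(3)
        ecg_like = np.sin(2 * np.pi * 1.1 * np.arange(2048) / 360.0) \
                   + 0.1 * rng.standard_normal(2048)
        pe = wpt_percentage_energy(ecg_like)
        print(len(pe), np.round(pe[:4], 2))      # 16 terminal nodes at level 4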

  3. A study on discrete wavelet-based noise removal from EEG signals.

    PubMed

    Asaduzzaman, K; Reaz, M B I; Mohd-Yasin, F; Sim, K S; Hussain, M S

    2010-01-01

    The electroencephalogram (EEG) serves as an extremely valuable tool for clinicians and researchers to study the activity of the brain in a non-invasive manner. It has long been used for the diagnosis of various central nervous system disorders like seizures, epilepsy, and brain damage, and for categorizing sleep stages in patients. The artifacts caused by various factors, such as the electrooculogram (EOG), eye blinks, and the electromyogram (EMG), increase the difficulty of analyzing EEG signals. The discrete wavelet transform has been applied in this research for removing noise from the EEG signal. The effectiveness of the noise removal is quantitatively measured using the Root Mean Square (RMS) Difference. This paper reports on the effectiveness of the wavelet transform applied to the EEG signal as a means of removing noise to retrieve important information related to both healthy and epileptic patients. Wavelet-based noise removal on the EEG signal of both healthy and epileptic subjects was performed using four discrete wavelet functions. With the appropriate choice of the wavelet function (WF), it is possible to remove noise effectively and analyze the EEG meaningfully. The results of this study show that the WF Daubechies 8 (db8) provides the best noise removal from the raw EEG signal of healthy patients, while the orthogonal Meyer WF does the same for epileptic patients. This algorithm is intended for FPGA implementation in portable biomedical equipment to detect different brain states in different circumstances.
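
    A minimal sketch of DWT-based denoising with the db8 wavelet singled out in the study; the soft-thresholding rule, universal threshold and synthetic signal are generic choices, not necessarily those used by the authors, and the RMS difference is computed between the noisy and denoised signals as a simple effectiveness measure.

        import numpy as np
        import pywt

        def wavelet_denoise(x, wavelet='db8', level=5):
            """Threshold the detail coefficients of a DWT and reconstruct."""
            coeffs = pywt.wavedec(x, wavelet, level=level)
            sigma = np.median(np.abs(coeffs[-1])) / 0.6745       # noise scale from finest details
            thr = sigma * np.sqrt(2.0 * np.log(len(x)))          # universal threshold
            coeffs[1:] = [pywt.threshold(c, thr, mode='soft') for c in coeffs[1:]]
            return pywt.waverec(coeffs, wavelet)[:len(x)]

        rng = np.random.default_rng(4)
        clean = np.sin(2 * np.pi * 10 * np.linspace(0, 1, 1024))
        noisy = clean + 0.4 * rng.standard_normal(1024)
        denoised = wavelet_denoise(noisy)
        rms_diff = np.sqrt(np.mean((noisy - denoised) ** 2))     # RMS difference measure
        print(round(rms_diff, 3))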

  4. A Wavelet-based Fast Discrimination of Transformer Magnetizing Inrush Current

    NASA Astrophysics Data System (ADS)

    Kitayama, Masashi

    Recently, customers who need electricity of higher quality have been installing co-generation facilities. They can avoid voltage sags and other distribution system related disturbances by supplying electricity to important loads from their own generators. As another example, FRIENDS, a highly reliable distribution system using semiconductor switches or storage devices based on power electronics technology, has been proposed. These examples illustrate that the demand for high reliability in distribution systems is increasing. In order to realize such systems, fast relaying algorithms are indispensable. The author proposes a new method of detecting magnetizing inrush current using the discrete wavelet transform (DWT). The DWT provides the ability to detect discontinuities in the current waveform. Inrush current occurs when the transformer core becomes saturated. The proposed method detects spikes in the DWT components caused by the discontinuity of the current waveform at both the beginning and the end of the inrush current. Wavelet thresholding, a wavelet-based statistical modeling technique, was applied to detect the DWT component spikes. The proposed method is verified using experimental data from a single-phase transformer and is shown to be effective.
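
    A small sketch of the core idea, detail coefficients of a DWT spike at waveform discontinuities such as the onset of core saturation, with a wavelet-thresholding style cut-off used to flag them; the waveform, sampling rate and threshold are illustrative stand-ins, not the paper's experimental data.

        import numpy as np
        import pywt

        fs = 10_000                                  # sampling rate (Hz), illustrative
        t = np.arange(0, 0.2, 1 / fs)
        current = np.sin(2 * np.pi * 50 * t)
        # Crude stand-in for a saturated-core inrush waveform starting at t = 80 ms.
        current[t >= 0.08] = np.maximum(np.sin(2 * np.pi * 50 * t[t >= 0.08]), 0.0) * 2.5

        # Single-level DWT: the detail band reacts to discontinuities in the waveform.
        cA, cD = pywt.dwt(current, 'db4')
        thr = np.median(np.abs(cD)) / 0.6745 * np.sqrt(2 * np.log(len(cD)))
        spikes = np.flatnonzero(np.abs(cD) > thr)
        print("first flagged instant (approx. s):",
              2 * spikes[0] / fs if spikes.size else None)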

  5. Wavelet-based multifractal analysis of dynamic infrared thermograms to assist in early breast cancer diagnosis

    PubMed Central

    Gerasimova, Evgeniya; Audit, Benjamin; Roux, Stephane G.; Khalil, André; Gileva, Olga; Argoul, Françoise; Naimark, Oleg; Arneodo, Alain

    2014-01-01

    Breast cancer is the most common type of cancer among women and despite recent advances in the medical field, there are still some inherent limitations in the currently used screening techniques. The radiological interpretation of screening X-ray mammograms often leads to over-diagnosis and, as a consequence, to unnecessary traumatic and painful biopsies. Here we propose a computer-aided multifractal analysis of dynamic infrared (IR) imaging as an efficient method for identifying women with risk of breast cancer. Using a wavelet-based multi-scale method to analyze the temporal fluctuations of breast skin temperature collected from a panel of patients with diagnosed breast cancer and some female volunteers with healthy breasts, we show that the multifractal complexity of temperature fluctuations observed in healthy breasts is lost in mammary glands with malignant tumor. Besides potential clinical impact, these results open new perspectives in the investigation of physiological changes that may precede anatomical alterations in breast cancer development. PMID:24860510

  6. Wavelet-Based ECG Steganography for Protecting Patient Confidential Information in Point-of-Care Systems.

    PubMed

    Ibaida, Ayman; Khalil, Ibrahim

    2013-12-01

    With a growing aging population, a significant portion of which suffers from cardiac diseases, remote ECG patient monitoring systems are expected to be widely used as point-of-care (PoC) applications in hospitals around the world. Therefore, huge amounts of ECG signals collected by body sensor networks from remote patients at home will be transmitted, along with other physiological readings such as blood pressure, temperature, and glucose level, and diagnosed by those remote patient monitoring systems. It is utterly important that patient confidentiality is protected while data are being transmitted over the public network as well as when they are stored in hospital servers used by remote monitoring systems. In this paper, a wavelet-based steganography technique is introduced which combines encryption and scrambling techniques to protect patients' confidential data. The proposed method allows the ECG signal to hide the corresponding patient's confidential data and other physiological information, thus guaranteeing the integration of the ECG with the rest of the data. To evaluate the effectiveness of the proposed technique on the ECG signal, two distortion measurement metrics have been used: the percentage residual difference (PRD) and the wavelet weighted PRD. It is found that the proposed technique provides high-security protection for patients' data with low (less than 1%) distortion, and the ECG data remain diagnosable after watermarking (i.e., hiding the patient's confidential data) as well as after the watermarks (i.e., hidden data) are removed from the watermarked data.

  7. Radiation dose reduction in digital radiography using wavelet-based image processing methods

    NASA Astrophysics Data System (ADS)

    Watanabe, Haruyuki; Tsai, Du-Yih; Lee, Yongbum; Matsuyama, Eri; Kojima, Katsuyuki

    2011-03-01

    In this paper, we investigate the effect of wavelet-transform-based image processing on radiation dose reduction in computed radiography (CR) by measuring various physical characteristics of the wavelet-transformed images. Moreover, we propose a wavelet-based method that offers the possibility of reducing radiation dose while maintaining a clinically acceptable image quality. The proposed method integrates the advantages of a previously proposed technique, i.e., a sigmoid-type transfer curve for wavelet coefficient weighting adjustment, as well as a wavelet soft-thresholding technique. The former can improve the contrast and spatial resolution of CR images; the latter improves their noise performance. In the investigation of physical characteristics, the modulation transfer function, noise power spectrum, and contrast-to-noise ratio of CR images processed by the proposed method and other methods were measured and compared. Furthermore, visual evaluation was performed using Scheffe's pair comparison method. Experimental results showed that the proposed method could improve overall image quality compared to the other methods. Our visual evaluation showed that an approximately 40% reduction in exposure dose might be achieved in hip joint radiography by using the proposed method.
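
    An illustrative sketch of the two elements combined in the proposed processing, a sigmoid-type weighting of detail coefficients followed by soft thresholding; the exact transfer curve of the cited technique is not reproduced here, and all gains and thresholds below are assumptions.

        import numpy as np
        import pywt

        def enhance(image, wavelet='db2', level=2, gain=2.0, knee=0.5, thr=0.02):
            """Weight detail coefficients with a sigmoid-type curve (boosting
            mid-to-large amplitudes), then soft-threshold to limit noise."""
            coeffs = pywt.wavedec2(image, wavelet, level=level)
            out = [coeffs[0]]
            for bands in coeffs[1:]:
                new_bands = []
                for c in bands:
                    scale = np.max(np.abs(c)) + 1e-12
                    w = 1.0 + (gain - 1.0) / (1.0 + np.exp(-10.0 * (np.abs(c) / scale - knee)))
                    new_bands.append(pywt.threshold(c * w, thr * scale, mode='soft'))
                out.append(tuple(new_bands))
            return pywt.waverec2(out, wavelet)

        rng = np.random.default_rng(5)
        cr_image = rng.random((128, 128))        # stand-in for a CR image
        print(enhance(cr_image).shape)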

  8. A wavelet-based damage detection algorithm based on bridge acceleration response to a vehicle

    NASA Astrophysics Data System (ADS)

    Hester, D.; González, A.

    2012-04-01

    Previous research based on theoretical simulations has shown the potential of the wavelet transform to detect damage in a beam by analysing the time-deflection response due to a constant moving load. However, its application to identify damage from the response of a bridge to a vehicle raises a number of questions. Firstly, it may be difficult to record the difference in the deflection signal between a healthy and a slightly damaged structure to the required level of accuracy and high scanning frequencies in the field. Secondly, the bridge is going to have a road profile and it will be loaded by a sprung vehicle and time-varying forces rather than a constant load. Therefore, an algorithm based on a plot of wavelet coefficients versus time to detect damage (a singularity in the plot) appears to be very sensitive to noise. This paper addresses these questions by: (a) using the acceleration signal, instead of the deflection signal, (b) employing a vehicle-bridge finite element interaction model, and (c) developing a novel wavelet-based approach using wavelet energy content at each bridge section, which proves to be more sensitive to damage than a wavelet coefficient line plot at a given scale as employed by others.

  9. Wavelet-Based Visible and Infrared Image Fusion: A Comparative Study

    PubMed Central

    Sappa, Angel D.; Carvajal, Juan A.; Aguilera, Cristhian A.; Oliveira, Miguel; Romero, Dennis; Vintimilla, Boris X.

    2016-01-01

    This paper evaluates different wavelet-based cross-spectral image fusion strategies adopted to merge visible and infrared images. The objective is to find the best setup independently of the evaluation metric used to measure the performance. Quantitative performance results are obtained with state-of-the-art approaches together with adaptations proposed in the current work. The options evaluated in the current work result from the combination of different setups in the wavelet image decomposition stage together with different fusion strategies for the final merging stage that generates the resulting representation. Most of the approaches evaluate results according to the application for which they are intended. Sometimes a human observer is selected to judge the quality of the obtained results. In the current work, quantitative values are considered in order to find correlations between setups and the performance of the obtained results; these correlations can be used to define a criterion for selecting the best fusion strategy for a given pair of cross-spectral images. The whole procedure is evaluated with a large set of correctly registered visible and infrared image pairs, including both Near InfraRed (NIR) and Long Wave InfraRed (LWIR). PMID:27294938

  10. Wavelet-based unsupervised learning method for electrocardiogram suppression in surface electromyograms.

    PubMed

    Niegowski, Maciej; Zivanovic, Miroslav

    2016-03-01

    We present a novel approach aimed at removing electrocardiogram (ECG) perturbation from single-channel surface electromyogram (EMG) recordings by means of unsupervised learning of wavelet-based intensity images. The general idea is to combine the suitability of certain wavelet decomposition bases which provide sparse electrocardiogram time-frequency representations, with the capacity of non-negative matrix factorization (NMF) for extracting patterns from images. In order to overcome convergence problems which often arise in NMF-related applications, we design a novel robust initialization strategy which ensures proper signal decomposition in a wide range of ECG contamination levels. Moreover, the method can be readily used because no a priori knowledge or parameter adjustment is needed. The proposed method was evaluated on real surface EMG signals against two state-of-the-art unsupervised learning algorithms and a singular spectrum analysis based method. The results, expressed in terms of high-to-low energy ratio, normalized median frequency, spectral power difference and normalized average rectified value, suggest that the proposed method enables better ECG-EMG separation quality than the reference methods. PMID:26774422
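
    A rough sketch of the overall pipeline, a non-negative wavelet intensity image factorized by NMF into two components; the CWT scales, the synthetic ECG/EMG mixture and the default nndsvd initialization (rather than the robust initialization proposed in the paper) are assumptions for illustration.

        import numpy as np
        import pywt
        from sklearn.decomposition import NMF

        rng = np.random.default_rng(6)
        fs = 1000
        t = np.arange(0, 5, 1 / fs)
        emg = 0.3 * rng.standard_normal(t.size)                  # broadband EMG stand-in
        ecg = (np.abs(((t * 1.2) % 1.0) - 0.5) < 0.02) * 2.0     # crude periodic "QRS" spikes
        mix = emg + ecg

        # Wavelet-based intensity image: |CWT| over a range of scales.
        scales = np.arange(1, 65)
        coefs, _ = pywt.cwt(mix, scales, 'morl')
        intensity = np.abs(coefs)

        # Two-component NMF: one component tends to capture the sparse, spiky ECG
        # pattern, the other the broadband EMG floor.
        model = NMF(n_components=2, init='nndsvd', max_iter=500)
        W = model.fit_transform(intensity)       # scale profiles
        H = model.components_                    # time activations
        print(W.shape, H.shape)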

  11. Wavelet Based Method for Congestive Heart Failure Recognition by Three Confirmation Functions.

    PubMed

    Daqrouq, K; Dobaie, A

    2016-01-01

    An investigation of electrocardiogram (ECG) signals and arrhythmia characterization by wavelet energy is proposed. This study employs a wavelet-based feature extraction method for congestive heart failure (CHF), based on the percentage energy (PE) of terminal wavelet packet transform (WPT) subsignals. In addition, the average framing percentage energy (AFE) technique is proposed, termed WAFE. A new classification method is introduced by three confirmation functions. The confirmation methods are based on three concepts: percentage root mean square difference error (PRD), logarithmic difference signal ratio (LDSR), and correlation coefficient (CC). The proposed method proved to be a potentially effective discriminator in recognizing this clinical syndrome. ECG signals taken from the MIT-BIH arrhythmia dataset and other databases are utilized to analyze different arrhythmias and normal ECGs. Several known methods were studied for comparison. The best recognition rate was obtained with WAFE. The recognition accuracy achieved was 92.60%. The Receiver Operating Characteristic curve, a common tool for evaluating diagnostic accuracy, was illustrated and indicated that the tests are reliable. The performance of the presented system was investigated in an additive white Gaussian noise (AWGN) environment, where the recognition rate was 81.48% at 5 dB. PMID:26949412

  12. Wavelet-Based Visible and Infrared Image Fusion: A Comparative Study.

    PubMed

    Sappa, Angel D; Carvajal, Juan A; Aguilera, Cristhian A; Oliveira, Miguel; Romero, Dennis; Vintimilla, Boris X

    2016-06-10

    This paper evaluates different wavelet-based cross-spectral image fusion strategies adopted to merge visible and infrared images. The objective is to find the best setup independently of the evaluation metric used to measure the performance. Quantitative performance results are obtained with state-of-the-art approaches together with adaptations proposed in the current work. The options evaluated in the current work result from the combination of different setups in the wavelet image decomposition stage together with different fusion strategies for the final merging stage that generates the resulting representation. Most of the approaches evaluate results according to the application for which they are intended. Sometimes a human observer is selected to judge the quality of the obtained results. In the current work, quantitative values are considered in order to find correlations between setups and the performance of the obtained results; these correlations can be used to define a criterion for selecting the best fusion strategy for a given pair of cross-spectral images. The whole procedure is evaluated with a large set of correctly registered visible and infrared image pairs, including both Near InfraRed (NIR) and Long Wave InfraRed (LWIR).

  13. Optimal sensor placement for time-domain identification using a wavelet-based genetic algorithm

    NASA Astrophysics Data System (ADS)

    Mahdavi, Seyed Hossein; Razak, Hashim Abdul

    2016-06-01

    This paper presents a wavelet-based genetic algorithm strategy for optimal sensor placement (OSP) effective for time-domain structural identification. Initially, the GA-based fitness evaluation is significantly improved by using adaptive wavelet functions. Later, a multi-species decimal GA coding system is modified to be suitable for an efficient search around the local optima. In this regard, a local mutation operation is introduced in addition to regeneration and reintroduction operators. It is concluded that different characteristics of the applied force influence the features of the structural responses, and therefore the accuracy of time-domain structural identification is directly affected. Thus, a reliable OSP strategy prior to time-domain identification will be achieved by those methods that minimize the distance between the simulated responses of the entire system and of the condensed system while considering the force effects. The numerical and experimental verification of the effectiveness of the proposed strategy demonstrates the considerably high computational performance of the proposed OSP strategy, in terms of computational cost and the accuracy of identification. It is deduced that the robustness of the proposed OSP algorithm lies in the precise and fast fitness evaluation at larger sampling rates, which results in the optimum evaluation of the GA-based exploration and exploitation phases towards the global optimum solution.

  14. Performance evaluation of wavelet-based face verification on a PDA recorded database

    NASA Astrophysics Data System (ADS)

    Sellahewa, Harin; Jassim, Sabah A.

    2006-05-01

    The rise of international terrorism and the rapid increase in fraud and identity theft have added urgency to the task of developing biometric-based person identification as a reliable alternative to conventional authentication methods. Human identification based on face images is a tough challenge in comparison to identification based on fingerprints or iris recognition. Yet, due to its unobtrusive nature, face recognition is the preferred method of identification for security-related applications. The success of such systems will depend on the support of massive infrastructures. Current mobile communication devices (3G smartphones) and PDAs are equipped with a camera which can capture both still images and streaming video clips, and a touch-sensitive display panel. Besides convenience, such devices provide an adequate secure infrastructure for sensitive and financial transactions, by protecting against fraud and repudiation while ensuring accountability. Biometric authentication systems for mobile devices would have obvious advantages in conflict scenarios when communication from beyond enemy lines is essential to save soldier and civilian lives. In areas of conflict or disaster, the luxury of fixed infrastructure is not available or is destroyed. In this paper, we present a wavelet-based face verification scheme that has been specifically designed and implemented on a currently available PDA. We report on its performance on the benchmark audio-visual BANCA database and on a newly developed PDA-recorded audio-visual database that includes indoor and outdoor recordings.

  15. Estimation of the density of Buccinum undatum (Gastropoda) off Douglas, Isle of Man

    NASA Astrophysics Data System (ADS)

    Kideys, A. E.

    1993-02-01

    The density of the common whelk ( Buccinum undatum L.) off Douglas, Isle of Man, was determined by four methods: (1) pot sampling, (2) diving, (3) mark-recapture experiment, and (4) underwater television. Although the values obtained by these methods were comparable, the last two methods yielded overestimations of Buccinum density. The results from diving survey and from pot sampling showed a good agreement, indicating that pot sampling can be used to determine the density of the common whelk, provided a good estimate of the pot attraction area is available. The range of whelk density between February 1989 and August 1990 resulting from pot sampling was between 0.08 and 0.38 individuals m-2. The temporal fluctuations of the whelk densities are discussed in detail.

  16. Estimation of density-dependent mortality of juvenile bivalves in the Wadden Sea.

    PubMed

    Andresen, Henrike; Strasser, Matthias; van der Meer, Jaap

    2014-01-01

    We investigated density-dependent mortality within the early months of life of the bivalves Macoma balthica (Baltic tellin) and Cerastoderma edule (common cockle) in the Wadden Sea. Mortality is thought to be density-dependent in juvenile bivalves, because there is no proportional relationship between the size of the reproductive adult stocks and the numbers of recruits for both species. It is not known however, when exactly density dependence in the pre-recruitment phase occurs and how prevalent it is. The magnitude of recruitment determines year class strength in bivalves. Thus, understanding pre-recruit mortality will improve the understanding of population dynamics. We analyzed count data from three years of temporal sampling during the first months after bivalve settlement at ten transects in the Sylt-Rømø-Bay in the northern German Wadden Sea. Analyses of density dependence are sensitive to bias through measurement error. Measurement error was estimated by bootstrapping, and residual deviances were adjusted by adding process error. With simulations the effect of these two types of error on the estimate of the density-dependent mortality coefficient was investigated. In three out of eight time intervals density dependence was detected for M. balthica, and in zero out of six time intervals for C. edule. Biological or environmental stochastic processes dominated over density dependence at the investigated scale.

  17. Estimation of density-dependent mortality of juvenile bivalves in the Wadden Sea.

    PubMed

    Andresen, Henrike; Strasser, Matthias; van der Meer, Jaap

    2014-01-01

    We investigated density-dependent mortality within the early months of life of the bivalves Macoma balthica (Baltic tellin) and Cerastoderma edule (common cockle) in the Wadden Sea. Mortality is thought to be density-dependent in juvenile bivalves, because there is no proportional relationship between the size of the reproductive adult stocks and the numbers of recruits for both species. It is not known however, when exactly density dependence in the pre-recruitment phase occurs and how prevalent it is. The magnitude of recruitment determines year class strength in bivalves. Thus, understanding pre-recruit mortality will improve the understanding of population dynamics. We analyzed count data from three years of temporal sampling during the first months after bivalve settlement at ten transects in the Sylt-Rømø-Bay in the northern German Wadden Sea. Analyses of density dependence are sensitive to bias through measurement error. Measurement error was estimated by bootstrapping, and residual deviances were adjusted by adding process error. With simulations the effect of these two types of error on the estimate of the density-dependent mortality coefficient was investigated. In three out of eight time intervals density dependence was detected for M. balthica, and in zero out of six time intervals for C. edule. Biological or environmental stochastic processes dominated over density dependence at the investigated scale. PMID:25105293

  18. Estimating food portions. Influence of unit number, meal type and energy density.

    PubMed

    Almiron-Roig, Eva; Solis-Trapala, Ivonne; Dodd, Jessica; Jebb, Susan A

    2013-12-01

    Estimating how much is appropriate to consume can be difficult, especially for foods presented in multiple units, those with ambiguous energy content and for snacks. This study tested the hypothesis that the number of units (single vs. multi-unit), meal type and food energy density disrupts accurate estimates of portion size. Thirty-two healthy weight men and women attended the laboratory on 3 separate occasions to assess the number of portions contained in 33 foods or beverages of varying energy density (1.7-26.8 kJ/g). Items included 12 multi-unit and 21 single unit foods; 13 were labelled "meal", 4 "drink" and 16 "snack". Departures in portion estimates from reference amounts were analysed with negative binomial regression. Overall participants tended to underestimate the number of portions displayed. Males showed greater errors in estimation than females (p=0.01). Single unit foods and those labelled as 'meal' or 'beverage' were estimated with greater error than multi-unit and 'snack' foods (p=0.02 and p<0.001 respectively). The number of portions of high energy density foods was overestimated while the number of portions of beverages and medium energy density foods were underestimated by 30-46%. In conclusion, participants tended to underestimate the reference portion size for a range of food and beverages, especially single unit foods and foods of low energy density and, unexpectedly, overestimated the reference portion of high energy density items. There is a need for better consumer education of appropriate portion sizes to aid adherence to a healthy diet.
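
    Departures of portion estimates from reference amounts were analysed with negative binomial regression; a minimal sketch of that kind of model fit on synthetic data follows (the predictors, coefficients and dispersion parameter below are invented for illustration, not taken from the study).

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(8)
        n = 200
        multi_unit = rng.integers(0, 2, n)                 # 1 = multi-unit food
        energy_density = rng.uniform(1.7, 26.8, n)         # kJ/g, range as in the study
        # Synthetic over-dispersed counts of estimated portions, for illustration only.
        mu = np.exp(0.8 + 0.3 * multi_unit - 0.02 * energy_density)
        portions = rng.negative_binomial(n=2, p=2 / (2 + mu))

        X = sm.add_constant(np.column_stack([multi_unit, energy_density]))
        model = sm.GLM(portions, X, family=sm.families.NegativeBinomial(alpha=0.5))
        print(model.fit().params.round(3))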

  19. Estimation of effective x-ray tissue attenuation differences for volumetric breast density measurement

    NASA Astrophysics Data System (ADS)

    Chen, Biao; Ruth, Chris; Jing, Zhenxue; Ren, Baorui; Smith, Andrew; Kshirsagar, Ashwini

    2014-03-01

    Breast density has been identified to be a risk factor of developing breast cancer and an indicator of lesion diagnostic obstruction due to masking effect. Volumetric density measurement evaluates fibro-glandular volume, breast volume, and breast volume density measures that have potential advantages over area density measurement in risk assessment. One class of volume density computing methods is based on the finding of the relative fibro-glandular tissue attenuation with regards to the reference fat tissue, and the estimation of the effective x-ray tissue attenuation differences between the fibro-glandular and fat tissue is key to volumetric breast density computing. We have modeled the effective attenuation difference as a function of actual x-ray skin entrance spectrum, breast thickness, fibro-glandular tissue thickness distribution, and detector efficiency. Compared to other approaches, our method has threefold advantages: (1) avoids the system calibration-based creation of effective attenuation differences which may introduce tedious calibrations for each imaging system and may not reflect the spectrum change and scatter induced overestimation or underestimation of breast density; (2) obtains the system specific separate and differential attenuation values of fibroglandular and fat for each mammographic image; and (3) further reduces the impact of breast thickness accuracy to volumetric breast density. A quantitative breast volume phantom with a set of equivalent fibro-glandular thicknesses has been used to evaluate the volume breast density measurement with the proposed method. The experimental results have shown that the method has significantly improved the accuracy of estimating breast density.

  20. Sensitivity Analysis of Wavelet-based Approach to Multiresolution-Characterization and Scaling of Two-Dimensional Heterogeneous Fields

    NASA Astrophysics Data System (ADS)

    Hyun, Y.; Ahn, Y.

    2012-12-01

    A wavelet-based scaling approach has recently been used to characterize and/or upscale hydro-geologic variables with a given Hurst coefficient, characteristic length scale, and orientation. A wavelet-based approach requires specifying a mother wavelet for the wavelet analysis. We perform a sensitivity analysis of wavelet transforms to several types of mother wavelets in characterizing and scaling two-dimensional random fractal fields which are theoretically generated for various Hurst coefficients, characteristic lengths, and orientations. We use Haar, Daubechies, Symlet, and Coiflet wavelets and compare the results. The numerical studies are carried out using the Matlab wavelet toolbox. Results show that, among the various wavelets, the Daubechies wavelet is the most suitable for scaling random fractal fields. In multiresolution characterization of heterogeneous fields, the characteristic lengths inferred from simulated fields vary with the mother wavelet. This study suggests that one should be careful in choosing a mother wavelet function for scaling studies by means of wavelet-based analyses in order to obtain reliable results, and that no reliable results can be expected when characterizing fractal fields on a multiresolution basis with arbitrary mother wavelets.

  1. Estimating abundance and density of Amur tigers along the Sino-Russian border.

    PubMed

    Xiao, Wenhong; Feng, Limin; Mou, Pu; Miquelle, Dale G; Hebblewhite, Mark; Goldberg, Joshua F; Robinson, Hugh S; Zhao, Xiaodan; Zhou, Bo; Wang, Tianming; Ge, Jianping

    2016-07-01

    As an apex predator the Amur tiger (Panthera tigris altaica) could play a pivotal role in maintaining the integrity of forest ecosystems in Northeast Asia. Due to habitat loss and harvest over the past century, tigers rapidly declined in China and are now restricted to the Russian Far East and bordering habitat in nearby China. To facilitate restoration of the tiger in its historical range, reliable estimates of population size are essential to assess effectiveness of conservation interventions. Here we used camera trap data collected in Hunchun National Nature Reserve from April to June 2013 and 2014 to estimate tiger density and abundance using both maximum likelihood and Bayesian spatially explicit capture-recapture (SECR) methods. A minimum of 8 individuals were detected in both sample periods and the documentation of marking behavior and reproduction suggests the presence of a resident population. Using Bayesian SECR modeling within the 11,400 km2 state space, density estimates were 0.33 and 0.40 individuals/100 km2 in 2013 and 2014, respectively, corresponding to an estimated abundance of 38 and 45 animals for this transboundary Sino-Russian population. In a maximum likelihood framework, we estimated densities of 0.30 and 0.24 individuals/100 km2 corresponding to abundances of 34 and 27, in 2013 and 2014, respectively. These density estimates are comparable to other published estimates for resident Amur tiger populations in the Russian Far East. This study reveals promising signs of tiger recovery in Northeast China, and demonstrates the importance of connectivity between the Russian and Chinese populations for recovering tigers in Northeast China.

  2. Estimating abundance and density of Amur tigers along the Sino-Russian border.

    PubMed

    Xiao, Wenhong; Feng, Limin; Mou, Pu; Miquelle, Dale G; Hebblewhite, Mark; Goldberg, Joshua F; Robinson, Hugh S; Zhao, Xiaodan; Zhou, Bo; Wang, Tianming; Ge, Jianping

    2016-07-01

    As an apex predator the Amur tiger (Panthera tigris altaica) could play a pivotal role in maintaining the integrity of forest ecosystems in Northeast Asia. Due to habitat loss and harvest over the past century, tigers rapidly declined in China and are now restricted to the Russian Far East and bordering habitat in nearby China. To facilitate restoration of the tiger in its historical range, reliable estimates of population size are essential to assess effectiveness of conservation interventions. Here we used camera trap data collected in Hunchun National Nature Reserve from April to June 2013 and 2014 to estimate tiger density and abundance using both maximum likelihood and Bayesian spatially explicit capture-recapture (SECR) methods. A minimum of 8 individuals were detected in both sample periods and the documentation of marking behavior and reproduction suggests the presence of a resident population. Using Bayesian SECR modeling within the 11,400 km2 state space, density estimates were 0.33 and 0.40 individuals/100 km2 in 2013 and 2014, respectively, corresponding to an estimated abundance of 38 and 45 animals for this transboundary Sino-Russian population. In a maximum likelihood framework, we estimated densities of 0.30 and 0.24 individuals/100 km2 corresponding to abundances of 34 and 27, in 2013 and 2014, respectively. These density estimates are comparable to other published estimates for resident Amur tiger populations in the Russian Far East. This study reveals promising signs of tiger recovery in Northeast China, and demonstrates the importance of connectivity between the Russian and Chinese populations for recovering tigers in Northeast China. PMID:27136188

  3. Estimation of dispersion parameters from photographic density measurements on smoke puffs

    NASA Astrophysics Data System (ADS)

    Yassky, D.

    An extension is proposed of methods that use "optical boundaries" of smoke-plumes in order to estimate atmospheric dispersion parameters. Use is made here of some properties of photographic optics and concentration distributions of light absorbing puffs having no multiple scattering. An array of relative photometric densities, measured on a single photograph of a puff, is shown to be of use in numerical estimation of a puff's dispersive parameters. The proposed method's performance is evaluated by means of computer simulation which includes estimates of the influence of photogrammetric and photometric errors. Future experimental validation of the proposed method may introduce fast and inexpensive ways of obtaining extensive atmospheric dispersion data bases.

  4. Estimation of electrical conductivity distribution within the human head from magnetic flux density measurement.

    PubMed

    Gao, Nuo; Zhu, S A; He, Bin

    2005-06-01

    We have developed a new algorithm for magnetic resonance electrical impedance tomography (MREIT), which uses only one component of the magnetic flux density to reconstruct the electrical conductivity distribution within the body. The radial basis function (RBF) network and simplex method are used in the present approach to estimate the conductivity distribution by minimizing the errors between the 'measured' and model-predicted magnetic flux densities. Computer simulations were conducted in a realistic-geometry head model to test the feasibility of the proposed approach. Single-variable and three-variable simulations were performed to estimate the brain-skull conductivity ratio and the conductivity values of the brain, skull and scalp layers. When SNR = 15 for magnetic flux density measurements with the target skull-to-brain conductivity ratio being 1/15, the relative error (RE) between the target and estimated conductivity was 0.0737 +/- 0.0746 in the single-variable simulations. In the three-variable simulations, the RE was 0.1676 +/- 0.0317. Effects of electrode position uncertainty were also assessed by computer simulations. The present promising results suggest the feasibility of estimating important conductivity values within the head from noninvasive magnetic flux density measurements.
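
    The estimation strategy, minimizing the misfit between measured and model-predicted magnetic flux densities with a simplex search, can be sketched as follows; the forward model here is a hypothetical stand-in, not the MREIT physics, and the conductivity values are only typical literature-style numbers used for illustration.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(7)

        def forward_model(cond, positions):
            """Hypothetical stand-in for the model-predicted magnetic flux density
            at measurement positions, given (brain, skull, scalp) conductivities."""
            brain, skull, scalp = cond
            return (brain * np.sin(positions) + skull * np.cos(2 * positions)
                    + scalp * positions) * 1e-9

        positions = np.linspace(0.0, 1.0, 50)
        true_cond = np.array([0.33, 0.022, 0.43])      # S/m, illustrative values
        measured = forward_model(true_cond, positions)
        measured += 0.02 * np.std(measured) * rng.standard_normal(measured.size)

        # Simplex (Nelder-Mead) search minimising the misfit between measured and
        # model-predicted flux densities, mirroring the general strategy above.
        objective = lambda c: np.sum((forward_model(c, positions) - measured) ** 2)
        fit = minimize(objective, x0=np.array([0.2, 0.05, 0.3]), method='Nelder-Mead')
        print(np.round(fit.x, 3))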

  5. Estimating Densities of the Pest Halotydeus destructor (Acari: Penthaleidae) in Canola.

    PubMed

    Arthur, Aston L; Hoffmann, Ary A; Umina, Paul A

    2014-12-01

    Development of sampling techniques to effectively estimate invertebrate densities in the field is essential for effective implementation of pest control programs, particularly when making informed spray decisions around economic thresholds. In this article, we investigated the influence of several factors to devise a sampling strategy to estimate Halotydeus destructor Tucker densities in a canola paddock. Direct visual counts were found to be the most suitable approach for estimating mite numbers, with higher densities detected than the vacuum sampling method. Visual assessments were impacted by the operator, sampling date, and time of day. However, with the exception of operator (more experienced operator detected higher numbers of mites), no obvious trends were detected. No patterns were found between H. destructor numbers and ambient temperature, relative humidity, wind speed, cloud cover, or soil surface conditions, indicating that these factors may not be of high importance when sampling mites during autumn and winter months. We show further support for an aggregated distribution of H. destructor within paddocks, indicating that a stratified random sampling program is likely to be most appropriate. Together, these findings provide important guidelines for Australian growers around the ability to effectively and accurately estimate H. destructor densities. PMID:26470087

  6. Estimation of nighttime dip-equatorial E-region current density using measurements and models

    NASA Astrophysics Data System (ADS)

    Pandey, Kuldeep; Sekar, R.; Anandarao, B. G.; Gupta, S. P.; Chakrabarty, D.

    2016-08-01

    The existence of the possible ionospheric current during nighttime over low-equatorial latitudes is one of the unresolved issues in ionospheric physics and geomagnetism. A detailed investigation is carried out to estimate the same over Indian longitudes using in situ measurements from Thumba (8.5 ° N, 76.9 ° E), empirical plasma drift model (Fejer et al., 2008) and equatorial electrojet model developed by Anandarao (1976). This investigation reveals that the nighttime E-region current densities vary from ∼0.3 to ∼0.7 A/km2 during pre-midnight to early morning hours on geomagnetically quiet conditions. The nighttime current densities over the dip equator are estimated using three different methods (discussed in methodology section) and are found to be consistent with one another within the uncertainty limits. Altitude structures in the E-region current densities are also noticed which are shown to be associated with altitudinal structures in the electron densities. The horizontal component of the magnetic field induced by these nighttime ionospheric currents is estimated to vary between ∼2 and ∼6 nT during geomagnetically quiet periods. This investigation confirms the existence of nighttime ionospheric current and opens up a possibility of estimating base line value for geomagnetic field fluctuations as observed by ground-based magnetometer.

  7. On the use of the noncentral chi-square density function for the distribution of helicopter spectral estimates

    NASA Technical Reports Server (NTRS)

    Garber, Donald P.

    1993-01-01

    A probability density function for the variability of ensemble averaged spectral estimates from helicopter acoustic signals in Gaussian background noise was evaluated. Numerical methods for calculating the density function and for determining confidence limits were explored. Density functions were predicted for both synthesized and experimental data and compared with observed spectral estimate variability.
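
    The density in question is available directly in common statistics libraries; a minimal sketch evaluating it and deriving two-sided confidence limits follows (the degrees of freedom and noncentrality below are illustrative, e.g. 2K degrees of freedom for K averaged periodogram bins with Gaussian background noise).

        import numpy as np
        from scipy.stats import ncx2

        # Noncentral chi-square density for the variability of ensemble-averaged
        # spectral estimates; values are illustrative only.
        df, nc = 2 * 16, 25.0                  # 16 averaged bins, assumed tonal power
        x = np.linspace(0, 120, 5)
        print(np.round(ncx2.pdf(x, df, nc), 4))

        # Two-sided 95% limits for an observed spectral estimate under this model.
        print(np.round(ncx2.ppf([0.025, 0.975], df, nc), 2))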

  8. Unbiased Estimate of Dark Energy Density from Type Ia Supernova Data

    NASA Astrophysics Data System (ADS)

    Wang, Yun; Lovelace, Geoffrey

    2001-12-01

    Type Ia supernovae (SNe Ia) are currently the best probes of the dark energy in the universe. To constrain the nature of dark energy, we assume a flat universe and that the weak energy condition is satisfied, and we allow the density of dark energy, ρX(z), to be an arbitrary function of redshift. Using simulated data from a space-based SN pencil-beam survey, we find that by optimizing the number of parameters used to parameterize the dimensionless dark energy density, f(z)=ρX(z)/ρX(z=0), we can obtain an unbiased estimate of both f(z) and the fractional matter density of the universe, Ωm. A plausible SN pencil-beam survey (with a square degree field of view and for an observational duration of 1 yr) can yield about 2000 SNe Ia with 0<=z<=2. Such a survey in space would yield SN peak luminosities with a combined intrinsic and observational dispersion of σ(mint)=0.16 mag. We find that for such an idealized survey, Ωm can be measured to 10% accuracy, and the dark energy density can be estimated to ~20% to z~1.5, and ~20%-40% to z~2, depending on the time dependence of the true dark energy density. Dark energy densities that vary more slowly can be more accurately measured. For the anticipated Supernova/Acceleration Probe (SNAP) mission, Ωm can be measured to 14% accuracy, and the dark energy density can be estimated to ~20% to z~1.2. Our results suggest that SNAP may gain much sensitivity to the time dependence of the dark energy density and Ωm by devoting more observational time to the central pencil-beam fields to obtain more SNe Ia at z>1.2. We use both a maximum likelihood analysis and a Monte Carlo analysis (when appropriate) to determine the errors of estimated parameters. We find that the Monte Carlo analysis gives a more accurate estimate of the dark energy density than the maximum likelihood analysis.

  9. Bioenergetics estimate of the effects of stocking density on hatchery production of smallmouth bass fingerlings

    USGS Publications Warehouse

    Robel, G.L.; Fisher, W.L.

    1999-01-01

    Production of and consumption by hatchery-reared fingerling (age-0) smallmouth bass Micropterus dolomieu at various simulated stocking densities were estimated with a bioenergetics model. Fish growth rates and pond water temperatures during the 1996 growing season at two hatcheries in Oklahoma were used in the model. Fish growth and simulated consumption and production differed greatly between the two hatcheries, probably because of differences in pond fertilization and mortality rates. Our results suggest that appropriate stocking density depends largely on prey availability as affected by pond fertilization and on fingerling mortality rates. The bioenergetics model provided a useful tool for estimating production at various stocking densities. However, verification of physiological parameters for age-0 fish of hatchery-reared species is needed.

  10. The large-scale correlations of multicell densities and profiles: implications for cosmic variance estimates

    NASA Astrophysics Data System (ADS)

    Codis, Sandrine; Bernardeau, Francis; Pichon, Christophe

    2016-08-01

    In order to quantify the error budget in the measured probability distribution functions of cell densities, the two-point statistics of cosmic densities in concentric spheres is investigated. Bias functions are introduced as the ratio of their two-point correlation function to the two-point correlation of the underlying dark matter distribution. They describe how cell densities are spatially correlated. They are computed here via the so-called large deviation principle in the quasi-linear regime. Their large-separation limit is presented and successfully compared to simulations for density and density slopes: this regime is shown to be rapidly reached allowing to get sub-percent precision for a wide range of densities and variances. The corresponding asymptotic limit provides an estimate of the cosmic variance of standard concentric cell statistics applied to finite surveys. More generally, no assumption on the separation is required for some specific moments of the two-point statistics, for instance when predicting the generating function of cumulants containing any powers of concentric densities in one location and one power of density at some arbitrary distance from the rest. This exact `one external leg' cumulant generating function is used in particular to probe the rate of convergence of the large-separation approximation.

  11. The effect of density estimation on the conservativeness in Smoothed Particle Hydrodynamics

    NASA Astrophysics Data System (ADS)

    Suresh, Pranav; Kumar, S. S. Prasanna; Patnaik, B. S. V.

    2015-11-01

    Smoothed Particle Hydrodynamics (SPH) is a popular mesh-free method for solving a wide range of problems that involve interfaces. In SPH, the Lagrangian nature of the method enables mass conservation to be satisfied naturally. However, satisfying the conservation of momentum and energy is formulation dependent. One major aspect of ensuring conservativeness comes from the density estimation. There are two distinct types of density estimation approaches, namely the continuity density approach and the summation density approach. Both approaches are popular with the single- and multi-phase flow communities. In the present study, we assess the role of density evaluation on conservativeness, using several representative numerical examples. In particular, we have simulated the Rayleigh-Taylor instability problem, the non-Boussinesq lock exchange problem, bubble rise in a water column, etc. Although both methods have similar conservation properties for shorter simulation time scales, we observe that for longer time scales the summation density approach is better. For free surface detection and normal vector computations, efficient computational procedures have been devised.
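
    A minimal sketch of the summation density approach discussed above, rho_i = sum_j m_j W(|x_i - x_j|, h), using the standard cubic spline kernel in one dimension; the particle spacing, smoothing length and target density are illustrative.

        import numpy as np

        def cubic_spline_1d(r, h):
            """Cubic spline smoothing kernel in 1-D (normalisation 2/(3h))."""
            q = np.abs(r) / h
            w = np.zeros_like(q)
            m1 = q < 1.0
            m2 = (q >= 1.0) & (q < 2.0)
            w[m1] = 1.0 - 1.5 * q[m1]**2 + 0.75 * q[m1]**3
            w[m2] = 0.25 * (2.0 - q[m2])**3
            return (2.0 / (3.0 * h)) * w

        # Summation density: rho_i = sum_j m_j W(|x_i - x_j|, h).
        rho0, dx, h = 1000.0, 0.01, 0.013
        x = np.arange(0.0, 1.0, dx)           # uniformly spaced particles
        m = rho0 * dx                         # particle mass for the target density
        rho = np.array([np.sum(m * cubic_spline_1d(x - xi, h)) for xi in x])
        print(round(rho[len(x) // 2], 2))     # interior particles recover ~rho0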

  12. Population density estimated from locations of individuals on a passive detector array

    USGS Publications Warehouse

    Efford, Murray G.; Dawson, Deanna K.; Borchers, David L.

    2009-01-01

    The density of a closed population of animals occupying stable home ranges may be estimated from detections of individuals on an array of detectors, using newly developed methods for spatially explicit capture–recapture. Likelihood-based methods provide estimates for data from multi-catch traps or from devices that record presence without restricting animal movement ("proximity" detectors such as camera traps and hair snags). As originally proposed, these methods require multiple sampling intervals. We show that equally precise and unbiased estimates may be obtained from a single sampling interval, using only the spatial pattern of detections. This considerably extends the range of possible applications, and we illustrate the potential by estimating density from simulated detections of bird vocalizations on a microphone array. Acoustic detection can be defined as occurring when received signal strength exceeds a threshold. We suggest detection models for binary acoustic data, and for continuous data comprising measurements of all signals above the threshold. While binary data are often sufficient for density estimation, modeling signal strength improves precision when the microphone array is small.
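
    As a worked illustration of defining acoustic detection as received signal strength exceeding a threshold, the sketch below computes a binary detection probability under an assumed linear strength-distance decay with Gaussian noise; all parameter values are illustrative, not those of the paper.

        import numpy as np
        from scipy.stats import norm

        def detection_prob(d, beta0=80.0, beta1=0.2, sigma=5.0, threshold=60.0):
            """P(detection at distance d) = P(beta0 - beta1*d + noise > threshold)."""
            expected_strength = beta0 - beta1 * d          # assumed decay model (dB)
            return 1.0 - norm.cdf((threshold - expected_strength) / sigma)

        print(detection_prob(np.array([0.0, 50.0, 100.0, 150.0, 200.0])).round(3))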

  13. Density estimation of small-mammal populations using a trapping web and distance sampling methods

    USGS Publications Warehouse

    Anderson, David R.; Burnham, Kenneth P.; White, Gary C.; Otis, David L.

    1983-01-01

    Distance sampling methodology is adapted to enable animal density (number per unit of area) to be estimated from capture-recapture and removal data. A trapping web design provides the link between capture data and distance sampling theory. The estimator of density is D = M_{t+1} f(0), where M_{t+1} is the number of individuals captured and f(0) is computed from the M_{t+1} distances from the web center to the traps in which those individuals were first captured. It is possible to check qualitatively the critical assumption on which the web design and the estimator are based. This is a conceptual paper outlining a new methodology, not a definitive investigation of the best specific way to implement this method. Several alternative sampling and analysis methods are possible within the general framework of distance sampling theory; a few alternatives are discussed and an example is given.
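
    Taking the estimator quoted above at face value, the sketch below computes D = M_{t+1} f(0) from simulated first-capture distances; the half-normal form used to obtain f(0) and the simulated data are purely illustrative assumptions, not the analysis recommended in the paper.

        import numpy as np

        rng = np.random.default_rng(1)
        distances = np.abs(rng.normal(0.0, 25.0, size=40))   # distances (m) from web center to first-capture traps (synthetic)
        M_t1 = len(distances)                                # M_{t+1}: number of individuals captured

        # Half-normal fit (illustrative choice): sigma^2 = mean squared distance; density at zero = sqrt(2/(pi*sigma^2))
        sigma2 = np.mean(distances**2)
        f0 = np.sqrt(2.0 / (np.pi * sigma2))

        print(f"M_t+1 = {M_t1}, f(0) = {f0:.4f}, D = {M_t1 * f0:.3f}")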

  14. Estimations of population density for selected periods between the Neolithic and AD 1800.

    PubMed

    Zimmermann, Andreas; Hilpert, Johanna; Wendt, Karl Peter

    2009-04-01

    We describe a combination of methods applied to obtain reliable estimations of population density using archaeological data. The combination is based on a hierarchical model of scale levels. The necessary data and methods used to obtain the results are chosen so as to define transfer functions from one scale level to another. We apply our method to data sets from western Germany that cover early Neolithic, Iron Age, Roman, and Merovingian times as well as historical data from AD 1800. Error margins and natural and historical variability are discussed. Our results for nonstate societies are always lower than conventional estimations compiled from the literature, and we discuss the reasons for this finding. At the end, we compare the calculated local and global population densities with other estimations from different parts of the world.

  15. Density of Jatropha curcas Seed Oil and its Methyl Esters: Measurement and Estimations

    NASA Astrophysics Data System (ADS)

    Veny, Harumi; Baroutian, Saeid; Aroua, Mohamed Kheireddine; Hasan, Masitah; Raman, Abdul Aziz; Sulaiman, Nik Meriam Nik

    2009-04-01

    Density data as a function of temperature have been measured for Jatropha curcas seed oil, as well as biodiesel jatropha methyl esters, at temperatures from above their melting points to 90 °C. The data obtained were used to validate the method proposed by Spencer and Danner using a modified Rackett equation. The experimental densities and the values estimated with the modified Rackett equation were almost identical, with average absolute percent deviations of less than 0.03% for the jatropha oil and 0.04% for the jatropha methyl esters. The Janarthanan empirical equation was also employed to predict jatropha biodiesel densities. This equation performed equally well, with average absolute percent deviations within 0.05%. Two simple linear equations for the densities of jatropha oil and its methyl esters are also proposed in this study.
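
    For reference, the modified Rackett (Spencer-Danner) correlation validated above has the form V_sat = (R*Tc/Pc) * Z_RA^(1 + (1 - T/Tc)^(2/7)), with density = M/V_sat. The sketch below evaluates it with placeholder critical constants for a generic methyl-ester pseudo-component, not the parameters fitted in the paper.

        def rackett_density(T, Tc, Pc, Z_RA, M):
            """Saturated liquid density (kg/m^3) from the modified Rackett equation."""
            R = 8.314                                                               # J/(mol K)
            V_sat = (R * Tc / Pc) * Z_RA ** (1.0 + (1.0 - T / Tc) ** (2.0 / 7.0))   # m^3/mol
            return M / V_sat

        # Placeholder critical constants and molar mass, for illustration only
        print(rackett_density(T=313.15, Tc=766.0, Pc=1.22e6, Z_RA=0.235, M=0.2925))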

  16. Non-Gaussian probabilistic MEG source localisation based on kernel density estimation.

    PubMed

    Mohseni, Hamid R; Kringelbach, Morten L; Woolrich, Mark W; Baker, Adam; Aziz, Tipu Z; Probert-Smith, Penny

    2014-02-15

    There is strong evidence to suggest that data recorded from magnetoencephalography (MEG) follows a non-Gaussian distribution. However, existing standard methods for source localisation model the data using only second order statistics, and therefore use the inherent assumption of a Gaussian distribution. In this paper, we present a new general method for non-Gaussian source estimation of stationary signals for localising brain activity from MEG data. By providing a Bayesian formulation for MEG source localisation, we show that the source probability density function (pdf), which is not necessarily Gaussian, can be estimated using multivariate kernel density estimators. In the case of Gaussian data, the solution of the method is equivalent to that of widely used linearly constrained minimum variance (LCMV) beamformer. The method is also extended to handle data with highly correlated sources using the marginal distribution of the estimated joint distribution, which, in the case of Gaussian measurements, corresponds to the null-beamformer. The proposed non-Gaussian source localisation approach is shown to give better spatial estimates than the LCMV beamformer, both in simulations incorporating non-Gaussian signals, and in real MEG measurements of auditory and visual evoked responses, where the highly correlated sources are known to be difficult to estimate.
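
    The core ingredient described above, estimating a possibly non-Gaussian source probability density with a multivariate kernel density estimator, can be sketched as below; scipy's Gaussian-kernel KDE and the synthetic bimodal samples stand in for the full beamforming formulation of the paper.

        import numpy as np
        from scipy.stats import gaussian_kde

        rng = np.random.default_rng(0)
        # Synthetic, clearly non-Gaussian bivariate "source amplitudes": a two-cluster mixture
        samples = np.vstack([rng.normal([-1.0, 0.0], 0.3, size=(500, 2)),
                             rng.normal([+1.0, 0.0], 0.3, size=(500, 2))]).T   # shape (2, N)

        kde = gaussian_kde(samples)                        # multivariate Gaussian-kernel KDE
        test_points = np.array([[-1.0, 0.0, 1.0],
                                [ 0.0, 0.0, 0.0]])
        print(kde(test_points))                            # estimated pdf at three locations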

  17. Estimation of energy density of Li-S batteries with liquid and solid electrolytes

    NASA Astrophysics Data System (ADS)

    Li, Chunmei; Zhang, Heng; Otaegui, Laida; Singh, Gurpreet; Armand, Michel; Rodriguez-Martinez, Lide M.

    2016-09-01

    With the exponential growth of technology in mobile devices and the rapid expansion of electric vehicles into the market, it appears that the energy density of the state-of-the-art Li-ion batteries (LIBs) cannot satisfy the practical requirements. Sulfur has been one of the best cathode material choices due to its high charge storage (1675 mAh g-1), natural abundance and easy accessibility. In this paper, calculations are performed for different cell design parameters such as the active material loading, the amount/thickness of electrolyte, the sulfur utilization, etc. to predict the energy density of Li-S cells based on liquid, polymeric and ceramic electrolytes. The results demonstrate that, with current technology, Li-S batteries are most likely to be competitive with LIBs in gravimetric energy density, but not in volumetric energy density. Furthermore, the cells with polymer and thin ceramic electrolytes show promising potential in terms of high gravimetric energy density, especially the cells with the polymer electrolyte. This estimation study of Li-S energy density can serve as useful guidance for controlling the key design parameters in order to obtain the desired energy density at the cell level.
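
    As an illustration of the cell-level bookkeeping behind such estimates, the sketch below converts sulfur loading, utilization and mean discharge voltage into a gravimetric energy density using sulfur's 1675 mAh/g theoretical capacity; every other number is a placeholder assumption, not a value from the paper.

        def ls_gravimetric_energy(sulfur_mass_g, utilization, mean_voltage_v, other_mass_g):
            """Cell-level gravimetric energy density (Wh/kg) of a Li-S cell."""
            capacity_ah = 1.675 * sulfur_mass_g * utilization         # delivered capacity (Ah)
            energy_wh = capacity_ah * mean_voltage_v
            total_mass_kg = (sulfur_mass_g + other_mass_g) / 1000.0   # Li, electrolyte, foils, ... (placeholder)
            return energy_wh / total_mass_kg

        # Example: 1 g sulfur, 70% utilization, 2.1 V mean voltage, 4 g of all other components (illustrative)
        print(f"{ls_gravimetric_energy(1.0, 0.70, 2.1, 4.0):.0f} Wh/kg")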

  18. Electron density estimations derived from spacecraft potential measurements on Cluster in tenuous plasma regions

    NASA Astrophysics Data System (ADS)

    Pedersen, A.; Lybekk, B.; André, M.; Eriksson, A.; Masson, A.; Mozer, F. S.; Lindqvist, P.-A.; DéCréAu, P. M. E.; Dandouras, I.; Sauvaud, J.-A.; Fazakerley, A.; Taylor, M.; Paschmann, G.; Svenes, K. R.; Torkar, K.; Whipple, E.

    2008-07-01

    Spacecraft potential measurements by the EFW electric field experiment on the Cluster satellites can be used to obtain plasma density estimates in regions barely accessible to other types of plasma experiments. Direct calibrations of the plasma density as a function of the measured potential difference between the spacecraft and the probes can be carried out in the solar wind, the magnetosheath, and the plasmasphere by the use of CIS ion density and WHISPER electron density measurements. The spacecraft photoelectron characteristic (photoelectrons escaping to the plasma in current balance with collected ambient electrons) can be calculated from knowledge of the electron current to the spacecraft based on plasma density and electron temperature data from the above mentioned experiments, and can be extended to more positive spacecraft potentials by the CIS ion and the PEACE electron experiments in the plasma sheet. This characteristic enables determination of the electron density as a function of spacecraft potential over the polar caps and in the lobes of the magnetosphere, regions where other experiments on Cluster have intrinsic limitations. Data from 2001 to 2006 reveal that the photoelectron characteristics of the Cluster spacecraft as well as the electric field probes vary with the solar cycle and solar activity. The consequences for plasma density measurements are addressed. Typical examples are presented to demonstrate the use of this technique in a polar cap/lobe plasma.

  19. A method to estimate the neutral atmospheric density near the ionospheric main peak of Mars

    NASA Astrophysics Data System (ADS)

    Zou, Hong; Ye, Yu Guang; Wang, Jin Song; Nielsen, Erling; Cui, Jun; Wang, Xiao Dong

    2016-04-01

    A method to estimate the neutral atmospheric density near the ionospheric main peak of Mars is introduced in this study. The neutral densities at 130 km can be derived from the ionospheric and atmospheric measurements of the Radio Science experiment on board Mars Global Surveyor (MGS). The derived neutral densities cover a large longitude range at northern high latitudes from summer to late autumn during 3 Martian years, which fills a gap in previous observations of the upper atmosphere of Mars. The simulations of the Laboratoire de Météorologie Dynamique Mars global circulation model can be corrected with a simple linear equation to fit the neutral densities derived from the first MGS/RS (Radio Science) data set (EDS1). The corrected simulations, with the same correction parameters as for EDS1, match the derived neutral densities from two other MGS/RS data sets (EDS2 and EDS3) very well. The derived neutral density from EDS3 shows a dust storm effect, which is in accord with the Mars Express (MEX) Spectroscopy for Investigation of Characteristics of the Atmosphere of Mars measurement. The neutral density derived from the MGS/RS measurements can be used to validate Martian atmospheric models. The method presented in this study can be applied to other radio occultation measurements, such as the results of the Radio Science experiment on board MEX.

  20. Wavelet-based identification of DNA focal genomic aberrations from single nucleotide polymorphism arrays

    PubMed Central

    2011-01-01

    Background Copy number aberrations (CNAs) are an important molecular signature in cancer initiation, development, and progression. However, these aberrations span a wide range of chromosomes, making it hard to distinguish cancer-related genes from other genes that are not closely related to cancer but are located in broadly aberrant regions. With the current availability of high-resolution data sets such as single nucleotide polymorphism (SNP) microarrays, it has become an important issue to develop a computational method to detect driving genes related to cancer development located in the focal regions of CNAs. Results In this study, we introduce a novel method referred to as wavelet-based identification of focal genomic aberrations (WIFA). The use of wavelet analysis, because it is a multi-resolution approach, makes it possible to effectively identify focal genomic aberrations in broadly aberrant regions. The proposed method integrates multiple cancer samples so that it enables the detection of aberrations that are consistent across multiple samples. We then apply this method to glioblastoma multiforme and lung cancer data sets from the SNP microarray platform. Through this process, we confirm the ability to detect previously known cancer-related genes from both cancer types with high accuracy. The application of this approach to a lung cancer data set also identifies focal amplification regions that contain known oncogenes, SMAD7 (chr18q21.1) and FGF10 (chr5p12), which are not reported by GISTIC, a recent CNA detection algorithm. Conclusions Our results suggest that WIFA can be used to reveal cancer-related genes in various cancer data sets. PMID:21569311

  1. An Undecimated Wavelet-based Method for Cochlear Implant Speech Processing

    PubMed Central

    Hajiaghababa, Fatemeh; Kermani, Saeed; Marateb, Hamid R.

    2014-01-01

    A cochlear implant is an implanted electronic device used to provide a sensation of hearing to a person who is hard of hearing. The cochlear implant is often referred to as a bionic ear. This paper presents an undecimated wavelet-based speech coding strategy for cochlear implants, which gives a novel speech processing strategy. The undecimated wavelet packet transform (UWPT) is computed like the wavelet packet transform except that it does not down-sample the output at each level. The speech data used for the current study consist of 30 consonants, sampled at 16 kbps. The performance of our proposed UWPT method was compared to that of an infinite impulse response (IIR) filter in terms of mean opinion score (MOS), short-time objective intelligibility (STOI) measure and segmental signal-to-noise ratio (SNR). The undecimated wavelet gave better segmental SNR for about 96% of the input speech data. The MOS of the proposed method was twice that of the IIR filter-bank. The statistical analysis revealed that the UWT-based N-of-M strategy significantly improved the MOS, STOI and segmental SNR (P < 0.001) compared with those obtained with the IIR filter-bank based strategies. The advantage of the UWPT is that it is shift-invariant, which gives a dense approximation to the continuous wavelet transform. Thus, the information loss is minimal, and that is why the UWPT performance was better than that of traditional filter-bank strategies in speech recognition tests. Results showed that the UWPT could be a promising method for speech coding in cochlear implants, although its computational complexity is higher than that of traditional filter-banks. PMID:25426428
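
    A minimal sketch of the shift-invariant decomposition underlying such a strategy, assuming PyWavelets' stationary wavelet transform as a stand-in for the undecimated wavelet packet transform used in the paper, followed by an N-of-M selection over the detail bands; the synthetic frame, wavelet and N are illustrative choices only.

        import numpy as np
        import pywt

        fs = 16000
        t = np.arange(512) / fs
        frame = np.sin(2*np.pi*500*t) + 0.5*np.sin(2*np.pi*2300*t)   # synthetic speech-like frame

        coeffs = pywt.swt(frame, 'db4', level=3)          # undecimated: every band keeps 512 samples
        detail_bands = [cD for _, cD in coeffs]           # M analysis bands (details only, for simplicity)

        energies = np.array([np.sum(b**2) for b in detail_bands])
        N = 2                                             # keep the N most energetic of the M bands
        selected = np.sort(np.argsort(energies)[::-1][:N])
        print("band energies:", energies.round(2), "-> selected bands:", selected)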

  2. On the Use of Adaptive Wavelet-based Methods for Ocean Modeling and Data Assimilation Problems

    NASA Astrophysics Data System (ADS)

    Vasilyev, Oleg V.; Yousuff Hussaini, M.; Souopgui, Innocent

    2014-05-01

    Latest advancements in parallel wavelet-based numerical methodologies for the solution of partial differential equations, combined with the unique properties of wavelet analysis to unambiguously identify and isolate localized dynamically dominant flow structures, make it feasible to start developing integrated approaches for ocean modeling and data assimilation problems that take advantage of temporally and spatially varying meshes. In this talk the Parallel Adaptive Wavelet Collocation Method with spatially and temporally varying thresholding is presented and the feasibility/potential advantages of its use for ocean modeling are discussed. The second half of the talk focuses on the recently developed Simultaneous Space-time Adaptive approach that addresses one of the main challenges of variational data assimilation, namely the requirement to have a forward solution available when solving the adjoint problem. The issue is addressed by concurrently solving forward and adjoint problems in the entire space-time domain on a near-optimal adaptive computational mesh that automatically adapts to spatio-temporal structures of the solution. The compressed space-time form of the solution eliminates the need to save or recompute the forward solution for every time slice, as is typically done in traditional time-marching variational data assimilation approaches. The simultaneous spatio-temporal discretization of both the forward and the adjoint problems makes it possible to solve both of them concurrently on the same space-time adaptive computational mesh, reducing the amount of saved data to the strict minimum for a given a priori controlled accuracy of the solution. The simultaneous space-time adaptive approach of variational data assimilation is demonstrated for the advection-diffusion problem in 1D-t and 2D-t dimensions.

  3. An Undecimated Wavelet-based Method for Cochlear Implant Speech Processing.

    PubMed

    Hajiaghababa, Fatemeh; Kermani, Saeed; Marateb, Hamid R

    2014-10-01

    A cochlear implant is an implanted electronic device used to provide a sensation of hearing to a person who is hard of hearing. The cochlear implant is often referred to as a bionic ear. This paper presents an undecimated wavelet-based speech coding strategy for cochlear implants, which gives a novel speech processing strategy. The undecimated wavelet packet transform (UWPT) is computed like the wavelet packet transform except that it does not down-sample the output at each level. The speech data used for the current study consist of 30 consonants, sampled at 16 kbps. The performance of our proposed UWPT method was compared to that of an infinite impulse response (IIR) filter in terms of mean opinion score (MOS), short-time objective intelligibility (STOI) measure and segmental signal-to-noise ratio (SNR). The undecimated wavelet gave better segmental SNR for about 96% of the input speech data. The MOS of the proposed method was twice that of the IIR filter-bank. The statistical analysis revealed that the UWT-based N-of-M strategy significantly improved the MOS, STOI and segmental SNR (P < 0.001) compared with those obtained with the IIR filter-bank based strategies. The advantage of the UWPT is that it is shift-invariant, which gives a dense approximation to the continuous wavelet transform. Thus, the information loss is minimal, and that is why the UWPT performance was better than that of traditional filter-bank strategies in speech recognition tests. Results showed that the UWPT could be a promising method for speech coding in cochlear implants, although its computational complexity is higher than that of traditional filter-banks. PMID:25426428

  4. Wavelet-based compression of medical images: filter-bank selection and evaluation.

    PubMed

    Saffor, A; bin Ramli, A R; Ng, K H

    2003-06-01

    Wavelet-based image coding algorithms (lossy and lossless) use a fixed perfect reconstruction filter-bank built into the algorithm for coding and decoding of images. However, no systematic study has been performed to evaluate the coding performance of wavelet filters on medical images. We evaluated which types of filters are best suited to medical images in terms of providing a low bit rate and low computational complexity. In this study a variety of wavelet filters were used to compress and decompress computed tomography (CT) brain and abdomen images. We applied two-dimensional wavelet decomposition, quantization and reconstruction using several families of filter banks to a set of CT images. The Discrete Wavelet Transform (DWT), which provides an efficient framework for multi-resolution frequency analysis, was used. Compression was accomplished by applying threshold values to the wavelet coefficients. Statistical indices such as mean square error (MSE), maximum absolute error (MAE) and peak signal-to-noise ratio (PSNR) were used to quantify the effect of wavelet compression on the selected images. The code was written using the wavelet and image processing toolboxes of MATLAB (version 6.1). The results show that no specific wavelet filter performs uniformly better than the others, except for the Daubechies and bi-orthogonal filters, which are the best among all. MAE values achieved by these filters were 5 × 10^-14 to 12 × 10^-14 for both CT brain and abdomen images at different decomposition levels. This indicates that, using these filters, a very small error (approximately 7 × 10^-14) can be achieved between the original and the filtered image. The PSNR values obtained were higher for the brain than for the abdomen images. For both lossy and lossless compression, the 'most appropriate' wavelet filter should be chosen adaptively depending on the statistical properties of the image being coded to achieve a higher compression ratio. PMID:12956184
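
    The evaluation pipeline described above can be sketched as follows: 2-D wavelet decomposition, hard thresholding of the detail coefficients, reconstruction, and MSE/PSNR computation. The random "image", the bior4.4 filter and the threshold value are illustrative stand-ins for the CT data and filter banks compared in the paper.

        import numpy as np
        import pywt

        rng = np.random.default_rng(0)
        image = rng.random((128, 128)) * 255.0            # placeholder for a CT slice

        coeffs = pywt.wavedec2(image, 'bior4.4', level=3)
        thr = 10.0
        coeffs_t = [coeffs[0]] + [tuple(pywt.threshold(d, thr, mode='hard') for d in detail)
                                  for detail in coeffs[1:]]
        recon = pywt.waverec2(coeffs_t, 'bior4.4')[:128, :128]

        mse = np.mean((image - recon) ** 2)
        psnr = 10.0 * np.log10(255.0 ** 2 / mse)
        print(f"MSE = {mse:.3f}, PSNR = {psnr:.2f} dB")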

  5. Statistically significant contrasts between EMG waveforms revealed using wavelet-based functional ANOVA.

    PubMed

    McKay, J Lucas; Welch, Torrence D J; Vidakovic, Brani; Ting, Lena H

    2013-01-01

    We developed wavelet-based functional ANOVA (wfANOVA) as a novel approach for comparing neurophysiological signals that are functions of time. Temporal resolution is often sacrificed by analyzing such data in large time bins, increasing statistical power by reducing the number of comparisons. We performed ANOVA in the wavelet domain because differences between curves tend to be represented by a few temporally localized wavelets, which we transformed back to the time domain for visualization. We compared wfANOVA and ANOVA performed in the time domain (tANOVA) on both experimental electromyographic (EMG) signals from responses to perturbation during standing balance across changes in peak perturbation acceleration (3 levels) and velocity (4 levels) and on simulated data with known contrasts. In experimental EMG data, wfANOVA revealed the continuous shape and magnitude of significant differences over time without a priori selection of time bins. However, tANOVA revealed only the largest differences at discontinuous time points, resulting in features with later onsets and shorter durations than those identified using wfANOVA (P < 0.02). Furthermore, wfANOVA required significantly fewer (~1/4×; P < 0.015) significant F tests than tANOVA, resulting in post hoc tests with increased power. In simulated EMG data, wfANOVA identified known contrast curves with a high level of precision (r² = 0.94 ± 0.08) and performed better than tANOVA across noise levels (P << 0.01). Therefore, wfANOVA may be useful for revealing differences in the shape and magnitude of neurophysiological signals (e.g., EMG, firing rates) across multiple conditions with both high temporal resolution and high statistical power. PMID:23100136
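
    A one-factor sketch of the wfANOVA idea, under simplifying assumptions (synthetic EMG-like trials, a db4 wavelet, an uncorrected 0.05 threshold, and no post hoc testing): transform each trial to the wavelet domain, test every coefficient across conditions, and transform only the significant contrast coefficients back to the time domain.

        import numpy as np
        import pywt
        from scipy.stats import f_oneway

        rng = np.random.default_rng(0)
        n_trials, n_samples, wavelet, level = 12, 64, 'db4', 3
        t = np.arange(n_samples)
        burst = np.exp(-0.5 * ((t - 32) / 5.0) ** 2)                 # localized EMG-like burst
        groups = [np.array([amp * burst + 0.2 * rng.standard_normal(n_samples)
                            for _ in range(n_trials)]) for amp in (1.0, 1.3, 1.8)]

        def flatten(trials):                                          # wavelet-transform each trial
            return np.array([np.concatenate(pywt.wavedec(x, wavelet, level=level)) for x in trials])

        flat = [flatten(g) for g in groups]
        template = pywt.wavedec(groups[0][0], wavelet, level=level)   # remembers per-band sizes

        F, p = f_oneway(*flat, axis=0)                                # one test per wavelet coefficient
        contrast = flat[-1].mean(axis=0) - flat[0].mean(axis=0)
        contrast[p >= 0.05] = 0.0                                     # keep only significant coefficients

        sizes = np.cumsum([c.size for c in template])[:-1]
        contrast_time = pywt.waverec(np.split(contrast, sizes), wavelet)[:n_samples]
        print("significant coefficients:", int((p < 0.05).sum()),
              "| peak time-domain contrast:", np.round(contrast_time.max(), 2))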

  6. Joint estimation of crown of thorns (Acanthaster planci) densities on the Great Barrier Reef

    PubMed Central

    Mellin, Camille; Pratchett, Morgan S.; Hoey, Jessica; Anthony, Kenneth R.N.; Cheal, Alistair J.; Miller, Ian; Sweatman, Hugh; Cowan, Zara L.; Taylor, Sascha; Moon, Steven; Fonnesbeck, Chris J.

    2016-01-01

    Crown-of-thorns starfish (CoTS; Acanthaster spp.) are an outbreaking pest among many Indo-Pacific coral reefs that cause substantial ecological and economic damage. Despite ongoing CoTS research, there remain critical gaps in observing CoTS populations and accurately estimating their numbers, greatly limiting understanding of the causes and sources of CoTS outbreaks. Here we address two of these gaps by (1) estimating the detectability of adult CoTS on typical underwater visual count (UVC) surveys using covariates and (2) inter-calibrating multiple data sources to estimate CoTS densities within the Cairns sector of the Great Barrier Reef (GBR). We find that, on average, CoTS detectability is high at 0.82 [0.77, 0.87] (median highest posterior density (HPD) and [95% uncertainty intervals]), with CoTS disc width having the greatest influence on detection. Integrating this information with coincident surveys from alternative sampling programs, we estimate CoTS densities in the Cairns sector of the GBR averaged 44 [41, 48] adults per hectare in 2014. PMID:27635314

  7. Joint estimation of crown of thorns (Acanthaster planci) densities on the Great Barrier Reef.

    PubMed

    MacNeil, M Aaron; Mellin, Camille; Pratchett, Morgan S; Hoey, Jessica; Anthony, Kenneth R N; Cheal, Alistair J; Miller, Ian; Sweatman, Hugh; Cowan, Zara L; Taylor, Sascha; Moon, Steven; Fonnesbeck, Chris J

    2016-01-01

    Crown-of-thorns starfish (CoTS; Acanthaster spp.) are an outbreaking pest among many Indo-Pacific coral reefs that cause substantial ecological and economic damage. Despite ongoing CoTS research, there remain critical gaps in observing CoTS populations and accurately estimating their numbers, greatly limiting understanding of the causes and sources of CoTS outbreaks. Here we address two of these gaps by (1) estimating the detectability of adult CoTS on typical underwater visual count (UVC) surveys using covariates and (2) inter-calibrating multiple data sources to estimate CoTS densities within the Cairns sector of the Great Barrier Reef (GBR). We find that, on average, CoTS detectability is high at 0.82 [0.77, 0.87] (median highest posterior density (HPD) and [95% uncertainty intervals]), with CoTS disc width having the greatest influence on detection. Integrating this information with coincident surveys from alternative sampling programs, we estimate CoTS densities in the Cairns sector of the GBR averaged 44 [41, 48] adults per hectare in 2014.

  9. Density estimation in a wolverine population using spatial capture-recapture models

    USGS Publications Warehouse

    Royle, J. Andrew; Magoun, Audrey J.; Gardner, Beth; Valkenbury, Patrick; Lowell, Richard E.; McKelvey, Kevin

    2011-01-01

    Classical closed-population capture-recapture models do not accommodate the spatial information inherent in encounter history data obtained from camera-trapping studies. As a result, individual heterogeneity in encounter probability is induced, and it is not possible to estimate density objectively because trap arrays do not have a well-defined sample area. We applied newly developed capture-recapture models that accommodate the spatial attribute inherent in capture-recapture data to a population of wolverines (Gulo gulo) in Southeast Alaska in 2008. We used camera-trapping data collected from 37 cameras in a 2,140-km2 area of forested and open habitats largely enclosed by ocean and glacial icefields. We detected 21 unique individuals 115 times. Wolverines exhibited a strong positive trap response, with an increased tendency to revisit previously visited traps. Under the trap-response model, we estimated wolverine density at 9.7 individuals/1,000 km2 (95% Bayesian CI: 5.9-15.0). Our model provides a formal statistical framework for estimating density from wolverine camera-trapping studies that accounts for a behavioral response due to baited traps. Further, our model-based estimator does not have strict requirements about the spatial configuration of traps or length of trapping sessions, providing considerable operational flexibility in the development of field studies.

  11. GPU Acceleration of Mean Free Path Based Kernel Density Estimators for Monte Carlo Neutronics Simulations

    SciTech Connect

    Burke, TImothy P.; Kiedrowski, Brian C.; Martin, William R.; Brown, Forrest B.

    2015-11-19

    Kernel Density Estimators (KDEs) are a non-parametric density estimation technique that has recently been applied to Monte Carlo radiation transport simulations. Kernel density estimators are an alternative to histogram tallies for obtaining global solutions in Monte Carlo simulations. With KDEs, a single event, either a collision or particle track, can contribute to the score at multiple tally points, with the uncertainty at those points being independent of the desired resolution of the solution. Thus, KDEs show potential for obtaining estimates of a global solution with reduced variance when compared to a histogram. Previously, KDEs have been applied to neutronics for one-group reactor physics problems and fixed-source shielding applications. However, little work has been done to obtain reaction rates using KDEs. This paper introduces a new form of the MFP KDE that is capable of handling general geometries. Furthermore, extending the MFP KDE to 2-D problems in continuous energy introduces inaccuracies into the solution. An ad hoc solution to these inaccuracies is introduced that produces errors smaller than 4% at material interfaces.
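
    A minimal 1-D sketch of a collision-based kernel density tally, the general idea behind the estimators discussed above: each collision contributes a kernel-weighted score to every tally point rather than a count in a single histogram bin. The Epanechnikov kernel, fixed bandwidth and synthetic collision data are illustrative; the paper's mean-free-path KDE instead adapts the kernel to the material.

        import numpy as np

        def kde_collision_tally(tally_x, collision_x, weights, sigma_t, h=0.5):
            """Flux estimate phi(x) ~ (1/N) * sum_c (w_c / Sigma_t) * k_h(x - x_c)."""
            u = (tally_x[:, None] - collision_x[None, :]) / h
            kernel = np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u**2), 0.0) / h   # Epanechnikov kernel
            return ((weights / sigma_t)[None, :] * kernel).sum(axis=1) / len(collision_x)

        rng = np.random.default_rng(0)
        collisions = rng.exponential(2.0, size=5000)        # synthetic collision sites (cm)
        phi = kde_collision_tally(np.linspace(0.0, 8.0, 9), collisions,
                                  weights=np.ones(5000), sigma_t=0.5)
        print(phi.round(3))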

  12. Estimation and Modeling of Enceladus Plume Jet Density Using Reaction Wheel Control Data

    NASA Technical Reports Server (NTRS)

    Lee, Allan Y.; Wang, Eric K.; Pilinski, Emily B.; Macala, Glenn A.; Feldman, Antonette

    2010-01-01

    The Cassini spacecraft was launched on October 15, 1997 by a Titan 4B launch vehicle. After an interplanetary cruise of almost seven years, it arrived at Saturn on June 30, 2004. In 2005, Cassini completed three flybys of Enceladus, a small, icy satellite of Saturn. Observations made during these flybys confirmed the existence of a water vapor plume in the south polar region of Enceladus. Five additional low-altitude flybys of Enceladus were successfully executed in 2008-9 to better characterize these watery plumes. The first of these flybys was the 50-km Enceladus-3 (E3) flyby executed on March 12, 2008. During the E3 flyby, the spacecraft attitude was controlled by a set of three reaction wheels. During the flyby, multiple plume jets imparted disturbance torque on the spacecraft, resulting in small but visible attitude control errors. Using the known and unique transfer function between the disturbance torque and the attitude control error, the collected attitude control error telemetry could be used to estimate the disturbance torque. The effectiveness of this methodology is confirmed using the E3 telemetry data. Given good estimates of the spacecraft's projected area, center of pressure location, and spacecraft velocity, the time history of the Enceladus plume density is reconstructed accordingly. The 1-sigma uncertainty of the estimated density is 7.7%. Next, we modeled the density due to each plume jet as a function of both the radial and angular distances of the spacecraft from the plume source. We also conjecture that the total plume density experienced by the spacecraft is the sum of the component plume densities. By comparing the time history of the reconstructed E3 plume density with that predicted by the plume model, values of the plume model parameters are determined. Results obtained are compared with those determined by other Cassini science instruments.
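
    The final inversion step described above can be sketched as simple arithmetic: once the plume-induced disturbance torque has been reconstructed, an assumed free-molecular drag model, torque = (C_D/2) * rho * v^2 * A * L, is solved for the density rho. The drag coefficient, area, lever arm and flyby speed below are illustrative assumptions, not the mission's calibrated values.

        def plume_density_from_torque(torque_nm, area_m2, speed_ms, lever_arm_m, c_d=2.2):
            """rho = 2*tau / (C_D * A * v^2 * L), under an assumed free-molecular drag torque model."""
            return 2.0 * torque_nm / (c_d * area_m2 * speed_ms**2 * lever_arm_m)

        # Example: 2e-3 N m of reconstructed torque during a ~14 km/s flyby (placeholder numbers)
        rho = plume_density_from_torque(torque_nm=2e-3, area_m2=15.0, speed_ms=14000.0, lever_arm_m=1.5)
        print(f"{rho:.2e} kg/m^3")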

  14. Limit Distribution Theory for Maximum Likelihood Estimation of a Log-Concave Density

    PubMed Central

    Balabdaoui, Fadoua; Rufibach, Kaspar; Wellner, Jon A.

    2009-01-01

    We find limiting distributions of the nonparametric maximum likelihood estimator (MLE) of a log-concave density, i.e. a density of the form f0 = exp ϕ0 where ϕ0 is a concave function on ℝ. Existence, form, characterizations and uniform rates of convergence of the MLE are given by Rufibach (2006) and Dümbgen and Rufibach (2007). The characterization of the log–concave MLE in terms of distribution functions is the same (up to sign) as the characterization of the least squares estimator of a convex density on [0, ∞) as studied by Groeneboom, Jongbloed and Wellner (2001b). We use this connection to show that the limiting distributions of the MLE and its derivative are, under comparable smoothness assumptions, the same (up to sign) as in the convex density estimation problem. In particular, changing the smoothness assumptions of Groeneboom, Jongbloed and Wellner (2001b) slightly by allowing some higher derivatives to vanish at the point of interest, we find that the pointwise limiting distributions depend on the second and third derivatives at 0 of Hk, the “lower invelope” of an integrated Brownian motion process minus a drift term depending on the number of vanishing derivatives of ϕ0 = log f0 at the point of interest. We also establish the limiting distribution of the resulting estimator of the mode M(f0) and establish a new local asymptotic minimax lower bound which shows the optimality of our mode estimator in terms of both rate of convergence and dependence of constants on population values. PMID:19881896

  15. A comparison of selected parametric and imputation methods for estimating snag density and snag quality attributes

    USGS Publications Warehouse

    Eskelson, Bianca N.I.; Hagar, Joan; Temesgen, Hailemariam

    2012-01-01

    Snags (standing dead trees) are an essential structural component of forests. Because wildlife use of snags depends on size and decay stage, snag density estimation without any information about snag quality attributes is of little value for wildlife management decision makers. Little work has been done to develop models that allow multivariate estimation of snag density by snag quality class. Using climate, topography, Landsat TM data, stand age and forest type collected for 2356 forested Forest Inventory and Analysis plots in western Washington and western Oregon, we evaluated two multivariate techniques for their abilities to estimate density of snags by three decay classes. The density of live trees and snags in three decay classes (D1: recently dead, little decay; D2: decay, without top, some branches and bark missing; D3: extensive decay, missing bark and most branches) with diameter at breast height (DBH) ≥ 12.7 cm was estimated using a nonparametric random forest nearest neighbor imputation technique (RF) and a parametric two-stage model (QPORD), for which the number of trees per hectare was estimated with a Quasipoisson model in the first stage and the probability of belonging to a tree status class (live, D1, D2, D3) was estimated with an ordinal regression model in the second stage. The presence of large snags with DBH ≥ 50 cm was predicted using a logistic regression and RF imputation. Because of the more homogenous conditions on private forest lands, snag density by decay class was predicted with higher accuracies on private forest lands than on public lands, while presence of large snags was more accurately predicted on public lands, owing to the higher prevalence of large snags on public lands. RF outperformed the QPORD model in terms of percent accurate predictions, while QPORD provided smaller root mean square errors in predicting snag density by decay class. The logistic regression model achieved more accurate presence/absence classification

  16. Scatterer number density considerations in reference phantom-based attenuation estimation.

    PubMed

    Rubert, Nicholas; Varghese, Tomy

    2014-07-01

    Attenuation estimation and imaging have the potential to be a valuable tool for tissue characterization, particularly for indicating the extent of thermal ablation therapy in the liver. Often the performance of attenuation estimation algorithms is characterized using numerical simulations or tissue-mimicking phantoms containing a high scatterer number density (SND). This ensures an ultrasound signal with a Rayleigh distributed envelope and a signal-to-noise ratio (SNR) approaching 1.91. However, biological tissue often fails to exhibit Rayleigh scattering statistics. For example, across 1647 regions of interest in five ex vivo bovine livers, we obtained an envelope SNR of 1.10 ± 0.12 when the tissue was imaged with the VFX 9L4 linear array transducer at a center frequency of 6.0 MHz on a Siemens S2000 scanner. In this article, we examine attenuation estimation in numerical phantoms, tissue-mimicking phantoms with variable SNDs and ex vivo bovine liver before and after thermal coagulation. We find that reference phantom-based attenuation estimation is robust to small deviations from Rayleigh statistics. However, in tissue with low SNDs, large deviations in envelope SNR from 1.91 lead to correspondingly large increases in attenuation estimation variance. At the same time, low SND is not found to be a significant source of bias in the attenuation estimate. For example, we find that the standard deviation of attenuation slope estimates increases from 0.07 to 0.25 dB/cm-MHz as the envelope SNR decreases from 1.78 to 1.01 when estimating attenuation slope in tissue-mimicking phantoms with a large estimation kernel size (16 mm axially × 15 mm laterally). Meanwhile, the bias in the attenuation slope estimates is found to be negligible (<0.01 dB/cm-MHz). We also compare results obtained with reference phantom-based attenuation estimates in ex vivo bovine liver and thermally coagulated bovine liver. PMID:24726800
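
    A minimal sketch of a reference-phantom (log-spectral-ratio) attenuation estimate of the kind examined above, assuming the sample-to-reference spectral ratio decays as exp(-4*(alpha_s - alpha_r)*f*z): a depth slope at each frequency yields the attenuation difference, and its frequency slope gives dB/cm-MHz. The synthetic spectra are illustrative, and the envelope-SNR effects studied in the paper are not modelled here.

        import numpy as np

        np_to_db = 8.686
        f = np.linspace(2.0, 8.0, 13)                      # MHz
        z = np.linspace(1.0, 4.0, 16)                      # depth (cm)
        alpha_s, alpha_r = 0.7, 0.5                        # "true" attenuation slopes (dB/cm-MHz), synthetic

        rng = np.random.default_rng(0)
        log_ratio = (-4.0 * (alpha_s - alpha_r) / np_to_db) * np.outer(z, f) \
                    + 0.02 * rng.standard_normal((z.size, f.size))     # synthetic log spectral ratio

        slope_per_freq = np.polyfit(z, log_ratio, 1)[0]    # depth slope at each frequency (Np/cm)
        delta_alpha_db = -slope_per_freq * np_to_db / 4.0  # attenuation difference (dB/cm) vs frequency
        atten_slope = np.polyfit(f, delta_alpha_db, 1)[0]  # dB/cm-MHz
        print(f"estimated sample attenuation slope: {alpha_r + atten_slope:.3f} dB/cm-MHz")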

  17. Technical Factors Influencing Cone Packing Density Estimates in Adaptive Optics Flood Illuminated Retinal Images

    PubMed Central

    Lombardo, Marco; Serrao, Sebastiano; Lombardo, Giuseppe

    2014-01-01

    Purpose To investigate the influence of various technical factors on the variation of cone packing density estimates in adaptive optics flood illuminated retinal images. Methods Adaptive optics images of the photoreceptor mosaic were obtained in fifteen healthy subjects. The cone density and Voronoi diagrams were assessed in sampling windows of 320×320 µm, 160×160 µm and 64×64 µm at 1.5 degree temporal and superior eccentricity from the preferred locus of fixation (PRL). The technical factors that have been analyzed included the sampling window size, the corrected retinal magnification factor (RMFcorr), the conversion from radial to linear distance from the PRL, the displacement between the PRL and foveal center and the manual checking of cone identification algorithm. Bland-Altman analysis was used to assess the agreement between cone density estimated within the different sampling window conditions. Results The cone density declined with decreasing sampling area and data between areas of different size showed low agreement. A high agreement was found between sampling areas of the same size when comparing density calculated with or without using individual RMFcorr. The agreement between cone density measured at radial and linear distances from the PRL and between data referred to the PRL or the foveal center was moderate. The percentage of Voronoi tiles with hexagonal packing arrangement was comparable between sampling areas of different size. The boundary effect, presence of any retinal vessels, and the manual selection of cones missed by the automated identification algorithm were identified as the factors influencing variation of cone packing arrangements in Voronoi diagrams. Conclusions The sampling window size is the main technical factor that influences variation of cone density. Clear identification of each cone in the image and the use of a large buffer zone are necessary to minimize factors influencing variation of Voronoi diagrams of the cone
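
    The two quantities discussed above can be sketched directly from a list of cone centres: packing density within a sampling window, and the share of Voronoi tiles with six neighbours. The simulated cone coordinates, window size and buffer width below are placeholders for values measured on adaptive optics images.

        import numpy as np
        from scipy.spatial import Voronoi

        rng = np.random.default_rng(2)
        window_um = 64.0                                       # 64 x 64 micron sampling window
        cones = rng.random((90, 2)) * window_um                # placeholder cone centres (microns)

        density = len(cones) / (window_um / 1000.0) ** 2       # cones per mm^2
        print(f"density = {density:.0f} cones/mm^2")

        vor = Voronoi(cones)
        neighbours = np.zeros(len(cones), dtype=int)
        for p, q in vor.ridge_points:                          # each ridge joins two neighbouring cones
            neighbours[p] += 1
            neighbours[q] += 1
        interior = np.all((cones > 5.0) & (cones < window_um - 5.0), axis=1)   # crude buffer zone
        print(f"hexagonal tiles among interior cones: {100.0 * np.mean(neighbours[interior] == 6):.0f}%")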

  18. Combining Breeding Bird Survey and distance sampling to estimate density of migrant and breeding birds

    USGS Publications Warehouse

    Somershoe, S.G.; Twedt, D.J.; Reid, B.

    2006-01-01

    We combined Breeding Bird Survey point count protocol and distance sampling to survey spring migrant and breeding birds in Vicksburg National Military Park on 33 days between March and June of 2003 and 2004. For 26 of 106 detected species, we used program DISTANCE to estimate detection probabilities and densities from 660 3-min point counts in which detections were recorded within four distance annuli. For most species, estimates of detection probability, and thereby density estimates, were improved through incorporation of the proportion of forest cover at point count locations as a covariate. Our results suggest Breeding Bird Surveys would benefit from the use of distance sampling and a quantitative characterization of habitat at point count locations. During spring migration, we estimated that the most common migrant species accounted for a population of 5000-9000 birds in Vicksburg National Military Park (636 ha). Species with average populations of 300 individuals during migration were: Blue-gray Gnatcatcher (Polioptila caerulea), Cedar Waxwing (Bombycilla cedrorum), White-eyed Vireo (Vireo griseus), Indigo Bunting (Passerina cyanea), and Ruby-crowned Kinglet (Regulus calendula). Of 56 species that bred in Vicksburg National Military Park, we estimated that the most common 18 species accounted for 8150 individuals. The six most abundant breeding species, Blue-gray Gnatcatcher, White-eyed Vireo, Summer Tanager (Piranga rubra), Northern Cardinal (Cardinalis cardinalis), Carolina Wren (Thryothorus ludovicianus), and Brown-headed Cowbird (Molothrus ater), accounted for 5800 individuals.

  19. Nearest neighbor density ratio estimation for large-scale applications in astronomy

    NASA Astrophysics Data System (ADS)

    Kremer, J.; Gieseke, F.; Steenstrup Pedersen, K.; Igel, C.

    2015-09-01

    In astronomical applications of machine learning, the distribution of objects used for building a model is often different from the distribution of the objects the model is later applied to. This is known as sample selection bias, which is a major challenge for statistical inference as one can no longer assume that the labeled training data are representative. To address this issue, one can re-weight the labeled training patterns to match the distribution of unlabeled data that are available already in the training phase. There are many examples in practice where this strategy yielded good results, but estimating the weights reliably from a finite sample is challenging. We consider an efficient nearest neighbor density ratio estimator that can exploit large samples to increase the accuracy of the weight estimates. To solve the problem of choosing the right neighborhood size, we propose to use cross-validation on a model selection criterion that is unbiased under covariate shift. The resulting algorithm is our method of choice for density ratio estimation when the feature space dimensionality is small and sample sizes are large. The approach is simple and, because of the model selection, robust. We empirically find that it is on a par with established kernel-based methods on relatively small regression benchmark datasets. However, when applied to large-scale photometric redshift estimation, our approach outperforms the state-of-the-art.
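
    One common nearest-neighbour construction of the density ratio, sketched below under illustrative assumptions (a fixed K and synthetic 1-D data): for each labelled point, take the radius of its K-th nearest neighbour in the unlabelled sample and compare the counts of the two samples inside that radius. The paper instead selects the neighbourhood size by cross-validation under covariate shift.

        import numpy as np
        from scipy.spatial import cKDTree

        def knn_density_ratio(x_train, x_test, k=20):
            """w(x) ~ p_test(x) / p_train(x), evaluated at the training (labelled) points."""
            tree_test, tree_train = cKDTree(x_test), cKDTree(x_train)
            r = tree_test.query(x_train, k=k)[0][:, -1]                  # radius of K-th test neighbour
            n_train_in_r = np.array([len(tree_train.query_ball_point(x, rad))
                                     for x, rad in zip(x_train, r)])
            return (k / len(x_test)) / (n_train_in_r / len(x_train))

        rng = np.random.default_rng(0)
        x_tr = rng.normal(0.0, 1.0, size=(2000, 1))                      # labelled sample
        x_te = rng.normal(0.5, 1.2, size=(5000, 1))                      # shifted unlabelled sample
        print("mean weight:", np.round(knn_density_ratio(x_tr, x_te).mean(), 2))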

  20. Bayesian nonparametric regression and density estimation using integrated nested Laplace approximations.

    PubMed

    Wang, Xiao-Feng

    2013-06-25

    Integrated nested Laplace approximations (INLA) are a recently proposed approximate Bayesian approach for fitting structured additive regression models with a latent Gaussian field. The INLA method, as an alternative to Markov chain Monte Carlo techniques, provides accurate approximations to posterior marginals and avoids time-consuming sampling. We show here that two classical nonparametric smoothing problems, nonparametric regression and density estimation, can be addressed using INLA. Simulated examples and R functions are provided to illustrate the use of the methods. Potential applications of INLA are also discussed in the paper.

  1. Pedotransfer functions for Irish soils - estimation of bulk density (ρb) per horizon type

    NASA Astrophysics Data System (ADS)

    Reidy, B.; Simo, I.; Sills, P.; Creamer, R. E.

    2016-01-01

    Soil bulk density is a key property in defining soil characteristics. It describes the packing structure of the soil and is also essential for the measurement of soil carbon stock and nutrient assessment. In many older surveys this property was neglected, and in many modern surveys it is omitted because of the cost in both laboratory analysis and labour, and in cases where the core method cannot be applied. To overcome these oversights, pedotransfer functions are applied, using other known soil properties to estimate bulk density. Pedotransfer functions have been derived from large international data sets across many studies, each with its own inherent biases, many ignoring horizonation and depth variances. Initially, pedotransfer functions from the literature were used to predict bulk densities for the different horizon types, using local data sets of known bulk density. The best-performing pedotransfer functions were then selected, recalibrated, and validated against the known data. The predicted coefficient of determination was 0.5 or greater in 12 of the 17 horizon types studied. These new equations allowed gap filling where bulk density data were missing in part or whole soil profiles. This in turn allowed the development of an indicative soil bulk density map for Ireland at 0-30 and 30-50 cm horizon depths. In general, the horizons with the largest known data sets had the best predictions when using the recalibrated and validated pedotransfer functions.
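
    A minimal sketch of the per-horizon recalibration and validation loop described above, assuming (for illustration only) a single organic-carbon predictor and synthetic data: fit a linear pedotransfer function on a calibration subset and report the coefficient of determination on a hold-out subset.

        import numpy as np

        rng = np.random.default_rng(1)
        org_c = rng.uniform(0.5, 12.0, size=120)                    # % organic carbon (synthetic)
        bd = 1.6 - 0.07 * org_c + 0.08 * rng.standard_normal(120)   # bulk density, g/cm^3 (synthetic)

        calib, valid = slice(0, 80), slice(80, None)
        slope, intercept = np.polyfit(org_c[calib], bd[calib], 1)   # recalibrated coefficients

        pred = intercept + slope * org_c[valid]
        ss_res = np.sum((bd[valid] - pred) ** 2)
        ss_tot = np.sum((bd[valid] - bd[valid].mean()) ** 2)
        print(f"BD = {intercept:.2f} + {slope:.3f} * OC ;  validation R^2 = {1 - ss_res / ss_tot:.2f}")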

  2. Electron density estimation in cold magnetospheric plasmas with the Cluster Active Archive

    NASA Astrophysics Data System (ADS)

    Masson, A.; Pedersen, A.; Taylor, M. G.; Escoubet, C. P.; Laakso, H. E.

    2009-12-01

    Electron density is a key physical quantity for characterizing any plasma medium. Its measurement is thus essential to understand the various physical processes occurring in the environment of a magnetized planet. However, no magnetosphere in the solar system is a homogeneous medium with a constant electron density and temperature. For instance, the Earth's magnetosphere is composed of a variety of regions with densities and temperatures spanning at least 6 decades of magnitude. For this reason, different types of scientific instruments are usually carried onboard a magnetospheric spacecraft to estimate in situ, by different means, the electron density of the various plasma regions crossed. In the case of the European Space Agency Cluster mission, five different instruments on each of its four identical spacecraft can be used to estimate it: two particle instruments, a DC electric field instrument, a relaxation sounder and a high-time-resolution passive wave receiver. Each of these instruments has its pros and cons depending on the plasma conditions. The focus of this study is the accurate estimation of the electron density in cold plasma regions of the magnetosphere, including the magnetotail lobes (Ne ≤ 0.01 e-/cc, Te ~ 100 eV) and the plasmasphere (Ne > 10 e-/cc, Te < 10 eV). In these regions, particle instruments can be blind to low-energy ions outflowing from the ionosphere or may measure only a portion of the energy range of the particles due to photoelectrons. This often results in an underestimation of the bulk density. Measurements from a relaxation sounder enable accurate estimation of the bulk electron density above a fraction of 1 e-/cc, but require careful calibration of the resonances and/or the cutoffs detected. On Cluster, active soundings enable precise density estimates between 0.2 and 80 e-/cc to be derived every minute or two. Spacecraft-to-probe difference potential measurements from a double probe electric field experiment can be

  3. Kernel density estimation-based solution of the nuclear Schrödinger equation

    NASA Astrophysics Data System (ADS)

    Unke, Oliver Thorsten; Meuwly, Markus

    2015-10-01

    Solving the time-dependent Schrödinger equation for nuclear motion remains a challenge. Despite novel approaches based on Bohmian mechanics, the long-time stability and the generalization to multiple dimensions remain open questions. In the present work a method based on an ensemble of classical particles, instead of a wave function, is employed to evolve the system. Quantum effects are introduced through forces derived from the quantum potential Q, and the necessary derivatives are obtained from a density estimate using kernel density estimation. Application of the procedure to typical 1- and 2-dimensional problems yields good agreement with numerically exact solutions, and favourable scaling with the number of particles is found.
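
    A 1-D sketch of the density-estimation step described above, under simplifying assumptions (atomic units with hbar = m = 1, a Gaussian particle ensemble, and finite differences on a fixed grid rather than derivatives of the kernel estimate itself): a Gaussian kernel density estimate supplies rho(x), from which the Bohmian quantum potential Q = -(1/2) * (sqrt(rho))'' / sqrt(rho) and the quantum force follow.

        import numpy as np
        from scipy.stats import gaussian_kde

        rng = np.random.default_rng(0)
        particles = rng.normal(0.0, 0.7, size=2000)        # classical ensemble representing |psi|^2 (synthetic)

        x = np.linspace(-3.0, 3.0, 401)
        sqrt_rho = np.sqrt(gaussian_kde(particles)(x))

        d2 = np.gradient(np.gradient(sqrt_rho, x), x)      # second derivative of sqrt(rho)
        Q = -0.5 * d2 / sqrt_rho                           # quantum potential (hbar = m = 1)
        quantum_force = -np.gradient(Q, x)
        print("Q at x = -3, 0, 3:", Q[[0, 200, 400]].round(3))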

  4. Estimation of high-resolution dust column density maps. Empirical model fits

    NASA Astrophysics Data System (ADS)

    Juvela, M.; Montillaud, J.

    2013-09-01

    Context. Sub-millimetre dust emission is an important tracer of the column density N of dense interstellar clouds. One has to combine surface brightness information at different spatial resolutions, and specific methods are needed to derive N at a resolution higher than the lowest resolution of the observations. Some methods have been discussed in the literature, including a method (in the following, method B) that constructs the N estimate in stages, where the smallest spatial scales are derived using only the shortest-wavelength maps. Aims: We propose simple model fitting as a flexible way to estimate high-resolution column density maps. Our goal is to evaluate the accuracy of this procedure and to determine whether it is a viable alternative for making these maps. Methods: The new method employs model maps of column density (or intensity at a reference wavelength) and colour temperature. The model is fitted using Markov chain Monte Carlo methods, comparing model predictions with observations at their native resolution. We analyse simulated surface brightness maps and compare the accuracy of the new method with that of method B and with the results that would be obtained using high-resolution observations without noise. Results: The new method is able to produce reliable column density estimates at a resolution significantly higher than the lowest resolution of the input maps. Compared to method B, it is relatively resilient against the effects of noise. The method is computationally more demanding, but is feasible even in the analysis of large Herschel maps. Conclusions: The proposed empirical modelling method E is demonstrated to be a good alternative for calculating high-resolution column density maps, even with considerable super-resolution. Both methods E and B include the potential for further improvements, e.g., in the form of better a priori constraints.

  5. Uncertainty Quantification Techniques for Population Density Estimates Derived from Sparse Open Source Data

    SciTech Connect

    Stewart, Robert N; White, Devin A; Urban, Marie L; Morton, April M; Webster, Clayton G; Stoyanov, Miroslav K; Bright, Eddie A; Bhaduri, Budhendra L

    2013-01-01

    The Population Density Tables (PDT) project at the Oak Ridge National Laboratory (www.ornl.gov) is developing population density estimates for specific human activities under normal patterns of life based largely on information available in open source. Currently, activity based density estimates are based on simple summary data statistics such as range and mean. Researchers are interested in improving activity estimation and uncertainty quantification by adopting a Bayesian framework that considers both data and sociocultural knowledge. Under a Bayesian approach knowledge about population density may be encoded through the process of expert elicitation. Due to the scale of the PDT effort which considers over 250 countries, spans 40 human activity categories, and includes numerous contributors, an elicitation tool is required that can be operationalized within an enterprise data collection and reporting system. Such a method would ideally require that the contributor have minimal statistical knowledge, require minimal input by a statistician or facilitator, consider human difficulties in expressing qualitative knowledge in a quantitative setting, and provide methods by which the contributor can appraise whether their understanding and associated uncertainty was well captured. This paper introduces an algorithm that transforms answers to simple, non-statistical questions into a bivariate Gaussian distribution as the prior for the Beta distribution. Based on geometric properties of the Beta distribution parameter feasibility space and the bivariate Gaussian distribution, an automated method for encoding is developed that responds to these challenging enterprise requirements. Though created within the context of population density, this approach may be applicable to a wide array of problem domains requiring informative priors for the Beta distribution.

  6. Use of spatial capture-recapture modeling and DNA data to estimate densities of elusive animals

    USGS Publications Warehouse

    Kery, Marc; Gardner, Beth; Stoeckle, Tabea; Weber, Darius; Royle, J. Andrew

    2011-01-01

    Assessment of abundance, survival, recruitment rates, and density (i.e., population assessment) is especially challenging for elusive species most in need of protection (e.g., rare carnivores). Individual identification methods, such as DNA sampling, provide ways of studying such species efficiently and noninvasively. Additionally, statistical methods that correct for undetected animals and account for locations where animals are captured are available to efficiently estimate density and other demographic parameters. We collected hair samples of European wildcat (Felis silvestris) from cheek-rub lure sticks, extracted DNA from the samples, and identified each animal's genotype. To estimate the density of wildcats, we used Bayesian inference in a spatial capture-recapture model. We used WinBUGS to fit a model that accounted for differences in detection probability among individuals and seasons and between two lure arrays. We detected 21 individual wildcats (including possible hybrids) 47 times. Wildcat density was estimated at 0.29/km2 (SE 0.06), and 95% of the activity of wildcats was estimated to occur within 1.83 km from their home-range center. Lures located systematically were associated with a greater number of detections than lures placed in a cell on the basis of expert opinion. Detection probability of individual cats was greatest in late March. Our model is a generalized linear mixed model; hence, it can be easily extended, for instance, to incorporate trap- and individual-level covariates. We believe that the combined use of noninvasive sampling techniques and spatial capture-recapture models will improve population assessments, especially for rare and elusive animals.

  7. Joint estimation of population density functions and the location of the central business district.

    PubMed

    Alperovich, G; Deutsch, J

    1994-11-01

    "In this paper we [propose] a new procedure for estimating population density functions under conditions that the exact location of the CBD [central business district] is unknown or uncertain. As such it can also be utilized as a method for identifying the location of the CBD....[We apply] this method to cross-sectional data from Tel-Aviv-Yafo [Israel] during 1961 through 1990...."

  8. Density functionals for surface science: Exchange-correlation model development with Bayesian error estimation

    NASA Astrophysics Data System (ADS)

    Wellendorff, Jess; Lundgaard, Keld T.; Møgelhøj, Andreas; Petzold, Vivien; Landis, David D.; Nørskov, Jens K.; Bligaard, Thomas; Jacobsen, Karsten W.

    2012-06-01

    A methodology for semiempirical density functional optimization, using regularization and cross-validation methods from machine learning, is developed. We demonstrate that such methods enable well-behaved exchange-correlation approximations in very flexible model spaces, thus avoiding the overfitting found when standard least-squares methods are applied to high-order polynomial expansions. A general-purpose density functional for surface science and catalysis studies should accurately describe bond breaking and formation in chemistry, solid state physics, and surface chemistry, and should preferably also include van der Waals dispersion interactions. Such a functional necessarily compromises between describing fundamentally different types of interactions, making transferability of the density functional approximation a key issue. We investigate this trade-off between describing the energetics of intramolecular and intermolecular, bulk solid, and surface chemical bonding, and the developed optimization method explicitly handles making the compromise based on the directions in model space favored by different materials properties. The approach is applied to designing the Bayesian error estimation functional with van der Waals correlation (BEEF-vdW), a semilocal approximation with an additional nonlocal correlation term. Furthermore, an ensemble of functionals around BEEF-vdW comes out naturally, offering an estimate of the computational error. An extensive assessment on a range of data sets validates the applicability of BEEF-vdW to studies in chemistry and condensed matter physics. Applications of the approximation and its Bayesian ensemble error estimate to two intricate surface science problems support this.

  9. New Density Estimation Methods for Charged Particle Beams With Applications to Microbunching Instability

    SciTech Connect

    Balsa Terzic, Gabriele Bassi

    2011-07-01

    In this paper we discuss representations of charged particle densities in particle-in-cell (PIC) simulations, analyze the sources and profiles of the intrinsic numerical noise, and present efficient methods for their removal. We devise two alternative estimation methods for the charged particle distribution which represent a significant improvement over the Monte Carlo cosine expansion used in the 2d code of Bassi, designed to simulate coherent synchrotron radiation (CSR) in charged particle beams. The improvement is achieved by employing an alternative beam density estimation to the Monte Carlo cosine expansion. The representation is first binned onto a finite grid, after which two grid-based methods are employed to approximate particle distributions: (i) truncated fast cosine transform (TFCT); and (ii) thresholded wavelet transform (TWT). We demonstrate that these alternative methods represent a staggering upgrade over the original Monte Carlo cosine expansion in terms of efficiency, while the TWT approximation also provides an appreciable improvement in accuracy. The improvement in accuracy comes from a judicious removal of the numerical noise enabled by the wavelet formulation. The TWT method is then integrated into Bassi's CSR code, and benchmarked against the original version. We show that the new density estimation method provides a superior performance in terms of efficiency and spatial resolution, thus enabling high-fidelity simulations of CSR effects, including microbunching instability.
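
    As a minimal sketch of the thresholded wavelet transform (TWT) idea, and not the code of Bassi et al., the following bins a synthetic particle sample onto a grid and soft-thresholds its wavelet coefficients with a universal threshold; the PyWavelets package and the db6 wavelet are assumptions for illustration.

```python
# Minimal sketch (not the Bassi CSR code): thresholded wavelet transform (TWT) denoising of a
# 1-D binned particle density, assuming the PyWavelets package.
import numpy as np
import pywt

rng = np.random.default_rng(0)
samples = rng.normal(0.0, 1.0, 50_000)                            # macro-particle coordinates
density, edges = np.histogram(samples, bins=1024, density=True)   # bin onto a finite grid

coeffs = pywt.wavedec(density, "db6", level=6)                    # forward wavelet transform
sigma = np.median(np.abs(coeffs[-1])) / 0.6745                    # noise scale from finest level
thresh = sigma * np.sqrt(2.0 * np.log(density.size))              # universal threshold
coeffs = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
smooth = pywt.waverec(coeffs, "db6")[: density.size]              # de-noised density estimate
```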

  10. Use of prediction methods to estimate true density of active pharmaceutical ingredients.

    PubMed

    Cao, Xiaoping; Leyva, Norma; Anderson, Stephen R; Hancock, Bruno C

    2008-05-01

    True density is a fundamental and important property of active pharmaceutical ingredients (APIs). Using prediction methods to estimate the API true density can be very beneficial in pharmaceutical research and development, especially when experimental measurements cannot be made due to lack of material or sample handling restrictions. In this paper, two empirical prediction methods developed by Girolami and Immirzi and Perini were used to estimate the true density of APIs, and the estimation results were compared with experimentally measured values by helium pycnometry. The Girolami method is simple and can be used for both liquids and solids. For the tested APIs, the Girolami method had a maximum error of -12.7% and an average percent error of -3.0% with a 95% CI of (-3.8, -2.3%). The Immirzi and Perini method is more involved and is mainly used for solid crystals. In general, it gives better predictions than the Girolami method. For the tested APIs, the Immirzi and Perini method had a maximum error of 9.6% and an average percent error of 0.9% with a 95% CI of (0.3, 1.6%). PMID:18242023

  11. Neutral density estimation derived from meteoroid measurements using high-power, large-aperture radar

    NASA Astrophysics Data System (ADS)

    Li, A.; Close, S.

    2016-07-01

    We present a new method to estimate the neutral density of the lower thermosphere/upper mesosphere given deceleration measurements from meteoroids as they enter Earth's atmosphere. By tracking the plasma (referred to as head echoes) surrounding the ablating meteoroid, we are able to measure the range and velocity of the meteoroid in 3-D. This is accomplished at Advanced Research Projects Agency Long-Range Tracking and Instrumentation Radar (ALTAIR) with the use of four additional receiving horns. Combined with the momentum and ablation equations, we can feed large quantities of data into a minimization function which estimates the associated constants related to the ablation process and, more importantly, the density ratios between successive layers of the atmosphere. Furthermore, if we take statistics of the masses and bulk densities of the meteoroids, we can calculate the neutral densities and their associated errors using the ratio distribution of the minimum error statistic. A standard deviation of approximately 10% can be achieved, neglecting measurement error from the radar. Errors in velocity and deceleration compound this uncertainty, which in the best case amounts to an additional 4% error. The accuracy can be further improved if we take increasing amounts of measurements, limited only by the quality of the ranging measurements and the probability of knowing the median of the distribution. Data analyzed consist mainly of approximately 500 meteoroids over a span of 20 min on two separate days. The results are compared to the existing atmospheric model NRLMSISE-00, which predicts lower density ratios and static neutral densities at these altitudes.

  12. Integrated Bayesian Estimation of Zeff in the TEXTOR Tokamak from Bremsstrahlung and CX Impurity Density Measurements

    NASA Astrophysics Data System (ADS)

    Verdoolaege, G.; Von Hellermann, M. G.; Jaspers, R.; Ichir, M. M.; Van Oost, G.

    2006-11-01

    The validation of diagnostic data from a nuclear fusion experiment is an important issue. The concept of an Integrated Data Analysis (IDA) allows the consistent estimation of plasma parameters from heterogeneous data sets. Here, the determination of the ion effective charge (Zeff) is considered. Several diagnostic methods exist for the determination of Zeff, but the results are in general not in agreement. In this work, the problem of Zeff estimation on the TEXTOR tokamak is approached from the perspective of IDA, in the framework of Bayesian probability theory. The ultimate goal is the estimation of a full Zeff profile that is consistent both with measured bremsstrahlung emissivities, as well as individual impurity spectral line intensities obtained from Charge Exchange Recombination Spectroscopy (CXRS). We present an overview of the various uncertainties that enter the calculation of a Zeff profile from bremsstrahlung data on the one hand, and line intensity data on the other hand. We discuss a simple linear and nonlinear Bayesian model permitting the estimation of a central value for Zeff and the electron density ne on TEXTOR from bremsstrahlung emissivity measurements in the visible, and carbon densities derived from CXRS. Both the central Zeff and ne are sampled using an MCMC algorithm. An outlook is given towards possible model improvements.
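
    To make the sampling step concrete, here is a deliberately simplified toy, not the TEXTOR/IDA model: a Metropolis sampler for a central Zeff and electron density n_e, with the bremsstrahlung emissivity approximated as proportional to Zeff·n_e², ignoring the Gaunt factor, the T_e dependence, and profile effects. The calibration constant, measurement values, and priors are all assumptions for illustration.

```python
# Toy sketch only (not the TEXTOR/IDA model): Metropolis sampling of central Zeff and n_e from a
# bremsstrahlung-like emissivity measurement plus an interferometer density constraint.
# The scaling eps ~ Zeff * n_e^2 is a deliberate simplification.
import numpy as np

rng = np.random.default_rng(1)
C = 1.0e-26                              # assumed calibration constant (arbitrary units)
eps_meas, eps_err = 2.7e1, 0.3e1         # "measured" emissivity and its uncertainty (toy values)
ne_meas, ne_err = 3.0e13, 0.2e13         # line-averaged density from interferometry (cm^-3, toy)

def log_post(zeff, ne):
    if not (1.0 <= zeff <= 8.0 and ne > 0):
        return -np.inf                   # flat prior on a physically reasonable range
    eps_model = C * zeff * ne**2
    return (-0.5 * ((eps_model - eps_meas) / eps_err) ** 2
            - 0.5 * ((ne - ne_meas) / ne_err) ** 2)

chain, state = [], np.array([2.0, 3.0e13])
lp = log_post(*state)
for _ in range(20_000):
    prop = state + rng.normal(0, [0.1, 0.05e13])
    lp_prop = log_post(*prop)
    if np.log(rng.random()) < lp_prop - lp:          # Metropolis accept/reject
        state, lp = prop, lp_prop
    chain.append(state.copy())
zeff_samples = np.array(chain)[5_000:, 0]            # discard burn-in
print(f"Zeff = {zeff_samples.mean():.2f} +/- {zeff_samples.std():.2f}")
```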

  13. A Wiener-Wavelet-Based filter for de-noising satellite soil moisture retrievals

    NASA Astrophysics Data System (ADS)

    Massari, Christian; Brocca, Luca; Ciabatta, Luca; Moramarco, Tommaso; Su, Chun-Hsu; Ryu, Dongryeol; Wagner, Wolfgang

    2014-05-01

    The reduction of noise in microwave satellite soil moisture (SM) retrievals is of paramount importance for practical applications especially for those associated with the study of climate changes, droughts, floods and other related hydrological processes. So far, Fourier based methods have been used for de-noising satellite SM retrievals by filtering either the observed emissivity time series (Du, 2012) or the retrieved SM observations (Su et al. 2013). This contribution introduces an alternative approach based on a Wiener-Wavelet-Based filtering (WWB) technique, which uses the Entropy-Based Wavelet de-noising method developed by Sang et al. (2009) to design both a causal and a non-causal version of the filter. WWB is used as a post-retrieval processing tool to enhance the quality of observations derived from the i) Advanced Microwave Scanning Radiometer for the Earth observing system (AMSR-E), ii) the Advanced SCATterometer (ASCAT), and iii) the Soil Moisture and Ocean Salinity (SMOS) satellite. The method is tested on three pilot sites located in Spain (Remedhus Network), in Greece (Hydrological Observatory of Athens) and in Australia (Oznet network), respectively. Different quantitative criteria are used to judge the goodness of the de-noising technique. Results show that WWB i) is able to improve both the correlation and the root mean squared differences between satellite retrievals and in situ soil moisture observations, and ii) effectively separates random noise from deterministic components of the retrieved signals. Moreover, the use of WWB de-noised data in place of raw observations within a hydrological application confirms the usefulness of the proposed filtering technique. Du, J. (2012), A method to improve satellite soil moisture retrievals based on Fourier analysis, Geophys. Res. Lett., 39, L15404, doi:10.1029/2012GL052435 Su, C.-H., D. Ryu, A. W. Western, and W. Wagner (2013), De-noising of passive and active microwave satellite soil moisture time
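
    For illustration only, and not the entropy-based method of Sang et al. (2009) used in WWB, the following applies an empirical Wiener-style shrinkage to the wavelet coefficients of a synthetic soil-moisture series; the PyWavelets package, the sym8 wavelet, and the noise level are assumptions.

```python
# Illustrative sketch only (not the entropy-based filter used by WWB): empirical Wiener-style
# shrinkage of wavelet coefficients for a noisy synthetic soil-moisture time series.
import numpy as np
import pywt

rng = np.random.default_rng(2)
t = np.arange(1024)
truth = 0.25 + 0.10 * np.sin(2 * np.pi * t / 365.0)       # slow seasonal signal (volumetric SM)
noisy = truth + rng.normal(0.0, 0.03, t.size)             # retrieval noise (assumed level)

coeffs = pywt.wavedec(noisy, "sym8", level=5)
sigma2 = (np.median(np.abs(coeffs[-1])) / 0.6745) ** 2    # noise variance from finest level
shrunk = [coeffs[0]]
for c in coeffs[1:]:
    signal_var = max(np.mean(c**2) - sigma2, 0.0)         # per-level signal variance estimate
    gain = signal_var / (signal_var + sigma2)             # Wiener gain in [0, 1)
    shrunk.append(gain * c)
denoised = pywt.waverec(shrunk, "sym8")[: noisy.size]
```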

  14. Multiscale seismic characterization of marine sediments by using a wavelet-based approach

    NASA Astrophysics Data System (ADS)

    Ker, Stephan; Le Gonidec, Yves; Gibert, Dominique

    2015-04-01

    We propose a wavelet-based method to characterize acoustic impedance discontinuities from a multiscale analysis of reflected seismic waves. This method is developed in the framework of the wavelet response (WR) where dilated wavelets are used to sound a complex seismic reflector defined by a multiscale impedance structure. In the context of seismic imaging, we use the WR as multiscale seismic attributes, in particular ridge functions which contain most of the information that quantifies the complex geometry of the reflector. We extend this approach by considering its application to analyse seismic data acquired with broadband but frequency limited source signals. The band-pass filter related to such actual sources distorts the WR: in order to remove these effects, we develop an original processing based on fractional derivatives of Lévy alpha-stable distributions in the formalism of the continuous wavelet transform (CWT). We demonstrate that the CWT of a seismic trace involving such a finite frequency bandwidth can be made equivalent to the CWT of the impulse response of the subsurface and is defined for a reduced range of dilations, controlled by the seismic source signal. In this dilation range, the multiscale seismic attributes are corrected from distortions and we can thus merge multiresolution seismic sources to increase the frequency range of the multiscale analysis. As a first demonstration, we perform the source-correction with the high and very high resolution seismic sources of the SYSIF deep-towed seismic device and we show that both can now be perfectly merged into an equivalent seismic source with an improved frequency bandwidth (220-2200 Hz). Such multiresolution seismic data fusion allows reconstructing the acoustic impedance of the subseabed based on the inverse wavelet transform properties extended to the source-corrected WR. We illustrate the potential of this approach with deep-water seismic data acquired during the ERIG3D cruise and we compare
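
    A minimal sketch of the core operation, the continuous wavelet transform of a single trace with a Morlet wavelet, is shown below; the PyWavelets package, the toy trace, and the scale range are assumptions, and the source correction via fractional derivatives of Lévy alpha-stable distributions is not reproduced.

```python
# Minimal sketch: continuous wavelet transform (CWT) of a single reflection trace with a Morlet
# wavelet, assuming PyWavelets. The source correction described in the abstract is not included.
import numpy as np
import pywt

dt = 1.0e-4                                          # 10 kHz sampling interval (assumed)
t = np.arange(0, 0.2, dt)
trace = np.exp(-((t - 0.08) / 0.002) ** 2) * np.sin(2 * np.pi * 600 * t)   # toy reflection

scales = np.geomspace(2, 128, 64)                    # range of dilations (illustrative)
coefs, freqs = pywt.cwt(trace, scales, "morl", sampling_period=dt)
ridge = np.abs(coefs).max(axis=0)                    # crude ridge-function proxy versus time
```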

  15. Target detection in inhomogeneous non-Gaussian hyperspectral data based on nonparametric density estimation

    NASA Astrophysics Data System (ADS)

    Tidhar, G. A.; Rotman, S. R.

    2013-05-01

    Performance of algorithms for target signal detection in Hyperspectral Imagery (HSI) often deteriorates when the data are neither statistically homogeneous nor Gaussian, or when their Joint Probability Density (JPD) does not match any presumed parametric model. In this paper we propose a novel detection algorithm which first attempts to divide the data domain into mostly Gaussian and mostly non-Gaussian (NG) subspaces, and then estimates the JPD of the NG subspace with a non-parametric graph-based estimator. It then combines commonly used detection algorithms operating on the mostly-Gaussian subspace and an LRT calculated directly with the estimated JPD of the NG subspace, to detect anomalies and known additive-type target signals. The algorithm performance is compared to commonly used algorithms and is found to be superior in some important cases.

  16. Validation tests of an improved kernel density estimation method for identifying disease clusters

    NASA Astrophysics Data System (ADS)

    Cai, Qiang; Rushton, Gerard; Bhaduri, Budhendra

    2012-07-01

    The spatial filter method, which belongs to the class of kernel density estimation methods, has been used to make morbidity and mortality maps in several recent studies. We propose improvements in the method to include spatially adaptive filters to achieve constant standard error of the relative risk estimates; a staircase weight method for weighting observations to reduce estimation bias; and a parameter selection tool to enhance disease cluster detection performance, measured by sensitivity, specificity, and false discovery rate. We test the performance of the method using Monte Carlo simulations of hypothetical disease clusters over a test area of four counties in Iowa. The simulations include different types of spatial disease patterns and high-resolution population distribution data. Results confirm that the new features of the spatial filter method do substantially improve its performance in realistic situations comparable to those where the method is likely to be used.
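
    The following sketch conveys the spirit of a spatially adaptive filter, growing each kernel until a minimum population is covered before computing the local rate. The function name, parameters, and the minimum-population rule itself are illustrative assumptions rather than the authors' implementation.

```python
# Sketch only, loosely following the idea of a spatially adaptive filter: at each grid point the
# radius grows until a minimum population is covered, then the local disease rate is computed.
import numpy as np

def adaptive_rates(grid_xy, case_xy, pop_xy, pop_counts, min_pop=5_000):
    """grid_xy: (m,2) evaluation points; case_xy: (c,2) cases; pop_xy/pop_counts: population."""
    rates = np.empty(len(grid_xy))
    for i, g in enumerate(grid_xy):
        d_pop = np.linalg.norm(pop_xy - g, axis=1)
        order = np.argsort(d_pop)
        cum_pop = np.cumsum(pop_counts[order])
        k = np.searchsorted(cum_pop, min_pop)          # expand until >= min_pop people covered
        k = min(k, len(order) - 1)
        radius = d_pop[order[k]]
        cases = np.sum(np.linalg.norm(case_xy - g, axis=1) <= radius)
        rates[i] = cases / cum_pop[k]                  # locally constant-precision rate estimate
    return rates
```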

  17. Estimates of the Electron Density Profile on LTX Using FMCW Reflectometry and mm-Wave Interferometry

    NASA Astrophysics Data System (ADS)

    Peebles, W. A.; Kubota, S.; Nguyen, X. V.; Holoman, T.; Kaita, R.; Kozub, T.; Labrie, D.; Schmitt, J. C.; Majeski, R.

    2014-10-01

    An FMCW (frequency-modulated continuous-wave) reflectometer has been installed on the Lithium Tokamak Experiment (LTX) for electron density profile and fluctuation measurements. This diagnostic consists of two channels using bistatic antennas with a combined frequency coverage of 13.5-33 GHz, which corresponds to electron density measurements in the range of 0.2-1.3 × 10^13 cm^-3 (in O-mode). Initial measurements will utilize O-mode polarization, which will require modeling of the plasma edge. Reflections from the center stack (delayometry above the peak cutoff frequency), as well as line density measurements from a 296 GHz interferometer (single-chord, radial midplane), will provide constraints for the profile reconstruction/estimate. Typical chord-averaged line densities on LTX range from 2-6 × 10^12 cm^-3, which correspond to peak densities of 0.6-1.8 × 10^13 cm^-3 assuming a parabolic shape. If available, EFIT/LRDFIT results will provide additional constraints, as well as the possibility of utilizing data from measurements with X-mode or dual-mode (simultaneous O- and X-mode) polarization. Supported by U.S. DoE Grants DE-FG02-99ER54527 and DE-AC02-09CH11466.
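
    The quoted density range follows directly from the O-mode cutoff condition, where the wave frequency equals the local plasma frequency; a quick worked check with standard physical constants reproduces it.

```python
# Worked check of the quoted density range: for O-mode reflectometry the cutoff occurs where the
# wave frequency equals the plasma frequency, n_c = 4*pi^2*f^2*eps0*m_e / e^2.
import numpy as np
from scipy.constants import epsilon_0, electron_mass, elementary_charge

def cutoff_density_cm3(f_hz):
    n_m3 = 4 * np.pi**2 * f_hz**2 * epsilon_0 * electron_mass / elementary_charge**2
    return n_m3 * 1e-6                                  # convert m^-3 to cm^-3

for f in (13.5e9, 33e9):
    print(f"{f/1e9:5.1f} GHz -> {cutoff_density_cm3(f):.2e} cm^-3")
# prints roughly 2.3e12 and 1.35e13, i.e. the 0.2-1.3 x 10^13 cm^-3 range quoted in the abstract
```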

  18. Pedotransfer functions for Irish soils - estimation of bulk density (ρb) per horizon type

    NASA Astrophysics Data System (ADS)

    Reidy, B.; Simo, I.; Sills, P.; Creamer, R. E.

    2015-10-01

    Soil bulk density is a key property in defining soil characteristics. It describes the packing structure of the soil and is also essential for the measurement of soil carbon stock and nutrient assessment. In many older surveys this property was neglected, and in many modern surveys it is omitted because of the laboratory and labour costs involved, or because the core method cannot be applied. To overcome these oversights, pedotransfer functions are applied using other known soil properties to estimate bulk density. Pedotransfer functions have been derived from large international datasets across many studies, with their own inherent biases, many ignoring horizonation and depth variances. Initially, pedotransfer functions from the literature were used to predict bulk density for different horizon types using known local bulk density datasets. The best-performing pedotransfer functions were then selected, recalibrated, and validated again using the known data. The predicted coefficient of determination was 0.5 or greater in 12 of the 17 horizon types studied. These new equations allowed gap filling where bulk density data were missing for part or all of a soil profile. This then allowed the development of an indicative soil bulk density map for Ireland at 0-30 and 30-50 cm horizon depths. In general, the horizons with the largest known datasets had the best predictions, using the recalibrated and validated pedotransfer functions.
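
    A compact sketch of the recalibrate-then-validate workflow for a single horizon type is given below. The functional form (bulk density as a linear function of log organic carbon), the synthetic data, and the calibration/validation split are assumptions for illustration, not the equations derived in the study.

```python
# Sketch of the recalibrate-then-validate workflow for one horizon type, assuming a simple
# pedotransfer form rho_b = a + b * ln(organic carbon). Form and data are illustrative only.
import numpy as np

rng = np.random.default_rng(7)
oc = rng.uniform(0.5, 12.0, 200)                                # organic carbon (%), one horizon
rho = 1.6 - 0.25 * np.log(oc) + rng.normal(0, 0.08, oc.size)    # measured bulk density (g cm^-3)

train, test = slice(0, 150), slice(150, None)                   # calibration / validation split
coeffs = np.polyfit(np.log(oc[train]), rho[train], deg=1)       # recalibrate the coefficients
pred = np.polyval(coeffs, np.log(oc[test]))

ss_res = np.sum((rho[test] - pred) ** 2)
ss_tot = np.sum((rho[test] - rho[test].mean()) ** 2)
print(f"validation R^2 = {1 - ss_res / ss_tot:.2f}")            # compare with the 0.5 threshold
```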

  19. mBEEF-vdW: Robust fitting of error estimation density functionals

    NASA Astrophysics Data System (ADS)

    Lundgaard, Keld T.; Wellendorff, Jess; Voss, Johannes; Jacobsen, Karsten W.; Bligaard, Thomas

    2016-06-01

    We propose a general-purpose semilocal/nonlocal exchange-correlation functional approximation, named mBEEF-vdW. The exchange is a meta generalized gradient approximation, and the correlation is a semilocal and nonlocal mixture, with the Rutgers-Chalmers approximation for van der Waals (vdW) forces. The functional is fitted within the Bayesian error estimation functional (BEEF) framework [J. Wellendorff et al., Phys. Rev. B 85, 235149 (2012), 10.1103/PhysRevB.85.235149; J. Wellendorff et al., J. Chem. Phys. 140, 144107 (2014), 10.1063/1.4870397]. We improve the previously used fitting procedures by introducing a robust MM-estimator based loss function, reducing the sensitivity to outliers in the datasets. To more reliably determine the optimal model complexity, we furthermore introduce a generalization of the bootstrap 0.632 estimator with hierarchical bootstrap sampling and geometric mean estimator over the training datasets. Using this estimator, we show that the robust loss function leads to a 10 % improvement in the estimated prediction error over the previously used least-squares loss function. The mBEEF-vdW functional is benchmarked against popular density functional approximations over a wide range of datasets relevant for heterogeneous catalysis, including datasets that were not used for its training. Overall, we find that mBEEF-vdW has a higher general accuracy than competing popular functionals, and it is one of the best performing functionals on chemisorption systems, surface energies, lattice constants, and dispersion. We also show the potential-energy curve of graphene on the nickel(111) surface, where mBEEF-vdW matches the experimental binding length. mBEEF-vdW is currently available in gpaw and other density functional theory codes through Libxc, version 3.0.0.

  20. New type of wavelet-based spectral analysis by which modes with different toroidal mode number are separated

    NASA Astrophysics Data System (ADS)

    Ohdachi, S.

    2016-11-01

    A new type of wavelet-based analysis of magnetic fluctuations, by which the toroidal mode number can be resolved, is proposed. By using a wavelet having a different phase toroidally, a spectrogram for a specific toroidal mode number can be obtained. When this analysis is applied to measurements of the fluctuations observed in the Large Helical Device, MHD activities having similar frequencies in the laboratory frame can be separated by the difference in their toroidal mode numbers. This is useful for non-stationary MHD activity. The method is usable when the toroidal magnetic probes are not symmetrically distributed.

  1. Heterogeneous occupancy and density estimates of the pathogenic fungus Batrachochytrium dendrobatidis in waters of North America

    USGS Publications Warehouse

    Chestnut, Tara E.; Anderson, Chauncey; Popa, Radu; Blaustein, Andrew R.; Voytek, Mary; Olson, Deanna H.; Kirshtein, Julie

    2014-01-01

    Biodiversity losses are occurring worldwide due to a combination of stressors. For example, by one estimate, 40% of amphibian species are vulnerable to extinction, and disease is one threat to amphibian populations. The emerging infectious disease chytridiomycosis, caused by the aquatic fungus Batrachochytrium dendrobatidis (Bd), is a contributor to amphibian declines worldwide. Bd research has focused on the dynamics of the pathogen in its amphibian hosts, with little emphasis on investigating the dynamics of free-living Bd. Therefore, we investigated patterns of Bd occupancy and density in amphibian habitats using occupancy models, powerful tools for estimating site occupancy and detection probability. Occupancy models have been used to investigate diseases where the focus was on pathogen occurrence in the host. We applied occupancy models to investigate free-living Bd in North American surface waters to determine Bd seasonality, relationships between Bd site occupancy and habitat attributes, and probability of detection from water samples as a function of the number of samples, sample volume, and water quality. We also report on the temporal patterns of Bd density from a 4-year case study of a Bd-positive wetland. We provide evidence that Bd occurs in the environment year-round. Bd exhibited temporal and spatial heterogeneity in density, but did not exhibit seasonality in occupancy. Bd was detected in all months, typically at less than 100 zoospores L−1. The highest density observed was ∼3 million zoospores L−1. We detected Bd in 47% of sites sampled, but estimated that Bd occupied 61% of sites, highlighting the importance of accounting for imperfect detection. When Bd was present, there was a 95% chance of detecting it with four samples of 600 ml of water or five samples of 60 mL. Our findings provide important baseline information to advance the study of Bd disease ecology, and advance our understanding of amphibian exposure to free-living Bd in aquatic
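
    The sampling statement can be checked with a back-of-the-envelope calculation, assuming independent water samples with a constant per-sample detection probability p, so that the probability of at least one detection in n samples is 1-(1-p)^n.

```python
# Back-of-envelope check of the sampling statement, assuming independent water samples with a
# constant per-sample detection probability p: P(detect at least once in n samples) = 1-(1-p)^n.
p_600ml = 1 - 0.05 ** (1 / 4)     # per-sample p implied by "95% chance with four 600 mL samples"
p_60ml = 1 - 0.05 ** (1 / 5)      # per-sample p implied by "95% chance with five 60 mL samples"
print(f"implied per-sample detection probability: {p_600ml:.2f} (600 mL), {p_60ml:.2f} (60 mL)")
# ~0.53 and ~0.45 per sample; the occupancy model itself also corrects for imperfect detection.
```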

  2. Heterogeneous occupancy and density estimates of the pathogenic fungus Batrachochytrium dendrobatidis in waters of North America.

    PubMed

    Chestnut, Tara; Anderson, Chauncey; Popa, Radu; Blaustein, Andrew R; Voytek, Mary; Olson, Deanna H; Kirshtein, Julie

    2014-01-01

    Biodiversity losses are occurring worldwide due to a combination of stressors. For example, by one estimate, 40% of amphibian species are vulnerable to extinction, and disease is one threat to amphibian populations. The emerging infectious disease chytridiomycosis, caused by the aquatic fungus Batrachochytrium dendrobatidis (Bd), is a contributor to amphibian declines worldwide. Bd research has focused on the dynamics of the pathogen in its amphibian hosts, with little emphasis on investigating the dynamics of free-living Bd. Therefore, we investigated patterns of Bd occupancy and density in amphibian habitats using occupancy models, powerful tools for estimating site occupancy and detection probability. Occupancy models have been used to investigate diseases where the focus was on pathogen occurrence in the host. We applied occupancy models to investigate free-living Bd in North American surface waters to determine Bd seasonality, relationships between Bd site occupancy and habitat attributes, and probability of detection from water samples as a function of the number of samples, sample volume, and water quality. We also report on the temporal patterns of Bd density from a 4-year case study of a Bd-positive wetland. We provide evidence that Bd occurs in the environment year-round. Bd exhibited temporal and spatial heterogeneity in density, but did not exhibit seasonality in occupancy. Bd was detected in all months, typically at less than 100 zoospores L(-1). The highest density observed was ∼3 million zoospores L(-1). We detected Bd in 47% of sites sampled, but estimated that Bd occupied 61% of sites, highlighting the importance of accounting for imperfect detection. When Bd was present, there was a 95% chance of detecting it with four samples of 600 ml of water or five samples of 60 mL. Our findings provide important baseline information to advance the study of Bd disease ecology, and advance our understanding of amphibian exposure to free-living Bd in aquatic

  4. Heterogeneous Occupancy and Density Estimates of the Pathogenic Fungus Batrachochytrium dendrobatidis in Waters of North America

    PubMed Central

    Chestnut, Tara; Anderson, Chauncey; Popa, Radu; Blaustein, Andrew R.; Voytek, Mary; Olson, Deanna H.; Kirshtein, Julie

    2014-01-01

    Biodiversity losses are occurring worldwide due to a combination of stressors. For example, by one estimate, 40% of amphibian species are vulnerable to extinction, and disease is one threat to amphibian populations. The emerging infectious disease chytridiomycosis, caused by the aquatic fungus Batrachochytrium dendrobatidis (Bd), is a contributor to amphibian declines worldwide. Bd research has focused on the dynamics of the pathogen in its amphibian hosts, with little emphasis on investigating the dynamics of free-living Bd. Therefore, we investigated patterns of Bd occupancy and density in amphibian habitats using occupancy models, powerful tools for estimating site occupancy and detection probability. Occupancy models have been used to investigate diseases where the focus was on pathogen occurrence in the host. We applied occupancy models to investigate free-living Bd in North American surface waters to determine Bd seasonality, relationships between Bd site occupancy and habitat attributes, and probability of detection from water samples as a function of the number of samples, sample volume, and water quality. We also report on the temporal patterns of Bd density from a 4-year case study of a Bd-positive wetland. We provide evidence that Bd occurs in the environment year-round. Bd exhibited temporal and spatial heterogeneity in density, but did not exhibit seasonality in occupancy. Bd was detected in all months, typically at less than 100 zoospores L−1. The highest density observed was ∼3 million zoospores L−1. We detected Bd in 47% of sites sampled, but estimated that Bd occupied 61% of sites, highlighting the importance of accounting for imperfect detection. When Bd was present, there was a 95% chance of detecting it with four samples of 600 ml of water or five samples of 60 mL. Our findings provide important baseline information to advance the study of Bd disease ecology, and advance our understanding of amphibian exposure to free-living Bd in aquatic

  5. Estimates of density, detection probability, and factors influencing detection of burrowing owls in the Mojave Desert

    USGS Publications Warehouse

    Crowe, D.E.; Longshore, K.M.

    2010-01-01

    We estimated relative abundance and density of Western Burrowing Owls (Athene cunicularia hypugaea) at two sites in the Mojave Desert (2003-04). We made modifications to previously established Burrowing Owl survey techniques for use in desert shrublands and evaluated several factors that might influence the detection of owls. We tested the effectiveness of the call-broadcast technique for surveying this species, the efficiency of this technique at early and late breeding stages, and the effectiveness of various numbers of vocalization intervals during broadcasting sessions. Only 1 (3%) of 31 initial (new) owl responses was detected during passive-listening sessions. We found that surveying early in the nesting season was more likely to produce new owl detections compared to surveying later in the nesting season. New owls detected during each of the three vocalization intervals (each consisting of 30 sec of vocalizations followed by 30 sec of silence) of our broadcasting session were similar (37%, 40%, and 23%; n = 30). We used a combination of detection trials (sighting probability) and double-observer method to estimate the components of detection probability, i.e., availability and perception. Availability for all sites and years, as determined by detection trials, ranged from 46.1-58.2%. Relative abundance, measured as frequency of occurrence and defined as the proportion of surveys with at least one owl, ranged from 19.2-32.0% for both sites and years. Density at our eastern Mojave Desert site was estimated at 0.09 ± 0.01 (SE) owl territories/km2 and 0.16 ± 0.02 (SE) owl territories/km2 during 2003 and 2004, respectively. In our southern Mojave Desert site, density estimates were 0.09 ± 0.02 (SE) owl territories/km2 and 0.08 ± 0.02 (SE) owl territories/km2 during 2004 and 2005, respectively. © 2010 The Raptor Research Foundation, Inc.

  6. An Analytical Planning Model to Estimate the Optimal Density of Charging Stations for Electric Vehicles.

    PubMed

    Ahn, Yongjun; Yeo, Hwasoo

    2015-01-01

    The charging infrastructure location problem is becoming more significant due to the extensive adoption of electric vehicles. Efficient charging station planning can solve deeply rooted problems, such as driving-range anxiety and the stagnation of new electric vehicle consumers. In the initial stage of introducing electric vehicles, the allocation of charging stations is difficult to determine due to the uncertainty of candidate sites and unidentified charging demands, which are determined by diverse variables. This paper introduces the Estimating the Required Density of EV Charging (ERDEC) stations model, which is an analytical approach to estimating the optimal density of charging stations for certain urban areas, which are subsequently aggregated to city level planning. The optimal charging station's density is derived to minimize the total cost. A numerical study is conducted to obtain the correlations among the various parameters in the proposed model, such as regional parameters, technological parameters and coefficient factors. To investigate the effect of technological advances, the corresponding changes in the optimal density and total cost are also examined by various combinations of technological parameters. Daejeon city in South Korea is selected for the case study to examine the applicability of the model to real-world problems. With real taxi trajectory data, the optimal density map of charging stations is generated. These results can provide the optimal number of chargers for driving without driving-range anxiety. In the initial planning phase of installing charging infrastructure, the proposed model can be applied to a relatively extensive area to encourage the usage of electric vehicles, especially areas that lack information, such as exact candidate sites for charging stations and other data related with electric vehicles. The methods and results of this paper can serve as a planning guideline to facilitate the extensive adoption of electric
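
    The trade-off that such a model formalises can be illustrated with a deliberately simplified cost function; this is not the ERDEC model, and the coefficients and detour-distance approximation below are assumptions. Installation cost grows with station density while access cost shrinks with it, and the optimum density minimises their sum.

```python
# Deliberately simplified illustration of the trade-off the ERDEC model formalises (NOT the
# model in the paper): installation cost grows with station density, access cost shrinks with
# it, and the optimal density minimises the sum. All coefficients are assumed values.
import numpy as np

c_station = 50_000.0     # assumed annualised cost per station
c_access = 2.0           # assumed cost per km of detour per charging trip
trips = 1_200.0          # assumed charging trips per km^2 per year

def total_cost(density):
    mean_detour = 0.5 / np.sqrt(density)         # rough mean distance to the nearest station (km)
    return c_station * density + c_access * trips * mean_detour

densities = np.linspace(0.02, 1.0, 500)          # candidate densities (stations per km^2)
d_opt = densities[np.argmin(total_cost(densities))]
print(f"illustrative optimal density: {d_opt:.2f} stations per km^2")
```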

  7. An Analytical Planning Model to Estimate the Optimal Density of Charging Stations for Electric Vehicles

    PubMed Central

    Ahn, Yongjun; Yeo, Hwasoo

    2015-01-01

    The charging infrastructure location problem is becoming more significant due to the extensive adoption of electric vehicles. Efficient charging station planning can solve deeply rooted problems, such as driving-range anxiety and the stagnation of new electric vehicle consumers. In the initial stage of introducing electric vehicles, the allocation of charging stations is difficult to determine due to the uncertainty of candidate sites and unidentified charging demands, which are determined by diverse variables. This paper introduces the Estimating the Required Density of EV Charging (ERDEC) stations model, which is an analytical approach to estimating the optimal density of charging stations for certain urban areas, which are subsequently aggregated to city level planning. The optimal charging station’s density is derived to minimize the total cost. A numerical study is conducted to obtain the correlations among the various parameters in the proposed model, such as regional parameters, technological parameters and coefficient factors. To investigate the effect of technological advances, the corresponding changes in the optimal density and total cost are also examined by various combinations of technological parameters. Daejeon city in South Korea is selected for the case study to examine the applicability of the model to real-world problems. With real taxi trajectory data, the optimal density map of charging stations is generated. These results can provide the optimal number of chargers for driving without driving-range anxiety. In the initial planning phase of installing charging infrastructure, the proposed model can be applied to a relatively extensive area to encourage the usage of electric vehicles, especially areas that lack information, such as exact candidate sites for charging stations and other data related with electric vehicles. The methods and results of this paper can serve as a planning guideline to facilitate the extensive adoption of electric

  9. On the method of logarithmic cumulants for parametric probability density function estimation.

    PubMed

    Krylov, Vladimir A; Moser, Gabriele; Serpico, Sebastiano B; Zerubia, Josiane

    2013-10-01

    Parameter estimation of probability density functions is one of the major steps in the area of statistical image and signal processing. In this paper we explore several properties and limitations of the recently proposed method of logarithmic cumulants (MoLC) parameter estimation approach which is an alternative to the classical maximum likelihood (ML) and method of moments (MoM) approaches. We derive the general sufficient condition for a strong consistency of the MoLC estimates which represents an important asymptotic property of any statistical estimator. This result enables the demonstration of the strong consistency of MoLC estimates for a selection of widely used distribution families originating from (but not restricted to) synthetic aperture radar image processing. We then derive the analytical conditions of applicability of MoLC to samples for the distribution families in our selection. Finally, we conduct various synthetic and real data experiments to assess the comparative properties, applicability and small sample performance of MoLC notably for the generalized gamma and K families of distributions. Supervised image classification experiments are considered for medical ultrasound and remote-sensing SAR imagery. The obtained results suggest that MoLC is a feasible and computationally fast yet not universally applicable alternative to MoM. MoLC becomes especially useful when the direct ML approach turns out to be unfeasible.
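
    As a concrete sketch of MoLC for one of the families mentioned, consider the gamma distribution: with shape k and scale s, the first two log-cumulants are c1 = psi(k) + ln(s) and c2 = psi'(k), so k is recovered by inverting the trigamma function and s then follows from c1. The snippet below assumes SciPy and synthetic data.

```python
# Sketch of the method of logarithmic cumulants (MoLC) for the gamma family: the first two
# log-cumulants of Gamma(shape k, scale s) are c1 = psi(k) + ln(s) and c2 = psi'(k).
import numpy as np
from scipy.special import digamma, polygamma
from scipy.optimize import brentq

rng = np.random.default_rng(3)
x = rng.gamma(shape=2.5, scale=1.7, size=20_000)            # synthetic positive-valued sample

c1 = np.mean(np.log(x))                                      # first sample log-cumulant
c2 = np.var(np.log(x))                                       # second sample log-cumulant
k_hat = brentq(lambda k: polygamma(1, k) - c2, 1e-3, 1e3)    # invert the trigamma function
s_hat = np.exp(c1 - digamma(k_hat))
print(f"MoLC estimates: shape {k_hat:.2f}, scale {s_hat:.2f}")   # close to 2.5 and 1.7
```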

  10. On the method of logarithmic cumulants for parametric probability density function estimation.

    PubMed

    Krylov, Vladimir A; Moser, Gabriele; Serpico, Sebastiano B; Zerubia, Josiane

    2013-10-01

    Parameter estimation of probability density functions is one of the major steps in the area of statistical image and signal processing. In this paper we explore several properties and limitations of the recently proposed method of logarithmic cumulants (MoLC) parameter estimation approach which is an alternative to the classical maximum likelihood (ML) and method of moments (MoM) approaches. We derive the general sufficient condition for a strong consistency of the MoLC estimates which represents an important asymptotic property of any statistical estimator. This result enables the demonstration of the strong consistency of MoLC estimates for a selection of widely used distribution families originating from (but not restricted to) synthetic aperture radar image processing. We then derive the analytical conditions of applicability of MoLC to samples for the distribution families in our selection. Finally, we conduct various synthetic and real data experiments to assess the comparative properties, applicability and small sample performance of MoLC notably for the generalized gamma and K families of distributions. Supervised image classification experiments are considered for medical ultrasound and remote-sensing SAR imagery. The obtained results suggest that MoLC is a feasible and computationally fast yet not universally applicable alternative to MoM. MoLC becomes especially useful when the direct ML approach turns out to be unfeasible. PMID:23799694

  11. Estimating black bear population density and genetic diversity at Tensas River, Louisiana using microsatellite DNA markers

    USGS Publications Warehouse

    Boersen, Mark R.; Clark, Joseph D.; King, Tim L.

    2003-01-01

    The Recovery Plan for the federally threatened Louisiana black bear (Ursus americanus luteolus) mandates that remnant populations be estimated and monitored. In 1999 we obtained genetic material with barbed-wire hair traps to estimate bear population size and genetic diversity at the 329-km2 Tensas River Tract, Louisiana. We constructed and monitored 122 hair traps, which produced 1,939 hair samples. Of those, we randomly selected 116 subsamples for genetic analysis and used up to 12 microsatellite DNA markers to obtain multilocus genotypes for 58 individuals. We used Program CAPTURE to compute estimates of population size using multiple mark-recapture models. The area of study was almost entirely circumscribed by agricultural land, thus the population was geographically closed. Also, study-area boundaries were biologically discrete, enabling us to accurately estimate population density. Using model Chao Mh to account for possible effects of individual heterogeneity in capture probabilities, we estimated the population size to be 119 (SE=29.4) bears, or 0.36 bears/km2. We were forced to examine a substantial number of loci to differentiate between some individuals because of low genetic variation. Despite the probable introduction of genes from Minnesota bears in the 1960s, the isolated population at Tensas exhibited characteristics consistent with inbreeding and genetic drift. Consequently, the effective population size at Tensas may be as few as 32, which warrants continued monitoring or possibly genetic augmentation.

  12. Density-based load estimation using two-dimensional finite element models: a parametric study.

    PubMed

    Bona, Max A; Martin, Larry D; Fischer, Kenneth J

    2006-08-01

    A parametric investigation was conducted to determine the effects on the load estimation method of varying: (1) the thickness of back-plates used in the two-dimensional finite element models of long bones, (2) the number of columns of nodes in the outer medial and lateral sections of the diaphysis to which the back-plate multipoint constraints are applied and (3) the region of bone used in the optimization procedure of the density-based load estimation technique. The study is performed using two-dimensional finite element models of the proximal femora of a chimpanzee, gorilla, lion and grizzly bear. It is shown that the density-based load estimation can be made more efficient and accurate by restricting the stimulus optimization region to the metaphysis/epiphysis. In addition, a simple method, based on the variation of diaphyseal cortical thickness, is developed for assigning the thickness to the back-plate. It is also shown that the number of columns of nodes used as multipoint constraints does not have a significant effect on the method. PMID:17132530

  13. Estimating the neutrally buoyant energy density of a Rankine-cycle/fuel-cell underwater propulsion system

    NASA Astrophysics Data System (ADS)

    Waters, Daniel F.; Cadou, Christopher P.

    2014-02-01

    A unique requirement of underwater vehicles' power/energy systems is that they remain neutrally buoyant over the course of a mission. Previous work published in the Journal of Power Sources reported gross as opposed to neutrally-buoyant energy densities of an integrated solid oxide fuel cell/Rankine-cycle based power system based on the exothermic reaction of aluminum with seawater. This paper corrects this shortcoming by presenting a model for estimating system mass and using it to update the key findings of the original paper in the context of the neutral buoyancy requirement. It also presents an expanded sensitivity analysis to illustrate the influence of various design and modeling assumptions. While energy density is very sensitive to turbine efficiency (sensitivity coefficient in excess of 0.60), it is relatively insensitive to all other major design parameters (sensitivity coefficients < 0.15) like compressor efficiency, inlet water temperature, scaling methodology, etc. The neutral buoyancy requirement introduces a significant (∼15%) energy density penalty but overall the system still appears to offer factors of five to eight improvements in energy density (i.e., vehicle range/endurance) over present battery-based technologies.

  14. Estimating Absolute Salinity (SA) in the World's Oceans Using Density and Composition

    NASA Astrophysics Data System (ADS)

    Woosley, R. J.; Huang, F.; Millero, F. J., Jr.

    2014-12-01

    The practical salinity (Sp), which is determined by the relationship of conductivity to the known proportions of the major components of seawater, and reference salinity (SR = (35.16504/35)*Sp), do not account for variations in physical properties such as density and enthalpy. Trace and minor components of seawater, such as nutrients or inorganic carbon and total alkalinity affect these properties and contribute to the absolute salinity (SA). This limitation has been recognized and several studies have been made to estimate the effect of these compositional changes on the conductivity-density relationship. These studies have been limited in number and geographic scope. Here, we combine the measurements of previous studies with new measurements for a total of 2,857 conductivity-density measurements covering all of the world's major oceans to derive empirical equations for the effect of silica and total alkalinity on the density and absolute salinity of the global oceans and recommend an equation applicable to most of the world oceans. The potential impact on salinity as a result of uptake of anthropogenic CO2 is also discussed.

  15. Magnetic fields, plasma densities, and plasma beta parameters estimated from high-frequency zebra fine structures

    NASA Astrophysics Data System (ADS)

    Karlický, M.; Jiricka, K.

    2002-10-01

    Using the recent model of the radio zebra fine structures (Ledenev et al. 2001), the magnetic fields, plasma densities, and plasma beta parameters are estimated from high-frequency zebra fine structures. It was found that in the flare radio source of high-frequency (1-2 GHz) zebras the densities and magnetic fields vary in the intervals of (1-4) × 10^10 cm^-3 and 40-230 G, respectively. Assuming then a flare temperature of about 10^7 K, the plasma beta parameters in the zebra radio sources are in the 0.05-0.81 interval. Thus the plasma pressure effects in such radio sources, especially in those with many zebra lines, are not negligible.

  16. "Prospecting Asteroids: Indirect technique to estimate overall density and inner composition"

    NASA Astrophysics Data System (ADS)

    Such, Pamela

    2016-07-01

    Spectroscopic studies of asteroids make it possible to obtain some information on their surface composition, but say little about the innermost material, porosity and density of the object. In addition, spectroscopic observations are affected by "space weathering" produced by the bombardment of charged particles, which for certain materials changes their chemical structure, albedo and other physical properties, partly altering the chances of identifying them. Data such as the mass, size and density of asteroids are essential when proposing space missions in order to determine the best candidates for space exploration, and it is of great importance to be able to estimate them a priori, remotely from Earth. For many years the masses of the largest asteroids have been determined by studying the gravitational effects they have on smaller asteroids during close approaches (see Davis and Bender, 1977; Schubart and Matson, 1979; Scholl et al. 1987; Hoffman, 1989b, among others), but estimates of the masses of the smallest objects are limited to the effects that occur in extremely close encounters with other asteroids of similar size. This paper presents the results of a search for pairs of asteroids that approach within 0.0004 AU (50,000 km) of each other, in order to study their masses through the astrometric method and, in the future, to estimate their densities and internal composition. References Davis, D. R., and D. F. Bender. 1977. Asteroid mass determinations: search for further encounter opportunities. Bull. Am. Astron. Soc. 9, 502-503. Hoffman, M. 1989b. Asteroid mass determination: Present situation and perspectives. In Asteroids II (R. P. Binzel, T. Gehrels, and M. S. Matthews, Eds.), pp. 228-239. Univ. Arizona Press, Tucson. Scholl, H., L. D. Schmadel and S. Roser 1987. The mass of the asteroid (10) Hygiea derived from observations of (829) Academia. Astron. Astrophys. 179, 311-316. Schubart, J. and D. L. Matson 1979. Masses and

  17. Efficient 3D movement-based kernel density estimator and application to wildlife ecology

    USGS Publications Warehouse

    Tracey-PR, Jeff; Sheppard, James K.; Lockwood, Glenn K.; Chourasia, Amit; Tatineni, Mahidhar; Fisher, Robert N.; Sinkovits, Robert S.

    2014-01-01

    We describe an efficient implementation of a 3D movement-based kernel density estimator for determining animal space use from discrete GPS measurements. This new method provides more accurate results, particularly for species that make large excursions in the vertical dimension. The downside of this approach is that it is much more computationally expensive than simpler, lower-dimensional models. Through a combination of code restructuring, parallelization and performance optimization, we were able to reduce the time to solution by up to a factor of 1000x, thereby greatly improving the applicability of the method.
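
    For orientation, a plain fixed-bandwidth 3-D kernel density estimate over GPS fixes can be computed with SciPy as below; note this is not the movement-based estimator of the paper, only the simpler baseline it generalises, and the simulated fixes are illustrative.

```python
# Minimal sketch of a plain 3-D kernel density estimate over GPS fixes with SciPy. This is a
# fixed-bandwidth Gaussian KDE, not the movement-based estimator described in the abstract.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(4)
# columns: easting (m), northing (m), altitude/depth (m) for simulated GPS fixes
fixes = rng.multivariate_normal([0, 0, -20], np.diag([400.0**2, 300.0**2, 8.0**2]), size=2_000)

kde = gaussian_kde(fixes.T)                        # gaussian_kde expects shape (n_dims, n_points)
grid = np.array([[0.0, 100.0, 500.0],              # three query points, one per column
                 [0.0, 50.0, 200.0],
                 [-20.0, -25.0, -5.0]])
print(kde(grid))                                    # utilisation density at the query points
```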

  18. Automated voxelization of 3D atom probe data through kernel density estimation.

    PubMed

    Srinivasan, Srikant; Kaluskar, Kaustubh; Dumpala, Santoshrupa; Broderick, Scott; Rajan, Krishna

    2015-12-01

    Identifying nanoscale chemical features from atom probe tomography (APT) data routinely involves adjustment of voxel size as an input parameter, through visual supervision, making the final outcome user dependent, reliant on heuristic knowledge and potentially prone to error. This work utilizes Kernel density estimators to select an optimal voxel size in an unsupervised manner to perform feature selection, in particular targeting resolution of interfacial features and chemistries. The capability of this approach is demonstrated through analysis of the γ / γ' interface in a Ni-Al-Cr superalloy. PMID:25825028

  19. Estimations of electric field effects on the oxygen reduction reaction based on the density functional theory.

    PubMed

    Karlberg, G S; Rossmeisl, J; Nørskov, J K

    2007-10-01

    By varying the external electric field in density functional theory (DFT) calculations we have estimated the impact of the local electric field in the electric double layer on the oxygen reduction reaction (ORR). Potentially, including the local electric field could change adsorption energies and barriers substantially, thereby affecting the reaction mechanism predicted for ORR on different metals. To estimate the effect of local electric fields on ORR we combine the DFT results at various external electric field strengths with a previously developed model of electrochemical reactions which fully accounts for the effect of the electrode potential. We find that the local electric field only slightly affects the output of the model. Hence, the general picture obtained without inclusion of the electric field still persists. However, for accurate predictions at oxygen reduction potentials close to the volcano top local electric field effects may be of importance.

  20. Validation tests of an improved kernel density estimation method for identifying disease clusters

    SciTech Connect

    Cai, Qiang; Rushton, Gerald; Bhaduri, Budhendra L

    2011-01-01

    The spatial filter method, which belongs to the class of kernel density estimation methods, has been used to make morbidity and mortality maps in several recent studies. We propose improvements in the method that include a spatial basis of support designed to give a constant standard error for the standardized mortality/morbidity rate; a stair-case weight method for weighting observations to reduce estimation bias; and a method for selecting parameters to control three measures of performance of the method: sensitivity, specificity and false discovery rate. We test the performance of the method using Monte Carlo simulations of hypothetical disease clusters over a test area of four counties in Iowa. The simulations include different types of spatial disease patterns and high resolution population distribution data. Results confirm that the new features of the spatial filter method do substantially improve its performance in realistic situations comparable to those where the method is likely to be used.

  1. Daniell method for power spectral density estimation in atomic force microscopy.

    PubMed

    Labuda, Aleksander

    2016-03-01

    An alternative method for power spectral density (PSD) estimation--the Daniell method--is revisited and compared to the most prevalent method used in the field of atomic force microscopy for quantifying cantilever thermal motion--the Bartlett method. Both methods are shown to underestimate the Q factor of a simple harmonic oscillator (SHO) by a predictable, and therefore correctable, amount in the absence of spurious deterministic noise sources. However, the Bartlett method is much more prone to spectral leakage which can obscure the thermal spectrum in the presence of deterministic noise. By the significant reduction in spectral leakage, the Daniell method leads to a more accurate representation of the true PSD and enables clear identification and rejection of deterministic noise peaks. This benefit is especially valuable for the development of automated PSD fitting algorithms for robust and accurate estimation of SHO parameters from a thermal spectrum. PMID:27036781
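
    For concreteness, here is a sketch of the two estimators under their assumed forms (Bartlett: average of periodograms of non-overlapping segments; Daniell: a single full-length periodogram smoothed over adjacent frequency bins), applied to a synthetic signal; the sampling rate, segment length and smoothing width are illustrative only.

```python
# Hedged sketch: Bartlett vs Daniell PSD estimates on a synthetic deflection signal.
import numpy as np
from scipy.signal import periodogram, welch
from scipy.ndimage import uniform_filter1d

fs = 1.0e6                                   # sample rate (Hz), hypothetical
t = np.arange(2**16) / fs
x = np.sin(2 * np.pi * 75e3 * t) + np.random.default_rng(1).normal(0.0, 1.0, t.size)

# Bartlett: rectangular window, non-overlapping segments, periodograms averaged
f_b, psd_bartlett = welch(x, fs=fs, window="boxcar", nperseg=4096, noverlap=0)

# Daniell: one long periodogram, then a moving average across neighbouring bins
f_d, pxx = periodogram(x, fs=fs, window="boxcar")
psd_daniell = uniform_filter1d(pxx, size=15)
```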

  2. Bayesian semiparametric power spectral density estimation with applications in gravitational wave data analysis

    NASA Astrophysics Data System (ADS)

    Edwards, Matthew C.; Meyer, Renate; Christensen, Nelson

    2015-09-01

    The standard noise model in gravitational wave (GW) data analysis assumes detector noise is stationary and Gaussian distributed, with a known power spectral density (PSD) that is usually estimated using clean off-source data. Real GW data often depart from these assumptions, and misspecified parametric models of the PSD could result in misleading inferences. We propose a Bayesian semiparametric approach to improve this. We use a nonparametric Bernstein polynomial prior on the PSD, with weights attained via a Dirichlet process distribution, and update this using the Whittle likelihood. Posterior samples are obtained using a blocked Metropolis-within-Gibbs sampler. We simultaneously estimate the reconstruction parameters of a rotating core collapse supernova GW burst that has been embedded in simulated Advanced LIGO noise. We also discuss an approach to deal with nonstationary data by breaking longer data streams into smaller and locally stationary components.

  3. Haze effect removal from image via haze density estimation in optical model.

    PubMed

    Yeh, Chia-Hung; Kang, Li-Wei; Lee, Ming-Sui; Lin, Cheng-Yang

    2013-11-01

    Images/videos captured from optical devices are usually degraded by turbid media such as haze, smoke, fog, rain and snow. Haze is the most common problem in outdoor scenes because of the atmospheric conditions. This paper proposes a novel single image-based dehazing framework to remove haze artifacts from images, where we propose two novel image priors, called the pixel-based dark channel prior and the pixel-based bright channel prior. Based on the two priors with the haze optical model, we propose to estimate atmospheric light via haze density analysis. We can then estimate the transmission map and refine it via a bilateral filter. As a result, high-quality haze-free images can be recovered with lower computational complexity compared with the state-of-the-art approach based on the patch-based dark channel prior.
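
    As a rough illustration of pixel-wise priors under the generic haze optical model I(x) = J(x)t(x) + A(1 - t(x)), the sketch below computes per-pixel dark and bright channels and a simple transmission estimate; the atmospheric-light heuristic and all parameters are assumptions, not the paper's exact procedure.

```python
# Hedged sketch: pixel-based dark/bright channels and a simple dehazing step.
import numpy as np

def dehaze_pixelwise(img, omega=0.95, t_min=0.1):
    """img: float RGB array in [0, 1], shape (H, W, 3)."""
    dark = img.min(axis=2)                 # pixel-based dark channel
    bright = img.max(axis=2)               # pixel-based bright channel (not used below)
    # crude atmospheric light: mean colour of the haziest 0.1% of pixels
    idx = np.argsort(dark.ravel())[-max(1, dark.size // 1000):]
    A = img.reshape(-1, 3)[idx].mean(axis=0)
    transmission = 1.0 - omega * (img / A).min(axis=2)
    transmission = np.clip(transmission, t_min, 1.0)[..., None]
    return np.clip((img - A) / transmission + A, 0.0, 1.0)
```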

  4. TME12/400: Application Oriented Wavelet-based Coding of Volumetric Medical Data

    PubMed Central

    Menegaz, G; Grewe, L; Lozano, A; Thiran, J-Ph

    1999-01-01

    Introduction While medical data are increasingly acquired in a multidimensional space, in clinical practice they are mainly still analyzed as images. We propose a wavelet-based coding technique exploiting the full dimensionality of the data distribution while allowing a single image to be recovered without decoding the whole volume. The proposed compression scheme is based on the Layered Zero Coding (LZC) method. Two modes are considered. In the progressive (PROG) mode, the volume is processed as a whole, while in the layer-per-layer (LPL) mode each layer of each sub-band is encoded independently. The three-dimensional extension of the Embedded Zerotree Wavelet (EZW) coder is used as a reference for coding efficiency. All working modalities provide a fully embedded bit-stream allowing progressive-by-quality recovery of the encoded information. Methods The 3D DWT is performed mapping integers to integers, thus allowing lossless compression. Two different coding systems have been considered: EZW and LZC. LZC models the expected statistical dependencies among coefficients by defining conditional terms (contexts) which summarize the significance state of the samples belonging to a generalized neighborhood of the coefficient being encoded. Such terms are then used by a context-adaptive arithmetic coder. The LPL mode has been designed to allow any image of the dataset to be decoded independently, and it is derived from the PROG mode by over-constraining the system. The sub-bands are quantized and encoded according to a sequence of uniform quantizers with decreasing step size. This ensures progressiveness capabilities when decoding both the whole volume and a single image. Results Performances have been evaluated on two datasets: DSR and ANGIO, an ophthalmologic angiographic sequence. For each mode the best context has been retained. Results show that the proposed system is competitive with EZW, and the PROG mode performs best. The main factors

  5. Three-dimensional Wavelet-based Adaptive Mesh Refinement for Global Atmospheric Chemical Transport Modeling

    NASA Astrophysics Data System (ADS)

    Rastigejev, Y.; Semakin, A. N.

    2013-12-01

    Accurate numerical simulations of global scale three-dimensional atmospheric chemical transport models (CTMs) are essential for studies of many important atmospheric chemistry problems such as the adverse effects of air pollutants on human health, ecosystems and the Earth's climate. These simulations usually require large CPU time due to numerical difficulties associated with a wide range of spatial and temporal scales, nonlinearity and a large number of reacting species. In our previous work we have shown that in order to achieve adequate convergence rate and accuracy, the mesh spacing in numerical simulation of global synoptic-scale pollution plume transport must be decreased to a few kilometers. This resolution is difficult to achieve for global CTMs on uniform or quasi-uniform grids. To address the difficulty described above, we developed a three-dimensional Wavelet-based Adaptive Mesh Refinement (WAMR) algorithm. The method employs a highly non-uniform adaptive grid with fine resolution over the areas of interest without requiring small grid spacing throughout the entire domain. The method uses a multigrid iterative solver that naturally takes advantage of the multilevel structure of the adaptive grid. In order to represent the multilevel adaptive grid efficiently, a dynamic data structure based on indirect memory addressing has been developed. The data structure allows rapid access to individual points, fast inter-grid operations and re-gridding. The WAMR method has been implemented on parallel computer architectures. The parallel algorithm is based on a run-time partitioning and load-balancing scheme for the adaptive grid. The partitioning scheme maintains locality to reduce communications between computing nodes. The parallel scheme was found to be cost-effective. Specifically, we obtained an order of magnitude increase in computational speed for numerical simulations performed on a twelve-core single processor workstation. We have applied the WAMR method for numerical

  6. A more appropriate white blood cell count for estimating malaria parasite density in Plasmodium vivax patients in northeastern Myanmar.

    PubMed

    Liu, Huaie; Feng, Guohua; Zeng, Weilin; Li, Xiaomei; Bai, Yao; Deng, Shuang; Ruan, Yonghua; Morris, James; Li, Siman; Yang, Zhaoqing; Cui, Liwang

    2016-04-01

    The conventional method of estimating parasite densities employs an assumption of 8000 white blood cells (WBCs)/μl. However, due to leucopenia in malaria patients, this number appears to overestimate parasite densities. In this study, we assessed the accuracy of parasite densities estimated using this assumed WBC count in eastern Myanmar, where Plasmodium vivax has become increasingly prevalent. From 256 patients with uncomplicated P. vivax malaria, we estimated parasite density and counted WBCs by using an automated blood cell counter. It was found that WBC counts were not significantly different between patients of different gender, axillary temperature, and body mass index levels, whereas they were significantly different between age groups of patients and the time points of measurement. The median parasite densities calculated with the actual WBC counts (1903/μl) and the assumed WBC count of 8000/μl (2570/μl) were significantly different. We demonstrated that using the assumed WBC count of 8000 cells/μl to estimate parasite densities of P. vivax malaria patients in this area would lead to an overestimation. For P. vivax patients aged five years and older, an assumed WBC count of 5500/μl best estimated parasite densities. This study provides more realistic assumed WBC counts for estimating parasite densities in P. vivax patients from low-endemicity areas of Southeast Asia.
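
    For readers unfamiliar with the underlying arithmetic, the standard thick-smear calculation behind these figures is density = (parasites counted / WBCs counted) x assumed WBC count per microlitre; the worked example below uses a hypothetical count of 64 parasites against 200 WBCs.

```python
# Worked example of the thick-smear parasite density calculation (hypothetical counts).
def parasite_density(parasites_counted, wbc_counted=200, wbc_per_ul=8000):
    """Parasites per microlitre of blood."""
    return parasites_counted * wbc_per_ul / wbc_counted

print(parasite_density(64))                   # 2560.0 with the conventional 8000 WBC/ul
print(parasite_density(64, wbc_per_ul=5500))  # 1760.0 with the study's suggested 5500 WBC/ul
```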

  7. Constrained Kalman Filtering Via Density Function Truncation for Turbofan Engine Health Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Dan; Simon, Donald L.

    2006-01-01

    Kalman filters are often used to estimate the state variables of a dynamic system. However, in the application of Kalman filters some known signal information is often either ignored or dealt with heuristically. For instance, state variable constraints (which may be based on physical considerations) are often neglected because they do not fit easily into the structure of the Kalman filter. This paper develops an analytic method of incorporating state variable inequality constraints in the Kalman filter. The resultant filter truncates the PDF (probability density function) of the Kalman filter estimate at the known constraints and then computes the constrained filter estimate as the mean of the truncated PDF. The incorporation of state variable constraints increases the computational effort of the filter but significantly improves its estimation accuracy. The improvement is demonstrated via simulation results obtained from a turbofan engine model. The turbofan engine model contains 3 state variables, 11 measurements, and 10 component health parameters. It is also shown that the truncated Kalman filter may be a more accurate way of incorporating inequality constraints than other constrained filters (e.g., the projection approach to constrained filtering).
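
    A minimal scalar sketch of the PDF-truncation step, assuming a Gaussian state estimate and a simple interval constraint: the constrained estimate is taken as the mean of the normal density truncated at the bounds. The numbers and variable names are illustrative, not taken from the engine model.

```python
# Hedged sketch: constrain a scalar Kalman estimate by truncating its Gaussian PDF.
from scipy.stats import truncnorm

def constrained_estimate(x_hat, var, lower, upper):
    sigma = var ** 0.5
    a, b = (lower - x_hat) / sigma, (upper - x_hat) / sigma  # standardised bounds
    return truncnorm.mean(a, b, loc=x_hat, scale=sigma)       # mean of the truncated PDF

# unconstrained estimate 1.2 with variance 0.5, physical constraint x in [0, 1]
print(constrained_estimate(1.2, 0.5, 0.0, 1.0))               # pulled back inside the bounds
```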

  8. Density and Biomass Estimates by Removal for an Amazonian Crocodilian, Paleosuchus palpebrosus

    PubMed Central

    2016-01-01

    Direct counts of crocodilians are rarely feasible and it is difficult to meet the assumptions of mark-recapture methods for most species in most habitats. Catch-out experiments are also usually not logistically or morally justifiable because it would be necessary to destroy the habitat in order to be confident that most individuals had been captured. We took advantage of the draining and filling of a large area of flooded forest during the building of the Santo Antônio dam on the Madeira River to obtain accurate estimates of the density and biomass of Paleosuchus palpebrosus. The density, 28.4 non-hatchling individuals per km2, is one of the highest reported for any crocodilian, except for species that are temporarily concentrated in small areas during dry-season drought. The biomass estimate of 63.15 kg*km-2 is higher than that for most or even all mammalian carnivores in tropical forest. P. palpebrosus may be one of the World´s most abundant crocodilians. PMID:27224473

  9. Gene Ontology density estimation and discourse analysis for automatic GeneRiF extraction

    PubMed Central

    Gobeill, Julien; Tbahriti, Imad; Ehrler, Frédéric; Mottaz, Anaïs; Veuthey, Anne-Lise; Ruch, Patrick

    2008-01-01

    Background This paper describes and evaluates a sentence selection engine that extracts a GeneRiF (Gene Reference into Functions) as defined in ENTREZ-Gene based on a MEDLINE record. Inputs for this task include both a gene and a pointer to a MEDLINE reference. In the suggested approach we merge two independent sentence extraction strategies. The first proposed strategy (LASt) uses argumentative features, inspired by discourse-analysis models. The second extraction scheme (GOEx) uses an automatic text categorizer to estimate the density of Gene Ontology categories in every sentence; thus providing a full ranking of all possible candidate GeneRiFs. A combination of the two approaches is proposed, which also aims at reducing the size of the selected segment by filtering out non-content bearing rhetorical phrases. Results Based on the TREC-2003 Genomics collection for GeneRiF identification, the LASt extraction strategy is already competitive (52.78%). When used in a combined approach, the extraction task clearly shows improvement, achieving a Dice score of over 57% (+10%). Conclusions Argumentative representation levels and conceptual density estimation using Gene Ontology contents appear complementary for functional annotation in proteomics. PMID:18426554
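
    Since the evaluation above is reported as a Dice score, a small sketch of that measure computed over word sets is given below; the tokenisation and example sentences are assumptions, not the paper's exact evaluation protocol.

```python
# Hedged sketch: Dice overlap between a candidate sentence and a reference GeneRiF.
def dice(candidate, reference):
    a, b = set(candidate.lower().split()), set(reference.lower().split())
    return 2 * len(a & b) / (len(a) + len(b)) if (a or b) else 0.0

print(dice("protein kinase regulates cell growth",
           "the kinase regulates growth of the cell"))   # ~0.73
```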

  10. Density and Biomass Estimates by Removal for an Amazonian Crocodilian, Paleosuchus palpebrosus.

    PubMed

    Campos, Zilca; Magnusson, William E

    2016-01-01

    Direct counts of crocodilians are rarely feasible and it is difficult to meet the assumptions of mark-recapture methods for most species in most habitats. Catch-out experiments are also usually not logistically or morally justifiable because it would be necessary to destroy the habitat in order to be confident that most individuals had been captured. We took advantage of the draining and filling of a large area of flooded forest during the building of the Santo Antônio dam on the Madeira River to obtain accurate estimates of the density and biomass of Paleosuchus palpebrosus. The density, 28.4 non-hatchling individuals per km2, is one of the highest reported for any crocodilian, except for species that are temporarily concentrated in small areas during dry-season drought. The biomass estimate of 63.15 kg*km-2 is higher than that for most or even all mammalian carnivores in tropical forest. P. palpebrosus may be one of the World´s most abundant crocodilians.

  11. Process monitoring using kernel density estimation and Bayesian networking with an industrial case study.

    PubMed

    Gonzalez, Ruben; Huang, Biao; Lau, Eric

    2015-09-01

    Principal component analysis has been widely used in the process industries for the purpose of monitoring abnormal behaviour. The process of reducing dimension is obtained through PCA, while T-tests are used to test for abnormality. One of the main contributions to the success of PCA is its ability not only to detect problems, but also to give some indication as to where these problems are located. However, PCA and the T-test make use of Gaussian assumptions which may not be suitable in process fault detection. A previous modification of this method is the use of independent component analysis (ICA) for dimension reduction combined with kernel density estimation for detecting abnormality; like PCA, this method points out the location of problems based on linear data-driven methods, but without the Gaussian assumptions. Both ICA and PCA, however, suffer from challenges in interpreting results, which can make it difficult to act quickly once a fault has been detected online. This paper proposes the use of Bayesian networks for dimension reduction, which allows the use of process knowledge, enabling more intelligent dimension reduction and easier interpretation of results. The dimension reduction technique is combined with multivariate kernel density estimation, making this technique effective for non-linear relationships with non-Gaussian variables. The performance of PCA, ICA and Bayesian networks is compared on data from an industrial scale plant. PMID:25930233
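
    The density-threshold idea can be sketched as follows: fit a multivariate kernel density estimate to data from normal operation and flag new samples whose estimated density falls below a low quantile of the training densities. The sketch omits the Bayesian-network dimension reduction and uses made-up two-dimensional data.

```python
# Hedged sketch: multivariate KDE as a fault-detection threshold on synthetic data.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)
normal_ops = rng.multivariate_normal([0, 0], [[1.0, 0.6], [0.6, 1.0]], size=2000)
kde = gaussian_kde(normal_ops.T)                       # fit on normal operating data

threshold = np.quantile(kde(normal_ops.T), 0.01)       # lowest 1% of training densities

new_samples = np.array([[0.2, -0.1],                   # typical operating point
                        [4.5, -3.0]])                  # far outside normal operation
is_fault = kde(new_samples.T) < threshold
print(is_fault)                                        # expected: [False  True]
```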

  12. A volumetric method for estimation of breast density on digitized screen-film mammograms.

    PubMed

    Pawluczyk, Olga; Augustine, Bindu J; Yaffe, Martin J; Rico, Dan; Yang, Jiwei; Mawdsley, Gordon E; Boyd, Norman F

    2003-03-01

    A method is described for the quantitative volumetric analysis of mammographic density (VBD) from digitized screen-film mammograms. The method is based on initial calibration of the imaging system with a tissue-equivalent plastic device and the subsequent correction for variations in exposure factors and film processing characteristics through images of an aluminum step wedge placed adjacent to the breast during imaging. VBD is calculated from information about the compressed breast thickness and the technique factors used for taking the mammogram, as well as the information from the calibration device. First, optical sensitometry is used to convert images to log relative exposure. Second, the images are corrected for x-ray field inhomogeneity using a spherical-section PMMA phantom image. The effectiveness of using the aluminum step wedge in tracking the variations in exposure factors and film processing was tested by taking test images of the calibration device, aluminum step wedge and known-density phantoms at various exposure conditions and also at different times over one year. Results obtained on known-density phantoms show that VBD can be estimated to within 5% of the actual value. A first-order thickness correction is employed to correct for inaccuracy in the compression thickness indicator of the mammography units. Clinical studies are ongoing to evaluate whether VBD can be a better indicator for breast cancer risk. PMID:12674236

  13. Effects of LiDAR point density, sampling size and height threshold on estimation accuracy of crop biophysical parameters.

    PubMed

    Luo, Shezhou; Chen, Jing M; Wang, Cheng; Xi, Xiaohuan; Zeng, Hongcheng; Peng, Dailiang; Li, Dong

    2016-05-30

    Vegetation leaf area index (LAI), height, and aboveground biomass are key biophysical parameters. Corn is an important and globally distributed crop, and reliable estimations of these parameters are essential for corn yield forecasting, health monitoring and ecosystem modeling. Light Detection and Ranging (LiDAR) is considered an effective technology for estimating vegetation biophysical parameters. However, the estimation accuracies of these parameters are affected by multiple factors. In this study, we first estimated corn LAI, height and biomass (R2 = 0.80, 0.874 and 0.838, respectively) using the original LiDAR data (7.32 points/m2), and the results showed that LiDAR data could accurately estimate these biophysical parameters. Second, comprehensive research was conducted on the effects of LiDAR point density, sampling size and height threshold on the estimation accuracy of LAI, height and biomass. Our findings indicated that LiDAR point density had an important effect on the estimation accuracy for vegetation biophysical parameters; however, high point density did not always produce highly accurate estimates, and reduced point density could deliver reasonable estimation results. Furthermore, the results showed that sampling size and height threshold were additional key factors that affect the estimation accuracy of biophysical parameters. Therefore, the optimal sampling size and the height threshold should be determined to improve the estimation accuracy of biophysical parameters. Our results also implied that a higher LiDAR point density, larger sampling size and height threshold were required to obtain accurate corn LAI estimation when compared with height and biomass estimations. In general, our results provide valuable guidance for LiDAR data acquisition and estimation of vegetation biophysical parameters using LiDAR data. PMID:27410085

  14. Effects of LiDAR point density, sampling size and height threshold on estimation accuracy of crop biophysical parameters.

    PubMed

    Luo, Shezhou; Chen, Jing M; Wang, Cheng; Xi, Xiaohuan; Zeng, Hongcheng; Peng, Dailiang; Li, Dong

    2016-05-30

    Vegetation leaf area index (LAI), height, and aboveground biomass are key biophysical parameters. Corn is an important and globally distributed crop, and reliable estimations of these parameters are essential for corn yield forecasting, health monitoring and ecosystem modeling. Light Detection and Ranging (LiDAR) is considered an effective technology for estimating vegetation biophysical parameters. However, the estimation accuracies of these parameters are affected by multiple factors. In this study, we first estimated corn LAI, height and biomass (R2 = 0.80, 0.874 and 0.838, respectively) using the original LiDAR data (7.32 points/m2), and the results showed that LiDAR data could accurately estimate these biophysical parameters. Second, comprehensive research was conducted on the effects of LiDAR point density, sampling size and height threshold on the estimation accuracy of LAI, height and biomass. Our findings indicated that LiDAR point density had an important effect on the estimation accuracy for vegetation biophysical parameters; however, high point density did not always produce highly accurate estimates, and reduced point density could deliver reasonable estimation results. Furthermore, the results showed that sampling size and height threshold were additional key factors that affect the estimation accuracy of biophysical parameters. Therefore, the optimal sampling size and the height threshold should be determined to improve the estimation accuracy of biophysical parameters. Our results also implied that a higher LiDAR point density, larger sampling size and height threshold were required to obtain accurate corn LAI estimation when compared with height and biomass estimations. In general, our results provide valuable guidance for LiDAR data acquisition and estimation of vegetation biophysical parameters using LiDAR data.

  15. Diurnal and seasonal variability in radial distribution of sap flux density: Implications for estimating stand transpiration.

    PubMed

    Fiora, Alessandro; Cescatti, Alessandro

    2006-09-01

    Daily and seasonal patterns in radial distribution of sap flux density were monitored in six trees differing in social position in a mixed coniferous stand dominated by silver fir (Abies alba Miller) and Norway spruce (Picea abies (L.) Karst) in the Alps of northeastern Italy. Radial distribution of sap flux was measured with arrays of 1-cm-long Granier probes. The radial profiles were either Gaussian or decreased monotonically toward the tree center, and seemed to be related to social position and crown distribution of the trees. The ratio between sap flux estimated with the most external sensor and the mean flux, weighted with the corresponding annulus areas, was used as a correction factor (CF) to express diurnal and seasonal radial variation in sap flow. During sunny days, the diurnal radial profile of sap flux changed with time and accumulated photosynthetic active radiation (PAR), with an increasing contribution of sap flux in the inner sapwood during the day. Seasonally, the contribution of sap flux in the inner xylem increased with daily cumulative PAR and the variation of CF was proportional to the tree diameter, ranging from 29% for suppressed trees up to 300% for dominant trees. Two models were developed, relating CF with PAR and tree diameter at breast height (DBH), to correct daily and seasonal estimates of whole-tree and stand sap flow obtained by assuming uniform sap flux density over the sapwood. If the variability in the radial profile of sap flux density was not accounted for, total stand transpiration would be overestimated by 32% during sunny days and 40% for the entire season.
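
    To make the correction factor concrete, the sketch below computes CF as the ratio of the flux in the outermost 1-cm annulus to the area-weighted mean flux over all annuli; the radii and flux values are hypothetical, and the PAR/DBH regression models described above are not reproduced.

```python
# Hedged sketch: correction factor for radial variation in sap flux density.
import numpy as np

def correction_factor(outer_radii_cm, fluxes):
    """outer_radii_cm: outer radius of each 1-cm-deep annulus, outermost first.
    fluxes: sap flux density measured in each annulus (same order)."""
    outer = np.asarray(outer_radii_cm, dtype=float)
    inner = outer - 1.0
    areas = np.pi * (outer**2 - inner**2)          # annulus areas
    weighted_mean = np.average(fluxes, weights=areas)
    return fluxes[0] / weighted_mean               # outermost flux / area-weighted mean

# flux declining from the outermost annulus toward the tree centre (made-up values)
print(correction_factor([10, 9, 8, 7], [30.0, 22.0, 12.0, 4.0]))   # ~1.63
```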

  16. The Impact of Acquisition Dose on Quantitative Breast Density Estimation with Digital Mammography: Results from ACRIN PA 4006.

    PubMed

    Chen, Lin; Ray, Shonket; Keller, Brad M; Pertuz, Said; McDonald, Elizabeth S; Conant, Emily F; Kontos, Despina

    2016-09-01

    Purpose To investigate the impact of radiation dose on breast density estimation in digital mammography. Materials and Methods With institutional review board approval and Health Insurance Portability and Accountability Act compliance under waiver of consent, a cohort of women from the American College of Radiology Imaging Network Pennsylvania 4006 trial was retrospectively analyzed. All patients underwent breast screening with a combination of dose protocols, including standard full-field digital mammography, low-dose digital mammography, and digital breast tomosynthesis. A total of 5832 images from 486 women were analyzed with previously validated, fully automated software for quantitative estimation of density. Clinical Breast Imaging Reporting and Data System (BI-RADS) density assessment results were also available from the trial reports. The influence of image acquisition radiation dose on quantitative breast density estimation was investigated with analysis of variance and linear regression. Pairwise comparisons of density estimations at different dose levels were performed with Student t test. Agreement of estimation was evaluated with quartile-weighted Cohen kappa values and Bland-Altman limits of agreement. Results Radiation dose of image acquisition did not significantly affect quantitative density measurements (analysis of variance, P = .37 to P = .75), with percent density demonstrating a high overall correlation between protocols (r = 0.88-0.95; weighted κ = 0.83-0.90). However, differences in breast percent density (1.04% and 3.84%, P < .05) were observed within high BI-RADS density categories, although they were significantly correlated across the different acquisition dose levels (r = 0.76-0.92, P < .05). Conclusion Precision and reproducibility of automated breast density measurements with digital mammography are not substantially affected by variations in radiation dose; thus, the use of low-dose techniques for the purpose of density estimation

  17. The Impact of Acquisition Dose on Quantitative Breast Density Estimation with Digital Mammography: Results from ACRIN PA 4006.

    PubMed

    Chen, Lin; Ray, Shonket; Keller, Brad M; Pertuz, Said; McDonald, Elizabeth S; Conant, Emily F; Kontos, Despina

    2016-09-01

    Purpose To investigate the impact of radiation dose on breast density estimation in digital mammography. Materials and Methods With institutional review board approval and Health Insurance Portability and Accountability Act compliance under waiver of consent, a cohort of women from the American College of Radiology Imaging Network Pennsylvania 4006 trial was retrospectively analyzed. All patients underwent breast screening with a combination of dose protocols, including standard full-field digital mammography, low-dose digital mammography, and digital breast tomosynthesis. A total of 5832 images from 486 women were analyzed with previously validated, fully automated software for quantitative estimation of density. Clinical Breast Imaging Reporting and Data System (BI-RADS) density assessment results were also available from the trial reports. The influence of image acquisition radiation dose on quantitative breast density estimation was investigated with analysis of variance and linear regression. Pairwise comparisons of density estimations at different dose levels were performed with Student t test. Agreement of estimation was evaluated with quartile-weighted Cohen kappa values and Bland-Altman limits of agreement. Results Radiation dose of image acquisition did not significantly affect quantitative density measurements (analysis of variance, P = .37 to P = .75), with percent density demonstrating a high overall correlation between protocols (r = 0.88-0.95; weighted κ = 0.83-0.90). However, differences in breast percent density (1.04% and 3.84%, P < .05) were observed within high BI-RADS density categories, although they were significantly correlated across the different acquisition dose levels (r = 0.76-0.92, P < .05). Conclusion Precision and reproducibility of automated breast density measurements with digital mammography are not substantially affected by variations in radiation dose; thus, the use of low-dose techniques for the purpose of density estimation

  18. A Bayesian Hierarchical Model for Estimation of Abundance and Spatial Density of Aedes aegypti

    PubMed Central

    Villela, Daniel A. M.; Codeço, Claudia T.; Figueiredo, Felipe; Garcia, Gabriela A.; Maciel-de-Freitas, Rafael; Struchiner, Claudio J.

    2015-01-01

    Strategies to minimize dengue transmission commonly rely on vector control, which aims to maintain Ae. aegypti density below a theoretical threshold. Mosquito abundance is traditionally estimated from mark-release-recapture (MRR) experiments, which lack proper analysis regarding accurate vector spatial distribution and population density. Recently proposed strategies to control vector-borne diseases involve replacing the susceptible wild population with genetically modified individuals refractory to infection by the pathogen. Accurate measurements of mosquito abundance in time and space are required to optimize the success of such interventions. In this paper, we present a hierarchical probabilistic model for the estimation of population abundance and spatial distribution from typical mosquito MRR experiments, with direct application to the planning of these new control strategies. We perform a Bayesian analysis using the model and data from two MRR experiments performed in a neighborhood of Rio de Janeiro, Brazil, during both low- and high-dengue transmission seasons. The hierarchical model indicates that mosquito spatial distribution is clustered during the winter (0.99 mosquitoes/premise, 95% CI: 0.80–1.23) and more homogeneous during the high abundance period (5.2 mosquitoes/premise, 95% CI: 4.3–5.9). The hierarchical model also performed better than the commonly used Fisher-Ford method when using simulated data. The proposed model provides a formal treatment of the sources of uncertainty associated with the estimation of mosquito abundance imposed by the sampling design. Our approach is useful in strategies such as population suppression or the displacement of wild vector populations by refractory Wolbachia-infected mosquitoes, since the invasion dynamics have been shown to follow threshold conditions dictated by mosquito abundance. The presence of spatially distributed abundance hotspots is also formally addressed under this modeling framework and

  19. Bayes and empirical Bayes estimators of abundance and density from spatial capture-recapture data

    USGS Publications Warehouse

    Dorazio, Robert M.

    2013-01-01

    In capture-recapture and mark-resight surveys, movements of individuals both within and between sampling periods can alter the susceptibility of individuals to detection over the region of sampling. In these circumstances spatially explicit capture-recapture (SECR) models, which incorporate the observed locations of individuals, allow population density and abundance to be estimated while accounting for differences in detectability of individuals. In this paper I propose two Bayesian SECR models, one for the analysis of recaptures observed in trapping arrays and another for the analysis of recaptures observed in area searches. In formulating these models I used distinct submodels to specify the distribution of individual home-range centers and the observable recaptures associated with these individuals. This separation of ecological and observational processes allowed me to derive a formal connection between Bayes and empirical Bayes estimators of population abundance that has not been established previously. I showed that this connection applies to every Poisson point-process model of SECR data and provides theoretical support for a previously proposed estimator of abundance based on recaptures in trapping arrays. To illustrate results of both classical and Bayesian methods of analysis, I compared Bayes and empirical Bayes estimates of abundance and density using recaptures from simulated and real populations of animals. Real populations included two iconic datasets: recaptures of tigers detected in camera-trap surveys and recaptures of lizards detected in area-search surveys. In the datasets I analyzed, classical and Bayesian methods provided similar – and often identical – inferences, which is not surprising given the sample sizes and the noninformative priors used in the analyses.

  20. Comparison of breast percent density estimation from raw versus processed digital mammograms

    NASA Astrophysics Data System (ADS)

    Li, Diane; Gavenonis, Sara; Conant, Emily; Kontos, Despina

    2011-03-01

    We compared breast percent density (PD%) measures obtained from raw and post-processed digital mammographic (DM) images. Bilateral raw and post-processed medio-lateral oblique (MLO) images from 81 screening studies were retrospectively analyzed. Image acquisition was performed with a GE Healthcare DS full-field DM system. Image post-processing was performed using the PremiumViewTM algorithm (GE Healthcare). Area-based breast PD% was estimated by a radiologist using a semi-automated image thresholding technique (Cumulus, Univ. Toronto). Comparison of breast PD% between raw and post-processed DM images was performed using the Pearson correlation (r), linear regression, and Student's t-test. Intra-reader variability was assessed with a repeat read on the same data-set. Our results show that breast PD% measurements from raw and post-processed DM images have a high correlation (r=0.98, R2=0.95, p<0.001). Paired t-test comparison of breast PD% between the raw and the post-processed images showed a statistically significant difference equal to 1.2% (p = 0.006). Our results suggest that the relatively small magnitude of the absolute difference in PD% between raw and post-processed DM images is unlikely to be clinically significant in breast cancer risk stratification. Therefore, it may be feasible to use post-processed DM images for breast PD% estimation in clinical settings. Since most breast imaging clinics routinely use and store only the post-processed DM images, breast PD% estimation from post-processed data may accelerate the integration of breast density in breast cancer risk assessment models used in clinical practice.
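
    The comparison reported above boils down to a Pearson correlation and a paired t-test on matched percent-density readings; the sketch below reproduces that analysis on synthetic stand-ins for the 81 paired studies (the offset and spread are invented).

```python
# Hedged sketch: correlation and paired t-test between raw and processed PD% readings.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
pd_raw = rng.uniform(5, 60, 81)                     # PD% from raw images (synthetic)
pd_processed = pd_raw + rng.normal(1.2, 2.0, 81)    # slightly offset processed readings

r, p_corr = stats.pearsonr(pd_raw, pd_processed)
t, p_paired = stats.ttest_rel(pd_raw, pd_processed)
print(f"r = {r:.2f}, paired t-test p = {p_paired:.4f}")
```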

  1. Estimate of absolute geostrophic velocity from the density field in the northeastern Pacific Ocean

    SciTech Connect

    Coats, D.A.

    1981-09-20

    A pair of hydrographic sections (35°N, 155°W) was analyzed to compute absolute velocity by using a variation of the technique of Stommel and Schott (1977). Absolute velocity is determined from an integrated form of the potential vorticity equation by a technique suggested by Davis (1978). This study is the first application of this technique that allows a direct comparison between the uncertainty in estimating a smooth density field and the amount of imbalance in the system of model equations. Because the amount of incompatibility (imbalance) in the system of equations is far smaller than is allowed by the uncertainty in defining the smooth field, the model equation is considered adequate for this set of data. Below 400 m, the nearly constant zonal isopycnal slope indicates that potential vorticity is uniform on isopycnal surfaces. Since the method depends on resolving flow directions from the intersection of isopycnals and surfaces of constant potential vorticity, the absolute velocity is indeterminate in this region. The model equation does, however, constrain the structure of the meridional density field and requires a poleward shift in the latitude at which successively deeper isopycnals reach their maximum depth. The fact that this poleward translation can be predicted over several degrees of latitude suggests that potential vorticity is uniform over a substantial portion of the North Pacific subtropical gyre. This poleward translation of the density field is an aspect of subtropical density fields in general, and occurs in conjunction with a translation in the field of geopotential anomaly. It is directly related to the curvature in the deep portion of the beta spiral.

  2. Assessing a learning process with functional ANOVA estimators of EEG power spectral densities.

    PubMed

    Gutiérrez, David; Ramírez-Moreno, Mauricio A

    2016-04-01

    We propose to assess the process of learning a task using electroencephalographic (EEG) measurements. In particular, we quantify changes in brain activity associated to the progression of the learning experience through the functional analysis-of-variances (FANOVA) estimators of the EEG power spectral density (PSD). Such functional estimators provide a sense of the effect of training in the EEG dynamics. For that purpose, we implemented an experiment to monitor the process of learning to type using the Colemak keyboard layout during a twelve-lessons training. Hence, our aim is to identify statistically significant changes in PSD of various EEG rhythms at different stages and difficulty levels of the learning process. Those changes are taken into account only when a probabilistic measure of the cognitive state ensures the high engagement of the volunteer to the training. Based on this, a series of statistical tests are performed in order to determine the personalized frequencies and sensors at which changes in PSD occur, then the FANOVA estimates are computed and analyzed. Our experimental results showed a significant decrease in the power of [Formula: see text] and [Formula: see text] rhythms for ten volunteers during the learning process, and such decrease happens regardless of the difficulty of the lesson. These results are in agreement with previous reports of changes in PSD being associated to feature binding and memory encoding.

  3. SAR amplitude probability density function estimation based on a generalized Gaussian model.

    PubMed

    Moser, Gabriele; Zerubia, Josiane; Serpico, Sebastiano B

    2006-06-01

    In the context of remotely sensed data analysis, an important problem is the development of accurate models for the statistics of the pixel intensities. Focusing on synthetic aperture radar (SAR) data, this modeling process turns out to be a crucial task, for instance, for classification or for denoising purposes. In this paper, an innovative parametric estimation methodology for SAR amplitude data is proposed that adopts a generalized Gaussian (GG) model for the complex SAR backscattered signal. A closed-form expression for the corresponding amplitude probability density function (PDF) is derived and a specific parameter estimation algorithm is developed in order to deal with the proposed model. Specifically, the recently proposed "method-of-log-cumulants" (MoLC) is applied, which stems from the adoption of the Mellin transform (instead of the usual Fourier transform) in the computation of characteristic functions and from the corresponding generalization of the concepts of moment and cumulant. For the developed GG-based amplitude model, the resulting MoLC estimates turn out to be numerically feasible and are also analytically proved to be consistent. The proposed parametric approach was validated by using several real ERS-1, XSAR, E-SAR, and NASA/JPL airborne SAR images, and the experimental results prove that the method models the amplitude PDF better than several previously proposed parametric models for backscattering phenomena. PMID:16764268

  4. The Effect of Quantitative Computed Tomography Acquisition Protocols on Bone Mineral Density Estimation.

    PubMed

    Giambini, Hugo; Dragomir-Daescu, Dan; Huddleston, Paul M; Camp, Jon J; An, Kai-Nan; Nassr, Ahmad

    2015-11-01

    Osteoporosis is characterized by bony material loss and decreased bone strength, leading to a significant increase in fracture risk. Patient-specific quantitative computed tomography (QCT) finite element (FE) models may be used to predict fracture under physiological loading. Material properties for the FE models used to predict fracture are obtained by converting grayscale values from the CT into volumetric bone mineral density (vBMD) using calibration phantoms. If there are any variations arising from the CT acquisition protocol, vBMD estimation and material property assignment could be affected, in turn affecting fracture risk prediction. We hypothesized that material property assignments may depend on scanning and postprocessing settings, including voltage, current, and reconstruction kernel, and thus potentially affect fracture risk prediction. A rabbit femur and a standard calibration phantom were imaged by QCT using different protocols. Cortical and cancellous regions were segmented, their average Hounsfield unit (HU) values obtained and converted to vBMD. Estimated vBMD for the cortical and cancellous regions were affected by voltage and kernel but not by current. Our study demonstrated that there exists a significant variation in the estimated vBMD values obtained with different scanning acquisitions. In addition, the large noise differences observed utilizing different scanning parameters could have an important negative effect on small subregions containing fewer voxels. PMID:26355694
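
    The phantom-based conversion step can be sketched as a simple linear calibration: fit known phantom densities against their measured mean HU, then map tissue HU to vBMD. The phantom values below are placeholders, not those of any particular calibration device.

```python
# Hedged sketch: linear HU-to-vBMD calibration from phantom rods (made-up values).
import numpy as np

phantom_hu = np.array([-5.0, 58.0, 120.0, 185.0, 250.0])    # measured mean HU per rod
phantom_vbmd = np.array([0.0, 50.0, 100.0, 150.0, 200.0])   # known densities (mg/cm^3)

slope, intercept = np.polyfit(phantom_hu, phantom_vbmd, 1)  # least-squares line

def hu_to_vbmd(hu):
    return slope * hu + intercept

print(hu_to_vbmd(np.array([90.0, 210.0])))                  # vBMD of two hypothetical ROIs
```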

  5. Line Transect and Triangulation Surveys Provide Reliable Estimates of the Density of Kloss' Gibbons (Hylobates klossii) on Siberut Island, Indonesia.

    PubMed

    Höing, Andrea; Quinten, Marcel C; Indrawati, Yohana Maria; Cheyne, Susan M; Waltert, Matthias

    2013-02-01

    Estimating population densities of key species is crucial for many conservation programs. Density estimates provide baseline data and enable monitoring of population size. Several different survey methods are available, and the choice of method depends on the species and study aims. Few studies have compared the accuracy and efficiency of different survey methods for large mammals, particularly for primates. Here we compare estimates of density and abundance of Kloss' gibbons (Hylobates klossii) using two of the most common survey methods: line transect distance sampling and triangulation. Line transect surveys (survey effort: 155.5 km) produced a total of 101 auditory and visual encounters and a density estimate of 5.5 gibbon clusters (groups or subgroups of primate social units)/km(2). Triangulation conducted from 12 listening posts during the same period revealed a similar density estimate of 5.0 clusters/km(2). Coefficients of variation of cluster density estimates were slightly higher from triangulation (0.24) than from line transects (0.17), resulting in a lack of precision in detecting changes in cluster densities of <66 % for triangulation and <47 % for line transect surveys at the 5 % significance level with a statistical power of 50 %. This case study shows that both methods may provide estimates with similar accuracy but that line transects can result in more precise estimates and allow assessment of other primate species. For a rapid assessment of gibbon density under time and financial constraints, the triangulation method also may be appropriate.

  6. Critical assessment of liquid density estimation methods for multifunctional organic compounds and their use in atmospheric science.

    PubMed

    Barley, Mark H; Topping, David O; McFiggans, Gordon

    2013-04-25

    In order to model the properties and chemical composition of secondary organic aerosol (SOA), estimated physical property data for many thousands of organic compounds are required. Seven methods for estimating liquid density are assessed against experimental data for a test set of 56 multifunctional organic compounds. The group contribution method of Schroeder coupled with the Rackett equation using critical properties by Nannoolal was found to give the best liquid density values for this test set. During this work some problems with the representation of certain groups (aromatic amines and phenols) within the critical property estimation methods were identified, highlighting the importance (and difficulties) of deriving the parameters of group contribution methods from good quality experimental data. A selection of the estimation methods are applied to the 2742 compounds of an atmospheric chemistry mechanism, which showed that they provided consistent liquid density values for compounds with such atmospherically important (but poorly studied) functional groups as hydroperoxide, peroxide, peroxyacid, and PAN. Estimated liquid density values are also presented for a selection of compounds predicted to be important in atmospheric SOA. Hygroscopic growth factor (a property expected to depend on liquid density) has been calculated for a wide range of particle compositions. A low sensitivity of the growth factor to liquid density was found, and a single density value of 1350 kg·m(-3) could be used for all multicomponent SOA in the calculation of growth factors for comparison with experimentally measured values in the laboratory or the field without incurring significant error.
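
    The Rackett equation used in the best-performing combination above can be written as V = (R*Tc/Pc) * Zra^(1 + (1 - T/Tc)^(2/7)), with density = M/V; the sketch below shows only this step, with placeholder inputs roughly corresponding to n-hexane (the exact Zra value is an assumption).

```python
# Hedged sketch: saturated-liquid density from the Rackett equation (placeholder inputs).
R = 8.314462618  # gas constant, J mol^-1 K^-1

def rackett_density(T, Tc, Pc, Zra, molar_mass):
    """T, Tc in K; Pc in Pa; molar_mass in kg/mol; returns density in kg/m^3."""
    v_molar = (R * Tc / Pc) * Zra ** (1.0 + (1.0 - T / Tc) ** (2.0 / 7.0))
    return molar_mass / v_molar

# inputs roughly corresponding to n-hexane at 298 K (illustrative values only)
print(rackett_density(T=298.15, Tc=507.6, Pc=3.025e6, Zra=0.2635, molar_mass=0.08618))
```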

  7. Simulation of Electron Cloud Density Distributions in RHIC Dipoles at Injection and Transition and Estimates for Scrubbing Times

    SciTech Connect

    He,P.; Blaskiewicz, M.; Fischer, W.

    2009-01-02

    In this report we summarize electron-cloud simulations for the RHIC dipole regions at injection and transition to estimate if scrubbing over practical time scales at injection would reduce the electron cloud density at transition to significantly lower values. The lower electron cloud density at transition will allow for an increase in the ion intensity.

  8. Estimating the mass of Saturn's B ring

    NASA Astrophysics Data System (ADS)

    Hedman, Matthew M.; Nicholson, Philip D.

    2016-10-01

    The B ring is the brightest and most opaque of Saturn's rings, but it is also amongst the least well understood because basic parameters like its surface mass density have been poorly constrained. Elsewhere in the rings, spiral density waves driven by resonances with Saturn's various moons provide precise and robust mass density estimates, but for most of the B ring, extremely high opacities and strong stochastic optical depth variations obscure the signal from these wave patterns. We have developed a new wavelet-based technique that combines data from multiple stellar occultations (observed by the Visual and Infrared Mapping Spectrometer instrument onboard the Cassini spacecraft) that has allowed us to identify signals that appear to be due to waves generated by the strongest resonances in the central and outer B ring. These wave signatures yield new estimates of the B ring's mass density and indicate that the B ring's total mass could be quite low, between 1/3 and 2/3 the mass of Saturn's moon Mimas.

  9. Daniell method for power spectral density estimation in atomic force microscopy

    NASA Astrophysics Data System (ADS)

    Labuda, Aleksander

    2016-03-01

    An alternative method for power spectral density (PSD) estimation—the Daniell method—is revisited and compared to the most prevalent method used in the field of atomic force microscopy for quantifying cantilever thermal motion—the Bartlett method. Both methods are shown to underestimate the Q factor of a simple harmonic oscillator (SHO) by a predictable, and therefore correctable, amount in the absence of spurious deterministic noise sources. However, the Bartlett method is much more prone to spectral leakage which can obscure the thermal spectrum in the presence of deterministic noise. By the significant reduction in spectral leakage, the Daniell method leads to a more accurate representation of the true PSD and enables clear identification and rejection of deterministic noise peaks. This benefit is especially valuable for the development of automated PSD fitting algorithms for robust and accurate estimation of SHO parameters from a thermal spectrum.

  10. Feasibility of hydrogen density estimation from tomographic sensing of Lyman alpha emission

    NASA Astrophysics Data System (ADS)

    Waldrop, L.; Kamalabadi, F.; Ren, D.

    2015-12-01

    In this work, we describe the scientific motivation, basic principles, and feasibility of a new approach to the estimation of neutral hydrogen (H) density in the terrestrial exosphere based on the 3-D tomographic sensing of optically thin H emission at 121.6 nm (Lyman alpha). In contrast to existing techniques, Lyman alpha tomography allows for model-independent reconstruction of the underlying H distribution in support of investigations regarding the origin and time-dependent evolution of exospheric structure. We quantitatively describe the trade-off space between the measurement sampling rate, viewing geometry, and the spatial and temporal resolution of the reconstruction that is supported by the data. We demonstrate that this approach is feasible from either earth-orbiting satellites such as the stereoscopic NASA TWINS mission or from a CubeSat platform along a trans-exosphere trajectory such as that enabled by the upcoming Exploration Mission 1 launch.

  11. Heart Rate Variability and Wavelet-based Studies on ECG Signals from Smokers and Non-smokers

    NASA Astrophysics Data System (ADS)

    Pal, K.; Goel, R.; Champaty, B.; Samantray, S.; Tibarewala, D. N.

    2013-12-01

    The current study deals with heart rate variability (HRV) and wavelet-based ECG signal analysis of smokers and non-smokers. The HRV results indicated a dominance of sympathetic nervous system activity in smokers. The heart rate was found to be higher in smokers compared to non-smokers (p < 0.05). The frequency domain analysis showed an increase in the LF and LF/HF components with a subsequent decrease in the HF component. The HRV features were analyzed for classification of smokers and non-smokers. The results indicated that when the RMSSD, SD1 and RR-mean features were used concurrently, a classification efficiency of >90% was achieved. The wavelet decomposition of the ECG signal was done using the Daubechies (db6) wavelet family. No difference was observed between the smokers and non-smokers, which apparently suggests that smoking does not affect the conduction pathway of the heart.
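
    Two of the time-domain features named above can be computed directly from a series of RR intervals, as in the sketch below; the RR values are synthetic placeholders, and SD1 is taken as the standard deviation of successive differences divided by sqrt(2).

```python
# Hedged sketch: RMSSD and Poincare SD1 from RR intervals (synthetic values, in ms).
import numpy as np

def rmssd(rr):
    diffs = np.diff(rr)
    return np.sqrt(np.mean(diffs ** 2))

def sd1(rr):
    # SD1^2 = 0.5 * variance of successive differences
    return np.std(np.diff(rr), ddof=1) / np.sqrt(2.0)

rr = np.array([812, 790, 805, 779, 760, 798, 815, 802], dtype=float)
print(rmssd(rr), sd1(rr), rr.mean())   # RMSSD, SD1 and RR-mean
```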

  12. Wavelet-based multifractal analysis of earthquakes temporal distribution in Mammoth Mountain volcano, Mono County, Eastern California

    NASA Astrophysics Data System (ADS)

    Zamani, Ahmad; Kolahi Azar, Amir; Safavi, Ali

    2014-06-01

    This paper presents a wavelet-based multifractal approach to characterize the statistical properties of temporal distribution of the 1982-2012 seismic activity in Mammoth Mountain volcano. The fractal analysis of time-occurrence series of seismicity has been carried out in relation to seismic swarm in association with magmatic intrusion happening beneath the volcano on 4 May 1989. We used the wavelet transform modulus maxima based multifractal formalism to get the multifractal characteristics of seismicity before, during, and after the unrest. The results revealed that the earthquake sequences across the study area show time-scaling features. It is clearly perceived that the multifractal characteristics are not constant in different periods and there are differences among the seismicity sequences. The attributes of singularity spectrum have been utilized to determine the complexity of seismicity for each period. Findings show that the temporal distribution of earthquakes for swarm period was simpler with respect to pre- and post-swarm periods.

  13. A wavelet-based evaluation of time-varying long memory of equity markets: A paradigm in crisis

    NASA Astrophysics Data System (ADS)

    Tan, Pei P.; Chin, Cheong W.; Galagedera, Don U. A.

    2014-09-01

    This study, using wavelet-based method investigates the dynamics of long memory in the returns and volatility of equity markets. In the sample of five developed and five emerging markets we find that the daily return series from January 1988 to June 2013 may be considered as a mix of weak long memory and mean-reverting processes. In the case of volatility in the returns, there is evidence of long memory, which is stronger in emerging markets than in developed markets. We find that although the long memory parameter may vary during crisis periods (1997 Asian financial crisis, 2001 US recession and 2008 subprime crisis) the direction of change may not be consistent across all equity markets. The degree of return predictability is likely to diminish during crisis periods. Robustness of the results is checked with de-trended fluctuation analysis approach.

  14. A comparison of spectral decorrelation techniques and performance evaluation metrics for a wavelet-based, multispectral data compression algorithm

    NASA Technical Reports Server (NTRS)

    Matic, Roy M.; Mosley, Judith I.

    1994-01-01

    Future space-based remote sensing systems will have data transmission requirements that exceed available downlinks, necessitating the use of lossy compression techniques for multispectral data. In this paper, we describe several algorithms for lossy compression of multispectral data which combine spectral decorrelation techniques with an adaptive, wavelet-based image compression algorithm to exploit both spectral and spatial correlation. We compare the performance of several different spectral decorrelation techniques including wavelet transformation in the spectral dimension. The performance of each technique is evaluated at compression ratios ranging from 4:1 to 16:1. Performance measures used are visual examination, conventional distortion measures, and multispectral classification results. We also introduce a family of distortion metrics that are designed to quantify and predict the effect of compression artifacts on multispectral classification of the reconstructed data.

  15. A wavelet-based single-view reconstruction approach for cone beam x-ray luminescence tomography imaging

    PubMed Central

    Liu, Xin; Wang, Hongkai; Xu, Mantao; Nie, Shengdong; Lu, Hongbing

    2014-01-01

    Single-view x-ray luminescence computed tomography (XLCT) imaging has a short data collection time that allows the three-dimensional (3-D) distribution of x-ray-excitable nanophosphors within a small animal to be resolved non-invasively and quickly in vivo. However, single-view reconstruction suffers from a severely ill-posed problem because data from only one angle are used in the reconstruction. To alleviate the ill-posedness, in this paper, we propose a wavelet-based reconstruction approach, which is achieved by applying a wavelet transformation to the acquired single-view measurements. To evaluate the performance of the proposed method, an in vivo experiment was performed based on a cone beam XLCT imaging system. The experimental results demonstrate that the proposed method can not only use the full set of measurements produced by the CCD, but also accelerate image reconstruction while preserving the spatial resolution of the reconstruction. Hence, it is suitable for dynamic XLCT imaging studies. PMID:25426315

  16. Volcanic explosion clouds - Density, temperature, and particle content estimates from cloud motion

    NASA Technical Reports Server (NTRS)

    Wilson, L.; Self, S.

    1980-01-01

    Photographic records of 10 vulcanian eruption clouds produced during the 1978 eruption of Fuego Volcano in Guatemala have been analyzed to determine cloud velocity and acceleration at successive stages of expansion. Cloud motion is controlled by air drag (dominant during early, high-speed motion) and buoyancy (dominant during late motion when the cloud is convecting slowly). Cloud densities in the range 0.6 to 1.2 times that of the surrounding atmosphere were obtained by fitting equations of motion for two common cloud shapes (spheres and vertical cylinders) to the observed motions. Analysis of the heat budget of a cloud permits an estimate of cloud temperature and particle weight fraction to be made from the density. Model results suggest that clouds generally reached temperatures within 10 K of that of the surrounding air within 10 seconds of formation and that dense particle weight fractions were less than 2% by this time. The maximum sizes of dense particles supported by motion in the convecting clouds range from 140 to 1700 microns.

  17. Density estimates of Panamanian owl monkeys (Aotus zonalis) in three habitat types.

    PubMed

    Svensson, Magdalena S; Samudio, Rafael; Bearder, Simon K; Nekaris, K Anne-Isola

    2010-02-01

    The resolution of the ambiguity surrounding the taxonomy of Aotus means that data on newly classified species are urgently needed for conservation efforts. We conducted a study on the Panamanian owl monkey (Aotus zonalis) between May and July 2008 at three localities in Chagres National Park, located east of the Panama Canal, using the line transect method to quantify abundance and distribution. Vegetation surveys were also conducted to provide a baseline quantification of the three habitat types. We observed 33 individuals within 16 groups in two out of the three sites. Population density was highest in Campo Chagres with 19.7 individuals/km(2), and intermediate densities of 14.3 individuals/km(2) were observed at Cerro Azul. In La Llana, A. zonalis was not found. The presence of A. zonalis in Chagres National Park, albeit at seemingly low abundance, is encouraging. A longer-term study will be necessary to corroborate the abundance estimates gained in this pilot study in order to inform conservation policy decisions. PMID:19852005

  18. Can we estimate plasma density in ICP driver through electrical parameters in RF circuit?

    SciTech Connect

    Bandyopadhyay, M.; Sudhir, Dass; Chakraborty, A.

    2015-04-08

    To avoid regular maintenance, invasive plasma diagnostics with probes are not included in the inductively coupled plasma (ICP) based ITER Neutral Beam (NB) source design. Even non-invasive diagnostics such as optical emission spectroscopy are not included in the present ITER NB design, due to overall system design and interface issues. As a result, the negative ion beam current through the extraction system is the only measurement in the ITER NB negative ion source that indicates the plasma condition inside the ion source. However, the beam current depends not only on the plasma condition near the extraction region but also on the perveance of the ion extractor system and on negative ion stripping. Moreover, the inductively coupled plasma production region (the RF driver region) is located some distance (∼30 cm) from the extraction region, so some uncertainty is expected if one tries to link the beam current to plasma properties inside the RF driver. Plasma characterization in the RF driver region is therefore essential for maintaining the optimum conditions for source operation. In this paper, a method of plasma density estimation is described, based on a density-dependent plasma load calculation.

  19. Estimation of D-region Electron Density using Tweeks Measurements at Nainital and Allahabad

    NASA Astrophysics Data System (ADS)

    Pant, P.; Maurya, A. K.; Singh, Rajesh; Veenadhari, B.; Singh, A. K.

    2010-10-01

    Lightning-generated radio atmospherics that propagate over long distances via multiple reflections within the Earth-ionosphere waveguide (EIWG) show sharp dispersion near the EIWG cut-off frequency of ˜1.8 kHz. These dispersed atmospherics at the lower frequency end are called `tweek' radio atmospherics. In order to estimate D-region electron densities at the ionospheric reflection heights, we have utilized the dispersive property of tweeks observed at the low-latitude Indian stations Nainital (geomag. lat. 20.29° N) and Allahabad (geomag. lat. 16.05° N). A direction-finding technique has also been applied to determine the source locations of the causative lightning discharges. The geographic location is determined by the intersection of two circles whose radii are the propagation distances of the tweek atmospherics from the source to the Allahabad (ALD) and Nainital (NTL) stations. These results are in good agreement with World Wide Lightning Location Network (WWLLN) data. The average D-region electron density along the propagation path varied in the range ˜20-35 el/cc at ionospheric reflection heights of 70-90 km. The tweek method has the unique advantage of monitoring the lower boundary of the D-region over an area of several thousand km surrounding the receiving stations.

  20. Density estimation in aerial images of large crowds for automatic people counting

    NASA Astrophysics Data System (ADS)

    Herrmann, Christian; Metzler, Juergen

    2013-05-01

    Counting people is a common topic in the area of visual surveillance and crowd analysis. While many image-based solutions are designed to count only a few persons at the same time, like pedestrians entering a shop or watching an advertisement, there is hardly any solution for counting large crowds of several hundred persons or more. We addressed this problem previously by designing a semi-automatic system being able to count crowds consisting of hundreds or thousands of people based on aerial images of demonstrations or similar events. This system requires major user interaction to segment the image. Our principle aim is to reduce this manual interaction. To achieve this, we propose a new and automatic system. Besides counting the people in large crowds, the system yields the positions of people allowing a plausibility check by a human operator. In order to automatize the people counting system, we use crowd density estimation. The determination of crowd density is based on several features like edge intensity or spatial frequency. They indicate the density and discriminate between a crowd and other image regions like buildings, bushes or trees. We compare the performance of our automatic system to the previous semi-automatic system and to manual counting in images. By counting a test set of aerial images showing large crowds containing up to 12,000 people, the performance gain of our new system will be measured. By improving our previous system, we will increase the benefit of an image-based solution for counting people in large crowds.
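
    The abstract names edge intensity among the density features; a rough sketch of one such feature map (mean Sobel gradient magnitude per image tile), computed on a hypothetical grayscale aerial image, is shown below. The mapping from this feature to a person count would be learned from annotated images and is not shown.

    ```python
    import numpy as np
    from scipy import ndimage

    def edge_intensity_map(image, tile=32):
        """Mean gradient magnitude per tile -- a crude crowd-density feature."""
        img = image.astype(float)
        mag = np.hypot(ndimage.sobel(img, axis=0), ndimage.sobel(img, axis=1))
        h, w = (mag.shape[0] // tile) * tile, (mag.shape[1] // tile) * tile
        tiles = mag[:h, :w].reshape(h // tile, tile, w // tile, tile)
        return tiles.mean(axis=(1, 3))               # one feature value per tile

    aerial = np.random.default_rng(0).random((512, 768))   # placeholder image
    print(edge_intensity_map(aerial).shape)                 # (16, 24) feature grid
    ```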

  1. Robust estimation of mammographic breast density: a patient-based approach

    NASA Astrophysics Data System (ADS)

    Heese, Harald S.; Erhard, Klaus; Gooßen, Andre; Bulow, Thomas

    2012-02-01

    Breast density has become an established risk indicator for developing breast cancer. Current clinical practice reflects this by grading mammograms patient-wise as entirely fat, scattered fibroglandular, heterogeneously dense, or extremely dense based on visual perception. Existing (semi-) automated methods work on a per-image basis and mimic clinical practice by calculating an area fraction of fibroglandular tissue (mammographic percent density). We suggest a method that follows clinical practice more strictly by segmenting the fibroglandular tissue portion directly from the joint data of all four available mammographic views (cranio-caudal and medio-lateral oblique, left and right), and by subsequently calculating a consistently patient-based mammographic percent density estimate. In particular, each mammographic view is first processed separately to determine a region of interest (ROI) for segmentation into fibroglandular and adipose tissue. ROI determination includes breast outline detection via edge-based methods, peripheral tissue suppression via geometric breast height modeling, and - for medio-lateral oblique views only - pectoral muscle outline detection based on optimizing a three-parameter analytic curve with respect to local appearance. Intensity harmonization based on separately acquired calibration data is performed with respect to compression height and tube voltage to facilitate joint segmentation of available mammographic views. A Gaussian mixture model (GMM) on the joint histogram data with a posteriori calibration guided plausibility correction is finally employed for tissue separation. The proposed method was tested on patient data from 82 subjects. Results show excellent correlation (r = 0.86) with radiologist grading, with deviations ranging between -28% (q = 0.025) and +16% (q = 0.975).
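
    As a rough illustration of the final tissue-separation step only (breast-region extraction, pectoral-muscle removal and intensity harmonization are assumed already done, and the brighter class is taken to be fibroglandular), a two-component Gaussian mixture on pooled pixel intensities and the resulting percent-density value could be sketched as:

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    def percent_density(roi_pixels_per_view):
        """Fit a 2-component GMM to breast-ROI intensities pooled over the four
        views; the brighter component is treated as fibroglandular tissue."""
        x = np.concatenate([p.ravel() for p in roi_pixels_per_view])[:, None]
        gmm = GaussianMixture(n_components=2, random_state=0).fit(x)
        dense_label = int(np.argmax(gmm.means_.ravel()))
        return 100.0 * np.mean(gmm.predict(x) == dense_label)

    # views = [cc_left, cc_right, mlo_left, mlo_right]   # harmonized ROI arrays
    # print(percent_density(views))
    ```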

  2. Evaluation of a brushing machine for estimating density of spider mites on grape leaves.

    PubMed

    Macmillan, Craig D; Costello, Michael J

    2015-12-01

    Direct visual inspection and enumeration for estimating field population density of economically important arthropods, such as spider mites, provide more information than alternative methods, such as binomial sampling, but is laborious and time consuming. A brushing machine can reduce sampling time and perhaps improve accuracy. Although brushing technology has been investigated and recommended as a useful tool for researchers and integrated pest management practitioners, little work to demonstrate the validity of this technique has been performed since the 1950's. We investigated the brushing machine manufactured by Leedom Enterprises (Mi-Wuk Village, CA, USA) for studies on spider mites. We evaluated (1) the mite recovery efficiency relative to the number of passes of a leaf through the brushes, (2) mite counts as generated by the machine compared to visual counts under a microscope, (3) the lateral distribution of mites on the collection plate and (4) the accuracy and precision of a 10% sub-sample using a double-transect counting grid. We found that about 95% of mites on a leaf were recovered after five passes, and 99% after nine passes, and mite counts from brushing were consistently higher than those from visual inspection. Lateral distribution of mites was not uniform, being highest in concentration at the center and lowest at the periphery. The 10% double-transect pattern did not result in a significant correlation with the total plate count at low mite density, but accuracy and precision improved at medium and high density. We suggest that a more accurate and precise sample may be achieved using a modified pattern which concentrates on the center plus some of the adjacent area. PMID:26459377

  3. Evaluation of a brushing machine for estimating density of spider mites on grape leaves.

    PubMed

    Macmillan, Craig D; Costello, Michael J

    2015-12-01

    Direct visual inspection and enumeration for estimating field population density of economically important arthropods, such as spider mites, provide more information than alternative methods, such as binomial sampling, but is laborious and time consuming. A brushing machine can reduce sampling time and perhaps improve accuracy. Although brushing technology has been investigated and recommended as a useful tool for researchers and integrated pest management practitioners, little work to demonstrate the validity of this technique has been performed since the 1950's. We investigated the brushing machine manufactured by Leedom Enterprises (Mi-Wuk Village, CA, USA) for studies on spider mites. We evaluated (1) the mite recovery efficiency relative to the number of passes of a leaf through the brushes, (2) mite counts as generated by the machine compared to visual counts under a microscope, (3) the lateral distribution of mites on the collection plate and (4) the accuracy and precision of a 10% sub-sample using a double-transect counting grid. We found that about 95% of mites on a leaf were recovered after five passes, and 99% after nine passes, and mite counts from brushing were consistently higher than those from visual inspection. Lateral distribution of mites was not uniform, being highest in concentration at the center and lowest at the periphery. The 10% double-transect pattern did not result in a significant correlation with the total plate count at low mite density, but accuracy and precision improved at medium and high density. We suggest that a more accurate and precise sample may be achieved using a modified pattern which concentrates on the center plus some of the adjacent area.

  4. The estimation of the constituent densities of the upper atmosphere by means of a recursive filtering algorithm.

    NASA Technical Reports Server (NTRS)

    Mcgarty, T. P.

    1971-01-01

    The structure of the upper atmosphere can be indirectly probed by light in order to determine the global density structure of ozone, aerosols, and neutral atmosphere. Scattered and directly transmitted light is measured by a satellite and is shown to be a nonlinear function of the state which is defined to be a point-wise decomposition of the density profiles. Dynamics are imposed on the state vector and a structured estimation problem is developed. The estimation of these densities is then performed using a linearized Kalman-Bucy filter and a linearized Kushner-Stratonovich filter.
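
    The abstract names linearized Kalman-Bucy and Kushner-Stratonovich filters; a minimal discrete-time analogue of the linearized measurement update (an extended-Kalman-filter step for a hypothetical nonlinear radiance model h with Jacobian jac_h) is sketched below, with all models placeholders:

    ```python
    import numpy as np

    def linearized_update(x, P, z, h, jac_h, R):
        """One linearized measurement update for the density-profile state x
        with covariance P, given a scattered-light measurement vector z."""
        H = jac_h(x)                              # linearize about current estimate
        innovation = z - h(x)
        S = H @ P @ H.T + R                       # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
        x_new = x + K @ innovation
        P_new = (np.eye(len(x)) - K @ H) @ P
        return x_new, P_new
    ```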

  5. mBEEF: An accurate semi-local Bayesian error estimation density functional

    NASA Astrophysics Data System (ADS)

    Wellendorff, Jess; Lundgaard, Keld T.; Jacobsen, Karsten W.; Bligaard, Thomas

    2014-04-01

    We present a general-purpose meta-generalized gradient approximation (MGGA) exchange-correlation functional generated within the Bayesian error estimation functional framework [J. Wellendorff, K. T. Lundgaard, A. Møgelhøj, V. Petzold, D. D. Landis, J. K. Nørskov, T. Bligaard, and K. W. Jacobsen, Phys. Rev. B 85, 235149 (2012)]. The functional is designed to give reasonably accurate density functional theory (DFT) predictions of a broad range of properties in materials physics and chemistry, while exhibiting a high degree of transferability. Particularly, it improves upon solid cohesive energies and lattice constants over the BEEF-vdW functional without compromising high performance on adsorption and reaction energies. We thus expect it to be particularly well-suited for studies in surface science and catalysis. An ensemble of functionals for error estimation in DFT is an intrinsic feature of exchange-correlation models designed this way, and we show how the Bayesian ensemble may provide a systematic analysis of the reliability of DFT based simulations.

  6. Methods for Estimating Environmental Effects and Constraints on NexGen: High Density Case Study

    NASA Technical Reports Server (NTRS)

    Augustine, S.; Ermatinger, C.; Graham, M.; Thompson, T.

    2010-01-01

    This document provides a summary of the current methods developed by Metron Aviation for the estimate of environmental effects and constraints on the Next Generation Air Transportation System (NextGen). This body of work incorporates many of the key elements necessary to achieve such an estimate. Each section contains the background and motivation for the technical elements of the work, a description of the methods used, and possible next steps. The current methods described in this document were selected in an attempt to provide a good balance between accuracy and fairly rapid turn around times to best advance Joint Planning and Development Office (JPDO) System Modeling and Analysis Division (SMAD) objectives while also supporting the needs of the JPDO Environmental Working Group (EWG). In particular this document describes methods applied to support the High Density (HD) Case Study performed during the spring of 2008. A reference day (in 2006) is modeled to describe current system capabilities while the future demand is applied to multiple alternatives to analyze system performance. The major variables in the alternatives are operational/procedural capabilities for airport, terminal, and en route airspace along with projected improvements to airframe, engine and navigational equipment.

  7. Using kernel density estimation to understand the influence of neighbourhood destinations on BMI

    PubMed Central

    King, Tania L; Bentley, Rebecca J; Thornton, Lukar E; Kavanagh, Anne M

    2016-01-01

    Objectives Little is known about how the distribution of destinations in the local neighbourhood is related to body mass index (BMI). Kernel density estimation (KDE) is a spatial analysis technique that accounts for the location of features relative to each other. Using KDE, this study investigated whether individuals living near destinations (shops and service facilities) that are more intensely distributed rather than dispersed have lower BMIs. Study design and setting A cross-sectional study of 2349 residents of 50 urban areas in metropolitan Melbourne, Australia. Methods Destinations were geocoded, and kernel density estimates of destination intensity were created using kernels of 400, 800 and 1200 m. Using multilevel linear regression, the association between destination intensity (classified in quintiles Q1(least)–Q5(most)) and BMI was estimated in models that adjusted for the following confounders: age, sex, country of birth, education, dominant household occupation, household type, disability/injury and area disadvantage. Separate models included a physical activity variable. Results For kernels of 800 and 1200 m, there was an inverse relationship between BMI and more intensely distributed destinations (compared to areas with least destination intensity). Effects were significant at 1200 m: Q4, β −0.86, 95% CI −1.58 to −0.13, p=0.022; Q5, β −1.03, 95% CI −1.65 to −0.41, p=0.001. Inclusion of physical activity in the models attenuated effects, although effects remained marginally significant for Q5 at 1200 m: β −0.77, 95% CI −1.52 to −0.02, p=0.045. Conclusions This study conducted within urban Melbourne, Australia, found that participants living in areas of greater destination intensity within 1200 m of home had lower BMIs. Effects were partly explained by physical activity. The results suggest that increasing the intensity of destination distribution could reduce BMI levels by encouraging higher levels of physical activity.
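
    A minimal sketch of the core KDE step only: destination intensity at each residence with a 1200 m Gaussian kernel on projected (metre) coordinates. The coordinate arrays are hypothetical and the multilevel BMI regression is not shown.

    ```python
    import numpy as np
    from sklearn.neighbors import KernelDensity

    def destination_intensity(dest_xy, home_xy, bandwidth_m=1200.0):
        """Kernel density of destinations evaluated at residence locations
        (both given as (n, 2) arrays of projected coordinates in metres)."""
        kde = KernelDensity(kernel="gaussian", bandwidth=bandwidth_m).fit(dest_xy)
        # score_samples returns a log probability density; rescale by the number
        # of destinations to obtain an intensity rather than a probability.
        return len(dest_xy) * np.exp(kde.score_samples(home_xy))

    # intensity = destination_intensity(shops_xy, homes_xy, 1200.0)
    # quintiles of `intensity` would then enter the multilevel regression on BMI
    ```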

  8. A novel technique for real-time estimation of edge pedestal density gradients via reflectometer time delay data

    NASA Astrophysics Data System (ADS)

    Zeng, L.; Doyle, E. J.; Rhodes, T. L.; Wang, G.; Sung, C.; Peebles, W. A.; Bobrek, M.

    2016-11-01

    A new model-based technique for fast estimation of the pedestal electron density gradient has been developed. The technique uses ordinary mode polarization profile reflectometer time delay data and does not require direct profile inversion. Because of its simple data processing, the technique can be readily implemented via a Field-Programmable Gate Array, so as to provide a real-time density gradient estimate, suitable for use in plasma control systems such as envisioned for ITER, and possibly for DIII-D and Experimental Advanced Superconducting Tokamak. The method is based on a simple edge plasma model with a linear pedestal density gradient and low scrape-off-layer density. By measuring reflectometer time delays for three adjacent frequencies, the pedestal density gradient can be estimated analytically via the new approach. Using existing DIII-D profile reflectometer data, the estimated density gradients obtained from the new technique are found to be in good agreement with the actual density gradients for a number of dynamic DIII-D plasma conditions.

  9. On L p -Resolvent Estimates and the Density of Eigenvalues for Compact Riemannian Manifolds

    NASA Astrophysics Data System (ADS)

    Bourgain, Jean; Shao, Peng; Sogge, Christopher D.; Yao, Xiaohua

    2015-02-01

    in Geom Funct Anal 21:1239-1295, 2011) based on the multilinear estimates of Bennett, Carbery and Tao (Math Z 2:261-302, 2006). Our approach also allows us to give a natural necessary condition for favorable resolvent estimates that is based on a measurement of the density of the spectrum of , and, moreover, a necessary and sufficient condition based on natural improved spectral projection estimates for shrinking intervals, as opposed to those in (Sogge in J Funct Anal 77:123-138, 1988) for unit-length intervals. We show that the resolvent estimates are sensitive to clustering within the spectrum, which is not surprising given Sommerfeld's original conjecture (Sommerfeld in Physikal Zeitschr 11:1057-1066, 1910) about these operators.

  10. Optical Density Analysis of X-Rays Utilizing Calibration Tooling to Estimate Thickness of Parts

    NASA Technical Reports Server (NTRS)

    Grau, David

    2012-01-01

    This process is designed to estimate the thickness change of a material through data analysis of a digitized version of an x-ray (or a digital x-ray) containing the material (with the thickness in question) and various tooling. Using this process, it is possible to estimate a material's thickness change in a region of the material or part that is thinner than the rest of the reference thickness. However, the same basic process can be used to determine the thickness change of material using a thinner region to determine thickening, or it can be used to develop contour plots of an entire part. Proper tooling must be used. An x-ray film with an S-shaped characteristic curve or a digital x-ray device with a product resulting in like characteristics is necessary. If a film exists with linear characteristics, this type of film would be ideal; however, at the time of this reporting, no such film is known. Machined components (with known fractional thicknesses) of a like material (similar density) to that of the material to be measured are necessary. The machined components should have machined through-holes. For ease of use and better accuracy, the through-holes should be a size larger than 0.125 in. (3.2 mm). Standard components for this use are known as penetrameters or image quality indicators. Also needed is standard x-ray equipment, if film is used in place of digital equipment, or x-ray digitization equipment with proven conversion properties. Typical x-ray digitization equipment is commonly used in the medical industry, and creates digital images of x-rays in DICOM format. It is recommended to scan the image in a 16-bit format. However, 12-bit and 8-bit resolutions are acceptable. Finally, x-ray analysis software that allows accurate digital image density calculations, such as Image-J freeware, is needed. The actual procedure requires the test article to be placed on the raw x-ray, ensuring the region of interest is aligned for perpendicular x-ray exposure
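
    A minimal sketch of the final analysis step, assuming hypothetical mean digital-image densities measured over penetrameter steps of known thickness and over the region of interest:

    ```python
    import numpy as np

    # Hypothetical calibration: mean image density measured over machined steps
    # of known thickness (same material/density class as the test article).
    step_thickness_in = np.array([0.050, 0.075, 0.100, 0.125, 0.150])
    step_density      = np.array([182.0, 160.0, 141.0, 125.0, 112.0])

    def thickness_from_density(roi_density):
        """Interpolate thickness from measured image density using the
        monotonic calibration curve (density falls as thickness grows)."""
        order = np.argsort(step_density)
        return np.interp(roi_density, step_density[order], step_thickness_in[order])

    print(thickness_from_density(133.0))   # estimated thickness, inches
    ```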

  11. Methods for Estimating Population Density in Data-Limited Areas: Evaluating Regression and Tree-Based Models in Peru

    PubMed Central

    Anderson, Weston; Guikema, Seth; Zaitchik, Ben; Pan, William

    2014-01-01

    Obtaining accurate small area estimates of population is essential for policy and health planning but is often difficult in countries with limited data. In lieu of available population data, small area estimate models draw information from previous time periods or from similar areas. This study focuses on model-based methods for estimating population when no direct samples are available in the area of interest. To explore the efficacy of tree-based models for estimating population density, we compare six different model structures including Random Forest and Bayesian Additive Regression Trees. Results demonstrate that without information from prior time periods, non-parametric tree-based models produced more accurate predictions than did conventional regression methods. Improving estimates of population density in non-sampled areas is important for regions with incomplete census data and has implications for economic, health and development policies. PMID:24992657
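
    Random Forest is one of the tree-based structures compared; a minimal sketch with placeholder covariates and a log-transformed density target (the actual predictors and model comparison are described in the paper, not reproduced here) might be:

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    # X: covariates for sampled districts (e.g. land cover, elevation, climate)
    # y: observed population density (people per km^2); both are placeholders
    rng = np.random.default_rng(0)
    X, y = rng.normal(size=(500, 6)), rng.lognormal(mean=3.0, size=500)

    model = RandomForestRegressor(n_estimators=500, random_state=0)
    model.fit(X, np.log1p(y))                          # log scale stabilises variance

    X_unsampled = rng.normal(size=(50, 6))             # districts with no census data
    density_pred = np.expm1(model.predict(X_unsampled))
    ```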

  12. Methods for estimating population density in data-limited areas: evaluating regression and tree-based models in Peru.

    PubMed

    Anderson, Weston; Guikema, Seth; Zaitchik, Ben; Pan, William

    2014-01-01

    Obtaining accurate small area estimates of population is essential for policy and health planning but is often difficult in countries with limited data. In lieu of available population data, small area estimate models draw information from previous time periods or from similar areas. This study focuses on model-based methods for estimating population when no direct samples are available in the area of interest. To explore the efficacy of tree-based models for estimating population density, we compare six different model structures including Random Forest and Bayesian Additive Regression Trees. Results demonstrate that without information from prior time periods, non-parametric tree-based models produced more accurate predictions than did conventional regression methods. Improving estimates of population density in non-sampled areas is important for regions with incomplete census data and has implications for economic, health and development policies.

  13. Markedly divergent estimates of Amazon forest carbon density from ground plots and satellites

    PubMed Central

    Mitchard, Edward T A; Feldpausch, Ted R; Brienen, Roel J W; Lopez-Gonzalez, Gabriela; Monteagudo, Abel; Baker, Timothy R; Lewis, Simon L; Lloyd, Jon; Quesada, Carlos A; Gloor, Manuel; ter Steege, Hans; Meir, Patrick; Alvarez, Esteban; Araujo-Murakami, Alejandro; Aragão, Luiz E O C; Arroyo, Luzmila; Aymard, Gerardo; Banki, Olaf; Bonal, Damien; Brown, Sandra; Brown, Foster I; Cerón, Carlos E; Chama Moscoso, Victor; Chave, Jerome; Comiskey, James A; Cornejo, Fernando; Corrales Medina, Massiel; Da Costa, Lola; Costa, Flavia R C; Di Fiore, Anthony; Domingues, Tomas F; Erwin, Terry L; Frederickson, Todd; Higuchi, Niro; Honorio Coronado, Euridice N; Killeen, Tim J; Laurance, William F; Levis, Carolina; Magnusson, William E; Marimon, Beatriz S; Marimon Junior, Ben Hur; Mendoza Polo, Irina; Mishra, Piyush; Nascimento, Marcelo T; Neill, David; Núñez Vargas, Mario P; Palacios, Walter A; Parada, Alexander; Pardo Molina, Guido; Peña-Claros, Marielos; Pitman, Nigel; Peres, Carlos A; Poorter, Lourens; Prieto, Adriana; Ramirez-Angulo, Hirma; Restrepo Correa, Zorayda; Roopsind, Anand; Roucoux, Katherine H; Rudas, Agustin; Salomão, Rafael P; Schietti, Juliana; Silveira, Marcos; de Souza, Priscila F; Steininger, Marc K; Stropp, Juliana; Terborgh, John; Thomas, Raquel; Toledo, Marisol; Torres-Lezama, Armando; van Andel, Tinde R; van der Heijden, Geertje M F; Vieira, Ima C G; Vieira, Simone; Vilanova-Torre, Emilio; Vos, Vincent A; Wang, Ophelia; Zartman, Charles E; Malhi, Yadvinder; Phillips, Oliver L

    2014-01-01

    Aim The accurate mapping of forest carbon stocks is essential for understanding the global carbon cycle, for assessing emissions from deforestation, and for rational land-use planning. Remote sensing (RS) is currently the key tool for this purpose, but RS does not estimate vegetation biomass directly, and thus may miss significant spatial variations in forest structure. We test the stated accuracy of pantropical carbon maps using a large independent field dataset. Location Tropical forests of the Amazon basin. The permanent archive of the field plot data can be accessed at: http://dx.doi.org/10.5521/FORESTPLOTS.NET/2014_1 Methods Two recent pantropical RS maps of vegetation carbon are compared to a unique ground-plot dataset, involving tree measurements in 413 large inventory plots located in nine countries. The RS maps were compared directly to field plots, and kriging of the field data was used to allow area-based comparisons. Results The two RS carbon maps fail to capture the main gradient in Amazon forest carbon detected using 413 ground plots, from the densely wooded tall forests of the north-east, to the light-wooded, shorter forests of the south-west. The differences between plots and RS maps far exceed the uncertainties given in these studies, with whole regions over- or under-estimated by > 25%, whereas regional uncertainties for the maps were reported to be < 5%. Main conclusions Pantropical biomass maps are widely used by governments and by projects aiming to reduce deforestation using carbon offsets, but may have significant regional biases. Carbon-mapping techniques must be revised to account for the known ecological variation in tree wood density and allometry to create maps suitable for carbon accounting. The use of single relationships between tree canopy height and above-ground biomass inevitably yields large, spatially correlated errors. This presents a significant challenge to both the forest conservation and remote sensing communities

  14. Non-parametric kernel density estimation of species sensitivity distributions in developing water quality criteria of metals.

    PubMed

    Wang, Ying; Wu, Fengchang; Giesy, John P; Feng, Chenglian; Liu, Yuedan; Qin, Ning; Zhao, Yujie

    2015-09-01

    Due to use of different parametric models for establishing species sensitivity distributions (SSDs), comparison of water quality criteria (WQC) for metals of the same group or period in the periodic table is uncertain and results can be biased. To address this inadequacy, a new probabilistic model, based on non-parametric kernel density estimation was developed and optimal bandwidths and testing methods are proposed. Zinc (Zn), cadmium (Cd), and mercury (Hg) of group IIB of the periodic table are widespread in aquatic environments, mostly at small concentrations, but can exert detrimental effects on aquatic life and human health. With these metals as target compounds, the non-parametric kernel density estimation method and several conventional parametric density estimation methods were used to derive acute WQC of metals for protection of aquatic species in China that were compared and contrasted with WQC for other jurisdictions. HC5 values for protection of different types of species were derived for three metals by use of non-parametric kernel density estimation. The newly developed probabilistic model was superior to conventional parametric density estimations for constructing SSDs and for deriving WQC for these metals. HC5 values for the three metals were inversely proportional to atomic number, which means that the heavier atoms were more potent toxicants. The proposed method provides a novel alternative approach for developing SSDs that could have wide application prospects in deriving WQC and use in assessment of risks to ecosystems. PMID:25953609
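
    A minimal sketch of deriving an HC5 from a kernel-density SSD, using SciPy's Gaussian KDE with its default (Scott's rule) bandwidth in place of the paper's optimized bandwidths, and hypothetical acute toxicity values:

    ```python
    import numpy as np
    from scipy.stats import gaussian_kde

    def hc5_from_kde(toxicity_values, p=0.05):
        """Fit a Gaussian KDE to log10 toxicity data and invert the CDF
        numerically to obtain the hazardous concentration for 5% of species."""
        logx = np.log10(np.asarray(toxicity_values, dtype=float))
        kde = gaussian_kde(logx)                        # default bandwidth (Scott)
        grid = np.linspace(logx.min() - 2, logx.max() + 2, 4000)
        cdf = np.cumsum(kde(grid))
        cdf /= cdf[-1]
        return 10 ** np.interp(p, cdf, grid)

    lc50_ugL = [12.0, 35.0, 60.0, 88.0, 150.0, 420.0, 900.0, 2500.0]  # placeholders
    print("HC5 (ug/L):", hc5_from_kde(lc50_ugL))
    ```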

  15. Non-parametric kernel density estimation of species sensitivity distributions in developing water quality criteria of metals.

    PubMed

    Wang, Ying; Wu, Fengchang; Giesy, John P; Feng, Chenglian; Liu, Yuedan; Qin, Ning; Zhao, Yujie

    2015-09-01

    Due to use of different parametric models for establishing species sensitivity distributions (SSDs), comparison of water quality criteria (WQC) for metals of the same group or period in the periodic table is uncertain and results can be biased. To address this inadequacy, a new probabilistic model, based on non-parametric kernel density estimation was developed and optimal bandwidths and testing methods are proposed. Zinc (Zn), cadmium (Cd), and mercury (Hg) of group IIB of the periodic table are widespread in aquatic environments, mostly at small concentrations, but can exert detrimental effects on aquatic life and human health. With these metals as target compounds, the non-parametric kernel density estimation method and several conventional parametric density estimation methods were used to derive acute WQC of metals for protection of aquatic species in China that were compared and contrasted with WQC for other jurisdictions. HC5 values for protection of different types of species were derived for three metals by use of non-parametric kernel density estimation. The newly developed probabilistic model was superior to conventional parametric density estimations for constructing SSDs and for deriving WQC for these metals. HC5 values for the three metals were inversely proportional to atomic number, which means that the heavier atoms were more potent toxicants. The proposed method provides a novel alternative approach for developing SSDs that could have wide application prospects in deriving WQC and use in assessment of risks to ecosystems.

  16. Use of forest inventories and geographic information systems to estimate biomass density of tropical forests: Application to tropical Africa.

    PubMed

    Brown, S; Gaston, G

    1995-01-01

    One of the most important databases needed for estimating emissions of carbon dioxide resulting from changes in the cover, use, and management of tropical forests is the total quantity of biomass per unit area, referred to as biomass density. Forest inventories have been shown to be valuable sources of data for estimating biomass density, but inventories for the tropics are few in number and their quality is poor. This lack of reliable data has been overcome by use of a promising approach that produces geographically referenced estimates by modeling in a geographic information system (GIS). This approach has been used to produce geographically referenced, spatial distributions of potential and actual (circa 1980) aboveground biomass density of all forests types in tropical Africa. Potential and actual biomass density estimates ranged from 33 to 412 Mg ha(-1) (10(6)g ha(-1)) and 20 to 299 Mg ha(-1), respectively, for very dry lowland to moist lowland forests and from 78 to 197 Mg ha(-1) and 37 to 105 Mg ha(-1), respectively, for montane-seasonal to montane-moist forests. Of the 37 countries included in this study, more than half (51%) contained forests that had less than 60% of their potential biomass. Actual biomass density for forest vegetation was lowest in Botswana, Niger, Somalia, and Zimbabwe (about 10 to 15 Mg ha(-1)). Highest estimates for actual biomass density were found in Congo, Equatorial Guinea, Gabon, and Liberia (305 to 344 Mg ha(-1)). Results from this research effort can contribute to reducing uncertainty in the inventory of country-level emission by providing consistent estimates of biomass density at subnational scales that can be used with other similarly scaled databases on change in land cover and use.

  17. The use of photographic rates to estimate densities of tigers and other cryptic mammals: a comment on misleading conclusions

    USGS Publications Warehouse

    Jennelle, C.S.; Runge, M.C.; MacKenzie, D.I.

    2002-01-01

    The search for easy-to-use indices that substitute for direct estimation of animal density is a common theme in wildlife and conservation science, but one fraught with well-known perils (Nichols & Conroy, 1996; Yoccoz, Nichols & Boulinier, 2001; Pollock et al., 2002). To establish the utility of an index as a substitute for an estimate of density, one must: (1) demonstrate a functional relationship between the index and density that is invariant over the desired scope of inference; (2) calibrate the functional relationship by obtaining independent measures of the index and the animal density; (3) evaluate the precision of the calibration (Diefenbach et al., 1994). Carbone et al. (2001) argue that the number of camera-days per photograph is a useful index of density for large, cryptic, forest-dwelling animals, and proceed to calibrate this index for tigers (Panthera tigris). We agree that a properly calibrated index may be useful for rapid assessments in conservation planning. However, Carbone et al. (2001), who desire to use their index as a substitute for density, do not adequately address the three elements noted above. Thus, we are concerned that others may view their methods as justification for not attempting directly to estimate animal densities, without due regard for the shortcomings of their approach.

  18. Estimated uncertainty of calculated liquefied natural gas density from a comparison of NBS and Gaz de France densimeter test facilities

    SciTech Connect

    Siegwarth, J.D.; LaBrecque, J.F.; Roncier, M.; Philippe, R.; Saint-Just, J.

    1982-12-16

    Liquefied natural gas (LNG) densities can be measured directly but are usually determined indirectly in custody transfer measurement by using a density correlation based on temperature and composition measurements. An LNG densimeter test facility at the National Bureau of Standards uses an absolute densimeter based on the Archimedes principle, while a test facility at Gaz de France uses a correlation method based on measurement of composition and density. A comparison between these two test facilities using a portable version of the absolute densimeter provides an experimental estimate of the uncertainty of the indirect method of density measurement for the first time, on a large (32 L) sample. The two test facilities agree for pure methane to within about 0.02%. For the LNG-like mixtures consisting of methane, ethane, propane, and nitrogen with the methane concentrations always higher than 86%, the calculated density is within 0.25% of the directly measured density 95% of the time.

  19. Novelty detection by multivariate kernel density estimation and growing neural gas algorithm

    NASA Astrophysics Data System (ADS)

    Fink, Olga; Zio, Enrico; Weidmann, Ulrich

    2015-01-01

    One of the underlying assumptions when using data-based methods for pattern recognition in diagnostics or prognostics is that the selected data sample used to train and test the algorithm is representative of the entire dataset and covers all combinations of parameters and conditions, and resulting system states. However in practice, operating and environmental conditions may change, unexpected and previously unanticipated events may occur and corresponding new anomalous patterns develop. Therefore for practical applications, techniques are required to detect novelties in patterns and give confidence to the user on the validity of the performed diagnosis and predictions. In this paper, the application of two types of novelty detection approaches is compared: a statistical approach based on multivariate kernel density estimation and an approach based on a type of unsupervised artificial neural network, called the growing neural gas (GNG). The comparison is performed on a case study in the field of railway turnout systems. Both approaches demonstrate their suitability for detecting novel patterns. Furthermore, GNG proves to be more flexible, especially with respect to dimensionality of the input data and suitability for online learning.
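
    A minimal sketch of the statistical branch only (the growing neural gas comparison is omitted): fit a multivariate KDE on baseline condition data and flag new observations whose estimated density falls below a low quantile of the training densities. All data here are placeholders.

    ```python
    import numpy as np
    from scipy.stats import gaussian_kde

    def fit_novelty_detector(train, quantile=0.01):
        """Return a detector that flags samples whose KDE density is below
        the chosen quantile of the training-sample densities."""
        kde = gaussian_kde(train.T)                     # rows = samples, cols = features
        threshold = np.quantile(kde(train.T), quantile)
        return lambda x: kde(np.atleast_2d(x).T) < threshold

    rng = np.random.default_rng(1)
    healthy = rng.normal(size=(1000, 3))                # placeholder condition data
    is_novel = fit_novelty_detector(healthy)
    print(is_novel(np.array([[5.0, 5.0, 5.0]])))        # far from training data -> True
    ```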

  20. Measuring and Modeling Fault Density for Plume-Fault Encounter Probability Estimation

    SciTech Connect

    Jordan, P.D.; Oldenburg, C.M.; Nicot, J.-P.

    2011-05-15

    Emission of carbon dioxide from fossil-fueled power generation stations contributes to global climate change. Storage of this carbon dioxide within the pores of geologic strata (geologic carbon storage) is one approach to mitigating the climate change that would otherwise occur. The large storage volume needed for this mitigation requires injection into brine-filled pore space in reservoir strata overlain by cap rocks. One of the main concerns of storage in such rocks is leakage via faults. In the early stages of site selection, site-specific fault coverages are often not available. This necessitates a method for using available fault data to develop an estimate of the likelihood of injected carbon dioxide encountering and migrating up a fault, primarily due to buoyancy. Fault population statistics provide one of the main inputs to calculate the encounter probability. Previous fault population statistics work is shown to be applicable to areal fault density statistics. This result is applied to a case study in the southern portion of the San Joaquin Basin, which found that a carbon dioxide plume from a previously planned injection had a 3% chance of encountering a fully seal-offsetting fault.
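
    As a deliberately simplified illustration, not the paper's actual calculation: if faults were treated as a homogeneous spatial Poisson process with areal density λ, the chance that a plume footprint of area A encounters at least one fault would be 1 − exp(−λA).

    ```python
    import numpy as np

    def encounter_probability(fault_density_per_km2, plume_area_km2):
        """P(at least one fault in the plume footprint) under a homogeneous
        Poisson model -- a simplification of the paper's approach."""
        lam = fault_density_per_km2 * plume_area_km2   # expected fault count
        return 1.0 - np.exp(-lam)

    print(encounter_probability(0.002, 15.0))          # hypothetical inputs
    ```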

  1. HIRDLS observations of global gravity wave absolute momentum fluxes: A wavelet based approach

    NASA Astrophysics Data System (ADS)

    John, Sherine Rachel; Kishore Kumar, Karanam

    2016-02-01

    Using wavelet technique for detection of height varying vertical and horizontal wavelengths of gravity waves, the absolute values of gravity wave momentum fluxes are estimated from High Resolution Dynamics Limb Sounder (HIRDLS) temperature measurements. Two years of temperature measurements (2005 December-2007 November) from HIRDLS onboard EOS-Aura satellite over the globe are used for this purpose. The least square fitting method is employed to extract the 0-6 zonal wavenumber planetary wave amplitudes, which are removed from the instantaneous temperature profiles to extract gravity wave fields. The vertical and horizontal wavelengths of the prominent waves are computed using wavelet and cross correlation techniques respectively. The absolute momentum fluxes are then estimated using prominent gravity wave perturbations and their vertical and horizontal wavelengths. The momentum fluxes obtained from HIRDLS are compared with the fluxes obtained from ground based Rayleigh LIDAR observations over a low latitude station, Gadanki (13.5°N, 79.2°E) and are found to be in good agreement. After validation, the absolute gravity wave momentum fluxes over the entire globe are estimated. It is found that the winter hemisphere has the maximum momentum flux magnitudes over the high latitudes with a secondary maximum over the summer hemispheric low-latitudes. The significance of the present study lies in introducing the wavelet technique for estimating the height varying vertical and horizontal wavelengths of gravity waves and validating space based momentum flux estimations using ground based lidar observations.
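
    The quantity estimated here is commonly computed from the temperature amplitude and the two wavelengths via the relation of Ern et al. (2004); a sketch with hypothetical stratospheric inputs (the paper's exact formulation may differ):

    ```python
    import numpy as np

    def abs_momentum_flux(rho, g, N, T_bar, T_hat, lambda_z, lambda_h):
        """|F| = (lambda_z / lambda_h) * (rho / 2) * (g / N)**2 * (T_hat / T_bar)**2,
        following Ern et al. (2004); all quantities in SI units."""
        return (lambda_z / lambda_h) * 0.5 * rho * (g / N) ** 2 * (T_hat / T_bar) ** 2

    # hypothetical stratospheric values: density, buoyancy frequency, mean and
    # perturbation temperature, vertical and horizontal wavelengths
    print(abs_momentum_flux(rho=0.02, g=9.8, N=0.02, T_bar=230.0,
                            T_hat=2.0, lambda_z=6e3, lambda_h=400e3))
    ```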

  2. Exploration of diffusion kernel density estimation in agricultural drought risk analysis: a case study in Shandong, China

    NASA Astrophysics Data System (ADS)

    Chen, W.; Shao, Z.; Tiong, L. K.

    2015-11-01

    Drought has caused the most widespread damage in China, accounting for over 50 % of the total affected area nationwide in recent decades. In this paper, a Standardized Precipitation Index-based (SPI-based) drought risk study is conducted using historical rainfall data from 19 weather stations in Shandong province, China. A kernel density based method is adopted to carry out the risk analysis. A comparison between bivariate Gaussian kernel density estimation (GKDE) and diffusion kernel density estimation (DKDE) is carried out to analyze the effect of drought intensity and drought duration. The results show that DKDE is relatively more accurate and does not suffer from boundary leakage. Combined with the GIS technique, the drought risk is mapped, revealing the spatial and temporal variation of agricultural droughts for corn in Shandong. The estimation provides a different way to study the occurrence frequency and severity of drought risk from multiple perspectives.
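
    A minimal sketch of the bivariate Gaussian KDE step over (duration, severity) pairs of drought events identified from SPI series; the event data are placeholders, and the diffusion estimator the paper favours would replace gaussian_kde here:

    ```python
    import numpy as np
    from scipy.stats import gaussian_kde

    # Hypothetical drought events extracted from SPI series: (duration, severity)
    durations  = np.array([2, 3, 5, 4, 6, 2, 8, 3, 5, 7], dtype=float)   # months
    severities = np.array([1.1, 1.6, 2.9, 2.0, 3.5, 0.9, 4.8, 1.4, 2.7, 4.1])

    kde = gaussian_kde(np.vstack([durations, severities]))

    # P(duration >= 5 months and severity >= 2.5) by Monte Carlo integration
    samples = kde.resample(100_000)
    p_exceed = np.mean((samples[0] >= 5.0) & (samples[1] >= 2.5))
    print("joint exceedance probability:", p_exceed)
    ```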

  3. Wavelet-based reconstruction of fossil-fuel CO2 emissions from sparse measurements

    NASA Astrophysics Data System (ADS)

    McKenna, S. A.; Ray, J.; Yadav, V.; Van Bloemen Waanders, B.; Michalak, A. M.

    2012-12-01

    We present a method to estimate spatially resolved fossil-fuel CO2 (ffCO2) emissions from sparse measurements of time-varying CO2 concentrations. It is based on wavelet modeling of the strongly non-stationary spatial distribution of ffCO2 emissions. The dimensionality of the wavelet model is first reduced using images of nightlights, which identify regions of human habitation. Since wavelets are a multiresolution basis set, most of the reduction is accomplished by removing fine-scale wavelets in regions with low nightlight radiances. The (reduced) wavelet model of emissions is propagated through an atmospheric transport model (WRF) to predict CO2 concentrations at a handful of measurement sites. The estimation of the wavelet model of emissions, i.e., inferring the wavelet weights, is performed by fitting to observations at the measurement sites. This is done using Stagewise Orthogonal Matching Pursuit (StOMP), which first identifies (and sets to zero) the wavelet coefficients that cannot be estimated from the observations, before estimating the remaining coefficients. This model sparsification and fitting is performed simultaneously, allowing us to explore multiple wavelet models of differing complexity. The technique is borrowed from the field of compressive sensing and is generally used in image and video processing. We test this approach using synthetic observations generated from emissions in the Vulcan database. Thirty-five sensor sites are chosen over the USA. FfCO2 emissions, averaged over 8-day periods, are estimated at a 1-degree spatial resolution. We find that only about 40% of the wavelets in the emission model can be estimated from the data; however, the mix of coefficients that are estimated changes with time. Total US emissions can be reconstructed with errors of about 5%. The inferred emissions, if aggregated monthly, have a correlation of 0.9 with Vulcan fluxes. We find that the estimated emissions in the Northeast US are the most accurate.
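
    A minimal sketch of the dimensionality-reduction idea only: take a 2-D wavelet decomposition of a gridded emission field and zero the fine-scale coefficients wherever a nightlight mask indicates no habitation (the StOMP inversion itself is not shown). The field, mask and Haar basis used here are placeholders.

    ```python
    import numpy as np
    import pywt

    rng = np.random.default_rng(0)
    emissions = rng.random((128, 128))            # placeholder gridded ffCO2 field
    lit = rng.random((128, 128)) > 0.7            # placeholder nightlight mask

    coeffs = pywt.wavedec2(emissions, "haar", level=3)
    approx, details = coeffs[0], coeffs[1:]

    pruned = [approx]
    for level_bands in details:                   # coarse -> fine detail levels
        new_bands = []
        for band in level_bands:                  # (horizontal, vertical, diagonal)
            # Downsample the mask to this band's shape; keep coefficients only
            # where some habitation is present in the corresponding region.
            fy = lit.shape[0] // band.shape[0]
            fx = lit.shape[1] // band.shape[1]
            keep = lit[:band.shape[0] * fy, :band.shape[1] * fx] \
                      .reshape(band.shape[0], fy, band.shape[1], fx).any(axis=(1, 3))
            new_bands.append(np.where(keep, band, 0.0))
        pruned.append(tuple(new_bands))

    reduced = pywt.waverec2(pruned, "haar")       # reduced-dimension emission model
    ```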

  4. The First Estimates of Marbled Cat Pardofelis marmorata Population Density from Bornean Primary and Selectively Logged Forest

    PubMed Central

    Hearn, Andrew J.; Ross, Joanna; Bernard, Henry; Bakar, Soffian Abu; Hunter, Luke T. B.; Macdonald, David W.

    2016-01-01

    The marbled cat Pardofelis marmorata is a poorly known wild cat that has a broad distribution across much of the Indomalayan ecorealm. This felid is thought to exist at low population densities throughout its range, yet no estimates of its abundance exist, hampering assessment of its conservation status. To investigate the distribution and abundance of marbled cats we conducted intensive, felid-focused camera trap surveys of eight forest areas and two oil palm plantations in Sabah, Malaysian Borneo. Study sites were broadly representative of the range of habitat types and the gradient of anthropogenic disturbance and fragmentation present in contemporary Sabah. We recorded marbled cats from all forest study areas apart from a small, relatively isolated forest patch, although photographic detection frequency varied greatly between areas. No marbled cats were recorded within the plantations, but a single individual was recorded walking along the forest/plantation boundary. We collected sufficient numbers of marbled cat photographic captures at three study areas to permit density estimation based on spatially explicit capture-recapture analyses. Estimates of population density from the primary, lowland Danum Valley Conservation Area and primary upland, Tawau Hills Park, were 19.57 (SD: 8.36) and 7.10 (SD: 1.90) individuals per 100 km2, respectively, and the selectively logged, lowland Tabin Wildlife Reserve yielded an estimated density of 10.45 (SD: 3.38) individuals per 100 km2. The low detection frequencies recorded in our other survey sites and from published studies elsewhere in its range, and the absence of previous density estimates for this felid suggest that our density estimates may be from the higher end of their abundance spectrum. We provide recommendations for future marbled cat survey approaches. PMID:27007219

  5. The First Estimates of Marbled Cat Pardofelis marmorata Population Density from Bornean Primary and Selectively Logged Forest.

    PubMed

    Hearn, Andrew J; Ross, Joanna; Bernard, Henry; Bakar, Soffian Abu; Hunter, Luke T B; Macdonald, David W

    2016-01-01

    The marbled cat Pardofelis marmorata is a poorly known wild cat that has a broad distribution across much of the Indomalayan ecorealm. This felid is thought to exist at low population densities throughout its range, yet no estimates of its abundance exist, hampering assessment of its conservation status. To investigate the distribution and abundance of marbled cats we conducted intensive, felid-focused camera trap surveys of eight forest areas and two oil palm plantations in Sabah, Malaysian Borneo. Study sites were broadly representative of the range of habitat types and the gradient of anthropogenic disturbance and fragmentation present in contemporary Sabah. We recorded marbled cats from all forest study areas apart from a small, relatively isolated forest patch, although photographic detection frequency varied greatly between areas. No marbled cats were recorded within the plantations, but a single individual was recorded walking along the forest/plantation boundary. We collected sufficient numbers of marbled cat photographic captures at three study areas to permit density estimation based on spatially explicit capture-recapture analyses. Estimates of population density from the primary, lowland Danum Valley Conservation Area and primary upland, Tawau Hills Park, were 19.57 (SD: 8.36) and 7.10 (SD: 1.90) individuals per 100 km2, respectively, and the selectively logged, lowland Tabin Wildlife Reserve yielded an estimated density of 10.45 (SD: 3.38) individuals per 100 km2. The low detection frequencies recorded in our other survey sites and from published studies elsewhere in its range, and the absence of previous density estimates for this felid suggest that our density estimates may be from the higher end of their abundance spectrum. We provide recommendations for future marbled cat survey approaches.

  6. The First Estimates of Marbled Cat Pardofelis marmorata Population Density from Bornean Primary and Selectively Logged Forest.

    PubMed

    Hearn, Andrew J; Ross, Joanna; Bernard, Henry; Bakar, Soffian Abu; Hunter, Luke T B; Macdonald, David W

    2016-01-01

    The marbled cat Pardofelis marmorata is a poorly known wild cat that has a broad distribution across much of the Indomalayan ecorealm. This felid is thought to exist at low population densities throughout its range, yet no estimates of its abundance exist, hampering assessment of its conservation status. To investigate the distribution and abundance of marbled cats we conducted intensive, felid-focused camera trap surveys of eight forest areas and two oil palm plantations in Sabah, Malaysian Borneo. Study sites were broadly representative of the range of habitat types and the gradient of anthropogenic disturbance and fragmentation present in contemporary Sabah. We recorded marbled cats from all forest study areas apart from a small, relatively isolated forest patch, although photographic detection frequency varied greatly between areas. No marbled cats were recorded within the plantations, but a single individual was recorded walking along the forest/plantation boundary. We collected sufficient numbers of marbled cat photographic captures at three study areas to permit density estimation based on spatially explicit capture-recapture analyses. Estimates of population density from the primary, lowland Danum Valley Conservation Area and primary upland, Tawau Hills Park, were 19.57 (SD: 8.36) and 7.10 (SD: 1.90) individuals per 100 km2, respectively, and the selectively logged, lowland Tabin Wildlife Reserve yielded an estimated density of 10.45 (SD: 3.38) individuals per 100 km2. The low detection frequencies recorded in our other survey sites and from published studies elsewhere in its range, and the absence of previous density estimates for this felid suggest that our density estimates may be from the higher end of their abundance spectrum. We provide recommendations for future marbled cat survey approaches. PMID:27007219

  7. Estimating the cell density and invasive radius of three-dimensional glioblastoma tumor spheroids grown in vitro

    NASA Astrophysics Data System (ADS)

    Stein, Andrew M.; Nowicki, Michal O.; Demuth, Tim; Berens, Michael E.; Lawler, Sean E.; Chiocca, E. Antonio; Sander, Leonard M.

    2007-08-01

    To gain insight into brain tumor invasion, experiments are conducted on multicellular tumor spheroids grown in collagen gel. Typically, a radius of invasion is reported, which is obtained by human measurement. We present a simple, heuristic algorithm for automated invasive radii estimation (AIRE) that uses local fluctuations of the image intensity. We then derive an analytical expression relating the image graininess to the cell density for a model imaging system. The result agrees with the experiment up to a multiplicative constant and thus describes a novel method for estimating the cell density from bright-field images.
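
    A rough sketch of the underlying idea (not the AIRE implementation itself): measure local intensity fluctuations in radial bins around the spheroid centre and take the invasive radius as the largest radius at which the fluctuation still exceeds a background level. The window size, bin count and threshold factor below are hypothetical.

    ```python
    import numpy as np
    from scipy import ndimage

    def invasive_radius(image, center, nbins=60, factor=2.0, win=9):
        """Radius of invasion from the radial profile of local image graininess."""
        img = image.astype(float)
        local_var = ndimage.uniform_filter(img**2, win) - ndimage.uniform_filter(img, win)**2
        yy, xx = np.indices(img.shape)
        r = np.hypot(yy - center[0], xx - center[1]).ravel()
        edges = np.linspace(0.0, r.max(), nbins + 1)
        idx = np.clip(np.digitize(r, edges) - 1, 0, nbins - 1)
        sums = np.bincount(idx, weights=local_var.ravel(), minlength=nbins)
        counts = np.maximum(np.bincount(idx, minlength=nbins), 1)
        profile = sums / counts                        # mean graininess vs. radius
        background = np.median(profile[-nbins // 4:])  # outermost bins ~ cell-free gel
        above = np.nonzero(profile > factor * background)[0]
        return edges[above.max() + 1] if above.size else 0.0   # radius in pixels
    ```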

  8. Estimation of tiger densities in the tropical dry forests of Panna, Central India, using photographic capture-recapture sampling

    USGS Publications Warehouse

    Karanth, K.U.; Chundawat, R.S.; Nichols, J.D.; Kumar, N.S.

    2004-01-01

    Tropical dry-deciduous forests comprise more than 45% of the tiger (Panthera tigris) habitat in India. However, in the absence of rigorously derived estimates of ecological densities of tigers in dry forests, critical baseline data for managing tiger populations are lacking. In this study tiger densities were estimated using photographic capture–recapture sampling in the dry forests of Panna Tiger Reserve in Central India. Over a 45-day survey period, 60 camera trap sites were sampled in a well-protected part of the 542-km2 reserve during 2002. A total sampling effort of 914 camera-trap-days yielded photo-captures of 11 individual tigers over 15 sampling occasions that effectively covered a 418-km2 area. The closed capture–recapture model Mh, which incorporates individual heterogeneity in capture probabilities, fitted these photographic capture history data well. The estimated capture probability/sample, 0.04, resulted in an estimated tiger population size and standard error of 29 (9.65), and a density of 6.94 (3.23) tigers/100 km2. The estimated tiger density matched predictions based on prey abundance. Our results suggest that, if managed appropriately, the available dry forest habitat in India has the potential to support a population size of about 9000 wild tigers.

  9. Estimation of tiger densities in the tropical dry forests of Panna, Central India, using photographic capture-recapture sampling

    USGS Publications Warehouse

    Karanth, K.Ullas; Chundawat, Raghunandan S.; Nichols, James D.; Kumar, N. Samba

    2004-01-01

    Tropical dry-deciduous forests comprise more than 45% of the tiger (Panthera tigris) habitat in India. However, in the absence of rigorously derived estimates of ecological densities of tigers in dry forests, critical baseline data for managing tiger populations are lacking. In this study tiger densities were estimated using photographic capture–recapture sampling in the dry forests of Panna Tiger Reserve in Central India. Over a 45-day survey period, 60 camera trap sites were sampled in a well-protected part of the 542-km2 reserve during 2002. A total sampling effort of 914 camera-trap-days yielded photo-captures of 11 individual tigers over 15 sampling occasions that effectively covered a 418-km2 area. The closed capture–recapture model Mh, which incorporates individual heterogeneity in capture probabilities, fitted these photographic capture history data well. The estimated capture probability/sample, p̂= 0.04, resulted in an estimated tiger population size and standard error (N̂(SÊN̂)) of 29 (9.65), and a density (D̂(SÊD̂)) of 6.94 (3.23) tigers/100 km2. The estimated tiger density matched predictions based on prey abundance. Our results suggest that, if managed appropriately, the available dry forest habitat in India has the potential to support a population size of about 9000 wild tigers.

  10. Estimation of ocelot density in the pantanal using capture-recapture analysis of camera-trapping data

    USGS Publications Warehouse

    Trolle, M.; Kery, M.

    2003-01-01

    Neotropical felids such as the ocelot (Leopardus pardalis) are secretive, and it is difficult to estimate their populations using conventional methods such as radiotelemetry or sign surveys. We show that recognition of individual ocelots from camera-trapping photographs is possible, and we use camera-trapping results combined with closed population capture-recapture models to estimate density of ocelots in the Brazilian Pantanal. We estimated the area from which animals were camera trapped at 17.71 km2. A model with constant capture probability yielded an estimate of 10 independent ocelots in our study area, which translates to a density of 2.82 independent individuals for every 5 km2 (SE 1.00).

  11. Data classification with radial basis function networks based on a novel kernel density estimation algorithm.

    PubMed

    Oyang, Yen-Jen; Hwang, Shien-Ching; Ou, Yu-Yen; Chen, Chien-Yu; Chen, Zhi-Wei

    2005-01-01

    This paper presents a novel learning algorithm for efficient construction of the radial basis function (RBF) networks that can deliver the same level of accuracy as the support vector machines (SVMs) in data classification applications. The proposed learning algorithm works by constructing one RBF subnetwork to approximate the probability density function of each class of objects in the training data set. With respect to algorithm design, the main distinction of the proposed learning algorithm is the novel kernel density estimation algorithm that features an average time complexity of O(n log n), where n is the number of samples in the training data set. One important advantage of the proposed learning algorithm, in comparison with the SVM, is that the proposed learning algorithm generally takes far less time to construct a data classifier with an optimized parameter setting. This feature is of significance for many contemporary applications, in particular, for those applications in which new objects are continuously added into an already large database. Another desirable feature of the proposed learning algorithm is that the RBF networks constructed are capable of carrying out data classification with more than two classes of objects in one single run. In other words, unlike with the SVM, there is no need to resort to mechanisms such as one-against-one or one-against-all for handling datasets with more than two classes of objects. The comparison with SVM is of particular interest, because it has been shown in a number of recent studies that SVM generally are able to deliver higher classification accuracy than the other existing data classification algorithms. As the proposed learning algorithm is instance-based, the data reduction issue is also addressed in this paper. One interesting observation in this regard is that, for all three data sets used in data reduction experiments, the number of training samples remaining after a naive data reduction mechanism is
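
    The record above describes the general recipe of building one density model per class and assigning new points to the class with the larger estimated density. The sketch below illustrates that idea with SciPy's ordinary gaussian_kde as a stand-in; it is not the paper's O(n log n) estimator or its RBF subnetwork construction.

        # Hedged sketch: classify by per-class kernel density estimates.
        # scipy's Gaussian KDE is used as a stand-in for the paper's estimator.
        import numpy as np
        from scipy.stats import gaussian_kde

        rng = np.random.default_rng(0)
        class_a = rng.normal(loc=0.0, scale=1.0, size=(2, 200))  # 2-D samples, class A
        class_b = rng.normal(loc=3.0, scale=1.0, size=(2, 200))  # 2-D samples, class B

        kde_a = gaussian_kde(class_a)   # one density model per class
        kde_b = gaussian_kde(class_b)

        x_new = np.array([[0.5], [0.4]])   # a point to classify
        label = "A" if kde_a(x_new)[0] > kde_b(x_new)[0] else "B"
        print("predicted class:", label)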

  12. Near-Native Protein Loop Sampling Using Nonparametric Density Estimation Accommodating Sparcity

    PubMed Central

    Day, Ryan; Lennox, Kristin P.; Sukhanov, Paul; Dahl, David B.; Vannucci, Marina; Tsai, Jerry

    2011-01-01

    Unlike the core structural elements of a protein like regular secondary structure, template based modeling (TBM) has difficulty with loop regions due to their variability in sequence and structure as well as the sparse sampling from a limited number of homologous templates. We present a novel, knowledge-based method for loop sampling that leverages homologous torsion angle information to estimate a continuous joint backbone dihedral angle density at each loop position. The φ,ψ distributions are estimated via a Dirichlet process mixture of hidden Markov models (DPM-HMM). Models are quickly generated based on samples from these distributions and were enriched using an end-to-end distance filter. The performance of the DPM-HMM method was evaluated against a diverse test set in a leave-one-out approach. Candidates as low as 0.45 Å RMSD and with a worst case of 3.66 Å were produced. For the canonical loops like the immunoglobulin complementarity-determining regions (mean RMSD <2.0 Å), the DPM-HMM method performs as well or better than the best templates, demonstrating that our automated method recaptures these canonical loops without inclusion of any IgG specific terms or manual intervention. In cases with poor or few good templates (mean RMSD >7.0 Å), this sampling method produces a population of loop structures to around 3.66 Å for loops up to 17 residues. In a direct test of sampling to the Loopy algorithm, our method demonstrates the ability to sample nearer native structures for both the canonical CDRH1 and non-canonical CDRH3 loops. Lastly, in the realistic test conditions of the CASP9 experiment, successful application of DPM-HMM for 90 loops from 45 TBM targets shows the general applicability of our sampling method in loop modeling problem. These results demonstrate that our DPM-HMM produces an advantage by consistently sampling near native loop structure. The software used in this analysis is available for download at http

  13. Predicting microRNA precursors with a generalized Gaussian components based density estimation algorithm

    PubMed Central

    2010-01-01

    Background MicroRNAs (miRNAs) are short non-coding RNA molecules, which play an important role in post-transcriptional regulation of gene expression. There have been many efforts to discover miRNA precursors (pre-miRNAs) over the years. Recently, ab initio approaches have attracted more attention because they do not depend on homology information and provide broader applications than comparative approaches. Kernel based classifiers such as support vector machine (SVM) are extensively adopted in these ab initio approaches due to the prediction performance they achieved. On the other hand, logic based classifiers such as decision tree, of which the constructed model is interpretable, have attracted less attention. Results This article reports the design of a predictor of pre-miRNAs with a novel kernel based classifier named the generalized Gaussian density estimator (G2DE) based classifier. The G2DE is a kernel based algorithm designed to provide interpretability by utilizing a few but representative kernels for constructing the classification model. The performance of the proposed predictor has been evaluated with 692 human pre-miRNAs and has been compared with two kernel based and two logic based classifiers. The experimental results show that the proposed predictor is capable of achieving prediction performance comparable to those delivered by the prevailing kernel based classification algorithms, while providing the user with an overall picture of the distribution of the data set. Conclusion Software predictors that identify pre-miRNAs in genomic sequences have been exploited by biologists to facilitate molecular biology research in recent years. The G2DE employed in this study can deliver prediction accuracy comparable with the state-of-the-art kernel based machine learning algorithms. Furthermore, biologists can obtain valuable insights about the different characteristics of the sequences of pre-miRNAs with the models generated by the G2DE based predictor. PMID

  14. Using a kernel density estimation based classifier to predict species-specific microRNA precursors

    PubMed Central

    Chang, Darby Tien-Hao; Wang, Chih-Ching; Chen, Jian-Wei

    2008-01-01

    Background MicroRNAs (miRNAs) are short non-coding RNA molecules participating in post-transcriptional regulation of gene expression. There have been many efforts to discover miRNA precursors (pre-miRNAs) over the years. Recently, ab initio approaches have attracted more attention because they can discover species-specific pre-miRNAs. Most ab initio approaches proposed novel features to characterize RNA molecules. However, there were fewer discussions on the associated classification mechanism in a miRNA predictor. Results This study focuses on the classification algorithm for miRNA prediction. We develop a novel ab initio method, miR-KDE, in which most of the features are collected from previous works. The classification mechanism in miR-KDE is the relaxed variable kernel density estimator (RVKDE) that we have recently proposed. When compared to the famous support vector machine (SVM), RVKDE exploits more local information of the training dataset. MiR-KDE is evaluated using a training set consisting of only human pre-miRNAs to predict a benchmark collected from 40 species. The experimental results show that miR-KDE delivers favorable performance in predicting human pre-miRNAs and has advantages for pre-miRNAs from genera taxonomically distant from humans. Conclusion We use a novel classifier whose exploitation of local information makes it particularly suitable for predicting species-specific pre-miRNAs. This study also provides a comprehensive analysis from the view of classification mechanism. The good performance of miR-KDE encourages more efforts on the classification methodology as well as the feature extraction in miRNA prediction. PMID:19091019

  15. Estimating stand structure using discrete-return lidar: an example from low density, fire prone ponderosa pine forests

    USGS Publications Warehouse

    Hall, S. A.; Burke, I.C.; Box, D. O.; Kaufmann, M. R.; Stoker, Jason M.

    2005-01-01

    The ponderosa pine forests of the Colorado Front Range, USA, have historically been subjected to wildfires. Recent large burns have increased public interest in fire behavior and effects, and scientific interest in the carbon consequences of wildfires. Remote sensing techniques can provide spatially explicit estimates of stand structural characteristics. Some of these characteristics can be used as inputs to fire behavior models, increasing our understanding of the effect of fuels on fire behavior. Others provide estimates of carbon stocks, allowing us to quantify the carbon consequences of fire. Our objective was to use discrete-return lidar to estimate such variables, including stand height, total aboveground biomass, foliage biomass, basal area, tree density, canopy base height and canopy bulk density. We developed 39 metrics from the lidar data, and used them in limited combinations in regression models, which we fit to field estimates of the stand structural variables. We used an information–theoretic approach to select the best model for each variable, and to select the subset of lidar metrics with most predictive potential. Observed versus predicted values of stand structure variables were highly correlated, with r2 ranging from 57% to 87%. The most parsimonious linear models for the biomass structure variables, based on a restricted dataset, explained between 35% and 58% of the observed variability. Our results provide us with useful estimates of stand height, total aboveground biomass, foliage biomass and basal area. There is promise for using this sensor to estimate tree density, canopy base height and canopy bulk density, though more research is needed to generate robust relationships. We selected 14 lidar metrics that showed the most potential as predictors of stand structure. We suggest that the focus of future lidar studies should broaden to include low density forests, particularly systems where the vertical structure of the canopy is important

  16. Wavelet-based denoising of the Fourier metric in real-time wavefront correction for single molecule localization microscopy

    NASA Astrophysics Data System (ADS)

    Tehrani, Kayvan Forouhesh; Mortensen, Luke J.; Kner, Peter

    2016-03-01

    Wavefront sensorless schemes for correction of aberrations induced by biological specimens require a time invariant property of an image as a measure of fitness. Image intensity cannot be used as a metric for Single Molecule Localization (SML) microscopy because the intensity of blinking fluorophores follows exponential statistics. Therefore a robust intensity-independent metric is required. We previously reported a Fourier Metric (FM) that is relatively intensity independent. The Fourier metric has been successfully tested on two machine learning algorithms, a Genetic Algorithm and Particle Swarm Optimization, for wavefront correction about 50 μm deep inside the Central Nervous System (CNS) of Drosophila. However, since the spatial frequencies that need to be optimized fall into regions of the Optical Transfer Function (OTF) that are more susceptible to noise, adding a level of denoising can improve performance. Here we present wavelet-based approaches to lower the noise level and produce a more consistent metric. We compare performance of different wavelets such as Daubechies, Bi-Orthogonal, and reverse Bi-orthogonal of different degrees and orders for pre-processing of images.
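
    As a rough illustration of the kind of pre-processing described above, the following sketch applies a generic 2-D wavelet shrinkage step with PyWavelets before any image metric would be computed. The wavelet family, decomposition level, and universal threshold are illustrative choices, not the settings compared in the paper.

        # Hedged sketch: generic 2-D wavelet denoising of an image prior to
        # computing a sharpness/Fourier metric. Illustrative settings only.
        import numpy as np
        import pywt

        def wavelet_denoise(image, wavelet="db4", level=2):
            coeffs = pywt.wavedec2(image, wavelet, level=level)
            sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745   # noise from finest diagonal band
            thresh = sigma * np.sqrt(2.0 * np.log(image.size))   # universal threshold
            denoised = [coeffs[0]] + [
                tuple(pywt.threshold(band, thresh, mode="soft") for band in detail)
                for detail in coeffs[1:]
            ]
            return pywt.waverec2(denoised, wavelet)

        metric_image = np.random.default_rng(1).normal(size=(64, 64))  # stand-in for a camera frame
        smoothed = wavelet_denoise(metric_image)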

  17. Enhancement of Tropical Land Cover Mapping with Wavelet-Based Fusion and Unsupervised Clustering of SAR and Landsat Image Data

    NASA Technical Reports Server (NTRS)

    LeMoigne, Jacqueline; Laporte, Nadine; Netanyahuy, Nathan S.; Zukor, Dorothy (Technical Monitor)

    2001-01-01

    The characterization and the mapping of land cover/land use of forest areas, such as the Central African rainforest, is a very complex task. This complexity is mainly due to the extent of such areas and, as a consequence, to the lack of full and continuous cloud-free coverage of those large regions by one single remote sensing instrument. In order to provide improved vegetation maps of Central Africa and to develop forest monitoring techniques for applications at the local and regional scales, we propose to utilize multi-sensor remote sensing observations coupled with in-situ data. Fusion and clustering of multi-sensor data are the first steps towards the development of such a forest monitoring system. In this paper, we will describe some preliminary experiments involving the fusion of SAR and Landsat image data of the Lope Reserve in Gabon. Similarly to previous fusion studies, our fusion method is wavelet-based. The fusion provides a new image data set which contains more detailed texture features and preserves the large homogeneous regions that are observed by the Thematic Mapper sensor. The fusion step is followed by unsupervised clustering and provides a vegetation map of the area.
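
    A common wavelet fusion recipe, of the general kind referred to above, keeps the coarse approximation of one sensor and, at each detail position, the larger-magnitude coefficient from either sensor. The sketch below (PyWavelets, illustrative wavelet and level) shows that recipe; it is not necessarily the exact rule used in the study.

        # Hedged sketch: generic wavelet fusion of two co-registered images.
        # Keep the optical approximation band; take the stronger detail coefficient.
        import numpy as np
        import pywt

        def wavelet_fuse(optical, sar, wavelet="db2", level=2):
            c_opt = pywt.wavedec2(optical, wavelet, level=level)
            c_sar = pywt.wavedec2(sar, wavelet, level=level)
            fused = [c_opt[0]]                       # coarse approximation from the optical image
            for d_opt, d_sar in zip(c_opt[1:], c_sar[1:]):
                fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                                   for a, b in zip(d_opt, d_sar)))
            return pywt.waverec2(fused, wavelet)

        rng = np.random.default_rng(7)
        fused_image = wavelet_fuse(rng.random((128, 128)), rng.random((128, 128)))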

  18. Detection of Dendritic Spines Using Wavelet-Based Conditional Symmetric Analysis and Regularized Morphological Shared-Weight Neural Networks

    PubMed Central

    Wang, Shuihua; Chen, Mengmeng; Li, Yang; Zhang, Yudong; Han, Liangxiu; Wu, Jane; Du, Sidan

    2015-01-01

    Identification and detection of dendritic spines in neuron images are of high interest in diagnosis and treatment of neurological and psychiatric disorders (e.g., Alzheimer's disease, Parkinson's diseases, and autism). In this paper, we have proposed a novel automatic approach using wavelet-based conditional symmetric analysis and regularized morphological shared-weight neural networks (RMSNN) for dendritic spine identification involving the following steps: backbone extraction, localization of dendritic spines, and classification. First, a new algorithm based on wavelet transform and conditional symmetric analysis has been developed to extract backbone and locate the dendrite boundary. Then, the RMSNN has been proposed to classify the spines into three predefined categories (mushroom, thin, and stubby). We have compared our proposed approach against the existing methods. The experimental result demonstrates that the proposed approach can accurately locate the dendrite and accurately classify the spines into three categories with the accuracy of 99.1% for “mushroom” spines, 97.6% for “stubby” spines, and 98.6% for “thin” spines. PMID:26692046

  19. Analysis of hydrological trend for radioactivity content in bore-hole water samples using wavelet based denoising.

    PubMed

    Paul, Sabyasachi; Suman, V; Sarkar, P K; Ranade, A K; Pulhani, V; Dafauti, S; Datta, D

    2013-08-01

    A wavelet transform based denoising methodology has been applied to detect the presence of any discernable trend in (137)Cs and (90)Sr activity levels in bore-hole water samples collected four times a year over a period of eight years, from 2002 to 2009, in the vicinity of typical nuclear facilities inside the restricted access zones. The conventional non-parametric methods viz., Mann-Kendall and Spearman rho, along with linear regression when applied for detecting the linear trend in the time series data do not yield results conclusive for trend detection with a confidence of 95% for most of the samples. The stationary wavelet based hard thresholding data pruning method with Haar as the analyzing wavelet was applied to remove the noise present in the same data. Results indicate that confidence interval of the established trend has significantly improved after pre-processing to more than 98% compared to the conventional non-parametric methods when applied to direct measurements.
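
    The processing chain described above, a stationary wavelet transform with Haar, hard thresholding, and then a trend test, can be sketched as follows. The threshold rule and the simple Mann-Kendall S statistic below are illustrative stand-ins for the paper's exact pruning and testing procedure.

        # Hedged sketch: SWT (Haar) hard-threshold denoising followed by a simple
        # Mann-Kendall S statistic. Threshold choice is illustrative only.
        import numpy as np
        import pywt

        def swt_hard_denoise(x, level=2):
            coeffs = pywt.swt(x, "haar", level=level)           # [(cA, cD), ...]
            sigma = np.median(np.abs(coeffs[-1][1])) / 0.6745   # noise from finest detail band
            thresh = sigma * np.sqrt(2.0 * np.log(len(x)))
            pruned = [(cA, pywt.threshold(cD, thresh, mode="hard")) for cA, cD in coeffs]
            return pywt.iswt(pruned, "haar")

        def mann_kendall_S(x):
            return sum(np.sign(x[j] - x[i])
                       for i in range(len(x)) for j in range(i + 1, len(x)))

        series = np.cumsum(np.random.default_rng(2).normal(0.05, 1.0, size=32))
        print("S statistic on denoised series:", mann_kendall_S(swt_hard_denoise(series)))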

  20. Wavelet based error correction and predictive uncertainty of a hydrological forecasting system

    NASA Astrophysics Data System (ADS)

    Bogner, Konrad; Pappenberger, Florian; Thielen, Jutta; de Roo, Ad

    2010-05-01

    River discharge predictions most often show errors with scaling properties of unknown source and statistical structure that degrade the quality of forecasts. This is especially true for lead-time ranges greater than a few days. Since the European Flood Alert System (EFAS) provides discharge forecasts up to ten days ahead, it is necessary to take these scaling properties into consideration. For example, the error that occurs in spring is caused by long-lasting snowmelt processes and spans a far larger range of scales than the error that appears during the summer period, which is caused by convective rain fields of short duration. The wavelet decomposition is an excellent way to provide the detailed model error at different levels in order to estimate the (unobserved) state variables more precisely. A Vector-AutoRegressive model with eXogenous input (VARX) is fitted for the different levels of the wavelet decomposition simultaneously and, after predicting the next time steps ahead for each scale, a reconstruction formula is applied to transform the predictions in the wavelet domain back to the original time domain. The Bayesian Uncertainty Processor (BUP) developed by Krzysztofowicz is an efficient method to estimate the full predictive uncertainty, which is derived by integrating the hydrological model uncertainty and the meteorological input uncertainty. A hydrological uncertainty processor is first applied to the error-corrected discharge series in order to derive the predictive conditional distribution under the hypothesis that there is no input uncertainty. The uncertainty of the forecasted meteorological input forcing the hydrological model is derived from the combination of deterministic weather forecasts and ensemble prediction systems (EPS), and the Input Processor maps this input uncertainty into the output uncertainty under the hypothesis that there is no hydrological uncertainty. The main objective of this Bayesian forecasting system
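
    The scale-by-scale error correction described above can be illustrated with a much simpler stand-in: an additive (a trous) Haar decomposition of the forecast-error series and a toy zero-mean AR(1) forecast per scale in place of the VARX model, with the per-scale forecasts summed to reconstruct the predicted error.

        # Hedged sketch: additive a-trous Haar decomposition of a forecast-error
        # series, toy AR(1) forecast per scale, summed reconstruction. This is a
        # simplified stand-in for the VARX-per-level scheme described above.
        import numpy as np

        def a_trous_haar(x, levels=3):
            # Additive decomposition: x == sum(details) + smooth (telescoping sums).
            details, smooth = [], x.astype(float)
            for j in range(levels):
                shifted = np.roll(smooth, 2 ** j)       # crude periodic boundary handling
                new_smooth = 0.5 * (smooth + shifted)
                details.append(smooth - new_smooth)
                smooth = new_smooth
            return details, smooth

        def ar1_forecast(c):
            phi = np.corrcoef(c[:-1], c[1:])[0, 1]      # lag-1 autocorrelation, zero mean assumed
            return phi * c[-1]

        rng = np.random.default_rng(3)
        error_series = np.cumsum(rng.normal(size=128)) * 0.1
        details, smooth = a_trous_haar(error_series)
        next_error = sum(ar1_forecast(d) for d in details) + ar1_forecast(smooth)
        print("predicted next-step model error:", round(float(next_error), 3))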

  1. Wavelet-Based Artifact Identification and Separation Technique for EEG Signals during Galvanic Vestibular Stimulation

    PubMed Central

    Adib, Mani; Cretu, Edmond

    2013-01-01

    We present a new method for removing artifacts in electroencephalography (EEG) records during Galvanic Vestibular Stimulation (GVS). The main challenge in exploiting GVS is to understand how the stimulus acts as an input to brain. We used EEG to monitor the brain and elicit the GVS reflexes. However, GVS current distribution throughout the scalp generates an artifact on EEG signals. We need to eliminate this artifact to be able to analyze the EEG signals during GVS. We propose a novel method to estimate the contribution of the GVS current in the EEG signals at each electrode by combining time-series regression methods with wavelet decomposition methods. We use wavelet transform to project the recorded EEG signal into various frequency bands and then estimate the GVS current distribution in each frequency band. The proposed method was optimized using simulated signals, and its performance was compared to well-accepted artifact removal methods such as ICA-based methods and adaptive filters. The results show that the proposed method has better performance in removing GVS artifacts, compared to the others. Using the proposed method, a higher signal to artifact ratio of −1.625 dB was achieved, which outperformed other methods such as ICA-based methods, regression methods, and adaptive filters. PMID:23956786

  2. The EM Method in a Probabilistic Wavelet-Based MRI Denoising

    PubMed Central

    2015-01-01

    Human body heat emission and other external causes can interfere with magnetic resonance image acquisition and produce noise. In this kind of image, the noise, when no signal is present, is Rayleigh distributed and its wavelet coefficients can be approximately modeled by a Gaussian distribution. Noiseless magnetic resonance images can be modeled by a Laplacian distribution in the wavelet domain. This paper proposes a new magnetic resonance image denoising method that exploits these facts. This method performs shrinkage of wavelet coefficients based on the conditional probability of being noise or detail. The parameters involved in this filtering approach are calculated by means of the expectation maximization (EM) method, which avoids the need to use an estimator of noise variance. The efficiency of the proposed filter is studied and compared with other important filtering techniques, such as Nowak's, Donoho-Johnstone's, Awate-Whitaker's, and nonlocal means filters, in different 2D and 3D images. PMID:26089959

  3. A wavelet-based metric for visual texture discrimination with applications in evolutionary ecology.

    PubMed

    Kiltie, R A; Fan, J; Laine, A F

    1995-03-01

    Much work on natural and sexual selection is concerned with the conspicuousness of visual patterns (textures) on animal and plant surfaces. Previous attempts by evolutionary biologists to quantify apparency of such textures have involved subjective estimates of conspicuousness or statistical analyses based on transect samples. We present a method based on wavelet analysis that avoids subjectivity and that uses more of the information in image textures than transects do. Like the human visual system for texture discrimination, and probably like that of other vertebrates, this method is based on localized analysis of orientation and frequency components of the patterns composing visual textures. As examples of the metric's utility, we present analyses of crypsis for tigers, zebras, and peppered moth morphs.

  4. Primates in Human-Modified and Fragmented Landscapes: The Conservation Relevance of Modelling Habitat and Disturbance Factors in Density Estimation

    PubMed Central

    Cavada, Nathalie; Barelli, Claudia; Ciolli, Marco; Rovero, Francesco

    2016-01-01

    Accurate density estimation of threatened animal populations is essential for management and conservation. This is particularly critical for species living in patchy and altered landscapes, as is the case for most tropical forest primates. In this study, we used a hierarchical modelling approach that incorporates the effect of environmental covariates on both the detection (i.e. observation) and the state (i.e. abundance) processes of distance sampling. We applied this method to already published data on three arboreal primates of the Udzungwa Mountains of Tanzania, including the endangered and endemic Udzungwa red colobus (Procolobus gordonorum). The area is a primate hotspot at continental level. Compared to previous, ‘canonical’ density estimates, we found that the inclusion of covariates in the modelling makes the inference process more informative, as it takes into full account the contrasting habitat and protection levels among forest blocks. The correction of density estimates for imperfect detection was especially critical where animal detectability was low. Relative to our approach, density was underestimated by the canonical distance sampling, particularly in the less protected forest. Group size had an effect on detectability, determining how the observation process varies depending on the socio-ecology of the target species. Lastly, as the inference on density is spatially-explicit to the scale of the covariates used in the modelling, we could confirm that primate densities are highest in low-to-mid elevations, where human disturbance tends to be greater, indicating a considerable resilience by target monkeys in disturbed habitats. However, the marked trend of lower densities in unprotected forests urgently calls for effective forest protection. PMID:26844891

  5. Primates in Human-Modified and Fragmented Landscapes: The Conservation Relevance of Modelling Habitat and Disturbance Factors in Density Estimation.

    PubMed

    Cavada, Nathalie; Barelli, Claudia; Ciolli, Marco; Rovero, Francesco

    2016-01-01

    Accurate density estimation of threatened animal populations is essential for management and conservation. This is particularly critical for species living in patchy and altered landscapes, as is the case for most tropical forest primates. In this study, we used a hierarchical modelling approach that incorporates the effect of environmental covariates on both the detection (i.e. observation) and the state (i.e. abundance) processes of distance sampling. We applied this method to already published data on three arboreal primates of the Udzungwa Mountains of Tanzania, including the endangered and endemic Udzungwa red colobus (Procolobus gordonorum). The area is a primate hotspot at continental level. Compared to previous, 'canonical' density estimates, we found that the inclusion of covariates in the modelling makes the inference process more informative, as it takes into full account the contrasting habitat and protection levels among forest blocks. The correction of density estimates for imperfect detection was especially critical where animal detectability was low. Relative to our approach, density was underestimated by the canonical distance sampling, particularly in the less protected forest. Group size had an effect on detectability, determining how the observation process varies depending on the socio-ecology of the target species. Lastly, as the inference on density is spatially-explicit to the scale of the covariates used in the modelling, we could confirm that primate densities are highest in low-to-mid elevations, where human disturbance tends to be greater, indicating a considerable resilience by target monkeys in disturbed habitats. However, the marked trend of lower densities in unprotected forests urgently calls for effective forest protection. PMID:26844891

  6. Ischemia episode detection in ECG using kernel density estimation, support vector machine and feature selection

    PubMed Central

    2012-01-01

    Background Myocardial ischemia can be developed into more serious diseases. Early Detection of the ischemic syndrome in electrocardiogram (ECG) more accurately and automatically can prevent it from developing into a catastrophic disease. To this end, we propose a new method, which employs wavelets and simple feature selection. Methods For training and testing, the European ST-T database is used, which is comprised of 367 ischemic ST episodes in 90 records. We first remove baseline wandering, and detect time positions of QRS complexes by a method based on the discrete wavelet transform. Next, for each heart beat, we extract three features which can be used for differentiating ST episodes from normal: 1) the area between QRS offset and T-peak points, 2) the normalized and signed sum from QRS offset to effective zero voltage point, and 3) the slope from QRS onset to offset point. We average the feature values for successive five beats to reduce effects of outliers. Finally we apply classifiers to those features. Results We evaluated the algorithm by kernel density estimation (KDE) and support vector machine (SVM) methods. Sensitivity and specificity for KDE were 0.939 and 0.912, respectively. The KDE classifier detects 349 ischemic ST episodes out of total 367 ST episodes. Sensitivity and specificity of SVM were 0.941 and 0.923, respectively. The SVM classifier detects 355 ischemic ST episodes. Conclusions We proposed a new method for detecting ischemia in ECG. It contains signal processing techniques of removing baseline wandering and detecting time positions of QRS complexes by discrete wavelet transform, and feature extraction from morphology of ECG waveforms explicitly. It was shown that the number of selected features were sufficient to discriminate ischemic ST episodes from the normal ones. We also showed how the proposed KDE classifier can automatically select kernel bandwidths, meaning that the algorithm does not require any numerical values of the parameters

  7. Adaptive variable-fidelity wavelet-based eddy-capturing approaches for compressible turbulence

    NASA Astrophysics Data System (ADS)

    Brown-Dymkoski, Eric; Vasilyev, Oleg V.

    2015-11-01

    Multiresolution wavelet methods have been developed for efficient simulation of compressible turbulence. They rely upon a filter to identify dynamically important coherent flow structures and adapt the mesh to resolve them. The filter threshold parameter, which can be specified globally or locally, allows for a continuous tradeoff between computational cost and fidelity, ranging seamlessly between DNS and adaptive LES. There are two main approaches to specifying the adaptive threshold parameter. It can be imposed as a numerical error bound, or alternatively, derived from real-time flow phenomena to ensure correct simulation of desired turbulent physics. As LES relies on often imprecise model formulations that require a high-quality mesh, this variable-fidelity approach offers a further tool for improving simulation by targeting deficiencies and locally increasing the resolution. Simultaneous physical and numerical criteria, derived from compressible flow physics and the governing equations, are used to identify turbulent regions and evaluate the fidelity. Several benchmark cases are considered to demonstrate the ability to capture variable density and thermodynamic effects in compressible turbulence. This work was supported by NSF under grant No. CBET-1236505.

  8. Survival analysis for the missing censoring indicator model using kernel density estimation techniques.

    PubMed

    Subramanian, Sundarraman

    2006-01-01

    This article concerns asymptotic theory for a new estimator of a survival function in the missing censoring indicator model of random censorship. Specifically, the large sample results for an inverse probability-of-non-missingness weighted estimator of the cumulative hazard function, so far not available, are derived, including an almost sure representation with rate for a remainder term, and uniform strong consistency with rate of convergence. The estimator is based on a kernel estimate for the conditional probability of non-missingness of the censoring indicator. Expressions for its bias and variance, in turn leading to an expression for the mean squared error as a function of the bandwidth, are also obtained. The corresponding estimator of the survival function, whose weak convergence is derived, is asymptotically efficient. A numerical study, comparing the performances of the proposed and two other currently existing efficient estimators, is presented. PMID:18953423

  9. Wavelet-based methods for the nonlinear inverse scattering problem using the extended Born approximation

    NASA Astrophysics Data System (ADS)

    Miller, Eric L.; Willsky, Alan S.

    1996-01-01

    In this paper, we present an approach to the nonlinear inverse scattering problem using the extended Born approximation (EBA) on the basis of methods from the fields of multiscale and statistical signal processing. By posing the problem directly in the wavelet transform domain, regularization is provided through the use of a multiscale prior statistical model. Using the maximum a posteriori (MAP) framework, we introduce the relative Cramér-Rao bound (RCRB) as a tool for analyzing the level of detail in a reconstruction supported by a data set as a function of the physics, the source-receiver geometry, and the nature of our prior information. The MAP estimate is determined using a novel implementation of the Levenberg-Marquardt algorithm in which the RCRB is used to achieve a substantial reduction in the effective dimensionality of the inversion problem with minimal degradation in performance. Additional reduction in complexity is achieved by taking advantage of the sparse structure of the matrices defining the EBA in scale space. An inverse electrical conductivity problem arising in geophysical prospecting applications provides the vehicle for demonstrating the analysis and algorithmic techniques developed in this paper.

  10. sEMG wavelet-based indices predicts muscle power loss during dynamic contractions.

    PubMed

    González-Izal, M; Rodríguez-Carreño, I; Malanda, A; Mallor-Giménez, F; Navarro-Amézqueta, I; Gorostiaga, E M; Izquierdo, M

    2010-12-01

    The purpose of this study was to investigate the sensitivity of new surface electromyography (sEMG) indices based on the discrete wavelet transform to estimate acute exercise-induced changes on muscle power output during a dynamic fatiguing protocol. Fifteen trained subjects performed five sets consisting of 10 leg press, with 2 min rest between sets. sEMG was recorded from vastus medialis (VM) muscle. Several surface electromyographic parameters were computed. These were: mean rectified voltage (MRV), median spectral frequency (F(med)), Dimitrov spectral index of muscle fatigue (FI(nsm5)), as well as five other parameters obtained from the stationary wavelet transform (SWT) as ratios between different scales. The new wavelet indices showed better accuracy to map changes in muscle power output during the fatiguing protocol. Moreover, the new wavelet indices as a single parameter predictor accounted for 46.6% of the performance variance of changes in muscle power and the log-FI(nsm5) and MRV as a two-factor combination predictor accounted for 49.8%. On the other hand, the new wavelet indices proposed, showed the highest robustness in presence of additive white Gaussian noise for different signal to noise ratios (SNRs). The sEMG wavelet indices proposed may be a useful tool to map changes in muscle power output during dynamic high-loading fatiguing task.

  11. A field comparison of nested grid and trapping web density estimators

    USGS Publications Warehouse

    Jett, D.A.; Nichols, J.D.

    1987-01-01

    The usefulness of capture-recapture estimators in any field study will depend largely on underlying model assumptions and on how closely these assumptions approximate the actual field situation. Evaluation of estimator performance under real-world field conditions is often a difficult matter, although several approaches are possible. Perhaps the best approach involves use of the estimation method on a population with known parameters.

  12. Power spectral density estimation by spline smoothing in the frequency domain

    NASA Technical Reports Server (NTRS)

    Defigueiredo, R. J. P.; Thompson, J. R.

    1972-01-01

    An approach, based on a global averaging procedure, is presented for estimating the power spectrum of a second order stationary zero-mean ergodic stochastic process from a finite length record. This estimate is derived by smoothing, with a cubic smoothing spline, the naive estimate of the spectrum obtained by applying FFT techniques to the raw data. By means of digital computer simulated results, a comparison is made between the features of the present approach and those of more classical techniques of spectral estimation.

  13. Power spectral density estimation by spline smoothing in the frequency domain.

    NASA Technical Reports Server (NTRS)

    De Figueiredo, R. J. P.; Thompson, J. R.

    1972-01-01

    An approach, based on a global averaging procedure, is presented for estimating the power spectrum of a second order stationary zero-mean ergodic stochastic process from a finite length record. This estimate is derived by smoothing, with a cubic smoothing spline, the naive estimate of the spectrum obtained by applying Fast Fourier Transform techniques to the raw data. By means of digital computer simulated results, a comparison is made between the features of the present approach and those of more classical techniques of spectral estimation.
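
    The smoothing step described in these two records can be sketched with standard SciPy tools: compute the raw periodogram and fit a cubic smoothing spline through it. In the sketch below the log-periodogram is smoothed for numerical stability, and the smoothing factor is an arbitrary illustrative choice, not the criterion used in the original work.

        # Hedged sketch: smooth a raw FFT periodogram with a cubic smoothing spline.
        import numpy as np
        from scipy.signal import periodogram
        from scipy.interpolate import UnivariateSpline

        rng = np.random.default_rng(4)
        x = np.zeros(1024)                     # AR(1)-like test signal with a smooth spectrum
        for t in range(1, len(x)):
            x[t] = 0.8 * x[t - 1] + rng.normal()

        freqs, raw_psd = periodogram(x, fs=1.0)
        spline = UnivariateSpline(freqs[1:], np.log(raw_psd[1:]), k=3, s=len(freqs))
        smoothed_psd = np.exp(spline(freqs[1:]))   # smoothed spectral estimate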

  14. Estimation of the density of Martian soil from radiophysical measurements in the 3-centimeter range

    NASA Technical Reports Server (NTRS)

    Krupenio, N. N.

    1977-01-01

    The density of the Martian soil is evaluated at a depth up to one meter using the results of radar measurement at λ0 = 3.8 cm and polarized radio astronomical measurement at λ0 = 3.4 cm conducted onboard the automatic interplanetary stations Mars 3 and Mars 5. The average value of the soil density according to all measurements is ρ̄ = 1.37 ± 0.33 g/cm3. A map of the distribution of the permittivity and soil density is derived, which was drawn up according to radiophysical data in the 3 centimeter range.

  15. Multiresolution Wavelet Based Adaptive Numerical Dissipation Control for Shock-Turbulence Computations

    NASA Technical Reports Server (NTRS)

    Sjoegreen, B.; Yee, H. C.

    2001-01-01

    The recently developed essentially fourth-order or higher low dissipative shock-capturing scheme of Yee, Sandham and Djomehri (1999) aimed at minimizing numerical dissipations for high speed compressible viscous flows containing shocks, shears and turbulence. To detect non smooth behavior and control the amount of numerical dissipation to be added, Yee et al. employed an artificial compression method (ACM) of Harten (1978) but utilize it in an entirely different context than Harten originally intended. The ACM sensor consists of two tuning parameters and is highly physical problem dependent. To minimize the tuning of parameters and physical problem dependence, new sensors with improved detection properties are proposed. The new sensors are derived from utilizing appropriate non-orthogonal wavelet basis functions and they can be used to completely switch to the extra numerical dissipation outside shock layers. The non-dissipative spatial base scheme of arbitrarily high order of accuracy can be maintained without compromising its stability at all parts of the domain where the solution is smooth. Two types of redundant non-orthogonal wavelet basis functions are considered. One is the B-spline wavelet (Mallat & Zhong 1992) used by Gerritsen and Olsson (1996) in an adaptive mesh refinement method, to determine regions where refinement should be done. The other is the modification of the multiresolution method of Harten (1995) by converting it to a new, redundant, non-orthogonal wavelet. The wavelet sensor is then obtained by computing the estimated Lipschitz exponent of a chosen physical quantity (or vector) to be sensed on a chosen wavelet basis function. Both wavelet sensors can be viewed as dual purpose adaptive methods leading to dynamic numerical dissipation control and improved grid adaptation indicators. Consequently, they are useful not only for shock-turbulence computations but also for computational aeroacoustics and numerical combustion. In addition, these

  16. Wavelet-based automatic determination of the P- and S-wave arrivals

    NASA Astrophysics Data System (ADS)

    Bogiatzis, P.; Ishii, M.

    2013-12-01

    The detection of P- and S-wave arrivals is important for a variety of seismological applications including earthquake detection and characterization, and seismic tomography problems such as imaging of hydrocarbon reservoirs. For many years, dedicated human-analysts manually selected the arrival times of P and S waves. However, with the rapid expansion of seismic instrumentation, automatic techniques that can process a large number of seismic traces are becoming essential in tomographic applications, and for earthquake early-warning systems. In this work, we present a pair of algorithms for efficient picking of P and S onset times. The algorithms are based on the continuous wavelet transform of the seismic waveform that allows examination of a signal in both time and frequency domains. Unlike Fourier transform, the basis functions are localized in time and frequency, therefore, wavelet decomposition is suitable for analysis of non-stationary signals. For detecting the P-wave arrival, the wavelet coefficients are calculated using the vertical component of the seismogram, and the onset time of the wave is identified. In the case of the S-wave arrival, we take advantage of the polarization of the shear waves, and cross-examine the wavelet coefficients from the two horizontal components. In addition to the onset times, the automatic picking program provides estimates of uncertainty, which are important for subsequent applications. The algorithms are tested with synthetic data that are generated to include sudden changes in amplitude, frequency, and phase. The performance of the wavelet approach is further evaluated using real data by comparing the automatic picks with manual picks. Our results suggest that the proposed algorithms provide robust measurements that are comparable to manual picks for both P- and S-wave arrivals.
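
    A minimal, generic CWT-based onset picker, in the spirit of the record above but not the authors' algorithm, can be written with PyWavelets: compute a Morlet CWT, sum coefficient magnitudes across scales, and pick the first sample where that energy exceeds a multiple of the pre-event level. All settings below are illustrative.

        # Hedged sketch: generic continuous-wavelet-transform onset picker.
        import numpy as np
        import pywt

        fs = 100.0
        t = np.arange(0, 10, 1 / fs)
        trace = np.random.default_rng(6).normal(scale=0.1, size=t.size)
        trace[500:] += np.sin(2 * np.pi * 5 * t[500:]) * np.exp(-0.5 * (t[500:] - 5))  # synthetic "arrival" at 5 s

        coeffs, _ = pywt.cwt(trace, scales=np.arange(1, 32), wavelet="morl", sampling_period=1 / fs)
        energy = np.sum(np.abs(coeffs), axis=0)        # scale-summed coefficient energy
        threshold = 5.0 * np.median(energy[:400])      # pre-event noise level
        pick_index = int(np.argmax(energy > threshold))
        print("picked onset at", pick_index / fs, "s")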

  17. Exospheric hydrogen density estimates from GOES solar Lyman-alpha measurements

    NASA Astrophysics Data System (ADS)

    Machol, Janet; Loto'aniu, Paul; Snow, Martin; Viereck, Rodney; Woodraska, Donald; Redmon, Robert

    2016-04-01

    We use extreme ultraviolet (EUV) measurements of solar irradiance from GOES satellites to derive daily hydrogen (H) density distributions of the terrestrial upper atmosphere. GOES satellites are in geostationary orbit and measure solar irradiance in a wavelength band around the Lyman-alpha line. When the satellite is on the night-side of the Earth looking through the atmosphere at the Sun, the measured irradiance is decreased by scattering by H in the upper atmosphere. Using these daily dips in the measured irradiance, we derive a simple H density distribution for the exosphere. We compare preliminary results from this technique with H density distributions derived from other data sets. Continued GOES observations will be available for many years into the future and potentially can provide exospheric H densities for use in whole atmosphere, ring current, and satellite drag models. Long-term observations of trends can be used to monitor impacts of climate change.

  18. Estimates of volumetric bone density from projectional measurements improve the discriminatory capability of dual X-ray absorptiometry

    NASA Technical Reports Server (NTRS)

    Jergas, M.; Breitenseher, M.; Gluer, C. C.; Yu, W.; Genant, H. K.

    1995-01-01

    To determine whether estimates of volumetric bone density from projectional scans of the lumbar spine have weaker associations with height and weight and stronger associations with prevalent vertebral fractures than standard projectional bone mineral density (BMD) and bone mineral content (BMC), we obtained posteroanterior (PA) dual X-ray absorptiometry (DXA), lateral supine DXA (Hologic QDR 2000), and quantitative computed tomography (QCT, GE 9800 scanner) in 260 postmenopausal women enrolled in two trials of treatment for osteoporosis. In 223 women, all vertebral levels, i.e., L2-L4 in the DXA scan and L1-L3 in the QCT scan, could be evaluated. Fifty-five women were diagnosed as having at least one mild fracture (age 67.9 +/- 6.5 years) and 168 women did not have any fractures (age 62.3 +/- 6.9 years). We derived three estimates of "volumetric bone density" from PA DXA (BMAD, BMAD*, and BMD*) and three from paired PA and lateral DXA (WA BMD, WA BMDHol, and eVBMD). While PA BMC and PA BMD were significantly correlated with height (r = 0.49 and r = 0.28) or weight (r = 0.38 and r = 0.37), QCT and the volumetric bone density estimates from paired PA and lateral scans were not (r = -0.083 to r = 0.050). BMAD, BMAD*, and BMD* correlated with weight but not height. The associations with vertebral fracture were stronger for QCT (odds ratio [OR] = 3.17; 95% confidence interval [CI] = 1.90-5.27), eVBMD (OR = 2.87; CI 1.80-4.57), WA BMDHol (OR = 2.86; CI 1.80-4.55) and WA-BMD (OR = 2.77; CI 1.75-4.39) than for BMAD*/BMD* (OR = 2.03; CI 1.32-3.12), BMAD (OR = 1.68; CI 1.14-2.48), lateral BMD (OR = 1.88; CI 1.28-2.77), standard PA BMD (OR = 1.47; CI 1.02-2.13) or PA BMC (OR = 1.22; CI 0.86-1.74). The areas under the receiver operating characteristic (ROC) curves for QCT and all estimates of volumetric BMD were significantly higher compared with standard PA BMD and PA BMC. We conclude that, like QCT, estimates of volumetric bone density from paired PA and lateral scans are

  19. Density-dependent analysis of nonequilibrium paths improves free energy estimates II. A Feynman-Kac formalism.

    PubMed

    Minh, David D L; Vaikuntanathan, Suriyanarayanan

    2011-01-21

    The nonequilibrium fluctuation theorems have paved the way for estimating equilibrium thermodynamic properties, such as free energy differences, using trajectories from driven nonequilibrium processes. While many statistical estimators may be derived from these identities, some are more efficient than others. It has recently been suggested that trajectories sampled using a particular time-dependent protocol for perturbing the Hamiltonian may be analyzed with another one. Choosing an analysis protocol based on the nonequilibrium density was empirically demonstrated to reduce the variance and bias of free energy estimates. Here, we present an alternate mathematical formalism for protocol postprocessing based on the Feynman-Kac theorem. The estimator that results from this formalism is demonstrated on a few low-dimensional model systems. It is found to have reduced bias compared to both the standard form of Jarzynski's equality and the previous protocol postprocessing formalism.
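
    For context, the baseline that such estimators improve upon is Jarzynski's equality, ΔF = -kT ln⟨exp(-W/kT)⟩, averaged over nonequilibrium work values. The sketch below implements only this standard estimator; the protocol-postprocessing and Feynman-Kac estimators discussed in the record are not reproduced.

        # Hedged sketch: standard Jarzynski estimator of a free energy difference
        # from nonequilibrium work values (toy Gaussian work distribution).
        import numpy as np

        def jarzynski_free_energy(work, kT=1.0):
            # Delta F = -kT * ln < exp(-W / kT) > over nonequilibrium trajectories
            return -kT * np.log(np.mean(np.exp(-np.asarray(work) / kT)))

        work_samples = np.random.default_rng(5).normal(loc=2.0, scale=1.0, size=5000)
        print("Jarzynski estimate of Delta F:", round(jarzynski_free_energy(work_samples), 3))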

  20. Wavelet-based filter methods for the detection of small transiting planets: Application to Kepler and K2 light curves

    NASA Astrophysics Data System (ADS)

    Grziwa, Sascha; Korth, Judith; Paetzold, Martin; KEST

    2016-10-01

    The Rheinisches Institut für Umweltforschung (RIU-PF) has developed the software package EXOTRANS for the detection of transits of exoplanets in stellar light curves. This software package was in use during the CoRoT space mission (2006-2013). EXOTRANS was improved by different wavelet-based filter methods during the following years to separate stellar variation, orbital disturbances and instrumental effects from stellar light curves taken by space telescopes (Kepler, K2, TESS and PLATO). The VARLET filter separates faint transit signals from stellar variations without using a priori information about the target star. VARLET considers variations by frequency, amplitude and shape simultaneously. VARLET is also able to extract most instrumental jumps and glitches. The PHALET filter separates periodic features independently of their shape and is intended to separate diluting stellar binaries. It is also applied for the multi transit search. Stellar light curves of the K2 mission are constructed from the processing of target pixel files which corrects disturbances caused by the reduced pointing precision of the Kepler telescope after the failure of two gyroscopes. The combination of target pixel file processing with both filter techniques and the proven detection pipeline EXOTRANS lowers the detection limit, reduces false alarms and simplifies the detection of faint transits in light curves of the K2 mission. Many new candidates were detected in K2 light curves using EXOTRANS and were successfully confirmed by ground-based follow-up observations of the KEST collaboration. New candidates and confirmed planets are presented.

  1. Probabilistic Estimates of Number of Undiscovered Deposits and Their Total Tonnages in Permissive Tracts Using Deposit Densities

    USGS Publications Warehouse

    Singer, D.A.; Kouda, R.

    2011-01-01

    Empirical evidence indicates that processes affecting number and quantity of resources in geologic settings are very general across deposit types. Sizes of permissive tracts that geologically could contain the deposits are excellent predictors of numbers of deposits. In addition, total ore tonnage of mineral deposits of a particular type in a tract is proportional to the type's median tonnage in a tract. Regressions using size of permissive tracts and median tonnage allow estimation of number of deposits and of total tonnage of mineralization. These powerful estimators, based on 10 different deposit types from 109 permissive worldwide control tracts, generalize across deposit types. Estimates of number of deposits and of total tonnage of mineral deposits are made by regressing permissive area, and mean (in logs) tons in deposits of the type, against number of deposits and total tonnage of deposits in the tract for the 50th percentile estimates. The regression equations (R2 = 0.91 and 0.95) can be used for all deposit types just by inserting logarithmic values of permissive area in square kilometers, and mean tons in deposits in millions of metric tons. The regression equations provide estimates at the 50th percentile, and other equations are provided for 90% confidence limits for lower estimates and 10% confidence limits for upper estimates of number of deposits and total tonnage. Equations for these percentile estimates along with expected value estimates are presented here along with comparisons with independent expert estimates. Also provided are the equations for correcting for the known well-explored deposits in a tract. These deposit-density models require internally consistent grade and tonnage models and delineations for arriving at unbiased estimates. © 2011 International Association for Mathematical Geology (outside the USA).
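
    The regressions described above have the general form log(number of deposits) = b0 + b1·log(permissive area) + b2·log(median tonnage). The sketch below shows only that structure; the coefficients are placeholders and are not the published values or percentile equations.

        # Hedged sketch: structure of a deposit-density regression. Coefficients
        # b0, b1, b2 are placeholders, NOT the published values.
        import math

        def predicted_number_of_deposits(area_km2, median_tonnage_mt,
                                         b0=0.5, b1=0.6, b2=-0.2):   # placeholder coefficients
            log_n = b0 + b1 * math.log10(area_km2) + b2 * math.log10(median_tonnage_mt)
            return 10 ** log_n

        print(predicted_number_of_deposits(area_km2=2000.0, median_tonnage_mt=50.0))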

  2. Estimating the density of honeybee colonies across their natural range to fill the gap in pollinator decline censuses.

    PubMed

    Jaffé, Rodolfo; Dietemann, Vincent; Allsopp, Mike H; Costa, Cecilia; Crewe, Robin M; Dall'olio, Raffaele; DE LA Rúa, Pilar; El-Niweiri, Mogbel A A; Fries, Ingemar; Kezic, Nikola; Meusel, Michael S; Paxton, Robert J; Shaibi, Taher; Stolle, Eckart; Moritz, Robin F A

    2010-04-01

    Although pollinator declines are a global biodiversity threat, the demography of the western honeybee (Apis mellifera) has not been considered by conservationists because it is biased by the activity of beekeepers. To fill this gap in pollinator decline censuses and to provide a broad picture of the current status of honeybees across their natural range, we used microsatellite genetic markers to estimate colony densities and genetic diversity at different locations in Europe, Africa, and central Asia that had different patterns of land use. Genetic diversity and colony densities were highest in South Africa and lowest in Northern Europe and were correlated with mean annual temperature. Confounding factors not related to climate, however, are also likely to influence genetic diversity and colony densities in honeybee populations. Land use showed a significantly negative influence over genetic diversity and the density of honeybee colonies over all sampling locations. In Europe honeybees sampled in nature reserves had genetic diversity and colony densities similar to those sampled in agricultural landscapes, which suggests that the former are not wild but may have come from managed hives. Other results also support this idea: putative wild bees were rare in our European samples, and the mean estimated density of honeybee colonies on the continent closely resembled the reported mean number of managed hives. Current densities of European honeybee populations are in the same range as those found in the adverse climatic conditions of the Kalahari and Saharan deserts, which suggests that beekeeping activities do not compensate for the loss of wild colonies. Our findings highlight the importance of reconsidering the conservation status of honeybees in Europe and of regarding beekeeping not only as a profitable business for producing honey, but also as an essential component of biodiversity conservation.

  3. Hydrological parameter estimations from a conservative tracer test with variable-density effects at the Boise Hydrogeophysical Research Site

    NASA Astrophysics Data System (ADS)

    Dafflon, B.; Barrash, W.; Cardiff, M.; Johnson, T. C.

    2011-12-01

    Reliable predictions of groundwater flow and solute transport require an estimation of the detailed distribution of the parameters (e.g., hydraulic conductivity, effective porosity) controlling these processes. However, such parameters are difficult to estimate because of the inaccessibility and complexity of the subsurface. In this regard, developments in parameter estimation techniques and investigations of field experiments are still challenging and necessary to improve our understanding and the prediction of hydrological processes. Here we analyze a conservative tracer test conducted at the Boise Hydrogeophysical Research Site in 2001 in a heterogeneous unconfined fluvial aquifer. Some relevant characteristics of this test include: variable-density (sinking) effects because of the injection concentration of the bromide tracer, the relatively small size of the experiment, and the availability of various sources of geophysical and hydrological information. The information contained in this experiment is evaluated through several parameter estimation approaches, including a grid-search-based strategy, stochastic simulation of hydrological property distributions, and deterministic inversion using regularization and pilot-point techniques. Doing this allows us to investigate hydraulic conductivity and effective porosity distributions and to compare the effects of assumptions from several methods and parameterizations. Our results provide new insights into the understanding of variable-density transport processes and the hydrological relevance of incorporating various sources of information in parameter estimation approaches. Among others, the variable-density effect and the effective porosity distribution, as well as their coupling with the hydraulic conductivity structure, are seen to be significant in the transport process. The results also show that assumed prior information can strongly influence the estimated distributions of hydrological properties.

  4. Inverse estimation of parameters for multidomain flow models in soil columns with different macropore densities.

    PubMed

    Arora, Bhavna; Mohanty, Binayak P; McGuire, Jennifer T

    2011-04-01

    Soil and crop management practices have been found to modify soil structure and alter macropore densities. An ability to accurately determine soil hydraulic parameters and their variation with changes in macropore density is crucial for assessing potential contamination from agricultural chemicals. This study investigates the consequences of using consistent matrix and macropore parameters in simulating preferential flow and bromide transport in soil columns with different macropore densities (no macropore, single macropore, and multiple macropores). As used herein, the term "macropore density" is intended to refer to the number of macropores per unit area. A comparison between continuum-scale models including single-porosity model (SPM), mobile-immobile model (MIM), and dual-permeability model (DPM) that employed these parameters is also conducted. Domain-specific parameters are obtained from inverse modeling of homogeneous (no macropore) and central macropore columns in a deterministic framework and are validated using forward modeling of both low-density (3 macropores) and high-density (19 macropores) multiple-macropore columns. Results indicate that these inversely modeled parameters are successful in describing preferential flow but not tracer transport in both multiple-macropore columns. We believe that lateral exchange between matrix and macropore domains needs better accounting to efficiently simulate preferential transport in the case of dense, closely spaced macropores. Increasing model complexity from SPM to MIM to DPM also improved predictions of preferential flow in the multiple-macropore columns but not in the single-macropore column. This suggests that the use of a more complex model with resolved domain-specific parameters is recommended with an increase in macropore density to generate forecasts with higher accuracy. PMID:24511165

  5. Estimation of Vegetation Aerodynamic Roughness of Natural Regions Using Frontal Area Density Determined from Satellite Imagery

    NASA Technical Reports Server (NTRS)

    Jasinski, Michael F.; Crago, Richard

    1994-01-01

    Parameterizations of the frontal area index and canopy area index of natural or randomly distributed plants are developed, and applied to the estimation of local aerodynamic roughness using satellite imagery. The formulas are expressed in terms of the subpixel fractional vegetation cover and one non-dimensional geometric parameter that characterizes the plant's shape. Geometrically similar plants and Poisson distributed plant centers are assumed. An appropriate averaging technique to extend satellite pixel-scale estimates to larger scales is provided. The parameterization is applied to the estimation of aerodynamic roughness using satellite imagery for a 2.3 sq km coniferous portion of the Landes Forest near Lubbon, France, during the 1986 HAPEX-Mobilhy Experiment. The canopy area index is estimated first for each pixel in the scene based on previous estimates of fractional cover obtained using Landsat Thematic Mapper imagery. Next, the results are incorporated into Raupach's (1992, 1994) analytical formulas for momentum roughness and zero-plane displacement height. The estimates compare reasonably well to reference values determined from measurements taken during the experiment and to published literature values. The approach offers the potential for estimating regionally variable, vegetation aerodynamic roughness lengths over natural regions using satellite imagery when there exists only limited knowledge of the vegetated surface.
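
    A minimal sketch of the core geometric relations implied by this abstract, assuming Poisson-distributed, geometrically similar plants so that fractional cover f relates to canopy area index Λ via f = 1 − exp(−Λ), and a single non-dimensional shape parameter converting plan (canopy) area to frontal area. The function names and the default shape value are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def canopy_area_index(fractional_cover):
    """Canopy area index Lambda from subpixel fractional cover f,
    assuming Poisson-distributed plant centers: f = 1 - exp(-Lambda)."""
    f = np.clip(np.asarray(fractional_cover, dtype=float), 0.0, 0.999)
    return -np.log(1.0 - f)

def frontal_area_index(fractional_cover, shape_ratio=0.5):
    """Frontal area index from canopy area index, using a non-dimensional
    shape parameter (frontal area / plan area of a plant); 0.5 is a placeholder."""
    return shape_ratio * canopy_area_index(fractional_cover)

# Example: a pixel with 30% vegetation cover
print(f"frontal area index ~ {frontal_area_index(0.30):.3f}")
```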

  6. Estimation of the four-wave mixing noise probability-density function by the multicanonical Monte Carlo method.

    PubMed

    Neokosmidis, Ioannis; Kamalakis, Thomas; Chipouras, Aristides; Sphicopoulos, Thomas

    2005-01-01

    The performance of high-powered wavelength-division multiplexed (WDM) optical networks can be severely degraded by four-wave-mixing- (FWM-) induced distortion. The multicanonical Monte Carlo method (MCMC) is used to calculate the probability-density function (PDF) of the decision variable of a receiver, limited by FWM noise. Compared with the conventional Monte Carlo method previously used to estimate this PDF, the MCMC method is much faster and can accurately estimate smaller error probabilities. The method takes into account the correlation between the components of the FWM noise, unlike the Gaussian model, which is shown not to provide accurate results. PMID:15648621

  7. Modeled Salt Density for Nuclear Material Estimation in the Treatment of Spent Nuclear Fuel

    SciTech Connect

    DeeEarl Vaden; Robert D. Mariani

    2010-09-01

    Spent metallic nuclear fuel is being treated in a pyrometallurgical process that includes electrorefining the uranium metal in molten eutectic LiCl-KCl as the supporting electrolyte. We report a model for determining the density of the molten salt. Inventory operations account for the net mass of salt and for the mass of actinides present. The molten salt density was needed for these operations but is difficult to measure, so the density was modeled for the initial treatment operations. The model assumes, as a starting point, that volumes are additive for an ideal molten salt solution; a correction factor for the lanthanides and actinides was subsequently developed. After applying the correction factor, the percent difference between the net salt mass in the electrorefiner and the modeled salt mass decreased from more than 4.0% to approximately 0.1%. As a result, there is no need to measure the salt density at 500 °C for inventory operations; the model for the salt density is found to be accurate.
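
    A sketch of the ideal additive-volume starting point described above: the mixture density is estimated from component masses and pure-component molten-salt densities, with a multiplicative volume correction applied to the lanthanide/actinide contribution. The pure-component densities and the correction factor below are illustrative placeholders, not the report's fitted values.

```python
def salt_density(masses_g, pure_densities_g_cm3, correction=None):
    """Additive-volume density of a molten salt mixture.

    masses_g / pure_densities_g_cm3: dicts keyed by component name.
    correction: optional dict of multiplicative volume corrections
    (e.g. for lanthanide/actinide chlorides); 1.0 means ideal behaviour.
    """
    correction = correction or {}
    total_mass = sum(masses_g.values())
    total_volume = sum(
        m / pure_densities_g_cm3[k] * correction.get(k, 1.0)
        for k, m in masses_g.items()
    )
    return total_mass / total_volume

# Placeholder component densities near operating temperature (illustrative only)
rho = salt_density(
    masses_g={"LiCl": 4.4e4, "KCl": 5.6e4, "UCl3": 6.0e3},
    pure_densities_g_cm3={"LiCl": 1.50, "KCl": 1.53, "UCl3": 5.5},
    correction={"UCl3": 0.95},  # hypothetical non-ideality factor
)
print(f"modeled salt density ~ {rho:.3f} g/cm^3")
```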

  8. Correlation for the estimation of the density of fatty acid esters fuels and its implications. A proposed Biodiesel Cetane Index.

    PubMed

    Lapuerta, Magín; Rodríguez-Fernández, José; Armas, Octavio

    2010-09-01

    Biodiesel fuels (methyl or ethyl esters derived from vegetable oils and animal fats) are currently being used as a means to diminish crude oil dependency and to limit the greenhouse gas emissions of the transportation sector. However, their physical properties differ from those of traditional fossil fuels, making their effect on new, electronically controlled vehicles uncertain. Density is one of those properties, and its implications go even further. First, because governments are expected to boost the use of high-biodiesel-content blends, but biodiesel fuels are denser than fossil ones; in consequence, their blending proportion is indirectly restricted in order not to exceed the maximum density limit established in fuel quality standards. Second, because an accurate knowledge of biodiesel density permits the estimation of other properties such as the Cetane Number, whose direct measurement is complex and presents low repeatability and low reproducibility. In this study we compile densities of methyl and ethyl esters published in the literature and propose equations to convert them to 15 °C and to predict the biodiesel density based on its chain length and degree of unsaturation. Both expressions were validated for a wide range of commercial biodiesel fuels. Using the latter, we define a term called the Biodiesel Cetane Index, which predicts the Biodiesel Cetane Number with high accuracy. Finally, simple calculations prove that the introduction of high-biodiesel-content blends in the fuel market would force refineries to reduce the density of their fossil fuels. PMID:20599853
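
    A sketch of how such a pair of relations can be applied, assuming a linear temperature correction to 15 °C and a density correlation of the form rho15 = a + b/(chain length) + c·(double bonds). Both functional forms and all coefficients below are hypothetical placeholders for illustration; they are not the published regressions.

```python
def density_to_15C(rho_T_kg_m3, T_celsius, beta=0.7):
    """Refer a density measured at T (deg C) to 15 deg C, assuming a linear
    temperature dependence; beta (kg/m^3 per K) is a placeholder slope."""
    return rho_T_kg_m3 + beta * (T_celsius - 15.0)

def ester_density_15C(chain_length, double_bonds, a=860.0, b=350.0, c=4.0):
    """Illustrative correlation rho15 = a + b/CL + c*db (kg/m^3);
    a, b, c are hypothetical placeholders, not the published coefficients."""
    return a + b / chain_length + c * double_bonds

# Example: an ester measured at 40 deg C, and a 19-carbon ester with one double bond
print(density_to_15C(870.0, 40.0))
print(ester_density_15C(19, 1))
```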

  9. Correlation for the estimation of the density of fatty acid esters fuels and its implications. A proposed Biodiesel Cetane Index.

    PubMed

    Lapuerta, Magín; Rodríguez-Fernández, José; Armas, Octavio

    2010-09-01

    Biodiesel fuels (methyl or ethyl esters derived from vegetable oils and animal fats) are currently being used as a means to diminish crude oil dependency and to limit the greenhouse gas emissions of the transportation sector. However, their physical properties differ from those of traditional fossil fuels, making their effect on new, electronically controlled vehicles uncertain. Density is one of those properties, and its implications go even further. First, because governments are expected to boost the use of high-biodiesel-content blends, but biodiesel fuels are denser than fossil ones; in consequence, their blending proportion is indirectly restricted in order not to exceed the maximum density limit established in fuel quality standards. Second, because an accurate knowledge of biodiesel density permits the estimation of other properties such as the Cetane Number, whose direct measurement is complex and presents low repeatability and low reproducibility. In this study we compile densities of methyl and ethyl esters published in the literature and propose equations to convert them to 15 °C and to predict the biodiesel density based on its chain length and degree of unsaturation. Both expressions were validated for a wide range of commercial biodiesel fuels. Using the latter, we define a term called the Biodiesel Cetane Index, which predicts the Biodiesel Cetane Number with high accuracy. Finally, simple calculations prove that the introduction of high-biodiesel-content blends in the fuel market would force refineries to reduce the density of their fossil fuels.

  10. Inverse estimation of parameters for multidomain flow models in soil columns with different macropore densities

    PubMed Central

    Arora, Bhavna; Mohanty, Binayak P.; McGuire, Jennifer T.

    2013-01-01

    Soil and crop management practices have been found to modify soil structure and alter macropore densities. An ability to accurately determine soil hydraulic parameters and their variation with changes in macropore density is crucial for assessing potential contamination from agricultural chemicals. This study investigates the consequences of using consistent matrix and macropore parameters in simulating preferential flow and bromide transport in soil columns with different macropore densities (no macropore, single macropore, and multiple macropores). As used herein, the term “macropore density” is intended to refer to the number of macropores per unit area. A comparison between continuum-scale models including single-porosity model (SPM), mobile-immobile model (MIM), and dual-permeability model (DPM) that employed these parameters is also conducted. Domain-specific parameters are obtained from inverse modeling of homogeneous (no macropore) and central macropore columns in a deterministic framework and are validated using forward modeling of both low-density (3 macropores) and high-density (19 macropores) multiple-macropore columns. Results indicate that these inversely modeled parameters are successful in describing preferential flow but not tracer transport in both multiple-macropore columns. We believe that lateral exchange between matrix and macropore domains needs better accounting to efficiently simulate preferential transport in the case of dense, closely spaced macropores. Increasing model complexity from SPM to MIM to DPM also improved predictions of preferential flow in the multiple-macropore columns but not in the single-macropore column. This suggests that the use of a more complex model with resolved domain-specific parameters is recommended with an increase in macropore density to generate forecasts with higher accuracy. PMID:24511165

  11. Estimation of density and population size and recommendations for monitoring trends of Bahama parrots on Great Abaco and Great Inagua

    USGS Publications Warehouse

    Rivera-Milan, F. F.; Collazo, J.A.; Stahala, C.; Moore, W.J.; Davis, A.; Herring, G.; Steinkamp, M.; Pagliaro, R.; Thompson, J.L.; Bracey, W.

    2005-01-01

    Once abundant and widely distributed, the Bahama parrot (Amazona leucocephala bahamensis) currently inhabits only the Great Abaco and Great Inagua Islands of the Bahamas. In January 2003 and May 2002-2004, we conducted point-transect surveys (a type of distance sampling) to estimate density and population size and make recommendations for monitoring trends. Density ranged from 0.061 (SE = 0.013) to 0.085 (SE = 0.018) parrots/ha and population size ranged from 1,600 (SE = 354) to 2,386 (SE = 508) parrots when extrapolated to the 26,154 ha and 28,162 ha covered by surveys on Abaco in May 2002 and 2003, respectively. Density was 0.183 (SE = 0.049) and 0.153 (SE = 0.042) parrots/ha and population size was 5,344 (SE = 1,431) and 4,450 (SE = 1,435) parrots when extrapolated to the 29,174 ha covered by surveys on Inagua in May 2003 and 2004, respectively. Because parrot distribution was clumped, we would need to survey 213-882 points on Abaco and 258-1,659 points on Inagua to obtain a CV of 10-20% for estimated density. Cluster size and its variability and clumping increased in wintertime, making surveys imprecise and cost-ineffective. Surveys were reasonably precise and cost-effective in springtime, and we recommend conducting them when parrots are pairing and selecting nesting sites. Survey data should be collected yearly as part of an integrated monitoring strategy to estimate density and other key demographic parameters and improve our understanding of the ecological dynamics of these geographically isolated parrot populations at risk of extinction.
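
    The extrapolation step above is simply the estimated density times the surveyed area, with the standard error scaling by the same factor. A quick check of the first Abaco figure, using only numbers quoted in the abstract (small differences from the reported values reflect rounding of the quoted density and SE):

```python
area_ha = 26_154          # Abaco, May 2002
density = 0.061           # parrots/ha
se_density = 0.013

population = density * area_ha
se_population = se_density * area_ha
print(f"{population:.0f} parrots (SE ~ {se_population:.0f})")
# ~1,595 parrots (SE ~ 340), consistent with the reported 1,600 (SE = 354)
```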

  12. Estimating Population Density of the San Martin Titi Monkey (Callicebus oenanthe) in Peru Using Vocalisations.

    PubMed

    van Kuijk, Silvy M; García-Suikkanen, Carolina; Tello-Alvarado, Julio C; Vermeer, Jan; Hill, Catherine M

    2015-01-01

    We calculated the population density of the critically endangered Callicebus oenanthe in the Ojos de Agua Conservation Concession, a dry forest area in the department of San Martin, Peru. Results showed significant differences (p < 0.01) in group densities between forest boundaries (16.5 groups/km2, IQR = 21.1-11.0) and forest interior (4.0 groups/km2, IQR = 5.0-0.0), suggesting the 2,550-ha area harbours roughly 1,150 titi monkeys. This makes Ojos de Agua an important cornerstone in the conservation of the species, because it is one of the largest protected areas where the species occurs.

  13. On the Use of Bone Remodelling Models to Estimate the Density Distribution of Bones. Uniqueness of the Solution.

    PubMed

    Martínez-Reina, Javier; Ojeda, Joaquín; Mayo, Juana

    2016-01-01

    Bone remodelling models are widely used in a phenomenological manner to estimate numerically the distribution of apparent density in bones from the loads they are daily subjected to. These simulations start from an arbitrary initial distribution, usually homogeneous, and the density changes locally until a bone remodelling equilibrium is achieved. The bone response to mechanical stimulus is traditionally formulated with a mathematical relation that considers the existence of a range of stimulus, called dead or lazy zone, for which no net bone mass change occurs. Implementing a relation like that leads to different solutions depending on the starting density. The non-uniqueness of the solution has been shown in this paper using two different bone remodelling models: one isotropic and another anisotropic. It has also been shown that the problem of non-uniqueness is only mitigated by removing the dead zone, but it is not completely solved unless the bone formation and bone resorption rates are limited to certain maximum values.
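
    A minimal sketch of the kind of remodelling law the abstract refers to, assuming a piecewise-linear density rate with a dead (lazy) zone of half-width w around the reference stimulus and an optional cap on the formation/resorption rate, which is the modification the authors report as necessary for uniqueness. The constants are illustrative, not taken from the paper.

```python
def density_rate(stimulus, ref_stimulus, w, k_form=1.0, k_res=1.0, max_rate=None):
    """d(rho)/dt for a phenomenological remodelling law with a lazy zone.

    No net change while |stimulus - ref| <= w; outside the zone the rate is
    proportional to the overshoot, optionally saturated at +/- max_rate.
    """
    over = stimulus - ref_stimulus
    if abs(over) <= w:
        return 0.0
    rate = k_form * (over - w) if over > w else k_res * (over + w)
    if max_rate is not None:
        rate = max(-max_rate, min(max_rate, rate))
    return rate

# Example: stimulus just above the lazy zone, with rate saturation
print(density_rate(stimulus=1.3, ref_stimulus=1.0, w=0.2, max_rate=0.05))  # 0.05
```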

  14. PeaKDEck: a kernel density estimator-based peak calling program for DNaseI-seq data.

    PubMed

    McCarthy, Michael T; O'Callaghan, Christopher A

    2014-05-01

    Hypersensitivity to DNaseI digestion is a hallmark of open chromatin, and DNaseI-seq allows the genome-wide identification of regions of open chromatin. Interpreting these data is challenging, largely because of inherent variation in signal-to-noise ratio between datasets. We have developed PeaKDEck, a peak calling program that distinguishes signal from noise by randomly sampling read densities and using kernel density estimation to generate a dataset-specific probability distribution of random background signal. PeaKDEck uses this probability distribution to select an appropriate read density threshold for peak calling in each dataset. We benchmark PeaKDEck using published ENCODE DNaseI-seq data and other peak calling programs, and demonstrate superior performance in low signal-to-noise ratio datasets. PMID:24407222
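
    A sketch of the thresholding idea described above, not the PeaKDEck implementation itself: read densities are sampled at random positions, a kernel density estimate of that background distribution is built, and the peak-calling threshold is set at a chosen upper quantile of the estimated background. Function and variable names are illustrative.

```python
import numpy as np
from scipy.stats import gaussian_kde

def background_threshold(read_density_track, n_samples=5000, quantile=0.999, rng=None):
    """Pick a read-density threshold from a KDE of randomly sampled densities.

    read_density_track: 1-D array of per-bin read densities.
    Returns the density value whose estimated background CDF equals `quantile`.
    """
    rng = rng or np.random.default_rng(0)
    sample = rng.choice(read_density_track, size=n_samples, replace=True)
    kde = gaussian_kde(sample)
    grid = np.linspace(sample.min(), sample.max(), 2000)
    pdf = kde(grid)
    cdf = np.cumsum(pdf)
    cdf /= cdf[-1]
    return grid[np.searchsorted(cdf, quantile)]

# Toy background: mostly noise bins with a few enriched bins
track = np.concatenate([np.random.default_rng(1).poisson(3, 100_000),
                        np.random.default_rng(2).poisson(40, 200)]).astype(float)
print("threshold ~", background_threshold(track))
```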

  15. Estimating the population density of the Asian tapir (Tapirus indicus) in a selectively logged forest in Peninsular Malaysia.

    PubMed

    Rayan, D Mark; Mohamad, Shariff Wan; Dorward, Leejiah; Aziz, Sheema Abdul; Clements, Gopalasamy Reuben; Christopher, Wong Chai Thiam; Traeholt, Carl; Magintan, David

    2012-12-01

    The endangered Asian tapir (Tapirus indicus) is threatened by large-scale habitat loss, forest fragmentation and increased hunting pressure. Conservation planning for this species, however, is hampered by a severe paucity of information on its ecology and population status. We present the first Asian tapir population density estimate from a camera trapping study targeting tigers in a selectively logged forest within Peninsular Malaysia using a spatially explicit capture-recapture maximum likelihood based framework. With a trap effort of 2496 nights, 17 individuals were identified corresponding to a density (standard error) estimate of 9.49 (2.55) adult tapirs/100 km². Although our results include several caveats, we believe that our density estimate still serves as an important baseline to facilitate the monitoring of tapir population trends in Peninsular Malaysia. Our study also highlights the potential of extracting vital ecological and population information for other cryptic individually identifiable animals from tiger-centric studies, especially with the use of a spatially explicit capture-recapture maximum likelihood based framework. PMID:23253368

  16. Neural networks for probabilistic environmental prediction: Conditional Density Estimation Network Creation and Evaluation (CaDENCE) in R

    NASA Astrophysics Data System (ADS)

    Cannon, Alex J.

    2012-04-01

    A conditional density estimation network (CDEN) is a probabilistic extension of the standard multilayer perceptron neural network (MLP). A CDEN model allows users to estimate parameters of a specified probability density function conditioned upon values of a set of predictors using the MLP architecture. The result is a flexible model for the mean, the variance, exceedance probabilities, prediction intervals, etc. from the specified conditional distribution. Because the CDEN is based on the MLP, nonlinear relationships, including those involving complicated interactions between predictors, can be described by the modeling framework. CDEN models have been applied to a wide range of environmental prediction tasks, such as precipitation downscaling, extreme value analysis in hydrology, wind retrievals from satellites, and air quality forecasting. This paper describes the CaDENCE (Conditional Density Estimation Network Creation and Evaluation) package, which provides routines for creating and evaluating CDEN models in the R programming language. CaDENCE routines are demonstrated on a dataset consisting of suspended sediment concentrations and discharge measurements from the Fraser River at Hope, British Columbia, Canada.
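
    A compact sketch of the CDEN idea, an MLP whose outputs are the parameters of a conditional distribution, fitted by minimizing the negative log-likelihood. It is written in plain NumPy/SciPy rather than the CaDENCE R package; the network size, the toy data, and the choice of a Gaussian conditional density are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, size=(400, 1))
y = np.sin(x[:, 0]) + rng.normal(scale=0.1 + 0.2 * (x[:, 0] + 2), size=400)

n_hidden = 8
sizes = [(1, n_hidden), (n_hidden,), (n_hidden, 2), (2,)]   # W1, b1, W2, b2

def unpack(theta):
    out, i = [], 0
    for s in sizes:
        n = int(np.prod(s))
        out.append(theta[i:i + n].reshape(s))
        i += n
    return out

def neg_log_lik(theta):
    W1, b1, W2, b2 = unpack(theta)
    h = np.tanh(x @ W1 + b1)
    mu, log_sigma = (h @ W2 + b2).T            # two output units: mean, log sd
    sigma = np.exp(log_sigma)
    return np.sum(0.5 * ((y - mu) / sigma) ** 2 + log_sigma)

theta0 = 0.1 * rng.standard_normal(sum(int(np.prod(s)) for s in sizes))
fit = minimize(neg_log_lik, theta0, method="L-BFGS-B")
print("converged:", fit.success, " NLL:", round(fit.fun, 1))
```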

  17. Estimating the population density of the Asian tapir (Tapirus indicus) in a selectively logged forest in Peninsular Malaysia.

    PubMed

    Rayan, D Mark; Mohamad, Shariff Wan; Dorward, Leejiah; Aziz, Sheema Abdul; Clements, Gopalasamy Reuben; Christopher, Wong Chai Thiam; Traeholt, Carl; Magintan, David

    2012-12-01

    The endangered Asian tapir (Tapirus indicus) is threatened by large-scale habitat loss, forest fragmentation and increased hunting pressure. Conservation planning for this species, however, is hampered by a severe paucity of information on its ecology and population status. We present the first Asian tapir population density estimate from a camera trapping study targeting tigers in a selectively logged forest within Peninsular Malaysia using a spatially explicit capture-recapture maximum likelihood based framework. With a trap effort of 2496 nights, 17 individuals were identified corresponding to a density (standard error) estimate of 9.49 (2.55) adult tapirs/100 km². Although our results include several caveats, we believe that our density estimate still serves as an important baseline to facilitate the monitoring of tapir population trends in Peninsular Malaysia. Our study also highlights the potential of extracting vital ecological and population information for other cryptic individually identifiable animals from tiger-centric studies, especially with the use of a spatially explicit capture-recapture maximum likelihood based framework.

  18. Estimation of the radial size and density fluctuation amplitude of edge localized modes using microwave interferometer array

    NASA Astrophysics Data System (ADS)

    Ayub, M. K.; Yun, G. S.; Leem, J.; Kim, M.; Lee, W.; Park, H. K.

    2016-03-01

    A novel technique to estimate the range of radial size and density fluctuation amplitude of edge localized modes (ELMs) in the KSTAR tokamak plasma is presented. A microwave imaging reflectometry (MIR) system is reconfigured as a multi-channel microwave interferometer array (MIA) to measure the density fluctuations associated with ELMs, while an electron cyclotron emission imaging (ECEI) system is used as a reference diagnostic to confirm the MIA observation. Two-dimensional full-wave (FWR2D) simulations integrated with optics simulation are performed to investigate the Gaussian beam propagation and reflection through the plasma as well as the MIA optical components and to obtain the interferometric phase undulations of individual channels at the detector plane due to the ELM perturbation. The simulation results show that the amplitude of the phase undulation depends linearly on both the radial size and the density perturbation amplitude of the ELM. For a typical discharge with ELMs, it is estimated that the ELM structure observed by the MIA system has a density perturbation amplitude in the range of ~7% to 14% and a radial size in the range of ~1 to 3 cm.

  19. Testing and Estimating Shape-Constrained Nonparametric Density and Regression in the Presence of Measurement Error 1

    PubMed Central

    Carroll, Raymond J.; Delaigle, Aurore; Hall, Peter

    2011-01-01

    In many applications we can expect that, or are interested to know if, a density function or a regression curve satisfies some specific shape constraints. For example, when the explanatory variable, X, represents the value taken by a treatment or dosage, the conditional mean of the response, Y , is often anticipated to be a monotone function of X. Indeed, if this regression mean is not monotone (in the appropriate direction) then the medical or commercial value of the treatment is likely to be significantly curtailed, at least for values of X that lie beyond the point at which monotonicity fails. In the case of a density, common shape constraints include log-concavity and unimodality. If we can correctly guess the shape of a curve, then nonparametric estimators can be improved by taking this information into account. Addressing such problems requires a method for testing the hypothesis that the curve of interest satisfies a shape constraint, and, if the conclusion of the test is positive, a technique for estimating the curve subject to the constraint. Nonparametric methodology for solving these problems already exists, but only in cases where the covariates are observed precisely. However in many problems, data can only be observed with measurement errors, and the methods employed in the error-free case typically do not carry over to this error context. In this paper we develop a novel approach to hypothesis testing and function estimation under shape constraints, which is valid in the context of measurement errors. Our method is based on tilting an estimator of the density or the regression mean until it satisfies the shape constraint, and we take as our test statistic the distance through which it is tilted. Bootstrap methods are used to calibrate the test. The constrained curve estimators that we develop are also based on tilting, and in that context our work has points of contact with methodology in the error-free case. PMID:21687809

  20. Dynamics of photosynthetic photon flux density (PPFD) and estimates in coastal northern California

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The seasonal trends and diurnal patterns of Photosynthetically Active Radiation (PAR) were investigated in the San Francisco Bay Area of Northern California from March through August in 2007 and 2008. During these periods, the daily values of PAR flux density (PFD), energy loading with PAR (PARE), a...

  1. An Evaluation of the Plant Density Estimator the Point-Centred Quarter Method (PCQM) Using Monte Carlo Simulation

    PubMed Central

    Khan, Md Nabiul Islam; Hijbeek, Renske; Berger, Uta; Koedam, Nico; Grueters, Uwe; Islam, S. M. Zahirul; Hasan, Md Asadul; Dahdouh-Guebas, Farid

    2016-01-01

    Background In the Point-Centred Quarter Method (PCQM), the mean distance of the first nearest plants in each quadrant of a number of random sample points is converted to plant density. It is a quick method for plant density estimation. In recent publications the estimator equations of simple PCQM (PCQM1) and higher order ones (PCQM2 and PCQM3, which use the distance of the second and third nearest plants, respectively) show discrepancy. This study attempts to review PCQM estimators in order to find the most accurate equation form. We tested the accuracy of different PCQM equations using Monte Carlo Simulations in simulated (having ‘random’, ‘aggregated’ and ‘regular’ spatial patterns) plant populations and empirical ones. Principal Findings PCQM requires at least 50 sample points to ensure a desired level of accuracy. PCQM with a corrected estimator is more accurate than with a previously published estimator. The published PCQM versions (PCQM1, PCQM2 and PCQM3) show significant differences in accuracy of density estimation, i.e. the higher order PCQM provides higher accuracy. However, the corrected PCQM versions show no significant differences among them as tested in various spatial patterns except in plant assemblages with a strong repulsion (plant competition). If N is number of sample points and R is distance, the corrected estimator of PCQM1 is 4(4N − 1)/(π ∑ R²) but not 12N/(π ∑ R²), of PCQM2 is 4(8N − 1)/(π ∑ R²) but not 28N/(π ∑ R²) and of PCQM3 is 4(12N − 1)/(π ∑ R²) but not 44N/(π ∑ R²) as published. Significance If the spatial pattern of a plant association is random, PCQM1 with a corrected equation estimator and over 50 sample points would be sufficient to provide accurate density estimation. PCQM using just the nearest tree in each quadrant is therefore sufficient, which facilitates sampling of trees, particularly in areas with just a few hundred trees per hectare. PCQM3 provides the best density estimations for all
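
    A direct transcription of the corrected first-order estimator quoted above, density = 4(4N − 1)/(π ΣR²), assuming R is the nearest-plant distance measured in each of the four quadrants at every sample point (so 4N distances in total). The example distances are arbitrary illustrative values.

```python
import math
import random

def pcqm1_density(distances):
    """Corrected PCQM1 density estimator.

    distances: flat list of nearest-plant distances, one per quadrant,
    i.e. 4*N values for N sample points. Returns plants per unit area
    (in the squared unit of the distances).
    """
    n_points = len(distances) / 4.0
    sum_r2 = sum(r * r for r in distances)
    return 4.0 * (4.0 * n_points - 1.0) / (math.pi * sum_r2)

# Example: 50 sample points with 4 distances (in m) each -> plants per m^2
random.seed(1)
dists = [random.uniform(0.5, 3.0) for _ in range(4 * 50)]
print(pcqm1_density(dists))
```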

  2. Aerosol effective density measurement using scanning mobility particle sizer and quartz crystal microbalance with the estimation of involved uncertainty

    NASA Astrophysics Data System (ADS)

    Sarangi, Bighnaraj; Aggarwal, Shankar G.; Sinha, Deepak; Gupta, Prabhat K.

    2016-03-01

    In this work, we have used a scanning mobility particle sizer (SMPS) and a quartz crystal microbalance (QCM) to estimate the effective density of aerosol particles. This approach is tested for aerosolized particles generated from the solution of standard materials of known density, i.e. ammonium sulfate (AS), ammonium nitrate (AN) and sodium chloride (SC), and also applied for ambient measurement in New Delhi. We also discuss uncertainty involved in the measurement. In this method, dried particles are introduced into a differential mobility analyser (DMA), where size segregation is done based on particle electrical mobility. Downstream of the DMA, the aerosol stream is subdivided into two parts. One is sent to a condensation particle counter (CPC) to measure particle number concentration, whereas the other one is sent to the QCM to measure the particle mass concentration simultaneously. Based on particle volume derived from size distribution data of the SMPS and mass concentration data obtained from the QCM, the mean effective density (ρeff) with uncertainty of inorganic salt particles (for particle count mean diameter (CMD) over a size range 10-478 nm), i.e. AS, SC and AN, is estimated to be 1.76 ± 0.24, 2.08 ± 0.19 and 1.69 ± 0.28 g cm⁻³, values which are comparable with the material density (ρ) values, 1.77, 2.17 and 1.72 g cm⁻³, respectively. Using this technique, the percentage contribution of error in the measurement of effective density is calculated to be in the range of 9-17 %. Among the individual uncertainty components, repeatability of particle mass obtained by the QCM, the QCM crystal frequency, CPC counting efficiency, and the equivalence of CPC- and QCM-derived volume are the major contributors to the expanded uncertainty (at k = 2) in comparison to other components, e.g. diffusion correction, charge correction, etc. Effective density for ambient particles at the beginning of the winter period in New Delhi was measured to be 1.28 ± 0.12 g cm⁻³
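
    A sketch of the effective-density calculation implied above: the particle volume concentration is integrated from the SMPS size distribution (treating mobility diameters as spherical-equivalent diameters) and divided into the mass concentration reported by the QCM. The bin diameters and concentrations in the example are illustrative only.

```python
import numpy as np

def effective_density(diameters_nm, number_conc_cm3, mass_conc_ug_m3):
    """Effective density (g/cm^3) from an SMPS size distribution and a QCM mass.

    diameters_nm: bin midpoint mobility diameters.
    number_conc_cm3: number concentration per bin (cm^-3).
    mass_conc_ug_m3: simultaneous QCM mass concentration (ug/m^3).
    """
    d_cm = np.asarray(diameters_nm) * 1e-7                    # nm -> cm
    volume_cm3_per_cm3 = np.sum((np.pi / 6.0) * d_cm**3 * number_conc_cm3)
    mass_g_per_cm3 = mass_conc_ug_m3 * 1e-12                  # ug/m^3 -> g/cm^3
    return mass_g_per_cm3 / volume_cm3_per_cm3

# Illustrative numbers only
d = np.array([50, 100, 200, 400])          # nm
n = np.array([2e3, 4e3, 1e3, 50])          # cm^-3
print(effective_density(d, n, mass_conc_ug_m3=20.0))
```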

  3. Estimating Brownian motion dispersal rate, longevity and population density from spatially explicit mark-recapture data on tropical butterflies.

    PubMed

    Tufto, Jarle; Lande, Russell; Ringsby, Thor-Harald; Engen, Steinar; Saether, Bernt-Erik; Walla, Thomas R; DeVries, Philip J

    2012-07-01

    1. We develop a Bayesian method for analysing mark-recapture data in continuous habitat using a model in which individuals' movement paths are Brownian motions, life spans are exponentially distributed and capture events occur at given instants in time if individuals are within a certain attractive distance of the traps. 2. The joint posterior distribution of the dispersal rate, longevity, trap attraction distances and a number of latent variables representing the unobserved movement paths and time of death of all individuals is computed using Gibbs sampling. 3. An estimate of absolute local population density is obtained simply by dividing the Poisson counts of individuals captured at given points in time by the estimated total attraction area of all traps. Our approach for estimating population density in continuous habitat avoids the need to define an arbitrary effective trapping area that characterized previous mark-recapture methods in continuous habitat. 4. We applied our method to estimate spatial demography parameters in nine species of neotropical butterflies. Path analysis of interspecific variation in demographic parameters and mean wing length revealed a simple network of strong causation. Larger wing length increases dispersal rate, which in turn increases trap attraction distance. However, higher dispersal rate also decreases longevity, thus explaining the surprising observation of a negative correlation between wing length and longevity.
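
    The density step in point 3 is a simple ratio. A sketch assuming each of k traps attracts individuals within a circle of estimated radius a, so that the count of individuals captured at a given instant divided by kπa² gives absolute density; the overlap correction and all example numbers are illustrative assumptions.

```python
import math

def absolute_density(n_individuals, n_traps, attraction_radius_m, overlap_fraction=0.0):
    """Individuals per m^2 from a snapshot count and the total trap attraction area.

    overlap_fraction: optional correction if trap attraction circles overlap.
    """
    total_area = n_traps * math.pi * attraction_radius_m ** 2
    total_area *= (1.0 - overlap_fraction)
    return n_individuals / total_area

# Illustrative: 12 butterflies counted at one time point, 20 traps, a ~ 3 m
print(absolute_density(12, 20, 3.0))   # ~0.021 per m^2
```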

  4. Improving the accuracy of risk prediction from particle-based breakthrough curves reconstructed with kernel density estimators

    NASA Astrophysics Data System (ADS)

    Siirila-Woodburn, Erica R.; Fernández-Garcia, Daniel; Sanchez-Vila, Xavier

    2015-06-01

    While particle tracking techniques are often used in risk frameworks, the number of particles needed to properly derive risk metrics such as average concentration for a given exposure duration is often unknown. If too few particles are used, error may propagate into the risk estimate. In this work, we provide a less error-prone methodology for the direct reconstruction of exposure duration averaged concentration versus time breakthrough curves from particle arrival times at a compliance surface. The approach is based on obtaining a suboptimal kernel density estimator that is applied to the sampled particle arrival times. The corresponding estimates of risk metrics obtained with this method largely outperform those by means of traditional methods (reconstruction of the breakthrough curve followed by the integration of concentration in time over the exposure duration). This is particularly true when the number of particles used in the numerical simulation is small (<10⁵), and for small exposure times. Percent error in the peak of averaged breakthrough curves is approximately zero for all scenarios and all methods tested when the number of particles is ≥10⁵. Our results illustrate that obtaining a representative average exposure concentration is reliant on the information contained in each individual tracked particle, more so when the number of particles is small. They further illustrate the usefulness of defining problem-specific kernel density estimators to properly reconstruct the observables of interest in a particle tracking framework without relying on the use of an extremely large number of particles.
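
    A sketch of the two quantities being compared above, using SciPy's rule-of-thumb Gaussian kernel rather than the paper's problem-specific bandwidth choice: the breakthrough curve is reconstructed by KDE from particle arrival times, and the exposure-duration-averaged concentration is obtained by averaging that curve over a sliding window. The arrival-time distribution and all numbers are synthetic placeholders.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
arrival_times = rng.lognormal(mean=2.0, sigma=0.4, size=2_000)   # toy particle arrivals
total_mass = 1.0                                                  # injected mass (arbitrary units)

# KDE reconstruction of the breakthrough curve c(t) from arrival times
kde = gaussian_kde(arrival_times)      # the paper tunes this bandwidth; default used here
t = np.linspace(0.0, 30.0, 1_000)
c = total_mass * kde(t)

# Exposure-duration (ED) averaged concentration via a sliding window
ED = 5.0
dt = t[1] - t[0]
win = max(1, int(round(ED / dt)))
c_avg = np.convolve(c, np.ones(win) / win, mode="same")

print("peak c:", c.max(), " peak ED-averaged c:", c_avg.max())
```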

  5. An empirical model to estimate density of sodium hydroxide solution: An activator of geopolymer concretes

    NASA Astrophysics Data System (ADS)

    Rajamane, N. P.; Nataraja, M. C.; Jeyalakshmi, R.; Nithiyanantham, S.

    2016-02-01

    Geopolymer concrete (GPC) is zero-Portland-cement concrete containing an alumino-silicate-based inorganic polymer as binder. The polymer is obtained by chemical activation of alumina- and silica-bearing materials, such as blast furnace slag, with highly alkaline solutions such as hydroxides and silicates of alkali metals. Sodium hydroxide solutions (SHS) of different concentrations are commonly used in making GPC mixes. Often, a sodium hydroxide solution of very high concentration is diluted with water to obtain SHS of the desired concentration. In doing so, it was observed that the solute particles of NaOH in SHS tend to occupy lower volumes as the degree of dilution increases. This aspect is discussed in this paper. The observed phenomenon needs to be understood while formulating GPC mixes, since it considerably influences the relationship between concentration and density of SHS. This paper suggests an empirical formula relating the density of SHS directly to its concentration expressed as w/w.

  6. Solar cycle variations of the Cluster spacecraft potential and its use for electron density estimations

    NASA Astrophysics Data System (ADS)

    Lybekk, B.; Pedersen, A.; Haaland, S.; Svenes, K.; Fazakerley, A. N.; Masson, A.; Taylor, M. G. G. T.; Trotignon, J.-G.

    2012-01-01

    A sunlit conductive spacecraft, immersed in tenuous plasma, will attain a positive potential relative to the ambient plasma. This potential is primarily governed by solar irradiation, which causes escape of photoelectrons from the surface of the spacecraft, and the electrons in the ambient plasma providing the return current. In this paper we combine potential measurements from the Cluster satellites with measurements of extreme ultraviolet radiation from the TIMED satellite to establish a relation between solar radiation and spacecraft charging from solar maximum to solar minimum. We then use this relation to derive an improved method for determination of the current balance of the spacecraft. By calibration with other instruments we thereafter derive the plasma density. The results show that this method can provide information about plasma densities in the polar cap and magnetotail lobe regions where other measurements have limitations.

  7. Estimating Population Density of the San Martin Titi Monkey (Callicebus oenanthe) in Peru Using Vocalisations.

    PubMed

    van Kuijk, Silvy M; García-Suikkanen, Carolina; Tello-Alvarado, Julio C; Vermeer, Jan; Hill, Catherine M

    2015-01-01

    We calculated the population density of the critically endangered Callicebus oenanthe in the Ojos de Agua Conservation Concession, a dry forest area in the department of San Martin, Peru. Results showed significant differences (p < 0.01) in group densities between forest boundaries (16.5 groups/km2, IQR = 21.1-11.0) and forest interior (4.0 groups/km2, IQR = 5.0-0.0), suggesting the 2,550-ha area harbours roughly 1,150 titi monkeys. This makes Ojos de Agua an important cornerstone in the conservation of the species, because it is one of the largest protected areas where the species occurs. PMID:26824671

  8. Comparison of volumetric breast density estimations from mammography and thorax CT

    NASA Astrophysics Data System (ADS)

    Geeraert, N.; Klausz, R.; Cockmartin, L.; Muller, S.; Bosmans, H.; Bloch, I.

    2014-08-01

    Breast density has become an important issue in current breast cancer screening, both as a recognized risk factor for breast cancer and by decreasing screening efficiency through the masking effect. Different qualitative and quantitative methods have been proposed to evaluate area-based breast density and volumetric breast density (VBD). We propose a validation method comparing the computation of VBD obtained from digital mammographic images (VBDMX) with the computation of VBD from thorax CT images (VBDCT). We computed VBDMX by applying a conversion function to the pixel values in the mammographic images, based on models determined from images of breast equivalent material. VBDCT is computed from the average Hounsfield Unit (HU) over the manually delineated breast volume in the CT images. This average HU is then compared to the HU of adipose and fibroglandular tissues from patient images. The VBDMX method was applied to 663 mammographic patient images taken on two Siemens Inspiration systems (hospL) and one GE Senographe Essential system (hospJ). For the comparison study, we collected images from patients who had a thorax CT and a mammography screening exam within the same year. In total, thorax CT images corresponding to 40 breasts (hospL) and 47 breasts (hospJ) were retrieved. Averaged over the 663 mammographic images, the median VBDMX was 14.7%. The density distribution and the inverse correlation between VBDMX and breast thickness were found as expected. The average difference between VBDMX and VBDCT is smaller for hospJ (4%) than for hospL (10%). This study shows the possibility of comparing VBDMX with the VBD from thorax CT exams, without additional examinations. In spite of the limitations caused by poorly defined breast limits, the calibration of mammographic images to local VBD provides opportunities for further quantitative evaluations.
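
    A sketch of the CT side of the comparison, assuming the mean HU over the delineated breast volume is converted to a volumetric glandular fraction by linear interpolation between adipose and fibroglandular reference HU values; the default reference values below are illustrative placeholders, not the patient-specific values used in the study.

```python
def vbd_from_ct(mean_hu, hu_adipose=-100.0, hu_fibroglandular=40.0):
    """Volumetric breast density (%) from the mean HU over the breast volume,
    by linear interpolation between adipose and fibroglandular reference HU.
    The default reference values are illustrative placeholders."""
    frac = (mean_hu - hu_adipose) / (hu_fibroglandular - hu_adipose)
    return 100.0 * min(max(frac, 0.0), 1.0)

print(vbd_from_ct(-80.0))   # ~14% for a mostly fatty breast
```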

  9. Carbon pool densities and a first estimate of the total carbon pool in the Mongolian forest-steppe.

    PubMed

    Dulamsuren, Choimaa; Klinge, Michael; Degener, Jan; Khishigjargal, Mookhor; Chenlemuge, Tselmeg; Bat-Enerel, Banzragch; Yeruult, Yolk; Saindovdon, Davaadorj; Ganbaatar, Kherlenchimeg; Tsogtbaatar, Jamsran; Leuschner, Christoph; Hauck, Markus

    2016-02-01

    The boreal forest biome represents one of the most important terrestrial carbon stores, which has motivated intensive research on carbon stock densities. However, such an analysis does not yet exist for the southernmost Eurosiberian boreal forests in Inner Asia. Most of these forests are located in the Mongolian forest-steppe, which is largely dominated by Larix sibirica. We quantified the carbon stock density and total carbon pool of Mongolia's boreal forests and adjacent grasslands and draw conclusions on possible future change. Mean aboveground carbon stock density in the interior of L. sibirica forests was 66 Mg C ha⁻¹, which is in the upper range of values reported from boreal forests and probably due to the comparably long growing season. The density of soil organic carbon (SOC, 108 Mg C ha⁻¹) and total belowground carbon density (149 Mg C ha⁻¹) are at the lower end of the range known from boreal forests, which might be the result of higher soil temperatures and a thinner permafrost layer than in the central and northern boreal forest belt. Land use effects are especially relevant at forest edges, where mean carbon stock density was 188 Mg C ha⁻¹, compared with 215 Mg C ha⁻¹ in the forest interior. Carbon stock density in grasslands was 144 Mg C ha⁻¹. Analysis of satellite imagery of the highly fragmented forest area in the forest-steppe zone showed that Mongolia's total boreal forest area is currently 73,818 km², and 22% of this area refers to forest edges (defined as the first 30 m from the edge). The total forest carbon pool of Mongolia was estimated at ~1.5-1.7 Pg C, a value which is likely to decrease in the future with increasing deforestation and fire frequency, and global warming.
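
    A quick consistency check of the quoted total, assuming the edge (188 Mg C ha⁻¹) and interior (215 Mg C ha⁻¹) stock densities are total (above- plus belowground) values and that edges make up 22% of the 73,818 km² forest area:

```python
area_ha = 73_818 * 100            # km^2 -> ha
edge_frac = 0.22
stock_edge = 188.0                # Mg C / ha
stock_interior = 215.0            # Mg C / ha

mean_stock = edge_frac * stock_edge + (1 - edge_frac) * stock_interior
total_Pg = mean_stock * area_ha / 1e9     # Mg -> Pg
print(round(mean_stock, 1), "Mg C/ha ->", round(total_Pg, 2), "Pg C")
# ~209 Mg C/ha and ~1.5 Pg C, consistent with the ~1.5-1.7 Pg C quoted above
```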

  10. Estimation of Aerodynamic Roughness and Zero Plane Displacement Using Medium Density of Airborne LIDAR Data

    NASA Astrophysics Data System (ADS)

    Mohd Salleh, M. R.; Rahman, M. Z. Abdul; Abu Bakar, M. A.; Rasib, A. W.; Omar, H.

    2016-09-01

    This paper presents a framework to estimate the normalized aerodynamic roughness length (z0/H) and zero-plane displacement (d/H) over various landscapes in Kelantan State using airborne LiDAR data. The study begins with the filtering of the airborne LiDAR data, which produced ground and non-ground points. The ground points were used to generate a digital terrain model (DTM), while the non-ground points were used for digital surface model (DSM) generation. A canopy height model (CHM) was generated by subtracting the DTM from the DSM. Individual trees in the study area were delineated by applying the inverse watershed segmentation method to the CHM. Forest structural parameters including tree height, height to crown base (HCB) and diameter at breast height (DBH) were estimated using existing allometric equations. The airborne LiDAR data were divided into smaller areas corresponding to the size of the z0/H and d/H maps, i.e. 50 m and 100 m. For each area, individual trees were reconstructed based on the tree properties, accounting for overlap between crowns and trunks. The individual tree models were used to estimate the individual tree frontal area and the total frontal area over a specific ground surface. Finally, three roughness models were used to estimate z0/H and d/H for different wind directions, which were assumed to be from the North/South and East/West directions. The results show good agreement with previous studies based on wind tunnel experiments.
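
    A minimal sketch of two steps in the chain described above: the canopy height model as the DSM-DTM difference, and a frontal area index computed by summing per-tree frontal areas over a grid cell. The rectangular crown-width-times-crown-depth approximation and all example numbers are simplifying assumptions for illustration, not the paper's reconstructed tree models.

```python
import numpy as np

def canopy_height_model(dsm, dtm):
    """CHM as the (non-negative) difference between surface and terrain models."""
    return np.maximum(np.asarray(dsm) - np.asarray(dtm), 0.0)

def frontal_area_index(tree_heights, crown_base_heights, crown_widths, cell_area_m2):
    """Frontal area index for one grid cell: sum of per-tree frontal areas
    (crown width x crown depth, a rectangular simplification) over cell area."""
    crown_depth = np.asarray(tree_heights) - np.asarray(crown_base_heights)
    frontal_area = np.asarray(crown_widths) * np.clip(crown_depth, 0.0, None)
    return frontal_area.sum() / cell_area_m2

# Illustrative: three trees in a 50 m x 50 m cell
print(frontal_area_index([22.0, 18.0, 25.0], [12.0, 9.0, 15.0],
                         [6.0, 5.0, 7.0], cell_area_m2=50 * 50))   # ~0.07
```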

  11. When bulk density methods matter: Implications for estimating soil organic carbon pools in rocky soils

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Resolving uncertainty in the carbon cycle is paramount to refining climate predictions. Soil organic carbon (SOC) is a major component of terrestrial C pools, and accuracy of SOC estimates are only as good as the measurements and assumptions used to obtain them. Dryland soils account for a substanti...

  12. Fiber density estimation from single q-shell diffusion imaging by tensor divergence.

    PubMed

    Reisert, Marco; Mader, Irina; Umarova, Roza; Maier, Simon; Tebartz van Elst, Ludger; Kiselev, Valerij G

    2013-08-15

    Diffusion-weighted magnetic resonance imaging provides information about the nerve fiber bundle geometry of the human brain. While the inference of the underlying fiber bundle orientation only requires single q-shell measurements, the absolute determination of their volume fractions is much more challenging with respect to measurement techniques and analysis. Unfortunately, the usually employed multi-compartment models cannot be applied to single q-shell measurements, because the compartments' diffusivities cannot be resolved. This work proposes an equation for fiber orientation densities that can infer the absolute fraction up to a global factor. This equation, which is inspired by the classical mass preservation law in fluid dynamics, expresses the fiber conservation associated with the assumption that fibers do not terminate in white matter. Simulations on synthetic phantoms show that the approach is able to derive the densities correctly for various configurations. Experiments with a pseudo ground truth phantom show that even for complex, brain-like geometries the method is able to infer the densities correctly. In-vivo results with 81 healthy volunteers are plausible and consistent. A group analysis with respect to age and gender shows significant differences, such that the proposed maps can be used as a quantitative measure for group and longitudinal analysis.

  13. Estimation of Neutral Density in Edge Plasma with Double Null Configuration in EAST

    NASA Astrophysics Data System (ADS)

    Zhang, Ling; Xu, Guosheng; Ding, Siye; Gao, Wei; Wu, Zhenwei; Chen, Yingjie; Huang, Juan; Liu, Xiaoju; Zang, Qing; Chang, Jiafeng; Zhang, Wei; Li, Yingying; Qian, Jinping

    2011-08-01

    In this work, population coefficients of hydrogen's n = 3 excited state from the hydrogen collisional-radiative (CR) model, from the data file of DEGAS 2, are used to calculate the photon emissivity coefficients (PECs) of hydrogen Balmer-α (n = 3 → n = 2) (Hα). The results are compared with the PECs from the Atomic Data and Analysis Structure (ADAS) database, and a good agreement is found. A magnetic surface-averaged neutral density profile of a typical double-null (DN) plasma in EAST is obtained by using FRANTIC, the 1.5-D fluid transport code. It is found that the sum of integral Dα and Hα emission intensity calculated via the neutral density agrees with the measured results obtained by using the absolutely calibrated multi-channel poloidal photodiode array systems viewing the lower divertor at the last closed flux surface (LCFS). It is revealed that the typical magnetic surface-averaged neutral density at the LCFS is about 3.5 × 10¹⁶ m⁻³.

  14. Estimating the density of intermediate size KBOs from considerations of volatile retention

    NASA Astrophysics Data System (ADS)

    Levi, Amit; Podolak, Morris

    2011-07-01

    By using a hydrodynamic atmospheric escape mechanism (Levi, A., Podolak, M. [2009]. Icarus 202, 681-693) we show how the unusually high mass density of Quaoar could have been predicted (constrained), without any knowledge of a binary companion. We suggest an explanation of the recent spectroscopic observations of Orcus and Charon [Delsanti, A., Merlin, F., Guilbert, A., Bauer, J., Yang, B., Meech, K.J., 2010. Astron. Astrophys. 520, A40; Cook, J.C., Desch, S.J., Roush, T.L., Trujillo, C.A., Geba